runModelInference

fun runModelInference(modelName: String, modelType: Pipeline.ModelInferenceType, modelBinary: SharedMemory, inputs: Array<Pipeline.ModelNodeEncoding>, outputs: Array<Pipeline.ModelNodeEncoding>)

Runs inference on an algorithm model provided as a binary package.

Parameters

modelName

the name tag identifying the algorithm binary package.

modelType

the type of the algorithm model.

modelBinary

the shared memory that stores the algorithm binary package. The binary package's format must match modelType; for example, if modelType is ModelInferenceType.QNN_HTP, the binary must be a QNN context binary built for the HTP backend.

inputs

the descriptions and tensor associations of the inputs to the QNN at execution time. Use this array to select which nodes in the QNN computation graph accept data from pipeline tensors.

outputs

the descriptions and tensor associations of the outputs from the QNN after execution. Use this array to select the nodes whose values you want to read back into pipeline tensors.
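A minimal usage sketch is below. It assumes a `pipeline` object exposing `runModelInference`, `ModelNodeEncoding` instances named `inputEncoding` and `outputEncoding`, and a model asset named `"model.bin"`; these names are placeholders, not part of this API. `SharedMemory.create` and `mapReadWrite` are the standard `android.os.SharedMemory` calls for populating the binary package.

```kotlin
import android.os.SharedMemory

// Load the QNN context binary into shared memory.
// "model.bin" is a hypothetical asset name for illustration.
val modelBytes: ByteArray = assets.open("model.bin").readBytes()
val modelBinary: SharedMemory = SharedMemory.create("model", modelBytes.size)
modelBinary.mapReadWrite().put(modelBytes)

// Run inference on the HTP backend. inputEncoding/outputEncoding are
// assumed ModelNodeEncoding values that bind graph nodes to pipeline tensors.
pipeline.runModelInference(
    modelName = "model",
    modelType = Pipeline.ModelInferenceType.QNN_HTP,
    modelBinary = modelBinary,
    inputs = arrayOf(inputEncoding),
    outputs = arrayOf(outputEncoding)
)
```

Note that the shared memory must hold the binary in exactly the format the chosen backend expects; a binary compiled for a different backend will not load.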