/**
* @brief Set backend private data if needed.
- * @details This callback passes a backend private data to a given backend.
+ * @details This function passes backend private data to a given backend.
* For example, the ML Single API backend needs to know which NNStreamer tensor filter type - such as NNFW or VIVANTE - should be used.
*
* @since_tizen 6.0
/**
* @brief Set CLTuner configuration.
- * @details This callback passes a given CLTuner configuration value to inference engine if the inference engine supports for CLTuner feature.
+ * @details This function passes a given CLTuner configuration value to the inference engine if the inference engine supports the CLTuner feature.
*
* @since_tizen 6.5
* @param[in] cltuner This contains the CLTuner configuration value. See the inference_engine_cltuner structure and the inference_engine_cltuner_mode_e enumeration.
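For illustration, a caller might fill in the configuration like this. This is a minimal sketch; the member names (active, update, tuning_mode) and the mode enumerator are assumptions based on the structure and enumeration referenced above, so verify them against the actual header.

// Hypothetical usage sketch; member names are assumed from the referenced structure.
inference_engine_cltuner cltuner;
cltuner.active = true;                               // enable the CLTuner feature
cltuner.update = false;                              // read an existing tuning file rather than regenerating it
cltuner.tuning_mode = INFERENCE_ENGINE_CLTUNER_READ; // assumed inference_engine_cltuner_mode_e value

int ret = engine->SetCLTuner(&cltuner);
if (ret != INFERENCE_ENGINE_ERROR_NONE) {
    // The backend engine does not support the CLTuner feature.
}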
/**
* @brief Set target devices.
* @details See #inference_target_type_e
- * This callback passes given device types - CPU, GPU, CUSTOM or combinated one if a backend engine supports hybrid inferencea - to a backend engine.
+ * This function passes given device types - CPU, GPU, CUSTOM, or a combined one if a backend engine supports hybrid inference - to a backend engine.
* Some backend engines can optimize a given NN model at graph level - such as dropping or fusing operations targeting
* a given device or devices - when the backend engine loads the NN model file.
*
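Since hybrid inference combines device types, a combined value is naturally expressed by OR-ing enumerators together. A minimal sketch, assuming the enumerators named in inference_target_type_e are bit flags:

// GPU only.
engine->SetTargetDevices(INFERENCE_TARGET_GPU);

// Hybrid inference across CPU and GPU, if the backend engine supports it.
engine->SetTargetDevices(INFERENCE_TARGET_CPU | INFERENCE_TARGET_GPU);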
/**
* @brief Request to load model data with user-given model file information.
- * @details This callback requests a backend engine to load given model files for inference.
+ * @details This function requests a backend engine to load given model files for inference.
* The backend engine should load the given model files according to a given model file format.
*
* @since_tizen 6.0
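A sketch of a typical call, assuming Load takes a vector of model file paths plus a model format enumerator; the exact signature and enumerator names may differ.

// Hypothetical usage; the format value tells the backend how to parse the files.
std::vector<std::string> model_paths = { "/usr/share/models/model.tflite" };
int ret = engine->Load(model_paths, INFERENCE_MODEL_TFLITE);
if (ret != INFERENCE_ENGINE_ERROR_NONE) {
    // Loading failed; check that the file format matches the backend engine.
}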
/**
* @brief Get input tensor buffers from a given backend engine.
- * @details This callback requests a backend engine input tensor buffers.
+ * @details This function requests input tensor buffers from a backend engine.
* If the backend engine is able to allocate the input tensor buffers internally, then
* it has to add the input tensor buffers to the buffers vector. By doing this, the upper layer
* will request an inference with the input tensor buffers.
* Otherwise, the backend engine should just return INFERENCE_ENGINE_ERROR_NONE so that
* the upper layer can allocate input tensor buffers according to the input layer property.
- * As for the input layer property, you can see GetInputLyaerProperty callback.
+ * As for the input layer property, see the GetInputLayerProperty function.
*
* @since_tizen 6.0
* @param[out] buffers A backend engine should add input tensor buffers it allocated itself to the buffers vector.
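The contract above leaves the upper layer with two cases to handle. A minimal sketch of that pattern; the type and member names follow this header but should be treated as assumptions.

// Sketch of the upper-layer pattern described above.
std::vector<inference_engine_tensor_buffer> buffers;
int ret = engine->GetInputTensorBuffers(buffers);
if (ret != INFERENCE_ENGINE_ERROR_NONE)
    return ret;

if (buffers.empty()) {
    // The backend engine did not allocate buffers itself, so allocate them
    // here according to the input layer property (see GetInputLayerProperty).
    inference_engine_layer_property property;
    engine->GetInputLayerProperty(property);
    // ... allocate one inference_engine_tensor_buffer per input tensor ...
}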
/**
* @brief Get output tensor buffers from a given backend engine.
- * @details This callback requests a backend engine output tensor buffers.
+ * @details This function requests output tensor buffers from a backend engine.
* If the backend engine is able to allocate the output tensor buffers internally, then
* it has to add the output tensor buffers to the buffers vector. By doing this, the upper layer
* will request an inference with the output tensor buffers.
* Otherwise, the backend engine should just return INFERENCE_ENGINE_ERROR_NONE so that
* the upper layer can allocate output tensor buffers according to the output layer property.
- * As for the output layer property, you can see GetOutputLyaerProperty callback.
+ * As for the output layer property, see the GetOutputLayerProperty function.
*
* @since_tizen 6.0
* @param[out] buffers A backend engine should add output tensor buffers it allocated itself to the buffers vector.
/**
* @brief Get input layer property information from a given backend engine.
- * @details This callback requests a backend engine information about given layers by
- * SetInputTensorProperty callback.
+ * @details This function requests information from a backend engine about the layers given by
+ * the SetInputTensorProperty function.
* If the user wants to specify the input layer that the backend engine starts from,
- * then user can call SetInputTensorProperty callback with desired layer information.
+ * then the user can call the SetInputTensorProperty function with the desired layer information.
* If the user doesn't specify the input layer, then the backend engine should add tensor information
* of the first layer in the model graph by default.
*
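As an illustration of what the upper layer gets back, a short sketch; the layer_names member is an assumption about the inference_engine_layer_property structure.

// Illustrative only; 'layer_names' is an assumed member name.
inference_engine_layer_property property;
if (engine->GetInputLayerProperty(property) == INFERENCE_ENGINE_ERROR_NONE) {
    for (auto &name : property.layer_names)
        LOGI("input layer: %s", name.c_str());
}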
/**
* @brief Get output layer property information from a given backend engine.
- * @details This callback requests a backend engine information about given layers by
- * SetOutputTensorProperty callback.
+ * @details This function requests information from a backend engine about the layers given by
+ * the SetOutputTensorProperty function.
* If the user wants to specify the output layer where the backend engine stores the inference result,
- * then user can call SetOutputTensorProperty callback with desired layer information.
+ * then the user can call the SetOutputTensorProperty function with the desired layer information.
* If the user doesn't specify the output layer, then the backend engine should add tensor information
* of the last layer in the model graph by default.
*
/**
* @brief Set input layer property information to a given backend engine.
- * @details This callback passes a given input layer information to a backend engine.
+ * @details This function passes given input layer information to a backend engine.
* It is used when the user wants to start the inference from a given layer in the model graph.
* The backend engine should find the input layer matching the given information
* in the model graph and then set that layer as the input layer.
/**
* @brief Set output layer property information to a given backend engine.
- * @details This callback passes a given output layer information to a backend engine.
+ * @details This function passes given output layer information to a backend engine.
* It is used when the user wants to specify the layer in the model graph where the inference result is stored.
* The backend engine should find the output layer matching the given information
* in the model graph and then set that layer as the output layer.
/**
* @brief Get capacity from a given backend engine.
- * @details This callback requests what supported features and constraints the backend engine has.
- * Upper layer should call this callback just after the backend engine library is loaded.
+ * @details This function requests the supported features and constraints of the backend engine.
+ * The upper layer should call this function right after the backend engine library is loaded.
*
* @since_tizen 6.0
* @param[out] capacity A backend engine should add the features and constraints it has.
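A sketch of the call order implied above: query the capacity right after loading the backend library, then choose only target devices the backend reports as supported. The method name and the supported_accel_devices member are assumptions.

// Hypothetical flow; 'supported_accel_devices' is assumed to be a bitmask of inference_target_type_e values.
inference_engine_capacity capacity;
int ret = engine->GetBackendCapacity(&capacity);
if (ret == INFERENCE_ENGINE_ERROR_NONE &&
    (capacity.supported_accel_devices & INFERENCE_TARGET_GPU))
    engine->SetTargetDevices(INFERENCE_TARGET_GPU);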
/**
* @brief Run an inference with user-given input and output buffers.
- * @details This callback requests a backend engine to do inference with given input and output tensor buffers.
+ * @details This function requests a backend engine to do inference with given input and output tensor buffers.
* Input and output tensor buffers can be allocated by either the backend engine or the upper layer, depending on
* the backend engine. So the upper layer needs to make sure to clean up the buffers.
*
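To make the cleanup responsibility concrete, a minimal sketch; the owner_is_backend flag is an assumption about how buffer ownership might be tracked.

// Illustrative cleanup after inference; ownership tracking is an assumption.
int ret = engine->Run(input_buffers, output_buffers);

for (auto &buf : input_buffers) {
    // Free only buffers the upper layer allocated itself; buffers owned
    // by the backend engine must be released by the backend engine.
    if (!buf.owner_is_backend)
        free(buf.buffer);
}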
/**
* @brief Load a backend engine library with a given backend name.
- * @details This callback loads a backend engine library with a given backend name.
+ * @details This function loads a backend engine library with a given backend name.
* In order to find a backend engine library corresponding to the given backend name,
* this function builds the full name of the library file from the given backend name.
* After that, it opens the library file by calling the dlopen function to find an entry point
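A minimal sketch of that loading pattern; the library name template, the entry point symbol, and the error code are illustrative assumptions.

#include <dlfcn.h>
#include <string>

// Hypothetical: build the full library file name from the backend name.
std::string lib_name = "libinference-engine-" + backend_name + ".so";

void *handle = dlopen(lib_name.c_str(), RTLD_NOW);
if (!handle) {
    LOGE("dlopen failed: %s", dlerror());
    return INFERENCE_ENGINE_ERROR_INTERNAL; // assumed error code
}

// Resolve the backend engine's entry point (symbol name is an assumption).
typedef void *(*init_func)(void);
init_func init = reinterpret_cast<init_func>(dlsym(handle, "EngineCommonInit"));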
/**
* @brief Unload a backend engine library.
- * @details This callback unload a backend engine library.
+ * @details This function unloads a backend engine library.
*
* @since_tizen 6.0
*/
/**
* @brief Set target devices.
* @details See #inference_target_type_e
- * This callback passes given device types - CPU, GPU, CUSTOM or combinated one if a backend engine supports hybrid inferencea - to a backend engine.
+ * This function passes given device types - CPU, GPU, CUSTOM, or a combined one if a backend engine supports hybrid inference - to a backend engine.
* Some backend engines can optimize a given NN model at graph level - such as dropping or fusing operations targeting
* a given device or devices - when the backend engine loads the NN model file.
*
/**
* @brief Set CLTuner configuration.
- * @details This callback passes a given CLTuner configuration value to inference engine if the inference engine supports for CLTuner feature.
+ * @details This function passes a given CLTuner configuration value to the inference engine if the inference engine supports the CLTuner feature.
*
* @since_tizen 6.5
* @param[in] cltuner This contains the CLTuner configuration value. See the inference_engine_cltuner structure and the inference_engine_cltuner_mode_e enumeration.
/**
* @brief Request to load model data with user-given model file information.
- * @details This callback requests a backend engine to load given model files for inference.
+ * @details This function requests a backend engine to load given model files for inference.
* The backend engine should load the given model files according to a given model file format.
*
* @since_tizen 6.0
/**
* @brief Get input tensor buffers from a given backend engine.
- * @details This callback requests a backend engine input tensor buffers.
+ * @details This function requests input tensor buffers from a backend engine.
* If the backend engine is able to allocate the input tensor buffers internally, then
* it has to add the input tensor buffers to the buffers vector. By doing this, the upper layer
* will request an inference with the input tensor buffers.
* Otherwise, the backend engine should just return INFERENCE_ENGINE_ERROR_NONE so that
* the upper layer can allocate input tensor buffers according to the input layer property.
- * As for the input layer property, you can see GetInputLyaerProperty callback.
+ * As for the input layer property, see the GetInputLayerProperty function.
*
* @since_tizen 6.0
* @param[out] buffers A backend engine should add input tensor buffers it allocated itself to the buffers vector.
/**
* @brief Get output tensor buffers from a given backend engine.
- * @details This callback requests a backend engine output tensor buffers.
+ * @details This function requests output tensor buffers from a backend engine.
* If the backend engine is able to allocate the output tensor buffers internally, then
* it has to add the output tensor buffers to the buffers vector. By doing this, the upper layer
* will request an inference with the output tensor buffers.
* Otherwise, the backend engine should just return INFERENCE_ENGINE_ERROR_NONE so that
* the upper layer can allocate output tensor buffers according to the output layer property.
- * As for the output layer property, you can see GetOutputLyaerProperty callback.
+ * As for the output layer property, see the GetOutputLayerProperty function.
*
* @since_tizen 6.0
* @param[out] buffers A backend engine should add output tensor buffers it allocated itself to the buffers vector.
/**
* @brief Get input layer property information from a given backend engine.
- * @details This callback requests a backend engine information about given layers by
- * SetInputTensorProperty callback.
+ * @details This function requests information from a backend engine about the layers given by
+ * the SetInputTensorProperty function.
* If the user wants to specify the input layer that the backend engine starts from,
- * then user can call SetInputTensorProperty callback with desired layer information.
+ * then the user can call the SetInputTensorProperty function with the desired layer information.
* If the user doesn't specify the input layer, then the backend engine should add tensor information
* of the first layer in the model graph by default.
*
/**
* @brief Get output layer property information from a given backend engine.
- * @details This callback requests a backend engine information about given layers by
- * SetOutputTensorProperty callback.
+ * @details This function requests information from a backend engine about the layers given by
+ * the SetOutputTensorProperty function.
* If the user wants to specify the output layer where the backend engine stores the inference result,
- * then user can call SetOutputTensorProperty callback with desired layer information.
+ * then the user can call the SetOutputTensorProperty function with the desired layer information.
* If the user doesn't specify the output layer, then the backend engine should add tensor information
* of the last layer in the model graph by default.
*
/**
* @brief Set input layer property information to a given backend engine.
- * @details This callback passes a given input layer information to a backend engine.
+ * @details This function passes given input layer information to a backend engine.
* It is used when the user wants to start the inference from a given layer in the model graph.
* The backend engine should find the input layer matching the given information
* in the model graph and then set that layer as the input layer.
/**
* @brief Set output layer property information to a given backend engine.
- * @details This callback passes a given output layer information to a backend engine.
+ * @details This function passes given output layer information to a backend engine.
* It is used when the user wants to specify the layer in the model graph where the inference result is stored.
* The backend engine should find the output layer matching the given information
* in the model graph and then set that layer as the output layer.
/**
* @brief Get capacity from a given backend engine.
- * @details This callback requests what supported features and constraints the backend engine has.
- * Upper layer should call this callback just after the backend engine library is loaded.
+ * @details This function requests the supported features and constraints of the backend engine.
+ * The upper layer should call this function right after the backend engine library is loaded.
*
* @since_tizen 6.0
* @param[out] capacity A backend engine should add the features and constraints it has.
/**
* @brief Run an inference with user-given input and output buffers.
- * @details This callback requests a backend engine to do inference with given input and output tensor buffers.
+ * @details This function requests a backend engine to do inference with given input and output tensor buffers.
* Input and output tensor buffers can be allocated by either the backend engine or the upper layer, depending on
* the backend engine. So the upper layer needs to make sure to clean up the buffers.
*
/**
* @brief Set backend name.
- * @details It will be set in BindBackend callback of InferenceEngineCommon object
+ * @details It will be set in the BindBackend function of the InferenceEngineCommon object
* to indicate which backend - armnn, opencv, tflite, or dldt - the inference will be performed by.
*
* @since_tizen 6.0
/**
* @brief Set model name.
- * @details It will be set in Load callback of InferenceEngineCommon object to indicate which pre-trained model
+ * @details It will be set in the Load function of the InferenceEngineCommon object to indicate which pre-trained model
* the inference will be performed on.
*
* @since_tizen 6.0
/**
* @brief Set target devices the inference runs on.
- * @details It will be set in SetTargetDevices callback of InferenceEngineCommon object to indicate
+ * @details It will be set in the SetTargetDevices function of the InferenceEngineCommon object to indicate
* which hardware - CPU or GPU - the inference will be performed on.
*
* @since_tizen 6.0
/**
* @brief Add inference env. information to a vector member, v_mProfileEnv.
- * @details It will be called in Load callback of InferenceEngineCommon object to add inference env. information
+ * @details It will be called in the Load function of the InferenceEngineCommon object to add inference env. information,
* which has already been updated, to the vector member. This will be used to get inference env. information
* when dumping profile data.
*
/**
* @brief Start profiling with a given profile type.
- * @details It will be called at top of a callback function of InferenceEngineCommon object to collect profile data.
+ * @details It will be called at the top of a member function of the InferenceEngineCommon object to collect profile data.
*
* @since_tizen 6.0
* @param[in] type Profile type which can be IE_PROFILER_LATENCY or IE_PROFILER_MEMORY for now.
/**
* @brief Stop profiling to a given profile type.
- * @details It will be called at bottom of a callback function of InferenceEngineCommon object to collect profile data.
+ * @details It will be called at the bottom of a member function of the InferenceEngineCommon object to collect profile data.
*
* @since_tizen 6.0
* @param[in] type Profile type which can be IE_PROFILER_LATENCY or IE_PROFILER_MEMORY for now.
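A sketch of the Start/Stop bracketing these two functions describe, mirroring the memory-profiling code below but for latency; the surrounding names are illustrative.

// Illustrative: measure the latency of a single inference call.
if (mUseProfiler)
    mProfiler.Start(IE_PROFILER_LATENCY);

int ret = mBackendHandle->Run(input_buffers, output_buffers); // hypothetical call

if (mUseProfiler)
    mProfiler.Stop(IE_PROFILER_LATENCY);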
Name: inference-engine-interface
Summary: Interface of inference engines
-Version: 0.4.8
+Version: 0.4.9
Release: 0
Group: Multimedia/Framework
License: Apache-2.0
}
if (mUseProfiler == true) {
- // Memory usage will be measured between BindBackend ~ UnbindBackend callbacks.
+ // Memory usage will be measured between the BindBackend and UnbindBackend calls.
mProfiler.Start(IE_PROFILER_MEMORY);
}
LOGW("ENTER");
if (mUseProfiler == true) {
- // Memory usage will be measured between BindBackend ~ UnbindBackend callbacks.
+ // Memory usage will be measured between the BindBackend and UnbindBackend calls.
mProfiler.Stop(IE_PROFILER_MEMORY);
}
}
// If backend engine doesn't provide tensor buffers then just return.
- // In this case, InferenceEngineCommon framework will allocate the tensor buffers.
+ // In this case, the upper layer should handle the tensor buffers.
if (buffers.empty()) {
return ret;
}
}
// If backend engine doesn't provide tensor buffers then just return.
- // In this case, InferenceEngineCommon framework will allocate the tensor buffers.
+ // In this case, the upper layer should handle the tensor buffers.
if (buffers.empty()) {
return ret;
}