* mv_facial_landmark_prepare() function to prepare a network
* for the inference.
*
- * @since_tizen 8.0
+ * @since_tizen 9.0
*
* @remarks The @a infer should be released using mv_facial_landmark_destroy().
*
* @internal
* @brief Destroys inference handle and releases all its resources.
*
- * @since_tizen 8.0
+ * @since_tizen 9.0
*
* @param[in] handle The handle to the inference to be destroyed.
*
* @brief Sets user-given model information.
* @details Use this function to change the model information instead of default one after calling @ref mv_facial_landmark_create().
*
- * @since_tizen 8.0
+ * @since_tizen 9.0
*
* @param[in] handle The handle to the facial landmark object.
* @param[in] model_name Model name.
* @internal
* @brief Configures the backend for the facial landmark inference.
*
- * @since_tizen 8.0
+ * @since_tizen 9.0
*
* @param[in] handle The handle to the inference.
*
* @details Use this function to prepare the facial landmark inference based on
* the configured network.
*
- * @since_tizen 8.0
+ * @since_tizen 9.0
*
* @param[in] handle The handle to the inference.
*
* @internal
* @brief Performs the facial landmark inference on the @a source.
*
- * @since_tizen 8.0
+ * @since_tizen 9.0
* @remarks This function is synchronous and may take considerable time to run.
*
* @param[in] handle The handle to the inference
* @internal
* @brief Performs the facial landmark inference asynchronously on the @a source.
*
- * @since_tizen 8.0
+ * @since_tizen 9.0
* @remarks This function operates asynchronously, so it returns immediately upon invocation.
* The inference results are inserted into the outgoing queue within the framework
* in the order of processing, and the results can be obtained through mv_facial_landmark_get_positions().
* @internal
* @brief Gets the facial landmark positions on the @a source.
*
- * @since_tizen 8.0
+ * @since_tizen 9.0
* @remarks pos_x and pos_y arrays are allocated internally by the framework and will remain valid
* until the handle is released.
* Please do not deallocate them directly, and if you want to use them after the handle is released,
* @brief Sets user-given inference engine and device types for inference.
* @details Use this function to change the inference engine and device types for inference instead of default ones after calling @ref mv_facial_landmark_create().
*
- * @since_tizen 8.0
+ * @since_tizen 9.0
*
* @param[in] handle The handle to the facial landmark object.
* @param[in] engine_type A string of inference engine type.
* @brief Gets the number of inference engines available for the facial landmark task API.
* @details Use this function to get how many inference engines are supported for facial landmark after calling @ref mv_facial_landmark_create().
*
- * @since_tizen 8.0
+ * @since_tizen 9.0
*
* @param[in] handle The handle to the facial landmark object.
* @param[out] engine_count A number of inference engines available for facial landmark API.
* @brief Gets the engine type for a given inference engine index.
* @details Use this function to get inference engine type with a given engine index after calling @ref mv_facial_landmark_get_engine_count().
*
- * @since_tizen 8.0
+ * @since_tizen 9.0
*
* @param[in] handle The handle to the facial landmark object.
* @param[in] engine_index An inference engine index for getting the inference engine type.
* @brief Gets the number of device types available for a given inference engine.
* @details Use this function to get how many device types are supported for a given inference engine after calling @ref mv_facial_landmark_create().
*
- * @since_tizen 8.0
+ * @since_tizen 9.0
*
* @param[in] handle The handle to the facial landmark object.
* @param[in] engine_type An inference engine string.
* @brief Gets the list of available device types.
* @details Use this function to get what device types are supported for current inference engine type after calling @ref mv_facial_landmark_configure().
*
- * @since_tizen 8.0
+ * @since_tizen 9.0
*
* @param[in] handle The handle to the facial landmark object.
* @param[in] engine_type An inference engine string.
/**
* @brief The facial landmark object handle.
*
- * @since_tizen 8.0
+ * @since_tizen 9.0
*/
typedef void *mv_facial_landmark_h;
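Taken together, the functions documented above form a create → configure → prepare → inference → get-positions → destroy lifecycle. The sketch below illustrates that flow; the internal header name and the exact parameter list of mv_facial_landmark_get_positions() are assumptions here and should be verified against the installed headers.

```c
#include <mv_common.h>
#include <mv_facial_landmark_internal.h> /* assumed header name */

/* Minimal sketch of the facial landmark lifecycle; error handling abbreviated. */
static int run_facial_landmark(mv_source_h source)
{
	mv_facial_landmark_h handle = NULL;
	unsigned int landmark_count = 0;
	unsigned int *pos_x = NULL, *pos_y = NULL; /* allocated and owned by the framework */

	int ret = mv_facial_landmark_create(&handle);
	if (ret != MEDIA_VISION_ERROR_NONE)
		return ret;

	ret = mv_facial_landmark_configure(handle);             /* apply engine/device settings */
	if (ret == MEDIA_VISION_ERROR_NONE)
		ret = mv_facial_landmark_prepare(handle);           /* load the configured network */
	if (ret == MEDIA_VISION_ERROR_NONE)
		ret = mv_facial_landmark_inference(handle, source); /* synchronous; may take time */
	if (ret == MEDIA_VISION_ERROR_NONE)
		ret = mv_facial_landmark_get_positions(handle, &landmark_count, &pos_x, &pos_y);

	/* pos_x/pos_y stay valid only while the handle lives; never free() them. */
	mv_facial_landmark_destroy(handle);
	return ret;
}
```

For camera pipelines, mv_facial_landmark_inference_async() replaces the synchronous call; results then queue in processing order and are drained through the same mv_facial_landmark_get_positions() getter.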
* @ref mv_image_classification_prepare() function to prepare
* an image classification object.
*
- * @since_tizen 8.0
+ * @since_tizen 9.0
*
* @param[out] out_handle The handle to the image classification object to be created
*
/**
* @brief Destroys the image classification handle and releases all its resources.
*
- * @since_tizen 8.0
+ * @since_tizen 9.0
*
* @param[in] handle The handle to the image classification object to be destroyed.
*
/**
* @brief Configures the backend for the inference handle.
*
- * @since_tizen 8.0
+ * @since_tizen 9.0
*
* @param[in] handle The handle to the inference.
*
* @details Use this function to prepare inference based on
* the configured network.
*
- * @since_tizen 8.0
+ * @since_tizen 9.0
*
* @param[in] handle The handle to the inference.
*
* @details Use this function to run inference with a given source.
*
- * @since_tizen 8.0
+ * @since_tizen 9.0
*
* @param[in] handle The handle to the image classification object.
* @param[in] source The handle to the source of the media.
* @internal
* @brief Performs the image classification inference asynchronously on the @a source.
*
- * @since_tizen 8.0
+ * @since_tizen 9.0
* @remarks This function operates asynchronously, so it returns immediately upon invocation.
* The inference results are inserted into the outgoing queue within the framework
* in the order of processing, and the results can be obtained through mv_image_classification_get_label().
* @brief Gets the label value as an image classification inference result.
* @details Use this function to get the label value after calling @ref mv_image_classification_inference().
*
- * @since_tizen 8.0
+ * @since_tizen 9.0
*
* @remarks The @a result must NOT be released using free()
*
* @brief Sets user-given model information.
* @details Use this function to change the model information instead of default one after calling @ref mv_image_classification_create().
*
- * @since_tizen 8.0
+ * @since_tizen 9.0
*
* @param[in] handle The handle to the image classification object.
* @param[in] model_file Model file name.
* @brief Sets user-given inference engine and device types for inference.
* @details Use this function to change the inference engine and device types for inference instead of default ones after calling @ref mv_image_classification_create().
*
- * @since_tizen 8.0
+ * @since_tizen 9.0
*
* @param[in] handle The handle to the image classification object.
* @param[in] engine_type A string of inference engine type.
* @brief Gets the number of inference engines available for the image classification task API.
* @details Use this function to get how many inference engines are supported for image classification after calling @ref mv_image_classification_create().
*
- * @since_tizen 8.0
+ * @since_tizen 9.0
*
* @param[in] handle The handle to the image classification object.
* @param[out] engine_count A number of inference engines available for image classification API.
* @brief Gets the engine type for a given inference engine index.
* @details Use this function to get inference engine type with a given engine index after calling @ref mv_image_classification_get_engine_count().
*
- * @since_tizen 8.0
+ * @since_tizen 9.0
*
* @param[in] handle The handle to the image classification object.
* @param[in] engine_index An inference engine index for getting the inference engine type.
* @brief Gets the number of device types available for a given inference engine.
* @details Use this function to get how many device types are supported for a given inference engine after calling @ref mv_image_classification_create().
*
- * @since_tizen 8.0
+ * @since_tizen 9.0
*
* @param[in] handle The handle to the image classification object.
* @param[in] engine_type An inference engine string.
* @brief Gets the list of available device types.
* @details Use this function to get what device types are supported for current inference engine type after calling @ref mv_image_classification_configure().
*
- * @since_tizen 8.0
+ * @since_tizen 9.0
*
* @param[in] handle The handle to the image classification object.
* @param[in] engine_type An inference engine string.
/**
* @brief The image classification object handle.
*
- * @since_tizen 8.0
+ * @since_tizen 9.0
*/
typedef void *mv_image_classification_h;
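The engine-query functions above are designed to be chained: count the supported engines, resolve each index to an engine type string, then query that engine's devices. A hedged sketch follows; the out-parameter shapes of the get_engine_type() and get_device_count() calls are assumptions, not confirmed signatures.

```c
#include <mv_common.h>
#include <mv_image_classification_internal.h> /* assumed header name */

/* Sketch: enumerate every supported inference engine and its device count. */
static int dump_engines(mv_image_classification_h handle)
{
	unsigned int engine_count = 0;
	int ret = mv_image_classification_get_engine_count(handle, &engine_count);
	if (ret != MEDIA_VISION_ERROR_NONE)
		return ret;

	for (unsigned int idx = 0; idx < engine_count; idx++) {
		char *engine_type = NULL; /* framework-owned string; do not free() */
		unsigned int device_count = 0;

		ret = mv_image_classification_get_engine_type(handle, idx, &engine_type);
		if (ret != MEDIA_VISION_ERROR_NONE)
			break;
		ret = mv_image_classification_get_device_count(handle, engine_type, &device_count);
		if (ret != MEDIA_VISION_ERROR_NONE)
			break;
	}
	return ret;
}
```

The same enumeration pattern applies to the other task APIs, which expose identically shaped get_engine_count()/get_engine_type()/get_device_count() entry points.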
* mv_object_detection_3d_prepare() function to prepare a network
* for the inference.
*
- * @since_tizen 7.0
+ * @since_tizen 9.0
*
* @remarks The @a infer should be released using mv_object_detection_3d_destroy().
*
/**
* @brief Destroys inference handle and releases all its resources.
*
- * @since_tizen 7.0
+ * @since_tizen 9.0
*
* @param[in] handle The handle to the inference to be destroyed.
*
* @brief Sets user-given model information.
* @details Use this function to change the model information instead of default one after calling @ref mv_object_detection_3d_create().
*
- * @since_tizen 8.0
+ * @since_tizen 9.0
*
* @param[in] handle The handle to the object detection 3d object.
* @param[in] model_name Model name.
/**
* @brief Configures the backend for the object detection inference.
*
- * @since_tizen 7.0
+ * @since_tizen 9.0
*
* @param[in] handle The handle to the inference.
*
* @details Use this function to prepare the object detection inference based on
* the configured network.
*
- * @since_tizen 7.0
+ * @since_tizen 9.0
*
* @param[in] handle The handle to the inference.
*
/**
* @brief Performs the object detection 3d inference on the @a source.
*
- * @since_tizen 7.0
+ * @since_tizen 9.0
* @remarks This function is synchronous and may take considerable time to run.
*
* @param[in] handle The handle to the inference
* @brief Gets the probability value of the detected object.
* @details Use this function to get the probability value after calling @ref mv_object_detection_3d_inference().
*
- * @since_tizen 7.0
+ * @since_tizen 9.0
*
* @remarks The @a result must NOT be released using free()
*
* @brief Gets the number of points in the 3D bounding box of the detected object.
* @details Use this function to get the number of points after calling @ref mv_object_detection_3d_inference().
*
- * @since_tizen 7.0
+ * @since_tizen 9.0
*
* @remarks The @a result must NOT be released using free()
*
* @brief Gets the x and y coordinate values of the 3D bounding box of the detected object.
* @details Use this function to get the coordinates values after calling @ref mv_object_detection_3d_inference().
*
- * @since_tizen 7.0
+ * @since_tizen 9.0
*
* @remarks The @a result must NOT be released using free()
*
* @brief Sets user-given inference engine and device types for inference.
* @details Use this function to change the inference engine and device types for inference instead of default ones after calling @ref mv_object_detection_3d_create().
*
- * @since_tizen 8.0
+ * @since_tizen 9.0
*
* @param[in] handle The handle to the object detection 3d object.
* @param[in] engine_type A string of inference engine type.
* @brief Gets the number of inference engines available for the object detection 3d task API.
* @details Use this function to get how many inference engines are supported for object detection 3d after calling @ref mv_object_detection_3d_create().
*
- * @since_tizen 8.0
+ * @since_tizen 9.0
*
* @param[in] handle The handle to the object detection 3d object.
* @param[out] engine_count A number of inference engines available for object detection 3d API.
* @brief Gets the engine type for a given inference engine index.
* @details Use this function to get inference engine type with a given engine index after calling @ref mv_object_detection_3d_get_engine_count().
*
- * @since_tizen 8.0
+ * @since_tizen 9.0
*
* @param[in] handle The handle to the object detection 3d object.
* @param[in] engine_index An inference engine index for getting the inference engine type.
* @brief Gets the number of device types available for a given inference engine.
* @details Use this function to get how many device types are supported for a given inference engine after calling @ref mv_object_detection_3d_create().
*
- * @since_tizen 8.0
+ * @since_tizen 9.0
*
* @param[in] handle The handle to the object detection 3d object.
* @param[in] engine_type An inference engine string.
* @brief Gets the list of available device types.
* @details Use this function to get what device types are supported for current inference engine type after calling @ref mv_object_detection_3d_configure().
*
- * @since_tizen 8.0
+ * @since_tizen 9.0
*
* @param[in] handle The handle to the object detection 3d object.
* @param[in] engine_type An inference engine string.
/**
* @brief The object detection 3d handle.
*
- * @since_tizen 7.0
+ * @since_tizen 9.0
*/
typedef void *mv_object_detection_3d_h;
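After a successful mv_object_detection_3d_inference() call, the result getters above are read in sequence: probability, point count, then the coordinate arrays. A sketch under assumed signatures follows; the out-parameter types are guesses, and the arrays are framework-owned per the remarks above.

```c
#include <mv_common.h>
#include <mv_object_detection_3d_internal.h> /* assumed header name */

/* Sketch: read back a detection's 3D bounding box. Getter signatures are assumptions. */
static int read_box(mv_object_detection_3d_h handle)
{
	unsigned int probability = 0, num_points = 0;
	unsigned int *x = NULL, *y = NULL; /* framework-owned; must NOT be released using free() */

	int ret = mv_object_detection_3d_get_probability(handle, &probability);
	if (ret == MEDIA_VISION_ERROR_NONE)
		ret = mv_object_detection_3d_get_num_of_points(handle, &num_points);
	if (ret == MEDIA_VISION_ERROR_NONE)
		ret = mv_object_detection_3d_get_points(handle, &x, &y);
	return ret;
}
```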
/**
* @brief The object detection object handle.
*
- * @since_tizen 8.0
+ * @since_tizen 9.0
*/
typedef void *mv_object_detection_h;
* @ref mv_pose_landmark_prepare() function to prepare
* a pose landmark object.
*
- * @since_tizen 8.0
+ * @since_tizen 9.0
*
* @param[out] out_handle The handle to the pose landmark object to be created
*
/**
* @brief Destroys pose landmark handle and releases all its resources.
*
- * @since_tizen 8.0
+ * @since_tizen 9.0
*
* @param[in] handle The handle to the pose landmark object to be destroyed.
*
* @brief Sets user-given model information.
* @details Use this function to change the model information instead of default one after calling @ref mv_pose_landmark_create().
*
- * @since_tizen 8.0
+ * @since_tizen 9.0
*
* @param[in] handle The handle to the pose landmark object.
* @param[in] model_name Model name.
/**
* @brief Configures the backend for the inference handle.
*
- * @since_tizen 8.0
+ * @since_tizen 9.0
*
* @param[in] handle The handle to the inference.
*
* @details Use this function to prepare inference based on
* the configured network.
*
- * @since_tizen 8.0
+ * @since_tizen 9.0
*
* @param[in] handle The handle to the inference.
*
* @details Use this function to run inference with a given source.
*
- * @since_tizen 8.0
+ * @since_tizen 9.0
*
* @param[in] handle The handle to the pose landmark object.
* @param[in] source The handle to the source of the media.
* @internal
* @brief Performs the pose landmark inference asynchronously on the @a source.
*
- * @since_tizen 8.0
+ * @since_tizen 9.0
* @remarks This function operates asynchronously, so it returns immediately upon invocation.
* The inference results are inserted into the outgoing queue within the framework
* in the order of processing, and the results can be obtained through mv_pose_landmark_get_pos().
/**
* @brief Gets the pose landmark positions on the @a source.
*
- * @since_tizen 8.0
+ * @since_tizen 9.0
* @remarks pos_x and pos_y arrays are allocated internally by the framework and will remain valid
* until the handle is released.
* Please do not deallocate them directly, and if you want to use them after the handle is released,
* @brief Sets user-given backend and device types for inference.
* @details Use this function to change the backend and device types for inference instead of default ones after calling @ref mv_pose_landmark_create().
*
- * @since_tizen 8.0
+ * @since_tizen 9.0
*
* @param[in] handle The handle to the pose landmark object.
* @param[in] backend_type A string of backend type.
* @brief Gets the number of inference engines available for the pose landmark task API.
* @details Use this function to get how many inference engines are supported for pose landmark after calling @ref mv_pose_landmark_create().
*
- * @since_tizen 8.0
+ * @since_tizen 9.0
*
* @param[in] handle The handle to the pose landmark object.
* @param[out] engine_count A number of inference engines available for pose landmark API.
* @brief Gets the engine type for a given inference engine index.
* @details Use this function to get inference engine type with a given engine index after calling @ref mv_pose_landmark_get_engine_count().
*
- * @since_tizen 8.0
+ * @since_tizen 9.0
* @remarks engine_type array is allocated internally by the framework and will remain valid
* until the handle is released.
* Please do not deallocate it directly, and if you want to use it after the handle is released,
* @brief Gets the number of device types available for a given inference engine.
* @details Use this function to get how many device types are supported for a given inference engine after calling @ref mv_pose_landmark_create().
*
- * @since_tizen 8.0
+ * @since_tizen 9.0
*
* @param[in] handle The handle to the pose landmark object.
* @param[in] engine_type An inference engine string.
* @brief Gets the list of available device types.
* @details Use this function to get what device types are supported for current inference engine type after calling @ref mv_pose_landmark_configure().
*
- * @since_tizen 8.0
+ * @since_tizen 9.0
* @remarks device_type array is allocated internally by the framework and will remain valid
* until the handle is released.
* Please do not deallocate it directly, and if you want to use it after the handle is released,
/**
* @brief The pose landmark object handle.
*
- * @since_tizen 8.0
+ * @since_tizen 9.0
*/
typedef void *mv_pose_landmark_h;
* mv_selfie_segmentation_prepare() function to prepare a network
* for the inference.
*
- * @since_tizen 8.0
+ * @since_tizen 9.0
*
* @remarks The @a handle should be released using mv_selfie_segmentation_destroy().
*
* @internal
* @brief Destroys inference handle and releases all its resources.
*
- * @since_tizen 8.0
+ * @since_tizen 9.0
*
* @param[in] handle The handle to the inference to be destroyed.
*
* @brief Sets user-given model information.
* @details Use this function to change the model information instead of default one after calling @ref mv_selfie_segmentation_create().
*
- * @since_tizen 8.0
+ * @since_tizen 9.0
*
* @param[in] handle The handle to the selfie segmentation object.
* @param[in] model_name Model name.
* @internal
* @brief Configures the backend for the selfie segmentation inference.
*
- * @since_tizen 8.0
+ * @since_tizen 9.0
*
* @param[in] handle The handle to the inference.
*
* @details Use this function to prepare the selfie segmentation inference based on
* the configured network.
*
- * @since_tizen 8.0
+ * @since_tizen 9.0
*
* @param[in] handle The handle to the inference.
*
* @internal
* @brief Performs the selfie segmentation inference on the @a source.
*
- * @since_tizen 8.0
+ * @since_tizen 9.0
* @remarks This function is synchronous and may take considerable time to run.
*
* @param[in] source The handle to the source of the media
* @internal
* @brief Performs the selfie segmentation inference asynchronously on the @a source.
*
- * @since_tizen 8.0
+ * @since_tizen 9.0
* @remarks This function operates asynchronously, so it returns immediately upon invocation.
* The inference results are inserted into the outgoing queue within the framework
* in the order of processing, and the results can be obtained through mv_selfie_segmentation_get_result()
* @internal
* @brief Gets the selfie segmentation inference result on the @a source.
*
- * @since_tizen 8.0
+ * @since_tizen 9.0
*
* @param[in] infer The handle to the inference
* @param[out] width Width size of output image.
* @brief Sets user-given inference engine and device types for inference.
* @details Use this function to change the inference engine and device types for inference instead of default ones after calling @ref mv_selfie_segmentation_create().
*
- * @since_tizen 8.0
+ * @since_tizen 9.0
*
* @param[in] handle The handle to the selfie segmentation object.
* @param[in] engine_type A string of inference engine type.
* @brief Gets the number of inference engines available for the selfie segmentation task API.
* @details Use this function to get how many inference engines are supported for selfie segmentation after calling @ref mv_selfie_segmentation_create().
*
- * @since_tizen 8.0
+ * @since_tizen 9.0
*
* @param[in] handle The handle to the selfie segmentation object.
* @param[out] engine_count A number of inference engines available for selfie segmentation API.
* @brief Gets the engine type for a given inference engine index.
* @details Use this function to get inference engine type with a given engine index after calling @ref mv_selfie_segmentation_get_engine_count().
*
- * @since_tizen 8.0
+ * @since_tizen 9.0
*
* @param[in] handle The handle to the selfie segmentation object.
* @param[in] engine_index An inference engine index for getting the inference engine type.
* @brief Gets the number of device types available for a given inference engine.
* @details Use this function to get how many device types are supported for a given inference engine after calling @ref mv_selfie_segmentation_create().
*
- * @since_tizen 8.0
+ * @since_tizen 9.0
*
* @param[in] handle The handle to the selfie segmentation object.
* @param[in] engine_type An inference engine string.
* @brief Gets the list of available device types.
* @details Use this function to get what device types are supported for current inference engine type after calling @ref mv_selfie_segmentation_configure().
*
- * @since_tizen 8.0
+ * @since_tizen 9.0
*
* @param[in] handle The handle to the selfie segmentation object.
* @param[in] engine_type An inference engine string.
/**
* @brief The selfie segmentation object handle.
*
- * @since_tizen 8.0
+ * @since_tizen 9.0
*/
typedef void *mv_selfie_segmentation_h;
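The asynchronous path pairs mv_selfie_segmentation_inference_async() with mv_selfie_segmentation_get_result(), which (per the @param lines above) reports the output image dimensions alongside the segmentation result. A sketch follows; the mask out-parameter of get_result() is an assumption, not a confirmed signature.

```c
#include <mv_common.h>
#include <mv_selfie_segmentation_internal.h> /* assumed header name */

/* Sketch of the async flow; the mask out-parameter shape is an assumption. */
static int run_segmentation(mv_selfie_segmentation_h handle, mv_source_h source)
{
	unsigned int width = 0, height = 0;
	unsigned char *mask = NULL; /* framework-owned per-pixel mask; do not free() */

	int ret = mv_selfie_segmentation_inference_async(handle, source); /* returns immediately */
	if (ret != MEDIA_VISION_ERROR_NONE)
		return ret;

	/* Results queue within the framework in processing order; fetch the next one. */
	return mv_selfie_segmentation_get_result(handle, &width, &height, &mask);
}
```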