From: Tae-Young Chung
Date: Tue, 8 Sep 2020 01:51:03 +0000 (+0900)
Subject: Fix description for html link
X-Git-Tag: accepted/tizen/unified/20200914.131443^0
X-Git-Url: http://review.tizen.org/git/?a=commitdiff_plain;h=refs%2Fchanges%2F23%2F243523%2F1;p=platform%2Fcore%2Fapi%2Fmediavision.git

Fix description for html link

'#' is used for the HTML link of an enum item.

Change-Id: I3472999c294a504fe37c0223b26b4eafc91b0010
Signed-off-by: Tae-Young Chung
---

diff --git a/include/mv_inference_type.h b/include/mv_inference_type.h
index 97a56ea..a259b77 100644
--- a/include/mv_inference_type.h
+++ b/include/mv_inference_type.h
@@ -36,21 +36,21 @@ extern "C" {
 
 /**
  * @brief Enumeration for inference backend.
- *        @MV_INFERENCE_BACKEND_OPENCV An open source computer vision and machine learning
+ *        #MV_INFERENCE_BACKEND_OPENCV An open source computer vision and machine learning
  *                                     software library.
  *                                     (https://opencv.org/about/)
- *        @MV_INFERENCE_BACKEND_TFLITE Google-introduced open source inference engine for embedded systems,
+ *        #MV_INFERENCE_BACKEND_TFLITE Google-introduced open source inference engine for embedded systems,
  *                                     which runs Tensorflow Lite model.
  *                                     (https://www.tensorflow.org/lite/guide/get_started)
- *        @MV_INFERENCE_BACKEND_ARMNN ARM-introduced open source inference engine for CPUs, GPUs and NPUs, which
+ *        #MV_INFERENCE_BACKEND_ARMNN ARM-introduced open source inference engine for CPUs, GPUs and NPUs, which
  *                                    enables efficient translation of existing neural network frameworks
  *                                    such as TensorFlow, TensorFlow Lite and Caffes, allowing them to
  *                                    run efficiently without modification on Embedded hardware.
  *                                    (https://developer.arm.com/ip-products/processors/machine-learning/arm-nn)
- *        @MV_INFERENCE_BACKEND_MLAPI Samsung-introduced open source ML single API framework of NNStreamer, which
+ *        #MV_INFERENCE_BACKEND_MLAPI Samsung-introduced open source ML single API framework of NNStreamer, which
  *                                    runs various NN models via tensor filters of NNStreamer.
  *                                    (https://github.com/nnstreamer/nnstreamer)
- *        @MV_INFERENCE_BACKEND_ONE Samsung-introduced open source inference engine called On-device Neural Engine, which
+ *        #MV_INFERENCE_BACKEND_ONE Samsung-introduced open source inference engine called On-device Neural Engine, which
  *                                  performs inference of a given NN model on various devices such as CPU, GPU, DSP and NPU.
  *                                  (https://github.com/Samsung/ONE)
  *
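
A minimal sketch of the Doxygen behavior this patch relies on (not part of the patch; the enum and names below are hypothetical). Doxygen auto-links a word prefixed with '#' to the documented member of that name, whereas '@' introduces a Doxygen command, so an '@'-prefixed enumerator name is misparsed as an unknown command instead of producing a hyperlink:

/**
 * @brief Example enumeration, documented the way the patched header is.
 *
 * #EXAMPLE_FIRST and #EXAMPLE_SECOND are rendered by Doxygen as
 * hyperlinks to the corresponding enumerator documentation below;
 * an '@'-prefixed name here would instead trigger an
 * "unknown command" warning and no link.
 */
typedef enum {
	EXAMPLE_FIRST,  /**< First value (hypothetical). */
	EXAMPLE_SECOND  /**< Second value (hypothetical). */
} example_e;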