Fix description for html link
refs: 23/243523/1 accepted/tizen/unified/20200914.131443 submit/tizen/20200911.033103
author Tae-Young Chung <ty83.chung@samsung.com>
Tue, 8 Sep 2020 01:51:03 +0000 (10:51 +0900)
committer Tae-Young Chung <ty83.chung@samsung.com>
Tue, 8 Sep 2020 01:51:06 +0000 (10:51 +0900)
'#' is used for the HTML link of an enum item

Change-Id: I3472999c294a504fe37c0223b26b4eafc91b0010
Signed-off-by: Tae-Young Chung <ty83.chung@samsung.com>
include/mv_inference_type.h

index 97a56ea..a259b77 100644
@@ -36,21 +36,21 @@ extern "C" {
 
 /**
  * @brief Enumeration for inference backend.
- * @MV_INFERENCE_BACKEND_OPENCV An open source computer vision and machine learning
+ * #MV_INFERENCE_BACKEND_OPENCV An open source computer vision and machine learning
  *                              software library.
  *                              (https://opencv.org/about/)
- * @MV_INFERENCE_BACKEND_TFLITE Google-introduced open source inference engine for embedded systems,
+ * #MV_INFERENCE_BACKEND_TFLITE Google-introduced open source inference engine for embedded systems,
  *                              which runs Tensorflow Lite model.
  *                              (https://www.tensorflow.org/lite/guide/get_started)
- * @MV_INFERENCE_BACKEND_ARMNN ARM-introduced open source inference engine for CPUs, GPUs and NPUs, which
+ * #MV_INFERENCE_BACKEND_ARMNN ARM-introduced open source inference engine for CPUs, GPUs and NPUs, which
  *                             enables efficient translation of existing neural network frameworks
  *                             such as TensorFlow, TensorFlow Lite and Caffes, allowing them to
  *                             run efficiently without modification on Embedded hardware.
  *                             (https://developer.arm.com/ip-products/processors/machine-learning/arm-nn)
- * @MV_INFERENCE_BACKEND_MLAPI Samsung-introduced open source ML single API framework of NNStreamer, which
+ * #MV_INFERENCE_BACKEND_MLAPI Samsung-introduced open source ML single API framework of NNStreamer, which
  *                             runs various NN models via tensor filters of NNStreamer.
  *                             (https://github.com/nnstreamer/nnstreamer)
- * @MV_INFERENCE_BACKEND_ONE Samsung-introduced open source inference engine called On-device Neural Engine, which
+ * #MV_INFERENCE_BACKEND_ONE Samsung-introduced open source inference engine called On-device Neural Engine, which
  *                           performs inference of a given NN model on various devices such as CPU, GPU, DSP and NPU.
  *                           (https://github.com/Samsung/ONE)
  *
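The patch above swaps `@` for `#` because in Doxygen comments `#` before a documented name generates a hyperlink to that member, while `@` introduces a Doxygen command and produces no link. A minimal sketch of the convention, using a hypothetical enum (the names below are illustrative, not from the patched header):

```c
/* Hypothetical example: '#' before EXAMPLE_FIRST in the comment below
 * makes Doxygen render it as a hyperlink to the enumerator, matching
 * the fix applied in mv_inference_type.h. */

/**
 * @brief Example enumeration.
 *
 * #EXAMPLE_FIRST is auto-linked to the enumerator below;
 * #EXAMPLE_SECOND likewise.
 */
typedef enum {
	EXAMPLE_FIRST,  /**< First value */
	EXAMPLE_SECOND  /**< Second value */
} example_e;
```

The code itself is unchanged by the comment style; only the generated HTML documentation gains working cross-references.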