Set supported_device_types according to MLAPI backend type 62/235962/1
author	Inki Dae <inki.dae@samsung.com>
Thu, 11 Jun 2020 07:16:42 +0000 (16:16 +0900)
committer	Inki Dae <inki.dae@samsung.com>
Thu, 11 Jun 2020 07:16:42 +0000 (16:16 +0900)
NNFW supports only CPU- and GPU-accelerated NN runtimes, so set the
supported device types to CPU and GPU when the NNFW tensor filter
plugin of NNStreamer is used.

Change-Id: I3ed4ae5018b984c812f8bad69eebbfdae69dd030
Signed-off-by: Inki Dae <inki.dae@samsung.com>
src/inference_engine_mlapi.cpp

index ab36c9b..d6d4706 100644
@@ -366,7 +366,12 @@ namespace MLAPIImpl
                }
 
                // TODO. flag supported accel device types according to a given ML Single API of nnstreamer backend.
-               capacity->supported_accel_devices = INFERENCE_TARGET_CUSTOM;
+               if (mPluginType == INFERENCE_BACKEND_MLAPI) {
+                       capacity->supported_accel_devices = INFERENCE_TARGET_CUSTOM;
+               } else {
+                       capacity->supported_accel_devices = INFERENCE_TARGET_GPU |
+                                                                                               INFERENCE_TARGET_CPU;
+               }
 
                LOGI("LEAVE");