mv_machine_learning: introduce common asynchronous inference manager
author    Inki Dae <inki.dae@samsung.com>
Thu, 7 Sep 2023 00:16:24 +0000 (09:16 +0900)
committer Kwanghoon Son <k.son@samsung.com>
Wed, 25 Oct 2023 01:54:03 +0000 (10:54 +0900)
commit 50a374fbdea640869d6a670cb5d749a461c8602e
tree   de046eb854dfca1940ea0f440700d1232b6bf5ff
parent 68c9f911c70f74056c61696f836a889dffdc0e70
mv_machine_learning: introduce common asynchronous inference manager

[Issue type] new feature

Introduce a new asynchronous inference manager - the AsyncManager class -
for asynchronous API support in all task groups. This patch just moves the
queue management code from the object detection task group to the common
directory so that other task groups can share it for their asynchronous
API implementations.

Regarding queue management, the input queue can be shared by all task
groups, even though the data type of each input tensor differs, by storing
the input tensor data as unsigned char. This approach is not applied to
the output queue.
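
For illustration, a typed input can be erased to bytes roughly as below.
This is a minimal sketch; the AsyncInputQueue members and the toBytes
helper shown here are assumptions, not the actual implementation:

    #include <cstring>
    #include <vector>

    template<typename T> struct AsyncInputQueue {
        unsigned long frame_number {};
        std::vector<std::vector<T> > inputs;
    };

    // The manager can keep AsyncInputQueue<unsigned char> internally so
    // that one queue type serves float, uint8 and any other tensor data
    // type (assuming trivially copyable element types).
    template<typename T>
    AsyncInputQueue<unsigned char> toBytes(const AsyncInputQueue<T> &in)
    {
        AsyncInputQueue<unsigned char> out;

        out.frame_number = in.frame_number;
        for (const auto &tensor : in.inputs) {
            std::vector<unsigned char> bytes(tensor.size() * sizeof(T));

            std::memcpy(bytes.data(), tensor.data(), bytes.size());
            out.inputs.push_back(std::move(bytes));
        }

        return out;
    }

popFromInput<T>() would then perform the reverse conversion before the
inference callback consumes the data.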

For the output queue, how the output tensor data is decoded differs
according to the pre-trained model in use, which means the result
structure of each pre-trained model should be different. Therefore, this
patch uses a template type for the output queue so that a model-driven
data structure can be used.
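
A rough sketch of a templated, thread-safe output queue (the class and
member names here are illustrative only, not the actual implementation):

    #include <mutex>
    #include <queue>

    template<typename R> class OutputQueue {
    public:
        void push(const R &result)
        {
            std::lock_guard<std::mutex> lock(_mutex);
            _queue.push(result);
        }

        // Caller must make sure the queue is not empty before popping.
        R pop()
        {
            std::lock_guard<std::mutex> lock(_mutex);
            R result = _queue.front();
            _queue.pop();
            return result;
        }

    private:
        // R is the model-specific result structure - e.g. an object
        // detection result type - so each task group instantiates its
        // own output queue type.
        std::queue<R> _queue;
        std::mutex _mutex;
    };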

Each task group that wants to use AsyncManager has to follow the rules below:
 - Create an async manager handle with its own inference callback function,
   i.e., in template<typename T> void ObjectDetection::performAsync(...)
       {
           if (!_async_manager) {
               // Create the async manager handle with its own inference callback.
               _async_manager = std::make_unique<AsyncManager<TaskResult> >(
                       [this]() { inferenceCallback<T, TaskResult>(); });
           }

           if (!_async_manager->isInputQueueEmpty<T>())
               return;
           ...

           // Push an input queue entry for the inference request.
           _async_manager->pushToInput<T>(in_queue);

           // Invoke the async manager. This triggers an internal thread
           // which performs the inference with the given input queue.
           _async_manager->invoke<T>();
       }

   and in ObjectDetection::getOutput(),
       {
           if (_async_manager) {
               if (!_async_manager->isWorking())
                   throw an_exception;

               // Block until the internal thread pushes a new result.
               _async_manager->waitforOutputQueue();
               _current_result = _async_manager->popFromOutput();
           }
           ...
       }

   and in the inference callback function of each task group,
       template<typename T, typename R> void ObjectDetection::inferenceCallback()
       {
           // Get an input queue entry for inference.
           AsyncInputQueue<T> inputQ = _async_manager->popFromInput<T>();

           inference<T>(inputQ.inputs);

           R &resultQ = result();
           resultQ.frame_number = inputQ.frame_number;

           // Push the inference result to the outgoing queue.
           _async_manager->pushToOutput(resultQ);
       }
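
Putting the pieces together, the AsyncManager interface relied on by the
examples above could be declared roughly as below. This is a sketch
inferred from the usage only, not the actual header
(mv_machine_learning/common/include/async_manager.h):

    #include <functional>
    #include <thread>
    #include <utility>

    template<typename T> struct AsyncInputQueue;

    template<typename R> class AsyncManager {
    public:
        // Each task group passes its own inference callback, which the
        // internal worker thread runs for every queued input.
        explicit AsyncManager(std::function<void()> cb)
                : _callback(std::move(cb))
        {}

        template<typename T> bool isInputQueueEmpty();
        template<typename T> void pushToInput(AsyncInputQueue<T> &in_queue);
        template<typename T> AsyncInputQueue<T> popFromInput();
        template<typename T> void invoke(); // starts the worker thread

        bool isWorking();
        void waitforOutputQueue();
        void pushToOutput(R &result);
        R popFromOutput();

    private:
        std::function<void()> _callback;
        std::thread _thread;
    };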

Change-Id: Ie2b68b910a73377fa4d4ad8646b134e3c2bf709a
Signed-off-by: Inki Dae <inki.dae@samsung.com>
mv_machine_learning/common/CMakeLists.txt
mv_machine_learning/common/include/async_manager.h [new file with mode: 0644]
mv_machine_learning/common/src/async_manager.cpp [new file with mode: 0644]
mv_machine_learning/object_detection/CMakeLists.txt
mv_machine_learning/object_detection/include/object_detection.h
mv_machine_learning/object_detection/include/object_detection_type.h
mv_machine_learning/object_detection/src/object_detection.cpp
packaging/capi-media-vision.spec