mv_machine_learning: introduce common asynchronous inference manager
[Issue type] new feature
Introduce a new asynchronous inference manager - the AsyncManager class - for
asynchronous API support across all task groups. This patch just moves the
queue-management code from the object detection task group to the common
directory so that other task groups can reuse it for their asynchronous API
implementations.
Regarding queue management, the input queue can be shared by all task
groups even though the data type of each input tensor differs between
them, because the input tensor data is stored as unsigned char. This
approach is not used for the output queue.
In case of the output queue, how the output tensor data is decoded differs
according to the pre-trained model in use, which means each model needs its
own result structure. Therefore, this patch uses a template type for the
output queue so that a model-driven data structure can be used.
Each task group that wants to use AsyncManager has to follow the rules below:
- create an async manager handle with its own inference callback function.
i.e.,
in template<typename T> void ObjectDetection::performAsync(...),
{
if (!_async_manager) {
// Create async manager handler with its own inference callback.
_async_manager = make_unique<AsyncManager<TaskResult> >(
[this]() { inferenceCallback<T, TaskResult>(); });
}
if (!_async_manager->isInputQueueEmpty<T>())
return;
...
// Push the input data to the input queue for an inference request.
_async_manager->pushToInput<T>(in_queue);
// Invoke async manager. This triggers an internal thread for
// performing the inference with the given input queue.
_async_manager->invoke<T>();
}
and in ObjectDetection::getOutput()
{
if (_async_manager) {
if (!_async_manager->isWorking())
throw an_exception;
_async_manager->waitforOutputQueue();
_current_result = _async_manager->popFromOutput();
}
...
}
and in inference callback function of each task group,
template<typename T, typename R> void ObjectDetection::inferenceCallback()
{
// Get an input queue entry for inference.
AsyncInputQueue<T> inputQ = _async_manager->popFromInput<T>();
inference<T>(inputQ.inputs);
R &resultQ = result();
resultQ.frame_number = inputQ.frame_number;
// Push the inference result to the output queue.
_async_manager->pushToOutput(resultQ);
}
Change-Id: Ie2b68b910a73377fa4d4ad8646b134e3c2bf709a
Signed-off-by: Inki Dae <inki.dae@samsung.com>