mv_machine_learning: correct tensor buffer size 26/287926/3
author    Inki Dae <inki.dae@samsung.com>
          Wed, 8 Feb 2023 01:25:26 +0000 (10:25 +0900)
committer Inki Dae <inki.dae@samsung.com>
          Wed, 8 Feb 2023 04:36:59 +0000 (13:36 +0900)
commit    d693d9f112a66d07afe262064298d4f5e3be8cb5
tree      af8069cbe4622cda88fd26b474dae63366a88a65
parent    92a4a0000fcaff0a1434e128c259139e3a6091ea
mv_machine_learning: correct tensor buffer size

[Version] : 0.26.7
[Issue type] : bug fix

Correct output tensor buffer size.

In the original code, we set the size of inference_engine_tensor_buffer to
that of inference_engine_tensor_info, which incurs a segmentation fault
when the data type is not float. The two sizes differ as follows:

Size of inference_engine_tensor_info   : tensor element count
Size of inference_engine_tensor_buffer : tensor element count * bytes per
element.

So this patch calculates the size of inference_engine_tensor_buffer correctly
by checking the actual number of bytes per tensor element.
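
The corrected calculation amounts to the sketch below. The enum and helper
names here are hypothetical stand-ins, not the actual inference-engine API;
only the element-count-times-element-size arithmetic reflects the fix.

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical data-type enum; the real inference engine defines its own.
enum class TensorDataType { FLOAT32, INT16, UINT8 };

// Bytes occupied by a single tensor element of the given type.
static size_t bytes_per_element(TensorDataType type)
{
	switch (type) {
	case TensorDataType::FLOAT32:
		return 4;
	case TensorDataType::INT16:
		return 2;
	case TensorDataType::UINT8:
		return 1;
	}
	return 0;
}

// The buffer size must be element count * bytes per element.
// Using the element count alone (as the old code did) undersizes the
// buffer for any type wider than one byte and can cause a segfault.
static size_t tensor_buffer_size(size_t element_count, TensorDataType type)
{
	return element_count * bytes_per_element(type);
}
```

For example, a 224x224x3 uint8 tensor needs 150528 bytes, while the same
shape in float32 needs four times that.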

Change-Id: I21733fb341e93f325f6d4a3bb4df66ff69e15413
Signed-off-by: Inki Dae <inki.dae@samsung.com>
mv_machine_learning/inference/include/TensorBuffer.h
mv_machine_learning/inference/src/TensorBuffer.cpp
packaging/capi-media-vision.spec