bug fix to input and output tensor dimension 32/291532/2 accepted/tizen/unified/20230420.041532
author      Inki Dae <inki.dae@samsung.com>
            Tue, 18 Apr 2023 05:41:27 +0000 (14:41 +0900)
committer   Inki Dae <inki.dae@samsung.com>
            Tue, 18 Apr 2023 06:17:43 +0000 (15:17 +0900)
commit      c2c143f9a037c242347aab32693970a09c8311f6
tree        918734620b8ce3e350ad5e8e73925e102ea2c635
parent      4649081e074f76a83e4830d89d083799f01b304d
bug fix to input and output tensor dimension

[Version] : 0.4.9
[Issue type] : bug fix

Correctly handle the tensor dimension values obtained from the ML Single API.

This issue has been present since the following patch was committed:
 892c8d2e9af9ce49e714a467063c45cd4ed28cba of the machine_learning repo.

This is a temporary workaround; the proper fix should use
ML_TENSOR_RANK_LIMIT instead. As of now, ML_TENSOR_RANK_LIMIT is fixed at 16.
Ideally, if we set the input or output tensor dimensions to the actual
tensor dimension values using the ml_tensors_info_set_tensor_dimension
function, then we should get the same dimension values back from the
ml_tensors_info_get_tensor_dimension function. However, the current version
of ml_tensors_info_get_tensor_dimension always reports a fixed rank of 16.
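The workaround described above can be sketched as follows. This is not the
actual patch: it is a minimal, self-contained illustration that assumes the
get function fills all 16 slots of the dimension array and pads the unused
trailing slots with 1, so the effective rank can be recovered by trimming
those trailing 1s. The constant name kRankLimit and the helper EffectiveRank
are hypothetical, introduced only for this sketch.

```cpp
#include <cstddef>

// Assumption: mirrors ML_TENSOR_RANK_LIMIT (16) from recent ML API headers.
constexpr std::size_t kRankLimit = 16;

// Hypothetical helper: recover the effective tensor rank from a dimension
// array whose unused trailing slots are assumed to be padded with 1.
// Keeps at least one dimension so a scalar/all-ones tensor yields rank 1.
std::size_t EffectiveRank(const unsigned int (&dim)[kRankLimit]) {
    std::size_t rank = kRankLimit;
    while (rank > 1 && dim[rank - 1] == 1)
        --rank;
    return rank;
}
```

For example, a dimension array of {3, 224, 224, 1, 1, ..., 1} would be
trimmed to an effective rank of 3 instead of the reported 16. Note that this
heuristic cannot distinguish a genuine trailing dimension of size 1 from
padding, which is why the commit message calls it a temporary solution.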

Change-Id: Ie05f7be2b53a6d1c5a5dfcd36cbefc4320c05b5f
Signed-off-by: Inki Dae <inki.dae@samsung.com>
packaging/inference-engine-mlapi.spec
src/inference_engine_mlapi.cpp