Since developers can reduce the number of tensors by calling
ml_tensors_info_set_count(), entries stored beyond the reduced count are
no longer visited, causing a memory leak when ml_tensors_info_free() or
ml_tensors_data_destroy() is invoked. This patch fixes that bug by
checking all possible elements (i.e., ML_TENSOR_SIZE_LIMIT).
Signed-off-by: Sangjung Woo <sangjung.woo@samsung.com>
   if (!info)
     return;

-  for (i = 0; i < info->num_tensors; i++) {
+  for (i = 0; i < ML_TENSOR_SIZE_LIMIT; i++) {
     if (info->info[i].name) {
       g_free (info->info[i].name);
       info->info[i].name = NULL;
   _data = (ml_tensors_data_s *) data;

-  for (i = 0; i < _data->num_tensors; i++) {
+  for (i = 0; i < ML_TENSOR_SIZE_LIMIT; i++) {
     if (_data->tensors[i].tensor) {
       g_free (_data->tensors[i].tensor);
       _data->tensors[i].tensor = NULL;