[neuralnet] Support deallocate of tensors
author    Parichay Kapoor <pk.kapoor@samsung.com>
          Wed, 3 Mar 2021 11:28:17 +0000 (20:28 +0900)
committer Jijoong Moon <jijoong.moon@samsung.com>
          Tue, 9 Mar 2021 08:02:51 +0000 (17:02 +0900)
commit    30ec54e6c7a58feb292471c576ef7b61aa718c70
tree      e52b1abbf9a17a38b4852b8affea28a677b08772
parent    4ef6e70035c25e7afddeb121bc5be2a896478d1a
[neuralnet] Support deallocate of tensors

Support deallocation and allocation of tensors from the neuralnet.

Also perform the deallocation of tensors after each training run.
Once training completes, the memory associated with that run,
except for the weight variables, is freed.

The memory associated with inference is not freed automatically; it must be
freed manually by calling deallocate() on the model object.
Note that calling train() after inference() frees the inference memory;
the training run then allocates its own memory, which is again freed at the
end of the training.
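A minimal sketch of the intended lifecycle, assuming the usual
nntrainer::NeuralNetwork setup path (loadFromConfig()/compile()/initialize())
and a placeholder 'inputs' tensor list; only train(), inference() and
deallocate() are taken from this change, the rest is illustrative:

    #include <neuralnet.h>

    nntrainer::NeuralNetwork model;
    model.loadFromConfig("model.ini"); /** assumed setup path */
    model.compile();
    model.initialize();

    /** training allocates its tensors and frees everything except the
     *  weight variables once the run finishes */
    model.train();

    /** inference allocates its own tensors, which stay allocated ... */
    auto out = model.inference(inputs); /** 'inputs' is a placeholder */

    /** ... until deallocate() is called on the model object */
    model.deallocate();

    /** alternatively, a later train() frees the inference memory first,
     *  then allocates and finally frees its own training memory */
    model.train();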

**Self evaluation:**
1. Build test: [x] Passed [ ] Failed [ ] Skipped
2. Run test: [x] Passed [ ] Failed [ ] Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
nntrainer/models/neuralnet.cpp
nntrainer/models/neuralnet.h
test/unittest/unittest_nntrainer_graph.cpp
test/unittest/unittest_nntrainer_models.cpp