[optimization] Bug fix for in-place layer optimization
author Parichay Kapoor <pk.kapoor@samsung.com>
Fri, 22 Jan 2021 03:41:14 +0000 (12:41 +0900)
committer Jijoong Moon <jijoong.moon@samsung.com>
Fri, 22 Jan 2021 08:14:04 +0000 (17:14 +0900)
commit 70f3a3afeaecbb1c6ea4b99b96a47fdf888110eb
tree a1369ae3e3742decb78dfdd742b4a7d93322761e
parent 5ae735516bb66183039338e3b9c1eefb5f52d775
[optimization] Bug fix for in-place layer optimization

In-place layer optimization is applied to multiple layer types, currently activation and batch
normalization layers, and this list will grow with data augmentation etc.
However, consecutive in-place layers cannot work correctly if those layers are trainable.
They work correctly only when they do not need to pass the derivative back.
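
To make the failure mode concrete, below is a minimal, self-contained C++ sketch
(illustrative only; Tensor and InPlaceRelu are hypothetical names, not the nntrainer API).
A single in-place activation is safe: it overwrites its input buffer, but its backward pass
can be computed from the output alone. A trainable layer such as batch normalization also
needs its original input to compute parameter gradients, so chaining several such in-place
layers destroys data the backward pass requires.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

struct Tensor {
  std::vector<float> data;
};

// An in-place activation writes its output into the same tensor
// that held its input, so the original input values are lost.
struct InPlaceRelu {
  // forward: replace every value in the shared buffer with max(0, x)
  void forward(Tensor &shared) {
    for (auto &v : shared.data)
      v = std::max(v, 0.0f);
  }

  // backward: still possible in-place, because ReLU only needs the
  // sign of its (overwritten) input, recoverable from the output
  void backward(const Tensor &shared, Tensor &grad) {
    for (std::size_t i = 0; i < grad.data.size(); ++i)
      grad.data[i] *= (shared.data[i] > 0.0f) ? 1.0f : 0.0f;
  }
};
```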

For now, this patch limits in-place optimization to at most two consecutive layers;
see the sketch below.
This will be generalized later based on the trainable and inPlace properties of each layer.
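
A minimal sketch of the kind of guard this restriction implies, under the reading that no
more than two layers in a row may run in-place. All names here (Layer, supportsInPlace,
optimized_in_place, inPlaceOptimize) are assumptions for illustration; the actual change
lives in network_graph.cpp and may differ.

```cpp
#include <memory>
#include <vector>

// Hypothetical layer handle; not the real nntrainer Layer class.
struct Layer {
  bool in_place_capable = false;  // can this layer type run in-place?
  bool optimized_in_place = false;
  bool supportsInPlace() const { return in_place_capable; }
};

// Walk the graph in execution order and mark layers as in-place,
// resetting the run whenever a layer stays out-of-place.
void inPlaceOptimize(std::vector<std::shared_ptr<Layer>> &layers) {
  unsigned int consecutive = 0;  // in-place layers seen in a row
  for (auto &layer : layers) {
    // allow at most two consecutive in-place layers: a longer chain
    // of trainable in-place layers cannot pass the derivative back
    if (layer->supportsInPlace() && consecutive < 2) {
      layer->optimized_in_place = true;
      consecutive += 1;
    } else {
      consecutive = 0;  // this layer keeps its own buffers
    }
  }
}
```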

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
nntrainer/graph/network_graph.cpp
test/unittest/unittest_nntrainer_models.cpp