There is a bug when the model loads the data for the batch normalization
layer.
During the setup of `requestWeights` in the Manager, the max execution
order is added to the gradient's execution orders for gradient clipping,
but it was also added to the weight variable's execution orders. This PR
fixes it.
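To illustrate the intent of the fix, here is a minimal, self-contained C++ sketch. It is not NNTrainer's actual Manager code: `WeightRequest`, `requestWeightOrders`, and the concrete value of `PERSIST_END_ORDER` are hypothetical stand-ins that mirror the diff below.

```cpp
#include <iostream>
#include <vector>

// Hypothetical sentinel for "last execution order"; NNTrainer keeps the
// real value in TensorPool::PERSIST_END_ORDER.
constexpr unsigned int PERSIST_END_ORDER = 1u << 20;

// Hypothetical stand-in for the per-weight bookkeeping done during
// weight requests.
struct WeightRequest {
  std::vector<unsigned int> var_exec_order;  // orders where the variable is used
  std::vector<unsigned int> grad_exec_order; // orders where the gradient is used
  bool clip_by_global_norm;
};

void requestWeightOrders(WeightRequest &w) {
  if (w.clip_by_global_norm) {
    // The gradient must stay alive until the deferred global-norm
    // clipping pass, so it gets the persistent end order...
    w.grad_exec_order.push_back(PERSIST_END_ORDER);
    // ...but the weight variable must NOT be extended the same way;
    // doing so is what broke loading batch-normalization data.
  }
}

int main() {
  WeightRequest w{{3, 7}, {7}, /*clip_by_global_norm=*/true};
  requestWeightOrders(w);
  std::cout << "grad orders:";
  for (auto o : w.grad_exec_order)
    std::cout << ' ' << o;
  std::cout << "\nvar orders:";
  for (auto o : w.var_exec_order)
    std::cout << ' ' << o;
  std::cout << '\n';
  return 0;
}
```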
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
```diff
   */
   if (Weight::isGradientClipByGlobalNorm(clip_by_global_norm)) {
     grad_exec_order.push_back(TensorPool::PERSIST_END_ORDER);
-    var_exec_order.push_back(TensorPool::PERSIST_END_ORDER);
+    // TODO: Double-check whether it is OK not to add PERSIST_END_ORDER
+    // here, or whether other conditions should be added.
+    // var_exec_order.push_back(TensorPool::PERSIST_END_ORDER);
   }
   Tensor *var = nullptr, *grad = nullptr;
```
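For context, here is a conceptual sketch of why the extra entry mattered, assuming (purely for illustration) that the tensor pool derives a tensor's lifetime from the minimum and maximum of its execution orders; the `lifetime` helper below is hypothetical, not part of NNTrainer.

```cpp
#include <algorithm>
#include <utility>
#include <vector>

constexpr unsigned int PERSIST_END_ORDER = 1u << 20; // hypothetical sentinel

// Hypothetical helper: a tensor is kept alive over [first use, last use].
std::pair<unsigned int, unsigned int>
lifetime(const std::vector<unsigned int> &exec_orders) {
  auto [lo, hi] = std::minmax_element(exec_orders.begin(), exec_orders.end());
  return {*lo, *hi};
}

int main() {
  std::vector<unsigned int> var_orders{3, 7};
  // Before the fix, PERSIST_END_ORDER was also pushed onto var_orders,
  // stretching the variable's lifetime to the very end of execution and
  // interfering with loading the batch-normalization layer's data.
  auto [first, last] = lifetime(var_orders); // {3, 7} after the fix
  return (first == 3 && last == 7) ? 0 : 1;
}
```

Under this model, dropping the push on `var_exec_order` restores the variable's normal lifetime, while the gradient still persists for the clipping pass.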