[ Bug ] Fix bug in reading the weight for batch normalization layer
authorjijoong.moon <jijoong.moon@samsung.com>
Wed, 21 Jun 2023 06:53:52 +0000 (15:53 +0900)
committerJijoong Moon <jijoong.moon@samsung.com>
Fri, 23 Jun 2023 02:22:12 +0000 (11:22 +0900)
There is a bug when the model loads the data for the batch normalization
layer.

During the setup of requestWeights in the manager, the max execution
order is added to the gradient for gradient clipping, but it was also
added to the variable (weight). This PR fixes that.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
nntrainer/tensor/manager.cpp

index 536a273..8496207 100644 (file)
@@ -398,7 +398,9 @@ std::vector<Weight *> Manager::requestWeights(
      */
     if (Weight::isGradientClipByGlobalNorm(clip_by_global_norm)) {
       grad_exec_order.push_back(TensorPool::PERSIST_END_ORDER);
-      var_exec_order.push_back(TensorPool::PERSIST_END_ORDER);
+      // TODO: double-check whether it is OK not to add PERSIST_END_ORDER
+      // here, or whether other conditions are needed
+      // var_exec_order.push_back(TensorPool::PERSIST_END_ORDER);
     }
 
     Tensor *var = nullptr, *grad = nullptr;