[Bug] s/weight_decay/weight_regularizer
authorJihoon Lee <jhoon.it.lee@samsung.com>
Tue, 16 Mar 2021 07:40:08 +0000 (16:40 +0900)
committerJijoong Moon <jijoong.moon@samsung.com>
Fri, 19 Mar 2021 02:33:08 +0000 (11:33 +0900)
weight_decay is an old property which made some application tests fail. This
patch resolves the issue.

revealed by #1008
resolves #1018
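
For INI files written against the old names, the rename amounts to the following (a minimal sketch; the l2norm / 0.005 values are taken from the updated application configs below, not defaults):

```ini
# old properties, no longer recognized:
# Weight_Decay = l2norm
# Weight_Decay_Lambda = 0.005

# renamed properties:
weight_regularizer = l2norm
weight_regularizer_constant = 0.005
```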

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Applications/MNIST/res/mnist.ini
Applications/ReinforcementLearning/DeepQ/jni/meson.build
Applications/TransferLearning/CIFAR_Classification/README.md
Applications/TransferLearning/CIFAR_Classification/jni/main.cpp
Applications/TransferLearning/CIFAR_Classification/jni/main_func.cpp
Applications/TransferLearning/CIFAR_Classification/res/Classification.ini
Applications/TransferLearning/CIFAR_Classification/res/Classification_func.ini
docs/configuration-ini.md
nntrainer/models/neuralnet.cpp

Applications/MNIST/res/mnist.ini
index 39b8bb4..828ebf8 100644
@@ -7,7 +7,7 @@ Optimizer = adam        # Optimizer : sgd (stochastic gradient descent),
                        #             adam (Adaptive Moment Estimation)
 Loss = cross           # Loss function : mse (mean squared error)
                         #                       cross ( for cross entropy )
-Save_Path = "mnist_model.bin"          # model path to save / read
+# Save_Path = "mnist_model.bin"        # model path to save / read
 batch_size = 32                # batch size
 beta1 = 0.9            # beta 1 for adam
 beta2 = 0.999  # beta 2 for adam
Applications/ReinforcementLearning/DeepQ/jni/meson.build
index a826cd3..8cb775f 100644
@@ -32,4 +32,4 @@ e = executable('nntrainer_deepq',
   install_dir: application_install_dir
 )
 
-test('app_DeepQ', e, args: [res_path / 'DeepQ.ini'], timeout: 60)
+test('app_DeepQ', e, args: [res_path / 'DeepQ.ini'], timeout: 60)
Applications/TransferLearning/CIFAR_Classification/README.md
index cfd3338..15cec08 100644
@@ -77,8 +77,8 @@ Type = fully_connected
 Unit = 10
 bias_initializer = zeros
 Activation = softmax
-Weight_Decay = l2norm
-weight_Decay_Lambda = 0.005
+weight_regularizer = l2norm
+weight_regularizer_constant = 0.005
 ```
 
 If you want to use generator (option #2), then remove [DataSet] section, and provide dataset generator callbacks.
Applications/TransferLearning/CIFAR_Classification/jni/main.cpp
index 90ac701..ed989f5 100644
@@ -431,8 +431,8 @@ int main(int argc, char *argv[]) {
   }
   try {
     NN.readModel();
-  } catch (...) {
-    std::cerr << "Error during readModel" << std::endl;
+  } catch (std::exception &e) {
+    std::cerr << "Error during readModel reason: " << e.what() << std::endl;
     return 1;
   }
 
Applications/TransferLearning/CIFAR_Classification/jni/main_func.cpp
index 75fb3c9..d0550d5 100644
@@ -305,8 +305,8 @@ int main(int argc, char *argv[]) {
   }
   try {
     model->readModel();
-  } catch (...) {
-    std::cerr << "Error during readModel" << std::endl;
+  } catch (std::exception &e) {
+    std::cerr << "Error during readModel, reason: " << e.what() << std::endl;
     return 1;
   }
   model->setDataset(dataset);
Applications/TransferLearning/CIFAR_Classification/res/Classification.ini
index dcfea8b..94afc31 100644
@@ -33,5 +33,5 @@ input_layers = inputlayer
 Unit = 10              # Output Layer Dimension ( = Weight Width )
 Bias_initializer = zeros
 Activation = softmax   # activation : sigmoid, softmax
-Weight_Decay = l2norm
-weight_Decay_Lambda = 0.005
+weight_regularizer = l2norm
+weight_regularizer_constant = 0.005
Applications/TransferLearning/CIFAR_Classification/res/Classification_func.ini
index 2491c8f..4d93ffe 100644
@@ -30,5 +30,5 @@ input_layers = flatten
 unit = 10
 Bias_initializer = zeros
 Activation = softmax
-Weight_Decay = l2norm
-weight_Decay_Lambda = 0.005
+weight_regularizer = l2norm
+weight_regularizer_constant = 0.005
docs/configuration-ini.md
index 86b13f9..a088ac1 100644
@@ -168,7 +168,7 @@ Start with "[ ${layer name} ]". This layer name must be unique throughout networ
      * relu : ReLU function
      * softmax : softmax function
 
-8. ```weight_decay = <string>```
+8. ```weight_regularizer = <string>```
 
    set weight decay
      * l2norm : L2 normalization
@@ -232,10 +232,10 @@ Each layer requires different properties.
 
  | Layer | Properties |
  |:-------:|:---|
- | conv2d |<ul><li>filters</li><li>kernel_size</li><li>stride</li><li>padding</li><li>normalization</li><li>standardization</li><li>input_shape</li><li>bias_init_zero</li><li>activation</li><li>flatten</li><li>weight_decay</li><li>weight_regularizer_constant</li><li>weight_initializer</li></ul>|
+ | conv2d |<ul><li>filters</li><li>kernel_size</li><li>stride</li><li>padding</li><li>normalization</li><li>standardization</li><li>input_shape</li><li>bias_init_zero</li><li>activation</li><li>flatten</li><li>weight_regularizer</li><li>weight_regularizer_constant</li><li>weight_initializer</li></ul>|
  | pooling2d | <ul><li>pooling</li><li>pool_size</li><li>stride</li><li>padding</li></ul> |
  | flatten | - |
- | fully_connected | <lu><li>unit</li><li>normalization</li><li>standardization</li><li>input_shape</li><li>bias_initializer</li><li>activation</li><li>flatten</li><li>weight_decay</li><li>weight_regularizer_constant</li><li>weight_initializer</li></lu>|
+ | fully_connected | <lu><li>unit</li><li>normalization</li><li>standardization</li><li>input_shape</li><li>bias_initializer</li><li>activation</li><li>flatten</li><li>weight_regularizer</li><li>weight_regularizer_constant</li><li>weight_initializer</li></lu>|
  | input | <lu><li>normalization </li><li>standardization</li><li>input_shape</li><li>flatten</li></lu>|
  | batch_normalization | <lu><li>epsilon</li><li>flatten</li></lu> |
 
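For reference, a fully_connected layer section exercising the renamed properties looks like this (mirroring the updated Classification.ini above; the section name and values are illustrative):

```ini
[outputlayer]
Type = fully_connected
Unit = 10
Bias_initializer = zeros
Activation = softmax
weight_regularizer = l2norm
weight_regularizer_constant = 0.005
```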
nntrainer/models/neuralnet.cpp
index 4d12a89..e57fd55 100644
@@ -461,7 +461,7 @@ void NeuralNetwork::saveModel() {
  */
 void NeuralNetwork::readModel() {
   if (!initialized)
-    throw std::runtime_error("Cannot save the model before initialize.");
+    throw std::runtime_error("Cannot read the model before initialize.");
 
   if (save_path == std::string()) {
     return;