Jihoon Lee [Mon, 22 Jun 2020 08:07:35 +0000 (17:07 +0900)]
[Fix/jni] Change tflite-dependency
There seems to be a problem with building tensorflow. This PR proposes to use a
prebuilt tensorflow-lite instead of building one.
**Changes proposed in this PR:**
- Change `Applications/android.mk` to use prebuilt library
- Change `prepare_tflite.sh`
- Bump tflite to 1.13.1 as suggested in #20
Resolves #207
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Tue, 23 Jun 2020 01:39:07 +0000 (10:39 +0900)]
Make .clang-format compatible with version 6
Clang-format 6 is widely used. However,
`AllowAllConstructorInitializersOnNextLine`, added in #203, is only supported
from clang-format 9.
This PR reverts the use of `AllowAllConstructorInitializersOnNextLine`
while keeping a similar linting style.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Fri, 19 Jun 2020 08:29:45 +0000 (17:29 +0900)]
[meson] Arrange file order
Arrange file order for easier access
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Thu, 18 Jun 2020 11:10:43 +0000 (20:10 +0900)]
[loss] Combine softmax with cross entropy
Softmax is combined with cross entropy.
Cross entropy still exists on its own as an option but isn't supported.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Thu, 18 Jun 2020 08:57:41 +0000 (17:57 +0900)]
[loss] Combined cross entropy with sigmoid
Combined cross entropy with sigmoid for the loss because of its higher stability.
Note that this happens internally and is not exposed outside.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Fri, 19 Jun 2020 07:41:45 +0000 (16:41 +0900)]
[Proposal] clang format for initializer list
The previous initializer list formatting was hard to read when the list got too long,
e.g. the Layer initialization list.
before:
```cpp
Layer()
: last_layer(false), init_zero(false), type(LAYER_UNKNOWN),
activation(NULL), activation_prime(NULL), activation_type(ACT_UNKNOWN),
bn_follow(false), weight_decay(), weight_ini_type(WEIGHT_UNKNOWN) {}
```
after:
```cpp
Layer()
: last_layer(false),
init_zero(false),
type(LAYER_UNKNOWN),
activation(NULL),
activation_prime(NULL),
activation_type(ACT_UNKNOWN),
bn_follow(false),
weight_decay(),
weight_ini_type(WEIGHT_UNKNOWN) {}
```
**Self evaluation:**
1. Build test: [ ]Passed [ ]Failed [x]Skipped
2. Run test: [ ]Passed [ ]Failed [x]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
jijoong.moon [Wed, 17 Jun 2020 12:30:33 +0000 (21:30 +0900)]
[ Pooling2D ] backwarding
This PR provides backwarding process of Pooling 2D.
. backwarding for max pooling 2D
. backwarding for average pooling 2D
. backwarding for global_max and global_average is NYI.
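As a rough illustration of the two rules above (a minimal sketch, not the actual layer code, ignoring strides, padding, and batch handling): max pooling routes the incoming gradient only to the position of the maximum, while average pooling spreads it evenly over the window.
```cpp
#include <vector>

// Sketch: backward rules for a single pooling window.
void max_pool_backward(const std::vector<float> &window, float grad_out,
                       std::vector<float> &grad_in) {
  size_t argmax = 0;
  for (size_t i = 1; i < window.size(); ++i)
    if (window[i] > window[argmax])
      argmax = i;
  grad_in[argmax] += grad_out; // only the max position receives the gradient
}

void avg_pool_backward(float grad_out, std::vector<float> &grad_in) {
  for (float &g : grad_in)
    g += grad_out / grad_in.size(); // every position gets an equal share
}
```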
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Jihoon Lee [Thu, 18 Jun 2020 06:45:14 +0000 (15:45 +0900)]
Add test for activation layer (wait for #187)
**Changes proposed in this PR:**
- Add test for activation layer
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Thu, 18 Jun 2020 10:15:19 +0000 (19:15 +0900)]
Separate activation to layer
**Changes proposed in this PR:**
- add activation_layer.[h|cpp]
- add test to activation_layer
See also #153, #152
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Fri, 19 Jun 2020 08:15:30 +0000 (17:15 +0900)]
Fix bug in `Tensor::setValue`
`memset` can't be used to initialize a float array as explained
[here](https://stackoverflow.com/questions/1040070/initializing-a-float-array-with-memset).
**Changes proposed in this PR:**
- change `memset` to `std::fill`
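A minimal illustration of why the change is needed (not the Tensor code itself): `memset` writes a byte pattern, so it can only produce a meaningful float for the all-zero pattern, while `std::fill` assigns the actual float value.
```cpp
#include <algorithm>
#include <cstring>

int main() {
  float buf[4];
  std::memset(buf, 1, sizeof(buf)); // fills bytes 0x01: tiny garbage floats
  std::fill(buf, buf + 4, 1.0f);    // every element is exactly 1.0f
  return buf[0] == 1.0f ? 0 : 1;
}
```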
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Thu, 18 Jun 2020 09:10:00 +0000 (18:10 +0900)]
[warning fix] unsigned int compare with int warning
Fix warnings from comparing unsigned int values with signed values.
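A sketch of the typical pattern behind this warning and its fix (illustrative code, not the actual change): the loop index type is made unsigned so it matches the container's size type.
```cpp
#include <vector>

int count_positive(const std::vector<float> &v) {
  int n = 0;
  for (unsigned int i = 0; i < v.size(); ++i) // unsigned index vs. unsigned size
    if (v[i] > 0.0f)
      ++n;
  return n;
}
```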
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Thu, 18 Jun 2020 06:10:22 +0000 (15:10 +0900)]
[parse] Parse unknown properties
Properties exposed to the user and internal ones are different (loss layer, etc).
Hence, using `*_string.size() - 1` for unknown cases would cause bugs in parse_util.
Replaced with its own individual unknown value.
V2:
Combined all individual layer properties into common properties in layer.h
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
jijoong.moon [Wed, 17 Jun 2020 10:00:07 +0000 (19:00 +0900)]
[ Flatten ] backwarding
This PR includes back propagation of Flatten Layer.
. backwarding Flatten Layer.
. batch(), channel(), width(), height(), setDim() of tensor
. unit test of flatten layer
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Parichay Kapoor [Thu, 18 Jun 2020 04:46:04 +0000 (13:46 +0900)]
[bugfix] Bug fix with git merge
A git merge brought in new commits causing build errors.
Needs urgent merge.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Wed, 17 Jun 2020 06:14:05 +0000 (15:14 +0900)]
[neuralnet] Handle adding layer with compiled model
This PR handles the case where a layer is added after the model has been compiled.
Currently this is set to return an error.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Wed, 17 Jun 2020 04:26:59 +0000 (13:26 +0900)]
[API] simplify API
Remove the exposed function "model_construct_with_conf", as "compile_with_conf" then looks strange without any config.
Rather, just keep one "model_construct", and "compile_with_conf" can take the config file.
Updated corresponding unittests
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Tue, 16 Jun 2020 10:12:04 +0000 (19:12 +0900)]
[save/load] save/load the optimizer parameters
Save and load the optimizer parameters as well for continued training.
Add an option in neuralnet to continue training from a previous training run.
Resolves #172
V2:
setOptimizer() bug fix so that it is called with set only for the fc layer and not other layers
setOptimizer() for fc_layer is now distinct from the virtual one defined by its parent
added continued training property for optimizer
added getSize in tensor
save optimizer type to verify that the loaded optimizer values can be used sensibly
if not loading, move the file seek ahead
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Mon, 15 Jun 2020 09:00:27 +0000 (18:00 +0900)]
[fc_layer] Initialization bug fix
Add missing initialization of unit at object construction
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Mon, 15 Jun 2020 04:19:51 +0000 (13:19 +0900)]
[layer] Added loss layer
Added loss layer
This is added by the framework and is hidden from the user
This separates all the cost/loss related extra work from the layers
However, loss and cost are now available for and from all layers.
Resolves #101
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
jijoong.moon [Tue, 16 Jun 2020 04:15:56 +0000 (13:15 +0900)]
[ Flatten ] forwarding / copy for Flatten Layer
This PR provides the forwarding/copy functions of the Flatten Layer.
. implement forwarding function
. implement copy function
Resolves:
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Parichay Kapoor [Tue, 16 Jun 2020 09:10:21 +0000 (18:10 +0900)]
[init] Making random deterministic
Adding determinism to the random number generators in the library.
DataBuffer has multiple threads, but the train/valid/test threads
run in sequence, to my understanding.
Resolves #167
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
jijoong.moon [Tue, 16 Jun 2020 04:08:05 +0000 (13:08 +0900)]
[ Layer copy ] copy function for conv2d & pooling
This PR fixes the copy member function of the conv2d and pooling layers.
. include copy of layer variables for conv2d
. implement copy of pooling2d layer
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Parichay Kapoor [Tue, 16 Jun 2020 09:51:58 +0000 (18:51 +0900)]
[neuralnet] Iteration bug fix in learning rate
The learning rate is decayed using the iteration count;
however, the current implementation was using the epoch count.
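A minimal sketch of the intended behavior, assuming exponential decay and illustrative parameter names (not the exact member names): the exponent is driven by the iteration count rather than the epoch count.
```cpp
#include <cmath>

// lr = base_lr * decay_rate ^ (iteration / decay_steps)
float decayed_lr(float base_lr, float decay_rate, float decay_steps,
                 int iteration) {
  return base_lr * std::pow(decay_rate, iteration / decay_steps);
}
```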
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Tue, 16 Jun 2020 04:53:27 +0000 (13:53 +0900)]
[Docs] Update readme.md prerequisites
**Changes proposed in this PR:**
- Update readme.md prerequisites to include gcc version
**Self evaluation:**
1. Build test: [ ]Passed [ ]Failed [X]Skipped
2. Run test: [ ]Passed [ ]Failed [X]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Tue, 16 Jun 2020 02:41:19 +0000 (11:41 +0900)]
Optimize Optimizer::calculate
**Changes proposed in this PR:**
- Optimize `Optimizer::calculate` with the new add_i and applyIf
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Tue, 16 Jun 2020 01:24:34 +0000 (10:24 +0900)]
Add `LazyTensor::applyIf`
This PR proposes `LazyTensor::applyIf` for more control flow.
Because of the problem described in http://wg21.link/P0834R0,
a macro that wraps a function needs to be used when passing one.
**Changes proposed in this PR:**
- add `LazyTensor::applyIf` semantics
- add `LazyTensor::applyIf` tests
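A minimal sketch of the macro idea (the macro name here is hypothetical, not the one added by this PR): an overloaded or templated function such as `std::exp` is not a single callable object, so it is wrapped in a generic lambda before being passed around.
```cpp
#include <cmath>
#include <utility>

// Hypothetical wrapper macro turning a function name into a callable object.
#define NNTR_WRAP_FN(f)                                                        \
  [](auto &&...args) -> decltype(auto) {                                       \
    return f(std::forward<decltype(args)>(args)...);                          \
  }

int main() {
  auto exp_fn = NNTR_WRAP_FN(std::exp); // now an ordinary callable
  return exp_fn(0.0) == 1.0 ? 0 : 1;
}
```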
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
jijoong.moon [Tue, 16 Jun 2020 01:06:45 +0000 (10:06 +0900)]
[ Flatten ] Skeleton of Flatten Layer
This PR provides skeleton code for flatten layer.
- Header & implementation of flatten layer
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Mon, 15 Jun 2020 07:01:16 +0000 (16:01 +0900)]
[ Pooling2D ] forwarding pooling 2D layer
This PR includes the forwarding process of the pooling 2d layer,
in which:
. implementation of forwarding
. unit test code for forwarding
. input / output generation of pooling 2D ( max pooling only )
. move zero_pad to util_func.h
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Mon, 15 Jun 2020 10:44:31 +0000 (19:44 +0900)]
[ Bug ] setActivation for input layer
Input Layer does not have activation property
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Mon, 15 Jun 2020 04:12:48 +0000 (13:12 +0900)]
[ Pooling2D ] initialize pooling 2d layer
This PR includes initialization of pooling 2d layer.
. check input dimension
. set input / output dimension
. allocate hidden layer
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Jihoon Lee [Mon, 15 Jun 2020 06:22:54 +0000 (15:22 +0900)]
[Tensor] Change Add_i to have alpha signature
**Changes proposed in this PR:**
- `Add_i(Tensor &T, float alpha)` for coefficient multiplication
- Change test accordingly
- Optimize `blas` implementation for `multiply_i`
See also: #166
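For illustration, the in-place operation amounts to `this += alpha * other`, which maps directly onto BLAS `saxpy`; a minimal sketch over a flat float buffer (the real Tensor layout and member names may differ):
```cpp
#include <cblas.h>
#include <vector>

// self[i] += alpha * other[i], in place, with no temporary copy.
void add_i(std::vector<float> &self, const std::vector<float> &other,
           float alpha = 1.0f) {
  cblas_saxpy(static_cast<int>(self.size()), alpha, other.data(), 1,
              self.data(), 1);
}
```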
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Mon, 15 Jun 2020 08:39:40 +0000 (17:39 +0900)]
[util] Simplify sigmoidPrime
Simplified sigmoid prime, which IMO might also be more efficient.
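The simplification presumably relies on the identity sigmoid'(x) = sigmoid(x) * (1 - sigmoid(x)); a minimal sketch (not the exact library code):
```cpp
#include <cmath>

static float sigmoid(float x) { return 1.0f / (1.0f + std::exp(-x)); }

// Reusing the sigmoid value avoids recomputing exponentials.
static float sigmoidPrime(float x) {
  float s = sigmoid(x);
  return s * (1.0f - s);
}
```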
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Mon, 15 Jun 2020 08:37:36 +0000 (17:37 +0900)]
[layer] object initialization bugfix
Added bugfix to object initialization
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
jijoong.moon [Mon, 15 Jun 2020 02:21:33 +0000 (11:21 +0900)]
[ Pooling2D ] Set Property for Pooling 2D Layer
This PR provides setting properties for the pooling 2d layer,
which are:
. stride, padding, pooling_size, pooling type
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Jihoon Lee [Mon, 15 Jun 2020 01:48:37 +0000 (10:48 +0900)]
Refactor: delete Tensor::mat2vec()
This method was initially proposed to flatten and clone the matrix to a vector.
However, since `Tensor::getData()` covers most of its purpose, it seems
no longer needed.
**Changes proposed in this PR:**
- Delete `Tensor::mat2vec()`.
See also:
https://github.com/nnstreamer/nntrainer/pull/149#discussion_r439172191
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
jijoong.moon [Fri, 12 Jun 2020 07:39:54 +0000 (16:39 +0900)]
[ Pooling2D ] Pooling 2D Layer
This PR provides skeleton of pooling 2d layer
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Parichay Kapoor [Fri, 12 Jun 2020 10:07:33 +0000 (19:07 +0900)]
[bug] error macros not defined
Corrected some of the error state macros which were wrongly defined
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Fri, 12 Jun 2020 09:24:38 +0000 (18:24 +0900)]
[bugfix] Ignoring error in training/validation
Errors in forward and backward propagation were ignored.
Added error handling for them.
Further, the model was being saved after validation, so
errors in validation would result in no model.
Changed to saving the model after training rather than after validation.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Fri, 12 Jun 2020 09:54:07 +0000 (18:54 +0900)]
[layers] set status in operations
Set status in forward and backward operations
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Fri, 12 Jun 2020 08:16:41 +0000 (17:16 +0900)]
[meson] Remove duplicated bits from test/meson
**Changes proposed in this PR:**
- Add conv2d_unittest to be unzipped for testing
- Add foreach loop to handle duplicated control flow
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Fri, 12 Jun 2020 07:40:26 +0000 (16:40 +0900)]
[loss] Added missing weight decay loss
Added missing weight decay loss into final loss for intermediate layers
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Fri, 12 Jun 2020 06:04:55 +0000 (15:04 +0900)]
Fix numeric limits in `Tensor`
**Changes proposed in this PR:**
- Add `static constexpr min / max` to Tensor
- Change normalization to use the predefined method for acceleration
Resolves #141
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Fri, 12 Jun 2020 07:00:37 +0000 (16:00 +0900)]
[LazyTensor] Fix memcopy happening on lambda func
Fix unintentional memcpy occurring in the lambda capture clause
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Fri, 12 Jun 2020 06:50:55 +0000 (15:50 +0900)]
Add blas operation to `Tensor::l2norm`
The current (plain) implementation of `l2norm` is not
efficient and has a potential overflow problem.
**Changes proposed in this PR:**
- Add blas operation to `Tensor::l2norm`
- Add fixme
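A minimal sketch of delegating the norm to BLAS, assuming a flat float buffer (the actual Tensor member layout may differ); `snrm2` scales internally, which also avoids the overflow risk of a naive sum of squares:
```cpp
#include <cblas.h>
#include <vector>

float l2norm_blas(const std::vector<float> &data) {
  return cblas_snrm2(static_cast<int>(data.size()), data.data(), 1);
}
```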
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
jijoong.moon [Fri, 12 Jun 2020 01:27:47 +0000 (10:27 +0900)]
[ Unit Test ] Conv2D Forwarding Unit Test
This PR provides Conv2D Forwarding Unit Test.
. update python script to generate random input / kernel / golden Data
. update unit test code for conv2d & evaluate results
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Parichay Kapoor [Thu, 11 Jun 2020 04:06:45 +0000 (13:06 +0900)]
[tensor] Added axis dimension to average
Current implementation of average() is confusing when compared with sum().
Both sum() and average() do not take axis, and perform similar operations.
However, the semantics of their output is different -
- sum() acts over the dimensions other than batch size
- average() acts over the batch size
To avoid this confusion, average is provided with axis argument with 0 as default.
This is now analogous to sum(axis) rather than sum().
Further, sum() is renamed to sum_by_batch(), as that's what it does and it is different from sum(axis).
V2:
Applied the above for lazy_tensor
Minor updates to TensorDim are also made
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Thu, 11 Jun 2020 08:59:26 +0000 (17:59 +0900)]
Add openmp parallelism to l2norm()
**Changes proposed in this PR:**
- Fix: `meson -Denable-blas=false` causes a 'no dependency found' error
- Fix: sum() function bug when `USE_BLAS` is not defined
- Add openmp dependency
- Apply openmp to l2norm function
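A sketch of what the OpenMP path could look like for the plain (non-BLAS) build, assuming a flat float buffer; each thread accumulates a partial sum of squares and the reduction combines them:
```cpp
#include <cmath>
#include <vector>

float l2norm_omp(const std::vector<float> &data) {
  double sum = 0.0;
#pragma omp parallel for reduction(+ : sum)
  for (int i = 0; i < static_cast<int>(data.size()); ++i)
    sum += static_cast<double>(data[i]) * data[i];
  return static_cast<float>(std::sqrt(sum));
}
```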
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Wed, 10 Jun 2020 09:39:11 +0000 (18:39 +0900)]
Optimizing concept for Tensor operation
**Changes proposed in this PR:**
- `*_i` operations are used to calculate without memcopy
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
jijoong.moon [Wed, 10 Jun 2020 05:18:25 +0000 (14:18 +0900)]
[ Conv2D ] forwarding calculation
This PR provides forwarding calculation of conv2d layer
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Jihoon Lee [Wed, 10 Jun 2020 07:50:58 +0000 (16:50 +0900)]
Register Tensor ops to lazy tensor
**Changes proposed in this PR:**
- Register tensor ops to `LazyTensor`
- Add test fixture to `LazyTensor`
- Add tests to `LazyTensor`
- Change `Tensor::apply` signature to permit closures
**Changes proposed in V2:**
- Delete and fix unnecessary comments
- Change `Tensor::getData` signature
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
jijoong.moon [Wed, 10 Jun 2020 05:14:55 +0000 (14:14 +0900)]
[ Test ] Input Generator
This PR provides a layer input generator to evaluate the layer
calculation.
[ To Do ]
Golden Test Result Generation should be added.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Thu, 11 Jun 2020 07:00:57 +0000 (16:00 +0900)]
[ CI ] Add Build Dependency flatbuffers-dev
For Ubuntu, add flatbuffers-dev.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Wed, 10 Jun 2020 00:25:36 +0000 (09:25 +0900)]
[ Conv2d ] save & read unittest
Add save & read unit test
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Tue, 9 Jun 2020 11:12:01 +0000 (20:12 +0900)]
[ Conv2d ] save & read weight from file
This PR provides reading & saving the Kernel and Bias.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Jihoon Lee [Tue, 9 Jun 2020 12:49:49 +0000 (21:49 +0900)]
[Tensor] Add divide_i / multiply_i (Tensor)
**Changes proposed in this PR:**
- Add `divide_i(Tensor)`
- Add `multiply_i(Tensor)`
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Mon, 8 Jun 2020 10:48:50 +0000 (19:48 +0900)]
[Tensor] Add `subtract_i(Tensor)` operator
**Changes proposed in this PR:**
- Add subtract_i(Tensor) operator for a memcpyless operation
- Lint unformatted lines
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
jijoong.moon [Tue, 9 Jun 2020 10:34:57 +0000 (19:34 +0900)]
[ Refactor ] Remove unused function in layer class
There is no need for initialize(...) to exist in the layer class.
This PR removes this unused function.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Mon, 8 Jun 2020 04:37:30 +0000 (13:37 +0900)]
[ Conv2D ] initialize Conv2DLayer
During initialize,
1. set input & output dimension
2. set Tensor for Kernel and bias
setInputDimension() function should be called before this is called.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Jihoon Lee [Tue, 9 Jun 2020 05:06:17 +0000 (14:06 +0900)]
Add multiply_i / divide_i operator
**Changes proposed in this PR:**
- Add `multiply_i(float)`
- Add `divide_i(float)`
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Mon, 8 Jun 2020 10:26:49 +0000 (19:26 +0900)]
[Tensor] Add `subtract_i(float)` operator
**Changes proposed in this PR:**
- Add subtract_i operator
- Revise subtract operator
- Change precision in add_i / subtract_i test
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Mon, 8 Jun 2020 01:46:24 +0000 (10:46 +0900)]
Add LazyTensor to support lazy evaluation
This PR propose 2 concepts.
1. Propose `*_i` type operations for `Tensor`, which mutate the target
tensor instead of memcopying it. (511c)
2. Add chaining for lazy & memcopyless operation.
For example Tensor could do the operation in such manner:
```cpp
Tensor t;
t.chain() /* Initial memcpy happens to guarantee immutability */
.add_i(x)
.multiply_i(y) /* NYI */
.divide_i(y) /* NYI */
.run()
```
Operations are delayed until `.run()` is called. This is done by a
monadic object named `DefferedTensor`.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
[Tensor] Add add_i for memcpyless operation
This PR proposes the `*_i()` operation for Tensor to support memcpyless
operation.
**Changes proposed in this PR:**
- Add add_i for memcpyless operation
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Mon, 8 Jun 2020 04:56:14 +0000 (13:56 +0900)]
[Tensor] Add `add_i(Tensor T)` operator
**Changes proposed in this PR:**
- add_i(Tensor T) operator for memcopyless operation
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
jijoong.moon [Fri, 5 Jun 2020 08:22:41 +0000 (17:22 +0900)]
[ Refactor ] Use input dimension and output dimension
Currently each layer handles just one TensorDim for the weight. However,
there are more complicated cases to support. Therefore each layer now has
TensorDim instances to specify the input and output activation dimensions.
The original dim variable of the layer is used to define the weight dimension.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Thu, 4 Jun 2020 07:26:54 +0000 (16:26 +0900)]
[ Refactor ] Modify to deal with 3D Tensor
Until now, nntrainer could not handle a 3D Tensor including channel; it only
supported a 2D Tensor, i.e. batch, height, width.
From this PR, nntrainer can handle a 3D Tensor including channel, but more
optimization is required.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Wed, 3 Jun 2020 05:37:32 +0000 (14:37 +0900)]
[ Layer ] Set Property for Conv2D Layer
. parse and set Convolution 2D Property
0. input shape : string
1. bias zero : bool
4. activation : string (type)
6. weight_decay : string (type)
7. weight_decay_lambda : float
9. filter : int
10. kernel_size : ( n , m )
11. stride : ( n, m )
12. padding : valid | same
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Mon, 1 Jun 2020 12:20:11 +0000 (21:20 +0900)]
[ Layer ] Draft of Conv2D Layer
This is the first draft of 2d convolution layer
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Jihoon Lee [Fri, 5 Jun 2020 06:56:44 +0000 (15:56 +0900)]
Reduce duplicated function call
**Changes proposed in this PR:**
- Reduce `average()` calls in `Optimizer::calculate`
- Delete the setZero call after constructing a `Tensor`
- Reduce indexing and multiplication in few loops
This PR results in better performance, including a roughly 60% decrease
in time spent in `Tensor::average()` in the
Classification example (checked in VTune).
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
jijoong.moon [Wed, 3 Jun 2020 01:10:03 +0000 (10:10 +0900)]
[ Refactor ] Manage unit tests with separate files
The unit test code is too big to manage, so make a separate file for each
class / group of classes:
. unittest_nntrainer_internal
: test neural network / optimizer
. unittest_nntrainer_tensor
: test tensor/tensorDim
. unittest_nntrainer_layer
: test layers
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Jihoon Lee [Thu, 4 Jun 2020 05:46:19 +0000 (14:46 +0900)]
[Example] Add exit when naviframe is empty
**Changes proposed in this PR:**
- Add and use _on_back_pressed_cb when back button is pressed
- Change the `view_routes_to` signature to return the naviframe item.
- Delete unused functions
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
jijoong.moon [Tue, 2 Jun 2020 03:45:55 +0000 (12:45 +0900)]
[ Bug ] possible corruption by double erase
There is a possible double erase on a vector due to duplicate random
number generation.
In this PR, remove the use of random numbers and erase just once.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Tue, 2 Jun 2020 01:31:37 +0000 (10:31 +0900)]
[ Refactor ] Layer Method for weight initialization
The weight initialization method could not be used by other layers.
In this PR, move it into the Layer class so other layers can use it.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Mon, 1 Jun 2020 12:17:34 +0000 (21:17 +0900)]
[ Refactor ] Tensor has TensorDim
Currently the Tensor class handles the dimension with its own data
structure.
It is better to use TensorDim for this, for consistency with other
classes of NNTrainer.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Mon, 1 Jun 2020 04:50:14 +0000 (13:50 +0900)]
[ Example ] Tizen C API Train with Generator Example
Demo example to train with the Tizen C Train API with a generator.
Add a simple example to train a 1 FC Layer neural network and get the
train & validation data from a generator created by the user.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Tue, 2 Jun 2020 22:25:37 +0000 (07:25 +0900)]
[ CI ] Fix wrong ci url
Recently we moved the CI server, and in some places the change was not
applied.
In this PR, change it in the README.md file.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Jihoon Lee [Fri, 29 May 2020 06:49:17 +0000 (15:49 +0900)]
[Example] Change widget app to ui app
**Changes proposed in this PR:**
- The scaffolding was a widget app, but a widget app has a
naviframe limitation. This PR ports the widget app to a UI app.
**Minor changes proposed in this PR:**
- Add handler for 'route/to' signal.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
jijoong.moon [Tue, 19 May 2020 08:09:43 +0000 (17:09 +0900)]
[API] Training API with generator
There is no API to get training, validation, and test data from a generator
function.
In this PR, a new training API with a generator function is introduced.
. New Tizen API.
. Add Unit Test API with generator function.
. Add Test generator for future test.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Jihoon Lee [Wed, 27 May 2020 08:57:47 +0000 (17:57 +0900)]
[Example] Add basic layout with edc file
**Changes proposed in this PR**
- Add scaffolding layouts to be used to construct multiple pages
- Change group "main" -> "view"
- Update .gitignore to ignore tizen studio generated files
- v2: Add simple program to determine button focus
- v2: Add signal emission to the program when clicked
**Self evaluation:**
1. Build test: [ ]Passed [ ]Failed [X]Skipped
2. Run test: [ ]Passed [ ]Failed [X]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
jijoong.moon [Tue, 26 May 2020 01:08:28 +0000 (10:08 +0900)]
[ Fix ] Fix potential memory leak
There is a memory leak at the last iteration of an epoch. It runs the data
buffer again and exits without freeing the memory.
In this PR, this issue is fixed.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Fri, 22 May 2020 10:02:09 +0000 (19:02 +0900)]
Using 1D array to get the data from user function
It is not good to get the user data in a 3D std::vector format; C does
not support it. Therefore it is much better to get the data as a 1D
float array (float *). In this PR, the function pointer is re-defined to get
data in float * format.
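A sketch of the shape of such a callback (the names and exact signature here are hypothetical, not the actual API): the user fills pre-allocated flat float buffers instead of building a 3D std::vector.
```cpp
// Hypothetical generator callback type.
typedef int (*nn_data_gen_cb)(float *input, float *label, bool *last);

// Hypothetical generator: fills one sample and signals whether data remains.
static int gen_train_data(float *input, float *label, bool *last) {
  for (int i = 0; i < 28 * 28; ++i)
    input[i] = 0.0f; // feature values (placeholder)
  label[0] = 1.0f;   // one-hot label (placeholder)
  *last = false;     // more data remains
  return 0;          // 0 == success
}
```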
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Fri, 22 May 2020 02:46:48 +0000 (11:46 +0900)]
Use TensorDim for databuffer
change databuffer class to use TensorDim Class for input
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Fri, 22 May 2020 01:33:33 +0000 (10:33 +0900)]
Split Tensor Dim Class from Tensor
Split Tensor Dim Class from Tensor for better usability
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Jihoon Lee [Wed, 20 May 2020 07:32:10 +0000 (16:32 +0900)]
Fix possible error that causes double free
There is an occasional error from unittest_tizen_capi.
```bash
[ RUN ] nntrainer_capi_nnmodel.train_with_file_01_p
training
data_buffer made
data_buffer made done
size: 2
data_buffer clear
data_buffer end
double free or corruption (out)
Aborted (core dumped)
```
**Changes proposed in this PR:**
Join the data buffer threads before clearing the vectors that are
used in the target function, to (possibly) prevent the double free.
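A minimal sketch of the ordering fix (illustrative, not the actual DataBuffer code): worker threads are joined before the shared vectors they read are cleared.
```cpp
#include <thread>
#include <vector>

void shutdown(std::vector<std::thread> &workers,
              std::vector<float> &shared_data) {
  for (auto &t : workers)
    if (t.joinable())
      t.join();        // no thread may still be reading shared_data
  shared_data.clear(); // only now is it safe to release the buffers
}
```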
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Sangjung Woo [Tue, 19 May 2020 07:13:03 +0000 (16:13 +0900)]
[Unit Test] Add checkValidation() test case of each Layer class
This patch adds checkValidation() method test case of each Layer class.
Signed-off-by: Sangjung Woo <sangjung.woo@samsung.com>
Sangjung Woo [Tue, 19 May 2020 07:09:29 +0000 (16:09 +0900)]
[Layer] Set layer type in constructor
Even though a layer instance is created, its type property is not set
properly, so it defaults to LAYER_UNKNOWN. Because of this,
the checkValidation() method does not work properly. This patch fixes that
bug.
Signed-off-by: Sangjung Woo <sangjung.woo@samsung.com>
jijoong.moon [Tue, 19 May 2020 01:59:42 +0000 (10:59 +0900)]
[ Example ] Test Classification Example using C-APIs
Add Classification example using C-APIs.
. Using data files to train and validate.
. The data are stored in the following order in binary format:
'Features label Features label Features label ...'
. There are two layers; one is the input layer and the other is an fc
layer.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Jihoon Lee [Fri, 15 May 2020 10:05:32 +0000 (19:05 +0900)]
[Example] Add edc layout to the scaffolding
**Changes proposed in this PR:**
- Add main.edc
- Add view.[ch]
- Add data.h
- Change main initialization to use the view function
- Move data definitions to data.h
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
jijoong.moon [Mon, 18 May 2020 01:08:48 +0000 (10:08 +0900)]
[ API ] Add a check of the buffer size & maximum data
Add a check of the buffer size against the maximum data, and reset.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Fri, 15 May 2020 08:40:52 +0000 (17:40 +0900)]
[ API ] Train API with data files
Now two options for training are available.
- ml_nnmodel_train_with_file(ml_nnmodel_h)
: Assumes all parameters are set from the configuration and data
files ( Training : mandatory , Validation : optional, Test :
optional )
- ml_nnmodel_train_with_file(ml_nnmodel_h, ...)
: Assumes APIs are used to set up the network configuration, and the
network hyperparameters are given by arguments (key = value
format )
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Jihoon Lee [Fri, 15 May 2020 05:18:00 +0000 (14:18 +0900)]
Add scaffolding for tizen native example app
This PR adds an anchor point for a Tizen native example app.
More specifically, it contains a Tizen wearable widget that prints out 'hello world'.
I am planning to build a Tizen wearable app that trains on user drawings labeled
with a specific unicode character, but this is subject to change at the moment.
Self evaluation:
Build test: [X]Passed [ ]Failed [ ]Skipped
Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
jijoong.moon [Thu, 14 May 2020 06:04:55 +0000 (15:04 +0900)]
[ API ] neural network model compile API
- add neural network model compile API with optimizer & hyper
parameter list
- ml_nnmodel_compile --> ml_nnmodel_compile_with_conf
- Change the Weight Init Property to a Layer Property
: Delete the wini parameter in Initialize()
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Sangjung Woo [Thu, 14 May 2020 04:41:21 +0000 (13:41 +0900)]
[Layer] Refactoring Layers class
* Separate the single Layers file into several files by role.
* Clean up unused headers and internal functions
* Rename files (layers.[h|cpp] -> layer.[h|cpp])
* Increase the patch version of the debian package since header files are added
Signed-off-by: Sangjung Woo <sangjung.woo@samsung.com>
jijoong.moon [Thu, 14 May 2020 01:20:13 +0000 (10:20 +0900)]
[UNIT] add & modify unit test for optimizer and addlayer
- add addLayer & optimizer unit test cases
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Thu, 14 May 2020 01:18:25 +0000 (10:18 +0900)]
[api] using "unit" key to get the dimension for fc layer
Instead of using input_shape to get the fc layer dimension, it is
simpler to get the output dimension using the unit keyword.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Wed, 13 May 2020 23:35:48 +0000 (08:35 +0900)]
[Fix] fix double free corruption
remove new & delete when generating input data from an image
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Wed, 13 May 2020 01:14:35 +0000 (10:14 +0900)]
[API] change key, value format for the property
Instead of using multiple strings to describe a key-value pair, it is
more intuitive to combine them with "=".
for example,
ml_*_set_property (handle, "beta1=0.332", "beta2=0.001", NULL)
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Sangjung Woo [Wed, 13 May 2020 01:06:34 +0000 (10:06 +0900)]
[API] Add updateLoss method
This patch newly adds updateLoss method into FullyConnectedLayer class
to remove redundant code.
Signed-off-by: Sangjung Woo <sangjung.woo@samsung.com>
jijoong.moon [Tue, 12 May 2020 23:23:14 +0000 (08:23 +0900)]
Make headers as system header
Make system headers as system headers.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Tue, 12 May 2020 06:19:49 +0000 (15:19 +0900)]
[API] set Optimizer Property
. ml_nnoptimizer_set_property(ml_nnopt_h opt, ...)
: the variable parameters MUST end with NULL.
. Properties:
enum class PropertyType {
learning_rate = 0,
decay_rate = 1,
decay_steps = 2,
beta1 = 3,
beta2 = 4,
epsilon = 5,
};
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Tue, 12 May 2020 01:14:11 +0000 (10:14 +0900)]
[API] create/delete neural network optimizer
create / delete optimizer
- type : sgd, adam, unknown
. ml_nnoptimizer_create(ml_nnopt_h opt, const char *type)
. ml_nnoptimizer_delete(ml_nnopt_h)
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Mon, 11 May 2020 04:03:48 +0000 (13:03 +0900)]
[API] Weight Decay into Layer Class
Make the Weight Decay parameter part of the Layer class. From now on, weight decay
can be defined layer by layer.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>