Jihoon Lee [Tue, 14 Jul 2020 09:38:15 +0000 (18:38 +0900)]
Refactor nntrainer load from config
`loadFromConfig` duplicates much of the logic in `Layer::setProperty` and
others.
This PR refactors `loadFromConfig` to reuse the already-present logic.
**Changes proposed in this PR:**
- Add `std::out_of_range` exception to setProperty to represent validity
- Change error to warning when input_dim is not present at the head of
the network (for ini)
- Change `weightIni` -> `weight_ini` in `ini` for consistency
- Change unittest accordingly
- Separate dataset, network parser
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Wed, 15 Jul 2020 08:00:50 +0000 (17:00 +0900)]
[common-api] Add common api header
Add common api header for nntrainer.
This common header includes declarations common to all the APIs
and is used in nntrainer as well.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Tue, 14 Jul 2020 04:30:09 +0000 (13:30 +0900)]
[API] Update C-API dataset handling
Update C-API to add dataset handling
Changes added to data buffer:
- Bug fix for the data buffer using already-freed memory
- Data buffer set-properties moved out from neural network
- Data buffer generator now takes a list of arrays for multiple inputs
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
jijoong.moon [Mon, 13 Jul 2020 06:52:44 +0000 (15:52 +0900)]
[ Application ] mnist with tensorflow
This PR includes:
. MNIST Tensorflow example to compare with nntrainer.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Jihoon Lee [Thu, 9 Jul 2020 00:33:15 +0000 (09:33 +0900)]
[CAPI] Add ml_nnmodel_get_summary
**Changes proposed in this PR:**
- Add ml_nnmodel_get_summary
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Mon, 13 Jul 2020 03:41:11 +0000 (12:41 +0900)]
[API] Update API namespace and use enum for optimizer
Setting the optimizer now uses an enum rather than a string.
The namespace of the layers has been updated.
Below is a summary of the updates:
ml_nnmodel_* -> ml_train_model_*
ml_nnlayer_* -> ml_train_layer_*
ml_nnopt_* -> ml_train_optimizer_*
*_delete() -> *_destroy()
ml_nnmodel_train_with_file() and ml_nnmodel_train_with_generator() have been kept back.
These will be updated in the upcoming PR where the dataset interface for the API is updated.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Tue, 14 Jul 2020 09:19:37 +0000 (18:19 +0900)]
[error code] Common error codes re-define issue
Many ML_ERROR error codes are defined twice, in nntrainer/include/nntrainer_error.h and api/capi/include/platform/ml-api-common.h.
This patch reuses the definitions from ml-api-common.h in nntrainer_error.h.
This allows nntrainer.h to be included in source code, as it contains certain callback declarations which are used in the library.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Tue, 14 Jul 2020 09:06:03 +0000 (18:06 +0900)]
[data buffer] Bug fix for data buffer
data buffer updateData() passes a reference to an on-stack variable,
which is destroyed after the function call exits. However, the callee
keeps running on a new thread, causing problems.
Added a bug fix for this.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Tue, 14 Jul 2020 04:01:49 +0000 (13:01 +0900)]
Fix NeuralNetwork::finalize
Deleting a layer in finalize used the wrong index. This PR fixes the issue.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Mon, 13 Jul 2020 01:34:51 +0000 (10:34 +0900)]
Add layer print
**Changes proposed in this PR:**
- Add print option to layer
- Add `Layer::print` function
- Add a hook point to the print function for derived layers
- Add `ostream &operator<<` for the Layer type
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
jijoong.moon [Fri, 10 Jul 2020 11:28:44 +0000 (20:28 +0900)]
[ API ] capi-nntrainer packaging
This PR includes:
. rpm packaging of capi-nntrainer for tizen
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Dongju Chae [Tue, 14 Jul 2020 08:21:17 +0000 (17:21 +0900)]
[Fix/Coverage] Fix the coverage generator to use python3
This patch fixes the coverage generator to use python3.
It seems that the Python2 package was removed in nntrainer.spec.
Signed-off-by: Dongju Chae <dongju.chae@samsung.com>
Jihoon Lee [Fri, 10 Jul 2020 07:17:44 +0000 (16:17 +0900)]
Add setProperty for each property type
Since Layer::setProperty iterates through vectors, the only part that
needs overriding is setting each particular property.
This PR adds setProperty by type to relieve the issue.
w.r.t. #270, this patch enables layers to check whether a given property
type is valid for the user.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Fri, 10 Jul 2020 04:33:48 +0000 (13:33 +0900)]
Add throw_status to map status to throw
A few functions use C-style status codes; this PR adds scaffolding around
the C-style status to throw `std::exception`s instead.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Mon, 13 Jul 2020 04:09:22 +0000 (13:09 +0900)]
[API] Bug fix in C-API get_layer
The ml_nnmodel_get_layer function was adding another layer to struct NeuralNetwork if the layer wasn't found in layers_map.
This function is supposed to only add the layer to layers_map, not modify struct NeuralNetwork.
This PR applies the above bug fix.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Thu, 9 Jul 2020 11:21:30 +0000 (20:21 +0900)]
Refactor bn layer test fixture
**Changes proposed in this PR:**
- The BN layer tf fixture is merged into the BNLayer fixture
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Thu, 9 Jul 2020 10:01:49 +0000 (19:01 +0900)]
Update example to c++14 and enable exceptions
**Changes proposed in this PR:**
- Update /Application/jni to use c++14 and enable exceptions
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Tue, 7 Jul 2020 12:44:59 +0000 (21:44 +0900)]
Add UpdatableParam to manage weights / gradients
**Changes proposed in this PR:**
- Add `UpdatableParam`
- Change `Optimizer::apply_gradient` signature
- Attach `UpdatableParam` to manage weights
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Mon, 6 Jul 2020 05:51:59 +0000 (14:51 +0900)]
[API] Search layer by name in C-API
Support searching of layer by name in C-API
This is part of the more changes to support this functionality
Related issue - #260
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
jijoong.moon [Thu, 2 Jul 2020 10:08:18 +0000 (19:08 +0900)]
[ MNIST ] mnist application
This PR provides mnist application which includes:
. 1 Input Layer
. 2 Conv2d
. 2 Pooling2d
. 1 Flatten
. 1 Fully Connected Layer
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Thu, 9 Jul 2020 12:19:01 +0000 (21:19 +0900)]
Fix to run validation process during train
Currently validation does not work, because the dimension of the hidden
tensor is different if we use one-batch input.
This PR handles the issue.
Self evaluation:
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Mon, 6 Jul 2020 01:24:10 +0000 (10:24 +0900)]
[ Unit Test ] Generate Tensorflow output and comparison
This PR provides:
. Generate Tensorflow output & gradient output for conv2d, pooling2d
. Compare with nntrainer outputs
. TODO : compare gradients after the getGradient func is implemented.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Parichay Kapoor [Wed, 8 Jul 2020 05:26:04 +0000 (14:26 +0900)]
[API] Update C-API semantics
Update C-API semantics for layer and optimizer.
While a layer or optimizer is in use, do not allow its deletion.
Further, once a layer or optimizer is set/added to a model, its ownership is transferred to the model.
So there is no need to delete them separately: deleting the model will delete them as well.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Wed, 8 Jul 2020 04:35:48 +0000 (13:35 +0900)]
[property] Restrict the properties set after compilation
Restrict the properties which can be set for the network after the compilation of the model has been done
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Tue, 7 Jul 2020 08:24:28 +0000 (17:24 +0900)]
[API] Update C-API
Update C-API and corresponding code changes
Major Changes:
1. ml_nnmodel_compile_with_conf() replaced with ml_nnmodel_construct_with_conf().
This new function loads the model from the config file but does not initialize.
ml_nnmodel_compile() should be called after ml_nnmodel_construct_with_conf() before training.
2. ml_nnmodel_compile() does not take optimizer as an input.
Rather, use ml_nnmodel_set_optimizer() to set the optimizer for the model.
3. init() from neuralnet has been renamed to loadFromConfig() and no longer initializes the model.
Rather, call init() after loadFromConfig() to initialize.
This also allows updating the model config after loading the model with loadFromConfig().
4. init(optimizer, args_list) has been replaced with init().
Rather, call setOptimizer(optimizer) to set the optimizer and
setProperty(args_list) to set the properties before calling init().
5. Bug fixes in checkValidation()
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Fri, 3 Jul 2020 10:02:48 +0000 (19:02 +0900)]
[tensor] Tensor constructor should not set zero
Tensor constructors should not zero to the memory by default
This hides many bugs in the code and also leads to big overheads
Removed setting zero in constructor and corresponding many bug fixes
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Tue, 7 Jul 2020 01:11:52 +0000 (10:11 +0900)]
Refactor layer unittest
This PR makes the layer unittest more concise and fixes some bugs.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Mon, 6 Jul 2020 10:12:02 +0000 (19:12 +0900)]
[property] Add name property for layer
Add a name property for layers.
Default names are added if a layer name is not given by the creator.
Layer addition should no longer be done by directly adding to layers.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Wed, 8 Jul 2020 01:29:14 +0000 (10:29 +0900)]
Enhance Tensor print
**Changes proposed in this PR:**
- Add tensor address to the print
- Handle the case when tensor is too large
- Add test
See also: #270
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Sangjung Woo [Thu, 9 Jul 2020 02:23:59 +0000 (11:23 +0900)]
[Spec] Fix the wrong Requires section
This patch fixes the wrong Requires for libopenblas.
Signed-off-by: Sangjung Woo <sangjung.woo@samsung.com>
Jihoon Lee [Wed, 24 Jun 2020 01:49:51 +0000 (10:49 +0900)]
Rework bn forward & backward
Rework the bn layer forward & backward passes and fix a few bugs.
This patch only includes the training passes.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Wed, 8 Jul 2020 02:36:23 +0000 (11:36 +0900)]
Format everyfile
Now CI checks formatting. This patch formats every existing *.cpp and *.h
file in the project.
See also: #271
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
jijoong.moon [Thu, 2 Jul 2020 09:43:49 +0000 (18:43 +0900)]
[ NETWORK ] initialize network for conv2d/pooling2d/flatten
This PR provides initialization of the conv2d, pooling2d, and flatten layers
from the training configuration file.
Also modified the optimizer to update multiple weights.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Parichay Kapoor [Mon, 6 Jul 2020 10:05:25 +0000 (19:05 +0900)]
[property] Set common properties in layer.h
Many layers have common properties which are defined in layer.h.
As they are in layer.h and are expected to be used by most layers, let's set these properties in layer.h itself.
This removes a lot of redundancy.
If, in a rare case, some property must not be handled by a specific layer, handle it as an error in that layer itself.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Tue, 7 Jul 2020 02:44:18 +0000 (11:44 +0900)]
Fix potential bug in pooling 2D layer
When pooling2d_layer is initialized once with the defaults
and initialized again as `global_max` | `global_pooling`,
`output.width` and `output.height` are set to 2.
Although this scenario is highly unlikely, initializing a layer
should be deterministic.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Fri, 3 Jul 2020 06:59:24 +0000 (15:59 +0900)]
[network] Add sensible defaults
Added sensible defaults for the network initialization
This covers layer, loss, and optimizer properties.
However, defaults are not added for some of the core properties because they should be input by the user.
V2:
Take fixes of conv2d from #249 by @jijoongmoon to make this PR work
Resolves #236
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Fri, 3 Jul 2020 07:08:19 +0000 (16:08 +0900)]
[layers] Add trainable feature
Add a trainable feature for each layer, which allows certain layers to simply not train without affecting others.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Fri, 3 Jul 2020 07:32:46 +0000 (16:32 +0900)]
[weight/gradients] Initialize Weights once only
Since layers only store weight references, store them once rather than doing it in every iteration.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Fri, 3 Jul 2020 07:02:21 +0000 (16:02 +0900)]
[git] ignore vscode configs and multiple vim backup files
ignore vscode configs and multiple vim backup files
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Thu, 2 Jul 2020 04:17:29 +0000 (13:17 +0900)]
[ini] update ini bias init and flatten as feature
The bias init name is changed to bias_init_zero to make it more readable.
flatten is now a layer feature externally, rather than a new layer itself.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Thu, 2 Jul 2020 10:55:02 +0000 (19:55 +0900)]
Prepare bn layer tc
This PR adds a python function that generates the bn_layer forward/backward
pass.
The return value's order & contents are subject to change for the time
being.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
**How to evaluate:**
Run python function
Jihoon Lee [Wed, 1 Jul 2020 06:14:56 +0000 (15:14 +0900)]
Restructure inner data inside Tensor
**Changes proposed in this PR:**
- Change the Tensor structure to enable sharing between Tensors
- Refactor Tensor ctors
- Add copy/move ctors & assignment operators
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Thu, 25 Jun 2020 10:23:53 +0000 (19:23 +0900)]
Add exception to TensorDim::setTensorDim
**Prerequisite**
- Add capi exception wrapper (later pr)
**Changes proposed in this PR:**
- Add `std::invalid_argument` to `TensorDim::setTensorDim`
- Fix the `fc_layer` positive test not having a unit declared.
- Fix tests accordingly.
- Add TensorDim test (positive && negative)
**Todo**
- ~Add TensorDim test (positive && negative)~
See also: #233
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Thu, 25 Jun 2020 12:34:08 +0000 (21:34 +0900)]
Add exception boundary to capi
**Changes proposed in this PR:**
- Add a functor that returns the `errno` corresponding to an `exception`
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Thu, 2 Jul 2020 04:42:26 +0000 (13:42 +0900)]
[activation] Add missing config for activation layer
Add missing initialization and setting input/output dimension for activation layer
Also updated setting of the previous dimension to come from the last computation layer.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Mon, 29 Jun 2020 08:34:30 +0000 (17:34 +0900)]
Add ml-api-common to capi
**Changes proposed in this PR:**
- Add capi-ml-common-devel to spec file
- Add `ml-api-common.h` for dummy
- Change error code accordingly
Resolves #75
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
jijoong.moon [Tue, 30 Jun 2020 00:05:28 +0000 (09:05 +0900)]
[ Pooling2D ] unittest cases for backwarding
This PR provides unittest cases for backwarding of global_max &
global_average:
. global_max : forwarding / backwarding
. global_average : forwarding / backwarding
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Parichay Kapoor [Fri, 26 Jun 2020 05:29:31 +0000 (14:29 +0900)]
[ini] Update ini parsing and format
Update the parsing and format of ini input file
Remove the unnecessary declaration of the layers at the top of the ini file
Adding corresponding bug fixes and updates to unittests
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Wed, 24 Jun 2020 15:02:05 +0000 (00:02 +0900)]
[unittest] generate fc unittests
Added generated forward-unittest data for the fully connected layer, and corresponding unittests.
Added minor bug fixes as well.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Thu, 25 Jun 2020 11:52:21 +0000 (20:52 +0900)]
[genInput] Update generation of input for fc
Update generation of input for the fc layer forward pass.
Backward is also supported, however it is not yet used.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Thu, 25 Jun 2020 11:49:48 +0000 (20:49 +0900)]
[genInput] Make input data reproducible
Make input data generation reproducible with fixed seeding
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Fri, 26 Jun 2020 09:51:43 +0000 (18:51 +0900)]
[optimizer] Bug fix
The Tensor copy constructor and copy assignment operator create a copy of the vector.
This led to a bug in the optimizer, which updated the copy of the weight rather than the weight itself.
Fixed by taking a reference to the weight in the optimizer.
Resolves #241
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
jijoong.moon [Thu, 25 Jun 2020 07:02:34 +0000 (16:02 +0900)]
[ Pooling2D ] global max / global average
This PR provides global max / global average.
. forwarding global_max / global_average
. backwarding global_max / global_average
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Parichay Kapoor [Thu, 25 Jun 2020 12:40:43 +0000 (21:40 +0900)]
[layer] Support get weights
Support getWeights functionality, equivalent to getGradients
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Wed, 24 Jun 2020 06:46:13 +0000 (15:46 +0900)]
[gradients] Get gradient of each layer
Added function to get gradient of each layer
Each layer is responsible for filling in the list containing the gradients
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Thu, 25 Jun 2020 02:25:02 +0000 (11:25 +0900)]
Add tensor save / read test
This PR adds a tensor save / read test.
Though not directly related to this PR, save & read
need better error handling.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Wed, 24 Jun 2020 05:08:28 +0000 (14:08 +0900)]
[optimizer] Optimizer to manage list of weights
Update the optimizer to handle a list of weights rather than weights and bias individually,
as layers like RNN and LSTM will have more than just (w, b).
V2:
calculate changed to apply_gradient, as it applies gradients;
applied for the conv layer as well
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
jijoong.moon [Wed, 24 Jun 2020 10:44:21 +0000 (19:44 +0900)]
[ Conv2D ] fix calculate parameter of optimizer
fix calculate function parameters
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Parichay Kapoor [Wed, 24 Jun 2020 05:56:59 +0000 (14:56 +0900)]
[activation] Derivative for activation
The derivative of softmax has been hand-crafted to be different from the others.
Refer to https://github.com/nnstreamer/nntrainer/blob/
2a650512813db6ce3bba828b5790066fbc655f14/nntrainer/src/fc_layer.cpp#L265 for the original implementation.
Softmax requires softmax(x) as input for its derivative, while other activations require x as input for theirs.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Wed, 24 Jun 2020 02:00:10 +0000 (11:00 +0900)]
Add stride and contiguous flag to tensor
**Changes proposed in this PR:**
- Add stride
- Add an is-contiguous check flag to tensor
This patch is an anchor for #217
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
jijoong.moon [Fri, 19 Jun 2020 13:38:42 +0000 (22:38 +0900)]
[ Conv2D ] backwarding
This PR provides backwarding of Convolution 2D Layer
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Jihoon Lee [Wed, 24 Jun 2020 06:44:37 +0000 (15:44 +0900)]
Fix optimizer signature
The optimizer signature no longer has `init_zero`.
This quick fix deletes init_zero from `fc_layer::backward`.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Wed, 24 Jun 2020 02:57:49 +0000 (11:57 +0900)]
[bias] Bias update missing for sgd
Fixed the bias update for sgd, where it only happened when the bias was initialized with 0.
For adam, the bias update was happening twice.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Wed, 24 Jun 2020 05:24:53 +0000 (14:24 +0900)]
[fc_layer] Add the deleted statement
Add the deleted statement about the weight update back to the fc layer.
Resolves #221
cc. @zhoonit
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Tue, 23 Jun 2020 05:08:56 +0000 (14:08 +0900)]
[tensor/tensor_dim] Added equal comparison operation
Added an equal comparison operation.
Currently it is based on a fixed epsilon.
Needs updating to use a variable number of bits based on the exponent, as done in gtest.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Wed, 24 Jun 2020 02:08:28 +0000 (11:08 +0900)]
Move weight_decay handling from opt to layer
**Changes proposed in this PR:**
- remove weight_decay from the `Optimizer::calculate` signature
- apply weight decay in fc_layer.cpp
Please note that conv2d_layer::backwarding also needs to handle weight
decay after this PR is merged.
Resolves #213
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Fri, 19 Jun 2020 10:31:16 +0000 (19:31 +0900)]
Attach activation layer to neuralnet.cpp
**Changes proposed in this PR:**
- strip the activation-function-related members from `layer`
- init the activation property as an `activation_layer`
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
resolves #153
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Mon, 22 Jun 2020 08:07:35 +0000 (17:07 +0900)]
[Fix/jni] Change tflite-dependency
There seems to be a problem with building tensorflow. This PR proposes using a
prebuilt tensorflow-lite instead of building one.
**Changes proposed in this PR:**
- Change `Applications/android.mk` to use prebuilt library
- Change `prepare_tflite.sh`
- Bump tflite to 1.13.1 as suggested in #20
Resolves #207
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Tue, 23 Jun 2020 01:39:07 +0000 (10:39 +0900)]
Make .clang-format compatible with version 6
Clang-format 6 is widely used. However,
`AllowAllConstructorInitializersOnNextLine`, added in #203, is only supported
from clang-format 9.
This PR reverts the use of `AllowAllConstructorInitializersOnNextLine`
while keeping a similar linting style.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Fri, 19 Jun 2020 08:29:45 +0000 (17:29 +0900)]
[meson] Arrange file order
Arrange the file order for easier access
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Thu, 18 Jun 2020 11:10:43 +0000 (20:10 +0900)]
[loss] Combine softmax with cross entropy
Softmax is combined with cross entropy.
Cross entropy exists on its own as an option but isn't supported.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Thu, 18 Jun 2020 08:57:41 +0000 (17:57 +0900)]
[loss] Combined cross entropy with sigmoid
Combined cross-entropy with sigmoid for the loss because of its higher stability.
Note that this happens internally and is not exposed outside
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Fri, 19 Jun 2020 07:41:45 +0000 (16:41 +0900)]
[Proposal] clang format for initializer list
The previous initialization list was hard to read when it got too long,
e.g. the layers' initialization list.
before:
```cpp
Layer()
: last_layer(false), init_zero(false), type(LAYER_UNKNOWN),
activation(NULL), activation_prime(NULL), activation_type(ACT_UNKNOWN),
bn_follow(false), weight_decay(), weight_ini_type(WEIGHT_UNKNOWN) {}
```
after:
```cpp
Layer()
: last_layer(false),
init_zero(false),
type(LAYER_UNKNOWN),
activation(NULL),
activation_prime(NULL),
activation_type(ACT_UNKNOWN),
bn_follow(false),
weight_decay(),
weight_ini_type(WEIGHT_UNKNOWN) {}
```
**Self evaluation:**
1. Build test: [ ]Passed [ ]Failed [x]Skipped
2. Run test: [ ]Passed [ ]Failed [x]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
jijoong.moon [Wed, 17 Jun 2020 12:30:33 +0000 (21:30 +0900)]
[ Pooling2D ] backwarding
This PR provides backwarding process of Pooling 2D.
. backwarding for max pooling 2D
. backwarding for average pooling 2D
. backwarding global_max, global_average is NYI.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Jihoon Lee [Thu, 18 Jun 2020 06:45:14 +0000 (15:45 +0900)]
Add test for activation layer(Wait for #187)
**Changes proposed in this PR:**
- Add test for activation layer
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Thu, 18 Jun 2020 10:15:19 +0000 (19:15 +0900)]
Separate activation to layer
**Changes proposed in this PR:**
- add activation_layer.[h|cpp]
- add test to activation_layer
See also #153, #152
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Fri, 19 Jun 2020 08:15:30 +0000 (17:15 +0900)]
Fix bug in `Tensor::setValue`
`memset` can't be used to initialize a float array as explained
[here](https://stackoverflow.com/questions/1040070/initializing-a-float-array-with-memset)
**Changes proposed in this PR:**
- change `memset` to `std::fill`
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Thu, 18 Jun 2020 09:10:00 +0000 (18:10 +0900)]
[warning fix] unsigned int compare with int warning
Fix warnings from comparing unsigned int values with signed values
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Thu, 18 Jun 2020 06:10:22 +0000 (15:10 +0900)]
[parse] Parse unknown properties
The properties exposed to users and the internal ones are different (losslayer, etc).
Hence, using `*_string.size() - 1` for unknown cases will cause bugs in parse_util.
Replaced it with its own individual unknown value.
V2:
Combined all individual layer properties into common properties in layer.h
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
jijoong.moon [Wed, 17 Jun 2020 10:00:07 +0000 (19:00 +0900)]
[ Flatten ] backwarding
This PR includes back propagation of Flatten Layer.
. backwarding Flatten Layer.
. batch(), channel(), width(), height(), setDim() of tensor
. unit test of flatten layer
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Parichay Kapoor [Thu, 18 Jun 2020 04:46:04 +0000 (13:46 +0900)]
[bugfix] Bug fix with git merge
A git merge brought in new commits causing build errors.
Needs an urgent merge.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Wed, 17 Jun 2020 06:14:05 +0000 (15:14 +0900)]
[neuralnet] Handle adding layer with compiled model
This PR handles the case when a layer is added after a model has been compiled.
Currently it is set to give an error.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Wed, 17 Jun 2020 04:26:59 +0000 (13:26 +0900)]
[API] simplify API
Remove the exposed function "model_construct_with_conf", as "compile_with_conf" then looks strange without any config.
Rather, just keep one "model_construct"; "compile_with_conf" can take the config file.
Updated corresponding unittests
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Tue, 16 Jun 2020 10:12:04 +0000 (19:12 +0900)]
[save/load] save/load the optimizer parameters
save and load the optimizer parameters as well for continued training
Add an option in the neural network to continue training from a previous run
Resolves #172
V2:
setOptimizer() bug fix: it is now applied only to the fc layer and not to other layers
setOptimizer() for fc_layer now differs from the virtual defined by its parent
added continued training property for optimizer
added getSize in tensor
save optimizer type to verify that the loaded optimizer values can be used sensibly
if not loading, move the file seek ahead
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Mon, 15 Jun 2020 09:00:27 +0000 (18:00 +0900)]
[fc_layer] Initialization bug fix
Add missing initialization of unit at object construction
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Mon, 15 Jun 2020 04:19:51 +0000 (13:19 +0900)]
[layer] Added loss layer
Added loss layer
This is added by the framework and is hidden from the user
This separates all the cost/loss related extra work from the layers
However, loss and cost are now available for and from all layers
Resolves #101
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
jijoong.moon [Tue, 16 Jun 2020 04:15:56 +0000 (13:15 +0900)]
[ Flatten ] forwarding / copy for Flatten Layer
This PR provides the forwarding/copy function of Flatten Layer
. implement forwarding function
. implement copy function
Resolves:
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Parichay Kapoor [Tue, 16 Jun 2020 09:10:21 +0000 (18:10 +0900)]
[init] Making random deterministic
Adding determinism to the random number generators in the library
DataBuffer uses multiple threads, but train/valid/test each run on a single
thread in sequence, to my understanding
Resolves #167
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
jijoong.moon [Tue, 16 Jun 2020 04:08:05 +0000 (13:08 +0900)]
[ Layer copy ] copy function for conv2d & pooling
This PR fixes the copy member function of the conv2d and pooling layers.
. include copy of layer variables for conv2d
. implement copy of pooling2d layer
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Parichay Kapoor [Tue, 16 Jun 2020 09:51:58 +0000 (18:51 +0900)]
[neuralnet] Iteration bug fix in learning rate
The learning rate is decayed using the iteration count;
however, the current implementation was using the epoch count
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Tue, 16 Jun 2020 04:53:27 +0000 (13:53 +0900)]
[Docs] Update readme.md prerequisites
**Changes proposed in this PR:**
- Update readme.md prerequisites to include gcc version
**Self evaluation:**
1. Build test: [ ]Passed [ ]Failed [X]Skipped
2. Run test: [ ]Passed [ ]Failed [X]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Tue, 16 Jun 2020 02:41:19 +0000 (11:41 +0900)]
Optimize optimizer::calculate
**Changes proposed in this PR:**
- Optimize `optimizer::calculate` with new add_i and applyIf
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Tue, 16 Jun 2020 01:24:34 +0000 (10:24 +0900)]
Add `LazyTensor::applyIf`
This PR proposes `LazyTensor::applyIf` for more control flow.
Because of the problem described in http://wg21.link/P0834R0,
a macro that wraps the function needs to be used.
**Changes proposed in this PR:**
- add `LazyTensor::applyIf` semantics
- add `LazyTensor::applyIf` tests
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
jijoong.moon [Tue, 16 Jun 2020 01:06:45 +0000 (10:06 +0900)]
[ Flatten ] Skeleton of Flatten Layer
This PR provides skeleton code for flatten layer.
- Header & implementation of flatten layer
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Mon, 15 Jun 2020 07:01:16 +0000 (16:01 +0900)]
[ Pooling2D ] forwarding pooling 2D layer
This PR includes the forwarding process of the pooling 2d layer.
in which,
. implementation of forwarding
. unit test code for forwarding
. input / output generation of pooling 2D ( max pooling only )
. move zero_pad to util_func.h
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Mon, 15 Jun 2020 10:44:31 +0000 (19:44 +0900)]
[ Bug ] setActivation for input layer
Input Layer does not have activation property
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Mon, 15 Jun 2020 04:12:48 +0000 (13:12 +0900)]
[ Pooling2D ] initialize pooling 2d layer
This PR includes initialization of pooling 2d layer.
. check input dimension
. set input / output dimension
. allocate hidden layer
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Jihoon Lee [Mon, 15 Jun 2020 06:22:54 +0000 (15:22 +0900)]
[Tensor] Change Add_i to have alpha signature
**Changes proposed in this PR:**
- `Add_i(Tensor &T, float alpha)` for coefficient multiplication
- Change test accordingly
- Optimize `blas` implementation for `multiply_i`
See also: #166
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Mon, 15 Jun 2020 08:39:40 +0000 (17:39 +0900)]
[util] Simplify sigmoidPrime
Simplified sigmoid prime, which IMO might also be more efficient
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Mon, 15 Jun 2020 08:37:36 +0000 (17:37 +0900)]
[layer] object initialization bugfix
Added bugfix to object initialization
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>