Parichay Kapoor [Mon, 20 Jul 2020 07:35:16 +0000 (16:35 +0900)]
[activation] Move activation functions to activation file
Move activation operator functions to activation file as static members
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
jijoong.moon [Mon, 20 Jul 2020 11:23:06 +0000 (20:23 +0900)]
[ Coverity ] Fix coverity issues
This PR includes fixes for Coverity issues
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Mon, 20 Jul 2020 04:42:59 +0000 (13:42 +0900)]
[ Coverity ] Fix Coverity Issues
This PR fixes Coverity issues.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Parichay Kapoor [Mon, 20 Jul 2020 04:49:09 +0000 (13:49 +0900)]
[activation] Update input arguments of softmax
Update the input arguments of softmax from Tensor to Tensor const & to match the
activation function signature
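A minimal sketch of the change (with a hypothetical `Tensor` stand-in, not the actual nntrainer class): passing by const reference avoids a copy and lets the function match a shared activation-function signature.
```cpp
struct Tensor { /* stand-in for nntrainer's Tensor */ };

// Shared activation-function signature, assumed for illustration.
using ActivationFn = Tensor (*)(Tensor const &);

Tensor softmax(Tensor const &x) { return x; } // was: Tensor softmax(Tensor x)

ActivationFn fn = softmax; // only compiles with the matching signature
```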
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Mon, 20 Jul 2020 05:09:38 +0000 (14:09 +0900)]
[Loss] Update MSR to MSE
Update the name of mean squared error from MSR to MSE
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Mon, 20 Jul 2020 04:01:14 +0000 (13:01 +0900)]
[unittest] Add unittest for backwarding of loss layers
This patch adds unittests for backwarding of the loss layers with fc and activation layers.
It also includes major bug fixes for backwarding of all the loss layers to match TensorFlow,
and a major hotfix for the softmax derivative - the activation layer semantics need to be updated
to properly fix the softmax layer derivative
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
jijoong.moon [Fri, 17 Jul 2020 05:58:33 +0000 (14:58 +0900)]
[ Bug ] Fix initialize buffer for adam optimizer
. Add initialize(std::shared_ptr<UpdatableParam> params, ...)
. Update conv2d initialization for the optimizer
. Update the filter derivative calculation to accumulate via addition
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Jihoon Lee [Mon, 20 Jul 2020 02:07:28 +0000 (11:07 +0900)]
Fix summary logic error
Fix the flag so that it is parsed properly by verbosity
cc. @jijoongmoon
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Fri, 17 Jul 2020 07:54:49 +0000 (16:54 +0900)]
[CAPI] Implement summary capi
This PR mainly adds `NeuralNetwork::print`. Note that it should be heavily
refactored for proper functionality. See the @todo's in this patch
**Changes proposed in this PR:**
- add `NeuralNetwork::print`
- implement `model_get_summary`
- Add capi test for `model_get_summary`
- move `ml_train_summary_type_e` to `api-common`
- minor bug fix
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Fri, 17 Jul 2020 08:16:38 +0000 (17:16 +0900)]
Change nntrainer include in applications
The nntrainer include should include the api_common header as well.
Android.mk in the applications has been fixed accordingly
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Fri, 17 Jul 2020 07:34:29 +0000 (16:34 +0900)]
Add header guard to nntrainer-api-common.h
Add a header guard to nntrainer-api-common to prevent redeclaration.
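For reference, a minimal sketch of the pattern (guard name illustrative):
```cpp
#ifndef __NNTRAINER_API_COMMON_H__
#define __NNTRAINER_API_COMMON_H__

/* declarations go here; the guard prevents redeclaration
   when this header is included more than once */

#endif /* __NNTRAINER_API_COMMON_H__ */
```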
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Fri, 17 Jul 2020 05:46:15 +0000 (14:46 +0900)]
Fix neuralnetwork property logic
exception::invalid_property for a disallowed property should not be
ignored in some cases, so the parsing process is patched accordingly
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Fri, 17 Jul 2020 03:53:34 +0000 (12:53 +0900)]
[fc/activation] Unittest for fc, activation layers and sgd optimizer
Add a combined unittest for backwarding of the fc and activation layers (sigmoid and softmax)
Note that this is done without the loss
As SGD is used for the updates, this also acts as a test of the SGD optimizer
V2:
Using paramsAt() in unittest
Moved paramsAt() to public from protected
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Thu, 16 Jul 2020 01:55:57 +0000 (10:55 +0900)]
Add test for modelfiles
Add automated test scaffolding to test modelfile configuration &
NN initialization.
**Changes proposed in this PR:**
- Add IniSection class to control ini (for testing)
- Add Ini test fixture for parameterized ini testing
- Add `unittest_nntrainer_modelfile.cpp`
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Fri, 17 Jul 2020 05:12:02 +0000 (14:12 +0900)]
[api] remove duplicate declarations
Remove duplicate declarations introduced by an auto merge
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Fri, 17 Jul 2020 05:32:41 +0000 (14:32 +0900)]
[c-api] Function namespace correction
get_summary function namespace correction
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Thu, 16 Jul 2020 11:44:44 +0000 (20:44 +0900)]
[layer] Sum over batch for gradients
Gradients of a layer should be summed over the batch rather than averaged.
This is consistent with TensorFlow's implementation
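For reference, in the standard math (not taken from the patch), for a fully connected layer with per-sample input $x_b$ and back-propagated error $\delta_b$, this means the weight gradient is

$$\frac{\partial L}{\partial W} = \sum_{b=1}^{B} x_b^{\top} \delta_b \qquad \text{rather than} \qquad \frac{1}{B}\sum_{b=1}^{B} x_b^{\top} \delta_b$$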
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Thu, 16 Jul 2020 11:39:29 +0000 (20:39 +0900)]
[optimizer/layers] Gradient dimension should match weight dim
The gradient dimension should match the weight dimension.
Currently the optimizer applies the averaging of gradients, which is not correct;
apply the averaging of gradients before calling applyGradients instead
Resolves #280
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Thu, 16 Jul 2020 10:49:43 +0000 (19:49 +0900)]
[api] remove duplicate declaration
Remove duplicate declaration of ml_train_model_run_async
cc. @jijoonmoon
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Thu, 16 Jul 2020 02:50:15 +0000 (11:50 +0900)]
Follow-up after #308
**Changes proposed in this PR:**
- Now friendship between layer and network is over
- Move `setProperty(propType)` to public
- Add custom exception in `_error.h`
- Change `setProperty` `std::out_of_range` ->
`exception::invalid_property`
- Add error code boundary to `nntrainer_exception_boundary`
- Change inherited docs to @copydoc
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
resolves #315
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Wed, 8 Jul 2020 12:49:26 +0000 (21:49 +0900)]
[API] Finalize the first draft of C-API
Finalize the first draft of C-API
V2:
Applied review comments
Updated namespace to `ml_train_*`
Added module doc
Added enums for loss and optimizer
Added loss as a parameter for model_compile
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Wed, 15 Jul 2020 07:43:22 +0000 (16:43 +0900)]
[CAPI] Add functions to be supported later
Add functions to be supported later in the internal capi header.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Tue, 14 Jul 2020 09:48:17 +0000 (18:48 +0900)]
[CAPI] Added more unittests for tizen capi
Add unittests for the tizen c-api related to datasets
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Fri, 3 Jul 2020 07:51:54 +0000 (16:51 +0900)]
[loss/FC] Unittest for forward propagation
This PR adds unittests for forward propagation of the FC layer with loss and activations.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Thu, 16 Jul 2020 05:14:54 +0000 (14:14 +0900)]
[softmax] Bug fix
Softmax should be computed over the batch dimension.
However, the current implementation computes it over the channel dimension.
This patch fixes that; it also fixes softmax with cross entropy loss
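For reference, the standard (numerically stable) softmax is applied independently to each sample $b$ in the batch:

$$\mathrm{softmax}(x_b)_i = \frac{e^{x_{b,i} - \max_j x_{b,j}}}{\sum_k e^{x_{b,k} - \max_j x_{b,j}}}$$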
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Thu, 16 Jul 2020 05:12:43 +0000 (14:12 +0900)]
[Loss] Bug fix for loss
Added bug fixes for loss forwarding (the standard formulas are below):
- for sigmoid with cross entropy, the formula was correct but the implementation was wrong, and the sign of the output was inverted
- for MSE, an average is needed rather than a sum
- for softmax with cross entropy, dividing by the input width is not needed, but there is still a mismatch
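For reference, the standard definitions these fixes converge toward (not copied from the patch):

$$\mathrm{MSE} = \frac{1}{N}\sum_{i=1}^{N} (y_i - \hat{y}_i)^2, \qquad L_{\text{sigmoid-CE}} = -\frac{1}{N}\sum_{i=1}^{N}\big[y_i \log \sigma(x_i) + (1 - y_i)\log(1 - \sigma(x_i))\big]$$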
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Wed, 15 Jul 2020 10:57:53 +0000 (19:57 +0900)]
[tensor] Bug fix for average
Bug fix for average to use the correct denominator
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Tue, 14 Jul 2020 11:11:16 +0000 (20:11 +0900)]
Enable activation layer in ini
This PR exposes the activation layer via ini. As #308 handles properties
automatically, nothing else needs to be set (a hypothetical ini sketch is below).
See also #210
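A hypothetical sketch of the idea (section and key names illustrative, not verified against the actual parser): the activation is given as a property of an existing layer section and expanded into an activation layer internally.
```ini
[fc1]
Type = fully_connected
Unit = 10
Activation = sigmoid   ; parsed into a separate activation layer
```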
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Tue, 14 Jul 2020 09:38:15 +0000 (18:38 +0900)]
Refactor nntrainer load from config
`loadFromConfig` has a lot of logic duplicated from layer::setProperty and
others.
This PR refactors `loadFromConfig` to reuse the existing logic.
**Changes proposed in this PR:**
- Add `std::out_of_range` exception to setProperty to represent validity
- Change error to warning when input_dim is not present at the head of
the network (for ini)
- Change `weightIni` -> `weight_ini` in `ini` for consistency
- Change unittest accordingly
- Separate dataset, network parser
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Wed, 15 Jul 2020 08:00:50 +0000 (17:00 +0900)]
[common-api] Add common api header
Add common api header for nntrainer.
This common header includes declarations common to all the APIs
and is used in nntrainer as well.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Tue, 14 Jul 2020 04:30:09 +0000 (13:30 +0900)]
[API] Update C-API dataset handling
Update C-API to add dataset handling
Changes added to data buffer:
- Bug fix for the data buffer using already-freed memory
- Data buffer set-properties moved out of the neural network
- The data buffer generator now takes a list of arrays for multiple inputs
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
jijoong.moon [Mon, 13 Jul 2020 06:52:44 +0000 (15:52 +0900)]
[ Application ] mnist with tensorflow
This PR includes:
. An MNIST TensorFlow example to compare with nntrainer.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Jihoon Lee [Thu, 9 Jul 2020 00:33:15 +0000 (09:33 +0900)]
[CAPI] Add ml_nnmodel_get_summary
**Changes proposed in this PR:**
- Add ml_nnmodel_get_summary
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Mon, 13 Jul 2020 03:41:11 +0000 (12:41 +0900)]
[API] Update API namespace and use enum for optimizer
Setting the optimizer now uses an enum rather than a string
The namespace of the layers has been updated
Below is a summary of the updates:
ml_nnmodel_* -> ml_train_model_*
ml_nnlayer_* -> ml_train_layer_*
ml_nnopt_* -> ml_train_optimizer_*
*_delete() -> *_destroy()
ml_nnmodel_train_with_file() and ml_nnmodel_train_with_generator() have been kept back
These will be updated in the upcoming PR where dataset interface for API is updated
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Tue, 14 Jul 2020 09:19:37 +0000 (18:19 +0900)]
[error code] Common error codes re-define issue
Many ML_ERROR error codes are defined twice, in nntrainer/include/nntrainer_error.h and api/capi/include/platform/ml-api-common.h.
This patch reuses the definitions from ml-api-common.h in nntrainer_error.h.
This allows nntrainer.h to be included in source code, as it contains certain callback declarations which are used in the library
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Tue, 14 Jul 2020 09:06:03 +0000 (18:06 +0900)]
[data buffer] Bug fix for data buffer
data buffer updateData() passes a reference to an on-stack variable
which is destroyed after the function call exits. However, the callee
keeps running on a new thread, causing problems.
Added a bug fix for this (the pattern is sketched below).
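A minimal sketch of the bug pattern (hypothetical names, not the actual nntrainer code):
```cpp
#include <string>
#include <thread>

void spawnBuggy(std::thread &worker) {
  std::string status = "filling";       // lives on the caller's stack
  worker = std::thread([&status]() {    // captures by reference: dangles
    // uses status after spawnBuggy() returns -> undefined behavior
  });
}

void spawnFixed(std::thread &worker) {
  std::string status = "filling";
  worker = std::thread([status]() {     // captures by value: owns a copy
    // the thread safely uses its own copy of status
  });
}
```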
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Tue, 14 Jul 2020 04:01:49 +0000 (13:01 +0900)]
Fix NeuralNetwork::finalize
Deleting a layer in finalize used the wrong index. This PR fixes the issue
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Mon, 13 Jul 2020 01:34:51 +0000 (10:34 +0900)]
Add layer print
**Changes proposed in this PR:**
- Add print option to layer
- Add `Layer::print` function
- Add a hook point into the print function for derived layers
- Add `ostream &operator<<` for the Layer type
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
jijoong.moon [Fri, 10 Jul 2020 11:28:44 +0000 (20:28 +0900)]
[ API ] capi-nntrainer packaging
This PR includes:
. rpm packaging of capi-nntrainer for tizen
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Dongju Chae [Tue, 14 Jul 2020 08:21:17 +0000 (17:21 +0900)]
[Fix/Coverage] Fix the coverage generator to use python3
This patch fixes the coverage generator to use python3.
It seems that the Python2 package was removed in nntrainer.spec.
Signed-off-by: Dongju Chae <dongju.chae@samsung.com>
Jihoon Lee [Fri, 10 Jul 2020 07:17:44 +0000 (16:17 +0900)]
Add setProperty for each property type
Since Layer::setProperty iterates through vectors, the only part that
needs overriding is setting each particular property.
This PR adds setProperty by type to relieve the issue.
With regard to #270, this patch enables layers to check whether a given
property type is valid for the user.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Fri, 10 Jul 2020 04:33:48 +0000 (13:33 +0900)]
Add throw_status to map status to throw
A few functions use C-style status codes; this PR adds scaffolding around
the C-style status to throw a `std::exception` instead (a sketch is below)
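A minimal sketch of the idea (the mapping table is hypothetical, not the exact nntrainer one):
```cpp
#include <cerrno>
#include <new>
#include <stdexcept>

// Convert a C-style status code into a thrown std::exception.
static void throw_status(int status) {
  switch (status) {
  case 0:
    return;                            // success: nothing to throw
  case -EINVAL:
    throw std::invalid_argument("invalid argument");
  case -ENOMEM:
    throw std::bad_alloc();
  default:
    throw std::runtime_error("unknown error code");
  }
}
```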
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Mon, 13 Jul 2020 04:09:22 +0000 (13:09 +0900)]
[API] Bug fix in C-API get_layer
The ml_nnmodel_get_layer function was adding another layer to struct NeuralNetwork if the layer wasn't found in layers_map.
This function is supposed to only add the layer to layers_map and not modify struct NeuralNetwork.
This PR applies the above bug fix.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Thu, 9 Jul 2020 11:21:30 +0000 (20:21 +0900)]
Refactor bn layer test fixture
**Changes proposed in this PR:**
- The BN layer tf fixture is merged into the BNLayer fixture
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Thu, 9 Jul 2020 10:01:49 +0000 (19:01 +0900)]
Update example to c++14 and enable exceptions
**Changes proposed in this PR:**
- Update /Application/jni to use c++14 and enable exceptions
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Tue, 7 Jul 2020 12:44:59 +0000 (21:44 +0900)]
Add UpdatableParam to manage weights / gradients
**Changes proposed in this PR:**
- Add `UpdatableParam`
- Change `Optimizer::apply_gradient` signature
- Attach `UpdatableParam` to manage weights
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Mon, 6 Jul 2020 05:51:59 +0000 (14:51 +0900)]
[API] Search layer by name in C-API
Support searching for a layer by name in the C-API
This is part of broader changes to support this functionality
Related issue - #260
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
jijoong.moon [Thu, 2 Jul 2020 10:08:18 +0000 (19:08 +0900)]
[ MNIST ] mnist application
This PR provides mnist application which includes:
. 1 Input Layer
. 2 Conv2d
. 2 Pooling2d
. 1 Flatten
. 1 Fully Connected Layer
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Thu, 9 Jul 2020 12:19:01 +0000 (21:19 +0900)]
Fix to run validation process during train
Currently validation does not work, because the dimension of the hidden
tensor is different if we use a one-batch input.
This PR handles the issue.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Mon, 6 Jul 2020 01:24:10 +0000 (10:24 +0900)]
[ Unit Test ] Generate Tensorflow output and Comparison
This PR provides:
. Generate Tensorflow output & gradient outputs for conv2d, pooling2d
. Compare with nntrainer outputs
. TODO : compare gradients after the getGradient func is implemented.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Parichay Kapoor [Wed, 8 Jul 2020 05:26:04 +0000 (14:26 +0900)]
[API] Update C-API semantics
Update C-API semantics about layer and optimizer
While the layer and optimizer are in use, do not allow deletion of the layer and optimizer.
Further, once the layer and optimizer are set/added to a model, their ownership is transferred to the model,
so there is no need to delete them separately. Deleting the model will directly delete them as well.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Wed, 8 Jul 2020 04:35:48 +0000 (13:35 +0900)]
[property] Restrict the properties set after compilation
Restrict the properties which can be set for the network after the compilation of the model has been done
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Tue, 7 Jul 2020 08:24:28 +0000 (17:24 +0900)]
[API] Update C-API
Update C-API and corresponding code changes
Major Changes:
1. ml_nnmodel_compile_with_conf() replaced with ml_nnmodel_construct_with_conf().
This new function loads the model from the config file but does not initialize.
ml_nnmodel_compile() should be called after ml_nnmodel_construct_with_conf() before training.
2. ml_nnmodel_compile() does not take optimizer as an input.
Rather, use ml_nnmodel_set_optimizer() to set the optimizer for the model.
3. init() from neuralnet has been updated to loadFromConfig() and does not initialize the model anymore.
Rather call init() after loadFromConfig() to initialize.
This also allows updating the model config after loading model with loadFromConfig().
4. init(optimizer, args_list) has been replaced with init()
Rather call setOptimizer(optimizer) to set the optimizer and
setProperty(args_list) to set the properties before calling init().
5. Bug fixes in checkValidation()
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Fri, 3 Jul 2020 10:02:48 +0000 (19:02 +0900)]
[tensor] Tensor constructor should not set zero
Tensor constructors should not zero the memory by default.
This hides many bugs in the code and also incurs a large overhead.
Removed the zeroing in the constructor, with many corresponding bug fixes
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Tue, 7 Jul 2020 01:11:52 +0000 (10:11 +0900)]
Refactor layer unittest
This PR makes the layer unittest more concise and fixes some bugs.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Mon, 6 Jul 2020 10:12:02 +0000 (19:12 +0900)]
[property] Add name property for layer
Add a name property for layers.
Default names are added if a layer name is not given by the creator.
Layer addition should no longer be done by directly appending to layers
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Wed, 8 Jul 2020 01:29:14 +0000 (10:29 +0900)]
Enhance Tensor print
**Changes proposed in this PR:**
- Add tensor address to the print
- Handle the case when tensor is too large
- Add test
See also: #270
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Sangjung Woo [Thu, 9 Jul 2020 02:23:59 +0000 (11:23 +0900)]
[Spec] Fix the wrong Requires section
This patch fixes the wrong Requires for libopenblas.
Signed-off-by: Sangjung Woo <sangjung.woo@samsung.com>
Jihoon Lee [Wed, 24 Jun 2020 01:49:51 +0000 (10:49 +0900)]
Rework bn forward & backward
Rework bn layer forward & backward pass and fix few bugs.
This patch only includes training passes.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Wed, 8 Jul 2020 02:36:23 +0000 (11:36 +0900)]
Format every file
Now CI checks formatting. This patch formats every existing *.cpp and *.h
file in the project
See also: #271
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
jijoong.moon [Thu, 2 Jul 2020 09:43:49 +0000 (18:43 +0900)]
[ NETWORK ] initialize network for conv2d/pooling2d/flatten
This PR provides initialization of the conv2d, pooling2d, and flatten layers
for the training configuration file.
Also modified to update multiple weights in the optimizer.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Parichay Kapoor [Mon, 6 Jul 2020 10:05:25 +0000 (19:05 +0900)]
[property] Set common properties in layer.h
Many layers have common properties which are defined in layer.h.
As they are in layer.h and are expected to be used in most layers, let's set their properties in layer.h itself.
This reduces a lot of redundancy.
If, in a rare case, some property should not be handled by a specific layer, handle it as an error in that layer itself
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Tue, 7 Jul 2020 02:44:18 +0000 (11:44 +0900)]
Fix potential bug in pooling 2D layer
When pooling2d_layer is initialized once with defaults
and then initialized again as `global_max` | `global_pooling`,
`output.width` and `output.height` are set to 2.
Although this scenario is highly unlikely, initializing a layer
should be done deterministically
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Fri, 3 Jul 2020 06:59:24 +0000 (15:59 +0900)]
[network] Add sensible defaults
Added sensible defaults for the network initialization.
This covers layer, loss, and optimizer properties.
However, defaults are not added for some of the core properties because they should be input by the user
V2:
Take fixes of conv2d from #249 by @jijoongmoon to make this PR work
Resolves #236
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Fri, 3 Jul 2020 07:08:19 +0000 (16:08 +0900)]
[layers] Add trainable feature
Add a trainable feature for each layer, which allows certain layers to simply not train without affecting the others
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Fri, 3 Jul 2020 07:32:46 +0000 (16:32 +0900)]
[weight/gradients] Initialize Weights once only
Since layers only store a reference to the weights, store them once rather than doing it in every iteration
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Fri, 3 Jul 2020 07:02:21 +0000 (16:02 +0900)]
[git] ignore vscode configs and multiple vim backup files
ignore vscode configs and multiple vim backup files
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Thu, 2 Jul 2020 04:17:29 +0000 (13:17 +0900)]
[ini] update ini bias init and flatten as feature
The bias init name is changed to bias_init_zero to make it more readable.
Flatten is now a layer feature externally rather than a new layer itself
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Thu, 2 Jul 2020 10:55:02 +0000 (19:55 +0900)]
Prepare bn layer tc
This PR adds a python function that generates the bn_layer forward/backward
pass.
The return value's order & contents are subject to change for the time
being.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
**How to evaluate:**
Run python function
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Wed, 1 Jul 2020 06:14:56 +0000 (15:14 +0900)]
Restructure inner data inside Tensor
**Changes proposed in this PR:**
- Change the Tensor structure to enable sharing between Tensors
- Refactor Tensor ctors
- Add copy/move ctors & assignment operators
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Thu, 25 Jun 2020 10:23:53 +0000 (19:23 +0900)]
Add exception to TensorDim::setTensorDim
**Prerequisite**
- Add capi exception wrapper (later pr)
**Changes proposed in this PR:**
- Add `std::invalid_argument` to `TensorDim::setTensorDim`
- Fix the `fc_layer` positive test not having a unit declared.
- Fix tests accordingly.
- Add TensorDim test (positive && negative)
**Todo**
- ~Add TensorDim test (positive && negative)~
See also: #233
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Thu, 25 Jun 2020 12:34:08 +0000 (21:34 +0900)]
Add exception boundary to capi
**Changes proposed in this PR:**
- Add a functor that returns the `errno` corresponding to an `exception`
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Thu, 2 Jul 2020 04:42:26 +0000 (13:42 +0900)]
[activation] Add missing config for activation layer
Add the missing initialization and input/output dimension setting for the activation layer
Also update setting the previous dimension to come from the last computation layer
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Mon, 29 Jun 2020 08:34:30 +0000 (17:34 +0900)]
Add ml-api-common to capi
**Changes proposed in this PR:**
- Add capi-ml-common-devel to spec file
- Add a dummy `ml-api-common.h`
- Change error code accordingly
Resolves #75
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
jijoong.moon [Tue, 30 Jun 2020 00:05:28 +0000 (09:05 +0900)]
[ Pooling2D ] unittest cases for backwarding
This PR provides unittest cases for backwarding of global_max &
global_average:
. global_max : forwarding / backwarding
. global_average : forwarding / backwarding
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Parichay Kapoor [Fri, 26 Jun 2020 05:29:31 +0000 (14:29 +0900)]
[ini] Update ini parsing and format
Update the parsing and format of ini input file
Remove the unnecessary declaration of the layers at the top of the ini file
Add corresponding bug fixes and updates to the unittests
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Wed, 24 Jun 2020 15:02:05 +0000 (00:02 +0900)]
[unittest] generate fc unittests
Added generated forward unittest data for the fully connected layer, and corresponding unittests
Added minor bug fixes as well
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Thu, 25 Jun 2020 11:52:21 +0000 (20:52 +0900)]
[genInput] Update generation of input for fc
Update the generation of input for the fc layer forward pass
Backward is also supported, however it is not yet used
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Thu, 25 Jun 2020 11:49:48 +0000 (20:49 +0900)]
[genInput] Make input data reproducible
Make input data generation reproducible with fixed seeding
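The concept, sketched in C++ for illustration (the actual generator script is python; the seed value is illustrative):
```cpp
#include <random>

int main() {
  std::mt19937 rng(42);                                  // fixed seed
  std::uniform_real_distribution<float> dist(0.f, 1.f);
  float sample = dist(rng);  // identical sequence on every run
  (void)sample;
  return 0;
}
```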
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Fri, 26 Jun 2020 09:51:43 +0000 (18:51 +0900)]
[optimizer] Bug fix
The Tensor copy constructor and copy assignment operator create a copy of the underlying vector.
This led to a bug in the optimizer, which updated a copy of the weight rather than the weight itself.
Fixed by using a reference to the weight in the optimizer (the pattern is sketched below).
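A minimal sketch of the bug pattern (hypothetical `Weight` type, not the actual nntrainer classes):
```cpp
#include <vector>

struct Weight { std::vector<float> data; };

void applyGradientsBuggy(std::vector<Weight> &weights) {
  for (Weight w : weights)   // iterates by value: updates a copy, then lost
    w.data[0] -= 0.01f;
}

void applyGradientsFixed(std::vector<Weight> &weights) {
  for (Weight &w : weights)  // iterates by reference: updates the weight
    w.data[0] -= 0.01f;
}
```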
Resolves #241
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
jijoong.moon [Thu, 25 Jun 2020 07:02:34 +0000 (16:02 +0900)]
[ Pooling2D ] global max / global average
This PR provides global max / global average.
. forwarding global_max / global_average
. backwarding global_max / global_average
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Parichay Kapoor [Thu, 25 Jun 2020 12:40:43 +0000 (21:40 +0900)]
[layer] Support get weights
Support getWeights functionality equivalent to getGradients
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Wed, 24 Jun 2020 06:46:13 +0000 (15:46 +0900)]
[gradients] Get gradient of each layer
Added function to get gradient of each layer
Each layer is responsible to fill in the list containing the gradients
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Thu, 25 Jun 2020 02:25:02 +0000 (11:25 +0900)]
Add tensor save / read test
This PR adds a tensor save / read test.
Though not directly related to this PR, save & read could use better
error handling
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Wed, 24 Jun 2020 05:08:28 +0000 (14:08 +0900)]
[optimizer] Optimizer to manage list of weights
Update the optimizer to handle a list of weights rather than handling the weights and bias individually,
as layers like RNN and LSTM will have more than just (w, b)
V2:
calculate renamed to apply_gradient, as it applies gradients
applied to the conv layer as well
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
jijoong.moon [Wed, 24 Jun 2020 10:44:21 +0000 (19:44 +0900)]
[ Conv2D ] fix calculate parameter of optimizer
fix calculate function parameters
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Parichay Kapoor [Wed, 24 Jun 2020 05:56:59 +0000 (14:56 +0900)]
[activation] Derivative for activation
The derivative of softmax has been handcrafted to be different from the others.
Refer to https://github.com/nnstreamer/nntrainer/blob/2a650512813db6ce3bba828b5790066fbc655f14/nntrainer/src/fc_layer.cpp#L265 for the original implementation.
Softmax requires softmax(x) as the input for its derivative, while other activations require x as the input for their derivative
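For reference, this follows from the standard softmax Jacobian, which can be written purely in terms of the output $s = \mathrm{softmax}(x)$:

$$\frac{\partial s_i}{\partial x_j} = s_i(\delta_{ij} - s_j)$$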
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Wed, 24 Jun 2020 02:00:10 +0000 (11:00 +0900)]
Add stride and contiguous flag to tensor
**Changes proposed in this PR:**
- Add stride
- Add an is-contiguous flag to the tensor
This patch is an anchor for #217
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
jijoong.moon [Fri, 19 Jun 2020 13:38:42 +0000 (22:38 +0900)]
[ Conv2D ] backwarding
This PR provides backwarding of Convolution 2D Layer
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Jihoon Lee [Wed, 24 Jun 2020 06:44:37 +0000 (15:44 +0900)]
Fix optimizer signature
The optimizer signature no longer has `init_zero`.
This quick fix deletes init_zero from `fc_layer::backward`
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Wed, 24 Jun 2020 02:57:49 +0000 (11:57 +0900)]
[bias] Bias update missing for sgd
Fixed the bias update for sgd, where it only happened when the bias was initialized with 0.
For adam, the bias update was happening twice
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Wed, 24 Jun 2020 05:24:53 +0000 (14:24 +0900)]
[fc_layer] Add the deleted statement
Add back the deleted statement about the weight update in the fc layer
Resolves #221
cc. @zhoonit
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Tue, 23 Jun 2020 05:08:56 +0000 (14:08 +0900)]
[tensor/tensor_dim] Added equal comparison operation
Added an equal comparison operation.
Currently it is based on a fixed epsilon (sketched below).
It needs an update to use a variable number of bits based on the exponent, as done in gtest
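A minimal sketch of the fixed-epsilon approach (epsilon value illustrative); gtest instead scales the tolerance with the magnitude of the operands via ULPs:
```cpp
#include <cmath>

static bool float_eq(float a, float b, float epsilon = 1e-5f) {
  return std::fabs(a - b) < epsilon;   // fixed tolerance, magnitude-blind
}
```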
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Wed, 24 Jun 2020 02:08:28 +0000 (11:08 +0900)]
Move weight_decay handling from opt to layer
**Changes proposed in this PR:**
- remove weight_decay from the `Optimizer::calculate` signature
- apply weight decay in fc_layer.cpp
Please note that conv2d_layer::backwarding also needs to handle weight
decay after this PR is merged (the formula is sketched below).
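For reference, with standard L2 weight decay (coefficient $\lambda$, notation assumed here), the gradient the layer hands to the optimizer becomes

$$\frac{\partial L}{\partial W} + \lambda W$$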
Resolves #213
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Fri, 19 Jun 2020 10:31:16 +0000 (19:31 +0900)]
Attach activation layer to neuralnet.cpp
**Changes proposed in this PR:**
- strip the activation-function-related members from `layer`
- init the activation property as an `activation_layer`
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
resolves #153
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Mon, 22 Jun 2020 08:07:35 +0000 (17:07 +0900)]
[Fix/jni] Change tflite-dependency
There seems to be a problem with building tensorflow. This PR proposes using a
prebuilt tensorflow-lite instead of building one.
**Changes proposed in this PR:**
- Change `Applications/android.mk` to use prebuilt library
- Change `prepare_tflite.sh`
- Bump tflite to 1.13.1 as suggested in #20
Resolves #207
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Tue, 23 Jun 2020 01:39:07 +0000 (10:39 +0900)]
Make .clang-format compatible with version 6
Clang-format 6 is widely used. However,
`AllowAllConstructorInitializersOnNextLine`, added in #203, is only supported
from clang-format 9.
This PR reverts the use of `AllowAllConstructorInitializersOnNextLine`
while keeping a similar linting style.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Fri, 19 Jun 2020 08:29:45 +0000 (17:29 +0900)]
[meson] Arrange file order
Arrange the file order for easier access
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Thu, 18 Jun 2020 11:10:43 +0000 (20:10 +0900)]
[loss] Combine softmax with cross entropy
Softmax is combined with cross entropy.
Cross entropy still exists on its own as an option but isn't supported (the combined gradient is sketched below)
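For reference, combining the two gives the standard, numerically convenient gradient: with $s = \mathrm{softmax}(x)$ and a one-hot label $y$,

$$\frac{\partial L}{\partial x} = s - y$$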
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Thu, 18 Jun 2020 08:57:41 +0000 (17:57 +0900)]
[loss] Combined cross entropy with sigmoid
Combined the cross-entropy-with-sigmoid version for the loss because of its higher numerical stability.
Note that this happens internally and is not exposed outside (the stable form is sketched below)
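For reference, the standard numerically stable formulation of sigmoid cross entropy on logits $x$ with labels $y$ is

$$L(x, y) = \max(x, 0) - x\,y + \log\big(1 + e^{-|x|}\big)$$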
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>