platform/core/ml/nntrainer.git
4 years agoKNN notation change
brainer [Tue, 28 Jul 2020 16:26:02 +0000 (01:26 +0900)]
KNN notation change

The incorrect "KNN" notation is changed to "k-NN".

Signed-off-by: brainer <jeansanghyun@gmail.com>
4 years ago[ Coverity ] Fix Coverity Issues accepted/tizen/unified/20200728.135447 submit/tizen/20200728.023042
jijoong.moon [Fri, 24 Jul 2020 12:15:21 +0000 (21:15 +0900)]
[ Coverity ] Fix Coverity Issues

This PR includes fixes for Coverity issues.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
4 years ago[ Unit Test ] Add Verification of Conv2D
jijoong.moon [Thu, 23 Jul 2020 05:47:22 +0000 (14:47 +0900)]
[ Unit Test ] Add Verification of Conv2D

Finalize Conv2D verification with tensorflow

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
4 years ago[API] Make C-API thread safe
Parichay Kapoor [Wed, 22 Jul 2020 03:17:34 +0000 (12:17 +0900)]
[API] Make C-API thread safe

Added thread-locking safety for the ml_train C-API.
There is a global lock to assist with the destruction of objects.
Each object also gets its own lock, which maintains safety when accessing or modifying that object.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
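The two-level locking scheme described above (a global lock guarding handle creation/destruction plus a per-object lock serializing access) can be sketched as follows. This is an illustrative pattern only; the names (`Handle`, `registry`, `create`, `update`, `destroy`) are hypothetical and not the actual C-API internals.

```python
import threading

class Handle:
    """A wrapper whose per-object lock serializes access/modification."""
    def __init__(self, value):
        self.lock = threading.Lock()
        self.value = value

# Global lock: taken when creating or destroying handles, so a handle
# cannot be destroyed while another thread is looking it up.
registry_lock = threading.Lock()
registry = {}

def create(name, value):
    with registry_lock:
        registry[name] = Handle(value)

def update(name, value):
    with registry_lock:          # look up the handle safely
        handle = registry[name]
    with handle.lock:            # then serialize the modification
        handle.value = value

def destroy(name):
    with registry_lock:          # remove from the registry first
        handle = registry.pop(name)
    with handle.lock:            # wait for in-flight users to finish
        handle.value = None

create("model", 1)
update("model", 2)
print(registry["model"].value)  # -> 2
```

The key property is that `destroy` cannot race with `update`: the registry lock makes lookup and removal mutually exclusive, and the per-object lock drains any user already holding the handle.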
4 years ago[unittest] Moved unittest activation out of utils
Parichay Kapoor [Tue, 21 Jul 2020 09:32:15 +0000 (18:32 +0900)]
[unittest] Moved unittest activation out of utils

Moved the activation unittests out of util.
Set the error tolerance level to 1e-6.
This exposed a bug in the batch normalization unittest (its golden values are 0),
so it is disabled for now.
@zhoonit please check it

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
4 years ago[activation] Update signature of activation prime
Parichay Kapoor [Tue, 21 Jul 2020 09:29:57 +0000 (18:29 +0900)]
[activation] Update signature of activation prime

Update the signature to accommodate the derivative in activation_prime.
This allows softmax_prime to be handled by the current activation_layer class.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
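The reason softmax needs the incoming derivative in its prime function is that its Jacobian is not elementwise: dL/dx_i = y_i * (dL/dy_i - Σ_j dL/dy_j * y_j). A minimal sketch of such a signature, with illustrative function names rather than the library's actual API:

```python
import math

def softmax(x):
    m = max(x)  # subtract the max for numerical stability
    e = [math.exp(v - m) for v in x]
    s = sum(e)
    return [v / s for v in e]

def softmax_prime(y, dy):
    """Backward pass: takes the activation output y AND the incoming
    derivative dy, since softmax's Jacobian mixes components."""
    dot = sum(d * v for d, v in zip(dy, y))
    return [v * (d - dot) for v, d in zip(y, dy)]

y = softmax([1.0, 2.0, 3.0])
dx = softmax_prime(y, [1.0, 0.0, 0.0])
# Gradients through softmax sum to zero along the simplex.
print(abs(sum(dx)) < 1e-12)
```

An elementwise prime signature (taking only `y`) cannot express this, which is why the updated signature carries the derivative through.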
4 years agoBump to 6.0
Jihoon Lee [Wed, 22 Jul 2020 07:28:27 +0000 (16:28 +0900)]
Bump to 6.0

**Changes proposed in this PR:**
- Bump project version to 6.0 in order to use nntrainer

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
4 years agoFix rpm spec file dependency
Jihoon Lee [Wed, 22 Jul 2020 10:23:17 +0000 (19:23 +0900)]
Fix rpm spec file dependency

This patch fixes the rpm spec file dependency for `capi-nntrainer`.

 Resolves #357

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
4 years ago[neuralNet] Moving batch size property to compile time
Parichay Kapoor [Tue, 21 Jul 2020 06:46:12 +0000 (15:46 +0900)]
[neuralNet] Moving batch size property to compile time

The batch size property should be set before compiling for now.
Initializing layers allocates memory for them, and setting the batch size
property at training time would require the layers to be re-initialized,
which isn't supported yet.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
4 years ago[neuralnet] Bug fix for setTrainConfig
Parichay Kapoor [Tue, 21 Jul 2020 09:55:07 +0000 (18:55 +0900)]
[neuralnet] Bug fix for setTrainConfig

Fix a bug in setTrainConfig:
the break in the switch's default case was hiding the error.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
4 years ago[Example] Add drawing part
Jihoon Lee [Tue, 21 Jul 2020 02:35:52 +0000 (11:35 +0900)]
[Example] Add drawing part

This PR adds the drawing part of example/customShortcut.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
4 years ago[sigmoid] Rename sigmoidePrime to sigmoidPrime accepted/tizen/unified/20200721.042553 submit/tizen/20200721.005828
Parichay Kapoor [Mon, 20 Jul 2020 07:37:07 +0000 (16:37 +0900)]
[sigmoid] Rename sigmoidePrime to sigmoidPrime

Rename sigmoidePrime to sigmoidPrime.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
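For reference, sigmoidPrime is the derivative σ'(x) = σ(x)·(1 − σ(x)); a quick standalone sketch (not the library's implementation):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_prime(x):
    # derivative of sigmoid expressed through its own output
    s = sigmoid(x)
    return s * (1.0 - s)

print(sigmoid(0.0))        # 0.5
print(sigmoid_prime(0.0))  # 0.25, the derivative's maximum
```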
4 years ago[activation] Move activation functions to activation file
Parichay Kapoor [Mon, 20 Jul 2020 07:35:16 +0000 (16:35 +0900)]
[activation] Move activation functions to activation file

Move activation operator functions to activation file as static members

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
4 years ago[ Coverity ] Fix coverity issues
jijoong.moon [Mon, 20 Jul 2020 11:23:06 +0000 (20:23 +0900)]
[ Coverity ] Fix coverity issues

This PR includes fixes for Coverity issues.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
4 years ago[ Coverity ] Fix Coverity Issues
jijoong.moon [Mon, 20 Jul 2020 04:42:59 +0000 (13:42 +0900)]
[ Coverity ] Fix Coverity Issues

This PR fixes Coverity issues.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
4 years ago[activation] Update input arguments of softmax
Parichay Kapoor [Mon, 20 Jul 2020 04:49:09 +0000 (13:49 +0900)]
[activation] Update input arguments of softmax

Update the input argument of softmax from Tensor to Tensor const & to match the
activation function signature.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
4 years ago[Loss] Update MSR to MSE
Parichay Kapoor [Mon, 20 Jul 2020 05:09:38 +0000 (14:09 +0900)]
[Loss] Update MSR to MSE

Update the name of mean squared error from MSR to MSE

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
4 years ago[unittest] Add unittest for backwarding of loss layers
Parichay Kapoor [Mon, 20 Jul 2020 04:01:14 +0000 (13:01 +0900)]
[unittest] Add unittest for backwarding of loss layers

This patch adds unittests for the backwarding of loss layers combined with fc and activation layers.
It includes major bug fixes for the backwarding of all loss layers to match TensorFlow,
and a major hotfix for the softmax derivative; the activation layer semantics need to be
updated to properly fix the softmax layer derivative.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
4 years ago[ Bug ] Fix initialize buffer for adam optimizer
jijoong.moon [Fri, 17 Jul 2020 05:58:33 +0000 (14:58 +0900)]
[ Bug ] Fix initialize buffer for adam optimizer

. Add initialize(std::shared_ptr<UpdatableParam> params, ...)
. Update conv2d initialization for the optimizer
. Update to perform addition when calculating the derivatives of the filter

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
4 years agoFix summary logic error
Jihoon Lee [Mon, 20 Jul 2020 02:07:28 +0000 (11:07 +0900)]
Fix summary logic error

Fix the flag so it is parsed properly by verbosity.

cc. @jijoongmoon

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
4 years ago[CAPI] Implement summary capi
Jihoon Lee [Fri, 17 Jul 2020 07:54:49 +0000 (16:54 +0900)]
[CAPI] Implement summary capi

This PR mainly adds `NeuralNetwork::print`. Note that it should be heavily
refactored for proper functionality; see the @todo's in this patch.

**Changes proposed in this PR:**
- add `NeuralNetwork::print`
- implement `model_get_summary`
- Add capi test for `model_get_summary`
- move `ml_train_summary_type_e` to `api-common`
- minor bug fix

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
4 years agoChange nntrainer include in applications
Jihoon Lee [Fri, 17 Jul 2020 08:16:38 +0000 (17:16 +0900)]
Change nntrainer include in applications

The nntrainer include should include the api_common header as well.
Android.mk in the applications has been fixed accordingly.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
4 years agoAdd header guard to nntrainer-api-common.h
Jihoon Lee [Fri, 17 Jul 2020 07:34:29 +0000 (16:34 +0900)]
Add header guard to nntrainer-api-common.h

Add a header guard to nntrainer-api-common to prevent redeclaration.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
4 years agoFix neuralnetwork property logic
Jihoon Lee [Fri, 17 Jul 2020 05:46:15 +0000 (14:46 +0900)]
Fix neuralnetwork property logic

exception::invalid_property for a disallowed property should not be
ignored in some cases, so the parsing process is patched accordingly.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
4 years ago[fc/activation] Unittest for fc, activation layers and sgd optimizer
Parichay Kapoor [Fri, 17 Jul 2020 03:53:34 +0000 (12:53 +0900)]
[fc/activation] Unittest for fc, activation layers and sgd optimizer

Add a combined unittest for the backwarding of fc and activation layers (sigmoid and softmax).
Note that this is done without the loss.
As SGD is used for the updates, this also serves as a test of the SGD optimizer.

V2:
Using paramsAt() in unittest
Moved paramsAt() to public from protected

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
4 years agoAdd test for modelfiles
Jihoon Lee [Thu, 16 Jul 2020 01:55:57 +0000 (10:55 +0900)]
Add test for modelfiles

Add automated test scaffolding to test modelfile configuration &
 NN initialization.

**Changes proposed in this PR:**
- Add IniSection class to control ini (for testing)
- Add an Ini test fixture for parameterized ini testing
- Add `unittest_nntrainer_modelfile.cpp`

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
4 years ago[api] remove duplicate declarations
Parichay Kapoor [Fri, 17 Jul 2020 05:12:02 +0000 (14:12 +0900)]
[api] remove duplicate declarations

Remove duplicate declarations introduced by an auto-merge.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
4 years ago[c-api] Function namespace correction
Parichay Kapoor [Fri, 17 Jul 2020 05:32:41 +0000 (14:32 +0900)]
[c-api] Function namespace correction

get_summary function namespace correction

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
4 years ago[layer] Sum over batch for gradients
Parichay Kapoor [Thu, 16 Jul 2020 11:44:44 +0000 (20:44 +0900)]
[layer] Sum over batch for gradients

Gradients of a layer should be summed over the batch rather than averaged.
This is consistent with TensorFlow's implementation.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
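The distinction is easy to see on a toy example: summing per-sample gradients preserves TensorFlow-compatible scaling, while averaging silently rescales the effective learning rate by 1/batch_size. The numbers below are invented for illustration only.

```python
# Per-sample gradients of one weight over a batch of 4 (made-up values).
per_sample_grads = [0.1, -0.2, 0.3, 0.4]

# Layer backwarding should SUM over the batch ...
grad_sum = sum(per_sample_grads)

# ... not average; averaging divides every update by the batch size,
# changing the effective learning rate.
grad_avg = sum(per_sample_grads) / len(per_sample_grads)

print(grad_sum)
print(grad_avg)
```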
4 years ago[optimizer/layers] Gradient dimension should match weight dim
Parichay Kapoor [Thu, 16 Jul 2020 11:39:29 +0000 (20:39 +0900)]
[optimizer/layers] Gradient dimension should match weight dim

The gradient dimension should match the weight dimension.
Currently the optimizer averages the gradients, which is not correct;
the averaging is now applied before calling applyGradients.

Resolves #280

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
4 years ago[api] remove duplicate declaration
Parichay Kapoor [Thu, 16 Jul 2020 10:49:43 +0000 (19:49 +0900)]
[api] remove duplicate declaration

Remove duplicate declaration of ml_train_model_run_async

cc. @jijoonmoon

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
4 years agoFollow-up after #308
Jihoon Lee [Thu, 16 Jul 2020 02:50:15 +0000 (11:50 +0900)]
Follow-up after #308

**Changes proposed in this PR:**
- Now friendship between layer and network is over
- Move `setProperty(propType)` to public
- Add custom exception in `_error.h`
- Change `setProperty` `std::out_of_range` ->
`exception::invalid_property`
- Add error code boundary to `nntrainer_exception_boundary`
- Change inherited docs to @copydoc

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

resolves #315

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
4 years ago[API] Finalize the first draft of C-API
Parichay Kapoor [Wed, 8 Jul 2020 12:49:26 +0000 (21:49 +0900)]
[API] Finalize the first draft of C-API

Finalize the first draft of C-API

V2:
Applied review comments
Updated namespace to `ml_train_*`
Added module doc
Added enums for loss and optimizer
Added loss as a parameter for model_compile

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
4 years ago[CAPI] Add functions to be supported later
Parichay Kapoor [Wed, 15 Jul 2020 07:43:22 +0000 (16:43 +0900)]
[CAPI] Add functions to be supported later

Add functions to be supported later in the internal capi header.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
4 years ago[CAPI] Added more unittests for tizen capi
Parichay Kapoor [Tue, 14 Jul 2020 09:48:17 +0000 (18:48 +0900)]
[CAPI] Added more unittests for tizen capi

Add a unittest for the Tizen C-API related to datasets.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
4 years ago[loss/FC] Unittest for forward propagation
Parichay Kapoor [Fri, 3 Jul 2020 07:51:54 +0000 (16:51 +0900)]
[loss/FC] Unittest for forward propagation

This PR adds unittests for forward propagation of the FC layer with loss and activations.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
4 years ago[softmax] Bug fix
Parichay Kapoor [Thu, 16 Jul 2020 05:14:54 +0000 (14:14 +0900)]
[softmax] Bug fix

Softmax should be computed over the batch dimension;
however, the current implementation computes it over the channel dimension.
This patch fixes that, and also fixes softmax with cross-entropy loss.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
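Why the axis matters: applying softmax along the wrong dimension normalizes across the wrong set of values, so the outputs along the intended dimension no longer sum to 1. A toy check with shapes simplified to rows of logits (this is plain illustration, not the Tensor implementation):

```python
import math

def softmax_rows(m):
    """Softmax applied along each row: every row then sums to 1."""
    out = []
    for row in m:
        mx = max(row)  # stabilize before exponentiating
        e = [math.exp(v - mx) for v in row]
        s = sum(e)
        out.append([v / s for v in e])
    return out

logits = [[1.0, 2.0, 3.0],
          [1.0, 1.0, 1.0]]
probs = softmax_rows(logits)
for row in probs:
    print(abs(sum(row) - 1.0) < 1e-12)  # each row is a distribution
```

Applying the same function to the transposed input would instead make the columns sum to 1, which is exactly the kind of silent mismatch the commit fixes.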
4 years ago[Loss] Bug fix for loss
Parichay Kapoor [Thu, 16 Jul 2020 05:12:43 +0000 (14:12 +0900)]
[Loss] Bug fix for loss

Added bug fixes for loss forwarding:
- for sigmoid with cross entropy, the formula was correct but the implementation was wrong, and the sign of the output was inverted
- for MSE, an average is needed rather than a sum
- for softmax with cross entropy, dividing by the input width is not needed, but results still mismatch

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
4 years ago[tensor] Bug fix for average
Parichay Kapoor [Wed, 15 Jul 2020 10:57:53 +0000 (19:57 +0900)]
[tensor] Bug fix for average

Bug fix for average to use the correct denominator

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
4 years agoEnable activation layer in ini
Jihoon Lee [Tue, 14 Jul 2020 11:11:16 +0000 (20:11 +0900)]
Enable activation layer in ini

This PR exposes the activation layer via ini. As #308 handles properties
 automatically, nothing else needs to be set.

See also #210

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
4 years agoRefactor nntrainer load from config
Jihoon Lee [Tue, 14 Jul 2020 09:38:15 +0000 (18:38 +0900)]
Refactor nntrainer load from config

`loadFromConfig` duplicates a lot of logic from layer::setProperty and
others.

This PR refactors `loadFromConfig` to reuse the already present logic.

**Changes proposed in this PR:**
- Add `std::out_of_range` exception to setProperty to represent validity
- Change error to warning when input_dim is not present at the head of
the network(for ini)
- Change `weightIni` -> `weight_ini` in `ini` for consistency
- Change unittest accordingly
- Separate dataset, network parser

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
4 years ago[common-api] Add common api header
Parichay Kapoor [Wed, 15 Jul 2020 08:00:50 +0000 (17:00 +0900)]
[common-api] Add common api header

Add common api header for nntrainer.
This common header includes declarations common to all the APIs
and is used in nntrainer as well.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
4 years ago[API] Update C-API dataset handling
Parichay Kapoor [Tue, 14 Jul 2020 04:30:09 +0000 (13:30 +0900)]
[API] Update C-API dataset handling

Update C-API to add dataset handling
Changes added to the data buffer:
- Bug fix for the data buffer using already-freed memory
- Data buffer set-properties moved out of the neural network
- The data buffer generator now takes a list of arrays for multiple inputs

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
4 years ago[ Application ] mnist with tensorflow
jijoong.moon [Mon, 13 Jul 2020 06:52:44 +0000 (15:52 +0900)]
[ Application ] mnist with tensorflow

This PR includes:
  . MNIST Tensorflow Example to compare with nntrainer.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
4 years ago[CAPI] Add ml_nnmodel_get_summary
Jihoon Lee [Thu, 9 Jul 2020 00:33:15 +0000 (09:33 +0900)]
[CAPI] Add ml_nnmodel_get_summary

**Changes proposed in this PR:**
- Add ml_nnmodel_get_summary

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
4 years ago[API] Update API namespace and use enum for opitmizer
Parichay Kapoor [Mon, 13 Jul 2020 03:41:11 +0000 (12:41 +0900)]
[API] Update API namespace and use enum for opitmizer

Setting the optimizer now uses an enum rather than a string.

The namespace of the API functions has been updated.
Below is a summary of the updates:
ml_nnmodel_* -> ml_train_model_*
ml_nnlayer_* -> ml_train_layer_*
ml_nnopt_* -> ml_train_optimizer_*
*_delete() -> *_destroy()

ml_nnmodel_train_with_file() and ml_nnmodel_train_with_generator() have been kept back
These will be updated in the upcoming PR where dataset interface for API is updated

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
4 years ago[error code] Common error codes re-define issue
Parichay Kapoor [Tue, 14 Jul 2020 09:19:37 +0000 (18:19 +0900)]
[error code] Common error codes re-define issue

Many ML_ERROR error codes are defined twice, in nntrainer/include/nntrainer_error.h and api/capi/include/platform/ml-api-common.h.
This patch reuses the definition from ml-api-common.h in nntrainer_error.h.
This allows nntrainer.h to be included in the source code, as it contains certain callback declarations which are used in the library.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
4 years ago[data buffer] Bug fix for data buffer
Parichay Kapoor [Tue, 14 Jul 2020 09:06:03 +0000 (18:06 +0900)]
[data buffer] Bug fix for data buffer

The data buffer's updateData() passes a reference to an on-stack variable,
which is destroyed after the function call exits; however, the callee keeps
running on a new thread, causing problems.

Added a bug fix for this.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
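The bug pattern is the classic one of handing a thread a reference to a caller-owned buffer that is reused or destroyed before the thread reads it; the fix is to give the thread its own copy. A language-neutral sketch in Python (the `worker`/`update_data` names and buffer logic are invented for illustration; in the C++ code the fix amounts to copying the data into the thread instead of passing a reference to a local):

```python
import threading

results = []

def worker(batch):
    # The worker owns `batch`: the caller may reuse or clear its
    # own buffer without affecting us.
    results.append(sum(batch))

def update_data():
    staging = [1, 2, 3]          # caller's scratch buffer
    # Pass a COPY of the buffer to the thread, not the buffer itself.
    t = threading.Thread(target=worker, args=(list(staging),))
    staging.clear()              # caller reuses the buffer immediately
    t.start()
    t.join()

update_data()
print(results)  # [6]
```

Had `staging` itself been passed, the `clear()` racing with the worker would have produced a sum of 0 or worse; with the copy, the worker's input is immune to the caller's lifetime.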
4 years agoFix NeuralNetwork::finalize
Jihoon Lee [Tue, 14 Jul 2020 04:01:49 +0000 (13:01 +0900)]
Fix NeuralNetwork::finalize

Layer deletion in finalize used a wrong index; this PR fixes the issue.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
4 years agoAdd layer print
Jihoon Lee [Mon, 13 Jul 2020 01:34:51 +0000 (10:34 +0900)]
Add layer print

**Changes proposed in this PR:**
- Add print option to layer
- Add `Layer::print` function
- Add hook point to print function for derived layer.
- Add `ostream &operator<<` for the Layer type

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
4 years ago[ API ] capi-nntrainer packaging
jijoong.moon [Fri, 10 Jul 2020 11:28:44 +0000 (20:28 +0900)]
[ API ] capi-nntrainer packaging

This PR includes:
  . rpm packaging of capi-nntrainer for Tizen

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
4 years ago[Fix/Coverage] Fix the coverage generator to use python3
Dongju Chae [Tue, 14 Jul 2020 08:21:17 +0000 (17:21 +0900)]
[Fix/Coverage] Fix the coverage generator to use python3

This patch fixes the coverage generator to use python3.
It seems that Python2 package was removed in nntrainer.spec.

Signed-off-by: Dongju Chae <dongju.chae@samsung.com>
4 years agoAdd setProperty for each property type
Jihoon Lee [Fri, 10 Jul 2020 07:17:44 +0000 (16:17 +0900)]
Add setProperty for each property type

Since Layer::setProperty iterates through vectors, the only part that
needs overriding is setting each particular property.

This PR adds setProperty by type to address the issue.

W.r.t. #270, this patch enables layers to check whether a given property type
 is valid for the user.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
4 years agoAdd throw_status to map status to throw
Jihoon Lee [Fri, 10 Jul 2020 04:33:48 +0000 (13:33 +0900)]
Add throw_status to map status to throw

A few functions use C-style throws; this PR scaffolds around the C-style
throws to use `std::exception`.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
4 years ago[API] Bug fix in C-API get_layer
Parichay Kapoor [Mon, 13 Jul 2020 04:09:22 +0000 (13:09 +0900)]
[API] Bug fix in C-API get_layer

The ml_nnmodel_get_layer function was adding another layer to struct NeuralNetwork if the layer wasn't found in layers_map.
This function is supposed to only add the layer to layers_map, not modify struct NeuralNetwork.

This PR applies the above bug fix.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
4 years agoRefactor bn layer test fixture
Jihoon Lee [Thu, 9 Jul 2020 11:21:30 +0000 (20:21 +0900)]
Refactor bn layer test fixture

**Changes proposed in this PR:**
- BN layer tf fixture is merged to BNLayer fixture

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
4 years agoUpdate example to c++14 and enable exceptions
Jihoon Lee [Thu, 9 Jul 2020 10:01:49 +0000 (19:01 +0900)]
Update example to c++14 and enable exceptions

**Changes proposed in this PR:**
- Update /Application/jni to use c++14 and enable exceptions

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
4 years agoAdd UpdatableParam to manage weights / gradients
Jihoon Lee [Tue, 7 Jul 2020 12:44:59 +0000 (21:44 +0900)]
Add UpdatableParam to manage weights / gradients

**Changes proposed in this PR:**
- Add `UpdatableParam`
- Change `Optimizer::apply_gradient` signature
- Attach `UpdatableParam` to manage weights

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
4 years ago[API] Search layer by name in C-API
Parichay Kapoor [Mon, 6 Jul 2020 05:51:59 +0000 (14:51 +0900)]
[API] Search layer by name in C-API

Support searching of layer by name in C-API
This is part of the more changes to support this functionality

Related issue - #260

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
4 years ago[ MNIST ] mnist application
jijoong.moon [Thu, 2 Jul 2020 10:08:18 +0000 (19:08 +0900)]
[ MNIST ] mnist application

This PR provides mnist application which includes:
. 1 Input Layer
. 2 Conv2d
. 2 Pooling2d
. 1 Flatten
. 1 Fully Connected Layer

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
4 years agoFix to run validation process during train
jijoong.moon [Thu, 9 Jul 2020 12:19:01 +0000 (21:19 +0900)]
Fix to run validation process during train

Currently validation does not work, because the dimension of the hidden
tensor is different if we use a one-batch input.
This PR handles this issue.

Self evaluation:

 1. Build test: [X]Passed [ ]Failed [ ]Skipped
 2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
4 years ago[ Unit Test ] Generate Tensorflow output and Comparison
jijoong.moon [Mon, 6 Jul 2020 01:24:10 +0000 (10:24 +0900)]
[ Unit Test ] Generate Tensorflow output and Comparison

This PR provides:

. Generate TensorFlow output & gradient outputs for conv2d, pooling2d
. Compare with nntrainer outputs
. TODO: compare gradients after the getGradient func is implemented.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
4 years ago[API] Update C-API semantics
Parichay Kapoor [Wed, 8 Jul 2020 05:26:04 +0000 (14:26 +0900)]
[API] Update C-API semantics

Update the C-API semantics for layer and optimizer.
While a layer or optimizer is in use, its deletion is not allowed.
Further, once a layer or optimizer is set/added to a model, its ownership is transferred to the model,
so there is no need to delete it separately; deleting the model will delete it as well.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
4 years ago[property] Restrict the properties set after compilation
Parichay Kapoor [Wed, 8 Jul 2020 04:35:48 +0000 (13:35 +0900)]
[property] Restrict the properties set after compilation

Restrict the properties which can be set for the network after the compilation of the model has been done

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
4 years ago[API] Update C-API
Parichay Kapoor [Tue, 7 Jul 2020 08:24:28 +0000 (17:24 +0900)]
[API] Update C-API

Update C-API and corresponding code changes

Major Changes:

1. ml_nnmodel_compile_with_conf() replaced with ml_nnmodel_construct_with_conf().
This new function loads the model from the config file but does not initialize.
ml_nnmodel_compile() should be called after ml_nnmodel_construct_with_conf() before training.

2. ml_nnmodel_compile() does not take optimizer as an input.
Rather, use ml_nnmodel_set_optimizer() to set the optimizer for the model.

3. init() from neuralnet has been updated to loadFromConfig() and does not initialize the model anymore.
Rather call init() after loadFromConfig() to initialize.
This also allows updating the model config after loading model with loadFromConfig().

4. init(optimizer, args_list) has been replaced with init()
Rather call setOptimizer(optimizer) to set the optimizer and
setProperty(args_list) to set the properties before calling init().

5. Bug fixes in checkValidation()

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
4 years ago[tensor] Tensor constructor should not set zero
Parichay Kapoor [Fri, 3 Jul 2020 10:02:48 +0000 (19:02 +0900)]
[tensor] Tensor constructor should not set zero

Tensor constructors should not zero the memory by default;
this hides many bugs in the code and also leads to big overheads.
Removed the zeroing in the constructor, with many corresponding bug fixes.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
4 years agoRefactor layer unittest
Jihoon Lee [Tue, 7 Jul 2020 01:11:52 +0000 (10:11 +0900)]
Refactor layer unittest

This PR makes the layer unittests more concise and fixes some bugs.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
4 years ago[property] Add name property for layer accepted/tizen/unified/20200709.212755 submit/tizen/20200709.014618 submit/tizen/20200709.071349
Parichay Kapoor [Mon, 6 Jul 2020 10:12:02 +0000 (19:12 +0900)]
[property] Add name property for layer

Add a name property for layers.
Default names are assigned if a layer name is not given by the creator.
Layer addition should no longer be done by directly appending to layers.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
4 years agoEnhance Tensor print
Jihoon Lee [Wed, 8 Jul 2020 01:29:14 +0000 (10:29 +0900)]
Enhance Tensor print

**Changes proposed in this PR:**
- Add tensor address to the print
- Handle the case when tensor is too large
- Add test

See also: #270

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
4 years ago[Spec] Fix the wrong Requires section
Sangjung Woo [Thu, 9 Jul 2020 02:23:59 +0000 (11:23 +0900)]
[Spec] Fix the wrong Requires section

This patch fixes the wrong Requires for libopenblas.

Signed-off-by: Sangjung Woo <sangjung.woo@samsung.com>
4 years agoRework bn forward & backward
Jihoon Lee [Wed, 24 Jun 2020 01:49:51 +0000 (10:49 +0900)]
Rework bn forward & backward

Rework bn layer forward & backward pass and fix few bugs.

This patch only includes training passes.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
4 years agoFormat everyfile
Jihoon Lee [Wed, 8 Jul 2020 02:36:23 +0000 (11:36 +0900)]
Format everyfile

Now CI checks formatting. This patch formats every existing *.cpp and *.h
file in the project.

See also: #271

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
4 years ago[ NETWORK ] initialize network for conv2d/pooling2d/flatten
jijoong.moon [Thu, 2 Jul 2020 09:43:49 +0000 (18:43 +0900)]
[ NETWORK ] initialize network for conv2d/pooling2d/flatten

This PR provides initialization of the conv2d, pooling2d, and flatten layers
from the training configuration file.

It also modifies the optimizer to update multiple weights.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
4 years ago[property] Set common properties in layer.h
Parichay Kapoor [Mon, 6 Jul 2020 10:05:25 +0000 (19:05 +0900)]
[property] Set common properties in layer.h

Many layers have common properties which are defined in layer.h.
As they are in layer.h and are expected to be used in most layers, their properties are now set in layer.h itself.
This removes a lot of redundancy.
If, in a rare case, a property should not be handled by a specific layer, handle it as an error in that layer itself.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
4 years agoFix potential bug in pooling 2D layer
Jihoon Lee [Tue, 7 Jul 2020 02:44:18 +0000 (11:44 +0900)]
Fix potential bug in pooling 2D layer

When pooling2d_layer is initialized once with defaults
and then initialized again as `global_max` | `global_pooling`,

`output.width` and `output.height` are set to 2.

Although this scenario is highly unlikely, initializing a layer
should be done deterministically.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
4 years ago[network] Add sensible defaults
Parichay Kapoor [Fri, 3 Jul 2020 06:59:24 +0000 (15:59 +0900)]
[network] Add sensible defaults

Added sensible defaults for network initialization.
This covers layer, loss, and optimizer properties.
However, defaults are not added for some of the core properties, because those should be input by the user.

V2:
Take fixes of conv2d from #249 by @jijoongmoon to make this PR work

Resolves #236

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
4 years ago[layers] Add trainable feature
Parichay Kapoor [Fri, 3 Jul 2020 07:08:19 +0000 (16:08 +0900)]
[layers] Add trainable feature

Add a trainable flag for each layer, which allows certain layers to skip training without affecting the others

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
4 years ago[weight/gradients] Initialize Weights once only
Parichay Kapoor [Fri, 3 Jul 2020 07:32:46 +0000 (16:32 +0900)]
[weight/gradients] Initialize Weights once only

Since layers only store a reference to the weights, store them once rather than in every iteration

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
4 years ago[git] ignore vscode configs and multiple vim backup files
Parichay Kapoor [Fri, 3 Jul 2020 07:02:21 +0000 (16:02 +0900)]
[git] ignore vscode configs and multiple vim backup files

ignore vscode configs and multiple vim backup files

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
4 years ago[ini] update ini bias init and flatten as feature
Parichay Kapoor [Thu, 2 Jul 2020 04:17:29 +0000 (13:17 +0900)]
[ini] update ini bias init and flatten as feature

bias init name is changed to bias_init_zero to make it more readable
flatten is now a layer feature externally rather than a new layer itself
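
An illustrative ini fragment of what this looks like after the change (the keys here are from the general shape of the format, not verbatim from the repo): bias initialization spelled as bias_init_zero, and flatten as a property on an existing layer section instead of a separate layer.

```ini
[fc1]
Type = fully_connected
Unit = 10
Bias_init_zero = true
Flatten = true
```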

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
4 years agoPrepare bn layer tc
Jihoon Lee [Thu, 2 Jul 2020 10:55:02 +0000 (19:55 +0900)]
Prepare bn layer tc

This PR adds a python function that generates the bn_layer forward/backward
pass.

The return value's order & contents are subject to change for the time
being.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

**How to evaluate:**
Run the python function

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
4 years agoRestructure inner data inside Tensor
Jihoon Lee [Wed, 1 Jul 2020 06:14:56 +0000 (15:14 +0900)]
Restructure inner data inside Tensor

**Changes proposed in this PR:**
- Change Tensor structure to enable sharing between Tensor
- Refactor Tensor ctors
- Add copy/move ctors & assignment operators

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
4 years agoAdd exception to TensorDim::setTensorDim
Jihoon Lee [Thu, 25 Jun 2020 10:23:53 +0000 (19:23 +0900)]
Add exception to TensorDim::setTensorDim

**Prerequisite**
- Add capi exception wrapper (later pr)

**Changes proposed in this PR:**
- Add `std::invalid_argument` to `TensorDim::setTensorDim`
- Fix `fc_layer` positive test, which did not have a unit declared
- Fix tests accordingly.
- Add TensorDim test (positive && negative)

**Todo**
- ~Add TensorDim test (positive && negative)~

See also: #233

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
4 years agoAdd exception boundary to capi
Jihoon Lee [Thu, 25 Jun 2020 12:34:08 +0000 (21:34 +0900)]
Add exception boundary to capi

**Changes proposed in this PR:**
- Add a functor that returns the error code corresponding to each `exception`

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
4 years ago[activation] Add missing config for activation layer
Parichay Kapoor [Thu, 2 Jul 2020 04:42:26 +0000 (13:42 +0900)]
[activation] Add missing config for activation layer

Add the missing initialization and input/output dimension setting for the activation layer
Also updated the previous dimension to be taken from the last computation layer

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
4 years agoAdd ml-api-common to capi
Jihoon Lee [Mon, 29 Jun 2020 08:34:30 +0000 (17:34 +0900)]
Add ml-api-common to capi

**Changes proposed in this PR:**
- Add capi-ml-common-devel to spec file
- Add `ml-api-common.h` for dummy
- Change error code accordingly

Resolves #75

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
4 years ago[ Pooling2D ] unittest cases for backwarding
jijoong.moon [Tue, 30 Jun 2020 00:05:28 +0000 (09:05 +0900)]
[ Pooling2D ] unittest cases for backwarding

This PR provides unittest cases for backwarding of global_max &
global_average

. global_max : forwarding / backwarding
. global_average : forwarding / backwarding

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
4 years ago[ini] Update ini parsing and format accepted/tizen/unified/20200706.064221 submit/tizen/20200706.011448
Parichay Kapoor [Fri, 26 Jun 2020 05:29:31 +0000 (14:29 +0900)]
[ini] Update ini parsing and format

Update the parsing and format of the ini input file
Remove the unnecessary declaration of layers at the top of the ini file
Add corresponding bug fixes and updates to the unittests

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
4 years ago[unittest] generate fc unittests
Parichay Kapoor [Wed, 24 Jun 2020 15:02:05 +0000 (00:02 +0900)]
[unittest] generate fc unittests

Added generated forward-pass unittest data for the fully connected layer, and corresponding unittests
Added minor bug fixes as well

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
4 years ago[genInput] Update generation of input for fc
Parichay Kapoor [Thu, 25 Jun 2020 11:52:21 +0000 (20:52 +0900)]
[genInput] Update generation of input for fc

Update generation of input for the fc layer's forward pass
backward is also supported, however not yet used

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
4 years ago[genInput] Make input data reproducible
Parichay Kapoor [Thu, 25 Jun 2020 11:49:48 +0000 (20:49 +0900)]
[genInput] Make input data reproducible

Make input data generation reproducible with fixed seeding

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
4 years ago[optimizer] Bug fix
Parichay Kapoor [Fri, 26 Jun 2020 09:51:43 +0000 (18:51 +0900)]
[optimizer] Bug fix

Tensor's copy constructor and copy assignment operator create a copy of the underlying vector
This led to a bug in the optimizer, which updated a copy of the weight rather than the weight itself
Fixed by taking a reference to the weight in the optimizer

Resolves #241

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
4 years ago[ Pooling2D ] global max / global average
jijoong.moon [Thu, 25 Jun 2020 07:02:34 +0000 (16:02 +0900)]
[ Pooling2D ] global max / global average

This PR provides global max / global average.
. forwarding global_max / global_average
. backwarding global_max / global_average

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
4 years ago[layer] Support get weights
Parichay Kapoor [Thu, 25 Jun 2020 12:40:43 +0000 (21:40 +0900)]
[layer] Support get weights

Support getWeights functionality equivalent to getGradients

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
4 years ago[gradients] Get gradient of each layer
Parichay Kapoor [Wed, 24 Jun 2020 06:46:13 +0000 (15:46 +0900)]
[gradients] Get gradient of each layer

Added function to get gradient of each layer
Each layer is responsible to fill in the list containing the gradients

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
4 years agoAdd tensor save / read test
Jihoon Lee [Thu, 25 Jun 2020 02:25:02 +0000 (11:25 +0900)]
Add tensor save / read test

This PR adds a tensor save / read test.

Though not directly related to this PR, save & read need better
error handling

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
4 years ago[optimizer] Optimizer to manage list of weights
Parichay Kapoor [Wed, 24 Jun 2020 05:08:28 +0000 (14:08 +0900)]
[optimizer] Optimizer to manage list of weights

Update the optimizer to handle a list of weights, rather than weights and bias individually,
as layers like RNN and LSTM will have more than just (w, b)

V2:
calculate changed to apply_gradient as it applies gradients
applied for conv layer as well

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
4 years ago[ Conv2D ] fix calculate parameter of optimizer
jijoong.moon [Wed, 24 Jun 2020 10:44:21 +0000 (19:44 +0900)]
[ Conv2D ] fix calculate parameter of optimizer

fix calculate function parameters

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
4 years ago[activation] Derivative for activation
Parichay Kapoor [Wed, 24 Jun 2020 05:56:59 +0000 (14:56 +0900)]
[activation] Derivative for activation

The derivative of softmax has been hand-crafted to be different from the others
Refer to https://github.com/nnstreamer/nntrainer/blob/2a650512813db6ce3bba828b5790066fbc655f14/nntrainer/src/fc_layer.cpp#L265 for the original implementation
Softmax requires softmax(x) as input for its derivative, while other activations require x as input for their derivative
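
For reference, the distinction can be written out in standard math (not code from this repo): with $y = \mathrm{softmax}(x)$, the Jacobian is naturally expressed in terms of the output $y$, so the prime function takes $\mathrm{softmax}(x)$ as its argument:

```latex
% y_i = e^{x_i} / \sum_k e^{x_k}
\frac{\partial y_i}{\partial x_j} = y_i \left( \delta_{ij} - y_j \right)
```

whereas an activation such as sigmoid, with $\sigma'(x) = \sigma(x)\,(1 - \sigma(x))$, can have its derivative evaluated directly from $x$.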

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
4 years agoAdd stride and contiguous flag to tensor
Jihoon Lee [Wed, 24 Jun 2020 02:00:10 +0000 (11:00 +0900)]
Add stride and contiguous flag to tensor

**Changes proposed in this PR:**
- Add stride
- Add a flag indicating whether the tensor is contiguous

This patch is an anchor for #217

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>