platform/core/ml/nntrainer.git
3 years ago[Databuffer] Fix queue is not emptied properly
Jihoon Lee [Tue, 6 Oct 2020 04:25:56 +0000 (13:25 +0900)]
[Databuffer] Fix queue is not emptied properly

There was an issue where the data buffer did not empty its queue properly.
This patch fixes the issue.

Resolves #621

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[build/meson] Add gmodule_dep for nnstreamer subplugin
Yongjoo Ahn [Mon, 5 Oct 2020 06:44:39 +0000 (15:44 +0900)]
[build/meson] Add gmodule_dep for nnstreamer subplugin

- Add gmodule_dep to build nnstreamer_filter_nntrainer
- `nnstreamer_subplugin.c` includes `gmodule.h`

Signed-off-by: Yongjoo Ahn <yongjoo1.ahn@samsung.com>
3 years ago[Filter/refactor] Refactor error handling logic
Jihoon Lee [Mon, 28 Sep 2020 07:43:11 +0000 (16:43 +0900)]
[Filter/refactor] Refactor error handling logic

g_assert terminates the program rather than giving callers a chance to
handle the error code. This patch replaces g_assert with return values.

v2. Remove assertion code and unify error paths to throw

- Remove assertion code; instead, throw to the top level and catch
everything in one place (see the sketch below)
- Merge `NNTrainer::init()` logic into the constructor
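
A minimal sketch of the pattern, with hypothetical names rather than the actual filter code: helpers throw, and a single boundary function catches and translates the exception into an error code for the caller.

```cpp
#include <stdexcept>
#include <iostream>

static void loadModel(const char *path) {
  if (path == nullptr)
    throw std::invalid_argument("model path is null");
  // ... construct the model here; more throws may happen ...
}

// Boundary function seen by the C caller: never lets an exception escape.
extern "C" int filter_open(const char *path) {
  try {
    loadModel(path);
    return 0;               // success
  } catch (const std::exception &e) {
    std::cerr << "open failed: " << e.what() << '\n';
    return -1;              // error code instead of aborting via g_assert
  }
}

int main() { return filter_open(nullptr) == -1 ? 0 : 1; }
```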

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[CS] Add inference pipeline
Jihoon Lee [Fri, 25 Sep 2020 07:41:22 +0000 (16:41 +0900)]
[CS] Add inference pipeline

**Changes proposed in this PR:**
- Add pipeline for the inference

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[ Packaging ] Meta Package for NNTrainer
jijoong.moon [Mon, 5 Oct 2020 02:54:27 +0000 (11:54 +0900)]
[ Packaging ] Meta Package for NNTrainer

NNTrainer Meta :
 - nntrainer-core
 - nnstreamer-nntrainer if nnstreamer_filter is on
 - capi-nntrainer if with tizen

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
3 years ago[neuralnet] Add alternatives to getLoss
Parichay Kapoor [Mon, 5 Oct 2020 05:05:09 +0000 (14:05 +0900)]
[neuralnet] Add alternatives to getLoss

getLoss returns the current loss of the model, which is based on the
previous batch of data the network ran on. This does not allow retrieving
the training or validation loss. getTrainingLoss and getValidationLoss are
added for this purpose, and the getLoss description is updated to include
this information (see the sketch below).

The MNIST application was using getLoss(), which returns the loss of the
last element the network ran on. This value changed with #600, since with
#600 the last element is a batch of data rather than a single data element.
The application is updated to compare all three losses against updated
values, so this patch fixes that bug in the main branch as well.

Resolves #617
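
A hypothetical sketch (not the nntrainer class itself) of why three separate getters are useful: the last-iteration loss, the epoch training loss, and the validation loss are distinct values.

```cpp
#include <iostream>

struct ModelStats {
  float loss = 0.0f;            // loss of the most recent batch
  float training_loss = 0.0f;   // averaged loss over the training epoch
  float validation_loss = 0.0f; // loss measured on the validation set

  float getLoss() const { return loss; }
  float getTrainingLoss() const { return training_loss; }
  float getValidationLoss() const { return validation_loss; }
};

int main() {
  ModelStats stats{0.42f, 0.35f, 0.47f};
  std::cout << stats.getLoss() << ' ' << stats.getTrainingLoss() << ' '
            << stats.getValidationLoss() << '\n';
}
```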

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[CS] Enlarge fonts and s/test/eval
Jihoon Lee [Fri, 25 Sep 2020 02:59:50 +0000 (11:59 +0900)]
[CS] Enlarge fonts and s/test/eval

**Changes proposed in this PR:**
- Enlarge font size, and change test to eval

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years agoAdd print preset
Jihoon Lee [Thu, 24 Sep 2020 00:52:18 +0000 (09:52 +0900)]
Add print preset

**Changes proposed in this PR:**
- Add print preset for model and layer
- Add model print flags
- move printFlagEnums and print(out, flag) to private

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Cc: Parichay Kapoor <pk.kapoor@samsung.com>
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[batchsize] Change the semantics of batch size
Parichay Kapoor [Thu, 24 Sep 2020 05:09:50 +0000 (14:09 +0900)]
[batchsize] Change the semantics of batch size

This patch changes the semantics of how batch size is used in the library:
1. batch_size is no longer a property of the layer. It can still be set
externally by the model; the setter is still public but will soon be made
private.
2. The batch_size of the input/label/derivative tensors provided to the
forwarding/backwarding functions can no longer be arbitrary. It must match
the batch_size set on the model and the layer. This change in semantics
supports a long-term design where memory for input/output is pre-allocated.
3. batch_size can now be set at train time, unlike the earlier design where
it had to be set at init time. This follows from the design change that
memory for the model weights can be allocated at init time, since it does
not depend on the batch size, while memory for input/output should be
allocated at train/inference time, where the batch size can differ (see the
sketch below). In the current design, memory is allocated every iteration.
Later, when memory is allocated once and reused, a change in batch size will
reallocate the memory once at the first iteration (rather than at init
time). This change is necessary to allow running inference without
initializing the model again.

V2:
Updated validation to run on a whole batch of data at once.
Also updated Tensor.argmax() to operate per batch rather than on the whole data.
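
A simplified sketch, with hypothetical names, of the allocation split described in point 3: weight memory depends only on the layer shape and is allocated at init time, while input/output buffers depend on the batch size and are sized when training or inference starts.

```cpp
#include <cstddef>
#include <iostream>
#include <vector>

struct ToyDenseLayer {
  size_t in_dim = 0, out_dim = 0;
  std::vector<float> weights;   // batch independent
  std::vector<float> input;     // batch dependent
  std::vector<float> output;    // batch dependent

  void init(size_t in, size_t out) {
    in_dim = in;
    out_dim = out;
    weights.resize(in_dim * out_dim);   // allocated once at init time
  }

  void setBatch(size_t batch) {         // called at train/inference time
    input.resize(batch * in_dim);
    output.resize(batch * out_dim);
  }
};

int main() {
  ToyDenseLayer layer;
  layer.init(784, 10);
  layer.setBatch(32);  // train with batch 32
  layer.setBatch(1);   // later run inference with batch 1, no re-init needed
  std::cout << layer.weights.size() << '\n';
}
```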

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[weight] weight class to simplify headers
Parichay Kapoor [Tue, 22 Sep 2020 12:51:45 +0000 (21:51 +0900)]
[weight] weight class to simplify headers

Added a weight class to simplify headers.
All weight-related enums and properties go to the weight header
rather than being dumped in layer or optimizer.

V2:
Update swap to swap gradients even if not trainable (see the sketch below)
Update swap of Tensor, TensorDim to be friend functions
Copy and move assignments are now default
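
An illustrative sketch (not the actual nntrainer header) of a weight type with a friend swap that exchanges gradients unconditionally, even when the weight is not trainable, and defaulted copy/move assignments.

```cpp
#include <utility>
#include <vector>

class ToyWeight {
public:
  ToyWeight(std::vector<float> v, bool trainable_)
    : variable(std::move(v)), gradient(variable.size(), 0.0f),
      trainable(trainable_) {}

  friend void swap(ToyWeight &lhs, ToyWeight &rhs) noexcept {
    using std::swap;
    swap(lhs.variable, rhs.variable);
    swap(lhs.gradient, rhs.gradient);   // swapped regardless of trainable
    swap(lhs.trainable, rhs.trainable);
  }

  // copy and move assignments left as the compiler-generated defaults
  ToyWeight(const ToyWeight &) = default;
  ToyWeight &operator=(const ToyWeight &) = default;
  ToyWeight(ToyWeight &&) noexcept = default;
  ToyWeight &operator=(ToyWeight &&) noexcept = default;

private:
  std::vector<float> variable;
  std::vector<float> gradient;
  bool trainable;
};

int main() {
  ToyWeight a({1, 2, 3}, true), b({4, 5, 6}, false);
  swap(a, b);
}
```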

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[CS] Add visual cue when training is done
Jihoon Lee [Wed, 23 Sep 2020 09:28:15 +0000 (18:28 +0900)]
[CS] Add visual cue when training is done

**Changes proposed in this PR:**
- Visually notify when training is done
- Add go to main menu when training is done
- Go to main menu button shows the final best accuracy

Resolves #572, #574

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[app/test] Enable app-test with enable-test
Parichay Kapoor [Fri, 25 Sep 2020 08:05:30 +0000 (17:05 +0900)]
[app/test] Enable app-test with enable-test

Enable application testing with enable-test only

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[MNIST] Add output verification with tizen build
Parichay Kapoor [Fri, 25 Sep 2020 06:31:17 +0000 (15:31 +0900)]
[MNIST] Add output verification with tizen build

Add the training loss value verification with tizen build

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[transferlearning] Add output verification with gtest
Parichay Kapoor [Fri, 25 Sep 2020 05:56:04 +0000 (14:56 +0900)]
[transferlearning] Add output verification with gtest

Add output verification with gtest for transfer learning application

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[gitignore] Update for android build
Parichay Kapoor [Wed, 23 Sep 2020 07:02:38 +0000 (16:02 +0900)]
[gitignore] Update for android build

Update gitignore for the Android build to ignore *.o.d files and the downloaded iniparser

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[CS] Prevent HW backkey press while training
Jihoon Lee [Wed, 23 Sep 2020 07:53:53 +0000 (16:53 +0900)]
[CS] Prevent HW backkey press while training

**Changes proposed in this PR:**
- Disable backkey press while training
- Move back_key callback to `presenter`
- Prune unnecessary argument for view/create_layout

Resolves #573

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[CS/Refactor] Migrate to MVP style
Jihoon Lee [Tue, 22 Sep 2020 07:23:22 +0000 (16:23 +0900)]
[CS/Refactor] Migrate to MVP style

This patch lays out the CustomShortcut refactor. It is being migrated to
the Model-View-Presenter pattern for clarity.

**Changes proposed in this PR:**
- Add a `presenter_*` prefix to the main presenter functions
- Add a `util_*` prefix to the data utility functions
- Expose some static functions residing in `view.c`
- Strictly segregate view changes from data changes

See also #577

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[model] Merge model finalize to destructor
Parichay Kapoor [Tue, 22 Sep 2020 02:03:58 +0000 (11:03 +0900)]
[model] Merge model finalize to destructor

Merge model finalize into the destructor itself.
The user no longer needs to call finalize explicitly.

Other minor updates:
- Merge all train methods into one
- Make isInitializable() private. Move some of the name checking from this
function to when the layer itself is created.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[transfer learning] Update to use softmax prediction
Parichay Kapoor [Wed, 23 Sep 2020 05:24:31 +0000 (14:24 +0900)]
[transfer learning] Update to use softmax prediction

Update the transfer learning application to use the softmax prediction
value, giving a prediction only if the softmax value is beyond a certain threshold

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[License] s/Apache-2.0-only/Apache-2.0
Jihoon Lee [Thu, 24 Sep 2020 05:06:57 +0000 (14:06 +0900)]
[License] s/Apache-2.0-only/Apache-2.0

The Apache-2.0 license identifier does not have to state `-only`, so it is removed.

This resolves `findings #10` from nnstreamer/#2765

**Self evaluation:**
1. Build test: [ ]Passed [ ]Failed [x]Skipped
2. Run test: [ ]Passed [ ]Failed [x]Skipped

Cc: Jaeyun-jung <jy1210.jung@samsung.com>
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[refactor] Change enums to enum class
Parichay Kapoor [Tue, 22 Sep 2020 06:27:37 +0000 (15:27 +0900)]
[refactor] Change enums to enum class

Update enums to enum class before headers are exposed
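
A minimal example of the change: a plain enum leaks its enumerators into the surrounding scope and converts silently to int, while an enum class keeps them scoped and strongly typed, which matters once the headers become public. The enumerator names here are illustrative.

```cpp
enum ActiType { ACT_TANH, ACT_SIGMOID };              // before: unscoped
enum class ActivationType { ACT_TANH, ACT_SIGMOID };  // after: scoped

int main() {
  int implicit = ACT_TANH;                         // compiles (unscoped enum)
  ActivationType a = ActivationType::ACT_TANH;     // must be qualified
  return implicit + static_cast<int>(a) * 0;
}
```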

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[refactor] Cleanup headers
Parichay Kapoor [Tue, 22 Sep 2020 05:17:57 +0000 (14:17 +0900)]
[refactor] Cleanup headers

This patch cleans up the headers of the internal code
so that the headers can be directly exposed for the C++ API.

Also removed applyIf from lazy_tensor (after checking with @zhoonit),
as it exposed a #define in the header and was not used.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[CS] Add inference page
Jihoon Lee [Mon, 21 Sep 2020 05:41:51 +0000 (14:41 +0900)]
[CS] Add inference page

**Changes proposed in this PR:**
- Change home to accommodate inference
- Route draw inference
- Add `draw_target` to use appdata to address label

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[layers] Move constructors must be noexcept
Parichay Kapoor [Mon, 21 Sep 2020 07:42:27 +0000 (16:42 +0900)]
[layers] Move constructors must be noexcept

Make all move constructors noexcept (see the sketch below).
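
A small illustration of why this matters: containers such as std::vector only move their elements during reallocation when the move constructor is noexcept, otherwise they fall back to copying. The class name here is illustrative.

```cpp
#include <type_traits>
#include <utility>
#include <vector>

class ToyLayer {
public:
  ToyLayer() = default;
  ToyLayer(ToyLayer &&rhs) noexcept : weights(std::move(rhs.weights)) {}
  ToyLayer &operator=(ToyLayer &&rhs) noexcept {
    weights = std::move(rhs.weights);
    return *this;
  }

private:
  std::vector<float> weights;
};

static_assert(std::is_nothrow_move_constructible<ToyLayer>::value,
              "move constructor must be noexcept");

int main() {
  std::vector<ToyLayer> layers(4);
  layers.emplace_back();   // reallocation moves the elements, no copies
}
```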

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[TransferLearning] Update documentation
Parichay Kapoor [Mon, 21 Sep 2020 03:55:34 +0000 (12:55 +0900)]
[TransferLearning] Update documentation

Update documentation for the transfer learning application

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[CustomShortcut] Update docs and model
Jihoon Lee [Fri, 18 Sep 2020 06:54:00 +0000 (15:54 +0900)]
[CustomShortcut] Update docs and model

**Changes proposed in this PR:**
- Update model to use 10 data
- Update readme.md

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[ Application ] Make Util for Application
jijoong.moon [Mon, 21 Sep 2020 07:41:36 +0000 (16:41 +0900)]
[ Application ] Make Util for Application

Currently some applications use bitmap_helper, and each has its own copy.
Rather than doing it this way, it is better to build a static lib and link
against it.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
3 years ago[ Doc ] Add Documentation of MNIST Application
jijoong.moon [Fri, 18 Sep 2020 07:25:54 +0000 (16:25 +0900)]
[ Doc ] Add Documentation of MNIST Application

Add README.md and images for MNIST.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
3 years ago[ Application ] Fix Logistic Regression Example to work
jijoong.moon [Fri, 18 Sep 2020 02:13:34 +0000 (11:13 +0900)]
[ Application ] Fix Logistic Regression Example to work

The current implementation of the Logistic Regression example is not working.
This PR includes fixes.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
3 years ago[CS] Remove leading underscore
Jihoon Lee [Mon, 21 Sep 2020 01:28:29 +0000 (10:28 +0900)]
[CS] Remove leading underscore

**Changes proposed in this PR:**
- Remove underscore to unify code style in custom shortcut

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years agoFix bn layer backwarding
Jihoon Lee [Thu, 17 Sep 2020 06:14:11 +0000 (15:14 +0900)]
Fix bn layer backwarding

**Changes proposed in this PR:**
- Change formula to calculate bn layer backward propagation
- Enable conv2d test case
- Add test case when batch is 1
- Renew test case to use non constant grad_ys

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[transferlearning] Add callback for testing model
Parichay Kapoor [Fri, 18 Sep 2020 00:44:02 +0000 (09:44 +0900)]
[transferlearning] Add callback for testing model

Added callback for testing the model
The callback prints the top-1 predicted output for a given image file
Increased the number of epochs back to 1000 to achieve better accuracy

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[application] Transfer learning using just C-API
Parichay Kapoor [Wed, 16 Sep 2020 12:14:23 +0000 (21:14 +0900)]
[application] Transfer learning using just C-API

Update the transfer learning application to use just the C-API.
This divides the application into 3 parts:
1. Running the fixed part of the model - done using the nnstreamer-single C-API
2. Training the trainable part of the model - done with the nntrainer C-API
3. Running the testing of the trainable part of the model - done with the nnstreamer-pipeline C-API and the nntrainer tensor filter extension

The outputs of section 1 are kept loaded in memory rather than saved to disk and loaded back.
The application supports section 1 using tflite directly for non-Tizen platforms.
Section 3 is not supported on non-Tizen platforms.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[ Application ] Doc for Logistic Regression
jijoong.moon [Fri, 18 Sep 2020 02:30:19 +0000 (11:30 +0900)]
[ Application ] Doc for Logistic Regression

Add doc for logistic regression application.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
3 years ago[Docs] Update example link
Jihoon Lee [Fri, 18 Sep 2020 03:22:49 +0000 (12:22 +0900)]
[Docs] Update example link

This patch updates the example links broken by moving folders.

Resolves #567

**Self evaluation:**
1. Build test: [ ]Passed [ ]Failed [x]Skipped
2. Run test: [ ]Passed [ ]Failed [x]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Tensor] Fix bug in the broadcast support
Jihoon Lee [Thu, 17 Sep 2020 10:43:48 +0000 (19:43 +0900)]
[Tensor] Fix bug in the broadcast support

Fix a bug where the strides and buffer axis were miscalculated

**Changes proposed in this PR:**
- Clarified consecutive-one strategy and same-stride strategy
- Change last stride to 0 only if it is using consecutive-one strategy
- Add regression test

Resolves: #559

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Cc: Jijoong Moon <jijoong.moon@samsung.com>
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[ Doc ] Modify image and Doc
jijoong.moon [Thu, 17 Sep 2020 05:08:59 +0000 (14:08 +0900)]
[ Doc ] Modify image and Doc

Modify the image and the how-to-generate-tflite section of the transfer learning document.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
3 years ago[ Doc ] Add doc for 'How to generate tflite for feature extractor'
jijoong.moon [Wed, 16 Sep 2020 08:27:36 +0000 (17:27 +0900)]
[ Doc ] Add doc for 'How to generate tflite for feature extractor'

. Add an md file that includes 'how to generate tflite for feature extractor'
. Add import_pb_to_tensorboard.py to view the frozen model in TensorBoard
. Add some pictures

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
3 years ago[ DataBuffer ] Fix non-deterministic behavior when validation is false
jijoong.moon [Thu, 17 Sep 2020 01:01:06 +0000 (10:01 +0900)]
[ DataBuffer ] Fix non-deterministic behavior when validation is false

Add a check and return INVALID_PARAM when the validation bit is false.
If it returns INVALID_PARAM, then train_run returns INVALID_PARAM as
well.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
3 years ago[application] Added testing for the trained model
Parichay Kapoor [Wed, 16 Sep 2020 02:12:23 +0000 (11:12 +0900)]
[application] Added testing for the trained model

Added testing for the trained model using the NNStreamer C-API.
This testing of the trained model works on Tizen only.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[applications] Update applications documentation
Parichay Kapoor [Tue, 15 Sep 2020 10:19:20 +0000 (19:19 +0900)]
[applications] Update applications documentation

Update the documentation for the applications

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[Application] Update application to use C-API
Parichay Kapoor [Thu, 10 Sep 2020 07:58:54 +0000 (16:58 +0900)]
[Application] Update application to use C-API

Update the draw-classification application to use the C-API.
Also polish the application further.
Update the Android application build to build with the C-API.
Update meson to use the C-API rather than nntrainer directly.

Inference in the application still remains to be done.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[Android] Add building of C-API
Parichay Kapoor [Tue, 15 Sep 2020 10:04:13 +0000 (19:04 +0900)]
[Android] Add building of C-API

Add building of C-API for android
Also add corresponding error fix for inclusion

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years agoAdd bn layer properties
Jihoon Lee [Tue, 15 Sep 2020 10:02:45 +0000 (19:02 +0900)]
Add bn layer properties

This patch adds properties for the bn layer

**Changes proposed in this PR:**
- momentum
- mu_initializer
- var_initializer
- gamma_initializer
- beta_initializer

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years agoAdd test for elem-wise ops broadcasting
Jihoon Lee [Tue, 15 Sep 2020 08:31:25 +0000 (17:31 +0900)]
Add test for elem-wise ops broadcasting

This patch adds tests verifying that element-wise ops work correctly with
broadcasting support

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years agoAdd broadcasting support for elem-wise ops
Jihoon Lee [Tue, 15 Sep 2020 08:31:02 +0000 (17:31 +0900)]
Add broadcasting support for elem-wise ops

The batch normalization layer needs extensive broadcasting support beyond
batch-wise add.
Add broadcasting to the `operation_i` family while keeping the cblas
vectorization as much as possible (see the sketch below).

Tests will be added in a following commit
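
A sketch of the general idea, not the nntrainer implementation: when adding a per-channel value onto an (N, C, H, W) tensor, each inner H*W run is contiguous, so the run can still go through cblas_saxpy instead of a scalar loop.

```cpp
#include <cblas.h>
#include <cstdio>
#include <vector>

void add_channel_bias(std::vector<float> &x, const std::vector<float> &bias,
                      int N, int C, int H, int W) {
  const int run = H * W;                 // contiguous span per (n, c)
  std::vector<float> ones(run, 1.0f);
  for (int n = 0; n < N; ++n)
    for (int c = 0; c < C; ++c)
      // y := alpha * x + y with alpha = bias[c], x = ones, y = the run
      cblas_saxpy(run, bias[c], ones.data(), 1,
                  x.data() + (n * C + c) * run, 1);
}

int main() {
  std::vector<float> x(2 * 3 * 2 * 2, 1.0f), bias = {0.f, 1.f, 2.f};
  add_channel_bias(x, bias, 2, 3, 2, 2);
  std::printf("%f\n", x[4]); // first element of channel 1 => 1 + 1 = 2
}
```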

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[meson] Move dependencies to correct location
Parichay Kapoor [Wed, 16 Sep 2020 02:24:54 +0000 (11:24 +0900)]
[meson] Move dependencies to correct location

Move dependencies of glib and gstreamer to nnstreamer plugin from applications

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years agoUpdate bn layer forwarding
Jihoon Lee [Fri, 4 Sep 2020 05:19:50 +0000 (14:19 +0900)]
Update bn layer forwarding

**Changes proposed in this PR:**
- Update bn layer to a more naive implementation
- Update the broken bn layer test cases

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years agoAdd gitignore for python generated files
Jihoon Lee [Mon, 14 Sep 2020 07:49:29 +0000 (16:49 +0900)]
Add gitignore for python generated files

**Changes proposed in this PR:**
- Add python generated extensions to gitignore

**Self evaluation:**
1. Build test: [ ]Passed [ ]Failed [x]Skipped
2. Run test: [ ]Passed [ ]Failed [x]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[unittest] install unittest dependent files
Parichay Kapoor [Tue, 15 Sep 2020 05:53:31 +0000 (14:53 +0900)]
[unittest] install unittest dependent files

Install missing unittest dependent files when installing the unittest

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[ Test ] enable unittest
jijoong.moon [Wed, 16 Sep 2020 00:31:21 +0000 (09:31 +0900)]
[ Test ] enable unittest

This PR enables the unittests for GBS builds

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
3 years ago[ Bug ] Fix Tensor Copy
jijoong.moon [Tue, 15 Sep 2020 09:57:00 +0000 (18:57 +0900)]
[ Bug ] Fix Tensor Copy

We only copy if the tensor length == 0.

Related issues: #539

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
3 years ago[ APP ] Add VGG 16 Test Configuration for Cifar100
jijoong.moon [Mon, 14 Sep 2020 11:16:50 +0000 (20:16 +0900)]
[ APP ] Add VGG 16 Test Configuration for Cifar100

This PR includes a configuration file (vgg.ini) for VGG 16 with bn layers,
which is:

 conv3_64 - conv3_64, max_pooling, conv3_128 - conv3_128, max_pooling,
 conv3_256 - conv3_256 - conv3_256, max_pooling, conv3_512 - conv3_512
 - conv3_512, max_pooling, conv3_512 - conv3_512 - conv3_512,
 bn_layer,  max_pooling, fc_4096, bn_layer, fc_4096, bn_layer, fc_100,
 softmax

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
3 years ago[nnstreamer-plugin] Support to run unittest of nnstreamer plugin
Parichay Kapoor [Tue, 15 Sep 2020 04:27:30 +0000 (13:27 +0900)]
[nnstreamer-plugin] Support to run unittest of nnstreamer plugin

Added support to run the nnstreamer plugin unittests in the Tizen build on the CI server

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[Applications] Refactor applications
Parichay Kapoor [Thu, 10 Sep 2020 05:18:00 +0000 (14:18 +0900)]
[Applications] Refactor applications

Refactor applications
- Classification -> TransferLearning/CIFAR_classification
- Training -> TransferLearning/Draw_classification
- mnist -> MNIST
- Delete TIZEN_CAPI as it overlaps with the unittests without even checking the final accuracy

Correspondingly update meson and some README.md

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[tensor] Update default constructor of tensor to not throw
Parichay Kapoor [Mon, 7 Sep 2020 02:49:54 +0000 (11:49 +0900)]
[tensor] Update default constructor of tensor to not throw

The default constructor does no heap memory allocation (its size is 0).
However, it can still throw, which isn't right.
This patch updates the tensor default constructor to not throw (there is no
heap allocation), which creates fewer issues with static checkers that
otherwise require try/catch around tensor declarations.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[bn] bug fix for setting optimizer for bn layer
Parichay Kapoor [Fri, 11 Sep 2020 02:42:41 +0000 (11:42 +0900)]
[bn] bug fix for setting optimizer for bn layer

This patch fixes setting the optimizer for the bn layer.
Although the weight updates are called only for the trainable params,
the optimizer was initialized with all the weights, including ones that
are not trainable and have no gradients.
This results in unnecessary memory allocation for them and also
creates issues for #517.

This patch adds a trainable_param_size count and a getTrainableParams() for
the layer, and only the trainable params of the layer, instead of all
params, are passed to the optimizer.

An alternative was for the optimizer to check the gradient size and ignore
the ones with size 0, but that is over-engineering at this point.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[ Release ] NNTrainer 0.1.1 Release
jijoong.moon [Wed, 23 Sep 2020 06:29:36 +0000 (15:29 +0900)]
[ Release ] NNTrainer 0.1.1 Release

NNTrainer v0.1.1 is released

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
3 years ago[dist/tizen] Package tensor filter properly
Parichay Kapoor [Wed, 23 Sep 2020 06:27:03 +0000 (15:27 +0900)]
[dist/tizen] Package tensor filter properly

The nntrainer tensor filter was packaged with nntrainer, which isn't right.
Now the nntrainer tensor filter is packaged separately and depends on both nnstreamer and nntrainer.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[inference] Inference bug fix
Parichay Kapoor [Wed, 23 Sep 2020 05:21:09 +0000 (14:21 +0900)]
[inference] Inference bug fix

Fix inference to run the last activation layer when it has been merged
into the loss layer.
This is done by supporting the forwarding(in) method for the loss layer,
which was earlier not supported.
We just have to be careful that this is not called in any case other than
inference.

Later this will be changed to be handled with a train/eval mode (which I
think @zhoonit will add for the bn layer)

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[TensorDim] Fix index checks
Jihoon Lee [Fri, 18 Sep 2020 02:17:48 +0000 (11:17 +0900)]
[TensorDim] Fix index checks

**Changes proposed in this PR:**
- Fix index validation to be appropriate

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[ LICENCE ] Fix license header
jijoong.moon [Wed, 16 Sep 2020 07:11:00 +0000 (16:11 +0900)]
[ LICENCE ] Fix license header

NNTrainer is not LGPL; it is Apache-2.0. Fix the headers accordingly.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: HyesuAhn <linim@naver.com>
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
3 years ago[ Bug ] Free content of g_file_get_contents
jijoong.moon [Tue, 15 Sep 2020 07:37:03 +0000 (16:37 +0900)]
[ Bug ] Free content of g_file_get_contents

Free the content when returning

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
3 years ago[dataset] Support setting user_data for generator dataset
Parichay Kapoor [Fri, 11 Sep 2020 07:23:40 +0000 (16:23 +0900)]
[dataset] Support setting user_data for generator dataset

Support setting user_data for datasets that use generators.
For now, there is only one user_data shared by all three train/valid/test datasets.
The format for setting user_data is
"ml_train_dataset_set_property(dataset, "user_data", (void *) data, NULL)"

It is passed as two consecutive arguments, with the list terminated by NULL
(see the usage sketch below)
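
A usage sketch for the property described above. The set_property call is taken from this commit message; the generator callback signature (with user_data as its final argument), the header name, and the error handling are assumptions, so treat this as illustrative rather than canonical.

```cpp
#include <nntrainer.h>   // C-API header (assumed)

struct MyData { int cursor; };

// Assumed generator callback shape: fills one batch, sets *last at the end.
static int train_cb(float **input, float **label, bool *last,
                    void *user_data) {
  MyData *data = static_cast<MyData *>(user_data);  // per-dataset state
  // ... fill input/label starting from data->cursor ...
  *last = true;
  (void)input; (void)label; (void)data;
  return 0;   // 0 == success
}

int main() {
  ml_train_dataset_h dataset;
  static MyData data = {0};

  ml_train_dataset_create_with_generator(&dataset, train_cb, NULL, NULL);
  // one user_data shared by train/valid/test; NULL terminates the list
  ml_train_dataset_set_property(dataset, "user_data", (void *)&data, NULL);

  ml_train_dataset_destroy(dataset);
  return 0;
}
```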

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[ NNSTREAMER ] Add nnstreamer filter for nntrainer
jijoong.moon [Fri, 11 Sep 2020 04:04:33 +0000 (13:04 +0900)]
[ NNSTREAMER ] Add nnstreamer filter for nntrainer

In this PR,

 . NNStreamer filter subplugin is implemented.
 . Test Cases for subplugin

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
3 years ago[ Example ] Add Inference Mode & Example
jijoong.moon [Mon, 7 Sep 2020 07:13:29 +0000 (16:13 +0900)]
[ Example ] Add Inference Mode & Example

This PR includes:
. Add inference mode
  Add 'is_train' to identify the mode.
  : The default is training mode (true). If this is set to false, it is
    inference mode. In inference mode, the data buffer is not initialized.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
3 years ago[Tensor] Add multiple axes support
Jihoon Lee [Fri, 4 Sep 2020 07:58:23 +0000 (16:58 +0900)]
[Tensor] Add multiple axes support

This patch adds multi-axis support for sum and average,
which is essentially needed to build bn_layer (see the sketch below).
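
A plain-loop sketch of what a multi-axis sum means for a 4-D (N, C, H, W) tensor: reducing over axes {0, 2, 3} leaves one value per channel, which is the reduction a batch-normalization layer needs. This shows the semantics only, not the nntrainer implementation.

```cpp
#include <cstdio>
#include <vector>

std::vector<float> sum_over_nhw(const std::vector<float> &x,
                                int N, int C, int H, int W) {
  std::vector<float> out(C, 0.0f);
  for (int n = 0; n < N; ++n)
    for (int c = 0; c < C; ++c)
      for (int h = 0; h < H; ++h)
        for (int w = 0; w < W; ++w)
          out[c] += x[((n * C + c) * H + h) * W + w];
  return out;
}

int main() {
  std::vector<float> x(2 * 3 * 2 * 2, 1.0f);
  auto per_channel = sum_over_nhw(x, 2, 3, 2, 2);
  std::printf("%f\n", per_channel[0]);   // 2 * 2 * 2 = 8
}
```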

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[ DataBuffer ] Fix for corner cases
jijoong.moon [Thu, 10 Sep 2020 01:52:10 +0000 (10:52 +0900)]
[ DataBuffer ] Fix for corner cases

When the buffer thread broadcasts "DATA_END" while the main thread is
getting data from the buffer, the main thread returns even if there is
still data in the buffer.

This PR fixes this situation.

Related issues: #427

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
3 years ago[ BLAS ] Move to use CUBLAS for gemm
jijoong.moon [Fri, 4 Sep 2020 00:45:33 +0000 (09:45 +0900)]
[ BLAS ] Move to use CUBLAS for gemm

Move CUBLAS gemm routine into blas_interface

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
3 years ago[ DataBuffer ] Hot fix for data buffer
jijoong.moon [Tue, 8 Sep 2020 07:34:36 +0000 (16:34 +0900)]
[ DataBuffer ] Hot fix for data buffer

. Remove unnecessary lock

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
3 years ago[ Coverity ] Fix Race Condition issue about databuffer
jijoong.moon [Mon, 7 Sep 2020 10:48:02 +0000 (19:48 +0900)]
[ Coverity ] Fix Race Condition issue about databuffer

. Add a lock when setting the data-ready flag.
. Change the order of setting the condition.

. Fix non-deterministic behavior of the data buffer.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
3 years ago[ SVACE ] Fix minor issues caused by svace
Jihoon Lee [Mon, 7 Sep 2020 08:29:22 +0000 (17:29 +0900)]
[ SVACE ] Fix minor issues caused by svace

**Changes proposed in this PR:**
- Fix recursive header inclusion
- Add an additional destroy and overlapping validation

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years agoFix normalization for specific case
Jihoon Lee [Mon, 7 Sep 2020 04:54:39 +0000 (13:54 +0900)]
Fix normalization for specific case

**Changes proposed in this PR:**
- Fix normalization when min == max (see the sketch below)
- Add regression test from tct
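
A sketch of the corner case being fixed: min-max normalization divides by (max - min), so a constant tensor (min == max) needs an explicit guard instead of dividing by zero. The fallback value chosen here is illustrative.

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

void normalize(std::vector<float> &v) {
  auto [mn_it, mx_it] = std::minmax_element(v.begin(), v.end());
  float mn = *mn_it, mx = *mx_it;
  if (mx == mn) {                       // constant input: nothing to scale
    std::fill(v.begin(), v.end(), 0.0f);
    return;
  }
  for (float &x : v)
    x = (x - mn) / (mx - mn);
}

int main() {
  std::vector<float> constant(4, 3.0f);
  normalize(constant);                  // no NaN / division by zero
  std::printf("%f\n", constant[0]);
}
```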

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Docs] Fix typo on classification
Jihoon Lee [Mon, 7 Sep 2020 07:02:55 +0000 (16:02 +0900)]
[Docs] Fix typo on classification

**Changes proposed in this PR:**
- Fix a typo that wasn't updated after the ini spec change

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[ SVACE ] Using rand_r instead of rand
jijoong.moon [Mon, 7 Sep 2020 02:48:51 +0000 (11:48 +0900)]
[ SVACE ] Using rand_r instead of rand

. Classification/jni/main.cpp

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
3 years ago[ Doc ] Add explanations about Applications
jijoongmoon [Thu, 20 Aug 2020 04:42:17 +0000 (13:42 +0900)]
[ Doc ] Add explanations about Applications

Modify the document of the Training example (Transfer Learning):
. Applications/Training/README.md

Signed-off-by: jijoongmoon <jijoong.moon@samsung.com>
3 years ago[ CONV2D ] Add explanation about how to calculate conv2d.
jijoong.moon [Thu, 3 Sep 2020 06:57:06 +0000 (15:57 +0900)]
[ CONV2D ] Add explanation about how to calculate conv2d.

Add explanation about conv2d:
. forwarding
. Calculate DelK in backwarding
. Calculate return derivative in backwarding

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
3 years ago[ BLAS ] Add more blas func to support and update tensor.cpp
jijoong.moon [Thu, 3 Sep 2020 03:19:29 +0000 (12:19 +0900)]
[ BLAS ] Add more blas func to support and update tensor.cpp

. add sscal, snrm2, sgemv, scopy
. modify tensor.cpp to use blas_interface

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
3 years agoFix app test fail
Jihoon Lee [Thu, 3 Sep 2020 03:10:22 +0000 (12:10 +0900)]
Fix app test fail

Fix the app classification C-API functional test failing with a negative number

- Add destroy routine
- s/model_path/save_path/

Resolves #501

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[unittest] Add verification of accuracy for training metrics
Parichay Kapoor [Wed, 2 Sep 2020 07:35:40 +0000 (16:35 +0900)]
[unittest] Add verification of accuracy for training metrics

This patch adds verification of training metrics like loss and accuracy
in the C-API test cases

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[ MNIST ] Add save weight initialization data
jijoongmoon [Mon, 31 Aug 2020 11:51:17 +0000 (20:51 +0900)]
[ MNIST ] Add save weight initialization data

This PR includes code to save the initialized weight data as nntrainer
input. This can be read using NN.readModel(), except for the optimizer
read in the fully connected layer.

Signed-off-by: jijoongmoon <jijoong.moon@samsung.com>
3 years ago[ UTIL ] Add BLAS Interface
jijoong.moon [Fri, 28 Aug 2020 01:55:36 +0000 (10:55 +0900)]
[ UTIL ] Add BLAS Interface

In order to handle BLAS easily, a blas interface is added:
. blas_interface.cpp
. blas_interface.h

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
3 years ago[CAPI] Fix potential issue in capi
Jihoon Lee [Wed, 2 Sep 2020 02:04:42 +0000 (11:04 +0900)]
[CAPI] Fix potential issue in capi

**Changes proposed in this PR:**
- Change logic to feature check first for `ml_train_construct_with_conf`
- Add exception boundary to all make_shared
- Add noexcept to layer::getName since the error is not handled in capi

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[dataset] check bufsize at init
Parichay Kapoor [Tue, 1 Sep 2020 02:37:39 +0000 (11:37 +0900)]
[dataset] check bufsize at init

This patch changes the initialization of the buffer size.
The buffer size is now initialized to 1 if the user does not set it.
The proper checks on the buffer size are then performed at init,
and the buffer size is resized appropriately.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[ TOKEN ] Update weight decay to weight regularizer
jijoong.moon [Wed, 2 Sep 2020 03:20:22 +0000 (12:20 +0900)]
[ TOKEN ] Update weight decay to weight regularizer

Change Weight_Decay to Weight_Regularizer
Change Weight_Decay_Lambda to Weight_Regularizer_Constant

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
3 years ago[ UNITTEST ] Integer Unit test for conv2d
jijoong.moon [Fri, 28 Aug 2020 10:15:09 +0000 (19:15 +0900)]
[ UNITTEST ] Integer Unit test for conv2d

This PR includes an integer unit test for conv2d to evaluate accuracy.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
3 years agoFix ahub reporting buffer overflow
Jihoon Lee [Tue, 1 Sep 2020 07:46:25 +0000 (16:46 +0900)]
Fix ahub reporting buffer overflow

**Changes proposed in this PR:**
- Add extra safeguard to sum

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[tensor] bug fix for the sum
Parichay Kapoor [Tue, 1 Sep 2020 06:26:50 +0000 (15:26 +0900)]
[tensor] bug fix for the sum

add bug fix for the sum of tensors

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[tensor] Add unittests for tensor dot
Parichay Kapoor [Tue, 1 Sep 2020 03:41:01 +0000 (12:41 +0900)]
[tensor] Add unittests for tensor dot

Add unittests for tensor dot

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[dot] Update tensor dot implementation
Parichay Kapoor [Wed, 26 Aug 2020 13:56:49 +0000 (22:56 +0900)]
[dot] Update tensor dot implementation

Update the implementation of the tensor dot product.
This update applies dot on the last dimension of the input and the second-to-last dimension of the passed tensor `m`.
This allows the fc layer to use dot directly, eliminating an explicit transpose call by using the transpose option in BLAS instead (see the sketch below).
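
A sketch of the transpose-in-BLAS idea: instead of materializing a transposed copy and then multiplying, pass a transpose flag to cblas_sgemm. Here C = A * B^T is computed directly from row-major A (N x K) and B (M x K); the exact shapes used by the fc layer may differ.

```cpp
#include <cblas.h>
#include <cstdio>
#include <vector>

void dot_bt(const std::vector<float> &A, const std::vector<float> &B,
            std::vector<float> &C, int N, int K, int M) {
  // C (N x M) = A (N x K) * B^T, where B is stored as (M x K)
  cblas_sgemm(CblasRowMajor, CblasNoTrans, CblasTrans,
              N, M, K,
              1.0f, A.data(), K,
              B.data(), K,
              0.0f, C.data(), M);
}

int main() {
  std::vector<float> A = {1, 2, 3, 4};   // 2 x 2
  std::vector<float> B = {1, 0, 0, 1};   // 2 x 2 (identity)
  std::vector<float> C(4, 0.0f);
  dot_bt(A, B, C, 2, 2, 2);
  std::printf("%f %f\n", C[0], C[3]);    // 1 4
}
```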

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[loadconfig] Load model and dataset in config at start
Parichay Kapoor [Fri, 28 Aug 2020 06:09:06 +0000 (15:09 +0900)]
[loadconfig] Load model and dataset in config at start

Load the model and dataset from the config at start so that any dependencies
among the model, dataset and layers can be easily resolved

Resolves #389

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[ Conv2D ] add cblas_sgemm to calculate conv2d
jijoong.moon [Wed, 26 Aug 2020 02:08:25 +0000 (11:08 +0900)]
[ Conv2D ] add cblas_sgemm to calculate conv2d

This PR includes the conv2d layer gradient calculation using cblas_sgemm.
In order to do this, new functions are introduced (see the sketch below):
. conv2d_gemm
. im2col

Related issue: #477
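
A condensed sketch of the conv2d-as-GEMM idea (stride 1, no padding, single image): im2col unrolls every receptive field into a column, and one cblas_sgemm call multiplies the flattened kernels with that matrix. The real conv2d_gemm/im2col handle more cases; this only shows the shapes.

```cpp
#include <cblas.h>
#include <cstdio>
#include <vector>

void im2col(const std::vector<float> &in, int C, int H, int W,
            int KH, int KW, std::vector<float> &col) {
  const int OH = H - KH + 1, OW = W - KW + 1;
  col.assign(C * KH * KW * OH * OW, 0.0f);
  for (int c = 0; c < C; ++c)
    for (int kh = 0; kh < KH; ++kh)
      for (int kw = 0; kw < KW; ++kw) {
        const int row = (c * KH + kh) * KW + kw;
        for (int oh = 0; oh < OH; ++oh)
          for (int ow = 0; ow < OW; ++ow)
            col[row * OH * OW + oh * OW + ow] =
              in[(c * H + oh + kh) * W + (ow + kw)];
      }
}

int main() {
  const int C = 1, H = 3, W = 3, KH = 2, KW = 2, OC = 1;
  const int OH = H - KH + 1, OW = W - KW + 1;
  std::vector<float> in = {1, 2, 3, 4, 5, 6, 7, 8, 9};
  std::vector<float> kernel(OC * C * KH * KW, 1.0f);   // sums each 2x2 patch
  std::vector<float> col, out(OC * OH * OW, 0.0f);

  im2col(in, C, H, W, KH, KW, col);
  // out (OC x OH*OW) = kernel (OC x C*KH*KW) * col (C*KH*KW x OH*OW)
  cblas_sgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
              OC, OH * OW, C * KH * KW,
              1.0f, kernel.data(), C * KH * KW,
              col.data(), OH * OW,
              0.0f, out.data(), OH * OW);
  std::printf("%f %f %f %f\n", out[0], out[1], out[2], out[3]); // 12 16 24 28
}
```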

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
3 years ago[unittest] Make unittest data close to int
Parichay Kapoor [Fri, 28 Aug 2020 04:14:47 +0000 (13:14 +0900)]
[unittest] Make unittest data close to int

Make the unittest data close to integers.
This allows matching outputs with a tighter tolerance and higher precision.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[Pooling] Bug fix for pooling global max
Parichay Kapoor [Fri, 28 Aug 2020 05:32:03 +0000 (14:32 +0900)]
[Pooling] Bug fix for pooling global max

Bug fix for pooling global max

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[Layer] Add support for element-wise add layer
Parichay Kapoor [Mon, 3 Aug 2020 10:24:08 +0000 (19:24 +0900)]
[Layer] Add support for element-wise add layer

Add support for element-wise add layer along with basic unittests
This will form the base layer for supporting skip connections

V2:
Element-wise add layer -> Addition Layer

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[tizen/dist] Update tizen packaging
Parichay Kapoor [Fri, 28 Aug 2020 01:48:14 +0000 (10:48 +0900)]
[tizen/dist] Update tizen packaging

Updated the Tizen packaging spec file

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[tensor] Update tensor operations to use cblas
Parichay Kapoor [Wed, 26 Aug 2020 08:41:01 +0000 (17:41 +0900)]
[tensor] Update tensor operations to use cblas

Update tensor operations to use cblas.
Further, update various operations to use STL functions rather than local implementations.
Bug fix for sum_by_batch, which previously calculated the sum of absolute values (see the illustration below).
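
An illustration of the sum_by_batch pitfall: cblas_sasum sums absolute values, so a sign-preserving sum needs something else, for example a dot product against a vector of ones or std::accumulate. Which call the original code used is not stated here; this only shows the difference.

```cpp
#include <cblas.h>
#include <cstdio>
#include <numeric>
#include <vector>

int main() {
  std::vector<float> v = {1.0f, -2.0f, 3.0f};
  std::vector<float> ones(v.size(), 1.0f);

  float abs_sum = cblas_sasum(v.size(), v.data(), 1);                 // 6
  float dot_sum = cblas_sdot(v.size(), v.data(), 1, ones.data(), 1);  // 2
  float acc_sum = std::accumulate(v.begin(), v.end(), 0.0f);          // 2

  std::printf("%f %f %f\n", abs_sum, dot_sum, acc_sum);
}
```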

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[softmax] Update softmax implementation
Parichay Kapoor [Wed, 26 Aug 2020 10:18:34 +0000 (19:18 +0900)]
[softmax] Update softmax implementation

Update the softmax implementation to use tensor operations rather than manual loops (see the sketch below).
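
A sketch of softmax expressed with standard algorithms instead of manual index loops (the max subtraction keeps it numerically stable); it is not the nntrainer implementation itself.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <numeric>
#include <vector>

std::vector<float> softmax(std::vector<float> x) {
  const float mx = *std::max_element(x.begin(), x.end());
  std::transform(x.begin(), x.end(), x.begin(),
                 [mx](float v) { return std::exp(v - mx); });
  const float sum = std::accumulate(x.begin(), x.end(), 0.0f);
  std::transform(x.begin(), x.end(), x.begin(),
                 [sum](float v) { return v / sum; });
  return x;
}

int main() {
  auto p = softmax({1.0f, 2.0f, 3.0f});
  std::printf("%f %f %f\n", p[0], p[1], p[2]);  // sums to 1
}
```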

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[neuralnet] Update model_file to save_path in neuralnet properties
Parichay Kapoor [Thu, 27 Aug 2020 08:20:32 +0000 (17:20 +0900)]
[neuralnet] Update model_file to save_path in neuralnet properties

Update model_file to save_path in neuralnet properties

Resolves #488

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[tensor] Update tensor apply
Parichay Kapoor [Wed, 26 Aug 2020 10:23:41 +0000 (19:23 +0900)]
[tensor] Update tensor apply

Update tensor apply to use std::transform rather than a simple loop (see the sketch below).
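
A minimal sketch of the apply-style update: applying a unary function to every element with std::transform rather than an explicit index loop.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

template <typename F>
void apply_i(std::vector<float> &data, F f) {
  std::transform(data.begin(), data.end(), data.begin(), f);
}

int main() {
  std::vector<float> t = {1.0f, 4.0f, 9.0f};
  apply_i(t, [](float v) { return std::sqrt(v); });
  std::printf("%f %f %f\n", t[0], t[1], t[2]);  // 1 2 3
}
```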

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>