platform/core/ml/nntrainer.git
2 years ago[rnncell] generate testcase for multi in/out of rnncell
hyeonseok lee [Thu, 28 Apr 2022 07:48:01 +0000 (16:48 +0900)]
[rnncell] generate testcase for multi in/out of rnncell

 - Generate multi in/out rnncell testcases and replace the testcases for single input/output rnncell

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
2 years ago[rnncell] implement multiout for rnncell
hyeonseok lee [Tue, 26 Apr 2022 09:57:21 +0000 (18:57 +0900)]
[rnncell] implement multiout for rnncell

 - Calculating the derivative duplicates work done in calculating the
   gradient. Needs to be optimized.
 - Remove hidden state and get the previous hidden state from the input
 - Remove timestep property
 - Disabled rnncell unittest. Proper unittests will follow.

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
2 years ago[neuralnet] set epoch_idx to zero only if train from scratch
hyeonseok lee [Fri, 22 Apr 2022 05:56:45 +0000 (14:56 +0900)]
[neuralnet] set epoch_idx to zero only if train from scratch

 - Until now, the epoch property meant how many epochs to run; from now on
   it means the final epoch to run up to (see the sketch below)

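A minimal sketch of the new semantics, assuming the ccapi Model handle; the include path and the exact call sequence are illustrative, the continuation behavior is the point:

```cpp
#include <memory>

#include <model.h> // ccapi header; include path is illustrative

void resume_training(std::unique_ptr<ml::train::Model> &model) {
  // "epochs" now names the final epoch to reach, not how many more to run
  model->setProperty({"epochs=10"});
  model->train(); // runs epochs 1..10
  model->setProperty({"epochs=15"});
  model->train(); // resumes at epoch 11 and stops after epoch 15
}
```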
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
2 years ago[model] added getter for current epoch_idx
hyeonseok lee [Fri, 22 Apr 2022 08:28:39 +0000 (17:28 +0900)]
[model] added getter for current epoch_idx

 - Through this function the user can get the current epoch_idx

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
2 years ago[model] support stop training
hyeonseok lee [Fri, 22 Apr 2022 04:53:09 +0000 (13:53 +0900)]
[model] support stop training

 - During training, this function is checked once per epoch to decide
   whether to stop the training (a hedged sketch follows below)

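A hypothetical sketch of such a per-epoch stop check; the callback name and wiring are assumptions, not the actual nntrainer API:

```cpp
#include <functional>

// Hypothetical: a user-supplied predicate polled once per epoch by train().
using StopCallback = std::function<bool(void *)>;

bool stop_when_converged(void *user_data) {
  float current_loss = *static_cast<float *>(user_data);
  return current_loss < 0.05f; // returning true ends training at epoch end
}
```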
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
2 years ago[lstm] implement calcGradient for bidirectional lstm
hyeonseok lee [Mon, 24 Jan 2022 05:36:41 +0000 (14:36 +0900)]
[lstm] implement calcGradient for bidirectional lstm

 - Implement calcGradient for bidirectional lstm
 - Added test case for bidirectional lstm

close #1726

Self evaluation:

Build test: [X]Passed [ ]Failed [ ]Skipped
Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
2 years ago[ CAPI ] change set_feature_state to be compatible with tizen api
jijoong.moon [Fri, 6 May 2022 02:18:38 +0000 (11:18 +0900)]
[ CAPI ] change set_feature_state to be compatible with tizen api

This patch makes the set_feature_state variable compatible with the tizen
ml api, which requires an ml feature parameter for set and get; it is
fixed accordingly.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
2 years ago[ Unit Test ] remove mse loss layer in makeCompiledGraph
jijoong.moon [Mon, 2 May 2022 01:53:04 +0000 (10:53 +0900)]
[ Unit Test ] remove mse loss layer in makeCompiledGraph

This patch removes the mse loss layer in makeCompiledGraph.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
2 years ago[Fix] Clean code for resnet application
Jiho Chu [Wed, 13 Apr 2022 11:10:35 +0000 (20:10 +0900)]
[Fix] Clean code for resnet application

- Fix typo
- Use const for caught exceptions
- Use EXIT_SUCCESS macro
- Use error codes

Signed-off-by: Jiho Chu <jiho.chu@samsung.com>
2 years ago[ TEST ] add more test cases for batch normalization realizer
jijoong.moon [Wed, 20 Apr 2022 07:58:21 +0000 (16:58 +0900)]
[ TEST ] add more test cases for batch normalization realizer

This patch adds test cases for the batch normalization realizer with a
resnet-style basic block which includes the multiout layer.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
2 years ago[ Compiler ] Implement bn realizer with test
jijoong.moon [Tue, 19 Apr 2022 07:32:32 +0000 (16:32 +0900)]
[ Compiler ] Implement bn realizer with test

This patch completes the bn realizer for inference.
This path is only for inference and therefore assumes a 1-to-1
connection (one input and one output for the bn layer) in the
model graph. That means if there are multiple connections, a
multi-out layer follows, ensuring the bn layer has a 1-to-1
in/output during compilation.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
2 years ago[ Compiler ] implement BN realizer
jijoong.moon [Mon, 18 Apr 2022 05:55:34 +0000 (14:55 +0900)]
[ Compiler ] implement BN realizer

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
2 years ago[ Interpreter ] add batch normalization realizer
jijoong.moon [Wed, 20 Apr 2022 00:38:31 +0000 (09:38 +0900)]
[ Interpreter ] add batch normalization realizer

This patch removes the batch normalization layers with bn_realizer from
the GraphRepresentation for the TfliteInterpreter, which is used for
inference.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
2 years ago[ Compiler ] Implement bn realizer with test
jijoong.moon [Mon, 18 Apr 2022 05:55:34 +0000 (14:55 +0900)]
[ Compiler ] Implement bn realizer with test

This patch completes the bn realizer for inference.
This path is only for inference and therefore assumes a 1-to-1
connection (one input and one output for the bn layer) in the
model graph. That means if there are multiple connections, a
multi-out layer follows, ensuring the bn layer has a 1-to-1
in/output during compilation.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
2 years ago[CAPI] Add check if model is null to ml_train_model_get_summary()
Hyunil [Fri, 22 Apr 2022 01:06:27 +0000 (10:06 +0900)]
[CAPI] Add check if model is null to ml_train_model_get_summary()

 - Check whether model is null before further checks in ml_train_model_get_summary_util() (see the sketch below)

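A short sketch of the guarded behavior using the public C API (the internal _util function is not called directly; the expected error code is the standard one for invalid arguments):

```cpp
#include <nntrainer.h> // Tizen ML training C API header

int check_null_model() {
  char *summary = nullptr;
  // with this patch a null handle is rejected up front rather than being
  // dereferenced inside ml_train_model_get_summary_util()
  return ml_train_model_get_summary(nullptr, ML_TRAIN_SUMMARY_MODEL, &summary);
  // expected result: ML_ERROR_INVALID_PARAMETER
}
```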
Self evaluation:

Build test: [X]Passed [ ]Failed [ ]Skipped
Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Hyunil <hyunil46.park@samsung.com>
2 years ago[ Android ] fix the warning message
jijoong.moon [Thu, 21 Apr 2022 01:19:13 +0000 (10:19 +0900)]
[ Android ] fix the warning message

This patch fixes the warning message for the android build.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
2 years ago[zoneout] enable zoneout only when used
hyeonseok lee [Wed, 9 Feb 2022 08:15:03 +0000 (17:15 +0900)]
[zoneout] enable zoneout only when used

 - Allocate and use the zoneout mask and lstm_cell_state tensors only when zoneout is used

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
2 years ago[grucell] remove temporary tensor
hyeonseok lee [Wed, 5 Jan 2022 03:42:45 +0000 (12:42 +0900)]
[grucell] remove temporary tensor

 - Reduce temporary tensors

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
2 years ago[zoneout lstmcell] share zoneout mask tensors
hyeonseok lee [Wed, 5 Jan 2022 03:39:55 +0000 (12:39 +0900)]
[zoneout lstmcell] share zoneout mask tensors

 - Make zoneout_mask tensors shared when the layer is unrolled

Self evaluation:

Build test: [X]Passed [ ]Failed [ ]Skipped
Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
2 years ago[optimizer/lr] Add learning rate scheduler to ccapi
Parichay Kapoor [Mon, 13 Dec 2021 10:04:17 +0000 (19:04 +0900)]
[optimizer/lr] Add learning rate scheduler to ccapi

Expose the learning rate scheduler in the ccapi along with the optimizer.
Corresponding changes are made, and some tests are updated to use the new
way of setting the learning rate (a hedged sketch follows below).

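A hedged sketch of the new flow: create a scheduler and attach it to the optimizer instead of setting learning_rate on the optimizer directly. The creation helpers, enum, and property strings follow the ccapi as I understand it; treat the exact names and include paths as assumptions:

```cpp
#include <memory>

#include <lr_scheduler.h> // ccapi headers; include paths are illustrative
#include <optimizer.h>

std::unique_ptr<ml::train::Optimizer> make_adam() {
  auto lrs = ml::train::createLearningRateScheduler(
    ml::train::LearningRateSchedulerType::EXPONENTIAL,
    {"learning_rate=0.1", "decay_rate=0.96", "decay_steps=1000"});
  auto opt = ml::train::createOptimizer("adam");
  opt->setLearningRateScheduler(std::move(lrs));
  return opt;
}
```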
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
2 years ago[lr] Support step wise decay scheduler
Parichay Kapoor [Mon, 13 Dec 2021 05:25:29 +0000 (14:25 +0900)]
[lr] Support step wise decay scheduler

Support a step-wise decay learning rate scheduler, where the iterations
and learning rates are taken as parameters.
The corresponding unittests are also added.

See Also #1776

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
2 years ago[test] Bug fix for unittest
Parichay Kapoor [Mon, 13 Dec 2021 02:35:13 +0000 (11:35 +0900)]
[test] Bug fix for unittest

ccapi and capi use the same filenames for their ini files, which can be
buggy if the unittests are run in parallel or the unittests differ.
Make ini unittest names independent.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
2 years ago[modelloader] Support learning rate scheduler
Parichay Kapoor [Fri, 10 Dec 2021 17:47:07 +0000 (02:47 +0900)]
[modelloader] Support learning rate scheduler

Support the learning rate scheduler in the model loader, along with a unittest.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
2 years ago[optimizer] Cleanup optimizers
Parichay Kapoor [Fri, 10 Dec 2021 16:29:12 +0000 (01:29 +0900)]
[optimizer] Cleanup optimizers

Clean up the optimizer interface and existing implementations.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
2 years ago[optimizer] Update to use learning rate schedulers
Parichay Kapoor [Fri, 10 Dec 2021 15:49:03 +0000 (00:49 +0900)]
[optimizer] Update to use learning rate schedulers

Update the optimizer to use the learning rate scheduler in training
and weight updates.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
2 years ago[optimizer] Use optimizer wrapped
Parichay Kapoor [Fri, 10 Dec 2021 15:00:24 +0000 (00:00 +0900)]
[optimizer] Use optimizer wrapped

Update to use the optimizer wrapped instead of the optimizer directly, and
prepare for using the learning rate scheduler.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
2 years ago[optimizer] Add optimizer wrapped
Parichay Kapoor [Fri, 10 Dec 2021 12:29:28 +0000 (21:29 +0900)]
[optimizer] Add optimizer wrapped

Add optimizer wrapped, which wraps the optimizer and the learning rate
scheduler.
In order to be backward compatible, each optimizer must support setting
the learning rate, decay rate and decay steps, even for new optimizers.
To make this extensible without each optimizer storing this information
and merging it with the learning rate schedulers, and without creating new
interfaces, optimizer wrapped is added.
Optimizer wrapped wraps around the optimizer, and owns both the optimizer
and the learning rate scheduler. If LR or decay properties are passed to
the optimizer, they are intercepted by the optimizer wrapped and passed
to the learning rate scheduler appropriately (see the sketch below).

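A rough sketch of the interception idea; the types here are illustrative stubs, not the actual nntrainer classes:

```cpp
#include <memory>
#include <string>

// Illustrative stubs; the real nntrainer interfaces differ.
struct Optimizer {
  virtual void setProperty(const std::string &kv) = 0;
  virtual ~Optimizer() = default;
};
struct LearningRateScheduler {
  virtual void setProperty(const std::string &kv) = 0;
  virtual ~LearningRateScheduler() = default;
};

class OptimizerWrapped {
  std::unique_ptr<Optimizer> opt;
  std::unique_ptr<LearningRateScheduler> lrs;

public:
  // lr/decay properties are rerouted to the scheduler; everything else
  // reaches the wrapped optimizer unchanged
  void setProperty(const std::string &key, const std::string &value) {
    if (key == "learning_rate" || key == "decay_rate" || key == "decay_steps")
      lrs->setProperty(key + "=" + value);
    else
      opt->setProperty(key + "=" + value);
  }
};
```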
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
2 years ago[test] Add finalize tests for lr schedulers
Parichay Kapoor [Fri, 10 Dec 2021 03:59:32 +0000 (12:59 +0900)]
[test] Add finalize tests for lr schedulers

Add tests for the finalize member function of the learning rate schedulers.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
2 years ago[ Compiler ] add batch norm layer realizer
jijoong.moon [Mon, 18 Apr 2022 01:39:00 +0000 (10:39 +0900)]
[ Compiler ] add batch norm layer realizer

This patch includes skeleton code for the bn realizer, which removes the
batch normalization layer for inference.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
2 years ago[Doc] Change nnstreamer CI server domain name.
gichan [Wed, 20 Apr 2022 01:55:41 +0000 (10:55 +0900)]
[Doc] Change nnstreamer CI server domain name.

Change nnstreamer CI server domain name.
nnstreamer.mooo.com -> ci.nnstreamer.ai

Signed-off-by: gichan <gichan2.jang@samsung.com>
2 years ago[ BLAS ] use openblas function to set num threads accepted/tizen/unified/20220419.142304 submit/tizen/20220418.053521 submit/tizen/20220418.061930
jijoong.moon [Thu, 14 Apr 2022 11:58:01 +0000 (20:58 +0900)]
[ BLAS ] use openblas function to set num threads

This patch uses openblas_set_num_threads() to set the number of threads
for BLAS computation, instead of setting environment variables (see the
sketch below).

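A minimal sketch of the change, assuming the OpenBLAS-provided cblas.h (which declares openblas_set_num_threads) is in use:

```cpp
#include <cblas.h> // OpenBLAS' cblas.h declares openblas_set_num_threads()

void configure_blas_threads(int n) {
  // Before: setenv("OPENBLAS_NUM_THREADS", ...), which is process-global and
  // only reliable if done before the first BLAS call. The direct call takes
  // effect immediately.
  openblas_set_num_threads(n);
}
```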
**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
2 years ago[capi] added unittest for get layer
hyeonseok lee [Wed, 6 Apr 2022 14:05:53 +0000 (23:05 +0900)]
[capi] added unittest for get layer

 - Added 4 unittests for the capi ml_train_model_get_layer
 - The get layer api is validated using the get summary function,
   but this should be replaced by layer getProperty (#1875)

Self evaluation:

Build test: [X]Passed [ ]Failed [ ]Skipped
Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
2 years ago[neuralnet] enhance print function to print graph info
hyeonseok lee [Fri, 8 Apr 2022 11:41:39 +0000 (20:41 +0900)]
[neuralnet] enhance print function to print graph info

 - Enhance the print function in neuralnet to print layer connections;
   printing input tensor info is omitted.
 - Enhance the print function in layer_node to print layer properties

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
2 years ago[ TEST ] disable app draw classification if nnstreamer is disabled
jijoong.moon [Wed, 13 Apr 2022 08:50:45 +0000 (17:50 +0900)]
[ TEST ] disable app draw classification if nnstreamer is disabled

This patch disables the test when nnstreamer is disabled, since the
prerequisite of this test is that nnstreamer is enabled.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
2 years ago[Exporter] move builtin options for tflite into tflite_opnode
jijoong.moon [Mon, 11 Apr 2022 13:28:45 +0000 (22:28 +0900)]
[Exporter] move builtin options for tflite into tflite_opnode

This patch adds a getter for Flatbuffer builtin options in TfOpNodes.
More cases will be added to get the builtin options.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
2 years ago[ MESON ] fix bug when meson test
jijoong.moon [Mon, 11 Apr 2022 10:46:06 +0000 (19:46 +0900)]
[ MESON ] fix bug when meson test

This patch copies the golden data for enable-tflite-interpreter when
running meson test.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
2 years ago[neuralnet] Add null character when loading optimizer type from bin file
hyeonseok lee [Wed, 6 Apr 2022 09:51:24 +0000 (18:51 +0900)]
[neuralnet] Add null character when loading optimizer type from bin file

 - Added the missing null character when loading the optimizer type from the bin file (sketched below)

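A minimal sketch of the fix, assuming the bin file stores the type as &lt;length&gt;&lt;bytes&gt; with no terminator; the function name is illustrative:

```cpp
#include <fstream>
#include <string>
#include <vector>

std::string read_opt_type(std::ifstream &file) {
  unsigned int len = 0;
  file.read(reinterpret_cast<char *>(&len), sizeof(len));
  std::vector<char> buf(len + 1);
  file.read(buf.data(), len);
  buf[len] = '\0'; // the missing null character this commit adds
  return std::string(buf.data());
}
```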
Self evaluation:

Build test: [X]Passed [ ]Failed [ ]Skipped
Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
2 years ago[CI] fix android build
jijoong.moon [Wed, 6 Apr 2022 07:58:02 +0000 (16:58 +0900)]
[CI] fix android build

This patch fixes a ci failure when building android using the
tools/package_android.sh script.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
2 years ago[CCAPI] Rearrange enumeration of layer type accepted/tizen/unified/20220405.155809 submit/tizen/20220405.072656
Hyunil [Thu, 31 Mar 2022 09:01:31 +0000 (18:01 +0900)]
[CCAPI] Rearrange enumeration of layer type

- Classified enumerations as neural network, simple transformation and loss layers
- To support newly added TCTs, match the positions of enumerations between nntrainer-api-common.h and layer.h

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:    [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Hyunil <hyunil46.park@samsung.com>
2 years ago[README] update reviewers
jijoong.moon [Fri, 1 Apr 2022 01:17:14 +0000 (10:17 +0900)]
[README] update reviewers

This patch updates the reviewers in README.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
2 years ago[COVERITY] remove unreachable code
jijoong.moon [Fri, 25 Mar 2022 02:51:40 +0000 (11:51 +0900)]
[COVERITY] remove unreachable code

This patch removes unreachable code.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
2 years ago[CAPI] Move getlayer api to public
jijoong.moon [Thu, 24 Mar 2022 01:46:10 +0000 (10:46 +0900)]
[CAPI] Move getlayer api to public

This patch moves the get_layer api to public

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
2 years agojni/Android: build error fix
MyungJoo Ham [Fri, 25 Mar 2022 05:45:55 +0000 (14:45 +0900)]
jni/Android: build error fix

Add ML_API_COMMON=1 for Android.mk build

Signed-off-by: MyungJoo Ham <myungjoo.ham@samsung.com>
2 years agoML-API dependency clean-up.
MyungJoo Ham [Wed, 23 Mar 2022 10:15:05 +0000 (19:15 +0900)]
ML-API dependency clean-up.

If it's not Tizen, ML-API(C) is not mandatory.
Allow building w/o dependencies on ML-API by default.

Fixes #1853

Signed-off-by: MyungJoo Ham <myungjoo.ham@samsung.com>
2 years agoapi: clean up dependencies
MyungJoo Ham [Wed, 23 Mar 2022 10:14:32 +0000 (19:14 +0900)]
api: clean up dependencies

Remove unnecessary dependencies.

Signed-off-by: MyungJoo Ham <myungjoo.ham@samsung.com>
2 years agotest: remove unnecessary capi dependencies
MyungJoo Ham [Wed, 23 Mar 2022 10:13:00 +0000 (19:13 +0900)]
test: remove unnecessary capi dependencies

Many test cases do not require capi.
Reexamine api dependencies and correct them.

Signed-off-by: MyungJoo Ham <myungjoo.ham@samsung.com>
2 years agoapplication: remove unnecessary capi dep.
MyungJoo Ham [Wed, 23 Mar 2022 10:11:55 +0000 (19:11 +0900)]
application: remove unnecessary capi dep.

1. Remove unnecessary capi dependencies.
2. Don't build apps requiring capi if capi is not available.

Signed-off-by: MyungJoo Ham <myungjoo.ham@samsung.com>
2 years agomeson: dependency on ml-api should be optional
MyungJoo Ham [Fri, 18 Mar 2022 08:20:33 +0000 (17:20 +0900)]
meson: dependency on ml-api should be optional

Users should be able to build nntrainer in a system without
ML-API and nnstreamer by default.

ML-API and nnstreamer dependency should only be mandated
in related systems (a few embedded systems).

Let's keep it "auto" so that most external users can forget
about nnstreamer and ML-API.

Addresses #1853

Signed-off-by: MyungJoo Ham <myungjoo.ham@samsung.com>
2 years ago[CAPI] Expose Layer Enums
jijoong.moon [Wed, 23 Mar 2022 11:12:43 +0000 (20:12 +0900)]
[CAPI] Expose Layer Enums

This patch exposes the layer enums.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
2 years ago[COVERITY] remove unreachable code accepted/tizen/unified/20220330.021238 submit/tizen/20220325.053530
jijoong.moon [Fri, 25 Mar 2022 02:51:40 +0000 (11:51 +0900)]
[COVERITY] remove unreachable code

This patch removes unreachable code.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
2 years agoPortability: unittest uninit vars.
MyungJoo Ham [Wed, 16 Mar 2022 07:56:56 +0000 (16:56 +0900)]
Portability: unittest uninit vars.

1. gtest code has maybe-uninitialized warnings; suppress them.
2. Fixed maybe-uninitialized warnings from unittest code.

Signed-off-by: MyungJoo Ham <myungjoo.ham@samsung.com>
2 years agoPortability: g++-11 has different std policy.
MyungJoo Ham [Wed, 16 Mar 2022 07:39:44 +0000 (16:39 +0900)]
Portability: g++-11 has different std policy.

It requires stdexcept and limits for std::invalid_argument and std::numeric_limits (see the sketch below).

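The shape of the fix: the two headers must be included explicitly, since g++-11's libstdc++ no longer pulls them in transitively. The function below is illustrative:

```cpp
#include <limits>    // needed for std::numeric_limits under g++-11
#include <stdexcept> // needed for std::invalid_argument under g++-11

void check_range(long v) {
  if (v > std::numeric_limits<int>::max())
    throw std::invalid_argument("value out of int range");
}
```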
Fixes #1857

Signed-off-by: MyungJoo Ham <myungjoo.ham@samsung.com>
2 years agoPortability: gcc structured binding declaration error
MyungJoo Ham [Wed, 16 Mar 2022 07:38:00 +0000 (16:38 +0900)]
Portability: gcc structured binding declaration error

gcc-7 does not fully comply with c++17 for structured binding declarations.
This allows maybe-unused globally as a workaround.

Signed-off-by: MyungJoo Ham <myungjoo.ham@samsung.com>
2 years ago[fix] check return value
hyeonseok lee [Thu, 17 Mar 2022 04:33:49 +0000 (13:33 +0900)]
[fix] check return value

 - Added a missing statement to check that the called function works properly

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
2 years agoChange loading meta information behavior
Jihoon Lee [Fri, 11 Mar 2022 15:42:54 +0000 (00:42 +0900)]
Change loading meta information behavior

**Before this PR**

Optimizer variables were loaded from load_path every time.
Calling model->train() repeatedly became unintuitive:

1. model->train() loads from the original load path,
thus the iteration number rolls back to the first one.
2. The same happens for the adam weights.
3. model->load() after model->initialize() is a noop
because loadedWeight becomes true.

**After this PR**

1. The model loads from load_path only at initialize time (sketched below).
2. model->load is not implicitly overridden.

**Additional Changes**

1. Optimizer weights became part of weights; now available after initialize().
2. The save format became coherent with the load format.
3. Some unused variables were deleted.

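A sketch of the calling pattern this change makes intuitive, using the ccapi members the commit itself references (include path illustrative):

```cpp
#include <memory>

#include <model.h> // ccapi; include path is illustrative

void train_twice(std::unique_ptr<ml::train::Model> &model) {
  model->initialize(); // weights/optimizer state read from load_path once, here
  model->train();      // continues from the loaded iteration
  model->train();      // keeps going; no silent reload rolling state back
}
```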
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
2 years agoRevert "Load Optimizer Variables"
Jihoon Lee [Thu, 10 Mar 2022 09:40:10 +0000 (18:40 +0900)]
Revert "Load Optimizer Variables"

This reverts commit c669732b1f52f4aad3114839fe1ebba0f5d95f27.

As this commit contains some compatibility-breaking changes, it should
be merged together with the places where nntrainer is being used.

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
2 years agoLoad Optimizer Variables
jijoong.moon [Thu, 3 Mar 2022 03:11:04 +0000 (12:11 +0900)]
Load Optimizer Variables

In this PR,
 1. a new property of the adam optimizer, "load_var", is added to set loading
    momentum variables
 2. update reading and saving the binary file
    using ml::train::OptimizerType
 3. update read in layer_node to skip the optimizer variables if
    load_var is set to "false"
 4. is_load_var() is added in optimizer_devel

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
2 years ago[ BUILD ] add lr_scheduler.h in include dir
jijoong.moon [Wed, 2 Mar 2022 08:06:22 +0000 (17:06 +0900)]
[ BUILD ] add lr_scheduler.h in include dir

For the dependency of app_context.h, we need to install lr_scheduler.h
in the install include directory.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
2 years ago[layer] LSTM weight array size fix
Parichay Kapoor [Fri, 25 Feb 2022 05:17:10 +0000 (14:17 +0900)]
[layer] LSTM weight array size fix

This patch fixes the LSTM weight array size.
The lstm layer requests 17 tensors from the tensor manager. However, the
lstm layer only maintains an array of size 15 to store the indices of the
requested tensors. This leads to an out-of-range access error.

This patch increases the size of the array storing the requested
tensor indices from 15 to 17 (see the sketch below).

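The shape of the fix, with an illustrative array name:

```cpp
#include <array>

// The index cache must match the number of tensors actually requested from
// the tensor manager: 17, not 15.
// was: std::array<unsigned int, 15> wt_idx; // wt_idx[15], wt_idx[16] OOB
static std::array<unsigned int, 17> wt_idx;
```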
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
2 years ago[capi/test] Example of batchsize set at train
Parichay Kapoor [Thu, 17 Feb 2022 01:57:26 +0000 (10:57 +0900)]
[capi/test] Example of batchsize set at train

Update the capi test to include an example where the batchsize is set at
training time and not at compile time (a sketch follows below).

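A sketch using the public C API, whose property lists are NULL-terminated varargs; the specific property values are illustrative:

```cpp
#include <nntrainer.h> // Tizen ML training C API

int train_with_runtime_batch(ml_train_model_h model) {
  int status = ml_train_model_compile(model, NULL); // batch_size not fixed here
  if (status != ML_ERROR_NONE)
    return status;
  // supply batch_size at train time instead (NULL-terminated property list)
  return ml_train_model_run(model, "batch_size=16", "epochs=2", NULL);
}
```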
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
2 years ago[test] Add finalize tests for lr schedulers
Parichay Kapoor [Fri, 10 Dec 2021 03:59:32 +0000 (12:59 +0900)]
[test] Add finalize tests for lr schedulers

Add tests for the finalize member function of the learning rate schedulers.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
2 years ago[test] learning rate scheduler unittest
Parichay Kapoor [Thu, 9 Dec 2021 13:53:56 +0000 (22:53 +0900)]
[test] learning rate scheduler unittest

Add unittests for the constant and exponential learning rate schedulers.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
2 years ago[appcontext] Register learning rate scheduler with app context
Parichay Kapoor [Thu, 9 Dec 2021 13:53:08 +0000 (22:53 +0900)]
[appcontext] Register learning rate scheduler with app context

Register the learning rate scheduler with the app context and register its
type factories.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
2 years ago[spec] Add gcov package
hyeonseok lee [Wed, 26 Jan 2022 09:25:33 +0000 (18:25 +0900)]
[spec] Add gcov package

 - Added gcov package for automatic line coverage

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
2 years ago[weight] weight regularization and decay with clip
Parichay Kapoor [Fri, 18 Feb 2022 11:02:51 +0000 (20:02 +0900)]
[weight] weight regularization and decay with clip

Enable weight regularization and decay with weight clipping.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
2 years ago[weight-decay] Bug fix for weight decay with adam
Parichay Kapoor [Fri, 18 Feb 2022 11:01:58 +0000 (20:01 +0900)]
[weight-decay] Bug fix for weight decay with adam

Weight decay should be applied before calling the optimizer.
This was not detected earlier as it was tested with sgd (see the sketch below).

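An illustrative sketch of the ordering this fix enforces, reduced to plain vectors; with plain sgd the two orderings coincide (which is why the bug went unnoticed), but with adam's moment estimates they do not:

```cpp
#include <cstddef>
#include <vector>

void decay_then_step(std::vector<float> &w, std::vector<float> &g,
                     float decay) {
  for (std::size_t i = 0; i < w.size(); ++i)
    g[i] += decay * w[i]; // fold decay into the gradient first
  // ... the optimizer (e.g. adam) consumes g here, so its moment
  // estimates see the decayed gradient ...
}
```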
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
2 years ago[optim] Add pytorch reference for adam
Parichay Kapoor [Tue, 14 Dec 2021 02:06:15 +0000 (11:06 +0900)]
[optim] Add pytorch reference for adam

Add a pytorch reference mode for adam, which follows the adam
implementation of pytorch.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
2 years ago[Load/Save] Update to load and save the adam momentum variables
jijoong.moon [Thu, 20 Jan 2022 09:06:41 +0000 (18:06 +0900)]
[Load/Save] Update to load and save the adam momentum variables

In this PR, load and save are updated for the adam optimizer variables.
In order to do this, the optimizer loads twice: once for the weights, and
once for the optimizer variables.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
2 years ago[lr-scheduler] Add finalize to interface
Parichay Kapoor [Fri, 10 Dec 2021 03:48:40 +0000 (12:48 +0900)]
[lr-scheduler] Add finalize to interface

Add finalize to the interface of the learning rate scheduler, with the
purpose of verifying that the required properties have been set.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
2 years ago[lr] Support exponential learning rate scheduler
Parichay Kapoor [Thu, 9 Dec 2021 12:37:24 +0000 (21:37 +0900)]
[lr] Support exponential learning rate scheduler

Support the exponential decay learning rate scheduler, with the decay rate
and decay steps as the controllable properties.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
2 years ago[lr] Support constant learning rate scheduler
Parichay Kapoor [Thu, 9 Dec 2021 12:17:07 +0000 (21:17 +0900)]
[lr] Support constant learning rate scheduler

Support constant learning rate scheduler.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
2 years ago[lr] Add interface for learning rate scheduler
Parichay Kapoor [Thu, 9 Dec 2021 11:54:32 +0000 (20:54 +0900)]
[lr] Add interface for learning rate scheduler

Add interface for the learning rate scheduler which all learning rate
schedulers must abide by.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
2 years ago[optimizer] prepare for learning rate scheduler
Parichay Kapoor [Thu, 9 Dec 2021 11:52:48 +0000 (20:52 +0900)]
[optimizer] prepare for learning rate scheduler

Move the declaration and definition of learning-rate-related properties out
of the optimizer_impl class into common properties.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
2 years ago[api] Update model inference API
Parichay Kapoor [Tue, 8 Feb 2022 04:32:05 +0000 (13:32 +0900)]
[api] Update model inference API

Update model inference to accept inputs and labels which are const
vectors.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
2 years ago[weight-decay] Enable for batch normalization
Parichay Kapoor [Thu, 10 Feb 2022 02:05:15 +0000 (11:05 +0900)]
[weight-decay] Enable for batch normalization

Enable weight decay for batch normalization.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
2 years ago[test] unittest for weight decay
Parichay Kapoor [Thu, 27 Jan 2022 06:36:46 +0000 (15:36 +0900)]
[test] unittest for weight decay

Added a unittest for weight decay, along with a fix.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
2 years ago[layers] Support for weight decay to layers
Parichay Kapoor [Mon, 24 Jan 2022 09:54:32 +0000 (18:54 +0900)]
[layers] Support for weight decay to layers

Add support for weight decay to the layers with weights.
Further update the requestWeight API of the layer context to accept
weight decay, and update the usage at manager.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
2 years ago[weight] Support weight decay
Parichay Kapoor [Mon, 24 Jan 2022 03:58:43 +0000 (12:58 +0900)]
[weight] Support weight decay

Add support for the weight decay property, which enables decay of weights
with each application of the gradient.
Weight decay can be enabled individually for both weight and bias.
This is kept separate from regularizer as they both behave differently.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
2 years ago[lstm] implement bidirectional lstm forward
hyeonseok lee [Fri, 14 Jan 2022 04:29:57 +0000 (13:29 +0900)]
[lstm] implement bidirectional lstm forward

 - Add a batch_first_forward function
 - For now, only forwarding is supported for bidirectional lstm

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
2 years ago[ SAVE/LOAD ] save / load optimizer variables
jijoong.moon [Thu, 20 Jan 2022 09:06:41 +0000 (18:06 +0900)]
[ SAVE/LOAD ] save / load optimizer variables

Enable saving/loading optimizer variables such as M and V for the adam
optimizer.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
2 years ago[lstm] remove timestep property
hyeonseok lee [Wed, 12 Jan 2022 11:29:41 +0000 (20:29 +0900)]
[lstm] remove timestep property

 - Remove the timestep property from the lstm layer.
   This disables unrolling the lstm layer.
 - Adjust the recurrent unittest to a simple lstm unittest.

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
2 years ago[Fix] Flatten realizer to maintain original name
Jihoon Lee [Fri, 17 Dec 2021 06:04:27 +0000 (15:04 +0900)]
[Fix] Flatten realizer to maintain original name

This patch changes the flatten realizer to maintain the original node name.
Please see the included test patch for the concept.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
2 years ago[Tensor Pool] Add expose/persist concept
Jihoon Lee [Wed, 19 Jan 2022 10:40:02 +0000 (19:40 +0900)]
[Tensor Pool] Add expose/persist concept

This patch adds the expose/persist concept to the tensor pool and manager.

When a tensor is exposed, the tensor is guaranteed to remain valid up to
max_exec, where max_exec is the value passed along to allocateTensors(max_exec).

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
2 years ago[ExecOrder] Add exec order control and fix inference
Jihoon Lee [Wed, 19 Jan 2022 09:41:28 +0000 (18:41 +0900)]
[ExecOrder] Add exec order control and fix inference

This patch adds exec order control and reduces the memory overhead of saving
the inference result.

Exec order control has two parts:

1. Additional exec order. This enables the graph to add extra exec orders.
2. Expose bit. This enables a request to remain visible after the end of the
execution.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
2 years ago[Network Graph] Split max_exec_order
Jihoon Lee [Wed, 19 Jan 2022 08:25:04 +0000 (17:25 +0900)]
[Network Graph] Split max_exec_order

This patch splits max_exec_order into three parts:

1. graph_exec_end: end of graph execution
2. backward_iter_end: end node of the backward operation
3. forward_iter_end: end node of the forward operation

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
2 years ago[manager] Separate backwarding -> grad, deriv
Jihoon Lee [Thu, 13 Jan 2022 15:27:00 +0000 (00:27 +0900)]
[manager] Separate backwarding -> grad, deriv

This patch separates backwarding into grad and deriv for requestTensors.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
2 years ago[Debug] Add a naive validator to the optimized planner
Jihoon Lee [Thu, 13 Jan 2022 15:25:50 +0000 (00:25 +0900)]
[Debug] Add a naive validator to the optimized planner

This patch adds a naive validator to the optimized planner. Since this
doubles the memory consumption, it is kept commented out; when the time
comes, this function can be enabled.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
2 years ago[Fix] fix lifespan of recurrent cells
Jihoon Lee [Thu, 13 Jan 2022 15:25:24 +0000 (00:25 +0900)]
[Fix] fix lifespan of recurrent cells

This patch fixes the lifespan of recurrent cells.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
2 years ago[Fix] change max order to point last
Jihoon Lee [Thu, 13 Jan 2022 15:22:29 +0000 (00:22 +0900)]
[Fix] change max order to point last

This patch fixes a bug where gradient clipping was not applied correctly
when optimization is on.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
2 years ago[Fix] BN life span
Jihoon Lee [Thu, 13 Jan 2022 15:20:30 +0000 (00:20 +0900)]
[Fix] BN life span

t_reduced is used in forwarding, but its lifespan was set to backwarding.
This patch fixes the issue.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
2 years ago[compat] remove contrib from tf headers
Jihoon Lee [Tue, 18 Jan 2022 06:48:27 +0000 (15:48 +0900)]
[compat] remove contrib from tf headers

This patch removes the contrib path from tflite headers, as the contrib
headers are being removed (see the sketch below).

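The include-path change as it applies to TensorFlow Lite headers (contrib was removed upstream in TensorFlow):

```cpp
// before (removed upstream):
//   #include <tensorflow/contrib/lite/interpreter.h>
// after:
#include <tensorflow/lite/interpreter.h>
```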
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
2 years ago[Fix] Non-const reference
Jihoon Lee [Thu, 13 Jan 2022 02:24:32 +0000 (11:24 +0900)]
[Fix] Non-const reference

This patch fixes returning a reference to a local variable (illustrated below).

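The class of bug fixed here, in miniature (illustrative, not the actual nntrainer code):

```cpp
#include <string>

const std::string &bad() {
  std::string s = "local";
  return s; // dangling: s is destroyed when bad() returns
}

std::string good() {
  std::string s = "local";
  return s; // fix: return by value
}
```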
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
2 years ago[Android] Delegate option control to android.mk
Jihoon Lee [Mon, 10 Jan 2022 08:51:50 +0000 (17:51 +0900)]
[Android] Delegate option control to android.mk

This patch delegates option control in android.mk to meson for debug and
optimized builds.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
2 years ago[ Network Graph ] Temporary Fix in getInputGrad
jijoong.moon [Fri, 7 Jan 2022 03:57:47 +0000 (12:57 +0900)]
[ Network Graph ] Temporary Fix in getInputGrad

When we turn on inPlaceOptimization, it fails if we have multiple
inputs: an input layer (which can be an in-place operation) plus normal
layers. In that case, the grad between the input layer and this layer is
not allocated, which causes an error during calcDerivative, which requires
the grad tensor of the input layer.

This patch fixes this temporarily by creating the tensor buffer it requires.

It also includes a modification of the mem_check script to generate proper
output.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
2 years ago[grucell] enable multi inout
hyeonseok lee [Thu, 30 Dec 2021 04:54:28 +0000 (13:54 +0900)]
[grucell] enable multi inout

 - Enable multi in/out for grucell
 - Generate grucell layer/model unittests

Self evaluation:

Build test: [X]Passed [ ]Failed [ ]Skipped
Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
2 years ago[grucell] refactoring grucell
hyeonseok lee [Wed, 29 Dec 2021 13:17:18 +0000 (22:17 +0900)]
[grucell] refactoring grucell

 - Rename grucell variables
 - Uncommented genModelTests

Self evaluation:

Build test: [X]Passed [ ]Failed [ ]Skipped
Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
2 years ago[Dropout] Disable test strong match
hyeonseok lee [Fri, 31 Dec 2021 09:34:32 +0000 (18:34 +0900)]
[Dropout] Disable test strong match

As dropout strong matching of the output is statistical (60%), it turned
out to hold only sometimes; disabling the dropout strong match.

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
2 years ago[Recurrent] Add input sequencing mechanism
Jihoon Lee [Thu, 30 Dec 2021 09:35:13 +0000 (18:35 +0900)]
[Recurrent] Add input sequencing mechanism

As some kinds of recurrent models require the input layer itself to be
changed along the sequence, this patch proposes a simple mechanism to
add a time stamp suffix for the input layers as well.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
2 years ago[lstmcell] support multi in/out
hyeonseok lee [Sat, 18 Dec 2021 20:33:03 +0000 (05:33 +0900)]
[lstmcell] support multi in/out

 - Refactor the lstmcell layer to support multi in/out (3 inputs / 2 outputs)
 - Regenerate lstmcell testcases for multi in/out

Self evaluation:

Build test: [X]Passed [ ]Failed [ ]Skipped
Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
2 years ago[bugfix] zoneout lstmcell regenerate mask
hyeonseok lee [Mon, 27 Dec 2021 10:47:11 +0000 (19:47 +0900)]
[bugfix] zoneout lstmcell regenerate mask

 - The current implementation regenerates the zoneout mask in calcGradient.
   Fix it to reuse the zoneout mask and remove regenerating the mask.

Self evaluation:

Build test: [X]Passed [ ]Failed [ ]Skipped
Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>