jijoong.moon [Fri, 1 Apr 2022 01:17:14 +0000 (10:17 +0900)]
[README] update reviewers
This patch updates the reviewers in README.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Fri, 25 Mar 2022 02:51:40 +0000 (11:51 +0900)]
[COVERITY] remove unreachable code
This patch removes unreachable code.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Thu, 24 Mar 2022 01:46:10 +0000 (10:46 +0900)]
[CAPI] Move getlayer api to public
This patch moves the get_layer API to public.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
MyungJoo Ham [Fri, 25 Mar 2022 05:45:55 +0000 (14:45 +0900)]
jni/Android: build error fix
Add ML_API_COMMON=1 for Android.mk build
Signed-off-by: MyungJoo Ham <myungjoo.ham@samsung.com>
MyungJoo Ham [Wed, 23 Mar 2022 10:15:05 +0000 (19:15 +0900)]
ML-API dependency clean-up.
If it's not Tizen, ML-API(C) is not mandatory.
Allow building without ML-API dependencies by default.
Fixes #1853
Signed-off-by: MyungJoo Ham <myungjoo.ham@samsung.com>
MyungJoo Ham [Wed, 23 Mar 2022 10:14:32 +0000 (19:14 +0900)]
api: clean up dependencies
Remove unnecessary dependencies.
Signed-off-by: MyungJoo Ham <myungjoo.ham@samsung.com>
MyungJoo Ham [Wed, 23 Mar 2022 10:13:00 +0000 (19:13 +0900)]
test: remove unnecessary capi dependencies
Many test cases do not require capi.
Reexamined API dependencies and corrected them.
Signed-off-by: MyungJoo Ham <myungjoo.ham@samsung.com>
MyungJoo Ham [Wed, 23 Mar 2022 10:11:55 +0000 (19:11 +0900)]
application: remove unnecessary capi dep.
1. Remove unnecessary capi dependencies.
2. Don't build apps requiring capi if capi is not available.
Signed-off-by: MyungJoo Ham <myungjoo.ham@samsung.com>
MyungJoo Ham [Fri, 18 Mar 2022 08:20:33 +0000 (17:20 +0900)]
meson: dependency on ml-api should be optional
Users should be able to build nntrainer in a system without
ML-API and nnstreamer by default.
The ML-API and nnstreamer dependencies should only be mandated
on related systems (a few embedded systems).
Let's keep it "auto" so that most external users can forget
about nnstreamer and ML-API.
Addresses #1853
Signed-off-by: MyungJoo Ham <myungjoo.ham@samsung.com>
jijoong.moon [Wed, 23 Mar 2022 11:12:43 +0000 (20:12 +0900)]
[CAPI] Expose Layer Enums
This patch exposes the layer enums.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
MyungJoo Ham [Wed, 16 Mar 2022 07:56:56 +0000 (16:56 +0900)]
Portability: unittest uninit vars.
1. gtest code has maybe-uninitialized warnings. Suppress them.
2. Fixed maybe-uninitialized warnings in the unittest code.
Signed-off-by: MyungJoo Ham <myungjoo.ham@samsung.com>
MyungJoo Ham [Wed, 16 Mar 2022 07:39:44 +0000 (16:39 +0900)]
Portability: g++-11 has different std policy.
It requires <stdexcept> and <limits> for std::invalid_argument and std::numeric_limits.
Fixes #1857
Signed-off-by: MyungJoo Ham <myungjoo.ham@samsung.com>
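For reference, a minimal sketch of the kind of include fix this implies (illustrative code, not the actual diff):

```cpp
// g++-11 no longer provides these headers transitively; include them explicitly.
#include <limits>    // std::numeric_limits
#include <stdexcept> // std::invalid_argument

float checked(float v) {
  if (v < 0.0f)
    throw std::invalid_argument("value must be non-negative");
  return std::numeric_limits<float>::max() - v;
}
```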
MyungJoo Ham [Wed, 16 Mar 2022 07:38:00 +0000 (16:38 +0900)]
Portability: gcc structured binding declaration error
gcc-7 does not fully comply with C++17 for structured binding declarations.
This patch allows maybe-unused globally as a workaround.
Signed-off-by: MyungJoo Ham <myungjoo.ham@samsung.com>
hyeonseok lee [Thu, 17 Mar 2022 04:33:49 +0000 (13:33 +0900)]
[fix] check return value
- Added a missing check that the called function works properly
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
Jihoon Lee [Fri, 11 Mar 2022 15:42:54 +0000 (00:42 +0900)]
Change loading meta information behavior
**Before this PR**
Optimizer variables were loaded from load_path every time.
Calling model->train() repeatedly became unintuitive:
1. model->train() loaded from the original load path,
so the iteration number rolled back to the first one.
2. The same happened for the adam weights.
3. model->load() after model->initialize() was a no-op
because loadedWeight became true.
**After this PR**
1. The model loads from load_path only at initialize time.
2. model->load() is no longer implicitly overridden.
**Additional Changes**
1. Optimizer weights became part of weights and are now available after initialize().
2. The save format became coherent with the load format.
3. Some unused variables were deleted.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
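A hedged sketch of the call pattern this changes (header path and compile/initialize/train follow the public ml::train model interface; treat the details as assumptions rather than the exact code):

```cpp
#include <model.h> // public nntrainer ml::train header (path may vary)

// Before this PR, each train() implicitly reloaded weights from load_path,
// rolling the iteration count and adam variables back to the saved state.
// After this PR, loading happens once at initialize time, so consecutive
// train() calls continue from where the previous one stopped.
void train_twice(ml::train::Model &model) {
  model.compile();
  model.initialize(); // weights are loaded from load_path here
  model.train();      // first run
  model.train();      // continues training; no implicit reload
}
```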
Jihoon Lee [Thu, 10 Mar 2022 09:40:10 +0000 (18:40 +0900)]
Revert "Load Optimizer Variables"
This reverts commit
c669732b1f52f4aad3114839fe1ebba0f5d95f27.
As this commit contains some compatibility-breaking changes, it should be
merged together with changes in the places where nntrainer is being used.
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
jijoong.moon [Thu, 3 Mar 2022 03:11:04 +0000 (12:11 +0900)]
Load Optimizer Variables
In this PR,
1. A new adam optimizer property, "load_var", is added to control loading
the momentum variables.
2. Reading and saving of the binary file are updated
using ml::train::OptimizerType.
3. read in layer_node is updated to skip the optimizer variables if
load_var is set to "false".
4. is_load_var() is added in optimizer_devel.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Wed, 2 Mar 2022 08:06:22 +0000 (17:06 +0900)]
[ BUILD ] add lr_scheduler.h in include dir
Since app_context.h depends on it, we need to install lr_scheduler.h
in the install include directory.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Parichay Kapoor [Fri, 25 Feb 2022 05:17:10 +0000 (14:17 +0900)]
[layer] LSTM weight array size fix
This patch fixes the LSTM weight array size.
The lstm layer requests 17 tensors from the tensor manager. However, the lstm
layer only maintains an array of size 15 to store the indices of the requested
tensors. This leads to an out-of-range access error.
This patch increases the size of the array storing the requested
tensor indices from 15 to 17.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
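A minimal illustration of the bug class being fixed (hypothetical names; the real code indexes the lstm layer's weight array):

```cpp
#include <array>

constexpr unsigned int NUM_LSTM_TENSORS = 17; // tensors requested from the manager

// Bug: the index array only held 15 entries, so storing the 16th and 17th
// requested tensor indices wrote out of range.
// std::array<unsigned int, 15> wt_idx;
std::array<unsigned int, NUM_LSTM_TENSORS> wt_idx; // fix: sizes now match
```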
Parichay Kapoor [Thu, 17 Feb 2022 01:57:26 +0000 (10:57 +0900)]
[capi/test] Example of batchsize set at train
Update the capi test to include an example where the batch size is set at
training time rather than at compile time.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
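A hedged sketch of the usage being tested (the variadic ml_train_model_run call follows the public C training API; the exact property strings are assumptions):

```cpp
#include <nntrainer.h>

// Batch size is passed as a run-time property instead of being fixed when
// the model is compiled.
int train_with_runtime_batchsize(ml_train_model_h model) {
  return ml_train_model_run(model, "epochs=2", "batch_size=16", NULL);
}
```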
Parichay Kapoor [Fri, 10 Dec 2021 03:59:32 +0000 (12:59 +0900)]
[test] Add finalize tests for lr schedulers
Add tests for finalize member function for the learning rate schedulers.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Thu, 9 Dec 2021 13:53:56 +0000 (22:53 +0900)]
[test] learning rate scheduler unittest
Add unittests for the constant and exponential learning rate schedulers.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Thu, 9 Dec 2021 13:53:08 +0000 (22:53 +0900)]
[appcontext] Register learning rate scheduler with app context
Register the learning rate scheduler with the app context and register its
type factories.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
hyeonseok lee [Wed, 26 Jan 2022 09:25:33 +0000 (18:25 +0900)]
[spec] Add gcov package
- Added gcov package for automatic line coverage
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
Parichay Kapoor [Fri, 18 Feb 2022 11:02:51 +0000 (20:02 +0900)]
[weight] weight regularization and decay with clip
Enable weight regularization and decay with weight clipping.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Fri, 18 Feb 2022 11:01:58 +0000 (20:01 +0900)]
[weight-decay] Bug fix for weight decay with adam
Weight decay should be applied before calling the optimizer.
This was not detected earlier as it was only tested with sgd.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
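Why the ordering matters, as a sketch (assuming conventional coupled weight decay, folded into the gradient): with sgd, applying decay before or after yields the same update, but adam feeds whatever gradient it receives through its moment estimates, so the decay term must already be in the gradient:

```math
g_t = \nabla L(w_t) + \lambda w_t \\
m_t = \beta_1 m_{t-1} + (1-\beta_1)\,g_t, \qquad
v_t = \beta_2 v_{t-1} + (1-\beta_2)\,g_t^2 \\
w_{t+1} = w_t - \eta\,\hat{m}_t \,/\, (\sqrt{\hat{v}_t} + \epsilon)
```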
Parichay Kapoor [Tue, 14 Dec 2021 02:06:15 +0000 (11:06 +0900)]
[optim] Add pytorch reference for adam
Add a pytorch reference mode for adam, which follows pytorch's adam
implementation.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
jijoong.moon [Thu, 20 Jan 2022 09:06:41 +0000 (18:06 +0900)]
[Load/Save] Update to load and save the adam momentum variables
In this PR, load and save are updated for the adam optimizer variables.
To do this, loading happens twice: once for the weights and once
for the optimizer variables.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Parichay Kapoor [Fri, 10 Dec 2021 03:48:40 +0000 (12:48 +0900)]
[lr-scheduler] Add finalize to interface
Add finalize to the interface of the learning rate scheduler with the
purpose of verifying that the required properties have been set.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Thu, 9 Dec 2021 12:37:24 +0000 (21:37 +0900)]
[lr] Support exponential learning rate scheduler
Support an exponential decay learning rate scheduler, with decay rate and
decay steps as the controllable properties.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
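The conventional exponential decay schedule these two properties control (assumed form; the actual implementation may round the exponent):

```math
\eta(i) = \eta_0 \cdot r^{\,i/s}
```

where η₀ is the initial learning rate, r the decay rate, s the decay steps, and i the iteration.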
Parichay Kapoor [Thu, 9 Dec 2021 12:17:07 +0000 (21:17 +0900)]
[lr] Support constant learning rate scheduler
Support constant learning rate scheduler.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Thu, 9 Dec 2021 11:54:32 +0000 (20:54 +0900)]
[lr] Add interface for learning rate scheduler
Add interface for the learning rate scheduler which all learning rate
schedulers must abide by.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
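A hedged sketch of what such an interface can look like (finalize comes from the commit above; getLearningRate and setProperty are assumptions consistent with the surrounding commits, not the exact declarations):

```cpp
#include <cstddef>
#include <string>
#include <vector>

class LearningRateScheduler {
public:
  virtual ~LearningRateScheduler() = default;
  /// verify that the required properties have been set; fail fast otherwise
  virtual void finalize() = 0;
  /// learning rate to use at the given iteration
  virtual double getLearningRate(size_t iteration) = 0;
  /// consume "key=value" property strings
  virtual void setProperty(const std::vector<std::string> &values) = 0;
  virtual const std::string getType() const = 0;
};
```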
Parichay Kapoor [Thu, 9 Dec 2021 11:52:48 +0000 (20:52 +0900)]
[optimizer] prepare for learning rate scheduler
Move the declaration and definition of learning-rate-related properties out
of the optimizer_impl class into the common properties.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Tue, 8 Feb 2022 04:32:05 +0000 (13:32 +0900)]
[api] Update model inference API
Update model inference to accept inputs and labels which are const
vectors.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Thu, 10 Feb 2022 02:05:15 +0000 (11:05 +0900)]
[weight-decay] Enable for batch normalization
Enable weight decay for batch normalization.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Thu, 27 Jan 2022 06:36:46 +0000 (15:36 +0900)]
[test] unittest for weight decay
Added a unittest for weight decay along with a fix.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Mon, 24 Jan 2022 09:54:32 +0000 (18:54 +0900)]
[layers] Support for weight decay to layers
Add support for weight decay to the layers with weights.
Further update the requestWeight API of the layer context to accept
weight decay, and update its usage in the manager.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Mon, 24 Jan 2022 03:58:43 +0000 (12:58 +0900)]
[weight] Support weight decay
Add support for the weight decay property, which enables decay of weights
with each application of the gradient.
Weight decay can be enabled individually for both weight and bias.
This is kept separate from the regularizer as the two behave differently.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
hyeonseok lee [Fri, 14 Jan 2022 04:29:57 +0000 (13:29 +0900)]
[lstm] implement bidirectional lstm forward
- Add a batch_first_forward function
- For now, only forwarding is supported for bidirectional lstm
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
jijoong.moon [Thu, 20 Jan 2022 09:06:41 +0000 (18:06 +0900)]
[ SAVE/LOAD ] save / load optimizer variables
Enable saving/loading of optimizer variables such as M and V for the adam
optimizer.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
hyeonseok lee [Wed, 12 Jan 2022 11:29:41 +0000 (20:29 +0900)]
[lstm] remove timestep property
- Remove the timestep property from the lstm layer.
This disables unrolling of the lstm layer.
- Adjust the recurrent unittest to a simple lstm unittest.
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
Jihoon Lee [Fri, 17 Dec 2021 06:04:27 +0000 (15:04 +0900)]
[Fix] Flatten realizer to maintain original name
This patch changes the flatten realizer to maintain the original node name.
See the included test changes for the concept.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Wed, 19 Jan 2022 10:40:02 +0000 (19:40 +0900)]
[Tensor Pool] Add expose/persist concept
This patch adds the expose/persist concept to the tensor pool and manager.
When a tensor is exposed, it is guaranteed to remain valid until max_exec,
where max_exec is the value passed along with allocateTensors(max_exec).
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Wed, 19 Jan 2022 09:41:28 +0000 (18:41 +0900)]
[ExecOrder] Add exec order control and fix inference
This patch adds exec order control and reduces the memory overhead of saving
the inference result.
Exec order control has two parts:
1. Additional exec order. This enables the graph to add extra exec orders.
2. Expose bit. This enables a request to remain visible after the end of the
execution.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Wed, 19 Jan 2022 08:25:04 +0000 (17:25 +0900)]
[Network Graph] Split max_exec_order
This patch splits max_exec_order into three parts:
1. graph_exec_end: end of graph execution
2. backward_iter_end: end node of the backward operation
3. forward_iter_end: end node of the forward operation
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Thu, 13 Jan 2022 15:27:00 +0000 (00:27 +0900)]
[manager] Separate backwarding -> grad, deriv
This patch separates backwarding into grad and deriv for requestTensors.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Thu, 13 Jan 2022 15:25:50 +0000 (00:25 +0900)]
[Debug] Add a naive validator to the optimized planner
This patch adds a naive validator to the optimized planner. As it doubles
memory consumption, it is kept commented out; when the time comes, this
function can be enabled.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Thu, 13 Jan 2022 15:25:24 +0000 (00:25 +0900)]
[Fix] fix lifespan of recurrent cells
This patch fixes the lifespan of recurrent cells.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Thu, 13 Jan 2022 15:22:29 +0000 (00:22 +0900)]
[Fix] change max order to point last
This patch fixes a bug where gradient clipping was not applied correctly
when optimization is on.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Thu, 13 Jan 2022 15:20:30 +0000 (00:20 +0900)]
[Fix] BN life span
t_reduced is used in forwarding, but its lifespan only covered backwarding.
This patch fixes the issue.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Tue, 18 Jan 2022 06:48:27 +0000 (15:48 +0900)]
[compat] remove contrib from tf headers
This patch removes the contrib path from the tflite headers, as the contrib
headers are being removed.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Thu, 13 Jan 2022 02:24:32 +0000 (11:24 +0900)]
[Fix] Non-const reference
This patch fixes returning a reference to a local variable.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
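A minimal illustration of the bug class (hypothetical code, not the actual diff):

```cpp
#include <string>

// Bug pattern: the local is destroyed when the function returns,
// so the caller receives a dangling reference.
// const std::string &name() { std::string s = "nntrainer"; return s; }

// Fix: return by value (or a reference to storage that outlives the call).
std::string name() {
  std::string s = "nntrainer";
  return s;
}
```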
Jihoon Lee [Mon, 10 Jan 2022 08:51:50 +0000 (17:51 +0900)]
[Android] Delegate option control to android.mk
This patch delegates option control from Android.mk to meson for debug and
optimized builds.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
jijoong.moon [Fri, 7 Jan 2022 03:57:47 +0000 (12:57 +0900)]
[ Network Graph ] Temporary Fix in getInputGrad
When we turn on inPlaceOptimization, it fails if we have
multiple inputs: an input layer (which can be an in-place operation) plus
normal layers. In that case we do not allocate the grad between the input
layer and this layer, which causes an error during calcDerivative, which
requires the grad tensor of the input layer.
This patch fixes this temporarily by creating the tensor buffer it requires,
and it includes a modification of the mem_check script to generate proper
output.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
hyeonseok lee [Thu, 30 Dec 2021 04:54:28 +0000 (13:54 +0900)]
[grucell] enable multi inout
- Enable multi inout for grucell
- Generate grucell layer/model unittest
Self evaluation:
Build test: [X]Passed [ ]Failed [ ]Skipped
Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
hyeonseok lee [Wed, 29 Dec 2021 13:17:18 +0000 (22:17 +0900)]
[grucell] refactoring grucell
- Rename grucell variables
- Uncommented genModelTests
Self evaluation:
Build test: [X]Passed [ ]Failed [ ]Skipped
Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
hyeonseok lee [Fri, 31 Dec 2021 09:34:32 +0000 (18:34 +0900)]
[Dropout] Disable test strong match
As the dropout strong match at the 60% rate is only statistical, it turned
out to hold only sometimes; disabling the dropout strong match.
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
Jihoon Lee [Thu, 30 Dec 2021 09:35:13 +0000 (18:35 +0900)]
[Recurrent] Add input sequencing mechanism
As some kinds of recurrent models require the input layer itself to be
changed along the sequence, this patch proposes a simple mechanism to
add the timestamp suffix to the input layers as well.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
hyeonseok lee [Sat, 18 Dec 2021 20:33:03 +0000 (05:33 +0900)]
[lstmcell] support multi in/out
- Refactoring lstmcell layer to support multi in/out (3 inputs / 2 outputs)
- Regenerate lstmcell testcase for multi in/out
Self evaluation:
Build test: [X]Passed [ ]Failed [ ]Skipped
Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
hyeonseok lee [Mon, 27 Dec 2021 10:47:11 +0000 (19:47 +0900)]
[bugfix] zoneout lstmcell regenerate mask
- The current implementation regenerates the zoneout mask in calcGradient.
Fix it to reuse the zoneout mask and remove regenerating the mask.
Self evaluation:
Build test: [X]Passed [ ]Failed [ ]Skipped
Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
hyeonseok lee [Sat, 18 Dec 2021 22:20:33 +0000 (07:20 +0900)]
[zoneout lstmcell] support multi in/out
- Refactoring zoneout lstmcell layer to support multi in/out (3 inputs / 2 outputs)
- Regenerate zoneout testcase for multi in/out
Self evaluation:
Build test: [X]Passed [ ]Failed [ ]Skipped
Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
hyeonseok lee [Wed, 29 Dec 2021 07:32:27 +0000 (16:32 +0900)]
[Update] Multiout model golden test
This patch generates the multiout model golden test.
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
Jihoon Lee [Tue, 28 Dec 2021 11:26:03 +0000 (20:26 +0900)]
[Output] Return zero grad for empty output
This patchline implements returning a zero grad for an empty output.
Particularly, in this patch:
- Remove manager::requestOutput
- Use requestTensors instead of requestOutput to request outputs
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Tue, 28 Dec 2021 08:01:07 +0000 (17:01 +0900)]
[Clean] Remove getOutputDimensions()
This patch removes initContext::getOutputDimensions().
The function is only used in the test, so it is removed.
(It is used in network_graph but will soon be substituted.)
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Tue, 28 Dec 2021 07:42:20 +0000 (16:42 +0900)]
[context] Receive out conn info and use
InitContext receives the out connection info (true if the connection info
exists, false if not) and uses it to determine whether a given output is
dangling.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Tue, 28 Dec 2021 06:17:03 +0000 (15:17 +0900)]
[Manager] update requestInputs to use requestTensor
This patch updates requestInputs to use requestTensor_ as a bridge.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Tue, 21 Dec 2021 11:47:31 +0000 (20:47 +0900)]
[Context] out_dim -> out spec
This patch prepares the migration to tensor spec v2 by substituting the out
dimension with the out specification inside layer_context.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Tue, 21 Dec 2021 09:02:02 +0000 (18:02 +0900)]
[Tensor] OutputGrad defaults to be zero if not given
This patch creates the output grad (incoming) as all zeros if the given
output is not trainable. This makes the given output be considered as
constant. If a user wants to check whether an output is constant-like
(having a zero gradient as its partner), they can easily check
outputHasGradient.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Mon, 20 Dec 2021 12:53:05 +0000 (21:53 +0900)]
[Const] Make incoming derivative const
As the incoming derivative shall not be changed by the layer, this patch
changes runcontext::getIncomingDerivatives and runcontext::getOutputGrad
to return const.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Mon, 20 Dec 2021 07:26:21 +0000 (16:26 +0900)]
[Test] Add test with dangling output
This patch adds a test with a dangling output; the test passes the normal
run but fails the optimized run.
The current aim is to make it pass the optimized run as well.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
hyeonseok lee [Sat, 18 Dec 2021 03:47:13 +0000 (12:47 +0900)]
[lstm] refactoring lstm layer
- Refactoring lstm layer to use lstm cell core functions.
Self evaluation:
Build test: [X]Passed [ ]Failed [ ]Skipped
Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
hyeonseok lee [Sat, 18 Dec 2021 01:21:29 +0000 (10:21 +0900)]
[zoneout lstmcell] refactoring zoneout lstmcell layer
- Refactoring zoneout lstmcell layer to use lstmcore functions.
- Preserve lstm_cell_state tensor for calcGradient.
- Remove lstmcell core layer
Self evaluation:
Build test: [X]Passed [ ]Failed [ ]Skipped
Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
hyeonseok lee [Sat, 18 Dec 2021 00:43:23 +0000 (09:43 +0900)]
[lstmcell] refactoring lstmcell layer
- Refactoring LSTMCell layer to use LSTM core functions.
Self evaluation:
Build test: [X]Passed [ ]Failed [ ]Skipped
Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
hyeonseok lee [Sat, 18 Dec 2021 00:29:31 +0000 (09:29 +0900)]
[lstmcell core] prepare refactoring lstmcell core layer
- The LSTM cell core layer will be refactored from layer-based to function-based.
This commit prepares the core functions, which are not used right now.
Self evaluation:
Build test: [X]Passed [ ]Failed [ ]Skipped
Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
Jihoon Lee [Fri, 17 Dec 2021 09:07:31 +0000 (18:07 +0900)]
[Recurrent] Support connection for as_sequence
This patch enables the recurrent realizer to consume the support connection
residing in the as_sequence parameter.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Fri, 17 Dec 2021 04:26:39 +0000 (13:26 +0900)]
[RecurrentRealizer] Modify to have connection
This patch modifies the recurrent realizer to have a connection.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Thu, 16 Dec 2021 13:54:18 +0000 (22:54 +0900)]
[Realizer] Change input connection semantics
The input connection semantics are changed to be more intuitive by mapping
connections one-to-one.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Thu, 16 Dec 2021 13:07:19 +0000 (22:07 +0900)]
[Realizer] Change slice realizer to get connection
This patch changes the slice realizer to get a connection.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Thu, 16 Dec 2021 12:51:10 +0000 (21:51 +0900)]
[Semantics] InitContext NumOutput is hint
This patch changes the init context num output to be a hint for how many
outputs have to be allocated. Layers may or may not depend on this
information. What a layer must guarantee is that the number of actual
outputs is at least the init context num output requested; this fixes the
dangling, unused variable issue.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Thu, 16 Dec 2021 06:27:28 +0000 (15:27 +0900)]
[Spec] Change nntrainer install dir
This patch changes the nntrainer install dir to /usr/prefix/*,
see also https://github.com/nnstreamer/nnstreamer/issues/3560
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Thu, 16 Dec 2021 08:20:32 +0000 (17:20 +0900)]
[Identity] Add identity layer test
This patch adds an identity layer test.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Thu, 16 Dec 2021 07:20:09 +0000 (16:20 +0900)]
[Identity] Implement and connect identity layer
This patch implements and connects the identity layer.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Thu, 16 Dec 2021 06:54:25 +0000 (15:54 +0900)]
[Identity] Add identity header
This patch adds the identity layer header.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Wed, 22 Dec 2021 07:17:53 +0000 (16:17 +0900)]
[Graph] set trainable only if trainable
This patch fixes a bug where needsCalcGradient is set to true when it
shouldn't be.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Tue, 21 Dec 2021 01:34:09 +0000 (10:34 +0900)]
[layer] MAE layer bug fix
MAE layer bug fix:
- forwarding fixed to propagate input unchanged
- backwarding updated to scale with the size of the tensor
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
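For reference, the standard MAE definitions the fix corresponds to (assuming the conventional mean-absolute-error loss, with N the tensor size; the loss layer's forwarding passes the input through unchanged while the loss is computed separately):

```math
L = \frac{1}{N}\sum_{i=1}^{N} \lvert y_i - t_i \rvert, \qquad
\frac{\partial L}{\partial y_i} = \frac{\operatorname{sign}(y_i - t_i)}{N}
```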
Parichay Kapoor [Mon, 20 Dec 2021 05:04:56 +0000 (14:04 +0900)]
[optimizer] no request vars for non-trainable weights
The optimizer was requesting variables for non-trainable weights, but it
should not.
This patch provides the corresponding fix.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Mon, 20 Dec 2021 05:01:02 +0000 (14:01 +0900)]
[tensor] Tensor pool request bug fix
If the max execution order for the tensor pool was less than the
largest execution order of a tensor in the tensor pool, then the max
execution order was still being included in the execution orders of that
tensor.
This patch resolves this issue for the end order, as well as for the start
order.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Wed, 15 Dec 2021 12:15:24 +0000 (21:15 +0900)]
[Tensor] Add tensor cat
This patch adds the tensor cat method and its test.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
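A hedged usage sketch (the static cat signature and header path are assumptions, not the exact declaration):

```cpp
#include <tensor.h>

// Concatenate two tensors along axis 2, e.g. two [3:1:4:5] tensors
// become one [3:1:8:5] tensor.
nntrainer::Tensor concat_pair(const nntrainer::Tensor &a,
                              const nntrainer::Tensor &b) {
  return nntrainer::Tensor::cat({a, b}, 2);
}
```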
Jihoon Lee [Wed, 15 Dec 2021 10:52:51 +0000 (19:52 +0900)]
[tensor] Add tensor::split
This patch adds tensor split for wider use.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Wed, 15 Dec 2021 05:05:03 +0000 (14:05 +0900)]
[Trivial] Tensor save changes /ofstream/ostream
There is no reason to limit saving to ofstream, so it is changed to
ostream.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Tue, 14 Dec 2021 03:00:00 +0000 (12:00 +0900)]
[API] Add layer visitor for the model
This patch adds a layer visitor for the model API.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
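A hedged sketch of how such a visitor is typically consumed (the entry point name forEachLayer and the callback shape are assumptions for illustration only; the visiting pattern is the point):

```cpp
#include <iostream>
#include <model.h> // public nntrainer ml::train header (path may vary)

// Walk every layer of a built model and print its name.
void dump_layer_names(ml::train::Model &model) {
  // hypothetical visitor entry point added by this patch
  model.forEachLayer(
    [](ml::train::Layer &layer, void * /*user_data*/) {
      std::cout << layer.getName() << '\n';
    },
    nullptr);
}
```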
hyeonseok lee [Fri, 10 Dec 2021 08:22:41 +0000 (17:22 +0900)]
[gru] enable bias_hh, reset_after
- Enable bias_hh in gru.
- Enable reset_after in gru, grucell. If reset_after is set to true,
the reset gate is applied after the matrix multiplication.
close #1768
Self evaluation:
Build test: [X]Passed [ ]Failed [ ]Skipped
Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
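The two candidate-state formulations the reset_after property switches between (standard GRU variants; the W/b/r/h notation is assumed, not taken from the code):

```math
\text{reset\_after = false:}\quad
\tilde{h}_t = \tanh\big(W_{ih} x_t + b_{ih} + W_{hh}\,(r_t \odot h_{t-1})\big) \\
\text{reset\_after = true:}\quad
\tilde{h}_t = \tanh\big(W_{ih} x_t + b_{ih} + r_t \odot (W_{hh}\, h_{t-1} + b_{hh})\big)
```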
Parichay Kapoor [Mon, 13 Dec 2021 05:53:06 +0000 (14:53 +0900)]
[loss] optimize mse part 3
Optimize the mse layer to use the scaled l2norm instead of manually
calculating the l2norm.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Mon, 13 Dec 2021 05:39:55 +0000 (14:39 +0900)]
[loss] optimize mse part 2
Optimize the mse backwarding calculation by merging the multiply and divide
into a single call.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Mon, 13 Dec 2021 05:36:58 +0000 (14:36 +0900)]
[loss] optimize mse part1
Reduce memory usage for mse forwarding.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
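For reference, the quantities these three commits compute (assuming the conventional MSE definition; the forward pass is a scaled l2norm and the backward pass a single scaled subtraction):

```math
L = \frac{1}{N}\,\lVert y - t \rVert_2^2, \qquad
\frac{\partial L}{\partial y} = \frac{2}{N}\,(y - t)
```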
hyeonseok lee [Thu, 9 Dec 2021 16:18:57 +0000 (01:18 +0900)]
[grucell] enable 2 bias
- Enable bias_hh in grucell
Self evaluation:
Build test: [X]Passed [ ]Failed [ ]Skipped
Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
hyeonseok lee [Thu, 9 Dec 2021 09:01:38 +0000 (18:01 +0900)]
[lstm, lstmcell, zoneout] enable 2 bias
- Enable bias_hh in lstm, lstmcell, zoneout lstmcell layers
Self evaluation:
Build test: [X]Passed [ ]Failed [ ]Skipped
Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
hyeonseok lee [Thu, 9 Dec 2021 02:48:07 +0000 (11:48 +0900)]
[rnn] enable 2 bias
- Enable bias_hh in rnn
Self evaluation:
Build test: [X]Passed [ ]Failed [ ]Skipped
Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
hyeonseok lee [Wed, 8 Dec 2021 03:52:02 +0000 (12:52 +0900)]
[rnncell] enable 2 bias
- Make an integrate_bias property. It decides whether to integrate the 2
biases into 1 or not. It will be used in rnn variants for now.
- Added bias_hh in rnncell
Self evaluation:
Build test: [X]Passed [ ]Failed [ ]Skipped
Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>