Jihoon Lee [Tue, 28 Dec 2021 08:01:07 +0000 (17:01 +0900)]
[Clean] Remove getOutputDimensions()
This patch removes initContext::getOutputDimensions().
This function is used only in tests, so it is removed.
(It is also used in network_graph, but that usage will soon be substituted.)
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Tue, 28 Dec 2021 07:42:20 +0000 (16:42 +0900)]
[context] Receive out conn info and use
InitContext now receives outgoing connection info (true if the connection
exists, false if not) and uses it to determine whether a given output is dangling.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Tue, 28 Dec 2021 06:17:03 +0000 (15:17 +0900)]
[Manager] update requestInputs to use requestTensor
This patch updates requestInputs to use requestTensor_ as a bridge.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Tue, 21 Dec 2021 11:47:31 +0000 (20:47 +0900)]
[Context] out_dim -> out spec
This patch prepares the migration to tensor spec v2 by substituting the out
dimension with an out specification inside layer_context.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Tue, 21 Dec 2021 09:02:02 +0000 (18:02 +0900)]
[Tensor] OutputGrad defaults to be zero if not given
This patch creates the output grad (incoming) as all zeros if the given output
is not trainable. This makes the given output be treated as a
constant. If a user wants to check whether an output is constant-like (having a
zero gradient as its partner), they can simply check
outputHasGradient.
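A minimal sketch of the rule in plain C++ (illustrative names, not nntrainer's
actual API):

```cpp
#include <vector>

// When no incoming derivative is wired to an output, hand out a
// preallocated all-zero tensor of the output's shape, so the output
// behaves as a constant during backwarding.
const std::vector<float> &
outputGradOrZero(const std::vector<float> *incoming,
                 const std::vector<float> &zeros) {
  return incoming != nullptr ? *incoming : zeros;
}
```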
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Mon, 20 Dec 2021 12:53:05 +0000 (21:53 +0900)]
[Const] Make incoming derivative const
As the incoming derivative shall not be changed by the layer, this patch
changes runcontext::getIncomingDerivatives and runcontext::getOutputGrad
to return const.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Mon, 20 Dec 2021 07:26:21 +0000 (16:26 +0900)]
[Test] Add test with dangling output
This patch adds a test with a dangling output; the test passes the normal run
but fails the optimized run.
The current aim is to make it pass the optimized run as well.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
hyeonseok lee [Sat, 18 Dec 2021 03:47:13 +0000 (12:47 +0900)]
[lstm] refactoring lstm layer
- Refactoring lstm layer to use lstm cell core functions.
Self evaluation:
Build test: [X]Passed [ ]Failed [ ]Skipped
Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
hyeonseok lee [Sat, 18 Dec 2021 01:21:29 +0000 (10:21 +0900)]
[zoneout lstmcell] refactoring zoneout lstmcell layer
- Refactoring zoneout lstmcell layer to use lstmcore functions.
- Preserve lstm_cell_state tensor for calcGradient.
- Remove lstmcell core layer
Self evaluation:
Build test: [X]Passed [ ]Failed [ ]Skipped
Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
hyeonseok lee [Sat, 18 Dec 2021 00:43:23 +0000 (09:43 +0900)]
[lstmcell] refactoring lstmcell layer
- Refactoring LSTMCell layer to use LSTM core functions.
Self evaluation:
Build test: [X]Passed [ ]Failed [ ]Skipped
Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
hyeonseok lee [Sat, 18 Dec 2021 00:29:31 +0000 (09:29 +0900)]
[lstmcell core] prepare refactoring lstmcell core layer
- The LSTM cell core layer will be refactored from layer-based to function-based.
This commit prepares the core functions, which are not used right now.
Self evaluation:
Build test: [X]Passed [ ]Failed [ ]Skipped
Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
Jihoon Lee [Fri, 17 Dec 2021 09:07:31 +0000 (18:07 +0900)]
[Recurrent] Support connection for as_sequence
This patch enables the recurrent realizer to consume connections
residing in the as_sequence parameter.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Fri, 17 Dec 2021 04:26:39 +0000 (13:26 +0900)]
[RecurrentRealizer] Modify to have connection
This patch modifies the recurrent realizer to have connections.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Thu, 16 Dec 2021 13:54:18 +0000 (22:54 +0900)]
[Realizer] Change input connection semantics
The input connection semantics are changed to be more intuitive by mapping
connections one to one.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Thu, 16 Dec 2021 13:07:19 +0000 (22:07 +0900)]
[Realizer] Change slice realizer to get connection
This patch changes the slice realizer to take connections.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Thu, 16 Dec 2021 12:51:10 +0000 (21:51 +0900)]
[Semantics] InitContext NumOutput is hint
This patch changes the init context num output to be a hint for how many
outputs have to be allocated. Layers may or may not depend on this information.
What a layer must guarantee is that the number of actual outputs is not
smaller than the init context num output requested. This fixes the dangling,
unused variable issue.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Thu, 16 Dec 2021 06:27:28 +0000 (15:27 +0900)]
[Spec] Change nntrainer install dir
This patch changes the nntrainer install dir to /usr/prefix/*,
see also https://github.com/nnstreamer/nnstreamer/issues/3560
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Thu, 16 Dec 2021 08:20:32 +0000 (17:20 +0900)]
[Identity] Add identity layer test
This patch adds an identity layer test.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Thu, 16 Dec 2021 07:20:09 +0000 (16:20 +0900)]
[Identity] Implement and connect identity layer
This patch implements and connects the identity layer.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Thu, 16 Dec 2021 06:54:25 +0000 (15:54 +0900)]
[Identity] Add identity header
This patch adds the identity layer header.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Wed, 22 Dec 2021 07:17:53 +0000 (16:17 +0900)]
[Graph] set trainable only if trainable
This patch fixes a bug where needsCalcGradient was set to true when it
shouldn't be.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Tue, 21 Dec 2021 01:34:09 +0000 (10:34 +0900)]
[layer] MAE layer bug fix
MAE layer bug fix:
- forwarding fixed to propagate input unchanged
- backwarding updated to scale with size of the tensor
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Mon, 20 Dec 2021 05:04:56 +0000 (14:04 +0900)]
[optimizer] no request vars for non-trainable weights
The optimizer was requesting variables for non-trainable weights, but it
should not.
This patch provides the corresponding fix.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Mon, 20 Dec 2021 05:01:02 +0000 (14:01 +0900)]
[tensor] Tensor pool request bug fix
If the max execution order for the tensor pool was less than the
largest execution order for a tensor in the tensor pool, then the max
execution order was being included in the execution order of that
tensor.
This patch resolves this issue for end order, as well as for the start
order.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Wed, 15 Dec 2021 12:15:24 +0000 (21:15 +0900)]
[Tensor] Add tensor cat
This patch adds the tensor cat method and its test.
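For reference, a minimal standalone sketch of what concatenation along the
outermost axis does for row-major data (generic C++, not the actual Tensor
implementation):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Concatenate b after a along axis 0; both must share the same inner
// (per-row) size for the result to stay rectangular.
std::vector<float> catAxis0(const std::vector<float> &a,
                            const std::vector<float> &b, std::size_t inner) {
  assert(inner > 0 && a.size() % inner == 0 && b.size() % inner == 0);
  std::vector<float> out(a);
  out.insert(out.end(), b.begin(), b.end());
  return out;
}
```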
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Wed, 15 Dec 2021 10:52:51 +0000 (19:52 +0900)]
[tensor] Add tensor::split
This patch adds tensor split for wider use.
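As a rough sketch of the semantics (generic C++, assuming an even split along
the outermost axis; the actual Tensor::split may differ):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Split a flat buffer into `parts` contiguous chunks of equal size,
// i.e. an even split along the outermost axis.
std::vector<std::vector<float>> splitAxis0(const std::vector<float> &t,
                                           std::size_t parts) {
  assert(parts > 0 && t.size() % parts == 0);
  const std::size_t chunk = t.size() / parts;
  std::vector<std::vector<float>> out;
  for (std::size_t i = 0; i < parts; ++i)
    out.emplace_back(t.begin() + i * chunk, t.begin() + (i + 1) * chunk);
  return out;
}
```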
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Wed, 15 Dec 2021 05:05:03 +0000 (14:05 +0900)]
[Trivial] Tensor save changes /ofstream/ostream
There is no reason to limit saving to an ofstream, so it is changed to
ostream.
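A sketch of why the generalization helps: any std::ostream works as a sink, so
the same routine can target a file, a string buffer for tests, or any other
stream wrapper.

```cpp
#include <cstddef>
#include <ostream>

// Writing against std::ostream instead of std::ofstream keeps the save
// routine agnostic to where the bytes end up.
void saveRaw(std::ostream &os, const float *data, std::size_t count) {
  os.write(reinterpret_cast<const char *>(data), count * sizeof(float));
}
```

For example, a std::ostringstream can now capture the output in unit tests
without touching the filesystem.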
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Tue, 14 Dec 2021 03:00:00 +0000 (12:00 +0900)]
[API] Add layer visitor for the model
This patch adds a layer visitor for the model API.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
hyeonseok lee [Fri, 10 Dec 2021 08:22:41 +0000 (17:22 +0900)]
[gru] enable bias_hh, reset_after
- Enable bias_hh in gru.
- Enable reset_after in gru, grucell. If reset_after is set to true,
the reset gate is applied after the matrix multiplication (see the
formulas below).
close #1768
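For reference, the candidate-state computation in the two modes, written as in
other frameworks' GRU implementations (notation assumed, not nntrainer's exact
symbols):

```latex
% reset_after = false: reset gate applied before the matrix multiplication
\tilde{h}_t = \tanh\left(W_{ih} x_t + b_{ih} + W_{hh}\,(r_t \odot h_{t-1}) + b_{hh}\right)
% reset_after = true: reset gate applied after the matrix multiplication
\tilde{h}_t = \tanh\left(W_{ih} x_t + b_{ih} + r_t \odot (W_{hh}\, h_{t-1} + b_{hh})\right)
```

Note that with reset_after = false the two biases are purely additive and could
be folded into one, which is why a separate bias_hh matters mainly on the
reset_after path.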
Self evaluation:
Build test: [X]Passed [ ]Failed [ ]Skipped
Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
Parichay Kapoor [Mon, 13 Dec 2021 05:53:06 +0000 (14:53 +0900)]
[loss] optimize mse part 3
optimize mse layer to use scaled l2norm instead of manually calculating
the l2norm.
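The identity being exploited:

```latex
\mathrm{MSE}(y, t) = \frac{1}{n} \sum_{i=1}^{n} (y_i - t_i)^2
                   = \frac{1}{n} \lVert y - t \rVert_2^2
```

so a single scaled l2norm call replaces the per-element square-and-sum.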
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Mon, 13 Dec 2021 05:39:55 +0000 (14:39 +0900)]
[loss] optimize mse part 2
optimize mse backwarding calculation by merging the multiply and divide into
a single call.
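The gradient that makes the fusion possible:

```latex
\frac{\partial\, \mathrm{MSE}}{\partial y_i} = \frac{2}{n} (y_i - t_i)
```

multiplying by 2 and dividing by n collapse into a single scale by 2/n.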
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Mon, 13 Dec 2021 05:36:58 +0000 (14:36 +0900)]
[loss] optimize mse part 1
reduce memory usage for mse forwarding.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
hyeonseok lee [Thu, 9 Dec 2021 16:18:57 +0000 (01:18 +0900)]
[grucell] enable 2 bias
- Enable bias_hh in grucell
Self evaluation:
Build test: [X]Passed [ ]Failed [ ]Skipped
Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
hyeonseok lee [Thu, 9 Dec 2021 09:01:38 +0000 (18:01 +0900)]
[lstm, lstmcell, zoneout] enable 2 bias
- Enable bias_hh in lstm, lstmcell, zoneout lstmcell layers
Self evaluation:
Build test: [X]Passed [ ]Failed [ ]Skipped
Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
hyeonseok lee [Thu, 9 Dec 2021 02:48:07 +0000 (11:48 +0900)]
[rnn] enable 2 bias
- Enable bias_hh in rnn
Self evaluation:
Build test: [X]Passed [ ]Failed [ ]Skipped
Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
hyeonseok lee [Wed, 8 Dec 2021 03:52:02 +0000 (12:52 +0900)]
[rnncell] enable 2 bias
- Make an integrate bias property. It decides whether to integrate the 2 biases
into 1 or not. It will be used in the rnn variants for now.
- Added bias_hh in rnncell
Self evaluation:
Build test: [X]Passed [ ]Failed [ ]Skipped
Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
Parichay Kapoor [Thu, 9 Dec 2021 06:46:14 +0000 (15:46 +0900)]
[test] enable zoneout lstm test
enable zoneout lstm test
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Thu, 9 Dec 2021 04:21:34 +0000 (13:21 +0900)]
[layer] bug fix for array initializer
Bug fix for the array initializer storing the indices of the requested
weights, so that accessing weights that were not requested always causes an error.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Tue, 30 Nov 2021 10:18:24 +0000 (19:18 +0900)]
[Connection] Enhance parsing logic
This patch enhances the parsing logic of connections, especially related to
indexing.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Cc: Parichay Kapoor <pk.kapoor@samsung.com>
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Tue, 30 Nov 2021 09:50:41 +0000 (18:50 +0900)]
[Trivial] Complement connection doxygen
Add missing parameter explanation in doxygen
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Tue, 30 Nov 2021 09:38:06 +0000 (18:38 +0900)]
[node/trivial] Change input layers to input conns
This patch changes the input layers local variable name to input connections to
make it consistent.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Tue, 7 Dec 2021 05:30:07 +0000 (14:30 +0900)]
[manager] Remove activation input exec order from backwarding
This patch removes activation input exec order from backwarding as input
of the activation layer is not used in the backwarding.
This leads to changes in the unittests as we cannot check all the
outputs, especially close to the end of the model. So, with optimization
enabled, only the output layer's forwarding is checked.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Tue, 7 Dec 2021 12:25:44 +0000 (21:25 +0900)]
[layer] Enable disabling bias for fc and conv
This patch enables disabling the bias for the fully connected and convolution layers.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Tue, 7 Dec 2021 12:17:32 +0000 (21:17 +0900)]
[layer] Add disable bias property for layer impl
Add disable bias property for the layer impl class.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Tue, 7 Dec 2021 02:50:46 +0000 (11:50 +0900)]
[test] Disable zoneout lstm unittests
Disable zoneout lstm unittests until they are fixed.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Mon, 6 Dec 2021 17:35:02 +0000 (02:35 +0900)]
[test] recorder to read input from file
Support the recorder reading data from a file rather than always generating
random data.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Mon, 6 Dec 2021 17:33:11 +0000 (02:33 +0900)]
[model] Add clip gradient norm property
Add the clip gradient norm property for the model as a simple interface
to set it for all the layers.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Mon, 6 Dec 2021 17:11:45 +0000 (02:11 +0900)]
[layer] Embedding fix
Undo the earlier changes due to a bug in them.
The changes will be proposed again with more unittests.
The changes have been commented for now.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Mon, 6 Dec 2021 17:10:18 +0000 (02:10 +0900)]
[layer] zoneout lstm bug fix
zoneout lstm bug fix for handling the scenario where mask rate is set to
zero for either hidden or cell.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Mon, 6 Dec 2021 17:06:31 +0000 (02:06 +0900)]
[realizer] bug fix for activation realizer
Bug fix for the activation realizer to find the case where the layer is
looping to itself with a name change.
TODO: also do this for flatten and other realizers which modify the graph
in a similar fashion.
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Mon, 6 Dec 2021 17:04:04 +0000 (02:04 +0900)]
[layer] Fixes for mol attention layer
- gradient accumulation
- support only 1 output in case state update is not used
- bug fix in backwarding
- activation to work out of place
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Mon, 6 Dec 2021 17:02:15 +0000 (02:02 +0900)]
[realizer] recurrent realizer bug fix in concat
Fix concat to not loop over all inputs multiple times.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
hyeonseok lee [Tue, 7 Dec 2021 05:05:38 +0000 (14:05 +0900)]
[hotfix] Enable zoneout lstmcell unittest
- Fix broken zoneout lstmcell unittest
Self evaluation:
Build test: [X]Passed [ ]Failed [ ]Skipped
Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
Parichay Kapoor [Mon, 6 Dec 2021 08:00:36 +0000 (17:00 +0900)]
[layer] Bug fix for embedding layer
This patch adds bug fix for the embedding layer related to the index of
the data.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Mon, 6 Dec 2021 08:31:29 +0000 (17:31 +0900)]
[Recurrent] Support connection for recurrents
This patch supports connections for recurrent inputs and outputs.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Mon, 6 Dec 2021 07:44:50 +0000 (16:44 +0900)]
[Recurrent] recurrent using zero connection
This patch implements multi inout recurrence using the zero connection.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Mon, 6 Dec 2021 05:27:09 +0000 (14:27 +0900)]
[Recurrent] Support multiple sequence
This patch updates the recurrent realizer to support multiple sequences with a
layer name, not a boolean, in the `as_sequence` property.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Mon, 6 Dec 2021 05:18:53 +0000 (14:18 +0900)]
[Test] Update multiout support specification
This patch updates the multiout support specification.
Patch planned: 1. support return sequence with layer name
2. support multiple input, output layers
3. support multiple input, output connections
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Mon, 6 Dec 2021 04:37:52 +0000 (13:37 +0900)]
[Clean] Prepare remap realizer for further changes
This patch adds comments and renames terms (idx -> time_idx) inside the remap
realizer to prepare for further changes.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
hyeonseok lee [Thu, 2 Dec 2021 15:55:39 +0000 (00:55 +0900)]
[unittest] zoneout lstmcell unittest
- unittest for zoneout lstmcell layer
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
hyeonseok lee [Tue, 30 Nov 2021 18:51:18 +0000 (03:51 +0900)]
[zoneout lstmcell] Implement zoneout lstm cell
- Zoneout lstmcell is based on the paper and the GitHub repo
mentioned in the paper (see the zoneout equations below).
- Todo: Zoneout at inference time is not implemented yet.
refer: https://arxiv.org/pdf/1606.01305.pdf
https://github.com/teganmaharaj/zoneout
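For reference, the zoneout update from the cited paper: instead of dropping
activations, per-unit Bernoulli masks decide whether to keep the previous
state or take the new one.

```latex
c_t = d_t^{c} \odot c_{t-1} + (1 - d_t^{c}) \odot \tilde{c}_t \\
h_t = d_t^{h} \odot h_{t-1} + (1 - d_t^{h}) \odot \tilde{h}_t
```

Here \tilde{c}_t and \tilde{h}_t are the ordinary LSTM updates, and d_t^{c},
d_t^{h} are Bernoulli masks drawn with the cell/hidden zoneout rates. At
inference the paper uses the expected value of the masks instead, which is the
part marked Todo above.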
Self evaluation:
Build test: [X]Passed [ ]Failed [ ]Skipped
Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
hyeonseok lee [Tue, 30 Nov 2021 11:55:54 +0000 (20:55 +0900)]
[lstmcell] Refactoring the lstmcell
- Refactoring the lstmcell into an lstm core layer.
This lstm core layer will be used in the zoneout lstmcell layer.
- The lstm core layer is designed to have 3 inputs and 2 outputs,
like other frameworks.
Self evaluation:
Build test: [X]Passed [ ]Failed [ ]Skipped
Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
Jihoon Lee [Mon, 6 Dec 2021 04:23:04 +0000 (13:23 +0900)]
[Fix] Add dstate to mol attention batch
This patch fixes dstate not having its batch size updated.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Fri, 3 Dec 2021 12:11:06 +0000 (21:11 +0900)]
[graph] Extend lifespan of model outputs
The lifespan of model outputs is extended to the max forward exec order
so that the outputs remain valid till the full forward is completed and
they can be checked in training or even inference.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Fri, 3 Dec 2021 12:09:51 +0000 (21:09 +0900)]
[layer] fix for mol attention layer
Bug fix for mol attention layer as getOutputDerivatives is not available
in calcGradient, so the usage is replaced with a temporary tensor.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Fri, 3 Dec 2021 07:00:40 +0000 (16:00 +0900)]
[layer] Support multiple outputs for mol attention layer
Support multiple outputs for the mol attention layer along with the
unittests.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Thu, 2 Dec 2021 04:06:45 +0000 (13:06 +0900)]
[layer] Support filter masking in mol attention
Add support for filter based masking in mol attention.
Add corresponding unittest.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Tue, 30 Nov 2021 03:54:18 +0000 (12:54 +0900)]
[Fix] Setting batch after finalize/initialize
This patch delays setting the batch until after finalize/initialize.
Setting the batch must be done after the run context has been made. The reason
is that we have the semantics that finalize should be independent of the batch
size, but if the batch size is set before, the output_dims batch must be set
according to the input_dims batch, which is not desirable.
**V2**
Fix hidden bugs related to this
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Wed, 24 Nov 2021 10:58:34 +0000 (19:58 +0900)]
[Test] Add multiout tests
This patch adds multiout tests.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Wed, 24 Nov 2021 09:09:08 +0000 (18:09 +0900)]
[Fix] fix inconsistency when saving a model
When saving a model, if the added order is different from the sorted order,
loading the model from the binary sometimes breaks.
e.g. given

    a -> b
     \
      v
      c

where the layers were added as a, b, c, the graph is sorted to a, c, b.
When this is saved, it is saved as a, c, b.
When loading the graph from the saved file, it is loaded as a, b, c.
This patch fixes this inconsistency that happens when saving a model.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Tue, 23 Nov 2021 14:30:16 +0000 (23:30 +0900)]
[graph] Use connection info for outputs
This patch enables connection information for inbound connections starting
from the outputs, while deprecating some unused methods.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Tue, 23 Nov 2021 13:10:05 +0000 (22:10 +0900)]
[graph] implement setOutputConnections
This patch implements setOutputConnections. Now that every connection has a
defined place to be, we can pinpoint where the connection has to go.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Tue, 23 Nov 2021 12:47:36 +0000 (21:47 +0900)]
[tidy] Remove getInputLayers usage
This patch removes getInputLayers usage; as a connection is now defined by
name + index, getInputLayers poses ambiguity.
**Changes proposed in this PR:**
- Also, connection now has hash function defined.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Tue, 23 Nov 2021 12:11:26 +0000 (21:11 +0900)]
[nn] Apply input connection to compile/initialize
**Changes proposed in this PR:**
- input connection is used for previous realizer/graph.initialize
- temporary string cast operator deleted
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Tue, 23 Nov 2021 11:28:49 +0000 (20:28 +0900)]
[nn] InputLayer->InputConnection
This patch substitutes the input layer with an input connection while removing
props::InputLayer.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Tue, 23 Nov 2021 10:19:05 +0000 (19:19 +0900)]
[props] Extract connection
This patch extracts connection from common_properties to have more room
to handle connections freely.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Thu, 2 Dec 2021 08:37:50 +0000 (17:37 +0900)]
[layer] perform layer context check on first forward
This patch enables the layer context check on the first forward itself,
which revealed a bug in forwarding that was earlier showing up in
calcGradient of the mol attention layer.
Added the corresponding fix for the mol attention layer.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Wed, 1 Dec 2021 06:35:18 +0000 (15:35 +0900)]
[layer] Add MoL layer unittest
Add MoL layer golden test.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Wed, 1 Dec 2021 06:33:08 +0000 (15:33 +0900)]
[layer] Add full support of MoL attention layer
Add MoL attention layer backwarding, exportTo, setbatch member
functions.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Wed, 1 Dec 2021 05:34:07 +0000 (14:34 +0900)]
[layer] Prepare for multiple inheritance for layer
Prepare for multiple inheritance for layers by marking the base class
inheritance as public virtual.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Wed, 1 Dec 2021 02:32:13 +0000 (11:32 +0900)]
[translayer] Fix for fc without bias
Bug fix to work with fc layer with disabled bias.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Wed, 1 Dec 2021 02:30:51 +0000 (11:30 +0900)]
[tensor] Bug fix for copy
This patch adds bug fix for copy with stride where both the source and
destination tensors are now allowed to be strided tensors.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Mon, 29 Nov 2021 06:22:00 +0000 (15:22 +0900)]
[layer] Use dot derivative in fc/attention
Use the dot derivative operation in both fc and attention layer.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Mon, 29 Nov 2021 06:20:32 +0000 (15:20 +0900)]
[tensor] Add derivatives for dot operation
Add derivatives for the dot operation as an easier interface to
calculate the derivative of a dot operation used in forwarding, for both
of the inputs.
Add the corresponding interface for dot_batched as well.
See Also #1721
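The identities involved, for C = AB with upstream gradient dL/dC:

```latex
\frac{\partial L}{\partial A} = \frac{\partial L}{\partial C}\, B^{\top},
\qquad
\frac{\partial L}{\partial B} = A^{\top} \frac{\partial L}{\partial C}
```

Both input derivatives are themselves dot operations, which is what makes a
single dot-derivative interface natural.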
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Thu, 2 Dec 2021 08:47:46 +0000 (17:47 +0900)]
[Tensor Sharing] Make tensor sharing private by default
Add a boolean to tell whether sharing should be enabled for a certain
`requestTensor()`. If this value is off, the tensor should only be used
locally (like a local variable) and not
shared.
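A hypothetical sketch of the distinction (not nntrainer's actual signature): a
shared request may be looked up by name and reused, while a private request
always yields a fresh tensor.

```cpp
#include <cstddef>
#include <deque>
#include <map>
#include <string>
#include <vector>

using Tensor = std::vector<float>;

// Hypothetical pool: shared requests dedupe by name; private requests act
// like local variables and are never handed out twice.
class Pool {
  std::map<std::string, Tensor> shared_;
  std::deque<Tensor> private_; // deque keeps references stable as it grows

public:
  Tensor &request(const std::string &name, std::size_t size, bool shared) {
    if (shared)
      return shared_.try_emplace(name, Tensor(size)).first->second;
    private_.emplace_back(size);
    return private_.back();
  }
};
```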
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
jijoong.moon [Thu, 2 Dec 2021 04:22:27 +0000 (13:22 +0900)]
[ Fix ] bug fixes in tensor_pool and resnet
There were bugs related to:
. the tensor pool trying to allocate even if there is no need
. RandomData in Resnet generating batch-size data
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Jihoon Lee [Wed, 1 Dec 2021 08:53:23 +0000 (17:53 +0900)]
[Header] Remove nntrainer_log.h from app_context.h
This patch removes nntrainer_log.h from app_context.h and implements an
additional safety check.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Tue, 30 Nov 2021 06:23:31 +0000 (15:23 +0900)]
[Dev] Expose app context header
This patch exposes the app context header to make custom layers deployable.
Please note that this is a temporary measure and we might want to use a
different abstraction for this.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Thu, 25 Nov 2021 13:46:25 +0000 (22:46 +0900)]
[layer] Unittests for reduce mean layer
This patch adds unittests for reduce mean layer with bug fix.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Thu, 25 Nov 2021 06:31:08 +0000 (15:31 +0900)]
[layer] Add support for reduce mean layer
This patch adds support for reduce_mean layer with forwarding and
backwarding implementation.
Basic unittests are added.
Golden unittests will come in the next patch.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Thu, 25 Nov 2021 12:07:57 +0000 (21:07 +0900)]
[gradclip] hot fix + unittests
Gradient clipping works by extending the execution order of the
gradients to the last node, where the global norm is calculated and the
gradients are clipped and applied.
However, weight sharing's use of gradients also relies on the last access
of the gradient, and gradient clipping disturbs the balance of gradient
last access.
As a quick fix, if gradient clipping is enabled, the last access is replaced
with the second-to-last access.
A better way would be for clipping to be a layer; then the last access by
the clipping layer would be a valid access and balance to the system could
be maintained.
Unittests for gradient clipping are added with and without weight
sharing.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Wed, 24 Nov 2021 15:01:30 +0000 (00:01 +0900)]
[model/graph] Clip the gradients and then apply
- Calculate the global norm for the gradients which needs to be clipped
- Clip the gradients with the calculated global norm
- apply the clipped gradients
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Wed, 24 Nov 2021 12:30:56 +0000 (21:30 +0900)]
[model/graph] Skip apply gradients if it is to be clipped
Skip applying gradients if they are to be clipped by the global norm.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Wed, 24 Nov 2021 12:15:28 +0000 (21:15 +0900)]
[graph/manager] Extend gradient execution for clipping
Extend the execution order for the gradients if they are used for clipping by
the global norm. The gradient is extended to the max execution order, where
the norm of the gradients will be calculated and used to update and
apply the gradients.
This is done while requesting the gradient itself at the first time as
the layer properties are finalized by then.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Wed, 24 Nov 2021 11:24:11 +0000 (20:24 +0900)]
[layernode] Add property clip gradient by norm
Add the property clip gradient by norm and propagate the property to
each weight by the layer (a sketch of the clipping rule follows below).
The ClipGradByNorm property will clip the gradients by the global norm of
the weights with this value set. Each layer's weight can have a different
clip value.
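A minimal standalone sketch of the clipping rule itself (generic C++; the
actual implementation operates on weight gradients inside the graph):

```cpp
#include <cmath>
#include <vector>

// Clip by global norm: rescale every gradient by max_norm / global_norm,
// but only when the global norm across all tensors exceeds max_norm.
void clipByGlobalNorm(std::vector<std::vector<float>> &grads, float max_norm) {
  float sq_sum = 0.0f;
  for (const auto &g : grads)
    for (float v : g)
      sq_sum += v * v;
  const float global_norm = std::sqrt(sq_sum);
  if (global_norm <= max_norm)
    return;
  const float scale = max_norm / global_norm;
  for (auto &g : grads)
    for (float &v : g)
      v *= scale;
}
```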
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Mon, 22 Nov 2021 09:25:16 +0000 (18:25 +0900)]
[Test] Add simple multiple inout case
This patch adds a simple multiple inout case.
**Additional changes**
- model tests now have options for v2
- fix a bug where the save_load test was not actually loading the graph
from the saved ini
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Mon, 22 Nov 2021 13:40:57 +0000 (22:40 +0900)]
[trivial] Add note on the freq
This patch adds a note on the frequency map.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Mon, 29 Nov 2021 05:00:44 +0000 (14:00 +0900)]
[tensor] Support batched dot operation
Add support for the batched dot operation. Provide cleanup for both of the
attention layers.
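A reference sketch of the semantics, as naive loops (for clarity only; not the
actual implementation):

```cpp
// Batched dot: one independent matmul per batch index, row-major layout.
// A: [batch, m, k], B: [batch, k, n], C: [batch, m, n].
void dotBatched(const float *A, const float *B, float *C,
                int batch, int m, int k, int n) {
  for (int b = 0; b < batch; ++b)
    for (int i = 0; i < m; ++i)
      for (int j = 0; j < n; ++j) {
        float acc = 0.0f;
        for (int p = 0; p < k; ++p)
          acc += A[(b * m + i) * k + p] * B[(b * k + p) * n + j];
        C[(b * m + i) * n + j] = acc;
      }
}
```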
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Fri, 26 Nov 2021 03:20:59 +0000 (12:20 +0900)]
[layer] Unittest + forwarding fix for mol
This patch adds basic validation unittests + a forwarding fix for the mol
attention layer.
Further, a validation test for the mol layer is also added for a larger size,
and the nntrainer model is fixed to support multiple inputs.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Fri, 12 Nov 2021 01:49:49 +0000 (10:49 +0900)]
[layer] Forwarding implementation for mol attention layer
This patch provides the forwarding implementation for the mol attention
layer. New underlying requirements have been added with this, which
require more tensor operations to support strided operations like
softmax and divide; these will be supported soon.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>