Jihoon Lee [Thu, 13 Jan 2022 15:27:00 +0000 (00:27 +0900)]
[manager] Separate backwarding -> grad, deriv
This patch separates backwarding into grad and deriv for the requestTensors.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Thu, 13 Jan 2022 15:25:50 +0000 (00:25 +0900)]
[Debug] Add a naive validator to the optimized planner
This patch adds a naive validator to the optimized planner. This doubles
the memory consumption, so it is kept commented out; when the time
comes, this function can be enabled.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Thu, 13 Jan 2022 15:25:24 +0000 (00:25 +0900)]
[Fix] fix lifespan of recurrent cells
This patch fixes the lifespan of recurrent cells.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Thu, 13 Jan 2022 15:22:29 +0000 (00:22 +0900)]
[Fix] change max order to point last
This patch fixes a bug where gradient clipping was not applied correctly
when optimization is on.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Thu, 13 Jan 2022 15:20:30 +0000 (00:20 +0900)]
[Fix] BN life span
t_reduced is used in forwarding, but its lifespan was set to
backwarding. This patch fixes the issue.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Tue, 18 Jan 2022 06:48:27 +0000 (15:48 +0900)]
[compat] remove contrib from tf headers
This patch removes the contrib path from the tflite headers, as the
contrib headers are being removed.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Thu, 13 Jan 2022 02:24:32 +0000 (11:24 +0900)]
[Fix] Non-const reference
This patch fixes returning a reference to a local variable.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Mon, 10 Jan 2022 08:51:50 +0000 (17:51 +0900)]
[Android] Delegate option control to android.mk
This patch delegates option control in android.mk to meson for debug and
optimized builds.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
jijoong.moon [Fri, 7 Jan 2022 03:57:47 +0000 (12:57 +0900)]
[ Network Graph ] Temporal Fix in getInputGrad
When we turn on inPlaceOptimization, it fails if we have multiple
inputs: an input layer (which can be an in-place operation) and normal
layers. In that case, the grad tensor between the input layer and this
layer is not allocated, which causes an error during calcDerivative,
which requires the grad tensor of the input layer.
This patch fixes this temporarily by creating the tensor buffer it
requires, and it includes a modification of the mem_check script to
generate proper output.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
hyeonseok lee [Thu, 30 Dec 2021 04:54:28 +0000 (13:54 +0900)]
[grucell] enable multi inout
- Enable multi inout for grucell
- Generate grucell layer/model unittest
Self evaluation:
Build test: [X]Passed [ ]Failed [ ]Skipped
Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
hyeonseok lee [Wed, 29 Dec 2021 13:17:18 +0000 (22:17 +0900)]
[grucell] refactoring grucell
- Rename grucell variables
- Uncommented genModelTests
Self evaluation:
Build test: [X]Passed [ ]Failed [ ]Skipped
Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
hyeonseok lee [Fri, 31 Dec 2021 09:34:32 +0000 (18:34 +0900)]
[Dropout] Disable test strong match
As the dropout strong match at a 60% rate is statistical, it turned out
to hold only sometimes; this patch disables the dropout strong match.
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
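The statistical nature of the match can be seen in a small sketch (plain Python, not the project's test code; the 60% rate and sizes are illustrative): the realized drop fraction fluctuates around the configured rate, so an exact "strong" match only holds sometimes.

```python
import random

def dropout_mask(n, rate, rng):
    """1 keeps a unit, 0 drops it; `rate` is the drop probability."""
    return [0 if rng.random() < rate else 1 for _ in range(n)]

rng = random.Random(42)
realized = []
for _ in range(100):
    mask = dropout_mask(1000, 0.6, rng)
    realized.append(1 - sum(mask) / len(mask))  # realized drop fraction

# the realized rate hovers near 0.6 but rarely equals it exactly
print(min(realized), max(realized))
```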
Jihoon Lee [Thu, 30 Dec 2021 09:35:13 +0000 (18:35 +0900)]
[Recurrent] Add input sequencing mechanism
As some recurrent models require the input layer itself to change along
the sequence, this patch proposes a simple mechanism to add a time stamp
suffix to the input layers as well.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
hyeonseok lee [Sat, 18 Dec 2021 20:33:03 +0000 (05:33 +0900)]
[lstmcell] support multi in/out
- Refactoring lstmcell layer to support multi in/out (3 input / 2 output)
- Regenerate lstmcell testcase for multi in/out
Self evaluation:
Build test: [X]Passed [ ]Failed [ ]Skipped
Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
hyeonseok lee [Mon, 27 Dec 2021 10:47:11 +0000 (19:47 +0900)]
[bugfix] zoneout lstmcell regenerate mask
- The current implementation regenerates the zoneout mask in calcGradient.
  Fix it to reuse the zoneout mask and remove the mask regeneration.
Self evaluation:
Build test: [X]Passed [ ]Failed [ ]Skipped
Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
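A minimal sketch of why the mask must be reused (plain Python with made-up values; nntrainer's actual tensors and layer code differ): zoneout keeps the previous state where the mask is 1, and calcGradient is only consistent with forwarding if the very same mask is applied again.

```python
import random

def zoneout_step(h_prev, h_new, mask):
    """Zoneout: keep the previous state where mask == 1."""
    return [m * hp + (1 - m) * hn for m, hp, hn in zip(mask, h_prev, h_new)]

rng = random.Random(0)
h_prev, h_new = [1.0, 2.0, 3.0], [4.0, 5.0, 6.0]
mask = [1 if rng.random() < 0.5 else 0 for _ in h_prev]

out = zoneout_step(h_prev, h_new, mask)
# reusing the saved mask reproduces the forward result exactly;
# a freshly generated mask would in general give a different answer
print(zoneout_step(h_prev, h_new, mask) == out)  # True
```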
hyeonseok lee [Sat, 18 Dec 2021 22:20:33 +0000 (07:20 +0900)]
[zoneout lstmcell] support multi in/out
- Refactoring zoneout lstmcell layer to support multi in/out (3 input / 2output)
- Regenerate zoneout testcase for multi in/out
Self evaluation:
Build test: [X]Passed [ ]Failed [ ]Skipped
Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
hyeonseok lee [Wed, 29 Dec 2021 07:32:27 +0000 (16:32 +0900)]
[Update] Multiout model golden test
This patch generates the multiout model golden test.
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
Jihoon Lee [Tue, 28 Dec 2021 11:26:03 +0000 (20:26 +0900)]
[Output] Return zero grad for empty output
This patchline implements returning a zero grad for empty outputs.
Particularly, in this patch...
- Remove manager::requestOutput
- Use requestTensors instead of requestOutput to request outputs
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Tue, 28 Dec 2021 08:01:07 +0000 (17:01 +0900)]
[Clean] Remove getOutputDimensions()
This patch removes initContext::getOutputDimensions().
This function is only used in tests, so it is removed
(it is used in network_graph but will soon be substituted).
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Tue, 28 Dec 2021 07:42:20 +0000 (16:42 +0900)]
[context] Receive out conn info and use
InitContext receives out conn info (true if the conn info exists, false
if not) and uses it to determine whether a given output is dangling.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Tue, 28 Dec 2021 06:17:03 +0000 (15:17 +0900)]
[Manager] update requestInputs to use requestTensor
This patch updates requestInputs to use requestTensor_ as a bridge.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Tue, 21 Dec 2021 11:47:31 +0000 (20:47 +0900)]
[Context] out_dim -> out spec
This patch prepares the migration to tensor spec v2 by substituting the
out dimension with the out specification inside layer_context.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Tue, 21 Dec 2021 09:02:02 +0000 (18:02 +0900)]
[Tensor] OutputGrad defaults to be zero if not given
This patch creates the output grad (incoming) as all zeros if the given
output is not trainable. This makes the given output be treated as a
constant. If a user wants to check whether an output is constant-like
(having a zero gradient as its partner), they can simply check
outputHasGradient.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
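The behavior can be sketched as follows (hypothetical helper names; the real run context API in nntrainer differs): a missing incoming derivative is replaced by an all-zero tensor, and a separate predicate reports whether a real gradient was supplied.

```python
def get_output_grad(incoming_grad, size):
    """Return the incoming derivative, or all zeros if none was given."""
    if incoming_grad is None:
        return [0.0] * size  # the output is then treated as a constant
    return incoming_grad

def output_has_gradient(incoming_grad):
    """True only when a real (non-defaulted) gradient exists."""
    return incoming_grad is not None

print(get_output_grad(None, 3))   # [0.0, 0.0, 0.0]
print(output_has_gradient(None))  # False
```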
Jihoon Lee [Mon, 20 Dec 2021 12:53:05 +0000 (21:53 +0900)]
[Const] Make incoming derivative const
As the incoming derivative shall not be changed by the layer, this patch
changes runcontext::getIncomingDerivatives and runcontext::getOutputGrad
to return const.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Mon, 20 Dec 2021 07:26:21 +0000 (16:26 +0900)]
[Test] Add test with dangling output
This patch adds a test with a dangling output; the test passes the
normal run but fails the optimized run.
The current aim is to make this pass the optimized run as well.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
hyeonseok lee [Sat, 18 Dec 2021 03:47:13 +0000 (12:47 +0900)]
[lstm] refactoring lstm layer
- Refactoring lstm layer to use lstm cell core functions.
Self evaluation:
Build test: [X]Passed [ ]Failed [ ]Skipped
Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
hyeonseok lee [Sat, 18 Dec 2021 01:21:29 +0000 (10:21 +0900)]
[zoneout lstmcell] refactoring zoneout lstmcell layer
- Refactoring zoneout lstmcell layer to use lstmcore functions.
- Preserve lstm_cell_state tensor for calcGradient.
- Remove lstmcell core layer
Self evaluation:
Build test: [X]Passed [ ]Failed [ ]Skipped
Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
hyeonseok lee [Sat, 18 Dec 2021 00:43:23 +0000 (09:43 +0900)]
[lstmcell] refactoring lstmcell layer
- Refactoring LSTMCell layer to use LSTM core functions.
Self evaluation:
Build test: [X]Passed [ ]Failed [ ]Skipped
Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
hyeonseok lee [Sat, 18 Dec 2021 00:29:31 +0000 (09:29 +0900)]
[lstmcell core] prepare refactoring lstmcell core layer
- The LSTM cell core layer will be refactored from layer based to function based.
  This commit prepares the core functions, which are not used right now.
Self evaluation:
Build test: [X]Passed [ ]Failed [ ]Skipped
Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
Jihoon Lee [Fri, 17 Dec 2021 09:07:31 +0000 (18:07 +0900)]
[Recurrent] Support connection for as_sequence
This patch enables the recurrent realizer to consume connections
residing in the as_sequence parameter.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Fri, 17 Dec 2021 04:26:39 +0000 (13:26 +0900)]
[RecurrentRealizer] Modify to have connection
This patch modifies the recurrent realizer to have a connection.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Thu, 16 Dec 2021 13:54:18 +0000 (22:54 +0900)]
[Realizer] Change input connection semantics
The input connection semantics are changed to be more intuitive by
mapping connections one to one.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Thu, 16 Dec 2021 13:07:19 +0000 (22:07 +0900)]
[Realizer] Change slice realizer to get connection
This patch changes the slice realizer to get a connection.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Thu, 16 Dec 2021 12:51:10 +0000 (21:51 +0900)]
[Semantics] InitContext NumOutput is hint
This patch changes the init context num output to be a hint for how many
outputs have to be allocated. Layers may or may not depend on this
information. What a layer must guarantee is that the number of actual
outputs is at least the init context num output requested. This fixes
the dangling, unused variable issue.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Thu, 16 Dec 2021 06:27:28 +0000 (15:27 +0900)]
[Spec] Change nntrainer install dir
This patch changes the nntrainer install dir to /usr/prefix/*,
see also https://github.com/nnstreamer/nnstreamer/issues/3560
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Thu, 16 Dec 2021 08:20:32 +0000 (17:20 +0900)]
[Identity] Add identity layer test
This patch adds an identity layer test.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Thu, 16 Dec 2021 07:20:09 +0000 (16:20 +0900)]
[Identity] Implement and connect identity layer
This patch implements and connects the identity layer.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Thu, 16 Dec 2021 06:54:25 +0000 (15:54 +0900)]
[Identity] Add identity header
This patch adds the identity layer header.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Wed, 22 Dec 2021 07:17:53 +0000 (16:17 +0900)]
[Graph] set trainable only if trainable
This patch fixes a bug where needsCalcGradient is set to true when it
shouldn't be.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Tue, 21 Dec 2021 01:34:09 +0000 (10:34 +0900)]
[layer] MAE layer bug fix
MAE layer bug fix:
- forwarding fixed to propagate input unchanged
- backwarding updated to scale with size of the tensor
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Mon, 20 Dec 2021 05:04:56 +0000 (14:04 +0900)]
[optimizer] no request vars for non-trainable weights
optimizer is requesting variables for non-trainable weights, but it
should not.
This patch provides the corresponding fix.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Mon, 20 Dec 2021 05:01:02 +0000 (14:01 +0900)]
[tensor] Tensor pool request bug fix
If the max execution order for the tensor pool was less than the
largest execution order for a tensor in the tensor pool, then the max
execution order was being included in the execution order of that
tensor.
This patch resolves this issue for end order, as well as for the start
order.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
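The intent of the fix can be sketched like this (hypothetical helper, not the tensor pool's real API): the pool-wide max execution order should only be appended to a tensor's execution orders when explicitly requested, instead of leaking into tensors whose own range ends earlier.

```python
def finalize_exec_orders(orders, pool_max_order, extend_to_max=False):
    """Return a tensor's execution orders; include the pool-wide max
    order only when the caller explicitly asks for it."""
    orders = sorted(set(orders))
    if extend_to_max and pool_max_order > orders[-1]:
        orders.append(pool_max_order)
    return orders

print(finalize_exec_orders([2, 5], 10))                      # [2, 5]
print(finalize_exec_orders([2, 5], 10, extend_to_max=True))  # [2, 5, 10]
```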
Jihoon Lee [Wed, 15 Dec 2021 12:15:24 +0000 (21:15 +0900)]
[Tensor] Add tensor cat
This patch adds the tensor cat method and its test.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
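The semantics of a cat operation can be sketched with plain lists (illustrative only; Tensor::cat operates on nntrainer tensors along a chosen axis):

```python
def cat(tensors, axis=0):
    """Concatenate 2D tensors (lists of rows) along axis 0 or 1."""
    if axis == 0:
        return [row for t in tensors for row in t]
    # axis == 1: join corresponding rows element-wise
    return [sum((t[i] for t in tensors), []) for i in range(len(tensors[0]))]

a = [[1, 2], [3, 4]]
b = [[5, 6], [7, 8]]
print(cat([a, b], axis=0))  # [[1, 2], [3, 4], [5, 6], [7, 8]]
print(cat([a, b], axis=1))  # [[1, 2, 5, 6], [3, 4, 7, 8]]
```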
Jihoon Lee [Wed, 15 Dec 2021 10:52:51 +0000 (19:52 +0900)]
[tensor] Add tensor::split
This patch adds tensor split for wider use.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
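Split is the dual operation; a list-based sketch (illustrative, not the Tensor::split signature) divides one tensor into equal chunks along the leading axis:

```python
def split(tensor, num):
    """Split a 2D tensor (list of rows) into `num` equal chunks along axis 0."""
    assert len(tensor) % num == 0, "dimension must be divisible by num"
    step = len(tensor) // num
    return [tensor[i * step:(i + 1) * step] for i in range(num)]

t = [[1], [2], [3], [4]]
print(split(t, 2))  # [[[1], [2]], [[3], [4]]]
```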
Jihoon Lee [Wed, 15 Dec 2021 05:05:03 +0000 (14:05 +0900)]
[Trivial] Tensor save changes /ofstream/ostream
There is no reason to limit saving to ofstream, so it is changed to
ostream.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
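The same generalization can be sketched in Python terms (illustrative; nntrainer's save is C++ on std::ostream): by writing to a generic stream interface, an in-memory buffer works exactly like a file.

```python
import io
import struct

def save_tensor(stream, values):
    """Write float data to any writable binary stream, not only a file."""
    stream.write(struct.pack(f"{len(values)}f", *values))

buf = io.BytesIO()  # an in-memory stream; a file opened in "wb" works too
save_tensor(buf, [1.0, 2.0, 3.0])
print(struct.unpack("3f", buf.getvalue()))  # (1.0, 2.0, 3.0)
```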
Jihoon Lee [Tue, 14 Dec 2021 03:00:00 +0000 (12:00 +0900)]
[API] Add layer visitor for the model
This patch adds a layer visitor for the model API.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
hyeonseok lee [Fri, 10 Dec 2021 08:22:41 +0000 (17:22 +0900)]
[gru] enable bias_hh, reset_after
- Enable bias_hh in gru.
- Enable reset_after in gru, grucell. If reset_after is set to true,
apply reset gate after matrix multiplication.
close #1768
Self evaluation:
Build test: [X]Passed [ ]Failed [ ]Skipped
Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
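The difference can be sketched for a single unit (scalar sketch with made-up weights; the real layer uses matrix multiplications): with reset_after, the reset gate r is applied after the recurrent multiplication, which moves bias_hh inside the gated term.

```python
import math

def gru_candidate(x, h, w, u, b_hh, r, reset_after):
    """Scalar sketch of the GRU candidate state for one unit."""
    if reset_after:
        # reset gate applied after the recurrent matrix multiplication
        return math.tanh(w * x + r * (u * h + b_hh))
    # reset gate applied before the multiplication
    return math.tanh(w * x + u * (r * h) + b_hh)

# the two variants differ whenever bias_hh is non-zero
print(gru_candidate(1.0, 1.0, 0.5, 0.5, 1.0, 0.5, True))   # tanh(1.25)
print(gru_candidate(1.0, 1.0, 0.5, 0.5, 1.0, 0.5, False))  # tanh(1.75)
```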
Parichay Kapoor [Mon, 13 Dec 2021 05:53:06 +0000 (14:53 +0900)]
[loss] optimize mse part 3
optimize mse layer to use scaled l2norm instead of manually calculating
the l2norm.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
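The part-3 optimization rests on an identity that can be sketched in plain Python (the actual code uses tensor ops): MSE equals the squared l2-norm of the error scaled by 1/n, so a norm primitive can replace the manual elementwise sum.

```python
import math

def mse_naive(y, t):
    """Elementwise squared error, summed and averaged."""
    return sum((a - b) ** 2 for a, b in zip(y, t)) / len(y)

def mse_scaled_l2(y, t):
    """The same value as a scaled squared l2-norm: ||y - t||^2 / n."""
    l2 = math.sqrt(sum((a - b) ** 2 for a, b in zip(y, t)))
    return l2 ** 2 / len(y)

print(mse_naive([1.0, 2.0, 3.0], [0.0, 0.0, 0.0]))      # 14/3 ≈ 4.6667
print(mse_scaled_l2([1.0, 2.0, 3.0], [0.0, 0.0, 0.0]))  # same value
```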
Parichay Kapoor [Mon, 13 Dec 2021 05:39:55 +0000 (14:39 +0900)]
[loss] optimize mse part 2
optimize mse backwarding calculation by merging multiple and divide into
a single call.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Mon, 13 Dec 2021 05:36:58 +0000 (14:36 +0900)]
[loss] optimize mse part1
reduce memory usage for mse forwarding.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
hyeonseok lee [Thu, 9 Dec 2021 16:18:57 +0000 (01:18 +0900)]
[grucell] enable 2 bias
- Enable bias_hh in grucell
Self evaluation:
Build test: [X]Passed [ ]Failed [ ]Skipped
Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
hyeonseok lee [Thu, 9 Dec 2021 09:01:38 +0000 (18:01 +0900)]
[lstm, lstmcell, zoneout] enable 2 bias
- Enable bias_hh in lstm, lstmcell, zoneout lstmcell layers
Self evaluation:
Build test: [X]Passed [ ]Failed [ ]Skipped
Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
hyeonseok lee [Thu, 9 Dec 2021 02:48:07 +0000 (11:48 +0900)]
[rnn] enable 2 bias
- Enable bias_hh in rnn
Self evaluation:
Build test: [X]Passed [ ]Failed [ ]Skipped
Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
hyeonseok lee [Wed, 8 Dec 2021 03:52:02 +0000 (12:52 +0900)]
[rnncell] enable 2 bias
- Make an integrate bias property. It decides whether to integrate the 2
  biases into 1 or not. It will be used in the rnn variants for now.
- Added bias_hh in rnncell
Self evaluation:
Build test: [X]Passed [ ]Failed [ ]Skipped
Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
Parichay Kapoor [Thu, 9 Dec 2021 06:46:14 +0000 (15:46 +0900)]
[test] enable zoneout lstm test
enable zoneout lstm test
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Thu, 9 Dec 2021 04:21:34 +0000 (13:21 +0900)]
[layer] bug fix for array initializer
Bug fix for the array initializer storing the indices of the requested
weights, so that accessing weights that were not requested always causes
an error.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Tue, 30 Nov 2021 10:18:24 +0000 (19:18 +0900)]
[Connection] Enhance parsing logic
This patch enhances the parsing logic of connections, especially related
to indexing.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Cc: Parichay Kapoor <pk.kapoor@samsung.com>
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Tue, 30 Nov 2021 09:50:41 +0000 (18:50 +0900)]
[Trivial] Complement connection doxygen
Add missing parameter explanation in doxygen
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Tue, 30 Nov 2021 09:38:06 +0000 (18:38 +0900)]
[node/trivial] Change input layers to input conns
This patch changes the input layers local variable name to input
connections to make it consistent.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Tue, 7 Dec 2021 05:30:07 +0000 (14:30 +0900)]
[manager] Remove activation input exec order from backwarding
This patch removes activation input exec order from backwarding as input
of the activation layer is not used in the backwarding.
This leads to change in the unittests as we cannot check all the
outputs, especially close to the end of the model. So, with optimization
enabled, only the output layer's forwarding is checked.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Tue, 7 Dec 2021 12:25:44 +0000 (21:25 +0900)]
[layer] Disable bias enabled for fc and conv
This patch enables disabling the bias for the fully connected and
convolution layers.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Tue, 7 Dec 2021 12:17:32 +0000 (21:17 +0900)]
[layer] Add disable bias property for layer impl
Add disable bias property for the layer impl class.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Tue, 7 Dec 2021 02:50:46 +0000 (11:50 +0900)]
[test] Disable zoneout lstm unittests
Disable the zoneout lstm unittests till they are fixed.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Mon, 6 Dec 2021 17:35:02 +0000 (02:35 +0900)]
[test] recorder to read input from file
Support the recorder reading data from a file rather than always
generating random data.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Mon, 6 Dec 2021 17:33:11 +0000 (02:33 +0900)]
[model] Add clip gradient norm property
Add the clip gradient norm property for the model as a simple interface
to set it for all the layers.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
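Clipping by gradient norm can be sketched as follows (generic sketch, not nntrainer's implementation): when the global l2-norm of the gradients exceeds the configured limit, all gradients are scaled down proportionally so the norm equals the limit.

```python
import math

def clip_by_norm(grads, max_norm):
    """Scale gradients down when their l2-norm exceeds max_norm."""
    norm = math.sqrt(sum(g * g for g in grads))
    if norm <= max_norm:
        return list(grads)
    scale = max_norm / norm
    return [g * scale for g in grads]

print(clip_by_norm([3.0, 4.0], 10.0))  # unchanged: [3.0, 4.0]
print(clip_by_norm([3.0, 4.0], 1.0))   # scaled so the norm becomes 1
```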
Parichay Kapoor [Mon, 6 Dec 2021 17:11:45 +0000 (02:11 +0900)]
[layer] Embedding fix
Undo earlier changes due to bug in them.
The changes will be proposed again with more unittests.
The changes have been commented for now.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Mon, 6 Dec 2021 17:10:18 +0000 (02:10 +0900)]
[layer] zoneout lstm bug fix
zoneout lstm bug fix for handling the scenario where mask rate is set to
zero for either hidden or cell.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Mon, 6 Dec 2021 17:06:31 +0000 (02:06 +0900)]
[realizer] bug fix for activation realizer
Bug fix for the activation realizer to handle the case where the layer
is looping to itself with a name change.
TODO: also do for flatten and other realizers which modified the graph
in similar fashion.
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Mon, 6 Dec 2021 17:04:04 +0000 (02:04 +0900)]
[layer] Fixes for mol attention layer
- gradient accumulation
- support only 1 output in case state update is not used
- bug fix in backwarding
- activation to work out of place
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Mon, 6 Dec 2021 17:02:15 +0000 (02:02 +0900)]
[realizer] recurrent realizer bug fix in concat
concat to not loop over all inputs multiple times.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
hyeonseok lee [Tue, 7 Dec 2021 05:05:38 +0000 (14:05 +0900)]
[hotfix] Enable zoneout lstmcell unittest
- Fix broken zoneout lstmcell unittest
Self evaluation:
Build test: [X]Passed [ ]Failed [ ]Skipped
Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
Parichay Kapoor [Mon, 6 Dec 2021 08:00:36 +0000 (17:00 +0900)]
[layer] Bug fix for embedding layer
This patch adds bug fix for the embedding layer related to the index of
the data.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Mon, 6 Dec 2021 08:31:29 +0000 (17:31 +0900)]
[Recurrent] Support connection for recurrents
This patch supports connections for recurrent inputs and outputs.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Mon, 6 Dec 2021 07:44:50 +0000 (16:44 +0900)]
[Recurrent] recurrent using zero connection
This patch implements multi inout recurrent using zero connection
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Mon, 6 Dec 2021 05:27:09 +0000 (14:27 +0900)]
[Recurrent] Support multiple sequence
This patch updates the recurrent realizer to support multiple sequences
via the `as_sequence` property, using layer names instead of a boolean.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Mon, 6 Dec 2021 05:18:53 +0000 (14:18 +0900)]
[Test] Update multiout support specification
This patch updates multiout support specification.
Patch planned: 1. support return sequence with layer name
2. support multiple input, output layers
3. support multiple input, output connections
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Mon, 6 Dec 2021 04:37:52 +0000 (13:37 +0900)]
[Clean] Prepare remap realizer for further changes
This patch adds comments and renames terms (idx -> time_idx) inside the
remap realizer to prepare for further changes.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
hyeonseok lee [Thu, 2 Dec 2021 15:55:39 +0000 (00:55 +0900)]
[unittest] zoneout lstmcell unittest
- unittest for zoneout lstmcell layer
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
hyeonseok lee [Tue, 30 Nov 2021 18:51:18 +0000 (03:51 +0900)]
[zoneout lstmcell] Implement zoneout lstm cell
- Zoneout lstmcell is based on the paper and the github repo mentioned
  in the paper.
- Todo: Zoneout at inference time is not implemented yet.
refer: https://arxiv.org/pdf/1606.01305.pdf
https://github.com/teganmaharaj/zoneout
Self evaluation:
Build test: [X]Passed [ ]Failed [ ]Skipped
Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
hyeonseok lee [Tue, 30 Nov 2021 11:55:54 +0000 (20:55 +0900)]
[lstmcell] Refactoring the lstmcell
- Refactoring the lstmcell to lstm core layer.
This lstm core layer will be used in zoneout lstmcell layer
- lstm core layer is designed to have 3 inputs, 2 outputs
  like other frameworks.
Self evaluation:
Build test: [X]Passed [ ]Failed [ ]Skipped
Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
Jihoon Lee [Mon, 6 Dec 2021 04:23:04 +0000 (13:23 +0900)]
[Fix] Add dstate to mol attention batch
This patch fixes dstate not having its batch size updated.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Fri, 3 Dec 2021 12:11:06 +0000 (21:11 +0900)]
[graph] Extend lifespan of model outputs
The lifespan of model outputs is extended to the max forward exec order
so that the outputs remain valid till the full forward is completed and
they can be checked in the training or even inference.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Fri, 3 Dec 2021 12:09:51 +0000 (21:09 +0900)]
[layer] fix for mol attention layer
Bug fix for mol attention layer as getOutputDerivatives is not available
in calcGradient, so the usage is replaced with a temporary tensor.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Fri, 3 Dec 2021 07:00:40 +0000 (16:00 +0900)]
[layer] Support multiple outputs for mol attention layer
Support multiple outputs for mol attention layer along with the
unitests.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Thu, 2 Dec 2021 04:06:45 +0000 (13:06 +0900)]
[layer] Support filter masking in mol attention
Add support for filter based masking in mol attention.
Add corresponding unittest.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Tue, 30 Nov 2021 03:54:18 +0000 (12:54 +0900)]
[Fix] Setting batch after finalize/initialize
This patch delays setting the batch until after finalize/initialize.
Setting the batch must be done after the runcontext has been made. The
reason is that finalize has the semantics of being independent of the
batch size; but if the batch size is set before, the output_dims batch
must be set according to the input_dims batch, which is not desirable.
**V2**
Fix hidden bugs related to this
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Wed, 24 Nov 2021 10:58:34 +0000 (19:58 +0900)]
[Test] Add multiout tests
This patch adds multiout tests.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Wed, 24 Nov 2021 09:09:08 +0000 (18:09 +0900)]
[Fix] fix inconsistency when saving a model
When saving a model, if the added order differs from the sorted order,
loading the model from binary sometimes breaks.
eg)
a -> b
 \
  v
  c
Here the layers were added as a, b, c but are sorted to a, c, b.
When this is saved, it is saved as a, c, b.
When the graph is loaded from the saved file, it is loaded as a, b, c.
This patch fixes this inconsistency when saving a model.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Tue, 23 Nov 2021 14:30:16 +0000 (23:30 +0900)]
[graph] Use connection info for outputs
This patch enables connection information for inbound connections
starting from outputs, while deprecating some unused methods.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Tue, 23 Nov 2021 13:10:05 +0000 (22:10 +0900)]
[graph] implement setOutputConnections
This patch implements setOutputConnections. Now that every connection
has a defined place, we can pinpoint where each connection has to go.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Tue, 23 Nov 2021 12:47:36 +0000 (21:47 +0900)]
[tidy] Remove getInputLayers usage
This patch removes getInputLayers usage: as a connection is now defined
by name + index, getInputLayers poses ambiguity.
**Changes proposed in this PR:**
- Also, connection now has hash function defined.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Tue, 23 Nov 2021 12:11:26 +0000 (21:11 +0900)]
[nn] Apply input connection to compile/initialize
**Changes proposed in this PR:**
- input connection is used for previous realizer/graph.initialize
- temporary string cast operator deleted
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Tue, 23 Nov 2021 11:28:49 +0000 (20:28 +0900)]
[nn] InputLayer->InputConnection
This patch substitutes the input layer with an input connection while
removing props::InputLayer.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Tue, 23 Nov 2021 10:19:05 +0000 (19:19 +0900)]
[props] Extract connection
This patch extracts connection from common_properties to have more room
to handle connections freely.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Thu, 2 Dec 2021 08:37:50 +0000 (17:37 +0900)]
[layer] perform layer context check on first forward
This patch enables the layer context check on the first forward itself,
which revealed a bug in forwarding that was earlier surfacing in
calcGradient in the mol attention layer.
Added the corresponding fix for the mol attention layer.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Wed, 1 Dec 2021 06:35:18 +0000 (15:35 +0900)]
[layer] Add MoL layer unittest
Add MoL layer golden test.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Wed, 1 Dec 2021 06:33:08 +0000 (15:33 +0900)]
[layer] Add full support of MoL attention layer
Add MoL attention layer backwarding, exportTo, setbatch member
functions.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Wed, 1 Dec 2021 05:34:07 +0000 (14:34 +0900)]
[layer] Prepare for multiple inheritance for layer
Prepare for multiple inheritance for layers by marking public virtual of
the base class.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Wed, 1 Dec 2021 02:32:13 +0000 (11:32 +0900)]
[translayer] Fix for fc without bias
Bug fix to work with fc layer with disabled bias.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Wed, 1 Dec 2021 02:30:51 +0000 (11:30 +0900)]
[tensor] Bug fix for copy
This patch adds bug fix for copy with stride where both the source and
destination tensors are now allowed to be strided tensors.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>