Parichay Kapoor [Mon, 29 Nov 2021 06:20:32 +0000 (15:20 +0900)]
[tensor] Add derivatives for dot operation
Add derivatives for the dot operation as an easier interface to
calculate the derivative of a dot operation used in the forward pass,
with respect to both inputs.
Add the corresponding interface for dot_batched as well.
See Also #1721
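The derivative rule can be sketched in plain Python (an illustrative sketch, not nntrainer's C++ Tensor API): for C = A·B with upstream gradient dC, the derivatives are dA = dC·Bᵀ and dB = Aᵀ·dC.

```python
# Illustrative sketch of dot-operation derivatives (not nntrainer code).
# For C = A.dot(B) with upstream gradient dC:
#   dA = dC . B^T    and    dB = A^T . dC

def matmul(a, b):
    """Naive matrix multiply over lists of lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def transpose(m):
    return [list(row) for row in zip(*m)]

def dot_deriv(a, b, d_c):
    """Return (dA, dB) for C = A.B, given the upstream gradient dC."""
    d_a = matmul(d_c, transpose(b))  # same shape as A
    d_b = matmul(transpose(a), d_c)  # same shape as B
    return d_a, d_b
```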
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Thu, 2 Dec 2021 08:47:46 +0000 (17:47 +0900)]
[Tensor Sharing] Make tensor sharing private by default
Add a boolean to tell whether sharing should be enabled for a certain
`requestTensor()`. If this value is off, the requested tensor should
only be used inside the requesting layer (like a local variable) and
not be shared.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
jijoong.moon [Thu, 2 Dec 2021 04:22:27 +0000 (13:22 +0900)]
[ Fix ] bug fixes in tensor_pool and resnet
There were bugs related to:
. Tensor_pool tries to allocate even when there is no need
. RandomData in resnet generates batch-sized data
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Jihoon Lee [Wed, 1 Dec 2021 08:53:23 +0000 (17:53 +0900)]
[Header] Remove nntrainer_log.h from app_context.h
This patch removes nntrainer_log.h from app_context.h and implements
an additional safety check.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Tue, 30 Nov 2021 06:23:31 +0000 (15:23 +0900)]
[Dev] Expose app context header
This patch exposes app context header to make custom layer deployable.
Please note that this is a temporary measure and we might want to use
a different abstraction for this.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Thu, 25 Nov 2021 13:46:25 +0000 (22:46 +0900)]
[layer] Unittests for reduce mean layer
This patch adds unittests for reduce mean layer with bug fix.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Thu, 25 Nov 2021 06:31:08 +0000 (15:31 +0900)]
[layer] Add support for reduce mean layer
This patch adds support for the reduce_mean layer with forwarding and
backwarding implementations.
Basic unittests are added.
Golden unittests will come in the next patch.
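As a rough illustration of what forwarding and backwarding mean for reduce mean (a plain-Python sketch over the last axis of a 2D input; not the layer's actual implementation):

```python
# Sketch of reduce mean forward/backward along the last axis.

def reduce_mean_forward(x):
    """Mean over the last axis of a 2D input."""
    return [sum(row) / len(row) for row in x]

def reduce_mean_backward(x, d_out):
    """Each input element contributed 1/n to the mean, so the incoming
    gradient is broadcast back and scaled by 1/n."""
    n = len(x[0])
    return [[g / n] * n for g in d_out]
```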
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Thu, 25 Nov 2021 12:07:57 +0000 (21:07 +0900)]
[gradclip] hot fix + unittests
Gradient clipping works by extending the execution order of the
gradients to the last node, where the global norm is calculated and the
gradients are clipped and applied.
However, weight sharing also relies on the last access of the gradient,
and gradient clipping disturbs this last-access balance.
As a quick fix, if gradient clipping is enabled, the last access is
replaced with the second-to-last access.
A better way would be for clipping to be a layer; then the last access
by the clipping layer would be a valid access, and the balance of the
system could be maintained.
Unittests for gradient clipping are added with and without weight
sharing.
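The quick fix above can be sketched with a hypothetical helper (illustrative names, not the actual nntrainer bookkeeping): when clipping is enabled, the appended clipping order owns the true last access, so weight sharing treats the second-to-last recorded access as "last".

```python
# Sketch of the last-access quick fix (hypothetical, not nntrainer code).

def last_gradient_access(access_orders, clip_by_global_norm):
    """access_orders: sorted execution orders at which a gradient is
    touched; when clipping is on, the final one is the appended
    clipping order, so the second-to-last is the real last access."""
    if clip_by_global_norm and len(access_orders) >= 2:
        return access_orders[-2]
    return access_orders[-1]
```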
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Wed, 24 Nov 2021 15:01:30 +0000 (00:01 +0900)]
[model/graph] Clip the gradients and then apply
- Calculate the global norm for the gradients which needs to be clipped
- Clip the gradients with the calculated global norm
- apply the clipped gradients
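The three steps above amount to the standard clip-by-global-norm scheme, sketched here in plain Python (illustrative only, not the nntrainer API):

```python
import math

# Sketch of clip-by-global-norm: compute the global norm over all
# gradients to be clipped, then rescale them when the norm exceeds
# the clip value.

def clip_by_global_norm(grads, clip_norm):
    global_norm = math.sqrt(sum(g * g for grad in grads for g in grad))
    if global_norm <= clip_norm:
        return grads  # within bounds: apply unchanged
    scale = clip_norm / global_norm
    return [[g * scale for g in grad] for grad in grads]
```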
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Wed, 24 Nov 2021 12:30:56 +0000 (21:30 +0900)]
[model/graph] Skip apply gradients if it is to be clipped
Skip applying gradients if they are to be clipped by the global norm.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Wed, 24 Nov 2021 12:15:28 +0000 (21:15 +0900)]
[graph/manager] Extend gradient execution for clipping
Extend the execution order for the gradients if they are used for
clipping by global norm. The gradient's execution order is extended to
the max execution order, where the norm of the gradients will be
calculated and used to clip and apply the gradients.
This is done when the gradient is first requested, as the layer
properties are finalized by then.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Wed, 24 Nov 2021 11:24:11 +0000 (20:24 +0900)]
[layernode] Add property clip gradient by norm
Add the property clip gradient by norm and propagate the property to
each weight by the layer.
ClipGradByNorm property will clip the gradients by the global norm of
the weights with this value set. Each layer's weight can have different
scaling clip value.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Mon, 22 Nov 2021 09:25:16 +0000 (18:25 +0900)]
[Test] Add simple multiple inout case
This patch adds a simple multiple input/output case.
**Additional changes**
- model tests now have options for v2
- fix a bug where the save_load test was not actually loading the
graph from the saved ini
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Mon, 22 Nov 2021 13:40:57 +0000 (22:40 +0900)]
[trivial] Add note on the freq
This patch adds a note on the frequency map.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Mon, 29 Nov 2021 05:00:44 +0000 (14:00 +0900)]
[tensor] Support batched dot operation
Add support for the batched dot operation. Also clean up both
attention layers.
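Conceptually, a batched dot applies an independent matrix multiply per batch index; a minimal plain-Python sketch (the function name is illustrative, not the actual nntrainer signature):

```python
# Sketch of a batched dot: one matrix multiply per batch entry.

def dot_batched(a, b):
    """a, b: lists of matrices (lists of lists), one per batch index."""
    assert len(a) == len(b), "batch sizes must match"

    def matmul(x, y):
        return [[sum(x[i][k] * y[k][j] for k in range(len(y)))
                 for j in range(len(y[0]))] for i in range(len(x))]

    return [matmul(x, y) for x, y in zip(a, b)]
```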
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Fri, 26 Nov 2021 03:20:59 +0000 (12:20 +0900)]
[layer] Unittest + forwarding fix for mol
This patch adds basic validation unittests and a forwarding fix for
the mol attention layer.
Further, a larger-sized validation test for the mol layer is also
added, and the nntrainer model is fixed to support multiple inputs.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Fri, 12 Nov 2021 01:49:49 +0000 (10:49 +0900)]
[layer] Forwarding implementation for mol attention layer
This patch provides the forwarding implementation for the mol attention
layer. This adds new underlying requirements: more tensor operations
need to support strided variants (such as softmax and divide), which
will be supported soon.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Fri, 12 Nov 2021 01:01:56 +0000 (10:01 +0900)]
[layer] Support properties for MoL attention layer
This patch provides support for all the properties required for MoL
attention layer.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Fri, 12 Nov 2021 00:38:01 +0000 (09:38 +0900)]
[layer] Scaffolding for MoL Attention Layer
This patch prepares scaffolding for MoL Attention Layer.
Attention layer has also been updated for MoLAttentionLayer to extend
AttentionLayer.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Mon, 29 Nov 2021 03:47:20 +0000 (12:47 +0900)]
[KLD loss] kld loss scaffolding
This patch adds KLD loss scaffolding.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Tue, 23 Nov 2021 04:39:53 +0000 (13:39 +0900)]
[Clean] Remove unused function
Remove unused functions before adopting multi-output support.
The main reason for the cleanup is that these functions are unused but
would be hard to maintain through the incoming multi-output changes.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Tue, 23 Nov 2021 04:26:15 +0000 (13:26 +0900)]
[nn] Attach activation realizer
This patch attaches the activation realizer.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Tue, 23 Nov 2021 04:00:44 +0000 (13:00 +0900)]
[Test] Activation realizer test
This patch implements a test for the activation realizer.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Tue, 23 Nov 2021 03:58:50 +0000 (12:58 +0900)]
[Realizer] Implement activation realizer
This patch implements the activation realizer.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Mon, 22 Nov 2021 13:40:57 +0000 (22:40 +0900)]
[nn] Attach multiout realizer
This patch adds the multiout realizer to the neural network.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Mon, 22 Nov 2021 13:16:44 +0000 (22:16 +0900)]
[Test] Add multiout realizer test
This patch adds a multiout realizer test.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Thu, 18 Nov 2021 01:04:15 +0000 (10:04 +0900)]
[MultiOut] Implement multiout realizer
This patch implements the multiout realizer, which guarantees that
each input layer refers to only a single connection (tensor).
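The guarantee can be illustrated with a small sketch (hypothetical data model with connections as (source, destination) pairs; not the realizer's actual code): when one output feeds several consumers, a multiout node is inserted and each consumer is remapped to a distinct (multiout, index) connection.

```python
from collections import defaultdict

# Sketch of a multiout realizer over (src, dst) connection pairs.

def realize_multiout(connections):
    """Return (remapped connections, inserted multiout node names)."""
    consumers = defaultdict(list)
    for src, dst in connections:
        consumers[src].append(dst)
    remapped, inserted = [], []
    for src, dsts in consumers.items():
        if len(dsts) == 1:
            remapped.append((src, dsts[0]))  # already a single consumer
        else:
            mo = f"{src}/multiout"           # inserted multiout node
            inserted.append(mo)
            remapped.append((src, mo))
            # each consumer now refers to a distinct (multiout, index)
            remapped.extend(((mo, i), dst) for i, dst in enumerate(dsts))
    return remapped, inserted
```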
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Wed, 17 Nov 2021 06:07:04 +0000 (15:07 +0900)]
[Remap] Introduce remapping only identifiers
This patch introduces a way to remap only connection identifiers.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Wed, 17 Nov 2021 04:09:37 +0000 (13:09 +0900)]
[Multiout/trivial] Add scaffolding for mo realizer
This patch adds multiout realizer scaffolding.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Wed, 24 Nov 2021 06:15:27 +0000 (15:15 +0900)]
[layer] Embedding layer support ZeroMaskIdx
The embedding layer now supports a zero mask index: if set, the layer
passes zero values for that particular index in forward propagation,
and that index is not updated with the gradient.
Further, minor tensor updates related to getting tensor values.
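A minimal sketch of the described semantics (assumed behavior, not the layer's actual code): the masked index yields zeros in forwarding and receives no gradient update.

```python
# Sketch of a zero-mask index for an embedding lookup.

def embedding_forward(table, indices, zero_mask_idx=None):
    """table: list of embedding vectors; indices: int inputs."""
    dim = len(table[0])
    return [[0.0] * dim if idx == zero_mask_idx else list(table[idx])
            for idx in indices]

def embedding_backward(table_grad, indices, d_out, zero_mask_idx=None):
    """Accumulate upstream gradients into the table gradient,
    skipping the masked index entirely."""
    for idx, grad in zip(indices, d_out):
        if idx == zero_mask_idx:
            continue  # masked index is never updated
        for j, g in enumerate(grad):
            table_grad[idx][j] += g
    return table_grad
```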
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Wed, 24 Nov 2021 05:24:04 +0000 (14:24 +0900)]
[layer] Embedding input data format
The embedding layer should assume that the input data is written in
int format rather than float. Reading data as float can lead to wrong
values when typecasting to int if std::lround() is not properly used.
Further, as other frameworks require integer data for embedding, it is
best to follow the same input format.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Mon, 22 Nov 2021 10:08:58 +0000 (19:08 +0900)]
[Node] Add get/set method to manipulate connection
This patch adds get/set methods to manipulate input connections for
further use.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Mon, 22 Nov 2021 08:47:53 +0000 (17:47 +0900)]
[Node] replace output layer string -> connection
This patch replaces output layers from strings to connections.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Mon, 22 Nov 2021 08:31:53 +0000 (17:31 +0900)]
[Node] Replace input layers -> input connection
This patch replaces input layers with input connections.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Mon, 22 Nov 2021 06:43:16 +0000 (15:43 +0900)]
[Connection] Add indexing to connection
This patch revises the connection spec to contain an identifier and an
index, and makes use of them.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
hyeonseok lee [Mon, 29 Nov 2021 06:13:21 +0000 (15:13 +0900)]
[fix] Set input layer when given input layer is empty
- Added input layers from the graph when the given input layers are empty
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
Jihoon Lee [Mon, 29 Nov 2021 05:45:57 +0000 (14:45 +0900)]
[trivial/Fix] add dependency header
As forward declarations are widely used, the run context header needs
to be included where the context is actually used.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
hyeonseok lee [Fri, 26 Nov 2021 03:44:13 +0000 (12:44 +0900)]
[QuickFix] Disable contiguous check on add_i
- For now, disable the tensor contiguity check on add_i,
since add_strided does not support broadcast.
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
jijoong.moon [Fri, 26 Nov 2021 01:54:16 +0000 (10:54 +0900)]
[ Android ] add static option for openmp
This is the fix for "cannot find libomp.so"
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Parichay Kapoor [Fri, 19 Nov 2021 10:10:55 +0000 (19:10 +0900)]
[layer] missing layer constructor
Add missing layer constructor for permute.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Fri, 19 Nov 2021 10:07:02 +0000 (19:07 +0900)]
[layer] Bug fix for permute layer
A bug in the permute layer's backward pass, in the calculation of the
reverse direction of the permute, is fixed.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Fri, 19 Nov 2021 10:06:35 +0000 (19:06 +0900)]
[layer] Shape mismatch check in reshape layer
Add shape mismatch check in reshape layer.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Fri, 19 Nov 2021 04:21:59 +0000 (13:21 +0900)]
[layer devel] clean up header dependency
This patch cleans up header dependencies related to layer devel, which
is included in many places and does not need layer_context to be
included in its translation unit.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Fri, 19 Nov 2021 03:33:13 +0000 (12:33 +0900)]
[Build] Use iosfwd instead of iostream
This patch changes <iostream> to <iosfwd> when the full header is not
really needed.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Thu, 18 Nov 2021 08:57:11 +0000 (17:57 +0900)]
[nn] Apply previous input realizer
This patch applies previous input realizer.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Thu, 18 Nov 2021 08:38:10 +0000 (17:38 +0900)]
[Fix] Flatten realizer to not rely on default Layer Addition
As default layer addition happens before the realizers run, the
flatten realizer must not depend on the layer-adding behavior.
This patch fixes the flatten realizer to remove that dependency.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Thu, 18 Nov 2021 07:02:30 +0000 (16:02 +0900)]
[Realizer] Implement previous input realizer
This patch implements the previous input realizer.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Thu, 18 Nov 2021 07:00:17 +0000 (16:00 +0900)]
[Realizer] Implement previous input realizer test
This patch implements a previous input realizer test to demonstrate
the specification of the previous input realizer.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Thu, 18 Nov 2021 01:21:48 +0000 (10:21 +0900)]
[Scaffolding] previous input realizer
This patch adds scaffolding for the previous input realizer.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Thu, 18 Nov 2021 08:40:01 +0000 (17:40 +0900)]
[test] Enable layer node test
Enable the layer node test.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Thu, 25 Nov 2021 07:13:54 +0000 (16:13 +0900)]
[Add] add leaky relu to actifun
This patch adds leaky relu to actifun. For now there is no need to add
a slope parameter; this is more of a quick fix.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
hyeonseok lee [Mon, 22 Nov 2021 06:49:36 +0000 (15:49 +0900)]
[unittest] Implement grucell unittest
- Verify grucell against tensorflow via a layer unittest
- Added grucell model unittests to verify multi unit/timestep, stacked,
and unrolled situations against pytorch
- Todo: find another way to convert the gate order of pytorch grucell
without copying
Self evaluation:
Build test: [X]Passed [ ]Failed [ ]Skipped
Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
hyeonseok lee [Mon, 22 Nov 2021 06:46:17 +0000 (15:46 +0900)]
[grucell] Implement grucell
- Support add with non-contiguous tensors
- Implement grucell, which runs only 1 timestep
- Todo:
1. Make it more efficient with strided tensors (reduce tensor copies)
2. Reduce temporary tensors or use in-place operations
Self evaluation:
Build test: [X]Passed [ ]Failed [ ]Skipped
Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
hyeonseok lee [Thu, 28 Oct 2021 07:35:22 +0000 (16:35 +0900)]
[bugfix] Bugfix on tensor sum
- Check whether beta is null when copying the original data in the sum
function, and add a unittest
- Remove the beta argument from the out-of-place sum function
- Set bias_hh to zero instead of bias_ih in recurrent models when
converting from pytorch to nntrainer
- Fix typos
Self evaluation:
Build test: [X]Passed [ ]Failed [ ]Skipped
Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
Parichay Kapoor [Mon, 15 Nov 2021 05:16:35 +0000 (14:16 +0900)]
[graph] Reduce metadata overhead of slice realizer
The slice realizer was storing, for each node, the path from the start
nodes to that node. This can be big, and also carries the overhead of
storing and copying these paths for each node.
This patch updates the DFS implementation to a recursive approach
which maintains a single stack of nodes in memory representing the
path so far.
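The recursive approach can be sketched as follows (hypothetical graph representation, not the realizer's code): a single shared stack holds the current path, and a visited set prevents revisiting, which also guards against cycles.

```python
# Sketch of a recursive DFS whose single stack *is* the current path.

def find_path(graph, start, end):
    """graph: dict node -> list of children. Returns a start->end path
    as a list of nodes, or None if unreachable."""
    path, visited = [], set()

    def dfs(node):
        path.append(node)          # the stack is the path so far
        if node == end:
            return True
        visited.add(node)
        for child in graph.get(node, []):
            # skipping visited nodes also guards against cycles
            if child not in visited and dfs(child):
                return True
        path.pop()                 # backtrack
        return False

    return list(path) if dfs(start) else None
```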
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Fri, 12 Nov 2021 05:34:30 +0000 (14:34 +0900)]
[graph] Bug fix for slice realizer
Below bugs have been fixed for the slice realizer:
- end layers were being added using children which were not set yet
- the dfs_stack never ignored nodes which had already been visited,
leading to an infinite loop
- an invalid graph containing loops would get stuck in an infinite
loop in the DFS
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Fri, 12 Nov 2021 05:33:27 +0000 (14:33 +0900)]
[layer] Add exportTo for concat layer
Concat layer has properties which were not being saved. This patch adds
the corresponding support.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Mon, 15 Nov 2021 08:21:06 +0000 (17:21 +0900)]
[Test] Add singleshot case test
Add a singleshot test case for when ml_inference gets null input
information.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Mon, 15 Nov 2021 08:19:40 +0000 (17:19 +0900)]
[Filter] Add dimension inference mechanism
This patch adds a dimension inference mechanism as requested.
Please refer to nnstreamer/api#105 for details
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Wed, 10 Nov 2021 09:10:13 +0000 (18:10 +0900)]
[Tp] Propose new tensor spec
This patch proposes a new tensor specification.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Wed, 3 Nov 2021 06:26:19 +0000 (15:26 +0900)]
[tensor] Add checks for non-contiguous tensor before operations
This patch adds checks for contiguous tensors, ensuring that
operations which don't support non-contiguous tensors throw upon
receiving them.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Wed, 17 Nov 2021 03:02:35 +0000 (12:02 +0900)]
[Trivial] Delete duplicated file
This patch deletes a duplicated test file that is not used anywhere.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Mon, 22 Nov 2021 04:14:21 +0000 (13:14 +0900)]
[Feature check] Add tizen 6.5 compatibility
As a prefix has been added to `ml_tizen_set_feature_state` for tizen
7.0, this patch adds backward compatibility with older tizen versions.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
jijoong.moon [Fri, 19 Nov 2021 01:00:23 +0000 (10:00 +0900)]
[ README ] Add SSDC2021 presentation
Add SSDC2021 Presentation in README
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Jihoon Lee [Mon, 15 Nov 2021 04:47:13 +0000 (13:47 +0900)]
[QuickFix] Check weight access for the last
This patch sets lastGradientAccess and firstGradientAccess based on
the weight access order for non-trainable tensors.
This is a stopgap and must be handled in a correct way.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Fri, 12 Nov 2021 08:44:03 +0000 (17:44 +0900)]
[Weights] Add last access concept
As all weights are shared now, the last access needs to be determined.
This patch implements such behavior with a test.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Fri, 12 Nov 2021 05:35:17 +0000 (14:35 +0900)]
[QuickFix] Add not dependent weight sharing
This patch allows weight sharing regardless of the allocation order of
the shared weights.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Thu, 18 Nov 2021 05:03:27 +0000 (14:03 +0900)]
[Build] Fix fbs causing rebuilding
This patch fixes flatbuffer files causing rebuilds.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Wed, 17 Nov 2021 22:56:02 +0000 (07:56 +0900)]
[Fix] api feature check problem
This is an emergency measure because the feature check is failing
inside unittests of the gbs build.
Also enabled the ml_inference test.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Tue, 9 Nov 2021 08:06:53 +0000 (17:06 +0900)]
[Tp] Rename requestTensor -> request
This patch renames requestTensor to request
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Tue, 9 Nov 2021 06:12:41 +0000 (15:12 +0900)]
[Tp] requestPreallocated -> view
This patch renames requestPreallocated to view.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Tue, 9 Nov 2021 05:44:14 +0000 (14:44 +0900)]
[Tp] rename externallyAllocated -> placeholder
This patch renames externallyAllocated -> placeholder
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Fri, 12 Nov 2021 02:02:45 +0000 (11:02 +0900)]
[KNN] Fix querying label when not training
This patch fixes querying the label when not training, which is not
feasible.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Thu, 11 Nov 2021 07:48:56 +0000 (16:48 +0900)]
[KNN] Implement knn saving
As knn was not saving its props, this patch implements property
saving.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Wed, 10 Nov 2021 12:19:19 +0000 (21:19 +0900)]
[Test] Add ml singleshot example
This patch adds integrated test to show and test ml singleshot.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Fri, 5 Nov 2021 09:42:23 +0000 (18:42 +0900)]
[tp] Implement reidentifySource()
This patch implements reidentifySource(). Tests will follow.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Tue, 9 Nov 2021 05:25:38 +0000 (14:25 +0900)]
[Tp] Add dimension check to extend
This patch adds a dimension check to extend().
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Tue, 9 Nov 2021 05:21:05 +0000 (14:21 +0900)]
[Tp] rename create -> request
As per request, create has been renamed to request
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Fri, 5 Nov 2021 09:03:46 +0000 (18:03 +0900)]
[Trivial] Use if instead of template
This patch changes a template to if constexpr. There seems to be a
false positive in cppcheck with the previous implementation, and there
is no need to be bothered by code used in only a few places.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Fri, 5 Nov 2021 08:38:02 +0000 (17:38 +0900)]
Policy based tensor request for tensor pool.
This commit specifically implements createOrExtend().
From this commit, basic tensor pool request will be possible except
`reidentifySource()`.
`reidentifySource()` will come as a separate PR to get better review.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Fri, 5 Nov 2021 08:08:13 +0000 (17:08 +0900)]
[TP] Implement extend()
This patch implements extend() and corresponding tests
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Fri, 5 Nov 2021 07:36:06 +0000 (16:36 +0900)]
[Tpool] Remove updateExternalTensor
Instead of requiring an external request to update the external
tensor, dependency syncing is now done upon request, and the function
is renamed to `fillPlaceholder` to align with incoming changes.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Fri, 5 Nov 2021 07:22:20 +0000 (16:22 +0900)]
[Clean] Separate src spec from dep spec
As the source specification and dependency specification are
diverging, this patch prepares for further code changes with minor
cleanup.
**Changes proposed in this PR:**
- RequestSpec now has tensor, details
- Detail is `std::variant<SourceDetails, DependencyDetails>`
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Fri, 5 Nov 2021 07:17:58 +0000 (16:17 +0900)]
[Trivial] ZERO_LIFESPAN -> UNMANAGED
This patch renames zero_lifespan to unmanaged.
zero_lifespan suggests the lifespan should be extendable, but it is
precisely not, so the name is changed for clarity.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Thu, 4 Nov 2021 11:41:22 +0000 (20:41 +0900)]
[tpool] Implement view()
**Changes proposed in this PR:**
- merge updateExternalTensor()
- enable offset for view/placeholder
- corresponding tests
- s/requestSpec/RequestSpec
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Thu, 4 Nov 2021 09:07:18 +0000 (18:07 +0900)]
[TPool] implement create/placeholder
This patch implements create/view and its corresponding test.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Thu, 4 Nov 2021 08:47:56 +0000 (17:47 +0900)]
[TPool] Add interface to enable extensive sharing
This patch adds an interface to enable extensive sharing, including
reassignment and tensor pointer sharing (previously discussed as s3
sharing).
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Wed, 3 Nov 2021 05:37:08 +0000 (14:37 +0900)]
[layer] Add modes to inplace layer execution
There are now 3 modes to inplace layer execution (compared to 2
previously - yes/no):
- none: not inplace
- restricting: if the current layer is inplace, then not every layer
right next to it can be inplace. The next layer can check its
conditions.
- non-restricting: this layer does not add any restrictions, and the
next layer can treat this layer just as an out-of-place layer when
deciding whether it should work in-place.
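A sketch of how the three modes might interact (the mode names come from the description above; the decision logic is an illustrative assumption, not the graph manager's actual rules):

```python
# Sketch of the three in-place modes and a next-layer decision.

NONE, RESTRICTING, NON_RESTRICTING = "none", "restricting", "non-restricting"

def can_run_inplace(prev_mode, supports_inplace_after_restricting):
    """Decide whether the current layer may run in-place, given the
    previous layer's in-place mode."""
    if prev_mode in (NONE, NON_RESTRICTING):
        # previous layer behaves like an out-of-place layer here
        return True
    # previous layer is restricting: the current layer must check its
    # own conditions
    return supports_inplace_after_restricting
```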
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Mon, 1 Nov 2021 12:20:51 +0000 (21:20 +0900)]
[layer] Inplace support for multiout layer
This patch adds support for in-place execution of the multiout layer.
Corresponding unittests are also added.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
hyeonseok lee [Fri, 5 Nov 2021 07:14:32 +0000 (16:14 +0900)]
[unittest] Implement rnncell unittest
- Generate rnn, rnncell layer unittest
- Generate model unittest which is composed of rnncell
- Verified with multi timestep/stacked/loop_unrolling situation
Self evaluation:
Build test: [X]Passed [ ]Failed [ ]Skipped
Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
hyeonseok lee [Fri, 29 Oct 2021 09:06:09 +0000 (18:06 +0900)]
[rnncell] Implement rnncell
- Implement rnncell which is only run on 1 timestep
Self evaluation:
Build test: [X]Passed [ ]Failed [ ]Skipped
Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
hyeonseok lee [Thu, 4 Nov 2021 10:56:44 +0000 (19:56 +0900)]
[Bugfix] Regenerate lstmcell unittest
- Set lstm bias_ih to unused in init to fix the bug
- Regenerate the lstmcell unittest
- Fix typo
Self evaluation:
Build test: [X]Passed [ ]Failed [ ]Skipped
Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
Parichay Kapoor [Tue, 2 Nov 2021 01:13:19 +0000 (10:13 +0900)]
[graph] Allocate memory based on which node runs
Previously, memory was allocated assuming that all nodes run
backwarding and all layers are trainable, but that is not true.
This patch applies the corresponding optimization and allocates memory
according to the functions which will actually run on each node.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Mon, 1 Nov 2021 11:02:51 +0000 (20:02 +0900)]
[layer] LSTMCell bug fix
This patch applies bug fix for lstmcell.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Mon, 1 Nov 2021 06:51:42 +0000 (15:51 +0900)]
[graph] Skip backwarding bug fix
The current implementation assumes that the graph is always linear and
skips backwarding only for a contiguous set of layers at the start of
the model which should not do backwarding.
This patch fixes this issue by allowing layers in the middle of the
sorted graph to skip backwarding.
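One way to sketch the resulting rule (hypothetical representation, not the actual graph code): a node runs backwarding if it is trainable or if any producer of its inputs runs backwarding, so non-trainable stretches anywhere in the sorted graph can be skipped.

```python
# Sketch: decide backwarding per node in a topologically sorted graph.

def needs_backwarding(sorted_nodes, inputs, trainable):
    """sorted_nodes: topologically sorted node names; inputs: node ->
    list of producer nodes; trainable: node -> bool. A node must run
    backwarding if it is trainable or must propagate derivatives back
    to an earlier trainable node."""
    needs = {}
    for node in sorted_nodes:
        needs[node] = trainable[node] or any(
            needs[p] for p in inputs.get(node, []))
    return needs
```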
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
jijoong.moon [Tue, 2 Nov 2021 11:28:35 +0000 (20:28 +0900)]
[ README ] fix typo & add link for cc APIs
fix typo & add link for cc APIs
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Jihoon Lee [Thu, 28 Oct 2021 06:32:38 +0000 (15:32 +0900)]
[meson] Add android tflite disable option
This patch adds an android tflite disable option.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
jijoong.moon [Tue, 2 Nov 2021 02:59:18 +0000 (11:59 +0900)]
[ README ] update readme
This patch includes readme update.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Parichay Kapoor [Mon, 1 Nov 2021 01:13:16 +0000 (10:13 +0900)]
[layer] Support prefix for sharing weight names
This patch adds support for a prefix for sharing weight names across
layers, so that layers which should share their weights can do so by
using the same prefix.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Mon, 1 Nov 2021 01:08:48 +0000 (10:08 +0900)]
[layer] Simplify naming scheme for requested tensors
All layers were required to make unique names for their requests.
This is now done by the layer context instead, by appending the layer
name as a prefix to the requested name.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>