platform/core/ml/nntrainer.git
hyeonseok lee [Tue, 7 Dec 2021 05:05:38 +0000 (14:05 +0900)]
[hotfix] Enable zoneout lstmcell unittest

 - Fix broken zoneout lstmcell unittest

Self evaluation:

Build test: [X]Passed [ ]Failed [ ]Skipped
Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
Parichay Kapoor [Mon, 6 Dec 2021 08:00:36 +0000 (17:00 +0900)]
[layer] Bug fix for embedding layer

This patch fixes a bug in the embedding layer related to the indexing of
the data.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Mon, 6 Dec 2021 08:31:29 +0000 (17:31 +0900)]
[Recurrent] Support connection for recurrents

This patch supports connections for recurrent inputs and outputs.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Mon, 6 Dec 2021 07:44:50 +0000 (16:44 +0900)]
[Recurrent] recurrent using zero connection

This patch implements multi input/output recurrence using the zero connection.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Mon, 6 Dec 2021 05:27:09 +0000 (14:27 +0900)]
[Recurrent] Support multiple sequence

This patch updates the recurrent realizer to support multiple sequences,
specified by layer name rather than by the boolean `as_sequence` property.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Mon, 6 Dec 2021 05:18:53 +0000 (14:18 +0900)]
[Test] Update multiout support specification

This patch updates multiout support specification.

Patch planned: 1. support return sequence with layer name
               2. support multiple input, output layers
               3. support multiple input, output connections

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Mon, 6 Dec 2021 04:37:52 +0000 (13:37 +0900)]
[Clean] Prepare remap realizer for further changes

This patch adds comments and renames terms (idx -> time_idx) inside the
remap realizer to prepare for further changes.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
hyeonseok lee [Thu, 2 Dec 2021 15:55:39 +0000 (00:55 +0900)]
[unittest] zoneout lstmcell unittest

 - unittest for zoneout lstmcell layer

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
hyeonseok lee [Tue, 30 Nov 2021 18:51:18 +0000 (03:51 +0900)]
[zoneout lstmcell] Implement zoneout lstm cell

 - Zoneout lstmcell is based on the paper and the GitHub repo
   mentioned in the paper.
 - Todo: Zoneout at inference time is not implemented yet.

refer: https://arxiv.org/pdf/1606.01305.pdf
       https://github.com/teganmaharaj/zoneout
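
For reference, the zoneout update rule from the paper, as a minimal
standalone sketch (illustrative only; the function name and the
`std::vector` interface are assumptions, not nntrainer's Tensor API):

```cpp
#include <random>
#include <vector>

// Zoneout at training time, per the paper: each state unit keeps its
// previous value with probability `zoneout_rate`, otherwise it takes the
// newly computed value: next = m * prev + (1 - m) * computed, m ~ Bernoulli.
std::vector<float> zoneout(const std::vector<float> &prev,
                           const std::vector<float> &computed,
                           float zoneout_rate, std::mt19937 &rng) {
  std::bernoulli_distribution keep(zoneout_rate);
  std::vector<float> next(prev.size());
  for (std::size_t i = 0; i < prev.size(); ++i) {
    float m = keep(rng) ? 1.0f : 0.0f; // 1 means the unit is "zoned out"
    next[i] = m * prev[i] + (1.0f - m) * computed[i];
  }
  return next;
}
```

At inference the paper uses the expectation instead,
next = rate * prev + (1 - rate) * computed, which is the part the TODO
above refers to.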

Self evaluation:

    Build test: [X]Passed [ ]Failed [ ]Skipped
    Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
hyeonseok lee [Tue, 30 Nov 2021 11:55:54 +0000 (20:55 +0900)]
[lstmcell] Refactoring the lstmcell

 - Refactored the lstmcell into an lstm core layer.
   This lstm core layer will be used in the zoneout lstmcell layer.
 - The lstm core layer is designed to have 3 inputs and 2 outputs,
   like other frameworks.

Self evaluation:

Build test: [X]Passed [ ]Failed [ ]Skipped
Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
Jihoon Lee [Mon, 6 Dec 2021 04:23:04 +0000 (13:23 +0900)]
[Fix] Add dstate to mol attention batch

This patch fixes dstate not having its batch size updated.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Fri, 3 Dec 2021 12:11:06 +0000 (21:11 +0900)]
[graph] Extend lifespan of model outputs

The lifespan of model outputs is extended to the max forward exec order
so that the outputs remain valid until the full forward pass completes
and can be checked in training or even inference.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Fri, 3 Dec 2021 12:09:51 +0000 (21:09 +0900)]
[layer] fix for mol attention layer

Bug fix for the mol attention layer: getOutputDerivatives is not
available in calcGradient, so its usage is replaced with a temporary
tensor.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Fri, 3 Dec 2021 07:00:40 +0000 (16:00 +0900)]
[layer] Support multiple outputs for mol attention layer

Support multiple outputs for the mol attention layer, along with the
unittests.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Thu, 2 Dec 2021 04:06:45 +0000 (13:06 +0900)]
[layer] Support filter masking in mol attention

Add support for filter-based masking in mol attention.
Add a corresponding unittest.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Tue, 30 Nov 2021 03:54:18 +0000 (12:54 +0900)]
[Fix] Setting batch after finalize/initialize

This patch delays setting the batch size until after finalize/initialize.

Setting the batch size must be done after the run context has been made.
The reason is that finalize has the semantics of being independent of
the batch size, but if the batch size is set before that, the
output_dims batch must be set according to the input_dims batch, which
is not desirable.

**V2**
Fix hidden bugs related to this

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Wed, 24 Nov 2021 10:58:34 +0000 (19:58 +0900)]
[Test] Add multiout tests

This patch adds multiout tests.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Wed, 24 Nov 2021 09:09:08 +0000 (18:09 +0900)]
[Fix] fix inconsistency when saving a model

When saving a model, if the added order differs from the sorted order,
loading the model from its binary sometimes breaks.

eg)
a -> b
  \
   v
   c

and the layers were added as a, b, c,
they are sorted to a, c, b.

When this is saved, it is saved as a, c, b.
When loading the graph from the saved file, it is loaded as a, b, c.

This patch fixes this inconsistency when saving a model.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Tue, 23 Nov 2021 14:30:16 +0000 (23:30 +0900)]
[graph] Use connection info for outputs

This patch enables connection information for inbound connections
starting from the outputs, while deprecating some unused methods.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Tue, 23 Nov 2021 13:10:05 +0000 (22:10 +0900)]
[graph] implement setOutputConnections

This patch implements setOutputConnections. Now that every connection
has a defined place, we can pinpoint where each connection has to go.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Tue, 23 Nov 2021 12:47:36 +0000 (21:47 +0900)]
[tidy] Remove getInputLayers usage

This patch removes getInputLayers usage: as a connection is now defined
by name + index, getInputLayers is ambiguous.

**Changes proposed in this PR:**
- Also, connection now has a hash function defined.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Tue, 23 Nov 2021 12:11:26 +0000 (21:11 +0900)]
[nn] Apply input connection to compile/initialize

**Changes proposed in this PR:**
- input connection is used for previous realizer/graph.initialize
- temporary string cast operator deleted

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Tue, 23 Nov 2021 11:28:49 +0000 (20:28 +0900)]
[nn] InputLayer->InputConnection

This patch substitutes input connections for input layers, removing
props::InputLayer.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Tue, 23 Nov 2021 10:19:05 +0000 (19:19 +0900)]
[props] Extract connection

This patch extracts connection from common_properties to leave more
room to handle connections freely.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Thu, 2 Dec 2021 08:37:50 +0000 (17:37 +0900)]
[layer] perform layer context check on first forward

This patch enables the layer context check on the first forward itself,
which revealed a bug in forwarding that had earlier been showing up in
calcGradient of the mol attention layer.
The corresponding fix for the mol attention layer is added.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Wed, 1 Dec 2021 06:35:18 +0000 (15:35 +0900)]
[layer] Add MoL layer unittest

Add MoL layer golden test.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Wed, 1 Dec 2021 06:33:08 +0000 (15:33 +0900)]
[layer] Add full support of MoL attention layer

Add MoL attention layer backwarding, exportTo, setbatch member
functions.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Wed, 1 Dec 2021 05:34:07 +0000 (14:34 +0900)]
[layer] Prepare for multiple inheritance for layer

Prepare for multiple inheritance for layers by making inheritance from
the base class public virtual.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Wed, 1 Dec 2021 02:32:13 +0000 (11:32 +0900)]
[translayer] Fix for fc without bias

Bug fix to make the fc layer work with bias disabled.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Wed, 1 Dec 2021 02:30:51 +0000 (11:30 +0900)]
[tensor] Bug fix for copy

This patch fixes copy with stride: both the source and destination
tensors are now allowed to be strided tensors.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Mon, 29 Nov 2021 06:22:00 +0000 (15:22 +0900)]
[layer] Use dot derivative in fc/attention

Use the dot derivative operation in both the fc and attention layers.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Mon, 29 Nov 2021 06:20:32 +0000 (15:20 +0900)]
[tensor] Add derivatives for dot operation

Add derivatives for the dot operation as an easier interface to
calculate the derivatives, with respect to both inputs, of a dot
operation used in the forward pass.
Add the corresponding interface for dot_batched as well.

See Also #1721
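
For reference, these are the standard matrix-calculus identities such a
helper packages: assuming the forward pass computed $C = AB$ (no
transposes; the transposed variants follow by moving the transpose to
the other factor), the derivatives with respect to each input are

```latex
\frac{\partial L}{\partial A} = \frac{\partial L}{\partial C}\,B^{\top},
\qquad
\frac{\partial L}{\partial B} = A^{\top}\,\frac{\partial L}{\partial C}
```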

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Thu, 2 Dec 2021 08:47:46 +0000 (17:47 +0900)]
[Tensor Sharing] Make tensor sharing private by default

Add a boolean to tell whether sharing should be enabled for a certain
`requestTensor()`. If this value is off, the tensor should only be used
inside the requesting context (like a local variable) and not be
shared.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
jijoong.moon [Thu, 2 Dec 2021 04:22:27 +0000 (13:22 +0900)]
[ Fix ] bug fixes in tensor_pool and resnet

There were bugs related to:
  - the tensor pool trying to allocate even when there is no need
  - RandomData in Resnet generating batch-size data

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Jihoon Lee [Wed, 1 Dec 2021 08:53:23 +0000 (17:53 +0900)]
[Header] Remove nntrainer_log.h from app_context.h

This patch removes nntrainer_log.h from app_context.h and implements an
additional safety check.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Tue, 30 Nov 2021 06:23:31 +0000 (15:23 +0900)]
[Dev] Expose app context header

This patch exposes the app context header to make custom layers
deployable. Please note that this is a temporary measure, and we might
want to use a different abstraction for this.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Thu, 25 Nov 2021 13:46:25 +0000 (22:46 +0900)]
[layer] Unittests for reduce mean layer

This patch adds unittests for the reduce mean layer, with a bug fix.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Thu, 25 Nov 2021 06:31:08 +0000 (15:31 +0900)]
[layer] Add support for reduce mean layer

This patch adds support for the reduce_mean layer with forwarding and
backwarding implementations.
Basic unittests are added.

Golden unittests will come in the next patch.
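
For reference, backwarding for reduce mean just spreads the incoming
derivative evenly over the reduced elements. A minimal sketch for
reducing over the last axis (illustrative; not the layer's actual
signature):

```cpp
#include <cstddef>
#include <vector>

// Forward: out[i] = mean of the n input elements in[i*n .. i*n + n - 1].
// Backward: each of those n elements receives d_out[i] / n.
void reduce_mean_backward(const std::vector<float> &d_out,
                          std::vector<float> &d_in, std::size_t n) {
  for (std::size_t i = 0; i < d_out.size(); ++i)
    for (std::size_t j = 0; j < n; ++j)
      d_in[i * n + j] = d_out[i] / static_cast<float>(n);
}
```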

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Thu, 25 Nov 2021 12:07:57 +0000 (21:07 +0900)]
[gradclip] hot fix + unittests

Gradient clipping works by extending the execution order of the
gradients to the last node, where the global norm is calculated and the
gradients are clipped and applied.
However, weight sharing also relies on the last access of the gradient,
and gradient clipping disturbs the balance of gradient last access.
As a quick fix, if gradient clipping is enabled, the last access is
replaced with the second-to-last access.

A better way would be for clipping to be a layer; then the last access
by the clipping layer would be a valid access, and the balance of the
system could be maintained.

Unittests for gradient clipping are added with and without weight
sharing.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Wed, 24 Nov 2021 15:01:30 +0000 (00:01 +0900)]
[model/graph] Clip the gradients and then apply

- Calculate the global norm for the gradients which need to be clipped
- Clip the gradients with the calculated global norm
- Apply the clipped gradients (see the sketch below)
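
A minimal sketch of the recipe above, assuming flat float buffers rather
than nntrainer's Weight/Tensor types (function and parameter names are
illustrative):

```cpp
#include <cmath>
#include <vector>

// Scale every gradient by clip_norm / max(clip_norm, global_norm), where
// global_norm is the L2 norm over all gradient elements taken together.
void clip_by_global_norm(std::vector<std::vector<float>> &grads,
                         float clip_norm) {
  float sq_sum = 0.0f;
  for (const auto &g : grads)
    for (float v : g)
      sq_sum += v * v;
  float global_norm = std::sqrt(sq_sum);
  if (global_norm <= clip_norm)
    return; // within budget: apply the gradients unchanged
  float scale = clip_norm / global_norm;
  for (auto &g : grads)
    for (float &v : g)
      v *= scale;
}
```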

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Wed, 24 Nov 2021 12:30:56 +0000 (21:30 +0900)]
[model/graph] Skip apply gradients if it is to be clipped

Skip applying gradients if they are to be clipped by the global norm.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Wed, 24 Nov 2021 12:15:28 +0000 (21:15 +0900)]
[graph/manager] Extend gradient execution for clipping

Extend the execution order for the gradients if they are used for
clipping by global norm. The gradient's order is extended to the max
execution order, where the norm of the gradients will be calculated and
used to update and apply the gradients.
This is done when the gradient is first requested, as the layer
properties are finalized by then.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Wed, 24 Nov 2021 11:24:11 +0000 (20:24 +0900)]
[layernode] Add property clip gradient by norm

Add the clip-gradient-by-norm property and propagate the property to
each weight of the layer.
The ClipGradByNorm property clips the gradients by the global norm of
the weights that have this value set. Each layer's weights can have a
different clip value.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Mon, 22 Nov 2021 09:25:16 +0000 (18:25 +0900)]
[Test] Add simple multiple inout case

This patch adds a simple multiple in/out case.

**Additional changes**
- model tests now have options for v2
- fix a bug where the save_load test was not actually loading the graph
from the saved ini

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Mon, 22 Nov 2021 13:40:57 +0000 (22:40 +0900)]
[trivial] Add note on the freq

This patch adds a note on the frequency map.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Mon, 29 Nov 2021 05:00:44 +0000 (14:00 +0900)]
[tensor] Support batched dot operation

Add support for the batched dot operation. Provide cleanup for both
attention layers.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Fri, 26 Nov 2021 03:20:59 +0000 (12:20 +0900)]
[layer] Unittest + forwarding fix for mol

This patch adds basic validation unittests and a forwarding fix for the
mol attention layer.
Further, a validation test for the mol layer is added for a larger size,
and the nntrainer model is fixed to support multiple inputs.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Fri, 12 Nov 2021 01:49:49 +0000 (10:49 +0900)]
[layer] Forwarding implementation for mol attention layer

This patch provides the forwarding implementation for the mol attention
layer. It adds new underlying requirements for tensor operations that
support strided operations, like softmax and divide, which will be
supported soon.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Fri, 12 Nov 2021 01:01:56 +0000 (10:01 +0900)]
[layer] Support properties for MoL attention layer

This patch provides support for all the properties required for MoL
attention layer.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Fri, 12 Nov 2021 00:38:01 +0000 (09:38 +0900)]
[layer] Scaffolding for MoL Attention Layer

This patch prepares scaffolding for the MoL Attention Layer.
The attention layer has also been updated so that MoLAttentionLayer can
extend AttentionLayer.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Mon, 29 Nov 2021 03:47:20 +0000 (12:47 +0900)]
[KLD loss] kld loss scaffolding

This patch adds KLD loss scaffolding.
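
For reference, the divergence this scaffolding targets, in its discrete
form with $P$ the target distribution and $Q$ the prediction:

```latex
D_{\mathrm{KL}}(P \parallel Q) = \sum_{i} p_i \log \frac{p_i}{q_i}
```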

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Tue, 23 Nov 2021 04:39:53 +0000 (13:39 +0900)]
[Clean] Remove unused function

Remove unused functions before adopting multi-output.

The main reason for the cleanup is that the functions are unused but
hard to maintain through the incoming change of adapting multi-output.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Tue, 23 Nov 2021 04:26:15 +0000 (13:26 +0900)]
[nn] Attach activation realizer

This patch attaches the activation realizer.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Tue, 23 Nov 2021 04:00:44 +0000 (13:00 +0900)]
[Test] Activation realizer test

This patch implements the activation realizer test.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Tue, 23 Nov 2021 03:58:50 +0000 (12:58 +0900)]
[Realizer] Implement activation realizer

This patch implements the activation realizer.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Mon, 22 Nov 2021 13:40:57 +0000 (22:40 +0900)]
[nn] Attach multiout realizer

This patch adds the multiout realizer to the neural network.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Mon, 22 Nov 2021 13:16:44 +0000 (22:16 +0900)]
[Test] Add multiout realizer test

This patch adds a multiout realizer test.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Thu, 18 Nov 2021 01:04:15 +0000 (10:04 +0900)]
[MultiOut] Implement multiout realizer

This patch implements the multiout realizer, which guarantees that each
input_layer refers to only a single connection (tensor).

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Wed, 17 Nov 2021 06:07:04 +0000 (15:07 +0900)]
[Remap] Introduce remapping only identifiers

This patch introduces a way to remap only connections.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Wed, 17 Nov 2021 04:09:37 +0000 (13:09 +0900)]
[Multiout/trivial] Add scaffolding for mo realizer

This patch adds multiout realizer scaffolding.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Wed, 24 Nov 2021 06:15:27 +0000 (15:15 +0900)]
[layer] Embedding layer support ZeroMaskIdx

The embedding layer supports a zero-mask index which, if set, passes
zero values for that particular index in the forward propagation and is
not updated with a gradient.

Further, minor tensor updates related to getting tensor values.
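
A minimal sketch of the zero-mask behavior described above
(illustrative; the real layer operates on Tensors, not `std::vector`):

```cpp
#include <vector>

// Embedding lookup with a zero-mask index: the masked index yields a zero
// vector in forwarding, and in backwarding its row would simply receive no
// gradient update.
std::vector<float> embed(const std::vector<std::vector<float>> &table,
                         unsigned int idx, unsigned int zero_mask_idx) {
  if (idx == zero_mask_idx)
    return std::vector<float>(table.at(0).size(), 0.0f);
  return table.at(idx);
}
```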

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Wed, 24 Nov 2021 05:24:04 +0000 (14:24 +0900)]
[layer] Embedding input data format

The embedding layer should assume that the input data is written in int
format rather than float. Reading data as float can lead to wrong
values when typecasting to int if std::lround() is not properly used.
Further, as other frameworks require integer data for embedding, it is
best to follow the same input format.
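
To illustrate the failure mode (a toy example, not the layer's code): a
float that should be 4 can arrive as 3.9999997f; plain truncation then
picks the wrong embedding row, while std::lround() recovers the intended
index.

```cpp
#include <cmath>

// Truncation: static_cast<unsigned>(3.9999997f) == 3 (wrong row).
// Rounding:   std::lround(3.9999997f)           == 4 (intended row).
unsigned int index_from_float(float raw) {
  return static_cast<unsigned int>(std::lround(raw));
}
```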

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Mon, 22 Nov 2021 10:08:58 +0000 (19:08 +0900)]
[Node] Add get/set method to manipulate connection

This patch adds get/set methods for input connections, for further use.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Mon, 22 Nov 2021 08:47:53 +0000 (17:47 +0900)]
[Node] replace output layer string -> connection

This patch replaces the output layer strings with connections.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Mon, 22 Nov 2021 08:31:53 +0000 (17:31 +0900)]
[Node] Replace input layers -> input connection

This patch replaces input layers with input connections.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Mon, 22 Nov 2021 06:43:16 +0000 (15:43 +0900)]
[Connection] Add indexing to connection

This patch revises the connection spec to contain an identifier and an
index, and makes use of it.
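
Roughly, the spec these commits describe, as a sketch (field and type
names are assumptions, not nntrainer's actual declaration):

```cpp
#include <cstddef>
#include <functional>
#include <string>

// A connection is an identifier (node name) plus an output index.
struct Connection {
  std::string name;
  unsigned int index;
  bool operator==(const Connection &rhs) const {
    return name == rhs.name && index == rhs.index;
  }
};

// Hash over both fields so Connection can key unordered containers,
// matching the "hash function defined" note in the [tidy] commit above.
namespace std {
template <> struct hash<Connection> {
  size_t operator()(const Connection &c) const noexcept {
    return hash<string>{}(c.name) ^ (hash<unsigned int>{}(c.index) << 1);
  }
};
} // namespace std
```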

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
hyeonseok lee [Mon, 29 Nov 2021 06:13:21 +0000 (15:13 +0900)]
[fix] Set input layer when given input layer is empty

 - Added input layers from the graph when the given input layer is empty

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
Jihoon Lee [Mon, 29 Nov 2021 05:45:57 +0000 (14:45 +0900)]
[trivial/Fix] add dependency header

As forward declarations are widely used, the run context header needs
to be added where it is actually used.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
hyeonseok lee [Fri, 26 Nov 2021 03:44:13 +0000 (12:44 +0900)]
[QuickFix] Disable contiguous check on add_i

 - For now, disable the contiguity check of tensors on add_i,
   since add_strided does not support broadcast.

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
jijoong.moon [Fri, 26 Nov 2021 01:54:16 +0000 (10:54 +0900)]
[ Android ] add static option for openmp

This is the fix for "cannot find libomp.so"

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Parichay Kapoor [Fri, 19 Nov 2021 10:10:55 +0000 (19:10 +0900)]
[layer] missing layer constructor

Add missing layer constructor for permute.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Fri, 19 Nov 2021 10:07:02 +0000 (19:07 +0900)]
[layer] Bug fix for permute layer

A bug in the permute layer's backwarding, in the calculation of the
reverse direction of the permute, is fixed.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Fri, 19 Nov 2021 10:06:35 +0000 (19:06 +0900)]
[layer] Shape mismatch check in reshape layer

Add shape mismatch check in reshape layer.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Fri, 19 Nov 2021 04:21:59 +0000 (13:21 +0900)]
[layer devel] clean up header dependency

This patch cleans up header dependencies related to layer_devel, which
is included in many places and does not need layer_context to be
included in its translation unit.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Fri, 19 Nov 2021 03:33:13 +0000 (12:33 +0900)]
[Build] Use iosfwd instead of iostream

This patch changes <iostream> to <iosfwd> where the former is not really needed.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Thu, 18 Nov 2021 08:57:11 +0000 (17:57 +0900)]
[nn] Apply previous input realizer

This patch applies the previous input realizer.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Thu, 18 Nov 2021 08:38:10 +0000 (17:38 +0900)]
[Fix] Flatten realizer to not rely on default Layer Addition

As the default layer addition happens before the realizers, the flatten
layer must not depend on the layer-adding behavior.

This patch fixes the flatten realizer so that it no longer depends on
it.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Thu, 18 Nov 2021 07:02:30 +0000 (16:02 +0900)]
[Realizer] Implement previous input realizer

This patch implements the previous input realizer.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Thu, 18 Nov 2021 07:00:17 +0000 (16:00 +0900)]
[Realizer] Implement previous input realizer test

This patch implements a previous input realizer test to demonstrate the
specification of the previous input realizer.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Thu, 18 Nov 2021 01:21:48 +0000 (10:21 +0900)]
[Scaffolding] previous input realizer

This patch adds scaffolding for the previous input realizer.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Thu, 18 Nov 2021 08:40:01 +0000 (17:40 +0900)]
[test] Enable layer node test

Enable the layer node test.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Thu, 25 Nov 2021 07:13:54 +0000 (16:13 +0900)]
[Add] add leaky relu to actifun

This patch adds leaky relu to actifun. For now there is no need to add
a slope parameter, so this is more of a quick fix.
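
For reference, leaky relu with a fixed slope (the commit adds no slope
property; 0.01 below is just the conventional default, an assumption):

```cpp
// Leaky ReLU: identity for positive inputs, a small linear slope otherwise.
inline float leaky_relu(float x, float alpha = 0.01f) {
  return x > 0.0f ? x : alpha * x;
}

// Its derivative, as used in backwarding.
inline float leaky_relu_prime(float x, float alpha = 0.01f) {
  return x > 0.0f ? 1.0f : alpha;
}
```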

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
hyeonseok lee [Mon, 22 Nov 2021 06:49:36 +0000 (15:49 +0900)]
[unittest] Implement grucell unittest

 - Verify grucell against tensorflow with a layer unittest
 - Added a grucell model unittest to verify multi unit/timestep,
   stacked, and unrolled situations against pytorch
 - Todo: Find another way to avoid copying when converting the gate
   order of the pytorch grucell

Self evaluation:

Build test: [X]Passed [ ]Failed [ ]Skipped
Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
hyeonseok lee [Mon, 22 Nov 2021 06:46:17 +0000 (15:46 +0900)]
[grucell] Implement grucell

 - Support add with non-contiguous tensors
 - Implement grucell, which runs only one timestep (see the equations
   below)
 - Todo:
   1. Make it more efficient with strided tensors (reduce tensor copies)
   2. Reduce temporary tensors or do in-place operations
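
For reference, a sketch of the standard single-timestep GRU update in
pytorch's gate convention (nntrainer's exact gate order and activations
may differ, which is why the unittest commit above mentions converting
the gate order):

```latex
\begin{aligned}
r_t &= \sigma(W_{ir} x_t + b_{ir} + W_{hr} h_{t-1} + b_{hr}) \\
z_t &= \sigma(W_{iz} x_t + b_{iz} + W_{hz} h_{t-1} + b_{hz}) \\
n_t &= \tanh(W_{in} x_t + b_{in} + r_t \odot (W_{hn} h_{t-1} + b_{hn})) \\
h_t &= (1 - z_t) \odot n_t + z_t \odot h_{t-1}
\end{aligned}
```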

Self evaluation:

Build test: [X]Passed [ ]Failed [ ]Skipped
Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
hyeonseok lee [Thu, 28 Oct 2021 07:35:22 +0000 (16:35 +0900)]
[bugfix] Bugfix on tensor sum

 - Check whether beta is null when copying the original data in the sum
   function, and add a unittest
 - Remove the beta argument from the out-of-place sum function
 - Make bias_hh zero instead of bias_ih in recurrent models when
   converting from pytorch to nntrainer
 - Fix typos

Self evaluation:

Build test: [X]Passed [ ]Failed [ ]Skipped
Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
Parichay Kapoor [Mon, 15 Nov 2021 05:16:35 +0000 (14:16 +0900)]
[graph] Reduce metadata overhead of slice realizer

The slice realizer was storing, for each node, the path from the start
nodes to that node. This can be big, and also comes with the overhead
of storing and copying these paths for each node.

This patch updates the DFS implementation to a recursive approach that
maintains a stack of the nodes in memory, representing the path so far.
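
A minimal sketch of that approach, assuming a string-keyed adjacency map
(illustrative; the realizer walks graph nodes, and the visited guard
reflects the bug fix in the commit that follows):

```cpp
#include <string>
#include <unordered_map>
#include <unordered_set>
#include <vector>

using Graph = std::unordered_map<std::string, std::vector<std::string>>;

// Recursive DFS: `path` always holds the nodes from the start node to the
// current one, so the path exists once in memory instead of being copied
// and stored per node. `visited` prevents revisiting (and infinite loops).
void dfs(const Graph &g, const std::string &node,
         std::vector<std::string> &path,
         std::unordered_set<std::string> &visited) {
  if (!visited.insert(node).second)
    return;
  path.push_back(node);
  // ... inspect `path` here when `node` is an end node ...
  auto it = g.find(node);
  if (it != g.end())
    for (const auto &next : it->second)
      dfs(g, next, path, visited);
  path.pop_back();
}
```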

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Fri, 12 Nov 2021 05:34:30 +0000 (14:34 +0900)]
[graph] Bug fix for slice realizer

The bugs below have been fixed in the slice realizer:
- end layers were being added using children which were not set yet
- the dfs_stack never ignored nodes that had already been visited, which
was leading to an infinite loop
- an invalid graph with loops would get stuck in an infinite loop in the
DFS

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Fri, 12 Nov 2021 05:33:27 +0000 (14:33 +0900)]
[layer] Add exportTo for concat layer

Concat layer has properties which were not being saved. This patch adds
the corresponding support.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Mon, 15 Nov 2021 08:21:06 +0000 (17:21 +0900)]
[Test] Add singleshot case test

Add a singleshot test case for when ml_inference gets null input information.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Mon, 15 Nov 2021 08:19:40 +0000 (17:19 +0900)]
[Filter] Add dimension inference mechanism

This patch adds a dimension inference mechanism as requested.
Please refer to nnstreamer/api#105 for details.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Wed, 10 Nov 2021 09:10:13 +0000 (18:10 +0900)]
[Tp] Propose new tensor spec

This patch proposes a new tensor specification.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Wed, 3 Nov 2021 06:26:19 +0000 (15:26 +0900)]
[tensor] Add checks for non-contiguous tensor before operations

This patch adds checks for contiguous tensors, ensuring that operations
which don't support non-contiguous tensors throw on receiving them.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Wed, 17 Nov 2021 03:02:35 +0000 (12:02 +0900)]
[Trivial] Delete duplicated file

This patch deletes a duplicated test file that is not used anywhere.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Mon, 22 Nov 2021 04:14:21 +0000 (13:14 +0900)]
[Feature check] Add tizen 6.5 compatibility

As a prefix has been added to `ml_tizen_set_feature_state` for tizen
7.0, this patch adds backward compatibility with older tizen versions.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
jijoong.moon [Fri, 19 Nov 2021 01:00:23 +0000 (10:00 +0900)]
[ README ] Add SSDC2021 presentation

Add the SSDC2021 presentation to the README.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Jihoon Lee [Mon, 15 Nov 2021 04:47:13 +0000 (13:47 +0900)]
[QuickFix] Check weight access for the last

This patch sets lastGradientAccess and firstGradientAccess based on the
weight access order for non-trainable tensors.

This is an improvised fix and must be handled in a correct way.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Fri, 12 Nov 2021 08:44:03 +0000 (17:44 +0900)]
[Weights] Add last access concept

As all weights are shared now, the last access needs to be determined.
This patch implements such behavior, with a test.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Fri, 12 Nov 2021 05:35:17 +0000 (14:35 +0900)]
[QuickFix] Add not dependent weight sharing

This patch allows weight sharing regardless of the allocation order of
the shared weights.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Thu, 18 Nov 2021 05:03:27 +0000 (14:03 +0900)]
[Build] Fix fbs causing rebuilding

This patch fixes the flatbuffer schema causing rebuilds.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Wed, 17 Nov 2021 22:56:02 +0000 (07:56 +0900)]
[Fix] api feature check problem

This is an emergency measure, as the feature check is failing inside
the unittests of the gbs build.

Also enabled the ml_inference test.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>