platform/core/ml/nntrainer.git
3 years ago[Test] Add model v2 test generator
Jihoon Lee [Tue, 19 Oct 2021 10:00:56 +0000 (19:00 +0900)]
[Test] Add model v2 test generator

This patch adds a model v2 test generator using PyTorch.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Test] Connect v2 test to the parameterized test
Jihoon Lee [Tue, 19 Oct 2021 08:39:59 +0000 (17:39 +0900)]
[Test] Connect v2 test to the parameterized test

This patch connects the v2 test to the model param test for convenience.

+ add validateFor_v2. This is separated out because the label dimension is
not given, although it should eventually be merged into validateFor.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Test] Add models v2 test
Jihoon Lee [Tue, 19 Oct 2021 08:32:39 +0000 (17:32 +0900)]
[Test] Add models v2 test

This patch proposes the models v2 test, which is much lighter.

**Purpose**
1. Lighter model test, as the layer golden test compensates for it.
2. Easier debugging, especially by checking offsets and moving metadata.
3. Give greater flexibility to the golden test generator on the model
architecture.
4. Not saving a golden file but making it on the fly on the builder side.

**Major Difference**
1. Gradient comparison is skipped as it is automatically checked at the
next iteration.
2. Add metadata about num_iterations to the file.
3. Add a size check while reading the golden file.
4. Delete redundant information that can be inferred.
5. Every iteration has its own input information.

**Golden Format**
```
    ## file format is as below
    # [<number of iteration(int)> <Iteration> <Iteration>...<Iteration>]
    # Each iteration contains
    # [<input(Tensors)><Label(Tensors)><Parameters(Tensors)><Output(Tensors)>]
    # Each tensor contains
    # [<num_elements(int32)><data_point(float32)>...<data_point(float32)>]
```
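
For illustration, a minimal Python sketch of a reader for this layout (assuming little-endian int32/float32 and that the number of tensors per iteration is known from the model; names are hypothetical, not the actual generator code):

```python
import struct

def read_tensor(f):
    # <num_elements(int32)> followed by num_elements float32 data points
    (num_elements,) = struct.unpack("<i", f.read(4))
    return struct.unpack(f"<{num_elements}f", f.read(4 * num_elements))

def read_golden(path, tensors_per_iteration):
    # <number of iterations(int)> followed by that many iterations, each holding
    # input, label, parameter, and output tensors in order
    with open(path, "rb") as f:
        (num_iterations,) = struct.unpack("<i", f.read(4))
        return [[read_tensor(f) for _ in range(tensors_per_iteration)]
                for _ in range(num_iterations)]
```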

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Trivial] Open sizeCheckedReadTensor to util
Jihoon Lee [Tue, 19 Oct 2021 08:29:15 +0000 (17:29 +0900)]
[Trivial] Open sizeCheckedReadTensor to util

This patch opens sizeCheckedReadTensor to util for later use.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Fix/Sharing] Fix gradient updation log
Jihoon Lee [Tue, 19 Oct 2021 08:20:00 +0000 (17:20 +0900)]
[Fix/Sharing] Fix gradient updation log

The gradient should be initialized at the very first (backward) access, but the
shared weight was being updated at the very last (backward) access. This patch
resolves the issue by adding isFirstAccess to var_grad.h.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[layer] Support reshape layer
Parichay Kapoor [Tue, 19 Oct 2021 07:10:44 +0000 (16:10 +0900)]
[layer] Support reshape layer

This patch provides support for reshape layer and basic unittests.
The flatten layer is also updated to use reshape layer internally.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[test] GRU with reset_after=False
Parichay Kapoor [Tue, 19 Oct 2021 05:14:54 +0000 (14:14 +0900)]
[test] GRU with reset_after=False

GRU matches with tf1 but not with tf2.
TF2 changes the default of reset_after to True, which leads to an extra
bias. This patch forces reset_after to always be False so that the results
match with both tf1 and tf2.
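
For reference, a minimal Keras sketch of pinning this behavior in a golden generator (reset_after is a standard Keras GRU argument; the layer size here is made up):

```python
import tensorflow as tf

# Force the TF1-style gate/bias layout so tf2 produces the same golden data as tf1.
gru = tf.keras.layers.GRU(units=5, return_sequences=True, reset_after=False)
```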

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[tensor] Add checks for external tensors
Parichay Kapoor [Tue, 19 Oct 2021 02:40:52 +0000 (11:40 +0900)]
[tensor] Add checks for external tensors

This patch adds checks while setting the external tensors:
- ensure that the external tensor size is more than the required memory
- if the tensor size is 0, its data must be null
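
A minimal sketch of the intended checks (illustrative Python, not the actual C++ tensor code):

```python
def validate_external_tensor(required_bytes, external_bytes, data):
    # Checks applied when an externally allocated buffer is handed to a tensor.
    if external_bytes == 0:
        if data is not None:
            raise ValueError("zero-sized external tensor must have null data")
    elif external_bytes < required_bytes:
        raise ValueError("external tensor is smaller than the required memory")
```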

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[layer/test] Add unittests for conv1d
Parichay Kapoor [Mon, 18 Oct 2021 08:32:46 +0000 (17:32 +0900)]
[layer/test] Add unittests for conv1d

This patch adds layer golden tests for conv1d.
conv1d is also added to the list of layers to transpose while saving data
from tensorflow.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[layer] Add conv1d implementation
Parichay Kapoor [Mon, 18 Oct 2021 08:30:36 +0000 (17:30 +0900)]
[layer] Add conv1d implementation

This patch adds conv1d implementation support.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[properties] Bug fix for padding compute
Parichay Kapoor [Mon, 18 Oct 2021 08:28:45 +0000 (17:28 +0900)]
[properties] Bug fix for padding compute

This patch fixes a bug in the padding computation in its usage of stride.
The strides along width and height were being used incorrectly.
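
As a rough illustration (not the actual nntrainer code), padding computed per spatial dimension must use that dimension's own stride; a Python sketch of 'same'-style padding:

```python
def same_padding(in_len, kernel, stride):
    # Total padding along one spatial dimension for a 'same'-style output size.
    out_len = (in_len + stride - 1) // stride            # ceil(in_len / stride)
    total = max((out_len - 1) * stride + kernel - in_len, 0)
    return total // 2, total - total // 2                # (front, back)

# Height must be paired with the height stride, width with the width stride.
pad_top, pad_bottom = same_padding(in_len=28, kernel=3, stride=2)
pad_left, pad_right = same_padding(in_len=32, kernel=3, stride=1)
```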

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[LSTM] Add lstm fix and test run
Jihoon Lee [Mon, 18 Oct 2021 02:21:48 +0000 (11:21 +0900)]
[LSTM] Add lstm fix and test run

This patch adds an LSTM fix and a test run.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Neuralnet] Add debug mode to check optimizer
Jihoon Lee [Fri, 15 Oct 2021 09:24:42 +0000 (18:24 +0900)]
[Neuralnet] Add debug mode to check optimizer

This patch adds a debug mode to check the optimizer.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Fix] tensor pool prerequest bug
Jihoon Lee [Fri, 15 Oct 2021 09:23:51 +0000 (18:23 +0900)]
[Fix] tensor pool prerequest bug

This patch fixes a tensor pool prerequest bug where the shared_name
was wrong.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Trivial] Move makeGraph function out
Jihoon Lee [Fri, 15 Oct 2021 06:29:42 +0000 (15:29 +0900)]
[Trivial] Move makeGraph function out

This patch moves the makeGraph function from compiler_util to test_util.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Test] Make model test to take arbitrary nn
Jihoon Lee [Fri, 15 Oct 2021 06:05:15 +0000 (15:05 +0900)]
[Test] Make model test to take arbitrary nn

This patch updates the model test to accept an arbitrary neuralnetwork instead
of INI only.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Trivial] Extract golden test from nnt_models
Jihoon Lee [Fri, 15 Oct 2021 04:52:57 +0000 (13:52 +0900)]
[Trivial] Extract golden test from nnt_models

This patch extracts the golden test from nntrainer models; no functional change.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Trivial] Separate Watchers into a new file
Jihoon Lee [Fri, 15 Oct 2021 04:23:55 +0000 (13:23 +0900)]
[Trivial] Separate Watchers into a new file

This patch separates watchers into a new file

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[mem-opt] Support in-place batch normalization
Parichay Kapoor [Wed, 8 Sep 2021 11:56:50 +0000 (20:56 +0900)]
[mem-opt] Support in-place batch normalization

This patch provides support for in-place batch normalization.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[batchnorm] Optimize batch norm memory usage
Parichay Kapoor [Wed, 8 Sep 2021 11:48:16 +0000 (20:48 +0900)]
[batchnorm] Optimize batch norm memory usage

This patch optimizes batch norm memory usage by updating the exec order
for its input, as it is not used in calcDerivative.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[in-place] Support in-place activation
Parichay Kapoor [Tue, 7 Sep 2021 07:21:27 +0000 (16:21 +0900)]
[in-place] Support in-place activation

This patch adds support for in-place activation. Further, the in-place
optimization has been moved to the model compile phase.
This patch also adds support for running the model in in-place as well
as out-of-place mode.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[LayerGroups] Normalize identifiers
Jihoon Lee [Thu, 14 Oct 2021 17:07:03 +0000 (02:07 +0900)]
[LayerGroups] Normalize identifiers

This patch normalizes identifiers inside addWithReferenceLayers() for further use.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Refactor] Recurrent realizer with in/out props
Jihoon Lee [Thu, 14 Oct 2021 17:06:37 +0000 (02:06 +0900)]
[Refactor] Recurrent realizer with in/out props

This patch adds recurrent realizer with in/out props

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Realizer] Implement input realizer
Jihoon Lee [Thu, 14 Oct 2021 16:14:06 +0000 (01:14 +0900)]
[Realizer] Implement input realizer

This patch implements the input realizer, which connects inputs to external
inputs.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Realizer] Implement slice realizer
Jihoon Lee [Thu, 14 Oct 2021 11:06:23 +0000 (20:06 +0900)]
[Realizer] Implement slice realizer

This patch implements the slice realizer and its test.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[layer/test] Bug fix for dropout + unittest
Parichay Kapoor [Fri, 15 Oct 2021 11:37:20 +0000 (20:37 +0900)]
[layer/test] Bug fix for dropout + unittest

This patch adds a bug fix for dropout in training as well as inference
mode. Further, unittests were added:
- when dropout_rate is 0 or 100%, all the values are checked
- when dropout_rate is between 0 and 100, a weak check ensures that either
the values are equal or one of the values (golden vs output) is 0
- when dropout_rate r is between 0 and 100, a strong check ensures that
at least 100 - 2*r percent of the values match

All the checks are performed in the unittests.
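
An illustrative NumPy sketch of the weak and strong checks described above (hypothetical code, not the actual unittest):

```python
import numpy as np

def weak_check(golden, output):
    # Each element either matches the golden value or one of the two is zero.
    return np.all(np.isclose(golden, output)
                  | np.isclose(golden, 0.0)
                  | np.isclose(output, 0.0))

def strong_check(golden, output, rate_percent):
    # At least (100 - 2 * rate_percent) percent of the values must match.
    matched_percent = np.isclose(golden, output).mean() * 100
    return matched_percent >= 100 - 2 * rate_percent
```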

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[Recurrent] Add timestep property to recurrent layers
Jihoon Lee [Thu, 14 Oct 2021 03:30:07 +0000 (12:30 +0900)]
[Recurrent] Add timestep property to recurrent layers

This patch adds recurrent layer property setting support to the
recurrent wrapper.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[packaging] Debian launchpad buildfix for focal
Parichay Kapoor [Fri, 15 Oct 2021 03:55:36 +0000 (12:55 +0900)]
[packaging] Debian launchpad buildfix for focal

This patch provides a build fix for focal on Launchpad.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[network] Bug fix for trainable with supportBackwarding
Parichay Kapoor [Thu, 14 Oct 2021 13:17:34 +0000 (22:17 +0900)]
[network] Bug fix for trainable with supportBackwarding

This patch resolves a bug when calling calcDerivative on layers which
do not support backwarding.
The issue arises from the assumption that there will be no layer
requiring backwarding after the last non-trainable layer. But this is
not valid for the multi-input scenario. With multiple inputs, there can be
two or more input layers which do not support backwarding while the rest of
the model is trainable.

This patch changes this error check. Below are the updated semantics:
1. After initialization of the graph, a check is added to ensure that
for each trainable layer, all the layers ahead of it must support
backwarding, so that the trainable layer can be trained. If any layer
ahead of it does not support backwarding, an error is thrown.
2. A layer is only trainable if its trainable property is set to true
(defaults to true) and it contains at least 1 weight. If a layer does not
contain any weights, the layer is treated as non-trainable.
3. When backwarding the model, backwarding is called only for layers
which support backwarding, and skipped for layers which do not.

The updated semantics ensure the dependency of the flow of the
derivatives and allow a mixture of layers which do and do not
support backwarding.
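
A conceptual sketch of the first check (illustrative Python over a generic graph object with a hypothetical layers_ahead() helper; not the actual network graph code):

```python
def validate_trainability(graph):
    # For every trainable node, every layer ahead of it must support backwarding.
    for node in graph.nodes:
        trainable = node.trainable and len(node.weights) > 0   # semantic 2 above
        if not trainable:
            continue
        for other in graph.layers_ahead(node):                 # hypothetical helper
            if not other.supports_backwarding:
                raise RuntimeError(
                    f"layer {other.name} ahead of trainable layer {node.name} "
                    "does not support backwarding")
```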

Resolves #1017

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[Model/API] renew addWithReferenceLayers
Jihoon Lee [Thu, 14 Oct 2021 06:55:15 +0000 (15:55 +0900)]
[Model/API] renew addWithReferenceLayers

This patch adds a refactored version of addWithReferenceLayers.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Model] Add Model::addWithReferenceLayers
Jihoon Lee [Wed, 13 Oct 2021 13:26:53 +0000 (22:26 +0900)]
[Model] Add Model::addWithReferenceLayers

This patch adds the addWithReferenceLayers prototype.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[layer] Support setBatch for attention layer
Parichay Kapoor [Thu, 14 Oct 2021 11:58:30 +0000 (20:58 +0900)]
[layer] Support setBatch for attention layer

This patch provides support for setBatch for attention layer.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[layer] Add constructor for attention layer
Parichay Kapoor [Thu, 14 Oct 2021 10:36:49 +0000 (19:36 +0900)]
[layer] Add constructor for attention layer

This patch adds constructor for attention layer.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[layer] Support single timestep for lstm
Parichay Kapoor [Thu, 14 Oct 2021 03:28:34 +0000 (12:28 +0900)]
[layer] Support single timestep for lstm

This patch adds support for single timestep for lstm.
This is achieved with two external properties:

1. timestep - provides the current timestep for which lstm will run
2. max_timestep - the maximum timestep till which lstm will run

This patch also verifies that this LSTM implementation already does gradient stacking appropriately.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[layer] Bug fix of setBatch for LSTM/GRU/RNN
Parichay Kapoor [Thu, 14 Oct 2021 10:20:11 +0000 (19:20 +0900)]
[layer] Bug fix of setBatch for LSTM/GRU/RNN

LSTM/GRU/RNN request tensors from the manager, and the shape of these tensors
depends on the batch size. However, the layers did not override
setBatch to update the batch size of the requested tensors. This patch
provides the corresponding bugfix.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[layer] Remove setBatch for init context
Parichay Kapoor [Thu, 14 Oct 2021 10:06:59 +0000 (19:06 +0900)]
[layer] Remove setBatch for init context

This patch removes setBatch for the init context from the layer
interface. setBatch now only needs to be set for the runContext.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[graph/model] Support for multi-label/input for the model
Parichay Kapoor [Thu, 14 Oct 2021 08:51:37 +0000 (17:51 +0900)]
[graph/model] Support for multi-label/input for the model

This patch adds support for multi-label while training the model.
- Multiple labels/inputs are now allowed for the model by taking the
dimensions from the graph rather than from the first and last nodes
- Outputs are now also taken from the graph for validation

Another bug fix is added related to setBatch. The cached input and label
dimensions were not updated when the batch size was updated in the network
graph. This patch fixes the updating of the batch size in the
network graph.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[Sharing] Skip saving shared weights
Jihoon Lee [Thu, 14 Oct 2021 02:00:01 +0000 (11:00 +0900)]
[Sharing] Skip saving shared weights

If a weight is not the original, skip saving it.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Recurrent] Implement finalizing graph
Jihoon Lee [Tue, 12 Oct 2021 16:57:05 +0000 (01:57 +0900)]
[Recurrent] Implement finalizing graph

This patch implements finalizing the graph with/without the return sequence
property.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Recurrent] Implement unrolling
Jihoon Lee [Tue, 12 Oct 2021 16:12:03 +0000 (01:12 +0900)]
[Recurrent] Implement unrolling

This patch implements unrolling in RecurrentRealizer.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[LayerNode] Add cloneConfiguration function
Jihoon Lee [Tue, 12 Oct 2021 15:00:10 +0000 (00:00 +0900)]
[LayerNode] Add cloneConfiguration function

This patch adds the cloneConfiguration function, which creates a new node
from an existing node.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Recurrent] Add verification and preparation
Jihoon Lee [Tue, 12 Oct 2021 14:50:28 +0000 (23:50 +0900)]
[Recurrent] Add verification and preparation

This patch adds logic to verify and add connections between inputs and
external inputs.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Realizer] Implement remap realizer
Jihoon Lee [Tue, 12 Oct 2021 12:41:43 +0000 (21:41 +0900)]
[Realizer] Implement remap realizer

This patch introduces the remap realizer, which remaps identifiers inside a
graph representation. Please refer to the test to see what this realizer
does.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Recurrent] Add skeleton of recurrent realizer
Jihoon Lee [Tue, 12 Oct 2021 11:02:53 +0000 (20:02 +0900)]
[Recurrent] Add skeleton of recurrent realizer

This patch adds a skeleton of, and some basic verification
for, the recurrent realizer.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[ Conv1D ] Add Skeleton code for Conv1D Layer
jijoongmoon [Wed, 13 Oct 2021 11:45:58 +0000 (20:45 +0900)]
[ Conv1D ] Add Skeleton code for Conv1D Layer

This commit includes:
  . Skeleton code for conv1D
  . Padding1D Property
  . minor changes

Signed-off-by: jijoongmoon <jijoong.moon@samsung.com>
3 years ago[Realizer] Apply flatten realizer
Jihoon Lee [Tue, 12 Oct 2021 08:37:51 +0000 (17:37 +0900)]
[Realizer] Apply flatten realizer

This patch applies the flatten realizer at model compile. Later, neuralnet
will not have a model_graph until compile() is called.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Model] Add memory optimization property
Jihoon Lee [Tue, 12 Oct 2021 08:34:21 +0000 (17:34 +0900)]
[Model] Add memory optimization property

This patch adds a memory optimization property to neuralnetwork. The main
purpose of this is to fix the memory optimization boolean to be applied
only at neuralnet::compile().

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Test/realizer] Add flatten realizer test
Jihoon Lee [Tue, 12 Oct 2021 06:49:03 +0000 (15:49 +0900)]
[Test/realizer] Add flatten realizer test

This patch adds flatten realizer test

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Sharing] Implement tensor sharing
Jihoon Lee [Wed, 13 Oct 2021 10:48:28 +0000 (19:48 +0900)]
[Sharing] Implement tensor sharing

This patch implements tensor sharing.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[test/layers] Add gru layer testing
Parichay Kapoor [Wed, 13 Oct 2021 05:38:24 +0000 (14:38 +0900)]
[test/layers] Add gru layer testing

This patch adds a gru layer unittest to the layer golden tests.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[Graph] Add realizer test skeleton
Jihoon Lee [Sat, 9 Oct 2021 08:54:21 +0000 (17:54 +0900)]
[Graph] Add realizer test skeleton

This patch adds a realizer test skeleton, separating utils into compiler
test utils.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Graph/recurrent] Add concept of realizer
Jihoon Lee [Sat, 9 Oct 2021 08:26:47 +0000 (17:26 +0900)]
[Graph/recurrent] Add concept of realizer

This patch adds the graph realizer. The graph realizer will preprocess the graph,
which can effectively be done as a lowering process of compile.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Interpreter] Change signature of interpreter
Jihoon Lee [Sat, 9 Oct 2021 07:54:53 +0000 (16:54 +0900)]
[Interpreter] Change signature of interpreter

Instead of returning networkgraph from the interpreter, it returns the
graph representation, which is a specification to generate a graph.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Fix] Tflite Interpreter disable
Jihoon Lee [Sat, 9 Oct 2021 07:15:06 +0000 (16:15 +0900)]
[Fix] Tflite Interpreter disable

This patch updates the tflite interpreter to pass the test.

The main problem was that, previously, tensors to be saved were distinguished
by whether they were not yet allocated. Now this is manually decided by the
tensor's kind (weight, input, output).

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[inputs] remove multi input realization
Jihoon Lee [Sat, 9 Oct 2021 01:51:20 +0000 (10:51 +0900)]
[inputs] remove multi input realization

This patch removes the multi-input realization behavior for now; it is not
used elsewhere, so it is fine to remove.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[layer] Update dropout rate property name
Parichay Kapoor [Wed, 13 Oct 2021 02:18:36 +0000 (11:18 +0900)]
[layer] Update dropout rate property name

Update dropout rate property name from `dropout` to `dropout_rate`.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[Debian] Disable debug on normal package build
Jihoon Lee [Fri, 8 Oct 2021 11:25:07 +0000 (20:25 +0900)]
[Debian] Disable debug on normal package build

This patch disables debug on normal package builds.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[test] Added unittests for LSTM
Parichay Kapoor [Fri, 8 Oct 2021 11:36:58 +0000 (20:36 +0900)]
[test] Added unittests for LSTM

This patch adds unittests for the LSTM layer:
1. single and multi timesteps
2. with and without return sequences
3. with setting activations differently

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[layer] Attention support for different key value
Parichay Kapoor [Fri, 8 Oct 2021 08:33:12 +0000 (17:33 +0900)]
[layer] Attention support for different key value

This patch adds support for different key and value tensors to
be given to the attention layer.
Corresponding unittests are also added.
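
For intuition, attention with distinct key and value tensors computes roughly the following (generic NumPy sketch, not the layer's actual implementation):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(query, key, value):
    # scores: how strongly each query position attends to each key position
    scores = softmax(query @ key.transpose(0, 2, 1))   # (batch, q_len, k_len)
    return scores @ value                              # (batch, q_len, value_dim)
```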

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[Fix] Prevent connect input layer when in the middle
Jihoon Lee [Fri, 8 Oct 2021 03:55:45 +0000 (12:55 +0900)]
[Fix] Prevent connect input layer when in the middle

This patch updates the network graph to prevent making connections when an input
layer is in the middle.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Trivial] Add tensor dim constructor from array
Jihoon Lee [Fri, 8 Oct 2021 04:37:14 +0000 (13:37 +0900)]
[Trivial] Add tensor dim constructor from array

This patch adds a tensor dim constructor from an array.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Trivial] Open up util_func.h header
Jihoon Lee [Thu, 7 Oct 2021 11:06:45 +0000 (20:06 +0900)]
[Trivial] Open up util_func.h header

This patch opens up util_func.h to devel packages.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Neuralnet] set input, output layers
Jihoon Lee [Wed, 6 Oct 2021 16:54:15 +0000 (01:54 +0900)]
[Neuralnet] set input, output layers

This patch enables setting multiple input and output layers explicitly

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Neuralnet] Add property of input layers, label layers
Jihoon Lee [Wed, 6 Oct 2021 16:38:50 +0000 (01:38 +0900)]
[Neuralnet] Add property of input layers, label layers

This patch adds input layers and label layers properties to neuralnet.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[graph] update getter of input/output dims
Jihoon Lee [Wed, 6 Oct 2021 16:19:09 +0000 (01:19 +0900)]
[graph] update getter of input/output dims

This patch updates input/output dims to properly reflect the model input and
output dimensions, not just a single object.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Graph] Identify model_input, model_label
Jihoon Lee [Wed, 6 Oct 2021 15:39:09 +0000 (00:39 +0900)]
[Graph] Identify model_input, model_label

This patch adds the ability for the graph to identify model_input and model_label
in a determined order, following the semantics described in #1374.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[test] Add unittest for attention layer
Parichay Kapoor [Thu, 7 Oct 2021 08:34:31 +0000 (17:34 +0900)]
[test] Add unittest for attention layer

This patch adds a unittest for the attention layer.
- Backwarding implementation is fixed for the attention layer
- wider coverage unittests are added
- layer golden test is updated to generate float input data, which is
needed for the attention

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[test] Add unittest for attention layer
Parichay Kapoor [Tue, 5 Oct 2021 07:23:27 +0000 (16:23 +0900)]
[test] Add unittest for attention layer

This patch adds unittest for attention layer:
- unittest generator for layers is updated to work for multi-input
layers
- initial unittest for attention layer is added

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[layer] attention layer backwarding match
Parichay Kapoor [Tue, 5 Oct 2021 07:21:51 +0000 (16:21 +0900)]
[layer] attention layer backwarding match

This patch adds bug fix for backwarding operation for the attention
layer.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[layer] Attention layer bugfix
Parichay Kapoor [Tue, 5 Oct 2021 05:27:06 +0000 (14:27 +0900)]
[layer] Attention layer bugfix

This patch adds bugfix for the forwarding of the attention layer.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[layer] Bug fix for softmax operation
Parichay Kapoor [Tue, 5 Oct 2021 05:25:29 +0000 (14:25 +0900)]
[layer] Bug fix for softmax operation

The current implementation of the softmax operation unintentionally flattens
the tensor and calculates softmax over the last 3 dimensions
of the given tensor.
This patch updates the softmax operation to apply it along the
last dimension only.
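
The fixed behavior, sketched in NumPy for illustration only: softmax normalizes along the last axis, leaving every other index independent:

```python
import numpy as np

def softmax_last_dim(x):
    # Normalize along the last dimension only; all other axes stay independent.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

y = softmax_last_dim(np.random.rand(2, 3, 4, 5))
assert np.allclose(y.sum(axis=-1), 1.0)   # every last-dim slice sums to 1
```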

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[WeightSharing] Remove zero grad
Jihoon Lee [Wed, 6 Oct 2021 12:18:04 +0000 (21:18 +0900)]
[WeightSharing] Remove zero grad

Remove the zero grad function, at the cost that the layer should handle the scenarios.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Test] Add recurrent model test
Jihoon Lee [Tue, 5 Oct 2021 11:12:45 +0000 (20:12 +0900)]
[Test] Add recurrent model test

This patch contains an initial test for the recurrent model.

In this patch, there are three fc layers sharing the same weights

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[fclayer] Update gradient to accumulate
Jihoon Lee [Tue, 5 Oct 2021 11:08:12 +0000 (20:08 +0900)]
[fclayer] Update gradient to accumulate

This patch updates the gradient calculation to accumulate for the fully connected
layer.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Tensor pool] Query execution order by source
Jihoon Lee [Tue, 5 Oct 2021 11:03:10 +0000 (20:03 +0900)]
[Tensor pool] Query execution order by source

This patch enables querying the execution order by the source tensor, as
a dependent tensor does not have the ground truth.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Layer] Add constant derivative layer
Jihoon Lee [Tue, 5 Oct 2021 06:52:47 +0000 (15:52 +0900)]
[Layer] Add constant derivative layer

This patch adds a constant derivative layer. This layer will be used to
simulate a backward operation without any loss.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[WeightSharing] enable weight sharing from manager
Jihoon Lee [Tue, 5 Oct 2021 04:43:37 +0000 (13:43 +0900)]
[WeightSharing] enable weight sharing from manager

This patch enables weight sharing from manager.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[WeightSharing] Pass shared_name from the original
Jihoon Lee [Tue, 5 Oct 2021 04:27:25 +0000 (13:27 +0900)]
[WeightSharing] Pass shared_name from the original

This patch creates shared_weight_names from the original source
and passes them to the manager.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Property] Add shared_from key to the layer node
Jihoon Lee [Fri, 1 Oct 2021 08:07:45 +0000 (17:07 +0900)]
[Property] Add shared_from key to the layer node

This patch adds the shared_from key to the layer node.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[WeightSharing] Implement isFirst/lastAccess
Jihoon Lee [Tue, 5 Oct 2021 02:05:25 +0000 (11:05 +0900)]
[WeightSharing] Implement isFirst/lastAccess

This patch implements isFirstAccess and isLastAccess, making nntrainer
ready for weight sharing, while fixing an overriding issue in the example
pow layer.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Recurrent] Add zero grad / delegate apply gradient
Jihoon Lee [Fri, 1 Oct 2021 05:41:31 +0000 (14:41 +0900)]
[Recurrent] Add zero grad / delegate apply gradient

This patch adds a grad zeroing mechanism + delegates apply gradient
to the network graph.
The main reason for this change is that when sharing gradients and
derivatives, 1. the value has to be accumulated starting from zero, and
2. the gradient has to be applied only at the last access.
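
Conceptually, the shared-gradient flow looks like this (toy Python sketch, not the graph code):

```python
import numpy as np

class SharedGradient:
    # Toy model of one gradient buffer shared across several accesses per iteration.
    def __init__(self, shape):
        self.shape = shape
        self.value = None

    def accumulate(self, grad, is_first_access):
        if is_first_access:
            self.value = np.zeros(self.shape)   # 1. start accumulation from zero
        self.value += grad

    def apply(self, weight, lr, is_last_access):
        if is_last_access:                      # 2. apply only at the last access
            weight -= lr * self.value
```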

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Recurrent] Propagate Trainable variable to weights
Jihoon Lee [Fri, 1 Oct 2021 03:55:17 +0000 (12:55 +0900)]
[Recurrent] Propagate Trainable variable to weights

This patch propagates the trainable variable to weights to prepare for sharing
weights.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[meson] Disable enable-debug by default
Parichay Kapoor [Thu, 7 Oct 2021 04:22:26 +0000 (13:22 +0900)]
[meson] Disable enable-debug by default

This patch sets enable-debug to false by default; it was mistakenly enabled
by #1607.
enable-debug is set to true only for the ubuntu and tizen builds in CI, along
with unit_test set to true in the CI.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[ API ] Add Inference in CCAPI to get loss value
jijoong.moon [Fri, 24 Sep 2021 04:44:24 +0000 (13:44 +0900)]
[ API ] Add Inference in CCAPI to get loss value

Add Inference API to get the loss value

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
3 years ago[pkg] Enable debug mode for CI
Parichay Kapoor [Thu, 7 Oct 2021 03:32:33 +0000 (12:32 +0900)]
[pkg] Enable debug mode for CI

This patch enables debug mode for the CI build for both ubuntu and
tizen. This enables all the debug tests to be run in the CI, which were
disabled until now.
Fixes required to enable the DEBUG mode are also added.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[layer] Add checks for layer tensor overwrite bug
Parichay Kapoor [Wed, 6 Oct 2021 07:32:47 +0000 (16:32 +0900)]
[layer] Add checks for layer tensor overwrite bug

This patch adds a check to ensure that layer tensors, when created, do not
overwrite existing tensors. These checks are enabled only in DEBUG
mode to ensure that they only run in the CI, and are called after each
operation - forwarding, calcGradient, and calcDerivative.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[conv] Update temporary memory requests
Parichay Kapoor [Fri, 3 Sep 2021 09:46:52 +0000 (18:46 +0900)]
[conv] Update temporary memory requests

This patch updates the request for temporary memory in the convolution
layer.
- im2col and col2im results are both the same size and used exclusively of
each other, but both are requested for backwarding. So, instead of
requesting both, they can share their memory.
- as the values in these tensors can be discarded between forwarding and
backwarding, two independent tensors are requested for forwarding and
backwarding so that the memory can be reused in the intermediate
duration.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[Dataset/test] Update batch before creating tensor
Jihoon Lee [Wed, 6 Oct 2021 03:17:28 +0000 (12:17 +0900)]
[Dataset/test] Update batch before creating tensor

Dataset sample creation now creates a tensor after updating the batch to one.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

resolves #1604

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[layer] Add backwarding for attention layer
Parichay Kapoor [Fri, 1 Oct 2021 08:23:04 +0000 (17:23 +0900)]
[layer] Add backwarding for attention layer

This patch adds backwarding for attention layer. Corresponding unittests
will be added in the next patch.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[layer/test] Add basic test for attention
Parichay Kapoor [Fri, 1 Oct 2021 06:09:21 +0000 (15:09 +0900)]
[layer/test] Add basic test for attention

This patch adds basic unittest for attention layer.
To achieve this, the existing tests are modified to support multiple inputs
in the test format.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[layer] Scaffolding attention layer
Parichay Kapoor [Fri, 1 Oct 2021 06:01:18 +0000 (15:01 +0900)]
[layer] Scaffolding attention layer

This patch adds the initial commit for attention layer.
- add class description
- add basic forwarding

This implements the common form of attention layer where key and value
are the same tensor. The other format will be supported soon.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[layernode] Bug fix for finalize
Parichay Kapoor [Fri, 1 Oct 2021 05:59:49 +0000 (14:59 +0900)]
[layernode] Bug fix for finalize

This patch adds a bug fix for finalize of the layer node. The checks of
the input dimensions and input shapes have been fixed for when multiple
inputs are expected to be set.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[Test] Add warmup to the golden layer
Jihoon Lee [Fri, 24 Sep 2021 08:10:25 +0000 (17:10 +0900)]
[Test] Add warmup to the golden layer

This patch adds warmup forwarding to the layer golden test.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Test] Add conv2d golden tests
Jihoon Lee [Fri, 24 Sep 2021 08:00:39 +0000 (17:00 +0900)]
[Test] Add conv2d golden tests

**Changes proposed in this PR:**
- Conv2d Golden tests
- remove _golden_ from the names of test cases, as it is attached in the
extension

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[batchnorm] Optimize batch norm backwarding
Parichay Kapoor [Tue, 28 Sep 2021 10:59:50 +0000 (19:59 +0900)]
[batchnorm] Optimize batch norm backwarding

Remove the extra full-size memory requirement in favor of the reduced memory.
The difference in memory requirement can be
significant: the earlier memory requirement was b*c*h*w, whereas now it is just c,
under the assumption that batch norm is normalizing along
axis=channel.
This is achieved by reordering the operations.
Note: this change has no performance impact.
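
Roughly, the saving comes from computing the per-channel reductions directly instead of materializing a full (b, c, h, w) intermediate; a NumPy sketch of the idea (illustrative only, not the layer's actual derivation):

```python
import numpy as np

b, c, h, w = 4, 8, 16, 16
dy = np.random.rand(b, c, h, w)        # incoming derivative
x_norm = np.random.rand(b, c, h, w)    # normalized input kept from forwarding

# Reduce on the fly instead of storing a full-size temporary such as dy * x_norm:
dgamma = np.einsum("bchw,bchw->c", dy, x_norm)   # only c extra floats
dbeta = dy.sum(axis=(0, 2, 3))
```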

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[batchnorm] Optimize batch norm forward memory
Parichay Kapoor [Tue, 28 Sep 2021 10:40:33 +0000 (19:40 +0900)]
[batchnorm] Optimize batch norm forward memory

Reduce the memory consumption of batch norm for forwarding by re-using
the output tensor as temporary memory rather than requesting new memory.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[rebase] Rebase fix
Parichay Kapoor [Wed, 6 Oct 2021 02:35:50 +0000 (11:35 +0900)]
[rebase] Rebase fix

This patch adds a rebase fix.
Further, some of the temporary fixes in the previous commits are also
removed.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[in-place] Make input layer work in-place
Parichay Kapoor [Mon, 6 Sep 2021 09:39:37 +0000 (18:39 +0900)]
[in-place] Make input layer work in-place

This patch makes the input layer work in-place. This is done by supporting
externally allocated tensors in the tensorPool, and making the input of the input
layer and the labels externally allocated tensors.
The input layer is updated to work in-place.
Further, the methodology to set inputs and labels has also been updated.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[inplace opt] Support in-place no-op flatten layer
Parichay Kapoor [Mon, 6 Sep 2021 04:50:40 +0000 (13:50 +0900)]
[inplace opt] Support in-place no-op flatten layer

This patch updates the flatten layer to be a no-op layer. This is done
with the flatten layer setting the input and output shapes at finalize
time and making the flatten layer execute in-place. Changes in this patch:
1. requestPreallocatedTensor() in TensorPool now returns a new tensor
which will eventually share the memory with the preallocated tensor, rather than
returning the preallocated tensor itself. This allows tensor metadata to
be changed (like name, shape, etc.) while sharing the memory. This is
done by storing the dependency link between tensors in the token.
Corresponding unittests are also added.
2. Manager now supports giving shared tensors for outputs (shared with
some inputs) to support in-place running of some layers.
3. Flatten layer is updated to be a basic no-op and to perform
flattening once at compile time.
4. Update flatten layer supportBackwarding to true
5. Input layer updated to not edit tensor mapping. Input layer will be
updated to be in-place in the next patch.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[graph/manager] Enable memory v1 optimizations
Parichay Kapoor [Fri, 3 Sep 2021 08:59:21 +0000 (17:59 +0900)]
[graph/manager] Enable memory v1 optimizations

This patch adds an interface to enable memory optimizations with the neural
network. Enabling the interface changes the planner being used for the
memory allocation.
With this patch, OptimizedV1Planner is put to use when enabling
optimizations.

The models unittest is updated to disable optimizations in the
non-optimized test cases.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>