platform/core/ml/nntrainer.git
Parichay Kapoor [Tue, 22 Jun 2021 05:02:19 +0000 (14:02 +0900)]
[layercontext] Minor bugfix for layer context

Removes the extra trainable parameter from the layer context.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Thu, 17 Jun 2021 05:52:27 +0000 (14:52 +0900)]
[networkgraph] Network graph updated for Layer V2

Network graph updated to work with LayerV2.
This involves setting the input dimensions in the InitContext, and setting
up the RunContext for each layer before its execution.

Further, some helper functions are also added in LayerNode.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Thu, 17 Jun 2021 06:15:31 +0000 (15:15 +0900)]
[Test/Prepare] Add layer semantics test

For layer_v2, there will be a more structured way to access tests.
This patch adds a skeleton for the layer semantics test.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Thu, 17 Jun 2021 01:30:01 +0000 (10:30 +0900)]
[Layer] Add trainable prop

This patch adds a trainable property to the layer.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Wed, 16 Jun 2021 08:19:49 +0000 (17:19 +0900)]
[manager] Add support for request Inputs/outputs

Added support for requesting inputs and outputs.
Also updated the implementation to use a common function for easier
management.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
hyeonseok lee [Thu, 17 Jun 2021 02:14:07 +0000 (11:14 +0900)]
[layer-internal] refactoring weight, input, output to layer_node

- Implement getters for the variables and gradients of weights, inputs, and outputs
- Implement getNumWeights

Self evaluation:

Build test: [X]Passed [ ]Failed [ ]Skipped
Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
Parichay Kapoor [Wed, 16 Jun 2021 10:55:19 +0000 (19:55 +0900)]
[graph] Support creating RunLayerContext

Implement the function to create RunLayerContext given the
InitLayerContext and update it in the given LayerNode.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Wed, 16 Jun 2021 10:33:56 +0000 (19:33 +0900)]
[Properties] Add float and boolean cases

Add float and boolean cases to be used later.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Fri, 21 May 2021 08:48:15 +0000 (17:48 +0900)]
[Props] Add concept of empty to property

This patch adds the concept of emptiness to a property, instead of keeping
a nonsensical value inside the property as a default.

before this patch,
```cpp
auto a = Property<int>();
EXPECT_EQ(a.get(), -1); // let's assume -1 is default value which might not be sensible.
```
after this patch

```cpp
auto a = Property<int>();
EXPECT_THROW(a.get(), std::invalid_argument);
EXPECT_TRUE(a.empty());
```

So the underlying value is always either null or valid.

It now becomes natural to distinguish mandatory props from non-mandatory
props.
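The semantics above can be sketched with C++17 `std::optional`; this is a hypothetical illustration of the idea, not the actual nntrainer `Property` implementation.

```cpp
#include <optional>
#include <stdexcept>

// Hypothetical sketch, not the real nntrainer Property class: the value
// starts as null, get() throws while empty, and set() makes it valid,
// so the underlying value is always either null or valid.
template <typename T>
class Property {
public:
  bool empty() const { return !value.has_value(); }

  const T &get() const {
    if (!value)
      throw std::invalid_argument("property is empty");
    return *value;
  }

  void set(const T &v) { value = v; }

private:
  std::optional<T> value; // no nonsensical default like -1
};
```

A mandatory prop can then be validated by simply checking `empty()` before finalizing.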

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Wed, 16 Jun 2021 07:22:24 +0000 (16:22 +0900)]
[Layer] Scaffold layer impl

**Major changes**
- Create `LayerImpl`
- Change `LayerDevel` signature
  s/int initialize/void finalize
  s/int setProperty/void setProperty
- Move createLayer to layerImpl

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Wed, 16 Jun 2021 08:41:46 +0000 (17:41 +0900)]
[Layer] Add const to the layer::getName()

getName() does not modify any state, so this patch adds const to
layer::getName().
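As a minimal sketch (simplified class, not the actual signature), the const qualifier is what lets the name be read through a const reference:

```cpp
#include <string>

// Simplified illustration: with the const qualifier on getName(), the
// getter can be called on const references, which the non-const version
// would reject at compile time.
class Layer {
public:
  explicit Layer(const std::string &name_) : name(name_) {}
  const std::string &getName() const { return name; } // const added
private:
  std::string name;
};

// Compiles only because getName() is const-qualified.
inline std::string readName(const Layer &layer) { return layer.getName(); }
```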

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
hyeonseok lee [Wed, 16 Jun 2021 05:53:21 +0000 (14:53 +0900)]
[layer-internal] refactoring getInputDimension getOutputDimension

Implement getInputDimension and getOutputDimension in layer_node.

Self evaluation:

Build test: [X]Passed [ ]Failed [ ]Skipped
Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
Parichay Kapoor [Thu, 17 Jun 2021 01:52:59 +0000 (10:52 +0900)]
[neuralnet] Bug fix for graph usage

Bug fix patch to use constant iterators for the graph.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Fri, 11 Jun 2021 05:15:54 +0000 (14:15 +0900)]
[graph] Remove non-const iterators

Remove non-const iterators for the graph.
Update the corresponding usages for the graph.

Note: the `for (auto const &node : graph)` pattern has been
removed, as it can call the non-const begin()/end() in the background.
The workaround for this is to use std::as_const(graph), but this
is only available from C++17, so this patch uses
the basic const-iterator for-loop pattern.
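The two patterns can be contrasted with a `std::vector` standing in for the graph; `sumConstIter` uses the pre-C++17-friendly loop this patch adopts, while `sumAsConst` shows the C++17 alternative.

```cpp
#include <utility> // std::as_const (C++17)
#include <vector>

// Pre-C++17-friendly pattern adopted by the patch: explicit const
// iterators, so the non-const begin()/end() are never called.
inline int sumConstIter(const std::vector<int> &graph) {
  int sum = 0;
  for (auto it = graph.cbegin(); it != graph.cend(); ++it)
    sum += *it;
  return sum;
}

// C++17 alternative: std::as_const forces the const overloads of
// begin()/end() even when the container itself is non-const.
inline int sumAsConst(std::vector<int> &graph) {
  int sum = 0;
  for (const auto &node : std::as_const(graph))
    sum += node;
  return sum;
}
```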

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Fri, 11 Jun 2021 03:28:04 +0000 (12:28 +0900)]
[graph] Graph iterator for sorted and unsorted

Updated the graph iterator to run over the unsorted list of nodes
if the graph is not yet sorted. If the graph is sorted,
the iterator runs over the sorted nodes.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Fri, 11 Jun 2021 03:13:50 +0000 (12:13 +0900)]
[graph] Update node_names to unordered_set

Update node_names, which stores the names of all the nodes
in the graph to guarantee uniqueness, to use an unordered_set.
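With an unordered_set, the uniqueness check reduces to `insert`, whose boolean result reports duplicates with average O(1) lookup; `registerName` below is an illustrative helper, not the actual graph method.

```cpp
#include <string>
#include <unordered_set>

// Illustrative helper: unordered_set::insert returns {iterator, bool},
// and the bool is false when the name is already registered.
inline bool registerName(std::unordered_set<std::string> &node_names,
                         const std::string &name) {
  return node_names.insert(name).second;
}
```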

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Fri, 11 Jun 2021 03:08:26 +0000 (12:08 +0900)]
[graph] Make adjacency list lazily

Update the graph to build the adjacency list lazily, when required
during the topological sort.
Until then, keep the node_list, which simply contains all the nodes.
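The lazy pattern can be sketched as follows; this is a hypothetical illustration, not the real graph class.

```cpp
#include <cstddef>
#include <string>
#include <unordered_map>
#include <utility>
#include <vector>

// Hypothetical sketch: nodes and edges live in flat lists, and the
// adjacency map is only materialized the first time a sort needs it.
struct LazyGraph {
  std::vector<std::string> node_list;
  std::vector<std::pair<std::string, std::string>> edges;
  std::unordered_map<std::string, std::vector<std::string>> adjacency;
  bool adjacency_built = false;

  void addNode(std::string name) { node_list.push_back(std::move(name)); }
  void addEdge(std::string from, std::string to) {
    edges.emplace_back(std::move(from), std::move(to));
  }

  void buildAdjacencyIfNeeded() {
    if (adjacency_built)
      return; // built at most once, on demand
    for (const auto &e : edges)
      adjacency[e.first].push_back(e.second);
    adjacency_built = true;
  }

  std::size_t topologicalSort() {
    buildAdjacencyIfNeeded(); // adjacency exists only from this point on
    return node_list.size();  // the actual sort is elided in this sketch
  }
};
```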

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Wed, 16 Jun 2021 05:32:29 +0000 (14:32 +0900)]
[manager] Support request Tensors and Weights

Added support for requesting tensors and weights.
Moved the specs of tensors and weights into Var_Grad and Weight, respectively.
Also created constructors for weights and var_grads that take their specs.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
hyeonseok lee [Tue, 15 Jun 2021 12:38:31 +0000 (21:38 +0900)]
[layer_internal] refactoring read, save

Implement the read and save functions in layer_node.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
Jihoon Lee [Wed, 16 Jun 2021 03:20:12 +0000 (12:20 +0900)]
[Test/Scaffolding] Prepare test for the new layer

For now, the layer test is too cluttered to make any change with
agility.

This PR mainly separates the tests into two parts to make the test
suite itself scalable.

1. Layer-only tests living inside `unittest_layers_{layer_name}`
2. Common tests, like setProperty and golden_test, in `layers_common_tests.h`

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Tue, 15 Jun 2021 07:43:08 +0000 (16:43 +0900)]
[Refactor] s/Layer/LayerV1

Change the class name of Layer to LayerV1 to prepare for the migration.
This should pass the CI test.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Wed, 16 Jun 2021 02:50:09 +0000 (11:50 +0900)]
[context] Layer context creation scaffolding

Added a todo list for the layer context creation.
Added minor new interfaces for the layer context.
Disabled in-place optimization and inference memory optimization.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Wed, 16 Jun 2021 03:03:56 +0000 (12:03 +0900)]
[addition] Bug fix addition layer calcDerivative

Bug fix for the addition layer's calcDerivative().
The addition layer assigns the same derivative tensor memory back to its
connected layers. If the other layers connecting to the addition layer are
in-place, this can lead to wrong results.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Fri, 11 Jun 2021 04:42:13 +0000 (13:42 +0900)]
[Plugin/Test] Add preset plugin for general test

This extracts the common tests so they can be reused multiple times.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Thu, 10 Jun 2021 10:14:57 +0000 (19:14 +0900)]
[Custom/Loss] Add scaffolding for loss example

This patch generates skeleton code for the mae custom loss example.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Fri, 11 Jun 2021 11:13:24 +0000 (20:13 +0900)]
[CC17] Update type to inline static

This patch updates `type` to be an inline static data member inside the
object, for clarity.
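A minimal sketch of the C++17 feature; the layer types shown are hypothetical examples, not nntrainer's actual classes.

```cpp
#include <string>

// C++17 inline static data members can be defined right in the class
// body, with no separate out-of-line definition in a .cpp file.
struct FullyConnected {
  inline static const std::string type = "fully_connected";
};

struct Activation {
  inline static const std::string type = "activation";
};
```

Before C++17, each `type` would additionally need a definition like `const std::string FullyConnected::type = ...;` in exactly one translation unit.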

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
jijoong.moon [Wed, 16 Jun 2021 08:20:26 +0000 (17:20 +0900)]
[ GRU ] Add GRU Unittest

This commit includes,
  . unittests of gru layer
  . add keras code to compare

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Thu, 10 Jun 2021 12:01:01 +0000 (21:01 +0900)]
[ Recurrent ] Fix activation type

using hidden_state_activation_type

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Thu, 10 Jun 2021 04:57:47 +0000 (13:57 +0900)]
[ GRU ] Implement forwarding / backwarding of GRU Layer

This commit includes,
  . forwarding implementation
  . calcGradient implementation
  . calcDerivative implementation
  . gru_basic unittest

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Wed, 16 Jun 2021 12:08:38 +0000 (21:08 +0900)]
[ Unit Test ] Fix LSTM / RNN Unittest

This commit includes,
  . fix wrong unittest cases of lstm & rnn

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Fri, 28 May 2021 03:45:12 +0000 (12:45 +0900)]
[ GRU ] Skeleton Code for GRU

This commit includes,
 . Skeleton Code for GRU Layer

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Parichay Kapoor [Tue, 15 Jun 2021 07:32:05 +0000 (16:32 +0900)]
[docs] Update docs for tflite version

Updated the docs to require tensorflow-lite version 2.3.0 or higher,
and added the missing dependencies for a local meson build as well.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Tue, 15 Jun 2021 06:35:27 +0000 (15:35 +0900)]
[API/Devel] Retouch devel header

**Changes proposed in this PR:**
- Change function signature to exploit forward declaration
- s/export_to/exportTo
- Change header guard to be unique

**Self evaluation:**
1. Build test: [ ]Passed [ ]Failed [X]Skipped
2. Run test: [ ]Passed [ ]Failed [X]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Wed, 16 Jun 2021 03:17:55 +0000 (12:17 +0900)]
[neuralnet] Neural network bugfix

Two merged PRs have caused the main branch to fail.
This patch provides a temporary fix.
The proper fix will be done with layer_v2.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Wed, 16 Jun 2021 03:17:00 +0000 (12:17 +0900)]
[unittests] Update unittests for new gtest version

Remove gtest version check comments and update unittests for the new
gtest versions.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Thu, 10 Jun 2021 04:21:03 +0000 (13:21 +0900)]
[Loss] Apply requireLabel to feed label

Currently, the loss layer is the only one that can accept a label, which was
prohibiting building a custom loss layer.

**Major Changes**
- Apply requireLabel to feed label
- The last layer is no longer assumed to be a loss layer when feeding the label
(it still is in other places, which needs to be dealt with)

**Minor Changes**
- Feeding multiple labels is now possible (splitting labels is future work)
- [Fix] The trainable count includes the loss layer
- [Fix] Handle the case when no layer is trainable for
neuralnet::backwarding

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Fri, 11 Jun 2021 09:02:56 +0000 (18:02 +0900)]
[layer] Move name to LayerNode

Move the layer identifier, name, to LayerNode.
Now each node in the graph has its name as its direct identifier.

This patch also disables unittest_nntrainer_layers for a while,
till the layer refactoring is done. However, the other unittests
and apptests must remain enabled to ensure that everything else is
working.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Mon, 14 Jun 2021 02:45:16 +0000 (11:45 +0900)]
[split layer] Initialize class member bugfix

The split layer has a class member, leading_helper_dim,
which remained uninitialized and resulted in bugs
when setBatchSize() was called before the initialization of the layer.

This patch fixes the bug.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Thu, 10 Jun 2021 04:38:00 +0000 (13:38 +0900)]
[layer context] Propose layer context and layer_devel

This patch proposes the layer context and layer_devel.
The layer context provides the basic necessities for each layer,
like inputs, outputs, weights, and tensors. Weights are the trainable tensors
required by the layers, which can be requested through the context interfaces.
Correspondingly, any other tensor requirement of the layer can also be
requested through the context.

There are two types of context:
1. InitContext - provides the context for initialization, with the input
dimensions filled in. This contains the specifications for all the
inputs/outputs/weights/tensors which will be made available during execution.
2. RunContext - provides the allocated weights/inputs/outputs/tensors along with
some of the basic properties of the layer, like trainable, etc.

This design allows the layer devel to be stateless (a derived layer can be
stateful with its own member objects) and to behave as a layer op.
Further, this removes dependencies on Weight/Var_Grad, as the context simplifies
the interface down to just tensors.
This patch also removes the dependency on the manager from the layer_devel header.

The layer context for each layer is owned by the corresponding LayerNode.
Many of the layer getters, like getInputs/Outputs/Weights/etc., can now
be moved to LayerNode, which does not concern the custom layer writer
and simplifies the layer operation code as well.
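The two contexts could look roughly like the following sketch; all names and signatures here are illustrative stand-ins, not the actual nntrainer API.

```cpp
#include <cstddef>
#include <string>
#include <utility>
#include <vector>

// Illustrative stand-ins; the real nntrainer types differ.
struct TensorDim { unsigned batch = 1, channel = 1, height = 1, width = 1; };
struct Tensor { TensorDim dim; std::vector<float> data; };

// InitContext sketch: input dimensions are filled in by the graph; the
// layer registers specs for the weights it needs and receives an index
// to use later at run time.
class InitLayerContext {
public:
  struct WeightSpec { TensorDim dim; std::string name; };

  explicit InitLayerContext(std::vector<TensorDim> input_dims_)
    : input_dims(std::move(input_dims_)) {}

  const std::vector<TensorDim> &getInputDimensions() const { return input_dims; }

  std::size_t requestWeight(const TensorDim &dim, const std::string &name) {
    weight_specs.push_back({dim, name});
    return weight_specs.size() - 1; // index resolved via RunContext later
  }

  std::vector<WeightSpec> weight_specs;

private:
  std::vector<TensorDim> input_dims;
};

// RunContext sketch: holds the allocated tensors; the layer sees only
// tensors through indices, never the manager or Weight/Var_Grad directly.
class RunLayerContext {
public:
  RunLayerContext(std::vector<Tensor> weights_, bool trainable_)
    : weights(std::move(weights_)), trainable(trainable_) {}

  Tensor &getWeight(std::size_t idx) { return weights[idx]; }
  bool getTrainable() const { return trainable; }

private:
  std::vector<Tensor> weights;
  bool trainable;
};
```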

See also #986

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Fri, 11 Jun 2021 10:52:31 +0000 (19:52 +0900)]
[C++17] Bump base version to c++17

This patch bumps the baseline version to C++17.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Mon, 14 Jun 2021 02:54:32 +0000 (11:54 +0900)]
[deb/pkg] Updates for debian packaging

Update the tolerance of the sqrt unittest for debian arm64 packaging, and
increase the timeout of the appdraw classification app for debian armhf packaging.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Fri, 14 May 2021 10:55:58 +0000 (19:55 +0900)]
[Props] Implement input_layers as a property

This patch implements input_layers as a property, with the following
changes.

**Changes proposed in this PR:**
- Add a ConnectionSpec object for generic purposes
- Add InputSpec as a property
- Add the corresponding tests

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Fri, 11 Jun 2021 10:35:39 +0000 (19:35 +0900)]
[Test/models] Add addition test

This patch adds an addition test, while adding support for reading multiple
inputs and outputs inside `unittest_nntrainer_models.cpp`

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Fri, 11 Jun 2021 10:11:51 +0000 (19:11 +0900)]
[Retouch] unittest_models test cases

**Changes proposed in this PR:**
- Reduce scale of inputs
- Add iterations for the recurrent layer family of tcs (and delete the _n variants)
- Clean up duplicated code in genModelTests.py

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Mon, 14 Jun 2021 01:47:45 +0000 (10:47 +0900)]
[deb/pkg] Increase c-api unittest timeout

Increase c-api unittest timeout even further to
pass armhf build on launchpad.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Thu, 10 Jun 2021 08:45:20 +0000 (17:45 +0900)]
[Docs] Update custom documentation for loss layer

This patch updates the custom layer documentation for the loss layer.
It also deletes a redundant `pow.cpp` that sneaked in.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Fri, 11 Jun 2021 09:34:37 +0000 (18:34 +0900)]
[deb] Update lower tolerance bug + increase timeouts

Invert the check for the lower error tolerance for the debian build.
Also increase the timeouts for the nntrainer capi/ccapi unittests,
as they time out on the armhf architecture on launchpad.

See also #1251

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Thu, 10 Jun 2021 04:42:09 +0000 (13:42 +0900)]
[Trivial] Change return -> retval

When stating a return value in doxygen, it is recommended to use @retval;
this patch reflects that.

**Self evaluation:**
1. Build test: [ ]Passed [ ]Failed [X]Skipped
2. Run test: [ ]Passed [ ]Failed [X]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Thu, 10 Jun 2021 08:43:56 +0000 (17:43 +0900)]
[interpreter] Updated export_to usage with LayerNode

Updated export_to function usage based on LayerNode

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Mon, 7 Jun 2021 06:52:50 +0000 (15:52 +0900)]
[layerNode] Refactor get/set number of inputs/outputs

Refactor get/set number of inputs/outputs.
This is now handled by layer_node.
Updating the input/output layers updates the count.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Mon, 7 Jun 2021 06:20:16 +0000 (15:20 +0900)]
[layer/node] Move input/output layers to LayerNode

This patch moves input/output layers to LayerNode.
It also creates an interface for accessing/setting the input/output layers
from within the LayerNode.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Fri, 4 Jun 2021 03:51:10 +0000 (12:51 +0900)]
[api] Added api dependency in pc files

Added the API dependency in the pc files for capi and ccapi.
The dependency is on capi-ml-common, which provides the ml-api-common.h header.
Further, the ml-api-common.h header from the nntrainer repo is also removed.
Updated the android build to download ml-api-common.h for nnstreamer daily builds.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
jijoong.moon [Fri, 28 May 2021 02:24:46 +0000 (11:24 +0900)]
[ RNN ] Add Multi-Layer RNN Unit Tests

This commit includes,
 . Multi-Layered RNN Unit Tests
 . Some fixes needed to make them run

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Jihoon Lee [Thu, 10 Jun 2021 08:30:35 +0000 (17:30 +0900)]
[Trivial] Fix NDK warning -Wbraced-scalar-init

This patch fixes trivial NDK warning below

```
././../nntrainer/layers/lstm.cpp:315:25: warning: braces around scalar initializer [-Wbraced-scalar-init]
  Tensor dh_nx = Tensor({derivative_.width()});
                        ^~~~~~~~~~~~~~~~~~~~~
././../nntrainer/layers/lstm.cpp:316:25: warning: braces around scalar initializer [-Wbraced-scalar-init]
  Tensor dc_nx = Tensor({derivative_.width()});
                        ^~~~~~~~~~~~~~~~~~~~~
././../nntrainer/layers/lstm.cpp:352:26: warning: braces around scalar initializer [-Wbraced-scalar-init]
        hs_prev = Tensor({hs_t.width()});
                         ^~~~~~~~~~~~~~
././../nntrainer/layers/lstm.cpp:354:26: warning: braces around scalar initializer [-Wbraced-scalar-init]
        cs_prev = Tensor({cs_t.width()});
```

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Thu, 10 Jun 2021 08:29:34 +0000 (17:29 +0900)]
[test/model] Cleanup unittest_nntrainer_models

Cleaned up unittest_nntrainer_models.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Tue, 8 Jun 2021 05:30:11 +0000 (14:30 +0900)]
[layer] Support inPlace() as an interface for the layer

Support inPlace() as an interface for the layer.
If it returns true, the manager may place the layer's inputs/outputs at
the same memory location, but is not required to.
The corresponding graph code is also updated.
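The contract can be sketched as below; the names and the manager-side helper are hypothetical illustrations, not the actual nntrainer interface.

```cpp
// Hypothetical sketch: a layer advertises that it *can* run in place;
// the manager may then alias input and output memory, but is free not to.
struct Layer {
  virtual ~Layer() = default;
  virtual bool inPlace() const { return false; } // conservative default
};

struct ActivationLayer : Layer {
  bool inPlace() const override { return true; } // safe to overwrite input
};

// Manager-side decision: share a buffer only when the layer allows it
// AND the memory optimizer actually wants to share.
inline bool sharesBuffer(const Layer &layer, bool manager_wants_sharing) {
  return layer.inPlace() && manager_wants_sharing;
}
```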

Resolves #867

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Tue, 8 Jun 2021 06:07:11 +0000 (15:07 +0900)]
[Layer] Add requireLabel() function

Add a simple function, `Layer::requireLabel()`.
This function is used to detect whether a certain layer requires a label.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Wed, 2 Jun 2021 05:34:14 +0000 (14:34 +0900)]
[APP] Add tensorflow based product rating

Added a tensorflow-based product rating application.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Wed, 2 Jun 2021 07:19:59 +0000 (16:19 +0900)]
[SplitLayer] Support flexible dimension

This patch adds support for the flexible dimension in split layer.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Cc: Parichay Kapoor <pk.kapoor@samsung.com>
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Mon, 10 May 2021 02:53:39 +0000 (11:53 +0900)]
[Permute] Add permute layer forward/backward

This patch implements forward/backward, with a corresponding test.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
jijoong.moon [Fri, 21 May 2021 02:53:36 +0000 (11:53 +0900)]
[ RNN ] Add unittest case & return_sequence

This PR includes,
  . unittests of the rnn layer
  . support for return_sequence in the rnn layer

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Jihoon Lee [Wed, 26 May 2021 12:15:44 +0000 (21:15 +0900)]
[Recorder] Add multiout equivalent in transLayer

When getting the gradient from `gradientTape`, it adds up all the derivatives
of the dependent layer. While this behavior is correct, we need to measure to
what extent each layer contributed to the result.

This patch adds a multiout layer, which consists of stub layers that
separate the derivatives one by one.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
jijoong.moon [Thu, 27 May 2021 11:20:32 +0000 (20:20 +0900)]
[ LSTM ] Add Multi-Layered LSTM Unittest

This commit includes,
  . Multi-Layered LSTM unit test.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Jihoon Lee [Thu, 6 May 2021 12:22:57 +0000 (21:22 +0900)]
[Permute] Implement peripheral functions

This patch implements the initialize, export, and setProperty functions of the
permute layer.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Fri, 21 May 2021 09:17:58 +0000 (18:17 +0900)]
[layer] Move distribute to LayerNode

Move distribute from the layer to LayerNode.
Further, the distribute layer is now supposed to be used
as a wrapper over the layer maintained by the LayerNode,
and must not be maintained externally.
Rather, use getDistribute() to check whether the node is distributed,
and treat the layer as a regular layer.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[Layer] Add permute layer scaffolding
Jihoon Lee [Tue, 4 May 2021 01:50:09 +0000 (10:50 +0900)]
[Layer] Add permute layer scaffolding

This patch adds permute layer scaffolding

**Minor Fix**
- Add error handling for transpose tensordim

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[ LSTM ] Add More Unit Test Cases
jijoong.moon [Thu, 27 May 2021 04:43:41 +0000 (13:43 +0900)]
[ LSTM ] Add More Unit Test Cases

This commit includes,
 . Additional unittests
   - with return sequences
   - with multiple batch size
   - with multiple batch + return sequences
   - with multiple batch + return sequences + multi-iteration
 . Fixed an initialization bug during training
 . Removed a check in the tensor dot product to support multi-dimensional
 tensors.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
3 years ago[ LSTM ] Add Unit test for LSTM Layer
jijoong.moon [Mon, 24 May 2021 05:54:44 +0000 (14:54 +0900)]
[ LSTM ] Add Unit test for LSTM Layer

This commit includes,
  . unit test cases for the lstm layer
  . fixup of a potential bug: tanh prime calculation

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
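The tanh prime fix mentioned above concerns the derivative used in backwarding. For reference, d/dx tanh(x) = 1 - tanh(x)^2; a quick numerical check of that identity (illustrative only, not the nntrainer code):

```python
# Illustrative check of the tanh derivative identity used in backwarding:
# d/dx tanh(x) = 1 - tanh(x)^2, verified against a central difference.
import math

def tanh_prime(x):
    t = math.tanh(x)
    return 1.0 - t * t

x = 0.7
numeric = (math.tanh(x + 1e-6) - math.tanh(x - 1e-6)) / 2e-6
assert abs(tanh_prime(x) - numeric) < 1e-8
```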
3 years ago[ LSTM ] Add return_sequence
jijoong.moon [Tue, 18 May 2021 01:25:52 +0000 (10:25 +0900)]
[ LSTM ] Add return_sequence

This commit includes,
  . implementation of return_sequence. If it is true, the layer
  generates an output at every time iteration; if it is false,
  only the last output is available.
  . addition of the 'return_sequence' keyword

Resolves:

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
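The return_sequence semantics described above can be sketched with a toy scalar RNN. This is a hypothetical illustration (invented names, not the nntrainer implementation):

```python
# Hypothetical sketch of return_sequence semantics (toy scalar RNN, not
# the nntrainer implementation): with return_sequence=True the layer emits
# the hidden state at every timestep; with False, only the last one.
import math

def rnn_forward(inputs, h0, w_in, w_rec, return_sequence=False):
    h = h0
    outputs = []
    for x in inputs:
        # Simple tanh cell: h_t = tanh(w_in * x_t + w_rec * h_{t-1})
        h = math.tanh(w_in * x + w_rec * h)
        outputs.append(h)
    return outputs if return_sequence else [outputs[-1]]

seq = [0.5, -0.5, 1.0]
full = rnn_forward(seq, h0=0.0, w_in=1.0, w_rec=0.5, return_sequence=True)
last = rnn_forward(seq, h0=0.0, w_in=1.0, w_rec=0.5)
```

With return_sequence=True the output has one entry per timestep; with False only the final hidden state is kept, which matches the behavior described in the commit.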
3 years ago[ LSTM ] LSTM Unit Test Cases
jijoong.moon [Tue, 18 May 2021 01:24:54 +0000 (10:24 +0900)]
[ LSTM ] LSTM Unit Test Cases

This PR includes,
 . LSTM unittest cases
 . python code in genModels

Resolves:

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
3 years ago[ LSTM ] Fix implementation & add backwarding testcase
jijoong.moon [Thu, 13 May 2021 04:03:49 +0000 (13:03 +0900)]
[ LSTM ] Fix implementation & add backwarding testcase

This commit includes:
  . fix implementation (apply activation)
  . add one unittest to check functionality of backwarding

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
3 years ago[Exporter] Implement saving fc node
Jihoon Lee [Fri, 30 Apr 2021 10:34:06 +0000 (19:34 +0900)]
[Exporter] Implement saving fc node

**Changes proposed in this PR:**
- This patch adds saving of the fully connected layer with node_exporter
- add a dedicated buffer to `TfNode`

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[app] reduce Logistic regression application epochs
Parichay Kapoor [Tue, 8 Jun 2021 06:11:54 +0000 (15:11 +0900)]
[app] reduce Logistic regression application epochs

Reduce logistic regression application epochs.
This application is meant to check that things work rather than to verify
accuracy, so it can run with fewer epochs.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[Recorder] Add Input map generation
Jihoon Lee [Wed, 26 May 2021 12:09:29 +0000 (21:09 +0900)]
[Recorder] Add Input map generation

There was a bug where a layer's input was taken directly from the previous
layer. If the model is not sequential, this assumption does not hold,
making the Addition test fail.

This patch builds `model.recoder__input_maps` to correctly locate
the input tensor, with some minor clean ups.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
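The input-map idea above can be sketched as follows. Names are illustrative (not nntrainer's): instead of assuming a layer's input is the textually previous layer's output, an explicit map is built from declared connections:

```python
# Hypothetical sketch of the input-map idea (illustrative names, not
# nntrainer's): build an explicit dst -> [src, ...] map from declared
# connections instead of assuming sequential wiring.
def build_input_map(connections):
    """connections: list of (src_layer, dst_layer) edges."""
    input_map = {}
    for src, dst in connections:
        input_map.setdefault(dst, []).append(src)
    return input_map

edges = [("in", "fc1"), ("in", "fc2"), ("fc1", "add"), ("fc2", "add")]
imap = build_input_map(edges)
```

With this map, a non-sequential node like "add" correctly resolves two inputs ("fc1" and "fc2") instead of only the layer declared just before it, which is the failure mode the commit describes for the Addition test.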
3 years ago[meson] Enable tflite-interpreter by default
Parichay Kapoor [Thu, 20 May 2021 05:22:08 +0000 (14:22 +0900)]
[meson] Enable tflite-interpreter by default

Enable tflite-interpreter by default, as the tflite layer is
already enabled.
Move the flatc dependency out to the main meson.build.
Also add the proper path for the generated tf_schema.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[layer] Move flatten property to layerNode
Parichay Kapoor [Tue, 18 May 2021 05:44:09 +0000 (14:44 +0900)]
[layer] Move flatten property to layerNode

Move the flatten property out to LayerNode,
and prepare for moving other properties out to LayerNode as well.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[Refactor] Delegate tfNode creation to exporter
Jihoon Lee [Thu, 29 Apr 2021 06:31:33 +0000 (15:31 +0900)]
[Refactor] Delegate tfNode creation to exporter

This patch enables delegating exporting logic to the exporter.
Note that this is supposed to be a 1:1 mapping; if an m:n
mapping is needed, it should be handled directly by the interpreter,
which can actually see the node.

Also s/`save_result`/`saveResult` and `get_result`/`getResult`

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
[Android] Add tflite interpreter option

This patch adds the tflite interpreter option to the ndk build.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Fix] Initialize member variable
hyeonseok lee [Tue, 1 Jun 2021 04:52:09 +0000 (13:52 +0900)]
[Fix] Initialize member variable

 - Initialize the member variable sorted
 - Catch exceptions when creating the tensor

resolves: 1231330, 1143556

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
3 years ago[deb/pkg] Reduce error tolerance for release build
Parichay Kapoor [Fri, 4 Jun 2021 05:54:46 +0000 (14:54 +0900)]
[deb/pkg] Reduce error tolerance for release build

Reduce the error tolerance for release builds so that the build succeeds on
launchpad and tizen. The error tolerance is reduced only for release
builds, for all tests that use the defined tolerance.
However, CI will run at the original tolerance for all builds and will
ensure that values are matched with the original precision.

The tolerance has to be reduced because different OSes
have different versions and implementations of BLAS,
which results in minor differences in the results of float
operations. When these differences accumulate over
a range of operations, the unittests fail to match the
exact values.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
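The tolerance situation above can be illustrated with hypothetical numbers: two BLAS backends accumulate tiny per-operation differences, so a strict comparison fails even though the results agree to roughly 1e-5.

```python
# Illustrative sketch (hypothetical numbers, not actual test values):
# results from two BLAS backends differ in the last digits, so a strict
# tolerance rejects them while a relaxed one still catches real errors.
def close(a, b, tol):
    return all(abs(x - y) <= tol for x, y in zip(a, b))

ref = [0.12345678, 0.87654321]   # values from one BLAS implementation
got = [0.12345679, 0.87654320]   # same computation on another platform
strict_ok = close(ref, got, tol=1e-9)    # CI-style tolerance
relaxed_ok = close(ref, got, tol=1e-5)   # release-build tolerance
```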
3 years ago[Tflite] Separate `tfopnode`
Jihoon Lee [Wed, 28 Apr 2021 10:46:58 +0000 (19:46 +0900)]
[Tflite] Separate `tfopnode`

This patch separates tfopnode into its own file.

tfOpNode will be built directly from the exporter if it doesn't need to
be fused.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[tflite] Update dependency from tflite to tf2lite
Parichay Kapoor [Wed, 2 Jun 2021 12:44:31 +0000 (21:44 +0900)]
[tflite] Update dependency from tflite to tf2lite

Update the dependency from tensorflow-lite to tensorflow2-lite.
This is done for the Tizen and Debian builds; it has already been done for
Android.

The tensorflow2-lite header depends on flatbuffers, so flatbuffers-devel
is additionally added as a build dependency.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[pkg/spec] Update spec packaging for tflite layer
Parichay Kapoor [Fri, 21 May 2021 00:55:10 +0000 (09:55 +0900)]
[pkg/spec] Update spec packaging for tflite layer

Update spec packaging for tflite layer with tflite dependency

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[test] Remove run_unittests
Parichay Kapoor [Thu, 3 Jun 2021 05:15:27 +0000 (14:15 +0900)]
[test] Remove run_unittests

Remove run_unittests

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[deb/pkg] Minor update to deb packaging
Parichay Kapoor [Thu, 3 Jun 2021 05:13:13 +0000 (14:13 +0900)]
[deb/pkg] Minor update to deb packaging

Update debian packaging:
- update the unittests to use meson tests
- update dh_missing to fail rather than give warnings

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[unittest] Update unittest tolerance for debian build
Parichay Kapoor [Thu, 3 Jun 2021 03:34:47 +0000 (12:34 +0900)]
[unittest] Update unittest tolerance for debian build

Update the unittest tolerance to 1e-5 for the debian build to match
across platforms (bionic, focal) and architectures.
Similar issues have occurred for i586 tizen as well,
so this patch lowers the tolerance for all ccapi and capi tests.

This issue is due to OS and library versioning differences.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[deb/pkg] Add nnstreamer filter and update testing
Parichay Kapoor [Thu, 3 Jun 2021 01:59:44 +0000 (10:59 +0900)]
[deb/pkg] Add nnstreamer filter and update testing

Add the nnstreamer filter in the packaging.
Also update the unittests to use the meson tests.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[ Release ] NNTrainer 0.2.0 Release
jijoong.moon [Wed, 2 Jun 2021 13:15:52 +0000 (22:15 +0900)]
[ Release ] NNTrainer 0.2.0 Release

NNTrainer v0.2.0 is released.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
3 years ago[deb/pkg] Update debian install filenames
Parichay Kapoor [Wed, 2 Jun 2021 12:51:52 +0000 (21:51 +0900)]
[deb/pkg] Update debian install filenames

Update debian install filenames for the capi and ccapi.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[Product Ratings] Fix inconsistent batch
Jihoon Lee [Wed, 2 Jun 2021 06:42:00 +0000 (15:42 +0900)]
[Product Ratings] Fix inconsistent batch

This patch fixes an inconsistent batch size between the source file and the
model configuration.

Also found bug #1238, which will be dealt with separately.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[debian] Refactor debian packaging
Parichay Kapoor [Wed, 2 Jun 2021 04:50:16 +0000 (13:50 +0900)]
[debian] Refactor debian packaging

Refactor the debian package to make separate packages for capi and ccapi,
along with their devels.
Other bug fixes/improvements are:
- Update the names of the packages in pkg-config files for the capi and ccapi
- Update the installed files based on the requirements of the devel headers
(this might be expanded further). This will be refactored again once
layer_internal and neuralnet cleanup is performed.
- Update the included headers

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[APP] Change embeding to product rating
Jihoon Lee [Tue, 1 Jun 2021 12:11:36 +0000 (21:11 +0900)]
[APP] Change embeding to product rating

This patch changes the embedding application to a product rating
application, which is a more practical use case.

**Changes proposed in this PR:**
- Update Readme
- Change directory and app name embedding->productratings
- tune model and sample dataset

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[tflite] implement build operations in subgraph
Jihoon Lee [Mon, 19 Apr 2021 11:52:39 +0000 (20:52 +0900)]
[tflite] implement build operations in subgraph

This patch implements the build operation in the subgraph.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[context/optimizer] Enable plugged optimizer
Parichay Kapoor [Tue, 1 Jun 2021 04:47:17 +0000 (13:47 +0900)]
[context/optimizer] Enable plugged optimizer

Enable plugged optimizer with the app_context

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[optimizer] Update optimizer setProperty interface
Parichay Kapoor [Tue, 1 Jun 2021 02:30:01 +0000 (11:30 +0900)]
[optimizer] Update optimizer setProperty interface

Update optimizer setProperty interface based on the requirement
from the custom optimizer.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[Example] Concat example
Jihoon Lee [Thu, 27 May 2021 07:44:40 +0000 (16:44 +0900)]
[Example] Concat example

This patch adds concat example with embedding

The structure is

      (input)
     /       \
  (embed1)   (embed2)
    \         /
      (concat)
         |
      (flatten)
         |
        (fc)

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Fix] Prevent creating multiout for split
Jihoon Lee [Tue, 1 Jun 2021 04:51:21 +0000 (13:51 +0900)]
[Fix] Prevent creating multiout for split

This patch makes an exception for the split layer so that a multiout
layer is not created for it, while also fixing a trivial sign-compare
build error.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[split/test] Added unittests for split layer
Parichay Kapoor [Mon, 31 May 2021 09:44:43 +0000 (18:44 +0900)]
[split/test] Added unittests for split layer

Added unittests for split layer covering initialization,
forwarding and backwarding. Corresponding bug fixes are
also added.
This has revealed an issue with concat layer #1226
which will be resolved soon.

Also add split layer to android build

Resolves #1208

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>