platform/core/ml/nntrainer.git
3 years ago[Chore] Move resource path
Jihoon Lee [Tue, 2 Mar 2021 07:10:18 +0000 (07:10 +0000)]
[Chore] Move resource path

Move resource path from build root to res path for the layer plugin

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Conf] Add default conf nntrainer.ini
Jihoon Lee [Fri, 29 Jan 2021 09:28:00 +0000 (18:28 +0900)]
[Conf] Add default conf nntrainer.ini

This patch adds a default conf file, nntrainer.ini. Its path defaults to
`/etc/nntrainer.ini` but is subject to change via `--sysconfdir` and
`--prefix` (e.g. configuring with `--prefix=/usr --sysconfdir=/etc` keeps it
at `/etc/nntrainer.ini`).

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[AppContext] Default Path loader
Jihoon Lee [Thu, 28 Jan 2021 12:59:04 +0000 (21:59 +0900)]
[AppContext] Default Path loader

Add appcontext default path loader.

**Changes proposed in this PR:**
- Add `AppContext::registerLayerFromDirectory`
- Call the `add_extension_object` routine for `AppContext::Global()`. The
routine searches the directory given by the `NNTRAINER_PATH` environment
variable and the path from `nntrainer.ini` (the ini will be installed in an
upcoming PR); see the sketch below.
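
A minimal usage sketch of the loader described above (the header name and
the plugin directory are assumptions for illustration, not the exact API):

```
// Hedged sketch: register every pluggable layer found under a directory
// into the global application context (path is illustrative).
#include <app_context.h>

void loadDefaultPlugins() {
  auto &ac = nntrainer::AppContext::Global();
  ac.registerLayerFromDirectory("/usr/lib/nntrainer/layers");
}
```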

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[dataset/unittest] Bug fix for segmentation fault
Parichay Kapoor [Fri, 5 Mar 2021 02:32:12 +0000 (11:32 +0900)]
[dataset/unittest] Bug fix for segmentation fault

The unittest gave a segmentation fault when trying to close a dataset
object which had not been opened successfully.
Initializing the corresponding member to false solves the issue.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[Docs] Fix broken links
Juyeong Lee [Thu, 25 Feb 2021 17:54:02 +0000 (02:54 +0900)]
[Docs] Fix broken links

This patch updates broken links in the contributing guide.

Signed-off-by: Juyeong Lee <2jy22@naver.com>
3 years ago[ Document ] Add RELEASE.md
jijoong.moon [Wed, 3 Mar 2021 05:29:15 +0000 (14:29 +0900)]
[ Document ] Add RELEASE.md

Add RELEASE.md to specify the release policy.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
3 years ago[Test] Add conv various situation test
Jihoon Lee [Mon, 1 Feb 2021 06:36:10 +0000 (15:36 +0900)]
[Test] Add conv various situation test

The case where the convolution layer is given a variable stride was missing
from the tests; this patch adds it:

- basic conv for reference
- conv with same padding
- conv with multi strides
- conv with multi strides + same padding

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Conv2D] Rework conv2d layer
Jihoon Lee [Wed, 17 Feb 2021 04:30:15 +0000 (13:30 +0900)]
[Conv2D] Rework conv2d layer

This patch contains a major rework of the conv2d layer to cope with
padded, strided configurations.

**Major changes proposed in this PR**
- Rework conv2d::calcDerivative by
  - Adding and exploiting a col2im method (a sketch follows the
    restrictions below)
- Rework conv2d::calcGradient by
  - Adding a dilation argument to im2col

**Minor changes proposed in this PR:**
- Delete obsolete profiler

**Effect on performance**
- strip_pad has been removed, thus fewer memcpys and allocations

**Major restrictions**
This patch mainly refactors the flow of how conv2d backpropagates.
Some issues are still left to be handled; those issues can
be handled on top of this patch.

- If the calculated output size is not an integer on either forward or
backward, it is a hard error for now.
- Same padding is only applicable when the kernel size is odd.
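
For reference, a minimal col2im sketch under common assumptions (a
row-major `(C*kh*kw) x (out_h*out_w)` column layout; this illustrates the
technique, not necessarily the method's actual signature):

```
// Hedged col2im sketch: accumulate the unfolded column buffer back into
// the input-gradient image. `im` must be zero-initialized by the caller.
#include <vector>

void col2im(const std::vector<float> &col, std::vector<float> &im,
            int C, int H, int W,  /* input dims */
            int kh, int kw,       /* kernel dims */
            int sh, int sw,       /* strides */
            int ph, int pw) {     /* paddings */
  const int out_h = (H + 2 * ph - kh) / sh + 1;
  const int out_w = (W + 2 * pw - kw) / sw + 1;
  int idx = 0;
  for (int c = 0; c < C; ++c)
    for (int i = 0; i < kh; ++i)
      for (int j = 0; j < kw; ++j)
        for (int oh = 0; oh < out_h; ++oh)
          for (int ow = 0; ow < out_w; ++ow, ++idx) {
            int h = oh * sh + i - ph; // position in the original image
            int w = ow * sw + j - pw;
            if (h >= 0 && h < H && w >= 0 && w < W)
              im[(c * H + h) * W + w] += col[idx];
          }
}
```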

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Fix/Test] Fix ccapi test fail
Jihoon Lee [Wed, 3 Mar 2021 05:48:43 +0000 (14:48 +0900)]
[Fix/Test] Fix ccapi test fail

As the resource path has been changed, the test's resource path has to be
updated accordingly. This patch resolves the issue.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Fix/Svace] Add try catch to model-run
MyungJoo Ham [Wed, 3 Mar 2021 02:11:51 +0000 (11:11 +0900)]
[Fix/Svace] Add try catch to model-run

The two functions called in the following code may throw
exceptions. Handle them.
```
if (arg == "model") {
  return api_model_run();
} else {
  return ini_model_run(arg);
}
```
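
A hedged sketch of the shape of the fix (wrapper name and return types are
assumptions; the exact handling in the patch may differ):

```
// Hedged sketch: wrap the throwing calls so exceptions cannot escape to
// the top level; api_model_run / ini_model_run are from the excerpt above.
#include <cstdlib>
#include <exception>
#include <iostream>
#include <string>

int api_model_run();
int ini_model_run(const std::string &);

int runModel(const std::string &arg) {
  try {
    if (arg == "model") {
      return api_model_run();
    } else {
      return ini_model_run(arg);
    }
  } catch (const std::exception &e) {
    std::cerr << "uncaught exception: " << e.what() << '\n';
    return EXIT_FAILURE;
  }
}
```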

This fixes SVACE 457479.

Signed-off-by: MyungJoo Ham <myungjoo.ham@samsung.com>
3 years ago[model] Allow updating batch size after allocation
Parichay Kapoor [Thu, 4 Feb 2021 11:07:48 +0000 (20:07 +0900)]
[model] Allow updating batch size after allocation

Allow updating the batch size after memory allocation has been done.
This allows changing the batch size for training at any time.

Further, it allows changing the batch size from training to inference and back,
which is demonstrated with the MNIST example in the nnstreamer element of nntrainer.
The model file starts with a batch size of 32, which is changed to 1 for inference.

Also added another unittest in the cpp-api which sets a different batch size at
different stages of the model (after initialization, after compile, after train)
and trains multiple times with different batch sizes to validate that training
works; see the sketch below.
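
A hedged sketch of the flow such a unittest exercises (assuming `model` is
an `ml::train::Model` handle and that `batch_size` is a settable property,
mirroring the ini key):

```
// Hedged sketch (fragment): switch batch size between training and
// inference; affected tensors are re-allocated under the hood.
model->setProperty({"batch_size=32"});
model->train();                        // train with batch size 32
model->setProperty({"batch_size=1"});  // shrink for single-sample inference
```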

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[tensor] Allow updating batch size after allocation
Parichay Kapoor [Thu, 4 Feb 2021 11:03:15 +0000 (20:03 +0900)]
[tensor] Allow updating batch size after allocation

Allow updating the batch size after memory allocation for a tensor.
The previously allocated memory is first set to null and then allocated again
with the new batch size. As tensors share memory, the previously allocated
memory might not be freed when set to null, which can lead to high peak memory
usage. It is recommended to first free the memory of all tensors whose batch
size is changing, then update the batch size, and then allocate again to keep
the peak memory requirement in check; a sketch of this order follows below.
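
A hedged sketch of the recommended order (the container and method names
are illustrative, not the actual interface):

```
// Hedged sketch: deallocate every affected tensor before any of them is
// re-allocated, so old and new buffers never coexist at peak.
for (auto &t : batch_tensors) t.deallocate();        // 1) free old memory
for (auto &t : batch_tensors) t.updateBatch(new_b);  // 2) set new batch size
for (auto &t : batch_tensors) t.allocate();          // 3) allocate again
```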

As the memory is allowed to be re-allocated, the src_tensor for a tensor is no
longer reset after memory allocation. When reallocating, the src_tensor is used
again in order to maintain the memory management done once during initialization.

This patch also adds the appropriate interfaces to the Var_Grad and Weight classes.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[filter] Extract inference wrapper header
Jihoon Lee [Thu, 18 Feb 2021 04:50:12 +0000 (13:50 +0900)]
[filter] Extract inference wrapper header

This patch extracts the inference wrapper header for testability and better
maintainability.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[loss] Allow no loss in the model
Parichay Kapoor [Thu, 4 Feb 2021 04:45:22 +0000 (13:45 +0900)]
[loss] Allow no loss in the model

Allow setting no loss in the model.
This allows inferencing a model, and creating submodels
with backbones to infer particular output features.

Updated the existing unittest with no loss to succeed,
and added more unittests to validate that models run successfully
without a loss.

V2:
Updated mnist_inf.ini to be without loss

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[optimizer] Update to camelcase
Parichay Kapoor [Fri, 29 Jan 2021 11:42:12 +0000 (20:42 +0900)]
[optimizer] Update to camelcase

Update apply_gradient(s) to applyGradient(s)

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[weight] Move apply gradient to weight
Parichay Kapoor [Fri, 29 Jan 2021 11:34:18 +0000 (20:34 +0900)]
[weight] Move apply gradient to weight

Move apply gradient to the weight; it simply adds the scaled gradient
to the variable.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[weight/layer] Move weight regularization out of layers
Parichay Kapoor [Fri, 29 Jan 2021 11:03:51 +0000 (20:03 +0900)]
[weight/layer] Move weight regularization out of layers

Move weight regularization out of the layers to the weights
and remove the duplicated code from all the layers.
The loss and gradients from weight regularization are computed by the
weight itself; see the sketch below.
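
A hedged sketch of the idea (a freestanding function with illustrative
names; the standard L2 term is shown):

```
// Hedged sketch: the weight owns its regularization, so layers no longer
// duplicate this code. L2 loss is (decay / 2) * ||W||^2.
float regularizationLoss(const Tensor &w, float decay) {
  float n = w.l2norm();        // Euclidean norm of the weight
  return 0.5f * decay * n * n; // standard L2 penalty
}
```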

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[Fix/Bug] NeuralNet check save path
Jihoon Lee [Thu, 28 Jan 2021 12:14:10 +0000 (21:14 +0900)]
[Fix/Bug] NeuralNet check save path

Add a check to see if save_path is valid; if not, issue a warning.

**Semantic changes:**
From this patch on, the default save path is gone.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Refactor/App] Update readme
Jihoon Lee [Mon, 22 Feb 2021 06:37:07 +0000 (15:37 +0900)]
[Refactor/App] Update readme

Update the `readme` to explain how to run the app based on the changed
resource path.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Refactor/app] Move resources to res path
Jihoon Lee [Mon, 22 Feb 2021 06:15:50 +0000 (15:15 +0900)]
[Refactor/app] Move resources to res path

**Major Changes proposed in this PR:**
- Move resources to the res root to make sure they are installed when
installing the application
- Change the test script accordingly

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Clean/iniTest] add logic to erase ini after test
Jihoon Lee [Fri, 19 Feb 2021 06:58:24 +0000 (15:58 +0900)]
[Clean/iniTest] add logic to erase ini after test

This patch adds logic to erase the ini after each test, for better determinism
and a cleaner build directory.

v2: also deprecating config_str in favor of
`ScopedIni`

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Refactor/test] Move resources to test path
Jihoon Lee [Fri, 19 Feb 2021 03:06:01 +0000 (12:06 +0900)]
[Refactor/test] Move resources to test path

**Situation this commit tries to deal with**
Currently, resources are scattered and each is managed in its own way,
e.g. from .tar.gz archives, test_models, and resources inside the applications.
This causes a gap between running a test at build time and installing an
example/test and trying to run it.
Also, all temporary resources are packed into the build folder,
making it harder to debug log files and models, let alone causing confusion
about which files should be referred to when running an application.

**What this commit remedies:**
- Running tests after `ninja`, and running tests after `ninja install` and
moving to `${bindir}`, give consistent results.

**Major changes proposed in this commit**
- Create a `${buildroot}/res` root.
- Hard-link (or, if not possible, copy) resources to `${buildroot}/res`.
- Each test refers to a dynamically calculated path instead of `${pwd}`.
- The path is calculated by 1) directly referring to the `NNTRAINER_RESOURCE_PATH`
environment variable, or 2) if unset, a predefined path starting from `${pwd}/res`
(see the sketch below).
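
A hedged sketch of that lookup order (helper name and fallback are
illustrative):

```
// Hedged sketch: prefer NNTRAINER_RESOURCE_PATH, else fall back to
// a predefined path under the current working directory.
#include <cstdlib>
#include <string>

std::string getResPath(const std::string &fallback = "res") {
  if (const char *env = std::getenv("NNTRAINER_RESOURCE_PATH"))
    return env;   // 1) explicit override via the environment variable
  return fallback; // 2) predefined path starting from ${pwd}
}
```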

**Minor changes proposed in this commit:**
- Deprecate some internal tests that are already covered elsewhere

**Planned updates after the commit**
- Deprecate config_str in favor of `IniTestWrapper`, as config_str makes
things harder to track
- Make sure test-generated files are deleted after each test
- Manage application resources
- ~Packaging updates to include the solely managed `${buildroot}/res`
folder~ (already configured)

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Chores] Richer log to some functions
Jihoon Lee [Fri, 19 Feb 2021 03:21:47 +0000 (12:21 +0900)]
[Chores] Richer log to some functions

This patch makes some logs contain runtime information.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Util] Delete internal namespace
Jihoon Lee [Wed, 27 Jan 2021 05:32:33 +0000 (14:32 +0900)]
[Util] Delete internal namespace

This patch removes internal namespace from nntrainer::exception

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[meson] Set meson buildtype to release
Parichay Kapoor [Fri, 5 Feb 2021 07:49:36 +0000 (16:49 +0900)]
[meson] Set meson buildtype to release

Setting the -O optimization level manually gives meson warnings.
Instead, update the meson buildtype to release,
which sets debug to false and optimization to level 3.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[Build] change pkg name
Jaeyun [Tue, 16 Feb 2021 10:49:04 +0000 (19:49 +0900)]
[Build] change pkg name

Update the pkg name to machine-learning-training.

TODO:
1. separate training api pkg in debian
2. migrate training api to api repo

Signed-off-by: Jaeyun <jy1210.jung@samsung.com>
3 years ago[Fix] Gcc-9 complaining about parentheses
Jihoon Lee [Tue, 2 Mar 2021 04:12:52 +0000 (04:12 +0000)]
[Fix] Gcc-9 complaining about parentheses

Because of unclear parentheses in `ErrorNotification`,
gcc-9 issued a warning about the `NNTR_THROW_IF_CLEANUP` macro when
no stream is given.

This patch fixes the issue.

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Test] Add test to create layer from an example
Jihoon Lee [Wed, 27 Jan 2021 12:43:50 +0000 (21:43 +0900)]
[Test] Add test to create layer from an example

This patch adds libpow_layer.so as an example.
It also adds a simple test to load and register the .so file.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[AppContext] Add register plugin function
Jihoon Lee [Wed, 27 Jan 2021 12:42:36 +0000 (21:42 +0900)]
[AppContext] Add register plugin function

This patch adds a register-plugin function to be used when loading a library

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Pluggable] Add layer wrapper for plugged layer
Jihoon Lee [Wed, 27 Jan 2021 12:39:14 +0000 (21:39 +0900)]
[Pluggable] Add layer wrapper for plugged layer

As a layer loaded via `dlsym` cannot be destroyed by `delete`, we
need a way to delete it through a destroy function. This patch adds a
simple wrapper to deal with the problem; see the sketch below.

Additionally, this patch makes the functions in layer_internal
virtual, which paves the way to clean up the API. Please consider
`plugged_layer.h` a draft of the layer API.
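
A hedged sketch of the underlying pattern (types and names are
illustrative, not the actual API): the host releases a plugged object
through the plugin's own destroy function rather than `delete`.

```
#include <memory>

struct Layer; // opaque type implemented inside the plugin

using CreateFn = Layer *(*)();       // resolved via dlsym
using DestroyFn = void (*)(Layer *); // paired destroyer from the plugin

// Hedged sketch: pairing the instance with its destroyer guarantees
// destruction happens on the plugin side of the boundary.
std::unique_ptr<Layer, DestroyFn> makePlugged(CreateFn create,
                                              DestroyFn destroy) {
  return {create(), destroy};
}
```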

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[API] Add layer pluggable definition
Jihoon Lee [Tue, 26 Jan 2021 11:33:18 +0000 (20:33 +0900)]
[API] Add layer pluggable definition

Add the layer pluggable definition to be used when creating a custom layer
to be plugged into nntrainer.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[SimpleShot] Add simpleshot runner
Jihoon Lee [Mon, 18 Jan 2021 07:15:27 +0000 (16:15 +0900)]
[SimpleShot] Add simpleshot runner

All layers are ready for simpleshot, but there is no
executable yet.
This patch adds the simpleshot executable.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[SimpleShot] add centroid nearest neighbor layer
Jihoon Lee [Sat, 9 Jan 2021 12:58:34 +0000 (21:58 +0900)]
[SimpleShot] add centroid nearest neighbor layer

Add centroid nearest neighbor layer with validation

v2: renamed to centroid_knn for brevity

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[SimpleShot] Add centering layer
Jihoon Lee [Sat, 9 Jan 2021 04:23:58 +0000 (13:23 +0900)]
[SimpleShot] Add centering layer

Add centering layer with tests

**minor changes**
- rename simpleshot centering test
- add test_util lib to simpleshot test

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[SimpleShot] Add test util cpp
Jihoon Lee [Fri, 8 Jan 2021 06:11:38 +0000 (15:11 +0900)]
[SimpleShot] Add test util cpp

Add a simple utility to the simpleshot app

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[SimpleShot] Add scaffolding for the application
Jihoon Lee [Fri, 8 Jan 2021 05:28:25 +0000 (14:28 +0900)]
[SimpleShot] Add scaffolding for the application

This patch adds the simpleshot directory to Applications.
Nothing is present yet, just a simple structure.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Fix] Handle edgecase when using shared memory
Jihoon Lee [Mon, 18 Jan 2021 07:12:42 +0000 (16:12 +0900)]
[Fix] Handle edgecase when using shared memory

When using shared memory, if weight_size is 0, mmap reports an error.
This patch fixes the issue by handling the case where weight_size is 0.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Loss] Fix loss layer allocating a new mem
Jihoon Lee [Fri, 22 Jan 2021 08:09:38 +0000 (17:09 +0900)]
[Loss] Fix loss layer allocating a new mem

As allocation is managed by the `manager`, a layer shouldn't allocate new
memory that is managed by the manager.
However, the loss layer was allocating new memory. This patch fixes the issue.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Cc: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[Fix warning] Add override specifier
Juyeong Lee [Thu, 25 Feb 2021 21:36:29 +0000 (06:36 +0900)]
[Fix warning] Add override specifier

Add the override specifier to fix some Android build warnings.

Resolves #798

Signed-off-by: Juyeong Lee <2jy22@naver.com>
3 years ago[Docs] Fix typos in README.md
Juyeong Lee [Thu, 25 Feb 2021 17:41:27 +0000 (02:41 +0900)]
[Docs] Fix typos in README.md

This is a trivial commit that fixes typos in README.md

Signed-off-by: Juyeong Lee <2jy22@naver.com>
3 years ago[Fix/Svace] Add try catch to the top level app
Jihoon Lee [Thu, 25 Feb 2021 04:46:30 +0000 (13:46 +0900)]
[Fix/Svace] Add try catch to the top level app

Add try-catch to the top-level app to ensure exceptions do not
propagate to the very top level

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Build] change dependency to nnstreamer api
Jaeyun [Tue, 16 Feb 2021 04:58:14 +0000 (13:58 +0900)]
[Build] change dependency to nnstreamer api

Now that the api repo is newly added, change the dependency to ml-inference-api.

Signed-off-by: Jaeyun <jy1210.jung@samsung.com>
3 years ago[Build] Decouple gtest from nntrainer_test_util
Jihoon Lee [Thu, 28 Jan 2021 04:56:18 +0000 (13:56 +0900)]
[Build] Decouple gtest from nntrainer_test_util

As nntrainer_test_util included gtest, it was preventing the compiler from
checking some trivial bugs (like sign comparison) and had some weird bugs.

This patch decouples gtest from nntrainer_test_util while changing gtest
to a static build.

Resolves #910

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Util] Add gtest style throw macro
Jihoon Lee [Wed, 27 Jan 2021 05:32:33 +0000 (14:32 +0900)]
[Util] Add gtest style throw macro

Add a gtest-style throw macro for productivity.
With this patch, if you have to throw, you can do it in one line:

`NNTR_THROW_IF(true, std::invalid_argument) << log << what << ever`
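
One hedged way such a macro can be built (not necessarily the project's
actual implementation; `ThrowStream` is an illustrative helper): a
temporary collects the stream and throws when it goes out of scope.

```
#include <sstream>
#include <stdexcept>

template <typename Err> struct ThrowStream {
  std::ostringstream os;
  template <typename T> ThrowStream &operator<<(const T &v) {
    os << v;
    return *this;
  }
  // Throws at the end of the full expression; noexcept(false) is needed
  // because destructors are implicitly noexcept.
  ~ThrowStream() noexcept(false) { throw Err(os.str()); }
};

#define NNTR_THROW_IF(pred, err) \
  if (pred) ThrowStream<err>{}
```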

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Chore] s/since/Since/
Jihoon Lee [Tue, 16 Feb 2021 03:06:08 +0000 (12:06 +0900)]
[Chore] s/since/Since/

Per the Tizen convention, `since` should be `Since`; this patch resolves the
issue.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Fix] Update lazy alloc ctor to lazy alloc
Jihoon Lee [Mon, 15 Feb 2021 07:28:28 +0000 (16:28 +0900)]
[Fix] Update lazy alloc ctor to lazy alloc

This patch fixes the lazy-alloc ctor to lazily allocate; it was delegated
to the eager ctor, causing the tensor to be allocated twice for lazily
allocated tensors.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[custom-app] svace: added missing try-catch
Parichay Kapoor [Tue, 9 Feb 2021 04:02:38 +0000 (13:02 +0900)]
[custom-app] svace: added missing try-catch

Added missing try-catch reported by svace for the custom tizen application

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[manager] svace issue fix
Parichay Kapoor [Tue, 9 Feb 2021 03:11:59 +0000 (12:11 +0900)]
[manager] svace issue fix

Fixed the uninitialized class member issue for manager.cpp - MMapedMemory.
Added class member initialization.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[conv] svace integer overflow fix
Parichay Kapoor [Tue, 9 Feb 2021 02:58:51 +0000 (11:58 +0900)]
[conv] svace integer overflow fix

Added an integer overflow fix reported by svace for convolution

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years agoFix svace issue GraphWatcher UNINIT.CTOR
hyeonseok lee [Tue, 9 Feb 2021 06:29:20 +0000 (15:29 +0900)]
Fix svace issue GraphWatcher UNINIT.CTOR

In the constructor, the member variable expected_loss was not initialized.
expected_loss will be read later in the readIteration function, so it is
initialized with 0.0.

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
3 years agoFix svace issue manager UNREACHABLE_CODE
hyeonseok lee [Tue, 9 Feb 2021 06:17:26 +0000 (15:17 +0900)]
Fix svace issue manager UNREACHABLE_CODE

The fd_ variable is only changed in the ANDROID environment,
so it is enclosed with an ifdef.

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
3 years agoFix svace issue createDataset NO_CATCH
hyeonseok lee [Tue, 9 Feb 2021 03:41:39 +0000 (12:41 +0900)]
Fix svace issue createDataset NO_CATCH

Add a try-catch statement to handle exceptions in main.cpp

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
3 years agoFix svace issue readModel NO_CATCH
hyeonseok lee [Tue, 9 Feb 2021 03:32:01 +0000 (12:32 +0900)]
Fix svace issue readModel NO_CATCH

Add a try-catch statement to handle exceptions in main.cpp and main_func.cpp

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
3 years ago[Weight] Cleanup train argument for initialize gradient
Parichay Kapoor [Tue, 26 Jan 2021 09:57:48 +0000 (18:57 +0900)]
[Weight] Cleanup train argument for initialize gradient

Clean up the train argument for initializing the gradient.
This argument was needed when weights and gradients were initialized together,
but now the caller must call initializeGradient only when the weight is trainable.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[ DOC ] add badge for android arm
jijoong.moon [Fri, 5 Feb 2021 08:06:18 +0000 (17:06 +0900)]
[ DOC ] add badge for android arm

Add badge for android arm

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
3 years ago[ Document ] Add daily build status
jijoong.moon [Fri, 5 Feb 2021 01:34:19 +0000 (10:34 +0900)]
[ Document ] Add daily build status

Add daily build status in readme

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
3 years ago[var_grad] Improve nomenclature
Parichay Kapoor [Mon, 1 Feb 2021 07:59:49 +0000 (16:59 +0900)]
[var_grad] Improve nomenclature

There is significant confusion with the names of the functions in var_grad
about initializing variables and gradients:
initializeWeights -> initializeVariables
initializeGrad -> initializeGradients

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[tensor] Split tensor constructor into 2
Parichay Kapoor [Mon, 1 Feb 2021 07:26:59 +0000 (16:26 +0900)]
[tensor] Split tensor constructor into 2

Create a new tensor constructor for lazy allocation
rather than adding the lazy allocation feature to the existing one.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[bug fixes] Update shared mem and in-place opt
Parichay Kapoor [Fri, 29 Jan 2021 07:11:06 +0000 (16:11 +0900)]
[bug fixes] Update shared mem and in-place opt

Update shared memory and in-place opt to share memory with previous
layers with different strategies based on the type of optimization the
layer is doing for reducing its memory usage.

Added changes to weight and var_grad to create weight/var_grad with lazy
memory allocation.
Updated SrcSharedTensor to remove the check that the source is allocated,
as it cannot be guaranteed.
Bug fix for checking whether a tensor is allocated.

Updated the nntrainer extension for nnstreamer to use a different
ini file for inference, which does not require changing the batch size.
Added minor bug fixes to the patch.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[Manager] Support lazy memory allocation with manager
Parichay Kapoor [Tue, 26 Jan 2021 11:44:19 +0000 (20:44 +0900)]
[Manager] Support lazy memory allocation with manager

Support lazy memory allocation with manager

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[graph] In-place bug fix
Parichay Kapoor [Mon, 25 Jan 2021 06:56:27 +0000 (15:56 +0900)]
[graph] In-place bug fix

This patch applies a bug fix for the in-place layer optimization.
As in-place layers and other layers manage derivative memory
differently, in-place layers cannot directly re-use the tensors
of their neighboring layers, but rather have to use tensors differently.
This patch applies this bug fix.

See also #878

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[tensor] Support late memory allocation for tensors
Parichay Kapoor [Mon, 25 Jan 2021 06:31:33 +0000 (15:31 +0900)]
[tensor] Support late memory allocation for tensors

Support late memory allocation for tensors.
This helps create tensor wrapper elements and allocate memory
later when needed.
This allows finalizing the graph, creating all the tensors and graph
connections for all the layers which can be stored in offline setup
and then loaded directly from file.

This decoupling allows reusing tensors with every call to train,
where we can allocate and deallocate memory without having to create tensors
and do tensor management again and again.

In short term, this is necessary for in-place layer optimization.
See Also #879

The implementation requires caching the source tensor when creating
shared tensors, as shared tensors cannot be created unless the source
tensor's memory has been allocated.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[ ResNet ] Add Resnet18 Application
jijoong.moon [Wed, 27 Jan 2021 08:04:58 +0000 (17:04 +0900)]
[ ResNet ] Add Resnet18 Application

This PR includes ini configuration file for resnet18

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
3 years ago[ DEBIAN ] Fix gcc version for debian build
jijoong.moon [Fri, 5 Feb 2021 00:57:37 +0000 (09:57 +0900)]
[ DEBIAN ] Fix gcc version for debian build

Set higher gcc version to build debian

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
3 years ago[ Application ] Bug fix of Android target Lib
jijoong.moon [Thu, 4 Feb 2021 11:19:19 +0000 (20:19 +0900)]
[ Application ] Bug fix of Android target Lib

Fix the path of tensorflow-lite lib

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
3 years ago[ Application ] Fix to use Tensorflow Lite 2.3.0
jijoong.moon [Thu, 4 Feb 2021 07:48:11 +0000 (16:48 +0900)]
[ Application ] Fix to use Tensorflow Lite 2.3.0

Fix Android.mk to use Tensorflow Lite 2.3.0

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
3 years ago[app_context] Resolve build issue with gcc5
Parichay Kapoor [Thu, 4 Feb 2021 05:26:10 +0000 (14:26 +0900)]
[app_context] Resolve build issue with gcc5

This solves the build issue for gcc5 when using std::call_once.
The original code calls bind with a copy of the argument to the function
and fails, as a reference is expected with gcc5.

This patch adds a std::ref explicitly to make this work with gcc5.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[ MESON ] fix capi-nnstreamer dependency
jijoong.moon [Thu, 4 Feb 2021 04:53:05 +0000 (13:53 +0900)]
[ MESON ] fix capi-nnstreamer dependency

remove capi-nnstreamer dependency

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
3 years ago[ ANDROID ] change the tflite version
jijoong.moon [Thu, 4 Feb 2021 01:59:26 +0000 (10:59 +0900)]
[ ANDROID ] change the tflite version

Change the Tensorflow-Lite version from 1.13.1 to 2.3.0

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
3 years ago[ Doc ] add how to run android application
jijoong.moon [Tue, 2 Feb 2021 12:48:51 +0000 (21:48 +0900)]
[ Doc ] add how to run android application

Add documentation on how to build and run on Android

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
3 years ago[adam] Reduce temporary memory allocation
Parichay Kapoor [Fri, 29 Jan 2021 11:28:12 +0000 (20:28 +0900)]
[adam] Reduce temporary memory allocation

Reduce temporary memory allocation for adam.
This is done by reusing the gradient memory to calculate the final
update which is to be applied to the weight.

This not only removes temporary memory allocations, which were being made every
epoch for each weight, but also reduces peak memory usage; see the sketch below.
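
A hedged element-wise sketch of the reuse (bias correction omitted for
brevity; buffer layout and names are illustrative):

```
#include <cmath>
#include <cstddef>

// Hedged sketch: m/v are adam's moment buffers, g the gradient, w the
// weight. The final update is written into g itself, so no per-weight
// temporary tensor is needed.
void adamStep(float *w, float *g, float *m, float *v, size_t n, float lr,
              float beta1, float beta2, float eps) {
  for (size_t i = 0; i < n; ++i) {
    m[i] = beta1 * m[i] + (1.0f - beta1) * g[i];
    v[i] = beta2 * v[i] + (1.0f - beta2) * g[i] * g[i];
    g[i] = lr * m[i] / (std::sqrt(v[i]) + eps); // g now holds the update
    w[i] -= g[i];
  }
}
```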

Resolves #917

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[blas] Add missing return
Parichay Kapoor [Fri, 29 Jan 2021 09:37:34 +0000 (18:37 +0900)]
[blas] Add missing return

Add a bug fix for a missing return in the raw blas implementation.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[Fix] Resolve static init order fiasco
Jihoon Lee [Wed, 27 Jan 2021 01:15:12 +0000 (10:15 +0900)]
[Fix] Resolve static init order fiasco

As the static initialization order across translation units is undefined,
initializing the global caused some undefined behavior.

Initializing the global app context is delayed until it is first called;
see the sketch below.

See https://gcc.gnu.org/legacy-ml/gcc-patches/2017-03/msg00863.html

Resolves #893
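
A hedged sketch of the construct-on-first-use idiom this describes (the
function name matches the `AppContext::Global()` mentioned elsewhere in
this log; the body is illustrative):

```
// Hedged sketch: a function-local static is initialized on the first
// call (thread-safe since C++11), sidestepping the cross-translation-unit
// static initialization order.
AppContext &AppContext::Global() {
  static AppContext instance;
  return instance;
}
```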

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Tensor] Handle copy edge case
Jihoon Lee [Tue, 26 Jan 2021 10:16:38 +0000 (19:16 +0900)]
[Tensor] Handle copy edge case

When copying an uninitialized tensor to another uninitialized tensor,
tensor::copy tried to reshape the uninitialized dimension.

This patch fixes the issue.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[CAPI/acr] Update tizen capi
Jihoon Lee [Tue, 26 Jan 2021 03:09:45 +0000 (12:09 +0900)]
[CAPI/acr] Update tizen capi

**Changes proposed in this PR:**
- Setting void *user_data
- Update layer type enum

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Tensor] Rearrange Methods
Jihoon Lee [Tue, 26 Jan 2021 04:20:33 +0000 (13:20 +0900)]
[Tensor] Rearrange Methods

This patch groups tensor methods by arithmetic operation (tensor.h is
reflected with the rebase).

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Tensor] rearrange methods
Jihoon Lee [Thu, 21 Jan 2021 09:48:24 +0000 (18:48 +0900)]
[Tensor] rearrange methods

- Add missing out param methods
- Change way it is delegated for some methods
- Rename s/operator_/apply_broadcast
- Remove `operator_i` and `operator_i_util`
- Assure dimension checks

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Tensor/Clean] Relocate tensor methods
Jihoon Lee [Thu, 21 Jan 2021 04:40:13 +0000 (13:40 +0900)]
[Tensor/Clean] Relocate tensor methods

This patch relocates arithmetic methods while adding some missing
outplace operation signatures.

The order is:

1. inplace -> outplace -> outplace (with allocated memory)
2. multiply -> divide -> add -> subtract
3. scalar -> tensor

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[tensor] Update interface for tensor::map
Parichay Kapoor [Wed, 27 Jan 2021 02:25:58 +0000 (11:25 +0900)]
[tensor] Update interface for tensor::map

Update the interface for tensor::map to include the size of the original
buffer, to ensure that the buffer contains enough memory required
by the tensor shape wrapping around the memory.
Added another negative unittest for it; see the sketch below.
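
A hedged sketch of the check the widened interface enables (the helper
name and signature are illustrative):

```
#include <cstddef>
#include <stdexcept>

// Hedged sketch: fail early if the requested view does not fit inside
// the external buffer being wrapped.
void validateMappedRegion(size_t buf_bytes, size_t offset_bytes,
                          size_t view_elems) {
  if (buf_bytes < offset_bytes + view_elems * sizeof(float))
    throw std::invalid_argument("buffer too small for mapped tensor");
}
```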

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[manager] bug fix for Tensor::map
Parichay Kapoor [Tue, 26 Jan 2021 07:53:22 +0000 (16:53 +0900)]
[manager] bug fix for Tensor::map

This patch exposes a bug in Tensor::Map
where the offset is not checked when assigning the data.

This is because of a bug in the manager for the batch normalization layer.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[manager] Disable user_shared_memory
Parichay Kapoor [Tue, 26 Jan 2021 07:50:02 +0000 (16:50 +0900)]
[manager] Disable user_shared_memory

Disable user_shared_memory, as NNAPI is not required to be supported.
It is further needed to decouple tensor structure allocation and its
internal memory allocation.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[Docs] Generate nntrainer docs
Gichan Jang [Wed, 27 Jan 2021 06:39:22 +0000 (15:39 +0900)]
[Docs] Generate nntrainer docs

Generate nntrainer docs.
See: https://nntrainer.github.io/

 - Sub-documents such as Application/* need to be added
 - The table and others need to be modified according to the hotdoc rule.

Signed-off-by: Gichan Jang <gichan2.jang@samsung.com>
3 years ago[spec] add backward compatibility under tizen 6
Jihoon Lee [Wed, 20 Jan 2021 08:00:51 +0000 (17:00 +0900)]
[spec] add backward compatibility under tizen 6

`ml-error-common` redeclares the error enum inside nnstreamer in Tizen
5.5. This patch works around the issue by making a fake ml-api-error.h.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[meson] Clean up ml-api-common dependency
Jihoon Lee [Wed, 20 Jan 2021 07:07:11 +0000 (16:07 +0900)]
[meson] Clean up ml-api-common dependency

This patch cleans up the ml-api-common dependency.
If it seems to be stable on multiple platforms, we can remove
`api/capi/include/platform/ml-api-common.h`; let's keep it for now.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Fix] Reflect changes to upstream/main
Jihoon Lee [Tue, 26 Jan 2021 04:04:36 +0000 (13:04 +0900)]
[Fix] Reflect changes to upstream/main

From merging some big PRs there arose some inconsistency which caused
a build break. This patch solves the issue.

**Changes proposed in this PR:**
- Use manager.initializeTensor() in the unittest
- Add training signature to forwarding

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years agoLicense Fix / Relicense to Apache-2.0
MyungJoo Ham [Mon, 25 Jan 2021 08:42:45 +0000 (17:42 +0900)]
License Fix / Relicense to Apache-2.0

1. Do not use "Apache-2.0-only". It's "Apache-2.0".
2. Relicense files to Apache-2.0. (The author permits; I'm the author.)

Signed-off-by: MyungJoo Ham <myungjoo.ham@samsung.com>
3 years ago[data augmentation] support for random translate
Parichay Kapoor [Wed, 20 Jan 2021 09:05:32 +0000 (18:05 +0900)]
[data augmentation] support for random translate

Added support for random translate, which is fractional and does mirroring.
This is implemented with opencv, but building without opencv is allowed:
the model can be built, but using this layer without opencv will throw.

Added corresponding unittest as well.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[data augmentation] Support for random flip
Parichay Kapoor [Wed, 20 Jan 2021 08:03:40 +0000 (17:03 +0900)]
[data augmentation] Support for random flip

Add support for random flip data augmentation along with its unittests;
a sketch of the core operation follows below.
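
A hedged sketch of the core of a horizontal flip (CHW layout and names
are assumptions; the layer's actual implementation may differ):

```
#include <random>
#include <utility>

// Hedged sketch: mirror a CHW image along the width axis with
// probability p; otherwise leave it untouched.
void randomFlipW(float *img, int C, int H, int W, float p,
                 std::mt19937 &rng) {
  if (!std::bernoulli_distribution(p)(rng))
    return;
  for (int c = 0; c < C; ++c)
    for (int h = 0; h < H; ++h)
      for (int w = 0; w < W / 2; ++w)
        std::swap(img[(c * H + h) * W + w],
                  img[(c * H + h) * W + (W - 1 - w)]);
}
```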

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[dynamic-training] Add dynamic training using derivatives
Parichay Kapoor [Tue, 5 Jan 2021 15:16:03 +0000 (00:16 +0900)]
[dynamic-training] Add dynamic training using derivatives

Added dynamic training using derivatives, where the decision to
apply the gradient is made using the received derivative,
without calculating the gradient itself.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[dynamic training] Adding dynamic-training code
Parichay Kapoor [Mon, 4 Jan 2021 12:02:36 +0000 (21:02 +0900)]
[dynamic training] Adding dynamic-training code

Added dynamic-training code with both max and l2norm modes.
Verified working with existing examples, given the threshold.

TODO: support dynamic training with derivative

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[Fix/TFlite] Fix tflite allocation
Jihoon Lee [Mon, 18 Jan 2021 06:58:39 +0000 (15:58 +0900)]
[Fix/TFlite] Fix tflite allocation

Now, memory allocation is handled outside of each layer.
Accordingly, allocating the out tensor shouldn't be done inside a layer.

For the same reason, the loss layer backwarding needs some fixes; for now
it is just commented out and will be handled soon.

This patch handles the issue for the tflite layer.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[var_grad] Remove redundant argument for initializeWeight
Parichay Kapoor [Fri, 22 Jan 2021 09:43:34 +0000 (18:43 +0900)]
[var_grad] Remove redundant argument for initializeWeight

Remove the redundant argument for initializeWeight - gtrain -
as weight initialization is independent of whether the weight is
going to be used in training or not.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[weight] Decouple init of weight and gradients
Parichay Kapoor [Wed, 20 Jan 2021 03:17:41 +0000 (12:17 +0900)]
[weight] Decouple init of weight and gradients

Decouple initialization of weight variables and their corresponding gradients.
Weights are always initialized and used later for inference/training,
but gradients are initialized only for training, and with different
configurations based on the chosen optimization strategies.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[pooling] Do not allocate memory in initialize
Parichay Kapoor [Mon, 4 Jan 2021 10:13:59 +0000 (19:13 +0900)]
[pooling] Do not allocate memory in initialize

Setting the batch size in initialize for the pooling layer allocates memory.
However, the final batch size is allowed to change for inference/training;
this unnecessarily changes the peak memory requirement.
For now, this memory is allocated with forwarding.
Later this will be handled as a tensor by the manager, once the int data type
is supported.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[manager] Do not allocate adam for inference
Parichay Kapoor [Mon, 4 Jan 2021 10:12:42 +0000 (19:12 +0900)]
[manager] Do not allocate adam for inference

Do not allocate adam and gradient memory for weights
when the model is being executed for inference.

V2:
Separate memory allocation for weights and gradients.
Gradient memory allocation is decided based on training/inference.
However, weight memory is always to be allocated and must be loaded
before readModel(), so it needs to be separated.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[README] Add coverity badge
Gichan Jang [Fri, 22 Jan 2021 08:08:58 +0000 (17:08 +0900)]
[README] Add coverity badge

Add nntrainer coverity badge to README.

Signed-off-by: Gichan Jang <gichan2.jang@samsung.com>
3 years ago[optimization] Bug fix for in-place layer optimization
Parichay Kapoor [Fri, 22 Jan 2021 03:41:14 +0000 (12:41 +0900)]
[optimization] Bug fix for in-place layer optimization

In-place layer optimization is performed for multiple layers - activation and
batch normalization layers - and this list will increase with data augmentation
etc. However, in-place layers cannot work correctly consecutively if these
layers are trainable. They can work perfectly if they don't need to pass the
derivative back.

For now, this patch limits in-place optimization to two consecutive layers.
This will be made generic later, based on the trainable and inPlace properties
of the layer.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[inference] Add input validation for inference
Parichay Kapoor [Tue, 19 Jan 2021 14:15:20 +0000 (23:15 +0900)]
[inference] Add input validation for inference

Add input validation for inference of the neural network

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[Tensor] Add outplace method for arithmetic ops
Jihoon Lee [Sat, 9 Jan 2021 07:04:21 +0000 (16:04 +0900)]
[Tensor] Add outplace method for arithmetic ops

Add outplace ops with already allocated tensor.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[meson] Update meson for ubuntu 20.04
Parichay Kapoor [Tue, 19 Jan 2021 12:58:47 +0000 (21:58 +0900)]
[meson] Update meson for ubuntu 20.04

Update meson to work with ubuntu 20.04
Also add some missing checks

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>