platform/core/ml/nntrainer.git
3 years ago[Docs] Fix typos in README.md accepted/tizen/unified/20210226.131826 submit/tizen/20210226.034152
Juyeong Lee [Thu, 25 Feb 2021 17:41:27 +0000 (02:41 +0900)]
[Docs] Fix typos in README.md

This is a trivial commit that fixes typos in README.md

Signed-off-by: Juyeong Lee <2jy22@naver.com>
3 years ago[Fix/Svace] Add try catch to the top level app
Jihoon Lee [Thu, 25 Feb 2021 04:46:30 +0000 (13:46 +0900)]
[Fix/Svace] Add try catch to the top level app

Add try catch to the top level app to ensure exceptions do not
propagate to the very top level

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Build] change dependency to nnstreamer api accepted/tizen/unified/20210218.080536 submit/tizen/20210217.032056
Jaeyun [Tue, 16 Feb 2021 04:58:14 +0000 (13:58 +0900)]
[Build] change dependency to nnstreamer api

Now that the api repo is newly added, change the dependency to ml-inference-api.

Signed-off-by: Jaeyun <jy1210.jung@samsung.com>
3 years ago[Build] Decouple gtest from nntrainer_test_util
Jihoon Lee [Thu, 28 Jan 2021 04:56:18 +0000 (13:56 +0900)]
[Build] Decouple gtest from nntrainer_test_util

As nntrainer_test_util had gtest, it was preventing the compiler from
checking some trivial bugs (like sign comparison) and had some weird bugs.

This patch decouples gtest from nntrainer_test_util while changing gtest
to static build.

Resolves #910

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Util] Add gtest style throw macro
Jihoon Lee [Wed, 27 Jan 2021 05:32:33 +0000 (14:32 +0900)]
[Util] Add gtest style throw macro

Add a gtest style throw macro for productivity.
With this patch, if you have to throw, you can do it in one line:

`NNTR_THROW_IF(true, std::invalid_argument) << log << what << ever`

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Chore] s/since/Since/
Jihoon Lee [Tue, 16 Feb 2021 03:06:08 +0000 (12:06 +0900)]
[Chore] s/since/Since/

Per the Tizen convention, `since` should be `Since`; this patch resolves
the issue.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Fix] Update lazy alloc ctor to lazy alloc
Jihoon Lee [Mon, 15 Feb 2021 07:28:28 +0000 (16:28 +0900)]
[Fix] Update lazy alloc ctor to lazy alloc

This patch fixes the lazy alloc ctor to actually allocate lazily; it was
delegated to the eager ctor, causing the tensor to be allocated twice for
lazily allocated tensors.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[custom-app] svace: added missing try-catch accepted/tizen/unified/20210210.052156 submit/tizen/20210209.084149
Parichay Kapoor [Tue, 9 Feb 2021 04:02:38 +0000 (13:02 +0900)]
[custom-app] svace: added missing try-catch

Added missing try-catch reported by svace for the custom tizen application

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[manager] svace issue fix
Parichay Kapoor [Tue, 9 Feb 2021 03:11:59 +0000 (12:11 +0900)]
[manager] svace issue fix

Uninitialized class member issue fixed for manager.cpp - MMapedMemory.
Added class member initialization.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[conv] svace integer overflow fix
Parichay Kapoor [Tue, 9 Feb 2021 02:58:51 +0000 (11:58 +0900)]
[conv] svace integer overflow fix

Added an integer overflow fix reported by svace for convolution

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years agoFix svace issue GraphWatcher UNINIT.CTOR
hyeonseok lee [Tue, 9 Feb 2021 06:29:20 +0000 (15:29 +0900)]
Fix svace issue GraphWatcher UNINIT.CTOR

In the constructor, the member variable expected_loss is not initialized.
expected_loss will be read later in the readIteration function, so it is
initialized with 0.0.

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
3 years agoFix svace issue manager UNREACHABLE_CODE
hyeonseok lee [Tue, 9 Feb 2021 06:17:26 +0000 (15:17 +0900)]
Fix svace issue manager UNREACHABLE_CODE

The fd_ variable is only changed in the ANDROID environment,
so it is enclosed in an ifdef.

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
3 years agoFix svace issue createDataset NO_CATCH
hyeonseok lee [Tue, 9 Feb 2021 03:41:39 +0000 (12:41 +0900)]
Fix svace issue createDataset NO_CATCH

Add try catch statement to handle exception in main.cpp

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
3 years agoFix svace issue readModel NO_CATCH
hyeonseok lee [Tue, 9 Feb 2021 03:32:01 +0000 (12:32 +0900)]
Fix svace issue readModel NO_CATCH

Add try catch statement to handle exception in main.cpp and main_func.cpp

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
3 years ago[Weight] Cleanup train argument for initialize gradient
Parichay Kapoor [Tue, 26 Jan 2021 09:57:48 +0000 (18:57 +0900)]
[Weight] Cleanup train argument for initialize gradient

Cleanup the train argument for initializing the gradient.
This argument was needed when weights and gradients were initialized together,
but now the caller must call initializeGradient only when it is trainable.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[ DOC ] add badge for android arm
jijoong.moon [Fri, 5 Feb 2021 08:06:18 +0000 (17:06 +0900)]
[ DOC ] add badge for android arm

Add badge for android arm

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
3 years ago[ Document ] Add daily build status
jijoong.moon [Fri, 5 Feb 2021 01:34:19 +0000 (10:34 +0900)]
[ Document ] Add daily build status

Add daily build status in readme

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
3 years ago[var_grad] Improve nomenclature
Parichay Kapoor [Mon, 1 Feb 2021 07:59:49 +0000 (16:59 +0900)]
[var_grad] Improve nomenclature

There was confusion about the names of the functions in var_grad
that initialize variables and gradients, so they are renamed:
initializeWeights->initializeVariables
initializeGrad->initializeGradients

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[tensor] Split tensor constructor into 2
Parichay Kapoor [Mon, 1 Feb 2021 07:26:59 +0000 (16:26 +0900)]
[tensor] Split tensor constructor into 2

Create a new tensor constructor for lazy allocation
rather than adding the lazy allocation feature to the existing one.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[bug fixes] Update shared mem and in-place opt
Parichay Kapoor [Fri, 29 Jan 2021 07:11:06 +0000 (16:11 +0900)]
[bug fixes] Update shared mem and in-place opt

Update shared memory and in-place opt to share memory with previous
layers with different strategies based on the type of optimization the
layer is doing for reducing its memory usage.

Updated weight and var_grad to create weight/var_grad with lazy
memory allocation.
Updated SrcSharedTensor to remove the check that the source is allocated,
as it cannot be guaranteed.
Bug fix for checking whether a tensor is allocated.

Update nntrainer extension for nnstreamer to use a different
ini file for inference which does not require changing batch size.
Added minor bug fixes to the patch

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[Manager] Support lazy memory allocation with manager
Parichay Kapoor [Tue, 26 Jan 2021 11:44:19 +0000 (20:44 +0900)]
[Manager] Support lazy memory allocation with manager

Support lazy memory allocation with manager

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[graph] In-place bug fix
Parichay Kapoor [Mon, 25 Jan 2021 06:56:27 +0000 (15:56 +0900)]
[graph] In-place bug fix

This patch applies a bug fix for in-place layer optimization.
As in-place layers and other layers manage derivative memory
differently, in-place layers cannot directly reuse the tensors
of their neighboring layers, but rather have to use them differently.

See also #878

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[tensor] Support late memory allocation for tensors
Parichay Kapoor [Mon, 25 Jan 2021 06:31:33 +0000 (15:31 +0900)]
[tensor] Support late memory allocation for tensors

Support late memory allocation for tensors.
This helps create tensor wrapper elements and allocate memory
later when needed.
This allows finalizing the graph, creating all the tensors and graph
connections for all the layers, which can be stored in an offline setup
and then loaded directly from a file.

This decoupling allows reusing tensors with every call to train,
where we can allocate and deallocate memory without having to create
and do tensor management again and again.

In short term, this is necessary for in-place layer optimization.
See Also #879

The implementation requires caching the source tensor when creating
shared tensors as shared tensors cannot be created unless source
tensor memory has been allocated.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[ ResNet ] Add Resnet18 Application
jijoong.moon [Wed, 27 Jan 2021 08:04:58 +0000 (17:04 +0900)]
[ ResNet ] Add Resnet18 Application

This PR includes ini configuration file for resnet18

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
3 years ago[ DEBIAN ] Fix gcc version for debian build
jijoong.moon [Fri, 5 Feb 2021 00:57:37 +0000 (09:57 +0900)]
[ DEBIAN ] Fix gcc version for debian build

Set a higher gcc version to build the Debian package

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
3 years ago[ Application ] Bug fix of Android target Lib
jijoong.moon [Thu, 4 Feb 2021 11:19:19 +0000 (20:19 +0900)]
[ Application ] Bug fix of Android target Lib

Fix the path of tensorflow-lite lib

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
3 years ago[ Application ] Fix to use Tensorflow Lite 2.3.0
jijoong.moon [Thu, 4 Feb 2021 07:48:11 +0000 (16:48 +0900)]
[ Application ] Fix to use Tensorflow Lite 2.3.0

Fix Android.mk to use Tensorflow Lite 2.3.0

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
3 years ago[app_context] Resolve build issue with gcc5
Parichay Kapoor [Thu, 4 Feb 2021 05:26:10 +0000 (14:26 +0900)]
[app_context] Resolve build issue with gcc5

This solves the build issue for gcc5 when using std::call_once.
The original code called bind with a copy of the argument to the function
and failed, as a reference is expected with gcc5.

This patch adds a std::ref explicitly to make this work with gcc5

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[ MESON ] fix capi-nnstreamer dependency
jijoong.moon [Thu, 4 Feb 2021 04:53:05 +0000 (13:53 +0900)]
[ MESON ] fix capi-nnstreamer dependency

remove capi-nnstreamer dependency

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
3 years ago[ ANDROID ] change the tflite version
jijoong.moon [Thu, 4 Feb 2021 01:59:26 +0000 (10:59 +0900)]
[ ANDROID ] change the tflite version

Change the Tensorflow-Lite version from 1.13.1 to 2.3.0

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
3 years ago[ Doc ] add how to run android application accepted/tizen/unified/20210204.134412 submit/tizen/20210204.034810
jijoong.moon [Tue, 2 Feb 2021 12:48:51 +0000 (21:48 +0900)]
[ Doc ] add how to run android application

Add documentation on how to build and run on Android

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
3 years ago[adam] Reduce temporary memory allocation
Parichay Kapoor [Fri, 29 Jan 2021 11:28:12 +0000 (20:28 +0900)]
[adam] Reduce temporary memory allocation

Reduce temporary memory allocation for adam.
This is done by reusing gradient memory to calculate the final
update which is to be applied to the weight.

This not only reduces temporary memory allocations, which were being made
every epoch for each weight, but also reduces peak memory usage.

Resolves #917

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[blas] Add missing return
Parichay Kapoor [Fri, 29 Jan 2021 09:37:34 +0000 (18:37 +0900)]
[blas] Add missing return

Add a bug fix for the raw blas implementation for a missing return.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[Fix] Resolve static init order fiasco
Jihoon Lee [Wed, 27 Jan 2021 01:15:12 +0000 (10:15 +0900)]
[Fix] Resolve static init order fiasco

As static initialization order across translation units is undefined,
initializing the global caused some undefined behavior.

Initializing global app context is delayed until it is first called.

See https://gcc.gnu.org/legacy-ml/gcc-patches/2017-03/msg00863.html

Resolves #893

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Tensor] Handle copy edge case
Jihoon Lee [Tue, 26 Jan 2021 10:16:38 +0000 (19:16 +0900)]
[Tensor] Handle copy edge case

When copying an uninitialized tensor to another uninitialized tensor,
tensor::copy tried to reshape the uninitialized dimension.

This patch fixes the issue.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[CAPI/acr] Update tizen capi
Jihoon Lee [Tue, 26 Jan 2021 03:09:45 +0000 (12:09 +0900)]
[CAPI/acr] Update tizen capi

**Changes proposed in this PR:**
- Setting void *user_data
- Update layer type enum

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Tensor] Rearrange Methods
Jihoon Lee [Tue, 26 Jan 2021 04:20:33 +0000 (13:20 +0900)]
[Tensor] Rearrange Methods

This patch groups tensor methods by arithmetic operation (tensor.h is
reflected with rebase).

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Tensor] rearrange methods
Jihoon Lee [Thu, 21 Jan 2021 09:48:24 +0000 (18:48 +0900)]
[Tensor] rearrange methods

- Add missing out param methods
- Change way it is delegated for some methods
- Rename s/operator_/apply_broadcast
- Remove `operator_i` and `operator_i_util`
- Assure dimension checks

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Tensor/Clean] Relocate tensor methods
Jihoon Lee [Thu, 21 Jan 2021 04:40:13 +0000 (13:40 +0900)]
[Tensor/Clean] Relocate tensor methods

This patch relocates arithmetic methods while adding some missing
outplace operation signatures.

Order is

1. inplace -> outplace -> outplace (with allocated memory)
2. multiply -> divide -> add -> subtract
3. scalar -> tensor

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[tensor] Update interface for tensor::map
Parichay Kapoor [Wed, 27 Jan 2021 02:25:58 +0000 (11:25 +0900)]
[tensor] Update interface for tensor::map

Update interface for tensor::map to include the size of the original
buffer to ensure that the buffer contains enough memory required
by the tensor shape wrapping around the memory.
Added another negative unittest for it.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[manager] bug fix for Tensor::map
Parichay Kapoor [Tue, 26 Jan 2021 07:53:22 +0000 (16:53 +0900)]
[manager] bug fix for Tensor::map

This patch exposes a bug in Tensor::Map
where the offset is not checked when assigning the data.

This was caused by a bug in the manager for the batch normalization layer.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[manager] Disable user_shared_memory
Parichay Kapoor [Tue, 26 Jan 2021 07:50:02 +0000 (16:50 +0900)]
[manager] Disable user_shared_memory

Disable user_shared_memory, as NNAPI is not required to be supported.
It is further needed to decouple tensor structure allocation and its
internal memory allocation.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[Docs] Generate nntrainer docs
Gichan Jang [Wed, 27 Jan 2021 06:39:22 +0000 (15:39 +0900)]
[Docs] Generate nntrainer docs

Generate nntrainer docs.
See: https://nntrainer.github.io/

 - Sub-documents such as Application/* need to be added
 - The table and others need to be modified according to the hotdoc rule.

Signed-off-by: Gichan Jang <gichan2.jang@samsung.com>
3 years ago[spec] add backward compatibility under tizen 6
Jihoon Lee [Wed, 20 Jan 2021 08:00:51 +0000 (17:00 +0900)]
[spec] add backward compatibility under tizen 6

Since `ml-error-common` redeclares the error enum inside nnstreamer in tizen
5.5, this patch works around the issue by making a fake ml-api-error.h.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[meson] Clean up ml-api-common dependency
Jihoon Lee [Wed, 20 Jan 2021 07:07:11 +0000 (16:07 +0900)]
[meson] Clean up ml-api-common dependency

This patch cleans up the ml-api-common dependency.
If it seems to be stable across multiple platforms, we can remove
`api/capi/include/platform/ml-api-common.h`; let's keep it for now.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Fix] Reflect changes to upstream/main
Jihoon Lee [Tue, 26 Jan 2021 04:04:36 +0000 (13:04 +0900)]
[Fix] Reflect changes to upstream/main

From merging some big PRs, there happened to be some inconsistency which
caused a build break. This patch solves the issue.

**Changes proposed in this PR:**
- Use manager.initializeTensor() in the unittest
- Add training signature to forwarding

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years agoLicense Fix / Relicense to Apache-2.0
MyungJoo Ham [Mon, 25 Jan 2021 08:42:45 +0000 (17:42 +0900)]
License Fix / Relicense to Apache-2.0

1. Do not use "Apache-2.0-only". It's "Apache-2.0".
2. Relicense files to Apache-2.0. (The author permits; I'm the author.)

Signed-off-by: MyungJoo Ham <myungjoo.ham@samsung.com>
3 years ago[data augmentation] support for random translate
Parichay Kapoor [Wed, 20 Jan 2021 09:05:32 +0000 (18:05 +0900)]
[data augmentation] support for random translate

Added support for random translate, which is fractional and does mirroring.
This is implemented with OpenCV, but the build is allowed without OpenCV.
The model can be built, but using this layer without OpenCV will throw.

Added corresponding unittest as well.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[data augmentation] Support for random flip
Parichay Kapoor [Wed, 20 Jan 2021 08:03:40 +0000 (17:03 +0900)]
[data augmentation] Support for random flip

Add support for random flip data augmentation along with its unittests

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[dynamic-training] Add dynamic training using derivatives
Parichay Kapoor [Tue, 5 Jan 2021 15:16:03 +0000 (00:16 +0900)]
[dynamic-training] Add dynamic training using derivatives

Added dynamic training using derivatives, where the decision to
apply the gradient is made using the derivative received,
without calculating the gradient itself.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[dynamic training] Adding dynamic-training code
Parichay Kapoor [Mon, 4 Jan 2021 12:02:36 +0000 (21:02 +0900)]
[dynamic training] Adding dynamic-training code

Added dynamic-training code with both max and l2norm modes.
Verified working with existing examples given the threshold.

TODO: support dynamic training with derivative

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[Fix/TFlite] Fix tflite allocation
Jihoon Lee [Mon, 18 Jan 2021 06:58:39 +0000 (15:58 +0900)]
[Fix/TFlite] Fix tflite allocation

Now, memory allocation is handled outside of each layer.
Accordingly, allocating the out tensor shouldn't be done inside a layer.

For the same reason, loss layer backwarding needs some fixes; for now
it is just commented out and will be handled soon.

This patch handles the issue for tflite layer

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[var_grad] Remove redundant argument for initializeWeight
Parichay Kapoor [Fri, 22 Jan 2021 09:43:34 +0000 (18:43 +0900)]
[var_grad] Remove redundant argument for initializeWeight

Remove the redundant argument gtrain for initializeWeight,
as weight initialization is independent of whether the weight is
going to be used in training or not.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[weight] Decouple init of weight and gradients
Parichay Kapoor [Wed, 20 Jan 2021 03:17:41 +0000 (12:17 +0900)]
[weight] Decouple init of weight and gradients

Decouple initialization of weight variables and their corresponding gradients.
Weights are always initialized and used later with inference/train,
but gradients are initialized only with training and with different
configurations based on the chosen optimization strategies.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[pooling] Do not allocate memory in initialize
Parichay Kapoor [Mon, 4 Jan 2021 10:13:59 +0000 (19:13 +0900)]
[pooling] Do not allocate memory in initialize

Setting the batch size in initialize for the pooling layer allocates memory.
However, the final batch size is allowed to change in inference/training.
This unnecessarily changes the peak memory requirement.
For now, this memory is allocated with forwarding.
Later this will be handled as a tensor with the manager once the int data type is supported.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[manager] Do not allocate adam for inference
Parichay Kapoor [Mon, 4 Jan 2021 10:12:42 +0000 (19:12 +0900)]
[manager] Do not allocate adam for inference

Do not allocate adam and gradient memory for weights
when the model is being executed for inference.

V2:
Separate memory allocation for weights and gradients.
Gradient memory allocation is decided based on training/inference.
However, weight memory is always to be allocated and must be loaded
before readModel(), so it needs to be separated.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[README] Add coverity badge
Gichan Jang [Fri, 22 Jan 2021 08:08:58 +0000 (17:08 +0900)]
[README] Add coverity badge

Add nntrainer coverity badge to README.

Signed-off-by: Gichan Jang <gichan2.jang@samsung.com>
3 years ago[optimization] Bug fix for in-place layer optimization
Parichay Kapoor [Fri, 22 Jan 2021 03:41:14 +0000 (12:41 +0900)]
[optimization] Bug fix for in-place layer optimization

In-place layer optimization is performed for multiple layers - activation and batch normalization layers -
and this list will increase with data augmentation etc.
However, the in-place layers cannot work correctly consecutively if these layers are trainable.
They can work perfectly if they don't need to pass the derivative back.

For now, this patch limits two consecutive layers to be in-place.
This will be made generic later, dependent on the trainable and inPlace properties of the layer.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[inference] Add input validation for inference
Parichay Kapoor [Tue, 19 Jan 2021 14:15:20 +0000 (23:15 +0900)]
[inference] Add input validation for inference

Add input validation for inference of the neural network

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[Tensor] Add outplace method for arithmetic ops
Jihoon Lee [Sat, 9 Jan 2021 07:04:21 +0000 (16:04 +0900)]
[Tensor] Add outplace method for arithmetic ops

Add outplace ops with already allocated tensor.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[meson] Update meson for ubuntu 20.04
Parichay Kapoor [Tue, 19 Jan 2021 12:58:47 +0000 (21:58 +0900)]
[meson] Update meson for ubuntu 20.04

Update meson to work with ubuntu 20.04
Also add some missing checks

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[docs] Add missing dependencies
Parichay Kapoor [Tue, 19 Jan 2021 13:01:32 +0000 (22:01 +0900)]
[docs] Add missing dependencies

Add missing dependencies required to build nntrainer with meson

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[Layer] Add eval mode for the training accepted/tizen/unified/20210122.084701 submit/tizen/20210122.000930
Jihoon Lee [Thu, 7 Jan 2021 06:50:33 +0000 (15:50 +0900)]
[Layer] Add eval mode for the training

**Changes proposed in this PR:**
- This patch adds eval mode for the training forward and
fixes the batch normalization layer accordingly

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[ Fix ] Fix Logistic Regression Example Error
jijoong.moon [Thu, 14 Jan 2021 03:58:31 +0000 (12:58 +0900)]
[ Fix ] Fix Logistic Regression Example Error

This PR includes fixes to the logistic regression application.

Change the forwarding function.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
3 years agoEnable trainable property to layer
hyeonseok lee [Tue, 5 Jan 2021 12:03:39 +0000 (21:03 +0900)]
Enable trainable property to layer

Set the trainable value to false in the constructors of the activation layer and flatten_layer

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
3 years ago[Tools] Fix bug that translayer cannot detect bn
Jihoon Lee [Fri, 8 Jan 2021 03:02:47 +0000 (12:02 +0900)]
[Tools] Fix bug that translayer cannot detect bn

Batch normalization in tf 2.3 was not detected in transLayer, so
a new type was added to detect the batch normalization layer.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[transfer learning] Enable test on ubuntu
Parichay Kapoor [Wed, 30 Dec 2020 13:02:53 +0000 (22:02 +0900)]
[transfer learning] Enable test on ubuntu

Enable testing of the trained model on ubuntu
Added check to ensure that nnstreamer is enabled

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[manager] Optimize input/output memory for inference
Parichay Kapoor [Tue, 29 Dec 2020 12:11:16 +0000 (21:11 +0900)]
[manager] Optimize input/output memory for inference

Optimize input/output memory for inference by using a shared buffer,
where max([sum(input_l, output_l) for l in all layers]) memory
is allocated for inference.

A baseline working unittest is added with the models unittest, which ensures
that inference works with and without optimizations without any
failures. Value verification tests are done by the nnstreamer subplugin of
nntrainer.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years agoSupport sum value in profiler
hyeonseok lee [Tue, 5 Jan 2021 11:55:11 +0000 (20:55 +0900)]
Support sum value in profiler

Now the profiler will show the avg, min, max, and sum values.

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
3 years ago[Test] Disable deriv verification when opt is on
Jihoon Lee [Tue, 29 Dec 2020 11:51:13 +0000 (20:51 +0900)]
[Test] Disable deriv verification when opt is on

This patch disables derivative verification and only checks the whole
returned derivatives.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Conv2d] Optimize layer loop
Jihoon Lee [Mon, 28 Dec 2020 07:48:16 +0000 (16:48 +0900)]
[Conv2d] Optimize layer loop

This optimizes the layer loops by:

- Minimize padding calculation
- Maximize cache hit by transposing the matrix
- Maximize cache hit by reordering loop order
- ~Use single offset to minimize offset calculation~
- ~Add shortcut when kernel size is 1~

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Conv2d] Reuse im2col array by batch
Jihoon Lee [Mon, 28 Dec 2020 06:44:20 +0000 (15:44 +0900)]
[Conv2d] Reuse im2col array by batch

This patch enables reusing the im2col array by batch, while saving
the initialization time of setting it to zero.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Conv2d] Change conv2d gemm to dot
Jihoon Lee [Mon, 28 Dec 2020 06:08:01 +0000 (15:08 +0900)]
[Conv2d] Change conv2d gemm to dot

- Change conv2d gemm to dot to enable the optimization path inside the dot
operation
- Add a beta option to the dot operation (C = alpha*A*B + beta*C)

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[bugfix] Fix model path and dataset path in model_loader.cpp
hyeonseok lee [Tue, 29 Dec 2020 10:35:42 +0000 (19:35 +0900)]
[bugfix] Fix model path and dataset path in model_loader.cpp

Fix the model path and dataset path to include the working directory path

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
3 years ago[dist/tizen] Enable base unittests for tizen build
Parichay Kapoor [Tue, 29 Dec 2020 06:18:55 +0000 (15:18 +0900)]
[dist/tizen] Enable base unittests for tizen build

Enable nntrainer unittests for the tizen build.
Not sure why or when this got commented out,
but let's enable it.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[model] Optimize model input/output
Parichay Kapoor [Thu, 24 Dec 2020 06:09:54 +0000 (15:09 +0900)]
[model] Optimize model input/output

Optimize the model's extra input/output memory allocation, which counts towards peak memory allocation.
Memory is allocated for the input of the input layer and the output/gradient of the output layer.
However, that memory is never used, as train_run() allocates a new buffer and passes it to the
input layer/loss layer.
This patch takes the already allocated memory from the input/loss layer to be used to collect input/label data.

This patch also removes the extra parameters from forwarding/backwarding and the corresponding
with_val functions. Further, the two types of forwarding in the loss layer have been merged into a single function.
Now, the loss layer and input layer do not need to be distinguished and can be treated as regular layers.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[Conv] Optimize im2col
Jihoon Lee [Thu, 24 Dec 2020 06:49:44 +0000 (15:49 +0900)]
[Conv] Optimize im2col

This patch optimizes im2col by...

- Add padding as an argument instead of passing the pad value
- Skip creating a padded tensor and the assignment for padded indices
- Refactor variable names for clarity

See also #824

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Tensor] Optimize accessor
Jihoon Lee [Thu, 24 Dec 2020 06:45:19 +0000 (15:45 +0900)]
[Tensor] Optimize accessor

This patch...
- inlines some accessors with the noexcept specifier for a speed boost
- adds getValuePadded to reduce the memory copies needed to make a padded tensor

see also #825

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Cc: Parichay Kapoor <pk.kappor@samsung.com>
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Fix] Assign default value for max_deriv size
Jihoon Lee [Tue, 29 Dec 2020 01:31:19 +0000 (10:31 +0900)]
[Fix] Assign default value for max_deriv size

This patch initializes max_derivative_size to avoid unexpected termination

resolves #834

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[model/test] Duplicate models test for optimization
Parichay Kapoor [Wed, 23 Dec 2020 09:55:05 +0000 (18:55 +0900)]
[model/test] Duplicate models test for optimization

Run models test twice, once with all the optimizations enabled
and then once with all the optimizations disabled.

This ensures that both the modes work properly.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[activation] Making activation in-place
Parichay Kapoor [Tue, 22 Dec 2020 01:44:14 +0000 (10:44 +0900)]
[activation] Making activation in-place

Changed the activation layer to be in-place.
Each layer now allocates memory for its output rather than for its input.

For the activation layer, if its memory is optimized, then the memory
for the layer behind the activation layer is not allocated.
And the memory for the derivative of the activation layer is shared
among all such layers.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[layer] Use gradient instead of variable for derivative
Parichay Kapoor [Fri, 18 Dec 2020 04:57:40 +0000 (13:57 +0900)]
[layer] Use gradient instead of variable for derivative

Use the gradient instead of the variable for the derivative.
The manager internally sets the gradient memory to be the same as the variable for the optimization,
but hides this kind of optimization from the layer.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[manager] Manager tracks input/output memory
Parichay Kapoor [Fri, 18 Dec 2020 04:04:48 +0000 (13:04 +0900)]
[manager] Manager tracks input/output memory

The manager tracks input/output memory and allocates it
based on whether the execution is training or inference.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[inputlayer] Input layer must be non-trainable
Parichay Kapoor [Fri, 18 Dec 2020 03:22:55 +0000 (12:22 +0900)]
[inputlayer] Input layer must be non-trainable

Input layer must always be non-trainable as it does not support the backwarding operation

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[layer] Move layer input/output management to manager
Parichay Kapoor [Thu, 17 Dec 2020 07:29:28 +0000 (16:29 +0900)]
[layer] Move layer input/output management to manager

Move layer inputs/outputs memory management to the manager.
This is accomplished by replacing the use of NetBuffers with Var_Grad.

Now, all the memory for weights, gradients, inputs, outputs, and derivatives
is managed by the manager, which allows more optimizations to be done with
inputs/outputs.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[Profiler] Change profiler specs
Jihoon Lee [Fri, 18 Dec 2020 05:23:07 +0000 (14:23 +0900)]
[Profiler] Change profiler specs

- The profiler time unit is changed: milliseconds -> microseconds
- The report is now ordered by key

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Profiler] Apply ops level profiler
Jihoon Lee [Fri, 18 Dec 2020 03:16:19 +0000 (12:16 +0900)]
[Profiler] Apply ops level profiler

This patch attaches ops level profiler

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Profiler] Add event registerator
Jihoon Lee [Fri, 18 Dec 2020 01:52:14 +0000 (10:52 +0900)]
[Profiler] Add event registerator

As of this patch, the profiler can dynamically register events and send
them to the ProfileListener, with a few bugs fixed along the way.

resolves #814

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Manager] Add MMaped memory
Jihoon Lee [Thu, 17 Dec 2020 12:33:00 +0000 (12:33 +0000)]
[Manager] Add MMaped memory

There was a requirement to separate the weight memory region and the grad
memory region.
To easily separate the two, this patch introduces a new abstraction,
`MMapedMemory`, while separating the weight and grad mmaps.

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Manager/Fix] Disallow copy ctor of manager
Jihoon Lee [Wed, 16 Dec 2020 04:44:41 +0000 (13:44 +0900)]
[Manager/Fix] Disallow copy ctor of manager

Since the manager holds memory, it shouldn't be copied, as ownership
would become unclear. This patch deletes the copy ctor / assignment ops,
while changing the signatures of members and functions that use the manager.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Android] Manage ndk to deal with changes
Jihoon Lee [Wed, 16 Dec 2020 11:06:04 +0000 (11:06 +0000)]
[Android] Manage ndk to deal with changes

1. Upgrade ndk version to 29
2. Add dependent library
3. Fix syntax for Application.mk

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Tensor] Add Tensor Wrap method
Jihoon Lee [Tue, 15 Dec 2020 04:50:49 +0000 (13:50 +0900)]
[Tensor] Add Tensor Wrap method

Add some factory methods to Tensor:
1. borrow external memory and use it
2. create from a shared pointer without a copy

To restrict unwanted use, these are static methods
called `Tensor::Wrap`

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[TensorDim] Add initializer list ctor
Jihoon Lee [Tue, 15 Dec 2020 04:30:45 +0000 (13:30 +0900)]
[TensorDim] Add initializer list ctor

This patch adds a TensorDim initializer-list ctor
so it can easily be passed as a function argument

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[tensor] argmax bugfix
Parichay Kapoor [Wed, 23 Dec 2020 15:22:43 +0000 (00:22 +0900)]
[tensor] argmax bugfix

Apply the memory allocation bugfix to argmax,
where an empty vector was being accessed

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[tensor] Set stride for shared tensor accepted/tizen/unified/20201222.122522 submit/tizen/20201222.073053
Parichay Kapoor [Fri, 18 Dec 2020 05:21:42 +0000 (14:21 +0900)]
[tensor] Set stride for shared tensor

Set stride for shared tensor

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[layer] Support in-place batch normalization
Parichay Kapoor [Tue, 15 Dec 2020 11:10:39 +0000 (20:10 +0900)]
[layer] Support in-place batch normalization

Support in-place batch normalization where the batch normalization
input/output is not stored and is over-written by the next layer.

This patch removes the input/output memory requirement when using
batch normalization layer.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[ ARGMAX ] Fix bug about argmax
jijoong.moon [Fri, 18 Dec 2020 10:23:42 +0000 (19:23 +0900)]
[ ARGMAX ] Fix bug about argmax

Need to fix the argmax calculation in tensor

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
3 years ago[Test] Add macro to check if backbone is enabled
Jihoon Lee [Mon, 14 Dec 2020 13:53:51 +0000 (13:53 +0000)]
[Test] Add macro to check if backbone is enabled

When backbone is not enabled, the related test fails.
This patch adds a define in the test so that the test can pass when backbone
is not enabled.

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[svace] Assure uninitialized members accepted/tizen/unified/20201217.124219 submit/tizen/20201217.045640
Jihoon Lee [Wed, 16 Dec 2020 08:38:51 +0000 (17:38 +0900)]
[svace] Assure uninitialized members

nnstreamer_layer had two uninitialized members.
This patch initializes those two.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[svace] Error handling for applications/test
Jihoon Lee [Wed, 16 Dec 2020 07:41:44 +0000 (16:41 +0900)]
[svace] Error handling for applications/test

1. Fix inconsistent alloc/dealloc(new/free)
2. Add try catch to some statements
3. Fix memory leak from `asprintf`

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>