platform/core/ml/nntrainer.git
3 years ago[graph] In-place bug fix
Parichay Kapoor [Mon, 25 Jan 2021 06:56:27 +0000 (15:56 +0900)]
[graph] In-place bug fix

This patch applies a bug fix for the in-place layer optimization.
As in-place layers manage derivative memory differently from other
layers, in-place layers cannot directly re-use the tensors of their
neighboring layers, but rather have to use those tensors differently.

See also #878

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[tensor] Support late memory allocation for tensors
Parichay Kapoor [Mon, 25 Jan 2021 06:31:33 +0000 (15:31 +0900)]
[tensor] Support late memory allocation for tensors

Support late memory allocation for tensors.
This allows creating the tensor wrapper objects first and allocating
their memory later, when it is needed.
The graph can thus be finalized, with all tensors and graph connections
for all layers created, stored in an offline setup, and then loaded
directly from file.

This decoupling allows reusing tensors with every call to train,
where we can allocate and deallocate memory without having to redo
tensor creation and management again and again.

In the short term, this is necessary for the in-place layer optimization.
See Also #879

The implementation requires caching the source tensor when creating
shared tensors, as a shared tensor cannot be created until the source
tensor's memory has been allocated.
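
A minimal sketch of the decoupling, with hypothetical names rather than
nntrainer's actual API: the wrapper exists first, and its memory can be
allocated and freed repeatedly afterwards:

```cpp
#include <cstddef>
#include <functional>
#include <memory>
#include <numeric>
#include <utility>
#include <vector>

// A tensor wrapper whose memory is attached late, e.g. once per
// training run, and detached again without rebuilding the wrapper.
struct LazyTensor {
  std::vector<unsigned> dim;
  std::unique_ptr<float[]> buf; // empty until allocate()

  explicit LazyTensor(std::vector<unsigned> d) : dim(std::move(d)) {}

  std::size_t len() const {
    return std::accumulate(dim.begin(), dim.end(), std::size_t(1),
                           std::multiplies<std::size_t>());
  }
  void allocate() { buf.reset(new float[len()]()); }
  void deallocate() { buf.reset(); }
  bool isAllocated() const { return static_cast<bool>(buf); }
};
```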

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[ ResNet ] Add Resnet18 Application
jijoong.moon [Wed, 27 Jan 2021 08:04:58 +0000 (17:04 +0900)]
[ ResNet ] Add Resnet18 Application

This PR includes the ini configuration file for ResNet18

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
3 years ago[ DEBIAN ] Fix gcc version for debian build
jijoong.moon [Fri, 5 Feb 2021 00:57:37 +0000 (09:57 +0900)]
[ DEBIAN ] Fix gcc version for debian build

Set a higher gcc version for the Debian build

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
3 years ago[ Application ] Bug fix of Android target Lib
jijoong.moon [Thu, 4 Feb 2021 11:19:19 +0000 (20:19 +0900)]
[ Application ] Bug fix of Android target Lib

Fix the path of tensorflow-lite lib

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
3 years ago[ Application ] Fix to use Tensorflow Lite 2.3.0
jijoong.moon [Thu, 4 Feb 2021 07:48:11 +0000 (16:48 +0900)]
[ Application ] Fix to use Tensorflow Lite 2.3.0

Fix Android.mk to use Tensorflow Lite 2.3.0

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
3 years ago[app_context] Resolve build issue with gcc5
Parichay Kapoor [Thu, 4 Feb 2021 05:26:10 +0000 (14:26 +0900)]
[app_context] Resolve build issue with gcc5

This solves the build issue with gcc5 when using std::call_once.
The original code called bind with a copy of the argument to the
function, which fails with gcc5 as a reference is expected.

This patch adds an explicit std::ref to make this work with gcc5.
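
A minimal sketch of the pattern, with a hypothetical `AppContext` and
registration function standing in for the actual nntrainer code:

```cpp
#include <functional>
#include <mutex>

struct AppContext {};                      // stand-in for the real context type
void registerDefaults(AppContext &ctx) {}  // hypothetical callee taking a reference

static std::once_flag once;

void ensureRegistered(AppContext &ctx) {
  // Binding ctx by value trips gcc5 because the callee expects a
  // reference; std::ref forwards it as a reference instead.
  std::call_once(once, std::bind(registerDefaults, std::ref(ctx)));
}
```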

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[ MESON ] fix capi-nnstreamer dependency
jijoong.moon [Thu, 4 Feb 2021 04:53:05 +0000 (13:53 +0900)]
[ MESON ] fix capi-nnstreamer dependency

remove capi-nnstreamer dependency

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
3 years ago[ ANDROID ] change the tflite version
jijoong.moon [Thu, 4 Feb 2021 01:59:26 +0000 (10:59 +0900)]
[ ANDROID ] change the tflite version

Change the Tensorflow-Lite version from 1.13.1 to 2.3.0

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
3 years ago[ Doc ] add how to run android application accepted/tizen/unified/20210204.134412 submit/tizen/20210204.034810
jijoong.moon [Tue, 2 Feb 2021 12:48:51 +0000 (21:48 +0900)]
[ Doc ] add how to run android application

Add documentation on how to build and run on Android

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
3 years ago[adam] Reduce temporary memory allocation
Parichay Kapoor [Fri, 29 Jan 2021 11:28:12 +0000 (20:28 +0900)]
[adam] Reduce temporary memory allocation

Reduce temporary memory allocation for adam.
This is done by reusing the gradient memory to calculate the final
update which is to be applied to the weight.

This removes temporary memory allocations, which were being made every
epoch for each weight, and also reduces peak memory usage.

Resolves #917
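
A minimal sketch of the idea, with plain arrays standing in for
nntrainer's Tensor class (bias correction omitted for brevity):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Adam step that reuses `grad` as scratch for the final update,
// so no per-weight temporary tensor is allocated each epoch.
void applyAdam(std::vector<float> &weight, std::vector<float> &grad,
               std::vector<float> &m, std::vector<float> &v,
               float lr, float b1, float b2, float eps) {
  for (std::size_t i = 0; i < weight.size(); ++i) {
    m[i] = b1 * m[i] + (1.0f - b1) * grad[i];
    v[i] = b2 * v[i] + (1.0f - b2) * grad[i] * grad[i];
    grad[i] = lr * m[i] / (std::sqrt(v[i]) + eps); // overwrite grad in place
    weight[i] -= grad[i];
  }
}
```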

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[blas] Add missing return
Parichay Kapoor [Fri, 29 Jan 2021 09:37:34 +0000 (18:37 +0900)]
[blas] Add missing return

Fix a missing return in the raw BLAS implementation.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[Fix] Resolve static init order fiasco
Jihoon Lee [Wed, 27 Jan 2021 01:15:12 +0000 (10:15 +0900)]
[Fix] Resolve static init order fiasco

As static initialization order across translation units is undefined,
initializing a global caused some undefined behavior.

Initialization of the global app context is now delayed until it is first used.

See https://gcc.gnu.org/legacy-ml/gcc-patches/2017-03/msg00863.html

Resolves #893
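
A minimal sketch of the construct-on-first-use pattern, with a
hypothetical `AppContext` standing in for the real type:

```cpp
struct AppContext {}; // stand-in for the real context type

// The function-local static is constructed on first call (thread-safe
// since C++11), sidestepping cross-TU static initialization order.
AppContext &getGlobalAppContext() {
  static AppContext ctx;
  return ctx;
}
```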

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Tensor] Handle copy edge case
Jihoon Lee [Tue, 26 Jan 2021 10:16:38 +0000 (19:16 +0900)]
[Tensor] Handle copy edge case

When copying an uninitialized tensor to another uninitialized tensor,
tensor::copy tried to reshape to an uninitialized dimension.

This patch fixes the issue.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[CAPI/acr] Update tizen capi
Jihoon Lee [Tue, 26 Jan 2021 03:09:45 +0000 (12:09 +0900)]
[CAPI/acr] Update tizen capi

**Changes proposed in this PR:**
- Setting void *user_data
- Update layer type enum

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Tensor] Rearrange Methods
Jihoon Lee [Tue, 26 Jan 2021 04:20:33 +0000 (13:20 +0900)]
[Tensor] Rearrange Methods

This patch groups tensor methods by arithmetic operation (tensor.h
changes are reflected with the rebase)

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Tensor] rearrange methods
Jihoon Lee [Thu, 21 Jan 2021 09:48:24 +0000 (18:48 +0900)]
[Tensor] rearrange methods

- Add missing out-param methods
- Change the way delegation is done for some methods
- Rename: s/operator_/apply_broadcast/
- Remove `operator_i` and `operator_i_util`
- Assure dimension checks

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Tensor/Clean] Relocate tensor methods
Jihoon Lee [Thu, 21 Jan 2021 04:40:13 +0000 (13:40 +0900)]
[Tensor/Clean] Relocate tensor methods

This patch relocates arithmetic methods while adding some missing
outplace operation signatures.

Order is

1. inplace -> outplace -> outplace (with allocated memory)
2. multiply -> divide -> add -> subtract
3. scalar -> tensor

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[tensor] Update interface for tensor::map
Parichay Kapoor [Wed, 27 Jan 2021 02:25:58 +0000 (11:25 +0900)]
[tensor] Update interface for tensor::map

Update the interface of tensor::map to include the size of the original
buffer, to ensure that the buffer contains enough memory for the
tensor shape wrapping around it.
Added another negative unittest for it.
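
A minimal sketch of the check this enables, as a hypothetical helper
rather than the actual signature:

```cpp
#include <cstddef>
#include <stdexcept>

// A view of `len` floats starting at `offset` must fit inside a
// backing buffer of `bytes` bytes, or the map request is rejected.
void checkMapFits(std::size_t bytes, std::size_t len, std::size_t offset) {
  if (bytes < (offset + len) * sizeof(float))
    throw std::invalid_argument("buffer too small for requested tensor view");
}
```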

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[manager] bug fix for Tensor::map
Parichay Kapoor [Tue, 26 Jan 2021 07:53:22 +0000 (16:53 +0900)]
[manager] bug fix for Tensor::map

This patch fixes a bug exposed via Tensor::Map,
where the offset is not checked when assigning the data.

The root cause was a bug in the manager for the batch normalization layer.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[manager] Disable user_shared_memory
Parichay Kapoor [Tue, 26 Jan 2021 07:50:02 +0000 (16:50 +0900)]
[manager] Disable user_shared_memory

Disable user_shared_memory, as NNAPI is not required to be supported.
This is further needed to decouple tensor structure allocation from its
internal memory allocation.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[Docs] Generate nntrainer docs
Gichan Jang [Wed, 27 Jan 2021 06:39:22 +0000 (15:39 +0900)]
[Docs] Generate nntrainer docs

Generate nntrainer docs.
See: https://nntrainer.github.io/

 - Sub-documents such as Application/* need to be added
 - Tables and other elements need to be modified according to the hotdoc rules.

Signed-off-by: Gichan Jang <gichan2.jang@samsung.com>
3 years ago[spec] add backward compatibility under tizen 6
Jihoon Lee [Wed, 20 Jan 2021 08:00:51 +0000 (17:00 +0900)]
[spec] add backward compatibility under tizen 6

`ml-error-common` redeclares the error enum inside nnstreamer in Tizen
5.5. This patch works around the issue by making a fake ml-api-error.h.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[meson] Clean up ml-api-common dependency
Jihoon Lee [Wed, 20 Jan 2021 07:07:11 +0000 (16:07 +0900)]
[meson] Clean up ml-api-common dependency

This patch cleans up the ml-api-common dependency.
If it proves to be stable across multiple platforms, we can remove
`api/capi/include/platform/ml-api-common.h`; let's keep it for now.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Fix] Reflect changes to upstream/main
Jihoon Lee [Tue, 26 Jan 2021 04:04:36 +0000 (13:04 +0900)]
[Fix] Reflect changes to upstream/main

Merging some big PRs introduced inconsistencies which caused
a build break. This patch solves the issue.

**Changes proposed in this PR:**
- Use manager.initializeTensor() in the unittest
- Add training signature to forwarding

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years agoLicense Fix / Relicense to Apache-2.0
MyungJoo Ham [Mon, 25 Jan 2021 08:42:45 +0000 (17:42 +0900)]
License Fix / Relicense to Apache-2.0

1. Do not use "Apache-2.0-only". It's "Apache-2.0".
2. Relicense files to Apache-2.0. (The author permits; I'm the author.)

Signed-off-by: MyungJoo Ham <myungjoo.ham@samsung.com>
3 years ago[data augmentation] support for random translate
Parichay Kapoor [Wed, 20 Jan 2021 09:05:32 +0000 (18:05 +0900)]
[data augmentation] support for random translate

Added support for random translate, which is fractional and applies mirroring.
This is implemented with OpenCV, but building without OpenCV is allowed:
the model can still be built, but using this layer without OpenCV will throw.

Added a corresponding unittest as well.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[data augmentation] Support for random flip
Parichay Kapoor [Wed, 20 Jan 2021 08:03:40 +0000 (17:03 +0900)]
[data augmentation] Support for random flip

Add support for random flip data augmentation along with its unittests

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[dynamic-training] Add dynamic training using derivatives
Parichay Kapoor [Tue, 5 Jan 2021 15:16:03 +0000 (00:16 +0900)]
[dynamic-training] Add dynamic training using derivatives

Added dynamic training using derivatives, where the decision to
apply the gradient is made using the received derivative,
without calculating the gradient itself.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[dynamic training] Adding dynamic-training code
Parichay Kapoor [Mon, 4 Jan 2021 12:02:36 +0000 (21:02 +0900)]
[dynamic training] Adding dynamic-training code

Added dynamic-training code with both max and l2norm modes.
Verified working with existing examples, given the threshold.

TODO: support dynamic training with derivative

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[Fix/TFlite] Fix tflite allocation
Jihoon Lee [Mon, 18 Jan 2021 06:58:39 +0000 (15:58 +0900)]
[Fix/TFlite] Fix tflite allocation

Now, memory allocation is handled outside of each layer.
Accordingly, allocating output tensors shouldn't be done inside a layer.

For the same reason, loss layer backwarding needs a fix; for now
it is just commented out and will be handled soon.

This patch handles the issue for tflite layer

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[var_grad] Remove redundant argument for initializeWeight
Parichay Kapoor [Fri, 22 Jan 2021 09:43:34 +0000 (18:43 +0900)]
[var_grad] Remove redundant argument for initializeWeight

Remove the redundant argument gtrain from initializeWeight,
as weight initialization is independent of whether the weight is
going to be used in training or not.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[weight] Decouple init of weight and gradients
Parichay Kapoor [Wed, 20 Jan 2021 03:17:41 +0000 (12:17 +0900)]
[weight] Decouple init of weight and gradients

Decouple initialization of weight variables from their corresponding gradients.
Weights are always initialized and used later for inference/training,
but gradients are initialized only for training, and with different
configurations based on the chosen optimization strategies.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[pooling] Do not allocate memory in initialize
Parichay Kapoor [Mon, 4 Jan 2021 10:13:59 +0000 (19:13 +0900)]
[pooling] Do not allocate memory in initialize

Setting the batch size in initialize for the pooling layer allocates memory.
However, the final batch size is allowed to change for inference/training,
which unnecessarily changes the peak memory requirement.
For now, this memory is allocated during forwarding.
Later it will be handled as a tensor by the manager, once the int data type is supported.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[manager] Do not allocate adam for inference
Parichay Kapoor [Mon, 4 Jan 2021 10:12:42 +0000 (19:12 +0900)]
[manager] Do not allocate adam for inference

Do not allocate adam and gradient memory for weights
when the model is being executed for inference.

V2:
Separate memory allocation for weights and gradients.
Gradient memory allocation is decided based on training/inference.
However, weight memory must always be allocated and must be loaded
before readModel(), so the two need to be separated.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[README] Add coverity badge
Gichan Jang [Fri, 22 Jan 2021 08:08:58 +0000 (17:08 +0900)]
[README] Add coverity badge

Add nntrainer coverity badge to README.

Signed-off-by: Gichan Jang <gichan2.jang@samsung.com>
3 years ago[optimization] Bug fix for in-place layer optimization
Parichay Kapoor [Fri, 22 Jan 2021 03:41:14 +0000 (12:41 +0900)]
[optimization] Bug fix for in-place layer optimization

In-place layer optimization is performed for multiple layers - activation
and batch normalization layers - and this list will grow with data augmentation etc.
However, consecutive in-place layers cannot work correctly if these layers are trainable.
They work perfectly well if they do not need to pass the derivative back.

For now, this patch restricts consecutive layers from both being in-place.
This will be made generic later, dependent on the trainable and inPlace properties of the layer.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[inference] Add input validation for inference
Parichay Kapoor [Tue, 19 Jan 2021 14:15:20 +0000 (23:15 +0900)]
[inference] Add input validation for inference

Add input validation for inference of the neural network

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[Tensor] Add outplace method for arithmetic ops
Jihoon Lee [Sat, 9 Jan 2021 07:04:21 +0000 (16:04 +0900)]
[Tensor] Add outplace method for arithmetic ops

Add outplace ops with already allocated tensor.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[meson] Update meson for ubuntu 20.04
Parichay Kapoor [Tue, 19 Jan 2021 12:58:47 +0000 (21:58 +0900)]
[meson] Update meson for ubuntu 20.04

Update meson to work with Ubuntu 20.04.
Also add some missing checks.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[docs] Add missing dependencies
Parichay Kapoor [Tue, 19 Jan 2021 13:01:32 +0000 (22:01 +0900)]
[docs] Add missing dependencies

Add missing dependencies required to build nntrainer with meson

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[Layer] Add eval mode for the training accepted/tizen/unified/20210122.084701 submit/tizen/20210122.000930
Jihoon Lee [Thu, 7 Jan 2021 06:50:33 +0000 (15:50 +0900)]
[Layer] Add eval mode for the training

**Changes proposed in this PR:**
- This patch adds an eval mode for training-time forwarding and
fixes the batch normalization layer accordingly

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[ Fix ] Fix Logistic Regression Example Error
jijoong.moon [Thu, 14 Jan 2021 03:58:31 +0000 (12:58 +0900)]
[ Fix ] Fix Logistic Regression Example Error

This PR includes fixes to the logistic regression application.

Change the forwarding function.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
3 years agoEnable trainable property to layer
hyeonseok lee [Tue, 5 Jan 2021 12:03:39 +0000 (21:03 +0900)]
Enable trainable property to layer

Set the trainable value to false in the constructors of the activation and flatten layers.

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
3 years ago[Tools] Fix bug that translayer cannot detect bn
Jihoon Lee [Fri, 8 Jan 2021 03:02:47 +0000 (12:02 +0900)]
[Tools] Fix bug that translayer cannot detect bn

Batch normalization in tf 2.3 is not detected in transLayer, so a
new type was added to detect the batch normalization layer.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[transfer learning] Enable test on ubuntu
Parichay Kapoor [Wed, 30 Dec 2020 13:02:53 +0000 (22:02 +0900)]
[transfer learning] Enable test on ubuntu

Enable testing of the trained model on Ubuntu.
Added a check to ensure that nnstreamer is enabled.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[manager] Optimize input/output memory for inference
Parichay Kapoor [Tue, 29 Dec 2020 12:11:16 +0000 (21:11 +0900)]
[manager] Optimize input/output memory for inference

Optimize input/output memory for inference by using a shared buffer
whose size is max over all layers l of (size(input_l) + size(output_l)),
allocated once for inference.

A baseline working unittest is added to the models unittest, ensuring
that inference works both with and without optimizations without any
failures. Value verification testing is done by the nnstreamer subplugin
of nntrainer.
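
A minimal sketch of the sizing rule, as a hypothetical helper over
plain sizes rather than the manager's actual code:

```cpp
#include <algorithm>
#include <cstddef>
#include <utility>
#include <vector>

// The single shared buffer must hold any one layer's input + output
// at the same time, hence the max over the per-layer sums.
std::size_t sharedBufferSize(
  const std::vector<std::pair<std::size_t, std::size_t>> &layerIO) {
  std::size_t best = 0;
  for (const auto &io : layerIO) // io = {input size, output size}
    best = std::max(best, io.first + io.second);
  return best;
}
```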

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years agoSupport sum value in profiler
hyeonseok lee [Tue, 5 Jan 2021 11:55:11 +0000 (20:55 +0900)]
Support sum value in profiler

Now the profiler will show the avg, min, max, and sum values

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
3 years ago[Test] Disable deriv verification when opt is on
Jihoon Lee [Tue, 29 Dec 2020 11:51:13 +0000 (20:51 +0900)]
[Test] Disable deriv verification when opt is on

This patch disables derivative verification and instead only checks the
whole returned derivatives.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Conv2d] Optimize layer loop
Jihoon Lee [Mon, 28 Dec 2020 07:48:16 +0000 (16:48 +0900)]
[Conv2d] Optimize layer loop

This optimizes the layer loops by

- Minimizing padding calculation
- Maximizing cache hits by transposing the matrix
- Maximizing cache hits by reordering the loop order
- ~use single offset to minimize offset calculation~
- ~add shortcut when kernel size is 1~

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Conv2d] Reuse im2col array by batch
Jihoon Lee [Mon, 28 Dec 2020 06:44:20 +0000 (15:44 +0900)]
[Conv2d] Reuse im2col array by batch

This patch enables reusing the im2col array across the batch, while
saving the initialization time of setting it to zero.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Conv2d] Change conv2d gemm to dot
Jihoon Lee [Mon, 28 Dec 2020 06:08:01 +0000 (15:08 +0900)]
[Conv2d] Change conv2d gemm to dot

- Change conv2d gemm to dot to enable the optimization path inside the dot
operation
- Add a beta option to the dot operation (C = alpha*A*B + beta*C); see the
sketch below
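
A minimal sketch of the added semantics as a naive reference loop
(hypothetical function, not the actual nntrainer kernel):

```cpp
// C = alpha * A * B + beta * C for row-major A (MxK), B (KxN), C (MxN).
// The beta term lets a caller accumulate into an existing output buffer.
void dotBeta(const float *A, const float *B, float *C, int M, int N, int K,
             float alpha, float beta) {
  for (int m = 0; m < M; ++m)
    for (int n = 0; n < N; ++n) {
      float acc = 0.0f;
      for (int k = 0; k < K; ++k)
        acc += A[m * K + k] * B[k * N + n];
      C[m * N + n] = alpha * acc + beta * C[m * N + n];
    }
}
```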

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[bugfix] Fix model path and dataset path in model_loader.cpp
hyeonseok lee [Tue, 29 Dec 2020 10:35:42 +0000 (19:35 +0900)]
[bugfix] Fix model path and dataset path in model_loader.cpp

Fix the model path and dataset path to include the working directory path

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
3 years ago[dist/tizen] Enable base unittests for tizen build
Parichay Kapoor [Tue, 29 Dec 2020 06:18:55 +0000 (15:18 +0900)]
[dist/tizen] Enable base unittests for tizen build

Enable nntrainer unittests for the Tizen build.
Not sure why or when this got commented out,
but let's enable it.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[model] Optimize model input/output
Parichay Kapoor [Thu, 24 Dec 2020 06:09:54 +0000 (15:09 +0900)]
[model] Optimize model input/output

Optimize the model's extra input/output memory allocation, which counts towards peak memory allocation.
Memory is allocated for the input of the input layer and the output/gradient of the output layer.
However, that memory is never used, as train_run() allocates a new buffer and passes it to the
input layer/loss layer.
This patch takes the already allocated memory from the input/loss layers to collect input/label data.

This patch also removes the extra parameters from forwarding/backwarding and the corresponding
with_val functions. Further, the two types of forwarding in the loss layer have been merged into a single function.
Now, the loss layer and input layer do not need to be distinguished and can be treated as regular layers.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[Conv] Optimize im2col
Jihoon Lee [Thu, 24 Dec 2020 06:49:44 +0000 (15:49 +0900)]
[Conv] Optimize im2col

This patch optimizes im2col by...

- Adding padding as an argument instead of passing the pad value
- Skipping the creation of a padded tensor and assignments for padded indices
- Refactoring variable names for clarity

See also #824

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Tensor] Optimize accessor
Jihoon Lee [Thu, 24 Dec 2020 06:45:19 +0000 (15:45 +0900)]
[Tensor] Optimize accessor

This patch...
- Inlines some accessors with the noexcept specifier for speed (sketched below)
- Adds getValuePadded to reduce the memory copying needed to make a padded tensor

see also #825
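
A minimal sketch of the accessor pattern, with a hypothetical 4-D view
standing in for the actual Tensor class:

```cpp
// Inline, noexcept accessor: the flat index is computed in place so the
// call can be fully inlined into hot im2col-style loops.
struct Tensor4DView {
  const float *data;
  unsigned channel, height, width; // batch stride folds into the index

  inline float getValue(unsigned b, unsigned c, unsigned h,
                        unsigned w) const noexcept {
    return data[((b * channel + c) * height + h) * width + w];
  }
};
```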

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Cc: Parichay Kapoor <pk.kappor@samsung.com>
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Fix] Assign default value for max_deriv size
Jihoon Lee [Tue, 29 Dec 2020 01:31:19 +0000 (10:31 +0900)]
[Fix] Assign default value for max_deriv size

This patch initializes max_dervative_size to avoid unexpected termination

resolves #834

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[model/test] Duplicate models test for optimization
Parichay Kapoor [Wed, 23 Dec 2020 09:55:05 +0000 (18:55 +0900)]
[model/test] Duplicate models test for optimization

Run models test twice, once with all the optimizations enabled
and then once with all the optimizations disabled.

This ensures that both the modes work properly.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[activation] Making activation in-place
Parichay Kapoor [Tue, 22 Dec 2020 01:44:14 +0000 (10:44 +0900)]
[activation] Making activation in-place

Made the activation layer work in-place.
Each layer now allocates memory for its output rather than for its input.

For the activation layer, if its memory is optimized, then the memory
for the layer behind the activation layer is not allocated,
and the memory for the derivative of the activation layer is shared
among all such layers.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[layer] Use gradient instead of variable for derivative
Parichay Kapoor [Fri, 18 Dec 2020 04:57:40 +0000 (13:57 +0900)]
[layer] Use gradient instead of variable for derivative

Use the gradient instead of the variable for the derivative.
The manager internally sets the gradient memory to be the same as the
variable for the optimization, but hides this kind of optimization from the layer.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[manager] Manager tracks input/output memory
Parichay Kapoor [Fri, 18 Dec 2020 04:04:48 +0000 (13:04 +0900)]
[manager] Manager tracks input/output memory

The manager tracks input/output memory and allocates it
based on whether the execution is training or inference.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[inputlayer] Input layer must be non-trainable
Parichay Kapoor [Fri, 18 Dec 2020 03:22:55 +0000 (12:22 +0900)]
[inputlayer] Input layer must be non-trainable

The input layer must always be non-trainable, as it does not support the backwarding operation.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[layer] Move layer input/output management to manager
Parichay Kapoor [Thu, 17 Dec 2020 07:29:28 +0000 (16:29 +0900)]
[layer] Move layer input/output management to manager

Move layer input/output memory management to the manager.
This is accomplished by replacing the use of NetBuffers with Var_Grad.

Now, all the memory for weights, gradients, inputs, outputs and derivatives
is managed by the manager, which allows more optimizations to be done on
inputs/outputs.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[Profiler] Change profiler specs
Jihoon Lee [Fri, 18 Dec 2020 05:23:07 +0000 (14:23 +0900)]
[Profiler] Change profiler specs

- The profiler time unit is changed from milliseconds to microseconds
- The report is now ordered by key

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Profiler] Apply ops level profiler
Jihoon Lee [Fri, 18 Dec 2020 03:16:19 +0000 (12:16 +0900)]
[Profiler] Apply ops level profiler

This patch attaches the ops-level profiler.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Profiler] Add event registerator
Jihoon Lee [Fri, 18 Dec 2020 01:52:14 +0000 (10:52 +0900)]
[Profiler] Add event registerator

As of this patch, the profiler can dynamically register events and send
them to the profile listener, along with a few bug fixes.

resolves #814

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Manager] Add MMaped memory
Jihoon Lee [Thu, 17 Dec 2020 12:33:00 +0000 (12:33 +0000)]
[Manager] Add MMaped memory

There was a requirement to separate the weight memory region and the grad
memory region.
To easily separate the two, this patch introduces a new abstraction,
`MMapedMemory`, while separating the weight and grad mmaps.

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Manager/Fix] Disallow copy ctor of manager
Jihoon Lee [Wed, 16 Dec 2020 04:44:41 +0000 (13:44 +0900)]
[Manager/Fix] Disallow copy ctor of manager

Since the manager holds memory, it shouldn't be copied, as ownership
becomes unclear. This patch deletes the copy ctor / assignment ops, while
changing signatures for members and functions that use the manager.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Android] Manage ndk to deal with changes
Jihoon Lee [Wed, 16 Dec 2020 11:06:04 +0000 (11:06 +0000)]
[Android] Manage ndk to deal with changes

1. Upgrade ndk version to 29
2. Add dependent library
3. Fix syntax for Application.mk

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Tensor] Add Tensor Wrap method
Jihoon Lee [Tue, 15 Dec 2020 04:50:49 +0000 (13:50 +0900)]
[Tensor] Add Tensor Wrap method

Add some factory methods to Tensor:
1. Borrow external memory and use it directly
2. Create from a shared pointer without copying

To restrict unwanted use, these are static methods
called `Tensor::Wrap`.
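
A minimal sketch of the two wrapping patterns, with a hypothetical view
type rather than the actual `Tensor::Wrap` signatures:

```cpp
#include <cstddef>
#include <memory>
#include <utility>

struct FloatView {
  std::shared_ptr<float> data; // may or may not own the buffer
  std::size_t len;

  // 1. Borrow external memory: non-owning, the deleter is a no-op.
  static FloatView wrap(float *buf, std::size_t len) {
    return FloatView{std::shared_ptr<float>(buf, [](float *) {}), len};
  }

  // 2. Share an existing buffer without copying the data.
  static FloatView wrap(std::shared_ptr<float> buf, std::size_t len) {
    return FloatView{std::move(buf), len};
  }
};
```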

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[TensorDim] Add initializer list ctor
Jihoon Lee [Tue, 15 Dec 2020 04:30:45 +0000 (13:30 +0900)]
[TensorDim] Add initializer list ctor

This patch adds a TensorDim
initializer-list ctor so that a dimension can easily be passed as a function argument.
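
A minimal sketch of the pattern, with a stand-in Dim type rather than
the actual TensorDim:

```cpp
#include <functional>
#include <initializer_list>
#include <numeric>
#include <vector>

struct Dim {
  std::vector<unsigned> d;
  Dim(std::initializer_list<unsigned> l) : d(l) {}
};

// Any function taking a Dim can now be called with a braced list.
unsigned totalLen(const Dim &dim) {
  return std::accumulate(dim.d.begin(), dim.d.end(), 1u,
                         std::multiplies<unsigned>());
}

// usage: totalLen({1, 3, 224, 224}) == 150528
```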

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[tensor] argmax bugfix
Parichay Kapoor [Wed, 23 Dec 2020 15:22:43 +0000 (00:22 +0900)]
[tensor] argmax bugfix

Apply the memory allocation bugfix to argmax,
where an empty vector was being addressed.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[tensor] Set stride for shared tensor accepted/tizen/unified/20201222.122522 submit/tizen/20201222.073053
Parichay Kapoor [Fri, 18 Dec 2020 05:21:42 +0000 (14:21 +0900)]
[tensor] Set stride for shared tensor

Set stride for shared tensor

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[layer] Support in-place batch normalization
Parichay Kapoor [Tue, 15 Dec 2020 11:10:39 +0000 (20:10 +0900)]
[layer] Support in-place batch normalization

Support in-place batch normalization where the batch normalization
input/output is not stored and is over-written by the next layer.

This patch removes the input/output memory requirement when using
the batch normalization layer.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[ ARGMAX ] Fix bug about argmax
jijoong.moon [Fri, 18 Dec 2020 10:23:42 +0000 (19:23 +0900)]
[ ARGMAX ] Fix bug about argmax

Fix the calculation of argmax in tensor.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
3 years ago[Test] Add macro to check if backbone is enabled
Jihoon Lee [Mon, 14 Dec 2020 13:53:51 +0000 (13:53 +0000)]
[Test] Add macro to check if backbone is enabled

When backbone is not enabled, the backbone tests fail.
This patch adds a define in the test so that the tests can pass when
backbone is not enabled.

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[svace] Assure uninitialized members accepted/tizen/unified/20201217.124219 submit/tizen/20201217.045640
Jihoon Lee [Wed, 16 Dec 2020 08:38:51 +0000 (17:38 +0900)]
[svace] Assure uninitialized members

nnstreamer_layer had two uninitialized members.
This patch initializes those two.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[svace] Error handling for applications/test
Jihoon Lee [Wed, 16 Dec 2020 07:41:44 +0000 (16:41 +0900)]
[svace] Error handling for applications/test

1. Fix inconsistent alloc/dealloc (new/free)
2. Add try/catch to some statements
3. Fix a memory leak from `asprintf`

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[svace] assure file to be closed before remove
Jihoon Lee [Wed, 16 Dec 2020 06:55:52 +0000 (15:55 +0900)]
[svace] assure file to be closed before remove

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Docs] Remove unnecessary HTML link for feature/privilege.
Sangjung Woo [Thu, 17 Dec 2020 03:12:38 +0000 (12:12 +0900)]
[Docs] Remove unnecessary HTML link for feature/privilege.

This patch removes the unnecessary HTML link for feature/privilege.

Signed-off-by: Sangjung Woo <sangjung.woo@samsung.com>
3 years ago[Optim] Add shortcut to dot product
Jihoon Lee [Fri, 11 Dec 2020 08:13:07 +0000 (17:13 +0900)]
[Optim] Add shortcut to dot product

When a dimension is 1, the product is a vector-by-matrix or vector-by-vector
multiplication. This patch adds a shortcut for that situation, as sketched below.
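
A minimal sketch of the dispatch idea using standard CBLAS calls
(hypothetical wrapper; row-major, no transposes):

```cpp
#include <cblas.h>

// Fall back to cheaper kernels when one operand degenerates to a vector.
void dot(const float *A, const float *B, float *C, int M, int N, int K) {
  if (M == 1 && N == 1) {
    C[0] = cblas_sdot(K, A, 1, B, 1);              // vector . vector
  } else if (N == 1) {
    cblas_sgemv(CblasRowMajor, CblasNoTrans, M, K, // matrix . vector
                1.0f, A, K, B, 1, 0.0f, C, 1);
  } else {
    cblas_sgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans, M, N, K,
                1.0f, A, K, B, N, 0.0f, C, N);     // full GEMM
  }
}
```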

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Fix] fix lda, ldb param
Jihoon Lee [Fri, 11 Dec 2020 07:58:34 +0000 (16:58 +0900)]
[Fix] fix lda, ldb param

**Changes proposed in this PR:**
- lda, ldb, and ldc describe the memory layout, so they should be set in
terms of the memory layout; this patch fixes the issue while adding a
corresponding test (see the sketch below)
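
For reference, a correct row-major call with standard CBLAS, where each
leading dimension is the row stride of its matrix (contiguous case shown):

```cpp
#include <cblas.h>

// C(MxN) = alpha * A(MxK) * B(KxN) + beta * C, all row-major and
// contiguous, so lda = K, ldb = N, ldc = N -- strides, not dimensions.
void gemmRowMajor(int M, int N, int K, float alpha, const float *A,
                  const float *B, float beta, float *C) {
  cblas_sgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans, M, N, K,
              alpha, A, K, B, N, beta, C, N);
}
```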

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Profiler] Add basic profilerlistener
Jihoon Lee [Wed, 9 Dec 2020 07:23:03 +0000 (16:23 +0900)]
[Profiler] Add basic profilerlistener

This patch adds a global profiler listener for various purposes.

From this patch,
1. The profiler can be called globally with a designated event key
2. A listener reporting suite is included
3. The enum key has been changed to an int key to deal with an
unhashable-key compile error on a few platforms.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

v2)
1. Change the listener to an RAII object (with forced profiler/event
designation)
2. Add an unsubscribe method
3. Change the event registry to a set to prevent notifying a listener twice
4. Change the semantics to not allow adding the same listener twice

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Test] Add profiler test
Jihoon Lee [Wed, 9 Dec 2020 05:40:24 +0000 (14:40 +0900)]
[Test] Add profiler test

**Changes proposed in this PR:**
- Add profiler test
- Wire profiler sources / header to the build system

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Profiler] Separate Profiler for wider use
Jihoon Lee [Wed, 9 Dec 2020 04:09:34 +0000 (13:09 +0900)]
[Profiler] Separate Profiler for wider use

This patch extracts the profiler from neuralnet.

Also, this separates `ProfileListener`, which
should be used on the client side, while `Profiler`
is used on the library side.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[meson.build] Change join_paths to / in meson.build files
hyeonseok lee [Wed, 9 Dec 2020 03:13:09 +0000 (12:13 +0900)]
[meson.build] Change join_paths to / in meson.build files

Replace join_paths in meson.build files with the `/` operator
(e.g., `join_paths('a', 'b')` becomes `'a' / 'b'`).

Check issue #709 for more details.

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
3 years ago[Android] Integrate openblas into android
Jihoon Lee [Tue, 8 Dec 2020 07:25:08 +0000 (16:25 +0900)]
[Android] Integrate openblas into android

The Android ndk build was not building on top of OpenBLAS.

This patch fixes the problem

resolves #794

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[mnist] Update saved model file
Parichay Kapoor [Wed, 9 Dec 2020 13:11:29 +0000 (22:11 +0900)]
[mnist] Update saved model file

As saving of the optimizer parameters has been updated, the previous
model file gives wrong results. This patch adds the new model file.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[network] Rework the backwarding
Parichay Kapoor [Thu, 3 Dec 2020 10:14:07 +0000 (19:14 +0900)]
[network] Rework the backwarding

- Remove forwarding from backwarding;
backwarding should just do backwarding and no more
- Move backwarding back to neuralnetwork so that the graph
does not have to care about how to backward etc.
The graph just provides iterators for iterating the graph
in reverse; it does not know that layers have backwarding etc.

Also this removes dependency of graph from optimizer.

V2:
Added comment fixes for the corresponding PR

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[optimizer] Move optimizer out of layer
Parichay Kapoor [Thu, 3 Dec 2020 08:38:28 +0000 (17:38 +0900)]
[optimizer] Move optimizer out of layer

This patch moves the optimizer out of the layer.
Now backwarding just calculates derivatives and gradients
but does not apply the gradients;
applying the gradient is done by the model.

Layer still supports the applyGradient operation but requires the optimizer
as an argument.
This decouples layers from optimizers so they can operate independently.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[optimizer] Simplify optimizer initialize
Parichay Kapoor [Thu, 3 Dec 2020 06:19:02 +0000 (15:19 +0900)]
[optimizer] Simplify optimizer initialize

As there is just one optimizer, shared by all layers, it must be initialized just once by the neural network.
Also, addOptimizerVariables() moved out of initialize(), as initialize() should work
on the optimizer's parameters and should not need the list of weights.

Also removed the set_tensor argument, which was redundant.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[optimizer] Move optimizer variables to weights
Parichay Kapoor [Thu, 3 Dec 2020 05:43:09 +0000 (14:43 +0900)]
[optimizer] Move optimizer variables to weights

Move optimizer variables to weights.
Now all the weight-related tensors are handled by the weights themselves,
so the optimizer can be shared across all layers; no need to create new
copies for all layers.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[vgg] Added pytorch model for vgg16
Parichay Kapoor [Tue, 8 Dec 2020 04:34:11 +0000 (13:34 +0900)]
[vgg] Added pytorch model for vgg16

Added pytorch model for vgg16
This is to benchmark against tf and nntrainer

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[vgg] Update to official vgg16 model
Parichay Kapoor [Tue, 8 Dec 2020 04:32:41 +0000 (13:32 +0900)]
[vgg] Update to official vgg16 model

Update the nntrainer and tensorflow models to use the official VGG16 architecture.
The FC layer setup is different, as the cifar100 dataset has just 100 output classes
rather than the 1000 classes of imagenet.
Further, the number of epochs is reduced to 1.
When training, this can be increased appropriately.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[MNIST] Added pytorch version
Parichay Kapoor [Mon, 7 Dec 2020 03:48:02 +0000 (12:48 +0900)]
[MNIST] Added pytorch version

Added a pytorch version of MNIST for benchmarking purposes.
This code is only tested on CPU.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[ndk] Add enable profile flag
Jihoon Lee [Mon, 7 Dec 2020 11:31:44 +0000 (20:31 +0900)]
[ndk] Add enable profile flag

This patch adds an enable-profile flag to the ndk build for profiling purposes

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Experiment] Add profiler
Jihoon Lee [Fri, 4 Dec 2020 09:29:32 +0000 (18:29 +0900)]
[Experiment] Add profiler

This patch adds an `enable-profile` option to enable profiling. Also, this
patch adds simple profiling logic to `neuralnet::inference`.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Meson] Add ndk-build to be part of ndk build
Jihoon Lee [Thu, 3 Dec 2020 06:38:29 +0000 (15:38 +0900)]
[Meson] Add ndk-build to be part of ndk build

**Changes proposed in this PR:**
- Add option to build library using ndk

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Chores] CustomShortcut bug accepted/tizen/unified/20201207.123248 submit/tizen/20201207.013927
Jihoon Lee [Wed, 2 Dec 2020 10:11:14 +0000 (19:11 +0900)]
[Chores] CustomShortcut bug

As the ini format has been changed, the ini for CustomShortcut needs to change.

This patch handles it.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>