platform/core/ml/nntrainer.git
Parichay Kapoor [Thu, 17 Dec 2020 07:29:28 +0000 (16:29 +0900)]
[layer] Move layer input/output management to manager

Move layer input/output memory management to the manager.
This is accomplished by replacing the use of NetBuffers with Var_Grad.

Now all the memory for weights, gradients, inputs, outputs and derivatives
is managed by the manager, which allows more optimizations to be done on
inputs/outputs.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Fri, 18 Dec 2020 05:23:07 +0000 (14:23 +0900)]
[Profiler] Change profiler specs

- The profiler time unit is changed from milliseconds to microseconds
- The report is now ordered by key
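
A minimal sketch of the unit change, assuming a std::chrono-based
measurement (illustrative only, not the exact profiler code):

```cpp
#include <chrono>

// Durations are now reported in microseconds rather than milliseconds.
using Clock = std::chrono::steady_clock;

long elapsedUs(Clock::time_point start, Clock::time_point end) {
  return std::chrono::duration_cast<std::chrono::microseconds>(end - start)
    .count();
}
```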

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Fri, 18 Dec 2020 03:16:19 +0000 (12:16 +0900)]
[Profiler] Apply ops level profiler

This patch attaches the ops-level profiler

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Fri, 18 Dec 2020 01:52:14 +0000 (10:52 +0900)]
[Profiler] Add event registerator

As of this patch, the profiler can dynamically register events and send
them to the ProfileListener; a few bugs are fixed along the way

resolves #814

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Thu, 17 Dec 2020 12:33:00 +0000 (12:33 +0000)]
[Manager] Add MMaped memory

There was a requirement to separate the weight memory region from the
grad memory region.
To easily separate the two, this patch introduces a new abstraction,
`MMapedMemory`, while mapping weights and grads through separate mmaps
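
A minimal sketch of what such an abstraction could look like, assuming
an anonymous mmap per region; names and constructor arguments are
illustrative, not the actual nntrainer API:

```cpp
#include <cstddef>
#include <stdexcept>
#include <sys/mman.h>

// One instance owns one mmap-ed region, e.g. one for weights and a
// separate one for gradients.
class MMapedMemory {
public:
  explicit MMapedMemory(std::size_t size) : size_(size) {
    buf_ = mmap(nullptr, size_, PROT_READ | PROT_WRITE,
                MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf_ == MAP_FAILED)
      throw std::runtime_error("mmap failed");
  }
  ~MMapedMemory() { munmap(buf_, size_); }
  MMapedMemory(const MMapedMemory &) = delete;
  MMapedMemory &operator=(const MMapedMemory &) = delete;

  void *data() { return buf_; }
  std::size_t size() const { return size_; }

private:
  void *buf_ = nullptr;
  std::size_t size_;
};
```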

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Wed, 16 Dec 2020 04:44:41 +0000 (13:44 +0900)]
[Manager/Fix] Disallow copy ctor of manager

Since the manager holds memory, it shouldn't be copied, as ownership
becomes unclear. This patch deletes the copy ctor / assignment ops and
changes the signatures of members and functions that use the manager,
as sketched below.
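
A sketch of the pattern (members omitted); callers now take the manager
by reference instead of by value:

```cpp
class Manager {
public:
  Manager() = default;
  // Copying is deleted: the manager owns memory, and a copy would make
  // ownership ambiguous.
  Manager(const Manager &) = delete;
  Manager &operator=(const Manager &) = delete;
  // Moving remains available: ownership is transferred, not shared.
  Manager(Manager &&) noexcept = default;
  Manager &operator=(Manager &&) noexcept = default;
};

// Signature change: functions now accept Manager & rather than Manager.
void initialize(Manager &manager);
```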

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Wed, 16 Dec 2020 11:06:04 +0000 (11:06 +0000)]
[Android] Manage ndk to deal with changes

1. Upgrade ndk version to 29
2. Add dependent library
3. Fix syntax for Application.mk

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Tue, 15 Dec 2020 04:50:49 +0000 (13:50 +0900)]
[Tensor] Add Tensor Wrap method

Add some factory methods to Tensor:
1. borrow external memory and use it as-is
2. create from a shared pointer without copying

To restrict unwanted use, these are static methods
named `Tensor::Wrap`, sketched below
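
A hedged sketch of the two factory overloads; the real `Tensor::Wrap`
signature (dimensions, offsets) is richer than shown here:

```cpp
#include <cstddef>
#include <memory>

class Tensor {
public:
  // Borrow external memory: the caller keeps ownership and must keep
  // `buf` alive while the tensor is in use (no-op deleter).
  static Tensor Wrap(float *buf, std::size_t size) {
    Tensor t;
    t.data_ = std::shared_ptr<float>(buf, [](float *) {});
    t.size_ = size;
    return t;
  }

  // Share ownership of an existing buffer without copying.
  static Tensor Wrap(std::shared_ptr<float> buf, std::size_t size) {
    Tensor t;
    t.data_ = std::move(buf);
    t.size_ = size;
    return t;
  }

private:
  std::shared_ptr<float> data_;
  std::size_t size_ = 0;
};
```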

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Tue, 15 Dec 2020 04:30:45 +0000 (13:30 +0900)]
[TensorDim] Add initializer list ctor

This patch adds a TensorDim
initializer-list ctor so a dimension can easily be passed as a function
argument
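
A minimal sketch of the idea, assuming the usual 4-dimensional
(batch, channel, height, width) layout:

```cpp
#include <array>
#include <cstddef>
#include <initializer_list>

class TensorDim {
public:
  // Accept up to 4 dimensions from a braced list; missing leading
  // dimensions default to 1 (assumes dims.size() <= 4).
  TensorDim(std::initializer_list<unsigned int> dims) {
    std::size_t i = dim_.size() - dims.size();
    for (unsigned int d : dims)
      dim_[i++] = d;
  }

private:
  std::array<unsigned int, 4> dim_{1, 1, 1, 1};
};

// A dimension can now be passed inline as a function argument:
void initialize(TensorDim dim);
// initialize({32, 3, 28, 28});
```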

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Wed, 23 Dec 2020 15:22:43 +0000 (00:22 +0900)]
[tensor] argmax bugfix

Apply a memory allocation bugfix to argmax,
where an empty vector was being addressed
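
A sketch of the corrected pattern, assuming a flat row-major buffer (not
the exact Tensor API): the result vector is sized up front instead of
being indexed while empty.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// `data` holds `batch` rows of `len` elements each.
std::vector<unsigned int> argmax(const std::vector<float> &data,
                                 std::size_t batch, std::size_t len) {
  std::vector<unsigned int> result(batch); // was: an empty vector
  for (std::size_t b = 0; b < batch; ++b) {
    auto begin = data.begin() + b * len;
    result[b] = std::max_element(begin, begin + len) - begin;
  }
  return result;
}
```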

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Fri, 18 Dec 2020 05:21:42 +0000 (14:21 +0900)]
[tensor] Set stride for shared tensor

Set stride for shared tensor

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Tue, 15 Dec 2020 11:10:39 +0000 (20:10 +0900)]
[layer] Support in-place batch normalization

Support in-place batch normalization where the batch normalization
input/output is not stored and is over-written by the next layer.

This patch removes the input/output memory requirement when using the
batch normalization layer.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
jijoong.moon [Fri, 18 Dec 2020 10:23:42 +0000 (19:23 +0900)]
[ ARGMAX ] Fix bug about argmax

Fix the argmax calculation in tensor

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Jihoon Lee [Mon, 14 Dec 2020 13:53:51 +0000 (13:53 +0000)]
[Test] Add macro to check if backbone is enabled

When backbone support is not enabled, the tests fail.
This patch adds a define in the tests so that they can pass when backbone
support is not enabled

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Wed, 16 Dec 2020 08:38:51 +0000 (17:38 +0900)]
[svace] Assure uninitialized members

nnstreamer_layer had two uninitialized members.
This patch initializes them

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Wed, 16 Dec 2020 07:41:44 +0000 (16:41 +0900)]
[svace] Error handling for applications/test

1. Fix inconsistent alloc/dealloc (new paired with free)
2. Add try/catch to some statements
3. Fix a memory leak from `asprintf`

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Wed, 16 Dec 2020 06:55:52 +0000 (15:55 +0900)]
[svace] Assure file is closed before remove

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Sangjung Woo [Thu, 17 Dec 2020 03:12:38 +0000 (12:12 +0900)]
[Docs] Remove unnecessary HTML link for feature/privilege.

This patch removes the unnecessary HTML link for feature/privilege.

Signed-off-by: Sangjung Woo <sangjung.woo@samsung.com>
Jihoon Lee [Fri, 11 Dec 2020 08:13:07 +0000 (17:13 +0900)]
[Optim] Add shortcut to dot product

When a dimension is 1, the operation is a vector-by-matrix or
vector-by-vector multiplication. This patch adds a shortcut for that
situation
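
A hedged sketch of the shortcut using cblas, assuming row-major float
data; the shapes and the dispatch condition are illustrative:

```cpp
#include <cblas.h>

// out = A . b, where A is M x K (row-major) and b has K elements.
void dot(const float *A, const float *b, float *out, int M, int K) {
  if (M == 1) {
    // vector . vector -> scalar: BLAS level 1
    out[0] = cblas_sdot(K, A, 1, b, 1);
  } else {
    // matrix . vector -> vector: BLAS level 2, cheaper than sgemm
    cblas_sgemv(CblasRowMajor, CblasNoTrans, M, K, 1.0f, A, K, b, 1, 0.0f,
                out, 1);
  }
}
```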

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Fri, 11 Dec 2020 07:58:34 +0000 (16:58 +0900)]
[Fix] fix lda, ldb param

**Changes proposed in this PR:**
- lda, ldb and ldc describe the memory layout, so they should be set in
terms of the memory layout; this patch fixes the issue and adds a
corresponding test
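
For contiguous row-major operands with no transpose, the leading
dimensions are the row strides in memory, independent of the logical
operand roles. A minimal example, assuming plain contiguous buffers:

```cpp
#include <cblas.h>

// C(M x N) = A(M x K) * B(K x N), all contiguous row-major.
// lda/ldb/ldc are the row strides in memory: K, N and N here.
void sgemmRowMajor(const float *A, const float *B, float *C, int M, int N,
                   int K) {
  cblas_sgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans, M, N, K,
              /*alpha=*/1.0f, A, /*lda=*/K, B, /*ldb=*/N,
              /*beta=*/0.0f, C, /*ldc=*/N);
}
```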

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Wed, 9 Dec 2020 07:23:03 +0000 (16:23 +0900)]
[Profiler] Add basic profilerlistener

This patch adds a global profiler listener for various purposes

From this patch,
1. The profiler can be called globally with a designated event key
2. A listener reporting suite is included
3. The enum key has been changed to an int key to deal with an
unhashable-key compile error on a few platforms.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

v2)
1. Change the listener to an RAII object (with forced profiler and event
designation)
2. Add an unsubscribe method
3. Change the event registry to a set to prevent notifying a listener
twice
4. Change the semantics to disallow adding the same listener twice
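
A sketch of the overall shape these changes suggest; all names are
illustrative rather than the actual nntrainer interfaces:

```cpp
#include <iostream>
#include <set>

class Listener {
public:
  virtual ~Listener() = default;
  virtual void onNotify(int event, long duration_us) = 0;
};

class Profiler {
public:
  // std::set: the same listener cannot be added (or notified) twice.
  bool subscribe(Listener *l) { return listeners_.insert(l).second; }
  void unsubscribe(Listener *l) { listeners_.erase(l); }
  void notify(int event, long duration_us) {
    for (Listener *l : listeners_)
      l->onNotify(event, duration_us);
  }

private:
  std::set<Listener *> listeners_;
};

// RAII: subscription is tied to the listener's lifetime.
class ScopedListener : public Listener {
public:
  explicit ScopedListener(Profiler &p) : profiler_(p) {
    profiler_.subscribe(this);
  }
  ~ScopedListener() override { profiler_.unsubscribe(this); }
  void onNotify(int event, long duration_us) override {
    std::cout << event << ": " << duration_us << " us\n";
  }

private:
  Profiler &profiler_;
};
```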

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Wed, 9 Dec 2020 05:40:24 +0000 (14:40 +0900)]
[Test] Add profiler test

**Changes proposed in this PR:**
- Add profiler test
- Wire profiler sources / header to the build system

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Wed, 9 Dec 2020 04:09:34 +0000 (13:09 +0900)]
[Profiler] Separate Profiler for wider use

This patch extracts the profiler from neuralnet.

It also separates out `ProfileListener`, which
is meant for the client side, while `Profiler`
is used on the library side

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
hyeonseok lee [Wed, 9 Dec 2020 03:13:09 +0000 (12:13 +0900)]
[meson.build] Change join_paths to / in meson.build files

Replace join_paths in meson.build files with the / operator

Check issue #709 for more details

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
Jihoon Lee [Tue, 8 Dec 2020 07:25:08 +0000 (16:25 +0900)]
[Android] Integrate openblas into android

The Android ndk build was not building on top of openblas.

This patch fixes the problem

resolves #794

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Wed, 9 Dec 2020 13:11:29 +0000 (22:11 +0900)]
[mnist] Update saved model file

As the saving of optimizer parameters has been updated, the previous
model file gives wrong results. This patch adds the new model file.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Thu, 3 Dec 2020 10:14:07 +0000 (19:14 +0900)]
[network] Rework the backwarding

- remove forwarding from backwarding:
backwarding should just do backwarding and no more
- move backwarding back to neuralnetwork so that the graph
does not have to care about how to backward etc.
The graph just provides iterators for iterating the graph
in reverse; it does not know that layers have backwarding etc.

This also removes the graph's dependency on the optimizer.

V2:
Added comment fixes for the corresponding PR

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Thu, 3 Dec 2020 08:38:28 +0000 (17:38 +0900)]
[optimizer] Move optimizer out of layer

This patch moves the optimizer out of the layer.
Backwarding now just calculates derivatives and gradients
but does not apply the gradients.
Applying the gradients is done by the model.

Layers still support the applyGradient operation but require the
optimizer as an argument.
This decouples layers from optimizers, so each can operate independently.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Thu, 3 Dec 2020 06:19:02 +0000 (15:19 +0900)]
[optimizer] Simplify optimizer initialize

As there is just one optimizer, shared by all layers, it must be
initialized just once by the neural network.
Also, addOptimizerVariables() is moved out of initialize(), as
initialize() should work on the optimizer's parameters and should not
need the list of weights.

Also remove the set_tensor argument, which was redundant

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Thu, 3 Dec 2020 05:43:09 +0000 (14:43 +0900)]
[optimizer] Move optimizer variables to weights

Move the optimizer variables to weights.
Now all the weight-related tensors are handled by the weights themselves,
so the optimizer can be shared across all layers; there is no need to
create new copies for every layer

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Tue, 8 Dec 2020 04:34:11 +0000 (13:34 +0900)]
[vgg] Added pytorch model for vgg16

Added a pytorch model for vgg16
to benchmark against tf and nntrainer

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Tue, 8 Dec 2020 04:32:41 +0000 (13:32 +0900)]
[vgg] Update to official vgg16 model

Update the nntrainer and tensorflow models to use the official VGG16
architecture.
The FC layer setup is different, as the cifar100 dataset has just 100
output classes rather than imagenet's 1000.
Further, the number of epochs is reduced to 1;
when actually training, this can be increased appropriately.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Mon, 7 Dec 2020 03:48:02 +0000 (12:48 +0900)]
[MNIST] Added pytorch version

Added a pytorch version of MNIST for benchmarking purposes.
This code has only been tested on CPU

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Mon, 7 Dec 2020 11:31:44 +0000 (20:31 +0900)]
[ndk] Add enable profile flag

This patch adds an enable-profile flag to the ndk build for profiling
purposes

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Fri, 4 Dec 2020 09:29:32 +0000 (18:29 +0900)]
[Experiment] Add profiler

This patch adds an `enable-profile` option to enable profiling. It also
adds simple profiling logic to `neuralnet::inference`

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Thu, 3 Dec 2020 06:38:29 +0000 (15:38 +0900)]
[Meson] Add ndk-build to be part of ndk build

**Changes proposed in this PR:**
- Add option to build library using ndk

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Wed, 2 Dec 2020 10:11:14 +0000 (19:11 +0900)]
[Chores] CustomShortcut bug

As the ini format has been changed, the ini for customshortcut needs to
change.

This patch handles it.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Wed, 2 Dec 2020 05:44:26 +0000 (14:44 +0900)]
[manager] Share gradient memory for all layer

This patch allows sharing the gradient memory across all the layers.
A buffer of the maximum gradient size is allocated, and each layer gets
a unique tensor which internally points into this buffer.
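
A minimal sketch of the allocation idea (not the Manager API): one buffer
of the largest gradient size is allocated, and each layer wraps it with
its own shape. This is safe when gradients are computed and applied layer
by layer, so the region can be reused.

```cpp
#include <algorithm>
#include <cstddef>
#include <memory>
#include <vector>

// Assumes grad_sizes is non-empty; each layer later aliases this buffer
// with its own size/shape (cf. the Tensor::Wrap sketch above).
std::shared_ptr<float[]> makeSharedGradient(
  const std::vector<std::size_t> &grad_sizes) {
  std::size_t max_size =
    *std::max_element(grad_sizes.begin(), grad_sizes.end());
  return std::shared_ptr<float[]>(new float[max_size]());
}
```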

This optimization feature can be disabled for a model (as done in the
automated models unittest)

Manager is also moved to nntrainer/tensor, as the manager manages all the
weights (tensors) and will in future manage all the inputs/outputs.
If the functionality of the manager is extended, it can be moved again
appropriately.

See also #774
Resolves #766

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Wed, 2 Dec 2020 02:52:44 +0000 (11:52 +0900)]
[layers/manager] Register weights with manager

All the weights of a layer are now registered with the manager.
The manager allocates memory for these weights and will, in the future,
handle their updates etc.

See also #774 #766

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Tue, 1 Dec 2020 12:07:49 +0000 (21:07 +0900)]
[weight] Updated weights to be vector

Updated the weights of a layer to be a vector rather than a shared_ptr
array.
This makes management easier and simplifies updating weights internally
once gradients share memory

See also #774 #766

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Tue, 1 Dec 2020 11:14:57 +0000 (20:14 +0900)]
[manager] Added nntrainer manager for weights

Added a manager to manage all the allocated weights.
This patch also adds the manager to the model and passes it to
initialize(), which allows weights to be added to the manager.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Tue, 1 Dec 2020 10:46:38 +0000 (19:46 +0900)]
[weight/var_grad] Make internal variable as shared_ptr

The internal variables in weight/var_grad, namely the variable and the
gradient themselves, are changed to shared_ptr so that weights can be
shared without worrying about shallow copies.

Also changed the copy constructor to not create a new Tensor, as the copy
constructor of weight gets called and that is unnecessary, unintentional
overhead.
As a weight is just a wrapper over a tensor, its copy constructor should
follow the same behavior as the tensor's, which is to not allocate new
memory.
Added clone() as the alternative for creating a new copy of a given
weight.

See also #774 #766

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
jijoong.moon [Tue, 1 Dec 2020 04:31:03 +0000 (13:31 +0900)]
[ CONV2D ] separate conv2d_gemm and im2col

It is better to split conv2d_gemm and im2col

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Parichay Kapoor [Wed, 2 Dec 2020 05:29:19 +0000 (14:29 +0900)]
[unittest] Enable disabled unittest

Enable the disabled fc layer unittest

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Tue, 1 Dec 2020 01:48:32 +0000 (10:48 +0900)]
[var_grad] Trainable inferred from gradient

The trainable property of a variable was earlier tracked by storing a
separate trainable flag.
Now, trainable is inferred using gradient.uninitialized()
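
A sketch of the change, with a stand-in Tensor type; the commit's point
is that the flag is derived rather than stored:

```cpp
// Stand-in for nntrainer's tensor; `uninitialized()` mirrors the check
// named in the commit message.
struct Tensor {
  bool uninitialized() const { return data == nullptr; }
  float *data = nullptr;
};

class Var_Grad {
public:
  // before: a separately stored `bool trainable` kept in sync manually
  bool getTrainable() const { return !gradient.uninitialized(); }

private:
  Tensor gradient;
};
```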

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Mon, 30 Nov 2020 05:52:20 +0000 (14:52 +0900)]
[tensor] Update tensor operation signature

Update tensor operation signatures to return a Tensor reference as the
retval rather than a tensor by value. This avoids creating dummy tensors
as returns (which might have been optimized away by the compiler, but
let's do it manually, as the input is also a reference).

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Mon, 30 Nov 2020 07:34:09 +0000 (16:34 +0900)]
[CustomLayer] Update readme.md

Add a readme.md about how to run the example and the expected output

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [ ]Passed [ ]Failed [X]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Thu, 26 Nov 2020 07:50:53 +0000 (16:50 +0900)]
[Custom] Add actual example

**Changes proposed in this PR:**
- Add an example to create the custom layer to be used with ini
- Add an example to create the custom layer to be used with api

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Wed, 18 Nov 2020 02:30:40 +0000 (11:30 +0900)]
[Custom] Add an example scaffolding

Add a layer example that depends on the user's custom code.
This patch generates scaffolding in the `Application/Custom` folder

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
jijoong.moon [Fri, 27 Nov 2020 01:12:45 +0000 (10:12 +0900)]
[ Graph ] remove grad mem buffer for backwarding

This PR includes,
  . remove the grad memory buffer in n_buffers for the graph. We do not
  need it because we can use the var memory buffer of n_buffers for
  backwarding.
  . For MNIST, memory consumption is reduced from 3.5 to 2.6

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Jihoon Lee [Fri, 27 Nov 2020 08:30:47 +0000 (17:30 +0900)]
[ModelLoader] Use vector<string> when create layer

When creating a layer from an ini, enum-based properties were used.
This prevented adding new properties without changing the api header.

This patch moves to setting layer properties via vector<string>
to enable setting properties without changing the api header, eventually
enabling custom properties in custom layers.

**Semantics change proposed in this PR**
Ini won't ignore unsupported properties anymore, since model_loader
cannot know whether a property is supported or not

See also #716

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Wed, 25 Nov 2020 11:45:09 +0000 (20:45 +0900)]
[bnlayer] bug fix for inference

Batch normalization bug fix for inference mode,
where add() was used instead of add_i()

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Wed, 25 Nov 2020 11:43:11 +0000 (20:43 +0900)]
[tensor] Support multiply/divide with given output

Support multiply/divide with a given output tensor.
This reduces temporary allocations for the bn layer

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Wed, 25 Nov 2020 11:41:54 +0000 (20:41 +0900)]
[pooling] Reduce temporary mem allocations

Reduce temporary memory allocations for pooling.
Remove unnecessary temporary memory allocations which can be
replaced with a slice view.
Also removed unnecessarily setting memory to zero

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Wed, 25 Nov 2020 11:40:32 +0000 (20:40 +0900)]
[activation] Reduce temporary memory alloc

Reduce temporary memory allocations in the activation layer
by using the hidden and ret_derivative class variables.
This temporarily increases peak memory, but the alloc/dealloc is removed
from every iteration

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Wed, 25 Nov 2020 11:39:27 +0000 (20:39 +0900)]
[regex] Make regex static const

Make the regex static const.
Although it is built from a static string, memory is allocated inside the
regex object on every call.
Making it static const constructs it only once for the function's
lifetime, as in the example below.
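
The pattern in question, as a minimal example (the actual pattern string
is illustrative):

```cpp
#include <regex>
#include <string>

bool isValidKey(const std::string &key) {
  // static const: compiled once on first call instead of re-constructed
  // (and re-allocated) on every invocation.
  static const std::regex allowed("^[a-zA-Z0-9_]+$");
  return std::regex_match(key, allowed);
}
```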

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Mon, 16 Nov 2020 07:31:00 +0000 (16:31 +0900)]
[neuralnet] Skip backwarding for non-trainable layers

This patch skips the backwarding for the non-trainable layers.
Further, the last trainable layer skips calcDerivative as well.
This results in far fewer calculations and, more importantly, reduced
memory.

See also #732

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Mon, 16 Nov 2020 06:36:49 +0000 (15:36 +0900)]
[Layer] Add built-in ops to the context

**Changes proposed in this PR:**
- Add default layers to the global context

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Mon, 16 Nov 2020 06:59:20 +0000 (15:59 +0900)]
[AppContext] Fix case-sensitive key

Under the current semantics, the type key should be case-insensitive;
however, the comparison was case-sensitive. This patch fixes the issue.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Fri, 20 Nov 2020 07:04:49 +0000 (16:04 +0900)]
[conv2d] More optimizations for conv2d

This patch provides more optimizations for conv2d
by avoiding more memcopies and operations, along with modifications
to the internal interface of the conv2d_gemm operation

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Fri, 20 Nov 2020 05:46:37 +0000 (14:46 +0900)]
[conv2d] Bug fix for regularization loss

The regularization loss for the conv2d layer averaged over output
filters instead of adding them up. This patch fixes it.

See also #761

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Fri, 20 Nov 2020 05:37:55 +0000 (14:37 +0900)]
[conv2d] Refactor conv2d layer

The conv2d layer has some issues (#761).
This patch addresses some of them:
- The weight is now independent of the filter size. The different filter
weights have been combined, which makes addressing the weights easier
- Combining the weights also removed many mem-copies of weights needed to
bring them into a particular shape
- Moved to using getSlice() to access data rather than creating copies

Now, all layers have a fixed number of weights.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Fri, 13 Nov 2020 02:51:12 +0000 (11:51 +0900)]
[AppContext] Register Default ops at the beginning

**Changes proposed in this PR:**
- Register default optimizer at the beginning of the load

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Wed, 18 Nov 2020 02:44:25 +0000 (11:44 +0900)]
[Deps] Remove openmp dependency

Openmp is no longer used. The dependency is removed to reduce memory
consumption

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Mon, 16 Nov 2020 07:00:40 +0000 (16:00 +0900)]
[layers] Split backwarding into smaller functions

Split layer backwarding into smaller functions for optimization purposes
- calcDerivative() - calculate the derivative to be passed to previous
  layers; this function must be implemented by all derived layers
- calcGradient() - calculate the gradient for the weights of the layer
- applyGradient() - apply the gradients to the weights of the layer

Also, input->backwarding() now throws instead of silently doing nothing.
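
A sketch of the split interface; the exact signatures (arguments, return
types) in nntrainer differ, but the division of labor is as described
above:

```cpp
class Layer {
public:
  virtual ~Layer() = default;

  // Derivative w.r.t. the input, passed on to the previous layer;
  // every derived layer must implement this.
  virtual void calcDerivative() = 0;

  // Gradient w.r.t. this layer's weights (no-op for weightless layers).
  virtual void calcGradient() {}

  // Apply the computed gradients to the weights.
  virtual void applyGradient() {}

  // The old entry point, now composed from the smaller steps.
  virtual void backwarding() {
    calcGradient();
    calcDerivative();
    applyGradient();
  }
};
```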

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Fri, 13 Nov 2020 06:41:08 +0000 (15:41 +0900)]
[var_grad] Add var_grad for input/output lists

Added var_grad for the input/output lists, which also combines
derivatives.
This is the base class for the weights.
This update will help with the graph class

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Wed, 11 Nov 2020 12:30:23 +0000 (21:30 +0900)]
[AppContext] Add registerer,invoke factory methods

**Changes proposed in this PR:**
- Add factory registerer (a sketch of the pattern follows below)
- Add factory invoker
- Register built-in objects to each layers(postponed)
- Add tests
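
A hedged sketch of what a string-keyed factory registry can look like;
the real AppContext API differs in its details:

```cpp
#include <functional>
#include <map>
#include <memory>
#include <stdexcept>
#include <string>

template <typename Base> class FactoryRegistry {
public:
  using Factory = std::function<std::unique_ptr<Base>()>;

  // Registerer: bind a type key to a factory function.
  void registerFactory(const std::string &key, Factory f) {
    factories_[key] = std::move(f);
  }

  // Invoker: create an object from its registered key.
  std::unique_ptr<Base> invoke(const std::string &key) const {
    auto it = factories_.find(key);
    if (it == factories_.end())
      throw std::invalid_argument("unknown key: " + key);
    return it->second();
  }

private:
  std::map<std::string, Factory> factories_;
};
```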

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
jijoong.moon [Fri, 20 Nov 2020 07:39:23 +0000 (16:39 +0900)]
[ GRAPH ] Remove unused function and add doxygen note

The neural network class has functions which should be moved to the
graph.
This PR removes member functions which are not used any more and adds
doxygen comments to the graph header.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Fri, 20 Nov 2020 06:28:17 +0000 (15:28 +0900)]
[ ANDROID ] Enable graph for android build

Fix Android.mk to support graph

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Fri, 20 Nov 2020 05:22:03 +0000 (14:22 +0900)]
[ GRAPH ] Split initialization & Assign Memory

Split initialization and memory assignment

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Fri, 20 Nov 2020 02:35:18 +0000 (11:35 +0900)]
[ NNSTREAMER ] Fix NNStreamer Filter for graph

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Fri, 20 Nov 2020 00:58:43 +0000 (09:58 +0900)]
[ NNSTREAMER FILTER ] Fix nnstreamer filter to support graph

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Fri, 20 Nov 2020 00:51:04 +0000 (09:51 +0900)]
[ Fix ] istrequal to check length of string

Fix istrequal so that it also checks the string lengths
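
A sketch of the fixed comparison: checking lengths first prevents, e.g.,
"conv" comparing equal to "conv2d" when only the common prefix is
examined.

```cpp
#include <cctype>
#include <cstddef>
#include <string>

bool istrequal(const std::string &a, const std::string &b) {
  if (a.size() != b.size()) // the missing length check
    return false;
  for (std::size_t i = 0; i < a.size(); ++i)
    if (std::tolower(static_cast<unsigned char>(a[i])) !=
        std::tolower(static_cast<unsigned char>(b[i])))
      return false;
  return true;
}
```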

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Thu, 19 Nov 2020 09:53:31 +0000 (18:53 +0900)]
[ GRAPH ] Support Backbone Network

This PR includes :
 . Modification of the Network Graph to enable backbone network support
 . Fixes for the unittests

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Thu, 19 Nov 2020 00:43:17 +0000 (09:43 +0900)]
[ GRAPH ] Add Compiled Variable

- To make sure initialize() runs only after compile() succeeds, a
compiled variable is used.
- Additional unittest cases are fixed.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Wed, 18 Nov 2020 07:20:40 +0000 (16:20 +0900)]
[ UNITTEST ] Fix unittest & Applications to support NetworkGraph

Unittests and Applications need to be fixed to support NetworkGraph.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Tue, 17 Nov 2020 12:19:27 +0000 (21:19 +0900)]
[ UNIT TEST ] Enable Unit test for graph

Because of the graph implementation, the unittests must be changed.
This PR includes enabling the unit tests & bug fixes.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Mon, 16 Nov 2020 05:39:57 +0000 (14:39 +0900)]
[ Graph ] Add Graph into Network

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Fri, 13 Nov 2020 02:13:51 +0000 (11:13 +0900)]
[ Graph ] Modify backwarding/forwarding to use graph data

In this PR, using graph data is enabled.
 . Each layer uses graph data instead of its own input/hidden tensors.
 . No more copies between layers

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Mon, 9 Nov 2020 13:03:39 +0000 (22:03 +0900)]
[ GRAPH ] Initialize graph

This PR includes,
  . Set the Network Buffer
  . Initialize Layer
  . Calculate Dimension
  . Assign Network Buffer at each layer properly

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Mon, 9 Nov 2020 01:10:01 +0000 (10:10 +0900)]
[ LAYER ] Expose Output Layer

Supporting the output layer only implicitly is not enough to support
various connections in a network. This PR includes enabling the
output layer explicitly.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Mon, 9 Nov 2020 01:08:06 +0000 (10:08 +0900)]
[ Example ] Mini Resnet

This PR includes,
    . Addition of a Mini Resnet example to test
    . Skip connection
    . Output Layer
    . Addition Layer

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Wed, 21 Oct 2020 02:42:22 +0000 (11:42 +0900)]
[ Graph ] Neural Network Graph

In this PR, the NetworkGraph class is introduced and the compile method
is implemented.
During the compile process, it takes the layers vector which is created
by model_loader and creates required layers like Activation, Flatten,
Concat or Addition layers. It also adjusts the inputs accordingly and
calculates the order of computation. So far, only serial processing is
supported.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Parichay Kapoor [Fri, 13 Nov 2020 13:01:20 +0000 (22:01 +0900)]
[fc,tensor,adam] Change signature for sum

Added a new signature for sum.
Also removed an unnecessary calculation from fc.
Optimized the adam calculation to reduce the extra memory requirement

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Fri, 13 Nov 2020 12:17:56 +0000 (21:17 +0900)]
[tensor] Reduce overall memory overhead

Reduce the overall tensor memory overhead
- standardization and normalization now happen in-place
- the input and label tensors used in training are now allocated only
once, externally, and reused for each epoch
- add_i does not allocate new memory now
- removed support for normalization and standardization for the conv
layer

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Wed, 11 Nov 2020 10:49:18 +0000 (19:49 +0900)]
[ccapi] Direct methods to create layers

Added syntactic-sugar constructors for layers and losses.
This allows creating various layers and losses directly with the c++ API

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Wed, 11 Nov 2020 07:45:05 +0000 (16:45 +0900)]
[ccapi] Syntactic sugar constructors

Added syntactic-sugar constructors for optimizers,
which bring optimizer creation closer to the existing API
and make it more readable.

See also #734

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Thu, 12 Nov 2020 11:51:53 +0000 (20:51 +0900)]
[tensor] Update tensor signature for apply

This patch updates the tensor signature for apply to include an output.
Correspondingly, the activation layer and loss layer have been updated.

More changes in the patch:
- The convolution layer has been updated to reuse its output derivative.
- Added getBatchSlice(), which provides a slice of the original tensor
by batch and avoids creating a copy every time

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Thu, 12 Nov 2020 05:23:24 +0000 (14:23 +0900)]
[tensor] Update tensor op signature for dot

Update the tensor op signature for dot.
Further, the fc layer has been updated based on this signature

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Wed, 11 Nov 2020 09:43:35 +0000 (18:43 +0900)]
[Optimizer] Add enum type factory

This patch adds an integer-keyed factory to align with the capi

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Wed, 11 Nov 2020 02:59:20 +0000 (11:59 +0900)]
[Model] Apply appcontext

Apply the appcontext to NeuralNetwork. As of this patch, the `chdir()`
hack is no longer needed :)

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Wed, 11 Nov 2020 12:14:25 +0000 (21:14 +0900)]
[model] Handle loss layer to be added from user

Handle a loss layer created with the c++ API being added by the user

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Mon, 9 Nov 2020 03:16:48 +0000 (12:16 +0900)]
[AppContext] Add AppContext

This patch adds a basic app context that sets the current working
directory

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Tue, 10 Nov 2020 04:57:47 +0000 (13:57 +0900)]
[docs] Update docs about backbone features

Update documentation for the ini with the newly added backbone features

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Tue, 10 Nov 2020 02:23:27 +0000 (11:23 +0900)]
[backbone/ini] Support subgraph with ini backbone

Added support for using a subgraph of a given ini backbone.
Added corresponding unittests as well

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Mon, 9 Nov 2020 06:21:29 +0000 (15:21 +0900)]
[backbone] Add ini backbone properties

Added ini backbone properties:
- scaleSize - scale the size of the model
- preload - load the weights of this backbone model before adding it to
the model
preload has issues which require the model file format to be updated;
wait for #361

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Tue, 10 Nov 2020 01:03:00 +0000 (10:03 +0900)]
[Layer] Change layer type to string

This patch changes layer type to string

**Changes proposed in this PR:**
- Add Layer::getType() and Layer::type
- Add `istrequal` to the `parse_util` for case insensitive compare
- Test changes accordingly

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Mete Ozay [Wed, 11 Nov 2020 12:05:19 +0000 (12:05 +0000)]
Update README.md

- Update links to examples
- Update Getting Started and Running Examples
- Fix grammar and typo.

Signed-off-by: Mete Ozay <meteozay@gmail.com>
Parichay Kapoor [Wed, 4 Nov 2020 04:00:11 +0000 (13:00 +0900)]
[nnstreamer/backbone] Update to support more backbone

Update the nnstreamer backbone layer to support more backbones than just
tflite.
Hold this PR until the nnstreamer bug fix
https://github.com/nnstreamer/nnstreamer/pull/2850 is merged and reflected

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Tue, 10 Nov 2020 12:23:15 +0000 (21:23 +0900)]
[cifar] Update cifar application with backbone

Update the cifar application, which uses a databuffer with generator, to
use the tflite backbone.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>