platform/core/ml/nntrainer.git
4 years ago[optimizer] Bug fix
Parichay Kapoor [Fri, 26 Jun 2020 09:51:43 +0000 (18:51 +0900)]
[optimizer] Bug fix

Tensor's copy constructor and copy assignment operator create a copy of the underlying vector.
This led to a bug in the optimizer, which updated a copy of the weight rather than the weight itself.
Fixed by taking a reference to the weight in the optimizer.
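
A minimal self-contained sketch of the failure mode, using a hypothetical stand-in for Tensor:

```cpp
#include <vector>

// Hypothetical stand-in: like Tensor, copying duplicates the backing vector.
struct T {
  std::vector<float> data{0.0f};
  void add_i(float v) { for (auto &d : data) d += v; }
};

int main() {
  T weight;
  T copy = weight;   // copy constructor clones the data
  copy.add_i(1.0f);  // the bug: only the copy is updated, weight is not
  T &ref = weight;   // the fix: bind a reference to the real weight
  ref.add_i(1.0f);   // weight itself is now updated
}
```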

Resolves #241

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
4 years ago[ Pooling2D ] global max / global average
jijoong.moon [Thu, 25 Jun 2020 07:02:34 +0000 (16:02 +0900)]
[ Pooling2D ] global max / global average

This PR provides global max / global average.
. forwarding global_max / global_average
. backwarding global_max / global_average

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
4 years ago[layer] Support get weights
Parichay Kapoor [Thu, 25 Jun 2020 12:40:43 +0000 (21:40 +0900)]
[layer] Support get weights

Support getWeights functionality equivalent to getGradients

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
4 years ago[gradients] Get gradient of each layer
Parichay Kapoor [Wed, 24 Jun 2020 06:46:13 +0000 (15:46 +0900)]
[gradients] Get gradient of each layer

Added a function to get the gradient of each layer.
Each layer is responsible for filling in the list containing the gradients.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
4 years agoAdd tensor save / read test
Jihoon Lee [Thu, 25 Jun 2020 02:25:02 +0000 (11:25 +0900)]
Add tensor save / read test

This PR adds a tensor save / read test.

Though not directly related to this PR, save & read need
better error handling.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
4 years ago[optimizer] Optimizer to manage list of weights
Parichay Kapoor [Wed, 24 Jun 2020 05:08:28 +0000 (14:08 +0900)]
[optimizer] Optimizer to manage list of weights

Update optimizer to handle a list of weights rather than weight and bias individually,
as layers like RNN and LSTM will have more than just (w, b)

V2:
calculate renamed to apply_gradient as it applies gradients
applied to the conv layer as well

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
4 years ago[ Conv2D ] fix calculate parameter of optimizer
jijoong.moon [Wed, 24 Jun 2020 10:44:21 +0000 (19:44 +0900)]
[ Conv2D ] fix calculate parameter of optimizer

fix calculate function parameters

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
4 years ago[activation] Derivative for activation
Parichay Kapoor [Wed, 24 Jun 2020 05:56:59 +0000 (14:56 +0900)]
[activation] Derivative for activation

The derivative of softmax has been hand-crafted to be different from the others.
Refer to https://github.com/nnstreamer/nntrainer/blob/2a650512813db6ce3bba828b5790066fbc655f14/nntrainer/src/fc_layer.cpp#L265 for the original implementation.
Softmax requires softmax(x) as input for its derivative, while other activations require x as input for their derivative.
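
A hedged sketch of the convention (helper names are hypothetical): most activations differentiate from the pre-activation x, while the softmax path differentiates from its own output y = softmax(x).

```cpp
#include <cmath>

// Most activations: the derivative takes the pre-activation x.
float sigmoid(float x) { return 1.0f / (1.0f + std::exp(-x)); }
float sigmoid_prime(float x) {
  float s = sigmoid(x);
  return s * (1.0f - s);
}

// Softmax: the element-wise derivative helper takes y = softmax(x)
// directly; the diagonal Jacobian term is y * (1 - y).
float softmax_prime(float y) { return y * (1.0f - y); }
```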

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
4 years agoAdd stride and contiguous flag to tensor
Jihoon Lee [Wed, 24 Jun 2020 02:00:10 +0000 (11:00 +0900)]
Add stride and contiguous flag to tensor

**Changes proposed in this PR:**
- Add stride
- Add an is-contiguous flag to tensor

This patch is an anchor for #217

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
4 years ago[ Conv2D ] backwarding
jijoong.moon [Fri, 19 Jun 2020 13:38:42 +0000 (22:38 +0900)]
[ Conv2D ] backwarding

This PR provides backwarding of Convolution 2D Layer

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
4 years agoFix optimizer signature
Jihoon Lee [Wed, 24 Jun 2020 06:44:37 +0000 (15:44 +0900)]
Fix optimizer signature

The optimizer signature no longer has `init_zero`.

This quick fix deletes init_zero from `fc_layer::backward`.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
4 years ago[bias] Bias update missing for SGD
Parichay Kapoor [Wed, 24 Jun 2020 02:57:49 +0000 (11:57 +0900)]
[bias] Bias update missing for SGD

Bias update fixed for SGD, where it only happened when the bias was initialized with 0.
For Adam, the bias update was happening twice.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
4 years ago[fc_layer] Add the deleted statement
Parichay Kapoor [Wed, 24 Jun 2020 05:24:53 +0000 (14:24 +0900)]
[fc_layer] Add the deleted statement

Add back the deleted statement about weight update in the fc layer.
Resolves #221

cc. @zhoonit

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
4 years ago[tensor/tensor_dim] Added equal comparison operation
Parichay Kapoor [Tue, 23 Jun 2020 05:08:56 +0000 (14:08 +0900)]
[tensor/tensor_dim] Added equal comparison operation

Added an equal comparison operation.
Currently it's based on a fixed epsilon.
Needs an update to use a variable number of bits based on the exponent, as done in gtest.
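
A minimal sketch of the fixed-epsilon comparison (the tolerance value is hypothetical); a ULP-based comparison, as gtest does, would replace it later:

```cpp
#include <cmath>

constexpr float epsilon = 1e-5f;  // hypothetical fixed tolerance

bool tensor_value_eq(float a, float b) {
  return std::fabs(a - b) < epsilon;  // too strict for large magnitudes
}
```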

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
4 years agoMove weight_decay handling from opt to layer
Jihoon Lee [Wed, 24 Jun 2020 02:08:28 +0000 (11:08 +0900)]
Move weight_decay handling from opt to layer

**Changes proposed in this PR:**
- remove weight_decay from `Optimizer::calculate` signature
- apply weight decay to fc_layer.cpp

Please note that conv2d_layer::backwarding also needs to handle weight
decay after this PR is merged.

Resolves #213

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
4 years agoAttach activation layer to neuralnet.cpp
Jihoon Lee [Fri, 19 Jun 2020 10:31:16 +0000 (19:31 +0900)]
Attach activation layer to neuralnet.cpp

**Changes proposed in this PR:**
- strip activation function related members in `layer`
- init activation property as `activation_layer`

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

resolves #153

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
4 years ago[Fix/jni] Change tflite-dependency
Jihoon Lee [Mon, 22 Jun 2020 08:07:35 +0000 (17:07 +0900)]
[Fix/jni] Change tflite-dependency

There seems to be a problem with building tensorflow. This PR proposes to use
a prebuilt tensorflow-lite instead of building one.

**Changes proposed in this PR:**
- Change `Applications/android.mk` to use prebuilt library
- Change `prepare_tflite.sh`
- Bump tflite to 1.13.1 as suggested in #20

Resolves #207

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
4 years agoMake .clang-format compatible with version 6
Jihoon Lee [Tue, 23 Jun 2020 01:39:07 +0000 (10:39 +0900)]
Make .clang-format compatible with version 6

Clang-format 6 is widely used. However,
`AllowAllConstructorInitializersOnNextLine`, added in #203, is only supported
from clang-format 9.

This PR reverts the use of `AllowAllConstructorInitializersOnNextLine`
while keeping a similar linting style.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
4 years ago[meson] Arrange file order
Parichay Kapoor [Fri, 19 Jun 2020 08:29:45 +0000 (17:29 +0900)]
[meson] Arrange file order

Arrange file order for easier access

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
4 years ago[loss] Combine softmax with cross entropy
Parichay Kapoor [Thu, 18 Jun 2020 11:10:43 +0000 (20:10 +0900)]
[loss] Combine softmax with cross entropy

Softmax is combined with cross entropy.
Cross entropy still exists on its own as an option but isn't supported.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
4 years ago[loss] Combined cross entropy with sigmoid
Parichay Kapoor [Thu, 18 Jun 2020 08:57:41 +0000 (17:57 +0900)]
[loss] Combined cross entropy with sigmoid

Combined the cross-entropy-with-sigmoid version for loss because of its higher numerical stability.
Note that this happens internally and is not exposed outside.
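
A sketch of why combining helps, using the standard log1p trick (not necessarily the exact form used here): computing sigmoid first and then log(y) underflows as y approaches 0, whereas the combined loss stays finite for any logit x.

```cpp
#include <algorithm>
#include <cmath>

// Combined sigmoid + cross-entropy on the logit x with target t:
//   loss = max(x, 0) - x * t + log(1 + exp(-|x|))
// No intermediate sigmoid value ever under- or overflows.
float sigmoid_cross_entropy(float x, float t) {
  return std::max(x, 0.0f) - x * t + std::log1p(std::exp(-std::fabs(x)));
}
```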

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
4 years ago[Proposal] clang format for initializer list
Jihoon Lee [Fri, 19 Jun 2020 07:41:45 +0000 (16:41 +0900)]
[Proposal] clang format for initializer list

The previous initializer list format was hard to read when it got too long.

e.g.) a layer's initializer list

before:
```cpp
  Layer()
    : last_layer(false), init_zero(false), type(LAYER_UNKNOWN),
      activation(NULL), activation_prime(NULL), activation_type(ACT_UNKNOWN),
      bn_follow(false), weight_decay(), weight_ini_type(WEIGHT_UNKNOWN) {}
```

after:
```cpp
  Layer()
    : last_layer(false),
      init_zero(false),
      type(LAYER_UNKNOWN),
      activation(NULL),
      activation_prime(NULL),
      activation_type(ACT_UNKNOWN),
      bn_follow(false),
      weight_decay(),
      weight_ini_type(WEIGHT_UNKNOWN) {}
```

**Self evaluation:**
1. Build test: [ ]Passed [ ]Failed [x]Skipped
2. Run test: [ ]Passed [ ]Failed [x]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
4 years ago[ Pooling2D ] backwarding
jijoong.moon [Wed, 17 Jun 2020 12:30:33 +0000 (21:30 +0900)]
[ Pooling2D ] backwarding

This PR provides backwarding process of Pooling 2D.
. backwarding for max pooling 2D
. backwarding for average pooling 2D
. backwarding global_max, global_average is NYI.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
4 years agoAdd test for activation layer (wait for #187)
Jihoon Lee [Thu, 18 Jun 2020 06:45:14 +0000 (15:45 +0900)]
Add test for activation layer (wait for #187)

**Changes proposed in this PR:**
- Add test for activation layer

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
4 years agoSeparate activation to layer
Jihoon Lee [Thu, 18 Jun 2020 10:15:19 +0000 (19:15 +0900)]
Separate activation to layer

**Changes proposed in this PR:**
- add activation_layer.[h|cpp]
- add test to activation_layer

See also #153, #152

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
4 years agoFix bug in `Tensor::setValue`
Jihoon Lee [Fri, 19 Jun 2020 08:15:30 +0000 (17:15 +0900)]
Fix bug in `Tensor::setValue`

`memset` can't be used to initialize a float array as explained
[here](https://stackoverflow.com/questions/1040070/initializing-a-float-array-with-memset)
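
A short illustration of the pitfall: memset fills bytes, so it can only produce the all-zero-bytes pattern correctly, while std::fill assigns real float values.

```cpp
#include <algorithm>
#include <cstring>

int main() {
  float buf[4];
  std::memset(buf, 1, sizeof(buf)); // fills each byte with 0x01: garbage floats
  std::fill(buf, buf + 4, 1.0f);    // assigns the actual value 1.0f everywhere
}
```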

**Changes proposed in this PR:**
- change `memset` to `std::fill`

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
4 years ago[warning fix] unsigned int compare with int warning
Parichay Kapoor [Thu, 18 Jun 2020 09:10:00 +0000 (18:10 +0900)]
[warning fix] unsigned int compare with int warning

Fix warnings from comparing unsigned int values with signed values.
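
A typical instance of the warning and its fix, shown as a sketch:

```cpp
#include <vector>

int main() {
  std::vector<int> v{1, 2, 3};
  // A signed int index compared with the unsigned v.size() triggers
  // -Wsign-compare; an unsigned index type makes the comparison sound:
  for (unsigned int i = 0; i < v.size(); ++i) {
    (void)v[i];
  }
}
```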

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
4 years ago[parse] Parse unknown properties
Parichay Kapoor [Thu, 18 Jun 2020 06:10:22 +0000 (15:10 +0900)]
[parse] Parse unknown properties

Properties exposed to users and internal ones differ (losslayer, etc.).
Hence, using `*_string.size() - 1` for the unknown case causes bugs in parse_util.
Replaced with an individual unknown value for each.

V2:
Combined all individual layer properties into common properties in layer.h

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
4 years ago[ Flatten ] backwarding
jijoong.moon [Wed, 17 Jun 2020 10:00:07 +0000 (19:00 +0900)]
[ Flatten ] backwarding

This PR includes back propagation of Flatten Layer.
. backwarding Flatten Layer.
. batch(), channel(), width(), height(), setDim() of tensor
. unit test of flatten layer

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
4 years ago[bugfix] Bug fix with git merge
Parichay Kapoor [Thu, 18 Jun 2020 04:46:04 +0000 (13:46 +0900)]
[bugfix] Bug fix with git merge

A git merge brought in new commits causing build errors.
Needs an urgent merge.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
4 years ago[neuralnet] Handle adding layer with compiled model
Parichay Kapoor [Wed, 17 Jun 2020 06:14:05 +0000 (15:14 +0900)]
[neuralnet] Handle adding layer with compiled model

This PR handles the case when a layer is added after a model has been compiled.
Currently it is set to return an error.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
4 years ago[API] simplify API
Parichay Kapoor [Wed, 17 Jun 2020 04:26:59 +0000 (13:26 +0900)]
[API] simplify API

Remove the exposed function "model_construct_with_conf", as "compile_with_conf" then looks strange without any config.
Rather, just keep one "model_construct", and "compile_with_conf" can take the config file.
Updated the corresponding unittests.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
4 years ago[save/load] save/load the optimizer parameters
Parichay Kapoor [Tue, 16 Jun 2020 10:12:04 +0000 (19:12 +0900)]
[save/load] save/load the optimizer parameters

Save and load the optimizer parameters as well for continued training.
Add an additive option in neuralnet to continue training from a previous training.

Resolves #172

V2:
setOptimizer() bug fix so that it is called with set only for the fc layer and not other layers
setOptimizer() for fc_layer is now unique compared to the virtual defined by its parent
added a continued-training property for the optimizer
added getSize in tensor
save the optimizer type to verify that the loaded optimizer values can be used sensibly
if not loading, move the file seek ahead

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
4 years ago[fc_layer] Initialization bug fix
Parichay Kapoor [Mon, 15 Jun 2020 09:00:27 +0000 (18:00 +0900)]
[fc_layer] Initialization bug fix

Add missing initialization of unit at object construction

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
4 years ago[layer] Added loss layer
Parichay Kapoor [Mon, 15 Jun 2020 04:19:51 +0000 (13:19 +0900)]
[layer] Added loss layer

Added loss layer.
This is added by the framework and is hidden from the user.
This separates all the cost/loss related extra work from the layers.
However, loss and cost are now available for and from all layers.

Resolves #101

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
4 years ago[ Flatten ] forwarding / copy for Flatten Layer
jijoong.moon [Tue, 16 Jun 2020 04:15:56 +0000 (13:15 +0900)]
[ Flatten ] forwarding / copy for Flatten Layer

This PR provides the forwarding/copy functions of the Flatten Layer:
. implement forwarding function
. implement copy function

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
4 years ago[init] Making random deterministic
Parichay Kapoor [Tue, 16 Jun 2020 09:10:21 +0000 (18:10 +0900)]
[init] Making random deterministic

Adding determinism to the random number generators in the library.
DataBuffer has multiple threads, but train/valid/test each use a single thread
and run in sequence, in my understanding.

Resolves #167

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
4 years ago[ Layer copy ] copy function for conv2d & pooling
jijoong.moon [Tue, 16 Jun 2020 04:08:05 +0000 (13:08 +0900)]
[ Layer copy ] copy function for conv2d & pooling

This PR fixes the copy member function of the conv2d and pooling layers.
. include copy of layer variables for conv2d
. implement copy of pooling2d layer

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
4 years ago[neuralnet] Iteration bug fix in learning rate
Parichay Kapoor [Tue, 16 Jun 2020 09:51:58 +0000 (18:51 +0900)]
[neuralnet] Iteration bug fix in learning rate

The learning rate is decayed using the iteration count;
however, the current implementation was using the epoch count.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
4 years ago[Docs] Update readme.md prerequisites
Jihoon Lee [Tue, 16 Jun 2020 04:53:27 +0000 (13:53 +0900)]
[Docs] Update readme.md prerequisites

**Changes proposed in this PR:**
- Update readme.md prerequisites to include gcc version

**Self evaluation:**
1. Build test: [ ]Passed [ ]Failed [X]Skipped
2. Run test: [ ]Passed [ ]Failed [X]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
4 years agoOptimize optimizer::calculate
Jihoon Lee [Tue, 16 Jun 2020 02:41:19 +0000 (11:41 +0900)]
Optimize optimizer::calculate

**Changes proposed in this PR:**
- Optimize `Optimizer::calculate` with the new add_i and applyIf

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
4 years agoAdd `LazyTensor::applyIf`
Jihoon Lee [Tue, 16 Jun 2020 01:24:34 +0000 (10:24 +0900)]
Add `LazyTensor::applyIf`

This PR proposes `LazyTensor::applyIf` for more control flow.

Because of the problem described in http://wg21.link/P0834R0,
a macro is needed to wrap the function.
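
A sketch of the macro idiom, assuming the usual lambda-lifting form; the names here are hypothetical, not the exact macro in this PR.

```cpp
#include <utility>

// Overload sets and function templates cannot be passed as arguments
// directly (see P0834R0), so wrap the call in a generic lambda:
#define LIFT(f)                                      \
  [](auto &&...args) {                               \
    return f(std::forward<decltype(args)>(args)...); \
  }

// hypothetical usage: t.chain().applyIf(cond, LIFT(std::exp)).run();
```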

**Changes proposed in this PR:**
- add `LazyTensor::applyIf` semantics
- add `LazyTensor::applyIf` tests

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
4 years ago[ Flatten ] Skeleton of Flatten Layer
jijoong.moon [Tue, 16 Jun 2020 01:06:45 +0000 (10:06 +0900)]
[ Flatten ] Skeleton of Flatten Layer

This PR provides skeleton code for flatten layer.
- Header & implementation of flatten layer

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
4 years ago[ Pooling2D ] forwarding pooling 2D layer
jijoong.moon [Mon, 15 Jun 2020 07:01:16 +0000 (16:01 +0900)]
[ Pooling2D ] forwarding pooling 2D layer

This PR includes the forwarding process of the pooling 2d layer,
in which:
. implementation of forwarding
. unit test code for forwarding
. input / output generation for pooling 2D (max pooling only)
. move zero_pad to util_func.h

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
4 years ago[ Bug ] setActivation for input layer
jijoong.moon [Mon, 15 Jun 2020 10:44:31 +0000 (19:44 +0900)]
[ Bug ] setActivation for input layer

The input layer does not have an activation property.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
4 years ago[ Pooling2D ] initialize pooling 2d layer
jijoong.moon [Mon, 15 Jun 2020 04:12:48 +0000 (13:12 +0900)]
[ Pooling2D ] initialize pooling 2d layer

This PR includes initialization of the pooling 2d layer.

. check input dimension
. set input / output dimension
. allocate hidden layer

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
4 years ago[Tensor] Change Add_i to have alpha signature
Jihoon Lee [Mon, 15 Jun 2020 06:22:54 +0000 (15:22 +0900)]
[Tensor] Change Add_i to have alpha signature

**Changes proposed in this PR:**
- `Add_i(Tensor &T, float alpha)` for coefficient multiplication
- Change test accordingly
- Optimize `blas` implementation for `multiply_i`

See also: #166
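
A sketch of the assumed semantics: with the alpha coefficient, add_i behaves like BLAS saxpy, accumulating in place without a temporary tensor.

```cpp
#include <cstddef>
#include <vector>

// self += alpha * other, element-wise and in place (assumed semantics)
void add_i(std::vector<float> &self, const std::vector<float> &other,
           float alpha = 1.0f) {
  for (std::size_t i = 0; i < self.size(); ++i)
    self[i] += alpha * other[i];
}
```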

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
4 years ago[util] Simplify sigmoidPrime
Parichay Kapoor [Mon, 15 Jun 2020 08:39:40 +0000 (17:39 +0900)]
[util] Simplify sigmoidPrime

Simplified sigmoidPrime, which IMO might also be more efficient.
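
The standard simplification, shown as a sketch: since sigmoid'(x) = s * (1 - s) with s = sigmoid(x), the derivative needs only one exp call.

```cpp
#include <cmath>

float sigmoid(float x) { return 1.0f / (1.0f + std::exp(-x)); }

// One exp call via the identity s' = s * (1 - s), instead of the
// expanded exp(-x) / (1 + exp(-x))^2 form.
float sigmoidPrime(float x) {
  float s = sigmoid(x);
  return s * (1.0f - s);
}
```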

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
4 years ago[layer] object initialization bugfix
Parichay Kapoor [Mon, 15 Jun 2020 08:37:36 +0000 (17:37 +0900)]
[layer] object initialization bugfix

Added bugfix to object initialization

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
4 years ago[ Pooling2D ] Set Property for Pooling 2D Layer
jijoong.moon [Mon, 15 Jun 2020 02:21:33 +0000 (11:21 +0900)]
[ Pooling2D ] Set Property for Pooling 2D Layer

This PR provides setting properties for the pooling 2d layer,
which are:
  . stride, padding, pooling_size, pooling type

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
4 years agoRefactor: delete Tensor::mat2vec()
Jihoon Lee [Mon, 15 Jun 2020 01:48:37 +0000 (10:48 +0900)]
Refactor: delete Tensor::mat2vec()

This method was initially proposed to flatten and clone a mat to a vector.

However, since `Tensor::getData()` can cover most of its purpose, it seems
no longer needed.

**Changes proposed in this PR:**
- Delete `Tensor::mat2vec()`.

See also:
https://github.com/nnstreamer/nntrainer/pull/149#discussion_r439172191

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
4 years ago[ Pooling2D ] Pooling 2D Layer
jijoong.moon [Fri, 12 Jun 2020 07:39:54 +0000 (16:39 +0900)]
[ Pooling2D ] Pooling 2D Layer

This PR provides skeleton of pooling 2d layer

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
4 years ago[bug] error macros not defined
Parichay Kapoor [Fri, 12 Jun 2020 10:07:33 +0000 (19:07 +0900)]
[bug] error macros not defined

Corrected some of the error state macros which were wrongly defined

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
4 years ago[bugfix] Ignoring error in training/validation
Parichay Kapoor [Fri, 12 Jun 2020 09:24:38 +0000 (18:24 +0900)]
[bugfix] Ignoring error in training/validation

Errors in forward and backward propagation were ignored.
Added error handling for them.

Further, model was being saved after validation.
Errors in validation would result in no model.
Changed to saving of model after training than after validation.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
4 years ago[layers] set status in operations
Parichay Kapoor [Fri, 12 Jun 2020 09:54:07 +0000 (18:54 +0900)]
[layers] set status in operations

Set status in forward and backward operations

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
4 years ago[meson] Remove duplicated bits from test/meson
Jihoon Lee [Fri, 12 Jun 2020 08:16:41 +0000 (17:16 +0900)]
[meson] Remove duplicated bits from test/meson

**Changes proposed in this PR:**
- Add conv2d_unittest to be unzipped for testing
- Add foreach loop to handle duplicated control flow

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
4 years ago[loss] Added missing weight decay loss
Parichay Kapoor [Fri, 12 Jun 2020 07:40:26 +0000 (16:40 +0900)]
[loss] Added missing weight decay loss

Added missing weight decay loss into final loss for intermediate layers

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
4 years agoFix numeric limits in `Tensor`
Jihoon Lee [Fri, 12 Jun 2020 06:04:55 +0000 (15:04 +0900)]
Fix numeric limits in `Tensor`

**Changes proposed in this PR:**
- Add `static constexpr min / max` to Tensor
- Change normalization to use the predefined method for acceleration

Resolves #141
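
A sketch of a plausible pitfall behind this fix (an assumption on my part, not confirmed by the commit): for floats, std::numeric_limits::min() is the smallest positive value, so a running-max scan seeded from min() is wrong; predefined constants avoid the trap.

```cpp
#include <limits>

struct TensorLimits {  // hypothetical holder for the predefined constants
  static constexpr float min_val = std::numeric_limits<float>::lowest();
  static constexpr float max_val = std::numeric_limits<float>::max();
};

// e.g. a max scan must start from lowest(), not from min():
float running_max_seed() { return TensorLimits::min_val; }
```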

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
4 years ago[LazyTensor] Fix memcopy happening on lambda func
Jihoon Lee [Fri, 12 Jun 2020 07:00:37 +0000 (16:00 +0900)]
[LazyTensor] Fix memcopy happening on lambda func

Fix unintentional memcopy occurring in the capture clause of a lambda.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
4 years agoAdd blas operation to `Tensor::l2norm`
Jihoon Lee [Fri, 12 Jun 2020 06:50:55 +0000 (15:50 +0900)]
Add blas operation to `Tensor::l2norm`

Meanwhile, the current (plain) implementation of `l2norm` is not
efficient and has a potential overflow problem.
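
A sketch contrasting the two paths, assuming a cblas setup: the plain loop can overflow on the intermediate sum of squares, while cblas_snrm2 scales internally.

```cpp
#include <cblas.h>
#include <cmath>

// Plain version: the sum of squares may overflow for large magnitudes.
float l2norm_plain(const float *v, int n) {
  float sum = 0.0f;
  for (int i = 0; i < n; ++i) sum += v[i] * v[i];
  return std::sqrt(sum);
}

// BLAS version: snrm2 computes the same norm with internal scaling.
float l2norm_blas(const float *v, int n) { return cblas_snrm2(n, v, 1); }
```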

**Changes proposed in this PR:**
- Add blas operation to `Tensor::l2norm`
- Add fixme

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
4 years ago[ Unit Test ] Conv2D Forwarding Unit Test
jijoong.moon [Fri, 12 Jun 2020 01:27:47 +0000 (10:27 +0900)]
[ Unit Test ] Conv2D Forwarding Unit Test

This PR provides Conv2D Forwarding Unit Test.
. update python script to generate random input / kernel / golden Data
. update unit test code for conv2d & evaluate results

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
4 years ago[tensor] Added axis dimension to average
Parichay Kapoor [Thu, 11 Jun 2020 04:06:45 +0000 (13:06 +0900)]
[tensor] Added axis dimension to average

Current implementation of average() is confusing when compared with sum().
Both sum() and average() do not take axis, and perform similar operations.
However, the semantics of their output is different -
- sum() acts over the dimensions other than batch size
- average() acts over the batch size

To avoid this confusion, average is provided with axis argument with 0 as default.
This is now analogous to sum(axis) rather than sum().
Further, sum() is renamed to sum_by_batch(), as that's what it does, and it is different from sum(axis).
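
A sketch of the axis-0 case under an assumed flat (batch, feature) layout; the other axes follow the same pattern.

```cpp
#include <cstddef>
#include <vector>

// average over axis 0 (batch): the output has one mean per feature
std::vector<float> average_axis0(const std::vector<float> &t,
                                 std::size_t batch, std::size_t feat) {
  std::vector<float> out(feat, 0.0f);
  for (std::size_t b = 0; b < batch; ++b)
    for (std::size_t f = 0; f < feat; ++f)
      out[f] += t[b * feat + f] / static_cast<float>(batch);
  return out;
}
```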

V2:
Applied the above for lazy_tensor

Minor updates to TensorDim are also made

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
4 years agoAdd openmp parallelism to l2norm()
Jihoon Lee [Thu, 11 Jun 2020 08:59:26 +0000 (17:59 +0900)]
Add openmp parallelism to l2norm()

**Changes proposed in this PR:**
- Fix: `meson -Denable-blas=false` "no dep found" error
- Fix: sum() function bug when `USE_BLAS` is not defined
- Add openmp dependency
- Apply openmp to l2norm function

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
4 years agoOptimizing concept for Tensor operation
Jihoon Lee [Wed, 10 Jun 2020 09:39:11 +0000 (18:39 +0900)]
Optimizing concept for Tensor operation

**Changes proposed in this PR:**
- `i_operation` variants are used to calculate without memcopy

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
4 years ago[ Conv2D ] forwarding calculation
jijoong.moon [Wed, 10 Jun 2020 05:18:25 +0000 (14:18 +0900)]
[ Conv2D ] forwarding calculation

This PR provides forwarding calculation of conv2d layer

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
4 years agoRegister Tensor ops to lazy tensor
Jihoon Lee [Wed, 10 Jun 2020 07:50:58 +0000 (16:50 +0900)]
Register Tensor ops to lazy tensor

**Changes proposed in this PR:**
- Register tensor ops to `LazyTensor`
- Add test fixture to `LazyTensor`
- Add tests to `LazyTensor`
- Change `Tensor::apply` signature to permit closures

**Changes proposed in V2:**
- Delete and fix unnecessary comments
- Change `Tensor::getData` signature

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
4 years ago[ Test ] Input Generator
jijoong.moon [Wed, 10 Jun 2020 05:14:55 +0000 (14:14 +0900)]
[ Test ] Input Generator

This PR provides a layer input generator to evaluate the layer
calculation.

[ To Do ]
Golden Test Result Generation should be added.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
4 years ago[ CI ] Add Build Dependency flatbuffers-dev
jijoong.moon [Thu, 11 Jun 2020 07:00:57 +0000 (16:00 +0900)]
[ CI ] Add Build Dependency flatbuffers-dev

For Ubuntu, add flatbuffers-dev.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
4 years ago[ Conv2d ] save & read unittest
jijoong.moon [Wed, 10 Jun 2020 00:25:36 +0000 (09:25 +0900)]
[ Conv2d ] save & read unittest

Add save & read unit test

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
4 years ago[ Conv2d ] save & read weight from file
jijoong.moon [Tue, 9 Jun 2020 11:12:01 +0000 (20:12 +0900)]
[ Conv2d ] save & read weight from file

This PR provides read & save of Kernel and Bias.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
4 years ago[Tensor] Add divide_i / multiply_i (Tensor)
Jihoon Lee [Tue, 9 Jun 2020 12:49:49 +0000 (21:49 +0900)]
[Tensor] Add divide_i / multiply_i (Tensor)

**Changes proposed in this PR:**
- Add `divide_i(Tensor)`
- Add `multiply_i(Tensor)`

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
4 years ago[Tensor] Add `subtract_i(Tensor)` operator
Jihoon Lee [Mon, 8 Jun 2020 10:48:50 +0000 (19:48 +0900)]
[Tensor] Add `subtract_i(Tensor)` operator

**Changes proposed in this PR:**
- Add subtract_i(Tensor) operator for a memcpyless operation
- Lint unformatted lines

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
4 years ago[ Refactor ] Remove unused function in layer class
jijoong.moon [Tue, 9 Jun 2020 10:34:57 +0000 (19:34 +0900)]
[ Refactor ] Remove unused function in layer class

There is no need for initialize(...) to exist in the layer class.
This PR removes this unused function.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
4 years ago[ Conv2D ] initialize Conv2DLayer
jijoong.moon [Mon, 8 Jun 2020 04:37:30 +0000 (13:37 +0900)]
[ Conv2D ] initialize Conv2DLayer

During initialize:
 1. set input & output dimension
 2. set Tensor for Kernel and bias
The setInputDimension() function should be called before this is called.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
4 years agoAdd multiply_i / divide_i operator
Jihoon Lee [Tue, 9 Jun 2020 05:06:17 +0000 (14:06 +0900)]
Add multiply_i / divide_i operator

**Changes proposed in this PR:**
- Add `multiply_i(float)`
- Add `divide_i(float)`

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
4 years ago[Tensor] Add `subtract_i(float)` operator
Jihoon Lee [Mon, 8 Jun 2020 10:26:49 +0000 (19:26 +0900)]
[Tensor] Add `subtract_i(float)` operator

**Changes proposed in this PR:**
- Add subtract_i operator
- Revise subtract operator
- Change precision in add_i / subtract_i test

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
4 years agoAdd LazyTensor to support lazy evaluation
Jihoon Lee [Mon, 8 Jun 2020 01:46:24 +0000 (10:46 +0900)]
Add LazyTensor to support lazy evaluation

This PR proposes 2 concepts.

1. A `*_i` type of operation for `Tensor`, which mutates the target
tensor instead of memcopying the tensor. (511c)
2. Chaining for lazy & memcopyless operations.

For example Tensor could do the operation in such manner:

```cpp
Tensor t;
t.chain() /* Initial memcpy happens to guarantee immutability */
 .add_i(x)
 .multiply_i(y) /* NYI */
 .divide_i(y) /* NYI */
 .run()
```

Operations are delayed until `.run()` is called. This is done by a
monadic object named `DefferedTensor`.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
[Tensor] Add add_i for memcpyless operation

This PR proposes a `*_i()` operation for Tensor to support memcpyless
operation.

**Changes proposed in this PR:**
- Add add_i for memcpyless operation

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
4 years ago[Tensor] Add `add_i(Tensor T)` operator
Jihoon Lee [Mon, 8 Jun 2020 04:56:14 +0000 (13:56 +0900)]
[Tensor] Add `add_i(Tensor T)` operator

**Changes proposed in this PR:**
- add_i(Tensor T) operator for memcopyless operation

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
4 years ago[ Refactor ] using input dimension and output dimension.
jijoong.moon [Fri, 5 Jun 2020 08:22:41 +0000 (17:22 +0900)]
[ Refactor ] using input dimension and output dimension.

Currently each layer handles just one TensorDim, for the weight. However,
there are more complicated cases to handle. Therefore each layer now has
TensorDim instances to specify the input and output activation dimensions.
The original dim variable of the layer is used to define the weight dimension.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
4 years ago[ Refactor ] Modify to deal with 3D Tensor
jijoong.moon [Thu, 4 Jun 2020 07:26:54 +0000 (16:26 +0900)]
[ Refactor ] Modify to deal with 3D Tensor

Until now, nntrainer could not handle a 3D Tensor including channel; it only
supported a 2D Tensor of batch, height, width.
From this PR, nntrainer can handle a 3D Tensor including channel, but more
optimization is required.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
4 years ago[ Layer ] Set Property for Conv2D Layer
jijoong.moon [Wed, 3 Jun 2020 05:37:32 +0000 (14:37 +0900)]
[ Layer ] Set Property for Conv2D Layer

. parse and set Convolution 2D Property
   0. input shape : string
   1. bias zero : bool
   4. activation : string (type)
   6. weight_decay : string (type)
   7. weight_decay_lambda : float
   9. filter : int
   10. kernel_size : ( n , m )
   11. stride : ( n, m )
   12. padding : valid | same

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
4 years ago[ Layer ] Draft of Conv2D Layer
jijoong.moon [Mon, 1 Jun 2020 12:20:11 +0000 (21:20 +0900)]
[ Layer ] Draft of Conv2D Layer

This is the first draft of 2d convolution layer

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
4 years agoReduce duplicated function call
Jihoon Lee [Fri, 5 Jun 2020 06:56:44 +0000 (15:56 +0900)]
Reduce duplicated function call

**Changes proposed in this PR:**
- Reduce `average()` calls in `Optimizer::calculate`
- Delete the setZero call after constructing `Tensor`
- Reduce indexing and multiplication in a few loops

This PR results in better performance, including a roughly 60% decrease
in the time spent in `Tensor::average()` in the
Classification example (checked in vtune).

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
4 years ago[ Refactor ] Manage unit tests with separate files
jijoong.moon [Wed, 3 Jun 2020 01:10:03 +0000 (10:10 +0900)]
[ Refactor ] Manage unit tests with separate files

The unit test code is too big to manage, so make a separate file for each
class / group of classes:
. unittest_nntrainer_internal
  : test neural network / optimizer
. unittest_nntrainer_tensor
  : test tensor/tensorDim
. unittest_nntrainer_layer
  : test layers

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
4 years ago[Example] Add exit when naviframe is empty
Jihoon Lee [Thu, 4 Jun 2020 05:46:19 +0000 (14:46 +0900)]
[Example] Add exit when naviframe is empty

**Changes proposed in this PR:**
- Add and use _on_back_pressed_cb when back button is pressed
- Change `view_routes_to` signature to return the naviframe item.
- Delete unused functions

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
4 years ago[ Bug ] possible corruption by double erase
jijoong.moon [Tue, 2 Jun 2020 03:45:55 +0000 (12:45 +0900)]
[ Bug ] possible corruption by double erase

There is a possible double erase on a vector due to duplicate random
number generation.
In this PR, remove the use of random numbers and erase just once.

**Self evaluation:**
1. Build test:   [X]Passed [ ]Failed [ ]Skipped
2. Run test:     [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
4 years ago[ Refactor ] Layer Method for weight initialization
jijoong.moon [Tue, 2 Jun 2020 01:31:37 +0000 (10:31 +0900)]
[ Refactor ] Layer Method for weight initialization

The weight initialization method could not be used by other layers.
In this PR, move it into the Layer class for use by other layers.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
4 years ago[ Refactor ] Tensor has TensorDim
jijoong.moon [Mon, 1 Jun 2020 12:17:34 +0000 (21:17 +0900)]
[ Refactor ] Tensor has TensorDim

Currently the Tensor class handles the dimension with its own data
structure.

It is better to use TensorDim for this for consistency with other
classes of NNTrainer.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
4 years ago[ Example ] Tizen C API Train with Generator Example
jijoong.moon [Mon, 1 Jun 2020 04:50:14 +0000 (13:50 +0900)]
[ Example ] Tizen C API Train with Generator Example

Demo example to train with Tizen C API Train with generator.

Add simple example to train 1 FC Layer neural network and get the
train & validation data from generator created by user.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
4 years ago[ CI ] Fix wrong ci url
jijoong.moon [Tue, 2 Jun 2020 22:25:37 +0000 (07:25 +0900)]
[ CI ] Fix wrong ci url

Recently we moved the CI server, and in some cases the change was not
applied.

In this PR, change it in the README.md file.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
4 years ago[Example] Change widget app to ui app
Jihoon Lee [Fri, 29 May 2020 06:49:17 +0000 (15:49 +0900)]
[Example] Change widget app to ui app

**Changes proposed in this PR:**
- The scaffolding was a widget app, but widget apps have a
  naviframe limitation. This PR ports the widget app to a UI app.

**Minor changes proposed in this PR:**
- Add handler for 'route/to' signal.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
4 years ago[API] Training API with generator
jijoong.moon [Tue, 19 May 2020 08:09:43 +0000 (17:09 +0900)]
[API] Training API with generator

There is no API to get training, validation, and test data from a generator
function.

This PR introduces a new training API with a generator function:
. New Tizen API.
. Add Unit Test API with generator function.
. Add Test generator for future test.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
4 years ago[Example] Add basic layout with edc file
Jihoon Lee [Wed, 27 May 2020 08:57:47 +0000 (17:57 +0900)]
[Example] Add basic layout with edc file

**Changes proposed in this PR**
- Add scaffolding layouts to be used to construct multiple pages
- Change group "main" -> "view"
- Update .gitignore to ignore tizen studio generated files
- v2: Add simple program to determine button focus
- v2: Add signal emition when clicked to program

**Self evaluation:**
1. Build test: [ ]Passed [ ]Failed [X]Skipped
2. Run test: [ ]Passed [ ]Failed [X]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
4 years ago[ Fix ] Fix potential memory leak
jijoong.moon [Tue, 26 May 2020 01:08:28 +0000 (10:08 +0900)]
[ Fix ] Fix potential memory leak

There is a memory leak at the last iteration of an epoch: it runs the data
buffer again and exits without freeing the memory.
In this PR, this issue is fixed.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
4 years agoUsing 1D array to get the data from user function
jijoong.moon [Fri, 22 May 2020 10:02:09 +0000 (19:02 +0900)]
Using 1D array to get the data from user function

It is not good to get the user data in a 3D std::vector format, as C does
not support it. Therefore it is much better to get the data as a 1D
float array (float *). In this PR, the function pointer is redefined to get
data in float * format.
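
A sketch of such a C-compatible generator signature (hypothetical, not necessarily the exact one introduced here):

```cpp
// Flat float buffers instead of nested std::vector, so C callers can
// provide data too. All names here are hypothetical.
extern "C" {
typedef int (*data_gen_cb)(float **input, float **label, bool *last,
                           void *user_data);
}
```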

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
4 years agoUse TensorDim for databuffer
jijoong.moon [Fri, 22 May 2020 02:46:48 +0000 (11:46 +0900)]
Use TensorDim for databuffer

Change the databuffer class to use the TensorDim class for input.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
4 years agoSplit Tensor Dim Class from Tensor
jijoong.moon [Fri, 22 May 2020 01:33:33 +0000 (10:33 +0900)]
Split Tensor Dim Class from Tensor

Split Tensor Dim Class from Tensor for better usability

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
4 years agoFix possible error that cause double free
Jihoon Lee [Wed, 20 May 2020 07:32:10 +0000 (16:32 +0900)]
Fix possible error that cause double free

There is an occasional error from unittest_tizen_capi.

```bash
[ RUN      ] nntrainer_capi_nnmodel.train_with_file_01_p
training
data_buffer made
data_buffer made done
size: 2
data_buffer clear
data_buffer end
double free or corruption (out)
Aborted (core dumped)
```

**Changes proposed in this PR:**
Join the data buffer threads first, before clearing the vectors that are
used in the target function, to (possibly) prevent the double free.
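
A sketch of the ordering fix (names hypothetical): joining first guarantees no worker still reads the vectors when they are cleared.

```cpp
#include <thread>
#include <vector>

void stop_data_buffer(std::vector<std::thread> &workers,
                      std::vector<float> &shared_data) {
  for (auto &t : workers)
    if (t.joinable()) t.join();  // let every consumer finish first
  shared_data.clear();           // only now release the shared storage
}
```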

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
4 years ago[Unit Test] Add checkValidation() test case of each Layer class
Sangjung Woo [Tue, 19 May 2020 07:13:03 +0000 (16:13 +0900)]
[Unit Test] Add checkValidation() test case of each Layer class

This patch adds checkValidation() method test case of each Layer class.

Signed-off-by: Sangjung Woo <sangjung.woo@samsung.com>
4 years ago[Layer] Set layer type in constructor
Sangjung Woo [Tue, 19 May 2020 07:09:29 +0000 (16:09 +0900)]
[Layer] Set layer type in constructor

Even though a layer instance is created, its type property is not set
properly, so it is LAYER_UNKNOWN by default. For this reason, the
checkValidation() method does not work properly. This patch fixes that
bug.

Signed-off-by: Sangjung Woo <sangjung.woo@samsung.com>