Parichay Kapoor [Fri, 12 Jun 2020 09:24:38 +0000 (18:24 +0900)]
[bugfix] Ignoring error in training/validation
Errors in forward and backward propagation were ignored.
Added error handling for them.
Further, the model was being saved after validation, so an error during validation would result in no saved model.
Changed to saving the model after training rather than after validation.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Fri, 12 Jun 2020 09:54:07 +0000 (18:54 +0900)]
[layers] set status in operations
Set status in forward and backward operations
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Fri, 12 Jun 2020 08:16:41 +0000 (17:16 +0900)]
[meson] Remove duplicated bits from test/meson
**Changes proposed in this PR:**
- Add conv2d_unittest to be unzipped for testing
- Add foreach loop to handle duplicated control flow
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Fri, 12 Jun 2020 07:40:26 +0000 (16:40 +0900)]
[loss] Added missing weight decay loss
Added the missing weight decay loss to the final loss for intermediate layers
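As a rough standalone illustration (not the project's exact code), the hypothetical helper below computes the usual L2 penalty `0.5 * lambda * ||W||^2` that such a weight decay term adds to a layer's reported loss; the exact scaling used in the project is an assumption.
```cpp
#include <cstddef>

// Illustrative only: an L2 weight-decay term added to the reported loss.
float weight_decay_loss(const float *weights, std::size_t len, float lambda) {
  float squared_sum = 0.0f;
  for (std::size_t i = 0; i < len; ++i)
    squared_sum += weights[i] * weights[i];
  return 0.5f * lambda * squared_sum; // 0.5 factor assumed, may differ
}
```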
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Fri, 12 Jun 2020 06:04:55 +0000 (15:04 +0900)]
Fix numeric limits in `Tensor`
**Changes proposed in this PR:**
- Add `static constexpr min / max` to Tensor
- Change normalization to use the predefined method for acceleration
Resolves #141
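A minimal standalone sketch of what using predefined numeric limits in normalization can look like; the helper name and the min-max scaling to [0, 1] are assumptions, not the project's exact code.
```cpp
#include <algorithm>
#include <cstddef>
#include <limits>
#include <vector>

// Sketch: seed the min/max scan with std::numeric_limits instead of
// ad-hoc sentinel values, then rescale to [0, 1].
std::vector<float> normalize(const std::vector<float> &data) {
  float min_v = std::numeric_limits<float>::max();
  float max_v = std::numeric_limits<float>::lowest();
  for (float v : data) {
    min_v = std::min(min_v, v);
    max_v = std::max(max_v, v);
  }
  std::vector<float> out(data.size());
  const float range = max_v - min_v;
  for (std::size_t i = 0; i < data.size(); ++i)
    out[i] = (range > 0.0f) ? (data[i] - min_v) / range : 0.0f;
  return out;
}
```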
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Fri, 12 Jun 2020 07:00:37 +0000 (16:00 +0900)]
[LazyTensor] Fix memcopy happening on lambda func
Fix unintentional memcopy occurring in the lambda capture clause
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Fri, 12 Jun 2020 06:50:55 +0000 (15:50 +0900)]
Add blas operation to `Tensor::l2norm`
The current (plain) implementation of `l2norm` is not
efficient and has a potential overflow problem.
**Changes proposed in this PR:**
- Add blas operation to `Tensor::l2norm`
- Add fixme
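A hedged sketch of the intended shape, assuming the cblas interface is available when `USE_BLAS` is defined; the fallback keeps the overflow-prone plain loop that the FIXME refers to.
```cpp
#include <cmath>
#ifdef USE_BLAS
#include <cblas.h>
#endif

// Sketch: delegate the L2 norm to BLAS when enabled, plain loop otherwise.
float l2norm(const float *data, unsigned int len) {
#ifdef USE_BLAS
  return cblas_snrm2(static_cast<int>(len), data, 1);
#else
  float squared_sum = 0.0f; // FIXME: can overflow for large magnitudes
  for (unsigned int i = 0; i < len; ++i)
    squared_sum += data[i] * data[i];
  return std::sqrt(squared_sum);
#endif
}
```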
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
jijoong.moon [Fri, 12 Jun 2020 01:27:47 +0000 (10:27 +0900)]
[ Unit Test ] Conv2D Forwarding Unit Test
This PR provides Conv2D Forwarding Unit Test.
. update python script to generate random input / kernel / golden Data
. update unit test code for conv2d & evaluate results
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Parichay Kapoor [Thu, 11 Jun 2020 04:06:45 +0000 (13:06 +0900)]
[tensor] Added axis dimension to average
Current implementation of average() is confusing when compared with sum().
Neither sum() nor average() takes an axis, and they perform similar operations.
However, the semantics of their output is different -
- sum() acts over the dimensions other than batch size
- average() acts over the batch size
To avoid this confusion, average() now takes an axis argument with 0 as the default,
making it analogous to sum(axis) rather than sum().
Further, sum() is renamed to sum_by_batch(), as that is what it does and it differs from sum(axis).
V2:
Applied the above for lazy_tensor
Minor updates to TensorDim are also made
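A standalone sketch of the axis-0 (over-the-batch) semantics that average() keeps as its default; the helper below is illustrative only and is not the project API.
```cpp
#include <cstddef>
#include <vector>

// Average a flat [batch x feature] buffer along axis 0, i.e. over the batch.
std::vector<float> average_axis0(const std::vector<float> &data,
                                 std::size_t batch, std::size_t feature) {
  std::vector<float> out(feature, 0.0f);
  for (std::size_t b = 0; b < batch; ++b)
    for (std::size_t f = 0; f < feature; ++f)
      out[f] += data[b * feature + f];
  for (float &v : out)
    v /= static_cast<float>(batch);
  return out;
}
```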
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Thu, 11 Jun 2020 08:59:26 +0000 (17:59 +0900)]
Add openmp parallelism to l2norm()
**Changes proposed in this PR:**
- Fix: `meson -Denable-blas=false` fails with a dependency-not-found error
- Fix: sum() bug when `USE_BLAS` is not defined
- Add openmp dependency
- Apply openmp to l2norm function
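A hedged sketch of what the OpenMP path can look like for the plain (non-BLAS) l2norm loop, assuming a build with `-fopenmp`; names are illustrative.
```cpp
#include <cmath>

// Sketch: the squared sum becomes a parallel reduction; without OpenMP the
// pragma is ignored and the loop runs serially with the same result.
float l2norm_omp(const float *data, unsigned int len) {
  float squared_sum = 0.0f;
#pragma omp parallel for reduction(+ : squared_sum)
  for (int i = 0; i < static_cast<int>(len); ++i)
    squared_sum += data[i] * data[i];
  return std::sqrt(squared_sum);
}
```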
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Wed, 10 Jun 2020 09:39:11 +0000 (18:39 +0900)]
Optimizing concept for Tensor operation
**Changes proposed in this PR:**
- In-place (`*_i`) operations are used to calculate without memcopy
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
jijoong.moon [Wed, 10 Jun 2020 05:18:25 +0000 (14:18 +0900)]
[ Conv2D ] forwarding calculation
This PR provides the forwarding calculation of the conv2d layer
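A hedged sketch of the core computation only: a naive single-channel, stride-1, no-padding forward pass. The real layer additionally handles channels, multiple filters, bias, stride, and padding modes on top of this loop.
```cpp
#include <vector>

// Naive 2D convolution: out[y][x] = sum over kernel of in[y+ky][x+kx] * k[ky][kx]
std::vector<float> conv2d_forward(const std::vector<float> &in, int ih, int iw,
                                  const std::vector<float> &k, int kh, int kw) {
  const int oh = ih - kh + 1;
  const int ow = iw - kw + 1;
  std::vector<float> out(oh * ow, 0.0f);
  for (int y = 0; y < oh; ++y)
    for (int x = 0; x < ow; ++x)
      for (int ky = 0; ky < kh; ++ky)
        for (int kx = 0; kx < kw; ++kx)
          out[y * ow + x] += in[(y + ky) * iw + (x + kx)] * k[ky * kw + kx];
  return out;
}
```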
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Jihoon Lee [Wed, 10 Jun 2020 07:50:58 +0000 (16:50 +0900)]
Register Tensor ops to lazy tensor
**Changes proposed in this PR:**
- Register tensor ops to `LazyTensor`
- Add test fixture to `LazyTensor`
- Add tests to `LazyTensor`
- Change `Tensor::apply` signature to permit closures
**Changes proposed in V2:**
- Delete and fix unnecessary comments
- Change `Tensor::getData` signature
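A standalone sketch (not the project class) of the kind of signature change that permits closures: taking a `std::function` lets capturing lambdas be passed to apply().
```cpp
#include <functional>
#include <vector>

// Sketch: element-wise apply that accepts any callable, including closures.
std::vector<float> apply(const std::vector<float> &in,
                         std::function<float(float)> f) {
  std::vector<float> out;
  out.reserve(in.size());
  for (float v : in)
    out.push_back(f(v));
  return out;
}

// Usage with a closure capturing a local scale factor:
//   float scale = 0.5f;
//   auto halved = apply(data, [scale](float v) { return v * scale; });
```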
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
jijoong.moon [Wed, 10 Jun 2020 05:14:55 +0000 (14:14 +0900)]
[ Test ] Input Generator
This PR provides a layer input generator to evaluate the layer
calculation.
[ To Do ]
Golden Test Result Generation should be added.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Thu, 11 Jun 2020 07:00:57 +0000 (16:00 +0900)]
[ CI ] Add Build Dependency flatbuffers-dev
For Ubuntu, add flatbuffers-dev
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Wed, 10 Jun 2020 00:25:36 +0000 (09:25 +0900)]
[ Conv2d ] save & read unittest
Add save & read unit test
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Tue, 9 Jun 2020 11:12:01 +0000 (20:12 +0900)]
[ Conv2d ] save & read weight from file
This PR provides reading & saving of the kernel and bias
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Jihoon Lee [Tue, 9 Jun 2020 12:49:49 +0000 (21:49 +0900)]
[Tensor] Add divide_i / multiply_i (Tensor)
**Changes proposed in this PR:**
- Add `divide_i(Tensor)`
- Add `multiply_i(Tensor)`
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Mon, 8 Jun 2020 10:48:50 +0000 (19:48 +0900)]
[Tensor] Add `subtract_i(Tensor)` operator
**Changes proposed in this PR:**
- Add subtract_i(Tensor) operator for a memcpyless operation
- Lint unformatted lines
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
jijoong.moon [Tue, 9 Jun 2020 10:34:57 +0000 (19:34 +0900)]
[ Refactor ] Remove unused function in layer class
The initialize(...) function in the layer class does not need to exist.
This PR removes this unused function.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Mon, 8 Jun 2020 04:37:30 +0000 (13:37 +0900)]
[ Conv2D ] initialize Conv2DLayer
During initialize,
1. set the input & output dimensions
2. set Tensors for the kernel and bias
The setInputDimension() function should be called before initialize() is called.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Jihoon Lee [Tue, 9 Jun 2020 05:06:17 +0000 (14:06 +0900)]
Add multiply_i / divide_i operator
**Changes proposed in this PR:**
- Add `multiply_i(float)`
- Add `divide_i(float)`
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Mon, 8 Jun 2020 10:26:49 +0000 (19:26 +0900)]
[Tensor] Add `subtract_i(float)` operator
**Changes proposed in this PR:**
- Add subtract_i operator
- Revise subtract operator
- Change precision in add_i / subtract_i test
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Mon, 8 Jun 2020 01:46:24 +0000 (10:46 +0900)]
Add LazyTensor to support lazy evaluation
This PR proposes two concepts.
1. Propose `*_i`-type operations for `Tensor`, which mutate the target
tensor instead of memcopying it. (511c)
2. Add chaining for lazy & memcopyless operation.
For example, a Tensor could perform operations in this manner:
```cpp
Tensor t;
t.chain() /* Initial memcpy happens to guarantee immutability */
  .add_i(x)
  .multiply_i(y) /* NYI */
  .divide_i(y)   /* NYI */
  .run()
```
Operations are delayed until `.run()` is called. This is done by a
monadic object named `DefferedTensor`.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
[Tensor] Add add_i for memcpyless operation
This PR proposes the `*_i()` operation for Tensor to support memcpyless
operation.
**Changes proposed in this PR:**
- Add add_i for memcpyless operation
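A standalone sketch of the naming convention, with hypothetical free functions standing in for the Tensor methods: the plain operation returns a new buffer, while the `_i` variant mutates its target and returns a status, avoiding the memcopy.
```cpp
#include <vector>

// Plain version: allocates and copies, then operates on the copy.
std::vector<float> add(const std::vector<float> &a, float value) {
  std::vector<float> out(a); // copy happens here
  for (float &v : out)
    v += value;
  return out;
}

// In-place version: mutates the target, no copy, returns a status code.
int add_i(std::vector<float> &a, float value) {
  for (float &v : a)
    v += value;
  return 0; // status code
}
```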
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Mon, 8 Jun 2020 04:56:14 +0000 (13:56 +0900)]
[Tensor] Add `add_i(Tensor T)` operator
**Changes proposed in this PR:**
- add_i(Tensor T) operator for memcopyless operation
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
jijoong.moon [Fri, 5 Jun 2020 08:22:41 +0000 (17:22 +0900)]
[ Refactor ] Using input dimension and output dimension
Currently each layer handles just one TensorDim, for the weight. However,
there are more complicated cases to support. Therefore each layer now has
TensorDim instances to specify the input and output activation dimensions.
The original dim variable of the layer is used to define the weight dimension.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Thu, 4 Jun 2020 07:26:54 +0000 (16:26 +0900)]
[ Refactor ] Modify to deal with 3D Tensor
Until now, nntrainer could not handle 3D Tensors including a channel
dimension; it only supported 2D Tensors (batch, height, width).
From this PR, nntrainer can handle 3D Tensors including channel, but more
optimization is required.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Wed, 3 Jun 2020 05:37:32 +0000 (14:37 +0900)]
[ Layer ] Set Property for Conv2D Layer
. Parse and set Convolution 2D properties
0. input shape : string
1. bias zero : bool
4. activation : string (type)
6. weight_decay : string (type)
7. weight_decay_lambda : float
9. filter : int
10. kernel_size : ( n , m )
11. stride : ( n, m )
12. padding : valid | same
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Mon, 1 Jun 2020 12:20:11 +0000 (21:20 +0900)]
[ Layer ] Draft of Conv2D Layer
This is the first draft of 2d convolution layer
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Jihoon Lee [Fri, 5 Jun 2020 06:56:44 +0000 (15:56 +0900)]
Reduce duplicated function call
**Changes proposed in this PR:**
- Reduce `average()` calls in `Optimizer::calculate`
- Delete the setZero call after constructing a `Tensor`
- Reduce indexing and multiplication in a few loops
This PR results in better performance, including a roughly 60% decrease
in time spent in `Tensor::average()` in the
Classification example (checked with vtune).
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
jijoong.moon [Wed, 3 Jun 2020 01:10:03 +0000 (10:10 +0900)]
[ Refactor ] Manage unit tests with separate files
The unit test code is too big to manage, so make a separate file for each
class / group of classes.
. unittest_nntrainer_internal
: test neural network / optimizer
. unittest_nntrainer_tensor
: test tensor/tensorDim
. unittest_nntrainer_layer
: test layers
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Jihoon Lee [Thu, 4 Jun 2020 05:46:19 +0000 (14:46 +0900)]
[Example] Add exit when naviframe is empty
**Changes proposed in this PR:**
- Add and use _on_back_pressed_cb when back button is pressed
- Change `view_routes_to` signature to return the naviframe item.
- Delete unused functions
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
jijoong.moon [Tue, 2 Jun 2020 03:45:55 +0000 (12:45 +0900)]
[ Bug ] Possible corruption by double erase
There is a possible double erase on the vector due to duplicate random
number generation.
In this PR, remove the use of random numbers and erase just once.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Tue, 2 Jun 2020 01:31:37 +0000 (10:31 +0900)]
[ Refactor ] Layer Method for weight initialization
The weight initialization method could not be used by other layers.
In this PR, move it into the Layer class so other layers can use it.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Mon, 1 Jun 2020 12:17:34 +0000 (21:17 +0900)]
[ Refactor ] Tensor has TensorDim
Currently the Tensor class handles the dimension with its own data
structure.
It is better to use TensorDim for this for consistency with other
classes of NNTrainer.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Mon, 1 Jun 2020 04:50:14 +0000 (13:50 +0900)]
[ Example ] Tizen C API Train with Generator Example
Demo example to train with the Tizen C Train API using a generator.
Add a simple example that trains a 1-FC-layer neural network and gets the
training & validation data from a generator created by the user.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Tue, 2 Jun 2020 22:25:37 +0000 (07:25 +0900)]
[ CI ] Fix wrong ci url
Recently we moved the CI server and in some places the change was not
applied.
In this PR, change it in the README.md file.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Jihoon Lee [Fri, 29 May 2020 06:49:17 +0000 (15:49 +0900)]
[Example] Change widget app to ui app
**Changes proposed in this PR:**
- The scaffolding was a widget app, but widget apps have a
naviframe limitation. This PR ports the widget app to a UI app.
**Minor changes proposed in this PR:**
- Add handler for 'route/to' signal.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
jijoong.moon [Tue, 19 May 2020 08:09:43 +0000 (17:09 +0900)]
[API] Training API with generator
There is no API to get training, validation, and test data from a generator
function.
This PR introduces a new training API with a generator function.
. New Tizen API.
. Add a unit test for the API with a generator function.
. Add a test generator for future tests.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Jihoon Lee [Wed, 27 May 2020 08:57:47 +0000 (17:57 +0900)]
[Example] Add basic layout with edc file
**Changes proposed in this PR**
- Add scaffolding layouts to be used to construct multiple pages
- Change group "main" -> "view"
- Update .gitignore to ignore tizen studio generated files
- v2: Add simple program to determine button focus
- v2: Add signal emission to the program when clicked
**Self evaluation:**
1. Build test: [ ]Passed [ ]Failed [X]Skipped
2. Run test: [ ]Passed [ ]Failed [X]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
jijoong.moon [Tue, 26 May 2020 01:08:28 +0000 (10:08 +0900)]
[ Fix ] Fix potential memory leak
There is a memory leak at the last iteration of an epoch: it runs the data
buffer again and exits without freeing the memory.
In this PR, this issue is fixed.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Fri, 22 May 2020 10:02:09 +0000 (19:02 +0900)]
Using 1D array to get the data from user function
It is not good to get the user data in a 3D std::vector format; C does
not support it. Therefore it is much better to get the data as a 1D
float array (float *). In this PR, redefine the function pointer to get
data in float * format.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Fri, 22 May 2020 02:46:48 +0000 (11:46 +0900)]
Use TensorDim for databuffer
Change the databuffer class to use the TensorDim class for input
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Fri, 22 May 2020 01:33:33 +0000 (10:33 +0900)]
Split Tensor Dim Class from Tensor
Split Tensor Dim Class from Tensor for better usability
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Jihoon Lee [Wed, 20 May 2020 07:32:10 +0000 (16:32 +0900)]
Fix a possible error that causes a double free
There is an occasional error from unittest_tizen_capi.
```bash
[ RUN ] nntrainer_capi_nnmodel.train_with_file_01_p
training
data_buffer made
data_buffer made done
size: 2
data_buffer clear
data_buffer end
double free or corruption (out)
Aborted (core dumped)
```
**Changes proposed in this PR:**
Join the data buffer threads first, before clearing vectors that are
used in the target function, to (possibly) prevent the double free.
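A hedged sketch of the ordering fix with hypothetical names: join the threads that read the shared vectors before clearing them.
```cpp
#include <thread>
#include <vector>

// Sketch: no worker may still be reading shared_data when it is cleared.
void shutdown(std::vector<std::thread> &workers,
              std::vector<float> &shared_data) {
  for (auto &t : workers)
    if (t.joinable())
      t.join();        // stop the readers first
  shared_data.clear(); // only now is it safe to release the buffers
}
```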
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Sangjung Woo [Tue, 19 May 2020 07:13:03 +0000 (16:13 +0900)]
[Unit Test] Add checkValidation() test case of each Layer class
This patch adds a checkValidation() test case for each Layer class.
Signed-off-by: Sangjung Woo <sangjung.woo@samsung.com>
Sangjung Woo [Tue, 19 May 2020 07:09:29 +0000 (16:09 +0900)]
[Layer] Set layer type in constructor
Even though a layer instance is created, its type property is not set
properly, so it remains LAYER_UNKNOWN by default. Because of this, the
checkValidation() method does not work properly. This patch fixes that
bug.
Signed-off-by: Sangjung Woo <sangjung.woo@samsung.com>
jijoong.moon [Tue, 19 May 2020 01:59:42 +0000 (10:59 +0900)]
[ Example ] Test Classification Example using C-APIs
Add Classification example using C-APIs.
. Using data files to train, validate.
. The datas are stored in the follow order in binary format.
'Features label Features label Features label ...'
. There are two layers, one for the input layer and the other is fc
layer.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Jihoon Lee [Fri, 15 May 2020 10:05:32 +0000 (19:05 +0900)]
[Example] Add edc layout to the scaffolding
**Changes proposed in this PR:**
- Add main.edc
- Add view.[ch]
- Add data.h
- Change main initiation to use view function
- Move data definitions to data.h
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
jijoong.moon [Mon, 18 May 2020 01:08:48 +0000 (10:08 +0900)]
[ API ] Add a check of buffer size & maximum data
Add a check of the buffer size against the maximum data, and reset.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Fri, 15 May 2020 08:40:52 +0000 (17:40 +0900)]
[ API ] Train API with data files
Two options for training are now available.
- ml_nnmodel_train_with_file(ml_nnmodel_h)
: Assumes every parameter is set from the configuration and data
files ( Training : mandatory , Validation : optional, Test :
optional )
- ml_nnmodel_train_with_file(ml_nnmodel_h, ...)
: Assumes the network configuration is set up using the APIs and the
network hyperparameters are given by arguments (key = value
format )
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Jihoon Lee [Fri, 15 May 2020 05:18:00 +0000 (14:18 +0900)]
Add scaffolding for tizen native example app
This PR adds an anchor point for the Tizen native example app.
More specifically, it contains a Tizen wearable widget that prints out 'hello world'.
I am planning to build a Tizen wearable app that trains on user drawings labeled
with a specific unicode character, but this is subject to change at the moment.
Self evaluation:
Build test: [X]Passed [ ]Failed [ ]Skipped
Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
jijoong.moon [Thu, 14 May 2020 06:04:55 +0000 (15:04 +0900)]
[ API ] neural network model compile API
- add neural network model compile API with optimizer & hyper
parameter list
- ml_nnmodel_compile --> ml_nnmodel_compile_with_conf
- Change the Weight Init Property to a Layer Property
: Delete the wini parameter in Initialize()
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Sangjung Woo [Thu, 14 May 2020 04:41:21 +0000 (13:41 +0900)]
[Layer] Refactoring Layers class
* Separate the single Layers file into several files by role.
* Clean up unused headers and internal functions
* Rename files (layers.[h|cpp] -> layer.[h|cpp])
* Increase the patch version of the debian package since header files are added
Signed-off-by: Sangjung Woo <sangjung.woo@samsung.com>
jijoong.moon [Thu, 14 May 2020 01:20:13 +0000 (10:20 +0900)]
[UNIT] add & modify unit test for optimizer and addlayer
- add addLayer & optimizer unit test cases
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Thu, 14 May 2020 01:18:25 +0000 (10:18 +0900)]
[api] using "unit" key to get the dimension for fc layer
Instead of using input_shape to get the FC layer dimension, it is
simpler to get the output dimension using the unit keyword.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Wed, 13 May 2020 23:35:48 +0000 (08:35 +0900)]
[Fix] fix double free corruption
Remove new & delete when generating input data from an image
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Wed, 13 May 2020 01:14:35 +0000 (10:14 +0900)]
[API] change key, value format for the property
Instead of using multiple strings to describe a key and value, it is
more intuitive to combine them with "=".
For example,
ml_*_set_property (handle, "beta1=0.332", "beta2=0.001", NULL)
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Sangjung Woo [Wed, 13 May 2020 01:06:34 +0000 (10:06 +0900)]
[API] Add updateLoss method
This patch adds an updateLoss method to the FullyConnectedLayer class
to remove redundant code.
Signed-off-by: Sangjung Woo <sangjung.woo@samsung.com>
jijoong.moon [Tue, 12 May 2020 23:23:14 +0000 (08:23 +0900)]
Make headers as system header
Make system headers as system headers
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Tue, 12 May 2020 06:19:49 +0000 (15:19 +0900)]
[API] set Optimizer Property
. ml_nnoptimizer_set_property(ml_nnopt_h opt, ...)
: the variable parameter list MUST end with NULL.
. Properties:
enum class PropertyType {
learning_rate = 0,
decay_rate = 1,
decay_steps = 2,
beta1 = 3,
beta2 = 4,
epsilon = 5,
};
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Tue, 12 May 2020 01:14:11 +0000 (10:14 +0900)]
[API] create/delete neural network optimizer
create / delete optimizer
- type : sgd, adam, unknown
. ml_nnoptimizer_create(ml_nnopt_h opt, const char *type)
. ml_nnoptimizer_delete(ml_nnopt_h)
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Mon, 11 May 2020 04:03:48 +0000 (13:03 +0900)]
[API] Weight Decay into Layer Class
Make weight decay a parameter of the Layer class. From now on, weight decay
can be defined layer by layer.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Mon, 11 May 2020 00:41:18 +0000 (09:41 +0900)]
[API] Modify to get Multiple Property Arguments
Modify ml_nnlayer_set_property() to take multiple arguments for
properties.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Fri, 8 May 2020 10:51:24 +0000 (19:51 +0900)]
[API] Add add layer API test case
add test case for ml_nnmodel_add_layer() api
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Fri, 8 May 2020 10:49:18 +0000 (19:49 +0900)]
Use std::shared_ptr instead of raw instance pointers
Use std::shared_ptr to manage heap memory
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Fri, 8 May 2020 06:52:55 +0000 (15:52 +0900)]
[API] add layer API
Implement the layer addition API for the model
. ml_nnmodel_add_layer( ml_nnmodel_h model, ml_nnlayer_h layer)
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Fri, 8 May 2020 06:49:09 +0000 (15:49 +0900)]
[API] Change enum variable to string for better usage
Change setProperty ( enum , string ) --> setProperty(string, string)
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Fri, 8 May 2020 04:39:01 +0000 (13:39 +0900)]
[API] add set property unittest
Add a layer set-property unit test.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Thu, 7 May 2020 06:20:48 +0000 (15:20 +0900)]
[API] Set Property API for Neural Network Layer
. Implement Set Property API for Neural Network Layer
- Each layer has setProperty methods with its own Property enum
class.
- The Layer class has a virtual setProperty method
- Separate parseType into parse_util.h & .cpp so other
files can use it.
- Use TensorDim to define the dimension of the hidden layer
**Changes proposed in this PR:**
- Added TOC generator for README.md
Resolves:
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Jihoon Lee [Thu, 7 May 2020 03:47:13 +0000 (12:47 +0900)]
Fix potential memory leak when finalizing.
This PR...
1. Solves a memory leak detected by valgrind.
2. Corrects the author of `unittest_util_func.cpp`.
Self evaluation:
Build test: [X]Passed [ ]Failed [ ]Skipped
Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Dongju Chae [Thu, 7 May 2020 06:34:09 +0000 (15:34 +0900)]
[UnitTest/DCube] Generate gtest xml files to report unittest results
This patch generates gtest xml files reported to DCube as unittest
results.
Signed-off-by: Dongju Chae <dongju.chae@samsung.com>
jijoong.moon [Wed, 6 May 2020 11:04:28 +0000 (20:04 +0900)]
[API] Create / Delete Layer API
- Create / Delete Layer API
. ml_nnlayer_create(ml_nnlayer_h *nnlayer, ml_layer_type_e type);
: create an InputLayer or FullyConnectedLayer instance
. ml_nnlayer_delete(ml_nnlayer_h *nnlayer)
: check validity and delete the instance
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Thu, 7 May 2020 05:56:39 +0000 (14:56 +0900)]
[UNITTEST] Fix Build Error
Fix a build error when compiling the unittest
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Jihoon Lee [Mon, 4 May 2020 08:10:08 +0000 (17:10 +0900)]
Delete overlapping logic in test/meson.build
This PR..
1. Makes a library for test_util so it does not need to be compiled over and
over again
2. Combines identical variables with different names
Self evaluation:
Build test: [X]Passed [ ]Failed [ ]Skipped
Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Mon, 4 May 2020 07:51:17 +0000 (16:51 +0900)]
Separate databuffer_file unittest
Separate databuffer_file unittest for further development (#61)
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Geunsik Lim [Wed, 6 May 2020 11:13:30 +0000 (20:13 +0900)]
doc: Fixed typos
This commit is trivial. It fixes a typo in the README.md file.
Signed-off-by: Geunsik Lim <geunsik.lim@samsung.com>
jijoong.moon [Wed, 6 May 2020 03:21:03 +0000 (12:21 +0900)]
[Layer] Remove Output Layer
The output layer can be replaced by a fully connected layer.
It does not need to exist.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Mon, 4 May 2020 07:38:18 +0000 (16:38 +0900)]
[CAPI] Tizen CAPI Example
Add Tizen CAPI Example with configuration file
- Add an example application for the Tizen C API which runs a neural network
with a configuration file
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Wed, 29 Apr 2020 05:20:41 +0000 (14:20 +0900)]
[CAPI] Add Train API
. Add ml_nnmodel_train(ml_nnmodel_h model)
: this runs train()
. Add a unit test for this
: run 1 epoch with a small data set
: just check the mechanism
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Jihoon Lee [Wed, 29 Apr 2020 07:48:57 +0000 (16:48 +0900)]
Add unittest for utils_func
Add some unit tests and fix errors for utils
- changed answers to be declarative
- fix tolerance being calculated one-way (it should be wrapped with fabs)
- add prime test unit tests
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
jijoong.moon [Wed, 29 Apr 2020 04:40:45 +0000 (13:40 +0900)]
[SVACE] Fix svace issues.
Fix 1 Major SVACE Issue
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Wed, 29 Apr 2020 04:18:28 +0000 (13:18 +0900)]
[UNIT TEST] Add more tizen unit test
. Add more unit tests
. Modify the replace-string function to change multiple lines
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Wed, 29 Apr 2020 00:23:16 +0000 (09:23 +0900)]
[SVACE] Fix SVACE Issues
Resolve SVACE Issues
: 6 Critical, 9 Major
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Jihoon Lee [Wed, 29 Apr 2020 02:27:46 +0000 (11:27 +0900)]
Add test dataset automatically with meson
Fix the issue that `ninja -C build test` fails because no test data set is
included.
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
jijoong.moon [Tue, 28 Apr 2020 09:16:51 +0000 (18:16 +0900)]
[SVACE] Fix Svace Issues
Fix Svace results
. Critical 13, Major 10 Issues : [ 440232 ] ~ [ 440258 ]
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Tue, 28 Apr 2020 04:31:32 +0000 (13:31 +0900)]
[API] Add Neural Network Model Compile C API
Add neural network model compile c api
: ml_nnmodel_compile (ml_nnmodel_h model)
. check the validity of the model
. if it is OK, then run NeuralNetwork.ini()
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Jihoon Lee [Tue, 28 Apr 2020 02:06:39 +0000 (11:06 +0900)]
Split unittest for util func to a different source. (#61)
Split util_func unittest to `unittest_util_func.cpp`.
Build test: [X]Passed [ ]Failed [ ]Skipped
Run test: [ ]Passed [X]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
jijoong.moon [Tue, 28 Apr 2020 01:35:32 +0000 (10:35 +0900)]
Add Reviewer
Add Reviewer in README.md
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Jihoon Lee [Mon, 27 Apr 2020 07:47:11 +0000 (16:47 +0900)]
[README] Add some explanation
* Update prerequisite
* Propose GIAG Build Environment
* Make example section
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
jijoong.moon [Mon, 27 Apr 2020 05:56:05 +0000 (14:56 +0900)]
Split files for DataBufferFromFile & DataBufferformCallback
data_buffer.h & .cpp are too big to manage. The classes that inherit from
it are saved in separate files.
- data_buffer_func.h & data_buffer_func.cpp
- data_buffer_file.h & data_buffer_file.cpp
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Jihoon Lee [Mon, 27 Apr 2020 08:24:30 +0000 (17:24 +0900)]
Move images to docs/images
This PR moves images to docs/images to clean up the docs folder for later
use.
Self evaluation:
Build test: [ ]Passed [ ]Failed [X]Skipped
Run test: [ ]Passed [ ]Failed [X]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
jijoong.moon [Thu, 23 Apr 2020 06:11:58 +0000 (15:11 +0900)]
Introduce DataBufferFromCallback Class
With this class, it is possible to train with user specific data
generation callback funciton. This is the default to get training
data unless there is [DataSet] key in configuration file.
- in Application/Classification/jni/main_func.cpp
NN.train(getMiniBatch_train, getMiniBatch_val, getMiniBatch_train)
Then, data buffer thread call these functions to get newest data with
size of mini batch.
The format this function should be :
/*
* @brief Callback function to get user specific data
* @param[in] X data 3D float vector type
* @param[in] Y label 3D float vector type
* @param[out] status status for error handle
* @retval true / false generate all data for this epoch
*/
bool func(std::vector<std::vector<std::vector<float>>>& X,
std::vector<std::vector<std::vector<float>>>& Y, int &status)
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Fri, 24 Apr 2020 01:39:34 +0000 (10:39 +0900)]
Add Tensor unit tests
Add Tensor unit tests
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Thu, 23 Apr 2020 11:50:02 +0000 (20:50 +0900)]
Add Unit Test Cases for math utilities function
Add unit test cases for math utilities functions
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoongmoon [Wed, 22 Apr 2020 15:25:06 +0000 (00:25 +0900)]
Modify softmax calculation to handle large values.
Currently it is not numerically stable;
it is more stable with this PR.
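A hedged sketch of the standard max-subtraction trick that keeps softmax finite for large inputs; this is an illustrative helper, not the project's exact code.
```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// Sketch: subtracting the maximum means exp() never sees a large positive
// argument, so the sum cannot overflow.
std::vector<float> stable_softmax(const std::vector<float> &x) {
  if (x.empty())
    return {};
  const float max_v = *std::max_element(x.begin(), x.end());
  std::vector<float> out(x.size());
  float sum = 0.0f;
  for (std::size_t i = 0; i < x.size(); ++i) {
    out[i] = std::exp(x[i] - max_v);
    sum += out[i];
  }
  for (float &v : out)
    v /= sum;
  return out;
}
```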
Signed-off-by: jijoongmoon <jijoong.moon@samsung.com>
jijoongmoon [Tue, 21 Apr 2020 16:26:58 +0000 (01:26 +0900)]
Throw error exception from thread
Throw error exceptions from the data buffer thread to main.
With this PR, the program stops normally with proper error handling.
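A hedged sketch of one common way to surface a worker-thread error in the main thread, using std::exception_ptr; the project may propagate errors differently.
```cpp
#include <exception>
#include <stdexcept>
#include <thread>

int main() {
  std::exception_ptr err;
  std::thread worker([&err]() {
    try {
      throw std::runtime_error("data buffer failure"); // stand-in for a real error
    } catch (...) {
      err = std::current_exception(); // capture, don't terminate the process
    }
  });
  worker.join();
  if (err) {
    try {
      std::rethrow_exception(err); // rethrow in the main thread
    } catch (const std::exception &) {
      return 1; // main thread handles or reports the error and exits normally
    }
  }
  return 0;
}
```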
Signed-off-by: jijoongmoon <jijoong.moon@samsung.com>
jijoong.moon [Tue, 21 Apr 2020 11:16:23 +0000 (20:16 +0900)]
Introduce DataBufferFromDataFile Class
In order to handle various input generation cases, the DataBuffer
class is made a base class, and several child classes inheriting from it
are introduced.
. DataBufferFromDataFile
. DataBufferFromCallback : NYI
. DataBufferFromFramework : NYI
. others. : NYI
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Mon, 20 Apr 2020 08:40:21 +0000 (17:40 +0900)]
Add train function in NeuralNetwork Class.
Add train member function of NeuralNetwork Class.
- The NeuralNetwork class has a DataBuffer instance to handle data.
- New keywords are introduced to specify the data set
. TrainData, ValidData, TestData, LabelData
- The DataBuffer instance is initialized with parameters from the
NeuralNetwork instance.
- In the train function,
. The DataBuffer instance runs ( collecting or generating data )
. Get the data from the DataBuffer instance.
. Backwarding
. Display progress ( a DataBuffer member function )
. Validate if it is enabled.
- Add a Classification example using the train member function
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoongmoon [Sun, 19 Apr 2020 14:47:31 +0000 (23:47 +0900)]
Initialize data buffer when neural network initializes.
Initialize the data buffer and set parameters from the neural network.
Signed-off-by: jijoongmoon <jijoong.moon@samsung.com>