platform/core/ml/nntrainer.git
Parichay Kapoor [Wed, 4 Nov 2020 02:25:58 +0000 (11:25 +0900)]
[restructure] Restructure the core files

This patch restructures the internal files.
The include and src folders are replaced with more relevant, clustered folders,
and headers and sources now live together.

Also, the headers exposed in the packaging are now strictly limited, rather than
exposing all the headers. Updated the Android and Tizen packaging as well.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Fri, 7 Aug 2020 08:00:19 +0000 (17:00 +0900)]
[Delegate] Add delegate support header

Add the delegate support header.
This supports setting the backend and device.
Some properties are added, but they are experimental.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Thu, 29 Oct 2020 06:38:39 +0000 (15:38 +0900)]
Update application to use backbone

Update the transfer learning application to use a backbone.
The feature extractor has now been removed from the application,
and the dependency on tflite is removed as well.

However, caching of the features from the feature extractor is not yet supported.
This makes the application slow when running the full 1000 epochs on GBS.
Until caching is supported, the application test is removed from the GBS build.

Also fixed the Android packaging for nntrainer with tflite and the KNN application.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Tue, 3 Nov 2020 10:03:42 +0000 (19:03 +0900)]
[backbone] Added native support for tflite backbone

Added native support for a tflite backbone.
The unittests are verified with the tflite backbone, as the tflite backbone
is preferred over the nnstreamer backbone.
Interfacing directly with the tflite API allows using the tensor memory
directly for forwarding, avoiding the memcpy incurred with the nnstreamer backbone.

TfLite takes input shapes in NHWC format;
however, nntrainer takes shapes in NCHW.
This will be resolved later with a transpose operator.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
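The NHWC/NCHW mismatch mentioned above boils down to a fixed permutation of the shape dimensions. A minimal sketch (the helper name is illustrative, not nntrainer's actual API):

```cpp
#include <array>
#include <cassert>

// Reorder a shape from TfLite's NHWC to nntrainer's NCHW.
// NHWC = {N, H, W, C}  ->  NCHW = {N, C, H, W}.
std::array<int, 4> nhwcToNchw(const std::array<int, 4> &nhwc) {
  return {nhwc[0], nhwc[3], nhwc[1], nhwc[2]};
}
```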
jijoong.moon [Thu, 29 Oct 2020 07:10:55 +0000 (16:10 +0900)]
[ Layer ] Add input_layers keyword

This PR enables the 'input_layers' keyword.
With this keyword, we can specify a layer's input tensors.

- Added a skip in parse_util to remove spaces and '[', ']' from the string,
and split it with the ',' delimiter.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
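The parsing step described above (strip spaces and brackets, then split on ',') can be sketched as follows; `parseInputLayers` is a hypothetical name, not the actual parse_util function:

```cpp
#include <algorithm>
#include <cassert>
#include <sstream>
#include <string>
#include <vector>

// Strip spaces and '[' ']' from a property value like "[conv1, conv2]",
// then split on ',' to get the referenced layer names.
std::vector<std::string> parseInputLayers(std::string value) {
  value.erase(std::remove_if(value.begin(), value.end(),
                             [](char c) { return c == ' ' || c == '[' || c == ']'; }),
              value.end());
  std::vector<std::string> names;
  std::stringstream ss(value);
  std::string token;
  while (std::getline(ss, token, ','))
    if (!token.empty())
      names.push_back(token);
  return names;
}
```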
Parichay Kapoor [Thu, 29 Oct 2020 06:35:14 +0000 (15:35 +0900)]
[backbone] unittest for external backbone

Added support for external backbones.
Also added a small add.tflite file for unittests.
trainable is not supported with external backbones for now.
Added Tizen packaging and other unittest fixes.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Thu, 29 Oct 2020 06:32:42 +0000 (15:32 +0900)]
[backbone] Support for external backbone

Added support for an external backbone with nnstreamer.
This is enabled for both Ubuntu and Tizen.
This patch creates an nnstreamer layer to support running different kinds of model
files with the nnstreamer single C-API.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
hyeonseok lee [Wed, 4 Nov 2020 06:54:48 +0000 (15:54 +0900)]
Add github account on README.md file

Add my GitHub account to the README.md file.

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
Jihoon Lee [Wed, 28 Oct 2020 11:42:37 +0000 (20:42 +0900)]
[IntegratedTest] Add batch normalization test

Add a test using batch normalization, with a minor structural change to
the tester.

**Changes proposed in this PR for tester:**
- Implement reorder logic for bn layer in `recorder.py`
- Add color to some debug prints
- Let layer update inside actual `backwarding`
- Save weights including non-trainable ones, but save gradients only for the
trainables

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Wed, 28 Oct 2020 06:44:10 +0000 (15:44 +0900)]
[Test/Util] Update ini / debug info

**Changes proposed in this PR:**
- Add some helper functions to iniTestWrapper
- Add color and adam debug info to recorder.py

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Tue, 3 Nov 2020 04:33:05 +0000 (13:33 +0900)]
[concat] Move validity checks to init

The dimension checks validating the various inputs
of the concat layer are moved to initialize().
They are kept in forwarding() under a DEBUG conditional.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Tue, 3 Nov 2020 13:07:44 +0000 (22:07 +0900)]
[tensorfilter] Bug fix of transpose

nntrainer takes data in NCHW format;
however, nnstreamer's tensor_converter produces data in NHWC format.
The existing implementation transposed the dimensions to match the NCHW format,
but the data itself was not transposed.
The unittest passed because the channel was just 1, which requires no
transpose on the data side.

This patch removes the dimension transpose from the nntrainer tensor_filter;
the transpose is now done properly in the pipeline with nnstreamer's tensor_transform.

v2:
Update transfer learning application which uses the pipeline.

v3:
Updated customshortcut application

Resolves #695

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
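The "channel was just 1" observation above can be made concrete: the HWC-to-CHW data transpose degenerates to the identity when there is a single channel, which is why a single-channel unittest could not catch the missing data transpose. A minimal sketch (batch dimension omitted; not nntrainer's actual code):

```cpp
#include <cassert>
#include <vector>

// Transpose data from HWC layout to CHW layout.
// When c == 1 the permutation is the identity, so single-channel
// test data cannot distinguish transposed from non-transposed data.
std::vector<float> hwcToChw(const std::vector<float> &in, int h, int w, int c) {
  std::vector<float> out(in.size());
  for (int ch = 0; ch < c; ++ch)
    for (int y = 0; y < h; ++y)
      for (int x = 0; x < w; ++x)
        out[(ch * h + y) * w + x] = in[(y * w + x) * c + ch];
  return out;
}
```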
Mete Ozay [Tue, 3 Nov 2020 07:53:36 +0000 (07:53 +0000)]
Troubleshooting installation process with observed errors and working solutions

Signed-off-by: Mete Ozay <meteozay@gmail.com>

Jihoon Lee [Wed, 28 Oct 2020 05:23:47 +0000 (14:23 +0900)]
[Fix/test] Fix adam iteration start to 1

Fix a bug where epoch_idx is not saved for continue_train.
Fix the iteration to start from 0 in the model test.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
jijoong.moon [Tue, 27 Oct 2020 04:51:00 +0000 (13:51 +0900)]
[ Layer ] Add Concat Layer

This PR adds the Concatenate layer.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Jihoon Lee [Fri, 23 Oct 2020 06:36:39 +0000 (15:36 +0900)]
[Test] Refactor recorder.py

This commit mainly patches KerasRecorder to be more flexible

**Changes proposed in this PR:**
- Pass file and label info at the `KerasRecorder.run` phase instead of __init__
to leave room to reuse the model
- Allow initiation with SequentialModel for usability
- Deal with cross_sigmoid, cross_softmax
- Move some functions out of class

**V2**
Since the `KerasRecorder` class was highly coupled to a certain model, it was
hard to make variations of it (e.g. using "mse" instead of "cross" would
require a whole new class and would be error-prone).
This patch moves the class implementation into several functions.

These will be used with `functools.partial` to easily generate loss and
optimizer variations.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
[Test/Refactor] Restructure data format

Restructure the golden data to reduce redundancy and only check the updated
weights, making the code more readable.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Wed, 28 Oct 2020 11:07:35 +0000 (20:07 +0900)]
[Fix/bn] Fix batchnormalize layer

Fix BatchNormalizationLayer so that epsilon is no longer included in the saved
moving variance, which could lead to long-term inaccuracy.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Wed, 28 Oct 2020 04:56:11 +0000 (13:56 +0900)]
[Fix/Optimizer] Fix decay_rate application

There was a bug where decay_rate was always applied even when it was left at
its default value, because `decay_steps != -1` evaluated to true.

This patch fixes the issue.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
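The bug class above is the "unset sentinel" pattern: decay must only kick in when decay_steps was explicitly set. A minimal sketch of the guard, assuming -1 is the unset sentinel and an exponential-decay formula (both are assumptions, not nntrainer's exact code):

```cpp
#include <cassert>
#include <cmath>

// -1 marks decay_steps as "not set"; in that case the learning rate
// must be returned untouched instead of being decayed.
constexpr float kUnsetDecaySteps = -1.0f;

float decayedLearningRate(float lr, float decay_rate, float decay_steps,
                          int iteration) {
  if (decay_steps == kUnsetDecaySteps) // default: no decay applied
    return lr;
  return lr * std::pow(decay_rate, std::floor(iteration / decay_steps));
}
```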
Parichay Kapoor [Fri, 23 Oct 2020 06:00:51 +0000 (15:00 +0900)]
[backbone] Add trainable feature to backbone

This patch adds support for training the backbone.
Corresponding unittests are also added

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
jijoong.moon [Tue, 27 Oct 2020 11:01:52 +0000 (20:01 +0900)]
[ Layer ] Multiple Input Dimensions

The current implementation only takes one input. In order to take multiple
inputs, input_dim / output_dim should be of vector type.
This PR includes these fixes, except for the addition layer, which requires
actual multiple inputs. That will be done in a consecutive PR.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Parichay Kapoor [Fri, 23 Oct 2020 03:06:04 +0000 (12:06 +0900)]
[backbone] Add unittests for backbone

Added unittests for the backbone, where models constructed
with and without the backbone are checked to be equivalent.
Corresponding bug fixes are also added.

See also #660

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Thu, 22 Oct 2020 08:39:10 +0000 (17:39 +0900)]
[model] Remove redundant checks

Remove the redundant check, when adding a layer, that the layer
name is unique. This is already ensured by ensureName().

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Thu, 22 Oct 2020 08:28:26 +0000 (17:28 +0900)]
[model] Add support of ini based backbone to the model

This patch adds support for an ini-based backbone to the neural network model.
From the point of view of the ini file, a backbone is treated as a layer itself.
This allows a graph of layers to be represented as a single layer in the
ini file.

With this design, a backbone must be specified as a layer with the property
`backbone`, as shown below with a sample pseudo-ini:
```ini
[Block1]
backbone: base_block.ini

[PoolLayer]
type: pooling2d

[Block2]
backbone: base_block.ini
```

ModelLoader loads the layer configuration from the backbone independently
and then extends the existing graph in the main model with this newly created
graph from the backbone ini.

The names of all layers which are inserted from the backbone to a model are
prefixed with the name of the backbone for easier management and for the user
to identify/manage the layers from a backbone.

The patch allows nested backbones and multiple backbones in a model description.
Unittests for this backbone support will follow in the next patch.

See also #660

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
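The name-prefixing described above can be sketched as a simple transform applied to every layer pulled in from a backbone ini. The helper name and the separator between the backbone name and the layer name are assumptions for illustration, not the actual ModelLoader code:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Prefix every layer name from a backbone with the backbone's section
// name, so the user can identify backbone layers in the merged graph.
// The "/" separator is an illustrative choice.
std::vector<std::string> prefixBackboneNames(const std::string &backbone,
                                             const std::vector<std::string> &layers) {
  std::vector<std::string> out;
  out.reserve(layers.size());
  for (const auto &name : layers)
    out.push_back(backbone + "/" + name);
  return out;
}
```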
Jihoon Lee [Thu, 22 Oct 2020 05:14:42 +0000 (14:14 +0900)]
[Layer] Fix 'stream << ' delegation

There was a bug where `std::cout << layer` called an undefined function.
This patch fixes the issue and adds some regression tests.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
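A common way to make `std::cout << layer` work polymorphically is a single free `operator<<` that delegates to a virtual member function, so derived layers only override the member. A minimal sketch (class and method names are illustrative, not nntrainer's actual API):

```cpp
#include <cassert>
#include <ostream>
#include <sstream>

class Layer {
public:
  virtual ~Layer() = default;
  // Derived classes override this to customize their printed form.
  virtual void print(std::ostream &out) const { out << "Layer"; }
};

class FullyConnected : public Layer {
public:
  void print(std::ostream &out) const override { out << "FullyConnected"; }
};

// One free operator<< for the whole hierarchy; dispatch is virtual.
std::ostream &operator<<(std::ostream &out, const Layer &l) {
  l.print(out);
  return out;
}
```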
Dongju Chae [Thu, 29 Oct 2020 02:36:22 +0000 (11:36 +0900)]
[README.md] Add hall-of-fame section to README.md

This patch adds a hall-of-fame section to README.md.

Signed-off-by: Dongju Chae <dongju.chae@samsung.com>
Jihoon Lee [Wed, 21 Oct 2020 10:20:05 +0000 (19:20 +0900)]
[Test] Add ParamTest scaffolding for model tests

Add Parameterized test scaffolding for integrated tests with minor
changes

**Additional Changes proposed in this PR:**
- Add `TensorDim::TensorDim(const std::string &shape)`
- Move `Tensor::epsilon` to public

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

cc: Parichay Kapoor <pk.kapoor@samsung.com>
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
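A string-to-shape constructor like the `TensorDim::TensorDim(const std::string &shape)` mentioned above can be sketched as below, assuming nntrainer's colon-separated "N:C:H:W" notation (e.g. "1:1:28:28"); the free-function form and defaulting behavior are assumptions for illustration:

```cpp
#include <array>
#include <cassert>
#include <sstream>
#include <string>

// Parse a colon-separated shape string into {N, C, H, W}.
// Missing trailing dimensions keep the default of 1.
std::array<unsigned int, 4> parseShape(const std::string &shape) {
  std::array<unsigned int, 4> dim{1, 1, 1, 1};
  std::stringstream ss(shape);
  std::string token;
  for (std::size_t i = 0; i < dim.size() && std::getline(ss, token, ':'); ++i)
    dim[i] = static_cast<unsigned int>(std::stoul(token));
  return dim;
}
```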
Jihoon Lee [Wed, 21 Oct 2020 02:38:19 +0000 (11:38 +0900)]
[Refactor/Test] Move iniTest to modelfile

This patch moves a test class and a factory function used in
model file test to `unittest_nntrainer_modelfile` from `test_util`

**Changes proposed in this PR:**
- Move iniTest to `unittest_nntrainer_modelfile`
- Move `mkIniTc`

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
jijoong.moon [Wed, 21 Oct 2020 11:56:26 +0000 (20:56 +0900)]
[ TENSOR ] Change to get Tensor Vector

Currently we only take and push one tensor per layer. For skip
connections or other layers which take and produce multiple tensors, we
need to handle vectors of tensors as the input and output of a layer.

In this PR, the tensor input is changed to a tensor vector.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Parichay Kapoor [Fri, 23 Oct 2020 06:36:26 +0000 (15:36 +0900)]
[layer] Print layer name over layer address

Print the layer name if available;
if the layer name is not available, print the layer object name and its address.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Wed, 21 Oct 2020 03:09:25 +0000 (12:09 +0900)]
[layer/optimizer] Bugfix for variadic templates

The layer/optimizer factory constructors used variadic templates.
However, these weren't used and could not be used the way they were written.
The fix was to use C-style variable arguments in between,
to allow different function signatures to work with templates at compile time,
as C-style variable arguments are not interpreted at compile time.

This creates an interface like below:
createLayer(LayerType::LAYER_FC, 4, ActivationType::ACT_RELU)
vs the current interface
createLayer(LayerType::LAYER_FC, {"unit=4", "activation=relu"})

The first is more compact and performs some checks on types (though not all).
However, the second is more expressive and readable.
I have updated the internal and external factory methods, which settles the
final style on the latter version.

Simplified the C-API implementation to also use these factory methods.

Resolves #655

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
jijoong.moon [Wed, 21 Oct 2020 02:16:43 +0000 (11:16 +0900)]
[ Application ] VGG using learning rate decay

Change to use learning rate decay

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Jihoon Lee [Tue, 20 Oct 2020 10:36:37 +0000 (19:36 +0900)]
[Test] Add unittest_nntrainer_models

This patch adds `NodeWatcher` and `GraphWatcher` to build the
unittest_nntrainer_models test.

**Changes proposed in this PR:**
- Add multi iteration test
- Add gtest scaffolding

**Additional Patch will be followed**
- Handle loss + activation merge scenario
- Add param test
- Add mnist as a test
- Add final inference result

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Tue, 20 Oct 2020 02:37:01 +0000 (11:37 +0900)]
[ccapi] Added unittests for ccapi

Added unittests for ccapi.
Updated the signatures of setDataFile and setFunc for dataset.
Also added some fixes and functionalities revealed by the unittests.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Mon, 19 Oct 2020 11:02:07 +0000 (20:02 +0900)]
[Weight] Add ctors

Weight did not have the copy/move ctors it should have.
This patch adds them.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Tue, 20 Oct 2020 06:42:57 +0000 (15:42 +0900)]
[ccapi] Enable pkgconfig

Enable pkgconfig for ccapi
Updated gitignore to not ignore *.pc.in files

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Mon, 19 Oct 2020 07:30:44 +0000 (16:30 +0900)]
[app] MNIST application with cc api

Updated the MNIST application to use the cc api.
Added a corresponding bug fix to the Android packaging of the cc api.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Tue, 6 Oct 2020 04:22:39 +0000 (13:22 +0900)]
[ccapi] Initial draft for c++ API

Added initial draft for c++ API
This includes major headers with pure virtual classes
Updated internal classes to inherit them
Updated file names for clashing internal files
Updated capi to use the factory methods
Updated internal enum classes with cc api exposed enum classes

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Wed, 21 Oct 2020 05:06:08 +0000 (14:06 +0900)]
[vgg] Tensorflow VGG bugfix

Added a bugfix to the evaluation part of the TensorFlow VGG.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Mon, 19 Oct 2020 11:00:22 +0000 (20:00 +0900)]
[Bug/Act] Fix setActivation call properly

`Act::setActivation` should be called for the activation layer.
However, because the virtual `Layer::setActivation` had a different signature
from `Act::setActivation`, there was no way this function could be called.

This patch fixes the issue.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
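The bug class above is worth spelling out: a derived method whose signature differs from the base virtual does not override it, it merely overloads it, so calls through a base pointer never reach the derived version. Marking the intended override with C++11 `override` turns such a mismatch into a compile error. A minimal sketch (names are illustrative, not nntrainer's actual signatures):

```cpp
#include <cassert>
#include <string>

struct Layer {
  virtual ~Layer() = default;
  virtual std::string setActivation(bool train) { return "Layer"; }
};

struct ActivationLayer : Layer {
  // `override` guarantees this matches the base signature exactly;
  // had the parameter type differed, this line would not compile.
  std::string setActivation(bool train) override { return "ActivationLayer"; }
};
```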
Jihoon Lee [Mon, 19 Oct 2020 11:03:09 +0000 (20:03 +0900)]
[NN] call setBatchSize at init

setBatchSize was not called at init, which caused problems when calling
layers from outside.
This fixes the issue.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
jijoong.moon [Tue, 20 Oct 2020 01:56:43 +0000 (10:56 +0900)]
[ ADMIN ] Add code of conduct

Add documentation related to the code of conduct.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Jihoon Lee [Mon, 12 Oct 2020 08:33:23 +0000 (17:33 +0900)]
[IntegratedTest] Add test generator

Add test generator and model recorder for integrated test cases.

This runs multiple epochs and saves the related information.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Mon, 12 Oct 2020 08:18:09 +0000 (17:18 +0900)]
[IntegratedTest] Add methods and types for tests

**Changes proposed in this PR:**
- Open up getters for layer::num_weights and neuralnet flatGraph
- Use getOutputDimension in some code
- Add types for future compatibility

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Mon, 12 Oct 2020 01:23:55 +0000 (10:23 +0900)]
[databuffer] Cleanup databuffers for ccapi

Cleanup databuffer headers for ccapi

See also #199

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
jijoong.moon [Mon, 12 Oct 2020 07:25:30 +0000 (16:25 +0900)]
[ Pooling 2D ] Fix calculate max when pooling 2d

In this PR: numeric_limits min() returns the minimum *positive* value for
floats, but the initial value for the max computation needs to be negative.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
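This is a classic pitfall: for floating-point types, `std::numeric_limits<float>::min()` is the smallest positive normalized value, so seeding a max search with it silently fails on all-negative inputs; `lowest()` is the correct seed. A minimal sketch of the corrected pattern (not the actual pooling code):

```cpp
#include <cassert>
#include <limits>
#include <vector>

// Find the maximum value; the accumulator must start from lowest(),
// not min(), or all-negative inputs would wrongly report min() as max.
float maxValue(const std::vector<float> &v) {
  float m = std::numeric_limits<float>::lowest();
  for (float x : v)
    if (x > m)
      m = x;
  return m;
}
```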
jijoong.moon [Mon, 5 Oct 2020 11:56:40 +0000 (20:56 +0900)]
[ Application ] NNTrainer Validation with VGG

This PR includes:
  - NNTrainer Validation with VGG Model.
  - Input is cifar100:
    100 classes, 450 images for train, 50 images for validation.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Jihoon Lee [Wed, 14 Oct 2020 10:25:57 +0000 (19:25 +0900)]
[Docs] Remove docker GIAG from readme.md

As docker GIAG build support is no longer maintained, this patch removes
the guide to docker GIAG build.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Geunsik Lim [Mon, 12 Oct 2020 09:56:01 +0000 (18:56 +0900)]
[GitHub] Replaced @NNStreamer/nntrainer with @nnstreamer/nnstreamer

This commit fixes an incorrect "@org/team-name" (e.g., @NNStreamer/nntrainer),
since there is no team named "@NNStreamer/nntrainer". So we need
to replace @NNStreamer/nntrainer with @nnstreamer/nnstreamer.
For more details, please refer to the webpage below.

* NNStreamer's Teams - https://github.com/orgs/nnstreamer/teams

Signed-off-by: Geunsik Lim <geunsik.lim@samsung.com>
Parichay Kapoor [Thu, 8 Oct 2020 08:47:29 +0000 (17:47 +0900)]
[layer] Update the layer constructors

Update the constructors of the layers to take arguments.
Added a layer_factory creator method.
Updated model_loader and capi to use the factory creator.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Thu, 8 Oct 2020 06:42:32 +0000 (15:42 +0900)]
[optimizer] Refactor optimizer

This patch refactors optimizer in the following fashion:
- split the optimizer implementations of different types into the derived classes adam and sgd
- create an optimizer_factory to create the optimizer class objects based on their type
  - this can be directly used with ccapi
- OptParam struct has been removed
- applyGradients has been broken down into different methods
- updated associated unittests

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
jijoong.moon [Mon, 12 Oct 2020 01:48:08 +0000 (10:48 +0900)]
[ Max Pooling ] Fix bug when max_id created

This PR includes:
  - make Layer::setBatch a virtual function to set the batch size
  - max_id for max pooling should be created at initialization time,
  but the batch size is not set properly at that point. So it is set
  properly when setBatch is called for each layer.

TODO: depending on inference vs training, max_id should be handled
properly. It is not needed for inference.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Jihoon Lee [Tue, 6 Oct 2020 04:20:13 +0000 (13:20 +0900)]
Refactor getDataFromBuffer

This patch rearranges getDataFromBuffer to not rely on nested buffers, to
avoid unnecessary allocation & deallocation.

This patch provides notable speedup for some cases.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Thu, 8 Oct 2020 08:10:07 +0000 (17:10 +0900)]
[CS/Docs] Update readme

**Changes proposed in this PR:**
- Delete manual package install requirements
- Update demo footage

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Thu, 8 Oct 2020 07:24:44 +0000 (16:24 +0900)]
[CS] Add cutoff logic through accuracy

**Changes proposed in this PR:**
- Add more epochs
- Add cutoff logic to check if the guess is confident

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Thu, 8 Oct 2020 06:53:07 +0000 (15:53 +0900)]
[CS/Refactor] Scrape ecore pipe from the train_

**Changes proposed in this PR:**
- Ecore Pipe is no longer used to update UI
- Broaden the canvas to make it easier to draw
- s/tries/epoch for training progress
- Minor bug fix that was generating warnings

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Tue, 6 Oct 2020 03:34:47 +0000 (12:34 +0900)]
[Tensor/model] Move some member functions to private

Move BroadcastInfo to private
Move printMetrics to private

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Tue, 6 Oct 2020 03:12:04 +0000 (12:12 +0900)]
[layer] Update layer interface

Update the interface for layer
This changes the public and private exposed member functions
Corresponding unittests are updated

Major changes are for the Flatten and Loss layers - they both now support
setting basic layer properties such as input_shape.

See also #199

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Wed, 7 Oct 2020 05:48:46 +0000 (14:48 +0900)]
[CS] Improve UI Interaction

**Changes proposed in this PR:**
- Add disable mode for the buttons
- Disable eval when 'model.bin' is not present
- Remove `model.bin` when exit
- Change colors for the buttons
- Enlarge emoji
- A few code retouches and bug fixes

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Tue, 6 Oct 2020 11:47:42 +0000 (20:47 +0900)]
[CS] Add color to the label and fix status

**Changes proposed in this PR:**
- Add color to the result label
- Fix return status for void functions

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Tue, 6 Oct 2020 11:42:37 +0000 (20:42 +0900)]
[CS] Change png -> jpeg

Currently, the pngdec gstelement is not supported on the wearable by
default. Change the saved data format to jpeg in order to use jpegdec.

See also #553

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Yongjoo Ahn [Tue, 6 Oct 2020 07:28:51 +0000 (16:28 +0900)]
[typo] Fix minor typos in spec file

- Fix minor typos in spec file

Signed-off-by: Yongjoo Ahn <yongjoo1.ahn@samsung.com>
Jihoon Lee [Mon, 28 Sep 2020 11:44:22 +0000 (20:44 +0900)]
[CS] Show inference

This patch shows the inference result on the page after evaluation.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[CS] s/draw_target/label + mode/
Jihoon Lee [Mon, 28 Sep 2020 10:50:30 +0000 (19:50 +0900)]
[CS] s/draw_target/label + mode/

This patch changes ad->draw_target to draw->label and draw->mode to
accommodate inference and training at the same time.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[inference] Remove deciding for inference at model construction
Parichay Kapoor [Thu, 24 Sep 2020 05:18:15 +0000 (14:18 +0900)]
[inference] Remove deciding for inference at model construction

Currently, with is_train, the user has to decide at model construction whether the model
is to be used for inference or training, but we still allow changing it.
We don't need that distinction; rather, switching between inference and training should be allowed
at any time by just changing the batch size (this was enabled by the previous commit).

Now, once training is done, with the model loaded in memory, inference can be called directly.
Inference can also be run by creating the model and loading it from disk.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[Databuffer] Fix queue is not emptied properly
Jihoon Lee [Tue, 6 Oct 2020 04:25:56 +0000 (13:25 +0900)]
[Databuffer] Fix queue is not emptied properly

There was an issue where the databuffer was not emptying its queue properly.
This patch fixes the issue.

Resolves #621

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[build/meson] Add gmodule_dep for nnstreamer subplugin
Yongjoo Ahn [Mon, 5 Oct 2020 06:44:39 +0000 (15:44 +0900)]
[build/meson] Add gmodule_dep for nnstreamer subplugin

- Add gmodule_dep to build nnstreamer_filter_nntrainer
- `nnstreamer_subplugin.c` includes `gmodule.h`

Signed-off-by: Yongjoo Ahn <yongjoo1.ahn@samsung.com>
3 years ago[Filter/refactor] Refactor error handling logic
Jihoon Lee [Mon, 28 Sep 2020 07:43:11 +0000 (16:43 +0900)]
[Filter/refactor] Refactor error handling logic

g_assert ends the program rather than giving the caller a chance to handle
the error. This patch replaces g_assert with return values.

v2. Remove assertion code and unify error handling to throw

- Remove assertion code; instead, throw to the top level and catch
altogether
- Merge `NNTrainer::init()` logic to constructor

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[CS] Add inference pipeline
Jihoon Lee [Fri, 25 Sep 2020 07:41:22 +0000 (16:41 +0900)]
[CS] Add inference pipeline

**Changes proposed in this PR:**
- Add pipeline for the inference

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[ Packaging ] Meta Package for NNTrainer
jijoong.moon [Mon, 5 Oct 2020 02:54:27 +0000 (11:54 +0900)]
[ Packaging ] Meta Package for NNTrainer

NNTrainer Meta :
 - nntrainer-core
 - nnstreamer-nntrainer if nnstreamer_filter is on
 - capi-nntrainer if with tizen

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
3 years ago[neuralnet] Add alternatives to getLoss
Parichay Kapoor [Mon, 5 Oct 2020 05:05:09 +0000 (14:05 +0900)]
[neuralnet] Add alternatives to getLoss

getLoss used to return the current loss of the model, which was based
on the previous batch of data the network ran on.
This does not allow getting the training/validation loss.
Added getTrainingLoss and getValidationLoss for this purpose,
and updated the getLoss description to include this information.

As the MNIST application was using getLoss(), which returns the
loss of the last-run element, this value changed with #600:
with #600, the last element is a batch of data rather than just one data element.
The application is updated to now compare all three losses with
updated values.
So, this patch fixes that bug in the main branch as well.

Resolves #617

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[CS] Enlarge fonts and s/test/eval
Jihoon Lee [Fri, 25 Sep 2020 02:59:50 +0000 (11:59 +0900)]
[CS] Enlarge fonts and s/test/eval

**Changes proposed in this PR:**
- Enlarge font size, and change test to eval

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years agoAdd print preset
Jihoon Lee [Thu, 24 Sep 2020 00:52:18 +0000 (09:52 +0900)]
Add print preset

**Changes proposed in this PR:**
- Add print preset for model and layer
- Add model print flags
- move printFlagEnums and print(out, flag) to private

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Cc: Parichay Kapoor <pk.kapoor@samsung.com>
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[batchsize] Change the semantics of batch size
Parichay Kapoor [Thu, 24 Sep 2020 05:09:50 +0000 (14:09 +0900)]
[batchsize] Change the semantics of batch size

This patch changes the semantics of how batch size is used in the library:
1. batch_size is no longer a property of the layer. It can still be set externally by the model.
The method to set it is still public but will soon be changed to private.
2. The batch_size of the input/label/derivative tensors provided to the forwarding/backwarding functions
can no longer be arbitrary. It must match the batch_size set on the model and the layer.
This change in semantics follows the long-term design where memory for input/output
is pre-allocated.
3. batch_size can now be set at train time, unlike the earlier design where the batch_size
had to be set at init time. This comes from the design change that the memory for the model weights
can be allocated at init time, as it is not dependent on the batch size. However, the memory for
input/output should be allocated at train/inference time, as the batch size can differ between these.
In the current design, memory is allocated every iteration. Later, when memory is allocated once
and reused, a change in batch size will change the memory once at the first iteration (rather than doing
this at init time). This change is necessary to allow running inference without having to initialize
the model again.

V2:
Updated validation to run on a whole batch of data at once
Also updated Tensor.argmax() to operate per batch of data rather than on the whole data

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[weight] weight class to simplify headers
Parichay Kapoor [Tue, 22 Sep 2020 12:51:45 +0000 (21:51 +0900)]
[weight] weight class to simplify headers

Added a weight class to simplify headers
All weight-related enums and properties go into the weight header
rather than being dumped in layer or optimizer

V2:
Update swap to swap gradients even if not trainable
Update swap of Tensor, TensorDim to be friend functions
Copy and move assignments are now defaulted

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[CS] Add visual cue when training is done
Jihoon Lee [Wed, 23 Sep 2020 09:28:15 +0000 (18:28 +0900)]
[CS] Add visual cue when training is done

**Changes proposed in this PR:**
- Visually notify when training is done
- Add go to main menu when training is done
- Go to main menu button shows the final best accuracy

Resolves #572, #574

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[app/test] Enable app-test with enable-test
Parichay Kapoor [Fri, 25 Sep 2020 08:05:30 +0000 (17:05 +0900)]
[app/test] Enable app-test with enable-test

Enable application testing with enable-test only

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[MNIST] Add output verification with tizen build
Parichay Kapoor [Fri, 25 Sep 2020 06:31:17 +0000 (15:31 +0900)]
[MNIST] Add output verification with tizen build

Add the training loss value verification with tizen build

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[transferlearning] Add output verification with gtest
Parichay Kapoor [Fri, 25 Sep 2020 05:56:04 +0000 (14:56 +0900)]
[transferlearning] Add output verification with gtest

Add output verification with gtest for transfer learning application

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[gitignore] Update for android build
Parichay Kapoor [Wed, 23 Sep 2020 07:02:38 +0000 (16:02 +0900)]
[gitignore] Update for android build

Update gitignore for the android build to ignore *.o.d files and the downloaded iniparser

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[CS] Prevent HW backkey press while training
Jihoon Lee [Wed, 23 Sep 2020 07:53:53 +0000 (16:53 +0900)]
[CS] Prevent HW backkey press while training

**Changes proposed in this PR:**
- Disable backkey press while training
- Move back_key callback to `presenter`
- Prune unnecessary argument for view/create_layout

Resolves #573

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[CS/Refactor] Migrate to MVP style
Jihoon Lee [Tue, 22 Sep 2020 07:23:22 +0000 (16:23 +0900)]
[CS/Refactor] Migrate to MVP style

This patch lays out the CustomShortcut refactor. It is being migrated to the
Model-View-Presenter pattern for clarity.

**Changes proposed in this PR:**
- Add `presenter_*` prefix to main presenter functions
- Add `util_*` prefix for utility functions
- Some static functions residing in `view.c` are now exposed
- Strict segregation between view and data changes

See also #577

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[model] Merge model finalize to destructor
Parichay Kapoor [Tue, 22 Sep 2020 02:03:58 +0000 (11:03 +0900)]
[model] Merge model finalize to destructor

Merge model finalize to destructor itself
User need not call finalize explicitly

Other minor updates:
- Merge all train methods into 1
- isInitializable() is made private. Moved some of the name checking from this function
to when the layer itself is created.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[transfer learning] Update to use softmax prediction
Parichay Kapoor [Wed, 23 Sep 2020 05:24:31 +0000 (14:24 +0900)]
[transfer learning] Update to use softmax prediction

Update the transfer learning application to use the softmax prediction value,
giving a prediction only if the softmax value is beyond a certain threshold

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[License] s/Apache-2.0-only/Apache-2.0
Jihoon Lee [Thu, 24 Sep 2020 05:06:57 +0000 (14:06 +0900)]
[License] s/Apache-2.0-only/Apache-2.0

The Apache-2.0 license identifier does not have to state `-only`, so it is removed.

This resolves `findings #10` from nnstreamer/#2765

**Self evaluation:**
1. Build test: [ ]Passed [ ]Failed [x]Skipped
2. Run test: [ ]Passed [ ]Failed [x]Skipped

Cc: Jaeyun-jung <jy1210.jung@samsung.com>
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[refactor] Change enums to enum class
Parichay Kapoor [Tue, 22 Sep 2020 06:27:37 +0000 (15:27 +0900)]
[refactor] Change enums to enum class

Update enums to enum class before headers are exposed

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[refactor] Cleanup headers
Parichay Kapoor [Tue, 22 Sep 2020 05:17:57 +0000 (14:17 +0900)]
[refactor] Cleanup headers

This patch cleans up the headers of the internal codes
so that the headers can directly be exposed for the c++ API

Also removed applyIf from lazy_tensor (after checking with @zhoonit)
as it exposed #define in header and was not used

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[CS] Add inference page
Jihoon Lee [Mon, 21 Sep 2020 05:41:51 +0000 (14:41 +0900)]
[CS] Add inference page

**Changes proposed in this PR:**
- Change home to accommodate inference
- Route draw inference
- Add `draw_target` to use appdata to address label

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[layers] Move constructors must be noexcept
Parichay Kapoor [Mon, 21 Sep 2020 07:42:27 +0000 (16:42 +0900)]
[layers] Move constructors must be noexcept

Make all move constructors to be noexcept

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[TransferLearning] Update documentation
Parichay Kapoor [Mon, 21 Sep 2020 03:55:34 +0000 (12:55 +0900)]
[TransferLearning] Update documentation

Update documentation for the transfer learning application

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[CustomShortcut] Update docs and model
Jihoon Lee [Fri, 18 Sep 2020 06:54:00 +0000 (15:54 +0900)]
[CustomShortcut] Update docs and model

**Changes proposed in this PR:**
- Update model to use 10 data samples
- Update readme.md

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[ Application ] Make Util for Application
jijoong.moon [Mon, 21 Sep 2020 07:41:36 +0000 (16:41 +0900)]
[ Application ] Make Util for Application

Currently, some applications use bitmap_helper, and each has its own copy.
Rather than doing it this way, it is better to make a static lib and link
against it.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
3 years ago[ Doc ] Add Documentation of MNIST Application
jijoong.moon [Fri, 18 Sep 2020 07:25:54 +0000 (16:25 +0900)]
[ Doc ] Add Documentation of MNIST Application

Add README.md and images for MNIST.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
3 years ago[ Application ] Fix Logistic Regression Example to work
jijoong.moon [Fri, 18 Sep 2020 02:13:34 +0000 (11:13 +0900)]
[ Application ] Fix Logistic Regression Example to work

The current implementation of the Logistic Regression example is not working.
This PR includes fixes.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
3 years ago[CS] Remove leading underscore
Jihoon Lee [Mon, 21 Sep 2020 01:28:29 +0000 (10:28 +0900)]
[CS] Remove leading underscore

**Changes proposed in this PR:**
- Remove underscore to unify code style in custom shortcut

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years agoFix bn layer backwarding
Jihoon Lee [Thu, 17 Sep 2020 06:14:11 +0000 (15:14 +0900)]
Fix bn layer backwarding

**Changes proposed in this PR:**
- Change formula to calculate bn layer backward propagation
- Enable conv2d test case
- Add test case when batch is 1
- Renew test case to use non constant grad_ys

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[transferlearning] Add callback for testing model
Parichay Kapoor [Fri, 18 Sep 2020 00:44:02 +0000 (09:44 +0900)]
[transferlearning] Add callback for testing model

Added callback for testing the model
The callback prints the top-1 predicted output for a given image file
Increased the number of epochs back to 1000 to achieve better accuracy

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[application] Transfer learning using just C-API
Parichay Kapoor [Wed, 16 Sep 2020 12:14:23 +0000 (21:14 +0900)]
[application] Transfer learning using just C-API

Update transfer learning application to use just C-API
This divides the application into 3 parts -
1. Running the fixed part of the model - done using nnstreamer-single C-API
2. Training the trainable part of the model - done with nntrainer C-API
3. Running the testing of the trainable part of the model - done with nnstreamer-pipeline C-API with nntrainer tensor filter extension

The outputs of section 1 are kept loaded in memory rather than being saved to disk and loaded back.
The application supports section 1 using tflite directly on non-Tizen platforms.
Section 3 is not supported on non-Tizen platforms.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[ Application ] Doc for Logistic Regression
jijoong.moon [Fri, 18 Sep 2020 02:30:19 +0000 (11:30 +0900)]
[ Application ] Doc for Logistic Regression

Add doc for logistic regression application.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
3 years ago[Docs] Update example link
Jihoon Lee [Fri, 18 Sep 2020 03:22:49 +0000 (12:22 +0900)]
[Docs] Update example link

This patch updates the example link broken by moving folders.

resolves #567

**Self evaluation:**
1. Build test: [ ]Passed [ ]Failed [x]Skipped
2. Run test: [ ]Passed [ ]Failed [x]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Tensor] Fix bug in the broadcast support
Jihoon Lee [Thu, 17 Sep 2020 10:43:48 +0000 (19:43 +0900)]
[Tensor] Fix bug in the broadcast support

Fix a bug where strides and buffer axes were miscalculated

**Changes proposed in this PR:**
- Clarified consecutive-one strategy and same-stride strategy
- Change last stride to 0 only if it is using consecutive-one strategy
- Add regression test

Resolves: #559

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Cc: Jijoong Moon <jijoong.moon@samsung.com>
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>