Jihoon Lee [Wed, 11 Nov 2020 02:59:20 +0000 (11:59 +0900)]
[Model] Apply appcontext
Apply appcontext to NeuralNetwork. With this patch, the `chdir()` hack is no
longer needed :)
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Wed, 11 Nov 2020 12:14:25 +0000 (21:14 +0900)]
[model] Handle loss layer to be added from user
Handle a loss layer added by the user and created with the C++ API
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Mon, 9 Nov 2020 03:16:48 +0000 (12:16 +0900)]
[AppContext] Add AppContext
This patch adds a basic app context that sets the current working directory
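A minimal sketch of the idea, with hypothetical method names (the actual AppContext API may differ): relative paths are resolved against a directory stored in the context instead of changing the process-wide working directory with `chdir()`.
```cpp
#include <string>

// Hypothetical illustration of the AppContext idea: keep a working directory
// in the context and resolve relative paths against it, instead of calling
// chdir() and mutating process-global state.
class AppContext {
public:
  void setWorkingDirectory(const std::string &dir) { working_dir = dir; }

  std::string resolvePath(const std::string &path) const {
    if (!path.empty() && path.front() == '/')
      return path; // absolute paths are returned as-is
    return working_dir + "/" + path;
  }

private:
  std::string working_dir = ".";
};
```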
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Tue, 10 Nov 2020 04:57:47 +0000 (13:57 +0900)]
[docs] Update docs about backbone features
Update documentation for the ini with the newly added backbone features
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Tue, 10 Nov 2020 02:23:27 +0000 (11:23 +0900)]
[backbone/ini] Support subgraph with ini backbone
Added support for subgraph of a given ini backbone
Added corresponding unittests as well
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Mon, 9 Nov 2020 06:21:29 +0000 (15:21 +0900)]
[backbone] Add ini backbone properties
Added ini backbone properties:
- scaleSize - scale the size of the model
- preload - load the weights of this backbone model before adding it to the model
preload has issues that require the model file format to be updated;
wait for #361
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Tue, 10 Nov 2020 01:03:00 +0000 (10:03 +0900)]
[Layer] Change layer type to string
This patch changes the layer type to a string
**Changes proposed in this PR:**
- Add Layer::getType() and Layer::type
- Add `istrequal` to `parse_util` for case-insensitive comparison (see the sketch below)
- Adjust tests accordingly
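A minimal sketch of such a case-insensitive comparison (the actual `istrequal` signature in `parse_util` may differ):
```cpp
#include <algorithm>
#include <cctype>
#include <string>

// Sketch of a case-insensitive string equality check; the real istrequal()
// in parse_util may differ in signature and details.
static bool istrequal(const std::string &a, const std::string &b) {
  return a.size() == b.size() &&
         std::equal(a.begin(), a.end(), b.begin(),
                    [](unsigned char x, unsigned char y) {
                      return std::tolower(x) == std::tolower(y);
                    });
}
```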
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Mete Ozay [Wed, 11 Nov 2020 12:05:19 +0000 (12:05 +0000)]
Update README.md
- Update links to examples
- Update Getting Started and Running Examples
- Fix grammar and typos.
Signed-off-by: Mete Ozay <meteozay@gmail.com>
Parichay Kapoor [Wed, 4 Nov 2020 04:00:11 +0000 (13:00 +0900)]
[nnstreamer/backbone] Update to support more backbone
Update the nnstreamer backbone layer to support more backbones than just tflite.
Hold this PR until the bug fix on nnstreamer
https://github.com/nnstreamer/nnstreamer/pull/2850 is merged and reflected.
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Tue, 10 Nov 2020 12:23:15 +0000 (21:23 +0900)]
[cifar] Update cifar application with backbone
Update the cifar application, which uses databuffer with a generator, to
use the tflite backbone.
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Thu, 5 Nov 2020 10:48:24 +0000 (19:48 +0900)]
[Optimizer] Change enum type to string
Change the optimizer type to a string
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Fri, 6 Nov 2020 05:55:36 +0000 (14:55 +0900)]
[Fix] Throw when read/save fails
Currently, read/save did not throw when those operations failed.
With this patch, if an object fails to read, it throws an exception
to prevent false positives in test results
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Wed, 4 Nov 2020 11:31:03 +0000 (20:31 +0900)]
[Test] Add conv2d Integrated test
This patch adds a conv2d integrated test.
The model test passes; the layer test fails due to a bug fix.
**Changes proposed in this PR:**
- Add conv2d integrated test/generator
- Add transLayer for channel-first <-> channel-last conversion
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Wed, 4 Nov 2020 11:25:26 +0000 (20:25 +0900)]
[Conv] Fix convlayer backwarding
A calculation was missing in conv2d backwarding.
This patch fixes the issue.
With this patch, some tests fail. I'll take a look and fix them accordingly
within this commit.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [ ]Passed [X]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
jijoong.moon [Mon, 9 Nov 2020 13:01:40 +0000 (22:01 +0900)]
[ POOLING ] Fix global pooling dimensions
For global pooling, we need to set the output dimension on the last
dimension, width.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Parichay Kapoor [Fri, 6 Nov 2020 04:41:32 +0000 (13:41 +0900)]
[backbone] Documentation for backbone
Added documentation for backbone in ini-configuration
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
jijoong.moon [Thu, 5 Nov 2020 12:39:20 +0000 (21:39 +0900)]
[ Layer ] Add Output Layer to support multiple output
Currently there is no way to support multiple outputs.
However, explicit support is not necessary.
Instead, if the user writes "output_layers = layername0, layername1", nntrainer
automatically adds an output layer. (By comparing the output dimension with the input
layer's dimension, we can decide whether it is an addition or a split operation.)
Also, if a certain layer is specified as the input layer of multiple
layers, then it should be added implicitly.
TODO: support split operation
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
hyeonseok lee [Fri, 6 Nov 2020 06:21:18 +0000 (15:21 +0900)]
[README] Modify the image url
Modify the URL from an absolute path to a relative path
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
Parichay Kapoor [Thu, 5 Nov 2020 04:55:13 +0000 (13:55 +0900)]
[bn layer] Support non-trainable mode
Support a non-trainable mode for the bn layer, in which the parameters
of this layer are not updated.
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Mon, 2 Nov 2020 11:21:09 +0000 (20:21 +0900)]
[Test/Py] Add translayer
Add `translayer` to wrap a Keras layer so it uses the nntrainer layout.
This can be used for `genInput.py` as well.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
jijoong.moon [Fri, 6 Nov 2020 02:59:07 +0000 (11:59 +0900)]
[ Util ] Add how to generate cifar 10/100 bmp images
Add a document explaining how to generate cifar 10/100 bmp images
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Parichay Kapoor [Fri, 6 Nov 2020 02:28:09 +0000 (11:28 +0900)]
[README] Bugfix for image display
Bugfix for the README where images were not being shown
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Wed, 4 Nov 2020 02:27:53 +0000 (11:27 +0900)]
[debian/dist] Updated debian packaging of files
Updated debian packaging to limit the files which are packaged and exposed.
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Wed, 4 Nov 2020 02:25:58 +0000 (11:25 +0900)]
[restructure] Restructure the core files
This patch restructures the internal files.
The include and src folders are replaced with more relevant, clustered folders;
headers and sources now live together.
Also, the headers exposed in the packaging are limited to far fewer than
all the headers. Updated Android and Tizen packaging as well.
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Fri, 7 Aug 2020 08:00:19 +0000 (17:00 +0900)]
[Delegate] Add delegate support header
Add a delegate support header.
This supports setting the backend and device.
Added some properties, but they are experimental.
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Thu, 29 Oct 2020 06:38:39 +0000 (15:38 +0900)]
Update application to use backbone
Update the transfer learning application to use a backbone.
The feature extractor has now been removed from the application,
and the dependency on tflite is also removed.
However, caching of features from the feature extractor is not yet supported.
This makes the application too slow to run the full 1000 epochs on GBS.
Until caching is supported, the application test is removed from the GBS build.
Also fixed Android packaging for nntrainer with the tflite and KNN application.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Tue, 3 Nov 2020 10:03:42 +0000 (19:03 +0900)]
[backbone] Added native support for tflite backbone
Added native support for a tflite backbone.
The unittests are verified with the tflite backbone, as it
is preferred over the nnstreamer backbone.
Interfacing directly with the tflite API allows using the tensor memory
directly for forwarding, avoiding the memcpy incurred with the nnstreamer backbone.
TfLite takes input shapes in NHWC format,
whereas nntrainer uses NCHW.
This will be resolved later with a transpose operator.
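For illustration, the layout difference means the same logical element lives at different flat offsets in the two buffers (generic sketch, not nntrainer code):
```cpp
#include <cstddef>

// Flat offset of element (n, c, h, w) in an NCHW buffer (nntrainer layout).
size_t nchwOffset(size_t n, size_t c, size_t h, size_t w,
                  size_t C, size_t H, size_t W) {
  return ((n * C + c) * H + h) * W + w;
}

// Flat offset of the same element in an NHWC buffer (TfLite layout).
size_t nhwcOffset(size_t n, size_t c, size_t h, size_t w,
                  size_t C, size_t H, size_t W) {
  return ((n * H + h) * W + w) * C + c;
}
```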
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
jijoong.moon [Thu, 29 Oct 2020 07:10:55 +0000 (16:10 +0900)]
[ Layer ] Add input_layers keyword
This PR enables the 'input_layers' keyword.
With this keyword, we can specify a layer's input tensors.
. Added skip logic in parse_util to remove spaces and '[', ']' from the string and
split it with the ',' delimiter.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Parichay Kapoor [Thu, 29 Oct 2020 06:35:14 +0000 (15:35 +0900)]
[backbone] unittest for external backbone
Added support for external backbone
Also added a small add.tflite file for unittests
trainable is not supported with external backbones for now
Added tizen packaging and other unittest fixes
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Thu, 29 Oct 2020 06:32:42 +0000 (15:32 +0900)]
[backbone] Support for external backbone
Added support for an external backbone with nnstreamer.
This is enabled for both Ubuntu and Tizen.
This patch creates an nnstreamer layer to support running different kinds of model files
with the nnstreamer single C-API.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
hyeonseok lee [Wed, 4 Nov 2020 06:54:48 +0000 (15:54 +0900)]
Add github account on README.md file
Add my GitHub account to the README.md file
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
Jihoon Lee [Wed, 28 Oct 2020 11:42:37 +0000 (20:42 +0900)]
[IntegratedTest] Add batch normalization test
Add a test that uses batch normalization, with a minor structural change to
the tester.
**Changes proposed in this PR for tester:**
- Implement reorder logic for bn layer in `recorder.py`
- Add color to some debug prints
- Let layer update inside actual `backwarding`
- Save weights including non-trainable ones, but save gradients only for the
trainables
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Wed, 28 Oct 2020 06:44:10 +0000 (15:44 +0900)]
[Test/Util] Update ini / debug info
**Changes proposed in this PR:**
- Add some helper functions to iniTestWrapper
- Add color and adam debug info to recorder.py
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Tue, 3 Nov 2020 04:33:05 +0000 (13:33 +0900)]
[concat] Move validity checks to init
The dimension checks validating the various inputs of the concat layer
are moved to initialize().
They are kept in forwarding() under a DEBUG conditional.
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Tue, 3 Nov 2020 13:07:44 +0000 (22:07 +0900)]
[tensorfilter] Bug fix of tranpose
nntrainer takes data in NCHW format;
however, nnstreamer's tensor_converter produces data in NHWC format.
The existing implementation transposed the dimensions to match the NCHW format,
but the data itself was not transposed.
The unittest passed because the channel was just 1, which does not require
a transpose on the data side.
This patch removes the dimension transpose from the nntrainer tensor_filter,
and the transpose is done properly in the pipeline with nnstreamer's tensor_transform.
v2:
Update transfer learning application which uses the pipeline.
v3:
Updated customshortcut application
Resolves #695
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Mete Ozay [Tue, 3 Nov 2020 07:53:36 +0000 (07:53 +0000)]
Troubleshooting installation process with observed errors and working solutions
Signed-off-by: Mete Ozay <meteozay@gmail.com>
Jihoon Lee [Wed, 28 Oct 2020 05:23:47 +0000 (14:23 +0900)]
[Fix/test] Fix adam iteration start to 1
Fix a bug where epoch_idx was not saved for continue_train.
Fix the iteration to start from 0 in the model test.
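For context, standard Adam divides by `1 - beta^t` for bias correction, so the step counter must start at 1 (a sketch of the textbook formula, not nntrainer-specific code):
```cpp
#include <cmath>

// Textbook Adam bias-corrected step size: with t = 0 the denominators below
// become zero, which is why the iteration count must start at 1.
float adamStepSize(float lr, float beta1, float beta2, int t /* >= 1 */) {
  return lr * std::sqrt(1.0f - std::pow(beta2, static_cast<float>(t))) /
         (1.0f - std::pow(beta1, static_cast<float>(t)));
}
```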
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
jijoong.moon [Tue, 27 Oct 2020 04:51:00 +0000 (13:51 +0900)]
[ Layer ] Add Concat Layer
This PR includes Concatenate layer.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Jihoon Lee [Fri, 23 Oct 2020 06:36:39 +0000 (15:36 +0900)]
[Test] Refactor recorder.py
This commit mainly patches KerasRecorder to be more flexible
**Changes proposed in this PR:**
- Pass file, label info at `KerasRecorder.run` phase instead of __init__
to leave room to reuse the model
- Allow initiation with SequentialModel for usability
- Deal with cross_sigmoid, cross_softmax
- Move some functions out of class
**V2**
Since the `KerasRecorder` class was highly coupled to a certain model, it was
hard to create variations of it (e.g. using "mse" instead
of "cross" would require a whole new class, which is
error-prone).
This patch moves the class implementation into several functions.
These will be used with `functools.partial` to easily generate loss and
optimizer variations.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
[Test/Refactor] Restructure data format
Restructure the golden data to reduce redundancy and only check updated
weights, making the code more readable.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Wed, 28 Oct 2020 11:07:35 +0000 (20:07 +0900)]
[Fix/bn] Fix batchnormalize layer
Fix BatchNormalizationLayer: epsilon was included when saving the moving
variance, which could lead to long-term inaccuracy.
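For reference, in textbook batch normalization epsilon belongs only in the normalization denominator; the saved moving variance should be the raw variance (generic sketch, not the nntrainer implementation):
```cpp
#include <cmath>

// Running statistic stores the raw variance; no epsilon is mixed in.
float updateMovingVariance(float moving_var, float batch_var, float momentum) {
  return momentum * moving_var + (1.0f - momentum) * batch_var;
}

// Epsilon is added only here, inside the normalization denominator.
float bnNormalize(float x, float mean, float var, float eps) {
  return (x - mean) / std::sqrt(var + eps);
}
```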
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Wed, 28 Oct 2020 04:56:11 +0000 (13:56 +0900)]
[Fix/Optimizer] Fix decay_rate application
There was a bug where decay_rate was always applied even when it was left at
its default value, because `decay_steps != -1` evaluated to true.
This patch fixes the issue.
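A hypothetical sketch of the kind of guard involved (names are illustrative, not the exact nntrainer members): the decay branch must be skipped while `decay_steps` keeps its default of -1.
```cpp
#include <cmath>

// Hypothetical exponential learning-rate decay: only applied when decay_steps
// has been set explicitly; the default of -1 means decay is disabled.
float decayedLearningRate(float lr, float decay_rate, int decay_steps,
                          int iteration) {
  if (decay_steps <= 0) // default -1: no decay
    return lr;
  return lr * std::pow(decay_rate,
                       static_cast<float>(iteration) / decay_steps);
}
```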
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Fri, 23 Oct 2020 06:00:51 +0000 (15:00 +0900)]
[backbone] Add trainable feature to backbone
This patch adds support for training the backbone.
Corresponding unittests are also added.
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
jijoong.moon [Tue, 27 Oct 2020 11:01:52 +0000 (20:01 +0900)]
[ Layer ] Multiple Input Dimensions
The current implementation only takes one input. In order to take multiple
inputs, input_dim / output_dim should be of vector type.
This PR includes these fixes except for the addition layer, which requires
actual multiple inputs. That will be done in a consecutive PR.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Parichay Kapoor [Fri, 23 Oct 2020 03:06:04 +0000 (12:06 +0900)]
[backbone] Add unittests for backbone
Added unittests for backbone where models constructed
with and without a backbone are checked to be equivalent.
Corresponding bug fixes are also added.
See also #660
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Thu, 22 Oct 2020 08:39:10 +0000 (17:39 +0900)]
[model] Remove redundant checks
Remove the redundant check, when adding a layer, that the
name is unique. This is already ensured by ensureName().
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Thu, 22 Oct 2020 08:28:26 +0000 (17:28 +0900)]
[model] Add support of ini based backbone to the model
This patch adds support for an ini-based backbone to the neural network model.
From the point of view of the ini file, a backbone is treated as a layer itself.
This allows a graph of layers to be represented as a layer itself in the
ini file.
With this design, backbone must be specified as a layer with property backbone
as shown below with a sample pseudo-ini:
```ini
[Block1]
backbone: base_block.ini
[PoolLayer]
type: pooling2d
[Block2]
backbone: base_block.ini
```
ModelLoader loads the layer configuration from the backbone independently
and then extends the existing graph in the main model with this newly created
graph from the backbone ini.
The names of all layers which are inserted from the backbone to a model are
prefixed with the name of the backbone for easier management and for the user
to identify/manage the layers from a backbone.
The patch allows nested backbones and multiple backbones in a model description.
Unittests for this backbone support will follow in the next patch.
See also #660
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Thu, 22 Oct 2020 05:14:42 +0000 (14:14 +0900)]
[Layer] Fix 'stream << ' delegation
There was a bug where `std::cout << layer` called an undefined function.
This patch fixes the issue and adds some regression tests.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Dongju Chae [Thu, 29 Oct 2020 02:36:22 +0000 (11:36 +0900)]
[README.md] Add hall-of-fame section to README.md
This patch adds hall-of-fame section to README.md
Signed-off-by: Dongju Chae <dongju.chae@samsung.com>
Jihoon Lee [Wed, 21 Oct 2020 10:20:05 +0000 (19:20 +0900)]
[Test] Add ParamTest scaffolding for model tests
Add Parameterized test scaffolding for integrated tests with minor
changes
**Additional Changes proposed in this PR:**
- Add `TensorDim::TensorDim(const std::string &shape)`
- Move `Tensor::epsilon` to public
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
cc: Parichay Kapoor <pk.kapoor@samsung.com>
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Wed, 21 Oct 2020 02:38:19 +0000 (11:38 +0900)]
[Refactor/Test] Move iniTest to modelfile
This patch moves a test class and a factory function used in
model file test to `unittest_nntrainer_modelfile` from `test_util`
**Changes proposed in this PR:**
- Move iniTest to `unittest_nntrainer_modelfile`
- Move `mkIniTc`
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
jijoong.moon [Wed, 21 Oct 2020 11:56:26 +0000 (20:56 +0900)]
[ TENSOR ] Change to get Tensor Vector
Currently we only take and push one tensor per layer. For skip
connections or other layers which take and produce multiple tensors, we
need to handle vectors of tensors as a layer's input and output.
In this PR, the tensor input is changed to a tensor vector.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Parichay Kapoor [Fri, 23 Oct 2020 06:36:26 +0000 (15:36 +0900)]
[layer] Print layer name over layer address
Print the layer name if available;
if the layer name is not available, print the layer object name and its address.
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Wed, 21 Oct 2020 03:09:25 +0000 (12:09 +0900)]
[layer/optimizer] Bugfix for variadic templates
The layer/optimizer factory constructors used variadic templates.
However, they weren't actually used and could not be used the way they were written.
The way to fix this was using C-style variable arguments in between,
to allow different function signatures to work with templates at compile time,
as C-style variable arguments are not interpreted at compile time.
This creates an interface like below:
createLayer(LayerType::LAYER_FC, 4, ActivationType::ACT_RELU)
vs the current interface
createLayer(LayerType::LAYER_FC, {"unit=4", "activation=relu"})
The first is more compact and performs some checks on types (not all), etc.
However, the second is more expressive and readable.
I have updated the internal and external factory methods, making the final
style the latter version.
Simplified the C-API implementation to also use these factory methods.
Resolves #655
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
jijoong.moon [Wed, 21 Oct 2020 02:16:43 +0000 (11:16 +0900)]
[ Application ] VGG using learning rate decay
Change to use learning rate decay
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Jihoon Lee [Tue, 20 Oct 2020 10:36:37 +0000 (19:36 +0900)]
[Test] Add unittest_nntrainer_models
This patch adds `NodeWatcher` and `GraphWatcher` to build the
unittest_nntrainer_models test.
**Changes proposed in this PR:**
- Add multi iteration test
- Add gtest scaffolding
**Additional Patch will be followed**
- Handle loss + activation merge scenario
- Add param test
- Add mnist as a test
- Add final inference result
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Tue, 20 Oct 2020 02:37:01 +0000 (11:37 +0900)]
[ccapi] Added unittests for ccapi
Added unittests for ccapi
Updated the signature for setDataFile and setFunc for dataset
Also added some fixes and functionalities revealed by the unittests
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Mon, 19 Oct 2020 11:02:07 +0000 (20:02 +0900)]
[Weight] Add ctors
Weight did not have the copy/move ctors it should have.
This patch adds them.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Tue, 20 Oct 2020 06:42:57 +0000 (15:42 +0900)]
[ccapi] Enable pkgconfig
Enable pkgconfig for ccapi
Updated gitignore to not ignore *.pc.in files
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Mon, 19 Oct 2020 07:30:44 +0000 (16:30 +0900)]
[app] MNIST application with cc api
Updated the MNIST application to use the cc api
Added a corresponding bug fix to the Android packaging of the cc api
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Tue, 6 Oct 2020 04:22:39 +0000 (13:22 +0900)]
[ccapi] Initial draft for c++ API
Added initial draft for c++ API
This includes major headers with pure virtual classes
Updated internal classes to inherit them
Updated file names for clashing internal files
Updated capi to use the factory methods
Updated internal enum classes with cc api exposed enum classes
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Wed, 21 Oct 2020 05:06:08 +0000 (14:06 +0900)]
[vgg] Tensorflow VGG bugfix
Added bugfix for tensorflow VGG for the evaluation part
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Mon, 19 Oct 2020 11:00:22 +0000 (20:00 +0900)]
[Bug/Act] Fix setActivation call properly
`Act::setActivation` should be called for the activation layer.
However, because `virtual Layer::setActivation` had a different signature
from `Act::setActivation`, there was no way this function could be called.
This patch fixes the issue.
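A minimal, hypothetical sketch of the underlying C++ pitfall: a derived function with a different parameter list does not override the base virtual, it hides it, so virtual dispatch never reaches it.
```cpp
// Hypothetical signatures for illustration only.
struct Base {
  virtual int setActivation(int type) { return 0; }
  virtual ~Base() = default;
};

struct Derived : Base {
  // Different parameter type: this hides Base::setActivation instead of
  // overriding it, so a call through Base* never lands here. Declaring it
  // with `override` would have made the compiler reject the mismatch.
  int setActivation(float type) { return 1; }
};
```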
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Mon, 19 Oct 2020 11:03:09 +0000 (20:03 +0900)]
[NN] call setBatchSize at init
setBatchSize was not called at init, which caused problems when calling
layers from outside.
This fixes the issue
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
jijoong.moon [Tue, 20 Oct 2020 01:56:43 +0000 (10:56 +0900)]
[ ADMIN ] Add code of conduct
Add documentation related to the code of conduct.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Jihoon Lee [Mon, 12 Oct 2020 08:33:23 +0000 (17:33 +0900)]
[IntegratedTest] Add test generator
Add test generator and model recorder for integrated test cases.
This runs multiple epochs and saves related information.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Mon, 12 Oct 2020 08:18:09 +0000 (17:18 +0900)]
[IntegratedTest] Add methods and types for tests
**Changes proposed in this PR:**
- Open up getters for Layer::num_weights and the neuralnet flat graph
- Use getOutputDimension in some code
- Add types for future compatibility
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Mon, 12 Oct 2020 01:23:55 +0000 (10:23 +0900)]
[databuffer] Cleanup databuffers for ccapi
Cleanup databuffer headers for ccapi
See also #199
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
jijoong.moon [Mon, 12 Oct 2020 07:25:30 +0000 (16:25 +0900)]
[ Pooling 2D ] Fix calculate max when pooling 2d
numeric_limits min() returns the minimum positive value, but the initial value
for the max needs to be negative. This PR fixes that.
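For reference, `std::numeric_limits<float>::min()` is the smallest positive normalized value, so the proper seed for a running maximum is `lowest()`:
```cpp
#include <limits>

// min() is the smallest positive normalized float (~1.18e-38), not the most
// negative value, so it is the wrong initial value for a max reduction.
const float wrong_seed = std::numeric_limits<float>::min();     // > 0
const float right_seed = std::numeric_limits<float>::lowest();  // most negative finite
```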
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Mon, 5 Oct 2020 11:56:40 +0000 (20:56 +0900)]
[ Application ] NNTrainer Validation with VGG
This PR includes:
- NNTrainer Validation with VGG Model.
- Input is cifar100.
. 100 Classes, 450 Images for train, 50 Images for validation
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Jihoon Lee [Wed, 14 Oct 2020 10:25:57 +0000 (19:25 +0900)]
[Docs] Remove docker GIAG from readme.md
As docker GIAG build support is no longer maintained, this patch removes
the guide to docker GIAG build.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Geunsik Lim [Mon, 12 Oct 2020 09:56:01 +0000 (18:56 +0900)]
[GitHub] Replace @NNStreamer/nntrainer with @nnstreamer/nnstreamer
This commit fixes an incorrect "@org/team-name" (e.g., @NNStreamer/nntrainer),
since there is no team called "@NNStreamer/nntrainer". So, we need
to replace @NNStreamer/nntrainer with @nnstreamer/nnstreamer.
For more details, please refer to the below webpage.
* NNStreamer's Teams - https://github.com/orgs/nnstreamer/teams
Signed-off-by: Geunsik Lim <geunsik.lim@samsung.com>
Parichay Kapoor [Thu, 8 Oct 2020 08:47:29 +0000 (17:47 +0900)]
[layer] Update the layer constructors
Update the constructors of layer to take arguments
Added a layer_factory creator method
Updated model_loader and capi to use the factory creator
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Thu, 8 Oct 2020 06:42:32 +0000 (15:42 +0900)]
[optimizer] Refactor optimizer
This patch refactors the optimizer in the following fashion:
- Split optimizer implementations of different types into the derived classes adam and sgd
- Create an optimizer_factory to create optimizer class objects based on their type
- This can be used directly with ccapi
- The OptParam struct has been removed
- applyGradients has been broken down into different methods
- Updated associated unittests
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
jijoong.moon [Mon, 12 Oct 2020 01:48:08 +0000 (10:48 +0900)]
[ Max Pooling ] Fix bug when max_id created
This PR includes:
- Make Layer::setBatch a virtual function to set the batch size
- max_id for max pooling should be created at initialization time,
but the batch size is not set properly at that point. So when
setBatch is called for each layer, it is set properly.
TODO: depending on inference vs training, max_id should be handled
properly. It is not needed for inference.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Jihoon Lee [Tue, 6 Oct 2020 04:20:13 +0000 (13:20 +0900)]
Refactor getDataFromBuffer
This patch rearranges getDataFromBuffer so it does not rely on nested buffers,
avoiding unnecessary allocation & deallocation.
This provides a notable speedup in some cases.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Thu, 8 Oct 2020 08:10:07 +0000 (17:10 +0900)]
[CS/Docs] Update readme
**Changes proposed in this PR:**
- Delete manual package install requirements
- Update demo footage
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Thu, 8 Oct 2020 07:24:44 +0000 (16:24 +0900)]
[CS] Add cutoff logic through accuracy
**Changes proposed in this PR:**
- Add more epochs
- Add cutoff logic to check if the guess is confident
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Thu, 8 Oct 2020 06:53:07 +0000 (15:53 +0900)]
[CS/Refactor] Scrape ecore pipe from the train_
**Changes proposed in this PR:**
- Ecore Pipe is no longer used to update UI
- Broaden the canvas to make it easier to draw
- s/tries/epoch for training progress
- Minor bug fix that was generating warnings
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Tue, 6 Oct 2020 03:34:47 +0000 (12:34 +0900)]
[Tensor/model] Move some member functions to private
Move BroadcastInfo to private
Move printMetrics to private
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Tue, 6 Oct 2020 03:12:04 +0000 (12:12 +0900)]
[layer] Update layer interface
Update the interface for layer
This changes the public and private exposed member functions
Corresponding unittests are updated
Major changes are for the Flatten and Loss layers - they both now support
setting basic layer properties such as input_shape, etc.
See also #199
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Wed, 7 Oct 2020 05:48:46 +0000 (14:48 +0900)]
[CS] Improve UI Interaction
**Changes proposed in this PR:**
- Add a disabled mode for the buttons
- Disable eval when 'model.bin' is not present
- Remove `model.bin` on exit
- Change colors of the buttons
- Enlarge the emoji
- A few code retouches and bug fixes
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Tue, 6 Oct 2020 11:47:42 +0000 (20:47 +0900)]
[CS] Add color to the label and fix status
**Changes proposed in this PR:**
- Add color to the result label
- Fix return status for void functions
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Tue, 6 Oct 2020 11:42:37 +0000 (20:42 +0900)]
[CS] Change png -> jpeg
Currently, the pngdec gstelement is not supported on the wearable by
default. Change the saved data format to jpeg in order to use jpegdec.
See also #553
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Yongjoo Ahn [Tue, 6 Oct 2020 07:28:51 +0000 (16:28 +0900)]
[typo] Fix minor typos in spec file
- Fix minor typos in spec file
Signed-off-by: Yongjoo Ahn <yongjoo1.ahn@samsung.com>
Jihoon Lee [Mon, 28 Sep 2020 11:44:22 +0000 (20:44 +0900)]
[CS] Show inference
This patch shows the inference result on the page after evaluation.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Mon, 28 Sep 2020 10:50:30 +0000 (19:50 +0900)]
[CS] s/draw_target/label + mode/
This patch changes ad->draw_target to draw->label and draw->mode to
accommodate inference and training at the same time.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Thu, 24 Sep 2020 05:18:15 +0000 (14:18 +0900)]
[inference] Remove deciding for inference at model construction
Currently, with is_train, the user has to decide at model construction whether the model
is to be used for inference or training, but we still allow changing it.
We don't need that distinction; rather, using inference/training should be allowed
at any time by just changing the batch size (this has been allowed since the previous commit).
Now, once training is done, with the model loaded in memory, inference can be called directly.
Inference can also be run by creating and loading the model from disk.
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Tue, 6 Oct 2020 04:25:56 +0000 (13:25 +0900)]
[Databuffer] Fix queue is not emptied properly
There was an issue where the databuffer did not empty its queue properly.
This patch fixes the issue.
Resolves #621
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Yongjoo Ahn [Mon, 5 Oct 2020 06:44:39 +0000 (15:44 +0900)]
[build/meson] Add gmodule_dep for nnstreamer subplugin
- Add gmodule_dep to build nnstreamer_filter_nntrainer
- `nnstreamer_subplugin.c` includes `gmodule.h`
Signed-off-by: Yongjoo Ahn <yongjoo1.ahn@samsung.com>
Jihoon Lee [Mon, 28 Sep 2020 07:43:11 +0000 (16:43 +0900)]
[Filter/refactor] Refactor error handling logic
g_assert ends the program rather than giving callers a chance to handle
the error code. This patch replaces g_assert with return values.
v2. Remove assertion code and unify the code to throw
- Remove assertion code; instead throw to the top level and catch
everything there
- Merge the `NNTrainer::init()` logic into the constructor
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Fri, 25 Sep 2020 07:41:22 +0000 (16:41 +0900)]
[CS] Add inference pipeline
**Changes proposed in this PR:**
- Add pipeline for the inference
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
jijoong.moon [Mon, 5 Oct 2020 02:54:27 +0000 (11:54 +0900)]
[ Packaging ] Meta Package for NNTrainer
NNTrainer meta package:
- nntrainer-core
- nnstreamer-nntrainer if nnstreamer_filter is on
- capi-nntrainer if with tizen
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Parichay Kapoor [Mon, 5 Oct 2020 05:05:09 +0000 (14:05 +0900)]
[neuralnet] Add alternatives to getLoss
getLoss used to get the current loss of the model, which was based
on the previous batch of data the network ran on.
This does not allow getting the training/validation loss.
Added getTrainingLoss and getValidationLoss for this purpose,
and updated the getLoss description to include this information.
As the MNIST application was using getLoss(), which returns the
loss of the last processed element, this value changed with #600,
since with #600 the last element is a batch of data rather than just one data element.
The application is updated to now compare all three losses with
updated values.
So, this patch fixes that bug in the main branch as well.
Resolves #617
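A minimal usage sketch with the accessors named above (the header and class names here are assumptions):
```cpp
#include <neuralnet.h> // header name is an assumption

// Hypothetical helper: report the three loss values discussed above.
void reportLosses(nntrainer::NeuralNetwork &model) {
  float train_loss = model.getTrainingLoss();   // loss over the training data
  float valid_loss = model.getValidationLoss(); // loss over the validation data
  float last_loss  = model.getLoss();           // loss of the last processed batch
  (void)train_loss;
  (void)valid_loss;
  (void)last_loss;
}
```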
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Fri, 25 Sep 2020 02:59:50 +0000 (11:59 +0900)]
[CS] Enlarge fonts and s/test/eval
**Changes proposed in this PR:**
- Enlarge font size, and change test to eval
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Thu, 24 Sep 2020 00:52:18 +0000 (09:52 +0900)]
Add print preset
**Changes proposed in this PR:**
- Add print preset for model and layer
- Add model print flags
- move printFlagEnums and print(out, flag) to private
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Cc: Parichay Kapoor <pk.kapoor@samsung.com>
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Thu, 24 Sep 2020 05:09:50 +0000 (14:09 +0900)]
[batchsize] Change the semantics of batch size
This patch changes the semantics of the way batchsize is used in the library
1. batch_size is no longer a property of the layer. It can still be set externally by the model.
The method to set it is still public but will soon be changed to private.
2. The batch_size of the input/label/derivative tensors provided to the forwarding/backwarding functions
can no longer be arbitrary. It must match the batch_size set on the model and the layer.
This change in semantics follows a long-term design where memory for input/output
is pre-allocated.
3. batch_size can now be set at train time, rather than the earlier design where the batch_size
had to be set at init time. This comes from the design change that the memory for the model weights
can be allocated at init time, which is not dependent on the batch size. However, the memory for
input/output should be allocated at train/inference time, as the batch size can differ between these.
In the current design, memory is allocated every iteration. Later, when memory is allocated once
and reused, a change in batch size will change the memory at once at the first iteration (rather than doing
this at init time). This change is necessary to allow running inference without needing to initialize
the model again.
V2:
Updated validation to run on a whole batch of data at once
Also updated Tensor.argmax() to operate on a batch of data rather than on the whole data
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Tue, 22 Sep 2020 12:51:45 +0000 (21:51 +0900)]
[weight] weight class to simplify headers
Added a weight class to simplify headers.
All weight-related enums and properties go to the weight header
rather than being dumped in layer or optimizer.
V2:
Update swap to swap gradients even if not trainable (see the sketch below)
Update swap of Tensor, TensorDim to be friend functions
Copy and move assignments are now defaulted
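A compact sketch of the friend-swap idiom mentioned in V2 (the member names are placeholders, not the actual Weight layout, which holds Tensors):
```cpp
#include <utility>
#include <vector>

// Placeholder Weight with illustrative members.
class Weight {
public:
  // A friend swap can reach private members; note the gradient is exchanged
  // unconditionally, i.e. even for non-trainable weights.
  friend void swap(Weight &lhs, Weight &rhs) noexcept {
    using std::swap;
    swap(lhs.variable, rhs.variable);
    swap(lhs.gradient, rhs.gradient);
    swap(lhs.trainable, rhs.trainable);
  }

private:
  std::vector<float> variable;
  std::vector<float> gradient;
  bool trainable = true;
};
```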
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Wed, 23 Sep 2020 09:28:15 +0000 (18:28 +0900)]
[CS] Add visual cue when training is done
**Changes proposed in this PR:**
- Visually notify when training is done
- Add a 'go to main menu' button when training is done
- The 'go to main menu' button shows the final best accuracy
Resolves #572, #574
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Fri, 25 Sep 2020 08:05:30 +0000 (17:05 +0900)]
[app/test] Enable app-test with enable-test
Enable application testing with enable-test only
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Fri, 25 Sep 2020 06:31:17 +0000 (15:31 +0900)]
[MNIST] Add output verification with tizen build
Add the training loss value verification with tizen build
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>