Parichay Kapoor [Fri, 19 Mar 2021 11:38:16 +0000 (20:38 +0900)]
[optimizer] Add optimizer section to INI
Add a new optimizer section to INI
This is essential to allow custom optimizers, as the
properties of optimizers need to be iterable.
Currently, the optimizer has to be specified in the model section, which,
besides being poor design, limits the properties that can
be set for the optimizer, as the model loader has to know
which properties belong to the optimizer and which to the model.
This patch separates the optimizer as a new section and updates
the unittests.
Note that this removes adam as the default optimizer. An optimizer must now
be defined in the model file if the model is to be used for training.
For inference, a model can be defined without an optimizer.
Updated documentation for the changes made.
See also #986
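For illustration, a minimal sketch of the new layout, loaded through the ccapi;
the section and key names here are assumptions, not the confirmed schema.
```
// Hedged sketch: a model file with the optimizer in its own section,
// loaded via the ccapi (header path and exact key names assumed).
#include <model.h>

int main() {
  // model.ini (illustrative):
  //   [Model]
  //   type = NeuralNetwork
  //   loss = cross
  //
  //   [Optimizer]            ; optimizer now lives in its own section
  //   type = adam
  //   learning_rate = 0.001
  //
  //   [inputlayer]
  //   type = input
  //   input_shape = 1:1:100
  auto model = ml::train::createModel(ml::train::ModelType::NEURAL_NET);
  // The loader can now iterate the [Optimizer] section's properties directly
  // instead of filtering optimizer keys out of the model section.
  return model->loadFromConfig("model.ini");
}
```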
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jaeyun [Wed, 31 Mar 2021 06:11:32 +0000 (15:11 +0900)]
[Test/Util] use exit in script
'return' is not available in this script, so call 'exit' instead.
Signed-off-by: Jaeyun <jy1210.jung@samsung.com>
jijoong.moon [Wed, 31 Mar 2021 11:33:52 +0000 (20:33 +0900)]
[ LSTM ] Add Skeleton LSTM Layer Class
This PR includes skeleton code for the LSTM layer
Resolves:
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Jihoon Lee [Tue, 30 Mar 2021 05:48:40 +0000 (14:48 +0900)]
[Pooling2D] Change warning to error at init
For global_* pooling, change warnings to errors for some parameter checks
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Mon, 29 Mar 2021 08:41:53 +0000 (17:41 +0900)]
[Pooling2D] Rework global max pooling
**Changes proposed in this PR:**
- Rework global max pooling to work with a lambda function (see the sketch below)
- Remove the switch cases from the main pooling loop
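The gist of the rework, as a simplified standalone sketch (1-D, single channel;
not the actual nntrainer code):
```
// One generic window loop; the per-window reduction is supplied as a
// lambda instead of a switch inside the loop.
#include <cstddef>
#include <functional>
#include <vector>

std::vector<float>
pool1d(const std::vector<float> &in, std::size_t window, std::size_t stride,
       const std::function<float(const float *, std::size_t)> &reduce) {
  std::vector<float> out;
  for (std::size_t s = 0; s + window <= in.size(); s += stride)
    out.push_back(reduce(in.data() + s, window)); // reduction chosen by caller
  return out;
}

// Usage: the pooling type only selects a lambda, not a branch in the loop.
// auto mx = [](const float *w, std::size_t n) { return *std::max_element(w, w + n); };
// auto avg = [](const float *w, std::size_t n) { return std::accumulate(w, w + n, 0.f) / n; };
```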
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Thu, 25 Mar 2021 13:14:44 +0000 (22:14 +0900)]
[Pooling2d] Rework Average Pooling
This patch refactors the average pooling layer so that it does not rely
on the pooling loop.
From this patch on, max_idx is used to remember the number of
effective elements used in the average calculation.
Some checks are also added to `initialize`,
and global average pooling is merged into average pooling.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Thu, 25 Mar 2021 11:08:24 +0000 (20:08 +0900)]
[Pooling2d] Rework pooling to avoid padding
This patch
1. sets up scaffolding so the pooling variants can reuse a common loop,
2. initializes the index vector when setting the batch (fixing a hidden
bug), and
3. implements max pooling without additional memory allocation.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Mon, 29 Mar 2021 12:49:44 +0000 (21:49 +0900)]
[Pooling2D/Test] Add model tests
Add model test for various types of pooling2d
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
hyeonseok lee [Tue, 30 Mar 2021 04:51:54 +0000 (13:51 +0900)]
Handle BAD_OVERRIDE issue
Match type signatures with nntrainer::Optimizer by adding const
resolves: 458424, 458425
Self evaluation:
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
Jihoon Lee [Mon, 22 Mar 2021 06:53:00 +0000 (15:53 +0900)]
[rpm] Fix nnstreamer dependency
This patch resolves the nnstreamer dependency problem, making the rpm
build enable the plugin-related tests.
This patch is waiting on nnstreamer/api#21.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Tue, 30 Mar 2021 02:35:44 +0000 (11:35 +0900)]
[graph] Bug fix for no loss
Added a bug fix to handle the scenario where no loss is
specified. The issue was a mismatch between the ordering of the loss
types in the parser and the loss type enums.
This patch fixes it. Further, the graph now simply returns if no loss is found.
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
hyeonseok lee [Tue, 30 Mar 2021 03:48:16 +0000 (12:48 +0900)]
Handle Unchecked return value issue
Check the return value of the remove function
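The fix pattern, assuming the function in question is the C library remove():
```
#include <cstdio>

void removeCheckpoint(const char *path) {
  if (std::remove(path) != 0) // the result was previously discarded
    std::perror("remove");
}
```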
resolves: 1222078, 1223055
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
Parichay Kapoor [Tue, 30 Mar 2021 03:42:00 +0000 (12:42 +0900)]
[graph/ini] Support for default input_layers
This patch adds support for default input_layers.
If no input_layers is specified, the layer above
the current layer in the ini file, or the previously
added layer in the model, becomes the input layer for the current layer.
This connection is only made if the current layer
has no input layers, as sketched below.
Resolves #1046
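A simplified sketch of the wiring rule (illustrative types, not the actual
loader code):
```
#include <cstddef>
#include <string>
#include <vector>

struct LayerSpec {
  std::string name;
  std::vector<std::string> input_layers;
};

// A layer that specifies no input_layers is connected to the layer
// right above it; explicit connections are left untouched.
void applyDefaultInputs(std::vector<LayerSpec> &layers) {
  for (std::size_t i = 1; i < layers.size(); ++i)
    if (layers[i].input_layers.empty())
      layers[i].input_layers.push_back(layers[i - 1].name);
}
```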
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Thu, 25 Mar 2021 04:17:16 +0000 (13:17 +0900)]
[Conv2d/Test] Add conv2d multistride test
This patch enables the conv2d uneven multistride test, where some parts of
the image are dropped.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Thu, 25 Mar 2021 04:15:19 +0000 (13:15 +0900)]
[Conv2d] Rework calcDerivative
**Changes proposed in this PR:**
- Rework conv2d::calcDerivative to enable calculating the derivative when
some of the last strides are dropped.
- Disable channel-mode im2col (commented out because it is on the critical
path)
- Minor optimization on conv2d::forward
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Wed, 24 Mar 2021 07:19:48 +0000 (16:19 +0900)]
[Conv] Check conv2d restriction
This patch moves conv2d checks to the initialization.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
jijoong.moon [Wed, 24 Mar 2021 01:35:58 +0000 (10:35 +0900)]
[ RNN ] Set tanh as the default activation function
Set tanh as the default activation function.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Wed, 24 Mar 2021 11:06:48 +0000 (20:06 +0900)]
[ RNN ] Fix Tensor copy bug and set proper initialization
This commit includes
. Fix Tensor copy by using getSharedDataTensor for hidden and input
. Set proper hidden initialization depending on training and inference
. Fix the calculation of hidden
. Remove setActivation
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Mon, 22 Mar 2021 12:28:34 +0000 (21:28 +0900)]
[ RNN ] Support various activation functions
In this PR, various activation functions are supported using the ActiFunc
class.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Fri, 19 Mar 2021 08:34:34 +0000 (17:34 +0900)]
[ RNN ] Implementation of forwarding
- Implementation of RNN forwarding.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Thu, 18 Mar 2021 03:54:37 +0000 (12:54 +0900)]
[ RNN ] Add initialization of RNN
Add Initialization of RNN Layer
- Set dimension and Weight
- Add Unittest for initialization
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Parichay Kapoor [Thu, 18 Mar 2021 11:39:59 +0000 (20:39 +0900)]
[optimizer] Refactor optimizers
Refactor optimizers to update to the new design. Now there are
two internal optimizer headers -
1. optimizer_devel - which is used outside of the optimizer. The
interface provided here must be sufficient to use the optimizer from
the models or other classes.
2. optimizer_impl - which follows the same interface but provides
implementations for some more functions as well as some more member
variables.
optimizer.h in the API is a subset of optimizer_devel and is the
exposed API for an external user.
optimizer_devel.h is the API which a user must implement when creating
their own optimizer.
optimizer_impl.h provides another class which a custom optimizer implementer
can extend for convenience.
These classes follow the design below:
optimizer.h (ml::train::Optimizer) <- optimizer_devel.h (nntrainer::Optimizer)
<- optimizer_impl.h (nntrainer::OptimizerImpl)
Further, model, graph, and all other classes contain objects of type nntrainer::Optimizer.
The C++ API is also reduced to match the C API.
Once all the C++ APIs are matched with the C API, we can move more functions
from *_devel.h files to *.h files.
See also #986
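In skeleton form, the hierarchy described above (illustrative; the real
headers carry the full interfaces):
```
namespace ml {
namespace train {
// optimizer.h: the exposed API, a subset of the devel interface.
class Optimizer {};
} // namespace train
} // namespace ml

namespace nntrainer {
// optimizer_devel.h: the interface a custom optimizer must implement.
class Optimizer : public ml::train::Optimizer {};
// optimizer_impl.h: same interface plus common members/implementations,
// which a custom optimizer implementer can extend for convenience.
class OptimizerImpl : public Optimizer {};
} // namespace nntrainer
```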
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Thu, 18 Mar 2021 10:15:39 +0000 (19:15 +0900)]
[model-loader] Empty backbone now gives error
Added a check in the model loader so that an empty backbone now gives an error.
For some reason, even the negative testcases for this scenario were written like
positive test cases. Updated them as well.
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Wed, 17 Mar 2021 04:19:12 +0000 (13:19 +0900)]
[graph] Bug fix for name update
Bug fix for name updates when new layers are being
added to the graph. Earlier, the names were modified only
for the first entry, but this can be a problem later.
Further, this imposed restrictions on the order of use of
updateNameInLayers() and addLayerNode().
Now, with this patch, the names are updated for all layers
except self. Further, the names are updated for all occurrences
rather than just the first layer. This resolves both the issues mentioned
above.
Added some comments, and checks on the ith variable before accessing it.
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Tue, 16 Mar 2021 11:04:56 +0000 (20:04 +0900)]
[graph] Cleanup the graph
Clean up redundant member variables in the graph.
Rename some of the functions; others to be modified
have been added as TODOs.
Getters have been modified to work better.
Some redundant members have been removed from layer_internal as well.
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Tue, 16 Mar 2021 10:11:06 +0000 (19:11 +0900)]
[graph] Add compiled status to graph
Add a compiled status to the graph.
Compiling multiple times or modifying the graph after compilation
is erroneous. Getting the sorted list before completing the compilation
is also an error.
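An illustrative guard for these rules (not the actual graph code):
```
#include <stdexcept>
#include <vector>

class Graph {
public:
  void addNode(int n) {
    if (compiled) throw std::runtime_error("cannot modify a compiled graph");
    nodes.push_back(n);
  }
  void compile() {
    if (compiled) throw std::runtime_error("graph is already compiled");
    // topological sort would run here
    compiled = true;
  }
  const std::vector<int> &sorted() const {
    if (!compiled) throw std::runtime_error("graph is not compiled yet");
    return nodes;
  }

private:
  std::vector<int> nodes;
  bool compiled = false;
};
```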
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Tue, 16 Mar 2021 07:05:02 +0000 (16:05 +0900)]
[graph] Bug fixes in layer realizations
Add bug fixes for layer realizations when making the graph.
Many scenarios for creating graph connections were missed,
which this patch fixes.
Further, this patch reduces the isCompilable() checks, as the first
layer can't be determined until the topologicalSort() of the graph is done.
Added checkCompiledGraph(), which checks the first layer along with a few
more things such as types.
See Also #986
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Tue, 16 Mar 2021 05:16:05 +0000 (14:16 +0900)]
[graph] Move compile to graph
Compilation has moved to the graph, with an easier API exposed to the model.
The number of publicly exposed functions has also been reduced.
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Tue, 16 Mar 2021 03:28:09 +0000 (12:28 +0900)]
[graph] Remove layers from graph
Update the graph to remove layers and keep a single data structure
for maintenance purposes. Now, adj represents the unsorted list of
nodes; the sorted list of nodes is available in Sorted after the
topological sort.
Additional bug fixes:
- getSortedLayerNode was giving out wrong indices; it is fixed now
- topological sort now empties the sorted list before adding new elements
- other minor fixes to make adj replace layers
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
jijoong.moon [Mon, 22 Mar 2021 12:25:20 +0000 (21:25 +0900)]
[ ActFunc ] Separate Activation Function from Activation Layer
In order to use the activation function methods in other layers, it is
better to separate the activation function definition from the layer.
This PR introduces a new ActiFunc class and modifies the
activation layer to use it.
The RNN layer also uses this as a private variable.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
hyeonseok lee [Tue, 23 Mar 2021 09:05:41 +0000 (18:05 +0900)]
[tensor_filter_nntrainer] Set custom deleter to unique_ptr
g_new() allocates via g_malloc(), so its memory must be released with g_free() instead of delete.
Refer to https://developer.gnome.org/glib/stable/glib-Memory-Allocation.html
It handles HEAP_INCOMPATIBLE.FREE issue
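The pattern, sketched: memory from g_new()/g_malloc() gets a g_free-based
deleter, so the unique_ptr never calls delete on it.
```
#include <cstddef>
#include <glib.h>
#include <memory>

using GBuffer = std::unique_ptr<char, decltype(&g_free)>;

GBuffer makeBuffer(std::size_t n) {
  return GBuffer(g_new(char, n), g_free); // allocator and deleter now match
}
```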
resolves: 458008, 458009
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
Jihoon Lee [Tue, 23 Mar 2021 05:35:56 +0000 (14:35 +0900)]
[TestHub] Produce xml output for gtest
As XML output is needed for the testhub, this patch adds its generation to
the meson test.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
hyeonseok lee [Mon, 22 Mar 2021 06:25:37 +0000 (15:25 +0900)]
Handle uncaught exception issue
Added try-catch statements
resolves: 1143556, 1144840
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
hyeonseok lee [Mon, 22 Mar 2021 03:26:16 +0000 (12:26 +0900)]
Handle Large stack use(STACK_USE) issue
Coverity warns that a stack overflow may occur in embedded applications,
because the featureVector size (1080000) is bigger than the maximum stack size (250000).
So featureVector is now defined in the global area.
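The fix pattern in miniature (sizes from the report; names and element type
illustrative):
```
#include <array>
#include <cstddef>

constexpr std::size_t kFeatureLen = 1080000; // exceeds the 250000 stack limit
static std::array<float, kFeatureLen> featureVector; // global, not the stack

void fillFeatures() {
  // float local[kFeatureLen]; // as a local this would risk stack overflow
  featureVector.fill(0.0f);
}
```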
resolves: 1216560, 1216582
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
hyeonseok lee [Mon, 22 Mar 2021 03:17:35 +0000 (12:17 +0900)]
Handle Not restoring ostream format(STREAM_FORMAT_STATE) issue
Restore the output stream adjust flags of the standard stream
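The usual shape of this fix: save the stream's format flags and restore them
after printing.
```
#include <iostream>

void printAsHex(unsigned v) {
  std::ios_base::fmtflags saved = std::cout.flags(); // remember current flags
  std::cout << std::hex << std::showbase << v << '\n';
  std::cout.flags(saved); // the standard stream is restored for later users
}
```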
resolve: 1222083
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
hyeonseok lee [Mon, 22 Mar 2021 01:50:13 +0000 (10:50 +0900)]
Handle Uninitialized scalar field(UNINIT_CTOR) issue
Set the member variables buffer_size and buffer_axis in the constructor
resolve: 1216586
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
hyeonseok lee [Fri, 19 Mar 2021 06:56:28 +0000 (15:56 +0900)]
Handle Unchecked return value(CHECKED_RETURN) issues
Added checks on the return values
resolves: 1222078, 1222441
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
hyeonseok lee [Fri, 19 Mar 2021 05:54:28 +0000 (14:54 +0900)]
Handle Unused value issue
Store the status value in originStatus to protect it from being overwritten
resolves: 1216574, 1216576, 1216588, 1216594
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
jijoong.moon [Thu, 11 Mar 2021 11:46:06 +0000 (20:46 +0900)]
[ Application ] Add Embedding Layer Training Example
This PR includes:
- A simple logistic regression application to train an Embedding layer + Flatten +
Fully Connected layer.
- Input generation + Keras code
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Parichay Kapoor [Fri, 19 Mar 2021 09:16:58 +0000 (18:16 +0900)]
[models/test] Valgrind models test fix
Valgrind reports uninitialized memory usage for this
unittest, which is fixed with this patch.
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Fri, 19 Mar 2021 08:44:55 +0000 (17:44 +0900)]
[test/bug] Bug fix for layers unittest
This patch adds a bug fix for the layers unittest.
resetLayer() used to first free the layer and then call reset on the manager.
Freeing the layer freed the memory of the weights, which left
the reference_wrappers for the weights in the manager in an undefined state.
This patch changes the order. Further, it removes some unsafe
practices from var_grad.
This is a hotfix. The inability to check the validity of the weights in the manager
will be handled separately.
Resolves #1027
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Tue, 16 Mar 2021 08:50:43 +0000 (17:50 +0900)]
[Test] Add pluggable layer test
Add some missing tests for the pluggable layer
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Tue, 16 Mar 2021 07:58:01 +0000 (16:58 +0900)]
[Package] Use meson test
As meson already provides a test suite, this patch substitutes the old tests with
meson test. This provides coherency between the build test and the simple
tests conducted on a local machine.
resolves #998
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Tue, 16 Mar 2021 07:40:08 +0000 (16:40 +0900)]
[Bug] s/weight_decay/weight_regularizer
weight_decay is an old property which made some app tests fail. This
patch resolves the issue
revealed by #1008.
resolves #1018
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Wed, 10 Mar 2021 14:03:56 +0000 (23:03 +0900)]
[Weights] Split weight variable init and alloc
Split the initialization and memory allocation for weights.
Three bugs exposed by this have been resolved:
- the manager does not allow tracking var_grads once initialized
- resetGradient confirms the allocation is done before accessing the memory
- allocateVariable() calls the correct initializer now
Further, the reinitialize logic in the layers unittest has been
split into two parts, initialize and reinitialize, where
reinitialize resets the layer and manager and then calls initialize.
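A much-simplified sketch of the split lifecycle (illustrative types, not the
real Weight/Var_Grad classes):
```
#include <cstddef>
#include <vector>

struct Var {
  void initialize(std::size_t n) { len = n; } // bookkeeping only, no memory
  void allocate() { data.assign(len, 0.0f); } // memory comes later
  void resetGradient() {
    if (!data.empty()) // confirm allocation before touching memory
      data.assign(len, 0.0f);
  }
  std::vector<float> data;
  std::size_t len = 0;
};
```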
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Wed, 10 Mar 2021 13:19:29 +0000 (22:19 +0900)]
[layers] Weight creation bug fix
The name of the weight, passed as a string, was being interpreted as the value
of the alloc_now variable. This patch fixes it by passing the arguments
appropriately.
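The underlying C++ gotcha, in a self-contained illustration (signatures
hypothetical): a string literal converts to bool by a standard conversion but
to std::string only by a user-defined one, so a bool parameter can silently
swallow the name.
```
#include <iostream>
#include <string>

void track(bool alloc_now, const std::string &name = "") {
  std::cout << "bool overload, alloc_now=" << alloc_now << '\n';
}
void track(const std::string &name) {
  std::cout << "string overload, name=" << name << '\n';
}

int main() { track("weight0"); } // picks the bool overload
```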
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
jijoong.moon [Wed, 17 Mar 2021 04:46:47 +0000 (13:46 +0900)]
[ RNN ] Add Skeleton Code for RNNLayer
This PR includes
the skeleton of the RNN layer.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Thu, 11 Mar 2021 11:40:12 +0000 (20:40 +0900)]
[ UNITTEST ] Add More Embedding Layer Unit Test
This PR includes,
. Unit test cases for Embedding layer inference & backwarding
. Add input generation for the Embedding layer
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Tue, 9 Mar 2021 04:40:33 +0000 (13:40 +0900)]
[ LAYER ] Add backwarding of Embedding Layer
This PR includes
- Backwarding implementation of the Embedding layer
- Add test cases (more test cases will be added)
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Jihoon Lee [Fri, 12 Mar 2021 12:53:40 +0000 (21:53 +0900)]
[Filter] Refactor filter to be more robust
**Changes proposed in this PR aim for**
1. nntrainer filter no longer requires dimensions to be specified
2. nntrainer filter now adapts to the incoming batch size (required exposing
neuralnet::setBatchSize)
3. nntrainer filter no longer copies the incoming input for inference on the
filter side
4. nntrainer filter adapts to multiple inputs and multiple outputs
**Major Changes**
`getInputDim`, `getOutTensorDim` are replaced by `setInputDim`
nntrainer->run now recognizes more than one input and one output
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Wed, 10 Mar 2021 02:15:39 +0000 (11:15 +0900)]
[Chores] s/nnstreamer-filter/nnstreamer
To match the directory name, moved test/nnstreamer-filter to nnstreamer,
plus minor reformatting of a meson.build
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Tue, 16 Mar 2021 09:39:56 +0000 (18:39 +0900)]
[docs] Minor indentation fix
Minor indentation fix
Report from https://github.com/nnstreamer/nntrainer/pull/993#discussion_r594884810
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
hyeonseok lee [Tue, 16 Mar 2021 08:59:18 +0000 (17:59 +0900)]
Add Unittest for util_func
Unit tests for the following functions:
rotate_180
readString
writeString
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
Parichay Kapoor [Fri, 12 Mar 2021 13:09:11 +0000 (22:09 +0900)]
[all] remove friends
Remove friendship between classes as it makes extending the interface
difficult. Some friendships exist which will be removed in upcoming
PRs.
See Also #986
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Mon, 15 Mar 2021 12:03:14 +0000 (21:03 +0900)]
[AppContext] Remove throw regular path
There was a throw in a regular call path. This patch removes the error
by changing semantics.
@kparichay thank you for the report.
resolves #1012
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Fri, 12 Mar 2021 11:09:41 +0000 (20:09 +0900)]
[loader] Change semantics of path in ini
An ini description is expected to be somewhat deterministic,
but paths inside an ini are not. For example, if you have
save_path=model.bin,
and application A trains it while application B tries to use it, reading the
same ini does not guarantee that B reads the weights from A,
because A and B can have different working paths.
To make this consistent, this patch changes the semantics of paths in an ini:
1. if model.appcontext.hasWorkingDirectory(), paths inside the ini are interpreted from
the workingDirectory
2. else, they are interpreted **relative to where the ini is located.**
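The resolution rule, as a sketch (names hypothetical, using std::filesystem
rather than nntrainer's own path handling):
```
#include <filesystem>
#include <optional>
#include <string>

namespace fs = std::filesystem;

fs::path resolveIniPath(const std::string &raw,
                        const std::optional<fs::path> &workingDir,
                        const fs::path &iniFile) {
  fs::path p(raw);
  if (p.is_absolute())
    return p;                       // absolute paths pass through
  if (workingDir)
    return *workingDir / p;         // 1. working directory, when set
  return iniFile.parent_path() / p; // 2. else relative to the ini itself
}
```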
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Mon, 15 Mar 2021 06:59:18 +0000 (15:59 +0900)]
[graph] bug fix on layer names
Major bug fix for layer names. Layer names were being checked for duplicates
but not added to the names list. This patch solves that bug.
Also added fixes so the backbone is added to the graph properly,
with appropriate name checking now.
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Fri, 12 Mar 2021 13:01:27 +0000 (22:01 +0900)]
[model] Move layers from model to graph
Move layers from the model to the graph.
This places all the graph-related elements in the graph,
making the model easier to extend and reason about.
This exposes redundancy in the graph, which shall be handled in
upcoming PRs.
See Also #986
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Tue, 16 Mar 2021 01:29:13 +0000 (10:29 +0900)]
[SE] Make coverage build runnable
This patch is a workaround to make coverage build runnable for now.
Follow-up patches will enable the commented lines.
See also #997
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Tue, 9 Mar 2021 10:15:16 +0000 (19:15 +0900)]
[docs] Added documentation for memory management
Added documentation for memory management.
Resolves #945
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Fri, 12 Mar 2021 12:45:45 +0000 (21:45 +0900)]
[model] Read and save after init
Add a check in the model so that read and save happen only after init.
Added a corresponding unittest for the negative case.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Fri, 12 Mar 2021 07:45:26 +0000 (16:45 +0900)]
[Clean] Delete overlapping profiler
Delete the overlapping profiler that slipped in due to merge order.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Thu, 11 Mar 2021 13:54:44 +0000 (22:54 +0900)]
[layer/unittest] Delete old unittest
Delete the negative unittest of the loss layer which tests the failure of forwarding
without a label. However, forwarding without a label is now a valid operation for the loss layer.
The current reason this unittest fails is that the number of inputs
is not managed correctly. This is handled with the previous
PR, which ensures that the numbers of inputs and outputs remain correct.
However, it now segfaults because the layer has not been initialized, and
its inputs and outputs have not been assigned.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Thu, 11 Mar 2021 11:50:14 +0000 (20:50 +0900)]
[layer] refactor num of inputs/outputs
Refactor layers to properly handle the number of inputs
and outputs, which is maintained consistently across the in/out dimensions
and in/out data.
This is not maintained for in/out layers, as these properties will move
out of layers in the next commit.
Also added a bugfix in output_layer, which used to clear the output dimension,
rendering the number of outputs 0 for some time.
NetworkGraph's setNumNetBufferSize is also removed, as the above
refactoring handles it at the layer level.
See Also #986
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
hyeonseok lee [Wed, 10 Mar 2021 05:40:13 +0000 (14:40 +0900)]
Handle NO_CATCH issue
Added missing try-catch statement
resolve 457589
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
Jihoon Lee [Wed, 10 Mar 2021 02:31:42 +0000 (11:31 +0900)]
[Meson/Clean] remove duplicating deps
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Tue, 9 Mar 2021 13:00:26 +0000 (22:00 +0900)]
[model] Update batch size check
Update the batch size check to exactly match the model.
This is because the batch size is used by certain layers for calculation, and
running an input with the wrong batch (even with correct memory allocations) can result
in wrong output.
This is independent of running a batch size of 16 with memory allocated for a
bigger batch size. That will still work with this patch, as long as the batch size
is set correctly for the model and the memory allocated is at least big enough
to handle it.
See also #888
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Tue, 9 Mar 2021 12:18:30 +0000 (21:18 +0900)]
[app] Set env to run the app test
Set environment variable in meson to run the application test
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Tue, 9 Mar 2021 06:09:08 +0000 (15:09 +0900)]
[Manager] Remove alloc for first/last layer during inference
Remove the allocation of the first layer's input and the last layer's label
during inference. This is because those tensors are overridden at
the call to inference by the tensors provided externally by the user.
Note that the output of the last layer will still be allocated by the manager.
Also refactor initializeTensors() in manager into two functions
separated for inference and training.
See Also #974 #888
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
hyeonseok lee [Mon, 8 Mar 2021 04:30:52 +0000 (13:30 +0900)]
Handle NO_CATCH issues
Added missing try-catch statement
resolve 457555, 457556, 457557, 457558
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
Parichay Kapoor [Thu, 4 Mar 2021 10:16:39 +0000 (19:16 +0900)]
[manager] Support deinitialize
With consecutive runs of inference and train, the manager
needs to deallocate as well as deinitialize the tensor mappings
and start over. This patch adds support for this deinitialization.
See also #974
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Tue, 9 Mar 2021 05:54:07 +0000 (14:54 +0900)]
[model] Bug fix for inference
Add bug fix for inference mode where the last layer is run twice.
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Wed, 3 Mar 2021 12:30:44 +0000 (21:30 +0900)]
[model] Free memory for inference
Support freeing memory for inference with the free_mem argument to inference.
This defaults to true.
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Wed, 3 Mar 2021 11:28:17 +0000 (20:28 +0900)]
[neuralnet] Support deallocate of tensors
Support deallocation and allocation of tensors from the neuralnet.
Also perform the deallocation of tensors after each train run.
Once training is performed, the memory associated with that training,
except the weight variables, will be freed.
The memory associated with inference will not be freed until freed manually,
which requires calling deallocate() on the model object.
Note that calling train() after inference() will free the inference memory,
and the training will allocate its own memory, which will be freed at the end
of the training.
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
jijoong.moon [Mon, 8 Mar 2021 05:41:14 +0000 (14:41 +0900)]
[ LAYER ] Embedding Layer Inference
This PR includes inference calculation of Embedding Layer.
- Implementation of Embedding Layer
- A few test cases for inference (more tests will be added)
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Jihoon Lee [Mon, 8 Mar 2021 07:04:47 +0000 (16:04 +0900)]
[CAPI] Add mapping to newly added enum
This patch adds a mapping to the newly added enum and a corresponding test.
Please note that the TCT will be added based on this PR.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Wed, 3 Mar 2021 11:08:03 +0000 (20:08 +0900)]
[manager] Check on re-initialize
Add a check on re-initialize for the ccapi unittest
As re-initialize is removed, the memory initialization is also updated,
which changes the final loss values.
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Wed, 3 Mar 2021 10:16:01 +0000 (19:16 +0900)]
[manager] Support deallocation of memory
Create proper support for deallocation of memory by the manager, which
for now happens on destruction of the manager or the model itself.
This frees all the memory involved in the model for inference or training.
See Also #974
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Wed, 3 Mar 2021 05:03:05 +0000 (14:03 +0900)]
[manager] Add check in manager for multiple init and allocate
Add a check in manager to avoid multiple initializations of the same
variables and their allocations. This happens when inference/train is called
multiple times on the same model.
This patch adds a check inside the manager which does not re-init or allocate
if it is already initialized/allocated.
See Also #965
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
jijoong.moon [Thu, 4 Mar 2021 11:55:43 +0000 (20:55 +0900)]
[ Layer ] Add Skeleton for Embedding Layer
In this PR,
skeleton code for the Embedding layer is included:
- header/source, and some modifications to the layer factory.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Jihoon Lee [Wed, 3 Mar 2021 11:07:24 +0000 (20:07 +0900)]
[rpm/tizen] Support 6.0 build
This patch resolves the dependency names based on the given distro version.
Tested building on the latest 6.5 and 6.0 snapshots.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Cc: Sangjung Woo <sangjung.woo@samsung.com>
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Tue, 2 Mar 2021 07:10:18 +0000 (07:10 +0000)]
[Chore] Move resource path
Move the resource path from the build root to the res path for the layer plugin
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Fri, 29 Jan 2021 09:28:00 +0000 (18:28 +0900)]
[Conf] Add default conf nntrainer.ini
This patch adds nntrainer.ini (defaulting to `/etc/nntrainer.ini`).
The path defaults to '/etc/nntrainer.ini' but is subject to
change via `--sysconfdir=''` and `--prefix`.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Thu, 28 Jan 2021 12:59:04 +0000 (21:59 +0900)]
[AppContext] Default Path loader
Add appcontext default path loader.
**Changes proposed in this PR:**
- add `AppContext::registerLayerFromDirectory`
- call the `add_extension_object` routine for `AppContext::Global()`. The
routine searches the directory from `NNTRAINER_PATH` and from
`nntrainer.ini` (the ini is installed in an upcoming PR)
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Fri, 5 Mar 2021 02:32:12 +0000 (11:32 +0900)]
[dataset/unittest] Bug fix for segmentation fault
The unittest gives a segmentation fault when trying to close a dataset
object which has not been opened successfully.
Initializing the object to false solves the issue.
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Juyeong Lee [Thu, 25 Feb 2021 17:54:02 +0000 (02:54 +0900)]
[Docs] Fix broken links
This patch updates broken links in contributing guide.
Signed-off-by: Juyeong Lee <2jy22@naver.com>
jijoong.moon [Wed, 3 Mar 2021 05:29:15 +0000 (14:29 +0900)]
[ Document ] Add RELEASE.md
Add RELEASE.md to specify the release policy.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Jihoon Lee [Mon, 1 Feb 2021 06:36:10 +0000 (15:36 +0900)]
[Test] Add conv various situation test
The case where the convolution layer is given a variable stride was missing
from the tests; this patch adds it:
- basic conv for reference
- conv with same padding
- conv with multi strides
- conv with multi strides + same padding
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Wed, 17 Feb 2021 04:30:15 +0000 (13:30 +0900)]
[Conv2D] Rework conv2d layer
This patch contains major rework of conv2d layer to cope with
padded, strided configuration
**Major changes proposed in this PR**
- Rework conv2d::calcDerivative by
  - adding and exploiting a col2im method
- Rework conv2d::calcGradient by
  - adding a dilation argument to im2col (a minimal im2col sketch follows below)
**Minor changes proposed in this PR:**
- Delete obsolete profiler
**Effect on performance**
- strip_pad has been removed, thus less memcpy, allocation
**Major restrictions**
This patch mainly refactors the flow of how conv2d backpropagates.
There are some delegated issues still left to be handled. Those issues
can be handled on top of this patch.
- If the calculated output size is not an integer on either forward or backward,
this is a hard error for now.
- Same padding is only applicable when the kernel size is odd.
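Since the rework is built around im2col/col2im, here is a minimal im2col
sketch (single channel, unit stride, no padding or dilation; not nntrainer's
implementation):
```
#include <cstddef>
#include <vector>

// in: h x w image, row-major. out: (kh*kw) x (oh*ow) matrix, row-major,
// where oh = h - kh + 1 and ow = w - kw + 1.
std::vector<float> im2col(const std::vector<float> &in, int h, int w,
                          int kh, int kw) {
  int oh = h - kh + 1, ow = w - kw + 1;
  std::vector<float> out((std::size_t)kh * kw * oh * ow);
  for (int ki = 0; ki < kh; ++ki)
    for (int kj = 0; kj < kw; ++kj)
      for (int oi = 0; oi < oh; ++oi)
        for (int oj = 0; oj < ow; ++oj)
          out[((ki * kw + kj) * oh + oi) * ow + oj] =
            in[(oi + ki) * w + (oj + kj)];
  return out;
}
// col2im (used by calcDerivative) is the scatter-add inverse of the same
// index mapping: each column entry accumulates back into its source pixel.
```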
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Wed, 3 Mar 2021 05:48:43 +0000 (14:48 +0900)]
[Fix/Test] Fix ccapi test fail
As the resource path has been changed, the path used by the test has to be
updated accordingly. This patch resolves the issue.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
MyungJoo Ham [Wed, 3 Mar 2021 02:11:51 +0000 (11:11 +0900)]
[Fix/Svace] Add try catch to model-run
The two functions of the following code may throw
exceptions. Handle them.
```
if (arg == "model") {
return api_model_run();
} else {
return ini_model_run(arg);
}
```
This fixes SVACE 457479.
Signed-off-by: MyungJoo Ham <myungjoo.ham@samsung.com>
Parichay Kapoor [Thu, 4 Feb 2021 11:07:48 +0000 (20:07 +0900)]
[model] Allow updating batch size after allocation
Allow updating the batch size after memory allocation has been done.
This allows changing the batch size for training at any time.
Further, it allows changing the batch size from training to inference and back,
which is demonstrated with the MNIST example in the nnstreamer element of nntrainer.
The model file starts with a batch size of 32, which is changed to 1 for inference.
Also added another unittest in cpp-api which sets a different batch size at different
stages of the model (after initialization, after compile, after train) and trains multiple
times with different batch sizes to validate that the training works.
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Thu, 4 Feb 2021 11:03:15 +0000 (20:03 +0900)]
[tensor] Allow updating batch size after allocation
Allow updating the batch size after memory allocation for a tensor.
The previously allocated memory is first set to null and then allocated again
with the new batch size. As tensors share memory, the previously allocated
memory might not be freed when set to null, which can lead to high peak memory usage.
It is recommended to first free the memory for all tensors whose batch size is changing,
then update the batch size, and then allocate again, to keep the peak memory requirement
in check (sketched below).
As the memory is allowed to be re-allocated, the src_tensor for a tensor is no longer
reset after memory allocation. When reallocating, the src_tensor is used
again in order to maintain the memory management done once during initialization.
This patch also sets the appropriate interfaces in the Var_Grad and Weight classes as well.
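The recommended order above, sketched with illustrative types: deallocate
every affected tensor first, then update dims, then reallocate, so old-batch
and new-batch buffers never coexist.
```
#include <cstddef>
#include <vector>

struct Tensor {
  std::size_t batch = 0, feat = 0;
  std::vector<float> buf;
  void deallocate() { buf.clear(); buf.shrink_to_fit(); }
  void allocate() { buf.assign(batch * feat, 0.0f); }
};

void updateBatch(std::vector<Tensor *> &tensors, std::size_t newBatch) {
  for (auto *t : tensors) t->deallocate(); // free all first: peak stays low
  for (auto *t : tensors) t->batch = newBatch;
  for (auto *t : tensors) t->allocate();   // reallocate with the new batch
}
```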
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Thu, 18 Feb 2021 04:50:12 +0000 (13:50 +0900)]
[filter] Extract inference wrapper header
This patch extracts inference wrapper header for testability and better
maintainability.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Thu, 4 Feb 2021 04:45:22 +0000 (13:45 +0900)]
[loss] Allow no loss in the model
Allow setting no loss in the model.
This allows running inference on a model, and creating submodels
with backbones to infer particular output features.
Updated the existing no-loss unittest to succeed,
and added more unittests to validate that models run successfully
without a loss.
V2:
Updated mnist_inf.ini to be without loss.
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Fri, 29 Jan 2021 11:42:12 +0000 (20:42 +0900)]
[optimizer] Update to camelcase
Update apply_gradient(s) to applyGradient(s)
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Fri, 29 Jan 2021 11:34:18 +0000 (20:34 +0900)]
[weight] Move apply gradient to weight
Move apply gradient to the weight; it simply adds the gradient, multiplied
by a scaling factor, to the variable (sketched below).
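The essence of the moved operation, much simplified (member names
illustrative):
```
#include <cstddef>
#include <vector>

struct Weight {
  std::vector<float> var, grad;
  void applyGradient(float scale) {
    for (std::size_t i = 0; i < var.size(); ++i)
      var[i] += scale * grad[i]; // scale is typically -learning_rate
  }
};
```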
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Fri, 29 Jan 2021 11:03:51 +0000 (20:03 +0900)]
[weight/layer] Move weight regularization out of layers
Move weight regularization out of the layers and into the weights,
and remove the same code from all the layers.
The loss and gradients from weight regularization are computed by the
weight itself.
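A sketch of the weight owning its regularization (illustrative members; L2
shown):
```
#include <cstddef>
#include <vector>

struct Weight {
  std::vector<float> var, grad;
  float regularizerConstant = 0.0f; // weight_regularizer_constant

  float regularizationLoss() const {
    float sq = 0.0f;
    for (float w : var) sq += w * w;
    return 0.5f * regularizerConstant * sq; // 0.5 * c * ||w||^2
  }
  void addRegularizationGradient() {
    for (std::size_t i = 0; i < var.size(); ++i)
      grad[i] += regularizerConstant * var[i]; // d/dw of the loss above
  }
};
```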
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Thu, 28 Jan 2021 12:14:10 +0000 (21:14 +0900)]
[Fix/Bug] NeuralNet check save path
Add a check to see if save_path is valid; if not, issue a warning.
**Semantic changes:**
From this patch on, the default save path is gone.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>