platform/core/ml/nntrainer.git
3 years agoHandle Uninitialized scalar field(UNINIT_CTOR) issue
hyeonseok lee [Mon, 22 Mar 2021 01:50:13 +0000 (10:50 +0900)]
Handle Uninitialized scalar field(UNINIT_CTOR) issue

Set member variables buffer_size and buffer_axis in the constructor
resolve: 1216586

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
3 years agoHandle Unchecked return value(CHECKED_RETURN) issues
hyeonseok lee [Fri, 19 Mar 2021 06:56:28 +0000 (15:56 +0900)]
Handle Unchecked return value(CHECKED_RETURN) issues

Added checks for the return values
resolves: 1222078, 1222441

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
3 years agoHandle Unused value issue
hyeonseok lee [Fri, 19 Mar 2021 05:54:28 +0000 (14:54 +0900)]
Handle Unused value issue

Store the status value in originStatus to protect it from being overwritten
resolves: 1216574, 1216576, 1216588, 1216594

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
3 years ago[ Application ] Add Embedding Layer Training Example
jijoong.moon [Thu, 11 Mar 2021 11:46:06 +0000 (20:46 +0900)]
[ Application ] Add Embedding Layer Training Example

This PR includes,
 - Simple Logistic Regression Application to train Embedding Layer + Flatten +
   FullyConnected Layer.
 - Generate Input + Keras Code

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
3 years ago[models/test] Valgrind models test fix
Parichay Kapoor [Fri, 19 Mar 2021 09:16:58 +0000 (18:16 +0900)]
[models/test] Valgrind models test fix

Valgrind reports uninitialized memory usage for this
unittest, which is fixed with this patch.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[test/bug] Bug fix for layers unittest
Parichay Kapoor [Fri, 19 Mar 2021 08:44:55 +0000 (17:44 +0900)]
[test/bug] Bug fix for layers unittest

This patch adds a bug fix for the layers unittest.
resetLayer() used to first free the layer and then call reset on the manager.
Freeing the layer freed the memory of the weights, which left
the reference_wrapper for the weights in the manager in an undefined state.

This patch changes the order. Further, this patch removes some unsafe
practices from the var_grad.

This is a hotfix. The inability to check the validity of the weights in the manager
will be handled separately.

Resolves #1027

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[Test] Add pluggable layer test
Jihoon Lee [Tue, 16 Mar 2021 08:50:43 +0000 (17:50 +0900)]
[Test] Add pluggable layer test

Add some missing test for the pluggable layer

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Package] Use meson test
Jihoon Lee [Tue, 16 Mar 2021 07:58:01 +0000 (16:58 +0900)]
[Package] Use meson test

As meson already provides a test suite, this patch substitutes the old test
with meson test. This provides coherency between the build test and the simple
test conducted on a local machine.

resolves #998

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Bug] s/weight_decay/weight_regularizer
Jihoon Lee [Tue, 16 Mar 2021 07:40:08 +0000 (16:40 +0900)]
[Bug] s/weight_decay/weight_regularizer

weight_decay is an old property which made some app tests fail. This
patch resolves the issue

revealed from #1008
resolves #1018

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Weights] Split weight variable init and alloc
Parichay Kapoor [Wed, 10 Mar 2021 14:03:56 +0000 (23:03 +0900)]
[Weights] Split weight variable init and alloc

Split the initialization and memory allocation for weights.
Three bugs exposed by this change have been resolved:
- manager does not allow tracking var_grads once initialized
- resetGradient confirms the allocation is done before accessing the memory
- allocateVariable() calls the correct initializer now

Further, the logic of reinitialize in unittest for layers has been
split into two parts - initialize and reinitialize where
reinitialize will reset layer and manager and then call initialize.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[layers] Weight creation bug fix
Parichay Kapoor [Wed, 10 Mar 2021 13:19:29 +0000 (22:19 +0900)]
[layers] Weight creation bug fix

The name of the weight, passed as a string, was being interpreted as the value
of the variable alloc_now. This patch fixes it by passing the arguments
in the correct positions.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[ RNN ] Add Skeleton Code for RNNLayer
jijoong.moon [Wed, 17 Mar 2021 04:46:47 +0000 (13:46 +0900)]
[ RNN ] Add Skeleton Code for RNNLayer

This PR includes,
skeleton of RNN Layer

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
3 years ago[ UNITTEST ] Add More Embedding Layer Unit Test
jijoong.moon [Thu, 11 Mar 2021 11:40:12 +0000 (20:40 +0900)]
[ UNITTEST ] Add More Embedding Layer Unit Test

This PR includes,
. Unit test cases of Embedding Layer inference & backwarding
. Add Input Generation for Embedding Layer

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
3 years ago[ LAYER ] Add backwarding of Embedding Layer
jijoong.moon [Tue, 9 Mar 2021 04:40:33 +0000 (13:40 +0900)]
[ LAYER ] Add backwarding of Embedding Layer

This PR includes
 - Backwarding implementation of Embedding Layer
 - Add Test Cases ( More Test Cases will be added )

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
3 years ago[Filter] Refactor filter to be more robust accepted/tizen/unified/20210318.063524 submit/tizen/20210318.034025
Jihoon Lee [Fri, 12 Mar 2021 12:53:40 +0000 (21:53 +0900)]
[Filter] Refactor filter to be more robust

**Changes Proposed in this PR aim for**
1. nntrainer filter no longer requires dimensions to be specified
1. nntrainer filter now adapts to the incoming batch size (required exposing
neuralnet::setBatchSize)
1. nntrainer filter no longer copies the incoming input to inference from the
filter side
1. nntrainer filter adapts to multiple inputs and multiple outputs

**Major Changes**
`getInputDim` and `getOutTensorDim` are replaced by `setInputDim`
nntrainer->run now recognizes more than 1 input and 1 output

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Chores] s/nnstreamer-filter/nnstreamer
Jihoon Lee [Wed, 10 Mar 2021 02:15:39 +0000 (11:15 +0900)]
[Chores] s/nnstreamer-filter/nnstreamer

To match the directory name, moved test/nnstreamer-filter to nnstreamer

+ minor reformatting on a meson.build

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[docs] Minor indentation fix submit/tizen/20210317.111732
Parichay Kapoor [Tue, 16 Mar 2021 09:39:56 +0000 (18:39 +0900)]
[docs] Minor indentation fix

Minor indentation fix
Report from https://github.com/nnstreamer/nntrainer/pull/993#discussion_r594884810

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years agoAdd Unittest for util_func
hyeonseok lee [Tue, 16 Mar 2021 08:59:18 +0000 (17:59 +0900)]
Add Unittest for util_func

Unittest for following functions
rotate_180
readString
writeString

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
3 years ago[all] remove friends
Parichay Kapoor [Fri, 12 Mar 2021 13:09:11 +0000 (22:09 +0900)]
[all] remove friends

Remove friendship between classes as it makes extending the interface
difficult. Some friendships exist which will be removed in upcoming
PRs.

See Also #986

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[AppContext] Remove throw regular path
Jihoon Lee [Mon, 15 Mar 2021 12:03:14 +0000 (21:03 +0900)]
[AppContext] Remove throw regular path

There was a throw in a regular call path. This patch removes the error
by changing semantics.

@kparichay thank you for the report.

resolves #1012

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[loader] Change semantics of path in ini
Jihoon Lee [Fri, 12 Mar 2021 11:09:41 +0000 (20:09 +0900)]
[loader] Change semantics of path in ini

An ini description is expected to be somewhat deterministic.

But paths inside an ini are not deterministic. For example, if you have
save_path=model.bin,

and application A trains it while application B tries to use it, reading the
same ini does not guarantee that B reads the weights from A.

That is because A and B can have different working paths.

To make this consistent, this patch changes the semantics of paths in the ini.

1. if model.appcontext.hasWorkingDirectory(), paths inside the ini are interpreted from
the workingDirectory
2. else, they are interpreted **relative to where the ini is located.**

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
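The path semantics described in the commit above can be sketched as follows. This is a minimal illustration, not the actual AppContext API; the function name and parameters are hypothetical stand-ins.

```cpp
#include <cassert>
#include <filesystem>
#include <string>

namespace fs = std::filesystem;

// Hypothetical sketch of the described semantics:
// 1. if a working directory is set, resolve relative paths against it,
// 2. otherwise resolve them relative to the directory containing the ini.
fs::path resolveIniPath(const std::string &raw_path, const fs::path &ini_file,
                        bool has_working_dir, const fs::path &working_dir) {
  fs::path p(raw_path);
  if (p.is_absolute())
    return p; // absolute paths are taken as-is
  if (has_working_dir)
    return working_dir / p; // case 1: from the working directory
  return ini_file.parent_path() / p; // case 2: relative to the ini's location
}
```

With this rule, two applications reading the same ini from different working directories still resolve `save_path=model.bin` to the same file as long as neither sets a working directory.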
3 years ago[graph] bug fix on layer names
Parichay Kapoor [Mon, 15 Mar 2021 06:59:18 +0000 (15:59 +0900)]
[graph] bug fix on layer names

Major bug fix for layer names. Layer names were being checked for duplicates
but not added to the names list. This patch solves that bug.

Also added bug fixes so that a backbone is added to the graph properly,
with appropriate name checking now.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[model] Move layers from model to graph
Parichay Kapoor [Fri, 12 Mar 2021 13:01:27 +0000 (22:01 +0900)]
[model] Move layers from model to graph

Move layers from model to graph.
This places all the graph-related elements in the graph,
making the model easier to extend and reason about.
It also exposes redundancy in the graph, which shall be handled in
upcoming PRs.

See Also #986

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[SE] Make coverage build runnable
Jihoon Lee [Tue, 16 Mar 2021 01:29:13 +0000 (10:29 +0900)]
[SE] Make coverage build runnable

This patch is a workaround to make the coverage build runnable for now.
There will be follow-up patches to enable the commented lines.

See also #997

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[docs] Added documentation for memory management
Parichay Kapoor [Tue, 9 Mar 2021 10:15:16 +0000 (19:15 +0900)]
[docs] Added documentation for memory management

Added documentation for memory management.

Resolves #945

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[model] Read and save after init
Parichay Kapoor [Fri, 12 Mar 2021 12:45:45 +0000 (21:45 +0900)]
[model] Read and save after init

Add a check in the model to read and save only after init.
Added a corresponding unittest for the negative case.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[Clean] Delete overlapping profiler
Jihoon Lee [Fri, 12 Mar 2021 07:45:26 +0000 (16:45 +0900)]
[Clean] Delete overlapping profiler

Delete the overlapping profiler that slipped in due to merge order

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[layer/unittest] Delete old unittest
Parichay Kapoor [Thu, 11 Mar 2021 13:54:44 +0000 (22:54 +0900)]
[layer/unittest] Delete old unittest

Delete the negative unittest of the loss layer which tests the failure of forward
without a label. However, forwarding without a label is now a valid operation for the loss layer.
This unittest previously failed because the number of inputs
was not managed correctly. That is handled by the previous
PR, which ensures that the numbers of inputs and outputs remain correct.
Now, however, it segfaults because the layer has not been initialized, and
its inputs and outputs have not been assigned.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[layer] refactor num of inputs/outputs
Parichay Kapoor [Thu, 11 Mar 2021 11:50:14 +0000 (20:50 +0900)]
[layer] refactor num of inputs/outputs

Refactor layer to properly handle the number of inputs
and outputs, which is maintained across the in/out dimensions and
in/out data.
This is not maintained for in/out layers, as these properties will move
out of layers in the next commit.

Also added a bugfix in output_layer, which used to clear the output dimension,
rendering the number of outputs 0 for some time.

NetworkGraph setNumNetBufferSize is also removed as the above
refactoring handles it at the layer level.

See Also #986

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years agoHandle NO_CATCH issue
hyeonseok lee [Wed, 10 Mar 2021 05:40:13 +0000 (14:40 +0900)]
Handle NO_CATCH issue

Added a missing try-catch statement
resolve 457589

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
3 years ago[Meson/Clean] remove duplicating deps
Jihoon Lee [Wed, 10 Mar 2021 02:31:42 +0000 (11:31 +0900)]
[Meson/Clean] remove duplicating deps

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[model] Update batch size check
Parichay Kapoor [Tue, 9 Mar 2021 13:00:26 +0000 (22:00 +0900)]
[model] Update batch size check

Update the batch size check to exactly match the model.
This is because the batch size is used by certain layers for calculation, and
running an input with the wrong batch size (with correct memory allocations) can result
in wrong output.

This is independent of running a batch size of 16 with memory allocated for a
bigger batch size. That will still work with this patch as long as the batch size
is set correctly for the model and the memory allocated is at least big enough
to handle this batch size.

See also #888

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
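The check described in the commit above can be sketched as follows; the function name and error message are assumptions for illustration, not the actual nntrainer implementation.

```cpp
#include <cassert>
#include <stdexcept>

// Hypothetical sketch: inference/training rejects inputs whose batch size
// does not exactly match the batch size the model was configured with,
// even when the allocated memory could hold the smaller batch.
void checkInputBatch(unsigned int model_batch, unsigned int input_batch) {
  if (input_batch != model_batch)
    throw std::invalid_argument(
      "input batch size must exactly match the model batch size");
}
```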
3 years ago[app] Set env to run the app test
Parichay Kapoor [Tue, 9 Mar 2021 12:18:30 +0000 (21:18 +0900)]
[app] Set env to run the app test

Set environment variable in meson to run the application test

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[Manager] Remove alloc for first/last layer during inference
Parichay Kapoor [Tue, 9 Mar 2021 06:09:08 +0000 (15:09 +0900)]
[Manager] Remove alloc for first/last layer during inference

Remove the allocation of input of the first layer and label of the last
layer during inference. This is because those tensors are overridden at
the call to inference by the tensors provided externally by the user.
Note that the output of the last layer will still be allocated by the manager.

Also refactor initializeTensors() in manager into two functions
separated for inference and training.

See Also #974 #888

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years agoHandle NO_CATCH issues
hyeonseok lee [Mon, 8 Mar 2021 04:30:52 +0000 (13:30 +0900)]
Handle NO_CATCH issues

Added missing try-catch statements
resolve 457555, 457556, 457557, 457558

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
3 years ago[manager] Support deinitialize
Parichay Kapoor [Thu, 4 Mar 2021 10:16:39 +0000 (19:16 +0900)]
[manager] Support deinitialize

With consecutive runs of inference and train, the manager
needs to deallocate as well as deinitialize the tensor mappings
before starting again. This patch adds support for this deinitialization.

See also #974

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[model] Bug fix for inference
Parichay Kapoor [Tue, 9 Mar 2021 05:54:07 +0000 (14:54 +0900)]
[model] Bug fix for inference

Add bug fix for inference mode where the last layer is run twice.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[model] Free memory for inference
Parichay Kapoor [Wed, 3 Mar 2021 12:30:44 +0000 (21:30 +0900)]
[model] Free memory for inference

Support freeing memory for inference with the free_mem argument of inference.
This defaults to true.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[neuralnet] Support deallocate of tensors
Parichay Kapoor [Wed, 3 Mar 2021 11:28:17 +0000 (20:28 +0900)]
[neuralnet] Support deallocate of tensors

Support deallocation and allocation of tensors from the neuralnet.

Also perform the deallocation of tensors after each train run.
Once a training is performed, the memory associated with that training,
except the weight variables, will be freed.

The memory associated with inference will not be freed until freed manually.
This requires calling deallocate() on the model object.
Note that calling train() after inference() will free the inference memory,
and the train will allocate its own memory, which will be freed at the end
of the training.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[ LAYER ] Embedding Layer Inference
jijoong.moon [Mon, 8 Mar 2021 05:41:14 +0000 (14:41 +0900)]
[ LAYER ] Embedding Layer Inference

This PR includes inference calculation of Embedding Layer.
- Implementation of Embedding Layer
- A few test cases for inference. (More tests will be added)

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
3 years ago[CAPI] Add mapping to newly added enum accepted/tizen/unified/20210310.144922 submit/tizen/20210309.050722
Jihoon Lee [Mon, 8 Mar 2021 07:04:47 +0000 (16:04 +0900)]
[CAPI] Add mapping to newly added enum

This patch adds a mapping to the newly added enum and a corresponding test

Please note that tct will be added based on this PR.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[manager] Check on re-initialize
Parichay Kapoor [Wed, 3 Mar 2021 11:08:03 +0000 (20:08 +0900)]
[manager] Check on re-initialize

Add a check on re-initialize for the ccapi unittest.
As re-initialize is removed, the memory initialization is also updated,
which changes the final loss values.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[manager] Support deallocation of memory
Parichay Kapoor [Wed, 3 Mar 2021 10:16:01 +0000 (19:16 +0900)]
[manager] Support deallocation of memory

Create proper support for deallocation of memory by the manager, which
for now is done with the destruction of the manager or the model itself.

This frees all the memory involved in the model for inference or training.

See Also #974

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[manager] Add check in manager for multiple init and allocate
Parichay Kapoor [Wed, 3 Mar 2021 05:03:05 +0000 (14:03 +0900)]
[manager] Add check in manager for multiple init and allocate

Add a check in manager to avoid multiple initializations of the same
variables and their allocations. This happens when inference/train is called
multiple times on the same model.

This patch adds a check inside the manager which does not re-init or allocate
if it is already initialized/allocated.

See Also #965

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
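The guard described in the commit above can be sketched as below. This is a minimal illustration under assumed names, not the actual Manager class: initialize() and allocate() become no-ops when called again, so repeated inference/train calls on the same model do not re-initialize or re-allocate.

```cpp
#include <cassert>

// Hypothetical sketch of an idempotent init/alloc guard.
struct Manager {
  bool initialized = false;
  bool allocated = false;
  int init_count = 0;  // counts real initializations, for illustration
  int alloc_count = 0; // counts real allocations, for illustration

  void initialize() {
    if (initialized)
      return; // already initialized, skip re-init
    ++init_count;
    initialized = true;
  }

  void allocate() {
    if (allocated)
      return; // already allocated, skip re-allocation
    ++alloc_count;
    allocated = true;
  }
};
```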
3 years ago[ Layer ] Add Skeleton for Embedding Layer accepted/tizen/unified/20210308.132526 submit/tizen/20210308.024729
jijoong.moon [Thu, 4 Mar 2021 11:55:43 +0000 (20:55 +0900)]
[ Layer ] Add Skeleton for Embedding Layer

In this PR,
Skeleton Code for Embedding Layer is included.
- header / source and some modifications to the layer factory.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
3 years ago[rpm/tizen] Support 6.0 build
Jihoon Lee [Wed, 3 Mar 2021 11:07:24 +0000 (20:07 +0900)]
[rpm/tizen] Support 6.0 build

This patch resolves dependency name by given distro version.

Tested building on 6.5 and 6.0 latest snapshot.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Cc: Sangjung Woo <sangjung.woo@samsung.com>
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Chore] Move resource path
Jihoon Lee [Tue, 2 Mar 2021 07:10:18 +0000 (07:10 +0000)]
[Chore] Move resource path

Move the resource path from the build root to the res path for the layer plugin

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Conf] Add default conf nntrainer.ini
Jihoon Lee [Fri, 29 Jan 2021 09:28:00 +0000 (18:28 +0900)]
[Conf] Add default conf nntrainer.ini

This patch adds nntrainer.ini, which defaults to `/etc/nntrainer.ini`.
The default path is subject to change via `--sysconfdir=''` and `--prefix`.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
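As an illustration of the kind of configuration such a file could hold, here is a minimal sketch; the section and key names below are assumptions for illustration, not the actual shipped contents of nntrainer.ini.

```ini
; /etc/nntrainer.ini (illustrative sketch, names are assumed)
[plugins]
; directory searched for pluggable layer shared objects
layer = /usr/lib/nntrainer/layers
```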
3 years ago[AppContext] Default Path loader
Jihoon Lee [Thu, 28 Jan 2021 12:59:04 +0000 (21:59 +0900)]
[AppContext] Default Path loader

Add appcontext default path loader.

**Changes proposed in this PR:**
- add `AppContext::registerLayerFromDirectory`
- call the `add_extension_object` routine for `AppContext::Global()`. The
routine searches the directory given by `NNTRAINER_PATH` and by
`nntrainer.ini` (installing the ini in an upcoming PR)

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[dataset/unittest] Bug fix for segmentation fault submit/tizen/20210305.081230
Parichay Kapoor [Fri, 5 Mar 2021 02:32:12 +0000 (11:32 +0900)]
[dataset/unittest] Bug fix for segmentation fault

The unittest gives a segmentation fault when trying to close a dataset
object which has not been opened successfully.
Initializing the object to false solves the issue.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[Docs] Fix broken links
Juyeong Lee [Thu, 25 Feb 2021 17:54:02 +0000 (02:54 +0900)]
[Docs] Fix broken links

This patch updates broken links in contributing guide.

Signed-off-by: Juyeong Lee <2jy22@naver.com>
3 years ago[ Document ] Add RELEASE.md
jijoong.moon [Wed, 3 Mar 2021 05:29:15 +0000 (14:29 +0900)]
[ Document ] Add RELEASE.md

Add release.md to specify release policy.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
3 years ago[Test] Add conv various situation test accepted/tizen/unified/20210305.034114 submit/tizen/20210303.100614 submit/tizen/20210305.020917
Jihoon Lee [Mon, 1 Feb 2021 06:36:10 +0000 (15:36 +0900)]
[Test] Add conv various situation test

The case where the convolutional layer is given a variable stride was missing from
the test; this patch adds it

- basic conv for reference
- conv with same padding
- conv with multi strides
- conv with multi strides + same padding

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Conv2D] Rework conv2d layer
Jihoon Lee [Wed, 17 Feb 2021 04:30:15 +0000 (13:30 +0900)]
[Conv2D] Rework conv2d layer

This patch contains major rework of conv2d layer to cope with
padded, strided configuration

**Major changes proposed in this PR**
- Rework conv2d::calcDerivative by
  - Add and exploit col2im method
- Rework conv2d::calcGradient by
  - Add dilation argument to im2col

**Minor changes proposed in this PR:**
- Delete obsolete profiler

**Effect on performance**
- strip_pad has been removed, thus fewer memcpy calls and allocations

**Major restriction**
This patch mainly refactors the flow of how conv2d backpropagates.
There are some delicate issues still left to be handled. Those issues
can be handled on top of this patch.

- if the calculated output is not an integer on either forward / backward,
this will be a hard error for now.
- same padding is only applicable when the kernel size is odd.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Fix/Test] Fix ccapi test fail
Jihoon Lee [Wed, 3 Mar 2021 05:48:43 +0000 (14:48 +0900)]
[Fix/Test] Fix ccapi test fail

As the resource path has been changed, the test's resource path has to be
updated accordingly. This patch resolves the issue.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Fix/Svace] Add try catch to model-run
MyungJoo Ham [Wed, 3 Mar 2021 02:11:51 +0000 (11:11 +0900)]
[Fix/Svace] Add try catch to model-run

The two functions of the following code may throw
exceptions. Handle them.
```
if (arg == "model") {
  return api_model_run();
} else {
  return ini_model_run(arg);
}
```

This fixes SVACE 457479.

Signed-off-by: MyungJoo Ham <myungjoo.ham@samsung.com>
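A sketch of how the snippet above might look after the fix; the two `*_model_run` bodies and the `-1` error code are placeholders assumed for illustration, not the actual nntrainer code.

```cpp
#include <cassert>
#include <iostream>
#include <string>

// Hypothetical stand-ins for the functions named in the snippet above.
int api_model_run() { return 0; }
int ini_model_run(const std::string &arg) { return arg.empty() ? 1 : 0; }

// Sketch of the fix: wrap the throwing calls in try/catch so an exception
// becomes an error return instead of an uncaught abort.
int run_model(const std::string &arg) {
  try {
    if (arg == "model") {
      return api_model_run();
    } else {
      return ini_model_run(arg);
    }
  } catch (const std::exception &e) {
    std::cerr << "model run failed: " << e.what() << '\n';
    return -1; // assumed error code
  }
}
```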
3 years ago[model] Allow updating batch size after allocation
Parichay Kapoor [Thu, 4 Feb 2021 11:07:48 +0000 (20:07 +0900)]
[model] Allow updating batch size after allocation

Allow updating the batch size after memory allocation has been done.
This allows changing the batch size for training anytime.

Further, it allows changing the batch size from training to inference and back,
which is demonstrated with the MNIST example in the nnstreamer element of nntrainer.
The model file starts with a batch size of 32, which is changed to 1 for inference.

Also added another unittest in cpp-api which sets a different batch size at different
stages of the model (after initialization, after compile, after train) and trains multiple times
with different batch sizes to validate that the training works.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[tensor] Allow updating batch size after allocation
Parichay Kapoor [Thu, 4 Feb 2021 11:03:15 +0000 (20:03 +0900)]
[tensor] Allow updating batch size after allocation

Allow updating the batch size after memory allocation for a tensor.
The previously allocated memory is first set to null and then allocated again
with the new batch size. As tensors share memory, the previously allocated
memory might not be freed when set to null, which can lead to high peak memory usage.
It is recommended to first free the memory of all tensors whose batch size is changing,
then update the batch size, and then allocate again to keep the peak memory requirement
in check.

As the memory is allowed to be re-allocated, the src_tensor for a tensor is no longer
reset after memory allocation. When reallocating, the src_tensor is used
again in order to maintain the memory management done once during initialization.

This patch also sets appropriate interface in Var_Grad and Weight classes as well.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
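The recommended free-then-update-then-allocate order from the commit above can be sketched with a minimal vector-backed tensor; the struct and method names are hypothetical, chosen only to illustrate why releasing (possibly shared) memory before reallocating keeps peak memory low.

```cpp
#include <cassert>
#include <memory>
#include <vector>

// Hypothetical minimal tensor, for illustration only.
struct MiniTensor {
  unsigned int batch, features;
  std::shared_ptr<std::vector<float>> data; // buffer, possibly shared

  void deallocate() { data.reset(); } // drop our reference first
  void allocate() {
    data = std::make_shared<std::vector<float>>(batch * features, 0.0f);
  }
  void updateBatch(unsigned int new_batch) {
    deallocate();      // 1. release the old (possibly shared) memory
    batch = new_batch; // 2. update the batch size
    allocate();        // 3. allocate again with the new size
  }
};
```

Performing step 1 for every affected tensor before any tensor reaches step 3 avoids holding both the old and new buffers alive at once.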
3 years ago[filter] Extract inference wrapper header
Jihoon Lee [Thu, 18 Feb 2021 04:50:12 +0000 (13:50 +0900)]
[filter] Extract inference wrapper header

This patch extracts inference wrapper header for testability and better
maintainability.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[loss] Allow no loss in the model
Parichay Kapoor [Thu, 4 Feb 2021 04:45:22 +0000 (13:45 +0900)]
[loss] Allow no loss in the model

Allow setting no loss in the model.
This allows inferencing a model, and creating submodels
with backbones to infer some particular output features.

Updated the existing unittest with no loss to succeed
and added more unittests to validate that models run successfully
without a loss.

V2:
Updated mnist_inf.ini to be without loss

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[optimizer] Update to camelcase
Parichay Kapoor [Fri, 29 Jan 2021 11:42:12 +0000 (20:42 +0900)]
[optimizer] Update to camelcase

Update apply_gradient(s) to applyGradient(s)

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[weight] Move apply gradient to weight
Parichay Kapoor [Fri, 29 Jan 2021 11:34:18 +0000 (20:34 +0900)]
[weight] Move apply gradient to weight

Move apply gradient to weight, which adds the gradient, multiplied by the
learning rate, to the variable.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[weight/layer] Move weight regularization out of layers
Parichay Kapoor [Fri, 29 Jan 2021 11:03:51 +0000 (20:03 +0900)]
[weight/layer] Move weight regularization out of layers

Move weight regularization out of layers into weights
and remove the same code from all the layers.
Loss and grads from weight regularization are now computed by the
weight itself.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[Fix/Bug] NeuralNet check save path
Jihoon Lee [Thu, 28 Jan 2021 12:14:10 +0000 (21:14 +0900)]
[Fix/Bug] NeuralNet check save path

Add a check to see if save_path is valid; if not, issue a warning

**Semantic changes:**
Starting with this patch, the default save path is removed

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Refactor/App] Update readme
Jihoon Lee [Mon, 22 Feb 2021 06:37:07 +0000 (15:37 +0900)]
[Refactor/App] Update readme

Update the `readme` to explain how to run based on the changed resource
path

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Refactor/app] Move resources to res path
Jihoon Lee [Mon, 22 Feb 2021 06:15:50 +0000 (15:15 +0900)]
[Refactor/app] Move resources to res path

**Major Changes proposed in this PR:**
- Move resources to the res root to make sure they are installed when
installing the application
- Change test script accordingly

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Clean/iniTest] add logic to erase ini after test
Jihoon Lee [Fri, 19 Feb 2021 06:58:24 +0000 (15:58 +0900)]
[Clean/iniTest] add logic to erase ini after test

This patch adds logic to erase the ini file after each test, for better
determinism and a cleaner build directory.

v2: also deprecating config_str in favor of
`ScopedIni`

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Refactor/test] Move resources to test path
Jihoon Lee [Fri, 19 Feb 2021 03:06:01 +0000 (12:06 +0900)]
[Refactor/test] Move resources to test path

**Situation this commit tries to deal with**
Currently, resources are scattered and each is managed in its own way,
e.g. from .tar.gz archives, test_models, and resources inside the applications.
This causes a gap between running tests at build time and installing an
example/test and trying to run it.
Also, all temporary resources are packed into the build folder, making it
harder to debug the log files and models, let alone causing confusion
about which files should be referred to when running an
application.

**This commit remedies the issue so that:**
- running tests after `ninja`, and running tests after `ninja install` and
moving to `${bindir}`, give consistent results.

**Major changes proposed in this commit**
- Create a `${buildroot}/res` root.
- Hard-link (or, if not possible, copy) resources to `${buildroot}/res`.
- Each test refers to a dynamically calculated path instead of `${pwd}`.
- The path is calculated by 1) directly referring to the NNTRAINER_RESOURCE_PATH
env variable, or 2) if not set, a predefined path starting from `${pwd}/res`.

**Minor changes proposed in this commit:**
- Deprecate some internal tests that are already covered elsewhere

**Planned updates after the commit**
- Deprecate config_str in favor of `IniTestWrapper`, as config_str makes
things harder to track
- Make sure test-generated files are deleted after each test
- Manage application resources
- ~Packaging updates to include the solely managed `${buildroot}/res`
folder~ (already configured)

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Chores] Richer log to some functions
Jihoon Lee [Fri, 19 Feb 2021 03:21:47 +0000 (12:21 +0900)]
[Chores] Richer log to some functions

This patch makes some logs contain runtime information.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Util] Delete internal namespace
Jihoon Lee [Wed, 27 Jan 2021 05:32:33 +0000 (14:32 +0900)]
[Util] Delete internal namespace

This patch removes internal namespace from nntrainer::exception

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[meson] Set meson buildtype to release
Parichay Kapoor [Fri, 5 Feb 2021 07:49:36 +0000 (16:49 +0900)]
[meson] Set meson buildtype to release

Setting -O optimization manually produces meson warnings.
Instead, update the meson buildtype to release,
which sets debug to 0 and optimization to level 3.

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
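The change can be pictured with the following hypothetical meson invocations (the build directory name is illustrative):

```shell
# Forcing the optimization level by hand makes meson warn:
#   meson setup build -Doptimization=3
# Setting the buildtype instead implies debug=false and optimization=3:
meson setup build --buildtype=release
```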
3 years ago[Build] change pkg name
Jaeyun [Tue, 16 Feb 2021 10:49:04 +0000 (19:49 +0900)]
[Build] change pkg name

update pkg name to machine-learning-training.

TODO:
1. separate training api pkg in debian
2. migrate training api to api repo

Signed-off-by: Jaeyun <jy1210.jung@samsung.com>
3 years ago[Fix] Gcc-9 complaining about parentheses
Jihoon Lee [Tue, 2 Mar 2021 04:12:52 +0000 (04:12 +0000)]
[Fix] Gcc-9 complaining about parentheses

Because of unclear parentheses in `ErrorNotification`,
gcc-9 issued a warning about the `NNTR_THROW_IF_CLEANUP` macro when
no stream is given.

This patch fixes the issue.

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Test] Add test to create layer from an example
Jihoon Lee [Wed, 27 Jan 2021 12:43:50 +0000 (21:43 +0900)]
[Test] Add test to create layer from an example

This patch adds libpow_layer.so as an example.
Also this patch adds a simple test to load and register the so file.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[AppContext] Add register plugin function
Jihoon Lee [Wed, 27 Jan 2021 12:42:36 +0000 (21:42 +0900)]
[AppContext] Add register plugin function

This patch adds a register-plugin function to be used when loading a plugin library

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Pluggable] Add layer wrapper for plugged layer
Jihoon Lee [Wed, 27 Jan 2021 12:39:14 +0000 (21:39 +0900)]
[Pluggable] Add layer wrapper for plugged layer

As a layer loaded via `dlsym` cannot be destroyed by `delete`, we
need a way to delete it through its destroy function. This patch adds a simple
wrapper to deal with the problem.

Additionally, this patch makes the functions in layer_internal
virtual, which paves the way to cleaning up the API. Please consider
`plugged_layer.h` a draft of the layer API.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
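The idea can be sketched as follows (a minimal illustration, assuming a `create`/`destroy` pair exported by the plugin; all names are hypothetical, not the actual nntrainer API):

```cpp
// A layer created inside a dlopen()ed plugin must be destroyed by the
// plugin's own destroy function, never by plain `delete`, because the
// allocator on the other side of the boundary may differ. A thin wrapper
// can hold the pointer together with the destroy callback.
struct Layer { // stand-in for the layer interface
  virtual ~Layer() = default;
  virtual int forward(int x) = 0;
};

using DestroyFunc = void (*)(Layer *);

class PluggedLayer : public Layer {
public:
  PluggedLayer(Layer *l, DestroyFunc d) : layer(l), destroy(d) {}
  ~PluggedLayer() override { destroy(layer); } // never plain `delete`
  int forward(int x) override { return layer->forward(x); }

private:
  Layer *layer;
  DestroyFunc destroy;
};

// --- simulated plugin side (would normally live in the .so and be
// resolved via dlsym) ---
struct PowLayer : Layer {
  int forward(int x) override { return x * x; }
};
bool g_destroyed = false;
Layer *create_pow() { return new PowLayer(); }
void destroy_pow(Layer *l) {
  g_destroyed = true;
  delete l; // the plugin deletes with its own allocator
}
```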
3 years ago[API] Add layer pluggable definition
Jihoon Lee [Tue, 26 Jan 2021 11:33:18 +0000 (20:33 +0900)]
[API] Add layer pluggable definition

Add layer pluggable definition to be used when creating a custom layer
to be plugged into nntrainer

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[SimpleShot] Add simpleshot runner
Jihoon Lee [Mon, 18 Jan 2021 07:15:27 +0000 (16:15 +0900)]
[SimpleShot] Add simpleshot runner

All layers are ready for simpleshot, but there is no executable yet.
This patch adds the simpleshot executable.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[SimpleShot] add centroid nearest neighbor layer
Jihoon Lee [Sat, 9 Jan 2021 12:58:34 +0000 (21:58 +0900)]
[SimpleShot] add centroid nearest neighbor layer

Add centroid nearest neighbor layer with validation

v2: renamed to centroid_knn for brevity

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
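The centroid nearest-neighbour idea can be sketched as follows (an illustrative, self-contained version; the class name and the per-class running-sum layout are assumptions, not the nntrainer layer's actual implementation):

```cpp
#include <cstddef>
#include <limits>
#include <vector>

// Hypothetical sketch: keep one mean feature vector (centroid) per class,
// built from support examples, and classify a query by its closest centroid.
struct CentroidKNN {
  std::vector<std::vector<float>> sums; // per-class feature sums
  std::vector<int> counts;              // per-class example counts

  CentroidKNN(std::size_t numClasses, std::size_t dim)
    : sums(numClasses, std::vector<float>(dim, 0.0f)), counts(numClasses, 0) {}

  // accumulate a labeled support example into its class centroid
  void update(int label, const std::vector<float> &feat) {
    for (std::size_t i = 0; i < feat.size(); ++i)
      sums[label][i] += feat[i];
    counts[label]++;
  }

  // return the class whose centroid has the smallest squared distance
  int classify(const std::vector<float> &feat) const {
    int best = -1;
    float bestDist = std::numeric_limits<float>::max();
    for (std::size_t c = 0; c < sums.size(); ++c) {
      if (counts[c] == 0)
        continue;
      float d = 0.0f;
      for (std::size_t i = 0; i < feat.size(); ++i) {
        float diff = feat[i] - sums[c][i] / counts[c];
        d += diff * diff;
      }
      if (d < bestDist) {
        bestDist = d;
        best = static_cast<int>(c);
      }
    }
    return best;
  }
};
```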
3 years ago[SimpleShot] Add centering layer
Jihoon Lee [Sat, 9 Jan 2021 04:23:58 +0000 (13:23 +0900)]
[SimpleShot] Add centering layer

Add centering layer with tests

**minor changes**
- rename simpleshot centering test
- add test_util lib to simpleshot test

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[SimpleShot] Add test util cpp
Jihoon Lee [Fri, 8 Jan 2021 06:11:38 +0000 (15:11 +0900)]
[SimpleShot] Add test util cpp

Add a simple utility to the simpleshot app

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[SimpleShot] Add scaffolding for the application
Jihoon Lee [Fri, 8 Jan 2021 05:28:25 +0000 (14:28 +0900)]
[SimpleShot] Add scaffolding for the application

This patch adds the simpleshot directory to the applications.
Nothing is present yet, just a simple structure.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Fix] Handle edgecase when using shared memory
Jihoon Lee [Mon, 18 Jan 2021 07:12:42 +0000 (16:12 +0900)]
[Fix] Handle edgecase when using shared memory

When using shared memory, if weight_size is 0, mmap returns an error.
This patch fixes the issue by handling the case where weight_size is 0.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
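A minimal sketch of such a guard (illustrative code, not the actual nntrainer implementation; POSIX `mmap` fails with EINVAL when the length is 0):

```cpp
#include <cstddef>
#include <sys/mman.h>

// Hypothetical helper: a zero weight size must be handled before calling
// mmap(), because mmap() rejects a zero length with EINVAL.
void *allocSharedMemory(std::size_t size) {
  if (size == 0)
    return nullptr; // nothing to map; avoid mmap's EINVAL on length 0
  void *ptr = mmap(nullptr, size, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
  return ptr == MAP_FAILED ? nullptr : ptr;
}
```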
3 years ago[Loss] Fix loss layer allocating new memory
Jihoon Lee [Fri, 22 Jan 2021 08:09:38 +0000 (17:09 +0900)]
[Loss] Fix loss layer allocating new memory

As allocation is managed by the `manager`, a layer should not allocate new
memory for tensors that the manager owns.
However, the loss layer was allocating new memory. This patch fixes the issue.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Cc: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[Fix warning] Add override specifier
Juyeong Lee [Thu, 25 Feb 2021 21:36:29 +0000 (06:36 +0900)]
[Fix warning] Add override specifier

Add override specifier to fix some Android build warnings. (Related to

Resolves #798

Signed-off-by: Juyeong Lee <2jy22@naver.com>
3 years ago[Docs] Fix typos in README.md accepted/tizen/unified/20210226.131826 submit/tizen/20210226.034152
Juyeong Lee [Thu, 25 Feb 2021 17:41:27 +0000 (02:41 +0900)]
[Docs] Fix typos in README.md

This is a trivial commit that fixes typos in README.md

Signed-off-by: Juyeong Lee <2jy22@naver.com>
3 years ago[Fix/Svace] Add try catch to the top level app
Jihoon Lee [Thu, 25 Feb 2021 04:46:30 +0000 (13:46 +0900)]
[Fix/Svace] Add try catch to the top level app

Add try-catch to the top-level app to ensure exceptions do not
propagate to the very top level

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
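The pattern applied here can be sketched as follows (illustrative code under the assumption of a simple entry point; not the application's actual code):

```cpp
#include <exception>
#include <iostream>
#include <stdexcept>

// Hypothetical sketch: wrap the top-level body in try-catch so no
// exception escapes main(), which would otherwise call std::terminate().
int runApp(bool simulate_failure) {
  try {
    if (simulate_failure)
      throw std::invalid_argument("failed to create dataset");
    return 0; // normal exit
  } catch (const std::exception &e) {
    std::cerr << "uncaught error: " << e.what() << '\n';
    return 1; // controlled failure instead of std::terminate()
  }
}
```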
3 years ago[Build] change dependency to nnstreamer api accepted/tizen/unified/20210218.080536 submit/tizen/20210217.032056
Jaeyun [Tue, 16 Feb 2021 04:58:14 +0000 (13:58 +0900)]
[Build] change dependency to nnstreamer api

Now that the api repo has been added, change the dependency to ml-inference-api.

Signed-off-by: Jaeyun <jy1210.jung@samsung.com>
3 years ago[Build] Decouple gtest from nntrainer_test_util
Jihoon Lee [Thu, 28 Jan 2021 04:56:18 +0000 (13:56 +0900)]
[Build] Decouple gtest from nntrainer_test_util

As nntrainer_test_util linked gtest, it prevented the compiler from
checking some trivial bugs (like sign comparison) and caused some odd bugs.

This patch decouples gtest from nntrainer_test_util while changing gtest
to static build.

Resolves #910

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Util] Add gtest style throw macro
Jihoon Lee [Wed, 27 Jan 2021 05:32:33 +0000 (14:32 +0900)]
[Util] Add gtest style throw macro

Add a gtest-style throw macro for productivity.
With this patch, if you have to throw, you can do it in one line:

`NNTR_THROW_IF(true, std::invalid_argument) << log << what << ever`

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
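One way such a macro can be built is sketched below (an illustrative assumption, not the actual `NNTR_THROW_IF` implementation): a temporary helper object collects the streamed message and throws from its destructor at the end of the statement.

```cpp
#include <sstream>
#include <stdexcept>
#include <string>

// Hypothetical sketch of a gtest-style throw macro. The temporary's
// destructor runs after all the << operators, so it sees the full message.
template <typename ExceptionT>
struct ThrowNotifier {
  std::ostringstream msg;
  ~ThrowNotifier() noexcept(false) { throw ExceptionT(msg.str()); }
};

// If pred is false, the temporary is never created and nothing is thrown.
#define SKETCH_THROW_IF(pred, ExceptionT) \
  if (pred) ThrowNotifier<ExceptionT>{}.msg
```

Usage mirrors the one-liner from the commit message: `SKETCH_THROW_IF(true, std::invalid_argument) << "log " << 42;`. Streaming into the rvalue `msg` works via the C++11 rvalue stream insertion overload.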
3 years ago[Chore] s/since/Since/
Jihoon Lee [Tue, 16 Feb 2021 03:06:08 +0000 (12:06 +0900)]
[Chore] s/since/Since/

Per the Tizen convention, `since` should be `Since`; this patch resolves
the issue

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[Fix] Update lazy alloc ctor to lazy alloc
Jihoon Lee [Mon, 15 Feb 2021 07:28:28 +0000 (16:28 +0900)]
[Fix] Update lazy alloc ctor to lazy alloc

This patch fixes the lazy-alloc constructor to actually allocate lazily.
It was delegating to the eager constructor, causing the tensor to be
allocated twice for lazily allocated tensors.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
3 years ago[custom-app] svace: added missing try-catch accepted/tizen/unified/20210210.052156 submit/tizen/20210209.084149
Parichay Kapoor [Tue, 9 Feb 2021 04:02:38 +0000 (13:02 +0900)]
[custom-app] svace: added missing try-catch

Added missing try-catch reported by svace for the custom tizen application

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[manager] svace issue fix
Parichay Kapoor [Tue, 9 Feb 2021 03:11:59 +0000 (12:11 +0900)]
[manager] svace issue fix

Fixed an uninitialized class member issue in manager.cpp (MMapedMemory)
by adding class member initialization.

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
3 years ago[conv] svace integer overflow fix
Parichay Kapoor [Tue, 9 Feb 2021 02:58:51 +0000 (11:58 +0900)]
[conv] svace integer overflow fix

Added integer overflow fix notified by svace for convolution

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
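The class of bug being fixed can be illustrated as follows (hypothetical code, not the actual convolution layer): a product of 32-bit sizes overflows before it is widened, so one operand must be widened first.

```cpp
#include <cstdint>

// Hypothetical buffer-size computation. Without the cast, the expression
// channel * height * width is evaluated in 32 bits and can wrap around
// before being stored in a 64-bit result. Widening the first operand
// keeps the whole multiplication in 64-bit arithmetic.
uint64_t safeBufferSize(uint32_t channel, uint32_t height, uint32_t width) {
  return static_cast<uint64_t>(channel) * height * width;
}
```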
3 years agoFix svace issue GraphWatcher UNINIT.CTOR
hyeonseok lee [Tue, 9 Feb 2021 06:29:20 +0000 (15:29 +0900)]
Fix svace issue GraphWatcher UNINIT.CTOR

In the constructor, the member variable expected_loss was not initialized.
expected_loss is read later in the readIteration function, so it is initialized with 0.0.

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
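The fix pattern can be sketched like this (illustrative class, not the actual GraphWatcher):

```cpp
// Hypothetical sketch: initialize scalar members in the constructor's
// member-init list so any later read is well-defined instead of reading
// indeterminate memory (the UNINIT.CTOR class of svace findings).
class GraphWatcherSketch {
public:
  GraphWatcherSketch() : expected_loss(0.0f) {}
  float getExpectedLoss() const { return expected_loss; }

private:
  float expected_loss; // read later (e.g. by readIteration), so it must start defined
};
```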
3 years agoFix svace issue manager UNREACHABLE_CODE
hyeonseok lee [Tue, 9 Feb 2021 06:17:26 +0000 (15:17 +0900)]
Fix svace issue manager UNREACHABLE_CODE

The fd_ variable is only changed in the ANDROID environment,
so it is enclosed in an ifdef.

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
3 years agoFix svace issue createDataset NO_CATCH
hyeonseok lee [Tue, 9 Feb 2021 03:41:39 +0000 (12:41 +0900)]
Fix svace issue createDataset NO_CATCH

Add a try-catch statement to handle exceptions in main.cpp

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
3 years agoFix svace issue readModel NO_CATCH
hyeonseok lee [Tue, 9 Feb 2021 03:32:01 +0000 (12:32 +0900)]
Fix svace issue readModel NO_CATCH

Add a try-catch statement to handle exceptions in main.cpp and main_func.cpp

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>