hyeonseok lee [Thu, 25 Aug 2022 14:07:27 +0000 (23:07 +0900)]
[unittest] generate positional encoding unittest
- Generate positional encoding layer/model unittest.
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
hyeonseok lee [Thu, 25 Aug 2022 14:03:54 +0000 (23:03 +0900)]
[positional encoding] implement positional encoding layer
- Positional encoding needs to be calculated only once, so make its lifespan the max lifespan
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
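Since the encoding depends only on position and model dimension, it can be computed once and cached, which is why a max lifespan works. As an illustration, a minimal sketch of the standard sinusoidal formulation; the function name and plain-vector layout are assumptions, not nntrainer's actual Tensor API:

```cpp
#include <cmath>
#include <vector>

// Compute the standard sinusoidal positional encoding once for a fixed
// maximum sequence length: even columns hold sin, odd columns hold cos.
std::vector<std::vector<float>> positional_encoding(int max_len, int d_model) {
  std::vector<std::vector<float>> pe(max_len,
                                     std::vector<float>(d_model, 0.0f));
  for (int pos = 0; pos < max_len; ++pos) {
    for (int i = 0; i < d_model; i += 2) {
      // PE(pos, 2i) = sin(pos / 10000^(2i/d)), PE(pos, 2i+1) = cos(...)
      double div = std::pow(10000.0, static_cast<double>(i) / d_model);
      pe[pos][i] = static_cast<float>(std::sin(pos / div));
      if (i + 1 < d_model)
        pe[pos][i + 1] = static_cast<float>(std::cos(pos / div));
    }
  }
  return pe;
}
```

Because the table is independent of the input, a layer can build it in finalize and reuse it for every forward pass.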
hyeonseok lee [Fri, 15 Jul 2022 05:51:02 +0000 (14:51 +0900)]
[multi head attention] added unittest
- Added layer/model unittest for multi head attention
- Change == operator overloading to pass if both tensors have nan values
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
hyeonseok lee [Fri, 15 Jul 2022 05:46:20 +0000 (14:46 +0900)]
[multi_head_attention] implement calcDerivative, calcGradient
- Implement multi head attention calcDerivative, calcGradient
- Needs to support bool type attention mask
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
hyeonseok lee [Wed, 13 Jul 2022 02:25:08 +0000 (11:25 +0900)]
[multi head attention] implement calcCommonDerivative
- implement multi head attention calcCommonDerivative
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
hyeonseok lee [Wed, 13 Jul 2022 02:22:43 +0000 (11:22 +0900)]
[multi head attention] implement forwarding
- implement multi head attention forwarding
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
hyeonseok lee [Tue, 19 Jul 2022 03:22:05 +0000 (12:22 +0900)]
[multi head attention] implement finalize
- Implement multi head attention finalize
- Remove finalizeCommon
- Remove ProvideAttentionMask property and inout_idx member variable
because NNTrainer assumes that multi head attention has at least 3 inputs
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
hyeonseok lee [Thu, 25 Aug 2022 14:41:52 +0000 (23:41 +0900)]
[api] added newly implemented layer enum
- Added newly implemented layer enum to nntrainer-api-common.h
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
DonghakPark [Mon, 29 Aug 2022 06:22:25 +0000 (15:22 +0900)]
[Trivial] Fix Typo
Fix Typo in nntrainer/compiler/*
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: DonghakPark <donghak.park@samsung.com>
jijoong.moon [Thu, 4 Aug 2022 04:22:57 +0000 (13:22 +0900)]
[ LSTM ] Optimize LSTM Gradient calculation
Gradient computation of LSTM takes over 60% of the total computation. This patch
includes an optimization using the tensor dimension which only has
width.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Wed, 3 Aug 2022 06:46:51 +0000 (15:46 +0900)]
[Application] Add profiling for VGG
This patch adds profiling to the VGG Application
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
hyeonseok lee [Wed, 13 Jul 2022 02:17:29 +0000 (11:17 +0900)]
[multi head attention] Added multi head attention scaffold
- Added calcCommonDerivative
- Added finalizeCommon
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
jijoong.moon [Wed, 3 Aug 2022 06:47:56 +0000 (15:47 +0900)]
[ Layer ] parallelization along batch for pooling forward computation
This patch includes parallelization along batch in forward computation
of Pooling 2D layer
Resolves:
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Wed, 3 Aug 2022 06:44:36 +0000 (15:44 +0900)]
[ LAYERS ] LSTM : parallelization along batch direction (calGradient)
This patch includes parallelization along batch direction for
calculation of LSTM Gradient.
Also thread id is added in thread callback parameter to use it internally.
Resolves:
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
hyunil park [Fri, 19 Aug 2022 09:51:37 +0000 (18:51 +0900)]
[Property] Remove if-else statement in DropOutRate::isValid()
- Remove if-else statement
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: hyunil park <hyunil46.park@samsung.com>
jijoong.moon [Fri, 19 Aug 2022 01:50:06 +0000 (10:50 +0900)]
[ UTIL ] add frequency data in mem_usage.sh
Sometimes a CPU frequency check is required.
This patch adds CPU frequency logging to mem_usage.sh
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
hyeonseok lee [Thu, 18 Aug 2022 12:51:12 +0000 (21:51 +0900)]
[neuralnet] adjust epoch_idx when stop_cb is called
- Assume that stop_cb is called immediately, so reduce current epoch_idx by 1
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
hyeonseok lee [Wed, 3 Aug 2022 04:31:32 +0000 (13:31 +0900)]
[jni] revise android build
- Remove jni/Android.mk file. Will use jni/Android.mk.in
- Revise docs to use tools/package_android.sh when building for android
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
MyungJoo Ham [Thu, 11 Aug 2022 05:52:04 +0000 (14:52 +0900)]
Fix inappropriate SPDX license tag
SPDX-License_Identifier --> SPDX-License-Identifier
Added one more # to the first line for doxygen.
Signed-off-by: MyungJoo Ham <myungjoo.ham@samsung.com>
DonghakPark [Fri, 5 Aug 2022 04:23:52 +0000 (13:23 +0900)]
[trivial] Reorganize README.md files
This is Proposal PR with issue #1974
- Remove Example contents in nntrainer/README.md
- Add Example contents in nntrainer/Application/README.md
- Close #1974
Signed-off-by: DonghakPark <donghak.park@samsung.com>
jijoong.moon [Mon, 18 Jul 2022 04:43:16 +0000 (13:43 +0900)]
[ Activation ] improve tanh computation
This patch improves the computation of tanh.
Rather than calling the tanh function, it is faster when
sigmoid is used:
tanh(x) = 2.0 * sigmoid(2.0 * x) - 1.0;
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
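The identity above can be checked directly. A standalone sketch (the helper names are illustrative, not nntrainer's actual ones); a single sigmoid needs one exp() call, while a naive tanh via (e^x - e^-x)/(e^x + e^-x) needs two:

```cpp
#include <cmath>

// Logistic sigmoid: 1 / (1 + e^-x), one exp() call.
static inline float sigmoid(float x) { return 1.0f / (1.0f + std::exp(-x)); }

// tanh via the identity tanh(x) = 2*sigmoid(2x) - 1 described in the commit.
static inline float fast_tanh(float x) {
  return 2.0f * sigmoid(2.0f * x) - 1.0f;
}
```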
jijoong.moon [Mon, 18 Jul 2022 04:40:43 +0000 (13:40 +0900)]
[ Layer ] Conv2d Gradient Computation with Multi-Threads
This patch includes multi-threading for gradient computation of conv2d
layer.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
DonghakPark [Mon, 25 Jul 2022 04:09:06 +0000 (13:09 +0900)]
[Application] Add AlexNet(Fused) Application, Merge cifar_dataloader into utils
- Add AlexNet(Fused) Application
- Update meson.build (add Alexnet subdir)
- ADD main.cpp (AlexNet), alex.ini, README.md
- Merge cifar_dataloader into utils/datagen/cifar
- Close #1969
Signed-off-by: Donghak Park <donghak.park@samsung.com>
hyeonseok lee [Tue, 5 Jul 2022 11:36:36 +0000 (20:36 +0900)]
[conv] support causal padding in conv1d
- Replace padding2d to padding1d in conv1d
- Enable causal property in conv1d
- Added unittest for causal padding in conv1d
Close #1947
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
DonghakPark [Thu, 14 Jul 2022 06:45:58 +0000 (15:45 +0900)]
[Application] Update README.md File
- Update README res/mnist.ini part (Split Optimizer, LearningRateScheduler)
- Update align code comment
Signed-off-by: DongHak Park <donghak.park@samsung.com>
jijoong.moon [Thu, 7 Jul 2022 13:25:45 +0000 (22:25 +0900)]
[ Layers ] parallelize forwarding of conv2d
This patch includes the batch direction parallelization of forwarding
in Conv2D layer.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
hyeonseok lee [Mon, 18 Jul 2022 02:43:09 +0000 (11:43 +0900)]
[trivial] fix ahub issue
- Added try catch statement
- Delete structurally dead code
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
jijoong.moon [Thu, 7 Jul 2022 09:32:37 +0000 (18:32 +0900)]
[ Utils ] create NNTrThread Features
This patch includes the NNTrThreads features. This will be used for the
multi-thread features of nntrainer, such as a thread pool and for-loop
multi-threading along the batch direction.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Thu, 7 Jul 2022 09:24:54 +0000 (18:24 +0900)]
[ Layers ] Add Parallelization along batch direction
This patch demonstrates the batch direction parallelization with conv2d
calcDerivatives.
- Add the meson option with 'nntr-num-threads' key and int value
- Add an extra compile option, NNTR_NUM_THREADS (default value is 1)
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Hyunil [Mon, 11 Jul 2022 06:32:22 +0000 (15:32 +0900)]
[Application] Rename main_sample.cpp to main.cpp in KNN
- Rename the filename main_sample.cpp to main.cpp
- Modify meson.build and Android.mk
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Hyunil <hyunil46.park@samsung.com>
Hyunil [Mon, 11 Jul 2022 06:21:20 +0000 (15:21 +0900)]
[Application] Remove unused file of KNN
- Remove main.cpp
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Hyunil <hyunil46.park@samsung.com>
hyeonseok lee [Tue, 28 Jun 2022 05:21:44 +0000 (14:21 +0900)]
[conv] support dilation property
- Support dilation property in conv1d/conv2d layer
- Added unittest with dilation
Close #1922
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
jijoong.moon [Fri, 1 Jul 2022 14:16:31 +0000 (23:16 +0900)]
[ Trivial ] Fix the "Deprecated-declarations" error
This patch replaces 'INSTANTIATE_TEST_CASE_P', which will be deprecated,
with 'INSTANTIATE_TEST_SUITE_P'.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Jiho Chu [Thu, 30 Jun 2022 02:25:21 +0000 (11:25 +0900)]
[BUILD] fix supported gmock version
The google mock API changed in 1.10.0,
and MOCK_METHOD is not supported below version 1.10.0.
The 'profiler' test is only enabled for a proper gmock:
the MOCK_METHOD macro needs a gmock version >= 1.10.
Signed-off-by: Jiho Chu <jiho.chu@samsung.com>
jijoong.moon [Mon, 27 Jun 2022 09:59:07 +0000 (18:59 +0900)]
[ Application ] Fix VGG using CCAPI
This patch includes fixes to the VGG Application.
- Remove the training data generation with batch size:
  recently we updated the data generation callback to fill one data
  set at a time
- Remove the include of nntrainer internals (neuralnet.h):
  update to use CCAPI
- Reuse the data generation with cifar_dataloader.h in Resnet
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Hyunil [Wed, 22 Jun 2022 09:01:39 +0000 (18:01 +0900)]
[TEST] Add unittest related to setProperty for dataset
Add unittest for ml_train_dataset_set_property_for_mode_with_single_param()
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Hyunil <hyunil46.park@samsung.com>
Hyunil [Wed, 22 Jun 2022 08:51:55 +0000 (17:51 +0900)]
[CAPI] Add set property with single param for dataset
Add ml_train_dataset_set_property_for_mode_with_single_param().
ml_train_dataset_set_property_for_mode() has a va_list, and this is a problem with DllImport in C#.
va_list must be passed dynamically for each architecture, and this is not guaranteed to
work on some architectures. This API receives the param as a single string from C# DllImport.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Hyunil <hyunil46.park@samsung.com>
jijoong.moon [Wed, 22 Jun 2022 06:15:03 +0000 (15:15 +0900)]
[ BUILD ] fix binary size of libnntrainer with android build script
This patch fixes the wrong size of the nntrainer library, libnntrainer.so,
when using the android build script.
- Change the ndk-build options with NDK_LIBS_OUT
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
hyeonseok lee [Tue, 21 Jun 2022 09:41:56 +0000 (18:41 +0900)]
[neuralnet] check tflite interpreter is enabled when export
- Added ifdef statement to check tflite interpreter is enabled
- Added override specifier in get/setWeights
- Restore precision ostream format
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
hyeonseok lee [Mon, 20 Jun 2022 07:35:26 +0000 (16:35 +0900)]
[profiler] restore precision
- After print profile information restore precision
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
hyeonseok lee [Fri, 17 Jun 2022 06:45:53 +0000 (15:45 +0900)]
[fix] fix svace issue
- Delete duplicated unittest
- Added try catch statement to catch exception
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
hyeonseok lee [Thu, 16 Jun 2022 08:48:23 +0000 (17:48 +0900)]
[Application] add step learning rate scheduler example
- Added how to use step learning rate scheduler in mnist application
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
jijoong.moon [Wed, 15 Jun 2022 12:53:36 +0000 (21:53 +0900)]
[ TEST ] add golden data for grucell_fc
This patch includes the golden data of grucell_fc unit test.
Resolves:
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Wed, 15 Jun 2022 11:37:08 +0000 (20:37 +0900)]
[ Recurrent ] property for dynamic time sequence
- This patch provides a property for dynamic time sequence in the recurrent
realizer. The semantics of this is "dynamic_time_seq = true/false"
- Add a grucell + Fully Connected unit test case for reference of dynamic
time sequence
Related : #1933
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Hyunil [Thu, 16 Jun 2022 00:35:29 +0000 (09:35 +0900)]
[TEST] Add unittest related to setProperty for optimizer
- Add unittest for ml_train_optimizer_set_property_with_single_param()
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Hyunil <hyunil46.park@samsung.com>
Hyunil [Thu, 16 Jun 2022 00:18:01 +0000 (09:18 +0900)]
[CAPI] Add set property with single param for optimizer
Add ml_train_optimizer_set_property_with_single_param().
ml_train_optimizer_set_property() has a va_list, and this is a problem with DllImport in C#.
va_list must be passed dynamically for each architecture, and this is not guaranteed to
work on some architectures. This API receives the param as a single string from C# DllImport.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Hyunil <hyunil46.park@samsung.com>
DonghakPark [Wed, 15 Jun 2022 07:29:57 +0000 (16:29 +0900)]
[trivial] fix typo
- Fix typo
- Fix ist --> <sup>i</sup> for readability
Signed-off-by: Donghak Park <donghak.park@samsung.com>
Hyunil [Tue, 14 Jun 2022 06:10:33 +0000 (15:10 +0900)]
[CAPI] Add unittest related to setProperty for layer
- Add unittest for ml_train_layer_set_property_with_single_param()
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Hyunil <hyunil46.park@samsung.com>
Hyunil [Mon, 13 Jun 2022 09:17:27 +0000 (18:17 +0900)]
[CAPI] Add ml_train_layer_set_property_with_single_param() internally for C# DllImport
ml_train_layer_set_property() has a va_list, and this is a problem with DllImport in C#.
va_list must be passed dynamically for each architecture, and this is not guaranteed to
work on some architectures. This API receives the param as a single string from C# DllImport.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Hyunil <hyunil46.park@samsung.com>
Hyunil [Mon, 13 Jun 2022 02:11:18 +0000 (11:11 +0900)]
[CAPI] Modify model_compile_with_single_param() and ml_train_model_run_with_single_param()
Modified the internal APIs created for the C# va_list issue, since capi can receive a
single string in the form "key=value | key=value" and all objects have loadProperties
in setProperty, which can split it with '|'.
So the original code was reverted, and new code was added that calls capi directly.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Hyunil <hyunil46.park@samsung.com>
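The "key=value | key=value" form described above only needs a '|' split plus trimming before each token is handed to the usual setProperty path. A hypothetical sketch of such a splitter (the function name is illustrative, not the actual loadProperties implementation):

```cpp
#include <sstream>
#include <string>
#include <vector>

// Split a single "key=value | key=value" string (the C# DllImport-friendly
// form) into individual "key=value" tokens, trimming surrounding whitespace.
std::vector<std::string> split_properties(const std::string &params) {
  std::vector<std::string> out;
  std::stringstream ss(params);
  std::string tok;
  while (std::getline(ss, tok, '|')) {
    size_t b = tok.find_first_not_of(" \t");
    size_t e = tok.find_last_not_of(" \t");
    if (b != std::string::npos) // skip empty segments
      out.push_back(tok.substr(b, e - b + 1));
  }
  return out;
}
```

Each resulting "key=value" token can then be parsed exactly as the va_list variant parsed its individual arguments.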
Sangjung Woo [Wed, 15 Jun 2022 01:53:49 +0000 (10:53 +0900)]
[Spec] Add flatbuffers-devel BuildRequires as default
The compiler module always uses the flatc command, but the 'flatbuffers-devel'
dependency was only added when the Tizen version is lower than 6.5.
For this reason, a build-break issue occurs when building for
Tizen 7.0. This patch fixes this bug.
Signed-off-by: Sangjung Woo <sangjung.woo@samsung.com>
jijoong.moon [Fri, 10 Jun 2022 10:50:30 +0000 (19:50 +0900)]
[ Tensor ] remove rank 2 limitation for dot op
This patch removes the limitation of rank 2 for the dot op.
It expects to compute with a 4D tensor as 2D with [BxCxH, W]
dimension.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
hyeonseok lee [Fri, 10 Jun 2022 06:59:00 +0000 (15:59 +0900)]
[unittest] added zoneout mask unittest
- Test the zoneout mask is generated according to the zoneout rate
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
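The mask being tested is a per-element Bernoulli draw: each hidden-state element keeps its previous value with probability equal to the zoneout rate. A hypothetical sketch of such a mask generator (names and signature are assumptions, not the layer's actual code):

```cpp
#include <cstddef>
#include <random>
#include <vector>

// Draw a binary zoneout mask: each element is 1 (keep the previous hidden
// state) with probability `rate`, else 0 (take the newly computed state).
std::vector<float> zoneout_mask(std::size_t n, float rate, unsigned seed) {
  std::mt19937 gen(seed);
  std::bernoulli_distribution keep(rate);
  std::vector<float> mask(n);
  for (auto &m : mask)
    m = keep(gen) ? 1.0f : 0.0f;
  return mask;
}
```

A unittest like the one above can then check that the fraction of ones converges to the configured rate for large n.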
hyeonseok lee [Fri, 6 May 2022 02:21:04 +0000 (11:21 +0900)]
[layer] fix typo
- Fix typo
- Delete duplicated code
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
jijoong.moon [Mon, 13 Jun 2022 00:45:35 +0000 (09:45 +0900)]
[ Doc ] Update Readme.md
Add new doc which explains NNTrainer internal
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Hyunil [Fri, 3 Jun 2022 06:18:11 +0000 (15:18 +0900)]
[docs] Update configuration-ini.md file
- Remove learning_rate from oprimizer section
- Add learning rate scheduler section
- Remove dataset section
- Add train set, validation set and test set section
- Add many types to layer section
- Add table about type, key, value, default value and description for each layers
- Update configuration file example
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Hyunil <hyunil46.park@samsung.com>
jijoong.moon [Thu, 2 Jun 2022 10:57:51 +0000 (19:57 +0900)]
[ Packaging ] Packaging for Tizen 6.0
This patch includes fixes to support the Tizen 6.0 build.
- Fix .spec & meson.build
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
hyeonseok lee [Tue, 7 Jun 2022 07:06:00 +0000 (16:06 +0900)]
[bug_fix] bug fix zoneout lstmcell layer
- This patch enables zoneout even though the zoneout rate is smaller than epsilon
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
Hyunil [Tue, 24 May 2022 02:02:45 +0000 (11:02 +0900)]
[CAPI] Add unittest related to compile and train
- Add unittest for ml_train_model_compile_with_single_param()
- Add unittest for ml_train_model_run_with_single_param()
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Hyunil <hyunil46.park@samsung.com>
Hyunil [Mon, 23 May 2022 03:32:45 +0000 (12:32 +0900)]
[Property] Modify error log
- Add double quotation marks to the log to indicate what is wrong with "key=value, key=value"
- Add example key and value
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Hyunil <hyunil46.park@samsung.com>
Hyunil [Tue, 24 May 2022 05:39:02 +0000 (14:39 +0900)]
[CAPI] Add ml_train_model_run_with_single_param() internally for C# DllImport
ml_train_model_run() has a va_list, and this is a problem with DllImport in C#.
va_list must be passed dynamically for each architecture, and this is not guaranteed to
work on some architectures. This API receives the model run params as a single string
from C# DllImport.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Hyunil <hyunil46.park@samsung.com>
Hyunil [Tue, 17 May 2022 03:07:44 +0000 (12:07 +0900)]
[CAPI] Add ml_train_model_compile_with_single_param() internally for C# DllImport
ml_train_model_compile() has a va_list, and this is a problem with DllImport in C#.
va_list must be passed dynamically for each architecture, and this is not guaranteed to
work on some architectures. This API receives the model compile params as a single string
from C# DllImport.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Hyunil <hyunil46.park@samsung.com>
Jiho Chu [Mon, 30 May 2022 04:56:38 +0000 (13:56 +0900)]
[Utils] Add Memory profile feature to Profiler
This patch implements the memory profiling feature.
Profiler is refactored to handle memory statistics information.
The unnecessary dependency from ProfileListener to the Profiler
class is removed, and the inner information is redesigned to handle
both time and memory profiling.
For the memory profile, the information below is managed:
event: ALLOC | DEALLOC
current size: total allocated memory size
info: user friendly tag for the analysis
duration: time interval between alloc and dealloc
Signed-off-by: Jiho Chu <jiho.chu@samsung.com>
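The four fields listed above map naturally onto one record per memory event. A sketch of such a record; the type and field names are illustrative, not the actual Profiler types:

```cpp
#include <chrono>
#include <cstddef>
#include <string>

// One memory-profile record, mirroring the fields described above.
enum class MemEvent { ALLOC, DEALLOC };

struct MemRecord {
  MemEvent event;                     // ALLOC | DEALLOC
  std::size_t current_size;           // total allocated bytes after the event
  std::string info;                   // user-friendly tag for the analysis
  std::chrono::microseconds duration; // interval between alloc and dealloc
};
```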
Jiho Chu [Mon, 30 May 2022 04:54:42 +0000 (13:54 +0900)]
[Test] Modify profiler test to add Memory event
This patch implements the profiler test.
The whole code is refactored to test both 'time' and 'memory' profiling.
It uses a fixture test, and gmock is used to check the callback functions.
Signed-off-by: Jiho Chu <jiho.chu@samsung.com>
JRazek [Fri, 27 May 2022 17:28:39 +0000 (19:28 +0200)]
[ccapi/nntrainer] Add getters for compiled, initialized and loadedFromConfig
This commit adds bool getters for the states of compiled, initialized and loadedFromConfig in ml::train::Model class.
Signed-off-by: JRazek <jakub.razek2@gmail.com>
hyeonseok lee [Fri, 20 May 2022 03:56:06 +0000 (12:56 +0900)]
[ccapi] include common header in meson
- Newly created common header file was missing in meson build
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
jijoong.moon [Mon, 16 May 2022 11:10:11 +0000 (20:10 +0900)]
[ Application ] Update Logistic Regression to use CCAPI
This patch includes
. Update Logistic Regression to use CCAPIs
. Update Inference process
. static link about openmp
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
seongwoo [Wed, 18 May 2022 07:43:47 +0000 (16:43 +0900)]
[ccapi] Add exports interface.
This patch adds "exports" interface and implementation of neuralnet which literally exports the model according to given export method.
Also, this patch includes introducing `common.h` to ccapi, which defines ExportMethods.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: seongwoo <mhs4670go@naver.com>
seongwoo [Thu, 12 May 2022 07:59:31 +0000 (16:59 +0900)]
[layers] flatten layer consider target_shape
This patch makes a flatten layer consider target_shape and use it during exporting.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: seongwoo <mhs4670go@naver.com>
seongwoo [Thu, 12 May 2022 07:50:18 +0000 (16:50 +0900)]
[ Compiler ] add loss layer realizer
This patch includes loss realizer implementation and tests which removes the loss layer for exports.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: seongwoo <mhs4670go@naver.com>
Udit Jain [Fri, 13 May 2022 07:08:22 +0000 (16:08 +0900)]
Update how-to-run-example-android.md
Updated documentation to include some additional things I needed and added some troubleshooting cases.
Signed-off-by: Udit <udit.jain@samsung.com>
hyeonseok lee [Thu, 12 May 2022 10:51:29 +0000 (19:51 +0900)]
[unittest] add negative unittest
- Added negative unittest
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
hyeonseok lee [Thu, 12 May 2022 07:25:42 +0000 (16:25 +0900)]
[unittest] added assertion in unittest
- For the unittests which have no assertions, add assertions to the test
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
hyeonseok lee [Fri, 13 May 2022 13:06:39 +0000 (22:06 +0900)]
[unittest] remove duplicated unittest
- remove duplicated unittest
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
Hyunil [Thu, 12 May 2022 11:04:36 +0000 (20:04 +0900)]
[test] Add some negative tests to unittest
There are some cases where there is a positive test but no negative test
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Hyunil <hyunil46.park@samsung.com>
jijoong.moon [Tue, 26 Apr 2022 12:35:53 +0000 (21:35 +0900)]
[ CCAPI ] add copy model configuration & get/set Weight of layer
This patch enables the API for copying the model configuration and
getting/setting layer weight tensor data with a vector of float pointers.
The semantics of these APIs are:
. copyConfiguration(ml::train::Model &from)
. std::vector<float*> getWeights()
. void setWeights(const std::vector<float*>)
This only copies the model configuration; it requires compile and
initialization before training.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
hyeonseok lee [Thu, 28 Apr 2022 07:48:01 +0000 (16:48 +0900)]
[rnncell] generate testcase for multi in/out of rnncell
- Generate multi in/out rnncell and replace testcase of 1 input/output of rnncell
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
hyeonseok lee [Tue, 26 Apr 2022 09:57:21 +0000 (18:57 +0900)]
[rnncell] implement multiout for rnncell
- Calculating the derivative includes a process duplicated with calculating the gradient.
Needs to be optimized.
- Remove hidden state and get previous hidden state from input
- Remove timestep property
- Disabled rnncell unittest. Proper unittest will be followed.
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
hyeonseok lee [Fri, 22 Apr 2022 05:56:45 +0000 (14:56 +0900)]
[neuralnet] set epoch_idx to zero only if train from scratch
- Until now, the epoch property meant how many epochs to run,
but from now on it means the final epoch to run
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
hyeonseok lee [Fri, 22 Apr 2022 08:28:39 +0000 (17:28 +0900)]
[model] added getter for current epoch_idx
- Through this function user can get epoch_idx
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
hyeonseok lee [Fri, 22 Apr 2022 04:53:09 +0000 (13:53 +0900)]
[model] support stop training
- During training, the function will check every epoch to decide whether to stop
the training or not.
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
hyeonseok lee [Mon, 24 Jan 2022 05:36:41 +0000 (14:36 +0900)]
[lstm] implement calcGradient for bidirectional lstm
- Implement calcGradient for bidirectional lstm
- Added test case for bidirectional lstm
close #1726
Self evaluation:
Build test: [X]Passed [ ]Failed [ ]Skipped
Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
jijoong.moon [Fri, 6 May 2022 02:18:38 +0000 (11:18 +0900)]
[ CAPI ] change set_feature_state to be compatible with tizen api
This patch makes the set_feature_state variable compatible with the Tizen
ML API. It requires an ML feature parameter to set and get, and fixes
accordingly.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Mon, 2 May 2022 01:53:04 +0000 (10:53 +0900)]
[ Unit Test ] remove mse loss layer in makeCompiledGraph
This patch removes the mse loss layer in makeCompiledGraph
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Jiho Chu [Wed, 13 Apr 2022 11:10:35 +0000 (20:10 +0900)]
[Fix] Clean code for resnet application
- fix typo
- Use const for caught exception
- use EXIT_SUCCESS macro
- use error code
Signed-off-by: Jiho Chu <jiho.chu@samsung.com>
jijoong.moon [Wed, 20 Apr 2022 07:58:21 +0000 (16:58 +0900)]
[ TEST ] add more test cases for batch normalization realizer
This patch adds test cases for the batch normalization realizer with a
kind of resnet basic block which includes the multiout layer.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Tue, 19 Apr 2022 07:32:32 +0000 (16:32 +0900)]
[ Compiler ] Implement bn realizer with test
This patch completes the bn realizer for inference.
This path is only for inference and therefore assumes a 1-to-1
connection (one input and one output for the bn layer) in the
model graph. That means if there are multiple connections, a
multi-out layer follows, which makes sure the bn layer has 1-to-1
in/output during compilation.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Mon, 18 Apr 2022 05:55:34 +0000 (14:55 +0900)]
[ Compiler ] implement BN realizer
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Wed, 20 Apr 2022 00:38:31 +0000 (09:38 +0900)]
[ Interpreter ] add batch normalziation realizer
This patch removes the batch normalization layers with bn_realizer in
the GraphRepresentation for the TfliteInterpreter, which is for
inference.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Mon, 18 Apr 2022 05:55:34 +0000 (14:55 +0900)]
[ Compiler ] Implement bn realizer with test
This patch completes the bn realizer for inference.
This path is only for inference and therefore assumes a 1-to-1
connection (one input and one output for the bn layer) in the
model graph. That means if there are multiple connections, a
multi-out layer follows, which makes sure the bn layer has 1-to-1
in/output during compilation.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Hyunil [Fri, 22 Apr 2022 01:06:27 +0000 (10:06 +0900)]
[CAPI] Add check if model is null to ml_train_model_get_summary()
- Check if model is null or not before checking in ml_train_model_get_summary_util()
Self evaluation:
Build test: [X]Passed [ ]Failed [ ]Skipped
Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Hyunil <hyunil46.park@samsung.com>
jijoong.moon [Thu, 21 Apr 2022 01:19:13 +0000 (10:19 +0900)]
[ Android ] fix the warning message
This patch fixes the warning message for the android build
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
hyeonseok lee [Wed, 9 Feb 2022 08:15:03 +0000 (17:15 +0900)]
[zoneout] enable zoneout only when used
- Allocate and use zoneout mask, lstm_cell_state only when used
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
hyeonseok lee [Wed, 5 Jan 2022 03:42:45 +0000 (12:42 +0900)]
[grucell] remove temporary tensor
- Reduce temporary tensor
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
hyeonseok lee [Wed, 5 Jan 2022 03:39:55 +0000 (12:39 +0900)]
[zoneout lstmcell] share zoneout mask tensors
- Make zoneout_mask tensors shared when the cell is unrolled
Self evaluation:
Build test: [X]Passed [ ]Failed [ ]Skipped
Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
Parichay Kapoor [Mon, 13 Dec 2021 10:04:17 +0000 (19:04 +0900)]
[optimizer/lr] Add learning rate scheduler to ccapi
Open learning rate scheduler for the ccapi with the optimizer.
Corresponding changes are made and some tests are updated to use the new
way to set the learning rate.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Mon, 13 Dec 2021 05:25:29 +0000 (14:25 +0900)]
[lr] Support step wise decay scheduler
Support step wise decay learning rate scheduler, where the iterations
and learning rate are taken as parameter.
The corresponding unittests are also added.
See Also #1776
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
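A step-wise decay scheduler of the kind described takes iteration milestones and a learning rate per interval. A hypothetical sketch of the lookup (names and the milestones/rates layout are assumptions, not nntrainer's actual scheduler API); it assumes sorted milestones and rates.size() == milestones.size() + 1:

```cpp
#include <cstddef>
#include <vector>

// Return the learning rate active at iteration `iter`: rates[i] applies
// while iter < milestones[i]; past the last milestone, rates.back() applies.
float step_lr(const std::vector<std::size_t> &milestones,
              const std::vector<float> &rates, std::size_t iter) {
  for (std::size_t i = 0; i < milestones.size(); ++i)
    if (iter < milestones[i])
      return rates[i];
  return rates.back(); // final interval after all milestones
}
```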
Parichay Kapoor [Mon, 13 Dec 2021 02:35:13 +0000 (11:35 +0900)]
[test] Bug fix for unittest
ccapi and capi use the same filenames for their ini files, which can be
buggy if the unittests are run in parallel or unittests differ.
Make ini unittest names independent.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Fri, 10 Dec 2021 17:47:07 +0000 (02:47 +0900)]
[modelloader] Support learning rate scheduler
Support learning rate scheduler with model loader along with a unittest.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>