SeoHyungjun [Mon, 31 Oct 2022 04:35:59 +0000 (13:35 +0900)]
[typo] Fix typo
Fix the typo in the README.md file
Signed-off-by: SeoHyungjun <hyungjun.seo@samsung.com>
DonghakPark [Thu, 27 Oct 2022 11:58:41 +0000 (20:58 +0900)]
[typo] Fix typo error
Fix typo errors found by a spell checker in:
- Application/
- api/ccapi/
- nntrainer/
Signed-off-by: DonghakPark <donghak.park@samsung.com>
Seungbaek Hong [Mon, 31 Oct 2022 07:12:35 +0000 (16:12 +0900)]
Fix typo error
Fix the typo error in the getting-started.md file.
Signed-off-by: Seungbaek Hong <sb92.hong@samsung.net>
DonghakPark [Thu, 20 Oct 2022 10:53:07 +0000 (19:53 +0900)]
[fix] fix coverity_tizen issue
- Add try/catch statements to catch exceptions
- Add NNTR_THROW_IF to check return values
Signed-off-by: DonghakPark <donghak.park@samsung.com>
hyeonseok lee [Tue, 27 Sep 2022 03:02:55 +0000 (12:02 +0900)]
[unittest] refine unittest
- Use epsilon to compare float variables
- Use unused enum LayerCreateSetPropertyOptions
- fix typos
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
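A comparison helper for the epsilon-based check mentioned above can be sketched as follows; the helper name and the default tolerance are illustrative, not the actual unittest utilities:

```cpp
#include <cmath>

// Compare two floats within an absolute tolerance instead of with ==,
// which fails for values that differ only by rounding error.
// The helper name and the default tolerance are illustrative.
inline bool almostEqual(float a, float b, float eps = 1e-5f) {
  return std::fabs(a - b) < eps;
}
```

With this, results that differ from the expected value only in the last few bits still compare equal, e.g. `almostEqual(0.1f * 3.0f, 0.3f)`.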
hyeonseok lee [Mon, 26 Sep 2022 10:47:47 +0000 (19:47 +0900)]
[unittest] remove unittest_nntrainer_layers
- Remove unittest_nntrainer_layers.cpp because this test has been disabled.
- Rename unittest_layers_v2 to unittest
Close #2002
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
hyeonseok lee [Tue, 27 Sep 2022 03:00:20 +0000 (12:00 +0900)]
[unittest] migrate unittest from unittest_nntrainer_layers to unittest_layers
- Migrate unittests for negative properties from unittest_nntrainer_layer to
  unittest_layers
- Added LayerPropertySemantics, which inherits from LayerSemantics,
  to test negative properties only
- Added LayerPropertySemantics in the Simpleshot and pow layer unittests
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
DonghakPark [Fri, 21 Oct 2022 00:23:06 +0000 (09:23 +0900)]
[typo] Fix typo error
Fix the typo errors in the directories below
- Applications/
- nntrainer/optimizers/
Signed-off-by: DonghakPark <donghak.park@samsung.com>
Jiho Chu [Mon, 17 Oct 2022 02:18:17 +0000 (11:18 +0900)]
[Tensor] Add constructor for user swap path
The swap file path can be changed via a model property.
Signed-off-by: Jiho Chu <jiho.chu@samsung.com>
Jiho Chu [Fri, 14 Oct 2022 13:19:37 +0000 (22:19 +0900)]
[Model] Add memory swap path property
The MemorySwapPath property is added to define the path for swap files.
If it is not defined, files are created in the current directory.
Signed-off-by: Jiho Chu <jiho.chu@samsung.com>
seungbaek [Tue, 18 Oct 2022 02:46:07 +0000 (11:46 +0900)]
[README] Fix typo error in README
Fixed the typo error in the title of the paper below.
- NNTrainer: Light-Weight On-Device Training Framework, arXiv, 2022
Signed-off-by: seungbaek <sb92.hong@samsung.com>
hyeonseok lee [Fri, 14 Oct 2022 12:07:52 +0000 (21:07 +0900)]
[memory] extract MemoryData class from memory_pool to another file
- Extract the MemoryData class from memory_pool.h into a new memory_data.h file
- Add doxygen
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
hyeonseok lee [Fri, 14 Oct 2022 09:28:20 +0000 (18:28 +0900)]
[neuralnet] check stop_cb by every layer
- Check stop_cb before doing forwarding/calcGradient/calcDerivative in every layer
to stop training more quickly.
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
Jiho Chu [Thu, 25 Aug 2022 08:02:01 +0000 (17:02 +0900)]
[TEST] fix for cache pool modification
Modified the tests for the cache pool implementation.
Signed-off-by: Jiho Chu <jiho.chu@samsung.com>
Jiho Chu [Tue, 19 Jul 2022 11:33:49 +0000 (20:33 +0900)]
[Property] Add memory_swap property
The 'memory_swap' property is added, and it can be used to select
which pool to use: the memory pool or the cache pool.
Signed-off-by: Jiho Chu <jiho.chu@samsung.com>
Jiho Chu [Tue, 19 Jul 2022 11:30:54 +0000 (20:30 +0900)]
[Memory] Implement memory pool for swap device
The memory pool is modified to support the swap device.
A new memory data class is introduced to store allocation information.
Signed-off-by: Jiho Chu <jiho.chu@samsung.com>
Jiho Chu [Fri, 1 Jul 2022 13:22:43 +0000 (22:22 +0900)]
[Tensor] Add swap device
A swap device class is introduced.
It operates with cache memory and keeps data permanently on the device.
The storage size is fixed when the device opens; sparse regions are
filled with garbage data. When the device closes, the storage
file is removed.
Signed-off-by: Jiho Chu <jiho.chu@samsung.com>
Jiho Chu [Fri, 1 Jul 2022 12:27:09 +0000 (21:27 +0900)]
[Memory] Add cache pool
Initial implementation of the cache pool.
It inherits from the memory pool to utilize optimized memory information.
Signed-off-by: Jiho Chu <jiho.chu@samsung.com>
MyungJoo Ham [Tue, 4 Oct 2022 12:36:32 +0000 (21:36 +0900)]
debian: fix occasional debuild error
Sometimes debuild fails with dh_clean errors:
```
dh_clean: error: find . \( \( \
\( -path .\*/.git -o -path .\*/.svn -o -path .\*/.bzr -o -path .\*/.hg -o -path .\*/CVS -o -path .\*/.pc -o -path .\*/_darcs \) -prune -o -type f -a \
\( -name '#*#' -o -name '.*~' -o -name '*~' -o -name DEADJOE \
-o -name '*.orig' -o -name '*.rej' -o -name '*.bak' \
-o -name '.*.orig' -o -name .*.rej -o -name '.SUMS' \
-o -name TAGS -o \( -path '*/.deps/*' -a -name '*.P' \) \
\) -exec rm -f {} + \) -o \
\( -type d -a -name autom4te.cache -prune -exec rm -rf {} + \) \) returned exit code 1
make: *** [debian/rules:28: clean] Error 25
```
Override dh_clean to avoid this.
Signed-off-by: MyungJoo Ham <myungjoo.ham@samsung.com>
hyeonseok lee [Tue, 4 Oct 2022 05:54:42 +0000 (14:54 +0900)]
[trivial] fix ahub issues
- Remove noexcept from the copy function because it throws when data is non-contiguous
- Initialize char array
- Delete the array, not an element ("[]" was missing)
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
hyeonseok lee [Fri, 30 Sep 2022 02:33:20 +0000 (11:33 +0900)]
[padding1d] change padding1d key to padding
- Change Padding1D key to padding to use same key with Padding2D.
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
hyeonseok lee [Thu, 29 Sep 2022 09:19:26 +0000 (18:19 +0900)]
[neuralnet] add log about training
- Added logs when training starts/finishes
- Added a log when the current epoch is queried
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
Jiho Chu [Thu, 25 Aug 2022 04:36:08 +0000 (13:36 +0900)]
[Profile] Add memory statistics & annotation
It adds two features:
- Provide the PROFILE_MEM_ANNOTATE macro
- Print the average and maximum memory usage.
Signed-off-by: Jiho Chu <jiho.chu@samsung.com>
hyeonseok lee [Thu, 8 Sep 2022 08:53:59 +0000 (17:53 +0900)]
[lstm] make lstm_core as class
- Make lstm_core a class so that lstm/lstm_cell/zoneout_lstmcell can inherit from it.
close #1997
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
jijoong.moon [Mon, 26 Sep 2022 05:29:12 +0000 (14:29 +0900)]
[Release] NNTrainer 0.4.0 Release
NNTrainer v0.4.0 is released.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
hyeonseok lee [Mon, 26 Sep 2022 04:57:21 +0000 (13:57 +0900)]
[bugfix] initialize local variable
- Initialize local variable
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
hyeonseok lee [Wed, 21 Sep 2022 03:14:28 +0000 (12:14 +0900)]
[unittest] added negative testcase
- Added negative testcase
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
hyeonseok lee [Wed, 21 Sep 2022 02:40:14 +0000 (11:40 +0900)]
[unittest] rename negative testcase name to _n place at the end
- Since the indicator _p/_n was in the middle of the testcase name, all test cases
  were regarded as positive testcases (the default). So place the indicator at the end.
- Added '_' right before 'n' to indicate a negative testcase.
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
jijoong.moon [Fri, 16 Sep 2022 06:22:21 +0000 (15:22 +0900)]
[ README ] add new features in README
Add new features:
. Positional encoding layer
. Multi-head attention layer
. Layer normalization layer
. KLD loss
. Learning rate schedule
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
hyeonseok lee [Mon, 5 Sep 2022 05:32:57 +0000 (14:32 +0900)]
[layer] convert throw to nntr_throw in layer finalize
- Instead of using throw directly, use NNTR_THROW_IF in layer finalize
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
hyeonseok lee [Mon, 22 Aug 2022 08:00:41 +0000 (17:00 +0900)]
[unittest] generate transformer unittest
- Generate transformer encoder layer unittest
- Generate transformer decoder layer unittest
- Generate transformer unittest
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
hyeonseok lee [Tue, 2 Aug 2022 12:03:30 +0000 (21:03 +0900)]
[unittest] layer normalization
- generate layer normalization layer unittest
Self evaluation:
Build test: [X]Passed [ ]Failed [ ]Skipped
Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
hyeonseok lee [Wed, 27 Jul 2022 10:02:51 +0000 (19:02 +0900)]
[layer normalization] implement layer normalization
- implement layer normalization layer based on batch normalization
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
hyeonseok lee [Thu, 25 Aug 2022 14:07:27 +0000 (23:07 +0900)]
[unittest] generate positional encoding unittest
- Generate positional encoding layer/model unittest.
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
hyeonseok lee [Thu, 25 Aug 2022 14:03:54 +0000 (23:03 +0900)]
[positional encoding] implement positional encoding layer
- Positional encoding needs to be calculated only once,
so give it max lifespan
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
hyeonseok lee [Fri, 15 Jul 2022 05:51:02 +0000 (14:51 +0900)]
[multi head attention] added unittest
- Added layer/model unittest for multi head attention
- Change the == operator overloading to pass if both tensors have NaN values
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
hyeonseok lee [Fri, 15 Jul 2022 05:46:20 +0000 (14:46 +0900)]
[multi_head_attention] implement calcDerivative, calcGradient
- Implement multi head attention calcDerivative, calcGradient
- Needs to support bool type attention mask
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
hyeonseok lee [Wed, 13 Jul 2022 02:25:08 +0000 (11:25 +0900)]
[multi head attention] implement calcCommonDerivative
- implement multi head attention calcCommonDerivative
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
hyeonseok lee [Wed, 13 Jul 2022 02:22:43 +0000 (11:22 +0900)]
[multi head attention] implement forwarding
- implement multi head attention forwarding
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
hyeonseok lee [Tue, 19 Jul 2022 03:22:05 +0000 (12:22 +0900)]
[multi head attention] implement finalize
- Implement multi head attention finalize
- Remove finalizeCommon
- Remove the ProvideAttentionMask property and the inout_idx member variable
because NNTrainer assumes that multi head attention has at least 3 inputs
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
hyeonseok lee [Thu, 25 Aug 2022 14:41:52 +0000 (23:41 +0900)]
[api] added newly implemented layer enum
- Added newly implemented layer enum to nntrainer-api-common.h
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
DonghakPark [Mon, 29 Aug 2022 06:22:25 +0000 (15:22 +0900)]
[Trivial] Fix Typo
Fix Typo in nntrainer/compiler/*
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: DonghakPark <donghak.park@samsung.com>
jijoong.moon [Thu, 4 Aug 2022 04:22:57 +0000 (13:22 +0900)]
[ LSTM ] Optimize LSTM Gradient calculation
Gradient computation of LSTM takes over 60% of the total computation. This patch
includes an optimization using the tensor dimension which only has
width.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Wed, 3 Aug 2022 06:46:51 +0000 (15:46 +0900)]
[Application] Add profile for VGG
This patch adds profiling to the VGG application
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
hyeonseok lee [Wed, 13 Jul 2022 02:17:29 +0000 (11:17 +0900)]
[multi head attention] Added multi head attention scaffold
- Added calcCommonDerivative
- Added finalizeCommon
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
jijoong.moon [Wed, 3 Aug 2022 06:47:56 +0000 (15:47 +0900)]
[ Layer ] parallelization along batch for pooling forward computation
This patch includes parallelization along batch in the forward computation
of the Pooling 2D layer
Resolves:
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Wed, 3 Aug 2022 06:44:36 +0000 (15:44 +0900)]
[ LAYERS ] LSTM : parallelization along batch direction (calGradient)
This patch includes parallelization along batch direction for
calculation of LSTM Gradient.
Also, the thread id is added to the thread callback parameter for internal use.
Resolves:
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
hyunil park [Fri, 19 Aug 2022 09:51:37 +0000 (18:51 +0900)]
[Property] Remove if-else statement in DropOutRate::isValid()
- Remove if-else statement
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: hyunil park <hyunil46.park@samsung.com>
jijoong.moon [Fri, 19 Aug 2022 01:50:06 +0000 (10:50 +0900)]
[ UTIL ] add frequency data in mem_usage.sh
Sometimes a CPU frequency check is required.
This patch adds CPU frequency logging to mem_usage.sh
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
hyeonseok lee [Thu, 18 Aug 2022 12:51:12 +0000 (21:51 +0900)]
[neuralnet] adjust epoch_idx when stop_cb is called
- Assume that stop_cb is called immediately, so reduce the current epoch_idx by 1
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
hyeonseok lee [Wed, 3 Aug 2022 04:31:32 +0000 (13:31 +0900)]
[jni] revise android build
- remove jni/Android.mk file. Will use jni/Android.mk.in
- Revise docs to use tools/package_android.sh when building for android
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
MyungJoo Ham [Thu, 11 Aug 2022 05:52:04 +0000 (14:52 +0900)]
Fix inappropriate SPDX license tag
SPDX-License_Identifier --> SPDX-License-Identifier
Added one more # to the first line for doxygen.
Signed-off-by: MyungJoo Ham <myungjoo.ham@samsung.com>
DonghakPark [Fri, 5 Aug 2022 04:23:52 +0000 (13:23 +0900)]
[trivial] Reorganize README.md files
This is a proposal PR for issue #1974
- Remove Example contents in nntrainer/README.md
- Add Example contents in nntrainer/Application/README.md
- Close #1974
Signed-off-by: DonghakPark <donghak.park@samsung.com>
jijoong.moon [Mon, 18 Jul 2022 04:43:16 +0000 (13:43 +0900)]
[ Activation ] improve tanh computation
This patch improves the computation of tanh.
Rather than calling the tanh function, it is faster to use
sigmoid:
tanh(x) = 2.0*sigmoid(2.0*x) - 1.0;
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
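The identity the tanh patch relies on can be checked numerically with a standalone sketch; this is not the nntrainer implementation, and the function names are illustrative:

```cpp
#include <cmath>

// tanh(x) = 2.0 * sigmoid(2.0 * x) - 1.0: routing tanh through the
// sigmoid needs only a single exp() evaluation per call.
// Names here are illustrative, not nntrainer's.
inline float sigmoid(float x) { return 1.0f / (1.0f + std::exp(-x)); }

inline float fast_tanh(float x) {
  return 2.0f * sigmoid(2.0f * x) - 1.0f;
}
```

Across typical inputs the two forms agree to within a few float ulps, e.g. `fast_tanh(0.5f)` versus `std::tanh(0.5f)`.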
jijoong.moon [Mon, 18 Jul 2022 04:40:43 +0000 (13:40 +0900)]
[ Layer ] Conv2d Gradient Computation with Multi-Threads
This patch includes multi-threading for gradient computation of conv2d
layer.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
DonghakPark [Mon, 25 Jul 2022 04:09:06 +0000 (13:09 +0900)]
[Application] Add AlexNet(Fused) Application, Merge cifar_dataloader into utils
- Add AlexNet(Fused) Application
- Update meson.build (add Alexnet subdir)
- Add main.cpp (AlexNet), alex.ini, README.md
- Merge cifar_dataloader into utils/datagen/cifar
- Close #1969
Signed-off-by: Donghak Park <donghak.park@samsung.com>
hyeonseok lee [Tue, 5 Jul 2022 11:36:36 +0000 (20:36 +0900)]
[conv] support causal padding in conv1d
- Replace padding2d with padding1d in conv1d
- Enable causal property in conv1d
- Added unittest for causal padding in conv1d
Close #1947
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
DonghakPark [Thu, 14 Jul 2022 06:45:58 +0000 (15:45 +0900)]
[Application] Update README.md File
- Update README res/mnist.ini part (Split Optimizer, LearningRateScheduler)
- Update align code comment
Signed-off-by: DongHak Park <donghak.park@samsung.com>
jijoong.moon [Thu, 7 Jul 2022 13:25:45 +0000 (22:25 +0900)]
[ Layers ] parallelize forwarding of conv2d
This patch includes the batch direction parallelization of forwarding
in Conv2D layer.
Resolves:
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
hyeonseok lee [Mon, 18 Jul 2022 02:43:09 +0000 (11:43 +0900)]
[trivial] fix ahub issue
- Added try catch statement
- Delete structually dead code
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
jijoong.moon [Thu, 7 Jul 2022 09:32:37 +0000 (18:32 +0900)]
[ Utils ] create NNTrThread Features
This patch includes the NNTrThread features. This will be used for the
multi-thread features of nntrainer, such as a thread pool, or for-loop
multi-threading along the batch direction.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Thu, 7 Jul 2022 09:24:54 +0000 (18:24 +0900)]
[ Layers ] Add Parallelization along batch direction
This patch demonstrates the batch direction parallelization with conv2d
calcDerivatives.
. Add the meson option with 'nntr-num-threads' key and int value.
. Add an extra compile option, NNTR_NUM_THREADS (default value is 1)
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Hyunil [Mon, 11 Jul 2022 06:32:22 +0000 (15:32 +0900)]
[Application] Rename main_sample.cpp to main.cpp in KNN
- Rename the filename main_sample.cpp to main.cpp
- Modify meson.build and Android.mk
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Hyunil <hyunil46.park@samsung.com>
Hyunil [Mon, 11 Jul 2022 06:21:20 +0000 (15:21 +0900)]
[Application] Remove unused file of KNN
- Remove main.cpp
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Hyunil <hyunil46.park@samsung.com>
hyeonseok lee [Tue, 28 Jun 2022 05:21:44 +0000 (14:21 +0900)]
[conv] support dilation property
- Support dilation property in conv1d/conv2d layer
- Added unittest with dilation
Close #1922
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
jijoong.moon [Fri, 1 Jul 2022 14:16:31 +0000 (23:16 +0900)]
[ Trivial ] Fix the "Deprecated-declarations" error
This patch replaces 'INSTANTIATE_TEST_CASE_P', which will be deprecated,
with 'INSTANTIATE_TEST_SUITE_P'.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Jiho Chu [Thu, 30 Jun 2022 02:25:21 +0000 (11:25 +0900)]
[BUILD] fix supported gmock version
The google mock API changed at 1.10.0,
and MOCK_METHOD is not supported below version 1.10.0.
The 'profiler' test is only enabled for a proper gmock, since
the MOCK_METHOD macro needs gmock version >= 1.10.
Signed-off-by: Jiho Chu <jiho.chu@samsung.com>
jijoong.moon [Mon, 27 Jun 2022 09:59:07 +0000 (18:59 +0900)]
[ Application ] Fix VGG using CCAPI
This patch includes fixes for the VGG application.
- Remove the training data generation with batch size.
  : Recently we updated the data generation callback to fill one data
    set at a time
- Remove the include of nntrainer internals: neuralnet.h
  : Update to use CCAPI
- Reuse the data generation with cifar_dataloader.h in Resnet
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Hyunil [Wed, 22 Jun 2022 09:01:39 +0000 (18:01 +0900)]
[TEST] Add unittest related to setProperty for dataset
Add unittest for ml_train_dataset_set_property_for_mode_with_single_param()
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Hyunil <hyunil46.park@samsung.com>
Hyunil [Wed, 22 Jun 2022 08:51:55 +0000 (17:51 +0900)]
[CAPI] Add set property with single param for dataset
Add ml_train_dataset_set_property_for_mode_with_single_param().
ml_train_dataset_set_property_for_mode() has a va_list and this is a problem with DllImport in C#.
va_list must be passed dynamically for each architecture, which is not guaranteed to
work on some architectures. This api receives the param as a single string from C# DllImport.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Hyunil <hyunil46.park@samsung.com>
jijoong.moon [Wed, 22 Jun 2022 06:15:03 +0000 (15:15 +0900)]
[ BUILD ] fix binary size of libnntrainer with android build script
This patch fixes the wrong size of the nntrainer library, libnntrainer.so,
when using the android build script.
- change the ndk-build options with NDK_LIBS_OUT
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
hyeonseok lee [Tue, 21 Jun 2022 09:41:56 +0000 (18:41 +0900)]
[neuralnet] check tflite interpreter is enabled when export
- Added ifdef statement to check tflite interpreter is enabled
- Added override specifier in get/setWeights
- Restore precision ostream format
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
hyeonseok lee [Mon, 20 Jun 2022 07:35:26 +0000 (16:35 +0900)]
[profiler] restore precision
- After printing profile information, restore the precision
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
hyeonseok lee [Fri, 17 Jun 2022 06:45:53 +0000 (15:45 +0900)]
[fix] fix svace issue
- Delete duplicated unittest
- Added try catch statement to catch exception
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
hyeonseok lee [Thu, 16 Jun 2022 08:48:23 +0000 (17:48 +0900)]
[Application] add step learning rate scheduler example
- Added how to use step learning rate scheduler in mnist application
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
jijoong.moon [Wed, 15 Jun 2022 12:53:36 +0000 (21:53 +0900)]
[ TEST ] add golden data for grucell_fc
This patch includes the golden data of grucell_fc unit test.
Resolves:
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Wed, 15 Jun 2022 11:37:08 +0000 (20:37 +0900)]
[ Recurrent ] property for dynamic time sequence
. This patch provides a property for dynamic time sequence in the recurrent
realizer. The semantics of this is "dynamic_time_seq = true/false"
. Add a grucell + fully connected unit test case for reference of dynamic
time sequence
Related : #1933
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Hyunil [Thu, 16 Jun 2022 00:35:29 +0000 (09:35 +0900)]
[TEST] Add unittest related to setProperty for optimizer
- Add unittest for ml_train_optimizer_set_property_with_single_param()
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Hyunil <hyunil46.park@samsung.com>
Hyunil [Thu, 16 Jun 2022 00:18:01 +0000 (09:18 +0900)]
[CAPI] Add set property with single param for optimizer
Add ml_train_optimizer_set_property_with_single_param().
ml_train_optimizer_set_property() has a va_list and this is a problem with DllImport in C#.
va_list must be passed dynamically for each architecture, which is not guaranteed to
work on some architectures. This api receives the param as a single string from C# DllImport.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Hyunil <hyunil46.park@samsung.com>
DonghakPark [Wed, 15 Jun 2022 07:29:57 +0000 (16:29 +0900)]
[trivial] fix typo
- Fix typo
- Fix ist --> <sup>i</sup> for readability
Signed-off-by: Donghak Park <donghak.park@samsung.com>
Hyunil [Tue, 14 Jun 2022 06:10:33 +0000 (15:10 +0900)]
[CAPI] Add unittest related to setProperty for layer
- Add unittest for ml_train_layer_set_property_with_single_param()
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Hyunil <hyunil46.park@samsung.com>
Hyunil [Mon, 13 Jun 2022 09:17:27 +0000 (18:17 +0900)]
[CAPI] Add ml_train_layer_set_property_with_single_param() internally for C# DllImport
ml_train_layer_set_property() has a va_list and this is a problem with DllImport in C#.
va_list must be passed dynamically for each architecture, which is not guaranteed to
work on some architectures. This api receives the param as a single string from C# DllImport.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Hyunil <hyunil46.park@samsung.com>
Hyunil [Mon, 13 Jun 2022 02:11:18 +0000 (11:11 +0900)]
[CAPI] Modify model_compile_with_single_param() and ml_train_model_run_with_single_param()
Modified the internal APIs created for the C# va_list issue, since capi can receive a
single string in the form "key=value | key=value" and all objects have loadProperties
in setProperty, which can split it with '|'.
So the original code was reverted and new code was added to call capi directly.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Hyunil <hyunil46.park@samsung.com>
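The single-string form described above ("key=value | key=value") can be unpacked with a simple split on '|'; this is a hypothetical sketch of the idea, not the actual loadProperties implementation:

```cpp
#include <sstream>
#include <string>
#include <vector>

// Hypothetical sketch: split "key=value | key=value" on '|' and trim
// surrounding whitespace, the way a single-string property API can
// unpack parameters that C# DllImport cannot pass through a va_list.
std::vector<std::string> parseProperties(const std::string &input) {
  std::vector<std::string> props;
  std::stringstream ss(input);
  std::string token;
  while (std::getline(ss, token, '|')) {
    size_t begin = token.find_first_not_of(" \t");
    size_t end = token.find_last_not_of(" \t");
    if (begin != std::string::npos)
      props.push_back(token.substr(begin, end - begin + 1));
  }
  return props;
}
```

For example, `parseProperties("epochs=2 | batch_size=16")` yields the two entries `"epochs=2"` and `"batch_size=16"`.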
Sangjung Woo [Wed, 15 Jun 2022 01:53:49 +0000 (10:53 +0900)]
[Spec] Add flatbuffers-devel BuildRequires as default
The compiler module always uses the flatc command, but the 'flatbuffers-devel'
dependency was only added in case the Tizen version is lower than 6.5.
For this reason, a build break occurs when building for
Tizen 7.0. This patch fixes this bug.
Signed-off-by: Sangjung Woo <sangjung.woo@samsung.com>
jijoong.moon [Fri, 10 Jun 2022 10:50:30 +0000 (19:50 +0900)]
[ Tensor ] remove rank 2 limitation for dot op
This patch removes the limitation of rank 2 for the dot op.
It expects to compute a 4D tensor as 2D with [BxCxH, W]
dimensions.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
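The [BxCxH, W] view works because the underlying buffer is contiguous; this index-math sketch (helper names are illustrative, not nntrainer's) shows that the 2D row/column address matches the 4D index:

```cpp
#include <cstddef>

// A 4D tensor [B, C, H, W] viewed as a 2D matrix [B*C*H, W]: with a
// contiguous buffer, row r of the matrix starts at offset r * W, so a
// rank-2 dot op can operate on the 4D data unchanged.
// Helper names are illustrative.
inline size_t flatIndex(size_t b, size_t c, size_t h, size_t w,
                        size_t C, size_t H, size_t W) {
  return ((b * C + c) * H + h) * W + w; // canonical row-major offset
}

inline size_t rowOf(size_t b, size_t c, size_t h, size_t C, size_t H) {
  return (b * C + c) * H + h; // row in the [B*C*H, W] view
}
```

For any (b, c, h, w), `flatIndex(b, c, h, w, C, H, W)` equals `rowOf(b, c, h, C, H) * W + w`, i.e. the 2D view addresses exactly the same elements.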
hyeonseok lee [Fri, 10 Jun 2022 06:59:00 +0000 (15:59 +0900)]
[unittest] added zoneout mask unittest
- Test the zoneout mask is generated according to the zoneout rate
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
hyeonseok lee [Fri, 6 May 2022 02:21:04 +0000 (11:21 +0900)]
[layer] fix typo
- Fix typo
- Delete duplicated code
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
jijoong.moon [Mon, 13 Jun 2022 00:45:35 +0000 (09:45 +0900)]
[ Doc ] Update Readme.md
Add a new doc which explains NNTrainer internals
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Hyunil [Fri, 3 Jun 2022 06:18:11 +0000 (15:18 +0900)]
[docs] Update configuration-ini.md file
- Remove learning_rate from the optimizer section
- Add learning rate scheduler section
- Remove dataset section
- Add train set, validation set and test set section
- Add many types to layer section
- Add a table of type, key, value, default value and description for each layer
- Update configuration file example
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Hyunil <hyunil46.park@samsung.com>
jijoong.moon [Thu, 2 Jun 2022 10:57:51 +0000 (19:57 +0900)]
[ Packaging ] Packaging for Tizen 6.0
This patch includes fixes to support the Tizen 6.0 build
. Fix .spec & meson.build
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
hyeonseok lee [Tue, 7 Jun 2022 07:06:00 +0000 (16:06 +0900)]
[bug_fix] bug fix zoneout lstmcell layer
- This patch enables zoneout even when the zoneout rate is smaller than epsilon
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
Hyunil [Tue, 24 May 2022 02:02:45 +0000 (11:02 +0900)]
[CAPI] Add unittest related to compile and train
- Add unittest for ml_train_model_compile_with_single_param()
- Add unittest for ml_train_model_run_with_single_param()
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Hyunil <hyunil46.park@samsung.com>
Hyunil [Mon, 23 May 2022 03:32:45 +0000 (12:32 +0900)]
[Property] Modify error log
- Add double quotation marks to the log to indicate what is wrong with "key=value, key=value"
- Add example key and value
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Hyunil <hyunil46.park@samsung.com>
Hyunil [Tue, 24 May 2022 05:39:02 +0000 (14:39 +0900)]
[CAPI] Add ml_train_model_run_with_single_param() internally for C# DllImport
ml_train_model_run() has a va_list and this is a problem with DllImport in C#.
va_list must be passed dynamically for each architecture, which is not guaranteed to
work on some architectures. This api receives the model compile param as a single string
from C# DllImport.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Hyunil <hyunil46.park@samsung.com>
Hyunil [Tue, 17 May 2022 03:07:44 +0000 (12:07 +0900)]
[CAPI] Add ml_train_model_compile_with_single_param() internally for C# DllImport
ml_train_model_compile() has a va_list and this is a problem with DllImport in C#.
va_list must be passed dynamically for each architecture, which is not guaranteed to
work on some architectures. This api receives the model compile param as a single string
from C# DllImport.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Hyunil <hyunil46.park@samsung.com>
Jiho Chu [Mon, 30 May 2022 04:56:38 +0000 (13:56 +0900)]
[Utils] Add Memory profile feature to Profiler
This patch implements the memory profiling feature.
The Profiler is refactored to handle memory statistics information.
The unnecessary dependency from ProfileListener to the Profiler
class is removed, and the inner information is redesigned to handle
both time and memory profiling.
For the memory profile, the information below is managed:
event: ALLOC | DEALLOC
current size: total allocated memory size
info: user friendly tag for the analysis
duration: time interval between alloc and dealloc
Signed-off-by: Jiho Chu <jiho.chu@samsung.com>
Jiho Chu [Mon, 30 May 2022 04:54:42 +0000 (13:54 +0900)]
[Test] Modify profiler test to add Memory event
This patch implements the profiler test.
The whole code is refactored to test both 'time' and 'memory' profiling.
It uses fixture tests, and gmock is used to check callback functions.
Signed-off-by: Jiho Chu <jiho.chu@samsung.com>
JRazek [Fri, 27 May 2022 17:28:39 +0000 (19:28 +0200)]
[ccapi/nntrainer] Add getters for compiled, initialized and loadedFromConfig
This commit adds bool getters for the states of compiled, initialized and loadedFromConfig in ml::train::Model class.
Signed-off-by: JRazek <jakub.razek2@gmail.com>
hyeonseok lee [Fri, 20 May 2022 03:56:06 +0000 (12:56 +0900)]
[ccapi] include common header in meson
- The newly created common header file was missing from the meson build
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>