Seungbaek Hong [Tue, 30 May 2023 12:23:17 +0000 (21:23 +0900)]
[Application] darknet53 pytorch implementation for yolo v3
Added a PyTorch Darknet-53 model for YOLO v3.
It is used in YOLO v3 as the backbone model.
I'll add the nntrainer implementation, too.
Signed-off-by: Seungbaek Hong <sb92.hong@samsung.com>
Adwaith Anand [Fri, 4 Aug 2023 14:30:47 +0000 (20:00 +0530)]
Removed unwanted ternary operators
Ternary operators that were used in assignments of boolean values
have been removed since they were redundant.
Signed-off-by: Adwaith Anand <adwaith.a@samsung.com>
Adwaith Anand [Wed, 12 Jul 2023 12:49:06 +0000 (18:19 +0530)]
[FullyConnected] Added NHWC support for FC_Layer inference part.
It also contains unit tests for evaluation.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Adwaith Anand <adwaith.a@samsung.com>
Donghak PARK [Wed, 2 Aug 2023 10:42:15 +0000 (19:42 +0900)]
[Doc] Fix tizen reference link
The Tizen reference link has changed,
so getting-started.md was updated with the latest link.
previous : https://source.tizen.org/documentation/reference/git-build-system/usage/gbs-build
updated : https://docs.tizen.org/platform/developing/building
Signed-off-by: Donghak PARK <donghak.park@samsung.com>
Jiho Chu [Thu, 3 Aug 2023 10:01:22 +0000 (19:01 +0900)]
[TEST] Add timeout option
Adds a timeout option to adjust the meson test timeout.
Signed-off-by: Jiho Chu <jiho.chu@samsung.com>
Jiho Chu [Tue, 1 Aug 2023 10:34:02 +0000 (19:34 +0900)]
[FIX] modified for checking weight grad
This patch checks whether the requested memory is a weight gradient;
this information will be used for planning.
Signed-off-by: Jiho Chu <jiho.chu@samsung.com>
Donghyeon Jeong [Wed, 2 Aug 2023 05:15:51 +0000 (14:15 +0900)]
[Compiler] Preserve connection order in multi-out realizer
Create multi-out nodes in the given connection order when building the frequency map.
Signed-off-by: Donghyeon Jeong <dhyeon.jeong@samsung.com>
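A minimal sketch of the idea, with illustrative names rather than the actual realizer code: count how often each connection appears while remembering first-seen order, so multi-out nodes can be created in that order.
```
// Sketch only: an order-preserving frequency map over connection names.
#include <cstddef>
#include <map>
#include <string>
#include <utility>
#include <vector>

std::vector<std::pair<std::string, int>>
buildFrequencyMap(const std::vector<std::string> &connections) {
  std::vector<std::pair<std::string, int>> freq; // keeps insertion order
  std::map<std::string, std::size_t> index;      // name -> slot in freq
  for (const auto &name : connections) {
    auto it = index.find(name);
    if (it == index.end()) {
      index.emplace(name, freq.size());
      freq.emplace_back(name, 1);
    } else {
      ++freq[it->second].second;
    }
  }
  return freq; // multi-out nodes are then created following this order
}
```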
hyeonseok lee [Thu, 27 Jul 2023 12:57:40 +0000 (21:57 +0900)]
[bugfix] added warning flag to compile with gcc 13
- Added the -Wno-maybe-uninitialized flag
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
DongHak Park [Fri, 14 Apr 2023 09:00:01 +0000 (18:00 +0900)]
[TFLite Export] Update node_exporter
Added epsilon props to additional_props for fusing.
- For fusing, we need epsilon for batch norm.
Added padding and stride props to props_vector.
- For conv fusing we need to make a new BuiltinOption, and building a new BuiltinOption with a FUSED activation requires padding and stride.
Signed-off-by: DongHak Park <donghak.park@samsung.com>
DongHak Park [Fri, 14 Apr 2023 08:35:07 +0000 (17:35 +0900)]
[TFLite Export] Add Realized Path for Fused Op
Made a realized path for fused ops:
1. Check trainable
- Check whether a node is trainable or not for fusing.
2. Conv + ReLU fusing
3. Batch normalization fusing
Signed-off-by: DongHak Park <donghak.park@samsung.com>
DongHak Park [Fri, 14 Apr 2023 08:27:46 +0000 (17:27 +0900)]
[TFLite Export] Add variables and functions to TfOpNodes for fused-op export
To export the TFLite format with fused ops, added some variables and functions:
1. Added getter, setter, and replace for weights.
- For fused ops we need to adjust the weights after the OpNode is made.
2. Added the isToBeRemove variable.
- After the OpNode is made, check the condition and mark it as to-be-removed.
3. Added additional_props.
- For the batch normalization fused op we need additional props from nntrainer.
- Made a vector<float> variable to save the additional data.
Signed-off-by: DongHak Park <donghak.park@samsung.com>
Seungbaek Hong [Thu, 1 Jun 2023 08:27:07 +0000 (17:27 +0900)]
[LOG] print output dim instead of input dim in model summary
When we print the model architecture using the summarize method,
nntrainer prints the input dimension of each layer.
However, TensorFlow and PyTorch print the output dimension
of each layer in their summaries, so it is inconvenient
to compare layers against TF and Torch models.
Thus, I suggest printing the output dimension of each layer
instead of the input dimension in the model summary.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Seungbaek Hong <sb92.hong@samsung.com>
hyeonseok lee [Fri, 21 Jul 2023 12:40:56 +0000 (21:40 +0900)]
remove unused variable
- Remove unused variables
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
hyeonseok lee [Fri, 21 Jul 2023 11:12:38 +0000 (20:12 +0900)]
remove warning flags related to compiling with gcc 13
- Remove warning flags which were added to help compile with gcc 13.
- Remove a multi-out test case because it cannot guarantee the multi-out layer order.
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
Seungbaek Hong [Wed, 19 Jul 2023 02:21:02 +0000 (11:21 +0900)]
[ahub] fix ahub issues
Fixed some svace and coverity issues.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Seungbaek Hong <sb92.hong@samsung.com>
jijoong.moon [Fri, 21 Jul 2023 02:04:38 +0000 (11:04 +0900)]
[Toolchain] Enable gcc-13 support
This patch includes gcc-13 compatible fixes.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
hyeonseok lee [Mon, 17 Jul 2023 11:42:13 +0000 (20:42 +0900)]
[graph_node] handle deprecated stl iterator
- Explicitly provide the parameters, since relying on the STL iterator's default parameters is deprecated.
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
jijoong.moon [Thu, 29 Jun 2023 12:36:30 +0000 (21:36 +0900)]
[ Property ] Add Tensor Type property in model
This PR enables setting the tensor type in the model properties as
"tensor_type=NHWC" or "tensor_type=NCHW". This information is passed to
the network graph, the layer nodes, and the manager.
Then each layer can get the model tensor type information and use it
when requesting a tensor or just when using a temporary tensor.
Resolves:
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
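A usage sketch of the new property through the ccapi entry points; the exact property plumbing is as described in this commit, and is an assumption here:
```
#include <model.h>

int main() {
  auto model = ml::train::createModel(ml::train::ModelType::NEURAL_NET);
  // The format reaches the network graph, layer nodes, and manager, so each
  // layer can request tensors in the selected layout.
  model->setProperty({"tensor_type=NHWC"}); // or "tensor_type=NCHW" (default)
  return 0;
}
```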
Adwaith Anand [Wed, 28 Jun 2023 10:19:43 +0000 (15:49 +0530)]
[ Tensor ] Support NHWC for dot, add/multiply_strided and other ops
This PR includes changes to Tensor and TensorDim to support NHWC
computation for dot, add_strided, multiply_strided, cat, split,
and transpose. It also includes unit tests for evaluation.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Adwaith Anand <adwaith.a@samsung.com>
Signed-off-by: Manohara HK <manohara.hk@samsung.com>
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
SeoHyungjun [Thu, 29 Jun 2023 02:15:26 +0000 (11:15 +0900)]
[fix_ahub] Fix Ahub Defect
The transfer_learning variable is set by the user and does not
change during execution, so its type was changed from bool to const bool.
Signed-off-by: SeoHyungjun <hyungjun.seo@samsung.com>
SeoHyungjun [Thu, 22 Jun 2023 07:31:21 +0000 (16:31 +0900)]
[fix_ahub] Fix AHub Defect
- Fixed the NNTrainerTrain constructor so that the member variable notifier is initialized.
- Fixed nntrainer_model_start_training to stop when nntrainer or the notifier is null.
- Fixed an AUTO_CAUSES_COPY issue.
Signed-off-by: SeoHyungjun <hyungjun.seo@samsung.com>
jijoong.moon [Wed, 21 Jun 2023 06:53:52 +0000 (15:53 +0900)]
[ Bug ] Fix the bug reading the weights for the batch normalization layer
There is a bug when the model loads the data for the batch normalization
layer.
During setup of requestWeights in the manager, it adds the max
execution order to the gradient for gradient clipping, but the variable
weight was also added. This PR fixes it.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Seungbaek Hong [Mon, 19 Jun 2023 04:54:49 +0000 (13:54 +0900)]
[Application] Transfer learning example on Resnet-18
I added a transfer learning option to the resnet-18 example.
If this option is enabled, it loads pre-trained weights
and freezes the weights of the backbone (feature extractor).
(It is just simple transfer learning.)
You can create the pre-trained weights using the save_bin function
from our PyTorch resnet-18 example.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Seungbaek Hong <sb92.hong@samsung.com>
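A hypothetical sketch of the freezing step; the helper name and the property spelling are assumptions for illustration, not the application code:
```
#include <layer.h>
#include <memory>
#include <vector>

// Freeze the backbone (feature extractor) so only the head keeps learning.
void freezeBackbone(std::vector<std::shared_ptr<ml::train::Layer>> &backbone) {
  for (auto &layer : backbone)
    layer->setProperty({"trainable=false"}); // keep pre-trained weights fixed
}
```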
Donghyeon Jeong [Wed, 14 Jun 2023 08:05:57 +0000 (17:05 +0900)]
[Trivial] Fix Typo
Fix Typo
- model_loader.h
- model_loader.cpp
- dynamic_training_optimization.h
- dynamic_training_optimization.cpp
- tensor_trainer_nntrainer.hh
- tensor_trainer_nntrainer.cc
Signed-off-by: Donghyeon Jeong <dhyeon.jeong@samsung.com>
sungsik [Mon, 19 Jun 2023 00:57:35 +0000 (09:57 +0900)]
[Trivial] Fix typo
Found typos at:
* network_graph.h
* acti_func.cpp.h
* bn_layer.h
* common_properties.h
* concat_layer.cpp
Signed-off-by: sungsik <ss.kong@samsung.com>
SeoHyungjun [Thu, 1 Jun 2023 04:52:24 +0000 (13:52 +0900)]
[Application] Fix Resnet18
Fixed because the computation results of PyTorch and nntrainer
were different.
Padding was set to "same" while stride was 2 in the nntrainer ResNet code;
in PyTorch, that combination is an error. In addition, padding was not
applied correctly in nntrainer, so the results differed. To solve this
problem, the parameters of the a1 layer (a conv layer) have been modified.
Additionally, momentum and epsilon were added to the batch_norm layer.
Signed-off-by: SeoHyungjun <hyungjun.seo@samsung.com>
Donghak PARK [Mon, 12 Jun 2023 10:14:16 +0000 (19:14 +0900)]
[Typo] Fix tflite_interpreter typo error
Fix Typo in tflite_interpreter.cpp
Signed-off-by: Donghak PARK <donghak.park@samsung.com>
Donghak PARK [Mon, 12 Jun 2023 07:36:15 +0000 (16:36 +0900)]
[Typo] Fix typo
Fix Typo Error
- nntrainer/compiler/recurrent_realizer.h
- nntrainer/graph/graph_node.h
- nntrainer/graph/network_graph.cpp
- nntrainer/layers/addition_layer.cpp
- nntrainer/layers/common_properties.h
Signed-off-by: Donghak PARK <donghak.park@samsung.com>
Seungbaek Hong [Wed, 7 Jun 2023 06:02:01 +0000 (15:02 +0900)]
[capi] fix notation for tizen 8.0
Fixed notation "tizen 7.5" to "tizen 8.0" for tizen release.
Signed-off-by: Seungbaek Hong <sb92.hong@samsung.com>
hyunil park [Mon, 15 May 2023 02:13:18 +0000 (11:13 +0900)]
[nnstreamer][trainer] Add getting model stats information
- epoch_complete_cb is called when one epoch ends in nntrainer, and
RunStats information is retrieved from the model.
- Sends an event to NNStreamer; NNStreamer waits to receive results every epoch.
- Uses getStatus and nnstreamer_trainer_notify_event().
Signed-off-by: hyunil park <hyunil46.park@samsung.com>
Donghak PARK [Mon, 22 May 2023 04:54:14 +0000 (13:54 +0900)]
[Trivial] Fix Typo
Fix Typo
- nntrainer_internal.h
- nntrainer.cpp
- unittest_tizen_capi_lr_scheduler.cpp
- unittest_tizen_capi_optimizer.cpp
- unittest_nntrainer_lr_scheduler.cpp
Signed-off-by: Donghak PARK <donghak.park@samsung.com>
Seungbaek Hong [Wed, 31 May 2023 06:38:04 +0000 (15:38 +0900)]
[Trivial] Fix typo
fix typo error (requesing -> requesting).
Signed-off-by: Seungbaek Hong <sb92.hong@samsung.com>
Seungbaek Hong [Thu, 30 Mar 2023 10:32:01 +0000 (19:32 +0900)]
[Application] Update yolo v2 model similar to original model
The YOLO v2 model was updated to be similar to the original YOLO v2 model.
The model is intended to follow the original YOLO v2 paper
as closely as possible, but for now average pooling is temporarily
used instead of the re-organization module.
Once the average pooling is replaced with the re-organization
module, the rest is the same as the original YOLO v2 paper.
Both the PyTorch version and the NNTrainer version updated the model
structure, and it was verified that the same results could be obtained
by loading trained weights from PyTorch.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Seungbaek Hong <sb92.hong@samsung.com>
hyunil park [Mon, 15 May 2023 01:58:40 +0000 (10:58 +0900)]
[model] Add epoch complete callback
- Called at the end of an epoch.
- Users can do what they need at the end of each epoch, e.g., get RunStats.
Signed-off-by: hyunil park <hyunil46.park@samsung.com>
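A hypothetical shape for the hook; the callback signature is an assumption from this commit message, not the exact nntrainer declaration:
```
#include <iostream>

// Assumed callback shape: invoked once at the end of every epoch.
using EpochCompleteCb = void (*)(void *user_data);

void onEpochComplete(void *user_data) {
  // Typical use: fetch RunStats here (loss, accuracy, ...) and report them.
  std::cout << "epoch finished" << std::endl;
}
```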
Seungbaek Hong [Mon, 22 May 2023 02:32:26 +0000 (11:32 +0900)]
[activation] add gelu function
Added the GELU activation function to support GPT.
I created a unit test for it using PyTorch.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Seungbaek Hong <sb92.hong@samsung.com>
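For reference, the erf-based definition the unit test compares against (the standard formulation, not the nntrainer source itself):
```
#include <cmath>

// GELU(x) = 0.5 * x * (1 + erf(x / sqrt(2)))
float gelu(float x) {
  return 0.5f * x * (1.0f + std::erf(x / std::sqrt(2.0f)));
}
```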
Seungbaek Hong [Tue, 16 May 2023 02:12:24 +0000 (11:12 +0900)]
[Tensor] Add gaussian error function to tensor
Added the Gaussian error function (erf) to the tensor
(to support the GELU activation function).
It is already implemented in the cmath standard library,
so I just wrapped that function for our tensor operations.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Seungbaek Hong <sb92.hong@samsung.com>
DongHak Park [Mon, 24 Apr 2023 10:30:53 +0000 (19:30 +0900)]
[Application] Fix Resnet Application -ENABLE_TFLITE_INTERPRETER CASES
Currently the TFLite interpreter does not support the cross loss type,
so in the ResNet application we made a macro to switch the loss to mse, and there was a wrong part:
the ResNet application had another macro for ENABLE_TEST, and the GTest results assume the application uses the cross loss.
Fixed some #if statements to get correct results.
TODO: even with this fix, the test still fails, regardless of the tflite-export-related code.
Signed-off-by: DongHak Park <donghak.park@samsung.com>
hyeonseok lee [Wed, 26 Apr 2023 06:59:10 +0000 (15:59 +0900)]
[unittest] remove meaningless unittest
- Remove meaningless unittest
- Unify unittest
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
hyeonseok lee [Fri, 21 Apr 2023 15:06:08 +0000 (00:06 +0900)]
[graph] fix web tct fail issue
- replace insert with emplace
- initialize member variable node_names
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
Seungbaek Hong [Fri, 21 Apr 2023 08:33:43 +0000 (17:33 +0900)]
[tct] fix coverity issues
Fix some coverity issues.
This PR is still a work in progress.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Seungbaek Hong <sb92.hong@samsung.com>
hyunil park [Thu, 20 Apr 2023 07:11:15 +0000 (16:11 +0900)]
[NNstreamer] Change variable type of sub-plugin
- Changed from int64_t to unsigned int.
- Bug fix: when getting values from nnstreamer in an ARM 32-bit environment, invalid values were passed.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: hyunil park <hyunil46.park@samsung.com>
Seungbaek Hong [Wed, 19 Apr 2023 08:13:36 +0000 (17:13 +0900)]
[capi] fix comment for tizen release
The notation for future versions has been modified:
- "tizen 8.0" to "later version".
Signed-off-by: Seungbaek Hong <sb92.hong@samsung.com>
jijoong.moon [Tue, 18 Apr 2023 22:43:32 +0000 (07:43 +0900)]
[Application] Fix CVE issues in kotlin gson
There are CVE issues in gson before 2.8.9.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Seungbaek Hong [Mon, 17 Apr 2023 05:27:43 +0000 (14:27 +0900)]
[capi] fix some api issues for tizen release
I checked some API issues using the tizen-native-api-review-script.
I corrected most of them, but some errors still remain
in the "nntrainer_internal.h" file.
Three types of issues remain:
- enum names should end with '_e'
- struct names should end with '_s'
But I think it would be better not to rename these enums and
structs because they were already released in an earlier version.
The last type of issue seems to be a false positive;
I think the tizen-api-review-script is not aware of macro functions, etc.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Seungbaek Hong <sb92.hong@samsung.com>
hyeonseok lee [Mon, 17 Apr 2023 05:25:54 +0000 (14:25 +0900)]
[capi] add identity layer to capi
- Added identity layer enum to capi
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
hyeonseok lee [Fri, 14 Apr 2023 07:35:34 +0000 (16:35 +0900)]
[optimizer] add log when lr_scheduler property is also set in optimizer
- Added a log message for when exponential learning rate scheduler properties (decay_rate, decay_steps) are set in both the optimizer and the lr_scheduler.
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
hyeonseok lee [Thu, 13 Apr 2023 08:29:25 +0000 (17:29 +0900)]
[capi] add unittest for learning rate scheduler
- Added learning rate scheduler related unittest
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
hyeonseok lee [Thu, 13 Apr 2023 07:34:57 +0000 (16:34 +0900)]
[test] reorder tizen capi unittest
- Reorder unittest for sequential order
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
hyeonseok lee [Thu, 13 Apr 2023 03:31:07 +0000 (12:31 +0900)]
[capi] add learning rate scheduler related api
- Added learning rate scheduler create/destroy/set-property/set-property-with-single-param APIs.
- Added setting a learning rate scheduler on the optimizer.
- Added the ml_train_lr_scheduler_type_e enum.
- Fixed some comments.
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
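An illustrative usage sketch based on this commit message; check the released header for the exact prototypes, enum values, and property keys:
```
#include <nntrainer.h> /* ML training C API header */

void attach_scheduler(ml_train_optimizer_h optimizer) {
  ml_train_lr_scheduler_h scheduler = NULL;

  ml_train_lr_scheduler_create(&scheduler,
                               ML_TRAIN_LR_SCHEDULER_TYPE_EXPONENTIAL);
  ml_train_lr_scheduler_set_property(scheduler, "learning_rate=0.1",
                                     "decay_rate=0.96", "decay_steps=1000",
                                     NULL);
  ml_train_optimizer_set_lr_scheduler(optimizer, scheduler);
}
```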
hyeonseok lee [Thu, 13 Apr 2023 03:09:36 +0000 (12:09 +0900)]
[ccapi] change setLearningRateScheduler function prototype
- Changed the return type from void to int.
The C API will call this function, so it should return a status.
- Changed the learning rate scheduler pointer from unique_ptr to shared_ptr.
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
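Sketched, the prototype change reads roughly as follows; the placement on the optimizer class is assumed from the ccapi context:
```
// before: void setLearningRateScheduler(std::unique_ptr<LearningRateScheduler>);
// after: returns a status code so the C API can propagate failures, and the
// shared_ptr lets the C handle and the optimizer co-own the scheduler:
int setLearningRateScheduler(std::shared_ptr<LearningRateScheduler> lrs);
```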
hyeonseok lee [Thu, 13 Apr 2023 02:56:31 +0000 (11:56 +0900)]
[ccapi] rename LearningRateType enum name to LearningRateSchedulerType
- Renamed the enum LearningRateType to LearningRateSchedulerType to convey more detailed info.
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
Seungbaek Hong [Tue, 21 Mar 2023 11:50:14 +0000 (20:50 +0900)]
[Application] dataloader for yolo
Added a detection dataloader for the yolo example.
- Set the target directory, and this dataloader loads the dataset
from the "images" and "annotations" folders.
- Currently, it only supports the "bmp" image format.
- For variable-length label data, it zero-pads the labels.
Signed-off-by: Seungbaek Hong <sb92.hong@samsung.com>
DongHak Park [Fri, 31 Mar 2023 07:58:12 +0000 (16:58 +0900)]
[TF Export] Update tflite_opnode
Updated tflite_opnode:
- Added is_trainable for making fused ops; by checking whether a node is trainable, we can make fused ops for inference.
- Added a MUL op for the batch normalization fused op.
Signed-off-by: DongHak Park <donghak.park@samsung.com>
DongHak Park [Thu, 6 Apr 2023 07:06:39 +0000 (16:06 +0900)]
[AHub] Fix AHub Defect
Fixed AHub defects:
- added some exception-handling statements
- changed auto element -> auto &element
Signed-off-by: DongHak Park <donghak.park@samsung.com>
jijoong.moon [Thu, 6 Apr 2023 12:59:06 +0000 (21:59 +0900)]
[Debian] Fix the debian package dependency
nntrainer-dev depends on ccapi-ml-training-dev and
capi-ml-training-dev.
This PR adds these dependencies for nntrainer-dev.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Fri, 7 Apr 2023 06:37:38 +0000 (15:37 +0900)]
[NNStreamer] fix input dimension with MAX RANK with 8
This PR fixes the limit on ranks greater than 4.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Thu, 6 Apr 2023 12:49:23 +0000 (21:49 +0900)]
[NNStreamer] Remove rank check temporary
There was a change in nnstreamer extending the max rank limit to 16, so
we need to remove the rank limit check until they provide the current rank.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Mon, 3 Apr 2023 00:09:24 +0000 (09:09 +0900)]
[ Tensor ] Add default Tensor format in Tensor constructor
This PR includes the default Tensor format (NCHW) in the Tensor
constructor. It also includes unit test cases for the NHWC Tensor
format. Even though the Tensor now has a format property, the actual
tensor operations still need to be implemented.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Fri, 31 Mar 2023 06:58:02 +0000 (15:58 +0900)]
[ TensorDim ] Add tensor format in TensorDim
In order to support channel-last and channel-first at the same time, we
need to define the tensor format in the TensorDim class.
According to the format of the TensorDim, it returns the proper value
for APIs such as batch(), channel(), height(), and width().
The default format is channel-first, as it is now.
If nothing is provided when a Tensor is constructed, it is set to
NCHW.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Seungbaek Hong [Tue, 4 Apr 2023 06:45:49 +0000 (15:45 +0900)]
[Ahub] Fix svace issue
Fixed some svace issues on svace-tizen_7.5.
Added exception handling to:
- tensor_trainer_nntrainer.cc
- genModelExeOrder.cpp
Signed-off-by: Seungbaek Hong <sb92.hong@samsung.com>
DongHak Park [Tue, 4 Apr 2023 06:21:26 +0000 (15:21 +0900)]
[AHub] Fix AHub Defect
Fixed an AHub defect:
- changed strerror to strerror_r
Signed-off-by: DongHak Park <donghak.park@samsung.com>
DongHak Park [Mon, 3 Apr 2023 04:59:10 +0000 (13:59 +0900)]
[GitAction] Fix Duplicate Approvals
Fixed duplicate approvals:
- Previously the GitHub Action counted every approval.
- For example, if someone approves a PR, the PR is updated, and they approve again, they are counted twice.
- For example, if CI approves a PR every time it succeeds, the Action counts all of those approvals.
The Git Action code performs the following tasks:
1. Uses the curl command to call the GitHub REST API and retrieve the list of reviews for the given pull request in the repository.
2. Uses the jq command to filter the reviews based on whether their state is "APPROVED".
3. Extracts the login names of the users who wrote each of the filtered reviews.
4. Uses the unique function to keep only unique login names, removing any duplicates.
5. Calculates the length of the resulting list of unique login names, giving the count of unique users who have approved the pull request.
Therefore, this code removes duplicate approvals, counting each unique user only once, even if they have approved more than once.
Signed-off-by: DongHak Park <donghak.park@samsung.com>
Seungbaek Hong [Thu, 30 Mar 2023 06:24:42 +0000 (15:24 +0900)]
[Application] add validation process of yolo example on pytorch
Added a validation process to the yolo example using a validation
dataset on PyTorch.
Signed-off-by: Seungbaek Hong <sb92.hong@samsung.com>
Seungbaek Hong [Wed, 29 Mar 2023 09:41:22 +0000 (18:41 +0900)]
[Application] Update yolo example of torch for tracking gradients
To track the gradients of the Loss class,
I removed the in-place operations in the Loss class.
I also added a hook_variable function for tracking specific tensors.
We can register a specific variable by name using the hook_variable
function, then check its gradient values using the
print_hook_variable function after the backward pass.
Signed-off-by: Seungbaek Hong <sb92.hong@samsung.com>
jijoong.moon [Fri, 31 Mar 2023 00:23:02 +0000 (09:23 +0900)]
[ Custom ] Add custom optimizer example in Application
This PR includes a custom optimizer ('momentum') example in
Applications/Custom.
It adds the test case and a demo implementation.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Tue, 28 Mar 2023 23:04:54 +0000 (08:04 +0900)]
[ Application ] Fix the layer type of Simpleshot application
There was an error in the layer name for "CL2N".
This PR fixes the error.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Mon, 27 Mar 2023 06:49:12 +0000 (15:49 +0900)]
[Layer] Support batch computation for l2norm layer
This PR enables the l2norm preprocessor layer to support batch
computation.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Mon, 27 Mar 2023 04:22:49 +0000 (13:22 +0900)]
[ Tensor ] allow inc 0 for broadcasting in raw computation
This PR allows incX and incY to be 0 in tensor raw computation to
enable broadcasting tensor operations.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
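A minimal sketch of the idea (not the nntrainer kernel itself): with an increment of 0, one operand stays pinned to its first element, which broadcasts it across the other operand.
```
// Strided add: incX == 0 reuses x[0] for every i, i.e. broadcasts x over y.
void add_strided(unsigned int n, const float *x, int incX, float *y, int incY) {
  for (unsigned int i = 0; i < n; ++i)
    y[i * incY] += x[i * incX];
}
```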
jijoong.moon [Mon, 3 Apr 2023 23:58:46 +0000 (08:58 +0900)]
[Release] NNTrainer v0.5.0 release
NNTrainer v0.5.0 is released.
Resolves:
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Jiho Chu [Tue, 14 Feb 2023 08:06:51 +0000 (17:06 +0900)]
[TEST] Add execution order generator
It generates an execution order golden file for each model.
Each golden file consists of the execution orders of each tensor.
Signed-off-by: Jiho Chu <jiho.chu@samsung.com>
Jiho Chu [Mon, 13 Feb 2023 09:39:02 +0000 (18:39 +0900)]
[TEST] Add execution order test
This patch verifies the execution orders of the model graph.
Signed-off-by: Jiho Chu <jiho.chu@samsung.com>
Jiho Chu [Mon, 13 Feb 2023 09:32:08 +0000 (18:32 +0900)]
[DEBUG] Add interface for getting execution order
This patch implements a debugging feature for execution orders.
The execution order for each tensor is decided during the initialize and
finalize phases. To check their validity, an interface for getting
execution orders is added to the network graph class. It can gather both
the variables and gradients of the tensors.
Signed-off-by: Jiho Chu <jiho.chu@samsung.com>
Seungbaek Hong [Mon, 9 Jan 2023 07:03:13 +0000 (16:03 +0900)]
[activation] add swish activation function
Added the swish activation function.
It needs both the input and the output of the activation
to calculate the derivative value,
so I overloaded the run_prime_fn function to use the
input and output when calculating the derivative.
- add swish activation
- add a test case for the swish activation function
**Self evaluation:**
1. Build test: [x]Passed []Failed []Skipped
2. Run test: [x]Passed []Failed []Skipped
Signed-off-by: Seungbaek Hong <sb92.hong@samsung.com>
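For reference, with s(x) the sigmoid, swish(x) = x * s(x) and swish'(x) = s(x) + swish(x) * (1 - s(x)), which is why the derivative can be computed from the cached input and output. A minimal sketch:
```
#include <cmath>

float sigmoid(float x) { return 1.0f / (1.0f + std::exp(-x)); }

float swish(float x) { return x * sigmoid(x); }

// y is the cached forward output swish(x); reusing it avoids recomputation.
float swish_prime(float x, float y) {
  float s = sigmoid(x);
  return s + y * (1.0f - s);
}
```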
jijoong.moon [Thu, 29 Dec 2022 11:16:07 +0000 (20:16 +0900)]
[Memory Planner] Update the Memory Planner
This PR includes:
1. Assigning the right execution order depending on the layer type.
(We do need to move those fixes into each layer.)
2. Updating the memory planner to use the memory with a smaller size.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Tue, 27 Dec 2022 02:18:13 +0000 (11:18 +0900)]
[Execution Order] Set execution order properly for optimizer variables
This patch includes the proper assignment of execution orders for
optimizer variables, e.g., M and V for the adam optimizer.
Only the apply-gradient step requires the optimizer variables.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Thu, 22 Dec 2022 23:54:33 +0000 (08:54 +0900)]
[Planner] Add Weight Gradient Planning
This patch includes memory optimization for weight gradients.
A weight gradient can be reused when there is no execution order
conflict. To do this, the memory pool needs a data
structure to identify weight gradients. Therefore, a tensor is
marked when it is requested during a weight request in the manager,
except in the weight-sharing situation (when is_dependent is true).
Note that we do not guarantee the gradient is always initialized
properly, so layer developers and users must be aware that it can be
dirty.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Jiho Chu [Tue, 20 Dec 2022 01:54:39 +0000 (10:54 +0900)]
[Tensor] Remove calcGrad step for non-trainable layer
This patch implements the trainable property behavior.
If a layer is set as non-trainable, it does not need to execute the
calcGrad step, so we can remove that step from the execution order and
skip the gradient calculation.
Signed-off-by: Jiho Chu <jiho.chu@samsung.com>
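A conceptual sketch with illustrative types, not the actual graph code: the frozen layer's own gradient step is skipped, but its derivative step still runs so earlier layers keep receiving gradients.
```
struct LayerNode {        // minimal stand-in for illustration
  bool trainable;
  void calcGradient();    // compute this layer's weight gradients
  void calcDerivative();  // compute the derivative passed to earlier layers
};

void backwardNode(LayerNode &node) {
  if (node.trainable)
    node.calcGradient();  // removed from the execution order when frozen
  node.calcDerivative();  // always needed to keep backprop flowing
}
```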
jijoong.moon [Fri, 24 Mar 2023 00:14:15 +0000 (09:14 +0900)]
[ Dataset ] Add Directory Dataset Producer
This PR includes a directory dataset producer for bmp images only.
We can extend the supported image types to jpg and png with the proper
decoders, and can also add a resizer according to requirements.
The C API and unit test cases will be added in later PRs.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Seungbaek Hong [Fri, 17 Mar 2023 06:40:29 +0000 (15:40 +0900)]
[Application] yolo example moving weights from torch to nntrainer
Added a yolo example that moves weights from PyTorch to nntrainer.
It should be noted that PyTorch and nntrainer have different
default values for the epsilon constant of batch normalization.
Signed-off-by: Seungbaek Hong <sb92.hong@samsung.com>
hyeonseok lee [Tue, 14 Mar 2023 04:39:49 +0000 (13:39 +0900)]
[tensor] split tensor by given sizes
- Until now the tensor was split evenly by a given size, but from now on the split operation can split the tensor by given sizes.
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
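An illustrative call shape assumed from this commit message; the exact signature in tensor.h may differ:
```
#include <tensor.h>

nntrainer::Tensor t(1, 1, 1, 10);            // 10 elements along width
auto parts = t.split({2, 3, 5}, /*axis=*/3); // widths 2, 3 and 5 instead of
                                             // three even chunks
```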
hyunil park [Wed, 18 Jan 2023 05:07:22 +0000 (14:07 +0900)]
[nnstreamer][trainer] Rename callback parameters
- Renamed 'train' and 'invoke' to 'start' and 'push_data'.
- Used PRId64 instead of %ld or %lld for build dependency.
- Added action descriptions and constraints.
Signed-off-by: hyunil park <hyunil46.park@samsung.com>
hyunil park [Mon, 9 Jan 2023 11:01:31 +0000 (20:01 +0900)]
[nnstreamer][trainer] Change queue to circular queue
- Change queue to circular queue
Signed-off-by: hyunil park <hyunil46.park@samsung.com>
hyunil park [Thu, 5 Jan 2023 08:32:14 +0000 (17:32 +0900)]
[nnstreamer][trainer] Apply setting epochs by GstTensorTrainerProperties
- Added getting the epochs property and setting it as a model property.
- Used std::mutex instead of pthread_mutex.
Signed-off-by: hyunil park <hyunil46.park@samsung.com>
hyunil park [Thu, 5 Jan 2023 02:52:02 +0000 (11:52 +0900)]
[nnstreamer][trainer] Create a queue to construct dataset
- It is dynamically allocated to the initial queue size and
keeps using that memory until training is finished.
- The maximum size of the queue is 30; if the number of samples
is less than 30, the size is the number of samples.
- Data received through 'invoke' is stored in the queue, and the data
in the queue is used by the 'data gen callback'. Used data
is not kept across epochs.
In total, (number of train samples + number of valid samples) * epochs
data should be received.
Signed-off-by: hyunil park <hyunil46.park@samsung.com>
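A generic fixed-capacity circular buffer sketch illustrating the memory-reuse behavior described above; this is not the sub-plugin's actual queue:
```
#include <cstddef>
#include <vector>

template <typename T>
class CircularQueue {
public:
  explicit CircularQueue(std::size_t capacity) : buf_(capacity) {}
  bool push(const T &v) {
    if (count_ == buf_.size())
      return false;                             // full: producer must wait
    buf_[(head_ + count_++) % buf_.size()] = v; // reuse preallocated slot
    return true;
  }
  bool pop(T &out) {
    if (count_ == 0)
      return false;                             // empty: consumer must wait
    out = buf_[head_];
    head_ = (head_ + 1) % buf_.size();
    --count_;
    return true;
  }
private:
  std::vector<T> buf_;
  std::size_t head_ = 0, count_ = 0;
};
```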
hyunil park [Fri, 2 Dec 2022 06:37:33 +0000 (15:37 +0900)]
[nnstreamer] Create nnstreamer tensor_trainer subplugin
- Create nnstreamer tensor_trainer subplugin
- Create libnnstreamer_trainer_nntrainer.so
The sub-plugin receives GstTensorTrainerProperties from the nnstreamer
tensor_trainer to create the dataset and model. It receives tensor data
from tensor_trainer and trains the model.
The sub-plugin is created using GstTensorTrainerFramework, and
tensor_trainer calls create, destroy, train, and invoke_NN.
**Self evaluation:**
1. Build test: [x]Passed []Failed []Skipped
2. Run test: [x]Passed []Failed []Skipped
Signed-off-by: hyunil park <hyunil46.park@samsung.com>
Seungbaek Hong [Tue, 21 Mar 2023 09:37:12 +0000 (18:37 +0900)]
[WIP][ahub] fix ahub issue
Fixed svace issues from ahub:
- added try & catch to the yolo example
- added initialization to the Exporter class
Signed-off-by: Seungbaek Hong <sb92.hong@samsung.com>
jijoong.moon [Mon, 20 Mar 2023 22:28:23 +0000 (07:28 +0900)]
[ Trivial ] add more error info
Add more error info when the tflite and input dimensions mismatch.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Mon, 20 Mar 2023 00:35:53 +0000 (09:35 +0900)]
[ Application ] Add android resnet application example
This PR includes the Android ResNet application example with the
cifar100 dataset. The dataset is not included because of its size,
but users can download it and place it properly into the asset
directory of the application.
This application demonstrates how nntrainer is used in an Android
application through the JNI interface, including training, testing,
stopping during processing, and inferencing.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Tue, 14 Feb 2023 01:59:23 +0000 (10:59 +0900)]
[Application] Training Resnet kotlin code for Android
This PR includes the Kotlin Android implementation to train ResNet-18 in
Applications.
Resolves:
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
DongHak Park [Tue, 14 Feb 2023 10:57:40 +0000 (19:57 +0900)]
[Flatbuffer] Add flatbuffer_opnode
Added flatbuffer_opnode.cpp & flatbuffer_opnode.h.
- For the flatbuffer interpreter: brings in the OpNode and creates the variables.
- Currently only supports the fully connected layer.
- Currently only supports NCHW-format tensors.
It will be updated.
Signed-off-by: DongHak Park <donghak.park@samsung.com>
jijoong.moon [Sun, 12 Mar 2023 23:21:17 +0000 (08:21 +0900)]
[Conv2d] Remove inter_result in conv2d layer
It was used for better latency, but it consumes more memory than
expected when there are many conv layers.
In this PR, the inter_result is removed.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Thu, 9 Mar 2023 22:36:03 +0000 (07:36 +0900)]
[ Tensor ] fix the dim type
Previously, the dimension type of TensorDim was unsigned int, which
causes errors for bigger problems. This PR changes the dimension
type to size_t.
It also fixes the user_data of the stop callback so it can be used properly.
Resolves:
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
DongHak Park [Tue, 14 Mar 2023 10:47:43 +0000 (19:47 +0900)]
[Trivial] Typo Error
Fix Typo Error
- Correct typos found during development in compiler dir
Signed-off-by: DongHak Park <donghak.park@samsung.com>
Seungbaek Hong [Fri, 3 Mar 2023 05:04:27 +0000 (14:04 +0900)]
[Application] define YOLO v2 model class
Defined the YOLO v2 model structure corresponding to the PyTorch example.
For now, it is implemented with fake data, and there is no loss
function for this model.
**Self evaluation:**
1. Build test: [x]Passed []Failed []Skipped
2. Run test: [x]Passed []Failed []Skipped
Signed-off-by: Seungbaek Hong <sb92.hong@samsung.com>
DongHak Park [Tue, 14 Feb 2023 10:12:24 +0000 (19:12 +0900)]
[FlatBuffer] Add nntrainer_schema.fbs
Added nntrainer_schema.fbs.
- This schema is for flatbuffer export.
- It contains only fully-connected layer options.
- It contains TensorFlow Lite's schema.
Modified the meson build
- to compile nntrainer_schema_generated.h.
This will be updated with more layers and operators.
Signed-off-by: DongHak Park <donghak.park@samsung.com>
Seungbaek Hong [Fri, 10 Feb 2023 13:16:38 +0000 (22:16 +0900)]
[Application] Save model trained from pytorch as nntrainer format
I modified the resnet example to save a model trained with PyTorch in
the nntrainer binary model format.
- The PyTorch resnet example model was modified to be the same
as the nntrainer resnet example model.
Signed-off-by: Seungbaek Hong <sb92.hong@samsung.com>
hyeonseok lee [Thu, 16 Mar 2023 05:30:12 +0000 (14:30 +0900)]
[ahub] fix ahub issue
- Cast an unsigned value to a signed value and check its value in case it overrides the sign bit.
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
Seungbaek Hong [Fri, 3 Mar 2023 05:04:27 +0000 (14:04 +0900)]
[Application] add object detection example using pytorch
Added a YOLO v2 object detection example using PyTorch.
It is implemented in the same way as the YOLO v2 model,
but the backbone was replaced with a simpler CNN
model to make it easier to configure the initial version
of YOLO to be supported by NNTrainer.
Signed-off-by: Seungbaek Hong <sb92.hong@samsung.com>
Jiho Chu [Fri, 3 Mar 2023 06:29:41 +0000 (15:29 +0900)]
[NEURALNET] Refine forwarding operation
This patch refines the forwarding operation in the neural network class.
The code depth does not match between the forward and backward
operations: for backwarding, there is a backwarding_op that is passed to
the graph, which handles the operation.
To keep the same depth for forwarding, this patch adds a forwarding_op
function and passes it to the graph class.
Related issue:
#2108
Signed-off-by: Jiho Chu <jiho.chu@samsung.com>
DongHak Park [Wed, 8 Mar 2023 08:16:17 +0000 (17:16 +0900)]
[GitAction] Fix Labeler on review commented
Problem
- On "review commented" and "review approved" events there is an error in the HTTP API.
- It can't get the issue's number because the event number is only valid if the workflow is run by the pull request itself.
To solve this problem:
- changed the API from ```github.event.number``` to ```github.event.pull_request.number```
- changed the script version and some trivial parts
Signed-off-by: DongHak Park <donghak.park@samsung.com>