platform/core/ml/nntrainer.git
14 months ago[ blas/neon ] Add neon_blas files
skykongkong8 [Wed, 2 Aug 2023 08:23:07 +0000 (17:23 +0900)]
[ blas/neon ] Add neon_blas files

* Enable neon sgemv function in Android (ARM) __fp16 computation
* note: this PR includes a significant part of PR #1981 of nnstreamer/nntrainer

Signed-off-by: skykongkong8 <ss.kong@samsung.com>
14 months ago[Bug] Fix generating nan values in tensor
Donghyeon Jeong [Tue, 1 Aug 2023 02:42:00 +0000 (11:42 +0900)]
[Bug] Fix generating nan values in tensor
- Gradient tensor values were inconsistently set to NaN
- NaN values caused incorrect backwarding in the neural net
- Replacing malloc with calloc zero-initializes the allocation, preventing stale values from being read back as NaN
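The malloc-to-calloc switch works because calloc zero-fills the allocation, so no stale bit pattern can be interpreted as NaN. A minimal sketch (plain C++, not nntrainer's actual allocator):

```cpp
#include <cassert>
#include <cstdlib>

// Allocate a zero-initialized float buffer. Unlike malloc, calloc guarantees
// every byte is zero, so no leftover bit pattern can read back as NaN.
inline float *alloc_zeroed(std::size_t n) {
  return static_cast<float *>(std::calloc(n, sizeof(float)));
}
```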

Signed-off-by: Donghyeon Jeong <djeong20@illinois.edu>
14 months ago[ Tensor ] Templatize apply member function
jijoong.moon [Fri, 28 Jul 2023 13:57:29 +0000 (22:57 +0900)]
[ Tensor ] Templatize apply member function

In order to support gcc-13 & ndk-build, the apply member function
needs to be templatized. This also gives apply a cleaner
definition.
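A minimal sketch of what a templatized apply can look like, using a simplified toy tensor rather than nntrainer's actual Tensor class:

```cpp
#include <algorithm>
#include <functional>
#include <vector>

// Simplified stand-in for nntrainer's Tensor; the template parameter lets the
// same apply() work for float and half-precision element types alike.
template <typename T>
struct ToyTensor {
  std::vector<T> data;

  // Apply f element-wise in place and return *this for chaining.
  ToyTensor &apply(std::function<T(T)> f) {
    std::transform(data.begin(), data.end(), data.begin(), std::move(f));
    return *this;
  }
};
```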

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
14 months ago[ Mixed ] fix apply using casted function
jijoong.moon [Fri, 28 Jul 2023 10:49:52 +0000 (19:49 +0900)]
[ Mixed ] fix apply using casted function

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
14 months ago[ Mixed Tensor ] add tensor type property in initContext
jijoong.moon [Thu, 27 Jul 2023 00:14:57 +0000 (09:14 +0900)]
[ Mixed Tensor ] add tensor type property in initContext

This PR adds the tensor type (Format, Weight Tensor DataType,
Activation Tensor DataType) in initContext.
- Remove the tensor type variables and their setter/getter member functions
in layer, layer_devel, loss layer, etc.
- Add a tensor type setter in initContext
- Set the var_grad (input & output) Tensor Type according to the model
Tensor Data Type.
- Add ModelTensorTypeInfo: e.g. FP16_FP16 (Weight FP16, Activation
FP16)

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
14 months ago[ Mixed Tensor ] Bug Fixes
jijoong.moon [Wed, 26 Jul 2023 05:39:17 +0000 (14:39 +0900)]
[ Mixed Tensor ] Bug Fixes

This PR includes bug fixes for mixed tensor support.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
14 months ago[Tensor] Enable FP16 in gcc-13
Donghyeon Jeong [Wed, 26 Jul 2023 02:43:29 +0000 (11:43 +0900)]
[Tensor] Enable FP16 in gcc-13
- divide in tensor now supports FP16
- ranged in test util supports FP16
- fix zoneout_rate from fp16 to float

Signed-off-by: Donghyeon Jeong <djeong20@illinois.edu>
14 months ago[Bug] Fix tensor_pool unittest error
Donghyeon Jeong [Tue, 25 Jul 2023 09:11:19 +0000 (18:11 +0900)]
[Bug] Fix tensor_pool unittest error

Signed-off-by: Donghyeon Jeong <djeong20@illinois.edu>
14 months agoEnable gcc-13 compile with FP16
Donghyeon Jeong [Tue, 25 Jul 2023 08:38:22 +0000 (17:38 +0900)]
Enable gcc-13 compile with FP16

- Match FP16 types to avoid greater conversion rank error
- Replace deprecated functions in gcc-13
- Add apply function for FP16 in Tensor
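The "greater conversion rank" error arises when a half-precision operand meets a wider literal in one expression; keeping both operands the same type avoids the promotion mismatch. A sketch (float stands in for __fp16 here, since __fp16 is ARM-specific):

```cpp
#include <algorithm>

// std::max(x, 0.0f) with a half-precision x fails to deduce T because the two
// arguments have different types. Casting the literal to T keeps both
// operands at the same conversion rank.
template <typename T>
T relu(T x) {
  return std::max(x, static_cast<T>(0));
}
```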

Signed-off-by: Donghyeon Jeong <djeong20@illinois.edu>
14 months ago[ Mixed Tensor ] Enable FP32 unittest cases
jijoong.moon [Mon, 24 Jul 2023 22:47:33 +0000 (07:47 +0900)]
[ Mixed Tensor ] Enable FP32 unittest cases

This PR enables the FP32 unittest cases. It includes various fixes and
adds compiler preprocessor pragmas.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
14 months ago[Bug] Fix memory access error in addValue
Donghyeon Jeong [Thu, 20 Jul 2023 07:51:37 +0000 (16:51 +0900)]
[Bug] Fix memory access error in addValue
- Previously, memory access to tensor data was incorrect
- Changed to access the data directly by index instead of calculating the index

Signed-off-by: Donghyeon Jeong <dhyeon.jeong@samsung.com>
14 months ago[Tensor] check data allocation in add/multiply_strided
Donghyeon Jeong [Thu, 20 Jul 2023 04:19:50 +0000 (13:19 +0900)]
[Tensor] check data allocation in add/multiply_strided

Signed-off-by: Donghyeon Jeong <dhyeon.jeong@samsung.com>
14 months ago[WIP] [__fp16] Verify through __fp16 unittests
skykongkong8 [Thu, 20 Jul 2023 08:25:49 +0000 (17:25 +0900)]
[WIP] [__fp16] Verify through __fp16 unittests

* Uncomment __fp16 testcases, then verify & debug
* fix missing functions or variables in tensor and blas_interface
* TODO (remaining): fix the setDist function, find an erf function

Signed-off-by: skykongkong8 <kssjustin98@gmail.com>
14 months ago[unittest] static cast answer data to fp16
Donghyeon Jeong [Wed, 19 Jul 2023 07:30:06 +0000 (16:30 +0900)]
[unittest] static cast answer data to fp16
- static_cast<__fp16> is needed to avoid a narrowing conversion error
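The pattern can be sketched as follows, with float/double standing in for __fp16/float since __fp16 is ARM-specific: brace initialization rejects implicit narrowing, so an explicit cast documents the intended down-conversion.

```cpp
// Brace-initialized values reject implicit narrowing conversions; an explicit
// static_cast makes the down-conversion deliberate and silences the error.
inline float narrow(double v) {
  // float f{v};  // ill-formed: narrowing conversion from double to float
  float f{static_cast<float>(v)};
  return f;
}
```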

Signed-off-by: Donghyeon Jeong <dhyeon.jeong@samsung.com>
14 months ago[unittest] Add data type for testing tensor
Donghyeon Jeong [Wed, 19 Jul 2023 01:06:37 +0000 (10:06 +0900)]
[unittest] Add data type for testing tensor
- add Tdatatype to avoid errors
- the default data type is FP32
- Tformat & Tdatatype are used to create TensorType

Signed-off-by: Donghyeon Jeong <dhyeon.jeong@samsung.com>
14 months ago[unittest] include excluded tensor type in test cases
Donghyeon Jeong [Wed, 19 Jul 2023 04:38:59 +0000 (13:38 +0900)]
[unittest] include excluded tensor type in test cases

- replace Tformat & Tdatatype with TensorType
- include missing Tdatatype

Signed-off-by: Donghyeon Jeong <dhyeon.jeong@samsung.com>
14 months ago[Application] provide default tensortype
Donghyeon Jeong [Wed, 19 Jul 2023 04:45:12 +0000 (13:45 +0900)]
[Application] provide default tensortype
- add tensortype to avoid error in initialization

Signed-off-by: Donghyeon Jeong <dhyeon.jeong@samsung.com>
14 months ago[WIP] [Tensor] Add __fp16 supporting functions in blas_interface
skykongkong8 [Wed, 19 Jul 2023 05:18:42 +0000 (14:18 +0900)]
[WIP] [Tensor] Add __fp16 supporting functions in blas_interface

* Add __fp16 support with #ifdef, and parameter overloading
* (trivial) fix typo
* TODO: replace with valid __fp16 supporting functions
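The #ifdef-plus-overloading pattern can be sketched like this; ENABLE_FP16 is a hypothetical guard macro and scale_vec an illustrative name, not nntrainer's actual BLAS interface:

```cpp
#include <cstddef>

// Overload on element type; the half-precision variant is compiled only when
// the toolchain supports __fp16 (guarded by a hypothetical ENABLE_FP16 macro).
void scale_vec(std::size_t n, float alpha, float *x) {
  for (std::size_t i = 0; i < n; ++i)
    x[i] *= alpha;
}

#ifdef ENABLE_FP16
void scale_vec(std::size_t n, float alpha, __fp16 *x) {
  for (std::size_t i = 0; i < n; ++i)
    x[i] = static_cast<__fp16>(x[i] * alpha);
}
#endif
```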

Signed-off-by: skykongkong8 <kssjustin98@gmail.com>
14 months ago[WIP] [Tensor] Add __fp16 to Tensor member functions
skykongkong8 [Wed, 19 Jul 2023 00:52:28 +0000 (09:52 +0900)]
[WIP] [Tensor] Add __fp16 to Tensor member functions

* add an if-else code block to each Tensor member function
* fix trivially missed functions

Signed-off-by: skykongkong8 <kssjustin98@gmail.com>
14 months ago[WIP] [Tensor] Add __fp16 to Tensor member functions
skykongkong8 [Tue, 18 Jul 2023 08:30:14 +0000 (17:30 +0900)]
[WIP] [Tensor] Add __fp16 to Tensor member functions

* add an if-else code block to each Tensor member function
* (trivial) fix typos
* TODO: check for missed functions

Signed-off-by: skykongkong8 <kssjustin98@gmail.com>
14 months ago[ Property ] Add Tensor Type property in model
jijoong.moon [Thu, 29 Jun 2023 12:36:30 +0000 (21:36 +0900)]
[ Property ] Add Tensor Type property in model

This PR enables the tensor type and tensor format in the model property as
"tensor_format=NHWC" or "tensor_type=FP16". This information goes to
the network_graph and the layer node & manager.

Then, each layer can get the model tensor type information, which can
be used to request a tensor or to use a temporary tensor.
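A toy sketch of parsing such a "key=value" property string; the key and value names come from the commit message, while the parser itself is purely illustrative (nntrainer's real property handling differs):

```cpp
#include <string>

// Return the value part of a "key=value" property such as "tensor_format=NHWC",
// or an empty string when the key does not match.
inline std::string property_value(const std::string &prop,
                                  const std::string &key) {
  auto pos = prop.find('=');
  if (pos == std::string::npos || prop.substr(0, pos) != key)
    return "";
  return prop.substr(pos + 1);
}
```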

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
14 months ago[ Tensor ] Support NHWC for dot, add/multiply_strided and other ops
Adwaith Anand [Wed, 28 Jun 2023 10:19:43 +0000 (15:49 +0530)]
[ Tensor ] Support NHWC for dot, add/multiply_strided and other ops

This PR includes changes of Tensor and TensorDim to support NHWC
computation for dot, add_strided, multiply_strided, cat, split,
and transpose. It also includes unittests to evaluate.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Adwaith Anand <adwaith.a@samsung.com>
Signed-off-by: Manohara HK <manohara.hk@samsung.com>
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
14 months ago[ GTEST ] Add gtest to run gtest in android device
jijoong.moon [Fri, 14 Jul 2023 12:32:32 +0000 (21:32 +0900)]
[ GTEST ] Add gtest to run gtest in android device

Add Gtest codes

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
14 months ago[ WIP ] Mixed Tensor Data Type
jijoong.moon [Fri, 14 Jul 2023 12:27:53 +0000 (21:27 +0900)]
[ WIP ] Mixed Tensor Data Type

Modification for Mixed Tensor Data Type

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
14 months ago[GTEST] add gtest for tensor unittest in Android
jijoong.moon [Wed, 12 Jul 2023 23:51:33 +0000 (08:51 +0900)]
[GTEST] add gtest for tensor unittest in Android

This PR enables gtest for Android, especially the half-precision
tests.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
14 months ago[ Mixed Precision ] Support Mixed Precision
jijoong.moon [Wed, 12 Jul 2023 07:58:48 +0000 (16:58 +0900)]
[ Mixed Precision ] Support Mixed Precision

This PR enables mixed precision computation.
- Add the data_type property in Tensor: FP16, FP32
- Memory_Data only handles void *
- In Tensor, several member functions are templated:
   getAddress<float>(), getData<__fp16>(), etc.
- Still need to implement the BLAS interface functions
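The void*-plus-templated-getter idea can be sketched as follows; ToyMemoryData is an illustrative stand-in, not nntrainer's actual Memory_Data:

```cpp
// Type-erased memory block: storage is held as void*, and the caller picks
// the element type at the access site, as in getData<__fp16>().
struct ToyMemoryData {
  void *ptr;

  template <typename T>
  T *getData() { return static_cast<T *>(ptr); }
};
```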

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
14 months ago[ Property ] Add Tensor Type property in model
jijoong.moon [Thu, 29 Jun 2023 12:36:30 +0000 (21:36 +0900)]
[ Property ] Add Tensor Type property in model

This PR enables the tensor type in the model property as
"tensor_type=NHWC" or "tensor_type=NCHW". This information goes to
the network_graph and the layer node & manager.

Then, each layer can get the model tensor type information, which can
be used to request a tensor or to use a temporary tensor.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
15 months ago[nnstreamer] Set dim value as 1 when nnstreamer gives 0 value
Yongjoo Ahn [Thu, 17 Aug 2023 08:43:39 +0000 (17:43 +0900)]
[nnstreamer] Set dim value as 1 when nnstreamer gives 0 value

- Recently, nnstreamer sets 0 for the padded values of dimensions.
- Set the dimension value to 1 in nntrainer when nnstreamer gives 0.

REF: https://github.com/nnstreamer/nnstreamer/pull/4111
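The zero-to-one mapping described above amounts to something like this sketch (illustrative only; the real conversion lives in the nnstreamer sub-plugin):

```cpp
#include <array>
#include <cstddef>

// nnstreamer pads unused dimension slots with 0, while nntrainer expects 1
// for unused dims, so map each 0 to 1 before building a tensor dimension.
template <std::size_t N>
void sanitize_dims(std::array<unsigned int, N> &dims) {
  for (auto &d : dims)
    if (d == 0)
      d = 1;
}
```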

Signed-off-by: Yongjoo Ahn <yongjoo1.ahn@samsung.com>
15 months ago[Application] darknet53 pytorch implementation for yolo v3
Seungbaek Hong [Tue, 30 May 2023 12:23:17 +0000 (21:23 +0900)]
[Application] darknet53 pytorch implementation for yolo v3

Added a pytorch darknet53 model for yolo v3.

It is used in yolo v3 as a backbone model.

I'll add an nntrainer implementation, too.

Signed-off-by: Seungbaek Hong <sb92.hong@samsung.com>
15 months agoRemoved unwanted ternary operators
Adwaith Anand [Fri, 4 Aug 2023 14:30:47 +0000 (20:00 +0530)]
Removed unwanted ternary operators

Ternary operators that were used to assign boolean values are
removed, since they were redundant.
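For illustration, the kind of redundancy removed:

```cpp
// The comparison already yields a bool, so the ternary adds nothing.
inline bool is_positive_redundant(int v) { return (v > 0) ? true : false; }

// Equivalent and simpler.
inline bool is_positive(int v) { return v > 0; }
```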

Signed-off-by: Adwaith Anand <adwaith.a@samsung.com>
15 months ago[FullyConnected] Added NHWC support for FC_Layer inference part.
Adwaith Anand [Wed, 12 Jul 2023 12:49:06 +0000 (18:19 +0530)]
[FullyConnected] Added NHWC support for FC_Layer inference part.

This also contains the unit tests to evaluate.

**Self evaluation:**
    1. Build test:   [X]Passed [ ]Failed [ ]Skipped
    2. Run test:     [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Adwaith Anand <adwaith.a@samsung.com>
15 months ago[Doc] Fix tizen reference link
Donghak PARK [Wed, 2 Aug 2023 10:42:15 +0000 (19:42 +0900)]
[Doc] Fix tizen reference link

The Tizen reference link has changed,
so getting-started.md was updated with the latest link.

previous : https://source.tizen.org/documentation/reference/git-build-system/usage/gbs-build
updated :  https://docs.tizen.org/platform/developing/building

Signed-off-by: Donghak PARK <donghak.park@samsung.com>
15 months ago[TEST] Add timeout option
Jiho Chu [Thu, 3 Aug 2023 10:01:22 +0000 (19:01 +0900)]
[TEST] Add timeout option

It adds a timeout option to adjust the meson test timeout.

Signed-off-by: Jiho Chu <jiho.chu@samsung.com>
15 months ago[FIX] modified for checking weight grad
Jiho Chu [Tue, 1 Aug 2023 10:34:02 +0000 (19:34 +0900)]
[FIX] modified for checking weight grad

This patch checks whether the requested memory is a weight gradient;
this information will be used for planning.

Signed-off-by: Jiho Chu <jiho.chu@samsung.com>
15 months ago[Compiler] Preserve connection order in multi-out realizer
Donghyeon Jeong [Wed, 2 Aug 2023 05:15:51 +0000 (14:15 +0900)]
[Compiler] Preserve connection order in multi-out realizer

Create multiout nodes with the given connection order when building a frequency map.

Signed-off-by: Donghyeon Jeong <dhyeon.jeong@samsung.com>
15 months ago[bugfix] added warning flag to compile with gcc 13
hyeonseok lee [Thu, 27 Jul 2023 12:57:40 +0000 (21:57 +0900)]
[bugfix] added warning flag to compile with gcc 13

 - Added the -Wno-maybe-uninitialized flag

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
15 months ago[TFLite Export] Update node_exporter
DongHak Park [Fri, 14 Apr 2023 09:00:01 +0000 (18:00 +0900)]
[TFLite Export] Update node_exporter

Add epsilon props to additional_props for fusing
- For fusing we need epsilon for batch norm
Add padding and stride props to props_vector
- For conv fusing we need to build a new BuiltinOption, and to build a new BuiltinOption with a fused activation we need padding and stride

Signed-off-by: DongHak Park <donghak.park@samsung.com>
15 months ago[TFLite Export] Add Realized Path for Fused Op
DongHak Park [Fri, 14 Apr 2023 08:35:07 +0000 (17:35 +0900)]
[TFLite Export] Add Realized Path for Fused Op

Made a realized path for fused ops

1. Check trainable
 - check whether the node is trainable for fusing
2. Conv + ReLU fusing
3. Batch normalization fusing
Signed-off-by: DongHak Park <donghak.park@samsung.com>
15 months ago[TFLite Export] Add variable, functions TfOpNodes for Fused OP export
DongHak Park [Fri, 14 Apr 2023 08:27:46 +0000 (17:27 +0900)]
[TFLite Export] Add variable, functions TfOpNodes for Fused OP export
To export the TFLite format with fused ops, add some variables and functions

1. Add getter, setter, and replace for weights
- for fused ops we need to adjust the weights after the OpNode is made

2. Add an isToBeRemoved variable
- after the OpNode is made, check the condition and mark it to be removed

3. Add additional_props
- for the batch normalization fused op we need additional props from nntrainer
- made a vector<float> variable to save the additional data

Signed-off-by: DongHak Park <donghak.park@samsung.com>
15 months ago[LOG] print output dim instead of input dim in model summary
Seungbaek Hong [Thu, 1 Jun 2023 08:27:07 +0000 (17:27 +0900)]
[LOG] print output dim instead of input dim in model summary

When we print the model architecture using the summarize method,
nntrainer prints the input dimension of each layer.

But tensorflow and pytorch print the output dimension
of each layer in the summary, so it is inconvenient
to compare each layer with tf and torch models.

Thus, I suggest printing the output dimension of each layer
instead of the input dimension in the model summary.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Seungbaek Hong <sb92.hong@samsung.com>
15 months agoremove unused variable
hyeonseok lee [Fri, 21 Jul 2023 12:40:56 +0000 (21:40 +0900)]
remove unused variable

 - Remove unused variables

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
15 months agoremove warning flags related to compile with gcc-13
hyeonseok lee [Fri, 21 Jul 2023 11:12:38 +0000 (20:12 +0900)]
remove warning flags related to compile with gcc-13

 - Remove warning flags which helped to compile with gcc 13.
 - Remove a multiout testcase, because the test cannot guarantee the multiout layer order

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
16 months ago[ahub] fix ahub issues
Seungbaek Hong [Wed, 19 Jul 2023 02:21:02 +0000 (11:21 +0900)]
[ahub] fix ahub issues

Fix some issues of svace and coverity.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Seungbaek Hong <sb92.hong@samsung.com>
16 months ago[Toolchain] Enable gcc-13 support
jijoong.moon [Fri, 21 Jul 2023 02:04:38 +0000 (11:04 +0900)]
[Toolchain] Enable gcc-13 support

This patch includes gcc-13 compatible fixes.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
16 months ago[graph_node] handle deprecated stl iterator
hyeonseok lee [Mon, 17 Jul 2023 11:42:13 +0000 (20:42 +0900)]
[graph_node] handle deprecated stl iterator

 - Explicitly provide the parameter, as the default parameter for the stl iterator is deprecated.
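The commit does not name the exact iterator, but a well-known C++17 deprecation in this area is the std::iterator base class, whose member types must now be spelled out explicitly instead of being inherited. A sketch under that assumption (not the actual graph_node code):

```cpp
#include <cstddef>
#include <iterator>

// Instead of inheriting from the deprecated std::iterator<...> base, spell
// out the five member types the standard library expects.
struct IntRangeIter {
  using iterator_category = std::forward_iterator_tag;
  using value_type = int;
  using difference_type = std::ptrdiff_t;
  using pointer = const int *;
  using reference = const int &;

  int v;
  int operator*() const { return v; }
  IntRangeIter &operator++() { ++v; return *this; }
  bool operator!=(const IntRangeIter &o) const { return v != o.v; }
};
```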

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
16 months ago[ Property ] Add Tensor Type property in model
jijoong.moon [Thu, 29 Jun 2023 12:36:30 +0000 (21:36 +0900)]
[ Property ] Add Tensor Type property in model

This PR enables the tensor type in model property as
"tensor_type=NHWC" or "tensor_type=NCHW". This information goes to
network_grap and layer node & manager.

Then, each layer can get the model tensor type information and it can
be used to request tensor or just using temporal tensor.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
16 months ago[ Tensor ] Support NHWC for dot, add/multiply_strided and other ops
Adwaith Anand [Wed, 28 Jun 2023 10:19:43 +0000 (15:49 +0530)]
[ Tensor ] Support NHWC for dot, add/multiply_strided and other ops

This PR includes changes of Tensor and TensorDim to support NHWC
computation for dot, add_strided, multiply_strided, cat, split,
and transpose. It also includes unittests to evaluate.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Adwaith Anand <adwaith.a@samsung.com>
Signed-off-by: Manohara HK <manohara.hk@samsung.com>
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
16 months ago[fix_ahub] Fix Ahub Defect
SeoHyungjun [Thu, 29 Jun 2023 02:15:26 +0000 (11:15 +0900)]
[fix_ahub] Fix Ahub Defect

The transfer_learning variable is set by the user and does not
change during execution.
Changed bool to const bool.

Signed-off-by: SeoHyungjun <hyungjun.seo@samsung.com>
16 months ago[fix_ahub] Fix AHub Defect
SeoHyungjun [Thu, 22 Jun 2023 07:31:21 +0000 (16:31 +0900)]
[fix_ahub] Fix AHub Defect

- Fixed the NNTrainerTrain constructor so that the member variable notifier is initialized.
- Fixed nntrainer_model_start_training to stop when nntrainer or the notifier is null.
- Fixed an AUTO_CAUSES_COPY issue.

Signed-off-by: SeoHyungjun <hyungjun.seo@samsung.com>
16 months ago[ Bug ] Fix the bug reading the weight for the batch normalization layer
jijoong.moon [Wed, 21 Jun 2023 06:53:52 +0000 (15:53 +0900)]
[ Bug ] Fix the bug reading the weight for the batch normalization layer

There is a bug when the model loads the data for the batch normalization
layer.

During requestWeights setup in the manager, it adds the max
execution order to the gradient for gradient clipping, but the variable
weight was also added. This PR fixes it.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
16 months ago[Application] Transfer learning example on Resnet-18
Seungbaek Hong [Mon, 19 Jun 2023 04:54:49 +0000 (13:54 +0900)]
[Application] Transfer learning example on Resnet-18

I added a transfer learning option to the resnet-18 example.

If this option is enabled, the example loads pre-trained weights
and freezes the weights of the backbone (feature extractor).
(It is just simple transfer learning.)

You can make pre-trained weights using the save_bin function
from our pytorch resnet-18 example.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Seungbaek Hong <sb92.hong@samsung.com>
17 months ago[Trivial] Fix Typo
Donghyeon Jeong [Wed, 14 Jun 2023 08:05:57 +0000 (17:05 +0900)]
[Trivial] Fix Typo

Fix Typo
- model_loader.h
- model_loader.cpp
- dynamic_training_optimization.h
- dynamic_training_optimization.cpp
- tensor_trainer_nntrainer.hh
- tensor_trainer_nntrainer.cc

Signed-off-by: Donghyeon Jeong <dhyeon.jeong@samsung.com>
17 months ago[Trivial] Fix typo
sungsik [Mon, 19 Jun 2023 00:57:35 +0000 (09:57 +0900)]
[Trivial] Fix typo

Found typos at:
* network_graph.h
* acti_func.cpp.h
* bn_layer.h
* common_properties.h
* concat_layer.cpp

Signed-off-by: sungsik <ss.kong@samsung.com>
17 months ago[Application] Fix Resnet18
SeoHyungjun [Thu, 1 Jun 2023 04:52:24 +0000 (13:52 +0900)]
[Application] Fix Resnet18

Fix it because the results of the computation in pytorch and nntrainer
are different.

padding was set to "same" when stride was 2 in the nntrainer resnet code.
In pytorch, this is an error. In addition, padding was not applied
normally in nntrainer, so the results were different. To solve this
problem, the parameters of the a1 layer (conv layer) have been modified.

Additionally, momentum and epsilon were added to the batch_norm layer.

Signed-off-by: SeoHyungjun <hyungjun.seo@samsung.com>
17 months ago[Typo] Fix tflite_interpreter typo error
Donghak PARK [Mon, 12 Jun 2023 10:14:16 +0000 (19:14 +0900)]
[Typo] Fix tflite_interpreter typo error

Fix typo in tflite_interpreter.cpp

Signed-off-by: Donghak PARK <donghak.park@samsung.com>
17 months ago[Typo] Fix typo
Donghak PARK [Mon, 12 Jun 2023 07:36:15 +0000 (16:36 +0900)]
[Typo] Fix typo

Fix Typo Error

-  nntrainer/compiler/recurrent_realizer.h
-  nntrainer/graph/graph_node.h
-  nntrainer/graph/network_graph.cpp
-  nntrainer/layers/addition_layer.cpp
-  nntrainer/layers/common_properties.h

Signed-off-by: Donghak PARK <donghak.park@samsung.com>
17 months ago[capi] fix notation for tizen 8.0
Seungbaek Hong [Wed, 7 Jun 2023 06:02:01 +0000 (15:02 +0900)]
[capi] fix notation for tizen 8.0

Fixed notation "tizen 7.5" to "tizen 8.0" for tizen release.

Signed-off-by: Seungbaek Hong <sb92.hong@samsung.com>
17 months ago[nnstreamer][trainer] Add getting model stats information
hyunil park [Mon, 15 May 2023 02:13:18 +0000 (11:13 +0900)]
[nnstreamer][trainer] Add getting model stats information

- epoch_complete_cb is called when one epoch ends in nntrainer and
  RunStats information is retrieved from the model.
- Send an event to NNStreamer; NNStreamer waits to receive results every epoch.
- Use getStatus and nnstreamer_trainer_notify_event()

Signed-off-by: hyunil park <hyunil46.park@samsung.com>
17 months ago[Trivial] Fix Typo
Donghak PARK [Mon, 22 May 2023 04:54:14 +0000 (13:54 +0900)]
[Trivial] Fix Typo

Fix Typo
- nntrainer_internal.h
- nntrainer.cpp
- unittest_tizen_capi_lr_scheduler.cpp
- unittest_tizen_capi_optimizer.cpp
- unittest_nntrainer_lr_scheduler.cpp

Signed-off-by: Donghak PARK <donghak.park@samsung.com>
17 months ago[Trivial] Fix typo
Seungbaek Hong [Wed, 31 May 2023 06:38:04 +0000 (15:38 +0900)]
[Trivial] Fix typo

fix typo error (requesing -> requesting).

Signed-off-by: Seungbaek Hong <sb92.hong@samsung.com>
17 months ago[Application] Update yolo v2 model similar to original model
Seungbaek Hong [Thu, 30 Mar 2023 10:32:01 +0000 (19:32 +0900)]
[Application] Update yolo v2 model similar to original model

The yolo v2 model was updated to be similar to the original yolo v2 model.

This model was intended to follow the original yolo v2 paper
as closely as possible, but for now
average pooling is temporarily used instead of the
re-organization module.

If the average pooling is replaced with the re-organization
module in the future, the rest is the same as the original
yolo v2 paper.

Both the PyTorch version and the NNTrainer version updated the model
structure, and it was verified that the same results could be obtained
by loading trained weights from PyTorch.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Seungbaek Hong <sb92.hong@samsung.com>
17 months ago[model] Add epoch complete callback
hyunil park [Mon, 15 May 2023 01:58:40 +0000 (10:58 +0900)]
[model] Add epoch complete callback

- Called at the end of an epoch
- Users can do what they need at the end of each epoch, e.g. get RunStats.

Signed-off-by: hyunil park <hyunil46.park@samsung.com>
17 months ago[activation] add gelu function
Seungbaek Hong [Mon, 22 May 2023 02:32:26 +0000 (11:32 +0900)]
[activation] add gelu function

Added the GELU activation function to support gpt.

I created a unittest for this using pytorch.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Seungbaek Hong <sb92.hong@samsung.com>
18 months ago[Tensor] Add gaussian error function to tensor
Seungbaek Hong [Tue, 16 May 2023 02:12:24 +0000 (11:12 +0900)]
[Tensor] Add gaussian error function to tensor

Added the gaussian error function (erf) to tensor
(to support the gelu activation function).

It is already implemented in the cmath standard library,
so I just wrap that function for our tensor operation.
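Wrapping the standard-library erf for element-wise use can be as small as this sketch (erf_elem is an illustrative name):

```cpp
#include <cmath>

// Element-wise erf delegating to the cmath standard library, as the commit
// describes wrapping std::erf for tensor use.
inline float erf_elem(float x) { return std::erf(x); }
```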

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Seungbaek Hong <sb92.hong@samsung.com>
18 months ago[Application] Fix Resnet Application -ENABLE_TFLITE_INTERPRETER CASES
DongHak Park [Mon, 24 Apr 2023 10:30:53 +0000 (19:30 +0900)]
[Application] Fix Resnet Application -ENABLE_TFLITE_INTERPRETER CASES

The TFLite interpreter does not support the cross loss type yet,
so in the Resnet application we made some macros to turn them into mse, and there was a wrong part.

In the ResNet application there was another macro for ENABLE_TEST; GTEST's result assumes that the application uses the cross loss.

For a correct result, fix some #if statements.

TODO: even with this fix, the TEST still fails regardless of the tflite-export-related code

Signed-off-by: DongHak Park <donghak.park@samsung.com>
18 months ago[unittest] remove meaningless unittest
hyeonseok lee [Wed, 26 Apr 2023 06:59:10 +0000 (15:59 +0900)]
[unittest] remove meaningless unittest

 - Remove meaningless unittest
 - Unify unittest

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
18 months ago[graph] fix web tct fail issue
hyeonseok lee [Fri, 21 Apr 2023 15:06:08 +0000 (00:06 +0900)]
[graph] fix web tct fail issue

 - replace insert with emplace
 - initialize member variable node_names

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
18 months ago[tct] fix coverity issues
Seungbaek Hong [Fri, 21 Apr 2023 08:33:43 +0000 (17:33 +0900)]
[tct] fix coverity issues

Fix some coverity issues.

This PR is still a work in progress.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Seungbaek Hong <sb92.hong@samsung.com>
19 months ago[NNstreamer] Change variable type of sub-plugin
hyunil park [Thu, 20 Apr 2023 07:11:15 +0000 (16:11 +0900)]
[NNstreamer] Change variable type of sub-plugin

- Change from int64_t to unsigned int
- bug-fix: when getting values from nnstreamer in an arm 32-bit environment, invalid values were passed

**Self evaluation:**
1. Build test:   [X]Passed [ ]Failed [ ]Skipped
2. Run test:     [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: hyunil park <hyunil46.park@samsung.com>
19 months ago[capi] fix comment for tizen release
Seungbaek Hong [Wed, 19 Apr 2023 08:13:36 +0000 (17:13 +0900)]
[capi] fix comment for tizen release

The notation for future version has been modified.

- "tizen 8.0" to "later version".

Signed-off-by: Seungbaek Hong <sb92.hong@samsung.com>
19 months ago[Application] Fix CVE issues in kotlin gson
jijoong.moon [Tue, 18 Apr 2023 22:43:32 +0000 (07:43 +0900)]
[Application] Fix CVE issues in kotlin gson

There are CVE issues before gson 2.8.9.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
19 months ago[capi] fix some api issues for tizen release
Seungbaek Hong [Mon, 17 Apr 2023 05:27:43 +0000 (14:27 +0900)]
[capi] fix some api issues for tizen release

I checked some api issues using the tizen-native-api-review-script.

I corrected most issues, but some errors still remain
in the "nntrainer_internal.h" file.

Three types of issues remain:
- enum names should end with '_e'
- struct names should end with '_s'

But I think it would be better not to rename these enums and
structs because they were already released in an earlier version.

The last type of issue seems to be a "false positive".
I think the tizen-api-review-script isn't aware of macro functions, etc.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Seungbaek Hong <sb92.hong@samsung.com>
19 months ago[capi] add identity layer to capi
hyeonseok lee [Mon, 17 Apr 2023 05:25:54 +0000 (14:25 +0900)]
[capi] add identity layer to capi

 - Added identity layer enum to capi

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
19 months ago[optimizer] add log when lr_scheduler property is also set in optimizer
hyeonseok lee [Fri, 14 Apr 2023 07:35:34 +0000 (16:35 +0900)]
[optimizer] add log when lr_scheduler property is also set in optimizer

 - Added a log when exponential learning rate scheduler properties (decay_rate, decay_steps) are set both in the optimizer and in the lr_scheduler

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
19 months ago[capi] add unittest for learning rate scheduler
hyeonseok lee [Thu, 13 Apr 2023 08:29:25 +0000 (17:29 +0900)]
[capi] add unittest for learning rate scheduler

 - Added learning rate scheduler related unittest

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
19 months ago[test] reorder tizen capi unittest
hyeonseok lee [Thu, 13 Apr 2023 07:34:57 +0000 (16:34 +0900)]
[test] reorder tizen capi unittest

 - Reorder unittest for sequential order

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
19 months ago[capi] add learning rate scheduler related api
hyeonseok lee [Thu, 13 Apr 2023 03:31:07 +0000 (12:31 +0900)]
[capi] add learning rate scheduler related api

 - Added learning rate scheduler create/destroy/set property/set property with single param  api
 - Added set learning rate scheduler to optimizer
 - Added ml_train_lr_scheduler_type_e enum
 - Fix some comments

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
19 months ago[ccapi] change setLearningRateScheduler function prototype
hyeonseok lee [Thu, 13 Apr 2023 03:09:36 +0000 (12:09 +0900)]
[ccapi] change setLearningRateScheduler function prototype

 - Changed the return type from void to int.
   The C-API will call this function, so it should return a status code.
 - Change learning rate scheduler pointer from unique_ptr to shared_ptr

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
19 months ago[ccapi] rename LearningRateType enum name to LearningRateSchedulerType
hyeonseok lee [Thu, 13 Apr 2023 02:56:31 +0000 (11:56 +0900)]
[ccapi] rename LearningRateType enum name to LearningRateSchedulerType

 - Renamed the enum LearningRateType to LearningRateSchedulerType for more descriptive naming

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
19 months ago[Application] dataloader for yolo
Seungbaek Hong [Tue, 21 Mar 2023 11:50:14 +0000 (20:50 +0900)]
[Application] dataloader for yolo

Add a detection dataloader for the yolo example.

- Set the target directory; the dataloader then loads the dataset
from the "images" and "annotations" folders.
- Currently, it only supports the "bmp" image format.
- To handle variable-length label data, it zero-pads the labels.
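The zero-padding step described above can be sketched as follows; `pad_labels`, `max_boxes`, and the `[class, x, y, w, h]` box layout are illustrative assumptions, not the actual dataloader API:

```python
def pad_labels(labels, max_boxes, box_dim=5):
    """Pad a per-image list of boxes ([class, x, y, w, h]) with zero rows.

    Images contain different numbers of objects, so labels are padded to a
    fixed length (max_boxes) to allow stacking them into one batch tensor.
    """
    padded = [list(box) for box in labels]  # copy so the input is untouched
    while len(padded) < max_boxes:
        padded.append([0.0] * box_dim)      # zero row = "no object"
    return padded[:max_boxes]               # truncate if there are too many
```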

Signed-off-by: Seungbaek Hong <sb92.hong@samsung.com>
19 months ago[TF Export] Update tflite_opnode
DongHak Park [Fri, 31 Mar 2023 07:58:12 +0000 (16:58 +0900)]
[TF Export] Update tflite_opnode

Update tflite_opnode
- Added an is_trainable flag for making fused ops; by checking whether a node is trainable, we can create fused ops for inference
- Added a MUL op for the fused BatchNormalization op

Signed-off-by: DongHak Park <donghak.park@samsung.com>
19 months ago[AHub] Fix AHub Defect
DongHak Park [Thu, 6 Apr 2023 07:06:39 +0000 (16:06 +0900)]
[AHub] Fix AHub Defect

Fix AHub Defect
- Added some exception-handling statements
- Changed `auto element` to `auto &element` to avoid unnecessary copies

Signed-off-by: DongHak Park <donghak.park@samsung.com>
19 months ago[Debian] Fix the debian package dependency
jijoong.moon [Thu, 6 Apr 2023 12:59:06 +0000 (21:59 +0900)]
[Debian] Fix the debian package dependency

nntrainer-dev depends on ccapi-ml-training-dev and
capi-ml-training-dev.
This PR adds these dependencies for nntrainer-dev.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
19 months ago[NNStreamer] fix input dimension with MAX RANK with 8
jijoong.moon [Fri, 7 Apr 2023 06:37:38 +0000 (15:37 +0900)]
[NNStreamer] fix input dimension with MAX RANK with 8

This PR fixes the rank limit so that input dimensions with rank greater than 4 are supported.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
19 months ago[NNStreamer] Remove rank check temporary
jijoong.moon [Thu, 6 Apr 2023 12:49:23 +0000 (21:49 +0900)]
[NNStreamer] Remove rank check temporary

There was a change in nnstreamer that extends the max rank limit to 16,
so we need to remove the rank-limit check until they provide the current rank.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
19 months ago[ Tensor ] Add default Tensor format in Tensor constructor
jijoong.moon [Mon, 3 Apr 2023 00:09:24 +0000 (09:09 +0900)]
[ Tensor ] Add default Tensor format in Tensor constructor

This PR adds the default Tensor format (NCHW) to the Tensor
constructor. It also includes unittest cases for the NHWC Tensor
format. Even though Tensor now has a format property, the actual
tensor operations still need to be implemented.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
19 months ago[ TensorDim ] Add tensor format in TensorDim
jijoong.moon [Fri, 31 Mar 2023 06:58:02 +0000 (15:58 +0900)]
[ TensorDim ] Add tensor format in TensorDim

In order to support Channel Last & Channel First at the same time, we
need to define the Tensor format in the TensorDim class.

According to the format of TensorDim, it returns the proper value for
APIs such as batch(), channel(), height() and width().

The default format is Channel First, as it is now.

If nothing is provided when a Tensor is constructed, it is set to
NCHW.
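As an illustration of this mapping (the real TensorDim is C++; this Python sketch and its names are assumptions, not the library's API), a format field can redirect the accessors onto the same four stored dimensions:

```python
class TensorDim:
    """Sketch: one 4-element dims array, interpreted per the format flag."""

    def __init__(self, dims, fmt="NCHW"):  # default is channel-first (NCHW)
        assert len(dims) == 4 and fmt in ("NCHW", "NHWC")
        self.dims, self.fmt = list(dims), fmt

    def batch(self):
        return self.dims[0]  # batch is dim 0 in both formats

    def channel(self):
        return self.dims[1] if self.fmt == "NCHW" else self.dims[3]

    def height(self):
        return self.dims[2] if self.fmt == "NCHW" else self.dims[1]

    def width(self):
        return self.dims[3] if self.fmt == "NCHW" else self.dims[2]
```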

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
19 months ago[Ahub] Fix svace issue
Seungbaek Hong [Tue, 4 Apr 2023 06:45:49 +0000 (15:45 +0900)]
[Ahub] Fix svace issue

Fixed some svace issues on svace-tizen_7.5.

Add exception handling to
- tensor_trainer_nntrainer.cc
- genModelExeOrder.cpp

Signed-off-by: Seungbaek Hong <sb92.hong@samsung.com>
19 months ago[AHub] Fix AHub Defect
DongHak Park [Tue, 4 Apr 2023 06:21:26 +0000 (15:21 +0900)]
[AHub] Fix AHub Defect

Fix AHub Defect
- Changed strerror to strerror_r (the thread-safe variant)

Signed-off-by: DongHak Park <donghak.park@samsung.com>
19 months ago[GitAction] Fix Duplicate Approvals
DongHak Park [Mon, 3 Apr 2023 04:59:10 +0000 (13:59 +0900)]
[GitAction] Fix Duplicate Approvals

Fix duplicate approvals
- Previously, the Git Action counted every approval event.
- For example, if someone approves a PR, the PR is updated, and they approve again, they were counted twice.
- Likewise, if CI approves a PR after every success, each of those approvals was counted.

The given Git Action code performs the following tasks:
1. Uses the curl command to call the GitHub REST API and retrieve a list of reviews for the given pull request in the repository.
2. Uses the jq command to filter the reviews based on whether their state is "APPROVED".
3. Extracts the login names of the users who wrote each of the filtered reviews.
4. Uses jq's unique function to filter out only the unique login names, removing any duplicates.
5. Calculates the length of the resulting list of unique login names, giving the count of unique users who have approved the pull request.

Therefore, this code removes any duplicate approvals, counting each unique user only once, even if they have approved more than once.
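The deduplication logic above can be sketched in Python over review objects shaped like the GitHub pull-request reviews API response (the actual workflow does this with curl and jq; the function name is illustrative):

```python
def count_unique_approvers(reviews):
    """Count distinct users with an APPROVED review.

    `reviews` is a list of dicts shaped like the GitHub REST API's
    pull-request reviews response: each has a "state" and a "user" with
    a "login". A set deduplicates repeat approvals by the same user.
    """
    approvers = {r["user"]["login"] for r in reviews if r["state"] == "APPROVED"}
    return len(approvers)
```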

Signed-off-by: DongHak Park <donghak.park@samsung.com>
19 months ago[Application] add validation process of yolo example on pytorch
Seungbaek Hong [Thu, 30 Mar 2023 06:24:42 +0000 (15:24 +0900)]
[Application] add validation process of yolo example on pytorch

Add validation process of yolo example using validation dataset on
pytorch.

Signed-off-by: Seungbaek Hong <sb92.hong@samsung.com>
19 months ago[Application] Update yolo example of torch for tracking gradients
Seungbaek Hong [Wed, 29 Mar 2023 09:41:22 +0000 (18:41 +0900)]
[Application] Update yolo example of torch for tracking gradients

To track the gradients of the Loss class,
I removed the in-place operation in the Loss class.

I also added a hook_variable function for tracking specific tensors.

We can register a specific variable with a name using the hook_variable
function, then check its gradient values using the
print_hook_variable function after backwarding.

Signed-off-by: Seungbaek Hong <sb92.hong@samsung.com>
19 months ago[ Custom ] Add custom optimizer example in Application
jijoong.moon [Fri, 31 Mar 2023 00:23:02 +0000 (09:23 +0900)]
[ Custom ] Add custom optimizer example in Application

This PR includes the custom optimizer 'momentum' example in
Application/Custom.

It adds the testcase and demo implementation.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
19 months ago[ Application ] Fix the layer type of Simpleshot application
jijoong.moon [Tue, 28 Mar 2023 23:04:54 +0000 (08:04 +0900)]
[ Application ] Fix the layer type of Simpleshot application

There was an error in the layer name for "CL2N".
This PR fixes the error.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
19 months ago[Layer] Support batch computation for l2norm layer
jijoong.moon [Mon, 27 Mar 2023 06:49:12 +0000 (15:49 +0900)]
[Layer] Support batch computation for l2norm layer

This PR enables the l2norm preprocessor layer to support batch
computation.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
19 months ago[ Tensor ] allow to have inc 0 for broadcasting for raw computation
jijoong.moon [Mon, 27 Mar 2023 04:22:49 +0000 (13:22 +0900)]
[ Tensor ] allow to have inc 0 for broadcasting for raw computation

This PR allows incX and incY to be 0 in tensor raw computation to
enable broadcasting tensor operations.
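The idea can be sketched with a SAXPY-style raw loop: an increment of 0 keeps reading (or writing) the same element, which is exactly how broadcasting falls out of strided BLAS-like code. This is an illustrative sketch, not the library's implementation:

```python
def axpy(n, alpha, x, inc_x, y, inc_y):
    """y[iy] += alpha * x[ix] over n steps, with arbitrary strides.

    With inc_x == 0, the single value x[0] is broadcast across all of y;
    with inc_y == 0, all contributions accumulate into y[0] (a reduction).
    """
    ix = iy = 0
    for _ in range(n):
        y[iy] += alpha * x[ix]
        ix += inc_x
        iy += inc_y
    return y
```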

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
19 months ago[Release] NNTrainer v0.5.0 release
jijoong.moon [Mon, 3 Apr 2023 23:58:46 +0000 (08:58 +0900)]
[Release] NNTrainer v0.5.0 release

NNTrainer v0.5.0 is released.


**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
19 months ago[TEST] Add execution order generator
Jiho Chu [Tue, 14 Feb 2023 08:06:51 +0000 (17:06 +0900)]
[TEST] Add execution order generator

It generates an execution-order golden file for each model.
Each golden file consists of the execution orders of each tensor.

Signed-off-by: Jiho Chu <jiho.chu@samsung.com>
19 months ago[TEST] Add execution order test
Jiho Chu [Mon, 13 Feb 2023 09:39:02 +0000 (18:39 +0900)]
[TEST] Add execution order test

This patch verifies execution orders for model graph.

Signed-off-by: Jiho Chu <jiho.chu@samsung.com>
19 months ago[DEBUG] Add interface for getting execution order
Jiho Chu [Mon, 13 Feb 2023 09:32:08 +0000 (18:32 +0900)]
[DEBUG] Add interface for getting execution order

This patch implements a debugging feature for execution order.

The execution order for each tensor is decided during the initialize
and finalize phases. To check its validity, an interface for getting
the execution order is added to the network graph class. It can gather
both the variables and gradients of the tensors.

Signed-off-by: Jiho Chu <jiho.chu@samsung.com>