platform/core/ml/nntrainer.git
4 weeks ago[Coverity] Fix coverity issues accepted/tizen_unified accepted/tizen_unified_x accepted/tizen_unified_x_asan master tizen accepted/tizen/unified/20240402.151301 accepted/tizen/unified/20240402.163546 accepted/tizen/unified/x/20240403.102929 accepted/tizen/unified/x/asan/20240415.123316
Donghyeon Jeong [Fri, 29 Mar 2024 06:22:09 +0000 (15:22 +0900)]
[Coverity] Fix coverity issues

This PR resolves Coverity issues of overflow, use of auto that causes a copy, missing lock, and thread lock.

**Self-evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test:   [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Donghyeon Jeong <dhyeon.jeong@samsung.com>
4 weeks ago[Ahub] Fix coverage issue
SeoHyungjun [Wed, 31 Jan 2024 03:52:15 +0000 (12:52 +0900)]
[Ahub] Fix coverage issue

To resolve AUTO_CAUSES_COPY, auto was changed to const auto&.

Signed-off-by: SeoHyungjun <hyungjun.seo@samsung.com>
5 weeks ago[svace] fix svace issues
Seungbaek Hong [Wed, 27 Mar 2024 09:44:06 +0000 (18:44 +0900)]
[svace] fix svace issues

Fixed all Svace issues on the main branch.

**Self-evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test:   [X]Passed [ ]Failed [ ]Skipped

Change-Id: I507379b2ee5f4d15c306408efb56347afaba23ba
Signed-off-by: Seungbaek Hong <sb92.hong@samsung.com>
5 weeks ago[Coverity] Fix coverity issues
Donghyeon Jeong [Thu, 28 Mar 2024 04:20:52 +0000 (13:20 +0900)]
[Coverity] Fix coverity issues

This PR resolves coverity issues of use of auto that causes a copy and missing lock.

**Self-evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test:   [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Donghyeon Jeong <dhyeon.jeong@samsung.com>
5 weeks ago[coverity] fix coverity issues
hyeonseok lee [Mon, 25 Mar 2024 10:40:50 +0000 (19:40 +0900)]
[coverity] fix coverity issues

 - Added const auto & to avoid copy of an object
 - Added missing lock

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
5 weeks ago[ coverity ] Fix Coverity issue
Donghak PARK [Fri, 22 Mar 2024 06:04:46 +0000 (15:04 +0900)]
[ coverity ] Fix Coverity issue

Fix Coverity issue on
- /test/unittest/layers/layers_golden_tests.cpp
- /test/unittest/models/unittest_models_recurrent.cpp
- /test/unittest/unittest_nntrainer_models.cpp

Resolves:
```
Use of auto that causes a copy (AUTO_CAUSES_COPY)
auto_causes_copy: This lambda has an unspecified return type
copy: This return statement creates a copy.
```
**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Donghak PARK <donghak.park@samsung.com>
5 weeks ago[coverity] fix coverity issue
Eunju Yang [Thu, 21 Mar 2024 07:51:00 +0000 (16:51 +0900)]
[coverity] fix coverity issue

- This commit fixes the coverity issues
- AUTO_CAUSES_COPY
- MISSING_LOCK

Self-evaluation:

1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Eunju Yang <ej.yang@samsung.com>
5 weeks ago[coverity] fix coverity issue
Seungbaek Hong [Mon, 25 Mar 2024 02:27:29 +0000 (11:27 +0900)]
[coverity] fix coverity issue

Fix coverity issue on
- /test/unittest/layers/layers_golden_recurrent.cpp

The other issues assigned have already been fixed.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Seungbaek Hong <sb92.hong@samsung.com>
5 weeks ago[Coverity] Fix the coverity issue
Donghyeon Jeong [Thu, 21 Mar 2024 07:42:43 +0000 (16:42 +0900)]
[Coverity] Fix the coverity issue

This PR resolves the coverity issues of use of auto that causes a copy.

**Self-evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test:   [X]Passed [ ]Failed [ ]Skipped

Change-Id: Id29ed85298bd22002f81782c122041988dca22b0
Signed-off-by: Donghyeon Jeong <dhyeon.jeong@samsung.com>
5 weeks ago[ coverity ] Fix coverity issue
skykongkong8 [Thu, 21 Mar 2024 10:34:32 +0000 (19:34 +0900)]
[ coverity ] Fix coverity issue

- Fix coverity issue 1774230, 1774235, 1774238, 1774239, 1774243

Resolves:
```
Use of auto that causes a copy (AUTO_CAUSES_COPY)
auto_causes_copy: This lambda has an unspecified return type
copy: This return statement creates a copy.
```

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: skykongkong8 <ss.kong@samsung.com>
6 weeks ago[Coverity] Fix coverity issue
Eunju Yang [Fri, 2 Feb 2024 08:37:24 +0000 (17:37 +0900)]
[Coverity] Fix coverity issue

Issue: 1745839
Signed-off-by: Eunju Yang <ej.yang@samsung.com>
6 weeks ago[Coverity] Fix coverity issue
Donghak PARK [Mon, 5 Feb 2024 03:30:40 +0000 (12:30 +0900)]
[Coverity] Fix coverity issue

Fix Coverity issues
- 1746142
- 1744671

I will fix iteration_queue.cpp's deadlock issue soon.

**Changes proposed in this PR:**
- modified:   nntrainer/compiler/ini_interpreter.cpp
- modified:   nntrainer/dataset/dir_data_producers.cpp

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Donghak PARK <donghak.park@samsung.com>
6 weeks ago[Coverity] fix coverity issues
Seungbaek Hong [Tue, 30 Jan 2024 08:00:00 +0000 (17:00 +0900)]
[Coverity] fix coverity issues

fix some coverity issues.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Seungbaek Hong <sb92.hong@samsung.com>
6 weeks ago[ Bug ] Fix coverity issues
skykongkong8 [Wed, 31 Jan 2024 06:25:29 +0000 (15:25 +0900)]
[ Bug ] Fix coverity issues

- Make non-const variables const, since their values never change in practice
- Use const auto & to avoid object copy

Resolves:
```
non-const type variable, but its value is never changed.
auto_causes_copy
```

**Self evaluation:**
1. Build test:     [X]Passed [ ]Failed [ ]Skipped
2. Run test:     [X]Passed [ ]Failed [ ]Skipped

Change-Id: Ib54e1a4406438c64c43eb2ab8768e8a47556f7d2
Signed-off-by: skykongkong8 <ss.kong@samsung.com>
6 weeks ago[coverity] Fix coverity issues
Donghyeon Jeong [Wed, 31 Jan 2024 06:28:38 +0000 (15:28 +0900)]
[coverity] Fix coverity issues

This PR resolves the coverity issues that were identified.

**Changes proposed in this PR:**
- Specify the return type of the lambda function
- Use reference to not copy the object.

This fixes:
- Use of auto that causes a copy (AUTO_CAUSES_COPY)

**Self-evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test:   [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Donghyeon Jeong <dhyeon.jeong@samsung.com>
6 weeks ago[FIX] Fix coverity issues
Jiho Chu [Wed, 31 Jan 2024 01:21:08 +0000 (10:21 +0900)]
[FIX] Fix coverity issues

Issue:
1740106
1742375
1747001

Signed-off-by: Jiho Chu <jiho.chu@samsung.com>
6 weeks ago[bug] fix coverity issues
hyeonseok lee [Tue, 30 Jan 2024 07:58:24 +0000 (16:58 +0900)]
[bug] fix coverity issues

 - Specify the lambda return type to avoid object copy

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
6 weeks ago[coverity] fix coverity issues
Eunju Yang [Mon, 18 Mar 2024 08:25:53 +0000 (17:25 +0900)]
[coverity] fix coverity issues

This commit fixes coverity issues of auto_causes_copy
- 1739360
- 1740106

Self-evaluation:

Build test: [X]Passed [ ]Failed [ ]Skipped
Run test: [X]Passed [ ]Failed [ ]Skipped

Change-Id: I92f7c25de243c931226e65f114613a8d6dc84ec0
Signed-off-by: Eunju Yang <ej.yang@samsung.com>
6 weeks ago[Coverity] Fix the coverity issue
Donghyeon Jeong [Mon, 18 Mar 2024 07:42:58 +0000 (16:42 +0900)]
[Coverity] Fix the coverity issue

This PR resolves the coverity issues of missing lock and use of auto that causes a copy.

**Self-evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test:   [X]Passed [ ]Failed [ ]Skipped

Change-Id: Ibc56e24d8329b650731a9378ee3a757b7986e2f9
Signed-off-by: Donghyeon Jeong <dhyeon.jeong@samsung.com>
4 months agoFixed the build error using gcc 13 accepted/tizen_unified_riscv accepted/tizen/unified/20231220.165048 accepted/tizen/unified/riscv/20231220.095327
wchang kim [Tue, 4 Jul 2023 02:29:10 +0000 (11:29 +0900)]
Fixed the build error using gcc 13

Change-Id: I0721d29f34c4ac71e1ae96c2bc62e9d2edb15580

7 months ago[Coverity] Fix Coverity issue accepted/tizen_8.0_unified tizen_8.0 accepted/tizen/8.0/unified/20231005.093407 accepted/tizen/unified/20230925.160804 tizen_8.0_m2_release
Donghak PARK [Wed, 20 Sep 2023 10:37:39 +0000 (19:37 +0900)]
[Coverity] Fix Coverity issue

To fix the uninitialized output_axis, set its default value to 3
- in the tensor.h file

Signed-off-by: Donghak PARK <donghak.park@samsung.com>
7 months ago[Coverity] Fix Coverity Issue
Donghak PARK [Tue, 19 Sep 2023 08:44:52 +0000 (17:44 +0900)]
[Coverity] Fix Coverity Issue

Fix Coverity Issues
- set default output_axis = 3

Signed-off-by: Donghak PARK <donghak.park@samsung.com>
7 months ago[Coverity] Fix Coverity Issue
Donghak PARK [Tue, 19 Sep 2023 08:33:12 +0000 (17:33 +0900)]
[Coverity] Fix Coverity Issue

Remove local reference return
- it was already removed in the latest version but not yet merged into the main branch

Signed-off-by: Donghak PARK <donghak.park@samsung.com>
7 months ago[Coverity] Fix Coverity issue
Donghak PARK [Tue, 19 Sep 2023 08:29:22 +0000 (17:29 +0900)]
[Coverity] Fix Coverity issue

Fix "may be NULL and is dereferenced" at blas_neon.cpp

Check for NULL when malloc fails

Signed-off-by: Donghak PARK <donghak.park@samsung.com>
7 months ago[Svace] Fix Ahub issues
SeoHyungjun [Tue, 19 Sep 2023 07:04:52 +0000 (16:04 +0900)]
[Svace] Fix Ahub issues

Added exception handling when malloc fails.

The value of the idx variable has been changed to be initialized when declared.

In the tensor constructor, output_axis was changed to be initialized to 3.

Signed-off-by: SeoHyungjun <hyungjun.seo@samsung.com>
7 months ago[Coverity] Fix Coverity issue
Donghak PARK [Tue, 29 Aug 2023 08:00:38 +0000 (17:00 +0900)]
[Coverity] Fix Coverity issue

Fix Coverity issue: auto_causes_copy

Signed-off-by: Donghak PARK <donghak.park@samsung.com>
7 months ago[Coverity] Fix issue on Draw_Classification Application
Donghak PARK [Tue, 29 Aug 2023 07:52:01 +0000 (16:52 +0900)]
[Coverity] Fix issue on Draw_Classification Application

Fix Coverity issue
- leaked_storage issue: on failure, in_data must be destroyed

Signed-off-by: Donghak PARK <donghak.park@samsung.com>
7 months ago[Typo] Fix typo
Donghak PARK [Tue, 29 Aug 2023 06:52:21 +0000 (15:52 +0900)]
[Typo] Fix typo

Fix typo on Simpleshot application's README.md file

Signed-off-by: Donghak PARK <donghak.park@samsung.com>
7 months ago[Coverity] Fix Coverity issue on task_runner.cpp
Donghak PARK [Tue, 29 Aug 2023 06:33:41 +0000 (15:33 +0900)]
[Coverity] Fix Coverity issue on task_runner.cpp

Fix Coverity issue
- returned_null: getcwd() can potentially return nullptr, so check its value

Signed-off-by: Donghak PARK <donghak.park@samsung.com>
7 months ago[Ahub] Fix Ahub issue
SeoHyungjun [Thu, 24 Aug 2023 09:41:39 +0000 (18:41 +0900)]
[Ahub] Fix Ahub issue

The second argument of tensor_dtype is used as std::regex(string),
but there was no handling for std::regex_error. Created a getRegex()
function because a similar form is used elsewhere in the code. The
getRegex() function takes a string and returns a std::regex object.

Signed-off-by: SeoHyungjun <hyungjun.seo@samsung.com>
7 months ago[Ahub] Fix Ahub issue
SeoHyungjun [Thu, 24 Aug 2023 08:10:51 +0000 (17:10 +0900)]
[Ahub] Fix Ahub issue

Previously, nntrainer only supported fp32, so it did not need to
change the data type, but this is needed to support fp16. If the
tensor type is fp16, get the input_dim of InitLayerContext through
getInputDimentions and call setDataType. getInputDimentions returns
a const object, so a getMutableInputDimentions function was added,
because the return object of getInputDimentions cannot be modified.

Signed-off-by: SeoHyungjun <hyungjun.seo@samsung.com>
7 months ago[Ahub] Fix Ahub issue
SeoHyungjun [Wed, 23 Aug 2023 11:57:49 +0000 (20:57 +0900)]
[Ahub] Fix Ahub issue

The condition of the while statement has been modified to resolve
'Dereferencing iterator lhs_iter though it is already past the end
of its container'. In fact, the above if condition only passes when
the sizes of the two containers are equal, so there is no problem
using '||'. However, it has been edited for clarity.

Signed-off-by: SeoHyungjun <hyungjun.seo@samsung.com>
7 months ago[Ahub] Fix Ahub issue
SeoHyungjun [Wed, 23 Aug 2023 07:26:03 +0000 (16:26 +0900)]
[Ahub] Fix Ahub issue

Restore did not work as expected after changing the ostream type.

Added the missing code to restore the variable out, and moved the
code that set the nesting depth at the wrong location to the
correct place.

Signed-off-by: SeoHyungjun <hyungjun.seo@samsung.com>
7 months ago[Ahub] Fix Ahub issue
SeoHyungjun [Wed, 23 Aug 2023 04:44:48 +0000 (13:44 +0900)]
[Ahub] Fix Ahub issue

The 'initialized' variable receives a pointer via malloc.
If malloc fails, it will be null.
However, since this failure is not handled, accessing initialized[i] dereferences null + i.
Exception handling has been added to prevent this problem.

Signed-off-by: SeoHyungjun <hyungjun.seo@samsung.com>
7 months ago[Ahub] Fix Ahub issue
SeoHyungjun [Wed, 23 Aug 2023 02:34:29 +0000 (11:34 +0900)]
[Ahub] Fix Ahub issue

changed 'auto' to 'auto &' in multiout_realizer.cpp.

Signed-off-by: SeoHyungjun <hyungjun.seo@samsung.com>
7 months ago[Ahub] Fix Ahub issue
SeoHyungjun [Wed, 23 Aug 2023 02:33:13 +0000 (11:33 +0900)]
[Ahub] Fix Ahub issue

Fixed the part where buf is not initialized and used in nntrainer_logger.cpp.
The array is now initialized with ASCII 0 ('\0').

Signed-off-by: SeoHyungjun <hyungjun.seo@samsung.com>
7 months ago[Application] bugfix for YOLO version 2
hyunil park [Mon, 18 Sep 2023 08:15:03 +0000 (17:15 +0900)]
[Application] bugfix for YOLO version 2

Training does not proceed because the layer names are different.

- reorg layer name is changed from reorg to reorg_layer

Signed-off-by: hyunil park <hyunil46.park@samsung.com>
7 months agoRevert "[Release] NNTrainer v0.5.1 release"
jijoong.moon [Mon, 18 Sep 2023 14:17:07 +0000 (23:17 +0900)]
Revert "[Release] NNTrainer v0.5.1 release"

This reverts commit 258967dcad51caffde61a06f46bde7f6329622d9.

Signed-off-by: Jijoong Moon <jijoong.moon@samsung.com>
7 months ago[Application] handle getcwd return null pointer
hyeonseok lee [Thu, 14 Sep 2023 08:17:49 +0000 (17:17 +0900)]
[Application] handle getcwd return null pointer

 - Handle when getcwd return NULL to prevent dereference null pointer

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
7 months ago[Release] NNTrainer v0.5.1 release
jijoong.moon [Tue, 12 Sep 2023 01:38:27 +0000 (10:38 +0900)]
[Release] NNTrainer v0.5.1 release

NNTrainer v0.5.1 is released.

**Changes proposed in this PR:**
- Added TOC generator for README.md

Resolves:

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
7 months ago[Application] Setup meson file for libyolov2_loss_layer.so
Seungbaek Hong [Wed, 13 Sep 2023 05:48:35 +0000 (14:48 +0900)]
[Application] Setup meson file for libyolov2_loss_layer.so

Setup meson file for making libyolov2_loss_layer.so in yolo v2
application.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Seungbaek Hong <sb92.hong@samsung.com>
7 months ago[Wait for #2177,#2213][Application] rebase for yolo v2
Seungbaek Hong [Wed, 31 May 2023 06:14:12 +0000 (15:14 +0900)]
[Wait for #2177,#2213][Application] rebase for yolo v2

I've rebased #2177 (loss for yolo) and #2213 (custom layer for yolo),
because the author of PR #2177 is absent for now.

If someone needs to use yolo v2, then use this PR.

I'll update the documentation for this later.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Seungbaek Hong <sb92.hong@samsung.com>
7 months ago[Application] match nntrainer and pytorch yolo model
hyeonseok lee [Thu, 23 Mar 2023 10:38:20 +0000 (19:38 +0900)]
[Application] match nntrainer and pytorch yolo model

 - Match option value like epsilon, momentum
 - This commit will match nntrainer yolo v2 output with pytorch yolo v2

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
7 months ago[Application] implement yolo v2 loss backwarding
hyeonseok lee [Thu, 6 Apr 2023 11:22:53 +0000 (20:22 +0900)]
[Application] implement yolo v2 loss backwarding

 - Implement yolo v2 loss layer backwarding

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
7 months ago[Application] implement yolo v2 forward
hyeonseok lee [Wed, 22 Mar 2023 04:44:40 +0000 (13:44 +0900)]
[Application] implement yolo v2 forward

 - Implement yolo v2 forward

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
7 months ago[yolo_v2] yolo v2 loss scaffold
hyeonseok lee [Thu, 9 Mar 2023 02:22:12 +0000 (11:22 +0900)]
[yolo_v2] yolo v2 loss scaffold

 - Added yolo v2 loss scaffold

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
7 months ago[Tensor] Unsigned Quantized Tensor
Donghyeon Jeong [Tue, 12 Sep 2023 00:09:35 +0000 (09:09 +0900)]
[Tensor] Unsigned Quantized Tensor

- Quantized tensor values are unsigned with zero points
- Layer context dequantize quantized tensor when request weight
- Template dequantize function

**Self evaluation:**
1. Build test:   [X]Passed [ ]Failed [ ]Skipped
2. Run test:     [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Donghyeon Jeong <dhyeon.jeong@samsung.com>
7 months ago[Tensor] Quantized Tensor (Int 4) with Scale
Donghyeon Jeong [Tue, 5 Sep 2023 02:25:40 +0000 (11:25 +0900)]
[Tensor] Quantized Tensor (Int 4) with Scale

- Quantized Tensor is now available with Int 4 with scale.
- Two Int 4 values use one Int 8, which each uses 4 bits
- Dequantization is performed by multiplying scaling factors with a given index (b, c, h, w).
- Only read (getValueQint4), write (setValue), and dequantization operations are allowed.

**Self-evaluation:**
1. Build test:   [X]Passed [ ]Failed [ ]Skipped
2. Run test:     [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Donghyeon Jeong <dhyeon.jeong@samsung.com>
7 months ago[Application] add re-organization layer to Yolo v2
Seungbaek Hong [Tue, 30 May 2023 07:04:34 +0000 (16:04 +0900)]
[Application] add re-organization layer to Yolo v2

Added a re-organization layer to the yolo v2 examples
of nntrainer and pytorch.

Signed-off-by: Seungbaek Hong <sb92.hong@samsung.com>
7 months ago[Tensor] Quantized Tensor (Int 8) with Scale
Donghyeon Jeong [Wed, 30 Aug 2023 23:19:30 +0000 (08:19 +0900)]
[Tensor] Quantized Tensor (Int 8) with Scale

- Quantized Tensor is now present with Int 8 with scale.
- Dequantization is performed by multiplying values by a scaling factor for channels.
- Only read, write, and dequantization operations are allowed.

**Self evaluation:**
1. Build test:   [X]Passed [ ]Failed [ ]Skipped
2. Run test:     [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Donghyeon Jeong <dhyeon.jeong@samsung.com>
7 months ago[sub-plugin] Modify synchronization mechanism between push_data and getSample
hyunil park [Tue, 29 Aug 2023 09:23:25 +0000 (18:23 +0900)]
[sub-plugin] Modify synchronization mechanism between push_data and getSample

Modify synchronization mechanism between push_data and getSample
- Remove some member variable
- Add member function to check queue

**Self evaluation:**
1. Build test:   [X]Passed [ ]Failed [ ]Skipped
2. Run test:     [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: hyunil park <hyunil46.park@samsung.com>
7 months ago[blas/neon] SGEMM Neon execution for any M value
Debadri Samaddar [Fri, 8 Sep 2023 11:13:34 +0000 (16:43 +0530)]
[blas/neon] SGEMM Neon execution for any M value

Used padded calculations for SGEMM using NEON for any value of M.
Where M is the number of rows in output matrix.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Debadri Samaddar <s.debadri@samsung.com>
7 months ago[blas/neon] Optimized SGEMM when both inputs are transposed
Debadri Samaddar [Thu, 31 Aug 2023 09:09:20 +0000 (14:39 +0530)]
[blas/neon] Optimized SGEMM when both inputs are transposed

Optimized sgemm stub when both A and B are transposed

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Debadri Samaddar <s.debadri@samsung.com>
7 months ago[blas/neon] Added unit test for NEON fp16 SGEMM
Debadri Samaddar [Wed, 30 Aug 2023 14:01:27 +0000 (19:31 +0530)]
[blas/neon] Added unit test for NEON fp16 SGEMM

Added UT for NEON fp16 implementation of SGEMM.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Debadri Samaddar <s.debadri@samsung.com>
7 months ago[blas/neon] NEON fp16 implementation of SGEMM
Debadri Samaddar [Wed, 30 Aug 2023 13:57:47 +0000 (19:27 +0530)]
[blas/neon] NEON fp16 implementation of SGEMM

SGEMM fp16 implementation for Android (ARM) using NEON.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Debadri Samaddar <s.debadri@samsung.com>
8 months ago[Typo] Fix typo
Donghak PARK [Tue, 29 Aug 2023 13:38:39 +0000 (22:38 +0900)]
[Typo] Fix typo

Fix typo
- nntrainer/models/neuralnet.cpp

Signed-off-by: Donghak PARK <donghak.park@samsung.com>
8 months ago[tflite_export] Set model batch 1
Donghak PARK [Tue, 29 Aug 2023 13:32:56 +0000 (22:32 +0900)]
[tflite_export] Set model batch 1

Currently, after training a model and exporting it to the tflite format, the model is exported with the batch size used for training.
However, inference does not require that batch size, and when converting to the TensorFlow Lite format in TensorFlow, the batch size is set to 1.

Signed-off-by: Donghak PARK <donghak.park@samsung.com>
8 months ago[blas/neon] Optimization on SGEMV fp16 implementation
Debadri Samaddar [Tue, 29 Aug 2023 08:37:19 +0000 (14:07 +0530)]
[blas/neon] Optimization on SGEMV fp16 implementation

Optimized fp16 implementation of SGEMV using NEON to run on Android(ARM).

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Debadri Samaddar <s.debadri@samsung.com>
8 months ago[sub-plugin] Add function to stop model training
hyunil park [Tue, 8 Aug 2023 02:42:30 +0000 (11:42 +0900)]
[sub-plugin] Add function to stop model training

nnstreamer tensor_trainer call this function to stop model training

**Self evaluation:**
1. Build test:   [X]Passed [ ]Failed [ ]Skipped
2. Run test:     [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: hyunil park <hyunil46.park@samsung.com>
8 months ago[ blas/neon ] Add NEON fp16 function for isamax
Debadri Samaddar [Thu, 24 Aug 2023 13:23:18 +0000 (18:53 +0530)]
[ blas/neon ] Add NEON fp16 function for isamax

Enable neon isamax function for Android (ARM) fp16 computation.
Add unit test for fp16 isamax function in Android(ARM).

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Debadri Samaddar <s.debadri@samsung.com>
8 months ago[ blas/neon ] Add NEON fp16 function for scopy
Debadri Samaddar [Tue, 22 Aug 2023 15:57:34 +0000 (21:27 +0530)]
[ blas/neon ] Add NEON fp16 function for scopy

Enable neon scopy function for Android (ARM) fp16 computation.
Add unit test for fp16 scopy function in Android(ARM).

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Debadri Samaddar <s.debadri@samsung.com>
8 months ago[ blas/neon ] Add NEON fp16 function for sscal
Debadri Samaddar [Thu, 17 Aug 2023 14:47:55 +0000 (20:17 +0530)]
[ blas/neon ] Add NEON fp16 function for sscal

Enable neon sscal function for Android (ARM) fp16 computation.
Add unit test for fp16 sscal function in Android(ARM).

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Debadri Samaddar <s.debadri@samsung.com>
8 months ago[ blas/neon ] Add NEON fp16 function for snrm2
Debadri Samaddar [Thu, 10 Aug 2023 10:54:36 +0000 (16:24 +0530)]
[ blas/neon ] Add NEON fp16 function for snrm2

Enable neon snrm2 function for Android (ARM) fp16 computation.
Add unit test for fp16 snrm2 function in Android(ARM).

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Debadri Samaddar <s.debadri@samsung.com>
8 months ago[trivial] Add reviewers
skykongkong8 [Wed, 23 Aug 2023 01:59:40 +0000 (10:59 +0900)]
[trivial] Add reviewers

- add new reviewers : sungsik Kong, donghyeon Jeong

**Self evaluation:**
1. Build test:     [X]Passed [ ]Failed [ ]Skipped
2. Run test:     [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: skykongkong8 <ss.kong@samsung.com>
8 months ago[layer] Verify ln, bn layers with fp16
skykongkong8 [Tue, 22 Aug 2023 04:33:23 +0000 (13:33 +0900)]
[layer] Verify ln, bn layers with fp16

    - Issue: adding a cosine similarity check in fp32/fp16 revealed unmatched cosine similarity for near-zero tensors. Nevertheless, the absolute value difference and MSE pass our epsilon value, so we should come back here for a sanity check.
    - The same result holds for the multi-headed attention layer (only for near-zero tensors).
    - Added a skip_cosine_similarity_check param to avoid this issue
    - Added a macro for the enable-fp16 option

**Self evaluation:**
1. Build test:     [X]Passed [ ]Failed [ ]Skipped
2. Run test:     [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: skykongkong8 <ss.kong@samsung.com>
8 months ago[layer] Verify positional encoding layer with fp16
skykongkong8 [Fri, 18 Aug 2023 05:37:34 +0000 (14:37 +0900)]
[layer] Verify positional encoding layer with fp16

- added tensor_type getting code into layer
- added test case in positional encoding layer unittest

**Self evaluation:**
1. Build test:     [X]Passed [ ]Failed [ ]Skipped
2. Run test:     [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: skykongkong8 <ss.kong@samsung.com>
8 months ago[ bug ] bugfix for wrong data generation trial
skykongkong8 [Thu, 17 Aug 2023 04:26:55 +0000 (13:26 +0900)]
[ bug ] bugfix for wrong data generation trial

- Since all data is cast at the end of binary data file generation, we do not need to pass the input data type in the first place
- newly generated .tar file included

**Self evaluation:**
1. Build test:     [X]Passed [ ]Failed [ ]Skipped
2. Run test:     [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: skykongkong8 <ss.kong@samsung.com>
8 months ago[TensorPool] Check tensor type in view
Donghyeon Jeong [Tue, 22 Aug 2023 01:59:59 +0000 (10:59 +0900)]
[TensorPool] Check tensor type in view

This PR enables the TensorPool view to filter calls from different tensor types.

**Self evaluation:**
1. Build test:   [X]Passed [ ]Failed [ ]Skipped
2. Run test:     [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Donghyeon Jeong <dhyeon.jeong@samsung.com>
8 months ago[sub-plugin] Add function to load an existing model
hyunil park [Fri, 28 Jul 2023 00:01:13 +0000 (09:01 +0900)]
[sub-plugin] Add function to load an existing model

An existing model registered in model_load_path is used when training a new model.

**Self evaluation:**
1. Build test:   [X]Passed [ ]Failed [ ]Skipped
2. Run test:     [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: hyunil park <hyunil46.park@samsung.com>
8 months ago[Android] Add unit-testing executable build
Donghyeon Jeong [Thu, 17 Aug 2023 05:54:57 +0000 (14:54 +0900)]
[Android] Add unit-testing executable build

This patch adds additional unit tests for Android.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Donghyeon Jeong <dhyeon.jeong@samsung.com>
8 months ago[Tensor] remove unused code
Donghyeon Jeong [Thu, 17 Aug 2023 04:40:23 +0000 (13:40 +0900)]
[Tensor] remove unused code

- Remove unused code.

Signed-off-by: Donghyeon Jeong <dhyeon.jeong@samsung.com>
8 months ago[Tensor] Fix in Mixed Precision Support
Donghyeon Jeong [Wed, 16 Aug 2023 06:08:42 +0000 (15:08 +0900)]
[Tensor] Fix in Mixed Precision Support

- Fix work left unchanged in mixed precision support

- Remove unused code

Signed-off-by: Donghyeon Jeong <dhyeon.jeong@samsung.com>
8 months ago[unittest] specify softmax template type
Donghyeon Jeong [Fri, 11 Aug 2023 04:49:34 +0000 (13:49 +0900)]
[unittest] specify softmax template type

Template type in activation functions needs to be specified to avoid
errors on ndk-build.

Signed-off-by: Donghyeon Jeong <dhyeon.jeong@samsung.com>
8 months ago[layers] Dump acti_func into header
skykongkong8 [Thu, 10 Aug 2023 06:46:44 +0000 (15:46 +0900)]
[layers] Dump acti_func into header

- For easier maintenance, move everything into the header, since only a few functions are left after applying templates to acti_func.cpp

Resolves:

**Self evaluation:**
1. Build test:     [X]Passed [ ]Failed [ ]Skipped
2. Run test:     [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: skykongkong8 <ss.kong@samsung.com>
8 months ago[gtest] Add dataset generation code for all layers in fp16
skykongkong8 [Thu, 10 Aug 2023 01:27:02 +0000 (10:27 +0900)]
[gtest] Add dataset generation code for all layers in fp16

- Add code block for generating fp16 dataset for every layer
- Add new .tar.gz file that contains above

**Self evaluation:**
1. Build test:     [X]Passed [ ]Failed [ ]Skipped
2. Run test:     [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: skykongkong8 <ss.kong@samsung.com>
8 months ago[ Bug Fix ] fix the error in FP32 only case
jijoong.moon [Thu, 10 Aug 2023 12:51:57 +0000 (21:51 +0900)]
[ Bug Fix ] fix the error in FP32 only case

There is configuration bugs for the FP32 only case.
This PR fixes the configuration and some of the ENABLE_FP16 compiler
macro errors.

Resolves:

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
8 months ago[ blas/neon ] Add NEON fp16 function for sdot
Debadri Samaddar [Tue, 8 Aug 2023 11:14:16 +0000 (16:44 +0530)]
[ blas/neon ] Add NEON fp16 function for sdot

Enable neon sdot function for Android (ARM) fp16 computation.
Add unit test for fp16 sdot function in Android(ARM).

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Debadri Samaddar <s.debadri@samsung.com>
8 months ago[Bug] Change the string format of the tensor datatype
Donghyeon Jeong [Thu, 10 Aug 2023 09:06:39 +0000 (18:06 +0900)]
[Bug] Change the string format of the tensor datatype

Substitute underscores with hyphens when defining the tensor datatype.

The _ (underscore) character used in std::regex is treated as a quantifier in LLVM.

Signed-off-by: Donghyeon Jeong <dhyeon.jeong@samsung.com>
8 months agoFix cosine similarity calculation error
Donghyeon Jeong [Wed, 9 Aug 2023 07:27:57 +0000 (16:27 +0900)]
Fix cosine similarity calculation error

Computing cosine similarity in FP16 gives inaccurate results, so compute it in double.

Signed-off-by: Donghyeon Jeong <dhyeon.jeong@samsung.com>
8 months ago[Bug] Fix bug when Android build
skykongkong8 [Thu, 10 Aug 2023 05:19:00 +0000 (14:19 +0900)]
[Bug] Fix bug when Android build

- Due to different compiler setting, trivial code fix for default
  template instantiation is required.

Resolves:

**Self evaluation:**
1. Build test:     [X]Passed [ ]Failed [ ]Skipped
2. Run test:     [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: skykongkong8 <ss.kong@samsung.com>
8 months ago[gtest] Verify attention layer with fp16
skykongkong8 [Thu, 10 Aug 2023 00:31:38 +0000 (09:31 +0900)]
[gtest] Verify attention layer with fp16

- Add fp16 test case
- Modify epsilon value in cosine similarity with proper decimal number & significant digit

Resolves:

**Self evaluation:**
1. Build test:     [X]Passed [ ]Failed [ ]Skipped
2. Run test:     [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: skykongkong8 <ss.kong@samsung.com>
8 months ago[layers/activation_func] Apply template on activation functions
skykongkong8 [Thu, 10 Aug 2023 00:30:22 +0000 (09:30 +0900)]
[layers/activation_func] Apply template on activation functions

**Changes proposed in this PR:**

- For mixed precision, activation functions should be revised into function templates to avoid bulky code
- In order to use a function template for setActivation, we need another function template to handle multiple types of activation functions
- Minor fixes for template instantiation; this will be revised properly for fp16 use in the next PR
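The list above can be sketched roughly as follows (hypothetical names; the project's actual signatures differ):

```cpp
#include <algorithm>
#include <vector>

// One template body instead of duplicated float/_FP16 overloads.
template <typename T> T relu(T x) { return std::max<T>(x, T{0}); }

// A second template handles any activation function type, so
// setActivation-style code does not need per-type overloads either.
template <typename T, typename Fn>
void apply_activation(std::vector<T> &v, Fn fn) {
  for (auto &x : v)
    x = fn(x);
}
```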

Resolves:

**Self evaluation:**
1. Build test:     [X]Passed [ ]Failed [ ]Skipped
2. Run test:     [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: skykongkong8 <ss.kong@samsung.com>
8 months ago[gtest] Add dataset file for attention layer
skykongkong8 [Tue, 8 Aug 2023 07:41:08 +0000 (16:41 +0900)]
[gtest] Add dataset file for attention layer

* The nnlayergolden binary file for the attention layer gtest is now generated automatically at build time

Signed-off-by: skykongkong8 <ss.kong@samsung.com>
8 months ago[Bug] Fix the nhwc test bug
jijoong.moon [Thu, 10 Aug 2023 01:29:44 +0000 (10:29 +0900)]
[Bug] Fix the nhwc test bug

We need to add the format information during the layer test.
This PR adds the format change for the input tensor.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
8 months ago[bug] Fix zero division error
skykongkong8 [Mon, 7 Aug 2023 06:36:07 +0000 (15:36 +0900)]
[bug] Fix zero division error

* Add edge-case handling in the cosine_similarity function for zero-valued tensors
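The guard can be sketched as follows (hypothetical helper, assuming the norms are precomputed):

```cpp
// A zero-valued tensor has norm 0, so the plain division would yield
// NaN.  Hypothetical policy: two zero tensors are identical (1.0),
// one zero tensor shares nothing with the other (0.0).
double guarded_cosine(double dot, double norm_a, double norm_b) {
  if (norm_a == 0.0 && norm_b == 0.0)
    return 1.0;
  if (norm_a == 0.0 || norm_b == 0.0)
    return 0.0;
  return dot / (norm_a * norm_b);
}
```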

Signed-off-by: skykongkong8 <ss.kong@samsung.com>
8 months ago[unittest/layer] Enable fp16 golden test in fc layer
skykongkong8 [Mon, 7 Aug 2023 05:58:01 +0000 (14:58 +0900)]
[unittest/layer] Enable fp16 golden test in fc layer

* fp16 tensor validation metrics
  * value-by-value: epsilon 1e-2, since _FP16 has 3 decimal digits of precision
  * cosine similarity
  * mean squared error: epsilon 1e-4, since it is a 'squared' value
* Add fclayer fp16 tensor golden data at build time
* Fix the cosine_similarity function to avoid zero-division errors (NaN value generation)
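The metrics above could look roughly like this (hypothetical helpers; the gtest's actual code differs):

```cpp
#include <cmath>
#include <cstddef>

// _FP16 carries about 3 decimal digits, hence eps = 1e-2 for the
// value-by-value check; the MSE threshold is 1e-4 because the error
// enters squared.
bool close_fp16(float a, float b, float eps = 1e-2f) {
  return std::fabs(a - b) <= eps;
}

float mean_squared_error(std::size_t n, const float *a, const float *b) {
  double acc = 0.0;
  for (std::size_t i = 0; i < n; ++i) {
    double d = static_cast<double>(a[i]) - b[i];
    acc += d * d;
  }
  return static_cast<float>(acc / n);
}
```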

Signed-off-by: skykongkong8 <ss.kong@samsung.com>
8 months agoFix meson build options to support ARM properly
Donghyeon Jeong [Mon, 7 Aug 2023 01:48:12 +0000 (10:48 +0900)]
Fix meson build options to support ARM properly

- Check for non-android ARM machines
- Use blas_neon.cpp only for ARM machines

Signed-off-by: Donghyeon Jeong <dhyeon.jeong@samsung.com>
8 months ago[Bug] Fix redundant call to sgemv fp16 function
Debadri Samaddar [Fri, 4 Aug 2023 09:42:21 +0000 (15:12 +0530)]
[Bug] Fix redundant call to sgemv fp16 function

Added conditions to handle the function call based on the USE__FP16 identifier.

Signed-off-by: Debadri Samaddar <s.debadri@samsung.com>
8 months ago[ GTEST ] Add gtest for NEON fp16 tensor unittest in Android
Debadri Samaddar [Thu, 3 Aug 2023 14:25:17 +0000 (19:55 +0530)]
[ GTEST ] Add gtest for NEON fp16 tensor unittest in Android

Enables the gtest for half-precision NEON functions on Android (ARM).

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Debadri Samaddar <s.debadri@samsung.com>
8 months ago[ blas/neon ] Add NEON fp16 function for saxpy
Debadri Samaddar [Thu, 3 Aug 2023 11:20:55 +0000 (16:50 +0530)]
[ blas/neon ] Add NEON fp16 function for saxpy

Enable neon saxpy function for Android (ARM) __fp16 computation

Signed-off-by: Debadri Samaddar <s.debadri@samsung.com>
8 months ago[test] Enable fp16 golden test data
skykongkong8 [Fri, 4 Aug 2023 00:52:22 +0000 (09:52 +0900)]
[test] Enable fp16 golden test data

* generation: works with genLayerTests.py and uses record_single_fp16
* data comparison: via sizeCheckedReadTensor, read with the _FP16 memory-size offset

Signed-off-by: skykongkong8 <ss.kong@samsung.com>
8 months ago[Compiler] Preserve connection order in multi-out realizer
Donghyeon Jeong [Wed, 2 Aug 2023 05:15:51 +0000 (14:15 +0900)]
[Compiler] Preserve connection order in multi-out realizer

Create multiout nodes in the given connection order when building the frequency map.
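One way to keep a frequency map ordered is to pair it with a first-seen key list (a sketch with hypothetical names, not the realizer's actual code):

```cpp
#include <string>
#include <unordered_map>
#include <vector>

// std::unordered_map alone loses insertion order; recording each key
// the first time it is seen preserves the original connection order
// for the multiout nodes.
struct OrderedFreq {
  std::vector<std::string> order;
  std::unordered_map<std::string, int> freq;

  void add(const std::string &name) {
    auto res = freq.emplace(name, 0);
    if (res.second)
      order.push_back(name);
    ++res.first->second;
  }
};
```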

Signed-off-by: Donghyeon Jeong <dhyeon.jeong@samsung.com>
8 months ago[bugfix] added warning flag to compile with gcc 13
hyeonseok lee [Thu, 27 Jul 2023 12:57:40 +0000 (21:57 +0900)]
[bugfix] added warning flag to compile with gcc 13

 - Added the -Wno-maybe-uninitialized flag

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
8 months ago[TFLite Export] Add Realized Path for Fused Op
DongHak Park [Fri, 14 Apr 2023 08:35:07 +0000 (17:35 +0900)]
[TFLite Export] Add Realized Path for Fused Op

Made the realized path for fused ops:

1. Check trainable
 - check whether a node is trainable before fusing
2. Conv + ReLU fusing
3. Batch normalization fusing

Signed-off-by: DongHak Park <donghak.park@samsung.com>
8 months ago[TFLite Export] Add variable, functions TfOpNodes for Fused OP export
DongHak Park [Fri, 14 Apr 2023 08:27:46 +0000 (17:27 +0900)]
[TFLite Export] Add variable, functions TfOpNodes for Fused OP export

To export the TFLite format with fused ops, add some variables and functions:

1. Add getter, setter, and replace functions for weights
- For fused ops, we need to adjust the weights after the OpNode is made

2. Add the isToBeRemove variable
- After the OpNode is made, check the condition and mark it as to be removed

3. Add additional_props
- For the BatchNormalization fused op, we need additional props from nntrainer
- Made a vector<float> variable to save the additional data

Signed-off-by: DongHak Park <donghak.park@samsung.com>
8 months agoremove warning flags related to compile with gcc-13
hyeonseok lee [Fri, 21 Jul 2023 11:12:38 +0000 (20:12 +0900)]
remove warning flags related to compile with gcc-13

 - Remove warning flags which helped to compile with gcc 13.
 - Remove the multiout testcase because it cannot guarantee the multiout layer order

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
8 months ago[ahub] fix ahub issues
Seungbaek Hong [Wed, 19 Jul 2023 02:21:02 +0000 (11:21 +0900)]
[ahub] fix ahub issues

Fix some issues of svace and coverity.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Seungbaek Hong <sb92.hong@samsung.com>
8 months ago[graph_node] handle deprecated stl iterator
hyeonseok lee [Mon, 17 Jul 2023 11:42:13 +0000 (20:42 +0900)]
[graph_node] handle deprecated stl iterator

 - Explicitly provide the parameter, since the default parameter for the stl iterator is deprecated.

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
8 months ago[ Tensor ] Support NHWC for dot, add/multiply_strided and other ops
Adwaith Anand [Wed, 28 Jun 2023 10:19:43 +0000 (15:49 +0530)]
[ Tensor ] Support NHWC for dot, add/multiply_strided and other ops

This PR includes changes of Tensor and TensorDim to support NHWC
computation for dot, add_strided, multiply_strided, cat, split,
and transpose. It also includes unittests to evaluate.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Adwaith Anand <adwaith.a@samsung.com>
Signed-off-by: Manohara HK <manohara.hk@samsung.com>
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
8 months ago[Bug] Fix unchanged work in Apply template
Donghyeon Jeong [Thu, 3 Aug 2023 04:52:03 +0000 (13:52 +0900)]
[Bug] Fix unchanged work in Apply template

FP16 is separated from FP32 in the apply function.

Signed-off-by: Donghyeon Jeong <dhyeon.jeong@samsung.com>