platform/core/ml/nntrainer.git
11 months ago[GPU/OpenCL] Initial version of FC Layer with OpenCL ops
Debadri Samaddar [Tue, 7 May 2024 09:08:36 +0000 (14:38 +0530)]
[GPU/OpenCL] Initial version of FC Layer with OpenCL ops

Added a naive OpenCL implementation for the FC Layer.
Incorporated separate kernels for the ops used.
Added unit test for fc_layer_cl.

Signed-off-by: Debadri Samaddar <s.debadri@samsung.com>
11 months ago[ Trivial ] Remove redundant comments and format
skykongkong8 [Mon, 15 Apr 2024 04:11:24 +0000 (13:11 +0900)]
[ Trivial ] Remove redundant comments and format

- Due to adaptive macro kernel usage, the previous comment is no longer needed.

**Self evaluation:**
1. Build test:     [X]Passed [ ]Failed [ ]Skipped
2. Run test:     [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: skykongkong8 <ss.kong@samsung.com>
11 months ago[ hgemm ] Refactor kernel init process
skykongkong8 [Mon, 15 Apr 2024 04:01:04 +0000 (13:01 +0900)]
[ hgemm ] Refactor kernel init process

- I found that matrix initialization was repeated before the fused multiply-add operations.
- With separate initialization code, we gain:
1. Cleaner code that is reusable for both the f16 and f16-f32 kernels
2. A minimized redundant init process for the f16 kernel: better latency with the SAME accuracy.

**Self evaluation:**
1. Build test:     [X]Passed [ ]Failed [ ]Skipped
2. Run test:     [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: skykongkong8 <ss.kong@samsung.com>
11 months ago[ hgemm/bugfix ] Adaptive macro kernel usage in 4x4 4x8 kernels
skykongkong8 [Mon, 15 Apr 2024 02:10:01 +0000 (11:10 +0900)]
[ hgemm/bugfix ] Adaptive macro kernel usage in 4x4 4x8 kernels

- To avoid the constraint of 4/8 divisibility w.r.t. K, loop adaptively along the K direction.

**Self evaluation:**
1. Build test:     [X]Passed [ ]Failed [ ]Skipped
2. Run test:     [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: skykongkong8 <ss.kong@samsung.com>
11 months ago[ hgemm ] Apply acc16 partial sum strategy and adaptive macro use in 8x8 kernel
skykongkong8 [Mon, 15 Apr 2024 01:34:29 +0000 (10:34 +0900)]
[ hgemm ] Apply acc16 partial sum strategy and adaptive macro use in 8x8 kernel

- Apply a change similar to the one made in commit 52a3c734, but in the 8x8 kernel

**Self evaluation:**
1. Build test:     [X]Passed [ ]Failed [ ]Skipped
2. Run test:     [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: skykongkong8 <ss.kong@samsung.com>
11 months ago[ hgemm ] Apply ACC16 partial sum strategy & adaptive macro use in 8x16 kernel
skykongkong8 [Mon, 15 Apr 2024 01:19:24 +0000 (10:19 +0900)]
[ hgemm ] Apply ACC16 partial sum strategy & adaptive macro use in 8x16 kernel

- With more digits computed in fp16 (in this case 1024 -> 2048) I could observe a latency improvement at the cost of some accuracy loss. However, according to the current accuracy measurement criteria, it is still acceptable. Note that it is highly desirable to verify this once more with model output.
- With a variety of partial sum kernels, we can adaptively apply the internal macro kernels without being constrained by K-divisibility w.r.t. 4, 8, or 16.

**Self evaluation:**
1. Build test:     [X]Passed [ ]Failed [ ]Skipped
2. Run test:     [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: skykongkong8 <ss.kong@samsung.com>
11 months ago[ hgemm ] Apply macro kernel in 4x4 noTrans
skykongkong8 [Fri, 12 Apr 2024 05:13:25 +0000 (14:13 +0900)]
[ hgemm ] Apply macro kernel in 4x4 noTrans

- With macro-defined code, the compiler is expected to optimize the function's latency more easily

**Self evaluation:**
1. Build test:     [X]Passed [ ]Failed [ ]Skipped
2. Run test:     [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: skykongkong8 <ss.kong@samsung.com>
11 months ago[ hgemm ] Add 4x4 kernel-using f16-f32 hgemm_noTrans
skykongkong8 [Fri, 12 Apr 2024 03:48:02 +0000 (12:48 +0900)]
[ hgemm ] Add 4x4 kernel-using f16-f32 hgemm_noTrans

- Now hgemm supports a 4x4 f16-f32 partial accumulation strategy

**Self evaluation:**
1. Build test:     [X]Passed [ ]Failed [ ]Skipped
2. Run test:     [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: skykongkong8 <ss.kong@samsung.com>
11 months ago[ hgemm ] Implement 4x4 f16-f32 kernel
skykongkong8 [Fri, 12 Apr 2024 03:46:57 +0000 (12:46 +0900)]
[ hgemm ] Implement 4x4 f16-f32 kernel

- Implement a 4x4 GEMM kernel that uses f16-f32 partial accumulation

**Self evaluation:**
1. Build test:     [X]Passed [ ]Failed [ ]Skipped
2. Run test:     [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: skykongkong8 <ss.kong@samsung.com>
11 months agoEdited build instructions for Resnet18 test
Udit Jain [Wed, 22 May 2024 07:39:01 +0000 (16:39 +0900)]
Edited build instructions for Resnet18 test

Edited build instructions for Resnet18 test
**Fixing the meson build option**

Resolves: an error when building the test example where it says
`-c is an un-recognized option`; the meson documentation uses -C, so it seems to be a typo.

**Self evaluation:**
1. Build test:     [ ]Passed [ ]Failed [X]Skipped
2. Run test:     [ ]Passed [ ]Failed [X]Skipped

Signed-off-by: Udit Jain <udit.jain@samsung.com>
11 months ago[Trivial] Update gitignore file
Seungbaek Hong [Wed, 22 May 2024 04:29:18 +0000 (13:29 +0900)]
[Trivial] Update gitignore file

add ".idea/" in gitignore file
- For ignore jetbrain's IDE

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Donghak PARK <donghak.park@samsung.com>
11 months ago[coverity] fix coverity issue
Donghyeon Jeong [Tue, 21 May 2024 00:38:00 +0000 (09:38 +0900)]
[coverity] fix coverity issue

This PR resolves the coverity issue where the constructor may not initialize class members.

**Changes proposed in this PR:**
- initialize lora_idx and lora_scaling in class constructor.

**Self-evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test:   [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Donghyeon Jeong <dhyeon.jeong@samsung.com>
11 months ago[bugfix] Fix LoRA indices array size in the FC layer
Donghyeon Jeong [Mon, 20 May 2024 02:12:43 +0000 (11:12 +0900)]
[bugfix] Fix LoRA indices array size in the FC layer

This PR resolves an issue related to the incorrect array size for lora_idx in the fully connected layer.
Specifically, the fix has made the array size four elements long, corresponding to loraA, loraB, loraTmp, and loraOut.

**Self-evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test:   [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Donghyeon Jeong <dhyeon.jeong@samsung.com>
11 months ago[Application] update yolo v2 python for building pre-training model
Seungbaek Hong [Fri, 17 May 2024 08:38:52 +0000 (17:38 +0900)]
[Application] update yolo v2 python for building pre-training model

To train on a large dataset, the script was changed to load data in real time during training instead of loading the whole dataset into memory in advance, and visualization code was added to check whether the training proceeded well.

Signed-off-by: Seungbaek Hong <sb92.hong@samsung.com>
11 months ago[Nnstreamer-subplugin] Add save_path to setProperty
hyunil park [Fri, 17 May 2024 05:11:08 +0000 (14:11 +0900)]
[Nnstreamer-subplugin] Add save_path to setProperty

- Add save_path to setProperty to save the model for each epoch.
- Remove model->save() call to avoid saving the current epoch result
  to the model when current epoch is interrupted

**Self evaluation:**
1. Build test:   [X]Passed [ ]Failed [ ]Skipped
2. Run test:     [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: hyunil park <hyunil46.park@samsung.com>
11 months ago[Application] cuda support for example of pytorch yolo v2
Seungbaek Hong [Wed, 8 May 2024 12:21:40 +0000 (21:21 +0900)]
[Application] cuda support for example of pytorch yolo v2

- add cuda option to train yolo v2 model backbone
- preprocessing for input dataset
  * unmatched paired dataset
  * no annotation value

Signed-off-by: Seungbaek Hong <sb92.hong@samsung.com>
11 months ago[Application] Rename yolo -> yolo v2
Seungbaek Hong [Wed, 8 May 2024 04:05:32 +0000 (13:05 +0900)]
[Application] Rename yolo -> yolo v2

To prevent confusion, the name of YOLOv2 implementation was changed from
YOLO to YOLOv2.

Signed-off-by: Seungbaek Hong <sb92.hong@samsung.com>
11 months ago[hgemm] Optimizing dimension checks using bitmask
Debadri Samaddar [Thu, 9 May 2024 08:15:22 +0000 (13:45 +0530)]
[hgemm] Optimizing dimension checks using bitmask

Used bitmasks for dimension checks.
e.g., N % 8 is the same as N & 0x7 (for unsigned N)
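
For reference, a minimal sketch of that equivalence in plain C++ (not the repo's dispatch code):

```
#include <cassert>

int main() {
  unsigned N = 24;
  bool div8_mod = (N % 8) == 0;    // arithmetic form
  bool div8_mask = (N & 0x7) == 0; // bitmask form used in the dimension checks
  assert(div8_mod == div8_mask);
  return 0;
}
```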

Signed-off-by: Debadri Samaddar <s.debadri@samsung.com>
11 months ago[hgemm] Added K divisible condition for 1x8 and 1x4 kernels
Debadri Samaddar [Wed, 8 May 2024 09:09:55 +0000 (14:39 +0530)]
[hgemm] Added K divisible condition for 1x8 and 1x4 kernels

Added condition for better accuracy while calling 1x4 and 1x8 kernels

Signed-off-by: Debadri Samaddar <s.debadri@samsung.com>
11 months ago[hgemm] Interchanged hgemm_noTrans_1x8 and hgemm_noTrans_4x4 calls
Debadri Samaddar [Wed, 8 May 2024 05:48:54 +0000 (11:18 +0530)]
[hgemm] Interchanged hgemm_noTrans_1x8 and hgemm_noTrans_4x4 calls

Moved the 1x8 kernel call after the 4x4 kernel call.
Added a couple of test cases.

Signed-off-by: Debadri Samaddar <s.debadri@samsung.com>
11 months ago[ hdot ] Use precision-enhanced hdot
skykongkong8 [Wed, 24 Apr 2024 01:39:41 +0000 (10:39 +0900)]
[ hdot ] Use precision-enhanced hdot

- The previous hdot was using full fp16.
- Since this is also a dimension-shrinking computation, it should use intermediate fp32 values to enhance precision (sketched below).
- This had not been detected because the unittests used small-dimension Tensors. Add a higher-dimension test case accordingly.
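
A minimal scalar sketch of the idea, assuming an fp16-capable ARM toolchain (the repo's version uses NEON intrinsics):

```
// Accumulate the dot product in fp32 even though the inputs are fp16,
// so rounding error does not grow with the vector length.
float hdot_fp32_acc(const __fp16 *x, const __fp16 *y, unsigned n) {
  float acc = 0.0f; // intermediate fp32 accumulator
  for (unsigned i = 0; i < n; ++i)
    acc += static_cast<float>(x[i]) * static_cast<float>(y[i]);
  return acc;
}
```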

**Self evaluation:**
1. Build test:     [X]Passed [ ]Failed [ ]Skipped
2. Run test:     [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: skykongkong8 <ss.kong@samsung.com>
11 months ago[Trivial] Removing unnecessary files from the repo and adding an ignore file.
Donghak PARK [Wed, 8 May 2024 04:11:05 +0000 (13:11 +0900)]
[Trivial] Removing unnecessary files from the repo and adding an ignore file.

In an Android project, the files ".gradle" and ".idea" are created locally and have nothing to do with the repository.
Therefore, it is common to delete them, and add a "gitignore" file so that they will not be uploaded again as development progresses.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Donghak PARK <donghak.park@samsung.com>
11 months ago[CI] Remove Pylinter in CI
Donghak PARK [Thu, 9 May 2024 00:38:13 +0000 (09:38 +0900)]
[CI] Remove Pylinter in CI

Previously, Pylinter did not exist as a shared git action, so it was run directly as its own git action.
However, there is no need to do the same task twice, because pylint is already included in static_check.scripts when importing the CI from nnstreamer.
So, delete the pylinter.yml file, because it keeps creating unnecessary CI errors.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Donghak PARK <donghak.park@samsung.com>
11 months ago[Application] fix LLaMA application example error
Seungbaek Hong [Tue, 7 May 2024 06:38:22 +0000 (15:38 +0900)]
[Application] fix LLaMA application example error

In the case of running without the encoder, a problem has been fixed where invalid values were set during operation due to incorrect assignment of input data, causing word-index-related errors.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Seungbaek Hong <sb92.hong@samsung.com>
11 months ago[Application] Update weights_converter
Seungbaek Hong [Fri, 3 May 2024 04:18:26 +0000 (13:18 +0900)]
[Application] Update weights_converter

The num_layer parameter is now set automatically through auto config
when converting weights from pytorch format to nntrainer format.

Signed-off-by: Seungbaek Hong <sb92.hong@samsung.com>
11 months ago[ NEURALNET ] change the loss scale property to Rigid Property
jijoong.moon [Fri, 26 Apr 2024 11:07:28 +0000 (20:07 +0900)]
[ NEURALNET ] change the loss scale property to Rigid Property

Loss Scale is more like a rigid property of the model, rather than a flexible
property.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
11 months ago[ Weight ] split variable dim and grad dim to set separately
jijoong.moon [Fri, 26 Apr 2024 10:13:05 +0000 (19:13 +0900)]
[ Weight ] split variable dim and grad dim to set separately

This PR splits the Variable and Gradient dims in Var_Grad and Weight.
This way we can set different Variable and Gradient types in Weight.
. add dim_g for the gradient in WeightSpec.
. manager needs to be updated to support the new WeightSpec.
. Create Tensors according to dim_v and dim_g
. Weight creation is changed in Weight.h

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
11 months ago[ Weight ] Add Loss Scale factor in Weight
jijoong.moon [Fri, 26 Apr 2024 05:48:26 +0000 (14:48 +0900)]
[ Weight ] Add Loss Scale factor in Weight

This PR enables the loss scale factor in Weight.
. Change the WeightSpec to include the loss factor
. Add the LossScaleForMixed Property as a layer common property, so that
  it can set the scale factor in initContext.
. Add Loss Scale in initContext
. Set the LossScaleForMixed Property when there is a LossScale Model
  Property

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
11 months ago[Property] Add loss scale property
Jiho Chu [Wed, 6 Mar 2024 00:58:18 +0000 (09:58 +0900)]
[Property] Add loss scale property

It adds the loss scale property as a model common property.

Signed-off-by: Jiho Chu <jiho.chu@samsung.com>
11 months agomeson: fix fp16 support conditions for arm/aarch64
MyungJoo Ham [Fri, 26 Jan 2024 04:02:05 +0000 (13:02 +0900)]
meson: fix fp16 support conditions for arm/aarch64

According to the GCC documentation,
https://gcc.gnu.org/onlinedocs/gcc-9.1.0/gcc/Half-Precision.html
even if -mfp16-format=ieee is not given, aarch64 supports
IEEE fp16. Thus, for aarch64, even if the option is not available,
try to build with the __fp16 type.

Then, add a condition for arm: the final "else" is written for x64/x86
machines.
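
A sketch of the per-architecture gating described above, expressed in C++ terms (illustrative only; the actual change is in the meson build options):

```
// aarch64 always provides the IEEE __fp16 storage format; 32-bit arm needs
// -mfp16-format=ieee; everything else (x64/x86) falls back to float.
#if defined(__aarch64__)
typedef __fp16 half_t;
#elif defined(__ARM_FP16_FORMAT_IEEE)
typedef __fp16 half_t;
#else
typedef float half_t;
#endif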

Signed-off-by: MyungJoo Ham <myungjoo.ham@samsung.com>
11 months ago[Wait for #2536][application] add generate_multiple_tokens for llm
Seungbaek Hong [Fri, 5 Apr 2024 05:08:50 +0000 (14:08 +0900)]
[Wait for #2536][application] add generate_multiple_tokens for llm

Added a generate_multiple_tokens function for the first generation step of the llm.

This function takes one logits tensor and generates multiple output tokens.
To meet the purpose of the target application,
even if multiple logits are given as input,
only the first logits tensor is used to generate multiple output tokens.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Seungbaek Hong <sb92.hong@samsung.com>
11 months agoAdd SELU activation function
kimhan0515 [Sat, 27 Apr 2024 16:13:16 +0000 (01:13 +0900)]
Add SELU activation function

- Now, the user can use the SELU activation function as in torch or TensorFlow (definition sketched below).
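
Reference definition of SELU for context (constants from Klambauer et al., 2017); shown only to document the formula, not the acti_func code added by this commit:

```
#include <cmath>

float selu(float x) {
  constexpr float alpha = 1.6732632423543772f;
  constexpr float scale = 1.0507009873554805f;
  // scale * x for positive inputs, scaled ELU branch otherwise
  return x > 0.0f ? scale * x : scale * alpha * std::expm1(x);
}
```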

**Self evaluation:**
1. Build test:     [X]Passed [ ]Failed [ ]Skipped
2. Run test:     [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: kimhan0515 <kimhan0515@gmail.com>
11 months ago[ hnrm2 ] Use precision-enhanced hscal
skykongkong8 [Thu, 25 Apr 2024 03:32:24 +0000 (12:32 +0900)]
[ hnrm2 ] Use precision-enhanced hscal

- The previous hnrm2 was using full fp16.
- Since this is also a dimension-shrinking computation, it should use intermediate fp32 values to enhance precision.
- This had not been detected because the unittests used small-dimension Tensors. Add a higher-dimension test case accordingly.
- Note that this function backs Tensor::l2norm(), which is frequently used for mse loss computation.

**Self evaluation:**
1. Build test:     [X]Passed [ ]Failed [ ]Skipped
2. Run test:     [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: skykongkong8 <ss.kong@samsung.com>
11 months ago[Build] dependency to api
Jaeyun Jung [Fri, 26 Apr 2024 05:49:05 +0000 (14:49 +0900)]
[Build] dependency to api

Code cleanup; fix the cyclic dependency between nntrainer and ml-api.
The build dependency of nntrainer on ml-api is unnecessary.

Signed-off-by: Jaeyun Jung <jy1210.jung@samsung.com>
11 months ago[LLaMA] Bugfix in LLaMA application
Eunju Yang [Tue, 23 Apr 2024 04:23:19 +0000 (13:23 +0900)]
[LLaMA] Bugfix in LLaMA application

- This commit fixes a bug in the `applyTKP` function.
- It seems applying Top-K and Top-P to the logits didn't work as intended.

Signed-off-by: Eunju Yang <ej.yang@samsung.com>
12 months ago[hgemm] hgemm noTrans with 1x4 kernel
Debadri Samaddar [Tue, 23 Apr 2024 06:30:16 +0000 (12:00 +0530)]
[hgemm] hgemm noTrans with 1x4 kernel

Added hgemm_kernel_1x4
Added hgemm_noTrans_1x4 calls
Added unittest dot_gemm_50_768_516

Signed-off-by: Debadri Samaddar <s.debadri@samsung.com>
12 months ago[bugfix] Fix build issues when fp16 is enabled
Donghyeon Jeong [Thu, 25 Apr 2024 04:34:17 +0000 (13:34 +0900)]
[bugfix] Fix build issues when fp16 is enabled

This PR resolves build issues that occur in acti_func.h when fp16 is enabled.

**Self-evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test:   [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Donghyeon Jeong <dhyeon.jeong@samsung.com>
12 months ago[LoRA] add alpha parameter to LoRA
Eunju Yang [Fri, 5 Apr 2024 00:15:01 +0000 (09:15 +0900)]
[LoRA] add alpha parameter to LoRA

- This commit adds the `alpha` parameter to LoRA (fc)
- In the original paper, they adopted `alpha (int)` as a parameter to
derive the scaling factor internally, i.e., scaling = alpha / rank
- This commit takes `alpha` as a hyper-parameter and applies the scaling
factor to the LoRA layer (see the sketch after this list).
- This commit's updates are summarized as follows:
- `common_properties.h` : add LoraAlpha as a parameter.
- `fc_layer.cpp`: update the forwarding / calcGradient /
calcDerivative funcs to apply the scaling factor in the LoRA computation
- `fc_layer.h`: update to take LoraAlpha as fc_props
- `node_exporter.cpp/h`: add LoraAlpha as a parameter in the
tf.export format of the fc layer (to pass the test code)
- fix code lines which may cause a coverity issue.
- LoRA initialization is updated:
- LoRA A : ZEROS
- LoRA B : Normal
- [TODO] update tf exporter of fc layer
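
A minimal sketch of the scaling described above (names are illustrative, not the fc_layer code):

```
// scaling = alpha / rank, applied only to the low-rank path:
//   y = W x + lora_scaling(alpha, rank) * (B (A x))
float lora_scaling(int lora_alpha, int lora_rank) {
  return static_cast<float>(lora_alpha) / static_cast<float>(lora_rank);
}
```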

**Self evaluation:**
1. Build test:   [X]Passed [ ]Failed [ ]Skipped
2. Run test:     [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Eunju Yang <ej.yang@samsung.com>
12 months agoAdd Mish activation function
Boseong Seo [Sat, 20 Apr 2024 03:55:09 +0000 (12:55 +0900)]
Add Mish activation function

- Now, the user can use the Mish activation function as in torch or TensorFlow.

**Self evaluation**:
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Boseong Seo <suzy13549@snu.ac.kr>
12 months ago[hgemm] Removed unused header
Debadri Samaddar [Mon, 22 Apr 2024 04:01:44 +0000 (09:31 +0530)]
[hgemm] Removed unused header

Deleted unused header inclusion
Removed #include <iostream>

Signed-off-by: Debadri Samaddar <s.debadri@samsung.com>
12 months ago[hgemm] hgemm noTrans with kernel 1x8
Debadri Samaddar [Thu, 18 Apr 2024 09:02:29 +0000 (14:32 +0530)]
[hgemm] hgemm noTrans with kernel 1x8

Added 1x8 hgemm kernel, packing_A1, packing_B1 functions.
Incorporated hgemm_noTrans_1x8.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Debadri Samaddar <s.debadri@samsung.com>
12 months agoci / remove cpp-linter's false positive reports.
MyungJoo Ham [Sat, 20 Apr 2024 04:22:44 +0000 (13:22 +0900)]
ci / remove cpp-linter's false positive reports.

It gives an error of "iostream" not found.

Install libstdc++-dev for possible compilers.

Signed-off-by: MyungJoo Ham <myungjoo.ham@samsung.com>
12 months ago[API] Add tensor&operations API structure for supporting autograd
Seungbaek Hong [Fri, 15 Mar 2024 07:11:11 +0000 (16:11 +0900)]
[API] Add tensor&operations API structure for supporting autograd

I added a tensor & function (operation) API structure to support autograd.

Users can build a model graph using this API.

The operators will be supported one-to-one with ONNX.

Signed-off-by: Seungbaek Hong <sb92.hong@samsung.com>
12 months agoAdd Softplus activation function
heka1024 [Wed, 10 Apr 2024 05:54:26 +0000 (14:54 +0900)]
Add Softplus activation function

- Now, the user can use the Softplus activation function as in torch or TensorFlow.
- Furthermore, we can use this function to build Mish or other activation functions (sketched below)
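
Reference definitions for context (a sketch, not the acti_func code): Softplus, and Mish built on top of it as the commit suggests.

```
#include <cmath>

float softplus(float x) { return std::log1p(std::exp(x)); } // log(1 + e^x)
float mish(float x) { return x * std::tanh(softplus(x)); }
```

A numerically robust version would clamp large x before exponentiating, but the formulas are as shown.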

**Self evaluation:**
1. Build test:     [X]Passed [ ]Failed [ ]Skipped
2. Run test:     [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: heka1024 <heka1024@gmail.com>
12 months ago[neuralnet] bugfix multi batch incremental inference
hyeonseok lee [Wed, 17 Apr 2024 01:48:49 +0000 (10:48 +0900)]
[neuralnet] bugfix multi batch incremental inference

 - This commit handles the case when the model activation datatype is fp32

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
12 months ago[Docs] add yolov3 readme file
Seungbaek Hong [Mon, 15 Apr 2024 12:39:17 +0000 (21:39 +0900)]
[Docs] add yolov3 readme file

added yolov3 readme file

Signed-off-by: Seungbaek Hong <sb92.hong@samsung.com>
12 months ago[OpenCL/GPU] Modified ifstream condition
Debadri Samaddar [Mon, 15 Apr 2024 07:40:44 +0000 (13:10 +0530)]
[OpenCL/GPU] Modified ifstream condition

Updated ifstream object valid condition

Signed-off-by: Debadri Samaddar <s.debadri@samsung.com>
12 months ago[GPU/OpenCL] Create kernel utility with binaries
Debadri Samaddar [Thu, 4 Apr 2024 09:21:10 +0000 (14:51 +0530)]
[GPU/OpenCL] Create kernel utility with binaries

Added feature for reading kernel binaries.
Managing already created kernels.
Added static flag and bitmask to check existing kernels.

Signed-off-by: Debadri Samaddar <s.debadri@samsung.com>
12 months agoAdd ELU activation function
heka1024 [Tue, 9 Apr 2024 13:50:02 +0000 (22:50 +0900)]
Add ELU activation function

- Now, the user can use the ELU activation function as in torch or TensorFlow.

**Self evaluation:**
1. Build test:     [X]Passed [ ]Failed [ ]Skipped
2. Run test:     [X]Passed [ ]Failed [ ]Skipped

Co-authored-by: Hanbyeol Kim <kimhan0515@snu.ac.kr>
Co-authored-by: Boseong Seo <suzy13549@snu.ac.kr>
Signed-off-by: heka1024 <heka1024@gmail.com>
12 months ago[application] add repetition_penalty to generate func
Seungbaek Hong [Thu, 4 Apr 2024 11:24:54 +0000 (20:24 +0900)]
[application] add repetition_penalty to generate func

Add some options to the 'generate' function of the llm.

- add naive repetition_penalty option (one common formulation is sketched below)
- add bad_words option
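
A minimal sketch of one common (CTRL-style) repetition penalty; the option here is described as "naive", so the app's exact formula may differ. bad_words can be handled by forcing those logits to a very large negative value.

```
#include <vector>

void apply_repetition_penalty(std::vector<float> &logits,
                              const std::vector<int> &generated_tokens,
                              float penalty) {
  for (int tok : generated_tokens) {
    float &l = logits[tok];
    l = (l > 0.0f) ? l / penalty : l * penalty; // penalty > 1 pushes repeated tokens down
  }
}
```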

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Seungbaek Hong <sb92.hong@samsung.com>
12 months agoRemove LSTM example in Applications/README.md
Boseong Seo [Tue, 9 Apr 2024 13:23:31 +0000 (22:23 +0900)]
Remove LSTM example in Applications/README.md

- existing link to LSTM example does not work
- user can find LSTM example in Layers dir
  + LSTM dir merged to Layers dir (in PR nnstreamer#2107)
- delete LSTM example in Applications/README.md file in order to reduce confusion

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test:   [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Boseong Seo <suzy13549@snu.ac.kr>
12 months ago[ hgemm ] Use macro kernel in 8x8 kernel
skykongkong8 [Thu, 4 Apr 2024 06:44:15 +0000 (15:44 +0900)]
[ hgemm ] Use macro kernel in 8x8 kernel

- Using the macro kernel, we can choose a point on the accuracy-latency tradeoff. Furthermore, it is easier to maintain this way.

**Self evaluation:**
1. Build test:     [X]Passed [ ]Failed [ ]Skipped
2. Run test:     [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: skykongkong8 <ss.kong@samsung.com>
12 months ago[ hgemm ] Apply software prefetching in 4x8 kernel
skykongkong8 [Thu, 4 Apr 2024 06:42:11 +0000 (15:42 +0900)]
[ hgemm ] Apply software prefetching in 4x8 kernel

- We can expect to minimize cache misses using software prefetching (sketched below)
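
Illustrative sketch of the idea using the GCC/Clang builtin (the actual 4x8 kernel uses NEON-specific code); the next packed panels are prefetched one block ahead so loads overlap with the current block's math:

```
void fma_blocks_with_prefetch(const __fp16 *a_packed, const __fp16 *b_packed,
                              float *c, int num_blocks, int block_elems) {
  for (int i = 0; i < num_blocks; ++i) {
    if (i + 1 < num_blocks) {
      __builtin_prefetch(&a_packed[(i + 1) * block_elems], /*rw=*/0, /*locality=*/1);
      __builtin_prefetch(&b_packed[(i + 1) * block_elems], 0, 1);
    }
    for (int j = 0; j < block_elems; ++j) // stand-in for the real micro-kernel body
      c[j] += static_cast<float>(a_packed[i * block_elems + j]) *
              static_cast<float>(b_packed[i * block_elems + j]);
  }
}
```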

**Self evaluation:**
1. Build test:     [X]Passed [ ]Failed [ ]Skipped
2. Run test:     [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: skykongkong8 <ss.kong@samsung.com>
12 months ago[ hgemm ] Implement 8x16 hgemm kernel
skykongkong8 [Wed, 3 Apr 2024 11:10:57 +0000 (20:10 +0900)]
[ hgemm ] Implement 8x16 hgemm kernel

- This commit introduces 2 types of 8x16 hgemm kernel
1. full-fp16
2. fp16-fp32 partial accumulation

**Self evaluation:**
1. Build test:     [X]Passed [ ]Failed [ ]Skipped
2. Run test:     [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: skykongkong8 <ss.kong@samsung.com>
12 months ago[neuralnet] enable multi batch incremental inference
hyeonseok lee [Fri, 5 Apr 2024 13:49:45 +0000 (22:49 +0900)]
[neuralnet] enable multi batch incremental inference

 - The output did not take multi-batch input into account in incremental inference.
   Now it will return multi-batch output.

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
12 months ago[application] update llm generate function
Seungbaek Hong [Wed, 3 Apr 2024 11:10:13 +0000 (20:10 +0900)]
[application] update llm generate function

- fix "temperature" operation
- add "top-k, top-p" option
- support batch mode

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Seungbaek Hong <sb92.hong@samsung.com>
12 months agoReformat code with .clang_format
Boseong Seo [Thu, 4 Apr 2024 07:34:29 +0000 (16:34 +0900)]
Reformat code with .clang_format

**Self-evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test:   [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Boseong Seo <suzy13549@snu.ac.kr>
12 months ago[ BugFix ] Modify the wrong input in `EXPECT_EQ`
Boseong Seo [Thu, 4 Apr 2024 04:19:55 +0000 (13:19 +0900)]
[ BugFix ] Modify the wrong input in `EXPECT_EQ`

- The `registerFactory` function returns the unsigned value of int_key when int_key is given as -1 (default), but this was not considered in the code.
 - So, the second argument (expected value) of `EXPECT_EQ` was modified accordingly.

Signed-off-by: Boseong Seo <suzy13549@snu.ac.kr>
12 months agoUse parameterized test in unittest
Boseong Seo [Tue, 2 Apr 2024 19:28:35 +0000 (04:28 +0900)]
Use parameterized test in unittest

Use parameterized test according to existing TODO comment.

**Self evaluation:**
1. Build test:     [X]Passed [ ]Failed [ ]Skipped
2. Run test:     [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Boseong Seo <suzy13549@snu.ac.kr>
12 months agoFix typo in docs
kimhan0515 [Thu, 4 Apr 2024 16:09:57 +0000 (01:09 +0900)]
Fix typo in docs

Fix typos for some docs
- README.md and docs/configuration-ini.md: simple typo
- Applications/MNIST/README.md: typo and duplicate image

**Self evaluation:**
1. Build test:     [X]Passed [ ]Failed [ ]Skipped
2. Run test:     [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Hanbyeol Kim <kimhan0515@snu.ac.kr>
Signed-off-by: kimhan0515 <kimhan0515@gmail.com>
12 months ago[layer] multi batch incremental forwarding
hyeonseok lee [Thu, 4 Apr 2024 12:23:37 +0000 (21:23 +0900)]
[layer] multi batch incremental forwarding

 - Enable multi batch incremental forwarding by looping batchwise

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
12 months ago[OpenCL] Added stringification macro and kernel path
Debadri Samaddar [Tue, 2 Apr 2024 12:51:18 +0000 (18:21 +0530)]
[OpenCL] Added stringification macro and kernel path

Add DEFAULT_KERNEL_PATH as a static member of the Program class.
Modified macros for stringification (the usual pattern is sketched below).
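
The usual two-level stringification pattern for reference (a sketch; the macro names and path here are hypothetical, not the ones in opencl_program): the indirection ensures a -D definition is expanded before it is turned into a string literal.

```
#define NNTR_STR_IMPL(x) #x
#define NNTR_STR(x) NNTR_STR_IMPL(x)
// e.g. with -DOPENCL_KERNEL_PATH=/usr/lib/nntrainer/kernels (hypothetical),
// NNTR_STR(OPENCL_KERNEL_PATH) expands to "/usr/lib/nntrainer/kernels".
```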

Signed-off-by: Debadri Samaddar <s.debadri@samsung.com>
12 months ago[OpenCL] Added opencl kernel path as option
Debadri Samaddar [Fri, 15 Mar 2024 12:16:12 +0000 (17:46 +0530)]
[OpenCL] Added opencl kernel path as option

Added the opencl-kernel-path preprocessor directive and handled it inside opencl_program.

Signed-off-by: Debadri Samaddar <s.debadri@samsung.com>
12 months ago[OpenCL] Proper cleanup and readability
Debadri Samaddar [Thu, 14 Mar 2024 07:55:40 +0000 (13:25 +0530)]
[OpenCL] Proper cleanup and readability

Used better C++ paradigm to enhance readability.
Added proper cleanup stub.

Signed-off-by: Debadri Samaddar <s.debadri@samsung.com>
12 months ago[OpenCL/GPU] Kernel binary caching
Debadri Samaddar [Mon, 11 Mar 2024 11:14:19 +0000 (16:44 +0530)]
[OpenCL/GPU] Kernel binary caching

Added utilities for saving kernels as binary files.
Added a wrapper for clCreateProgramWithBinary (the underlying calls are sketched below).
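
A hedged sketch of the standard OpenCL calls involved (not the wrapper added in this commit): a binary previously dumped via clGetProgramInfo(CL_PROGRAM_BINARIES) is turned back into a usable program object.

```
#include <CL/cl.h>

cl_program program_from_binary(cl_context ctx, cl_device_id dev,
                               const unsigned char *bin, size_t len) {
  cl_int bin_status = CL_SUCCESS, err = CL_SUCCESS;
  cl_program prog =
    clCreateProgramWithBinary(ctx, 1, &dev, &len, &bin, &bin_status, &err);
  if (err != CL_SUCCESS || bin_status != CL_SUCCESS)
    return nullptr;
  if (clBuildProgram(prog, 1, &dev, nullptr, nullptr, nullptr) != CL_SUCCESS)
    return nullptr;
  return prog;
}
```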

Signed-off-by: Debadri Samaddar <s.debadri@samsung.com>
12 months ago[LoRA] Apply Inception-LoRA
Eunju Yang [Fri, 15 Mar 2024 06:35:28 +0000 (15:35 +0900)]
[LoRA] Apply Inception-LoRA

- updates the LoRA computation (applying Inception-LoRA)
- compute with LoRA vectors without matrix construction
- revise `forwarding()`
- revise `calcGradient()`
- revise `calcDerivative()`

Self evaluation:
Build test: [X]Passed [ ]Failed [ ]Skipped
Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Eunju Yang <ej.yang@samsung.com>
12 months ago[ Trivial ] apply clang-format to fc_layer.cpp
Eunju Yang [Tue, 12 Mar 2024 00:14:06 +0000 (09:14 +0900)]
[ Trivial ] apply clang-format to fc_layer.cpp

- clang-format re-apply to pass static checker
- `fc_layer.cpp`

Signed-off-by: Eunju Yang <ej.yang@samsung.com>
12 months ago[ trivial ] fix doxygen tag check error
Eunju Yang [Fri, 8 Mar 2024 07:08:12 +0000 (16:08 +0900)]
[ trivial ] fix doxygen tag check error

- remove a redundant and incorrect block comment in
`nntrainer/layers/fc_layer.cpp`

Signed-off-by: Eunju Yang <ej.yang@samsung.com>
12 months ago[ trivial ] apply clang-format
Eunju Yang [Fri, 8 Mar 2024 06:36:39 +0000 (15:36 +0900)]
[ trivial ] apply clang-format

- apply clang format to
- nntrainer/tensor/tensor_v2.cpp
- nntrainer/utils/node_exporter.cpp

Signed-off-by: Eunju Yang <ej.yang@samsung.com>
12 months ago[LoRA/Trivial] fix typo and edit comments
Eunju Yang [Wed, 6 Mar 2024 02:45:08 +0000 (11:45 +0900)]
[LoRA/Trivial] fix typo and edit comments

- Fix typo in the code
- edit comments to add some explanations

Signed-off-by: Eunju Yang <ej.yang@samsung.com>
12 months ago[LoRA] Revise LoRA implementation for fc_layer
Eunju Yang [Wed, 6 Mar 2024 02:13:28 +0000 (11:13 +0900)]
[LoRA] Revise LoRA implementation for fc_layer

- remove the `forwarding_lora()` function
- update the forwarding path with the LoRA option
- First, compute the forwarding logits of the base weight (W) and the lora weight (A @ B) respectively,
- then merge the logits to return
- [update] (W + A @ B)x -> Wx + (A @ B)x
- update `calcDerivative` to reflect the changes in the forwarding operation
- the implicit update in calcDerivative is updated accordingly.

**Self-evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test:   [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Eunju Yang <ej.yang@samsung.com>
12 months ago[LoRA] revise type of LoraRank property & fix error in fc_layer
Eunju Yang [Wed, 31 Jan 2024 07:07:31 +0000 (16:07 +0900)]
[LoRA] revise type of LoraRank property & fix error in fc_layer

- update type of LoraRank property : Property<int> -> PositiveIntegerProperty
- fix typo dot_batched_deriv_wrt_1 -> dot_deriv_wrt_1
- update code with add -> add_i
- apply clang-format

Signed-off-by: Eunju Yang <ej.yang@samsung.com>
12 months ago[LoRA] update node_exporter of fully connected layer
Eunju Yang [Wed, 31 Jan 2024 07:05:50 +0000 (16:05 +0900)]
[LoRA] update node_exporter of fully connected layer

This commit updates the TfLite node exporter of the fully connected layer. It adds the new property (LoraRank) as an additional input property of the fully connected layer.

Signed-off-by: Eunju Yang <ej.yang@samsung.com>
12 months ago[LoRA] add a new feat(lora) to fc layer
Eunju Yang [Mon, 29 Jan 2024 06:57:15 +0000 (15:57 +0900)]
[LoRA]  add a new feat(lora) to fc layer

This commit includes an implementation of lora only for the FC layer, which means it is not the generalized version. It should be written as a separate class in order to remove code duplication for other layers.

Signed-off-by: Eunju Yang <ej.yang@samsung.com>
12 months ago[Layer] Create Depthwise 2D Convolution
Donghak PARK [Wed, 27 Mar 2024 05:27:19 +0000 (14:27 +0900)]
[Layer] Create Depthwise 2D Convolution

This pull request defines a header file for depthwise convolution.

It is a draft for a new layer, and I welcome any feedback or assistance you may have.

This layer is necessary to support various applications such as SV.

- Depthwise convolution is a type of convolution in which each input channel is convolved with a different kernel (called a depthwise kernel).
- Unlike a regular 2D convolution, depthwise convolution does not mix information across different input channels (a minimal sketch follows).
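
A minimal sketch of the operation described above (single image, stride 1, no padding; layout and names are illustrative, not the layer's actual implementation):

```
void depthwise_conv2d(const float *in, int C, int H, int W,
                      const float *kern, int KH, int KW, float *out) {
  const int OH = H - KH + 1, OW = W - KW + 1;
  for (int c = 0; c < C; ++c)             // each channel uses its own depthwise kernel
    for (int oh = 0; oh < OH; ++oh)
      for (int ow = 0; ow < OW; ++ow) {
        float acc = 0.0f;
        for (int kh = 0; kh < KH; ++kh)
          for (int kw = 0; kw < KW; ++kw)
            acc += in[(c * H + oh + kh) * W + (ow + kw)] *
                   kern[(c * KH + kh) * KW + kw];
        out[(c * OH + oh) * OW + ow] = acc; // no mixing across channels
      }
}
```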

**Changes proposed in this PR:**
- Add Depthwise Convolution 2D Layer

Resolves:
- #2520

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Donghak PARK <donghak.park@samsung.com>
12 months ago[ neon ] Apply kernel based hgemm
skykongkong8 [Wed, 3 Apr 2024 07:02:16 +0000 (16:02 +0900)]
[ neon ] Apply kernel based hgemm

- Now hgemm subdirectory is included when neon fp16 is in use
- WIP : hgemm 8x16 kernel

**Self evaluation:**
1. Build test:     [X]Passed [ ]Failed [ ]Skipped
2. Run test:     [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: skykongkong8 <ss.kong@samsung.com>
12 months ago[ Trivial ] Fix typo
skykongkong8 [Wed, 3 Apr 2024 04:29:06 +0000 (13:29 +0900)]
[ Trivial ] Fix typo

- GEMM unittest for square 1024 was generating improper dimension. Fix accordingly.

**Self evaluation:**
1. Build test:     [X]Passed [ ]Failed [ ]Skipped
2. Run test:     [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: skykongkong8 <ss.kong@samsung.com>
12 months ago[ hgemm ] Use optimized hgemm if possible
skykongkong8 [Wed, 3 Apr 2024 04:23:42 +0000 (13:23 +0900)]
[ hgemm ] Use optimized hgemm if possible

- We can use the optimized version of hgemm under the following conditions (dispatch sketched below):
1. noTrans hgemm
2. M, N, K are divisible by 4 or 8
3. Row-major GEMM
4. alpha = 1.0, beta = 0.0 (will be patched soon)
- Otherwise, use the previous version as a fallback.
- Note that a few optimization strategies are still left for an optimal hgemm.
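
A sketch of the dispatch condition listed above (illustrative names; the caller is assumed to pass row-major operands):

```
bool can_use_optimized_hgemm(bool trans_a, bool trans_b, unsigned M, unsigned N,
                             unsigned K, float alpha, float beta) {
  const bool no_trans = !trans_a && !trans_b;                  // condition 1
  const bool dims_ok = ((M & 0x3) == 0) && ((N & 0x3) == 0) && // condition 2:
                       ((K & 0x3) == 0);                       // divisible by 4 (8 implies 4)
  const bool scalars_ok = (alpha == 1.0f) && (beta == 0.0f);   // condition 4
  return no_trans && dims_ok && scalars_ok;
}
```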

**Self evaluation:**
1. Build test:     [X]Passed [ ]Failed [ ]Skipped
2. Run test:     [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: skykongkong8 <ss.kong@samsung.com>
12 months ago[ hgemm ] Implement 8x8 hgemm kernel
skykongkong8 [Wed, 3 Apr 2024 04:17:30 +0000 (13:17 +0900)]
[ hgemm ] Implement 8x8 hgemm kernel

- This commit introduces 2 types of 8x8 hgemm kernel
    1. full-fp16
    2. fp16-fp32 partial accumulation

**Self evaluation:**
1. Build test:     [X]Passed [ ]Failed [ ]Skipped
2. Run test:     [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: skykongkong8 <ss.kong@samsung.com>
12 months ago[ hgemm ] Implement 4x8 hgemm kernel
skykongkong8 [Wed, 3 Apr 2024 04:16:06 +0000 (13:16 +0900)]
[ hgemm ] Implement 4x8 hgemm kernel

- This commit introduces 2 types of 4x8 hgemm kernel
        1. full-fp16
        2. fp16-fp32 partial accumulation
- Additionally, the 4x8 kernel has a macro kernel that can regulate the accuracy-latency tradeoff. By default it uses partial sums of up to 256 values (sketched below). Other kernels will be refactored in this way ASAP.
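
A scalar sketch of the fp16-fp32 partial-accumulation idea for a single output element, assuming an fp16-capable toolchain (the real macro kernel is vectorized): a short-lived fp16 partial sum is flushed into an fp32 accumulator every 256 products, so rounding error cannot grow across the whole K loop.

```
float dot_partial_acc16(const __fp16 *a, const __fp16 *b, int K) {
  float acc32 = 0.0f;
  __fp16 acc16 = 0;
  for (int k = 0; k < K; ++k) {
    acc16 += a[k] * b[k];
    if ((k & 0xFF) == 0xFF) { // every 256 accumulations
      acc32 += static_cast<float>(acc16);
      acc16 = 0;
    }
  }
  return acc32 + static_cast<float>(acc16);
}
```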

**Self evaluation:**
1. Build test:     [X]Passed [ ]Failed [ ]Skipped
2. Run test:     [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: skykongkong8 <ss.kong@samsung.com>
12 months agoFix typo in test
heka1024 [Tue, 2 Apr 2024 14:53:07 +0000 (23:53 +0900)]
Fix typo in test

Fix some typos in the testcases: `duing` -> `during`, `TSETS` -> `TESTS`. Also add doxygen for `nntrainer_LazyTensorOpsTest`.

**Self evaluation:**
1. Build test:     [X]Passed [ ]Failed [ ]Skipped
2. Run test:     [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: heka1024 <heka1024@gmail.com>
12 months ago[ HGEMM/draft ] Draft of kernel-based hgemm
skykongkong8 [Fri, 29 Mar 2024 01:31:00 +0000 (10:31 +0900)]
[ HGEMM/draft ] Draft of kernel-based hgemm

- Previously, hgemm was implemented without taking packing / kernels into consideration.
- Here I would like to introduce kernel-based hgemm. It consists of:
1. packing of the A / B matrices for the 4 / 8 divisible cases
2. 4x4 and 8x8 hgemm kernels for the full-fp16 case
- More features like fine-grained packing strategies and kernels will be added in the near future.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: skykongkong8 <ss.kong@samsung.com>
12 months ago[Coverity] Fix coverity issues
Donghyeon Jeong [Fri, 29 Mar 2024 06:22:09 +0000 (15:22 +0900)]
[Coverity] Fix coverity issues

This PR resolves coverity issues of overflow, use of auto that causes a copy, missing lock and thread lock.

**Self-evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test:   [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Donghyeon Jeong <dhyeon.jeong@samsung.com>
13 months ago[svace] fix svace issues
Seungbaek Hong [Wed, 27 Mar 2024 09:44:06 +0000 (18:44 +0900)]
[svace] fix svace issues

fixed all svace issues on main branch

**Self-evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test:   [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Seungbaek Hong <sb92.hong@samsung.com>
13 months ago[Coverity] Fix coverity issues
Donghyeon Jeong [Thu, 28 Mar 2024 04:20:52 +0000 (13:20 +0900)]
[Coverity] Fix coverity issues

This PR resolves coverity issues of use of auto that causes a copy and missing lock.

**Self-evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test:   [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Donghyeon Jeong <dhyeon.jeong@samsung.com>
13 months ago[Trivial] Disable cpp-linter action's clang-format
Donghak PARK [Wed, 27 Mar 2024 07:23:39 +0000 (16:23 +0900)]
[Trivial] Disable cpp-linter action's clang-format

We currently perform a clang-format check during our static checks.
The CPP-Linter we are using comes from the Actions marketplace and occasionally produces different results even when the same version is specified.
This reduces efficiency for developers, so only the static check, which has more detailed logs, will be kept and the CPP-Linter's clang-format function will be disabled.
However, the existing linter function will remain.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Donghak PARK <donghak.park@samsung.com>
13 months ago[coverity] fix coverity issues
hyeonseok lee [Mon, 25 Mar 2024 10:40:50 +0000 (19:40 +0900)]
[coverity] fix coverity issues

 - Added const auto & to avoid copy of an object
 - Added missing lock

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
13 months ago[ coverity ] Fix Coverity issue
Donghak PARK [Fri, 22 Mar 2024 06:04:46 +0000 (15:04 +0900)]
[ coverity ] Fix Coverity issue

Fix Coverity issue on
- /test/unittest/layers/layers_golden_tests.cpp
- /test/unittest/models/unittest_models_recurrent.cpp
- /test/unittest/unittest_nntrainer_models.cpp

Resolves:
```
Use of auto that causes a copy (AUTO_CAUSES_COPY)
auto_causes_copy: This lambda has an unspecified return type
copy: This return statement creates a copy.
```
**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Donghak PARK <donghak.park@samsung.com>
13 months ago[coverity] fix coverity issue
Eunju Yang [Thu, 21 Mar 2024 07:51:00 +0000 (16:51 +0900)]
[coverity] fix coverity issue

- This commit fixes the coverity issues
- AUTO_CAUSES_COPY
- MISSING_LOCK

Self-evaluation:

1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Eunju Yang <ej.yang@samsung.com>
13 months ago[coverity] fix coverity issue
Seungbaek Hong [Mon, 25 Mar 2024 02:27:29 +0000 (11:27 +0900)]
[coverity] fix coverity issue

Fix coverity issue on
- /test/unittest/layers/layers_golden_recurrent.cpp

The other issues assigned have already been fixed.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Seungbaek Hong <sb92.hong@samsung.com>
13 months ago[Coverity] Fix the coverity issue
Donghyeon Jeong [Thu, 21 Mar 2024 07:42:43 +0000 (16:42 +0900)]
[Coverity] Fix the coverity issue

This PR resolves the coverity issues of use of auto that causes a copy.

**Self-evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test:   [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Donghyeon Jeong <dhyeon.jeong@samsung.com>
13 months agoFix minor errors in github action
heka1024 [Sun, 17 Mar 2024 06:59:19 +0000 (15:59 +0900)]
Fix minor errors in github action

- `actions/setup-python@v1` is deprecated. So, bump the version to v5.
- The step name says it uses python3.9, but it actually installs 3.10. Match the name to what it is actually doing.

**Self evaluation:**
1. Build test:     [X]Passed [ ]Failed [ ]Skipped
2. Run test:     [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: heka1024 <heka1024@gmail.com>
13 months ago[ coverity ] Fix coverity issue
skykongkong8 [Thu, 21 Mar 2024 10:34:32 +0000 (19:34 +0900)]
[ coverity ] Fix coverity issue

- Fix coverity issues 1774230, 1774235, 1774238, 1774239, 1774243

Resolves:
```
Use of auto that causes a copy (AUTO_CAUSES_COPY)
auto_causes_copy: This lambda has an unspecified return type
copy: This return statement creates a copy.
```

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: skykongkong8 <ss.kong@samsung.com>
13 months agoUse parameterized test in `NamePropertyTest`
heka1024 [Mon, 18 Mar 2024 16:35:10 +0000 (01:35 +0900)]
Use parameterized test in `NamePropertyTest`

To make code more readable, use parameterized test according to existing TODO comment.

**Self evaluation:**
1. Build test:     [X]Passed [ ]Failed [ ]Skipped
2. Run test:     [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: heka1024 <heka1024@gmail.com>
13 months agoBump actions/checkout in Ubuntu Meson build & test
GyeongHoe Koo [Sun, 17 Mar 2024 06:44:18 +0000 (15:44 +0900)]
Bump actions/checkout in Ubuntu Meson build & test

Node 16 has reached end of life, so GitHub recommends transitioning to actions which use Node 20+. [Ref](https://github.blog/changelog/2023-09-22-github-actions-transitioning-from-node-16-to-node-20/)

**Self evaluation:**
1. Build test:     [X]Passed [ ]Failed [ ]Skipped
2. Run test:     [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: heka1024 <heka1024@gmail.com>
13 months ago[coverity] fix coverity issues
Eunju Yang [Mon, 18 Mar 2024 08:25:53 +0000 (17:25 +0900)]
[coverity] fix coverity issues

This commit fixes coverity issues of auto_causes_copy
1739360
1740106

Self-evaluation:

Build test: [X]Passed [ ]Failed [ ]Skipped
Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Eunju Yang <ej.yang@samsung.com>
13 months ago[Coverity] Fix the coverity issue
Donghyeon Jeong [Mon, 18 Mar 2024 07:42:58 +0000 (16:42 +0900)]
[Coverity] Fix the coverity issue

This PR resolves the coverity issues of missing lock and use of auto that causes a copy.

**Self-evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test:   [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Donghyeon Jeong <dhyeon.jeong@samsung.com>
13 months ago[Layer] Remove Tensor setDataType() usage
Donghyeon Jeong [Thu, 7 Mar 2024 05:43:48 +0000 (14:43 +0900)]
[Layer] Remove Tensor setDataType() usage

In several layers, there are attempts to change the data type of a Tensor object after initializing it.
This is currently possible but can cause issues down the line (e.g., treating a FloatTensor object as a HalfTensor).
As such, the setDataType() method will be removed and should not be used in future updates.
Instead, users will need to provide the desired data type when creating a new tensor.

**Self-evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test:   [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Donghyeon Jeong <dhyeon.jeong@samsung.com>
13 months ago[ neon/trivial ] Use N8 for hgemm, and for starting index for the remaining Tensor...
skykongkong8 [Mon, 11 Mar 2024 05:05:00 +0000 (14:05 +0900)]
[ neon/trivial ] Use N8 for hgemm, and for starting index for the remaining Tensor area

- Like hgemv_transpose, use N8 for hgemm_noTrans as well
- we can re-use this value for the starting index for the remaining area

**Self evaluation:**
1. Build test:     [X]Passed [ ]Failed [ ]Skipped
2. Run test:     [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: skykongkong8 <ss.kong@samsung.com>
13 months ago[ neon ] Use bigger kernel in hgemv
skykongkong8 [Mon, 11 Mar 2024 03:05:09 +0000 (12:05 +0900)]
[ neon ] Use bigger kernel in hgemv

- Using a kernel of up to 16x8 size shows the best latency. Apply accordingly.

**Self evaluation:**
1. Build test:     [X]Passed [ ]Failed [ ]Skipped
2. Run test:     [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: skykongkong8 <ss.kong@samsung.com>