platform/core/ml/nntrainer.git
9 months ago[Toolchain] Enable gcc-13 support accepted/tizen_unified_dev accepted/tizen/unified/20230731.175253 accepted/tizen/unified/dev/20230726.120027 accepted/tizen/unified/riscv/20230724.124549
jijoong.moon [Fri, 21 Jul 2023 02:04:38 +0000 (11:04 +0900)]
[Toolchain] Enable gcc-13 support

This patch includes gcc-13 compatibility fixes.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Change-Id: I0ae5a427ece0c0869ab6467d51b425d08b699a50
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
9 months ago[ Bug ] Fix a bug reading the weights of the batch normalization layer
jijoong.moon [Wed, 21 Jun 2023 06:53:52 +0000 (15:53 +0900)]
[ Bug ] Fix a bug reading the weights of the batch normalization layer

There is a bug when the model loads the data for the batch
normalization layer.

During the setup of requestWeights in the manager, the max execution
order for gradient clipping is added for the gradient, but it was also
added for the variable weight. This PR fixes that.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
9 months ago[graph_node] handle deprecated stl iterator
hyeonseok lee [Mon, 17 Jul 2023 11:42:13 +0000 (20:42 +0900)]
[graph_node] handle deprecated stl iterator

 - Explicitly provide the parameter, since relying on the STL iterator's default parameter is deprecated (one possible form of this fix is sketched below).
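
A hedged illustration of one common form of this deprecation fix (the actual graph_node change may differ): since C++17, inheriting the iterator member types from std::iterator is deprecated, so the typedefs are provided explicitly instead.

```cpp
#include <cstddef>
#include <iterator>

// Before (deprecated since C++17): member types came from the base class.
// struct NodeIter : std::iterator<std::forward_iterator_tag, int> {};

// After: provide each type explicitly as a member typedef.
struct NodeIter {
  using iterator_category = std::forward_iterator_tag;
  using value_type = int;
  using difference_type = std::ptrdiff_t;
  using pointer = int *;
  using reference = int &;
};
```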

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
10 months ago[capi] fix notation for tizen 8.0
Seungbaek Hong [Wed, 7 Jun 2023 06:02:01 +0000 (15:02 +0900)]
[capi] fix notation for tizen 8.0

Changed the notation "tizen 7.5" to "tizen 8.0" for the Tizen release.

Signed-off-by: Seungbaek Hong <sb92.hong@samsung.com>
10 months ago[nnstreamer][trainer] Add getting model stats information accepted/tizen/unified/20230609.163743
hyunil park [Mon, 15 May 2023 02:13:18 +0000 (11:13 +0900)]
[nnstreamer][trainer] Add getting model stats information

- epoch_complete_cb is called when one epoch ends in nntrainer, and
  RunStats information is retrieved from the model.
- An event is sent to NNStreamer; NNStreamer waits to receive the results every epoch.
- Use getStatus and nnstreamer_trainer_notify_event()

Signed-off-by: hyunil park <hyunil46.park@samsung.com>
10 months ago[model] Add epoch complete callback
hyunil park [Mon, 15 May 2023 01:58:40 +0000 (10:58 +0900)]
[model] Add epoch complete callback

- Called at the end of an epoch
- Users can do what they need at the end of each epoch, e.g. get RunStats.

Signed-off-by: hyunil park <hyunil46.park@samsung.com>
12 months ago[unittest] remove meaningless unittest lts/v0.5.y accepted/tizen/unified/20230428.155055
hyeonseok lee [Wed, 26 Apr 2023 06:59:10 +0000 (15:59 +0900)]
[unittest] remove meaningless unittest

 - Remove a meaningless unittest
 - Unify unittests

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
12 months ago[graph] fix web tct fail issue accepted/tizen/unified/20230425.130129 accepted/tizen/unified/20230426.062756
hyeonseok lee [Fri, 21 Apr 2023 15:06:08 +0000 (00:06 +0900)]
[graph] fix web tct fail issue

 - replace insert with emplace
 - initialize member variable node_names

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
12 months ago[tct] fix coverity issues
Seungbaek Hong [Fri, 21 Apr 2023 08:33:43 +0000 (17:33 +0900)]
[tct] fix coverity issues

Fix some coverity issues.

This PR is still a work in progress.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Seungbaek Hong <sb92.hong@samsung.com>
12 months ago[NNstreamer] Change variable type of sub-plugin
hyunil park [Thu, 20 Apr 2023 07:11:15 +0000 (16:11 +0900)]
[NNstreamer] Change variable type of sub-plugin

- Change from int64_t to unsigned int
- Bug fix: when getting values from nnstreamer in an ARM 32-bit environment, invalid values were passed

**Self evaluation:**
1. Build test:   [X]Passed [ ]Failed [ ]Skipped
2. Run test:     [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: hyunil park <hyunil46.park@samsung.com>
12 months ago[capi] fix comment for tizen release
Seungbaek Hong [Wed, 19 Apr 2023 08:13:36 +0000 (17:13 +0900)]
[capi] fix comment for tizen release

The notation for future versions has been modified.

- "tizen 8.0" to "later version".

Signed-off-by: Seungbaek Hong <sb92.hong@samsung.com>
12 months ago[Application] Fix CVE issues in kotlin gson
jijoong.moon [Tue, 18 Apr 2023 22:43:32 +0000 (07:43 +0900)]
[Application] Fix CVE issues in kotlin gson

There are CVE issues before gson 2.8.9.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
12 months ago[capi] fix some api issues for tizen release
Seungbaek Hong [Mon, 17 Apr 2023 05:27:43 +0000 (14:27 +0900)]
[capi] fix some api issues for tizen release

I checked some API issues using the tizen-native-api-review-script.

I corrected most of the issues, but some errors still remain
in the "nntrainer_internal.h" file.

Three types of issues remain:
- enum names should end with '_e'
- struct names should end with '_s'

But I think it would be better not to rename these enums and
structs because they were already released in an earlier version.

The last type of issue seems to be a "false positive";
I think the tizen-api-review-script is not aware of macro functions, etc.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Seungbaek Hong <sb92.hong@samsung.com>
12 months ago[capi] add identity layer to capi
hyeonseok lee [Mon, 17 Apr 2023 05:25:54 +0000 (14:25 +0900)]
[capi] add identity layer to capi

 - Added identity layer enum to capi

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
12 months ago[optimizer] add log when lr_scheduler property is also set in optimizer
hyeonseok lee [Fri, 14 Apr 2023 07:35:34 +0000 (16:35 +0900)]
[optimizer] add log when lr_scheduler property is also set in optimizer

 - Added a log for when the exponential learning rate scheduler properties (decay_rate, decay_steps) are set in both the optimizer and the lr_scheduler

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
12 months ago[capi] add unittest for learning rate scheduler
hyeonseok lee [Thu, 13 Apr 2023 08:29:25 +0000 (17:29 +0900)]
[capi] add unittest for learning rate scheduler

 - Added learning rate scheduler related unittest

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
12 months ago[test] reorder tizen capi unittest
hyeonseok lee [Thu, 13 Apr 2023 07:34:57 +0000 (16:34 +0900)]
[test] reorder tizen capi unittest

 - Reorder unittest for sequential order

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
12 months ago[capi] add learning rate scheduler related api
hyeonseok lee [Thu, 13 Apr 2023 03:31:07 +0000 (12:31 +0900)]
[capi] add learning rate scheduler related api

 - Added learning rate scheduler create/destroy/set property/set property with single param APIs
 - Added setting a learning rate scheduler on the optimizer
 - Added the ml_train_lr_scheduler_type_e enum
 - Fixed some comments

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
12 months ago[ccapi] change setLearningRateScheduler function prototype
hyeonseok lee [Thu, 13 Apr 2023 03:09:36 +0000 (12:09 +0900)]
[ccapi] change setLearningRateScheduler function prototype

 - Change the return type from void to int.
   The C API will call this function, so it should return a status.
 - Change the learning rate scheduler pointer from unique_ptr to shared_ptr
   (a sketch of the resulting prototype follows)
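
A hedged sketch of the resulting prototype (class and namespace are illustrative; the actual ccapi declaration may differ):

```cpp
#include <memory>

namespace ml::train {

class LearningRateScheduler;

class Optimizer {
public:
  // Before: void setLearningRateScheduler(std::unique_ptr<LearningRateScheduler>);
  // After: an int status code can be propagated through the C API, and
  // shared_ptr lets a C API handle and the optimizer share ownership.
  int setLearningRateScheduler(std::shared_ptr<LearningRateScheduler> lrs);
};

} // namespace ml::train
```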

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
12 months ago[ccapi] rename LearningRateType enum name to LearningRateSchedulerType
hyeonseok lee [Thu, 13 Apr 2023 02:56:31 +0000 (11:56 +0900)]
[ccapi] rename LearningRateType enum name to LearningRateSchedulerType

 - Rename the enum LearningRateType to LearningRateSchedulerType for more detailed info

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
12 months ago[AHub] Fix AHub Defect
DongHak Park [Thu, 6 Apr 2023 07:06:39 +0000 (16:06 +0900)]
[AHub] Fix AHub Defect

Fix AHub defects:
- make some exception-handling statements
- change `auto element` to `auto &element` (example below)
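
A minimal example of the second fix: `auto element` copies each element on every iteration, which is what the analyzer flags, while `auto &element` binds by reference and avoids the copy.

```cpp
#include <string>
#include <vector>

void consume(const std::vector<std::string> &names) {
  // Flagged: `auto element` makes a copy of every string.
  // for (auto element : names) { ... }

  // Fixed: bind by reference, no copy is made.
  for (const auto &element : names) {
    (void)element.size(); // use the element without copying it
  }
}
```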

Signed-off-by: DongHak Park <donghak.park@samsung.com>
12 months ago[Ahub] Fix svace issue
Seungbaek Hong [Tue, 4 Apr 2023 06:45:49 +0000 (15:45 +0900)]
[Ahub] Fix svace issue

Fixed some svace issues on svace-tizen_7.5.

Add exception handling to
- tensor_trainer_nntrainer.cc
- genModelExeOrder.cpp

Signed-off-by: Seungbaek Hong <sb92.hong@samsung.com>
13 months ago[AHub] Fix AHub Defect
DongHak Park [Tue, 4 Apr 2023 06:21:26 +0000 (15:21 +0900)]
[AHub] Fix AHub Defect

Fix AHub defect:
- change strerror to strerror_r

Signed-off-by: DongHak Park <donghak.park@samsung.com>
13 months ago[Release] NNTrainer v0.5.0 release
jijoong.moon [Mon, 3 Apr 2023 23:58:46 +0000 (08:58 +0900)]
[Release] NNTrainer v0.5.0 release

NNTrainer v0.5.0 is released.

**Changes proposed in this PR:**
- Added TOC generator for README.md

Resolves:

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
13 months ago[TEST] Add execution order generator
Jiho Chu [Tue, 14 Feb 2023 08:06:51 +0000 (17:06 +0900)]
[TEST] Add execution order generator

It generates an execution-order golden file for each model.
Each golden file consists of the execution orders of each tensor.

Signed-off-by: Jiho Chu <jiho.chu@samsung.com>
13 months ago[TEST] Add execution order test
Jiho Chu [Mon, 13 Feb 2023 09:39:02 +0000 (18:39 +0900)]
[TEST] Add execution order test

This patch verifies the execution orders for the model graph.

Signed-off-by: Jiho Chu <jiho.chu@samsung.com>
13 months ago[DEBUG] Add interface for getting execution order
Jiho Chu [Mon, 13 Feb 2023 09:32:08 +0000 (18:32 +0900)]
[DEBUG] Add interface for getting execution order

This patch implements a debugging feature for execution order.

The execution order for each tensor is decided during the initialize and
finalize phases. To check their validity, an interface for getting the
execution order is added to the network graph class. It can gather both
the variables and the gradients of the tensors.

Signed-off-by: Jiho Chu <jiho.chu@samsung.com>
13 months ago[activation] add swish activation function
Seungbaek Hong [Mon, 9 Jan 2023 07:03:13 +0000 (16:03 +0900)]
[activation] add swish activation function

Added the swish activation function.

It needs both the input and the output of the activation
to calculate the derivative value.

So, I overloaded the run_prime_fn function to use the
input and output when calculating the derivative (see the sketch below).

- add swish activation
- add a test case for the swish activation function
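
As a standalone sketch of the math (not the nntrainer kernel): with
y = swish(x) = x * sigmoid(x), the derivative can be written in terms of
both the input x and the output y, which is why the overloaded
run_prime_fn takes both.

```cpp
#include <cmath>

static double sigmoid(double x) { return 1.0 / (1.0 + std::exp(-x)); }

// y = swish(x) = x * sigmoid(x)
double swish(double x) { return x * sigmoid(x); }

// dy/dx = y + sigmoid(x) * (1 - y): needs the input x (via sigmoid)
// and the output y, hence both are passed to the derivative.
double swish_prime(double x, double y) { return y + sigmoid(x) * (1.0 - y); }
```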

**Self evaluation:**
1. Build test: [x]Passed []Failed []Skipped
2. Run test: [x]Passed []Failed []Skipped

Signed-off-by: Seungbaek Hong <sb92.hong@samsung.com>
13 months ago[Memory Planner] Update the Memory Planner
jijoong.moon [Thu, 29 Dec 2022 11:16:07 +0000 (20:16 +0900)]
[Memory Planner] Update the Memory Planner

This PR includes:
  1. Assigning the right execution order depending on the layer type.
     (We need to move those fixes into each layer.)
  2. Updating the memory planner to use the memory with the smaller size.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
13 months ago[Execution Order] Set execution order properly for Opt Variables.
jijoong.moon [Tue, 27 Dec 2022 02:18:13 +0000 (11:18 +0900)]
[Execution Order] Set execution order properly for Opt Variables.

This patch includes the proper assignment of execution order for
optimizer variables, e.g., M and V for the Adam optimizer.
Only applying the gradient requires optimizer variables.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
13 months ago[Planner] Add Weight Gradient Planning
jijoong.moon [Thu, 22 Dec 2022 23:54:33 +0000 (08:54 +0900)]
[Planner] Add Weight Gradient Planning

This patch includes memory optimization for the weight
gradient. A weight gradient can be reused when there is no
execution-order conflict. To do this, the memory pool needs a data
structure to identify weight gradients. Therefore, the tensor is marked
when it is requested during a weight request in the manager,
except in the weight-sharing situation (when is_dependent is true).

Note that we do not guarantee that the gradient is always initialized
properly, so the layer developer or the user must be aware that it can
be dirty.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
13 months ago[Tensor] Remove calcGrad step for trainable layer
Jiho Chu [Tue, 20 Dec 2022 01:54:39 +0000 (10:54 +0900)]
[Tensor] Remove calcGrad step for trainable layer

This patch is for implementing the trainable property behavior.

If a layer is set as non-trainable, it does not need to execute the
calcGrad step, so we can remove it from the execution order and also
skip the gradient calculation.

Signed-off-by: Jiho Chu <jiho.chu@samsung.com>
13 months ago[ Dataset ] Add Directory Dataset Producer
jijoong.moon [Fri, 24 Mar 2023 00:14:15 +0000 (09:14 +0900)]
[ Dataset ] Add Directory Dataset Producer

This PR includes a directory dataset producer for BMP images only.

We can extend the supported image types to JPG and PNG with the proper
decoders, and can also add a resizer according to requirements.

The C API and unit test cases will be added in later PRs.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
13 months ago[Application] yolo example moving weight from torch to nntrainer
Seungbaek Hong [Fri, 17 Mar 2023 06:40:29 +0000 (15:40 +0900)]
[Application] yolo example moving weight from torch to nntrainer

Add a YOLO example moving weights from PyTorch to nntrainer.

It should be noted that PyTorch and nntrainer have different
default values for the epsilon constant of batch normalization.

Signed-off-by: Seungbaek Hong <sb92.hong@samsung.com>
13 months ago[tensor] split tensor by given sizes
hyeonseok lee [Tue, 14 Mar 2023 04:39:49 +0000 (13:39 +0900)]
[tensor] split tensor by given sizes

 - Until now, the tensor was split evenly by a given size; from now on, the split operation can split the tensor by given sizes (see the sketch below)
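
A hedged sketch of the new behavior on a plain buffer (illustrative only; the real split works on tensor dimensions and returns tensors):

```cpp
#include <cstddef>
#include <vector>

// Split `data` into chunks whose lengths are given explicitly, rather
// than into equal-sized pieces. Assumes the sizes sum to data.size().
std::vector<std::vector<float>>
split_by_sizes(const std::vector<float> &data,
               const std::vector<std::size_t> &sizes) {
  std::vector<std::vector<float>> out;
  std::size_t offset = 0;
  for (std::size_t s : sizes) {
    out.emplace_back(data.begin() + offset, data.begin() + offset + s);
    offset += s;
  }
  return out;
}
```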

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
13 months ago[nnstreamer][trainer] Rename callback parameter
hyunil park [Wed, 18 Jan 2023 05:07:22 +0000 (14:07 +0900)]
[nnstreamer][trainer] Rename callback parameter

- Rename 'train' and 'invoke' to 'start' and 'push_data'
- Use PRId64 instead of %ld or %lld for build dependency
- Add action description and constraints

Signed-off-by: hyunil park <hyunil46.park@samsung.com>
13 months ago[nnstreamer][trainer] Change queue to circular queue
hyunil park [Mon, 9 Jan 2023 11:01:31 +0000 (20:01 +0900)]
[nnstreamer][trainer] Change queue to circular queue

- Change queue to circular queue

Signed-off-by: hyunil park <hyunil46.park@samsung.com>
13 months ago[nnstreamer][trainer] Apply setting epochs by GstTensorTrainerProperties
hyunil park [Thu, 5 Jan 2023 08:32:14 +0000 (17:32 +0900)]
[nnstreamer][trainer] Apply setting epochs by GstTensorTrainerProperties

- Add getting epochs property and set to model property
- Use std::mutex instead of pthread_mutex

Signed-off-by: hyunil park <hyunil46.park@samsung.com>
13 months ago[nnstreamer][trainer] Create a queue to construct dataset
hyunil park [Thu, 5 Jan 2023 02:52:02 +0000 (11:52 +0900)]
[nnstreamer][trainer] Create a queue to construct dataset

- It is dynamically allocated to the initial queue size, and the memory
  continues to be used until training is finished.
- The maximum size of the queue is 30; if the number of samples
  is less than 30, the size is the number of samples (see the sketch below).
- Data received through 'invoke' is stored in the queue, and the data
  in the queue is used by the 'data gen callback'. The used data
  is not maintained across epochs.
  Per epoch, (number of train samples + number of valid samples) * epochs
  of data should be received.
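
A minimal sketch of the sizing rule described above (identifier names are illustrative, not the subplugin's):

```cpp
#include <algorithm>
#include <cstddef>

constexpr std::size_t kMaxQueueSize = 30;

// The queue holds at most 30 samples; fewer if the dataset is smaller.
std::size_t initial_queue_size(std::size_t num_samples) {
  return std::min(kMaxQueueSize, num_samples);
}
```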

Signed-off-by: hyunil park <hyunil46.park@samsung.com>
13 months ago[nnstreamer] Create nnstreamer tensor_trainer subplugin
hyunil park [Fri, 2 Dec 2022 06:37:33 +0000 (15:37 +0900)]
[nnstreamer] Create nnstreamer tensor_trainer subplugin

- Create nnstreamer tensor_trainer subplugin
- Create libnnstreamer_trainer_nntrainer.so

The subplugin receives GstTensorTrainerProperties from nnstreamer's
tensor_trainer to create the dataset and model. It receives tensor data
from tensor_trainer and trains the model.
The subplugin is created using GstTensorTrainerFramework, and
tensor_trainer calls create, destroy, train, and invoke_NN.

**Self evaluation:**
1. Build test: [x]Passed []Failed []Skipped
2. Run test: [x]Passed []Failed []Skipped

Signed-off-by: hyunil park <hyunil46.park@samsung.com>
13 months ago[WIP][ahub] fix ahub issue
Seungbaek Hong [Tue, 21 Mar 2023 09:37:12 +0000 (18:37 +0900)]
[WIP][ahub] fix ahub issue

Fix svace issues reported by AHub.

- add try/catch to the YOLO example
- add initialization to the Exporter class

Signed-off-by: Seungbaek Hong <sb92.hong@samsung.com>
13 months ago[ Trivial ] add more error info
jijoong.moon [Mon, 20 Mar 2023 22:28:23 +0000 (07:28 +0900)]
[ Trivial ] add more error info

Add more error info when the tflite and input dimensions mismatch.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
13 months ago[ Application ] Add android resnet application example
jijoong.moon [Mon, 20 Mar 2023 00:35:53 +0000 (09:35 +0900)]
[ Application ] Add android resnet application example

This PR includes an Android ResNet application example with the CIFAR-100
dataset. The dataset is not included because of its size,
but users can download it and place it properly into the asset
directory of the application.

This application demonstrates how nntrainer is used in an Android
application through the JNI interface, with training, testing, stopping
during processing, and inferencing.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
13 months ago[Application] Training Resnet kotlin code for Android
jijoong.moon [Tue, 14 Feb 2023 01:59:23 +0000 (10:59 +0900)]
[Application] Training Resnet kotlin code for Android

This PR includes the Kotlin Android implementation to train ResNet18 in
Applications.

**Changes proposed in this PR:**
- Added TOC generator for README.md

Resolves:

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
13 months ago[Flatbuffer] Add flatbuffer_opnode
DongHak Park [Tue, 14 Feb 2023 10:57:40 +0000 (19:57 +0900)]
[Flatbuffer] Add flatbuffer_opnode

Add flatbuffer_opnode.cpp & flatbuffer_opnode.h
- For the flatbuffer interpreter: bring in the opnode and create variables
- Currently only supports the fully connected layer
- Currently only supports NCHW-format tensors

It will be updated.

Signed-off-by: DongHak Park <donghak.park@samsung.com>
13 months ago[Conv2d] Remove inter_result in conv2d layer
jijoong.moon [Sun, 12 Mar 2023 23:21:17 +0000 (08:21 +0900)]
[Conv2d] Remove inter_result in conv2d layer

It was used for better latency, but it consumes more memory than we
expected when there are many conv layers.

In this PR, this inter_result is removed.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
13 months ago[ Tensor ] fix the dim type
jijoong.moon [Thu, 9 Mar 2023 22:36:03 +0000 (07:36 +0900)]
[ Tensor ] fix the dim type

Previously, the dimension type of the tensor dim was unsigned int, which
causes errors for bigger problems. This PR changes the dimension
type to size_t.

Also, this PR fixes the user_data of the stop callback so it can be used properly.

Resolves:

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
13 months ago[Trivial] Typo Error
DongHak Park [Tue, 14 Mar 2023 10:47:43 +0000 (19:47 +0900)]
[Trivial] Typo Error

Fix Typo Error
- Correct typos found during development in the compiler dir

Signed-off-by: DongHak Park <donghak.park@samsung.com>
13 months ago[Application] define YOLO v2 model class
Seungbaek Hong [Fri, 3 Mar 2023 05:04:27 +0000 (14:04 +0900)]
[Application] define YOLO v2 model class

Define the YOLO v2 model structure corresponding to the PyTorch example.

For now, it is implemented with fake data, and there is no loss
function for this model.

**Self evaluation:**
1. Build test: [x]Passed []Failed []Skipped
2. Run test: [x]Passed []Failed []Skipped

Signed-off-by: Seungbaek Hong <sb92.hong@samsung.com>
13 months ago[FlatBuffer] Add nntrainer_schema.fbs
DongHak Park [Tue, 14 Feb 2023 10:12:24 +0000 (19:12 +0900)]
[FlatBuffer] Add nntrainer_schema.fbs

Add nntrainer_schema.fbs
- This schema is for flatbuffer export
- It contains only fully-connected layer options
- It contains TensorFlow Lite's schema

Modify the meson build
- To compile nntrainer_schema_generated.h

This will be updated for more layers and operators.

Signed-off-by: DongHak Park <donghak.park@samsung.com>
13 months ago[Application] Save model trained from pytorch as nntrainer format
Seungbaek Hong [Fri, 10 Feb 2023 13:16:38 +0000 (22:16 +0900)]
[Application] Save model trained from pytorch as nntrainer format

I modified the resnet example to save a model trained in PyTorch in the
nntrainer binary model format.

- the PyTorch resnet example model is modified to be the same
  as the nntrainer resnet example model.

Signed-off-by: Seungbaek Hong <sb92.hong@samsung.com>
13 months ago[ahub] fix ahub issue
hyeonseok lee [Thu, 16 Mar 2023 05:30:12 +0000 (14:30 +0900)]
[ahub] fix ahub issue

 - Cast the unsigned value to a signed value and check whether it overrides the sign bit.

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
13 months ago[Application] add object detection example using pytorch
Seungbaek Hong [Fri, 3 Mar 2023 05:04:27 +0000 (14:04 +0900)]
[Application] add object detection example using pytorch

Add a YOLO v2 object detection example using PyTorch.

It is implemented in the same way as the YOLO v2 model,
but only the backbone was made into a simpler CNN model,
to make it easier to configure the initial version
of YOLO to be supported by NNTrainer.

Signed-off-by: Seungbaek Hong <sb92.hong@samsung.com>
13 months ago[NEURALNET] Refine forwarding operation
Jiho Chu [Fri, 3 Mar 2023 06:29:41 +0000 (15:29 +0900)]
[NEURALNET] Refine forwarding operation

This patch refines the forwarding operation in the neural network class.

The code depth does not match between the forward and backward
operations. For the backwarding operation, there is backwarding_op, and
it is passed to the graph, which can handle the operation.
To keep the same depth for forwarding, this patch adds a forwarding_op
function and passes it to the graph class.

Related issue:
#2108

Signed-off-by: Jiho Chu <jiho.chu@samsung.com>
13 months ago[GitAction] Fix Labeler on review commented
DongHak Park [Wed, 8 Mar 2023 08:16:17 +0000 (17:16 +0900)]
[GitAction] Fix Labeler on review commented

Problem
- On "review commented" and "review approved" events, there is an error in the HTTP API
- It can't access the issue's number, because the event number is only valid if the workflow is run by the pull request itself

To solve this problem
- change the API from ```github.event.number``` to ```github.event.pull_request.number```
- change the script version and some trivial parts

Signed-off-by: DongHak Park <donghak.park@samsung.com>
13 months agoreplace strerror with strerror_r
hyeonseok lee [Wed, 8 Mar 2023 10:36:34 +0000 (19:36 +0900)]
replace strerror with strerror_r

 - Replace strerror with strerror_r to make it thread-safe (see the sketch below).
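
A minimal sketch of the pattern, assuming the XSI (POSIX) strerror_r that returns an int; note that on glibc with _GNU_SOURCE the GNU variant returns a char * instead:

```cpp
#include <cstring>
#include <string>

// strerror() returns a pointer into shared static storage, which is not
// thread-safe; strerror_r() writes into a caller-provided buffer instead.
std::string safe_strerror(int errnum) {
  char buf[256] = {0};
  if (strerror_r(errnum, buf, sizeof(buf)) != 0)
    return "unknown error";
  return std::string(buf);
}
```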

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
13 months ago[Trivial] Update getting-started.md
DongHak Park [Tue, 28 Feb 2023 01:34:40 +0000 (10:34 +0900)]
[Trivial] Update getting-started.md

Corrected the ambiguous content to make it easier for users to understand

Signed-off-by: DongHak Park <donghak.park@samsung.com>
14 months ago[GitAction] Fix Gitaction Auto Labeler
DongHak Park [Tue, 28 Feb 2023 05:57:47 +0000 (14:57 +0900)]
[GitAction] Fix Gitaction Auto Labeler

Problem
- Removal was requested when there was no label, and addition was requested when the label was already present

Fix
- Add an is_Contain variable to the upload step; in the labeler, check whether the label is already present
- In the GitHub Actions documentation, the API uses name : "Need Review", not labels : ["Need Review"]
- Remove an unused debug line
- Reorder the steps

Signed-off-by: DongHak Park <donghak.park@samsung.com>
14 months ago[GitAction] Fix Auto Labeler
DongHak Park [Fri, 24 Feb 2023 01:00:50 +0000 (10:00 +0900)]
[GitAction] Fix Auto Labeler

Problem: there was a permission error in the auto labeler on a forked repo's PR
- GitHub only gives read permission to a forked repo's PR
- so the GitHub Action cannot make any changes to the PR

To solve this problem, USE workflow_run:
1. on a pull_request or pull_request_review trigger, save the PR number and approval count in a GitHub artifact
2. in ```labeler.yml```, download pr_num and review_num
3. in a step, check the number of approved reviews
4. in an if statement, automatically apply the labels

Signed-off-by: DongHak Park <donghak.park@samsung.com>
14 months ago[Flatbuffer] Create flatbuffer_interpreter.h
DongHak Park [Thu, 9 Feb 2023 07:58:34 +0000 (16:58 +0900)]
[Flatbuffer] Create flatbuffer_interpreter.h

Create flatbuffer_interpreter.h
and add the FLATBUFFER type to the APIs

to support a flatbuffer write interpreter derived from GraphInterpreter and tflite_interpreter

Signed-off-by: DongHak Park <donghak.park@samsung.com>
14 months ago[FIX] fix for tflite interpreter option
Jiho Chu [Wed, 22 Feb 2023 07:48:47 +0000 (16:48 +0900)]
[FIX] fix for tflite interpreter option

This patch fixes the tflite interpreter option.

Signed-off-by: Jiho Chu <jiho.chu@samsung.com>
14 months ago[FIX] delete unnecessary include
Jiho Chu [Wed, 22 Feb 2023 07:25:38 +0000 (16:25 +0900)]
[FIX] delete unnecessary include

This patch deletes an unnecessary include path.

Signed-off-by: Jiho Chu <jiho.chu@samsung.com>
14 months ago[Propose][GitAction] Review Request Auto labeler
Donghak PARK [Wed, 15 Feb 2023 04:23:53 +0000 (13:23 +0900)]
[Propose][GitAction] Review Request Auto labeler

- This GitHub Action YAML will automatically apply the "Need Review" label when a PR is created
- When a PR is edited or reopened, it will automatically apply the label
- It also removes the label when the number of approved reviews exceeds 3

For this we need ```secrets.GITHUB_TOKEN```

Signed-off-by: DongHak Park <donghak.park@samsung.com>
14 months ago[Application] Merge LSTM to Layers
DongHak Park [Thu, 9 Feb 2023 02:35:54 +0000 (11:35 +0900)]
[Application] Merge LSTM to Layers

Merge the LSTM dir into the Layers dir
- both the PyTorch and TensorFlow versions are single LSTM layer test code
- the Layers dir already held single, simple layer tests
- the NNtrainer LSTM example also exists in the Layers dir
- users can find the various simple examples more intuitively

related PR: #2101

Signed-off-by: DongHak Park <donghak.park@samsung.com>
14 months ago[TRACE] Add apply gradient trace point
Jiho Chu [Tue, 3 Jan 2023 07:13:26 +0000 (16:13 +0900)]
[TRACE] Add apply gradient trace point

This patch adds a trace point for applying the gradient.

Signed-off-by: Jiho Chu <jiho.chu@samsung.com>
14 months ago[Memory] Add apply gradient step
Jiho Chu [Mon, 2 Jan 2023 02:17:35 +0000 (11:17 +0900)]
[Memory] Add apply gradient step

After the derivative calculation, the gradient needs to be applied, but
its order was merged with the derivative calculation.
This patch separates out the apply-gradient order and modifies the
related lifespans.

Signed-off-by: Jiho Chu <jiho.chu@samsung.com>
14 months ago[Trivial] Fix svace and coverity issue
Seungbaek Hong [Mon, 20 Feb 2023 06:29:47 +0000 (15:29 +0900)]
[Trivial] Fix svace and coverity issue

- add swap_lookahead to the default constructor of Manager
- add cache_swap to the default constructor of ProfileEventData

**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped

Signed-off-by: Seungbaek Hong <sb92.hong@samsung.com>
14 months ago[SWAP] Implement lookahead behavior
Jiho Chu [Wed, 21 Dec 2022 01:29:33 +0000 (10:29 +0900)]
[SWAP] Implement lookahead behavior

This patch implements the lookahead behavior.
The lookahead property is used to preload cache elements in parallel.
The executor thread maintains the preloading task and notifies when it
finishes loading.

Signed-off-by: Jiho Chu <jiho.chu@samsung.com>
14 months ago[SWAP] Add lookahead property
Jiho Chu [Thu, 15 Dec 2022 01:54:24 +0000 (10:54 +0900)]
[SWAP] Add lookahead property

This patch adds the lookahead property.

Signed-off-by: Jiho Chu <jiho.chu@samsung.com>
14 months ago[layer] rename variable names
hyeonseok lee [Mon, 6 Feb 2023 04:49:36 +0000 (13:49 +0900)]
[layer] rename variable names

 - Rename softmax parameters
 - Rename mol layer enum from AttentionParams to MoLAttentionParams

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
14 months ago[layer] revise attention layers to apply softmax in-place
hyeonseok lee [Wed, 7 Sep 2022 04:31:24 +0000 (13:31 +0900)]
[layer] revise attention layers to apply softmax in-place

 - Remove the attention_score tensor in the attention/multi_head_attention layer to apply softmax in-place
 - Modify the tensor lifespan of fc_out to FORWARD_FUNC_LIFESPAN
 - remove the unused enum updated_state

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
14 months ago[activation] support softmax inplace
hyeonseok lee [Mon, 6 Feb 2023 01:46:35 +0000 (10:46 +0900)]
[activation] support softmax inplace

 - Support softmax in-place by using extra space.
   The size of the extra space will be the width of the input.

close #1986

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
14 months ago[Application] Add Simple Layer Tensorflow Examples
DongHak Park [Wed, 1 Feb 2023 07:53:04 +0000 (16:53 +0900)]
[Application] Add Simple Layer Tensorflow Examples

Add Simple Tensorflow Examples with Dummy Data
- Linear
- Conv
- LSTM
- Model_A_Linear
- Model_A_Conv
- Model_C_Linear
- Model_C_Conv

Each has the same layers as the NNtrainer example of the same name.

Signed-off-by: DongHak Park <donghak.park@samsung.com>
14 months ago[Memory] Reorder execution order for trainable
Jiho Chu [Wed, 28 Dec 2022 11:27:01 +0000 (20:27 +0900)]
[Memory] Reorder execution order for trainable

A 'non-trainable' layer does not need the calculate-gradient step, so
the step should be removed entirely from the execution-order count,
which can reduce the overall memory space during memory planning.

Signed-off-by: Jiho Chu <jiho.chu@samsung.com>
14 months ago[Tensor] Remove calcGrad step for trainable layer
Jiho Chu [Tue, 20 Dec 2022 01:54:39 +0000 (10:54 +0900)]
[Tensor] Remove calcGrad step for trainable layer

This patch is for implementing the trainable property behavior.

If a layer is set as non-trainable, it does not need to execute the
calcGrad step, so we can remove it from the execution order and also
skip the gradient calculation.

Signed-off-by: Jiho Chu <jiho.chu@samsung.com>
14 months ago[Application] Add Simple Layers NNtrainer examples (FC, LSTM, Conv)
DongHak Park [Wed, 1 Feb 2023 05:41:29 +0000 (14:41 +0900)]
[Application] Add Simple Layers NNtrainer examples (FC, LSTM, Conv)

Add simple-layer NNtrainer examples (FC, LSTM, Conv) with dummy data.

Add single-layer examples:
- Linear (Fully-Connected)
- Conv
- LSTM
- Model_A_Linear
- Model_A_Conv
- Model_C_Linear
- Model_C_Conv

We conduct memory & latency benchmark tests based on this code.
The loss may show inf because the dataset is dummy; to get an actual loss, set your own dataset.
- It supports only training; if users want, they can set the dataset, validation loss, test loss, etc.

Signed-off-by: DongHak Park <donghak.park@samsung.com>
14 months ago[Application] Add Simple Layers Pytorch examples
DongHak Park [Wed, 1 Feb 2023 07:27:01 +0000 (16:27 +0900)]
[Application] Add Simple Layers Pytorch examples

Add simple PyTorch examples with dummy data:
- Linear
- Conv
- LSTM (will be merged with the existing file)
- Model_A_Linear
- Model_A_Conv
- Model_C_Linear
- Model_C_Conv

Each has the same layers as the NNtrainer example of the same name.

Signed-off-by: DongHak Park <donghak.park@samsung.com>
15 months ago[SWAP] modify swap policy
Jiho Chu [Thu, 15 Dec 2022 07:41:30 +0000 (16:41 +0900)]
[SWAP] modify swap policy

This patch modifies the forward policy.

Signed-off-by: Jiho Chu <jiho.chu@samsung.com>
15 months ago[Utils] Add log timestamp type
Jiho Chu [Wed, 21 Dec 2022 02:36:42 +0000 (11:36 +0900)]
[Utils] Add log timestamp type

This patch adds types for the log timestamp.

NNTRAINER_LOG_TIMESTAMP_SEC: year, mon, day, hour, min, sec
NNTRAINER_LOG_TIMESTAMP_MS: milliseconds from log start
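
A hedged sketch of the option as an enum (the constant names come from the message above; the surrounding declaration is illustrative):

```cpp
// Selects how each log line is stamped.
enum LogTimestampType {
  NNTRAINER_LOG_TIMESTAMP_SEC, // wall clock: year, mon, day, hour, min, sec
  NNTRAINER_LOG_TIMESTAMP_MS,  // milliseconds elapsed since the log started
};
```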

Signed-off-by: Jiho Chu <jiho.chu@samsung.com>
15 months ago[Graph] Fix cend function
SeoHyungjun [Thu, 29 Dec 2022 03:41:19 +0000 (12:41 +0900)]
[Graph] Fix cend function

- Fix undefined behavior in cend() of 'graph_core.h'

  The cend() function dereferenced node_list.cend(). This is undefined
  behavior because it accesses memory in an area that is not actually
  allowed, so cend() was changed to operate normally through
  cbegin() + size (see the sketch below).

  However, this change also comes with a caveat. Callers usually check
  whether node_list is empty and, if it is, treat it as an exception;
  if cbegin() and cend() are used while node_list is empty, there might
  be a problem. So this caution was added as a @note.

- Fixed a minor typo.

Related : #2086
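
A hedged sketch of the idea (simplified; not the actual graph_core.h code):

```cpp
#include <memory>
#include <vector>

struct GraphNode {};

class GraphCore {
  std::vector<std::shared_ptr<GraphNode>> node_list;

public:
  auto cbegin() const { return node_list.cbegin(); }

  /**
   * @note if node_list is empty, cbegin() == cend() and the result must
   * not be dereferenced.
   */
  auto cend() const {
    // Derive the end iterator from cbegin() + size instead of touching
    // *node_list.cend(), which is undefined behavior.
    return node_list.cbegin() + node_list.size();
  }
};
```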

**Self evaluation:**
1. Build test:   [X]Passed [ ]Failed [ ]Skipped
2. Run test:     [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: SeoHyungjun <hyungjun.seo@samsung.com>
15 months ago[test] add test case when a specific layer is non-trainable
Seungbaek Hong [Thu, 29 Dec 2022 06:40:33 +0000 (15:40 +0900)]
[test] add test case when a specific layer is non-trainable

Add a test case when a specific layer is non-trainable.

- Add a test case when the output fc layer is set to non-trainable.

**Self evaluation:**
1. Build test: [x]Passed []Failed []Skipped
2. Run test: [x]Passed []Failed []Skipped

Signed-off-by: Seungbaek Hong <sb92.hong@samsung.com>
15 months ago[Application] Add LSTM examples using pytorch and tensorflow
Seungbaek Hong [Mon, 30 Jan 2023 06:15:35 +0000 (15:15 +0900)]
[Application] Add LSTM examples using pytorch and tensorflow

Add LSTM examples using PyTorch and TensorFlow.

- An LSTM example using NNTrainer needs to be added.
- We conduct memory benchmark tests based on this code.

Signed-off-by: Seungbaek Hong <sb92.hong@samsung.com>
15 months ago[Application] Add Resnet Pytorch example
DongHak Park [Thu, 26 Jan 2023 07:17:51 +0000 (16:17 +0900)]
[Application] Add Resnet Pytorch example

Add a ResNet PyTorch example
- It supports only training; users who want to use this code need to update it with test and validation
- This example's dataset: CIFAR-100
- This example's network is exactly the same as the NNtrainer ResNet18 example
- We conduct benchmark tests based on this code

Signed-off-by: DongHak Park <donghak.park@samsung.com>
15 months ago[TEST] Add cache loader test
Jiho Chu [Wed, 14 Dec 2022 05:59:16 +0000 (14:59 +0900)]
[TEST] Add cache loader test

This patch provides the cache loader class's unittest.

Signed-off-by: Jiho Chu <jiho.chu@samsung.com>
15 months ago[SWAP] Add cache loader class
Jiho Chu [Wed, 14 Dec 2022 05:55:46 +0000 (14:55 +0900)]
[SWAP] Add cache loader class

This patch adds the CacheLoader class.
It provides methods to load cache elements by execution order, and it
uses the task executor to manage asynchronous loading.

Signed-off-by: Jiho Chu <jiho.chu@samsung.com>
15 months ago[Profile] Add cache policy column
Jiho Chu [Thu, 15 Dec 2022 08:18:06 +0000 (17:18 +0900)]
[Profile] Add cache policy column

It adds columns for cache policy information.

Signed-off-by: Jiho Chu <jiho.chu@samsung.com>
15 months ago[SWAP] Add memory swap policy
Jiho Chu [Thu, 15 Dec 2022 07:41:30 +0000 (16:41 +0900)]
[SWAP] Add memory swap policy

Signed-off-by: Jiho Chu <jiho.chu@samsung.com>
15 months ago[SWAP] extract CacheElem class to new file
Jiho Chu [Wed, 14 Dec 2022 05:57:36 +0000 (14:57 +0900)]
[SWAP] extract CacheElem class to new file

Signed-off-by: Jiho Chu <jiho.chu@samsung.com>
15 months ago[SWAP] task executor: Remove stop and clean feature
Jiho Chu [Thu, 15 Dec 2022 08:42:40 +0000 (17:42 +0900)]
[SWAP] task executor: Remove stop and clean feature

It is changed to use a work queue to manage executions.
There is some overhead in the stop and cancel behaviors, so
they are removed temporarily for performance.

Signed-off-by: Jiho Chu <jiho.chu@samsung.com>
15 months ago[API] set default value on createModel function
Seungbaek Hong [Mon, 26 Dec 2022 02:45:13 +0000 (11:45 +0900)]
[API] set default value on createModel function

The default value of the "type" parameter of the "createModel"
function is set to NEURAL_NET.

The createModel function receives the model type as a parameter, but
the only value that can be set is NEURAL_NET
(KNN is also not supported by the createModel function in the API).

Thus, it is inefficient to have to set the parameter value every time
when there is no other option.
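
A hedged sketch of the change (simplified from the ccapi style; the actual declaration may differ):

```cpp
#include <memory>
#include <string>
#include <vector>

namespace ml::train {

class Model;
enum class ModelType { NEURAL_NET, KNN };

// Giving `type` a default lets callers simply write createModel(),
// since NEURAL_NET is the only value the function supports anyway.
std::unique_ptr<Model>
createModel(ModelType type = ModelType::NEURAL_NET,
            const std::vector<std::string> &properties = {});

} // namespace ml::train
```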

**Self evaluation:**
1. Build test: [x]Passed []Failed []Skipped
2. Run test: [x]Passed []Failed []Skipped

Signed-off-by: Seungbaek Hong <sb92.hong@samsung.com>
15 months ago[Utils] Modify memory usage script
Jiho Chu [Tue, 27 Dec 2022 07:16:17 +0000 (16:16 +0900)]
[Utils] Modify memory usage script

It uses /proc/$PID/smaps information to check current memory usage.

Please refer doc:
https://man7.org/linux/man-pages/man5/proc.5.html

Signed-off-by: Jiho Chu <jiho.chu@samsung.com>
15 months agoModify trace timing
Jiho Chu [Thu, 5 Jan 2023 05:15:19 +0000 (14:15 +0900)]
Modify trace timing

This patch modifies the trace timing.

Signed-off-by: Jiho Chu <jiho.chu@samsung.com>
15 months ago[Utils] Add trace feature
Jiho Chu [Fri, 23 Dec 2022 07:01:20 +0000 (16:01 +0900)]
[Utils] Add trace feature

The trace feature makes it easy to trace information through the
training steps. It can include memory and time tracing, and further
information could be included.
The memory trace is written by default to memory_trace_$PID.log,
while the time trace is written to time_trace_$PID.log.

Signed-off-by: Jiho Chu <jiho.chu@samsung.com>
15 months ago[Trivial] Modify for code consistency
SeoHyungjun [Wed, 18 Jan 2023 07:00:32 +0000 (16:00 +0900)]
[Trivial] Modify for code consistency

The flatten_realizer.cpp and activation_realizer.cpp declare
the same local variable. However, the actual code order was
written differently. Minor modifications were made for code
consistency.

Signed-off-by: SeoHyungjun <hyungjun.seo@samsung.com>
15 months ago[trivial] modify flatten_realizer script for consistent codes
Seungbaek Hong [Mon, 2 Jan 2023 07:06:49 +0000 (16:06 +0900)]
[trivial] modify flatten_realizer script for consistent codes

modify "flatten_realizer.cpp" script for consistent codes of
"flatten_realizer.cpp" and "activation_realizer.cpp".

"flatten_realizer" and "activation realizer" contain exactly same
implementation but written in slightly different ways.

It's a very trivial matter, but I thought it would be better to
write the same implementation in the same code, so I modified it.

**Self evaluation:**
1. Build test: [x]Passed []Failed []Skipped
2. Run test: [x]Passed []Failed []Skipped

Signed-off-by: Seungbaek Hong <sb92.hong@samsung.com>
15 months ago[layer] enable identity layer to support inplace
hyeonseok lee [Wed, 21 Dec 2022 08:34:15 +0000 (17:34 +0900)]
[layer] enable identity layer to support inplace

 - Until now, the identity layer was not considered for in-place support in the network graph.
 - This commit enables the identity layer to support in-place execution.
 - Added a helper function for the identity layer

Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
16 months ago[test] add test cases that a specific layer is non-trainable
Seungbaek Hong [Thu, 22 Dec 2022 10:32:26 +0000 (19:32 +0900)]
[test] add test cases that a specific layer is non-trainable

Add test cases that a specific layer is non-trainable.

- Test based on the PyTorch model with two fc hidden layers.
- Add a test when the first hidden layer is set to non-trainable.
- Add a test when the second hidden layer is set to non-trainable.

**Self evaluation:**
1. Build test: [x]Passed []Failed []Skipped
2. Run test: [x]Passed []Failed []Skipped

Signed-off-by: Seungbaek Hong <sb92.hong@samsung.com>
16 months ago[Tensorflow] Update resnet18 example ( TF1 to TF2 )
DonghakPark [Wed, 21 Dec 2022 03:28:24 +0000 (12:28 +0900)]
[Tensorflow] Update resnet18 example ( TF1 to TF2 )

In TensorFlow 2.x, tf.compat / tf.Session are deprecated
- update the random settings
- Tested on TensorFlow 2.11

Signed-off-by: DonghakPark <donghak.park@samsung.com>
16 months ago[Application] mnist: Fix tf example
Jiho Chu [Tue, 20 Dec 2022 10:24:25 +0000 (19:24 +0900)]
[Application] mnist: Fix tf example

Modified for tensorflow 2.10.0.

Signed-off-by: Jiho Chu <jiho.chu@samsung.com>
16 months ago[model] Change the default value of parameter in inference function
Seungbaek Hong [Fri, 16 Dec 2022 07:36:41 +0000 (16:36 +0900)]
[model] Change the default value of parameter in inference function

Change the default value of the "free_mem" parameter to false in the inference function.

Since the "free memory" option can be used only when the model is in the training mode, the desired results may not be obtained if the "free_mem" parameter is set to true when we run inference without training the model.

Therefore, it is better to set "free_mem" option to true only when this option is needed.

**Self evaluation:**
1. Build test: [x]Passed []Failed []Skipped
2. Run test: [x]Passed []Failed []Skipped

Signed-off-by: Seungbaek Hong <sb92.hong@samsung.net>