jijoong.moon [Mon, 20 Mar 2023 22:28:23 +0000 (07:28 +0900)]
[ Trivial ] Add more error info
Add more error info when the tflite and input dimensions mismatch.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Mon, 20 Mar 2023 00:35:53 +0000 (09:35 +0900)]
[ Application ] Add android resnet application example
This PR includes the Android ResNet application example with the CIFAR-100
dataset. The dataset is not included because of its size, but users can
download it and place it in the application's asset directory.
This application demonstrates how nntrainer is used in an Android
application through the JNI interface: training, testing, stopping during
processing, and inference.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Tue, 14 Feb 2023 01:59:23 +0000 (10:59 +0900)]
[Application] Training Resnet kotlin code for Android
This PR includes the Kotlin Android implementation to train ResNet18 in
Applications.
**Changes proposed in this PR:**
- Added TOC generator for README.md
Resolves:
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
DongHak Park [Tue, 14 Feb 2023 10:57:40 +0000 (19:57 +0900)]
[Flatbuffer] Add flatbuffer_opnode
Add flatbuffer_opnode.cpp & flatbuffer_opnode.h
- For the FlatBuffer interpreter: bring in the op node and build its variables
- Currently supports only the Fully Connected layer
- Currently supports only NCHW-format tensors
More layers and formats will be added later.
Signed-off-by: DongHak Park <donghak.park@samsung.com>
jijoong.moon [Sun, 12 Mar 2023 23:21:17 +0000 (08:21 +0900)]
[Conv2d] Remove inter_result in conv2d layer
It was used to improve latency, but it consumed more memory than
expected when there are many conv layers.
In this PR, the inter_result is removed.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Thu, 9 Mar 2023 22:36:03 +0000 (07:36 +0900)]
[ Tensor ] fix the dim type
Previously, the dimension type of the tensor dim was unsigned int, which
causes errors for larger problems. This PR changes the dimension type to
size_t (see the sketch below).
Also, this PR fixes the user_data of the stop callback so it can be used
properly.
Resolves:
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
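The following is a minimal, standalone illustration (not nntrainer code) of the overflow this commit addresses, using an assumed 1x64x8192x8192 shape whose element count is exactly 2^32:

```cpp
#include <cstddef>
#include <iostream>

int main() {
  // Hypothetical tensor shape: 1 x 64 x 8192 x 8192 = 4,294,967,296 elements.
  unsigned int b = 1, c = 64, h = 8192, w = 8192;

  unsigned int narrow = b * c * h * w; // wraps to 0 where unsigned int is 32-bit
  std::size_t wide = static_cast<std::size_t>(b) * c * h * w; // full product on 64-bit

  std::cout << "unsigned int: " << narrow << "\n";
  std::cout << "size_t      : " << wide << "\n";
  return 0;
}
```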
DongHak Park [Tue, 14 Mar 2023 10:47:43 +0000 (19:47 +0900)]
[Trivial] Typo Error
Fix typo errors
- Correct typos found during development in the compiler dir
Signed-off-by: DongHak Park <donghak.park@samsung.com>
Seungbaek Hong [Fri, 3 Mar 2023 05:04:27 +0000 (14:04 +0900)]
[Application] define YOLO v2 model class
Define the YOLO v2 model structure corresponding to the PyTorch example.
For now, it runs on fake data, and there is no loss function for this
model yet.
**Self evaluation:**
1. Build test: [x]Passed []Failed []Skipped
2. Run test: [x]Passed []Failed []Skipped
Signed-off-by: Seungbaek Hong <sb92.hong@samsung.com>
DongHak Park [Tue, 14 Feb 2023 10:12:24 +0000 (19:12 +0900)]
[FlatBuffer] Add nntrainer_schema.fbs
Add nntrainer_schema.fbs
- This schema is for FlatBuffer export
- It contains only Fully-Connected layer options
- It incorporates TensorFlow Lite's schema
Modify the meson build
- To compile nntrainer_schema_generated.h
This will be updated to cover more layers and operators.
Signed-off-by: DongHak Park <donghak.park@samsung.com>
Seungbaek Hong [Fri, 10 Feb 2023 13:16:38 +0000 (22:16 +0900)]
[Application] Save model trained from pytorch as nntrainer format
I modified the resnet example to save the model trained with PyTorch in
the nntrainer binary model format.
- The PyTorch resnet example model is modified to match the nntrainer
resnet example model.
Signed-off-by: Seungbaek Hong <sb92.hong@samsung.com>
hyeonseok lee [Thu, 16 Mar 2023 05:30:12 +0000 (14:30 +0900)]
[ahub] fix ahub issue
- Cast unsigned values to signed values only after checking that the value does not override the sign bit (see the sketch below).
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
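A hedged sketch of the kind of check described above; the helper name and types are illustrative, not the actual ahub fix:

```cpp
#include <limits>
#include <stdexcept>

// Only cast an unsigned value to a signed type after verifying it does not
// exceed the signed maximum (i.e. would not set the sign bit).
int toSignedChecked(unsigned int value) {
  if (value > static_cast<unsigned int>(std::numeric_limits<int>::max()))
    throw std::overflow_error("value does not fit in a signed int");
  return static_cast<int>(value);
}

int main() { return toSignedChecked(42) == 42 ? 0 : 1; }
```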
Seungbaek Hong [Fri, 3 Mar 2023 05:04:27 +0000 (14:04 +0900)]
[Application] add object detection example using pytorch
Add a YOLO v2 object detection example using PyTorch.
It follows the original YOLO v2 model, but the backbone was replaced with
a simpler CNN so that the initial version of YOLO supported by NNTrainer
is easier to configure.
Signed-off-by: Seungbaek Hong <sb92.hong@samsung.com>
Jiho Chu [Fri, 3 Mar 2023 06:29:41 +0000 (15:29 +0900)]
[NEURALNET] Refine forwarding operation
This patch refines the forwarding operation in the neural network class.
The code depth did not match between the forward and backward operations:
for backwarding there is a backwarding_op that is passed to the graph,
which handles the operation.
To keep the same depth for forwarding, this patch adds a forwarding_op
function and passes it to the graph class (see the sketch below).
Related Issue:
#2108
Signed-off-by: Jiho Chu <jiho.chu@samsung.com>
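A greatly simplified, hypothetical sketch of the pattern described above (illustrative names, not the actual NeuralNetwork/NetworkGraph code): the caller builds a forwarding_op and the graph applies it to each node.

```cpp
#include <functional>
#include <iostream>
#include <vector>

struct Node { int id; };

struct Graph {
  std::vector<Node> nodes;

  // The graph receives the operation, mirroring how backwarding_op is handled.
  void forwarding(const std::function<void(Node &, bool)> &forwarding_op,
                  bool training) {
    for (auto &n : nodes)
      forwarding_op(n, training);
  }
};

int main() {
  Graph g{{{0}, {1}, {2}}};
  auto forwarding_op = [](Node &n, bool training) {
    std::cout << "forward node " << n.id << (training ? " (train)\n" : "\n");
  };
  g.forwarding(forwarding_op, true);
  return 0;
}
```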
DongHak Park [Wed, 8 Mar 2023 08:16:17 +0000 (17:16 +0900)]
[GitAction] Fix Labeler on review commented
Problem
- On review commented and review approved events, there is an error in the HTTP API
- It can't get the issue number because the event number is only valid when the workflow is run by the pull request itself
To solve this problem
- Change the API from ```github.event.number``` to ```github.event.pull_request.number```
- Bump the script version and fix some trivial parts
Signed-off-by: DongHak Park <donghak.park@samsung.com>
hyeonseok lee [Wed, 8 Mar 2023 10:36:34 +0000 (19:36 +0900)]
replace strerror with strerror_r
- Replace strerror with strerror_r to make the error-message lookup thread safe (see the sketch below).
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
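A small sketch of a thread-safe lookup with strerror_r, assuming a POSIX environment; the helper name is illustrative, not the nntrainer code:

```cpp
#include <cstring>
#include <string>

// strerror_r writes into a caller-owned buffer, so no shared static storage
// is involved (unlike strerror).
std::string errorMessage(int errnum) {
  char buf[256] = {0};
#if defined(__GLIBC__) && defined(_GNU_SOURCE)
  // GNU variant: returns a pointer that may or may not be buf.
  return std::string(strerror_r(errnum, buf, sizeof(buf)));
#else
  // XSI variant: returns 0 on success and fills buf.
  if (strerror_r(errnum, buf, sizeof(buf)) != 0)
    return "unknown error";
  return std::string(buf);
#endif
}

int main() { return errorMessage(2).empty() ? 1 : 0; }
```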
DongHak Park [Tue, 28 Feb 2023 01:34:40 +0000 (10:34 +0900)]
[Trivial] Update getting-started.md
Corrected the ambiguous content to make it easier for users to understand
Signed-off-by: DongHak Park <donghak.park@samsung.com>
DongHak Park [Tue, 28 Feb 2023 05:57:47 +0000 (14:57 +0900)]
[GitAction] Fix Gitaction Auto Labeler
Problem
- The labeler requested label removal when there was no label, or label addition when the label was already present
Fix
- Add an is_Contain variable to the upload step; in the labeler, check whether the PR already contains the label
- Per the GitHub Actions documentation, the API uses name: "Need Review", not labels: ["Need Review"]
- Remove an unused debug line
- Reorder the steps
Signed-off-by: DongHak Park <donghak.park@samsung.com>
DongHak Park [Fri, 24 Feb 2023 01:00:50 +0000 (10:00 +0900)]
[GitAction] Fix Auto Labeler
Problem: there was a permission error in the Auto Labeler on forked repos' PRs
- GitHub only gives read permission to a forked repo's PR
- so the GitHub Action cannot make any change to the PR
To solve this problem, use workflow_run
1. on a pull_request or pull_request_review trigger, save the PR number and approval count as a GitHub artifact
2. in ```labeler.yml```, download the PR number and review count
3. in a step, check the number of approved reviews
4. in an if statement, apply the labels automatically
Signed-off-by: DongHak Park <donghak.park@samsung.com>
DongHak Park [Thu, 9 Feb 2023 07:58:34 +0000 (16:58 +0900)]
[Flatbuffer] Create flatbuffer_interpreter.h
Create flatbuffer_interpreter.h
Add a FLATBUFFER type to the APIs
to support a FlatBuffer write interpreter derived from GraphInterpreter and tflite_interpreter
Signed-off-by: DongHak Park <donghak.park@samsung.com>
Jiho Chu [Wed, 22 Feb 2023 07:48:47 +0000 (16:48 +0900)]
[FIX] fix for tflite interpreter option
This patch fixes the tflite interpreter option.
Signed-off-by: Jiho Chu <jiho.chu@samsung.com>
Jiho Chu [Wed, 22 Feb 2023 07:25:38 +0000 (16:25 +0900)]
[FIX] delete unnecessary include
This patch deletes unnecessary include path.
Signed-off-by: Jiho Chu <jiho.chu@samsung.com>
Donghak PARK [Wed, 15 Feb 2023 04:23:53 +0000 (13:23 +0900)]
[Propose][GitAction] Review Request Auto labeler
- This GitHub Action yml automatically adds the "Need Review" label when a PR is created
- When a PR is edited or reopened, it also adds the label automatically
- It removes the label when the number of approved reviews exceeds 3
For this we need ```secrets.GITHUB_TOKEN```
Signed-off-by: DongHak Park <donghak.park@samsung.com>
DongHak Park [Thu, 9 Feb 2023 02:35:54 +0000 (11:35 +0900)]
[Application] Merge LSTM to Layers
Merge the LSTM dir into the Layers dir
- Both the PyTorch and TensorFlow versions are single LSTM layer test code
- The Layers dir already holds single, simple layer tests
- An NNTrainer LSTM example also exists in the Layers dir
- Users can find the various simple examples more intuitively
Related PR: #2101
Signed-off-by: DongHak Park <donghak.park@samsung.com>
Jiho Chu [Tue, 3 Jan 2023 07:13:26 +0000 (16:13 +0900)]
[TRACE] Add apply gradient trace point
This patch adds trace point for apply gradient.
Signed-off-by: Jiho Chu <jiho.chu@samsung.com>
Jiho Chu [Mon, 2 Jan 2023 02:17:35 +0000 (11:17 +0900)]
[Memory] Add apply gradient step
After the derivative calculation, the gradient needs to be applied, but
its execution order was merged with the derivative calculation.
This patch splits out the apply-gradient order and modifies the related lifespans.
Signed-off-by: Jiho Chu <jiho.chu@samsung.com>
Seungbaek Hong [Mon, 20 Feb 2023 06:29:47 +0000 (15:29 +0900)]
[Trivial] Fix svace and coverity issue
- add swap_lookahead to the default constructor of Manager
- add cache_swap to the default constructor of ProfileEventData
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Seungbaek Hong <sb92.hong@samsung.com>
Jiho Chu [Wed, 21 Dec 2022 01:29:33 +0000 (10:29 +0900)]
[SWAP] Implement lookahead behavior
This patch implements the lookahead behavior.
The lookahead property is used to preload cache elements in parallel.
The executor thread maintains the preloading task and notifies when it
finishes loading (see the sketch below).
Signed-off-by: Jiho Chu <jiho.chu@samsung.com>
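A minimal, self-contained sketch of the preload-and-notify pattern described above (hypothetical names, not the actual CachePool/TaskExecutor code): a worker thread loads data in the background and signals a condition variable when done.

```cpp
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>
#include <vector>

int main() {
  std::mutex m;
  std::condition_variable cv;
  bool loaded = false;
  std::vector<float> cache;

  std::thread preloader([&] {
    std::vector<float> data(1024, 1.0f); // stand-in for a swap-in / preload
    {
      std::lock_guard<std::mutex> lock(m);
      cache = std::move(data);
      loaded = true;
    }
    cv.notify_one(); // tell the consumer the preload has finished
  });

  {
    std::unique_lock<std::mutex> lock(m);
    cv.wait(lock, [&] { return loaded; }); // wait for the preload
  }
  std::cout << "preloaded " << cache.size() << " elements\n";

  preloader.join();
  return 0;
}
```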
Jiho Chu [Thu, 15 Dec 2022 01:54:24 +0000 (10:54 +0900)]
[SWAP] Add lookahead property
This patch adds the lookahead property.
Signed-off-by: Jiho Chu <jiho.chu@samsung.com>
hyeonseok lee [Mon, 6 Feb 2023 04:49:36 +0000 (13:49 +0900)]
[layer] rename variable names
- Rename softmax parameters
- Rename mol layer enum from AttentionParams to MoLAttentionParams
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
hyeonseok lee [Wed, 7 Sep 2022 04:31:24 +0000 (13:31 +0900)]
[layer] revise attention layers to apply softmax as inplace
- Remove the attention_score tensor in the attention/multi_head_attention layers to apply softmax in place
- Modify the tensor lifespan of fc_out to FORWARD_FUNC_LIFESPAN
- Remove the unused enum updated_state
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
hyeonseok lee [Mon, 6 Feb 2023 01:46:35 +0000 (10:46 +0900)]
[activation] support softmax inplace
- Support in-place softmax by using extra space.
The size of the extra space is the width of the input (see the sketch below).
close #1986
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
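A standalone sketch of softmax over the width axis computed in place with a single width-sized scratch buffer, as the commit describes; this is illustrative, not the actual activation-layer code.

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <iostream>
#include <vector>

// In-place softmax along the width axis; only one scratch row of `width`
// floats is needed regardless of how many rows there are.
void softmaxInplace(std::vector<float> &data, std::size_t rows, std::size_t width) {
  std::vector<float> scratch(width);
  for (std::size_t r = 0; r < rows; ++r) {
    float *row = data.data() + r * width;
    float maxv = *std::max_element(row, row + width);
    float sum = 0.0f;
    for (std::size_t i = 0; i < width; ++i) {
      scratch[i] = std::exp(row[i] - maxv);
      sum += scratch[i];
    }
    for (std::size_t i = 0; i < width; ++i)
      row[i] = scratch[i] / sum; // overwrite the input row with the result
  }
}

int main() {
  std::vector<float> x = {1.f, 2.f, 3.f, 1.f, 1.f, 1.f};
  softmaxInplace(x, 2, 3);
  for (float v : x)
    std::cout << v << " ";
  std::cout << "\n";
  return 0;
}
```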
DongHak Park [Wed, 1 Feb 2023 07:53:04 +0000 (16:53 +0900)]
[Application] Add Simple Layer Tensorflow Examples
Add Simple Tensorflow Examples with Dummy Data
- Linear
- Conv
- LSTM
- Model_A_Linear
- Model_A_Conv
- Model_C_Linear
- Model_C_Conv
Each has the same layers as the NNTrainer example of the same name.
Signed-off-by: DongHak Park <donghak.park@samsung.com>
Jiho Chu [Wed, 28 Dec 2022 11:27:01 +0000 (20:27 +0900)]
[Memory] Reorder execution order for trainable
A 'non-trainable' layer does not need the calculate-gradient step, so the
step should be removed entirely from the execution order count, which can
reduce the overall memory space during memory planning.
Signed-off-by: Jiho Chu <jiho.chu@samsung.com>
Jiho Chu [Tue, 20 Dec 2022 01:54:39 +0000 (10:54 +0900)]
[Tensor] Remove calcGrad step for trainable layer
This patch implements the trainable property behavior.
If a layer is set as non-trainable, it does not need to execute the
calcGrad step, so we can remove it from the execution order and also
skip the gradient calculation.
Signed-off-by: Jiho Chu <jiho.chu@samsung.com>
DongHak Park [Wed, 1 Feb 2023 05:41:29 +0000 (14:41 +0900)]
[Application] Add Simple Layers NNtrainer example(FC, LSTM, Conv)
Add simple-layer NNTrainer examples (FC, LSTM, Conv) with dummy data
Add single-layer examples:
- Linear(Fully-Connected)
- Conv
- LSTM
- Model_A_Linear
- Model_A_Conv
- Model_C_Linear
- Model_C_Conv
We conduct memory & latency benchmark tests based on this code.
The loss may show inf because the dataset is dummy; set your own dataset if you want an actual loss.
- It supports only training; users can set a dataset, validation loss, test loss, etc. if they want (a construction sketch follows below).
Signed-off-by: DongHak Park <donghak.park@samsung.com>
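A rough single-FC construction sketch using the ccapi as I recall it; the header names and property keys ("input_shape", "unit", the "mse" loss layer, etc.) are assumptions and may differ from the shipped example code.

```cpp
#include <layer.h>
#include <model.h>
#include <optimizer.h>

int main() {
  using namespace ml::train;

  // Hypothetical single fully-connected network on dummy-sized input.
  auto model = createModel(ModelType::NEURAL_NET);

  model->addLayer(createLayer("input", {"input_shape=1:1:10"}));
  model->addLayer(createLayer("fully_connected", {"unit=4", "activation=softmax"}));
  model->addLayer(createLayer("mse")); // loss layer (assumed key)

  model->setOptimizer(createOptimizer("sgd", {"learning_rate=0.01"}));
  model->setProperty({"batch_size=64", "epochs=1"});

  model->compile();
  model->initialize();
  // model->setDataset(...) and model->train() would follow with a real dataset.
  return 0;
}
```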
DongHak Park [Wed, 1 Feb 2023 07:27:01 +0000 (16:27 +0900)]
[Application] Add Simple Layers Pytorch examples
Add Simple Pytorch examples with Dummy Data
- Linear
- Conv
- LSTM (will be merged with the existing file)
- Model_A_Linear
- Model_A_Conv
- Model_C_Linear
- Model_C_Conv
Each has the same layers as the NNTrainer example of the same name.
Signed-off-by: DongHak Park <donghak.park@samsung.com>
Jiho Chu [Thu, 15 Dec 2022 07:41:30 +0000 (16:41 +0900)]
[SWAP] modify swap policy
This patch modifies the forwarding policy.
Signed-off-by: Jiho Chu <jiho.chu@samsung.com>
Jiho Chu [Wed, 21 Dec 2022 02:36:42 +0000 (11:36 +0900)]
[Utils] Add log timestamp type
This patch adds types for the log timestamp:
NNTRAINER_LOG_TIMESTAMP_SEC: year, mon, day, hour, min, sec
NNTRAINER_LOG_TIMESTAMP_MS: milliseconds since the log started
Signed-off-by: Jiho Chu <jiho.chu@samsung.com>
SeoHyungjun [Thu, 29 Dec 2022 03:41:19 +0000 (12:41 +0900)]
[Graph] Fix cend function
- Fix undefined behavior in cend() of 'graph_core.h'
The cend() function dereferenced node_list.cend(). This is undefined
behavior because it accesses memory that it is not allowed to access.
So cend() was changed to work through cbegin() + size (see the sketch below).
However, this change still has a caveat. Usually callers check whether
node_list is empty and treat that case as an exception; if cbegin() and
cend() are used while node_list is empty, there may still be a problem,
so this caution was added as an @note.
- Fixed a minor typo.
Related : #2086
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: SeoHyungjun <hyungjun.seo@samsung.com>
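An isolated illustration of the principle behind the fix, with std::vector standing in for the node list (not the graph_core code): never dereference a past-the-end iterator; form the end position via cbegin() + size instead.

```cpp
#include <cassert>
#include <memory>
#include <vector>

struct Node {};

int main() {
  std::vector<std::shared_ptr<Node>> node_list(3, std::make_shared<Node>());

  // Bad (undefined behavior): Node *past = (*node_list.cend()).get();
  // Good: form the past-the-end position arithmetically, no dereference.
  auto end_it = node_list.cbegin() + node_list.size();
  assert(end_it == node_list.cend());

  // Caveat from the commit: if node_list is empty, cbegin() == cend() and
  // neither may be dereferenced.
  return 0;
}
```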
Seungbaek Hong [Thu, 29 Dec 2022 06:40:33 +0000 (15:40 +0900)]
[test] add test case when a specific layer is non-trainable
Add a test case when a specific layer is non-trainable.
- Add a test case when the output fc layer is set to non-trainable.
**Self evaluation:**
1. Build test: [x]Passed []Failed []Skipped
2. Run test: [x]Passed []Failed []Skipped
Signed-off-by: Seungbaek Hong <sb92.hong@samsung.com>
Seungbaek Hong [Mon, 30 Jan 2023 06:15:35 +0000 (15:15 +0900)]
[Application] Add LSTM examples using pytorch and tensorflow
Add LSTM examples using pytorch and tensorflow.
- LSTM example using NNTrainer needs to be added.
- We conduct memory benchmark test based on these codes.
Signed-off-by: Seungbaek Hong <sb92.hong@samsung.com>
DongHak Park [Thu, 26 Jan 2023 07:17:51 +0000 (16:17 +0900)]
[Application] Add Resnet Pytorch example
Add a ResNet PyTorch example
- It supports only training; users who want test/validation need to update the code
- This example's dataset: CIFAR100
- This example's network is exactly the same as the NNTrainer Resnet18 example
- We conduct benchmark tests based on this code
Signed-off-by: DongHak Park <donghak.park@samsung.com>
Jiho Chu [Wed, 14 Dec 2022 05:59:16 +0000 (14:59 +0900)]
[TEST] Add cache loader test
This patch provides cache loader class's unittest.
Signed-off-by: Jiho Chu <jiho.chu@samsung.com>
Jiho Chu [Wed, 14 Dec 2022 05:55:46 +0000 (14:55 +0900)]
[SWAP] Add cache loader class
This patch adds CacheLoader class.
It provides methods to load cache elements by execution order, and it
uses task executor to manage asynchronous loading.
Signed-off-by: Jiho Chu <jiho.chu@samsung.com>
Jiho Chu [Thu, 15 Dec 2022 08:18:06 +0000 (17:18 +0900)]
[Profile] Add cache policy column
It adds columns for cache policy information.
Signed-off-by: Jiho Chu <jiho.chu@samsung.com>
Jiho Chu [Thu, 15 Dec 2022 07:41:30 +0000 (16:41 +0900)]
[SWAP] Add memory swap policy
Signed-off-by: Jiho Chu <jiho.chu@samsung.com>
Jiho Chu [Wed, 14 Dec 2022 05:57:36 +0000 (14:57 +0900)]
[SWAP] extract CacheElem class to new file
Signed-off-by: Jiho Chu <jiho.chu@samsung.com>
Jiho Chu [Thu, 15 Dec 2022 08:42:40 +0000 (17:42 +0900)]
[SWAP] task executor: Remove stop and clean feature
The executor is changed to use a work queue to manage executions.
The stop and cancel behaviors add some overhead, so they are removed
temporarily for performance.
Signed-off-by: Jiho Chu <jiho.chu@samsung.com>
Seungbaek Hong [Mon, 26 Dec 2022 02:45:13 +0000 (11:45 +0900)]
[API] set default value on createModel function
The default value of the "type" parameter of the "createModel" function
is set to NEURAL_NET (see the sketch below).
The createModel function receives the model type as a parameter, but the
only value that can be set is NEURAL_NET
(KNN is also not supported by the createModel function in the API).
Thus, it is unnecessary to set the parameter value every time because
there is no other option.
**Self evaluation:**
1. Build test: [x]Passed []Failed []Skipped
2. Run test: [x]Passed []Failed []Skipped
Signed-off-by: Seungbaek Hong <sb92.hong@samsung.com>
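A self-contained sketch of the defaulted-parameter idea with simplified stand-in types; the real ml::train::createModel signature differs in its details.

```cpp
#include <memory>
#include <stdexcept>

// Stand-ins for the API types, just to show the default argument.
enum class ModelType { UNKNOWN, NEURAL_NET, KNN };
struct Model {};

std::unique_ptr<Model> createModel(ModelType type = ModelType::NEURAL_NET) {
  if (type != ModelType::NEURAL_NET)
    throw std::invalid_argument("only NEURAL_NET is supported here");
  return std::make_unique<Model>();
}

int main() {
  auto m1 = createModel();                      // uses the default
  auto m2 = createModel(ModelType::NEURAL_NET); // still allowed explicitly
  return (m1 && m2) ? 0 : 1;
}
```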
Jiho Chu [Tue, 27 Dec 2022 07:16:17 +0000 (16:16 +0900)]
[Utils] Modify memory usage script
It uses the /proc/$PID/smaps information to check the current memory usage.
Please refer to the doc:
https://man7.org/linux/man-pages/man5/proc.5.html
Signed-off-by: Jiho Chu <jiho.chu@samsung.com>
Jiho Chu [Thu, 5 Jan 2023 05:15:19 +0000 (14:15 +0900)]
Modify trace timing
This patch modifies the trace timing.
Signed-off-by: Jiho Chu <jiho.chu@samsung.com>
Jiho Chu [Fri, 23 Dec 2022 07:01:20 +0000 (16:01 +0900)]
[Utils] Add trace feature
The trace feature makes it easy to trace information throughout the
training steps. It can include memory and time tracing, and further
information could be added.
The memory trace is written by default to memory_trace_$PID.log,
while the time trace is written to time_trace_$PID.log.
Signed-off-by: Jiho Chu <jiho.chu@samsung.com>
SeoHyungjun [Wed, 18 Jan 2023 07:00:32 +0000 (16:00 +0900)]
[Trivial] Modify for code consistency
The flatten_realizer.cpp and activation_realizer.cpp declare
the same local variable. However, the actual code order was
written differently. Minor modifications were made for code
consistency.
Signed-off-by: SeoHyungjun <hyungjun.seo@samsung.com>
Seungbaek Hong [Mon, 2 Jan 2023 07:06:49 +0000 (16:06 +0900)]
[trivial] modify flatten_realizer script for consistent codes
modify "flatten_realizer.cpp" script for consistent codes of
"flatten_realizer.cpp" and "activation_realizer.cpp".
"flatten_realizer" and "activation realizer" contain exactly same
implementation but written in slightly different ways.
It's a very trivial matter, but I thought it would be better to
write the same implementation in the same code, so I modified it.
**Self evaluation:**
1. Build test: [x]Passed []Failed []Skipped
2. Run test: [x]Passed []Failed []Skipped
Signed-off-by: Seungbaek Hong <sb92.hong@samsung.com>
hyeonseok lee [Wed, 21 Dec 2022 08:34:15 +0000 (17:34 +0900)]
[layer] enable identity layer to support inplace
- Until now the identity layer was not considered for in-place support in the network graph.
- This commit enables the identity layer to run in place.
- Added a helper function for the identity layer
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
Seungbaek Hong [Thu, 22 Dec 2022 10:32:26 +0000 (19:32 +0900)]
[test] add test cases that a specific layer is non-trainable
Add test cases that a specific layer is non-trainable.
- Test based on the PyTorch model with two fc hidden layers.
- Add a test when the first hidden layer is set to non-trainable.
- Add a test when the second hidden layer is set to non-trainable.
**Self evaluation:**
1. Build test: [x]Passed []Failed []Skipped
2. Run test: [x]Passed []Failed []Skipped
Signed-off-by: Seungbaek Hong <sb92.hong@samsung.com>
DonghakPark [Wed, 21 Dec 2022 03:28:24 +0000 (12:28 +0900)]
[Tensorflow] Update resnet18 example ( TF1 to TF2 )
In TensorFlow 2.x, tf.compat / tf.Session are deprecated
- update the random-seed setting
- Tested on TensorFlow 2.11
Signed-off-by: DonghakPark <donghak.park@samsung.com>
Jiho Chu [Tue, 20 Dec 2022 10:24:25 +0000 (19:24 +0900)]
[Application] mnist: Fix tf example
Modified for tensorflow 2.10.0.
Signed-off-by: Jiho Chu <jiho.chu@samsung.com>
Seungbaek Hong [Fri, 16 Dec 2022 07:36:41 +0000 (16:36 +0900)]
[model] Change the default value of parameter in inference function
Change the default value of the "free_mem" parameter to false in the inference function.
Since the "free memory" option can be used only when the model is in the training mode, the desired results may not be obtained if the "free_mem" parameter is set to true when we run inference without training the model.
Therefore, it is better to set "free_mem" option to true only when this option is needed.
**Self evaluation:**
1. Build test: [x]Passed []Failed []Skipped
2. Run test: [x]Passed []Failed []Skipped
Signed-off-by: Seungbaek Hong <sb92.hong@samsung.net>
DonghakPark [Thu, 15 Dec 2022 06:32:59 +0000 (15:32 +0900)]
[FlatBuffers] Update Tensorflow Lite FlatBuffer Schema
Update the TensorFlow Lite FlatBuffer schema from version 3a to version 3b
- Rename fields in SignatureDef.
- Remains compatible with versions 3 and 3a
Needed to update the TensorFlow version
Signed-off-by: DonghakPark <donghak.park@samsung.com>
hyeonseok lee [Tue, 6 Dec 2022 10:30:00 +0000 (19:30 +0900)]
[swap_device] change lseek return type from int to off_t
- The return value of lseek is the offset of the requested position. A 4-byte int cannot represent
offsets beyond about 2 GiB, so any request for a larger offset causes an overflow (see the sketch below).
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
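A small sketch of why the offset must be kept in off_t, assuming a 64-bit off_t (e.g. a 64-bit Linux build, or -D_FILE_OFFSET_BITS=64 on 32-bit); the file name is illustrative.

```cpp
#include <cstdio>
#include <fcntl.h>
#include <sys/types.h>
#include <unistd.h>

int main() {
  int fd = open("swap.bin", O_CREAT | O_RDWR, 0644);
  if (fd < 0)
    return 1;

  off_t target = static_cast<off_t>(3) * 1024 * 1024 * 1024; // 3 GiB
  off_t pos = lseek(fd, target, SEEK_SET); // storing this in an int would overflow
  if (pos == static_cast<off_t>(-1))
    std::perror("lseek");
  else
    std::printf("offset: %lld\n", static_cast<long long>(pos));

  close(fd);
  return 0;
}
```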
Seungbaek Hong [Mon, 19 Dec 2022 08:15:29 +0000 (17:15 +0900)]
[utils] add getLocaltime function
Because Windows doesn't support the "localtime_r" function, the "getLocaltime"
function was added; it uses the "localtime_s" function on Windows and the
"localtime_r" function on Linux (a sketch follows below).
**Self evaluation:**
1. Build test: [x]Passed []Failed []Skipped
2. Run test: [x]Passed []Failed []Skipped
Signed-off-by: Seungbaek Hong <sb92.hong@samsung.net>
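A sketch of the portable wrapper described above; the signature and the Windows argument order shown here are assumptions about the approach, not necessarily the actual utils function.

```cpp
#include <ctime>

// localtime_s on Windows, localtime_r elsewhere; both fill a caller-owned tm.
bool getLocaltime(const std::time_t *t, std::tm *out) {
#if defined(_WIN32)
  return localtime_s(out, t) == 0; // note: destination comes first on Windows
#else
  return localtime_r(t, out) != nullptr;
#endif
}

int main() {
  std::time_t now = std::time(nullptr);
  std::tm tm_buf{};
  return getLocaltime(&now, &tm_buf) ? 0 : 1;
}
```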
SeoHyungjun [Tue, 20 Dec 2022 10:53:58 +0000 (19:53 +0900)]
[Trivial] Fix incorrect pointer usage
The parameters of the functions 'ml_tensors_data_destroy' and 'ml_tensors_info_destroy' are handle types ('ml_tensors_data_h' / 'ml_tensors_info_h'), not pointers to them.
However, several call sites were passing address values.
Added a dereference operator to match the parameter type (see the sketch below).
Signed-off-by: SeoHyungjun <hyungjun.seo@samsung.com>
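A self-contained illustration of the handle-versus-address confusion using stand-in typedefs; the real ml_tensors_* handle types and destroy functions come from the ML C API.

```cpp
#include <cstdlib>

// A handle is an opaque pointer; destroy() expects the handle value itself.
typedef void *tensors_data_h;

static int tensors_data_destroy(tensors_data_h data) {
  std::free(data);
  return 0;
}

// A helper that received the handle through an out-parameter pointer must
// dereference it before calling destroy -- the kind of fix described above.
static void cleanup(tensors_data_h *data) {
  tensors_data_destroy(*data); // correct: pass the handle value, not its address
  *data = nullptr;
}

int main() {
  tensors_data_h data = std::malloc(16);
  cleanup(&data);
  return data == nullptr ? 0 : 1;
}
```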
SeoHyungjun [Fri, 16 Dec 2022 07:40:55 +0000 (16:40 +0900)]
[Trivial] Fix svace issue
Running the increment operator on an iterator after erasing it is a logical error.
If erase is required, assign the iterator to the return value of erase() (which is the next iterator).
If not, run the increment operator (see the sketch below).
Signed-off-by: SeoHyungjun <hyungjun.seo@samsung.com>
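A minimal example of the erase-then-advance pattern the fix applies (generic std::vector code, not the nntrainer source):

```cpp
#include <iostream>
#include <vector>

int main() {
  std::vector<int> v = {1, 2, 3, 4, 5};
  for (auto it = v.begin(); it != v.end();) {
    if (*it % 2 == 0)
      it = v.erase(it); // erase() returns the next valid iterator
    else
      ++it;             // only increment when nothing was erased
  }
  for (int x : v)
    std::cout << x << " "; // prints: 1 3 5
  std::cout << "\n";
  return 0;
}
```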
SeoHyungjun [Fri, 16 Dec 2022 07:14:16 +0000 (16:14 +0900)]
[Trivial] Fix svace issue
The for loop ran only once because of an unconditional return.
To resolve this issue, we added a conditional statement so that the return runs only when the result is not ML_ERROR_NONE.
Signed-off-by: SeoHyungjun <hyungjun.seo@samsung.com>
DonghakPark [Fri, 9 Dec 2022 03:18:40 +0000 (12:18 +0900)]
[DataGen] Fix RandomDataLoader params
Rename the RandomDataLoader param (iteration_for_one_epoch) --> data_size
- RandomDataLoader generates as much data as data_size
- If a developer sets iteration_for_one_epoch to the iteration count, it doesn't work
- For example, if a developer wants BATCH 64 and ITER 1, iteration_for_one_epoch has to be set to 64, which doesn't make sense
- Rename it to make this clearer
Signed-off-by: DonghakPark <donghak.park@samsung.com>
SeoHyungjun [Fri, 9 Dec 2022 03:02:12 +0000 (12:02 +0900)]
[typo] Fix typo
- fix a typo in README.md file
- add reviewers
Signed-off-by: SeoHyungjun <hyungjun.seo@samsung.com>
SeoHyungjun [Fri, 9 Dec 2022 01:44:10 +0000 (10:44 +0900)]
[typo] Fix typo
Fix the typo error in network_graph.h:
the cbegin function returns a forward const iterator,
so 'reverse' was removed from the retval description.
Signed-off-by: SeoHyungjun <hyungjun.seo@samsung.com>
Jiho Chu [Thu, 1 Dec 2022 09:52:40 +0000 (18:52 +0900)]
[Application] resnet: Add profiling
This patch adds profiling code for resnet.
Signed-off-by: Jiho Chu <jiho.chu@samsung.com>
Jiho Chu [Fri, 2 Dec 2022 07:54:50 +0000 (16:54 +0900)]
[Resnet] Fix FakeDataLoader bug
This fixes a data loader bug related to the iteration number.
Signed-off-by: Jiho Chu <jiho.chu@samsung.com>
Seungbaek Hong [Thu, 1 Dec 2022 02:32:15 +0000 (11:32 +0900)]
[README] Delete duplicate layer description and Fix typo errors.
- Delete the pooling2D layer description because it was duplicated.
- Fix some typo errors.
Signed-off-by: Seungbaek Hong <sb92.hong@samsung.net>
Jiho Chu [Fri, 28 Oct 2022 08:14:12 +0000 (17:14 +0900)]
[TEST] Add load/unload cache pool test
This patch adds tests for the load/unload operations.
Loading/unloading of memory data is tested both by execution order and
for actives.
Signed-off-by: Jiho Chu <jiho.chu@samsung.com>
Jiho Chu [Fri, 28 Oct 2022 03:11:04 +0000 (12:11 +0900)]
[SWAP] Add load/unload methods to cachepool
This patch adds load and unload methods.
loadExec/unloadExec validate/invalidate cache elements by execution
order.
loadActives/unloadActives swap in/out the whole active cache data, which
will be used for the pause/resume behaviors.
Signed-off-by: Jiho Chu <jiho.chu@samsung.com>
jijoong.moon [Tue, 29 Nov 2022 02:45:02 +0000 (11:45 +0900)]
[API] add ml_train_model_get_weight api
This patch adds
``` c
ml_train_model_get_weight(ml_train_model_h model, const char *layer_name,
                          ml_tensors_data_h *weight, ml_tensors_info_h *info)
```
Now, developers are able to get the weight data & weight information.
ml_train_layer_get_weight will be added later.
Resolves: #2045
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
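A hedged usage sketch of the new API: the header name, the "fc1" layer name, and the use of the ml_tensors_data_get_tensor_data helper are assumptions about how a caller would typically consume the result.

```cpp
#include <cstddef>
#include <nntrainer.h> // assumed header for the C training API

void dump_fc_weight(ml_train_model_h model) {
  ml_tensors_data_h weight = nullptr;
  ml_tensors_info_h info = nullptr;

  if (ml_train_model_get_weight(model, "fc1", &weight, &info) != 0)
    return; // "fc1" is an illustrative layer name

  void *raw = nullptr;
  size_t size = 0;
  ml_tensors_data_get_tensor_data(weight, 0, &raw, &size);
  /* ... inspect raw / size here ... */

  ml_tensors_data_destroy(weight);
  ml_tensors_info_destroy(info);
}
```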
Seungbaek Hong [Wed, 23 Nov 2022 10:44:10 +0000 (19:44 +0900)]
[utils] Add getRealpath function
Since the "realpath" function is not available in the Windows operating system, the "getRealpath" function has been added and replaced so that it can be used in the Windows operating system.
- Add "getRealpath" function to utils.
**Self evaluation:**
1. Build test: [x]Passed []Failed []Skipped
2. Run test: [x]Passed []Failed []Skipped
Signed-off-by: Seungbaek Hong <sb92.hong@samsung.net>
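A sketch of the portable helper described above, assuming realpath on POSIX and _fullpath on Windows; the actual utils implementation and signature may differ.

```cpp
#include <climits>
#include <cstdlib>
#include <string>

#ifndef PATH_MAX
#define PATH_MAX 4096
#endif

std::string getRealpath(const char *path) {
  char resolved[PATH_MAX];
#if defined(_WIN32)
  if (_fullpath(resolved, path, PATH_MAX) == nullptr) // Windows CRT equivalent
    return "";
#else
  if (realpath(path, resolved) == nullptr) // POSIX
    return "";
#endif
  return std::string(resolved);
}

int main() { return getRealpath(".").empty() ? 1 : 0; }
```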
Jiho Chu [Mon, 7 Nov 2022 12:11:27 +0000 (21:11 +0900)]
[TEST] Add task unittest
This patch implements unit tests for task behaviors.
A mock class for TaskExecutor is introduced to check that the worker thread
works correctly and that the completion check also works correctly.
Many tests are added to check the correctness of asynchronous tasks.
Signed-off-by: Jiho Chu <jiho.chu@samsung.com>
Jiho Chu [Mon, 7 Nov 2022 02:09:32 +0000 (11:09 +0900)]
[Tensor] Add TaskExecutor class
This patch adds the TaskExecutor class.
This class manages task execution by task type:
synchronous tasks are executed immediately, while asynchronous tasks are
executed on a thread.
Signed-off-by: Jiho Chu <jiho.chu@samsung.com>
Jiho Chu [Mon, 7 Nov 2022 02:08:40 +0000 (11:08 +0900)]
[Tensor] Add Task class
This patch adds the Task class.
A Task has several properties for its state and working information.
There are two types of Task, synchronous and asynchronous, and their
behavior differs when they are executed by TaskExecutor.
Signed-off-by: Jiho Chu <jiho.chu@samsung.com>
hyeonseok lee [Tue, 29 Nov 2022 02:51:15 +0000 (11:51 +0900)]
[ln] optimize layer normalization layer input memory
- Like the bn layer, the ln layer does not need its input during backwarding.
- This commit sets the lifespan of the ln layer input to FORWARD_FUNC_LIFESPAN.
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
hyunil park [Fri, 18 Nov 2022 00:47:26 +0000 (09:47 +0900)]
[Logger] Modify initialization error message of logger instance
- A runtime error occurs when an app is executed somewhere other than the
home directory, except on Android and Tizen
- Currently, Android and Tizen don't create a logger
- If a user runs an app using nntrainer in a place where the logger
cannot be initialized, the path is shown to help
- On Ubuntu, for example, the logger cannot be initialized under
'/usr/bin/', '/usr/lib/', etc.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: hyunil park <hyunil46.park@samsung.com>
gichan [Wed, 23 Nov 2022 03:10:39 +0000 (12:10 +0900)]
Revert "[ CI ] Temperal Fix for the CI about NNStreamer Backone"
This reverts commit
423097f793a7af0269255cba192fd4acf18fce62.
Signed-off-by: gichan <gichan2.jang@samsung.com>
gichan [Wed, 23 Nov 2022 03:10:23 +0000 (12:10 +0900)]
Revert "[ CI ] Temporal Fix for the CI about NNStreamer Backone"
This reverts commit
e41011408c1dc51737e3884644c704fccee871a3.
Signed-off-by: gichan <gichan2.jang@samsung.com>
SeoHyungjun [Wed, 23 Nov 2022 08:29:13 +0000 (17:29 +0900)]
[Trivial] Fix coverity issue
Added a conditional statement so the code runs only if the result of find_if() is not the end iterator,
to solve the `INVALIDATE_ITERATOR` issue (see the sketch below).
**Self evaluation:**
1. Build test: [X]Passed []Failed []Skipped
2. Run test: [X]Passed []Failed []Skipped
Signed-off-by: SeoHyungjun <hyungjun.seo@samsung.com>
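A generic illustration of the guarded find_if pattern the fix introduces (plain std::vector code, not the nntrainer source):

```cpp
#include <algorithm>
#include <iostream>
#include <vector>

int main() {
  std::vector<int> orders = {3, 7, 11};
  auto it = std::find_if(orders.begin(), orders.end(),
                         [](int o) { return o > 5; });
  if (it != orders.end()) { // without this check, using *it could be invalid
    std::cout << "found " << *it << "\n";
    orders.erase(it);
  }
  return 0;
}
```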
SeoHyungjun [Wed, 23 Nov 2022 07:21:47 +0000 (16:21 +0900)]
[Trivial] Fix svace issues
- Add 'is_virtual' to default constructor of 'TfOpNode' in tflite_opnode.cpp
- Add 'fbb' to default constructor of 'Exporter' in node_exporter.cpp
**Self evaluation:**
1. Build test: [X]Passed []Failed []Skipped
2. Run test: [X]Passed []Failed []Skipped
Signed-off-by: SeoHyungjun <hyungjun.seo@samsung.com>
hyeonseok lee [Wed, 23 Nov 2022 03:12:34 +0000 (12:12 +0900)]
[tensor] Add default delete on tensor map
- Omitting the default deleter of shared_ptr&lt;MemoryData&gt; causes a memory leak.
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
hyeonseok lee [Wed, 26 Oct 2022 06:32:59 +0000 (15:32 +0900)]
[split layer] make a unittest to test split input dimension by split number
- Make a unit test to check splitting the input dimension by the split number
- A Conv2d layer is added so that the model has a weight
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
hyeonseok lee [Wed, 26 Oct 2022 06:08:48 +0000 (15:08 +0900)]
[split_layer] enhance split layer to split input dimension by given number
- Until now the split layer split the input dimension by the input dimension size, which makes the output dimension 1.
Now a split number can be set, which makes the output dimension (input dimension / split number); for example, an input dimension of 12 with split number 3 gives three outputs of dimension 4.
- In this commit the given split number is expected to divide the input dimension evenly.
Otherwise it will raise an error.
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
jijoong.moon [Thu, 10 Nov 2022 05:41:28 +0000 (14:41 +0900)]
[ CI ] Temporal Fix for the CI about NNStreamer Backone
Temporarily, we have to turn off the nnstreamer backbone until some issue is fixed.
Self evaluation:
Build test: [X]Passed [ ]Failed [ ]Skipped
Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
jijoong.moon [Thu, 10 Nov 2022 02:00:40 +0000 (11:00 +0900)]
[ CI ] Temperal Fix for the CI about NNStreamer Backone
Temporarily, we have to turn off the nnstreamer backbone until some
issue is fixed.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Jiho Chu [Thu, 29 Sep 2022 02:48:27 +0000 (11:48 +0900)]
[TEST] Add Cache pool test
The legacy MemoryPool test is modified to cover both MemoryPool and
CachePool, using a parameterized test. CachePool-specific tests are also
created.
Signed-off-by: Jiho Chu <jiho.chu@samsung.com>
Jiho Chu [Thu, 29 Sep 2022 02:39:16 +0000 (11:39 +0900)]
[TENSOR] modified for CachePool test
- Several methods are changed to virtual so they can be mocked in tests.
- Throw the same exceptions as MemoryPool
Signed-off-by: Jiho Chu <jiho.chu@samsung.com>
DonghakPark [Fri, 21 Oct 2022 05:42:43 +0000 (14:42 +0900)]
[compiler] Revisit tflite_interpreter for TF Lite Export
This commit is for TF Lite export
Revisit the FC layer reorder
- for the case of (FC - Conv - FC)
Disable in-place execution in the test case
- this should be fixed later
Signed-off-by: DonghakPark <donghak.park@samsung.com>
DonghakPark [Mon, 26 Sep 2022 07:09:07 +0000 (16:09 +0900)]
[compiler] Revisit FullyConnected Layer Weights Transpose and Reorder for TF Lite Export
This commit is for TFlite export (REVISIT)
Transpose FullyConnected Layer's Weights (NCHW --> NHWC)
update at : 2022/09/26
Related : nnstreamer#1912
Signed-off-by: DonghakPark <donghak.park@samsung.com>
DonghakPark [Fri, 2 Sep 2022 08:53:22 +0000 (17:53 +0900)]
[compiler] FullyConnected Layer Weights Transpose for TFLite Export
This commit is for TFlite export
Transpose FullyConnected Layer's weights (NCHW --> NHWC)
update at : 2022/09/13
Related : #1912
Signed-off-by: DonghakPark <donghak.park@samsung.com>
seongwoo [Thu, 19 May 2022 06:23:57 +0000 (15:23 +0900)]
[compiler] Revisit tflite interpreter
This patch revisits the tflite interpreter.
1. Refactor the way the tflite binary is exported.
2. Support the operators included in the Resnet network.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: seongwoo <mhs4670go@naver.com>
Jiho Chu [Thu, 25 Aug 2022 07:02:36 +0000 (16:02 +0900)]
[Profile] Add annotate for memory profiling
It adds several annotations for memory profiling.
Each of them helps identify the profiling steps.
Signed-off-by: Jiho Chu <jiho.chu@samsung.com>
Jiho Chu [Wed, 26 Oct 2022 11:11:08 +0000 (20:11 +0900)]
[INI] Add memory swap properties
This patch adds memory swap properties:
memory_swap: enable the memory swap feature
memory_swap_path: path where the swap file is saved
Signed-off-by: Jiho Chu <jiho.chu@samsung.com>
Jiho Chu [Fri, 21 Oct 2022 04:56:40 +0000 (13:56 +0900)]
[Context] Propagate properties to the model
This patch propagates app properties.
App properties are described in the global configuration file (ini), and
the properties live under sections, which only exist for human
readability. That is, a section does not play any real role in behavior.
Propagated properties can be used anywhere in the network.
This patch includes 'memory_swap' and 'memory_swap_path', which are used
as Model flex properties.
SeoHyungjun [Mon, 31 Oct 2022 04:35:59 +0000 (13:35 +0900)]
[typo] Fix typo
fix the typo error in README.md file
Signed-off-by: SeoHyungjun <hyungjun.seo@samsung.com>
DonghakPark [Thu, 27 Oct 2022 11:58:41 +0000 (20:58 +0900)]
[typo] Fix typo error
Fix the typo error through spell checker
- Application/
- api/ccapi/
- nntrainer/
Signed-off-by: DonghakPark <donghak.park@samsung.com>