Jihoon Lee [Tue, 26 Jan 2021 03:09:45 +0000 (12:09 +0900)]
[CAPI/acr] Update tizen capi
**Changes proposed in this PR:**
- Set void *user_data
- Update layer type enum
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Tue, 26 Jan 2021 04:20:33 +0000 (13:20 +0900)]
[Tensor] Rearrange Methods
This patch groups tensor methods by arithmetic operation (tensor.h is
reflected with rebase).
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Thu, 21 Jan 2021 09:48:24 +0000 (18:48 +0900)]
[Tensor] rearrange methods
- Add missing out param methods
- Change the way some methods are delegated
- Rename s/operator_/apply_broadcast/
- Remove `operator_i` and `operator_i_util`
- Assure dimension checks
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Thu, 21 Jan 2021 04:40:13 +0000 (13:40 +0900)]
[Tensor/Clean] Relocate tensor methods
This patch relocates arithmetic methods while adding some missing
outplace operation signatures.
Order is
1. inplace -> outplace -> outplace (with allocated memory)
2. multiply -> divide -> add -> subtract
3. scalar -> tensor
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Wed, 27 Jan 2021 02:25:58 +0000 (11:25 +0900)]
[tensor] Update interface for tensor::map
Update the interface for tensor::map to include the size of the original
buffer, to ensure that the buffer contains enough memory for the tensor
shape wrapping it.
Added another negative unittest for it.
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
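As an illustration of the change above, a size check of this kind might look like the following minimal sketch; the helper name `map_checked` and its signature are hypothetical, not the actual tensor::map API:

```cpp
#include <cstddef>
#include <stdexcept>

// Hypothetical sketch: hand out a view into an external buffer only when the
// buffer is large enough for the requested tensor shape at the given offset.
float *map_checked(float *buf, size_t buf_size_bytes, size_t tensor_elems,
                   size_t offset_elems) {
  if (buf == nullptr)
    throw std::invalid_argument("buffer is null");
  // The wrapped region must fit entirely inside the caller's buffer;
  // a request that does not fit is the negative case a unittest would cover.
  if ((offset_elems + tensor_elems) * sizeof(float) > buf_size_bytes)
    throw std::invalid_argument("buffer too small for tensor shape");
  return buf + offset_elems;
}
```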
Parichay Kapoor [Tue, 26 Jan 2021 07:53:22 +0000 (16:53 +0900)]
[manager] bug fix for Tensor::map
This patch fixes a bug exposed by Tensor::Map,
where the offset was not checked when assigning the data.
The cause was a bug in the manager for the batch normalization layer.
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Tue, 26 Jan 2021 07:50:02 +0000 (16:50 +0900)]
[manager] Disable user_shared_memory
Disable user_shared_memory, as NNAPI is not required to be supported.
This is also needed to decouple tensor structure allocation from its
internal memory allocation.
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Gichan Jang [Wed, 27 Jan 2021 06:39:22 +0000 (15:39 +0900)]
[Docs] Generate nntrainer docs
Generate nntrainer docs.
See: https://nntrainer.github.io/
- Sub-documents such as Application/* need to be added
- Tables and other elements need to be modified according to the hotdoc rules.
Signed-off-by: Gichan Jang <gichan2.jang@samsung.com>
Jihoon Lee [Wed, 20 Jan 2021 08:00:51 +0000 (17:00 +0900)]
[spec] add backward compatibility under tizen 6
Since `ml-error-common` redeclares the error enum inside nnstreamer in Tizen
5.5, this patch works around the issue by adding a fake ml-api-error.h.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Wed, 20 Jan 2021 07:07:11 +0000 (16:07 +0900)]
[meson] Clean up ml-api-common dependency
This patch cleans up the ml-api-common dependency.
If it proves stable across multiple platforms, we can remove
`api/capi/include/platform/ml-api-common.h`; let's keep it for now.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Tue, 26 Jan 2021 04:04:36 +0000 (13:04 +0900)]
[Fix] Reflect changes to upstream/main
Merging some big PRs introduced inconsistencies that caused
a build break. This patch resolves the issue.
**Changes proposed in this PR:**
- Use manager.initializeTensor() in the unittest
- Add training signature to forwarding
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
MyungJoo Ham [Mon, 25 Jan 2021 08:42:45 +0000 (17:42 +0900)]
License Fix / Relicense to Apache-2.0
1. Do not use "Apache-2.0-only". It's "Apache-2.0".
2. Relicense files to Apache-2.0. (The author permits; I'm the author.)
Signed-off-by: MyungJoo Ham <myungjoo.ham@samsung.com>
Parichay Kapoor [Wed, 20 Jan 2021 09:05:32 +0000 (18:05 +0900)]
[data augmentation] support for random translate
Added support for random translate, which is fractional and does mirroring.
This is implemented with OpenCV, but the build is allowed without OpenCV;
the model can still be built, but using this layer without OpenCV will throw.
Added a corresponding unittest as well.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Wed, 20 Jan 2021 08:03:40 +0000 (17:03 +0900)]
[data augmentation] Support for random flip
Add support for random flip data augmentation along with its unittests
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
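For illustration, a random horizontal flip of this kind could be sketched as below; the CHW layout, the function name, and the probability parameter are assumptions, not the nntrainer implementation:

```cpp
#include <random>
#include <utility>
#include <vector>

// Hypothetical sketch: flip a CHW float image horizontally with probability p.
void randomFlipHorizontal(std::vector<float> &img, int channels, int height,
                          int width, float p, std::mt19937 &rng) {
  std::bernoulli_distribution flip(p);
  if (!flip(rng))
    return; // leave the image untouched the rest of the time
  for (int c = 0; c < channels; ++c)
    for (int h = 0; h < height; ++h)
      for (int w = 0; w < width / 2; ++w)
        std::swap(img[(c * height + h) * width + w],
                  img[(c * height + h) * width + (width - 1 - w)]);
}
```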
Parichay Kapoor [Tue, 5 Jan 2021 15:16:03 +0000 (00:16 +0900)]
[dynamic-training] Add dynamic training using derivatives
Added dynamic training using derivatives, where the decision to
apply the gradient is made using the received derivative,
without calculating the gradient itself.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Mon, 4 Jan 2021 12:02:36 +0000 (21:02 +0900)]
[dynamic training] Adding dynamic-training code
Added dynamic-training code with both max and l2norm modes.
Verified working with existing examples given the threshold.
TODO: support dynamic training with derivative
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
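A minimal sketch of such a gradient-skipping decision, assuming a hypothetical helper (the actual mode names and threshold semantics live in the nntrainer code):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Hypothetical sketch: apply the gradient only when its max or l2norm
// crosses the configured threshold.
bool shouldApplyGradient(const std::vector<float> &grad, float threshold,
                         bool use_l2norm) {
  float metric = 0.0f;
  for (float g : grad)
    metric = use_l2norm ? metric + g * g : std::max(metric, std::fabs(g));
  if (use_l2norm)
    metric = std::sqrt(metric);
  return metric >= threshold; // below the threshold, skip applying
}
```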
Jihoon Lee [Mon, 18 Jan 2021 06:58:39 +0000 (15:58 +0900)]
[Fix/TFlite] Fix tflite allocation
Now, memory allocation is handled outside of each layer.
Accordingly, allocating the output tensor shouldn't be done inside a layer.
For the same reason, loss layer backwarding needs a fix; for now
it is just commented out and will be handled soon.
This patch handles the issue for the tflite layer.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Fri, 22 Jan 2021 09:43:34 +0000 (18:43 +0900)]
[var_grad] Remove redundant argument for initializeWeight
Remove the redundant argument for initializeWeight - gtrain -
as weight initialization is independent of whether the weight is
going to be used in training or not.
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Wed, 20 Jan 2021 03:17:41 +0000 (12:17 +0900)]
[weight] Decouple init of weight and gradients
Decouple initialization of weight variables from their corresponding gradients.
Weights are always initialized and used later for inference/training,
but gradients are initialized only for training, and with different
configurations based on the chosen optimization strategies.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Mon, 4 Jan 2021 10:13:59 +0000 (19:13 +0900)]
[pooling] Do not allocate memory in initialize
Setting the batch size in initialize for the pooling layer allocates memory.
However, the final batch size is allowed to change for inference/training.
This unnecessarily changes the peak memory requirement.
For now, this memory is allocated during forwarding.
Later this will be handled as a tensor by the manager once the int data type is supported.
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Mon, 4 Jan 2021 10:12:42 +0000 (19:12 +0900)]
[manager] Do not allocate adam for inference
Do not allocate adam and gradient memory for weights
when the model is being executed for inference.
V2:
Separate memory allocation for weights and gradients.
Gradient memory allocation is decided based on training/inference.
However, weight memory must always be allocated and loaded
before readModel(), so the two need to be separated.
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Gichan Jang [Fri, 22 Jan 2021 08:08:58 +0000 (17:08 +0900)]
[README] Add coverity badge
Add nntrainer coverity badge to README.
Signed-off-by: Gichan Jang <gichan2.jang@samsung.com>
Parichay Kapoor [Fri, 22 Jan 2021 03:41:14 +0000 (12:41 +0900)]
[optimization] Bug fix for in-place layer optimization
In-place layer optimization is performed for multiple layers - activation and batch normalization layers -
and this list will grow with data augmentation etc.
However, consecutive in-place layers cannot work correctly if these layers are trainable.
They work perfectly if they don't need to pass the derivative back.
For now, this patch limits two consecutive layers from being in-place.
This will be made generic later, dependent on the trainable and inPlace properties of the layer.
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Tue, 19 Jan 2021 14:15:20 +0000 (23:15 +0900)]
[inference] Add input validation for inference
Add input validation for inference of the neural network
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Sat, 9 Jan 2021 07:04:21 +0000 (16:04 +0900)]
[Tensor] Add outplace method for arithmetic ops
Add outplace ops with already allocated tensor.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Tue, 19 Jan 2021 12:58:47 +0000 (21:58 +0900)]
[meson] Update meson for ubuntu 20.04
Update meson to work with ubuntu 20.04
Also add some missing checks
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Tue, 19 Jan 2021 13:01:32 +0000 (22:01 +0900)]
[docs] Add missing dependencies
Add missing dependencies required to build nntrainer with meson
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Thu, 7 Jan 2021 06:50:33 +0000 (15:50 +0900)]
[Layer] Add eval mode for the training
**Changes proposed in this PR:**
- This patch adds an eval mode for the training forward path and
fixes the batch normalization layer accordingly
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
jijoong.moon [Thu, 14 Jan 2021 03:58:31 +0000 (12:58 +0900)]
[ Fix ] Fix Logistic Regression Example Error
This PR includes fixes for the logistic regression application:
change the forwarding function.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
hyeonseok lee [Tue, 5 Jan 2021 12:03:39 +0000 (21:03 +0900)]
Enable trainable property for layers
Set the trainable value to false in the constructors of the activation layer and flatten_layer.
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
Jihoon Lee [Fri, 8 Jan 2021 03:02:47 +0000 (12:02 +0900)]
[Tools] Fix bug that translayer cannot detect bn
Batch normalization in tf 2.3 was not detected in transLayer, so
a new type was added to detect the batch normalization layer.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Wed, 30 Dec 2020 13:02:53 +0000 (22:02 +0900)]
[transfer learning] Enable test on ubuntu
Enable testing of the trained model on ubuntu
Added a check to ensure that nnstreamer is enabled.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Tue, 29 Dec 2020 12:11:16 +0000 (21:11 +0900)]
[manager] Optimize input/output memory for inference
Optimize input/output memory for inference by using a shared buffer,
where max([sum(input_l, output_l) for l in all layers]) memory
is allocated for inference.
A baseline working unittest is added with the models unittest, which ensures
that inference works with and without optimizations without any
failures. Value verification testing is done by the nnstreamer subplugin
of nntrainer.
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
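The sizing rule described above could be sketched as follows; the helper and the per-layer size vectors are illustrative assumptions:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Hypothetical sketch: the shared inference buffer is sized to the largest
// combined input+output footprint of any single layer,
// i.e. max over layers l of (input_l + output_l).
size_t sharedBufferSize(const std::vector<size_t> &input_sizes,
                        const std::vector<size_t> &output_sizes) {
  size_t max_req = 0;
  for (size_t l = 0; l < input_sizes.size(); ++l)
    max_req = std::max(max_req, input_sizes[l] + output_sizes[l]);
  return max_req;
}
```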
hyeonseok lee [Tue, 5 Jan 2021 11:55:11 +0000 (20:55 +0900)]
Support sum value in profiler
Now the profiler will show the avg, min, max, and sum values.
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
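A minimal sketch of the per-event statistics such a profiler could keep (the names here are assumptions, not the nntrainer profiler API):

```cpp
#include <algorithm>
#include <cstdint>

// Hypothetical sketch: accumulate everything needed to report
// avg, min, max and sum per profiled event.
struct ProfileStat {
  uint64_t sum_us = 0, max_us = 0, count = 0;
  uint64_t min_us = UINT64_MAX;

  void record(uint64_t duration_us) {
    sum_us += duration_us;
    min_us = std::min(min_us, duration_us);
    max_us = std::max(max_us, duration_us);
    ++count;
  }
  double avg() const { return count ? double(sum_us) / count : 0.0; }
};
```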
Jihoon Lee [Tue, 29 Dec 2020 11:51:13 +0000 (20:51 +0900)]
[Test] Disable deriv verification when opt is on
This patch disables derivative verification and instead only checks the
whole returned derivatives.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Mon, 28 Dec 2020 07:48:16 +0000 (16:48 +0900)]
[Conv2d] Optimize layer loop
This optimizes the layer loops by:
- minimizing padding calculation
- maximizing cache hits by transposing the matrix
- maximizing cache hits by reordering the loop order
- ~using a single offset to minimize offset calculation~
- ~adding a shortcut when kernel size is 1~
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Mon, 28 Dec 2020 06:44:20 +0000 (15:44 +0900)]
[Conv2d] Reuse im2col array by batch
This patch enables reusing the im2col array across the batch, while saving
the initialization time of setting it to zero.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Mon, 28 Dec 2020 06:08:01 +0000 (15:08 +0900)]
[Conv2d] Change conv2d gemm to dot
- Change conv2d gemm to dot to enable the optimization path inside the dot
operation
- Add a beta option to the dot operation (C = alpha*A*B + beta*C)
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
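The beta option follows the usual GEMM convention; a minimal sketch of a dot delegating to BLAS under that convention (the wrapper itself is illustrative, not the nntrainer code):

```cpp
#include <cblas.h>

// Illustrative sketch: C = alpha * A * B + beta * C, row-major, with
// A of shape M x K, B of K x N, C of M x N. beta == 0 overwrites C,
// beta == 1 accumulates into it.
void dot_with_beta(const float *A, const float *B, float *C, int M, int N,
                   int K, float alpha, float beta) {
  cblas_sgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans, M, N, K, alpha,
              A, /*lda=*/K, B, /*ldb=*/N, beta, C, /*ldc=*/N);
}
```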
hyeonseok lee [Tue, 29 Dec 2020 10:35:42 +0000 (19:35 +0900)]
[bugfix] Fix model path and dataset path in model_loader.cpp
Fix the model path and dataset path to include the working directory path.
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
Parichay Kapoor [Tue, 29 Dec 2020 06:18:55 +0000 (15:18 +0900)]
[dist/tizen] Enable base unittests for tizen build
Enable nntrainer unittests for the tizen build.
Not sure why or when this got commented out,
but let's enable it.
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Thu, 24 Dec 2020 06:09:54 +0000 (15:09 +0900)]
[model] Optimize model input/output
Optimize the model's extra input/output memory allocation counting towards peak memory allocation.
Memory is allocated for the input of the input layer and the output/gradient of the output layer.
However, that memory is never used, as train_run() allocates a new buffer and passes it to the
input layer/loss layer.
This patch uses the already allocated memory from the input/loss layer to collect input/label data.
This patch also removes the extra parameters from forwarding/backwarding and the corresponding
with_val functions. Further, the two types of forwarding in the loss layer have been merged into a single function.
Now, the loss layer and input layer do not need to be distinguished and can be treated as regular layers.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Thu, 24 Dec 2020 06:49:44 +0000 (15:49 +0900)]
[Conv] Optimize im2col
This patch optimizes im2col by...
- Adding padding as an argument instead of passing the pad value
- Skipping creation of a padded tensor and assignment for padded indices
- Refactoring variable names for clarity
See also #824
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Thu, 24 Dec 2020 06:45:19 +0000 (15:45 +0900)]
[Tensor] Optimize accessor
This patch...
- inlines some accessors with the noexcept specifier for a speed boost
- adds getValuePadded to reduce the memory copy needed to make a padded tensor
see also #825
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Cc: Parichay Kapoor <pk.kapoor@samsung.com>
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
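A padded accessor of the kind described could be sketched like this; the flat layout and function shape are assumptions, not the actual nntrainer accessor:

```cpp
// Hypothetical sketch: return 0 for coordinates that fall in the padding
// region instead of materializing a padded copy of the tensor.
inline float getValuePadded(const float *data, int height, int width, int h,
                            int w, int pad_h, int pad_w) noexcept {
  int ih = h - pad_h; // map padded coordinates back to the source tensor
  int iw = w - pad_w;
  if (ih < 0 || ih >= height || iw < 0 || iw >= width)
    return 0.0f; // padding region
  return data[ih * width + iw];
}
```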
Jihoon Lee [Tue, 29 Dec 2020 01:31:19 +0000 (10:31 +0900)]
[Fix] Assign default value for max_deriv size
This patch initializes max_derivative_size to avoid unexpected termination.
resolves #834
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Wed, 23 Dec 2020 09:55:05 +0000 (18:55 +0900)]
[model/test] Duplicate models test for optimization
Run the models test twice, once with all the optimizations enabled
and once with all the optimizations disabled.
This ensures that both modes work properly.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Tue, 22 Dec 2020 01:44:14 +0000 (10:44 +0900)]
[activation] Making activation in-place
Made the activation layer in-place.
Each layer now allocates memory for its output rather than for its input.
For the activation layer, if its memory is optimized, then the memory
for the layer behind the activation layer is not allocated,
and the memory for the derivative of the activation layer is shared
among all such layers.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Fri, 18 Dec 2020 04:57:40 +0000 (13:57 +0900)]
[layer] Use gradient instead of variable for derivative
Use the gradient instead of the variable for the derivative.
The manager internally sets the gradient memory to be the same as the variable
for the optimization, but hides this kind of optimization from the layer.
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Fri, 18 Dec 2020 04:04:48 +0000 (13:04 +0900)]
[manager] Manager tracks input/output memory
The manager tracks input/output memory and allocates it
based on whether the execution is training or inference.
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Fri, 18 Dec 2020 03:22:55 +0000 (12:22 +0900)]
[inputlayer] Input layer must be non-trainable
The input layer must always be non-trainable, as it does not support the backwarding operation.
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Thu, 17 Dec 2020 07:29:28 +0000 (16:29 +0900)]
[layer] Move layer input/output management to manager
Move layer input/output memory management to the manager.
This is accomplished by replacing the use of NetBuffers with Var_Grad.
Now, all the memory for weights, gradients, inputs, outputs, and derivatives
is managed by the manager, which allows more optimizations to be done with
inputs/outputs.
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Fri, 18 Dec 2020 05:23:07 +0000 (14:23 +0900)]
[Profiler] Change profiler specs
- Profiler time unit is changed: milli -> microsecond
- The report is now ordered by key
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Fri, 18 Dec 2020 03:16:19 +0000 (12:16 +0900)]
[Profiler] Apply ops level profiler
This patch attaches ops level profiler
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Fri, 18 Dec 2020 01:52:14 +0000 (10:52 +0900)]
[Profiler] Add event registerator
As of this patch, the profiler can dynamically register events and send them
to the ProfileListener, fixing a few bugs along the way.
resolves #814
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Thu, 17 Dec 2020 12:33:00 +0000 (12:33 +0000)]
[Manager] Add MMaped memory
There was a requirement to separate the weight memory region and the grad
memory region.
To easily separate the two, this patch introduces a new abstraction,
`MMapedMemory`, while separating the weight and grad mmaps.
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
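A minimal sketch of an mmap-backed region of this kind, assuming an anonymous private mapping on Linux (the real `MMapedMemory` may differ):

```cpp
#include <cstddef>
#include <stdexcept>
#include <sys/mman.h> // Linux/POSIX mmap

// Hypothetical sketch: each instance owns one anonymous mapping, so weight
// and grad memory can live in separate, independently managed regions.
class MMapedMemory {
public:
  explicit MMapedMemory(size_t size) : size_(size) {
    ptr_ = mmap(nullptr, size_, PROT_READ | PROT_WRITE,
                MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (ptr_ == MAP_FAILED)
      throw std::runtime_error("mmap failed");
  }
  ~MMapedMemory() { munmap(ptr_, size_); }
  MMapedMemory(const MMapedMemory &) = delete; // sole owner of the mapping
  MMapedMemory &operator=(const MMapedMemory &) = delete;
  void *data() const { return ptr_; }
  size_t size() const { return size_; }

private:
  void *ptr_;
  size_t size_;
};
```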
Jihoon Lee [Wed, 16 Dec 2020 04:44:41 +0000 (13:44 +0900)]
[Manager/Fix] Disallow copy ctor of manager
Since the manager holds memory, it shouldn't be copied, as ownership
becomes unclear. This patch deletes the copy ctor / assignment ops, while
changing the signatures of members and functions that use the manager.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Wed, 16 Dec 2020 11:06:04 +0000 (11:06 +0000)]
[Android] Manage ndk to deal with changes
1. Upgrade ndk version to 29
2. Add dependent library
3. Fix syntax for Application.mk
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Tue, 15 Dec 2020 04:50:49 +0000 (13:50 +0900)]
[Tensor] Add Tensor Wrap method
Add some factory methods to Tensor:
1. borrow external memory and use it
2. create from a shared pointer without copying
To restrict unwanted use, those methods are static methods
called `Tensor::Wrap`.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
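A sketch of what Wrap-style factories can look like; `TensorView` and the buffer type here are stand-ins, not the actual `Tensor::Wrap` signatures:

```cpp
#include <cstddef>
#include <memory>

// Hypothetical sketch: view external memory without taking a copy.
struct TensorView {
  std::shared_ptr<float> data;
  size_t len;

  // 1. Borrow a raw buffer; the caller keeps ownership (no-op deleter).
  static TensorView wrap(float *buf, size_t len) {
    return {std::shared_ptr<float>(buf, [](float *) {}), len};
  }
  // 2. Share an existing allocation without copying.
  static TensorView wrap(std::shared_ptr<float> buf, size_t len) {
    return {std::move(buf), len};
  }
};
```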
Jihoon Lee [Tue, 15 Dec 2020 04:30:45 +0000 (13:30 +0900)]
[TensorDim] Add initializer list ctor
This patch adds a TensorDim
initializer-list ctor so it can easily be passed as a function argument.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Wed, 23 Dec 2020 15:22:43 +0000 (00:22 +0900)]
[tensor] argmax bugfix
Apply a memory allocation bugfix to argmax,
where an empty vector was being addressed.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Fri, 18 Dec 2020 05:21:42 +0000 (14:21 +0900)]
[tensor] Set stride for shared tensor
Set stride for shared tensor
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Tue, 15 Dec 2020 11:10:39 +0000 (20:10 +0900)]
[layer] Support in-place batch normalization
Support in-place batch normalization, where the batch normalization
input/output is not stored and is over-written by the next layer.
This patch removes the input/output memory requirement when using
the batch normalization layer.
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
jijoong.moon [Fri, 18 Dec 2020 10:23:42 +0000 (19:23 +0900)]
[ ARGMAX ] Fix bug about argmax
Fix the calculation of argmax in tensor.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Jihoon Lee [Mon, 14 Dec 2020 13:53:51 +0000 (13:53 +0000)]
[Test] Add macro to check if backbone is enabled
When backbone is not enabled, the test fails.
This patch adds a define in the test so that the test can pass when backbone
is not enabled.
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Wed, 16 Dec 2020 08:38:51 +0000 (17:38 +0900)]
[svace] Assure uninitialized members are initialized
nnstreamer_layer had two uninitialized members.
This patch initializes them.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Wed, 16 Dec 2020 07:41:44 +0000 (16:41 +0900)]
[svace] Error handling for applications/test
1. Fix inconsistent alloc/dealloc(new/free)
2. Add try catch to some statements
3. Fix memory leak from `asprintf`
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Wed, 16 Dec 2020 06:55:52 +0000 (15:55 +0900)]
[svace] Assure file is closed before removal
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Sangjung Woo [Thu, 17 Dec 2020 03:12:38 +0000 (12:12 +0900)]
[Docs] Remove unnecessary HTML link for feature/privilege.
This patch removes the unnecessary HTML link for feature/privilege.
Signed-off-by: Sangjung Woo <sangjung.woo@samsung.com>
Jihoon Lee [Fri, 11 Dec 2020 08:13:07 +0000 (17:13 +0900)]
[Optim] Add shortcut to dot product
When the dimension is 1, it is a vector-by-matrix or vector-by-vector
multiplication. This patch adds a shortcut for that situation.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
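The shortcut could look like the following sketch, delegating to level-1/level-2 BLAS when one side is a vector (the dispatch wrapper is illustrative):

```cpp
#include <cblas.h>

// Illustrative sketch: y = A * x with row-major A of shape M x N. When A is
// a single row, the product collapses to one vector-by-vector dot; otherwise
// use GEMV instead of a full GEMM.
void dot_shortcut(const float *A, const float *x, float *y, int M, int N) {
  if (M == 1) {
    y[0] = cblas_sdot(N, A, 1, x, 1); // vector-by-vector
  } else {
    cblas_sgemv(CblasRowMajor, CblasNoTrans, M, N, 1.0f, A, /*lda=*/N, x, 1,
                0.0f, y, 1); // matrix-by-vector
  }
}
```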
Jihoon Lee [Fri, 11 Dec 2020 07:58:34 +0000 (16:58 +0900)]
[Fix] fix lda, ldb param
**Changes proposed in this PR:**
- lda, ldb, and ldc are for layout, so they should be set in terms of the
memory layout; this patch fixes the issue while adding a corresponding test
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
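In BLAS terms, the point is that the leading dimensions describe the matrices as stored, not their logical (possibly transposed) shapes; a sketch of the transposed case (the wrapper is illustrative):

```cpp
#include <cblas.h>

// Illustrative sketch: computes C = A^T * B with row-major storage, where A
// is stored as K x M and B as K x N. In row-major order each leading
// dimension is the column count of the matrix as laid out in memory, so
// lda stays M even though A is consumed transposed.
void gemm_at_b(const float *A, const float *B, float *C, int M, int N, int K) {
  cblas_sgemm(CblasRowMajor, CblasTrans, CblasNoTrans, M, N, K, 1.0f,
              A, /*lda=*/M, B, /*ldb=*/N, 0.0f, C, /*ldc=*/N);
}
```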
Jihoon Lee [Wed, 9 Dec 2020 07:23:03 +0000 (16:23 +0900)]
[Profiler] Add basic profilerlistener
This patch adds a global profiler listener for various purposes.
From this patch,
1. Profiler can be called globally with a designated event key
2. A listener reporting suite is included
3. Enum keys have been changed to int keys to deal with an unhashable-key
compile error on a few platforms.
key compile error in few platforms.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
v2)
1. Change listener to an RAII object (with forced profiler and event
designation)
2. Add unsubscribe method
3. Change the event registry to a set to prevent notifying a listener twice
4. Change semantics to not allow adding the same listener twice
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Wed, 9 Dec 2020 05:40:24 +0000 (14:40 +0900)]
[Test] Add profiler test
**Changes proposed in this PR:**
- Add profiler test
- Wire profiler sources / header to the build system
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Wed, 9 Dec 2020 04:09:34 +0000 (13:09 +0900)]
[Profiler] Separate Profiler for wider use
This patch extracts the profiler from neuralnet.
It also separates `ProfileListener`, which
should be used on the client side, while `Profiler`
is used on the library side.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
hyeonseok lee [Wed, 9 Dec 2020 03:13:09 +0000 (12:13 +0900)]
[meson.build] Change join_paths to / in meson.build files
Replace join_paths with / in meson.build files
Check issue #709 for more details
Signed-off-by: hyeonseok lee <hs89.lee@samsung.com>
Jihoon Lee [Tue, 8 Dec 2020 07:25:08 +0000 (16:25 +0900)]
[Android] Integrate openblas into android
The Android ndk build was not building on top of openblas.
This patch fixes the problem.
resolves #794
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Wed, 9 Dec 2020 13:11:29 +0000 (22:11 +0900)]
[mnist] Update saved model file
As saving of the optimizer parameters has been updated, the previous
model file gives wrong results. This patch adds the new model file.
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Thu, 3 Dec 2020 10:14:07 +0000 (19:14 +0900)]
[network] Rework the backwarding
- Remove forwarding from backwarding;
backwarding should just do backwarding and no more.
- Move backwarding back to neuralnetwork so that the graph
does not have to care about how to backward etc.
The graph just provides iterators for iterating the graph
in reverse; it does not know that layers have backwarding etc.
This also removes the graph's dependency on the optimizer.
V2:
Added comment fixes for the corresponding PR
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Thu, 3 Dec 2020 08:38:28 +0000 (17:38 +0900)]
[optimizer] Move optimizer out of layer
This patch moves the optimizer out of the layer.
Now backwarding just calculates derivatives and gradients
but does not apply the gradient;
applying the gradient is done by the model.
The layer still supports the applyGradient operation but requires the
optimizer as an argument.
This decouples layers from optimizers so they can operate independently.
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Thu, 3 Dec 2020 06:19:02 +0000 (15:19 +0900)]
[optimizer] Simplify optimizer initialize
As there is just one optimizer, shared by the layers, it must be initialized just once by the neural network.
Also, addOptimizerVariables() is moved out of initialize(), as initialize() should work
on the optimizer's parameters and should not need the list of weights.
Also removed the redundant set_tensor argument.
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Thu, 3 Dec 2020 05:43:09 +0000 (14:43 +0900)]
[optimizer] Move optimizer variables to weights
Move optimizer variables to weights.
Now all the weight-related tensors are handled by the weights themselves,
so the optimizer can be shared across all layers; there is no need to create
new copies for all layers.
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Tue, 8 Dec 2020 04:34:11 +0000 (13:34 +0900)]
[vgg] Added pytorch model for vgg16
Added a pytorch model for vgg16.
This is to benchmark against tf and nntrainer.
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Tue, 8 Dec 2020 04:32:41 +0000 (13:32 +0900)]
[vgg] Update to official vgg16 model
Update nntrainer and tensorflow to use the official VGG16 model architecture.
The FC layer setup is different, as the cifar100 dataset has just 100 output
classes rather than the 1000 classes of imagenet.
Further, the number of epochs is reduced to 1;
when training, this can be increased appropriately.
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Mon, 7 Dec 2020 03:48:02 +0000 (12:48 +0900)]
[MNIST] Added pytorch version
Added a pytorch version of MNIST for benchmarking purposes.
This code is only tested on CPU.
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Jihoon Lee [Mon, 7 Dec 2020 11:31:44 +0000 (20:31 +0900)]
[ndk] Add enable profile flag
This patch adds an enable-profile flag to the ndk build for profiling purposes.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Fri, 4 Dec 2020 09:29:32 +0000 (18:29 +0900)]
[Experiment] Add profiler
This patch adds an `enable-profile` option to enable profiling. It also
adds simple profiling logic to `neuralnet::inference`.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Thu, 3 Dec 2020 06:38:29 +0000 (15:38 +0900)]
[Meson] Add ndk-build to be part of ndk build
**Changes proposed in this PR:**
- Add option to build library using ndk
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Wed, 2 Dec 2020 10:11:14 +0000 (19:11 +0900)]
[Chores] CustomShortcut bug
As the ini format has been changed, the ini for customshortcut needs changes.
This patch handles it.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Parichay Kapoor [Wed, 2 Dec 2020 05:44:26 +0000 (14:44 +0900)]
[manager] Share gradient memory for all layer
This patch allows sharing the gradient memory across all the layers.
The maximum size of the gradient is allocated, and all layers have unique tensors
which internally point to this tensor.
This optimization feature can be disabled for a model (as done with the automated models unittest).
The manager is also moved to nntrainer/tensor, as the manager manages all the weights (tensors) and will
in future manage all the inputs/outputs.
If the functionality of the manager is extended, then it can be moved appropriately.
See also #774
Resolves #766
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Wed, 2 Dec 2020 02:52:44 +0000 (11:52 +0900)]
[layers/manager] Register weights with manager
All the weights of the layer are now registered with the manager.
The manager allocates memory for these weights and will in future
handle their updates etc.
See also #774 #766
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Tue, 1 Dec 2020 12:07:49 +0000 (21:07 +0900)]
[weight] Updated weights to be vector
Updated the weights of a layer to be a vector rather than a shared_ptr array.
This is for easier management and for updating weights internally when
gradients share the memory.
See also #774 #766
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Tue, 1 Dec 2020 11:14:57 +0000 (20:14 +0900)]
[manager] Added nntrainer manager for weights
Added a manager to manage all the allocated weights.
This patch also adds the manager to the model and passes the manager to
initialize, which allows weights to be added to the manager.
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Tue, 1 Dec 2020 10:46:38 +0000 (19:46 +0900)]
[weight/var_grad] Make internal variable as shared_ptr
The internal variables in weights/var_grad, namely the variable and gradient
themselves, are changed to shared_ptr so that weights can be shared without
worrying about shallow copies.
Also changed the copy constructor to not create a new Tensor, as the copy
constructor of weight will get called and that is unnecessary and unintentional overhead.
As weight is just a wrapper over tensor, its copy constructor should follow the
same behavior as tensor, which is to not create new memory.
Added clone as an alternative to create a new copy of a given weight.
See also #774 #766
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
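The shared_ptr ownership and clone() split can be sketched as below; `Weight` here is a toy stand-in with plain float buffers rather than the real Tensor:

```cpp
#include <memory>
#include <vector>

using Buf = std::vector<float>;

// Hypothetical sketch: copying a Weight shares the underlying buffers
// (tensor-like copy semantics, no new allocation); clone() is the explicit
// way to get an independent deep copy.
struct Weight {
  std::shared_ptr<Buf> var;
  std::shared_ptr<Buf> grad;

  Weight clone() const {
    return {std::make_shared<Buf>(*var), std::make_shared<Buf>(*grad)};
  }
};
```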
jijoong.moon [Tue, 1 Dec 2020 04:31:03 +0000 (13:31 +0900)]
[ CONV2D ] separate conv2d_gemm and im2col
It is better to split conv2d_gemm and im2col.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Parichay Kapoor [Wed, 2 Dec 2020 05:29:19 +0000 (14:29 +0900)]
[unittest] Enable disabled unittest
Enable the disabled fc layer unittest.
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Tue, 1 Dec 2020 01:48:32 +0000 (10:48 +0900)]
[var_grad] Trainable inferred from gradient
The trainable property of a variable was earlier inferred from a stored trainable flag.
Now, trainable is inferred using gradient.uninitialized().
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
Parichay Kapoor [Mon, 30 Nov 2020 05:52:20 +0000 (14:52 +0900)]
[tensor] Update tensor operation signature
Update the tensor operation signatures to return a Tensor reference as the
retval rather than a tensor itself. This avoids creating dummy tensors as a
return value (which might have been optimized by the compiler, but let's do
it manually, as the input is also a reference).
**Self evaluation:**
1. Build test: [x]Passed [ ]Failed [ ]Skipped
2. Run test: [x]Passed [ ]Failed [ ]Skipped
Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
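A minimal sketch of the signature style, using a toy tensor in place of the real one:

```cpp
#include <cstddef>
#include <vector>

// Illustrative sketch: the operation writes into a caller-provided output and
// returns it by reference, so no temporary tensor is created for the return.
struct Tensor {
  std::vector<float> data;

  Tensor &add(const Tensor &m, Tensor &output) const {
    output.data.resize(data.size());
    for (size_t i = 0; i < data.size(); ++i)
      output.data[i] = data[i] + m.data[i];
    return output; // reference to the caller's tensor, no copy on return
  }
};
```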
Jihoon Lee [Mon, 30 Nov 2020 07:34:09 +0000 (16:34 +0900)]
[CustomLayer] Update readme.md
Add a readme.md describing how to run and the expected output.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [ ]Passed [ ]Failed [X]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Thu, 26 Nov 2020 07:50:53 +0000 (16:50 +0900)]
[Custom] Add actual example
**Changes proposed in this PR:**
- Add an example to create the custom layer to be used with ini
- Add an example to create the custom layer to be used with api
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
Jihoon Lee [Wed, 18 Nov 2020 02:30:40 +0000 (11:30 +0900)]
[Custom] Add an example scaffolding
Add a layer example that depends on the user's custom code.
This patch adds scaffolding to the `Application/Custom` folder.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>
jijoong.moon [Fri, 27 Nov 2020 01:12:45 +0000 (10:12 +0900)]
[ Graph ] remove grad mem buffer for backwarding
This PR includes,
- Remove the grad memory buffer in n_buffers for the graph. We do not need
this because we can use the var memory buffer of n_buffers for
backwarding.
- For MNIST, memory consumption is reduced from 3.5 to 2.6.
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
Jihoon Lee [Fri, 27 Nov 2020 08:30:47 +0000 (17:30 +0900)]
[ModelLoader] Use vector<string> when creating layers
When creating a layer from an ini, enum-based properties were used.
This prevents adding new properties without changing the api header.
This patch moves to setting layer properties via vector<string>
to enable setting properties without changing the api header, eventually
enabling custom properties in custom layers.
**Semantics change proposed in this PR**
Ini won't ignore properties that are not supported, since model_loader
would not know whether they are supported or not.
See also #716
**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee <jhoon.it.lee@samsung.com>