Commit history of nntrainer/tensor/manager.h (platform/core/ml/nntrainer.git)
Date        Author           Commit message
2023-02-07  Jiho Chu         [Tensor] Remove calcGrad step for trainable layer
2022-11-07  DonghakPark      [compiler] Revisit FullyConnected Layer Weights Transpo...
2022-10-31  DonghakPark      [typo] Fix typo error
2022-10-23  Jiho Chu         [Tensor] Add constructor for user swap path
2022-10-13  Jiho Chu         [Property] Add memory_swap property
2022-01-19  Jihoon Lee       [Tensor Pool] Add expose/persist concept
2022-01-19  Jihoon Lee       [ExecOrder] Add exec order control and fix inference
2021-12-29  Jihoon Lee       [Output] Return zero grad for empty output
2021-12-29  Jihoon Lee       [Manager] update requestInputs to use requestTensor
2021-12-03  Parichay Kapoor  [graph] Extend lifespan of model outputs
2021-12-01  Parichay Kapoor  [gradclip] hot fix + unittests
2021-12-01  Parichay Kapoor  [model/graph] Clip the gradients and then apply
2021-12-01  Parichay Kapoor  [graph/manager] Extend gradient execution for clipping
2021-11-23  Jihoon Lee       [Tp] Propose new tensor spec
2021-11-19  Jihoon Lee       [QuickFix] Check weight access for the last
2021-11-16  Jihoon Lee       [Tpool] Remove updateExternalTensor
2021-11-11  Parichay Kapoor  [layer] Inplace support for multiout layer
2021-10-14  Parichay Kapoor  [graph/model] Support for multi-label/input for the...
2021-10-13  Jihoon Lee       [Sharing] Implement tensor sharing
2021-10-07  Jihoon Lee       [WeightSharing] Pass shared_name from the original
2021-10-07  Jihoon Lee       [WeightSharing] Implement isFirst/lastAccess
2021-10-07  Jihoon Lee       [Recurrent] Add zero grad / delegate apply gradient
2021-10-07  Jihoon Lee       [Recurrent] Propagate Trainable variable to weights
2021-10-06  Parichay Kapoor  [in-place] Make input layer work in-place
2021-10-06  Parichay Kapoor  [inplace opt] Support in-place no-op flatten layer
2021-10-05  Parichay Kapoor  [graph/manager] Enable memory v1 optimizations
2021-10-01  Parichay Kapoor  [cleanup/fix] Cleanup + bugfix
2021-10-01  Parichay Kapoor  [manager] Temporarily handle external tensors
2021-10-01  Parichay Kapoor  [Manager] Manager to use TensorPool for all requests
2021-09-30  Parichay Kapoor  [fix] Rebase fix
2021-09-30  Parichay Kapoor  [Manager] Use TensorPool for Gradients
2021-09-30  Parichay Kapoor  [Manager] Manager use TensorPool for Weights
2021-09-28  Parichay Kapoor  [manager] Use memory pool for weights
2021-09-28  Parichay Kapoor  [manager] Constraint input and ouput tensor sharing
2021-09-28  Parichay Kapoor  [manager] Create lifetime and usage list by tensors
2021-09-27  Parichay Kapoor  [manager] Flatten tensors list
2021-08-23  Parichay Kapoor  [memorypool] Introducing memory pool
2021-08-20  Parichay Kapoor  [context/graph] Catch lifespan + 3-way execution order
2021-08-20  Parichay Kapoor  [manager] Set the default usage timestamp for tensors
2021-08-17  Parichay Kapoor  [graph/neuralnet] Move manager from neuralnet to netgraph
2021-07-22  Parichay Kapoor  [test] Enable modelfile unittest
2021-07-20  Parichay Kapoor  [activation] Update implementation for in-place
2021-06-23  hyeonseok lee    [Optimizer] Implement getOptimizerVariableDim
2021-06-23  Parichay Kapoor  [manager] Add support for request Inputs/outputs
2021-06-23  Parichay Kapoor  [graph] Support creating RunLayerContext
2021-06-23  Parichay Kapoor  [manager] Add support for request Inputs/outputs
2021-06-23  Parichay Kapoor  [manager] Support request Tensors and Weights
2021-06-23  Parichay Kapoor  [context] Layer context creation scaffolding
2021-04-22  Jihoon Lee       [Android] Fix android build
2021-04-19  hyeonseok lee    [manager] disable inference_memory_opt
2021-03-29  jijoong.moon     [ RNN ] Fix Tensor copy bug and set proper initialization
2021-03-15  Parichay Kapoor  [Manager] Remove alloc for first/last layer during...
2021-03-09  Parichay Kapoor  [manager] Support deinitialize
2021-03-08  Parichay Kapoor  [manager] Check on re-initialize
2021-03-08  Parichay Kapoor  [manager] Support deallocation of memory
2021-03-08  Parichay Kapoor  [manager] Add check in manager for multiple init and...
2021-03-03  Parichay Kapoor  [model] Allow updating batch size after allocation
2021-02-09  Parichay Kapoor  [manager] svace issue fix
2021-02-05  Parichay Kapoor  [bug fixes] Update shared mem and in-place opt
2021-02-05  Parichay Kapoor  [Manager] Support lazy memory allocation with manager
2021-01-28  Parichay Kapoor  [manager] Disable user_shared_memory
2021-01-25  Parichay Kapoor  [weight] Decouple init of weight and gradients
2021-01-25  Parichay Kapoor  [pooling] Do not allocate memory in initialize
2021-01-25  Parichay Kapoor  [manager] Donot allocate adam for inference
2021-01-21  Parichay Kapoor  [manager] Optimize input/output memory for inference
2020-12-29  Jihoon Lee       [Fix] Assign default value for max_deriv size
2020-12-28  Parichay Kapoor  [activation] Making activation in-place
2020-12-28  Parichay Kapoor  [manager] Manager tracks input/output memory
2020-12-28  Parichay Kapoor  [layer] Move layer input/output management to manager
2020-12-24  Jihoon Lee       [Manager] Add MMaped memory
2020-12-24  Jihoon Lee       [Manager/Fix] Disallow copy ctor of manager
2020-12-24  Jihoon Lee       [Tensor] Add Tensor Wrap method
2020-12-07  Parichay Kapoor  [manager] Share gradient memory for all layer