[Tensor] Remove calcGrad step for trainable layer
platform/core/ml/nntrainer.git / nntrainer/tensor/manager.cpp
2023-02-07 Jiho Chu [Tensor] Remove calcGrad step for trainable layer
2022-11-30 hyeonseok lee [ln] optimize layer normalization layer input memory
2022-10-13 Jiho Chu [Property] Add memory_swap property
2022-03-25 jijoong.moon [COVERITY] remove unreachable code (accepted/tizen/unified/20220330.021238, submit/tizen/20220325.053530)
2022-03-14 Jihoon Lee Change loading meta information behavior
2022-02-09 Parichay Kapoor [layers] Support for weight decay to layers
2022-01-19 Jihoon Lee [Tensor Pool] Add expose/persist concept
2022-01-19 Jihoon Lee [ExecOrder] Add exec order control and fix inference
2022-01-19 Jihoon Lee [manager] Separate backwarding -> grad, deriv
2021-12-29 Jihoon Lee [Output] Return zero grad for empty output
2021-12-29 Jihoon Lee [Manager] update requestInputs to use requestTensor
2021-12-09 Parichay Kapoor [manager] Remove activation input exec order from backw...
2021-12-03 Parichay Kapoor [graph] Extend lifespan of model outputs
2021-12-02 jijoong.moon [ Fix ] bug fixes in tensor_pool and resnet
2021-12-01 Parichay Kapoor [gradclip] hot fix + unittests
2021-12-01 Parichay Kapoor [model/graph] Clip the gradients and then apply
2021-12-01 Parichay Kapoor [graph/manager] Extend gradient execution for clipping
2021-12-01 Parichay Kapoor [layernode] Add property clip gradient by norm
2021-11-19 Jihoon Lee [QuickFix] Check weight access for the last
2021-11-19 Jihoon Lee [QuickFix] Add not dependent weight sharing
2021-11-18 Jihoon Lee [Tp] Rename requestTensor -> request
2021-11-18 Jihoon Lee [Tp] requestPreallocated -> view
2021-11-18 Jihoon Lee [Tp] rename externallyAllocated -> placeholder
2021-11-11 Parichay Kapoor [layer] Inplace support for multiout layer
2021-10-19 Jihoon Lee [Fix] tensor pool prerequest bug
2021-10-18 Parichay Kapoor [mem-opt] Support in-place batch normalization
2021-10-18 Parichay Kapoor [batchnorm] Optimize batch norm memory usage
2021-10-13 Jihoon Lee [Sharing] Implement tensor sharing
2021-10-07 Jihoon Lee [WeightSharing] Remove zero grad
2021-10-07 Jihoon Lee [WeightSharing] enable weight sharing from manager
2021-10-07 Jihoon Lee [WeightSharing] Pass shared_name from the original
2021-10-07 Jihoon Lee [WeightSharing] Implement isFirst/lastAccess
2021-10-07 Jihoon Lee [Recurrent] Add zero grad / delegate apply gradient
2021-10-07 Jihoon Lee [Recurrent] Propagate Trainable variable to weights
2021-10-06 Parichay Kapoor [rebase] Rebase fix
2021-10-06 Parichay Kapoor [in-place] Make input layer work in-place
2021-10-06 Parichay Kapoor [inplace opt] Support in-place no-op flatten layer
2021-10-05 Parichay Kapoor [graph/manager] Enable memory v1 optimizations
2021-10-05 Parichay Kapoor [bug fixes] Add bug fixes to manager + tensor pool
2021-10-01 Parichay Kapoor [cleanup/fix] Cleanup + bugfix
2021-10-01 Parichay Kapoor [manager] Temporarily handle external tensors
2021-10-01 Parichay Kapoor [Manager] Manager to use TensorPool for all requests
2021-09-30 Parichay Kapoor [fix] Rebase fix
2021-09-30 Parichay Kapoor [Manager] Use TensorPool for Gradients
2021-09-30 Parichay Kapoor [Manager] Manager use TensorPool for Weights
2021-09-28 Parichay Kapoor [manager] Use memory pool for weights
2021-09-28 Parichay Kapoor [manager] Constrain input and output tensor sharing
2021-09-28 Parichay Kapoor [manager] Create lifetime and usage list by tensors
2021-09-27 Parichay Kapoor [manager] Flatten tensors list
2021-08-23 Parichay Kapoor [memorypool] Introducing memory pool
2021-08-23 Parichay Kapoor [manager] Weight/gradient allocation simplify
2021-08-20 Parichay Kapoor [context/graph] Catch lifespan + 3-way execution order
2021-08-20 Parichay Kapoor [manager] Set the default usage timestamp for tensors
2021-08-17 Parichay Kapoor [graph/neuralnet] Move manager from neuralnet to netgraph
2021-08-10 Parichay Kapoor [var_grad] Support initializer for var_grad
2021-08-10 Parichay Kapoor [tensor] WeightInitializer refactor to Tensor::Initializer
2021-07-22 Parichay Kapoor [layernode] Remove LayerNode dependence on LayerV1
2021-07-22 Parichay Kapoor [tensor] Enable request additional tensor with batchnorm
2021-07-22 Parichay Kapoor [manager] Support input/output tensor allocation
2021-07-22 Parichay Kapoor [var_grad] Update trainable to need_gradient
2021-07-22 Parichay Kapoor [manager] Memory allocation for non-weight tensors
2021-06-23 hyeonseok lee [Optimizer] Implement getOptimizerVariableDim
2021-06-23 Parichay Kapoor [manager] Support initialization/allocation of weights
2021-06-23 Parichay Kapoor [manager] Add support for request Inputs/outputs
2021-06-23 Parichay Kapoor [graph] Support creating RunLayerContext
2021-06-23 Parichay Kapoor [manager] Add support for request Inputs/outputs
2021-06-23 Parichay Kapoor [manager] Support request Tensors and Weights
2021-04-22 Jihoon Lee [Android] Fix android build
2021-03-29 jijoong.moon [ RNN ] Fix Tensor copy bug and set proper initialization
2021-03-19 Parichay Kapoor [Weights] Split weight variable init and alloc
2021-03-15 Parichay Kapoor [Manager] Remove alloc for first/last layer during...
2021-03-09 Parichay Kapoor [manager] Support deinitialize
2021-03-08 Parichay Kapoor [manager] Check on re-initialize
2021-03-08 Parichay Kapoor [manager] Support deallocation of memory
2021-03-08 Parichay Kapoor [manager] Add check in manager for multiple init and...
2021-03-03 Parichay Kapoor [model] Allow updating batch size after allocation
2021-03-02 Jihoon Lee [Fix] Handle edgecase when using shared memory
2021-02-09 Parichay Kapoor [manager] svace issue fix
2021-02-09 hyeonseok lee Fix svace issue GraphWatcher UNINIT.CTOR
2021-02-09 hyeonseok lee Fix svace issue manager UNREACHABLE_CODE
2021-02-05 Parichay Kapoor [Weight] Cleanup train argument for initialize gradient
2021-02-05 Parichay Kapoor [var_grad] Improve nomenclature
2021-02-05 Parichay Kapoor [tensor] Split tensor constructor into 2
2021-02-05 Parichay Kapoor [bug fixes] Update shared mem and in-place opt
2021-02-05 Parichay Kapoor [Manager] Support lazy memory allocation with manager
2021-01-28 Parichay Kapoor [tensor] Update interface for tensor::map
2021-01-28 Parichay Kapoor [manager] bug fix for Tensor::map
2021-01-28 Parichay Kapoor [manager] Disable user_shared_memory
2021-01-25 Parichay Kapoor [var_grad] Remove redundant argument for initializeWeight
2021-01-25 Parichay Kapoor [weight] Decouple init of weight and gradients
2021-01-25 Parichay Kapoor [manager] Do not allocate adam for inference
2021-01-21 Parichay Kapoor [manager] Optimize input/output memory for inference
2020-12-30 Parichay Kapoor [model] Optimize model input/output
2020-12-29 Jihoon Lee [Fix] Assign default value for max_deriv size
2020-12-28 Parichay Kapoor [activation] Making activation in-place
2020-12-28 Parichay Kapoor [layer] Use gradient instead of variable for derivative
2020-12-28 Parichay Kapoor [manager] Manager tracks input/output memory
2020-12-28 Parichay Kapoor [layer] Move layer input/output management to manager
2020-12-24 Jihoon Lee [Manager] Add MMaped memory
2020-12-24 Jihoon Lee [Manager/Fix] Disallow copy ctor of manager