Commit history for nntrainer/tensor/tensor.h
Repository: platform/core/ml/nntrainer.git
2021-12-03  Parichay Kapoor  [layer] Support filter masking in mol attention
2021-12-03  Parichay Kapoor  [tensor] Add derivatives for dot operation
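The dot-derivative entry above follows the standard matrix-calculus identities: for C = A·B, the gradients are dA = dC·Bᵀ and dB = Aᵀ·dC. A minimal plain-loop sketch of those identities (self-contained C++, not nntrainer's actual Tensor API; a tensor library would typically express the two products as dot calls with transpose flags):

```cpp
#include <vector>

// Gradients of C = A (MxK) * B (KxN), given upstream gradient dC (MxN):
//   dA = dC * B^T   (MxK)
//   dB = A^T * dC   (KxN)
void dot_backward(const std::vector<float> &A, const std::vector<float> &B,
                  const std::vector<float> &dC, std::vector<float> &dA,
                  std::vector<float> &dB, int M, int K, int N) {
  for (int m = 0; m < M; ++m)
    for (int k = 0; k < K; ++k) {
      float acc = 0.0f;
      for (int n = 0; n < N; ++n)
        acc += dC[m * N + n] * B[k * N + n]; // dC * B^T
      dA[m * K + k] = acc;
    }
  for (int k = 0; k < K; ++k)
    for (int n = 0; n < N; ++n) {
      float acc = 0.0f;
      for (int m = 0; m < M; ++m)
        acc += A[m * K + k] * dC[m * N + n]; // A^T * dC
      dB[k * N + n] = acc;
    }
}
```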
2021-12-01  Parichay Kapoor  [graph/manager] Extend gradient execution for clipping
2021-12-01  Parichay Kapoor  [tensor] Support batched dot operation
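The batched-dot entry extends the 2-D product to a leading batch dimension: each batch slice is multiplied independently. A self-contained sketch of that semantics over flat row-major buffers (per the entry above, the batching itself lives inside the tensor's dot operation):

```cpp
#include <vector>

// Batched dot: for each batch b, C[b] = A[b] (MxK) * B[b] (KxN).
void batched_dot(const std::vector<float> &A, const std::vector<float> &B,
                 std::vector<float> &C, int batch, int M, int K, int N) {
  for (int b = 0; b < batch; ++b) {
    const float *a = A.data() + b * M * K; // slice b of A
    const float *bp = B.data() + b * K * N; // slice b of B
    float *c = C.data() + b * M * N;        // slice b of C
    for (int m = 0; m < M; ++m)
      for (int n = 0; n < N; ++n) {
        float acc = 0.0f;
        for (int k = 0; k < K; ++k)
          acc += a[m * K + k] * bp[k * N + n];
        c[m * N + n] = acc;
      }
  }
}
```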
2021-11-30  Parichay Kapoor  [layer] Embedding layer support ZeroMaskIdx
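ZeroMaskIdx reserves one token index whose embedding lookup returns an all-zero vector, so padding tokens contribute nothing to the forward pass (and would receive no weight update). A sketch of that behavior; only the ZeroMaskIdx idea comes from the entry above, the function and parameter names are illustrative:

```cpp
#include <algorithm>
#include <vector>

// Embedding lookup where token == zero_mask_idx yields a zero vector.
void embed(const std::vector<float> &table, const std::vector<int> &tokens,
           std::vector<float> &out, int dim, int zero_mask_idx) {
  for (std::size_t i = 0; i < tokens.size(); ++i) {
    float *dst = out.data() + i * dim;
    if (tokens[i] == zero_mask_idx) {
      std::fill(dst, dst + dim, 0.0f); // masked token: all zeros
      continue;
    }
    const float *src = table.data() + static_cast<std::size_t>(tokens[i]) * dim;
    std::copy(src, src + dim, dst);
  }
}
```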
2021-11-25  hyeonseok lee  [grucell] Implement grucell
2021-11-25  hyeonseok lee  [bugfix] Bugfix on tensor sum
2021-10-29  Parichay Kapoor  [tensor] Support adding result to output with multiply
2021-10-28  Parichay Kapoor  [tensor] Add beta for tensor sum
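The two entries above point at BLAS-style accumulation semantics: an operation can add its result into an existing output, out = alpha * result + beta * out, instead of overwriting it. A sketch of an axis-0 (batch) sum with such a beta (parameter names are illustrative, not nntrainer's signatures):

```cpp
#include <vector>

// Sum over the batch axis of a flattened [batch, feat] buffer,
// accumulating into out: out = alpha * sum + beta * out.
void sum_axis0(const std::vector<float> &in, std::vector<float> &out,
               int batch, int feat, float alpha = 1.0f, float beta = 0.0f) {
  for (int f = 0; f < feat; ++f) {
    float acc = 0.0f;
    for (int b = 0; b < batch; ++b)
      acc += in[b * feat + f];
    out[f] = alpha * acc + beta * out[f]; // beta == 0 -> plain overwrite
  }
}
```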
2021-10-26  Parichay Kapoor  [tensor] Add support for strided operators
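Strided operators walk a tensor through explicit per-axis strides, so views that are not contiguous in memory (transposes, slices) can be processed without packing them first. An illustrative strided scale over a 4-D view (the shape/stride layout is an assumption, not nntrainer's internal types):

```cpp
// Copy-with-scale over a 4-D strided view: dst = alpha * src.
// Explicit strides let src/dst be non-contiguous views.
void strided_scale(const float *src, float *dst, const int shape[4],
                   const int sstr[4], const int dstr[4], float alpha) {
  for (int b = 0; b < shape[0]; ++b)
    for (int c = 0; c < shape[1]; ++c)
      for (int h = 0; h < shape[2]; ++h)
        for (int w = 0; w < shape[3]; ++w)
          dst[b * dstr[0] + c * dstr[1] + h * dstr[2] + w * dstr[3]] =
            alpha * src[b * sstr[0] + c * sstr[1] + h * sstr[2] + w * sstr[3]];
}
```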
2021-10-25  Parichay Kapoor  [tensor] Bug fix for dropout mask generation
2021-10-18  Parichay Kapoor  [in-place] Support in-place activation
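In-place activation lets a layer overwrite its input with its output, saving one activation-sized buffer; it is safe when the backward pass can be computed from the output alone. A ReLU sketch of the pattern (sigmoid works the same way, since sigma'(x) = y * (1 - y) needs only y):

```cpp
#include <cstddef>
#include <vector>

// Forward overwrites the input buffer in place.
void relu_forward_inplace(std::vector<float> &x) {
  for (float &v : x)
    v = v > 0.0f ? v : 0.0f;
}

// Backward needs only the saved *output* y, so the input may be discarded.
void relu_backward_from_output(const std::vector<float> &y,
                               std::vector<float> &grad) {
  for (std::size_t i = 0; i < y.size(); ++i)
    grad[i] = y[i] > 0.0f ? grad[i] : 0.0f;
}
```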
2021-10-08  Parichay Kapoor  [layer] Bug fix for softmax operation
2021-10-01  Parichay Kapoor  [batchnorm] Optimize batch norm layer
2021-10-01  Parichay Kapoor  [rebase] Add rebase fix
2021-10-01  Parichay Kapoor  [cleanup/fix] Cleanup + bugfix
2021-09-30  Parichay Kapoor  [Manager] Manager use TensorPool for Weights
2021-09-28  Parichay Kapoor  [manager] Use memory pool for weights
2021-09-27  Parichay Kapoor  [tensor] Add name identifier to Tensor
2021-09-23  Parichay Kapoor  [resnet] Add test to resnet application
2021-09-23  Parichay Kapoor  [tensor] Reduce num of operations for tensor sum
2021-08-27  Jihoon Lee  [itq] Add multiple mt itq test
2021-08-26  Jihoon Lee  [tensor_filter] Fix tensor_filter failure
2021-08-24  Jihoon Lee  [Dataset] Rework func dataset to samplewise
2021-08-10  Parichay Kapoor  [Tensor] Support initializer with tensor
2021-08-10  Parichay Kapoor  [tensor] Refactor WeightInitializer to Tensor::Initializer
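The refactor above moves weight initialization from the layer machinery onto the tensor itself. A sketch of what a tensor-owned initializer can look like; the enumerator set and function shape are assumptions, not nntrainer's actual Tensor::Initializer:

```cpp
#include <algorithm>
#include <cmath>
#include <random>
#include <vector>

// Hypothetical tensor-level initializer enum plus a fill routine.
enum class Initializer { ZEROS, ONES, LECUN_NORMAL };

void initialize(std::vector<float> &buf, Initializer init, int fan_in) {
  std::mt19937 rng(42); // fixed seed for a reproducible sketch
  switch (init) {
  case Initializer::ZEROS:
    std::fill(buf.begin(), buf.end(), 0.0f);
    break;
  case Initializer::ONES:
    std::fill(buf.begin(), buf.end(), 1.0f);
    break;
  case Initializer::LECUN_NORMAL: {
    std::normal_distribution<float> d(0.0f, std::sqrt(1.0f / fan_in));
    for (float &v : buf)
      v = d(rng);
    break;
  }
  }
}
```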
2021-08-02  hyeonseok lee  [tensor_dim] Package tensor_dim.h with ccapi
2021-07-22  Parichay Kapoor  [pooling] Update global max pooling
2021-07-22  Parichay Kapoor  [pooling] Update pooling to use helper tensors
2021-07-22  Parichay Kapoor  [batchnorm] Update to LayerV2
2021-07-02  jijoong.moon  [Recurrent] Implement Dropout for Recurrent Net
2021-06-17  jijoong.moon  [GRU] Add GRU Unittest
2021-05-24  Parichay Kapoor  [tensor] Update tensor sum
2021-05-03  Jihoon Lee  [TensorDim/Tensor] Update transpose-related code
2021-03-03  Parichay Kapoor  [tensor] Allow updating batch size after allocation
2021-02-05  Parichay Kapoor  [tensor] Split tensor constructor into 2
2021-02-05  Parichay Kapoor  [bug fixes] Update shared mem and in-place opt
2021-02-05  Parichay Kapoor  [Manager] Support lazy memory allocation with manager
2021-02-05  Parichay Kapoor  [tensor] Support late memory allocation for tensors
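Late (lazy) allocation separates shape declaration from memory acquisition: the tensor knows its dimensions up front, but a manager decides when the bytes actually arrive, which is what makes lifetime planning and buffer sharing possible. A hypothetical mini-class showing the pattern (not nntrainer's Tensor):

```cpp
#include <cassert>
#include <cstddef>
#include <memory>

// Shape is fixed at construction; memory arrives only on allocate().
class LazyTensor {
public:
  explicit LazyTensor(std::size_t len) : len_(len) {}

  void allocate() {
    if (!buf_)
      buf_ = std::make_unique<float[]>(len_);
  }
  bool isAllocated() const { return buf_ != nullptr; }

  float *data() {
    assert(isAllocated() && "used before allocate()");
    return buf_.get();
  }

private:
  std::size_t len_;
  std::unique_ptr<float[]> buf_; // null until allocate()
};
```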
2021-01-29  Jihoon Lee  [Tensor] Rearrange methods
2021-01-29  Jihoon Lee  [Tensor/Clean] Relocate tensor methods
2021-01-28  Parichay Kapoor  [tensor] Update interface for Tensor::map
2021-01-28  Parichay Kapoor  [manager] Bug fix for Tensor::map
2021-01-25  Parichay Kapoor  [dynamic training] Add dynamic-training code
2021-01-22  Jihoon Lee  [Tensor] Add outplace method for arithmetic ops
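An outplace arithmetic op returns a fresh tensor instead of mutating its operand; nntrainer pairs these with in-place variants carrying an _i suffix (e.g., add_i). A toy illustration of that pairing (the struct is hypothetical, only the naming convention is nntrainer's):

```cpp
#include <cstddef>
#include <vector>

struct Vec {
  std::vector<float> v;

  // In-place: mutates *this, returns it for chaining.
  Vec &add_i(const Vec &o) {
    for (std::size_t i = 0; i < v.size(); ++i)
      v[i] += o.v[i];
    return *this;
  }

  // Outplace: copies, then reuses the in-place op; *this is untouched.
  Vec add(const Vec &o) const {
    Vec out = *this;
    out.add_i(o);
    return out;
  }
};
```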
2021-01-08  Jihoon Lee  [Conv2d] Optimize layer loop
2021-01-08  Jihoon Lee  [Conv2d] Change conv2d gemm to dot
2020-12-30  Jihoon Lee  [Conv] Optimize im2col
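The conv2d entries above hinge on im2col: unfolding every receptive-field patch into a column of a matrix turns convolution into a single matrix product (the gemm, later routed through the tensor dot). A minimal single-channel, stride-1, no-padding sketch:

```cpp
#include <cstddef>
#include <vector>

// Unfold HxW image into a [kh*kw, oh*ow] patch matrix: each output
// pixel (y, x) contributes one column holding its kh*kw patch.
void im2col(const float *img, int H, int W, int kh, int kw,
            std::vector<float> &col) {
  int oh = H - kh + 1, ow = W - kw + 1;
  col.assign(static_cast<std::size_t>(kh) * kw * oh * ow, 0.0f);
  for (int y = 0; y < oh; ++y)
    for (int x = 0; x < ow; ++x)
      for (int ky = 0; ky < kh; ++ky)
        for (int kx = 0; kx < kw; ++kx)
          col[((static_cast<std::size_t>(ky) * kw + kx) * oh + y) * ow + x] =
            img[(y + ky) * W + (x + kx)];
}
```

Multiplying the [out_channels, kh*kw] filter matrix by this patch matrix then yields all output pixels in one product.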
2020-12-30  Jihoon Lee  [Tensor] Optimize accessor
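A 4-D element accessor reduces to one flattened offset computation; keeping it a single inlined multiply-add chain is one plausible shape of the accessor optimization noted above:

```cpp
#include <cstddef>

// Flat offset of element (b, c, h, w) in a contiguous BCHW tensor.
inline std::size_t index_bchw(std::size_t b, std::size_t c, std::size_t h,
                              std::size_t w, std::size_t C, std::size_t H,
                              std::size_t W) {
  return ((b * C + c) * H + h) * W + w;
}
```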
2020-12-24  Jihoon Lee  [Manager] Add MMapped memory
2020-12-24  Jihoon Lee  [Tensor] Add Tensor Wrap method
2020-12-07  Parichay Kapoor  [manager] Share gradient memory for all layers
2020-12-01  Parichay Kapoor  [tensor] Update tensor operation signature
2020-11-30  Parichay Kapoor  [tensor] Support multiply/divide with given output
2020-11-30  Parichay Kapoor  [conv2d] Refactor conv2d layer
2020-11-19  Parichay Kapoor  [fc,tensor,adam] Change signature for sum
2020-11-19  Parichay Kapoor  [tensor] Reduce overall memory overhead
2020-11-16  Parichay Kapoor  [tensor] Update tensor signature for apply
2020-11-16  Parichay Kapoor  [tensor] Update tensor op signature for dot
2020-11-06  Parichay Kapoor  [restructure] Restructure the core files