[ Mixed Precision ] Support Mixed Precision
author jijoong.moon <jijoong.moon@samsung.com>
Wed, 12 Jul 2023 07:58:48 +0000 (16:58 +0900)
committer Jijoong Moon <jijoong.moon@samsung.com>
Mon, 21 Aug 2023 06:29:23 +0000 (15:29 +0900)
commit 9da19e4f153fd0aa38ce489de3582f6f2da830e7
tree 85d8719aed0102fc2e5b338c47423259e79182b7
parent e012416a9437f5001e2aa5c6db9da149b7df01d4
[ Mixed Precision ] Support Mixed Precision

This PR enables mixed precision computation.
- Add a data_type property to Tensor: FP16, FP32
- MemoryData handles only void *
- Tensor provides several templated member functions,
   e.g. getAddress<float>(), getData<__fp16>(), etc.
- BLAS interface functions still need to be implemented

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
33 files changed:
jni/Android.mk.in
jni/meson.build
meson_options.txt
nntrainer/dataset/databuffer.cpp
nntrainer/dataset/random_data_producers.cpp
nntrainer/layers/acti_func.cpp
nntrainer/layers/centroid_knn.cpp
nntrainer/layers/concat_layer.cpp
nntrainer/layers/conv2d_layer.cpp
nntrainer/layers/embedding.cpp
nntrainer/layers/gru.cpp
nntrainer/layers/lstm.cpp
nntrainer/layers/lstmcell_core.cpp
nntrainer/layers/mol_attention_layer.cpp
nntrainer/layers/pooling2d_layer.cpp
nntrainer/layers/preprocess_flip_layer.cpp
nntrainer/layers/preprocess_translate_layer.cpp
nntrainer/layers/rnn.cpp
nntrainer/models/dynamic_training_optimization.cpp
nntrainer/tensor/blas_interface.cpp
nntrainer/tensor/blas_interface.h
nntrainer/tensor/cache_elem.cpp
nntrainer/tensor/cache_elem.h
nntrainer/tensor/cache_pool.cpp
nntrainer/tensor/cache_pool.h
nntrainer/tensor/memory_data.h
nntrainer/tensor/memory_pool.cpp
nntrainer/tensor/memory_pool.h
nntrainer/tensor/tensor.cpp
nntrainer/tensor/tensor.h
nntrainer/utils/util_func.cpp
nntrainer/utils/util_func.h
tools/package_android.sh