Add a workaround when scale for TENSOR_QUANT8_ASYMM is zero (#1605)
authorSangmin Seo/Motion Control Lab(SR)/Staff Engineer/Samsung Electronics <sangmin7.seo@samsung.com>
Fri, 8 Jun 2018 01:03:26 +0000 (10:03 +0900)
committerGitHub Enterprise <noreply-CODE@samsung.com>
Fri, 8 Jun 2018 01:03:26 +0000 (10:03 +0900)
This patch adds a workaround that changes `scale` to one when a tensor's
scale is zero, because TF Lite currently passes down a zero scale.
Note that the latest NeuralNetworks.h (see
https://android.googlesource.com/platform/frameworks/ml/+/master/nn/runtime/include/NeuralNetworks.h)
requires `scale` to be greater than zero.  Remove this workaround once
the scale value is passed correctly.

Signed-off-by: Sangmin Seo <sangmin7.seo@samsung.com>
libs/support/tflite/src/nnapi_delegate.cpp

index 599283d..8223529 100644 (file)
@@ -93,6 +93,12 @@ uint32_t addTensorOperands(tflite::Interpreter *interpreter, ANeuralNetworksMode
       case kTfLiteUInt8:
         nn_type = ANEURALNETWORKS_TENSOR_QUANT8_ASYMM;
         scale = tensor->params.scale;
+        // FIXME The next line is a workaround because currently zero scale is passed down from TF
+        //       Lite.  Note that the latest NeuralNetworks.h (see
+        //       https://android.googlesource.com/platform/frameworks/ml/+/master/nn/runtime/include/NeuralNetworks.h)
+        //       requires scale to be greater than zero.  Remove this workaround when the scale
+        //       value is correctly passed.
+        scale = (scale == 0.0f) ? 1.0f : scale;
         zeroPoint = tensor->params.zero_point;
         break;
       case kTfLiteInt32: