Support models with signed 8-bit integer I/O
author    Aaron J Arthurs <aajarthurs@gmail.com>
          Mon, 17 Feb 2020 16:18:38 +0000 (10:18 -0600)
committer MyungJoo Ham <myungjoo.ham@samsung.com>
          Sun, 23 Feb 2020 10:06:00 +0000 (02:06 -0800)
TensorFlow Lite models whose I/O tensors are of type `kTfLiteInt8` are
currently rejected, forcing an I/O wrapper (`Quantize` operators) around
the model. Since signed 8-bit integers are the preferred type in
TensorFlow's quantization specification [1], this patch maps
`kTfLiteInt8` to `_NNS_INT8`, removing the need for an I/O wrapper when
running strictly `kTfLiteInt8`-typed models.

[1] https://www.tensorflow.org/lite/performance/quantization_spec

Signed-off-by: Aaron J Arthurs <aajarthurs@gmail.com>
ext/nnstreamer/tensor_filter/tensor_filter_tensorflow_lite.cc

index 7dd088d..65e64a9 100644
@@ -320,6 +320,7 @@ TFLiteInterpreter::getTensorType (TfLiteType tfType)
     case kTfLiteInt32:
       return _NNS_INT32;
     case kTfLiteBool:
+    case kTfLiteInt8:
       return _NNS_INT8;
     case kTfLiteInt64:
       return _NNS_INT64;