From 83bdf3f28b18cf86651ead48df42a030f2277ab2 Mon Sep 17 00:00:00 2001
From: Aaron J Arthurs
Date: Mon, 17 Feb 2020 10:18:38 -0600
Subject: [PATCH] Support models with signed 8-bit integer I/O

TensorFlow Lite models whose I/O is of type `kTfLiteInt8` are rejected,
requiring an I/O wrapper (`Quantize` operators). Since signed 8-bit
integers are the preferred type in TensorFlow's quantization
specification [1], this patch removes the need for an I/O wrapper when
running strictly `kTfLiteInt8`-typed models.

[1] https://www.tensorflow.org/lite/performance/quantization_spec

Signed-off-by: Aaron J Arthurs
---
 ext/nnstreamer/tensor_filter/tensor_filter_tensorflow_lite.cc | 1 +
 1 file changed, 1 insertion(+)

diff --git a/ext/nnstreamer/tensor_filter/tensor_filter_tensorflow_lite.cc b/ext/nnstreamer/tensor_filter/tensor_filter_tensorflow_lite.cc
index 7dd088d..65e64a9 100644
--- a/ext/nnstreamer/tensor_filter/tensor_filter_tensorflow_lite.cc
+++ b/ext/nnstreamer/tensor_filter/tensor_filter_tensorflow_lite.cc
@@ -320,6 +320,7 @@ TFLiteInterpreter::getTensorType (TfLiteType tfType)
     case kTfLiteInt32:
       return _NNS_INT32;
     case kTfLiteBool:
+    case kTfLiteInt8:
       return _NNS_INT8;
     case kTfLiteInt64:
       return _NNS_INT64;
-- 
2.7.4