From: Aaron J Arthurs
Date: Mon, 17 Feb 2020 16:18:38 +0000 (-0600)
Subject: Support models with signed 8-bit integer I/O
X-Git-Tag: accepted/tizen/unified/20200228.123757~2
X-Git-Url: http://review.tizen.org/git/?a=commitdiff_plain;h=83bdf3f28b18cf86651ead48df42a030f2277ab2;p=platform%2Fupstream%2Fnnstreamer.git

Support models with signed 8-bit integer I/O

TensorFlow Lite models whose I/O is of type `kTfLiteInt8` are rejected,
requiring an I/O wrapper (`Quantize` operators). As signed 8-bit integers
are preferred in TensorFlow's quantization specification [1], this patch
alleviates the need for an I/O wrapper to run strictly `kTfLiteInt8`-type
models.

[1] https://www.tensorflow.org/lite/performance/quantization_spec

Signed-off-by: Aaron J Arthurs
---
diff --git a/ext/nnstreamer/tensor_filter/tensor_filter_tensorflow_lite.cc b/ext/nnstreamer/tensor_filter/tensor_filter_tensorflow_lite.cc
index 7dd088d..65e64a9 100644
--- a/ext/nnstreamer/tensor_filter/tensor_filter_tensorflow_lite.cc
+++ b/ext/nnstreamer/tensor_filter/tensor_filter_tensorflow_lite.cc
@@ -320,6 +320,7 @@ TFLiteInterpreter::getTensorType (TfLiteType tfType)
     case kTfLiteInt32:
       return _NNS_INT32;
     case kTfLiteBool:
+    case kTfLiteInt8:
       return _NNS_INT8;
     case kTfLiteInt64:
       return _NNS_INT64;