Support models with signed 8-bit integer I/O
author	Aaron J Arthurs <aajarthurs@gmail.com>
	Mon, 17 Feb 2020 16:18:38 +0000 (10:18 -0600)
committer	MyungJoo Ham <myungjoo.ham@samsung.com>
	Sun, 23 Feb 2020 10:06:00 +0000 (02:06 -0800)
commit	83bdf3f28b18cf86651ead48df42a030f2277ab2
tree	70427e27f2cb85cce6a5c2d7fb4b78ecc7efaad7
parent	5d2dfff70dc5bbb0495f68407fc2467c406fc379
Support models with signed 8-bit integer I/O

TensorFlow Lite models whose I/O tensors are of type `kTfLiteInt8` are
currently rejected, forcing users to wrap the model's I/O with `Quantize`
operators. Since signed 8-bit integers are the preferred quantized type in
TensorFlow's quantization specification [1], this patch accepts
`kTfLiteInt8` I/O directly, removing the need for such a wrapper when
running strictly `kTfLiteInt8`-type models.

[1] https://www.tensorflow.org/lite/performance/quantization_spec
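
A minimal sketch of the kind of change involved: extending the
TfLiteType-to-NNStreamer type mapping so that `kTfLiteInt8` resolves to a
signed 8-bit tensor type instead of being rejected. The enum names and the
`getTensorType` helper below are illustrative stand-ins, not the exact
symbols used in `tensor_filter_tensorflow_lite.cc`.

```cpp
// Illustrative sketch only: stand-in enums and a mapping helper showing how
// kTfLiteInt8 I/O could be accepted alongside the existing types.
#include <cstdio>

/* Stand-in for TensorFlow Lite's TfLiteType (values are illustrative). */
enum TfLiteTypeSketch {
  kTfLiteFloat32,
  kTfLiteUInt8,
  kTfLiteInt8,   /* signed 8-bit, preferred by TF's quantization spec */
  kTfLiteInt32,
};

/* Stand-in for NNStreamer's tensor element types. */
enum NnsTypeSketch {
  NNS_FLOAT32,
  NNS_UINT8,
  NNS_INT8,
  NNS_INT32,
  NNS_END, /* unsupported / unknown */
};

/* Map a TF Lite tensor type to an NNStreamer tensor type. The patch
 * effectively adds the kTfLiteInt8 case so such models are no longer
 * rejected at model-load time. */
static NnsTypeSketch
getTensorType (TfLiteTypeSketch t)
{
  switch (t) {
    case kTfLiteFloat32: return NNS_FLOAT32;
    case kTfLiteUInt8:   return NNS_UINT8;
    case kTfLiteInt8:    return NNS_INT8;  /* newly supported */
    case kTfLiteInt32:   return NNS_INT32;
    default:             return NNS_END;
  }
}

int
main ()
{
  /* A kTfLiteInt8 tensor now maps to a valid type instead of NNS_END. */
  printf ("kTfLiteInt8 -> %s\n",
      getTensorType (kTfLiteInt8) == NNS_INT8 ? "NNS_INT8" : "unsupported");
  return 0;
}
```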

Signed-off-by: Aaron J Arthurs <aajarthurs@gmail.com>
ext/nnstreamer/tensor_filter/tensor_filter_tensorflow_lite.cc