Introduce Quantize-Dequantize to FakeQuantize transformation (#1849)
author Mateusz Tabaka <mateusz.tabaka@intel.com>
Wed, 26 Aug 2020 08:51:51 +0000 (10:51 +0200)
committer GitHub <noreply@github.com>
Wed, 26 Aug 2020 08:51:51 +0000 (11:51 +0300)
commit a6076a1fd683a411990db8c87697adebeceaea42
tree aec955d12be3969ee1c6d1f9408dc69579b4f85f
parent 4673dc9b9cf6832632bc76823b2905880baa15be

* Introduce Quantize-Dequantize to FakeQuantize transformation

* Revert changes in DequantizeLinear

* Apply code format

* Changes after review:

- description for transformation
- remove NGRAPH_CHECK and move some checks from callback to predicates in pattern
- check if out_low/high are broadcastable for FQ's first input
- fix params to copy_runtime_info

* Add type_matches and type_matches_any predicates

* Use get_single_value

* Changes after review:

- add brief description of transformation
- use get_pattern_value_map instead of get_pattern_map
- change opset1 to opset4
- fix params to copy_runtime_info

* Check result of dynamic_pointer_cast
inference-engine/src/transformations/include/transformations/convert_quantize_dequantize.hpp [new file with mode: 0644]
inference-engine/src/transformations/src/transformations/common_optimizations/common_optimizations.cpp
inference-engine/src/transformations/src/transformations/convert_quantize_dequantize.cpp [new file with mode: 0644]
inference-engine/tests/functional/inference_engine/transformations/convert_quantize_dequantize.cpp [new file with mode: 0644]
ngraph/core/include/ngraph/pattern/op/pattern.hpp
ngraph/core/src/pattern/op/pattern.cpp
ngraph/test/models/onnx/quant_dequant_pattern.prototxt [new file with mode: 0644]
ngraph/test/models/onnx/quant_dequant_pattern_axis.prototxt [new file with mode: 0644]
ngraph/test/onnx/onnx_import.in.cpp