Fix cases when const blob precision is not FP32/FP16 (#1020)
author Kamil Magierski <kamil.magierski@intel.com>
Mon, 22 Jun 2020 12:46:01 +0000 (14:46 +0200)
committer GitHub <noreply@github.com>
Mon, 22 Jun 2020 12:46:01 +0000 (15:46 +0300)
commit f6758486800d34eadb87a6468d9662a2da2c47a2
tree 2e260d7a10d15720d770d9f5305b4d6e62404e88
parent 491e5e9fbb7b6d8e7fe34cb96bee0e4f555f2ce7
Fix cases when const blob precision is not FP32/FP16 (#1020)

Co-authored-by: kmagiers <kmagiers@intel.com>
inference-engine/src/gna_plugin/frontend/scale_factor_calc.hpp
inference-engine/tests/functional/plugin/gna/shared_tests_instances/subgraph_tests/concat_quantization.cpp
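
Below is a minimal, hypothetical C++ sketch (not the actual patch) of the general idea suggested by the commit title and the touched scale_factor_calc.hpp: when a const blob's precision is not FP32/FP16 (e.g. I8), its data can first be promoted to float before a scale factor is derived from its dynamic range. All function and variable names here are illustrative and do not correspond to the real GNA plugin code.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <cstdio>
#include <vector>

// Hypothetical helper: promote INT8 const data to FP32 so that the scale
// factor logic only ever operates on float values.
static std::vector<float> convertToFloat(const int8_t* data, size_t count) {
    std::vector<float> out(count);
    std::transform(data, data + count, out.begin(),
                   [](int8_t v) { return static_cast<float>(v); });
    return out;
}

// Hypothetical helper: derive a scale factor that maps the blob's maximum
// absolute value onto a target integer range (e.g. INT16, as used by GNA).
static float calcScaleFactor(const std::vector<float>& values,
                             float targetMax = 32767.0f) {
    float maxAbs = 0.0f;
    for (float v : values) {
        maxAbs = std::max(maxAbs, std::fabs(v));
    }
    return maxAbs > 0.0f ? targetMax / maxAbs : 1.0f;
}

int main() {
    // Illustrative const blob stored as INT8 rather than FP32/FP16.
    const int8_t rawConstBlob[] = {-128, -5, 0, 7, 127};
    auto asFloat = convertToFloat(rawConstBlob, sizeof(rawConstBlob));
    std::printf("scale factor: %f\n", calcScaleFactor(asFloat));
    return 0;
}
```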