[nnc] Increase interpreter speed (#2768)
author Vladimir Plazun/AI Tools Lab /SRR/Engineer/Samsung Electronics <v.plazun@samsung.com>
Mon, 28 Jan 2019 15:17:34 +0000 (18:17 +0300)
committer Roman Mikhailovich Rusyaev/AI Tools Lab /SRR/Staff Engineer/Samsung Electronics <r.rusyaev@samsung.com>
Mon, 28 Jan 2019 15:17:34 +0000 (18:17 +0300)
commit 6f71ec74d3e1b36aa15467100cab2a29f5d355d5
tree e1605e3812924590673cfad921ac775dcf2b1a29
parent 2de6d83f534ef2b6f6452e196822fd3e5537decc
[nnc] Increase interpreter speed (#2768)

Remove std::vector usage from Shape and Index
Move all getters into headers (enables inlining)
Increase inference speed by up to 4x-6x
Tinker with the Deconv2D implementation to speed it up
Fix an issue in the broadcasting implementation

Signed-off-by: Vladimir Plazun v.plazun@partner.samsung.com
contrib/nnc/core/modelIR/Index.cpp
contrib/nnc/core/modelIR/Shape.cpp
contrib/nnc/core/modelIR/TensorVariant.cpp
contrib/nnc/include/ADT/SmallVector.h [new file with mode: 0644]
contrib/nnc/include/core/modelIR/Common.h [new file with mode: 0644]
contrib/nnc/include/core/modelIR/Index.h
contrib/nnc/include/core/modelIR/Shape.h
contrib/nnc/include/core/modelIR/ShapeRange.h
contrib/nnc/include/core/modelIR/TensorVariant.h
contrib/nnc/passes/interpreter/ops/DeConv2D.cpp