[VTA] Support for batched inference (#3661)
author    Thierry Moreau <moreau@uw.edu>
Tue, 30 Jul 2019 21:01:31 +0000 (14:01 -0700)
committer Jared Roesch <roeschinc@gmail.com>
Tue, 30 Jul 2019 21:01:31 +0000 (14:01 -0700)
commit 6c7f0c4d084a4f2e166d41344bc70965f8ffc034
tree   fd241da8e1c4777b7bb1b5aacfb6009d8110c87a
parent 9b355fc35587a1019f486f9ff0b2dffcd7cbb3df
[VTA] Support for batched inference (#3661)

* fix the IR pass to support padding on 6-d tensors (see the padding sketch after this list)

* support padding for both N > 1 and N == 1

* add tuning and a base config for batch size > 1 (see the conv2d workload sketch after this list)

* improve output formatting

* add batched conv2d

* print results for all categories (see the simulator stats sketch after this list)

* revert to the single-batch config

* pick the best tuning record (see the pick_best sketch after this list)

* fix the conv2d test

* improve reporting

* fix a batching bug in the fast simulator

* fix
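
The 6-d padding change targets VTA's packed data layout, which gains two
extra axes once both the batch and channel dimensions are blocked. A
minimal sketch of spatial padding on such a tensor, assuming TVM's
topi.nn.pad helper and hypothetical block sizes (this is not the actual
ir_pass.py code):

    import tvm
    from tvm import te, topi

    n, c = 2, 16                    # hypothetical batch / channel block sizes
    N, C, H, W = 4, 32, 14, 14      # hypothetical workload shape
    data = te.placeholder((N // n, C // c, H, W, n, c),
                          name="data", dtype="int8")

    # Pad only the spatial axes (H, W); the outer batch axis and the inner
    # block axes get zero padding entries, so the same call covers both
    # N == 1 and N > 1.
    padded = topi.nn.pad(
        data,
        pad_before=[0, 0, 1, 1, 0, 0],
        pad_after=[0, 0, 1, 1, 0, 0],
        name="pad_data",
    )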
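
The tuning script now exercises workloads with batch size > 1. A minimal
sketch of declaring such a conv2d workload, using the generic topi operator
rather than VTA's packed conv2d, with hypothetical shapes:

    import tvm
    from tvm import te, topi

    N, CI, H, W = 4, 64, 56, 56     # hypothetical workload, note N > 1
    CO, KH, KW = 64, 3, 3
    data = te.placeholder((N, CI, H, W), name="data")
    kernel = te.placeholder((CO, CI, KH, KW), name="kernel")

    # strides=1, padding=1, dilation=1 in NCHW layout
    out = topi.nn.conv2d(data, kernel, 1, 1, 1)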
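
"pick the best tuning record" refers to keeping only the best autotvm log
entry per workload. A minimal sketch using autotvm's record helpers; the
log file names are hypothetical:

    from tvm import autotvm

    # Scan the full tuning log and write out only the best entry
    # found for each workload.
    autotvm.record.pick_best("conv2d_tuning.log", "conv2d_best.log")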
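
The reporting changes print the fast simulator's counters for every
category. A minimal sketch of reading them, assuming VTA's
testing.simulator helpers (clear_stats / stats); the module under test is
elided:

    from vta.testing import simulator

    simulator.clear_stats()
    # ... run a compiled module on the fast simulator here ...
    for category, value in simulator.stats().items():
        print(f"{category:<20} {value}")
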
vta/python/vta/ir_pass.py
vta/scripts/tune_conv2d.py
vta/src/sim/sim_driver.cc
vta/tests/python/integration/test_benchmark_topi_conv2d.py
vta/tutorials/autotvm/tune_relay_vta.py
vta/tutorials/frontend/deploy_resnet_on_vta.py