雾雨魔理沙 [Fri, 9 Aug 2019 19:40:16 +0000 (12:40 -0700)]
[Relay] [Training] Fix ad for concatenate (#3729)
* reproduce error
* fix
* lint
* lint
雾雨魔理沙 [Fri, 9 Aug 2019 08:51:20 +0000 (01:51 -0700)]
Fix typo in ir_pass.h (#3741)
Benjamin Tu [Thu, 8 Aug 2019 20:53:27 +0000 (13:53 -0700)]
[VTA] [Chisel] Bug fix for VME Shell (#3737)
* fix
* fixes
Tianqi Chen [Thu, 8 Aug 2019 20:10:23 +0000 (13:10 -0700)]
[CI] Update docker image ci_cpu,i386 to include verilator (#3738)
Animesh Jain [Thu, 8 Aug 2019 18:41:24 +0000 (11:41 -0700)]
[QNN] Requantize operator (#3531)
* [Relay] [Quantization] WIP - Common files for the quantization work.
* [Relay] [Quantization] WIP - Prototyping requantize op.
* Requantize operator implementation.
Requantize converts one quantized tensor representation to another quantized
representation. The PR has the following implementation features
- Requantize operator defined in qnn namespace - relay.qnn.requantize
- Lowering of the requantize to existing Relay operators
- Integer fixed point implementation of requantize
- Two rounding modes - FE_UPWARDS (round towards positive infinity) and
FE_AWAY_FROM_ZERO (std::round behavior)
- A floating point implementation as well, which can act as a reference or be
used on devices where the fixed point path is not needed.
- Unit test cases
Relevant Issue - https://github.com/dmlc/tvm/issues/2351
Credit to TFLite and GemmLowp to provide reference implementations.
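The fixed point path can be sketched as follows. This is an illustrative Python model, not the PR's actual C++: the function names, the Q31 layout, and the round-half-up choice are assumptions used to show how an FP32 scale becomes a multiplier/shift pair and how an int32 value is then requantized in pure integer arithmetic.

```python
import math

def get_fixed_point_multiplier_shift(scale):
    # Decompose scale = mantissa * 2**exponent with mantissa in [0.5, 1),
    # then store the mantissa as a signed 32-bit Q31 fixed point number.
    mantissa, exponent = math.frexp(scale)
    multiplier = round(mantissa * (1 << 31))
    if multiplier == (1 << 31):  # rounding pushed the mantissa up to 1.0
        multiplier //= 2
        exponent += 1
    return multiplier, exponent

def requantize(value, input_scale, output_scale):
    # Computes round(value * input_scale / output_scale) in integer arithmetic.
    mult, shift = get_fixed_point_multiplier_shift(input_scale / output_scale)
    total_shift = 31 - shift
    rounded = value * mult + (1 << (total_shift - 1))  # round-half-up
    return rounded >> total_shift
```

For example, requantizing the value 100 from scale 0.5 to scale 0.25 doubles it to 200, with no floating point math on the hot path.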
* Typo and lint fixes.
* Doc fix.
* Uncommenting the lint script (fixing mistake).
* Modifying the unit tests.
* Moving C++ files into src/relay/qnn
* Moving python files to python/tvm/relay/qnn. Some minor fixes.
* Moving the attrs.h inside the include directory.
* Pushing files that I forgot earlier. Changing util location.
* Incorporating comments. API change. Lint fixes.
* Modifying the GetFixedPointMultiplierShift API as per comments.
* Forgot the dialect change.
* Changing rewrite to qnn_lower.
* Renaming Quantize to Qnn for clarity.
* Remove use_int_domain.
* Incorporating review comments.
* Adding API doc for QNN dialect.
* Move the qnn_lower pass to transform namespace.
* Moving from expr to module. Adding namespace in C++.
* Minor sentence rewrites. Added qnn namespace.
* Added the API doc.
* Changing default out_dtype to int8. Adding a test with in/out_dtype as uint8.
* Style fixes. Better error messages.
* Adding documentation.
* More documentation fixes.
* Adding out dtype check for requantize.
* Adding corner case for FP32 to fixed point conversion.
* Adding extra line.
* Documentation fix.
* Adding static inline.
* Incorporating jackwish comment. Removed idtype from requantize lowering.
* Removing Quantize/Dequantize code. Restricting Requantize to (u)int8/int32.
* Style fixes.
* Fix the docs.
* Move to Legalize API.
Marcus Shawcroft [Thu, 8 Aug 2019 17:36:36 +0000 (18:36 +0100)]
[DOCKER] Fix missing apt https transport support (#3735)
* [DOCKER] Fix missing apt https transport support
* [DOCKER] Drop superfluous explicit sudo's
Nick Hynes [Wed, 7 Aug 2019 19:51:48 +0000 (12:51 -0700)]
Remove sccache from Rust install (#3728)
Yulun Yao [Wed, 7 Aug 2019 16:59:00 +0000 (09:59 -0700)]
Tutorial: Build a Graph Convolutional Network on TVM (#3681)
* add build gcn tutorial
* add dgl to docker file
* add dgl to docker file
* Apply suggestions from code review
Co-Authored-By: 雾雨魔理沙 <lolisa@marisa.moe>
* add dgl to docker file
* rerun checks
* Revert "add build gcn tutorial"
This reverts commit dbe8b5f0e02a13fdd586a9faa58fd1326653afb0.
* resolve git issue
* resolve git issue
* resolve git issue
* apply marisa's comment
Thierry Moreau [Wed, 7 Aug 2019 15:53:41 +0000 (08:53 -0700)]
[VTA][Dockerfile] Chisel dependencies for TSIM CI (#3721)
Umang Yadav [Wed, 7 Aug 2019 15:51:18 +0000 (11:51 -0400)]
Treat zero-extent loops as NoOp, remove them, and add a unit test for the same (#3724)
Haichen Shen [Wed, 7 Aug 2019 11:34:53 +0000 (04:34 -0700)]
[Relay/TOPI][Op] Add variance and layer norm op (#3700)
* Add LayerNorm op
* update
* fix
* Add mean_std and mean_variance
* add std and update doc
* add license
* x
* lint
* x
* fix
* fix doc
Haichen Shen [Wed, 7 Aug 2019 04:27:06 +0000 (21:27 -0700)]
[Frontend][MXNet] Fix mxnet converter for hybridblock and add div_sqrt_dim (#3701)
* Fix mxnet converter for hybrid block
* tweak
* fix rebase
* fix
* add test
雾雨魔理沙 [Wed, 7 Aug 2019 02:39:09 +0000 (19:39 -0700)]
fix name (#3719)
Animesh Jain [Tue, 6 Aug 2019 22:23:41 +0000 (15:23 -0700)]
[Relay] Legalize pass (#3672)
* [Relay] Rewrite pass.
This pass transforms one expression into another expression.
This pass has many use cases:
* Replace an expr with another expr that has better performance.
* For ASICs, we might want to modify the inputs to adapt to the HW support.
* Alter op layout can work in conjunction with this pass.
The motivating use case is the Intel i8 x i8 conv. Intel HW supports u8 x i8 conv
in HW. Using this pass, we can replace an i8 x i8 conv with a sequence of
operators where one of the operators is now a u8 x i8 conv. This will also help
automatic quantization performance.
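The idea can be illustrated with a toy model (this is not Relay's actual API; the expression encoding and op names here are invented): a legalize-style pass keeps a registry mapping op names to rewrite functions and rebuilds the expression bottom-up, swapping in the legalized form wherever a rewrite is registered.

```python
# Toy expression encoding: ("op_name", [args]) for a call, a plain str for a leaf.
LEGALIZE = {}

def register_legalize(op, fn):
    LEGALIZE[op] = fn

def legalize(expr):
    if isinstance(expr, str):  # leaf tensor, nothing to rewrite
        return expr
    op, args = expr
    args = [legalize(a) for a in args]  # legalize children first
    if op in LEGALIZE:
        return LEGALIZE[op]((op, args))
    return (op, args)

# Mimic the i8 x i8 -> u8 x i8 conv rewrite: shift the lhs into the unsigned
# domain, run the u8 x i8 conv, then subtract a compensation term.
def conv_i8_legalize(expr):
    _, (lhs, rhs) = expr
    shifted = ("shift_to_u8", [lhs])
    conv = ("conv2d_u8i8", [shifted, rhs])
    return ("subtract", [conv, ("compensation", [rhs])])

register_legalize("conv2d_i8i8", conv_i8_legalize)
print(legalize(("conv2d_i8i8", ["x", "w"])))
```

Ops without a registered rewrite pass through unchanged, which is what lets the pass coexist with alter-op-layout and similar rewrites.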
* Better API name.
* Removing the conv2d legalization for x86. Will send a separate PR.
* Test name changes.
* Registering one function to register FTVMLegalize.
* Better comments.
mingwayzhang [Tue, 6 Aug 2019 22:20:08 +0000 (15:20 -0700)]
Fix (2/2) [TOPI] conv2d schedule code (#3648) (#3717)
* Fix the tile_rx and tile_ry issue.
Note that this patch depends on pull request #9 in tvm-distro.
Krzysztof Parzyszek [Tue, 6 Aug 2019 21:58:24 +0000 (16:58 -0500)]
Update dmlc-core to the latest commit (#3716)
This includes changes to build TVM runtime for Hexagon.
Zhi [Tue, 6 Aug 2019 21:05:06 +0000 (14:05 -0700)]
[relay][frontend] clean up tf frontend (#3710)
* clean up tf frontend
* fix get_relay_op
Liangfu Chen [Tue, 6 Aug 2019 20:58:38 +0000 (04:58 +0800)]
safe to remove thread related headers? (#3713)
Haichen Shen [Tue, 6 Aug 2019 19:25:59 +0000 (12:25 -0700)]
[Bugfix] Fix the issue that function pass modifies original module (#3712)
* fix
* fix interpreter
Yulun Yao [Tue, 6 Aug 2019 01:13:22 +0000 (18:13 -0700)]
[Relay] [TOPI] `{relay,topi}.nn.sparse_transpose` for **Square** CSR matrices (#3707)
* add build gcn tutorial
* add transpose operator for square sparse matrices
* remove extra files
* change loop tag
* comply with lint
* comply with lint -- line too long
* comply with lint
* lint check
* lint check
* lint check
* apply Marisa's and Thierry's reviews
Junru Shao [Mon, 5 Aug 2019 22:16:14 +0000 (15:16 -0700)]
Export tvm::relay::OpRegistry::OpRegistry (#3711)
Tianqi Chen [Mon, 5 Aug 2019 21:55:25 +0000 (14:55 -0700)]
[CI] Update GPU docker (#3709)
ghostplant [Mon, 5 Aug 2019 16:31:55 +0000 (00:31 +0800)]
Quit and clean when TVM is interrupted (#3640)
Andrew Tulloch [Mon, 5 Aug 2019 16:31:19 +0000 (09:31 -0700)]
Metal reinterpret fix (#3706)
雾雨魔理沙 [Mon, 5 Aug 2019 16:23:36 +0000 (09:23 -0700)]
[Relay] Partial Evaluator do concatenate, and has better termination checker for scalar. (#3703)
* save
lint some
lint
lint
add charrnn
save
save
save
remove debug
remove debug
remove space
refactor
save
rewrite dce
* reset files
* join -> meet
* lint
* address review comment
* wordsmith
Jon Soifer [Mon, 5 Aug 2019 02:46:28 +0000 (19:46 -0700)]
[TOPI] Update softmax compute and CPU schedule (#3680)
* Update Softmax compute and CPU schedule
* Add C++ compute
* Fix schedule
* Update CUDA and OpenGL schedules
* Fix log_softmax
* Fix hls and opengl schedules
* Fix CUDA schedule
Huilin Qu [Sat, 3 Aug 2019 23:55:22 +0000 (19:55 -0400)]
Fix gather_nd in Relay (#3442)
* Fix gather_nd in Relay
* Add test cases for gather_nd.
Benjamin Tu [Sat, 3 Aug 2019 06:04:38 +0000 (23:04 -0700)]
[VTA] [Chisel] Added Chisel Module Unit Test Infrastructure (#3698)
* added wholething
* changed build and makefile
abergeron [Sat, 3 Aug 2019 04:09:44 +0000 (00:09 -0400)]
Add an option to build with -pthread (ON by default) (#3671)
雾雨魔理沙 [Fri, 2 Aug 2019 17:35:27 +0000 (10:35 -0700)]
[Relay] [Error] Fix error in partial evaluator (#3693)
* fix
* lint
Lianmin Zheng [Fri, 2 Aug 2019 16:14:27 +0000 (00:14 +0800)]
[AutoTVM] Fix hang/crash issues on feature extraction (#3689)
* [AutoTVM] Fix hang/crash issues on feature extraction
* Update xgboost_cost_model.py
* fix lint
Neo Chien [Fri, 2 Aug 2019 15:52:00 +0000 (23:52 +0800)]
Align the naming rule for OpAttributeUnImplemented (#3695)
Yulun Yao [Fri, 2 Aug 2019 15:51:14 +0000 (08:51 -0700)]
[DOCKER] Add DGL to {ci_gpu, demo_cpu, demo_gpu} docker images (#3692)
* add dgl to docker file
* add dgl to docker file
Lianmin Zheng [Fri, 2 Aug 2019 15:50:33 +0000 (23:50 +0800)]
[TOPI] Memoize winograd matrix (#3687)
* [TOPI] Memoize winograd matrix
* lint
* Fix name
Wuwei Lin [Fri, 2 Aug 2019 03:55:27 +0000 (20:55 -0700)]
[Relay][Quantization] KL-divergence-based per-layer calibration (#3538)
* [Relay][Quantization] Support floating-point scale
* [Relay][Quantization] KL-divergence calibration on dataset
* Fix unhandled LeftShift case in QuantizeRealize
* Fix lint
* drop QBias
* fix lint
* address comments
* address comments
* Update comments
* address comments
* lint
* kQIdentity = 0
Wei Chen [Thu, 1 Aug 2019 21:47:11 +0000 (14:47 -0700)]
[Relay][VM] Support execution on devices (#3678)
* [Relay][VM] Support execution on devices
* Reduce Copy calls
* Cleanup
* Lint
* CR comments
* Merge test into test_vm.py
Jian Weng [Thu, 1 Aug 2019 19:52:33 +0000 (12:52 -0700)]
Add shuffle support to TVM (#3633)
sf-wind [Thu, 1 Aug 2019 19:49:40 +0000 (12:49 -0700)]
Enable the sparse schedule (#3651)
alexgl-github [Thu, 1 Aug 2019 19:46:39 +0000 (12:46 -0700)]
Add support for Tensorflow operators log1p, cos, sin (#3614)
The patch adds support for Tensorflow operators log1p and cos
Tensorflow log1p is described at https://www.tensorflow.org/api_docs/python/tf/math/log1p
Tensorflow cos is described at https://www.tensorflow.org/api_docs/python/tf/math/cos
Tensorflow sin is described at https://www.tensorflow.org/api_docs/python/tf/math/sin
雾雨魔理沙 [Thu, 1 Aug 2019 18:52:13 +0000 (11:52 -0700)]
[Relay] Strict mode in pattern matching (#3620)
* add fatal
lint
lint
lint
do
make completeness check an error
lint
remove fatal
* fix test
* reset parser file
* remove unneeded import
* Update python/tvm/relay/adt.py
Co-Authored-By: Steven S. Lyubomirsky <slyubomirsky@gmail.com>
* Update include/tvm/relay/adt.h
Co-Authored-By: Steven S. Lyubomirsky <slyubomirsky@gmail.com>
* Eliminate trailing whitespace (my fault)
Yifan Xiong [Thu, 1 Aug 2019 16:46:23 +0000 (00:46 +0800)]
[Relay][Frontend] Fix typo names in frontend (#3685)
Fix typo names in caffe2 and onnx frontend:
* sotrage_order -> storage_order
* OpNotInplemented -> OpNotImplemented
Tim Hatch [Thu, 1 Aug 2019 16:27:58 +0000 (09:27 -0700)]
Make tests multi-process friendly. (#3683)
This side effect at module import time has a race condition between the "exists" check and the "mkdir" call. The safer thing is to just call mkdir and catch the "already exists" error, which is what makedirs does.
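A minimal sketch of the fix described above (the function name is illustrative, not the one in the patch): instead of checking for the directory and then creating it, just create it and tolerate the "already exists" case, which `os.makedirs(..., exist_ok=True)` does for you.

```python
import os
import tempfile

def ensure_dir(path):
    # Racy pattern: `if not os.path.exists(path): os.mkdir(path)` can fail when
    # another process creates `path` between the check and the mkdir call.
    # makedirs with exist_ok=True tolerates a concurrent create instead.
    os.makedirs(path, exist_ok=True)

d = os.path.join(tempfile.mkdtemp(), "cache")
ensure_dir(d)
ensure_dir(d)  # second call is a no-op instead of raising FileExistsError
print(os.path.isdir(d))
```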
Alexander Pivovarov [Thu, 1 Aug 2019 15:31:09 +0000 (08:31 -0700)]
Replace learnt with learned (#3684)
Leyuan Wang [Wed, 31 Jul 2019 20:45:58 +0000 (13:45 -0700)]
[DOC] Update ssd doc to avoid confusion. (#3677)
* intel graphics conv2d bugs fixed for inception_v3
* intel conv2d api updated, nn input size 4 condition added
* review addressed
* move conv_tags to attributes
* ssd doc updated
* address comment
Zhi [Wed, 31 Jul 2019 16:02:15 +0000 (09:02 -0700)]
[Relay][VM] Relay VM serialization (#3647)
* relay vm serialization
* fix lint
* load params, fix stream
* lint
* fix typo
lixiaoquan [Wed, 31 Jul 2019 15:37:54 +0000 (23:37 +0800)]
[TEST] Compatible with python3.5 (#3675)
Wuwei Lin [Wed, 31 Jul 2019 08:26:05 +0000 (16:26 +0800)]
[TOPI][CUDA] schedule for group_conv2d (#3663)
* [TOPI][CUDA] schedule for group_conv2d
* Fix #flops
Liangfu Chen [Wed, 31 Jul 2019 07:19:54 +0000 (15:19 +0800)]
[VTA] VTA Compilation Script for Intel FPGA (#3494)
* initial compilation script for chisel-vta;
* replace tabs with spaces;
* compile script for de10-nano;
* remove generated verilog source code;
* remove `altsource_probe`, `debounce`, `edge_detect` ip;
* replace quartus project files with a single tcl script;
* Update install.md
* improved makefile-based compilation script;
* complete makefile-based compilation of chisel-vta for de10-nano;
* install quartus;
* conversion to .rbf file;
* document chisel-vta compilation process for de10-nano;
* rename generated bitstream file;
* download and extract custom ip for de10-nano;
* minor change
* minor change
* fix indentation;
* bug fix;
* improved robustness in makefile;
* clean up;
* add `.sdc .ipx .qsys` allowance in jenkins;
* add ASF header;
* add ASF header;
* remove IntelShell.scala, update vta_hw.tcl, clean up Makefile & soc_system.qsys;
* add ASF header;
* keep sources compact;
* keep sources compact;
* it's not necessary now
* AXI4LiteClient -> AXI3Client for IntelShell
* remove connection to fpga_only_master;
* a few important bug fix: wire reset pin, and set host_r_last to high
* remove intel specific interface definition;
* add NO_DSP option in Makefile;
* AXI4Lite is not used in IntelShell;
* minor fix: disable dsp and use logic instead;
* quartus version change: 18.0 -> 18.1
* remove altera related statement;
* compose compile_design.tcl
* initial tcl script for soc_system generation;
* remove .qsys file;
* remove unused;
* .qsys can be generated by tcl script;
* remove hps_io and shrink size of soc_system;
* integrate into makefile;
* version change: 18.0 -> 18.1
* add sample config file for de10-nano;
* parameterize DEVICE and PROJECT_NAME
* remove extra lines;
* brief description on flashing sd card image for de10-nano
* docs on building additional components
* parameterize DEVICE and DEVICE_FAMILY
* parameterize DEVICE and DEVICE_FAMILY
* parameterize DEVICE and DEVICE_FAMILY
* de10-nano -> de10nano
* minor change
* add comment in code and document in order to address review comments;
Balint Cristian [Wed, 31 Jul 2019 07:10:16 +0000 (10:10 +0300)]
Add yolov3-tiny to the tutorial. (#3674)
Haichen Shen [Wed, 31 Jul 2019 01:22:51 +0000 (18:22 -0700)]
add reviewer - slyubomirsky (#3673)
Balint Cristian [Tue, 30 Jul 2019 22:06:50 +0000 (01:06 +0300)]
[RPC] Terminate worker's children first. (#3669)
Thierry Moreau [Tue, 30 Jul 2019 21:01:31 +0000 (14:01 -0700)]
[VTA] Support for batched inference (#3661)
* fix in IR pass to support padding on 6-d tensors
* support for both N>1 and N==1 for padding
* batch size > 1 tuning and base config
* output formatting
* batch conv2d
* print all category results
* revert to single-batch config
* pick record best
* fix conv test
* improving reporting
* address batching bug in fast simulator
* fix
Thierry Moreau [Tue, 30 Jul 2019 21:00:38 +0000 (14:00 -0700)]
removing deprecated script (#3667)
Josh Fromm [Tue, 30 Jul 2019 16:29:56 +0000 (09:29 -0700)]
[TOPI] Enable standalone wheel build (#3657)
* Fixed topi bdist_wheel build to include libraries.
* Removed unneeded imports
Wuwei Lin [Tue, 30 Jul 2019 15:25:15 +0000 (23:25 +0800)]
[TOPI] Fix traverse function not inline zero-input op (#3623)
* Fix traverse_inline not inline zero input op properly
* Add where to python and set tag to broadcast
* Fix inline
* test
* fix test target
* fix
Thomas Viehmann [Tue, 30 Jul 2019 14:54:16 +0000 (16:54 +0200)]
ROCm: Add SaveToFile and LoadFile (#3665)
...and add rocm module_save to the tests.
Thomas Viehmann [Tue, 30 Jul 2019 10:40:50 +0000 (12:40 +0200)]
tvm/contrib/rocm: improve finding of ld.lld (#3664)
This refines the detection of ld.lld matching the neighbouring clang
file. This is particularly helpful on Ubuntu/Debian when either the
default ld.lld is not installed or the versioned one is preferable for
consistency.
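That kind of lookup can be sketched as follows; this is an illustrative model, not the exact logic in tvm/contrib/rocm, and the version range tried is an assumption. The bare name is preferred, with version-suffixed names (as shipped on Ubuntu/Debian) as fallbacks.

```python
import shutil

def find_lld(candidates=None, required=True):
    # Try the bare name first, then version-suffixed variants such as
    # ld.lld-12, ld.lld-11, ... (the range here is illustrative).
    if candidates is None:
        candidates = ["ld.lld"] + ["ld.lld-%d" % v for v in range(12, 5, -1)]
    for name in candidates:
        path = shutil.which(name)  # searches PATH like the shell would
        if path:
            return path
    if required:
        raise RuntimeError("cannot find ld.lld; tried: " + ", ".join(candidates))
    return None
```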
@tqchen I think you last touched the clang equivalent in #3590 .
Thomas Viehmann [Tue, 30 Jul 2019 09:30:46 +0000 (11:30 +0200)]
Print llvm source by default in ROCMModuleNode::GetSource (#3662)
雾雨魔理沙 [Tue, 30 Jul 2019 04:58:08 +0000 (21:58 -0700)]
[Relay] Fix typo in ChangeBatch (#3660)
雾雨魔理沙 [Tue, 30 Jul 2019 03:18:55 +0000 (20:18 -0700)]
[Relay][VTA] Add ChangeBatch pass (#3656)
* init
* lint
* lint
Luis Vega [Mon, 29 Jul 2019 18:11:53 +0000 (11:11 -0700)]
[VTA] [Chisel] make dram offset configurable for uops different than 4-bytes (#3654)
Luis Vega [Mon, 29 Jul 2019 07:22:06 +0000 (00:22 -0700)]
[VTA] [CMake] hotfix tsim rules (#3650)
Thierry Moreau [Mon, 29 Jul 2019 01:41:10 +0000 (18:41 -0700)]
[VTA] Refactor to increase platform coverage (Ultra96 etc.) (#3496)
* hardware refactor for increased FPGA coverage, small optimizations
* fix header
* cleaning up parameters that won't be needed for now
* streamlining makefile, and simplifying tcl scripts
* moving parameter derivation into pkg_config.py, keeping tcl scripts lightweight
* refactoring tcl script to avoid global variables
* deriving AXI signals in pkg_config.py
* unifying address map definition for hardware and software drivers
* single channel design for ultra96 to simplify build
* enable alu by default, no mul opcode for now
* hardware fix
* new bitstream; vta version
* avoid error when env variable is not set
* ultra96 cleanup
* further cleaning up tcl script for bitstream generation
* preliminary rpc server support on ultra96
* rpc server tracker scripts
* ultra96 ldflag
* ultra96 support
* ultra96 support
* cleanup line
* cmake support for ultra96
* simplify memory instantiation
* cleaning up IP parameter initialization
* fix queue instantiation
* 2019.1 transition
* fix macro def
* removing bus width from config
* cleanup
* fix
* turning off testing for now
* cleanup ultra96 ps instantiation
* minor refactor
* adding comments
* upgrading to tophub v0.6
* model used in TVM target now refers to a specific version of VTA for better autoTVM scheduling
* revert change due to bug
* rename driver files to be for zynq-type devices
* streamlining address mapping
* unifying register map offset values between driver and hardware generator
* rely on cma library for cache flush/invalidation
* coherence management
* not make buffer packing depend on data types that can be wider than 64bits
* refactor config derivation to minimize free parameters
* fix environment/pkg config interaction
* adding cfg dump property to pkgconfig:
* fix rpc reconfig
* fix spacing
* cleanup
* fix spacing
* long line fix
* fix spacing and lint
* fix line length
* cmake fix
* environment fix
* renaming after pynq since the driver stack relies on the pynq library - see pynq.io
* update doc
* adding parameterization to name
* space
* removing reg width
* vta RPC
* update doc on how to edit vta_config.json
* fix path
* fix path
Luis Vega [Sun, 28 Jul 2019 23:18:34 +0000 (16:18 -0700)]
fix comment/doc in TensorLoad (#3646)
Balint Cristian [Sun, 28 Jul 2019 08:05:37 +0000 (11:05 +0300)]
Hotfix for issue #3641. (#3644)
Luis Vega [Sun, 28 Jul 2019 07:20:53 +0000 (00:20 -0700)]
fix case when offset is odd and size is even (#3643)
Luis Vega [Sat, 27 Jul 2019 20:39:37 +0000 (13:39 -0700)]
[VTA] [Chisel] fix tensor issue/commit in gemm (#3637)
* fix tensor issue/commit in gemm
* remove trailing spaces
Yong Wu [Sat, 27 Jul 2019 16:44:22 +0000 (09:44 -0700)]
[Relay][TF] add BatchMatMul (#3634)
peterjc123 [Sat, 27 Jul 2019 16:43:34 +0000 (00:43 +0800)]
Improve the x86 auto-tune tutorial (#3609)
YPBlib [Fri, 26 Jul 2019 22:14:39 +0000 (06:14 +0800)]
Update tensorflow.py (#3632)
Logan Weber [Fri, 26 Jul 2019 22:14:18 +0000 (15:14 -0700)]
Make Google Test usage configurable in CMake files (#3628)
* Add USE_GTEST as a CMake variable
* Add GTest section in installation docs
* Incorporate feedback
lixiaoquan [Fri, 26 Jul 2019 18:05:14 +0000 (02:05 +0800)]
[TensorFlow] Fix a bug where the output index is ignored (#3631)
Enhance test to cover this case
Wuwei Lin [Fri, 26 Jul 2019 06:49:28 +0000 (14:49 +0800)]
[TOPI][CUDA] Schedule for pool_grad (#3622)
* [TOPI][CUDA] Schedule for pool_grad
* Relay test
* Fix fused op
* doc
* Remove set scope local
雾雨魔理沙 [Fri, 26 Jul 2019 05:17:47 +0000 (22:17 -0700)]
[Relay] [Training] Add numerical gradient check. (#3630)
* add check_grad
* finish
* what does the fox say?
* lint lint lint lint lint lint lint lint lint
Benjamin Tu [Fri, 26 Jul 2019 01:47:04 +0000 (18:47 -0700)]
[VTA] [Chisel] support for different inp/wgt bits, rewrote DotProduct for clarity (#3605)
* support for different inp/wgt bits, rewrote dot for clarity
* [VTA] [Chisel] support for different inp/wgt bits, rewrote DotProduct for clarity
* [VTA] [Chisel] support for different inp/wgt bits, rewrote DotProduct for clarity
* change back to sim
* fix index
* fix index
* fix indent
* fix indent
* fix indent
* fix trailing spaces
* fix trailing spaces
* change to more descriptive name
* matric->matrix
* fix spacing
* fix spacing & added generic name for dot
* better parameter flow
* spacing
* spacing
* spacing
* update requirement (tested) for dot, spacing
* function call convention
* small edit
Lianmin Zheng [Thu, 25 Jul 2019 22:28:59 +0000 (06:28 +0800)]
[IR] Make iterators compatible with constructors of STL containers (#3624)
Balint Cristian [Thu, 25 Jul 2019 21:56:22 +0000 (00:56 +0300)]
Add Winograd matrices computation. (#3553)
Logan Weber [Thu, 25 Jul 2019 17:12:57 +0000 (10:12 -0700)]
Implementation of uTVM (#3227)
* uTVM interfaces (#14)
* some minor interface changes
* implemented HostLowLevelDevice
* added MicroDeviceAPI
* implemented micro_common and added Python interfaces
* current status, semi implemented micro session
* added micro_common implementation and python interfaces (#18)
* added micro_common implementation and python interfaces (#18)
* current status, semi implemented
* host test working
* updated interfaces for MicroSession arguments allocation
* make somewhat lint compatible
* fix based on comments
* added rounding macro
* fix minor bug
* improvements based on comments
* Clean up `binutil.py` and make Python-3-compatible
* Change argument allocation design
* Address feedback and lint errors
* Improve binutil tests
* Simplify allocator (per @tqchen's suggestions)
* Doc/style fixes
* farts
* mcgee
* rodata section werks
(and so does `test_runtime_micro_workspace.py`)
* simple graph runtime werk
* TEMP
* ResNet works, yo
* First round of cleanup
* More cleanup
* runs a dyson over the code
* Another pass
* Fix `make lint` issues
* ready to pr... probably
* final
* Undo change
* Fix rebase resolution
* Minor fixes
* Undo changes to C codegen tests
* Add `obj_path` in `create_micro_lib`
* TEMP
* Address feedback
* Add missing TODO
* Partially address feedback
* Fix headers
* Switch to enum class for `SectionKind`
* Add missing ASF header
* Fix lint
* Fix lint again
* Fix lint
* Kill lint warnings
* Address feedback
* Change Python interface to MicroTVM
All interaction with the device is now through `Session` objects, which
are used through Python's `with` blocks.
* Reorder LowLevelDevice interface
* Store shared ptr to session in all alloced objects
* Move helper functions out of `tvm.micro`
* Switch static char arr to vector
* Improve general infra and code quality
Does not yet address all of tqchen's feedback
* Forgot a rename
* Fix lint
* Add ASF header
* Fix lint
* Partially address MarisaKirisame's feedback
* Lint
* Expose `MicroSession` as a node to Python
* Revert to using `Session` constructor
* Fix compiler error
* (Maybe) fix CI error
* Debugging
* Remove
* Quell lint
* Switch to stack-based session contexts
* Make uTVM less intrusive to host codegen
And use SSA for operands of generated ternary operators
* Inline UTVMArgs into UTVMTask struct
* Remove `HostLowLevelDevice` header
* Remove `BaseAddr` class
* Address feedback
* Add "utvm" prefix to global vars in runtime
* Fix lint
* Fix CI
* Fix `test_binutil.py`
* Fix submodules
* Remove ResNet tests
* Make `test_binutil.py` work with nose
* Fix CI
* I swear this actually fixes the binutil tests
* lint
* lint
* Add fcompile-compatible cross-compile func
* Add docs for uTVM runtime files
* Move pointer patching into `MicroSession`
* Fix lint
* First attempt at unifying cross-compile APIs
* Fix lint
* Rename `cross_compile` back to `cc`
* Address feedback
* Remove commented code
* Lint
* Figure out failing function
* Remove debugging code
* Change "micro_dev" target to "micro"
* Add checks in tests for whether uTVM is enabled
* Add TODO for 32-bit support
* Rename more "micro_dev" to "micro"
* Undo rename
We already have `tvm.micro` as a namespace. Can't have it as a method
as well.
* Fix failing CI
Thanks to @tqchen for finding this bug. Emitting ternary operators for
`min` and `max` causes concurrency bugs in CUDA, so we're moving the
ternary op emissions from `CodeGenC` to `CodeGenCHost`.
* Address feedback
* Fix lint
Philip Hyunsu Cho [Thu, 25 Jul 2019 06:12:44 +0000 (23:12 -0700)]
Add a missing header in cuda_device_api.cc (#3621)
Yong Wu [Thu, 25 Jul 2019 06:12:11 +0000 (23:12 -0700)]
[Relay][Keras] Permute, Softmax support (#3618)
Jian Weng [Thu, 25 Jul 2019 01:07:51 +0000 (18:07 -0700)]
fix typo (#3611)
Animesh Jain [Thu, 25 Jul 2019 01:06:36 +0000 (18:06 -0700)]
[TOPI] Average Pool2D Bug. (#3607)
* [TOPI] Average Pool2D Bug.
Issue - https://github.com/dmlc/tvm/issues/3581
* Add uint16 test.
Logan Weber [Wed, 24 Jul 2019 22:39:10 +0000 (15:39 -0700)]
Remove prints in `generic_op_impl.py` (#3616)
Tianqi Chen [Wed, 24 Jul 2019 20:53:22 +0000 (13:53 -0700)]
Hotfix pylint (#3615)
Tianqi Chen [Wed, 24 Jul 2019 18:48:39 +0000 (11:48 -0700)]
[TEST] Fix testcase to make them more compatible to zero-rank (#3612)
雾雨魔理沙 [Wed, 24 Jul 2019 18:31:19 +0000 (11:31 -0700)]
init (#3571)
quickfix
Wuwei Lin [Wed, 24 Jul 2019 18:30:46 +0000 (02:30 +0800)]
[TOPI][Relay] max_pool2d & avg_pool2d gradient (#3601)
Zhi [Wed, 24 Jul 2019 17:13:16 +0000 (10:13 -0700)]
[Relay][vm] Small bug fix for DataTypeObject (#3604)
* small bug fix for DataTypeObject
* retrigger ci
Andrew Tulloch [Tue, 23 Jul 2019 21:44:27 +0000 (14:44 -0700)]
We observe multiple groups across a range of domains (ASR, NMT, LM, etc), (#3566)
internally and externally, interested in replacing standard dense layers with
block-sparse matrix multiplication layers. The motivations are generally: higher
performance (due to reduction in FLOPs, memory bandwidth/cache footprint),
enabling larger models (e.g. fitting more layers in a given memory budget).
Some public work along these lines:
* https://openai.com/blog/block-sparse-gpu-kernels/
* https://openai.com/blog/sparse-transformer/
* https://arxiv.org/abs/1802.08435
* https://arxiv.org/abs/1711.02782
Various groups have been able to successfully train models with reasonable
levels of sparsity (90%+) with marginal accuracy changes, which suggests
substantial speedups are possible (as this implies a >10x reduction in FLOPs).
It is fairly straightforward to realize these theoretical speedups, see e.g. TVM
benchmarks for Intel CPUs in
https://gist.github.com/ajtulloch/e65f90487bceb8848128e8db582fe902, and CUDA
results in https://github.com/openai/blocksparse, etc.
* https://github.com/openai/blocksparse (CUDA)
* https://software.intel.com/en-us/mkl-developer-reference-c-mkl-bsrmm (MKL BSRM)
* https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.sparse.bsr_matrix.html (SCIPY BSR representation)
This is extracted from a patch we've been using internally. There are
various extensions possible (int8/fp16/bf16, CUDA/other GPU architectures), but
this is a reasonable starting point. This needs more thorough unit test coverage
however.
We follow the conventions established by scipy.sparse.bsr_matrix and other
libraries, see the unit tests for details.
For folks interested in experimenting with scheduling/AutoTVM etc,
https://gist.github.com/ajtulloch/
e65f90487bceb8848128e8db582fe902 is a useful
starting point.
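The BSR convention the PR follows can be illustrated with a small pure-Python matvec (a sketch only; the real kernels are TVM schedules, and this block layout mirrors scipy.sparse.bsr_matrix rather than the PR's exact internals). Only stored blocks contribute FLOPs, which is where the speedup at 90%+ sparsity comes from.

```python
def bsr_matvec(data, indices, indptr, bs, x):
    """y = A @ x for a block-sparse matrix A stored BSR-style:
    data[k] is the k-th nonzero (bs x bs) block, indices[k] its block column,
    and indptr[r]:indptr[r+1] delimits the nonzero blocks of block row r."""
    nbrows = len(indptr) - 1
    y = [0.0] * (nbrows * bs)
    for r in range(nbrows):
        for k in range(indptr[r], indptr[r + 1]):
            c = indices[k]          # block column of this stored block
            block = data[k]
            for i in range(bs):
                for j in range(bs):
                    y[r * bs + i] += block[i][j] * x[c * bs + j]
    return y

# A 4x4 matrix with 2x2 blocks; only the two diagonal blocks are stored.
data = [[[1.0, 0.0], [0.0, 1.0]], [[2.0, 0.0], [0.0, 2.0]]]
indices = [0, 1]
indptr = [0, 1, 2]
print(bsr_matvec(data, indices, indptr, 2, [1.0, 2.0, 3.0, 4.0]))
```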
Andrew Tulloch [Tue, 23 Jul 2019 21:43:27 +0000 (14:43 -0700)]
{relay,topi}.reinterpret support (#3599)
= Motivation
It's useful to expose the tvm::reinterpret functionality to Relay/TOPI users, as
this allows them to build (fused) operators leveraging the bitwise
reinterpretation of an operator. An example is approximate transcendental
functions, which can be implemented similar to:
```.py
def C(x):
    return relay.expr.const(x, "float32")

def approx_exp(x):
    x = relay.minimum(relay.maximum(x, C(-88.0)), C(88.0))
    x = C(127.0) + x * C(1.44269504)
    xf = relay.floor(x)
    i = relay.cast(xf, "int32")
    x = x - xf
    Y = C(0.99992522) + x * (C(0.69583354) + x * (C(0.22606716) + x * C(0.078024523)))
    exponent = relay.left_shift(i, relay.expr.const(23, "int32"))
    exponent = relay.reinterpret(exponent, "float32")
    return exponent * Y

def approx_sigmoid(x):
    # <2.0e-5 absolute error over [-5, 5]
    y = approx_exp(x)
    return y / (y + C(1.0))

def approx_tanh(x):
    # <4.0e-5 absolute error over [-5, 5]
    x = x * C(2.0)
    y = approx_exp(x)
    return (y - C(1.0)) / (y + C(1.0))
```
See unit tests for implementations of these approximate transcendentals.
Luis Vega [Tue, 23 Jul 2019 17:23:10 +0000 (10:23 -0700)]
remove tabs (#3603)
Steven S. Lyubomirsky [Tue, 23 Jul 2019 17:14:53 +0000 (10:14 -0700)]
[Relay][Pass][Docs] Update the doc for adding a Relay pass to mention the pass infra (#3583)
* Update the Relay adding pass doc to reference the new pass infrastructure
* Correct pass name
Co-Authored-By: Zhi <5145158+zhiics@users.noreply.github.com>
* Align header equals signs
Animesh Jain [Tue, 23 Jul 2019 16:59:40 +0000 (09:59 -0700)]
Checking the correct dtypes for choosing the Intel int8 instructions. (#3516)
雾雨魔理沙 [Tue, 23 Jul 2019 09:52:44 +0000 (02:52 -0700)]
[Relay] [Training] Allow gradient to return a tuple (#3600)
Andrew Tulloch [Tue, 23 Jul 2019 02:48:55 +0000 (19:48 -0700)]
[Runtime] [ThreadPool] Make SpscTaskQueue::Pop(..) spin_count configurable (#3577)
In cases where we have multiple models or threadpools active, spinning around
`sched_yield()` may not be desirable, as it prevents the OS from effectively
scheduling other threads.
Thus, allow users to conditionally disable this behaviour (via an environment
variable `TVM_THREAD_POOL_SPIN_COUNT`, similar to existing environment flags for
the thread pool such as `TVM_BIND_THREADS`, etc).
This substantially improves tail latencies in some of our multi-tenant
workloads in practice.
Unit tests have been added - on my laptop, running:
```
TVM_THREAD_POOL_SPIN_COUNT=0 ./build/threading_backend_test;
TVM_THREAD_POOL_SPIN_COUNT=1 ./build/threading_backend_test;
./build/threading_backend_test;
```
gives https://gist.github.com/ajtulloch/1805ca6cbaa27f5d442d23f9d0021ce6
(i.e. 97ms -> <1ms after this change)
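The spin-then-yield behaviour being made configurable can be sketched in a few lines (a Python model for illustration; the actual implementation lives in the C++ thread pool, and the default spin count here is an assumption):

```python
import os
import time

def spin_then_wait(poll, spin_count=None, yield_fn=time.sleep):
    """Spin-poll `poll()` up to `spin_count` times, then fall back to yielding.
    Mirrors the idea behind TVM_THREAD_POOL_SPIN_COUNT: setting it to 0
    disables spinning so the OS can schedule other threads immediately."""
    if spin_count is None:
        spin_count = int(os.environ.get("TVM_THREAD_POOL_SPIN_COUNT", "300"))
    for _ in range(spin_count):
        if poll():          # fast path: work arrived while spinning
            return True
    while not poll():       # slow path: yield between polls
        yield_fn(0)         # give up the rest of the time slice
    return True
```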
Ramana Radhakrishnan [Mon, 22 Jul 2019 22:12:09 +0000 (23:12 +0100)]
Add support for Tflite operator SPLIT (#3520)
* [RFC] Initial support for Tflite operator SPLIT
This patch adds initial support for the tflite operator split. However
I am not yet sure how to handle the axis parameter for the split
operator and support it in the test infrastructure. Putting this up for
an initial review and comment.
The split operator in tflite according to
https://www.tensorflow.org/lite/guide/ops_compatibility
appears to take num_or_size_split as a 0D tensor.
I also note that tflite.split is one of the few operators that returns
multiple outputs and thus the helper routines in the tests needed some
massaging to make this work.
@apivarov , could you please review this ?
Thanks,
Ramana
* Fix the axis parameter
Add more tests
* Address review comments
* Try out frozen_gene's suggestion
* Handle split of 1 element
* int32 is only supported in tflite 1.14, let's check that version here.
* Keep this at python3.5
* Add packaging as a python package to be installed
Tianqi Chen [Mon, 22 Jul 2019 18:37:43 +0000 (11:37 -0700)]
Update Jenkinsfile
Thierry Moreau [Mon, 22 Jul 2019 15:31:37 +0000 (08:31 -0700)]
[VTA] Runtime refactor to allow for non-shared memory FPGAs (e.g. F1) (#3554)
* updated runtime to support non-shared memory FPGAs for instruction and micro-op kernels
* adding driver-defined memcpy function to handle F1 cases
* refactor to include flush/invalidate in memcpy driver function
* update tsim driver
* bug fixes
* cleanup
* pre-allocate fpga readable buffers to improve perf
* fix
* remove instruction stream address rewrite pass for micro op kernels
* fix:
* white spaces
* fix lint
* avoid signed/unsigned compilation warning
* avoid signed/unsigned compilation warning
* fix
* fix
* addressing comments
* whitespace
* moving flush/invalidate out of memmove
* clearnup
* fix
* cosmetic
* rename API
* comment fix
Tianqi Chen [Sun, 21 Jul 2019 23:30:31 +0000 (16:30 -0700)]
[CI] Upgrade LLVM envs (#3590)
Luis Vega [Sun, 21 Jul 2019 22:45:48 +0000 (15:45 -0700)]
add coherent, length, and user bits option to Shell Config (#3593)