Baden Hughes [Thu, 25 Jun 2020 16:38:40 +0000 (02:38 +1000)]
Update install.rst (#5858)
* Update install.rst
minor cleanups/corrections
* Update install.rst
Fixed broken link
Haichen Shen [Thu, 25 Jun 2020 14:55:40 +0000 (07:55 -0700)]
[Relay][Vm] Some performance improvement to VM (#5901)
* make alignment constant
* tweak copyto and loadscalarint
* some safety check
* x
* lint
* fix
Chenfan [Thu, 25 Jun 2020 05:44:39 +0000 (13:44 +0800)]
CUDA device API & VerifyGPUCode pass update (#5898)
* Add kMaxRegistersPerBlock device api for cuda
* Add vectorize check to verify_gpu_code
* Lint fix
* Cast fix
Yao Wang [Thu, 25 Jun 2020 05:29:51 +0000 (22:29 -0700)]
[Thread Backend]Fix CPU Thread Binding for Multiple Sockets (#5918)
* Fix CPU Thread Binding for Multiple Sockets
* Backward compatibility
Tom Gall [Wed, 24 Jun 2020 22:44:11 +0000 (17:44 -0500)]
Add MicroTVM tutorial using the STM32F746 discovery board (#5655)
* Add MicroTVM tutorial using the STM32F746 discovery board
with a sample tflite model
Signed-off-by: Tom Gall <tom.gall@linaro.org>
* Fix: add a reference to the new tutorials/micro directory
Signed-off-by: Tom Gall <tom.gall@linaro.org>
* fix: Cosmetic, align Micro TVM text with divider
Signed-off-by: Tom Gall <tom.gall@linaro.org>
* Fixes to remove warnings, spaces for readability, code blocks
Signed-off-by: Tom Gall <tom.gall@linaro.org>
* remove use of dload in favor of requests for obtaining the TFLite model
Signed-off-by: Tom Gall <tom.gall@linaro.org>
* add setup for CMSIS_ST_PATH
comment out portion of tutorial that will not run without a physical board available
Signed-off-by: Tom Gall <tom.gall@linaro.org>
* Fix warning due to ** in python but part of a comment block
The block is commented out since it can only run on device
Signed-off-by: Tom Gall <tom.gall@linaro.org>
* Numerous reworks to address feedback.
Within docs/conf.py place the microTVM tutorial prior to the VTA tutorials
Within the micro_tflite tutorial:
- rework section headers
- reorder code so model prep code is all in one place, as is the code
for running on device
- address indentation feedback
- remove '' '' usage which I mistakenly thought was getting around a
sphinx issue involving **
Signed-off-by: Tom Gall <tom.gall@linaro.org>
* Change disable_vectorize to use current approach with tvm.transform.PassContext
Change to pull example model from github with download_testdata
Add 2.5K tflite model
Couple of small changes following https://sphinx-gallery.github.io/stable/syntax.html
Signed-off-by: Tom Gall <tom.gall@linaro.org>
* remove use of relay.build_config in favor of PassContext
Signed-off-by: Tom Gall <tom.gall@linaro.org>
* Couple of minor 4 space fix ups
Signed-off-by: Tom Gall <tom.gall@linaro.org>
* Change to use tvm.transform.PassContext for disable_vectorize and disabling FuseOps (see the PassContext sketch at the end of this entry)
Signed-off-by: Tom Gall <tom.gall@linaro.org>
* Remove binary module from repo
Change download_testdata back to pull model from linaro server
Signed-off-by: Tom Gall <tom.gall@linaro.org>
* Couple of small cosmetic changes. (spaces and extra lines)
Signed-off-by: Tom Gall <tom.gall@linaro.org>
* Convert the link to the TF docs on examining a TFLite model to RST syntax
Signed-off-by: Tom Gall <tom.gall@linaro.org>
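A minimal sketch (assuming v0.7-dev APIs) of the tvm.transform.PassContext usage these commits switch to in place of relay.build_config; the tiny ReLU module is a hypothetical stand-in for the tutorial's TFLite model:

    import tvm
    from tvm import relay

    # Hypothetical stand-in for the model imported via relay.frontend.from_tflite.
    x = relay.var("x", shape=(1, 4), dtype="float32")
    mod = tvm.IRModule.from_expr(relay.Function([x], relay.nn.relu(x)))

    # Disable vectorization and operator fusion, as the tutorial does for
    # microcontroller builds targeting the C backend.
    with tvm.transform.PassContext(
            opt_level=3,
            config={"tir.disable_vectorize": True},
            disabled_pass=["FuseOps"]):
        graph, c_mod, params = relay.build(mod, target="c")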
Tianqi Chen [Wed, 24 Jun 2020 21:48:12 +0000 (14:48 -0700)]
[TIR][REFACTOR] Deprecate FreeStmt (#5890)
Currently FreeStmt is not being used.
While it can be useful to have an early free hint,
we can always use an intrinsic instead of a first-class statement.
Cody Yu [Wed, 24 Jun 2020 18:44:23 +0000 (11:44 -0700)]
[PatternLang] Support any index matching for TupleGetItem (#5909)
* support any index matching
* update doc
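A small sketch of the new matching behavior, assuming the tvm.relay.dataflow_pattern API of this release: omitting the index makes the pattern match a TupleGetItem at any position.

    from tvm import relay
    from tvm.relay.dataflow_pattern import is_tuple_get_item, wildcard

    # No index given, so the pattern matches any TupleGetItem index.
    pat = is_tuple_get_item(wildcard())

    tup = relay.Tuple([relay.const(1), relay.const(2)])
    assert pat.match(relay.TupleGetItem(tup, 0))
    assert pat.match(relay.TupleGetItem(tup, 1))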
lixiaoquan [Wed, 24 Jun 2020 15:49:13 +0000 (23:49 +0800)]
Fix serialization of inf float value (#5912)
Thomas Viehmann [Wed, 24 Jun 2020 11:49:43 +0000 (13:49 +0200)]
Don't multiply by constant 1 uselessly in dense (#5911)
Thomas Viehmann [Wed, 24 Jun 2020 04:42:38 +0000 (06:42 +0200)]
PyTorch frontend: fix handling of duplicate use of a model weight (#5897)
This happens e.g. in shared input/output embeddings in BERT
or siamese networks.
Thank you @siju-samuel for reporting.
Junru Shao [Wed, 24 Jun 2020 03:16:04 +0000 (20:16 -0700)]
Allow implicit conversion in TVM FFI to tvm::Bool (#5907)
Matthew Brookhart [Tue, 23 Jun 2020 23:18:29 +0000 (16:18 -0700)]
Add Binary Intrinsic ops to TIR Ops in C++ (#5900)
* Add Binary Intrinsic ops to TIR Ops in C++
* clang-format
Thomas Viehmann [Tue, 23 Jun 2020 21:01:46 +0000 (23:01 +0200)]
add a few gradients (#5899)
Jared Roesch [Tue, 23 Jun 2020 18:39:48 +0000 (11:39 -0700)]
Rust Refactor Stage 4: Rewrite Rust graph runtime to use new APIs (#5830)
* Port graph-runtime to new API
* --amend
* Fix file lint
* Remove old travis file
* Add @kazum's patch
* Update rust/tvm-sys/src/datatype.rs
Co-authored-by: Andrew <amcharg@gmail.com>
Tianqi Chen [Tue, 23 Jun 2020 15:24:49 +0000 (08:24 -0700)]
Fix the python intrin rule (#5895)
Cody Yu [Tue, 23 Jun 2020 15:23:58 +0000 (08:23 -0700)]
[Relay]Allow every runtime module to handle constants (#5885)
* update source module
* address comment
Cody Yu [Tue, 23 Jun 2020 15:22:12 +0000 (08:22 -0700)]
remove fatal (#5888)
Giuseppe Rossini [Tue, 23 Jun 2020 05:20:05 +0000 (06:20 +0100)]
[RFC] Improve quantized convolution performance for armv8 architectures (#5754)
* Improve quantized conv2d performance for armv8
Signed-off-by: Giuseppe Rossini <giuseppe.rossini@arm.com>
Change-Id: I3a3d29f5332dd9b3354e8e0dfb24677a521f9c8f
* Add ASF header to conv2d_gemm.py
Change-Id: I33853279e39c849ae1b555a9c91d7557985a0a35
* Run clang-format-10 on c++ files
Change-Id: Ieee22f032e595dabfc1616ab33466fcbf8d94365
* Fix pylint errors/warnings
Change-Id: I435d4d7bca7500db99547f4401fdc0d0995a1ff4
* Fix pylint errors/warnings in topi
Change-Id: I2fc1ad8453e9020072ab967c849df5390c2967b5
* Fix legalizations tests for aarch64
Change-Id: I0a67a49a7849f52ef7d57b9292ce9125bbb7cb2c
* Reintroduce conv2d_nhwc_spatial_pack.arm_cpu and int16 cast
Change-Id: I91b67fabd475e90a9b75f2dd5ecfee851265e0bb
* Switch type of legalization depending on the strategy used
Change-Id: I9a03040a8c40a6cd2658ed14c3751e05a8e19f2b
* Revert last commit
Change-Id: Ice34101e358e3ce8ebfb12c58f73e910ba5de8e8
* Fix the auto-tuner by registering the correct schedules
Change-Id: Id9273688b2620e1ea849ab01b4c46af8fbf37fd0
* Address review comments
Change-Id: Ia1755a0af7b6d159072d9f0c93c932c481101e48
* Improve usability and readability of conv2d_gemm_weight_transform
Change-Id: I3333186bbc2fe4054b58ce15d910e3be7b315482
* Change variable name to weight in Conv2DGemmWeightTransformRel
Change-Id: Ifb5f1f33af7512fe67c6b049b20a42a0bb2d26c9
* Fix clang-10 linting errors
Change-Id: I25ccc844d9cee23766096e1daddb6180abc413a6
* Trigger tests
Change-Id: Id37706fb7cf77a87a3cc817ecf8046297d9ca95a
Tianqi Chen [Tue, 23 Jun 2020 00:47:01 +0000 (17:47 -0700)]
[TIR][REFACTOR][API-CHANGE] Change Call.name to Call.op(RelayExpr) (#5863)
* [TIR][REFACTOR][API-CHANGE] Change Call.name(string) to Call.op(tvm::Op/RelayExpr)
This PR brings a major refactor to the tir::Call structure.
The current Call structure uses a string field (name) to identify the
function/intrinsic being called. This approach is limited as we start
to expand TIR to be more structured. In particular, we are interested in
the following aspects:
- Type a function and perform better compile time type checking so that we
can find errors early.
- Register additional properties about an operator, such as:
- Whether an intrinsic can be vectorized
- What is the adjoint function of the intrinsic (for tensor expression AD)
- Whether the operator has side effect.
- Perform specific codegen about an intrinsic if necessary.
- Call into another function in the same module.
The refactor changes the Call.name field to Call.op.
The Call.op field has a RelayExpr type, and we can pass:
- A tvm::Op which represents the corresponding intrinsic.
- A tvm::GlobalVar for calling into another function in the IRModule.
All the current intrinsics are migrated by registering a tvm::Op.
Because the unified IR shares a single Op registry, we use the "tir"
namespace for TIR-related intrinsics; for example, bitwise-and is now registered
under `tir.bitwise_and` (a short sketch follows at the end of this entry).
To simplify the upgrade, we introduce a `tir.call_extern` intrinsic
that allows us to call into arbitrary external functions without type checking.
However, we should move towards more type-checked variants in the system.
Under the new op design, we should no longer try to pattern-match all the
specific intrinsics. Instead, we should rely on the attrs of each Op to do transformations.
For example, the vectorization pass depends on the TVectorizable property of the op,
which can be registered independently.
In this way, we can still grow the number of intrinsics when necessary
without having to change all the passes.
The same rule applies for tensor expression AD. Currently we are performing
AD by pattern matching on operators like exp, sin, and cos. We should instead
change to an adjoint registration mechanism like the one in Relay.
Followup refactors need to be performed, including:
- Fold the Call.call_type into the operator's attributes.
- Enrich the operator registry information.
- Refactor passes (e.g. AD, intrin lowering) to use the attribute-based transformation.
* Fix nms
* Fix remaining testcase
* Address review comment
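A hedged sketch of the new convention from the Python side: the callee of a tir::Call is now identified by an operator (here the tir.bitwise_and registered above) rather than a bare name string.

    import tvm
    from tvm import tir

    a = tir.IntImm("int32", 6)
    b = tir.IntImm("int32", 3)
    # The resulting Call node carries a tvm.ir.Op in its `op` field
    # instead of the old string name.
    call = tir.call_intrin("int32", "tir.bitwise_and", a, b)
    print(call.op)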
Haichen Shen [Mon, 22 Jun 2020 23:45:44 +0000 (16:45 -0700)]
[COMMUNITY] Matthew Brookhart -> Reviewer (#5886)
Thomas Viehmann [Mon, 22 Jun 2020 23:40:45 +0000 (01:40 +0200)]
keep parameter names from PyTorch (#5887)
Thomas Viehmann [Mon, 22 Jun 2020 13:33:04 +0000 (15:33 +0200)]
Improve type handling in PyTorch frontend (#5834)
* Improve type handling in PyTorch frontend
- Use type information from graph for inputs if available. Check
against shape information from graph if available.
- Allow user to set default dtype (default to float32 for sanity and
compatibility).
- Implement type promotion to follow PyTorch mechanism. This includes
fixing the handling of many "Scalar" overloads in PyTorch binary ops.
- Fix arange/linspace type semantics.
- Added support for traced functions. (Because it really is about the
"self" input handling.)
Aside from adding an optional default_dtype keyword argument, this does not
change the signature/requirements of from_pytorch.
* Fix scalar detection using numpy.isscalar
and address other review comments. Thank you @siju-samuel
* refine test criterion on qnn_test::test_serialized_modules, fix bool conversion of const
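A sketch of the new optional keyword; aside from default_dtype, the from_pytorch call is unchanged, and the torchvision model is only a convenient example input.

    import torch
    import torchvision
    from tvm import relay

    model = torchvision.models.resnet18().eval()
    inp = torch.randn(1, 3, 224, 224)
    scripted = torch.jit.trace(model, inp)

    # default_dtype controls the dtype assumed where the graph gives none.
    mod, params = relay.frontend.from_pytorch(
        scripted, [("input0", (1, 3, 224, 224))], default_dtype="float32")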
Matthew Brookhart [Mon, 22 Jun 2020 05:55:16 +0000 (22:55 -0700)]
Fail early before running invalid dynamic graphs (#5856)
* fail early before running invalid dynamic graphs
* fix an issue with the VM comment
Junru Shao [Mon, 22 Jun 2020 03:53:39 +0000 (20:53 -0700)]
[Bugfix][Build] Fix building with LLVM-10 on macOS (#5859)
Balint Cristian [Sun, 21 Jun 2020 22:36:51 +0000 (01:36 +0300)]
[QUANTIZE] Add nn.batch_flatten as quantizable. (#5805)
* [ONNX] Skip ADD inside Gemm op when vector is zero
* [QUANTIZE] Add nn.batch_flatten as quantizable.
Junru Shao [Sat, 20 Jun 2020 02:36:08 +0000 (19:36 -0700)]
[Target] Introduce Target Id Registry (#5838)
Cody Yu [Sat, 20 Jun 2020 00:04:53 +0000 (17:04 -0700)]
[DOCS] Update has_dtype/has_shape to pattern lang doc (#5847)
Tianqi Chen [Fri, 19 Jun 2020 17:37:46 +0000 (10:37 -0700)]
Fix map assign issue in CI test (#5854)
Thomas Viehmann [Fri, 19 Jun 2020 15:28:45 +0000 (17:28 +0200)]
Add Python Classes for all Attrs (#5853)
Menooker [Fri, 19 Jun 2020 14:40:40 +0000 (22:40 +0800)]
[DataType] Add bfloat16 (#5601)
Junru Shao [Fri, 19 Jun 2020 14:35:44 +0000 (07:35 -0700)]
[Object] Introduce POD-C Compliant tvm::Map (#5740)
mbaret [Fri, 19 Jun 2020 14:35:05 +0000 (15:35 +0100)]
[FIX] Recover global state after test_util.py (#5824)
In test_util.py, a program exit is simulated to test
that the error throwing behaviour is accurate.
Unfortunately, this also deletes necessary global state
and so all subsequent tests that run and use tempdir
throw the same error.
This patch is a simple fix to restore the global state
at the end of the test.
Change-Id: I62fef46167e47f6af43271e2ce1db30f54857647
Haichen Shen [Thu, 18 Jun 2020 23:29:21 +0000 (16:29 -0700)]
[AutoTVM] Suppress the warning messages when compile engine selects impls (#5821)
ANSHUMAN TRIPATHY [Thu, 18 Jun 2020 23:21:55 +0000 (04:51 +0530)]
Additional canonicalization added for AddNode (#5846)
Matthew Brookhart [Thu, 18 Jun 2020 22:28:01 +0000 (15:28 -0700)]
fix batchnorm infer_value error, add regression test and unit test (#5845)
Zhi [Thu, 18 Jun 2020 22:18:29 +0000 (15:18 -0700)]
[RUNTIME] Introduce MetadataModule to separate code compilation/interpretation and weight initialization (#5770)
Jared Roesch [Thu, 18 Jun 2020 18:33:25 +0000 (11:33 -0700)]
`tvm` crate stage 3 of Rust refactor (#5769)
* Adapt to new macro
* Add tvm crate
* Fix out of tree pass with new bindings
* Super slick API working
* Add examples
* Delay egg example and add ASF headers
* Move array.rs around
* Remove outdated tests will restore in CI PR
* Fix some memory issues
* Fix ref counting issue
* Formatting and cleanup
* Remove out-of-tree for now
* Remove out-of-tree
Thomas Viehmann [Thu, 18 Jun 2020 18:14:21 +0000 (20:14 +0200)]
ffi (Object): make class dict visible in instances (#5843)
masahi [Thu, 18 Jun 2020 16:24:03 +0000 (01:24 +0900)]
[Torch][Quantized] Fix converting serialized quantized models (#5839)
* [Torch] Fix converting serialized quantized models
* clean up dtype check
* comment clean up
Siju Samuel [Thu, 18 Jun 2020 00:50:33 +0000 (06:20 +0530)]
[KERAS]RepeatVector, Conv3DTranspose op support added (#5833)
Thomas Viehmann [Wed, 17 Jun 2020 20:39:12 +0000 (22:39 +0200)]
Add a combine batch_matmul pass (#5791)
* Add a combine batch_matmul pass
Contrary to what you might expect, this doesn't share as much code with
the combine dense pass as it does with the combine 2d conv pass.
This is because it concatenates the "output feature" dimensions.
* fix docstring
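A minimal sketch of invoking the pass (exposed as relay.transform.CombineParallelBatchMatmul); the two batch_matmul branches sharing x are what gets merged, with min_num_branches lowered so this toy example qualifies:

    import tvm
    from tvm import relay

    x = relay.var("x", shape=(2, 8, 16))
    w1 = relay.var("w1", shape=(2, 4, 16))
    w2 = relay.var("w2", shape=(2, 4, 16))
    out = relay.Tuple([relay.nn.batch_matmul(x, w1),
                       relay.nn.batch_matmul(x, w2)])
    mod = tvm.IRModule.from_expr(relay.Function([x, w1, w2], out))

    # Merge the parallel branches into one batch_matmul over the
    # concatenated "output feature" dimension.
    mod = relay.transform.CombineParallelBatchMatmul(min_num_branches=2)(mod)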
Haichen Shen [Wed, 17 Jun 2020 20:15:14 +0000 (13:15 -0700)]
[Frontend][MXNet] Support a few contrib ops in mxnet (#5819)
* support for bert in mxnet1.6 and gluonnlp0.9
* fix converter
* Add test cases
* add a todo
Yao Wang [Wed, 17 Jun 2020 17:05:35 +0000 (10:05 -0700)]
[Frontend][TensorFlow]Fix TF Dynamic input shape (#5825)
* Fix TF Dynamic input shape
* Remove warning
* Add test
Mahesh Ambule [Wed, 17 Jun 2020 03:53:08 +0000 (09:23 +0530)]
[Relay, Topi] [Frontend][TFLite, MXNet] ReverseSequence operator (#5495)
* TFLite reverse_sequence op
* TFLite add_n implementation
* reverse_sequence implementation
* reverse_sequence implementation
* reverse sequence
* TOPI,Relay,TFLite - Reverse Sequence
Signed-off-by: maheshambule <mahesh_ambule@persistent.com>
* Reverse Sequence small fixes
Signed-off-by: maheshambule <mahesh_ambule@persistent.com>
* lint fixes
Signed-off-by: maheshambule <mdambule07@gmail.com>
* TFLite reverse_sequence op
Signed-off-by: maheshambule
* MXNet SequenceReverse implementation
* clang format
* clang format
* review comment fixes
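A short sketch of the new operator as exposed in Relay; the constants are a made-up example:

    import numpy as np
    from tvm import relay

    data = relay.const(np.arange(6, dtype="float32").reshape(2, 3))
    seq_lengths = relay.const(np.array([3, 2], dtype="int32"))
    # Reverses the first seq_lengths[i] elements of each batch row:
    # row 0 -> [2, 1, 0]; row 1 -> [4, 3, 5]
    out = relay.reverse_sequence(data, seq_lengths, seq_axis=1, batch_axis=0)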
Zhi [Wed, 17 Jun 2020 03:49:51 +0000 (20:49 -0700)]
[RUNTIME][String] Overload string operators (#5806)
Haichen Shen [Wed, 17 Jun 2020 02:30:49 +0000 (19:30 -0700)]
[Fix] Fix recursive let for well formed check (#5780)
lixiaoquan [Tue, 16 Jun 2020 22:11:22 +0000 (06:11 +0800)]
[MergeComposite] Fix InferType when module contains Prelude (#5797)
A function may refer to other resources in the same module, so keep
the content of the original module when inferring a function.
Thomas Viehmann [Tue, 16 Jun 2020 21:13:40 +0000 (23:13 +0200)]
fix relay.build to not change the module argument in place (#5822)
Haichen Shen [Tue, 16 Jun 2020 20:14:07 +0000 (13:14 -0700)]
[Relay][OpStrategy] Tweak cublas/cudnn priority level (#5820)
* Tweak cublas plevel
* update
* trigger ci
Ina Dobreva [Tue, 16 Jun 2020 17:16:26 +0000 (20:16 +0300)]
[Frontend][TFlite] Add parser support for relu6, leaky_relu, relu_n1_to_1, log_softmax (#4805)
* [Frontend][TFLite]Add support for relu6, leaky_relu, relu_n1_to_1, log_softmax
* add implementation in parser
* add qnn tests for each operator
* Implement clip operation for quantized relu6, relu1
* add 'clip' as in the quantized fused operations
* remove redundant assertions and imports
* Fix floating value quantization for RELU6 and RELU1
ANSHUMAN TRIPATHY [Tue, 16 Jun 2020 15:24:12 +0000 (20:54 +0530)]
Error msg update (#5818)
Tianqi Chen [Mon, 15 Jun 2020 20:08:55 +0000 (13:08 -0700)]
[COMMUNITY] Siju Samuel -> Committer (#5817)
Tianqi Chen [Mon, 15 Jun 2020 17:28:59 +0000 (10:28 -0700)]
[CI] Limit number of threads in all jobs (#5815)
Leandro Nunes [Mon, 15 Jun 2020 14:54:38 +0000 (15:54 +0100)]
Pin hand landmark network to version 0.7.4. (#5813)
* Versions above 0.7.4 are broken due to changes in the
quantization operations in the model, which are currently
not supported by TVM.
Fixes #5774.
Samuel [Mon, 15 Jun 2020 12:47:53 +0000 (18:17 +0530)]
[MXNET]conv3d and conv3d_transpose added (#5814)
Tianqi Chen [Mon, 15 Jun 2020 03:53:11 +0000 (20:53 -0700)]
[CI] Move cpu-only frontend tests to a CPU stage (#5807)
Bing Xu [Mon, 15 Jun 2020 01:54:15 +0000 (18:54 -0700)]
[topi] fix strategy for sparse dense cuda (#5782)
Junru Shao [Sun, 14 Jun 2020 23:28:06 +0000 (16:28 -0700)]
Allow RPCWrappedFunc to rewrite runtime::String as std::string (#5796)
Zijing Gu [Sun, 14 Jun 2020 21:40:20 +0000 (17:40 -0400)]
[topi] fix sparse dense schedule on cuda (#5803)
Balint Cristian [Sun, 14 Jun 2020 17:37:32 +0000 (20:37 +0300)]
[QUANTIZE] Add config switch for nn.dense layer type. (#5801)
Tianqi Chen [Sun, 14 Jun 2020 16:45:46 +0000 (09:45 -0700)]
[TIR][REFACTOR] Add tir prefix to type keys (#5802)
Balint Cristian [Sun, 14 Jun 2020 03:39:20 +0000 (06:39 +0300)]
[ONNX] Skip multiply with 1.0f constant for GEMM import (#5800)
* [ONNX] Skip ADD inside Gemm op when vector is zero
* [ONNX] Skip multiply with 1.0f constant for GEMM import
Tianqi Chen [Sat, 13 Jun 2020 19:18:17 +0000 (12:18 -0700)]
[TEST] Temporary disable fp16 type_as test for PyTorch Frontend (#5799)
Tianqi Chen [Sat, 13 Jun 2020 16:09:00 +0000 (09:09 -0700)]
[TIR][REFACTIR] Update TIR nodes std::string->String. (#5793)
This PR updates the remaining TIR nodes' members to use
String instead of std::string.
Rand Xie [Sat, 13 Jun 2020 04:52:45 +0000 (21:52 -0700)]
support aten::type_as in the pytorch frontend (#5787)
* support aten::type_as in the pytorch frontend
* use _convert_data_type to convert torch type to tvm type and add more types in the type_as test
Yao Wang [Sat, 13 Jun 2020 03:32:46 +0000 (20:32 -0700)]
Fix tf parser (#5794)
Tianqi Chen [Sat, 13 Jun 2020 00:23:05 +0000 (17:23 -0700)]
[TIR][REFACTOR] Cleanup unused classes (#5789)
Matthew Brookhart [Fri, 12 Jun 2020 22:11:34 +0000 (15:11 -0700)]
Edit onnx parser to infer values in post order (#5755)
* edit onnx parser to infer values in post order to speed up onnx imports with many calls to infer_value
* fix pylint
Tianqi Chen [Fri, 12 Jun 2020 21:19:40 +0000 (14:19 -0700)]
[COMMUNITY] @wpan11nv -> Reviewer (#5790)
lixiaoquan [Fri, 12 Jun 2020 17:42:13 +0000 (01:42 +0800)]
[TF] Support symbolic inputs of Fill (#5762)
* [TF] Support symbolic inputs of Fill
* Rebase and simplify. Value has been converted to constant if it is
tf.Constant
Samuel [Fri, 12 Jun 2020 17:18:05 +0000 (22:48 +0530)]
[TENSORFLOW]Conv3d Transpose OP added (#5775)
* [TENSORFLOW]Conv3d Transpose OP added
* Testcase updated; TF CPU supports only NDHWC
Samuel [Fri, 12 Jun 2020 17:16:52 +0000 (22:46 +0530)]
[PYTORCH]aten::norm support added (#5776)
Samuel [Fri, 12 Jun 2020 16:27:32 +0000 (21:57 +0530)]
[FRONTEND]Darknet support batch size for yolo (#5688)
Fix the issue reported in
https://discuss.tvm.ai/t/yolov3-tiny-batch-input-test-failed/6796
mbaret [Fri, 12 Jun 2020 15:43:33 +0000 (16:43 +0100)]
[BYOC][FIX] Infer types in MergeComposite (#5766)
If InferType isn't run between partitioning passes,
function calls are inserted which don't have a type.
This can result in failures for patterns which want
to check types.
This works around it simply by running InferType after
every partitioning.
Change-Id: Ie0887f0564a41eb0913bfe42a362e8effe9681b9
Josh Fromm [Fri, 12 Jun 2020 15:39:24 +0000 (08:39 -0700)]
Add ignore storage_order attribute to onnx pooling parser. (#5781)
Bing Xu [Fri, 12 Jun 2020 15:35:03 +0000 (08:35 -0700)]
[cmake] update vulkan rules (#5777)
Yi-Hsiang (Sean) Lai [Fri, 12 Jun 2020 15:33:20 +0000 (11:33 -0400)]
fix calibration pass to support multiple functions (#5768)
Co-authored-by: Ubuntu <ubuntu@ip-172-31-43-142.us-east-2.compute.internal>
MORITA Kazutaka [Fri, 12 Jun 2020 15:31:31 +0000 (00:31 +0900)]
[CODEGEN][CONTRIB] CoreML codegen (#5634)
* [CODEGEN][CONTRIB] CoreML codegen
* import coremltools only when it is necessary
* fix pylint errors
* don't import contrib.coreml when using runtime lib
* skip coreml codegen test in CI
* don't register relay.ext.coremlcompiler in __init__.py
* move tvm/contrib/coreml.py to tvm/contrib/target/coreml.py
* use existing transformers for graph partitioning
* skip test only when coremltools is not available
* add check for annotation
* move _register_coreml_op to python/tvm/relay/op/contrib/coreml.py
* skip compile when xcode is unavailable
* relay.op.Op -> tvm.ir.Op
* set USE_COREML on
* refine test
notoraptor [Fri, 12 Jun 2020 15:24:56 +0000 (11:24 -0400)]
[topi][relay] Add operation gather to relay. (#5716)
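A short sketch of the new operator, which indexes data along a given axis; the constants are a made-up example:

    import numpy as np
    from tvm import relay

    data = relay.const(np.array([[1, 2], [3, 4]], dtype="float32"))
    indices = relay.const(np.array([[0, 0], [1, 0]], dtype="int32"))
    # out[i][j] = data[i][indices[i][j]] -> [[1, 1], [4, 3]]
    out = relay.gather(data, axis=1, indices=indices)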
hlu1 [Fri, 12 Jun 2020 15:24:29 +0000 (08:24 -0700)]
[Topi] pass-by-value -> pass-by-const-reference (#5783)
Tianqi Chen [Fri, 12 Jun 2020 15:24:18 +0000 (08:24 -0700)]
[REFACTOR][API-Change] Migrate all Object construction to constructor. (#5784)
This PR migrates all the remaining object constructions to the new constructor style
that is consistent with the rest of the codebase and changes the affected files accordingly.
Other changes:
- ThreadScope::make -> ThreadScope::Create
- StorageScope::make -> StorageScope::Create
wrongtest [Fri, 12 Jun 2020 15:23:12 +0000 (23:23 +0800)]
[RUNTIME] Add compile_shared option to linux compile utility fn (#5751)
* feat: Add compile_shared option to linux compile fn
* feat: Add compile_shared option for linux compile util fn
* fix: Fix minrpc testcase use executable compilation
* fix: Fix binutil case where create_shared was called to create an executable
Co-authored-by: baoxinqi <baoxinqi@4paradigm.com>
majiang31312 [Fri, 12 Jun 2020 15:22:24 +0000 (23:22 +0800)]
fix #5686: remove an overstrict assert in MakeAllreduce (#5686) (#5785)
Zhi [Fri, 12 Jun 2020 15:20:57 +0000 (08:20 -0700)]
[DOC][FIX] Fix some typos in git-clang-format.sh (#5786)
Yao Wang [Fri, 12 Jun 2020 05:54:23 +0000 (22:54 -0700)]
[Frontend][TensorFlow] Improve Control Flow and TensorArray (#5699)
* Improve TF parser control flow and tensor array
* Fix tf tensor array scatter
* Add ssd test
* Add back static ta test
* Minor fix for frontend and test_forward
* SplitRel for dynamic shape
* Fix test ssd
* Fix loop var naming issue
* Minor improve
* Fix format
* Fix clang format
* Fix tensor array in pytorch frontend
* Fix stack size issue for ssd test
* Address comments
* Fix slice size
* Fix build
* Rebase
Tianqi Chen [Thu, 11 Jun 2020 23:35:43 +0000 (16:35 -0700)]
[TIR][REFACTOR][API-Change] Migrate tir/stmt.h to use constructor. (#5778)
This PR migrate tvm/tir/stmt.h to the new constructor style that is
consistent with the rest of the codebase and changes the affected files accordingly.
Tianqi Chen [Thu, 11 Jun 2020 18:36:01 +0000 (11:36 -0700)]
[TIR][REFACTOR][API-Change] Migrate the tvm/tir/expr.h to construct style. (#5773)
This PR migrate tvm/tir/expr.h to the new constructor style that is
consistent with the rest of the codebase and changes the affected files accordingly.
Thomas Viehmann [Thu, 11 Jun 2020 16:38:46 +0000 (18:38 +0200)]
Make batch matrix multiplication on GPU tunable (#5752)
This is primarily aimed at the AMD GPU backend and done as part
of a project for AMD, but should work for all users of the GPU
schedule.
Matthew Brookhart [Thu, 11 Jun 2020 14:25:09 +0000 (07:25 -0700)]
Add ShapePattern and DataTypePattern (#5760)
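A brief sketch via the has_shape/has_dtype helpers documented in #5847 above, assuming the v0.7-dev pattern-language API:

    from tvm import relay
    from tvm.relay.dataflow_pattern import has_dtype, has_shape

    x = relay.var("x", shape=(10, 10), dtype="float32")
    assert has_shape((10, 10)).match(x)   # ShapePattern
    assert has_dtype("float32").match(x)  # DataTypePattern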
Thomas Viehmann [Thu, 11 Jun 2020 12:10:27 +0000 (14:10 +0200)]
Fix gelu in PyTorch frontend, tighten numerical checks (#5763)
Previously, the PyTorch frontend approximated gelu with fastgelu.
To provide a more faithful conversion, we implement gelu instead.
We also tighten the numerical comparisons between PyTorch and
TVM-from-PyTorch to 1e-5. The object detection models need an
increased tolerance of 1e-4 to pass.
I had to throw in a few fixes for missing conversions
(probably due to working with very new PyTorch).
I must admit the GoogLeNet/NasNet test didn't run on my machine,
probably due to problems at my end.
Samuel [Thu, 11 Jun 2020 09:12:35 +0000 (14:42 +0530)]
[TOPI][RELAY][PYTORCH]Conv3d_transpose op support added (#5737)
* [TOPI][RELAY][PYTORCH]Conv3d_transpose op support added
* Test cases in topi/relay
* conv3d_transpose_ncdhw_python added
* Review comments fixed
Haichen Shen [Thu, 11 Jun 2020 03:49:57 +0000 (20:49 -0700)]
[Relay] Fix for recursive let (#5757)
* Make let processing iterative
* Try again
* Fix pretty printer overflow
* cleanup
* fix lint
* Fix text printer
Co-authored-by: Jared Roesch <roeschinc@gmail.com>
Co-authored-by: Jared Roesch <jroesch@octoml.ai>
Zijing Gu [Wed, 10 Jun 2020 17:07:36 +0000 (13:07 -0400)]
[topi] block sparse dense on cuda (#5746)
Jared Roesch [Wed, 10 Jun 2020 09:07:19 +0000 (02:07 -0700)]
[Rust] Second stage of Rust Refactor (#5527)
* Add tvm-rt crate
* Backport changes from frontend branch
* Format
* Add ASF headers
* Address self-code review
* Replace with helper
* Fix lint
* Fix
* Clean up repro debugging
* WIP
* Remove global registry to fix one memory issue
* Fix
* Format
* Format
* Update rust/tvm-rt/README.md
Co-authored-by: Jason Knight <binarybana@gmail.com>
* Format
* Duplicate TVM macros
* Split macros
* Restore old macro for old crates
* Repair macros
* Fix format
* Format
Co-authored-by: Jason Knight <binarybana@gmail.com>
Tianqi Chen [Wed, 10 Jun 2020 05:28:52 +0000 (22:28 -0700)]
[REFACTOR][TIR] Provide->ProducerStore, Realize->ProducerRealize. (#5750)
This PR finishes up the final step for DSL/TIR de-coupling to refactor
Provide/Realize to use the DataProducer.
As in the case of ProducerLoad, ProducerStore/Realize are not supposed
to appear in a valid TIR function and are only used by high-level DSLs
as intermediate structures.
Cody Yu [Wed, 10 Jun 2020 05:03:09 +0000 (22:03 -0700)]
[Bugfix] Fix reshape (#5739)
* Fix reshape
* fix doc warning
* fix ci
* address comments
Junru Shao [Wed, 10 Jun 2020 00:32:38 +0000 (17:32 -0700)]
[Minor][Test] Clean WASM environment before build (#5759)
Matthew Brookhart [Tue, 9 Jun 2020 22:33:57 +0000 (15:33 -0700)]
Add Scatter to Topi/Relay/ONNX via hybrid script (#5619)
* I can construct scatter but not embed it in a Relay Graph
* working 1-4 dimesion scatter
* add scatter to ONNX
fix lint
* isolate tests to cpu backend
* Fix i386 test
* fix gpu tolerance
* use elemwise_shape_func for scatter
* fix incorrect rebase
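A minimal sketch of the new operator; the shapes and the int64 index dtype are assumptions for illustration:

    import numpy as np
    from tvm import relay

    data = relay.const(np.zeros((1, 4), dtype="float32"))
    indices = relay.const(np.array([[3, 2, 1, 0]], dtype="int64"))
    updates = relay.const(np.array([[1.0, 2.0, 3.0, 4.0]], dtype="float32"))
    # out[i][indices[i][j]] = updates[i][j] -> [[4.0, 3.0, 2.0, 1.0]]
    out = relay.scatter(data, indices, updates, axis=1)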
Yong Wu [Tue, 9 Jun 2020 16:48:33 +0000 (00:48 +0800)]
[TOPI][Relay][OP] support dynamic NMS(Non Maximum Suppression), symbolic begin, end, and strides for strided_slice (#4312)
* [TOPI][Relay][OP] Dynamic NMS and strided_slice
* Incorporate comments
* fix nnvm compatibility issues
* fix InferCorrectLayout
* Minor fix
* fix for fuse
* Workaround to pass batch_size into hybrid function to handle dynamic shape
* Separate rearrange
* fix lint
* fix ci, comments
* change attr to Optional<T>
* clang format
* remove empty lines
* partial ignore for end of strided_slice
* pylint
* add out_indices for gpu get_valid_counts
* change to slice_mode
* clang-format, fix comments
* fix comment
* change slice_mode to string
* fix CI
* update docstring
Co-authored-by: Yao Wang <kevinthesunwy@gmail.com>
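A hedged sketch of the new slice_mode: with slice_mode="size", end gives slice sizes rather than end indices, and -1 means "to the end of the axis".

    from tvm import relay

    x = relay.var("x", shape=(4, 8), dtype="float32")
    # Take 2 rows starting at row 0, and everything from column 2 onward.
    y = relay.strided_slice(x, begin=[0, 2], end=[2, -1], slice_mode="size")
    # y has shape (2, 6)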
xqdan [Tue, 9 Jun 2020 15:34:27 +0000 (23:34 +0800)]
[ARITH][BACKPORT-0.6] fix a min/max simplify bug (#5749)
* fix a min/max simplify bug
* fix cpplint
* turn into opposite when c1val < 0 and add more cases
* fix c1=0
Co-authored-by: xqdan <danxiaoqiang@huawei.com>