mbaret [Fri, 12 Jun 2020 15:43:33 +0000 (16:43 +0100)]
[BYOC][FIX] Infer types in MergeComposite (#5766)
If InferType isn't run between partitioning passes,
function calls are inserted which don't have a type.
This can result in failures for patterns which want
to check types.
This works around it simply by running InferType after
every partitioning.
Change-Id: Ie0887f0564a41eb0913bfe42a362e8effe9681b9
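A minimal sketch of the interleaving described above, assuming a Relay module `mod`, a BYOC pattern table `pattern_table`, and the "dnnl" target name purely as placeholders; this is not the patch itself.

```python
import tvm
from tvm import relay

def partition_with_types(mod, pattern_table):
    # Re-run InferType after each partitioning step so that later passes
    # (and type-checking patterns) always see fully typed function calls.
    seq = tvm.transform.Sequential([
        relay.transform.MergeComposite(pattern_table),
        relay.transform.InferType(),
        relay.transform.AnnotateTarget("dnnl"),
        relay.transform.InferType(),
        relay.transform.PartitionGraph(),
        relay.transform.InferType(),
    ])
    return seq(mod)
```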
Josh Fromm [Fri, 12 Jun 2020 15:39:24 +0000 (08:39 -0700)]
Add ignore storage_order attribute to onnx pooling parser. (#5781)
Bing Xu [Fri, 12 Jun 2020 15:35:03 +0000 (08:35 -0700)]
[cmake] update vulkan rules (#5777)
Yi-Hsiang (Sean) Lai [Fri, 12 Jun 2020 15:33:20 +0000 (11:33 -0400)]
fix calibration pass to support multiple functions (#5768)
Co-authored-by: Ubuntu <ubuntu@ip-172-31-43-142.us-east-2.compute.internal>
MORITA Kazutaka [Fri, 12 Jun 2020 15:31:31 +0000 (00:31 +0900)]
[CODEGEN][CONTRIB] CoreML codegen (#5634)
* [CODEGEN][CONTRIB] CoreML codegen
* import coremltools only when it is necessary
* fix pylint errors
* don't import contrib.coreml when using runtime lib
* skip coreml codegen test in CI
* don't register relay.ext.coremlcompiler in __init__.py
* move tvm/contrib/coreml.py to tvm/contrib/target/coreml.py
* use existing transformers for graph partitioning
* skip test only when coremltools is not available
* add check for annotation
* move _register_coreml_op to python/tvm/relay/op/contrib/coreml.py
* skip compile when xcode is unavailable
* relay.op.Op -> tvm.ir.Op
* set USE_COREML on
* refine test
notoraptor [Fri, 12 Jun 2020 15:24:56 +0000 (11:24 -0400)]
[topi][relay] Add operation gather to relay. (#5716)
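A small illustrative use of the new gather op, assuming the (data, axis, indices) signature; a sketch, not part of the PR.

```python
from tvm import relay

# out[i, j] = data[indices[i, j], j] when gathering along axis 0
data = relay.var("data", shape=(3, 4), dtype="float32")
indices = relay.var("indices", shape=(2, 4), dtype="int32")
out = relay.gather(data, axis=0, indices=indices)
func = relay.Function([data, indices], out)
```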
hlu1 [Fri, 12 Jun 2020 15:24:29 +0000 (08:24 -0700)]
[Topi] pass-by-value -> pass-by-const-reference (#5783)
Tianqi Chen [Fri, 12 Jun 2020 15:24:18 +0000 (08:24 -0700)]
[REFACTOR][API-Change] Migrate all Object construction to constructor. (#5784)
This PR migrates all the remaining object constructions to the new constructor style
that is consistent with the rest of the codebase and changes the affected files accordingly.
Other changes:
- ThreadScope::make -> ThreadScope::Create
- StorageScope::make -> StorageScope::Create
wrongtest [Fri, 12 Jun 2020 15:23:12 +0000 (23:23 +0800)]
[RUNTIME] Add compile_shared option to linux compile utility fn (#5751)
* feat: Add compile_shared option to linux compile fn
* feat: Add compile_shared option for linux compile util fn
* fix: Fix minrpc testcase to use executable compilation
* fix: Fix binutil case where create_shared was called to create an executable
* fix: Fix minrpc testcase to use executable compilation
Co-authored-by: baoxinqi <baoxinqi@4paradigm.com>
majiang31312 [Fri, 12 Jun 2020 15:22:24 +0000 (23:22 +0800)]
fix #5686: remove an overstrict assert in MakeAllreduce (#5686) (#5785)
Zhi [Fri, 12 Jun 2020 15:20:57 +0000 (08:20 -0700)]
[DOC][FIX] Fix some typos in git-clang-format.sh (#5786)
Yao Wang [Fri, 12 Jun 2020 05:54:23 +0000 (22:54 -0700)]
[Frontend][TensorFlow] Improve Control Flow and TensorArray (#5699)
* Improve TF parser control flow and tensor array
* Fix tf tensor array scatter
* Add ssd test
* Add back static ta test
* Minor fix for frontend and test_forward
* SplitRel for dynamic shape
* Fix test ssd
* Fix loop var naming issue
* Minor improve
* Fix format
* Fix clang format
* Fix tensor array in pytorch frontend
* Fix stack size issue for ssd test
* Address comments
* Fix slice size
* Fix build
* Rebase
Tianqi Chen [Thu, 11 Jun 2020 23:35:43 +0000 (16:35 -0700)]
[TIR][REFACTOR][API-Change] Migrate tir/stmt.h to use constructor. (#5778)
This PR migrates tvm/tir/stmt.h to the new constructor style that is
consistent with the rest of the codebase and changes the affected files accordingly.
Tianqi Chen [Thu, 11 Jun 2020 18:36:01 +0000 (11:36 -0700)]
[TIR][REFACTOR][API-Change] Migrate the tvm/tir/expr.h to construct style. (#5773)
This PR migrates tvm/tir/expr.h to the new constructor style that is
consistent with the rest of the codebase and changes the affected files accordingly.
Thomas Viehmann [Thu, 11 Jun 2020 16:38:46 +0000 (18:38 +0200)]
Make batch matrix multiplication on GPU tunable (#5752)
This is primarily aimed at the AMD GPU backend and done as part
of a project for AMD, but should work for all users of the GPU
schedule.
Matthew Brookhart [Thu, 11 Jun 2020 14:25:09 +0000 (07:25 -0700)]
Add ShapePattern and DataTypePattern (#5760)
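A hedged sketch of how the new patterns can be used from Python; the has_dtype/has_shape helper names follow the pattern-language docs and are treated as assumptions here.

```python
from tvm.relay.dataflow_pattern import is_op, wildcard

# match a relu whose checked type is float32 with a specific static shape
pat = is_op("nn.relu")(wildcard())
pat = pat.has_dtype("float32").has_shape((1, 64, 56, 56))
```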
Thomas Viehmann [Thu, 11 Jun 2020 12:10:27 +0000 (14:10 +0200)]
Fix gelu in PyTorch frontend, tighten numerical checks (#5763)
Previously, the PyTorch frontend approximated gelu with fastgelu.
To provide a more faithful conversion, we implement gelu instead.
We also tighten the numerical comparisons between PyTorch and
TVM-from-PyTorch to 1e-5. The object detection models need an
increased tolerance of 1e-4 to pass.
I had to throw in a few fixes for missing conversions
(probably due to working with very new PyTorch).
I must admit the GoogLeNet/NasNet test didn't run on my machine,
probably due to problems at my end.
Samuel [Thu, 11 Jun 2020 09:12:35 +0000 (14:42 +0530)]
[TOPI][RELAY][PYTORCH]Conv3d_transpose op support added (#5737)
* [TOPI][RELAY][PYTORCH]Conv3d_transpose op support added
* Test cases in topi/relay
* conv3d_transpose_ncdhw_python added
* Review comments fixed
Haichen Shen [Thu, 11 Jun 2020 03:49:57 +0000 (20:49 -0700)]
[Relay] Fix for recursive let (#5757)
* Make let processing iterative
* Try again
* Fix pretty printer overflow
* cleanup
* fix lint
* Fix text printer
Co-authored-by: Jared Roesch <roeschinc@gmail.com>
Co-authored-by: Jared Roesch <jroesch@octoml.ai>
Zijing Gu [Wed, 10 Jun 2020 17:07:36 +0000 (13:07 -0400)]
[topi] block sparse dense on cuda (#5746)
Jared Roesch [Wed, 10 Jun 2020 09:07:19 +0000 (02:07 -0700)]
[Rust] Second stage of Rust Refactor (#5527)
* Add tvm-rt crate
* Backport changes from frontend branch
* Format
* Add ASF headers
* Address self-code review
* Replace with helper
* Fix lint
* Fix
* Clean up repro debugging
* WIP
* Remove global registry to fix one memory issue
* Fix
* Format
* Format
* Update rust/tvm-rt/README.md
Co-authored-by: Jason Knight <binarybana@gmail.com>
* Format
* Duplicate TVM macros
* Split macros
* Restore old macro for old crates
* Repair macros
* Fix format
* Format
Co-authored-by: Jason Knight <binarybana@gmail.com>
Tianqi Chen [Wed, 10 Jun 2020 05:28:52 +0000 (22:28 -0700)]
[REFACTOR][TIR] Provide->ProducerStore, Realize->ProducerRealize. (#5750)
This PR finishes the final step of the DSL/TIR de-coupling by refactoring
Provide/Realize to use the DataProducer.
As in the case of ProducerLoad, ProducerStore/Realize are not supposed
to appear in a valid TIR function and are only used by high-level DSLs
as intermediate structures.
Cody Yu [Wed, 10 Jun 2020 05:03:09 +0000 (22:03 -0700)]
[Bugfix] Fix reshape (#5739)
* Fix reshape
* fix doc warning
* fix ci
* address comments
Junru Shao [Wed, 10 Jun 2020 00:32:38 +0000 (17:32 -0700)]
[Minor][Test] Clean WASM environment before build (#5759)
Matthew Brookhart [Tue, 9 Jun 2020 22:33:57 +0000 (15:33 -0700)]
Add Scatter to Topi/Relay/ONNX via hybrid script (#5619)
* I can construct scatter but not embed it in a Relay Graph
* working 1-4 dimension scatter
* add scatter to ONNX
fix lint
* isolate tests to cpu backend
* Fix i386 test
* fix gpu tolerance
* use elemwise_shape_func for scatter
* fix incorrect rebase
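A minimal sketch of calling the new scatter op from Relay, assuming the (data, indices, updates, axis) signature; not part of the PR itself.

```python
from tvm import relay

# out starts as a copy of data, then out[indices[i, j], j] = updates[i, j] for axis=0
data = relay.var("data", shape=(4, 3), dtype="float32")
indices = relay.var("indices", shape=(2, 3), dtype="int64")
updates = relay.var("updates", shape=(2, 3), dtype="float32")
out = relay.scatter(data, indices, updates, axis=0)
```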
Yong Wu [Tue, 9 Jun 2020 16:48:33 +0000 (00:48 +0800)]
[TOPI][Relay][OP] support dynamic NMS(Non Maximum Suppression), symbolic begin, end, and strides for strided_slice (#4312)
* [TOPI][Relay][OP] Dynamic NMS and strided_slice
* Incorporate comments
* fix nnvm compatibility issues
* fix InferCorrectLayout
* Minor fix
* fix for fuse
* Workaround to pass batch_size into hybrid function to handle dynamic shape
* Separate rearrange
* fix lint
* fix ci, comments
* change attr to Optional<T>
* clang format
* remove empty lines
* partial ignore for end of strided_slice
* pylint
* add out_indices for gpu get_valid_counts
* change to slice_mode
* clang-format, fix comments
* fix comment
* change slice_mode to string
* fix CI
* update docstring
Co-authored-by: Yao Wang <kevinthesunwy@gmail.com>
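A hedged sketch of the new slice_mode attribute; the keyword names follow the Python API, and the summary of "size" mode in the comment is an assumption based on this entry.

```python
from tvm import relay

x = relay.var("x", shape=(1, 16, 16, 3), dtype="float32")
# In "size" mode the third argument gives slice lengths (with -1 meaning
# "take everything to the end") instead of absolute end indices.
y = relay.strided_slice(x, begin=[0, 2, 2, 0], end=[1, 8, 8, -1], slice_mode="size")
```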
xqdan [Tue, 9 Jun 2020 15:34:27 +0000 (23:34 +0800)]
[ARITH][BACKPORT-0.6] fix a min/max simplify bug (#5749)
* fix a min/max simplify bug
* fix cpplint
* turn into opposite when c1val<0 and add more cases
* fix c1=0
Co-authored-by: xqdan <danxiaoqiang@huawei.com>
Trevor Morris [Mon, 8 Jun 2020 23:43:28 +0000 (16:43 -0700)]
Don't add cast for TF batch norm when type isn't changing (#5731)
Tianqi Chen [Sun, 7 Jun 2020 21:11:05 +0000 (14:11 -0700)]
[REFACTOR][TE][TIR] Call::Halide => ProducerLoad, DSL/TIR decouple. (#5743)
In the HalideIR's design, DSL components and IR are mixed together.
For example, Call::Halide can contain a reference to a function which is
constructed in the tensor expression language.
While this coupled design simplifies certain aspects of the DSL construction,
it prevents the TIR from evolving into a clean standalone IR:
- The additional tensor expression provided in the function is opaque to the IR
and may become obsolete as we transform them.
- The duplication of the information in the DSL tensor and IR makes it hard to
design a stand-alone text format (when there are elements shared in the tensor
expression and normal statements).
This PR aims to clearly de-couple the TIR from the high-level DSL structures (tensor expressions),
while still providing clear extension points to build DSLs on top of the TIR.
We introduce a DataProducer as a base class for high level tensor expressions objects
that produce data. We then introduce ProducerLoad to replace the Call::Halide usage,
so that the Call node can always be self contained and used for low-level calls.
The high-level tensor expression DSL can still generate a PrimExpr that contains a ProducerLoad.
These PrimExprs contain fragments of information that can be combined together to
generate a low-level TIR PrimFunc.
We also state clearly that DataProducer **should not** appear in any TIR PrimFunc.
Instead, the high-level DSL layer should lower DataProducers to Buffers and the TIR statements
that produce these buffers. We can further provide verification to validate such an invariant.
Changes:
- Introduce DataProducer to serve as a base class for Tensor in tensor expressions.
- Migrate use of Call::Halide to ProducerLoad
- Migrate the other usages of Calls.
We will also create follow-up PRs to migrate the remaining two DSL-related IR nodes (Realize/Provide)
to use the DataProducer.
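A minimal sketch of the decoupling described above, assuming the Add(ProducerLoad, const) layout of the generated expression.

```python
import tvm
from tvm import te

A = te.placeholder((8,), name="A")
B = te.compute((8,), lambda i: A[i] + 1.0, name="B")

# In the refactored IR, A[i] inside B's body is a tir.ProducerLoad (A is a
# DataProducer); lowering replaces it with buffer accesses in the final PrimFunc.
body = B.op.body[0]
print(isinstance(body.a, tvm.tir.ProducerLoad))  # expected to print True
```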
Zhi [Sun, 7 Jun 2020 19:41:23 +0000 (12:41 -0700)]
sequential cpp test (#5745)
Junru Shao [Sat, 6 Jun 2020 22:23:20 +0000 (15:23 -0700)]
Add some docs on downstream consistency (#5742)
https://github.com/apache/incubator-tvm/pull/5730#issuecomment-639567636
Tianqi Chen [Sat, 6 Jun 2020 20:23:31 +0000 (13:23 -0700)]
[REFACTOR][ARITH] Remove legacy compute_expr.h (#5738)
Replaces most of the ComputeReduce usage with foldl.
handar423 [Sat, 6 Jun 2020 03:40:17 +0000 (11:40 +0800)]
fix small bug about dense_grad (#5695)
abergeron [Fri, 5 Jun 2020 21:17:41 +0000 (17:17 -0400)]
Fix the values for test_fmod since it fails way too often otherwise (#5723)
Tianqi Chen [Fri, 5 Jun 2020 19:13:17 +0000 (12:13 -0700)]
[TEST] Fix flaky topi/tests/python/test_topi_pooling.py:test_adaptive_pool (#5736)
Cody Yu [Fri, 5 Jun 2020 17:27:57 +0000 (10:27 -0700)]
Fix reshape usage in ARM Winograd (#5732)
akosik-anyvision [Fri, 5 Jun 2020 15:39:20 +0000 (11:39 -0400)]
Change 'delete's in Relay VM Instruction dtor to 'delete[]'s (#5735)
Thomas Viehmann [Fri, 5 Jun 2020 09:49:37 +0000 (11:49 +0200)]
ROCm: Add warp shuffles and enable reductions (#5727)
Thank you @masahi and @wpan11nv for the feedback
Samuel [Fri, 5 Jun 2020 05:48:44 +0000 (11:18 +0530)]
[ONNX]MaxRoiPool, Mod & Xor op support added (#5729)
Tianqi Chen [Thu, 4 Jun 2020 22:04:17 +0000 (15:04 -0700)]
[REFACTOR] Separate ArgTypeCode from DLDataTypeCode (#5730)
We used a single enum (TypeCode) to represent both ArgTypeCode and DLDataTypeCode.
However, as we start to support more data types, it is clear that the argument
type code (in the FFI convention) and the data type code need to evolve separately,
so that we can add first-class support for new data types without changing the FFI ABI.
This PR makes the distinction clear and refactors the code to separate the two.
- [PY] Separate ArgTypeCode from DataTypeCode
- [WEB] Separate ArgTypeCode from DataTypeCode
- [JAVA] Separate ArgTypeCode from DataTypeCode
Dhruva Ray [Thu, 4 Jun 2020 17:55:38 +0000 (23:25 +0530)]
[Frontend][TFLite] Add parser support for shape and range (#5329)
* [Relay][Frontend][TFLite] Add parser support for shape and range
Signed-off-by: Dhruva Ray <dhruvaray@gmail.com>
* Incorporated review comments and used new functions
Signed-off-by: Dhruva Ray <dhruvaray@gmail.com>
* Few cosmetic changes
Signed-off-by: Dhruva Ray <dhruvaray@gmail.com>
* Removed an extra line added by rebase...
Signed-off-by: Dhruva Ray <dhruvaray@gmail.com>
Dhruva Ray [Thu, 4 Jun 2020 17:54:56 +0000 (23:24 +0530)]
[TOPI,RELAY][TFLITE] Sparse to dense operator (#5447)
* [Relay][Frontend][TFLite] Add parser support for shape and range
Signed-off-by: Dhruva Ray <dhruvaray@gmail.com>
* [TOPI,RELAY][TFLITE] Sparse to dense operator
Signed-off-by: Dhruva Ray <dhruvaray@gmail.com>
* use param name in documentation
Signed-off-by: Dhruva Ray <dhruvaray@gmail.com>
* sphinx doc errors fixed
Signed-off-by: Dhruva Ray <dhruvaray@gmail.com>
* incorporated review comments
Signed-off-by: Dhruva Ray <dhruvaray@gmail.com>
* Missing a blank line...
Signed-off-by: Dhruva Ray <dhruvaray@gmail.com>
* use get_tensor_expr
Signed-off-by: Dhruva Ray <dhruvaray@gmail.com>
* Accidentally removed this function in the rebase...
Signed-off-by: Dhruva Ray <dhruvaray@gmail.com>
* support default value for default_value
Signed-off-by: Dhruva Ray <dhruvaray@gmail.com>
* clang format fixes
Signed-off-by: Dhruva Ray <dhruvaray@gmail.com>
* topi pylint fixes
Signed-off-by: Dhruva Ray <dhruvaray@gmail.com>
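A small usage sketch of the new operator, assuming the (sparse_indices, output_shape, sparse_values, default_value) argument order.

```python
from tvm import relay

indices = relay.const([[0, 0], [1, 2]], dtype="int32")  # positions to fill
values = relay.const([3.0, 4.0], dtype="float32")       # values at those positions
out = relay.sparse_to_dense(indices, (2, 3), values, default_value=relay.const(0.0))
# dense result: [[3, 0, 0], [0, 0, 4]]
```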
Thomas Viehmann [Thu, 4 Jun 2020 15:46:55 +0000 (17:46 +0200)]
codegen llvm: move nvptx-specific intrinsic handling into codegen_nvptx (#5726)
See discussion in #5600.
I'm also throwing in a pointer lifetime fix for the context held by
NVPTX because otherwise topi/tests/python/test_topi_softmax.py
would segfault for me. With the test, I can also run resnet-18 on
the nvptx target in gpu_imagenet_bench.py.
Junru Shao [Thu, 4 Jun 2020 15:29:03 +0000 (08:29 -0700)]
Fix runtime::String backward compatibility in JSON (#5725)
Wuwei Lin [Thu, 4 Jun 2020 06:47:04 +0000 (02:47 -0400)]
[AutoTVM, Relay] Clear compile engine after task extraction (#5724)
Deepak [Thu, 4 Jun 2020 04:33:42 +0000 (10:03 +0530)]
[TENSORFLOW]StatefulPartitionedCall/PartitionedCall Ops support added (#5617)
* Implemented functionInvocation Unit Test for StatefulPartitionedCall operator (working) and initial changes for placeholder (not working as of now)
* Placeholder exercises with tvm
* placeholder interim
* SPOP Test cases structure
* New test cases for spop
* miscellaneous test cases for spop
* Placeholder samples..working with shapes explicitly passed
* Variables test case. Works with the same fix of shape_dict
* SPOP Positive test cases first iteration
* support output tensors as function args, multiple functions
* Corrected Indentation
* filewriter is only for debug purposes
* support variables in function args
* First working iteration of positive spop test cases
* Removed commented code, simplified code
* Code Reorganization- First working iteration of positive spop test cases
* corrected variable name after refactor
* Code Reorganization- First working iteration of positive spop test cases
* move code inside mapped operator function
* Removed extra line
* support variables in function args
* Removed commented code, simplified code
* move code inside mapped operator function
* Code Reorganization- First working iteration of positive spop test cases
* Code Reorganization- First working iteration of positive spop test cases
* Function invocation more test cases
* Simplified & Merged different Function Invocation Test cases
* support invocation of nested callables
no need to explicitly handle partitioned and
statefulPartitioned condition in convert_operator function
* Simplified and Uniform testcases
* removed duplicate and renamed testcase
* Negative scenario added for testing operator statefulness. The only exceptions among stateful operators are Partitioned & StatefulPartitionedOp, which can execute even stateless operators within them
* Miscellaneous reorganization changes for spop scenarios
* Corrected import of tensorflow modules safely using try except and other code reorganization
* Negative scenario for resource variables handled
* Documentation update for code
* SPOP change in function handling
* handle nested subgraph
* refactor
* get op def compatible with tf 1x & 2x
* Fixed liniting issues
* added doctsring and few nits
* Merged changes for positive test cases and negative test cases
* Moved StatefulPartitionedCall test case to the end of the TC list
* Fixed some typos and semantics
* dmlc-core
* dmlc-core
* fixes
* Addressing Review comments in the PR for SPOP support
* Fixed pylint errors
* Corrected tensorflow import syntax
* Placed the op_def_registry module import outside of for loop
* Removed the new stateful operators list and combined these operators with the missing operators to display as a single list. Also removed throwing a separate exception for stateful ops
Co-authored-by: Prashant Sail <psail4444@gmail.com>
Co-authored-by: maheshambule <mahesh_ambule@persistent.com>
Samuel [Thu, 4 Jun 2020 02:06:21 +0000 (07:36 +0530)]
[ONNX]ReduceL1, ReduceL2, ReduceSumSquare, ReduceLogSum ops added (#5721)
abergeron [Wed, 3 Jun 2020 21:24:27 +0000 (17:24 -0400)]
Fix generating types like float44 and float88 (#5722)
Junru Shao [Wed, 3 Jun 2020 20:34:44 +0000 (13:34 -0700)]
[Object] Restore the StrMap behavior in JSON/SHash/SEqual (#5719)
Junru Shao [Wed, 3 Jun 2020 15:01:55 +0000 (08:01 -0700)]
[Object][FFI] Introduce runtime::String::CanConvertFrom (#5718)
* [Object][FFI] Introduce runtime::String::CanConvertFrom
* Update container.h
lixiaoquan [Wed, 3 Jun 2020 15:01:10 +0000 (23:01 +0800)]
Avoid downloading when TOPHUB_LOCATION is NONE (#5720)
Samuel [Wed, 3 Jun 2020 08:29:38 +0000 (13:59 +0530)]
[MXNET]Softmin, trunc op support added (#5715)
Junru Shao [Tue, 2 Jun 2020 23:33:09 +0000 (16:33 -0700)]
[Object] Unify StrMapNode and MapNode (#5687)
* Pass cpptest and py unittest
* fix graph runtime
* right fix
* fix a bug that runtime::String's operator < is actually compare by address
* Update container.py
* Renaming
* Address comments
* lint
* Replace ObjectHash in object.py
tobe [Tue, 2 Jun 2020 16:22:42 +0000 (00:22 +0800)]
Rename tvm_dso_op to libtvm_dso_op (#5714)
Liangfu Chen [Tue, 2 Jun 2020 16:22:14 +0000 (00:22 +0800)]
[BUGFIX][CRT] Fix Compilation Error in CRT (#5713)
Tianqi Chen [Tue, 2 Jun 2020 03:51:49 +0000 (20:51 -0700)]
Remove opengl runtime and cmake (#5712)
Tianqi Chen [Tue, 2 Jun 2020 00:53:33 +0000 (17:53 -0700)]
Remove deprecated opengl files (#5711)
Samuel [Tue, 2 Jun 2020 00:15:41 +0000 (05:45 +0530)]
[PYTORCH]ReplicationPad support added (#5708)
Cody Yu [Tue, 2 Jun 2020 00:14:33 +0000 (17:14 -0700)]
[PatternLang] Simplify Pattern API Implementations (#5703)
* Add syntactic sugar; include pattern in API docs
* fix doc warnings
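A hedged illustration of the "syntactic sugar": arithmetic operators on patterns are assumed to expand to the corresponding is_op(...) forms.

```python
from tvm.relay.dataflow_pattern import is_op, wildcard

x, y = wildcard(), wildcard()
explicit = is_op("add")(x, y)  # fully spelled-out form
sugared = x + y                # shorthand assumed equivalent to the line above
```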
Tianqi Chen [Mon, 1 Jun 2020 22:35:06 +0000 (15:35 -0700)]
[REFACTOR][PY] relay.op.Op -> tvm.ir.Op (#5705)
* [REFACTOR][PY] relay.op.Op -> tvm.ir.Op
* Improve the error check
Rand Xie [Mon, 1 Jun 2020 15:39:36 +0000 (08:39 -0700)]
fix typo: anchor windoes should be anchor windows (#5706)
ANSHUMAN TRIPATHY [Mon, 1 Jun 2020 15:38:26 +0000 (21:08 +0530)]
[Arith] ExtendedEuclidean merge impl to int_operator (#5625)
Neo Chien [Sun, 31 May 2020 19:32:25 +0000 (03:32 +0800)]
[AutoTVM][TOPI] Fix bifrost spatial packing conv2d auto tune (#5684)
* [AutoTVM][TOPI] Fix bifrost spatial packing conv2d auto tune
* [AutoTVM][TOPI] Putting placeholder replacement in compute
* Fix winograd kernel replacement
* Fix sanity check: Line too long
Samuel [Sat, 30 May 2020 21:25:36 +0000 (02:55 +0530)]
[PYTORCH]floor_divide support for squeezenet (#5702)
https://github.com/apache/incubator-tvm/issues/5133#issuecomment-636330705
Zhi [Sat, 30 May 2020 04:59:35 +0000 (21:59 -0700)]
[REFACTOR][RELAY] Replace build_config with PassContext (#5698)
Cody Yu [Sat, 30 May 2020 02:11:24 +0000 (19:11 -0700)]
[BYOC] Support Tuple Output in C/DNNL Codegen (#5701)
* Support tuple output runtime
* fix unit test
Balint Cristian [Sat, 30 May 2020 01:10:22 +0000 (04:10 +0300)]
[ONNX] Skip ADD inside Gemm op when vector is zero (#5697)
Matthew Brookhart [Sat, 30 May 2020 01:07:07 +0000 (18:07 -0700)]
[PatternLang]Conditionally Embedding Constants in Partitioned Functions (#5693)
* Embed constants in the partition function if the pattern explicitly requests constants
fix rst
fix pylint
* improve comments based on Cody's feedback
notoraptor [Fri, 29 May 2020 23:35:03 +0000 (19:35 -0400)]
In memory_plan, check if value is not None, instead of just checking value as boolean. (#5700)
Samuel [Fri, 29 May 2020 21:29:43 +0000 (02:59 +0530)]
[ONNX]LpPool Support added (#5696)
tobe [Fri, 29 May 2020 20:58:34 +0000 (04:58 +0800)]
Support more dtypes for TVMDSOOp (#5694)
Tianqi Chen [Fri, 29 May 2020 16:36:19 +0000 (09:36 -0700)]
[COMMUNITY] @masahi -> PPMC (#5691)
Tianqi Chen [Fri, 29 May 2020 16:35:47 +0000 (09:35 -0700)]
@zhiics -> PPMC (#5692)
Zhi [Fri, 29 May 2020 14:52:03 +0000 (07:52 -0700)]
[REFACTOR][RELAY] move fallback_device to config (#5690)
lhutton1 [Fri, 29 May 2020 14:51:30 +0000 (15:51 +0100)]
[RELAY] Fix segfault in pretty print when ObjectRef is null (#5681)
* [RELAY] Fix segfault in pretty print when ObjectRef is null
Encountered when pretty printing a module with a function attribute equal to NullValue<ObjectRef>().
Change-Id: I2e7b304859f03038730ba9c3b9db41ebd3e1fbb5
* Add test case
Change-Id: I579b20da3f5d49054823392be80aaf78a055f596
lixiaoquan [Fri, 29 May 2020 10:37:05 +0000 (18:37 +0800)]
[Relay] Fix dataflow_pattern.rewrite() hang if Match in IR (#5680)
rewrite() quits only when the graph stops changing, but ExprMutator
always creates a new Match node. This patch fixes that.
Samuel [Fri, 29 May 2020 07:16:49 +0000 (12:46 +0530)]
[PYTORCH]Minor bug fixes (#5683)
* [PYTORCH]Minor bug fixes
* Review comment fix, testcase added
* Added testcase for bert model
Cody Yu [Thu, 28 May 2020 23:48:07 +0000 (16:48 -0700)]
[PatternLang] Add ConstantPattern (#5689)
* Add ConstantPattern
* update doc
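A one-line sketch of the new ConstantPattern via its Python helper, matching a conv2d whose weight is a literal relay.Constant.

```python
from tvm.relay.dataflow_pattern import is_constant, is_op, wildcard

# matches nn.conv2d only when the weight argument is a bound constant
pat = is_op("nn.conv2d")(wildcard(), is_constant())
```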
Neo Chien [Thu, 28 May 2020 08:56:06 +0000 (16:56 +0800)]
[TIR][REFACTOR] std::string -> String Migration in TIR nodes (#5596)
* [TIR][REFACTOR] std::string -> String Migration for Var node and SizeVar Node
* update json_compact.py
Samuel [Thu, 28 May 2020 03:10:58 +0000 (08:40 +0530)]
[TFLITE]Quantize & Dequantize op (#5394)
* [TFLITE]Quantize & Dequantize op
* Testcases added
* Review comment fixed
Cody Yu [Thu, 28 May 2020 03:07:32 +0000 (20:07 -0700)]
[DOC] Improve Pattern Language Docs (#5676)
* [DOC] Improve Pattern Language Docs
* address comments
* address comments
Junru Shao [Wed, 27 May 2020 20:50:56 +0000 (13:50 -0700)]
[Bugfix] Fix Python debugger segfaults with TVM built with LLVM (#5685)
* Import readline before loading libtvm
* make lint happy
tobe [Wed, 27 May 2020 15:59:02 +0000 (23:59 +0800)]
Fix the shift column for scale_shift_nchw and scale_shift_nhwc in C topi (#5679)
notoraptor [Wed, 27 May 2020 01:15:18 +0000 (21:15 -0400)]
Call previous excepthook in tvm_excepthook. (#5675)
* Call previous excepthook in tvm_excepthook.
* Rename prev_excepthook.
* Create a tvm_wrap_excepthook to wrap a given excepthook with TVM's custom excepthook logic,
and apply it to the system's previous excepthook.
* Add docstring.
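A generic illustration of the chaining idea (not TVM's actual code); the _tvm_specific_handling helper below is a placeholder.

```python
import sys

_prev_excepthook = sys.excepthook  # remember whatever was installed before

def _tvm_specific_handling(exctype, value, trbk):
    pass  # placeholder for TVM's own cleanup work

def tvm_wrap_excepthook(exctype, value, trbk):
    _tvm_specific_handling(exctype, value, trbk)
    _prev_excepthook(exctype, value, trbk)  # then defer to the previous hook

sys.excepthook = tvm_wrap_excepthook
```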
Matthew Brookhart [Wed, 27 May 2020 01:14:58 +0000 (18:14 -0700)]
add a testcase for #5674 (#5677)
Cody Yu [Tue, 26 May 2020 22:04:01 +0000 (15:04 -0700)]
[BYOC] Pattern Language MergeComposite (#5656)
* Pattern Language MergeComposite
* fix DNNL pattern
* Use builtin binary operator syntax for demo
* Improve unit test
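A hedged sketch of a pattern-table entry written in the dataflow pattern language, assuming the (name, pattern) tuple format and using "dnnl.conv2d_bias" as a made-up composite name.

```python
from tvm import relay
from tvm.relay.dataflow_pattern import is_op, wildcard

def conv2d_bias_pattern():
    conv = is_op("nn.conv2d")(wildcard(), wildcard())
    return is_op("nn.bias_add")(conv, wildcard())

pattern_table = [("dnnl.conv2d_bias", conv2d_bias_pattern())]

def merge(mod):
    # fuse every match into a composite function carrying the pattern name
    return relay.transform.MergeComposite(pattern_table)(mod)
```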
Matthew Brookhart [Tue, 26 May 2020 20:16:13 +0000 (13:16 -0700)]
add a check for null function attributes (#5674)
Andrew Reusch [Tue, 26 May 2020 18:30:19 +0000 (11:30 -0700)]
add tvm.micro pydoc to sphinx (#5661)
* add tvm.micro pydoc to sphinx
* making build pass and addressing tqchen comments
lixiaoquan [Tue, 26 May 2020 18:29:29 +0000 (02:29 +0800)]
[TF] Support TupleWrapper as direct ancestor of control flow ops (#5639)
Matthew Brookhart [Tue, 26 May 2020 17:26:31 +0000 (10:26 -0700)]
[POC][PatternLang]Remove constants from partitioned functions (#5663)
* remove constants from partitioned functions
* remove print statements
Neo Chien [Tue, 26 May 2020 17:25:50 +0000 (01:25 +0800)]
[AutoTVM][TOPI] AutoTVM incorrect measurement (#5511)
* [AutoTVM][TOPI] AutoTVM incorrect measurement
* create new placeholder with converted layout
* update _schedule_winograd
Mei Ye [Tue, 26 May 2020 16:30:38 +0000 (09:30 -0700)]
enable amd_apu device on vulkan target (#5659)
Zhao Wu [Tue, 26 May 2020 15:36:44 +0000 (23:36 +0800)]
[C++ RPC] Fix C++ RPC build problem on Linux (#5671)
Zhao Wu [Tue, 26 May 2020 15:15:44 +0000 (23:15 +0800)]
[Doc] Misc doc fix (#5672)
Tianqi Chen [Tue, 26 May 2020 03:50:00 +0000 (20:50 -0700)]
[REFACTOR][TIR][API-Change] Migrate BuildConfig to PassContext. (#5668)
* [REFACTOR][TIR] Migrate BuildConfig to PassContext.
This PR migrates the TIR configurations from BuildConfig to the
PassContext used by the unified IR.
Moving forward, PassContext will be the unified way to configure passes in the TVM stack.
Changes
- Refactored TVM_PASS_REGISTER_CONFIG_OPTION to take in the reference type.
- Removed BuildConfig.
- Migrated the passes to use PassContext.
* Update include/tvm/ir/attrs.h
Co-authored-by: Zhi <5145158+zhiics@users.noreply.github.com>
Co-authored-by: Zhi <5145158+zhiics@users.noreply.github.com>
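A minimal sketch of configuring TIR behavior through PassContext after this change; the "tir.disable_vectorize" key is used as an illustrative config option.

```python
import tvm
from tvm import te

A = te.placeholder((1024,), name="A")
B = te.compute((1024,), lambda i: A[i] * 2.0, name="B")
s = te.create_schedule(B.op)

# Options that used to live on BuildConfig are now passed via PassContext.
with tvm.transform.PassContext(opt_level=3,
                               config={"tir.disable_vectorize": True}):
    lib = tvm.build(s, [A, B], target="llvm")
```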
Tianqi Chen [Tue, 26 May 2020 03:19:15 +0000 (20:19 -0700)]
[PYTHON] Add buffer name when creating tensor bindings (#5670)
Yao Wang [Tue, 26 May 2020 01:09:44 +0000 (18:09 -0700)]
[Relay][Op]Support symbolic TopK, Ones, Zeros and Full (#5459)
* Support symbolic TopK, Ones, Zeros and Full
* Fix pylint
* Add docstring for topk shape func
* Fix grad
* Fix lazy_gradient_init
* Fix parser
* Fix print ir text
* Fix lint
* Improve pattern_util
* Fix topk
* Fix build
* Use Optional for attribute
* Fix clang-format
* Minor fix
* Fix pylint
* Fix build warning
* Fix parser
* Move ToScalar
* Fix lint
* Fix lint
* Make topk shape func data independent when k is constant.
* Fix lint
* Minor fix
Wei Pan [Mon, 25 May 2020 16:44:57 +0000 (09:44 -0700)]
[TOPI] Improve CUDA softmax scheduling (#5600)
- Do not use multiple kernels
- Schedule with warp reductions
- Fixed a bug on the lower warp memory pass
- Fixed warp shuffle intrinsics for the nvptx backend.
Signed-off-by: Wei Pan <weip@nvidia.com>
Shizhi Tang [Mon, 25 May 2020 16:39:33 +0000 (00:39 +0800)]
handle likely in IRMutatorWithAnalyzer (#5665)
Tianqi Chen [Sat, 23 May 2020 15:38:00 +0000 (08:38 -0700)]
[TIR][BUILD] Remove buffer params from pass config. (#5652)
Buffer configurations can be passed during construction
and do not need to be part of the build config.
This is a refactor step to simplify the BuildConfig for the PassContext migration.