platform/upstream/tvm.git
4 years agoAdd option to specify flatbuffers location (#5425)
Michal Piszczek [Thu, 23 Apr 2020 23:25:46 +0000 (16:25 -0700)]
Add option to specify flatbuffers location (#5425)

4 years ago[MXNET]DepthToSpace & SpaceToDepth Operator (#5408)
Samuel [Thu, 23 Apr 2020 22:05:25 +0000 (03:35 +0530)]
[MXNET]DepthToSpace & SpaceToDepth Operator (#5408)

4 years ago[RFC] Pytest environment improvements (#5421)
Ramana Radhakrishnan [Thu, 23 Apr 2020 22:05:03 +0000 (23:05 +0100)]
[RFC] Pytest environment improvements (#5421)

* [RFC] Pass pytest options globally.

Having a global pytest flag is useful in many places. When building and testing tvm, I would like to be able to pass pytest options globally, whether as part of a development flow or in CI flows where one regularly measures additional things, such as the pytest coverage data I would like to experiment with across the stack.

This has been achieved with an additional setup-pytest-env.sh file in
tests/scripts, rather than duplicating the setup in every single task test
script, which I would like to avoid.

This also makes the -v option to pytest superfluous. I did consider
a pytest.ini file, but that doesn't allow passing in arbitrary
environment variables, so this seems to be the best compromise.

* Improve other use case documentation

* Rationalize pytest environment.

* Remove the setting from docker/with_same_user.
* Take the opportunity to migrate common PYTHONPATH and
TVM_PATH into the common environment setting.

* Fixup vta fsim

* Be more explicit with common PYTHONPATH

* Fix python path for task_python_vta_fsim.sh properly

* Fix nit in documentation.

4 years ago[BYOC] Use Non-Recursive Visitor/Mutator (#5410)
Cody Yu [Thu, 23 Apr 2020 20:56:43 +0000 (13:56 -0700)]
[BYOC] Use Non-Recursive Visitor/Mutator (#5410)

* Non-Recursive AnnotatedTarget and MergeAnnotation

* Non-Recursive AnnotatedRegionSet and RegionMerger

4 years ago[DOCS] Migrate some markdowns to rst, fix sphinx3 warnings (#5416)
Tianqi Chen [Thu, 23 Apr 2020 19:40:11 +0000 (12:40 -0700)]
[DOCS] Migrate some markdowns to rst, fix sphinx3 warnings (#5416)

* [DOCS] Migrate some markdowns to rst, fix sphinx3 warnings

* Add note block

4 years ago[CI] Migrate Tensorflow and Tensorflow lite in CI to 2.1.0 (#5392)
Ramana Radhakrishnan [Thu, 23 Apr 2020 19:13:42 +0000 (20:13 +0100)]
[CI] Migrate Tensorflow and Tensorflow lite in CI to 2.1.0 (#5392)

* Migrate Tensorflow and TFLite in the CI up to 1.15.2

The latest stable version of Tensorflow and Tensorflow lite
in the 1.x series is 1.15.2. The tflite frontend is receiving
support for versions of tflite > 1.14, but there is no consistent
testing.

There are already 2 failures in the source base with tf 1.15,
and I'm concerned this will only get exacerbated over time
if we don't have CI picking it up. I view this as a stepping
stone towards moving CI to TF2.x.

Issues will be raised for the test failures that I have
commented out, so that they can be fixed.

* Comment out run of qnn_mobilenet_v3_net

This is another test that fails with TFlite 1.15.2

* Skip the qnn_mobilenet_v3 test in the pytest fashion.

* Switch docker versions to support Tensorflow 2.1.0

* Fix up pytest imports and usage.

* Skip these tests currently for Tensorflow 2.1.0

4 years ago[cuDNN] Add cuDNN grouped convolutions support (#5319)
Wei Pan [Thu, 23 Apr 2020 18:59:03 +0000 (11:59 -0700)]
[cuDNN] Add cuDNN grouped convolutions support (#5319)

Signed-off-by: Wei Pan <weip@nvidia.com>
4 years ago[Frontend] Asymmetric padding of convolution support (#4803)
Zhao Wu [Thu, 23 Apr 2020 17:10:02 +0000 (01:10 +0800)]
[Frontend] Asymmetric padding of convolution support (#4803)

4 years agofix [RUNTIME][VULKAN] vkBuffer released before memory copy command sent to GPU (...
samwyi [Thu, 23 Apr 2020 15:05:42 +0000 (08:05 -0700)]
fix [RUNTIME][VULKAN] vkBuffer released before memory copy command sent to GPU (#5388) (#5418)

4 years ago[DOCS] Migrate HLS documents from md to rst (#5419)
MORITA Kazutaka [Thu, 23 Apr 2020 15:05:08 +0000 (00:05 +0900)]
[DOCS] Migrate HLS documents from md to rst (#5419)

4 years ago[RUNTIME][CONTRIB] CoreML Runtime (#5283)
MORITA Kazutaka [Thu, 23 Apr 2020 07:35:43 +0000 (16:35 +0900)]
[RUNTIME][CONTRIB] CoreML Runtime (#5283)

* [RUNTIME][CONTRIB] CoreML Runtime

* fix lint

* fix CI

* use xcrun to compile coreml model

4 years ago[TIR][REFACTOR] Remove ir_pass in favor of analysis/transform. (#5415)
Tianqi Chen [Thu, 23 Apr 2020 03:44:25 +0000 (20:44 -0700)]
[TIR][REFACTOR] Remove ir_pass in favor of analysis/transform. (#5415)

This PR removes ir_pass (old-style pass functions) in favor
of analysis/transform (new-style pass manager).
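The shape of the new-style pass manager can be sketched in a few lines of plain Python (an illustrative toy, not TVM code): passes become named objects composed by a sequential driver, replacing ad-hoc calls to free ir_pass functions.

```python
# Toy pass-manager sketch (illustrative, not the TVM implementation):
# each pass is a named module->module callable, and Sequential applies
# a list of them in order, like a pass pipeline.
class Pass:
    def __init__(self, name, fn):
        self.name = name
        self.fn = fn

    def __call__(self, module):
        return self.fn(module)


class Sequential:
    """Applies a list of passes in order, like a pass pipeline."""

    def __init__(self, passes):
        self.passes = passes

    def __call__(self, module):
        for p in self.passes:
            module = p(module)
        return module


# Usage: a trivial "module" (here just a string) flowing through passes.
pipeline = Sequential([Pass("strip", str.strip), Pass("lower", str.lower)])
assert pipeline("  Hello World ") == "hello world"
```

The driver composes rather than hard-codes the pipeline, which is what makes global options and instrumentation easy to attach in one place.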

4 years agoDon't remove() TempDirectory in __del__ after atexit hook runs. (#5414)
Andrew Reusch [Thu, 23 Apr 2020 01:30:11 +0000 (18:30 -0700)]
Don't remove() TempDirectory in __del__ after atexit hook runs. (#5414)

* Use atexit to remove TempDirectory before interpreter shutdown.
 * Can't rely on complex functions from __del__ anyway.
 * Fixes warning message on my box:
       Exception ignored in: <function TempDirectory.__del__ at 0x12be10680>
       Traceback (most recent call last):
        File ".../tvm/python/tvm/contrib/util.py", line 55, in __del__
        File ".../tvm/python/tvm/contrib/util.py", line 51, in remove
        File "/usr/local/opt/python/Frameworks/Python.framework/Versions/3.7/lib/python3.7/shutil.py", line 509, in rmtree
        AttributeError: 'NoneType' object has no attribute 'path'
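The fix described above can be sketched with the standard library alone (illustrative, not the actual tvm.contrib.util code): cleanup is registered with atexit and made idempotent, so a late __del__ during interpreter shutdown becomes a harmless no-op instead of an AttributeError.

```python
import atexit
import os
import shutil
import tempfile


class TempDirectory:
    """Temp dir removed by an atexit hook rather than relying on __del__.

    Registering the bound method keeps the instance alive until exit,
    which is acceptable for a handful of temp dirs; the important part
    is that remove() is idempotent, so running it again from __del__
    during interpreter shutdown does nothing.
    """

    def __init__(self):
        self.path = tempfile.mkdtemp()
        atexit.register(self.remove)

    def remove(self):
        if self.path is not None:
            shutil.rmtree(self.path, ignore_errors=True)
            self.path = None  # mark as cleaned so repeat calls do nothing

    def __del__(self):
        # If the atexit hook already ran, self.path is None and this
        # call is a no-op.
        self.remove()
```

A usage sketch: `t = TempDirectory()` creates the directory, and it is removed either explicitly via `t.remove()` or automatically at interpreter exit, whichever comes first.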

4 years ago[LLVM] Replace calls to Type::getVectorNumElements (#5398)
Krzysztof Parzyszek [Wed, 22 Apr 2020 22:58:42 +0000 (17:58 -0500)]
[LLVM] Replace calls to Type::getVectorNumElements (#5398)

This function was recently removed from LLVM 11. Use an alternative
way to obtain the vector element count (VectorType::getNumElements),
which works for all LLVM versions.

4 years agoCustomize SI prefix in logging (#5411)
Andrew Reusch [Wed, 22 Apr 2020 22:35:21 +0000 (15:35 -0700)]
Customize SI prefix in logging (#5411)

* Customize SI prefix in logging

* Include unit test

4 years ago[Relay] Fix memory leak when accessing NDArray (#5413)
Haichen Shen [Wed, 22 Apr 2020 21:59:54 +0000 (14:59 -0700)]
[Relay] Fix memory leak when accessing NDArray (#5413)

4 years ago[TIR] Enhance Substitute, python bindings for Substitute/PostOrderVisit/IRTransform...
Tianqi Chen [Wed, 22 Apr 2020 20:25:36 +0000 (13:25 -0700)]
[TIR] Enhance Substitute, python bindings for Substitute/PostOrderVisit/IRTransform. (#5400)

Substitute now takes a std::function to customize more replacing behaviors.

Co-authored-by: Siyuan Feng <hzfengsy@sjtu.edu.cn>
4 years agoUpdate dmlc-core to latest (#5401)
Tianqi Chen [Wed, 22 Apr 2020 20:07:35 +0000 (13:07 -0700)]
Update dmlc-core to latest (#5401)

4 years ago[Fix] Remove the duplicate PrintIR pass in Relay (#5403)
Haichen Shen [Wed, 22 Apr 2020 14:55:15 +0000 (07:55 -0700)]
[Fix] Remove the duplicate PrintIR pass in Relay (#5403)

4 years agoFactor out import of common tflite.Operator in tflite frontend. (#5355)
Ramana Radhakrishnan [Wed, 22 Apr 2020 06:09:11 +0000 (07:09 +0100)]
Factor out import of common tflite.Operator in tflite frontend. (#5355)

* Restructure imports in tflite frontend.

These python modules are needed for every tflite file parsed.
Factor out the imports of the most common ones.

Now that the import of tflite.Operator is common, the asserts can be commonized.

Removes 473 lines of duplication.

* Only restrict to tflite.Operator

4 years ago[KERAS]Minimum & AlphaDropout op support (#5380)
Samuel [Wed, 22 Apr 2020 03:57:06 +0000 (09:27 +0530)]
[KERAS]Minimum & AlphaDropout op support (#5380)

4 years ago[LLVM] Use ArrayRef<int> in calls to CreateShuffleVector (#5399)
Krzysztof Parzyszek [Wed, 22 Apr 2020 02:11:28 +0000 (21:11 -0500)]
[LLVM] Use ArrayRef<int> in calls to CreateShuffleVector (#5399)

This switch was made in LLVM 11. Previously this function was expecting
mask indices of type uint32_t. This variant is now deprecated.

4 years ago[PYTHON] Migrate VTA TIR passes to the new pass manager. (#5397)
Tianqi Chen [Tue, 21 Apr 2020 21:23:18 +0000 (14:23 -0700)]
[PYTHON] Migrate VTA TIR passes to the new pass manager. (#5397)

4 years agoTf2 test fixups (#5391)
Ramana Radhakrishnan [Tue, 21 Apr 2020 14:49:41 +0000 (15:49 +0100)]
Tf2 test fixups (#5391)

* Fix oversight in importing tf.compat.v1 as tf.

* Actually disable test for lstm in TF2.1

Since the testing framework actually uses pytest, the version
check needs to be moved.

4 years agoFix test_ir_type. (#5390)
Andrew Reusch [Tue, 21 Apr 2020 14:41:52 +0000 (07:41 -0700)]
Fix test_ir_type. (#5390)

* The void return type is not None/nullptr, it's VoidType or
   TupleType([]).

4 years ago[Topi, ARM] Disable Winograd for quantized tensors. (#5363)
Animesh Jain [Tue, 21 Apr 2020 11:06:13 +0000 (04:06 -0700)]
[Topi, ARM] Disable Winograd for quantized tensors. (#5363)

* [Topi, ARM] Disable Winograd for quantized tensors.

* Relaxing float

4 years agoAdd ability to have multiple copies of same input to onnx_inputs. (#5389)
Josh Fromm [Tue, 21 Apr 2020 10:57:13 +0000 (03:57 -0700)]
Add ability to have multiple copies of same input to onnx_inputs. (#5389)

4 years ago[ARITH] Remove legacy const pattern functions (#5387)
Tianqi Chen [Tue, 21 Apr 2020 03:53:32 +0000 (20:53 -0700)]
[ARITH] Remove legacy const pattern functions (#5387)

4 years ago[ARITH] Remove the legacy Simplify, migrate to Analyzer. (#5385)
Tianqi Chen [Tue, 21 Apr 2020 02:28:08 +0000 (19:28 -0700)]
[ARITH] Remove the legacy Simplify, migrate to Analyzer. (#5385)

The legacy Simplify/CanonicalSimplify are now thin wrappers around the Analyzer.
This PR removes these functions and migrates every place that requires
simplification to create an Analyzer explicitly.
The new API would encourage more Analyzer sharing and potentially enable
context-aware analyzer-based simplification.
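A stdlib-only toy of the API shape this describes (not the real tvm.arith.Analyzer): simplification becomes a method on an explicitly created analyzer object, so several call sites can share one instance and the context it accumulates.

```python
# Toy analyzer sketch (illustrative, not TVM code): expressions are
# nested tuples like ("+", "x", 0); the analyzer folds trivial
# identities and substitutes bindings it has accumulated across calls.
class Analyzer:
    def __init__(self):
        self.bindings = {}  # context shared across many simplify() calls

    def bind(self, var, value):
        self.bindings[var] = value

    def simplify(self, expr):
        if isinstance(expr, str):
            return self.bindings.get(expr, expr)
        if isinstance(expr, tuple) and expr[0] == "+":
            lhs = self.simplify(expr[1])
            rhs = self.simplify(expr[2])
            if rhs == 0:
                return lhs  # x + 0 -> x
            if isinstance(lhs, int) and isinstance(rhs, int):
                return lhs + rhs  # constant folding
            return ("+", lhs, rhs)
        return expr


# One shared analyzer: a binding made once helps later simplifications.
ana = Analyzer()
ana.bind("n", 8)
assert ana.simplify(("+", "n", 0)) == 8
assert ana.simplify(("+", "n", ("+", 1, 0))) == 9
```

Passing the analyzer explicitly is the "sharing" the commit message refers to: context-aware facts do not have to be rebuilt per call.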

4 years ago[REFACTOR][TE] Inline -> te/schedule/operation_inline.h (#5386)
Tianqi Chen [Tue, 21 Apr 2020 02:28:00 +0000 (19:28 -0700)]
[REFACTOR][TE] Inline -> te/schedule/operation_inline.h (#5386)

Rationale: inline is a transformation used in te to
rewrite its internal expressions. It is not a formal IRModule->IRModule transform pass.

Also removed the python test as the test is covered by stage.compute_inline.

4 years ago[Blocksparse] Pipeline for lowering dense model to sparse-dense (#5377)
Bing Xu [Mon, 20 Apr 2020 05:15:37 +0000 (22:15 -0700)]
[Blocksparse] Pipeline for lowering dense model to sparse-dense (#5377)

4 years ago[TIR][REFACTOR] RewriteForTensorCore -> te/schedule (#5379)
Tianqi Chen [Mon, 20 Apr 2020 02:57:25 +0000 (19:57 -0700)]
[TIR][REFACTOR] RewriteForTensorCore -> te/schedule (#5379)

* [TIR][REFACTOR] RewriteForTensorCore -> te/schedule

RewriteForTensorCore depends on schedule information, which makes it differ
from a typical pass (a pass should get all of its information from the input TIR).

As a result, we refactor it as a SchedulePostProc step for now.
We should revisit it later as we introduce more support for tensor core patterns in the TIR.

* Fix VTA to fit the new IR Pattern

4 years ago[PYTORCH]Unary Ops (#5378)
Samuel [Mon, 20 Apr 2020 01:48:51 +0000 (07:18 +0530)]
[PYTORCH]Unary Ops (#5378)

4 years ago[TIR][REFACTOR] Remove te::Tensor dependencies from TIR passes. (#5372)
Tianqi Chen [Sun, 19 Apr 2020 22:26:51 +0000 (15:26 -0700)]
[TIR][REFACTOR] Remove te::Tensor dependencies from TIR passes. (#5372)

* [TIR][REFACTOR] Remove te::Tensor dependencies from TIR passes.

te::Tensor is a useful object for tensor expressions, but it brings an
unnecessary reverse dependency into TIR nodes such as Provide and Realize.

This PR is a first step to remove this dependency. We will use Buffer in all the places
where te::Tensor was used. The rough correspondences are:

- Provide -> BufferStore
- Realize -> BufferRealize
- HalideCall -> BufferLoad.

After this change, we can now use an IRModule of PrimFuncs cleanly to represent TIR
at any point of the optimizations. Buffer will serve as the abstraction in the TIR data
model to represent intermediate storage and its constraints.

We still keep Realize/HalideCall and Provide as TIR nodes for now to keep the change minimal.
Right after ScheduleOps, we call SchedulePostProcToPrimFunc to canonicalize the temporary IR
generated by TE (which contains these nodes) to the TIR.

The TIR optimizations are now mostly migrated to the pass manager.
Followup PRs are needed to migrate the remaining few passes.

* Fix dev tutorial

4 years agoRemove developer facing api from frontend exports. (#5375)
shoubhik [Sun, 19 Apr 2020 04:09:32 +0000 (21:09 -0700)]
Remove developer facing api from frontend exports. (#5375)

4 years agoAdd cuda target check to dense tensorcore schedule. (#5376)
Josh Fromm [Sun, 19 Apr 2020 04:09:14 +0000 (21:09 -0700)]
Add cuda target check to dense tensorcore schedule. (#5376)

4 years ago[TIR] Fix lower_warp_memory when there are >1 warp buffers (#5368)
Tang, Shizhi [Sun, 19 Apr 2020 03:11:51 +0000 (11:11 +0800)]
[TIR] Fix lower_warp_memory when there are >1 warp buffers (#5368)

* fix recursion in lower_warp_memory

* post-order mutation

4 years ago[TIR][REFACTOR] Migrate low-level passes in tvm.lower to the Unified IR pass manager...
Tianqi Chen [Sat, 18 Apr 2020 19:33:58 +0000 (12:33 -0700)]
[TIR][REFACTOR] Migrate low-level passes in tvm.lower to the Unified IR pass manager. (#5364)

- Migrate BoundCheckers and Simplify
- Migrate RewriteUnsafeSelect and RemoveNoOp
- Migrate UnrollLoop and StorageRewrite
- Migrate InjectDoubleBuffer and InjectVirtualThread
- Migrate LoopPartition and Vectorize
- Migrate CoProcSync, LiftAttrScope, InjectCopyIntrin

We still keep the ir_pass registrations for now.
A separate PR is needed to refactor the parts before StorageFlatten.

4 years ago[RUNTIME] FastRPC interface for Hexagon runtime (#5353)
Krzysztof Parzyszek [Sat, 18 Apr 2020 11:56:17 +0000 (06:56 -0500)]
[RUNTIME] FastRPC interface for Hexagon runtime (#5353)

* [RUNTIME] FastRPC interface for Hexagon runtime

Co-authored-by: Ravishankar Kolachana <quic_rkolacha@quicinc.com>
Co-authored-by: Krzysztof Parzyszek <kparzysz@quicinc.com>
* Explain store offset in a comment in launcher

Co-authored-by: Abhikrant Sharma <quic_abhikran@quicinc.com>
Co-authored-by: Ravishankar Kolachana <quic_rkolacha@quicinc.com>
4 years agofix fuse over functions that are handled by external codegen (#5365)
Zhi [Sat, 18 Apr 2020 06:38:59 +0000 (23:38 -0700)]
fix fuse over functions that are handled by external codegen (#5365)

4 years agodocker: Drop caffe2 download progress bars (#5359)
Marcus Shawcroft [Fri, 17 Apr 2020 18:05:42 +0000 (19:05 +0100)]
docker: Drop caffe2 download progress bars (#5359)

Change-Id: Ia15c3c8f41f75423814e559f6fdb062098f19464

4 years ago[RELAY][PYTORCH]GroupNorm op support added (#5358)
Samuel [Fri, 17 Apr 2020 14:54:24 +0000 (20:24 +0530)]
[RELAY][PYTORCH]GroupNorm op support added (#5358)

4 years ago[TIR] Make lower_warp_memory support extent(threadIdx.x) < warp_size (#5307)
Tang, Shizhi [Fri, 17 Apr 2020 14:35:11 +0000 (22:35 +0800)]
[TIR] Make lower_warp_memory support extent(threadIdx.x) < warp_size (#5307)

* support extent(threadIdx.x) < warp_size in lower_warp_memory

* more docs for lower_warp_memory

4 years ago[TOPI-ARM] Do not alter layout if layout is NHWC (#5350)
Animesh Jain [Fri, 17 Apr 2020 07:10:02 +0000 (00:10 -0700)]
[TOPI-ARM] Do not alter layout if layout is NHWC (#5350)

* [TOPI-ARM] Do not alter layout if layout is NHWC

* Add test.

4 years ago[Hexagon] Add hexagon_posix.cc to TVM/RT sources in the right place (#5346)
Krzysztof Parzyszek [Fri, 17 Apr 2020 06:22:51 +0000 (01:22 -0500)]
[Hexagon] Add hexagon_posix.cc to TVM/RT sources in the right place (#5346)

This file was added before the variable holding the TVM/RT sources was
initialized. The initialization overwrote the addition.

4 years ago[PYTORCH]Tensor creation ops support (#5347)
Samuel [Fri, 17 Apr 2020 01:24:55 +0000 (06:54 +0530)]
[PYTORCH]Tensor creation ops support (#5347)

4 years ago[CRT]Compilation warnings fixed for 32bit and 64bit compilation (#5349)
Samuel [Thu, 16 Apr 2020 23:56:30 +0000 (05:26 +0530)]
[CRT]Compilation warnings fixed for 32bit and 64bit compilation (#5349)

4 years agoenable tsim and fsim for GPU build (#5352)
Zhi [Thu, 16 Apr 2020 22:06:00 +0000 (15:06 -0700)]
enable tsim and fsim for GPU build (#5352)

4 years ago[BYOC][FIX] Fix typo in "default" (#5348)
mbaret [Thu, 16 Apr 2020 20:23:01 +0000 (21:23 +0100)]
[BYOC][FIX] Fix typo in "default" (#5348)

Default annotations were incorrectly being named 'defualt',
which resulted in them not being removed in PartitionGraph.

4 years ago[RUNTIME][CRT] support DLTensor whose ndim == 0 (#5344)
windclarion [Thu, 16 Apr 2020 20:02:31 +0000 (04:02 +0800)]
[RUNTIME][CRT] support DLTensor whose ndim == 0 (#5344)

Signed-off-by: windclarion <windclarion@gmail.com>
4 years ago[RELAY][BYOC] Register pattern tables from external codegens (#5262)
mbaret [Thu, 16 Apr 2020 08:36:38 +0000 (09:36 +0100)]
[RELAY][BYOC] Register pattern tables from external codegens (#5262)

* [RELAY][BYOC] Register pattern tables from external codegens

This adds utility functions to support registering
and retrieving pattern tables used by MergeComposite for
external codegens.

Change-Id: I5be165a321440e48b15ff6aff4970e0c67496aaa

* Updated DNNL tests to use pattern table mechanism

* Removed pattern table standalone test

* Change reg to _op

4 years ago[TOPI][PYTORCH]Logical & Bitwise operator support (#5341)
Samuel [Thu, 16 Apr 2020 08:34:36 +0000 (14:04 +0530)]
[TOPI][PYTORCH]Logical & Bitwise operator support (#5341)

4 years ago[DOCS] Bring relay docs to the top-level flat view (#5343)
Tianqi Chen [Wed, 15 Apr 2020 22:32:59 +0000 (15:32 -0700)]
[DOCS] Bring relay docs to the top-level flat view (#5343)

- Changes most of the relay docs to use autosummary.
- Bring relay API docs to the top-level flat view for easier discovery
- Removed a few cases of re-exports.

4 years ago[Tutorial, QNN] Add tutorial for loading quantized PyTorch model (#5321)
masahi [Wed, 15 Apr 2020 20:46:28 +0000 (05:46 +0900)]
[Tutorial, QNN] Add tutorial for loading quantized PyTorch model (#5321)

* add pytorch tutorial code and doc stub

* add more docs

* formatting, more docs

* typo fix

* try make sphinx happy

* add performance section

* type and nit fix

* format fix

4 years ago[BYOC] Prevent duplicate outputs in subgraph Tuple (#5320)
Trevor Morris [Wed, 15 Apr 2020 20:33:31 +0000 (13:33 -0700)]
[BYOC] Prevent duplicate outputs in subgraph Tuple (#5320)

* Fix duplicate output in partitiongraph

* Add test case

* Fix test_annotated_regions with duplicate compiler_end outputs

* Revert "Fix duplicate output in partitiongraph"

This reverts commit e1f8ef3f4ca5b2aaa31ace6fa968bb50e5e4d1fa.

* Prevent duplicate outputs in Tuple in PartitionGraph

* Fix lint

* Add another test case for when regions are merged, and when TupleGetItem was duplicated

* Pull GetFunctionOutput out of branch, improve description of GetFunctionOutput

* Use std::move for GetFunctionOutput. Fix typo with testcase name

* Use tvm.transform.Sequential

4 years ago[TIR] Remove ProducerConsumer and AllocateNode::new_expr (#5333)
Tianqi Chen [Wed, 15 Apr 2020 18:11:39 +0000 (11:11 -0700)]
[TIR] Remove ProducerConsumer and AllocateNode::new_expr (#5333)

* [TIR] Remove ProducerConsumer and AllocateNode::new_expr

This PR removes two legacy IR parts in TIR that are deprecated.

ProducerConsumer node only serves as a hint markup and may no longer be
informative after extensive transformations in the pass.
If necessary, we can add related info via AttrStmt.

The new_expr field in the AllocateNode is deprecated since it can just be
replaced by a LetStmt.

- Remove dependencies of passes on ProducerConsumer.
- Remove ProducerConsumer from the IR.
- Remove the deprecated fields (new_expr, free_function) from AllocateNode.

* Fix additional testcases

4 years ago[PYTHON] Enhance with_attr API, cleanup MakeAPILegacy in testcases (#5335)
Tianqi Chen [Wed, 15 Apr 2020 18:11:28 +0000 (11:11 -0700)]
[PYTHON] Enhance with_attr API, cleanup MakeAPILegacy in testcases (#5335)

4 years ago[TOPI] Improve get_valid_count and nms performance for CUDA (#5339)
Leyuan Wang [Wed, 15 Apr 2020 15:32:50 +0000 (08:32 -0700)]
[TOPI] Improve get_valid_count and nms performance for CUDA (#5339)

* get_valid_count updated to have correct results

* speedup nms

* update nms

* revert back nms

* recover one test for get_valid_count

4 years ago[TOPI] Using x86 schedules for ARM conv2d. (#5334)
Animesh Jain [Wed, 15 Apr 2020 15:31:48 +0000 (08:31 -0700)]
[TOPI] Using x86 schedules for ARM conv2d. (#5334)

4 years ago[PYTORCH]Take, Topk op support (#5332)
Samuel [Wed, 15 Apr 2020 10:18:03 +0000 (15:48 +0530)]
[PYTORCH]Take, Topk op support (#5332)

* [PYTORCH]take, topk op support

* Ci Failure fix

4 years agoWindows Support for cpp_rpc (#4857)
jmorrill [Wed, 15 Apr 2020 08:49:15 +0000 (01:49 -0700)]
Windows Support for cpp_rpc (#4857)

* Windows Support for cpp_rpc

* Add missing patches that fix crashes under Windows

* On Windows, use python to untar vs wsl

* remove some CMakeLists.txt stuff

* more minor CMakeLists.txt changes

* Remove items from CMakeLists.txt

* Minor CMakeLists.txt changes

* More minor CMakeLists.txt changes

* Even more minor CMakeLists.txt changes

* Modify readme

4 years ago[Runtime][Relay][Cleanup] Clean up for memory pass to enable heterogenous execution...
Jared Roesch [Wed, 15 Apr 2020 00:10:00 +0000 (17:10 -0700)]
[Runtime][Relay][Cleanup] Clean up for memory pass to enable heterogenous execution support. (#5324)

* Cleanup type pack and unpack for tuples.

* Clean up the memory_pass using common helpers

* Clean up memory.cc

* Refactor pass

* Add doc strings

* Fix CPPlint

* Fix PyLint

* Fix

* Apply suggestions from code review

Co-Authored-By: Zhi <5145158+zhiics@users.noreply.github.com>
* Fix typo

Co-authored-by: Zhi <5145158+zhiics@users.noreply.github.com>
4 years ago[CI] Fix build.sh to propagate --network=host to the docker build command (#5336)
Leandro Nunes [Wed, 15 Apr 2020 00:03:43 +0000 (01:03 +0100)]
[CI] Fix build.sh to propagate --network=host to the docker build command (#5336)

* when passing --net=host to build.sh, it also needs to be passed
   as --network=host to "docker build", so that both build and run
   use the same network configuration

4 years ago[LLVM] Use llvm::FunctionCallee in IRBuilder::CreateCall with LLVM 11+ (#5338)
Krzysztof Parzyszek [Wed, 15 Apr 2020 00:03:31 +0000 (19:03 -0500)]
[LLVM] Use llvm::FunctionCallee in IRBuilder::CreateCall with LLVM 11+ (#5338)

The older variants of CreateCall have been deprecated and were recently
removed from LLVM. This caused compilation failures.

4 years ago[RELAY] Remove re-exports of tvm.transform (#5337)
Tianqi Chen [Wed, 15 Apr 2020 00:03:15 +0000 (17:03 -0700)]
[RELAY] Remove re-exports of tvm.transform (#5337)

4 years ago[TIR] Refactor MakePackedAPI to target dependent stage. (#5326)
Tianqi Chen [Tue, 14 Apr 2020 14:48:14 +0000 (07:48 -0700)]
[TIR] Refactor MakePackedAPI to target dependent stage. (#5326)

Previously MakePackedAPI was in the target-independent stage,
but it nevertheless requires the device_type information that will only be
bound at a later target-dependent stage.

The previous implementation was due to a limitation of LoweredFunc,
which cannot carry buffer_map info (so functions had to be lowered right away).
This is no longer the case after the unified IR refactor.

This PR migrates MakePackedAPI to a target-dependent stage
and removes the unnecessary BindDevice pass.

4 years ago[RELAY][PYTORCH]isNan, isinf, isfinite, ceil, clamp, round ops (#5316)
Samuel [Tue, 14 Apr 2020 09:45:02 +0000 (15:15 +0530)]
[RELAY][PYTORCH]isNan, isinf, isfinite, ceil, clamp, round ops (#5316)

* [RELAY][PYTORCH]isNan, isinf, isfinite, ceil, clamp, round ops

* Review comments

4 years ago[TE][BuildModule] Fix import in dump pass ir (#5327)
Wuwei Lin [Tue, 14 Apr 2020 06:47:57 +0000 (02:47 -0400)]
[TE][BuildModule] Fix import in dump pass ir (#5327)

4 years ago[Frontend|MXNet] SwapAxis operator support (#5246)
Mahesh Ambule [Tue, 14 Apr 2020 06:09:21 +0000 (11:39 +0530)]
[Frontend|MXNet] SwapAxis operator support (#5246)

* MXNet swap axis

* MXNet swap axis

* swap axis review comment

* swap axis review comment

4 years ago[CODEGEN][CUDA] Fix vector load (#5226)
LiangLiu [Tue, 14 Apr 2020 03:35:31 +0000 (11:35 +0800)]
[CODEGEN][CUDA] Fix vector load (#5226)

* Fix high-low bit bug in __pack_half2

* Fix vector load

* Add uint8 support for PrintVecElemLoadExpr and BroadcastNode

4 years agoadd memoized expr translator for use by backend codegen (#5325)
masahi [Tue, 14 Apr 2020 01:21:31 +0000 (10:21 +0900)]
add memoized expr translator for use by backend codegen (#5325)

4 years ago[COMMUNITY] @mbaret -> Reviewer (#5322)
Tianqi Chen [Mon, 13 Apr 2020 23:04:32 +0000 (16:04 -0700)]
[COMMUNITY] @mbaret -> Reviewer (#5322)

4 years ago[BYOC] Enhance partitioning and external codegen (#5310)
Zhi [Mon, 13 Apr 2020 21:06:02 +0000 (14:06 -0700)]
[BYOC] Enhance partitioning and external codegen (#5310)

* Remove duplicated output args

* address comment

* fix codegen c

* improve comment

* VisitExprDefault_

* deduce type

4 years ago[RUNTIME][IR] Allow non-nullable ObjectRef, introduce Optional<T>. (#5314)
Tianqi Chen [Mon, 13 Apr 2020 17:49:48 +0000 (10:49 -0700)]
[RUNTIME][IR] Allow non-nullable ObjectRef, introduce Optional<T>. (#5314)

* [RUNTIME] Allow non-nullable ObjectRef, introduce Optional<T>.

We use ObjectRef and its sub-classes extensively throughout our codebase.
Each of ObjectRef's sub-classes is nullable, which means they can hold nullptr
as their value.

While in some places we do need nullptr as an alternative value, the implicit
support for nullptr in every ObjectRef creates an additional burden: developers
have to explicitly check defined-ness in many places of the codebase.

Moreover, it is unclear from an API's signature whether
we want a nullable object or a non-null version (in many cases we want the latter).

Borrowing existing wisdom from languages like Rust, we propose to
introduce a non-nullable ObjectRef, and an Optional<T> container that
represents the nullable variant.

To keep backward compatibility, we will start by allowing most ObjectRefs to be nullable.
However, we should start to use Optional<T> as the type in places where
we know nullability is a requirement. Gradually, we will move most of the ObjectRefs
to be non-nullable and use Optional<T> in the nullable cases.

Such explicitness in typing can help reduce potential problems
in our codebase overall.

Changes in this PR:
- Introduce _type_is_nullable attribute to ObjectRef
- Introduce Optional<T>
- Change String to be non-nullable.
- Change the API of function->GetAttr to return Optional<T>
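The same idea can be illustrated with Python's type hints (an analogy, not TVM code): spelling out nullability in the signature confines None-handling to the APIs that opt in, instead of burdening every caller.

```python
from typing import Optional


# Hypothetical attribute-lookup helpers (illustrative names, not a TVM
# API): the return type states whether None is a possible result.
def get_attr(attrs: dict, name: str) -> Optional[str]:
    # Nullable variant: None signals "attribute not present", and the
    # Optional[...] annotation forces callers to consider that case.
    return attrs.get(name)


def get_attr_or(attrs: dict, name: str, default: str) -> str:
    # Non-nullable variant: the None check lives here, exactly once.
    value = attrs.get(name)
    return default if value is None else value


attrs = {"Compiler": "dnnl"}
assert get_attr(attrs, "Compiler") == "dnnl"
assert get_attr(attrs, "Primitive") is None
assert get_attr_or(attrs, "Primitive", "0") == "0"
```

This mirrors the GetAttr change above: callers of the Optional-returning form must handle the undefined case, while the defaulted form never yields a null.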

* Address review comments

* Upgrade all compiler flags to c++14

* Update as per review comment

4 years ago[Topi] Tensorcore support for Conv3D (#5284)
Josh Fromm [Mon, 13 Apr 2020 17:49:17 +0000 (10:49 -0700)]
[Topi] Tensorcore support for Conv3D (#5284)

* one weird trick.

* Added schedule knob for different workloads.

* Initial conv3d tensorcore working.

* Added conv3d tensorcore strategy.

* Added layout conversion to tensorcore friendly format for conv2d and conv3d.

* Add target name check.

* Fixed bad names and depthwise check.

* Removed duplicated attribute assignment.

4 years ago[RELAY][OP] fix typo (#5315)
windclarion [Mon, 13 Apr 2020 14:55:33 +0000 (22:55 +0800)]
[RELAY][OP] fix typo (#5315)

Signed-off-by: windclarion <windclarion@gmail.com>
4 years ago[PYTORCH]Reduce_ops support added (#5308)
Samuel [Mon, 13 Apr 2020 09:50:10 +0000 (15:20 +0530)]
[PYTORCH]Reduce_ops support added (#5308)

* [PYTORCH]Reduce_ops support added

* Review comments updated

* typo bug in qnn test

4 years ago[Torch] Support Python list, more realistic recurrent networks (#5306)
masahi [Mon, 13 Apr 2020 06:11:57 +0000 (15:11 +0900)]
[Torch] Support Python list, more realistic recurrent networks (#5306)

* use funcs from prelude, pass around convert_map

* get relay input type from user ishape

* handle tuple unpack

* experimenting with static tensor array

* use prelude concat instead of cons + rev

* minor clean up

* fix layer norm conversion bug, unwrap tensor array

* add infer shape on tensor array

* pass around prelude for now

* compile worked but runtime error

* fix tensor array wrapping

* begin list dynamic test

* is_list_dynamic first version

* finish dynamic list test

* a few fix

* use shape_of function if Any is found

* improve size conversion

* working on adding free vars to loop block

* fixed inlined inner loop issue

* clean up free var handling

* add support for tensor array concat

* adding ta concat on last axis

* fix concat, but got runtime error

* disable concat on axis -1 for now

* add lstm tests

* revert unrelated change

* fix stacked bidir test

* minor fix to test

* relax tol a bit, revert dnnl change to avoid conflict

* simplify infer type, use input tensor shape rather than concat shape

* more shape fix

4 years ago[Intrinsic] Add log1p, ldexp, atan2, hypot, nextafter, copysign (#5312)
Junru Shao [Sun, 12 Apr 2020 16:32:23 +0000 (09:32 -0700)]
[Intrinsic] Add log1p, ldexp, atan2, hypot, nextafter, copysign (#5312)

* [Intrinsic] Add log1p, ldexp, atan2, hypot, nextafter, copysign

* Lint

4 years ago[Rust][CI] Restore Rust CI (#5137)
Jared Roesch [Sun, 12 Apr 2020 16:30:47 +0000 (09:30 -0700)]
[Rust][CI] Restore Rust CI (#5137)

4 years agoRemove PrimExpr from String (#5311)
Zhi [Sun, 12 Apr 2020 16:12:23 +0000 (09:12 -0700)]
Remove PrimExpr from String (#5311)

4 years ago[Requantize] Cleanup and Optimize Lowering (#5286)
Animesh Jain [Sun, 12 Apr 2020 06:10:52 +0000 (23:10 -0700)]
[Requantize] Cleanup and Optimize Lowering (#5286)

* Adding Cast back to Int32 in FixedPointMultiply.

* Removing extra clip.

* Fix space.

* Retrigger.

* Retrigger.

4 years ago[IR][TRANSFORM] Enable CopyOnWrite for passes. (#5309)
Tianqi Chen [Sun, 12 Apr 2020 00:42:42 +0000 (17:42 -0700)]
[IR][TRANSFORM] Enable CopyOnWrite for passes. (#5309)

This PR enables copy-on-write optimizations for passes:
- Enable COW for IRModule in both TIR and relay passes.
- Enable COW for PrimFunc in TIR passes.

More thought is needed on whether/how to enable COW
for relay::Function, since some function passes depend
on the presence of the IRModule for context information,
and the std::move of the related function to nullptr
might affect that behavior.
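A minimal stdlib-only sketch of copy-on-write (illustrative, not the TVM implementation): a pass mutates its input in place only when it is the unique holder, and copies first otherwise, so sharing stays safe while the common single-owner case avoids the copy.

```python
import copy


class Ref:
    """Toy shared reference: a payload plus an explicit holder count."""

    def __init__(self, payload, holders=1):
        self.payload = payload
        self.holders = holders


def cow_mutate(ref, mutate):
    """Apply mutate() in place when uniquely held, else to a fresh copy."""
    if ref.holders == 1:
        target = ref.payload             # sole owner: mutate in place
    else:
        ref.holders -= 1                 # detach from the shared payload
        target = copy.deepcopy(ref.payload)
    mutate(target)
    return target


shared = Ref([1, 2, 3], holders=2)       # two passes hold the "module"
mine = cow_mutate(shared, lambda xs: xs.append(4))
assert shared.payload == [1, 2, 3]       # the other holder is undisturbed
assert mine == [1, 2, 3, 4]

sole = Ref([1, 2, 3], holders=1)
assert cow_mutate(sole, lambda xs: xs.append(4)) is sole.payload
```

In the real implementation the holder count is the object's reference count; the sketch makes it an explicit field only to keep the example deterministic.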

4 years ago[PYTORCH]Abs, Arange, Softplus ops (#5295)
Samuel [Sat, 11 Apr 2020 05:02:58 +0000 (10:32 +0530)]
[PYTORCH]Abs, Arange, Softplus ops (#5295)

* [PYTORCH]Abs, Arange, Softplus ops

* Review comments updated

4 years ago[LLVM] Fix generation of LLVM intrinsics (#5282)
Krzysztof Parzyszek [Sat, 11 Apr 2020 04:19:03 +0000 (23:19 -0500)]
[LLVM] Fix generation of LLVM intrinsics (#5282)

* [LLVM] Fix generation of LLVM intrinsics

The type list in the call to llvm::Intrinsic::getDeclaration is not
the intrinsic's signature; it is the list of overloaded types. Without
this fix, the updated unit test would cause the following error:

TVMError: LLVM module verification failed with the following errors:
Intrinsic name not mangled correctly for type arguments! Should be:
llvm.ctlz.i32
i32 (i32, i1)* @llvm.ctlz.i32.i1

Special handling for llvm.prefetch, sig matching for overloaded ints only

The prefetch intrinsic returns void in LLVM, while it returns i32 in TVM.
This case needs to be handled specially, because rule-based intrinsic
translation would cause invalid LLVM type to be created.

Do the signature matching only for overloaded intrinsics. It's not needed
for non-overloaded ones, so this can save a bit of compile-time.

* Include intrinsic name in the error message

* Fix number of arguments for llvm.fmuladd and llvm.pow
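
The mangling failure quoted above can be illustrated with a small Python sketch (the `mangle_intrinsic` helper is hypothetical; it only mimics how LLVM derives the mangled name from the overloaded type list, not the full signature):

```python
def mangle_intrinsic(base, overload_types):
    """Append one suffix per *overloaded* type, as LLVM name mangling does."""
    return ".".join([base] + overload_types)

# Correct: llvm.ctlz is overloaded only on its operand type, so only
# that one type contributes a suffix.
assert mangle_intrinsic("llvm.ctlz", ["i32"]) == "llvm.ctlz.i32"

# The bug described above: passing the full signature (i32, i1) as the
# type list yields a wrongly mangled name, which the verifier rejects
# ("Intrinsic name not mangled correctly for type arguments!").
assert mangle_intrinsic("llvm.ctlz", ["i32", "i1"]) == "llvm.ctlz.i32.i1"
```

The fix is to hand getDeclaration only the overloaded types, so the declared name matches what the verifier expects.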

4 years ago[BYOC] Add example of Composite + Annotate for DNNL fused op (#5272)
masahi [Sat, 11 Apr 2020 03:29:20 +0000 (12:29 +0900)]
[BYOC] Add example of Composite + Annotate for DNNL fused op (#5272)

* merge change from dev branch

* fix string issue

* bring comanic's change back

4 years ago[Frontend][TensorFlow]Improve TensorFlow Static Shape Tensor Array (#5243)
Yao Wang [Sat, 11 Apr 2020 01:43:23 +0000 (18:43 -0700)]
[Frontend][TensorFlow]Improve TensorFlow Static Shape Tensor Array (#5243)

* Support TF Frontend Static TensorArray

* Fix pylint

* Fix lint

* Move get_tensor_array_shape into prelude

* Fix lint

* Fix common

4 years ago[RUNTIME] Introduce RValue reference(move) support to TypedPackedFunc (#5271)
Tianqi Chen [Sat, 11 Apr 2020 00:07:20 +0000 (17:07 -0700)]
[RUNTIME] Introduce RValue reference(move) support to TypedPackedFunc (#5271)

* [RUNTIME] Introduce RValue reference(move) support to TypedPackedFunc

This PR introduces RValue reference support in the PackedFunc calling convention to address the above issue.
Specifically, when an argument is an r-value reference, we assign a different type code (`kObjectRValueRefArg`)
and instead pass `Object**` (the address of the Object pointer) through the values array.
The callee can choose to move out this Object pointer and set the caller's original Object pointer to nullptr.

We also add experimental move support on the Python side (marked as `_move` to indicate its experimental nature).
This enhancement will enable copy-on-write optimizations throughout the TVM stack.

* Address review comments

* fix compilation
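
The move-out behavior described in this commit can be sketched in Python (the `Slot` class and type-code constants are illustrative stand-ins for the `Object**` cell and `kObjectRValueRefArg` in the real C++ calling convention):

```python
K_OBJECT = 0
K_OBJECT_RVALUE_REF = 1  # hypothetical stand-in for kObjectRValueRefArg

class Slot:
    """Stands in for the Object** cell passed through the values array."""
    def __init__(self, obj):
        self.obj = obj

def callee(slot, type_code):
    if type_code == K_OBJECT_RVALUE_REF:
        stolen = slot.obj   # move the object out...
        slot.obj = None     # ...and null the caller's pointer
        return stolen
    return slot.obj         # plain argument: caller keeps ownership

slot = Slot({"data": [1, 2, 3]})
moved = callee(slot, K_OBJECT_RVALUE_REF)
assert slot.obj is None                 # caller's reference was consumed
assert moved == {"data": [1, 2, 3]}
```

After the move, the callee holds the sole reference, which is what makes downstream copy-on-write rewrites possible without copying.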

4 years ago[RELAY][FRONTEND][CAFFE2] add Mul and ConvTranspose operator (#5302)
Huacong Yang [Fri, 10 Apr 2020 21:46:03 +0000 (05:46 +0800)]
[RELAY][FRONTEND][CAFFE2] add Mul and ConvTranspose operator (#5302)

4 years ago[BYOC] Refine AnnotateTarget and MergeCompilerRegion Passes (#5277)
Cody Yu [Fri, 10 Apr 2020 21:32:56 +0000 (14:32 -0700)]
[BYOC] Refine AnnotateTarget and MergeCompilerRegion Passes (#5277)

* add target to region

* refactor annotate_target

* Make all unit test working

* quick fix

* enable BN, unit test failed

* Fix vm test, unit test. Refactor annotate_target a bit.

* quick fix fusion

* revert fusion change

* style fix

* Refactor merge region pass

* format

* minor fix

* Skip e2e test

* lint

* support AnnotateTarget multiple runs

* Add HasAttr and revert DNNL codegen

* address comment

Co-authored-by: Zhi Chen <chzhi@amazon.com>
4 years ago[CI] Fix the hexagon string (#5304)
Tianqi Chen [Fri, 10 Apr 2020 17:58:59 +0000 (10:58 -0700)]
[CI] Fix the hexagon string (#5304)

4 years ago[Arith] linear system and equation solver (#5171)
Yizhi Liu [Fri, 10 Apr 2020 15:11:21 +0000 (08:11 -0700)]
[Arith] linear system and equation solver (#5171)

* [arith] linear system and equation solver

Co-authored-by: Sergei Grechanik <sergei.grechanik+h@gmail.com>
* avoid constructing analyzer every time

* generate random test cases and address comments

Co-authored-by: Sergei Grechanik <sergei.grechanik@gmail.com>
* rename linear_system to int_constraints

* add comments and use random seed

* message for reporting failure with seed

* add SEqualReduce to IntConstraints; allow variables & ranges to be None

Co-authored-by: Sergei Grechanik <sergei.grechanik+h@gmail.com>
Co-authored-by: Sergei Grechanik <sergei.grechanik@gmail.com>
4 years ago[PYTORCH]Repeat, Reciprocal & Reshape Op support (#5280)
Samuel [Fri, 10 Apr 2020 15:08:56 +0000 (20:38 +0530)]
[PYTORCH]Repeat, Reciprocal & Reshape Op support (#5280)

4 years ago[FRONTEND][TENSORFLOW] Fix gather_nd indices (#5279)
MORITA Kazutaka [Fri, 10 Apr 2020 14:47:53 +0000 (23:47 +0900)]
[FRONTEND][TENSORFLOW] Fix gather_nd indices (#5279)

* [FRONTEND][TENSORFLOW] Fix gather_nd indices

* retrigger CI

4 years agoUpdate device_annotation.cc (#5291)
weiliangweiliang [Fri, 10 Apr 2020 14:47:19 +0000 (22:47 +0800)]
Update device_annotation.cc (#5291)

4 years ago[REFACTOR][IR] Move to runtime::String (#5276)
Zhi [Fri, 10 Apr 2020 14:46:23 +0000 (07:46 -0700)]
[REFACTOR][IR] Move to runtime::String (#5276)

* Use runtime::String

* move string to tvm namespace

* add const char* constructor

* implicit cast from std::string

4 years ago[NDArray] Set shape_ in NDArray::FromDLPack (#5301)
hlu1 [Fri, 10 Apr 2020 14:42:54 +0000 (07:42 -0700)]
[NDArray] Set shape_ in NDArray::FromDLPack (#5301)

4 years ago[RUNTIME] Initial implementation of Hexagon runtime support (#5252)
Krzysztof Parzyszek [Fri, 10 Apr 2020 13:47:59 +0000 (08:47 -0500)]
[RUNTIME] Initial implementation of Hexagon runtime support (#5252)

* [RUNTIME] Initial implementation of Hexagon runtime support

This is only the TVM runtime. The FastRPC libraries, simulator driver,
etc. will be provided in subsequent commits.

* Fix pylint complaints

* Fix some more pylint complaints

* Add link to the Hexagon SDK website

* Extract VTCM marker into a common variable

* Implement device->device memory copy

* Disable unsigned PDs by default

* Ensure that --hvx_length is present in sim_args if HVX is enabled

* Remove the line about clang from README.md

Apparently things work with libstdc++.

* Mention to set USE_RPC=OFF when building libtvm_runtime.so for Hexagon

* Remember to use codegen_hvx in validate_hvx_length

* Add a line about minimum version of LLVM

4 years ago[BYOC] Refine DNNL Codegen (#5288)
Cody Yu [Fri, 10 Apr 2020 07:29:39 +0000 (00:29 -0700)]
[BYOC] Refine DNNL Codegen (#5288)

* Improve DNNL

* Add bind params

* trigger ci

4 years agoAdding support for TFLite QnnSub operator. (#5230)
shoubhik [Fri, 10 Apr 2020 05:32:28 +0000 (22:32 -0700)]
Adding support for TFLite QnnSub operator. (#5230)