platform/upstream/tvm.git
4 years ago  Improve docker/bash.sh to handle git worktrees (#5970)
Andrew Reusch [Wed, 1 Jul 2020 15:03:59 +0000 (08:03 -0700)]
Improve docker/bash.sh to handle git worktrees (#5970)

* improve error code when git ls-files fails

* fix docker/bash to handle git worktrees

4 years ago  Print right number of parentheses for LoadNode (#5965)
Krzysztof Parzyszek [Tue, 30 Jun 2020 22:25:33 +0000 (17:25 -0500)]
Print right number of parentheses for LoadNode (#5965)

Stop printing the unnecessary ')' after each LoadNode that didn't
have a matching '('.

4 years ago  Raise an exception when extern function does not return Stmt (#5964)
Krzysztof Parzyszek [Tue, 30 Jun 2020 17:58:58 +0000 (12:58 -0500)]
Raise an exception when extern function does not return Stmt (#5964)

The function passed to tvm.te.extern should return either a PrimExpr or
a Stmt; however, there is no check that it actually does so. If it does
not, the result may be a segmentation fault later on. Catch this case
early so that an informative message can be shown.
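
A minimal sketch of the contract being checked (the 16-element
computation is an illustrative assumption, not code from the PR):

    import tvm
    from tvm import te

    A = te.placeholder((16,), name="A")

    def extern_body(ins, outs):
        ib = tvm.tir.ir_builder.create()
        a = ib.buffer_ptr(ins[0])
        b = ib.buffer_ptr(outs[0])
        with ib.for_range(0, 16, name="i") as i:
            b[i] = a[i] + 1.0
        # must return a Stmt (or PrimExpr); returning None now raises early
        return ib.get()

    B = te.extern((16,), [A], extern_body, name="B", dtype="float32")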

4 years ago  Fix small typo in nn.conv2d_gemm_weight_transform (#5925)
Giuseppe Rossini [Tue, 30 Jun 2020 15:49:46 +0000 (16:49 +0100)]
Fix small typo in nn.conv2d_gemm_weight_transform (#5925)

* Fix small typo in nn.conv2d_gemm_weight_transform

Change-Id: I7844d898ebf82592f78f478982262ef95f83cc3e

* Add TOPI conv2d_gemm unit tests

Change-Id: I9ed82a68acffcf0dd9720781f8be4aada9d8e6e4

4 years ago  Make first order gradient graphs more efficient (#5959)
Thomas Viehmann [Tue, 30 Jun 2020 15:48:44 +0000 (17:48 +0200)]
Make first order gradient graphs more efficient (#5959)

Previously, nodes were visited once for each use, and a derivative was
computed each time; the contributions were only summed up at the leaves.
This patch changes this to accumulate contributions at any node that is
used several times.
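
A tiny self-contained illustration of the accumulation idea in plain
Python (not the actual pass):

    from collections import defaultdict

    # toy graph: y = x * x, so x is used twice by the same multiply node
    def grad_square(x, dy=1.0):
        contrib = defaultdict(float)
        contrib["y"] = dy
        # both uses of x send a contribution to x, where they are summed
        # once, instead of the sum happening only at the leaves
        contrib["x"] += contrib["y"] * x  # d(a*b)/da with b = x
        contrib["x"] += contrib["y"] * x  # d(a*b)/db with a = x
        return contrib["x"]

    print(grad_square(3.0))  # 6.0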

4 years ago  Fix the meaning of conv{1,2}d_transpose output_padding parameter. (#5758)
abergeron [Tue, 30 Jun 2020 07:05:43 +0000 (03:05 -0400)]
Fix the meaning of conv{1,2}d_transpose output_padding parameter. (#5758)

* Add output_padding to generic

* Add output_padding to the reference impl

* Add output_padding to arm_cpu

* Add output_padding to the test

* Add output_padding for cuda

* Add output_padding for x86

* Make use of the new output_padding argument in Relay

* Adjust conv2d_transpose Relay test

* Fix lint errors

* Fix the VTA declaration of conv2d_transpose

* support for output padding in conv2d transpose

* some output padding will break IR pass

* Fix new conv2d_transpose test

* Update tophub

* Fix conv1d output_padding too.

* Fix the conv1d_transpose reference function.

* Fix the cuda impl

* fix the topi test for conv1d

* format

* Add tests for conv1d_transpose output_padding and some check that the values are valid.

* Add check in the implementations

* Add checks to the implementations of conv2d

* Make use of the output_padding argument from topi in relay.

* Fix relay tests asking for invalid output_padding

* Fix line length

* Fix vta tests

* Update tophub references

* Trigger CI

Co-authored-by: Thierry Moreau <tmoreau@octoml.ai>
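
For reference, a sketch of the usual conv_transpose output-shape rule
that output_padding feeds into (an illustrative assumption, not code
from this PR; the actual checks live in the implementations):

    def conv_transpose_out_size(in_size, kernel, stride, padding, output_padding):
        # the validity checks enforce that output_padding stays below stride
        assert 0 <= output_padding < stride
        return (in_size - 1) * stride - 2 * padding + kernel + output_padding

    print(conv_transpose_out_size(8, kernel=3, stride=2, padding=1, output_padding=1))  # 16
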
4 years ago  Amendments for gradients (#5941)
Thomas Viehmann [Tue, 30 Jun 2020 03:35:36 +0000 (05:35 +0200)]
Amendments for gradients (#5941)

* Amendments for gradients

- We fix the dtype handling of consts in generated gradients.
- We add a collapse_sum_to instruction mirroring collapse_sum_like.
  While collapse_sum_like is the first choice for general definitions
  (potentially dynamic shapes), collapse_sum_to greatly simplifies
  the graph once shapes are static.
  (This simplification is not part of the PR.)
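
  A small usage sketch contrasting the two ops:

      from tvm import relay

      x = relay.var("x", shape=(4, 8))
      y = relay.var("y", shape=(1, 8))
      g_like = relay.collapse_sum_like(x, y)   # target shape carried by a second tensor
      g_to = relay.collapse_sum_to(x, (1, 8))  # static target shape, simpler graph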

* Fix Broadcast rel description in comment

Thank you, @MarisaKirisame

4 years ago  [RELAY][GRAD] handle Tuple/TupleGetItem in first order gradient (#5946)
Thomas Viehmann [Tue, 30 Jun 2020 03:34:20 +0000 (05:34 +0200)]
[RELAY][GRAD] handle Tuple/TupleGetItem in first order gradient (#5946)

* handle Tuple/TupleGetItem in first order gradient

* Unify MultiOnes/MultiZeros.

4 years ago  Fix some typo errors in license header (#5956)
Leon Wang [Tue, 30 Jun 2020 03:28:30 +0000 (11:28 +0800)]
Fix some typo errors in license header (#5956)

Signed-off-by: leonwanghui <wanghui71leon@gmail.com>
4 years ago  [OpenCL] Fix OpenCL get_valid_counts errors due to intrinsic atomic_add (#5857)
Trevor Morris [Tue, 30 Jun 2020 00:55:22 +0000 (17:55 -0700)]
[OpenCL] Fix OpenCL get_valid_counts errors due to intrinsic atomic_add (#5857)

* [OpenCL] Fix atomic add used by get_valid_counts

* Rename l -> load, add flag to enable atomics

* OpenCL doesn't do data rearrangement

4 years ago  [TIR][ANALYSIS] Refine side effect analysis. (#5954)
Tianqi Chen [Mon, 29 Jun 2020 15:25:46 +0000 (08:25 -0700)]
[TIR][ANALYSIS] Refine side effect analysis. (#5954)

4 years ago  [Relay] symbolic max_output_size (#5844)
Yong Wu [Mon, 29 Jun 2020 06:18:38 +0000 (14:18 +0800)]
[Relay] symbolic max_output_size (#5844)

* symbolic max_output_size

* pylint

* fix ci

4 years ago  [BUGFIX] Add cuda 11 to contrib.nvcc.find_libdevice_path() (#5902)
Yanming Wang [Sun, 28 Jun 2020 23:10:19 +0000 (23:10 +0000)]
[BUGFIX] Add cuda 11 to contrib.nvcc.find_libdevice_path() (#5902)

4 years ago  [REFACTOR][TIR][API-Change] Range/IntSet API style consistency. (#5953)
Tianqi Chen [Sun, 28 Jun 2020 23:02:06 +0000 (16:02 -0700)]
[REFACTOR][TIR][API-Change] Range/IntSet API style consistency. (#5953)

- Range::make_by_min_extent -> Range::FromMinExtent
- Update the APIs in IntSet to use CamelCase

4 years ago  [RELAY][VM] Add shape_of instruction (#5855)
Zhi [Sun, 28 Jun 2020 17:05:50 +0000 (10:05 -0700)]
[RELAY][VM] Add shape_of instruction (#5855)

4 years ago  add rm xla attributes in tf docs (#5950)
Meteorix [Sun, 28 Jun 2020 16:28:33 +0000 (00:28 +0800)]
add rm xla attributes in tf docs (#5950)

4 years ago  raise right error in tensorflow split op (#5951)
Meteorix [Sun, 28 Jun 2020 16:28:15 +0000 (00:28 +0800)]
raise right error in tensorflow split op (#5951)

4 years ago  [TIR] Improve Let/LetStmt support. (#5949)
Tianqi Chen [Sun, 28 Jun 2020 16:22:11 +0000 (09:22 -0700)]
[TIR] Improve Let/LetStmt support. (#5949)

Let/LetStmt are useful primitives for creating variable bindings.
While let bindings are harmful for simplification and integer analysis,
they are useful in other cases:

- C0: LetStmt is useful to represent a step that has a side effect (e.g. calling a PRNG).
- C1: Let expressions can be used to create deeply nested expressions for complicated functions.

This PR improves the let support in the following ways:
- Enable vectorization support for let.
- Change the let simplification strategy to simplify the most trivial case
  while ignoring more complicated cases (to avoid deep nest explosion).
- Enhance the arith module to handle const bound and modular set for let.

The overall recommendation is to only use Let when necessary (C0, C1).
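
A minimal sketch of a C1-style Let expression:

    from tvm import tir

    x = tir.Var("x", "float32")
    # bind a subexpression once, then reuse it inside a larger expression
    e = tir.Let(x, tir.FloatImm("float32", 2.0) + 3.0, x * x)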

4 years ago  [Doc] minor fix for release doc (#5948)
Yizhi Liu [Sun, 28 Jun 2020 08:24:38 +0000 (01:24 -0700)]
[Doc] minor fix for release doc (#5948)

4 years ago  fix string argument mismatch in GraphRuntimeCodegen (#5933)
Lianmin Zheng [Sun, 28 Jun 2020 00:09:39 +0000 (17:09 -0700)]
fix string argument mismatch in GraphRuntimeCodegen (#5933)

4 years ago  [TIR][PASS] Remove legacy HoistIfThenElse (#5944)
Tianqi Chen [Sat, 27 Jun 2020 21:56:13 +0000 (14:56 -0700)]
[TIR][PASS] Remove legacy HoistIfThenElse (#5944)

This pass has not been migrated to the new transform API,
and contains potential bugs per https://github.com/apache/incubator-tvm/issues/5559.
Given that it is not being actively used, this PR removes the pass
from the collection.

Followup PRs are more than welcome to land a better version that
conforms with the new transform API.

4 years ago  Update date in the NOTICE (#5942)
Tianqi Chen [Sat, 27 Jun 2020 19:31:47 +0000 (12:31 -0700)]
Update date in the NOTICE (#5942)

4 years ago  [TIR][OP][API-CHANGE] Remove CallNode.call_type in favor of attribute. (#5937)
Tianqi Chen [Sat, 27 Jun 2020 17:54:26 +0000 (10:54 -0700)]
[TIR][OP][API-CHANGE] Remove CallNode.call_type in favor of attribute. (#5937)

This is a followup refactor for tir::Call.
Now that we have switched call->name to call->op, the function effect property
can be registered through the op itself, so we no longer need the call_type in the CallNode.

- Introduce CallEffectKind to provide a more fine-grained categorization of calls.
- Introduce call_pure_extern and call_llvm_pure_intrin to
  allow us to indicate pure calls in those cases.
- Migrate existing use cases to the new API.
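
A minimal sketch of the new pure extern call (sqrtf is an illustrative target):

    from tvm import tir

    # declare the external call as pure so passes can reason about it
    c = tir.call_pure_extern("float32", "sqrtf", tir.FloatImm("float32", 4.0))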

4 years ago  add dnnl (#5936)
Cody Yu [Fri, 26 Jun 2020 23:02:16 +0000 (16:02 -0700)]
add dnnl (#5936)

4 years ago  [CODEGEN][CONTRIB] Various updates for CoreML codegen (#5934)
MORITA Kazutaka [Fri, 26 Jun 2020 17:34:33 +0000 (02:34 +0900)]
[CODEGEN][CONTRIB] Various updates for CoreML codegen (#5934)

* [CODEGEN][CONTRIB] Various updates for CoreML codegen

* fix lint error

4 years ago  [Runtime] Only initialize required module (#5926)
Cody Yu [Fri, 26 Jun 2020 15:05:12 +0000 (08:05 -0700)]
[Runtime] Only initialize required module (#5926)

* init required modules

* trigger ci

* trigger ci

4 years ago  Update code_review.rst (#5923)
Baden Hughes [Fri, 26 Jun 2020 14:23:08 +0000 (00:23 +1000)]
Update code_review.rst (#5923)

editorial pass with corrections

4 years ago  Add TupleGetItem to CSE (#5931)
Matthew Brookhart [Fri, 26 Jun 2020 14:22:43 +0000 (07:22 -0700)]
Add TupleGetItem to CSE (#5931)

* Add TupleGetItem to CSE

* rename a local variable

4 years ago  [Arith][GPU] Rewrite simplify fix for Vectorized Cooperative Fetching (#5924)
Chenfan [Fri, 26 Jun 2020 14:19:47 +0000 (22:19 +0800)]
[Arith][GPU] Rewrite simplify fix for Vectorized Cooperative Fetching (#5924)

4 years ago  [PatternLang] Don't rewrite expressions used outside of the pattern (#5930)
Matthew Brookhart [Fri, 26 Jun 2020 14:15:07 +0000 (07:15 -0700)]
[PatternLang] Don't rewrite expressions used outside of the pattern (#5930)

* Don't rewrite expressions used outside of the pattern

* add comments

4 years ago  [TE] Add LegalizeInvalidAttach to legalize the compute_at location after split or fuse (#5917)
Lianmin Zheng [Fri, 26 Jun 2020 05:52:19 +0000 (22:52 -0700)]
[TE] Add LegalizeInvalidAttach to legalize the compute_at location after split or fuse (#5917)

* Add LegalizeInvalidAttach

* lint & typo

* lint & typo

* address comment

* fix lint

4 years ago  refine error (#5929)
Cody Yu [Fri, 26 Jun 2020 02:26:03 +0000 (19:26 -0700)]
refine error (#5929)

4 years ago  [BACKPORT-0.6][Bugfix][Arith] keep div_mode during floordiv simplify (#5922)
Yizhi Liu [Fri, 26 Jun 2020 02:13:03 +0000 (19:13 -0700)]
[BACKPORT-0.6][Bugfix][Arith] keep div_mode during floordiv simplify (#5922)

4 years ago  Two small fixes to AMDCPU codegen for LLVM 10+ and ROCm 3.5+ (#5920)
Thomas Viehmann [Thu, 25 Jun 2020 16:59:12 +0000 (18:59 +0200)]
Two small fixes to AMDCPU codegen for LLVM 10+ and ROCm 3.5+ (#5920)

- For LLVM 10+ we need to avoid calling Align with 0, or else
  we get a crash.
- For ROCm 3.5+ we need to use code object 3 (the default in LLVM 9+),
  but for ROCm < 3.5 we want code object 2.
- As we want to separate codegen from the API, we need to add
  a device API query for the version.
  But now everyone else wants one, too. (I only filled it
  in for CUDA for now.)
- I'm throwing in an addition of kMaxRegistersPerBlock for ROCm.
  This was introduced for CUDA in #5898.

4 years ago  Update install.rst (#5858)
Baden Hughes [Thu, 25 Jun 2020 16:38:40 +0000 (02:38 +1000)]
Update install.rst (#5858)

* Update install.rst

minor cleanups/corrections

* Update install.rst

Fixed broken link

4 years ago  [Relay][Vm] Some performance improvement to VM (#5901)
Haichen Shen [Thu, 25 Jun 2020 14:55:40 +0000 (07:55 -0700)]
[Relay][Vm] Some performance improvement to VM (#5901)

* make alignment constant

* tweak copyto and loadscalarint

* some safety check

* x

* lint

* fix

4 years ago  CUDA device API & VerifyGPUCode pass update (#5898)
Chenfan [Thu, 25 Jun 2020 05:44:39 +0000 (13:44 +0800)]
CUDA device API & VerifyGPUCode pass update (#5898)

* Add kMaxRegistersPerBlock device api for cuda

* Add vectorize check to verify_gpu_code

* Lint fix

* Cast fix

4 years ago  [Thread Backend] Fix CPU Thread Binding for Multiple Sockets (#5918)
Yao Wang [Thu, 25 Jun 2020 05:29:51 +0000 (22:29 -0700)]
[Thread Backend] Fix CPU Thread Binding for Multiple Sockets (#5918)

* Fix CPU Thread Binding for Multiple Sockets

* Backward compatibility

4 years ago  Add MicroTVM tutorial using the STM32F746 discovery board (#5655)
Tom Gall [Wed, 24 Jun 2020 22:44:11 +0000 (17:44 -0500)]
Add MicroTVM tutorial using the STM32F746 discovery board (#5655)

* Add MicroTVM tutorial using the STM32F746 discovery board
with a sample tflite model

Signed-off-by: Tom Gall <tom.gall@linaro.org>
* Fix: add a reference to the new tutorials/micro directory

Signed-off-by: Tom Gall <tom.gall@linaro.org>
* fix: Cosmetic, align Micro TVM text with divider

Signed-off-by: Tom Gall <tom.gall@linaro.org>
* Fixes to remove warnings, spaces for readability, code blocks

Signed-off-by: Tom Gall <tom.gall@linaro.org>
* remove use of dload in favor of requests for obtaining the TFLite model

Signed-off-by: Tom Gall <tom.gall@linaro.org>
* add setup for CMSIS_ST_PATH
comment out portion of tutorial that will not run without a physical board available

Signed-off-by: Tom Gall <tom.gall@linaro.org>
* Fix warning due to ** in Python that is part of a comment block
The block is commented out since it can only run on a device

Signed-off-by: Tom Gall <tom.gall@linaro.org>
* Numerous reworks to address feedback.

Within docs/conf.py place the microTVM tutorial prior to the VTA tutorials

Within the micro_tflite
  - rework section headers
  - reorder code so model prep code is all in one place as well as code
    for running on device
  - address indentation feedback
  - remove '' '' usage which I mistakenly thought was getting around a
    sphinx issue involving **

Signed-off-by: Tom Gall <tom.gall@linaro.org>
* Change disable_vectorize to use current approach with tvm.transform.PassContext
Change to pull example model from github with download_testdata
Add 2.5K tflite model
Couple of small changes following https://sphinx-gallery.github.io/stable/syntax.html

Signed-off-by: Tom Gall <tom.gall@linaro.org>
* remove use of relay.build_config in favor of PassContext

Signed-off-by: Tom Gall <tom.gall@linaro.org>
* Couple of minor 4 space fix ups

Signed-off-by: Tom Gall <tom.gall@linaro.org>
* Change to use tvm.transform.PassContext for disable_vectorize and disabling FuseOps

Signed-off-by: Tom Gall <tom.gall@linaro.org>
* Remove binary module from repo
Change download_testdata back to pull model from linaro server

Signed-off-by: Tom Gall <tom.gall@linaro.org>
* Couple of small cosmetic changes. (spaces and extra lines)

Signed-off-by: Tom Gall <tom.gall@linaro.org>
* Convert link to tf docs to examine a tf lite model to use RST syntax

Signed-off-by: Tom Gall <tom.gall@linaro.org>
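
A sketch of the build step the tutorial settles on (the tiny module
below is a stand-in for the downloaded TFLite model):

    import tvm
    from tvm import relay

    x = relay.var("x", shape=(1, 4), dtype="float32")
    mod = tvm.IRModule.from_expr(relay.Function([x], relay.nn.relu(x)))

    with tvm.transform.PassContext(opt_level=3, config={"tir.disable_vectorize": True}):
        graph, c_mod, params = relay.build(mod, target="c")
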
4 years ago  [TIR][REFACTOR] Deprecate FreeStmt (#5890)
Tianqi Chen [Wed, 24 Jun 2020 21:48:12 +0000 (14:48 -0700)]
[TIR][REFACTOR] Deprecate FreeStmt (#5890)

Currently FreeStmt is not being used.
While it can be useful to have an early free hint,
we can always use an intrinsic instead of a first-class statement.

4 years ago  [PatternLang] Support any index matching for TupleGetItem (#5909)
Cody Yu [Wed, 24 Jun 2020 18:44:23 +0000 (11:44 -0700)]
[PatternLang] Support any index matching for TupleGetItem (#5909)

* support any index matching

* update doc
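
A short sketch of the new matching behaviour:

    from tvm.relay.dataflow_pattern import is_tuple_get_item, wildcard

    pat_any = is_tuple_get_item(wildcard())   # omitted index now matches any index
    pat_0 = is_tuple_get_item(wildcard(), 0)  # a specific index can still be pinned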

4 years ago  Fix serialization of inf float value (#5912)
lixiaoquan [Wed, 24 Jun 2020 15:49:13 +0000 (23:49 +0800)]
Fix serialization of inf float value (#5912)

4 years ago  Don't multiply by constant 1 uselessly in dense (#5911)
Thomas Viehmann [Wed, 24 Jun 2020 11:49:43 +0000 (13:49 +0200)]
Don't multiply by constant 1 uselessly in dense (#5911)

4 years ago  PyTorch frontend: fix handling of duplicate use of a model weight (#5897)
Thomas Viehmann [Wed, 24 Jun 2020 04:42:38 +0000 (06:42 +0200)]
PyTorch frontend: fix handling of duplicate use of a model weight (#5897)

This happens e.g. in shared input/output embeddings in BERT
or siamese networks.

Thank you @siju-samuel for reporting.

4 years ago  Allow implicit conversion in TVM FFI to tvm::Bool (#5907)
Junru Shao [Wed, 24 Jun 2020 03:16:04 +0000 (20:16 -0700)]
Allow implicit conversion in TVM FFI to tvm::Bool (#5907)

4 years ago  Add Binary Intrinsic ops to TIR Ops in C++ (#5900)
Matthew Brookhart [Tue, 23 Jun 2020 23:18:29 +0000 (16:18 -0700)]
Add Binary Intrinsic ops to TIR Ops in C++ (#5900)

* Add Binary Intrinsic ops to TIR Ops in C++

* clang-format

4 years ago  add a few gradients (#5899)
Thomas Viehmann [Tue, 23 Jun 2020 21:01:46 +0000 (23:01 +0200)]
add a few gradients (#5899)

4 years ago  Rust Refactor Stage 4: Rewrite Rust graph runtime to use new APIs (#5830)
Jared Roesch [Tue, 23 Jun 2020 18:39:48 +0000 (11:39 -0700)]
Rust Refactor Stage 4: Rewrite Rust graph runtime to use new APIs (#5830)

* Port graph-runtime to new API

* --amend

* Fix file lint

* Remove old travis file

* Add @kazum's patch

* Update rust/tvm-sys/src/datatype.rs

Co-authored-by: Andrew <amcharg@gmail.com>
Co-authored-by: Andrew <amcharg@gmail.com>
4 years ago  Fix the python intrin rule (#5895)
Tianqi Chen [Tue, 23 Jun 2020 15:24:49 +0000 (08:24 -0700)]
Fix the python intrin rule (#5895)

4 years ago  [Relay] Allow every runtime module to handle constants (#5885)
Cody Yu [Tue, 23 Jun 2020 15:23:58 +0000 (08:23 -0700)]
[Relay] Allow every runtime module to handle constants (#5885)

* update source module

* address comment

4 years ago  remove fatal (#5888)
Cody Yu [Tue, 23 Jun 2020 15:22:12 +0000 (08:22 -0700)]
remove fatal (#5888)

4 years ago  [RFC] Improve quantized convolution performance for armv8 architectures (#5754)
Giuseppe Rossini [Tue, 23 Jun 2020 05:20:05 +0000 (06:20 +0100)]
[RFC] Improve quantized convolution performance for armv8 architectures (#5754)

* Improve quantized conv2d performance for armv8

Signed-off-by: Giuseppe Rossini <giuseppe.rossini@arm.com>
Change-Id: I3a3d29f5332dd9b3354e8e0dfb24677a521f9c8f

* Add ASF header to conv2d_gemm.py

Change-Id: I33853279e39c849ae1b555a9c91d7557985a0a35

* Run clang-format-10 on c++ files

Change-Id: Ieee22f032e595dabfc1616ab33466fcbf8d94365

* Fix pylint errors/warnings

Change-Id: I435d4d7bca7500db99547f4401fdc0d0995a1ff4

* Fix pylint errors/warnings in topi

Change-Id: I2fc1ad8453e9020072ab967c849df5390c2967b5

* Fix legalizations tests for aarch64

Change-Id: I0a67a49a7849f52ef7d57b9292ce9125bbb7cb2c

* Reintroduce conv2d_nhwc_spatial_pack.arm_cpu and int16 cast

Change-Id: I91b67fabd475e90a9b75f2dd5ecfee851265e0bb

* Switch type of legalization depending on the strategy used

Change-Id: I9a03040a8c40a6cd2658ed14c3751e05a8e19f2b

* Revert last commit

Change-Id: Ice34101e358e3ce8ebfb12c58f73e910ba5de8e8

* Fix the auto-tuner by registering the correct schedules

Change-Id: Id9273688b2620e1ea849ab01b4c46af8fbf37fd0

* Address review comments

Change-Id: Ia1755a0af7b6d159072d9f0c93c932c481101e48

* Improve usability and readability of conv2d_gemm_weight_transform

Change-Id: I3333186bbc2fe4054b58ce15d910e3be7b315482

* Change variable name to weight in Conv2DGemmWeightTransformRel

Change-Id: Ifb5f1f33af7512fe67c6b049b20a42a0bb2d26c9

* Fix clang-10 linting errors

Change-Id: I25ccc844d9cee23766096e1daddb6180abc413a6

* Trigger tests

Change-Id: Id37706fb7cf77a87a3cc817ecf8046297d9ca95a

4 years ago  [TIR][REFACTOR][API-CHANGE] Change Call.name to Call.op(RelayExpr) (#5863)
Tianqi Chen [Tue, 23 Jun 2020 00:47:01 +0000 (17:47 -0700)]
[TIR][REFACTOR][API-CHANGE] Change Call.name to Call.op(RelayExpr) (#5863)

* [TIR][REFACTOR][API-CHANGE] Change Call.name(string) to Call.op(tvm::Op/RelayExpr)

This PR brings a major refactor to the tir::Call structure.
The current Call structure uses a string field (name) to identify the
function/intrinsic being called. This approach is limited as we start
to expand TIR to be more structured. In particular, we are interested in
the following aspects:

- Type a function and perform better compile-time type checking so that we
  can find errors early.
- Register additional properties about an operator, such as:
  - Whether an intrinsic can be vectorized
  - What the adjoint function of the intrinsic is (for tensor expression AD)
  - Whether the operator has a side effect.
- Perform specific codegen for an intrinsic if necessary.
- Call into another function in the same module.

The refactor changes the Call.name field to Call.op.
The Call.op field has a RelayExpr type, and we can pass:

- A tvm::Op which represents the corresponding intrinsic.
- A tvm::GlobalVar for calling into another function in the IRModule.

All the current intrinsics are migrated by registering a tvm::Op.
Because the unified IR shares a single Op registry, we use the "tir"
namespace for tir-related intrinsics; for example, bitwise and is now registered
under `tir.bitwise_and`.

To simplify the upgrade, we introduce a `tir.call_extern` intrinsic
that allows us to call into an arbitrary external function without type checking.
However, we should move towards more type-checked variants in the system.

Under the new op design, we should no longer try to pattern match all the
specific intrinsics. Instead, we should rely on the attrs of each Op to do transformations.
For example, the vectorization pass depends on the TVectorizable property of the op,
which can be registered independently.

In this way, we can still grow the number of intrinsics when necessary
without having to change all the passes.

The same rule applies for tensor expression AD. Currently we are performing
AD by pattern matching on operators like exp, sin, cos. We should instead
change to the adjoint registration mechanism like the one in relay.

Followup refactors need to be performed, including:
- Fold the Call.call_type into the operator's attributes.
- Enrich the operator registry information.
- Refactor passes (e.g. AD, intrin lowering) to use attribute-based transformation.

* Fix nms

* Fix remaining testcase

* Address review comment
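
A small sketch of the two call forms after the refactor (my_external_func
is a placeholder symbol):

    import tvm
    from tvm import tir

    # intrinsics are now first-class ops in the unified registry
    band = tvm.ir.Op.get("tir.bitwise_and")

    # escape hatch: call an arbitrary external symbol without type checking
    c = tir.call_extern("int32", "my_external_func", 1, 2)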

4 years ago  [COMMUNITY] Matthew Brookhart -> Reviewer (#5886)
Haichen Shen [Mon, 22 Jun 2020 23:45:44 +0000 (16:45 -0700)]
[COMMUNITY] Matthew Brookhart -> Reviewer (#5886)

4 years ago  keep parameter names from PyTorch (#5887)
Thomas Viehmann [Mon, 22 Jun 2020 23:40:45 +0000 (01:40 +0200)]
keep parameter names from PyTorch (#5887)

4 years ago  Improve type handling in PyTorch frontend (#5834)
Thomas Viehmann [Mon, 22 Jun 2020 13:33:04 +0000 (15:33 +0200)]
Improve type handling in PyTorch frontend (#5834)

* Improve type handling in PyTorch frontend

- Use type information from the graph for inputs if available. Check
  against shape information from the graph if available.
- Allow user to set default dtype (default to float32 for sanity and
  compatibility).
- Implement type promotion to follow PyTorch mechanism. This includes
  fixing the handling of many "Scalar" overloads in PyTorch binary ops.
- Fix arange/linspace type semantics.
- Added support for traced functions. (Because it really is about the
  "self" input handling.)

Aside from adding an optional default_dtype keyword argument, this does not
change the signature/requirements of from_pytorch.
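
A usage sketch of the unchanged entry point with the new keyword
(the traced ReLU is a stand-in model):

    import torch
    from tvm import relay

    model = torch.jit.trace(torch.nn.ReLU().eval(), torch.randn(1, 3))
    mod, params = relay.frontend.from_pytorch(
        model, [("input0", (1, 3))], default_dtype="float32"
    )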

* Fix scalar detection using numpy.isscalar

and address other review comments. Thank you @siju-samuel

* refine test criterion on qnn_test::test_serialized_modules, fix bool conversion of const

4 years ago  Fail early before running invalid dynamic graphs (#5856)
Matthew Brookhart [Mon, 22 Jun 2020 05:55:16 +0000 (22:55 -0700)]
Fail early before running invalid dynamic graphs (#5856)

* fail early before running invalid dynamic graphs

* fix an issue with the VM comment

4 years ago  [Bugfix][Build] Fix building with LLVM-10 on macOS (#5859)
Junru Shao [Mon, 22 Jun 2020 03:53:39 +0000 (20:53 -0700)]
[Bugfix][Build] Fix building with LLVM-10 on macOS (#5859)

4 years ago  [QUANTIZE] Add nn.batch_flatten as quantizable. (#5805)
Balint Cristian [Sun, 21 Jun 2020 22:36:51 +0000 (01:36 +0300)]
[QUANTIZE] Add nn.batch_flatten as quantizable. (#5805)

* [ONNX] Skip ADD inside Gemm op when vector is zero

* [QUANTIZE] Add nn.batch_flatten as quantizable.

4 years ago  [Target] Introduce Target Id Registry (#5838)
Junru Shao [Sat, 20 Jun 2020 02:36:08 +0000 (19:36 -0700)]
[Target] Introduce Target Id Registry (#5838)

4 years ago  [DOCS] Update has_dtype/has_shape to pattern lang doc (#5847)
Cody Yu [Sat, 20 Jun 2020 00:04:53 +0000 (17:04 -0700)]
[DOCS] Update has_dtype/has_shape to pattern lang doc (#5847)

4 years ago  Fix map assign issue in CI test (#5854)
Tianqi Chen [Fri, 19 Jun 2020 17:37:46 +0000 (10:37 -0700)]
Fix map assign issue in CI test (#5854)

4 years ago  Add Python Classes for all Attrs (#5853)
Thomas Viehmann [Fri, 19 Jun 2020 15:28:45 +0000 (17:28 +0200)]
Add Python Classes for all Attrs (#5853)

4 years ago  [DataType] Add bfloat16 (#5601)
Menooker [Fri, 19 Jun 2020 14:40:40 +0000 (22:40 +0800)]
[DataType] Add bfloat16 (#5601)

4 years ago  [Object] Introduce POD-C Compliant tvm::Map (#5740)
Junru Shao [Fri, 19 Jun 2020 14:35:44 +0000 (07:35 -0700)]
[Object] Introduce POD-C Compliant tvm::Map (#5740)

4 years ago  [FIX] Recover global state after test_util.py (#5824)
mbaret [Fri, 19 Jun 2020 14:35:05 +0000 (15:35 +0100)]
[FIX] Recover global state after test_util.py (#5824)

In test_util.py, a program exit is simulated to test
that the error-throwing behaviour is accurate.
Unfortunately, this also deletes necessary global state,
so all subsequent tests that run and use tempdir
throw the same error.

This patch is a simple fix to restore the global state
at the end of the test.

Change-Id: I62fef46167e47f6af43271e2ce1db30f54857647

4 years ago  [AutoTVM] Suppress the warning messages when compile engine selects impls (#5821)
Haichen Shen [Thu, 18 Jun 2020 23:29:21 +0000 (16:29 -0700)]
[AutoTVM] Suppress the warning messages when compile engine selects impls (#5821)

4 years ago  Additional canonicalization added for AddNode (#5846)
ANSHUMAN TRIPATHY [Thu, 18 Jun 2020 23:21:55 +0000 (04:51 +0530)]
Additional canonicalization added for AddNode (#5846)

4 years ago  fix batchnorm infer_value error, add regression test and unit test (#5845)
Matthew Brookhart [Thu, 18 Jun 2020 22:28:01 +0000 (15:28 -0700)]
fix batchnorm infer_value error, add regression test and unit test (#5845)

4 years ago  [RUNTIME] Introduce MetadataModule to separate code compilation/interpretation and weight initialization (#5770)
Zhi [Thu, 18 Jun 2020 22:18:29 +0000 (15:18 -0700)]
[RUNTIME] Introduce MetadataModule to separate code compilation/interpretation and weight initialization (#5770)

4 years ago  `tvm` crate stage 3 of Rust refactor (#5769)
Jared Roesch [Thu, 18 Jun 2020 18:33:25 +0000 (11:33 -0700)]
`tvm` crate stage 3 of Rust refactor (#5769)

* Adapt to new macro

* Add tvm crate

* Fix out of tree pass with new bindings

* Super slick API working

* Add examples

* Delay egg example and add ASF headers

* Move array.rs around

* Remove outdated tests will restore in CI PR

* Fix some memory issues

* Fix ref counting issue

* Formatting and cleanup

* Remove out-of-tree for now

* Remove out-of-tree

4 years ago  ffi (Object): make class dict visible in instances (#5843)
Thomas Viehmann [Thu, 18 Jun 2020 18:14:21 +0000 (20:14 +0200)]
ffi (Object): make class dict visible in instances (#5843)

4 years ago  [Torch][Quantized] Fix converting serialized quantized models (#5839)
masahi [Thu, 18 Jun 2020 16:24:03 +0000 (01:24 +0900)]
[Torch][Quantized] Fix converting serialized quantized models (#5839)

* [Torch] Fix converting serialized quantized models

* clean up dtype check

* comment clean up

4 years ago  [KERAS] RepeatVector, Conv3DTranspose op support added (#5833)
Siju Samuel [Thu, 18 Jun 2020 00:50:33 +0000 (06:20 +0530)]
[KERAS] RepeatVector, Conv3DTranspose op support added (#5833)

4 years ago  Add a combine batch_matmul pass (#5791)
Thomas Viehmann [Wed, 17 Jun 2020 20:39:12 +0000 (22:39 +0200)]
Add a combine batch_matmul pass (#5791)

* Add a combine batch_matmul pass

Contrary to what you might expect, this doesn't share as much code with
the combine dense pass as it does with the combine 2d conv pass.
This is because it concatenates the "output feature" dimensions.

* fix docstring
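
A sketch of applying the pass, assuming it is exposed as
relay.transform.CombineParallelBatchMatmul like the dense/conv2d combiners:

    import tvm
    from tvm import relay

    x = relay.var("x", shape=(2, 4, 4))
    w1 = relay.var("w1", shape=(2, 4, 4))
    w2 = relay.var("w2", shape=(2, 4, 4))
    # two parallel batch_matmul branches sharing the same input
    out = relay.Tuple([relay.nn.batch_matmul(x, w1), relay.nn.batch_matmul(x, w2)])
    mod = tvm.IRModule.from_expr(relay.Function([x, w1, w2], out))

    mod = relay.transform.CombineParallelBatchMatmul(min_num_branches=2)(mod)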

4 years ago  [Frontend][MXNet] Support a few contrib ops in mxnet (#5819)
Haichen Shen [Wed, 17 Jun 2020 20:15:14 +0000 (13:15 -0700)]
[Frontend][MXNet] Support a few contrib ops in mxnet (#5819)

* support for bert in mxnet1.6 and gluonnlp0.9

* fix converter

* Add test cases

* add a todo

4 years ago  [Frontend][TensorFlow] Fix TF Dynamic input shape (#5825)
Yao Wang [Wed, 17 Jun 2020 17:05:35 +0000 (10:05 -0700)]
[Frontend][TensorFlow] Fix TF Dynamic input shape (#5825)

* Fix TF Dynamic input shape

* Remove warning

* Add test

4 years ago  [Relay, Topi] [Frontend][TFLite, MXNet] ReverseSequence operator (#5495)
Mahesh Ambule [Wed, 17 Jun 2020 03:53:08 +0000 (09:23 +0530)]
[Relay, Topi] [Frontend][TFLite, MXNet] ReverseSequence operator (#5495)

* TFLite reverse_sequence op

* TFLite add_n implementation

* reverse_sequence implementation

* reverse_sequence implementation

* reverse sequence

* TOPI,Relay,TFLite - Reverse Sequence

Signed-off-by: maheshambule <mahesh_ambule@persistent.com>
* Reverse Sequence small fixes

Signed-off-by: maheshambule <mahesh_ambule@persistent.com>
* lint fixes

Signed-off-by: maheshambule <mdambule07@gmail.com>
* TFLite reverse_sequence op

Signed-off-by: maheshambule
* MXNet SequenceReverse implementation

* clang format

* clang format

* review comment fixes

4 years ago  [RUNTIME][String] Overload string operators (#5806)
Zhi [Wed, 17 Jun 2020 03:49:51 +0000 (20:49 -0700)]
[RUNTIME][String] Overload string operators (#5806)

4 years ago  [Fix] Fix recursive let for well formed check (#5780)
Haichen Shen [Wed, 17 Jun 2020 02:30:49 +0000 (19:30 -0700)]
[Fix] Fix recursive let for well formed check (#5780)

4 years ago  [MergeComposite] Fix InferType when module contains Prelude (#5797)
lixiaoquan [Tue, 16 Jun 2020 22:11:22 +0000 (06:11 +0800)]
[MergeComposite] Fix InferType when module contains Prelude (#5797)

A function may refer to other resources in the same module, so keep
  the content of the original module when inferring a function.

4 years ago  fix relay.build to not change the module argument in place (#5822)
Thomas Viehmann [Tue, 16 Jun 2020 21:13:40 +0000 (23:13 +0200)]
fix relay.build to not change the module argument in place (#5822)

4 years ago  [Relay][OpStrategy] Tweak cublas/cudnn priority level (#5820)
Haichen Shen [Tue, 16 Jun 2020 20:14:07 +0000 (13:14 -0700)]
[Relay][OpStrategy] Tweak cublas/cudnn priority level (#5820)

* Tweak cublas plevel

* update

* trigger ci

4 years ago  [Frontend][TFlite] Add parser support for relu6, leaky_relu, relu_n1_to_1, log_softmax (#4805)
Ina Dobreva [Tue, 16 Jun 2020 17:16:26 +0000 (20:16 +0300)]
[Frontend][TFlite] Add parser support for relu6, leaky_relu, relu_n1_to_1, log_softmax (#4805)

* [Frontend][TFLite]Add support for relu6, leaky_relu, relu_n1_to_1, log_softmax

* add implementation in parser
* add qnn tests for each operator

* Implement clip operation for quantized relu6, relu1

* add 'clip' as in the quantized fused operations
* remove redundant assertions and imports

* Fix floating value quantization for RELU6 and RELU1
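
A rough numeric illustration of the clip lowering for quantized relu6
(the scale and zero point are made-up values):

    import numpy as np

    scale, zero_point = 0.047, 0
    q_min = zero_point + int(round(0.0 / scale))  # quantized 0.0
    q_max = zero_point + int(round(6.0 / scale))  # quantized 6.0
    print(np.clip(np.array([-5, 40, 200]), q_min, q_max))  # [  0  40 128]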

4 years ago  Error msg update (#5818)
ANSHUMAN TRIPATHY [Tue, 16 Jun 2020 15:24:12 +0000 (20:54 +0530)]
Error msg update (#5818)

4 years ago  [COMMUNITY] Siju Samuel -> Committer (#5817)
Tianqi Chen [Mon, 15 Jun 2020 20:08:55 +0000 (13:08 -0700)]
[COMMUNITY] Siju Samuel -> Committer (#5817)

4 years ago  [CI] Limit number of threads in all jobs (#5815)
Tianqi Chen [Mon, 15 Jun 2020 17:28:59 +0000 (10:28 -0700)]
[CI] Limit number of threads in all jobs (#5815)

4 years ago  Pin hand landmark network to version 0.7.4. (#5813)
Leandro Nunes [Mon, 15 Jun 2020 14:54:38 +0000 (15:54 +0100)]
Pin hand landmark network to version 0.7.4. (#5813)

* Versions above 0.7.4 are broken due to changes in the
   quantization operations in the model, which are currently
   not supported by TVM.

Fixes #5774.

4 years ago  [MXNET] conv3d and conv3d_transpose added (#5814)
Samuel [Mon, 15 Jun 2020 12:47:53 +0000 (18:17 +0530)]
[MXNET] conv3d and conv3d_transpose added (#5814)

4 years ago  [CI] Move cpu-only frontend tests to a CPU stage (#5807)
Tianqi Chen [Mon, 15 Jun 2020 03:53:11 +0000 (20:53 -0700)]
[CI] Move cpu-only frontend tests to a CPU stage (#5807)

4 years ago  [topi] fix strategy for sparse dense cuda (#5782)
Bing Xu [Mon, 15 Jun 2020 01:54:15 +0000 (18:54 -0700)]
[topi] fix strategy for sparse dense cuda (#5782)

4 years ago  Allow RPCWrappedFunc to rewrite runtime::String as std::string (#5796)
Junru Shao [Sun, 14 Jun 2020 23:28:06 +0000 (16:28 -0700)]
Allow RPCWrappedFunc to rewrite runtime::String as std::string (#5796)

4 years ago  [topi] fix sparse dense schedule on cuda (#5803)
Zijing Gu [Sun, 14 Jun 2020 21:40:20 +0000 (17:40 -0400)]
[topi] fix sparse dense schedule on cuda (#5803)

4 years ago  [QUANTIZE] Add config switch for nn.dense layer type. (#5801)
Balint Cristian [Sun, 14 Jun 2020 17:37:32 +0000 (20:37 +0300)]
[QUANTIZE] Add config switch for nn.dense layer type. (#5801)

4 years ago  [TIR][REFACTOR] Add tir prefix to type keys (#5802)
Tianqi Chen [Sun, 14 Jun 2020 16:45:46 +0000 (09:45 -0700)]
[TIR][REFACTOR] Add tir prefix to type keys (#5802)

4 years ago  [ONNX] Skip multiply with 1.0f constant for GEMM import (#5800)
Balint Cristian [Sun, 14 Jun 2020 03:39:20 +0000 (06:39 +0300)]
[ONNX] Skip multiply with 1.0f constant for GEMM import (#5800)

* [ONNX] Skip ADD inside Gemm op when vector is zero

* [ONNX] Skip multiply with 1.0f constant for GEMM import

4 years ago  [TEST] Temporarily disable fp16 type_as test for PyTorch Frontend (#5799)
Tianqi Chen [Sat, 13 Jun 2020 19:18:17 +0000 (12:18 -0700)]
[TEST] Temporarily disable fp16 type_as test for PyTorch Frontend (#5799)

4 years ago  [TIR][REFACTOR] Update TIR nodes std::string->String. (#5793)
Tianqi Chen [Sat, 13 Jun 2020 16:09:00 +0000 (09:09 -0700)]
[TIR][REFACTOR] Update TIR nodes std::string->String. (#5793)

This PR updates the remaining TIR nodes' members to use
String instead of std::string.

4 years ago  support aten::type_as in the pytorch frontend (#5787)
Rand Xie [Sat, 13 Jun 2020 04:52:45 +0000 (21:52 -0700)]
support aten::type_as in the pytorch frontend (#5787)

* support aten::type_as in the pytorch frontend

* use _convert_data_type to convert torch type to tvm type and add more types in the type_as test

4 years ago  Fix tf parser (#5794)
Yao Wang [Sat, 13 Jun 2020 03:32:46 +0000 (20:32 -0700)]
Fix tf parser (#5794)