platform/upstream/tvm.git
4 years ago  GitHub actions/checkout@v1 --> v2 (#4680)
Christian Clauss [Fri, 10 Jan 2020 22:36:05 +0000 (23:36 +0100)]
GitHub actions/checkout@v1 --> v2 (#4680)

https://github.com/actions/checkout/releases

4 years ago  Also package core.rly (#4679)
abergeron [Fri, 10 Jan 2020 22:04:15 +0000 (17:04 -0500)]
Also package core.rly (#4679)

4 years ago  [CI] Bump to use the new cpu image (#4677)
Tianqi Chen [Fri, 10 Jan 2020 22:01:27 +0000 (14:01 -0800)]
[CI] Bump to use the new cpu image (#4677)

4 years ago  fix topi.nn.global_pool layout="NHWC" (#4656)
戚海涛 [Fri, 10 Jan 2020 19:00:10 +0000 (03:00 +0800)]
fix topi.nn.global_pool layout="NHWC" (#4656)

* Update topi.cc

fix topi.nn.global_pool layout="NHWC"

* add topi.nn.global_pool layout=NHWC test

4 years ago  [CI] Update deps for chisel (#4675)
Tianqi Chen [Fri, 10 Jan 2020 18:29:44 +0000 (10:29 -0800)]
[CI] Update deps for chisel (#4675)

4 years ago  [CodeGen] Generate blob using LLVM directly (#4657)
Zhao Wu [Fri, 10 Jan 2020 17:01:00 +0000 (01:01 +0800)]
[CodeGen] Generate blob using LLVM directly (#4657)

4 years ago  [VTA] Update docker for TSIM based simulation (#4674)
Liangfu Chen [Fri, 10 Jan 2020 16:42:12 +0000 (00:42 +0800)]
[VTA] Update docker for TSIM based simulation (#4674)

4 years ago  Added pool autopadding and simplified parsers. (#4672)
Josh Fromm [Fri, 10 Jan 2020 09:29:01 +0000 (01:29 -0800)]
Added pool autopadding and simplified parsers. (#4672)

4 years ago  download fallback config file for search from tophub if it does not exist (#4671)
Xingyu Zhou [Fri, 10 Jan 2020 04:22:45 +0000 (20:22 -0800)]
download fallback config file for search from tophub if it does not exist (#4671)

4 years ago  [REFACTOR][IR] tvm::Expr -> PrimExpr (Primitive Expr) (#4669)
Tianqi Chen [Thu, 9 Jan 2020 23:30:23 +0000 (15:30 -0800)]
[REFACTOR][IR] tvm::Expr -> PrimExpr (Primitive Expr) (#4669)

* [REFACTOR][IR] tvm::Expr -> PrimExpr (Primitive Expr)

As part of unified IR, we will need to unify relay::Expr
and the current tvm::Expr under the same base type.

From the technical point of view, tvm::Expr is a "primitive"
expression that only contains POD types and handles and does
not do life-cycle management.

This PR renames Expr->PrimExpr to clarify that.
We will send a subsequent PR to introduce the base expr class.

* Remove legacy VarExpr and ExprHash/Equal
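
For illustration, here is a minimal sketch of primitive-expression code after the rename. It is not taken from the PR itself; header paths and the arithmetic overloads are recalled from the early-2020 codebase and may differ slightly.

```
#include <tvm/expr.h>           // assumed header location at the time of this refactor
#include <tvm/expr_operator.h>  // assumed home of the arithmetic overloads

// Primitive index arithmetic is now written against PrimExpr instead of tvm::Expr.
tvm::PrimExpr FlattenIndex(tvm::PrimExpr row, tvm::PrimExpr col, tvm::PrimExpr stride) {
  // Operator overloads build new primitive expression nodes (Mul, Add).
  return row * stride + col;
}
```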

4 years ago  Use int for endch to fix portability issues regarding signed/unsigned char (#4668)
Trevor Morris [Thu, 9 Jan 2020 19:06:38 +0000 (11:06 -0800)]
Use int for endch to fix portability issues regarding signed/unsigned char (#4668)

4 years ago  [Relay][Frontend][TFlite] Add parser support for unary elemwise ops (#4634)
Ina Dobreva [Thu, 9 Jan 2020 18:46:26 +0000 (18:46 +0000)]
[Relay][Frontend][TFlite] Add parser support for unary elemwise ops (#4634)

* [Relay][Frontend][Tflite] Add parser support for unary elemwise ops

* Add a generic method to convert unary functions: abs, exp, ceil, floor,
  log, sin, cos, sqrt, rsqrt, neg
* Add relevant tests

* Delete excessive underscores as requested in PR review

* Change parameter name as suggested in PR review

4 years ago  [REFACTOR] relay::Module Def -> TypeDef (#4665)
Tianqi Chen [Thu, 9 Jan 2020 17:12:56 +0000 (09:12 -0800)]
[REFACTOR] relay::Module Def -> TypeDef (#4665)

* [REFACTOR] relay::Module Def -> TypeDef

The term Def was not very clear about what the object of interest is (it could be a function def or a type def).
This changes the term to TypeDef to be more explicit.

* Update include/tvm/relay/module.h

Co-Authored-By: Wei Chen <ipondering.weic@gmail.com>
Co-authored-by: Wei Chen <ipondering.weic@gmail.com>
4 years ago  [Relay/Topi][Op] 1D Pooling (#4663)
Josh Fromm [Thu, 9 Jan 2020 09:43:25 +0000 (01:43 -0800)]
[Relay/Topi][Op] 1D Pooling (#4663)

* Added 1D pooling to Topi

* Added 1D pooling relay op and tests.

* Added onnx parsing and tests for maxpool1d and averagepool1d

* formatting

* moved partial import.

* Fixed typo.

4 years ago  [Autotvm] Use VM compile to extract autotvm tasks (#4328)
Haichen Shen [Thu, 9 Jan 2020 01:15:55 +0000 (17:15 -0800)]
[Autotvm] Use VM compile to extract autotvm tasks (#4328)

* [AutoTVM] Use VM compile when extracting tasks from Relay

* update

* restructure vm compiler to reduce task extraction time

* x

* fix

* update doc

* update doc

* lint

4 years ago  [CI] Recover Windows/Mac Build CI via GitHub Actions (#4662)
Tianqi Chen [Thu, 9 Jan 2020 01:02:12 +0000 (17:02 -0800)]
[CI] Recover Windows/Mac Build CI via GitHub Actions (#4662)

* [RUNTIME] Fix windows build after the latest dso module change.

Switch to shared_ptr to get around a problem in latest MSVC.

* [CI] Add github action for win mac build.

4 years ago  [CONV] Reduce data size of asymmetric padding testcase (#4658)
optima2005 [Wed, 8 Jan 2020 18:26:14 +0000 (02:26 +0800)]
[CONV] Reduce data size of asymmetric padding testcase (#4658)

Co-authored-by: Tianqi Chen <tqchen@users.noreply.github.com>
4 years ago  [REFACTOR][IR] Add Node suffix to low-level IR nodes (#4649)
Tianqi Chen [Wed, 8 Jan 2020 17:01:00 +0000 (09:01 -0800)]
[REFACTOR][IR] Add Node suffix to low-level IR nodes (#4649)

* [REFACTOR][IR] Variable -> VarNode

* [REFACTOR][IR] Add/Sub/Mul/Div -> AddNode/SubNode etc.

* [REFACTOR][IR] Min/Max/FloorDiv/FloorMod -> MinNode/MaxNode etc.

* [REFACTOR][IR] EQ/NE/LT/LE/GT/GE/Select -> EQNode/NENode etc.

* [REFACTOR][IR] Add Node suffix to Select/Call/Load/Ramp/Shuffle/Let

* [REFACTOR][IR] Add node suffix to IntImm/UIntImm/FloatImm/StringImm

* [REFACTOR][IR] Add Node suffix to Any, AttrStmt, AssertStmt

* [REFACTOR][IR] Add Node suffix to Store/Provide/Allocate/Free

* [REFACTOR][IR] Add Node suffix to ProducerConsumer

* Fix lint

* style updates, test fixes
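
For illustration, a hedged sketch of how downcasting reads after the rename; the surrounding function and the `expr` parameter are made up, and only the AddNode name comes from the list above. Header and namespace spellings are recalled from that era and may differ.

```
#include <tvm/expr.h>  // assumed header location at the time

// Downcasting now uses the Node-suffixed class names (AddNode instead of Add).
bool IsAddition(const tvm::PrimExpr& expr) {
  if (const tvm::ir::AddNode* add = expr.as<tvm::ir::AddNode>()) {
    (void)add;  // operands are available as add->a and add->b
    return true;
  }
  return false;
}
```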

4 years ago  reduce input size to fix oom (#4653)
Zhi [Wed, 8 Jan 2020 16:52:38 +0000 (08:52 -0800)]
reduce input size to fix oom (#4653)

4 years ago  [COMMUNITY] @MarisaKirisame -> committer (#4645)
Haichen Shen [Wed, 8 Jan 2020 02:23:02 +0000 (18:23 -0800)]
[COMMUNITY] @MarisaKirisame -> committer (#4645)

4 years ago  [RUNTIME][DSO] Improve TVMBackendPackedCFunc to allow return val (#4637)
Tianqi Chen [Tue, 7 Jan 2020 23:28:26 +0000 (15:28 -0800)]
[RUNTIME][DSO] Improve TVMBackendPackedCFunc to allow return val (#4637)

* [RUNTIME][DSO] Improve TVMBackendPackedCFunc to allow return value.

Previously the signature of LibraryModule's PackedFunc did not support a return value.
This wasn't a limitation for our current use case but could become one
as we start to generate more interesting functions.

This feature also starts to get interesting as we move towards a unified
object protocol and start to pass objects around.
This PR enhances the function signature to allow return values.

We also created two macros TVM_DLL_EXPORT_PACKED_FUNC and TVM_DLL_EXPORT_TYPED_FUNC
to allow manual creation of functions that can be loaded by a LibraryModule.

Examples are added in apps/dso_plugin_module.
The change to TVMBackendPackedCFunc is backward compatible,
as previous functions will simply ignore the return value field.

* address review comments
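
As a concrete illustration, here is a hedged sketch modeled on the apps/dso_plugin_module example mentioned above; the exact file contents differ, and the AddOne function here is purely illustrative.

```
#include <tvm/runtime/packed_func.h>

namespace {
// Plain C++ function whose return value is carried through the enhanced
// TVMBackendPackedCFunc signature.
int AddOne_(int x) { return x + 1; }
}  // namespace

// Exports a symbol named "AddOne" that a LibraryModule can load as a
// PackedFunc which returns x + 1.
TVM_DLL_EXPORT_TYPED_FUNC(AddOne, AddOne_);
```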

4 years ago  [QNN] Channel wise quantization - Quantize & Requantize (#4629)
Animesh Jain [Tue, 7 Jan 2020 22:07:42 +0000 (14:07 -0800)]
[QNN] Channel wise quantization - Quantize & Requantize (#4629)

4 years ago  Resolve constexpr related link error in debug mode (#4641)
Tianqi Chen [Tue, 7 Jan 2020 17:10:33 +0000 (09:10 -0800)]
Resolve constexpr related link error in debug mode (#4641)

4 years ago  [CI] better deletion script for pycache (#4635)
Tianqi Chen [Tue, 7 Jan 2020 06:00:53 +0000 (22:00 -0800)]
[CI] better deletion script for pycache (#4635)

4 years ago  [COMMUNITY] @wweic -> committer (#4636)
Tianqi Chen [Mon, 6 Jan 2020 21:02:16 +0000 (13:02 -0800)]
[COMMUNITY] @wweic -> committer (#4636)

4 years ago  [FRONTEND][Keras] Add support for tf.Keras networks in Relay Keras frontend (#4630)
Leandro Nunes [Mon, 6 Jan 2020 17:50:12 +0000 (17:50 +0000)]
[FRONTEND][Keras] Add support for tf.Keras networks in Relay Keras frontend (#4630)

* Make Relay Keras frontend support networks created using
   TensorFlow (1.13) Keras implementation (tf.Keras)
 * Modify Keras frontend tests to run from a class rather than a
   function based script
 * Adjust Keras frontend tests to run with both 'Keras' and 'tf.Keras'
 * Change "TestKeras.test_forward_merge" to validate instances by
   class name rather than instance type

4 years ago  Update image version tags in Dockerfile comments (#4631)
Leandro Nunes [Mon, 6 Jan 2020 17:07:26 +0000 (17:07 +0000)]
Update image version tags in Dockerfile comments (#4631)

* Fix typos on Docker image versions that we are currently running
   as part of CI

 * Add version comment in the same pattern for ci_lint image

4 years ago  Pin python pillow to "<7" due to torchvision 1.2.0 dependency issue (#4632)
Leandro Nunes [Mon, 6 Jan 2020 16:31:07 +0000 (16:31 +0000)]
Pin python pillow to "<7" due to torchvision 1.2.0 dependency issue (#4632)

* As a result of backwards incompatible changes released in pillow 7.0,
   torchvision crashes if you just "pip install pillow", as we do in
   a few places.

 * This patch sets pillow<7 to be installed in Dockerfiles and supporting
   material such as tutorials and documentation.

4 years ago  Improve comments (#4633)
Ramana Radhakrishnan [Mon, 6 Jan 2020 16:30:51 +0000 (16:30 +0000)]
Improve comments (#4633)

* Improve commentary for operator fusion.

* Attempt to clarify what the well-formed checker is doing

4 years ago  [REFACTOR][IR] Introduce SeqStmt to replace ir::Block (#4627)
Tianqi Chen [Mon, 6 Jan 2020 05:39:33 +0000 (21:39 -0800)]
[REFACTOR][IR] Introduce SeqStmt to replace ir::Block (#4627)

* [REFACTOR][IR] Introduce SeqStmt to replace Block

ir::Block was used to represent a sequence of Stmts in the original low-level IR.
The nested ir::Block structure is not really friendly for recursive visits,
especially when the statements are unrolled.

This PR introduces a SeqStmt that directly stores a sequence of statements in an Array container.
The new SeqStmt will be used as a replacement of the original Block structure.

* [REFACTOR] Migrate use of Block to SeqStmt.

* [REFACTOR] Remove Block

* Add more comments per yizhi's comment
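
A minimal sketch of the new construction, assuming the SeqStmt(Array<Stmt>) constructor described above; the exact headers and namespaces of the 2020 codebase are recalled from memory.

```
#include <tvm/ir.h>  // assumed location of the low-level IR at the time

using namespace tvm;
using namespace tvm::ir;

// Children are stored flat in an Array rather than a nested Block chain,
// which keeps recursive visitors simple even when statements are unrolled.
Stmt MakeBody(Stmt a, Stmt b, Stmt c) {
  return SeqStmt({a, b, c});
}
```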

4 years ago  [CONV] Asymmetric padding (#4511)
optima2005 [Mon, 6 Jan 2020 03:53:47 +0000 (11:53 +0800)]
[CONV] Asymmetric padding (#4511)

* [CONV] Asymmetric padding

* fix lint error

* update for legalize, rocm and cudnn

* add more test cases

* change more symmetric padding

* change conv2d winograd tests according to original cases

* remove 'alter_op_layout.h' header in bitserial.cc

4 years ago  [Topi] Allow empty tensor for reshape, tile and strided_slice (#4618)
Yao Wang [Mon, 6 Jan 2020 03:50:09 +0000 (19:50 -0800)]
[Topi] Allow empty tensor for reshape, tile and strided_slice (#4618)

* Support empty tensor

* Fix schedule

* Refactor

* Minor fix

* Fix pylint

* Merge cpp and python is_empty_shape

4 years ago  [REFACTOR] Automatically deduce function type signature in Registry.set_body_typed...
Tianqi Chen [Mon, 6 Jan 2020 01:52:45 +0000 (17:52 -0800)]
[REFACTOR] Automatically deduce function type signature in Registry.set_body_typed (#4623)

Previously we supported a limited case of function type deduction, and in many places
we had to supply the type twice during set_body_typed (once in the template parameter, again in the lambda signature).

This PR improves type deduction by enabling automatic function signature deduction.

```
TVM_REGISTER_GLOBAL("sub")
.set_body_typed([](int x, int y) -> int { return x - y; });
```

Unfortunately, because of a template conflict, we cannot support the original case
where both the type signature and the lambda are supplied through set_body_typed.

This PR refactors the existing registration to the new style.

4 years ago  Get around limitation of g++-4.8 (#4626)
Tianqi Chen [Mon, 6 Jan 2020 01:52:30 +0000 (17:52 -0800)]
Get around limitation of g++-4.8 (#4626)

4 years ago  Added declaration of aluBits for TensorAlu (#4624)
Kevin Yuan [Mon, 6 Jan 2020 00:56:16 +0000 (08:56 +0800)]
Added declaration of aluBits for TensorAlu (#4624)

4 years ago  tensor_array split test (#4619)
Zhi [Sun, 5 Jan 2020 20:17:16 +0000 (12:17 -0800)]
tensor_array split test (#4619)

4 years ago  [REFACTOR] IRPrinter->NodePrinter, move to node/printer.h (#4622)
Tianqi Chen [Sun, 5 Jan 2020 04:09:18 +0000 (20:09 -0800)]
[REFACTOR] IRPrinter->NodePrinter, move to node/printer.h (#4622)

Rationale: printer is a common infra that is shared across all nodes.

4 years ago  [REFACTOR] TVM_REGISTER_API -> TVM_REGISTER_GLOBAL (#4621)
Tianqi Chen [Sat, 4 Jan 2020 23:38:56 +0000 (15:38 -0800)]
[REFACTOR] TVM_REGISTER_API -> TVM_REGISTER_GLOBAL (#4621)

TVM_REGISTER_API is an alias of TVM_REGISTER_GLOBAL.
In the spirit of simplifying redirections, this PR removes
the original TVM_REGISTER_API macro and directly uses TVM_REGISTER_GLOBAL.

This type of refactor also simplifies IDE navigation tools
such as the FFI navigator, providing a better code reading experience.

Move EnvFunc's definition to node.
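
A small sketch of what a registration looks like after the rename, modeled on the set_body_typed example shown elsewhere in this log; the "example.add" name is made up for illustration.

```
#include <tvm/runtime/registry.h>

// Previously written with the TVM_REGISTER_API alias; now spelled directly.
TVM_REGISTER_GLOBAL("example.add")
.set_body_typed([](int x, int y) -> int { return x + y; });
```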

4 years ago  [REFACTOR] Unified IR base types. (#4616)
Tianqi Chen [Sat, 4 Jan 2020 20:26:21 +0000 (12:26 -0800)]
[REFACTOR] Unified IR base types. (#4616)

This PR moves a few base types from relay to the ir sub-folder.
These types will serve as a common type system across the stack.

Notably, we want to be able to use the same FuncType for all function signatures.
I tried to make a minimum move to bring the necessary dependencies for a FuncType.
We can discuss what additional things we want to move as a follow-up.

Notably, because the TensorType will have a dependency on low-level Expr,
we will need to break the type.h into two files and introduce a
tensor_type.h (or leave them in relay for now).

4 years ago  [REFACTOR][TYPE] Remove unnecessary var sub-field in GlobalTypeVar and TypeVar ...
Tianqi Chen [Sat, 4 Jan 2020 07:08:55 +0000 (23:08 -0800)]
[REFACTOR][TYPE] Remove unnecessary var sub-field in GlobalTypeVar and TypeVar (#4615)

Currently, we use a tvm::Var to represent a placeholder for shapes in generic types.
This is not necessary for GlobalTypeVar (as we never parameterize by shape var),
and is a bit twisted for TypeVar.

As we move to a unified type system, we want to break the dependency
of the base TypeVar (which is shared across the languages) on the expression.
Note that it is fine for TensorType to depend on Expr.

One alternative solution to embed the Var would be to introduce a TypeVarExpr,
which can wrap a TypeVar as Expr. However, this new alternative won't be
natural until we migrate the type to the global scope.

Luckily, we have not yet started to depend heavily on shape parameterization.

This PR removes the tvm::Var from the typevars. We will follow up with another
PR to migrate the types to a base location. After that, we should be able to
use the more elegant approach via TypeVarExpr.

4 years ago  [Relay][Pass] Improve memory_allocation pass to support multiple i/o dynamic kernels...
Yao Wang [Sat, 4 Jan 2020 06:19:00 +0000 (22:19 -0800)]
[Relay][Pass] Improve memory_allocation pass to support multiple i/o dynamic kernels (#4595)

* Add more shape funcs

* Fix test

* Enhance test_any_concat

* Fix pylint

* Minor fix test

* Fix pylint

* Minor refactor

* Add test any for elemwise

4 years ago  [relay][tensor_array] test tensor_array in vm (#4608)
Zhi [Fri, 3 Jan 2020 21:31:21 +0000 (13:31 -0800)]
[relay][tensor_array] test tensor_array in vm (#4608)

* [relay] test tensor_array in vm

* add tensor_array scatter test

4 years ago  skip example json runtime test when config is not set (#4614)
Zhi [Fri, 3 Jan 2020 20:29:05 +0000 (12:29 -0800)]
skip example json runtime test when config is not set (#4614)

4 years ago  [CMAKE] Remove unnecessary rdynamic (#4613)
Tianqi Chen [Fri, 3 Jan 2020 20:28:49 +0000 (12:28 -0800)]
[CMAKE] Remove unnecessary rdynamic (#4613)

4 years ago  [VTA] Throw exception on mis-formatted files and avoid overwriting Scala code (#4555)
Liangfu Chen [Fri, 3 Jan 2020 17:37:50 +0000 (01:37 +0800)]
[VTA] Throw exception on mis-formatted files and avoid overwriting Scala code (#4555)

4 years ago  [QNN] Making scale/zero_points as expr instead of attrs. (#4611)
Animesh Jain [Fri, 3 Jan 2020 13:39:56 +0000 (05:39 -0800)]
[QNN] Making scale/zero_points as expr instead of attrs. (#4611)

4 years ago  [Quantization] Make calibration faster and more memory usage friendly (#4589)
masahi [Fri, 3 Jan 2020 12:23:09 +0000 (21:23 +0900)]
[Quantization] Make calibration faster and more memory usage friendly (#4589)

* Use memory efficient calibrate

* Fixed indexing

* add cpp kl stub

* ported KL cpp from mxnet

* Fixed std::distance arguments order

* remove python implementation

* fix lint and indent

* fix indent

* refactoring

* fix lint

* fix for i386

4 years ago  [REFACTOR] Remove old Low-level Visitor/Mutator (#4612)
Tianqi Chen [Fri, 3 Jan 2020 01:52:37 +0000 (17:52 -0800)]
[REFACTOR] Remove old Low-level Visitor/Mutator (#4612)

4 years ago  [TOPI, Relay] Add half_pixel option to Resize op (#4610)
masahi [Fri, 3 Jan 2020 00:14:14 +0000 (09:14 +0900)]
[TOPI, Relay] Add half_pixel option to Resize op (#4610)

* add onnx resize converter

* update frontends

* updating topi

* adding onnx resize tests

* fixed NHWC test by casting size dtype to int32

* fix tests

* fix lint

* update existing test cases

* fix tensorflow frontend

* fix lint

* remove NHWC stuff

* update topi resize test for half_pixel

* update doc

* fix doc

* remove onnx resize bits

4 years ago  [REFACTOR] Migrate Low-level IR Passes into the New Stmt/Expr Mutator (#4607)
Tianqi Chen [Fri, 3 Jan 2020 00:00:53 +0000 (16:00 -0800)]
[REFACTOR] Migrate Low-level IR Passes into the New Stmt/Expr Mutator (#4607)

* CombineContextCall

* Migrate BoundChecker

* Migrate CoprocSync

* Migrate detect_device

* Migrate loop_partition

* Migrate infer_fragment

* Migrate inject_copy_intrin

* Migrate inject double buffer

* Migrate lower_intrin and simplify

* Migrate storage flatten

* Migrate inject prefetch

* Migrate inject_virtual_thread

* migrate inline

* Migrate lift attr scope

* Migrate custom datatypes

* migrate lower_thread_all_reduce

* Migrate lower_tvm_builtin

* migrate lower_warp memory

* Migrate make_api.cc

* Migrate remap_thread_axis

* Migrate remove_no_op

* migrate rewrite_unsafe_select

* Migrate skip_assert simple_passes

* Migrate split_host_device

* Migrate ssa

* Migrate storage_access

* Migrate storage_rewrite

* Migrate tensor_core

* Migrate unroll_loop

* Migrate vectorize

* Migrate verify compact_buffer gpu_code

* Migrate verify_memory

* Migrate storage_sync

* Remove unused refs to mutator

* Migrate hybrid_op

* Migrate tensorize

* Migrate schedule ops

* Migrate schedule_dataflow_rewrite

* Migrate auto_inline_elemwise

* Remove unnecessary ref to visitor

* remove unnecessary ref

* Migrate bound_deducer

* Migrate domain_touched

* Migrate autotvm feature touch extractor

* Add annotations

4 years ago  Bugfix StmtMutator IfThenElse (#4609)
Tianqi Chen [Thu, 2 Jan 2020 17:08:54 +0000 (09:08 -0800)]
Bugfix StmtMutator IfThenElse (#4609)

4 years ago  [IR] Unify approach to Visitor/Mutator under Functor (#4606)
Tianqi Chen [Thu, 2 Jan 2020 00:30:47 +0000 (16:30 -0800)]
[IR] Unify approach to Visitor/Mutator under Functor (#4606)

IRMutator and IRVisitor were the main data structures for low-level IR visiting.
As the project evolved, we started to introduce more powerful variants such as StmtFunctor and ExprFunctor.
This PR brings new classes that allow us to migrate the visitors and mutators to be sub-classes of these functors.

List of changes:

- Create separate class for ExprMutator and StmtMutator, following convention used in relay.
- Introduce copy-on-write to StmtMutator, which can later benefit statement mutations
  if we use move semantics and keep a single copy of the stmt.
- Move two generic visit/mutate utils to use the new classes.

We will send follow-up PRs to migrate the existing passes that use the legacy visitors
to the new ones.
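
For illustration, a hedged sketch of the new pattern; the StmtMutator class name comes from the list above, while the header, the visit-method signature, and the AssertStmt example are recalled from memory and may not match the code at this commit exactly.

```
#include <tvm/ir_functor_ext.h>  // assumed header for the new functors at the time

// A pass-style rewriter now subclasses the functor-based StmtMutator instead of IRMutator.
class DropAsserts : public tvm::ir::StmtMutator {
 public:
  tvm::Stmt VisitStmt_(const tvm::ir::AssertStmt* op) override {
    // Replace each assert statement with its (mutated) body.
    return this->VisitStmt(op->body);
  }
};
```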

4 years ago  [FRONTEND][TF] Add conv3d (#4604)
optima2005 [Wed, 1 Jan 2020 09:42:54 +0000 (17:42 +0800)]
[FRONTEND][TF] Add conv3d (#4604)

* [FRONTEND][TF] Add conv3d

* fix high rtol

4 years ago  make adt tag signed (#4605)
Zhi [Wed, 1 Jan 2020 06:36:19 +0000 (22:36 -0800)]
make adt tag signed (#4605)

4 years ago  Sort VM stats by time (#4601)
Zhi [Tue, 31 Dec 2019 19:16:12 +0000 (11:16 -0800)]
Sort VM stats by time (#4601)

4 years ago  [REFACTOR][OBJECT] Consolidate NodePtr/Ref/Hash/Equal to Object (#4603)
Tianqi Chen [Tue, 31 Dec 2019 17:35:03 +0000 (09:35 -0800)]
[REFACTOR][OBJECT] Consolidate NodePtr/Ref/Hash/Equal to Object (#4603)

* [REFACTOR][OBJECT] Consolidate NodePtr/Ref/Hash/Equal and macros to Object.

Historically, we had classes like NodePtr/Ref/HashEqual.
After the unified object protocol, these names are just aliases of their Object counterparts.
Moreover, there were helper macros defined all over the place for defining these objects.

This PR consolidates the terminology into the corresponding names
in the Object system so we have a clean and consistent API moving forward.

* Update include/tvm/attrs.h

Co-Authored-By: Wei Chen <ipondering.weic@gmail.com>
* fix compilation

Co-authored-by: Wei Chen <ipondering.weic@gmail.com>
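
For reference, a hedged sketch of the consolidated idiom. The two macros are from the Object system as recalled at the time; the ExampleNode/Example names are made up for illustration.

```
#include <tvm/runtime/object.h>

using tvm::runtime::Object;
using tvm::runtime::ObjectRef;

// A node declared directly against the Object system instead of the old Node helpers.
class ExampleNode : public Object {
 public:
  int value{0};
  static constexpr const char* _type_key = "example.ExampleNode";
  TVM_DECLARE_FINAL_OBJECT_INFO(ExampleNode, Object);
};

// The reference type uses the consolidated ObjectRef macro rather than the
// legacy NodeRef-style helpers.
class Example : public ObjectRef {
 public:
  TVM_DEFINE_OBJECT_REF_METHODS(Example, ObjectRef, ExampleNode);
};
```
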
4 years ago  [relay][refactor] Cache Op::Get in passes to reduce lookup overhead (#4594)
Zhi [Tue, 31 Dec 2019 00:33:50 +0000 (16:33 -0800)]
[relay][refactor] Cache Op::Get in passes to reduce lookup overhead (#4594)

* Refactor to use IsOp utility

* retrigger CI

4 years ago  [Relay][Convert Layout] Handling batch norm layout change. (#4600)
Animesh Jain [Mon, 30 Dec 2019 23:35:25 +0000 (15:35 -0800)]
[Relay][Convert Layout] Handling batch norm layout change. (#4600)

4 years ago  [REFACTOR][RUNTIME] Update NDArray to use the Unified Object System (#4581)
Tianqi Chen [Mon, 30 Dec 2019 06:16:27 +0000 (22:16 -0800)]
[REFACTOR][RUNTIME] Update NDArray to use the Unified Object System (#4581)

* [REFACTOR][RUNTIME] Move NDArray to Object System.

Previously NDArray had its own object reference counting mechanism.
This PR migrates NDArray to the unified object protocol.

The calling convention of NDArray remained intact.
That means NDArray still has its own type_code and
its handle is still DLTensor compatible.

In order to do so, this PR adds a minimal amount of runtime type
detection in TVMArgValue and RetValue, only when the corresponding
type is a base type (ObjectRef) that could also refer to an NDArray.

This means that even if we return a base reference object ObjectRef
that refers to an NDArray, the type_code will still be translated
correctly as kNDArrayContainer.
If we assign a non-base type (say Expr) that we know at compile time is not compatible
with NDArray, no runtime type detection will be performed.

This PR also adopts the object protocol for NDArray sub-classing and
removes the legacy NDArray subclass protocol.
Examples in apps/extension are now updated to reflect that.

Making NDArray an Object brings all the benefits of the object system.
For example, we can now use the Array container to store NDArrays.

* Address review comments
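
A brief sketch of the benefit called out above; header paths are recalled from the codebase of that era and may not be exact.

```
#include <tvm/node/container.h>   // Array (assumed header location at the time)
#include <tvm/runtime/ndarray.h>

// Because NDArray is now an ObjectRef, it can be stored in the Array container
// alongside other objects.
tvm::Array<tvm::runtime::NDArray> MakePair(tvm::runtime::NDArray a,
                                           tvm::runtime::NDArray b) {
  return {a, b};
}
```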

4 years ago  fix codegenc (#4597)
Zhi [Mon, 30 Dec 2019 04:38:16 +0000 (20:38 -0800)]
fix codegenc (#4597)

4 years ago  [Perf] Add CublasLt extern support for better Igemm performance (#4550)
Leyuan Wang [Sun, 29 Dec 2019 22:35:38 +0000 (14:35 -0800)]
[Perf] Add CublasLt extern support for better Igemm performance (#4550)

* cublaslt added

* fix lint

* address comments

* address more comments

* Trigger CI

* Trigger CI

4 years ago  [GraphRuntime] Support parameter out in the graph runtime debug (#4598)
Neo Chien [Sun, 29 Dec 2019 21:21:04 +0000 (05:21 +0800)]
[GraphRuntime] Support parameter out in the graph runtime debug (#4598)

* [GraphRuntime] Support parameter out in the graph runtime debug

* Dummy commit to trigger build

4 years ago  [FRONTEND][TF] conv2d_transpose 'SAME' support kernels larger than 1x1 (#4484)
optima2005 [Sat, 28 Dec 2019 20:05:14 +0000 (04:05 +0800)]
[FRONTEND][TF] conv2d_transpose 'SAME' support kernels larger than 1x1 (#4484)

* [FRONTEND][TF] conv3d_transpose 'SAME' support kernels larger than 1x1

* revised as per review comments

* add more fallback workarounds to make all tests pass

4 years ago  fix tf.compat.v1 issue for tf version <=1.12 (#4593)
zhuochen [Sat, 28 Dec 2019 20:04:41 +0000 (04:04 +0800)]
fix tf.compat.v1 issue for tf version <=1.12 (#4593)

4 years ago  [autotvm] fix typos in comment (#4591)
Wang Yucheng [Fri, 27 Dec 2019 19:22:42 +0000 (03:22 +0800)]
[autotvm] fix typos in comment (#4591)

4 years ago  [Runtime] add necessary const qualifier for NDArray container of parameters (#4590)
Zhao Wu (Chinese Name: 吴钊) [Fri, 27 Dec 2019 15:49:57 +0000 (23:49 +0800)]
[Runtime] add necessary const qualifier for NDArray container of parameters (#4590)

4 years ago  [TOPI] add 3D upsampling Op. (#4584)
optima2005 [Fri, 27 Dec 2019 14:25:25 +0000 (22:25 +0800)]
[TOPI] add 3D upsampling Op. (#4584)

* [TOPI] add 3D upsampling Op.

* fix lint issues

* change align_corners to coordinate_transformation_mode

* fix resize3d half_pixel

* make a simple function and clean up trilinear_resize3d_python

* fix doc

4 years ago  [Relay][AlterLayout] Broadcast with scalar shape (#4577)
Animesh Jain [Fri, 27 Dec 2019 02:42:21 +0000 (18:42 -0800)]
[Relay][AlterLayout] Broadcast with scalar shape (#4577)

4 years ago  [Relay] Convert Layout Pass. (#4335)
Animesh Jain [Thu, 26 Dec 2019 19:15:46 +0000 (11:15 -0800)]
[Relay] Convert Layout Pass. (#4335)

4 years ago  [FIX][TOPI][X86] schedule dense pack (#4539)
deepIgnorance [Thu, 26 Dec 2019 18:10:44 +0000 (02:10 +0800)]
[FIX][TOPI][X86] schedule dense pack (#4539)

4 years ago  [TOPI][AutoTVM] NHWC conv2d templates for ARM (#3859)
黎明灰烬 [Thu, 26 Dec 2019 17:36:31 +0000 (01:36 +0800)]
[TOPI][AutoTVM] NHWC conv2d templates for ARM (#3859)

* [AutoTVM][TOPI] NHWC conv2d templates (spatial pack) for ARM

As some frontends (tflite for example) are using NHWC as the default
layout, we are enabling NHWC schedule templates in TOPI and AutoTVM.

* some comments fix

4 years ago  [Container] Fix NDArray SaveDLTensor declaration and implementation signature differe...
Zhao Wu [Thu, 26 Dec 2019 17:33:03 +0000 (01:33 +0800)]
[Container] Fix NDArray SaveDLTensor declaration and implementation signature difference (#4586)

4 years ago  [Quantization, Calibrate] Fix context creation when current_target is explicitly set...
masahi [Thu, 26 Dec 2019 14:13:38 +0000 (23:13 +0900)]
[Quantization, Calibrate] Fix context creation when current_target is explicitly set (#4582)

4 years ago  [DOCS] fix typos in autotvm tutorial (#4585)
Wang Yucheng [Thu, 26 Dec 2019 09:45:59 +0000 (17:45 +0800)]
[DOCS] fix typos in autotvm tutorial (#4585)

4 years ago  [NEWS] add v0.6 release (#4558)
Yizhi Liu [Thu, 26 Dec 2019 04:31:22 +0000 (20:31 -0800)]
[NEWS] add v0.6 release (#4558)

* [NEWS] add v0.6 release

* remove link prefix

* fix issue number

4 years ago  Some Windows and MSVC fixes (#4569)
kice [Wed, 25 Dec 2019 21:42:03 +0000 (16:42 -0500)]
Some Windows and MSVC fixes (#4569)

* fix python exception creation in Windows

* better string conversion for msvc

* fix cpp style issue

4 years ago  [RUNTIME] Remove Extension VTable in favor of Unified Object system. (#4578)
Tianqi Chen [Wed, 25 Dec 2019 17:21:01 +0000 (09:21 -0800)]
[RUNTIME] Remove Extension VTable in favor of Unified Object system. (#4578)

Before the unified object protocol, we supported passing
additional extension objects around by declaring a type as an extension type.
The old extension mechanism required the types to register their
constructor and deleter in a VTable and did not enjoy the benefit of the
self-contained deletion property of the new Object system.

This PR upgrades the extension example to make use of the new object system
and removes the old Extension VTable.

Note that the register_extension function on the Python side continues to work
when the passed argument does not require explicit container copy/deletion,
which covers the current use cases of the extension mechanism.

4 years ago  [DEPRECATION] Cleanup legacy verilog support (#4576)
Tianqi Chen [Tue, 24 Dec 2019 23:14:03 +0000 (15:14 -0800)]
[DEPRECATION] Cleanup legacy verilog support (#4576)

This PR cleans up the leftover code for legacy Verilog support, which was experimental.
The new hardware backend path is now supported by VTA via TSIM.

4 years ago  [DOC] fix doc in api.py (#4580)
Bohan Hou [Tue, 24 Dec 2019 16:51:05 +0000 (00:51 +0800)]
[DOC] fix doc in api.py (#4580)

4 years ago  [Relay/Topi][Op] Added native DepthToSpace and SpaceToDepth Operators (#4566)
Josh Fromm [Tue, 24 Dec 2019 05:04:34 +0000 (00:04 -0500)]
[Relay/Topi][Op] Added native DepthToSpace and SpaceToDepth Operators (#4566)

* Added tvm function stencil for subpixel operations to topi.

* Topi subpixel operators added and tested.

* Added subpixel attrs.

* Added depth_to_space relay attributes.

* depth_to_space fully working.

* Fixed NHWC shape bug.

* SpaceToDepth in and all tests passing.

* lint fixes.

* Added string include

* Fixed topi formatting.

* Added DCR/CDR mode to depthtospace operator.

4 years ago  [DEPRECATION] Remove NNVM compiler (#4571)
Tianqi Chen [Mon, 23 Dec 2019 19:51:26 +0000 (11:51 -0800)]
[DEPRECATION] Remove NNVM compiler (#4571)

* Remove NNVM compiler

4 years ago  Fix llvm-enabled build by adding missing intrinsics headers (#4575)
Dmitri Makarov [Mon, 23 Dec 2019 16:50:48 +0000 (17:50 +0100)]
Fix llvm-enabled build by adding missing intrinsics headers (#4575)

4 years ago  remove unnecessary cast to int32 (#4573)
masahi [Mon, 23 Dec 2019 16:48:37 +0000 (01:48 +0900)]
remove unnecessary cast to int32 (#4573)

4 years ago  [VTA][Chisel] End-to-end Inference with Chisel VTA (#4574)
Liangfu Chen [Mon, 23 Dec 2019 16:43:52 +0000 (00:43 +0800)]
[VTA][Chisel] End-to-end Inference with Chisel VTA (#4574)

* [VTA][Chisel] End-to-end Inference with Chisel VTA

* Update TensorAlu.scala

4 years ago  Remove nnvm (#4565)
Tianqi Chen [Mon, 23 Dec 2019 04:52:33 +0000 (20:52 -0800)]
Remove nnvm (#4565)

4 years ago  [Relay] add max_pool3d in relay and TF converter (#4551)
Yong Wu [Mon, 23 Dec 2019 01:43:33 +0000 (17:43 -0800)]
[Relay] add max_pool3d in relay and TF converter (#4551)

* [Relay] add max_pool3d in relay and TF converter

* fix comments

4 years ago  [TEST] Remove nnvm related code in topi and test script (#4562)
Tianqi Chen [Sun, 22 Dec 2019 17:47:34 +0000 (09:47 -0800)]
[TEST] Remove nnvm related code in topi and test script (#4562)

* [TEST] Remove nnvm related code in topi and test script

* Remove docs dep

4 years ago  [Relay][Frontend][ONNX] Support auto_pad in Conv and ConvTranspose (#4563)
Neo Chien [Sun, 22 Dec 2019 17:03:39 +0000 (01:03 +0800)]
[Relay][Frontend][ONNX] Support auto_pad in Conv and ConvTranspose (#4563)

4 years ago  Support standardize runtime module (#4532)
Zhao Wu [Sun, 22 Dec 2019 04:14:40 +0000 (12:14 +0800)]
Support standardize runtime module (#4532)

4 years ago  [REFACTOR][DTYPE] Isolate dtype to runtime (#4560)
Tianqi Chen [Sun, 22 Dec 2019 02:26:21 +0000 (18:26 -0800)]
[REFACTOR][DTYPE] Isolate dtype to runtime (#4560)

dtype.h -> runtime/data_type.h

Changes:
- Rename all old references of tvm::Type to DataType
- ExprNode.type -> ExprNode.dtype
- Expr.type() -> Expr.dtype()
- Change Expr related functions to expr_operator.
  - DataType::min() -> min_value(DataType)
  - DataType::max() -> max_value(DataType)
- Move type constructor Int, UInt, Float, Handle, Bool into DataType.
  - Int(bits) -> DataType::Int(bits)
  - UInt(bits) -> DataType::UInt(bits)
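
A short illustrative sketch using only helpers named in the list above; the namespace spelling is recalled from that era and may differ.

```
#include <tvm/runtime/data_type.h>  // formerly dtype.h

void DataTypeExample() {
  // Type constructors now live on DataType itself.
  tvm::DataType i32 = tvm::DataType::Int(32);
  tvm::DataType f16 = tvm::DataType::Float(16);
  bool same = (i32 == tvm::DataType::Int(32));  // DataType is a small value type
  (void)same;
}
```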

4 years ago  [RUNTIME][VULKAN] Fix compiler warning (#4559)
Tianqi Chen [Sun, 22 Dec 2019 02:26:11 +0000 (18:26 -0800)]
[RUNTIME][VULKAN] Fix compiler warning (#4559)

4 years ago  [IR] fix style in ir_mutator and ir_visitor (#4561)
Siyuan Feng [Sun, 22 Dec 2019 01:56:18 +0000 (17:56 -0800)]
[IR] fix style in ir_mutator and ir_visitor (#4561)

4 years ago  [VTA] improved virtual memory mapping (#4545)
Liangfu Chen [Sat, 21 Dec 2019 22:19:56 +0000 (06:19 +0800)]
[VTA] improved virtual memory mapping (#4545)

* [VTA] improved virtual memory mapping

* Update virtual_memory.cc

4 years ago  [COMMUNITY] @cchung100m -> reviewer (#4557)
Tianqi Chen [Fri, 20 Dec 2019 22:42:57 +0000 (14:42 -0800)]
[COMMUNITY] @cchung100m -> reviewer (#4557)

4 years ago  vm external codegen (#4544)
Zhi [Fri, 20 Dec 2019 22:36:14 +0000 (14:36 -0800)]
vm external codegen (#4544)

4 years ago  [PYTHON][FFI] Cythonize NDArray.copyto (#4549)
Tianqi Chen [Fri, 20 Dec 2019 22:21:09 +0000 (14:21 -0800)]
[PYTHON][FFI] Cythonize NDArray.copyto (#4549)

* [PYTHON][FFI] Cythonize NDArray.copyto

* Cythonize the shape property

4 years ago  [DOCS] Mention Ninja build system in install/from_source.rst (#4554)
Hideto Ueno [Fri, 20 Dec 2019 09:25:18 +0000 (18:25 +0900)]
[DOCS] Mention Ninja build system in install/from_source.rst (#4554)

* [DOCS] Mention Ninja build system in install/from_source.rst

* Address comments

4 years ago  [TOPI] Fixed nms max_output_size loop (#4541)
mbarrett97 [Wed, 18 Dec 2019 21:23:36 +0000 (21:23 +0000)]
[TOPI] Fixed nms max_output_size loop (#4541)

One of the loops in hybrid_nms used for
performing the max_output_size reordering
was incorrectly designated as parallel
resulting in incorrect behaviour. This patch
changes that loop to a serial loop.

Change-Id: I97184f5887f5f028d8ab339fa2808eb7630a4017

4 years ago  [TOPI] Allow batch matmul to be fused into injective ops (#4537)
Haichen Shen [Wed, 18 Dec 2019 21:17:18 +0000 (13:17 -0800)]
[TOPI] Allow batch matmul to be fused into injective ops (#4537)

4 years ago  [relay][op] add expand op (from ONNX) to relay frontend (#4483)
Takato Yamada [Wed, 18 Dec 2019 17:58:37 +0000 (02:58 +0900)]
[relay][op] add expand op (from ONNX) to relay frontend (#4483)

* Add Expand to onnx.py

* add test function for expand

* Fix an ONNX frontend test

* Add tests for the value itself instead of shape only on test_expand

* Cleaned up some unnecessary modifications.