Zhi [Sun, 5 Jan 2020 20:17:16 +0000 (12:17 -0800)]
tensor_array split test (#4619)
Tianqi Chen [Sun, 5 Jan 2020 04:09:18 +0000 (20:09 -0800)]
[REFACTOR] IRPrinter->NodePrinter, move to node/printer.h (#4622)
Rationale: printer is a common infra that is shared across all nodes.
Tianqi Chen [Sat, 4 Jan 2020 23:38:56 +0000 (15:38 -0800)]
[REFACTOR] TVM_REGISTER_API -> TVM_REGISTER_GLOBAL (#4621)
TVM_REGISTER_API is an alias of TVM_REGISTER_GLOBAL.
In the spirit of simplifying redirections, this PR removes
the original TVM_REGISTER_API macro and uses TVM_REGISTER_GLOBAL directly.
This kind of refactor also simplifies IDE navigation tools
such as FFI Navigator, providing a better code-reading experience.
Move EnvFunc's definition to node.
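For reference, a minimal sketch of what a registration looks like with the surviving macro; the global name "demo.add_one" and the lookup helper are illustrative and not part of this PR.

    // Register a packed function under a global name (the name is hypothetical).
    #include <tvm/runtime/registry.h>

    TVM_REGISTER_GLOBAL("demo.add_one")
    .set_body_typed([](int x) { return x + 1; });

    // Look the function up again through the global registry and call it.
    int CallAddOne() {
      const tvm::runtime::PackedFunc* f = tvm::runtime::Registry::Get("demo.add_one");
      return (*f)(41);  // 42
    }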
Tianqi Chen [Sat, 4 Jan 2020 20:26:21 +0000 (12:26 -0800)]
[REFACTOR] Unified IR base types. (#4616)
This PR moves a few base types from relay to the ir sub-folder.
These types will serve as a common type system across the stack.
Notably, we want to be able to use the same FuncType for all function signatures.
I tried to make a minimal move that brings in only the necessary dependencies for FuncType.
We can discuss what additional things we want to move as a follow-up.
In particular, because TensorType will have a dependency on the low-level Expr,
we will need to break type.h into two files and introduce a
tensor_type.h (or leave them in relay for now).
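As a rough illustration of the shared signature type, a hedged sketch assuming the post-move header tvm/ir/type.h and the constructor order FuncType(arg_types, ret_type, type_params, type_constraints); details may differ in this revision.

    // Build the signature of an identity-like function: forall T. (T) -> T
    #include <tvm/ir/type.h>

    tvm::FuncType MakeIdentitySignature() {
      tvm::TypeVar t("T", tvm::TypeKind::kType);
      return tvm::FuncType({t}, t, {t}, {});
    }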
Tianqi Chen [Sat, 4 Jan 2020 07:08:55 +0000 (23:08 -0800)]
[REFACTOR][TYPE] Remove un-necessary var sub-field in GlobalTypeVar and TypeVar (#4615)
Currently, we use a tvm::Var to represent a placeholder for shapes in generic types.
This is not necessary for GlobalTypeVar (as we never parameterize by a shape var),
and is a bit twisted for TypeVar.
As we move to a unified type system, we want to break the dependency
of the base TypeVar (which is shared across the languages) on the expression.
Note that it is fine for TensorType to depend on Expr.
One alternative solution to embedding the Var would be to introduce a TypeVarExpr,
which can wrap a TypeVar as an Expr. However, this alternative won't be
natural until we migrate the types to the global scope.
Luckily, we have not yet started to depend heavily on shape parameterization.
This PR removes the tvm::Var from the typevars. We will follow up with another
PR to migrate the types to a base location. After that, we should be able to
use the more elegant approach via TypeVarExpr.
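After this change a type variable carries only a name hint and a kind; a hedged sketch of the post-change constructors (header location and kind names are assumptions for this revision):

    #include <tvm/ir/type.h>

    // A generic type parameter: no embedded tvm::Var anymore.
    tvm::TypeVar MakeGenericParam() {
      return tvm::TypeVar("T", tvm::TypeKind::kType);
    }

    // A globally named type handle, e.g. for an ADT such as List.
    tvm::GlobalTypeVar MakeListHandle() {
      return tvm::GlobalTypeVar("List", tvm::TypeKind::kAdtHandle);
    }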
Yao Wang [Sat, 4 Jan 2020 06:19:00 +0000 (22:19 -0800)]
[Relay][Pass] Improve memory_allocation pass to support multiple i/o dynamic kernels (#4595)
* Add more shape funcs
* Fix test
* Enhance test_any_concat
* Fix pylint
* Minor fix test
* Fix pylint
* Minor refactor
* Add test any for elemwise
Zhi [Fri, 3 Jan 2020 21:31:21 +0000 (13:31 -0800)]
[relay][tensor_array] test tensor_array in vm (#4608)
* [relay] test tensor_array in vm
* add tensor_array scatter test
Zhi [Fri, 3 Jan 2020 20:29:05 +0000 (12:29 -0800)]
skip example json runtime test when config is not set (#4614)
Tianqi Chen [Fri, 3 Jan 2020 20:28:49 +0000 (12:28 -0800)]
[CMAKE] Remove unnecessary rdynamic (#4613)
Liangfu Chen [Fri, 3 Jan 2020 17:37:50 +0000 (01:37 +0800)]
[VTA] Throw exception on mis-formatted files and avoid overwriting Scala code (#4555)
Animesh Jain [Fri, 3 Jan 2020 13:39:56 +0000 (05:39 -0800)]
[QNN] Making scale/zero_points as expr instead of attrs. (#4611)
masahi [Fri, 3 Jan 2020 12:23:09 +0000 (21:23 +0900)]
[Quantization] Make calibration faster and more memory usage friendly (#4589)
* Use memory efficient calibrate
* Fixed indexing
* add cpp kl stub
* ported KL cpp from mxnet
* Fixed std::distance arguments order
* remove python implementation
* fix lint and indent
* fix indent
* refactoring
* fix lint
* fix for i386
Tianqi Chen [Fri, 3 Jan 2020 01:52:37 +0000 (17:52 -0800)]
[REFACTOR] Remove old Low-level Visitor/Mutator (#4612)
masahi [Fri, 3 Jan 2020 00:14:14 +0000 (09:14 +0900)]
[TOPI, Relay] Add half_pixel option to Resize op (#4610)
* add onnx resize converter
* update frontends
* updating topi
* adding onnx resize tests
* fixed NHWC test by casting size dtype to int32
* fix tests
* fix lint
* update existing test cases
* fix tensorflow frontend
* fix lint
* remove NHWC stuff
* update topi resize test for half_pixel
* update doc
* fix doc
* remove onnx resize bits
Tianqi Chen [Fri, 3 Jan 2020 00:00:53 +0000 (16:00 -0800)]
[REFACTOR] Migrate Low-level IR Passes into the New Stmt/Expr Mutator (#4607)
* CombineContextCall
* Migrate BoundChecker
* Migrate CoprocSync
* Migrate detect_device
* Migrate loop_partition
* Migrate infer_fragment
* Migrate inject_copy_intrin
* Migrate inject double buffer
* Migrate lower_intrin and simplify
* Migrate storage flatten
* Migrate inject prefetch
* Migrate inject_virtual_thread
* migrate inline
* Migrate lift attr scope
* Migrate custom datatypes
* migrate lower_thread_all_reduce
* Migrate lower_tvm_builtin
* migrate lower_warp memory
* Migrate make_api.cc
* Migrate remap_thread_axis
* Migrate remove_no_op
* migrate rewrite_unsafe_select
* Migrate skip_assert simple_passes
* Migrate split_host_device
* Migrate ssa
* Migrate storage_access
* Migrate storage_rewrite
* Migrate tensor_core
* Migrate unroll_loop
* Migrate vectorize
* Migrate verify compact_buffer gpu_code
* Migrate verify_memory
* Migrate storage_sync
* Remove unused refs to mutator
* Migrate hybrid_op
* Migrate tensorize
* Migrate schedule ops
* Migrate schedule_dataflow_rewrite
* Migrate auto_inline_elemwise
* Remove unnecessary ref to visitor
* remove unnecessary ref
* Migrate bound_deducer
* Migrate domain_touched
* Migrate autotvm feature touch extractor
* Add annotations
Tianqi Chen [Thu, 2 Jan 2020 17:08:54 +0000 (09:08 -0800)]
Bugfix StmtMutator IfThenElse (#4609)
Tianqi Chen [Thu, 2 Jan 2020 00:30:47 +0000 (16:30 -0800)]
[IR] Unify approach to Visitor/Mutator under Functor (#4606)
IRMutator and IRVisitor were the main data structures for doing low-level IR visiting.
As the project evolves, we have started to introduce more powerful variants such as StmtFunctor and ExprFunctor.
This PR brings new classes that allow us to migrate the visitor/mutator to be sub-classes of these functors.
List of changes:
- Create separate classes for ExprMutator and StmtMutator, following the convention used in relay.
- Introduce copy-on-write to StmtMutator, which can later benefit statement mutations
  if we use move semantics and keep a single copy of the stmt.
- Move two generic visit/mutate utils to use the new classes.
We will send follow-up PRs to migrate the existing passes that use the legacy visitors
to the new ones.
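A hedged sketch of what a small pass can look like on top of the new functor-based mutators, assuming the relay-style VisitExpr_ overriding convention; the header locations and exact class names are assumptions and may differ slightly in this revision.

    // Rewrite x + 0 (and 0 + x) into x using the new mutator infrastructure.
    #include <tvm/ir.h>              // Add, IntImm, Expr (assumed location)
    #include <tvm/ir_functor_ext.h>  // StmtExprMutator (assumed location)

    class SimplifyAddZero : public tvm::ir::StmtExprMutator {
     public:
      tvm::Expr VisitExpr_(const tvm::ir::Add* op) override {
        tvm::Expr a = this->VisitExpr(op->a);
        tvm::Expr b = this->VisitExpr(op->b);
        const auto* ai = a.as<tvm::ir::IntImm>();
        const auto* bi = b.as<tvm::ir::IntImm>();
        if (bi && bi->value == 0) return a;   // x + 0 -> x
        if (ai && ai->value == 0) return b;   // 0 + x -> x
        if (a.same_as(op->a) && b.same_as(op->b)) return tvm::GetRef<tvm::Expr>(op);
        return tvm::ir::Add::make(a, b);
      }
    };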
optima2005 [Wed, 1 Jan 2020 09:42:54 +0000 (17:42 +0800)]
[FRONTEND][TF] Add conv3d (#4604)
* [FRONTEND][TF] Add conv3d
* fix high rtol
Zhi [Wed, 1 Jan 2020 06:36:19 +0000 (22:36 -0800)]
make adt tag signed (#4605)
Zhi [Tue, 31 Dec 2019 19:16:12 +0000 (11:16 -0800)]
Sort VM stats by time (#4601)
Tianqi Chen [Tue, 31 Dec 2019 17:35:03 +0000 (09:35 -0800)]
[REFACTOR][OBJECT] Consolidate NodePtr/Ref/Hash/Equal to Object (#4603)
* [REFACTOR][OBJECT] Consolidate NodePtr/Ref/Hash/Equal and macros to Object.
Historically, we had classes like NodePtr/Ref/Hash/Equal.
After the unified object protocol, these names are just aliases of their Object counterparts.
Moreover, there were helper macros defined all over the place for defining these objects.
This PR consolidates the terminology into the corresponding names
in the Object system so we have a clean and consistent API moving forward (a short sketch follows below).
* Update include/tvm/attrs.h
Co-Authored-By: Wei Chen <ipondering.weic@gmail.com>
* fix compilation
Co-authored-by: Wei Chen <ipondering.weic@gmail.com>
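As a rough sketch of the consolidated vocabulary (ObjectPtr/ObjectRef, make_object, and the TVM_DECLARE_/TVM_DEFINE_ object macros), here is how a hypothetical node and its reference could be declared; the type key "demo.MyNode" and its field are illustrative only.

    #include <tvm/runtime/memory.h>  // make_object (replaces make_node)
    #include <tvm/runtime/object.h>

    class MyNode : public tvm::runtime::Object {
     public:
      int value{0};
      static constexpr const char* _type_key = "demo.MyNode";
      TVM_DECLARE_FINAL_OBJECT_INFO(MyNode, tvm::runtime::Object);
    };

    class MyRef : public tvm::runtime::ObjectRef {
     public:
      explicit MyRef(int value) {
        auto n = tvm::runtime::make_object<MyNode>();
        n->value = value;
        data_ = std::move(n);
      }
      TVM_DEFINE_OBJECT_REF_METHODS(MyRef, tvm::runtime::ObjectRef, MyNode);
    };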
Zhi [Tue, 31 Dec 2019 00:33:50 +0000 (16:33 -0800)]
[relay][refactor] Cache Op::Get in passes to reduce lookup overhead (#4594)
* Refactor to use IsOp utility
* retrigger CI
Animesh Jain [Mon, 30 Dec 2019 23:35:25 +0000 (15:35 -0800)]
[Relay][Convert Layout] Handling batch norm layout change. (#4600)
Tianqi Chen [Mon, 30 Dec 2019 06:16:27 +0000 (22:16 -0800)]
[REFACTOR][RUNTIME] Update NDArray use the Unified Object System (#4581)
* [REFACTOR][RUNTIME] Move NDArray to Object System.
Previously, NDArray had its own object reference counting mechanism.
This PR migrates NDArray to the unified object protocol.
The calling convention of NDArray remains intact.
That means NDArray still has its own type_code and
its handle is still DLTensor compatible.
In order to do so, this PR adds a minimal amount of runtime type
detection in TVMArgValue and RetValue, only when the corresponding
type is a base type (ObjectRef) that could also refer to an NDArray.
This means that even if we return a base reference object (ObjectRef)
that refers to an NDArray, the type_code will still be translated
correctly as kNDArrayContainer.
If we assign a non-base type (say, Expr) that we know at compile time is not
compatible with NDArray, no runtime type detection is performed.
This PR also adopts the object protocol for NDArray sub-classing and
removes the legacy NDArray subclass protocol.
The examples in apps/extension are updated to reflect that.
Making NDArray an Object brings all the benefits of the object system.
For example, we can now use the Array container to store NDArrays (see the sketch below).
* Address review comments
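A small sketch of the Array capability mentioned above; the container header location is an assumption for this revision.

    #include <tvm/node/container.h>  // Array<> (assumed location)
    #include <tvm/runtime/ndarray.h>

    // With NDArray now an Object, it can be stored directly in the Array container.
    tvm::Array<tvm::runtime::NDArray> MakePair(tvm::runtime::NDArray a,
                                               tvm::runtime::NDArray b) {
      tvm::Array<tvm::runtime::NDArray> out;
      out.push_back(a);
      out.push_back(b);
      return out;
    }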
Zhi [Mon, 30 Dec 2019 04:38:16 +0000 (20:38 -0800)]
fix codegenc (#4597)
Leyuan Wang [Sun, 29 Dec 2019 22:35:38 +0000 (14:35 -0800)]
[Perf] Add CublasLt extern support for better Igemm performance (#4550)
* cublaslt added
* fix lint
* address comments
* address more comments
* Trigger CI
* Trigger CI
Neo Chien [Sun, 29 Dec 2019 21:21:04 +0000 (05:21 +0800)]
[GraphRuntime] Support parameter out in the graph runtime debug (#4598)
* [GraphRuntime] Support parameter out in the graph runtime debug
* Dummy commit to trigger build
optima2005 [Sat, 28 Dec 2019 20:05:14 +0000 (04:05 +0800)]
[FRONTEND][TF] conv2d_transpose 'SAME' support kernel more than 1x1 (#4484)
* [FRONTEND][TF] conv3d_transpose 'SAME' support kernel more than 1x1
* revised as per review comments
* add more fallback workarounds to make all tests pass
zhuochen [Sat, 28 Dec 2019 20:04:41 +0000 (04:04 +0800)]
fix tf.compat.v1 issue for tf version <=1.12 (#4593)
Wang Yucheng [Fri, 27 Dec 2019 19:22:42 +0000 (03:22 +0800)]
[autotvm] fix typos in comment (#4591)
Zhao Wu (Chinese Name: 吴钊) [Fri, 27 Dec 2019 15:49:57 +0000 (23:49 +0800)]
[Runtime] add necessary const qualifier for NDArray container of parameters (#4590)
optima2005 [Fri, 27 Dec 2019 14:25:25 +0000 (22:25 +0800)]
[TOPI] add 3D upsampling Op. (#4584)
* [TOPI] add 3D upsampling Op.
* fix lint issues
* change align_corners to coordinate_transformation_mode
* fix resize3d half_pixel
* make a simple function and clean up trilinear_resize3d_python
* fix doc
Animesh Jain [Fri, 27 Dec 2019 02:42:21 +0000 (18:42 -0800)]
[Relay][AlterLayout] Broadcast with scalar shape (#4577)
Animesh Jain [Thu, 26 Dec 2019 19:15:46 +0000 (11:15 -0800)]
[Relay] Convert Layout Pass. (#4335)
deepIgnorance [Thu, 26 Dec 2019 18:10:44 +0000 (02:10 +0800)]
[FIX][TOPI][X86] schedule dense pack (#4539)
黎明灰烬 [Thu, 26 Dec 2019 17:36:31 +0000 (01:36 +0800)]
[TOPI][AutoTVM] NHWC conv2d templates for ARM (#3859)
* [AutoTVM][TOPI] NHWC conv2d templates (spatial pack) for ARM
As some frontends (tflite for example) are using NHWC as the default
layout, we are enabling NHWC schedule templates in TOPI and AutoTVM.
* some comment fixes
Zhao Wu [Thu, 26 Dec 2019 17:33:03 +0000 (01:33 +0800)]
[Container] Fix mismatch between NDArray SaveDLTensor declaration and implementation signatures (#4586)
masahi [Thu, 26 Dec 2019 14:13:38 +0000 (23:13 +0900)]
[Quantization, Calibrate] Fix context creation when current_target is explicitly set (#4582)
Wang Yucheng [Thu, 26 Dec 2019 09:45:59 +0000 (17:45 +0800)]
[DOCS]fix typos in autotvm tutorial (#4585)
Yizhi Liu [Thu, 26 Dec 2019 04:31:22 +0000 (20:31 -0800)]
[NEWS] add v0.6 release (#4558)
* [NEWS] add v0.6 release
* remove link prefix
* fix issue number
kice [Wed, 25 Dec 2019 21:42:03 +0000 (16:42 -0500)]
Some Windows and MSVC fixes (#4569)
* fix python exception creation in Windows
* better string conversion for msvc
* fix cpp style issue
Tianqi Chen [Wed, 25 Dec 2019 17:21:01 +0000 (09:21 -0800)]
[RUNTIME] Remove Extension VTable in favor of Unified Object system. (#4578)
Before the unified object protocol, we supported passing
additional extension objects around by declaring a type as an extension type.
The old extension mechanism required the types to register their
constructor and deleter in a VTable, and it did not enjoy the benefit of the
self-contained deletion property of the new Object system.
This PR upgrades the extension example to make use of the new object system
and removes the old Extension VTable.
Note that the register_extension function on the Python side continues to work
when the passed argument does not require explicit container copy/deletion,
which covers the current use cases of the extension mechanism.
Tianqi Chen [Tue, 24 Dec 2019 23:14:03 +0000 (15:14 -0800)]
[DEPRECATION] Cleanup legacy verilog support (#4576)
This PR cleans up the leftover code for legacy Verilog support, which was experimental.
The new hardware backend path is now supported by VTA via TSIM.
Bohan Hou [Tue, 24 Dec 2019 16:51:05 +0000 (00:51 +0800)]
[DOC] fix doc in api.py (#4580)
Josh Fromm [Tue, 24 Dec 2019 05:04:34 +0000 (00:04 -0500)]
[Relay/Topi][Op] Added native DepthToSpace and SpaceToDepth Operators (#4566)
* Added tvm function stencil for subpixel operations to topi.
* Topi subpixel operators added and tested.
* Added subpixel attrs.
* Added depth_to_space relay attributes.
* depth_to_space fully working.
* Fixed NHWC shape bug.
* SpaceToDepth in and all tests passing.
* lint fixes.
* Added string include
* Fixed topi formatting.
* Added DCR/CDR mode to depthtospace operator.
Tianqi Chen [Mon, 23 Dec 2019 19:51:26 +0000 (11:51 -0800)]
[DEPRECATION] Remove NNVM compiler (#4571)
* Remove NNVM compiler
Dmitri Makarov [Mon, 23 Dec 2019 16:50:48 +0000 (17:50 +0100)]
Fix llvm-enabled build by adding missing intrinsics headers (#4575)
masahi [Mon, 23 Dec 2019 16:48:37 +0000 (01:48 +0900)]
remove unnecessary cast to int32 (#4573)
Liangfu Chen [Mon, 23 Dec 2019 16:43:52 +0000 (00:43 +0800)]
[VTA][Chisel] End-to-end Inference with Chisel VTA (#4574)
* [VTA][Chisel] End-to-end Inference with Chisel VTA
* Update TensorAlu.scala
Tianqi Chen [Mon, 23 Dec 2019 04:52:33 +0000 (20:52 -0800)]
Remove nnvm (#4565)
Yong Wu [Mon, 23 Dec 2019 01:43:33 +0000 (17:43 -0800)]
[Relay] add max_pool3d in relay and TF converter (#4551)
* [Relay] add max_pool3d in relay and TF converter
* fix comments
Tianqi Chen [Sun, 22 Dec 2019 17:47:34 +0000 (09:47 -0800)]
[TEST] Remove nnvm related code in topi and test script (#4562)
* [TEST] Remove nnvm related code in topi and test script
* Remove docs dep
Neo Chien [Sun, 22 Dec 2019 17:03:39 +0000 (01:03 +0800)]
[Relay][Frontend][ONNX] Support auto_pad in Conv and ConvTranspose (#4563)
Zhao Wu [Sun, 22 Dec 2019 04:14:40 +0000 (12:14 +0800)]
Support standardize runtime module (#4532)
Tianqi Chen [Sun, 22 Dec 2019 02:26:21 +0000 (18:26 -0800)]
[REFACTOR][DTYPE] Isolate dtype to runtime (#4560)
dtype.h -> runtime/data_type.h
Changes:
- Rename all old reference of tvm::Type to DataType
- ExprNode.type -> ExprNode.dtype
- Expr.type() -> Expr.dtype()
- Change Expr related functions to expr_operator.
- DataType::min() -> min_value(DataType)
- DataType::max() -> max_value(DataType)
- Move type constructor Int, UInt, Float, Handle, Bool into DataType.
- Int(bits) -> DataType::Int(bits)
- UInt(bits) -> DataType::UInt(bits)
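A short sketch of the new spellings listed above, using the renamed runtime/data_type.h header:

    #include <tvm/runtime/data_type.h>

    void DataTypeExamples() {
      tvm::DataType i32  = tvm::DataType::Int(32);     // was Int(32)
      tvm::DataType u8x4 = tvm::DataType::UInt(8, 4);  // was UInt(8, 4); second argument is lanes
      tvm::DataType f16  = tvm::DataType::Float(16);   // was Float(16)
      tvm::DataType b1   = tvm::DataType::Bool();      // was Bool()
      tvm::DataType h    = tvm::DataType::Handle();    // was Handle()
      (void)i32; (void)u8x4; (void)f16; (void)b1; (void)h;
    }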
Tianqi Chen [Sun, 22 Dec 2019 02:26:11 +0000 (18:26 -0800)]
[RUNTIME][VULKAN] Fix compiler warning (#4559)
Siyuan Feng [Sun, 22 Dec 2019 01:56:18 +0000 (17:56 -0800)]
[IR] fix style in ir_mutator and ir_visitor (#4561)
Liangfu Chen [Sat, 21 Dec 2019 22:19:56 +0000 (06:19 +0800)]
[VTA] improved virtual memory mapping (#4545)
* [VTA] improved virtual memory mapping
* Update virtual_memory.cc
Tianqi Chen [Fri, 20 Dec 2019 22:42:57 +0000 (14:42 -0800)]
[COMMUNITY] @cchung100m -> reviewer (#4557)
Zhi [Fri, 20 Dec 2019 22:36:14 +0000 (14:36 -0800)]
vm external codegen (#4544)
Tianqi Chen [Fri, 20 Dec 2019 22:21:09 +0000 (14:21 -0800)]
[PYTHON][FFI] Cythonize NDArray.copyto (#4549)
* [PYTHON][FFI] Cythonize NDArray.copyto
* Cythonize the shape property
Hideto Ueno [Fri, 20 Dec 2019 09:25:18 +0000 (18:25 +0900)]
[DOCS] Mention Ninja build system in install/from_source.rst (#4554)
* [DOCS] Mention Ninja build system in install/from_source.rst
* Address comments
mbarrett97 [Wed, 18 Dec 2019 21:23:36 +0000 (21:23 +0000)]
[TOPI] Fixed nms max_output_size loop (#4541)
One of the loops in hybrid_nms used for
performing the max_output_size reordering
was incorrectly designated as parallel,
resulting in incorrect behaviour. This patch
changes that loop to a serial loop.
Change-Id: I97184f5887f5f028d8ab339fa2808eb7630a4017
Haichen Shen [Wed, 18 Dec 2019 21:17:18 +0000 (13:17 -0800)]
[TOPI] Allow batch matmul to be fused into injective ops (#4537)
Takato Yamada [Wed, 18 Dec 2019 17:58:37 +0000 (02:58 +0900)]
[relay][op] add expand op (from ONNX) to relay frontend (#4483)
* Add Expand to onnx.py
* add test function for expand
* Fix a onnx frontend test
* Add tests for the value itself instead of shape only on test_expand
* Cleaned up some unnecessary modifications.
Alex Gladkov [Wed, 18 Dec 2019 17:35:22 +0000 (09:35 -0800)]
Implement 1d deconvolution (#4476)
Tianqi Chen [Wed, 18 Dec 2019 06:17:51 +0000 (22:17 -0800)]
Update legacy places from nnvm to relay. (#4535)
* Update legacy places from nnvm to relay.
This PR prepares the current mainline to remove nnvm compiler dep.
* remove legacy stage
Zhi [Wed, 18 Dec 2019 03:17:55 +0000 (19:17 -0800)]
[Relay] External codegen (#4482)
lhutton1 [Tue, 17 Dec 2019 17:55:32 +0000 (17:55 +0000)]
PIL is deprecated and should be replaced with pillow (a fork of PIL) (#4533)
Change-Id: If2075df5475505f2da87dae7145af5a7ab83d8a4
Liangfu Chen [Mon, 16 Dec 2019 18:26:54 +0000 (02:26 +0800)]
fix crash issue in tsim backend (#4527)
masahi [Mon, 16 Dec 2019 16:11:53 +0000 (01:11 +0900)]
fix onnx shape dtype (#4528)
Cody Yu [Mon, 16 Dec 2019 06:37:43 +0000 (22:37 -0800)]
fix empty config caused KeyError (#4520)
YixinBao [Mon, 16 Dec 2019 05:46:21 +0000 (13:46 +0800)]
add bfloat16 typeflag support (#4525)
Liang ZOU [Sun, 15 Dec 2019 23:09:51 +0000 (07:09 +0800)]
[ir] use DataType instead of Type for readability because Type has been deprecated (#4513)
miheer vaidya [Sun, 15 Dec 2019 23:09:16 +0000 (16:09 -0700)]
Use the best tuner possible (#4397)
* Use the best tuner possible
* Add comment denoting availability of better tuners
* Fix typos and wording
Josh Fromm [Sun, 15 Dec 2019 23:08:35 +0000 (15:08 -0800)]
Fixed extra reshape parameter bug. (#4524)
Ina Dobreva [Sat, 14 Dec 2019 05:15:12 +0000 (05:15 +0000)]
[Bugfix][Frontend][TFlite] Fix wrong function call in TANH tests (#4517)
* Replace sigmoid() with tanh() in tests for TANH
SWu [Fri, 13 Dec 2019 20:09:56 +0000 (15:09 -0500)]
Fix bias_add gradient (#4516)
* Fix bias_add gradient
A change caused collapse_sum_like to reject implicit dimension
broadcasting for bias_add gradient, so switch to explicit sum reduction
on the non-bias axis dimensions.
* Lint fix
Alexander Pivovarov [Fri, 13 Dec 2019 17:42:58 +0000 (09:42 -0800)]
Fix TF resize for dynamic size models (#4510)
Leandro Nunes [Fri, 13 Dec 2019 05:48:29 +0000 (05:48 +0000)]
[CI] Update docker image ci_lint to obtain Python 3.6 from ppa:deadsnakes/ppa (#4505) (#4506)
masahi [Thu, 12 Dec 2019 22:52:06 +0000 (07:52 +0900)]
[Quantization] Fix annotation for multiply op (#4458)
* fix mul rewrite
* register Realize Rewrite for global avg pool and add test
* remove unnecessary check
* improve the test case
Haichen Shen [Thu, 12 Dec 2019 22:33:57 +0000 (14:33 -0800)]
[Hybrid][Fix] Fix hybrid script to support array of tensors (#4494)
* [Fix][Hybrid] Fix hybrid script to support array of tensors
* add test case
* clean up
* trigger ci
Dmitri Makarov [Thu, 12 Dec 2019 16:00:32 +0000 (17:00 +0100)]
Fix build for llvm newer than 9.0 (#4515)
optima2005 [Thu, 12 Dec 2019 06:06:20 +0000 (14:06 +0800)]
[TOPI] implement pool3d op (#4478)
* [TOPI] implement pool3d op
* use PoolInferCorrectLayout for both 2d and 3d pooling
* unify MakeMaxPool and MakeAvgPool
LaiyuanGong [Thu, 12 Dec 2019 01:47:50 +0000 (19:47 -0600)]
[NODE][Serialization]fix serialization precision loss in float (#4503)
* fix serialization precision loss in float
When we want to serialize a tvm.tensor object (e.g. with pickle), we get a precision loss caused by std::to_string().
For example, a2.value will be 0.0 while a.value = 0.00000001 in the following:

    import tvm
    import pickle
    a = tvm.const(0.00000001, 'float32')
    a2 = pickle.loads(pickle.dumps(a))
* remove line end spaces
Thomas Viehmann [Thu, 12 Dec 2019 01:23:00 +0000 (02:23 +0100)]
add rocm schedules to topi C++ (#4507)
This imports the CUDA schedules to rocm.
Peter Yeh [Thu, 12 Dec 2019 01:19:42 +0000 (17:19 -0800)]
Add AMD codeGen unit tests (#4509)
Ramana Radhakrishnan [Wed, 11 Dec 2019 18:42:15 +0000 (18:42 +0000)]
Refactor bilinear and neighbour implementation in Tensorflow frontend (#4504)
There is significant duplication between the two functions.
This was spotted while looking to move the tensorflow and tflite framework
support to versions later than 1.13.1: the tests fail because
resize_nearest_neighbour does not ignore the attribute 'half_pixel_centers'.
That upgrade is a separate discussion, while this can go in independently.
Thanks,
Ramana
Liang ZOU [Wed, 11 Dec 2019 17:33:08 +0000 (01:33 +0800)]
[codegen][Build] it's more readable to move the if condition out of the loop (#4501)
MORITA Kazutaka [Wed, 11 Dec 2019 16:39:06 +0000 (08:39 -0800)]
[RUNTIME] Fix compile errors of OpenCL FPGA backend (#4492)
Peter Yeh [Wed, 11 Dec 2019 09:14:36 +0000 (01:14 -0800)]
update rocm intrin rule (#4499)
Liangfu Chen [Wed, 11 Dec 2019 00:53:53 +0000 (08:53 +0800)]
[VTA] Speedup TSIM by Multi-threading (#4491)
This PR tries to increase TSIM performance by introducing multi-threading support.
reminisce [Tue, 10 Dec 2019 22:05:52 +0000 (14:05 -0800)]
Add __float2half_rn for cuda compute capabilities less than 53 (#4489)
* Fix
* clean up
Haichen Shen [Tue, 10 Dec 2019 19:09:23 +0000 (11:09 -0800)]
[Relay][Fix] Fix alter op layout when calling a global var (#4454)
* [Relay][Fix] Fix alter op layout when calling a global var
* add test case
Yizhi Liu [Tue, 10 Dec 2019 18:35:12 +0000 (10:35 -0800)]
[Team] Jared Roesch -> PPMC (#4488)
Liang ZOU [Tue, 10 Dec 2019 17:54:22 +0000 (01:54 +0800)]
[docs] typos in include/tvm/ir.h (#4493)
Tianqi Chen [Mon, 9 Dec 2019 21:22:31 +0000 (13:22 -0800)]
[REFACTOR][RUNTIME] Add LibraryModule that merges systemlib and dso. (#4481)
Historically we had two variations of modules (DSOModule and SystemLibModule)
that both expose a module via symbols.
This PR creates a common implementation for both and introduces a Library
base class that allows different implementations of GetSymbol.
It paves the way for future library-related module enhancements.
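To illustrate the shape of this abstraction (the real classes live in the runtime sources and differ in detail), an illustrative, self-contained sketch of a Library interface with two GetSymbol strategies:

    // POSIX-only sketch: a dlopen-backed library vs. a registry compiled into the binary.
    #include <dlfcn.h>
    #include <string>
    #include <unordered_map>

    class Library {
     public:
      virtual ~Library() = default;
      // Resolve a symbol (e.g. a packed-function entry point) to an address.
      virtual void* GetSymbol(const char* name) = 0;
    };

    class DSOLibrary : public Library {
     public:
      explicit DSOLibrary(const std::string& path)
          : handle_(dlopen(path.c_str(), RTLD_LAZY | RTLD_LOCAL)) {}
      ~DSOLibrary() override { if (handle_) dlclose(handle_); }
      void* GetSymbol(const char* name) override {
        return handle_ ? dlsym(handle_, name) : nullptr;
      }
     private:
      void* handle_{nullptr};
    };

    class SystemLibrary : public Library {
     public:
      void Register(const std::string& name, void* addr) { symbols_[name] = addr; }
      void* GetSymbol(const char* name) override {
        auto it = symbols_.find(name);
        return it == symbols_.end() ? nullptr : it->second;
      }
     private:
      std::unordered_map<std::string, void*> symbols_;
    };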
Ina Dobreva [Mon, 9 Dec 2019 17:20:55 +0000 (17:20 +0000)]
[Relay][Frontend][TFlite] Add parses support for UNPACK tflite operator (#4447)
* use SPLIT & SQUEEZE = UNPACK as implemented in tensorflow parser
Relay doesn't support UNPACK
* tflite 1.13: UNPACK doesn't work as expected -> copies the values from
the 1st unpacked tensor to the other unpacked tensors
* tflite 1.13: doesn't accept negative axis
Thierry Moreau [Mon, 9 Dec 2019 06:08:21 +0000 (22:08 -0800)]
[VTA] Bringing group convolution support (#4421)
* group conv operator support for VTA
* autotvm tuning script for group conv2d
* lint fix
* lint fix
* lint fix
* addressing comments
Zhi [Sun, 8 Dec 2019 19:57:25 +0000 (11:57 -0800)]
Check function attr for alpha equal (#4479)