ziheng [Fri, 7 Feb 2020 21:41:35 +0000 (13:41 -0800)]
[COMMUNITY] comaniac -> reviewer (#4841)
Tianqi Chen [Fri, 7 Feb 2020 17:15:08 +0000 (09:15 -0800)]
[REFACTOR][PY][API-Change] Polish tvm.runtime, tvm.runtime.module API update (#4837)
* [REFACTOR][PY-API] Polish tvm.runtime, tvm.runtime.module API update
This PR updates the tvm.runtime to use the new FFI style.
- Remove top-level tvm.module to avoid confusion between runtime.Module and IRModule
- API changes wrt runtime.Module (see the sketch below)
- tvm.module.load -> tvm.runtime.load_module
- tvm.module.enabled -> tvm.runtime.enabled
- tvm.module.system_lib -> tvm.runtime.system_lib
- Remove dep on api_internal from runtime.
* Update module.load in the latest API
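A minimal before/after sketch of the renames listed above (the module file name "deploy.so" is just an example):
import tvm

# old style (pre-refactor):
#   lib = tvm.module.load("deploy.so")
#   tvm.module.enabled("cuda")
# new style after this PR:
lib = tvm.runtime.load_module("deploy.so")   # tvm.module.load -> tvm.runtime.load_module
use_cuda = tvm.runtime.enabled("cuda")       # tvm.module.enabled -> tvm.runtime.enabled
syslib = tvm.runtime.system_lib()            # tvm.module.system_lib -> tvm.runtime.system_lib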
Wang Yucheng [Fri, 7 Feb 2020 11:57:34 +0000 (19:57 +0800)]
[Frontend][TFLite] Add MIRROR_PAD operator (#4822)
Ina Dobreva [Fri, 7 Feb 2020 10:23:55 +0000 (10:23 +0000)]
[Relay][Frontend][TFlite] Add support for quantized LOGISTIC (#4696)
* [Relay][Frontend][TFlite] Add support for quantized LOGISTIC
* add qnn implementation
* add qnn test case for qnn logistic
* Helper functions for quantize and dequantize.
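A hedged sketch of what such quantize/dequantize helpers typically look like (standard affine uint8 quantization; not necessarily the exact test utilities added here):
import numpy as np

def quantize(real, scale, zero_point, dtype="uint8"):
    # q = clip(round(real / scale) + zero_point, dtype range)
    info = np.iinfo(dtype)
    q = np.round(real / scale) + zero_point
    return np.clip(q, info.min, info.max).astype(dtype)

def dequantize(q, scale, zero_point):
    # real ~= (q - zero_point) * scale
    return (q.astype("float32") - zero_point) * scale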
Animesh Jain [Thu, 6 Feb 2020 22:13:08 +0000 (22:13 +0000)]
[Doc] ConvertLayout - Call RemoveUnusedFunctions.
Tianqi Chen [Fri, 7 Feb 2020 02:12:13 +0000 (18:12 -0800)]
Improve tol to resolve flaky case (#4836)
Josh Fromm [Fri, 7 Feb 2020 02:09:10 +0000 (18:09 -0800)]
[Frontend][ONNX] LSTM Support (#4825)
* Initial version working and passing tests.
* WIP on supporting other activations.
* add support for multiple activation functions in lstm
* All tests working and code cleaned up.
* Undo import swap to avoid conflict with masahi.
* Added new tests and related bug fixes.
Co-authored-by: Matthew Brookhart <mbrookhart@octoml.ai>
Zhao Wu [Fri, 7 Feb 2020 02:03:20 +0000 (10:03 +0800)]
[Doc] Introduction to module serialization (#4564)
Zhi [Fri, 7 Feb 2020 01:38:48 +0000 (17:38 -0800)]
Fix doc after moving to unified IR (#4835)
Tianqi Chen [Thu, 6 Feb 2020 19:58:22 +0000 (11:58 -0800)]
[CI][DOCKER] Update ci-gpu torch1.4 and onnx1.6 (#4826)
Tianqi Chen [Thu, 6 Feb 2020 19:58:11 +0000 (11:58 -0800)]
[CI][DOCKER] Update ci-gpu to v0.60 (#4827)
Just do it [Thu, 6 Feb 2020 17:25:06 +0000 (01:25 +0800)]
It's gpu not cpu. (#4832)
abergeron [Thu, 6 Feb 2020 03:24:42 +0000 (22:24 -0500)]
[TOPI][Relay] Add bitwise ops (#4815)
* Add bitwise ops to topi
* Add the bitwise ops to relay.
Tianqi Chen [Thu, 6 Feb 2020 02:49:24 +0000 (18:49 -0800)]
[CONTRIB][CC] Enhance cc.cross_compiler (#4817)
* [CONTRIB][CC] Enhance cc.cross_compiler
- Enhance cc.cross_compiler to take a str argument.
- Remove cc.build_create_shared_func as it duplicates cross_compiler
- Add examples to cc.cross_compiler
* address review comments
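A hedged usage sketch of the enhanced API, assuming the str argument names the cross-compiler command (the toolchain name, target flag spelling, and output path are illustrative):
import tvm
from tvm import te
from tvm.contrib import cc

# build a tiny module to have something to export
A = te.placeholder((1024,), name="A")
B = te.compute((1024,), lambda i: A[i] + 1.0, name="B")
s = te.create_schedule(B.op)
# target triple flag spelling may differ across TVM versions (assumption)
lib = tvm.build(s, [A, B], target="llvm -mtriple=aarch64-linux-gnu")

# cross_compiler can now accept the compiler command as a plain string
fcompile = cc.cross_compiler("aarch64-linux-gnu-g++")
lib.export_library("deploy_arm.so", fcompile=fcompile)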
Xingyu Zhou [Wed, 5 Feb 2020 23:33:45 +0000 (07:33 +0800)]
[Relay] Conv2D padding representation (#4787)
* enforce 4-way padding
* add util with get_pad_tuple
* delete unnecessary arguments
* fix lint
* add container.Array case
* fix cudnn conv2d asymmetric padding logic
* rename get_pad_tuple to get_pad_tuple2d
* revert change for topi/python/topi/nn/conv2d.py
* add get_pad_tuple2d for several contrib conv2d ops
* add get_pad_tuple2d for all conv2d ops
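A hedged sketch of the 4-way padding normalization this PR enforces (an illustrative helper; the in-tree get_pad_tuple2d signature may differ):
def pad_tuple2d(padding):
    """Normalize conv2d padding to (pad_top, pad_left, pad_down, pad_right)."""
    if isinstance(padding, int):
        padding = (padding, padding)
    if len(padding) == 2:      # symmetric (pad_h, pad_w)
        pad_h, pad_w = padding
        return (pad_h, pad_w, pad_h, pad_w)
    if len(padding) == 4:      # already 4-way, possibly asymmetric
        return tuple(padding)
    raise ValueError("padding must be an int or have 2 or 4 values")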
Ina Dobreva [Wed, 5 Feb 2020 20:12:44 +0000 (20:12 +0000)]
[Relay][Frontend][TFLite] Add parser support for logical operators (#4642)
* [Relay][Frontend][TFLite] Add parser support for logical operators
* Add parser support for logical_and, logical_or
* Add boolean dtype as a valid tensor type
* BOOLEAN dtype is supported only from tf 1.15,
so logical ops work only in that and newer versions
* Logical_not is omitted since tflite can't convert it -->
throws errors for addv2
* Add TFLite version check in tests for logical ops
* The check is added because of the lack of boolean dtype support
Animesh Jain [Wed, 5 Feb 2020 19:52:18 +0000 (11:52 -0800)]
[QNN] Optimize lowering for requantize and FixedPointMultiply. (#4798)
* [QNN] Optimize lowering for requantize and FixedPointMultiply.
* Add check for requantize scale gt 1.
* Added test case.
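For background, requantization scales are commonly lowered to integer arithmetic via a fixed-point decomposition; a hedged sketch of that common technique (not necessarily the exact lowering in this PR):
import math

def to_fixed_point(scale, bits=31):
    # scale == significand * 2**exponent, with 0.5 <= |significand| < 1
    significand, exponent = math.frexp(scale)
    multiplier = int(round(significand * (1 << bits)))  # Q31-style multiplier
    return multiplier, exponent

# e.g. to_fixed_point(0.25) -> (1 << 30, -1): multiply, then shift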
Ina Dobreva [Wed, 5 Feb 2020 19:51:59 +0000 (19:51 +0000)]
[Frontend][TFLite] Dynamically calculate input_stats of any fake_quant range (#4789)
* [TFLite] Dynamically calculate input_stats of any fake_quant range
* pass the input range to the converter and calculate (mean, scale) there, as sketched below
* change the range of the second tensor in elemwise operations
so that we test inputs with different quant params
* change the possible output range for elemwise ops wrt the updated ranges
* update the comments for (m, s) calculations
* add input range dict to reduce_mean op
* Apply requested changes
* add exception handling for zero division in input_stats
* fix range of the input tensor in elemwise
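A hedged sketch of the (mean, scale) derivation described above, assuming the usual asymmetric uint8 convention (the helper name and exact handling are illustrative):
def input_stats_from_range(fmin, fmax, num_bits=8):
    qmax = (1 << num_bits) - 1
    if fmax == fmin:
        raise ZeroDivisionError("empty fake_quant range")  # the zero-division guard noted above
    scale = (fmax - fmin) / qmax      # real-value step per quantized unit
    mean = -fmin / scale              # zero point expressed in quantized units
    return mean, scale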
Seyyed Hossein Hasanpour [Wed, 5 Feb 2020 18:07:29 +0000 (21:37 +0330)]
Fixed subprocess creation under windows (#4820)
* fixed subprocess creation under windows
this addresses the issue #4819
* Update server.py
Tianqi Chen [Wed, 5 Feb 2020 17:00:03 +0000 (09:00 -0800)]
[REFACTOR][PY] Establish tvm.runtime (#4818)
* [REFACTOR][PY] Establish tvm.runtime
This PR establishes the tvm.runtime namespace that contains the core runtime data structures.
The top-level APIs are kept intact for now via re-exporting.
We will follow up later to clean up some of the top-level APIs.
* Fix ndarray name
shoubhik [Wed, 5 Feb 2020 13:27:52 +0000 (05:27 -0800)]
Mxnet parser for Qnn dialect (#4714)
* - Additional util methods needed for mxnet frontend for qnn dialect.
* - Fixing call to quantize.
* [QNN] MxNet-MKLDNN parser support for QNN
* [QNN] Relax conv check.
* - Merge from origin
* [QNN] Channel wise changes
* [QNN] Dense changes
* Dense fix for QNN ops.
* - Removed non-mkl code from utils.
- Small refactoring
- Remove "with_sum" from conv
- Simplified code
* - Fixing ring buffer name.
* - Fixing pylint issues.
* - Fixing lint
- Removing redundant commented code.
* - Adding test cases
- Removing unused methods.
* [WIP] end to end test case for mxnet qnn parser
* Changes to parse large CV models.
* Pylint issues.
* Fix Conv2D with sum and quantized pooling.
* Reverting the changes made for mxnet-mkldnn test cases. Because of #4753, mxnet could not be updated to mxnet-mkldnn.
Co-authored-by: Animesh Jain <anijain@umich.edu>
Haichen Shen [Wed, 5 Feb 2020 06:04:41 +0000 (22:04 -0800)]
allow customize mkldnn library location (#4814)
Tianqi Chen [Wed, 5 Feb 2020 01:01:01 +0000 (17:01 -0800)]
[REFACTOR][PY] tvm._ffi (#4813)
* [REFACTOR][PY] tvm._ffi
- Remove from __future__ import absolute_import in the related files as they are no longer needed if the code only runs in python3
- Remove reverse dependency of _ctypes _cython to object_generic.
- function.py -> packed_func.py
- Function -> PackedFunc
- all registry related logic goes to tvm._ffi.registry
- Use absolute references for FFI related calls.
- tvm._ffi.register_object
- tvm._ffi.register_func
- tvm._ffi.get_global_func
* Move get global func to the ffi side
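A minimal sketch of the absolute-reference FFI calls listed above (the registered name "demo.add_one" is made up for illustration):
import tvm._ffi

@tvm._ffi.register_func("demo.add_one")
def add_one(x):
    return x + 1

f = tvm._ffi.get_global_func("demo.add_one")
assert f(41) == 42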
Animesh Jain [Tue, 4 Feb 2020 23:25:46 +0000 (15:25 -0800)]
[TOPI][x86] Injective schedule improvement (#4786)
* [TOPI][x86] Injective Schedule Improvement.
* Add tiling.
* Vectorize when there is an axis.
Haichen Shen [Tue, 4 Feb 2020 21:14:15 +0000 (13:14 -0800)]
fix memory leak (#4811)
Animesh Jain [Tue, 4 Feb 2020 17:32:45 +0000 (09:32 -0800)]
[AutoTVM] Minor bug fixes in AutoTVM for QNN graphs (#4797)
* [AutoTVM] Minor bug fixes in AutoTVM for QNN graphs.
* Bring back strided_slice.
* Replace tvm.nd change.
Tianqi Chen [Tue, 4 Feb 2020 17:21:51 +0000 (09:21 -0800)]
[DOCS] Fix vta tutorial (#4809)
Tianqi Chen [Tue, 4 Feb 2020 04:47:51 +0000 (20:47 -0800)]
[LINT] Fix -Wextra (#4804)
* [LINT] Fix -Wextra
* Fix virtual-dtor
Hua Jiang [Mon, 3 Feb 2020 19:36:53 +0000 (11:36 -0800)]
[TOPI] upsample operator 'NCHWinic' format support. (#4791)
* [TOPI] upsample operator 'NCHWinic' format support.
Some hardware accelerators require packed data formats such as NCHWinic to fit
their hardware resources; this adds upsample NCHWinic format support to meet
such requirements.
* address review comments, add assert for 'else must be NCHWxc' logic.
mbarrett97 [Mon, 3 Feb 2020 17:55:43 +0000 (17:55 +0000)]
[TIR] Create a StringImm reference type (#4806)
This is motivated by the need to send an
array of strings across the python/C++
boundary. Arrays only support ObjectRef types
and so can't carry StringImmNodes. This creates
a string reference type, StringImm, which can
be used with tvm::Array.
Change-Id: I598a44536c156b97dbfe3e9518e0a1f705da850c
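A hedged Python-side sketch: since StringImm is an ObjectRef, strings can now travel inside a tvm Array (namespace per the tir refactor elsewhere in this log):
import tvm

arr = tvm.runtime.convert([tvm.tir.StringImm("hello"), tvm.tir.StringImm("world")])
print(arr[0].value)   # "hello"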
Zhao Wu [Mon, 3 Feb 2020 17:53:13 +0000 (01:53 +0800)]
[ThreadPool] Solve ARM BIG.LITTLE heterogeneous multicores (#4747)
vizero1 [Mon, 3 Feb 2020 04:03:57 +0000 (05:03 +0100)]
Change color channel from BGR to RGB for darknet preprocessing (#4794)
Animesh Jain [Mon, 3 Feb 2020 02:56:45 +0000 (18:56 -0800)]
[QNN] Conv2D with dilation support. (#4796)
masahi [Mon, 3 Feb 2020 02:53:41 +0000 (11:53 +0900)]
[QNN] Doc fix on convolution and dequantize (#4799)
* QNN doc fix on conv and dequantize
* fix param name in tflite frontend
* make different fix
kshitij12345 [Sun, 2 Feb 2020 18:57:12 +0000 (00:27 +0530)]
fix #4670: add bias for fc layer (#4801)
masahi [Sun, 2 Feb 2020 02:04:44 +0000 (11:04 +0900)]
[Relay] Expose vm OptimizeModule to Python (#4800)
* Expose VM OptimizeModule to python
* added missing imports
* fix import
Alex Gladkov [Sat, 1 Feb 2020 01:43:27 +0000 (17:43 -0800)]
Add schedule for conv3d NDHWC layout (#4775)
Animesh Jain [Fri, 31 Jan 2020 22:29:08 +0000 (14:29 -0800)]
[Relay][Topi] Use SimplifyInference for L2 Normalization. (#4795)
masahi [Thu, 30 Jan 2020 19:09:48 +0000 (04:09 +0900)]
Dedup BindParamByName function in VM compiler (#4793)
jmorrill [Thu, 30 Jan 2020 16:43:16 +0000 (08:43 -0800)]
Fix parsing of different exception string formats (#4785)
Ina Dobreva [Thu, 30 Jan 2020 14:10:52 +0000 (14:10 +0000)]
[Relay][Frontend][TFlite] Add parser support for relational ops (#4695)
Add support for: greater_equal, less, less_equal, equal, not_equal
Add tests for the elemwise relational ops
abergeron [Thu, 30 Jan 2020 02:33:39 +0000 (21:33 -0500)]
Make sure to visit the arguments of inlined functions (#4783)
wpan11nv [Wed, 29 Jan 2020 03:40:39 +0000 (19:40 -0800)]
[AUTOTVM] Fix a bug in generating the search space (#4779)
- Do not use numpy.prod, which silently wraps around on 64-bit integer overflow.
This leads to an incorrect number of points in the search space (illustrated below).
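A hedged illustration of the overflow (the knob sizes are made up): numpy.prod wraps around on fixed-width integers, while a Python-level product does not:
import numpy as np
from functools import reduce
import operator

dims = [2 ** 16] * 5                     # hypothetical per-knob sizes
print(np.prod(dims))                     # wraps around in int64 (prints 0 here)
print(reduce(operator.mul, dims, 1))     # 2**80, the correct count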
hlu1 [Wed, 29 Jan 2020 01:58:36 +0000 (17:58 -0800)]
[Python] Replace os.path.exists with try...except...else (#4784)
Jared Roesch [Tue, 28 Jan 2020 11:25:52 +0000 (03:25 -0800)]
[PassManager] Implement pass manager tracing API (#4782)
* Implement pass tracing API
* Set is_before correctly
* Add docs for trace function
* Fix lint
* Remove PDB
* Ensure trace_func is set before calling
* Fix conditional
Cody Yu [Tue, 28 Jan 2020 00:10:35 +0000 (16:10 -0800)]
Safe remove tmpdir (#4781)
Jon Soifer [Mon, 27 Jan 2020 23:40:25 +0000 (17:40 -0600)]
[Relay][Frontend][ONNX] Broadcast condition, x, and y for Where op (#4774)
* ONNX frontend broadcast condition
* fix
* fix style
Co-authored-by: Jon Soifer <jonso@microsoft.com>
Jon Soifer [Mon, 27 Jan 2020 22:58:11 +0000 (16:58 -0600)]
properly extract error type from windows error message (#4780)
Co-authored-by: Jon Soifer <jonso@microsoft.com>
Jon Soifer [Mon, 27 Jan 2020 22:27:55 +0000 (16:27 -0600)]
[Build] Explicitly link to cublasLt if it exists (#4776)
* Explicitly link to cublasLt
* Only link cublasLt if it's found
Co-authored-by: Jon Soifer <jonso@microsoft.com>
Kaiyan Chang [Mon, 27 Jan 2020 00:41:46 +0000 (08:41 +0800)]
Update tune_simple_template.py (#4778)
fixed a spelling mistake.
HUAN-PING SU [Sat, 25 Jan 2020 22:55:55 +0000 (06:55 +0800)]
Bump prebuilt-image version in demo dockerfile (#4770)
Ina Dobreva [Fri, 24 Jan 2020 06:41:56 +0000 (06:41 +0000)]
[Bugfix][Frontend][TF] Fix incorrect calculations in tf SLICE (#4518)
* fix formula for calculating end indices when size[i] == -1 (sketched below)
* add a test case for size[i] == -1
* discard expanding the dimension of begin_value & end_value since
that is needed only when they are passed as scalars, not as tensors.
* discard 'slice_tensor' variable so that implementation matches
the tf parser pattern
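A hedged sketch of the corrected end-index rule: when size[i] == -1 the slice runs to the end of that dimension, otherwise end = begin + size (the helper name is illustrative):
def slice_end_indices(begin, size, input_shape):
    return [dim if s == -1 else b + s
            for b, s, dim in zip(begin, size, input_shape)]

# e.g. begin=[1, 0], size=[-1, 2], input_shape=[4, 5] -> [4, 2]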
masahi [Fri, 24 Jan 2020 05:01:47 +0000 (14:01 +0900)]
add missing nullptr check (#4773)
masahi [Fri, 24 Jan 2020 03:19:48 +0000 (12:19 +0900)]
[TOPI] Remove cpp upsampling and resize op (#4769)
* remove cpp upsampling
* remove cpp resize
Alex Gladkov [Fri, 24 Jan 2020 03:17:45 +0000 (19:17 -0800)]
Fix Tensorflow conv3d pad bug, add non-cubic data and kernel tests (#4772)
hlu1 [Fri, 24 Jan 2020 00:48:25 +0000 (16:48 -0800)]
[Doc] TVM_REGISTER_API -> TVM_REGISTER_GLOBAL (#4768)
Hua Jiang [Thu, 23 Jan 2020 22:05:07 +0000 (14:05 -0800)]
[VTA] Support networks which have no unique operator as start/stop name for graph pack. (#4703)
* [VTA] Support networks which have no unique operator as start/stop name
for graph pack.
[Issue]
Currently vta uses the 'start' and 'stop' names to define the pack start point
and end point, but this method does not work for networks that have
no two unique operators to serve as the start point and stop point.
[Solution]
This solution adds 2 additional parameters, start_name_idx and
stop_name_idx, to make the vta pack logic work with such networks.
For example, for the following network, which has no unique operator,
%0 = nn.add
%1 = nn.conv2d
%2 = nn.batch_norm
%3 = nn.leaky_relu
%4 = nn.add
%5 = nn.conv2d
%6 = nn.batch_norm
%7 = nn.leaky_relu
%8 = nn.add
with this solution we can use the following parameter format to make
vta work on it.
relay_prog = graph_pack(
//....
start_name="nn.add",
stop_name="nn.add",
start_name_idx=0,
stop_name_idx=4)
To apply this to a new network, print the network to get the index information, like the following.
print(mod.astext(show_meta_data=False))
relay_prog = graph_pack(mod
...
start_name="nn.add",
stop_name="nn.add",
start_name_idx=0,
stop_name_idx=4)
* address review comments and fix index count bug
issue:
when printing the mod, the output contains not only Call nodes but also other
node types such as Var, so logic is needed to count all nodes except meta.
solution:
add related logic
* address review comments.
* address review comments
* add more detail comments.
Alexander Pivovarov [Thu, 23 Jan 2020 00:47:15 +0000 (16:47 -0800)]
pooling.cc improvements (#4767)
Alex Gladkov [Wed, 22 Jan 2020 13:41:46 +0000 (05:41 -0800)]
Improve CUDA conv2d_transpose_nchw (#4762)
- combine pad and dilate;
- fix for the issue https://discuss.tvm.ai/t/compile-error-for-cuda-target/4164
- fix for the issue https://github.com/apache/incubator-tvm/pull/4472
Alexander Pivovarov [Wed, 22 Jan 2020 06:30:02 +0000 (22:30 -0800)]
Remove run_infer_type duplicates (#4766)
Alexander Pivovarov [Wed, 22 Jan 2020 02:21:10 +0000 (18:21 -0800)]
Fix padding in pooling op (#4738)
Tianqi Chen [Wed, 22 Jan 2020 00:51:07 +0000 (16:51 -0800)]
[REFACTOR] driver.h -> driver_api.h (#4760)
"driver" normally refers to the "main" function.
Rationale: the header exposes a set of APIs to drive the compilation flow
and should be named driver_api to best reflect its usage.
Cody Yu [Tue, 21 Jan 2020 21:50:20 +0000 (13:50 -0800)]
[Docs] Bring Your Own Codegen Guide -- Part 2 (#4718)
* BYOC Tutorial -- part 2
* Fix comments
* Address comments
Tianqi Chen [Tue, 21 Jan 2020 21:44:50 +0000 (13:44 -0800)]
[INFO] Add .asf.yaml for github info (#4761)
Tianqi Chen [Tue, 21 Jan 2020 19:58:21 +0000 (11:58 -0800)]
[REFACTOR] top->te (#4759)
Bring up namespace te -- Tensor expression language DSL.
Tianqi Chen [Tue, 21 Jan 2020 04:06:17 +0000 (20:06 -0800)]
[REFACTOR] Establish printer in the source folder (#4752)
* [REFACTOR] Establish printer in the source folder.
As we move towards the unified IR, we will eventually want to build a unified
printer for both relay and TIR.
This PR isolates the printer component into a separate folder in src as a first step.
- Refactored the Doc DSL using Object, cleaned up APIs.
- Isolated the metadata into a header.
- Moved the printer into relay_text_printer, added comments about further TODOs.
* Rename NodePrinter -> ReprPrinter to distinguish it from other printers
masahi [Mon, 20 Jan 2020 22:32:22 +0000 (07:32 +0900)]
Expose relay BindParamsByName to Python (#4751)
* expose BindParamByName to python
* fixed alpha equal test
Tianqi Chen [Mon, 20 Jan 2020 22:01:31 +0000 (14:01 -0800)]
[REFACTOR][TYPE] Finish move all types to IR. (#4746)
* [REFACTOR][TYPE] Finish move all types to IR.
- Move definition of Ref and TensorType to ir
- Move type_functor.h to public header.
- Rename RefType -> RelayRefType for clarity.
* Add atol
Alex Gladkov [Mon, 20 Jan 2020 01:18:51 +0000 (17:18 -0800)]
Add CUDA conv2d for NHWC layout (#4737)
Tianqi Chen [Sun, 19 Jan 2020 17:53:22 +0000 (09:53 -0800)]
[REFACTOR][CODEGEN] codegen->target, build_module->driver (#4742)
This PR moves the codegen related code into the target folder,
as they are target specific functionalities.
We also adopt the term "compiler driver" in common compiler infra
such as rust, GHC and clang.
As a result, build_module is moved into the driver folder.
HUAN-PING SU [Sun, 19 Jan 2020 06:47:08 +0000 (14:47 +0800)]
Fix demo dockerfile build failed (#4744)
Tianqi Chen [Sun, 19 Jan 2020 06:44:50 +0000 (22:44 -0800)]
[REFACTOR] Establish tir (#4740)
TIR is the new namespace for low-level IR
for tensor-level optimizations and loop transformations.
This PR establishes the namespace and files.
- lowered_func.h,buffer.h,data_layout.h -> tir/buffer.h,tir/data_layout.h,tir/lowered_func.h
- ir.h -> tir/expr.h, tir/stmt.h
- ir_functor_ext.h -> tir/expr_functor.h, tir/stmt_functor.h
Haichen Shen [Sat, 18 Jan 2020 17:05:46 +0000 (09:05 -0800)]
Fix dense (#4728)
Zhi [Sat, 18 Jan 2020 17:04:47 +0000 (09:04 -0800)]
[runtime][refactor] Unify vm and interpreter objects (#4693)
* unify vm and interpreter objects
* move closure back vm
* adt/closure back to vm.adt/vm.closure
* closure base
wpan11nv [Sat, 18 Jan 2020 02:58:11 +0000 (18:58 -0800)]
[CodeGen][CUDA] Improve CUDA vectorizer (#4736)
- Fixes issues to enable the fp16 vectorizer. Correct packing and
unpacking CUDA code is now emitted. Enabled more unit tests.
- Do not emit code to read the first lane from an undef variable
int _3;
_3 = _3 & ~(0x000000ff << 0) | ...
and emit the following code instead:
_3 = (((0x000000ff & (_1 >> 0))+(0x000000ff & (_2 >> 0))) << 0);
Note that nvcc 10.2 is forgiving and emits the same code for both cases.
A warning appears in test_codegen_cuda.py.
Signed-off-by: Wei Pan <weip@nvidia.com>
Liangfu Chen [Fri, 17 Jan 2020 23:23:49 +0000 (07:23 +0800)]
[VTA][TSIM] Enable TSIM CI Testing (#4407)
* Update task_python_vta.sh
* install sbt=1.1.1 with apt-get
* update verilator_opt
* install verilator with major version 4.0
* disable multi-threading for now
* bug fix for correcting uop fetch address in LoadUop module
* bug fix for correcting uop fetch address in LoadUop module
* adjustment to read from dram_offset
* enable USE_THREADS with verilator 4.x
* DEBUG: try avoid core dump with verilator 4.x
* bug fix in LoadUop module
* log mega cycles in tsim
* download cat.png to avoid fetching in each run
* bug fix in LoadUop module
* solve dram_even/sram_even issue
* bug fix
* introduce scalalint in ci
* speedup tsim in ci
* bug fix
* lint scala code before building
* disable multi-threading
* split fsim/tsim script
* update Jenkins settings
* duplicate task_python_vta_fsim.sh as task_python_vta.sh for now
Co-authored-by: Thierry Moreau <tmoreau@octoml.ai>
Tianqi Chen [Fri, 17 Jan 2020 23:11:55 +0000 (15:11 -0800)]
[REFACTOR] Get rid of packed_func_ext. (#4735)
Move the conversion extensions to the specific class definitions
so that we no longer need to include packed_func_ext.
Animesh Jain [Fri, 17 Jan 2020 18:21:45 +0000 (10:21 -0800)]
[x86 schedule] Fallback schedule for Int8 depthwise. (#4733)
Tianqi Chen [Fri, 17 Jan 2020 18:07:13 +0000 (10:07 -0800)]
[TOOLS] JSON upgrader to upgrade serialized json. (#4730)
During the Unified IR refactor we will change the structure of IRs.
This will cause certain historical modules stored via JSON to no longer
be loadable by the current version.
This PR introduces a backward compatible layer that makes a best effort
to upgrade JSON from a previous version (in this case 0.6) to the current version.
We mainly aim to support updates of the high-level IR (relay).
Animesh Jain [Fri, 17 Jan 2020 17:49:07 +0000 (09:49 -0800)]
[QNN] Conv2D type checking for kernel per-channel scales. (#4732)
* [QNN] Conv2D type checking for kernel per-channel scales.
* Address comments.
* Address comments.
* - Adding safety checks for downcasts.
Co-authored-by: shoubhik <shoubhikbhatti@gmail.com>
Liangfu Chen [Fri, 17 Jan 2020 17:19:52 +0000 (01:19 +0800)]
[VTA] Update Jenkinsfile for VTA test with TSIM (#4734)
* [VTA] Update Jenkinsfile for VTA test with TSIM
* duplicate task_python_vta.sh multiple copies for now
vexilligera [Fri, 17 Jan 2020 17:18:01 +0000 (17:18 +0000)]
export builtin_fp16 on Windows (#4731)
hlu1 [Fri, 17 Jan 2020 16:58:07 +0000 (08:58 -0800)]
[Relay] Invoke tvm::build from relay compile_engine and interpreter (#4723)
Tianqi Chen [Fri, 17 Jan 2020 04:18:57 +0000 (20:18 -0800)]
[REFACTOR] Polish runtime (#4729)
- Remove operator bool from base object ref macro
- Rationale: operator bool can be dangerous for sub-classes
that also overload other operators (e.g. ==).
- If bool is still needed, use explicit operator bool.
- Use absolute include when necessary
- Move type related util to data_type
- Isolate stackvm code from compiler
Animesh Jain [Thu, 16 Jan 2020 23:58:29 +0000 (15:58 -0800)]
[Docs] Convert Layout pass. (#4664)
* [Docs] Convert Layout pass.
* Address comments. Section 3 massaging.
* Address comments.
Tianqi Chen [Thu, 16 Jan 2020 23:23:54 +0000 (15:23 -0800)]
[REFACTOR] top - namespace for Tensor Operation DSL (#4727)
* [REFACTOR] introduce top - Tensor Operation DSL.
Historically we put Tensor, Schedule and compute under the root tvm namespace.
This is no longer a good idea as the project's scope grows larger
than the tensor operation DSL.
This PR introduces top -- a namespace for tensor operation
DSL concepts such as schedule, tensor, compute.
We moved the related files to the new top subfolder.
* Move relevant files into include/tvm/top and src/top
Cody Yu [Thu, 16 Jan 2020 21:58:03 +0000 (13:58 -0800)]
[Docs] Bring Your Own Codegen Guide -- Part 1 (#4602)
* BYOC tutorial: codegen C
* Address comments
* Address comments
* Add build option
* Address comments
* Use TVM_DLL_EXPORT_TYPED_FUNC
Thierry Moreau [Thu, 16 Jan 2020 20:20:42 +0000 (12:20 -0800)]
[Runtime] EdgeTPU runtime for Coral Boards (#4698)
Tianqi Chen [Thu, 16 Jan 2020 18:27:16 +0000 (10:27 -0800)]
[REFACTOR][ARITH] Unified IR, introduce arith subfolder. (#4722)
Split arithmetic.h into several components and move them
into the arith subfolder.
The arith namespace will be used for arithmetic expression
pattern detections and simplifications.
Wei Chen [Thu, 16 Jan 2020 17:01:24 +0000 (09:01 -0800)]
[Relay][Op] Add type check to dense (#4724)
Zhao Wu [Thu, 16 Jan 2020 16:51:53 +0000 (00:51 +0800)]
[CPP RPC] Fix the compile problem of cpp_rpc (#4725)
Wang Yucheng [Thu, 16 Jan 2020 15:33:11 +0000 (23:33 +0800)]
[Relay][Frontend][TFLite] Add parser support for squared difference (#4652)
* [Relay][Frontend][TFLite] Add parser support for squared difference
* fix some error
* fix exp_type
* add comment
Tianqi Chen [Thu, 16 Jan 2020 13:05:04 +0000 (05:05 -0800)]
[COMMUNITY] @FrozenGene -> committer (#4719)
Yizhi Liu [Thu, 16 Jan 2020 06:07:40 +0000 (22:07 -0800)]
[Arith] add SizeVar representing non-neg valued variable in a tensor shape (#4684)
* [arith] add ShapeVar representing non-neg valued variable in a tensor shape
* bound remover; deal with div in int_set differently
* fix bound_remover
* migrate unittest to use shape_var
* use tvm.shape_var in integration & relay tests
* add test case; fix Var register
* fix lint
* fix lint again
* add default ShapeVar visitor in Relay
* fix override
* fix ShapeVar visit bug
* revert IntervalSet for shape_var
* remove bound_remover
* remove is_var; use constructor for shapevar/var instead
* ShapeVar -> SizeVar; add constructor comments
* shape_var -> size_var in doc
* tindex -> size
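A minimal sketch of the renamed API, assuming the post-refactor te namespace used elsewhere in this log:
import tvm
from tvm import te

n = te.size_var("n")            # a Var known to be non-negative (a tensor extent)
A = te.placeholder((n,), name="A")
B = te.compute((n,), lambda i: A[i] * 2.0, name="B")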
Tianqi Chen [Thu, 16 Jan 2020 04:23:15 +0000 (20:23 -0800)]
[REFACTOR][IR] Introduce include/tvm/target (#4721)
As part of Unified IR infra.
Introduce target folder to store all the compilation target related information.
Tianqi Chen [Thu, 16 Jan 2020 04:23:06 +0000 (20:23 -0800)]
[VERSION] Update mainline version to 0.7.dev0 (#4720)
Tianqi Chen [Thu, 16 Jan 2020 03:44:39 +0000 (19:44 -0800)]
[REFACTOR][FFI] Make more clear naming for C API Type codes. (#4715)
This PR introduces clearer naming prefixes for the C API type codes
to avoid conflicts with other packages.
We also removed TVMArray and TVMType to directly use DLTensor and DLDataType.
Tianqi Chen [Wed, 15 Jan 2020 22:44:14 +0000 (14:44 -0800)]
[REFACTOR] Move support related code to include/tvm/support (#4716)
* [REFACTOR] Move support related code to include/tvm/support
- tvm/logging.h -> tvm/support/logging.h
- remove tvm/base.h, move With into tvm/support/with.h
* src/common -> src/support
Tianqi Chen [Wed, 15 Jan 2020 17:07:22 +0000 (09:07 -0800)]
[REFACTOR][IR] attrs.h -> ir (#4709)
This PR moves attrs.h into the ir folder as it
can serve as a common infra for building ir data structures.
We also moved the common container (FloatImm) into ir/expr.h
Wang Yucheng [Wed, 15 Jan 2020 16:48:08 +0000 (00:48 +0800)]
[Relay][Frontend][TFLite] Add constant input support for elemwise ops (#4666)
* [Relay][Frontend][TFLite] Add constant input support for elemwise ops
* modify in tflite.py