Tianqi Chen [Fri, 27 Sep 2019 14:45:25 +0000 (07:45 -0700)]
[ARITH] Use explicit div mode in python. (#4014)
Kimish Patel [Fri, 27 Sep 2019 00:20:00 +0000 (17:20 -0700)]
Exposed lowered func to c++ API. (#4012)
So that you can use: `build_mod_.GetFunction("get_lowered_funcs", false);`
to get lowered_funcs.
Haozheng Fan [Thu, 26 Sep 2019 22:00:14 +0000 (06:00 +0800)]
hide psutil (#4013)
Animesh Jain [Thu, 26 Sep 2019 17:19:42 +0000 (10:19 -0700)]
[QNN][Conv2D] Optimize lowering. (#4006)
Jon Soifer [Thu, 26 Sep 2019 05:48:50 +0000 (22:48 -0700)]
[TOPI][x86] Introduce schedule_injective_from_existing and unify external schedules for all targets (#3983)
* Fix extern schedule for x86
* Register x86::schedule_extern
* Fix
* Fix
* Replace extern.py with extern.h
* Introduce new generic function schedule_injective_from_existing
* Fix
* Fix
* Add back to C++
* Fix style
* Injective schedule calls local schedule_injective_from_existing
* Fix
* Remove target arg from schedule_injective_from_existing
* Fix docs
* Try to fix unit test
* Fix test
* Fix other tests
* Fix bug
Yida Wang [Wed, 25 Sep 2019 23:24:14 +0000 (16:24 -0700)]
[RELAY]impose a max op limit to the op fusion pass (#4002)
* impose a max op limit to op fusion
* use cross platform data type
黎明灰烬 [Wed, 25 Sep 2019 23:02:19 +0000 (07:02 +0800)]
[TOPI] Move conv2d spatial pack schedule to dedicated file (#3972)
More schedules are making the conv2d.py file too large, so
we'd like to move the spatial pack schedule to dedicated file
before introducing NHWC schedule. No logic change in this patch.
Tianqi Chen [Wed, 25 Sep 2019 22:43:31 +0000 (15:43 -0700)]
Revert "Added tesnorizeation for avx2 based gemm. (#3982)" (#4007)
This reverts commit 23727eb49ea71609fc29963b996a68a14fddf79c.
Cody Hao Yu [Wed, 25 Sep 2019 20:50:42 +0000 (13:50 -0700)]
remove FLOP computation for 3rd party lib call (#4005)
Tianqi Chen [Wed, 25 Sep 2019 19:47:29 +0000 (12:47 -0700)]
[ARITH] Refactor to use explicit div/mod functions instead of operators. (#4000)
* [ARITH] Use explicit div/mod functions instead of operators.
* fix pooling case
Kimish Patel [Wed, 25 Sep 2019 18:22:54 +0000 (11:22 -0700)]
Expose llvm.nearbyint intrinsic. This is a faster alternate to rounding. (#4001)
* Expose llvm.nearbyint intrinsic. This is a faster alternate to rounding.
* Added python binding. Added test.
Philipp Krones [Wed, 25 Sep 2019 17:29:23 +0000 (19:29 +0200)]
Change Vivado install instructions to version 2018.3 (#4003)
Kimish Patel [Wed, 25 Sep 2019 16:52:09 +0000 (09:52 -0700)]
Added tensorization for avx2 based gemm. (#3982)
* Added tensorization for avx2 based gemm.
Summary:
Tensorized the same region as avx512. Produces 16x1 int32 results by
issuing two sets of AVX2 instructions to perform the reduction on an
8x4 int8 kernel with 1x4 data.
Test Plan:
on avx2 machine:
python tests/python/contrib/test_gemm_avx2_acc32.py
* Fix lint errors. Removed commented out code.
Tianqi Chen [Wed, 25 Sep 2019 03:13:29 +0000 (20:13 -0700)]
[COMMUNITY] @yongwww -> reviewer (#3997)
Ina Dobreva [Wed, 25 Sep 2019 00:13:21 +0000 (01:13 +0100)]
add parser support for GREATER tflite operator (#3963)
add test for GREATER
Kimish Patel [Wed, 25 Sep 2019 00:08:01 +0000 (17:08 -0700)]
Changes to make tensorize work. These changes also fix the previously broken test. (#3981)
* Changes to make tensorize work. These changes also fix the previously
broken test.
Summary:
Tensorize was breaking for a few reasons.
1)
Assert at: src/op/tensorize.cc:234 CHECK(is_one(e.region[j]->extent))
In some cases this cannot be proven, e.g.:
expected shape=[16, 4], given region=[range(min=((ax1.outer*16)/16), ext=(((((ax1.outer*16) + 15)/16) + 1) - ax1.outer)), range(min=((k.outer*4)/4), ext=(((((k.outer*4) + 3)/4) + 1) - k.outer)), range(min=0, ext=16), range(min=0, ext=4)]
The unprovable one is: ext=(((((ax1.outer*16) + 15)/16) + 1) - ax1.outer)).
This can be simplified, but it is not, because to simplify the divide it
must prove ax1.outer > 0, and since it is a var it cannot. The fix for
this is to find all the vars in the expr and replace them with some const value.
2) Equivalence between the tensorized expr and the one being asked to tensorize. For example,
the error would be:
TVMError: Check failed: Equal(lhs, rhs):
Failed to match the compute with TensorIntrin tensor_intrin's declaration
provided= reduce(combiner=comm_reducer(result=[(x + y)], lhs=[x], rhs=[y], identity_element=[(int16)0]), source=[(int16(data(k))*int16(kernel(((((((((k.outer.outer*64) + (k.outer.inner*2)) + k)/2)*128) + i) - (k.outer.inner*128)) - (k.outer.outer*4096)), ((((k.outer.outer*64) + (k.outer.inner*2)) + k) % 2))))], axis=[iter_var(k, range(min=0, ext=2))], where=(bool)1, value_index=0),
intrin= reduce(combiner=comm_reducer(result=[(x + y)], lhs=[x], rhs=[y], identity_element=[(int16)0]), source=[(int16(data(k))*int16(kernel(i, k)))], axis=[iter_var(k, range(min=0, ext=2))], where=(bool)1, value_index=0)
Difference is mainly in the source part:
source=[(int16(data(k))*int16(kernel(((((((((k.outer.outer*64) + (k.outer.inner*2)) + k)/2)*128) + i) - (k.outer.inner*128)) - (k.outer.outer*4096)), ((((k.outer.outer*64) + (k.outer.inner*2)) + k) % 2))))]
source=[(int16(data(k))*int16(kernel(i, k)))], axis=[iter_var(k, range(min=0, ext=2))]
This was not being simplified because compute_intrin_iter_space (the map
from iter var to range) did not contain leaf iter vars.
3) Here it fails with:
Check failed: is_one(Simplify(value->shape[i])): Argument b_buffer shape mismatch[16, 4] vs [(((((ax1.outer*16) + 15)/16) + 1) - ax1.outer), (((((k.outer*4) + 3)/4) + 1) - k.outer), 16, 4]
This is in buffer binding, where it thinks the expected and the bound
buffer shapes are different. If we could simplify the expr, this would
not be the case.
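A minimal pure-Python sketch (illustrative only, not TVM code) of why replacing free vars with constants makes the extent from case 1) provable: with truncating division and a concrete non-negative `ax1.outer`, the extent always evaluates to 1, which is exactly what `CHECK(is_one(e.region[j]->extent))` needs.

```python
def trunc_div(a, b):
    """C-style truncating division (matches TVM's div on these operands)."""
    q = abs(a) // abs(b)
    return q if (a >= 0) == (b >= 0) else -q

def extent(ax1_outer):
    # ext = (((ax1.outer*16) + 15)/16 + 1) - ax1.outer from the error message.
    return (trunc_div(ax1_outer * 16 + 15, 16) + 1) - ax1_outer

# (16*x + 15) // 16 == x for any integer x >= 0, so the extent is always 1
# once the free var is bound to a constant.
for x in range(64):
    assert extent(x) == 1
```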
Test Plan:
On skylake avx512 machine:
python tests/python/contrib/test_gemm_acc16.py
* Implemented a bound analyzer which traverses the tree and, for reduce/for
statements, binds the bounds in the analyzer. Later this is used to
simplify expressions. Inspired by ir_mutator_with_analyzer.
* Addressed comments.
* Added ASF header + define macro for the header file: TVM_ARITHMETIC_IR_VISITOR_WITH_ANALYZER_H_
Some lint fixes as well.
* Relax the assumption that dom_map must always contain all leaf itervars.
* Disable copy constructor and move to raw ptr.
Tianqi Chen [Tue, 24 Sep 2019 18:01:37 +0000 (11:01 -0700)]
[ARITH] Explicitly state truncdiv/mod in pattern matching. (#3986)
* [ARITH] Explicitly state truncdiv/mod in pattern matching.
* Fix the dependent cpp test
Ina Dobreva [Tue, 24 Sep 2019 17:18:41 +0000 (18:18 +0100)]
add parser support for TANH tflite operator (#3996)
Jon Soifer [Tue, 24 Sep 2019 08:12:11 +0000 (01:12 -0700)]
[Relay] Add new IR pass CombineParallelDense (#3862)
* Refactor to create abstract ParallelOpCombiner
* First draft of CombineParallelDense
* Begin to work on tests
* Test
* Refactor to move out more common code
* Clean up
* Fix
* Remove statics
* fix wording
* Start to add combine_parallel_op_batch
* Resolve PR comments
* Resolve PR comments
* dummy change to retrigger CI
* Change special case from bias_add to add
* Revert special case change
* Ignore units check
* dummy change to retrigger CI
* dummy change to re-trigger CI
* Improve docs
* Update docs
* Update docs
Steven S. Lyubomirsky [Tue, 24 Sep 2019 08:05:00 +0000 (01:05 -0700)]
Add type solver unit tests for unifying quantified funcs (one bug found) (#3947)
Jon Soifer [Tue, 24 Sep 2019 08:02:26 +0000 (01:02 -0700)]
[Relay][Frontend][ONNX] Add Erf to ONNX frontend (#3988)
* Add Erf to ONNX frontend
* dummy change to retrigger CI
StandbyMe [Tue, 24 Sep 2019 05:54:56 +0000 (13:54 +0800)]
[DOC] Add test script starter command to document (#3993)
Animesh Jain [Mon, 23 Sep 2019 15:55:04 +0000 (08:55 -0700)]
[QNN] Fix padding changes due to #3739 (#3989)
Paddy Horan [Sun, 22 Sep 2019 23:13:07 +0000 (19:13 -0400)]
[Rust] Fixes "common" sub crate using nightly and master (#3965)
shoubhik [Sun, 22 Sep 2019 22:55:35 +0000 (15:55 -0700)]
Qnn fully connected (#3910)
* Qnn Dense layer.
* Reformatting code.
* Reformatting code and making the test case more readable.
* Fixing lint issues.
* Fixing test method names to pass the nose related configurations.
* Aligning the code for code style.
Huang, Guangtai [Sun, 22 Sep 2019 16:49:10 +0000 (00:49 +0800)]
Add operator `isnan` (#3979)
* add expr `isnan`
* move to intrinsic
* doc & add to topi
* fix error from ci
Zhi [Sat, 21 Sep 2019 18:44:07 +0000 (11:44 -0700)]
Add docs for analysis namespace (#3985)
Peter Yeh [Sat, 21 Sep 2019 06:04:37 +0000 (23:04 -0700)]
Enable miopen Group Convolution (#3987)
* enable group conv through miopen
* linter fix
Peter Yeh [Sat, 21 Sep 2019 00:02:00 +0000 (17:02 -0700)]
add bc for gfx1010 (#3984)
Neo Chien [Fri, 20 Sep 2019 21:48:55 +0000 (05:48 +0800)]
[Relay][Frontend][TFLite] frontend operator support: batch_to_space_nd, space_to_batch_nd (#3850)
* Fix unittest
* Fix pylint error: Line 915 too long
* Fix the conflicting files
* frontend operator support: space_to_batch_nd
* add test case for frontend operator support: space_to_batch_nd
* add test case for frontend operator support: space_to_batch_nd
* frontend operator support: space_to_batch_nd
* Fix ValueError: don't know how to convert type <class 'numpy.ndarray'> to node
Neo Chien [Fri, 20 Sep 2019 19:17:11 +0000 (03:17 +0800)]
[Relay][Frontend][ONNX] operator support: Tile (#3941)
* [Relay][Frontend][ONNX] operator support: Tile
* Trigger notification
Tianqi Chen [Fri, 20 Sep 2019 17:17:04 +0000 (10:17 -0700)]
[ARITH] Add Lowering rule for FloorDiv/Mod (#3976)
* [ARITH] Add Lowering rule for FloorDiv/Mod
* add comment about constant folding
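A hedged pure-Python sketch of the kind of rule such a lowering involves (not the actual TVM lowering code): floordiv/floormod expressed in terms of truncating div/mod, with the sign adjustment that applies when the signs differ and the division is inexact.

```python
def trunc_div(a, b):
    """C-style truncating division."""
    q = abs(a) // abs(b)
    return q if (a >= 0) == (b >= 0) else -q

def trunc_mod(a, b):
    return a - trunc_div(a, b) * b

def floor_div(a, b):
    # floordiv equals truncdiv except when the operands' signs differ
    # and the division is inexact, in which case it is one smaller.
    q = trunc_div(a, b)
    if trunc_mod(a, b) != 0 and (a < 0) != (b < 0):
        q -= 1
    return q

def floor_mod(a, b):
    return a - floor_div(a, b) * b

# Python's // and % are already floor-style, so they serve as the reference.
for a in range(-9, 10):
    for b in (-4, -3, -1, 1, 3, 4):
        assert floor_div(a, b) == a // b
        assert floor_mod(a, b) == a % b
```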
Alex Gladkov [Fri, 20 Sep 2019 03:49:34 +0000 (20:49 -0700)]
Add support for MXNet pad operator. (#3739)
MXNet pad is described at:
https://mxnet.incubator.apache.org/api/python/symbol/symbol.html#mxnet.symbol.pad
Add support for parameter 'None' in MXNet slice operator.
MXNet 'slice' is described at
https://mxnet.incubator.apache.org/api/python/symbol/symbol.html#mxnet.symbol.slice
Add support for MXNet cos, sin, arctan
MXNet 'cos' is described at
https://mxnet.incubator.apache.org/api/python/symbol/symbol.html#mxnet.symbol.cos
MXNet 'sin' is described at
https://mxnet.incubator.apache.org/api/python/symbol/symbol.html#mxnet.symbol.sin
MXNet arctan is described at
https://mxnet.incubator.apache.org/api/python/symbol/symbol.html#mxnet.symbol.arctan
Add support for MXNet 1D Convolution and 1D Deconvolution
MXNet convolution is described at:
https://mxnet.incubator.apache.org/api/python/symbol/symbol.html#mxnet.symbol.Convolution
MXNet Deconvolution is described at:
https://mxnet.incubator.apache.org/api/python/symbol/symbol.html#mxnet.symbol.Deconvolution
Animesh Jain [Fri, 20 Sep 2019 00:06:44 +0000 (17:06 -0700)]
[QNN] Renaming tests to follow the Relay nomenclature. (#3975)
Cody Hao Yu [Thu, 19 Sep 2019 21:23:20 +0000 (14:23 -0700)]
[TOPI] Add proper scheduling for dense on CUDA (#3923)
* add proper scheduling for dense on CUDA
* add fallback config and fix unit test
* fix corner cases
* refactoring
* fix bias and add testcase
* let fusion happen
Meghan Cowan [Thu, 19 Sep 2019 17:36:30 +0000 (10:36 -0700)]
Remove GTest cmake flag from install docs (#3953)
Ina Dobreva [Thu, 19 Sep 2019 17:33:26 +0000 (18:33 +0100)]
adjust pylint output (#3973)
adjust pylint output to show file location to make it possible to locate errors
Animesh Jain [Thu, 19 Sep 2019 04:54:01 +0000 (21:54 -0700)]
[Relay] Legalize and AlterOpLayout for Int8 Intel. (#3961)
Tianqi Chen [Thu, 19 Sep 2019 04:02:30 +0000 (21:02 -0700)]
[ARITH] Introduce base-class IRMutatorWithAnalyzer for scope dependent analysis (#3969)
Ligeng Zhu [Wed, 18 Sep 2019 23:03:09 +0000 (19:03 -0400)]
[Relay] Add shape check for ConcatenateRel and StackRel (#3699)
* [Relay] add shape check for concat
* [Relay] add shape check for stack
* add test case for shape mismatch
* [typo] add the missing assert
* fix lint errors.
* replace int with size_t.
* statically cast param->axis to size_t.
* switch to run_infer_type.
* fix checking for negative index
* add static_cast for param->axis
* merge to latest tvm
* fix lint error
* Fix an error with negative index.
* Update transform.h
* Update transform.cc
Neo Chien [Wed, 18 Sep 2019 15:12:32 +0000 (23:12 +0800)]
[TVM][AutoTVM] cast filepath arguments to string (#3968)
Josh Fromm [Wed, 18 Sep 2019 07:40:31 +0000 (00:40 -0700)]
[Relay] Keras frontend upsample and 1 channel conv2d fixes (#3937)
* Fix upsample layout in keras frontend.
* Fixed group conv being used instead of conv when channels=1
* Add new conv2d test to catch bugs when channels=1.
shoubhik [Tue, 17 Sep 2019 23:39:34 +0000 (16:39 -0700)]
Adding support to check if an attribute is present or not without having to get the value (#3957)
* Adding support to check if an attribute is present or not without having to get the value.
* - Renaming the method to more appropriate name.
Andrew Tulloch [Tue, 17 Sep 2019 16:34:33 +0000 (09:34 -0700)]
[Vulkan] Minor optimization for deferred token lookups. (#3960)
Use a hash map keyed on the descriptor set to avoid bad asymptotic behaviour.
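The optimization pattern described above can be sketched in pure Python (names hypothetical, not the actual Vulkan runtime code): replacing a linear scan over cached entries with an O(1) map lookup keyed on the descriptor set's contents.

```python
# Hypothetical sketch of a token cache keyed on descriptor-set contents.
class TokenCache:
    def __init__(self):
        self._by_descriptor_set = {}  # hashable key -> cached token

    def lookup(self, descriptor_set, make_token):
        # The descriptor set is turned into a hashable key (here, a tuple
        # of binding ids), so repeated lookups are O(1) instead of a scan.
        key = tuple(descriptor_set)
        token = self._by_descriptor_set.get(key)
        if token is None:
            token = make_token()
            self._by_descriptor_set[key] = token
        return token

cache = TokenCache()
counter = {"made": 0}

def make_token():
    counter["made"] += 1
    return counter["made"]

t1 = cache.lookup([1, 2, 3], make_token)
t2 = cache.lookup([1, 2, 3], make_token)  # cache hit: no new token created
```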
Junru Shao [Tue, 17 Sep 2019 16:33:30 +0000 (09:33 -0700)]
More friendly error msg; Fix Android Demo LLVM ver (#3962)
Animesh Jain [Mon, 16 Sep 2019 21:52:28 +0000 (14:52 -0700)]
[TOPI] Setting up AutoTVM template for Intel Int8 conv2D (#3955)
Yuwei Hu [Mon, 16 Sep 2019 20:03:32 +0000 (13:03 -0700)]
[TOPI] Improve conv2d_transpose schedule on X86 and CUDA (#3948)
* improve conv2d_transpose x86 performance by reusing conv2d schedule
* parallelize across batches to make large-batch conv2d and conv2d_transpose faster
* improve doc for autotvm.task.space.FallbackConfigEntity.fallback_with_reference_log
* add fallback schedule for schedule_conv2d_transpose_nchw_cuda
* fix pylint
* fix pylint
* unify conv2d_transpose declaration in topi.nn and topi.x86
Yao Wang [Mon, 16 Sep 2019 18:22:00 +0000 (11:22 -0700)]
[Graph Tuner] Fix benchmark layout in graph tuner (#3926)
* Fix graph tuner benchmarking layout transform
* Add test
Zhi [Mon, 16 Sep 2019 18:07:40 +0000 (11:07 -0700)]
[tvm][codegen] Make buffer auto broadcast independent to the order of input args (#3956)
* [tvm][codegen] Make buffer auto broadcast independent to the order of the input arg
* fix indent
Neo Chien [Mon, 16 Sep 2019 17:42:47 +0000 (01:42 +0800)]
[TOPI] operator support: logical_and, logical_or, logical_not (#3929)
* [TOPI] operator support: logical_and, logical_or, logical_not
* [TOPI] operator support: logical_and, logical_or, logical_not
* [TOPI] fix test cases for operator support: logical_and, logical_or, logical_not
* [TOPI] fix test cases for operator support: logical_not
Animesh Jain [Mon, 16 Sep 2019 17:35:48 +0000 (10:35 -0700)]
[QNN] Legalization for Intel x86 QNN Conv2D (#3896)
* QNNLegalize for conv2d
* [QNN] Legalization for Intel x86 QNN Conv2D
Peter Yeh [Sun, 15 Sep 2019 21:52:31 +0000 (14:52 -0700)]
Enable miopen transpose convolution and fp16 support (#3952)
* Enable miopen transpose convolution and fp16 support
* linter
Jon Soifer [Sun, 15 Sep 2019 20:03:19 +0000 (13:03 -0700)]
[Relay][TensorFlow] Add support for SquaredDifference (#3930)
* Add support for SquaredDifference and StopGradient; minor fix in BatchMatMul
* Remove stopgradient change
* Resolve PR comment
* Dummy change to retrigger CI
* dummy change to retrigger CI
Cody Hao Yu [Sun, 15 Sep 2019 00:37:17 +0000 (17:37 -0700)]
[AutoTVM] Enhance tuning space of split (#3949)
* Refine policies for define_split
- Rename policy "all" to "factors"
- Add policy "verbose" and "power2"
* Refine search space
* add doc
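A pure-Python sketch of what the renamed policies enumerate, paraphrasing the commit message rather than AutoTVM's internals: "factors" keeps exact-factor splits of an axis extent, while "power2" also admits power-of-two candidates.

```python
def factors(n):
    """All positive divisors of n (exact tilings of an extent-n axis)."""
    return [i for i in range(1, n + 1) if n % i == 0]

def powers_of_two_upto(n):
    p, out = 1, []
    while p <= n:
        out.append(p)
        p *= 2
    return out

def split_candidates(extent, policy="factors"):
    # Policy names taken from the commit message; semantics are a sketch,
    # not AutoTVM's actual define_split implementation.
    if policy == "factors":
        return factors(extent)
    if policy == "power2":
        return sorted(set(factors(extent)) | set(powers_of_two_upto(extent)))
    raise ValueError(policy)

# For extent 12, "factors" keeps perfect tilings; "power2" also adds 8.
```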
Junru Shao [Sat, 14 Sep 2019 19:45:37 +0000 (12:45 -0700)]
trivial (#3954)
Umang Yadav [Fri, 13 Sep 2019 20:42:36 +0000 (16:42 -0400)]
1) Add EQ op to the deduce_bound and add unittests for the same (#3775)
2) Add EQ support in the loop partition and add test for the same
3) Change typo truc to trunc
Andrew Tulloch [Fri, 13 Sep 2019 20:40:43 +0000 (13:40 -0700)]
Vulkan2 Runtime API (#3849)
Hua Jiang [Fri, 13 Sep 2019 20:33:55 +0000 (13:33 -0700)]
[VTA] RPC path update. (#3924)
Issue:
The RPC path was changed from "pynq_rpc" to "vta_rpc", but the related
documentation still uses the old information.
Solution:
Update the RPC path information.
Jianyu Huang [Fri, 13 Sep 2019 20:27:40 +0000 (13:27 -0700)]
Add AVX512VNNI support for TVM (#3388)
Animesh Jain [Fri, 13 Sep 2019 18:38:14 +0000 (11:38 -0700)]
Refactoring x86 conv2d_NCHWc (#3944)
noituIover [Fri, 13 Sep 2019 01:04:52 +0000 (09:04 +0800)]
Fix CUDA int8x4 vectorize (#3928)
* Fix int8x4 vectorize
* Fix gpu shared/local memory accumulate
* Add test_shared_memory for int8x4
* Adjust test format
* Fix cpplint
shoubhik [Thu, 12 Sep 2019 23:34:20 +0000 (16:34 -0700)]
Do type checking for the input and kernel in the qnn conv2d (#3904)
* [QNN] Convolution 2D Implementation.
Rebasing. Empty commit.
Clang-format styling.
* Reformatting code.
* Fixing lint issues.
Jon Soifer [Thu, 12 Sep 2019 20:04:45 +0000 (13:04 -0700)]
[TOPI][CUDA] Support cuBLAS BatchMatMul (#3936)
* Support cuBLAS BatchMatMul
* Add test and check target name
Andrew Tulloch [Thu, 12 Sep 2019 19:32:01 +0000 (12:32 -0700)]
[RFC] [Contrib] Minimal runtime (~12kb .text on ARMv7/x86) for subset of TVM models (#3567)
This is an alternative implementation of a subset of the TVM runtime API (and
graph runtime) that focuses entirely on reducing code size, at the expense of
functionality (no tvm.extern(..) calls via PackedFunc, CPU only, etc). It might
be worth incrementally expanding the surface area if there's interest.
The motivation for this work was seeing what the minimal useful subset of the
TVM runtime is. This is relevant for super code-size-constrained
applications, e.g. embedded/mobile. The current runtime is more like O(100KiB)
or so, so this might be compelling for some users.
The smaller surface area for auditing might make this relevant for
https://github.com/dmlc/tvm/issues/3159, or the usecases I was thinking about in
https://github.com/dmlc/tvm/issues/2523#issuecomment-459165815 re: the Rust
runtime.
The symbols in the tvm::minimalruntime space (i.e. excluding std:: and
picojson::) are about 5KiB, so I think there's a bunch of room here: we
could replace picojson:: with [`jsmn`](https://zserge.com/jsmn.html) or
something, and we could replace more of the `std::unordered_map` usage, etc.
with custom primitives as well (similar to the `DynArray`).
Jared Roesch [Thu, 12 Sep 2019 03:39:56 +0000 (22:39 -0500)]
[Relay][Module] Refactor the way we interface between different modules of Relay. (#3906)
* Module refactor
* Add load module
* Add support for idempotent import
* Tweak load paths
* Move path around
* Expose C++ import functions in Python
* Fix import
* Add doc string
* Fix
* Fix lint
* Fix lint
* Fix test failure
* Add type solver
* Fix lint
Lianmin Zheng [Wed, 11 Sep 2019 21:32:15 +0000 (14:32 -0700)]
[Community] Add reviewer Balint Cristian (#3935)
Yizhi Liu [Wed, 11 Sep 2019 18:10:48 +0000 (02:10 +0800)]
[Arm] parallel batch axis (#3931)
* support LLVM trunk
* guard with USE_LLVM in if condition for c++14
* GREATER_EQUAL -> GREATER
* [Arm] parallel batch axis
Zhao Wu [Wed, 11 Sep 2019 04:09:25 +0000 (12:09 +0800)]
[TFLite] Support depthwise convolution multiplier greater than 1 (#3922)
雾雨魔理沙 [Wed, 11 Sep 2019 03:30:46 +0000 (20:30 -0700)]
[Relay] fix exponential blowup in interpreter (#3559)
Neo Chien [Tue, 10 Sep 2019 17:41:16 +0000 (01:41 +0800)]
[Relay][Frontend][Keras] Fix ReLU in Keras Converter missed the case (#3917)
* [Relay][Frontend][Keras] Fix ReLU in Keras Converter missed the case
* [Relay][Frontend][Keras] Add test case for ReLU in Keras Converter missed the case
* [Relay][Frontend][Keras] Add test case for ReLU in Keras Converter missed the case
Pratyush Patel [Tue, 10 Sep 2019 00:43:01 +0000 (17:43 -0700)]
[CODEGEN] Remove incorrect check for LLVM in C codegen test (#3921)
雾雨魔理沙 [Mon, 9 Sep 2019 19:48:04 +0000 (12:48 -0700)]
[Relay][Training] Add gradient for max. (#3915)
* save
* save
Luis Vega [Mon, 9 Sep 2019 17:31:31 +0000 (10:31 -0700)]
[VTA][Config] hotfix denano10 (#3918)
Xingjian Shi [Mon, 9 Sep 2019 17:26:34 +0000 (10:26 -0700)]
Numpy compatible dtype inference for `tvm.convert` and `tvm.const` (#3861)
* numpy compatible type inference
* update
* try to fix
* fix
* try to fix
* fix lint
* Update nn.h
* cast to int32
* try to fix
* fix again
* retrigger ci
Haichen Shen [Mon, 9 Sep 2019 14:54:15 +0000 (07:54 -0700)]
[Relay/TOPI][Op] Add erf intrinsic and op (#3702)
* add more ops
* stop vectorization for erf
* x
* cleanup
* fix
* add whitelist for vectorizable intrin
* add tf converter
* fix dense
* fix
* add missing intrin
* fix mxnet frontend
* fix nvptx
雾雨魔理沙 [Sun, 8 Sep 2019 03:11:47 +0000 (20:11 -0700)]
[Relay][Training] Add gradient for cast (#3894)
save
fix
fix grad
雾雨魔理沙 [Sun, 8 Sep 2019 00:10:11 +0000 (17:10 -0700)]
change docker install script (#3524)
Haichen Shen [Sat, 7 Sep 2019 21:34:32 +0000 (14:34 -0700)]
[Fix] Fix blas cmake for mac os (#3898)
* fix cmake for mac os
* rename
Yizhi Liu [Sat, 7 Sep 2019 18:43:29 +0000 (02:43 +0800)]
Support LLVM trunk (#3907)
* support LLVM trunk
* guard with USE_LLVM in if condition for c++14
* GREATER_EQUAL -> GREATER
noituIover [Sat, 7 Sep 2019 16:44:39 +0000 (00:44 +0800)]
Fix a typo (#3913)
Peter Yeh [Sat, 7 Sep 2019 03:41:35 +0000 (20:41 -0700)]
Add .hsaco save/load for ROCm target (#3852)
fix lld
Haichen Shen [Sat, 7 Sep 2019 00:12:56 +0000 (17:12 -0700)]
add luis as reviewer (#3909)
Hua Jiang [Sat, 7 Sep 2019 00:03:51 +0000 (17:03 -0700)]
[VTA] Support TLPP in function simulator. (#3555)
* [VTA] Support TLPP in function simulator.
Issue:
Currently the vta function simulator just does serialized instruction
execution; the dependency logic of the runtime ISA, which is used for
task-level pipeline parallelism, cannot be verified by the function simulator.
Solution:
Make the simulator driver multi-threaded and support TLPP.
Benefit:
TLPP support in the VTA function simulator makes VTA logic
testing/debugging/changes easier.
replace boost lockfree queue
add configure control for simulator tlpp enable or disable.
change code style into google style.
Wrap queue read/write and sync logic to make function call more simple.
Add some comments.
Remove MT logic, change into Single thread mode.
address review comments.
code style change to match google code style and add comments.
add cmake macro to enable/disable simulator tlpp logic.
submodule update.
correct file name mentioned in comments.
* remove USE_VTA_FSIM_TLPP.
Leyuan Wang [Sat, 7 Sep 2019 00:01:29 +0000 (17:01 -0700)]
[TOPI] Intel graphics conv2d autotvm template added (#3839)
* update lint
* lint fixed
* lint updated
* lint fixed
* lint fixed
* lint fixed
* updates
* add intel graphics as a package
* remove print info
* depthwise conv2d schedule added for intel graphics
* asdf
* fix lint
* fix lint
* fix ci
* add channels
雾雨魔理沙 [Fri, 6 Sep 2019 22:17:37 +0000 (15:17 -0700)]
save (#3901)
雾雨魔理沙 [Fri, 6 Sep 2019 18:51:27 +0000 (11:51 -0700)]
[Relay][Op] Make Type Relation catch more errors (#3899)
* save
* init
* move type_relations
Logan Weber [Fri, 6 Sep 2019 18:04:34 +0000 (11:04 -0700)]
[Relay] Add ADTs to text format (#3863)
* Getting closer to having ADT defs
* ADT defs working probably
* Match parsing basically done
* came to earth in a silver chrome UFO
* match finished?
* All tests but newest are passing
* ADT constructors work
now cleanup?
* Cleanup round 1
* Cleanup round 2
* Cleanup round 3
* Cleanup round 4
* Cleanup round 6
* Cleanup round 7
* Lil grammar fix
* Remove ANTLR Java files
* Lint roller
* Lint roller
* Address feedback
* Test completeness in match test
* Remove unused imports
* Lint roller
* Switch to Rust-style ADT syntax
* Lil fix
* Add dummy `extern type` handler
* Add type arg to test
* Update prelude semantic version
* Repair test
* Fix graph var handling in match
* Revert 's/graph_equal/is_unifiable' change
Yong Wu [Fri, 6 Sep 2019 15:30:04 +0000 (08:30 -0700)]
[bugfix] remove duplicate resize (#3902)
Jason Knight [Fri, 6 Sep 2019 13:30:13 +0000 (06:30 -0700)]
Add another MKL name alias for MKL (#3853)
Installed through pypi
Yizhi Liu [Fri, 6 Sep 2019 13:29:31 +0000 (21:29 +0800)]
[schedule] Improve ceil_divide in tile/split (#3842)
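The ceil-divide used when tiling/splitting an axis can be sketched in pure Python (an illustration of the arithmetic, not the TVM schedule code): the number of outer iterations is ceil(extent / factor), with the last tile possibly partial.

```python
def ceil_div(a, b):
    """Ceiling division for positive ints: number of size-b tiles
    needed to cover an extent of a."""
    return (a + b - 1) // b

# Splitting an extent-10 axis by factor 4 yields 3 outer iterations,
# the last one only partially full.
assert ceil_div(10, 4) == 3
assert ceil_div(12, 4) == 3
assert ceil_div(1, 4) == 1
```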
Jon Soifer [Thu, 5 Sep 2019 23:42:29 +0000 (16:42 -0700)]
[PYTHON/FFI] Search PATH for DLLs (#3888)
* Search PATH for DLLs
* Fix lint issue
雾雨魔理沙 [Thu, 5 Sep 2019 23:41:44 +0000 (16:41 -0700)]
[Relay] add Tuple pattern (#3596)
* implement tuple pattern
* add tuple pattern
* lint;
* lint
* lint
* fix error
* fix
* add test
kice [Thu, 5 Sep 2019 23:21:54 +0000 (19:21 -0400)]
Fix int32 range overflow by using int64 (#3870)
雾雨魔理沙 [Thu, 5 Sep 2019 21:39:13 +0000 (14:39 -0700)]
[Relay] Fix operator fusion for multiple output (#3871)
* save
* add test
* refactor
* fix indent
* save
* refactor
Haibin Lin [Thu, 5 Sep 2019 18:48:57 +0000 (11:48 -0700)]
[DOC] Fix doc rendering (#3897)
* Update from_source.rst
* Update deploy_ssd_gluoncv.py
黎明灰烬 [Thu, 5 Sep 2019 18:32:21 +0000 (02:32 +0800)]
[Test] enable NHWC of `relay.testing.mobilenet` (#3886)
* [Relay] enable NHWC of `relay.testing.mobilenet`
In this way, we can play around NHWC inside TVM regardless of
the frontends.
* [Test] test for NHWC of relay.testing.mobilenet
Thierry Moreau [Thu, 5 Sep 2019 18:29:42 +0000 (11:29 -0700)]
[VTA][TOPI] Conv2d transpose (deconvolution) operator support (#3777)
* initial conv2d_transpose
* correct select operator
* cleanup
* fix
* fix correcness check
* conv2d transpose declaration fix
* autotvm conv2d_transpose tuning script
* ir pass fix
* fix tuning script
* deriving params from env, adding bias
* removing bias comp from deconvolution
* lint
* fix
* lint
* lint
* turning off cpu
* lint, ops
* lint
* import fix
* removing hard coded values
* lint
Thierry Moreau [Thu, 5 Sep 2019 18:17:09 +0000 (11:17 -0700)]
[VTA][Relay] Extending Vision model coverage compilation for VTA (#3740)
* adding support for graphpack over multiply op
* increasing resnet model coverage
* fix indentation
* lint
* moving recursion limit fix into graphpack pass
* moving recursionlimit to relay init
* pooling on NCHWnc format
* adding more models
* deploy_resnet_on_vta.py
* trailing line
* generalizing to vision models
* merge conflicts
* fix, apply quantization to VTA only
* improving comments
* trimming models that have runtime issues for the moment
* lint
* lint
* lint
雾雨魔理沙 [Thu, 5 Sep 2019 18:13:07 +0000 (11:13 -0700)]
[Relay][Training] Small refactoring (#3893)
* init
* fix
Animesh Jain [Thu, 5 Sep 2019 17:22:45 +0000 (10:22 -0700)]
[QNN] Add - Refactoring to C++ (#3736)