bindog [Thu, 3 Oct 2019 00:01:36 +0000 (08:01 +0800)]
[Relay][Op] Add instance norm op (#4004)
* [Relay][Op] Add instance norm op
* mend
[Relay][Op] Add instance norm op
Animesh Jain [Wed, 2 Oct 2019 22:39:54 +0000 (15:39 -0700)]
[QNN][Relay] Calling Dialect passes from inside Relay Build API. (#3971)
Umang Yadav [Wed, 2 Oct 2019 20:13:10 +0000 (16:13 -0400)]
[RELAY/PASS] Fix the extent for the post_stmt in the loop partition (#3734)
Wei Chen [Wed, 2 Oct 2019 20:11:30 +0000 (13:11 -0700)]
[TF][Op] Op where (#4045)
* [TF][Op] Add TF op Where
* improve tests
* add tests for vm
Cody Hao Yu [Tue, 1 Oct 2019 23:20:29 +0000 (16:20 -0700)]
Fix split's last factor issue (#4044)
Tianqi Chen [Tue, 1 Oct 2019 21:07:49 +0000 (14:07 -0700)]
[COMMUNITY] ajtulloch -> committer (#4043)
Wei Chen [Tue, 1 Oct 2019 20:09:21 +0000 (13:09 -0700)]
[TOPI]Add op argwhere (#3994)
* Add op argwhere
* Move shape func to _algorithm.py
* Add lint rule
* Raise exception if rank is not supported
* move argwhere to transform
* Add argwhere example
* Fix lint
* Add 1-d support
* cleanup
* Add more dtype support
* CR comment
* Improve error message
* Docs
* raise exception
Yizhi Liu [Tue, 1 Oct 2019 15:40:16 +0000 (23:40 +0800)]
[topi] add ARM v8.2 udot (uint8) support (#3978)
* [topi] add ARM v8.2 udot (uint8) support
* fix test case
* fix common conv2d schedule
* add back fp32_time in test
* fix lint
* fix doc, add support for int32_lanes=4, signed int
* fix lint
* add ic_bn % 4 checker in schedule
Tianqi Chen [Mon, 30 Sep 2019 19:14:59 +0000 (12:14 -0700)]
[COMMUNITY] anijain2305 -> reviewer (#4036)
Animesh Jain [Mon, 30 Sep 2019 17:24:36 +0000 (10:24 -0700)]
[QNN] Renaming dense operator. (#4033)
Animesh Jain [Mon, 30 Sep 2019 17:06:35 +0000 (10:06 -0700)]
[Relay][Compile_engine] Int64 shape handling for outputs. (#4031)
ndl [Mon, 30 Sep 2019 16:25:24 +0000 (18:25 +0200)]
Add dmlc-core to the list of installed header directories. (#4035)
There are dependencies on dmlc-core in TVM public API headers
(e.g. some headers include dmlc/logging.h) so it needs to be installed
as part of TVM for TVM headers to be actually usable.
Tianqi Chen [Mon, 30 Sep 2019 05:06:58 +0000 (22:06 -0700)]
[ARITH] migrate indexdiv/mod to floordiv/mod (#4008)
Logan Weber [Sun, 29 Sep 2019 23:48:10 +0000 (16:48 -0700)]
[Relay] Move prelude to text format (#3939)
* Fix parser
* Doc fix
* Add module utility functions necessary for prelude
* Implement prelude in text format
* Remove programmatically constructed prelude defs
* Fix 0-arity type conses in pretty printer and test
* Make prelude loading backwards-compatible
* Fix patterns
* Improve some prelude defs
* Fix `ImportFromStd`
It needs to also follow the "add unchecked, add checked" pattern
* Lint roller
* Woops
* Address feedback
* Fix `test_list_constructor` VM test
* Fix `test_adt.py` failures
egolearner [Sun, 29 Sep 2019 16:21:18 +0000 (00:21 +0800)]
make tvm compilable by gcc 4.9.2 (#4032)
please see https://stackoverflow.com/a/26949099
Neo Chien [Sun, 29 Sep 2019 03:20:34 +0000 (11:20 +0800)]
[AUTOTVM][DOCS] Add a link to the defining network description of auto-tuning tutorial (#4023)
* [AUTOTVM][DOCS] Add a link to the AutoTVM tutorial pointing to the details of building the network with Relay
* [AUTOTVM][DOCS] Add a link to the AutoTVM tutorial pointing to the details of building the network with Relay
Tianqi Chen [Sat, 28 Sep 2019 21:43:44 +0000 (14:43 -0700)]
[ARITH] cleanup the indexmod/div on python side (#4028)
bindog [Sat, 28 Sep 2019 17:22:01 +0000 (01:22 +0800)]
[Fix] Add more pad_mode support for onnx converter (#4029)
* [Fix] Add more pad_mode support for onnx converter
* robustness fix
Ina Dobreva [Sat, 28 Sep 2019 00:30:11 +0000 (01:30 +0100)]
Add parser support for ReLU tflite operator (#4022)
Alex Gladkov [Sat, 28 Sep 2019 00:27:48 +0000 (17:27 -0700)]
Additional MXNet Convolution and Deconvolution tests (#4026)
Add different batch sizes and channel numbers to
MXNet Convolution and Deconvolution tests.
brett koonce [Fri, 27 Sep 2019 19:39:42 +0000 (14:39 -0500)]
docs: minor spelling tweaks (#4027)
Paddy Horan [Fri, 27 Sep 2019 16:41:31 +0000 (12:41 -0400)]
[Rust] Fix issue with CPP enums. (#4019)
Tianqi Chen [Fri, 27 Sep 2019 16:34:02 +0000 (09:34 -0700)]
[DOCKER] make demo images consistent with ci images when possible. (#4024)
Yida Wang [Fri, 27 Sep 2019 15:59:15 +0000 (08:59 -0700)]
[Fix] use a more intuitive way to limit the #ops in a group (#4018)
* use a more intuitive way to limit the #ops in a group
* format
Tianqi Chen [Fri, 27 Sep 2019 14:45:25 +0000 (07:45 -0700)]
[ARITH] Use explicit div mode in python. (#4014)
Kimish Patel [Fri, 27 Sep 2019 00:20:00 +0000 (17:20 -0700)]
Exposed lowered func to c++ API. (#4012)
So that you can use: `build_mod_.GetFunction("get_lowered_funcs", false);`
to get lowered_funcs.
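A minimal C++ sketch of the intended usage, assuming `build_mod_` is a `tvm::runtime::Module` wrapping the Relay build module; the wrapper function name below is illustrative, and only the "get_lowered_funcs" packed function comes from the change itself:

    #include <tvm/runtime/module.h>
    #include <tvm/runtime/packed_func.h>

    // Hypothetical helper showing how the newly exposed function could be queried.
    void DumpLoweredFuncs(tvm::runtime::Module build_mod_) {
      // `false` means: do not search imported modules for the function.
      tvm::runtime::PackedFunc get_lowered =
          build_mod_.GetFunction("get_lowered_funcs", false);
      // Calling the packed function yields the lowered funcs; the concrete return
      // type is whatever the build module registers, so keep it generic here.
      tvm::runtime::TVMRetValue rv = get_lowered();
      (void)rv;
    }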
Haozheng Fan [Thu, 26 Sep 2019 22:00:14 +0000 (06:00 +0800)]
hide psutil (#4013)
Animesh Jain [Thu, 26 Sep 2019 17:19:42 +0000 (10:19 -0700)]
[QNN][Conv2D] Optimize lowering. (#4006)
Jon Soifer [Thu, 26 Sep 2019 05:48:50 +0000 (22:48 -0700)]
[TOPI][x86] Introduce schedule_injective_from_existing and unify external schedules for all targets (#3983)
* Fix extern schedule for x86
* Register x86::schedule_extern
* Fix
* Fix
* Replace extern.py with extern.h
* Introduce new generic function schedule_injective_from_existing
* Fix
* Fix
* Add back to C++
* Fix style
* Injective schedule calls local schedule_injective_from_existing
* Fix
* Remove target arg from schedule_injective_from_existing
* Fix docs
* Try to fix unit test
* Fix test
* Fix other tests
* Fix bug
Yida Wang [Wed, 25 Sep 2019 23:24:14 +0000 (16:24 -0700)]
[RELAY]impose a max op limit to the op fusion pass (#4002)
* impose a max op limit to op fusion
* use cross platform data type
黎明灰烬 [Wed, 25 Sep 2019 23:02:19 +0000 (07:02 +0800)]
[TOPI] Move conv2d spatial pack schedule to dedicated file (#3972)
More schedules are making the conv2d.py file too large, so
we'd like to move the spatial pack schedule to a dedicated file
before introducing the NHWC schedule. No logic change in this patch.
Tianqi Chen [Wed, 25 Sep 2019 22:43:31 +0000 (15:43 -0700)]
Revert "Added tesnorizeation for avx2 based gemm. (#3982)" (#4007)
This reverts commit 23727eb49ea71609fc29963b996a68a14fddf79c.
Cody Hao Yu [Wed, 25 Sep 2019 20:50:42 +0000 (13:50 -0700)]
remove FLOP computation for 3rd party lib call (#4005)
Tianqi Chen [Wed, 25 Sep 2019 19:47:29 +0000 (12:47 -0700)]
[ARITH] Refactor to use explicit div/mod functions instead of operators. (#4000)
* [ARITH] Use explicit div/mod functions instead of operators.
* fix pooling case
Kimish Patel [Wed, 25 Sep 2019 18:22:54 +0000 (11:22 -0700)]
Expose llvm.nearbyint intrinsic. This is a faster alternative to rounding. (#4001)
* Expose llvm.nearbyint intrinsic. This is a faster alternative to rounding.
* Added python binding. Added test.
Philipp Krones [Wed, 25 Sep 2019 17:29:23 +0000 (19:29 +0200)]
Change Vivado install instructions to version 2018.3 (#4003)
Kimish Patel [Wed, 25 Sep 2019 16:52:09 +0000 (09:52 -0700)]
Added tensorization for avx2 based gemm. (#3982)
* Added tensorization for avx2 based gemm.
Summary:
Tensorized the same region as avx512. Produces 16x1 int32 results by running
two sets of AVX2 instructions to do the reduction on an 8x4 int8
kernel with 1x4 data.
Test Plan:
on avx2 machine:
python tests/python/contrib/test_gemm_avx2_acc32.py
* Fix lint errors. Removed commented out code.
Tianqi Chen [Wed, 25 Sep 2019 03:13:29 +0000 (20:13 -0700)]
[COMMUNITY] @yongwww -> reviewer (#3997)
Ina Dobreva [Wed, 25 Sep 2019 00:13:21 +0000 (01:13 +0100)]
add parser support for GREATER tflite operator (#3963)
add test for GREATER
Kimish Patel [Wed, 25 Sep 2019 00:08:01 +0000 (17:08 -0700)]
Changes to make tensorize work. These changes also fix the previously broken test. (#3981)
* Changes to make tensorize work. These changes also fix the previously
broken test.
Summary:
Tensorize was breaking for a few reasons.
1)
Assert at: src/op/tensorize.cc:234 CHECK(is_one(e.region[j]->extent))
In some cases this cannot be proven, e.g.:
expected shape=[16, 4], given region=[range(min=((ax1.outer*16)/16), ext=(((((ax1.outer*16) + 15)/16) + 1) - ax1.outer)), range(min=((k.outer*4)/4), ext=(((((k.outer*4) + 3)/4) + 1) - k.outer)), range(min=0, ext=16), range(min=0, ext=4)]
The unprovable one is: ext=(((((ax1.outer*16) + 15)/16) + 1) - ax1.outer).
This can be simplified, but it is not, because to simplify the division it must
prove ax1.outer > 0, and since it is a var it cannot. The fix for this is to
find all the vars in the expr and replace them with some const value (see the
arithmetic check after point 3 below).
2) Equivalence between the tensorized expr and the one being asked to tensorize. For example,
the error would be:
TVMError: Check failed: Equal(lhs, rhs):
Failed to match the compute with TensorIntrin tensor_intrin's declaration
provided= reduce(combiner=comm_reducer(result=[(x + y)], lhs=[x], rhs=[y], identity_element=[(int16)0]), source=[(int16(data(k))*int16(kernel(((((((((k.outer.outer*64) + (k.outer.inner*2)) + k)/2)*128) + i) - (k.outer.inner*128)) - (k.outer.outer*4096)), ((((k.outer.outer*64) + (k.outer.inner*2)) + k) % 2))))], axis=[iter_var(k, range(min=0, ext=2))], where=(bool)1, value_index=0),
intrin= reduce(combiner=comm_reducer(result=[(x + y)], lhs=[x], rhs=[y], identity_element=[(int16)0]), source=[(int16(data(k))*int16(kernel(i, k)))], axis=[iter_var(k, range(min=0, ext=2))], where=(bool)1, value_index=0)
Difference is mainly in the source part:
source=[(int16(data(k))*int16(kernel(((((((((k.outer.outer*64) + (k.outer.inner*2)) + k)/2)*128) + i) - (k.outer.inner*128)) - (k.outer.outer*4096)), ((((k.outer.outer*64) + (k.outer.inner*2)) + k) % 2))))]
source=[(int16(data(k))*int16(kernel(i, k)))], axis=[iter_var(k, range(min=0, ext=2))]
This was not being simplified because compute_intrin_iter_space (the map from
iter var to range) did not contain the leaf iter vars.
3) Here it fails with:
Check failed: is_one(Simplify(value->shape[i])): Argument b_buffer shape mismatch[16, 4] vs [(((((ax1.outer*16) + 15)/16) + 1) - ax1.outer), (((((k.outer*4) + 3)/4) + 1) - k.outer), 16, 4]
This is in buffer binding, where the expected shape and the bound buffer shape
are considered different, although if we could simplify the expr this would not
be the case.
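A small standalone C++ check of the arithmetic behind fix 1 above (this is only an illustration of why pinning the free var to a non-negative constant lets the extent simplify to 1, not the actual TVM simplifier code):

    #include <cassert>

    int main() {
      // ext = (((ax1_outer*16 + 15) / 16) + 1) - ax1_outer, with C-style (truncating) division.
      // For any non-negative ax1_outer this evaluates to 1, which is exactly what
      // CHECK(is_one(e.region[j]->extent)) needs to hold.
      for (int ax1_outer = 0; ax1_outer < 1000; ++ax1_outer) {
        int ext = ((ax1_outer * 16 + 15) / 16 + 1) - ax1_outer;
        assert(ext == 1);
      }
      // For negative ax1_outer truncating division rounds toward zero and the identity
      // breaks (e.g. ax1_outer = -1 gives ext = 2), which is why the simplifier cannot
      // prove it while ax1.outer is an unconstrained var.
      return 0;
    }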
Test Plan:
On skylake avx512 machine:
python tests/python/contrib/test_gemm_acc16.py
* Implemented a bound analyzer which traverses the tree and, for reduce/for
statements, binds the bounds in the analyzer. Later this is used to
simplify expressions. Inspired by ir_mutator_with_analyzer.
* Addressed comments.
* Added ASF header + define macro for the header file: TVM_ARITHMETIC_IR_VISITOR_WITH_ANALYZER_H_
Some lint fixes as well.
* Relax the assumption that dom_map must always contain all leaf itervars.
* Disable copy constructor and move to raw ptr.
Tianqi Chen [Tue, 24 Sep 2019 18:01:37 +0000 (11:01 -0700)]
[ARITH] Explicitly state truncdiv/mod in pattern matching. (#3986)
* [ARITH] Explicitly state truncdiv/mod in pattern matching.
* Fix the dependent cpp test
Ina Dobreva [Tue, 24 Sep 2019 17:18:41 +0000 (18:18 +0100)]
add parser support for TANH tflite operator (#3996)
Jon Soifer [Tue, 24 Sep 2019 08:12:11 +0000 (01:12 -0700)]
[Relay] Add new IR pass CombineParallelDense (#3862)
* Refactor to create abstract ParallelOpCombiner
* First draft of CombineParallelDense
* Begin to work on tests
* Test
* Refactor to move out more common code
* Clean up
* Fix
* Remove statics
* fix wording
* Start to add combine_parallel_op_batch
* Resolve PR comments
* Resolve PR comments
* dummy change to retrigger CI
* Change special case from bias_add to add
* Revert special case change
* Ignore units check
* dummy change to retrigger CI
* dummy change to re-trigger CI
* Improve docs
* Update docs
* Update docs
Steven S. Lyubomirsky [Tue, 24 Sep 2019 08:05:00 +0000 (01:05 -0700)]
Add type solver unit tests for unifying quantified funcs (one bug found) (#3947)
Jon Soifer [Tue, 24 Sep 2019 08:02:26 +0000 (01:02 -0700)]
[Relay][Frontend][ONNX] Add Erf to ONNX frontend (#3988)
* Add Erf to ONNX frontend
* dummy change to retrigger CI
StandbyMe [Tue, 24 Sep 2019 05:54:56 +0000 (13:54 +0800)]
[DOC] Add test script starter command to document (#3993)
Animesh Jain [Mon, 23 Sep 2019 15:55:04 +0000 (08:55 -0700)]
[QNN] Fix padding changes due to #3739 (#3989)
Paddy Horan [Sun, 22 Sep 2019 23:13:07 +0000 (19:13 -0400)]
[Rust] Fixes "common" sub crate using nightly and master (#3965)
shoubhik [Sun, 22 Sep 2019 22:55:35 +0000 (15:55 -0700)]
Qnn fully connected (#3910)
* Qnn Dense layer.
* Reformatting code.
* Reformatting code and making the test case more readable.
* Fixing lint issues.
* Fixing test method names to pass the nose related configurations.
* Aligning the code for code style.
Huang, Guangtai [Sun, 22 Sep 2019 16:49:10 +0000 (00:49 +0800)]
Add operator `isnan` (#3979)
* add expr `isnan`
* move to intrinsic
* doc & add to topi
* fix error from ci
Zhi [Sat, 21 Sep 2019 18:44:07 +0000 (11:44 -0700)]
Add docs for analysis namespace (#3985)
Peter Yeh [Sat, 21 Sep 2019 06:04:37 +0000 (23:04 -0700)]
Enable miopen Group Convolution (#3987)
* enable group conv through miopen
* linter fix
Peter Yeh [Sat, 21 Sep 2019 00:02:00 +0000 (17:02 -0700)]
add bc for gfx1010 (#3984)
Neo Chien [Fri, 20 Sep 2019 21:48:55 +0000 (05:48 +0800)]
[Relay][Frontend][TFLite] frontend operator support: batch_to_space_nd, space_to_batch_nd (#3850)
* Fix unittest
* Fix pylint error: Line 915 too long
* Fix the conflicting files
* frontend operator support: space_to_batch_nd
* add test case for frontend operator support: space_to_batch_nd
* add test case for frontend operator support: space_to_batch_nd
* frontend operator support: space_to_batch_nd
* Fix ValueError: don't know how to convert type <class 'numpy.ndarray'> to node
Neo Chien [Fri, 20 Sep 2019 19:17:11 +0000 (03:17 +0800)]
[Relay][Frontend][ONNX] operator support: Tile (#3941)
* [Relay][Frontend][ONNX] operator support: Tile
* Trigger notification
Tianqi Chen [Fri, 20 Sep 2019 17:17:04 +0000 (10:17 -0700)]
[ARITH] Add Lowering rule for FloorDiv/Mod (#3976)
* [ARITH] Add Lowering rule for FloorDiv/Mod
* add comment about constant folding
Alex Gladkov [Fri, 20 Sep 2019 03:49:34 +0000 (20:49 -0700)]
Add support for MXNet pad operator. (#3739)
MXNet pad is described at:
https://mxnet.incubator.apache.org/api/python/symbol/symbol.html#mxnet.symbol.pad
Add support for parameter 'None' in MXNet slice operator.
MXNet 'slice' is described at
https://mxnet.incubator.apache.org/api/python/symbol/symbol.html#mxnet.symbol.slice
Add support for MXNet cos, sin, arctan
MXNet 'cos' is described at
https://mxnet.incubator.apache.org/api/python/symbol/symbol.html#mxnet.symbol.cos
MXNet 'sin' is described at
https://mxnet.incubator.apache.org/api/python/symbol/symbol.html#mxnet.symbol.sin
MXNet arctan is described at
https://mxnet.incubator.apache.org/api/python/symbol/symbol.html#mxnet.symbol.arctan
Add support for MXNet 1D Convolution and 1D Deconvolution
MXNet convolution is described at:
https://mxnet.incubator.apache.org/api/python/symbol/symbol.html#mxnet.symbol.Convolution
MXNet Deconvolution is described at:
https://mxnet.incubator.apache.org/api/python/symbol/symbol.html#mxnet.symbol.Deconvolution
Animesh Jain [Fri, 20 Sep 2019 00:06:44 +0000 (17:06 -0700)]
[QNN] Renaming tests to follow the Relay nomenclature. (#3975)
Cody Hao Yu [Thu, 19 Sep 2019 21:23:20 +0000 (14:23 -0700)]
[TOPI] Add proper scheduling for dense on CUDA (#3923)
* add proper scheduling for dense on CUDA
* add fallback config and fix unit test
* fix corner cases
* refactoring
* fix bias and add testcase
* let fusion happen
Meghan Cowan [Thu, 19 Sep 2019 17:36:30 +0000 (10:36 -0700)]
Remove GTest cmake flag from install docs (#3953)
Ina Dobreva [Thu, 19 Sep 2019 17:33:26 +0000 (18:33 +0100)]
adjust pylint output (#3973)
adjust pylint output to show file location to make it possible to locate errors
Animesh Jain [Thu, 19 Sep 2019 04:54:01 +0000 (21:54 -0700)]
[Relay] Legalize and AlterOpLayout for Int8 Intel. (#3961)
Tianqi Chen [Thu, 19 Sep 2019 04:02:30 +0000 (21:02 -0700)]
[ARITH] Introduce base-class IRMutatorWithAnalyzer for scope dependent analysis (#3969)
Ligeng Zhu [Wed, 18 Sep 2019 23:03:09 +0000 (19:03 -0400)]
[Relay] Add shape check for ConcatenateRel and StackRel (#3699)
* [Relay] add shape check for concat
* [Relay] add shape check for stack
* add test case for shape mismatch
* [typo] add the missing assert
* fix lint errors.
* replace int with size_t.
* statically cast param->axis to size_t.
* switch to run_infer_type.
* fix checking for negative index
* add static_cast for param->axis
* merge to latest tvm
* fix lint error
* Fix an error with negative index.
* Update transform.h
* Update transform.cc
Neo Chien [Wed, 18 Sep 2019 15:12:32 +0000 (23:12 +0800)]
[TVM][AutoTVM] cast filepath arguments to string (#3968)
Josh Fromm [Wed, 18 Sep 2019 07:40:31 +0000 (00:40 -0700)]
[Relay] Keras frontend upsample and 1 channel conv2d fixes (#3937)
* Fix upsample layout in keras frontend.
* Fixed group conv being used instead of conv when channels=1
* Add new conv2d test to catch bugs when channels=1.
shoubhik [Tue, 17 Sep 2019 23:39:34 +0000 (16:39 -0700)]
Adding support to check if an attribute is present or not without having to get the value (#3957)
* Adding support to check if an attribute is present or not without having to get the value.
* Renaming the method to a more appropriate name.
Andrew Tulloch [Tue, 17 Sep 2019 16:34:33 +0000 (09:34 -0700)]
[Vulkan] Minor optimization for deferred token lookups. (#3960)
Use a hash map keyed on the descriptor set to avoid bad asymptotic behaviour.
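A hedged sketch of that data-structure choice; the key and token types below are stand-ins (a real descriptor set would be a VkDescriptorSet handle), not the runtime's actual definitions:

    #include <cstdint>
    #include <unordered_map>

    // Placeholder types: model the opaque VkDescriptorSet handle as a 64-bit integer.
    using DescriptorSetHandle = std::uint64_t;
    struct DeferredToken { std::uint64_t fence_value; };

    int main() {
      // Keying deferred-token lookups by the descriptor set gives average O(1) access
      // instead of scanning every outstanding token on each query.
      std::unordered_map<DescriptorSetHandle, DeferredToken> deferred_tokens;
      deferred_tokens[0x1234] = DeferredToken{42};
      return deferred_tokens.count(0x1234) == 1 ? 0 : 1;
    }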
Junru Shao [Tue, 17 Sep 2019 16:33:30 +0000 (09:33 -0700)]
More friendly error msg; Fix Android Demo LLVM ver (#3962)
Animesh Jain [Mon, 16 Sep 2019 21:52:28 +0000 (14:52 -0700)]
[TOPI] Setting up AutoTVM template for Intel Int8 conv2D (#3955)
Yuwei Hu [Mon, 16 Sep 2019 20:03:32 +0000 (13:03 -0700)]
[TOPI] Improve conv2d_transpose schedule on X86 and CUDA (#3948)
* improve conv2d_transpose x86 performance by reusing conv2d schedule
* parallelize across batches to make large-batch conv2d and conv2d_transpose faster
* improve doc for autotvm.task.space.FallbackConfigEntity.fallback_with_reference_log
* add fallback schedule for schedule_conv2d_transpose_nchw_cuda
* fix pylint
* fix pylint
* unify conv2d_transpose declaration in topi.nn and topi.x86
Yao Wang [Mon, 16 Sep 2019 18:22:00 +0000 (11:22 -0700)]
[Graph Tuner] Fix benchmark layout in graph tuner (#3926)
* Fix graph tuner benchmarking layout transform
* Add test
Zhi [Mon, 16 Sep 2019 18:07:40 +0000 (11:07 -0700)]
[tvm][codegen] Make buffer auto broadcast independent to the order of input args (#3956)
* [tvm][codegen] Make buffer auto broadcast independent to the order of the input arg
* fix indent
Neo Chien [Mon, 16 Sep 2019 17:42:47 +0000 (01:42 +0800)]
[TOPI] operator support: logical_and, logical_or, logical_not (#3929)
* [TOPI] operator support: logical_and, logical_or, logical_not
* [TOPI] operator support: logical_and, logical_or, logical_not
* [TOPI] fix test cases for operator support: logical_and, logical_or, logical_not
* [TOPI] fix test cases for operator support: logical_not
Animesh Jain [Mon, 16 Sep 2019 17:35:48 +0000 (10:35 -0700)]
[QNN] Legalization for Intel x86 QNN Conv2D (#3896)
* QNNLegalize for conv2d
* [QNN] Legalization for Intel x86 QNN Conv2D
Peter Yeh [Sun, 15 Sep 2019 21:52:31 +0000 (14:52 -0700)]
Enable miopen transpose convolution and fp16 support (#3952)
* Enable miopen transpose convolution and fp16 support
* linter
Jon Soifer [Sun, 15 Sep 2019 20:03:19 +0000 (13:03 -0700)]
[Relay][TensorFlow] Add support for SquaredDifference (#3930)
* Add support for SquaredDifference and StopGradient; minor fix in BatchMatMul
* Remove stopgradient change
* Resolve PR comment
* Dummy change to retrigger CI
* dummy change to retrigger CI
Cody Hao Yu [Sun, 15 Sep 2019 00:37:17 +0000 (17:37 -0700)]
[AutoTVM] Enhance tuning space of split (#3949)
* Refine policies for define_split
- Rename policy "all" to "factors"
- Add policy "verbose" and "power2"
* Refine search space
* add doc
Junru Shao [Sat, 14 Sep 2019 19:45:37 +0000 (12:45 -0700)]
trivial (#3954)
Umang Yadav [Fri, 13 Sep 2019 20:42:36 +0000 (16:42 -0400)]
1) Add EQ op to the deduce_bound and add unittests for the same (#3775)
2) Add EQ support in the loop partition and add test for the same
3) Change typo truc to trunc
Andrew Tulloch [Fri, 13 Sep 2019 20:40:43 +0000 (13:40 -0700)]
Vulkan2 Runtime API (#3849)
Hua Jiang [Fri, 13 Sep 2019 20:33:55 +0000 (13:33 -0700)]
[VTA] RPC path update. (#3924)
Issue:
The RPC path was changed from "pynq_rpc" to "vta_rpc", but the related
document still uses the old information.
Solution:
Update RPC path information.
Jianyu Huang [Fri, 13 Sep 2019 20:27:40 +0000 (13:27 -0700)]
Add AVX512VNNI support for TVM (#3388)
Animesh Jain [Fri, 13 Sep 2019 18:38:14 +0000 (11:38 -0700)]
Refactoring x86 conv2d_NCHWc (#3944)
noituIover [Fri, 13 Sep 2019 01:04:52 +0000 (09:04 +0800)]
Fix CUDA int8x4 vectorize (#3928)
* Fix int8x4 vectorize
* Fix gpu shared/local memory accumulate
* Add test_shared_memory for int8x4
* Adjust test format
* Fix cpplint
shoubhik [Thu, 12 Sep 2019 23:34:20 +0000 (16:34 -0700)]
Do type checking for the input and kernel in the qnn conv2d (#3904)
* [QNN] Convolution 2D Implementation.
Rebasing. Empty commit.
Clang-format styling.
* Reformatting code.
* Fixing lint issues.
Jon Soifer [Thu, 12 Sep 2019 20:04:45 +0000 (13:04 -0700)]
[TOPI][CUDA] Support cuBLAS BatchMatMul (#3936)
* Support cuBLAS BatchMatMul
* Add test and check target name
Andrew Tulloch [Thu, 12 Sep 2019 19:32:01 +0000 (12:32 -0700)]
[RFC] [Contrib] Minimal runtime (~12kb .text on ARMv7/x86) for subset of TVM models (#3567)
This is an alternative implementation of a subset of the TVM runtime API (and
graph runtime) that focuses entirely on reducing code size, at the expense of
functionality (no tvm.extern(..) calls via PackedFunc, CPU only, etc). It might
be worth incrementally expanding the surface area if there's interest.
The motivation for this work was seeing what the minimal useful subset of the
TVM runtime is. This is relevant for, e.g., super code-size-constrained
applications in embedded/mobile. The current runtime is more like O(100KiB)
or so, so this might be compelling for some users.
The smaller surface area for auditing might make this relevant for
https://github.com/dmlc/tvm/issues/3159, or the use cases I was thinking about in
https://github.com/dmlc/tvm/issues/2523#issuecomment-459165815 re: the Rust runtime.
The symbols in the tvm::minimalruntime space (i.e. excluding std:: and
picojson::) are about 5KiB, so I think there's a bunch of room here: we
could replace picojson:: with [`jsmn`](https://zserge.com/jsmn.html) or
something, and we could replace more of the `std::unordered_map` usage, etc.,
with custom primitives as well (similar to the `DynArray`).
Jared Roesch [Thu, 12 Sep 2019 03:39:56 +0000 (22:39 -0500)]
[Relay][Module] Refactor the way we interface between different modules of Relay. (#3906)
* Module refactor
* Add load module
* Add support for idempotent import
* Tweak load paths
* Move path around
* Expose C++ import functions in Python
* Fix import
* Add doc string
* Fix
* Fix lint
* Fix lint
* Fix test failure
* Add type solver
* Fix lint
Lianmin Zheng [Wed, 11 Sep 2019 21:32:15 +0000 (14:32 -0700)]
[Community] Add reviewer Balint Cristian (#3935)
Yizhi Liu [Wed, 11 Sep 2019 18:10:48 +0000 (02:10 +0800)]
[Arm] parallel batch axis (#3931)
* support LLVM trunk
* guard with USE_LLVM in if condition for c++14
* GREATER_EQUAL -> GREATER
* [Arm] parallel batch axis
Zhao Wu [Wed, 11 Sep 2019 04:09:25 +0000 (12:09 +0800)]
[TFLite] Support depthwise convolution multiplier greater than 1 (#3922)
雾雨魔理沙 [Wed, 11 Sep 2019 03:30:46 +0000 (20:30 -0700)]
[Relay] fix exponential blowup in interpreter (#3559)
Neo Chien [Tue, 10 Sep 2019 17:41:16 +0000 (01:41 +0800)]
[Relay][Frontend][Keras] Fix ReLU in Keras Converter missed the case (#3917)
* [Relay][Frontend][Keras] Fix ReLU in Keras Converter missed the case
* [Relay][Frontend][Keras] Add test case for ReLU in Keras Converter missed the case
* [Relay][Frontend][Keras] Add test case for ReLU in Keras Converter missed the case
Pratyush Patel [Tue, 10 Sep 2019 00:43:01 +0000 (17:43 -0700)]
[CODEGEN] Remove incorrect check for LLVM in C codegen test (#3921)
雾雨魔理沙 [Mon, 9 Sep 2019 19:48:04 +0000 (12:48 -0700)]
[Relay][Training] Add gradient for max. (#3915)
* save
* save
Luis Vega [Mon, 9 Sep 2019 17:31:31 +0000 (10:31 -0700)]
[VTA][Config] hotfix denano10 (#3918)
Xingjian Shi [Mon, 9 Sep 2019 17:26:34 +0000 (10:26 -0700)]
Numpy compatible dtype inference for `tvm.convert` and `tvm.const` (#3861)
* numpy compatible type inference
* update
* try to fix
* fix
* try to fix
* fix lint
* Update nn.h
* cast to int32
* try to fix
* fix again
* retrigger ci
Haichen Shen [Mon, 9 Sep 2019 14:54:15 +0000 (07:54 -0700)]
[Relay/TOPI][Op] Add erf intrinsic and op (#3702)
* add more ops
* stop vectorization for erf
* x
* cleanup
* fix
* add whitelist for vectorizable intrin
* add tf converter
* fix dense
* fix
* add missing intrin
* fix mxnet frontend
* fix nvptx
雾雨魔理沙 [Sun, 8 Sep 2019 03:11:47 +0000 (20:11 -0700)]
[Relay][Training] Add gradient for cast (#3894)
save
fix
fix grad