platform/upstream/tvm.git
4 years agoRemove SGX toolchain installation from CI Dockerfile (#4948)
Nick Hynes [Wed, 26 Feb 2020 21:44:32 +0000 (13:44 -0800)]
Remove SGX toolchain installation from CI Dockerfile (#4948)

4 years ago[Relay][pass] call graph for relay (#4922)
Zhi [Wed, 26 Feb 2020 18:01:27 +0000 (10:01 -0800)]
[Relay][pass] call graph for relay (#4922)

* call graph for relay

* CallGraphEntryNode->CallGraphEntry, __getitem__->print_var

* fix typos

4 years ago[Tutorial] Add a tutorial for PyTorch (#4936)
Alex Wong [Wed, 26 Feb 2020 05:41:53 +0000 (21:41 -0800)]
[Tutorial] Add a tutorial for PyTorch (#4936)

* Add a tutorial for PyTorch

* Fix sphinx formatting, add version support

* Remove space

* Remove version check

* Some refactoring

* Use no grad

* Rename input

* Update cat img source

4 years agoBump up dev version (#4941)
Haichen Shen [Wed, 26 Feb 2020 05:39:16 +0000 (21:39 -0800)]
Bump up dev version (#4941)

* bump up dev version

* update

4 years ago[DOCS] Fix Sphinx Warning: the target found for cross-reference (#4925)
Neo Chien [Wed, 26 Feb 2020 04:23:27 +0000 (12:23 +0800)]
[DOCS] Fix Sphinx Warning: the target found for cross-reference (#4925)

* [DOCS] Fix Sphinx Warnings: the target found for cross-reference warnings

* Fix the warning: undefined label

4 years agoTensor Expression Debug Display (TEDD) (#4651)
yongfeng-nv [Wed, 26 Feb 2020 04:21:08 +0000 (23:21 -0500)]
Tensor Expression Debug Display (TEDD) (#4651)

* Initial TEDD for publishing.

* 1. Fix lint issues. 2. Print intrin.body instead of intrin.name in Schedule Tree.  3. Add examples to top level APIs' comments.  4. Top level APIs don't print Dot string by default, unless outputdotstring is True.

* Fix more lint issues.

* Update top level API argument names and use raw strings to avoid Python lint warnings in the tests.

* Disable TEDD verification, but keep TE construction.

* Stop importing tedd to avoid failure.

* Separate data extraction and visualization. 1. Add API tedd.dump_json(schedule) to dump a json string for the schedule data for visualization.  2. Update tests.  3. Add a tutorial.  4. Add range information to IterVars.

* Update TEDD about InferBound failure.  1. TEDD doesn't call inferbound for DFG. 2. Update tutorial about the InferBound failure.

* 1. Import IPython only if SVG is requested.  This is required to fix a tutorial publishing failure.  2. Fix test about IPython availability check.

4 years ago[WIP] Fixing an Infinite Loop case in UnmatchedChecker. (#4881)
雾雨魔理沙 [Wed, 26 Feb 2020 01:35:49 +0000 (17:35 -0800)]
[WIP] Fixing an Infinite Loop case in UnmatchedChecker. (#4881)

* save

* save

* remove

* remove cerr

4 years ago[Fix] remove unnecessary splitting in the cached chunk (#4935)
Yida Wang [Tue, 25 Feb 2020 21:14:58 +0000 (13:14 -0800)]
[Fix] remove unnecessary splitting in the cached chunk (#4935)

* remove unnecessary splitting in the cached chunk

* remove unnecessary splitting in the cached chunk

4 years ago[LLVM] Fix build breaks from StringRef changes (#4923)
wpan11nv [Tue, 25 Feb 2020 11:32:21 +0000 (03:32 -0800)]
[LLVM] Fix build breaks from StringRef changes (#4923)

- llvm::StringRef to std::string conversion is explicit now.

Signed-off-by: Wei Pan <wpan11nv@nvidia.com>
4 years ago[Relay][External Codegen] Support data types for CSourceModuleCodegen args and output...
Jon Soifer [Tue, 25 Feb 2020 04:53:24 +0000 (20:53 -0800)]
[Relay][External Codegen] Support data types for CSourceModuleCodegen args and output (#4934)

* Support int args and no extra buffers

* Fixes

* remove testing code

* fix style

* more style

* use const args

* style

Co-authored-by: Jon Soifer <jonso@microsoft.com>
4 years ago[Relay] Add a PyTorch to Relay Parser (#4497)
Alex Wong [Tue, 25 Feb 2020 04:14:45 +0000 (20:14 -0800)]
[Relay] Add a PyTorch to Relay Parser (#4497)

* Add a PyTorch to Relay parser

* Add alexnet, googlenet, mnasnet, shufflenet wip

* Fix lint

* Remove fix for shufflenet

* Lower check

* Pull changes from neo-ai/tvm changes

* Remove commented out section

* Use infer_shape everywhere

* Change back to using trace instead of path in from_pytorch

* Parse state_dict to add param names

* Umbrella single_op under test_forwards

* Remove print and cleanup call

* Check if update to test broke CI

* Retrigger CI

* Add back in updated tests

* Try splitting up tests

* First pass at flexible typing, implemented for ones

* Add int32 for all ops

* Remove print statements

* Fix lint

* Broad except

* Add other tensor types

* Temporarily use old tests

* Retrigger CI

* Lower type names

* Use numpy to convert in dense op

* Fix lint

* Remove print

* Need to cleanup but verify int32 works for add

* Rough tests for different types, a lot of types are not supported on CPU

* Probably doesn't build, need to save work as I have to switch branches (constantly)

* Parse param type

* Remove print stmt in parser

* Clean up some code

* Working on float32 for bn

* Add resnet18 double type

* Fix lint

* Temporarily move PT tests first

* Temporarily add back refactored tests to fix mem issue

* Add more type test and temp remove some tests

* Comment out tests, hopefully CI prints a trace

* Get stack trace

* Remove operator dict key, rename op_name to node_id, remove dead code

* Make relay map a list

* Remove some hacky string stuff

* Move to PyTorch 1.4

* Remove input_type as param

* Remove _get_fill_value, fix full ops

* Remove unused code and combine ops for identity and none

* Remove fn_param

* Clean up main loop

* Remove useless if/else for outputs

* Remove ir_names, only used once

* Remove some string hacking

* Remove string parsing to get output name

* Fix bug with output sizes of nodes

* Use attributeNames in parse ops

* Remove continue and add_op in parse_op

* Do this everywhere, use assert instead of explicitly type casting

* Remove unnecessary swap

* Slight refactor for elemwise input parse

* Use a copy of graph everywhere

* Rename nid_to_node_name

* Refactor parse import prereqs

* Clean up input node kind check

* Clean up conditionals

* Clean up add_op

* Cleanup type for ones and zeros op

* Fix lint

* Add torch install to CI

* Actually use torch

* Try moving import torch to only where it's needed

* Import torch for CI

* Use take op for select

* Temporarily add ignore for jit inline pass for CI

* Use CompleteTensorType, might be a PT 1.2 only thing

* Use different types in elemwise op

* Use float16 ones

* Fix float16 test

* Remove the temp docker changes

* Remove temp test

* Temporarily comment out original tests

* Remove file

* Empty cache after each test

* Add some prints and lower input sizes

* Try using no grad

* Trying to globally set grad off

* Use no grad for torchvision

* Remove xfail tests

* Remove VGG and AlexNet due to some issues

* Combine pooling tests

* Remove extra test file

* Remove single op, remove larger pooling tests

* Remove maxpool3

* Remove debug prints

* Remove inference call and add no_grad in measure latency

* Use standard string start char

* Remove redundant infer_shape in slice

* Convert most to checks to just expr

* Remove extra paren

* More refactor of isinstance

* Add helper for creating typed constants

* Assert instead of return when no matching type

* Remove network variants

* Add no_grad when forward, remove detach, fix lint

* Change isinstance to expr in transpose

* Use opnotimplemented, refactor

* Fix full ops, remove duplicate tests

* Never use shape field unless we know the type

* Remove comma, retrigger CI

* Add paren, retrigger CI

* Use inline if-else for flags

* Throw exception instead of assert

* Remove version check for CI

* Check version when doing inline pass

* Fix lint

* Lower more input sizes

* Add new line, conv2d only accepts weight as expr

* Use tvm.runtime.ndarray

* Remove change to torch version install

* Try no grad for mobilenet

* Fix lint

* Fix lint again

* Revert to last passing

* Delete test files

* Ignore lint

* Revert back

* Comment out mobilenet

* Clean up compare compiled and baseline outputs

* Use IRModule

* Add todos

* Refactor use_bias

* Add todo for fix conv op channels

* Change input to data type

* Remove todo

* Handle channel multiplier > 1

4 years agoUse opencv resize method for preprocessing of image in darknet (#4883)
vizero1 [Tue, 25 Feb 2020 03:55:27 +0000 (04:55 +0100)]
Use opencv resize method for preprocessing of image in darknet (#4883)

* Use opencv resize method for preprocessing of image in darknet

* Use opencv resize method for preprocessing of image in darknet

* Fix pylint issues

4 years ago[FRONTEND][KERAS]GaussianDropout/Noise parsing support (#4928)
Samuel [Tue, 25 Feb 2020 03:39:10 +0000 (09:09 +0530)]
[FRONTEND][KERAS]GaussianDropout/Noise parsing support (#4928)

GaussianDropout & GaussianNoise are active only at training time, so they can be skipped during inference.
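A frontend can implement this by mapping such training-only layers to the identity. A minimal sketch of the idea, with a hypothetical `convert_layer` helper (not the actual TVM Keras frontend API):

```python
# Sketch: training-only layers become no-ops at inference time.
# `convert_layer` and the dispatch scheme are illustrative only.

def convert_layer(op_type, inputs, attrs):
    # GaussianDropout/GaussianNoise only perturb activations during
    # training; at inference they are the identity function, so the
    # converter just forwards the input expression.
    identity_ops = {"GaussianDropout", "GaussianNoise"}
    if op_type in identity_ops:
        return inputs[0]
    raise NotImplementedError("no converter registered for %s" % op_type)
```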

4 years ago[Relay][AutoTVM] Relay op strategy (#4644)
Haichen Shen [Mon, 24 Feb 2020 21:12:03 +0000 (13:12 -0800)]
[Relay][AutoTVM] Relay op strategy (#4644)

* relay op strategy

fix lint

bitpack strategy

bitserial_dense (#6)

* update strategy

* address comments

fix a few topi test

Dense strategy (#5)

* dense

* add bifrost; remove comments

* address comment

Refactor x86 conv2d_NCHWc (#4)

* Refactor x86 conv2d

* Add x86 depthwise_conv2d_NCHWc

* Add back topi x86 conv2d_nchw

* Merge x86 conv2d_nchw and conv2d_NCHWc

* Minor fix for x86 conv2d

fix more strategy

Add x86 conv2d_NCHWc_int8 strategy (#8)

* Add x86 conv2d_NCHWc_int8 strategy

* Remove contrib_conv2d_nchwc_int8

* Fix generic conv2d_NCHWc for int8

* Fix topi arm_cpu conv2d_NCHWc_int8

update x86 conv2d

enable specify relay ops to be tuned for autotvm

add cuda conv2d strategy

add conv2d strategy for rocm

add conv2d strategy for hls

add conv2d strategy for arm cpu

add conv2d strategy for mali

add conv2d strategy for bifrost

add conv2d strategy for intel graphics

clean up and fix lint

remove template keys from autotvm

remove 2 in the func name

address comments

fix

* fix bugs

* lint

* address comments

* add name to op implement

* Modify topi tests (#9)

* Add pooling, reorg, softmax and vision

* Add lrn

* fix topi test

* fix more topi test

* lint

* address comments

* x

* fix more tests & bugs

* Modify more tests (#10)

* Modify tests for bitserial_conv2d, bitserial_dense, bitserial_conv2d_rasp and bnn

* Minor fix

* More minor fix

* fix more test

* try to update vta using strategy

* fix cpptest

* x

* fix rebase err

* Fix two tests (#11)

* change autotvm log format

* lint

* minor fix

* try fix vta test

* fix rebase err

* tweak

* tmp hack for vta pass

* fix tutorial

* fix

* fix more tutorials

* fix vta tutorial

* minor

* address comments

* fix

* address comments

* fix cpptest

* fix docs

* change data structure name and api

* address comments

* lint

* fix rebase err

* updates

* fix winograd test

* fix doc

* rebase

* upgrade tophub version number

* fix bug

* re-enable vta tsim test after tophub is upgraded

* fix vta test to use the correct args so the config can be found in tophub

Co-authored-by: Yao Wang <kevinthesunwy@gmail.com>
4 years ago[Fix] Fix get_valid_count flaky test for cuda (#4901)
Leyuan Wang [Fri, 21 Feb 2020 22:31:04 +0000 (14:31 -0800)]
[Fix] Fix get_valid_count flaky test for cuda (#4901)

* get_valid_count accuracy issue fixed for individual tests but not for all tests running together

* minor fix

* initialize valid_count and PrefixSum buffers

* test updated

* update relay test as well

* update document

* fix lint

* address comment

* fix lint

* correct atomicAdd identifier name

4 years ago[TEST][FLAKY] topi/tests/python/test_topi_sort.py::test_argsort (#4891)
Neo Chien [Fri, 21 Feb 2020 21:03:41 +0000 (05:03 +0800)]
[TEST][FLAKY] topi/tests/python/test_topi_sort.py::test_argsort (#4891)

* [TEST][FLAKY] topi/tests/python/test_topi_sort.py::test_argsort

* update test function of argsort like topk

* Shuffle index and get data from shuffled index

* Replace the random.uniform with np.arange
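The fix works because random floats can collide, and argsort's ordering among equal elements is unspecified, which makes the test flaky; a shuffled range guarantees unique values. The same idea, sketched with the Python stdlib standing in for `np.arange` plus a shuffle:

```python
import random

def make_unique_test_data(n, seed=0):
    # Unique values (like np.arange(n)) shuffled into a random order:
    # no ties are possible, so the expected argsort result is
    # deterministic and the test cannot be flaky on tie-breaking.
    rng = random.Random(seed)
    data = list(range(n))
    rng.shuffle(data)
    return data

def argsort(data):
    # Indices that would sort `data`, analogous to np.argsort.
    return sorted(range(len(data)), key=data.__getitem__)
```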

4 years ago[COMMUNITY] @anijain2305 -> Committer (#4921)
Tianqi Chen [Fri, 21 Feb 2020 05:56:26 +0000 (21:56 -0800)]
[COMMUNITY] @anijain2305 -> Committer (#4921)

4 years agoFix tests for tflite unary elemwise operations (#4913)
Ina Dobreva [Fri, 21 Feb 2020 04:10:45 +0000 (04:10 +0000)]
Fix tests for tflite unary elemwise operations (#4913)

* add TFLite version check for 'ceil' and 'cos'
* fix name check of test_op for positive inputs
* add error message for operator not found in the installed fbs schema

4 years ago[CODEGEN] Support cuda tensorcore subbyte int data type in auto tensorcore (#4546)
Orion34C [Fri, 21 Feb 2020 02:43:45 +0000 (10:43 +0800)]
[CODEGEN] Support cuda tensorcore subbyte int data type in auto tensorcore (#4546)

* support cuda tensorcore subbyte int data type in auto tensorcore

* add license

* pass cpplint

* fix code review comments

* merge the int4/int1 codegen tutorial into the existing auto tensorcore tutorial

* using master's new API

* disable tuning when cuda is not enabled

* address cr comment

* do not run the tuning

* fix test failure

* fix cpplint error

* fix bool type reduction bug

* 1. fix a index bug 2. fix returned bytes value of int1/int4/uint4

* fix typo

4 years ago[DOCS] Fix Sphinx Warnings (RST indent, cross-ref, and image scale) (#4920)
Cody Yu [Thu, 20 Feb 2020 22:09:34 +0000 (14:09 -0800)]
[DOCS] Fix Sphinx Warnings (RST indent, cross-ref, and image scale) (#4920)

* fix indents

* Fix image scale and cross-ref

4 years ago[Relay] Fix an assertion exposed by loop vectorizer (#4916)
wpan11nv [Thu, 20 Feb 2020 17:25:35 +0000 (09:25 -0800)]
[Relay] Fix an assertion exposed by loop vectorizer (#4916)

- Allows uniform conditions for select expressions (the same as Halide)
  exposed by the loop vectorizer.

Signed-off-by: Wei Pan <weip@nvidia.com>
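A uniform condition is one scalar shared by every vector lane, so the vectorizer can evaluate it once and pick a whole operand vector instead of blending lane by lane. A lane-model sketch of the two cases (illustrative only, not TIR code):

```python
def vector_select(cond, a, b):
    # Uniform condition: a single bool for all lanes. Evaluate once and
    # return one operand vector whole; no per-lane blend is needed.
    if isinstance(cond, bool):
        return a if cond else b
    # Varying condition: one bool per lane, blended element-wise.
    assert len(cond) == len(a) == len(b)
    return [x if c else y for c, x, y in zip(cond, a, b)]
```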
4 years ago[DOCS] Fix sphinx warnings (#4917)
Cody Yu [Thu, 20 Feb 2020 17:24:37 +0000 (09:24 -0800)]
[DOCS] Fix sphinx warnings (#4917)

* Fix Python docstrings

* More fixes

* Fix lint

4 years ago[REFACTOR] Polish ffi convention. (#4912)
Tianqi Chen [Wed, 19 Feb 2020 21:14:42 +0000 (13:14 -0800)]
[REFACTOR] Polish ffi convention. (#4912)

* [REFACTOR] Polish ffi convention.

- Remove the src/api, keep registration local to the c++ function.
- Remove the api_internal as it is no longer needed.

* Update the codebase walk through

4 years ago[RELAY][FRONTEND][TF] Fix FuseBatchNorm output cast error if need_cast is True (...
hcyang [Wed, 19 Feb 2020 06:33:16 +0000 (14:33 +0800)]
[RELAY][FRONTEND][TF] Fix FuseBatchNorm output cast error if need_cast is True (#4894)

4 years agoFix tvm.target.generic_func runtime detection (#4910)
Andrew [Wed, 19 Feb 2020 02:03:03 +0000 (18:03 -0800)]
Fix tvm.target.generic_func runtime detection (#4910)

4 years ago[DOCS] Update API docs to reflect the status after the refactor. (#4907)
Tianqi Chen [Tue, 18 Feb 2020 23:39:59 +0000 (15:39 -0800)]
[DOCS] Update API docs to reflect the status after the refactor. (#4907)

4 years ago[Relay] Expose FunctionGetAttr to Python (#4905)
Jon Soifer [Tue, 18 Feb 2020 21:44:59 +0000 (13:44 -0800)]
[Relay] Expose FunctionGetAttr to Python (#4905)

* [Relay] Expose FunctionGetAttr to Python

* add test

Co-authored-by: Jon Soifer <jonso@microsoft.com>
4 years ago[Relay][Frontend][Keras] NHWC import support. (#4899)
Josh Fromm [Tue, 18 Feb 2020 18:24:22 +0000 (10:24 -0800)]
[Relay][Frontend][Keras] NHWC import support. (#4899)

* Basic test working

* Almost all tests working.

* all tests passing.

* Fixed lint.

* Improved Style.

4 years ago[REFACTOR][PY] Establish tvm.arith (#4904)
Tianqi Chen [Tue, 18 Feb 2020 16:14:12 +0000 (08:14 -0800)]
[REFACTOR][PY] Establish tvm.arith (#4904)

4 years ago[CI] Add autodocsum as dep (#4902)
Tianqi Chen [Tue, 18 Feb 2020 05:40:19 +0000 (21:40 -0800)]
[CI] Add autodocsum as dep (#4902)

4 years ago[CI] Update ci docker to add autodocsumm (#4903)
Tianqi Chen [Tue, 18 Feb 2020 04:56:45 +0000 (20:56 -0800)]
[CI] Update ci docker to add autodocsumm (#4903)

4 years agoFixed bugs that occurred when using bitwise operators on floating point type expressio...
pankratz [Tue, 18 Feb 2020 02:48:05 +0000 (19:48 -0700)]
Fixed bugs that occurred when using bitwise operators on floating point type expressions. Fixed a further crash when using ops <<, >>, %. Finally added regression tests for both types of bug. (#4892)

4 years ago[REFACTOR][PY] Establish tvm.te and tvm.driver (#4900)
Tianqi Chen [Tue, 18 Feb 2020 02:47:09 +0000 (18:47 -0800)]
[REFACTOR][PY] Establish tvm.te and tvm.driver (#4900)

- Move the related files to tvm.te
- Move build_module.py to tvm.driver

4 years ago[Relay][Pass] Fix bug in re-processing call node in MergeComposite pass (#4879)
Jon Soifer [Mon, 17 Feb 2020 20:18:15 +0000 (12:18 -0800)]
[Relay][Pass] Fix bug in re-processing call node in MergeComposite pass (#4879)

* Fix bug in re-processing call node

* Add test

* Add to main

* temp changes to work from another machine

* fix rest of tests

* fix test_reuse_call_merge

* fix merge

Co-authored-by: Jon Soifer <jonso@microsoft.com>
4 years ago[DOCS] Introduce how to add hardware backend to FAQ (#4898)
Tianqi Chen [Mon, 17 Feb 2020 18:53:05 +0000 (10:53 -0800)]
[DOCS] Introduce how to add hardware backend to FAQ (#4898)

4 years agoFast exponent (#4790)
Alex Gladkov [Mon, 17 Feb 2020 17:22:11 +0000 (09:22 -0800)]
Fast exponent (#4790)

4 years agoUpdate faq.md (#4893)
Baden Hughes [Mon, 17 Feb 2020 01:56:25 +0000 (11:56 +1000)]
Update faq.md (#4893)

various minor editorial updates - style, grammar, typos.

4 years agoFix alpha_equal bug (#4897)
Zhi [Mon, 17 Feb 2020 01:44:22 +0000 (17:44 -0800)]
Fix alpha_equal bug (#4897)

4 years ago[CI] Cleanup logfile before tutorial runs (#4896)
Tianqi Chen [Sun, 16 Feb 2020 23:02:39 +0000 (15:02 -0800)]
[CI] Cleanup logfile before tutorial runs (#4896)

4 years ago[Relay] Fix VM compiler for while loop with free vars (#4889)
masahi [Sun, 16 Feb 2020 06:43:22 +0000 (15:43 +0900)]
[Relay] Fix VM compiler for while loop with free vars  (#4889)

* add additional switch to handle nested call node

* Fix VM compiler for while loop with free var

4 years ago[CodeGen][CUDA] Fix issues in cuda codegen (#4876)
wpan11nv [Sun, 16 Feb 2020 03:47:36 +0000 (19:47 -0800)]
[CodeGen][CUDA] Fix issues in cuda codegen (#4876)

- Do not emit __shared__ etc. as part of type for casting

- Fix fp16 reduction kernels with compiler errors:

  "no operator "+" matches these operands, volatile half + volatile half

  This patch inserts casts to remove volatile type qualifier following
  volatile loads (fp16 only). CUDA fp16 library headers should add
  volatile member functions.

- Update have_fp16 to include compute 6.1 GPUs, which do support fp16,
  although their fp16 throughput is low. Updated tests.

Signed-off-by: Wei Pan <weip@nvidia.com>
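The `have_fp16` update described above amounts to a small compute-capability predicate. A hedged sketch mirroring the description (fp16 is available on compute 5.3 and on all compute 6.x and later GPUs, including the low-throughput 6.1 parts; the function name is illustrative):

```python
def have_fp16(major, minor):
    # fp16 storage/arithmetic is supported on compute 5.3 (e.g. Jetson
    # TX1) and on every compute 6.x-or-later GPU, including compute 6.1
    # parts whose fp16 throughput is low but nonzero.
    if major == 5 and minor == 3:
        return True
    return major >= 6
```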
4 years agoimprove antlr import error message (#4888)
masahi [Sat, 15 Feb 2020 18:32:58 +0000 (03:32 +0900)]
improve antlr import error message (#4888)

4 years ago[AutoTVM] Support range in index based tuners (#4870)
Cody Yu [Sat, 15 Feb 2020 04:15:47 +0000 (20:15 -0800)]
[AutoTVM] Support range in index based tuners (#4870)

* Support range in index based tuners

* Address comments

* Remove __*state__

* trigger CI

4 years ago[QNN] Add support for per channel weight scale in dense op (#4880)
masahi [Sat, 15 Feb 2020 01:12:06 +0000 (10:12 +0900)]
[QNN] Add support for per channel weight scale in dense op (#4880)

* add test case for per channel dense

* add unit arg in tflite frontend

* update qnn legalize test

* fix output dim index

4 years ago[QNN] More doc fix on quantize and convolution (#4874)
masahi [Fri, 14 Feb 2020 04:28:07 +0000 (13:28 +0900)]
[QNN] More doc fix on quantize and convolution (#4874)

* [QNN] Doc fix on quantize and convolution

* update test

4 years ago[TOPI][CUDA] Enable vectorization on fp16 type (#4867)
wpan11nv [Fri, 14 Feb 2020 04:24:29 +0000 (20:24 -0800)]
[TOPI][CUDA] Enable vectorization on fp16 type (#4867)

- This allows to better utilize the memory bandwidth

- Note that not all cases are vectorized for fp16 datatype. For
  instance, when the size is not a multiple of 1024, the inner loop
  may be an expression that cannot be vectorized. In this case, a
  small inner loop is still beneficial for latency hiding.

Signed-off-by: Wei Pan <weip@nvidia.com>
4 years ago[REFACTOR][PY] Establish tvm.tir
tqchen [Wed, 12 Feb 2020 23:38:39 +0000 (15:38 -0800)]
[REFACTOR][PY] Establish tvm.tir

- Move related files into the corresponding location as in C++
- Keep the top-level TVM API backward compatible to make minimum changes in topi

4 years agoUpdate docs/dev/virtual_machine.rst
Zhi [Wed, 12 Feb 2020 16:14:36 +0000 (08:14 -0800)]
Update docs/dev/virtual_machine.rst

Co-Authored-By: Wei Chen <ipondering.weic@gmail.com>
4 years agoUpdate docs/dev/virtual_machine.rst
Zhi [Wed, 12 Feb 2020 16:14:27 +0000 (08:14 -0800)]
Update docs/dev/virtual_machine.rst

Co-Authored-By: Wei Chen <ipondering.weic@gmail.com>
4 years agofix vm doc
Zhi Chen [Tue, 11 Feb 2020 04:52:20 +0000 (04:52 +0000)]
fix vm doc

4 years agoOptimize x86 conv3d_ndhwc using data packing approach. (#4866)
Alex Gladkov [Thu, 13 Feb 2020 06:38:15 +0000 (22:38 -0800)]
Optimize x86 conv3d_ndhwc  using data packing approach. (#4866)

Add tuneable conv3d_ndhwc schedule

4 years ago[FRONTEND][TFLITE] Add support for TFLite_Detection_PostProcess (#4543)
mbarrett97 [Thu, 13 Feb 2020 02:07:58 +0000 (02:07 +0000)]
[FRONTEND][TFLITE] Add support for TFLite_Detection_PostProcess (#4543)

* [FRONTEND][TFLITE] Add support for TFLite_Detection_PostProcess

This adds support for the custom operator
TFLite_Detection_PostProcess, which is commonly used in
object detection networks such as SSD MobileNet. It
only adds support for the case where use_regular_nms = False.

Change-Id: I819b253c0eb6f0fa55da65d2634e09359b888828
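Because only the fast (non-regular) NMS path is handled, the converter has to reject the unsupported configuration explicitly rather than silently produce wrong results. A sketch of that guard (hypothetical function and option names, not the actual TVM TFLite frontend code):

```python
def convert_detection_postprocess(custom_options):
    # Only use_regular_nms=False is supported; fail loudly on the
    # regular-NMS configuration instead of miscompiling it.
    if custom_options.get("use_regular_nms", False):
        raise NotImplementedError(
            "TFLite_Detection_PostProcess with use_regular_nms=True "
            "is not supported")
    return "converted"  # placeholder for the real Relay expression
```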

* Added a test for the tflite custom op

Change-Id: Ie5baa092deae9a8bcffd2ebd9f6d346b90e58afd

* Removed trailing comma

Change-Id: Ib08f02b5f1a59a883048bfb36e4321152cd2e7f2

* Added spaces between divide

Change-Id: If1171fc03d211a809cedeb800804394972af4060

* Formatted comment

Change-Id: I3ce7e69b8d2c73aec57369c1c64ea1eec07f087b

* Reduced line length in test

Change-Id: I49eaafc3369070f8f3e85fbb965ad20972096c68

* Set random seed for test

Change-Id: I542a787d11422ea83c52147b2cb1144fcef0dd77

* Fixes to style

Change-Id: I2971b8ecebe08c882b2481a99f67cfbe515e0b1f

* Assert for incorrect number of inputs

Change-Id: I393f3b3b62be73e427498d98456fb1d5a214e0af

* Change comparison to pass linting

The linter was updated, so I needed to fix
a small style issue as a result.

Change-Id: Ia3c954565a00de92e7fb1912eae9ed9875d60c7c

4 years ago[REFACTOR][PY][API-CHANGE] Establish tvm.target
tqchen [Wed, 12 Feb 2020 21:39:45 +0000 (13:39 -0800)]
[REFACTOR][PY][API-CHANGE] Establish tvm.target

Move the related target modules into tvm.target.

API change:
- tvm.target.current_target -> tvm.target.Target.current
- tvm.datatype -> tvm.target.datatype

4 years ago[JVM] Update the runtime PackedFunc for module
tqchen [Wed, 12 Feb 2020 19:20:23 +0000 (11:20 -0800)]
[JVM] Update the runtime PackedFunc for module

4 years agoFix optimize
tqchen [Wed, 12 Feb 2020 05:41:47 +0000 (21:41 -0800)]
Fix optimize

4 years ago[DOCS][PY] Sphinx docs about tvm.ir
tqchen [Wed, 12 Feb 2020 04:36:20 +0000 (20:36 -0800)]
[DOCS][PY] Sphinx docs about tvm.ir

4 years ago[REFACTOR][PY][API-CHANGE] establish tvm.ir, migrate corresponding files (#4862)
Tianqi Chen [Wed, 12 Feb 2020 04:01:36 +0000 (20:01 -0800)]
[REFACTOR][PY][API-CHANGE] establish tvm.ir, migrate corresponding files (#4862)

* [REFACTOR][PY][API-CHANGE] establish tvm.ir, migrate corresponding relay files.

This PR establishes tvm.ir and migrates the corresponding relay
files into the new folder.

API Change:
- relay.Module -> tvm.IRModule

* Update with ADT

* Migrate transform

* address comments

* Migrate module

* Migrate json_compact

* Migrate attrs

* Move LoweredFunc to stmt temporarily

* temp migrate container

* Finish migrate container

4 years ago[Topi] Missing header (#4865)
hlu1 [Tue, 11 Feb 2020 22:46:50 +0000 (14:46 -0800)]
[Topi] Missing header (#4865)

4 years agoFix onnx import bugs (#4750)
kice [Tue, 11 Feb 2020 21:44:48 +0000 (16:44 -0500)]
Fix onnx import bugs (#4750)

* Fix onnx import bugs

Fix incorrect handling of ONNX attributes of string type
Merge symmetric padding of Conv into symmetric form

* Only merge symmetric padding for conv2d
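ONNX `Conv` carries pads as `[x1_begin, x2_begin, ..., x1_end, x2_end, ...]`, while a symmetric padding attribute needs one value per axis, so the importer can fold the pads only when begin equals end on every axis. A sketch of that fold (hypothetical helper, not the actual TVM ONNX frontend code):

```python
def merge_symmetric_pads(pads):
    # pads: ONNX layout [b1, b2, ..., e1, e2, ...]. Returns the per-axis
    # symmetric padding [p1, p2, ...] when begin == end on every axis,
    # or None when the padding is asymmetric (the caller would then emit
    # an explicit pad operator before the convolution instead).
    assert len(pads) % 2 == 0
    half = len(pads) // 2
    begins, ends = pads[:half], pads[half:]
    if begins == ends:
        return begins
    return None
```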

4 years ago[Refactor] move vm.py under runtime and adt to runtime.container.py (#4855)
Zhi [Tue, 11 Feb 2020 20:04:04 +0000 (12:04 -0800)]
[Refactor] move vm.py under runtime and adt to runtime.container.py (#4855)

4 years agoadd resize op converter (#4838)
masahi [Tue, 11 Feb 2020 17:56:14 +0000 (02:56 +0900)]
add resize op converter (#4838)

4 years ago[TVM] const auto p -> const auto &p (#4861)
hlu1 [Tue, 11 Feb 2020 16:52:25 +0000 (08:52 -0800)]
[TVM] const auto p -> const auto &p (#4861)

4 years ago[LLVM] Explicit llvm::StringRef to std::string conversion (#4859)
hlu1 [Tue, 11 Feb 2020 16:45:17 +0000 (08:45 -0800)]
[LLVM] Explicit llvm::StringRef to std::string conversion (#4859)

4 years ago[RUNTIME] Fix memory leakage of TVMByteArray (#4856)
Lianmin Zheng [Tue, 11 Feb 2020 09:58:36 +0000 (01:58 -0800)]
[RUNTIME] Fix memory leakage of TVMByteArray (#4856)

4 years ago[TFLite] Using real image for QNN testing. (#4816)
Animesh Jain [Tue, 11 Feb 2020 02:56:37 +0000 (18:56 -0800)]
[TFLite] Using real image for QNN testing. (#4816)

* [TFLite] Using real image for QNN testing.

* Setting seed for SSD mobilenet for fixed input.

* Support quantized Pad op.

* Remove unnecessary line.

* Address Ina's comments.

4 years agoreverse changes in pr #4849 (#4853)
Leyuan Wang [Mon, 10 Feb 2020 22:13:35 +0000 (14:13 -0800)]
reverse changes in pr #4849 (#4853)

4 years ago[Relay] Added Merge Composite pass (#4771)
mbarrett97 [Mon, 10 Feb 2020 19:39:20 +0000 (19:39 +0000)]
[Relay] Added Merge Composite pass (#4771)

* [Relay] Added MergeComposite pass

This pass allows for patterns to be wrapped
in a function marked with 'Composite' and a
composite function name. This is intended to be
used with the external codegen for the cases where
an external operator maps to multiple Relay
operators. In that case, the mapping can be expressed
as a pattern and assigned a name.

For more information on this pass and its motivation,
see the RFC:
https://discuss.tvm.ai/t/rfc-external-codegen-defining-composite-relay-operators/5470

Change-Id: Icb1b803a9f0ac57c529143200228f3bb5793afc0
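The mechanism described above can be sketched in miniature: match a pattern against an expression tree bottom-up and wrap each matched region in a node tagged with its composite name. The tuple-based IR and wildcard syntax here are purely illustrative; the real pass operates on Relay expressions:

```python
def matches(expr, pattern):
    # expr and pattern are ("op", arg, ...) tuples; "?" in the pattern
    # is a wildcard matching any subexpression.
    if pattern == "?":
        return True
    if not (isinstance(expr, tuple) and isinstance(pattern, tuple)):
        return expr == pattern
    if len(expr) != len(pattern) or expr[0] != pattern[0]:
        return False
    return all(matches(e, p) for e, p in zip(expr[1:], pattern[1:]))

def merge_composite(expr, name, pattern):
    # Post-order rewrite: fuse children first, then check this node, so
    # the deepest occurrence of the pattern is wrapped first.
    if isinstance(expr, tuple):
        expr = (expr[0],) + tuple(
            merge_composite(a, name, pattern) for a in expr[1:])
    if matches(expr, pattern):
        # Stands in for wrapping the region in a function whose
        # "Composite" attribute is set to `name`.
        return ("composite", name, expr)
    return expr
```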

* [Relay] Merge composite tests

Added tests for the merge_composite pass.

Change-Id: I1728b4a05b0c1c36140a40f1afe028fde62185dd

* Merge composite additional test

Change-Id: I9bc7d6053c575e9468ac5abc31214c6ad8507e46

* Support priority order in merge_composite

The order in which the patterns are matched
was currently random as an unordered_map was
used to store the pattern table. This uses
arrays instead so that a distinct priority
order of matching can be defined. Additional
tests have also been added to verify this
behaviour.

Change-Id: Ief347df4262639138d5d9d7c8cee7ef233af7b56
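The array-based table matters because overlapping patterns are tried in order and the first match wins; with an unordered map, which pattern fired would be arbitrary. A toy sketch of first-match-wins dispatch (the string-keyed table is illustrative; real entries are Relay pattern graphs):

```python
def first_match(expr_op, pattern_table):
    # pattern_table: list of (composite_name, op_name) pairs, highest
    # priority first. List order is the priority order, unlike an
    # unordered map whose iteration order is unspecified.
    for name, op in pattern_table:
        if expr_op == op:
            return name
    return None
```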

* Improved merge composite docs

Change-Id: Ie3a72045ecc3f13ad3c302fbdf192b7296a306a8

* Removed unused variable

Change-Id: I7814d5fde368ffaf1b3d6d806060c774c7720364

* Remove unnecessary op check

Change-Id: I38e78d2acd5b86cb8e837be72ff9d72cd10bcf33

* Improve styling on composite function creation

Change-Id: I37add1c3134e0b5d5085fe1eb9daf8e06890fa8c

* Comment reword

Change-Id: Ie05872dcbbe0c3e1190b0597083b9a64e6b66c66

* Stylistic changes to avoid std::move

Change-Id: I43a93995bbf10530399900c992aa99dd4ae4575f

* Relax a check in ExtractPattern

Change-Id: I0faef77a66c55f83f09e6e47c561ffaea63dedfa

* Remove new line

Change-Id: Ifdd02c12087a7e1a0a9b54825669bc0de8f13c3d

* Removed MatchPattern from MergeComposite

This is not necessary now that ExtractPattern
can fulfill the same purpose.

Change-Id: I14dc020afa8e50f2df4c0a2efb88a011987f8196

* Removed a new line

Change-Id: I8b50f0c9069aa1bcaccbe68eb421031f01a64842

* Improved docs for merge composite

Change-Id: Ib1959a35c856e7ea5639de2e4ef314a54f44caf5

* Fixed free vars in test

Change-Id: I2b7f273db275964ec0e9820560663f0808adee79

* Handle case where root arg might not be a call

Change-Id: I4eeea3ce723d3ba337d110dcc690377daebe8626

* Removed blank line

Change-Id: I07f5392c0e95cfe3cfa5c333703cc6f82d6034fb

* Change to CHECK_EQ

Change-Id: I5c5d62d3cd57f72508b30b926f72091ae6f0d1cc

* Revised a conditional

Change-Id: I23a7897ca15a7cd076db5039dc653a4b8c27e803

* Improved doc styling

Change-Id: I377f0a1c1ac70f3b8d7584b0c49bddc8c6c134ef

* Fail extraction if vars conflict

Change-Id: I78e36d805e8ed6b55e61d490212a967c857554a4

* Added further merge composite tests

Change-Id: Ib1d800409fca4c1834c7fe0cab5a26ab99a26820

Co-authored-by: lhutton1 <35535092+lhutton1@users.noreply.github.com>
4 years agoFixed bug in ExprOp that caused bitwise operators to fail when a basic python type...
pankratz [Mon, 10 Feb 2020 19:33:51 +0000 (12:33 -0700)]
Fixed bug in ExprOp that caused bitwise operators to fail when a basic python type was on the left hand side of the expression. Added regression test for crashing cases. (#4852)

4 years ago[Frontend][TFlite] use qnn helper function in softmax (#4840)
Wang Yucheng [Mon, 10 Feb 2020 17:54:58 +0000 (01:54 +0800)]
[Frontend][TFlite] use qnn helper function in softmax (#4840)

4 years ago[CI][DOCKER] Update ci-lint to pylint2.4.4 (#4851)
Tianqi Chen [Sun, 9 Feb 2020 17:12:19 +0000 (09:12 -0800)]
[CI][DOCKER] Update ci-lint to pylint2.4.4 (#4851)

4 years ago[CI] Update ci-lint to v0.60 (#4850)
Tianqi Chen [Sun, 9 Feb 2020 05:42:36 +0000 (21:42 -0800)]
[CI] Update ci-lint to v0.60 (#4850)

4 years ago[LINT][PY] Fixes for pylint==2.4.4 (#4849)
Tianqi Chen [Sun, 9 Feb 2020 03:45:17 +0000 (19:45 -0800)]
[LINT][PY] Fixes for pylint==2.4.4 (#4849)

4 years ago[TEST] test_cudnn flaky (#4846)
Tianqi Chen [Sat, 8 Feb 2020 22:05:21 +0000 (14:05 -0800)]
[TEST] test_cudnn flaky (#4846)

4 years agoFixed process termination routine in windows (#4844)
Seyyed Hossein Hasanpour [Sat, 8 Feb 2020 18:48:16 +0000 (22:18 +0330)]
Fixed process termination routine in windows (#4844)

* Fixed process termination routine in windows

Addresses and fixes the AttributeError: module 'os' has no attribute 'killpg' error in #4821

* Update server.py

4 years ago[Doc][AutoTVM] Fix bugs that override n_trials (#4842)
Cody Yu [Fri, 7 Feb 2020 22:34:14 +0000 (14:34 -0800)]
[Doc][AutoTVM] Fix bugs that override n_trials (#4842)

4 years ago[COMMUNITY] comaniac -> reviewer (#4841)
ziheng [Fri, 7 Feb 2020 21:41:35 +0000 (13:41 -0800)]
[COMMUNITY] comaniac -> reviewer (#4841)

4 years ago[REFACTOR][PY][API-Change] Polish tvm.runtime, tvm.runtime.module API update (#4837)
Tianqi Chen [Fri, 7 Feb 2020 17:15:08 +0000 (09:15 -0800)]
[REFACTOR][PY][API-Change] Polish tvm.runtime, tvm.runtime.module API update (#4837)

* [REFACTOR][PY-API] Polish tvm.runtime, tvm.runtime.module API update

This PR updates the tvm.runtime to use the new FFI style.

- Remove top-level tvm.module to avoid confusion between runtime.Module and IRModule
- API changes wrt to runtime.Module
  - tvm.module.load -> tvm.runtime.load_module
  - tvm.module.enabled -> tvm.runtime.enabled
  - tvm.module.system_lib -> tvm.runtime.system_lib
- Remove dep on api_internal from runtime.

* Update module.load in the latest API
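Renames like `tvm.module.load -> tvm.runtime.load_module` are commonly staged through a thin alias layer that forwards the old name while warning. A generic, hedged sketch of that pattern (the loader body is a placeholder; TVM's actual compatibility handling may differ):

```python
import warnings

def load_module(path):
    # Placeholder for the real tvm.runtime.load_module loader.
    return "Module(%s)" % path

def _deprecated(old_name, new_func, new_name):
    # Wrap the new function so calls through the old name still work
    # but emit a DeprecationWarning pointing at the replacement.
    def wrapper(*args, **kwargs):
        warnings.warn("%s is deprecated, use %s instead" % (old_name, new_name),
                      DeprecationWarning, stacklevel=2)
        return new_func(*args, **kwargs)
    return wrapper

# Old entry point kept alive as a forwarding alias.
load = _deprecated("tvm.module.load", load_module, "tvm.runtime.load_module")
```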

4 years ago[Frontend][TFLite] Add MIRROR_PAD operator (#4822)
Wang Yucheng [Fri, 7 Feb 2020 11:57:34 +0000 (19:57 +0800)]
[Frontend][TFLite] Add MIRROR_PAD operator (#4822)

4 years ago[Relay][Frontend][TFlite] Add support for quantized LOGISTIC (#4696)
Ina Dobreva [Fri, 7 Feb 2020 10:23:55 +0000 (10:23 +0000)]
[Relay][Frontend][TFlite] Add support for quantized LOGISTIC (#4696)

* [Relay][Frontend][TFlite] Add support for quantized LOGISTIC

 * add qnn implementation
 * add qnn test case for qnn logistic

* Helper functions for quantize and dequantize.
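
A rough sketch of what a quantized LOGISTIC lowering does (pure Python, not the QNN implementation; the fixed uint8 output params scale = 1/256, zero_point = 0 are an assumption based on TFLite's convention):

```python
import math

def quantized_logistic(q_in, in_scale, in_zero_point):
    """Dequantize, apply sigmoid in float, then requantize to uint8."""
    out_scale, out_zero_point = 1.0 / 256.0, 0
    out = []
    for q in q_in:
        x = in_scale * (q - in_zero_point)             # dequantize
        y = 1.0 / (1.0 + math.exp(-x))                 # float sigmoid
        q_out = round(y / out_scale) + out_zero_point  # requantize
        out.append(min(255, max(0, q_out)))            # clamp to uint8
    return out

# Input quantized with scale 0.1 and zero point 128: q=128 maps to x=0.0,
# and sigmoid(0) = 0.5 requantizes back to 128.
print(quantized_logistic([128], 0.1, 128))
```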

4 years ago[Doc] ConvertLayout - Call RemoveUnusedFunctions.
Animesh Jain [Thu, 6 Feb 2020 22:13:08 +0000 (22:13 +0000)]
[Doc] ConvertLayout - Call RemoveUnusedFunctions.

4 years agoImprove tol to resolve flaky case (#4836)
Tianqi Chen [Fri, 7 Feb 2020 02:12:13 +0000 (18:12 -0800)]
Improve tol to resolve flaky case (#4836)

4 years ago[Frontend][ONNX] LSTM Support (#4825)
Josh Fromm [Fri, 7 Feb 2020 02:09:10 +0000 (18:09 -0800)]
[Frontend][ONNX] LSTM Support (#4825)

* Initial version working and passing tests.

* WIP on supporting other activations.

* add support for multiple activation functions in lstm

* All tests working and code cleaned up.

* Undo import swap to avoid conflict with masahi.

* Added new tests and related bug fixes.

Co-authored-by: Matthew Brookhart <mbrookhart@octoml.ai>
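
For readers unfamiliar with the op, a scalar (hidden size 1) sketch of one ONNX-style LSTM step with pluggable activations, as the commit's "multiple activation functions" support suggests (lstm_cell_step is a made-up helper, not the frontend code):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_cell_step(x, h, c, W, R, b, f=sigmoid, g=math.tanh, h_act=math.tanh):
    """One LSTM step on scalars so the gate math stays readable.

    W/R/b each hold the four gate weights in ONNX's iofc ordering
    (input, output, forget, cell); f, g, h_act are the gate activations.
    """
    gates = [W[k] * x + R[k] * h + b[k] for k in range(4)]
    i_t, o_t, f_t = f(gates[0]), f(gates[1]), f(gates[2])
    c_t = f_t * c + i_t * g(gates[3])   # new cell state
    h_t = o_t * h_act(c_t)              # new hidden state
    return h_t, c_t

h, c = lstm_cell_step(x=1.0, h=0.0, c=0.0,
                      W=[0.5] * 4, R=[0.1] * 4, b=[0.0] * 4)
```
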
4 years ago[Doc] Introduction to module serialization (#4564)
Zhao Wu [Fri, 7 Feb 2020 02:03:20 +0000 (10:03 +0800)]
[Doc] Introduction to module serialization (#4564)

4 years agoFix doc after moving to unified IR (#4835)
Zhi [Fri, 7 Feb 2020 01:38:48 +0000 (17:38 -0800)]
Fix doc after moving to unified IR (#4835)

4 years ago[CI][DOCKER] Update ci-gpu torch1.4 and onnx1.6 (#4826)
Tianqi Chen [Thu, 6 Feb 2020 19:58:22 +0000 (11:58 -0800)]
[CI][DOCKER] Update ci-gpu torch1.4 and onnx1.6 (#4826)

4 years ago[CI][DOCKER] Update ci-gpu to v0.60 (#4827)
Tianqi Chen [Thu, 6 Feb 2020 19:58:11 +0000 (11:58 -0800)]
[CI][DOCKER] Update ci-gpu to v0.60 (#4827)

4 years agoIt's gpu not cpu. (#4832)
Just do it [Thu, 6 Feb 2020 17:25:06 +0000 (01:25 +0800)]
It's gpu not cpu. (#4832)

4 years ago[TOPI][Relay] Add bitwise ops (#4815)
abergeron [Thu, 6 Feb 2020 03:24:42 +0000 (22:24 -0500)]
[TOPI][Relay] Add bitwise ops (#4815)

* Add bitwise ops to topi

* Add the bitwise ops to relay.

4 years ago[CONTRIB][CC] Enhance cc.cross_compiler (#4817)
Tianqi Chen [Thu, 6 Feb 2020 02:49:24 +0000 (18:49 -0800)]
[CONTRIB][CC] Enhance cc.cross_compiler (#4817)

* [CONTRIB][CC] Enhance cc.cross_compiler

- Enhance cc.cross_compiler to take str argument.
- Remove cc.build_create_shared_func as it is duplicated with cross_compiler
- Add examples to cc.cross_compiler

* address review comments

4 years ago[Relay] Conv2D padding representation (#4787)
Xingyu Zhou [Wed, 5 Feb 2020 23:33:45 +0000 (07:33 +0800)]
[Relay] Conv2D padding representation (#4787)

* enforce 4-way padding

* add util with get_pad_tuple

* delete unnecessary arguments

* fix lint

* add container.Array case

* fix cudnn conv2d asymmetric padding logic

* rename get_pad_tuple to get_pad_tuple2d

* revert change for topi/python/topi/nn/conv2d.py

* add get_pad_tuple2d for several contrib conv2d ops

* add get_pad_tuple2d for all conv2d ops
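
A hypothetical re-implementation of the idea behind get_pad_tuple2d, just to illustrate the 4-way normalization (this is a sketch, not the TOPI utility):

```python
def get_pad_tuple2d_sketch(padding):
    """Normalize conv2d padding to 4-way (top, left, bottom, right).

    An int means the same pad on every side, a 2-tuple gives symmetric
    (pad_h, pad_w), and a 4-tuple is already explicit.
    """
    if isinstance(padding, int):
        return (padding,) * 4
    if len(padding) == 2:
        pad_h, pad_w = padding
        return (pad_h, pad_w, pad_h, pad_w)
    if len(padding) == 4:
        return tuple(padding)
    raise ValueError("padding must be an int, 2-tuple, or 4-tuple")

print(get_pad_tuple2d_sketch(1))       # (1, 1, 1, 1)
print(get_pad_tuple2d_sketch((1, 2)))  # (1, 2, 1, 2)
```

Enforcing the 4-way form everywhere lets asymmetric padding (e.g. SAME padding with even kernels) flow through every conv2d op uniformly.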

4 years ago[Relay][Frontend][TFLite] Add parser support for logical operators (#4642)
Ina Dobreva [Wed, 5 Feb 2020 20:12:44 +0000 (20:12 +0000)]
[Relay][Frontend][TFLite] Add parser support for logical operators (#4642)

* [Relay][Frontend][TFLite] Add parser support for logical operators

* Add parser support for logical_and, logical_or
* Add boolean dtype as a valid tensor type
* BOOLEAN dtype is supported only from tf 1.15
  so logical ops work only in that and newer versions
* Logical_not is omitted since tflite can't convert it -->
  throws errors for addv2

* Add TFLite version check in tests for logical ops

* The check is added because older TFLite versions lack boolean dtype support

4 years ago[QNN] Optimize lowering for requantize and FixedPointMultiply. (#4798)
Animesh Jain [Wed, 5 Feb 2020 19:52:18 +0000 (11:52 -0800)]
[QNN] Optimize lowering for requantize and FixedPointMultiply. (#4798)

* [QNN] Optimize lowering for requantize and FixedPointMultiply.

* Add check for requantize scale gt 1.

* Added test case.
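
The trick behind requantize/FixedPointMultiply is decomposing a float scale into an integer multiplier and a shift, so the scaling runs in pure integer math. A minimal sketch under that assumption (function names are made up; the note about scale > 1 mirrors the commit's "scale gt 1" check, where the shift simply becomes positive):

```python
import math

def to_fixed_point(scale, nbits=31):
    """Decompose a positive float scale into (multiplier, exponent)
    with scale ~= multiplier * 2**(exponent - nbits)."""
    mantissa, exponent = math.frexp(scale)       # scale = mantissa * 2**exponent
    multiplier = round(mantissa * (1 << nbits))  # Q0.31 fixed-point mantissa
    return multiplier, exponent

def fixed_point_multiply(value, scale):
    """Scale an integer value by a float `scale` using only integer ops."""
    multiplier, exponent = to_fixed_point(scale)
    total_shift = 31 - exponent
    rounding = 1 << (total_shift - 1)            # round-to-nearest
    return (value * multiplier + rounding) >> total_shift

print(fixed_point_multiply(1000, 0.5))  # 500
```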

4 years ago[Frontend][TFLite] Dynamically calculate input_stats of any fake_quant range (#4789)
Ina Dobreva [Wed, 5 Feb 2020 19:51:59 +0000 (19:51 +0000)]
[Frontend][TFLite] Dynamically calculate input_stats of any fake_quant range (#4789)

* [TFLite] Dynamically calculate input_stats of any fake_quant range

* pass the input range to the converter and calculate (mean, scale) there
* change the range of the second tensor in elemwise operations
  so that we test inputs with different quant params
* change the possible output range for elemwise ops wrt the updated ranges
* update the comments for (m, s) calculations
* add input range dict to reduce_mean op

* Apply requested changes

* add exception handling for zero division in input_stats
* fix range of the input tensor in elemwise
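
A sketch of how (mean, scale) might be derived from a fake_quant float range, including the zero-division guard the commit mentions (input_stats here is a hypothetical helper, not the test-suite code):

```python
def input_stats(fmin, fmax, nbits=8):
    """Derive (mean, scale) quantization params from a float range.

    scale maps the float span onto the integer span and mean is the
    integer zero point.  A degenerate range (fmin == fmax) would
    divide by zero, so it is guarded explicitly.
    """
    qmax = (1 << nbits) - 1          # 255 for uint8
    if fmax == fmin:
        return 0, 1.0                # avoid zero division for constant inputs
    scale = (fmax - fmin) / qmax
    mean = round(-fmin / scale)      # integer zero point
    return mean, scale

print(input_stats(-1.0, 1.0))  # zero point ~128, scale ~0.00784
```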

4 years agoFixed subprocess creation under windows (#4820)
Seyyed Hossein Hasanpour [Wed, 5 Feb 2020 18:07:29 +0000 (21:37 +0330)]
Fixed subprocess creation under windows (#4820)

* fixed subprocess creation under windows

this addresses  the issue #4819

* Update server.py

4 years ago[REFACTOR][PY] Establish tvm.runtime (#4818)
Tianqi Chen [Wed, 5 Feb 2020 17:00:03 +0000 (09:00 -0800)]
[REFACTOR][PY] Establish tvm.runtime (#4818)

* [REFACTOR][PY] Establish tvm.runtime

This PR establishes the tvm.runtime namespace that contains the core runtime data structures.
The top-level APIs are kept intact for now via re-exporting.

We will followup later to cleanup some of the top-level APIs.

* Fix ndarray name

4 years agoMxnet parser for Qnn dialect (#4714)
shoubhik [Wed, 5 Feb 2020 13:27:52 +0000 (05:27 -0800)]
Mxnet parser for Qnn dialect (#4714)

* - Additional util methods needed for mxnet frontend for qnn dialect.

* - Fixing call to quantize.

* [QNN] MxNet-MKLDNN parser support for QNN

* [QNN] Relax conv check.

* - Merge from origin

* [QNN] Channel wise changes

* [QNN] Dense changes

* Dense fix for QNN ops.

* - Removed non-mkl code from utils.

- Small refactoring

- Remove "with_sum" from conv

- Simplified code

* - Fixing ring buffer name.

* - Fixing pylint issues.

* - Fixing lint
- Removing redundant commented code.

* - Adding test cases
- Removing unused methods.

* [WIP] end to end test case for mxnet qnn parser

* Changes to parse large CV models.

* Pylint issues.

* Fix Conv2D with sum and quantized pooling.

* Reverting the changes made for mxnet-mkldnn test cases. Because of #4753, mxnet could not be updated to mxnet-mkldnn.

Co-authored-by: Animesh Jain <anijain@umich.edu>
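
To illustrate the dense changes conceptually, here is a pure-Python sketch of how a QNN parser typically lowers a quantized dense: accumulate in int32 with zero points subtracted, then requantize by the combined scale (quantized_dense is a made-up helper, not the parser code):

```python
def quantized_dense(qa, qb, a_zp, b_zp, a_scale, b_scale, out_scale, out_zp):
    """Integer dense (matmul) on small Python lists: qa is MxK, qb is KxN."""
    m, k, n = len(qa), len(qa[0]), len(qb[0])
    requant = a_scale * b_scale / out_scale  # combined requantize scale
    out = [[0] * n for _ in range(m)]
    for i in range(m):
        for j in range(n):
            # int32 accumulation with zero points removed
            acc = sum((qa[i][t] - a_zp) * (qb[t][j] - b_zp) for t in range(k))
            q = round(acc * requant) + out_zp
            out[i][j] = min(255, max(0, q))  # clamp to uint8
    return out

print(quantized_dense([[2, 4]], [[1], [3]], 0, 0, 1.0, 1.0, 1.0, 0))  # [[14]]
```

The channel-wise variant mentioned above would use a per-output-channel b_scale instead of a single scalar.
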
4 years agoallow customize mkldnn library location (#4814)
Haichen Shen [Wed, 5 Feb 2020 06:04:41 +0000 (22:04 -0800)]
allow customize mkldnn library location (#4814)

4 years ago[REFACTOR][PY] tvm._ffi (#4813)
Tianqi Chen [Wed, 5 Feb 2020 01:01:01 +0000 (17:01 -0800)]
[REFACTOR][PY] tvm._ffi (#4813)

* [REFACTOR][PY] tvm._ffi

- Remove from __future__ import absolute_import in the related files, as it is no longer needed now that the code only runs in Python 3
- Remove reverse dependency of _ctypes _cython to object_generic.
- function.py -> packed_func.py
- Function -> PackedFunc
- all registry related logics goes to tvm._ffi.registry
- Use absolute references for FFI related calls.
  - tvm._ffi.register_object
  - tvm._ffi.register_func
  - tvm._ffi.get_global_func

* Move get global func to the ffi side

4 years ago[TOPI][x86] Injective schedule improvement (#4786)
Animesh Jain [Tue, 4 Feb 2020 23:25:46 +0000 (15:25 -0800)]
[TOPI][x86] Injective schedule improvement (#4786)

* [TOPI][x86] Injective Schedule Improvement.

* Add tiling.

* Vectorize when there is an axis.

4 years agofix memory leak (#4811)
Haichen Shen [Tue, 4 Feb 2020 21:14:15 +0000 (13:14 -0800)]
fix memory leak (#4811)