pyjhzwh [Sat, 7 Mar 2020 00:39:33 +0000 (19:39 -0500)]
Fix stride default value None in torch.nn.functional.avg_pool (#4984)
* fix unordered dictionary problem for python version 3.5
* modify style
* default value of stride in torch.nn.functional.avg_pool is None
* delete prev modifications
* add testcase for nn.functional.avg_pool2d
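For context, in `torch.nn.functional.avg_pool2d` a stride of `None` falls back to the kernel size, which is why the frontend's default must be `None` rather than a fixed number. A minimal sketch of that defaulting rule (the function name is hypothetical, not the actual frontend code):

```python
def resolve_stride(kernel_size, stride=None):
    # avg_pool's documented behavior: stride=None means stride=kernel_size.
    return kernel_size if stride is None else stride

assert resolve_stride(3) == 3      # default: stride follows the kernel
assert resolve_stride(3, 2) == 2   # explicit stride wins
```

So `avg_pool2d(x, 3)` and `avg_pool2d(x, 3, 3)` describe the same pooling.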
Jon Soifer [Fri, 6 Mar 2020 23:54:03 +0000 (15:54 -0800)]
[Runtime] Export GraphRuntime in tvm_runtime.dll (#5002)
Co-authored-by: Jon Soifer <jonso@microsoft.com>
Yao Wang [Fri, 6 Mar 2020 01:17:35 +0000 (17:17 -0800)]
[topi][relay] add operation tan to TVM (#4938)
* Add relay operation relay.op.tan.
* Update tan implementation in TVM.
* Update tests.
* Add shape function for tan.
* Add missing main test to python/frontend/tensorflow/test_forward.
* Revert, back to sin/cos.
* Revert "Revert, back to sin/cos."
This reverts commit
4da5b503b921585ba9d80944b29136142b575c40.
* Fix implementation of tan in cuda. Do not support tan for float16.
Simplify topi/tests/python/test_topi_math. Add testing for tan with float32 and float64.
Try again to implement tan as sin/cos in llvm.
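The sin/cos lowering mentioned above is just the identity tan(x) = sin(x)/cos(x); a quick numerical check in plain Python (not the TVM intrinsic itself):

```python
import math

def tan_via_sin_cos(x):
    # Lower tan as sin/cos, matching the identity used by the llvm path.
    return math.sin(x) / math.cos(x)

assert abs(tan_via_sin_cos(0.5) - math.tan(0.5)) < 1e-12
```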
Tianqi Chen [Fri, 6 Mar 2020 01:15:19 +0000 (19:15 -0600)]
Adding Hua Jiang as reviewer. (#4993)
Zhi [Thu, 5 Mar 2020 23:56:44 +0000 (15:56 -0800)]
hotfix gcn tutorial fail (#4994)
Zhi [Thu, 5 Mar 2020 15:37:26 +0000 (07:37 -0800)]
refactor build module to take IRModule (#4988)
Tianqi Chen [Thu, 5 Mar 2020 00:35:38 +0000 (18:35 -0600)]
Conditions updated to cover better user scenarios (#4951)
* Conditions updated to cover better user scenarios
* [1] New test case added
* [2] New test case added
* [3] Proper variable name used
* [4] Review Comments handled
* [5] Review comments handled
* [6] Review comments handled
Tianqi Chen [Thu, 5 Mar 2020 00:34:08 +0000 (18:34 -0600)]
Fix gpu not found when running TVM docker (#4975)
Animesh Jain [Wed, 4 Mar 2020 19:24:56 +0000 (11:24 -0800)]
[Torch, QNN] Add support for quantized models via QNN (#4977)
* qnn support initial import
* fix upsampling num input
* imagenet tests added
* add quantized module tests
* quantized module tests working
* imagenet test working
* fix lint
* remove top level torch import to fix ci error
* disable lint warning on outside toplevel import
* revert parse -> convert change
* add comments to qnn translation
* address comments, add sample outputs
* add more comments
* refactor bias add and requantize step
Lianmin Zheng [Wed, 4 Mar 2020 18:52:02 +0000 (10:52 -0800)]
Tighten split's extent (#4931)
* Set split node's range to minimum of ext and split factor or split nparts, but only when PassDownDomain is called with allow_missing == false, i.e. by InferBound. Add a helper PassUpThreadBinding() to get a map telling whether an IterVar has at least one leaf IterVar deriving from it binding to a thread. Add two unit tests.
* Enhance LoopVectorizer for vectorizing by 0. Found at least one case from topi/tests/python/test_topi_transform.py::test_tile.
* Revert changes vectorize_loop.cc; when parent's ext is zero, set split's range to the factor or nparts.
* Update with comments.
* Refactor the ext tightening predicate.
* Fix reference types.
* Integrate tvm.te changes.
* Trivial comment change to trigger CI.
* Trivial comment correction to trigger testing.
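The tightening described above amounts to clamping the inner loop's extent by the parent's extent; a toy sketch of the predicate (hypothetical helper name, not the C++ implementation):

```python
def inner_extent(parent_ext, factor):
    # After split, the inner IterVar's range is min(parent extent, factor),
    # avoiding out-of-range iterations when factor > parent_ext.
    return min(parent_ext, factor)

assert inner_extent(3, 4) == 3    # extent tightened to the parent
assert inner_extent(10, 4) == 4   # normal case: factor bounds the loop
```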
pyjhzwh [Wed, 4 Mar 2020 09:40:37 +0000 (04:40 -0500)]
[Torch] fix unordered dictionary problem for python version under 3.6 (#4982)
* fix unordered dictionary problem for python version 3.5
* modify style
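The underlying issue is that plain `dict` only preserves insertion order from CPython 3.6 onward (guaranteed by the language from 3.7); on 3.5 an `OrderedDict` is required, e.g.:

```python
from collections import OrderedDict

# On Python 3.5 a plain dict may reorder these entries; OrderedDict
# preserves insertion order on every supported version.
params = OrderedDict([("conv1.weight", 0), ("conv1.bias", 1), ("fc.weight", 2)])
assert list(params) == ["conv1.weight", "conv1.bias", "fc.weight"]
```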
Zhi [Tue, 3 Mar 2020 09:30:28 +0000 (01:30 -0800)]
[Relay] Target annotation for external codegen (#4933)
* op based external compiler annotation
* Use TVM register directly
* Small fix
* test graph
Co-authored-by: Cody Yu <comaniac0422@gmail.com>
Leandro Nunes [Tue, 3 Mar 2020 03:41:31 +0000 (03:41 +0000)]
Pin xgboost dependency version to 0.90 (#4965)
* Sets xgboost dependency to be 0.90, preventing
segfaults during TVM python unit test execution
* This is discussed in issue #4953
maheshambule [Mon, 2 Mar 2020 20:57:40 +0000 (02:27 +0530)]
[Frontend] [Tensorflow] ReadVariableOp operator support (#4952)
* tf frontend read variable op
* pylint fix
* tf frontend frozen graph pruned ops
Zhi [Mon, 2 Mar 2020 06:23:46 +0000 (22:23 -0800)]
[Relay][Pass] Add inline pass (#4927)
* add inline pass
* IsInline -> IsMarkedInlined
* fix comment
Ethan-Yan27 [Mon, 2 Mar 2020 06:00:08 +0000 (14:00 +0800)]
[Doc]refine the example description of max/min/sum/tag_scope (#4974)
Samuel [Mon, 2 Mar 2020 03:16:46 +0000 (08:46 +0530)]
[TFLITE]FLOOR_MOD & FLOOR_DIV support (#4971)
* TFLite Floor_div & floor_mod parsing code
* Review comment updated
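FLOOR_DIV and FLOOR_MOD follow floor semantics (rounding toward negative infinity), which Python's `//` and `%` also use; the difference from C-style truncating division only shows up on negative operands:

```python
# Floor semantics: the quotient rounds toward -inf and the remainder takes
# the divisor's sign. C-style truncation would give -3 and -1 instead.
q, r = -7 // 2, -7 % 2
assert (q, r) == (-4, 1)
assert q * 2 + r == -7  # the invariant both conventions satisfy
```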
Animesh Jain [Sun, 1 Mar 2020 21:57:24 +0000 (13:57 -0800)]
[Relay][FastMath] Relay pass to use fast exp/tanh (#4873)
* [Relay][FastMath] Relay pass to use fast exp/tanh
* Adding required_pass to the tests.
* FastMath test changes.
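At its core such a pass substitutes op names while walking the expression tree; a toy sketch with nested tuples standing in for Relay expressions (the real pass operates on Relay IR, not tuples):

```python
# Map accurate ops to their fast approximations (per the pass description).
FAST_OPS = {"exp": "fast_exp", "tanh": "fast_tanh"}

def to_fast_math(expr):
    # expr is either a leaf (str) or a tuple of (op, *args).
    if isinstance(expr, tuple):
        op, *args = expr
        return (FAST_OPS.get(op, op), *(to_fast_math(a) for a in args))
    return expr

assert to_fast_math(("exp", ("tanh", "x"))) == ("fast_exp", ("fast_tanh", "x"))
```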
zhengdi [Sun, 1 Mar 2020 16:33:59 +0000 (00:33 +0800)]
[TOPI] fix docs errors (#4973)
masahi [Sun, 1 Mar 2020 00:51:49 +0000 (09:51 +0900)]
[Torch] Upsampling op support and enable registering a user defined op conversion map (#4961)
* add custom conversion map
* add roi align test using custom convert map
* refactor test
* add support for upsampling op and test on segmentation models
* remove redundant no_grad
* add upsampling test case
* make the default custom map None, instead of empty dict
* updated tests, remove packaging and drop PT 1.2 support
* add better support for aten::to and tests
* add a note on dilation in x86
jmorrill [Sat, 29 Feb 2020 21:52:22 +0000 (13:52 -0800)]
Added CopyFromBytes and CopyToBytes convenience methods to NDArray. Fixed typos. (#4970)
* Added CopyFromBytes and CopyToBytes convenience methods. Fixed typos.
* Removed unneed argument check
* Use TVMArrayCopyFrom/ToBytes methods
* Moved CopyFrom/ToBytes to ndarray.cc
* CopyToBytes impl was using CopyFromBytes. Fixed
* changed inline to TVM_DLL
* Used impl from TVMArrayCopyTo/FromBytes into NDArray CopyTo/FromBytes
* Move implementation of all CopyFrom/ToBytes into a common impls
* make arg const
* simplify method impl
Ina Dobreva [Sat, 29 Feb 2020 21:30:16 +0000 (23:30 +0200)]
[Frontend][TFLite] Add parser support for l2_normalization (#4966)
* [Frontend][TFLite] Add parser support for l2_normalization
* TF doesn't provide uint8 support
* TFL does the normalization only if it's over the last axis
* TFL uses only the default value for epsilon
* Change error message
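For reference, l2_normalization over the last axis divides each vector by its Euclidean norm, with epsilon guarding against division by zero; a plain-Python sketch (not the TFLite kernel):

```python
def l2_normalize(vec, epsilon=1e-12):
    # Divide by the L2 norm, clamped below by epsilon to avoid div-by-zero.
    norm = max(sum(x * x for x in vec) ** 0.5, epsilon)
    return [x / norm for x in vec]

out = l2_normalize([3.0, 4.0])          # norm is 5.0
assert abs(out[0] - 0.6) < 1e-9
assert abs(out[1] - 0.8) < 1e-9
```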
Tianqi Chen [Fri, 28 Feb 2020 21:20:04 +0000 (13:20 -0800)]
[DOCS] Fix sphinx precheck (#4967)
* [DOCS] Fix sphinx precheck
* ignore keras warnings
* Remove more warnings
masahi [Fri, 28 Feb 2020 02:59:15 +0000 (11:59 +0900)]
[Relay, Torch] Clean up and refactor PyTorch frontend (#4944)
* The initial import of refactored implementation, all tests passed
* enable mobilenet v2 test
* minor cleanup
* reorg
* fix lint
* use input names that come with torch IR
* fix typo
* introduce parse_operators
* fix lint
* add _ prefix
Tianqi Chen [Fri, 28 Feb 2020 02:57:18 +0000 (18:57 -0800)]
[CI] Add pre-check script to check sphinx doc build. (#4956)
Introduce the check stage to the unittest stage for now
so we don't have to rebuild CI images.
As we make additional CPU images to make use of the sphinx,
consider move it to an earlier stage.
Cody Yu [Fri, 28 Feb 2020 02:57:07 +0000 (18:57 -0800)]
fix doc warning (#4959)
Tianqi Chen [Thu, 27 Feb 2020 22:27:37 +0000 (14:27 -0800)]
[DOCS] Sphinx -- Introduce alias detection. (#4954)
* [DOCS] Sphinx -- Introduce alias detection.
Background: some of our namespaces import function from another
namespace. For example tvm.te imports most of the operators from tvm.tir.
Previously we manually excluded these aliases from the doc.
However, that means we cannot link them by the alias name.
This PR adds a sphinx callback plugin to detect such aliases, and creates a rubric block
at the bottom of its current docstring: `Alias of the original class`.
It is done in a way so that we can refer to the generated docs.
We also fixed a few docs errors.
* Fix most of the issues
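Alias detection of this kind boils down to checking object identity between a namespace and the module it re-exports from; a minimal sketch (the actual plugin hooks into Sphinx autodoc, and these stand-in namespaces are hypothetical):

```python
import types

# Stand-ins: te re-exports exp from tir, mirroring tvm.te importing tvm.tir ops.
tir = types.SimpleNamespace(exp=lambda x: x, sigmoid=lambda x: x)
te = types.SimpleNamespace(exp=tir.exp, schedule=lambda: None)

def find_aliases(ns, origin):
    # A name is an alias if it is the very same object in both namespaces.
    return sorted(n for n in vars(ns) if vars(origin).get(n) is getattr(ns, n))

assert find_aliases(te, tir) == ["exp"]
```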
Jon Soifer [Thu, 27 Feb 2020 21:32:16 +0000 (13:32 -0800)]
[Runtime] Fix TVM_DLL_EXPORT_TYPED_FUNC to work on Windows (#4955)
* [Runtime] Fixed TVM_DLL_EXPORT_TYPED_FUNC to work on Windows
* fix style
Co-authored-by: Jon Soifer <jonso@microsoft.com>
Cody Yu [Thu, 27 Feb 2020 20:38:54 +0000 (12:38 -0800)]
Move Ops in relay.op.contrib.* (#4942)
* move contrib
* lint
* address comment
* address comment
maheshambule [Thu, 27 Feb 2020 19:40:30 +0000 (01:10 +0530)]
[Frontend] [MXNet] make_loss operator support (#4930)
* make_loss test case
* mxnet frontend make_loss support
* added comment for make_loss
* pylint fix
* Update mxnet.py
zhengdi [Thu, 27 Feb 2020 16:43:10 +0000 (00:43 +0800)]
[RELAY] fix error message (#4945)
Tianqi Chen [Thu, 27 Feb 2020 05:10:47 +0000 (21:10 -0800)]
[REFACTOR][PY][API-CHANGE] Remove legacy python files. (#4943)
* [REFACTOR][PY][API-CHANGE] Remove legacy python files.
Remove legacy python files.
Use the te namespace for most of the tensor expression primitives.
- tvm.create_schedule -> tvm.te.create_schedule
- tvm.placeholder -> tvm.te.placeholder
- tvm.compute -> tvm.te.compute
* Remove top-level exposures.
Tianqi Chen [Thu, 27 Feb 2020 00:44:09 +0000 (16:44 -0800)]
[TUTORIAL] Fix tedd tutorial after strategy change (#4947)
* [TUTORIAL] Fix tedd tutorial after strategy change
* Remove scale, remove link to external gdoc
Hua Jiang [Wed, 26 Feb 2020 23:52:28 +0000 (15:52 -0800)]
[VTA] YoloV3 Support (#4887)
* [VTA] YoloV3 Support
Issue:
YoloV3 uses some operators and logic that are not well supported by the
existing vta logic, like nn.pad, upsample, and 255 output channels.
Solution:
add related logic so that darknet YoloV3 can run on VTA
* Fix small (0 or 1 height/width) detection frame issue.
* add yolov3-tiny tutorial
* add os import
* address review comments.
* rename tutorial file with a short name.
* rename deploy_vision_on_vta.py into deploy_classification.py.
* address review comment, fix pylint error in deploy_detection.py
Ina Dobreva [Wed, 26 Feb 2020 23:43:11 +0000 (01:43 +0200)]
[Frontend][TFLite] Add parser support for 'square' operator (#4915)
* [Frontend][TFLite] Add parser support for square operator
* Add parser implementation
* Add relevant tests
* Note: 'square' is a unary elemwise operator but it's added separately
in the parser since there is no Relay 'square' op
and instead we have to use 'multiply'
* Change relay operation from 'multiply' to 'power'
* Remove a redundant line as requested
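Per the log above, Relay has no dedicated 'square' op, so the parser lowers it to a power op with a constant exponent of 2; the mapping is simply:

```python
def square(x):
    # TFLite 'square' expressed as power(x, 2).
    return x ** 2

assert square(7) == 49
assert square(-3) == 9
```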
Nick Hynes [Wed, 26 Feb 2020 21:44:32 +0000 (13:44 -0800)]
Remove SGX toolchain installation from CI Dockerfile (#4948)
Zhi [Wed, 26 Feb 2020 18:01:27 +0000 (10:01 -0800)]
[Relay][pass] call graph for relay (#4922)
* call graph for relay
* CallGraphEntryNode->CallGraphEntry, __getitem__->print_var
* fix typos
Alex Wong [Wed, 26 Feb 2020 05:41:53 +0000 (21:41 -0800)]
[Tutorial] Add a tutorial for PyTorch (#4936)
* Add a tutorial for PyTorch
* Fix sphinx formatting, add version support
* Remove space
* Remove version check
* Some refactoring
* Use no grad
* Rename input
* Update cat img source
Haichen Shen [Wed, 26 Feb 2020 05:39:16 +0000 (21:39 -0800)]
Bump up dev version (#4941)
* bump up dev version
* update
Neo Chien [Wed, 26 Feb 2020 04:23:27 +0000 (12:23 +0800)]
[DOCS] Fix Sphinx Warning: the target found for cross-reference (#4925)
* [DOCS] Fix Sphinx Warnings: the target found for cross-reference warnings
* Fix the warning: undefined label
yongfeng-nv [Wed, 26 Feb 2020 04:21:08 +0000 (23:21 -0500)]
Tensor Expression Debug Display (TEDD) (#4651)
* Initial TEDD for publishing.
* 1. Fix lint issues. 2. Print intrin.body instead of intrin.name in Schedule Tree. 3. Add examples to top level APIs' comments. 4. Top level APIs don't print Dot string by default, unless outputdotstring is True.
* Fix more lint issues.
* Update top level API argument names and use raw strings to avoid Python lint warnings in the tests.
* Disable TEDD verification, but keep TE construction.
* Stop importing tedd to avoid failure.
* Separate data extraction and visualization. 1. Add API tedd.dump_json(schedule) to dump a json string for the schedule data for visualization. 2. Update tests. 3. Add a tutorial. 4. Add range information to IterVars.
* Update TEDD about InferBound failure. 1. TEDD doesn't call inferbound for DFG. 2. Update tutorial about the InferBound failure.
* 1. Import IPython only if SVG is requested. This is required to fix a tutorial publishing failure. 2. Fix test about IPython availability check.
雾雨魔理沙 [Wed, 26 Feb 2020 01:35:49 +0000 (17:35 -0800)]
[WIP] Fixing an Infinite Loop case in UnmatchedChecker. (#4881)
* save
* save
* remove
* remove cerr
Yida Wang [Tue, 25 Feb 2020 21:14:58 +0000 (13:14 -0800)]
[Fix] remove unnecessary spliting in the cached chunk (#4935)
* remove unnecessary splitting in the cached chunk
* remove unnecessary splitting in the cached chunk
wpan11nv [Tue, 25 Feb 2020 11:32:21 +0000 (03:32 -0800)]
[LLVM] Fix build breaks from StringRef changes (#4923)
- llvm::StringRef to std::string conversion is explicit now.
Signed-off-by: Wei Pan <wpan11nv@nvidia.com>
Jon Soifer [Tue, 25 Feb 2020 04:53:24 +0000 (20:53 -0800)]
[Relay][External Codegen] Support data types for CSourceModuleCodegen args and output (#4934)
* Support int args and no extra buffers
* Fixes
* remove testing code
* fix style
* more style
* use const args
* style
Co-authored-by: Jon Soifer <jonso@microsoft.com>
Alex Wong [Tue, 25 Feb 2020 04:14:45 +0000 (20:14 -0800)]
[Relay] Add a PyTorch to Relay Parser (#4497)
* Add a PyTorch to Relay parser
* Add alexnet, googlenet, mnasnet, shufflenet wip
* Fix lint
* Remove fix for shufflenet
* Lower check
* Pull changes from neo-ai/tvm changes
* Remove commented out section
* Use infer_shape everywhere
* Change back to using trace instead of path in from_pytorch
* Parse state_dict to add param names
* Umbrella single_op under test_forwards
* Remove print and cleanup call
* Check if update to test broke CI
* Retrigger CI
* Add back in updated tests
* Try splitting up tests
* First pass at flexible typing, implemented for ones
* Add int32 for all ops
* Remove print statements
* Fix lint
* Broad except
* Add other tensor types
* Temporarily use old tests
* Retrigger CI
* Lower type names
* Use numpy to convert in dense op
* Fix lint
* Remove print
* Need to cleanup but verify int32 works for add
* Rough tests for different types, a lot of types are not supported on CPU
* Probably doesn't build, need to save work as I have to switch branches (constantly)
* Parse param type
* Remove print stmt in parser
* Clean up some code
* Working on float32 for bn
* Add resnet18 double type
* Fix lint
* Temporarily move PT tests first
* Temporarily add back refactored tests to fix mem issue
* Add more type test and temp remove some tests
* Comment out tests, hopefully CI prints a trace
* Get stack trace
* Remove operator dict key, rename op_name to node_id, remove dead code
* Make relay map a list
* Remove some hacky string stuff
* Move to PyTorch 1.4
* Remove input_type as param
* Remove _get_fill_value, fix full ops
* Remove unused code and combine ops for identity and none
* Remove fn_param
* Clean up main loop
* Remove useless if/else for outputs
* Remove ir_names, only used once
* Remove some string hacking
* Remove string parsing to get output name
* Fix bug with output sizes of nodes
* Use attributeNames in parse ops
* Remove continue and add_op in parse_op
* Do this everywhere, use assert instead of explicit type casting
* Remove unnecessary swap
* Slight refactor for elemwise input parse
* Use a copy of graph everywhere
* Rename nid_to_node_name
* Refactor parse import prereqs
* Clean up input node kind check
* Clean up conditionals
* Clean up add_op
* Cleanup type for ones and zeros op
* Fix lint
* Add torch install to CI
* Actually use torch
* Try moving import torch to only where it's needed
* Import torch for CI
* Use take op for select
* Temporarily add ignore for jit inline pass for CI
* Use CompleteTensorType, might be a PT 1.2 only thing
* Use different types in elemwise op
* Use float16 ones
* Fix float16 test
* Remove the temp docker changes
* Remove temp test
* Temporarily comment out original tests
* Remove file
* Empty cache after each test
* Add some prints and lower input sizes
* Try using no grad
* Trying to globally set grad off
* Use no grad for torchvision
* Remove xfail tests
* Remove VGG and AlexNet due to some issues
* Combine pooling tests
* Remove extra test file
* Remove single op, remove larger pooling tests
* Remove maxpool3
* Remove debug prints
* Remove inference call and add no_grad in measure latency
* Use standard string start char
* Remove redundant infer_shape in slice
* Convert most to checks to just expr
* Remove extra paren
* More refactor of isinstance
* Add helper for creating typed constants
* Assert instead of return when no matching type
* Remove network variants
* Add no_grad when forward, remove deatch, fix lint
* Change isinstance to expr in transpose
* Use opnotimplemented, refactor
* Fix full ops, remove duplicate tests
* Never use shape field unless we know the type
* Remove comma, retrigger CI
* Add paren, retrigger CI
* Use inline if-else for flags
* Throw exception instead of assert
* Remove version check for CI
* Check version when doing inline pass
* Fix lint
* Lower more input sizes
* Add new line, conv2d only accepts weight as expr
* Use tvm.runtime.ndarray
* Remove change to torch version install
* Try no grad for mobilenet
* Fix lint
* Fix lint again
* Revert to last passing
* Delete test files
* Ignore lint
* Revert back
* Comment out mobilenet
* Clean up compare compiled and baseline outputs
* Use IRModule
* Add todos
* Refactor use_bias
* Add todo for fix conv op channels
* Change input to data type
* Remove todo
* Handle channel multiplier > 1
vizero1 [Tue, 25 Feb 2020 03:55:27 +0000 (04:55 +0100)]
Use opencv resize method for preprocessing of image in darknet (#4883)
* Use opencv resize method for preprocessing of image in darknet
* Use opencv resize method for preprocessing of image in darknet
* Fix pylint issues
Samuel [Tue, 25 Feb 2020 03:39:10 +0000 (09:09 +0530)]
[FRONTEND][KERAS]GaussianDropout/Noise parsing support (#4928)
GaussianDropout & GaussianNoise are active only during training time. This can be skipped during inference.
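Because both layers are active only in training, the frontend can legally map them to the identity at inference time; a hedged sketch of that behavior (not the Keras implementation):

```python
import random

def gaussian_dropout(xs, rate, training=False):
    # Inference: identity, which is what the converter relies on.
    if not training:
        return xs
    # Training: multiply by 1-centered Gaussian noise (sketch).
    sigma = (rate / (1.0 - rate)) ** 0.5
    return [x * random.gauss(1.0, sigma) for x in xs]

assert gaussian_dropout([1.0, 2.0], rate=0.5) == [1.0, 2.0]
```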
Haichen Shen [Mon, 24 Feb 2020 21:12:03 +0000 (13:12 -0800)]
[Relay][AutoTVM] Relay op strategy (#4644)
* relay op strategy
fix lint
bitpack strategy
bitserial_dense (#6)
* update strategy
* address comments
fix a few topi test
Dense strategy (#5)
* dense
* add bifrost; remove comments
* address comment
Refactor x86 conv2d_NCHWc (#4)
* Refactor x86 conv2d
* Add x86 depthwise_conv2d_NCHWc
* Add back topi x86 conv2d_nchw
* Merge x86 conv2d_nchw and conv2d_NCHWc
* Minor fix for x86 conv2d
fix more strategy
Add x86 conv2d_NCHWc_int8 strategy (#8)
* Add x86 conv2d_NCHWc_int8 strategy
* Remove contrib_conv2d_nchwc_int8
* Fix generic conv2d_NCHWc for int8
* Fix topi arm_cpu conv2d_NCHWc_int8
update x86 conv2d
enable specify relay ops to be tuned for autotvm
add cuda conv2d strategy
add conv2d strategy for rocm
add conv2d strategy for hls
add conv2d strategy for arm cpu
add conv2d strategy for mali
add conv2d strategy for bifrost
add conv2d strategy for intel graphics
clean up and fix lint
remove template keys from autotvm
remove 2 in the func name
address comments
fix
* fix bugs
* lint
* address comments
* add name to op implement
* Modify topi tests (#9)
* Add pooling, reorg, softmax and vision
* Add lrn
* fix topi test
* fix more topi test
* lint
* address comments
* x
* fix more tests & bugs
* Modify more tests (#10)
* Modify tests for bitserial_conv2d, bitserial_dense, bitserial_conv2d_rasp and bnn
* Minor fix
* More minor fix
* fix more test
* try to update vta using strategy
* fix cpptest
* x
* fix rebase err
* Fix two tests (#11)
* change autotvm log format
* lint
* minor fix
* try fix vta test
* fix rebase err
* tweak
* tmp hack for vta pass
* fix tutorial
* fix
* fix more tutorials
* fix vta tutorial
* minor
* address comments
* fix
* address comments
* fix cpptest
* fix docs
* change data structure name and api
* address comments
* lint
* fix rebase err
* updates
* fix winograd test
* fix doc
* rebase
* upgrade tophub version number
* fix bug
* re-enable vta tsim test after tophub is upgraded
* fix vta test to use the correct args so the config can be found in tophub
Co-authored-by: Yao Wang <kevinthesunwy@gmail.com>
Leyuan Wang [Fri, 21 Feb 2020 22:31:04 +0000 (14:31 -0800)]
[Fix] Fix get_valid_count flaky test for cuda (#4901)
* get_valid_count accuracy issue fixed for individual tests but not for all tests running together
* minor fix
* initialize valid_count and PrefixSum buffers
* test updated
* update relay test as well
* update document
* fix lint
* address comment
* fix lint
* correct atomicAdd identifier name
Neo Chien [Fri, 21 Feb 2020 21:03:41 +0000 (05:03 +0800)]
[TEST][FLAKY] topi/tests/python/test_topi_sort.py::test_argsort (#4891)
* [TEST][FLAKY] topi/tests/python/test_topi_sort.py::test_argsort
* update test function of argsort like topk
* Shuffle index and get data from shuffled index
* Replace the random.uniform with np.arange
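The fix works because a shuffled `arange` contains all-distinct values, so argsort has exactly one valid answer; with `random.uniform` data, ties could make the test flaky. Sketched here with plain Python in place of NumPy:

```python
import random

n = 8
data = list(range(n))          # distinct values, as np.arange gives
random.shuffle(data)           # shuffle, then recover order via argsort
argsort = sorted(range(n), key=data.__getitem__)

# With distinct values, gathering by the argsort indices must sort exactly.
assert [data[i] for i in argsort] == list(range(n))
```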
Tianqi Chen [Fri, 21 Feb 2020 05:56:26 +0000 (21:56 -0800)]
[COMMUNITY] @anijain2305 -> Committer (#4921)
Ina Dobreva [Fri, 21 Feb 2020 04:10:45 +0000 (04:10 +0000)]
Fix tests for tflite unary elemwise operations (#4913)
* add TFLite version check for 'ceil' and 'cos'
* fix name check of test_op for positive inputs
* add error message for operator not found in the installed fbs schema
Orion34C [Fri, 21 Feb 2020 02:43:45 +0000 (10:43 +0800)]
[CODEGEN] Support cuda tensorcore subbyte int data type in auto tensorcore (#4546)
* support cuda tensorcore subbyte int data type in auto tensorcore
* add license
* pass cpplint
* fix code review comments
* merge the int4/int1 codegen tutorial into the existing auto tensorcore tutorial
* using master's new API
* disable tuning when cuda is not enabled
* address cr comment
* do not run the tuning
* fix test failure
* fix cpplint error
* fix bool type reduction bug
* 1. fix an index bug 2. fix returned bytes value of int1/int4/uint4
* fix typo
Cody Yu [Thu, 20 Feb 2020 22:09:34 +0000 (14:09 -0800)]
[DOCS] Fix Sphinx Warnings (RST indent, cross-ref, and image scale) (#4920)
* fix indents
* Fix image scale and cross-ref
wpan11nv [Thu, 20 Feb 2020 17:25:35 +0000 (09:25 -0800)]
[Relay] Fix an assertion exposed by loop vectorizer (#4916)
- Allows uniform conditions for select expressions (the same as Halide)
exposed by the loop vectorizer.
Signed-off-by: Wei Pan <weip@nvidia.com>
Cody Yu [Thu, 20 Feb 2020 17:24:37 +0000 (09:24 -0800)]
[DOCS] Fix sphinx warnings (#4917)
* Fix Python docstrings
* More fixes
* Fix lint
Tianqi Chen [Wed, 19 Feb 2020 21:14:42 +0000 (13:14 -0800)]
[REFACTOR] Polish ffi convention. (#4912)
* [REFACTOR] Polish ffi convention.
- Remove the src/api, keep registration local to the c++ function.
- Remove the api_internal as it is no longer needed.
* Update the codebase walk through
hcyang [Wed, 19 Feb 2020 06:33:16 +0000 (14:33 +0800)]
[RELAY][FRONTEND][TF] Fix FuseBatchNorm output cast error if need_cast is True (#4894)
Andrew [Wed, 19 Feb 2020 02:03:03 +0000 (18:03 -0800)]
Fix tvm.target.generic_func runtime detection (#4910)
Tianqi Chen [Tue, 18 Feb 2020 23:39:59 +0000 (15:39 -0800)]
[DOCS] Update API docs to reflect the status after the refactor. (#4907)
Jon Soifer [Tue, 18 Feb 2020 21:44:59 +0000 (13:44 -0800)]
[Relay] Expose FunctionGetAttr to Python (#4905)
* [Relay] Expose FunctionGetAttr to Python
* add test
Co-authored-by: Jon Soifer <jonso@microsoft.com>
Josh Fromm [Tue, 18 Feb 2020 18:24:22 +0000 (10:24 -0800)]
[Relay][Frontend][Keras] NHWC import support. (#4899)
* Basic test working
* Almost all tests working.
* all tests passing.
* Fixed lint.
* Improved Style.
Tianqi Chen [Tue, 18 Feb 2020 16:14:12 +0000 (08:14 -0800)]
[REFACTOR][PY] Establish tvm.arith (#4904)
Tianqi Chen [Tue, 18 Feb 2020 05:40:19 +0000 (21:40 -0800)]
[CI] Add autodocsum as dep (#4902)
Tianqi Chen [Tue, 18 Feb 2020 04:56:45 +0000 (20:56 -0800)]
[CI] Update ci docker to add autodocsumm (#4903)
pankratz [Tue, 18 Feb 2020 02:48:05 +0000 (19:48 -0700)]
Fixed bugs that occurred when using bitwise operators on floating point type expressions, and a further crash when using ops <<, >>, %. Added regression tests for both types of bug. (#4892)
Tianqi Chen [Tue, 18 Feb 2020 02:47:09 +0000 (18:47 -0800)]
[REFACTOR][PY] Establish tvm.te and tvm.driver (#4900)
- Move the related files to tvm.te
- Move build_module.py to tvm.driver
Jon Soifer [Mon, 17 Feb 2020 20:18:15 +0000 (12:18 -0800)]
[Relay][Pass] Fix bug in re-processing call node in MergeComposite pass (#4879)
* Fix bug in re-processing call node
* Add test
* Add to main
* temp changes to work from another machine
* fix rest of tests
* fix test_reuse_call_merge
* fix merge
Co-authored-by: Jon Soifer <jonso@microsoft.com>
Tianqi Chen [Mon, 17 Feb 2020 18:53:05 +0000 (10:53 -0800)]
[DOCS] Introduce how to add hardware backend to FAQ (#4898)
Alex Gladkov [Mon, 17 Feb 2020 17:22:11 +0000 (09:22 -0800)]
Fast exponent (#4790)
Baden Hughes [Mon, 17 Feb 2020 01:56:25 +0000 (11:56 +1000)]
Update faq.md (#4893)
various minor editorial updates - style, grammar, typos.
Zhi [Mon, 17 Feb 2020 01:44:22 +0000 (17:44 -0800)]
Fix alpha_equal bug (#4897)
Tianqi Chen [Sun, 16 Feb 2020 23:02:39 +0000 (15:02 -0800)]
[CI] Cleanup logfile before tutorial runs (#4896)
masahi [Sun, 16 Feb 2020 06:43:22 +0000 (15:43 +0900)]
[Relay] Fix VM compiler for while loop with free vars (#4889)
* add additional switch to handle nested call node
* Fix VM compiler for while loop with free var
wpan11nv [Sun, 16 Feb 2020 03:47:36 +0000 (19:47 -0800)]
[CodeGen][CUDA] Fix issues in cuda codegen (#4876)
- Do not emit __shared__ etc. as part of type for casting
- Fix fp16 reduction kernels with compiler errors:
"no operator + matches these operands: volatile half + volatile half"
This patch inserts casts to remove volatile type qualifier following
volatile loads (fp16 only). CUDA fp16 library headers should add
volatile member functions.
- Update have_fp16 to include compute 6.1 GPUs, which do support fp16,
although their fp16 throughput is low. Updated tests.
Signed-off-by: Wei Pan <weip@nvidia.com>
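The capability check described above can be sketched as a predicate over the CUDA compute version (a simplified, hedged rendering, not the actual helper):

```python
def have_fp16(major, minor):
    # fp16 is supported on compute 5.3 and on 6.x and newer; 6.1 supports
    # it too, just with low throughput (the case this commit adds).
    return (major, minor) == (5, 3) or major >= 6

assert have_fp16(6, 1)       # now included
assert not have_fp16(5, 2)   # no fp16 support
```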
masahi [Sat, 15 Feb 2020 18:32:58 +0000 (03:32 +0900)]
improve antlr import error message (#4888)
Cody Yu [Sat, 15 Feb 2020 04:15:47 +0000 (20:15 -0800)]
[AutoTVM] Support range in index based tuners (#4870)
* Support range in index based tuners
* Address comments
* Remove __*state__
* trigger CI
masahi [Sat, 15 Feb 2020 01:12:06 +0000 (10:12 +0900)]
[QNN] Add support for per channel weight scale in dense op (#4880)
* add test case for per channel dense
* add unit arg in tflite frontend
* update qnn legalize test
* fix output dim index
masahi [Fri, 14 Feb 2020 04:28:07 +0000 (13:28 +0900)]
[QNN] More doc fix on quantize and convolution (#4874)
* [QNN] Doc fix on quantize and convolution
* update test
wpan11nv [Fri, 14 Feb 2020 04:24:29 +0000 (20:24 -0800)]
[TOPI][CUDA] Enable vectorization on fp16 type (#4867)
- This allows to better utilize the memory bandwidth
- Note that not all cases are vectorized for fp16 datatype. For
instance, when the size is not a multiple of 1024, the inner loop
may be an expression that cannot be vectorized. In this case, a
small inner loop is still beneficial for latency hiding.
Signed-off-by: Wei Pan <weip@nvidia.com>
tqchen [Wed, 12 Feb 2020 23:38:39 +0000 (15:38 -0800)]
[REFACTOR][PY] Establish tvm.tir
- Move related files into the corresponding location as in C++
- Keep the top-level TVM API backward compatible to make minimum changes in topi
Zhi [Wed, 12 Feb 2020 16:14:36 +0000 (08:14 -0800)]
Update docs/dev/virtual_machine.rst
Co-Authored-By: Wei Chen <ipondering.weic@gmail.com>
Zhi [Wed, 12 Feb 2020 16:14:27 +0000 (08:14 -0800)]
Update docs/dev/virtual_machine.rst
Co-Authored-By: Wei Chen <ipondering.weic@gmail.com>
Zhi Chen [Tue, 11 Feb 2020 04:52:20 +0000 (04:52 +0000)]
fix vm doc
Alex Gladkov [Thu, 13 Feb 2020 06:38:15 +0000 (22:38 -0800)]
Optimize x86 conv3d_ndhwc using data packing approach. (#4866)
Add tuneable conv3d_ndhwc schedule
mbarrett97 [Thu, 13 Feb 2020 02:07:58 +0000 (02:07 +0000)]
[FRONTEND][TFLITE] Add support for TFLite_Detection_PostProcess (#4543)
* [FRONTEND][TFLITE] Add support for TFLite_Detection_PostProcess
This adds support for the custom operator
TFLite_Detection_PostProcess which is commonly used in
object detection networks such as SSD Mobilenet. It
only adds support for when use_regular_nms = False.
Change-Id: I819b253c0eb6f0fa55da65d2634e09359b888828
* Added a test for the tflite custom op
Change-Id: Ie5baa092deae9a8bcffd2ebd9f6d346b90e58afd
* Removed trailing comma
Change-Id: Ib08f02b5f1a59a883048bfb36e4321152cd2e7f2
* Added spaces between divide
Change-Id: If1171fc03d211a809cedeb800804394972af4060
* Formatted comment
Change-Id: I3ce7e69b8d2c73aec57369c1c64ea1eec07f087b
* Reduced line length in test
Change-Id: I49eaafc3369070f8f3e85fbb965ad20972096c68
* Set random seed for test
Change-Id: I542a787d11422ea83c52147b2cb1144fcef0dd77
* Fixes to style
Change-Id: I2971b8ecebe08c882b2481a99f67cfbe515e0b1f
* Assert for incorrect number of inputs
Change-Id: I393f3b3b62be73e427498d98456fb1d5a214e0af
* Change comparison to pass linting
The linter was updated, so I needed to fix
a small style issue as a result.
Change-Id: Ia3c954565a00de92e7fb1912eae9ed9875d60c7c
tqchen [Wed, 12 Feb 2020 21:39:45 +0000 (13:39 -0800)]
[REFACTOR][PY][API-CHANGE] Establish tvm.target
Move the related target modules into tvm.target.
API change:
- tvm.target.current_target -> tvm.target.Target.current
- tvm.datatype -> tvm.target.datatype
tqchen [Wed, 12 Feb 2020 19:20:23 +0000 (11:20 -0800)]
[JVM] Update the runtime PackedFunc for module
tqchen [Wed, 12 Feb 2020 05:41:47 +0000 (21:41 -0800)]
Fix optimize
tqchen [Wed, 12 Feb 2020 04:36:20 +0000 (20:36 -0800)]
[DOCS][PY] Sphinx docs about tvm.ir
Tianqi Chen [Wed, 12 Feb 2020 04:01:36 +0000 (20:01 -0800)]
[REFACTOR][PY][API-CHANGE] establish tvm.ir, migrate corresponding files (#4862)
* [REFACTOR][PY][API-CHANGE] establish tvm.ir, migrate corresponding relay files.
This PR establishes tvm.ir and migrates the corresponding relay
files into the new folder.
API Change:
- relay.Module -> tvm.IRModule
* Update with ADT
* Migrate transform
* address comments
* Migrate module
* Migrate json_compact
* Migrate attrs
* Move LoweredFunc to stmt temporarily
* temp migrate container
* Finish migrate container
hlu1 [Tue, 11 Feb 2020 22:46:50 +0000 (14:46 -0800)]
[Topi] Missing header (#4865)
kice [Tue, 11 Feb 2020 21:44:48 +0000 (16:44 -0500)]
Fix onnx import bugs (#4750)
* Fix onnx import bugs
Fix onnx attributes of string type incorrect handling
Merge symmetric padding of Conv to symmetric form
* Only merge symmetric padding for conv2d
Zhi [Tue, 11 Feb 2020 20:04:04 +0000 (12:04 -0800)]
[Refactor] move vm.py under runtime and adt to runtime.container.py (#4855)
masahi [Tue, 11 Feb 2020 17:56:14 +0000 (02:56 +0900)]
add resize op converter (#4838)
hlu1 [Tue, 11 Feb 2020 16:52:25 +0000 (08:52 -0800)]
[TVM] const auto p -> const auto &p (#4861)
hlu1 [Tue, 11 Feb 2020 16:45:17 +0000 (08:45 -0800)]
[LLVM] Explicit llvm::StringRef to std::string conversion (#4859)
Lianmin Zheng [Tue, 11 Feb 2020 09:58:36 +0000 (01:58 -0800)]
[RUNTIME] Fix memory leakage of TVMByteArray (#4856)
Animesh Jain [Tue, 11 Feb 2020 02:56:37 +0000 (18:56 -0800)]
[TFLite] Using real image for QNN testing. (#4816)
* [TFLite] Using real image for QNN testing.
* Setting seed for SSD mobilenet for fixed input.
* Support quantized Pad op.
* Remove unnnecessary line.
* Address Ina's comments.