platform/upstream/tvm.git
Alexander Pivovarov [Sat, 23 Nov 2019 05:59:15 +0000 (21:59 -0800)]
[Relay][Legalize] Legalize conv2d_transpose for NHWC (#4399)

Tianqi Chen [Sat, 23 Nov 2019 04:32:20 +0000 (20:32 -0800)]
[RUNTIME] Move module export to the function level. (#4405)

Zhi [Fri, 22 Nov 2019 23:31:50 +0000 (15:31 -0800)]
[TVM][RUNTIME] A minimum example to generate external library wrappers for DSOModule (#4280)

Yizhi Liu [Fri, 22 Nov 2019 22:37:25 +0000 (14:37 -0800)]
[LICENSE] add 3rdparty licenses (#4402)

* [LICENSE] add 3rdparty licenses

* rename license files to .txt

tristan-arm [Fri, 22 Nov 2019 21:34:40 +0000 (21:34 +0000)]
Added tflite frontend support for quantized mean. (#4339)

Tianqi Chen [Fri, 22 Nov 2019 18:28:43 +0000 (10:28 -0800)]
[DOCS] Mention incubating in readme (#4401)

Neo Chien [Fri, 22 Nov 2019 05:52:24 +0000 (13:52 +0800)]
[Golang][Doc] improve the samples and doc (#4385)

* [Golang][Doc] improve the samples and doc

* [Golang][Doc] add asf header

* [Golang][Doc] Improve the end to end example

* [Golang][Doc] Improve the end to end example

tripley [Fri, 22 Nov 2019 03:12:05 +0000 (19:12 -0800)]
update_document_after_repository_renamed (#4398)

Cody Yu [Fri, 22 Nov 2019 00:45:47 +0000 (16:45 -0800)]
Update Jenkinsfile for external runtime (#4396)

Haichen Shen [Fri, 22 Nov 2019 00:01:01 +0000 (16:01 -0800)]
[Relay][VM] Clean up the VM and VM profiler code (#4391)

* [VM] add a few more API to vm

* [VM][Fix] fix vm convert args

* [VM] a few fixes

* rename fields

* update

* update vm profiler

* x

* add doc

* lint

* fix test

* address comments

Yizhi Liu [Thu, 21 Nov 2019 23:39:40 +0000 (15:39 -0800)]
[TOPI] Fix flaky testcase for floor div (#4382)

* [TOPI] Fix flaky testcase for floor div

* avoid check at 0.0

Haichen Shen [Thu, 21 Nov 2019 20:46:41 +0000 (12:46 -0800)]
Add Logan to reviewer (#4390)

Huang, Guangtai [Thu, 21 Nov 2019 20:46:00 +0000 (04:46 +0800)]
Update compile_engine.py (#4393)

Siyuan Li [Thu, 21 Nov 2019 18:53:37 +0000 (02:53 +0800)]
[Relay][Frontend][TF] Fix slice when begin or size is not Const (#4372)

* fix slice bug when input is param

* use _infer_value rather than _infer_value_simulated

Thomas Viehmann [Thu, 21 Nov 2019 14:40:29 +0000 (15:40 +0100)]
add GPU checking before compilation for rocm (#4394)

Previously, we would rely on later phases to error out
(often for using too much shared memory). This enables,
for ROCm, the IR checks that already exist for CUDA and
OpenCL.

Animesh Jain [Thu, 21 Nov 2019 05:22:25 +0000 (21:22 -0800)]
[QNN] Lowering for Depthwise Convolution. (#4351)

Zhi [Thu, 21 Nov 2019 00:50:01 +0000 (16:50 -0800)]
[fix][pass] Save the function when it is used as a call arg (#4389)

Tianqi Chen [Wed, 20 Nov 2019 23:43:54 +0000 (15:43 -0800)]
[CI] Add more info, per exec ws isolation (#4388)

Zhao Wu [Wed, 20 Nov 2019 20:43:20 +0000 (04:43 +0800)]
[ThreadPool] Solve thread transitions issue (#4344)

* [ThreadPool] Solve thread transitions issue

* Use pthread_atfork to avoid the master thread's CPU affinity being inherited by the child (see the sketch after these notes).

* Code Format

* comment of exclude_worker0_

* set full cpu affinity

* Redundant blank line

* CPPLint

* CPPLint namespace

* CPPLint

* Fix the wrong logic for binding the master thread.
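
A minimal Python analogy of the pthread_atfork pattern described above (TVM's actual fix lives in the C++ thread pool; the function name here is illustrative): reset the child's CPU affinity after fork so it does not inherit the parent's pinning.

    import os

    def _reset_affinity_in_child():
        # Give the forked child the full CPU mask instead of the parent's pinning.
        # os.sched_setaffinity is Linux-only, matching the scenario described above.
        os.sched_setaffinity(0, range(os.cpu_count() or 1))

    # os.register_at_fork (Python 3.7+) is the Python counterpart of pthread_atfork.
    if hasattr(os, "register_at_fork"):
        os.register_at_fork(after_in_child=_reset_affinity_in_child)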

Alexander Pivovarov [Wed, 20 Nov 2019 17:36:57 +0000 (09:36 -0800)]
Compare all outputs in TFLite test_forward_ssd_mobilenet_v1 (#4373)

Yizhi Liu [Wed, 20 Nov 2019 17:31:34 +0000 (09:31 -0800)]
[team] add Yizhi's pgp key (#4380)

masahi [Wed, 20 Nov 2019 17:09:29 +0000 (02:09 +0900)]
fix build with llvm trunk (#4386)

Liang ZOU [Wed, 20 Nov 2019 13:05:01 +0000 (21:05 +0800)]
[doc] fix typo, codege to codegen (#4383)

Tianqi Chen [Wed, 20 Nov 2019 06:04:42 +0000 (22:04 -0800)]
[CI] Avoid content-length request in test data download (#4375)

Yizhi Liu [Tue, 19 Nov 2019 23:07:29 +0000 (15:07 -0800)]
[nvcc] enable multiple arch in one fatbin (#4377)

Wuwei Lin [Tue, 19 Nov 2019 22:54:57 +0000 (17:54 -0500)]
[Relay][Quantize] Integrate data-aware calibration into quantization (#4295)

* [Relay][Quantize] Integrate data-aware calibration into quantization

* Update _calibrate.py

* trigger ci

* Address comments

* address comments

Haichen Shen [Tue, 19 Nov 2019 21:56:51 +0000 (13:56 -0800)]
[PERF] Parallelize reduction for CPU (#4158)

* [PERF] parallel reduction in cpu

* fix

* x

* update

* lint

* fix

Yizhi Liu [Tue, 19 Nov 2019 21:51:34 +0000 (13:51 -0800)]
[tutorial][benchmark] nnvm -> relay (#4368)

* [tutorial] nnvm -> relay

* use relay workload

* delete mobilenetv2 option

Alexander Pivovarov [Tue, 19 Nov 2019 17:15:08 +0000 (09:15 -0800)]
Fix TFLite RESHAPE assert (#4320)

Animesh Jain [Tue, 19 Nov 2019 04:18:58 +0000 (20:18 -0800)]
[Relay tests] AlterOpLayout - Temporary attr update (#4357)

miheer vaidya [Tue, 19 Nov 2019 04:03:53 +0000 (21:03 -0700)]
add rule for clean (#4364)

* add rule for clean

* Update clean rule

It seems the lib/ directory is not created by the makefile,
so don't delete the directory, just its contents.

Yizhi Liu [Mon, 18 Nov 2019 23:10:10 +0000 (15:10 -0800)]
reminding message for TVM_REGISTER_NODE_TYPE (#4365)

Cody Hao Yu [Mon, 18 Nov 2019 19:24:39 +0000 (11:24 -0800)]
fix Android and OpenCL docker install (#4363)

Tianqi Chen [Mon, 18 Nov 2019 18:22:25 +0000 (10:22 -0800)]
[SOURCE] Add ASF header to __init__.py files (#4359)

Yao Wang [Mon, 18 Nov 2019 03:54:34 +0000 (19:54 -0800)]
[Frontend]Add TensorFlow FloorMod (#4308)

* Add tf FloorMod

* Add floor_div/mod into topi and relay

* Add to rst

* Fix test

optima2005 [Mon, 18 Nov 2019 01:24:44 +0000 (09:24 +0800)]
[Relay][Frontend][Tensorflow]Add conv2d_transpose (#4300)

* [Relay][Frontend][Tensorflow]Add conv2d_transpose

* add transformation from NHWC to NCHW to be compatible with the TVM conv2d_transpose implementation

* remove the 'dilations' parameter to stay compatible with TF 1.3

miheer vaidya [Mon, 18 Nov 2019 00:39:36 +0000 (17:39 -0700)]
Send list as argument to schedule_conv2d (#4358)

When getting the cuda schedule, passing a single tensor seems to work, but after changing the target to "llvm" it causes an assert.
Sending a list, on the other hand, makes both the cuda and llvm targets happy.
See https://discuss.tvm.ai/t/solved-simple-example-error-attributeerror-tensorslice-object-has-no-attribute-op/2245/3
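
A minimal sketch of the pattern described above, assuming the 2019-era tvm/topi Python API (placeholder shapes and conv parameters are illustrative):

    import tvm
    import topi

    data = tvm.placeholder((1, 3, 224, 224), name="data")
    kernel = tvm.placeholder((64, 3, 7, 7), name="kernel")

    with tvm.target.create("llvm"):
        conv = topi.nn.conv2d(data, kernel, strides=2, padding=3, dilation=1)
        # Passing [conv] (a list) instead of the bare tensor works for both
        # the cuda and llvm schedules.
        sched = topi.generic.schedule_conv2d_nchw([conv])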

Philip Hyunsu Cho [Sat, 16 Nov 2019 16:40:38 +0000 (08:40 -0800)]
Fix docstring in topi.nn.fifo_buffer (#4349)

Ramana Radhakrishnan [Sat, 16 Nov 2019 16:39:19 +0000 (16:39 +0000)]
Retain qnn input kernel scales (#4292)

* Add qnn conv2d attributes for input_tensor_scale and
kernel_tensor_scale.

The lowering in the tflite frontend loses the input_tensor_scale
and the kernel_tensor_scale by multiplying them together and folding
the product into the Requantize operation. This means that any graph
partitioning passes, or other passes that need to access this information,
no longer have it available in the qnn dialect. (A sketch of the resulting
qnn.conv2d call follows these notes.)

regards
Ramana

* Store input tensor scale and Weight tensor scale for Dense as well

As for conv2d, the tflite frontend drops the input tensor
scale and the weight tensor scale from the relay op. Store
them as separate fields there.

* Fix unintentional tab

* Rename input_tensor_scale to input_scale and kernel_tensor_scale
to kernel_scale for conv2d.

* input_tensor_scale -> input_scale weight_tensor_scale->weight_scale

* Rework dense testcase

And use input_scale and kernel_scale

* Be consistent in use of input_scale and kernel_scale values

* Fixup qnn conv2d tests for input_scale and kernel_scale

* Make pydoc identical between conv2d and dense for weight_tensor

* Fix up conv2d parameters to be in the same order between C++ and python

* Fix ordering of parameters for dense.

* Add input_scale and output_scale to try and satisfy ci gods

* Delete input_scale and kernel_scale.

nn.conv2d does not contain input_scale and kernel_scale. We need
to delete them when lowering to nn.conv2d.

* Add input_scale and kernel_scale for qnn.conv2d
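
A hedged sketch of a qnn.conv2d call after this change. Only input_scale and kernel_scale come from the change described above; the remaining argument names, values, and ordering are assumptions for illustration.

    from tvm import relay

    data = relay.var("data", shape=(1, 3, 56, 56), dtype="uint8")
    weight = relay.var("weight", shape=(16, 3, 3, 3), dtype="uint8")
    conv = relay.qnn.op.conv2d(
        data, weight,
        input_zero_point=0, kernel_zero_point=0,   # illustrative values
        input_scale=0.5, kernel_scale=0.25,        # scales retained by this change
        kernel_size=(3, 3), channels=16)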

Animesh Jain [Sat, 16 Nov 2019 16:38:10 +0000 (08:38 -0800)]
[Debugger] Sorting op-time breakdown for quicker analysis. (#4352)

Peter Yeh [Sat, 16 Nov 2019 06:39:44 +0000 (22:39 -0800)]
proper device query through rocm api (#4305)

Cody Hao Yu [Sat, 16 Nov 2019 06:27:49 +0000 (22:27 -0800)]
fix install script (#4350)

黎明灰烬 [Sat, 16 Nov 2019 00:53:01 +0000 (08:53 +0800)]
AutoTVM: selecting tuning templates when extracting task (#4338)

* AutoTVM: selecting tuning templates when extracting task

Make the procedure of trying new templates easier.

Test: tests/python/relay/test_autotvm_task_extraction.py

* Use dict to match key for topi ops

* fix lint issue

* be more pythonic :)

Thomas Viehmann [Fri, 15 Nov 2019 23:14:56 +0000 (00:14 +0100)]
Add workgroup size attribute to AMDGPU functions in codegen (#4342)

When we did not set the workgroup size, LLVM would use too many registers
for kernel launches with many threads. This resulted in "invalid ISA"
errors. Here we set the maximum workgroup size to the maximum threads
per block from the device API.

Of course, one might look into allowing configurations with fewer
threads at runtime to use more registers.

Kimish Patel [Fri, 15 Nov 2019 22:37:37 +0000 (14:37 -0800)]
[FIX] Fix for a specific case when loop partitioning with indivisible (#4243)

factors and the resulting nested loop is broken.
This is because we create zero-extent loops which are
fixed afterwards. However, the unroll pass breaks on the
zero-extent loop.
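
A minimal sketch of the kind of schedule that exercises this path, assuming the 2019-era TVM Python API (shapes and factors are illustrative):

    import tvm

    n = 10
    A = tvm.placeholder((n,), name="A")
    B = tvm.compute((n,), lambda i: A[i] + 1, name="B")

    s = tvm.create_schedule(B.op)
    xo, xi = s[B].split(B.op.axis[0], factor=3)   # 10 is not divisible by 3
    s[B].unroll(xi)                               # unroll hits the partial tail

    with tvm.build_config(partition_const_loop=True):
        print(tvm.lower(s, [A, B], simple_mode=True))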

Logan Weber [Fri, 15 Nov 2019 22:12:52 +0000 (14:12 -0800)]
[Relay][VM][Interpreter] Enable first-class constructors in VM and interpreter via eta expansion (#4218)

* Fix constructor pretty printing

* Make Module::HasDef name consistent with API

* Add VM constructor compilation via eta expansion (sketched below, after these notes)

* Lint

* Fix CI

* Fix failing test

* Address comment

* Retrigger CI

* Retrigger CI
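
For context, eta expansion wraps a constructor in an ordinary function that simply forwards its arguments, so the VM and interpreter only ever deal with plain function values. A plain-Python analogy of the idea (not Relay syntax, purely illustrative):

    from collections import namedtuple

    Cons = namedtuple("Cons", ["head", "tail"])

    # Using the constructor directly as a first-class value ...
    make = Cons
    # ... behaves the same as its eta-expanded form:
    make_expanded = lambda head, tail: Cons(head, tail)

    assert make(1, None) == make_expanded(1, None)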

Tianqi Chen [Fri, 15 Nov 2019 22:09:47 +0000 (14:09 -0800)]
[COMMUNITY] Add DISCLAIMER, KEYS for ASF release (#4345)

* [COMMUNITY] Add DISCLAIMER, KEYS for ASF release

* Add file name spec

T.J. Mercier [Fri, 15 Nov 2019 19:15:12 +0000 (11:15 -0800)]
Add check to ensure input file was successfully opened in NNVM deploy code demo (#4315)

Alex Gladkov [Fri, 15 Nov 2019 19:04:00 +0000 (11:04 -0800)]
Bump up CUDA log version in tophub.py (#4347)

Zhao Wu [Fri, 15 Nov 2019 18:05:26 +0000 (02:05 +0800)]
[CodeGen] Add build config option disable_assert to control whether to generate assert (#4340)

ziyu-guo [Fri, 15 Nov 2019 17:59:59 +0000 (09:59 -0800)]
fix inconsistent tag name (#4134)

Liangfu Chen [Fri, 15 Nov 2019 17:59:04 +0000 (01:59 +0800)]
[VTA] Bug fix for padded load with large inputs (#4293)

* bug fix for padded load with large inputs

* Update TensorLoad.scala

* Update test_vta_insn.py

Jian Weng [Fri, 15 Nov 2019 17:13:04 +0000 (09:13 -0800)]
imp module is deprecated (#4275)

Neo Chien [Fri, 15 Nov 2019 16:53:13 +0000 (00:53 +0800)]
[Relay][Frontend][ONNX] operator support: DepthToSpace, SpaceToDepth (#4271)

Wei Chen [Fri, 15 Nov 2019 16:42:58 +0000 (08:42 -0800)]
[Test][Relay][Pass] Add test case for lambda lift (#4317)

Peter Yeh [Fri, 15 Nov 2019 04:43:47 +0000 (20:43 -0800)]
[RUNTIME] Add device query for AMD GcnArch (#4341)

* add gcnArch query

* kGcnArch query for cuda is a no-op

Jon Soifer [Fri, 15 Nov 2019 03:52:40 +0000 (19:52 -0800)]
[Relay][Frontend][TF] Fix transpose when axes is not a param (#4327)

* [Relay][Frontend][TF] Use _infer_value_simulated when the axes argument to Transpose is not a const

* uncomment tests

* dummy change to retrigger ci

Haichen Shen [Fri, 15 Nov 2019 03:45:57 +0000 (19:45 -0800)]
[Contrib] Add MKL DNN option (#4323)

* [Contrib] Add MKL DNN

* update

* update

Yizhi Liu [Fri, 15 Nov 2019 03:45:25 +0000 (19:45 -0800)]
Deprecate NNVM warning msg (#4333)

Zhao Wu [Fri, 15 Nov 2019 03:43:38 +0000 (11:43 +0800)]
Solve custom model of prelu (#4326)

Philip Hyunsu Cho [Fri, 15 Nov 2019 03:42:53 +0000 (19:42 -0800)]
Add topi.nn.fifo_buffer to TVM doc (#4343)

Ina Dobreva [Fri, 15 Nov 2019 03:41:36 +0000 (03:41 +0000)]
Add support for quant. mul operator in tflite frontend (#4283)

A test for qnn_mul has to be added when the qnn elemwise tests (#4282) get merged.

Wei Chen [Fri, 15 Nov 2019 01:52:01 +0000 (17:52 -0800)]
[Relay][Pass] Add pass to remove unused functions in relay module (#4334)

* [Relay][Pass] Add pass to remove unused functions in relay module

* Add tests

* Fix lint

* Fix visit order

* Add pass argument

* Fix

Peter Yeh [Fri, 15 Nov 2019 00:41:12 +0000 (16:41 -0800)]
Enable hipModuleGetGlobal() (#4321)

Jon Soifer [Thu, 14 Nov 2019 23:13:11 +0000 (15:13 -0800)]
[Build][Windows] Fix Windows build by including cctype (#4319)

* Fix build

* dummy change to retrigger CI

* dummy change to retrigger ci

* dummy change to retrigger ci

Tianqi Chen [Thu, 14 Nov 2019 17:26:01 +0000 (09:26 -0800)]
[CI] Set workspace to be per executor (#4336)

Yizhi Liu [Thu, 14 Nov 2019 17:17:42 +0000 (09:17 -0800)]
[Codegen] remove fp16 function override for cuda  (#4331)

* add volatile override back

* [codegen] remove fp16 function override for cuda

Tianqi Chen [Thu, 14 Nov 2019 17:15:45 +0000 (09:15 -0800)]
change ci image version (#4313)

Zhi [Thu, 14 Nov 2019 17:08:28 +0000 (09:08 -0800)]
[doc][fix] fix sphinx parsing for pass infra tutorial (#4337)

Animesh Jain [Thu, 14 Nov 2019 17:06:01 +0000 (09:06 -0800)]
[QNN] Use Int16 upcast in Fallback Conv2D. Fix test names. (#4329)

Animesh Jain [Thu, 14 Nov 2019 05:07:52 +0000 (21:07 -0800)]
[QNN] Quantize - Fixing the sequence of lowering. (#4316)

Tianqi Chen [Thu, 14 Nov 2019 04:49:34 +0000 (20:49 -0800)]
[CI][DOCKER] Add ONNX runtime dep (#4314)

* [DOCKER] Add ONNX runtime dep

* Improve ci script

jason-song-dev [Thu, 14 Nov 2019 03:42:22 +0000 (12:42 +0900)]
fix error when memory_id is VTA_MEM_ID_OUT (#4330)

Animesh Jain [Wed, 13 Nov 2019 19:18:49 +0000 (11:18 -0800)]
[QNN][Legalize] Specialize for Platforms without any fast Int8 arithmetic units. (#4307)

Zhao Wu [Wed, 13 Nov 2019 06:11:38 +0000 (14:11 +0800)]
[TOPI][OP] Support Faster-RCNN Proposal OP on CPU (#4297)

* Support Proposal operator on CPU.

* PyLint space issue

* PyLint space issue

* Pylint singleton-comparison issue

Eric Platon [Tue, 12 Nov 2019 23:52:24 +0000 (00:52 +0100)]
Fix the TF tutorial to run against TF2.0 and TF1.x (#4104)

* WIP Run the TF tutorial on TF2

* Remove debugger statement.

* Complete the support for TF2.0's `resize`.

TF2.0 adds a `half_pixel_centers` attribute to the `resize` function in
the image API. This commit completes the hooks in Relay's TF frontend.

As of this commit there is no new test yet. Also, this commit
addresses solely the `resize` change. Other commits address other
changes in TF2.0.

* Support TF2.0 in the tutorial by using the compat API.

This looks cleaner than trying to detect the TF version.

* Use the TF compat API, so as to support TF2.0.

This is a direct change, relying on the compat API provided by the TF
team.

This code will last as long as the compat API exists, so "proper"
support for TF1.x and 2.x will require more work at some point in the
future. (A minimal sketch of the compat alias follows these notes.)

* Partial support for EXPLICIT padding introduced in TF2.0.

Explicit padding is a special case in TF2.0 (see reference linked
below). Some models are serialized with that mode, and break TF support
in TVM.

Support is *partial* as EXPLICIT falls back to setting padding on the
Relay op, which only supports 2 values. At some point, padding may need
to be extended to support 4 values, but that is out of scope of this
support commit.

Reference on EXPLICIT padding: https://github.com/tensorflow/tensorflow/commit/ec81825aaf7e848d9f8ddffdf1e0d20aebe9172c#diff-1d1c0bb0a880f85b6164f71dbb2f446e

* Guard on checking for optional TF2.0 attribute.

* Do not expect Relay to implement TF-specific attributes.

The `half_pixel_centers` attribute is a new feature in TF2.0. Earlier
commits of mine mistakenly introduced it in the Relay API. This is
probably not what Relay is expected to support, and the semantics of
`half_pixel_centers` is unclear (to me, at least) at this point.

* Remove unclear comment.

CR https://github.com/dmlc/tvm/pull/4104#discussion_r338705742

Addresses #4104

* Changes after review.

Complying without understanding the rationale for now.

* Fix the arguments set mistakenly.

An argument was ignored for the wrong operation.
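
A minimal sketch of the compat-API approach described in the notes above; the alias name mirrors the tutorial, while the fallback logic shown here is illustrative rather than the tutorial's exact code:

    import tensorflow as tf

    # Prefer the v1 compatibility namespace when it exists (TF 2.x and late 1.x);
    # fall back to the top-level module on older TF 1.x installs.
    tf_compat_v1 = getattr(tf.compat, "v1", tf) if hasattr(tf, "compat") else tf

    # v1-style graph-mode names are then reached through the alias, for example:
    graph_def = tf_compat_v1.GraphDef()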

Wei Chen [Tue, 12 Nov 2019 20:36:28 +0000 (12:36 -0800)]
[Relay][Op][TF] Complete tensor array unstack with all ranks support (#4309)

Ina Dobreva [Tue, 12 Nov 2019 20:23:04 +0000 (20:23 +0000)]
Add test for the qnn_add operator (#4282)

* Add test for the qnn_add operator

The tests use the fake-quant approach, so up until the TF session the tensors remain in float32.
The test data has to be passed in uint8 because of how the tflite/tvm comparison works.
An absolute tolerance of up to 1 is allowed for the qnn results. For now, input_stats are hardcoded,
assuming the tests for the other qnn ops will pass input data in the same range.
(A sketch of the tolerance check follows below.)

* Separate qnn uint8 test function from the fp32 elemwise tests

Isolate qnn uint8 elemwise tests
Remove blank lines
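
A small illustrative sketch of the comparison style described above (the output values are placeholders, not real model outputs): tflite and tvm uint8 results are compared with an absolute tolerance of 1.

    import numpy as np

    tflite_out = np.array([10, 128, 255], dtype=np.uint8)   # placeholder outputs
    tvm_out = np.array([11, 128, 254], dtype=np.uint8)      # placeholder outputs

    # cast to int32 so the difference cannot wrap around in uint8
    np.testing.assert_allclose(tflite_out.astype("int32"),
                               tvm_out.astype("int32"), atol=1)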

Haichen Shen [Tue, 12 Nov 2019 19:54:56 +0000 (11:54 -0800)]
add (#4311)

Xingyu Zhou [Tue, 12 Nov 2019 18:30:04 +0000 (10:30 -0800)]
[Relay][Frontend][Keras] batch_norm op params not handling well (#4310)

* Relay Keras frontend batch_norm op params not handled well

* add unit test for Relay Frontend Keras batch_norm

jmorrill [Tue, 12 Nov 2019 16:50:06 +0000 (08:50 -0800)]
Fix incorrect call to Unicode Win32 InetPton (#4306)

* Fix incorrect call to Unicode Win32

* Removed inet_pton call. Win32 already has it.

Neo Chien [Tue, 12 Nov 2019 04:34:50 +0000 (12:34 +0800)]
[Relay][Frontend][Tensorflow] Fix type assignment for operator 'tf.range' (#4294)

Yao Wang [Mon, 11 Nov 2019 23:46:29 +0000 (15:46 -0800)]
Add More Shape Functions (#4179)

* Add shape functions

* Fix get_const_tuple

* Fix cpplint

* Fix pylint

* Fix pylint

* rebase and fix

* Check Any for infer type

* Fix expand_dim shape func for zero rank input

* Fix pooling infer type

* Address comment

* Register layout transform attr

Wei Chen [Mon, 11 Nov 2019 19:22:14 +0000 (11:22 -0800)]
[TF][Relay][Op] Pass module when infer shape (#4287)

* [TF][Relay][Op] Pass module when infer shape

* Fix lint

* Improve style

* Add test

Tianqi Chen [Mon, 11 Nov 2019 18:09:29 +0000 (10:09 -0800)]
[RUNTIME][REFACTOR] Use object protocol to support runtime::Module (#4289)

Previously runtime::Module was supported using shared_ptr.
This PR refactors the codebase to use the Object protocol.

It will open the door to easier interoperation between
Object containers and Module in the future.

Yong Wu [Mon, 11 Nov 2019 17:24:52 +0000 (09:24 -0800)]
[TF][TEST] add test_forward_reduce_any back (#4301)

the test case was removed in #4181 for some reason
@tqchen @soiferj @zhiics

Yao Wang [Mon, 11 Nov 2019 16:23:23 +0000 (08:23 -0800)]
Fix tf reshape (#4285)

* Fix tf reshape

* Fix test

* Fix pylint

* Fix pylint

Zhi [Mon, 11 Nov 2019 05:57:43 +0000 (21:57 -0800)]
[tutorial] Relay pass infra tutorial (#4083)

* Add pass manager tutorial

* fix some examples

* retrigger ci

* Update tutorials/dev/relay_pass_infra.py

Co-Authored-By: 雾雨魔理沙 <lolisa@marisa.moe>
* Add ToANormalForm link

Animesh Jain [Mon, 11 Nov 2019 03:09:16 +0000 (19:09 -0800)]
[TOPI][AlterOpLayout][ARM] Enabling NHWC to NCHW layout transformation. (#4249)

Zhao Wu [Sun, 10 Nov 2019 22:56:44 +0000 (06:56 +0800)]
[RUNTIME] Support C++ RPC (#4281)

Zhao Wu [Sun, 10 Nov 2019 19:45:10 +0000 (03:45 +0800)]
[TFLite] Support PRelu (#4298)

Wei Chen [Sun, 10 Nov 2019 18:31:20 +0000 (10:31 -0800)]
[Test][TF][Relay] Fix argument preparation for vm test mode (#4296)

Yizhi Liu [Sun, 10 Nov 2019 06:16:34 +0000 (22:16 -0800)]
[Codegen][cuda-fp16] fallback to fp32 simulation when cuda arch < sm53 (#4268)

Yizhi Liu [Sun, 10 Nov 2019 02:20:33 +0000 (18:20 -0800)]
Rename ml.dmlc.tvm to org.apache.tvm (#4290)

Minmin Sun (孙敏敏) [Sat, 9 Nov 2019 21:01:36 +0000 (05:01 +0800)]
Auto TensorCore CodeGen (#4234)

* Add Auto TensorCore Unit Test

* Rebase to tvm master branch & Add auto tensor core

* Code Refine

* Add tensor core switch by pragma

* Add pragma in tensor core example code

* Get real tile size to replace hard coded 16

* support more than 2 dimensions (e.g. batchmatmul) for buffer bind scope

* support batch matmul

* Move cuda env check to tensor_core.cc

* Coderefine for tensor_core.cc

* Refine comments

* Some refinements of code and comment

* Update TensorCore UT to pass the CPU test

* remove redundant code

* matmul's storage align for different layout

* Add support for different positions of type cast

* Add formal tutorial for auto tensorcore codegen

* move tensorcore check up to tutorial code

* code and doc refine

* comment out tune_and_evaluate in tutorial

* fix cpplint error

peike [Fri, 8 Nov 2019 08:23:15 +0000 (19:23 +1100)]
Update tvm_runtime.h (#4278)

fix the problem that android_rpc compilation failed

Cody Hao Yu [Fri, 8 Nov 2019 05:44:35 +0000 (21:44 -0800)]
[TOPI][CUDA] Fix Winograd Kernel Size Support (#4276)

* fix_winograd_cuda_kernel_size

* add unit test

Jon Soifer [Thu, 7 Nov 2019 22:10:30 +0000 (14:10 -0800)]
[Relay][Frontend][ONNX] Add support for broadcasting to Where and MatMul (#4267)

Josh Fromm [Thu, 7 Nov 2019 00:07:09 +0000 (16:07 -0800)]
[AutoTVM] Add batch_matmul to tunable operations (#4242)

* Batch matmul tuning running but with errors.

* Default x86 schedule as good as before.

* Code Cleanup

* Remove unused argument.

* improved template documentation.

* Silly lint fix

* Removed leftover comment.

* Moved cfg declaration to schedule for batch_matmul

* Moved x86 dense cfg declaration to schedule.

* lint fix

* Removed duplicate cfg declaration in dense.

* Reverted changes to dense.

Cody Hao Yu [Wed, 6 Nov 2019 23:02:54 +0000 (15:02 -0800)]
[TOPI] Fix bug in Winograd on CUDA (#4260)

* fix winograd

* move get padding after kernel transform