Yizhi Liu [Tue, 19 Nov 2019 21:51:34 +0000 (13:51 -0800)]
[tutorial][benchmark] nnvm -> relay (#4368)
* [tutorial] nnvm -> relay
* use relay workload
* delete mobilenetv2 option
Alexander Pivovarov [Tue, 19 Nov 2019 17:15:08 +0000 (09:15 -0800)]
Fix TFLite RESHAPE assert (#4320)
Animesh Jain [Tue, 19 Nov 2019 04:18:58 +0000 (20:18 -0800)]
[Relay tests] AlterOpLayout - Temporary attr update (#4357)
miheer vaidya [Tue, 19 Nov 2019 04:03:53 +0000 (21:03 -0700)]
add rule for clean (#4364)
* add rule for clean
* Update clean rule
Seems like the lib/ directory is not made by the makefile,
so don't delete the directory, just its contents.
Yizhi Liu [Mon, 18 Nov 2019 23:10:10 +0000 (15:10 -0800)]
reminding message for TVM_REGISTER_NODE_TYPE (#4365)
Cody Hao Yu [Mon, 18 Nov 2019 19:24:39 +0000 (11:24 -0800)]
fix Android and OpenCL docker install (#4363)
Tianqi Chen [Mon, 18 Nov 2019 18:22:25 +0000 (10:22 -0800)]
[SOURCE] Add ASF header to __init__.py files (#4359)
Yao Wang [Mon, 18 Nov 2019 03:54:34 +0000 (19:54 -0800)]
[Frontend]Add TensorFlow FloorMod (#4308)
* Add tf FloorMod
* Add floor_div/mod into topi and relay
* Add to rst
* Fix test
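The floor_div/floor_mod semantics added here differ from C-style truncated division: the result of floor_mod takes the sign of the divisor. A minimal pure-Python illustration of the identity involved (not TVM code):

```python
def floor_div(x, y):
    # Floor division rounds toward negative infinity.
    return x // y  # Python's // already floors

def floor_mod(x, y):
    # Satisfies x == floor_div(x, y) * y + floor_mod(x, y);
    # the result takes the sign of the divisor y.
    return x - (x // y) * y

print(floor_mod(-7, 3))   # 2 (C-style truncated mod would give -1)
print(floor_mod(7, -3))   # -2
```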
optima2005 [Mon, 18 Nov 2019 01:24:44 +0000 (09:24 +0800)]
[Relay][Frontend][Tensorflow]Add conv2d_transpose (#4300)
* [Relay][Frontend][Tensorflow]Add conv2d_transpose
* add transformation from NHWC to NCHW to be compatible with the TVM conv2d_transpose implementation
* remove the 'dilations' parameter to be compatible with TF 1.3
miheer vaidya [Mon, 18 Nov 2019 00:39:36 +0000 (17:39 -0700)]
Send list as argument to schedule_conv2d (#4358)
When getting a CUDA schedule, passing a single tensor seems to work, but changing the target to "llvm" causes an assert.
Sending a list instead makes both the cuda and llvm targets happy.
See https://discuss.tvm.ai/t/solved-simple-example-error-attributeerror-tensorslice-object-has-no-attribute-op/2245/3
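The fix is simply to wrap the output tensor in a list before calling the schedule function. A hypothetical sketch of the pattern (illustrative names, not TVM's actual API):

```python
# Hypothetical sketch (not TVM's actual API): generic schedule entry
# points iterate over a *list* of output tensors, so a bare tensor
# must be wrapped before the call.
def schedule_conv2d(outs):
    assert isinstance(outs, list), "schedule expects a list of outputs"
    return ["scheduled:" + name for name in outs]

out = "conv2d_output"
# schedule_conv2d(out)      # would raise: a bare tensor is not a list
s = schedule_conv2d([out])  # wrapping in a list works for all targets
print(s)
```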
Philip Hyunsu Cho [Sat, 16 Nov 2019 16:40:38 +0000 (08:40 -0800)]
Fix docstring in topi.nn.fifo_buffer (#4349)
Ramana Radhakrishnan [Sat, 16 Nov 2019 16:39:19 +0000 (16:39 +0000)]
Retain qnn input kernel scales (#4292)
* Add qnn conv2d attributes for input_tensor_scale and
kernel_tensor_scale.
The lowering in the tflite frontend loses the input_tensor_scale
and the kernel_tensor_scale by multiplying them together and putting
the product into the Requantize operation. This means that any graph
partitioning passes or other passes that need this information no
longer have it available in the qnn dialect.
* Store input tensor scale and Weight tensor scale for Dense as well
As for conv2d, the tflite frontend drops the input tensor
scale and the weight tensor scale from the relay op. Store
them as separate fields there.
* Fix unintentional tab
* Rename input_tensor_scale to input_scale and kernel_tensor_scale
to kernel_scale for conv2d.
* input_tensor_scale -> input_scale weight_tensor_scale->weight_scale
* Rework dense testcase
And use input_scale and kernel_scale
* Be consistent in use of input_scale and kernel_scale values
* Fixup qnn conv2d tests for input_scale and kernel_scale
* Make pydoc identical between conv2d and dense for weight_tensor
* Fix up conv2d parameters to be in the same order between C++ and python
* Fix ordering of parameters for dense.
* Add input_scale and output_scale to try and satisfy ci gods
* Delete input_scale and kernel_scale.
nn.conv2d does not contain input_scale and kernel_scale. We need
to delete them when lowering to nn.conv2d.
* Add input_scale and kernel_scale for qnn.conv2d
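The reason the individual scales were previously unrecoverable is that requantize folds them into a single multiplier. Illustrative arithmetic only (not TVM code):

```python
# Requantize combines the per-tensor scales into one multiplier, so the
# individual input/kernel scales cannot be recovered from the fused
# value; keeping them as separate qnn.conv2d attributes lets later
# passes (e.g. graph partitioning) read them directly.
input_scale, kernel_scale, output_scale = 0.5, 0.25, 0.1

requant_multiplier = input_scale * kernel_scale / output_scale  # approximately 1.25
print(requant_multiplier)
```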
Animesh Jain [Sat, 16 Nov 2019 16:38:10 +0000 (08:38 -0800)]
[Debugger] Sorting op-time breakdown for quicker analysis. (#4352)
Peter Yeh [Sat, 16 Nov 2019 06:39:44 +0000 (22:39 -0800)]
proper device query through rocm api (#4305)
Cody Hao Yu [Sat, 16 Nov 2019 06:27:49 +0000 (22:27 -0800)]
fix install script (#4350)
黎明灰烬 [Sat, 16 Nov 2019 00:53:01 +0000 (08:53 +0800)]
AutoTVM: selecting tuning templates when extracting task (#4338)
* AutoTVM: selecting tuning templates when extracting task
Make the procedure of trying new templates easier.
Test: tests/python/relay/test_autotvm_task_extraction.py
* Use dict to match key for topi ops
* fix lint issue
* be more pythonic :)
Thomas Viehmann [Fri, 15 Nov 2019 23:14:56 +0000 (00:14 +0100)]
Add workgroup size attribute to AMDGPU functions in codegen (#4342)
When we do not set the workgroup size, LLVM uses too many registers
for kernel launches with many threads. This resulted in "invalid ISA"
errors. Here we set the maximum workgroup size to the maximum threads
per block from the device API.
Of course, one might look into allowing configurations with fewer
threads at runtime to use more registers.
Kimish Patel [Fri, 15 Nov 2019 22:37:37 +0000 (14:37 -0800)]
[FIX] Fix a specific case where loop partitioning with indivisible (#4243)
factors breaks the resulting nested loop.
This is due to the fact that we create zero-extent loops which
are fixed afterwards. However, the unroll pass breaks due to the
zero-extent loop.
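A pure-Python sketch of the failure mode (illustrative, not the TVM pass): partitioning an extent n by an indivisible factor f produces a main part and a tail, and when n < f the main part has zero extent, which must be dropped rather than handed to the unroll pass.

```python
# Partitioning extent n with factor f gives a main part of extent
# (n // f) * f and a tail of extent n % f. A zero-extent part is
# dropped here instead of being left for a later pass to choke on.
def partition(n, f):
    main_extent = (n // f) * f
    tail_extent = n % f
    parts = []
    if main_extent > 0:
        parts.append(("main", main_extent))
    if tail_extent > 0:
        parts.append(("tail", tail_extent))
    return parts

print(partition(10, 3))  # [('main', 9), ('tail', 1)]
print(partition(2, 4))   # [('tail', 2)] -- zero-extent main dropped
```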
Logan Weber [Fri, 15 Nov 2019 22:12:52 +0000 (14:12 -0800)]
[Relay][VM][Interpreter] Enable first-class constructors in VM and interpreter via eta expansion (#4218)
* Fix constructor pretty printing
* Make Module::HasDef name consistent with API
* Add VM constructor compilation via eta expansion
* Lint
* Fix CI
* Fix failing test
* Address comment
* Retrigger CI
* Retrigger CI
Tianqi Chen [Fri, 15 Nov 2019 22:09:47 +0000 (14:09 -0800)]
[COMMUNITY] Add DISCLAIMER, KEYS for ASF release (#4345)
* [COMMUNITY] Add DISCLAIMER, KEYS for ASF release
* Add file name spec
T.J. Mercier [Fri, 15 Nov 2019 19:15:12 +0000 (11:15 -0800)]
Add check to ensure input file was successfully opened in NNVM deploy code demo (#4315)
Alex Gladkov [Fri, 15 Nov 2019 19:04:00 +0000 (11:04 -0800)]
Bump up CUDA log version in tophub.py (#4347)
Zhao Wu [Fri, 15 Nov 2019 18:05:26 +0000 (02:05 +0800)]
[CodeGen] Add build config option disable_assert to control whether to generate assert (#4340)
ziyu-guo [Fri, 15 Nov 2019 17:59:59 +0000 (09:59 -0800)]
fix inconsistent tag name (#4134)
Liangfu Chen [Fri, 15 Nov 2019 17:59:04 +0000 (01:59 +0800)]
[VTA] Bug fix for padded load with large inputs (#4293)
* bug fix for padded load with large inputs
* Update TensorLoad.scala
* Update test_vta_insn.py
Jian Weng [Fri, 15 Nov 2019 17:13:04 +0000 (09:13 -0800)]
imp module is deprecated (#4275)
Neo Chien [Fri, 15 Nov 2019 16:53:13 +0000 (00:53 +0800)]
[Relay][Frontend][ONNX] operator support: DepthToSpace, SpaceToDepth (#4271)
Wei Chen [Fri, 15 Nov 2019 16:42:58 +0000 (08:42 -0800)]
[Test][Relay][Pass] Add test case for lambda lift (#4317)
Peter Yeh [Fri, 15 Nov 2019 04:43:47 +0000 (20:43 -0800)]
[RUNTIME] Add device query for AMD GcnArch (#4341)
* add gcnArch query
* kGcnArch query for cuda is a no-op
Jon Soifer [Fri, 15 Nov 2019 03:52:40 +0000 (19:52 -0800)]
[Relay][Frontend][TF] Fix transpose when axes is not a param (#4327)
* [Relay][Frontend][TF] Use _infer_value_simulated when axes is not a const to Transpose
* uncomment tests
* dummy change to retrigger ci
Haichen Shen [Fri, 15 Nov 2019 03:45:57 +0000 (19:45 -0800)]
[Contrib] Add MKL DNN option (#4323)
* [Contrib] Add MKL DNN
* update
* update
Yizhi Liu [Fri, 15 Nov 2019 03:45:25 +0000 (19:45 -0800)]
Deprecate NNVM warning msg (#4333)
Zhao Wu [Fri, 15 Nov 2019 03:43:38 +0000 (11:43 +0800)]
Solve custom model of prelu (#4326)
Philip Hyunsu Cho [Fri, 15 Nov 2019 03:42:53 +0000 (19:42 -0800)]
Add topi.nn.fifo_buffer to TVM doc (#4343)
Ina Dobreva [Fri, 15 Nov 2019 03:41:36 +0000 (03:41 +0000)]
Add support for quant. mul operator in tflite frontend (#4283)
A test for qnn_mul has to be added when the qnn elemwise tests (#4282) get merged.
Wei Chen [Fri, 15 Nov 2019 01:52:01 +0000 (17:52 -0800)]
[Relay][Pass] Add pass to remove unused functions in relay module (#4334)
* [Relay][Pass] Add pass to remove unused functions in relay module
* Add tests
* Fix lint
* Fix visit order
* Add pass argument
* Fix
Peter Yeh [Fri, 15 Nov 2019 00:41:12 +0000 (16:41 -0800)]
Enable hipModuleGetGlobal() (#4321)
Jon Soifer [Thu, 14 Nov 2019 23:13:11 +0000 (15:13 -0800)]
[Build][Windows] Fix Windows build by including cctype (#4319)
* Fix build
* dummy change to retrigger CI
* dummy change to retrigger ci
* dummy change to retrigger ci
Tianqi Chen [Thu, 14 Nov 2019 17:26:01 +0000 (09:26 -0800)]
[CI] Set workspace to be per executor (#4336)
Yizhi Liu [Thu, 14 Nov 2019 17:17:42 +0000 (09:17 -0800)]
[Codegen] remove fp16 function override for cuda (#4331)
* add volatile override back
* [codegen] remove fp16 function override for cuda
Tianqi Chen [Thu, 14 Nov 2019 17:15:45 +0000 (09:15 -0800)]
change ci image version (#4313)
Zhi [Thu, 14 Nov 2019 17:08:28 +0000 (09:08 -0800)]
[doc][fix] fix sphinx parsing for pass infra tutorial (#4337)
Animesh Jain [Thu, 14 Nov 2019 17:06:01 +0000 (09:06 -0800)]
[QNN] Use Int16 upcast in Fallback Conv2D. Fix test names. (#4329)
Animesh Jain [Thu, 14 Nov 2019 05:07:52 +0000 (21:07 -0800)]
[QNN] Quantize - Fixing the sequence of lowering. (#4316)
Tianqi Chen [Thu, 14 Nov 2019 04:49:34 +0000 (20:49 -0800)]
[CI][DOCKER] Add ONNX runtime dep (#4314)
* [DOCKER] Add ONNX runtime dep
* Improve ci script
jason-song-dev [Thu, 14 Nov 2019 03:42:22 +0000 (12:42 +0900)]
fix error when memory_id is VTA_MEM_ID_OUT (#4330)
Animesh Jain [Wed, 13 Nov 2019 19:18:49 +0000 (11:18 -0800)]
[QNN][Legalize] Specialize for Platforms without any fast Int8 arithmetic units. (#4307)
Zhao Wu [Wed, 13 Nov 2019 06:11:38 +0000 (14:11 +0800)]
[TOPI][OP] Support Faster-RCNN Proposal OP on CPU (#4297)
* Support Proposal operator on CPU.
* PyLint space issue
* PyLint space issue
* Pylint singleton-comparison issue
Eric Platon [Tue, 12 Nov 2019 23:52:24 +0000 (00:52 +0100)]
Fix the TF tutorial to run against TF2.0 and TF1.x (#4104)
* WIP Run the TF tutorial on TF2
* Remove debugger statement.
* Complete the support for TF2.0's `resize`.
TF2.0 adds a `half_pixel_centers` attribute to the `resize` function in
the image API. This commit completes the hooks in Relay's TF frontend.
As of this commit, there is no new test yet. Also, this commit
addresses solely the `resize` change; other commits address other
changes in TF2.0.
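For context, `half_pixel_centers` changes how destination pixel coordinates map back to source coordinates during resize. An illustrative pure-Python sketch of the two mappings (not the frontend code):

```python
# The coordinate mapping difference behind TF2.0's half_pixel_centers
# attribute on image resize.
def src_coord(dst, scale, half_pixel_centers):
    if half_pixel_centers:
        # Treat pixels as unit squares sampled at their centers.
        return (dst + 0.5) * scale - 0.5
    # Legacy mapping: align the top-left corners.
    return dst * scale

scale = 2.0  # e.g. downscaling by a factor of 2
print(src_coord(1, scale, False))  # 2.0
print(src_coord(1, scale, True))   # 2.5
```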
* Support TF2.0 in the tutorial by using the compat API.
This looks cleaner than trying to detect the TF version.
* Use the TF compat API, so as to support TF2.0.
This is a direct change, relying on the compat API provided by the TF
team.
This code will last as long as the compat API exists, so "proper"
support for TF1.x and 2.x will require more work at some point in
the future.
* Partial support for EXPLICIT padding introduced in TF2.0.
Explicit padding is a special case in TF2.0 (see reference linked
below). Some models are serialized with that mode, and break TF support
in TVM.
Support is *partial* as EXPLICIT falls back to setting padding on the
Relay op, which only supports 2 values. At some point, padding may
need to be extended to support 4 values, but that is out of scope
for this commit.
Reference on EXPLICIT padding: https://github.com/tensorflow/tensorflow/commit/ec81825aaf7e848d9f8ddffdf1e0d20aebe9172c#diff-1d1c0bb0a880f85b6164f71dbb2f446e
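A hypothetical sketch of what "partial" means here (illustrative, not the actual frontend code): TF EXPLICIT padding carries separate before/after values per spatial dimension, while the Relay op at the time accepted only two values, so only symmetric padding maps cleanly.

```python
# explicit: [[0,0], [top,bottom], [left,right], [0,0]] for NHWC layout.
# Only symmetric padding fits into Relay's 2-value padding attribute.
def explicit_to_relay_padding(explicit):
    (top, bottom), (left, right) = explicit[1], explicit[2]
    if top != bottom or left != right:
        raise NotImplementedError("asymmetric EXPLICIT padding unsupported")
    return (top, left)

print(explicit_to_relay_padding([[0, 0], [1, 1], [2, 2], [0, 0]]))  # (1, 2)
```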
* Guard on checking for optional TF2.0 attribute.
* Do not expect Relay to implement TF-specific attributes.
The `half_pixel_centers` attribute is a new feature in TF2.0. Earlier
commits of mine mistakenly introduced it in the Relay API. This is
probably not what Relay is expected to support, and the semantics of
`half_pixel_centers` are unclear (to me, at least) at this point.
* Remove unclear comment.
CR https://github.com/dmlc/tvm/pull/4104#discussion_r338705742
Addresses #4104
* Changes after review.
Complying without understanding the rationale for now.
* Fix the arguments set mistakenly.
An argument was ignored for the wrong operation.
Wei Chen [Tue, 12 Nov 2019 20:36:28 +0000 (12:36 -0800)]
[Relay][Op][TF] Complete tensor array unstack with all ranks support (#4309)
Ina Dobreva [Tue, 12 Nov 2019 20:23:04 +0000 (20:23 +0000)]
Add test for the qnn_add operator (#4282)
* Add test for the qnn_add operator
The tests use the fake quant approach, so up until the tf session the tensors remain in float32.
The test data has to be passed in uint8 because of how the tflite/tvm comparison works.
An absolute tolerance of up to 1 is allowed for the qnn results. For now, input_stats are hardcoded,
assuming the tests for the other qnn ops will pass input data in the same range.
* Separate qnn uint8 test function from the fp32 elemwise tests
Isolate qnn uint8 elemwise tests
Remove blank lines
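The "fake quant" approach and the tolerance of 1 follow from the uint8 quantize/dequantize round trip: each value is rounded to one of 256 levels, so compared results can differ by about one quantum. An illustrative pure-Python sketch (not the test code itself):

```python
# Affine uint8 quantization: q = clamp(round(x / scale) + zero_point).
def quantize(x, scale, zero_point):
    q = round(x / scale) + zero_point
    return max(0, min(255, q))  # clamp to the uint8 range

def dequantize(q, scale, zero_point):
    return (q - zero_point) * scale

scale, zp = 0.1, 128
q = quantize(-1.23, scale, zp)
# The round trip lands on the nearest representable value, close to
# -1.2 rather than -1.23: rounding error of up to half a quantum.
print(q, dequantize(q, scale, zp))
```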
Haichen Shen [Tue, 12 Nov 2019 19:54:56 +0000 (11:54 -0800)]
add (#4311)
Xingyu Zhou [Tue, 12 Nov 2019 18:30:04 +0000 (10:30 -0800)]
[Relay][Frontend][Keras] batch_norm op params not handling well (#4310)
* Relay Keras frontend: batch_norm op params not handled well
* add unit test for Relay Frontend Keras batch_norm
jmorrill [Tue, 12 Nov 2019 16:50:06 +0000 (08:50 -0800)]
Fix incorrect call to Unicode Win32 InetPton (#4306)
* Fix incorrect call to Unicode Win32
* Removed inet_pton call. Win32 already has it.
Neo Chien [Tue, 12 Nov 2019 04:34:50 +0000 (12:34 +0800)]
[Relay][Frontend][Tensorflow] Fix type assignment for operator 'tf.range' (#4294)
Yao Wang [Mon, 11 Nov 2019 23:46:29 +0000 (15:46 -0800)]
Add More Shape Functions (#4179)
* Add shape functions
* Fix get_const_tuple
* Fix cpplint
* Fix pylint
* Fix pylint
* rebase and fix
* Check Any for infer type
* Fix expand_dim shape func for zero rank input
* Fix pooling infer type
* Address comment
* Register layout transform attr
Wei Chen [Mon, 11 Nov 2019 19:22:14 +0000 (11:22 -0800)]
[TF][Relay][Op] Pass module when infer shape (#4287)
* [TF][Relay][Op] Pass module when infer shape
* Fix lint
* Improve style
* Add test
Tianqi Chen [Mon, 11 Nov 2019 18:09:29 +0000 (10:09 -0800)]
[RUNTIME][REFACTOR] Use object protocol to support runtime::Module (#4289)
Previously runtime::Module was supported using shared_ptr.
This PR refactors the codebase to use the Object protocol.
It will open doors to allow easier interoperation between
Object containers and modules in the future.
Yong Wu [Mon, 11 Nov 2019 17:24:52 +0000 (09:24 -0800)]
[TF][TEST] add test_forward_reduce_any back (#4301)
the test case was removed in #4181 for some reason
@tqchen @soiferj @zhiics
Yao Wang [Mon, 11 Nov 2019 16:23:23 +0000 (08:23 -0800)]
Fix tf reshape (#4285)
* Fix tf reshape
* Fix test
* Fix pylint
* Fix pylint
Zhi [Mon, 11 Nov 2019 05:57:43 +0000 (21:57 -0800)]
[tutorial] Relay pass infra tutorial (#4083)
* Add pass manager tutorial
* fix some examples
* retrigger ci
* Update tutorials/dev/relay_pass_infra.py
Co-Authored-By: 雾雨魔理沙 <lolisa@marisa.moe>
* Add ToANormalForm link
Animesh Jain [Mon, 11 Nov 2019 03:09:16 +0000 (19:09 -0800)]
[TOPI][AlterOpLayout][ARM] Enabling NHWC to NCHW layout transformation. (#4249)
Zhao Wu [Sun, 10 Nov 2019 22:56:44 +0000 (06:56 +0800)]
[RUNTIME] Support C++ RPC (#4281)
Zhao Wu [Sun, 10 Nov 2019 19:45:10 +0000 (03:45 +0800)]
[TFLite] Support PRelu (#4298)
Wei Chen [Sun, 10 Nov 2019 18:31:20 +0000 (10:31 -0800)]
[Test][TF][Relay] Fix argument preparation for vm test mode (#4296)
Yizhi Liu [Sun, 10 Nov 2019 06:16:34 +0000 (22:16 -0800)]
[Codegen][cuda-fp16] fallback to fp32 simulation when cuda arch < sm53 (#4268)
Yizhi Liu [Sun, 10 Nov 2019 02:20:33 +0000 (18:20 -0800)]
Rename ml.dmlc.tvm to org.apache.tvm (#4290)
Minmin Sun (孙敏敏) [Sat, 9 Nov 2019 21:01:36 +0000 (05:01 +0800)]
Auto TensorCore CodeGen (#4234)
* Add Auto TensorCore Unit Test
* Rebase to tvm master branch & Add auto tensor core
* Code Refine
* Add tensor core switch by pragma
* Add pragma in tensor core example code
* Get real tile size to replace hard coded 16
* support more than 2 dimensions (e.g. batchmatmul) for buffer bind scope
* support batch matmul
* Move cuda env check to tensor_core.cc
* Coderefine for tensor_core.cc
* Refine comments
* Some refinements of code and comment
* Update TensorCore UT to pass the CPU test
* remove redundant code
* matmul's storage align for different layout
* Add support for different positions of type cast
* Add formal tutorial for auto tensorcore codegen
* move tensorcore check up to tutorial code
* code and doc refine
* comment out tune_and_evaluate in tutorial
* fix cpplint error
peike [Fri, 8 Nov 2019 08:23:15 +0000 (19:23 +1100)]
Update tvm_runtime.h (#4278)
fix the problem that android_rpc compilation failed
Cody Hao Yu [Fri, 8 Nov 2019 05:44:35 +0000 (21:44 -0800)]
[TOPI][CUDA] Fix Winograd Kernel Size Support (#4276)
* fix_winograd_cuda_kernel_size
* add unit test
Jon Soifer [Thu, 7 Nov 2019 22:10:30 +0000 (14:10 -0800)]
[Relay][Frontend][ONNX] Add support for broadcasting to Where and MatMul (#4267)
Josh Fromm [Thu, 7 Nov 2019 00:07:09 +0000 (16:07 -0800)]
[AutoTVM] Add batch_matmul to tunable operations (#4242)
* Batch matmul tuning running but with errors.
* Default x86 schedule as good as before.
* Code Cleanup
* Remove unused argument.
* improved template documentation.
* Silly lint fix
* Removed leftover comment.
* Moved cfg declaration to schedule for batch_matmul
* Moved x86 dense cfg declaration to schedule.
* lint fix
* Removed duplicate cfg declaration in dense.
* Reverted changes to dense.
Cody Hao Yu [Wed, 6 Nov 2019 23:02:54 +0000 (15:02 -0800)]
[TOPI] Fix bug in Winograd on CUDA (#4260)
* fix winograd
* move get padding after kernel transform
Neo Chien [Wed, 6 Nov 2019 18:39:40 +0000 (02:39 +0800)]
[Contrib] Fix error message at callback_get_section_size() (#4221)
* [Contrib] Fix error message at callback_get_section_size()
* Trigger notification
Liangfu Chen [Wed, 6 Nov 2019 17:19:22 +0000 (01:19 +0800)]
[VTA] Hotfix for padded load test in Chisel VTA (#4264)
* Update TensorUtil.scala
* Update test_vta_insn.py
Tianqi Chen [Wed, 6 Nov 2019 00:03:04 +0000 (16:03 -0800)]
[DOCS] Update link loc (#4257)
zhuochen [Tue, 5 Nov 2019 17:51:36 +0000 (01:51 +0800)]
workaround typing.Deque import error for Python 3.5 (#4254)
Thomas Viehmann [Tue, 5 Nov 2019 10:25:18 +0000 (11:25 +0100)]
Require LLVM >= 9 for AMDGPU backend (#4253)
LLVM 8 will crash when loading the bitcodes.
This is a runtime check, as the file will be compiled in even when
USE_ROCM OFF is used in the configuration, if ROCM is installed
in the default location.
Fixes: #4087
Tianqi Chen [Mon, 4 Nov 2019 22:03:33 +0000 (14:03 -0800)]
CI trigger after repo move (#4252)
Trevor Morris [Mon, 4 Nov 2019 18:37:41 +0000 (10:37 -0800)]
[Relay][Frontend][Tensorflow] Fix GatherV2, Add StopGradient (#4238)
* Add StopGradient. Add batch_dims attr to ignore list for GatherV2
* Trigger CI
Kim [Mon, 4 Nov 2019 16:04:02 +0000 (00:04 +0800)]
remove PEP498 f-string new feature for support python3.5 (#4250)
XFPlus [Mon, 4 Nov 2019 16:03:42 +0000 (00:03 +0800)]
Fix typo in err msg (#4251)
Hua Jiang [Sat, 2 Nov 2019 03:29:54 +0000 (20:29 -0700)]
[VTA] Performance optimize, remove unnecessary contiguous memory use. (#4246)
* [VTA] Performance optimize, remove unnecessary contiguous memory use.
Issue:
Uop maintains a cache vector to copy uop data into contiguous DRAM
memory for FPGA/Simulator use, but this cache vector does not get
cleared after the FPGA/Simulator core runs. In the Resnet18 case, if
we print the cache size in the UopQueue::ReadBarrier function, we can
see the cache size keep increasing, which causes useless data copies
and unnecessary contiguous DRAM memory malloc.
Analysis:
This issue is caused by not clearing the cache_ vector when doing
uop_queue_.Reset().
Solution:
Override the BaseQueue Reset function in UopQueue and add cache_
clearing logic.
* address review comments, remove spacing.
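A pure-Python sketch of the bug pattern described above (illustrative, not the VTA C++ code): a subclass with its own cache whose inherited Reset never clears it.

```python
# A queue whose reset() forgets an internal cache keeps growing across
# runs; the fix overrides reset() in the subclass and clears the cache.
class BaseQueue:
    def __init__(self):
        self.dram_buffer = []
    def reset(self):
        self.dram_buffer.clear()

class UopQueue(BaseQueue):
    def __init__(self):
        super().__init__()
        self.cache = []          # copied to contiguous DRAM each run
    def read_barrier(self, uops):
        self.cache.extend(uops)  # grows forever if never cleared
        return len(self.cache)
    def reset(self):             # the fix: override and clear the cache
        super().reset()
        self.cache.clear()

q = UopQueue()
q.read_barrier([1, 2, 3])
q.reset()
print(len(q.cache))  # 0 after the fix; kept growing before
```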
Yao Wang [Sat, 2 Nov 2019 03:10:21 +0000 (20:10 -0700)]
Support reshape for dynamic shape in tf converter (#4185)
* Support reshape for dynamic shape in tf converter
* Only allow reshape directly after shape function for symbolic input shape
* Fix lint
Tianqi Chen [Fri, 1 Nov 2019 23:34:42 +0000 (16:34 -0700)]
[NODE][REFACTOR] Rename IRFunctor->NodeFunctor, use func pointer (#4247)
* [NODE][REFACTOR] Rename IRFunctor->NodeFunctor, use function pointer for dispatching.
Previously we used std::function for the functor dispatching.
It introduces additional overhead and problems during dll destruction (of std::function).
This PR changes the std::function to function pointers.
This adds some restrictions around set_dispatch that we can get around,
but will improve the general efficiency by removing one level of indirection in the std::function.
We also no longer need special macros to register functions to the Functor.
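The functor being renamed is a per-node-type dispatch table. A pure-Python sketch of the idea (the actual change is in C++, where std::function slots are replaced with plain function pointers to remove one level of indirection):

```python
# Dispatch-by-type functor: a table mapping node type -> handler.
class NodeFunctor:
    def __init__(self):
        self._table = {}
    def set_dispatch(self, node_type, func):
        self._table[node_type] = func
        return self
    def __call__(self, node, *args):
        return self._table[type(node)](node, *args)

class AddNode: pass
class MulNode: pass

printer = NodeFunctor()
printer.set_dispatch(AddNode, lambda n: "add")
printer.set_dispatch(MulNode, lambda n: "mul")
print(printer(AddNode()))  # add
```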
Jared Roesch [Fri, 1 Nov 2019 21:28:23 +0000 (16:28 -0500)]
Implement explicit IR representation of memory allocation (#3560)
Wei Chen [Fri, 1 Nov 2019 20:37:58 +0000 (13:37 -0700)]
[Relay][Prelude] Add more dtypes to tensor_t (#4233)
Wuwei Lin [Fri, 1 Nov 2019 17:36:36 +0000 (13:36 -0400)]
[Relay][Pass] Avoid FoldConstant folding some ops (#4245)
* [Relay][Pass] Avoid FoldConstant folding some ops
* rename
Kim [Fri, 1 Nov 2019 15:54:33 +0000 (23:54 +0800)]
[ Relay ][ Frontend ][ Tensorflow ]add op add_n to relay/frontend/tensorflow.py (#4181)
Sergei Grechanik [Fri, 1 Nov 2019 15:51:43 +0000 (18:51 +0300)]
[ARITH] Fix lowering of FloorMod (#4236)
autumnqin [Fri, 1 Nov 2019 14:53:47 +0000 (22:53 +0800)]
Fix the problem that android_rpc compilation failed. (#4244)
Signed-off-by: qinqiuping <autumnqin@126.com>
Tianqi Chen [Thu, 31 Oct 2019 18:13:48 +0000 (11:13 -0700)]
[BUILD] Disable utvm standalone runtime by default (#4240)
Tianqi Chen [Thu, 31 Oct 2019 18:13:32 +0000 (11:13 -0700)]
[CUDA] Fix fp16 intrin, disable bad fp16 vecadd test for now (#4239)
Tianqi Chen [Thu, 31 Oct 2019 18:11:46 +0000 (11:11 -0700)]
[CI] Update GPU docker to cuda10 (#4228)
* [CI] Update the ci-gpu to use cuda10
* [CI] Enforce tensorcore gpu for unittest
KoolKoffee [Thu, 31 Oct 2019 16:15:57 +0000 (16:15 +0000)]
Fix typo in get_output doc-string (#4237)
Tianqi Chen [Thu, 31 Oct 2019 05:24:52 +0000 (22:24 -0700)]
[CI] Move gpu docker binary to cuda10 (#4229)
* [CI] Move gpu docker binary to cuda10
* Fix the gcn tutorial
Wei Chen [Thu, 31 Oct 2019 02:52:12 +0000 (19:52 -0700)]
[Doc] Update ANTLR instruction (#4231)
* [Doc] Update ANTLR instruction
* Update docs/install/from_source.rst
Wei Chen [Wed, 30 Oct 2019 22:54:56 +0000 (15:54 -0700)]
[Relay] Install Relay Prelude program in package install (#4227)
Tianqi Chen [Wed, 30 Oct 2019 22:33:10 +0000 (15:33 -0700)]
[CI] use llvm9 for the gpu tests (#4224)
* [CI] use llvm9 for the gpu tests
* Update Docker script to support new nvidia docker
Jon Soifer [Wed, 30 Oct 2019 18:43:09 +0000 (11:43 -0700)]
[Relay][Topi][TensorFlow][ONNX][Lang] Add support for Any op (#4205)
* Add support for Any op
* Support ONNX frontend
* Add doc
* Add to relay docs
* Dummy change to retrigger CI