From: Tianqi Chen
Date: Mon, 17 Feb 2020 18:53:05 +0000 (-0800)
Subject: [DOCS] Introduce how to add hardware backend to FAQ (#4898)
X-Git-Tag: upstream/0.7.0~1238
X-Git-Url: http://review.tizen.org/git/?a=commitdiff_plain;h=0b2d11a5745779ec139a05e8ece73c93fa6d7db8;p=platform%2Fupstream%2Ftvm.git

[DOCS] Introduce how to add hardware backend to FAQ (#4898)
---

diff --git a/docs/api/python/target.rst b/docs/api/python/target.rst
index 6851c04..625b98e 100644
--- a/docs/api/python/target.rst
+++ b/docs/api/python/target.rst
@@ -19,4 +19,4 @@ tvm.target
 ----------
 .. automodule:: tvm.target
    :members:
-   :imported-members:
+   :imported-members:

diff --git a/docs/dev/relay_bring_your_own_codegen.rst b/docs/dev/relay_bring_your_own_codegen.rst
index b735bb8..3e3369d 100644
--- a/docs/dev/relay_bring_your_own_codegen.rst
+++ b/docs/dev/relay_bring_your_own_codegen.rst
@@ -15,6 +15,8 @@
    specific language governing permissions and limitations
    under the License.
 
+.. _relay-bring-your-own-codegen:
+
 =============================
 Bring Your Own Codegen To TVM
 =============================

diff --git a/docs/faq.md b/docs/faq.md
deleted file mode 100644
index b5bf65e..0000000
--- a/docs/faq.md
+++ /dev/null
@@ -1,49 +0,0 @@
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-Frequently Asked Questions
-==========================
-This document contains frequently asked questions.
-
-How to Install
---------------
-See [Installation](http://docs.tvm.ai/install/)
-
-TVM's relation to Other IR/DSL Projects
----------------------------------------
-There are usually two levels of abstractions of IR in the deep learning systems.
-TensorFlow's XLA and Intel's ngraph both use a computation graph representation.
-This representation is high level, and can be helpful to perform generic optimizations
-such as memory reuse, layout transformation and automatic differentiation.
-
-TVM adopts a low-level representation, that explicitly express the choice of memory
-layout, parallelization pattern, locality and hardware primitives etc.
-This level of IR is closer to directly target hardwares.
-The low-level IR adopts ideas from existing image processing languages like Halide, darkroom
-and loop transformation tools like loopy and polyhedra-based analysis.
-We specifically focus on expressing deep learning workloads (e.g. recurrence),
-optimization for different hardware backends and embedding with frameworks to provide
-end-to-end compilation stack.
-
-
-TVM's relation to libDNN, cuDNN
-------------------------------
-TVM can incorporate these libraries as external calls. One goal of TVM is to be able to
-generate high-performing kernels. We will evolve TVM an incremental manner as
-we learn from the techniques of manual kernel crafting and add these as primitives in DSL.
-See also [TVM Operator Inventory](https://github.com/apache/incubator-tvm/tree/master/topi) for
-recipes of operators in TVM.

diff --git a/docs/faq.rst b/docs/faq.rst
new file mode 100644
index 0000000..32714b0
--- /dev/null
+++ b/docs/faq.rst
@@ -0,0 +1,64 @@
+.. Licensed to the Apache Software Foundation (ASF) under one
+   or more contributor license agreements.  See the NOTICE file
+   distributed with this work for additional information
+   regarding copyright ownership.  The ASF licenses this file
+   to you under the Apache License, Version 2.0 (the
+   "License"); you may not use this file except in compliance
+   with the License.  You may obtain a copy of the License at
+
+..   http://www.apache.org/licenses/LICENSE-2.0
+
+.. Unless required by applicable law or agreed to in writing,
+   software distributed under the License is distributed on an
+   "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+   KIND, either express or implied.  See the License for the
+   specific language governing permissions and limitations
+   under the License.
+
+
+Frequently Asked Questions
+==========================
+
+
+How to Install
+--------------
+See :ref:`installation`.
+
+
+How to add a new Hardware Backend
+---------------------------------
+
+- If the hardware backend has LLVM support,
+  then we can directly generate the code by setting the correct target triple as in :py:mod:`~tvm.target`.
+- If the target hardware is a GPU, try the cuda, opencl, or vulkan backend.
+- If the target hardware is a special accelerator,
+  check out :ref:`vta-index` and :ref:`relay-bring-your-own-codegen`.
+- For all of the above cases, you may want to add target-specific
+  optimization templates using AutoTVM; see :ref:`tutorials-autotvm-sec`.
+- Besides using LLVM's vectorization, we can also embed micro-kernels to leverage hardware intrinsics;
+  see :ref:`tutorials-tensorize`.
+
+
+TVM's relation to Other IR/DSL Projects
+---------------------------------------
+There are usually two levels of IR abstraction in deep learning systems.
+TensorFlow's XLA and Intel's ngraph both use a computation graph representation.
+This representation is high level and is helpful for performing generic optimizations
+such as memory reuse, layout transformation, and automatic differentiation.
+
+TVM adopts a low-level representation that explicitly expresses the choice of memory
+layout, parallelization pattern, locality, hardware primitives, etc.
+This level of IR is closer to directly targeting hardware.
+The low-level IR adopts ideas from existing image processing languages like Halide and darkroom,
+and from loop transformation tools like loopy and polyhedra-based analysis.
+We specifically focus on expressing deep learning workloads (e.g. recurrence),
+optimizing for different hardware backends, and embedding with frameworks to provide
+an end-to-end compilation stack.
+
+
+TVM's relation to libDNN, cuDNN
+-------------------------------
+TVM can incorporate these libraries as external calls.
One goal of TVM is to be able to
+generate high-performing kernels. We will evolve TVM in an incremental manner as
+we learn from the techniques of manual kernel crafting and add these as primitives in the DSL.
+See also the `TVM Operator Inventory <https://github.com/apache/incubator-tvm/tree/master/topi>`_ for
+recipes of operators in TVM.

diff --git a/docs/install/index.rst b/docs/install/index.rst
index f1caec1..5f73941 100644
--- a/docs/install/index.rst
+++ b/docs/install/index.rst
@@ -15,6 +15,8 @@
    specific language governing permissions and limitations
    under the License.
 
+.. _installation:
+
 Installation
 ============
 To install TVM, please read :ref:`install-from-source`.

diff --git a/docs/vta/index.rst b/docs/vta/index.rst
index b719e2d..d31e377 100644
--- a/docs/vta/index.rst
+++ b/docs/vta/index.rst
@@ -15,6 +15,8 @@
    specific language governing permissions and limitations
    under the License.
 
+.. _vta-index:
+
 VTA: Deep Learning Accelerator Stack
 ====================================
 
@@ -50,4 +52,4 @@ Literature
 - Read the VTA tech report: `An Open Hardware Software Stack for Deep Learning`_.
 
 .. _release blog post: https://tvm.apache.org/2018/07/12/vta-release-announcement
-.. _An Open Hardware Software Stack for Deep Learning: https://arxiv.org/abs/1807.04188
\ No newline at end of file
+.. _An Open Hardware Software Stack for Deep Learning: https://arxiv.org/abs/1807.04188

diff --git a/tutorials/autotvm/README.txt b/tutorials/autotvm/README.txt
index c511381..38e3b33 100644
--- a/tutorials/autotvm/README.txt
+++ b/tutorials/autotvm/README.txt
@@ -1,3 +1,4 @@
-Auto tuning
--------------
+.. _tutorials-autotvm-sec:
+Auto tuning
+-----------
 

diff --git a/tutorials/language/tensorize.py b/tutorials/language/tensorize.py
index 7fb2d53..afc708e 100644
--- a/tutorials/language/tensorize.py
+++ b/tutorials/language/tensorize.py
@@ -15,6 +15,8 @@
 # specific language governing permissions and limitations
 # under the License.
 """
+.. 
_tutorials-tensorize:
+
 Use Tensorize to Leverage Hardware Intrinsics
 =============================================
 **Author**: `Yizhi Liu `_
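To make the FAQ's first bullet concrete: when a CPU is already supported by LLVM, no new codegen is required — you only hand TVM a target string of the form ``llvm -mtriple=<triple>`` (the form accepted by ``tvm.target.Target`` and ``tvm.build``). The sketch below composes such a string; the helper function is hypothetical and not part of TVM, only the target-string format itself comes from TVM's documentation.

```python
# Hypothetical helper (not part of TVM): compose an LLVM target string of the
# kind passed to tvm.target.Target / tvm.build, e.g. "llvm -mtriple=...".
def make_llvm_target(mtriple, mcpu=None):
    """Build a TVM target string for an LLVM-supported CPU."""
    parts = ["llvm", "-mtriple=" + mtriple]
    if mcpu:
        # Optional CPU model, analogous to LLVM's -mcpu flag.
        parts.append("-mcpu=" + mcpu)
    return " ".join(parts)

# Example: cross-compiling for a 64-bit ARM Linux board.
target = make_llvm_target("aarch64-linux-gnu", mcpu="cortex-a72")
print(target)  # llvm -mtriple=aarch64-linux-gnu -mcpu=cortex-a72
```

The resulting string would then be passed where TVM expects a target, e.g. ``tvm.build(sched, args, target)``.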