# Build OpenVINO™ Inference Engine
- [Introduction](#introduction)
- [Build on Linux\* Systems](#build-on-linux-systems)
  - [Software Requirements](#software-requirements)
  - [Build Steps](#build-steps)
  - [Additional Build Options](#additional-build-options)
- [Build for Raspbian\* Stretch OS](#build-for-raspbian-stretch-os)
  - [Hardware Requirements](#hardware-requirements)
  - [Native Compilation](#native-compilation)
  - [Cross Compilation Using Docker\*](#cross-compilation-using-docker)
  - [Additional Build Options](#additional-build-options-1)
- [Build on Windows\* Systems](#build-on-windows-systems)
  - [Software Requirements](#software-requirements-1)
  - [Build Steps](#build-steps-1)
  - [Additional Build Options](#additional-build-options-2)
  - [Building Inference Engine with Ninja\* Build System](#building-inference-engine-with-ninja-build-system)
- [Build on macOS\* Systems](#build-on-macos-systems)
  - [Software Requirements](#software-requirements-2)
  - [Build Steps](#build-steps-2)
  - [Additional Build Options](#additional-build-options-3)
- [Build on Android\* Systems](#build-on-android-systems)
  - [Software Requirements](#software-requirements-3)
  - [Build Steps](#build-steps-3)
- [Use Custom OpenCV Builds for Inference Engine](#use-custom-opencv-builds-for-inference-engine)
- [Add Inference Engine to Your Project](#add-inference-engine-to-your-project)
- [(Optional) Additional Installation Steps for the Intel® Neural Compute Stick 2](#optional-additional-installation-steps-for-the-intel-neural-compute-stick-2)
  - [For Linux, Raspbian\* Stretch OS](#for-linux-raspbian-stretch-os)
- [Next Steps](#next-steps)
- [Additional Resources](#additional-resources)
## Introduction

The Inference Engine can infer models in different formats with various input
and output formats.

The open source version of the Inference Engine includes the following plugins:
| PLUGIN               | DEVICE TYPES |
| ---------------------| -------------|
| CPU plugin           | Intel® Xeon® with Intel® AVX2 and AVX512, Intel® Core™ Processors with Intel® AVX2, Intel® Atom® Processors with Intel® SSE |
| GPU plugin           | Intel® Processor Graphics, including Intel® HD Graphics and Intel® Iris® Graphics |
| GNA plugin           | Intel® Speech Enabling Developer Kit, Amazon Alexa\* Premium Far-Field Developer Kit, Intel® Pentium® Silver processor J5005, Intel® Celeron® processor J4005, Intel® Core™ i3-8121U processor |
| MYRIAD plugin        | Intel® Neural Compute Stick 2 powered by the Intel® Movidius™ Myriad™ X |
| Heterogeneous plugin | Enables inference of one network on several Intel® devices. |
## Build on Linux\* Systems

The software was validated on:
- Ubuntu\* 18.04 (64-bit) with default GCC\* 7.5.0
- Ubuntu\* 20.04 (64-bit) with default GCC\* 9.3.0
- CentOS\* 7.6 (64-bit) with default GCC\* 4.8.5

### Software Requirements
- [CMake]\* 3.13 or higher
- GCC\* 4.8 or higher to build the Inference Engine
- Python 3.6 or higher for the Inference Engine Python API wrapper
- (Optional) [Install Intel® Graphics Compute Runtime for OpenCL™ Driver package 19.41.14441].

> **NOTE**: Building samples and demos from the Intel® Distribution of OpenVINO™ toolkit package requires CMake\* 3.10 or higher.
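
Before configuring the build, you can quickly confirm that the toolchain meets these requirements. This is only a convenience check, not part of the build itself:

```sh
cmake --version     # expect 3.13 or higher
gcc --version       # expect 4.8 or higher
python3 --version   # expect 3.6 or higher (needed only for the Python API wrapper)
```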
### Build Steps

1. Clone submodules:
   ```sh
   cd openvino
   git submodule update --init --recursive
   ```
2. Install build dependencies using the `install_build_dependencies.sh` script in the
   project root folder:
   ```sh
   chmod +x install_build_dependencies.sh
   ./install_build_dependencies.sh
   ```
3. By default, the build enables the Inference Engine GPU plugin to infer models
   on your Intel® Processor Graphics. This requires you to
   [Install Intel® Graphics Compute Runtime for OpenCL™ Driver package 19.41.14441]
   before running the build. If you don't want to use the GPU plugin, use the
   `-DENABLE_CLDNN=OFF` CMake build option and skip the installation of the
   Intel® Graphics Compute Runtime for OpenCL™ Driver.
4. Create a build folder:
   ```sh
   mkdir build && cd build
   ```
5. Inference Engine uses a CMake-based build system. In the created `build`
   directory, run `cmake` to fetch project dependencies and create Unix
   makefiles, then run `make` to build the project:
   ```sh
   cmake -DCMAKE_BUILD_TYPE=Release ..
   make --jobs=$(nproc --all)
   ```
### Additional Build Options

You can use the following additional build options:

- The default build uses an internal JIT GEMM implementation.

- To switch to an OpenBLAS\* implementation, use the `-DGEMM=OPENBLAS` option with
  the `-DBLAS_INCLUDE_DIRS` and `-DBLAS_LIBRARIES` CMake options to specify a path to the
  OpenBLAS headers and library. For example, use the following options on CentOS\*:
  `-DGEMM=OPENBLAS -DBLAS_INCLUDE_DIRS=/usr/include/openblas -DBLAS_LIBRARIES=/usr/lib64/libopenblas.so.0`.

- To switch to the optimized MKL-ML\* GEMM implementation, use the `-DGEMM=MKL`
  and `-DMKLROOT=<path_to_MKL>` CMake options to specify a path to unpacked
  MKL-ML with the `include` and `lib` folders. The MKL-ML\* package can be downloaded
  from the [MKL-DNN repository].
- Threading Building Blocks (TBB) is used by default. To build the Inference
  Engine with OpenMP\* threading, set the `-DTHREADING=OMP` option.

- Required versions of TBB and OpenCV packages are downloaded automatically by
  the CMake-based script. If you want to use the automatically downloaded
  packages but you already have installed TBB or OpenCV packages configured in
  your environment, you may need to clean the `TBBROOT` and `OpenCV_DIR`
  environment variables before running the `cmake` command; otherwise they
  won't be downloaded, and the build may fail if incompatible versions were
  installed.

- If the CMake-based build script cannot find and download the OpenCV package
  that is supported on your platform, or if you want to use a custom build of
  the OpenCV library, refer to the
  [Use Custom OpenCV Builds](#use-custom-opencv-builds-for-inference-engine)
  section.
- To build the Python API wrapper:
  1. Install all additional packages listed in the
     `/inference-engine/ie_bridges/python/requirements.txt` file:
     ```sh
     pip install -r requirements.txt
     ```
  2. Use the `-DENABLE_PYTHON=ON` option. To specify an exact Python version, use the following
     options:
     ```sh
     -DPYTHON_EXECUTABLE=`which python3.7` \
     -DPYTHON_LIBRARY=/usr/lib/x86_64-linux-gnu/libpython3.7m.so \
     -DPYTHON_INCLUDE_DIR=/usr/include/python3.7
     ```

- To switch the CPU and GPU plugins off/on, use the `cmake` options
  `-DENABLE_MKL_DNN=ON/OFF` and `-DENABLE_CLDNN=ON/OFF`, respectively.
- nGraph-specific compilation options:
  `-DNGRAPH_ONNX_IMPORT_ENABLE=ON` enables the building of the nGraph ONNX importer.
  `-DNGRAPH_DEBUG_ENABLE=ON` enables additional debug prints.
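
For illustration only, several of the options above can be combined in a single configuration run; the exact set of flags depends on your needs:

```sh
# example combination of the documented options; adjust to your setup
cmake -DCMAKE_BUILD_TYPE=Release \
      -DTHREADING=OMP \
      -DENABLE_CLDNN=OFF \
      -DENABLE_PYTHON=ON \
      -DNGRAPH_ONNX_IMPORT_ENABLE=ON ..
make --jobs=$(nproc --all)
```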
## Build for Raspbian\* Stretch OS

> **NOTE**: Only the MYRIAD plugin is supported.

### Hardware Requirements
* Raspberry Pi\* 2 or 3 with Raspbian\* Stretch OS (32-bit). Check that its CPU supports the ARMv7 instruction set (the `uname -m` command returns `armv7l`).

> **NOTE**: Although the Raspberry Pi\* CPU is ARMv8, the 32-bit OS reports the ARMv7 instruction set. The default `gcc` compiler applies the ARMv6 architecture flag for compatibility with older board revisions. For more information, run the `gcc -Q --help=target` command and refer to the description of the `-march=` option.
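
As a quick diagnostic (not a build step), you can verify both points from a shell on the board:

```sh
uname -m                                  # expect armv7l on 32-bit Raspbian
gcc -Q --help=target | grep -- '-march='  # shows the compiler's default architecture flag
```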
You can compile the Inference Engine for Raspberry Pi\* in one of two ways:
* [Native Compilation](#native-compilation), which is the simplest way, but time-consuming
* [Cross Compilation Using Docker*](#cross-compilation-using-docker), which is the recommended way
### Native Compilation
Native compilation of the Inference Engine is the most straightforward solution. However, it might take at least one hour to complete on Raspberry Pi\* 3.

1. Install dependencies:
   ```sh
   sudo apt-get install -y git cmake libusb-1.0-0-dev
   ```
2. Go to the cloned `openvino` repository:
   ```sh
   cd openvino
   ```
3. Initialize submodules:
   ```sh
   git submodule update --init --recursive
   ```
4. Create a build folder:
   ```sh
   mkdir build && cd build
   ```
5. Build the Inference Engine:
   ```sh
   cmake -DCMAKE_BUILD_TYPE=Release \
         -DENABLE_GNA=OFF .. && make
   ```
### Cross Compilation Using Docker*

This compilation was tested on the following configuration:

* Host: Ubuntu\* 18.04 (64-bit, Intel® Core™ i7-6700K CPU @ 4.00GHz × 8)
* Target: Raspbian\* Stretch (32-bit, ARMv7, Raspberry Pi\* 3)

1. Install Docker\*:
   ```sh
   sudo apt-get install -y docker.io
   ```
2. Add the current user to the `docker` group:
   ```sh
   sudo usermod -a -G docker $USER
   ```
   Log out and log in for this to take effect.
3. Create a directory named `ie_cross_armhf` and add a text file named `Dockerfile`
   with the following content:
   ```docker
   FROM debian:stretch

   RUN dpkg --add-architecture armhf && \
       apt-get update && \
       apt-get install -y --no-install-recommends \
       crossbuild-essential-armhf \
       wget \
       libusb-1.0-0-dev:armhf \
       libavcodec-dev:armhf \
       libavformat-dev:armhf \
       libswscale-dev:armhf \
       libgstreamer1.0-dev:armhf \
       libgstreamer-plugins-base1.0-dev:armhf \
       libpython3-dev:armhf \
       python3-pip

   RUN wget https://www.cmake.org/files/v3.14/cmake-3.14.3.tar.gz && \
       tar xf cmake-3.14.3.tar.gz && \
       (cd cmake-3.14.3 && ./bootstrap --parallel=$(nproc --all) && make --jobs=$(nproc --all) && make install) && \
       rm -rf cmake-3.14.3 cmake-3.14.3.tar.gz
   ```
   The image uses the Debian\* Stretch (Debian 9) OS for compilation because it is the base of Raspbian\* Stretch.
4. Build a Docker\* image:
   ```sh
   docker image build -t ie_cross_armhf ie_cross_armhf
   ```
5. Run the Docker\* container with the source code folder mounted from the host:
   ```sh
   docker run -it -v /absolute/path/to/openvino:/openvino ie_cross_armhf /bin/bash
   ```
6. While in the container:

   1. Go to the cloned `openvino` repository:
      ```sh
      cd /openvino
      ```
   2. Create a build folder:
      ```sh
      mkdir build && cd build
      ```
   3. Build the Inference Engine:
      ```sh
      cmake -DCMAKE_BUILD_TYPE=Release \
            -DCMAKE_TOOLCHAIN_FILE="../cmake/arm.toolchain.cmake" \
            -DTHREADS_PTHREAD_ARG="-pthread" \
            -DENABLE_GNA=OFF .. && make --jobs=$(nproc --all)
      ```
7. Press **Ctrl+D** to exit the Docker\* container. You can find the resulting binaries
   in the `openvino/bin/armv7l/` directory and the OpenCV\*
   installation in `openvino/inference-engine/temp`.
> **NOTE**: Native applications that link to the cross-compiled Inference Engine
library require the extra compilation flag `-march=armv7-a`.
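
For illustration, a hypothetical native application could be cross-compiled against the resulting library as sketched below; the source file, output name, and include/library paths are placeholders to adapt to your tree:

```sh
# hypothetical example; all paths and names are placeholders
arm-linux-gnueabihf-g++ -march=armv7-a my_app.cpp \
    -I openvino/inference-engine/include \
    -L openvino/bin/armv7l/Release/lib \
    -linference_engine \
    -o my_app
```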
### Additional Build Options

You can use the following additional build options:

- Required versions of OpenCV packages are downloaded automatically by the
  CMake-based script. If you want to use the automatically downloaded packages
  but you already have installed OpenCV packages configured in your environment,
  you may need to clean the `OpenCV_DIR` environment variable before running
  the `cmake` command; otherwise they won't be downloaded and the build may
  fail if incompatible versions were installed.

- If the CMake-based build script cannot find and download the OpenCV package
  that is supported on your platform, or if you want to use a custom build of
  the OpenCV library, see: [Use Custom OpenCV Builds](#use-custom-opencv-builds-for-inference-engine)
- To build the Python API wrapper, install the `libpython3-dev:armhf` and `python3-pip`
  packages using `apt-get`; then install the `numpy` and `cython` Python modules
  via `pip3`, and add the following options:
  ```sh
  -DENABLE_PYTHON=ON \
  -DPYTHON_EXECUTABLE=/usr/bin/python3.5 \
  -DPYTHON_LIBRARY=/usr/lib/arm-linux-gnueabihf/libpython3.5m.so \
  -DPYTHON_INCLUDE_DIR=/usr/include/python3.5
  ```
- nGraph-specific compilation options:
  `-DNGRAPH_ONNX_IMPORT_ENABLE=ON` enables the building of the nGraph ONNX importer.
  `-DNGRAPH_DEBUG_ENABLE=ON` enables additional debug prints.
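
As a sketch only, the Python-related options can be appended to the cross-compilation `cmake` command shown earlier; all values follow the examples in this section:

```sh
cmake -DCMAKE_BUILD_TYPE=Release \
      -DCMAKE_TOOLCHAIN_FILE="../cmake/arm.toolchain.cmake" \
      -DTHREADS_PTHREAD_ARG="-pthread" \
      -DENABLE_PYTHON=ON \
      -DPYTHON_EXECUTABLE=/usr/bin/python3.5 \
      -DPYTHON_LIBRARY=/usr/lib/arm-linux-gnueabihf/libpython3.5m.so \
      -DPYTHON_INCLUDE_DIR=/usr/include/python3.5 \
      -DENABLE_GNA=OFF .. && make --jobs=$(nproc --all)
```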
## Build on Windows\* Systems

The software was validated on:
- Microsoft\* Windows\* 10 (64-bit) with Visual Studio 2019

### Software Requirements
- [CMake]\* 3.13 or higher
- Microsoft\* Visual Studio 2017 or 2019
- (Optional) Intel® Graphics Driver for Windows\* (26.20) [driver package]
- Python 3.6 or higher for the Inference Engine Python API wrapper

> **NOTE**: Building samples and demos from the Intel® Distribution of OpenVINO™ toolkit package requires CMake\* 3.10 or higher.

### Build Steps

1. Clone submodules:
   ```sh
   cd openvino
   git submodule update --init --recursive
   ```
2. By default, the build enables the Inference Engine GPU plugin to infer models
   on your Intel® Processor Graphics. This requires you to download and install
   the Intel® Graphics Driver for Windows (26.20) [driver package] before
   running the build. If you don't want to use the GPU plugin, use the
   `-DENABLE_CLDNN=OFF` CMake build option and skip the installation of the
   Intel® Graphics Driver.
3. Create a `build` directory:
   ```sh
   mkdir build
   ```
4. In the `build` directory, run `cmake` to fetch project dependencies and
   generate a Visual Studio solution.

   For Microsoft\* Visual Studio 2017:
   ```sh
   cmake -G "Visual Studio 15 2017 Win64" -DCMAKE_BUILD_TYPE=Release ..
   ```

   For Microsoft\* Visual Studio 2019:
   ```sh
   cmake -G "Visual Studio 16 2019" -A x64 -DCMAKE_BUILD_TYPE=Release ..
   ```
5. Build the generated solution in Visual Studio, or run
   `cmake --build . --config Release` to build from the command line.
6. Before running the samples, add paths to the TBB and OpenCV binaries used for
   the build to the `%PATH%` environment variable. By default, TBB binaries are
   downloaded by the CMake-based script to the `<openvino_repo>/inference-engine/temp/tbb/bin`
   folder, and OpenCV binaries to the `<openvino_repo>/inference-engine/temp/opencv_4.5.0/opencv/bin`
   folder.
### Additional Build Options

- Internal JIT GEMM implementation is used by default.

- To switch to the OpenBLAS\* GEMM implementation, use the `-DGEMM=OPENBLAS` CMake
  option and specify the path to OpenBLAS using the `-DBLAS_INCLUDE_DIRS=<OPENBLAS_DIR>\include`
  and `-DBLAS_LIBRARIES=<OPENBLAS_DIR>\lib\libopenblas.dll.a` options. A prebuilt
  OpenBLAS\* package can be downloaded via the [OpenBLAS] link, and the mingw64\*
  runtime dependencies via the [mingw64\* runtime dependencies] link.

- To switch to the optimized MKL-ML\* GEMM implementation, use the
  `-DGEMM=MKL` and `-DMKLROOT=<path_to_MKL>` CMake options to specify a path to
  unpacked MKL-ML with the `include` and `lib` folders. The MKL-ML\* package can be
  downloaded from the [MKL-DNN repository for Windows].

- Threading Building Blocks (TBB) is used by default. To build the Inference
  Engine with OpenMP\* threading, set the `-DTHREADING=OMP` option.
- Required versions of TBB and OpenCV packages are downloaded automatically by
  the CMake-based script. If you want to use the automatically downloaded
  packages but you already have installed TBB or OpenCV packages configured in
  your environment, you may need to clean the `TBBROOT` and `OpenCV_DIR`
  environment variables before running the `cmake` command; otherwise they won't
  be downloaded, and the build may fail if incompatible versions were installed.

- If the CMake-based build script cannot find and download the OpenCV package
  that is supported on your platform, or if you want to use a custom build of
  the OpenCV library, refer to the [Use Custom OpenCV Builds](#use-custom-opencv-builds-for-inference-engine)
  section.

- To switch off/on the CPU and GPU plugins, use the `cmake` options
  `-DENABLE_MKL_DNN=ON/OFF` and `-DENABLE_CLDNN=ON/OFF`, respectively.
- To build the Python API wrapper, use the `-DENABLE_PYTHON=ON` option. To
  specify an exact Python version, use the following options:
  ```sh
  -DPYTHON_EXECUTABLE="C:\Program Files\Python37\python.exe" ^
  -DPYTHON_LIBRARY="C:\Program Files\Python37\libs\python37.lib" ^
  -DPYTHON_INCLUDE_DIR="C:\Program Files\Python37\include"
  ```
- nGraph-specific compilation options:
  `-DNGRAPH_ONNX_IMPORT_ENABLE=ON` enables the building of the nGraph ONNX importer.
  `-DNGRAPH_DEBUG_ENABLE=ON` enables additional debug prints.
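
For illustration only, several of these options can be combined in one configuration command; this sketch assumes the Visual Studio 2019 generator shown above:

```sh
cmake -G "Visual Studio 16 2019" -A x64 ^
      -DENABLE_PYTHON=ON ^
      -DENABLE_CLDNN=OFF ^
      -DNGRAPH_ONNX_IMPORT_ENABLE=ON ..
```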
### Building Inference Engine with Ninja* Build System

```sh
call "C:\Program Files (x86)\IntelSWTools\compilers_and_libraries_2018\windows\bin\ipsxe-comp-vars.bat" intel64 vs2017
:: clean TBBROOT value set by ipsxe-comp-vars.bat, required TBB package will be downloaded by openvino cmake script
set TBBROOT=
cmake -G Ninja -Wno-dev -DCMAKE_BUILD_TYPE=Release ..
cmake --build . --config Release
```
## Build on macOS\* Systems

> **NOTE**: The current version of the OpenVINO™ toolkit for macOS\* supports
inference on Intel CPUs only.

The software was validated on:
- macOS\* 10.15, 64-bit

### Software Requirements

- [CMake]\* 3.13 or higher
- Clang\* compiler from Xcode\* 10.1 or higher
- Python\* 3.6 or higher for the Inference Engine Python API wrapper

> **NOTE**: Building samples and demos from the Intel® Distribution of OpenVINO™ toolkit package requires CMake\* 3.10 or higher.
### Build Steps

1. Clone submodules:
   ```sh
   cd openvino
   git submodule update --init --recursive
   ```
2. Create a build folder:
   ```sh
   mkdir build && cd build
   ```
3. Inference Engine uses a CMake-based build system. In the created `build`
   directory, run `cmake` to fetch project dependencies and create Unix makefiles,
   then run `make` to build the project:
   ```sh
   cmake -DCMAKE_BUILD_TYPE=Release ..
   make --jobs=$(sysctl -n hw.ncpu)   # macOS has no nproc by default; query the core count via sysctl
   ```
### Additional Build Options

You can use the following additional build options:

- Internal JIT GEMM implementation is used by default.

- To switch to the optimized MKL-ML\* GEMM implementation, use the `-DGEMM=MKL` and
  `-DMKLROOT=<path_to_MKL>` CMake options to specify a path to unpacked MKL-ML
  with the `include` and `lib` folders. The MKL-ML\* package for macOS can be downloaded
  [here](https://github.com/intel/mkl-dnn/releases/download/v0.19/mklml_mac_2019.0.5.20190502.tgz).

- Threading Building Blocks (TBB) is used by default. To build the Inference
  Engine with OpenMP\* threading, set the `-DTHREADING=OMP` option.
- Required versions of TBB and OpenCV packages are downloaded automatically by
  the CMake-based script. If you want to use the automatically downloaded
  packages but you already have installed TBB or OpenCV packages configured in
  your environment, you may need to clean the `TBBROOT` and `OpenCV_DIR`
  environment variables before running the `cmake` command; otherwise they won't
  be downloaded and the build may fail if incompatible versions were installed.

- If the CMake-based build script cannot find and download the OpenCV package
  that is supported on your platform, or if you want to use a custom build of
  the OpenCV library, refer to the
  [Use Custom OpenCV Builds](#use-custom-opencv-builds-for-inference-engine)
  section.
- To build the Python API wrapper, use the `-DENABLE_PYTHON=ON` option. To
  specify an exact Python version, use the following options:
  - If you installed Python through Homebrew*, set the following flags:
    ```sh
    -DPYTHON_EXECUTABLE=/usr/local/Cellar/python/3.7.7/Frameworks/Python.framework/Versions/3.7/bin/python3.7m \
    -DPYTHON_LIBRARY=/usr/local/Cellar/python/3.7.7/Frameworks/Python.framework/Versions/3.7/lib/libpython3.7m.dylib \
    -DPYTHON_INCLUDE_DIR=/usr/local/Cellar/python/3.7.7/Frameworks/Python.framework/Versions/3.7/include/python3.7m
    ```
  - If you installed Python another way, you can use the following commands to find where the `dylib` and `include_dir` are located, respectively:
    ```sh
    find /usr/ -name 'libpython*m.dylib'
    find /usr/ -type d -name python3.7m
    ```

- nGraph-specific compilation options:
  `-DNGRAPH_ONNX_IMPORT_ENABLE=ON` enables the building of the nGraph ONNX importer.
  `-DNGRAPH_DEBUG_ENABLE=ON` enables additional debug prints.
## Build on Android\* Systems

This section describes how to build the Inference Engine for Android x86 (64-bit) operating systems.

### Software Requirements

- [CMake]\* 3.13 or higher
- Android NDK (this guide has been validated with the r20 release)

> **NOTE**: Building samples and demos from the Intel® Distribution of OpenVINO™ toolkit package requires CMake\* 3.10 or higher.
### Build Steps

1. Download and unpack the Android NDK: https://developer.android.com/ndk/downloads. Let's assume that `~/Downloads` is used as a working folder.
   ```sh
   cd ~/Downloads
   wget https://dl.google.com/android/repository/android-ndk-r20-linux-x86_64.zip
   unzip android-ndk-r20-linux-x86_64.zip
   mv android-ndk-r20 android-ndk
   ```
2. Clone submodules:
   ```sh
   cd openvino
   git submodule update --init --recursive
   ```
3. Create a build folder:
   ```sh
   mkdir build
   ```
4. Change the working directory to `build` and run `cmake` to create makefiles. Then run `make`.
   ```sh
   cd build

   cmake .. \
       -DCMAKE_TOOLCHAIN_FILE=~/Downloads/android-ndk/build/cmake/android.toolchain.cmake \
       -DANDROID_ABI=x86_64 \
       -DANDROID_PLATFORM=21 \
       -DANDROID_STL=c++_shared

   make --jobs=$(nproc --all)
   ```
   In the options above:
   * `ANDROID_ABI` specifies the target architecture (`x86_64`).
   * `ANDROID_PLATFORM` specifies the Android API version.
   * `ANDROID_STL` specifies that the shared C++ runtime is used. Copy `~/Downloads/android-ndk/sources/cxx-stl/llvm-libc++/libs/x86_64/libc++_shared.so` from the Android NDK along with the built binaries.
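
For example, a deployment step might look like the sketch below; `<deploy_dir>` is a placeholder for wherever you stage the built binaries:

```sh
# hypothetical staging step; <deploy_dir> is a placeholder
cp ~/Downloads/android-ndk/sources/cxx-stl/llvm-libc++/libs/x86_64/libc++_shared.so <deploy_dir>/
```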
## Use Custom OpenCV Builds for Inference Engine

> **NOTE**: The recommended and tested version of OpenCV is 4.4.0.

Required versions of OpenCV packages are downloaded automatically during the
Inference Engine build. If the build script cannot find and download
the OpenCV package that is supported on your platform, you can use one of the
following options:

* Download the most suitable version from the list of available prebuilt
  packages at [https://download.01.org/opencv/2020/openvinotoolkit] from the
  `<release_version>/inference_engine` directory.
* Use a system-provided OpenCV package (e.g., by running the
  `apt install libopencv-dev` command). The following modules must be enabled:
  `imgcodecs`, `videoio`, `highgui`.

* Get the OpenCV package using a package manager: pip, conda, conan, etc. The
  package must have the development components included (header files and CMake
  scripts).

* Build OpenCV from source using the [build instructions](https://docs.opencv.org/master/df/d65/tutorial_table_of_content_introduction.html) on the OpenCV site.
After you get the custom OpenCV build, perform the following preparation steps
before running the Inference Engine build:

1. Set the `OpenCV_DIR` environment variable to the directory where the
   `OpenCVConfig.cmake` file of your custom OpenCV build is located.
2. Disable automatic downloading of the package by passing the `-DENABLE_OPENCV=OFF`
   option to the CMake-based build script for the Inference Engine.
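
A minimal sketch of those two steps, assuming a hypothetical custom build whose `OpenCVConfig.cmake` lives under `/opt/opencv-custom/cmake`:

```sh
export OpenCV_DIR=/opt/opencv-custom/cmake   # placeholder: directory containing OpenCVConfig.cmake
cmake -DENABLE_OPENCV=OFF -DCMAKE_BUILD_TYPE=Release ..
```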
## Add Inference Engine to Your Project

For CMake projects, set the `InferenceEngine_DIR` environment variable:

```sh
export InferenceEngine_DIR=/path/to/openvino/build/
```

Then you can find Inference Engine by `find_package`:

```cmake
find_package(InferenceEngine)
include_directories(${InferenceEngine_INCLUDE_DIRS})
target_link_libraries(${PROJECT_NAME} ${InferenceEngine_LIBRARIES} dl)
```
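
Putting this together, a minimal `CMakeLists.txt` might look like the sketch below; the project name and `main.cpp` source file are placeholders:

```cmake
cmake_minimum_required(VERSION 3.13)
project(ie_sample)  # placeholder project name

# locates the package via the InferenceEngine_DIR variable set above
find_package(InferenceEngine REQUIRED)

add_executable(${PROJECT_NAME} main.cpp)  # placeholder source file
include_directories(${InferenceEngine_INCLUDE_DIRS})
target_link_libraries(${PROJECT_NAME} ${InferenceEngine_LIBRARIES} dl)
```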
## (Optional) Additional Installation Steps for the Intel® Neural Compute Stick 2

> **NOTE**: These steps are only required if you want to perform inference on the
Intel® Neural Compute Stick 2 using the Inference Engine MYRIAD Plugin. See also
[Intel® Neural Compute Stick 2 Get Started].

### For Linux, Raspbian\* Stretch OS

1. Add the current Linux user to the `users` group; you will need to log out and
   log in for it to take effect:
   ```sh
   sudo usermod -a -G users "$(whoami)"
   ```
2. To perform inference on the Intel® Neural Compute Stick 2, install the USB
   rules by running the following commands:
   ```sh
   cat <<EOF > 97-myriad-usbboot.rules
   SUBSYSTEM=="usb", ATTRS{idProduct}=="2485", ATTRS{idVendor}=="03e7", GROUP="users", MODE="0666", ENV{ID_MM_DEVICE_IGNORE}="1"
   SUBSYSTEM=="usb", ATTRS{idProduct}=="f63b", ATTRS{idVendor}=="03e7", GROUP="users", MODE="0666", ENV{ID_MM_DEVICE_IGNORE}="1"
   EOF
   ```
   ```sh
   sudo cp 97-myriad-usbboot.rules /etc/udev/rules.d/
   sudo udevadm control --reload-rules
   rm 97-myriad-usbboot.rules
   ```
## Next Steps

Congratulations, you have built the Inference Engine. To get started with the
OpenVINO™ toolkit, proceed to the Get Started guides:

* [Get Started with Deep Learning Deployment Toolkit on Linux*](get-started-linux.md)
To enable some additional nGraph features and use your custom nGraph library with the OpenVINO™ binary package,
make sure of the following:
- The nGraph library was built with the same version as the one used in the Inference Engine.
- The nGraph library and the Inference Engine were built with the same compilers; otherwise you might face application binary interface (ABI) problems.

To prepare your custom nGraph library for distribution, which includes collecting all headers, copying
binaries, and so on, use the `install` CMake target.
This target collects all dependencies, prepares the nGraph package, and copies it to a separate directory.
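
For instance, the target can be invoked from the nGraph build directory as sketched here; the install prefix is a placeholder:

```sh
# placeholder prefix; run from the nGraph build directory
cmake -DCMAKE_INSTALL_PREFIX=/path/to/ngraph_dist ..
cmake --build . --target install
```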
## Additional Resources

* [OpenVINO™ Release Notes](https://software.intel.com/en-us/articles/OpenVINO-RelNotes)
* [Introduction to Intel® Deep Learning Deployment Toolkit](https://docs.openvinotoolkit.org/latest/_docs_IE_DG_Introduction.html)
* [Inference Engine Samples Overview](https://docs.openvinotoolkit.org/latest/_docs_IE_DG_Samples_Overview.html)
* [Inference Engine Developer Guide](https://docs.openvinotoolkit.org/latest/_docs_IE_DG_Deep_Learning_Inference_Engine_DevGuide.html)
* [Model Optimizer Developer Guide](https://docs.openvinotoolkit.org/latest/_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html)

\* Other names and brands may be claimed as the property of others.
[Intel® Distribution of OpenVINO™]:https://software.intel.com/en-us/openvino-toolkit
[CMake]:https://cmake.org/download/
[Install Intel® Graphics Compute Runtime for OpenCL™ Driver package 19.41.14441]:https://github.com/intel/compute-runtime/releases/tag/19.41.14441
[MKL-DNN repository]:https://github.com/intel/mkl-dnn/releases/download/v0.19/mklml_lnx_2019.0.5.20190502.tgz
[MKL-DNN repository for Windows]:https://github.com/intel/mkl-dnn/releases/download/v0.19/mklml_win_2019.0.5.20190502.zip
[OpenBLAS]:https://sourceforge.net/projects/openblas/files/v0.2.14/OpenBLAS-v0.2.14-Win64-int64.zip/download
[mingw64\* runtime dependencies]:https://sourceforge.net/projects/openblas/files/v0.2.14/mingw64_dll.zip/download
[https://download.01.org/opencv/2020/openvinotoolkit]:https://download.01.org/opencv/2020/openvinotoolkit
[build instructions]:https://docs.opencv.org/master/df/d65/tutorial_table_of_content_introduction.html
[driver package]:https://downloadcenter.intel.com/download/29335/Intel-Graphics-Windows-10-DCH-Drivers
[Intel® Neural Compute Stick 2 Get Started]:https://software.intel.com/en-us/neural-compute-stick/get-started