# Build OpenVINO™ Inference Engine
- [Introduction](#introduction)
- [Build on Linux\* Systems](#build-on-linux-systems)
  - [Software Requirements](#software-requirements)
  - [Build Steps](#build-steps)
  - [Additional Build Options](#additional-build-options)
- [Build for Raspbian\* Stretch OS](#build-for-raspbian-stretch-os)
  - [Hardware Requirements](#hardware-requirements)
  - [Native Compilation](#native-compilation)
  - [Cross Compilation Using Docker\*](#cross-compilation-using-docker)
  - [Additional Build Options](#additional-build-options-1)
- [Build on Windows\* Systems](#build-on-windows-systems)
  - [Software Requirements](#software-requirements-1)
  - [Build Steps](#build-steps-1)
  - [Additional Build Options](#additional-build-options-2)
  - [Building Inference Engine with Ninja\* Build System](#building-inference-engine-with-ninja-build-system)
- [Build on macOS\* Systems](#build-on-macos-systems)
  - [Software Requirements](#software-requirements-2)
  - [Build Steps](#build-steps-2)
  - [Additional Build Options](#additional-build-options-3)
- [Build on Android\* Systems](#build-on-android-systems)
  - [Software Requirements](#software-requirements-3)
  - [Build Steps](#build-steps-3)
- [Use Custom OpenCV Builds for Inference Engine](#use-custom-opencv-builds-for-inference-engine)
- [Add Inference Engine to Your Project](#add-inference-engine-to-your-project)
- [(Optional) Additional Installation Steps for the Intel® Movidius™ Neural Compute Stick and Neural Compute Stick 2](#optional-additional-installation-steps-for-the-intel-movidius-neural-compute-stick-and-neural-compute-stick-2)
  - [For Linux, Raspbian\* Stretch OS](#for-linux-raspbian-stretch-os)
- [Next Steps](#next-steps)
- [Additional Resources](#additional-resources)
## Introduction

The Inference Engine can infer models in different formats with various input
and output formats.

The open source version of Inference Engine includes the following plugins:
| PLUGIN               | DEVICE TYPES |
| ---------------------| -------------|
| CPU plugin           | Intel® Xeon® with Intel® AVX2 and AVX512, Intel® Core™ Processors with Intel® AVX2, Intel® Atom® Processors with Intel® SSE |
| GPU plugin           | Intel® Processor Graphics, including Intel® HD Graphics and Intel® Iris® Graphics |
| GNA plugin           | Intel® Speech Enabling Developer Kit, Amazon Alexa\* Premium Far-Field Developer Kit, Intel® Pentium® Silver processor J5005, Intel® Celeron® processor J4005, Intel® Core™ i3-8121U processor |
| MYRIAD plugin        | Intel® Movidius™ Neural Compute Stick powered by the Intel® Movidius™ Myriad™ 2, Intel® Neural Compute Stick 2 powered by the Intel® Movidius™ Myriad™ X |
| Heterogeneous plugin | The Heterogeneous plugin enables inference of one network on several Intel® devices. |
The Inference Engine plugin for Intel® FPGA is distributed only in binary form,
as part of the [Intel® Distribution of OpenVINO™].
## Build on Linux\* Systems

The software was validated on:
- Ubuntu\* 18.04 (64-bit) with default GCC\* 7.5.0
- Ubuntu\* 16.04 (64-bit) with default GCC\* 5.4.0
- CentOS\* 7.4 (64-bit) with default GCC\* 4.8.5
### Software Requirements
- [CMake]\* 3.11 or higher
- GCC\* 4.8 or higher to build the Inference Engine
- Python 3.5 or higher for the Inference Engine Python API wrapper
- (Optional) [Install Intel® Graphics Compute Runtime for OpenCL™ Driver package 19.41.14441]
### Build Steps

1. Clone submodules:
   ```sh
   cd openvino
   git submodule update --init --recursive
   ```
2. Install build dependencies using the `install_dependencies.sh` script in the
   project root folder:
   ```sh
   chmod +x install_dependencies.sh
   ./install_dependencies.sh
   ```
3. By default, the build enables the Inference Engine GPU plugin to infer models
   on your Intel® Processor Graphics. This requires you to
   [Install Intel® Graphics Compute Runtime for OpenCL™ Driver package 19.41.14441]
   before running the build. If you don't want to use the GPU plugin, use the
   `-DENABLE_CLDNN=OFF` CMake build option and skip the installation of the
   Intel® Graphics Compute Runtime for OpenCL™ Driver.
4. Create a build folder:
   ```sh
   mkdir build && cd build
   ```
5. Inference Engine uses a CMake-based build system. In the created `build`
   directory, run `cmake` to fetch project dependencies and create Unix
   makefiles, then run `make` to build the project:
   ```sh
   cmake -DCMAKE_BUILD_TYPE=Release ..
   make --jobs=$(nproc --all)
   ```
### Additional Build Options

You can use the following additional build options (a combined example follows the list):
- The default build uses an internal JIT GEMM implementation.

- To switch to an OpenBLAS\* implementation, use the `GEMM=OPENBLAS` option with the
  `BLAS_INCLUDE_DIRS` and `BLAS_LIBRARIES` CMake options to specify the paths to the
  OpenBLAS headers and library. For example, use the following options on CentOS\*:
  `-DGEMM=OPENBLAS -DBLAS_INCLUDE_DIRS=/usr/include/openblas -DBLAS_LIBRARIES=/usr/lib64/libopenblas.so.0`.
- To switch to the optimized MKL-ML\* GEMM implementation, use the `-DGEMM=MKL`
  and `-DMKLROOT=<path_to_MKL>` CMake options to specify a path to unpacked
  MKL-ML with the `include` and `lib` folders. The MKL-ML\* package can be downloaded
  from the Intel® [MKL-DNN repository].
- Threading Building Blocks (TBB) is used by default. To build the Inference
  Engine with OpenMP\* threading, set the `-DTHREADING=OMP` option.
- Required versions of TBB and OpenCV packages are downloaded automatically by
  the CMake-based script. If you want to use the automatically downloaded
  packages but you already have TBB or OpenCV packages configured in your
  environment, you may need to clean the `TBBROOT` and `OpenCV_DIR`
  environment variables before running the `cmake` command; otherwise they
  will not be downloaded and the build may fail if incompatible versions were
  installed.
- If the CMake-based build script cannot find and download the OpenCV package
  that is supported on your platform, or if you want to use a custom build of
  the OpenCV library, refer to the
  [Use Custom OpenCV Builds](#use-custom-opencv-builds-for-inference-engine)
  section.
- To build the Python API wrapper:
  1. Install all additional packages listed in the
     `/inference-engine/ie_bridges/python/requirements.txt` file:
     ```sh
     pip install -r requirements.txt
     ```
  2. Use the `-DENABLE_PYTHON=ON` option. To specify an exact Python version,
     use the following options:
     ```sh
     -DPYTHON_EXECUTABLE=`which python3.7` \
     -DPYTHON_LIBRARY=/usr/lib/x86_64-linux-gnu/libpython3.7m.so \
     -DPYTHON_INCLUDE_DIR=/usr/include/python3.7
     ```
- To switch the CPU and GPU plugins off/on, use the `cmake` options
  `-DENABLE_MKL_DNN=ON/OFF` and `-DENABLE_CLDNN=ON/OFF` respectively.
- nGraph-specific compilation options:
  `-DNGRAPH_ONNX_IMPORT_ENABLE=ON` enables the building of the nGraph ONNX importer.
  `-DNGRAPH_DEBUG_ENABLE=ON` enables additional debug prints.
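As an illustration, here is one possible configuration step combining several of the
options above. This is a sketch, not a required set of flags; OpenMP threading, the
disabled GPU plugin, and the Python wrapper are example choices only:
```sh
# Clear preconfigured package locations so compatible TBB/OpenCV versions are fetched
unset TBBROOT OpenCV_DIR
cmake -DCMAKE_BUILD_TYPE=Release \
      -DTHREADING=OMP \
      -DENABLE_CLDNN=OFF \
      -DENABLE_PYTHON=ON ..
make --jobs=$(nproc --all)
```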
## Build for Raspbian\* Stretch OS

> **NOTE**: Only the MYRIAD plugin is supported.
### Hardware Requirements
* Raspberry Pi\* 2 or 3 with Raspbian\* Stretch OS (32-bit). Check that its CPU supports the ARMv7 instruction set (the `uname -m` command returns `armv7l`).

> **NOTE**: Although the Raspberry Pi\* CPU is ARMv8, the 32-bit OS reports an ARMv7 CPU instruction set. The default `gcc` compiler applies the ARMv6 architecture flag for compatibility with older boards. For more information, run the `gcc -Q --help=target` command and refer to the description of the `-march=` option.
You can compile the Inference Engine for Raspberry Pi\* in one of two ways:
* [Native Compilation](#native-compilation), which is the simplest way, but time-consuming
* [Cross Compilation Using Docker*](#cross-compilation-using-docker), which is the recommended way
### Native Compilation
Native compilation of the Inference Engine is the most straightforward solution. However, it might take at least one hour to complete on Raspberry Pi\* 3.
1. Install dependencies:
   ```sh
   sudo apt-get update
   sudo apt-get install -y git cmake libusb-1.0-0-dev
   ```
2. Go to the cloned `openvino` repository:
   ```sh
   cd openvino
   ```
3. Initialize submodules:
   ```sh
   git submodule update --init --recursive
   ```
4. Create a build folder:
   ```sh
   mkdir build && cd build
   ```
5. Build the Inference Engine:
   ```sh
   cmake -DCMAKE_BUILD_TYPE=Release \
         -DENABLE_SSE42=OFF \
         -DTHREADING=SEQ \
         -DENABLE_GNA=OFF .. && make
   ```
### Cross Compilation Using Docker*

This compilation was tested on the following configuration:

* Host: Ubuntu\* 18.04 (64-bit, Intel® Core™ i7-6700K CPU @ 4.00GHz × 8)
* Target: Raspbian\* Stretch (32-bit, ARMv7, Raspberry Pi\* 3)
1. Install Docker\*:
   ```sh
   sudo apt-get install -y docker.io
   ```
2. Add the current user to the `docker` group:
   ```sh
   sudo usermod -a -G docker $USER
   ```
   Log out and log in for this to take effect.
3. Create a directory named `ie_cross_armhf` and add a text file named `Dockerfile`
   with the following content:
   ```docker
   FROM debian:stretch

   USER root

   RUN dpkg --add-architecture armhf && \
       apt-get update && \
       apt-get install -y --no-install-recommends \
       build-essential \
       crossbuild-essential-armhf \
       git \
       wget \
       libusb-1.0-0-dev:armhf \
       libgtk-3-dev:armhf \
       libavcodec-dev:armhf \
       libavformat-dev:armhf \
       libswscale-dev:armhf \
       libgstreamer1.0-dev:armhf \
       libgstreamer-plugins-base1.0-dev:armhf \
       libpython3-dev:armhf \
       python3-pip

   RUN wget https://www.cmake.org/files/v3.14/cmake-3.14.3.tar.gz && \
       tar xf cmake-3.14.3.tar.gz && \
       (cd cmake-3.14.3 && ./bootstrap --parallel=$(nproc --all) && make --jobs=$(nproc --all) && make install) && \
       rm -rf cmake-3.14.3 cmake-3.14.3.tar.gz
   ```
   It uses the Debian\* Stretch (Debian 9) OS for compilation because it is the base of Raspbian\* Stretch.
4. Build a Docker\* image:
   ```sh
   docker image build -t ie_cross_armhf ie_cross_armhf
   ```
5. Run the Docker\* container with the source code folder mounted from the host:
   ```sh
   docker run -it -v /absolute/path/to/openvino:/openvino ie_cross_armhf /bin/bash
   ```
6. While in the container:
   1. Go to the cloned `openvino` repository:
      ```sh
      cd openvino
      ```
   2. Create a build folder:
      ```sh
      mkdir build && cd build
      ```
   3. Build the Inference Engine:
      ```sh
      cmake -DCMAKE_BUILD_TYPE=Release \
            -DCMAKE_TOOLCHAIN_FILE="../cmake/arm.toolchain.cmake" \
            -DTHREADS_PTHREAD_ARG="-pthread" \
            -DENABLE_SSE42=OFF \
            -DTHREADING=SEQ \
            -DENABLE_GNA=OFF .. && make --jobs=$(nproc --all)
      ```
7. Press **Ctrl+D** to exit from the Docker container. You can find the resulting binaries
   in the `openvino/bin/armv7l/` directory and the OpenCV\*
   installation in the `openvino/inference-engine/temp` directory.
>**NOTE**: Native applications that link to the cross-compiled Inference Engine
library require the extra compilation flag `-march=armv7-a`; see the sketch below.
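For instance, a hypothetical single-file application could be compiled on the board as
follows. This is a sketch only: the include path, library directory, and source file
name are placeholders that depend on your actual build layout:
```sh
# Paths below assume the default output layout described above; adjust to your build
g++ -march=armv7-a my_app.cpp \
    -I/openvino/inference-engine/include \
    -L/openvino/bin/armv7l/Release/lib \
    -linference_engine \
    -o my_app
```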
### Additional Build Options

You can use the following additional build options:
- Required versions of OpenCV packages are downloaded automatically by the
  CMake-based script. If you want to use the automatically downloaded packages
  but you already have OpenCV packages configured in your environment,
  you may need to clean the `OpenCV_DIR` environment variable before running
  the `cmake` command; otherwise they won't be downloaded and the build may
  fail if incompatible versions were installed.
- If the CMake-based build script cannot find and download the OpenCV package
  that is supported on your platform, or if you want to use a custom build of
  the OpenCV library, see [Use Custom OpenCV Builds](#use-custom-opencv-builds-for-inference-engine).
- To build the Python API wrapper, install the `libpython3-dev:armhf` and `python3-pip`
  packages using `apt-get`, then install the `numpy` and `cython` Python modules
  via `pip3` (see the sketch after this list), and add the following options:
  ```sh
  -DENABLE_PYTHON=ON \
  -DPYTHON_EXECUTABLE=/usr/bin/python3.5 \
  -DPYTHON_LIBRARY=/usr/lib/arm-linux-gnueabihf/libpython3.5m.so \
  -DPYTHON_INCLUDE_DIR=/usr/include/python3.5
  ```
- nGraph-specific compilation options:
  `-DNGRAPH_ONNX_IMPORT_ENABLE=ON` enables the building of the nGraph ONNX importer.
  `-DNGRAPH_DEBUG_ENABLE=ON` enables additional debug prints.
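A minimal sketch of the Python prerequisites mentioned above, assuming you are inside
the cross-compilation container described earlier:
```sh
# Install the armhf Python development package and pip, then the Python build modules
apt-get install -y libpython3-dev:armhf python3-pip
pip3 install numpy cython
```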
## Build on Windows* Systems

The software was validated on:
- Microsoft\* Windows\* 10 (64-bit) with Visual Studio 2017 and Intel® C++
  Compiler 2018 Update 3
### Software Requirements
- [CMake]\* 3.11 or higher
- Microsoft\* Visual Studio 2017, 2019 or [Intel® C++ Compiler] 18.0
- (Optional) Intel® Graphics Driver for Windows\* (26.20) [driver package]
- Python 3.5 or higher for the Inference Engine Python API wrapper
### Build Steps

1. Clone submodules:
   ```sh
   git submodule update --init --recursive
   ```
2. By default, the build enables the Inference Engine GPU plugin to infer models
   on your Intel® Processor Graphics. This requires you to download and install
   the Intel® Graphics Driver for Windows (26.20) [driver package] before
   running the build. If you don't want to use the GPU plugin, use the
   `-DENABLE_CLDNN=OFF` CMake build option and skip the installation of the
   Intel® Graphics Driver.
3. Create a build directory:
   ```sh
   md "build" && cd "build"
   ```
4. In the `build` directory, run `cmake` to fetch project dependencies and
   generate a Visual Studio solution.

   For Microsoft\* Visual Studio 2017:
   ```sh
   cmake -G "Visual Studio 15 2017 Win64" -DCMAKE_BUILD_TYPE=Release ..
   ```

   For Microsoft\* Visual Studio 2019:
   ```sh
   cmake -G "Visual Studio 16 2019" -A x64 -DCMAKE_BUILD_TYPE=Release ..
   ```

   For Intel® C++ Compiler 18:
   ```sh
   cmake -G "Visual Studio 15 2017 Win64" -T "Intel C++ Compiler 18.0" ^
         -DCMAKE_BUILD_TYPE=Release ^
         -DICCLIB="C:\Program Files (x86)\IntelSWTools\compilers_and_libraries_2018\windows\compiler\lib" ..
   ```
5. Build the generated solution in Visual Studio or run
   `cmake --build . --config Release` to build from the command line.
6. Before running the samples, add the paths to the TBB and OpenCV binaries used for
   the build to the `%PATH%` environment variable. By default, TBB binaries are
   downloaded by the CMake-based script to the `<openvino_repo>/inference-engine/temp/tbb/bin`
   folder and OpenCV binaries to the `<openvino_repo>/inference-engine/temp/opencv_4.3.0/opencv/bin`
   folder; see the example below.
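For example, a hypothetical `cmd` session could extend `%PATH%` like this; adjust
`<openvino_repo>` and the OpenCV version to match your checkout:
```bat
:: Prepend the default TBB and OpenCV binary folders to PATH for the current session
set PATH=<openvino_repo>\inference-engine\temp\tbb\bin;%PATH%
set PATH=<openvino_repo>\inference-engine\temp\opencv_4.3.0\opencv\bin;%PATH%
```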
### Additional Build Options

- The internal JIT GEMM implementation is used by default.

- To switch to the OpenBLAS GEMM implementation, use the `-DGEMM=OPENBLAS` CMake
  option and specify the path to OpenBLAS using the `-DBLAS_INCLUDE_DIRS=<OPENBLAS_DIR>\include`
  and `-DBLAS_LIBRARIES=<OPENBLAS_DIR>\lib\libopenblas.dll.a` options. Download
  a prebuilt OpenBLAS\* package via the [OpenBLAS] link. mingw64\* runtime
  dependencies can be downloaded via the [mingw64\* runtime dependencies] link.
- To switch to the optimized MKL-ML\* GEMM implementation, use the
  `-DGEMM=MKL` and `-DMKLROOT=<path_to_MKL>` CMake options to specify a path to
  unpacked MKL-ML with the `include` and `lib` folders. The MKL-ML\* package can be
  downloaded from the Intel® [MKL-DNN repository for Windows].
- Threading Building Blocks (TBB) is used by default. To build the Inference
  Engine with OpenMP\* threading, set the `-DTHREADING=OMP` option.

- Required versions of TBB and OpenCV packages are downloaded automatically by
  the CMake-based script. If you want to use the automatically downloaded
  packages but you already have TBB or OpenCV packages configured in your
  environment, you may need to clean the `TBBROOT` and `OpenCV_DIR`
  environment variables before running the `cmake` command; otherwise they won't
  be downloaded and the build may fail if incompatible versions were installed.
- If the CMake-based build script cannot find and download the OpenCV package
  that is supported on your platform, or if you want to use a custom build of
  the OpenCV library, refer to the [Use Custom OpenCV Builds](#use-custom-opencv-builds-for-inference-engine)
  section.
- To switch the CPU and GPU plugins off/on, use the `cmake` options
  `-DENABLE_MKL_DNN=ON/OFF` and `-DENABLE_CLDNN=ON/OFF` respectively.
- To build the Python API wrapper, use the `-DENABLE_PYTHON=ON` option. To
  specify an exact Python version, use the following options:
  ```sh
  -DPYTHON_EXECUTABLE="C:\Program Files\Python37\python.exe" ^
  -DPYTHON_LIBRARY="C:\Program Files\Python37\libs\python37.lib" ^
  -DPYTHON_INCLUDE_DIR="C:\Program Files\Python37\include"
  ```
- nGraph-specific compilation options:
  `-DNGRAPH_ONNX_IMPORT_ENABLE=ON` enables the building of the nGraph ONNX importer.
  `-DNGRAPH_DEBUG_ENABLE=ON` enables additional debug prints.
### Building Inference Engine with Ninja* Build System

```sh
call "C:\Program Files (x86)\IntelSWTools\compilers_and_libraries_2018\windows\bin\ipsxe-comp-vars.bat" intel64 vs2017
set CXX=icl
set CC=icl
:: clean TBBROOT value set by ipsxe-comp-vars.bat, required TBB package will be downloaded by openvino cmake script
set TBBROOT=
cmake -G Ninja -Wno-dev -DCMAKE_BUILD_TYPE=Release ..
cmake --build . --config Release
```
## Build on macOS* Systems

> **NOTE**: The current version of the OpenVINO™ toolkit for macOS* supports
inference on Intel CPUs only.

The software was validated on:
- macOS\* 10.14, 64-bit
### Software Requirements

- [CMake]\* 3.11 or higher
- Clang\* compiler from Xcode\* 10.1 or higher
- Python\* 3.5 or higher for the Inference Engine Python API wrapper
### Build Steps

1. Clone submodules:
   ```sh
   cd openvino
   git submodule update --init --recursive
   ```
2. Install build dependencies using the `install_dependencies.sh` script in the
   project root folder:
   ```sh
   chmod +x install_dependencies.sh
   ./install_dependencies.sh
   ```
3. Create a build folder:
   ```sh
   mkdir build && cd build
   ```
4. Inference Engine uses a CMake-based build system. In the created `build`
   directory, run `cmake` to fetch project dependencies and create Unix makefiles,
   then run `make` to build the project:
   ```sh
   cmake -DCMAKE_BUILD_TYPE=Release ..
   make --jobs=$(nproc --all)
   ```
### Additional Build Options

You can use the following additional build options:

- The internal JIT GEMM implementation is used by default.
- To switch to the optimized MKL-ML\* GEMM implementation, use the `-DGEMM=MKL` and
  `-DMKLROOT=<path_to_MKL>` CMake options to specify a path to unpacked MKL-ML
  with the `include` and `lib` folders. The MKL-ML\* package for Mac can be downloaded
  from [here](https://github.com/intel/mkl-dnn/releases/download/v0.19/mklml_mac_2019.0.5.20190502.tgz).
- Threading Building Blocks (TBB) is used by default. To build the Inference
  Engine with OpenMP\* threading, set the `-DTHREADING=OMP` option.

- Required versions of TBB and OpenCV packages are downloaded automatically by
  the CMake-based script. If you want to use the automatically downloaded
  packages but you already have TBB or OpenCV packages configured in your
  environment, you may need to clean the `TBBROOT` and `OpenCV_DIR`
  environment variables before running the `cmake` command; otherwise they won't
  be downloaded and the build may fail if incompatible versions were installed.
- If the CMake-based build script cannot find and download the OpenCV package
  that is supported on your platform, or if you want to use a custom build of
  the OpenCV library, refer to the
  [Use Custom OpenCV Builds](#use-custom-opencv-builds-for-inference-engine)
  section.
- To build the Python API wrapper, use the `-DENABLE_PYTHON=ON` option. To
  specify an exact Python version, use the following options:
  ```sh
  -DPYTHON_EXECUTABLE=/Library/Frameworks/Python.framework/Versions/3.7/bin/python3.7 \
  -DPYTHON_LIBRARY=/Library/Frameworks/Python.framework/Versions/3.7/lib/libpython3.7m.dylib \
  -DPYTHON_INCLUDE_DIR=/Library/Frameworks/Python.framework/Versions/3.7/include/python3.7m
  ```
- nGraph-specific compilation options:
  `-DNGRAPH_ONNX_IMPORT_ENABLE=ON` enables the building of the nGraph ONNX importer.
  `-DNGRAPH_DEBUG_ENABLE=ON` enables additional debug prints.
## Build on Android* Systems

This section describes how to build the Inference Engine for Android x86 (64-bit) operating systems.

### Software Requirements

- [CMake]\* 3.11 or higher
- Android NDK (this guide has been validated with the r20 release)
### Build Steps

1. Download and unpack the Android NDK: https://developer.android.com/ndk/downloads. Let's assume that `~/Downloads` is used as a working folder.
   ```sh
   cd ~/Downloads
   wget https://dl.google.com/android/repository/android-ndk-r20-linux-x86_64.zip
   unzip android-ndk-r20-linux-x86_64.zip
   mv android-ndk-r20 android-ndk
   ```
2. Clone submodules:
   ```sh
   cd openvino
   git submodule update --init --recursive
   ```
3. Create a build folder:
   ```sh
   mkdir build
   ```
4. Change working directory to `build` and run `cmake` to create makefiles. Then run `make`.
   ```sh
   cd build

   cmake .. \
     -DCMAKE_TOOLCHAIN_FILE=~/Downloads/android-ndk/build/cmake/android.toolchain.cmake \
     -DANDROID_ABI=x86_64 \
     -DANDROID_PLATFORM=21 \
     -DANDROID_STL=c++_shared \
     -DENABLE_OPENCV=OFF

   make --jobs=$(nproc --all)
   ```
   where:
   * `ANDROID_ABI` specifies the target architecture (`x86_64`)
   * `ANDROID_PLATFORM` specifies the Android API version
   * `ANDROID_STL` specifies that a shared C++ runtime is used. Copy `~/Downloads/android-ndk/sources/cxx-stl/llvm-libc++/libs/x86_64/libc++_shared.so` from the Android NDK along with the built binaries (see the sketch below)
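A minimal sketch of that copy step, assuming a hypothetical `deploy/` staging folder
for the binaries you intend to push to the device:
```sh
# Stage the shared C++ runtime next to the built binaries
mkdir -p deploy
cp ~/Downloads/android-ndk/sources/cxx-stl/llvm-libc++/libs/x86_64/libc++_shared.so deploy/
```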
## Use Custom OpenCV Builds for Inference Engine

> **NOTE**: The recommended and tested version of OpenCV is 4.4.0.

Required versions of OpenCV packages are downloaded automatically during the
Inference Engine library build. If the build script cannot find and download
the OpenCV package that is supported on your platform, you can use one of the
following options:

* Download the most suitable version from the list of available prebuilt
  packages at [https://download.01.org/opencv/2020/openvinotoolkit] from the
  `<release_version>/inference_engine` directory.

* Use a system-provided OpenCV package (e.g., by running the
  `apt install libopencv-dev` command). The following modules must be enabled:
  `imgcodecs`, `videoio`, `highgui`.

* Get the OpenCV package using a package manager: pip, conda, conan etc. The
  package must have the development components included (header files and CMake
  scripts).

* Build OpenCV from source using the [build instructions](https://docs.opencv.org/master/df/d65/tutorial_table_of_content_introduction.html) on the OpenCV site.
After you have the custom OpenCV build, perform the following preparation steps
before running the Inference Engine build:

1. Set the `OpenCV_DIR` environment variable to the directory where the
   `OpenCVConfig.cmake` file of your custom OpenCV build is located.
2. Disable automatic package downloading by passing the `-DENABLE_OPENCV=OFF`
   option to the CMake-based build script for the Inference Engine.
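For example, on Linux these two steps could look like the following; the path is a
placeholder for your actual OpenCV build location:
```sh
# Folder that contains OpenCVConfig.cmake for the custom build
export OpenCV_DIR=/path/to/custom-opencv/build
cmake -DENABLE_OPENCV=OFF -DCMAKE_BUILD_TYPE=Release ..
```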
## Add Inference Engine to Your Project

For CMake projects, set the `InferenceEngine_DIR` environment variable:
```sh
export InferenceEngine_DIR=/path/to/openvino/build/
```

Then you can find Inference Engine by `find_package`:
```cmake
find_package(InferenceEngine)
include_directories(${InferenceEngine_INCLUDE_DIRS})
target_link_libraries(${PROJECT_NAME} ${InferenceEngine_LIBRARIES} dl)
```
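Putting it together, a minimal `CMakeLists.txt` for a hypothetical single-source sample
might look like this; the project name and `main.cpp` are placeholders:
```cmake
cmake_minimum_required(VERSION 3.11)
project(ie_sample)

# Locate the package pointed to by the InferenceEngine_DIR environment variable
find_package(InferenceEngine REQUIRED)

add_executable(${PROJECT_NAME} main.cpp)
include_directories(${InferenceEngine_INCLUDE_DIRS})
target_link_libraries(${PROJECT_NAME} ${InferenceEngine_LIBRARIES} dl)
```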
## (Optional) Additional Installation Steps for the Intel® Movidius™ Neural Compute Stick and Neural Compute Stick 2

> **NOTE**: These steps are only required if you want to perform inference on the
Intel® Movidius™ Neural Compute Stick or the Intel® Neural Compute Stick 2 using
the Inference Engine MYRIAD Plugin. See also [Intel® Neural Compute Stick 2 Get Started].

### For Linux, Raspbian\* Stretch OS

1. Add the current Linux user to the `users` group; you will need to log out and
   log in for it to take effect:
   ```sh
   sudo usermod -a -G users "$(whoami)"
   ```
2. To perform inference on the Intel® Movidius™ Neural Compute Stick and Intel®
   Neural Compute Stick 2, install the USB rules as follows:
   ```sh
   cat <<EOF > 97-myriad-usbboot.rules
   SUBSYSTEM=="usb", ATTRS{idProduct}=="2150", ATTRS{idVendor}=="03e7", GROUP="users", MODE="0666", ENV{ID_MM_DEVICE_IGNORE}="1"
   SUBSYSTEM=="usb", ATTRS{idProduct}=="2485", ATTRS{idVendor}=="03e7", GROUP="users", MODE="0666", ENV{ID_MM_DEVICE_IGNORE}="1"
   SUBSYSTEM=="usb", ATTRS{idProduct}=="f63b", ATTRS{idVendor}=="03e7", GROUP="users", MODE="0666", ENV{ID_MM_DEVICE_IGNORE}="1"
   EOF
   ```
   ```sh
   sudo cp 97-myriad-usbboot.rules /etc/udev/rules.d/
   sudo udevadm control --reload-rules
   sudo udevadm trigger
   sudo ldconfig
   rm 97-myriad-usbboot.rules
   ```
## Next Steps

Congratulations, you have built the Inference Engine. To get started with the
OpenVINO™ toolkit, proceed to the Get Started guides:

* [Get Started with Deep Learning Deployment Toolkit on Linux*](get-started-linux.md)
To enable additional nGraph features and use your custom nGraph library with the OpenVINO™ binary package,
make sure of the following:
- The nGraph library was built with the same version as the one used by the Inference Engine.
- The nGraph library and the Inference Engine were built with the same compilers. Otherwise you might face application binary interface (ABI) problems.

To prepare your custom nGraph library for distribution, which includes collecting all headers, copying
binaries, and so on, use the `install` CMake target.
This target collects all dependencies, prepares the nGraph package, and copies it to a separate directory.
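As a sketch, running the `install` target from an existing nGraph build directory might
look like this; the install prefix is a placeholder and assumes a standard CMake setup:
```sh
# Configure the destination directory, then build and run the install target
cmake -DCMAKE_INSTALL_PREFIX=/path/to/ngraph-package ..
cmake --build . --target install
```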
## Additional Resources

* [OpenVINO™ Release Notes](https://software.intel.com/en-us/articles/OpenVINO-RelNotes)
* [Introduction to Intel® Deep Learning Deployment Toolkit](https://docs.openvinotoolkit.org/latest/_docs_IE_DG_Introduction.html)
* [Inference Engine Samples Overview](https://docs.openvinotoolkit.org/latest/_docs_IE_DG_Samples_Overview.html)
* [Inference Engine Developer Guide](https://docs.openvinotoolkit.org/latest/_docs_IE_DG_Deep_Learning_Inference_Engine_DevGuide.html)
* [Model Optimizer Developer Guide](https://docs.openvinotoolkit.org/latest/_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html)
\* Other names and brands may be claimed as the property of others.
[Intel® Distribution of OpenVINO™]:https://software.intel.com/en-us/openvino-toolkit
[CMake]:https://cmake.org/download/
[Install Intel® Graphics Compute Runtime for OpenCL™ Driver package 19.41.14441]:https://github.com/intel/compute-runtime/releases/tag/19.41.14441
[MKL-DNN repository]:https://github.com/intel/mkl-dnn/releases/download/v0.19/mklml_lnx_2019.0.5.20190502.tgz
[MKL-DNN repository for Windows]:https://github.com/intel/mkl-dnn/releases/download/v0.19/mklml_win_2019.0.5.20190502.zip
[OpenBLAS]:https://sourceforge.net/projects/openblas/files/v0.2.14/OpenBLAS-v0.2.14-Win64-int64.zip/download
[mingw64\* runtime dependencies]:https://sourceforge.net/projects/openblas/files/v0.2.14/mingw64_dll.zip/download
[https://download.01.org/opencv/2020/openvinotoolkit]:https://download.01.org/opencv/2020/openvinotoolkit
[build instructions]:https://docs.opencv.org/master/df/d65/tutorial_table_of_content_introduction.html
[driver package]:https://downloadcenter.intel.com/download/29335/Intel-Graphics-Windows-10-DCH-Drivers
[Intel® Neural Compute Stick 2 Get Started]:https://software.intel.com/en-us/neural-compute-stick/get-started
[Intel® C++ Compiler]:https://software.intel.com/en-us/intel-parallel-studio-xe