# Build OpenVINO™ Inference Engine

- [Introduction](#introduction)
- [Build on Linux\* Systems](#build-on-linux-systems)
  - [Software Requirements](#software-requirements)
  - [Build Steps](#build-steps)
  - [Additional Build Options](#additional-build-options)
- [Build for Raspbian\* Stretch OS](#build-for-raspbian-stretch-os)
  - [Hardware Requirements](#hardware-requirements)
  - [Native Compilation](#native-compilation)
  - [Cross Compilation Using Docker\*](#cross-compilation-using-docker)
  - [Additional Build Options](#additional-build-options-1)
- [Build on Windows\* Systems](#build-on-windows-systems)
  - [Software Requirements](#software-requirements-1)
  - [Build Steps](#build-steps-1)
  - [Additional Build Options](#additional-build-options-2)
  - [Building Inference Engine with Ninja\* Build System](#building-inference-engine-with-ninja-build-system)
- [Build on macOS\* Systems](#build-on-macos-systems)
  - [Software Requirements](#software-requirements-2)
  - [Build Steps](#build-steps-2)
  - [Additional Build Options](#additional-build-options-3)
- [Build on Android\* Systems](#build-on-android-systems)
  - [Software Requirements](#software-requirements-3)
  - [Build Steps](#build-steps-3)
- [Use Custom OpenCV Builds for Inference Engine](#use-custom-opencv-builds-for-inference-engine)
- [Add Inference Engine to Your Project](#add-inference-engine-to-your-project)
- [(Optional) Additional Installation Steps for the Intel® Movidius™ Neural Compute Stick and Neural Compute Stick 2](#optional-additional-installation-steps-for-the-intel-movidius-neural-compute-stick-and-neural-compute-stick-2)
  - [For Linux, Raspbian Stretch\* OS](#for-linux-raspbian-stretch-os)
- [Next Steps](#next-steps)
- [Additional Resources](#additional-resources)
## Introduction

The Inference Engine can infer models in different formats with various input and output formats.

The open source version of the Inference Engine includes the following plugins:
| PLUGIN               | DEVICE TYPES |
| ---------------------| -------------|
| CPU plugin           | Intel® Xeon® with Intel® AVX2 and AVX512, Intel® Core™ Processors with Intel® AVX2, Intel® Atom® Processors with Intel® SSE |
| GPU plugin           | Intel® Processor Graphics, including Intel® HD Graphics and Intel® Iris® Graphics |
| GNA plugin           | Intel® Speech Enabling Developer Kit, Amazon Alexa\* Premium Far-Field Developer Kit, Intel® Pentium® Silver processor J5005, Intel® Celeron® processor J4005, Intel® Core™ i3-8121U processor |
| MYRIAD plugin        | Intel® Movidius™ Neural Compute Stick powered by the Intel® Movidius™ Myriad™ 2, Intel® Neural Compute Stick 2 powered by the Intel® Movidius™ Myriad™ X |
| Heterogeneous plugin | Enables inference of a single network across several Intel® devices. |

The Inference Engine plugin for Intel® FPGA is distributed only in binary form, as part of the [Intel® Distribution of OpenVINO™].
## Build on Linux\* Systems

The software was validated on:
- Ubuntu\* 18.04 (64-bit) with default GCC\* 7.5.0
- Ubuntu\* 20.04 (64-bit) with default GCC\* 9.3.0
- CentOS\* 7.6 (64-bit) with default GCC\* 4.8.5

### Software Requirements
- [CMake]\* 3.13 or higher
- GCC\* 4.8 or higher to build the Inference Engine
- Python 3.6 or higher for the Inference Engine Python API wrapper
- (Optional) [Install Intel® Graphics Compute Runtime for OpenCL™ Driver package 19.41.14441].

> **NOTE**: Building samples and demos from the Intel® Distribution of OpenVINO™ toolkit package requires CMake\* 3.10 or higher.
### Build Steps

1. Clone submodules:
   ```sh
   git submodule update --init --recursive
   ```
2. Install build dependencies using the `install_dependencies.sh` script in the project root folder:
   ```sh
   chmod +x install_dependencies.sh
   ./install_dependencies.sh
   ```
3. By default, the build enables the Inference Engine GPU plugin to infer models on your Intel® Processor Graphics. This requires you to [Install Intel® Graphics Compute Runtime for OpenCL™ Driver package 19.41.14441] before running the build. If you don't want to use the GPU plugin, use the `-DENABLE_CLDNN=OFF` CMake build option and skip the installation of the Intel® Graphics Compute Runtime for OpenCL™ Driver.
4. Create a build folder:
   ```sh
   mkdir build && cd build
   ```
5. The Inference Engine uses a CMake-based build system. In the created `build` directory, run `cmake` to fetch project dependencies and create Unix makefiles, then run `make` to build the project:
   ```sh
   cmake -DCMAKE_BUILD_TYPE=Release ..
   make --jobs=$(nproc --all)
   ```
### Additional Build Options

You can use the following additional build options:
- The default build uses an internal JIT GEMM implementation.

- To switch to an OpenBLAS\* implementation, use the `GEMM=OPENBLAS` option with the `BLAS_INCLUDE_DIRS` and `BLAS_LIBRARIES` CMake options to specify paths to the OpenBLAS headers and library. For example, use the following options on CentOS\*: `-DGEMM=OPENBLAS -DBLAS_INCLUDE_DIRS=/usr/include/openblas -DBLAS_LIBRARIES=/usr/lib64/libopenblas.so.0`.
- To switch to the optimized MKL-ML\* GEMM implementation, use the `-DGEMM=MKL` and `-DMKLROOT=<path_to_MKL>` CMake options to specify a path to the unpacked MKL-ML package with the `include` and `lib` folders. The MKL-ML\* package can be downloaded from the Intel® [MKL-DNN repository].
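  For example, a minimal sketch on Linux (the archive URL is the [MKL-DNN repository] link below; this assumes the archive unpacks into a folder of the same name):
  ```sh
  wget https://github.com/intel/mkl-dnn/releases/download/v0.19/mklml_lnx_2019.0.5.20190502.tgz
  tar -xzf mklml_lnx_2019.0.5.20190502.tgz
  cmake -DCMAKE_BUILD_TYPE=Release -DGEMM=MKL -DMKLROOT=$(pwd)/mklml_lnx_2019.0.5.20190502 ..
  ```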
- Threading Building Blocks (TBB) is used by default. To build the Inference Engine with OpenMP\* threading, set the `-DTHREADING=OMP` option.
- Required versions of TBB and OpenCV packages are downloaded automatically by the CMake-based script. If you want to use the automatically downloaded packages but already have TBB or OpenCV packages installed and configured in your environment, you may need to clean the `TBBROOT` and `OpenCV_DIR` environment variables before running the `cmake` command; otherwise the packages will not be downloaded and the build may fail if incompatible versions were installed.
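  For example, a minimal sketch of cleaning the variables before configuring:
  ```sh
  unset TBBROOT OpenCV_DIR    # let the CMake script download compatible TBB and OpenCV packages
  cmake -DCMAKE_BUILD_TYPE=Release ..
  ```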
- If the CMake-based build script cannot find and download the OpenCV package that is supported on your platform, or if you want to use a custom build of the OpenCV library, refer to the [Use Custom OpenCV Builds](#use-custom-opencv-builds-for-inference-engine) section for details.
- To build the Python API wrapper:
  1. Install all additional packages listed in the `/inference-engine/ie_bridges/python/requirements.txt` file:
     ```sh
     pip install -r requirements.txt
     ```
  2. Use the `-DENABLE_PYTHON=ON` option. To specify an exact Python version, use the following options:
     ```sh
     -DPYTHON_EXECUTABLE=`which python3.7` \
     -DPYTHON_LIBRARY=/usr/lib/x86_64-linux-gnu/libpython3.7m.so \
     -DPYTHON_INCLUDE_DIR=/usr/include/python3.7
     ```
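     A complete configure line combining these options might look like the following sketch (the Python 3.7 paths repeat the examples above and depend on your distribution):
     ```sh
     cmake -DCMAKE_BUILD_TYPE=Release \
           -DENABLE_PYTHON=ON \
           -DPYTHON_EXECUTABLE=`which python3.7` \
           -DPYTHON_LIBRARY=/usr/lib/x86_64-linux-gnu/libpython3.7m.so \
           -DPYTHON_INCLUDE_DIR=/usr/include/python3.7 ..
     ```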
- To switch the CPU and GPU plugins off/on, use the `cmake` options `-DENABLE_MKL_DNN=ON/OFF` and `-DENABLE_CLDNN=ON/OFF` respectively.
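  For example, a sketch of a CPU-only configuration that disables the GPU plugin:
  ```sh
  cmake -DCMAKE_BUILD_TYPE=Release -DENABLE_MKL_DNN=ON -DENABLE_CLDNN=OFF ..
  ```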
- nGraph-specific compilation options:
  `-DNGRAPH_ONNX_IMPORT_ENABLE=ON` enables the building of the nGraph ONNX importer.
  `-DNGRAPH_DEBUG_ENABLE=ON` enables additional debug prints.
## Build for Raspbian\* Stretch OS

> **NOTE**: Only the MYRIAD plugin is supported.

### Hardware Requirements
* Raspberry Pi\* 2 or 3 with Raspbian\* Stretch OS (32-bit). Check that its CPU supports the ARMv7 instruction set (the `uname -m` command returns `armv7l`).

> **NOTE**: Although the Raspberry Pi\* CPU is ARMv8, the 32-bit OS detects an ARMv7 CPU instruction set. The default `gcc` compiler applies the ARMv6 architecture flag for compatibility with older boards. For more information, run the `gcc -Q --help=target` command and refer to the description of the `-march=` option.
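A quick way to see which architecture flag the compiler applies by default:
```sh
gcc -Q --help=target | grep -- '-march='   # shows the effective -march value
```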
You can compile the Inference Engine for Raspberry Pi\* in one of two ways:
* [Native Compilation](#native-compilation), which is the simplest way but is time-consuming
* [Cross Compilation Using Docker*](#cross-compilation-using-docker), which is the recommended way
### Native Compilation
Native compilation of the Inference Engine is the most straightforward solution. However, it might take at least one hour to complete on Raspberry Pi\* 3.

1. Install dependencies:
   ```sh
   sudo apt-get install -y git cmake libusb-1.0-0-dev
   ```
2. Go to the cloned `openvino` repository:
   ```sh
   cd openvino
   ```
3. Initialize submodules:
   ```sh
   git submodule update --init --recursive
   ```
4. Create a build folder:
   ```sh
   mkdir build && cd build
   ```
5. Build the Inference Engine:
   ```sh
   cmake -DCMAKE_BUILD_TYPE=Release \
         -DENABLE_GNA=OFF .. && make
   ```
### Cross Compilation Using Docker\*

This compilation was tested on the following configuration:

* Host: Ubuntu\* 18.04 (64-bit, Intel® Core™ i7-6700K CPU @ 4.00GHz × 8)
* Target: Raspbian\* Stretch (32-bit, ARMv7, Raspberry Pi\* 3)

1. Install Docker\*:
   ```sh
   sudo apt-get install -y docker.io
   ```
2. Add the current user to the `docker` group:
   ```sh
   sudo usermod -a -G docker $USER
   ```
   Log out and log in for this to take effect.
3. Create a directory named `ie_cross_armhf` and add a text file named `Dockerfile` with the following content:
   ```docker
   FROM debian:stretch

   RUN dpkg --add-architecture armhf && \
       apt-get update && \
       apt-get install -y --no-install-recommends \
       build-essential \
       crossbuild-essential-armhf \
       wget \
       libusb-1.0-0-dev:armhf \
       libavcodec-dev:armhf \
       libavformat-dev:armhf \
       libswscale-dev:armhf \
       libgstreamer1.0-dev:armhf \
       libgstreamer-plugins-base1.0-dev:armhf \
       libpython3-dev:armhf

   RUN wget https://www.cmake.org/files/v3.14/cmake-3.14.3.tar.gz && \
       tar xf cmake-3.14.3.tar.gz && \
       (cd cmake-3.14.3 && ./bootstrap --parallel=$(nproc --all) && make --jobs=$(nproc --all) && make install) && \
       rm -rf cmake-3.14.3 cmake-3.14.3.tar.gz
   ```
   The image uses the Debian\* Stretch (Debian 9) OS for compilation because it is the base of Raspbian\* Stretch.
4. Build a Docker\* image:
   ```sh
   docker image build -t ie_cross_armhf ie_cross_armhf
   ```
5. Run the Docker\* container with the source code folder mounted from the host:
   ```sh
   docker run -it -v /absolute/path/to/openvino:/openvino ie_cross_armhf /bin/bash
   ```
6. While in the container:
   1. Go to the cloned `openvino` repository:
      ```sh
      cd /openvino
      ```
   2. Create a build folder:
      ```sh
      mkdir build && cd build
      ```
   3. Build the Inference Engine:
      ```sh
      cmake -DCMAKE_BUILD_TYPE=Release \
            -DCMAKE_TOOLCHAIN_FILE="../cmake/arm.toolchain.cmake" \
            -DTHREADS_PTHREAD_ARG="-pthread" \
            -DENABLE_GNA=OFF .. && make --jobs=$(nproc --all)
      ```
7. Press **Ctrl+D** to exit the Docker container. You can find the resulting binaries in the `openvino/bin/armv7l/` directory and the OpenCV\* installation in the `openvino/inference-engine/temp` directory.
> **NOTE**: Native applications that link to the cross-compiled Inference Engine library require the extra compilation flag `-march=armv7-a`.
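A minimal sketch of such a compilation on the device (the include and library paths are assumptions based on the output locations above; `sample.cpp` is a hypothetical source file):
```sh
g++ -march=armv7-a sample.cpp \
    -I openvino/inference-engine/include \
    -L openvino/bin/armv7l/Release/lib \
    -linference_engine -o sample
```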
### Additional Build Options

You can use the following additional build options:

- Required versions of OpenCV packages are downloaded automatically by the CMake-based script. If you want to use the automatically downloaded packages but already have OpenCV packages installed and configured in your environment, you may need to clean the `OpenCV_DIR` environment variable before running the `cmake` command; otherwise they won't be downloaded and the build may fail if incompatible versions were installed.

- If the CMake-based build script cannot find and download the OpenCV package that is supported on your platform, or if you want to use a custom build of the OpenCV library, see: [Use Custom OpenCV Builds](#use-custom-opencv-builds-for-inference-engine)
- To build the Python API wrapper, install the `libpython3-dev:armhf` and `python3-pip` packages using `apt-get`; then install the `numpy` and `cython` Python modules via `pip3` and add the following options:
  ```sh
  -DENABLE_PYTHON=ON \
  -DPYTHON_EXECUTABLE=/usr/bin/python3.5 \
  -DPYTHON_LIBRARY=/usr/lib/arm-linux-gnueabihf/libpython3.5m.so \
  -DPYTHON_INCLUDE_DIR=/usr/include/python3.5
  ```
- nGraph-specific compilation options:
  `-DNGRAPH_ONNX_IMPORT_ENABLE=ON` enables the building of the nGraph ONNX importer.
  `-DNGRAPH_DEBUG_ENABLE=ON` enables additional debug prints.
## Build on Windows\* Systems

The software was validated on:
- Microsoft\* Windows\* 10 (64-bit) with Visual Studio 2019

### Software Requirements
- [CMake]\* 3.13 or higher
- Microsoft\* Visual Studio 2017 or 2019
- (Optional) Intel® Graphics Driver for Windows\* (26.20) [driver package]
- Python 3.6 or higher for the Inference Engine Python API wrapper

> **NOTE**: Building samples and demos from the Intel® Distribution of OpenVINO™ toolkit package requires CMake\* 3.10 or higher.
### Build Steps

1. Clone submodules:
   ```sh
   git submodule update --init --recursive
   ```
2. By default, the build enables the Inference Engine GPU plugin to infer models on your Intel® Processor Graphics. This requires you to download and install the Intel® Graphics Driver for Windows\* (26.20) [driver package] before running the build. If you don't want to use the GPU plugin, use the `-DENABLE_CLDNN=OFF` CMake build option and skip the installation of the Intel® Graphics Driver.
3. Create a build directory:
   ```sh
   mkdir build
   ```
4. In the `build` directory, run `cmake` to fetch project dependencies and generate a Visual Studio solution.

   For Microsoft\* Visual Studio 2017:
   ```sh
   cmake -G "Visual Studio 15 2017 Win64" -DCMAKE_BUILD_TYPE=Release ..
   ```
   For Microsoft\* Visual Studio 2019:
   ```sh
   cmake -G "Visual Studio 16 2019" -A x64 -DCMAKE_BUILD_TYPE=Release ..
   ```
5. Build the generated solution in Visual Studio or run `cmake --build . --config Release` to build from the command line.
6. Before running the samples, add paths to the TBB and OpenCV binaries used for the build to the `%PATH%` environment variable. By default, TBB binaries are downloaded by the CMake-based script to the `<openvino_repo>/inference-engine/temp/tbb/bin` folder and OpenCV binaries to the `<openvino_repo>/inference-engine/temp/opencv_4.5.0/opencv/bin` folder.
### Additional Build Options

- Internal JIT GEMM implementation is used by default.

- To switch to an OpenBLAS GEMM implementation, use the `-DGEMM=OPENBLAS` CMake option and specify the path to OpenBLAS using the `-DBLAS_INCLUDE_DIRS=<OPENBLAS_DIR>\include` and `-DBLAS_LIBRARIES=<OPENBLAS_DIR>\lib\libopenblas.dll.a` options. A prebuilt OpenBLAS\* package can be downloaded via the [OpenBLAS] link, and mingw64\* runtime dependencies via the [mingw64\* runtime dependencies] link.

- To switch to the optimized MKL-ML\* GEMM implementation, use the `-DGEMM=MKL` and `-DMKLROOT=<path_to_MKL>` CMake options to specify a path to the unpacked MKL-ML package with the `include` and `lib` folders. The MKL-ML\* package can be downloaded from the Intel® [MKL-DNN repository for Windows].
- Threading Building Blocks (TBB) is used by default. To build the Inference Engine with OpenMP\* threading, set the `-DTHREADING=OMP` option.

- Required versions of TBB and OpenCV packages are downloaded automatically by the CMake-based script. If you want to use the automatically downloaded packages but already have TBB or OpenCV packages installed and configured in your environment, you may need to clean the `TBBROOT` and `OpenCV_DIR` environment variables before running the `cmake` command; otherwise they won't be downloaded and the build may fail if incompatible versions were installed.

- If the CMake-based build script cannot find and download the OpenCV package that is supported on your platform, or if you want to use a custom build of the OpenCV library, refer to the [Use Custom OpenCV Builds](#use-custom-opencv-builds-for-inference-engine) section for details.
- To switch the CPU and GPU plugins off/on, use the `cmake` options `-DENABLE_MKL_DNN=ON/OFF` and `-DENABLE_CLDNN=ON/OFF` respectively.

- To build the Python API wrapper, use the `-DENABLE_PYTHON=ON` option. To specify an exact Python version, use the following options:
  ```sh
  -DPYTHON_EXECUTABLE="C:\Program Files\Python37\python.exe" ^
  -DPYTHON_LIBRARY="C:\Program Files\Python37\libs\python37.lib" ^
  -DPYTHON_INCLUDE_DIR="C:\Program Files\Python37\include"
  ```
- nGraph-specific compilation options:
  `-DNGRAPH_ONNX_IMPORT_ENABLE=ON` enables the building of the nGraph ONNX importer.
  `-DNGRAPH_DEBUG_ENABLE=ON` enables additional debug prints.
### Building Inference Engine with Ninja\* Build System

```sh
call "C:\Program Files (x86)\IntelSWTools\compilers_and_libraries_2018\windows\bin\ipsxe-comp-vars.bat" intel64 vs2017
:: clean the TBBROOT value set by ipsxe-comp-vars.bat; the required TBB package will be downloaded by the openvino cmake script
set TBBROOT=
cmake -G Ninja -Wno-dev -DCMAKE_BUILD_TYPE=Release ..
cmake --build . --config Release
```
## Build on macOS\* Systems

> **NOTE**: The current version of the OpenVINO™ toolkit for macOS\* supports inference on Intel CPUs only.

The software was validated on:
- macOS\* 10.15, 64-bit

### Software Requirements

- [CMake]\* 3.13 or higher
- Clang\* compiler from Xcode\* 10.1 or higher
- Python\* 3.6 or higher for the Inference Engine Python API wrapper

> **NOTE**: Building samples and demos from the Intel® Distribution of OpenVINO™ toolkit package requires CMake\* 3.10 or higher.
### Build Steps

1. Clone submodules:
   ```sh
   git submodule update --init --recursive
   ```
2. Create a build folder:
   ```sh
   mkdir build && cd build
   ```
3. The Inference Engine uses a CMake-based build system. In the created `build` directory, run `cmake` to fetch project dependencies and create Unix makefiles, then run `make` to build the project:
   ```sh
   cmake -DCMAKE_BUILD_TYPE=Release ..
   make --jobs=$(sysctl -n hw.ncpu)
   ```
### Additional Build Options

You can use the following additional build options:

- Internal JIT GEMM implementation is used by default.

- To switch to the optimized MKL-ML\* GEMM implementation, use the `-DGEMM=MKL` and `-DMKLROOT=<path_to_MKL>` CMake options to specify a path to the unpacked MKL-ML package with the `include` and `lib` folders. The MKL-ML\* package for macOS can be downloaded [here](https://github.com/intel/mkl-dnn/releases/download/v0.19/mklml_mac_2019.0.5.20190502.tgz).
- Threading Building Blocks (TBB) is used by default. To build the Inference Engine with OpenMP\* threading, set the `-DTHREADING=OMP` option.

- Required versions of TBB and OpenCV packages are downloaded automatically by the CMake-based script. If you want to use the automatically downloaded packages but already have TBB or OpenCV packages installed and configured in your environment, you may need to clean the `TBBROOT` and `OpenCV_DIR` environment variables before running the `cmake` command; otherwise they won't be downloaded and the build may fail if incompatible versions were installed.

- If the CMake-based build script cannot find and download the OpenCV package that is supported on your platform, or if you want to use a custom build of the OpenCV library, refer to the [Use Custom OpenCV Builds](#use-custom-opencv-builds-for-inference-engine) section for details.
- To build the Python API wrapper, use the `-DENABLE_PYTHON=ON` option. To specify an exact Python version, use the following options:
  - If you installed Python through Homebrew\*, set the following flags:
    ```sh
    -DPYTHON_EXECUTABLE=/usr/local/Cellar/python/3.7.7/Frameworks/Python.framework/Versions/3.7/bin/python3.7m \
    -DPYTHON_LIBRARY=/usr/local/Cellar/python/3.7.7/Frameworks/Python.framework/Versions/3.7/lib/libpython3.7m.dylib \
    -DPYTHON_INCLUDE_DIR=/usr/local/Cellar/python/3.7.7/Frameworks/Python.framework/Versions/3.7/include/python3.7m
    ```
  - If you installed Python another way, you can use the following commands to find where the `dylib` and `include_dir` are located, respectively:
    ```sh
    find /usr/ -name 'libpython*m.dylib'
    find /usr/ -type d -name python3.7m
    ```
- nGraph-specific compilation options:
  `-DNGRAPH_ONNX_IMPORT_ENABLE=ON` enables the building of the nGraph ONNX importer.
  `-DNGRAPH_DEBUG_ENABLE=ON` enables additional debug prints.
## Build on Android\* Systems

This section describes how to build the Inference Engine for Android x86 (64-bit) operating systems.

### Software Requirements

- [CMake]\* 3.13 or higher
- Android NDK (this guide has been validated with the r20 release)

> **NOTE**: Building samples and demos from the Intel® Distribution of OpenVINO™ toolkit package requires CMake\* 3.10 or higher.
### Build Steps

1. Download and unpack the Android NDK from https://developer.android.com/ndk/downloads. Let's assume that `~/Downloads` is used as a working folder.
   ```sh
   cd ~/Downloads
   wget https://dl.google.com/android/repository/android-ndk-r20-linux-x86_64.zip
   unzip android-ndk-r20-linux-x86_64.zip
   mv android-ndk-r20 android-ndk
   ```
2. Clone submodules:
   ```sh
   git submodule update --init --recursive
   ```
3. Create a build folder:
   ```sh
   mkdir build
   ```
4. Change the working directory to `build` and run `cmake` to create makefiles. Then run `make`.
   ```sh
   cd build
   cmake .. \
     -DCMAKE_TOOLCHAIN_FILE=~/Downloads/android-ndk/build/cmake/android.toolchain.cmake \
     -DANDROID_ABI=x86_64 \
     -DANDROID_PLATFORM=21 \
     -DANDROID_STL=c++_shared
   make --jobs=$(nproc --all)
   ```
   * `ANDROID_ABI` specifies the target architecture (`x86_64`)
   * `ANDROID_PLATFORM` specifies the Android API version
   * `ANDROID_STL` specifies that a shared C++ runtime is used. Copy `~/Downloads/android-ndk/sources/cxx-stl/llvm-libc++/libs/x86_64/libc++_shared.so` from the Android NDK along with the built binaries
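   A sketch of bundling the shared C++ runtime with the built binaries (`<deploy_dir>` is a hypothetical destination folder; the NDK path matches the layout above):
   ```sh
   cp ~/Downloads/android-ndk/sources/cxx-stl/llvm-libc++/libs/x86_64/libc++_shared.so <deploy_dir>/
   ```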
## Use Custom OpenCV Builds for Inference Engine

> **NOTE**: The recommended and tested version of OpenCV is 4.4.0.

Required versions of OpenCV packages are downloaded automatically during the Inference Engine library build. If the build script cannot find and download the OpenCV package that is supported on your platform, you can use one of the following options:

* Download the most suitable version from the list of available prebuilt packages at [https://download.01.org/opencv/2020/openvinotoolkit], in the `<release_version>/inference_engine` directory.

* Use a system-provided OpenCV package (e.g., by running the `apt install libopencv-dev` command). The following modules must be enabled: `imgcodecs`, `videoio`, `highgui`.

* Get the OpenCV package using a package manager: pip, conda, conan, etc. The package must have the development components included (header files and CMake scripts).

* Build OpenCV from source using the [build instructions](https://docs.opencv.org/master/df/d65/tutorial_table_of_content_introduction.html) on the OpenCV site.
After you get the built OpenCV library, perform the following preparation steps before running the Inference Engine build:

1. Set the `OpenCV_DIR` environment variable to the directory where the `OpenCVConfig.cmake` file of your custom OpenCV build is located.
2. Disable automatic downloading of the package by using the `-DENABLE_OPENCV=OFF` option for the CMake-based build script for the Inference Engine.
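For example, a minimal sketch of both steps (the path is a placeholder for your custom OpenCV build directory):
```sh
export OpenCV_DIR=/path/to/custom/opencv/build   # directory containing OpenCVConfig.cmake
cmake -DENABLE_OPENCV=OFF -DCMAKE_BUILD_TYPE=Release ..
```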
## Add Inference Engine to Your Project

For CMake projects, set the `InferenceEngine_DIR` environment variable:

```sh
export InferenceEngine_DIR=/path/to/openvino/build/
```

Then you can find the Inference Engine with `find_package`:

```cmake
find_package(InferenceEngine)
include_directories(${InferenceEngine_INCLUDE_DIRS})
target_link_libraries(${PROJECT_NAME} ${InferenceEngine_LIBRARIES} dl)
```
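A sketch of configuring a consumer project this way (assuming a hypothetical `my_project` folder with a `CMakeLists.txt` containing the lines above):

```sh
export InferenceEngine_DIR=/path/to/openvino/build/
cd my_project && mkdir build && cd build
cmake .. && make
```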
## (Optional) Additional Installation Steps for the Intel® Movidius™ Neural Compute Stick and Neural Compute Stick 2

> **NOTE**: These steps are only required if you want to perform inference on the Intel® Movidius™ Neural Compute Stick or the Intel® Neural Compute Stick 2 using the Inference Engine MYRIAD plugin. See also [Intel® Neural Compute Stick 2 Get Started].

### For Linux, Raspbian\* Stretch OS

1. Add the current Linux user to the `users` group; you will need to log out and log in for it to take effect:
   ```sh
   sudo usermod -a -G users "$(whoami)"
   ```
2. To perform inference on the Intel® Movidius™ Neural Compute Stick and Intel® Neural Compute Stick 2, install the USB rules as follows:
   ```sh
   cat <<EOF > 97-myriad-usbboot.rules
   SUBSYSTEM=="usb", ATTRS{idProduct}=="2150", ATTRS{idVendor}=="03e7", GROUP="users", MODE="0666", ENV{ID_MM_DEVICE_IGNORE}="1"
   SUBSYSTEM=="usb", ATTRS{idProduct}=="2485", ATTRS{idVendor}=="03e7", GROUP="users", MODE="0666", ENV{ID_MM_DEVICE_IGNORE}="1"
   SUBSYSTEM=="usb", ATTRS{idProduct}=="f63b", ATTRS{idVendor}=="03e7", GROUP="users", MODE="0666", ENV{ID_MM_DEVICE_IGNORE}="1"
   EOF
   sudo cp 97-myriad-usbboot.rules /etc/udev/rules.d/
   sudo udevadm control --reload-rules
   sudo udevadm trigger
   rm 97-myriad-usbboot.rules
   ```
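   After re-plugging the device, you can verify that the system sees it (a quick sanity check; `03e7` is the vendor ID used in the rules above):
   ```sh
   lsusb | grep 03e7
   ```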
## Next Steps

Congratulations, you have built the Inference Engine. To get started with OpenVINO™, proceed to the Get Started guides:

* [Get Started with Deep Learning Deployment Toolkit on Linux*](get-started-linux.md)

To enable some additional nGraph features and use your custom nGraph library with the OpenVINO™ binary package, make sure of the following:
- The nGraph library was built with the same version as the one used by the Inference Engine.
- The nGraph library and the Inference Engine were built with the same compilers. Otherwise you might face application binary interface (ABI) problems.

To prepare your custom nGraph library for distribution, which includes collecting all headers, copying binaries, and so on, use the `install` CMake target. This target collects all dependencies, prepares the nGraph package, and copies it to a separate directory.
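A minimal sketch of using the `install` target (`/path/to/ngraph_package` is a placeholder for the separate directory):

```sh
cmake -DCMAKE_INSTALL_PREFIX=/path/to/ngraph_package -DCMAKE_BUILD_TYPE=Release ..
make --jobs=$(nproc --all) install
```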
## Additional Resources

* [OpenVINO™ Release Notes](https://software.intel.com/en-us/articles/OpenVINO-RelNotes)
* [Introduction to Intel® Deep Learning Deployment Toolkit](https://docs.openvinotoolkit.org/latest/_docs_IE_DG_Introduction.html)
* [Inference Engine Samples Overview](https://docs.openvinotoolkit.org/latest/_docs_IE_DG_Samples_Overview.html)
* [Inference Engine Developer Guide](https://docs.openvinotoolkit.org/latest/_docs_IE_DG_Deep_Learning_Inference_Engine_DevGuide.html)
* [Model Optimizer Developer Guide](https://docs.openvinotoolkit.org/latest/_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html)
---
\* Other names and brands may be claimed as the property of others.
[Intel® Distribution of OpenVINO™]:https://software.intel.com/en-us/openvino-toolkit
[CMake]:https://cmake.org/download/
[Install Intel® Graphics Compute Runtime for OpenCL™ Driver package 19.41.14441]:https://github.com/intel/compute-runtime/releases/tag/19.41.14441
[MKL-DNN repository]:https://github.com/intel/mkl-dnn/releases/download/v0.19/mklml_lnx_2019.0.5.20190502.tgz
[MKL-DNN repository for Windows]:https://github.com/intel/mkl-dnn/releases/download/v0.19/mklml_win_2019.0.5.20190502.zip
[OpenBLAS]:https://sourceforge.net/projects/openblas/files/v0.2.14/OpenBLAS-v0.2.14-Win64-int64.zip/download
[mingw64\* runtime dependencies]:https://sourceforge.net/projects/openblas/files/v0.2.14/mingw64_dll.zip/download
[https://download.01.org/opencv/2020/openvinotoolkit]:https://download.01.org/opencv/2020/openvinotoolkit
[build instructions]:https://docs.opencv.org/master/df/d65/tutorial_table_of_content_introduction.html
[driver package]:https://downloadcenter.intel.com/download/29335/Intel-Graphics-Windows-10-DCH-Drivers
[Intel® Neural Compute Stick 2 Get Started]:https://software.intel.com/en-us/neural-compute-stick/get-started