# Build Inference Engine

- [Introduction](#introduction)
- [Build on Linux* Systems](#build-on-linux-systems)
  - [Software Requirements](#software-requirements)
  - [Build Steps](#build-steps)
  - [Additional Build Options](#additional-build-options)
- [Build for Raspbian Stretch* OS](#build-for-raspbian-stretch-os)
  - [Hardware Requirements](#hardware-requirements)
  - [Native Compilation](#native-compilation)
  - [Cross Compilation Using Docker*](#cross-compilation-using-docker)
  - [Additional Build Options](#additional-build-options-1)
- [Build on Windows* Systems](#build-on-windows-systems)
  - [Software Requirements](#software-requirements-1)
  - [Build Steps](#build-steps-1)
  - [Additional Build Options](#additional-build-options-2)
  - [Building Inference Engine with Ninja* Build System](#building-inference-engine-with-ninja-build-system)
- [Build on macOS* Systems](#build-on-macos-systems)
  - [Software Requirements](#software-requirements-2)
  - [Build Steps](#build-steps-2)
  - [Additional Build Options](#additional-build-options-3)
- [Use Custom OpenCV Builds for Inference Engine](#use-custom-opencv-builds-for-inference-engine)
- [Adding Inference Engine to your project](#adding-inference-engine-to-your-project)
- [(Optional) Additional Installation Steps for the Intel® Movidius™ Neural Compute Stick and Neural Compute Stick 2](#optional-additional-installation-steps-for-the-intel-movidius-neural-compute-stick-and-neural-compute-stick-2)
  - [For Linux, Raspbian Stretch* OS](#for-linux-raspbian-stretch-os)
  - [For Windows](#for-windows-1)
- [Next Steps](#next-steps)
- [Additional Resources](#additional-resources)
## Introduction

The Inference Engine can infer models in different formats with various input and output formats.

The open source version of Inference Engine includes the following plugins:
| PLUGIN | DEVICE TYPES |
| ---------------------| -------------|
| CPU plugin | Intel® Xeon® with Intel® AVX2 and AVX512, Intel® Core™ Processors with Intel® AVX2, Intel® Atom® Processors with Intel® SSE |
| GPU plugin | Intel® Processor Graphics, including Intel® HD Graphics and Intel® Iris® Graphics |
| GNA plugin | Intel® Speech Enabling Developer Kit, Amazon Alexa* Premium Far-Field Developer Kit, Intel® Pentium® Silver processor J5005, Intel® Celeron® processor J4005, Intel® Core™ i3-8121U processor |
| MYRIAD plugin | Intel® Movidius™ Neural Compute Stick powered by the Intel® Movidius™ Myriad™ 2, Intel® Neural Compute Stick 2 powered by the Intel® Movidius™ Myriad™ X |
| Heterogeneous plugin | Enables inference of a single network on several Intel® devices. |
The Inference Engine plugin for Intel® FPGA is distributed only in binary form as a part of the [Intel® Distribution of OpenVINO™](https://software.intel.com/en-us/openvino-toolkit).
## Build on Linux* Systems

The software was validated on:
- Ubuntu\* 16.04 (64-bit) with default GCC\* 5.4.0
- CentOS\* 7.4 (64-bit) with default GCC\* 4.8.5

### Software Requirements
- [CMake\*](https://cmake.org/download/) 3.5 or higher
- GCC\* 4.8 or higher to build the Inference Engine
- Python 2.7 or higher for the Inference Engine Python API wrapper
- (Optional) [Install Intel® Graphics Compute Runtime for OpenCL™ Driver package 19.04.12237](https://github.com/intel/compute-runtime/releases/tag/19.04.12237).
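
Before building, it can be worth confirming that the toolchain on your machine meets the requirements above; a quick check such as the following (commands assume a standard Linux shell) can save a failed configure run later:

```sh
# Verify toolchain versions against the requirements above.
cmake --version     # expect 3.5 or higher
gcc --version       # expect 4.8 or higher
python3 --version   # only needed for the Python API wrapper
```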
### Build Steps
1. Clone submodules:
   ```sh
   cd dldt/inference-engine
   git submodule init
   git submodule update --recursive
   ```
2. Install build dependencies using the `install_dependencies.sh` script in the project root folder:
   ```sh
   chmod +x install_dependencies.sh
   ./install_dependencies.sh
   ```
3. By default, the build enables the Inference Engine GPU plugin to infer models on your Intel® Processor Graphics. This requires you to [Install Intel® Graphics Compute Runtime for OpenCL™ Driver package 19.04.12237](https://github.com/intel/compute-runtime/releases/tag/19.04.12237) before running the build. If you don't want to use the GPU plugin, use the `-DENABLE_CLDNN=OFF` CMake build option and skip the installation of the Intel® Graphics Compute Runtime for OpenCL™ Driver.
4. Create a build folder:
   ```sh
   mkdir build && cd build
   ```
5. Inference Engine uses a CMake-based build system. In the created `build` directory, run `cmake` to fetch project dependencies and create Unix makefiles, then run `make` to build the project:
   ```sh
   cmake -DCMAKE_BUILD_TYPE=Release ..
   make --jobs=$(nproc --all)
   ```
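   After `make` finishes, you can sanity-check the result by listing the build output. The path below assumes the default output layout of the repository; adjust it if your configuration differs:
   ```sh
   # Default layout (assumed): binaries are placed under dldt/inference-engine/bin/<arch>/Release
   ls ../bin/intel64/Release/
   ```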
### Additional Build Options

You can use the following additional build options (a combined configure example follows the list):
- Internal JIT GEMM implementation is used by default.
- To switch to the OpenBLAS\* implementation, use the `-DGEMM=OPENBLAS` option together with the `BLAS_INCLUDE_DIRS` and `BLAS_LIBRARIES` CMake options to specify the path to the OpenBLAS headers and library. For example, use the following options on CentOS\*: `-DGEMM=OPENBLAS -DBLAS_INCLUDE_DIRS=/usr/include/openblas -DBLAS_LIBRARIES=/usr/lib64/libopenblas.so.0`.
- To switch to the optimized MKL-ML\* GEMM implementation, use the `-DGEMM=MKL` and `-DMKLROOT=<path_to_MKL>` CMake options to specify a path to unpacked MKL-ML with the `include` and `lib` folders. The MKL-ML\* package can be downloaded from the [MKL-DNN repository](https://github.com/intel/mkl-dnn/releases/download/v0.19/mklml_lnx_2019.0.5.20190502.tgz).
- Threading Building Blocks (TBB) is used by default. To build the Inference Engine with OpenMP* threading, set the `-DTHREADING=OMP` option.
- Required versions of TBB and OpenCV packages are downloaded automatically by the CMake-based script. If you want the packages to be downloaded automatically but already have TBB or OpenCV installed and configured in your environment, you may need to clean the `TBBROOT` and `OpenCV_DIR` environment variables before running the `cmake` command; otherwise they won't be downloaded and the build may fail if incompatible versions are installed.
- If the CMake-based build script cannot find and download the OpenCV package that is supported on your platform, or if you want to use a custom build of the OpenCV library, refer to the [Use Custom OpenCV Builds](#use-custom-opencv-builds-for-inference-engine) section for details.
- To build the Python API wrapper:
  1. Install all additional packages listed in the `/inference-engine/ie_bridges/python/requirements.txt` file:
     ```sh
     pip install -r requirements.txt
     ```
  2. Use the `-DENABLE_PYTHON=ON` option. To specify an exact Python version, use the following options:
     ```sh
     -DPYTHON_EXECUTABLE=`which python3.7` \
     -DPYTHON_LIBRARY=/usr/lib/x86_64-linux-gnu/libpython3.7m.so \
     -DPYTHON_INCLUDE_DIR=/usr/include/python3.7
     ```
- To switch the CPU and GPU plugins off or on, use the `-DENABLE_MKL_DNN=ON/OFF` and `-DENABLE_CLDNN=ON/OFF` CMake options respectively.
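
Several of these options can be combined in a single configure step. The following sketch enables OpenMP threading and the Python API wrapper while disabling the GPU plugin; the Python paths are illustrative and depend on your system:

```sh
cmake -DCMAKE_BUILD_TYPE=Release \
      -DTHREADING=OMP \
      -DENABLE_CLDNN=OFF \
      -DENABLE_PYTHON=ON \
      -DPYTHON_EXECUTABLE=`which python3.7` \
      -DPYTHON_LIBRARY=/usr/lib/x86_64-linux-gnu/libpython3.7m.so \
      -DPYTHON_INCLUDE_DIR=/usr/include/python3.7 ..
make --jobs=$(nproc --all)
```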
## Build for Raspbian Stretch* OS

> **NOTE**: Only the MYRIAD plugin is supported.

### Hardware Requirements
* Raspberry Pi\* 2 or 3 with Raspbian\* Stretch OS (32-bit). Check that its CPU supports the ARMv7 instruction set (the `uname -m` command returns `armv7l`).

> **NOTE**: Even though the Raspberry Pi\* CPU is ARMv8, the 32-bit OS detects the ARMv7 CPU instruction set. The default `gcc` compiler applies the ARMv6 architecture flag for compatibility with lower versions of boards. For more information, run the `gcc -Q --help=target` command and refer to the description of the `-march=` option.

You can compile the Inference Engine for Raspberry Pi\* in one of two ways:
* [Native Compilation](#native-compilation), which is the simplest way but is time-consuming
* [Cross Compilation Using Docker*](#cross-compilation-using-docker), which is the recommended way

### Native Compilation
Native compilation of the Inference Engine is the most straightforward solution. However, it might take at least one hour to complete on Raspberry Pi\* 3.
1. Install dependencies:
   ```sh
   sudo apt-get update
   sudo apt-get install -y git cmake libusb-1.0-0-dev
   ```
2. Go to the `inference-engine` directory of the cloned `dldt` repository:
   ```sh
   cd dldt/inference-engine
   ```
3. Initialize submodules:
   ```sh
   git submodule init
   git submodule update --recursive
   ```
4. Create a build folder:
   ```sh
   mkdir build && cd build
   ```
5. Build the Inference Engine:
   ```sh
   cmake -DCMAKE_BUILD_TYPE=Release \
         -DENABLE_SSE42=OFF \
         -DTHREADING=SEQ \
         -DENABLE_GNA=OFF .. && make
   ```
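   When the build completes, the ARM binaries should appear under the repository's `bin` folder. The path below follows the layout mentioned in the cross-compilation section; adjust it if your output directory differs:
   ```sh
   # Check the native build output (path assumed from the cross-compilation section).
   ls ../bin/armv7l/
   ```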
### Cross Compilation Using Docker*

This compilation was tested on the following configuration:

* Host: Ubuntu\* 16.04 (64-bit, Intel® Core™ i7-6700K CPU @ 4.00GHz × 8)
* Target: Raspbian\* Stretch (32-bit, ARMv7, Raspberry Pi\* 3)

1. Install Docker\*:
   ```sh
   sudo apt-get install -y docker.io
   ```
2. Add the current user to the `docker` group:
   ```sh
   sudo usermod -a -G docker $USER
   ```
   Log out and log in for this to take effect.
3. Create a directory named `ie_cross_armhf` and add a text file named `Dockerfile`
   with the following content:
   ```docker
   FROM debian:stretch

   USER root

   RUN dpkg --add-architecture armhf && \
       apt-get update && \
       apt-get install -y --no-install-recommends \
       build-essential \
       crossbuild-essential-armhf \
       wget \
       libusb-1.0-0-dev:armhf \
       libavcodec-dev:armhf \
       libavformat-dev:armhf \
       libswscale-dev:armhf \
       libgstreamer1.0-dev:armhf \
       libgstreamer-plugins-base1.0-dev:armhf \
       libpython3-dev:armhf \
       python3-pip

   RUN wget https://www.cmake.org/files/v3.14/cmake-3.14.3.tar.gz && \
       tar xf cmake-3.14.3.tar.gz && \
       (cd cmake-3.14.3 && ./bootstrap --parallel=$(nproc --all) && make --jobs=$(nproc --all) && make install) && \
       rm -rf cmake-3.14.3 cmake-3.14.3.tar.gz
   ```
   The Dockerfile uses Debian\* Stretch (Debian 9) for compilation because it is the base of Raspbian\* Stretch.

4. Build a Docker\* image:
   ```sh
   docker image build -t ie_cross_armhf ie_cross_armhf
   ```
5. Run the Docker\* container with the source code folder mounted from the host:
   ```sh
   docker run -it -v /absolute/path/to/dldt:/dldt ie_cross_armhf /bin/bash
   ```
6. While in the container:
   1. Go to the `inference-engine` directory of the cloned `dldt` repository:
      ```sh
      cd dldt/inference-engine
      ```
   2. Create a build folder:
      ```sh
      mkdir build && cd build
      ```
   3. Build the Inference Engine:
      ```sh
      cmake -DCMAKE_BUILD_TYPE=Release \
            -DCMAKE_TOOLCHAIN_FILE="../cmake/arm.toolchain.cmake" \
            -DTHREADS_PTHREAD_ARG="-pthread" \
            -DENABLE_SSE42=OFF \
            -DTHREADING=SEQ \
            -DENABLE_GNA=OFF .. && make --jobs=$(nproc --all)
      ```
7. Press **Ctrl+D** to exit the Docker\* container. You can find the resulting binaries in the `dldt/inference-engine/bin/armv7l/` directory and the OpenCV\* installation in `dldt/inference-engine/temp`.

> **NOTE**: Native applications that link to the cross-compiled Inference Engine library require the extra compilation flag `-march=armv7-a`.
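
For illustration, a native application compiled on the board against the cross-compiled library could be built as follows. This is only a sketch: the source file name and the include/library paths are placeholders for wherever you deployed the Inference Engine headers and libraries.

```sh
# -march=armv7-a is required when linking against the cross-compiled library (see the note above).
# The paths and the source file name below are placeholders.
g++ -march=armv7-a my_app.cpp \
    -I<path_to_inference_engine_include> \
    -L<path_to_inference_engine_libs> \
    -linference_engine -o my_app
```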
### Additional Build Options

You can use the following additional build options:

- Required versions of OpenCV packages are downloaded automatically by the CMake-based script. If you want the packages to be downloaded automatically but already have OpenCV installed and configured in your environment, you may need to clean the `OpenCV_DIR` environment variable before running the `cmake` command; otherwise it won't be downloaded and the build may fail if an incompatible version is installed.
- If the CMake-based build script cannot find and download the OpenCV package that is supported on your platform, or if you want to use a custom build of the OpenCV library, refer to the [Use Custom OpenCV Builds](#use-custom-opencv-builds-for-inference-engine) section for details.
- To build the Python API wrapper, install the `libpython3-dev:armhf` and `python3-pip` packages using `apt-get`, then install the `numpy` and `cython` Python modules using the `pip3` command, and add the following CMake options (a complete configure example follows this list):
  ```sh
  -DENABLE_PYTHON=ON \
  -DPYTHON_EXECUTABLE=/usr/bin/python3.5 \
  -DPYTHON_LIBRARY=/usr/lib/arm-linux-gnueabihf/libpython3.5m.so \
  -DPYTHON_INCLUDE_DIR=/usr/include/python3.5
  ```
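
Putting the cross-compilation options and the Python options together, a full configure command inside the Docker container could look like the following sketch (the Python 3.5 paths are the ones listed above):

```sh
cmake -DCMAKE_BUILD_TYPE=Release \
      -DCMAKE_TOOLCHAIN_FILE="../cmake/arm.toolchain.cmake" \
      -DTHREADS_PTHREAD_ARG="-pthread" \
      -DENABLE_SSE42=OFF \
      -DTHREADING=SEQ \
      -DENABLE_GNA=OFF \
      -DENABLE_PYTHON=ON \
      -DPYTHON_EXECUTABLE=/usr/bin/python3.5 \
      -DPYTHON_LIBRARY=/usr/lib/arm-linux-gnueabihf/libpython3.5m.so \
      -DPYTHON_INCLUDE_DIR=/usr/include/python3.5 \
      .. && make --jobs=$(nproc --all)
```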
## Build on Windows* Systems

The software was validated on:
- Microsoft\* Windows\* 10 (64-bit) with Visual Studio 2017 and Intel® C++ Compiler 2018 Update 3

### Software Requirements
- [CMake\*](https://cmake.org/download/) 3.5 or higher
- [OpenBLAS\*](https://sourceforge.net/projects/openblas/files/v0.2.14/OpenBLAS-v0.2.14-Win64-int64.zip/download) and [mingw64\* runtime dependencies](https://sourceforge.net/projects/openblas/files/v0.2.14/mingw64_dll.zip/download)
- [Intel® C++ Compiler](https://software.intel.com/en-us/intel-parallel-studio-xe) 18.0 to build the Inference Engine on Windows
- (Optional) [Intel® Graphics Driver for Windows* [25.20] driver package](https://downloadcenter.intel.com/download/28646/Intel-Graphics-Windows-10-DCH-Drivers?product=80939)
- Python 3.4 or higher for the Inference Engine Python API wrapper
### Build Steps

1. Clone submodules:
   ```sh
   git submodule init
   git submodule update --recursive
   ```
2. Download and install [Intel® C++ Compiler](https://software.intel.com/en-us/intel-parallel-studio-xe) 18.0.
3. Install OpenBLAS\*:
   1. Download [OpenBLAS\*](https://sourceforge.net/projects/openblas/files/v0.2.14/OpenBLAS-v0.2.14-Win64-int64.zip/download).
   2. Unzip the downloaded package to a directory on your machine. In this document, this directory is referred to as `<OPENBLAS_DIR>`.
4. By default, the build enables the Inference Engine GPU plugin to infer models on your Intel® Processor Graphics. This requires you to [download and install the Intel® Graphics Driver for Windows* [25.20] driver package](https://downloadcenter.intel.com/download/28646/Intel-Graphics-Windows-10-DCH-Drivers?product=80939) before running the build. If you don't want to use the GPU plugin, use the `-DENABLE_CLDNN=OFF` CMake build option and skip the installation of the Intel® Graphics Driver.
5. Create a build directory:
   ```sh
   mkdir build
   ```
6. In the `build` directory, run `cmake` to fetch project dependencies and generate a Visual Studio solution:
   ```sh
   cmake -G "Visual Studio 15 2017 Win64" -T "Intel C++ Compiler 18.0" ^
       -DCMAKE_BUILD_TYPE=Release ^
       -DICCLIB="C:\Program Files (x86)\IntelSWTools\compilers_and_libraries_2018\windows\compiler\lib" ..
   ```
7. Build the generated solution in Visual Studio 2017 or run `cmake --build . --config Release` to build from the command line.
8. Before running the samples, add the paths to the TBB and OpenCV binaries used for the build to the `%PATH%` environment variable. By default, TBB binaries are downloaded by the CMake-based script to the `<dldt_repo>/inference-engine/temp/tbb/lib` folder, and OpenCV binaries to the `<dldt_repo>/inference-engine/temp/opencv_4.1.0/bin` folder.
### Additional Build Options

- Internal JIT GEMM implementation is used by default.
- To switch to the OpenBLAS GEMM implementation, use the `-DGEMM=OPENBLAS` CMake option and specify the path to OpenBLAS using the `-DBLAS_INCLUDE_DIRS=<OPENBLAS_DIR>\include` and `-DBLAS_LIBRARIES=<OPENBLAS_DIR>\lib\libopenblas.dll.a` options. A prebuilt OpenBLAS\* package can be downloaded [here](https://sourceforge.net/projects/openblas/files/v0.2.14/OpenBLAS-v0.2.14-Win64-int64.zip/download), and the mingw64\* runtime dependencies [here](https://sourceforge.net/projects/openblas/files/v0.2.14/mingw64_dll.zip/download).
- To switch to the optimized MKL-ML\* GEMM implementation, use the `-DGEMM=MKL` and `-DMKLROOT=<path_to_MKL>` CMake options to specify a path to unpacked MKL-ML with the `include` and `lib` folders. The MKL-ML\* package can be downloaded from the [MKL-DNN repository](https://github.com/intel/mkl-dnn/releases/download/v0.19/mklml_win_2019.0.5.20190502.zip).
- Threading Building Blocks (TBB) is used by default. To build the Inference Engine with OpenMP* threading, set the `-DTHREADING=OMP` option.
- Required versions of TBB and OpenCV packages are downloaded automatically by the CMake-based script. If you want the packages to be downloaded automatically but already have TBB or OpenCV installed and configured in your environment, you may need to clean the `TBBROOT` and `OpenCV_DIR` environment variables before running the `cmake` command; otherwise they won't be downloaded and the build may fail if incompatible versions are installed.
- If the CMake-based build script cannot find and download the OpenCV package that is supported on your platform, or if you want to use a custom build of the OpenCV library, refer to the [Use Custom OpenCV Builds](#use-custom-opencv-builds-for-inference-engine) section for details.
- To switch the CPU and GPU plugins off or on, use the `-DENABLE_MKL_DNN=ON/OFF` and `-DENABLE_CLDNN=ON/OFF` CMake options respectively.
- To build the Python API wrapper, use the `-DENABLE_PYTHON=ON` option. To specify an exact Python version, use the following options:
  ```sh
  -DPYTHON_EXECUTABLE="C:\Program Files\Python37\python.exe" ^
  -DPYTHON_LIBRARY="C:\Program Files\Python37\libs\python37.lib" ^
  -DPYTHON_INCLUDE_DIR="C:\Program Files\Python37\include"
  ```
### Building Inference Engine with Ninja* Build System

```sh
call "C:\Program Files (x86)\IntelSWTools\compilers_and_libraries_2018\windows\bin\ipsxe-comp-vars.bat" intel64 vs2017
set CXX=icl
set CC=icl
:: clean TBBROOT value set by ipsxe-comp-vars.bat, required TBB package will be downloaded by dldt cmake script
set TBBROOT=
cmake -G Ninja -Wno-dev -DCMAKE_BUILD_TYPE=Release ..
cmake --build . --config Release
```
## Build on macOS* Systems

> **NOTE**: The current version of the OpenVINO™ toolkit for macOS* supports inference on Intel CPUs only.

The software was validated on:
- macOS\* 10.14, 64-bit

### Software Requirements
- [CMake\*](https://cmake.org/download/) 3.5 or higher
- Clang\* compiler from Xcode\* 10.1
- Python\* 3.4 or higher for the Inference Engine Python API wrapper
### Build Steps

1. Clone submodules:
   ```sh
   cd dldt/inference-engine
   git submodule init
   git submodule update --recursive
   ```
2. Install build dependencies using the `install_dependencies.sh` script in the project root folder:
   ```sh
   chmod +x install_dependencies.sh
   ./install_dependencies.sh
   ```
3. Create a build folder:
   ```sh
   mkdir build && cd build
   ```
4. Inference Engine uses a CMake-based build system. In the created `build` directory, run `cmake` to fetch project dependencies and create Unix makefiles, then run `make` to build the project:
   ```sh
   cmake -DCMAKE_BUILD_TYPE=Release ..
   make --jobs=$(nproc --all)
   ```
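   Note that `nproc` comes from GNU coreutils and may not be present on a stock macOS installation. If the `make --jobs` argument above ends up empty, the built-in `sysctl` can be used instead (this substitution is an assumption, not part of the validated configuration):
   ```sh
   # hw.ncpu reports the number of logical CPUs on macOS.
   make --jobs=$(sysctl -n hw.ncpu)
   ```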
### Additional Build Options

You can use the following additional build options:
- Internal JIT GEMM implementation is used by default.
- To switch to the optimized MKL-ML\* GEMM implementation, use the `-DGEMM=MKL` and `-DMKLROOT=<path_to_MKL>` CMake options to specify a path to unpacked MKL-ML with the `include` and `lib` folders. The MKL-ML\* package can be downloaded [here](https://github.com/intel/mkl-dnn/releases/download/v0.19/mklml_mac_2019.0.5.20190502.tgz).
- Threading Building Blocks (TBB) is used by default. To build the Inference Engine with OpenMP* threading, set the `-DTHREADING=OMP` option.
- Required versions of TBB and OpenCV packages are downloaded automatically by the CMake-based script. If you want the packages to be downloaded automatically but already have TBB or OpenCV installed and configured in your environment, you may need to clean the `TBBROOT` and `OpenCV_DIR` environment variables before running the `cmake` command; otherwise they won't be downloaded and the build may fail if incompatible versions are installed.
- If the CMake-based build script cannot find and download the OpenCV package that is supported on your platform, or if you want to use a custom build of the OpenCV library, refer to the [Use Custom OpenCV Builds](#use-custom-opencv-builds-for-inference-engine) section for details.
- To build the Python API wrapper, use the `-DENABLE_PYTHON=ON` option. To specify an exact Python version, use the following options:
  ```sh
  -DPYTHON_EXECUTABLE=/Library/Frameworks/Python.framework/Versions/3.7/bin/python3.7 \
  -DPYTHON_LIBRARY=/Library/Frameworks/Python.framework/Versions/3.7/lib/libpython3.7m.dylib \
  -DPYTHON_INCLUDE_DIR=/Library/Frameworks/Python.framework/Versions/3.7/include/python3.7m
  ```
## Use Custom OpenCV Builds for Inference Engine

> **NOTE**: The recommended and tested version of OpenCV is 4.1. The minimum supported version is 3.4.0.

Required versions of OpenCV packages are downloaded automatically during the Inference Engine library build. If the build script cannot find and download the OpenCV package that is supported on your platform, you can use one of the following options:

* Download the most suitable version from the list of available prebuilt packages at [https://download.01.org/opencv/2019/openvinotoolkit](https://download.01.org/opencv/2019/openvinotoolkit) in the `<release_version>/inference_engine` directory.
* Use a system-provided OpenCV package (e.g., by running the `apt install libopencv-dev` command). The following modules must be enabled: `imgcodecs`, `videoio`, `highgui`.
* Get the OpenCV package using a package manager: pip, Conda, Conan, etc. The package must include the development components (header files and CMake scripts).
* Build OpenCV from source using the [build instructions](https://docs.opencv.org/master/df/d65/tutorial_table_of_content_introduction.html) on the OpenCV site.

After you get the OpenCV build, perform the following preparation steps before running the Inference Engine build (a combined example is shown after these steps):

1. Set the `OpenCV_DIR` environment variable to the directory where the `OpenCVConfig.cmake` file of your custom OpenCV build is located.
2. Disable automatic downloading of the OpenCV package by using the `-DENABLE_OPENCV=OFF` option of the CMake-based build script for the Inference Engine.
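
For example, on Linux the two steps above combine into the following sketch; the OpenCV path is a placeholder for your own build:

```sh
# Point the build to a custom OpenCV and disable the automatic download.
export OpenCV_DIR=/path/to/custom/opencv/build   # folder containing OpenCVConfig.cmake
cmake -DCMAKE_BUILD_TYPE=Release -DENABLE_OPENCV=OFF ..
```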
## Adding Inference Engine to your project

For CMake projects, set the `InferenceEngine_DIR` environment variable:

```sh
export InferenceEngine_DIR=/path/to/dldt/inference-engine/build/
```

Then you can find Inference Engine by `find_package`:

```cmake
find_package(InferenceEngine)
include_directories(${InferenceEngine_INCLUDE_DIRS})
target_link_libraries(${PROJECT_NAME} ${InferenceEngine_LIBRARIES} dl)
```
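
As a minimal end-to-end sketch, the commands below write a tiny `CMakeLists.txt` using the lines above and configure it against the Inference Engine build. The project name, source file (`main.cpp` is assumed to exist), and paths are illustrative placeholders:

```sh
export InferenceEngine_DIR=/path/to/dldt/inference-engine/build/

# Write a minimal CMake project that links against the Inference Engine.
cat <<'EOF' > CMakeLists.txt
cmake_minimum_required(VERSION 3.5)
project(ie_sample)
find_package(InferenceEngine REQUIRED)
include_directories(${InferenceEngine_INCLUDE_DIRS})
add_executable(ie_sample main.cpp)
target_link_libraries(ie_sample ${InferenceEngine_LIBRARIES} dl)
EOF

mkdir build && cd build
cmake .. && make
```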
## (Optional) Additional Installation Steps for the Intel® Movidius™ Neural Compute Stick and Neural Compute Stick 2

> **NOTE**: These steps are only required if you want to perform inference on the Intel® Movidius™ Neural Compute Stick or the Intel® Neural Compute Stick 2 using the Inference Engine MYRIAD Plugin. See also [Intel® Neural Compute Stick 2 Get Started](https://software.intel.com/en-us/neural-compute-stick/get-started).

### For Linux, Raspbian\* Stretch OS

1. Add the current Linux user to the `users` group:
   ```sh
   sudo usermod -a -G users "$(whoami)"
   ```
   Log out and log in for it to take effect.
2. To perform inference on the Intel® Movidius™ Neural Compute Stick and Intel® Neural Compute Stick 2, install the USB rules as follows:
   ```sh
   cat <<EOF > 97-myriad-usbboot.rules
   SUBSYSTEM=="usb", ATTRS{idProduct}=="2150", ATTRS{idVendor}=="03e7", GROUP="users", MODE="0666", ENV{ID_MM_DEVICE_IGNORE}="1"
   SUBSYSTEM=="usb", ATTRS{idProduct}=="2485", ATTRS{idVendor}=="03e7", GROUP="users", MODE="0666", ENV{ID_MM_DEVICE_IGNORE}="1"
   SUBSYSTEM=="usb", ATTRS{idProduct}=="f63b", ATTRS{idVendor}=="03e7", GROUP="users", MODE="0666", ENV{ID_MM_DEVICE_IGNORE}="1"
   EOF
   sudo cp 97-myriad-usbboot.rules /etc/udev/rules.d/
   sudo udevadm control --reload-rules
   sudo udevadm trigger
   sudo ldconfig
   rm 97-myriad-usbboot.rules
   ```
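   After reloading the rules, you can optionally check that the stick is visible on the USB bus. The Intel Movidius vendor ID `03e7` is the one used in the rules above; `lsusb` is typically provided by the `usbutils` package:
   ```sh
   # Re-plug the stick, then look for the Movidius vendor ID from the udev rules.
   lsusb | grep -i "03e7" || echo "Neural Compute Stick not detected"
   ```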
### For Windows

For the Intel® Movidius™ Neural Compute Stick and Intel® Neural Compute Stick 2, install the Movidius™ VSC driver:
1. Go to the `<DLDT_ROOT_DIR>/inference-engine/thirdparty/movidius/MovidiusDriver` directory, where `<DLDT_ROOT_DIR>` is the directory to which the DLDT repository was cloned.
2. Right click on the `Movidius_VSC_Device.inf` file and choose **Install** from the pop-up menu.

You have installed the driver for your Intel® Movidius™ Neural Compute Stick or Intel® Neural Compute Stick 2.
## Next Steps

Congratulations, you have built the Inference Engine. To get started with the OpenVINO™ DLDT, proceed to the Get Started guides:

* [Get Started with Deep Learning Deployment Toolkit on Linux*](../get-started-linux.md)
## Additional Resources

* [OpenVINO™ Release Notes](https://software.intel.com/en-us/articles/OpenVINO-RelNotes)
* [Introduction to Intel® Deep Learning Deployment Toolkit](https://docs.openvinotoolkit.org/latest/_docs_IE_DG_Introduction.html)
* [Inference Engine Samples Overview](https://docs.openvinotoolkit.org/latest/_docs_IE_DG_Samples_Overview.html)
* [Inference Engine Developer Guide](https://docs.openvinotoolkit.org/latest/_docs_IE_DG_Deep_Learning_Inference_Engine_DevGuide.html)
* [Model Optimizer Developer Guide](https://docs.openvinotoolkit.org/latest/_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html)

\* Other names and brands may be claimed as the property of others.