# [OpenVINO™ Toolkit](https://01.org/openvinotoolkit) - Deep Learning Deployment Toolkit repository
[![Stable release](https://img.shields.io/badge/version-2020.3-green.svg)](https://github.com/openvinotoolkit/openvino/releases/tag/2020.3.0)
[![Apache License Version 2.0](https://img.shields.io/badge/license-Apache_2.0-green.svg)](LICENSE)
This toolkit allows developers to deploy pre-trained deep learning models
through a high-level C++ Inference Engine API integrated with application logic.
- [Add Inference Engine to Your Project](#add-inference-engine-to-your-project)
- [(Optional) Additional Installation Steps for the Intel® Movidius™ Neural Compute Stick and Neural Compute Stick 2](#optional-additional-installation-steps-for-the-intel-movidius-neural-compute-stick-and-neural-compute-stick-2)
- [For Linux, Raspbian Stretch* OS](#for-linux-raspbian-stretch-os)
- [Next Steps](#next-steps)
- [Additional Resources](#additional-resources)
- [CMake]\* 3.11 or higher
- GCC\* 4.8 or higher to build the Inference Engine
- Python 2.7 or higher for the Inference Engine Python API wrapper
- (Optional) [Install Intel® Graphics Compute Runtime for OpenCL™ Driver package 20.13.16352].
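If you are not sure whether your environment meets these requirements, you can
check the installed tool versions first (a quick sanity check, not an official
build step):
```sh
cmake --version     # expect 3.11 or higher
gcc --version       # expect 4.8 or higher
python3 --version   # for the Python API wrapper
```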
### Build Steps
1. Clone submodules:
```sh
cd openvino
git submodule update --init --recursive
```
2. Install build dependencies using the `install_dependencies.sh` script in the
project root folder:
```sh
./install_dependencies.sh
```
3. By default, the build enables the Inference Engine GPU plugin to infer models
on your Intel® Processor Graphics. This requires you to
[Install Intel® Graphics Compute Runtime for OpenCL™ Driver package 20.13.16352]
before running the build. If you don't want to use the GPU plugin, use the
`-DENABLE_CLDNN=OFF` CMake build option and skip the installation of the
Intel® Graphics Compute Runtime for OpenCL™ Driver.
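For example, a CPU-only configuration without the GPU plugin could look like
this (a minimal sketch; the `build` directory name is illustrative):
```sh
mkdir build && cd build
cmake -DCMAKE_BUILD_TYPE=Release -DENABLE_CLDNN=OFF ..
make -j"$(nproc)"
```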
sudo apt-get install -y git cmake libusb-1.0-0-dev
```
2. Go to the cloned `openvino` repository:
```bash
cd openvino
```
3. Initialize submodules:
5. Run Docker\* container with mounted source code folder from host:
```bash
docker run -it -v /absolute/path/to/openvino:/openvino ie_cross_armhf /bin/bash
```
6. While in the container:
1. Go to the cloned `openvino` repository:
```bash
cd openvino
```
2. Create a build folder:
```bash
mkdir build
```
7. Press **Ctrl+D** to exit from Docker. You can find the resulting binaries
in the `openvino/bin/armv7l/` directory and the OpenCV*
installation in the `openvino/inference-engine/temp` directory.
>**NOTE**: Native applications that link to cross-compiled Inference Engine
library require an extra compilation flag `-march=armv7-a`.
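For example, in a CMake-based application project the flag can be passed at
configure time (a sketch, assuming your own application's build setup):
```sh
cmake -DCMAKE_CXX_FLAGS="-march=armv7-a" ..
```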
6. Before running the samples, add paths to the TBB and OpenCV binaries used for
the build to the `%PATH%` environment variable. By default, TBB binaries are
downloaded by the CMake-based script to the `<openvino_repo>/inference-engine/temp/tbb/bin`
folder, and OpenCV binaries to the `<openvino_repo>/inference-engine/temp/opencv_4.3.0/opencv/bin`
folder.
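Assuming the default download locations above, this can be done in the Windows
command prompt as follows (a sketch, not an official setup script):
```bat
set PATH=<openvino_repo>\inference-engine\temp\tbb\bin;%PATH%
set PATH=<openvino_repo>\inference-engine\temp\opencv_4.3.0\opencv\bin;%PATH%
```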
### Additional Build Options
call "C:\Program Files (x86)\IntelSWTools\compilers_and_libraries_2018\windows\bin\ipsxe-comp-vars.bat" intel64 vs2017
set CXX=icl
set CC=icl
:: Clean the TBBROOT value set by ipsxe-comp-vars.bat; the required TBB package will be downloaded by the openvino CMake script
set TBBROOT=
cmake -G Ninja -Wno-dev -DCMAKE_BUILD_TYPE=Release ..
cmake --build . --config Release
1. Clone submodules:
```sh
cd openvino
git submodule update --init --recursive
```
2. Install build dependencies using the `install_dependencies.sh` script in the
project root folder.
2. Clone submodules:
```sh
cd openvino
git submodule update --init --recursive
```
For CMake projects, set the `InferenceEngine_DIR` environment variable:
```sh
export InferenceEngine_DIR=/path/to/openvino/build/
```
Then you can find Inference Engine by `find_package`:
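```cmake
# A minimal sketch; the variable names below are provided by the
# InferenceEngine CMake package found via find_package.
find_package(InferenceEngine REQUIRED)
include_directories(${InferenceEngine_INCLUDE_DIRS})
target_link_libraries(${PROJECT_NAME} ${InferenceEngine_LIBRARIES})
```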
rm 97-myriad-usbboot.rules
```
## Next Steps
Congratulations, you have built the Inference Engine. To get started with the
toolkit, see the Get Started guide below.
[Intel® Distribution of OpenVINO™]:https://software.intel.com/en-us/openvino-toolkit
[CMake]:https://cmake.org/download/
[Install Intel® Graphics Compute Runtime for OpenCL™ Driver package 20.13.16352]:https://github.com/intel/compute-runtime/releases/tag/20.13.16352
[MKL-DNN repository]:https://github.com/intel/mkl-dnn/releases/download/v0.19/mklml_lnx_2019.0.5.20190502.tgz
[MKL-DNN repository for Windows]:https://github.com/intel/mkl-dnn/releases/download/v0.19/mklml_win_2019.0.5.20190502.zip
[OpenBLAS]:https://sourceforge.net/projects/openblas/files/v0.2.14/OpenBLAS-v0.2.14-Win64-int64.zip/download
# Get Started with OpenVINO™ Toolkit on Linux*
This guide provides you with the information that will help you to start using
the OpenVINO™ Toolkit on Linux\*. With this guide, you will learn how to:
1. [Configure the Model Optimizer](#configure-the-model-optimizer)
2. [Prepare a model for sample inference](#prepare-a-model-for-sample-inference)
3. [Run the Image Classification Sample Application with the model](#run-the-image-classification-sample-application)
## Prerequisites
1. This guide assumes that you have already cloned the `openvino` repo and
successfully built the Inference Engine and Samples using the
[build instructions](inference-engine/README.md).
2. The original structure of the repository directories remains unchanged.
> **NOTE**: Below, the directory to which the `openvino` repository is cloned is
referred to as `<OPENVINO_DIR>`.
## Configure the Model Optimizer
1. Go to the Model Optimizer prerequisites directory:
```sh
cd <OPENVINO_DIR>/model_optimizer/install_prerequisites
```
2. Run the script to configure the Model Optimizer for Caffe,
TensorFlow, MXNet, Kaldi\*, and ONNX:
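```sh
# assumes the combined prerequisites script shipped in this directory
sudo ./install_prerequisites.sh
```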
1. Go to the Model Optimizer prerequisites directory:
```sh
cd <OPENVINO_DIR>/model_optimizer/install_prerequisites
```
2. Run the script for your model framework. You can run more than one script:
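```sh
# framework-specific scripts from this directory; run the ones you need
sudo ./install_prerequisites_caffe.sh   # Caffe*
sudo ./install_prerequisites_tf.sh      # TensorFlow*
sudo ./install_prerequisites_mxnet.sh   # MXNet*
sudo ./install_prerequisites_kaldi.sh   # Kaldi*
sudo ./install_prerequisites_onnx.sh    # ONNX*
```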
**For CPU (FP32):**
```sh
python3 <OPENVINO_DIR>/model_optimizer/mo.py --input_model <models_dir>/classification/squeezenet/1.1/caffe/squeezenet1.1.caffemodel --data_type FP32 --output_dir <ir_dir>
```
**For GPU and MYRIAD (FP16):**
```sh
python3 <OPENVINO_DIR>/model_optimizer/mo.py --input_model <models_dir>/classification/squeezenet/1.1/caffe/squeezenet1.1.caffemodel --data_type FP16 --output_dir <ir_dir>
```
After the Model Optimizer script completes, the produced IR files (`squeezenet1.1.xml`, `squeezenet1.1.bin`) are in the specified `<ir_dir>` directory.
3. Copy the `squeezenet1.1.labels` file from the `<OPENVINO_DIR>/scripts/demo/`
folder to the model IR directory. This file contains the classes that ImageNet
uses so that the inference results show text instead of classification numbers:
```sh
cp <OPENVINO_DIR>/scripts/demo/squeezenet1.1.labels <ir_dir>
```
Now you are ready to run the Image Classification Sample Application.
The Inference Engine sample applications are compiled automatically when you
build the Inference Engine using the [build instructions](inference-engine/README.md).
The binary files are located in the `<OPENVINO_DIR>/inference-engine/bin/intel64/Release`
directory.
To run the Image Classification sample application with an input image on the prepared IR:
1. Go to the samples build directory:
```sh
cd <OPENVINO_DIR>/inference-engine/bin/intel64/Release
```
2. Run the sample executable, specifying the `car.png` file from the
`<OPENVINO_DIR>/scripts/demo/` directory as an input image, the IR of your
model, and a plugin for a hardware device to perform
inference on:
inference on:
**For CPU:**
```sh
./classification_sample -i <OPENVINO_DIR>/scripts/demo/car.png -m <ir_dir>/squeezenet1.1.xml -d CPU
```
**For GPU:**
```sh
./classification_sample -i <OPENVINO_DIR>/scripts/demo/car.png -m <ir_dir>/squeezenet1.1.xml -d GPU
```
**For MYRIAD:**
>**NOTE**: Running inference on VPU devices (Intel® Movidius™ Neural Compute
Stick or Intel® Neural Compute Stick 2) with the MYRIAD plugin requires
performing [additional hardware configuration steps](inference-engine/README.md#optional-additional-installation-steps-for-the-intel-movidius-neural-compute-stick-and-neural-compute-stick-2).
```sh
./classification_sample -i <OPENVINO_DIR>/scripts/demo/car.png -m <ir_dir>/squeezenet1.1.xml -d MYRIAD
```
When the sample application completes, you will see the label and confidence for the top-10 categories printed on the screen. Below is a sample output with inference results on CPU:
```sh
Top 10 results:
Image /home/user/openvino/scripts/demo/car.png
classid probability label
------- ----------- -----
# install dependencies
if [ -f /etc/lsb-release ]; then
# Ubuntu
host_cpu=$(uname -m)
if [ "$host_cpu" = "x86_64" ]; then
    x86_64_specific_packages="gcc-multilib g++-multilib"
else
    x86_64_specific_packages=""
fi

sudo -E apt update
sudo -E apt-get install -y \
build-essential \
ca-certificates \
git \
libboost-regex-dev \
$x86_64_specific_packages \
libgtk2.0-dev \
pkg-config \
unzip \