# Get Started with OpenVINO™ Deep Learning Deployment Toolkit (DLDT) on Linux*

This guide provides the information you need to get started with the DLDT on
Linux\*. With this guide, you will learn how to:

1. [Configure the Model Optimizer](#configure-the-model-optimizer)
2. [Prepare a model for sample inference](#prepare-a-model-for-sample-inference)
   1. [Download a pre-trained model](#download-a-trained-model)
   2. [Convert the model to an Intermediate Representation (IR) with the Model Optimizer](#convert-the-model-to-an-intermediate-representation-with-the-model-optimizer)
3. [Run the Image Classification Sample Application with the model](#run-the-image-classification-sample-application)

## Prerequisites

1. This guide assumes that you have already cloned the `dldt` repo and
successfully built the Inference Engine and Samples using the
[build instructions](inference-engine/README.md).
2. The original structure of the repository directories remains unchanged.

> **NOTE**: Below, the directory to which the `dldt` repository is cloned is
referred to as `<DLDT_DIR>`.

## Configure the Model Optimizer

The Model Optimizer is a Python\*-based command-line tool for importing trained
models from popular deep learning frameworks such as Caffe\*, TensorFlow\*,
Apache MXNet\*, ONNX\*, and Kaldi\*.

You cannot perform inference on your trained model without first running the
model through the Model Optimizer. When you run a pre-trained model through the
Model Optimizer, it outputs an *Intermediate Representation (IR)* of
the network, a pair of files that describes the whole model:

- `.xml`: Describes the network topology
- `.bin`: Contains the weights and biases binary data

For more information about the Model Optimizer, refer to the
[Model Optimizer Developer Guide].

### Model Optimizer Configuration Steps

You can choose to either configure all supported frameworks at once **OR**
configure one framework at a time. Choose the option that best suits your needs.
If you see error messages, check for any missing dependencies.

> **NOTE**: The TensorFlow\* framework is not officially supported on CentOS\*,
so the Model Optimizer for TensorFlow cannot be configured on, or run with,
CentOS.

> **IMPORTANT**: Internet access is required to execute the following steps
successfully. If you access the Internet via a proxy server only, make sure
that it is configured in your OS environment as well.

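For example, one common way to make a proxy visible to the command-line tools
used below is to export the standard proxy environment variables. The address
here is only a placeholder; replace it with your actual proxy host and port:

```sh
# Hypothetical proxy address - replace with your own proxy host and port.
export http_proxy=http://proxy.example.com:8080
export https_proxy=http://proxy.example.com:8080
```
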
**Option 1: Configure all supported frameworks at the same time**

1. Go to the Model Optimizer prerequisites directory:

   ```sh
   cd <DLDT_DIR>/model_optimizer/install_prerequisites
   ```

2. Run the script to configure the Model Optimizer for Caffe,
TensorFlow, MXNet, Kaldi, and ONNX:

   ```sh
   sudo ./install_prerequisites.sh
   ```

**Option 2: Configure each framework separately**

Configure individual frameworks separately **ONLY** if you did not select
**Option 1** above.

1. Go to the Model Optimizer prerequisites directory:

   ```sh
   cd <DLDT_DIR>/model_optimizer/install_prerequisites
   ```

2. Run the script for your model framework. You can run more than one script:

   **For Caffe:**
   ```sh
   sudo ./install_prerequisites_caffe.sh
   ```

   **For TensorFlow:**
   ```sh
   sudo ./install_prerequisites_tf.sh
   ```

   **For MXNet:**
   ```sh
   sudo ./install_prerequisites_mxnet.sh
   ```

   **For ONNX:**
   ```sh
   sudo ./install_prerequisites_onnx.sh
   ```

   **For Kaldi:**
   ```sh
   sudo ./install_prerequisites_kaldi.sh
   ```

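Whichever option you chose, a quick sanity check is to print the Model
Optimizer version. This assumes the `--version` flag is available in your
checkout of the tool:

```sh
# Optional sanity check: print the Model Optimizer version.
python3 <DLDT_DIR>/model_optimizer/mo.py --version
```
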
The Model Optimizer is configured for one or more frameworks. Continue to the
next section to download and prepare a model for running a sample inference.

## Prepare a Model for Sample Inference

This section describes how to get a pre-trained model for sample inference
and how to prepare the optimized Intermediate Representation (IR) that the
Inference Engine uses.

### Download a Trained Model

To run the Image Classification Sample, you need a pre-trained model to run
the inference on. This guide uses the public SqueezeNet 1.1 Caffe\* model.
You can find and download this model manually, or you can use the OpenVINO™
[Model Downloader].

With the Model Downloader, you can download other popular public deep learning
topologies and [OpenVINO™ pre-trained models], which are already prepared for
running inference in a wide range of scenarios:

* object recognition,
* object re-identification,
* human pose estimation,
* action recognition, and others.

To download the SqueezeNet 1.1 Caffe\* model to a `models` folder (referred to
as `<models_dir>` below) with the Model Downloader:

1. Install the [prerequisites].
2. Run the `downloader.py` script, specifying the topology name and the path
to your `<models_dir>`. For example, to download the model to a directory
named `~/public_models`, run:

   ```sh
   ./downloader.py --name squeezenet1.1 --output_dir ~/public_models
   ```

When the model files are successfully downloaded, output similar to the
following is printed:

```
###############|| Downloading topologies ||###############

========= Downloading /home/username/public_models/classification/squeezenet/1.1/caffe/squeezenet1.1.prototxt

========= Downloading /home/username/public_models/classification/squeezenet/1.1/caffe/squeezenet1.1.caffemodel
... 100%, 4834 KB, 3157 KB/s, 1 seconds passed

###############|| Post processing ||###############

========= Changing input dimensions in squeezenet1.1.prototxt =========
```

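The downloader can also list every topology it knows about, which is useful if
you want to try a model other than SqueezeNet. This assumes the `--print_all`
flag described in the [Model Downloader] documentation:

```sh
# Print the names of all models the downloader can fetch.
./downloader.py --print_all
```
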
### Convert the model to an Intermediate Representation with the Model Optimizer

> **NOTE**: This section assumes that you have configured the Model Optimizer using the instructions from the [Configure the Model Optimizer](#configure-the-model-optimizer) section.

1. Create a `<ir_dir>` directory that will contain the Intermediate Representation (IR) of the model.

2. The Inference Engine can perform inference on a [list of supported devices]
using specific device plugins. Different plugins support models of
[different precision formats], such as `FP32`, `FP16`, and `INT8`. To prepare an
IR to run inference on particular hardware, run the Model Optimizer with the
appropriate `--data_type` option:

   **For CPU (FP32):**
   ```sh
   python3 <DLDT_DIR>/model_optimizer/mo.py --input_model <models_dir>/classification/squeezenet/1.1/caffe/squeezenet1.1.caffemodel --data_type FP32 --output_dir <ir_dir>
   ```

   **For GPU and MYRIAD (FP16):**
   ```sh
   python3 <DLDT_DIR>/model_optimizer/mo.py --input_model <models_dir>/classification/squeezenet/1.1/caffe/squeezenet1.1.caffemodel --data_type FP16 --output_dir <ir_dir>
   ```

   After the Model Optimizer script completes, the produced IR files (`squeezenet1.1.xml`, `squeezenet1.1.bin`) are in the specified `<ir_dir>` directory.

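   A quick way to confirm that the conversion succeeded is to list the output
   directory; the file names below match the SqueezeNet example used in this
   guide:

   ```sh
   # Expect the IR pair: squeezenet1.1.xml (topology) and squeezenet1.1.bin (weights).
   ls <ir_dir>
   ```
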
3. Copy the `squeezenet1.1.labels` file from the `<DLDT_DIR>/inference-engine/samples/sample_data/`
folder to the model IR directory. This file contains the class names that
ImageNet uses, so that the inference results show text labels instead of class
numbers:

   ```sh
   cp <DLDT_DIR>/inference-engine/samples/sample_data/squeezenet1.1.labels <ir_dir>
   ```

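   If you want to peek at the label set, you can print the first few entries
   (an optional check; `head` is a standard Linux utility):

   ```sh
   # Show the first five ImageNet class labels.
   head -n 5 <ir_dir>/squeezenet1.1.labels
   ```
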
Now you are ready to run the Image Classification Sample Application.

## Run the Image Classification Sample Application

The Inference Engine sample applications were compiled automatically when you
built the Inference Engine using the [build instructions](inference-engine/README.md).
The binary files are located in the `<DLDT_DIR>/inference-engine/bin/intel64/Release`
directory.

To run the Image Classification sample application with an input image on the prepared IR:

1. Go to the samples build directory:

   ```sh
   cd <DLDT_DIR>/inference-engine/bin/intel64/Release
   ```

2. Run the sample executable, specifying the `car.png` file from the
`<DLDT_DIR>/inference-engine/samples/sample_data/` directory as an input
image, the IR of your model, and a plugin for the hardware device on which to
perform inference:

   **For CPU:**
   ```sh
   ./classification_sample -i <DLDT_DIR>/inference-engine/samples/sample_data/car.png -m <ir_dir>/squeezenet1.1.xml -d CPU
   ```

   **For GPU:**
   ```sh
   ./classification_sample -i <DLDT_DIR>/inference-engine/samples/sample_data/car.png -m <ir_dir>/squeezenet1.1.xml -d GPU
   ```

   **For MYRIAD:**

   > **NOTE**: Running inference on VPU devices (Intel® Movidius™ Neural Compute
   Stick or Intel® Neural Compute Stick 2) with the MYRIAD plugin requires
   performing [additional hardware configuration steps](inference-engine/README.md#optional-additional-installation-steps-for-the-intel-movidius-neural-compute-stick-and-neural-compute-stick-2).

   ```sh
   ./classification_sample -i <DLDT_DIR>/inference-engine/samples/sample_data/car.png -m <ir_dir>/squeezenet1.1.xml -d MYRIAD
   ```

When the sample application completes, the label and confidence for the top-10 categories are printed on the screen. Below is a sample output with inference results on CPU:

```
Image /home/user/dldt/inference-engine/samples/sample_data/car.png

classid probability label
------- ----------- -----
817     0.8363345   sports car, sport car
511     0.0946488   convertible
479     0.0419131   car wheel
751     0.0091071   racer, race car, racing car
436     0.0068161   beach wagon, station wagon, wagon, estate car, beach waggon, station waggon, waggon
656     0.0037564   minivan
586     0.0025741   half track
717     0.0016069   pickup, pickup truck
864     0.0012027   tow truck, tow car, wrecker
581     0.0005882   grille, radiator grille

total inference time: 2.6642941
Average running time of one iteration: 2.6642941 ms
Throughput: 375.3339402 FPS

[ INFO ] Execution successful
```

## Additional Resources

* [OpenVINO™ Release Notes](https://software.intel.com/en-us/articles/OpenVINO-RelNotes)
* [Inference Engine build instructions](inference-engine/README.md)
* [Introduction to Intel® Deep Learning Deployment Toolkit](https://docs.openvinotoolkit.org/latest/_docs_IE_DG_Introduction.html)
* [Inference Engine Developer Guide](https://docs.openvinotoolkit.org/latest/_docs_IE_DG_Deep_Learning_Inference_Engine_DevGuide.html)
* [Model Optimizer Developer Guide]
* [Inference Engine Samples Overview](https://docs.openvinotoolkit.org/latest/_docs_IE_DG_Samples_Overview.html)

[Model Optimizer Developer Guide]: https://docs.openvinotoolkit.org/latest/_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html
[Model Downloader]: https://github.com/opencv/open_model_zoo/tree/master/tools/downloader
[OpenVINO™ pre-trained models]: https://github.com/opencv/open_model_zoo/tree/master/models/intel
[prerequisites]: https://github.com/opencv/open_model_zoo/tree/master/tools/downloader#prerequisites
[list of supported devices]: https://docs.openvinotoolkit.org/latest/_docs_IE_DG_supported_plugins_Supported_Devices.html
[different precision formats]: https://docs.openvinotoolkit.org/latest/_docs_IE_DG_supported_plugins_Supported_Devices.html#supported_model_formats