# Get Started with OpenVINO™ Toolkit on Linux*
This guide provides the information you need to start using the
OpenVINO™ Toolkit on Linux\*. With this guide, you will learn how to:
1. [Configure the Model Optimizer](#configure-the-model-optimizer)
2. [Prepare a model for sample inference](#prepare-a-model-for-sample-inference)
   1. [Download a pre-trained model](#download-a-trained-model)
   2. [Convert the model to an Intermediate Representation (IR) with the Model Optimizer](#convert-the-model-to-an-intermediate-representation-with-the-model-optimizer)
3. [Run the Image Classification Sample Application with the model](#run-the-image-classification-sample-application)
## Prerequisites

1. This guide assumes that you have already cloned the `openvino` repository and
   successfully built the Inference Engine and Samples using the
   [build instructions](build-instruction.md).
2. The original structure of the repository directories remains unchanged.
> **NOTE**: Below, the directory to which the `openvino` repository is cloned is
> referred to as `<OPENVINO_DIR>`.

## Configure the Model Optimizer
The Model Optimizer is a Python\*-based command line tool for importing trained
models from popular deep learning frameworks such as Caffe\*, TensorFlow\*,
Apache MXNet\*, ONNX\*, and Kaldi\*.
You cannot perform inference on your trained model without first running it
through the Model Optimizer. When you run a pre-trained model through the
Model Optimizer, it outputs an *Intermediate Representation (IR)* of
the network, a pair of files that describes the whole model:

- `.xml`: Describes the network topology
- `.bin`: Contains the weights and biases binary data
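To make the two-file split concrete, the toy snippet below builds a minimal, *hypothetical* IR-style `.xml` topology (real Model Optimizer output contains far more attributes, plus the companion `.bin` weights file) and lists its layers with the standard library:

```python
# Hypothetical, heavily simplified IR-style topology -- real Model Optimizer
# output has many more attributes and a matching .bin file with the weights.
import xml.etree.ElementTree as ET

toy_ir_xml = """
<net name="squeezenet1.1" version="10">
  <layers>
    <layer id="0" name="data" type="Parameter"/>
    <layer id="1" name="conv1" type="Convolution"/>
    <layer id="2" name="prob" type="SoftMax"/>
  </layers>
</net>
"""

root = ET.fromstring(toy_ir_xml)
layer_names = [layer.get("name") for layer in root.find("layers")]
print(layer_names)  # -> ['data', 'conv1', 'prob']
```

The `.xml` side is plain, inspectable markup like this; the binary `.bin` side holds only the raw weight and bias tensors that the layers reference.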
For more information about the Model Optimizer, refer to the
[Model Optimizer Developer Guide].

### Model Optimizer Configuration Steps
You can choose to either configure all supported frameworks at once **OR**
configure one framework at a time. Choose the option that best suits your needs.
If you see error messages, check for any missing dependencies.
> **NOTE**: The TensorFlow\* framework is not officially supported on CentOS\*,
> so the Model Optimizer for TensorFlow cannot be configured or run on CentOS
> systems.
> **IMPORTANT**: Internet access is required to execute the following steps
> successfully. If you access the Internet via proxy server only, please make
> sure that it is configured in your OS environment as well.
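For example, on most Linux shells the proxy can be exported for the current session before running the configuration scripts; the address below is a placeholder, not a real server:

```shell
# Placeholder proxy address -- replace with your actual proxy server and port.
export http_proxy=http://proxy.example.com:8080
export https_proxy=http://proxy.example.com:8080

echo "Using proxy: ${http_proxy}"
```

These variables only affect the current shell session; add them to your shell profile to make them persistent.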
**Option 1: Configure all supported frameworks at the same time**

1. Go to the Model Optimizer prerequisites directory:

   ```sh
   cd <OPENVINO_DIR>/model_optimizer/install_prerequisites
   ```

2. Run the script to configure the Model Optimizer for Caffe,
   TensorFlow 1.x, MXNet, Kaldi\*, and ONNX:

   ```sh
   sudo ./install_prerequisites.sh
   ```
**Option 2: Configure each framework separately**

Configure individual frameworks separately **ONLY** if you did not select
**Option 1** above.

1. Go to the Model Optimizer prerequisites directory:

   ```sh
   cd <OPENVINO_DIR>/model_optimizer/install_prerequisites
   ```

2. Run the script for your model framework. You can run more than one script:
   - For **Caffe**:

     ```sh
     sudo ./install_prerequisites_caffe.sh
     ```

   - For **TensorFlow 1.x**:

     ```sh
     sudo ./install_prerequisites_tf.sh
     ```

   - For **TensorFlow 2.x**:

     ```sh
     sudo ./install_prerequisites_tf2.sh
     ```

   - For **MXNet**:

     ```sh
     sudo ./install_prerequisites_mxnet.sh
     ```

   - For **ONNX**:

     ```sh
     sudo ./install_prerequisites_onnx.sh
     ```

   - For **Kaldi**:

     ```sh
     sudo ./install_prerequisites_kaldi.sh
     ```
The Model Optimizer is configured for one or more frameworks. Continue to the
next section to download and prepare a model for running a sample inference.
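As an optional sanity check, you can try importing the framework packages that the prerequisite scripts are expected to install. The package names below are assumptions about what your chosen scripts installed; a missing package is simply reported, not treated as fatal:

```python
# Hedged sketch: probe for framework packages the prerequisite scripts
# typically install. Adjust the list to the scripts you actually ran.
import importlib

frameworks = ["tensorflow", "mxnet", "onnx", "networkx", "numpy"]

for name in frameworks:
    try:
        importlib.import_module(name)
        print(f"{name}: available")
    except ImportError:
        print(f"{name}: missing")
```

If a package you configured shows up as missing, re-run the corresponding `install_prerequisites_*.sh` script and watch its output for errors.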
## Prepare a Model for Sample Inference

This section describes how to get a pre-trained model for sample inference
and how to prepare the optimized Intermediate Representation (IR) that the
Inference Engine uses.
### Download a Trained Model

To run the Image Classification Sample, you need a pre-trained model to run
the inference on. This guide uses the public SqueezeNet 1.1 Caffe\* model.
You can find and download this model manually, or use the OpenVINO™
[Model Downloader].
With the Model Downloader, you can download other popular public deep learning
topologies and [OpenVINO™ pre-trained models], which are already prepared for
running inference in a wide range of scenarios:

* object recognition,
* object re-identification,
* human pose estimation,
* action recognition, and others.
To download the SqueezeNet 1.1 Caffe\* model to a `models` folder (referred to
as `<models_dir>` below) with the Model Downloader:

1. Install the [prerequisites].
2. Run the `downloader.py` script, specifying the topology name and the path
   to your `<models_dir>`. For example, to download the model to a directory
   named `~/public_models`, run:

   ```sh
   ./downloader.py --name squeezenet1.1 --output_dir ~/public_models
   ```
   When the model files are successfully downloaded, output similar to the
   following is printed:

   ```
   ################|| Downloading squeezenet1.1 ||################

   ========== Downloading /home/user/public_models/public/squeezenet1.1/squeezenet1.1.prototxt
   ... 100%, 9 KB, 19621 KB/s, 0 seconds passed

   ========== Downloading /home/user/public_models/public/squeezenet1.1/squeezenet1.1.caffemodel
   ... 100%, 4834 KB, 5159 KB/s, 0 seconds passed

   ========== Replacing text in /home/user/public_models/public/squeezenet1.1/squeezenet1.1.prototxt
   ```
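The resulting directory layout (based on the sample output above) can be sketched as follows; this snippet only mimics the layout with empty placeholder files under `/tmp`, it does not download anything:

```shell
# Recreate the downloader's layout with empty placeholder files (illustration only).
mkdir -p /tmp/public_models/public/squeezenet1.1
touch /tmp/public_models/public/squeezenet1.1/squeezenet1.1.prototxt
touch /tmp/public_models/public/squeezenet1.1/squeezenet1.1.caffemodel

# Show the layout the real downloader would have produced.
find /tmp/public_models -type f | sort
```

Note that the downloader nests models under a `public/` (or `intel/`) subdirectory of your output directory, so keep that extra level in mind when passing paths to the Model Optimizer.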
### Convert the model to an Intermediate Representation with the Model Optimizer

> **NOTE**: This section assumes that you have configured the Model Optimizer using the instructions from the [Configure the Model Optimizer](#configure-the-model-optimizer) section.

1. Create a `<ir_dir>` directory that will contain the Intermediate Representation (IR) of the model.
2. The Inference Engine can perform inference on a [list of supported devices]
   using specific device plugins. Different plugins support models of
   [different precision formats], such as `FP32`, `FP16`, and `INT8`. To prepare an
   IR to run inference on particular hardware, run the Model Optimizer with the
   appropriate `--data_type` option:

   **For CPU (FP32):**

   ```sh
   python3 <OPENVINO_DIR>/model_optimizer/mo.py --input_model <models_dir>/public_models/public/squeezenet1.1/squeezenet1.1.caffemodel --data_type FP32 --output_dir <ir_dir>
   ```
   **For GPU and MYRIAD (FP16):**

   ```sh
   python3 <OPENVINO_DIR>/model_optimizer/mo.py --input_model <models_dir>/public_models/public/squeezenet1.1/squeezenet1.1.caffemodel --data_type FP16 --output_dir <ir_dir>
   ```

   After the Model Optimizer script completes, the produced IR files (`squeezenet1.1.xml`, `squeezenet1.1.bin`) are in the specified `<ir_dir>` directory.
3. Copy the `squeezenet1.1.labels` file from the `<OPENVINO_DIR>/scripts/demo/`
   folder to the model IR directory. This file contains the classes that ImageNet
   uses so that the inference results show text instead of classification numbers:

   ```sh
   cp <OPENVINO_DIR>/scripts/demo/squeezenet1.1.labels <ir_dir>
   ```

Now you are ready to run the Image Classification Sample Application.
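Regarding the `--data_type` choice above: `FP16` halves the size of each weight at the cost of precision. The round-trip below uses the standard library's half-precision packing (`struct` format `'e'`) with an arbitrary example value to show the magnitude of that loss:

```python
# Round-trip an arbitrary example weight through IEEE 754 half precision
# (struct format 'e') to see how much resolution FP16 gives up vs. FP32.
import struct

value = 0.8363342  # arbitrary example value, not from a real model

as_fp16 = struct.unpack("e", struct.pack("e", value))[0]

print(f"original value:        {value}")
print(f"after FP16 round-trip: {as_fp16}")
print(f"absolute error:        {abs(value - as_fp16):.6f}")
```

For many vision models this loss is negligible, which is why the GPU and MYRIAD plugins favor the smaller `FP16` IR.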
## Run the Image Classification Sample Application

The Inference Engine sample applications are compiled automatically when you
build the Inference Engine using the [build instructions](build-instruction.md).
The binary files are located in the `<OPENVINO_DIR>/bin/intel64/Release`
directory.
To run the Image Classification sample application with an input image on the prepared IR:

1. Go to the samples build directory:

   ```sh
   cd <OPENVINO_DIR>/bin/intel64/Release
   ```
2. Run the sample executable, specifying the `car.png` file from the
   `<OPENVINO_DIR>/scripts/demo/` directory as an input image, the IR of your
   model, and a plugin for the hardware device to perform inference on:

   **For CPU:**

   ```sh
   ./classification_sample_async -i <OPENVINO_DIR>/scripts/demo/car.png -m <ir_dir>/squeezenet1.1.xml -d CPU
   ```

   **For GPU:**

   ```sh
   ./classification_sample_async -i <OPENVINO_DIR>/scripts/demo/car.png -m <ir_dir>/squeezenet1.1.xml -d GPU
   ```
   **For MYRIAD:**

   > **NOTE**: Running inference on VPU devices (Intel® Movidius™ Neural Compute
   > Stick or Intel® Neural Compute Stick 2) with the MYRIAD plugin requires
   > performing [additional hardware configuration steps](build-instruction.md#optional-additional-installation-steps-for-the-intel-neural-compute-stick-2).

   ```sh
   ./classification_sample_async -i <OPENVINO_DIR>/scripts/demo/car.png -m <ir_dir>/squeezenet1.1.xml -d MYRIAD
   ```
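Since only the `-d` flag changes between devices, a small wrapper can print the exact command for each target. This is a hedged convenience sketch, not part of the sample; it only echoes the commands instead of running them, and the short paths are placeholders:

```shell
# Print (do not run) the sample invocation for each supported device flag.
# Paths are illustrative placeholders; substitute your real image and IR paths.
for device in CPU GPU MYRIAD; do
  echo "./classification_sample_async -i car.png -m squeezenet1.1.xml -d ${device}"
done
```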
When the sample application completes, you will see the label and confidence for the top 10 categories printed on the screen. Below is a sample output with inference results on CPU:
```
Image ../../../scripts/demo/car.png

classid probability label
------- ----------- -----
817     0.8363342   sports car, sport car
511     0.0946487   convertible
479     0.0419130   car wheel
751     0.0091071   racer, race car, racing car
436     0.0068161   beach wagon, station wagon, wagon, estate car, beach waggon, station waggon, waggon
656     0.0037564   minivan
586     0.0025741   half track
717     0.0016069   pickup, pickup truck
864     0.0012027   tow truck, tow car, wrecker
581     0.0005882   grille, radiator grille

[ INFO ] Execution successful

[ INFO ] This sample is an API example, for any performance measurements please use the dedicated benchmark_app tool
```
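The tabular part of the output is easy to post-process. The snippet below parses a few rows copied from the sample output above (the columns are whitespace-separated: class id, probability, label) and extracts the top-1 prediction:

```python
# Parse rows copied from the sample output above and find the top-1 class.
sample_output = """\
817     0.8363342   sports car, sport car
511     0.0946487   convertible
479     0.0419130   car wheel
"""

rows = []
for line in sample_output.splitlines():
    # Split on whitespace at most twice so the label keeps its internal commas.
    classid, prob, label = line.split(None, 2)
    rows.append((int(classid), float(prob), label))

top_id, top_prob, top_label = max(rows, key=lambda r: r[1])
print(f"top-1: class {top_id} ({top_label}) with probability {top_prob}")
# -> top-1: class 817 (sports car, sport car) with probability 0.8363342
```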
## Additional Resources

* [OpenVINO™ Release Notes](https://software.intel.com/en-us/articles/OpenVINO-RelNotes)
* [Inference Engine build instructions](build-instruction.md)
* [Introduction to Intel® Deep Learning Deployment Toolkit](https://docs.openvinotoolkit.org/latest/_docs_IE_DG_Introduction.html)
* [Inference Engine Developer Guide](https://docs.openvinotoolkit.org/latest/_docs_IE_DG_Deep_Learning_Inference_Engine_DevGuide.html)
* [Model Optimizer Developer Guide]
* [Inference Engine Samples Overview](https://docs.openvinotoolkit.org/latest/_docs_IE_DG_Samples_Overview.html)
[Model Optimizer Developer Guide]: https://docs.openvinotoolkit.org/latest/_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html
[Model Downloader]: https://github.com/opencv/open_model_zoo/tree/master/tools/downloader
[OpenVINO™ pre-trained models]: https://github.com/opencv/open_model_zoo/tree/master/models/intel
[prerequisites]: https://github.com/opencv/open_model_zoo/tree/master/tools/downloader#prerequisites
[list of supported devices]: https://docs.openvinotoolkit.org/latest/_docs_IE_DG_supported_plugins_Supported_Devices.html
[different precision formats]: https://docs.openvinotoolkit.org/latest/_docs_IE_DG_supported_plugins_Supported_Devices.html#supported_model_formats