# Validation Application

Inference Engine Validation Application is a tool that allows you to infer deep learning models with
a standard input and output configuration and to collect simple
validation metrics for topologies. It supports the **top-1** and **top-5** metrics for Classification networks and
the 11-point **mAP** metric for Object Detection networks.

> **NOTE**: Before running the application with trained models, make sure the models are converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](./docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).

Possible use cases of the tool:
* Check if the Inference Engine infers the public topologies well (the engineering team uses the Validation Application for regular testing)
* Verify if a custom model is compatible with the default input/output configuration and compare its
accuracy with the public models
* Use the Validation Application as another sample: although the code is much more complex than in the classification and object
detection samples, the source code is open and can be re-used

> **NOTE**: By default, Inference Engine samples and demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the sample or demo application or reconvert your model using the Model Optimizer tool with the `--reverse_input_channels` argument specified. For more information about the argument, refer to the **When to Reverse Input Channels** section of [Converting a Model Using General Conversion Parameters](./docs/MO_DG/prepare_model/convert_model/Converting_Model_General.md).

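For example, a Caffe* model could be reconverted with reversed channels roughly as follows (the exact location of `mo.py` and any additional conversion parameters depend on your Model Optimizer installation and model, so treat this as a sketch):

```sh
# Reconvert the model so that the resulting IR expects RGB channel order
python3 mo.py --input_model <model_name>.caffemodel --reverse_input_channels
```
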
## Validation Application Options

The Validation Application provides the following command-line interface (CLI):

```sh
Usage: validation_app [OPTION]

Available options:

    -h                        Print a help message
    -t <type>                 Type of an inferred network ("C" by default)
      -t "C"                  for classification
      -t "OD"                 for object detection
    -i <path>                 Required. Folder with validation images. Path to a directory with validation images. For Classification models, the directory must contain folders named as labels with images inside or a .txt file with a list of images. For Object Detection models, the dataset must be in VOC format.
    -m <path>                 Required. Path to an .xml file with a trained model
    -lbl <path>               Labels file path. The labels file contains names of the dataset classes
    -l <absolute_path>        Required for CPU custom layers. Absolute path to a shared library with the kernel implementations
    -c <absolute_path>        Required for GPU custom kernels. Absolute path to an .xml file with the kernel descriptions
    -d <device>               Target device to infer on: CPU (default), GPU, FPGA, HDDL or MYRIAD. The application looks for a suitable plugin for the specified device.
    -b N                      Batch size value. If not specified, the batch size value is taken from IR
    -ppType <type>            Preprocessing type. Options: "None", "Resize", "ResizeCrop"
    -ppSize N                 Preprocessing size (used with ppType="ResizeCrop")
    -ppWidth W                Preprocessing width (overrides -ppSize, used with ppType="ResizeCrop")
    -ppHeight H               Preprocessing height (overrides -ppSize, used with ppType="ResizeCrop")
    --dump                    Dump file names and inference results to a .csv file

    Classification-specific options:
      -Czb true               "Zero is a background" flag. Some networks are trained with a modified dataset where the class IDs are enumerated from 1, but 0 is an undefined "background" class (which is never detected)

    Object detection-specific options:
      -ODkind <kind>          Type of an Object Detection model. Options: SSD
      -ODa <path>             Required for Object Detection models. Path to a directory containing an .xml file with annotations for images
      -ODc <file>             Required for Object Detection models. Path to a file containing a list of classes
      -ODsubdir <name>        Directory between the path to images (specified with -i) and image name (specified in the .xml file). For the VOC2007 dataset, use JPEGImages
```

The tool options are divided into two categories:

1. **Common options** named with a single letter or a word, such as `-b` or `--dump`. These options are the same in all Validation Application modes.
2. **Network type-specific options** named as an acronym of the network type (`C` or `OD`) followed by a letter or a word.

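For illustration, the command below combines several common options in a single classification run; all paths are placeholders and the preprocessing size is chosen arbitrarily:

```sh
# Example only: resize and crop inputs to 224x224, infer in batches of 32, dump per-image results to a .csv file
./validation_app -t C -i <path_to_dataset> -m <path_to_model>/<model_name>.xml -d CPU -b 32 -ppType "ResizeCrop" -ppSize 224 --dump
```
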
> **NOTE**: By default, Inference Engine samples expect input images to have BGR channels order. If you trained your model to work with images in RGB order, you need to manually rearrange the default channels order in the sample application or reconvert your model using the Model Optimizer tool with the `--reverse_input_channels` argument specified. For more information about the argument, refer to [When to Reverse Input Channels](./docs/MO_DG/prepare_model/convert_model/Converting_Model_General.md#when_to_reverse_input_channels).

When executed, the Validation Application performs the following steps:

1. Loads a model to an Inference Engine plugin
2. Reads the validation set (specified with the `-i` option):
    - if you specified a directory, the application tries to load labels first. To do this, it searches for a file with the same name as the model, but with the `.labels` extension (instead of `.xml`). Then it searches the specified folder, detects its sub-folders named as known labels, and adds all images from these sub-folders to the validation set. When there are no such sub-folders, the validation set is considered empty.
    - if you specified a `.txt` file, the application reads this file expecting every line to be in the correct format. For more information about the format, refer to the <a href="#preparing">Prepare a Dataset</a> section below.
3. Reads the batch size value specified with the `-b` option and loads this number of images to the plugin
   > **NOTE**: Image loading time is not a part of the inference time reported by the application.
4. The plugin infers the model, and the Validation Application collects the statistics.

You can also retrieve inference results by specifying the `--dump` option, however it generates a report only
for Classification models. This CLI option enables creation (if possible) of an inference report in the `.csv` format.

The structure of the report is a set of lines, each of which contains semicolon-separated values:
* name of an image
* a flag representing correctness of the prediction
* ID of the Top-1 class
* probability that the image belongs to the Top-1 class, in percent
* ID of the Top-2 class
* probability that the image belongs to the Top-2 class, in percent
* and so on

This is an example line from such a report:

```
"ILSVRC2012_val_00002138.bmp";1;1;8.5;392;6.875;123;5.875;2;5.5;396;5;
```

It means that the given image was predicted correctly. The most probable prediction is that this image
represents class *1* with the probability *0.085*.

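Because each report line carries the correctness flag in its second field, simple post-processing is possible with standard tools. The sketch below recomputes top-1 accuracy from the dump; `inference_report.csv` is a placeholder for whatever file name the `--dump` option produced in your run:

```sh
# Top-1 accuracy = share of report lines whose second field (correctness flag) equals 1
awk -F';' '{ total++; correct += $2 } END { printf "Top-1 accuracy: %.2f%%\n", 100 * correct / total }' inference_report.csv
```
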
## <a name="preparing"></a>Prepare a Dataset

You must prepare the dataset before running the Validation Application. The format of the dataset depends on
the type of the model you are going to validate. Make sure that the dataset format is applicable
for the chosen model type.

### Dataset Format for Classification: Folders as Classes

In this case, a dataset has the following structure:

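```
<path>/dataset
    /apron
        apron1.bmp
        apron2.bmp
    /collie
        a_big_dog.jpg
    /coral reef
        reef.bmp
    /Siamese
        cat3.jpg
```
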
This structure means that each folder in the dataset directory must have the name of one of the classes and contain all images of this class. In the given example, there are two images that represent the class `apron`, while the other three classes have only one image each.

> **NOTE:** A dataset can contain images of both `.bmp` and `.jpg` formats.

The correct way to use such a dataset is to specify the path as `-i <path>/dataset`.

### Dataset Format for Classification: List of Images (ImageNet-like)

If you want to use this dataset format, create a single file with a list of images. In this case, the correct set of files must be similar to the following:

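```
<path>/dataset
    apron1.bmp
    apron2.bmp
    a_big_dog.jpg
    reef.bmp
    cat3.jpg
    labels.txt
```
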
Where `labels.txt` looks like:

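```
apron1.bmp 411
apron2.bmp 411
a_big_dog.jpg 231
reef.bmp 973
cat3.jpg 284
```
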
Each line of the file must contain the name of the image and the ID of the class
that it represents in the format `<image_name> tabulation <class_id>`. For example, `apron1.bmp` represents the class with ID `411`.

> **NOTE:** A dataset can contain images of both `.bmp` and `.jpg` formats.

The correct way to use such a dataset is to specify the path as `-i <path>/dataset/labels.txt`.

### Dataset Format for Object Detection (VOC-like)

Object Detection SSD models can be inferred on the original dataset that was used as a testing dataset during the model training.
To prepare the VOC dataset, follow the steps below:

1. Download the pre-trained SSD-300 model from the SSD GitHub* repository at
[https://github.com/weiliu89/caffe/tree/ssd](https://github.com/weiliu89/caffe/tree/ssd).

2. Download the VOC2007 testing dataset:
   ```sh
   wget http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCtest_06-Nov-2007.tar
   tar -xvf VOCtest_06-Nov-2007.tar
   ```

3. Convert the model with the [Model Optimizer](./docs/MO_DG/prepare_model/convert_model/Convert_Model_From_Caffe.md).

4. Create a proper `.txt` class file from the original `labelmap_voc.prototxt`. The new file must be in
the following format:

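   ```
   none_of_the_above 0
   aeroplane 1
   bicycle 2
   bird 3
   boat 4
   bottle 5
   bus 6
   car 7
   cat 8
   chair 9
   cow 10
   diningtable 11
   dog 12
   horse 13
   motorbike 14
   person 15
   pottedplant 16
   sheep 17
   sofa 18
   train 19
   tvmonitor 20
   ```
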
   Save this file as `VOC_SSD_Classes.txt`.

## Validate Classification Models

> **NOTE**: Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](./docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).

Once you have prepared the dataset (refer to the <a href="#preparing">Prepare a Dataset</a> section above),
run the following command to infer a classification model on the selected dataset:

```sh
./validation_app -t C -i <path_to_images_directory_or_txt_file> -m <path_to_classification_model>/<model_name>.xml -d <CPU|GPU>
```

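For example, a run on the folders-as-classes dataset shown above, assuming a hypothetical AlexNet IR and a batch size of 16, could look like this:

```sh
./validation_app -t C -i <path>/dataset -m <path_to_classification_model>/alexnet_fp32.xml -d CPU -b 16
```
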
## Validate Object Detection Models

> **NOTE**: The Validation Application was validated with the SSD CNN. Any network that can be inferred by the Inference Engine
> and has the same input and output format should be supported as well.

> **NOTE**: Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](./docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).

Once you have prepared the dataset (refer to the <a href="#preparing">Prepare a Dataset</a> section above),
run the following command to infer an Object Detection model on the selected dataset:

```sh
./validation_app -d CPU -t OD -ODa "<path_to_VOC_dataset>/VOCdevkit/VOC2007/Annotations" -i "<path_to_VOC_dataset>/VOCdevkit" -m "<path_to_model>/vgg_voc0712_ssd_300x300.xml" -ODc "<path_to_classes_file>/VOC_SSD_Classes.txt" -ODsubdir JPEGImages
```

## Understand Validation Application Output

During the validation process, you can see the interactive progress bar that represents the current validation stage. When it is
full, the validation process is over, and you can analyze the output.

Key data from the output:
* **Network loading time** - time spent on topology loading, in ms
* **Model** - path to the chosen model
* **Model Precision** - precision of the chosen model
* **Batch size** - specified batch size
* **Validation dataset** - path to the validation set
* **Validation approach** - type of the model: Classification or Object Detection
* **Device** - device type

Below you can find an example output for Classification models, which reports the average inference time and the
**Top-1** and **Top-5** metric values:

```
Average infer time (ms): 588.977 (16.98 images per second with batch size = 10)
Top1 accuracy: 70.00% (7 of 10 images were detected correctly, top class is correct)
Top5 accuracy: 80.00% (8 of 10 images were detected correctly, top five classes contain required class)
```

Below you can find an example output for Object Detection models:

```
Progress: [....................] 100.00% done

[ INFO ] Processing output blobs
Network load time: 27.70ms
Model: /home/user/models/ssd/withmean/vgg_voc0712_ssd_300x300/vgg_voc0712_ssd_300x300.xml
Model Precision: FP32
Validation dataset: /home/user/Data/SSD-data/testonly/VOCdevkit
Validation approach: Object detection network

Average infer time (ms): 166.49 (6.01 images per second with batch size = 1)
Average precision per class table:

Mean Average Precision (mAP): 0.7767
```

This output shows the resulting `mAP` metric value for the SSD300 model used to prepare the
dataset. This value matches the result stated in the
[SSD GitHub* repository](https://github.com/weiliu89/caffe/tree/ssd) and in the
[original arXiv paper](http://arxiv.org/abs/1512.02325).

## See Also

* [Using Inference Engine Samples](./docs/IE_DG/Samples_Overview.md)