This topic demonstrates how to run the `vpu_profile` tool application, which estimates performance by calculating the average time of each stage in a model.
Upon start-up, the application reads command-line parameters and loads a network and its inputs from the given directory to the Inference Engine plugin.
The application then starts infer requests in asynchronous mode until the specified number of iterations is finished.
After the inference stage, the profile tool computes the average time that each stage took.
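The averaging step can be illustrated with a short sketch. The stage names and timings below are made up for illustration only; the real tool collects per-stage timings from the inference plugin's performance counters:

```python
# Sketch: average per-stage execution time over several iterations.
# Stage names and millisecond values here are hypothetical examples.

def average_stage_times(per_iteration_timings):
    """per_iteration_timings: list of dicts mapping stage name -> time in ms."""
    totals = {}
    counts = {}
    for timings in per_iteration_timings:
        for stage, ms in timings.items():
            totals[stage] = totals.get(stage, 0.0) + ms
            counts[stage] = counts.get(stage, 0) + 1
    return {stage: totals[stage] / counts[stage] for stage in totals}

iterations = [
    {"Convolution_1": 2.0, "ReLU_1": 0.4},
    {"Convolution_1": 2.2, "ReLU_1": 0.6},
]
print(average_stage_times(iterations))  # {'Convolution_1': 2.1, 'ReLU_1': 0.5}
```

More iterations (the `-iterations` option below) smooth out run-to-run noise in these averages.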
Running the application with the <code>-h</code> option yields the following usage message:
```
API version ............ <version>
Build .................. <number>

-help                     Optional. Print a usage message.
-model <value>            Required. Path to an .xml model file.
-inputs_dir <value>       Required. Path to a folder with input images. Default value: ".".
-plugin_path <value>      Optional. Path to a plugin folder.
-config <value>           Optional. Path to the configuration file. Default value: "config".
-platform <value>         Optional. Specifies the Movidius platform.
-iterations <value>       Optional. Specifies the number of iterations. Default value: 16.
-plugin <value>           Optional. Specifies the plugin. Supported values: "myriad".
                          Default value: "myriad".
```
Running the application with an empty list of options yields an error.
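For reference, the option set above can be mirrored with a small `argparse` sketch. This is illustrative only (the actual tool parses its options in C++); the defaults reproduce the usage message above:

```python
import argparse

# Illustrative mirror of the vpu_profile options listed above.
# Defaults follow the usage message: "." for -inputs_dir, "config"
# for -config, 16 iterations, "myriad" plugin.
parser = argparse.ArgumentParser(add_help=False)
parser.add_argument("-help", action="store_true")
parser.add_argument("-model", required=True)
parser.add_argument("-inputs_dir", default=".")
parser.add_argument("-plugin_path", default=None)
parser.add_argument("-config", default="config")
parser.add_argument("-platform", default=None)
parser.add_argument("-iterations", type=int, default=16)
parser.add_argument("-plugin", default="myriad", choices=["myriad"])

args = parser.parse_args(["-model", "faster_rcnn.xml", "-iterations", "32"])
print(args.model, args.iterations, args.plugin)  # faster_rcnn.xml 32 myriad
```

Note that `-model` is the only required option without a usable default, which is why an empty option list is an error.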
To run the sample, you can use public or pre-trained models. To download the pre-trained models, use the OpenVINO [Model Downloader](https://github.com/opencv/open_model_zoo/tree/2018/model_downloader) or go to [https://download.01.org/opencv/](https://download.01.org/opencv/).
> **NOTE**: Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](./docs/mo_dg/deep_learning_model_optimizer_devguide.md).
You can use the following command to run inference on images from a folder using a trained Faster R-CNN network:
```sh
./vpu_profile -model <path_to_model>/faster_rcnn.xml -inputs_dir <path_to_inputs>
```