To run the tool, you can use public models or Intel's pre-trained models. To download the models, use the OpenVINO [Model Downloader](@ref omz_tools_downloader_README) or go to [https://download.01.org/opencv/](https://download.01.org/opencv/).
> **NOTE**: Before running the tool with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
+>
+> The sample also accepts models in ONNX format (\*.onnx) that do not require preprocessing.
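In practice, the format difference is absorbed by a single call: `Core::ReadNetwork` locates the `.bin` weights file next to an IR `.xml` automatically and reads `.onnx` files directly. A minimal sketch, assuming the Inference Engine headers are available and using placeholder model paths:

```cpp
#include <inference_engine.hpp>

int main() {
    InferenceEngine::Core ie;

    // IR format: the weights file (alexnet.bin) is located automatically
    // next to the .xml, so no second argument is required
    InferenceEngine::CNNNetwork ir_net = ie.ReadNetwork("alexnet.xml");

    // ONNX format: a single self-contained file, read without any
    // Model Optimizer conversion step
    InferenceEngine::CNNNetwork onnx_net = ie.ReadNetwork("model.onnx");

    return 0;
}
```

Either network object can then be compiled with `Core::LoadNetwork` regardless of the source format.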
## Examples of Running the Tool
## See Also
* [Using Inference Engine Samples](../../../docs/IE_DG/Samples_Overview.md)
* [Model Optimizer](../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)
-* [Model Downloader](@ref omz_tools_downloader_README)
\ No newline at end of file
+* [Model Downloader](@ref omz_tools_downloader_README)
To run the sample, use AlexNet and GoogLeNet or other public or Intel's pre-trained image classification models. To download the pre-trained models, use the OpenVINO [Model Downloader](@ref omz_tools_downloader_README) or go to [https://download.01.org/opencv/](https://download.01.org/opencv/).
> **NOTE**: Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
+>
+> The sample also accepts models in ONNX format (\*.onnx) that do not require preprocessing.
To run inference on an image using a trained AlexNet network on FPGA with a fallback to CPU, use the following command:
```sh
std::cout << ie.GetVersions(FLAGS_d) << std::endl;
// -----------------------------------------------------------------------------------------------------
- // --------------------------- 2. Read IR Generated by ModelOptimizer (.xml and .bin files) ------------
+ // 2. Read a model in OpenVINO Intermediate Representation (.xml and .bin files) or ONNX (.onnx file) format
slog::info << "Loading network files" << slog::endl;
/** Read network model **/
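For context, the read step shown above is only the beginning of this sample's pipeline; the network is then compiled for a device and inferred asynchronously. A condensed sketch of that flow, reusing the sample's `FLAGS_m` (model path) and `FLAGS_d` (device) flags, with error handling omitted:

```cpp
// Assumes #include <inference_engine.hpp> and the sample's gflags definitions
InferenceEngine::Core ie;
InferenceEngine::CNNNetwork network = ie.ReadNetwork(FLAGS_m);  // IR (.xml) or ONNX (.onnx) path

// Compile the network for the requested device, e.g. "CPU" or "GPU"
InferenceEngine::ExecutableNetwork executable = ie.LoadNetwork(network, FLAGS_d);

// Run a single asynchronous request and block until the result is ready
InferenceEngine::InferRequest request = executable.CreateInferRequest();
request.StartAsync();
request.Wait(InferenceEngine::IInferRequest::WaitMode::RESULT_READY);
```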
To run the sample, you can use public models or Intel's pre-trained models. To download the pre-trained models, use the OpenVINO [Model Downloader](@ref omz_tools_downloader_README) or go to [https://download.01.org/opencv/](https://download.01.org/opencv/).
> **NOTE**: Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
-
+>
+> The sample also accepts models in ONNX format (\*.onnx) that do not require preprocessing.
+
To run inference on an image using a trained AlexNet network on a GPU, use the following command:
```sh
./hello_classification <path_to_model>/alexnet_fp32.xml <path_to_image>/cat.bmp GPU
```
#if defined(ENABLE_UNICODE_PATH_SUPPORT) && defined(_WIN32)
#define tcout std::wcout
#define file_name_t std::wstring
-#define WEIGHTS_EXT L".bin"
#define imread_t imreadW
#define ClassificationResult_t ClassificationResultW
#else
#define tcout std::cout
#define file_name_t std::string
-#define WEIGHTS_EXT ".bin"
#define imread_t cv::imread
#define ClassificationResult_t ClassificationResult
#endif
Core ie;
// -----------------------------------------------------------------------------------------------------
- // --------------------------- 2. Read IR Generated by ModelOptimizer (.xml and .bin files) ------------
- CNNNetwork network = ie.ReadNetwork(input_model, input_model.substr(0, input_model.size() - 4) + WEIGHTS_EXT);
+ // 2. Read a model in OpenVINO Intermediate Representation (.xml and .bin files) or ONNX (.onnx file) format
+ CNNNetwork network = ie.ReadNetwork(input_model);
network.setBatchSize(1);
// -----------------------------------------------------------------------------------------------------
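Immediately after this hunk, hello_classification configures the input and output so that an OpenCV-decoded image can be fed without manual conversion. A sketch of that neighboring, unchanged step (written from the sample's general shape, not from this diff):

```cpp
// Input: accept 8-bit BGR images in NHWC layout directly from cv::imread
InferenceEngine::InputInfo::Ptr input_info = network.getInputsInfo().begin()->second;
input_info->setLayout(InferenceEngine::Layout::NHWC);
input_info->setPrecision(InferenceEngine::Precision::U8);

// Output: read the classification scores back as 32-bit floats
InferenceEngine::DataPtr output_info = network.getOutputsInfo().begin()->second;
output_info->setPrecision(InferenceEngine::Precision::FP32);
```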
> **NOTE**: Before running the sample with a trained model, make sure the model is converted to the
> Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
+>
+> The sample also accepts models in ONNX format (\*.onnx) that do not require preprocessing.
To run inference on an NV12 image using a trained AlexNet network on CPU, use the following command:
```sh
Core ie;
// -----------------------------------------------------------------------------------------------------
- // -------------------------- 2. Read the IR generated by the Model Optimizer (.xml and .bin files) ----
+ // 2. Read a model in OpenVINO Intermediate Representation (.xml and .bin files) or ONNX (.onnx file) format
CNNNetwork network = ie.ReadNetwork(input_model);
setBatchSize(network, 1);
// -----------------------------------------------------------------------------------------------------
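The NV12 sample differs from the others mainly in how the input blob is built: the Y and UV planes are wrapped separately and combined into a compound blob, and color conversion is delegated to the plugin. A rough sketch of that mechanism, assuming the `NV12Blob` API from `ie_compound_blob.h` and hypothetical `image_data`, `width`, `height`, `input_name`, and `infer_request` variables:

```cpp
using namespace InferenceEngine;

// Let the plugin convert NV12 to the model's color format during preprocessing
InputInfo::Ptr input_info = network.getInputsInfo().begin()->second;
input_info->getPreProcess().setColorFormat(ColorFormat::NV12);

// Wrap the Y plane and the interleaved UV plane of the raw image buffer
// (image_data is a uint8_t*; width and height are size_t)
TensorDesc y_desc(Precision::U8, {1, 1, height, width}, Layout::NHWC);
TensorDesc uv_desc(Precision::U8, {1, 2, height / 2, width / 2}, Layout::NHWC);
Blob::Ptr y_blob = make_shared_blob<uint8_t>(y_desc, image_data);
Blob::Ptr uv_blob = make_shared_blob<uint8_t>(uv_desc, image_data + width * height);

// Combine both planes into one compound blob and bind it to the input
Blob::Ptr nv12_blob = make_shared_blob<NV12Blob>(y_blob, uv_blob);
infer_request.SetBlob(input_name, nv12_blob);
```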
To run the sample, you can use public models or Intel's pre-trained models. To download the pre-trained models, use the OpenVINO [Model Downloader](@ref omz_tools_downloader_README) or go to [https://download.01.org/opencv/](https://download.01.org/opencv/).
> **NOTE**: Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
+>
+> The sample also accepts models in ONNX format (\*.onnx) that do not require preprocessing.
To run inference on an image using a trained SSD network on CPU, use the following command:
```sh
}
// -----------------------------------------------------------------------------------------------------
- // --------------------------- 2. Read IR Generated by ModelOptimizer (.xml and .bin files) ------------
+ // 2. Read a model in OpenVINO Intermediate Representation (.xml and .bin files) or ONNX (.onnx file) format
CNNNetwork network = ie.ReadNetwork(input_model);
OutputsDataMap outputs_info(network.getOutputsInfo());
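Downstream of this hunk, the sample walks the detection output blob. For the standard SSD output layout of [1, 1, N, 7], one row per detection, the parsing reduces to the sketch below; `output_blob`, `confidence_threshold`, `image_width`, and `image_height` are assumed to be in scope:

```cpp
// Each detection row: [image_id, label, confidence, xmin, ymin, xmax, ymax],
// with coordinates normalized to [0, 1]
const float* detections = output_blob->buffer().as<float*>();
const size_t max_count = output_blob->getTensorDesc().getDims()[2];

for (size_t i = 0; i < max_count; ++i) {
    const float* row = detections + i * 7;
    if (row[0] < 0) break;                         // image_id of -1 ends the list
    if (row[2] < confidence_threshold) continue;   // skip weak detections
    const int label = static_cast<int>(row[1]);
    const int xmin = static_cast<int>(row[3] * image_width);
    const int ymin = static_cast<int>(row[4] * image_height);
    const int xmax = static_cast<int>(row[5] * image_width);
    const int ymax = static_cast<int>(row[6] * image_height);
    // ... draw or log the box for `label` ...
}
```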
To run the sample, you can use public models or Intel's pre-trained models. To download the pre-trained models, use the OpenVINO [Model Downloader](@ref omz_tools_downloader_README) or go to [https://download.01.org/opencv/](https://download.01.org/opencv/).
> **NOTE**: Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
+>
+> The sample also accepts models in ONNX format (\*.onnx) that do not require preprocessing.
For example, to run inference on a CPU with the OpenVINO™ toolkit person detection SSD models, use one of the following commands:
}
// -----------------------------------------------------------------------------------------------------
- // --------------------------- 4. Read IR Generated by ModelOptimizer (.xml and .bin files) ------------
- std::string binFileName = fileNameNoExt(FLAGS_m) + ".bin";
+ // 4. Read a model in OpenVINO Intermediate Representation (.xml and .bin files) or ONNX (.onnx file) format
slog::info << "Loading network files:"
"\n\t" << FLAGS_m <<
- "\n\t" << binFileName <<
slog::endl;
/** Read network model **/
for comparison.
> **NOTE**: Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
+>
+> The sample also accepts models in ONNX format (\*.onnx) that do not require preprocessing.
## Sample Output
std::cout << ie.GetVersions(deviceStr) << std::endl;
// -----------------------------------------------------------------------------------------------------
- // --------------------------- 2. Read IR Generated by ModelOptimizer (.xml and .bin files) ------------
+ // 2. Read a model in OpenVINO Intermediate Representation (.xml and .bin files) or ONNX (.onnx file) format
slog::info << "Loading network files" << slog::endl;
CNNNetwork network;
inference of style transfer models.
> **NOTE**: The OpenVINO™ toolkit does not include a pre-trained model to run the Neural Style Transfer sample. A public model from [Zhaw's Neural Style Transfer repository](https://github.com/zhaw/neural_style) can be used. Read the [Converting a Style Transfer Model from MXNet*](../../../docs/MO_DG/prepare_model/convert_model/mxnet_specific/Convert_Style_Transfer_From_MXNet.md) topic from the [Model Optimizer Developer Guide](../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md) to learn how to get the trained model and convert it to the Inference Engine format (\*.xml + \*.bin).
+>
+> The sample also accepts models in ONNX format (\*.onnx) that do not require preprocessing.
> **NOTE**: By default, Inference Engine samples and demos expect input with BGR channel order. If you trained your model to work with RGB order, you need to manually rearrange the default channel order in the sample or demo application, or reconvert your model using the Model Optimizer tool with the `--reverse_input_channels` argument specified. For more information about the argument, refer to the **When to Reverse Input Channels** section of [Converting a Model Using General Conversion Parameters](../../../docs/MO_DG/prepare_model/convert_model/Converting_Model_General.md).
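If reconverting the model is not an option, the swap can also be done at the application level before the image is copied into the input blob. A short OpenCV sketch (illustrative only; the samples themselves do not do this):

```cpp
// OpenCV decodes images to BGR by default; reorder for RGB-trained models
// (assumes <opencv2/opencv.hpp> and a valid input_image_path string)
cv::Mat image = cv::imread(input_image_path);
cv::cvtColor(image, image, cv::COLOR_BGR2RGB);  // in-place channel swap
```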
}
// -----------------------------------------------------------------------------------------------------
- // --------------------------- 2. Read IR Generated by ModelOptimizer (.xml and .bin files) ------------
+ // 2. Read a model in OpenVINO Intermediate Representation (.xml and .bin files) or ONNX (.onnx file) format
slog::info << "Loading network files" << slog::endl;
/** Read network model **/