* ngraph python sample
This sample demonstrates how to run inference using an ngraph::Function to create a network
- added sample
- added README
- added LeNet weights
* Added ONNX support for C samples
* Revert "ngraph python sample"
This reverts commit 8033292dc367017f6325ea4f96614fdcb797e9dd.
* Added ONNX support for C samples
Fixed a code style mistake
* Removed optional code
Co-authored-by: Alexander Zhogov <alexander.zhogov@intel.com>
To run the sample, you can use public or pre-trained models. To download the pre-trained models, use the OpenVINO [Model Downloader](@ref omz_tools_downloader_README) or go to [https://download.01.org/opencv/](https://download.01.org/opencv/).
> **NOTE**: Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
+>
+> The sample accepts models in ONNX format (.onnx) that do not require preprocessing.
You can perform inference on an image using a trained AlexNet network on a GPU with the following command:
goto err;
// -----------------------------------------------------------------------------------------------------
- // --------------------------- 2. Read IR Generated by ModelOptimizer (.xml and .bin files) ------------
+ // 2. Read a model in OpenVINO Intermediate Representation (.xml and .bin files) or ONNX (.onnx file) format
status = ie_core_read_network(core, input_model, NULL, &network);
if (status != OK)
goto err;
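The call above is the crux of the change: with `NULL` as the weights argument, the same API call reads either an IR model (the engine locates the matching `.bin` itself) or an ONNX model (which has no separate weights file). A minimal standalone sketch of that usage; the `main` wrapper and error handling here are illustrative, not the sample's exact code:

```c
// Minimal sketch: read a model in IR (.xml) or ONNX (.onnx) format with the
// Inference Engine C API. Passing NULL as the weights path lets the engine
// find the .bin file on its own, or skip it entirely for ONNX models.
#include <stdio.h>
#include <c_api/ie_c_api.h>

int main(int argc, char **argv) {
    if (argc != 2) {
        fprintf(stderr, "Usage: %s <path_to_model>\n", argv[0]);
        return 1;
    }

    ie_core_t *core = NULL;
    ie_network_t *network = NULL;

    IEStatusCode status = ie_core_create("", &core);
    if (status != OK)
        goto err;

    // Works for both model.xml (+ model.bin found automatically) and model.onnx
    status = ie_core_read_network(core, argv[1], NULL, &network);
    if (status != OK)
        goto err;

    printf("Model read successfully\n");

err:
    ie_network_free(&network);
    ie_core_free(&core);
    return status == OK ? 0 : 1;
}
```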
> **NOTE**: Before running the sample with a trained model, make sure the model is converted to the
> Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
+>
+> The sample accepts models in ONNX format (.onnx) that do not require preprocessing.
You can perform inference on an NV12 image using a trained AlexNet network on CPU with the following command:
goto err;
// -----------------------------------------------------------------------------------------------------
- // --------------------------- 2. Read IR Generated by ModelOptimizer (.xml and .bin files) ------------
+ // 2. Read a model in OpenVINO Intermediate Representation (.xml and .bin files) or ONNX (.onnx file) format
status = ie_core_read_network(core, input_model, NULL, &network);
if (status != OK)
goto err;
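For the NV12 case mentioned above, reading the model is only half the setup: the sample also has to tell the plugin that the blob it will be given is NV12. A hedged sketch of that preprocessing configuration, assuming an input named "data" (the helper function and the input name are illustrative, not taken from the sample):

```c
// Hedged sketch: configure a network input so the plugin accepts an NV12 blob
// and resizes it to the network's input resolution.
#include <c_api/ie_c_api.h>

static IEStatusCode configure_nv12_input(ie_network_t *network, const char *input_name) {
    // Let the plugin resize the image to the network's expected dimensions.
    IEStatusCode status = ie_network_set_input_resize_algorithm(network, input_name, RESIZE_BILINEAR);
    if (status != OK)
        return status;
    // Declare the incoming blob as NV12 so the plugin performs color conversion.
    return ie_network_set_color_format(network, input_name, NV12);
}
```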
To run the sample, you can use public or pre-trained models. To download the pre-trained models, use the OpenVINO [Model Downloader](@ref omz_tools_downloader_README) or go to [https://download.01.org/opencv/](https://download.01.org/opencv/).
> **NOTE**: Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
+>
+> The sample accepts models in ONNX format (.onnx) that do not require preprocessing.
For example, to perform inference on a CPU with the OpenVINO™ toolkit person detection SSD models, run one of the following commands:
}
// -----------------------------------------------------------------------------------------------------
- // --------------------------- 4. Read IR Generated by ModelOptimizer (.xml and .bin files) ------------
- input_weight = (char *)calloc(strlen(input_model) + 1, sizeof(char));
- memcpy(input_weight, input_model, strlen(input_model) - 4);
- memcpy(input_weight + strlen(input_model) - 4, ".bin", strlen(".bin") + 1);
- printf("%sLoading network files:\n", info);
+ // 4. Read a model in OpenVINO Intermediate Representation (.xml and .bin files) or ONNX (.onnx file) format
+ printf("%sLoading network:\n", info);
printf("\t%s\n", input_model);
- printf("\t%s\n", input_weight);
-
- status = ie_core_read_network(core, input_model, input_weight, &network);
+ status = ie_core_read_network(core, input_model, NULL, &network);
if (status != OK)
goto err;
// -----------------------------------------------------------------------------------------------------
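The removed lines spliced a `.bin` suffix onto the model path by hand with raw `memcpy` calls; passing `NULL` to `ie_core_read_network` makes that derivation unnecessary, and an ONNX model has no separate weights file at all. For reference, a bounds-checked version of what the removed code did; this helper is illustrative only, not part of the sample:

```c
// Illustrative only: derive "model.bin" from "model.xml" safely. With ONNX
// support the sample no longer needs this, since ie_core_read_network with a
// NULL weights path resolves the .bin file itself (and skips it for .onnx).
#include <stdlib.h>
#include <string.h>

static char *derive_weights_path(const char *model_path) {
    size_t len = strlen(model_path);
    if (len < 4 || strcmp(model_path + len - 4, ".xml") != 0)
        return NULL;  // not an IR model; ONNX carries its own weights

    char *weights = (char *)malloc(len + 1);
    if (!weights)
        return NULL;
    memcpy(weights, model_path, len - 4);
    memcpy(weights + len - 4, ".bin", 5);  // 5 bytes copies the '\0' too
    return weights;  // caller frees
}
```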