From: Mikhail Ryzhov
Date: Fri, 23 Oct 2020 18:47:01 +0000 (+0300)
Subject: Added onnx support for C samples (#2747)
X-Git-Url: http://review.tizen.org/git/?a=commitdiff_plain;h=dea5f43c9ab809ce1c765f2cf4246c49e4495c56;p=platform%2Fupstream%2Fdldt.git

Added onnx support for C samples (#2747)

* ngraph python sample

This sample demonstrates how to execute an inference using an ngraph::Function to create a network:

- added sample
- added readme
- added LeNet weights

* Added onnx support for C samples

* Revert "ngraph python sample"

This reverts commit 8033292dc367017f6325ea4f96614fdcb797e9dd.

* Added onnx support for C samples

Fixed a code style mistake

* Removed optional code

Co-authored-by: Alexander Zhogov
---

diff --git a/inference-engine/ie_bridges/c/samples/hello_classification/README.md b/inference-engine/ie_bridges/c/samples/hello_classification/README.md
index 671e26e..845a19e 100644
--- a/inference-engine/ie_bridges/c/samples/hello_classification/README.md
+++ b/inference-engine/ie_bridges/c/samples/hello_classification/README.md
@@ -17,6 +17,8 @@ To properly demonstrate this API, it is required to run several networks in pipe
 To run the sample, you can use public or pre-trained models. To download the pre-trained models, use the OpenVINO [Model Downloader](@ref omz_tools_downloader_README) or go to [https://download.01.org/opencv/](https://download.01.org/opencv/).
 
 > **NOTE**: Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
+>
+> The sample accepts models in ONNX format (.onnx) that do not require preprocessing.
 
 You can do inference of an image using a trained AlexNet network on a GPU using the following command:
diff --git a/inference-engine/ie_bridges/c/samples/hello_classification/main.c b/inference-engine/ie_bridges/c/samples/hello_classification/main.c
index d47aa24..d961bce 100644
--- a/inference-engine/ie_bridges/c/samples/hello_classification/main.c
+++ b/inference-engine/ie_bridges/c/samples/hello_classification/main.c
@@ -92,7 +92,7 @@ int main(int argc, char **argv) {
         goto err;
     // -----------------------------------------------------------------------------------------------------
 
-    // --------------------------- 2. Read IR Generated by ModelOptimizer (.xml and .bin files) ------------
+    // 2. Read a model in OpenVINO Intermediate Representation (.xml and .bin files) or ONNX (.onnx file) format
     status = ie_core_read_network(core, input_model, NULL, &network);
     if (status != OK)
         goto err;
diff --git a/inference-engine/ie_bridges/c/samples/hello_nv12_input_classification/README.md b/inference-engine/ie_bridges/c/samples/hello_nv12_input_classification/README.md
index 9293506..eeadef1 100644
--- a/inference-engine/ie_bridges/c/samples/hello_nv12_input_classification/README.md
+++ b/inference-engine/ie_bridges/c/samples/hello_nv12_input_classification/README.md
@@ -40,6 +40,8 @@ or go to [https://download.01.org/opencv/](https://download.01.org/opencv/).
 
 > **NOTE**: Before running the sample with a trained model, make sure the model is converted to the
 > Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
+>
+> The sample accepts models in ONNX format (.onnx) that do not require preprocessing.
 
 You can perform inference on an NV12 image using a trained AlexNet network on CPU with the following command:
 ```sh
diff --git a/inference-engine/ie_bridges/c/samples/hello_nv12_input_classification/main.c b/inference-engine/ie_bridges/c/samples/hello_nv12_input_classification/main.c
index ad1690a..529c8ac 100644
--- a/inference-engine/ie_bridges/c/samples/hello_nv12_input_classification/main.c
+++ b/inference-engine/ie_bridges/c/samples/hello_nv12_input_classification/main.c
@@ -152,7 +152,7 @@ int main(int argc, char **argv) {
         goto err;
     // -----------------------------------------------------------------------------------------------------
 
-    // --------------------------- 2. Read IR Generated by ModelOptimizer (.xml and .bin files) ------------
+    // 2. Read a model in OpenVINO Intermediate Representation (.xml and .bin files) or ONNX (.onnx file) format
     status = ie_core_read_network(core, input_model, NULL, &network);
     if (status != OK)
         goto err;
diff --git a/inference-engine/ie_bridges/c/samples/object_detection_sample_ssd/README.md b/inference-engine/ie_bridges/c/samples/object_detection_sample_ssd/README.md
index 882131d..31ba028 100644
--- a/inference-engine/ie_bridges/c/samples/object_detection_sample_ssd/README.md
+++ b/inference-engine/ie_bridges/c/samples/object_detection_sample_ssd/README.md
@@ -40,6 +40,8 @@ Running the application with the empty list of options yields the usage message
 To run the sample, you can use public or pre-trained models. To download the pre-trained models, use the OpenVINO [Model Downloader](@ref omz_tools_downloader_README) or go to [https://download.01.org/opencv/](https://download.01.org/opencv/).
 
 > **NOTE**: Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
+>
+> The sample accepts models in ONNX format (.onnx) that do not require preprocessing.
 
 For example, to do inference on a CPU with the OpenVINO™ toolkit person detection SSD models, run one of the following commands:
diff --git a/inference-engine/ie_bridges/c/samples/object_detection_sample_ssd/main.c b/inference-engine/ie_bridges/c/samples/object_detection_sample_ssd/main.c
index c67c21d..14ffa77 100644
--- a/inference-engine/ie_bridges/c/samples/object_detection_sample_ssd/main.c
+++ b/inference-engine/ie_bridges/c/samples/object_detection_sample_ssd/main.c
@@ -344,15 +344,10 @@ int main(int argc, char **argv) {
     }
     // -----------------------------------------------------------------------------------------------------
 
-    // --------------------------- 4. Read IR Generated by ModelOptimizer (.xml and .bin files) ------------
-    input_weight = (char *)calloc(strlen(input_model) + 1, sizeof(char));
-    memcpy(input_weight, input_model, strlen(input_model) - 4);
-    memcpy(input_weight + strlen(input_model) - 4, ".bin", strlen(".bin") + 1);
-    printf("%sLoading network files:\n", info);
+    // 4. Read a model in OpenVINO Intermediate Representation (.xml and .bin files) or ONNX (.onnx file) format
+    printf("%sLoading network:\n", info);
     printf("\t%s\n", input_model);
-    printf("\t%s\n", input_weight);
-
-    status = ie_core_read_network(core, input_model, input_weight, &network);
+    status = ie_core_read_network(core, input_model, NULL, &network);
     if (status != OK)
         goto err;
     // -----------------------------------------------------------------------------------------------------
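
The pattern every sample converges on in this patch is a single ie_core_read_network() call with a NULL weights path, which lets the Inference Engine load either an IR pair (.xml + .bin) or a standalone ONNX file. The sketch below shows that pattern in isolation; it is a minimal illustration, not part of the patch, and assumes only the C API header c_api/ie_c_api.h already used by these samples (the usage message and print strings are invented for the example).

```c
#include <stdio.h>
#include <stdlib.h>

#include <c_api/ie_c_api.h>

int main(int argc, char **argv) {
    if (argc != 2) {
        // Hypothetical usage message, not from the samples
        fprintf(stderr, "Usage: %s <path_to_model.xml|path_to_model.onnx>\n", argv[0]);
        return EXIT_FAILURE;
    }
    const char *input_model = argv[1];

    ie_core_t *core = NULL;
    ie_network_t *network = NULL;

    IEStatusCode status = ie_core_create("", &core);
    if (status != OK)
        return EXIT_FAILURE;

    // A NULL weights path lets the engine choose the reader itself:
    // for IR the .bin file is located next to the .xml, while ONNX
    // has no separate weights file at all.
    status = ie_core_read_network(core, input_model, NULL, &network);
    if (status != OK) {
        fprintf(stderr, "Failed to read network from %s\n", input_model);
        ie_core_free(&core);
        return EXIT_FAILURE;
    }
    printf("Network read from %s\n", input_model);

    ie_network_free(&network);
    ie_core_free(&core);
    return EXIT_SUCCESS;
}
```

With this pattern the same binary accepts model.xml (with model.bin beside it) or model.onnx unchanged, which is what the README notes added above advertise; it is also why object_detection_sample_ssd can drop the code that derived the .bin path from the .xml path.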