From bff2fe0b2ff7ccd5357b1411132f8c299eb3683b Mon Sep 17 00:00:00 2001
From: Andrey Zaytsev
Date: Wed, 25 Nov 2020 18:08:22 +0300
Subject: [PATCH] Cherry-pick #3295 to releases/2021/2 (#3353)

* Feature/azaytsev/change layout (#3295)

* Updated openvino_docs.xml
---
 docs/IE_DG/inference_engine_intro.md              |  2 +-
 docs/IE_DG/protecting_model_guide.md              |  2 +-
 docs/doxygen/openvino_docs.xml                    | 12 +++--
 docs/get_started/get_started_linux.md             |  4 +-
 docs/install_guides/installing-openvino-images.md | 14 +++++
 docs/install_guides/installing-openvino-linux.md  | 66 ++++++++++-------------
 6 files changed, 54 insertions(+), 46 deletions(-)
 create mode 100644 docs/install_guides/installing-openvino-images.md

diff --git a/docs/IE_DG/inference_engine_intro.md b/docs/IE_DG/inference_engine_intro.md
index 79d0cd7..dbcc324 100644
--- a/docs/IE_DG/inference_engine_intro.md
+++ b/docs/IE_DG/inference_engine_intro.md
@@ -11,7 +11,7 @@ The open source version is available in the [OpenVINO™ toolkit GitHub reposito
 
 To learn about how to use the Inference Engine API for your application, see the [Integrating Inference Engine in Your Application](Integrate_with_customer_application_new_API.md) documentation.
 
-For complete API Reference, see the [API Reference](usergroup29.html) section.
+For complete API Reference, see the [Inference Engine API References](./api_references.html) section.
 
 Inference Engine uses a plugin architecture. Inference Engine plugin is a software component that contains complete implementation for inference on a certain Intel® hardware device: CPU, GPU, VPU, etc. Each plugin implements the unified API and provides additional hardware-specific APIs.

diff --git a/docs/IE_DG/protecting_model_guide.md b/docs/IE_DG/protecting_model_guide.md
index 333f593..59ac3ba 100644
--- a/docs/IE_DG/protecting_model_guide.md
+++ b/docs/IE_DG/protecting_model_guide.md
@@ -57,7 +57,7 @@ should be called with `weights` passed as an empty `Blob`.
 - OpenVINO™ toolkit online documentation: [https://docs.openvinotoolkit.org](https://docs.openvinotoolkit.org)
 - Model Optimizer Developer Guide: [Model Optimizer Developer Guide](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)
 - Inference Engine Developer Guide: [Inference Engine Developer Guide](Deep_Learning_Inference_Engine_DevGuide.md)
-- For more information on Sample Applications, see the [Inference Engine Samples Overview](Samples_Overview.html)
+- For more information on Sample Applications, see the [Inference Engine Samples Overview](Samples_Overview.md)
 - For information on a set of pre-trained models, see the [Overview of OpenVINO™ Toolkit Pre-Trained Models](@ref omz_models_intel_index)
 - For information on Inference Engine Tutorials, see the [Inference Tutorials](https://github.com/intel-iot-devkit/inference-tutorials-generic)
 - For IoT Libraries and Code Samples see the [Intel® IoT Developer Kit](https://github.com/intel-iot-devkit).
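The intro context above notes that each supported device (CPU, GPU, VPU, and so on) is served by its own Inference Engine plugin. As an illustrative aside that is not part of this patch: a quick way to see which device plugins are actually available on a machine is the `hello_query_device` sample shipped with the toolkit. The sketch below assumes the default 2021 install location and that the samples have already been built into `~/inference_engine_samples_build`, as described in the Linux installation guide further down; adjust the paths if your layout differs.

```sh
# List the devices (CPU, GPU, MYRIAD, ...) exposed by the installed Inference Engine plugins.
# Paths assume the default /opt/intel/openvino_2021 install and the default sample build
# directory; both are assumptions, not something this patch defines.
source /opt/intel/openvino_2021/bin/setupvars.sh
cd ~/inference_engine_samples_build/intel64/Release
./hello_query_device
```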
diff --git a/docs/doxygen/openvino_docs.xml b/docs/doxygen/openvino_docs.xml
index 3bebf1f..8af2622 100644
--- a/docs/doxygen/openvino_docs.xml
+++ b/docs/doxygen/openvino_docs.xml
@@ -6,7 +6,7 @@
@@ -17,13 +17,15 @@
@@ -37,7 +39,9 @@
@@ -126,8 +130,8 @@

diff --git a/docs/get_started/get_started_linux.md b/docs/get_started/get_started_linux.md
index 7fe9030..672e839 100644
--- a/docs/get_started/get_started_linux.md
+++ b/docs/get_started/get_started_linux.md
@@ -195,7 +195,7 @@ You will perform the following steps:
 
 Each demo and code sample is a separate application, but they use the same behavior and components. The code samples and demo applications are:
 
-* [Code Samples](../IE_DG/Samples_Overview.html) - Small console applications that show how to utilize specific OpenVINO capabilities within an application and execute specific tasks such as loading a model, running inference, querying specific device capabilities, and more.
+* [Code Samples](../IE_DG/Samples_Overview.md) - Small console applications that show how to utilize specific OpenVINO capabilities within an application and execute specific tasks such as loading a model, running inference, querying specific device capabilities, and more.
 
 * [Demo Applications](@ref omz_demos_README) - Console applications that provide robust application templates to support developers in implementing specific deep learning scenarios. They may also involve more complex processing pipelines that gather analysis from several models that run inference simultaneously. For example concurrently detecting a person in a video stream and detecting attributes such as age, gender and/or emotions.
 
@@ -370,7 +370,7 @@ As an alternative, the Intel® Distribution of OpenVINO™ toolkit includes two
 
 ### Step 4: Run the Image Classification Code Sample
 
-> **NOTE**: The Image Classification code sample is automatically compiled when you ran the Image Classification demo script. If you want to compile it manually, see the [Inference Engine Code Samples Overview](../IE_DG/Samples_Overview.html#build_samples_linux) section.
+> **NOTE**: The Image Classification code sample was automatically compiled when you ran the Image Classification demo script. If you want to compile it manually, see the *Build the Sample Applications on Linux* section in the [Inference Engine Code Samples Overview](../IE_DG/Samples_Overview.md).
 
 To run the **Image Classification** code sample with an input image on the IR:
 
diff --git a/docs/install_guides/installing-openvino-images.md b/docs/install_guides/installing-openvino-images.md
new file mode 100644
index 0000000..e6b0373
--- /dev/null
+++ b/docs/install_guides/installing-openvino-images.md
@@ -0,0 +1,14 @@
+# Install From Images and Repositories {#openvino_docs_install_guides_installing_openvino_images}
+
+You may install the Intel® Distribution of OpenVINO™ toolkit from images and repositories using the **Install OpenVINO™** button above or directly from the [Get the Intel® Distribution of OpenVINO™ Toolkit](https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit/download.html) page.
+Use the documentation below if you need additional support:
+
+* [Docker](installing-openvino-docker-linux.md)
+* [Docker with DL Workbench](@ref workbench_docs_Workbench_DG_Install_from_Docker_Hub)
+* [APT](installing-openvino-apt.md)
+* [YUM](installing-openvino-yum.md)
+* [Anaconda Cloud](installing-openvino-conda.md)
+* [Yocto](installing-openvino-yocto.md)
+* [PyPI](installing-openvino-pip.md)
+
+The open source version is available in the [OpenVINO™ toolkit GitHub repository](https://github.com/openvinotoolkit/openvino) and you can build it for supported platforms using the Inference Engine Build Instructions.
+
diff --git a/docs/install_guides/installing-openvino-linux.md b/docs/install_guides/installing-openvino-linux.md
index 34b1444..828c858 100644
--- a/docs/install_guides/installing-openvino-linux.md
+++ b/docs/install_guides/installing-openvino-linux.md
@@ -9,7 +9,7 @@
 
 ## Introduction
 
-The Intel® Distribution of OpenVINO™ toolkit quickly deploys applications and solutions that emulate human vision. Based on Convolutional Neural Networks (CNN), the toolkit extends computer vision (CV) workloads across Intel® hardware, maximizing performance. The Intel® Distribution of OpenVINO™ toolkit includes the Intel® Deep Learning Deployment Toolkit (Intel® DLDT).
+OpenVINO™ toolkit is a comprehensive toolkit for quickly developing applications and solutions that solve a variety of tasks, including emulation of human vision, automatic speech recognition, natural language processing, recommendation systems, and many others. Based on the latest generations of artificial neural networks, including Convolutional Neural Networks (CNNs), recurrent and attention-based networks, the toolkit extends computer vision and non-vision workloads across Intel® hardware, maximizing performance. It accelerates applications with high-performance AI and deep learning inference deployed from edge to cloud.
 
 The Intel® Distribution of OpenVINO™ toolkit for Linux\*:
 - Enables CNN-based deep learning inference on the edge
@@ -28,7 +28,8 @@ The Intel® Distribution of OpenVINO™ toolkit for Linux\*:
 | [Inference Engine Code Samples](../IE_DG/Samples_Overview.md) | A set of simple console applications demonstrating how to utilize specific OpenVINO capabilities in an application and how to perform specific tasks, such as loading a model, running inference, querying specific device capabilities, and more. |
 | [Demo Applications](@ref omz_demos_README) | A set of simple console applications that provide robust application templates to help you implement specific deep learning scenarios. |
 | Additional Tools | A set of tools to work with your models including [Accuracy Checker utility](@ref omz_tools_accuracy_checker_README), [Post-Training Optimization Tool Guide](@ref pot_README), [Model Downloader](@ref omz_tools_downloader_README) and other |
-| [Documentation for Pre-Trained Models ](@ref omz_models_intel_index) | Documentation for the pre-trained models available in the [Open Model Zoo repo](https://github.com/opencv/open_model_zoo) |
+| [Documentation for Pre-Trained Models ](@ref omz_models_intel_index) | Documentation for the pre-trained models available in the [Open Model Zoo repo](https://github.com/opencv/open_model_zoo). |
+| Deep Learning Streamer (DL Streamer) | Streaming analytics framework, based on GStreamer, for constructing graphs of media analytics components. For the DL Streamer documentation, see [DL Streamer Samples](@ref gst_samples_README), [API Reference](https://openvinotoolkit.github.io/dlstreamer_gst/), [Elements](https://github.com/opencv/gst-video-analytics/wiki/Elements), [Tutorial](https://github.com/opencv/gst-video-analytics/wiki/DL%20Streamer%20Tutorial). |
 
 ## System Requirements
 
@@ -84,28 +85,25 @@ If you downloaded the package file to the current user's `Downloads` directory:
 ```sh
 cd ~/Downloads/
 ```
-By default, the file is saved as `l_openvino_toolkit_p_.tgz`.
-
+   By default, the file is saved as `l_openvino_toolkit_p_.tgz`.
 3. Unpack the .tgz file:
 ```sh
 tar -xvzf l_openvino_toolkit_p_.tgz
 ```
-The files are unpacked to the `l_openvino_toolkit_p_` directory.
-
+   The files are unpacked to the `l_openvino_toolkit_p_` directory.
 4. Go to the `l_openvino_toolkit_p_` directory:
 ```sh
 cd l_openvino_toolkit_p_
 ```
-If you have a previous version of the Intel Distribution of OpenVINO
+   If you have a previous version of the Intel Distribution of OpenVINO
 toolkit installed, rename or delete these two directories:
 - `~/inference_engine_samples_build`
 - `~/openvino_models`
 
-**Installation Notes:**
-
-- Choose an installation option and run the related script as root.
-- You can use either a GUI installation wizard or command-line instructions (CLI).
-- Screenshots are provided for the GUI, but not for CLI. The following information also applies to CLI and will be helpful to your installation where you will be presented with the same choices and tasks.
+   **Installation Notes:**
+   - Choose an installation option and run the related script as root.
+   - You can use either a GUI installation wizard or command-line instructions (CLI).
+   - Screenshots are provided for the GUI, but not for CLI. The following information also applies to CLI and will be helpful to your installation where you will be presented with the same choices and tasks.
 
 5. Choose your installation option:
    - **Option 1:** GUI Installation Wizard:
@@ -164,7 +162,7 @@ cd /opt/intel/openvino_2021/install_dependencies
 ```sh
 sudo -E ./install_openvino_dependencies.sh
 ```
-The dependencies are installed. Continue to the next section to set your environment variables.
+   The dependencies are installed. Continue to the next section to set your environment variables.
 
 ## Set the Environment Variables
 
@@ -287,20 +285,18 @@ cd /opt/intel/openvino_2021/deployment_tools/demo
 ```sh
 ./demo_squeezenet_download_convert_run.sh
 ```
-This verification script downloads a SqueezeNet model, uses the Model Optimizer to convert the model to the .bin and .xml Intermediate Representation (IR) files. The Inference Engine requires this model conversion so it can use the IR as input and achieve optimum performance on Intel hardware.
-This verification script builds the [Image Classification Sample Async](../../inference-engine/samples/classification_sample_async/README.md) application and run it with the `car.png` image located in the demo directory. When the verification script completes, you will have the label and confidence for the top-10 categories:
-![](../img/image_classification_script_output_lnx.png)
+   This verification script downloads a SqueezeNet model and uses the Model Optimizer to convert the model to .bin and .xml Intermediate Representation (IR) files. The Inference Engine requires this model conversion so it can use the IR as input and achieve optimum performance on Intel hardware.
+   This verification script builds the [Image Classification Sample Async](../../inference-engine/samples/classification_sample_async/README.md) application and runs it with the `car.png` image located in the demo directory. When the verification script completes, you will have the label and confidence for the top-10 categories:
+   ![](../img/image_classification_script_output_lnx.png)
 
 3. Run the **Inference Pipeline verification script**:
 ```sh
 ./demo_security_barrier_camera.sh
 ```
-This script downloads three pre-trained model IRs, builds the [Security Barrier Camera Demo](@ref omz_demos_security_barrier_camera_demo_README) application, and runs it with the downloaded models and the `car_1.bmp` image from the `demo` directory to show an inference pipeline. The verification script uses vehicle recognition in which vehicle attributes build on each other to narrow in on a specific attribute.
-
-First, an object is identified as a vehicle. This identification is used as input to the next model, which identifies specific vehicle attributes, including the license plate. Finally, the attributes identified as the license plate are used as input to the third model, which recognizes specific characters in the license plate.
-
-When the verification script completes, you will see an image that displays the resulting frame with detections rendered as bounding boxes, and text:
-![](../img/inference_pipeline_script_lnx.png)
+   This script downloads three pre-trained model IRs, builds the [Security Barrier Camera Demo](@ref omz_demos_security_barrier_camera_demo_README) application, and runs it with the downloaded models and the `car_1.bmp` image from the `demo` directory to show an inference pipeline. The verification script uses vehicle recognition in which vehicle attributes build on each other to narrow in on a specific attribute.
+   First, an object is identified as a vehicle. This identification is used as input to the next model, which identifies specific vehicle attributes, including the license plate. Finally, the attributes identified as the license plate are used as input to the third model, which recognizes specific characters in the license plate.
+   When the verification script completes, you will see an image that displays the resulting frame with detections rendered as bounding boxes, and text:
+   ![](../img/inference_pipeline_script_lnx.png)
 
 4. Close the image viewer window to complete the verification script.
 
@@ -331,20 +327,15 @@ sudo -E su
 ```sh
 ./install_NEO_OCL_driver.sh
 ```
-The drivers are not included in the package and the script downloads them. Make sure you have the
-internet connection for this step.
-
-The script compares the driver version on the system to the current version.
-If the driver version on the system is higher or equal to the current version, the script does
-not install a new driver.
-If the version of the driver is lower than the current version, the script uninstalls the lower
-and installs the current version with your permission:
-![](../img/NEO_check_agreement.png)
-Higher hardware versions require a higher driver version, namely 20.35 instead of 19.41.
-If the script fails to uninstall the driver, uninstall it manually.
-During the script execution, you may see the following command line output:
-
-    Add OpenCL user to video group
-Ignore this suggestion and continue.
+   The drivers are not included in the package and the script downloads them. Make sure you have an internet connection for this step.
+   The script compares the driver version on the system to the current version. If the driver version on the system is higher or equal to the current version, the script does not install a new driver. If the version of the driver is lower than the current version, the script uninstalls the lower version and installs the current version with your permission:
+   ![](../img/NEO_check_agreement.png)
+   Higher hardware versions require a higher driver version, namely 20.35 instead of 19.41. If the script fails to uninstall the driver, uninstall it manually. During the script execution, you may see the following command line output:
+```sh
+Add OpenCL user to video group
+```
+   Ignore this suggestion and continue.
 
 4. **Optional** Install header files to allow compiling a new code. You can find the header files at [Khronos OpenCL™ API Headers](https://github.com/KhronosGroup/OpenCL-Headers.git).
 
 ## Steps for Intel® Neural Compute Stick 2
@@ -355,8 +346,7 @@ These steps are only required if you want to perform inference on Intel® Movidi
 ```sh
 sudo usermod -a -G users "$(whoami)"
 ```
-Log out and log in for it to take effect.
-
+   Log out and log in for it to take effect.
 2. To perform inference on Intel® Neural Compute Stick 2, install the USB rules as follows:
 ```sh
 sudo cp /opt/intel/openvino_2021/inference_engine/external/97-myriad-usbboot.rules /etc/udev/rules.d/
-- 
2.7.4
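Taken together, the Linux installation steps touched by this patch reduce to a short command-line flow. The sketch below is an illustrative summary rather than part of the patch: the archive name uses a `<version>` placeholder, and the installer script names (`install.sh`, `install_GUI.sh`) and the `setupvars.sh` path follow the default 2021 layout, which this excerpt does not spell out, so verify them against the full guide.

```sh
# Condensed sketch of the Linux install-and-verify flow described above.
# <version> is a placeholder; paths assume the default /opt/intel/openvino_2021 layout.
cd ~/Downloads
tar -xvzf l_openvino_toolkit_p_<version>.tgz
cd l_openvino_toolkit_p_<version>
sudo ./install.sh                                   # CLI installer; install_GUI.sh starts the wizard
cd /opt/intel/openvino_2021/install_dependencies
sudo -E ./install_openvino_dependencies.sh          # external software dependencies
source /opt/intel/openvino_2021/bin/setupvars.sh    # set the environment variables
cd /opt/intel/openvino_2021/deployment_tools/demo
./demo_squeezenet_download_convert_run.sh           # Image Classification verification script
./demo_security_barrier_camera.sh                   # Inference Pipeline verification script
```

If both verification scripts finish with the expected classification output and rendered image, the core installation is working and the device-specific steps (GPU, Intel® Neural Compute Stick 2) can be applied as needed.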