Merging Documentation updates for 2020.4 (#1672) (#1726)
author Andrey Zaytsev <andrey.zaytsev@intel.com>
Tue, 11 Aug 2020 16:10:56 +0000 (19:10 +0300)
committer GitHub <noreply@github.com>
Tue, 11 Aug 2020 16:10:56 +0000 (19:10 +0300)
docs/IE_DG/Integrate_with_customer_application_new_API.md
docs/IE_DG/Samples_Overview.md
docs/IE_DG/supported_plugins/FPGA.md
docs/Legal_Information.md
docs/doxygen/ie_docs.xml
docs/install_guides/installing-openvino-linux-fpga.md
docs/install_guides/installing-openvino-macos.md
docs/install_guides/installing-openvino-raspbian.md
docs/install_guides/installing-openvino-windows-fpga.md
docs/install_guides/installing-openvino-windows.md

index 07618a77a0a9a1a9b30ab9b8fd86934f008c6b93..2dee8b0431740e05926b2e58b07bafe4b8151a6e 100644 (file)
@@ -295,7 +295,7 @@ It's allowed to specify additional build options (e.g. to build CMake project on
 
 > **NOTE**: Before running, make sure you completed **Set the Environment Variables** section in [OpenVINO Installation](../../inference-engine/samples/hello_nv12_input_classification/README.md) document so that the application can find the libraries.
 
-To run compiled applications on Microsoft* Windows* OS, make sure that Microsoft* Visual C++ 2015
+To run compiled applications on Microsoft* Windows* OS, make sure that Microsoft* Visual C++ 2017
 Redistributable and Intel® C++ Compiler 2017 Redistributable packages are installed and
 `<INSTALL_DIR>/bin/intel64/Release/*.dll` files are placed in the
 application folder or accessible via `%PATH%` environment variable.
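The `%PATH%` requirement above can be checked before launching an application. A minimal POSIX-shell sketch of the idea (the helper name `dir_on_path` is illustrative; on Windows the equivalent check would inspect `%PATH%` with its `;` separator):

```sh
# Illustrative helper (not part of OpenVINO): report whether a directory is
# already listed on PATH, so the shared-library lookup described above
# succeeds without copying files next to the application binary.
dir_on_path() {
    case ":$PATH:" in
        *":$1:"*) return 0 ;;   # directory found between separators
        *)        return 1 ;;
    esac
}

dir_on_path "/usr/bin" && echo "/usr/bin is on PATH"
```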
index af60575f2aaf2bf1203dd0025fad3c66d650b8d2..6f4411d47babd7d5e1dbfdc823d28d4624a1a5ad 100644 (file)
@@ -51,7 +51,7 @@ The officially supported Linux* build environment is the following:
 
 * Ubuntu* 16.04 LTS 64-bit or CentOS* 7.4 64-bit
 * GCC* 5.4.0 (for Ubuntu* 16.04) or GCC* 4.8.5 (for CentOS* 7.4)
-* CMake* version 2.8 or higher
+* CMake* version 2.8.12 or higher
 
 To build the C or C++ sample applications for Linux, go to the `<INSTALL_DIR>/inference_engine/samples/c` or `<INSTALL_DIR>/inference_engine/samples/cpp` directory, respectively, and run the `build_samples.sh` script:
 ```sh
@@ -98,8 +98,8 @@ for the debug configuration — in `<path_to_build_directory>/intel64/Debug/`.
 
 The recommended Windows* build environment is the following:
 * Microsoft Windows* 10
-* Microsoft Visual Studio* 2015, 2017, or 2019
-* CMake* version 2.8 or higher
+* Microsoft Visual Studio* 2017 or 2019
+* CMake* version 2.8.12 or higher
 
 > **NOTE**: If you want to use Microsoft Visual Studio 2019, you are required to install CMake 3.14.
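The CMake minimums in this section (2.8.12 in general, 3.14 for Visual Studio 2019) can be verified with a small version comparison. A sketch relying on `sort -V`; the helper name and the fallback value are illustrative:

```sh
# Illustrative version check: true when $1 >= $2, compared as version numbers.
version_ge() {
    [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n 1)" = "$2" ]
}

# First line of `cmake --version` is "cmake version X.Y.Z".
installed=$(cmake --version 2>/dev/null | sed -n 's/^cmake version //p')
installed=${installed:-3.14.0}   # sample fallback when cmake is absent (sketch only)
version_ge "$installed" "2.8.12" && echo "CMake $installed meets the 2.8.12 minimum"
```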
 
@@ -110,7 +110,7 @@ build_samples_msvc.bat
 
 By default, the script automatically detects the highest Microsoft Visual Studio version installed on the machine and uses it to create and build
 a solution for the sample code. Optionally, you can also specify the preferred Microsoft Visual Studio version to be used by the script. Supported
-versions are `VS2015`, `VS2017`, and `VS2019`. For example, to build the C++ samples using the Microsoft Visual Studio 2017, use the following command:
+versions are `VS2017` and `VS2019`. For example, to build the C++ samples using the Microsoft Visual Studio 2017, use the following command:
 ```sh
 <INSTALL_DIR>\inference_engine\samples\cpp\build_samples_msvc.bat VS2017
 ```
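The version-selection behavior described above (auto-detect by default, optional explicit `VS2017`/`VS2019` argument) can be sketched as follows. This illustrates the documented interface only; it is not the actual contents of `build_samples_msvc.bat`:

```sh
# Sketch of the documented argument handling: no argument means auto-detect
# the newest installed Visual Studio; VS2017/VS2019 force a version; anything
# else (e.g. the dropped VS2015) is rejected.
pick_vs_version() {
    case "${1:-auto}" in
        VS2017|VS2019) echo "$1" ;;
        auto)          echo "autodetect" ;;
        *)             echo "unsupported: $1" >&2; return 1 ;;
    esac
}

pick_vs_version VS2017
```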
index c7c080bb4cc152312e9ac57283bbe2724cb24f5c..ee76253db04d7001163bae38d3aa217a39438043 100644 (file)
@@ -1,6 +1,24 @@
 FPGA Plugin {#openvino_docs_IE_DG_supported_plugins_FPGA}
 ===========
 
+## Product Change Notice
+Intel® Distribution of OpenVINO™ toolkit for Intel® Vision Accelerator Design with an Intel® Arria® 10 FPGA and the Intel® Programmable Acceleration Card with Intel® Arria® 10 GX FPGA
+
+<table>
+  <tr>
+    <td><strong>Change Notice Begins</strong></td>
+    <td>July 2020</td>
+  </tr>
+  <tr>
+    <td><strong>Change Date</strong></td>
+    <td>October 2020</td>
+  </tr>
+</table> 
+
+Intel will be transitioning to the next-generation programmable deep-learning solution based on FPGAs in order to increase the level of customization possible in FPGA deep-learning. As part of this transition, future standard releases (i.e., non-LTS releases) of Intel® Distribution of OpenVINO™ toolkit will no longer include the Intel® Vision Accelerator Design with an Intel® Arria® 10 FPGA and the Intel® Programmable Acceleration Card with Intel® Arria® 10 GX FPGA.
+
+Intel® Distribution of OpenVINO™ toolkit 2020.3.X LTS release will continue to support Intel® Vision Accelerator Design with an Intel® Arria® 10 FPGA and the Intel® Programmable Acceleration Card with Intel® Arria® 10 GX FPGA. For questions about next-generation programmable deep-learning solutions based on FPGAs, please talk to your sales representative or contact us to get the latest FPGA updates.
+
 ## Introducing FPGA Plugin
 
 The FPGA plugin provides an opportunity for high performance scoring of neural networks on Intel&reg; FPGA devices.
index 17c3788f9eac0fa822e95f153fbeb7663f2962f8..4bcb046a8909d96d83737a1555480980f28ac9e6 100644 (file)
@@ -2,11 +2,11 @@
 
 This software and the related documents are Intel copyrighted materials, and your use of them is governed by the express license (the “License”) under which they were provided to you. No license (express or implied, by estoppel or otherwise) to any intellectual property rights is granted by this document. Unless the License provides otherwise, you may not use, modify, copy, publish, distribute, disclose or transmit this software or the related documents without Intel's prior written permission. This software and the related documents are provided as is, with no express or implied warranties, other than those that are expressly stated in the License. Intel disclaims all express and implied warranties, including without limitation, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement, as well as any warranty arising from course of performance, course of dealing, or usage in trade.
 
-This document contains information on products, services and/or processes in development. All information provided here is subject to change without notice. Contact your Intel representative to obtain the latest forecast, schedule, specifications and roadmaps. The products and services described may contain defects or errors known as errata which may cause deviations from published specifications. Current characterized errata are available on request. Copies of documents which have an order number and are referenced in this document may be obtained by calling 1-800-548-4725 or by visiting [www.intel.com/design/literature.htm](www.intel.com/design/literature.htm).
+This document contains information on products, services and/or processes in development. All information provided here is subject to change without notice. Contact your Intel representative to obtain the latest forecast, schedule, specifications and roadmaps. The products and services described may contain defects or errors known as errata which may cause deviations from published specifications. Current characterized errata are available on request. Copies of documents which have an order number and are referenced in this document may be obtained by calling 1-800-548-4725 or by visiting [www.intel.com/design/literature.htm](https://www.intel.com/design/literature.htm).
 
 Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors.  
 
-Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions.  Any change to any of those factors may cause the results to vary.  You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more complete information visit [www.intel.com/benchmarks](www.intel.com/benchmarks).
+Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions.  Any change to any of those factors may cause the results to vary.  You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more complete information visit [www.intel.com/benchmarks](https://www.intel.com/benchmarks).
 
 Performance results are based on testing as of dates shown in configurations and may not reflect all publicly available updates.  See backup for configuration details.  No product or component can be absolutely secure. 
 
@@ -14,4 +14,4 @@ Your costs and results may vary.
 
 Intel technologies may require enabled hardware, software or service activation.
 
-© Intel Corporation.  Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries.  Other names and brands may be claimed as the property of others.  
+© Intel Corporation.  Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. \*Other names and brands may be claimed as the property of others.  
index 64c6a691d0e114a365a917c530e6cb2eeea0c85a..c1b4a9bb1a9ae277db6482ee3808fa63006b0814 100644 (file)
                     <tab type="user" title="Optimization Notice" url="@ref openvino_docs_Optimization_notice"/>
                     <tab type="user" title="Glossary" url="@ref openvino_docs_IE_DG_Glossary"/>
                 </tab>
+                <!-- Workbench -->
+                <tab type="usergroup" title="Deep Learning Workbench" url="@ref workbench_docs_Workbench_DG_Introduction">
+                       <tab type="user" title="Introduction to DL Workbench" url="@ref workbench_docs_Workbench_DG_Introduction"/>
+                       <tab type="usergroup" title="DL Workbench Installation Guide" url="@ref workbench_docs_Workbench_DG_Install_Workbench">
+                               <tab type="user" title="Install from Docker Hub*" url="@ref workbench_docs_Workbench_DG_Install_from_Docker_Hub"/>
+                               <tab type="user" title="Install from the Intel® Distribution of OpenVINO™ Toolkit Package" url="@ref workbench_docs_Workbench_DG_Install_from_Package"/>
+                               <tab type="user" title="Enter DL Workbench" url="@ref workbench_docs_Workbench_DG_Authentication"/>
+                       </tab>
+                       <tab type="usergroup" title="DL Workbench Get Started Guide" url="@ref workbench_docs_Workbench_DG_Work_with_Models_and_Sample_Datasets">
+                               <tab type="usergroup" title="Select Models" url="@ref workbench_docs_Workbench_DG_Select_Model">
+                                       <tab type="user" title="Import Models" url="@ref workbench_docs_Workbench_DG_Select_Models"/>
+                                       <tab type="user" title="Import Frozen TensorFlow* SSD MobileNet v2 COCO Tutorial" url="@ref workbench_docs_Workbench_DG_Import_TensorFlow"/>
+                                       <tab type="user" title="Import MXNet* MobileNet v2 Tutorial" url="@ref workbench_docs_Workbench_DG_Import_MXNet"/>
+                                       <tab type="user" title="Import ONNX* MobileNet v2 Tutorial" url="@ref workbench_docs_Workbench_DG_Import_ONNX"/>
+                               </tab>
+                               <tab type="usergroup" title="Select Datasets" url="@ref workbench_docs_Workbench_DG_Select_Datasets">
+                                       <tab type="user" title="Import Datasets" url="@ref workbench_docs_Workbench_DG_Import_Datasets"/>
+                                       <tab type="user" title="Generate Datasets" url="@ref workbench_docs_Workbench_DG_Generate_Datasets"/>
+                                       <tab type="user" title="Dataset Types" url="@ref workbench_docs_Workbench_DG_Dataset_Types"/>
+                                       <tab type="user" title="Download and Cut Datasets" url="@ref workbench_docs_Workbench_DG_Download_and_Cut_Datasets"/>
+                               </tab>
+                               <tab type="user" title="Select Environment" url="@ref workbench_docs_Workbench_DG_Select_Environment"/>
+                               <tab type="user" title="Run Baseline Inference" url="@ref workbench_docs_Workbench_DG_Run_Baseline_Inference"/>
+                       </tab>
+                       <tab type="usergroup" title="DL Workbench Developer Guide" url="@ref workbench_docs_Workbench_DG_Run_Single_Inference">
+                               <tab type="usergroup" title="Measure and Interpret Model Performance" url="@ref workbench_docs_Workbench_DG_Run_Single_Inference">
+                                       <tab type="user" title="Run Single Inference" url="@ref workbench_docs_Workbench_DG_Run_Single_Inference"/>
+                                       <tab type="user" title="Run Group Inference" url="@ref workbench_docs_Workbench_DG_Run_Range_of_Inferences"/>
+                                       <tab type="usergroup" title="View Inference Results" url="@ref workbench_docs_Workbench_DG_View_Inference_Results">
+                                               <tab type="user" title="Visualize Model" url="@ref workbench_docs_Workbench_DG_Visualize_Model"/>
+                                       </tab>
+                                       <tab type="user" title="Compare Performance between Two Versions of a Model" url="@ref workbench_docs_Workbench_DG_Compare_Performance_between_Two_Versions_of_Models"/>
+                               </tab>
+                               <tab type="usergroup" title="Tune Model for Enhanced Performance" url="@ref workbench_docs_Workbench_DG_Int_8_Quantization">
+                                       <tab type="user" title="INT8 Calibration" url="@ref workbench_docs_Workbench_DG_Int_8_Quantization"/>
+                                       <tab type="user" title="Winograd Algorithmic Tuning" url="@ref workbench_docs_Workbench_DG_Winograd_Algorithmic_Tuning"/>
+                               </tab>
+                               <tab type="usergroup" title="Accuracy Measurements" url="@ref workbench_docs_Workbench_DG_Measure_Accuracy">
+                                       <tab type="user" title="Measure Accuracy" url="@ref workbench_docs_Workbench_DG_Measure_Accuracy"/>
+                                       <tab type="user" title="Configure Accuracy Settings" url="@ref workbench_docs_Workbench_DG_Configure_Accuracy_Settings"/>
+                               </tab>
+                               <tab type="usergroup" title="Remote Profiling" url="@ref workbench_docs_Workbench_DG_Remote_Profiling">
+                                       <tab type="user" title="Profile on Remote Machine" url="@ref workbench_docs_Workbench_DG_Profile_on_Remote_Machine"/>
+                                       <tab type="user" title="Set Up Target for Remote Profiling" url="@ref workbench_docs_Workbench_DG_Setup_Remote_Target"/>
+                                       <tab type="user" title="Register Remote Target in DL Workbench" url="@ref workbench_docs_Workbench_DG_Add_Remote_Target"/>
+                                       <tab type="user" title="Remote Machines" url="@ref workbench_docs_Workbench_DG_Remote_Machines"/>
+                               </tab>
+                               <tab type="user" title="Build Application with Deployment Package" url="@ref workbench_docs_Workbench_DG_Deployment_Package"/>
+                               <tab type="user" title="Deploy and Integrate Performance Criteria into Application" url="@ref workbench_docs_Workbench_DG_Deploy_and_Integrate_Performance_Criteria_into_Application"/>
+                               <tab type="user" title="Persist Database State" url="@ref workbench_docs_Workbench_DG_Persist_Database"/>
+                               <tab type="user" title="Work with Docker Container" url="@ref workbench_docs_Workbench_DG_Docker_Container"/>
+                       </tab>
+                       <tab type="usergroup" title="DL Workbench Security Guide" url="@ref workbench_docs_Workbench_DG_Configure_TLS">
+                               <tab type="user" title="Configure Transport Layer Security (TLS)" url="@ref workbench_docs_Workbench_DG_Configure_TLS"/>
+                               <tab type="user" title="Configure Authentication Token Saving" url="@ref workbench_docs_Workbench_DG_Configure_Token_Saving"/>
+                       </tab>
+                       <tab type="user" title="Troubleshooting" url="@ref workbench_docs_Workbench_DG_Troubleshooting"/>
+                </tab>
                 <!-- Inference Engine Plugin Development Guide-->
                 <tab type="user" title="Inference Engine Plugin Development Guide" url="ie_plugin_api/index.html"/>
                 <!-- Deployment Manager-->
                 <tab type="user" title="Deployment Manager Guide" url="@ref openvino_docs_install_guides_deployment_manager_tool"/>
-                       <!-- Workbench -->
-                       <tab type="usergroup" title="Deep Learning Workbench" url="@ref workbench_docs_Workbench_DG_Introduction">
-                               <tab type="user" title="Introduction to DL Workbench" url="@ref workbench_docs_Workbench_DG_Introduction"/>
-                               <tab type="usergroup" title="DL Workbench Installation Guide" url="@ref workbench_docs_Workbench_DG_Install_Workbench">
-                                       <tab type="user" title="Install from Docker Hub*" url="@ref workbench_docs_Workbench_DG_Install_from_Docker_Hub"/>
-                                       <tab type="user" title="Install from the Intel® Distribution of OpenVINO™ Toolkit Package" url="@ref workbench_docs_Workbench_DG_Install_from_Package"/>
-                                       <tab type="user" title="Enter DL Workbench" url="@ref workbench_docs_Workbench_DG_Authentication"/>
-                               </tab>
-                               <tab type="usergroup" title="DL Workbench Get Started Guide" url="@ref workbench_docs_Workbench_DG_Work_with_Models_and_Sample_Datasets">
-                                       <tab type="usergroup" title="Select Models" url="@ref workbench_docs_Workbench_DG_Select_Model">
-                                               <tab type="user" title="Import Models" url="@ref workbench_docs_Workbench_DG_Select_Models"/>
-                                               <tab type="user" title="Import Frozen TensorFlow* SSD MobileNet v2 COCO Tutorial" url="@ref workbench_docs_Workbench_DG_Import_TensorFlow"/>
-                                               <tab type="user" title="Import MXNet* MobileNet v2 Tutorial" url="@ref workbench_docs_Workbench_DG_Import_MXNet"/>
-                                               <tab type="user" title="Import ONNX* MobileNet v2 Tutorial" url="@ref workbench_docs_Workbench_DG_Import_ONNX"/>
-                                       </tab>
-                                       <tab type="usergroup" title="Select Datasets" url="@ref workbench_docs_Workbench_DG_Select_Datasets">
-                                               <tab type="user" title="Import Datasets" url="@ref workbench_docs_Workbench_DG_Import_Datasets"/>
-                                               <tab type="user" title="Generate Datasets" url="@ref workbench_docs_Workbench_DG_Generate_Datasets"/>
-                                               <tab type="user" title="Dataset Types" url="@ref workbench_docs_Workbench_DG_Dataset_Types"/>
-                                               <tab type="user" title="Download and Cut Datasets" url="@ref workbench_docs_Workbench_DG_Download_and_Cut_Datasets"/>
-                                       </tab>
-                                       <tab type="user" title="Select Environment" url="@ref workbench_docs_Workbench_DG_Select_Environment"/>
-                                       <tab type="user" title="Run Baseline Inference" url="@ref workbench_docs_Workbench_DG_Run_Baseline_Inference"/>
-                               </tab>
-                               <tab type="usergroup" title="DL Workbench Developer Guide" url="@ref workbench_docs_Workbench_DG_Run_Single_Inference">
-                                       <tab type="usergroup" title="Measure and Interpret Model Performance" url="@ref workbench_docs_Workbench_DG_Run_Single_Inference">
-                                               <tab type="user" title="Run Single Inference" url="@ref workbench_docs_Workbench_DG_Run_Single_Inference"/>
-                                               <tab type="user" title="Run Group Inference" url="@ref workbench_docs_Workbench_DG_Run_Range_of_Inferences"/>
-                                               <tab type="usergroup" title="View Inference Results" url="@ref workbench_docs_Workbench_DG_View_Inference_Results">
-                                                       <tab type="user" title="Visualize Model" url="@ref workbench_docs_Workbench_DG_Visualize_Model"/>
-                                               </tab>
-                                               <tab type="user" title="Compare Performance between Two Versions of a Model" url="@ref workbench_docs_Workbench_DG_Compare_Performance_between_Two_Versions_of_Models"/>
-                                       </tab>
-                                       <tab type="usergroup" title="Tune Model for Enhanced Performance" url="@ref workbench_docs_Workbench_DG_Int_8_Quantization">
-                                               <tab type="user" title="INT8 Calibration" url="@ref workbench_docs_Workbench_DG_Int_8_Quantization"/>
-                                               <tab type="user" title="Winograd Algorithmic Tuning" url="@ref workbench_docs_Workbench_DG_Winograd_Algorithmic_Tuning"/>
-                                       </tab>
-                                       <tab type="usergroup" title="Accuracy Measurements" url="@ref workbench_docs_Workbench_DG_Measure_Accuracy">
-                                               <tab type="user" title="Measure Accuracy" url="@ref workbench_docs_Workbench_DG_Measure_Accuracy"/>
-                                               <tab type="user" title="Configure Accuracy Settings" url="@ref workbench_docs_Workbench_DG_Configure_Accuracy_Settings"/>
-                                       </tab>
-                                       <tab type="usergroup" title="Remote Profiling" url="@ref workbench_docs_Workbench_DG_Remote_Profiling">
-                                               <tab type="user" title="Profile on Remote Machine" url="@ref workbench_docs_Workbench_DG_Profile_on_Remote_Machine"/>
-                                               <tab type="user" title="Set Up Target for Remote Profiling" url="@ref workbench_docs_Workbench_DG_Setup_Remote_Target"/>
-                                               <tab type="user" title="Register Remote Target in DL Workbench" url="@ref workbench_docs_Workbench_DG_Add_Remote_Target"/>
-                                               <tab type="user" title="Remote Machines" url="@ref workbench_docs_Workbench_DG_Remote_Machines"/>
-                                       </tab>
-                                       <tab type="user" title="Build Application with Deployment Package" url="@ref workbench_docs_Workbench_DG_Deployment_Package"/>
-                                       <tab type="user" title="Deploy and Integrate Performance Criteria into Application" url="@ref workbench_docs_Workbench_DG_Deploy_and_Integrate_Performance_Criteria_into_Application"/>
-                                       <tab type="user" title="Persist Database State" url="@ref workbench_docs_Workbench_DG_Persist_Database"/>
-                                       <tab type="user" title="Work with Docker Container" url="@ref workbench_docs_Workbench_DG_Docker_Container"/>
-                               </tab>
-                               <tab type="usergroup" title="DL Workbench Security Guide" url="@ref workbench_docs_Workbench_DG_Configure_TLS">
-                                       <tab type="user" title="Configure Transport Layer Security (TLS)" url="@ref workbench_docs_Workbench_DG_Configure_TLS"/>
-                                       <tab type="user" title="Configure Authentication Token Saving" url="@ref workbench_docs_Workbench_DG_Configure_Token_Saving"/>
-                               </tab>
-                               <tab type="user" title="Troubleshooting" url="@ref workbench_docs_Workbench_DG_Troubleshooting"/>
-                       </tab>
-            <!-- Security -->
+            <!-- Security -->
             <tab type="usergroup" title="Security" url="@ref openvino_docs_security_guide_introduction">
                 <tab type="user" title="Introduction" url="@ref openvino_docs_security_guide_introduction"/>
                 <tab type="user" title="Using DL Workbench Securely" url="@ref openvino_docs_security_guide_workbench"/>
index d36da473a04b21f522cfb7f0b0aa29fc29bf300b..ac0ab1dd3a804f017973651c9eb1d2b1d1de4a9b 100644 (file)
@@ -11,6 +11,24 @@ support, see [Installation Guide for Linux*](installing-openvino-linux.md).
 are not covered in this guide.
 - An internet connection is required to follow the steps in this guide.
 
+## Product Change Notice
+Intel® Distribution of OpenVINO™ toolkit for Intel® Vision Accelerator Design with an Intel® Arria® 10 FPGA and the Intel® Programmable Acceleration Card with Intel® Arria® 10 GX FPGA
+
+<table>
+  <tr>
+    <td><strong>Change Notice Begins</strong></td>
+    <td>July 2020</td>
+  </tr>
+  <tr>
+    <td><strong>Change Date</strong></td>
+    <td>October 2020</td>
+  </tr>
+</table> 
+
+Intel will be transitioning to the next-generation programmable deep-learning solution based on FPGAs in order to increase the level of customization possible in FPGA deep-learning. As part of this transition, future standard releases (i.e., non-LTS releases) of Intel® Distribution of OpenVINO™ toolkit will no longer include the Intel® Vision Accelerator Design with an Intel® Arria® 10 FPGA and the Intel® Programmable Acceleration Card with Intel® Arria® 10 GX FPGA.
+
+Intel® Distribution of OpenVINO™ toolkit 2020.3.X LTS release will continue to support Intel® Vision Accelerator Design with an Intel® Arria® 10 FPGA and the Intel® Programmable Acceleration Card with Intel® Arria® 10 GX FPGA. For questions about next-generation programmable deep-learning solutions based on FPGAs, please talk to your sales representative or contact us to get the latest FPGA updates.
+
 ## Introduction
 
 The Intel® Distribution of OpenVINO™ toolkit quickly deploys applications and solutions that emulate human vision. Based on Convolutional Neural Networks (CNN), the toolkit extends computer vision (CV) workloads across Intel® hardware, maximizing performance. The Intel® Distribution of OpenVINO™ toolkit includes the Intel® Deep Learning Deployment Toolkit (Intel® DLDT).
index 5fb5fd8e5c6f9e1c6a62ca34fd1724edd9ef5663..bc4b07f8d210c253527f1037ce83dc3bf1cc313b 100644 (file)
@@ -48,8 +48,8 @@ The development and target platforms have the same requirements, but you can sel
 
 **Software Requirements**
 
-- CMake 3.4 or higher
-- Python 3.5 or higher
+- CMake 3.9 or higher
+- Python 3.5 - 3.7
 - Apple Xcode\* Command Line Tools
 - (Optional) Apple Xcode\* IDE (not required for OpenVINO, but useful for development)
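The Python range above (3.5 through 3.7) can be checked against the active interpreter. A sketch in POSIX shell; the helper name `py3_supported` is illustrative and the fallback value is a placeholder for machines without `python3`:

```sh
# Illustrative check: is a Python 3 minor version inside the supported
# 3.5-3.7 range from the requirements list above?
py3_supported() {
    [ "$1" -ge 5 ] && [ "$1" -le 7 ]
}

PY_MINOR=$(python3 -c 'import sys; print(sys.version_info[1])' 2>/dev/null || echo 7)
if py3_supported "$PY_MINOR"; then
    echo "Python 3.$PY_MINOR is in the supported range"
else
    echo "Python 3.$PY_MINOR is outside the supported 3.5-3.7 range"
fi
```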
 
index d21855c92e1fb7691a01aa19c9cae4bd08d37673..a9398d2d5c6312aa1f95175bdfc83b6a7a0d10d7 100644 (file)
@@ -190,4 +190,4 @@ If you want to use your model for inference, the model must be converted to the
    * [Installation Guide for Windows*](installing-openvino-windows.md)
    * [Installation Guide for Linux*](installing-openvino-linux.md)
 
-   For more information about how to use the Model Optimizer, see the [Model Optimizer Developer Guide](https://software.intel.com/articles/OpenVINO-ModelOptimizer)
+   For more information about how to use the Model Optimizer, see the [Model Optimizer Developer Guide](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
index 2a5e6db38fbb8aae85e2135e1f50f8d259e63554..ee104a21fefed08f8f63acd89d83b26352150a74 100644 (file)
@@ -7,6 +7,24 @@ support, see [Installation Guide for Windows*](installing-openvino-windows.md).
 - An internet connection is required to follow the steps in this guide.
 - [Intel® System Studio](https://software.intel.com/en-us/system-studio) is an all-in-one, cross-platform tool suite, purpose-built to simplify system bring-up and improve system and IoT device application performance on Intel® platforms. If you are using the Intel® Distribution of OpenVINO™ with Intel® System Studio, go to [Get Started with Intel® System Studio](https://software.intel.com/en-us/articles/get-started-with-openvino-and-intel-system-studio-2019).
 
+## Product Change Notice
+Intel® Distribution of OpenVINO™ toolkit for Intel® Vision Accelerator Design with an Intel® Arria® 10 FPGA and the Intel® Programmable Acceleration Card with Intel® Arria® 10 GX FPGA
+
+<table>
+  <tr>
+    <td><strong>Change Notice Begins</strong></td>
+    <td>July 2020</td>
+  </tr>
+  <tr>
+    <td><strong>Change Date</strong></td>
+    <td>October 2020</td>
+  </tr>
+</table> 
+
+Intel will be transitioning to the next-generation programmable deep-learning solution based on FPGAs in order to increase the level of customization possible in FPGA deep-learning. As part of this transition, future standard releases (i.e., non-LTS releases) of Intel® Distribution of OpenVINO™ toolkit will no longer include the Intel® Vision Accelerator Design with an Intel® Arria® 10 FPGA and the Intel® Programmable Acceleration Card with Intel® Arria® 10 GX FPGA.
+
+Intel® Distribution of OpenVINO™ toolkit 2020.3.X LTS release will continue to support Intel® Vision Accelerator Design with an Intel® Arria® 10 FPGA and the Intel® Programmable Acceleration Card with Intel® Arria® 10 GX FPGA. For questions about next-generation programmable deep-learning solutions based on FPGAs, please talk to your sales representative or contact us to get the latest FPGA updates.
+
 ## Introduction
 
 > **IMPORTANT**:
@@ -19,10 +37,10 @@ Your installation is complete when these are all completed:
 
 2. Install the dependencies:
 
-   - [Microsoft Visual Studio* with C++ **2019, 2017, or 2015** with MSBuild](http://visualstudio.microsoft.com/downloads/)  
-   - [CMake **3.4 or higher** 64-bit](https://cmake.org/download/)
+   - [Microsoft Visual Studio* with C++ **2019 or 2017** with MSBuild](http://visualstudio.microsoft.com/downloads/)  
+   - [CMake **2.8.12 or higher** 64-bit](https://cmake.org/download/)
    > **NOTE**: If you want to use Microsoft Visual Studio 2019, you are required to install CMake 3.14.
-   - [Python **3.6.5** 64-bit](https://www.python.org/downloads/release/python-365/)
+   - [Python **3.5**-**3.7** 64-bit](https://www.python.org/downloads/windows/)
    > **IMPORTANT**: As part of this installation, make sure you click the option to add the application to your `PATH` environment variable.
 
 3. <a href="#set-the-environment-variables">Set Environment Variables</a>         
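A pre-flight check for the step-2 dependencies can save a failed build later. A hedged sketch: `have_tool` is a hypothetical helper, and on Windows the interpreter is typically invoked as `python` rather than `python3`, so the tool names below are placeholders:

```sh
# Illustrative dependency check for the installation steps above.
have_tool() { command -v "$1" >/dev/null 2>&1; }

for tool in cmake python; do
    if have_tool "$tool"; then
        echo "found: $tool"
    else
        echo "missing: $tool (install per step 2)"
    fi
done
```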
@@ -77,7 +95,7 @@ The development and target platforms have the same requirements, but you can sel
 * Intel Pentium® processor N4200/5, N3350/5, or N3450/5 with Intel® HD Graphics
 * Intel® Neural Compute Stick 2
 * Intel® Vision Accelerator Design with Intel® Movidius™ VPUs
-* Intel® Vision Accelerator Design with an Intel® Arria 10 FPGA (Mustang-F100-A10) SG2
+* Intel® Vision Accelerator Design with an Intel® Arria® 10 FPGA (Mustang-F100-A10) SG2
 
 > **NOTE**: With OpenVINO™ 2020.4 release, Intel® Movidius™ Neural Compute Stick is no longer supported.
 
@@ -93,8 +111,8 @@ The development and target platforms have the same requirements, but you can sel
 - Microsoft Windows* 10, 64-bit
 
 **Software**
-- [Microsoft Visual Studio* with C++ **2019, 2017, or 2015** with MSBuild](http://visualstudio.microsoft.com/downloads/)
-- [CMake **3.4 or higher** 64-bit](https://cmake.org/download/)
+- [Microsoft Visual Studio* with C++ **2019 or 2017** with MSBuild](http://visualstudio.microsoft.com/downloads/)
+- [CMake **2.8.12 or higher** 64-bit](https://cmake.org/download/)
    > **NOTE**: If you want to use Microsoft Visual Studio 2019, you are required to install CMake 3.14.
 - [Python **3.6.5** 64-bit](https://www.python.org/downloads/release/python-365/)
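The CMake requirement in the note above amounts to a per-generator minimum-version check. A sketch of that logic, where the generator names are CMake's own but the mapping itself is an assumption drawn only from the versions named in this document:

```python
def parse_version(text):
    """Parse a dotted version string like '3.14.0' into a comparable tuple of ints."""
    return tuple(int(part) for part in text.split("."))

# Minimum CMake per Visual Studio generator, per the notes above
# (3.14 for VS 2019); this mapping is illustrative, not authoritative.
MIN_CMAKE = {
    "Visual Studio 16 2019": "3.14",
    "Visual Studio 15 2017": "2.8.12",
}

def cmake_ok(installed, generator):
    """True if the installed CMake version satisfies the minimum for `generator`."""
    return parse_version(installed) >= parse_version(MIN_CMAKE[generator])
```

Tuple comparison makes `(3, 10, 2) < (3, 14)` behave correctly, which naive string comparison ("3.10.2" vs "3.14") would get wrong.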
 
@@ -360,7 +378,7 @@ To perform inference on Intel® Vision Accelerator Design with Intel® Movidius
       1. Go to the `<INSTALL_DIR>\deployment_tools\inference-engine\external\hddl\SMBusDriver` directory, where `<INSTALL_DIR>` is the directory in which the Intel Distribution of OpenVINO toolkit is installed.
       2. Right-click the `hddlsmbus.inf` file and choose **Install** from the pop-up menu.
 
-  2. Download and install <a href="https://www.microsoft.com/en-us/download/details.aspx?id=48145">Visual C++ Redistributable for Visual Studio 2015</a>
+  2. Download and install <a href="https://www.microsoft.com/en-us/download/details.aspx?id=48145">Visual C++ Redistributable for Visual Studio 2017</a>
 
 You are done installing your device driver and are ready to use your Intel® Vision Accelerator Design with Intel® Movidius™ VPUs.
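If compiled samples still fail to start after driver setup, a common cause is the Inference Engine DLL directory missing from `%PATH%`. A hypothetical helper for that check; the directory shown is illustrative, not the documented install path:

```python
# Hypothetical helper: check whether a directory appears as an entry of a
# Windows-style, semicolon-separated PATH string. Comparison is
# case-insensitive and ignores trailing slashes, as Windows paths do.
def dir_on_path(directory, path, sep=";"):
    normalize = lambda p: p.rstrip("\\/").lower()
    entries = [normalize(entry) for entry in path.split(sep) if entry]
    return normalize(directory) in entries

# Illustrative directory only -- substitute your actual install location.
example = "C:\\openvino\\bin\\intel64\\Release"
print(dir_on_path(example, "C:\\Windows;" + example))
```

In practice you would pass `os.environ["PATH"]` as the second argument on the target machine.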
 
index ff7d93248874b5c48d51074f758b1cae6b93f397..00aaff6b0576dadc01519784005cf288b8d78ed9 100644 (file)
@@ -18,10 +18,10 @@ Your installation is complete when these are all completed:
 
 2. Install the dependencies:
 
-   - [Microsoft Visual Studio* with C++ **2019, 2017, or 2015** with MSBuild](http://visualstudio.microsoft.com/downloads/)  
-   - [CMake **3.4 or higher** 64-bit](https://cmake.org/download/)
+   - [Microsoft Visual Studio* with C++ **2019 or 2017** with MSBuild](http://visualstudio.microsoft.com/downloads/)  
+   - [CMake **2.8.12 or higher** 64-bit](https://cmake.org/download/)
    > **NOTE**: If you want to use Microsoft Visual Studio 2019, you are required to install CMake 3.14.
-   - [Python **3.6.5** 64-bit](https://www.python.org/downloads/release/python-365/)
+   - [Python **3.5** - **3.7** 64-bit](https://www.python.org/downloads/windows/)
    > **IMPORTANT**: As part of this installation, make sure you click the option to add the application to your `PATH` environment variable.
 
 3. <a href="#set-the-environment-variables">Set Environment Variables</a>         
@@ -90,10 +90,10 @@ The following components are installed by default:
 - Microsoft Windows\* 10 64-bit
 
 **Software**
-- [Microsoft Visual Studio* with C++ **2019, 2017, or 2015** with MSBuild](http://visualstudio.microsoft.com/downloads/)
-- [CMake **3.4 or higher** 64-bit](https://cmake.org/download/)
+- [Microsoft Visual Studio* with C++ **2019 or 2017** with MSBuild](http://visualstudio.microsoft.com/downloads/)
+- [CMake **2.8.12 or higher** 64-bit](https://cmake.org/download/)
    > **NOTE**: If you want to use Microsoft Visual Studio 2019, you are required to install CMake 3.14.
-- [Python **3.6.5** 64-bit](https://www.python.org/downloads/release/python-365/)
+- [Python **3.5** - **3.7** 64-bit](https://www.python.org/downloads/windows/)
 
 ## Installation Steps
 
@@ -301,7 +301,7 @@ In this section, you saw a preview of the Intel® Distribution of OpenVINO™ to
 
 Congratulations. You have completed all the required installation, configuration, and build steps to work with your trained models using CPU. 
 
-If you want to use Intel® Processor graphics (GPU), Intel® Neural Compute Stick 2 or Intel® Vision Accelerator Design with Intel® Movidius™ (VPU), or add CMake* and Python* to your Windows* environment variables, read through the next section for additional steps.
+If you want to use Intel® Processor Graphics (GPU), Intel® Neural Compute Stick 2, or Intel® Vision Accelerator Design with Intel® Movidius™ VPUs, or add CMake* and Python* to your Windows* environment variables, read through the next section for additional steps.
 
 If you want to continue and run the Image Classification Sample Application on one of the supported hardware devices, see the [Run the Image Classification Sample Application](#run-the-image-classification-sample-application) section.
 
@@ -345,7 +345,7 @@ To perform inference on Intel® Vision Accelerator Design with Intel® Movidius
       1. Go to the `<INSTALL_DIR>\deployment_tools\inference-engine\external\hddl\SMBusDriver` directory, where `<INSTALL_DIR>` is the directory in which the Intel Distribution of OpenVINO toolkit is installed.
       2. Right-click the `hddlsmbus.inf` file and choose **Install** from the pop-up menu.
 
-  2. Download and install <a href="https://www.microsoft.com/en-us/download/details.aspx?id=48145">Visual C++ Redistributable for Visual Studio 2015</a>
+  2. Download and install <a href="https://www.microsoft.com/en-us/download/details.aspx?id=48145">Visual C++ Redistributable for Visual Studio 2017</a>
 
 You are done installing your device driver and are ready to use your Intel® Vision Accelerator Design with Intel® Movidius™ VPUs.