> **NOTE**: Before running, make sure you completed **Set the Environment Variables** section in [OpenVINO Installation](../../inference-engine/samples/hello_nv12_input_classification/README.md) document so that the application can find the libraries.
To run compiled applications on Microsoft* Windows* OS, make sure that Microsoft* Visual C++ 2017
Redistributable and Intel® C++ Compiler 2017 Redistributable packages are installed and
`<INSTALL_DIR>/bin/intel64/Release/*.dll` files are placed to the
application folder or accessible via `%PATH%` environment variable.
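On Windows the library directory is made visible by editing `%PATH%`. As a rough POSIX-shell illustration of the underlying idea only (Windows uses `%PATH%` with `;` separators; `path_prepend` and the directory shown are hypothetical), the runtime-library directory is prepended so it is searched first, and prepending twice does no harm:

```sh
# Hypothetical helper: prepend a directory to the search path exactly once,
# so the shared libraries in it are found before any other copies.
path_prepend() {
  dir="$1"
  case ":$PATH:" in
    *":$dir:"*) ;;               # already on the path: do nothing
    *) PATH="$dir:$PATH" ;;
  esac
}

path_prepend /opt/intel/openvino/bin/intel64/Release   # hypothetical path
path_prepend /opt/intel/openvino/bin/intel64/Release   # second call is a no-op
```

The duplicate check keeps repeated shell sessions from growing the path without bound, which is the same hygiene worth applying to `%PATH%` on Windows.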
* Ubuntu* 16.04 LTS 64-bit or CentOS* 7.4 64-bit
* GCC* 5.4.0 (for Ubuntu* 16.04) or GCC* 4.8.5 (for CentOS* 7.4)
* CMake* version 2.8.12 or higher
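Before building, it is worth confirming that the installed CMake meets the minimum. A small shell check can do this (a sketch; `version_ge` is a hypothetical helper that relies on `sort -V` for version ordering):

```sh
# Hypothetical helper: succeeds when version $1 >= version $2.
# `sort -V` orders version strings numerically, so the smaller
# of the two versions sorts first.
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Example: compare the locally installed CMake against the 2.8.12 minimum.
installed="$(cmake --version 2>/dev/null | head -n1 | awk '{ print $3 }')"
if [ -n "$installed" ] && version_ge "$installed" 2.8.12; then
  echo "CMake $installed is new enough"
else
  echo "CMake missing or older than 2.8.12" >&2
fi
```

The same helper works for any of the minimum versions quoted in this guide, e.g. `version_ge "$installed" 3.14` for the Visual Studio 2019 requirement mentioned below.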
To build the C or C++ sample applications for Linux, go to the `<INSTALL_DIR>/inference_engine/samples/c` or `<INSTALL_DIR>/inference_engine/samples/cpp` directory, respectively, and run the `build_samples.sh` script:
```sh
./build_samples.sh
```
The recommended Windows* build environment is the following:
* Microsoft Windows* 10
* Microsoft Visual Studio* 2017 or 2019
* CMake* version 2.8.12 or higher
> **NOTE**: If you want to use Microsoft Visual Studio 2019, you are required to install CMake 3.14.
By default, the script automatically detects the highest Microsoft Visual Studio version installed on the machine and uses it to create and build
a solution for a sample code. Optionally, you can also specify the preferred Microsoft Visual Studio version to be used by the script. Supported
versions are `VS2017` and `VS2019`. For example, to build the C++ samples using Microsoft Visual Studio 2017, use the following command:
```sh
<INSTALL_DIR>\inference_engine\samples\cpp\build_samples_msvc.bat VS2017
```
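The selection rule the script applies can be sketched as follows (POSIX shell for illustration only; the real script is a Windows batch file, and `pick_vs_version` is a hypothetical name):

```sh
# Hypothetical sketch of the version-selection rule: honor an explicit
# request if that Visual Studio version is installed, otherwise fall
# back to the highest version found on the machine.
pick_vs_version() {
  requested="$1"                      # e.g. VS2017, or empty for auto-detect
  available="$2"                      # e.g. "VS2017 VS2019"
  if [ -n "$requested" ]; then
    case " $available " in
      *" $requested "*) echo "$requested"; return 0 ;;
      *) echo "error: $requested is not installed" >&2; return 1 ;;
    esac
  fi
  # Auto-detect: VS20NN strings sort so the newest release comes last.
  printf '%s\n' $available | sort | tail -n1
}
```

For example, `pick_vs_version "" "VS2017 VS2019"` auto-detects `VS2019`, while `pick_vs_version VS2017 "VS2017 VS2019"` honors the explicit request.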
FPGA Plugin {#openvino_docs_IE_DG_supported_plugins_FPGA}
===========
## Product Change Notice
Intel® Distribution of OpenVINO™ toolkit for Intel® Vision Accelerator Design with an Intel® Arria® 10 FPGA and the Intel® Programmable Acceleration Card with Intel® Arria® 10 GX FPGA

<table>
  <tr>
    <td><strong>Change Notice Begins</strong></td>
    <td>July 2020</td>
  </tr>
  <tr>
    <td><strong>Change Date</strong></td>
    <td>October 2020</td>
  </tr>
</table>

Intel will be transitioning to the next-generation programmable deep-learning solution based on FPGAs in order to increase the level of customization possible in FPGA deep learning. As part of this transition, future standard releases (i.e., non-LTS releases) of Intel® Distribution of OpenVINO™ toolkit will no longer include the Intel® Vision Accelerator Design with an Intel® Arria® 10 FPGA and the Intel® Programmable Acceleration Card with Intel® Arria® 10 GX FPGA.

Intel® Distribution of OpenVINO™ toolkit 2020.3.X LTS release will continue to support Intel® Vision Accelerator Design with an Intel® Arria® 10 FPGA and the Intel® Programmable Acceleration Card with Intel® Arria® 10 GX FPGA. For questions about next-generation programmable deep-learning solutions based on FPGAs, please talk to your sales representative or contact us to get the latest FPGA updates.

## Introducing FPGA Plugin
The FPGA plugin enables high-performance scoring of neural networks on Intel® FPGA devices.
This software and the related documents are Intel copyrighted materials, and your use of them is governed by the express license (the “License”) under which they were provided to you. No license (express or implied, by estoppel or otherwise) to any intellectual property rights is granted by this document. Unless the License provides otherwise, you may not use, modify, copy, publish, distribute, disclose or transmit this software or the related documents without Intel's prior written permission. This software and the related documents are provided as is, with no express or implied warranties, other than those that are expressly stated in the License. Intel disclaims all express and implied warranties, including without limitation, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement, as well as any warranty arising from course of performance, course of dealing, or usage in trade.
This document contains information on products, services and/or processes in development. All information provided here is subject to change without notice. Contact your Intel representative to obtain the latest forecast, schedule, specifications and roadmaps. The products and services described may contain defects or errors known as errata which may cause deviations from published specifications. Current characterized errata are available on request. Copies of documents which have an order number and are referenced in this document may be obtained by calling 1-800-548-4725 or by visiting [www.intel.com/design/literature.htm](https://www.intel.com/design/literature.htm).
Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors.
Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more complete information visit [www.intel.com/benchmarks](https://www.intel.com/benchmarks).
Performance results are based on testing as of dates shown in configurations and may not reflect all publicly available updates. See backup for configuration details. No product or component can be absolutely secure.
Intel technologies may require enabled hardware, software or service activation.
© Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. \*Other names and brands may be claimed as the property of others.
<tab type="user" title="Optimization Notice" url="@ref openvino_docs_Optimization_notice"/>
<tab type="user" title="Glossary" url="@ref openvino_docs_IE_DG_Glossary"/>
</tab>
  <!-- Workbench -->
  <tab type="usergroup" title="Deep Learning Workbench" url="@ref workbench_docs_Workbench_DG_Introduction">
    <tab type="user" title="Introduction to DL Workbench" url="@ref workbench_docs_Workbench_DG_Introduction"/>
    <tab type="usergroup" title="DL Workbench Installation Guide" url="@ref workbench_docs_Workbench_DG_Install_Workbench">
      <tab type="user" title="Install from Docker Hub*" url="@ref workbench_docs_Workbench_DG_Install_from_Docker_Hub"/>
      <tab type="user" title="Install from the Intel® Distribution of OpenVINO™ Toolkit Package" url="@ref workbench_docs_Workbench_DG_Install_from_Package"/>
      <tab type="user" title="Enter DL Workbench" url="@ref workbench_docs_Workbench_DG_Authentication"/>
    </tab>
    <tab type="usergroup" title="DL Workbench Get Started Guide" url="@ref workbench_docs_Workbench_DG_Work_with_Models_and_Sample_Datasets">
      <tab type="usergroup" title="Select Models" url="@ref workbench_docs_Workbench_DG_Select_Model">
        <tab type="user" title="Import Models" url="@ref workbench_docs_Workbench_DG_Select_Models"/>
        <tab type="user" title="Import Frozen TensorFlow* SSD MobileNet v2 COCO Tutorial" url="@ref workbench_docs_Workbench_DG_Import_TensorFlow"/>
        <tab type="user" title="Import MXNet* MobileNet v2 Tutorial" url="@ref workbench_docs_Workbench_DG_Import_MXNet"/>
        <tab type="user" title="Import ONNX* MobileNet v2 Tutorial" url="@ref workbench_docs_Workbench_DG_Import_ONNX"/>
      </tab>
      <tab type="usergroup" title="Select Datasets" url="@ref workbench_docs_Workbench_DG_Select_Datasets">
        <tab type="user" title="Import Datasets" url="@ref workbench_docs_Workbench_DG_Import_Datasets"/>
        <tab type="user" title="Generate Datasets" url="@ref workbench_docs_Workbench_DG_Generate_Datasets"/>
        <tab type="user" title="Dataset Types" url="@ref workbench_docs_Workbench_DG_Dataset_Types"/>
        <tab type="user" title="Download and Cut Datasets" url="@ref workbench_docs_Workbench_DG_Download_and_Cut_Datasets"/>
      </tab>
      <tab type="user" title="Select Environment" url="@ref workbench_docs_Workbench_DG_Select_Environment"/>
      <tab type="user" title="Run Baseline Inference" url="@ref workbench_docs_Workbench_DG_Run_Baseline_Inference"/>
    </tab>
    <tab type="usergroup" title="DL Workbench Developer Guide" url="@ref workbench_docs_Workbench_DG_Run_Single_Inference">
      <tab type="usergroup" title="Measure and Interpret Model Performance" url="@ref workbench_docs_Workbench_DG_Run_Single_Inference">
        <tab type="user" title="Run Single Inference" url="@ref workbench_docs_Workbench_DG_Run_Single_Inference"/>
        <tab type="user" title="Run Group Inference" url="@ref workbench_docs_Workbench_DG_Run_Range_of_Inferences"/>
        <tab type="usergroup" title="View Inference Results" url="@ref workbench_docs_Workbench_DG_View_Inference_Results">
          <tab type="user" title="Visualize Model" url="@ref workbench_docs_Workbench_DG_Visualize_Model"/>
        </tab>
        <tab type="user" title="Compare Performance between Two Versions of a Model" url="@ref workbench_docs_Workbench_DG_Compare_Performance_between_Two_Versions_of_Models"/>
      </tab>
      <tab type="usergroup" title="Tune Model for Enhanced Performance" url="@ref workbench_docs_Workbench_DG_Int_8_Quantization">
        <tab type="user" title="INT8 Calibration" url="@ref workbench_docs_Workbench_DG_Int_8_Quantization"/>
        <tab type="user" title="Winograd Algorithmic Tuning" url="@ref workbench_docs_Workbench_DG_Winograd_Algorithmic_Tuning"/>
      </tab>
      <tab type="usergroup" title="Accuracy Measurements" url="@ref workbench_docs_Workbench_DG_Measure_Accuracy">
        <tab type="user" title="Measure Accuracy" url="@ref workbench_docs_Workbench_DG_Measure_Accuracy"/>
        <tab type="user" title="Configure Accuracy Settings" url="@ref workbench_docs_Workbench_DG_Configure_Accuracy_Settings"/>
      </tab>
      <tab type="usergroup" title="Remote Profiling" url="@ref workbench_docs_Workbench_DG_Remote_Profiling">
        <tab type="user" title="Profile on Remote Machine" url="@ref workbench_docs_Workbench_DG_Profile_on_Remote_Machine"/>
        <tab type="user" title="Set Up Target for Remote Profiling" url="@ref workbench_docs_Workbench_DG_Setup_Remote_Target"/>
        <tab type="user" title="Register Remote Target in DL Workbench" url="@ref workbench_docs_Workbench_DG_Add_Remote_Target"/>
        <tab type="user" title="Remote Machines" url="@ref workbench_docs_Workbench_DG_Remote_Machines"/>
      </tab>
      <tab type="user" title="Build Application with Deployment Package" url="@ref workbench_docs_Workbench_DG_Deployment_Package"/>
      <tab type="user" title="Deploy and Integrate Performance Criteria into Application" url="@ref workbench_docs_Workbench_DG_Deploy_and_Integrate_Performance_Criteria_into_Application"/>
      <tab type="user" title="Persist Database State" url="@ref workbench_docs_Workbench_DG_Persist_Database"/>
      <tab type="user" title="Work with Docker Container" url="@ref workbench_docs_Workbench_DG_Docker_Container"/>
    </tab>
    <tab type="usergroup" title="DL Workbench Security Guide" url="@ref workbench_docs_Workbench_DG_Configure_TLS">
      <tab type="user" title="Configure Transport Layer Security (TLS)" url="@ref workbench_docs_Workbench_DG_Configure_TLS"/>
      <tab type="user" title="Configure Authentication Token Saving" url="@ref workbench_docs_Workbench_DG_Configure_Token_Saving"/>
    </tab>
    <tab type="user" title="Troubleshooting" url="@ref workbench_docs_Workbench_DG_Troubleshooting"/>
  </tab>
<!-- Inference Engine Plugin Development Guide-->
<tab type="user" title="Inference Engine Plugin Development Guide" url="ie_plugin_api/index.html"/>
<!-- Deployment Manager-->
<tab type="user" title="Deployment Manager Guide" url="@ref openvino_docs_install_guides_deployment_manager_tool"/>
  <!-- Security -->
<tab type="usergroup" title="Security" url="@ref openvino_docs_security_guide_introduction">
<tab type="user" title="Introduction" url="@ref openvino_docs_security_guide_introduction"/>
<tab type="user" title="Using DL Workbench Securely" url="@ref openvino_docs_security_guide_workbench"/>
are not covered in this guide.
- An internet connection is required to follow the steps in this guide.
## Introduction
The Intel® Distribution of OpenVINO™ toolkit quickly deploys applications and solutions that emulate human vision. Based on Convolutional Neural Networks (CNN), the toolkit extends computer vision (CV) workloads across Intel® hardware, maximizing performance. The Intel® Distribution of OpenVINO™ toolkit includes the Intel® Deep Learning Deployment Toolkit (Intel® DLDT).
**Software Requirements**
- CMake 3.9 or higher
- Python 3.5 - 3.7
- Apple Xcode\* Command Line Tools
- (Optional) Apple Xcode\* IDE (not required for OpenVINO, but useful for development)
* [Installation Guide for Windows*](installing-openvino-windows.md)
* [Installation Guide for Linux*](installing-openvino-linux.md)
For more information about how to use the Model Optimizer, see the [Model Optimizer Developer Guide](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
- An internet connection is required to follow the steps in this guide.
- [Intel® System Studio](https://software.intel.com/en-us/system-studio) is an all-in-one, cross-platform tool suite, purpose-built to simplify system bring-up and improve system and IoT device application performance on Intel® platforms. If you are using the Intel® Distribution of OpenVINO™ with Intel® System Studio, go to [Get Started with Intel® System Studio](https://software.intel.com/en-us/articles/get-started-with-openvino-and-intel-system-studio-2019).
## Introduction
> **IMPORTANT**:
2. Install the dependencies:
   - [Microsoft Visual Studio* with C++ **2019 or 2017** with MSBuild](http://visualstudio.microsoft.com/downloads/)
   - [CMake **2.8.12 or higher** 64-bit](https://cmake.org/download/)
> **NOTE**: If you want to use Microsoft Visual Studio 2019, you are required to install CMake 3.14.
   - [Python **3.5** - **3.7** 64-bit](https://www.python.org/downloads/windows/)
> **IMPORTANT**: As part of this installation, make sure you click the option to add the application to your `PATH` environment variable.
3. <a href="#set-the-environment-variables">Set Environment Variables</a>
* Intel® Pentium® processor N4200/5, N3350/5, or N3450/5 with Intel® HD Graphics
* Intel® Neural Compute Stick 2
* Intel® Vision Accelerator Design with Intel® Movidius™ VPUs
* Intel® Vision Accelerator Design with an Intel® Arria® 10 FPGA (Mustang-F100-A10) SG2
> **NOTE**: With OpenVINO™ 2020.4 release, Intel® Movidius™ Neural Compute Stick is no longer supported.
- Microsoft Windows\* 10, 64-bit
**Software**
- [Microsoft Visual Studio* with C++ **2019 or 2017** with MSBuild](http://visualstudio.microsoft.com/downloads/)
- [CMake **2.8.12 or higher** 64-bit](https://cmake.org/download/)
> **NOTE**: If you want to use Microsoft Visual Studio 2019, you are required to install CMake 3.14.
- [Python **3.5** - **3.7** 64-bit](https://www.python.org/downloads/windows/)
1. Go to the `<INSTALL_DIR>\deployment_tools\inference-engine\external\hddl\SMBusDriver` directory, where `<INSTALL_DIR>` is the directory in which the Intel Distribution of OpenVINO toolkit is installed.
2. Right click on the `hddlsmbus.inf` file and choose **Install** from the pop up menu.
   2. Download and install <a href="https://www.microsoft.com/en-us/download/details.aspx?id=48145">Visual C++ Redistributable for Visual Studio 2017</a>
You are done installing your device driver and are ready to use your Intel® Vision Accelerator Design with Intel® Movidius™ VPUs.
2. Install the dependencies:
   - [Microsoft Visual Studio* with C++ **2019 or 2017** with MSBuild](http://visualstudio.microsoft.com/downloads/)
   - [CMake **2.8.12 or higher** 64-bit](https://cmake.org/download/)
> **NOTE**: If you want to use Microsoft Visual Studio 2019, you are required to install CMake 3.14.
   - [Python **3.5** - **3.7** 64-bit](https://www.python.org/downloads/windows/)
> **IMPORTANT**: As part of this installation, make sure you click the option to add the application to your `PATH` environment variable.
3. <a href="#set-the-environment-variables">Set Environment Variables</a>
- Microsoft Windows\* 10 64-bit
**Software**
- [Microsoft Visual Studio* with C++ **2019 or 2017** with MSBuild](http://visualstudio.microsoft.com/downloads/)
- [CMake **2.8.12 or higher** 64-bit](https://cmake.org/download/)
> **NOTE**: If you want to use Microsoft Visual Studio 2019, you are required to install CMake 3.14.
- [Python **3.5** - **3.7** 64-bit](https://www.python.org/downloads/windows/)
## Installation Steps
Congratulations. You have completed all the required installation, configuration, and build steps to work with your trained models using CPU.
If you want to use Intel® Processor Graphics (GPU), Intel® Neural Compute Stick 2, or Intel® Vision Accelerator Design with Intel® Movidius™ VPUs, or add CMake* and Python* to your Windows* environment variables, read through the next section for additional steps.
If you want to continue and run the Image Classification Sample Application on one of the supported hardware devices, see the [Run the Image Classification Sample Application](#run-the-image-classification-sample-application) section.
1. Go to the `<INSTALL_DIR>\deployment_tools\inference-engine\external\hddl\SMBusDriver` directory, where `<INSTALL_DIR>` is the directory in which the Intel Distribution of OpenVINO toolkit is installed.
2. Right click on the `hddlsmbus.inf` file and choose **Install** from the pop up menu.
   2. Download and install <a href="https://www.microsoft.com/en-us/download/details.aspx?id=48145">Visual C++ Redistributable for Visual Studio 2017</a>
You are done installing your device driver and are ready to use your Intel® Vision Accelerator Design with Intel® Movidius™ VPUs.