Install the Intel® Distribution of OpenVINO™ toolkit core components
Install the dependencies:
Microsoft Visual Studio* 2019 or 2017 with C++ and MSBuild
NOTE: If you want to use Microsoft Visual Studio 2019, you are required to install CMake v3.14.
IMPORTANT: As part of this installation, make sure you click the option to add the application to your PATH
environment variable.
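You can confirm the PATH option took effect by opening a new Command Prompt and querying the tool directly. A minimal check (the version printed depends on what you installed):
rem Confirm CMake is reachable from PATH
where cmake
cmake --version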
NOTE: System Requirements
Hardware
- 6th to 10th generation Intel® Core™ processors and Intel® Xeon® processors
- Intel® Xeon® processor E family (formerly code named Sandy Bridge, Ivy Bridge, Haswell, and Broadwell)
- 3rd generation Intel® Xeon® Scalable processor (formerly code named Cooper Lake)
- Intel® Xeon® Scalable processor (formerly Skylake and Cascade Lake)
- Intel Atom® processor with support for Intel® Streaming SIMD Extensions 4.1 (Intel® SSE4.1)
- Intel Pentium® processor N4200/5, N3350/5, or N3450/5 with Intel® HD Graphics
Processor Notes:
- Processor graphics are not included in all processors. See Processor specifications for information about your processor.
- A chipset that supports processor graphics is required if you're using an Intel Xeon processor. See Chipset specifications for information about your chipset.
Operating System
- Microsoft Windows* 10 64-bit
Software
- Microsoft Visual Studio* 2019 or 2017 with C++ and MSBuild
- CMake* 3.14 or higher
- Python* 3.6 or higher (64-bit)
STEP 2: Set the Environment Variables
NOTE: If you installed the Intel® Distribution of OpenVINO™ toolkit to a non-default install directory, replace C:\Program Files (x86)\IntelSWTools with the directory in which you installed the software.
You must update several environment variables before you can compile and run OpenVINO™ applications. Open the Command Prompt, and run the setupvars.bat
batch file to temporarily set your environment variables:
cd C:\Program Files (x86)\IntelSWTools\openvino\bin\
setupvars.bat
OpenVINO toolkit environment variables are removed when you close the Command Prompt window. As an option, you can permanently set the environment variables manually.
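If you do set them permanently, one option for a single variable is the setx command. This is only a sketch: setupvars.bat sets several variables, including additions to PATH, so inspect that file to see everything you would need to replicate; INTEL_OPENVINO_DIR below is assumed to be one of the variables it sets.
rem Persist one variable for the current user (takes effect in newly opened console windows)
setx INTEL_OPENVINO_DIR "C:\Program Files (x86)\IntelSWTools\openvino"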
The environment variables are set. Continue to the next section to configure the Model Optimizer.
STEP 3: Configure the Model Optimizer
IMPORTANT: These steps are required. You must configure the Model Optimizer for at least one framework. The Model Optimizer will fail if you do not complete the steps in this section.
NOTE: If you see an error indicating Python is not installed when you know you installed it, your computer might not be able to find the program. For the instructions to add Python to your system environment variables, see Update Your Windows Environment Variables.
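A quick way to check whether Windows can find Python is to query it from a Command Prompt (the path and version printed will vary with your installation):
rem Show which python.exe, if any, is on PATH
where python
python --version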
The Model Optimizer is a key component of the Intel® Distribution of OpenVINO™ toolkit. You cannot do inference on your trained model without running the model through the Model Optimizer. When you run a pre-trained model through the Model Optimizer, your output is an Intermediate Representation (IR) of the network. The IR is a pair of files that describe the whole model:
- .xml: Describes the network topology
- .bin: Contains the weights and biases binary data
The Inference Engine reads, loads, and infers the IR files, using a common API across the CPU, GPU, or VPU hardware.
The Model Optimizer is a Python*-based command-line tool (mo.py), which is located in C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer. Use this tool on models trained with popular deep learning frameworks such as Caffe*, TensorFlow*, MXNet*, and ONNX* to convert them to an optimized IR format that the Inference Engine can use.
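As a preview of how the tool is invoked, here is a minimal, hedged example; the input model path and output directory are hypothetical placeholders, while --input_model and --output_dir are standard mo.py options:
cd "C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer"
rem Convert a (hypothetical) TensorFlow frozen graph to IR (.xml + .bin)
python mo.py --input_model C:\models\model.pb --output_dir C:\models\ir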
This section explains how to use scripts to configure the Model Optimizer either for all of the supported frameworks at the same time or for individual frameworks. If you want to manually configure the Model Optimizer instead of using scripts, see the Using Manual Configuration Process section on the Configuring the Model Optimizer page.
For more information about the Model Optimizer, see the Model Optimizer Developer Guide.
Model Optimizer Configuration Steps
You can configure the Model Optimizer either for all supported frameworks at once or for one framework at a time. Choose the option that best suits your needs. If you see error messages, make sure you installed all dependencies.
IMPORTANT: Internet access is required to execute the following steps successfully. If you have access to the Internet only through a proxy server, make sure that it is configured in your environment.
NOTE: In the steps below:
- If you want to use the Model Optimizer from another installed version of the Intel® Distribution of OpenVINO™ toolkit, replace openvino with openvino_<version>.
- If you installed the Intel® Distribution of OpenVINO™ toolkit to a non-default installation directory, replace C:\Program Files (x86)\IntelSWTools with the directory where you installed the software.
These steps use a command prompt to make sure you see error messages.
Option 1: Configure the Model Optimizer for all supported frameworks at the same time:
- Open a command prompt. To do so, type cmd in your Search Windows box and then press Enter. Type the commands in the opened window.
- Go to the Model Optimizer prerequisites directory:
cd C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer\install_prerequisites
- Run the following batch file to configure the Model Optimizer for Caffe*, TensorFlow*, MXNet*, Kaldi*, and ONNX*:
install_prerequisites.bat
Option 2: Configure the Model Optimizer for each framework separately:
- Go to the Model Optimizer prerequisites directory:
cd C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer\install_prerequisites
- Run the batch file for each framework you will use with the Model Optimizer. You can run more than one; see the example after this list:
For Caffe:
install_prerequisites_caffe.bat
For TensorFlow:
install_prerequisites_tf.bat
For MXNet:
install_prerequisites_mxnet.bat
For ONNX:
install_prerequisites_onnx.bat
For Kaldi:
install_prerequisites_kaldi.bat
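For example, to prepare both TensorFlow and ONNX support, run the two corresponding batch files one after the other from the same prerequisites directory:
install_prerequisites_tf.bat
install_prerequisites_onnx.bat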
The Model Optimizer is configured for one or more frameworks. Success is indicated by the batch file completing without errors in the command prompt window.
You are ready to use two short demos to see the results of running the Intel Distribution of OpenVINO toolkit and to verify your installation was successful. The demo scripts are required since they perform additional configuration steps. Continue to the next section.
If you want to use a GPU or VPU, or update your Windows* environment variables, read through the Optional Steps section.
STEP 4: Use Verification Scripts to Verify Your Installation
IMPORTANT: This section is required. In addition to confirming your installation was successful, demo scripts perform other steps, such as setting up your computer to use the Inference Engine samples.
NOTE: The paths in this section assume you used the default installation directory. If you used a directory other than C:\Program Files (x86)\IntelSWTools
, update the directory with the location where you installed the software.
To verify the installation and compile two samples, run the verification applications provided with the product on the CPU:
- Open a command prompt window.
- Go to the Inference Engine demo directory:
cd C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\demo\
- Run the verification scripts by following the instructions in the next section.
Run the Image Classification Verification Script
To run the script, start the demo_squeezenet_download_convert_run.bat file:
demo_squeezenet_download_convert_run.bat
This script downloads a SqueezeNet model and uses the Model Optimizer to convert the model to the .bin and .xml Intermediate Representation (IR) files. The Inference Engine requires this model conversion so it can use the IR as input and achieve optimum performance on Intel hardware. The verification script then builds the Image Classification Sample Async application and runs it with the car.png image in the demo directory. For a brief description of the Intermediate Representation, see Configuring the Model Optimizer.
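If you later want to target a device other than the CPU, both demo scripts are reported to accept a -d option, mirroring their Linux counterparts. Treat the flag as an assumption and fall back to running the script without arguments if it is not recognized:
rem Assumption: -d selects the inference device (CPU is the default)
demo_squeezenet_download_convert_run.bat -d CPU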
When the verification script completes, it prints the label and confidence for the top 10 categories.
This demo is complete. Leave the console open and continue to the next section to run the Inference Pipeline demo.
Run the Inference Pipeline Verification Script
To run the script, start the demo_security_barrier_camera.bat file while still in the console:
demo_security_barrier_camera.bat
This script downloads three pre-trained model IRs, builds the Security Barrier Camera Demo application, and runs it with the downloaded models and the car_1.bmp image from the demo directory to show an inference pipeline. The verification script uses vehicle recognition in which vehicle attributes build on each other to narrow in on a specific attribute.
First, an object is identified as a vehicle. This identification is used as input to the next model, which identifies specific vehicle attributes, including the license plate. Finally, the attributes identified as the license plate are used as input to the third model, which recognizes specific characters in the license plate.
When the demo completes, you have two windows open:
- A console window that displays information about the tasks performed by the demo
- An image viewer window that displays a resulting frame with detections rendered as bounding boxes
Close the image viewer window to end the demo.
To learn more about the verification scripts, see README.txt in C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\demo.
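If you prefer to read it without leaving the console, you can print it with the built-in type command:
type "C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\demo\README.txt"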
For a detailed description of the OpenVINO™ pre-trained object detection and object recognition models, see the Overview of OpenVINO™ toolkit Pre-Trained Models page.
In this section, you saw a preview of the Intel® Distribution of OpenVINO™ toolkit capabilities.
Congratulations. You have completed all the required installation, configuration, and build steps to work with your trained models using the CPU.
-X-
Thank you for reading this guide. If you face any problems, put them in the comments below and I will try to answer them.