Edge AI SDK/AI Framework/OpenVINO
Revision as of 09:41, 7 December 2023

OpenVINO

The OpenVINO™ toolkit is an open-source solution for optimizing and deploying AI inference in domains such as computer vision, automatic speech recognition, natural language processing, recommendation systems, and more. With its plug-in architecture, OpenVINO allows developers to write once and deploy anywhere. The OpenVINO 2023.0 release introduces a range of new features, improvements, and deprecations aimed at enhancing the developer experience.

 

  • Enables the use of models trained with popular frameworks such as TensorFlow and PyTorch.
  • Optimizes inference of deep learning models by applying compression techniques such as post-training quantization, with no retraining or fine-tuning required.
  • Supports heterogeneous execution across Intel hardware, using a common API for Intel CPUs, Intel Integrated Graphics, Intel Discrete Graphics, and other commonly used accelerators.
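The common API extends to OpenVINO's compound device plugins: a model can be compiled for a single device ("CPU", "GPU.0") or for a compound device string such as "AUTO:GPU,CPU" or "HETERO:GPU,CPU". A minimal sketch of how such strings compose (pure Python, no OpenVINO installation needed; the helper function itself is hypothetical, not part of the OpenVINO API):

```python
# Hypothetical helper that composes OpenVINO device strings.
# "AUTO", "HETERO", and "MULTI" are real OpenVINO compound plugins;
# the function is only an illustration of the naming convention.
def device_string(policy: str, devices: list[str]) -> str:
    """Build a device string such as "AUTO:GPU,CPU"."""
    if policy not in ("AUTO", "HETERO", "MULTI"):
        raise ValueError(f"unknown policy: {policy}")
    return f"{policy}:{','.join(devices)}" if devices else policy

print(device_string("AUTO", ["GPU", "CPU"]))      # AUTO:GPU,CPU
print(device_string("HETERO", ["GPU.1", "CPU"]))  # HETERO:GPU.1,CPU
```

The resulting string would be passed where a plain device name is accepted, e.g. as the device argument when compiling a model.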

 

OpenVINO Runtime SDK

2023.0

Overall updates

  • The proxy and hetero plugins have been migrated to API 2.0, providing enhanced compatibility and stability.
  • A symbolic shape inference preview is now available, improving performance for large language models (LLMs).
  • OpenVINO's graph representation has been upgraded to opset12, introducing a new set of operations that offer enhanced functionality and optimizations.

 


Applications

  • Object Detection
  • Person Detection
  • Face Detection
  • Pose Estimation

Benchmark

You can use the benchmark_app tool to test inference performance.

benchmark_app

The OpenVINO benchmark setup consists of a single system with OpenVINO™ and the benchmark application installed. It measures the time spent on actual inference (excluding any pre- or post-processing) and then reports the inferences per second (or frames per second, FPS).
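The FPS figure benchmark_app reports can be reproduced in principle by timing only the inference calls. A rough sketch (the `infer` callable stands in for a real OpenVINO inference request, which is not shown here):

```python
import time

def measure_fps(infer, n_iters: int = 100) -> float:
    # Time only the inference calls themselves, as benchmark_app does;
    # pre- and post-processing are deliberately excluded.
    start = time.perf_counter()
    for _ in range(n_iters):
        infer()
    elapsed = time.perf_counter() - start
    return n_iters / elapsed

# Example with a stand-in workload instead of a real inference request:
fps = measure_fps(lambda: sum(range(1000)), n_iters=50)
print(f"{fps:.1f} inferences/second")
```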

For more details, refer to the benchmark_app documentation: link

 

Examples

$ cd /opt/Advantech/EdgeAISuite/Intel_Standard/benchmark

<CPU>
$ ./benchmark_app -m ../model/mobilenet-ssd/FP16/mobilenet-ssd.xml -i car.png -t 8 -d CPU  

<iGPU>
$ ./benchmark_app -m ../model/mobilenet-ssd/FP16/mobilenet-ssd.xml -i car.png -t 8 -d GPU.0  

<dGPU>
$ ./benchmark_app -m ../model/mobilenet-ssd/FP16/mobilenet-ssd.xml -i car.png -t 8 -d GPU.1  
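The three invocations above differ only in the `-d` argument. A small sketch that generates all three command lines (the model path, image, and device names are taken from the examples above; adjust for your system):

```python
import shlex

# Paths copied from the examples above; adjust for your installation.
MODEL = "../model/mobilenet-ssd/FP16/mobilenet-ssd.xml"

def benchmark_cmd(device: str, image: str = "car.png",
                  seconds: int = 8) -> list[str]:
    """Argument list for one benchmark_app run on the given device."""
    return ["./benchmark_app", "-m", MODEL,
            "-i", image, "-t", str(seconds), "-d", device]

# CPU, integrated GPU, discrete GPU — as in the examples above.
for dev in ("CPU", "GPU.0", "GPU.1"):
    print(shlex.join(benchmark_cmd(dev)))
```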


Utility

XPU-SMI

Intel® XPU Manager ( official link ) is a free and open-source solution for local and remote monitoring and management of Intel® Data Center GPUs. It is designed to simplify administration, maximize reliability and uptime, and improve utilization.

The Intel XPU System Management Interface (xpu-smi) is a command-line utility for local XPU management.

Key features

Monitoring GPU utilization and health, collecting job-level statistics, running comprehensive diagnostics, controlling power, managing policies, updating firmware, and more.

Show basic GPU information; sample output below:

[Image: Xpu-smi.png — sample xpu-smi output]

For more information, refer to the xpu-smi documentation.