Edge AI SDK/Q&A

Edge AI

How to convert an AI model to run on an edge inference runtime?

  Converting an AI model to run on an edge inference runtime is a crucial step in deploying complex AI applications on resource-constrained edge devices. This answer covers three platforms: Intel OpenVINO, Nvidia, and Hailo.

  • Intel OpenVINO platform. OpenVINO is a deep learning inference optimizer and runtime library developed by Intel. It is designed for Intel hardware (including CPUs, GPUs, FPGAs, and VPUs) to achieve optimal deep learning performance on these devices. A key component of OpenVINO is the Model Optimizer, a powerful tool that can convert various deep learning models (such as TensorFlow, Caffe, and ONNX) into OpenVINO's IR (Intermediate Representation) format. This IR model can then run on any Intel hardware that supports OpenVINO; a minimal conversion sketch follows this list. For more information about OpenVINO, please refer to Intel's official website: https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit.html
  • Nvidia platform. Nvidia offers TensorRT, a high-performance library for optimizing and running deep learning models. TensorRT can convert various deep learning models (such as TensorFlow, Keras, and ONNX) into a format that can be efficiently run on Nvidia GPUs. This is particularly useful for applications that need to run AI models on edge devices (such as Nvidia's Jetson series). For more information about TensorRT, please refer to Nvidia's official website: https://developer.nvidia.com/tensorrt
  • Hailo platform. Hailo is a company specializing in edge AI processors, and their Hailo-8 deep learning processor is designed for edge devices to provide optimal performance and efficiency. Hailo offers an SDK that can convert TensorFlow and ONNX models into a format that can run on the Hailo-8 processor. This allows developers to easily deploy their AI models to Hailo's edge devices. For more information about Hailo, please refer to Hailo's official website: https://hailo.ai/
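
As a rough illustration of the OpenVINO conversion step described above, the Python sketch below converts an ONNX file to OpenVINO IR and compiles it for an Intel device. It is a minimal sketch assuming the openvino Python package (release 2023.1 or later, where convert_model and save_model are available); the file name model.onnx is a placeholder.

  import openvino as ov

  # Convert an ONNX model into OpenVINO's in-memory representation,
  # then save it as an IR pair (model.xml + model.bin) for deployment.
  ov_model = ov.convert_model("model.onnx")   # "model.onnx" is a placeholder path
  ov.save_model(ov_model, "model.xml")

  # The converted model can be compiled for any supported Intel device.
  core = ov.Core()
  compiled = core.compile_model(ov_model, "CPU")  # or "GPU", "AUTO", etc.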

In general, converting an AI model to run on an edge inference runtime involves a process of model optimization and conversion. Each platform provides its own tools and methods to ensure the model runs efficiently on its hardware.
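
For example, on the Nvidia platform the same conversion can be done with the TensorRT Python API. The sketch below is a minimal example written against TensorRT 8.x; it parses an ONNX file and serializes an engine to disk, and the file names are placeholders. The trtexec command-line tool performs the equivalent conversion.

  import tensorrt as trt

  logger = trt.Logger(trt.Logger.WARNING)
  builder = trt.Builder(logger)
  network = builder.create_network(
      1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
  parser = trt.OnnxParser(network, logger)

  # Parse the ONNX model ("model.onnx" is a placeholder path).
  with open("model.onnx", "rb") as f:
      if not parser.parse(f.read()):
          for i in range(parser.num_errors):
              print(parser.get_error(i))
          raise RuntimeError("Failed to parse the ONNX model")

  config = builder.create_builder_config()
  config.set_flag(trt.BuilderFlag.FP16)  # optional: reduced precision if the GPU supports it

  # Build and save a serialized engine that can be deployed on the target GPU.
  engine_bytes = builder.build_serialized_network(network, config)
  with open("model.engine", "wb") as f:
      f.write(engine_bytes)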


How to evaluate the AI performance of an Edge AI platform or AI accelerator?

When evaluating AI performance, several factors need to be considered. These include the inference speed of the model (i.e., how quickly the model makes predictions) and the performance of the hardware (including CPU and GPU usage, memory usage, and power consumption).

The Edge AI SDK [http://ess-wiki.advantech.com.tw/view/Edge_AI_SDK/User_Guide#Evaluate_the_AI_Performance benchmark] enables developers to comprehensively evaluate the performance of their AI applications on edge devices.

Through the Edge AI SDK, developers can easily establish an Edge AI environment, allowing them to quickly begin evaluating their AI applications on Advantech's Edge AI platforms with various AI accelerators.
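
Beyond the built-in benchmark, a simple latency and resource measurement can also be scripted directly. The Python sketch below is a minimal example, assuming an OpenVINO IR model with a static input shape and the psutil package; the same timing pattern applies to TensorRT or Hailo runtimes, and power consumption would need a platform-specific tool.

  import time
  import numpy as np
  import openvino as ov
  import psutil

  core = ov.Core()
  model = core.read_model("model.xml")          # placeholder IR path
  compiled = core.compile_model(model, "CPU")   # or "GPU", "AUTO", etc.

  # Build a dummy input matching the model's (static) input shape.
  shape = compiled.input(0).shape
  data = np.random.rand(*shape).astype(np.float32)

  # Warm-up runs so one-time initialization does not skew the numbers.
  for _ in range(10):
      compiled([data])

  psutil.cpu_percent()                          # reset the CPU-usage counter
  runs = 100
  start = time.perf_counter()
  for _ in range(runs):
      compiled([data])
  elapsed = time.perf_counter() - start

  print(f"Average latency: {elapsed / runs * 1000:.2f} ms")
  print(f"Throughput:      {runs / elapsed:.1f} inferences/s")
  print(f"CPU usage:       {psutil.cpu_percent():.1f} %")
  print(f"Memory (RSS):    {psutil.Process().memory_info().rss / 1e6:.1f} MB")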

AI Model 

Where to find AI models for my AI platform?

 An AI model zoo is a collection of pre-trained deep learning models that developers can use to accelerate the development of their AI applications. Below is the relevant information about the model zoos for the Intel OpenVINO, Nvidia, and Hailo platforms.

  • Intel's OpenVINO provides a pre-trained model set called the "Open Model Zoo." This model set contains a series of models that can be used for various different computer vision tasks, such as object detection, face recognition, human pose estimation, and so on. You can find these models on OpenVINO's official website: https://github.com/openvinotoolkit/open_model_zoo
  • Nvidia's AI Model Zoo. Nvidia provides a model library called "Nvidia NGC," which contains various pre-trained models optimized for Nvidia hardware. These models can be used for various AI tasks, such as image classification, object detection, speech recognition, and so on. You can find these models on the official Nvidia NGC website: https://ngc.nvidia.com/catalog/models
  • Hailo's AI Model Zoo. Hailo provides a model library that contains various pre-trained models specifically optimized for the Hailo-8 deep learning processor. These models can be used for various edge AI applications, such as object detection, face recognition, speech recognition, and so on. You can find these models on Hailo's official website: https://hailo.ai/products/hailo-software/hailo-ai-software-suite/#sw-modelzoo

In summary, no matter which platform you are developing AI applications on, you can find a suitable AI Model Zoo to accelerate your AI application development.
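
As a brief illustration of how a downloaded model is then used, the Python sketch below loads an OpenVINO IR model obtained from the Open Model Zoo and compiles it for inference. It is only a sketch: the model name is an example placeholder, and on the Nvidia or Hailo platforms the equivalent step would use their respective runtimes.

  import openvino as ov

  # "face-detection-0200" stands in for whichever Open Model Zoo model was downloaded
  # (an IR model consists of an .xml topology file plus a .bin weights file).
  core = ov.Core()
  model = core.read_model("face-detection-0200.xml")
  compiled = core.compile_model(model, "CPU")

  print("Inputs: ", [inp.any_name for inp in compiled.inputs])
  print("Outputs:", [out.any_name for out in compiled.outputs])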