AIM-Linux/ArmEdgeAI FAQ
 
== '''NXP Series''' ==
 
* eIQ
 
NXP eIQ is a software development environment for building and deploying machine learning applications on NXP microcontrollers and applications processors. It provides tools and libraries for developing and optimizing models, along with runtime inference engines for running them on NXP devices, and it supports popular machine learning frameworks, including TensorFlow, Caffe, and PyTorch. eIQ integrates with NXP's hardware and software development tools, with the goal of letting developers bring intelligent, connected IoT products to market faster and with less effort.
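
As a quick, hedged illustration of the TensorFlow Lite path, the sketch below loads a model with the <code>tflite_runtime</code> package bundled in eIQ Linux images and offloads it to the NPU. The model file name is a placeholder, and the delegate path <code>/usr/lib/libvx_delegate.so</code> is the typical location on i.MX 8M Plus images; check your BSP.

<syntaxhighlight lang="python">
# Minimal TFLite inference sketch for an i.MX board with eIQ.
# Assumptions: tflite_runtime is installed (eIQ Yocto images ship it),
# model.tflite is a placeholder, and the VX delegate path matches your BSP.
import numpy as np
import tflite_runtime.interpreter as tflite

# Offload supported ops to the NPU via the VX delegate; drop the
# experimental_delegates argument to run on the CPU instead.
delegate = tflite.load_delegate('/usr/lib/libvx_delegate.so')
interpreter = tflite.Interpreter(model_path='model.tflite',
                                 experimental_delegates=[delegate])
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed a dummy input with the model's expected shape and dtype.
shape = input_details[0]['shape']
dtype = input_details[0]['dtype']
interpreter.set_tensor(input_details[0]['index'],
                       np.zeros(shape, dtype=dtype))
interpreter.invoke()
print(interpreter.get_tensor(output_details[0]['index']))
</syntaxhighlight>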
 
#[[AIMLinux/AddOn/Edge_AI#Installation_2|How to install eIQ?]]
#[[AIMLinux/AddOn/Edge_AI#How_to_Run_Samples|How to run samples?]]
#Find more from the NXP eIQ portal
 
== '''NVIDIA Jetson Series''' ==
 
* TensorRT
 
NVIDIA TensorRT is an SDK for high-performance deep learning inference on NVIDIA GPUs; on Jetson platforms it ships as part of JetPack and is used to deploy deep learning models on the Jetson system on a chip (SoC). TensorRT optimizes trained networks using techniques such as layer fusion, precision calibration (for example FP16 and INT8), and dynamic tensor memory management to speed up inference and reduce latency.
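
As a hedged sketch of the usual workflow, the example below parses an ONNX model and builds an FP16 engine with the TensorRT Python API, in the TensorRT 8.x style that ships with recent JetPack releases. <code>model.onnx</code> is a placeholder, and a model with static input shapes is assumed; dynamic shapes additionally require an optimization profile.

<syntaxhighlight lang="python">
# Build a serialized TensorRT engine from an ONNX model.
# Assumptions: TensorRT 8.x Python bindings (as in recent JetPack);
# model.onnx is a placeholder with static input shapes.
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open('model.onnx', 'rb') as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise SystemExit('ONNX parse failed')

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)  # allow reduced precision for speed

# Serialize the optimized engine so it can be reloaded at inference time.
engine_bytes = builder.build_serialized_network(network, config)
with open('model.engine', 'wb') as f:
    f.write(engine_bytes)
</syntaxhighlight>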
 
#[[NVidia_check_soc_jetpack|How to check Jetpack version?]]
#[https://docs.nvidia.com/jetson/jetpack/install-jetpack/index.html#how-to-install-jetpack How to install Jetpack?]
#How to run samples?
#Find more from the NVIDIA TensorRT portal
 
== '''AI Accelerator module''' ==
 
* Hailo AI (Under construction)
 
The Hailo AI accelerator module is a small, high-performance chip designed to accelerate deep learning workloads in edge devices such as cameras, drones, and smart home products. It is built on a proprietary neural processing architecture that balances performance against power consumption, so complex AI algorithms can run quickly and efficiently on the device itself, without cloud connectivity or significant extra compute resources. The module is designed to integrate seamlessly into existing edge devices, adding AI capability while preserving their form factor and power budget.

#[[How to install eIQ?]]
#[[How to run samples?]]
