Arm Edge AI FAQ

NXP Series

  • eIQ

NXP eIQ is a machine learning software development environment for building and deploying ML applications on NXP microcontrollers and applications processors. It provides tools and libraries for developing and optimizing models, together with runtime inference engines for deploying those models on NXP devices. The platform supports popular machine learning frameworks, including TensorFlow, Caffe, and PyTorch, and integrates with NXP's hardware and software development tools. The goal of eIQ is to simplify machine learning development for IoT devices so that developers can bring intelligent, connected products to market faster.

  1. How to install eIQ?
  2. How to run samples? (see the sketch after this list)
  3. Find more from NXP eIQ portal
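
A minimal sketch of running a TensorFlow Lite sample with the tflite_runtime package provided in eIQ-enabled Yocto images. The model filename and the VX delegate library path are assumptions and vary between BSP releases; adjust them to match the samples shipped with your image.

import numpy as np
import tflite_runtime.interpreter as tflite

# Placeholder model name; use a sample model from your eIQ/Yocto image.
MODEL_PATH = "mobilenet_v1_1.0_224_quant.tflite"

# Try to load the VX delegate so inference runs on the i.MX NPU/GPU.
# The library path below is an assumption and differs between BSP releases.
try:
    delegate = tflite.load_delegate("/usr/lib/libvx_delegate.so")
    interpreter = tflite.Interpreter(model_path=MODEL_PATH,
                                     experimental_delegates=[delegate])
except (OSError, ValueError):
    # Fall back to CPU execution if the delegate is not present.
    interpreter = tflite.Interpreter(model_path=MODEL_PATH)

interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Run one inference on dummy input with the expected shape and dtype.
dummy = np.zeros(inp["shape"], dtype=inp["dtype"])
interpreter.set_tensor(inp["index"], dummy)
interpreter.invoke()
print("output shape:", interpreter.get_tensor(out["index"]).shape)

Running the same script with and without the delegate is a simple way to compare accelerated and CPU-only inference latency on the target board.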

NVIDIA Jetson Series

  • TensorRT

TensorRT is NVIDIA's software development kit for high-performance deep learning inference. On Jetson platforms it is installed as part of JetPack and is optimized for the GPU integrated in the Jetson system on a chip (SoC). TensorRT applies techniques such as layer fusion, precision calibration (for example FP16 and INT8), and dynamic tensor memory management to speed up inference and reduce latency.

  1. How to check Jetpack version?
  2. How to install Jetpack?
  3. How to run samples? (see the sketch after this list)
  4. Find more from NVIDIA TensorRT portal
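
A minimal sketch of building a TensorRT engine from an ONNX model with the Python bindings shipped in JetPack, assuming TensorRT 8.4 or later (JetPack 5.x); the ONNX and engine filenames are placeholders. Printing trt.__version__ is a quick way to confirm which TensorRT release your JetPack installed.

import tensorrt as trt

print("TensorRT version:", trt.__version__)

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_engine(onnx_path):
    """Parse an ONNX model and build a serialized TensorRT engine."""
    builder = trt.Builder(TRT_LOGGER)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, TRT_LOGGER)

    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            raise RuntimeError("ONNX parse failed")

    config = builder.create_builder_config()
    # 256 MiB workspace; raise this for larger models.
    config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, 1 << 28)
    if builder.platform_has_fast_fp16:
        config.set_flag(trt.BuilderFlag.FP16)  # use FP16 where the GPU supports it

    return builder.build_serialized_network(network, config)

if __name__ == "__main__":
    engine_bytes = build_engine("model.onnx")  # placeholder path
    with open("model.engine", "wb") as f:
        f.write(engine_bytes)

For a quick end-to-end check without writing any code, the prebuilt trtexec tool under /usr/src/tensorrt/bin can load the same ONNX file and report throughput and latency.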

AI Accelerator module

  • Hailo AI (Under construction)

The Hailo AI accelerator module is a small, high-performance add-in module designed to accelerate deep learning workloads in edge devices such as cameras, drones, and smart home products. It is built around Hailo's proprietary neural processing architecture, which balances performance and power consumption so that complex AI models can run on the device itself, without cloud connectivity or a powerful host processor. The module is designed to integrate into existing edge devices while preserving their form factor and power budget.

  1. How to integrate the Hailo AI module?
  2. How to run samples? (see the sketch below)
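
Once the HailoRT driver and Python package are installed on the host, inference follows a configure-then-stream pattern. The sketch below is modeled on the HailoRT Python API examples; the class names assume a HailoRT 4.x release with a PCIe-attached module, and the .hef filename is a placeholder, so verify the exact calls against the examples shipped with your HailoRT version.

import numpy as np
from hailo_platform import (HEF, VDevice, ConfigureParams, FormatType,
                            HailoStreamInterface, InferVStreams,
                            InputVStreamParams, OutputVStreamParams)

# Placeholder compiled model; .hef files are produced by the Hailo Dataflow Compiler.
hef = HEF("resnet_v1_18.hef")

with VDevice() as target:
    # Configure the network group on the device (PCIe-attached module assumed).
    params = ConfigureParams.create_from_hef(hef, interface=HailoStreamInterface.PCIe)
    network_group = target.configure(hef, params)[0]

    input_info = hef.get_input_vstream_infos()[0]
    output_info = hef.get_output_vstream_infos()[0]

    in_params = InputVStreamParams.make(network_group, format_type=FormatType.FLOAT32)
    out_params = OutputVStreamParams.make(network_group, format_type=FormatType.FLOAT32)

    # One dummy frame with the input shape reported by the HEF.
    frame = np.zeros((1, *input_info.shape), dtype=np.float32)

    with InferVStreams(network_group, in_params, out_params) as pipeline:
        with network_group.activate(network_group.create_params()):
            results = pipeline.infer({input_info.name: frame})
            print(output_info.name, results[output_info.name].shape)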