NXP eIQ
NXP i.MX series
The i.MX 8M Plus family focuses on a neural processing unit (NPU) and vision system, advanced multimedia, and industrial automation with high reliability.
- The Neural Processing Unit (NPU) of the i.MX 8M Plus operates at up to 2.3 TOPS
NXP Demo Experience (Yocto 3.3 ~ )
- Preinstalled on NXP-provided demo Linux images
- The imx-image-full image must be used
- Supported from Yocto 3.3 (5.10.52_2.1.0) to Yocto 4.2 (6.1.1_1.0.0)
- An internet connection is required
Start the demo launcher by clicking the NXP logo displayed in the top left-hand corner of the screen.
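If the desktop icon is not available, the launcher can also be started from a terminal. The command below is an assumption based on the TUI command shown later on this page:
$ demoexperience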
Machine Learning Demos
- NNStreamer demos
  - Object classification
  - Object detection
  - Pose detection
  - Brand detection
  - ML gateway
- OpenCV demos
  - Face recognition
  - Selfie segmentation
NNStreamer Demo: Object Detection
Click the "Object Detection " and Launch Demo
Set some parameters:
- Source: Select the camera to use, or use the example video.
- Backend: Select whether to use the NPU (if available) or the CPU for inference.
- Height: Select the input height of the video if using a camera.
- Width: Select the input width of the video if using a camera.
- Label Color: Select the color of the overlay labels.
The result of NPU object detection:
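For reference, a similar NPU-versus-CPU comparison can also be run outside the demo launcher with the TensorFlow Lite benchmark_model tool included in the eIQ BSP. This is a minimal sketch: the examples directory, bundled model name, and delegate path are assumptions and may differ between BSP releases.
$ cd /usr/bin/tensorflow-lite-*/examples
$ ./benchmark_model --graph=mobilenet_v1_1.0_224_quant.tflite
$ ./benchmark_model --graph=mobilenet_v1_1.0_224_quant.tflite --external_delegate_path=/usr/lib/libvx_delegate.so
The first run executes on the CPU; the second routes inference to the NPU through the VX delegate.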
NXP Demo Experience - Text User Interface (TUI)
$ demoexperience tui
eIQ - A Python Framework for eIQ on i.MX Processors (Yocto 3.0)
PyeIQ is written on top of the eIQ™ ML Software Development Environment and provides a set of Python classes that allow the user to run Machine Learning applications in a simplified and efficient way, without spending time on cross-compilation, deployment, or reading extensive guides.
Installation
- Method 1: Use the pip3 tool to install the package from the PyPI repository:
$ pip3 install pyeiq
- Method 2: Download the latest tarball and copy it to the board:
$ pip3 install <tarball>
pyeiq tarball:
For the 5.4.70_2.3.0 BSP:
- Install the v3.0.0 version; it runs on the NPU
For the 5.10.72_2.2.0 ~ 6.1.22_2.0.0 BSPs (using the NXP Demo Experience is suggested instead):
- Install the v3.1.0 version, but it runs on the CPU
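To check which BSP release and which PyeIQ version are installed on the board, the following commands can be used (a minimal sketch; the exact version strings vary by image):
$ uname -r
$ cat /etc/os-release
$ pip3 show pyeiq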
Download PyeIQ Cache Data
- Download link: https://github.com/ADVANTECH-Corp/pyeiq-data
- Decompress the files to /home/root/.cache/
$ tar -zxvf eiq-cache-data_3.0.0.tar.gz
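If the tarball is not downloaded directly into /home/root/.cache/, the target directory can be created and used explicitly. This sketch assumes the archive unpacks into the layout PyeIQ expects under .cache:
$ mkdir -p /home/root/.cache/
$ tar -zxvf eiq-cache-data_3.0.0.tar.gz -C /home/root/.cache/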
How to Run Samples
- Start the manager tool:
$ pyeiq
- The above command returns the PyeIQ manager tool options:
Manager Tool Command | Description | Example
pyeiq --list-apps | List the available applications. |
pyeiq --list-demos | List the available demos. |
pyeiq --run <app_name/demo_name> | Run the application or demo. | # pyeiq --run object_detection_tflite
pyeiq --info <app_name/demo_name> | Show a short description and usage for the application or demo. | # pyeiq --info object_detection_tflite
pyeiq --clear-cache | Clear cached media generated by demos. |
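A typical first session with the manager tool might look like the following, using only the commands from the table above:
$ pyeiq --list-demos
$ pyeiq --info object_detection_tflite
$ pyeiq --run object_detection_tflite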
PyeIQ Demos
- covid19_detection
- object_classification_tflite
- object_detection_tflite
Demo Example - Running Object Detection
Object detection is a computer technology related to computer vision and image processing that deals with detecting instances of semantic objects of a certain class (such as humans, buildings, or cars) in digital images and videos. Well-researched domains of object detection include face detection and pedestrian detection. Object detection has applications in many areas of computer vision, including image retrieval and video surveillance.
- Run the Object Detection Default Image demo using the following line:
$ pyeiq --run object_detection_tflite
- This runs inference on a default image:
- Run the Object Detection Custom Image demo using the following line:
$ pyeiq --run object_detection_tflite --image=/path_to_the_image
- Run the Object Detection Video File demo using the following line:
$ pyeiq --run object_detection_tflite --video_src=/path_to_the_video
- Run the Object Detection Video Camera or Webcam demo using the following line:
$ pyeiq --run object_detection_tflite --video_src=/dev/video<index>
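To find which /dev/video<index> node corresponds to the connected camera, the v4l2-ctl utility can be used if it is included in the image (it is part of v4l-utils; its presence in a given image is an assumption):
$ v4l2-ctl --list-devices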