NXP eIQ
=== <span style="color:#0070c0">NXP i.MX series</span> ===
 
 
The i.MX 8M Plus family focuses on a neural processing unit (NPU) and vision system, advanced multimedia, and industrial automation with high reliability.
 
*'''The Neural Processing Unit (NPU) of the i.MX 8M Plus operates at up to <span style="color:#FF0000;">2.3 TOPS</span>'''
  
==== <span style="color:#0070c0">NXP Demo Experience (Yocto 3.3 ~ latest)</span> ====
  
 
*Preinstalled on NXP-provided demo Linux images
*The imx-image-full image must be used
*Yocto 3.3 (5.10.52_2.1.0) ~ Yocto 4.2 (6.1.1_1.0.0)
*An internet connection is required

Start the demo launcher by clicking the NXP logo displayed in the top left-hand corner of the screen.

[[File:2023-09-27 155355.png|400px|2023-09-27 155355.png]]
 
[[File:2023-09-27 155552.png|400px|2023-09-27 155552.png]]
 
  
===== <span style="color:#0070c0">Machine Learning Demos</span> =====
  
 
*NNStreamer demos
**Object classification
**Object detection
**Pose detection
**Brand detection
**ML gateway
  
&nbsp;[[File:2023-09-27 155836.png|400px|2023-09-27 155836.png]] &nbsp;
  
 
*OpenCV demos
**Face recognition
**Selfie segmentation
  
&nbsp;[[File:2023-09-27 162615.png|400px|2023-09-27 162615.png]]

===== <span style="color:#0070c0">NNStreamer Demo: Object Detection</span> =====

Click "Object Detection" and then "Launch Demo":

[[File:111111111333.png|400px|111111111333.png]]

Set the parameters:

[[File:22222222222.png|400px|22222222222.png]]

*Source: Select the camera to use, or use the example video.
*Backend: Select whether to use the NPU (if available) or the CPU for inference.
*Height: Select the input height of the video if using a camera.
*Width: Select the input width of the video if using a camera.
*Label Color: Select the color of the overlay labels.

The result of NPU object detection:

[[File:3333333333.png|400px|3333333333.png]]

===== <span style="color:#0070c0">NXP Demo Experience - Text User Interface (TUI)</span> =====

Demos can also be launched from the command line, either by logging in to the board remotely or by using the onboard serial debug console; see the example after this section. Keep in mind that most demos still require a display to run successfully.

To start the text user interface, type the following command into the command line:

 $ demoexperience tui

[[File:2023-09-27 170604.png|400px|2023-09-27 170604.png]]

The interface can be navigated using the following keyboard inputs:

*'''Up and down arrow keys:''' Select a demo from the list on the left
*'''Enter key:''' Runs the selected demo
*'''Q key or Ctrl+C keys:''' Quit the interface
*'''H key:''' Opens the help menu

Demos can be closed from the demo window onscreen or by pressing Ctrl+C.
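For example, if the image ships an SSH server, the TUI can be started from a remote session (the board address below is a placeholder):

<pre>$ ssh root@<board_ip>
$ demoexperience tui</pre>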
  
 
==== <span style="color:#0070c0">PyeIQ - A Python Framework for eIQ on i.MX Processors (Yocto 3.0)</span> ====
  
 
[https://source.codeaurora.org/external/imxsupport/pyeiq/ PyeIQ] is written on top of the [https://www.nxp.com/design/software/development-software/eiq-ml-development-environment:EIQ eIQ™ ML Software Development Environment] and provides a set of Python classes that let the user run machine-learning applications in a simplified and efficient way, without spending time on cross-compilation, deployment, or reading extensive guides.
 
===== <span style="color:#0070c0">Installation</span> =====
 
  
*Method 1: Use the pip3 tool to install the package from the&nbsp;[https://pypi.org/project/pyeiq/#description PyPI]&nbsp;repository:
  
  $ pip3 install pyeiq
 
 
  
 
*Method 2: Get the latest tarball ([https://pypi.org/project/pyeiq/#files Download files]) and copy it to the board:
 
  $ pip3 install <tarball>
 
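For example, assuming the v3.0.0 tarball listed below has been downloaded to a host PC (the board address is a placeholder):

<pre>$ scp pyeiq-3.0.0.tar.gz root@<board_ip>:/home/root/
$ pip3 install /home/root/pyeiq-3.0.0.tar.gz</pre>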
  
pyeiq tarball:
*[[:File:pyeiq-3.0.0.tar.gz]]
*[[:File:pyeiq-3.1.0.tar.gz]]

For the 5.4.70_2.3.0 BSP:

*Install the v3.0.0 version; it runs on the NPU.

For the 5.10.72_2.2.0 ~ 6.1.22_2.0.0 BSPs (the NXP Demo Experience is recommended on these releases):

*Install the v3.1.0 version, but note that it runs on the CPU; a quick cross-check is sketched below.
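To confirm which compute unit actually handles inference, the TensorFlow Lite benchmark tool shipped with the i.MX BSPs (see the i.MX Machine Learning User's Guide in the References) can time a model on the CPU and then through the VX delegate. Paths and file names vary by BSP release, so treat this as a sketch:

<pre># CPU run first, then the same model through the NPU/GPU (VX) delegate:
$ cd /usr/bin/tensorflow-lite-*/examples
$ ./benchmark_model --graph=mobilenet_v1_1.0_224_quant.tflite
$ ./benchmark_model --graph=mobilenet_v1_1.0_224_quant.tflite \
    --external_delegate_path=/usr/lib/libvx_delegate.so</pre>

A sharp drop in the reported average inference time on the delegate run indicates that the NPU is doing the work.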
  
Download the PyeIQ cache data:

*Download link: [https://github.com/ADVANTECH-Corp/pyeiq-data https://github.com/ADVANTECH-Corp/pyeiq-data]
*Decompress the files to /home/root/.cache/:

 $ tar -zxvf eiq-cache-data_3.0.0.tar.gz

[[File:2023-09-28 092741.png|400px|2023-09-28 092741.png]]
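Putting the steps together, a minimal sketch of one way to fetch and unpack the cache data (this assumes the tarball name above is present in the repository and that git is available; any file-transfer method works):

<pre>$ git clone https://github.com/ADVANTECH-Corp/pyeiq-data
$ mkdir -p /home/root/.cache/
$ tar -zxvf pyeiq-data/eiq-cache-data_3.0.0.tar.gz -C /home/root/.cache/</pre>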
  
 
===== <span style="color:#0070c0">How to Run Samples</span> =====
 
  
 
*Start the manager tool:
 
 
<pre>$ pyeiq</pre>
 
  
 
*The above command returns the PyeIQ manager tool options:  
 
{| border="1" cellpadding="1" cellspacing="1" style="width:1000px;"
|-
| '''Manager Tool Command'''
| '''Description'''
| '''Example'''
|-
| pyeiq --list-apps
| List the available applications.
| 
|-
| pyeiq --list-demos
| List the available demos.
| 
|-
| pyeiq --run <app_name/demo_name>
| Run the application or demo.
| # pyeiq --run object_detection_tflite
|-
| pyeiq --info <app_name/demo_name>
| Application or demo short description and usage.
| # pyeiq --info object_detection_tflite
|-
| pyeiq --clear-cache
| Clear cached media generated by demos.
| 
|}
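For example, a typical first session lists the available demos, prints a short description of one, and then runs it:

<pre>$ pyeiq --list-demos
$ pyeiq --info object_detection_tflite
$ pyeiq --run object_detection_tflite</pre>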
  
===== <span style="color:#0070c0">PyeIQ Demos</span> =====
  
*covid19_detection
*object_classification_tflite
*object_detection_tflite

<span style="color:#0070c0">Demos Example - Running Object Detection</span>
  
Object detection is a computer technology related to computer vision and image processing that deals with detecting instances of semantic objects of a certain class (such as humans, buildings, or cars) in digital images and videos. Well-researched domains of object detection include face detection and pedestrian detection. Object detection has applications in many areas of computer vision, including image retrieval and video surveillance.

*Run the ''Object Detection'' '''Default Image''' demo using the following line:

<pre>$ pyeiq --run object_detection_tflite</pre>

This runs inference on a default image:

[[File:2023-09-27 172321.png|400px|2023-09-27 172321.png]]
  
*Run the ''Object Detection'' '''Custom Image''' demo using the following line:

<pre>$ pyeiq --run object_detection_tflite --image=/path_to_the_image</pre>

*Run the ''Object Detection'' '''Video File''' demo using the following line:

<pre>$ pyeiq --run object_detection_tflite --video_src=/path_to_the_video</pre>

*Run the ''Object Detection'' '''Video Camera or Webcam''' demo using the following line:

<pre>$ pyeiq --run object_detection_tflite --video_src=/dev/video<index></pre>
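To find the camera index for the last variant, list the video device nodes on the board and pass the matching one (/dev/video0 below is only an example; the available nodes vary by setup):

<pre>$ ls /dev/video*
$ pyeiq --run object_detection_tflite --video_src=/dev/video0</pre>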
  
==== <font color="#0070c0">eIQ Toolkit</font> ====

The eIQ Toolkit is a machine-learning software development environment that enables the use of ML algorithms on NXP microcontrollers, microprocessors, and SoCs. It is aimed at those interested in building machine-learning solutions on embedded devices. A background in machine learning, especially in supervised classification, is helpful for understanding the entire pipeline; however, the toolkit is also designed to assist users without such a background. The eIQ Toolkit consists of the following three key components:

*eIQ Portal
*eIQ Model Tool
*eIQ Command-line Tools

[[File:2023-09-27 174948.png|400px|2023-09-27 174948.png]]

Download link:&nbsp;[https://www.nxp.com/design/software/eiq-ml-development-environment/eiq-toolkit-for-end-to-end-model-development-and-deployment:EIQ-TOOLKIT https://www.nxp.com/design/software/eiq-ml-development-environment/eiq-toolkit-for-end-to-end-model-development-and-deployment:EIQ-TOOLKIT]

There are two approaches available with the eIQ Toolkit, based on what the user provides and what the expectations are.
  
*'''Bring Your Own Data (BYOD)''' – users bring image data, use the eIQ Toolkit to develop their own model, and deploy it on the target.
*'''Bring Your Own Model (BYOM)''' – users bring a pretrained model and use the eIQ Toolkit for optimization, deployment, or profiling.

&nbsp; [[File:2023-09-27 175257.png|400px|2023-09-27 175257.png]]
  
 
&nbsp;
 
=== <span style="color:#0070c0">References</span> ===
'''i.MX Machine Learning User's Guide:''' [https://www.nxp.com/docs/en/user-guide/IMX-MACHINE-LEARNING-UG.pdf https://www.nxp.com/docs/en/user-guide/IMX-MACHINE-LEARNING-UG.pdf]
  
'''NXP Demo Experience User's Guide:&nbsp;''' [https://www.nxp.com/docs/en/user-guide/DEXPUG.pdf https://www.nxp.com/docs/en/user-guide/DEXPUG.pdf]
  
'''PyeIQ 3.x Release User Guide:'''&nbsp;[https://community.nxp.com/t5/Blogs/PyeIQ-3-x-Release-User-Guide/ba-p/1305998 https://community.nxp.com/t5/Blogs/PyeIQ-3-x-Release-User-Guide/ba-p/1305998]
  
'''eIQ Toolkit User Guide:''' [https://www.nxp.com/docs/en/user-guide/EIQTKUG-1.8.0.pdf https://www.nxp.com/docs/en/user-guide/EIQTKUG-1.8.0.pdf]
  
 
'''TP-EVB_eIQ_presentation.pdf:'''&nbsp;[https://www.nxp.com/docs/en/training-reference-material/TP-EVB_eIQ_presentation.pdf https://www.nxp.com/docs/en/training-reference-material/TP-EVB_eIQ_presentation.pdf]