{{DISPLAYTITLE:AIM-Linux EdgeAI}}

= Applications =

== <span style="color:#0070c0">Face Recognition</span> ==

=== <span style="color:#0070c0">FaceView</span> ===

Powered by CyberLink's FaceMe®, an industry-leading facial recognition engine, Advantech's FaceView application provides precise and scalable real-time facial recognition for various AIoT applications in the retail, hospitality, and public safety fields.

==== <span style="color:#0070c0">Supported Platforms</span> ====

*nVidia TX2: ''EPC-R7000''
 
==== <span style="color:#0070c0">Installation</span> ====

0. Before starting with FaceView, please make sure the SDK components are installed correctly on your TX2 device.

1. Get the program file by contacting Advantech, e.g. FaceView_1.0.2.0522_aarch64.run

2. Put the self-extracting file onto your device and execute it. Note: make sure the network is connected!

<pre>
$ chmod +x FaceView_1.0.2.0522_aarch64.run
$ ./FaceView_1.0.2.0522_aarch64.run

Verifying archive integrity...  100%   All good.
Uncompressing FaceView  100%
installing FaceView application
Press the ENTER key to continue.
************************************************************************
 Installing Package...
************************************************************************

...

Preparing to unpack FaceView_1.0.2.0522_aarch64.deb ...
Unpacking faceview (1.0.2.0522) ...
Setting up faceview (1.0.2.0522) ...
************************************************************************
 Done!
************************************************************************
</pre>

3. Once installation is done, you can find the FaceView application in the /usr/local/FaceView/ folder.

4. If you want to uninstall the FaceView application, use the command below. Note that the package name is lowercase, as shown in the installer output above.

 $ sudo dpkg --remove faceview
 
==== <span style="color:#0070c0">Setup Camera</span> ====

We support two kinds of camera on the EPC-R7000: a USB webcam or an IP camera.

'''[USB Webcam]'''

Setting up a webcam is as easy as connecting it to the TX2 device over USB.

The two camera products below are verified, but it's fine to use other products which support '''''1920x1080''''' or '''''1280x720''''' resolution.

*Microsoft LifeCam HD-3000
*Logitech BRIO V-U0040

'''[IP Camera]'''

The EPC-R7000 supports PoE, so you can run FaceView with an IP camera for more flexible camera configurations.

The IP camera below is verified. We will take it as an example for setup.

*Vivotek_IB9360H
 
To configure an IP camera for FaceView, we have to route the RTSP stream to a video device.

1. Connect the IP camera to the TX2 device via an Ethernet cable.

2. Assign a corresponding IP address to the PoE port, e.g. 169.254.111.1/16.

3. Install the v4l2loopback utility & drivers.

 $ sudo apt-get install v4l2loopback-utils

4. Route the RTSP stream to the video device.

 $ sudo modprobe v4l2loopback
 $ export RTSP_PATH="rtsp://viewer:inventec2017@169.254.6.42:5554/live.sdp"
 $ gst-launch-1.0 rtspsrc location="$RTSP_PATH" latency=300 ! rtph264depay ! h264parse ! omxh264dec ! videoconvert ! tee ! v4l2sink device=/dev/video0
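Before launching FaceView, you can confirm that the loopback device is actually receiving frames. This is a minimal sketch assuming OpenCV's Python bindings (cv2) are available on the device; the device path matches the v4l2sink above.

<pre>
# Minimal sketch: verify that the RTSP stream routed through v4l2loopback
# delivers frames on /dev/video0 (the device used by v4l2sink above).
# Assumes the OpenCV Python bindings (cv2) are installed on the device.
import cv2

cap = cv2.VideoCapture("/dev/video0", cv2.CAP_V4L2)
if not cap.isOpened():
    raise SystemExit("Cannot open /dev/video0 - is the GStreamer pipeline running?")

ok, frame = cap.read()  # grab a single frame
if ok:
    print("Got frame:", frame.shape)  # e.g. (1080, 1920, 3)
else:
    print("Device opened but no frame received")
cap.release()
</pre>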
 
==== <span style="color:#0070c0">Run Application</span> ====

To execute the FaceView application, navigate to the /usr/local/FaceView/ folder and double-click the FaceView icon, or run it from the command line.

 $ cd /usr/local/FaceView/
 $ ./FaceView

You will then be asked to input a license key. Note: please make sure your network connection is OK so that the license can be activated successfully.

''(Screenshot: Input License)''

If the key is valid, you will see the camera preview screen from the application.

''(Screenshot: FaceView screen)''

For details of operation, please refer to the FaceView user guide.

==== <span style="color:#0070c0">Set Up Build Environment</span> ====

To develop a Qt application with the FaceMe SDK, you have to install the packages listed below.

1. Install the FaceMe SDK, e.g. FaceMe_SDK_Ubuntu18_ARM64_3.18.0.run

<pre>
$ chmod +x FaceMe_SDK_Ubuntu18_ARM64_3.18.0.run
$ ./FaceMe_SDK_Ubuntu18_ARM64_3.18.0.run
...
What path do you want to install? (Press ENTER to use default)[/home/advrisc]
Which detection models do you want to install? (Press ENTER to use default, ML DNN DNN-X)[DNN DNN-X]
Which extraction models do you want to install? (Press ENTER to use default, H1 H2 H3 VH UH UH3)[H1 H3 VH UH3]
Do you want to install Demo System? (Press ENTER to use default, yes(y)/no(n))[yes]
Do you want to install Sample code? (Press ENTER to use default, yes(y)/no(n))[yes]
Install SDK Path: /home/advrisc/FaceMeSDK
Install Detection Model: DNN DNN-X
Install Extraction Model: H1 H3 VH UH3
Install GPU support: no
Install DemoSystem: yes
Install Sample Code: yes
Are you sure? (Press ENTER to use default, yes(y)/no(n))[yes]
</pre>

2. Install the pre-built Qt 5.14.2 binaries and add them to your PATH.

<pre>
$ tar zxvf Qt-5.14.2-Ubuntu18.04-ARM64.tar.gz
$ cd Qt-5.14.2-U18.04-ARM64
$ sudo cp -a * /

$ sudo vim ~/.profile
PATH="/usr/local/Qt-5.14.2/bin:$PATH"
$ source ~/.profile
</pre>

3. Install Qt Creator.

 $ sudo apt-get install qtcreator

4. Install the pre-built OpenCV 3.4.2 with CUDA & JPEG 1.5.3 support.

<pre>
$ tar zxvf OpenCV-3.4.2-aarch64-U18.04-JPEG1.5.3.tar.gz
$ cd OpenCV-3.4.2-aarch64
$ sudo cp -a * /usr/local/
</pre>
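Once the packages are in place, you can sanity-check the OpenCV installation from Python. A minimal sketch; the exact CUDA lines reported depend on how the pre-built package was configured.

<pre>
# Minimal sketch: confirm the pre-built OpenCV is on the path and see
# whether it was built with CUDA support.
import cv2

print(cv2.__version__)  # expect 3.4.2 for the package above
cuda_lines = [line for line in cv2.getBuildInformation().splitlines()
              if "CUDA" in line]
print("\n".join(cuda_lines))  # build flags reported by OpenCV
</pre>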
  
 
== <span style="color:#0070c0">Traffic Analysis</span> ==

=== <span style="color:#0070c0">IVS</span> ===

The IVS service is a web application that provides intelligent video analysis for traffic scenarios such as vehicle detection, license plate recognition, car parking, and traffic counting.

==== <span style="color:#0070c0">Supported Platforms</span> ====

*nVidia TX2: ''EPC-R7000''

==== <span style="color:#0070c0">Run Application</span> ====

You can connect the host PC to the EPC-R7000 with a micro-USB cable (RNDIS) or Ethernet. Then, open a '''browser''' with the specific URL to access the IVS service.

*RNDIS: 192.168.55.1:5000/admin
*Ethernet: &lt;your device IP address&gt;:5000/admin

After logging in with the correct account/password, you can configure the monitored regions according to your requirements.

[[File:Ivs-monitor.png|800px|Ivs-monitor.png]]

For more details, please refer to the IVS user guide document.

'''[Traffic]'''

[[File:Ivs-traffic.png|800px|Ivs-traffic.png]]

'''[License Recognition]'''

[[File:Ivs-lp.jpg|800px|Ivs-lp.jpg]]

'''[People Counting]'''

[[File:Ivs-people.jpg|800px|Ivs-people.jpg]]
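Before opening a browser, you can optionally verify from the host PC that the IVS service is listening. A minimal sketch using only the RNDIS address documented above; swap in your device IP for an Ethernet connection.

<pre>
# Minimal sketch: probe the IVS admin page from the host PC.
# The RNDIS address is documented above; use your device IP for Ethernet.
import urllib.request

url = "http://192.168.55.1:5000/admin"
try:
    with urllib.request.urlopen(url, timeout=5) as resp:
        print("IVS reachable, HTTP status:", resp.status)
except OSError as exc:
    print("IVS not reachable:", exc)
</pre>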
  
 
= Inference Engines =

== <span style="color:#0070c0">Platform</span> ==

=== <span style="color:#0070c0">nVidia TX2</span> ===

=== <span style="color:#0070c0">NXP i.MX series</span> ===
The i.MX 8M Plus family focuses on the neural processing unit (NPU) and vision system, advanced multimedia, and industrial automation with high reliability.

[http://ess-wiki.advantech.com.tw/view/NXP_eIQ http://ess-wiki.advantech.com.tw/view/NXP_eIQ]
 
*'''The Neural Processing Unit (NPU) operates at up to <span style="color:#FF0000;">2.3 TOPS</span>'''
**Keyword detection, noise reduction, beamforming
**Speech recognition (e.g. Deep Speech 2)
**Image recognition (e.g. ResNet-50)
 
 
 
==== <span style="color:#0070c0">eIQ - A Python Framework for eIQ on i.MX Processors</span> ====

[https://source.codeaurora.org/external/imxsupport/pyeiq/ PyeIQ] is written on top of the [https://www.nxp.com/design/software/development-software/eiq-ml-development-environment:EIQ eIQ™ ML Software Development Environment] and provides a set of Python classes allowing the user to run Machine Learning applications in a simplified and efficient way, without spending time on cross-compilations, deployments, or reading extensive guides.
 
 
 
===== <span style="color:#0070c0">Installation</span> =====

*Method 1: Use the pip3 tool to install the package from the [https://pypi.org/project/eiq/#description PyPI] repository:

<pre>
$ pip3 install eiq
Collecting eiq
  Downloading https://files.pythonhosted.org/packages/10/54/7a78fca1ce02586a91c88ced1c70acb16ca095779e5c6c8bdd379cd43077/eiq-2.1.0.tar.gz
Installing collected packages: eiq
  Running setup.py install for eiq ... done
Successfully installed eiq-2.1.0
</pre>
 
 
 
*Method 2: Get the latest tarball from [https://pypi.org/project/eiq/#files Download files] and copy it to the board:

 $ pip3 install <tarball>

Other eiq versions:

[http://ess-wiki.advantech.com.tw/wiki/images/5/55/Eiq-1.0.0.tar.gz eiq-1.0.0.tar.gz]

[http://ess-wiki.advantech.com.tw/wiki/images/7/7f/Eiq-2.0.0.tar.gz eiq-2.0.0.tar.gz]

[http://ess-wiki.advantech.com.tw/wiki/images/6/68/Eiq-2.1.0.tar.gz eiq-2.1.0.tar.gz]

[http://ess-wiki.advantech.com.tw/wiki/images/c/c4/Eiq-2.2.0.tar.gz eiq-2.2.0.tar.gz]
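Whichever method you use, you can confirm the package installed correctly. A minimal sketch; importlib.metadata assumes Python 3.8 or newer on the board.

<pre>
# Minimal sketch: confirm the eiq package is installed and report its version.
# importlib.metadata requires Python 3.8+.
import importlib.metadata

print("eiq version:", importlib.metadata.version("eiq"))
</pre>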
 
 
 
===== <span style="color:#0070c0">How to Run Samples</span> =====

*Start the manager tool:

 $ pyeiq

*The above command returns the PyeIQ manager tool options:
 
 
 
{| border="1" cellspacing="1" cellpadding="1" style="width:500px;"
|-
| '''Manager Tool Command'''
| '''Description'''
| '''Example'''
|-
| pyeiq --list-apps
| List the available applications.
| # pyeiq --list-apps
|-
| pyeiq --list-demos
| List the available demos.
| # pyeiq --list-demos
|-
| pyeiq --run &lt;app_name/demo_name&gt;
| Run the application or demo.
| # pyeiq --run object_detection_tflite
|-
| pyeiq --info &lt;app_name/demo_name&gt;
| Show a short description and usage for the application or demo.
| # pyeiq --info object_detection_dnn
|-
| pyeiq --clear-cache
| Clear cached media generated by demos.
| # pyeiq --clear-cache
|}
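The manager tool can also be scripted. The sketch below drives the documented pyeiq CLI from Python via subprocess; only the flags listed in the table above are used, and the output handling is illustrative.

<pre>
# Minimal sketch: drive the pyeiq manager tool from Python.
# Only the CLI flags documented in the table above are used.
import subprocess

def pyeiq(*args):
    """Run the pyeiq CLI and return its stdout as text."""
    result = subprocess.run(["pyeiq", *args],
                            capture_output=True, text=True, check=True)
    return result.stdout

print(pyeiq("--list-demos"))                       # enumerate available demos
print(pyeiq("--info", "object_detection_tflite"))  # usage for one demo
</pre>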
 
 
 
*Common Demo Parameters
 
 
 
{| border="1" cellspacing="1" cellpadding="1" style="width:1000px;"
|-
| style="width: 96px;" | '''Argument'''
| style="width: 421px;" | '''Description'''
| style="width: 465px;" | '''Example of usage'''
|-
| style="width: 96px;" | --download / -d
| style="width: 421px;" | Choose the server from which the models are downloaded. Options: drive, github, wget. If none is specified, the application automatically searches for the best server.
| style="width: 465px;" |
/opt/eiq/demos# eiq_demo.py --download drive

/opt/eiq/demos# eiq_demo.py -d github
|-
| style="width: 96px;" | --help / -h
| style="width: 421px;" | Show all available arguments for a certain demo and a brief explanation of their usage.
| style="width: 465px;" |
/opt/eiq/demos# eiq_demo.py --help

/opt/eiq/demos# eiq_demo.py -h
|-
| style="width: 96px;" | --image / -i
| style="width: 421px;" | Use an image of your choice within the demo.
| style="width: 465px;" |
/opt/eiq/demos# eiq_demo.py --image /home/root/image.jpg

/opt/eiq/demos# eiq_demo.py -i /home/root/image.jpg
|-
| style="width: 96px;" | --labels / -l
| style="width: 421px;" | Use a labels file of your choice within the demo. Labels and models must be compatible for proper outputs.
| style="width: 465px;" |
/opt/eiq/demos# eiq_demo.py --labels /home/root/labels.txt

/opt/eiq/demos# eiq_demo.py -l /home/root/labels.txt
|-
| style="width: 96px;" | --model / -m
| style="width: 421px;" | Use a model file of your choice within the demo. Labels and models must be compatible for proper outputs.
| style="width: 465px;" |
/opt/eiq/demos# eiq_demo.py --model /home/root/model.tflite

/opt/eiq/demos# eiq_demo.py -m /home/root/model.tflite
|-
| style="width: 96px;" | --res / -r
| style="width: 421px;" | Choose the resolution of your video capture device. Options: full_hd (1920x1080), hd (1280x720), vga (640x480). If none is specified, hd is used as the default. If your video device doesn't support the chosen resolution, the best available one is selected automatically.
| style="width: 465px;" |
/opt/eiq/demos# eiq_demo.py --res full_hd

/opt/eiq/demos# eiq_demo.py -r vga
|-
| style="width: 96px;" | --video_fwk / -f
| style="width: 421px;" | Choose which video framework is used to display the video. Options: opencv, v4l2, gstreamer (needs improvements). If none is specified, v4l2 is used as the default.
| style="width: 465px;" |
/opt/eiq/demos# eiq_demo.py --video_fwk opencv

/opt/eiq/demos# eiq_demo.py -f v4l2
|-
| style="width: 96px;" | --video_src / -v
| style="width: 421px;" | Run inference on a video instead of an image. Pass "True" to use the default capture device, or specify your video capture device or a video file. Options: True, /dev/video&lt;index&gt;, path to your video file.
| style="width: 465px;" |
/opt/eiq/demos# eiq_demo.py --video_src /dev/video3

/opt/eiq/demos# eiq_demo.py -v True

/opt/eiq/demos# eiq_demo.py -v /home/root/video.mp4
|}
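These arguments can be combined in a single invocation. As an illustration, the sketch below launches a demo script with several of the documented flags from Python; eiq_demo.py is the placeholder name used in the table, and all paths are examples.

<pre>
# Minimal sketch: combine several documented demo arguments in one call.
# eiq_demo.py is the placeholder name from the table; paths are examples.
import subprocess

subprocess.run(
    ["python3", "/opt/eiq/demos/eiq_demo.py",
     "--model", "/home/root/model.tflite",  # custom model file
     "--labels", "/home/root/labels.txt",   # labels matching the model
     "--video_src", "/dev/video0",          # camera instead of default image
     "--res", "hd"],                        # 1280x720 capture resolution
    check=True,
)
</pre>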
 
 
 
===== <span style="color:#0070c0">Run Applications and Demos</span> =====
 
 
 
*Applications
 
 
 
{| border="1" cellspacing="1" cellpadding="1" style="width: 599px;"
 
|-
 
| style="width: 172px;" | '''Application Name'''<br/>
 
| style="width: 80px;" | '''Framework'''<br/>
 
| style="width: 77px;" | '''i.MX Board'''<br/>
 
| style="width: 86px;" | '''BSP Release'''<br/>
 
| style="width: 112px;" | '''Inference Core'''<br/>
 
| style="width: 44px;" | '''Status'''<br/>
 
|-
 
| style="width: 172px;" | Switch Classification Image<br/>
 
| style="width: 80px;" | TFLite:2.1.0
 
| style="width: 77px;" | RSB-3720
 
| style="width: 86px;" | 5.4.24_2.1.0
 
| style="width: 112px;" | CPU, GPU, NPU
 
| style="width: 44px;" | PASS
 
|-
 
| style="width: 172px;" | Switch Detection Video<br/>
 
| style="width: 80px;" | TFLite:2.1.0<br/>
 
| style="width: 77px;" | RSB-3720<br/>
 
| style="width: 86px;" | 5.4.24_2.1.0<br/>
 
| style="width: 112px;" | CPU, GPU, NPU<br/>
 
| style="width: 44px;" | PASS<br/>
 
|}
 
 
 
*Demos
 
 
 
{| border="1" cellspacing="1" cellpadding="1" style="width: 616px;"
 
|-
 
| style="width: 179px;" | '''Demo&nbsp;Name'''<br/>
 
| style="width: 92px;" | '''Framework'''<br/>
 
| style="width: 81px;" | '''i.MX Board'''<br/>
 
| style="width: 92px;" | '''BSP Release'''<br/>
 
| style="width: 103px;" | '''Inference Core'''<br/>
 
| style="width: 44px;" | '''Status'''<br/>
 
|-
 
| style="width: 179px;" | Object Classification<br/>
 
| style="width: 92px;" | TFLite:2.1.0
 
| style="width: 81px;" | RSB-3720
 
| style="width: 92px;" | 5.4.24_2.1.0
 
| style="width: 103px;" | GPU, NPU
 
| style="width: 44px;" | PASS
 
|-
 
| style="width: 179px;" | Object Detection SSD<br/>
 
| style="width: 92px;" | TFLite:2.1.0<br/>
 
| style="width: 81px;" | RSB-3720<br/>
 
| style="width: 92px;" | 5.4.24_2.1.0<br/>
 
| style="width: 103px;" | GPU, NPU<br/>
 
| style="width: 44px;" | PASS
 
|-
 
| style="width: 179px;" | Object Detection YOLOv3<br/>
 
| style="width: 92px;" | TFLite:2.1.0<br/>
 
| style="width: 81px;" | RSB-3720<br/>
 
| style="width: 92px;" | 5.4.24_2.1.0<br/>
 
| style="width: 103px;" | GPU, NPU<br/>
 
| style="width: 44px;" | PASS<br/>
 
|-
 
| style="width: 179px;" | Object Detection DNN<br/>
 
| style="width: 92px;" | OpenCV:4.2.0<br/>
 
| style="width: 81px;" | RSB-3720<br/>
 
| style="width: 92px;" | 5.4.24_2.1.0<br/>
 
| style="width: 103px;" | CPU
 
| style="width: 44px;" | PASS<br/>
 
|-
 
| style="width: 179px;" | Facial Expression Detection<br/>
 
| style="width: 92px;" | TFLite:2.1.0<br/>
 
| style="width: 81px;" | RSB-3720<br/>
 
| style="width: 92px;" | 5.4.24_2.1.0<br/>
 
| style="width: 103px;" | GPU, NPU<br/>
 
| style="width: 44px;" | PASS<br/>
 
|-
 
| style="width: 179px;" | Fire Classification<br/>
 
| style="width: 92px;" | TFLite:2.1.0<br/>
 
| style="width: 81px;" | RSB-3720<br/>
 
| style="width: 92px;" | 5.4.24_2.1.0<br/>
 
| style="width: 103px;" | GPU, NPU<br/>
 
| style="width: 44px;" | PASS<br/>
 
|-
 
| style="width: 179px;" | Fire Classification<br/>
 
| style="width: 92px;" | ArmNN:19.08<br/>
 
| style="width: 81px;" | RSB-3720<br/>
 
| style="width: 92px;" | 5.4.24_2.1.0<br/>
 
| style="width: 103px;" | GPU, NPU<br/>
 
| style="width: 44px;" | PASS<br/>
 
|-
 
| style="width: 179px;" | Pose Detection<br/>
 
| style="width: 92px;" | TFLite:2.1.0<br/>
 
| style="width: 81px;" | RSB-3720<br/>
 
| style="width: 92px;" | 5.4.24_2.1.0<br/>
 
| style="width: 103px;" | GPU, NPU<br/>
 
| style="width: 44px;" | PASS<br/>
 
|-
 
| style="width: 179px;" | Face/Eyes Detection<br/>
 
| style="width: 92px;" | OpenCV:4.2.0<br/>
 
| style="width: 81px;" | RSB-3720<br/>
 
| style="width: 92px;" | 5.4.24_2.1.0<br/>
 
| style="width: 103px;" | GPU, NPU<br/>
 
| style="width: 44px;" | PASS<br/>
 
|}
 
 
 
====== <span style="color:#0070c0">Applications Example - Switch Detection Video</span> ======
 
 
 
This application offers a graphical interface for users to run an object detection demo using either CPU or GPU/NPU to perform inference on a video file.
 
 
 
*Run the&nbsp;''Switch Detection Video''&nbsp;demo using the following line:<pre>$ pyeiq --run switch_video</pre>
 
 
 
*Type '''CPU''' or '''GPU'''/'''NPU''' in the terminal to switch between cores.
**This runs inference on a default video:
 
 
 
[[File:Switch detection resized logo.gif|RTENOTITLE]]
 
 
 
====== <span style="color:#0070c0">Demos Example - Running Object Detection SSD</span> ======
 
 
 
Object detection is a computer technology related to computer vision and image processing that deals with detecting instances of semantic objects of a certain class (such as humans, buildings, or cars) in digital images and videos. Well-researched domains of object detection include face detection and pedestrian detection. Object detection has applications in many areas of computer vision, including image retrieval and video surveillance.
 
 
 
*Run the&nbsp;''Object Detection''&nbsp;'''Default Image&nbsp;'''demo using the following line:<pre>$ pyeiq --run object_detection_tflite</pre>
 
 
 
**This runs inference on a default image:
 
 
 
[[File:Image eiqobjectdetection resized logo.gif|RTENOTITLE]]
 
 
 
*Run the&nbsp;''Object Detection''&nbsp;'''Custom Image&nbsp;'''demo using the following line:
 
<pre>$ pyeiq --run object_detection_tflite --image=/path_to_the_image</pre>
 
 
 
*Run the&nbsp;''Object Detection''&nbsp;'''Video File'''&nbsp;demo using the following line:
 
<pre>$ pyeiq --run object_detection_tflite --video_src=/path_to_the_video</pre>
 
 
 
*Run the&nbsp;''Object Detection''&nbsp;'''Video Camera or Webcam'''&nbsp;demo using the following line:
 
<pre>$ pyeiq --run object_detection_tflite --video_src=/dev/video<index></pre>
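Under the hood, these demos hand frames to a TensorFlow Lite interpreter. The sketch below shows the bare interpreter workflow with tflite_runtime; the model path is a placeholder, since PyeIQ downloads and caches its own models.

<pre>
# Minimal sketch of the TFLite interpreter workflow the demos rely on.
# The model path is a placeholder; PyeIQ fetches its own models.
import numpy as np
import tflite_runtime.interpreter as tflite

interpreter = tflite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]

# Feed one dummy frame of the expected shape/dtype
# (a real demo feeds camera or video frames here).
dummy = np.zeros(inp["shape"], dtype=inp["dtype"])
interpreter.set_tensor(inp["index"], dummy)
interpreter.invoke()

for out in interpreter.get_output_details():
    print(out["name"], interpreter.get_tensor(out["index"]).shape)
</pre>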
 
 
 
== '''References''' ==
 
 
 
'''pyeiq:&nbsp;'''[https://community.nxp.com/t5/Blogs/PyeIQ-3-x-Release-User-Guide/ba-p/1305998 https://community.nxp.com/t5/Blogs/PyeIQ-3-x-Release-User-Guide/ba-p/1305998]
 
