Edge AI SDK/AI Framework/RK3588
RKNN SDK
The RKNN SDK (Baidu password: a887) includes two parts:
- rknpu2 (on the Board End)
- rknn-toolkit2 (on the PC)
```
├── rknpu2
│   ├── Driver
│   └── RKNPU2 Environment
└── rknn-toolkit2
```
RKNPU2
RKNPU2 includes the driver and runtime environment needed to quickly develop AI applications that use RKNN models (*.rknn). For more information, refer to RK_Platform_NPU_SDK.
RKNPU2 Driver
The official firmware for these boards ships with the RKNPU2 driver preinstalled.
You can run the following command on the board to query the RKNPU2 driver version:

```shell
dmesg | grep -i rknpu
```
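If debugfs is mounted, the driver version can also be read directly; this debugfs path appears in Rockchip's NPU documentation, but its availability depends on the kernel configuration and may require root:

```shell
# Read the NPU driver version from debugfs (board side, may require root)
cat /sys/kernel/debug/rknpu/version 2>/dev/null \
    || echo "debugfs entry not available on this system"
```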
RKNPU2 Environment
Here are two basic concepts in the RKNPU2 environment:
- RKNN Server: A background proxy service running on the development board. Its main function is to call the corresponding board-side runtime interfaces to process data transmitted from the PC over USB, and to return the results to the PC.
- RKNPU2 Runtime library (librknnrt.so): Its main responsibility is to load RKNN models on the system and perform inference by calling the dedicated neural processing unit (NPU).
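Both components can be checked on the board; the library path below is a typical install location and is an assumption that may differ on your image:

```shell
# Check whether the rknn_server proxy is running (board side)
pgrep -a rknn_server || echo "rknn_server is not running"

# Report the runtime library version; /usr/lib is an assumed install path
strings /usr/lib/librknnrt.so 2>/dev/null | grep -i "librknnrt version" \
    || echo "librknnrt.so not found at /usr/lib"
```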
RKNN-TOOLKIT2
RKNN-Toolkit2 is a development kit that provides model conversion, inference, and performance evaluation on PC platforms. Through its Python interface, users can easily perform the following functions:
- Model conversion: Converts Caffe, TensorFlow, TensorFlow Lite, ONNX, Darknet, and PyTorch models to the RKNN format, and supports RKNN model import/export for later use on Rockchip NPU platforms.
- Quantization: Converts float models to quantized models; currently supported methods include asymmetric quantization (asymmetric_quantized-8) and hybrid quantization.
- Model inference: Simulates the NPU to run an RKNN model on the PC and obtain inference results, or distributes the RKNN model to a specified NPU device to run and returns the results.
- Performance & memory evaluation: Distributes the RKNN model to a specified NPU device and evaluates the model's performance and memory consumption on the actual hardware.
- Quantization error analysis: Reports the Euclidean or cosine distance between each layer's inference results before and after quantization. This helps locate where quantization error arises and provides ideas for improving the accuracy of quantized models.
- Model encryption: Use the specified encryption method to encrypt the RKNN model as a whole.
For more information, refer to https://github.com/airockchip/rknn-toolkit2 and RK_Platform_NPU_SDK.
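The conversion-and-quantization flow above can be sketched with the rknn-toolkit2 Python API. The model path, normalization values, and calibration dataset file below are illustrative assumptions, not values from this SDK:

```python
# Sketch of the rknn-toolkit2 conversion flow (runs on the PC side).
# Paths, mean/std values, and the calibration dataset file are
# illustrative assumptions -- adapt them to your own model.
def convert_onnx_to_rknn(onnx_path="model.onnx",
                         rknn_path="model.rknn",
                         dataset="dataset.txt"):
    # Deferred import: requires the rknn-toolkit2 PC package
    from rknn.api import RKNN

    rknn = RKNN()
    # Preprocessing config; target_platform selects the RK3588 NPU
    rknn.config(mean_values=[[0, 0, 0]],
                std_values=[[255, 255, 255]],
                target_platform="rk3588")
    rknn.load_onnx(model=onnx_path)
    # Quantize using a calibration dataset (a text file listing image paths)
    rknn.build(do_quantization=True, dataset=dataset)
    rknn.export_rknn(rknn_path)
    rknn.release()
    return rknn_path
```

After building, `init_runtime()` and `inference()` on the same `RKNN` object can run the model on the PC simulator or, with a connected board, on the actual NPU.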
Applications
Edge AI SDK / Vision Application
| Application | Model | AOM-3821 FPS (video file) | ASR-A501 FPS (video file) |
| --- | --- | --- | --- |
| Object Detection | yolov10 | 25 | 25 |
| Person Detection | yolov5 | 25 | 25 |
| Face Detection | retinaface | 25 | 25 |
| Pose Estimation | yolov8_pose | 25 | 25 |
Benchmark
The RK3588 can deploy a wide range of popular DNN models and ML frameworks to the edge with high-performance inferencing, for tasks such as real-time classification, object detection, pose estimation, semantic segmentation, and natural language processing (NLP). For more information, refer to https://github.com/airockchip.
Building on the rknn_common_test command, Advantech provides an encapsulated npu_stress_test.sh script for testing the NPU.
Utility
- sysstat: This SDK provides the sysstat package, whose utilities (such as mpstat for processor usage and sar for memory usage) report device statistics. The utilities are included in the SDK package. For more information, refer to Link.
- OpenCV 4.6: OpenCV (Open Source Computer Vision Library: http://opencv.org) is an open-source library that includes several hundred computer vision algorithms. For more information, refer to Link.
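As an example of the sysstat tooling mentioned above, mpstat can sample per-core processor load; the flags below are standard sysstat options:

```shell
# Report per-core CPU utilization: one sample over one second
if command -v mpstat >/dev/null 2>&1; then
    mpstat -P ALL 1 1
else
    echo "mpstat not installed (provided by the sysstat package)"
fi
```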