This software pack helps developers build products that need to detect people, using NXP’s i.MX RT1170 crossover MCU and the eIQ® ML Software Development Environment. The project includes tested Vision Intelligence (uVITA) algorithms and provides production-quality software for customers to include in their end products.


MCU-Based Multiple Person Detector

A step-by-step guide to developing and deploying a person detector based on a convolutional neural network (CNN) on MCU-based systems.

Overview

The following is a list of all components that are available in the ml_person_detector folder.

Component   Description
scripts     Python test scripts for the multiple person detection model running on a PC.
models      The original CNN model in ONNX format.
data        Test images and quantization calibration images.
converter   eIQ® Inference with Glow NN.
app         The ML-Person-Detector projects for the i.MX RT1170EVK and RT1060EVK.
doc         Lab guide.

Resources

Assemble the Application

You need to have both Git and West installed; then execute the commands below to gather the whole SDK delivery of the ml-person-detector.

west init -m https://github.com/nxp-mcuxpresso/appswpacks-ml-person-detector.git --mr mcux_release_github appswpacks-ml-person-detector
cd appswpacks-ml-person-detector
west update
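Note that West is a Python-based tool; if it is not already installed, it can typically be obtained via pip (for example, pip install west).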

Build and Run the Application

To build and run the application, please refer to the Lab Guide in the doc folder or check the steps in Run a project using MCUXpresso IDE.

ML-Person-Detector Verification on PC

Required Python packages:

  • opencv-python
  • onnxruntime
  • numpy
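If these packages are missing, they can be installed with pip, for example: pip install opencv-python onnxruntime numpy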

To use the verification tool, go to the scripts folder and run the commands below.

Image test

python image_test.py

Video Test

python video_test.py
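
For reference, below is a minimal sketch of the inference flow such a PC-side test might use with onnxruntime; it is not the actual content of image_test.py. The preprocessing mirrors the options used later for Glow profiling and compilation (0-to-1 scaling, NCHW layout, BGR channel order, input "input.1" of shape [1,3,192,320]), while the test image path and output decoding are only assumptions.

import cv2
import numpy as np
import onnxruntime as ort

def preprocess(img_bgr):
    # Resize to the model input resolution (W=320, H=192) and scale pixels to [0, 1].
    img = cv2.resize(img_bgr, (320, 192)).astype(np.float32) / 255.0
    # HWC (BGR) -> NCHW with a batch dimension, matching -image-layout=NCHW and -image-channel-order=BGR.
    return np.expand_dims(img.transpose(2, 0, 1), axis=0)

session = ort.InferenceSession("../models/Onnx/dperson_shufflenetv2.onnx")
frame = cv2.imread("../data/example.jpg")    # hypothetical test image path
outputs = session.run(None, {"input.1": preprocess(frame)})
print([o.shape for o in outputs])            # decoding of boxes/scores is model specific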

Model Deployment

In this section, eIQ® Inference with Glow NN is used to compile the neural network ahead of time into object files. To follow the deployment steps below, download the Glow installer from eIQ-Glow and install it into the converter folder.

Model profiling

Glow uses profile-guided quantization, running inference to collect statistics on the possible numeric values of each tensor within the neural network. Images in PNG format with the same resolution as the model input should be prepared in advance. Use the command below to generate the yml profile:

image-classifier.exe -input-image-dir=data/Calibration -image-mode=0to1 -image-layout=NCHW -image-channel-order=BGR -model=models/Onnx/dperson_shufflenetv2.onnx -model-input-name=input.1 -dump-profile=models/Glow/dperson_shufflenetv2.yml

You will then get a dperson_shufflenetv2.yml file under the converter folder.
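
For reference, the calibration images used by the profiling command above could be prepared with a small script like the sketch below. The 320x192 resolution follows from the model's [1,3,192,320] NCHW input shape used in the compilation commands, and the source directory name is only an assumption.

import glob
import os
import cv2

SRC_DIR = "data/raw_images"        # assumed location of the source pictures
DST_DIR = "data/Calibration"       # calibration folder passed to -input-image-dir above

os.makedirs(DST_DIR, exist_ok=True)
for path in glob.glob(os.path.join(SRC_DIR, "*")):
    img = cv2.imread(path)
    if img is None:
        continue                               # skip files OpenCV cannot read
    resized = cv2.resize(img, (320, 192))      # (width, height) = (320, 192)
    name = os.path.splitext(os.path.basename(path))[0] + ".png"
    cv2.imwrite(os.path.join(DST_DIR, name), resized)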

Generate Glow bundle

Bundle generation compiles the model into a binary object file (the bundle). It is performed with the model-compiler tool.

  • Compile a float32 model to an int8 bundle:

model-compiler.exe -model=models/Onnx/dperson_shufflenetv2.onnx -model-input=input.1,float,[1,3,192,320] -emit-bundle=models/Glow/int8_bundle -backend=CPU -target=arm -mcpu=cortex-m7 -float-abi=hard -load-profile=models/Glow/dperson_shufflenetv2.yml -quantization-schema=symmetric_with_power2_scale -quantization-precision-bias=Int8

  • Compile a float32 model to an int8 bundle with CMSIS-NN:

model-compiler.exe -model=models/Onnx/dperson_shufflenetv2.onnx -model-input=input.1,float,[1,3,192,320] -emit-bundle=models/Glow/int8_cmsis_bundle -backend=CPU -target=arm -mcpu=cortex-m7 -float-abi=hard -load-profile=models/Glow/dperson_shufflenetv2.yml -quantization-schema=symmetric_with_power2_scale -quantization-precision-bias=Int8 -use-cmsis

Quantization Model Verification

Here are two examples of the accuracy verification for the quantized model. Although there are slight differences in the detected person coordinates between the outputs of the original float model and the quantized one, the overall detection results remain reliable with good precision.

[Verification result images: original float model vs. quantized model]

Application Overview

The person detection demo projects are built on the NXP i.MX RT1170EVK and i.MX RT1060EVK MCU boards respectively. ML model inference usually requires heavy computation, yet an MCU typically has a single core, which therefore has to handle not only the model inference but also the camera and display pipelines. To capture images from the camera and show frames with the algorithm results on the display in real time, we built a Microcontroller-based Vision Intelligence Algorithms (uVITA) system on top of FreeRTOS. Its structure is shown below.

[uVITA system structure diagram]

Other Reference Applications

For other rapid-development software bundles please visit the Application Software Packs page.

For SDK examples, please go to the MCUXpresso SDK and download the full delivery to build and run examples based on other SDK components.

Reference
