Edge AI: come on, let's get our artificial intelligence on board!
Have you ever heard of the term "edge AI"? It refers to deploying AI applications on devices located out in the physical world. The benefits? Lower latency, better security, greater efficiency and, above all, proximity! It is therefore becoming increasingly important to be able to deploy AI models capable of running inference in real time.
Computer vision is particularly concerned, given its rapid progress and its use in many fields: automotive, medical, commerce, etc. It covers many techniques, such as image classification, image segmentation and object detection.
The latter identifies and locates different objects in an image or a video. A famous object detection algorithm, known for its speed, is YOLOv7.
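To make "identify and locate" concrete: a detector returns, for each object, a class, a confidence score and a bounding box, and boxes are typically compared using Intersection over Union (IoU). Here is a minimal, self-contained sketch of the IoU computation (illustrative code, not YOLOv7's actual implementation):

```python
def iou(box_a, box_b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    # Coordinates of the overlapping region
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # ≈ 0.143
```

An IoU close to 1 means two boxes overlap almost entirely; detectors use this, among other things, to discard duplicate detections of the same object.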
In this talk, we will see how to deploy a YOLOv7 model for object detection on a Raspberry Pi 4 board.
To do this, we will look at training and testing a YOLOv7 model within a Jupyter Notebook. We will then convert our model so that we can deploy it and run inference on the Raspberry Pi. The end result? A real-time object detection tool at your fingertips.
So, shall we get on board?
- An OVHcloud Public Cloud Project if you want to test it with OVHcloud AI Tools
- A Raspberry Pi 4
ovhai login -u <username> -p <password>
ovhai notebook list --states RUNNING
ovhai notebook get <notebook-id>
Once you are inside the AI Notebook, run the different steps.
The training could take several hours...
When the training is finished, save and export the model inside the dedicated Object Storage container.
Step 5 - Check the availability of the yolov7-tiny.pt model in the dedicated Object Storage container
ovhai data list GRA edge-ai-yolov7-model
- Go to the /tmp directory: cd /tmp
- Clone YOLOv7 repository:
git clone https://github.com/WongKinYiu/yolov7.git
- Go to the yolov7 directory: cd yolov7
pip3 install -r requirements.txt
Once you are in the yolov7 directory, launch the following command: ovhai data download GRA edge-ai-yolov7-model
Here, three tests are done:
Test your model on existing images with the following command:
python3 detect.py --weights yolov7-tiny.pt --conf 0.25 --img-size 640 --source inference/images/horses.jpg
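The --conf 0.25 flag discards any detection whose confidence score is below 0.25. The filtering idea can be sketched like this (illustrative values, not YOLOv7's internal data structures):

```python
# Each detection: (class_name, confidence, (x1, y1, x2, y2))
detections = [
    ("horse", 0.91, (120, 80, 340, 300)),
    ("horse", 0.18, (400, 90, 520, 280)),  # below the threshold, will be dropped
    ("dog",   0.47, (50, 200, 140, 310)),
]

CONF_THRESHOLD = 0.25  # same value as --conf 0.25

kept = [d for d in detections if d[1] >= CONF_THRESHOLD]
print([d[0] for d in kept])  # → ['horse', 'dog']
```

Raising the threshold gives fewer but more reliable boxes; lowering it catches more objects at the cost of false positives.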
Try it for real-time detection (here, --source 0 selects the first webcam):
python3 detect.py --weights yolov7-tiny.pt --conf 0.25 --img-size 640 --source 0
Create the new directories for new images and labels:
mkdir new-data
mkdir new-data/images
mkdir new-data/labels
Take pictures (with Cheese, for example), save them in new-data/images, and use the model to detect objects.
Launch the following command:
python3 detect.py --weights yolov7-tiny.pt --conf 0.25 --img-size 640 --source new-data/images/ --save-txt
Copy the new labels into the dedicated directory: cp -a runs/detect/exp/labels/. new-data/labels/
Content of the new-data directory:
.
├── images
│ └── test1.jpg
└── labels
└── test1.txt
2 directories, 2 files
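The .txt files written by --save-txt use the YOLO label format: one line per object, class_id x_center y_center width height, with all coordinates normalized to [0, 1]. Converting such a line back to pixel coordinates can be sketched as follows (the example line and image size are hypothetical):

```python
def yolo_to_pixels(line, img_w, img_h):
    """Convert one YOLO-format label line to (class_id, x1, y1, x2, y2) in pixels."""
    cls, xc, yc, w, h = line.split()
    # Scale normalized center/size back to the image dimensions
    xc, yc = float(xc) * img_w, float(yc) * img_h
    w, h = float(w) * img_w, float(h) * img_h
    # Turn center + size into top-left / bottom-right corners
    return int(cls), round(xc - w / 2), round(yc - h / 2), round(xc + w / 2), round(yc + h / 2)

# A box centered in a 640x480 image, a quarter of its size in each dimension
print(yolo_to_pixels("0 0.5 0.5 0.25 0.25", 640, 480))  # → (0, 240, 180, 400, 300)
```

Because the format is normalized, the same label file stays valid if you later resize the images.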
Push the new data to the Object Storage container: ovhai data upload GRA edge-ai-yolov7-data new-data/
Synchronize the Object Storage and the notebook (pull the data): ovhai notebook pull-data <notebook-id>
New data is available? You are ready to retrain your YOLOv7 model!
YOLOv7 repository: https://github.com/WongKinYiu/yolov7
Slides of the presentation: soon