2024 August Update: Someone sent me the slides from this semester's briefing, and this repository, along with my other MDP-related ones, is STILL entirely reusable as far as I can see. SCSE can become CCDS but MDP is still MDP. As usual, retrain the YOLO model (or use something more recent la). Once again, that is a 1-day thing. If you are using these repositories and you don't have a functioning, fully-integrated system by the end of Week 4, reconsider your life choices and your peer evaluations.
2023 Semester 1 Update: At least from what my juniors told me, this repository, along with my other MDP-related ones, is entirely reusable. The only exception is that you will need to retrain the YOLO model since the fonts/colors were changed. That is a 1-day thing. If you are using these repositories and you don't have a functioning, fully-integrated system by the end of Week 4, reconsider your life choices.
Y'all, if you are using this code, which apparently a LOT of y'all are, at least star this repo leh
This repository contains the code for the Raspberry Pi component of the CZ3004/SC2079 Multidisciplinary Project. The Raspberry Pi is responsible for the following:
- Communicating with the Android app via Bluetooth
- Communicating with the Algorithm API via HTTP requests
- Communicating with the STM32 microcontroller via serial
- Capturing images and sending them to the algorithm API
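For context, the sketch below shows one way these responsibilities could be split into child processes (the messages later in this README mention the RPi's "initial child processes"). This is purely illustrative; the function and queue names are mine, not the ones in this codebase.

```python
# Illustrative sketch only: splitting the RPi's responsibilities into
# child processes that communicate through shared queues.
from multiprocessing import Process, Queue

def android_listener(inbox: Queue):
    ...  # receive Bluetooth messages from the Android app

def stm32_link(commands: Queue):
    ...  # forward movement commands to the STM32 over serial

def camera_worker(snap_requests: Queue):
    ...  # capture images and send them to the Algorithm API

if __name__ == "__main__":
    inbox, commands, snaps = Queue(), Queue(), Queue()
    workers = [
        Process(target=android_listener, args=(inbox,)),
        Process(target=stm32_link, args=(commands,)),
        Process(target=camera_worker, args=(snaps,)),
    ]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
```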
Messages between the Android app and the Raspberry Pi are in the following format:

```json
{"cat": "xxx", "value": "xxx"}
```
The `cat` (for category) field takes the following possible values:

- `info`: general messages
- `error`: error messages, usually in response to an invalid action
- `location`: the current location of the robot (in Path mode)
- `image-rec`: image recognition results
- `status`: status updates of the robot (`running` or `finished`)
- `obstacle`: list of obstacles
- `control`: movement-related, like starting the run
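For illustration, wrapping and unwrapping this envelope in Python might look like the sketch below; the helper names are mine, not the repository's.

```python
import json

def make_message(cat: str, value) -> str:
    # Wrap a payload in the shared {"cat": ..., "value": ...} envelope.
    return json.dumps({"cat": cat, "value": value})

def parse_message(raw: str) -> tuple[str, object]:
    # Split an incoming message back into its category and payload.
    msg = json.loads(raw)
    return msg["cat"], msg["value"]

print(make_message("info", "Robot is ready!"))
# {"cat": "info", "value": "Robot is ready!"}
```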
Possible `info` messages:

- `You are connected to the RPi!` => Upon Android successfully connecting to the RPi
- `Robot is ready!` => Once all the initial child processes are ready and running
- `You are reconnected!` => Upon Android successfully reconnecting to the RPi
- `Starting robot on path!` => Upon Android sending the `start` command to the RPi
- `Commands queue finished!` => Upon the RPi finishing executing all the commands in the queue
- `Capturing image for obstacle id: {obstacle_id}` => Upon the RPi capturing an image for a particular obstacle
- `Requesting path from algo...` => Upon the RPi requesting the path from the algorithm
- `Commands and path received Algo API. Robot is ready to move.` => Upon the RPi receiving the commands and path from the algorithm
- `Images stitched!` => Upon the RPi successfully stitching the images
Possible `error` messages:

- `API is down, start command aborted.` => If the API is down, the robot will not start moving
- `Command queue is empty, did you set obstacles?` => If the command queue is empty, the robot will not start moving
- `Something went wrong when requesting stitch from the API.` => Upon the RPi failing to request the stitched image from the algorithm
- `Something went wrong when requesting path from Algo API.` => Upon the RPi failing to request the path from the algorithm
Possible `status` messages:

- `running` => When the robot is running
- `finished` => When the robot has finished running
The message sent from Android to the Raspberry Pi will be in the following format for obstacles:

```json
{
  "cat": "obstacles",
  "value": {
    "obstacles": [{"x": 5, "y": 10, "id": 1, "d": 2}],
    "mode": "0"
  }
}
```
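To make the flow concrete, here is a sketch of how the RPi might unpack this message and forward the obstacles to the Algorithm API over HTTP. The host, endpoint path, and response shape are assumptions for illustration, not the repository's actual API contract.

```python
import json
import requests

ALGO_API = "http://192.168.1.2:5000"  # assumed host/port for the Algorithm API

def handle_obstacles(raw: str) -> list[str]:
    # Unpack the Android message and request a path from the algorithm.
    msg = json.loads(raw)
    assert msg["cat"] == "obstacles"
    resp = requests.post(f"{ALGO_API}/path", json=msg["value"], timeout=5)
    resp.raise_for_status()
    # Assumed response shape: {"commands": ["RS00", "FW10", ..., "FIN"]}
    return resp.json()["commands"]
```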
Android will send the following message to the Raspberry Pi to start the movement of the robot (assuming obstacles were set):

```json
{"cat": "control", "value": "start"}
```
If there are no commands in the queue, the RPi will respond with an error:

```json
{"cat": "error", "value": "Command queue is empty, did you set obstacles?"}
```
The Raspberry Pi will send the following message to Android, so that Android can update the results of the image recognition:

```json
{"cat": "image-rec", "value": {"image_id": "A", "obstacle_id": "1"}}
```
The Raspberry Pi will periodically notify Android with the updated location of the robot:

```json
{"cat": "location", "value": {"x": 1, "y": 1, "d": 0}}
```

where `x` and `y` give the location of the robot, and `d` is its direction.
The following are the possible commands related to movement. Commands come from either the algorithm or the Raspberry Pi, and are either executed by the Raspberry Pi itself or passed along to the STM32. For example, commands like `SNAP` and `FIN` are for the Raspberry Pi to execute, while commands like `FWxx` are passed along to the STM32.
- `RS00` - Gyro Reset - Reset the gyro before starting movement
- `FWxx` - Forward - Robot moves forward by xx units
- `FR00` - Forward Right - Robot moves forward right by 3x1 squares
- `FL00` - Forward Left - Robot moves forward left by 3x1 squares
- `BWxx` - Backward - Robot moves backward by xx units
- `BR00` - Backward Right - Robot moves backward right by 3x1 squares
- `BL00` - Backward Left - Robot moves backward left by 3x1 squares
- `OB01` - Small Obstacle - Robot moves from starting position to obstacle and stops
- `UL00` - Go Around Left for Small Obstacle - Robot moves around obstacle to the left
- `UR00` - Go Around Right for Small Obstacle - Robot moves around obstacle to the right
- `PL01` - Go Around Left for Large Obstacle - Robot moves around obstacle to the left
- `PR01` - Go Around Right for Large Obstacle - Robot moves around obstacle to the right
- `STOP` - Stop - Robot stops moving
- `SNAP` - Snap - Robot takes a picture and sends it for inference
- `FIN` - Finish - Robot stops moving and sends a message to the server to signal end of command queue
- `ACK` - Acknowledgement - Robot sends this message to acknowledge receipt and execution of a command
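Putting that split into code, a dispatch loop might look roughly like the sketch below. The serial port, baud rate, and helper functions are assumptions (pyserial stands in for the actual STM32 link):

```python
import serial  # pyserial

def run_commands(commands, ser: serial.Serial):
    # SNAP and FIN are executed on the RPi itself; everything else is
    # forwarded to the STM32, which replies with ACK after executing.
    for cmd in commands:
        if cmd.startswith("SNAP"):
            capture_and_infer(cmd)   # hypothetical helper on the RPi
        elif cmd == "FIN":
            notify_finished()        # hypothetical helper on the RPi
        else:
            ser.write(f"{cmd}\n".encode())
            ack = ser.readline().decode().strip()
            assert ack == "ACK", f"unexpected reply: {ack}"

# Example setup (port name and baud rate are assumptions):
# ser = serial.Serial("/dev/ttyUSB0", 115200, timeout=2)
```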
Instead of using PiCamera, which does not allow for fine-tuned calibration, I used LibCamera, which allows for more control over the camera. I used the GUI from the following repository to calibrate the camera: Pi_LibCamera_GUI. Please follow the instructions there to calibrate the camera. I created different calibration config files for different scenarios, such as indoors, outdoors, and harsh sunlight. As calibration differs for each camera's hardware, I did not include the config files in this repository.
Since LibCamera is used to calibrate the camera, it is also used to capture the images with the given configuration file.
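Since the config files are not included, here is only a rough sketch of how a capture with a calibration profile might be invoked. The `libcamera-still` flags shown exist in libcamera-apps, but the mapping from a Pi_LibCamera_GUI profile to these flags is my assumption:

```python
import subprocess

def capture(path: str, profile: dict):
    # Build a libcamera-still invocation from a calibration profile.
    # Which flags your profile actually needs depends on your calibration.
    cmd = ["libcamera-still", "-o", path, "-t", "1000"]
    for key in ("brightness", "contrast", "saturation", "sharpness"):
        if key in profile:
            cmd += [f"--{key}", str(profile[key])]
    subprocess.run(cmd, check=True)

capture("obstacle_1.jpg", {"brightness": 0.2, "contrast": 1.1})
```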
- Follow the guide provided on NTULearn first to set up the Raspberry Pi properly. This includes turning it into a wireless access point and communicating with the STM32 and the Android tablet. Make sure the connections with all the necessary components are working properly.
- Run either `Week_8.py` or `Week_9.py`, depending on which task you are doing.
I am not responsible for any errors, mishaps, or damages that may occur from using this code. Use at your own risk.
I used Group 28's code as a boilerplate/baseline, but improved it and changed the workflow significantly. The communication has been slightly altered, but still largely follows the original design. Pi_LibCamera_GUI was used to calibrate the camera. The following are the links to their repositories: