AutoParking using RaspberryPi
This project was carried out with three objectives:
- Study and analyze the latest autonomous parking algorithms for application
- Implement and simulate autonomous parking
- Conceptualize a business idea for an autonomous charging service, consisting of a mobile application and a battery-charger robot manipulator. The service enables self-driving cars to charge their batteries instead of merely idling in parking lots.
For detailed information about the project, please check the report attached to this repository.
Hardware Components
- One vehicle chassis with a suspension structure
- Two RPi fisheye camera lens modules
- Arducam Multi Camera Adapter Doubleplexer Stereo module
- PCA9685 PWM/Servo driver
- MG996R Servo motor
- TB6612 DC/Stepper motor driver
- 12V DC motor
- TOF ultrasonic sensor
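The MG996R steering servo is driven through the PCA9685 board, and the 12V DC drive motor through the TB6612. The sketch below shows one way this could be wired up in software using the Adafruit ServoKit and RPi.GPIO libraries; the servo channel and GPIO pin numbers are placeholder assumptions, not the project's actual wiring.

```python
# Minimal actuation sketch -- the channel/pin assignments below are
# assumptions for illustration, not this project's actual wiring.
import RPi.GPIO as GPIO
from adafruit_servokit import ServoKit

STEER_CHANNEL = 0               # PCA9685 channel driving the MG996R (assumed)
AIN1, AIN2, PWMA = 23, 24, 18   # TB6612 direction/PWM pins, BCM numbering (assumed)

kit = ServoKit(channels=16)     # PCA9685 over I2C

GPIO.setmode(GPIO.BCM)
GPIO.setup([AIN1, AIN2, PWMA], GPIO.OUT)
throttle = GPIO.PWM(PWMA, 1000)  # 1 kHz PWM for the 12V DC motor
throttle.start(0)

def steer(angle_deg):
    """Set the steering servo angle (0-180 degrees, 90 = straight ahead)."""
    kit.servo[STEER_CHANNEL].angle = max(0, min(180, angle_deg))

def drive(speed_percent):
    """Drive forward (positive) or backward (negative) at |speed| percent."""
    GPIO.output(AIN1, speed_percent >= 0)
    GPIO.output(AIN2, speed_percent < 0)
    throttle.ChangeDutyCycle(min(100, abs(speed_percent)))

# Example: creep forward slowly while steering slightly left
steer(75)
drive(30)
```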
The fisheye image must first be undistorted and then transformed into a bird-view image. The pictures below show the three stages of the image transformation.
| Fisheye Image | Undistorted Image | BirdView Image |
| --- | --- | --- |
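A compact OpenCV sketch of these two transformation steps is shown below. The intrinsic matrix `K`, the distortion coefficients `D`, and the four ground-plane source points are placeholder values; in practice they come from calibrating the RPi fisheye cameras and measuring the test environment.

```python
# Fisheye -> undistorted -> bird-view sketch. K, D and the source points are
# placeholders, not the calibration actually used in this project.
import cv2
import numpy as np

K = np.array([[300.0, 0.0, 320.0],
              [0.0, 300.0, 240.0],
              [0.0,   0.0,   1.0]])            # fisheye intrinsics (placeholder)
D = np.array([[-0.03], [0.01], [0.0], [0.0]])  # fisheye distortion coefficients (placeholder)

def undistort(fisheye_img):
    """Remove the fisheye distortion with OpenCV's fisheye camera model."""
    h, w = fisheye_img.shape[:2]
    map1, map2 = cv2.fisheye.initUndistortRectifyMap(
        K, D, np.eye(3), K, (w, h), cv2.CV_16SC2)
    return cv2.remap(fisheye_img, map1, map2, interpolation=cv2.INTER_LINEAR)

def birdview(undistorted_img, out_size=(400, 400)):
    """Warp the undistorted image onto the ground plane (top-down view)."""
    # Four points on the ground plane in the undistorted image (placeholder)
    src = np.float32([[180, 300], [460, 300], [620, 470], [20, 470]])
    dst = np.float32([[0, 0], [out_size[0], 0],
                      [out_size[0], out_size[1]], [0, out_size[1]]])
    H = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(undistorted_img, H, out_size)

frame = cv2.imread("fisheye_frame.jpg")   # any captured fisheye frame
top_down = birdview(undistort(frame))
```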
The parking slot detection model is based on context-based parking slot detection (see the reference below).
After obtaining the bird-view image of our test environment, vacant parking slots can be detected with the context-based recognizer model.
| Test Image | Test Result Image |
| --- | --- |
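Conceptually, the context-based model works in two stages: a parking context recognizer (PCR, weights under `weight_pcr`) classifies the parking context of the bird-view image, and a parking slot detector (PSD, weights under `weight_psd`) then localizes the slots. The sketch below only illustrates that flow; `pcr_predict` and `psd_detect` are hypothetical stubs, not the actual entry points of the referenced detector code.

```python
# Illustration of the two-stage detection flow only: pcr_predict and
# psd_detect are hypothetical stand-ins for the real PCR/PSD models.
import numpy as np

def pcr_predict(birdview_img):
    """Parking context recognizer stub: returns the slot type it believes
    the bird-view image shows (the real PCR loads weight_pcr)."""
    return "perpendicular"

def psd_detect(birdview_img, context):
    """Parking slot detector stub: returns slot corner points and an
    occupancy flag per slot (the real PSD loads weight_psd)."""
    return [{"corners": np.float32([[50, 40], [150, 40], [150, 220], [50, 220]]),
             "occupied": False}]

def find_vacant_slot(birdview_img):
    """Pick the first vacant slot reported by the two-stage detector."""
    context = pcr_predict(birdview_img)
    slots = psd_detect(birdview_img, context)
    vacant = [s for s in slots if not s["occupied"]]
    return vacant[0] if vacant else None

# Example with a dummy bird-view frame
dummy_birdview = np.zeros((400, 400, 3), dtype=np.uint8)
target = find_vacant_slot(dummy_birdview)
```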
To validate our model, we constructed our own test environment, designed to resemble a real-world parking lot as closely as possible.
You can simply start the RaspberryPi car parking autonomously with this script. You still need a context-based parking slot detector model, which can be downloaded from here.
Save the weights under `./ScatCar/models/context_based/weight_pcr` and `./ScatCar/models/context_based/weight_psd`, then run:
`python main.py`
Around View Monitoring (AVM)
- https://github.com/Nebula4869/fisheye_camera_undistortion
- https://github.com/Ahid-Naif/Around-View-Monitoring-AVM

Context-Based Parking Slot Detection
- https://github.com/dohoseok/context-based-parking-slot-detect