Papers for literature review
Will Heitman edited this page Sep 29, 2022
Below are the papers that we have found interesting and compelling. Everyone should read through these. We will conduct a detailed comparison of individual methods as a team during the week of September 19.
- VoxelNet: End-to-End Learning for Point Cloud Based 3D Object Detection
- PointPillars: Fast Encoders for Object Detection from Point Clouds
- Well known, fast object detector. A strong candidate to be used as a standalone detector.
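The core trick in PointPillars is to group raw lidar points into vertical columns ("pillars") on a bird's-eye-view grid before any learning happens. Below is a minimal sketch of just that grouping step; the grid ranges and pillar size are made-up assumptions, and the real encoder additionally builds per-point features and runs a small PointNet per pillar.

```python
import numpy as np

def bin_points_into_pillars(points, x_range=(0.0, 40.0), y_range=(-20.0, 20.0), pillar_size=0.5):
    """Assign each lidar point (x, y, z) to a 2D bird's-eye-view pillar index.

    Toy sketch of the pillar grouping in PointPillars; ranges and pillar
    size here are illustrative assumptions, not the paper's settings.
    """
    x, y = points[:, 0], points[:, 1]
    # Keep only points inside the chosen BEV range.
    mask = (x >= x_range[0]) & (x < x_range[1]) & (y >= y_range[0]) & (y < y_range[1])
    pts = points[mask]
    ix = ((pts[:, 0] - x_range[0]) // pillar_size).astype(int)
    iy = ((pts[:, 1] - y_range[0]) // pillar_size).astype(int)
    return pts, np.stack([ix, iy], axis=1)

pts = np.array([[1.0, 0.2, -1.5], [1.1, 0.3, -1.4], [30.0, 10.0, 0.0]])
kept, idx = bin_points_into_pillars(pts)
# The first two points land in the same pillar [2, 40]; the third in [60, 60].
```

Everything downstream of this binning is a standard 2D convolutional detector, which is where the speed comes from.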
- Sparse Fuse Dense: Towards High Quality 3D Detection with Depth Completion
- Pseudo-LiDAR from Visual Depth Estimation: Bridging the Gap in 3D Object Detection for Autonomous Driving
- Narrows the gap between image-based and lidar-based 3D object detection
- Pseudo-LiDAR++: Accurate Depth for 3D Object Detection in Autonomous Driving
- Improves on the pseudo-lidar framework with better methods for stereo depth prediction and depth debiasing. Better performance on longer-distance detection.
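The pseudo-LiDAR papers above share one key step: back-projecting a predicted per-pixel depth map into a 3D point cloud using the camera intrinsics, so that lidar-style detectors can run on camera data. A minimal sketch of that back-projection, with made-up intrinsics:

```python
import numpy as np

def depth_to_pseudo_lidar(depth, fx, fy, cx, cy):
    """Back-project a depth map (H x W, meters) into an (N, 3) point cloud.

    This is the core pseudo-LiDAR idea; the intrinsics passed below are
    illustrative assumptions, not values from any specific paper.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx   # pinhole model: x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# 2x2 toy depth map, principal point at the image center.
depth = np.full((2, 2), 10.0)
cloud = depth_to_pseudo_lidar(depth, fx=700.0, fy=700.0, cx=0.5, cy=0.5)
```

Because depth errors scale into 3D position errors here, the depth-debiasing work in Pseudo-LiDAR++ attacks exactly this step.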
- Depth Coefficients for Depth Completion
- Proposes a method to avoid artifacts in depth prediction, which can help object detection. Discusses 16-ring lidar.
- Efficient Stereo Depth Estimation for Pseudo LiDAR: A Self-Supervised Approach Based on Multi-Input ResNet Encoder
- Instead of using lidar, predicts depth from left and right stereo camera images with a network trained in an unsupervised manner. A good contender, since it is easy to understand and efficient without using lidar.
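Stereo methods like this one ultimately recover depth from disparity via depth = f * B / d. A small sketch of that conversion; the focal length and baseline below are illustrative, roughly KITTI-like, not values from the paper:

```python
import numpy as np

def disparity_to_depth(disparity, focal_px, baseline_m):
    """Convert a stereo disparity map (pixels) to metric depth (meters).

    depth = f * B / d, where f is the focal length in pixels and B the
    baseline between the left and right cameras.
    """
    d = np.asarray(disparity, dtype=float)
    depth = np.full_like(d, np.inf)
    valid = d > 0  # zero disparity means infinitely far (or no match)
    depth[valid] = focal_px * baseline_m / d[valid]
    return depth

depth = disparity_to_depth([38.9, 0.0], focal_px=721.0, baseline_m=0.54)
# 721 * 0.54 / 38.9 is roughly 10 m for the first pixel; the second is inf.
```

Note how the formula makes depth error grow quadratically with distance, which is why the long-range improvements in Pseudo-LiDAR++ matter.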
- Learning To Track With Object Permanence
- I believe object permanence is a great place to start, or at least to familiarize ourselves with. We might be able to apply some elements from this paper/concept to build upon our future strong 3D object detection models.
- https://arxiv.org/abs/2204.01784 This paper is quite a bit more complicated, but it is worth taking a brief look at. For now, the paper linked above would still be great for our base understanding.
- Joint Multi-Object Detection and Tracking with Camera-LiDAR Fusion for Autonomous Driving
- I believe this is also useful for the prior point about building on our 3D object detection models, because we can analyze their joint architecture, or even use their joint architecture as a whole just to test it out.
- New Monte Carlo Localization Using Deep Initialization: A Three-Dimensional LiDAR and a Camera Fusion Approach
- It's a neat idea, and particle filters have a long history in state estimation, but their results look... kinda bad.
- ORB-SLAM2: An Open-Source SLAM System for Monocular, Stereo, and RGB-D Cameras
- Centimeter-level accuracy on the KITTI dataset using only cameras-- what!?
- Robust and Precise Vehicle Localization Based on Multi-Sensor Fusion in Diverse City Scenes
- Fuses RTK GNSS (cm-accurate GNSS with correction data) and lidar odometry (NDT, ICP, etc.)
- 3D Monte Carlo Localization with Efficient Distance Field Representation for Automated Driving in Dynamic Environments
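Both Monte Carlo localization papers above build on the same particle-filter loop: predict particles with a motion model, weight them by measurement likelihood, and resample. A toy 1D sketch of one cycle; a real 3D-lidar MCL would replace the scalar Gaussian measurement model with a scan match against a map (NDT/ICP or a distance field), and the noise values here are made up:

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, control, measurement, meas_std=0.5):
    """One predict/update/resample cycle of a toy 1D Monte Carlo localizer."""
    # Predict: propagate each particle through a noisy motion model.
    particles = particles + control + rng.normal(0.0, 0.1, size=particles.shape)
    # Update: weight particles by a Gaussian measurement likelihood.
    likelihood = np.exp(-0.5 * ((measurement - particles) / meas_std) ** 2)
    weights = weights * likelihood
    weights = weights / weights.sum()
    # Resample: draw particles in proportion to their weights.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# Track a target moving +1 m per step; measurements are noisy positions.
particles = rng.uniform(-10.0, 10.0, size=500)
weights = np.full(500, 1.0 / 500)
true_pose = 0.0
for _ in range(10):
    true_pose += 1.0
    z = true_pose + rng.normal(0.0, 0.5)
    particles, weights = particle_filter_step(particles, weights, control=1.0, measurement=z)
```

After a few cycles the particle mean tracks the true pose; the "deep initialization" paper's contribution is replacing the uniform initial spread with a learned guess.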
- Back to the Feature: Learning Robust Camera Localization from Pixels to Pose
- Recorded accuracy in the neighborhood of 5-10 cm (though as high as 30 cm).
- Recall (in this case, the portion of time where localization was within 5cm and 5 degrees rotation error) was 75%.
- Fast Odometry and Scene Flow from RGB-D Cameras based on Geometric Clustering
- Uses RGB-D camera data to jointly provide both odometry and scene flow
- https://www.youtube.com/watch?v=Nt-N4Fd7FZ0 It looks promising, but I'm not sure how well it would integrate with our current approaches. We could perhaps feed some information from these models into our state-estimator melting pot, giving us both more accurate state estimation and scene flow information.
- It is worth noting that we can try to start with optical flow first, which is the 2D version. Even before that, we can start with just 2D classification and tracking.
- https://ui.adsabs.harvard.edu/abs/2021PatRe.11407861Z/abstract is a good starting place to learn about optical and scene flow.
- https://vision.in.tum.de/research/sceneflow is helpful for reading up further on scene flow.
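As a concrete starting point for the optical-flow suggestion above, here is the classic Lucas-Kanade least-squares flow estimate on a single patch, in plain NumPy. This is a toy sketch: real pipelines solve it per pixel over small windows and use image pyramids for large motions, and scene flow generalizes the same idea to 3D.

```python
import numpy as np

def lucas_kanade_patch(frame0, frame1):
    """Estimate a single (dx, dy) translation between two grayscale patches.

    Solves the brightness-constancy equation Ix*dx + Iy*dy = -It in the
    least-squares sense over the whole patch.
    """
    Iy, Ix = np.gradient(frame0)       # spatial gradients (rows = y, cols = x)
    It = frame1 - frame0               # temporal gradient
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    flow, *_ = np.linalg.lstsq(A, b, rcond=None)
    return flow                        # (dx, dy) in pixels

# A brightness ramp shifted right by exactly one pixel between frames.
x = np.arange(16, dtype=float)
frame0 = np.tile(x, (16, 1))
frame1 = np.tile(x - 1.0, (16, 1))     # same ramp, content moved +1 px in x
dx, dy = lucas_kanade_patch(frame0, frame1)
# Recovers dx = 1, dy = 0 for this synthetic motion.
```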
- A Survey on Motion Prediction of Pedestrians and Vehicles for Autonomous Driving
- Worth a read before reading other papers: doesn't present research of its own but lists the types of models currently published.
- A Probabilistic Model for Estimating Driver Behaviors and Vehicle Trajectories in Traffic Environments
- Models an entire traffic situation with a probabilistic network approach that claims to account for the complex interaction between other vehicles. Useful for both traffic prediction and possibly decision making, and seems pretty flexible.
- Cites a similar paper focused only on decision-making here.
- A Behavioral Planning Framework for Autonomous Driving
- Uses an approach called PCB (Prediction- and Cost-function Based) that essentially evaluates a wide candidate space of possible behaviors before selecting the best behavior based on route progress, comfort, safety, and fuel consumption.
- Tackling Real-World Autonomous Driving using Deep Reinforcement Learning
- Interesting approach that uses reinforcement learning to predict both steering angle and acceleration. Basically a black box that takes localization/perception data and uses it for all planning. Might not work out for us but could provide interesting insights on predictive motion planning.
- Reinforcement Learning for Behavior Planning of Autonomous Vehicles in Urban Scenarios
- Long read and recent paper, but has an interesting approach to behavior planning. Essentially, it combines heuristic rules (like ours) with reinforcement learning over real human behaviors to generate a complete BP predictor. Haven't gotten all the way through it yet, but I think that if we can get the necessary training data a system like this could be feasible.
- Behavior Planning of Autonomous Cars with Social Perception
- MPC and inverse reinforcement learning
- Towards artificial situation awareness by autonomous vehicles
- Attempts to simplify full prediction of other vehicle trajectories into just the information needed to make a decision. Somewhat oriented around aircraft, but the framework of discrete stochastic transition states with temporal distributions may be useful.
- Safe, Multi-Agent, Reinforcement Learning for Autonomous Driving
- I found this really helpful for understanding how ML can be applied to autonomous vehicles. The first part of the paper gives a nice overview of different categories of planners that use a Markov decision process, and the rest of it gives a new approach that allows for safe multiagent planning.
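Most of the planners that paper categorizes are built on the Markov decision process formalism: states, actions, transition probabilities, rewards, and a value function solved by dynamic programming. A tiny made-up MDP solved by value iteration, just to make the formalism concrete (the states and rewards are invented for illustration):

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, iters=200):
    """Solve a small MDP: V(s) = max_a [ R[s,a] + gamma * sum_s' P[s,a,s'] V(s') ].

    P has shape (states, actions, states); R has shape (states, actions).
    """
    n_states = P.shape[0]
    V = np.zeros(n_states)
    for _ in range(iters):
        Q = R + gamma * P @ V          # Q[s, a], expectation over next states
        V = Q.max(axis=1)
    return V, Q.argmax(axis=1)         # values and greedy policy

# 2 states x 2 actions: action 1 in state 0 reaches the rewarding state 1.
P = np.array([[[1.0, 0.0], [0.0, 1.0]],   # from state 0: stay / go
              [[0.0, 1.0], [0.0, 1.0]]])  # state 1 is absorbing
R = np.array([[0.0, 0.0],
              [1.0, 1.0]])                # reward for being in state 1
V, policy = value_iteration(P, R)
# Converges to V = [9, 10]; the greedy policy in state 0 is "go" (action 1).
```

Driving-scale planners can't enumerate states like this, which is why the paper's RL-based approximations (and safety constraints on top of them) are needed.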
- Interaction-Aware Behavior Planning for Autonomous Vehicles Validated with Real Traffic Data
- Another paper that uses Markov models and Monte Carlo methods for behavior planning.
- Interactive article on how Gaussian belief propagation works
- An argument for more computational and less human-inspired methods for AI in general
- Adaptive Stress Testing for Autonomous Vehicles
- Read this one first, as it introduces AST for AVs; most of the other AST papers here use a similar approach (MCTS/DRL).
- The Adaptive Stress Testing Formulation
- Also useful for understanding AST
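The AST formulation frames validation as a search for the most likely disturbance sequence that drives the system to failure. The papers optimize this with MCTS or deep RL; the toy sketch below conveys just the objective (likelihood-weighted failure search) using plain random rollouts, with made-up dynamics and a made-up failure threshold:

```python
import numpy as np

rng = np.random.default_rng(1)

def adaptive_stress_search(n_rollouts=2000, horizon=20):
    """Toy AST objective: among disturbance sequences that push a 1D system
    into the failure region, keep the most likely one.

    Dynamics (x += d), the N(0,1) disturbance model, and the failure
    threshold (x > 5) are all illustrative assumptions.
    """
    best_loglik, best_seq = -np.inf, None
    for _ in range(n_rollouts):
        x = 0.0
        loglik = 0.0
        seq = rng.normal(0.0, 1.0, size=horizon)   # candidate disturbances
        for d in seq:
            x += d                                  # trivial system dynamics
            loglik += -0.5 * d * d                  # N(0,1) log-likelihood, up to a constant
        if x > 5.0 and loglik > best_loglik:        # failure found; is it the likeliest so far?
            best_loglik, best_seq = loglik, seq
    return best_loglik, best_seq

loglik, seq = adaptive_stress_search()
```

MCTS or DRL replaces the blind sampling here with guided search, which is what makes AST tractable on real simulators.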
- How Do We Fail? Stress Testing Perception in Autonomous Vehicles
- Perception Adaptive Stress Testing (PAST): adaptive stress testing for lidar-based perception in adverse weather conditions. PAST seeks out the most likely failure scenarios via MCTS and applies disturbances using a physics-based data augmentation technique that simulates lidar point clouds in adverse weather.
- Finding Failures in High-Fidelity Simulation using Adaptive Stress Testing and the Backward Algorithm
- Another AST implementation. To improve efficiency, it uses a backward algorithm to adapt low-fidelity AST simulation implementations to high-fidelity. Interesting and impressive implementation.
- Development of ADAS perception applications in ROS and “Software-In-the-Loop” validation with CARLA simulator
- This honestly doesn't have much substance in terms of new approaches we can take to SIL testing, but it's definitely a good read that has a high level overview of simulation and the entire stack. They use basically the exact same stack that we use.
- Autonomous Vehicles Testing Methods Review
- Gives a high-level overview of implementing traffic scenarios, with an emphasis on model-based testing (MBT) and rapid control prototyping (RCP). Collecting data from our simulation should also be of utmost importance, as we should be able to consistently reproduce issues that we find. There's also a good section on hardware-in-the-loop testing that's worth a read.
- MP3: A Unified Model to Map, Perceive, Predict and Plan
- Learning from All Vehicles
- This is the paper from the 2021 winners of the CARLA Challenge
- Learning by Cheating
- This was featured on the CARLA Leaderboard docs
- The "starter kit" is here