Awesome End-to-End Autonomous Driving

This is a collection of research papers on end-to-end autonomous driving. The repository will be continuously updated to track the latest progress in E2E driving.

Feel free to follow and star!

Table of Contents

An Overview of End-to-End Driving Methods

End-to-end driving methods aim to build a driving model that, at each timestamp, maps sensor readings (RGB camera and LiDAR), a high-level navigational command, and the vehicle state directly to raw control commands. The raw control command usually consists of steering, throttle, and brake. Based on these commands, the autonomous vehicle can drive from the start point to the goal point without collisions or violations of traffic rules. The traditional modular pipeline uses many independent modules, such as perception, localization, scene understanding, behavior prediction, and path planning. Each of these modules is designed, trained, and evaluated for its own purpose. In contrast, end-to-end methods go directly from the sensor input to the raw control, skipping everything in between.

Most of these works are implemented in CARLA, an open-source urban simulator for autonomous driving research. The simulator provides open digital assets (urban layouts, buildings, vehicles) created for this purpose, and supports flexible specification of sensor suites and environmental conditions. Other simulators include SUMO, MetaDrive, and SMARTS.
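To make the input-output mapping concrete, here is a minimal sketch of such a model, assuming a PyTorch-style network. The architecture, layer sizes, and all names (e.g. `E2EDrivingPolicy`) are illustrative and not taken from any particular paper.

```python
import torch
import torch.nn as nn

class E2EDrivingPolicy(nn.Module):
    """Illustrative end-to-end mapping: (image, command, speed) -> controls."""

    def __init__(self, num_commands: int = 4):
        super().__init__()
        # Perception backbone: encodes the front RGB camera image.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Fuse image features with the one-hot navigational command
        # and the current vehicle speed, then predict raw controls.
        self.head = nn.Sequential(
            nn.Linear(64 + num_commands + 1, 128), nn.ReLU(),
            nn.Linear(128, 3),  # steering, throttle, brake
        )

    def forward(self, image, command, speed):
        feat = self.encoder(image)
        x = torch.cat([feat, command, speed], dim=1)
        steer, throttle, brake = self.head(x).unbind(dim=1)
        # Squash each output to its valid control range.
        return torch.tanh(steer), torch.sigmoid(throttle), torch.sigmoid(brake)
```

In practice, many works condition on the command by selecting a command-specific branch rather than concatenating it with the features; concatenation is used here only to keep the sketch simple.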

Recent end-to-end driving methods can be divided into two mainstreams: imitation learning and reinforcement learning. Reinforcement learning (RL) is one of the most active areas of machine learning, where an agent interacts with an environment by following a policy. In each state of the environment, the agent takes an action based on the policy, and as a result receives a reward and transitions to a new state. The goal of RL is to learn an optimal policy that maximizes the long-term cumulative reward. In imitation learning, instead of trying to learn from sparse rewards or manually specifying a reward function, an expert (typically a human) provides a set of demonstrations, and the agent tries to learn the optimal policy by imitating the expert's decisions.
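The most common imitation-learning instantiation is behavior cloning, i.e., supervised regression of the predicted controls onto the expert's controls. Below is a minimal sketch of such a training loop, reusing the hypothetical `E2EDrivingPolicy` above; the random tensors stand in for batches of recorded expert demonstrations and exist only to keep the sketch self-contained.

```python
import torch
import torch.nn.functional as F

policy = E2EDrivingPolicy()
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)

for step in range(100):
    # In practice these batches come from logged expert driving data.
    image = torch.rand(8, 3, 96, 96)                            # front RGB camera
    command = F.one_hot(torch.randint(0, 4, (8,)), 4).float()   # e.g. follow / left / right / straight
    speed = torch.rand(8, 1)                                    # normalized vehicle speed
    expert_action = torch.rand(8, 3)                            # expert steer / throttle / brake

    steer, throttle, brake = policy(image, command, speed)
    pred = torch.stack([steer, throttle, brake], dim=1)
    loss = F.mse_loss(pred, expert_action)  # regress onto the expert's controls

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```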

Papers

```
format:
- [title](paper link) [links]
  - author1, author2, and author3.
  - key
  - experiment environment
```

News

CVPR 2023

ECCV 2022

ICLR 2022

CVPR 2022

CVPR 2021

CVPR 2020

ICCV 2021

NeurIPS 2022

CoRL 2022

CoRL 2020

T-PAMI

arXiv

Others

Contributing

Our goal is to make this repo even better. If you are interested in contributing, please refer to HERE for contribution instructions.

License

Awesome End-to-End Autonomous Driving is released under the Apache 2.0 license.