Demo 1. First Drive
This was originally written on Nov. 25, 2020. I've done my best to revise where appropriate. -Will
Assumptions
- Driving will occur in daytime, in clear, fairly sunny weather. No night driving, rain, or fog.
- A human will be behind the wheel at all times, ready to take over.
- This person will have complete control over the throttle and brake.
- The computer will only be able to interface with the steering wheel.
- The computer will have access to processed, pre-recorded map data, including a point cloud and lane layouts.
- Finally, the computer will be given the precise location of its starting position and goal destination in global coordinates.
Success condition
- The car should travel from Point A to Point B.
- Only the computer should apply any control whatsoever to the steering wheel for the duration of the trip.
- The car should remain in the appropriate lane.
Above: Route overview
Above: East track detail
Above: West track detail
These routes (selected by Grace and Kyle) were chosen for the following reasons:
- They contain a good amount of "texture": trees, rocks, building facades. These make NDT localization more stable than, say, a flat parking lot.
- They contain relatively little foot and car traffic, especially during the summer, compared to hotspots like the Visitor Center area and Rutford Ave.
- They have low speed limits (20 mph) that our car can handle.
- The East Track features slight curves and vegetation, while the West Track is a straight line, but on a slope and with buildings on either side. The two combined will give a fair idea of our system's performance.
This lane-keeping demonstration demands answers to five questions:
- What does the map look like?
- Where am I on the map?
- Where is my destination on the map?
- What is the best path from my location to my destination?
- How can I translate this path into steering angles?
While the demonstration may seem broadly complex (and it is), each of these questions on its own is much more approachable.
- Autonomous Driving Systems (ADSs) nearly always use HD maps. These are detailed maps of the road. They include the locations of key traffic elements, like stop signs and crosswalks, the number of lanes per road, the locations of centerlines on the roads, etc.
- Autoware.Auto (built on ROS2) uses a library called Lanelet2 to manage HD map data.
- Example Lanelet2 map:
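For reference, here's a minimal sketch of loading and inspecting a Lanelet2 map with its Python bindings. The filename and the projection origin (roughly the UTD campus) are placeholders, not values from our actual map:

```python
import lanelet2
from lanelet2.io import Origin
from lanelet2.projection import UtmProjector

# Placeholder origin near the UTD campus; our real map will define its own.
projector = UtmProjector(Origin(32.9857, -96.7502))
lanelet_map = lanelet2.io.load("campus_map.osm", projector)  # hypothetical filename

# Each lanelet carries an ID plus left/right bounds and a centerline.
for lanelet in lanelet_map.laneletLayer:
    print(lanelet.id, lanelet.centerline)
```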
This question refers to localization. The most popular localization device for everyday use is GPS (more broadly, GNSS). GPS alone is not accurate or reliable enough for autonomous cars. Instead, most algorithms take GPS data as a starting point, then refine it with other sensor data (usually LIDAR or stereo cameras).
Autoware.Auto uses Normal Distribution Transform (NDT) Localization to refine initial GPS estimates. Find more in this Autoware lecture.
We can start with this implementation, but it doesn't account for vehicle motion between frames at significant speeds. NDT localization basically compares the previous LIDAR frame with the current one, and the Autoware implementation does not shift the previous frame to account for vehicle movement. This leads to serious instability at higher speeds (roughly above 5 m/s, or about 11 mph).
- In other words, our initial demonstration can use the existing implementation, but we should fix it afterward (a sketch of the fix follows this list).
- It's also worth considering fusing NDT localization with other approaches later on. Adding the locations of any detected signs or lane markings, for instance, has been implemented successfully by other groups. See Poggenhans et al.
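To make that fix concrete, here's a sketch (plain NumPy, not Autoware's actual code) of shifting the NDT initial guess with a constant-velocity prediction before matching the new frame:

```python
import numpy as np

def predict_initial_guess(prev_pose, speed, yaw_rate, dt):
    """Constant-velocity motion model: shift the last NDT pose estimate
    forward by the motion expected over one LIDAR frame, so the matcher
    starts near the true pose even when the vehicle is moving quickly."""
    x, y, yaw = prev_pose
    x += speed * np.cos(yaw) * dt
    y += speed * np.sin(yaw) * dt
    yaw += yaw_rate * dt
    return np.array([x, y, yaw])

# At 5 m/s and a 10 Hz LIDAR, the car moves 0.5 m between frames --
# enough to destabilize NDT if the initial guess isn't shifted.
guess = predict_initial_guess((10.0, 4.0, 0.3), speed=5.0, yaw_rate=0.05, dt=0.1)
```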
The answer here is thankfully pretty simple. Just define a coordinate on the HD map.
This might be the hardest question to answer. The literature usually breaks this into four subsystems: the Route Planner, Path Planner, Behavior Selector, and Motion Planner. I'll offer a brief summary here, but Badue et al. go into some nice detail in section 4.
The Route Planner provides a general route that links the current position and destination. It's typically represented as a list of coordinates, but it could just as effectively be a list of street names.
- The Route Planner gives directions like one human might give to another: "To get to the grocery store from here, go down Mockingbird Ave, then King St, and finally Purple Pkwy."
- To a computer, we instead provide an ordered list of lane IDs from our HD map. Pretty similar, but instead of "Mockingbird Ave," it's "Lane #5303."
- To simplify this demo, we can provide our list of points ahead of time, then implement the Route Planner later (a routing sketch follows this list). In other words, the Route Planner is low priority.
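When we do implement it, Lanelet2's routing module gives us most of the Route Planner. A sketch, assuming the map from earlier and made-up lane IDs (Lanelet2 only bundles German traffic rules, which is fine here since we only need lane connectivity):

```python
import lanelet2
from lanelet2.io import Origin
from lanelet2.projection import UtmProjector

projector = UtmProjector(Origin(32.9857, -96.7502))          # placeholder origin
lanelet_map = lanelet2.io.load("campus_map.osm", projector)  # hypothetical file

traffic_rules = lanelet2.traffic_rules.create(
    lanelet2.traffic_rules.Locations.Germany,    # only bundled rule set
    lanelet2.traffic_rules.Participants.Vehicle)
graph = lanelet2.routing.RoutingGraph(lanelet_map, traffic_rules)

# Hypothetical lane IDs; the route is just an ordered list of them.
start, goal = lanelet_map.laneletLayer[5303], lanelet_map.laneletLayer[5341]
path = graph.shortestPath(start, goal)
route_ids = [ll.id for ll in path] if path is not None else []
```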
The Path Planner converts our list of points from the Route Planner into a smooth curve, represented as a list of poses [(x_1, y_1, θ_1), …, (x_n, y_n, θ_n)]. A good path planner actually produces a set of valid paths, and we can later select the optimal one. An approach by Hu et al. (2018) might be useful here (you can log in with your UTD account to view it).
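As a starting point (a single smoothed path rather than Hu et al.'s set of candidates), here's a sketch using a cubic spline over the route points, with headings taken from the spline tangents:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def smooth_path(route_xy, n_samples=200):
    """Fit a cubic spline through sparse route points and resample it
    into a dense list of poses (x, y, theta), with theta taken from the
    spline's tangent direction."""
    route_xy = np.asarray(route_xy, dtype=float)
    # Parameterize the spline by cumulative arc length of the inputs.
    seg = np.linalg.norm(np.diff(route_xy, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])
    spline = CubicSpline(s, route_xy, axis=0)

    s_dense = np.linspace(0.0, s[-1], n_samples)
    xy = spline(s_dense)
    dxy = spline(s_dense, 1)                    # first derivative = tangent
    theta = np.arctan2(dxy[:, 1], dxy[:, 0])
    return np.column_stack([xy, theta])         # rows: (x_i, y_i, theta_i)
```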
The Behavior Selector selects the best path from our Path Planner (if more than one is provided) and adds a desired speed to each pose along the path, up to a certain point called the "decision horizon."
- For this demonstration, we can omit the Behavior Selector entirely, since only steering matters here, not speed. (A sketch of what its output could look like follows, for future reference.)
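A hedged sketch of that output: each pose gets a desired speed up to a fixed decision horizon (the function name and values are placeholders, not anything from the literature):

```python
import numpy as np

def attach_speeds(path, target_speed, horizon_m):
    """Tag each pose (x, y, theta) with a desired speed up to the
    decision horizon; poses beyond the horizon get zero speed so the
    plan naturally ends there."""
    xy = path[:, :2]
    dist = np.concatenate(
        [[0.0], np.cumsum(np.linalg.norm(np.diff(xy, axis=0), axis=1))])
    speeds = np.where(dist <= horizon_m, target_speed, 0.0)
    return np.column_stack([path, speeds])      # rows: (x, y, theta, v)
```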
At this point we assume that a path P = [(x_1, y_1, θ_1), …, (x_n, y_n, θ_n)] and associated velocities have been calculated. Three pieces remain: what we'll call the Motion Planner, Safety Guard, and Actuation Bridge. These final pieces are the most critical and should be implemented properly!
The Motion Planner generates a list of subgoals S = [s_1, …, s_n] that link the vehicle's current state with the goal provided by the Behavior Selector. Each subgoal s_k = (p_k, t_k) contains a pose p_k and an absolute time t_k when the pose is expected to be achieved (see section 4.4 of Badue et al.).
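A minimal sketch of what those subgoals might look like, assuming the (x, y, θ, v) rows from the sketches above (the helper names are ours, not Badue et al.'s):

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Subgoal:
    pose: np.ndarray  # p_k = (x, y, theta)
    time: float       # t_k, absolute time the pose should be reached

def make_subgoals(path_with_speeds, t0):
    """Turn (x, y, theta, v) rows into timestamped subgoals by
    integrating travel time along the path (time = distance / speed)."""
    subgoals, t = [], t0
    for k, (x, y, theta, v) in enumerate(path_with_speeds):
        if k > 0:
            px, py, _, pv = path_with_speeds[k - 1]
            d = float(np.hypot(x - px, y - py))
            t += d / max(pv, 0.1)  # floor the speed to avoid divide-by-zero
        subgoals.append(Subgoal(np.array([x, y, theta]), t))
    return subgoals
```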
The Safety Guard checks our subgoals against a number of heuristics, from obvious ones like comparing desired velocities against the speed limit, to checking for large desired angular accelerations, to even low-level, last-minute obstacle detection.
- The Safety Guard should be confined to software and should not manage things like an electrical E-stop switch. It should operate in real time (< 1 ms latency).
- For our demonstration, the Guard can just ensure that our velocities are at or below our predefined speed limit (~10 mph, say), as sketched below.
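For the demo, that reduces to a few lines operating on the (x, y, θ, v) rows from the sketches above (the limit value is our placeholder):

```python
import numpy as np

SPEED_LIMIT = 4.5  # m/s, roughly 10 mph -- our predefined demo limit

def guard_speeds(path_with_speeds, limit=SPEED_LIMIT):
    """Minimal Safety Guard for this demo: clamp every desired speed to
    the predefined limit. A fuller version would also check angular
    accelerations, do last-minute obstacle checks, etc."""
    guarded = np.array(path_with_speeds, dtype=float)
    guarded[:, 3] = np.minimum(guarded[:, 3], limit)
    return guarded
```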
Finally, the Actuation Bridge translates the current pose p_t within subgoal s_t into "raw" accelerator and steering efforts (commands). This can likely be accomplished with some sort of PID loop or similar. It doesn't need to be complicated, but it does need to account for the current odometry and efforts, then calculate appropriate offsets to generate new efforts.
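For steering, the simplest version is a PID on heading error. A sketch, with placeholder gains that would need tuning on the real car:

```python
import numpy as np

class SteeringPID:
    """PID on heading error: compare the vehicle's current yaw (from
    odometry) with the target pose's yaw and output a steering effort.
    Gains are placeholders to be tuned on the real car."""

    def __init__(self, kp=1.0, ki=0.0, kd=0.1):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, current_yaw, target_yaw, dt):
        # Wrap the heading error into [-pi, pi) so the controller never
        # tries to "unwind" the long way around.
        error = (target_yaw - current_yaw + np.pi) % (2 * np.pi) - np.pi
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        # Raw steering effort; downstream code would clamp/normalize it.
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```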
Here is a breakdown of how the Planning and Interface teams will likely need to work with each other. The green arrows indicate where the results of one part will be used for another. The Planning team can operate entirely in a simulator until the Interface team is ready.
The structure for our first demo is drastically simpler than the typical structure of a more feature-complete system.
Once all five questions are addressed, we should be able to follow a path. There are some obvious limitations to this minimal demonstration (e.g., it cannot adapt to obstacles or pedestrians, and throttle/brake remain manual), but all five "answers" can be refined in future demonstrations, so no work is wasted.