
Add costmap downsampler - fixes SteveMacenski/navigation2#4 #23

Merged
SteveMacenski merged 9 commits into SteveMacenski:nav2_smac_planner from carlosluis:costmap_downsampler
Aug 7, 2020

Conversation

@carlosluis

Basic Info

| Info | Please fill out this column |
| --- | --- |
| Ticket(s) this addresses | #4 |
| Primary OS tested on | Ubuntu 20.04 |
| Robotic platform tested on | Turtlebot3 simulation |

Description of contribution in a few bullet points

  • Added a CostmapDownsampler that creates a new costmap with the specified resolution. In short, this downsampler inspects the subcells of the original costmap and assigns the highest cost to the corresponding new downsampled cell.
  • Added an overlay in Rviz to visualize this map, there's an example below
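The max-cost rule described in the first bullet can be sketched as follows. This is a minimal standalone illustration; the function name `downsampleMax`, the row-major layout, and the ceil-based sizing are assumptions for the sketch, not the PR's actual API.

```cpp
#include <algorithm>
#include <vector>

// Sketch of max-cost downsampling: each cell of the coarse map takes the
// highest cost among the factor x factor subcells it covers in the source map.
std::vector<unsigned char> downsampleMax(
  const std::vector<unsigned char> & costmap,
  unsigned int size_x, unsigned int size_y, unsigned int factor)
{
  const unsigned int new_x = (size_x + factor - 1) / factor;  // ceil(size_x / factor)
  const unsigned int new_y = (size_y + factor - 1) / factor;
  std::vector<unsigned char> out(new_x * new_y, 0);
  for (unsigned int ny = 0; ny < new_y; ++ny) {
    for (unsigned int nx = 0; nx < new_x; ++nx) {
      unsigned char max_cost = 0;
      for (unsigned int dy = 0; dy < factor; ++dy) {
        for (unsigned int dx = 0; dx < factor; ++dx) {
          // clamp to the original bounds for partial cells at the map edges
          const unsigned int mx = std::min(nx * factor + dx, size_x - 1);
          const unsigned int my = std::min(ny * factor + dy, size_y - 1);
          max_cost = std::max(max_cost, costmap[my * size_x + mx]);
        }
      }
      out[ny * new_x + nx] = max_cost;
    }
  }
  return out;
}
```

With this rule, any lethal subcell makes the whole downsampled cell lethal, which keeps the coarse map conservative for planning.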

Example with Willow world

Ran a few tests in the Willow world; here's how the map looks after being downsampled from 0.1 m resolution to 0.3 m.

downsampling

Comparison

| Resolution (m) | Nodes expanded | Avg runtime (ms) |
| --- | --- | --- |
| 0.1 | 67420 | ~160 |
| 0.3 | 5460 | ~20 |

Runtime impact: downsampling the map is only done once (the first time we create a path), and it takes around 5 ms for a 566x108-cell map. Its computation time is vastly outweighed by the reduction in the number of nodes expanded.

Description of documentation updates required from your changes

  • Added two new parameters for the smac planner: boolean for using the downsampler, and a float specifying the desired resolution for downsampling.

Future work that may be required

  • Could consider other ways to downsample aside from just taking the max
  • Create a test suite for the downsampler. I could probably include it in this PR or leave it for 60%+ test coverage (goal 80%+) #9

Owner

@SteveMacenski SteveMacenski left a comment


Good first draft - a few optimizations, since this is being called at every planning iteration, and clarifying some points that may not have been clear.

@SteveMacenski
Owner

SteveMacenski commented Jul 10, 2020

In your image, I still see small dots - are those from an underlying full-resolution map? Can you give us a screenshot of just the downsampled map? I'm surprised to see black and some smaller cells with small gradients. I also don't think LETHAL is blue (but I could be wrong), and I don't see LETHALs over the small black cells - are you sure that is correct? I would think all the black cells for static obstacles in the map should be overlapped in the downsampled costmap with lethal coloring. I think lethal is red, not the light blue.

@carlosluis
Author

The image had the full-resolution map beneath it, you're right. Here's one with only the downsampled map:

downsampled_only
I just checked, and the blue color corresponds to an inscribed obstacle (cost = 253), while the magenta is lethal (cost = 254).

I verified that every black dot in the full resolution map actually corresponds to lethal cost, so I think the downsampling is correct.

Summary:
- Optimized stack allocations
- Use existing object for publishing costmap
@SteveMacenski
Owner

Got it, thanks! Let me re-review.

@carlosluis
Author

I was able to address most comments:

  • Moved costmap downsampler and publisher initializations to SmacPlanner::configure(). I thought I had problems with that setup, but it ended up working fine
  • Now the costmap downsampler supports the case when the costmap is changing sizes (e.g. the SLAM example you mentioned). If the costmap changes size then we call the resize function. I tested this feature by setting a dummy costmap of 1x1 cell in the downsampler initialization, which then got corrected to the actual size needed when calling the downsample function.
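The resize-on-size-change behavior described in the second bullet might look roughly like this. This is a hypothetical sketch (names like `resizeIfNeeded` and `DownsampledGrid` are illustrative, not the PR's code): the downsampled grid is reallocated only when the source costmap's dimensions have changed since the previous call, e.g. with a growing SLAM map.

```cpp
#include <vector>

// Illustrative container for the downsampled map (not the PR's actual type).
struct DownsampledGrid
{
  unsigned int size_x{0};
  unsigned int size_y{0};
  std::vector<unsigned char> data;
};

// Reallocate the downsampled grid only if the source costmap changed size.
void resizeIfNeeded(
  DownsampledGrid & grid, unsigned int src_x, unsigned int src_y, unsigned int factor)
{
  const unsigned int new_x = (src_x + factor - 1) / factor;  // ceil(src_x / factor)
  const unsigned int new_y = (src_y + factor - 1) / factor;
  if (new_x != grid.size_x || new_y != grid.size_y) {
    grid.size_x = new_x;
    grid.size_y = new_y;
    grid.data.assign(new_x * new_y, 0);  // fresh buffer at the new size
  }
}
```

Starting from a dummy 1x1 grid as in the test above, a 566x608 source map with factor 3 would resize it to 189x203, matching the dimensions reported later in this thread.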

if (_costmap_downsampler) {
  costmap = _costmap_downsampler->downsample();
  if (_node->count_subscribers(_costmap_topic_name) > 0) {
    _costmap_pub->publishCostmap();
Owner


I'm confused about this - it seems like you don't update the _downsampled_costmap that the publisher's initialize function was given (above, it's the local variable costmap). How is this publisher publishing the current costmap? The pointers are being swapped, and it doesn't have access to the new info.

Also, I think the variable should be something like _downsampled_costmap_pub to make clear it's not the costmap, but the downsampled debug costmap.

Owner


Maybe this publisher should be part of the downsampler itself?

Owner


Also, then you wouldn't have to return a nonsensical costmap in the initialize function.

Author


The publisher is always seeing the most updated version of the downsampled costmap. I give it a pointer upon creation, and then the object being pointed to gets updated internally when calling _costmap_downsampler->downsample().

I guess I can make the publisher part of the downsampler; it would only require passing a few variables to the downsampler's initialize method (ROS node, global frame, and topic name).
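The pointer-sharing scheme described here can be illustrated with a toy example (the types and names below are illustrative, not the PR's classes): the publisher is handed a shared pointer once at creation, and because the downsampler mutates the pointed-to object in place, the publisher always observes the latest costs without ever being re-bound.

```cpp
#include <memory>
#include <vector>

// Toy stand-in for the downsampled costmap object.
struct Grid
{
  std::vector<unsigned char> data;
};

// Toy stand-in for the costmap publisher: it stores the shared pointer it was
// given at construction and reads through it at publish time.
struct Publisher
{
  std::shared_ptr<Grid> grid;  // given once at creation, never re-assigned
  unsigned char firstCell() const {return grid->data.at(0);}
};
```

After `grid->data[0] = 254;`, `pub.firstCell()` returns 254 even though the publisher was constructed before the write.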

@carlosluis
Author

Moved the publisher to the downsampler. I think the code looks a bit cleaner now. Let me know what you think!

I've tested again that the publisher correctly updates the costmap: I initialized it with a dummy value (a 1x1 costmap), published that costmap, and then it gets correctly updated when calling downsample.

@SteveMacenski
Owner

Looks more basic now - that's a good sign. When it looks simple, that's the hallmark of a good design. I'm wondering if we should make the downsample factor an input to downsample(). It seems natural to tell it what to downsample by, and it would make our lives easier for testing if we could just sweep a variable range in a loop over one object to see the results.

@SteveMacenski
Owner

Alright, we're down to pedantic stuff. I think these are the last few changes, and then we're good to go.

@carlosluis
Author

carlosluis commented Jul 15, 2020

For some reason I pushed to my branch https://github.com/carlosluisg/navigation2/tree/costmap_downsampler but it is not showing in the PR. I addressed the last few comments there.

EDIT: it registered just now :) seems like GitHub had a small hiccup

Owner

@SteveMacenski SteveMacenski left a comment


Last change, then merge. I'm being an overly strict stickler on efficiency because it's been a focus of mine for this planner.

You have some linting problems but I'll ignore that for now and we can circle back when we're actually done and I'll lint the codebase again.

unsigned int y_offset = new_my * _downsampling_factor;

for (int j = 0; j < _downsampling_factor * _downsampling_factor; ++j) {
  mx = std::min(x_offset + j % _downsampling_factor, _size_x - 1);
Owner


Are these min checks required if you did the ceil correctly in the update step?

Author


I gave this some thought, and normally I wouldn't be able to remove those checks. The downsampled costmap I'm creating covers at least the full extent of the original costmap, but it can be a bit bigger depending on the size of the map and the sampling factor we choose. This means we can end up exploring cells that are out of bounds for the original costmap... but...

I ran a test on a 566x608 map with 0.1 m cells; when we downsample to 0.3 m cells, we get a costmap of 189x203 cells. This means that when we assign costs to the index new_mx = 188 (corresponding to cell 189), we explore cells 564, 565 and 566! Normally, getting the cost of a cell with index 566 from the original costmap should produce an out-of-bounds error, but it doesn't. Not sure why this is happening - the cost associated with it is zero. It could be that it just reads uninitialized memory, interprets it as a byte (char), and that byte happened to be zero?

Anyway, the bottom line is we shouldn't remove it if we want well-defined behaviour. Removing it will likely go out of bounds on the charmap and cause undefined behaviour near the edges of the map.

Author


Yeah, so I just checked and there's no access safety when calling _costmap->getCost(x, y). I tried checking the cost at cell 1000x1000, which is clearly out of bounds, and got:
exploring costmap at 1000x1000 -> equal to 224.

https://stackoverflow.com/questions/39188469/char-array-can-hold-more-than-expected

As soon as you read or write from an array outside of its bounds, you invoke undefined behavior.

Whenever this happens, the program may do anything it wants. It is even allowed to play you a birthday song although it's not your birthday, or to transfer money from your bank account. Or it may crash, or delete files, or just pretend nothing bad happened.

Author


If you want to get rid of the mins, the only option would be to create the downsampled costmap at most the same size as the original costmap, but likely smaller (the opposite of what we do now). That way we know for sure we will never explore out-of-bounds cells of the original costmap without needing the mins, at the price of a slightly smaller downsampled map (whereas now we can get a slightly larger one). Your call.

Author


This means simply changing ceils to floors in the updateCostmapSize method. I think losing those cells at the edges of the map is minor, considering the robot normally won't go there anyway. If you think the optimization of removing the min calls outweighs the loss of a few cells on the top and right edges of the map, then I'll send the commit. Otherwise I think we can merge as-is.
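The ceil-versus-floor trade-off can be checked against the map sizes quoted earlier in this thread. The helpers below are illustrative, not the PR's code: ceil keeps every source cell but makes the edge cells partial (hence the min clamping), while floor drops the partial edge cells and can never index past the source map.

```cpp
// Downsampled axis length when rounding up: covers the whole source axis,
// so the last coarse cell may only partially overlap real source cells.
inline unsigned int downsampledSizeCeil(unsigned int size, unsigned int factor)
{
  return (size + factor - 1) / factor;
}

// Downsampled axis length when rounding down: never exceeds the source axis,
// so no bounds clamping is needed, at the cost of dropping partial edge cells.
inline unsigned int downsampledSizeFloor(unsigned int size, unsigned int factor)
{
  return size / factor;
}
```

For the 566x608 map at factor 3 discussed above, ceil gives 189x203 while floor would give 188x202.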

Author


@SteveMacenski any thoughts / concerns with the above comment? If not I'll send a commit with the changes I mentioned.

Owner


Oh sorry, I don't know how I missed that comment (damn you, GitHub!).

I suppose a good summary is in order - we've covered a bit of ground and I'm a little confused about where we stand right now. You mentioned there's some exploring out of bounds - where is that happening? In the planning, or in this set-costs routine because we're not doing boundary checks? If it's here, why not add boundary checks so that we know deterministically when it's out of bounds and can skip non-existent elements of a partial downsampled cell? I think that's where we're at, so that we're not wrapping around.

The floor instead of ceiling comment would then just clip the partial information cells, correct? Is that really what we want to do, or for a partial cell (where there's only 1x3 or 2x3 of the 3x3 of information possible for the window) do we want to fill it in with information based on what is known? I don't know the answer to that question, I could see both.

What I'm concerned about is reports above about not correctly doing boundary checks so that either:

  • We're searching / expanding / exploring cells that are totally bogus (the 500+ indices for a 200-something-cell downsampled map)
  • We're populating the downsampled costmap with bogus values from wrap-around issues (wrapping around to cells on the other side of the map because of missing boundary checks)

Owner


It should be easy enough, knowing the total costmap size in X and Y, to check whether the cell index is a multiple of it, to know if we'd wrap around and skip those cells. We do this all the time in the costmap_2d codebase to avoid wrapping around on the linear char pointers.

Author


In the current state of the PR there's no exploring out of bounds: we check that the cells explored when assigning costs to the downsampled map are within the bounds of the original map, so there's no way to access bogus values. At planning time we use the downsampled map, and there's no risk of going out of bounds on that map either.

The reports you mentioned about not doing boundary checks were just some experiments I conducted at the time; they don't reflect what truly happens in the code (it seems they brought more confusion than anything else ^.^"). More specifically, I removed the clipping here and ran the code. This came as a suggestion from you at the time, to try removing the clipping as a code optimization.

The floor instead of ceiling comment would then just clip the partial information cells, correct? Is that really what we want to do, or for a partial cell (where there's only 1x3 or 2x3 of the 3x3 of information possible for the window) do we want to fill it in with information based on what is known? I don't know the answer to that question, I could see both.

Currently we are doing the second option: filling a partial cell with the information known for that cell. I'm okay keeping it this way.

It should be easy enough knowing the total costmap size in X and Y to check if the cell is a multiple of it to know if we wrap around and skip those cells. We do this all the time in the costmap_2d codebase to not wrap around on the linear char pointers

Isn't that equivalent to the clipping already done when calculating the cells to explore in the costmap (here)? For example, I found this piece of code in the costmap_2d codebase which does essentially the same clipping, only with if statements rather than the min calls I use. Is there any optimization in doing it that way?

Given that in the current code we are not actually exploring any out-of-bounds cells for any of the costmaps (neither the original one nor the downsampled one), is there something else I'm missing? I'm just trying to understand if there are additional concerns related to the way we populate costs in the downsampled map

Owner


I think this is fine now, if that's all accurate. I thought there was something still weird happening here. Sorry - this dragged on long enough that I lost track of the issues.

@SteveMacenski
Owner

OK I think this can go in now. Anything else you want to change?

Only 2 things I see that are really small would be (optional):

  • The min() will keep iterating over maxed-out cells instead of wrapping over; we could check whether they're in bounds and continue if not, so we avoid some more calls.
  • The set-costs loop could be 2 nested for loops so that each iteration only updates 1 variable rather than 2. Right now, on each of the N*N iterations you compute both the x and y coordinates; if you format it as 2 nested loops, each loop sets 1 variable, so you have fewer total mod and casting operations.
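The two suggestions above might combine into something like the following. This is a hypothetical sketch, not the merged code (the function name and signature are illustrative): two nested loops, each advancing a single coordinate, with a bounds check that skips non-existent subcells via continue instead of clamping them with min().

```cpp
#include <algorithm>
#include <vector>

// Max cost over the subcells of one downsampled cell (new_mx, new_my), using
// two nested loops and continue-on-out-of-bounds rather than a flat N*N loop
// with per-iteration mod/division and min clamping.
unsigned char maxSubcellCost(
  const std::vector<unsigned char> & costmap,
  unsigned int size_x, unsigned int size_y,
  unsigned int new_mx, unsigned int new_my, unsigned int factor)
{
  unsigned char max_cost = 0;
  const unsigned int x_offset = new_mx * factor;
  const unsigned int y_offset = new_my * factor;
  for (unsigned int dy = 0; dy < factor; ++dy) {
    const unsigned int my = y_offset + dy;
    if (my >= size_y) {continue;}  // partial cell: skip missing rows
    for (unsigned int dx = 0; dx < factor; ++dx) {
      const unsigned int mx = x_offset + dx;
      if (mx >= size_x) {continue;}  // partial cell: skip missing columns
      max_cost = std::max(max_cost, costmap[my * size_x + mx]);
    }
  }
  return max_cost;
}
```

Compared with the flat loop quoted earlier, the x/y recovery via % disappears, and out-of-bounds subcells are skipped once per row or column rather than clamped and re-read.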

@carlosluis
Author

Last commit should address the 2 comments you mentioned last.

If you agree with those changes, I have nothing else to add to this PR and it can be merged.

@carlosluis
Author

Reverted back to using continue

@SteveMacenski
Owner

SteveMacenski commented Aug 7, 2020

Awesome! All good, merging. This was a really great step toward restarting this work - thanks for working with me on the performance items. This planner will naturally be slower than a usual A*, so I'm hypersensitive about reducing any small overhead that could accumulate.

@SteveMacenski SteveMacenski merged commit a0cdcc0 into SteveMacenski:nav2_smac_planner Aug 7, 2020