Add costmap downsampler - fixes SteveMacenski/navigation2#4 (#23)
Conversation
SteveMacenski
left a comment
Good first draft - a few optimizations, since this is being called at every planning iteration, plus some clarifications on points that may not have been clear.
In your image, I still see small dots - are those from an underlying full-resolution map? Can you give us a screenshot of just the downsampled map? I'm surprised to see black and some smaller cells with small gradients. I also don't think LETHAL is blue (but I could be wrong), and I don't see LETHALs over the small black cells - are you sure that is correct? I would think all the black cells for static obstacles in the map should be overlapped in the downsampled costmap with lethal coloring. I think lethal is red, not the light blue.
Summary:
- Optimized stack allocations
- Use existing object for publishing the costmap
Got it, thanks! Let me re-review.
I was able to address most comments:
smac_planner/src/smac_planner.cpp (outdated)

    if (_costmap_downsampler) {
      costmap = _costmap_downsampler->downsample();
      if (_node->count_subscribers(_costmap_topic_name) > 0) {
        _costmap_pub->publishCostmap();
I'm confused about this - it seems like you don't update the _downsampled_costmap that was given to the publisher's initialize function (above, it's the local variable costmap). How is this publisher publishing the current costmap? The pointers are being swapped and it doesn't have access to the new info.
Also, I think the variable should be something like _downsampled_costmap_pub to be clear it's not the costmap, but the downsampled debug costmap.
Maybe this publisher should be part of the downsampler itself?
Also, then you wouldn't have to return a nonsensical costmap in the initialize function.
The publisher always sees the most updated version of the downsampled costmap. I give it a pointer upon creation, and then the object being pointed to gets updated internally when calling _costmap_downsampler->downsample().
I guess I can make the publisher part of the downsampler; it would only require passing a bunch of variables to the initialize method of the downsampler (ROS node, global frame and topic name).
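To make the design being discussed concrete, here is a rough sketch of that ownership pattern with hypothetical stand-in types (this is not the actual smac_planner or nav2_costmap_2d code): the downsampler owns both the downsampled grid and its debug publisher, so the publisher always sees the grid that downsample() just refreshed.

```cpp
#include <memory>
#include <vector>

struct Grid  // hypothetical stand-in for a costmap
{
  std::vector<unsigned char> costs;
  unsigned int size_x{0}, size_y{0};
};

class GridPublisher  // hypothetical stand-in for a costmap publisher
{
public:
  explicit GridPublisher(std::shared_ptr<Grid> grid) : grid_(std::move(grid)) {}
  void publish() {/* serialize *grid_ and send it out */}

private:
  std::shared_ptr<Grid> grid_;  // shared pointer: later updates are visible here
};

class Downsampler
{
public:
  void initialize(unsigned int factor)
  {
    factor_ = factor;
    downsampled_ = std::make_shared<Grid>();  // starts out as a dummy grid
    publisher_ = std::make_unique<GridPublisher>(downsampled_);
  }

  Grid * downsample(const Grid & original)
  {
    (void)original;  // placeholder: fill *downsampled_ from original here
    publisher_->publish();  // publishes the freshly updated grid
    return downsampled_.get();
  }

private:
  unsigned int factor_{1};
  std::shared_ptr<Grid> downsampled_;
  std::unique_ptr<GridPublisher> publisher_;
};
```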
Moved the publisher to the downsampler. I think the code looks a bit cleaner now. Let me know what you think! I've tested again that the publisher is correctly updating the costmap. I initialized it with some dummy value (a 1x1 costmap), published that costmap, and verified it gets correctly updated when calling downsample().
Looks more basic now - that's a good sign. When it looks simple, that's the hallmark of a good design. I'm wondering if we should make the downsample factor an input to
Alright, we're down to pedantic stuff. I think these are the last few changes and then we're good to go.
For some reason I pushed to my branch https://github.com/carlosluisg/navigation2/tree/costmap_downsampler but it is not showing in the PR. I addressed the last few comments there. EDIT: it registered just now :) seems like GitHub had a small hiccup.
SteveMacenski
left a comment
Last change, then merge. I'm being overly sticklery on efficiency because it's been a focus of mine for this planner.
You have some linting problems but I'll ignore that for now and we can circle back when we're actually done and I'll lint the codebase again.
    unsigned int y_offset = new_my * _downsampling_factor;

    for (int j = 0; j < _downsampling_factor * _downsampling_factor; ++j) {
      mx = std::min(x_offset + j % _downsampling_factor, _size_x - 1);
Are these min checks required if you did the ceil correctly in the update step?
I gave this some thought and normally I wouldn't be able to remove those checks. The downsampled costmap I'm creating covers at least the area of the original costmap, but it can extend a bit beyond it depending on the size of the map and the sampling factor we chose. That means we can end up exploring cells that are out of bounds for the original costmap... But...
I ran a test on a 566x608 map with cells of 0.1 m: when we downsample to cells of 0.3 m, we get a costmap of 189x203 cells. This means that when we assign costs to the index new_mx = 188 (corresponding to cell 189) we explore cells 564, 565 and 566! Normally, getting the cost of the cell with index 566 from the original costmap should be an out-of-bounds error, but it isn't. I'm not sure why this is happening; I see the cost associated with it is zero. It could be that it just reads uninitialized memory, interprets it as a byte (char), and that byte happened to be zero?
Anyways, the bottom line is we shouldn't remove it if we want to have well-defined behaviour. Removing it will likely go out of bounds on the charmap and cause some undefined behaviour near the edges of the map.
Yeah, so I just checked and there's no access safety when calling _costmap->getCost(x, y). I tried checking the cost at cell 1000x1000, which is clearly out of bounds, and got:
exploring costmap at 1000x1000 -> equal to 224.
https://stackoverflow.com/questions/39188469/char-array-can-hold-more-than-expected
> As soon as you read or write from an array outside of its bounds, you invoke undefined behavior.
> Whenever this happens, the program may do anything it wants. It is even allowed to play you a birthday song although it's not your birthday, or to transfer money from your bank account. Or it may crash, or delete files, or just pretend nothing bad happened.
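As a generic illustration of that point (a toy example, not nav2 or costmap_2d code), an unchecked lookup into a flat cost array compiles and returns some value even when the index is far out of range - which is consistent with the 1000x1000 lookup above returning 224 instead of crashing:

```cpp
#include <cstdio>

int main()
{
  const unsigned int size_x = 4, size_y = 4;
  unsigned char costs[size_x * size_y] = {0};  // tiny 4x4 "costmap"

  const unsigned int mx = 1000, my = 1000;  // clearly out of bounds
  // No bounds check, mirroring a raw getCost(mx, my)-style lookup:
  const unsigned char cost = costs[my * size_x + mx];  // undefined behavior

  std::printf("exploring costmap at %ux%u -> %u\n", mx, my,
    static_cast<unsigned int>(cost));
  return 0;
}
```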
If you want to get rid of the mins, the only option would be to create the costmap at most the same size as the original costmap, but likely smaller (the opposite of what we do now). This way we know for sure we will never explore out-of-bounds cells of the original costmap without needing the mins, at the price of having a slightly smaller downsampled map (whereas now we can have a slightly larger one). Your call.
This means simply changing ceils to floors in the updateCostmapSize method. I think it's minor not having those cells at the edges of the map, taking into account that normally the robot won't be going there anyway. If you think the optimization of not having the min calls outweighs the loss of a few cells at the top and right edges of the map, then I'll send the commit. Otherwise I think we can merge as is.
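To make the trade-off concrete, here is a standalone sketch (using the numbers from this thread, not the PR's actual updateCostmapSize()) of how ceil versus floor sizing plays out: ceil covers the whole original map but leaves partial edge windows that need the min clamps, while floor drops those partial cells and never needs the clamps.

```cpp
#include <cmath>
#include <cstdio>

int main()
{
  const double size_x = 566.0, size_y = 608.0;  // original map, 0.1 m cells
  const double factor = 3.0;                    // 0.1 m -> 0.3 m

  std::printf("ceil : %ux%u cells (partial edge cells kept, min clamps needed)\n",
    static_cast<unsigned int>(std::ceil(size_x / factor)),    // 189
    static_cast<unsigned int>(std::ceil(size_y / factor)));   // 203
  std::printf("floor: %ux%u cells (partial edge cells dropped, no clamps)\n",
    static_cast<unsigned int>(std::floor(size_x / factor)),   // 188
    static_cast<unsigned int>(std::floor(size_y / factor)));  // 202
  return 0;
}
```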
@SteveMacenski any thoughts / concerns with the above comment? If not I'll send a commit with the changes I mentioned.
Oh sorry, I don't know how I missed that comment (damn you github!).
I suppose a good summary is in order - we've covered a bit of ground and I'm a little confused where we stand right now. You mentioned there's some exploring out of bounds - where is that happening? In the planning, or in this set-costs step because we're not doing boundary checks? If here, why not add boundary checks so that we know deterministically when it's out of bounds and can skip the non-existent elements of a partial downsampled cell? I think that's where we're at, so that we're not wrapping around.
The floor instead of ceiling comment would then just clip the partial-information cells, correct? Is that really what we want to do, or for a partial cell (where there's only 1x3 or 2x3 of the 3x3 of information possible for the window) do we want to fill it in with information based on what is known? I don't know the answer to that question; I could see both.
What I'm concerned about is reports above about not correctly doing boundary checks so that either:
- We're searching / expanding / exploring cells that are totally bogus (the 500+ numbers for a 200-something downsampled map)
- We're populating the downsampled costmap with bogus values from wrap-around issues (we're wrapping around to cells on the other side of the map because we're not boundary checking)
It should be easy enough, knowing the total costmap size in X and Y, to check if the cell is a multiple of it, to know if we wrap around and skip those cells. We do this all the time in the costmap_2d codebase to avoid wrapping around on the linear char pointers.
In the current state of the PR there's no exploring out of bounds; we check that the cells explored when assigning costs to the downsampled map are within the bounds of the original map, so there's no way to access bogus values. At planning time we use the downsampled map, and there's no risk of going out of bounds on that map either.
The reports you mentioned about not doing boundary checks were just some experiments I conducted at the time; they don't reflect what truly happens in the code (it seems they brought more confusion than anything else ^.^"). More specifically, I removed the clipping here and ran the code. This came as a suggestion from you at the time, to try removing the clipping as an optimization of the code.
> The floor instead of ceiling comment would then just clip the partial-information cells, correct? Is that really what we want to do, or for a partial cell (where there's only 1x3 or 2x3 of the 3x3 of information possible for the window) do we want to fill it in with information based on what is known? I don't know the answer to that question; I could see both.
Currently we are doing the second option: filling a partial cell with the information known for that cell. I'm okay keeping it this way.
> It should be easy enough, knowing the total costmap size in X and Y, to check if the cell is a multiple of it, to know if we wrap around and skip those cells. We do this all the time in the costmap_2d codebase to avoid wrapping around on the linear char pointers.
Isn't that equivalent to the clipping already done when calculating the cells to explore in the costmap (here)? For example, I found this piece of code in the costmap_2d codebase which essentially does the same clipping, only with if statements rather than the min calls I use. Is there any optimization to doing it that way?
Given that in the current code we are not actually exploring any out-of-bounds cells in either costmap (neither the original one nor the downsampled one), is there something else I'm missing? I'm just trying to understand if there are additional concerns related to the way we populate costs in the downsampled map.
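For reference, the two guards being compared boil down to something like the following self-contained sketch (illustrative only - neither function is the literal PR or costmap_2d code): clamping with std::min re-reads border cells for partial windows, while an explicit bounds check skips out-of-range subcells entirely.

```cpp
#include <algorithm>
#include <cstdint>
#include <optional>
#include <vector>

// Option A: clamp the subcell index to the map bounds (min calls).
uint8_t cost_clamped(
  const std::vector<uint8_t> & costs,
  unsigned int size_x, unsigned int size_y,
  unsigned int mx, unsigned int my)
{
  mx = std::min(mx, size_x - 1);
  my = std::min(my, size_y - 1);
  return costs[my * size_x + mx];
}

// Option B: explicit bounds check; out-of-range subcells are skipped by the caller.
std::optional<uint8_t> cost_checked(
  const std::vector<uint8_t> & costs,
  unsigned int size_x, unsigned int size_y,
  unsigned int mx, unsigned int my)
{
  if (mx >= size_x || my >= size_y) {
    return std::nullopt;  // outside the original map: skip this subcell
  }
  return costs[my * size_x + mx];
}
```

Both keep every read inside the flat cost array; the difference is only whether a partial window duplicates its border cells (A) or ignores the missing ones (B).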
I think this is fine now, if that's all accurate. I thought there was something still weird happening here. Sorry - this was prolonged long enough that I lost memory of the issues.
OK, I think this can go in now. Anything else you want to change? The only 2 things I see that are really small (and optional) would be:
The last commit should address the 2 comments you mentioned. If you agree with those changes, I have nothing else to add to this PR and it can be merged.
This reverts commit 94aac6d.
Reverted back to using continue.
Awesome! All good, merging. This was a really great step forward in restarting this work - thanks for working with me on the performance items. This planner will naturally be slower than a usual A*, so I'm hyper-sensitive about making sure to reduce any small overhead that could accumulate.

Basic Info
Description of contribution in a few bullet points
- Added a CostmapDownsampler that creates a new costmap with the specified resolution. In short, this downsampler inspects the subcells of the original costmap and assigns the highest cost to the corresponding new downsampled cell.
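As a rough, standalone illustration of that idea (a max-pooling sketch with made-up names, not the PR's actual CostmapDownsampler):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Downsample a row-major cost grid by 'factor': each output cell takes the
// maximum cost of the (up to) factor x factor subcells it covers.
std::vector<uint8_t> downsample_grid(
  const std::vector<uint8_t> & costs,
  unsigned int size_x, unsigned int size_y,
  unsigned int factor,
  unsigned int & new_size_x, unsigned int & new_size_y)
{
  // Ceil so the downsampled grid covers the whole original map;
  // this is why windows at the top/right edges can be partial.
  new_size_x = (size_x + factor - 1) / factor;
  new_size_y = (size_y + factor - 1) / factor;
  std::vector<uint8_t> out(new_size_x * new_size_y, 0);

  for (unsigned int ny = 0; ny < new_size_y; ++ny) {
    for (unsigned int nx = 0; nx < new_size_x; ++nx) {
      uint8_t max_cost = 0;
      for (unsigned int dy = 0; dy < factor; ++dy) {
        for (unsigned int dx = 0; dx < factor; ++dx) {
          // Clamp so partial edge windows never read past the original grid.
          const unsigned int x = std::min(nx * factor + dx, size_x - 1);
          const unsigned int y = std::min(ny * factor + dy, size_y - 1);
          max_cost = std::max(max_cost, costs[y * size_x + x]);
        }
      }
      out[ny * new_size_x + nx] = max_cost;
    }
  }
  return out;
}
```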
Example with Willow world
Ran a few tests in the Willow world; here's how the map looks after being downsampled from a 0.1 m resolution to 0.3 m:
Comparison
Runtime impact: downsampling the map is only done once (the first time we create a path) and it takes around 5 ms for a 566x608-cell map. Its computation time is vastly outweighed by the reduction in the number of nodes expanded.
Description of documentation updates required from your changes
Future work that may be required