
Implementation of model_delay with dynamic parameter#6066

Open
BriceRenaudeau wants to merge 1 commit intoros-navigation:mainfrom
BriceRenaudeau:MPPI_model_delay

Conversation

@BriceRenaudeau
Contributor

Basic Info

  • Ticket(s) this addresses: #6065
  • Primary OS tested on: Ubuntu
  • Robotic platform tested on: Our real robot
  • Does this PR contain AI generated software? No
  • Was this PR description generated by AI software? No

@SteveMacenski and @doisyg , here is a small implementation of our talk long ago.

Description of contribution in a few bullet points

  • Add a new parameter in the mppi model call model_delay
  • Shift the command vector according to the model delay
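The shift described above can be sketched as follows. This is a minimal standalone illustration, not the PR's actual code; the function name `shiftForDelay` is hypothetical, and the parameter names `model_delay` / `model_dt` follow the discussion below:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Sketch: shift a per-trajectory command sequence later in time by the
// number of model steps covered by the actuation delay, repeating the
// first command to fill the gap (we cannot know what preceded the horizon).
std::vector<double> shiftForDelay(
  std::vector<double> commands, double model_delay, double model_dt)
{
  // Round the delay to the nearest whole number of model steps
  const unsigned int offset =
    static_cast<unsigned int>(std::floor(model_delay / model_dt + 0.5));
  if (offset == 0 || offset >= commands.size()) {return commands;}
  // Move existing commands later in time; back-fill with the first value
  std::copy_backward(commands.begin(), commands.end() - offset, commands.end());
  std::fill(commands.begin(), commands.begin() + offset, commands.front());
  return commands;
}
```

For example, with `model_dt = 0.1` and `model_delay = 0.2`, the sequence `{1, 2, 3, 4}` becomes `{1, 1, 1, 2}`: each command takes effect two steps later, and the earliest known command is held during the delay.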

Description of how this change was tested

  • This modification runs on our robot; behavior in long, narrow corridors improved, reducing zigzag.

Future work that may be required in bullet points

  • Implement the same shift on the visualization to see the real trajectory preview.

For Maintainers:

  • Check that any new parameters added are updated in docs.nav2.org
  • Check that any significant change is added to the migration guide
  • Check that any new features OR changes to existing behaviors are reflected in the tuning guide
  • Check that any new functions have Doxygen added
  • Check that any new features have test coverage
  • Check that any new plugin is added to the plugins page
  • If BT Node, Additionally: add to BT's XML index of nodes for groot, BT package's readme table, and BT library lists
  • Should this be backported to current distributions? If so, tag with backport-*.

}

// Apply model delay
if (model_delay_ == 0.0) {return;}
Member


Would it make sense to skip if less than model_dt_? Or I suppose by the current math < 0.5 * model_dt_ would be the same, since the offset would be floored to 0 in that case.

But more architecturally, if the offset is less than one cycle's DT, does it make sense to apply any lag?

auto state_copy = state;
for (unsigned int i = 0; i != state.vx.shape(0); i++) {
for (unsigned int j = 1; j != state.vx.shape(1); j++) {
// Keep the first value before delay (because we cannot do better)
Member


Suggested change
// Keep the first value before delay (because we cannot do better)
// Keep the first values before delay

Making it plural in case the offset is large.

unsigned int offset = std::floor((model_delay_ / model_dt_) + 0.5);
auto state_copy = state;
for (unsigned int i = 0; i != state.vx.shape(0); i++) {
for (unsigned int j = 1; j != state.vx.shape(1); j++) {
Member


I'll probably take care of this for you but leaving a note for myself: These loops may be eigenized for vectorized operations.
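The nested loops could be replaced with per-row block operations, which is the same structure an Eigen segment/block expression would vectorize. A sketch under that assumption, using only the standard library and a hypothetical `shiftRows` helper over a row-major batch-by-timesteps matrix (not the PR's actual code):

```cpp
#include <algorithm>
#include <vector>

// Sketch: replace the scalar inner loop with one block copy per row.
// `data` is a row-major (rows x cols) matrix of commands; each row is a
// trajectory's command sequence, shifted right by `offset` steps with
// the first value held to fill the gap.
void shiftRows(std::vector<double> & data, unsigned int rows,
  unsigned int cols, unsigned int offset)
{
  if (offset == 0 || offset >= cols) {return;}
  for (unsigned int i = 0; i < rows; ++i) {
    double * row = data.data() + static_cast<size_t>(i) * cols;
    // One contiguous block move per row instead of an element-wise loop
    std::copy_backward(row, row + cols - offset, row + cols);
    // After the move, row[offset] holds the original first value
    std::fill(row, row + offset, row[offset]);
  }
}
```

In Eigen this would map naturally onto row-wise `segment` or `rightCols`/`leftCols` assignments, letting the library vectorize the copies.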

@SteveMacenski
Member

Thanks for this. Briefly, did you look at huynhduc9905#6 which implemented a first-order lag model? Is there any of this that you'd like / think we should use (or a different strategy and what you did is better)? I believe the intention is to estimate the lag at run-time versus a parameterization.

Member

@SteveMacenski SteveMacenski left a comment


Usual bits on adding the param to the configuration guide + mention this in the migration guide (though looks like this has some compilation issues).

I know that this is a patch you've built that fixed an issue for you, so if you need me to drive it home with fixing things up with the current version of MPPI, I can do that. I'd just ask that you test to make sure that this works still for your needs & answer my questions above.

Also note this comment: #6065 (comment)

@SteveMacenski SteveMacenski mentioned this pull request Apr 7, 2026
