
Add hidden configuration for sending batched messages sequentially #2543

Merged
romac merged 12 commits into master from soares/sequential-wait on Aug 23, 2022

Conversation

soareschen
Contributor

Partial fix for #2350

Description

This PR adds a hidden chain configuration option, `sequential_batch_tx`, which causes the relayer to submit batched messages sequentially, waiting for the previous transaction to be committed to the chain before sending the next one.

The config defaults to `false`, which disables sequential sending. However, we can instruct relayer operators to set it to `true` in the rare case that batched transactions are failing because a chain's validators have turned on the priority mempool.

Note that enabling sequential sending may degrade performance. Hence we do not expose this config in the public documentation, to prevent relayer operators from shooting themselves in the foot.
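
For concreteness, here is a minimal sketch of how such a hidden flag could be declared in a chain config struct. Apart from the `sequential_batch_tx` name and its `false` default (both from this PR), the struct and field layout are assumptions, not the actual Hermes code.

```rust
// A minimal sketch, assuming a serde-based config struct similar in spirit to
// Hermes' chain configuration. Everything except the `sequential_batch_tx`
// field name and its `false` default is a simplified assumption.
use serde::Deserialize;

#[derive(Debug, Deserialize)]
pub struct ChainConfig {
    pub id: String,
    // ... other chain settings elided ...

    /// When true, submit batched messages one transaction at a time, waiting
    /// for each transaction to be committed before broadcasting the next.
    /// Defaults to false (the existing parallel behavior) and is deliberately
    /// left out of the public documentation.
    #[serde(default)]
    pub sequential_batch_tx: bool,
}
```

An operator would then presumably enable it by adding `sequential_batch_tx = true` to the relevant `[[chains]]` section of their `config.toml`.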


PR author checklist:

  • Added changelog entry, using unclog.
  • Added tests: integration (for Hermes) or unit/mock tests (for modules).
  • Linked to GitHub issue.
  • Updated code comments and documentation (e.g., docs/).
  • Tagged one reviewer who will be responsible for shepherding this PR.

Reviewer checklist:

  • Reviewed Files changed in the GitHub PR explorer.
  • Manually tested (in case integration/unit/mock tests are absent).

@soareschen
Contributor Author

Integration tests will be added before this PR is ready to merge.

@soareschen soareschen changed the title Soares/sequential wait Add hidden configuration for sending batched messages sequentially Aug 10, 2022
@soareschen soareschen marked this pull request as ready for review August 10, 2022 18:18
}

Ok(tx_sync_results)
}

async fn sequential_send_messages_as_batches(
Contributor

I'm concerned about adding more code duplication here. When contemplating a refactoring of the batching logic to respect the actual tx size, there are already two functions to take care of, and this adds a third one.

Could this be incorporated into the regular send_messages_as_batches under a dynamic flag?

Contributor Author

The core logic has already been abstracted into helper functions such as batch_messages, send_tx_with_account_sequence_retry, and wait_for_block_commits. Functions like sequential_send_messages_as_batches are only higher-level functions that wire up these lower-level helpers, and those helpers already serve the purpose of code deduplication.

Using a dynamic flag can obscure the logic that differentiates send_messages_as_batches from sequential_send_messages_as_batches. Keeping them as separate functions makes it much clearer how their behaviors differ from one another.
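
To make the comparison concrete, here is a hedged sketch of how the two variants could wire up the same lower-level helpers differently. All types, signatures, and helper bodies are simplified stand-ins, not the actual Hermes code.

```rust
// A hedged sketch, not the actual Hermes code: how the parallel and sequential
// variants might wire up the same lower-level helpers. All types, signatures,
// and helper bodies below are simplified stand-ins.
struct Msg;
struct TxSyncResult;
struct Error;

// Stand-in: split messages into size-bounded batches.
fn batch_messages(msgs: Vec<Msg>) -> Result<Vec<Vec<Msg>>, Error> {
    Ok(vec![msgs])
}

// Stand-in: broadcast one transaction, retrying on account-sequence mismatches.
async fn send_tx_with_account_sequence_retry(_batch: Vec<Msg>) -> Result<TxSyncResult, Error> {
    Ok(TxSyncResult)
}

// Stand-in: poll the chain until every transaction in `results` is committed.
async fn wait_for_block_commits(_results: &mut Vec<TxSyncResult>) -> Result<(), Error> {
    Ok(())
}

// Parallel variant: broadcast all batches first, then wait for all commits.
async fn send_messages_as_batches(msgs: Vec<Msg>) -> Result<Vec<TxSyncResult>, Error> {
    let mut tx_sync_results = Vec::new();
    for batch in batch_messages(msgs)? {
        tx_sync_results.push(send_tx_with_account_sequence_retry(batch).await?);
    }
    wait_for_block_commits(&mut tx_sync_results).await?;
    Ok(tx_sync_results)
}

// Sequential variant: wait for each batch to be committed before sending the next.
async fn sequential_send_messages_as_batches(msgs: Vec<Msg>) -> Result<Vec<TxSyncResult>, Error> {
    let mut tx_sync_results = Vec::new();
    for batch in batch_messages(msgs)? {
        let mut single = vec![send_tx_with_account_sequence_retry(batch).await?];
        wait_for_block_commits(&mut single).await?;
        tx_sync_results.append(&mut single);
    }
    Ok(tx_sync_results)
}
```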


tx_sync_results.push(tx_sync_result);

wait_for_block_commits(
Contributor Author

With sequential sending, we get the DeliverTx result before sending the next transaction. However, it is a bit unclear whether we should continue submitting the remaining batches to the chain if DeliverTx or SendTx returns an error.

Presumably, we should short-circuit and return on the first error encountered. However, I keep the behavior the same as in the parallel version, so that we do not observe unexpected differences in behavior when sequential batching is enabled.

The anti-pattern of converting errors into duplicate IBC error events also significantly complicates the design space here, so I'd rather leave the behavior untouched until chain errors are handled in a better way.
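
For illustration only, here is a hedged sketch of the two policies being weighed here, short-circuiting versus continuing, using made-up names rather than the actual Hermes implementation.

```rust
// A hedged sketch of the two policies discussed above, with made-up names;
// none of this is the actual Hermes code.
struct Batch;
struct TxResult;
struct TxError;

enum ErrorPolicy {
    ShortCircuit, // stop at the first failed batch
    Continue,     // record the error and keep submitting the remaining batches
}

// Stand-in for "broadcast the batch and wait for its DeliverTx result".
fn send_and_wait(_batch: Batch) -> Result<TxResult, TxError> {
    Ok(TxResult)
}

fn submit_batches(batches: Vec<Batch>, policy: ErrorPolicy) -> Vec<Result<TxResult, TxError>> {
    let mut results = Vec::new();
    for batch in batches {
        let res = send_and_wait(batch);
        let failed = res.is_err();
        results.push(res);
        // This PR keeps the Continue policy to match the parallel code path.
        if failed && matches!(policy, ErrorPolicy::ShortCircuit) {
            break; // remaining batches are never submitted
        }
    }
    results
}
```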

@romac romac marked this pull request as draft August 18, 2022 15:12
@romac romac added this to the v1.1 milestone Aug 22, 2022
@seanchen1991 seanchen1991 marked this pull request as ready for review August 22, 2022 15:43
@romac romac merged commit 134b1bc into master Aug 23, 2022
@romac romac deleted the soares/sequential-wait branch August 23, 2022 14:47
hu55a1n1 pushed a commit to hu55a1n1/hermes that referenced this pull request Sep 13, 2022
Add hidden configuration for sending batched messages sequentially (informalsystems#2543)

Partial fix for informalsystems#2350


---

* Add sequential version of send_batched_messages

* Add sequential_batch_tx chain config

* Manual test behavior of integration tests with sequential_batch_tx

* Add draft test for sequential batching

* Add set_mempool_version to enable priority mempool

* Add integration test for sequential batching

* Relax time assertion for parallel batch

* Add some documentation to send_messages functions

Co-authored-by: Romain Ruetschi <[email protected]>