# [mq pallet] Custom next queue selectors (#6059)
## Conversation

Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>

> /cmd prdoc --bump minor --audience runtime_dev

> /cmd bench --pallet pallet_message_queue

Command "bench --pallet pallet_message_queue" has finished. Subweight results: ❌ Failed to build asset-hub-westend

> /cmd bench --pallet pallet_message_queue

Command "bench --pallet pallet_message_queue" has finished. Subweight results: ✅ Successful benchmarks of runtimes/pallets

All GitHub workflows were cancelled due to the failure of one of the required jobs.

> /cmd prdoc --audience runtime_dev --bump patch --force
## Changes

- Expose a `force_set_head` function from the `MessageQueue` pallet via a new trait: `ForceSetHead`. This can be used to force the MQ pallet to process this queue next.
- The change only exposes an internal function through a trait, so no audit is required.

## Context

For the Asset Hub Migration (AHM) we need a mechanism to prioritize the inbound upward messages and the inbound downward messages on the AH. To achieve this, a minimal (and non-breaking) change is made to the MQ pallet in the form of the new `force_set_head` function.

An example of how to achieve prioritization is demonstrated in `integration_test.rs::AhmPrioritizer`. Normally, all queues are scheduled round-robin like this:

`| Relay | Para(1) | Para(2) | ... | Relay | ...`

The prioritizer listens for changes to its queue and triggers if either:

- the queue was processed in the last block (to keep the general round-robin scheduling), or
- the queue has not been processed for `n` blocks (to prevent starvation when there are too many other queues).

In either case, it schedules the queue for a streak of three consecutive blocks, so the schedule becomes:

`| Relay | Relay | Relay | Para(1) | Para(2) | ... | Relay | Relay | Relay | ...`

This effectively transforms the round-robin into an elongated round-robin. Although different strategies can be injected into the pallet at runtime, this one strikes a good balance between general service level and prioritization.

---------

Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: muharem <ismailov.m.h@gmail.com>
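To illustrate what "force the MQ pallet to process this queue next" means, here is a minimal toy model of a round-robin ready-ring with a `force_set_head` operation. The types and method signatures (`ReadyRing`, `Origin`, `service_next`) are illustrative inventions for this sketch, not the pallet's actual API; only the concept of rotating a chosen queue to the head mirrors what `ForceSetHead` exposes.

```rust
use std::collections::VecDeque;

// Toy model: queues are serviced round-robin; `force_set_head` moves a
// chosen queue to the front so it is serviced in the next block.
#[derive(Debug, Clone, PartialEq, Eq)]
enum Origin {
    Relay,
    Para(u32),
}

struct ReadyRing {
    ring: VecDeque<Origin>,
}

impl ReadyRing {
    fn new(queues: Vec<Origin>) -> Self {
        Self { ring: queues.into() }
    }

    // Service the head queue for one block, then rotate it to the back.
    fn service_next(&mut self) -> Option<Origin> {
        let head = self.ring.pop_front()?;
        self.ring.push_back(head.clone());
        Some(head)
    }

    // Conceptual analogue of `ForceSetHead::force_set_head`: move `origin`
    // to the front of the ring. Returns whether the queue was found.
    fn force_set_head(&mut self, origin: &Origin) -> bool {
        if let Some(pos) = self.ring.iter().position(|q| q == origin) {
            let q = self.ring.remove(pos).expect("position is in bounds");
            self.ring.push_front(q);
            true
        } else {
            false
        }
    }
}

fn main() {
    let mut ring = ReadyRing::new(vec![Origin::Relay, Origin::Para(1), Origin::Para(2)]);
    assert_eq!(ring.service_next(), Some(Origin::Relay));
    // Without intervention, Para(1) would be serviced next; force Relay instead.
    assert!(ring.force_set_head(&Origin::Relay));
    assert_eq!(ring.service_next(), Some(Origin::Relay));
    assert_eq!(ring.service_next(), Some(Origin::Para(1)));
    println!("ok");
}
```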
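The "streak of three consecutive blocks" behaviour described above can be sketched as a small simulation. This is a plain-Rust illustration of the elongated round-robin schedule, not the real `AhmPrioritizer` logic (which lives in `integration_test.rs`); the function name and parameters are assumptions made for this example.

```rust
// Simulate an elongated round-robin: whenever the prioritized queue is
// serviced, it is kept at the head for a streak of `streak` consecutive
// blocks before normal round-robin resumes.
fn elongated_round_robin(
    queues: &[&str],
    prioritized: &str,
    streak: usize,
    blocks: usize,
) -> Vec<String> {
    let mut order = Vec::new();
    let mut idx = 0; // round-robin cursor
    let mut remaining_streak = 0; // extra blocks left in the current streak
    for _ in 0..blocks {
        if remaining_streak > 0 {
            // Still inside a streak: service the prioritized queue again.
            order.push(prioritized.to_string());
            remaining_streak -= 1;
            continue;
        }
        let q = queues[idx % queues.len()];
        idx += 1;
        order.push(q.to_string());
        if q == prioritized {
            // Queue was just serviced: extend into a streak of `streak` blocks.
            remaining_streak = streak - 1;
        }
    }
    order
}

fn main() {
    let sched = elongated_round_robin(&["Relay", "Para(1)", "Para(2)"], "Relay", 3, 8);
    // Matches the schedule shown above: three Relay blocks, then the paras.
    assert_eq!(
        sched,
        ["Relay", "Relay", "Relay", "Para(1)", "Para(2)", "Relay", "Relay", "Relay"]
    );
    println!("| {} |", sched.join(" | "));
}
```

With `streak = 1` the function degenerates back to plain round-robin, which is why the description calls this an "elongated" round-robin rather than a different scheduling policy.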