
Conversation

Contributor

@DPlayer234 DPlayer234 commented Mar 19, 2025

Moves the check for `ShardManagerMessage` to the end of the `ShardManager::run` loop. This avoids unnecessarily waiting `wait_time_between_shard_start` once before ever starting shards.

Also no longer times out receiving `ShardManagerMessage` if no more shards currently need to be started. Since the run loop is the only place that can modify the queue, there is no chance of the queue changing while it is waiting for messages.

Moves the check for `ShardManagerMessage` to the end of the run loop.
Also no longer times out receiving those if no more shards need to be started currently.
@github-actions github-actions bot added the gateway Related to the `gateway` module. label Mar 19, 2025
Comment on lines 146 to 160
let msg = if batch.is_empty() {
    // This function is the only code that can add new shards
    // to start to the queue directly (enforced by `&mut`), so
    // if the batch is empty, it will always be empty until a
    // `ShardManagerMessage::Boot` is received here.
    self.manager_rx.next().await
} else {
    self.checked_start(batch).await;

    // Include a timeout so we can start the next batch of
    // shards even if no more messages are received.
    timeout(self.wait_time_between_shard_start, self.manager_rx.next())
        .await
        .unwrap_or_default()
};
Collaborator

What's the reason for having two separate branches? Yeah, adding a timeout only makes sense if there are shards in the queue waiting to be started (since we don't want to block forever), but I think just swapping the order serves well enough here.

Empty batches are handled fine by `checked_start`, so if multiple timeouts happen while waiting for a shard to get queued, all that happens is that a few extra empty `Vec`s get created.
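
For illustration, the order-swapped version might look roughly like this (a sketch reusing the names from the quoted snippet, not the code that was actually merged):

```rust
// Sketch only: reuses `batch`, `checked_start`, `manager_rx`, and
// `wait_time_between_shard_start` from the snippet above.
// `checked_start` is assumed to treat an empty batch as a no-op.
self.checked_start(batch).await;

// Always wait with a timeout so the next batch can be started even if
// no `ShardManagerMessage` arrives in the meantime.
let msg = timeout(self.wait_time_between_shard_start, self.manager_rx.next())
    .await
    .unwrap_or_default();
```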

Contributor Author

Timers and running this loop at all add at least some overhead that isn't hard to avoid.

There is also the potential that someone reduces the wait time to zero (for example, in a scenario with a proxy), in which case this loop might run constantly with no delay. That wouldn't be a massive problem, since tokio manages coop budgets when the timeout call is hit, but it still seems like unnecessary overhead.

Collaborator

I just don't like calling `self.manager_rx.next()` in two different places. Adding a timer to the future really introduces very little overhead (what with zero-cost futures and all that, plus tokio relies on the system timer and doesn't just spinlock).

If you'd like to clean up the extra `Vec` usage, then you could either inline `checked_start` into the loop, or change `ShardQueue::pop_batch` to return `Option<Vec<ShardId>>` (`None` for empty batches), or both.
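
For example, a `pop_batch` that returns `Option` might look roughly like this (the `VecDeque` internals and the `batch_size` field are assumptions made for the sketch, not serenity's actual `ShardQueue` layout):

```rust
use std::collections::VecDeque;

// Stand-in for serenity's shard ID type, just for this sketch.
struct ShardId(u16);

struct ShardQueue {
    queue: VecDeque<ShardId>,
    batch_size: usize,
}

impl ShardQueue {
    /// Returns `None` instead of an empty `Vec` when nothing is queued,
    /// so the caller can skip `checked_start` (and the allocation) entirely.
    fn pop_batch(&mut self) -> Option<Vec<ShardId>> {
        let take = self.queue.len().min(self.batch_size);
        if take == 0 {
            return None;
        }
        Some(self.queue.drain(..take).collect())
    }
}
```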

Contributor Author

My problem isn't the `Vec` or the timer future itself; it's CPU time spent on even running the loop when it's known that it won't do anything, especially if the user overrides the wait time. The loop could end up pretty much just spinning if the wait time is set to zero (minus tokio yielding due to the coop budget).

Keeping `checked_start` as its own function seems cleaner to me, but I can change `pop_batch` to return `Option`. That seems like it better conveys the intent.

Contributor Author

Tried to verify whether my concerns about this are actually measurable, and the answer is: not really. I'll just revert this snippet to what it was before and avoid any further changes.

@DPlayer234 DPlayer234 requested a review from mkrasnitski March 20, 2025 08:02
@arqunis arqunis added the enhancement An improvement to Serenity. label Mar 20, 2025
@arqunis arqunis merged commit db49b4a into serenity-rs:next Mar 20, 2025
24 checks passed
GnomedDev pushed a commit to GnomedDev/serenity that referenced this pull request Mar 26, 2025
Moves the check for `ShardManagerMessage` to the end of the `ShardManager::run`
loop. This avoids unnecessarily waiting `wait_time_between_shard_start` once
before ever starting shards.
@DPlayer234 DPlayer234 deleted the fast-one-shard-start branch April 15, 2025 14:26