Improve green scheduler 2 #12254
Conversation
Do you happen to have any ball-park figures for the performance improvements?
For this type of PR, perf numbers help a lot.
@huonw, as I mentioned in #11730, the exponential backoff scheme made the green benchmark run about 4 times faster (on my machine, when the number of threads is twice the number of cores). The other scheme (i.e. getting rid of the global sleeper list) is not working yet. This is the green benchmark I'm using: https://gist.github.com/derekchiang/8994707 This is the native benchmark: https://gist.github.com/derekchiang/8994717 Both were originally provided by Alex.
We should also consider whether the backoff should be randomized to prevent all threads from stampeding over the same contended resources. It's not obvious to me that randomization is required here, though, and it may be worse for energy efficiency. FWIW, the steal process already randomizes the stealing order to reduce the likelihood of threads competing over work queues.
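The randomization being discussed can be sketched as jittered exponential backoff. This is a minimal illustration in modern Rust, not the libgreen code: the base unit, the cap, and the `backoff_delay` helper are all hypothetical, and the deterministic "jitter" stands in for the per-thread RNG a real scheduler would use (as the work-stealing path already does).

```rust
use std::time::Duration;

// Hypothetical helper: exponential backoff with simple jitter.
// `counter` is how many consecutive times this scheduler failed to
// find work; the constants are illustrative, not libgreen's values.
fn backoff_delay(counter: u32, jitter_seed: u64) -> Duration {
    const BASE_US: u64 = 10; // start small: 10 microseconds
    const MAX_SHIFT: u32 = 10; // cap the exponent near ~10 ms
    let shift = counter.min(MAX_SHIFT);
    let base = BASE_US << shift;
    // Deterministic stand-in for jitter: vary the delay by up to 50%
    // so schedulers don't all wake on the same tick.
    let jitter = jitter_seed % (base / 2 + 1);
    Duration::from_micros(base + jitter)
}

fn main() {
    // Delays grow exponentially with the counter...
    assert!(backoff_delay(4, 0) >= 2 * backoff_delay(3, 0));
    // ...but are capped so a scheduler never sleeps too long.
    assert_eq!(backoff_delay(10, 0), backoff_delay(20, 0));
}
```

Whether the jitter is worth the extra wakeup spread (versus the energy cost brson mentions) is exactly the open question in this thread.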
@alexcrichton, re a discussion we had the other day: I don't think that using the event loop for backoff sleeping is just as bad as completely going to sleep. The real problem with schedulers sleeping is that it forces other schedulers to wake them up; as long as sleeping doesn't involve broadcasting that you need to be woken up, there should still be perf wins. So I'm still worried about the message-handling latency of using usleep for this. On another note, schedulers now do two types of sleeping, which makes talking about either difficult.
@derekchiang Thanks for working on this. It's super-cool.
I'd like to do some experiments with waking up just one or both neighbors. Personally, I think that just waking the neighbor to one side might be sufficient, since we've only enqueued one unit of work.
@brson, that's an interesting point about using the event loop. Right now libuv only gives us millisecond granularity for timers, so if we want to sleep for shorter periods of time we'd have to implement custom sleeping regardless.
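The trade-off in the two comments above can be made concrete: below libuv's one-millisecond timer granularity the scheduler has to sleep outside the event loop (usleep/nanosleep or yielding), while at a millisecond or more it can park on an event-loop timer so I/O can wake it. A small sketch, with hypothetical names (`SleepStrategy`, `pick_strategy` are not libgreen APIs):

```rust
use std::time::Duration;

// Illustrative only: which mechanism handles a given backoff delay.
enum SleepStrategy {
    // Below 1 ms: the event loop can't time this, so sleep directly
    // (usleep/nanosleep) or spin/yield.
    ShortSleep(Duration),
    // 1 ms and above: register an event-loop timer, so incoming I/O
    // or messages can wake the scheduler early.
    EventLoopTimer(Duration),
}

fn pick_strategy(delay: Duration) -> SleepStrategy {
    if delay < Duration::from_millis(1) {
        SleepStrategy::ShortSleep(delay)
    } else {
        SleepStrategy::EventLoopTimer(delay)
    }
}

fn main() {
    assert!(matches!(
        pick_strategy(Duration::from_micros(100)),
        SleepStrategy::ShortSleep(_)
    ));
    assert!(matches!(
        pick_strategy(Duration::from_millis(2)),
        SleepStrategy::EventLoopTimer(_)
    ));
}
```

This is also why brson's latency concern applies mainly to the short-sleep branch: a plain sleep cannot be interrupted by an incoming message the way an event-loop timer can.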
I think I fixed most of the issues brought up. It's still not terminating when running the benchmark with multiple threads, though. According to Alex, the event loop shuts down when there are no references to it anymore, so that means there are outstanding references somewhere? These lines should get rid of all references in the handles ring, right?
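The shutdown condition being debated here is reference-counted: the event loop only terminates once every handle to it has been dropped, so one leaked handle anywhere in the ring keeps it alive. A toy illustration with `Rc` (the `EventLoop` type here is a stand-in, not the libgreen/libuv one):

```rust
use std::rc::Rc;

// Stand-in for an event loop kept alive by its handles.
struct EventLoop;

fn main() {
    let event_loop = Rc::new(EventLoop);
    let handle_a = Rc::clone(&event_loop);
    let handle_b = Rc::clone(&event_loop);
    assert_eq!(Rc::strong_count(&event_loop), 3);

    // Dropping the handles in the ring one by one...
    drop(handle_a);
    drop(handle_b);
    // ...leaves only the owner; when it goes out of scope the loop
    // can shut down. Any handle that is never dropped keeps the
    // count above zero and the loop spinning forever.
    assert_eq!(Rc::strong_count(&event_loop), 1);
}
```

That is the shape of the bug being hunted: the benchmark not terminating suggests some path never drops its handle.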
This commit is NOT production ready at all and will eventually be squashed.
…hen there is active IO
Seems like all previous discussions disappeared because I did a rebase :( Anyway, another review plz? Also, I still haven't figured out why the event loop doesn't shut down.
@derekchiang if you use
// Since we are sleeping, deactivate the idle callback.
sched.idle_callback.get_mut_ref().pause();
} else if !sched.event_loop.has_active_io() {
unistd::usleep((1 << sched.backoff_counter) * 1000u as u32);
Can probably be * 1000, or, if that doesn't work, * 1000u32.
Additionally, @brson and I were thinking that 1000 microseconds is quite a large number to start out with.
Once you get into millisecond territory, I don't think usleep should be used, but rather the event loop itself, so I/O can wake up the scheduler.
usleep is deprecated, please don't use it. It has been replaced with nanosleep.
On Wed, Feb 19, 2014 at 4:21 AM, Alex Crichton [email protected]:
In src/libgreen/sched.rs:
sched.idle_callback.get_mut_ref().pause();
} else {
rtdebug!("not sleeping, already doing so or no_sleep set");
// We may not be sleeping, but we still need to deactivate
// the idle callback.
sched.idle_callback.get_mut_ref().pause();
unsafe {
if !sched.no_sleep && !(*sched.sleepy.get()).load(SeqCst) {
if sched.backoff_counter == EXPONENTIAL_BACKOFF_FACTOR {
sched.backoff_counter = 0;
rtdebug!("scheduler has no work to do, going to sleep");
(*sched.sleepy.get()).store(true, SeqCst);
// Since we are sleeping, deactivate the idle callback.
sched.idle_callback.get_mut_ref().pause();
} else if !sched.event_loop.has_active_io() {
unistd::usleep((1 << sched.backoff_counter) * 1000u as u32);
Once you get into millisecond territory, I don't think usleep should be used, but rather the event loop itself so I/O can wake up the scheduler.
Reply to this email directly or view it on GitHub: https://github.com//pull/12254/files#r9858586
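On the usleep-is-deprecated point: POSIX obsoleted usleep in favor of nanosleep. In modern Rust one would not call either directly; `std::thread::sleep` (backed by nanosleep on Unix) expresses the same delay. A minimal sketch of the equivalent of the quoted `usleep((1 << counter) * 1000)` call, with the counter value chosen arbitrarily for illustration:

```rust
use std::thread;
use std::time::{Duration, Instant};

fn main() {
    // Equivalent of `unistd::usleep((1 << counter) * 1000)`:
    // (1 << counter) milliseconds, expressed in microseconds.
    let counter: u32 = 2;
    let delay = Duration::from_micros((1u64 << counter) * 1000);

    let start = Instant::now();
    thread::sleep(delay); // nanosleep(2) under the hood on Unix
    // thread::sleep guarantees at least the requested duration.
    assert!(start.elapsed() >= delay);
}
```

Unlike usleep, nanosleep (and hence `thread::sleep`) is specified for arguments of a second or more and does not interact with signals the way usleep historically could.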
Closing due to a lack of activity, but I'd love to see this get done!
@alexcrichton OK, I have been busy lately; will ping you when activity resumes!
This PR will hopefully eventually close #11730. Right now, it's NOT ready to be merged at all. I'm putting it here just so that people like Alex can take a look and tell me how much I'm doing it wrong :)