
std: use futex-based locks on Fuchsia #98707

Merged

merged 5 commits into rust-lang:master from joboet:fuchsia_locks on Jul 21, 2022

Conversation

joboet
Member

@joboet joboet commented Jun 30, 2022

This switches Condvar and RwLock to the futex-based implementation currently used on Linux and some BSDs. Additionally, Mutex now has its own, priority-inheriting implementation based on the mutex in Fuchsia's libsync. It differs from the original in that it panics instead of aborting when reentrant locking is detected.

@rustbot ping fuchsia
r? @m-ou-se
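For readers unfamiliar with Zircon's futex ownership API, the ingredient that makes a priority-inheriting mutex possible is that `zx_futex_wait` lets the waiter name the thread that currently owns the futex, so the kernel can apply the waiters' scheduling pressure to that owner. Below is a minimal sketch of that call shape only; the futex-word encoding and helper names are assumptions for illustration, not the code added by this PR.

```rust
// Illustrative only: expressing priority inheritance at the syscall level on
// Fuchsia. This sketch assumes the futex word stores the owner's thread
// handle while the lock is held, which is an assumption for the example.
use std::sync::atomic::AtomicI32;

type ZxHandle = u32;
type ZxStatus = i32;
const ZX_TIME_INFINITE: i64 = i64::MAX;

extern "C" {
    // Sleeps until the futex is woken, provided it still holds `current_value`.
    // `new_futex_owner` tells the kernel which thread should inherit the
    // waiters' scheduling pressure while they sleep.
    fn zx_futex_wait(
        value_ptr: *const i32,
        current_value: i32,
        new_futex_owner: ZxHandle,
        deadline: i64,
    ) -> ZxStatus;
}

/// Wait for `futex` to change away from `observed`, boosting `owner_handle`
/// (the thread believed to hold the lock) while we sleep.
fn wait_boosting_owner(futex: &AtomicI32, observed: i32, owner_handle: ZxHandle) -> ZxStatus {
    unsafe { zx_futex_wait(futex.as_ptr(), observed, owner_handle, ZX_TIME_INFINITE) }
}
```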

@rustbot rustbot added the T-libs Relevant to the library team, which will review and decide on the PR/issue. label Jun 30, 2022
@rustbot
Collaborator

rustbot commented Jun 30, 2022

Hey! It looks like you've submitted a new PR for the library teams!

If this PR contains changes to any rust-lang/rust public library APIs then please comment with @rustbot label +T-libs-api -T-libs to tag it appropriately. If this PR contains changes to any unstable APIs please edit the PR description to add a link to the relevant API Change Proposal or create one if you haven't already. If you're unsure where your change falls no worries, just leave it as is and the reviewer will take a look and make a decision to forward on if necessary.

Examples of T-libs-api changes:

  • Stabilizing library features
  • Introducing insta-stable changes such as new implementations of existing stable traits on existing stable types
  • Introducing new or changing existing unstable library APIs (excluding permanently unstable features / features without a tracking issue)
  • Changing public documentation in ways that create new stability guarantees
  • Changing observable runtime behavior of library APIs

@rust-highfive rust-highfive added the S-waiting-on-review Status: Awaiting review from the assignee but also interested parties. label Jun 30, 2022
@rustbot
Collaborator

rustbot commented Jun 30, 2022

Error: Only Rust team members can ping teams.

Please file an issue on GitHub at triagebot if there's a problem with this bot, or reach out on #t-infra on Zulip.

@m-ou-se
Member

m-ou-se commented Jun 30, 2022

Oh, exciting! I'll try to look at this later today :)

@m-ou-se
Member

m-ou-se commented Jun 30, 2022

@rustbot ping fuchsia

@rustbot rustbot added the O-fuchsia Operating system: Fuchsia label Jun 30, 2022
@rustbot
Collaborator

rustbot commented Jun 30, 2022

Hey friends of Fuchsia! This issue could use some guidance on how this should be
resolved/implemented on Fuchsia. Could one of you weigh in?

cc @ComputerDruid @djkoloski @P1n3appl3 @tmandry

@m-ou-se
Member

m-ou-se commented Jul 1, 2022

This looks good to me, at first glance. I'll review it in detail once the Fuchsia folks confirm it works as expected.

@m-ou-se m-ou-se assigned tmandry and unassigned m-ou-se Jul 1, 2022
@joboet
Member Author

joboet commented Jul 12, 2022

I discovered some issues that arise if a thread dies while holding a mutex and another thread tries to lock it:

  • If the thread ID is reused, the other thread will boost the priority of a random thread, which is not really expected, but I don't see a way to avoid it.
  • If the thread ID is reused by the thread that tries to lock the mutex, it runs into the same error condition as reentrant locking. It would be nice to panic on genuine reentrant locking, but in this case a panic would be very unexpected, so I decided to put the thread to sleep permanently instead.
  • If the thread ID is not reused, the futex operation will fail, but a panic is neither expected nor really allowed by the API, so I used the same permanent-sleep approach as above.

All these situations, while not causing UB, are hard to debug and quite unexpected, so I am not sure whether my solution is the best one or whether this behaviour should be mentioned in the documentation (a rough sketch of the latter two cases follows below).
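A rough sketch of the latter two cases, using hypothetical names (`handle_contended_lock`, `park_forever`); it illustrates the decision tree described above rather than the actual std code.

```rust
// Hypothetical illustration of the cases listed above. `state` is the futex
// word, assumed here to encode the owning thread's handle while locked.
use std::sync::atomic::{AtomicI32, Ordering::Relaxed};

const ZX_OK: i32 = 0;

fn handle_contended_lock(state: &AtomicI32, self_handle: i32, wait_status: i32) {
    let owner = state.load(Relaxed);

    if owner == self_handle {
        // Either genuine reentrant locking, or a dead thread's ID was reused
        // by this thread. The two cases are indistinguishable, so rather than
        // panicking with a misleading message, sleep forever.
        park_forever();
    }

    if wait_status != ZX_OK {
        // The kernel rejected the wait, e.g. because the recorded owner no
        // longer exists. `lock` is not allowed to fail or panic here, so
        // again: sleep forever.
        park_forever();
    }

    // Otherwise this was a normal wakeup; the caller retries the lock.
}

fn park_forever() -> ! {
    loop {
        std::thread::park();
    }
}
```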

@tmandry
Member

tmandry commented Jul 14, 2022

Thanks for this! I've asked an expert in Fuchsia's futexes to review this. Should be coming soon.

// or the previous owner did not unlock the mutex before exiting. Since it is
// not possible to reliably detect which is the case, the current thread is
// deadlocked. This makes debugging these cases quite a bit harder, but encourages
// portable programming, since all other platforms do the same.

So, IMO, I think you should probably panic here instead. Attempting to re-enter a non-re-entrant lock is a programming error, and the process should be terminated ASAP. Likewise, if a thread exits while holding any locks, that is also a programming error, just one that is more difficult to detect.

If another thread were to receive a recycled handle ID after the offending thread had exited (unlikely, but possible), and then tried to enter the lock which was held by the offending thread when it exited, you have basically detected the difficult-to-detect thing. Either way (true re-entrance, or thread-exits-while-holding-lock), the program is in an invalid state and should be terminated ASAP, mostly to allow the main system to restart the component if need be.

Just my 2 cents.


pub unsafe fn notify_all(&self) {
    self.futex.fetch_add(1, Relaxed);
    futex_wake_all(&self.futex);
}
@johngro johngro Jul 14, 2022

So, it should be noted that the current libsync condvar implementation is not in great shape, and needs to be revisited/simplified. It probably is not the best example to be following.

This said: beware the thundering herd.
If there are 100 threads waiting for the condition to be signaled, and they are all woken up at once, one of them is going to grab the mutex, and all of the others are going to immediately block behind whichever thread made it into the mutex first. This type of thrashing is not going to be particularly efficient.

To solve this problem, we have a tool similar to what other OSes have: zx_futex_requeue (and zx_futex_requeue_single_owner).

Requeue takes two futexes (the "wake" futex and the "requeue" futex) and two counts (the wake_count and the requeue_count). It will logically wake up to wake_count threads from the wake futex, and then (if there are still waiters remaining) move up to requeue_count waiters from wake -> requeue.

When applied to a condvar, the notify_all operation becomes a requeue(condvar_futex, 1, mutex_futex, 0xFFFFFFFF). Basically, wake up only one thread, and place all of the remaining threads into the mutex futex wait queue (IOW - just proactively assume that those threads are now blocking on the lock).

If the mutex here implements PI, then requeue_single_owner can be used instead. It is supposed to wake a single thread from the wake futex, then move the specified number of threads from wake -> requeue, and finally assign ownership of the requeue target to the woken thread.

Note, the docs on this are either wrong, or the implementation is wrong (https://fuchsia.dev/fuchsia-src/reference/syscalls/futex_requeue_single_owner?hl=en). It claims that the wake futex is the futex whose ownership gets assigned, when it should be the requeue futex (as this is the futex representing the lock, not the notification state). I'm going to file a bug about this and look into it.

In addition to avoiding the thundering herd in general, using requeue allows the scheduler to make better choices. The scheduler can choose to wake the "most important" thread from the futex's blocking queue first, and requeue the rest. If all of the threads are simply woken and assigned to different CPUs, the "most important" thread might end up losing the mutex race and will end up blocking again when it really should be running.
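To make the requeue suggestion concrete, here is a hedged sketch of a requeue-based notify_all in Rust FFI terms. The parameter order follows the zx_futex_requeue documentation (worth double-checking against the current reference); the counter handling is simplified, the race between the increment and the syscall is ignored, and no priority-inheritance owner is passed.

```rust
// Sketch only: wake one condvar waiter and move the rest onto the mutex's
// futex instead of waking them all at once.
use std::sync::atomic::{AtomicI32, Ordering::Relaxed};

type ZxHandle = u32;
const ZX_HANDLE_INVALID: ZxHandle = 0;

extern "C" {
    fn zx_futex_requeue(
        value_ptr: *const i32,   // the "wake" futex (the condvar)
        wake_count: u32,         // wake at most this many threads here
        current_value: i32,      // the call fails if the futex no longer holds this
        requeue_ptr: *const i32, // the "requeue" futex (the mutex)
        requeue_count: u32,      // move at most this many waiters over
        requeue_owner: ZxHandle, // optional new owner for the requeue futex
    ) -> i32;
}

unsafe fn notify_all_requeue(condvar: &AtomicI32, mutex: &AtomicI32) -> i32 {
    // Bump the condvar epoch so concurrent waiters do not miss this notification.
    let current = condvar.fetch_add(1, Relaxed).wrapping_add(1);
    unsafe {
        // Wake a single thread; everyone else goes straight to waiting on the mutex.
        zx_futex_requeue(condvar.as_ptr(), 1, current, mutex.as_ptr(), u32::MAX, ZX_HANDLE_INVALID)
    }
}
```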

Also note that to make this work, the notification futex and the lock futex must logically be fused together. The API should not allow users to make the mistake of failing to acquire the mutex after being notified. Something like

{
  Guard g{condvar->lock};  // we are in the lock after this
  while (!condition_satisfied()) {
    // This drops the lock associated with the condvar and waits on the notify futex.
    // After waking again, the condvar code will re-acquire the lock before proceeding
    condvar->wait();
  }
  // Do any stuff which needs doing while in the lock now that the condition is satisfied.
}  // Lock is finally dropped

The implementation here does not have a mutex specifically associated with the condvar, which means that users could accidentally pass different mutexes to the wait operation, and which prevents the use of requeue (since it is unclear which lock needs to be dropped during a notify operation).


OK, I've looked into this a bit more. The re-queue operations as defined really do not seem to be all that helpful if the goal is to implement a condvar whose associated lock implements PI. I'm going to need to take some time to sort this out, and I may need to go through the full RFC process in order to make a change to the API which fixes the issue. In the meantime, I'll offer a few reasonable paths forward for this code.

Option 1: Do nothing.
Leave the existing thundering-herd behavior in place and, for now, simply assume that having a large number of waiters will be uncommon. I still think that it would be a good idea to add a mutex to the condvar object itself and demand that users hold this specific lock when waiting. It would also be good to leave a comment in https://bugs.fuchsia.dev/p/fuchsia/issues/detail?id=104478 saying that we should come back and fix up the Rust implementation once the underlying syscall definitions have been fixed.

Option 2: Just use requeue, and ignore requeue_single_owner.
If we can make the mutex used with the condvar a property of the condvar object itself, it allows us to address the issue in a less-than-perfect way now, and come back later on to make it better. The idea here is that we would:

  1. Require that a user be holding the condvar's mutex when calling notify (either one or many).
  2. Change the semantics of notify to be "notify and release"

Now, notify_all_and_release can become

// It is reasonable to mark ourselves as the owner of the queue backing the mutex, as we are the current owner.
// We will inherit the profile pressure of all requeued threads in the process.
requeue(wake_count = 0, requeue_count = Everyone, requeue_owner = self);

// Now mark the local mutex state as unowned. From the user-mode perspective, we are dropping the lock here,
// even though we continue to inherit any profile pressure.

// Unconditionally wake up to one thread from the requeue futex (i.e. the futex representing the condvar's mutex),
// and assign ownership to the thread which was woken.
wake_single_owner(requeue_futex);

This will preserve the goal of avoiding the herd, and also implement PI in the lock. The downside is that it will cost two syscalls instead of one. Once the futex requeue API is improved, this can be dropped back down to just a single call.
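Expressed in the same FFI terms, Option 2 might look roughly like the sketch below. The syscall declarations mirror the Zircon reference and should be double-checked against current docs; the user-space mutex state handling is elided, and the helper is purely illustrative.

```rust
// Sketch of the two-syscall "notify all and release" described above. The
// condvar's associated mutex futex is assumed to be known at this point.
type ZxHandle = u32;

extern "C" {
    fn zx_futex_requeue(
        value_ptr: *const i32,
        wake_count: u32,
        current_value: i32,
        requeue_ptr: *const i32,
        requeue_count: u32,
        requeue_owner: ZxHandle,
    ) -> i32;
    // Wakes one waiter and makes it the owner of the futex.
    fn zx_futex_wake_single_owner(value_ptr: *const i32) -> i32;
    // Returns an unowned handle to the calling thread.
    fn zx_thread_self() -> ZxHandle;
}

unsafe fn notify_all_and_release(condvar: *const i32, condvar_value: i32, mutex: *const i32) {
    unsafe {
        // Wake nobody; move every condvar waiter onto the mutex futex and keep
        // ourselves as its owner, so we keep inheriting their profile pressure.
        zx_futex_requeue(condvar, 0, condvar_value, mutex, u32::MAX, zx_thread_self());

        // ...the caller would mark the mutex as unlocked in user space here...

        // Hand the mutex to exactly one of the requeued threads.
        zx_futex_wake_single_owner(mutex);
    }
}
```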

Sorry about this ☹️

Member Author

The Condvar implementation is already used on Linux, WASM and some BSDs. This PR just moves it to a different file so it can be shared by Fuchsia.

For these systems, the library team decided not to requeue, but if you feel it is important, I can specialize the implementation.


I think it would be more important if you frequently have large numbers of waiters waiting on the condvar. If most of your users tend to only have one waiter on the condition, then it is certainly fine as it is. If N tends to be low, but not one, then it becomes more likely that there is some lock thrash. While this may be an issue, it may not be a super serious one.

TL;DR - This was just a suggestion. You know your users and their patterns better than I do, so feel free to continue doing it as you are. If you ever encounter a situation where the herd becomes a serious issue for one of your users, you can always come back and change course.

Member

> I think it would be more important if you frequently have large numbers of waiters waiting on the condvar. If most of your users tend to only have one waiter on the condition, then it is certainly fine as it is.

Yeah, it's hard to optimize for every possible use case at once. My assumption is that it's quite uncommon to notify many waiters at once in programs optimized for performance, since it's a bit of an anti-pattern regardless of requeuing. A requeuing implementation just means that the threads will more efficiently wait in line, but their work still ends up being serialized, which arguably defeats the point of parallelization.

I looked a bit through use cases of notify_all() on crates.io to validate my assumptions, but am happy to consider examples that support an argument in favor of requeueing.

@johngro johngro left a comment

So Tyler Mandry asked me to chime in on this CL.

The mutex code looks correct; however, my personal opinion is that you should restore the panic-on-reentry behavior.

The condvar code should also work, but could be optimized through the use of futex requeue.

Also, during the review, I think I stumbled across what appears to be either a doc bug or perhaps even a spec/impl bug in zx_futex_requeue_single_owner 😱. So, thanks for bringing that to my attention 😀

I'm going to file a bug and follow up on that one. futex_requeue has not gotten a lot of use in the system so far, so this apparently got missed.

@joboet
Member Author

joboet commented Jul 20, 2022

@johngro Thank you for the review!
@m-ou-se I personally am happy with the current approach, but maybe the libs-team wants to have a look at this?

@m-ou-se
Member

m-ou-se commented Jul 20, 2022

@johngro Thanks for reviewing!

@joboet Yes, I'm reviewing it now!

Member

@m-ou-se m-ou-se left a comment

Looks good to me!

@m-ou-se
Member

m-ou-se commented Jul 20, 2022

@bors r+

@bors
Contributor

bors commented Jul 20, 2022

📌 Commit c72a77e has been approved by m-ou-se

It is now in the queue for this repository.

@bors bors added S-waiting-on-bors Status: Waiting on bors to run and complete tests. Bors will change the label on completion. and removed S-waiting-on-review Status: Awaiting review from the assignee but also interested parties. labels Jul 20, 2022
Dylan-DPC added a commit to Dylan-DPC/rust that referenced this pull request Jul 21, 2022
std: use futex-based locks on Fuchsia

This switches `Condvar` and `RwLock` to the futex-based implementation currently used on Linux and some BSDs. Additionally, `Mutex` now has its own, priority-inheriting implementation based on the mutex in Fuchsia's `libsync`. It differs from the original in that it deadlocks when reentrant locking is detected, as leaking a `MutexGuard` could lead to the same thread id being used in another thread, which would then panic with a nonsensical error message, which definitely is not expected.

`@rustbot` ping fuchsia
r? `@m-ou-se`
@m-ou-se
Member

m-ou-se commented Jul 21, 2022

@bors r+

@bors
Contributor

bors commented Jul 21, 2022

📌 Commit 8ba02f1 has been approved by m-ou-se

It is now in the queue for this repository.

Dylan-DPC added a commit to Dylan-DPC/rust that referenced this pull request Jul 21, 2022
std: use futex-based locks on Fuchsia

This switches `Condvar` and `RwLock` to the futex-based implementation currently used on Linux and some BSDs. Additionally, `Mutex` now has its own, priority-inheriting implementation based on the mutex in Fuchsia's `libsync`. It differs from the original in that it panics instead of aborting when reentrant locking is detected.

`@rustbot` ping fuchsia
r? `@m-ou-se`
Dylan-DPC added a commit to Dylan-DPC/rust that referenced this pull request Jul 21, 2022
std: use futex-based locks on Fuchsia
matthiaskrgr added a commit to matthiaskrgr/rust that referenced this pull request Jul 21, 2022
std: use futex-based locks on Fuchsia
Dylan-DPC added a commit to Dylan-DPC/rust that referenced this pull request Jul 21, 2022
std: use futex-based locks on Fuchsia
bors added a commit to rust-lang-ci/rust that referenced this pull request Jul 21, 2022
…askrgr

Rollup of 11 pull requests

Successful merges:

 - rust-lang#98707 (std: use futex-based locks on Fuchsia)
 - rust-lang#99413 (Add `PhantomData` marker for dropck to `BTreeMap`)
 - rust-lang#99454 (Add map_continue and continue_value combinators to ControlFlow)
 - rust-lang#99523 (Fix the stable version of `AsFd for Arc<T>` and `Box<T>`)
 - rust-lang#99526 (Normalize the arg spans to be within the call span)
 - rust-lang#99528 (couple of clippy::perf fixes)
 - rust-lang#99549 (Add regression test for rust-lang#52304)
 - rust-lang#99552 (Rewrite `orphan_check_trait_ref` to use a `TypeVisitor`)
 - rust-lang#99557 (Edit `rustc_index::vec::IndexVec::pick3_mut` docs)
 - rust-lang#99558 (Fix `remap_constness`)
 - rust-lang#99559 (Remove unused field in ItemKind::KeywordItem)

Failed merges:

r? `@ghost`
`@rustbot` modify labels: rollup
@bors bors merged commit c5df2f0 into rust-lang:master Jul 21, 2022
@rustbot rustbot added this to the 1.64.0 milestone Jul 21, 2022
@joboet joboet deleted the fuchsia_locks branch July 21, 2022 21:05
Dylan-DPC added a commit to Dylan-DPC/rust that referenced this pull request Aug 3, 2022
Fix futex module imports on wasm+atomics

The futex modules were rearranged a bit in rust-lang#98707, which meant that wasm+atomics would no longer compile on nightly. I don’t believe any other targets were impacted by this.