
Conversation


@tiif tiif commented Oct 4, 2024

This PR

  • Implemented blocking for both read and write of eventfd
  • Added test for eventfd blocking read and write
  • Removed eventfd blocking tests from fail-dep
  • Added a new BlockReason::Eventfd

cc #3665


tiif commented Oct 4, 2024

@rustbot author

@rustbot rustbot added the S-waiting-on-author Status: Waiting for the PR author to address review comments label Oct 4, 2024

tiif commented Oct 5, 2024

Hmm... why is rustfmt complaining? Let me try rebasing instead.


bors commented Oct 8, 2024

☔ The latest upstream changes (presumably #3951) made this pull request unmergeable. Please resolve the merge conflicts.

A comment from @tiif was marked as resolved.

Comment on lines 220 to 231
let mut waiter = Vec::new();
let mut blocked_write_tid = eventfd.blocked_write_tid.borrow_mut();
while let Some(tid) = blocked_write_tid.pop() {
    waiter.push(tid);
}
drop(blocked_write_tid);

waiter.sort();
waiter.dedup();
for thread_id in waiter {
    ecx.unblock_thread(thread_id, BlockReason::Eventfd)?;
}

@tiif tiif Oct 17, 2024


I couldn't think of a test case that can actually produce thread ID duplication in blocked_write_tid/blocked_read_tid, but I will keep the dedup here for now.

@oli-obk oli-obk (Contributor)


we'll get a panic if that happens, so maybe just don't sort and dedup and wait until we get a test case. I don't think it can happen though, as a thread would need to get blocked again without having been removed and unblocked.

@oli-obk oli-obk (Contributor)


Also: the unblocking order is user-visible, and though I assume that eventfd does not specify which thread gets woken first, we'd probably want a random (deterministic, but seed-dependent) order of unblocking to happen.

cc @RalfJung I think we need a new system for unblocking multiple threads at the same time. So far we only ever unblocked one thread, and "unblock N threads, then randomly run them with the normal rules for picking the next thread" seems like a recurring thing.

Also I wonder if the normal thread unblocking is subtly wrong, too. Or just wrong in this PR:

When a thread gets unblocked and immediately performs some operation that is visible from other threads, then when the current thread continues after unblocking it, it may behave as if the unblocked thread already had a few CPU cycles to do something. While that may be expected behaviour depending on preemption, the behaviour here does not depend on preemption. So, to avoid such a footgun, thread unblocking should not immediately execute the unblocking operation; it should just mark the thread as unblocked and have the thread execute the operation when it gets scheduled next.
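The deferred behaviour suggested here could look roughly like this sketch (all names are hypothetical, not Miri's actual scheduler API): unblocking only queues the thread, and the pending operation runs on that thread's own turn.

```rust
use std::collections::VecDeque;

// Hypothetical mini-scheduler sketching the proposal: `unblock` only makes a
// thread runnable again; the pending operation runs when that thread is next
// scheduled, not inside the unblocking thread's turn.
struct UnblockedThread {
    id: usize,
    // The operation left pending while the thread was blocked (e.g. finishing
    // an eventfd write). It is deferred until the thread is scheduled.
    on_resume: Box<dyn FnOnce(&mut Vec<String>)>,
}

fn demo() -> Vec<String> {
    let mut log = Vec::new();
    let mut run_queue: VecDeque<UnblockedThread> = VecDeque::new();

    // The current thread unblocks thread 1 but does NOT run its operation.
    run_queue.push_back(UnblockedThread {
        id: 1,
        on_resume: Box::new(|log| log.push("thread 1 performs its write".to_string())),
    });
    log.push("current thread unblocked thread 1".to_string());

    // The current thread keeps going; the unblocked write is still pending,
    // so its side effects are not yet visible to anyone.
    log.push("current thread continues".to_string());

    // Only when the scheduler picks thread 1 does its operation happen.
    if let Some(t) = run_queue.pop_front() {
        log.push(format!("thread {} scheduled", t.id));
        (t.on_resume)(&mut log);
    }
    log
}

fn main() {
    let log = demo();
    assert_eq!(log.last().map(String::as_str), Some("thread 1 performs its write"));
    println!("{log:?}");
}
```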

@tiif tiif marked this pull request as ready for review October 18, 2024 05:09

tiif commented Oct 18, 2024

@rustbot ready

@rustbot rustbot added S-waiting-on-review Status: Waiting for a review to complete and removed S-waiting-on-author Status: Waiting for the PR author to address review comments labels Oct 18, 2024

@oli-obk oli-obk left a comment


not finished with my review, but we need to figure out a few thread unblocking things first



oli-obk commented Oct 26, 2024

@RalfJung we raced on the review, do you have any thoughts on #3939 (comment)


RalfJung commented Oct 26, 2024 via email


oli-obk commented Oct 27, 2024

What exact issue are you concerned about?

Well, in this case, the concern is that we perform the reads and writes of the unblocked thread immediately, but then continue on the current thread. So unblocking a thread affects the current thread's next operations (there can now be more space to write to, even though no thread switch happened). This makes no-preemption tests harder to write. It is not a fundamental issue, I guess, just weird and not behaviour a real system would have.


tiif commented Oct 27, 2024

that we perform the reads and writes of the unblocked thread immediately, but then continue on the current thread.

Oh that's surprising. I always assumed the current thread should finish first before executing other newly unblocked threads.

But I think I observed what is mentioned here before, and I was pretty confused by the execution sequence.

// 1. Thread 1 blocks.
// 2. Thread 2 blocks.
// 3. Thread 3 unblocks both thread 1 and thread 2.
// 4. Either thread 1 or thread 2 writes u64::MAX.
Contributor


Due to the preemption rate being zero, this is deterministic no matter the seed, right?

Why is it thread 3 that gets to write first and not thread 2? How does that ordering of the unblocks happen?


@tiif tiif Nov 13, 2024


Yes, it is deterministic: it consistently blocked on thread2 when running with many-seeds. The unblock sequence follows from the unblocking order below:

let waiting_threads = std::mem::take(&mut *eventfd.blocked_write_tid.borrow_mut());
for thread_id in waiting_threads {
    ecx.unblock_thread(thread_id, BlockReason::Eventfd)?;
}

In the test, thread1 gets blocked before thread2, so during the unblock, thread1 gets unblocked before thread2. Hence thread1 gets to return from eventfd_write first, while thread2 hits the blocking condition again.
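That FIFO effect can be reproduced in isolation. In this sketch, `unblock_order` is a hypothetical stand-in for the loop above, with plain `u32` thread IDs:

```rust
// `mem::take` swaps the list for an empty one and returns the original Vec,
// which iterates in insertion order, so the thread that blocked first is
// unblocked first.
fn unblock_order(blocked_write_tid: &mut Vec<u32>) -> Vec<u32> {
    let waiting_threads = std::mem::take(blocked_write_tid);
    let mut order = Vec::new();
    for tid in waiting_threads {
        // Stands in for `ecx.unblock_thread(tid, BlockReason::Eventfd)?`.
        order.push(tid);
    }
    order
}

fn main() {
    // thread1 (id 1) blocked before thread2 (id 2).
    let mut blocked = vec![1, 2];
    assert_eq!(unblock_order(&mut blocked), vec![1, 2]);
    // The list was taken, so a thread that blocks again lands in a fresh list.
    assert!(blocked.is_empty());
    println!("unblock order is FIFO");
}
```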


oli-obk commented Nov 12, 2024

Please squash the changes


tiif commented Nov 13, 2024

Thanks! I left a FIXME for randomizing the unblock sequence, and slightly changed the comments about the expected execution in eventfd_blocked_read_twice and eventfd_blocked_write_twice, since it is deterministic.

@oli-obk oli-obk added this pull request to the merge queue Nov 13, 2024
Merged via the queue into rust-lang:master with commit 9d9da34 Nov 13, 2024
7 checks passed
@tiif tiif deleted the blockeventfd branch November 13, 2024 09:23
@tiif tiif mentioned this pull request Nov 13, 2024
Comment on lines +217 to +219
// When any of the event happened, we check and update the status of all supported event
// types for current file description.
ecx.check_and_update_readiness(&eventfd_ref)?;
Member


This used to be done in both match arms. The comment even says that. Now it was moved to only run in one match arm. The comment is outdated now, and also -- why was it moved?

Member


I guess it is because in the other arm, nothing actually changes, we just block?

Member Author


I guess it is because in the other arm, nothing actually changes, we just block?

Yes, exactly. There are four scenarios that could happen here:

  1. It succeeds without blocking; we do check_and_update_readiness.
  2. It blocks and is eventually unblocked because some event happened; it then hits the success path that contains check_and_update_readiness.
  3. It errors out with ErrorKind::WouldBlock.
  4. It blocks and never gets unblocked; in that case, it's a deadlock.
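Schematically, with hypothetical types rather than Miri's real ones, only the paths that change eventfd state end up re-checking readiness:

```rust
// Hypothetical schematic of the four scenarios above. Scenarios 1 and 2 both
// finish through the same success arm, so check_and_update_readiness only
// needs to appear there once.
#[derive(Clone, Copy)]
enum WriteOutcome {
    Success,              // 1. succeeded without blocking
    UnblockedThenSuccess, // 2. blocked, then an event unblocked it
    WouldBlock,           // 3. non-blocking fd with no capacity
    BlockedForever,       // 4. never unblocked
}

fn effect(outcome: WriteOutcome) -> &'static str {
    match outcome {
        WriteOutcome::Success | WriteOutcome::UnblockedThenSuccess => {
            "check_and_update_readiness runs"
        }
        WriteOutcome::WouldBlock => "ErrorKind::WouldBlock, no state change",
        WriteOutcome::BlockedForever => "deadlock, nothing to update",
    }
}

fn main() {
    // The blocked-then-unblocked path behaves like the immediate success path.
    assert_eq!(
        effect(WriteOutcome::Success),
        effect(WriteOutcome::UnblockedThenSuccess)
    );
    println!("{}", effect(WriteOutcome::UnblockedThenSuccess));
}
```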

Comment on lines +230 to +232
thread1.thread().unpark();
thread2.thread().unpark();
thread3.thread().unpark();
Member


What is the reason for this parking business? That should be explained in comments.

Member Author


I used it to get the execution sequence I want under -Zmiri-preemption-rate=0. It's not ideal, as it is pretty hard to reason about from reading the test alone (right now I verify the sequence manually using -Zmiri-report-progress). I am trying to figure out a better way to write clearer blocking test cases.

In particular, I want to find a way to execute another thread only after a thread is blocked. But when a thread is blocked, I couldn't find a way for it to signal "hey, I am blocked now, so you should start doing something".


@RalfJung RalfJung Nov 23, 2024


We have plenty of tests that need a particular execution sequence. None of them use thread parking. So either your test is quite special, or you can achieve the same thing in a much less complicated way. I think it's the latter.

The scheduler with -Zmiri-preemption-rate=0 is quite simple: the current thread keeps going until it blocks or yields. Then the next thread goes -- in the order that threads were spawned in. So please rewrite these tests to not use thread parking any more.
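A toy model of that rule (purely illustrative, not Miri's actual scheduler code): each thread's steps run uninterrupted until it yields or finishes, then the next runnable thread in spawn order takes over.

```rust
use std::collections::VecDeque;

// Toy model of -Zmiri-preemption-rate=0 scheduling: a thread's steps run
// without preemption until a Yield, then control passes to the next thread
// in spawn order.
enum Step {
    Work(&'static str),
    Yield,
}

fn run(threads: Vec<Vec<Step>>) -> Vec<&'static str> {
    let mut queue: VecDeque<(usize, std::vec::IntoIter<Step>)> = threads
        .into_iter()
        .enumerate()
        .map(|(id, steps)| (id, steps.into_iter()))
        .collect();
    let mut trace = Vec::new();
    while let Some((id, mut steps)) = queue.pop_front() {
        // Run until this thread yields or finishes; nothing interrupts it.
        loop {
            match steps.next() {
                Some(Step::Work(op)) => trace.push(op),
                Some(Step::Yield) => {
                    // Back of the queue; earlier-spawned runnable threads go first.
                    queue.push_back((id, steps));
                    break;
                }
                None => break, // thread finished
            }
        }
    }
    trace
}

fn main() {
    let trace = run(vec![
        vec![Step::Work("t0: a"), Step::Yield, Step::Work("t0: b")],
        vec![Step::Work("t1: a")],
    ]);
    // t0 runs until it yields, then t1 runs to completion, then t0 resumes.
    assert_eq!(trace, ["t0: a", "t1: a", "t0: b"]);
    println!("{trace:?}");
}
```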

Member Author


Then the next thread goes -- in the order that threads were spawned in.

This information is useful. I will take a look at the tests again and open a PR to simplify the tests here and in #4033.
