Clear global THREAD_CACHE in child after fork() #2764
Comments
There's some previous discussion regarding handling this.

Note this can happen even if the fork isn't inside a Trio run. My understanding is it's not really practical to support that.

Do you want to contribute a fix? Otherwise I might try to fix this (though I'm not very comfortable with things related to threads...)

I don't think @tmaxwell-anthropic is going to, do you want to do it? Otherwise I'll give it a try (and I'm certainly not comfortable with thread things either :P)
I can repro

This, based on #2764 (comment), runs without issue for me (Arch Linux) on py39-py313:

```python
import os

import trio


def blah() -> None:
    pass


async def foo() -> None:
    await trio.to_thread.run_sync(blah)


trio.run(foo)
# we are now outside trio.run(), but the worker thread is still in THREAD_CACHE
os.fork()
# this is fine in the parent, but fails in the child, due to the bug
trio.run(foo)
```

although I get a DeprecationWarning about using threads + forks, see https://docs.python.org/3/library/os.html#os.fork and https://discuss.python.org/t/concerns-regarding-deprecation-of-fork-with-alive-threads/33555. Idk how much of that is relevant to Trio, but I would regardless love a concrete repro from @tmaxwell-anthropic. It's possible my repro above would fail if I pushed it to CI on multiple platforms, but I haven't tried that.
When I run that script, the child process silently hangs, but the parent process appears to exit successfully. When you say you "failed to repro the hang", is it possible that's what you saw? Here's a modified script with some additional logging:

```python
import os

import trio

print(f"The parent process is: {os.getpid()=}")


def blah() -> None:
    print(f"blah() is executing in {os.getpid()=}")


async def foo() -> None:
    print(f"foo() starting in {os.getpid()=}")
    await trio.to_thread.run_sync(blah)
    print(f"foo() ending in {os.getpid()=}")


trio.run(foo)
# we are now outside trio.run(), but the worker thread is still in THREAD_CACHE
child_pid = os.fork()
if child_pid == 0:
    print(f"We are the child, {os.getpid()=}")
else:
    print(f"We are the parent, {os.getpid()=} and {child_pid=}")

# this is fine in the parent, but fails in the child, due to the bug
trio.run(foo)

if child_pid != 0:
    print("Parent waiting for child to exit...")
    os.wait4(child_pid, 0)
```

When I run this on either Linux or macOS, the parent runs to completion, while the child prints "foo() starting" but never "foo() ending".

Can you repro the problem with that script?
I can certainly reproduce with your script. I was reading the Python documentation for `os.fork()`, which mentions the DeprecationWarning for forking a multi-threaded process.

This warning actually appears when running the latest reproducer on Python 3.12 (minus the wait at the end).

However, trio should still do the right thing (maintain the invariant that the thread cache is per process), as the issue proposes.
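For context (an editor's addition, not part of the original comments): the warning in question comes from CPython itself whenever `os.fork()` is called while another thread is alive, independent of Trio. A minimal sketch:

```python
# Minimal sketch (editor's illustration): on CPython 3.12+, os.fork() reports a
# DeprecationWarning when other threads are alive at fork time, Trio or not.
import os
import threading
import time
import warnings

# Make sure DeprecationWarnings are not hidden by the default warning filters.
warnings.simplefilter("always", DeprecationWarning)

# Keep a non-main thread alive across the fork, much like Trio's cached worker.
threading.Thread(target=time.sleep, args=(5,), daemon=True).start()

pid = os.fork()  # the fork-with-live-threads DeprecationWarning shows up here
if pid == 0:
    os._exit(0)  # child: exit immediately, nothing to do
else:
    os.waitpid(pid, 0)  # parent: reap the child
```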
Consider the following bug:

1. `trio.to_thread.run_sync()` is called, creating a worker thread. The worker thread is left in the global `THREAD_CACHE`.
2. The process forks (e.g., via the `multiprocessing` module).
3. The child process calls `trio.to_thread.run_sync()`. The global `THREAD_CACHE` still contains a reference to the worker thread, so the child process thinks it has an idle worker thread and tries to dispatch a task to it. However, the worker thread doesn't actually exist in the child process. So `trio.to_thread.run_sync()` hangs forever.

Because `THREAD_CACHE` is interpreter-global, this can happen even if the two Trio run loops are completely separate. For example, in a test suite, one test might call `trio.to_thread.run_sync()`, and then later a completely separate test might use `multiprocessing` to spawn a process that calls `trio.to_thread.run_sync()`.
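To make the failure mode concrete, here is a toy model; it is an editor's illustration, not Trio's actual code. The `run_in_cached_worker()` helper, the job queue, and the single cached worker are all hypothetical stand-ins for `THREAD_CACHE` and `trio.to_thread.run_sync()`:

```python
# Toy model (not Trio's implementation) of a process-global worker cache that
# breaks across fork(): the child inherits the cache's bookkeeping, but not the
# OS thread behind it, so work handed to the "cached" worker is never serviced.
import os
import queue
import threading

_jobs = queue.Queue()


def _worker() -> None:
    while True:
        fn, done = _jobs.get()
        fn()
        done.set()


# Cache one idle worker thread, roughly like THREAD_CACHE after the first
# trio.to_thread.run_sync() call.
threading.Thread(target=_worker, daemon=True).start()


def run_in_cached_worker(fn) -> bool:
    """Hand fn to the cached worker; return True if it actually completed."""
    done = threading.Event()
    _jobs.put((fn, done))
    # The real code path waits indefinitely; a timeout is used here only so the
    # demonstration terminates instead of hanging like the real bug.
    return done.wait(timeout=2)


print("parent:", run_in_cached_worker(lambda: None))  # True: the worker exists

pid = os.fork()  # forking with a live thread; triggers the 3.12+ warning too
if pid == 0:
    # Child: the queue and the cached-worker bookkeeping were copied by fork(),
    # but the worker thread itself was not, so nothing ever runs the job.
    print("child:", run_in_cached_worker(lambda: None))  # False after the timeout
    os._exit(0)
os.waitpid(pid, 0)
```

Running this prints `parent: True` and then, roughly two seconds later, `child: False`; with an unbounded wait, as in the real code path, the child would hang instead.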
I think it should be fairly simple to fix this by using `os.register_at_fork()` to ensure `THREAD_CACHE` is cleared in the child whenever the interpreter forks.
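A hedged sketch of what that fix could look like. The `ThreadCacheSketch` class and its `_idle_workers` deque are illustrative assumptions; Trio's real `THREAD_CACHE` lives in its internals and may need more careful cleanup than this. The `os.register_at_fork()` hook is the mechanism proposed above:

```python
# Sketch only: the cache structure here is an assumption, not Trio's internals.
import os
from collections import deque


class ThreadCacheSketch:
    def __init__(self) -> None:
        # Idle worker threads waiting to be reused (illustrative only).
        self._idle_workers: deque = deque()

    def clear(self) -> None:
        # After fork(), the OS threads behind these entries no longer exist in
        # the child; drop the stale references so the child spawns fresh
        # workers on demand instead of dispatching to a thread that isn't there.
        self._idle_workers.clear()


THREAD_CACHE = ThreadCacheSketch()

# Run in the child immediately after every fork made through os.fork() (which
# includes multiprocessing's "fork" start method on POSIX).
os.register_at_fork(after_in_child=THREAD_CACHE.clear)
```

Clearing only in `after_in_child` matches the behavior seen in the reproducers: the parent's cached workers keep working after the fork, so the parent's cache should be left alone.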