Deadlock when using manifold.time scheduling in application code #148
Comments
I had the same problem with

The above code will print

As in @gsnewmark's example, it seems that
@ztellman Initially this code used a fixed-pool executor sized to the number of cores.
@joeltio I've tried a few different versions of your code, but it seems I cannot reproduce the issue with it; it works fine at first glance. @gsnewmark Regarding your example: I know where this issue comes from, and I do understand how hard it is to debug something like that in a large codebase... but semantically, if you block a thread from whatever seems to be an "async" context (whatever that means technically), you will always have problems. That's a curse of implementing async flows on top of the JVM runtime: you should always keep in mind that there's a thread pool somewhere underneath the API, and this thread pool might end up in a situation where all of its threads are blocked. With that being said, I think there's no ideal solution here. What we can do:
@ztellman what do you think?
@kachayev My bad, I just tried it and it didn't work either. I can't remember what I was trying to do, but I know that after running the above code, it will print

The following code without the

However, the following code will print

To make it hang, you can swap out

The code will print
No problem!
That's exactly the same problem @gsnewmark described earlier (thread blocking that causes the corresponding thread pool to be exhausted). I'm still thinking about what could be done here to improve the situation other than better documentation.
I'm perfectly fine with updating the documentation (as was indicated in the initial report), because right now the most glaring issue is that this behavior is not immediately obvious unless you check Manifold's internals. And once you know the reason, it's quite easy to circumvent the issue, for example with a dedicated clock as sketched below.
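A minimal sketch of that workaround, using `manifold.time/with-clock` together with `manifold.time/scheduled-executor->clock` (the pool size and names here are illustrative, not from the thread):

```clojure
(require '[manifold.time :as t])
(import '[java.util.concurrent Executors])

;; A dedicated clock for application-level scheduling, backed by its own
;; executor, so a blocking scheduled fn cannot starve the default clock
;; that Manifold uses internally for deferred timeouts.
(def app-clock
  (t/scheduled-executor->clock
    (Executors/newScheduledThreadPool 2)))

;; Everything scheduled inside with-clock runs via app-clock.
(t/with-clock app-clock
  (t/in 1000 #(println "runs on the dedicated scheduler")))
```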
Right now the default `manifold.time` scheduler is based on a single-thread executor which is used both internally in Manifold itself (e.g., for deferred timeouts) and in the public scheduling API (namely `manifold.time/in` and `manifold.time/every`). This can lead to nasty deadlock bugs where seemingly unconnected parts of the application stop working due to blocking in the scheduler. For example, the following code will never finish, because the inner timeout will wait on the scheduler thread, which is blocked by the scheduled function itself:
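A minimal sketch of the kind of code being described (not the author's original snippet; it assumes the default clock and `manifold.deferred/timeout!`):

```clojure
(require '[manifold.deferred :as d]
         '[manifold.time :as t])

;; The fn scheduled via t/in runs on the default single-thread scheduler.
;; Inside it we block on a deferred whose only completion path is its
;; timeout, but firing that timeout needs the very same scheduler thread,
;; so the deref (and therefore the whole expression) never returns.
@(t/in 100
       (fn []
         @(d/timeout! (d/deferred) 100 ::timed-out)))
```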
This example is artificial, but we actually had a similar situation in real code recently (just with more indirection involved).

`manifold.time` allows passing custom clocks/executors to the scheduling functions (`with-clock`), so this is not really a major issue. But still, should the documentation be updated to clearly state that using `manifold.time` in application code as-is can cause issues with other parts of Manifold? Alternatively, should two separate clocks be created by default: one for Manifold's own scheduling needs and one for all other `manifold.time` users (at first glance I can't see problems with this approach, but maybe I'm missing something)?
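For illustration, a hypothetical sketch of that two-clock split (this is not current Manifold behavior, and both names are made up):

```clojure
(require '[manifold.time :as t])
(import '[java.util.concurrent Executors])

;; Hypothetical only: Manifold's internal timeouts would run on a private
;; clock, while application calls to manifold.time/in and manifold.time/every
;; would default to a separate clock, so blocking a user-scheduled fn could
;; no longer starve the internal timeouts.
(def internal-clock   ; would back deferred timeouts etc.
  (t/scheduled-executor->clock
    (Executors/newSingleThreadScheduledExecutor)))

(def default-user-clock   ; would back the public scheduling API
  (t/scheduled-executor->clock
    (Executors/newSingleThreadScheduledExecutor)))
```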