Ready queue mutation, IndexError: pop from an empty deque #11
Hi Cory,

There could be an issue with async generators. Are you by any chance using async iterators?
Hey Ewald,

We are not using async generators or iterators. Of note, if we change the code path to always use the patched loop, i.e. never to use the unpatched one, the issue no longer reproduces for us.
I've got the same issue, using ib_insync together with a websockets server, with loop.run_forever() at the end of my code. It happens sporadically, so it's difficult to diagnose, but it doesn't appear to happen if I cut websockets out of the equation. I'll sprinkle logging into asyncio and nest_asyncio and try to catch it in the act.

I don't fully understand how nest_asyncio is supposed to work, but it appears to modify the same collection as asyncio (self._ready) when running the patched run_once, and occasionally it removes items from the deque while the loop in asyncio is running, thus causing the exception. Are there restrictions or rules that need to be followed when nesting?

Also, my stack is different in that there's no nest_asyncio frame involved: it's just loop.run_forever() -> run_once() -> crash. My code, basically:

```python
util.patchAsyncio()
loop = asyncio.get_event_loop()
...
self.bars.updateEvent += self.onTick  # receive periodic updates from IB
loop.run_until_complete(websockets.serve(self.tick, '127.0.0.1', port))
loop.run_forever()
```
The repo has been updated to follow Cory's approach of always using the patched loop. It also patches

I believe this should fix the issues; if it doesn't, let me know. I'll keep this bug open for a while, and when it works okay a 1.1 version will be released.
Thanks! But it appears to still be happening, now as such:

I'll see if I can get more detailed output next week.
Ok, I boiled it down to this base case. Gonna think about how to work around it, but if you've got any ideas they would be greatly appreciated.

```python
import asyncio
import nest_asyncio

nest_asyncio.apply()
loop = asyncio.get_event_loop()

async def func1():
    loop.run_until_complete(asyncio.sleep(5))

async def func2():
    loop.run_until_complete(asyncio.sleep(0.1))
    await asyncio.sleep(0.5)

async def start():
    await asyncio.gather(func1(), func2())

asyncio.run(start())
```

Cheers
I tagged the loops with ids so I could track what they're doing, as well as track any operations done on the tasks list (appends in _call_soon). What is happening is that you get a nested loop where the outer loop has 2 tasks; when the first task runs, it clears the second task in the inner loop, so when the inner loop finishes there's nothing left to pop from the deque. Here's my output log, where id==2 is the outer loop and id==3 is the inner loop.

Hope this helps.
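The analysis above can be sketched with a plain deque standing in for the loop's shared self._ready queue (the handle names here are made up). asyncio's _run_once snapshots len(self._ready) up front and then pops exactly that many handles, so a nested run that drains the same deque in between leaves the outer iteration popping from an empty deque:

```python
from collections import deque

# Simplified stand-in for the event loop's shared self._ready queue.
ready = deque(["outer_handle_1", "outer_handle_2"])

# asyncio's _run_once snapshots the length up front...
ntodo = len(ready)

# ...but a nested _run_once (entered from inside the first handle)
# drains the very same deque to completion.
while ready:
    ready.popleft()

# Back in the outer _run_once: it still tries to pop ntodo handles.
caught = None
try:
    for _ in range(ntodo):
        ready.popleft()
except IndexError as err:
    caught = err

print(caught)  # pop from an empty deque
```

This reproduces the exact exception from the issue title without any event loop involved: the bug is purely the stale ntodo count racing against a nested drain of the shared queue.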
For now I just wrapped it in a try/except; not sure if this will have some unintended side effects, though. I'm thinking not, since the task was already processed.

```python
try:
    self._run_once()
except IndexError:
    pass
```

My question is: is it an issue in general that there are situations where loops cross-process each other's tasks? A better solution might be to localize task lists to their own loops, so that each loop is only concerned with its own task list. Thoughts?
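An alternative to swallowing the exception, sketched here on the same simplified deque model (not the actual nest_asyncio code), would be to re-check the queue on every iteration instead of trusting a pre-computed count; a nested drain then simply leaves nothing to do:

```python
from collections import deque

ready = deque(["h1", "h2", "h3"])
processed = []

def run_handle(h):
    processed.append(h)
    if h == "h1":
        # Simulate a nested _run_once draining the shared queue
        # from inside the first handle.
        while ready:
            processed.append(ready.popleft())

# Re-check emptiness on every iteration instead of a fixed
# for-range(ntodo) loop; an empty deque just ends the pass.
while ready:
    run_handle(ready.popleft())

print(processed)  # ['h1', 'h2', 'h3'] -- no IndexError
```

Note that stock asyncio uses the fixed count deliberately, so that callbacks scheduled during the current pass wait until the next pass; a while-non-empty loop changes that scheduling behavior, which may be why catching the IndexError is the less invasive fix.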
Thanks very much Ostruk for the test case, the analysis and the solution of catching the IndexError. In the test case there are two run_until_complete nestings active at the same time, both draining the same ready queue.

I agree that there should be no unintended side effects of ignoring the IndexError, since, as you say, all tasks have been processed already and it doesn't really matter in which nesting this happens.

What does matter, though, is that the handles are run in the order in which they became ready. This is why there is only one ready queue. Having a queue per nesting would mean that the inner nesting blocks the outer nesting for as long as the inner one runs.

There's a new release (v1.1.0) with the updated code. Thanks again Ostruk for making this possible. I'll keep this bug open for a while in case there are any unforeseen consequences.
Happy to help!
Hey,
First, thanks for sharing the patch and publishing this library. I'm trying to work around re-entrancy while converting a codebase incrementally to asyncio, where there are call chains of async -> loop.run_until_complete(async).
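For context, this is the call-chain shape being described (the function names here are hypothetical). With stock asyncio and no nest_asyncio, re-entering the running loop via run_until_complete raises a RuntimeError, which is exactly the restriction the patch lifts:

```python
import asyncio

async def fetch_value():
    await asyncio.sleep(0)
    return 42

def legacy_sync_api(loop):
    # Old synchronous entry point that still blocks on the loop.
    return loop.run_until_complete(fetch_value())

async def caller(loop):
    # async -> sync -> run_until_complete(async): re-entering the
    # already-running loop, which stock asyncio forbids.
    try:
        return legacy_sync_api(loop)
    except RuntimeError as err:
        return err

loop = asyncio.new_event_loop()
result = loop.run_until_complete(caller(loop))
loop.close()
print(result)  # This event loop is already running
```

Applying nest_asyncio is meant to make the legacy_sync_api call succeed instead of raising, by running the nested work on the same (patched) loop.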
I've hit an issue with the patch where handles from the _ready queue seem to be popped from elsewhere. I.e. in

python3.6/asyncio/base_events.py

popping from the ready queue throws, because it iterates too far. The full exception stack trace looks like this:
My hypothesis is that one of the enqueued items triggers nest_asyncio, or else an enqueued item is processing other elements from the queue.

It is the worst kind of problem: reproducible in the application, but I have yet to isolate a simple test case that makes it easy to understand. Do you have a hunch of what might be happening? Perhaps something you ran into while writing the patch?
Thanks for your help,
Cory