Deadlock in memory index #1833
I think I have it: First, we have the write queue which is stuck trying to acquire …
Next we have …
Now we have …
One such place that …
The above …
Thanks for the useful analysis. I think the most important change we should make is this: the enricher event handler should never get stuck waiting on an external lock; that's just asking for a deadlock. I don't think I realized, when I wrote that code, that the memory index lock can now block the execution of the enricher event queue while it waits for that id chan to get closed. Instead of pushing the …
The summary I posted indicates how many routines were stuck in each stack, but ~1500 routines were coming from index HTTP requests (find_by_tag and autocomplete). Edit: looks like ~4000 gocql routines as well, 1300 in …
Describe the bug
Noticed initially during startup that while loading the memory index and doing a meta record update at the same time, loading gets stuck forever. Took a stack trace, but it's 20MB in size. Here is the (aggressively) reduced file via panicparse: deadlock.clean.stack.txt
Helpful Information
Metrictank Version: master@commit 1dcfc8d
Golang Version (if not using an official binary or docker image): 1.12.6
OS: RHEL 7