runtime error: index out of range [4294967295] with length 16 in github.com/klauspost/compress@v1.15.1/flate #630
Comments
Problem may be on our side. Sorry. Will reopen once we've investigated further, if it turns out not to be on our side after all.
👍🏼 It could be more than one goroutine writing at once. Let me know what you find.
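For context, here is a minimal, self-contained sketch of what "more than one goroutine writing at once" looks like in practice. It is not taken from the reporter's code and the names are illustrative: concurrent `Write` calls on a single compressor race on its internal state, while serializing them with a mutex (or giving each goroutine its own writer) is safe.

```go
package main

import (
	"bytes"
	"compress/gzip" // the same hazard applies to github.com/klauspost/compress/gzip and /flate
	"sync"
)

func main() {
	var buf bytes.Buffer
	zw := gzip.NewWriter(&buf)

	// UNSAFE: two goroutines calling zw.Write concurrently race on the
	// writer's internal buffers and can corrupt its compression state.
	//   go zw.Write([]byte("a"))
	//   go zw.Write([]byte("b"))

	// Safe: serialize access with a mutex, or use one writer per goroutine.
	var mu sync.Mutex
	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			mu.Lock()
			defer mu.Unlock()
			zw.Write([]byte("payload\n"))
		}()
	}
	wg.Wait()
	zw.Close()
}
```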
Will do, thanks.
Heads up that this is still happening with …
@hanzei Upgrade and (more likely) make sure you don't do concurrent writes.
We use …
You can easily do concurrent writes to a … Also check out https://github.com/klauspost/compress/tree/master/gzhttp#gzip-middleware
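For reference, a minimal sketch of the gzhttp middleware linked above, following its README; the handler body and port are placeholders. The middleware wraps each request's ResponseWriter with its own compressor, so handlers never share a writer across goroutines.

```go
package main

import (
	"fmt"
	"net/http"

	"github.com/klauspost/compress/gzhttp"
)

func main() {
	hello := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "hello")
	})

	// Wrap the handler; responses are gzipped transparently when the
	// client sends Accept-Encoding: gzip.
	http.Handle("/", gzhttp.GzipHandler(hello))
	http.ListenAndServe(":8080", nil)
}
```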
There are literally millions of uses of this per day, so the chance of you doing concurrent writes is a bit bigger than an unseen bug that hasn't been caught or shown up in fuzz testing.
Thanks for the swift response. 👍
Hey @klauspost - apologies for commenting on a closed issue. But I took a good look at the code from our side, at least the staticFilesHandler from where these panics are originating, and I could not see anything wrong. Looking a bit more closely at the huffman_code.go file, however, I see that the panic is happening from:

```go
} else {
	// If we stole from below, move down temporarily to replenish it.
	for levels[level-1].needed > 0 { // <--- here
		level--
	}
}
```

And every time, it's the same error. I am wondering if there could be a case where all the … I have also been running some tests with the binary running in … Curious to know your thoughts on this.
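As a side note on the number in the panic message: 4294967295 is the maximum value of a 32-bit unsigned integer, i.e. what you get when an unsigned index is decremented past zero. The snippet below is not the library's code, just a standalone illustration of how that exact message can arise with a slice of length 16.

```go
package main

import "fmt"

func main() {
	levels := make([]int, 16) // mirrors the "length 16" in the report

	defer func() {
		if r := recover(); r != nil {
			// Prints: runtime error: index out of range [4294967295] with length 16
			fmt.Println(r)
		}
	}()

	var level uint32
	level-- // unsigned underflow: wraps from 0 to 4294967295

	_ = levels[level]
}
```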
Without a reproducer it is quite hard to get further. Would it be feasible to wrap the encoder in a …
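The suggestion above is cut off in this transcript, but one approach that fits the discussion would be a small diagnostic wrapper that flags overlapping calls, so the crash reports themselves would confirm (or rule out) concurrent use. A hypothetical sketch, not part of the library:

```go
package main

import (
	"bytes"
	"compress/gzip"
	"io"
	"sync/atomic"
)

// serialWriter is a hypothetical guard around any io.WriteCloser: it panics
// with a clear message if two goroutines ever overlap in Write or Close,
// which would show up directly in a Sentry stack trace.
type serialWriter struct {
	w    io.WriteCloser
	busy int32
}

func (s *serialWriter) Write(p []byte) (int, error) {
	if !atomic.CompareAndSwapInt32(&s.busy, 0, 1) {
		panic("concurrent Write detected on compressor")
	}
	defer atomic.StoreInt32(&s.busy, 0)
	return s.w.Write(p)
}

func (s *serialWriter) Close() error {
	if !atomic.CompareAndSwapInt32(&s.busy, 0, 1) {
		panic("Close raced with an in-flight Write")
	}
	defer atomic.StoreInt32(&s.busy, 0)
	return s.w.Close()
}

func main() {
	var buf bytes.Buffer
	w := &serialWriter{w: gzip.NewWriter(&buf)}
	w.Write([]byte("hello"))
	w.Close()
}
```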
Couldn't agree more.
The issue is that there's no way for us to actually get the dump file back from people running into this. We only get the stack trace in the Sentry dashboard, and that's about the only info we have. There is no way for us to reach out to the affected people. I apologize for not being able to give you anything more solid than this. I spent quite a few hours trying to get it to reproduce but failed to. But we do keep seeing this crash from time to time. :(
I know the issue. My hunch is that if this is triggered, simply slapping an `if level == 0 { something }` on it will just cause something else to crash, or worse, be wrong. I will see what I can cook up.
Thanks. I will leave it to your best judgement.
We've seen a Sentry crash in version 1.15.1 of klauspost/compress. The code seems to be essentially the same on `master` as on `v1.15.1`, and the history doesn't indicate a fix for this, so I'm opening a new issue.

OS / Arch: Linux / AMD64
Go version: 1.18.1