feat(server): accept compressed minidumps #4029
Conversation
Thanks, this has been a long-requested feature!
Do you think it would make sense to support additional compression methods such as zstd as well?
Even better.
Yes, definitely. I guess so.
Just so you know, I am working on an escalating Android issue and will revisit this when I have more time.
* change the order of validation
* separate decoding from validation
Force-pushed from b693f17 to 3dc5860.
Could someone guide me through the integration tests? I probably hijacked the wrong test to integrate the compressed minidumps, but the test has failed consistently at the same point since the first commit (https://github.com/getsentry/relay/actions/runs/10988917090/job/30506150074?pr=4029#step:8:1918). I thought this might be due to some fixture assumption regarding the minidump filename, but that wasn't the issue. I manually tested the upload only in "managed" mode, and it works there, so I am not sure whether I missed something in the implementation or just don't understand the test. Someone with more insight might be able to point me in the right direction more quickly.
I'm pretty sure the problem is in this line: relay/tests/integration/test_minidump.py, line 402 (as of 59861f1).
We iterate until we have reconstructed the original file contents from the chunks. But if the file is compressed, that will never happen, because the chunks are parts of the decompressed file.
Thanks, @loewenheim. That was undoubtedly it. I adapted the code to upload the compressed minidump and to compare against the uncompressed one during chunk retrieval. I'll let you decide whether this should be a separate test, but I think the special cases are minimal enough not to duplicate the remaining code.
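For readers following the thread, the gist of the adaptation is to upload the compressed bytes while still comparing the reassembled chunks against the uncompressed contents. The snippet below is only a minimal sketch of that idea, not the actual code in test_minidump.py; `upload_minidump` and `attachment_chunks` are hypothetical stand-ins for the existing test helpers.

```python
import gzip


def assert_compressed_minidump_roundtrip(relay, raw_minidump: bytes):
    # Upload the gzip-compressed minidump; Relay should detect the compression
    # and decompress the payload before processing it.
    compressed = gzip.compress(raw_minidump)
    upload_minidump(relay, compressed)  # hypothetical helper wrapping the multipart POST

    # Reassemble the attachment from its chunks. The chunks are produced from
    # the *decompressed* minidump, so the comparison target must be the raw
    # (uncompressed) contents rather than the bytes that were uploaded.
    reconstructed = b""
    while reconstructed != raw_minidump:
        reconstructed += next(attachment_chunks(relay))  # hypothetical chunk iterator
```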
* remove regex for container file-extensions
* remove unnecessary Err match
* make std::io::Error only a source for InvalidCompressionContainer
* extract magic-header constants
* handle errors in ZstdDecoder creation by logging and returning None
* rewrite removal of container file-extensions with idiomatic iterator mapping
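To illustrate the magic-header dispatch these commits describe (the actual implementation is in Relay's Rust code), here is a rough, language-agnostic sketch in Python. The magic byte sequences are the standard gzip (1f 8b) and zstd (28 b5 2f fd) prefixes; the third-party zstandard package is used purely for illustration.

```python
import gzip
from typing import Optional

import zstandard  # third-party package, used here only to illustrate zstd decoding

GZIP_MAGIC = b"\x1f\x8b"
ZSTD_MAGIC = b"\x28\xb5\x2f\xfd"


def maybe_decompress(payload: bytes) -> Optional[bytes]:
    """Return the decoded minidump, or None if decoding a recognized container fails.

    Payloads without a known compression header are returned unchanged.
    """
    try:
        if payload.startswith(GZIP_MAGIC):
            return gzip.decompress(payload)
        if payload.startswith(ZSTD_MAGIC):
            return zstandard.ZstdDecompressor().decompressobj().decompress(payload)
    except Exception:
        # The real code logs the error before giving up; this sketch just returns None.
        return None
    return payload
```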
While working on the documentation issues assigned for native, the topic of compressing minidumps when uploading them via curl directly to the minidump endpoint came up. This is my attempt at documenting the current process: https://github.com/getsentry/sentry-docs/pull/11304/files.
Uploading a compressed minidump without metadata is relatively straightforward: you use a gzipped minidump as the sole POST content, provide the minidump content type, and declare the body as gzip-encoded. Axum transparently decodes the minidump before it even reaches the Relay endpoint, so it can be processed without any awareness of the encoding.
Unfortunately, because an encoding cannot be supplied per multipart field (at least not in an RFC-conforming way), a user who wants to supply metadata alongside the minidump would have to construct the entire multipart body manually, which is error-prone, compress that whole body, and send it as a single data request.
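For illustration, the straightforward case described above could look roughly like this in Python with requests; the endpoint URL and the content type are placeholders rather than values taken from this PR.

```python
import gzip

import requests

# Placeholder minidump endpoint derived from a project DSN; adjust to your project.
MINIDUMP_URL = "https://o0.ingest.sentry.io/api/0/minidump/?sentry_key=examplePublicKey"

with open("crash.dmp", "rb") as f:
    body = gzip.compress(f.read())

# The gzipped minidump is the entire request body; Content-Encoding lets the
# server decode it transparently before the endpoint sees it.
response = requests.post(
    MINIDUMP_URL,
    data=body,
    headers={
        "Content-Type": "application/octet-stream",  # assumed content type
        "Content-Encoding": "gzip",
    },
)
response.raise_for_status()
```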
Curl doesn't provide any convenience for gzip-encoding arbitrary POST content.
This simple change in Relay would allow users to keep their curl requests unchanged (they could use the simple curl field parameters explained in the docs), apart from uploading a gzipped minidump in place of a plain one.
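With the change in place, the multipart case would only differ in the file being attached. The sketch below assumes the upload_file_minidump and sentry field names used in the minidump endpoint documentation; treat the URL, field names, and metadata values as placeholders.

```python
import gzip

import requests

MINIDUMP_URL = "https://o0.ingest.sentry.io/api/0/minidump/?sentry_key=examplePublicKey"

# Compress the minidump up front; with this change, Relay detects the gzip
# container on the minidump field itself and decompresses it server-side.
with open("crash.dmp", "rb") as f:
    compressed_minidump = gzip.compress(f.read())

response = requests.post(
    MINIDUMP_URL,
    files={"upload_file_minidump": ("crash.dmp.gz", compressed_minidump)},
    # Additional metadata travels as ordinary form fields, exactly as before.
    data={"sentry": '{"release": "my-app@1.0.0"}'},
)
response.raise_for_status()
```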
Let me know whether this would be an acceptable change, compared to only adding the documentation, which describes a heavily manual and thus error-prone process.