
Is there a proper way to send zlib partial files for unzipping? #4030

Closed

constellates opened this issue Nov 25, 2015 · 11 comments
Labels
question (Issues that look for answers.) · zlib (Issues and PRs related to the zlib subsystem.)

Comments

@constellates

I saw that a bug was fixed in v5.0.0 so that zlib will throw an error when it reaches the end of a truncated input (#2595). Is it still possible to use zlib on partial files?

I'm working on a project that needs to quickly read through a large package of gzipped files (20+ files with a combined size of 2-5GB). All the content I care about is in the header of each file (the first 500 bytes). I previously had this working by using fs.read() with options to read only the first 500 bytes, then using zlib.gunzip() to decompress the contents before parsing the header from the binary data.

This now throws an "unexpected end of file" error after v5.0.0. Is there another way to accomplish this, or is zlib going to throw errors for this process regardless of what I do?
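Here's roughly what I had working before (a minimal sketch; parseBinaryHeader is my own header parser and file.path is one of the gzipped files):

var fs = require('fs');
var zlib = require('zlib');

var HEADER_BYTES = 500;

fs.open(file.path, 'r', function (err, fd) {
    if (err) throw err;
    var buf = Buffer.alloc(HEADER_BYTES);
    // Read only the first 500 bytes of the gzipped file...
    fs.read(fd, buf, 0, HEADER_BYTES, 0, function (err, bytesRead) {
        fs.close(fd, function () {});
        if (err) throw err;
        // ...and decompress just that slice. On v5.0.0+ this callback now
        // receives an "unexpected end of file" error instead of partial data.
        zlib.gunzip(buf.slice(0, bytesRead), function (err, decompressed) {
            if (err) return console.log(err);
            console.log(parseBinaryHeader(decompressed));
        });
    });
});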

I've tried using streams, and the chunks in the .on('data') event are being properly decompressed and parsed, but I'm not confident the chunk size will always contain the full header, and I still have to handle the error, which breaks the pipe before it gets to an "end" or "close" event.

var fs = require('fs');
var zlib = require('zlib');

// Read only the first 501 bytes ({start, end} are inclusive) and pipe them
// through gunzip.
var readStream = fs.createReadStream(file.path, {start: 0, end: 500});
var gunzip = zlib.createGunzip();

readStream.pipe(gunzip)
    .on('data', function(chunk) {
        console.log(parseBinaryHeader(chunk));
        console.log('got %d bytes of data', chunk.length);
    })
    .on('error', function (err) {
        console.log(err);
    })
    .on('end', function() {
        console.log('end');
    });
@mscdex added the question and zlib labels Nov 25, 2015
@kyriosli

I think you should not use pipe here, because it will trigger the end of the destination stream when the source reaches its end. You can instead listen to the read stream's "data" event and write to the gunzip stream yourself, then open the next file for reading when it ends. See the sketch below.
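Something like this (a rough sketch, reusing file and parseBinaryHeader from your post):

var fs = require('fs');
var zlib = require('zlib');

var gunzip = zlib.createGunzip()
    .on('data', function (chunk) {
        console.log(parseBinaryHeader(chunk));
    })
    .on('error', function (err) {
        console.log(err);
    });

fs.createReadStream(file.path, {start: 0, end: 500})
    .on('data', function (chunk) {
        // Write by hand instead of piping...
        gunzip.write(chunk);
    })
    .on('end', function () {
        // ...and deliberately never call gunzip.end(), so the truncated
        // input is not treated as an error. Open the next file here.
    });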

@jhamhader
Contributor

Regarding the partial data behavior after #2595: decompression is done the same way as before. The only difference is that once you call .end() on the zlib stream, if the decompression did not complete (a truncated input), it will emit an error.

You could ask .pipe() to not end the writable part of the pipe by using its end option.
For example:
readStream.pipe(gunzip, {end: false})
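A fuller sketch (again reusing file and parseBinaryHeader from the original post): with {end: false}, the read stream finishing no longer calls gunzip.end(), so the truncated input never produces the "unexpected end of file" error.

var fs = require('fs');
var zlib = require('zlib');

var readStream = fs.createReadStream(file.path, {start: 0, end: 500});
var gunzip = zlib.createGunzip();

gunzip.on('data', function (chunk) {
    console.log(parseBinaryHeader(chunk));
});

// end: false keeps the pipe from calling gunzip.end() when the 501-byte
// read stream is exhausted.
readStream.pipe(gunzip, {end: false});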

@Fishrock123
Contributor

cc @indutny and @trevnorris

@indutny
Member

indutny commented Nov 30, 2015

Yeah, I think you may want to write the partial data and flush; that should work just fine.

@constellates
Author

> Yeah, I think you may want to write the partial data and flush; that should work just fine.

Can you explain what you mean by "flush"? I'm relatively new to node streams.

@indutny
Member

indutny commented Nov 30, 2015

Sorry, I was referring to https://nodejs.org/api/zlib.html#zlib_zlib_flush_kind_callback
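Something along these lines (a sketch; partialData stands for the bytes already read from the file, and parseBinaryHeader is the poster's helper):

var zlib = require('zlib');

var gunzip = zlib.createGunzip();
gunzip.on('data', function (chunk) {
    console.log(parseBinaryHeader(chunk));
});

gunzip.write(partialData); // e.g. the first 500 bytes of the .gz file
// Flush instead of calling end(): buffered output is emitted without
// signaling end-of-input, so no "unexpected end of file" error.
gunzip.flush(function () {
    // the decompressed output for the written bytes has been emitted
});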

@constellates
Author

Thanks for all the suggestions and help. Here's what ended up working for me.

  • Set the chunk size to the full header size.
  • Write the single chunk to the decompress stream and immediately pause the stream.
  • Handle the decompressed chunk.

Example:

var fs = require('fs');
var zlib = require('zlib');

var bytesRead = 500;
var decompressStream = zlib.createGunzip()
    .on('data', function (chunk) {
        parseHeader(chunk);
        // Pause so the stream never reaches end-of-input and emits
        // "unexpected end of file".
        decompressStream.pause();
    }).on('error', function (err) {
        handleGunzipError(err, file);
    });

// highWaterMark (not chunkSize, which fs.createReadStream ignores) is sized
// so the whole header arrives in a single chunk; {start, end} are inclusive,
// hence the + 1.
fs.createReadStream(file.path, {start: 0, end: bytesRead, highWaterMark: bytesRead + 1})
    .on('data', function (chunk) {
        decompressStream.write(chunk);
    });

This has been working so far, and it also allows me to keep handling all other gunzip errors, as the pause() prevents the decompress stream from throwing the "unexpected end of file" error. Let me know if there are any consequences of this strategy I might not be aware of.

@evanlucas
Contributor

Closing as this seems to have been answered. Thanks!

@justinsg

I know this has already been closed, but PR #6069 makes it possible to do this synchronously too.

The synchronous version of @constellates' answer above is:

var fs = require('fs');
var zlib = require('zlib');

var bufferSize = 500; // you'll need to account for the header size
var buffer = Buffer.alloc(bufferSize);

var fd = fs.openSync(file.path, 'r');
fs.readSync(fd, buffer, 0, bufferSize, 0);
fs.closeSync(fd);

// finishFlush: Z_SYNC_FLUSH stops unzipSync from treating the truncated
// input as an error (possible since PR #6069).
var outBuffer = zlib.unzipSync(buffer, {finishFlush: zlib.Z_SYNC_FLUSH});

console.log(outBuffer.toString());

I use it for reading the start of gzipped log files to parse the date of the first line (oldest entry).

Source: API reference for zlib
https://github.com/nodejs/node/blob/master/doc/api/zlib.md#compressing-http-requests-and-responses

@constellates
Author

Thanks @justinsg

I appreciate hearing about the update.

@LRagji

LRagji commented Dec 11, 2022

@justinsg How do you account for the header size? What I understand is that there are some bookkeeping bytes at the start. Are those fixed, or is there a formula for this? I know my application's header size.
