Each file inside a zip is supposed to specify the minimum zip extraction version based on the features the file uses. In particular, according to the PKWARE spec, a file using Deflate compression is supposed to specify 2.0 as a minimum:
4.4.3.2 Current minimum feature versions are as defined below:
1.0 - Default value
1.1 - File is a volume label
2.0 - File is a folder (directory)
2.0 - File is compressed using Deflate compression
2.0 - File is encrypted using traditional PKWARE encryption
2.1 - File is compressed using Deflate64(tm)
2.5 - File is compressed using PKWARE DCL Implode
2.7 - File is a patch data set
4.5 - File uses ZIP64 format extensions
[abbreviated]
https://github.com/Guardsquare/proguard-core/blob/master/base/src/main/java/proguard/io/ZipOutput.java
https://pkwaredownloads.blob.core.windows.net/pkware-general/Documentation/APPNOTE-6.3.3.TXT
However, when not using Zip64 extensions, ProGuard unconditionally writes 1.0 (= int 10) in the zip local file header's "version needed to extract" field. Expanding the constants, it basically does this:
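As a hedged sketch of the effective behavior (the writeShort helper here is illustrative, not ProGuard's actual code), the local header write amounts to:

```java
import java.io.ByteArrayOutputStream;

// Sketch only, not ProGuard's literal code: the local file header's
// "version needed to extract" field written as a constant 1.0.
public class LocalHeaderSketch {
    // ZIP stores 16-bit fields little-endian: low byte first.
    static void writeShort(ByteArrayOutputStream out, int value) {
        out.write(value & 0xff);
        out.write((value >>> 8) & 0xff);
    }

    public static void main(String[] args) {
        ByteArrayOutputStream header = new ByteArrayOutputStream();
        writeShort(header, 10); // version needed to extract = 1.0, even for Deflate entries
        byte[] bytes = header.toByteArray();
        System.out.println(bytes[0] + "," + bytes[1]); // prints 10,0
    }
}
```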
That should be:
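A hedged sketch of the fix (illustrative names, not a literal patch against ZipOutput.java): derive the minimum extraction version from the features the entry actually uses, following the PKWARE feature-version table.

```java
// Sketch: choose the minimum "version needed to extract" per entry.
public class VersionNeededSketch {
    static int versionNeeded(boolean deflated, boolean zip64) {
        if (zip64)    return 45; // 4.5 - ZIP64 format extensions
        if (deflated) return 20; // 2.0 - Deflate compression
        return 10;               // 1.0 - default (e.g. stored entries)
    }

    public static void main(String[] args) {
        System.out.println(versionNeeded(true, false));  // prints 20
        System.out.println(versionNeeded(false, true));  // prints 45
        System.out.println(versionNeeded(false, false)); // prints 10
    }
}
```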
And in the central directory file header, ProGuard writes:
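A hedged sketch, assuming the same unconditional 1.0 as in the local header (the central directory file header carries two version fields, "version made by" followed by "version needed to extract"):

```java
import java.io.ByteArrayOutputStream;

// Sketch only, not ProGuard's literal code: the two version fields
// at the start of a central directory file header.
public class CentralDirectorySketch {
    static void writeShort(ByteArrayOutputStream out, int value) {
        out.write(value & 0xff);         // little-endian: low byte first
        out.write((value >>> 8) & 0xff);
    }

    public static void main(String[] args) {
        ByteArrayOutputStream entry = new ByteArrayOutputStream();
        writeShort(entry, 10); // version made by = 1.0
        writeShort(entry, 10); // version needed to extract = 1.0, again unconditional
        byte[] b = entry.toByteArray();
        System.out.println(b[0] + "," + b[1] + "," + b[2] + "," + b[3]); // prints 10,0,10,0
    }
}
```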
Again, the extraction version should be at least 2.0 if the file uses Deflate. The creation version ("version made by") is purely informational and doesn't affect compatibility: you can mirror the extraction version there, or just write 4.5 unconditionally to signify that ProGuard supports Zip64.
This error apparently isn't important in practice, or it would have been noticed by now. To be honest, I only noticed it because I was examining the files in a hex viewer, curious why ProGuard's zip files were smaller than those produced by Java's ZipOutputStream even when containing the same content (it's because ProGuard writes the CRC-32 and sizes in-line in the local header instead of appending a data descriptor).
By the way, have you considered integrating Google's Zopfli into ProGuard? I don't know whether you'd consider this in scope for the project, but I experimented and found that replacing DeflaterOutputStream with Zopfli shrinks jars by an additional 5%. You can also run PNGs through ZopfliPNG for similar benefits. It's slow and clunky to generate because it invokes a separate process for every resource in the jar, and the compression can never beat Pack200+LZMA tricks, but it has absolutely zero overhead or downsides at application runtime.
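For reference, a minimal sketch of the shell-out-per-resource approach, assuming a zopfli binary on the PATH (the flags here come from the zopfli command-line tool: -c writes to stdout, --deflate emits a raw deflate stream usable as zip entry data, --i15 sets the iteration count):

```java
import java.io.IOException;
import java.nio.file.Path;
import java.util.List;

// Hedged sketch: compress one jar resource with an external zopfli process.
public class ZopfliSketch {
    // Build the argv; kept separate so it can be inspected without running zopfli.
    static List<String> zopfliCommand(Path resource) {
        return List.of("zopfli", "-c", "--deflate", "--i15", resource.toString());
    }

    public static void main(String[] args) throws IOException, InterruptedException {
        Path resource = Path.of(args[0]);
        Process p = new ProcessBuilder(zopfliCommand(resource)).start();
        // Raw deflate bytes, ready to place after a zip local file header.
        byte[] deflated = p.getInputStream().readAllBytes();
        if (p.waitFor() != 0) throw new IOException("zopfli failed for " + resource);
        System.out.println(resource + " -> " + deflated.length + " deflated bytes");
    }
}
```

Spawning a process per resource is what makes this slow; batching files into one zopfli invocation, or using a JNI binding, would likely help.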