Detailed Description

Follow-up for #6144.

When using zarr remote datasets, it is possible to define a compression scheme in the `.zarray` file. Currently, this feature is unused: all requested buckets are decompressed before they are sent to the client. Where possible, we should hand the compressed chunks through unchanged, saving both the decompression time and transmission volume. This requires new data request code paths that do not decompress the buckets they read.
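To sketch what such a pass-through code path could look like (this is not the actual webKnossos implementation; `store`, `read_chunk`, and `pass_through` are hypothetical names), here is a minimal Python example using `numcodecs`, the codec registry zarr itself uses:

```python
import json

import numcodecs  # codec registry used by zarr (blosc, zlib, gzip, ...)


def read_chunk(store, array_path, chunk_key, pass_through=True):
    """Return (chunk_bytes, codec_id) for one stored chunk.

    If pass_through is True, the raw compressed bytes are returned
    together with the codec id from .zarray, so the client can
    decompress them itself; otherwise the chunk is decompressed
    server-side, as the current code paths do.
    """
    meta = json.loads(store[f"{array_path}/.zarray"])
    compressor_config = meta.get("compressor")  # None means uncompressed
    raw = store[f"{array_path}/{chunk_key}"]

    if compressor_config is None:
        return raw, None
    if pass_through:
        # Hand the compressed chunk through unchanged; the codec id
        # tells the client how to decompress it.
        return raw, compressor_config["id"]
    codec = numcodecs.get_codec(compressor_config)
    return codec.decode(raw), None


if __name__ == "__main__":
    # Exercise both paths with a dict-based store.
    codec = numcodecs.Zlib(level=1)
    store = {
        "color/.zarray": json.dumps(
            {"compressor": {"id": "zlib", "level": 1}, "chunks": [32, 32, 32]}
        ).encode(),
        "color/0.0.0": codec.encode(b"\x00" * 32768),
    }
    raw, codec_id = read_chunk(store, "color", "0.0.0")
    print(codec_id)  # "zlib" -- the client decompresses itself
```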
This would be a great addition. In a first iteration, I would only apply this optimization to datasets where the stored chunk size matches the output chunk size. Otherwise, we could create too much server load. One way to express that guard is sketched below.
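A hedged sketch of that guard, continuing the example above (`meta` is the parsed `.zarray` metadata and `bucket_shape` the shape of the requested output bucket; both names are assumptions):

```python
def can_pass_through(meta: dict, bucket_shape: tuple) -> bool:
    # Pass-through is only safe when one stored chunk maps exactly onto
    # one output bucket; otherwise the server would have to decompress
    # and re-chunk anyway, which is the load concern mentioned above.
    return meta.get("compressor") is not None and tuple(meta["chunks"]) == bucket_shape
```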