Listing every format that could be represented as virtual zarr #218
Unfortunately, based on https://gdal.org/user/virtual_file_systems.html#jpeg2000, JPEG2000 is likely in the 'probably can't support' category. I would've liked for these datasets to be virtualized, but they're all JPEG2000 in order to optimize for the download-to-disk model :( Another way to phrase this question, which may help the search, is: which of the formats supported by GDAL's raster drivers can be virtualized?
I like this issue! It's worth saying that anything kerchunk can chunk can be v-zarred, right? In that repo, there are suggestions of other worthwhile formats; dicom and nifti (medical imaging) spring to mind. The latter is nice, but often whole-file-gzipped; the former is evil in the way that other 90s standards are evil, but extremely widespread.
❤️
Yes, that's the idea. This function does
Hugging Face safetensors is an interesting example - it's uncompressed, so basically just like reading netCDF3, having no internal chunking. But it also puts all the metadata at the start of the file, making it a bit like cloud-optimized HDF5. See also huggingface/safetensors#527 (comment)
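For reference, the safetensors layout is simple enough that a byte-range index (the raw material for virtual chunk references) can be sketched in a few lines. This is a minimal illustrative parser, not a library function - `safetensors_byte_ranges` is a name invented here:

```python
import json
import struct

def safetensors_byte_ranges(path):
    """Parse a .safetensors header and return absolute byte ranges for
    each tensor, suitable as virtual chunk references.

    Per the published safetensors spec: the first 8 bytes are a
    little-endian u64 giving the JSON header length; the header maps
    tensor names to {"dtype", "shape", "data_offsets": [start, end]},
    with offsets relative to the end of the header.
    """
    with open(path, "rb") as f:
        (header_len,) = struct.unpack("<Q", f.read(8))
        header = json.loads(f.read(header_len))
    data_start = 8 + header_len
    ranges = {}
    for name, info in header.items():
        if name == "__metadata__":  # optional free-form metadata block
            continue
        start, end = info["data_offsets"]
        ranges[name] = {
            "path": path,
            "offset": data_start + start,
            "length": end - start,
            "dtype": info["dtype"],
            "shape": info["shape"],
        }
    return ranges
```

Since each tensor is one contiguous uncompressed run of bytes, each one maps cleanly to a single entry in a chunk manifest.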
If the format is simple and common, I say it should be included immediately, especially when there is a straightforward way to check correctness.
but you can assign internal chunking. Is partial reading available in upstream at all yet? |
I raised #367 to track adding it.
This issue seems to suggest it is: zarr-developers/zarr-python#1106. But I think to take advantage of this with virtualizarr would require #199 to be merged.
No, zarr's PR1106 only implemented it for blosc compression, something I've been arguing about for a very very long time! If you can dynamically re-imagine the chunking at runtime (which is what I think #119 does), then that would be good enough for most practical uses - but still annoying. Zarr should just do this! i.e., the chunk IO function shouldn't just be passed "I need chunk X", but "I need section (:, s:t, i:j) of chunk X" and a way to characterise what the decompression pipeline looks like (this is OK for uncompressed data, some blosc, zstd maybe..., but not zlib). This was my suggestion in passing Contexts around in zarr v2.
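To illustrate why uncompressed chunks are the easy case here: a row slice of a C-ordered chunk is one contiguous byte range, so a partial read needs no decompression pipeline at all. A minimal sketch (the helper name and file layout are invented for illustration; this is not zarr API):

```python
import numpy as np

def read_rows(path, chunk_offset, chunk_shape, dtype, row_start, row_stop):
    """Read rows [row_start, row_stop) of a C-ordered, *uncompressed*
    2-D chunk without fetching the whole chunk. Rows are contiguous on
    disk, so the slice maps to a single byte range (or, equivalently,
    one HTTP range request). With zlib-style compression this is not
    possible, since decoding byte N requires decoding everything before it.
    """
    itemsize = np.dtype(dtype).itemsize
    row_bytes = chunk_shape[1] * itemsize
    start = chunk_offset + row_start * row_bytes
    nbytes = (row_stop - row_start) * row_bytes
    with open(path, "rb") as f:
        f.seek(start)
        buf = f.read(nbytes)
    return np.frombuffer(buf, dtype=dtype).reshape(
        row_stop - row_start, chunk_shape[1]
    )
```

The "I need section (s:t, i:j) of chunk X" request above is exactly this offset arithmetic, generalised to N dimensions and gated on whether the codec pipeline permits random access.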
I don't disagree, but if we want to discuss this further we should do it on a new issue (on this repo or upstream on zarr). |
Environment and Climate Change Canada here 😊. We have something called RPN/FST files, which are binary files containing metadata, grids and Fortran arrays. They are incredibly efficient in both disk space and compression (optional). c/Fortran library to work with them

We would love to use something like VirtualiZarr to create per-model data cubes that can be easily accessed by ML modules like Jax/torch/Dask, but also by Web APIs such as pygeoapi for OGC APIs. Some of the difficulties are that we have multiple grids and vertical dimensions per file, and that not all files of a model contain the same variables (accumulation variables start after the 0 forecast hour). Indexing our data lake is doable, but up until recent advancements in both Zarr indexing and grouping it did not really fit our plans.

Tools like SHTools, XPublish, XCube, MetPy, Holoviz, etc. all have an Xarray entry point, but some particularities make the workflow still experimental and hacky. If there was a way to create a massive Zarr/Icechunk/TileDB/Delta Lake on top of a GPFS/Ceph filesystem mounted on an HPC, where perhaps access to the data can be accelerated through extra CPUs/GPUs, that would allow me and my colleagues to build, with confidence, tooling for some proper Science 😊.
@itsgifnotjiff, that sounds like it might be the perfect opportunity for me to build a vzarr reader rather than a kerchunk indexer (unless @TomNicholas, you don't think it's worth distinguishing between the two). I am in Toronto, so if you are too, I'd be happy to meet in person to go over the format and requirements.
I am actually in Montreal. 😶 I am however available and motivated. Let me know if you have time to meet virtually if nothing else just for me and my boss to say thank you for your wonderful work. |
Yet another self-describing binary file format! Sounds like something that could be virtualized.
Yes, it should be possible to provide access via the zarr-python API.
I'm less familiar with this, but there might be a way to do that.
The (current) set of restrictions for the virtual zarr approach are documented here. As your data apparently can already be mapped to the xarray/netCDF data model, as long as you don't have any of those listed issues then I would expect this to be possible.
A cloud data lake?
This is technically possible using VirtualiZarr and Icechunk together, though currently the support within icechunk for Ceph and HPC is underdeveloped compared to the typical AWS use case.
@martindurant it would be great for you to have a go at building a virtualizarr reader (bearing in mind we're just finishing up a refactor that will simplify the definition of what a virtualizarr reader actually is), but @itsgifnotjiff as the owner of a bespoke file format you should also be the owner of any code designed to read that format. VirtualiZarr is designed to be extensible in that respect, so you can write a dedicated virtualizarr reader for RPN/FST files that lives outside of this main virtualizarr repository.
Internal cloud ... ish 🙂 (I can get into details if needed)
That's great to hear! We have published some tools that allow for advanced workflows such as
Cool! Let us know how you get on with a virtualizarr reader, and please raise new issues if you have any problems.
Let's list all the file formats that could potentially be represented efficiently as "virtual zarr" - i.e. zarr + chunk manifests.
The important criterion here is that the format must store data in a small number of contiguous chunks, such that access using HTTP range requests to object storage is efficient. This rules out some formats; for example, I don't think we can efficiently access this format that @kmuehlbauer mentioned over in openradar/xradar#187 (comment):
If we start thinking of Zarr as a "SuperFormat" (super as in superset, not as in super-duper), then this is the list of existing formats comprising that set of what can be referenced using chunk manifests (see zarr-developers/zarr-specs#287).
Definitely can support:
Probably can support:
- .npz files

Maybe can support?

- .mat files (specification documented here)

Probably can't support:

(The checkboxes indicate whether or not a working implementation already exists - going through kerchunk's in-memory format as an intermediate or creating a `ManifestArray` directly.)

cc @jhamman @d-v-b
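Conceptually, the "zarr + chunk manifests" idea this list is built around is just a mapping from zarr chunk keys to byte ranges inside existing files. A minimal sketch, with made-up paths and a hypothetical `fetch_range` callable (this is illustrative, not the virtualizarr or zarr API):

```python
# A chunk manifest: each zarr chunk key points at a byte range inside
# some pre-existing file in object storage, instead of a real chunk file.
manifest = {
    "0.0": {"path": "s3://bucket/data_a.nc", "offset": 4096, "length": 1048576},
    "0.1": {"path": "s3://bucket/data_b.nc", "offset": 4096, "length": 1048576},
}

def fetch_chunk(key, fetch_range):
    """Resolve a chunk key via the manifest and fetch its bytes with a
    single range request. `fetch_range` is any callable taking
    (path, offset, length) - e.g. a wrapper over an HTTP Range GET."""
    entry = manifest[key]
    return fetch_range(entry["path"], entry["offset"], entry["length"])
```

A format qualifies for the lists above exactly when its arrays can be described by a small number of such entries; formats with heavy sub-chunk interleaving or non-range-addressable compression cannot.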