While reading https://blog.cloudflare.com/container-platform-preview/, I noticed the following:
"In order to build, chunk, and push these images to the R2 registry, we built a custom CLI tool that we use internally in lieu of running docker build and docker push. This makes it easy to use zstd and split layers into 500 MB chunks, which allows uploads to be processed by Workers while staying under body size limits.
Using our custom build and push tool doubled the speed of image pulls. Our 30 GB GPU images now pull in 4 minutes instead of 8. We plan on open sourcing this tool in the near future."
Is https://github.com/cloudflare/serverless-registry/tree/main/push the implementation mentioned in the blog post above? I'm curious because the blog post says the internal tool "makes it easy to use zstd", while https://github.com/cloudflare/serverless-registry/blob/main/push/README.md lists "Use zstd instead" among its Improvements.
Also, https://github.com/cloudflare/serverless-registry/tree/main/push was initially added on Sep 26 2024, yet the blog post still says "We plan on open sourcing this tool in the near future." So I assume the production-grade tool that uses zstd isn't open-sourced yet?
This tool, if open-sourced, would turn Serverless Registry into a real production-grade offering. Also, it would be really nice to have an option to split layers into even smaller chunks: 200 MB (for the Business plan limit) and 100 MB (for the Pro and Free plans).
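For illustration, here is a minimal sketch of what a plan-aware chunk size could look like; the names and the plan-to-limit mapping below are hypothetical, not part of the existing push tool:

```typescript
// Hypothetical sketch only -- not the actual serverless-registry push code.
// Splits an already zstd-compressed layer blob into chunks that stay under
// a plan-specific upload body-size limit.
import { readFile } from "node:fs/promises";

type Plan = "free" | "pro" | "business" | "enterprise";

// Assumed per-plan request body limits in MB; verify against your account.
const CHUNK_LIMIT_MB: Record<Plan, number> = {
  free: 100,
  pro: 100,
  business: 200,
  enterprise: 500,
};

// Slice the layer into fixed-size chunks no larger than maxChunkBytes.
function splitIntoChunks(layer: Buffer, maxChunkBytes: number): Buffer[] {
  const chunks: Buffer[] = [];
  for (let offset = 0; offset < layer.length; offset += maxChunkBytes) {
    chunks.push(layer.subarray(offset, offset + maxChunkBytes));
  }
  return chunks;
}

async function main(): Promise<void> {
  const plan: Plan = "business";
  const maxChunkBytes = CHUNK_LIMIT_MB[plan] * 1000 * 1000;
  const layer = await readFile("layer.tar.zst"); // hypothetical compressed layer blob
  const chunks = splitIntoChunks(layer, maxChunkBytes);
  console.log(
    `split ${layer.length} bytes into ${chunks.length} chunk(s) of at most ${maxChunkBytes} bytes`,
  );
}

main();
```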
"Also, it would be really nice to have an option to split layers into even smaller chunks: 200 MB (for the Business plan limit) and 100 MB (for the Pro and Free plans)."
From what I've seen in the code, the "current" push tool already dynamically manages the chunk size.
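To illustrate the general idea (a simplified sketch of the concept, not the actual code from the push directory):

```typescript
// Illustrative sketch only -- not the code from the push tool.
// Derive a chunk size from the layer size and a configured maximum, so
// chunks come out evenly sized and never exceed the limit (multipart
// uploads generally want all parts except the last to be the same size).
function pickChunkSize(layerBytes: number, maxChunkBytes: number): number {
  if (layerBytes <= maxChunkBytes) {
    return layerBytes; // small layer: upload as a single chunk
  }
  const chunkCount = Math.ceil(layerBytes / maxChunkBytes);
  return Math.ceil(layerBytes / chunkCount);
}

// Example: a ~1.23 GB layer with a 500 MB cap splits into 3 chunks of ~411 MB.
console.log(pickChunkSize(1_234_000_000, 500_000_000)); // 411333334
```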