Replies: 2 comments
-
Not sure why this hasn't been discussed further - we came across the need for better caching this month as well. We're generating images and data asynchronously when a page loads. This isn't as much of an issue when statically building the site, but it does make the build process longer, as well as local development. Having a way to say "based on the input, tell me if we need to recalculate anything - otherwise, use the output from last time" would be very helpful and save a lot of time and resources (our website now takes multiple minutes to build, even with the image cache being reused).
-
This is definitely a valuable thing to have, and a great proposal in terms of its interface. I would suggest looking into integrating a well-known, standardized cache library instead. For example, NestJS uses Cacheable, which is a very mature implementation on top of the (already mature) keyv cache adapters for different stores. It not only gives us all the adapters for different stores out of the box, but also has built-in stampede prevention, L1/L2 caching, cache nesting, revalidation, and many other niceties. NestJS relies on it a lot, and has managed to "wrap" it in different ways to offer composite features like queues, remote procedure calls, blob caching, etc. It is also becoming a popular replacement for unstable_cache in the Next.js ecosystem, as their cache is crap. I use it with an in-memory LRU L1 cache and Upstash Redis as the L2 cache, for both data and component/page caching. My suggestion isn't to limit our interfaces in any way, but rather to take what is already there as a solid base for writing our own wrappers, much like Nest did.
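For concreteness, a minimal sketch of the layered setup described here, assuming the documented `Cacheable` class from `cacheable` and the `@keyv/redis` adapter; the Redis URL, TTL, and the `getOrCompute` helper are placeholders rather than library APIs:

```ts
import { Cacheable } from 'cacheable';
import KeyvRedis from '@keyv/redis';

// L1 is the library's default in-memory primary store; L2 is Redis via the keyv adapter.
const cache = new Cacheable({
  secondary: new KeyvRedis('redis://localhost:6379'), // placeholder connection string
  ttl: 60 * 60 * 1000, // default entry TTL: 1 hour, in milliseconds
});

// Illustrative "compute once, then serve from cache" helper.
async function getOrCompute<T>(key: string, compute: () => Promise<T>): Promise<T> {
  const hit = (await cache.get(key)) as T | undefined;
  if (hit !== undefined) return hit;
  const value = await compute();
  await cache.set(key, value);
  return value;
}
```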
-
Summary
Create a cache interface within Astro to simplify the development of integrations and user space components that need to cache binary blobs / JSON values.
Background & Motivation
At the moment, there is no way in user space to effectively cache content that is costly to generate. One common use case is creating Astro endpoints that generate OpenGraph share images.
Integrations are also encouraged to use `config.cacheDir` to cache the results of expensive operations / transformations, but there is no guidance or standard way of doing so.

`astro:cache` would be a new core component available to both integrations and user space components. This integration would provide two adapters by default:

- `NodeFsCacheAdapter`: the default configured provider, which interacts with `config.cacheDir`. Used at build time, and in user space for `static` targets.
- `InMemoryCacheAdapter`: a cache provider that stores values in memory. Compatible with all targets/runtimes.

Third-party integrations would be able to provide and expose their own cache adapters (for example: an AWS S3 cache adapter, a Redis cache adapter, etc.).
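To make the proposed surface easier to discuss, here is a hedged sketch of what the adapter contract and the `getCache` / `getValueCache` facades could look like. Apart from the names already used in this proposal (`NodeFsCacheAdapter`, `InMemoryCacheAdapter`, `getCache`, `getValueCache`), every method name and signature below is an assumption, not a settled API:

```ts
// Hypothetical shape for an astro:cache adapter; method names are illustrative.
export interface CacheAdapter {
  get(namespace: string, key: string): Promise<Uint8Array | undefined>;
  set(namespace: string, key: string, value: Uint8Array): Promise<void>;
  delete(namespace: string, key: string): Promise<void>;
}

// Facade over an adapter for binary blobs, scoped to a single namespace.
export interface Cache {
  get(key: string): Promise<Uint8Array | undefined>;
  set(key: string, value: Uint8Array): Promise<void>;
}

// Facade for JSON-serializable values, layered on top of the blob cache.
export interface ValueCache {
  get<T>(key: string): Promise<T | undefined>;
  set<T>(key: string, value: T): Promise<void>;
}

// Entry points that resolve the adapter configured for a given namespace.
export declare function getCache(namespace: string): Cache;
export declare function getValueCache(namespace: string): ValueCache;
```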
Possible Use Cases
- `astro:assets` – unifying the caching mechanism of `astro:assets` onto `astro:cache`.
- Integrations using `canvaskit-wasm` could now cache their results for faster build times.

Goals
Example
`astro:cache` would export the `getCache` and `getValueCache` entry points, along with the default adapters, for use in user space and in integrations.

Here is an example of using a cache in user space. Imagine an endpoint that generates OpenGraph share images from content collection entries. Automatically caching the result of the generation during a build becomes trivial.
`[slug]-share.png.ts`
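A hedged reconstruction of what such an endpoint might look like, assuming the `getCache` facade sketched earlier; the `og-images` namespace, the `blog` collection, and the `renderShareImage` helper are all hypothetical placeholders:

```ts
// src/pages/[slug]-share.png.ts (illustrative sketch, not the original snippet)
import type { APIRoute } from 'astro';
import { getCollection, getEntry } from 'astro:content';
// Hypothetical import: astro:cache is the module proposed in this RFC.
import { getCache } from 'astro:cache';

const cache = getCache('og-images');

export async function getStaticPaths() {
  const posts = await getCollection('blog');
  // `slug` vs `id` depends on the collection type / Astro version.
  return posts.map((post) => ({ params: { slug: post.slug } }));
}

export const GET: APIRoute = async ({ params }) => {
  const slug = params.slug!;

  // Serve the cached PNG if the image was already generated previously.
  const cached = await cache.get(slug);
  if (cached) {
    return new Response(cached, { headers: { 'Content-Type': 'image/png' } });
  }

  // Otherwise generate it, store it, and serve it.
  const entry = await getEntry('blog', slug);
  const png = await renderShareImage(entry);
  await cache.set(slug, png);
  return new Response(png, { headers: { 'Content-Type': 'image/png' } });
};

// Placeholder for the expensive rendering step (e.g. canvaskit-wasm); the real
// implementation is whatever the endpoint would do without caching.
async function renderShareImage(_entry: unknown): Promise<Uint8Array> {
  return new Uint8Array(); // stand-in for actual PNG bytes
}
```

With a `NodeFsCacheAdapter` backed by `config.cacheDir`, the expensive render would only run for entries whose images are not already on disk from a previous build.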
Astro would provide two adapters by default: a `nodeFsCacheAdapter` and a `memoryCacheAdapter`.

In `astro.config.mjs`, adapters could be configured per namespace. `getCache` and `getValueCache` would return the proper cache provider facade, with the right adapter, based on the configuration.
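A hedged sketch of what per-namespace configuration could look like; the `cache` config key, the adapter factory import paths, and the third-party Redis adapter are assumptions made for illustration only:

```js
// astro.config.mjs (illustrative sketch; `cache` is not an existing Astro option)
import { defineConfig } from 'astro/config';
// Hypothetical import path for the default adapters proposed above.
import { nodeFsCacheAdapter, memoryCacheAdapter } from 'astro/cache';
// Hypothetical third-party adapter package.
import redisCacheAdapter from '@example/astro-redis-cache';

export default defineConfig({
  cache: {
    // Fallback adapter for any namespace without an explicit entry.
    default: nodeFsCacheAdapter(),
    namespaces: {
      'og-images': memoryCacheAdapter(),
      'astro:assets': redisCacheAdapter({ url: process.env.REDIS_URL }),
    },
  },
});
```

`getCache('og-images')` would then hand back a facade wired to the in-memory adapter, while any other namespace would fall back to the filesystem adapter.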