Optional Client Side Memory Caching Support #4950
Replies: 5 comments 4 replies
-
A few weeks ago someone made a guide for using Remix SSR with react-query; does that help?
My solution to this problem has been using an in-process cache like the lru-cache npm package; you can use something like Redis if lru-cache isn't enough.
If your responses are cached, then you can avoid going to your database or API. One pro of having the cached data on the server is that if other users of your app request the same data, they will get the cached response, even if they switch browsers or clear their browser cache.
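A minimal sketch of that approach, assuming a recent lru-cache version and a hypothetical getProjectsFromDb helper:

```js
// app/routes/projects.jsx — sketch only; the cache key, TTL, and getProjectsFromDb are assumptions
import { LRUCache } from "lru-cache";
import { json } from "@remix-run/node";
import { getProjectsFromDb } from "~/models/project.server"; // hypothetical DB helper

// Module-level cache: lives for the lifetime of the server process, shared by all users
const cache = new LRUCache({ max: 500, ttl: 1000 * 60 });

export async function loader({ request }) {
  const key = new URL(request.url).pathname;
  let projects = cache.get(key);
  if (!projects) {
    projects = await getProjectsFromDb(); // only hit the DB on a cache miss
    cache.set(key, projects);
  }
  return json({ projects });
}
```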
You could write a custom hook in your Remix app using useRevalidator: call the hook in one of your components, have it call the server to invalidate/update the cache, and return a new response (sketch below). One thing I do miss about React Query: browser caches like localStorage/IndexedDB are still valuable. I haven't tried the Remix solution described in the react-query docs yet; I actually stopped using react-query nearly everywhere after switching to Remix and the lru-cache package.
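A rough sketch of that invalidation hook, assuming a hypothetical /cache/invalidate resource route that clears the server-side cache:

```js
// useInvalidateServerCache.js — sketch; the /cache/invalidate resource route is an assumption
import { useRevalidator } from "@remix-run/react";

export function useInvalidateServerCache() {
  const revalidator = useRevalidator();

  return async function invalidate(key) {
    // Ask the server to drop the cached entry (hypothetical resource route)
    await fetch("/cache/invalidate", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ key }),
    });
    // Re-run the route loaders so the UI picks up fresh data
    revalidator.revalidate();
  };
}
```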
-
Related proposal that has some discussion of a
-
What happens if the app is offline? Will mutations always fail? In the case of
-
It would be great to have the ability to both return the data from the
-
Calling a loader function client-side AND a loader function server-side as part of a single request, as the OP suggests, makes sense to me, following the flow the OP described above in the code snippet for what happens on a route change.
I see a few potential benefits to this approach.
Implementation-wise, I've been envisioning a "universal" loader that works both client-side and server-side via an ORM/repository data-source layer; see below, plus the sketch after the snippet. But the separate loaders with a next() argument would work too IMO.

```js
export async function universalLoader({ request }) {
  // Sample "universal" ORM method; hits the local store or the DB depending on where it runs
  const data = await Project.includes('tasks').where({ status: 'open' });
  return json({ projects: data });
}
```

Thoughts?
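To make the data-source layer concrete, here's one way a Project repository might branch between environments. This is a sketch under my own assumptions: queryDatabase, the cache shape, and the branching logic are not from the comment above.

```js
// project.js — hypothetical repository; the storage and query layers are assumptions
const isServer = typeof document === "undefined";
const clientCache = new Map(); // in-memory cache, scoped to the current browser tab

async function queryDatabase(relations, filter) {
  // Placeholder: on the server this would be a real DB query (Prisma, knex, ...);
  // on the client it could fetch from the route's server loader instead.
  throw new Error("wire this up to your data source");
}

export const Project = {
  includes(...relations) {
    return {
      async where(filter) {
        const key = JSON.stringify({ relations, filter });
        if (!isServer && clientCache.has(key)) {
          return clientCache.get(key); // client-side hit: no network, no DB
        }
        const rows = await queryDatabase(relations, filter);
        if (!isServer) clientCache.set(key, rows);
        return rows;
      },
    };
  },
};
```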
-
Background
With Remix, we cannot really make good use of client side memory caching libraries such as React Query, Apollo Client, SWR, etc.
Even if we did cache data with these libraries, the loaders on the server will still be executed on revisits.
If we want to make use of that cache, we would have to rely on purely client side data fetching, which is definitely not what I want when using Remix.
There are other encouraged caching strategies, such as server side caching with something like Redis (made even better with edge servers) and HTTP caching via the Cache-Control header (in a CDN and/or the browser). All of these are viable caching strategies.
But except for the browser HTTP cache, the rest will have some latency, and CDN caching is limited to public data. Then why not the browser HTTP cache? It has no delay, and it is also private to the client.
The biggest downside compared to a client side memory cache is that the developer does not have fine-grained "control" over the browser cache. Sure, you can set a short `max-age`, and even use a different `max-age` based on the resource's creation time (one of Ryan's videos on controlling cache headers, which I really liked; roughly sketched below), but what if the age of the data depends on certain random client interactions, where sometimes it can be very short and sometimes long? If it was client side memory caching, we could invalidate that specific cache entry at the right instant, which keeps the data fresh and makes only the necessary network requests.
Another way to control the browser HTTP cache is to use the `Vary` header, by updating its value. But this will invalidate ALL of the browser's HTTP cache. Again, we have a control issue.
Most of the time these caching strategies could be fine, and a browser in-memory cache could be overkill. But we can clearly see that a client side memory cache does have its value, especially later on when we want to optimize the product.
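For reference, that creation-time-based `max-age` approach looks roughly like this in a loader (a sketch; the thresholds and the getPost helper are made up):

```js
// Sketch: derive max-age from how old the resource is, so older data can be cached longer
import { json } from "@remix-run/node";
import { getPost } from "~/models/post.server"; // hypothetical DB helper

export async function loader({ params }) {
  const post = await getPost(params.id);
  const ageInDays = (Date.now() - new Date(post.createdAt).getTime()) / 86_400_000;
  // Older resources change less often, so let the browser cache them longer
  const maxAge = ageInDays > 30 ? 60 * 60 * 24 : 60;
  return json(post, {
    headers: { "Cache-Control": `private, max-age=${maxAge}` },
  });
}
```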
Proposal
An option to add a `clientLoader` in each route component, which executes only on the client, similar to the `loader` from React Router v6.4+. The main difference from the RR loader would be that `clientLoader` receives another argument, `next` (just like in Express). If the `clientLoader` can find valid cached data, it returns the data with `json` (so far, exactly how it works in the RR loader).
The main difference from RR's loader is this: if there is no valid cached data, instead of making a fetch request to an API endpoint, you could simply pass the request on to the server side `loader` using the `next` function. Then the server side `loader` receives the request and can do its normal job (fetch data from the db). There would basically be no changes to the existing APIs; the only difference is that we now have the option to put in a client side memory layer. Same loader, same hooks, no component changes (still use Forms to navigate and mutate).
For data mutation, the order would have to be different: call `next` first so the server action runs, then update or invalidate the client cache from the response.
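A rough sketch of what the proposed route module could look like. This is hypothetical: `clientLoader` and `next` don't exist in Remix today, and the cache here is just an in-memory Map.

```js
// Hypothetical route module under the proposal — not a real Remix API
import { json } from "@remix-run/node";
import { getProjects } from "~/models/project.server"; // hypothetical DB helper

const cache = new Map(); // browser memory; gone on a full refresh, which is fine

// Proposed: runs only in the browser, receives `next` like Express middleware
export async function clientLoader({ request }, next) {
  const key = new URL(request.url).pathname;
  if (cache.has(key)) {
    return json(cache.get(key)); // valid cached data: no request leaves the browser
  }
  const response = await next(); // fall through to the server loader below
  cache.set(key, await response.clone().json());
  return response;
}

// Unchanged server loader: same API, same hooks, same components
export async function loader() {
  return json({ projects: await getProjects() });
}
```

On a revisit with a warm cache nothing leaves the browser; on a cold cache the request flows through `next` to the unchanged server loader.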
I was blown away when I tried the Remix APIs in RR v6.4 together with client side in-memory caching. As the docs say, "React Router is about timing, where React Query is about caching."
It's not Remix's job to do client side caching, but the framework should give developers the option to use it.
This would provide me with all the levers I need, at least for data fetching and mutation!