Hello, we are currently experimenting with Compute@Edge. VCL provides a lot of features to control the cache, but the Rust SDK only exposes a few overrides such as `req.set_ttl(60)`. Is that all? Our use case is to transform POST to GET with the ability to configure the response cache. At Cloudflare we are able to do that with the Cache API; that store can be configured with simple cache headers.

Primary use case: cache the POST request on the edge without transforming the origin request to GET, while configuring the cache for the stored request.

According to the documentation I found:

Is there any ETA?

Also, in VCL we can manipulate `req` based on conditions of `beresp`. How is this done with the Rust SDK?
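For context, a minimal sketch of what these request-level overrides look like today, assuming the `fastly` crate's `Request` API (`set_method`, `set_ttl`, `send`); the backend name `origin` and the POST-to-GET rewrite are illustrative, not an official pattern:

```rust
use fastly::http::Method;
use fastly::{Error, Request, Response};

#[fastly::main]
fn main(mut req: Request) -> Result<Response, Error> {
    // Illustrative only: rewrite the POST into a GET so the request falls
    // under the default caching rules, then apply the TTL override that the
    // SDK exposes today.
    if req.get_method() == Method::POST {
        req.set_method(Method::GET);
    }
    req.set_ttl(60);

    // Unlike VCL, the backend response comes back as an ordinary value, so it
    // can be inspected here (roughly the role `beresp` plays in vcl_fetch)
    // before being returned to the client. "origin" is a hypothetical backend
    // name configured on the service.
    let beresp = req.send("origin")?;
    Ok(beresp)
}
```

Because Compute@Edge code is imperative, there is no separate fetch hook: any decision that VCL would make on `beresp` happens inline after `send` returns.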
Fine-grained cache control APIs for Compute@Edge are something we are currently working on with great urgency. While I cannot offer a public timeline for availability today, you would be welcome to participate if you'd like to be involved in early testing!

There are some other cache overrides in the Rust SDK, including `stale_while_revalidate` and `surrogate_key` settings; they might not be applicable to your use case, but they are other noteworthy axes of cache behavior that can be controlled today.
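For illustration, a minimal sketch of how those two overrides can be applied to a request, assuming the `fastly` crate exposes `set_stale_while_revalidate` and a `set_surrogate_key` method that accepts a header value (the key name here is made up):

```rust
use fastly::http::header::HeaderValue;
use fastly::Request;

// Sketch: apply the stale-while-revalidate and surrogate-key overrides
// before the request is sent to the backend.
fn apply_cache_overrides(req: &mut Request) {
    // Allow a stale object to be served for up to 30s while revalidating.
    req.set_stale_while_revalidate(30);
    // Tag the cached object so it can be purged by surrogate key;
    // "products" is purely illustrative.
    req.set_surrogate_key(HeaderValue::from_static("products"));
}
```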
I hope this information helps, and look forward to future updates on this front. ❤️
Hi, thanks for the response. We would be happy to test it. I hope the long-term goal is to have the same flexibility on the edge as with VCL. As a workaround, we use service chaining to get the benefits of both approaches:

- compute@edge: my Rust service
- delivery_service: the classic Fastly CDN (VCL) service

I have some questions, though.
I still do not understand the effect of the overrides when using service chaining, for example `surrogate_key`. Is it possible to chain the keys so we can purge them all at once from the compute@edge service? I tried, but it did not work, even though I respected the order of the keys (compute@edge -> delivery_service).

Besides that, the only way to make surrogate keys work on the Fastly delivery_service was to use `req.set_pass(true)` on the compute@edge service; otherwise it seems that a default caching policy is applied. Is that correct?
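To make the setup concrete, a rough sketch of the pass-through arrangement described above, assuming the chained VCL service is configured as a backend named `delivery_service` on the Compute@Edge service (the backend name is illustrative):

```rust
use fastly::{Error, Request, Response};

#[fastly::main]
fn main(mut req: Request) -> Result<Response, Error> {
    // Bypass caching in the Compute@Edge service so the chained classic
    // (VCL) delivery service applies its own caching policy and
    // surrogate-key handling instead of an implicit default.
    req.set_pass(true);

    // "delivery_service" is an illustrative backend name for the chained
    // classic Fastly CDN service.
    Ok(req.send("delivery_service")?)
}
```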