Soft Update seems way too aggressive #1250
Agreed. Forcing an update check to happen on every single navigation, with no user-agent flexibility, seems extremely aggressive for large swaths of users. Without significant clarification and convincing, I don't think we intend to implement this. It should either be changed to once-a-day like subresources or changed to be a non-normative note.
It would seem okayish to me if it were not bypassing the cache. What is the reason to bypass it?
My initial thoughts here: …
The switch to revalidate on update by default was made in #893.
Do these developers not know about ServiceWorkerRegistration.update()?
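(For reference, a minimal sketch of triggering such an update check manually from page code; it assumes a service worker is already registered for the page's scope.)

```js
// Minimal sketch: ask the browser to re-check the service worker script now,
// rather than waiting for the next navigation-triggered check.
// Assumes a service worker has already been registered for this scope.
navigator.serviceWorker.getRegistration().then((registration) => {
  if (registration) {
    registration.update().catch((err) => {
      // update() rejects if, for example, the script can't be fetched or parsed.
      console.error('Service worker update check failed:', err);
    });
  }
});
```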
Sure, but if you have already pushed a broken service worker, it's hard to go back and add JS code to call update().

The update-on-navigate algorithm was introduced before I started working on SW. Maybe others can discuss other options that were considered and why this was chosen as the best alternative.
I expect setting …
Just to stress what @wanderview said – this change was made following developer feedback. Defaulting to "all" would be the least disruptive way to wilfully break the spec, but developers may hate you for it. But why? What's the downside to the current model?
You guys added me to this by mistake, please remove me
Forcing unnecessary bandwidth and radio power usage in a mechanism that is supposed to enable zero-bandwidth and zero-radio-power apps.

Consider apps in an app store. I don't know how other app stores work, but on iOS the system only looks for updates opportunistically – when battery is high or the device is plugged in, when on WiFi – and even then it looks "every once in a while", not literally every time you launch the app or navigate around the app's main view. And that's just the check for available updates; it doesn't actually perform the update automatically for many users, and for users with auto-updates it's even more selective about how often it pulls the full update.

App authors accidentally ship broken apps sometimes, too. They generally don't wait for the OS to fix that mistake for them.

If the app wants to fire up the radios and use its users' metered bandwidth itself, that's fine; the system blames apps for data and power usage in a user-visible manner for this reason. But forcing it on all apps?
Given HTTP 304, this is pretty tiny, right? Or are you talking about the times an update is actually available?
It's likely the radio will wake to fetch data within the app. Or do you think there'll be a high percentage of apps that don't need data at all? If you're building offline-first, you're already saving a lot of bandwidth/power over a traditional website with the current defaults, and you can opt into further savings if you understand the trade-offs. With …
Service worker's update model is different from a store's in a few ways: …
@n8schloss can you hint at the install-time size you're looking at for Facebook?
I just wanted to elaborate on this: it's been a source of pain for a while now, and still remains a sore point for Chrome users, since Chrome has yet to update its behavior to bypass HTTP caching by default. Many popular shared hosting environments will enable HTTP caching for static files by default. Here are a few examples of this pain/confusion "in the wild": …
When the Angular team added service worker generation to their CLI, they opted out of the standard service worker update lifecycle entirely, in part to avoid these pain points. (Instead, they explicitly cache-bust the ….)

Opting in, rather than opting out, to HTTP caching of the service worker JS file was a positive change in the specification, and it would be unfortunate if browsers did not converge on that standard behavior.
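(The Angular CLI's actual mechanism isn't reproduced here; the following is only an illustrative sketch of the general cache-busting idea, where BUILD_ID and the script URL are hypothetical values injected at build time.)

```js
// Illustrative sketch only, not the Angular CLI's mechanism: register the
// service worker under a URL that changes with every deploy, so a stale
// HTTP-cached copy of a previous script URL can never be reused.
const BUILD_ID = '2017-12-18-01'; // hypothetical value, replaced on each deploy

if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/service-worker.js?v=' + BUILD_ID);
}
```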
From my perspective, this issue is the main reason we're making service workers opt-in (rather than opt-out) in the next major version of Create React App. It has caused a lot of pain and frustration for our users, and without a solution I don't see SW becoming desirable, or even tolerable, to the mainstream web developer audience.
Since my previous comment wasn't clear to some people who reached out to me: I mean that service workers being cached has been the problem for our users (not the opposite!). They couldn't deploy urgent fixes, and this was extremely hard to debug because of Chrome caching the service worker.

Create React App users are pretty representative of the general web developer community given the popularity of React, so I wouldn't discount their experiences. You can read this thread (one of our most voted and commented issues) to gauge the developer sentiment about service workers, the resistance in large part being driven by this behavior, and the fact that most people learn about it too late, when it bites them.

If this problem didn't exist in Chrome, maybe we'd keep them in, but at this point our users have made it clear that the deployment pitfalls are too painful for them. If Safari adopts this behavior, I think it will bump into the same issues, and potentially hurt service worker adoption. I might be wrong though, and my perspective as a tool maintainer might be one-sided.
Just to clarify: the problem you're referring to is Chrome caching the SW script as per HTTP rules (with a max-age cap of 1 day), right? This was a spec problem. I guess we underestimated how many developers don't have control over their caching headers, and thought the 1-day cap was enough to avoid the foot-gun.

My previous take was "You can fight HTTP caching with a service worker, but it's much better to get your basic HTTP stuff in order before attempting service workers", and while that's still true, we decided to do more to help devs that don't have control over caching headers, which is why we introduced …
Yep.
I think this is right.
It's also not just about control. I think the problem is “it works until it doesn’t”.
So by the time they discover it was due to the cache, they are already frustrated enough with this technology that they never want this experience again. Just read this report: …
And that's on a local machine. Imagine what a nightmare it is to debug something like this when it only happens to some users in production.
Soft Update is basically just an Update as specified here [1]. The description states "The user agent may call this as often as it likes to check for updates.", which seems fine.
My issue is that the "Handle Fetch" section [2] mandates that we call Soft Update when a load is intercepted, if:
a) It is a non-subresource load
or
b) it is a subresource load and 24 hours have passed since the last update check
I think a) is way too aggressive. Basically, every intercepted navigation will cause Update to be called. Also, because the default value for updateViaCache is "imports", by default every update check is going to go to the network.
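(For illustration, updateViaCache is set at registration time; the values below are the ones defined by the spec.)

```js
// updateViaCache controls whether the HTTP cache is consulted when the browser
// checks for an updated service worker:
//   'imports' (default) – bypass the HTTP cache for the main script, but use it
//                         for importScripts() dependencies
//   'all'               – consult the HTTP cache for everything
//   'none'              – bypass the HTTP cache for everything
navigator.serviceWorker.register('/service-worker.js', {
  updateViaCache: 'none',
});
```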
Also note that this behavior is tested by the update-after-navigation-fetch-event.https.html WPT test.
I think we should apply the 24-hour limit to all resource loads, not only subresource ones.
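(A rough sketch of the proposed behavior in user-agent terms; lastUpdateCheckTime is an illustrative stand-in for the registration's stored timestamp, not a spec slot name.)

```js
// Sketch of the proposal, not spec text: throttle Soft Update to at most once
// per 24 hours for *all* intercepted loads, not only subresource loads.
const ONE_DAY_MS = 24 * 60 * 60 * 1000;

function maybeSoftUpdate(registration, softUpdate) {
  const now = Date.now();
  if (now - registration.lastUpdateCheckTime >= ONE_DAY_MS) {
    registration.lastUpdateCheckTime = now;
    softUpdate(registration); // corresponds to the Soft Update algorithm [1]
  }
}
```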
[1] https://w3c.github.io/ServiceWorker/#soft-update-algorithm
[2] https://w3c.github.io/ServiceWorker/#on-fetch-request-algorithm (Steps 20, 21 & 22)