How to disable __uncache parameter? #127
Hey @jampy! Indeed, if we are speaking about its disadvantages, then yes, such an approach causes unnecessary invalidation for assets which already have unique names and are "forever cached" via HTTP.
It's indeed hackish and, more than that, it's dangerous. Please don't do this. By not providing a version you are actually marking your SW unupdatable until you start specifying it again. Anyway, I see your concerns here and it would indeed be good to have an option to disable `__uncache`.
Also, take a look at this issue: webpack/webpack#1315
Hey @jampy, have my answers helped you?
Hi @NekR
I don't understand - why? Since the filenames are already unique themselves, what's the use for `__uncache`?

Having unique file names (not just unique URLs) is an essential part of my deployment strategy. The app will be served from multiple servers (load balancing). Since deployment can take a few minutes, I must guarantee that each version (file bundle) of the app remains available from all servers, no matter which server a client requests a chunk from.

Therefore, I plan to upload all chunks to Amazon S3 or something like that - and leave old versions there for a relatively long time.

For this to work, I must absolutely avoid that an already-deployed chunk gets replaced with different contents under the same URL.

With these prerequisites, having an additional `__uncache` parameter seems redundant to me.
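A minimal sketch of the naming scheme described above, assuming a webpack build (the entry name, output path, and `[contenthash]` placeholder are illustrative, not taken from the actual project):

```javascript
// webpack.config.js (sketch): give every emitted chunk a content-derived name,
// so an already-deployed file on S3 is never replaced with different contents.
module.exports = {
  entry: { main: './src/index.js' },
  output: {
    path: __dirname + '/dist',
    filename: '[name]-bundle-[contenthash].js',      // unique per content
    chunkFilename: '[name]-bundle-[contenthash].js', // async chunks too
  },
};
```

With such names, a URL either serves exactly the bytes it was first deployed with or does not exist, which is what makes "forever" HTTP caching safe.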
I was and I still am using that option. Last time I checked, it seemed that the service worker always requests all URLs; I saw a lot of HTTP 304 responses. However, I tried to reproduce it right now and was not able to. In fact, now it seems that the service worker really only requests files that changed. Maybe this is due to updated NPM packages, I don't know.
I think that issue tells the exact opposite: the hash may change even if the contents remain the same - one more source of invalidation. As a side note, I wonder why the service worker file itself needs a cache-busting parameter at all.
What's dangerous here is that you deploy the SW without its version and that you rely on an undocumented (probably a bug) feature. You may do that, but I don't provide support for such cases.
It's indeed redundant in your case, but I don't see how it harms you. I already said that it probably makes sense to have a way to disable `__uncache`.
There are plenty of reasons why the hash of a file may change. See this: https://github.com/jhnns/webpack-hash-test
This too. You can experience both problems. Do you use webpack-generated hashes in your file names? If so, then you have that revalidation anyway; it's not related to `__uncache`.
You may also want to look at issue #128 to get some info about when and why there could be problems with relying on webpack-generated hashes. The only thing that solves all those problems completely is verifying the actual content hashes.
I never said you should stop what you're doing. You sound upset - if so, my apologies; I'm just trying to understand if `__uncache` is really necessary in my case. My problem with `__uncache` is the unnecessary revalidation it may cause. Maybe I'm wrong, but this means there are only two possibilities in that scenario: either an asset's contents changed, in which case its file name (and thus its URL) changes anyway, or the contents did not change, in which case no revalidation is needed.
You see, in either case `__uncache` doesn't buy me anything. It seems that it's simply redundant for my setup.
Thanks for the link. None of these situations applies to my case, though.
Yes, I'm using webpack-generated hashes in my file names.
You mean sometime in the future, or will it do that now?
What comparison are you talking about?
Ah! That is new to me. I couldn't find that detail anywhere in the docs. :-o Currently, I'm simply using `version: () => null`. Thanks for the link to #128 - very interesting!
Just a bit, maybe because you sounded upset too at that moment.
It's necessary for cases when file names don't change. Since they change in your case, it isn't necessary, but there is no way to disable it right now.
webpack's hash problem is totally unrelated to `__uncache`.
Yes, it's correct and guaranteed as long as you use `offline-plugin`.

For example: 2 files changed. The first one downloads successfully, but the second one fails due to a network problem, hence the SW install fails too. In this situation, when the user visits your site again, the SW will try to download those 2 files again, i.e. repeat the install process.
There is no error; I will. Right, that issue is open and the feature is in development.
When you register a SW, the browser checks the registration URL for the scope; if there is already a SW and it has a different URL, then registration fails. In other words, to change the SW file you need to unregister the current one first. Changing the SW name is a bad-bad practice.
Unfortunately, the SW docs are weak. That's a general suggestion from the SW authors.
This should be documented, then.
Ah, I see. So, using a cache-busting parameter for the SW file itself is out of the question.
Actually I meant the `__uncache` parameter.

Anyway, I'll change my code accordingly and will make sure that I take that detail into account once I start using S3 / the cluster.

Please allow me one more question about `__uncache`: since the SW already knows the correct hashes for all assets, wouldn't it be ideal to use the hash value as the `__uncache` value?

Thanks :-)
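For reference, if the plugin's `version` option accepts webpack's `[hash]` placeholder (as the `offline-plugin` README describes), tying the SW version to the compilation hash could look roughly like this - a sketch, not code from this thread:

```javascript
// webpack.config.js (sketch): derive the SW version from webpack's
// compilation hash, so the SW updates only when the output actually changes.
const OfflinePlugin = require('offline-plugin');

module.exports = {
  // ...entry/output as usual...
  plugins: [
    new OfflinePlugin({
      version: '[hash]', // replaced with the compilation hash at build time
    }),
  ],
};
```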
That's a good idea. Can you open an issue about it, or maybe even send a PR?
It's a build version. It's generated at build time indeed, and it's used to tell the SW/AppCache to perform an update (in case you don't change any file names).
Probably.
Could you please elaborate why it is bad-bad practice? I couldn't find any hints on why that could cause problems.
@jampy https://developers.google.com/web/fundamentals/instant-and-offline/service-worker/lifecycle#avoid_changing_the_url_of_your_service_worker_script I also don't see why someone may need to do it at all.
Another reason: unregistering the SW (which is required to change its name) makes you lose all your Push Subscriptions. In theory they could be restored after the new SW is installed - or not, if the user changed permissions or those permissions expired. Anyway, that's just the wrong way to handle SW updates.
Thanks for the link and the explanation - that helped a lot! :-) I completely overlooked that the "index" page is cached, too. I feared a race condition that clearly doesn't exist. So the single source of truth (for a cached application) is the SW's cache itself.

The only little race condition perhaps is the index file itself (the HTML URL). All my assets have a unique name (they contain a hash), except for the index file. It is theoretically possible that the index file gets updated on the server after a new deployment, while the other cached assets still belong to the previous build. However, being only the initial HTML, I think I can circumvent this issue.
If I understand you correctly, then that isn't an issue. When a new SW arrives and installs, it downloads assets into a completely separate cache, not related to the previous (current) SW. So until the new SW is activated, all assets are still served from the old cache, even though the new ones are already downloaded. Once the new SW is activated, it removes all old caches and uses the new one which it downloaded during the install phase. No race there :-)
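The install/activate flow described above can be sketched like this; the cache name and helper function are hypothetical, and only the pure key-filtering part is runnable outside a browser:

```javascript
// Sketch: each SW version installs into its own cache; on activate, all
// caches belonging to older versions are deleted.
const CACHE_NAME = 'app-cache-v101'; // hypothetical per-build cache name

// Pure helper: which cache keys belong to older SW versions?
function staleCacheKeys(allKeys, currentKey) {
  return allKeys.filter(key => key !== currentKey);
}

// Browser-only wiring (a no-op outside a service worker context):
if (typeof self !== 'undefined' && typeof caches !== 'undefined') {
  self.addEventListener('activate', event => {
    event.waitUntil(
      caches.keys().then(keys =>
        Promise.all(staleCacheKeys(keys, CACHE_NAME).map(k => caches.delete(k)))
      )
    );
  });
}
```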
I see, but for the sake of completeness: I had something like this in my index HTML:

```html
<script type="text/javascript">var ASSET_BUNDLES={"main":"main-bundle-c706cd4dca14133426b3208a452387-en.js","db-engine-worker":"db-engine-worker-bundle-c706cd4dca14133426b3208a452387-en.js","pdfjs-worker":"pdfjs-worker-bundle-c706cd4dca14133426b3208a452387-en.js"};</script>
<script type="text/javascript" src="main-bundle-c706cd4dca14133426b3208a452387-en.js"></script>
```

The `ASSET_BUNDLES` map tells the app which bundle files belong to the current build. Let me illustrate a potential race condition:

1. Version 100 is deployed and the SW starts installing, caching all of version 100's assets.
2. While the SW is still installing, version 101 gets deployed.
3. The SW fetches the index HTML (the only URL without a hash) and receives the version 101 copy.
You see, in such a situation the user gets all assets matching version 100 (since they have unique URLs) but with the index file of version 101 (the only asset that does not have a hashed URL).

In the end I solved this today by moving the `ASSET_BUNDLES` definition into a separate file whose name also contains a hash:

```html
<script type="text/javascript" src="asset-bundles-45900c92ee5e488af14ea4dd02f693f3.js"></script>
<script type="text/javascript" src="main-bundle-c706cd4dca14133426b3208a452387-en.js"></script>
```

...with the `ASSET_BUNDLES` variable now defined in that hashed file. This will make sure that both the bundle map and the bundles themselves always come from the same build.

Of course this causes an additional request. It's not ideal, but I can live with that. The race condition still remains for the rest of the HTML file, but that does not contain anything critical. It also remains sort of a race condition because it may still happen that the cached index references an `asset-bundles` file from a different version.
Of course, there would be no problem at all if the index file had a unique (hashed) URL, too.
Yes, I see the race here. It won't, however, break the app completely, but rather just make it non-operational offline. Also, on the next page navigation the new SW will be downloaded, which will recover the situation. But yeah, I see what you mean here.
Yes, hard to do that right.
The only true way to solve this and #142 is by verifying asset hashes at runtime, i.e. generating a hash of the received content, which is kinda slow. It might not be a problem during the install phase, though.
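The runtime verification idea can be sketched like this; the function names and the expected-hash parameter are hypothetical (not `offline-plugin` API), and only the hex helper runs outside a browser:

```javascript
// Sketch: at install time, verify each downloaded asset against its expected
// SHA-1 before putting it into the cache.

function bufToHex(buffer) {
  return Array.from(new Uint8Array(buffer))
    .map(b => b.toString(16).padStart(2, '0'))
    .join('');
}

// Browser-only: crypto.subtle.digest is the native, asynchronous hash API.
async function fetchAndVerify(url, expectedSha1) {
  const response = await fetch(url);
  const bytes = await response.clone().arrayBuffer();
  const digest = await crypto.subtle.digest('SHA-1', bytes);
  if (bufToHex(digest) !== expectedSha1) {
    throw new Error('Hash mismatch for ' + url);
  }
  return response; // safe to put into the cache
}
```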
FYI, I use client-side hashing a lot with megabyte-sized data and it's quite fast, even on mobile. Also, most browsers have native (asynchronous) support for cryptographic hashes like SHA-1. But yeah, every millisecond counts, that's right.
If you're interested: for a fast pure-JavaScript SHA-1 implementation, see rusha. I'm focused on SHA-1 because I need strong hashes; MD5 or similar would probably be good enough for asset versioning, and it's faster. I'm not asking you to integrate hashes, though (but it would be awesome ;-)
Yeah, I know - this is why I made it generate SHA-1 hashes and not MD5 in #129.
As you said, there is native support for SHA-1 in browsers, so there's no need to calculate MD5 manually.
I'll think about it :-)
Has there been progress on this issue? What I want to do is fetch data from a Google spreadsheet and have it updated every 24h. So the `__uncache` param actually breaks the API call! It would be awesome to be able to turn it off.
Hi, no progress so far.
Hi, this is something I'm interested in too - I'd love a straightforward way to simply disable the extra URL parameter. In my app, it causes an avoidable duplicate fetch of a fairly large file every time the app upgrades. (The page itself requests the file slightly before the SW gets around to it, and the SW can't reuse the browser's cache because of this parameter.) These resources are always versioned with a filename hash, and I use `Cache-Control: immutable` on them anyway.
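The caching policy mentioned here (hashed files are immutable, everything else must revalidate) can be expressed as a small helper when serving the assets; a sketch, where the hash-detecting regex is an assumption about the file-naming scheme:

```javascript
// Sketch: choose a Cache-Control header based on whether the file name
// contains a content hash (and is therefore safe to cache forever).
function cacheControlFor(fileName) {
  const hashed = /-[0-9a-f]{8,}/.test(fileName); // assumed naming convention
  return hashed
    ? 'public, max-age=31536000, immutable' // one year; contents never change
    : 'no-cache';                           // always revalidate (e.g. index.html)
}
```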
`offline-plugin` always adds a `__uncache` parameter to the URLs it loads from the server. My assets already all contain the hash value of their contents in the file name itself, so `__uncache` seems useless to me (or, even worse, probably causes unnecessary invalidation).

The only way I found to stop `offline-plugin` from doing this is by supplying a `version: () => null` option, but that seems slightly hackish to me. Is there a sane way to disable `__uncache`, or am I trying to do something bad here?