Listing headers safe only for certain values is a bad idea #313
cc @igrigorik
This statement is not obvious to me. IETF specs define the grammar for particular fields, and it seems reasonable for user agents to validate provided values against those definitions and reject values that do not conform to the spec; this process, by itself, does not increase the risk of security bugs. However, it's probably also true that this mechanism should not be considered a "security mechanism" either.

Which is to say, even if the user agent enforces some checks, the server must still validate all input fields and cannot and should not count on the user agent to "protect" it. So yes, website developers should monitor and enforce their own rules with regard to what values are allowed by the specification or their particular implementation. @sicking, what is your alternative proposal? Are you suggesting that we should skip UA validation entirely and defer to the server?
CORS is entirely a "security mechanism". It defines which headers (and, sadly, header values) can be sent on incoming requests from 3rd parties.

If you want servers to expect arbitrary values for these headers on requests from 3rd parties, and want servers to validate that there are no values with harmful side effects, then the appropriate thing is to whitelist these headers for any value. That way web developers have to expect incoming requests with any value from a 3rd-party attacker, since such an attacker can use `fetch()` to send arbitrary values.

Note that because the CORS spec says that the `Content-Type` header can only carry certain values without a preflight, web developers do count on that restriction. The reason we know is that we had bugs somewhere in our sendBeacon implementation (I think) which enabled sending other content-types, and we got bugs filed against us specifically referencing the requirement on the `Content-Type` header.
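(As a rough illustration of the restriction being discussed; the endpoint https://api.example is hypothetical:)

```ts
// Cross-origin POSTs behave differently depending on the Content-Type value.
async function demo(): Promise<void> {
  // Sent without a preflight: this is one of the three CORS-safelisted
  // Content-Type values (the others being multipart/form-data and text/plain).
  await fetch("https://api.example/submit", {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: "q=1",
  });

  // Triggers an OPTIONS preflight first: application/xml is not safelisted,
  // so the server must approve the header via Access-Control-Allow-Headers
  // before the POST itself is sent.
  await fetch("https://api.example/submit", {
    method: "POST",
    headers: { "Content-Type": "application/xml" },
    body: "<q>1</q>",
  });
}
```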
I guess you could (should?) extend that same statement to any request (3rd party or not), since any script included on the page (regardless of origin), or introduced via an XSS vector, etc., could carry a value with some harmful side effects.
I know from https://lists.w3.org/Archives/Public/public-webappsec/2016May/0034.html that there are at least some security folks wanting to restrict the existing CORS-safelisted request headers (e.g., `Accept`) even further.

The reason we restricted the values for these new headers is that Chrome was already violating the same-origin policy for them.
I'm certainly opposed to implementing this in Gecko. Who did the security review on the Mozilla side? |
FWIW, I'm similarly opposed to adding restrictions to `Accept`, `Accept-Language`, and `Content-Language`.
/cc @johnwilander @mikewest for comments on the webappsec discussion... :) |
@dveditz thoughts? |
Sorry for joining late. We are moving the #382 discussion here.

@sicking I read your position as "Browser restrictions on CORS header values 1) will make servers rely on them and thus result in poor server input validation, and 2) will result in more security bugs filed against browsers." True?

Servers that depend on browser-enforced header values might just as well end up with better server-side input validation, right? For the `Content-Type` header, the current restriction might just as well result in the server comparing against an enum and accepting nothing else.
I'm not sure how actively involved @sicking still is. So I guess we need input from someone else from Mozilla, e.g., @mcmanus or @smaug----. @tyoshino, any thoughts from Google's end on adding more restrictions (to `Accept`, `Accept-Language`, and `Content-Language`)?
I can't speak for Mozilla any longer, so I definitely advise getting input from others. That said, I don't understand @johnwilander's comment. Why would the fact that browsers enforce a preflight for, for example, "application/xml" mean that server authors are more likely to test that the content-type really is one of the whitelisted ones?
There's no explanation of the original issue for which the preflight mechanism was introduced around the `Content-Type` header. If we take the position that this has not been effective, we shouldn't complicate it any more, as it's useless. If we take the position that this has been effective, we could say either:

(1) browsers should extend the protection and require explicit permission for everything, or
(2) browsers should keep providing only partial protection and leave the rest to servers.
(1) would result in banning almost all kinds of cross-site XHR/Fetch and requiring explicit permission via preflight for everything in the future, at the cost of extra RTTs. My gut feeling is that we should take position (2) to share the cost of fighting attacks between browser vendors and service developers, and keep the web lower-latency.
I think requiring a server to concretely list all acceptable values for each field (no wildcards) in the preflight would definitely be effective. Origin policy might reduce its cost. But it is still unrealistic for covering the body and the URL path and query. And then we are again led to (1) or (2) above.
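(For context: today's preflight vocabulary only names headers, not their values, which is the gap this proposal points at. A rough sketch, with a hypothetical handler name and origin:)

```ts
// A server approving a preflight can allowlist header *names* via
// Access-Control-Allow-Headers, but has no way to enumerate acceptable
// *values*: "Content-Type: application/xml" and "Content-Type: text/evil"
// look identical at this stage.
function buildPreflightResponseHeaders(): Headers {
  const h = new Headers();
  h.set("Access-Control-Allow-Origin", "https://app.example");
  h.set("Access-Control-Allow-Methods", "POST");
  h.set("Access-Control-Allow-Headers", "Content-Type, X-Custom");
  return h;
}
```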
Looks like I said too much. I'm not sure how much overlap there would be between things worth blocking and things useful to some people, e.g. limiting `Content-Type` to be a valid MIME type. So, I'd limit my opinion to the increase in complexity and the question of effectiveness. As I said in the last two comments, I'm not sure about the effectiveness of the current CORS preflight.
CORS preflight has impact for all server authors, since it protects them from receiving requests that could cause harmful side effects on the server. Without CORS preflight it is very possible that, if you visited my website, I could send a request to your bank asking the bank to transfer a bunch of money from your bank account to mine. This would be possible without the bank having any server-side CORS logic. In fact, it would be possible because the bank has no server-side CORS logic. We can't expect all websites to suddenly be aware of CORS.
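(A sketch of the attack the preflight blocks; bank.example and the endpoint are hypothetical:)

```ts
// Fired from a visitor's browser by attacker-controlled script. With
// credentials included, the victim's session cookie rides along.
fetch("https://bank.example/transfer", {
  method: "POST",
  credentials: "include",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ to: "attacker", amount: 10000 }),
});
// Because application/json is not a safelisted Content-Type, the browser
// first sends an OPTIONS preflight; a bank with no CORS logic never
// approves it, so the POST is never made.
```

(Note that a form-encoded POST would still go through without a preflight, which is classic CSRF territory and needs server-side defenses either way.)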
In that sentence, by the term "impact" I meant the effect of forcing server developers to carefully investigate the actual headers of received requests, which you doubted in #313 (comment). I didn't mean that CORS is not protecting all server authors. Sorry if my words were unclear.

I could rewrite my text into the following: even if we explain the purpose of the CORS preflight and what it does in the Fetch Standard carefully, not all server authors will take its purpose and effect seriously enough to have the server perform sufficient checks on the received values. This is what I wanted to say in #313 (comment). Isn't that similar to what you meant in #313 (comment)?
If you think that server authors will not fully understand the purpose of CORS preflight, then I think that is an argument for keeping the rules of CORS preflight simple. Whitelisting only certain values of a header is not simple. |
Right. That is why I said "If we take the position that this has not been effective, we shouldn't complicate it any more, as it's useless" in #313 (comment).
My reply came right after your comment, but that doesn't mean I was opposing you. I was just explaining my opinion in response to @annevk's call.
So, in summary, I think triggering a CORS preflight for non-safelisted headers (protecting server developers temporarily and giving them a chance to stop and think) is the best we can do on the browser side, and the rest (implementing the right validation before starting to respond to CORS preflights) should be the responsibility of the server side.
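(A minimal sketch of the server-side share of that responsibility; the helper and its shape are hypothetical:)

```ts
// Validate Content-Type against the values this endpoint actually supports,
// regardless of what the browser allowed through.
const ALLOWED_CONTENT_TYPES = new Set([
  "application/x-www-form-urlencoded",
  "application/json",
]);

function hasAllowedContentType(contentType: string | null): boolean {
  // Strip parameters such as "; charset=utf-8" before comparing.
  const essence = (contentType ?? "").split(";")[0].trim().toLowerCase();
  return ALLOWED_CONTENT_TYPES.has(essence);
}
```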
@sicking I think we are misunderstanding each other. I'm solely talking about simple, non-preflighted CORS request headers. Currently there are three such headers whose values are not restricted beyond the field-content token production, namely `Accept`, `Accept-Language`, and `Content-Language`. This means all web servers have to have solid input validation for these headers, since any page on the web can send them in CORS requests.

Yes, any piece of software capable of sending HTTP requests can send arbitrary headers to servers. But CORS requests can be leveraged in CSRF attacks against intranets or local services that a remote attacker cannot reach from his/her own computer.

We'd like to discuss browser restriction of `Accept`, `Accept-Language`, and `Content-Language` values in simple CORS requests, since RFC 7231 does restrict them, i.e. they are well-defined.
It seems that WebKit has added a set of restrictions for `Accept`, `Accept-Language`, and `Content-Language`.
@mikewest @hillbrad thoughts? I regret not discussing headers during TPAC. We should really come up with a story. |
I made comments in https://bugs.webkit.org/show_bug.cgi?id=166363. |
This should reduce the attack surface of non-preflighted requests quite a bit. Tests: web-platform-tests/wpt#11432. Fixes #382. Closes #313.
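(The commit above is the change restricting which values keep these headers on the safelist. A rough sketch of the `Accept-Language`/`Content-Language` check as it later appears in the Fetch Standard; the real algorithm is byte-based, and `Accept` has its own, different exclusion list:)

```ts
// Simplified: values are limited to 0-9, A-Z, a-z, space, and * , - . ; =
// — anything else knocks the header off the safelist and forces a preflight.
function isSafelistedLanguageValue(value: string): boolean {
  return /^[0-9A-Za-z *,\-.;=]*$/.test(value);
}
```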
I just noticed that the headers `DPR`, `Downlink`, `Save-Data`, `Viewport-Width`, and `Width` have been listed as "safe" headers only for certain values. I think this is a bad idea.

Having only certain values be considered safe introduces complexity in a security API. This increases the risk that something will go wrong and that security bugs will ensue.
Consider for example https://bugs.chromium.org/p/chromium/issues/detail?id=490015, a recent case where such complexity has led to a security bug that has remained in Chrome for over a year, and it is unclear whether it will ever get fixed.
Another reason not to consider these headers safe only for certain values is that the set of values is bound to change. As the specifications for those headers evolve over time, so will the set of values that parse successfully.
Are we expecting website developers to constantly monitor these specifications and adjust their server-side code as it gets exposed to new request header values?
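(A hypothetical sketch of that burden: a server mirroring the browser's "safe values" check for `Save-Data` has to track the token set as the Client Hints drafts evolve. The parsing below is an assumption for illustration, not the normative grammar:)

```ts
// Tokens known today; a future draft adding new ones silently turns valid
// requests into rejected ones here, which is exactly the monitoring cost
// the question above is about.
const KNOWN_SAVE_DATA_TOKENS = new Set(["on"]);

function isExpectedSaveData(value: string): boolean {
  return value
    .split(";")
    .every((token) => KNOWN_SAVE_DATA_TOKENS.has(token.trim().toLowerCase()));
}
```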