
Listing headers safe only for certain values is a bad idea #313

Closed
sicking opened this issue Jun 2, 2016 · 24 comments
Labels
security/privacy There are security or privacy implications

Comments

@sicking

sicking commented Jun 2, 2016

I just noticed that the headers DPR, Downlink, Save-Data, Viewport-Width and Width have been listed as "safe" headers only for certain values.

I think this is a bad idea. Having only certain values be considered safe introduces complexity in a security API. This increases the risk that something will go wrong and that security bugs will ensue.

Consider for example https://bugs.chromium.org/p/chromium/issues/detail?id=490015, a recent case where such complexity led to a security bug that has remained in Chrome for over a year, with no clear indication that it will ever get fixed.

Another reason not to consider these headers safe only for certain values is that the set of values is bound to change over time. As the specifications for those headers evolve, so will the set of values that parse successfully.

Are we expecting website developers to constantly monitor these specifications and adjust their server-side code as it gets exposed to new request header values?
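
To make the value-dependent behavior concrete, here is a minimal sketch of how the same header name could either avoid or trigger a preflight depending on its value ("https://api.example" is a placeholder endpoint; the exact safelisted grammars live in the client hints drafts and are not shown here):

```ts
// Illustrative sketch of value-dependent safelisting: the same header name
// is treated differently depending on whether its value parses.

// A value matching Save-Data's grammar: sent without a CORS preflight.
await fetch("https://api.example/data", {
  headers: { "Save-Data": "on" },
});

// A value outside the grammar: the browser must first send an OPTIONS
// preflight, and only proceeds if the server opts in.
await fetch("https://api.example/data", {
  headers: { "Save-Data": "on; unexpected" },
});
```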

@mnot
Member

mnot commented Jun 3, 2016

cc @igrigorik

@igrigorik
Member

> I just noticed that the headers DPR, Downlink, Save-Data, Viewport-Width and Width have been listed as "safe" headers only for certain values.
>
> I think this is a bad idea. Having only certain values be considered safe introduces complexity in a security API. This increases the risk that something will go wrong and that security bugs will ensue.

This statement is not obvious to me. IETF specs define the grammar for particular fields, and it seems reasonable for user agents to validate provided values against those definitions and reject values that do not conform to the spec -- this process, by itself, does not increase the risk of security bugs. However, it's probably also true that this mechanism should not be considered a "security mechanism" either:

  • Many field definitions allow effectively arbitrary data to be sent
  • Some fields are stricter today, but those same fields may change in the future to allow other values

Which is to say, even if the user agent enforces some checks, the server must still validate all input fields; it cannot and should not count on the user agent to "protect it". So yes, website developers should monitor and enforce their own rules with regard to which values are allowed by the specification or by their particular implementation.


@sicking what is your alternative proposal? Are you suggesting that we should skip UA validation entirely and defer to the server?
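
For concreteness, the kind of UA-side grammar check under discussion might look like the following sketch (assuming, for illustration, that DPR's grammar is a positive decimal number; this is not the spec's algorithm):

```ts
// Sketch of a UA-side value check for DPR, assuming the value must be a
// positive decimal number. A real user agent would validate before deciding
// whether the header counts as safelisted.
function isValidDprValue(value: string): boolean {
  return /^\d+(\.\d+)?$/.test(value) && parseFloat(value) > 0;
}

isValidDprValue("2");       // true  -> value conforms
isValidDprValue("2.5");     // true
isValidDprValue("2, evil"); // false -> value rejected
```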

@sicking
Author

sicking commented Jun 3, 2016

CORS is entirely a "security mechanism". It defines which headers (and, sadly, header values) can be sent on incoming requests from third parties.

If you want servers to expect arbitrary values for these headers on requests from third parties, and want servers to validate that no values have harmful side effects, then the appropriate thing is to whitelist these headers for any value.

That way, web developers have to expect incoming requests with any value from a third-party attacker, since such an attacker can use fetch() or XHR to submit such requests.

Note that because the CORS spec says that the Content-Type header can currently only contain text/plain, application/x-www-form-urlencoded, or multipart/form-data, we know of authors who have written server-side code that relies on these being the only content types they will receive from third parties.

We know this because we had a bug somewhere in our sendBeacon implementation (I think) which enabled sending other content types, and we got bugs filed against us specifically referencing the requirement on the Content-Type header in the spec. Sadly I don't remember the details, nor am I able to find the bug.
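
The fragile server-side pattern described above might look like this sketch (a hypothetical handler; the point is that it silently assumes the browser's safelist is airtight):

```ts
// Sketch of the fragile pattern: server code that assumes third parties can
// only send the three CORS-safelisted content types without a preflight,
// and gates its CSRF protection on that assumption.
const SAFELISTED_TYPES = new Set([
  "text/plain",
  "application/x-www-form-urlencoded",
  "multipart/form-data",
]);

function requiresCsrfToken(contentType: string): boolean {
  const essence = contentType.split(";")[0].trim().toLowerCase();
  // Brittle: a browser bug (like the sendBeacon one mentioned above) that
  // lets other content types through without a preflight silently bypasses
  // the CSRF token check for those requests.
  return SAFELISTED_TYPES.has(essence);
}
```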

@igrigorik
Member

> web developers have to expect incoming requests with any value from a third-party attacker, since such an attacker can use fetch() or XHR to submit such requests

I guess you could (should?) extend that same statement to any request (third party or not), since any script included on the page (regardless of origin), or introduced via an XSS vector, etc., could carry a value with harmful side effects.

@annevk @mnot any thoughts on this one?

@annevk
Member

annevk commented Jun 6, 2016

I know from https://lists.w3.org/Archives/Public/public-webappsec/2016May/0034.html that there are at least some security folks who want to restrict the existing CORS-safelisted request-headers (e.g., Accept) to some limited set of values too.

The reason we restricted the values for these new headers is that Chrome was already violating the same-origin policy for them, with <img> et al, but only with valid values. So letting fetch() do the same for valid values seemed reasonable and this passed security review.

@sicking
Author

sicking commented Jun 7, 2016

I'm certainly opposed to implementing this in Gecko. Who did the security review on the Mozilla side?

@sicking
Author

sicking commented Jun 7, 2016

FWIW, I'm similarly opposed to adding restrictions to Accept.

@igrigorik
Member

/cc @johnwilander @mikewest for comments on the webappsec discussion... :)

@annevk
Member

annevk commented Jul 22, 2016

@dveditz thoughts?

@johnwilander

Sorry for joining late. We are moving the #382 discussion here.

@sicking I read your position as "Browser restrictions on CORS header values 1) will make servers rely on them and thus result in poor server input validation, and 2) will result in more security bugs filed against browsers." True?

Servers that depend on browser-enforced header values might just as well end up with better server-side input validation, right? For the Content-Type header, the current restriction might just as well result in the server comparing the value against an enum and accepting nothing else.
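
The enum-style validation described here might look like this sketch (a hypothetical handler, not code from any server mentioned in the thread):

```ts
// Sketch of the stricter outcome: the server compares Content-Type against
// an enum of expected values and accepts nothing else, regardless of what
// the browser did or did not enforce.
const ACCEPTED_TYPES = ["application/x-www-form-urlencoded", "text/plain"];

function contentTypeStatus(contentType: string | undefined): number {
  const essence = (contentType ?? "").split(";")[0].trim().toLowerCase();
  // 415 Unsupported Media Type for anything outside the enum.
  return ACCEPTED_TYPES.includes(essence) ? 200 : 415;
}
```
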

@annevk
Member

annevk commented Sep 8, 2016

I'm not sure how actively involved @sicking still is. So I guess we need input from someone else from Mozilla, e.g., @mcmanus or @smaug----. @tyoshino any thoughts from Google's end on adding more restrictions (to Accept, Accept-Language, etc.)?

@sicking
Author

sicking commented Sep 8, 2016

I can't speak for Mozilla any longer, so I definitely advise getting input from others.

That said, I don't understand @johnwilander's comment. Why would the fact that browsers enforce a preflight for, for example, "application/xml" make server authors more likely to test that the content type really is one of the whitelisted ones?

@tyoshino
Member

tyoshino commented Sep 8, 2016

IIUC, there is no explanation yet, around the Access-Control-Allow-Headers definition in the Fetch Standard, of the original issue the preflight mechanism was introduced to address. Even if such educational text existed, the CORS preflight basically only has an impact on developers curious and careful enough to think about the purpose of the mechanism and what they should do. I'm not sure this kind of "heads up" has been very effective.

If we take the position that it has not been effective, we shouldn't complicate it any further, since it's useless.

If we take the position that it has been effective, we could say either:

  1. Let's address new attack vectors in the web browser
  2. Developers have been well educated for 8 years; now they're responsible for fixing issues themselves

(1) would result in banning almost all kinds of cross-site XHR/Fetch and requiring explicit permission via preflight for everything in the future, at the cost of an extra RTT.

My gut feeling is that we should take position (2), to share the cost of fighting attacks between browser vendors and service developers, and keep latency down.

@tyoshino
Member

tyoshino commented Sep 8, 2016

I think requiring a server to concretely list all acceptable values for each field (no wildcard) in the preflight would definitely be effective. Origin Policy might reduce its cost. But it's still not realistic for covering the body, URL path, and query.

But then we are again led to (1) or (2) above.
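
No such per-header value announcement exists in CORS; purely as a sketch of the idea, a preflight responder could enumerate acceptable values like this (the Access-Control-Allow-Header-Values header below is invented for illustration and appears in no spec):

```ts
import { createServer } from "node:http";

// Hypothetical preflight responder that enumerates acceptable values per
// header, as suggested above. "Access-Control-Allow-Header-Values" is an
// invented header name; the CORS protocol has no such mechanism.
createServer((req, res) => {
  if (req.method === "OPTIONS") {
    res.writeHead(204, {
      "Access-Control-Allow-Origin": "https://app.example",
      "Access-Control-Allow-Headers": "Content-Type",
      "Access-Control-Allow-Header-Values": "Content-Type=application/json",
    });
    res.end();
    return;
  }
  res.writeHead(404);
  res.end();
}).listen(8080);
```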

@tyoshino
Member

tyoshino commented Sep 8, 2016

> banning almost all kinds of cross-site ...

Looks like I said too much.

I'm not sure how much overlap there would be between the things we'd want to block and the things that are useful for some people.

E.g., limiting Content-Type to valid media types wouldn't hurt any good usage.

So, I'd reframe my opinion as being about the trade-off between added complexity and effectiveness. As I said in my last two comments, I'm not sure about the effectiveness of the current CORS preflight.

@sicking
Author

sicking commented Sep 9, 2016

> the CORS preflight basically only has an impact on developers curious and careful enough

The CORS preflight has an impact on all server authors, since it protects them from receiving requests that could cause harmful side effects on the server.

Without the CORS preflight, it is very possible that if you visited my website, I could send a request to your bank asking it to transfer a bunch of money from your account to mine.

This would be possible without the bank having any server-side CORS logic. In fact, it would be possible precisely because the bank has no server-side CORS logic. We can't expect all websites to suddenly be aware of CORS.
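
To spell out that scenario, this is the cross-site request the preflight stops (hypothetical URLs; the bank server knows nothing about CORS):

```ts
// Attacker page script. The non-safelisted Content-Type forces a preflight;
// a bank with no CORS logic never answers it, so the browser never sends
// the actual POST. Without the preflight requirement, this request would go
// through with the victim's cookies attached.
fetch("https://bank.example/transfer", {
  method: "POST",
  credentials: "include",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ to: "attacker", amount: 100000 }),
});
```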

@tyoshino
Member

tyoshino commented Sep 9, 2016

In that sentence, by "impact" I meant the effect of pushing server developers to carefully inspect the headers of the actual requests they receive, which you doubted in #313 (comment). I didn't mean that CORS doesn't protect all server authors.

Sorry if my words were unclear. I could rewrite my text as follows:

> Even if we carefully explain the purpose of the CORS preflight and what it does in the Fetch Standard, not all server authors would take that purpose and effect seriously enough to have the server perform sufficient checks on the received values.

This is what I meant in #313 (comment). Isn't that similar to what you meant in #313 (comment)?

@sicking
Author

sicking commented Sep 9, 2016

If you think that server authors will not fully understand the purpose of CORS preflight, then I think that is an argument for keeping the rules of CORS preflight simple. Whitelisting only certain values of a header is not simple.

@tyoshino
Member

tyoshino commented Sep 9, 2016

Right. That's why I said "if we take the position that it has not been effective, we shouldn't complicate it any further, since it's useless" in #313 (comment).

@tyoshino
Member

tyoshino commented Sep 9, 2016

My reply came right after your comment, but that doesn't mean I was opposing you. I was just giving my opinion in response to @annevk's call.

@tyoshino
Member

tyoshino commented Sep 9, 2016

So, in summary:

  • Regarding effectiveness in guiding server authors to study attack vectors carefully:
    • I doubt that whitelisting values in client-side enforcement logic is very effective.
    • We could require the server side to announce a whitelist of values for each header. That would be effective, but I don't think it's a realistic approach.
    • We could just make value validation in the UA stricter (either only for cross-origin requests, or for both same- and cross-origin) so that the UA throws for header values considered bad, without sending the request. This is also not realistic, for compatibility reasons.
  • Regarding effectiveness in protecting vulnerable servers from attacks in place of their authors:
    • Even if we make the CORS preflight conditions stricter, it would only be a temporary fix. Unless developers are forced to implement the right checks before making their servers emit "Access-Control-Allow-Headers", some of them will implement only the "Access-Control-Allow-Headers" part.

So, I think triggering a CORS preflight for non-safelisted headers (protecting server developers temporarily and giving them a chance to stop and think) is the best we can do on the browser side; the rest (implementing the right validation before starting to respond to CORS preflights) should be the server side's responsibility.

@johnwilander

@sicking I think we are misunderstanding each other. I'm solely talking about simple, non-preflighted CORS request headers.

Currently there are three such headers whose values are not restricted beyond the field-content token production, namely Accept, Accept-Language, and Content-Language. This means all web servers have to have solid input validation for these headers, since any page on the web can send them in CORS requests.

Yes, any piece of software capable of sending HTTP requests can send arbitrary headers to servers. But CORS requests can be leveraged in CSRF attacks against intranets or local services that a remote attacker cannot reach from their own computer.

We'd like to discuss browser restrictions on Accept, Accept-Language, and Content-Language values in simple CORS requests, since RFC 7231 does restrict them, i.e., they are well-defined.
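
As a sketch of what such a restriction could look like (an editor's approximation of the direction the thread's closing commits eventually took, not spec text), a check limiting Accept-Language / Content-Language values to a small byte alphabet:

```ts
// Sketch of a value restriction for Accept-Language / Content-Language,
// assuming a conservative alphabet: letters, digits, and a few delimiters.
// This approximates the eventual safelist checks; it is not quoted from
// the Fetch Standard.
function isRestrictedLanguageValue(value: string): boolean {
  return /^[0-9A-Za-z *,\-.;=]+$/.test(value);
}

isRestrictedLanguageValue("en-US,en;q=0.9");  // true
isRestrictedLanguageValue("en\r\nX-Evil: 1"); // false -> header smuggling
```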

@annevk
Member

annevk commented Jan 17, 2017

It seems that WebKit has added a set of restrictions for Accept, Accept-Language, and Content-Language:

@mikewest @hillbrad thoughts? I regret not discussing headers during TPAC. We should really come up with a story.

@johnwilander

I made comments in https://bugs.webkit.org/show_bug.cgi?id=166363.

annevk added commits that referenced this issue on May 25, May 30 (twice), Jun 18, and Aug 16, 2018:

This should reduce the attack surface of non-preflighted requests quite a bit.

Tests: web-platform-tests/wpt#11432.

Fixes #382. Closes #313.