
Bucket modifications not removed #4

Open
JacobReynolds opened this issue Oct 13, 2017 · 5 comments

@JacobReynolds

All of the write-permission checks modify the bucket without reverting the modifications afterward. This results in many bucket configurations being changed and overwritten, which makes this extension unusable, as it will destroy any scanned buckets that have public write permissions.

```python
self.boto3_client.put_bucket_cors(
```
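For context, the overwrite happens because S3's PutBucketCors operation replaces the bucket's entire CORS configuration rather than appending to it. A minimal sketch of this kind of probe (illustrative names, not the extension's actual code; `s3` stands for a boto3 S3 client, e.g. `boto3.client('s3')`):

```python
# Hypothetical probe rule; PutBucketCors REPLACES the bucket's whole
# CORS configuration with this, wiping any existing rules.
TEST_CORS_CONFIG = {
    "CORSRules": [{
        "AllowedMethods": ["GET"],
        "AllowedOrigins": ["https://example.com"],
    }]
}

def probe_put_cors(s3, bucket_name):
    """Return True if the bucket's CORS configuration could be overwritten.

    Note the side effect: on success, the previous configuration is gone.
    """
    try:
        s3.put_bucket_cors(Bucket=bucket_name,
                           CORSConfiguration=TEST_CORS_CONFIG)
        return True
    except Exception:  # boto3 raises botocore.exceptions.ClientError
        return False
```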

@0xSobky
Copy link
Collaborator

0xSobky commented Oct 14, 2017

Hi, Jacob!

Thanks for pointing this out. However, this is a very tricky issue to fix. First, let's get a few things cleared up:

  1. The ability to overwrite a bucket's configurations does not entail the ability to read them. For instance, the s3:PutBucketCORS permission only grants the ability to overwrite the bucket's CORS configuration; reading the current configuration requires s3:GetBucketCORS.
  2. We do avoid testing for DELETE operations (e.g., s3:DeleteObject) and make best efforts to avoid potentially destructive tests like changing the ACL of the bucket when that's possible:
    if error.error_code == 'UnresolvableGrantByEmailAddress':
  3. Modifying a bucket's configurations does NOT "destroy" it or affect the data stored on it.
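The ACL case in point 2 works because S3 validates grantees before applying a new ACL. A sketch of that non-destructive probe, as I read the quoted line (illustrative only, not the extension's exact code):

```python
def probe_put_acl(s3, bucket_name):
    """Probe s3:PutBucketAcl without actually changing the ACL.

    Granting to an email address S3 cannot resolve makes the request
    fail AFTER the permission check but BEFORE the ACL is applied, so
    an 'UnresolvableGrantByEmailAddress' error proves write access
    while leaving the real ACL untouched. `s3` stands for an S3 client.
    """
    try:
        s3.put_bucket_acl(
            Bucket=bucket_name,
            # Hypothetical grantee chosen so it cannot resolve:
            GrantRead='emailAddress="[email protected]"',
        )
        return True  # ACL was actually changed; should not happen
    except Exception as error:
        # Mirrors the quoted check; boto exposes the S3 error code
        # on the exception object.
        return getattr(error, "error_code", None) == \
            "UnresolvableGrantByEmailAddress"
```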

That being said, undoing these modifications would not be possible in most cases. So we are left with these options:

  1. Attempt to modify the configurations restrictively (which might deny allowed operations).
  2. Attempt to modify the configurations permissively (which might allow disallowed operations and create even more security issues).
  3. Attempt to read the current configurations before overwriting them and undo any changes later on when possible (which is only feasible with very uncommonly permissive bucket policies).
  4. Remove tests that can modify the bucket's configurations entirely (which might let dangerous misconfigurations go undetected).

I'm not sure what the best course of action is here, but I think the current behavior (the 1st option) can easily be reversed by the bucket owner once the tests are finished. We could also implement the 3rd option alongside the 1st one to mitigate this problem.
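A sketch of layering the 3rd option on top of the 1st (illustrative names, untested against the real extension; `s3` stands for a boto3 S3 client):

```python
def probe_cors_with_restore(s3, bucket_name, test_config):
    """Save the current CORS configuration when it is readable and
    restore it after the write test; fall back to the current
    overwrite-only behavior otherwise."""
    try:
        # Requires s3:GetBucketCORS, which write access does not imply.
        # Note: a bucket with no CORS config raises an error here too
        # (NoSuchCORSConfiguration); a full implementation would restore
        # that state with delete_bucket_cors instead.
        saved = {"CORSRules":
                 s3.get_bucket_cors(Bucket=bucket_name)["CORSRules"]}
    except Exception:
        saved = None  # cannot read, so any overwrite is irreversible

    try:
        s3.put_bucket_cors(Bucket=bucket_name,
                           CORSConfiguration=test_config)
    except Exception:
        return False
    if saved is not None:
        # Undo the modification when we know the previous state.
        s3.put_bucket_cors(Bucket=bucket_name, CORSConfiguration=saved)
    return True
```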

Further notes:

  1. There's a "passive mode" in the extension's settings tab that a user can enable to only identify buckets and perform passive tests (so nothing can go wrong in critical environments).
  2. It's always a best practice not to test on production environments/buckets but on development/staging ones instead.

Do you have any better ideas in mind?

@JacobReynolds
Author

Hey Sobky,

Thanks for the response. The destroy verbiage came specifically from https://github.com/VirtueSecurity/aws-extender/blob/master/aws_extender.py#L561 which will overwrite the setting, breaking any websites hosted through AWS. But I agree, none of the checks disrupt data stored in the bucket.

Those are some hard challenges to overcome, since write access does not entail read access. I think the easiest solution is to add a notice telling the user exactly how the extension will change their buckets, and to use unique identifiers for the modified values so they can easily be spotted afterward.

There should also be a visible notice specifying that this should only be used in non-production environments. My worry is that without it, a tester will unknowingly use this on production instances and potentially cause serious issues for customer-facing buckets.

-Jake

@0xSobky 0xSobky self-assigned this Oct 14, 2017
@nwalsh1995

For some write checks, you can use a checksum to determine whether you have access. The exact process and script are in this blog post from Detectify.

I have personally confirmed that their approach works with Boto3. Note that you probably can't add the checksum to all checks, but you can test some writes 'passively'.
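For reference, my understanding of the trick: send a PUT with a deliberately wrong Content-MD5 header. S3 checks permissions before validating the digest, so a BadDigest/InvalidDigest error proves write access while the mismatch guarantees nothing is ever stored. A boto3-style sketch (illustrative names; `s3` stands for a boto3 S3 client):

```python
import base64
import hashlib

def wrong_content_md5(body):
    """Base64 MD5 of *different* bytes, guaranteed not to match body."""
    return base64.b64encode(hashlib.md5(body + b"x").digest()).decode()

def passive_write_probe(s3, bucket_name):
    """'Passive' write check via a bad checksum: S3 rejects the
    mismatched digest after the permission check, so no object is
    stored either way."""
    body = b"probe"
    try:
        s3.put_object(Bucket=bucket_name, Key="probe.txt", Body=body,
                      ContentMD5=wrong_content_md5(body))
        return True  # should not happen with a mismatched digest
    except Exception as error:
        # BadDigest => permission check passed, write rejected on digest
        return getattr(error, "error_code", None) in ("BadDigest",
                                                      "InvalidDigest")
```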

@0xSobky
Collaborator

0xSobky commented Oct 17, 2017

Thanks @nwalsh1995, but which "WRITE" checks exactly are you referring to? If you mean tests similar to the following:

```python
key = bucket.new_key('test.txt')
```

Then this is intentionally not passive, and it's a non-destructive test. The object uploaded is simply meant to act as evidence that you were able to write content to the bucket. If not, what proves the flaw ever existed? The point here is not to be completely passive but only to mitigate potentially disruptive tests.
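Combined with Jacob's unique-identifier suggestion above, such an evidence upload could look like this boto3-style sketch (hypothetical key naming, not the extension's actual code):

```python
import uuid

def write_evidence(s3, bucket_name):
    """Upload a small, clearly attributable marker object as proof of
    write access. The unique key makes it easy for the bucket owner to
    find and delete. `s3` stands for a boto3 S3 client."""
    key = "aws-extender-test-%s.txt" % uuid.uuid4()
    s3.put_object(Bucket=bucket_name, Key=key,
                  Body=b"AWS Extender write-permission test object")
    return key
```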

@nwalsh1995

@0xSobky yes, it's the bucket write test I'm referring to. I haven't researched whether you can use the checksum on all of your checks, but if any of them accept a checksum, you could potentially use the same method.

Also, with the checksum method, you can actually make the bucket write check completely passive.
