Feature Request: Support for Google Cloud Storage #1981
Hey! So @mtekel and I are going to have a go at adding this to the DSL for the google provider. Our intention is to implement the DSL according to the S3 schema.
The created bucket will export the following variables:
We intend to default all buckets to the standard storage class. Thus, to create a bucket called 'foobar':
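As a sketch of that intent (the resource and attribute names below mirror the S3 schema and are assumptions until the pull request lands), creating the standard-storage bucket 'foobar' might look like:

```hcl
# Hypothetical DSL, modeled on the aws_s3_bucket schema; names are assumptions.
resource "google_storage_bucket" "foobar" {
  name = "foobar"
  # No storage class attribute set: buckets default to standard storage.
}
```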
Heh, when I self-assigned this, GitHub only then showed me your message from a day ago. Guess I shouldn't keep tabs open this long. :)
Have you considered supporting the full BucketAccessControls semantics? I don't mind helping with this. I'd rather not see the field 'acl' repurposed for the predefined ACL strings, although it may be a good idea if using the full acl is too verbose, in which case we can call it predefined_acl or something like that.
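To make the two styles concrete, here is a rough sketch of both (purely illustrative; the `predefined_acl` attribute and the inline `acl` block are assumptions, not a settled schema):

```hcl
# Option A (hypothetical): canned ACL as a simple string attribute.
resource "google_storage_bucket" "canned" {
  name           = "foobar-canned"
  predefined_acl = "publicRead"
}

# Option B (hypothetical): full BucketAccessControls inline, more verbose.
resource "google_storage_bucket" "full_acl" {
  name = "foobar-full"
  acl {
    entity = "allUsers"
    role   = "READER"
  }
}
```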
We've got a branch locally, hopefully coming tomorrow morning (GMT). We haven't tackled the website operations nor some of the lower-level ACL stuff, just the canned ones, to create feature parity with the S3 provider (that was the scope of our internal story on this). It would be great to have you review the pull request and then add in some of the bits we didn't pick up, specifically around updates such as changing the ACL on existing buckets. The standard storage API (v1) is a bit short on examples of how you apply an ACL to a bucket; would you use the patch command? Happy to redefine the
SGTM. Yes, you can simply use patch to update either low-level acls inline with the bucket resource, or you can specify a different predefinedAcl in the query (but not both). Note that you cannot delete a bucket unless it is empty, and deleting everything in it is a bit hairy because of eventual consistency and concurrent updates to the bucket. So, it's not clear to me how much we should help the user there. Maybe we can just force them to empty their own bucket. Any idea what the terraform s3 bucket resource does?
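In Terraform terms, that patch flow would just be an in-place update: edit the ACL attribute and re-apply, and the provider issues a PATCH against the existing bucket rather than recreating it. A sketch (the attribute name is again an assumption):

```hcl
resource "google_storage_bucket" "example" {
  name           = "foobar"
  predefined_acl = "projectPrivate" # previously "publicRead"; changing this and
                                    # re-running `terraform apply` would PATCH
                                    # the existing bucket in place
}
```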
As of earlier today, Terraform now supports a force option to clean out the bucket before deletion... #1977
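Mirroring the S3 change in #1977, the GCS resource could expose the same escape hatch; a sketch (the flag name `force_destroy` is borrowed from the S3 resource and is an assumption here):

```hcl
resource "google_storage_bucket" "example" {
  name          = "foobar"
  force_destroy = true # assumed flag: delete all objects in the bucket before
                       # destroying it, working around the "bucket must be
                       # empty" deletion rule
}
```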
Ok, we should probably do the same thing.
So, apologies for the late reply, got caught by the

The pull request can be found here:

I think the acceptance tests could do with a test case to cover the
Ok, will look on Tuesday, bank holiday over here in the states :)
As the pull request has been merged, can we close this issue now?
Fixed in #2030
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
It would be really great if you could add support for provisioning GCS buckets. Although we're able to use a local-exec provisioner to call gsutil for creating buckets, it would be great if Terraform could handle it directly through the google provider.