
Feature: add google cloud storage #2060

Merged
merged 1 commit into from
Jun 1, 2015

Conversation

dhilton
Contributor

@dhilton dhilton commented May 24, 2015

What

This pull request extends the Google provider, enabling the creation of new Google Cloud storage buckets.

This PR has the following components:

  • A new resource defined in resource_storage_bucket.go
  • A set of acceptance tests in resource_storage_bucket_test.go
  • A new documentation page in storage_bucket_html.md
  • A new documentation navigation item in google.erb

Who should review this PR:

  • Anyone who has knowledge of Terraform and the Google Cloud Storage API.

@@ -60,6 +60,9 @@
<li<%= sidebar_current("docs-google-resource-dns-record-set") %>>
<a href="/docs/providers/google/r/dns_record_set.html">google_dns_record_set</a>
</li>
<li<%= sidebar_current("docs-google-resource-storage-bucket") %>>
<a href="/docs/providers/google/r/storage_bucket.html">google_dns_record_set</a>

Seems like the link name was not updated after copy-paste; it should probably be something like `google_storage_bucket` instead of `google_dns_record_set`.

@mtekel

mtekel commented May 27, 2015

Changed 'bucket' to 'name' and added a loop for forced deletion. Also general cleanup, formatting, and typo fixes. Up for review.


// Add extra metadata about the bucket.
d.Set("name", res.Name)
d.Set("predefined_acl", res.Acl)
Contributor

Does this actually work? res.Acl is the low-level list of ACLs, right? Whereas predefined_acl needs a string. It seems it should be set to acl, not res.Acl.

edit: Or not set at all, as it's not a Computed field. I think the same goes for "name" and "location".


This works; what it does is set those attributes for that resource. They are easily visible in the tfstate file. I am not sure about everything these are used for, but I would say Terraform at least uses them to monitor changes in state. I am storing name, acl, and location, as these are the fields configured by the user, set to whatever Google returns in the response. The result is (this is a snippet from tfstate):

"google_storage_bucket.michael-testbucket": {
                    "type": "google_storage_bucket",
                    "primary": {
                        "id": "michaeltestbucket2",
                        "attributes": {
                            "force_destroy": "true",
                            "id": "michaeltestbucket2",
                            "location": "US",
                            "name": "michaeltestbucket2",
                            "predefined_acl": "projectprivate"
                        }
                    }
                }

Why use both name and id? Because maybe in the future GCS could start returning something other than the name as the ID...
It would also be great to get some Terraform feedback on the proper behaviour here. The AWS S3 provider sets only the ID on creation and then sets all other attributes in the bucketRead function. Terraform seems to do bucketRead (or, more abstractly, a resource read) every time before it is about to modify the resource. I presume this is to find out the real state of the resource, to determine whether it is ready for update.

Contributor

So in the case of name you are basically doing d.Set("name", d.Get("name")), because res.Name is just the name returned from the request. That's a no-op and you don't need to do it. I think it's wrong (though not enforced) to use d.Set() for anything that is not Computed. In the case of location, if you change it to use a Terraform default value, the same argument will apply.

In the case of predefined_acl I'm not sure why it appears to behave correctly. The type of res.Acl is []*BucketAccessControl, so I'm not sure how Go is converting that to a string. I expect that because the value is not actually Computed, your .Set is actually being ignored.

Contributor

Id seems to be a special attribute that has meaning within Terraform core. It doesn't really make sense for the Google provider, because the human-readable names are already unique. So it is just set to the name of the resource. I'd be happy to see it go, and for providers like AWS to have an explicit Computed Id attribute if they need one. @phinze what do you think?

@mtekel

mtekel commented May 29, 2015

Hi @sparkprime,

we made some more updates with @dcarley, this time also to the tests. Your feedback would be appreciated. Thanks.

config := testAccProvider.Meta().(*Config)
object := &storage.Object{Name: itemName}

file, err := os.Open(location)
Contributor

How about making a Reader from a string with bytes.NewBufferString("your string")?

@sparkprime
Contributor

Aside from trivial comments, LGTM

Can you squash changes into a single commit?

And then are we ready to merge?

Thanks for the great work

Configure Google Cloud Storage buckets using:
* name (compulsory attribute)
* predefined_acl (optional, default: `projectPrivate`)
* location (optional, default: `US`)
* force_destroy (optional, default: `false`)

Currently only `predefined_acl`s are supported. Bucket attribute updates happen via re-creation. force_destroy causes bucket objects to be purged, enabling bucket destruction.
@mtekel

mtekel commented Jun 1, 2015

Squashed commits, updated tests to use string instead of file, used go fmt to fix formatting.

sparkprime added a commit that referenced this pull request Jun 1, 2015
@sparkprime sparkprime merged commit 265b9b2 into hashicorp:master Jun 1, 2015
@sparkprime
Contributor

Thanks again!

@dhilton
Contributor Author

dhilton commented Jun 3, 2015

Late to the merge party (went on holiday) but thanks to all for the review and cleanup :shipit:

@ghost

ghost commented May 2, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@ghost ghost locked and limited conversation to collaborators May 2, 2020