diff --git a/.ci/README.md b/.ci/README.md
index 60a030b4b658..6d3819f50fc3 100644
--- a/.ci/README.md
+++ b/.ci/README.md
@@ -37,7 +37,21 @@ a) make no changes to any downstream and fail or
 b) atomically update every downstream to a fast-forward state that represents the appropriate HEAD as of the beginning of the run
 
-It's possible, if we assume the worst, for a job to be cancelled or fail in the middle of pushing downstreams in a transient way. The sorts of failures that happen at scale - lightning strikes a datacenter or some other unlikely misfortune happens. This has a chance to cause a hiccup in the downstream history, but isn't dangerous. If that happens, the sync tags may need to be manually updated to sit at the same commit, just before the commit which needs to be generated. Then, the downstream pusher workflow will need to be restarted.
+#### Something went wrong!
+Don't panic - this is all quite safe. :)
+
+It's possible for a job to be cancelled, or to fail transiently, in the middle of pushing downstreams - the sorts of failures that happen at scale, where lightning strikes a datacenter or some other unlikely misfortune occurs. This can cause a hiccup in the downstream history, but isn't dangerous. If it happens, the sync tags may need to be manually updated to sit at the same commit, just before the commit which needs to be generated. Then, the downstream pusher workflow will need to be restarted.
+
+Updating the sync tags is done like this:
+First, check their state: `git fetch origin && git rev-parse origin/tpg-sync origin/tpgb-sync origin/ansible-sync origin/inspec-sync origin/tf-oics-sync origin/tf-conv-sync` will list the commits for each of the sync tags.
+If you have changed the name of the `googlecloudplatform/magic-modules` remote from `origin`, substitute that name instead.
+In normal, steady-state operation, these tags will all be identical. When a failure occurs, some of them may be one commit ahead of the others; it is rare for any of them to be 2 or more commits ahead of any other. If they are not all equal, and there is no pusher task currently running, you will need to reset them by hand. If they are all equal, skip the next step.
+
+Second, find which commit caused the error. This will usually be easy - Cloud Build lists the commit which triggered a build, so you can probably just use that one. You need to set all the sync tags to the parent of that commit. Say the commit which caused the error is `12345abc`. You can find its parent with `git rev-parse 12345abc~` (note the `~` suffix). Some of the sync tags are likely set to this value already. For the remainder, simply perform a git push. Assuming that the parent commit is `98765fed`, that would be `git push origin 98765fed:tf-conv-sync`.
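+
+Taken together - and using the same placeholder hashes as above (`12345abc` for the failing commit, `98765fed` for its parent) - a full reset of all six sync tags might look like this sketch:
+
+```shell
+git fetch origin
+# Inspect where each sync tag currently sits.
+git rev-parse origin/tpg-sync origin/tpgb-sync origin/ansible-sync origin/inspec-sync origin/tf-oics-sync origin/tf-conv-sync
+# Find the parent of the commit that triggered the failing build.
+git rev-parse 12345abc~
+# Point every sync tag at that parent; a push to a tag already sitting there is a no-op.
+for tag in tpg-sync tpgb-sync ansible-sync inspec-sync tf-oics-sync tf-conv-sync; do
+  git push origin 98765fed:$tag
+done
+```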
+
+If you are unlucky, there may be open PRs against the downstreams - this only happens if the failure occurred during the ~5 second period surrounding the merging of one of the downstreams. Close those PRs before proceeding to the final step.
+
+Finally, click "retry" on the failed job in Cloud Build. Watch the retried job and see if it succeeds - it should! If it does not, the underlying problem may not have been fixed.
 
 ## Deploying the pipeline
 The code on the PR's branch is used to plan actions - no merge is performed.
 
@@ -46,8 +60,15 @@ If you are making changes to the containers, your changes will not apply until t
 Pausing the pipeline is done in the cloud console, by setting the downstream-builder trigger to disabled.
 You can find that trigger [here](https://console.cloud.google.com/cloud-build/triggers/edit/f80a7496-b2f4-4980-a706-c5425a52045b?project=graphite-docker-images)
 
-## Design choices & tradeoffs
-* The downstreams share some setup code in common - especially TPG and TPGB. We violated the DRY principle by writing separate workflows for each repo. In practice, this has substantially reduced the amount of code - the coordination layer above the two repos was larger than the code saved by combining them. We also increase speed, since each Action runs separately.
+
+## Dependency change handbook
+If someone (often a bot) creates a PR which updates the Gemfile or Gemfile.lock, the pipeline will not be able to generate diffs for it. This is because bundler doesn't allow you to run a binary unless your installed gems exactly match the Gemfile.lock, and since we have to run generation both before and after the change, no single container can satisfy both sets of requirements.
+
+The best approach is:
+* Build the `downstream-generator` container locally, with the new Gemfile and Gemfile.lock. This will involve hand-modifying the Dockerfile to use the local Gemfile/Gemfile.lock instead of fetching them with wget from this repo's `master` branch. You don't need to check in those changes. (See the sketch after this list.)
+* Once that container is built, and while nothing else is running in GCB (wait, if you need to), push the container to GCR and, as soon as possible afterwards, merge the dependency-changing PR.
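+
+As a sketch (the image path here is an assumption, modeled on this repo's other containers under `gcr.io/graphite-docker-images` - use whatever path the downstream-builder trigger actually pulls):
+
+```shell
+cd .ci/containers/downstream-builder
+# Hand-edit the Dockerfile first: replace the wget of Gemfile/Gemfile.lock
+# from master with COPY directives pointing at the updated local copies.
+docker build -t gcr.io/graphite-docker-images/downstream-builder .
+# While nothing else is running in GCB, push the image, then merge the PR immediately.
+docker push gcr.io/graphite-docker-images/downstream-builder
+```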
-echo "${SERVICE_ACCOUNT_KEY}" > /tmp/google-account.json -echo "${ANSIBLE_TEMPLATE}" > /tmp/ansible-template.ini - -set -e -set -x - -# Install ansible from source -git clone https://github.com/ansible/ansible.git -pushd ansible -pip install -r requirements.txt -source hacking/env-setup -popd - -# Clone ansible_collections_google because submodules -# break collections -git clone https://github.com/ansible-collections/ansible_collections_google.git - -# Build newest modules -pushd magic-modules-gcp -bundle install -bundle exec compiler -a -e ansible -o ../ansible_collections_google -popd - -# Install collection -pushd ansible_collections_google -ansible-galaxy collection build . -ansible-galaxy collection install *.gz -popd - -# Setup Cloud configuration template with variables -pushd ~/.ansible/collections/ansible_collections/google/cloud -cp /tmp/ansible-template.ini tests/integration/cloud-config-gcp.ini - -# Run ansible -ansible-test integration -v --allow-unsupported --continue-on-error $(find tests/integration/targets -name "gcp*" -type d -printf "%P ") diff --git a/.ci/acceptance-tests/ansible-integration.yml b/.ci/acceptance-tests/ansible-integration.yml deleted file mode 100644 index a8dd4a521e58..000000000000 --- a/.ci/acceptance-tests/ansible-integration.yml +++ /dev/null @@ -1,12 +0,0 @@ -platform: linux - -inputs: - - name: magic-modules-gcp - -image_resource: - type: docker-image - source: - repository: gcr.io/magic-modules/ansible - -run: - path: magic-modules-gcp/.ci/acceptance-tests/ansible-integration.sh diff --git a/.ci/acceptance-tests/inspec-integration.sh b/.ci/acceptance-tests/inspec-integration.sh deleted file mode 100755 index 806b2a662a91..000000000000 --- a/.ci/acceptance-tests/inspec-integration.sh +++ /dev/null @@ -1,37 +0,0 @@ -#!/bin/bash - -set -e -set -x - -# Service account credentials for GCP to allow terraform to work -export GOOGLE_CLOUD_KEYFILE_JSON="/tmp/google-account.json" -export GOOGLE_APPLICATION_CREDENTIALS="/tmp/google-account.json" -# Setup GOPATH -export GOPATH=${PWD}/go - -# CI sets the contents of our json account secret in our environment; dump it -# to disk for use in tests. 
-set +x -echo "${TERRAFORM_KEY}" > /tmp/google-account.json -export GCP_PROJECT_NUMBER=${PROJECT_NUMBER} -export GCP_PROJECT_ID=${PROJECT_NAME} -export GCP_PROJECT_NAME=${PROJECT_NAME} -set -x - -pushd magic-modules-gcp/build/inspec - -# Setup for using current GCP resources -export GCP_ZONE=europe-west2-a -export GCP_LOCATION=europe-west2 - -bundle install - -function cleanup { - cd $INSPEC_DIR - bundle exec rake test:cleanup_integration_tests -} - -export INSPEC_DIR=${PWD} -trap cleanup EXIT -bundle exec rake test:integration -popd \ No newline at end of file diff --git a/.ci/acceptance-tests/inspec-integration.yml b/.ci/acceptance-tests/inspec-integration.yml deleted file mode 100644 index c948d3e2b601..000000000000 --- a/.ci/acceptance-tests/inspec-integration.yml +++ /dev/null @@ -1,13 +0,0 @@ -platform: linux - -inputs: - - name: magic-modules-gcp - -image_resource: - type: docker-image - source: - repository: gcr.io/magic-modules/terraform-gcloud-inspec - tag: '0.12.16-4.0' - -run: - path: magic-modules-gcp/.ci/acceptance-tests/inspec-integration.sh diff --git a/.ci/acceptance-tests/inspec-post-approve.sh b/.ci/acceptance-tests/inspec-post-approve.sh deleted file mode 100755 index 68bece76ca80..000000000000 --- a/.ci/acceptance-tests/inspec-post-approve.sh +++ /dev/null @@ -1,71 +0,0 @@ -#!/bin/bash - -set -e -set -x - -# Service account credentials for GCP to allow terraform to work -export GOOGLE_CLOUD_KEYFILE_JSON="/tmp/google-account.json" -export GOOGLE_APPLICATION_CREDENTIALS="/tmp/google-account.json" -# Setup GOPATH -export GOPATH=${PWD}/go - -# CI sets the contents of our json account secret in our environment; dump it -# to disk for use in tests. -set +x -echo "${TERRAFORM_KEY}" > /tmp/google-account.json -export GCP_PROJECT_NUMBER=${PROJECT_NUMBER} -export GCP_PROJECT_ID=${PROJECT_NAME} -export GCP_PROJECT_NAME=${PROJECT_NAME} -set -x - -gcloud auth activate-service-account terraform@graphite-test-sam-chef.iam.gserviceaccount.com --key-file=$GOOGLE_CLOUD_KEYFILE_JSON -# TODO(slevenick): Check to see if we have already run this -PR_ID="$(cat ./mm-approved-prs/.git/id)" - -# Check if PR_ID folder exists -set +e -gsutil ls gs://magic-modules-inspec-bucket/$PR_ID -if [ $? -ne 0 ]; then - # Bucket does not exist, so we did not have to record new cassettes to pass the inspec-test step. - # This means no new cassettes need to be generated after this PR is merged. 
- exit 0 -fi -set -e - -pushd mm-approved-prs -export VCR_MODE=all -# Running other controls may cause caching issues due to underlying clients caching responses -rm build/inspec/test/integration/verify/controls/* -bundle install -bundle exec compiler -a -e inspec -o "build/inspec/" -v beta -cp templates/inspec/vcr_config.rb build/inspec - -pushd build/inspec - -# Setup for using current GCP resources -export GCP_ZONE=europe-west2-a -export GCP_LOCATION=europe-west2 - -bundle install - -function cleanup { - cd $INSPEC_DIR - bundle exec rake test:cleanup_integration_tests -} - -export INSPEC_DIR=${PWD} -trap cleanup EXIT - -seed=$RANDOM -bundle exec rake test:init_workspace -# Seed plan_integration_tests so VCR cassettes work with random resource suffixes -bundle exec rake test:plan_integration_tests[$seed] -bundle exec rake test:setup_integration_tests -bundle exec rake test:run_integration_tests -bundle exec rake test:cleanup_integration_tests - -echo $seed > inspec-cassettes/seed.txt - -gsutil -m cp inspec-cassettes/* gs://magic-modules-inspec-bucket/$PR_ID/inspec-cassettes/approved/ - -popd \ No newline at end of file diff --git a/.ci/acceptance-tests/inspec-post-approve.yml b/.ci/acceptance-tests/inspec-post-approve.yml deleted file mode 100644 index f724216954d4..000000000000 --- a/.ci/acceptance-tests/inspec-post-approve.yml +++ /dev/null @@ -1,13 +0,0 @@ -platform: linux - -inputs: - - name: mm-approved-prs - -image_resource: - type: docker-image - source: - repository: gcr.io/magic-modules/terraform-gcloud-inspec - tag: '0.12.16-4.0' - -run: - path: mm-approved-prs/.ci/acceptance-tests/inspec-post-approve.sh diff --git a/.ci/acceptance-tests/inspec-post-merge.sh b/.ci/acceptance-tests/inspec-post-merge.sh deleted file mode 100755 index 57c3bcf6c4a6..000000000000 --- a/.ci/acceptance-tests/inspec-post-merge.sh +++ /dev/null @@ -1,80 +0,0 @@ -#!/bin/bash - -set -e -set -x - -# Service account credentials for GCP to allow terraform to work -export GOOGLE_CLOUD_KEYFILE_JSON="/tmp/google-account.json" -export GOOGLE_APPLICATION_CREDENTIALS="/tmp/google-account.json" -# Setup GOPATH -export GOPATH=${PWD}/go - -# CI sets the contents of our json account secret in our environment; dump it -# to disk for use in tests. -set +x -echo "${TERRAFORM_KEY}" > /tmp/google-account.json -export GCP_PROJECT_NUMBER=${PROJECT_NUMBER} -export GCP_PROJECT_ID=${PROJECT_NAME} -export GCP_PROJECT_NAME=${PROJECT_NAME} -set -x - -gcloud auth activate-service-account terraform@graphite-test-sam-chef.iam.gserviceaccount.com --key-file=$GOOGLE_CLOUD_KEYFILE_JSON - -PR_ID="$(cat ./mm-approved-prs/.git/id)" -# Check if PR_ID folder exists in the GS bucket. -set +e -gsutil ls gs://magic-modules-inspec-bucket/$PR_ID -if [ $? -ne 0 ]; then - # Bucket does not exist, so we did not have to record new cassettes to pass the inspec-test step. - # This means no new cassettes need to be generated after this PR is merged. 
- exit 0 -fi -set -e - -pushd mm-approved-prs -export VCR_MODE=all -# Running other controls may cause caching issues due to underlying clients caching responses -rm build/inspec/test/integration/verify/controls/* -bundle install -bundle exec compiler -a -e inspec -o "build/inspec/" -v beta -cp templates/inspec/vcr_config.rb build/inspec - -pushd build/inspec - -# Setup for using current GCP resources -export GCP_ZONE=europe-west2-a -export GCP_LOCATION=europe-west2 - -bundle install - -function cleanup { - cd $INSPEC_DIR - bundle exec rake test:cleanup_integration_tests -} - -export INSPEC_DIR=${PWD} -trap cleanup EXIT - -set +e -gsutil ls gs://magic-modules-inspec-bucket/$PR_ID/inspec-cassettes/approved -if [ $? -eq 0 ]; then - # We have already recorded new cassettes during the inspec-post-merge step - gsutil -m cp gs://magic-modules-inspec-bucket/$PR_ID/inspec-cassettes/approved/* gs://magic-modules-inspec-bucket/master/inspec-cassettes -else - # We need to record new cassettes for this PR - seed=$RANDOM - bundle exec rake test:init_workspace - # Seed plan_integration_tests so VCR cassettes work with random resource suffixes - bundle exec rake test:plan_integration_tests[$seed] - bundle exec rake test:setup_integration_tests - bundle exec rake test:run_integration_tests - bundle exec rake test:cleanup_integration_tests - - echo $seed > inspec-cassettes/seed.txt - gsutil -m cp inspec-cassettes/* gs://magic-modules-inspec-bucket/master/inspec-cassettes/ -fi -set -e - -# Clean up cassettes for merged PR -gsutil -m rm -r gs://magic-modules-inspec-bucket/$PR_ID/inspec-cassettes/* -popd \ No newline at end of file diff --git a/.ci/acceptance-tests/inspec-post-merge.yml b/.ci/acceptance-tests/inspec-post-merge.yml deleted file mode 100644 index 838672508199..000000000000 --- a/.ci/acceptance-tests/inspec-post-merge.yml +++ /dev/null @@ -1,13 +0,0 @@ -platform: linux - -inputs: - - name: mm-approved-prs - -image_resource: - type: docker-image - source: - repository: gcr.io/magic-modules/terraform-gcloud-inspec - tag: '0.12.16-4.0' - -run: - path: mm-approved-prs/.ci/acceptance-tests/inspec-post-merge.sh diff --git a/.ci/acceptance-tests/inspec-vcr.sh b/.ci/acceptance-tests/inspec-vcr.sh deleted file mode 100755 index 179f1eb4facc..000000000000 --- a/.ci/acceptance-tests/inspec-vcr.sh +++ /dev/null @@ -1,59 +0,0 @@ -#!/bin/bash - -set -e -set -x - -# Service account credentials for GCP to allow terraform to work -export GOOGLE_CLOUD_KEYFILE_JSON="/tmp/google-account.json" -export GOOGLE_APPLICATION_CREDENTIALS="/tmp/google-account.json" -# Setup GOPATH -export GOPATH=${PWD}/go - -# CI sets the contents of our json account secret in our environment; dump it -# to disk for use in tests. 
-set +x -echo "${TERRAFORM_KEY}" > /tmp/google-account.json -export GCP_PROJECT_NUMBER=${PROJECT_NUMBER} -export GCP_PROJECT_ID=${PROJECT_NAME} -export GCP_PROJECT_NAME=${PROJECT_NAME} -set -x - -gcloud auth activate-service-account terraform@graphite-test-sam-chef.iam.gserviceaccount.com --key-file=$GOOGLE_CLOUD_KEYFILE_JSON -PR_ID="$(cat ./magic-modules-new-prs/.git/id)" - -pushd magic-modules-new-prs -export VCR_MODE=all -# Running other controls may cause caching issues due to underlying clients caching responses -rm build/inspec/test/integration/verify/controls/* -bundle install -bundle exec compiler -a -e inspec -o "build/inspec/" -v beta -cp templates/inspec/vcr_config.rb build/inspec - -pushd build/inspec - -# Setup for using current GCP resources -export GCP_ZONE=europe-west2-a -export GCP_LOCATION=europe-west2 - -bundle install - -function cleanup { - cd $INSPEC_DIR - bundle exec rake test:cleanup_integration_tests -} - -export INSPEC_DIR=${PWD} -trap cleanup EXIT - -seed=$RANDOM -bundle exec rake test:init_workspace -# Seed plan_integration_tests so VCR cassettes work with random resource suffixes -bundle exec rake test:plan_integration_tests[$seed] -bundle exec rake test:setup_integration_tests -bundle exec rake test:run_integration_tests -bundle exec rake test:cleanup_integration_tests - -echo $seed > inspec-cassettes/seed.txt - -gsutil -m cp inspec-cassettes/* gs://magic-modules-inspec-bucket/$PR_ID/inspec-cassettes/ -popd \ No newline at end of file diff --git a/.ci/acceptance-tests/inspec-vcr.yml b/.ci/acceptance-tests/inspec-vcr.yml deleted file mode 100644 index bf7857f481f8..000000000000 --- a/.ci/acceptance-tests/inspec-vcr.yml +++ /dev/null @@ -1,13 +0,0 @@ -platform: linux - -inputs: - - name: magic-modules-new-prs - -image_resource: - type: docker-image - source: - repository: gcr.io/magic-modules/terraform-gcloud-inspec - tag: '0.12.16-4.0' - -run: - path: magic-modules-new-prs/.ci/acceptance-tests/inspec-vcr.sh diff --git a/.ci/acceptance-tests/terraform-acceptance.sh b/.ci/acceptance-tests/terraform-acceptance.sh deleted file mode 100755 index 1636d4c1036e..000000000000 --- a/.ci/acceptance-tests/terraform-acceptance.sh +++ /dev/null @@ -1,32 +0,0 @@ -#!/usr/bin/env bash - -set -e -set -x - -export GOOGLE_CREDENTIALS_FILE="/tmp/google-account.json" -export GOOGLE_REGION="us-central1" -export GOOGLE_ZONE="us-central1-a" -# Setup GOPATH -export GOPATH=${PWD}/go - -# CI sets the contents of our json account secret in our environment; dump it -# to disk for use in tests. -set +x -echo "${GOOGLE_JSON_ACCOUNT}" > $GOOGLE_CREDENTIALS_FILE -set -x - -# Create GOPATH structure -mkdir -p "${GOPATH}/src/github.com/terraform-providers" -ln -s "${PWD}/magic-modules/build/$SHORT_NAME" "${GOPATH}/src/github.com/terraform-providers/$PROVIDER_NAME" - -cd "${GOPATH}/src/github.com/terraform-providers/$PROVIDER_NAME" - -git diff HEAD~ > tmp.diff -OUTPUT=( $(go run scripts/affectedtests/affectedtests.go -diff tmp.diff) ) -rm tmp.diff - -if [ ${#OUTPUT[@]} -eq 0 ]; then - echo "No tests to run" -else - make testacc TEST=./$TEST_DIR TESTARGS="-run=\"$( IFS=$'|'; echo "${OUTPUT[*]}" )\"" -fi diff --git a/.ci/acceptance-tests/terraform-acceptance.yml b/.ci/acceptance-tests/terraform-acceptance.yml deleted file mode 100644 index 1077fa53e885..000000000000 --- a/.ci/acceptance-tests/terraform-acceptance.yml +++ /dev/null @@ -1,22 +0,0 @@ -platform: linux -params: - # Params are set as environment variables when the run part is executed. 
- # Here we use (()) notation to indicate that we're using a credhub secret. - GOOGLE_JSON_ACCOUNT: ((terraform-integration-key)) - GOOGLE_PROJECT: ((terraform-integration-project)) - GOOGLE_ORG: ((terraform-integration-org)) - GOOGLE_BILLING_ACCOUNT: ((terraform-integration-billing-account)) - GOOGLE_PROJECT_NUMBER: ((terraform-integration-project-number)) - TEST_DIR: "" - PROVIDER_NAME: "" - SHORT_NAME: "" - # TODO: GOOGLE_BILLING_ACCOUNT_2 -inputs: - - name: magic-modules -image_resource: - type: docker-image - source: - repository: golang - tag: '1.11' -run: - path: magic-modules/.ci/acceptance-tests/terraform-acceptance.sh diff --git a/.ci/acceptance-tests/terraform-integration.sh b/.ci/acceptance-tests/terraform-integration.sh deleted file mode 100755 index d22b99869c84..000000000000 --- a/.ci/acceptance-tests/terraform-integration.sh +++ /dev/null @@ -1,24 +0,0 @@ -#!/usr/bin/env bash - -set -e -set -x - -export GOOGLE_CREDENTIALS_FILE="/tmp/google-account.json" -export GOOGLE_REGION="us-central1" -export GOOGLE_ZONE="us-central1-a" -# Setup GOPATH -export GOPATH=${PWD}/go - -# CI sets the contents of our json account secret in our environment; dump it -# to disk for use in tests. -set +x -echo "${GOOGLE_JSON_ACCOUNT}" > $GOOGLE_CREDENTIALS_FILE -set -x - -# Create GOPATH structure -mkdir -p "${GOPATH}/src/github.com/terraform-providers" -ln -s "${PWD}/magic-modules-gcp/build/$SHORT_NAME" "${GOPATH}/src/github.com/terraform-providers/$PROVIDER_NAME" - -cd "${GOPATH}/src/github.com/terraform-providers/$PROVIDER_NAME" - -make testacc TEST=./$TEST_DIR diff --git a/.ci/acceptance-tests/terraform-integration.yml b/.ci/acceptance-tests/terraform-integration.yml deleted file mode 100644 index 828b5be71ccc..000000000000 --- a/.ci/acceptance-tests/terraform-integration.yml +++ /dev/null @@ -1,22 +0,0 @@ -platform: linux -params: - # Params are set as environment variables when the run part is executed. - # Here we use (()) notation to indicate that we're using a credhub secret. - GOOGLE_JSON_ACCOUNT: ((terraform-integration-key)) - GOOGLE_PROJECT: ((terraform-integration-project)) - GOOGLE_ORG: ((terraform-integration-org)) - GOOGLE_BILLING_ACCOUNT: ((terraform-integration-billing-account)) - GOOGLE_PROJECT_NUMBER: ((terraform-integration-project-number)) - TEST_DIR: "" - PROVIDER_NAME: "" - SHORT_NAME: "" - # TODO: GOOGLE_BILLING_ACCOUNT_2 -inputs: - - name: magic-modules-gcp -image_resource: - type: docker-image - source: - repository: golang - tag: '1.11' -run: - path: magic-modules-gcp/.ci/acceptance-tests/terraform-integration.sh diff --git a/.ci/changelog.tmpl b/.ci/changelog.tmpl index c35942e2e34d..03b7cea36a27 100644 --- a/.ci/changelog.tmpl +++ b/.ci/changelog.tmpl @@ -64,4 +64,4 @@ BUG FIXES: {{range $bugs | sortAlpha -}} * {{. 
}} {{- end -}} -{{- end -}} +{{- end -}} \ No newline at end of file diff --git a/.ci/ci.yml.tmpl b/.ci/ci.yml.tmpl deleted file mode 100644 index dface4b51bac..000000000000 --- a/.ci/ci.yml.tmpl +++ /dev/null @@ -1,603 +0,0 @@ -{% import "vars.tmpl" as vars %} - -resource_types: - - name: merged-downstreams - type: docker-image - source: - repository: gcr.io/magic-modules/merged-prs-resource - tag: '1.1' - - - name: git-branch - type: docker-image - source: - repository: gcr.io/magic-modules/concourse-git-resource - tag: '1.0' - - - name: github-pull-request - type: docker-image - source: - repository: gcr.io/magic-modules/concourse-github-pr-resource - tag: '1.1' - -resources: - - name: magic-modules - type: git-branch - source: - uri: git@github.com:((github-account.username))/magic-modules.git - private_key: ((repo-key.private_key)) - - - name: magic-modules-gcp - type: git-branch - source: - uri: git@github.com:GoogleCloudPlatform/magic-modules.git - private_key: ((repo-key.private_key)) - - - name: magic-modules-new-external-prs - type: github-pull-request - source: - repo: GoogleCloudPlatform/magic-modules - private_key: ((repo-key.private_key)) - access_token: ((github-account.password)) - community_only: true - no_label: community - base: master - - - name: magic-modules-external-prs - type: github-pull-request - source: - repo: GoogleCloudPlatform/magic-modules - private_key: ((repo-key.private_key)) - access_token: ((github-account.password)) - community_only: true - base: master - - - name: magic-modules-new-prs - type: github-pull-request - source: - repo: GoogleCloudPlatform/magic-modules - private_key: ((repo-key.private_key)) - access_token: ((github-account.password)) - authorship_restriction: true - no_label: automerged - base: master - - - name: magic-modules-3.0-prs - type: github-pull-request - source: - repo: GoogleCloudPlatform/magic-modules - private_key: ((repo-key.private_key)) - access_token: ((github-account.password)) - authorship_restriction: true - no_label: automerged - base: 3.0.0 - -{% for v in vars.terraform_v.itervalues() %} - - name: {{ v.short_name }}-intermediate - type: git-branch - source: - uri: git@github.com:((github-account.username))/{{ v.provider_name }}.git - private_key: ((repo-key.private_key)) -{% endfor %} - - - name: ansible-intermediate - type: git-branch - source: - uri: git@github.com:((github-account.username))/ansible_collections_google.git - private_key: ((repo-key.private_key)) - - - name: inspec-intermediate - type: git-branch - source: - uri: git@github.com:((github-account.username))/inspec-gcp.git - private_key: ((repo-key.private_key)) - - - name: mm-approved-prs - type: github-pull-request - source: - repo: GoogleCloudPlatform/magic-modules - private_key: ((repo-key.private_key)) - access_token: ((github-account.password)) - only_mergeable: true - require_review_approval: true - check_dependent_prs: true - label: downstream-generated - base: master - - - name: merged-prs - type: merged-downstreams - check_every: 5m - source: - repo: GoogleCloudPlatform/magic-modules - token: ((github-account.password)) - -jobs: - - name: respond-to-community-pr - plan: - - get: magic-modules-new-external-prs - trigger: true - - get: magic-modules-gcp - # NOTE: we do NOT run a script from the external PR! 
- - task: write-welcome-message - file: magic-modules-gcp/.ci/magic-modules/welcome-contributor.yml - - put: magic-modules-external-prs - params: - status: pending - path: magic-modules-new-external-prs - label: community - comment: comment/pr_comment - assignee_file: comment/assignee - get_params: - skip_clone: true - - - name: authorize-single-rev - plan: - - get: magic-modules-external-prs - trigger: false - - put: magic-modules-new-prs - params: - status: pending - path: magic-modules-external-prs - get_params: - skip_clone: true - - - name: mm-3.0-diff - plan: - - get: magic-modules - resource: magic-modules-3.0-prs - version: every - trigger: true - attempts: 2 - params: - fetch_merge: true - # This isn't strictly-speaking necessary - we aren't actually - # pushing this anywhere - but it lets us reuse all the other - # generation stuff. - - aggregate: - # consumes: magic-modules (detached HEAD) - # produces: magic-modules-branched (new branch, with submodule) - - task: branch-magic-modules - file: magic-modules/.ci/magic-modules/branch.yml - params: - GH_TOKEN: ((github-account.password)) - CREDS: ((repo-key.private_key)) - ALL_SUBMODULES: {{' '.join(vars.all_submodules)}} - INCLUDE_PREVIOUS: true - - put: magic-modules-3.0-prs - params: - status: pending - path: magic-modules - get_params: - skip_clone: true - - - aggregate: -{% for k, v in vars.terraform_v.iteritems() %} - - do: - # consumes: magic-modules-branched - # produces: terraform-generated - - task: diff-{{v.short_name}} - file: magic-modules-branched/.ci/magic-modules/diff-terraform.yml - params: - VERSION: {{k}} - PROVIDER_NAME: {{v.provider_name}} - SHORT_NAME: {{v.short_name}} - GITHUB_ORG: {{v.github_org}} - OVERRIDE_PROVIDER: {{v.override_provider}} - - - put: {{v.short_name}}-intermediate - params: - repository: terraform-diff/{{k}}/new - branch_file: magic-modules-branched/branchname - force: true - get_params: - skip_clone: true - - - put: {{v.short_name}}-intermediate - params: - repository: terraform-diff/{{k}}/old - branch_file: magic-modules-previous/branchname - force: true - get_params: - skip_clone: true - - - task: test-{{v.short_name}} - file: magic-modules-branched/.ci/unit-tests/tf-3.yml - timeout: 30m - params: - PROVIDER_NAME: {{v.provider_name}} - TEST_DIR: {{v.test_dir}} - SUBDIR: {{k}} - -{% endfor %} - - on_failure: - put: magic-modules-3.0-prs - params: - status: failure - context: code-generation - path: magic-modules-3.0-prs - get_params: - skip_clone: true - - - task: create-message - file: magic-modules-branched/.ci/magic-modules/create-diff-message.yml - - - put: magic-modules-3.0-prs - params: - status: success - path: magic-modules - comment: message/message.txt - get_params: - skip_clone: true - - - - - name: mm-generate - plan: - - get: magic-modules - resource: magic-modules-new-prs - version: every - trigger: true - attempts: 2 - params: - fetch_merge: true - - aggregate: - - get: patches - resource: merged-prs - # consumes: magic-modules (detached HEAD) - # produces: magic-modules-branched (new branch, with submodule) - - task: branch-magic-modules - file: magic-modules/.ci/magic-modules/branch.yml - params: - GH_TOKEN: ((github-account.password)) - CREDS: ((repo-key.private_key)) - ALL_SUBMODULES: {{' '.join(vars.all_submodules)}} - - put: magic-modules-new-prs - params: - status: pending - path: magic-modules - get_params: - skip_clone: true - - aggregate: -{% for k, v in vars.terraform_v.iteritems() %} - - do: - # consumes: magic-modules-branched - # produces: terraform-generated - 
- task: generate-{{v.short_name}} - file: magic-modules-branched/.ci/magic-modules/generate-terraform.yml - params: - VERSION: {{k}} - PROVIDER_NAME: {{v.provider_name}} - SHORT_NAME: {{v.short_name}} - GITHUB_ORG: {{v.github_org}} - OVERRIDE_PROVIDER: {{v.override_provider}} - # Puts 'terraform-generated' into the robot's fork. - - aggregate: - - put: {{v.short_name}}-intermediate - params: - repository: terraform-generated/{{k}} - branch_file: magic-modules-branched/branchname - only_if_diff: true - force: true - get_params: - skip_clone: true -{% endfor %} - - do: - # consumes: magic-modules-branched - # produces: ansible-generated - - task: generate-ansible - file: magic-modules-branched/.ci/magic-modules/generate-ansible.yml - # Puts 'ansible-generated' into the robot's fork. - - put: ansible-intermediate - params: - repository: ansible-generated - branch_file: magic-modules-branched/branchname - only_if_diff: true - force: true - get_params: - skip_clone: true - - do: - # consumes: magic-modules-branched - # produces: inspec-generated - - task: generate-inspec - file: magic-modules-branched/.ci/magic-modules/generate-inspec.yml - # Puts 'inspec-generated' into the robot's fork. - - put: inspec-intermediate - params: - repository: inspec-generated - branch_file: magic-modules-branched/branchname - only_if_diff: true - force: true - get_params: - skip_clone: true - on_failure: - put: magic-modules-new-prs - params: - status: failure - context: code-generation - path: magic-modules - get_params: - skip_clone: true - - # consumes: magic-modules-branched - # produces: magic-modules-submodules - - task: point-to-submodules - file: magic-modules-branched/.ci/magic-modules/point-to-submodules.yml - params: - # This needs to match the username for the 'intermediate' resources. - GH_USERNAME: ((github-account.username)) - CREDS: ((repo-key.private_key)) - TERRAFORM_VERSIONS: "{{','.join(vars.terraform_properties_serialized)}}" - TERRAFORM_ENABLED: true - ANSIBLE_ENABLED: true - INSPEC_ENABLED: true - - # Push the magic modules branch that contains the updated submodules. 
- - put: magic-modules - params: - repository: magic-modules-submodules - branch_file: magic-modules-branched/branchname - force: true - get_params: - skip_clone: true - - - name: terraform-test - plan: - - get: magic-modules - version: every - trigger: true - params: - submodules: [{{','.join(vars.terraform_submodules)}}] - passed: [mm-generate] - - aggregate: -{% for v in vars.terraform_v.itervalues() %} - - task: test-{{v.short_name}} - file: magic-modules/.ci/unit-tests/task.yml - timeout: 30m - params: - PROVIDER_NAME: {{v.provider_name}} - SHORT_NAME: {{v.short_name}} - TEST_DIR: {{v.test_dir}} -{% endfor %} - on_failure: - do: - - get: magic-modules-new-prs - passed: [mm-generate] - - put: magic-modules-new-prs - params: - status: failure - context: terraform-tests - path: magic-modules-new-prs - get_params: - skip_clone: true - - - name: inspec-unit-test - plan: - - get: magic-modules-new-prs - passed: [mm-generate] - - get: magic-modules - version: every - trigger: true - params: - submodules: [build/inspec] - passed: [mm-generate] - - task: test - file: magic-modules/.ci/unit-tests/inspec.yml - timeout: 30m - params: - TERRAFORM_KEY: ((terraform-key)) - PROJECT_NAME: ((inspec-project-name)) - PROJECT_NUMBER: ((inspec-project-number)) - - - name: create-prs - plan: - - get: magic-modules - version: every - trigger: true - params: - submodules: {{vars.all_submodules_yaml_format}} - passed: - - mm-generate - - get: mm-initial-pr - attempts: 2 - resource: magic-modules-new-prs - passed: [mm-generate] - version: every - # This task either uses the 'hub' cli to create a PR from the generated repo, - # or, if a PR already exists, it uses 'git branch -f' to update the branch - # that PR is from to point at the commit generated earlier from this run - # of the pipeline. - - task: write-original-branch-name - file: mm-initial-pr/.ci/magic-modules/write-branch-name.yml - # This will be a no-op the first time through the pipeline. This pushes the updated - # branch named "codegen-pr-$MM_PR_NUMBER" to the downstream terraform repo. The - # first time through the pipeline, that branch is unchanged by the create-prs task, - # because a new PR has just been created from that branch. The second time through - # the pipeline (when a PR needs to be updated), this does that updating by pushing - # the new code to the repository/branch from which a pull request is already open. - - aggregate: -{% for v in vars.terraform_v.itervalues() %} - - put: {{v.short_name}}-intermediate - params: - repository: magic-modules/build/{{ v.short_name }} - branch_file: branchname/original_pr_branch_name - # Every time a change runs through this pipeline, it will generate a commit with - # a different hash - the hash includes timestamps. Therefore, even if there's no - # code diff, this push will update terraform's pending PR on every update to the - # magic-modules PR. With this 'only_if_diff' feature, if the change to the - # magic-modules PR does not require an update to the terraform PR, this will - # not push the update even though the commit hashes are different. 
- only_if_diff: true - force: true - get_params: - skip_clone: true -{% endfor %} - - put: ansible-intermediate - params: - repository: magic-modules/build/ansible - branch_file: branchname/original_pr_branch_name - # See comment on terraform-intermediate - only_if_diff: true - force: true - get_params: - skip_clone: true - - put: inspec-intermediate - params: - repository: magic-modules/build/inspec - branch_file: branchname/original_pr_branch_name - # See comment on terraform-intermediate - only_if_diff: true - force: true - get_params: - skip_clone: true - - task: create-or-update-pr - file: magic-modules/.ci/magic-modules/create-pr.yml - params: - GITHUB_TOKEN: ((github-account.password)) - # This is what tells us which terraform repo to write PRs against - this - # is what you change if you want to test this in a non-live environment. - ANSIBLE_REPO_USER: ansible-collections - INSPEC_REPO_USER: modular-magician - TERRAFORM_VERSIONS: "{{','.join(vars.terraform_properties_serialized)}}" - on_failure: - put: magic-modules-new-prs - params: - status: failure - context: pr-creation - path: mm-initial-pr - get_params: - skip_clone: true - - put: magic-modules - params: - repository: magic-modules/ - branch_file: branchname/original_pr_branch_name - only_if_diff: true - force: true - get_params: - skip_clone: true - # Once everything is done and working, post the updated information to the - # magic-modules PR. - - put: magic-modules-new-prs - params: - status: success - label: downstream-generated - path: mm-initial-pr - comment: magic-modules-with-comment/pr_comment - label_file: magic-modules-with-comment/label_file - get_params: - skip_clone: true - # Downstream changelog metadata - - task: downstream-changelog-metadata - file: magic-modules-with-comment/.ci/magic-modules/downstream-changelog-metadata.yml - params: - GITHUB_TOKEN: ((github-account.password)) - DOWNSTREAM_REPOS: "{{','.join(vars.downstreams_with_changelogs)}}" - - name: terraform-acceptance-tests - plan: - - get: magic-modules - version: every - trigger: true - params: - submodules: [{{', '.join(vars.terraform_submodules)}}] - passed: [create-prs] - - aggregate: -{% for v in vars.terraform_v.itervalues() %} - - task: test-{{v.short_name}} - file: magic-modules/.ci/acceptance-tests/terraform-acceptance.yml - params: - PROVIDER_NAME: {{v.provider_name}} - SHORT_NAME: {{v.short_name}} - TEST_DIR: {{v.test_dir}} -{% endfor %} - - - name: merge-prs - plan: - - get: mm-approved-prs - attempts: 2 - - task: downstream-changelog-metadata - file: mm-approved-prs/.ci/magic-modules/downstream-changelog-metadata-mergeprs.yml - params: - GITHUB_TOKEN: ((github-account.password)) - DOWNSTREAM_REPOS: "{{','.join(vars.downstreams_with_changelogs)}}" - - task: ensure-downstreams-merged - file: mm-approved-prs/.ci/magic-modules/ensure-downstreams-merged.yml - params: - GH_TOKEN: ((github-account.password)) - - put: mark-automerged - resource: mm-approved-prs - params: - path: mm-approved-prs - status: success - label: automerged - get_params: - skip_clone: true - - task: rebase-and-update - file: mm-approved-prs/.ci/magic-modules/merge.yml - params: - CREDS: ((repo-key.private_key)) - ALL_SUBMODULES: "{{' '.join(vars.all_submodules)}}" - # TODO(ndmckinley): This will work to update the magic-modules PR *if* the original PR - # was opened from the magic-modules repository. That's not always going to be - # true - figure out what to do if, for instance, we can't modify the PR. 
- # Update: right now, we just require everyone to push to the GCP repo. That's not - # been a problem yet. - - put: magic-modules-gcp - params: - repository: mm-output - branch_file: mm-approved-prs/.git/branch - force: true - - put: mark-success - resource: mm-approved-prs - params: - path: mm-output - status: success - get_params: - skip_clone: true - - put: merge-pr - resource: mm-approved-prs - params: - path: mm-output - status: success - merge: - method: squash - commit_msg: mm-output/commit_message - get_params: - skip_clone: true - - - name: inspec-vcr-record - serial: true - serial_groups: [inspec-integration] - plan: - - get: magic-modules-new-prs - - task: inspec-vcr - file: magic-modules-new-prs/.ci/acceptance-tests/inspec-vcr.yml - params: - TERRAFORM_KEY: ((terraform-key)) - PROJECT_NAME: ((inspec-project-name)) - PROJECT_NUMBER: ((inspec-project-number)) - - - name: inspec-post-merge - serial: true - serial_groups: [inspec-integration] - plan: - - get: mm-approved-prs - passed: [merge-prs] - trigger: true - - task: inspec-post-merge - file: mm-approved-prs/.ci/acceptance-tests/inspec-post-merge.yml - params: - TERRAFORM_KEY: ((terraform-key)) - PROJECT_NAME: ((inspec-project-name)) - PROJECT_NUMBER: ((inspec-project-number)) - - - name: inspec-post-approve - serial: true - serial_groups: [inspec-integration] - plan: - - get: mm-approved-prs - trigger: true - - task: inspec-post-approve - file: mm-approved-prs/.ci/acceptance-tests/inspec-post-approve.yml - params: - TERRAFORM_KEY: ((terraform-key)) - PROJECT_NAME: ((inspec-project-name)) - PROJECT_NUMBER: ((inspec-project-number)) diff --git a/.ci/containers/contributor-checker/Dockerfile b/.ci/containers/contributor-checker/Dockerfile new file mode 100644 index 000000000000..9df163f0625c --- /dev/null +++ b/.ci/containers/contributor-checker/Dockerfile @@ -0,0 +1,5 @@ +from alpine +run apk update +run apk add git curl jq bash +add check-contributor.sh /main.sh +entrypoint ["/main.sh"] diff --git a/.ci/containers/contributor-checker/check-contributor.sh b/.ci/containers/contributor-checker/check-contributor.sh new file mode 100755 index 000000000000..728bae3b3ede --- /dev/null +++ b/.ci/containers/contributor-checker/check-contributor.sh @@ -0,0 +1,59 @@ +#!/bin/bash +if [[ -z "$GITHUB_TOKEN" ]]; then + echo "Did not provide GITHUB_TOKEN environment variable." + exit 1 +fi +if [[ $# -lt 1 ]]; then + echo "Usage: $0 pr-number" + exit 1 +fi +PR_NUMBER=$1 + +set -x + +ASSIGNEE=$(curl -H "Authorization: token ${GITHUB_TOKEN}" \ + "https://api.github.com/repos/GoogleCloudPlatform/magic-modules/pulls/${PR_NUMBER}/requested_reviewers" | jq .users[0].login) + +if [[ "$ASSIGNEE" == "null" || -z "$ASSIGNEE" ]] ; then + ASSIGNEE=$(curl -H "Authorization: token ${GITHUB_TOKEN}" \ + "https://api.github.com/repos/GoogleCloudPlatform/magic-modules/pulls/${PR_NUMBER}/reviews" | jq .[0].user.login) +fi + +if [[ "$ASSIGNEE" == "null" || -z "$ASSIGNEE" ]] ; then + echo "Issue is not assigned." +else + echo "Issue is assigned, not assigning." + exit 0 +fi + +USER=$(curl -H "Authorization: token ${GITHUB_TOKEN}" \ + "https://api.github.com/repos/GoogleCloudPlatform/magic-modules/issues/${PR_NUMBER}" | jq .user.login) + +# This is where you add users who do not need to have an assignee chosen for +# them. +if $(echo $USER | fgrep -wq -e ndmckinley -e danawillow -e megan07 -e paddycarver -e rambleraptor -e SirGitsalot -e slevenick -e c2thorn -e rileykarson); then + echo "User is on the list, not assigning." 
+ exit 0 +fi + +# This is where you add people to the random-assignee rotation. This list +# might not equal the list above. +ASSIGNEE=$(shuf -n 1 <(printf "danawillow\nrileykarson\nslevenick\nc2thorn\nndmckinley\nmegan07")) + +comment=$(cat << EOF +Hello! I am a robot who works on Magic Modules PRs. + +I have detected that you are a community contributor, so your PR will be assigned to someone with a commit-bit on this repo for initial review. + +Thanks for your contribution! A human will be with you soon. + +@$ASSIGNEE, please review this PR or find an appropriate assignee. +EOF +) + +curl -H "Authorization: token ${GITHUB_TOKEN}" \ + -d "$(jq -r --arg comment "$comment" -n "{body: \$comment}")" \ + "https://api.github.com/repos/GoogleCloudPlatform/magic-modules/issues/${PR_NUMBER}/comments" +curl -H "Authorization: token ${GITHUB_TOKEN}" \ + -d "$(jq -r --arg assignee "$ASSIGNEE" -n "{reviewers: [\$assignee], team_reviewers: []}")" \ + "https://api.github.com/repos/GoogleCloudPlatform/magic-modules/pulls/${PR_NUMBER}/requested_reviewers" diff --git a/.ci/containers/downstream-builder/Dockerfile b/.ci/containers/downstream-builder/Dockerfile index d5a91fec73ce..143905b57791 100644 --- a/.ci/containers/downstream-builder/Dockerfile +++ b/.ci/containers/downstream-builder/Dockerfile @@ -9,8 +9,7 @@ RUN go get github.com/github/hub RUN ssh-keyscan github.com >> /known_hosts RUN echo "UserKnownHostsFile /known_hosts" >> /etc/ssh/ssh_config -ENV GOFLAGS "-mod=vendor" -ENV GO111MODULE "off" +ENV GO111MODULE "on" # Install Ruby from source. RUN apt-get update diff --git a/.ci/containers/downstream-builder/generate_downstream.sh b/.ci/containers/downstream-builder/generate_downstream.sh index 80c3d54247d0..5066d97bb622 100755 --- a/.ci/containers/downstream-builder/generate_downstream.sh +++ b/.ci/containers/downstream-builder/generate_downstream.sh @@ -27,7 +27,7 @@ function clone_repo() { LOCAL_PATH=$GOPATH/src/github.com/terraform-google-modules/docs-examples elif [ "$REPO" == "ansible" ]; then UPSTREAM_OWNER=ansible-collections - GH_REPO=ansible_collections_google + GH_REPO=google.cloud LOCAL_PATH=$PWD/../ansible elif [ "$REPO" == "inspec" ]; then UPSTREAM_OWNER=modular-magician @@ -85,7 +85,8 @@ fi if [ "$REPO" == "terraform" ]; then pushd $LOCAL_PATH - find . -type f -not -wholename "./.git*" -not -wholename "./.changelog*" -not -wholename "./vendor*" -not -name ".travis.yml" -not -name ".golangci.yml" -not -name "CHANGELOG.md" -not -name "GNUmakefile" -not -name "docscheck.sh" -not -name "LICENSE" -not -name "README.md" -not -wholename "./examples*" -not -name "go.mod" -not -name "go.sum" -not -name "staticcheck.conf" -not -name ".go-version" -not -name ".hashibot.hcl" -not -name "tools.go" -exec git rm {} \; + find . -type f -not -wholename "./.git*" -not -wholename "./.changelog*" -not -name ".travis.yml" -not -name ".golangci.yml" -not -name "CHANGELOG.md" -not -name "GNUmakefile" -not -name "docscheck.sh" -not -name "LICENSE" -not -name "README.md" -not -wholename "./examples*" -not -name "go.mod" -not -name "go.sum" -not -name "staticcheck.conf" -not -name ".go-version" -not -name ".hashibot.hcl" -not -name "tools.go" -exec git rm {} \; + go mod download popd fi @@ -100,6 +101,11 @@ else fi pushd $LOCAL_PATH + +if [ "$REPO" == "terraform" ]; then + make generate +fi + git config --local user.name "Modular Magician" git config --local user.email "magic-modules@google.com" git add . 
diff --git a/.ci/containers/github-differ/generate_comment.sh b/.ci/containers/github-differ/generate_comment.sh index af85696f37fa..af56b96c330c 100755 --- a/.ci/containers/github-differ/generate_comment.sh +++ b/.ci/containers/github-differ/generate_comment.sh @@ -22,7 +22,7 @@ TFC_SCRATCH_PATH=https://modular-magician:$GITHUB_TOKEN@github.com/modular-magic TFC_LOCAL_PATH=$PWD/../tfc TFOICS_SCRATCH_PATH=https://modular-magician:$GITHUB_TOKEN@github.com/modular-magician/docs-examples TFOICS_LOCAL_PATH=$PWD/../tfoics -ANSIBLE_SCRATCH_PATH=https://modular-magician:$GITHUB_TOKEN@github.com/modular-magician/ansible_collections_google +ANSIBLE_SCRATCH_PATH=https://modular-magician:$GITHUB_TOKEN@github.com/modular-magician/google.cloud ANSIBLE_LOCAL_PATH=$PWD/../ansible INSPEC_SCRATCH_PATH=https://modular-magician:$GITHUB_TOKEN@github.com/modular-magician/inspec-gcp INSPEC_LOCAL_PATH=$PWD/../inspec @@ -59,7 +59,7 @@ pushd $ANSIBLE_LOCAL_PATH git fetch origin $OLD_BRANCH if ! git diff --exit-code origin/$OLD_BRANCH origin/$NEW_BRANCH; then SUMMARY=`git diff origin/$OLD_BRANCH origin/$NEW_BRANCH --shortstat` - DIFFS="${DIFFS}${NEWLINE}Ansible: [Diff](https://github.com/modular-magician/ansible_collections_google/compare/$OLD_BRANCH..$NEW_BRANCH) ($SUMMARY)" + DIFFS="${DIFFS}${NEWLINE}Ansible: [Diff](https://github.com/modular-magician/google.cloud/compare/$OLD_BRANCH..$NEW_BRANCH) ($SUMMARY)" fi popd diff --git a/.ci/containers/go-ruby-python/Dockerfile b/.ci/containers/go-ruby-python/Dockerfile deleted file mode 100644 index e71de141ca87..000000000000 --- a/.ci/containers/go-ruby-python/Dockerfile +++ /dev/null @@ -1,25 +0,0 @@ -FROM gcr.io/magic-modules/go-ruby:1.13.8-2.6.0-v2 - -# Install python & python libraries. -RUN apt-get update -RUN apt-get install -y git -RUN apt-get install -y rsync -RUN apt-get install -y build-essential libbz2-dev libssl-dev libreadline-dev \ - libffi-dev libsqlite3-dev tk-dev -RUN apt-get install -y libpng-dev libfreetype6-dev -RUN apt-get install -y make build-essential libssl-dev zlib1g-dev libbz2-dev \ - libreadline-dev libsqlite3-dev wget curl llvm libncurses5-dev libncursesw5-dev \ - xz-utils tk-dev libffi-dev liblzma-dev python-openssl -RUN curl -L https://github.com/pyenv/pyenv-installer/raw/master/bin/pyenv-installer | bash -ENV PATH="/root/.pyenv/bin:${PATH}" -RUN eval "$(pyenv init -)" -RUN eval "$(pyenv virtualenv-init -)" -RUN pyenv install 3.6.8 -RUN pyenv install 2.7.13 -RUN pyenv rehash -ENV PATH="/root/.pyenv/shims:${PATH}" -RUN pyenv global 2.7.13 3.6.8 -RUN pip install beautifulsoup4 mistune -RUN pip3 install black -ENV LC_ALL=C.UTF-8 -ENV LANG=C.UTF-8 diff --git a/.ci/containers/go-ruby/Dockerfile b/.ci/containers/go-ruby/Dockerfile deleted file mode 100644 index d458ed98c8ff..000000000000 --- a/.ci/containers/go-ruby/Dockerfile +++ /dev/null @@ -1,38 +0,0 @@ -from golang:1.13-stretch as resource - -SHELL ["/bin/bash", "-c"] - -RUN go get golang.org/x/tools/cmd/goimports - -# Set up Github SSH cloning. -RUN ssh-keyscan github.com >> /known_hosts -RUN echo "UserKnownHostsFile /known_hosts" >> /etc/ssh/ssh_config - -ENV GOFLAGS "-mod=vendor" - -# Install Ruby from source. 
-RUN apt-get update -RUN apt-get install -y bzip2 libssl-dev libreadline-dev zlib1g-dev -RUN git clone https://github.com/rbenv/rbenv.git /rbenv -ENV PATH /rbenv/bin:/root/.rbenv/shims:$PATH - -ENV RUBY_VERSION 2.6.0 -ENV RUBYGEMS_VERSION 3.0.2 -ENV BUNDLER_VERSION 1.17.0 - -RUN /rbenv/bin/rbenv init || true -RUN eval "$(rbenv init -)" -RUN mkdir -p "$(rbenv root)"/plugins -RUN git clone https://github.com/rbenv/ruby-build.git "$(rbenv root)"/plugins/ruby-build - -RUN rbenv install $RUBY_VERSION -RUN rbenv global 2.6.0 -RUN rbenv rehash - -RUN gem update --system "$RUBYGEMS_VERSION" -RUN gem install bundler --version "$BUNDLER_VERSION" --force - -ADD Gemfile Gemfile -ADD Gemfile.lock Gemfile.lock -RUN bundle install -RUN rbenv rehash diff --git a/.ci/containers/go-ruby/Gemfile b/.ci/containers/go-ruby/Gemfile deleted file mode 100644 index 62dae70bdfb0..000000000000 --- a/.ci/containers/go-ruby/Gemfile +++ /dev/null @@ -1,15 +0,0 @@ -source 'https://rubygems.org' - -gem 'activesupport' -gem 'binding_of_caller' -gem 'rake' - -group :test do - gem 'mocha', '~> 1.3.0' - gem 'rspec' - gem 'rubocop', '>= 0.77.0' -end - -group :pr_script do - gem 'octokit' -end diff --git a/.ci/containers/go-ruby/Gemfile.lock b/.ci/containers/go-ruby/Gemfile.lock deleted file mode 100644 index 59cb6be3aa0c..000000000000 --- a/.ci/containers/go-ruby/Gemfile.lock +++ /dev/null @@ -1,77 +0,0 @@ -GEM - remote: https://rubygems.org/ - specs: - activesupport (5.2.3) - concurrent-ruby (~> 1.0, >= 1.0.2) - i18n (>= 0.7, < 2) - minitest (~> 5.1) - tzinfo (~> 1.1) - addressable (2.5.2) - public_suffix (>= 2.0.2, < 4.0) - ast (2.4.0) - binding_of_caller (0.8.0) - debug_inspector (>= 0.0.1) - concurrent-ruby (1.1.5) - debug_inspector (0.0.3) - diff-lcs (1.3) - faraday (0.15.4) - multipart-post (>= 1.2, < 3) - i18n (1.6.0) - concurrent-ruby (~> 1.0) - jaro_winkler (1.5.4) - metaclass (0.0.4) - minitest (5.11.3) - mocha (1.3.0) - metaclass (~> 0.0.1) - multipart-post (2.0.0) - octokit (4.13.0) - sawyer (~> 0.8.0, >= 0.5.3) - parallel (1.19.1) - parser (2.6.5.0) - ast (~> 2.4.0) - public_suffix (3.0.3) - rainbow (3.0.0) - rake (12.3.3) - rspec (3.8.0) - rspec-core (~> 3.8.0) - rspec-expectations (~> 3.8.0) - rspec-mocks (~> 3.8.0) - rspec-core (3.8.0) - rspec-support (~> 3.8.0) - rspec-expectations (3.8.1) - diff-lcs (>= 1.2.0, < 2.0) - rspec-support (~> 3.8.0) - rspec-mocks (3.8.0) - diff-lcs (>= 1.2.0, < 2.0) - rspec-support (~> 3.8.0) - rspec-support (3.8.0) - rubocop (0.77.0) - jaro_winkler (~> 1.5.1) - parallel (~> 1.10) - parser (>= 2.6) - rainbow (>= 2.2.2, < 4.0) - ruby-progressbar (~> 1.7) - unicode-display_width (>= 1.4.0, < 1.7) - ruby-progressbar (1.10.1) - sawyer (0.8.1) - addressable (>= 2.3.5, < 2.6) - faraday (~> 0.8, < 1.0) - thread_safe (0.3.6) - tzinfo (1.2.5) - thread_safe (~> 0.1) - unicode-display_width (1.6.0) - -PLATFORMS - ruby - -DEPENDENCIES - activesupport - binding_of_caller - mocha (~> 1.3.0) - octokit - rake - rspec - rubocop (>= 0.77.0) - -BUNDLED WITH - 1.17.2 diff --git a/.ci/containers/hub/Dockerfile b/.ci/containers/hub/Dockerfile deleted file mode 100644 index 6b126b0ef11c..000000000000 --- a/.ci/containers/hub/Dockerfile +++ /dev/null @@ -1,6 +0,0 @@ -from gcr.io/magic-modules/go-ruby-python:1.11.5-2.6.0-2.7-v6 - -RUN apt-get update -RUN apt-get install -y ca-certificates -RUN apt-get install -y jq -RUN go get github.com/github/hub diff --git a/.ci/containers/merged-prs-resource/Dockerfile b/.ci/containers/merged-prs-resource/Dockerfile deleted file mode 100644 index 
70052b9ed26c..000000000000 --- a/.ci/containers/merged-prs-resource/Dockerfile +++ /dev/null @@ -1,4 +0,0 @@ -FROM gcr.io/magic-modules/python -ADD get_downstream_prs.py /opt/resource/get_downstream_prs.py -ADD in.py /opt/resource/in -ADD check.py /opt/resource/check diff --git a/.ci/containers/merged-prs-resource/check.py b/.ci/containers/merged-prs-resource/check.py deleted file mode 100755 index 32e4c2ec9429..000000000000 --- a/.ci/containers/merged-prs-resource/check.py +++ /dev/null @@ -1,50 +0,0 @@ -#! /usr/local/bin/python -from __future__ import absolute_import -from __future__ import division -from __future__ import print_function - -from absl import app -import json -import collections -import sys -import get_downstream_prs -import re -import itertools -import operator -from github import Github - -def main(argv): - in_json = json.load(sys.stdin) - out_version = {} - g = Github(in_json['source']['token']) - open_pulls = g.get_repo(in_json['source']['repo']).get_pulls(state='open') - # For each open pull request, get all the dependencies. - depends = itertools.chain.from_iterable( - [get_downstream_prs.get_github_dependencies(g, open_pull.number) - for open_pull in open_pulls]) - # for each dependency, generate a tuple - (repo, pr_number) - parsed_dependencies = [re.match(r'https://github.com/([\w-]+/[\w-]+)/pull/(\d+)', d).groups() - for d in depends] - parsed_dependencies.sort(key=operator.itemgetter(0)) - # group those dependencies by repo - e.g. [("terraform-provider-google", ["123", "456"]), ...] - for r, pulls in itertools.groupby(parsed_dependencies, key=operator.itemgetter(0)): - repo = g.get_repo(r) - out_version[r] = [] - for pull in pulls: - # check whether the PR is merged - if it is, add it to the version. - pr = repo.get_pull(int(pull[1])) - if pr.is_merged(): - out_version[r].append(pull[1]) - for k, v in out_version.iteritems(): - out_version[k] = ','.join(v) - print(json.dumps([out_version])) - # version dict: - # { - # "terraform-providers/terraform-provider-google": "1514,1931", - # "terraform-providers/terraform-provider-google-beta": "121,220", - # "modular-magician/ansible": "", - # } - -if __name__ == '__main__': - app.run(main) - diff --git a/.ci/containers/merged-prs-resource/get_downstream_prs.py b/.ci/containers/merged-prs-resource/get_downstream_prs.py deleted file mode 100644 index d183323efd3f..000000000000 --- a/.ci/containers/merged-prs-resource/get_downstream_prs.py +++ /dev/null @@ -1,23 +0,0 @@ -#!/usr/bin/env python -import functools -import os -import re -import sys -from github import Github - -def append_github_dependencies_to_list(lst, comment_body): - list_of_urls = re.findall(r'^depends: (https://github.com/.*)', comment_body, re.MULTILINE) - return lst + list_of_urls - -def get_github_dependencies(g, pr_number): - pull_request = g.get_repo('GoogleCloudPlatform/magic-modules').get_pull(pr_number) - comment_bodies = [c.body for c in pull_request.get_issue_comments()] - # "reduce" is "foldl" - apply this function to the result of the previous function and - # the next value in the iterable. 
- return functools.reduce(append_github_dependencies_to_list, comment_bodies, []) - -if __name__ == '__main__': - g = Github(os.environ.get('GH_TOKEN')) - assert len(sys.argv) == 2 - for downstream_pr in get_github_dependencies(g, int(sys.argv[1])): - print downstream_pr diff --git a/.ci/containers/merged-prs-resource/in.py b/.ci/containers/merged-prs-resource/in.py deleted file mode 100755 index 9696a24ddd61..000000000000 --- a/.ci/containers/merged-prs-resource/in.py +++ /dev/null @@ -1,29 +0,0 @@ -#! /usr/local/bin/python -from __future__ import absolute_import -from __future__ import division -from __future__ import print_function - -from absl import app -import json -import sys -import os -from github import Github -import urllib - -def main(argv): - in_json = json.load(sys.stdin) - g = Github(in_json['source']['token']) - version = in_json.get('version', {}) - for repo_name, pr_numbers in version.iteritems(): - repo = g.get_repo(repo_name) - if not pr_numbers: continue - for pr_number in pr_numbers.split(','): - download_location = os.path.join(argv[1], repo_name, pr_number + '.patch') - if not os.path.exists(os.path.dirname(download_location)): - os.makedirs(os.path.dirname(download_location)) - pr = repo.get_pull(int(pr_number)) - urllib.urlretrieve(pr.patch_url, download_location) - print(json.dumps({"version": version})) - -if __name__ == '__main__': - app.run(main) diff --git a/.ci/containers/python/Dockerfile b/.ci/containers/python/Dockerfile deleted file mode 100644 index 47e9a6222079..000000000000 --- a/.ci/containers/python/Dockerfile +++ /dev/null @@ -1,10 +0,0 @@ -from python:2.7-stretch - -run pip install pygithub -run pip install absl-py -run pip install autopep8 -run pip install beautifulsoup4 mistune - -# Set up Github SSH cloning. 
-RUN ssh-keyscan github.com >> /known_hosts -RUN echo "UserKnownHostsFile /known_hosts" >> /etc/ssh/ssh_config diff --git a/.ci/containers/terraform-gcloud-inspec/Dockerfile b/.ci/containers/terraform-gcloud-inspec/Dockerfile deleted file mode 100644 index 7a74876cf0c3..000000000000 --- a/.ci/containers/terraform-gcloud-inspec/Dockerfile +++ /dev/null @@ -1,12 +0,0 @@ -FROM gcr.io/magic-modules/go-ruby-python:1.11.5-2.6.0-2.7-v6 - -RUN apt-get install unzip -RUN curl https://releases.hashicorp.com/terraform/0.12.16/terraform_0.12.16_linux_amd64.zip > terraform_0.12.16_linux_amd64.zip -RUN unzip terraform_0.12.16_linux_amd64.zip -d /usr/bin -# Install google cloud sdk -RUN echo "deb http://packages.cloud.google.com/apt cloud-sdk-stretch main" >> /etc/apt/sources.list.d/google-cloud-sdk.list -RUN curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add - -RUN apt-get update && apt-get install google-cloud-sdk -y - -ADD Gemfile Gemfile -RUN bundle install diff --git a/.ci/containers/terraform-gcloud-inspec/Gemfile b/.ci/containers/terraform-gcloud-inspec/Gemfile deleted file mode 100644 index ca69894908b5..000000000000 --- a/.ci/containers/terraform-gcloud-inspec/Gemfile +++ /dev/null @@ -1,17 +0,0 @@ -source 'https://rubygems.org' - -gem 'bundle' -gem 'google-api-client' -gem 'google-cloud' -gem 'googleauth' -gem 'inifile' -gem 'inspec-bin' -gem 'rubocop', '>= 0.77.0' - -group :development do - gem 'github_changelog_generator' - gem 'pry-coolline' - gem 'rake' - gem 'vcr' - gem 'webmock' -end \ No newline at end of file diff --git a/.ci/containers/terraform-vcr-tester/Dockerfile b/.ci/containers/terraform-vcr-tester/Dockerfile new file mode 100644 index 000000000000..1f50d1666fd9 --- /dev/null +++ b/.ci/containers/terraform-vcr-tester/Dockerfile @@ -0,0 +1,9 @@ +FROM alpine + +RUN apk add --no-cache bash +RUN apk add --no-cache curl +RUN apk add --no-cache jq + +ADD teamcityparams.xml /teamcityparams.xml +ADD vcr_test_terraform.sh /vcr_test_terraform.sh +ENTRYPOINT ["/vcr_test_terraform.sh"] diff --git a/.ci/containers/terraform-vcr-tester/teamcityparams.xml b/.ci/containers/terraform-vcr-tester/teamcityparams.xml new file mode 100644 index 000000000000..0b6415b3f913 --- /dev/null +++ b/.ci/containers/terraform-vcr-tester/teamcityparams.xml @@ -0,0 +1,9 @@ + + + + + + + + Magician triggered PR + \ No newline at end of file diff --git a/.ci/containers/terraform-vcr-tester/vcr_test_terraform.sh b/.ci/containers/terraform-vcr-tester/vcr_test_terraform.sh new file mode 100755 index 000000000000..a74e16f77d25 --- /dev/null +++ b/.ci/containers/terraform-vcr-tester/vcr_test_terraform.sh @@ -0,0 +1,23 @@ +#!/bin/bash + +set -e + +PR_NUMBER=$1 + +sed -i 's/{{PR_NUMBER}}/'"$PR_NUMBER"'/g' /teamcityparams.xml +curl --header "Accept: application/json" --header "Authorization: Bearer $TEAMCITY_TOKEN" https://ci-oss.hashicorp.engineering/app/rest/buildQueue --request POST --header "Content-Type:application/xml" --data-binary @/teamcityparams.xml -o build.json + +# Don't crash here if the curl failed due to authorization +# TODO(slevenick): remove this once this is all stable +set +e +URL=$(cat build.json | jq -r .webUrl) +ret=$? +if [ $ret -ne 0 ]; then + echo "Auth failed" +else + comment="I have triggered VCR tests based on this PR's diffs. 
See the results here: $URL" + + curl -H "Authorization: token ${GITHUB_TOKEN}" \ + -d "$(jq -r --arg comment "$comment" -n "{body: \$comment}")" \ + "https://api.github.com/repos/GoogleCloudPlatform/magic-modules/issues/${PR_NUMBER}/comments" +fi \ No newline at end of file diff --git a/.ci/containers/vcr-cassette-merger/Dockerfile b/.ci/containers/vcr-cassette-merger/Dockerfile new file mode 100644 index 000000000000..83cfea1d3ef9 --- /dev/null +++ b/.ci/containers/vcr-cassette-merger/Dockerfile @@ -0,0 +1,8 @@ +FROM gcr.io/google.com/cloudsdktool/cloud-sdk:alpine as resource + +RUN apk add --no-cache bash +RUN apk add --no-cache curl +RUN apk add --no-cache jq + +ADD vcr_merge.sh /vcr_merge.sh +ENTRYPOINT ["/vcr_merge.sh"] diff --git a/.ci/containers/vcr-cassette-merger/vcr_merge.sh b/.ci/containers/vcr-cassette-merger/vcr_merge.sh new file mode 100755 index 000000000000..e0077864f5c4 --- /dev/null +++ b/.ci/containers/vcr-cassette-merger/vcr_merge.sh @@ -0,0 +1,18 @@ +#!/bin/bash + +set -e + +REFERENCE=$1 + +PR_NUMBER=$(curl -s -H "Authorization: token ${GITHUB_TOKEN}" \ + "https://api.github.com/repos/GoogleCloudPlatform/magic-modules/pulls?state=closed&base=master&sort=updated&direction=desc" | \ + jq -r ".[] | if .merge_commit_sha == \"$REFERENCE\" then .number else empty end") + +set +e +gsutil ls gs://vcr-$GOOGLE_PROJECT/refs/heads/auto-pr-$PR_NUMBER/fixtures/ +if [ $? -eq 0 ]; then + # We have recorded new cassettes for this branch + gsutil -m cp gs://vcr-$GOOGLE_PROJECT/refs/heads/auto-pr-$PR_NUMBER/fixtures/* gs://vcr-$GOOGLE_PROJECT/fixtures/ + gsutil -m rm -r gs://vcr-$GOOGLE_PROJECT/refs/heads/auto-pr-$PR_NUMBER/ +fi +set -e diff --git a/.ci/gcb-generate-diffs.yml b/.ci/gcb-generate-diffs.yml index b071f5171e3d..6e5e35c32106 100644 --- a/.ci/gcb-generate-diffs.yml +++ b/.ci/gcb-generate-diffs.yml @@ -1,5 +1,10 @@ --- steps: + - name: 'gcr.io/graphite-docker-images/contributor-checker' + secretEnv: ["GITHUB_TOKEN"] + args: + - $_PR_NUMBER + # The GCB / GH integration doesn't satisfy our use case perfectly. # It doesn't check out the merge commit, and it doesn't check out the repo # itself - it only gives us the actual code, not the repo. 
So we need @@ -174,30 +179,45 @@ steps: args: - $_PR_NUMBER - - name: 'gcr.io/graphite-docker-images/changelog-checker' + - name: 'gcr.io/graphite-docker-images/terraform-tester' + id: tpgb-test secretEnv: ["GITHUB_TOKEN"] waitFor: ["diff"] args: + - 'beta' - $_PR_NUMBER - name: 'gcr.io/graphite-docker-images/terraform-tester' + id: tpg-test secretEnv: ["GITHUB_TOKEN"] waitFor: ["diff"] args: - - 'beta' + - 'ga' - $_PR_NUMBER - - name: 'gcr.io/graphite-docker-images/terraform-tester' - secretEnv: ["GITHUB_TOKEN"] + - name: 'gcr.io/graphite-docker-images/terraform-vcr-tester' + id: tpg-vcr-test + secretEnv: ["TEAMCITY_TOKEN", "GITHUB_TOKEN"] waitFor: ["diff"] + timeout: 1800s args: - - 'ga' - $_PR_NUMBER + - name: 'gcr.io/graphite-docker-images/changelog-checker' + secretEnv: ["GITHUB_TOKEN"] + waitFor: ["tpg-test", "tpgb-test"] + args: + - $_PR_NUMBER + +# Long timeout to enable waiting on VCR test +timeout: 2400s options: machineType: 'N1_HIGHCPU_32' secrets: - kmsKeyName: projects/graphite-docker-images/locations/global/keyRings/token-keyring/cryptoKeys/github-token secretEnv: - GITHUB_TOKEN: CiQADkR4NnCVXo1OLSWFuPX7eSiifaOfQVzSYmKi2jZdVbKlfYMSUQBfF82vNAgpvSVyhzM8JsQaP6Oky0SAdoR5fPED5cU3qxsCB9wArmdGcgQoRzP7S6jEWHRcvxv/xauznjkJQMWCORzcbUbk6T7k80bdo2mpqw== + GITHUB_TOKEN: CiQADkR4Nt6nHLI52Kc1W55OwpLdc4vjBfVR0SGQNzm6VSVj9lUSUQBfF82vVhn43A1jNYOv8ScoWgrZONwNrUabHfGjkvl+IZxcii0JlOVUawbscs4OJga0eitNNlagAOruLs6C926X20ZZPqWtH97ui6CKNvxgkQ== + - kmsKeyName: projects/graphite-docker-images/locations/global/keyRings/token-keyring/cryptoKeys/teamcity-token + secretEnv: + TEAMCITY_TOKEN: CiQAth83aSgKrb5ASI5XwE+yv62KbNtNG+O9gKXJzoflm65H7fESkwEASc1NF0oM3pHb5cUBAHcXZqFjEJrF4eGowPycUpKDmEncuQQSkm8v+dswSNXTXnX2C/reLpw9uGTw7G+K1kqA0sVrzYG3sTdDf/IcS//uloAerUff2wVIlV5rxV357PMkBl5dGyybnKMybgrXGl+CcW9PDLAwqfELWrr5zTSHy799dAhJZi1Wb5KbImmvvU5Z46g= diff --git a/.ci/gcb-push-downstream.yml b/.ci/gcb-push-downstream.yml index 6d870a1865b4..6d76c3197ed2 100644 --- a/.ci/gcb-push-downstream.yml +++ b/.ci/gcb-push-downstream.yml @@ -192,6 +192,12 @@ steps: - -c - git push https://modular-magician:$$GITHUB_TOKEN@github.com/GoogleCloudPlatform/magic-modules $COMMIT_SHA:inspec-sync + - name: 'gcr.io/graphite-docker-images/vcr-cassette-merger' + secretEnv: ["GITHUB_TOKEN", "GOOGLE_PROJECT"] + waitFor: ["tpg-push"] + args: + - $COMMIT_SHA + # set extremely long 1 day timeout, in order to ensure that any jams / backlogs can be cleared. timeout: 86400s options: @@ -201,4 +207,7 @@ options: secrets: - kmsKeyName: projects/graphite-docker-images/locations/global/keyRings/token-keyring/cryptoKeys/github-token secretEnv: - GITHUB_TOKEN: CiQADkR4NnCVXo1OLSWFuPX7eSiifaOfQVzSYmKi2jZdVbKlfYMSUQBfF82vNAgpvSVyhzM8JsQaP6Oky0SAdoR5fPED5cU3qxsCB9wArmdGcgQoRzP7S6jEWHRcvxv/xauznjkJQMWCORzcbUbk6T7k80bdo2mpqw== + GITHUB_TOKEN: CiQADkR4Nt6nHLI52Kc1W55OwpLdc4vjBfVR0SGQNzm6VSVj9lUSUQBfF82vVhn43A1jNYOv8ScoWgrZONwNrUabHfGjkvl+IZxcii0JlOVUawbscs4OJga0eitNNlagAOruLs6C926X20ZZPqWtH97ui6CKNvxgkQ== + - kmsKeyName: projects/graphite-docker-images/locations/global/keyRings/environment-keyring/cryptoKeys/ci-project-key + secretEnv: + GOOGLE_PROJECT: CiQAis6xrDDU4Wcxn5s8Y790IMxTUEe2d3SaYEXUGScHfaLjOw8SPwDOc1nLe6Yz0zzA0mcYTsXaeGSFYu7uQ5+QCtTProJWRv2ITrNwCS3AF/kvMCrHvltx7O1CZnJveutlVpZH3w== diff --git a/.ci/magic-modules/branch-magic-modules.sh b/.ci/magic-modules/branch-magic-modules.sh deleted file mode 100755 index 61530090444a..000000000000 --- a/.ci/magic-modules/branch-magic-modules.sh +++ /dev/null @@ -1,43 +0,0 @@ -#! 
/bin/bash -set -e -set -x - -pushd "magic-modules" -export GH_TOKEN -if PR_ID=$(git config --get pullrequest.id) && - [ -z "$USE_SHA" ] && - DEPS=$(python ./.ci/magic-modules/get_downstream_prs.py "$PR_ID") && - [ -z "$DEPS" ]; then - BRANCH="codegen-pr-$(git config --get pullrequest.id)" -else - BRANCH="codegen-sha-$(git rev-parse --short HEAD)" -fi -git checkout -B "$BRANCH" -# ./branchname is intentionally never committed - it isn't necessary once -# this output is no longer available. -echo "$BRANCH" > ./branchname - -set +x -# Don't show the credential in the output. -echo "$CREDS" > ~/github_private_key -set -x -chmod 400 ~/github_private_key - -# Update to head on master on all submodules, so we avoid spurious diffs. -# Note: $ALL_SUBMODULES will be re-split by the ssh-agent's "bash". -ssh-agent bash -c "ssh-add ~/github_private_key; git submodule update --remote --init $ALL_SUBMODULES" - -cp -r ./ ../magic-modules-branched/ - -if [ "true" == "$INCLUDE_PREVIOUS" ] ; then - # Since this is fetched after a merge commit, HEAD~ is - # the newest commit on the branch being merged into. - git reset --hard HEAD~ - BRANCH="$BRANCH-previous" - git checkout -B "$BRANCH" - # ./branchname is intentionally never committed - it isn't necessary once - # this output is no longer available. - echo "$BRANCH" > ./branchname - ssh-agent bash -c "ssh-add ~/github_private_key; git submodule update --remote --init $ALL_SUBMODULES" - cp -r ./ ../magic-modules-previous/ -fi diff --git a/.ci/magic-modules/branch.yml b/.ci/magic-modules/branch.yml deleted file mode 100644 index 4bf6450772c7..000000000000 --- a/.ci/magic-modules/branch.yml +++ /dev/null @@ -1,30 +0,0 @@ ---- -# This file takes one input: magic-modules in detached-HEAD state. -# It spits out "magic-modules-branched", a magic-modules repo on a new branch (named -# after the HEAD commit on the PR). -platform: linux - -image_resource: - type: docker-image - source: - # This task requires a container with 'git', 'python', and the pip - # package 'pygithub'. - repository: gcr.io/magic-modules/python - tag: '1.0' - -inputs: - - name: magic-modules - -outputs: - - name: magic-modules-branched - - name: magic-modules-previous - -params: - USE_SHA: "" - GH_TOKEN: "" - CREDS: "" - ALL_SUBMODULES: "" - INCLUDE_PREVIOUS: "" - -run: - path: magic-modules/.ci/magic-modules/branch-magic-modules.sh diff --git a/.ci/magic-modules/coverage-spreadsheet-upload.sh b/.ci/magic-modules/coverage-spreadsheet-upload.sh deleted file mode 100755 index d67ee2e53044..000000000000 --- a/.ci/magic-modules/coverage-spreadsheet-upload.sh +++ /dev/null @@ -1,31 +0,0 @@ -#!/bin/bash - -set -e -set -x - -# Service account credentials for GCP to allow terraform to work -export GOOGLE_CLOUD_KEYFILE_JSON="/tmp/google-account.json" -export GOOGLE_APPLICATION_CREDENTIALS="/tmp/google-account.json" - -# CI sets the contents of our json account secret in our environment; dump it -# to disk for use in tests. -set +x -echo "${SERVICE_ACCOUNT}" > /tmp/google-account.json -set -x - -gcloud auth activate-service-account magic-modules-spreadsheet@magic-modules.iam.gserviceaccount.com --key-file=$GOOGLE_CLOUD_KEYFILE_JSON - -pushd magic-modules-gcp -bundle install -gem install rspec - -# || true will suppress errors, but it's necessary for this to run. 
If unset, -# Concourse will fail on *any* rspec step failing (eg: any API mismatch) -bundle exec rspec tools/linter/spreadsheet.rb || true - -echo "File created" -date=$(date +'%m%d%Y') -echo "Date established" - -gsutil cp output.csv gs://magic-modules-coverage/$date.csv -popd diff --git a/.ci/magic-modules/coverage-spreadsheet-upload.yml b/.ci/magic-modules/coverage-spreadsheet-upload.yml deleted file mode 100644 index 46bf98413ccb..000000000000 --- a/.ci/magic-modules/coverage-spreadsheet-upload.yml +++ /dev/null @@ -1,15 +0,0 @@ ---- -platform: linux - -image_resource: -# This image has gcloud, which we need for uploading coverage file to a bucket. - type: docker-image - source: - repository: gcr.io/magic-modules/terraform-gcloud-inspec - tag: '0.12.16-4.0' - -inputs: - - name: magic-modules-gcp - -run: - path: magic-modules-gcp/.ci/magic-modules/coverage-spreadsheet-upload.sh diff --git a/.ci/magic-modules/create-diff-message.sh b/.ci/magic-modules/create-diff-message.sh deleted file mode 100755 index 75ba4275f77b..000000000000 --- a/.ci/magic-modules/create-diff-message.sh +++ /dev/null @@ -1,11 +0,0 @@ -#! /bin/bash - -pushd magic-modules-branched - -BRANCH_NAME=$(cat branchname) -{ - echo "## 3.0.0 diff report as of $(git rev-parse HEAD^2)"; - echo "[TPG Diff](https://github.com/modular-magician/terraform-provider-google/compare/$BRANCH_NAME-previous..$BRANCH_NAME)"; - echo "[TPGB Diff](https://github.com/modular-magician/terraform-provider-google-beta/compare/$BRANCH_NAME-previous..$BRANCH_NAME)"; - echo "[Mapper Diff](https://github.com/modular-magician/terraform-google-conversion/compare/$BRANCH_NAME-previous..$BRANCH_NAME)"; -} > ../message/message.txt diff --git a/.ci/magic-modules/create-diff-message.yml b/.ci/magic-modules/create-diff-message.yml deleted file mode 100644 index 1545d789e7be..000000000000 --- a/.ci/magic-modules/create-diff-message.yml +++ /dev/null @@ -1,19 +0,0 @@ ---- -platform: linux - -image_resource: - type: docker-image - source: - # This task requires a container with 'git', 'python', and the pip - # package 'pygithub'. - repository: gcr.io/magic-modules/python - tag: '1.0' - -inputs: - - name: magic-modules-branched - -outputs: - - name: message - -run: - path: magic-modules-branched/.ci/magic-modules/create-diff-message.sh diff --git a/.ci/magic-modules/create-pr.sh b/.ci/magic-modules/create-pr.sh deleted file mode 100755 index 8aefacfdcda3..000000000000 --- a/.ci/magic-modules/create-pr.sh +++ /dev/null @@ -1,198 +0,0 @@ -#!/bin/bash - -# This script configures the git submodule under magic-modules so that it is -# ready to create a new pull request. It is cloned in a detached-head state, -# but its branch is relevant to the PR creation process, so we want to make -# sure that it's on a branch, and most importantly that that branch tracks -# a branch upstream. - -set -e -set -x - -shopt -s dotglob -cp -r magic-modules/* magic-modules-with-comment - -PR_ID="$(cat ./mm-initial-pr/.git/id)" -ORIGINAL_PR_BRANCH="codegen-pr-$PR_ID" -set +e -ORIGINAL_PR_USER=$(curl "https://api.github.com/repos/GoogleCloudPlatform/magic-modules/issues/$PR_ID" | jq -r ".user.login") -set -e -pushd magic-modules-with-comment -echo "$ORIGINAL_PR_BRANCH" > ./original_pr_branch_name - -# Check out the magic-modules branch with the same name as the current tracked -# branch of the terraform submodule. All the submodules will be on the the same -# branch name - we pick terraform because it's the first one the magician supported. 
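-# The lookup below reads back a branch name like "codegen-pr-1234", which point-to-submodules.sh wrote into .gitmodules earlier in the pipeline.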
-BRANCH_NAME="$(git config -f .gitmodules --get submodule.build/terraform.branch)" -IFS="," read -ra TERRAFORM_VERSIONS <<< "$TERRAFORM_VERSIONS" - -git checkout -b "$BRANCH_NAME" -NEWLINE=$'\n' -MESSAGE="Hi! I'm the modular magician, I work on Magic Modules.$NEWLINE" -LAST_USER_COMMIT="$(git rev-parse HEAD~1^2)" - -if [ "$BRANCH_NAME" = "$ORIGINAL_PR_BRANCH" ]; then - MESSAGE="${MESSAGE}This PR seems not to have generated downstream PRs before, as of $LAST_USER_COMMIT. " -else - MESSAGE="${MESSAGE}I see that this PR has already had some downstream PRs generated. " - MESSAGE="${MESSAGE}Any open downstreams are already updated to your most recent commit, $LAST_USER_COMMIT. " -fi - -MESSAGE="${MESSAGE}${NEWLINE}## Pull request statuses" -DEPENDENCIES="" -LABELS="" -# There is no existing PR - this is the first pass through the pipeline and -# we will need to create a PR using 'hub'. - -# Check the files between this commit and HEAD -# If they're only contained in third_party, add the third_party label. -if [ -z "$(git diff --name-only HEAD^1 | grep -v "third_party" | grep -v ".gitmodules" | grep -r "build/")" ]; then - LABELS="${LABELS}only_third_party," -fi - -VALIDATOR_WARN_FILES="$(git show --name-only "${LAST_USER_COMMIT}" | grep -v ".gitmodules" | grep -v "build/" | grep -Ff '.ci/magic-modules/vars/validator_handwritten_files.txt' | sed 's/^/* /')" -if [ -n "${VALIDATOR_WARN_FILES}" ]; then - MESSAGE="${MESSAGE}${NEWLINE}**WARNING**: The following files changed in commit ${LAST_USER_COMMIT} may need corresponding changes in third_party/validator:" - MESSAGE="${MESSAGE}${NEWLINE}${VALIDATOR_WARN_FILES}${NEWLINE}" -fi - -# Terraform -if [ -n "$TERRAFORM_VERSIONS" ]; then - for VERSION in "${TERRAFORM_VERSIONS[@]}"; do - IFS=":" read -ra TERRAFORM_DATA <<< "$VERSION" - PROVIDER_NAME="${TERRAFORM_DATA[0]}" - SUBMODULE_DIR="${TERRAFORM_DATA[1]}" - TERRAFORM_REPO_USER="${TERRAFORM_DATA[2]}" - - pushd "build/$SUBMODULE_DIR" - - git log -1 --pretty=%s > ./downstream_body - echo "" >> ./downstream_body - echo "" >> ./downstream_body - if [ -n "$ORIGINAL_PR_USER" ]; then - echo "Original Author: @$ORIGINAL_PR_USER" >> ./downstream_body - fi - - git checkout -b "$BRANCH_NAME" - if hub pull-request -b "$TERRAFORM_REPO_USER/$PROVIDER_NAME:master" -h "$ORIGINAL_PR_BRANCH" -F ./downstream_body > ./tf_pr 2> ./tf_pr_err ; then - DEPENDENCIES="${DEPENDENCIES}depends: $(cat ./tf_pr) ${NEWLINE}" - LABELS="${LABELS}${PROVIDER_NAME}," - else - echo "$SUBMODULE_DIR - did not generate a PR." - if grep "No commits between" ./tf_pr_err; then - echo "There were no diffs in $SUBMODULE_DIR." - MESSAGE="$MESSAGE${NEWLINE}No diff detected in $PROVIDER_NAME." - elif grep "A pull request already exists" ./tf_pr_err; then - echo "Already have a PR for $SUBMODULE_DIR." - MESSAGE="$MESSAGE${NEWLINE}$PROVIDER_NAME already has an open PR." - fi - - fi - popd - done -fi - -if [ -n "$ANSIBLE_REPO_USER" ]; then - pushd build/ansible - - git log -1 --pretty=%s > ./downstream_body - echo "" >> ./downstream_body - echo "" >> ./downstream_body - if [ -n "$ORIGINAL_PR_USER" ]; then - echo "/cc @$ORIGINAL_PR_USER" >> ./downstream_body - fi - - git checkout -b "$BRANCH_NAME" - if hub pull-request -b "$ANSIBLE_REPO_USER/ansible_collections_google:master" -h "$ORIGINAL_PR_BRANCH" -F ./downstream_body > ./ansible_pr 2> ./ansible_pr_err ; then - DEPENDENCIES="${DEPENDENCIES}depends: $(cat ./ansible_pr) ${NEWLINE}" - LABELS="${LABELS}ansible," - else - echo "Ansible - did not generate a PR." 
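- # hub prints "No commits between ..." when the branch has no diff, and "A pull request already exists ..." on re-runs; grepping stderr tells the two cases apart.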
- if grep "No commits between" ./ansible_pr_err; then - echo "There were no diffs in Ansible." - MESSAGE="$MESSAGE${NEWLINE}No diff detected in Ansible." - elif grep "A pull request already exists" ./ansible_pr_err; then - MESSAGE="$MESSAGE${NEWLINE}Ansible already has an open PR." - fi - fi - popd - - pwd - - # If there is now a difference in the ansible_version_added files, those - # should be pushed back up to the user's MM branch to be reviewed. - if git diff --name-only HEAD^1 | grep "ansible_version_added.yaml"; then - # Setup git config. - git config --global user.email "magic-modules@google.com" - git config --global user.name "Modular Magician" - - BRANCH=$(git config --get pullrequest.branch) - REPO=$(git config --get pullrequest.repo) - # Add user's branch + get latest copy. - git remote add non-gcp-push-target "git@github.com:$REPO" - git fetch non-gcp-push-target $BRANCH - - # Make a commit to the current branch and track that commit's SHA1. - git add products/**/ansible_version_added.yaml - git commit -m "Ansible version_added changes" - CHERRY_PICKED_COMMIT=$(git rev-parse HEAD) - - # Checkout the user's branch + add the new cherry-picked commit. - git checkout non-gcp-push-target/$BRANCH - git cherry-pick $CHERRY_PICKED_COMMIT - - # Create commit + push (no force flag to avoid overwrites). - # If the push doesn't work, it's not problematic because a commit - # down the line will pick up the changes. - ssh-agent bash -c "ssh-add ~/github_private_key; git push non-gcp-push-target \"HEAD:$BRANCH\"" || true - - # Check out the branch we were on to ensure that the downstream commits don't change. - git checkout $CHERRY_PICKED_COMMIT - fi -fi - - if [ -n "$INSPEC_REPO_USER" ]; then - pushd build/inspec - - git log -1 --pretty=%s > ./downstream_body - echo "" >> ./downstream_body - echo "" >> ./downstream_body - if [ -n "$ORIGINAL_PR_USER" ]; then - echo "/cc @$ORIGINAL_PR_USER" >> ./downstream_body - fi - - git checkout -b "$BRANCH_NAME" - if hub pull-request -b "$INSPEC_REPO_USER/inspec-gcp:master" -h "$ORIGINAL_PR_BRANCH" -F ./downstream_body > ./inspec_pr 2> ./inspec_pr_err ; then - DEPENDENCIES="${DEPENDENCIES}depends: $(cat ./inspec_pr) ${NEWLINE}" - LABELS="${LABELS}inspec," - else - echo "InSpec - did not generate a PR." - if grep "No commits between" ./inspec_pr_err; then - echo "There were no diffs in Inspec." - MESSAGE="$MESSAGE${NEWLINE}No diff detected in Inspec." - elif grep "A pull request already exists" ./inspec_pr_err; then - MESSAGE="$MESSAGE${NEWLINE}InSpec already has an open PR." - fi - fi - popd -fi - -MESSAGE="${MESSAGE}${NEWLINE}## New Pull Requests" - -# Create PR comment with the list of dependencies. -if [ -z "$DEPENDENCIES" ]; then - MESSAGE="${MESSAGE}${NEWLINE}I didn't open any new pull requests because of this PR." -else - MESSAGE="${MESSAGE}${NEWLINE}I built this PR into one or more new PRs on other repositories, " - MESSAGE="${MESSAGE}and when those are closed, this PR will also be merged and closed." 
- MESSAGE="${MESSAGE}${NEWLINE}${DEPENDENCIES}" -fi - -echo "$MESSAGE" > ./pr_comment - -# Create Labels list with the comma-separated list of labels for this PR -if [ -z "$LABELS" ]; then - touch ./label_file -else - printf "%s" "$LABELS" > ./label_file -fi diff --git a/.ci/magic-modules/create-pr.yml b/.ci/magic-modules/create-pr.yml deleted file mode 100644 index 63ef789cdcd9..000000000000 --- a/.ci/magic-modules/create-pr.yml +++ /dev/null @@ -1,27 +0,0 @@ ---- -# This takes in the magic-modules repo in detached-HEAD state, -# creates a PR on downstream modules, and writes a comment into -# a file so that the PR can be updated with that comment. -platform: linux - -image_resource: - type: docker-image - source: - repository: gcr.io/magic-modules/hub - tag: '1.0' - -inputs: - - name: magic-modules - - name: mm-initial-pr - -outputs: - - name: magic-modules-with-comment - -run: - path: magic-modules/.ci/magic-modules/create-pr.sh - -params: - GITHUB_TOKEN: "" - ANSIBLE_REPO_USER: "" - INSPEC_REPO_USER: "" - TERRAFORM_VERSIONS: "" diff --git a/.ci/magic-modules/diff-terraform.sh b/.ci/magic-modules/diff-terraform.sh deleted file mode 100755 index 596ea881a016..000000000000 --- a/.ci/magic-modules/diff-terraform.sh +++ /dev/null @@ -1,88 +0,0 @@ -#!/bin/bash - -# The vast majority of this file is a direct copy of generate-terraform.sh. We could factor out all that -# code into a shared library, but I don't think we need to do that. This is an inherently temporary file, -# until TPG 3.0.0 is released, which is in the relatively near future. The cost of the copy is that -# we need to maintain both files - but the last change to that file was several months ago and I expect -# we're looking at 1 - 2 changes that need to be made in both places. The cost of not copying it is -# an extra few hours of work now, and some minor readability issues. - -set -x -set -e -source "$(dirname "$0")/helpers.sh" - -# Create $GOPATH structure - in order to successfully run Terraform codegen, we need to run -# it with a correctly-set-up $GOPATH. It calls out to `goimports`, which means that -# we need to have all the dependencies correctly downloaded. -export GOPATH="${PWD}/go" -mkdir -p "${GOPATH}/src/github.com/$GITHUB_ORG" - -for mm_dir in magic-modules-branched magic-modules-previous; do - - pushd $mm_dir - # delete the symlink if it exists - rm "${GOPATH}/src/github.com/$GITHUB_ORG/$PROVIDER_NAME" || true - ln -s "${PWD}/build/$SHORT_NAME/" "${GOPATH}/src/github.com/$GITHUB_ORG/$PROVIDER_NAME" - popd - - pushd "${GOPATH}/src/github.com/$GITHUB_ORG/$PROVIDER_NAME" - - # Other orgs are not fully-generated. This may be transitional - if this causes pain, - # try vendoring into third-party, as with TPG and TPGB. - if [ "$GITHUB_ORG" = "terraform-providers" ]; then - # This line removes every file which is not specified here. - # If you add files to Terraform which are not generated, you have to add them here. - # It uses the somewhat obtuse 'find' command. To explain: - # "find .": all files and directories recursively under the current directory, subject to matchers. - # "-type f": all regular real files, i.e. not directories. - # "-not": do the opposite of the next thing, always used with another matcher. - # "-wholename": entire relative path - including directory names - matches following wildcard. - # "-name": filename alone matches following string. e.g. 
-name README.md matches ./README.md *and* ./foo/bar/README.md - # "-exec": for each file found, execute the command following until the literal ';' - find . -type f -not -wholename "./.git*" -not -wholename "./vendor*" -not -name ".travis.yml" -not -name ".golangci.yml" -not -name "CHANGELOG.md" -not -name GNUmakefile -not -name LICENSE -not -name README.md -not -wholename "./examples*" -not -name "go.mod" -not -name "go.sum" -not -name "staticcheck.conf" -not -name ".hashibot.hcl" -exec git rm {} \; - fi - - popd - - pushd $mm_dir - - # Choose the author of the most recent commit as the downstream author - # Note that we don't use the last submitted commit, we use the primary GH email - # of the GH PR submitted. If they've enabled a private email, we'll actually - # use their GH noreply email which isn't compatible with CLAs. - COMMIT_AUTHOR="$(git log --pretty="%an <%ae>" -n1 HEAD)" - - if [ -n "$OVERRIDE_PROVIDER" ] && [ "$OVERRIDE_PROVIDER" != "null" ]; then - bundle exec compiler -a -e terraform -f "$OVERRIDE_PROVIDER" -o "${GOPATH}/src/github.com/$GITHUB_ORG/$PROVIDER_NAME/" - else - bundle exec compiler -a -e terraform -o "${GOPATH}/src/github.com/$GITHUB_ORG/$PROVIDER_NAME/" -v "$VERSION" - fi - - if [ "$mm_dir" == "magic-modules-branched" ] ; then - TERRAFORM_COMMIT_MSG="$(cat .git/title)" - else - TERRAFORM_COMMIT_MSG="Old generated base as of $(git rev-parse HEAD)." - fi - - BRANCH_NAME="$(cat branchname)" - - pushd "build/$SHORT_NAME" - - # These config entries will set the "committer". - git config --global user.email "magic-modules@google.com" - git config --global user.name "Modular Magician" - - git add -A - - git commit -m "$TERRAFORM_COMMIT_MSG" --author="$COMMIT_AUTHOR" || true # don't crash if no changes - git checkout -B "$BRANCH_NAME" - - popd - popd - -done - -mkdir "./terraform-diff/$VERSION" - -git clone "magic-modules-branched/build/$SHORT_NAME" "./terraform-diff/$VERSION/new" -git clone "magic-modules-previous/build/$SHORT_NAME" "./terraform-diff/$VERSION/old" diff --git a/.ci/magic-modules/diff-terraform.yml b/.ci/magic-modules/diff-terraform.yml deleted file mode 100644 index 337099ad8432..000000000000 --- a/.ci/magic-modules/diff-terraform.yml +++ /dev/null @@ -1,28 +0,0 @@ ---- -# This file takes two inputs: magic-modules-branched in detached-HEAD state, and magic-modules-previous. -# It spits out "terraform-diff/comment.txt", which contains the markdown-format diff, as well as -# "terraform-diff/old" and "terraform-diff/new". -platform: linux - -image_resource: - type: docker-image - source: - repository: gcr.io/magic-modules/go-ruby-python - tag: '1.11.5-2.6.0-2.7-v6' - -inputs: - - name: magic-modules-branched - - name: magic-modules-previous - -outputs: - - name: terraform-diff - -run: - path: magic-modules-branched/.ci/magic-modules/diff-terraform.sh - -params: - VERSION: "" - PROVIDER_NAME: "" - SHORT_NAME: "" - OVERRIDE_PROVIDER: "" - GITHUB_ORG: "terraform-providers" diff --git a/.ci/magic-modules/downstream-changelog-metadata-mergeprs.yml b/.ci/magic-modules/downstream-changelog-metadata-mergeprs.yml deleted file mode 100644 index b71e7fff7a4c..000000000000 --- a/.ci/magic-modules/downstream-changelog-metadata-mergeprs.yml +++ /dev/null @@ -1,28 +0,0 @@ ---- -# This file takes in mm-approved-prs (magic-modules) to get code that it runs -# and upstream PR. -# Required information: -# - Github API token. -# - Upstream PR (magic modules) number -# It produces no output. 
- -platform: linux - -image_resource: - type: docker-image - source: - # This task requires python + pip package 'pygithub'. - repository: gcr.io/magic-modules/python - tag: '1.0' - -inputs: - - name: mm-approved-prs - -params: - GITHUB_TOKEN: "" - DOWNSTREAM_REPOS: "" - -run: - path: mm-approved-prs/.ci/magic-modules/downstream_changelog_metadata.py - args: - - mm-approved-prs/.git/id diff --git a/.ci/magic-modules/downstream-changelog-metadata.yml b/.ci/magic-modules/downstream-changelog-metadata.yml deleted file mode 100644 index 08e12e758a3b..000000000000 --- a/.ci/magic-modules/downstream-changelog-metadata.yml +++ /dev/null @@ -1,29 +0,0 @@ ---- -# This file takes in mm-approved-prs (magic-modules) to get code that it runs -# and upstream PR. -# Required information: -# - Github API token. -# - Upstream PR (magic modules) number -# It produces no output. - -platform: linux - -image_resource: - type: docker-image - source: - # Requires python + pip packages 'pygithub', 'mistune', 'beautifulsoup4' - repository: gcr.io/magic-modules/python - tag: '1.0' - -inputs: - - name: magic-modules-with-comment - - name: mm-initial-pr - -params: - GITHUB_TOKEN: "" - DOWNSTREAM_REPOS: "" - -run: - path: magic-modules-with-comment/.ci/magic-modules/downstream_changelog_metadata.py - args: - - mm-initial-pr/.git/id diff --git a/.ci/magic-modules/downstream_changelog_metadata.py b/.ci/magic-modules/downstream_changelog_metadata.py deleted file mode 100755 index 81640ca8fe75..000000000000 --- a/.ci/magic-modules/downstream_changelog_metadata.py +++ /dev/null @@ -1,103 +0,0 @@ -#!/usr/bin/env python -""" -Script to edit downstream PRs with CHANGELOG release note and label metadata. - -Usage: - ./downstream_changelog_info.py path/to/.git/.id - python /downstream_changelog_info.py - -Note that release_note/labels are authoritative - if empty or not set in the MM -upstream PR, release notes will be removed from downstreams and labels -unset. -""" -import os -import sys -import github -from pyutils import strutils, downstreams - -CHANGELOG_LABEL_PREFIX = "changelog: " - -def downstream_changelog_info(gh, upstream_pr_num, changelog_repos): - """Edit downstream PRs with CHANGELOG info. - - Args: - gh: github.Github client - upstream_pr_num: Upstream PR number - changelog_repos: List of repo names to downstream changelog metadata for - """ - # Parse CHANGELOG info from upstream - print "Fetching upstream PR '%s'..." 
% upstream_pr_num - upstream_pr = gh.get_repo(downstreams.UPSTREAM_REPO)\ - .get_pull(upstream_pr_num) - release_notes = strutils.get_release_notes(upstream_pr.body) - labels_to_add = strutils.find_prefixed_labels( - [l.name for l in upstream_pr.labels], - CHANGELOG_LABEL_PREFIX) - - if not labels_to_add and not release_notes: - print "No release note or labels found, skipping PR %d" % ( - upstream_pr_num) - return - - print "Found changelog info on upstream PR %d:" % ( - upstream_pr.number) - print "Release Note: \"%s\"" % release_notes - print "Labels: %s" % labels_to_add - - parsed_urls = downstreams.get_parsed_downstream_urls(gh, upstream_pr.number) - found = False - - for repo_name, pulls in parsed_urls: - found = True - print "Found downstream PR for repo %s" % repo_name - - if repo_name not in changelog_repos: - print "[DEBUG] skipping repo %s with no CHANGELOG" % repo_name - continue - - print "Generating changelog for pull requests in %s" % repo_name - - print "Fetching repo %s" % repo_name - ghrepo = gh.get_repo(repo_name) - - for _r, prnum in pulls: - print "Fetching %s PR %d" % (repo_name, prnum) - pr = ghrepo.get_pull(int(prnum)) - set_changelog_info(pr, release_notes, labels_to_add) - - if not found: - print "No downstreams found for upstream PR %d, returning!" % upstream_pr.number - -def set_changelog_info(gh_pull, release_notes, labels_to_add): - """Set release note and labels on a downstream PR in Github. - - Args: - gh_pull: A github.PullRequest.PullRequest handle - release_note: String of release note text to set - labels_to_add: List of strings. Changelog-related labels to add/replace. - """ - print "Setting changelog info for downstream PR %s" % gh_pull.url - edited_body = strutils.set_release_notes(release_notes, gh_pull.body) - gh_pull.edit(body=edited_body) - - # Get all non-changelog-related labels - labels_to_set = [] - for l in gh_pull.get_labels(): - if not l.name.startswith(CHANGELOG_LABEL_PREFIX): - labels_to_set.append(l.name) - labels_to_set += labels_to_add - gh_pull.set_labels(*labels_to_set) - - -if __name__ == '__main__': - downstream_repos = os.environ.get('DOWNSTREAM_REPOS').split(',') - if len(downstream_repos) == 0: - print "Skipping, no downstreams repos given to downstream changelog info for" - sys.exit(0) - - assert len(sys.argv) == 2, "expected id filename as argument" - with open(sys.argv[1]) as f: - pr_num = int(f.read()) - downstream_changelog_info( - github.Github(os.environ.get('GITHUB_TOKEN')), - pr_num, downstream_repos) diff --git a/.ci/magic-modules/ensure-downstreams-merged.yml b/.ci/magic-modules/ensure-downstreams-merged.yml deleted file mode 100644 index 389d122e2bc6..000000000000 --- a/.ci/magic-modules/ensure-downstreams-merged.yml +++ /dev/null @@ -1,24 +0,0 @@ ---- -# This file takes in only magic-modules, to get the code that -# it runs. It does need the github API token. -# It produces no output. -platform: linux - -image_resource: - type: docker-image - source: - # This task requires a container with python and the pip - # package 'pygithub'. 
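- # (The gcr.io/magic-modules/python image below provides both; see the pip installs in .ci/containers/python/Dockerfile.)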
- repository: gcr.io/magic-modules/python - tag: '1.0' - -inputs: - - name: mm-approved-prs - -params: - GH_TOKEN: "" - -run: - path: mm-approved-prs/.ci/magic-modules/ensure_downstreams_merged.py - args: - - mm-approved-prs/.git/id diff --git a/.ci/magic-modules/ensure_downstreams_merged.py b/.ci/magic-modules/ensure_downstreams_merged.py deleted file mode 100755 index 25867f22c6cd..000000000000 --- a/.ci/magic-modules/ensure_downstreams_merged.py +++ /dev/null @@ -1,22 +0,0 @@ -#!/usr/bin/env python -""" -This script takes the name of a file containing an upstream PR number -and returns an error if not all of its downstreams have been merged. - -Required env vars: - GH_TOKEN: Github token -""" -import os -import sys -from github import Github -from pyutils import downstreams - -if __name__ == '__main__': - assert len(sys.argv) == 2, "expected id filename as argument" - with open(sys.argv[1]) as f: - pr_num = int(f.read()) - - client = Github(os.environ.get('GH_TOKEN')) - unmerged = downstreams.find_unmerged_downstreams(client, pr_num) - if unmerged: - raise ValueError("some PRs are unmerged", unmerged) diff --git a/.ci/magic-modules/generate-ansible.sh b/.ci/magic-modules/generate-ansible.sh deleted file mode 100755 index 6e14d5a47f18..000000000000 --- a/.ci/magic-modules/generate-ansible.sh +++ /dev/null @@ -1,45 +0,0 @@ -#!/bin/bash - -# This script takes in 'magic-modules-branched', a git repo tracking the head of a PR against magic-modules. -# It outputs "ansible-generated", a non-submodule git repo containing the generated ansible code. - -set -x -set -e -source "$(dirname "$0")/helpers.sh" -PATCH_DIR="$(pwd)/patches" - -pushd magic-modules-branched - -# Choose the author of the most recent commit as the downstream author -# Note that we don't use the last submitted commit, we use the primary GH email -# of the GH PR submitted. If they've enabled a private email, we'll actually -# use their GH noreply email which isn't compatible with CLAs. -COMMIT_AUTHOR="$(git log --pretty="%an <%ae>" -n1 HEAD)" - -# Remove all modules so that old files are removed in process. -rm build/ansible/plugins/modules/gcp_* - -bundle exec compiler -a -e ansible -o "build/ansible/" - -ANSIBLE_COMMIT_MSG="$(cat .git/title)" - -pushd "build/ansible" -# This module is handwritten. It's the only one. -# It was deleted earlier, so it needs to be undeleted. -git checkout HEAD -- plugins/modules/gcp_storage_object.py - -# These config entries will set the "committer". -git config --global user.email "magic-modules@google.com" -git config --global user.name "Modular Magician" - -git add -A - -git commit -m "$ANSIBLE_COMMIT_MSG" --author="$COMMIT_AUTHOR" || true # don't crash if no changes -git checkout -B "$(cat ../../branchname)" - -apply_patches "$PATCH_DIR/modular-magician/ansible" "$ANSIBLE_COMMIT_MSG" "$COMMIT_AUTHOR" "master" - -popd -popd - -git clone magic-modules-branched/build/ansible ./ansible-generated diff --git a/.ci/magic-modules/generate-ansible.yml b/.ci/magic-modules/generate-ansible.yml deleted file mode 100644 index 326d3458803b..000000000000 --- a/.ci/magic-modules/generate-ansible.yml +++ /dev/null @@ -1,21 +0,0 @@ ---- -# This file takes two inputs: magic-modules-branched in detached-HEAD state, and the patches. -# It spits out "ansible-generated", a ansible repo on a new branch (named after the -# HEAD commit on the PR), with the new generated code in it. 
-platform: linux - -image_resource: - type: docker-image - source: - repository: gcr.io/magic-modules/go-ruby-python - tag: '1.11.5-2.6.0-2.7-v6' - -inputs: - - name: magic-modules-branched - - name: patches - -outputs: - - name: ansible-generated - -run: - path: magic-modules-branched/.ci/magic-modules/generate-ansible.sh diff --git a/.ci/magic-modules/generate-inspec.sh b/.ci/magic-modules/generate-inspec.sh deleted file mode 100755 index 0ff8d96d7384..000000000000 --- a/.ci/magic-modules/generate-inspec.sh +++ /dev/null @@ -1,39 +0,0 @@ -#!/bin/bash - -# This script takes in 'magic-modules-branched', a git repo tracking the head of a PR against magic-modules. -# It outputs "inspec-generated", a non-submodule git repo containing the generated inspec code. - -set -x -set -e -source "$(dirname "$0")/helpers.sh" -PATCH_DIR="$(pwd)/patches" - -pushd magic-modules-branched - -# Choose the author of the most recent commit as the downstream author -# Note that we don't use the last submitted commit, we use the primary GH email -# of the GH PR submitted. If they've enabled a private email, we'll actually -# use their GH noreply email which isn't compatible with CLAs. -COMMIT_AUTHOR="$(git log --pretty="%an <%ae>" -n1 HEAD)" - -bundle exec compiler -a -e inspec -o "build/inspec/" -v beta - -INSPEC_COMMIT_MSG="$(cat .git/title)" - -pushd "build/inspec" - -# These config entries will set the "committer". -git config --global user.email "magic-modules@google.com" -git config --global user.name "Modular Magician" - -git add -A - -git commit -m "$INSPEC_COMMIT_MSG" --author="$COMMIT_AUTHOR" || true # don't crash if no changes -git checkout -B "$(cat ../../branchname)" - -apply_patches "$PATCH_DIR/modular-magician/inspec-gcp" "$INSPEC_COMMIT_MSG" "$COMMIT_AUTHOR" "master" - -popd -popd - -git clone magic-modules-branched/build/inspec ./inspec-generated diff --git a/.ci/magic-modules/generate-inspec.yml b/.ci/magic-modules/generate-inspec.yml deleted file mode 100644 index a984befdaaa9..000000000000 --- a/.ci/magic-modules/generate-inspec.yml +++ /dev/null @@ -1,21 +0,0 @@ ---- -# This file takes two inputs: magic-modules-branched in detached-HEAD state, and the patches. -# It spits out "inspec-generated", an inspec repo on a new branch (named after the -# HEAD commit on the PR), with the new generated code in it. 
-platform: linux - -image_resource: - type: docker-image - source: - repository: gcr.io/magic-modules/go-ruby-python - tag: '1.11.5-2.6.0-2.7-v6' - -inputs: - - name: magic-modules-branched - - name: patches - -outputs: - - name: inspec-generated - -run: - path: magic-modules-branched/.ci/magic-modules/generate-inspec.sh diff --git a/.ci/magic-modules/generate-terraform-all-platforms.sh b/.ci/magic-modules/generate-terraform-all-platforms.sh deleted file mode 100755 index 5e8283e47176..000000000000 --- a/.ci/magic-modules/generate-terraform-all-platforms.sh +++ /dev/null @@ -1,65 +0,0 @@ -#!/usr/bin/env bash - - -set -x - -function init { - START_DIR=${PWD} - # Setup GOPATH - export GOPATH=${PWD}/go - # Setup GOBIN - export GOBIN=${PWD}/dist - # Create GOBIN folder - mkdir -p "$GOBIN" - # Create GOPATH structure - mkdir -p "${GOPATH}/src/github.com/terraform-providers" - # Copy the repo - cp -rf "$1" "${GOPATH}/src/github.com/terraform-providers/terraform-provider-google" - # Paths and vars - PROVIDER_NAME="google" - PROVIDERPATH="$GOPATH/src/github.com/terraform-providers" - SRC_DIR="$PROVIDERPATH/terraform-provider-$PROVIDER_NAME" - TARGET_DIR="$START_DIR/dist" - XC_ARCH=${XC_ARCH:-"386 amd64 arm"} - XC_OS=${XC_OS:=linux darwin windows freebsd openbsd solaris} - XC_EXCLUDE_OSARCH="!darwin/arm !darwin/386" - export CGO_ENABLED=0 - mkdir -p "$TARGET_DIR" -} - -function installGox { - if ! which gox > /dev/null; then - go get -u github.com/mitchellh/gox - fi -} - -function compile { - pushd "$SRC_DIR" - printf "\n" - make fmtcheck - - # Set LD Flags - LD_FLAGS="-s -w" - - # Clean any old directories (should never be here) - rm -f bin/* - rm -fr pkg/* - # Build with gox - "$GOBIN/gox" \ - -os="${XC_OS}" \ - -arch="${XC_ARCH}" \ - -osarch="${XC_EXCLUDE_OSARCH}" \ - -ldflags "${LD_FLAGS}" \ - -output "$TARGET_DIR/terraform-provider-${PROVIDER_NAME}.{{.OS}}_{{.Arch}}" \ - . - - popd -} - -function main { - init "$1" - installGox - compile -} - -main "$@" diff --git a/.ci/magic-modules/generate-terraform-all-platforms.yml b/.ci/magic-modules/generate-terraform-all-platforms.yml deleted file mode 100644 index 75653b224346..000000000000 --- a/.ci/magic-modules/generate-terraform-all-platforms.yml +++ /dev/null @@ -1,19 +0,0 @@ ---- -platform: linux -inputs: - - name: terraform-head - - name: magic-modules-gcp - -image_resource: - type: docker-image - source: - repository: golang - tag: '1.10' - -run: - path: magic-modules-gcp/.ci/magic-modules/generate-terraform-all-platforms.sh - args: - - terraform-head - -outputs: - - name: dist diff --git a/.ci/magic-modules/generate-terraform.sh b/.ci/magic-modules/generate-terraform.sh deleted file mode 100755 index f569b95c0460..000000000000 --- a/.ci/magic-modules/generate-terraform.sh +++ /dev/null @@ -1,72 +0,0 @@ -#!/bin/bash - -# This script takes in 'magic-modules-branched', a git repo tracking the head of a PR against magic-modules. -# It outputs "terraform-generated", a non-submodule git repo containing the generated terraform code. - -set -x -set -e -source "$(dirname "$0")/helpers.sh" -PATCH_DIR="$(pwd)/patches" - -# Create $GOPATH structure - in order to successfully run Terraform codegen, we need to run -# it with a correctly-set-up $GOPATH. It calls out to `goimports`, which means that -# we need to have all the dependencies correctly downloaded. 
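-# e.g. for the GA provider the result is $GOPATH/src/github.com/terraform-providers/terraform-provider-google, with the build/ submodule symlinked into place below.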
-export GOPATH="${PWD}/go" -mkdir -p "${GOPATH}/src/github.com/$GITHUB_ORG" - -pushd magic-modules-branched -ln -s "${PWD}/build/$SHORT_NAME/" "${GOPATH}/src/github.com/$GITHUB_ORG/$PROVIDER_NAME" -popd - -pushd "${GOPATH}/src/github.com/$GITHUB_ORG/$PROVIDER_NAME" - -# Other orgs are not fully-generated. This may be transitional - if this causes pain, -# try vendoring into third-party, as with TPG and TPGB. -if [ "$GITHUB_ORG" = "terraform-providers" ]; then - # This line removes every file which is not specified here. - # If you add files to Terraform which are not generated, you have to add them here. - # It uses the somewhat obtuse 'find' command. To explain: - # "find .": all files and directories recursively under the current directory, subject to matchers. - # "-type f": all regular real files, i.e. not directories. - # "-not": do the opposite of the next thing, always used with another matcher. - # "-wholename": entire relative path - including directory names - matches following wildcard. - # "-name": filename alone matches following string. e.g. -name README.md matches ./README.md *and* ./foo/bar/README.md - # "-exec": for each file found, execute the command following until the literal ';' - find . -type f -not -wholename "./.git*" -not -wholename "./vendor*" -not -name ".travis.yml" -not -name ".golangci.yml" -not -name "CHANGELOG.md" -not -name "GNUmakefile" -not -name "docscheck.sh" -not -name "LICENSE" -not -name "README.md" -not -wholename "./examples*" -not -name "go.mod" -not -name "go.sum" -not -name "staticcheck.conf" -not -name ".go-version" -not -name ".hashibot.hcl" -not -name "tools.go" -exec git rm {} \; -fi - -popd - -pushd magic-modules-branched - -# Choose the author of the most recent commit as the downstream author -# Note that we don't use the last submitted commit, we use the primary GH email -# of the GH PR submitted. If they've enabled a private email, we'll actually -# use their GH noreply email which isn't compatible with CLAs. -COMMIT_AUTHOR="$(git log --pretty="%an <%ae>" -n1 HEAD)" - -if [ -n "$OVERRIDE_PROVIDER" ] && [ "$OVERRIDE_PROVIDER" != "null" ]; then - bundle exec compiler -a -e terraform -f "$OVERRIDE_PROVIDER" -o "${GOPATH}/src/github.com/$GITHUB_ORG/$PROVIDER_NAME/" -else - bundle exec compiler -a -e terraform -o "${GOPATH}/src/github.com/$GITHUB_ORG/$PROVIDER_NAME/" -v "$VERSION" -fi - -TERRAFORM_COMMIT_MSG="$(cat .git/title)" - -pushd "build/$SHORT_NAME" - -# These config entries will set the "committer". -git config --global user.email "magic-modules@google.com" -git config --global user.name "Modular Magician" - -git add -A - -git commit -m "$TERRAFORM_COMMIT_MSG" --author="$COMMIT_AUTHOR" || true # don't crash if no changes -git checkout -B "$(cat ../../branchname)" - -apply_patches "$PATCH_DIR/$GITHUB_ORG/$PROVIDER_NAME" "$TERRAFORM_COMMIT_MSG" "$COMMIT_AUTHOR" "master" - -popd -popd - -git clone "magic-modules-branched/build/$SHORT_NAME" "./terraform-generated/$VERSION" diff --git a/.ci/magic-modules/generate-terraform.yml b/.ci/magic-modules/generate-terraform.yml deleted file mode 100644 index ab20a5875400..000000000000 --- a/.ci/magic-modules/generate-terraform.yml +++ /dev/null @@ -1,28 +0,0 @@ ---- -# This file takes two inputs: magic-modules-branched in detached-HEAD state, and the list of patches. -# It spits out "terraform-generated", a terraform repo on a new branch (named after the -# HEAD commit on the PR), with the new generated code in it. 
-platform: linux - -image_resource: - type: docker-image - source: - repository: gcr.io/magic-modules/go-ruby-python - tag: '1.11.5-2.6.0-2.7-v6' - -inputs: - - name: magic-modules-branched - - name: patches - -outputs: - - name: terraform-generated - -run: - path: magic-modules-branched/.ci/magic-modules/generate-terraform.sh - -params: - VERSION: "" - PROVIDER_NAME: "" - SHORT_NAME: "" - OVERRIDE_PROVIDER: "" - GITHUB_ORG: "terraform-providers" diff --git a/.ci/magic-modules/get-merged-patches.yml b/.ci/magic-modules/get-merged-patches.yml deleted file mode 100644 index 7afae2e1094e..000000000000 --- a/.ci/magic-modules/get-merged-patches.yml +++ /dev/null @@ -1,28 +0,0 @@ ---- -# This file takes in only magic-modules, to get the code that -# it runs. It does need the github API token. -# It produces "patches", a set of directories that contain -# `git format-patch` style patches for any PR which was generated -# by MagicModules, and which has already been merged, despite -# the upstream MagicModules PR not yet being merged. -platform: linux - -image_resource: - type: docker-image - source: - # This task requires a container with python and the pip - # package 'pygithub'. - repository: gcr.io/magic-modules/python - tag: '1.0' - -inputs: - - name: magic-modules - -outputs: - - name: patches - -params: - GH_TOKEN: "" - -run: - path: magic-modules/.ci/magic-modules/get_merged_patches.py diff --git a/.ci/magic-modules/get_downstream_prs.py b/.ci/magic-modules/get_downstream_prs.py deleted file mode 100755 index ce4c7c739da0..000000000000 --- a/.ci/magic-modules/get_downstream_prs.py +++ /dev/null @@ -1,14 +0,0 @@ -#!/usr/bin/env python -import os -import sys -from github import Github -from pyutils import downstreams - -if __name__ == '__main__': - assert len(sys.argv) == 2, "expected a Github PR ID as argument" - upstream_pr = int(sys.argv[1]) - - downstream_urls = downstreams.get_downstream_urls( - Github(os.environ.get('GH_TOKEN')), upstream_pr) - for url in downstream_urls: - print url diff --git a/.ci/magic-modules/get_merged_patches.py b/.ci/magic-modules/get_merged_patches.py deleted file mode 100755 index dd4ba73a152a..000000000000 --- a/.ci/magic-modules/get_merged_patches.py +++ /dev/null @@ -1,45 +0,0 @@ -#!/usr/bin/env python -import os -import urllib -from github import Github -from pyutils import downstreams - -def get_merged_patches(gh): - """Download all merged patches for open upstream PRs. - - Args: - gh: Github client to make calls to Github with. - """ - open_pulls = gh.get_repo('GoogleCloudPlatform/magic-modules')\ - .get_pulls(state='open') - for open_pr in open_pulls: - print 'Downloading patches for upstream PR %d...' % open_pr.number - parsed_urls = downstreams.get_parsed_downstream_urls(gh, open_pr.number) - for repo_name, pulls in parsed_urls: - repo = gh.get_repo(repo_name) - for r, pr_num in pulls: - print 'Check to see if %s/%s is merged and should be downloaded\n' % ( - r, pr_num) - downstream_pr = repo.get_pull(int(pr_num)) - if downstream_pr.is_merged(): - download_patch(r, downstream_pr) - -def download_patch(repo, pr): - """Download merged downstream PR patch. 
- - Args: - pr: Github Pull request to download patch for - """ - download_location = os.path.join('./patches', repo_name, '%d.patch' % pr.id) - print download_location - # Skip already downloaded patches - if os.path.exists(download_location): - return - - if not os.path.exists(os.path.dirname(download_location)): - os.makedirs(os.path.dirname(download_location)) - urllib.urlretrieve(pr.patch_url, download_location) - -if __name__ == '__main__': - gh = Github(os.environ.get('GH_TOKEN')) - get_merged_patches(gh) diff --git a/.ci/magic-modules/helpers.sh b/.ci/magic-modules/helpers.sh deleted file mode 100644 index 8635966bba97..000000000000 --- a/.ci/magic-modules/helpers.sh +++ /dev/null @@ -1,21 +0,0 @@ -# Arguments to 'apply_patches' are: -# - name of patch directory -# - commit message -# - author -# - target branch -function apply_patches { - # Apply necessary downstream patches. - shopt -s nullglob - for patch in "$1"/*; do - # This is going to apply the patch as at least 1 commit, possibly more. - git am --3way --signoff "$patch" - done - shopt -u nullglob - # Now, collapse the patch commits into one. - # This looks a little silly, but here's what we're doing. - # We get rid of all the commits since we diverged from 'master', - # We keep all the changes (--soft). - git reset --soft "$(git merge-base HEAD "$4")" - # Then we commit again. - git commit -m "$2" --author="$3" --signoff || true # don't crash if no changes -} diff --git a/.ci/magic-modules/merge-pr.sh b/.ci/magic-modules/merge-pr.sh deleted file mode 100755 index 23d90334b104..000000000000 --- a/.ci/magic-modules/merge-pr.sh +++ /dev/null @@ -1,60 +0,0 @@ -#!/bin/bash - -# This script updates the submodule to track terraform master. -set -e -set -x -shopt -s dotglob - -# Since these creds are going to be managed externally, we need to pass -# them into the container as an environment variable. We'll use -# ssh-agent to ensure that these are the credentials used to update. -set +x -echo "$CREDS" > ~/github_private_key -set -x -chmod 400 ~/github_private_key - -pushd mm-approved-prs -ID=$(git config --get pullrequest.id) -# We need to know what branch to check out for the update. -BRANCH=$(git config --get pullrequest.branch) -REPO=$(git config --get pullrequest.repo) -popd - -cp -r mm-approved-prs/* mm-output - -pushd mm-output -# The github pull request resource reads this value to find -# out which pull request to update. -git config pullrequest.id "$ID" - -# We should rebase onto master to avoid ugly merge histories. -git fetch origin master -git config --global user.email "magic-modules@google.com" -git config --global user.name "Modular Magician" -git rebase origin/master - -ssh-agent bash -c "ssh-add ~/github_private_key; git submodule update --remote --init $ALL_SUBMODULES" - -# Word-splitting here is intentional. -git add $ALL_SUBMODULES - -# It's okay for the commit to fail if there's no changes. -set +e -git commit -m "Update tracked submodules -> HEAD on $(date) - -Tracked submodules are $ALL_SUBMODULES." -echo "Merged PR #$ID." > ./commit_message - -# If the repo isn't 'GoogleCloudPlatform/magic-modules', then the PR has been -# opened from someone's fork. We ought to have push rights to that fork, no -# problem, but if we don't, that's also okay. This is a tiny bit dangerous -# because it's a force-push. 
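-# (A plain push would be rejected here because the rebase onto origin/master above rewrote this branch's history.)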
- -set +e -if [ "$REPO" != "GoogleCloudPlatform/magic-modules" ]; then - git remote add non-gcp-push-target "git@github.com:$REPO" - # We know we have a commit, so all the machinery of the git resources is - # unnecessary. We can just try to push directly. - ssh-agent bash -c "ssh-add ~/github_private_key; git push -f non-gcp-push-target \"HEAD:$BRANCH\"" -fi -set -e diff --git a/.ci/magic-modules/merge.yml b/.ci/magic-modules/merge.yml deleted file mode 100644 index 40545a7bd198..000000000000 --- a/.ci/magic-modules/merge.yml +++ /dev/null @@ -1,23 +0,0 @@ ---- -# This takes in the approved PR and CI repo, and updates the PR so that -# its submodules track the `master` branch on their assorted repos. -platform: linux - -image_resource: - type: docker-image - source: - repository: gcr.io/magic-modules/go-ruby - tag: '1.11.5-2.6.0' - -inputs: - - name: mm-approved-prs - -outputs: - - name: mm-output - -run: - path: mm-approved-prs/.ci/magic-modules/merge-pr.sh - -params: - CREDS: "" - ALL_SUBMODULES: "" diff --git a/.ci/magic-modules/point-to-submodules.sh b/.ci/magic-modules/point-to-submodules.sh deleted file mode 100755 index 91bd868dd307..000000000000 --- a/.ci/magic-modules/point-to-submodules.sh +++ /dev/null @@ -1,55 +0,0 @@ -#!/bin/bash - -# This script takes in 'magic-modules-branched', a git repo tracking the head of a PR against magic-modules. -# It needs to output the same git repo, but with the code generation done and submodules updated, at 'magic-modules-submodules'. - -set -e -set +x -# Don't show the credential in the output. -echo "$CREDS" > ~/github_private_key -set -x -chmod 400 ~/github_private_key - -pushd magic-modules-branched -BRANCH="$(cat ./branchname)" -# Update this repo to track the submodules we just pushed: - -if [ "$TERRAFORM_ENABLED" = "true" ]; then - IFS="," read -ra TERRAFORM_VERSIONS <<< "$TERRAFORM_VERSIONS" - for VERSION in "${TERRAFORM_VERSIONS[@]}"; do - IFS=":" read -ra TERRAFORM_DATA <<< "$VERSION" - PROVIDER_NAME="${TERRAFORM_DATA[0]}" - SUBMODULE_DIR="${TERRAFORM_DATA[1]}" - - git config -f .gitmodules "submodule.build/$SUBMODULE_DIR.branch" "$BRANCH" - git config -f .gitmodules "submodule.build/$SUBMODULE_DIR.url" "https://github.com/$GH_USERNAME/$PROVIDER_NAME.git" - git submodule sync "build/$SUBMODULE_DIR" - ssh-agent bash -c "ssh-add ~/github_private_key; git submodule update --remote --init build/$SUBMODULE_DIR" - git add "build/$SUBMODULE_DIR" - done -fi - -if [ "$ANSIBLE_ENABLED" = "true" ]; then - git config -f .gitmodules submodule.build/ansible.branch "$BRANCH" - git config -f .gitmodules submodule.build/ansible.url "https://github.com/$GH_USERNAME/ansible_collections_google.git" - git submodule sync build/ansible - ssh-agent bash -c "ssh-add ~/github_private_key; git submodule update --remote --init build/ansible" - git add build/ansible -fi - -if [ "$INSPEC_ENABLED" = "true" ]; then - git config -f .gitmodules submodule.build/inspec.branch "$BRANCH" - git config -f .gitmodules submodule.build/inspec.url "https://github.com/$GH_USERNAME/inspec-gcp.git" - git submodule sync build/inspec - ssh-agent bash -c "ssh-add ~/github_private_key; git submodule update --remote --init build/inspec" - git add build/inspec -fi - -# Commit those changes so that they can be tested in the next phase. -git add .gitmodules -git config --global user.email "magic-modules@google.com" -git config --global user.name "Modular Magician" -git commit -m "Automatic submodule update to generated code." 
|| true # don't crash if no changes -git checkout -B "$BRANCH" - -cp -r ./ ../magic-modules-submodules diff --git a/.ci/magic-modules/point-to-submodules.yml b/.ci/magic-modules/point-to-submodules.yml deleted file mode 100644 index 2d16efb74955..000000000000 --- a/.ci/magic-modules/point-to-submodules.yml +++ /dev/null @@ -1,29 +0,0 @@ ---- -# This file takes two inputs: magic-modules-branched in detached-HEAD state, and the CI repo. -# It spits out "terraform-generated", a terraform repo on a new branch (named after the -# HEAD commit on the PR), with the new generated code in it. -platform: linux - -image_resource: - type: docker-image - source: - repository: gcr.io/magic-modules/go-ruby - tag: '1.11.5-2.6.0' - -inputs: - - name: magic-modules-branched - -outputs: - - name: magic-modules-submodules - -run: - path: magic-modules-branched/.ci/magic-modules/point-to-submodules.sh - -params: - GH_USERNAME: "" - CREDS: "" - TERRAFORM_ENABLED: false - TERRAFORM_VERSIONS: "" - ANSIBLE_ENABLED: false - INSPEC_ENABLED: false - diff --git a/.ci/magic-modules/pyutils/README.md b/.ci/magic-modules/pyutils/README.md deleted file mode 100644 index 8570ed3be9e7..000000000000 --- a/.ci/magic-modules/pyutils/README.md +++ /dev/null @@ -1,50 +0,0 @@ -# Magic Modules CI Utils - -This directory manages all Python utils that the Magician uses to take upstream Magic Module PRs and generate and manage PRs in various downstream repos. - -What this shouldn't contain: - -- Python scripts called directly by Concourse jobs. -- Non-Python code - -## Tests - -Currently we use the standard [unittest](https://docs.python.org/3/library/unittest.html) library. Because CI development is mostly done locally on your developer machine before being directly deployed, these tests are run manually. - -This section reviews running/writing tests for someone fairly new to Python/unittest, so some of this information is just from unittest docs. - -### Running tests - -Set a test environment variable to make calls to Github: -``` -export TEST_GITHUB_TOKEN=... -``` - -Otherwise, tests calling Github will be ignored (or likely be rate-limited). -``` -cd pyutils - -python -m unittest discover -p "*_test.py" -python ./changelog_utils_test.py -``` - -Read [unittest](https://docs.python.org/3/library/unittest.html#command-line-interface) docs to see how to run tests at finer granularity. - -*NOTE*: Don't forget to delete .pyc files if you feel like tests aren't reflecting your changes! - -### Writing Tests: - -This is mostly a very shallow review of unittest, but your test should inherit from the `unittest.TestCase` class in some way (i.e. we haven't had the need to write our own TestCase-inheriting Test class but feel free to in the future if needed). - -``` -class MyModuleTest(unittest.TestCase): -``` - -Make sure to include the following at the bottom of your test file, so it defaults to running the tests in this file if run as a normal Python script. -``` -if __name__ == '__main__': - unittest.main() -``` - - - diff --git a/.ci/magic-modules/pyutils/__init__.py b/.ci/magic-modules/pyutils/__init__.py deleted file mode 100644 index e69de29bb2d1..000000000000 diff --git a/.ci/magic-modules/pyutils/downstreams.py b/.ci/magic-modules/pyutils/downstreams.py deleted file mode 100644 index e26e767878cb..000000000000 --- a/.ci/magic-modules/pyutils/downstreams.py +++ /dev/null @@ -1,89 +0,0 @@ -"""Helper class for obtaining information about upstream PR and its downstreams. 
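-
-  Downstream PRs are discovered by scanning the upstream PR's issue comments
-  for lines of the form
-  "depends: https://github.com/<owner>/<repo>/pull/<number>"
-  (see strutils.find_dependency_urls_in_comment).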
-
-  Typical usage example:
-
-    import github
-    import downstreams
-
-    client = github.Github(github_token)
-    urls = downstreams.get_downstream_urls(client, 100)
-
-"""
-
-import os
-import re
-import sys
-import itertools
-import operator
-from strutils import *
-
-UPSTREAM_REPO = 'GoogleCloudPlatform/magic-modules'
-
-def find_unmerged_downstreams(client, pr_num):
-  """Returns list of urls for unmerged, open downstreams.
-
-  For each downstream PR URL found from get_parsed_downstream_urls(),
-  fetches the status of each downstream PR to determine which PRs are still
-  unmerged (i.e. not closed and not merged).
-
-  Args:
-    client: github.Github client
-    pr_num: PR number for upstream PR
-  Returns:
-    List of URLs of all unmerged downstream PRs.
-  """
-  unmerged_dependencies = []
-  for r, pulls in get_parsed_downstream_urls(client, pr_num):
-    repo = client.get_repo(r)
-    for _repo, downstream_num in pulls:
-      pr = repo.get_pull(int(downstream_num))
-      # Disregard merged or closed PRs.
-      if not pr.is_merged() and not pr.state == "closed":
-        unmerged_dependencies.append(pr.html_url)
-
-  return unmerged_dependencies
-
-def get_parsed_downstream_urls(client, pr_num):
-  """Get parsed URLs for downstream PRs grouped by repo.
-
-  For each downstream PR URL referenced by the upstream PR, this method
-  parses the downstream repo name
-  (e.g. "terraform-providers/terraform-provider-google") and PR number
-  (e.g. 100) and groups them by repo name so calling code only needs to fetch
-  each repo once.
-
-  Example:
-    for repo, repo_pulls in get_parsed_downstream_urls(client, pr_num):
-      for _repo, pr in repo_pulls:
-        print "Downstream is https://github.com/%s/pull/%d" % (repo, pr)
-
-  Args:
-    client: github.Github client
-    pr_num: PR number for upstream PR
-
-  Returns:
-    Iterator of (repo name, iterator of (repo name, PR number) tuples) pairs.
-  """
-  parsed = [parse_github_url(u) for u in get_downstream_urls(client, pr_num)]
-  # itertools.groupby only groups consecutive entries, so sort by repo first.
-  parsed.sort(key=operator.itemgetter(0))
-  return itertools.groupby(parsed, key=operator.itemgetter(0))
-
-def get_downstream_urls(client, pr_num):
-  """Get list of URLs for downstream PRs.
-
-  This fetches the upstream PR and finds its downstream PR URLs by
-  searching for references in its comments.
-
-  Args:
-    client: github.Github client
-    pr_num: PR number for upstream PR
-
-  Returns:
-    List of downstream PR URLs.
-  """
-  urls = []
-  print "Getting downstream URLs for PR %d..." % pr_num
-  pr = client.get_repo(UPSTREAM_REPO).get_pull(pr_num)
-  for comment in pr.get_issue_comments():
-    urls = urls + find_dependency_urls_in_comment(comment.body)
-  print "Found downstream URLs: %s" % urls
-  return urls
diff --git a/.ci/magic-modules/pyutils/downstreams_test.py b/.ci/magic-modules/pyutils/downstreams_test.py
deleted file mode 100644
index 255121bacd2e..000000000000
--- a/.ci/magic-modules/pyutils/downstreams_test.py
+++ /dev/null
@@ -1,75 +0,0 @@
-from downstreams import *
-import unittest
-import os
-from github import Github
-
-TOKEN_ENV_VAR = "TEST_GITHUB_TOKEN"
-
-class TestUpstreamPullRequests(unittest.TestCase):
-  """
-  Terrible test data from scraping
-  https://github.com/GoogleCloudPlatform/magic-modules/pull/1000
-  TODO: If this test becomes load-bearing, mock out the Github client instead
-  of using this.
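-
-  To run locally, set the TEST_GITHUB_TOKEN environment variable to a valid
-  Github token and run this file as a normal Python script (see the pyutils
-  README for details).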
- """ - TEST_PR_NUM = 1000 - EXPECTED_DOWNSTREAM_URLS = [ - "https://github.com/terraform-providers/terraform-provider-google-beta/pull/186", - "https://github.com/terraform-providers/terraform-provider-google/pull/2591", - "https://github.com/modular-magician/ansible/pull/142", - ] - EXPECTED_PARSED_DOWNSTREAMS = { - "terraform-providers/terraform-provider-google-beta": [186], - "terraform-providers/terraform-provider-google": [2591], - "modular-magician/ansible": [142], - } - - def setUp(self): - gh_token = os.environ.get(TOKEN_ENV_VAR) - if not gh_token: - self.skipTest( - "test env var %s not set, skip tests calling Github" % TOKEN_ENV_VAR) - self.test_client = Github(gh_token) - - def test_find_unmerged_downstreams(self): - self.assertFalse(find_unmerged_downstreams(self.test_client, self.TEST_PR_NUM)) - - def test_parsed_downstream_urls(self): - result = get_parsed_downstream_urls(self.test_client, self.TEST_PR_NUM) - repo_cnt = 0 - for repo, pulls in result: - # Verify each repo in result. - self.assertIn(repo, self.EXPECTED_PARSED_DOWNSTREAMS, - "unexpected repo %s in result" % repo) - repo_cnt += 1 - - # Verify each pull request in result. - expected_pulls = self.EXPECTED_PARSED_DOWNSTREAMS[repo] - pull_cnt = 0 - for repo, prid in pulls: - self.assertIn(int(prid), expected_pulls) - pull_cnt += 1 - # Verify exact count of pulls (here because iterator). - self.assertEquals(pull_cnt, len(expected_pulls), - "expected %d pull requests in result[%s]" % (len(expected_pulls), repo)) - - # Verify exact count of repos (here because iterator). - self.assertEquals(repo_cnt, len(self.EXPECTED_PARSED_DOWNSTREAMS), - "expected %d pull requests in result[%s]" % ( - len(self.EXPECTED_PARSED_DOWNSTREAMS), repo)) - - def test_downstream_urls(self): - test_client = Github(os.environ.get(TOKEN_ENV_VAR)) - result = get_downstream_urls(self.test_client,self.TEST_PR_NUM) - - expected_len = len(self.EXPECTED_DOWNSTREAM_URLS) - self.assertEquals(len(result), expected_len, - "expected %d downstream urls, got %d" % (expected_cnt, len(result))) - for url in result: - self.assertIn(str(url), self.EXPECTED_DOWNSTREAM_URLS) - - -if __name__ == '__main__': - unittest.main() - - diff --git a/.ci/magic-modules/pyutils/strutils.py b/.ci/magic-modules/pyutils/strutils.py deleted file mode 100644 index 2e80bba27014..000000000000 --- a/.ci/magic-modules/pyutils/strutils.py +++ /dev/null @@ -1,125 +0,0 @@ -import re -from bs4 import BeautifulSoup -import mistune - -def find_dependency_urls_in_comment(body): - """Util to parse downstream dependencies from a given comment body. - - Example: - $ find_dependency_urls_in_comment(\""" - This is a comment on an MM PR. - - depends: https://github.com/ownerFoo/repoFoo/pull/100 - depends: https://github.com/ownerBar/repoBar/pull/10 - \""") - [https://github.com/ownerFoo/repoFoo/pull/100, - https://github.com/ownerBar/repoBar/pull/10] - - Args: - body (string): Text of comment in upstream PR - - Returns: - List of PR URLs found. - """ - return re.findall( - r'^depends: (https://github.com/[^\s]*)', body, re.MULTILINE) - -def parse_github_url(gh_url): - """Util to parse Github repo/PR id from a Github PR URL. - - Args: - gh_url (string): URL of Github pull request. - - Returns: - Tuple of (repo name, pr number) - """ - matches = re.match(r'https://github.com/([\w-]+/[\w-]+)/pull/(\d+)', gh_url) - if matches: - repo, prnum = matches.groups() - return (repo, int(prnum)) - return None - -def get_release_notes(body): - """Parse release note blocks from a given text block. 
-
-  Release notes are code blocks annotated with a "release-note:..."
-  language class.
-  Example:
-    ```release-note:new-resource
-    a_new_resource
-    ```
-
-    ```release-note:bug
-    Fixed a bug
-    ```
-  Args:
-    body (string): PR body to pull release note blocks from
-
-  Returns:
-    List of tuples of (`release-note` heading, release note)
-  """
-  release_notes = []
-
-  # Parse markdown and find all code blocks.
-  md = mistune.markdown(body)
-  soup = BeautifulSoup(md, 'html.parser')
-  for codeblock in soup.find_all('code'):
-    block_classes = codeblock.get('class')
-    if not block_classes:
-      continue
-
-    note_type = get_release_note_type_from_class(block_classes[0])
-    note_text = codeblock.get_text().strip()
-    if note_type and note_text:
-      release_notes.append((note_type, note_text))
-
-  return release_notes
-
-def get_release_note_type_from_class(class_str):
-  # Mistune renders ```release-note:foo as a code block with class
-  # 'lang-release-note:foo'; strip the 'lang-' prefix to recover the heading.
-  prefix = "lang-release-note:"
-  if class_str.startswith(prefix):
-    return class_str[len("lang-"):]
-  return None
-
-def set_release_notes(release_notes, body):
-  """Sanitizes the given PR body text and appends the given release notes.
-
-  For a given text block, keeps only the plain paragraph text (which drops
-  any existing markdown code blocks, including old release notes) and then
-  appends the given release notes at the end.
-
-  Args:
-    release_notes (list(tuple(string, string))): List of
-      (release-note heading, release note)
-    body (string): Text body to find and edit release note blocks in
-
-  Returns:
-    Modified text
-  """
-  edited = ""
-  md = mistune.markdown(body)
-  soup = BeautifulSoup(md, 'html.parser')
-  for blob in soup.find_all('p'):
-    edited += blob.get_text().strip() + "\n\n"
-
-  for heading, note in release_notes:
-    edited += "\n```%s\n%s\n```\n" % (heading, note.strip())
-  return edited
-
-def find_prefixed_labels(labels, prefix):
-  """Util for filtering and cleaning labels that start with a given prefix.
-
-  Given a list of labels, find only the specific labels with the given prefix.
-
-  Args:
-    labels: List of string labels
-    prefix: String expected to be prefix of relevant labels
-
-  Returns:
-    Filtered labels (i.e.
all labels starting with prefix) - """ - changelog_labels = [] - for l in labels: - l = l.strip() - if l.startswith(prefix) and len(l) > len(prefix): - changelog_labels.append(l) - return changelog_labels diff --git a/.ci/magic-modules/pyutils/strutils_test.py b/.ci/magic-modules/pyutils/strutils_test.py deleted file mode 100644 index 6c60059ca07a..000000000000 --- a/.ci/magic-modules/pyutils/strutils_test.py +++ /dev/null @@ -1,169 +0,0 @@ -from strutils import * -import unittest -import os -from github import Github - - -class TestStringUtils(unittest.TestCase): - def test_find_dependency_urls(self): - test_urls = [ - "https://github.com/repo-owner/repo-A/pull/1", - "https://github.com/repo-owner/repo-A/pull/2", - "https://github.com/repo-owner/repo-B/pull/3", - ] - test_body = "".join(["\ndepends: %s\n" % u for u in test_urls]) - result = find_dependency_urls_in_comment(test_body) - self.assertEquals(len(result), len(test_urls), - "expected %d urls to be parsed from comment" % len(test_urls)) - for test_url in test_urls: - self.assertIn(test_url, result) - - def test_parse_github_url(self): - test_cases = { - "https://github.com/repoowner/reponame/pull/1234": ("repoowner/reponame", 1234), - "not a real url": None, - } - for k in test_cases: - result = parse_github_url(k) - expected = test_cases[k] - if not expected: - self.assertIsNone(result, "expected None, got %s" % result) - else: - self.assertEquals(result[0], expected[0]) - self.assertEquals(int(result[1]), expected[1]) - - def test_get_release_notes(self): - test_cases = [ - ("releasenote text not found", []), - ( -"""Empty release note: -```release-note:test - -``` -""", []), - (""" -Random code block -``` -This is not a release note -``` -""", []), - (""" -Empty release note with non-empty code block: -```release-note:test - -``` - -``` -This is not a release note -``` -""", []), - (""" -Empty code block with non-empty release note: - -```invalid - -``` - -```release-note:test -This is a release note -``` -""", [("release-note:test", "This is a release note")]), - (""" -Single release notes -```release-note:test -This is a release note -``` -""", [("release-note:test", "This is a release note")]) - # (""" - # Multiple release notes - # ```release-note:foo - # note foo - # ``` - - # ```release-note:bar - # note bar - # ``` - - # ```release-note:baz - # note baz - # ``` - # """, [ - # ("release-note:foo", "note foo"), - # ("release-note:bar", "note bar"), - # ("release-note:baz", "note baz"), - # ]), - ] - for k, expected in test_cases: - actual = get_release_notes(k) - self.assertEqual(len(actual), len(expected), - "test %s\n: expected %d items, got %d: %s" % (k, len(expected), len(actual), actual)) - for idx, note_tuple in enumerate(expected): - self.assertEqual(actual[idx][0], note_tuple[0], - "test %s\n: expected note type %s, got %s" % ( - k, note_tuple[0], actual[idx][0])) - - self.assertEqual(actual[idx][1], note_tuple[1], - "test %s\n: expected note type %s, got %s" % ( - k, note_tuple[1], actual[idx][1])) - - - def test_set_release_notes(self): - downstream_body = """ -All of the blocks below should be replaced - -```releasenote -This should be replaced -``` - -More text - -```releasenote -``` - -```test -``` - """ - release_notes = [ - ("release-note:foo", "new message foo"), - ("release-note:bar", "new message bar"), - ] - - replaced = set_release_notes(release_notes, downstream_body) - - # Existing non-code-block text should still be in body - self.assertIn("All of the blocks below should be replaced\n", replaced) - 
self.assertIn("More text\n", replaced) - - # New release notes should have been added. - self.assertIn("```release-note:foo\nnew message foo\n```\n", replaced) - self.assertIn("```release-note:bar\nnew message bar\n```\n", replaced) - - # Old release notes and code blocks should be removed. - self.assertEqual(len(re.findall("```.+\n", replaced)), 2, - "expected only two release note blocks in text. Result:\n%s" % replaced) - self.assertNotIn("This should be replaced", replaced) - - - def test_find_prefixed_labels(self): - self.assertFalse(find_prefixed_labels([], "test: ")) - self.assertFalse(find_prefixed_labels(["", ""], "test: ")) - labels = find_prefixed_labels(["foo", "bar"], "") - self.assertIn("foo", labels) - self.assertIn("bar", labels) - - test_labels = [ - "test: foo", - "test: bar", - # Not valid changelog labels - "not a changelog label", - "test: " - ] - result = find_prefixed_labels(test_labels, prefix="test: ") - - self.assertEqual(len(result), 2, "expected only 2 labels returned") - self.assertIn("test: foo", result) - self.assertIn("test: bar", result) -if __name__ == '__main__': - unittest.main() - - diff --git a/.ci/magic-modules/release-ansible.sh b/.ci/magic-modules/release-ansible.sh deleted file mode 100755 index 5afcf2683b44..000000000000 --- a/.ci/magic-modules/release-ansible.sh +++ /dev/null @@ -1,140 +0,0 @@ -#!/usr/bin/env bash - -set -x -# Constants + functions -declare -a ignored_modules=( - gcp_backend_service - gcp_forwarding_rule - gcp_healthcheck - gcp_target_proxy - gcp_url_map -) - -get_all_modules() { - remote_name=$1 - file_name=$remote_name - ssh-agent bash -c "ssh-add ~/github_private_key; git fetch $remote_name" - git checkout $remote_name/devel - git ls-files -- lib/ansible/modules/cloud/google/gcp_* | cut -d/ -f 6 | cut -d. -f 1 > $file_name - - for i in "${ignored_modules[@]}"; do - sed -i "/$i/d" $file_name - done -} - -# Install dependencies for Template Generator -pushd "magic-modules-gcp" -bundle install - -# Setup SSH keys. - -# Since these creds are going to be managed externally, we need to pass -# them into the container as an environment variable. We'll use -# ssh-agent to ensure that these are the credentials used to update. -set +x -echo "$CREDS" > ~/github_private_key -set -x -chmod 400 ~/github_private_key -popd - -# Clone ansible/ansible -ssh-agent bash -c "ssh-add ~/github_private_key; git clone git@github.com:modular-magician/ansible.git" - -# Setup Git config and remotes. -pushd "ansible" -git config --global user.email "magic-modules@google.com" -git config --global user.name "Modular Magician" - -git remote remove origin -git remote add origin git@github.com:modular-magician/ansible.git -git remote add upstream git@github.com:ansible/ansible.git -git remote add magician git@github.com:modular-magician/ansible.git -echo "Remotes setup properly" -popd - -# Copy code into ansible/ansible + commit to our fork -# By using the "ansible_devel" provider, we get versions of the resources that work -# with ansible devel. 
-pushd "magic-modules-gcp" -ruby compiler.rb -a -e ansible -f ansible_devel -o ../ansible/ -popd - -# Commit code from magic modules into our fork -pushd "ansible" -git add lib/ansible/modules/cloud/google/gcp_* test/integration/targets/gcp_* -git commit -m "Migrating code from collection" -ssh-agent bash -c "ssh-add ~/github_private_key; git push magician devel" - -set -e - -ssh-agent bash -c "ssh-add ~/github_private_key; git fetch magician devel" -ssh-agent bash -c "ssh-add ~/github_private_key; git fetch upstream devel" - -# Create files with list of modules in a given branch. -get_all_modules "upstream" -get_all_modules "magician" - -# Split existing modules into sets of 23 -# Max 50 files per PR and a module can have 2 files (module + test) -# 23 = 50/2 - 2 (to account for module_util files) -split -l 23 upstream mm-bug - -for filename in mm-bug*; do - echo "Building a Bug Fix PR for $filename" - # Checkout all files that file specifies and create a commit. - git checkout upstream/devel - git checkout -b bug_fixes$filename - - - while read p; do - git checkout magician/devel -- "lib/ansible/modules/cloud/google/$p.py" - if [[ $p != *"info"* ]]; then - git checkout magician/devel -- "test/integration/targets/$p" - fi - done < $filename - - git checkout magician/devel -- "lib/ansible/module_utils/gcp_utils.py" - git checkout magician/devel -- "lib/ansible/plugins/doc_fragments/gcp.py" - - # This commit may be empty - set +e - git commit -m "Bug fixes for GCP modules" - - # Create a PR message + save to file - ruby ../magic-modules-gcp/tools/ansible-pr/generate_template.rb > bug_fixes$filename - - # Create PR - ssh-agent bash -c "ssh-add ~/github_private_key; git push origin bug_fixes$filename --force" - hub pull-request -b ansible/ansible:devel -F bug_fixes$filename -f - set -e - - echo "Bug Fix PR built for $filename" -done - -## Get list of new modules (in magician, not in upstream) -comm -3 <(sort magician) <(sort upstream) > new_modules - -while read module; do - echo "Building a New Module PR for $module" - git checkout upstream/devel - git checkout -b $module - - git checkout magician/devel -- "lib/ansible/modules/cloud/google/$module.py" - if [[ $module != *"info"* ]]; then - git checkout magician/devel -- "test/integration/targets/$module" - fi - - git checkout magician/devel -- "lib/ansible/module_utils/gcp_utils.py" - - # Create a PR message + save to file - set +e - git commit -m "New Module: $module" - ruby ../magic-modules-gcp/tools/ansible-pr/generate_template.rb --new-module-name $module > $module - - # Create PR - ssh-agent bash -c "ssh-add ~/github_private_key; git push origin $module --force" - hub pull-request -b ansible/ansible:devel -F $module -f - set -e - - echo "New Module PR built for $module" -done < new_modules diff --git a/.ci/magic-modules/release-ansible.yml b/.ci/magic-modules/release-ansible.yml deleted file mode 100644 index 33fbe41aab06..000000000000 --- a/.ci/magic-modules/release-ansible.yml +++ /dev/null @@ -1,19 +0,0 @@ ---- -# This file takes one input: magic-modules-branched in detached-HEAD state -# It will create a series of PRs on Ansible. 
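-# (Roughly: one batched bug-fix PR per 23 existing modules, plus one PR per
-# brand-new module - see release-ansible.sh for the details.)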
-platform: linux - -image_resource: - type: docker-image - source: - repository: gcr.io/magic-modules/hub - tag: '1.2' - -inputs: - - name: magic-modules-gcp - -run: - path: "magic-modules-gcp/.ci/magic-modules/release-ansible.sh" - -params: - GITHUB_TOKEN: "" diff --git a/.ci/magic-modules/vars/validator_handwritten_files.txt b/.ci/magic-modules/vars/validator_handwritten_files.txt deleted file mode 100644 index a38dd479a40c..000000000000 --- a/.ci/magic-modules/vars/validator_handwritten_files.txt +++ /dev/null @@ -1,7 +0,0 @@ -resource_compute_instance.go -resource_google_project.go -resource_sql_database_instance.go -resource_storage_bucket.go -iam_folder.go -iam_organization.go -iam_project.go.erb diff --git a/.ci/magic-modules/welcome-contributor.sh b/.ci/magic-modules/welcome-contributor.sh deleted file mode 100755 index 0b521ad24e6c..000000000000 --- a/.ci/magic-modules/welcome-contributor.sh +++ /dev/null @@ -1,21 +0,0 @@ -#!/bin/bash - -set -x - -ASSIGNEE=$(shuf -n 1 <(printf "danawillow\nrambleraptor\nemilymye\nrileykarson\nSirGitsalot\nslevenick\nchrisst\nc2thorn\nndmckinley")) - -cat > comment/pr_comment << EOF -Hello! I am a robot who works on Magic Modules PRs. - -I have detected that you are a community contributor, so your PR will be assigned to someone with a commit-bit on this repo for initial review. - -Thanks for your contribution! A human will be with you soon. - -@$ASSIGNEE, please review this PR or find an appropriate assignee. -EOF - -# Something is preventing the magician from actually assigning the PRs. -# Leave this part in so we know what was supposed to happen, but the real -# logic is above. -echo $ASSIGNEE > comment/assignee -cat comment/assignee diff --git a/.ci/magic-modules/welcome-contributor.yml b/.ci/magic-modules/welcome-contributor.yml deleted file mode 100644 index 295163b61642..000000000000 --- a/.ci/magic-modules/welcome-contributor.yml +++ /dev/null @@ -1,17 +0,0 @@ ---- -platform: linux - -image_resource: - type: docker-image - source: - repository: gcr.io/magic-modules/python - tag: '1.0' - -inputs: - - name: magic-modules-gcp - -outputs: - - name: comment - -run: - path: magic-modules-gcp/.ci/magic-modules/welcome-contributor.sh diff --git a/.ci/magic-modules/write-branch-name.sh b/.ci/magic-modules/write-branch-name.sh deleted file mode 100755 index 2322d3493b6e..000000000000 --- a/.ci/magic-modules/write-branch-name.sh +++ /dev/null @@ -1,8 +0,0 @@ -#! /bin/bash -set -e -set -x - -PR_ID="$(cat ./mm-initial-pr/.git/id)" -ORIGINAL_PR_BRANCH="codegen-pr-$PR_ID" -pushd branchname -echo "$ORIGINAL_PR_BRANCH" > ./original_pr_branch_name diff --git a/.ci/magic-modules/write-branch-name.yml b/.ci/magic-modules/write-branch-name.yml deleted file mode 100644 index ca961b2f1a32..000000000000 --- a/.ci/magic-modules/write-branch-name.yml +++ /dev/null @@ -1,17 +0,0 @@ ---- -platform: linux - -image_resource: - type: docker-image - source: - repository: gcr.io/magic-modules/python - tag: '1.0' - -inputs: - - name: mm-initial-pr - -outputs: - - name: branchname - -run: - path: mm-initial-pr/.ci/magic-modules/write-branch-name.sh diff --git a/.ci/release.yml.tmpl b/.ci/release.yml.tmpl deleted file mode 100644 index f268046c3564..000000000000 --- a/.ci/release.yml.tmpl +++ /dev/null @@ -1,119 +0,0 @@ -{% import "vars.tmpl" as vars %} -# These resource types are here until the PRs get merged in upstream. 
:) -resource_types: - - name: git-branch - type: docker-image - source: - # Note: resource types cannot use credhub substitution - "gcr.io/magic-modules" is hardcoded here. - repository: gcr.io/magic-modules/concourse-git-resource - tag: '1.0' - - - name: gcs-resource - type: docker-image - source: - repository: frodenas/gcs-resource - - - name: github-pull-request - type: docker-image - source: - repository: gcr.io/magic-modules/concourse-github-pr-resource - tag: '1.1' - -resources: - - name: magic-modules-gcp - type: git-branch - source: - uri: git@github.com:GoogleCloudPlatform/magic-modules.git - private_key: ((repo-key.private_key)) - - - name: gcp-bucket - type: gcs-resource - source: - bucket: ((gcp-bucket)) - json_key: ((gcp-bucket-json-key)) - regexp: dist/terraform-provider-google.* - - - name: night-trigger - type: time - source: - start: 11:00 PM - stop: 11:59 PM - location: America/Los_Angeles - - - name: terraform-head - type: git-branch - source: - uri: git@github.com:terraform-providers/terraform-provider-google.git - private_key: ((repo-key.private_key)) -jobs: - - name: coverage-spreadsheet-release - plan: - - get: night-trigger - trigger: true - - get: magic-modules-gcp - trigger: false - - task: build - file: magic-modules-gcp/.ci/magic-modules/coverage-spreadsheet-upload.yml - params: - SERVICE_ACCOUNT: ((magic-modules-service-account)) - - name: nightly-build - plan: - - get: night-trigger - trigger: true - - get: magic-modules-gcp - - get: terraform-head - - - task: build - file: magic-modules-gcp/.ci/magic-modules/generate-terraform-all-platforms.yml - -{% for arch in ['darwin_amd64', 'freebsd_386', 'freebsd_amd64', 'freebsd_arm', -'linux_386', 'linux_amd64', 'linux_arm', 'openbsd_386', 'openbsd_amd64', -'solaris_amd64', 'windows_386.exe', 'windows_amd64.exe'] %} - - put: gcp-bucket - params: - file: dist/terraform-provider-google.{{arch}} -{% endfor %} - - - name: inspec-integration-test - serial: true - plan: - - get: night-trigger - trigger: true - - get: magic-modules-gcp - - task: inspec-integration - file: magic-modules-gcp/.ci/acceptance-tests/inspec-integration.yml - params: - TERRAFORM_KEY: ((terraform-key)) - PROJECT_NAME: ((inspec-project-name)) - PROJECT_NUMBER: ((inspec-project-number)) - -{% for v in vars.terraform_v.itervalues() %} - - name: {{v.short_name}}-integration-test - serial: true - serial_groups: [terraform-integration] - plan: -{% if v.short_name == "terraform-beta" %} - - get: night-trigger - trigger: true -{% endif %} - - get: magic-modules-gcp - - task: {{v.short_name}}-integration - file: magic-modules-gcp/.ci/acceptance-tests/terraform-integration.yml - params: - PROVIDER_NAME: {{v.provider_name}} - SHORT_NAME: {{v.short_name}} - TEST_DIR: {{v.test_dir}} -{% endfor %} - - - name: ansible-integration-test - serial: true - plan: - - get: night-trigger - trigger: true - - get: magic-modules-gcp - - task: ansible-integration - file: magic-modules-gcp/.ci/acceptance-tests/ansible-integration.yml - params: - SERVICE_ACCOUNT_KEY: ((ansible-integration-key)) - ANSIBLE_TEMPLATE: ((ansible-integration-template)) - IMAGE_KEY: ((image-key)) diff --git a/.ci/unit-tests/inspec.sh b/.ci/unit-tests/inspec.sh deleted file mode 100755 index 9c0ddd784f9d..000000000000 --- a/.ci/unit-tests/inspec.sh +++ /dev/null @@ -1,60 +0,0 @@ -#!/bin/bash - -set -e -set -x - -# Service account credentials for GCP to allow terraform to work -export GOOGLE_CLOUD_KEYFILE_JSON="/tmp/google-account.json" -export GOOGLE_APPLICATION_CREDENTIALS="/tmp/google-account.json" - 
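-# (Both variables point at the same service-account keyfile:
-# GOOGLE_CLOUD_KEYFILE_JSON is read by terraform's google provider, while
-# GOOGLE_APPLICATION_CREDENTIALS is the standard application-default
-# credentials path used by the other tooling.)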
-# CI sets the contents of our json account secret in our environment; dump it -# to disk for use in tests. -set +x -echo "${TERRAFORM_KEY}" > /tmp/google-account.json -export GCP_PROJECT_NUMBER=${PROJECT_NUMBER} -export GCP_PROJECT_ID=${PROJECT_NAME} -export GCP_PROJECT_NAME=${PROJECT_NAME} -set -x - -gcloud auth activate-service-account terraform@graphite-test-sam-chef.iam.gserviceaccount.com --key-file=$GOOGLE_CLOUD_KEYFILE_JSON -PR_ID="$(cat ./magic-modules-new-prs/.git/id)" - - -pushd magic-modules -rm build/inspec/test/integration/verify/controls/* -export VCR_MODE=none -bundle install -bundle exec compiler -a -e inspec -o "build/inspec/" -v beta - -cp templates/inspec/vcr_config.rb build/inspec - -pushd build/inspec - -bundle -# Run rubocop on the generated resources -bundle exec rubocop -c .rubocop.yml - -mkdir inspec-cassettes -# Check if PR_ID folder exists -set +e -gsutil ls gs://magic-modules-inspec-bucket/$PR_ID -if [ $? -eq 0 ]; then - gsutil -m cp gs://magic-modules-inspec-bucket/$PR_ID/inspec-cassettes/* inspec-cassettes/ -else - gsutil -m cp gs://magic-modules-inspec-bucket/master/inspec-cassettes/* inspec-cassettes/ -fi -set -e - -bundle exec rake test:init_workspace -if test -f "inspec-cassettes/seed.txt"; then - # Seed the plan with the seed used to record the VCR cassettes. - # This lets randomly generated suffixes be the same between runs - bundle exec rake test:plan_integration_tests[$(cat inspec-cassettes/seed.txt)] -else - bundle exec rake test:plan_integration_tests -fi - -bundle exec rake test:run_integration_tests - -popd -popd \ No newline at end of file diff --git a/.ci/unit-tests/inspec.yml b/.ci/unit-tests/inspec.yml deleted file mode 100644 index 343104cde4ae..000000000000 --- a/.ci/unit-tests/inspec.yml +++ /dev/null @@ -1,15 +0,0 @@ -platform: linux -inputs: - - name: magic-modules - - name: magic-modules-new-prs -image_resource: - type: docker-image - source: - repository: gcr.io/magic-modules/terraform-gcloud-inspec - tag: '0.12.16-4.0' -run: - path: magic-modules/.ci/unit-tests/inspec.sh -params: - PRODUCT: "" - PROVIDER: inspec - EXCLUDE_PATTERN: "" diff --git a/.ci/unit-tests/run.sh b/.ci/unit-tests/run.sh deleted file mode 100755 index d58fd5f866c3..000000000000 --- a/.ci/unit-tests/run.sh +++ /dev/null @@ -1,16 +0,0 @@ -#!/usr/bin/env bash - -set -e - -# Setup GOPATH -export GOPATH=${PWD}/go - -set -x - -# Create GOPATH structure -mkdir -p "${GOPATH}/src/github.com/terraform-providers" -ln -s "${PWD}/magic-modules/build/$SHORT_NAME" "${GOPATH}/src/github.com/terraform-providers/$PROVIDER_NAME" - -cd "${GOPATH}/src/github.com/terraform-providers/$PROVIDER_NAME" - -go test -v ./$TEST_DIR -parallel 16 -run '^Test' -timeout 1m diff --git a/.ci/unit-tests/task.yml b/.ci/unit-tests/task.yml deleted file mode 100644 index a2f4143c9dd8..000000000000 --- a/.ci/unit-tests/task.yml +++ /dev/null @@ -1,14 +0,0 @@ -platform: linux -inputs: - - name: magic-modules -image_resource: - type: docker-image - source: - repository: gcr.io/magic-modules/go-ruby-python - tag: '1.11.5-2.6.0-2.7-v6' -run: - path: magic-modules/.ci/unit-tests/run.sh -params: - PROVIDER_NAME: "" - SHORT_NAME: "" - TEST_DIR: "" diff --git a/.ci/unit-tests/test-terraform.yml b/.ci/unit-tests/test-terraform.yml deleted file mode 100644 index b65fd0dddbca..000000000000 --- a/.ci/unit-tests/test-terraform.yml +++ /dev/null @@ -1,13 +0,0 @@ -platform: linux -inputs: - - name: terraform - - name: magic-modules -image_resource: - type: docker-image - source: - repository: 
gcr.io/magic-modules/go-ruby-python - tag: '1.11.5-2.6.0-2.7-v6' -run: - path: magic-modules/.ci/unit-tests/run.sh - args: - - terraform/ diff --git a/.ci/unit-tests/tf-3.sh b/.ci/unit-tests/tf-3.sh deleted file mode 100755 index 76168df49deb..000000000000 --- a/.ci/unit-tests/tf-3.sh +++ /dev/null @@ -1,17 +0,0 @@ -#!/usr/bin/env bash - -set -e - -# Setup GOPATH -export GOPATH=${PWD}/go - -set -x - -# Create GOPATH structure -mkdir -p "${GOPATH}/src/github.com/terraform-providers" -ln -s "${PWD}/terraform-diff/${SUBDIR}/new" "${GOPATH}/src/github.com/terraform-providers/$PROVIDER_NAME" - -cd "${GOPATH}/src/github.com/terraform-providers/$PROVIDER_NAME" - -go test -v ./$TEST_DIR -parallel 16 -run '^Test' -timeout 1m - diff --git a/.ci/unit-tests/tf-3.yml b/.ci/unit-tests/tf-3.yml deleted file mode 100644 index 5565cb003196..000000000000 --- a/.ci/unit-tests/tf-3.yml +++ /dev/null @@ -1,15 +0,0 @@ -platform: linux -inputs: - - name: magic-modules-branched - - name: terraform-diff -image_resource: - type: docker-image - source: - repository: gcr.io/magic-modules/go-ruby-python - tag: '1.11.5-2.6.0-2.7-v6' -run: - path: magic-modules-branched/.ci/unit-tests/tf-3.sh -params: - PROVIDER_NAME: "" - TEST_DIR: "" - SUBDIR: "" diff --git a/.ci/vars.tmpl b/.ci/vars.tmpl deleted file mode 100644 index 52c02eae3e9f..000000000000 --- a/.ci/vars.tmpl +++ /dev/null @@ -1,45 +0,0 @@ -{% set terraform_v = { - 'ga': { - 'provider_name': 'terraform-provider-google', - 'short_name': 'terraform', - 'test_dir': 'google', - 'github_org': 'terraform-providers', - 'override_provider': '' - }, - 'beta': { - 'provider_name': 'terraform-provider-google-beta', - 'short_name': 'terraform-beta', - 'test_dir': 'google-beta', - 'github_org': 'terraform-providers', - 'override_provider': '' - }, - 'validator': { - 'provider_name': 'terraform-google-conversion', - 'short_name': 'terraform-mapper', - 'test_dir': 'google', - 'github_org': 'GoogleCloudPlatform', - 'override_provider': 'validator' - } - } -%} -{% set downstreams_with_changelogs = [ - 'terraform-providers/terraform-provider-google-beta', - 'terraform-providers/terraform-provider-google' - ] -%} -{% macro build_folder(names) -%} -{% for name in names %} -build/{{name}} -{%- endfor %} -{% endmacro -%} -{% set terraform_submodules = build_folder(terraform_v.values()|map(attribute='short_name')).split() %} -{% set all_submodules = - (terraform_submodules + ['build/ansible'] + ['build/inspec']) -%} -{% set all_submodules_yaml_format = '[' + ','.join(all_submodules) + ']' %} -{% macro serialize_terraform_properties(objs) -%} -{% for obj in objs %} -{{obj.provider_name}}:{{obj.short_name}}:{{obj.github_org}} -{%- endfor %} -{% endmacro -%} -{% set terraform_properties_serialized = serialize_terraform_properties(terraform_v.values()).split() %} diff --git a/.github/PULL_REQUEST_TEMPLATE.md b/.github/PULL_REQUEST_TEMPLATE.md index c35a115c6d31..471b7ed8b855 100644 --- a/.github/PULL_REQUEST_TEMPLATE.md +++ b/.github/PULL_REQUEST_TEMPLATE.md @@ -1,18 +1,6 @@ **Release Note Template for Downstream PRs (will be copied)** diff --git a/.gitignore b/.gitignore index 62096f2bee00..858c72df21be 100644 --- a/.gitignore +++ b/.gitignore @@ -20,6 +20,9 @@ *.pyc *.python-version +# Python virtual environment +.env/* + # IDEA files .idea/* *.iml diff --git a/Gemfile.lock b/Gemfile.lock index a2a8b4588813..3450bc664951 100644 --- a/Gemfile.lock +++ b/Gemfile.lock @@ -1,7 +1,7 @@ GEM remote: https://rubygems.org/ specs: - activesupport (5.2.3) + activesupport (5.2.4.3) concurrent-ruby 
(~> 1.0, >= 1.0.2) i18n (>= 0.7, < 2) minitest (~> 5.1) @@ -11,16 +11,16 @@ GEM ast (2.4.0) binding_of_caller (0.8.0) debug_inspector (>= 0.0.1) - concurrent-ruby (1.1.5) + concurrent-ruby (1.1.6) debug_inspector (0.0.3) diff-lcs (1.3) faraday (0.15.4) multipart-post (>= 1.2, < 3) - i18n (1.6.0) + i18n (1.8.2) concurrent-ruby (~> 1.0) jaro_winkler (1.5.4) metaclass (0.0.4) - minitest (5.11.3) + minitest (5.14.1) mocha (1.3.0) metaclass (~> 0.0.1) multipart-post (2.0.0) @@ -57,7 +57,7 @@ GEM addressable (>= 2.3.5, < 2.6) faraday (~> 0.8, < 1.0) thread_safe (0.3.6) - tzinfo (1.2.5) + tzinfo (1.2.7) thread_safe (~> 0.1) unicode-display_width (1.6.0) diff --git a/api/product.rb b/api/product.rb index 23d448e0a766..58e6e0978580 100644 --- a/api/product.rb +++ b/api/product.rb @@ -30,8 +30,8 @@ class Product < Api::Object::Named # Example inputs: "Compute", "AccessContextManager" # attr_reader :name - # The full name of the GCP product; eg "Cloud Bigtable" - attr_reader :display_name + # Display Name: The full name of the GCP product; eg "Cloud Bigtable" + # A custom getter is used for :display_name instead of `attr_reader` attr_reader :objects @@ -76,9 +76,9 @@ def api_name # The product full name is the "display name" in string form intended for # users to read in documentation; "Google Compute Engine", "Cloud Bigtable" - def product_full_name - if !display_name.nil? - display_name + def display_name + if !@display_name.nil? + @display_name else name.underscore.humanize end @@ -92,7 +92,7 @@ def lowest_version return product_version if ordered_version_name == product_version.name end end - raise "Unable to find lowest version for product #{product_full_name}" + raise "Unable to find lowest version for product #{display_name}" end def version_obj(name) @@ -116,7 +116,7 @@ def version_obj_or_closest(name) return version_obj(version) if exists_at_version(version) end - raise "Could not find object for version #{name} and product #{product_full_name}" + raise "Could not find object for version #{name} and product #{display_name}" end def exists_at_version_or_lower(name) diff --git a/api/resource.rb b/api/resource.rb index 10fcbbc33778..21bb1c27fee8 100644 --- a/api/resource.rb +++ b/api/resource.rb @@ -205,6 +205,16 @@ def required_properties all_user_properties.select(&:required) end + def all_nested_properties(props) + nested = props + props.each do |prop| + if !prop.flatten_object && prop.nested_properties? + nested += all_nested_properties(prop.nested_properties) + end + end + nested + end + # Returns all resourcerefs at any depth def all_resourcerefs resourcerefs_for_properties(all_user_properties, self) diff --git a/api/type.rb b/api/type.rb index 1ff7239d935e..b3f369477b41 100644 --- a/api/type.rb +++ b/api/type.rb @@ -31,6 +31,11 @@ module Fields # string, as providers expect a single-line one w/o a newline. attr_reader :deprecation_message + # Add a removed message for fields no longer supported in the API. This should + # be used for fields supported in one version but have been removed from + # a different version. + attr_reader :removed_message + attr_reader :output # If set value will not be sent to server on sync attr_reader :input # If set to true value is used only on creation @@ -70,6 +75,7 @@ module Fields attr_reader :allow_empty_object attr_reader :min_version + attr_reader :exact_version # A list of properties that conflict with this property. 
attr_reader :conflicts @@ -99,7 +105,9 @@ def validate check :description, type: ::String, required: true check :exclude, type: :boolean, default: false, required: true check :deprecation_message, type: ::String + check :removed_message, type: ::String check :min_version, type: ::String + check :exact_version, type: ::String check :output, type: :boolean check :required, type: :boolean check :send_empty_value, type: :boolean @@ -132,9 +140,9 @@ def to_s # The only intended purpose is to allow better error messages. Some objects # and at some points in the build this doesn't output a valid output. def lineage - return name if __parent.nil? + return name&.underscore if __parent.nil? - __parent.lineage + '.' + name + __parent.lineage + '.' + name&.underscore end def to_json(opts = nil) @@ -258,7 +266,14 @@ def min_version end end + def exact_version + return nil if @exact_version.nil? || @exact_version.blank? + + @__resource.__product.version_obj(@exact_version) + end + def exclude_if_not_in_version!(version) + @exclude ||= exact_version != version unless exact_version.nil? @exclude ||= version < min_version end @@ -289,6 +304,10 @@ def nested_properties? !nested_properties.empty? end + def removed? + !(@removed_message.nil? || @removed_message == '') + end + def deprecated? !(@deprecation_message.nil? || @deprecation_message == '') end @@ -437,10 +456,12 @@ def item_type_class # Represents an enum, and store is valid values class Enum < Primitive attr_reader :values + attr_reader :skip_docs_values def validate super check :values, type: ::Array, item_type: [Symbol, ::String, ::Integer], required: true + check :skip_docs_values, type: :boolean end end diff --git a/compile/core.rb b/compile/core.rb index d5731df55ee0..883e0c5e53e5 100644 --- a/compile/core.rb +++ b/compile/core.rb @@ -226,9 +226,9 @@ def compile_string(ctx, source) end end - def autogen_notice(lang) + def autogen_notice(lang, pwd) Thread.current[:autogen] = true - comment_block(compile('templates/autogen_notice.erb').split("\n"), lang) + comment_block(compile(pwd + '/templates/autogen_notice.erb').split("\n"), lang) end def autogen_exception diff --git a/compiler.rb b/compiler.rb index 8f4bfb4c46dc..167e40c9535b 100755 --- a/compiler.rb +++ b/compiler.rb @@ -24,6 +24,7 @@ ENV['TZ'] = 'UTC' require 'active_support/inflector' +require 'active_support/core_ext/array/conversions' require 'api/compiler' require 'google/logger' require 'optparse' diff --git a/overrides/terraform/resource_override.rb b/overrides/terraform/resource_override.rb index b009b7e2b5bc..316af5d576c9 100644 --- a/overrides/terraform/resource_override.rb +++ b/overrides/terraform/resource_override.rb @@ -24,6 +24,13 @@ module Terraform class ResourceOverride < Overrides::ResourceOverride def self.attributes [ + # If non-empty, overrides the full filename prefix + # i.e. google/resource_product_{{resource_filename_override}}.go + # i.e. google/resource_product_{{resource_filename_override}}_test.go + # Note this doesn't override the actual resource name + # use :legacy_name instead. + :filename_override, + # If non-empty, overrides the full given resource name. # i.e. 
'google_project' for resourcemanager.Project # Use Provider::Terraform::Config.legacy_name to override just @@ -75,7 +82,12 @@ def self.attributes # This enables resources that get their project via a reference to a different resource # instead of a project field to use User Project Overrides - :supports_indirect_user_project_override + :supports_indirect_user_project_override, + + # Function to transform a read error so that handleNotFound recognises + # it as a 404. This should be added as a handwritten fn that takes in + # an error and returns one. + :read_error_transform ] end @@ -87,11 +99,12 @@ def validate @examples ||= [] + check :filename_override, type: String check :legacy_name, type: String check :id_format, type: String check :examples, item_type: Provider::Terraform::Examples, type: Array, default: [] check :virtual_fields, - item_type: Provider::Terraform::VirtualFields, + item_type: Api::Type, type: Array, default: [] @@ -108,6 +121,7 @@ def validate check :skip_sweeper, type: :boolean, default: false check :skip_delete, type: :boolean, default: false check :supports_indirect_user_project_override, type: :boolean, default: false + check :read_error_transform, type: String end def apply(resource) diff --git a/products/accesscontextmanager/api.yaml b/products/accesscontextmanager/api.yaml index edb683651bca..a9ba4660e640 100644 --- a/products/accesscontextmanager/api.yaml +++ b/products/accesscontextmanager/api.yaml @@ -13,7 +13,7 @@ --- !ruby/object:Api::Product name: AccessContextManager -display_name: Access Context Manager +display_name: Access Context Manager (VPC Service Controls) versions: - !ruby/object:Api::Product::Version name: ga @@ -148,6 +148,8 @@ objects: name: 'basic' description: | A set of predefined conditions for the access level and a combining function. + conflicts: + - custom properties: - !ruby/object:Api::Type::Enum name: 'combiningFunction' @@ -156,7 +158,7 @@ objects: is granted this AccessLevel. If AND is used, each Condition in conditions must be satisfied for the AccessLevel to be applied. If OR is used, at least one Condition in conditions must be satisfied - for the AccessLevel to be applied. Defaults to AND if unspecified. + for the AccessLevel to be applied. default_value: :AND values: - :AND @@ -221,6 +223,7 @@ objects: properties: - !ruby/object:Api::Type::Boolean name: 'requireScreenLock' + api_name: 'requireScreenlock' description: | Whether or not screenlock is required for the DevicePolicy to be true. Defaults to false. @@ -293,6 +296,35 @@ objects: countries/regions. Format: A valid ISO 3166-1 alpha-2 code. item_type: Api::Type::String + - !ruby/object:Api::Type::NestedObject + name: 'custom' + description: | + Custom access level conditions are set using the Cloud Common Expression Language to represent the necessary conditions for the level to apply to a request. + See CEL spec at: https://github.com/google/cel-spec. + conflicts: + - basic + properties: + - !ruby/object:Api::Type::NestedObject + name: 'expr' + required: true + description: | + Represents a textual expression in the Common Expression Language (CEL) syntax. CEL is a C-like expression language. + This page details the objects and attributes that are used to the build the CEL expressions for + custom access levels - https://cloud.google.com/access-context-manager/docs/custom-access-level-spec. + properties: + - !ruby/object:Api::Type::String + name: 'expression' + required: true + description: Textual representation of an expression in Common Expression Language syntax. 
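+              # For illustration only (hypothetical expression): a custom
+              # access level might use something like
+              #   "device.os_type == OsType.DESKTOP_MAC"
+              # - see the custom access level spec linked above for the
+              # attributes that are actually available.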
+ - !ruby/object:Api::Type::String + name: 'title' + description: Title for the expression, i.e. a short string describing its purpose. + - !ruby/object:Api::Type::String + name: 'description' + description: Description of the expression + - !ruby/object:Api::Type::String + name: 'location' + description: String indicating the location of the expression for error reporting, e.g. a file name and a position in the file - !ruby/object:Api::Resource name: 'ServicePerimeter' # This is an unusual API, so we need to use a few fields to map the methods diff --git a/products/activedirectory/api.yaml b/products/activedirectory/api.yaml new file mode 100644 index 000000000000..f63f88d9f02b --- /dev/null +++ b/products/activedirectory/api.yaml @@ -0,0 +1,107 @@ +# Copyright 2020 Google Inc. +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +--- !ruby/object:Api::Product +name: ActiveDirectory +display_name: Managed Microsoft Active Directory +versions: + - !ruby/object:Api::Product::Version + name: ga + base_url: https://managedidentities.googleapis.com/v1/ +scopes: + - https://www.googleapis.com/auth/cloud-platform +async: !ruby/object:Api::OpAsync + operation: !ruby/object:Api::OpAsync::Operation + path: 'name' + base_url: '{{op_id}}' + wait_ms: 1000 + # It takes about 35-40 mins to get the resource created + timeouts: !ruby/object:Api::Timeouts + insert_minutes: 60 + update_minutes: 60 + delete_minutes: 60 + result: !ruby/object:Api::OpAsync::Result + path: 'response' + resource_inside_response: true + status: !ruby/object:Api::OpAsync::Status + path: 'done' + complete: true + allowed: + - true + - false + error: !ruby/object:Api::OpAsync::Error + path: 'error' + message: 'message' +objects: + - !ruby/object:Api::Resource + name: 'Domain' + kind: 'activedirectory#domain' + base_url : projects/{{project}}/locations/global/domains?domainName={{domain_name}} + update_verb: :PATCH + update_mask: true + self_link: '{{name}}' + description: Creates a Microsoft AD domain + references: !ruby/object:Api::Resource::ReferenceLinks + guides: + 'Managed Microsoft Active Directory Quickstart': 'https://cloud.google.com/managed-microsoft-ad/docs/quickstarts' + api: 'https://cloud.google.com/managed-microsoft-ad/reference/rest/v1/projects.locations.global.domains' + parameters: + - !ruby/object:Api::Type::String + name: domainName + required: true + url_param_only: true + input: true + description: | + The fully qualified domain name. e.g. mydomain.myorganization.com, with the restrictions, + https://cloud.google.com/managed-microsoft-ad/reference/rest/v1/projects.locations.global.domains. + properties: + - !ruby/object:Api::Type::String + name: 'name' + output: true + description: 'The unique name of the domain using the format: `projects/{project}/locations/global/domains/{domainName}`.' 
+ - !ruby/object:Api::Type::KeyValuePairs + name: 'labels' + description: 'Resource labels that can contain user-provided metadata' + - !ruby/object:Api::Type::Array + name: 'authorizedNetworks' + item_type: Api::Type::String + description: | + The full names of the Google Compute Engine networks the domain instance is connected to. The domain is only available on networks listed in authorizedNetworks. + If CIDR subnets overlap between networks, domain creation will fail. + - !ruby/object:Api::Type::String + name: 'reservedIpRange' + required: true + input: true + description: | + The CIDR range of internal addresses that are reserved for this domain. Reserved networks must be /24 or larger. + Ranges must be unique and non-overlapping with existing subnets in authorizedNetworks + - !ruby/object:Api::Type::Array + name: 'locations' + required: true + item_type: Api::Type::String + description: | + Locations where domain needs to be provisioned. [regions][compute/docs/regions-zones/] + e.g. us-west1 or us-east4 Service supports up to 4 locations at once. Each location will use a /26 block. + - !ruby/object:Api::Type::String + name: 'admin' + default_value: 'setupadmin' + input: true + description: | + The name of delegated administrator account used to perform Active Directory operations. + If not specified, setupadmin will be used. + - !ruby/object:Api::Type::String + name: 'fqdn' + output: true + description: | + The fully-qualified domain name of the exposed domain used by clients to connect to the service. + Similar to what would be chosen for an Active Directory set up on an internal network. \ No newline at end of file diff --git a/products/activedirectory/terraform.yaml b/products/activedirectory/terraform.yaml new file mode 100644 index 000000000000..8ff5dfbda36c --- /dev/null +++ b/products/activedirectory/terraform.yaml @@ -0,0 +1,39 @@ +# Copyright 2020 Google Inc. +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +--- !ruby/object:Provider::Terraform::Config +overrides: !ruby/object:Overrides::ResourceOverrides + Domain: !ruby/object:Overrides::Terraform::ResourceOverride + id_format: "{{name}}" + import_format: ["{{name}}"] + autogen_async: true + properties: + authorizedNetworks: !ruby/object:Overrides::Terraform::PropertyOverride + is_set: true + domainName: !ruby/object:Overrides::Terraform::PropertyOverride + required: true + validation: !ruby/object:Provider::Terraform::Validation + function: 'validateADDomainName()' + custom_code: !ruby/object:Provider::Terraform::CustomCode + custom_import: templates/terraform/custom_import/self_link_as_name.erb + examples: + - !ruby/object:Provider::Terraform::Examples + name: "active_directory_domain_basic" + primary_resource_id: "ad-domain" + vars: + name: "myorg" +files: !ruby/object:Provider::Config::Files + # These files have templating (ERB) code that will be run. + # This is usually to add licensing info, autogeneration notices, etc. 
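+  # (The shared list of files to compile is pulled in from
+  # provider/terraform/product~compile.yaml below.)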
+ compile: +<%= lines(indent(compile('provider/terraform/product~compile.yaml'), 4)) -%> diff --git a/products/appengine/api.yaml b/products/appengine/api.yaml index 58ab43d0f5db..a7e6987ec445 100644 --- a/products/appengine/api.yaml +++ b/products/appengine/api.yaml @@ -185,7 +185,7 @@ objects: - !ruby/object:Api::Resource name: 'Service' description: | - A Service resource is a logical component of an application that can share state and communicate in a secure fashion with other services. + A Service resource is a logical component of an application that can share state and communicate in a secure fashion with other services. For example, an application that handles customer requests might include separate services to handle tasks such as backend data analysis or API requests from mobile devices. Each service has a collection of versions that define a specific set of code used to implement the functionality of that service. base_url: 'apps/{{project}}/services' @@ -215,8 +215,9 @@ objects: name: 'StandardAppVersion' description: | Standard App Version resource to create a new version of standard GAE Application. + Learn about the differences between the standard environment and the flexible environment + at https://cloud.google.com/appengine/docs/the-appengine-environments. Currently supporting Zip and File Containers. - Currently does not support async operation checking. collection_url_key: 'versions' base_url: 'apps/{{project}}/services/{{service}}/versions' delete_url: 'apps/{{project}}/services/{{service}}/versions/{{version_id}}' @@ -224,7 +225,8 @@ objects: update_url: 'apps/{{project}}/services/{{service}}/versions' update_verb: :POST update_mask: false - self_link: 'apps/{{project}}/services/{{service}}/versions/{{version_id}}' + create_url: 'apps/{{project}}/services/{{service}}/versions' + self_link: 'apps/{{project}}/services/{{service}}/versions/{{version_id}}?view=FULL' references: !ruby/object:Api::Resource::ReferenceLinks guides: 'Official Documentation': @@ -254,6 +256,7 @@ objects: url_param_only: true resource: 'Service' imports: 'name' + required: true description: | AppEngine service resource properties: @@ -271,7 +274,7 @@ objects: name: 'runtime' description: | Desired runtime. Example python27. - required: true + required: true - !ruby/object:Api::Type::Boolean name: 'threadsafe' description: | @@ -279,19 +282,19 @@ objects: - !ruby/object:Api::Type::String name: 'runtimeApiVersion' description: | - The version of the API in the given runtime environment. + The version of the API in the given runtime environment. Please see the app.yaml reference for valid values at https://cloud.google.com/appengine/docs/standard//config/appref - !ruby/object:Api::Type::Array name: 'handlers' description: | - An ordered list of URL-matching patterns that should be applied to incoming requests. - The first matching URL handles the request and other request handlers are not attempted. + An ordered list of URL-matching patterns that should be applied to incoming requests. + The first matching URL handles the request and other request handlers are not attempted. item_type: !ruby/object:Api::Type::NestedObject properties: - !ruby/object:Api::Type::String name: 'urlRegex' description: | - URL prefix. Uses regular expression syntax, which means regexp special characters must be escaped, but should not contain groupings. + URL prefix. Uses regular expression syntax, which means regexp special characters must be escaped, but should not contain groupings. 
All URLs that begin with this prefix are handled by this handler, using the portion of the URL after the prefix as part of the file path. - !ruby/object:Api::Type::Enum name: 'securityLevel' @@ -334,7 +337,7 @@ objects: name: 'script' # TODO (mbang): Exactly one of script, staticFiles, or apiEndpoint must be set description: | - Executes a script to handle the requests that match this URL pattern. + Executes a script to handle the requests that match this URL pattern. Only the auto value is supported for Node.js in the App Engine standard environment, for example "script:" "auto". properties: - !ruby/object:Api::Type::String @@ -378,7 +381,9 @@ objects: - !ruby/object:Api::Type::Boolean name: 'applicationReadable' description: | - Whether files should also be uploaded as code data. By default, files declared in static file handlers are uploaded as static data and are only served to end users; they cannot be read by the application. If enabled, uploads are charged against both your code and static data storage resource quotas. + Whether files should also be uploaded as code data. By default, files declared in static file handlers are uploaded as + static data and are only served to end users; they cannot be read by the application. If enabled, uploads are charged + against both your code and static data storage resource quotas. - !ruby/object:Api::Type::Array name: 'libraries' description: | @@ -401,8 +406,8 @@ objects: name: 'deployment' description: | Code and application artifacts that make up this version. - required: false - properties: + required: true + properties: - !ruby/object:Api::Type::NestedObject name: 'zip' description: 'Zip File' @@ -453,12 +458,107 @@ objects: required: true description: | The format should be a shell command that can be fed to bash -c. + - !ruby/object:Api::Type::Array + name: 'inboundServices' + description: | + Before an application can receive email or XMPP messages, the application must be configured to enable the service. + item_type: Api::Type::String - !ruby/object:Api::Type::String name: 'instanceClass' description: | Instance class that is used to run this version. Valid values are - AutomaticScaling F1, F2, F4, F4_1G - (Only AutomaticScaling is supported at the moment) + AutomaticScaling: F1, F2, F4, F4_1G + BasicScaling or ManualScaling: B1, B2, B4, B4_1G, B8 + Defaults to F1 for AutomaticScaling and B2 for ManualScaling and BasicScaling. If no scaling is specified, AutomaticScaling is chosen. + - !ruby/object:Api::Type::NestedObject + name: 'automaticScaling' + description: | + Automatic scaling is based on request rate, response latencies, and other application metrics. + conflicts: + - basicScaling + - manualScaling + properties: + - !ruby/object:Api::Type::Integer + name: 'maxConcurrentRequests' + description: | + Number of concurrent requests an automatic scaling instance can accept before the scheduler spawns a new instance. + + Defaults to a runtime-specific value. + - !ruby/object:Api::Type::Integer + name: 'maxIdleInstances' + description: | + Maximum number of idle instances that should be maintained for this version. + - !ruby/object:Api::Type::String + name: 'maxPendingLatency' + description: | + Maximum amount of time that a request should wait in the pending queue before starting a new instance to handle it. + A duration in seconds with up to nine fractional digits, terminated by 's'. Example: "3.5s". 
+ - !ruby/object:Api::Type::Integer + name: 'minIdleInstances' + description: | + Minimum number of idle instances that should be maintained for this version. Only applicable for the default version of a service. + - !ruby/object:Api::Type::String + name: 'minPendingLatency' + description: | + Minimum amount of time a request should wait in the pending queue before starting a new instance to handle it. + A duration in seconds with up to nine fractional digits, terminated by 's'. Example: "3.5s". + - !ruby/object:Api::Type::NestedObject + name: 'standardSchedulerSettings' + description: | + Scheduler settings for standard environment. + properties: + - !ruby/object:Api::Type::Double + name: 'targetCpuUtilization' + description: | + Target CPU utilization ratio to maintain when scaling. Should be a value in the range [0.50, 0.95], zero, or a negative value. + - !ruby/object:Api::Type::Double + name: 'targetThroughputUtilization' + description: | + Target throughput utilization ratio to maintain when scaling. Should be a value in the range [0.50, 0.95], zero, or a negative value. + - !ruby/object:Api::Type::Integer + name: 'minInstances' + description: | + Minimum number of instances to run for this version. Set to zero to disable minInstances configuration. + - !ruby/object:Api::Type::Integer + name: 'maxInstances' + description: | + Maximum number of instances to run for this version. Set to zero to disable maxInstances configuration. + - !ruby/object:Api::Type::NestedObject + name: 'basicScaling' + description: | + Basic scaling creates instances when your application receives requests. Each instance will be shut down when the application becomes idle. Basic scaling is ideal for work that is intermittent or driven by user activity. + conflicts: + - automaticScaling + - manualScaling + properties: + - !ruby/object:Api::Type::String + name: 'idleTimeout' + default_value: 900s + description: | + Duration of time after the last request that an instance must wait before the instance is shut down. + A duration in seconds with up to nine fractional digits, terminated by 's'. Example: "3.5s". Defaults to 900s. + - !ruby/object:Api::Type::Integer + name: 'maxInstances' + required: true + description: | + Maximum number of instances to create for this version. Must be in the range [1.0, 200.0]. + - !ruby/object:Api::Type::NestedObject + name: 'manualScaling' + description: | + A service with manual scaling runs continuously, allowing you to perform complex initialization and rely on the state of its memory over time. + conflicts: + - automaticScaling + - basicScaling + properties: + - !ruby/object:Api::Type::Integer + name: 'instances' + required: true + description: | + Number of instances to assign to the service at the start. + + **Note:** When managing the number of instances at runtime through the App Engine Admin API or the (now deprecated) Python 2 + Modules API set_num_instances() you must use `lifecycle.ignore_changes = ["manual_scaling"[0].instances]` to prevent drift detection. + # StandardAppVersion and FlexibleAppVersion use the same API endpoint (apps.services.versions) # They are split apart as some of the fields will are necessary for one and not the other, and # other fields may have different defaults. However, some fields are the same. 
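As a concrete illustration of the note above, the `lifecycle` block can be written out as Terraform configuration. This is a minimal sketch, assuming the generated resource is named `google_app_engine_standard_app_version` and Terraform 0.12 expression syntax; the runtime and IDs are placeholders:

```hcl
resource "google_app_engine_standard_app_version" "example" {
  service    = "default"
  version_id = "v1"
  runtime    = "python38"
  # deployment and entrypoint omitted for brevity

  # Exactly one of automatic_scaling, basic_scaling, or manual_scaling may
  # be set; the three blocks conflict with one another.
  manual_scaling {
    instances = 2
  }

  lifecycle {
    # Ignore out-of-band changes made through the Admin API or the
    # (now deprecated) Python 2 Modules API set_num_instances().
    ignore_changes = [manual_scaling[0].instances]
  }
}
```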
If fixing a bug @@ -479,7 +579,7 @@ objects: update_url: 'apps/{{project}}/services/{{service}}/versions' update_verb: :POST update_mask: false - self_link: 'apps/{{project}}/services/{{service}}/versions/{{version_id}}' + self_link: 'apps/{{project}}/services/{{service}}/versions/{{version_id}}?view=FULL' references: !ruby/object:Api::Resource::ReferenceLinks guides: 'Official Documentation': @@ -510,6 +610,7 @@ objects: parameters: - !ruby/object:Api::Type::ResourceRef name: 'service' + required: true url_param_only: true resource: 'Service' imports: 'name' @@ -644,7 +745,6 @@ objects: name: 'servingStatus' description: | Current serving status of this version. Only the versions with a SERVING status create instances and can be billed. - Defaults to SERVING. default_value: :SERVING values: - :SERVING @@ -652,8 +752,112 @@ objects: - !ruby/object:Api::Type::String name: 'runtimeApiVersion' description: | - The version of the API in the given runtime environment. + The version of the API in the given runtime environment. Please see the app.yaml reference for valid values at https://cloud.google.com/appengine/docs/standard//config/appref + - !ruby/object:Api::Type::Array + name: 'handlers' + description: | + An ordered list of URL-matching patterns that should be applied to incoming requests. + The first matching URL handles the request and other request handlers are not attempted. + item_type: !ruby/object:Api::Type::NestedObject + properties: + - !ruby/object:Api::Type::String + name: 'urlRegex' + description: | + URL prefix. Uses regular expression syntax, which means regexp special characters must be escaped, but should not contain groupings. + All URLs that begin with this prefix are handled by this handler, using the portion of the URL after the prefix as part of the file path. + - !ruby/object:Api::Type::Enum + name: 'securityLevel' + required: false + description: | + Security (HTTPS) enforcement for this URL. + values: + - :SECURE_DEFAULT + - :SECURE_NEVER + - :SECURE_OPTIONAL + - :SECURE_ALWAYS + - !ruby/object:Api::Type::Enum + name: 'login' + description: | + Methods to restrict access to a URL based on login status. + required: false + values: + - :LOGIN_OPTIONAL + - :LOGIN_ADMIN + - :LOGIN_REQUIRED + - !ruby/object:Api::Type::Enum + name: 'authFailAction' + description: | + Actions to take when the user is not logged in. + required: false + values: + - :AUTH_FAIL_ACTION_REDIRECT + - :AUTH_FAIL_ACTION_UNAUTHORIZED + - !ruby/object:Api::Type::Enum + name: 'redirectHttpResponseCode' + description: | + 30x code to use when performing redirects for the secure field. + required: false + values: + - :REDIRECT_HTTP_RESPONSE_CODE_301 + - :REDIRECT_HTTP_RESPONSE_CODE_302 + - :REDIRECT_HTTP_RESPONSE_CODE_303 + - :REDIRECT_HTTP_RESPONSE_CODE_307 + - !ruby/object:Api::Type::NestedObject + name: 'script' + # TODO (mbang): Exactly one of script, staticFiles, or apiEndpoint must be set + description: | + Executes a script to handle the requests that match this URL pattern. + Only the auto value is supported for Node.js in the App Engine standard environment, for example "script:" "auto". + properties: + - !ruby/object:Api::Type::String + name: 'scriptPath' + required: true + description: | + Path to the script from the application root directory. 
+ - !ruby/object:Api::Type::NestedObject + name: 'staticFiles' + # TODO (mbang): Exactly one of script, staticFiles, or apiEndpoint must be set + description: | + Files served directly to the user for a given URL, such as images, CSS stylesheets, or JavaScript source files. + Static file handlers describe which files in the application directory are static files, and which URLs serve them. + properties: + - !ruby/object:Api::Type::String + name: 'path' + description: | + Path to the static files matched by the URL pattern, from the application root directory. + The path can refer to text matched in groupings in the URL pattern. + - !ruby/object:Api::Type::String + name: 'uploadPathRegex' + description: | + Regular expression that matches the file paths for all files that should be referenced by this handler. + - !ruby/object:Api::Type::KeyValuePairs + name: 'httpHeaders' + description: | + HTTP headers to use for all responses from these URLs. + An object containing a list of "key:value" value pairs.". + - !ruby/object:Api::Type::String + name: 'mimeType' + description: | + MIME type used to serve all files served by this handler. + Defaults to file-specific MIME types, which are derived from each file's filename extension. + - !ruby/object:Api::Type::String + name: 'expiration' + description: | + Time a static file served by this handler should be cached by web proxies and browsers. + A duration in seconds with up to nine fractional digits, terminated by 's'. Example "3.5s". + Default is '0s' + default_value: '0s' + - !ruby/object:Api::Type::Boolean + name: 'requireMatchingFile' + description: | + Whether this handler should match the request if the file referenced by the handler does not exist. + - !ruby/object:Api::Type::Boolean + name: 'applicationReadable' + description: | + Whether files should also be uploaded as code data. By default, files declared in static file handlers are + uploaded as static data and are only served to end users; they cannot be read by the application. If enabled, + uploads are charged against both your code and static data storage resource quotas. - !ruby/object:Api::Type::String name: 'runtimeMainExecutablePath' description: | @@ -666,7 +870,7 @@ objects: - !ruby/object:Api::Type::Enum name: 'authFailAction' description: | - Action to take when users access resources that require authentication. Defaults to "AUTH_FAIL_ACTION_REDIRECT". + Action to take when users access resources that require authentication. default_value: :AUTH_FAIL_ACTION_REDIRECT values: - :AUTH_FAIL_ACTION_REDIRECT @@ -674,7 +878,7 @@ objects: - !ruby/object:Api::Type::Enum name: 'login' description: | - Level of login required to access this resource. Defaults to "LOGIN_OPTIONAL". + Level of login required to access this resource. default_value: :LOGIN_OPTIONAL values: - :LOGIN_OPTIONAL @@ -895,7 +1099,7 @@ objects: - !ruby/object:Api::Type::Enum name: 'rolloutStrategy' description: | - Endpoints rollout strategy. If FIXED, configId must be specified. If MANAGED, configId must be omitted. Default is "FIXED". + Endpoints rollout strategy. If FIXED, configId must be specified. If MANAGED, configId must be omitted. default_value: :FIXED values: - :FIXED @@ -1100,7 +1304,10 @@ objects: name: 'instances' required: true description: | - Number of instances to assign to the service at the start. This number can later be altered by using the Modules API set_num_instances() function. + Number of instances to assign to the service at the start. 
+ + **Note:** When managing the number of instances at runtime through the App Engine Admin API or the (now deprecated) Python 2 + Modules API set_num_instances() you must use `lifecycle.ignore_changes = ["manual_scaling"[0].instances]` to prevent drift detection. - !ruby/object:Api::Resource name: 'ApplicationUrlDispatchRules' description: | @@ -1132,7 +1339,7 @@ objects: path: 'error/errors' message: 'message' properties: - - !ruby/object:Api::Type::Array + - !ruby/object:Api::Type::Array name: 'dispatchRules' required: true description: | @@ -1206,7 +1413,7 @@ objects: description: | Mapping that defines fractional HTTP traffic diversion to different versions within the service. required: true - properties: + properties: - !ruby/object:Api::Type::Enum name: 'shardBy' description: | @@ -1221,5 +1428,3 @@ objects: required: true description: | Mapping from version IDs within the service to fractional (0.000, 1] allocations of traffic for that version. Each version can be specified only once, but some versions in the service may not have any traffic allocation. Services that have traffic allocated cannot be deleted until either the service is deleted or their traffic allocation is removed. Allocations must sum to 1. Up to two decimal place precision is supported for IP-based splits and up to three decimal places is supported for cookie-based splits. - - diff --git a/products/appengine/terraform.yaml b/products/appengine/terraform.yaml index 4f3b3d155274..a69384a76a54 100644 --- a/products/appengine/terraform.yaml +++ b/products/appengine/terraform.yaml @@ -16,6 +16,14 @@ overrides: !ruby/object:Overrides::ResourceOverrides FirewallRule: !ruby/object:Overrides::Terraform::ResourceOverride import_format: ["apps/{{project}}/firewall/ingressRules/{{priority}}"] mutex: "apps/{{project}}" + async: !ruby/object:Provider::Terraform::PollAsync + check_response_func_existence: PollCheckForExistence + actions: ['create'] + operation: !ruby/object:Api::Async::Operation + timeouts: !ruby/object:Api::Timeouts + insert_minutes: 4 + update_minutes: 4 + delete_minutes: 4 # This resource is a child resource (requires app ID in the URL) skip_sweeper: true examples: @@ -28,6 +36,7 @@ overrides: !ruby/object:Overrides::ResourceOverrides org_id: :ORG_ID StandardAppVersion: !ruby/object:Overrides::Terraform::ResourceOverride import_format: ["apps/{{project}}/services/{{service}}/versions/{{version_id}}"] + id_format: "apps/{{project}}/services/{{service}}/versions/{{version_id}}" mutex: "apps/{{project}}" error_retry_predicates: ["isAppEngineRetryableError"] parameters: @@ -35,12 +44,14 @@ overrides: !ruby/object:Overrides::ResourceOverrides default_from_api: true required: false virtual_fields: - - !ruby/object:Provider::Terraform::VirtualFields + - !ruby/object:Api::Type::Boolean name: 'noop_on_destroy' + default_value: false description: | If set to `true`, the application version will not be deleted. - - !ruby/object:Provider::Terraform::VirtualFields + - !ruby/object:Api::Type::Boolean name: 'delete_service_on_destroy' + default_value: false description: | If set to `true`, the service will be deleted if it is the last version. 
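To show how the two virtual fields surface to users, here is a sketch of the generated resource in use. Resource and attribute names are assumed from the generator's snake_case conventions, and the bucket and object references are hypothetical:

```hcl
resource "google_app_engine_standard_app_version" "myapp_v1" {
  service    = "myapp"
  version_id = "v1"
  runtime    = "nodejs10"

  entrypoint {
    shell = "node ./app.js"
  }

  deployment {
    zip {
      source_url = "https://storage.googleapis.com/${google_storage_bucket.bucket.name}/${google_storage_bucket_object.object.name}"
    }
  }

  # Virtual fields: tracked only in Terraform state, never sent to the API.
  noop_on_destroy           = true   # leave the version in place on destroy
  delete_service_on_destroy = false  # don't delete the service with its last version
}
```

Because the fields are virtual, API reads never return them, which is why tests have to list them under `ignore_read_extra` elsewhere in this file.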
custom_code: !ruby/object:Provider::Terraform::CustomCode @@ -57,8 +68,13 @@ overrides: !ruby/object:Overrides::ResourceOverrides ignore_read: true threadsafe: !ruby/object:Overrides::Terraform::PropertyOverride ignore_read: true + handlers: !ruby/object:Overrides::Terraform::PropertyOverride + default_from_api: true + # instanceClass defaults to a value based on the scaling method instanceClass: !ruby/object:Overrides::Terraform::PropertyOverride - ignore_read: true + default_from_api: true + inboundServices: !ruby/object:Overrides::Terraform::PropertyOverride + is_set: true examples: - !ruby/object:Provider::Terraform::Examples name: "app_engine_standard_app_version" @@ -73,6 +89,7 @@ overrides: !ruby/object:Overrides::ResourceOverrides org_id: :ORG_ID FlexibleAppVersion: !ruby/object:Overrides::Terraform::ResourceOverride import_format: ["apps/{{project}}/services/{{service}}/versions/{{version_id}}"] + id_format: "apps/{{project}}/services/{{service}}/versions/{{version_id}}" mutex: "apps/{{project}}" error_retry_predicates: ["isAppEngineRetryableError"] parameters: @@ -80,17 +97,19 @@ overrides: !ruby/object:Overrides::ResourceOverrides default_from_api: true required: false virtual_fields: - - !ruby/object:Provider::Terraform::VirtualFields + - !ruby/object:Api::Type::Boolean name: 'noop_on_destroy' + default_value: false description: | If set to `true`, the application version will not be deleted. - - !ruby/object:Provider::Terraform::VirtualFields + - !ruby/object:Api::Type::Boolean name: 'delete_service_on_destroy' + default_value: false description: | If set to `true`, the service will be deleted if it is the last version. custom_code: !ruby/object:Provider::Terraform::CustomCode custom_delete: templates/terraform/custom_delete/appversion_delete.go.erb - test_check_destroy: templates/terraform/custom_check_destroy/appengine.go.erb + test_check_destroy: templates/terraform/custom_check_destroy/skip_delete_during_test.go.erb encoder: templates/terraform/encoders/flex_app_version.go.erb properties: id: !ruby/object:Overrides::Terraform::PropertyOverride @@ -116,15 +135,22 @@ overrides: !ruby/object:Overrides::ResourceOverrides # runtimeApiVersion defaults to a runtime-specific value runtimeApiVersion: !ruby/object:Overrides::Terraform::PropertyOverride default_from_api: true + handlers: !ruby/object:Overrides::Terraform::PropertyOverride + default_from_api: true + inboundServices: !ruby/object:Overrides::Terraform::PropertyOverride + is_set: true examples: - !ruby/object:Provider::Terraform::Examples name: "app_engine_flexible_app_version" primary_resource_id: "myapp_v1" ignore_read_extra: - - "delete_service_on_destroy" + - "noop_on_destroy" vars: bucket_name: "appengine-static-content" - service_name: "service-" + project: "appeng-flex" + test_env_vars: + org_id: :ORG_ID + billing_account: :BILLING_ACCT Service: !ruby/object:Overrides::Terraform::ResourceOverride exclude: true DomainMapping: !ruby/object:Overrides::Terraform::ResourceOverride diff --git a/products/artifactregistry/api.yaml b/products/artifactregistry/api.yaml new file mode 100644 index 000000000000..8b2747014458 --- /dev/null +++ b/products/artifactregistry/api.yaml @@ -0,0 +1,118 @@ +# Copyright 2020 Google Inc. +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +--- !ruby/object:Api::Product +name: ArtifactRegistry +display_name: Artifact Registry +scopes: + - https://www.googleapis.com/auth/cloud-platform +versions: + - !ruby/object:Api::Product::Version + name: beta + base_url: https://artifactregistry.googleapis.com/v1beta1/ +apis_required: + - !ruby/object:Api::Product::ApiReference + name: Artifact Registry API + url: https://console.cloud.google.com/apis/library/artifactregistry.googleapis.com/ +async: !ruby/object:Api::OpAsync + operation: !ruby/object:Api::OpAsync::Operation + path: 'name' + base_url: '{{op_id}}' + wait_ms: 1000 + result: !ruby/object:Api::OpAsync::Result + path: 'response' + resource_inside_response: true + status: !ruby/object:Api::OpAsync::Status + path: 'done' + complete: true + allowed: + - true + - false + error: !ruby/object:Api::OpAsync::Error + path: 'error' + message: 'message' +objects: + - !ruby/object:Api::Resource + name: 'Repository' + base_url: projects/{{project}}/locations/{{location}}/repositories + create_url: projects/{{project}}/locations/{{location}}/repositories?repository_id={{repository_id}} + min_version: beta + description: A repository for storing artifacts + references: !ruby/object:Api::Resource::ReferenceLinks + guides: + 'Official Documentation': + 'https://cloud.google.com/artifact-registry/docs/overview' + api: 'https://cloud.google.com/artifact-registry/docs/reference/rest/' + iam_policy: !ruby/object:Api::Resource::IamPolicy + exclude: false + method_name_separator: ':' + parent_resource_attribute: 'repository' + import_format: ["projects/{{project}}/locations/{{location}}/repositories/{{repository}}", "{{repository}}"] + properties: + - !ruby/object:Api::Type::String + name: name + description: |- + The name of the repository, for example: + "projects/p1/locations/us-central1/repositories/repo1" + output: true + - !ruby/object:Api::Type::String + name: repository_id + description: |- + The last part of the repository name, for example: + "repo1" + required: true + input: true + url_param_only: true + - !ruby/object:Api::Type::String + name: 'location' + description: | + The name of the location this repository is located in. + required: true + input: true + url_param_only: true + - !ruby/object:Api::Type::Enum + name: format + description: |- + The format of packages that are stored in the repository. + values: + - :DOCKER + required: true + input: true + - !ruby/object:Api::Type::String + name: description + description: |- + The user-provided description of the repository. + - !ruby/object:Api::Type::KeyValuePairs + name: 'labels' + description: | + Labels with user-defined metadata. + This field may contain up to 64 entries. Label keys and values may be no + longer than 63 characters. Label keys must begin with a lowercase letter + and may only contain lowercase letters, numeric characters, underscores, + and dashes. + - !ruby/object:Api::Type::String + name: 'kmsKeyName' + description: |- + The Cloud KMS resource name of the customer managed encryption key that’s + used to encrypt the contents of the Repository.
Has the form: + `projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key`. + This value may not be changed after the Repository has been created. + input: true + - !ruby/object:Api::Type::Time + name: createTime + description: The time when the repository was created. + output: true + - !ruby/object:Api::Type::Time + name: updateTime + description: The time when the repository was last updated. + output: true diff --git a/products/artifactregistry/terraform.yaml b/products/artifactregistry/terraform.yaml new file mode 100644 index 000000000000..bf0b1314ee25 --- /dev/null +++ b/products/artifactregistry/terraform.yaml @@ -0,0 +1,52 @@ +# Copyright 2019 Google Inc. +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +--- !ruby/object:Provider::Terraform::Config +overrides: !ruby/object:Overrides::ResourceOverrides + Repository: !ruby/object:Overrides::Terraform::ResourceOverride + id_format: projects/{{project}}/locations/{{location}}/repositories/{{repository_id}} + autogen_async: true + examples: + - !ruby/object:Provider::Terraform::Examples + name: "artifact_registry_repository_basic" + min_version: 'beta' + primary_resource_id: "my-repo" + vars: + repository_id: "my-repository" + description: "example docker repository" + - !ruby/object:Provider::Terraform::Examples + name: "artifact_registry_repository_cmek" + min_version: 'beta' + primary_resource_id: "my-repo" + vars: + repository_id: "my-repository" + kms_key_name: "kms-key" + test_vars_overrides: + kms_key_name: 'BootstrapKMSKeyInLocation(t, "us-central1").CryptoKey.Name' + - !ruby/object:Provider::Terraform::Examples + name: "artifact_registry_repository_iam" + min_version: 'beta' + primary_resource_id: "my-repo" + vars: + account_id: "my-account" + repository_id: "my-repository" + description: "example docker repository with iam" + properties: + location: !ruby/object:Overrides::Terraform::PropertyOverride + required: false + default_from_api: true + repository_id: !ruby/object:Overrides::Terraform::PropertyOverride + custom_flatten: 'templates/terraform/custom_flatten/name_from_self_link.erb' + name: !ruby/object:Overrides::Terraform::PropertyOverride + custom_expand: 'templates/terraform/custom_expand/shortname_to_url.go.erb' + custom_flatten: 'templates/terraform/custom_flatten/name_from_self_link.erb' diff --git a/products/bigquery/ansible.yaml b/products/bigquery/ansible.yaml index 635d75872a83..fcf0bd70ac05 100644 --- a/products/bigquery/ansible.yaml +++ b/products/bigquery/ansible.yaml @@ -51,6 +51,8 @@ overrides: !ruby/object:Overrides::ResourceOverrides underscores. The maximum length is 1,024 characters.
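For reference, a minimal sketch of what the Artifact Registry definitions above generate. The resource is beta-only at this point, so the `google-beta` provider is assumed, and all values are placeholders:

```hcl
resource "google_artifact_registry_repository" "my_repo" {
  provider = google-beta

  location      = "us-central1"
  repository_id = "my-repository"
  description   = "example docker repository"
  format        = "DOCKER"

  labels = {
    env = "test"
  }
}
```

Note that the override above makes `location` optional (`required: false` with `default_from_api: true`), so it can be left unset and read back from the API.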
DatasetAccess: !ruby/object:Overrides::Ansible::ResourceOverride exclude: true + Job: !ruby/object:Overrides::Ansible::ResourceOverride + exclude: true files: !ruby/object:Provider::Config::Files resource: <%= lines(indent(compile('provider/ansible/resource~compile.yaml'), 4)) -%> diff --git a/products/bigquery/api.yaml b/products/bigquery/api.yaml index c81b707516d6..2113120d43bb 100644 --- a/products/bigquery/api.yaml +++ b/products/bigquery/api.yaml @@ -61,7 +61,7 @@ objects: member of the access object. Primitive, Predefined and custom roles are supported. Predefined roles that have equivalent primitive roles are swapped by the API to their Primitive - counterparts, and will show a diff post-create. See + counterparts. See [official docs](https://cloud.google.com/bigquery/docs/access-control). - !ruby/object:Api::Type::String name: 'specialGroup' @@ -383,6 +383,605 @@ objects: A-Z), numbers (0-9), or underscores (_). The maximum length is 1,024 characters. required: true + + - !ruby/object:Api::Resource + name: 'Job' + kind: 'bigquery#job' + base_url: projects/{{project}}/jobs + self_link: projects/{{project}}/jobs/{{job_id}} + input: true + description: | + Jobs are actions that BigQuery runs on your behalf to load data, export data, query data, or copy data. + Once a BigQuery job is created, it cannot be changed or deleted. + properties: + - !ruby/object:Api::Type::String + name: 'id' + output: true + description: | + Opaque ID field of the job. + - !ruby/object:Api::Type::String + name: 'user_email' + output: true + description: | + Email address of the user who ran the job. + - !ruby/object:Api::Type::NestedObject + name: 'configuration' + description: 'Describes the job configuration.' + required: true + properties: + - !ruby/object:Api::Type::String + name: 'jobType' + description: | + The type of the job. + output: true + - !ruby/object:Api::Type::String + name: 'jobTimeoutMs' + description: | + Job timeout in milliseconds. If this time limit is exceeded, BigQuery may attempt to terminate the job. + - !ruby/object:Api::Type::KeyValuePairs + name: 'labels' + description: | + The labels associated with this job. You can use these to organize and group your jobs. + - !ruby/object:Api::Type::NestedObject + name: 'query' + description: 'Configures a query job.' + exactly_one_of: + - query + - load + - copy + - extract + properties: + - !ruby/object:Api::Type::String + name: 'query' + description: | + SQL query text to execute. The useLegacySql field can be used to indicate whether the query uses legacy SQL or standard SQL. + required: true + - !ruby/object:Api::Type::NestedObject + name: 'destinationTable' + description: | + Describes the table where the query results should be stored. + This property must be set for large results that exceed the maximum response size. + For queries that produce anonymous (cached) results, this field will be populated by BigQuery. + properties: + - !ruby/object:Api::Type::String + name: 'projectId' + description: 'The ID of the project containing this table.' + required: true + - !ruby/object:Api::Type::String + name: 'datasetId' + description: 'The ID of the dataset containing this table.' + required: true + - !ruby/object:Api::Type::String + name: 'tableId' + description: 'The ID of the table.' + required: true + - !ruby/object:Api::Type::Array + name: 'userDefinedFunctionResources' + description: | + Describes user-defined function resources used in the query. 
+ item_type: !ruby/object:Api::Type::NestedObject + properties: + - !ruby/object:Api::Type::String + name: 'resourceUri' + description: 'A code resource to load from a Google Cloud Storage URI (gs://bucket/path).' + # TODO (mbang): exactly_one_of: resourceUri, inlineCode + - !ruby/object:Api::Type::String + name: 'inlineCode' + description: | + An inline resource that contains code for a user-defined function (UDF). + Providing a inline code resource is equivalent to providing a URI for a file containing the same code. + # TODO (mbang): exactly_one_of: resourceUri, inlineCode + - !ruby/object:Api::Type::Enum + name: 'createDisposition' + description: | + Specifies whether the job is allowed to create new tables. The following values are supported: + CREATE_IF_NEEDED: If the table does not exist, BigQuery creates the table. + CREATE_NEVER: The table must already exist. If it does not, a 'notFound' error is returned in the job result. + Creation, truncation and append actions occur as one atomic update upon job completion + default_value: :CREATE_IF_NEEDED + values: + - :CREATE_IF_NEEDED + - :CREATE_NEVER + - !ruby/object:Api::Type::Enum + name: 'writeDisposition' + description: | + Specifies the action that occurs if the destination table already exists. The following values are supported: + WRITE_TRUNCATE: If the table already exists, BigQuery overwrites the table data and uses the schema from the query result. + WRITE_APPEND: If the table already exists, BigQuery appends the data to the table. + WRITE_EMPTY: If the table already exists and contains data, a 'duplicate' error is returned in the job result. + Each action is atomic and only occurs if BigQuery is able to complete the job successfully. + Creation, truncation and append actions occur as one atomic update upon job completion. + default_value: :WRITE_EMPTY + values: + - :WRITE_TRUNCATE + - :WRITE_APPEND + - :WRITE_EMPTY + - !ruby/object:Api::Type::NestedObject + name: 'defaultDataset' + description: | + Specifies the default dataset to use for unqualified table names in the query. Note that this does not alter behavior of unqualified dataset names. + properties: + - !ruby/object:Api::Type::String + name: 'datasetId' + description: 'A unique ID for this dataset, without the project name.' + required: true + - !ruby/object:Api::Type::String + name: 'projectId' + description: 'The ID of the project containing this table.' + - !ruby/object:Api::Type::Enum + name: 'priority' + description: | + Specifies a priority for the query. + default_value: :INTERACTIVE + values: + - :INTERACTIVE + - :BATCH + - !ruby/object:Api::Type::Boolean + name: 'allowLargeResults' + description: | + If true and query uses legacy SQL dialect, allows the query to produce arbitrarily large result tables at a slight cost in performance. + Requires destinationTable to be set. For standard SQL queries, this flag is ignored and large results are always allowed. + However, you must still set destinationTable when result size exceeds the allowed maximum response size. + - !ruby/object:Api::Type::Boolean + name: 'useQueryCache' + description: | + Whether to look for the result in the query cache. The query cache is a best-effort cache that will be flushed whenever + tables in the query are modified. Moreover, the query cache is only available when a query does not have a destination table specified. + The default value is true. 
+ default_value: true + - !ruby/object:Api::Type::Boolean + name: 'flattenResults' + description: | + If true and query uses legacy SQL dialect, flattens all nested and repeated fields in the query results. + allowLargeResults must be true if this is set to false. For standard SQL queries, this flag is ignored and results are never flattened. + - !ruby/object:Api::Type::Integer + name: 'maximumBillingTier' + description: | + Limits the billing tier for this job. Queries that have resource usage beyond this tier will fail (without incurring a charge). + If unspecified, this will be set to your project default. + - !ruby/object:Api::Type::String + name: 'maximumBytesBilled' + description: | + Limits the bytes billed for this job. Queries that will have bytes billed beyond this limit will fail (without incurring a charge). + If unspecified, this will be set to your project default. + - !ruby/object:Api::Type::Boolean + name: 'useLegacySql' + description: | + Specifies whether to use BigQuery's legacy SQL dialect for this query. The default value is true. + If set to false, the query will use BigQuery's standard SQL. + default_value: true + - !ruby/object:Api::Type::String + name: 'parameterMode' + description: | + Standard SQL only. Set to POSITIONAL to use positional (?) query parameters or to NAMED to use named (@myparam) query parameters in this query. + - !ruby/object:Api::Type::Array + name: 'schemaUpdateOptions' + description: | + Allows the schema of the destination table to be updated as a side effect of the query job. + Schema update options are supported in two cases: when writeDisposition is WRITE_APPEND; + when writeDisposition is WRITE_TRUNCATE and the destination table is a partition of a table, + specified by partition decorators. For normal tables, WRITE_TRUNCATE will always overwrite the schema. + One or more of the following values are specified: + ALLOW_FIELD_ADDITION: allow adding a nullable field to the schema. + ALLOW_FIELD_RELAXATION: allow relaxing a required field in the original schema to nullable. + item_type: Api::Type::String + - !ruby/object:Api::Type::NestedObject + name: 'destinationEncryptionConfiguration' + description: | + Custom encryption configuration (e.g., Cloud KMS keys) + properties: + - !ruby/object:Api::Type::String + name: 'kmsKeyName' + description: | + Describes the Cloud KMS encryption key that will be used to protect destination BigQuery table. + The BigQuery Service Account associated with your project requires access to this encryption key. + required: true + - !ruby/object:Api::Type::NestedObject + name: 'scriptOptions' + description: | + Options controlling the execution of scripts. + properties: + - !ruby/object:Api::Type::String + name: 'statementTimeoutMs' + description: 'Timeout period for each statement in a script.' + at_least_one_of: + - query.0.scriptOptions.0.statementTimeoutMs + - query.0.scriptOptions.0.statementByteBudget + - query.0.scriptOptions.0.keyResultStatement + - !ruby/object:Api::Type::String + name: 'statementByteBudget' + description: 'Limit on the number of bytes billed per statement. Exceeding this budget results in an error.' + at_least_one_of: + - query.0.scriptOptions.0.statementTimeoutMs + - query.0.scriptOptions.0.statementByteBudget + - query.0.scriptOptions.0.keyResultStatement + - !ruby/object:Api::Type::Enum + name: 'keyResultStatement' + description: | + Determines which statement in the script represents the "key result", + used to populate the schema and query results of the script job. 
+ at_least_one_of: + - query.0.scriptOptions.0.statementTimeoutMs + - query.0.scriptOptions.0.statementByteBudget + - query.0.scriptOptions.0.keyResultStatement + values: + - :LAST + - :FIRST_SELECT + - !ruby/object:Api::Type::NestedObject + name: 'load' + description: 'Configures a load job.' + exactly_one_of: + - query + - load + - copy + - extract + properties: + - !ruby/object:Api::Type::Array + name: 'sourceUris' + description: | + The fully-qualified URIs that point to your data in Google Cloud. + For Google Cloud Storage URIs: Each URI can contain one '*' wildcard character + and it must come after the 'bucket' name. Size limits related to load jobs apply + to external data sources. For Google Cloud Bigtable URIs: Exactly one URI can be + specified and it has be a fully specified and valid HTTPS URL for a Google Cloud Bigtable table. + For Google Cloud Datastore backups: Exactly one URI can be specified. Also, the '*' wildcard character is not allowed. + item_type: Api::Type::String + required: true + - !ruby/object:Api::Type::NestedObject + name: 'destinationTable' + description: | + The destination table to load the data into. + required: true + properties: + - !ruby/object:Api::Type::String + name: 'projectId' + description: 'The ID of the project containing this table.' + required: true + - !ruby/object:Api::Type::String + name: 'datasetId' + description: 'The ID of the dataset containing this table.' + required: true + - !ruby/object:Api::Type::String + name: 'tableId' + description: 'The ID of the table.' + required: true + - !ruby/object:Api::Type::Enum + name: 'createDisposition' + description: | + Specifies whether the job is allowed to create new tables. The following values are supported: + CREATE_IF_NEEDED: If the table does not exist, BigQuery creates the table. + CREATE_NEVER: The table must already exist. If it does not, a 'notFound' error is returned in the job result. + Creation, truncation and append actions occur as one atomic update upon job completion + default_value: :CREATE_IF_NEEDED + values: + - :CREATE_IF_NEEDED + - :CREATE_NEVER + - !ruby/object:Api::Type::Enum + name: 'writeDisposition' + description: | + Specifies the action that occurs if the destination table already exists. The following values are supported: + WRITE_TRUNCATE: If the table already exists, BigQuery overwrites the table data and uses the schema from the query result. + WRITE_APPEND: If the table already exists, BigQuery appends the data to the table. + WRITE_EMPTY: If the table already exists and contains data, a 'duplicate' error is returned in the job result. + Each action is atomic and only occurs if BigQuery is able to complete the job successfully. + Creation, truncation and append actions occur as one atomic update upon job completion. + default_value: :WRITE_EMPTY + values: + - :WRITE_TRUNCATE + - :WRITE_APPEND + - :WRITE_EMPTY + - !ruby/object:Api::Type::String + name: 'nullMarker' + description: | + Specifies a string that represents a null value in a CSV file. For example, if you specify "\N", BigQuery interprets "\N" as a null value + when loading a CSV file. The default value is the empty string. If you set this property to a custom value, BigQuery throws an error if an + empty string is present for all data types except for STRING and BYTE. For STRING and BYTE columns, BigQuery interprets the empty string as + an empty value. + default_value: '' + - !ruby/object:Api::Type::String + name: 'fieldDelimiter' + description: | + The separator for fields in a CSV file. 
The separator can be any ISO-8859-1 single-byte character. + To use a character in the range 128-255, you must encode the character as UTF8. BigQuery converts + the string to ISO-8859-1 encoding, and then uses the first byte of the encoded string to split the + data in its raw, binary state. BigQuery also supports the escape sequence "\t" to specify a tab separator. + The default value is a comma (','). + - !ruby/object:Api::Type::Integer + name: 'skipLeadingRows' + description: | + The number of rows at the top of a CSV file that BigQuery will skip when loading the data. + The default value is 0. This property is useful if you have header rows in the file that should be skipped. + When autodetect is on, the behavior is the following: + skipLeadingRows unspecified - Autodetect tries to detect headers in the first row. If they are not detected, + the row is read as data. Otherwise data is read starting from the second row. + skipLeadingRows is 0 - Instructs autodetect that there are no headers and data should be read starting from the first row. + skipLeadingRows = N > 0 - Autodetect skips N-1 rows and tries to detect headers in row N. If headers are not detected, + row N is just skipped. Otherwise row N is used to extract column names for the detected schema. + default_value: 0 + - !ruby/object:Api::Type::String + name: 'encoding' + description: | + The character encoding of the data. The supported values are UTF-8 or ISO-8859-1. + The default value is UTF-8. BigQuery decodes the data after the raw, binary data + has been split using the values of the quote and fieldDelimiter properties. + default_value: 'UTF-8' + - !ruby/object:Api::Type::String + name: 'quote' + description: | + The value that is used to quote data sections in a CSV file. BigQuery converts the string to ISO-8859-1 encoding, + and then uses the first byte of the encoded string to split the data in its raw, binary state. + The default value is a double-quote ('"'). If your data does not contain quoted sections, set the property value to an empty string. + If your data contains quoted newline characters, you must also set the allowQuotedNewlines property to true. + - !ruby/object:Api::Type::Integer + name: 'maxBadRecords' + description: | + The maximum number of bad records that BigQuery can ignore when running the job. If the number of bad records exceeds this value, + an invalid error is returned in the job result. The default value is 0, which requires that all records are valid. + default_value: 0 + - !ruby/object:Api::Type::Boolean + name: 'allowQuotedNewlines' + description: | + Indicates if BigQuery should allow quoted data sections that contain newline characters in a CSV file. + The default value is false. + default_value: false + - !ruby/object:Api::Type::String + name: 'sourceFormat' + description: | + The format of the data files. For CSV files, specify "CSV". For datastore backups, specify "DATASTORE_BACKUP". + For newline-delimited JSON, specify "NEWLINE_DELIMITED_JSON". For Avro, specify "AVRO". For parquet, specify "PARQUET". + For orc, specify "ORC". The default value is CSV. + default_value: 'CSV' + - !ruby/object:Api::Type::Boolean + name: 'allowJaggedRows' + description: | + Accept rows that are missing trailing optional columns. The missing values are treated as nulls. + If false, records with missing trailing columns are treated as bad records, and if there are too many bad records, + an invalid error is returned in the job result. The default value is false. 
Only applicable to CSV, ignored for other formats. + default_value: false + - !ruby/object:Api::Type::Boolean + name: 'ignoreUnknownValues' + description: | + Indicates if BigQuery should allow extra values that are not represented in the table schema. + If true, the extra values are ignored. If false, records with extra columns are treated as bad records, + and if there are too many bad records, an invalid error is returned in the job result. + The default value is false. The sourceFormat property determines what BigQuery treats as an extra value: + CSV: Trailing columns + JSON: Named values that don't match any column names + default_value: false + - !ruby/object:Api::Type::Array + name: 'projectionFields' + description: | + If sourceFormat is set to "DATASTORE_BACKUP", indicates which entity properties to load into BigQuery from a Cloud Datastore backup. + Property names are case sensitive and must be top-level properties. If no properties are specified, BigQuery loads all properties. + If any named property isn't found in the Cloud Datastore backup, an invalid error is returned in the job result. + item_type: Api::Type::String + - !ruby/object:Api::Type::Boolean + name: 'autodetect' + description: | + Indicates if we should automatically infer the options and schema for CSV and JSON sources. + - !ruby/object:Api::Type::Array + name: 'schemaUpdateOptions' + description: | + Allows the schema of the destination table to be updated as a side effect of the load job if a schema is autodetected or + supplied in the job configuration. Schema update options are supported in two cases: when writeDisposition is WRITE_APPEND; + when writeDisposition is WRITE_TRUNCATE and the destination table is a partition of a table, specified by partition decorators. + For normal tables, WRITE_TRUNCATE will always overwrite the schema. One or more of the following values are specified: + ALLOW_FIELD_ADDITION: allow adding a nullable field to the schema. + ALLOW_FIELD_RELAXATION: allow relaxing a required field in the original schema to nullable. + item_type: Api::Type::String + - !ruby/object:Api::Type::NestedObject + name: 'timePartitioning' + description: | + Time-based partitioning specification for the destination table. + properties: + - !ruby/object:Api::Type::String + name: 'type' + description: | + The only type supported is DAY, which will generate one partition per day. Providing an empty string used to cause an error, + but in OnePlatform the field will be treated as unset. + required: true + - !ruby/object:Api::Type::String + name: 'expirationMs' + description: | + Number of milliseconds for which to keep the storage for a partition. A wrapper is used here because 0 is an invalid value. + - !ruby/object:Api::Type::String + name: 'field' + description: | + If not set, the table is partitioned by pseudo column '_PARTITIONTIME'; if set, the table is partitioned by this field. + The field must be a top-level TIMESTAMP or DATE field. Its mode must be NULLABLE or REQUIRED. + A wrapper is used here because an empty string is an invalid value. + - !ruby/object:Api::Type::NestedObject + name: 'destinationEncryptionConfiguration' + description: | + Custom encryption configuration (e.g., Cloud KMS keys) + properties: + - !ruby/object:Api::Type::String + name: 'kmsKeyName' + description: | + Describes the Cloud KMS encryption key that will be used to protect destination BigQuery table. + The BigQuery Service Account associated with your project requires access to this encryption key. 
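To ground the load configuration, here is a sketch of a CSV load job built from the schema above. Attribute names are assumed to be the snake_case forms the generator derives, and the bucket, project, and dataset IDs are placeholders:

```hcl
resource "google_bigquery_job" "load" {
  job_id = "job_load"

  load {
    source_uris = [
      "gs://my-bucket/data/*.csv", # at most one '*' wildcard, after the bucket name
    ]

    destination_table {
      project_id = "my-project"
      dataset_id = "mydataset"
      table_id   = "mytable"
    }

    skip_leading_rows = 1      # skip the header row
    field_delimiter   = ","
    autodetect        = true   # infer the schema from the CSV
    write_disposition = "WRITE_APPEND"
  }
}
```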
+ required: true + - !ruby/object:Api::Type::NestedObject + name: 'copy' + description: 'Copies a table.' + exactly_one_of: + - query + - load + - copy + - extract + properties: + - !ruby/object:Api::Type::Array + name: 'sourceTables' + description: | + Source tables to copy. + required: true + item_type: !ruby/object:Api::Type::NestedObject + properties: + - !ruby/object:Api::Type::String + name: 'projectId' + description: 'The ID of the project containing this table.' + required: true + - !ruby/object:Api::Type::String + name: 'datasetId' + description: 'The ID of the dataset containing this table.' + required: true + - !ruby/object:Api::Type::String + name: 'tableId' + description: 'The ID of the table.' + required: true + - !ruby/object:Api::Type::NestedObject + name: 'destinationTable' + description: 'The destination table.' + properties: + - !ruby/object:Api::Type::String + name: 'projectId' + description: 'The ID of the project containing this table.' + required: true + - !ruby/object:Api::Type::String + name: 'datasetId' + description: 'The ID of the dataset containing this table.' + required: true + - !ruby/object:Api::Type::String + name: 'tableId' + description: 'The ID of the table.' + required: true + - !ruby/object:Api::Type::Enum + name: 'createDisposition' + description: | + Specifies whether the job is allowed to create new tables. The following values are supported: + CREATE_IF_NEEDED: If the table does not exist, BigQuery creates the table. + CREATE_NEVER: The table must already exist. If it does not, a 'notFound' error is returned in the job result. + Creation, truncation and append actions occur as one atomic update upon job completion + default_value: :CREATE_IF_NEEDED + values: + - :CREATE_IF_NEEDED + - :CREATE_NEVER + - !ruby/object:Api::Type::Enum + name: 'writeDisposition' + description: | + Specifies the action that occurs if the destination table already exists. The following values are supported: + WRITE_TRUNCATE: If the table already exists, BigQuery overwrites the table data and uses the schema from the query result. + WRITE_APPEND: If the table already exists, BigQuery appends the data to the table. + WRITE_EMPTY: If the table already exists and contains data, a 'duplicate' error is returned in the job result. + Each action is atomic and only occurs if BigQuery is able to complete the job successfully. + Creation, truncation and append actions occur as one atomic update upon job completion. + default_value: :WRITE_EMPTY + values: + - :WRITE_TRUNCATE + - :WRITE_APPEND + - :WRITE_EMPTY + - !ruby/object:Api::Type::NestedObject + name: 'destinationEncryptionConfiguration' + description: | + Custom encryption configuration (e.g., Cloud KMS keys) + properties: + - !ruby/object:Api::Type::String + name: 'kmsKeyName' + description: | + Describes the Cloud KMS encryption key that will be used to protect destination BigQuery table. + The BigQuery Service Account associated with your project requires access to this encryption key. + required: true + - !ruby/object:Api::Type::NestedObject + name: 'extract' + description: 'Configures an extract job.' + exactly_one_of: + - query + - load + - copy + - extract + properties: + - !ruby/object:Api::Type::Array + name: 'destinationUris' + description: | + A list of fully-qualified Google Cloud Storage URIs where the extracted table should be written. + required: true + item_type: Api::Type::String + - !ruby/object:Api::Type::Boolean + name: 'printHeader' + description: | + Whether to print out a header row in the results. 
Default is true. + default_value: true + - !ruby/object:Api::Type::String + name: 'fieldDelimiter' + description: | + When extracting data in CSV format, this defines the delimiter to use between fields in the exported data. + Default is ',' + - !ruby/object:Api::Type::String + name: 'destinationFormat' + description: | + The exported file format. Possible values include CSV, NEWLINE_DELIMITED_JSON and AVRO for tables and SAVED_MODEL for models. + The default value for tables is CSV. Tables with nested or repeated fields cannot be exported as CSV. + The default value for models is SAVED_MODEL. + - !ruby/object:Api::Type::String + name: 'compression' + description: | + The compression type to use for exported files. Possible values include GZIP, DEFLATE, SNAPPY, and NONE. + The default value is NONE. DEFLATE and SNAPPY are only supported for Avro. + default_value: 'NONE' + - !ruby/object:Api::Type::Boolean + name: 'useAvroLogicalTypes' + description: | + Whether to use logical types when extracting to AVRO format. + - !ruby/object:Api::Type::NestedObject + name: 'sourceTable' + description: | + A reference to the table being exported. + exactly_one_of: + - extract.0.source_table + - extract.0.source_model + properties: + - !ruby/object:Api::Type::String + name: 'projectId' + description: 'The ID of the project containing this table.' + required: true + - !ruby/object:Api::Type::String + name: 'datasetId' + description: 'The ID of the dataset containing this table.' + required: true + - !ruby/object:Api::Type::String + name: 'tableId' + description: 'The ID of the table.' + required: true + - !ruby/object:Api::Type::NestedObject + name: 'sourceModel' + description: | + A reference to the model being exported. + exactly_one_of: + - extract.0.source_table + - extract.0.source_model + properties: + - !ruby/object:Api::Type::String + name: 'projectId' + description: 'The ID of the project containing this model.' + required: true + - !ruby/object:Api::Type::String + name: 'datasetId' + description: 'The ID of the dataset containing this model.' + required: true + - !ruby/object:Api::Type::String + name: 'modelId' + description: 'The ID of the model.' + required: true + - !ruby/object:Api::Type::NestedObject + name: 'jobReference' + description: | + Reference describing the unique-per-user name of the job. + properties: + - !ruby/object:Api::Type::String + name: 'projectId' + description: | + The project ID of the project containing this job. + required: true + - !ruby/object:Api::Type::String + name: 'jobId' + description: | + The ID of the job. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), or dashes (-). The maximum length is 1,024 characters. + required: true + - !ruby/object:Api::Type::String + name: 'location' + description: | + The geographic location of the job. The default value is US. 
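Putting the Job schema together end to end, a sketch of a query job. As above, snake_case attribute names are assumed, and the SQL, project, and dataset IDs are placeholders:

```hcl
resource "google_bigquery_job" "query" {
  job_id   = "job_query"
  location = "US" # the jobReference location described above

  labels = {
    example = "true"
  }

  query {
    query          = "SELECT state FROM [lookerdata:cdc.project_tycho_reports]"
    use_legacy_sql = true

    destination_table {
      project_id = "my-project"
      dataset_id = "mydataset"
      table_id   = "mytable"
    }

    allow_large_results = true
    flatten_results     = true
  }
}
```

Since jobs are immutable once created (`input: true` on the resource), changing any of these fields forces a new job rather than an in-place update.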
+ default_value: 'US' - !ruby/object:Api::Resource name: 'Table' kind: 'bigquery#table' diff --git a/products/bigquery/inspec.yaml b/products/bigquery/inspec.yaml index 1b060701840c..28c20a2a8f05 100644 --- a/products/bigquery/inspec.yaml +++ b/products/bigquery/inspec.yaml @@ -31,10 +31,10 @@ overrides: !ruby/object:Overrides::ResourceOverrides lastModifiedTime: !ruby/object:Overrides::Inspec::PropertyOverride exclude_plural: true additional_functions: 'third_party/inspec/custom_functions/bigquery_dataset_name.erb' - DatasetAccess: !ruby/object:Overrides::Inspec::ResourceOverride exclude: true - + Job: !ruby/object:Overrides::Inspec::ResourceOverride + exclude: true Table: !ruby/object:Overrides::Inspec::ResourceOverride properties: description: !ruby/object:Overrides::Inspec::PropertyOverride diff --git a/products/bigquery/terraform.yaml b/products/bigquery/terraform.yaml index 789789642a46..3a80567bf675 100644 --- a/products/bigquery/terraform.yaml +++ b/products/bigquery/terraform.yaml @@ -17,6 +17,8 @@ overrides: !ruby/object:Overrides::ResourceOverrides Dataset: !ruby/object:Overrides::Terraform::ResourceOverride import_format: ["projects/{{project}}/datasets/{{dataset_id}}"] delete_url: projects/{{project}}/datasets/{{dataset_id}}?deleteContents={{delete_contents_on_destroy}} + # Skipping sweeper due to the abnormal delete_url + skip_sweeper: true examples: - !ruby/object:Provider::Terraform::Examples name: "bigquery_dataset_basic" @@ -33,8 +35,9 @@ overrides: !ruby/object:Overrides::ResourceOverrides key_name: "example-key" keyring_name: "example-keyring" virtual_fields: - - !ruby/object:Provider::Terraform::VirtualFields + - !ruby/object:Api::Type::Boolean name: 'delete_contents_on_destroy' + default_value: false description: | If set to `true`, delete all the tables in the dataset when destroying the resource; otherwise, @@ -88,6 +91,213 @@ overrides: !ruby/object:Overrides::ResourceOverrides properties: datasetId: !ruby/object:Overrides::Terraform::PropertyOverride ignore_read: true + role: !ruby/object:Overrides::Terraform::PropertyOverride + # Bigquery allows for two different formats for specific roles + # (IAM vs "primitive" format), but will return the primative role in API + # responses. We identify this fine-grained resource from a list + # of DatasetAccess objects by comparing role, and we must use the same + # format when comparing. + diff_suppress_func: 'resourceBigQueryDatasetAccessRoleDiffSuppress' + # This custom expand makes sure we are correctly + # converting IAM roles set in state to their primitive equivalents + # before comparison. 
+ custom_expand: "templates/terraform/custom_expand/bigquery_access_role.go.erb" + custom_code: !ruby/object:Provider::Terraform::CustomCode + constants: templates/terraform/constants/bigquery_dataset_access.go + Job: !ruby/object:Overrides::Terraform::ResourceOverride + import_format: ["projects/{{project}}/jobs/{{job_id}}"] + skip_delete: true + async: !ruby/object:Provider::Terraform::PollAsync + check_response_func_existence: PollCheckForExistence + actions: ['create'] + operation: !ruby/object:Api::Async::Operation + timeouts: !ruby/object:Api::Timeouts + insert_minutes: 4 + examples: + - !ruby/object:Provider::Terraform::Examples + name: "bigquery_job_query" + primary_resource_id: "job" + vars: + job_id: "job_query" + account_name: "bqowner" + ignore_read_extra: + - "etag" + - !ruby/object:Provider::Terraform::Examples + name: "bigquery_job_query_table_reference" + primary_resource_id: "job" + vars: + job_id: "job_query" + account_name: "bqowner" + ignore_read_extra: + - "etag" + - "query.0.default_dataset.0.dataset_id" + - "query.0.destination_table.0.table_id" + - !ruby/object:Provider::Terraform::Examples + name: "bigquery_job_load" + primary_resource_id: "job" + vars: + job_id: "job_load" + ignore_read_extra: + - "etag" + - !ruby/object:Provider::Terraform::Examples + name: "bigquery_job_load_table_reference" + primary_resource_id: "job" + vars: + job_id: "job_load" + ignore_read_extra: + - "etag" + - "load.0.destination_table.0.table_id" + skip_docs: true # there are a lot of examples for this resource, so omitting some that are similar to others + - !ruby/object:Provider::Terraform::Examples + name: "bigquery_job_copy" + primary_resource_id: "job" + vars: + job_id: "job_copy" + account_name: "bqowner" + key_name: "example-key" + keyring_name: "example-keyring" + test_env_vars: + project: :PROJECT_NAME + ignore_read_extra: + - "etag" + - !ruby/object:Provider::Terraform::Examples + name: "bigquery_job_copy_table_reference" + primary_resource_id: "job" + vars: + job_id: "job_copy" + account_name: "bqowner" + key_name: "example-key" + keyring_name: "example-keyring" + test_env_vars: + project: :PROJECT_NAME + ignore_read_extra: + - "etag" + - "copy.0.destination_table.0.table_id" + - "copy.0.source_tables.0.table_id" + - "copy.0.source_tables.1.table_id" + skip_docs: true # there are a lot of examples for this resource, so omitting some that are similar to others + - !ruby/object:Provider::Terraform::Examples + name: "bigquery_job_extract" + primary_resource_id: "job" + vars: + job_id: "job_extract" + account_name: "bqowner" + ignore_read_extra: + - "etag" + - !ruby/object:Provider::Terraform::Examples + name: "bigquery_job_extract_table_reference" + primary_resource_id: "job" + vars: + job_id: "job_extract" + account_name: "bqowner" + ignore_read_extra: + - "etag" + - "extract.0.source_table.0.table_id" + skip_docs: true # there are a lot of examples for this resource, so omitting some that are similar to others + properties: + id: !ruby/object:Overrides::Terraform::PropertyOverride + exclude: true + configuration: !ruby/object:Overrides::Terraform::PropertyOverride + flatten_object: true + configuration.copy.destinationTable: !ruby/object:Overrides::Terraform::PropertyOverride + custom_expand: 'templates/terraform/custom_expand/bigquery_table_ref.go.erb' + custom_flatten: 'templates/terraform/custom_flatten/bigquery_table_ref_copy_destinationtable.go.erb' + configuration.copy.destinationTable.projectId: !ruby/object:Overrides::Terraform::PropertyOverride + required: false + 
default_from_api: true + configuration.copy.destinationTable.datasetId: !ruby/object:Overrides::Terraform::PropertyOverride + required: false + default_from_api: true + configuration.copy.destinationTable.tableId: !ruby/object:Overrides::Terraform::PropertyOverride + description: | + The table. Can be specified `{{table_id}}` if `project_id` and `dataset_id` are also set, + or of the form `projects/{{project}}/datasets/{{dataset_id}}/tables/{{table_id}}` if not. + diff_suppress_func: 'compareSelfLinkRelativePaths' + configuration.copy.sourceTables: !ruby/object:Overrides::Terraform::PropertyOverride + custom_expand: 'templates/terraform/custom_expand/bigquery_table_ref_array.go.erb' + custom_flatten: 'templates/terraform/custom_flatten/bigquery_table_ref_copy_sourcetables.go.erb' + configuration.copy.sourceTables.projectId: !ruby/object:Overrides::Terraform::PropertyOverride + required: false + default_from_api: true + configuration.copy.sourceTables.datasetId: !ruby/object:Overrides::Terraform::PropertyOverride + required: false + default_from_api: true + configuration.copy.sourceTables.tableId: !ruby/object:Overrides::Terraform::PropertyOverride + description: | + The table. Can be specified `{{table_id}}` if `project_id` and `dataset_id` are also set, + or of the form `projects/{{project}}/datasets/{{dataset_id}}/tables/{{table_id}}` if not. + diff_suppress_func: 'compareSelfLinkRelativePaths' + configuration.load.destinationTable: !ruby/object:Overrides::Terraform::PropertyOverride + custom_expand: 'templates/terraform/custom_expand/bigquery_table_ref.go.erb' + custom_flatten: 'templates/terraform/custom_flatten/bigquery_table_ref_load_destinationtable.go.erb' + configuration.load.destinationTable.projectId: !ruby/object:Overrides::Terraform::PropertyOverride + required: false + default_from_api: true + configuration.load.destinationTable.datasetId: !ruby/object:Overrides::Terraform::PropertyOverride + required: false + default_from_api: true + configuration.load.destinationTable.tableId: !ruby/object:Overrides::Terraform::PropertyOverride + description: | + The table. Can be specified `{{table_id}}` if `project_id` and `dataset_id` are also set, + or of the form `projects/{{project}}/datasets/{{dataset_id}}/tables/{{table_id}}` if not. 
+        diff_suppress_func: 'compareSelfLinkRelativePaths'
+      configuration.load.skipLeadingRows: !ruby/object:Overrides::Terraform::PropertyOverride
+        validation: !ruby/object:Provider::Terraform::Validation
+          function: 'validation.IntAtLeast(0)'
+      configuration.load.fieldDelimiter: !ruby/object:Overrides::Terraform::PropertyOverride
+        default_from_api: true
+      configuration.load.quote: !ruby/object:Overrides::Terraform::PropertyOverride
+        default_from_api: true
+      configuration.extract.fieldDelimiter: !ruby/object:Overrides::Terraform::PropertyOverride
+        default_from_api: true
+      configuration.extract.destinationFormat: !ruby/object:Overrides::Terraform::PropertyOverride
+        default_from_api: true
+      configuration.extract.sourceTable: !ruby/object:Overrides::Terraform::PropertyOverride
+        custom_expand: 'templates/terraform/custom_expand/bigquery_table_ref.go.erb'
+        custom_flatten: 'templates/terraform/custom_flatten/bigquery_table_ref_extract_sourcetable.go.erb'
+      configuration.extract.sourceTable.projectId: !ruby/object:Overrides::Terraform::PropertyOverride
+        required: false
+        default_from_api: true
+      configuration.extract.sourceTable.datasetId: !ruby/object:Overrides::Terraform::PropertyOverride
+        required: false
+        default_from_api: true
+      configuration.extract.sourceTable.tableId: !ruby/object:Overrides::Terraform::PropertyOverride
+        description: |
+          The table. Can be specified as `{{table_id}}` if `project_id` and `dataset_id` are also set,
+          or of the form `projects/{{project}}/datasets/{{dataset_id}}/tables/{{table_id}}` if not.
+        diff_suppress_func: 'compareSelfLinkRelativePaths'
+      configuration.query.destinationTable: !ruby/object:Overrides::Terraform::PropertyOverride
+        custom_expand: 'templates/terraform/custom_expand/bigquery_table_ref.go.erb'
+        custom_flatten: 'templates/terraform/custom_flatten/bigquery_table_ref_query_destinationtable.go.erb'
+      configuration.query.destinationTable.projectId: !ruby/object:Overrides::Terraform::PropertyOverride
+        required: false
+        default_from_api: true
+      configuration.query.destinationTable.datasetId: !ruby/object:Overrides::Terraform::PropertyOverride
+        required: false
+        default_from_api: true
+      configuration.query.destinationTable.tableId: !ruby/object:Overrides::Terraform::PropertyOverride
+        description: |
+          The table. Can be specified as `{{table_id}}` if `project_id` and `dataset_id` are also set,
+          or of the form `projects/{{project}}/datasets/{{dataset_id}}/tables/{{table_id}}` if not.
+        diff_suppress_func: 'compareSelfLinkRelativePaths'
+      configuration.query.defaultDataset: !ruby/object:Overrides::Terraform::PropertyOverride
+        custom_expand: 'templates/terraform/custom_expand/bigquery_dataset_ref.go.erb'
+        custom_flatten: 'templates/terraform/custom_flatten/bigquery_dataset_ref.go.erb'
+      configuration.query.defaultDataset.projectId: !ruby/object:Overrides::Terraform::PropertyOverride
+        required: false
+        default_from_api: true
+      configuration.query.defaultDataset.datasetId: !ruby/object:Overrides::Terraform::PropertyOverride
+        description: |
+          The dataset. Can be specified as `{{dataset_id}}` if `project_id` is also set,
+          or of the form `projects/{{project}}/datasets/{{dataset_id}}` if not.
+        diff_suppress_func: 'compareSelfLinkRelativePaths'
+      jobReference: !ruby/object:Overrides::Terraform::PropertyOverride
+        flatten_object: true
+      jobReference.projectId: !ruby/object:Overrides::Terraform::PropertyOverride
+        exclude: true
+    custom_code: !ruby/object:Provider::Terraform::CustomCode
+      constants: templates/terraform/constants/bigquery_job.go
+      encoder: templates/terraform/encoders/bigquery_job.go.erb
   Table: !ruby/object:Overrides::Terraform::ResourceOverride
     exclude: true
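The repeated `tableId`/`datasetId` overrides above all pair the same two-format description with `compareSelfLinkRelativePaths`, so the short and fully-qualified spellings of one reference don't fight in plans. As a rough, hypothetical sketch of how this surfaces in the generated `google_bigquery_job` resource (the project, dataset, and table names here are invented):

```hcl
resource "google_bigquery_job" "job" {
  job_id = "job_query"

  query {
    query = "SELECT 1"

    destination_table {
      # Short form: table_id alone, with project_id and dataset_id set beside it.
      project_id = "my-project"
      dataset_id = "my_dataset"
      table_id   = "my_table"

      # Equivalent long form, which compareSelfLinkRelativePaths treats as equal:
      # table_id = "projects/my-project/datasets/my_dataset/tables/my_table"
    }
  }
}
```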
diff --git a/products/bigqueryconnection/api.yaml b/products/bigqueryconnection/api.yaml
new file mode 100644
index 000000000000..25eaeb87fc3e
--- /dev/null
+++ b/products/bigqueryconnection/api.yaml
@@ -0,0 +1,111 @@
+# Copyright 2020 Google Inc.
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+--- !ruby/object:Api::Product
+name: BigqueryConnection
+display_name: BigQuery Connection
+versions:
+  - !ruby/object:Api::Product::Version
+    name: beta
+    base_url: https://bigqueryconnection.googleapis.com/v1beta1/
+scopes:
+  - https://www.googleapis.com/auth/bigquery
+apis_required:
+  - !ruby/object:Api::Product::ApiReference
+    name: BigQueryConnection API
+    url: https://console.cloud.google.com/apis/api/bigqueryconnection.googleapis.com/
+objects:
+  - !ruby/object:Api::Resource
+    name: 'Connection'
+    base_url: projects/{{project}}/locations/{{location}}/connections
+    self_link: "{{name}}"
+    create_url: projects/{{project}}/locations/{{location}}/connections?connectionId={{connection_id}}
+    update_verb: :PATCH
+    update_mask: true
+    description: |
+      A connection allows BigQuery to connect to external data sources.
+    references: !ruby/object:Api::Resource::ReferenceLinks
+      guides:
+        "Cloud SQL federated queries": "https://cloud.google.com/bigquery/docs/cloud-sql-federated-queries"
+      api: "https://cloud.google.com/bigquery/docs/reference/bigqueryconnection/rest/v1beta1/projects.locations.connections/create"
+    properties:
+      - !ruby/object:Api::Type::String
+        name: name
+        description: |-
+          The resource name of the connection in the form of:
+          "projects/{project_id}/locations/{location_id}/connections/{connectionId}"
+        input: true
+        output: true
+      - !ruby/object:Api::Type::String
+        name: connection_id
+        description: |
+          Optional connection ID that should be assigned to the created connection.
+        required: false
+        input: true
+        url_param_only: true
+      - !ruby/object:Api::Type::String
+        name: 'location'
+        required: false
+        input: true
+        url_param_only: true
+        default_value: US
+        description: |-
+          The geographic location where the connection should reside.
+          The Cloud SQL instance must be in the same location as the connection,
+          with the following exceptions: Cloud SQL us-central1 maps to BigQuery US, Cloud SQL europe-west1 maps to BigQuery EU.
+          Examples: US, EU, asia-northeast1, us-central1, europe-west1. The default value is US.
+      - !ruby/object:Api::Type::String
+        name: 'friendlyName'
+        description: A descriptive name for the connection
+      - !ruby/object:Api::Type::String
+        name: 'description'
+        description: A description for the connection
+      - !ruby/object:Api::Type::Boolean
+        name: 'hasCredential'
+        output: true
+        description: |
+          True if the connection has a credential assigned.
+      - !ruby/object:Api::Type::NestedObject
+        name: cloudSql
+        description: Cloud SQL properties.
+        required: true
+        properties:
+          - !ruby/object:Api::Type::String
+            name: 'instanceId'
+            description: Cloud SQL instance ID in the form project:location:instance.
+            required: true
+          - !ruby/object:Api::Type::String
+            name: 'database'
+            description: Database name.
+            required: true
+          - !ruby/object:Api::Type::NestedObject
+            name: credential
+            description: Cloud SQL credential.
+            required: true
+            properties:
+              - !ruby/object:Api::Type::String
+                name: username
+                description: Username for database.
+                required: true
+              - !ruby/object:Api::Type::String
+                name: password
+                description: Password for database.
+                required: true
+          - !ruby/object:Api::Type::Enum
+            name: 'type'
+            description: Type of the Cloud SQL database.
+            required: true
+            values:
+              - :DATABASE_TYPE_UNSPECIFIED
+              - :POSTGRES
+              - :MYSQL
diff --git a/products/bigqueryconnection/terraform.yaml b/products/bigqueryconnection/terraform.yaml
new file mode 100644
index 000000000000..5fd7cb44cf68
--- /dev/null
+++ b/products/bigqueryconnection/terraform.yaml
@@ -0,0 +1,46 @@
+# Copyright 2020 Google Inc.
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+--- !ruby/object:Provider::Terraform::Config
+legacy_name: bigquery
+overrides: !ruby/object:Overrides::ResourceOverrides
+  Connection: !ruby/object:Overrides::Terraform::ResourceOverride
+    properties:
+      cloudSql.credential: !ruby/object:Overrides::Terraform::PropertyOverride
+        custom_flatten: 'templates/terraform/custom_flatten/bigquery_connection_flatten.go.erb'
+      cloudSql.credential.password: !ruby/object:Overrides::Terraform::PropertyOverride
+        sensitive: true
+    id_format: "{{name}}"
+    import_format: ["{{name}}"]
+    examples:
+      - !ruby/object:Provider::Terraform::Examples
+        min_version: beta
+        name: "bigquery_connection_basic"
+        primary_resource_id: "connection"
+        vars:
+          database_instance_name: "my-database-instance"
+          username: "user"
+      - !ruby/object:Provider::Terraform::Examples
+        min_version: beta
+        name: "bigquery_connection_full"
+        primary_resource_id: "connection"
+        vars:
+          database_instance_name: "my-database-instance"
+          username: "user"
+          connection_id: "my-connection"
+# This is for copying files over
+files: !ruby/object:Provider::Config::Files
+  # These files have templating (ERB) code that will be run.
+  # This is usually to add licensing info, autogeneration notices, etc.
+  compile:
+<%= lines(indent(compile('provider/terraform/product~compile.yaml'), 4)) -%>
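Taken together, the api.yaml and terraform.yaml above generate a beta-only connection resource. A minimal usage sketch, assuming the generated resource is named `google_bigquery_connection` and follows the property names above (the instance and credentials are placeholders):

```hcl
resource "google_bigquery_connection" "connection" {
  provider      = google-beta
  connection_id = "my-connection"
  location      = "US"
  friendly_name = "my connection"

  cloud_sql {
    instance_id = "my-project:us-central1:my-database-instance"
    database    = "mydb"
    type        = "POSTGRES"

    credential {
      username = "user"
      # Marked sensitive via the cloudSql.credential.password override above.
      password = "change-me"
    }
  }
}
```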
diff --git a/products/bigquerydatatransfer/api.yaml b/products/bigquerydatatransfer/api.yaml
index a85475147bea..e9c08724dc03 100644
--- a/products/bigquerydatatransfer/api.yaml
+++ b/products/bigquerydatatransfer/api.yaml
@@ -13,7 +13,7 @@
 --- !ruby/object:Api::Product
 name: BigqueryDataTransfer
-display_name: BigQueryDataTransfer
+display_name: BigQuery Data Transfer
 versions:
   - !ruby/object:Api::Product::Version
     name: ga
@@ -27,8 +27,10 @@ apis_required:
 objects:
   - !ruby/object:Api::Resource
     name: 'Config'
-    base_url: projects/{{project}}/locations/{{location}}/transferConfigs
+    base_url: projects/{{project}}/locations/{{location}}/transferConfigs?serviceAccountName={{service_account_name}}
     self_link: "{{name}}"
+    # see the comment at service_account_name; PATCHing service_account_name would also require an update_mask entry
+    # update_url: "{{name}}?serviceAccountName={{service_account_name}}"
     update_verb: :PATCH
     update_mask: true
     description: |
@@ -47,6 +49,18 @@ objects:
         description: |
           The geographic location where the transfer config should reside.
           Examples: US, EU, asia-northeast1. The default value is US.
+      - !ruby/object:Api::Type::String
+        name: 'serviceAccountName'
+        url_param_only: true
+        # The API would support PATCHing the service account, but setting the
+        # update_mask accordingly for a url_param_only property is currently not
+        # supported in magic-modules
+        input: true
+        default_value: ''
+        description: |
+          Optional service account name. If this field is set, the transfer config will
+          be created with this service account's credentials. This requires that the
+          requesting user calling this API has permission to act as this service account.
     properties:
       - !ruby/object:Api::Type::String
         name: 'displayName'
diff --git a/products/bigquerydatatransfer/terraform.yaml b/products/bigquerydatatransfer/terraform.yaml
index fe561caf0a83..601c87abdfca 100644
--- a/products/bigquerydatatransfer/terraform.yaml
+++ b/products/bigquerydatatransfer/terraform.yaml
@@ -28,7 +28,7 @@ overrides: !ruby/object:Overrides::ResourceOverrides
     examples:
       - !ruby/object:Provider::Terraform::Examples
         skip_test: true
-        name: "scheduled_query"
+        name: "bigquerydatatransfer_config_scheduled_query"
         primary_resource_id: "query_config"
         vars:
           display_name: "my-query"
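The new `serviceAccountName` URL parameter means a transfer config can be created under a dedicated service account's credentials rather than the caller's. A hedged sketch of how this might surface in the generated `google_bigquery_data_transfer_config` resource (the service account email, dataset reference, and query are placeholders):

```hcl
resource "google_bigquery_data_transfer_config" "query_config" {
  display_name           = "my-query"
  data_source_id         = "scheduled_query"
  destination_dataset_id = google_bigquery_dataset.my_dataset.dataset_id
  schedule               = "every 24 hours"

  # Maps to the serviceAccountName URL parameter added above; the requesting
  # user must have permission to act as this service account.
  service_account_name = "bqdts@my-project.iam.gserviceaccount.com"

  params = {
    query = "SELECT 1"
  }
}
```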
diff --git a/products/bigqueryreservation/api.yaml b/products/bigqueryreservation/api.yaml
index 5bef09d5646f..4f9e70e95637 100644
--- a/products/bigqueryreservation/api.yaml
+++ b/products/bigqueryreservation/api.yaml
@@ -13,7 +13,7 @@
 --- !ruby/object:Api::Product
 name: BigqueryReservation
-display_name: BigQueryReservation
+display_name: BigQuery Reservation
 versions:
   - !ruby/object:Api::Product::Version
     name: beta
diff --git a/products/bigtable/api.yaml b/products/bigtable/api.yaml
index 1d37d48de4d7..019bfe171cf2 100644
--- a/products/bigtable/api.yaml
+++ b/products/bigtable/api.yaml
@@ -13,7 +13,7 @@
 --- !ruby/object:Api::Product
 name: Bigtable
-display_name: Bigtable
+display_name: Cloud Bigtable
 versions:
   - !ruby/object:Api::Product::Version
     name: ga
diff --git a/products/bigtable/terraform.yaml b/products/bigtable/terraform.yaml
index be9ec743dc8e..307199112a19 100644
--- a/products/bigtable/terraform.yaml
+++ b/products/bigtable/terraform.yaml
@@ -28,6 +28,11 @@ overrides: !ruby/object:Overrides::ResourceOverrides
         vars:
           instance_name: "bt-instance"
           app_profile_name: "bt-profile"
+          deletion_protection: "true"
+        test_vars_overrides:
+          deletion_protection: "false"
+        oics_vars_overrides:
+          deletion_protection: "false"
         ignore_read_extra:
           - "ignore_warnings"
       - !ruby/object:Provider::Terraform::Examples
@@ -36,6 +41,11 @@ overrides: !ruby/object:Overrides::ResourceOverrides
         vars:
           instance_name: "bt-instance"
           app_profile_name: "bt-profile"
+          deletion_protection: "true"
+        test_vars_overrides:
+          deletion_protection: "false"
+        oics_vars_overrides:
+          deletion_protection: "false"
         ignore_read_extra:
           - "ignore_warnings"
     properties:
diff --git a/products/cloudasset/api.yaml b/products/cloudasset/api.yaml
new file mode 100644
index 000000000000..37779b19f188
--- /dev/null
+++ b/products/cloudasset/api.yaml
@@ -0,0 +1,291 @@
+# Copyright 2020 Google Inc.
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+--- !ruby/object:Api::Product
+name: CloudAsset
+display_name: Cloud Asset Inventory
+versions:
+  - !ruby/object:Api::Product::Version
+    name: ga
+    base_url: https://cloudasset.googleapis.com/v1/
+scopes:
+  - https://www.googleapis.com/auth/cloud-platform
+apis_required:
+  - !ruby/object:Api::Product::ApiReference
+    name: Cloud Asset API
+    url: https://console.cloud.google.com/apis/library/cloudasset.googleapis.com/
+objects:
+  - !ruby/object:Api::Resource
+    name: ProjectFeed
+    base_url: projects/{{project}}/feeds
+    create_url: projects/{{project}}/feeds?feedId={{feed_id}}
+    self_link: "{{name}}"
+    update_verb: :PATCH
+    update_mask: true
+    collection_url_key: 'feeds'
+    description: |
+      Describes a Cloud Asset Inventory feed used to listen to asset updates.
+    references: !ruby/object:Api::Resource::ReferenceLinks
+      guides:
+        'Official Documentation':
+          'https://cloud.google.com/asset-inventory/docs'
+      api: 'https://cloud.google.com/asset-inventory/docs/reference/rest/'
+    properties:
+      - !ruby/object:Api::Type::String
+        name: billing_project
+        url_param_only: true
+        input: true
+        description: |
+          The project whose identity will be used when sending messages to the
+          destination pubsub topic. It also specifies the project for API
+          enablement check, quota, and billing. If not specified, the resource's
+          project will be used.
+      - !ruby/object:Api::Type::String
+        name: name
+        output: true
+        description: |
+          The format will be projects/{projectNumber}/feeds/{client-assigned_feed_identifier}.
+      - !ruby/object:Api::Type::String
+        name: feedId
+        description: |
+          This is the client-assigned asset feed identifier and it needs to be unique under a specific parent.
+        required: true
+        input: true
+        url_param_only: true
+      - !ruby/object:Api::Type::Array
+        name: assetNames
+        item_type: Api::Type::String
+        description: |
+          A list of the full names of the assets to receive updates. You must specify either or both of
+          assetNames and assetTypes. Only asset updates matching specified assetNames and assetTypes are
+          exported to the feed. For example: //compute.googleapis.com/projects/my_project_123/zones/zone1/instances/instance1.
+          See https://cloud.google.com/apis/design/resourceNames#fullResourceName for more info.
+      - !ruby/object:Api::Type::Array
+        name: assetTypes
+        item_type: Api::Type::String
+        description: |
+          A list of types of the assets to receive updates. You must specify either or both of assetNames
+          and assetTypes. Only asset updates matching specified assetNames and assetTypes are exported to
+          the feed. For example: "compute.googleapis.com/Disk"
+          See https://cloud.google.com/asset-inventory/docs/supported-asset-types for a list of all
+          supported asset types.
+      - !ruby/object:Api::Type::Enum
+        name: contentType
+        description: |
+          Asset content type. If not specified, no content but the asset name and type will be returned.
+        values:
+          - :CONTENT_TYPE_UNSPECIFIED
+          - :RESOURCE
+          - :IAM_POLICY
+          - :ORG_POLICY
+          - :ACCESS_POLICY
+      - !ruby/object:Api::Type::NestedObject
+        name: feedOutputConfig
+        required: true
+        description: |
+          Output configuration for asset feed destination.
+        properties:
+          - !ruby/object:Api::Type::NestedObject
+            name: pubsubDestination
+            required: true
+            description: |
+              Destination on Cloud Pubsub.
+            properties:
+              - !ruby/object:Api::Type::String
+                name: topic
+                required: true
+                description: |
+                  Destination on Cloud Pubsub topic.
+  - !ruby/object:Api::Resource
+    name: FolderFeed
+    base_url: folders/{{folder_id}}/feeds
+    create_url: folders/{{folder_id}}/feeds?feedId={{feed_id}}
+    self_link: "{{name}}"
+    update_verb: :PATCH
+    update_mask: true
+    collection_url_key: 'feeds'
+    description: |
+      Describes a Cloud Asset Inventory feed used to listen to asset updates.
+    references: !ruby/object:Api::Resource::ReferenceLinks
+      guides:
+        'Official Documentation':
+          'https://cloud.google.com/asset-inventory/docs'
+      api: 'https://cloud.google.com/asset-inventory/docs/reference/rest/'
+    parameters:
+      - !ruby/object:Api::Type::String
+        name: folder
+        required: true
+        input: true
+        url_param_only: true
+        description: |
+          The folder this feed should be created in.
+    properties:
+      - !ruby/object:Api::Type::String
+        name: billing_project
+        required: true
+        input: true
+        url_param_only: true
+        description: |
+          The project whose identity will be used when sending messages to the
+          destination pubsub topic. It also specifies the project for API
+          enablement check, quota, and billing.
+      - !ruby/object:Api::Type::String
+        name: folder_id
+        output: true
+        description: |
+          The ID of the folder where this feed has been created. Both [FOLDER_NUMBER]
+          and folders/[FOLDER_NUMBER] are accepted.
+      - !ruby/object:Api::Type::String
+        name: name
+        output: true
+        description: |
+          The format will be folders/{folder_number}/feeds/{client-assigned_feed_identifier}.
+      - !ruby/object:Api::Type::String
+        name: feedId
+        description: |
+          This is the client-assigned asset feed identifier and it needs to be unique under a specific parent.
+        required: true
+        input: true
+        url_param_only: true
+      - !ruby/object:Api::Type::Array
+        name: assetNames
+        item_type: Api::Type::String
+        description: |
+          A list of the full names of the assets to receive updates. You must specify either or both of
+          assetNames and assetTypes. Only asset updates matching specified assetNames and assetTypes are
+          exported to the feed. For example: //compute.googleapis.com/projects/my_project_123/zones/zone1/instances/instance1.
+          See https://cloud.google.com/apis/design/resourceNames#fullResourceName for more info.
+      - !ruby/object:Api::Type::Array
+        name: assetTypes
+        item_type: Api::Type::String
+        description: |
+          A list of types of the assets to receive updates. You must specify either or both of assetNames
+          and assetTypes.
+          Only asset updates matching specified assetNames and assetTypes are exported to
+          the feed. For example: "compute.googleapis.com/Disk"
+          See https://cloud.google.com/asset-inventory/docs/supported-asset-types for a list of all
+          supported asset types.
+      - !ruby/object:Api::Type::Enum
+        name: contentType
+        description: |
+          Asset content type. If not specified, no content but the asset name and type will be returned.
+        values:
+          - :CONTENT_TYPE_UNSPECIFIED
+          - :RESOURCE
+          - :IAM_POLICY
+          - :ORG_POLICY
+          - :ACCESS_POLICY
+      - !ruby/object:Api::Type::NestedObject
+        name: feedOutputConfig
+        required: true
+        description: |
+          Output configuration for asset feed destination.
+        properties:
+          - !ruby/object:Api::Type::NestedObject
+            name: pubsubDestination
+            required: true
+            description: |
+              Destination on Cloud Pubsub.
+            properties:
+              - !ruby/object:Api::Type::String
+                name: topic
+                required: true
+                description: |
+                  Destination on Cloud Pubsub topic.
+  - !ruby/object:Api::Resource
+    name: OrganizationFeed
+    base_url: "organizations/{{org_id}}/feeds"
+    create_url: "organizations/{{org_id}}/feeds?feedId={{feed_id}}"
+    self_link: "{{name}}"
+    update_verb: :PATCH
+    update_mask: true
+    collection_url_key: 'feeds'
+    description: |
+      Describes a Cloud Asset Inventory feed used to listen to asset updates.
+    references: !ruby/object:Api::Resource::ReferenceLinks
+      guides:
+        'Official Documentation':
+          'https://cloud.google.com/asset-inventory/docs'
+      api: 'https://cloud.google.com/asset-inventory/docs/reference/rest/'
+    parameters:
+      - !ruby/object:Api::Type::String
+        name: org_id
+        required: true
+        input: true
+        url_param_only: true
+        description: |
+          The organization this feed should be created in.
+    properties:
+      - !ruby/object:Api::Type::String
+        name: billing_project
+        required: true
+        input: true
+        url_param_only: true
+        description: |
+          The project whose identity will be used when sending messages to the
+          destination pubsub topic. It also specifies the project for API
+          enablement check, quota, and billing.
+      - !ruby/object:Api::Type::String
+        name: name
+        output: true
+        description: |
+          The format will be organizations/{organization_number}/feeds/{client-assigned_feed_identifier}.
+      - !ruby/object:Api::Type::String
+        name: feedId
+        description: |
+          This is the client-assigned asset feed identifier and it needs to be unique under a specific parent.
+        required: true
+        input: true
+        url_param_only: true
+      - !ruby/object:Api::Type::Array
+        name: assetNames
+        item_type: Api::Type::String
+        description: |
+          A list of the full names of the assets to receive updates. You must specify either or both of
+          assetNames and assetTypes. Only asset updates matching specified assetNames and assetTypes are
+          exported to the feed. For example: //compute.googleapis.com/projects/my_project_123/zones/zone1/instances/instance1.
+          See https://cloud.google.com/apis/design/resourceNames#fullResourceName for more info.
+      - !ruby/object:Api::Type::Array
+        name: assetTypes
+        item_type: Api::Type::String
+        description: |
+          A list of types of the assets to receive updates. You must specify either or both of assetNames
+          and assetTypes. Only asset updates matching specified assetNames and assetTypes are exported to
+          the feed. For example: "compute.googleapis.com/Disk"
+          See https://cloud.google.com/asset-inventory/docs/supported-asset-types for a list of all
+          supported asset types.
+      - !ruby/object:Api::Type::Enum
+        name: contentType
+        description: |
+          Asset content type. If not specified, no content but the asset name and type will be returned.
+        values:
+          - :CONTENT_TYPE_UNSPECIFIED
+          - :RESOURCE
+          - :IAM_POLICY
+          - :ORG_POLICY
+          - :ACCESS_POLICY
+      - !ruby/object:Api::Type::NestedObject
+        name: feedOutputConfig
+        required: true
+        description: |
+          Output configuration for asset feed destination.
+        properties:
+          - !ruby/object:Api::Type::NestedObject
+            name: pubsubDestination
+            required: true
+            description: |
+              Destination on Cloud Pubsub.
+            properties:
+              - !ruby/object:Api::Type::String
+                name: topic
+                required: true
+                description: |
+                  Destination on Cloud Pubsub topic.
diff --git a/products/cloudasset/terraform.yaml b/products/cloudasset/terraform.yaml
new file mode 100644
index 000000000000..8318b9bb158d
--- /dev/null
+++ b/products/cloudasset/terraform.yaml
@@ -0,0 +1,62 @@
+# Copyright 2020 Google Inc.
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+--- !ruby/object:Provider::Terraform::Config
+overrides: !ruby/object:Overrides::ResourceOverrides
+  ProjectFeed: !ruby/object:Overrides::Terraform::ResourceOverride
+    custom_code: !ruby/object:Provider::Terraform::CustomCode
+      pre_create: templates/terraform/pre_create/cloud_asset_feed.go.erb
+      post_create: templates/terraform/post_create/cloud_asset_feed.go.erb
+      custom_import: templates/terraform/custom_import/cloud_asset_feed.go.erb
+      encoder: templates/terraform/encoders/cloud_asset_feed.go.erb
+    examples:
+      - !ruby/object:Provider::Terraform::Examples
+        name: "cloud_asset_project_feed"
+        primary_resource_id: "project_feed"
+        vars:
+          feed_id: "network-updates"
+        test_env_vars:
+          project: :PROJECT_NAME
+  FolderFeed: !ruby/object:Overrides::Terraform::ResourceOverride
+    supports_indirect_user_project_override: true
+    custom_code: !ruby/object:Provider::Terraform::CustomCode
+      pre_create: templates/terraform/pre_create/cloud_asset_feed.go.erb
+      post_create: templates/terraform/post_create/cloud_asset_feed.go.erb
+      custom_import: templates/terraform/custom_import/cloud_asset_feed.go.erb
+      encoder: templates/terraform/encoders/cloud_asset_feed.go.erb
+    examples:
+      - !ruby/object:Provider::Terraform::Examples
+        name: "cloud_asset_folder_feed"
+        primary_resource_id: "folder_feed"
+        vars:
+          feed_id: "network-updates"
+          folder_name: "Networking"
+        test_env_vars:
+          project: :PROJECT_NAME
+          org_id: :ORG_ID
+  OrganizationFeed: !ruby/object:Overrides::Terraform::ResourceOverride
+    supports_indirect_user_project_override: true
+    custom_code: !ruby/object:Provider::Terraform::CustomCode
+      pre_create: templates/terraform/pre_create/cloud_asset_feed.go.erb
+      post_create: templates/terraform/post_create/cloud_asset_feed.go.erb
+      custom_import: templates/terraform/custom_import/cloud_asset_feed.go.erb
+      encoder: templates/terraform/encoders/cloud_asset_feed.go.erb
+    examples:
+      - !ruby/object:Provider::Terraform::Examples
+        name: "cloud_asset_organization_feed"
+        primary_resource_id: "organization_feed"
+        vars:
+          feed_id: "network-updates"
+        test_env_vars:
+          project: :PROJECT_NAME
+          org_id: :ORG_ID
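For orientation, here is a sketch of the usage the `cloud_asset_project_feed` example above implies, assuming the generated resource is named `google_cloud_asset_project_feed` (the topic name and asset types are illustrative):

```hcl
resource "google_pubsub_topic" "feed_output" {
  name = "network-updates"
}

resource "google_cloud_asset_project_feed" "project_feed" {
  feed_id      = "network-updates"
  content_type = "RESOURCE"

  asset_types = [
    "compute.googleapis.com/Network",
    "compute.googleapis.com/Subnetwork",
  ]

  feed_output_config {
    pubsub_destination {
      topic = google_pubsub_topic.feed_output.id
    }
  }
}
```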
diff --git a/products/cloudbuild/api.yaml b/products/cloudbuild/api.yaml
index 0a96068cfba3..ecd96549f0bd 100644
--- a/products/cloudbuild/api.yaml
+++ b/products/cloudbuild/api.yaml
@@ -133,6 +133,11 @@ objects:
             This must be a relative path. If a step's dir is specified and
             is an absolute path, this value is ignored for that step's
             execution.
+
+          - !ruby/object:Api::Type::Boolean
+            name: 'invertRegex'
+            description: |
+              Only trigger a build if the revision regex does NOT match the revision.
           - !ruby/object:Api::Type::String
             name: 'branchName'
             description: |
@@ -198,6 +203,10 @@ objects:
                 values:
                   - :COMMENTS_DISABLED
                   - :COMMENTS_ENABLED
+              - !ruby/object:Api::Type::Boolean
+                name: 'invertRegex'
+                description: |
+                  If true, branches that do NOT match the git_ref will trigger a build.
           - !ruby/object:Api::Type::NestedObject
             name: 'push'
             description: |
@@ -206,6 +215,10 @@ objects:
               - github.0.pull_request
               - github.0.push
             properties:
+              - !ruby/object:Api::Type::Boolean
+                name: 'invertRegex'
+                description: |
+                  When true, only trigger a build if the revision does NOT match the git_ref regex.
               - !ruby/object:Api::Type::String
                 name: 'branch'
                 description: |
diff --git a/products/cloudidentity/api.yaml b/products/cloudidentity/api.yaml
new file mode 100644
index 000000000000..a665dd471532
--- /dev/null
+++ b/products/cloudidentity/api.yaml
@@ -0,0 +1,321 @@
+# Copyright 2020 Google Inc.
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+--- !ruby/object:Api::Product
+name: CloudIdentity
+display_name: Cloud Identity
+versions:
+  - !ruby/object:Api::Product::Version
+    name: beta
+    base_url: https://cloudidentity.googleapis.com/v1beta1/
+scopes:
+  - https://www.googleapis.com/auth/cloud-identity
+apis_required:
+  - !ruby/object:Api::Product::ApiReference
+    name: Cloud Identity API
+    url: https://console.cloud.google.com/apis/api/cloudidentity.googleapis.com/overview
+objects:
+  - !ruby/object:Api::Resource
+    name: 'Group'
+    base_url: groups
+    update_url: '{{name}}'
+    self_link: '{{name}}'
+    update_verb: :PATCH
+    update_mask: true
+    description: |
+      A Cloud Identity resource representing a Group.
+    properties:
+      - !ruby/object:Api::Type::String
+        name: 'name'
+        output: true
+        description: |
+          Resource name of the Group in the format: groups/{group_id}, where group_id
+          is the unique ID assigned to the Group.
+      - !ruby/object:Api::Type::NestedObject
+        name: 'groupKey'
+        required: true
+        input: true
+        description: |
+          EntityKey of the Group.
+        properties:
+          - !ruby/object:Api::Type::String
+            name: 'id'
+            required: true
+            input: true
+            description: |
+              The ID of the entity.
+
+              For Google-managed entities, the id must be the email address of an existing
+              group or user.
+
+              For external-identity-mapped entities, the id must be a string conforming
+              to the Identity Source's requirements.
+
+              Must be unique within a namespace.
+          - !ruby/object:Api::Type::String
+            name: 'namespace'
+            input: true
+            description: |
+              The namespace in which the entity exists.
+ + If not specified, the EntityKey represents a Google-managed entity + such as a Google user or a Google Group. + + If specified, the EntityKey represents an external-identity-mapped group. + The namespace must correspond to an identity source created in Admin Console + and must be in the form of `identitysources/{identity_source_id}`. + - !ruby/object:Api::Type::String + name: 'parent' + required: true + input: true + description: | + The resource name of the entity under which this Group resides in the + Cloud Identity resource hierarchy. + + Must be of the form identitysources/{identity_source_id} for external-identity-mapped + groups or customers/{customer_id} for Google Groups. + - !ruby/object:Api::Type::String + name: 'displayName' + description: | + The display name of the Group. + - !ruby/object:Api::Type::String + name: 'description' + description: | + An extended description to help users determine the purpose of a Group. + Must not be longer than 4,096 characters. + - !ruby/object:Api::Type::String + name: 'createTime' + output: true + description: | + The time when the Group was created. + - !ruby/object:Api::Type::String + name: 'updateTime' + output: true + description: | + The time when the Group was last updated. + - !ruby/object:Api::Type::KeyValuePairs + name: 'labels' + required: true + input: true + description: | + The labels that apply to the Group. + + Must not contain more than one entry. Must contain the entry + 'cloudidentity.googleapis.com/groups.discussion_forum': '' if the Group is a Google Group or + 'system/groups/external': '' if the Group is an external-identity-mapped group. + # TODO (mbang): The full API doesn't seem to be implemented yet + # - !ruby/object:Api::Type::Array + # name: 'additionalGroupKeys' + # input: true + # description: | + # Additional entity key aliases for a Group. + # item_type: !ruby/object:Api::Type::NestedObject + # properties: + # - !ruby/object:Api::Type::String + # name: 'id' + # required: true + # description: | + # The ID of the entity. + + # For Google-managed entities, the id must be the email address of an existing + # group or user. + + # For external-identity-mapped entities, the id must be a string conforming + # to the Identity Source's requirements. + + # Must be unique within a namespace. + # - !ruby/object:Api::Type::String + # name: 'namespace' + # description: | + # The namespace in which the entity exists. + + # If not specified, the EntityKey represents a Google-managed entity + # such as a Google user or a Google Group. + + # If specified, the EntityKey represents an external-identity-mapped group. + # The namespace must correspond to an identity source created in Admin Console + # and must be in the form of `identitysources/{identity_source_id}. + # - !ruby/object:Api::Type::NestedObject + # name: 'dynamicGroupMetadata' + # input: true + # description: | + # Dynamic group metadata like queries and status. + # properties: + # - !ruby/object:Api::Type::Array + # name: 'queries' + # required: true + # description: | + # Memberships will be the union of all queries. Only one entry with USER resource is currently supported. + # item_type: !ruby/object:Api::Type::NestedObject + # properties: + # - !ruby/object:Api::Type::Enum + # name: 'resourceType' + # description: | + # Resources supported for dynamic groups. + # default_value: :USER + # values: + # - :USER + # - !ruby/object:Api::Type::String + # name: 'query' + # description: | + # Query that determines the memberships of the dynamic group. 
+ + # Examples: All users with at least one organizations.department of engineering. + + # user.organizations.exists(org, org.department=='engineering') + + # All users with at least one location that has area of foo and building_id of bar. + + # user.locations.exists(loc, loc.area=='foo' && loc.building_id=='bar') + # - !ruby/object:Api::Type::NestedObject + # name: 'DynamicGroupStatus' + # output: true + # description: | + # Status of the dynamic group. + # properties: + # - !ruby/object:Api::Type::String + # name: 'status' + # description: | + # Status of the dynamic group. + # - !ruby/object:Api::Type::String + # name: 'statusTime' + # description: | + # The latest time at which the dynamic group is guaranteed to be in the given status. + # For example, if status is: UP_TO_DATE - The latest time at which this dynamic group + # was confirmed to be up to date. UPDATING_MEMBERSHIPS - The time at which dynamic group was created. + + # A timestamp in RFC3339 UTC "Zulu" format, accurate to nanoseconds. Example: "2014-10-02T15:01:23.045123456Z". + - !ruby/object:Api::Resource + name: 'GroupMembership' + base_url: '{{group}}/memberships' + self_link: '{{name}}' + description: | + A Membership defines a relationship between a Group and an entity belonging to that Group, referred to as a "member". + parameters: + - !ruby/object:Api::Type::ResourceRef + name: 'group' + resource: 'Group' + imports: 'name' + description: | + The name of the Group to create this membership in. + required: true + input: true + url_param_only: true + properties: + - !ruby/object:Api::Type::String + name: 'name' + output: true + description: | + The resource name of the Membership, of the form groups/{group_id}/memberships/{membership_id}. + - !ruby/object:Api::Type::NestedObject + name: 'memberKey' + input: true + description: | + EntityKey of the member. + exactly_one_of: + - member_key + - preferred_member_key + properties: + - !ruby/object:Api::Type::String + name: 'id' + required: true + input: true + description: | + The ID of the entity. + + For Google-managed entities, the id must be the email address of an existing + group or user. + + For external-identity-mapped entities, the id must be a string conforming + to the Identity Source's requirements. + + Must be unique within a namespace. + - !ruby/object:Api::Type::String + name: 'namespace' + input: true + description: | + The namespace in which the entity exists. + + If not specified, the EntityKey represents a Google-managed entity + such as a Google user or a Google Group. + + If specified, the EntityKey represents an external-identity-mapped group. + The namespace must correspond to an identity source created in Admin Console + and must be in the form of `identitysources/{identity_source_id}`. + - !ruby/object:Api::Type::NestedObject + name: 'preferredMemberKey' + input: true + description: | + EntityKey of the member. + exactly_one_of: + - member_key + - preferred_member_key + properties: + - !ruby/object:Api::Type::String + name: 'id' + required: true + input: true + description: | + The ID of the entity. + + For Google-managed entities, the id must be the email address of an existing + group or user. + + For external-identity-mapped entities, the id must be a string conforming + to the Identity Source's requirements. + + Must be unique within a namespace. + - !ruby/object:Api::Type::String + name: 'namespace' + input: true + description: | + The namespace in which the entity exists. 
+ + If not specified, the EntityKey represents a Google-managed entity + such as a Google user or a Google Group. + + If specified, the EntityKey represents an external-identity-mapped group. + The namespace must correspond to an identity source created in Admin Console + and must be in the form of `identitysources/{identity_source_id}`. + - !ruby/object:Api::Type::String + name: 'createTime' + output: true + description: | + The time when the Membership was created. + - !ruby/object:Api::Type::String + name: 'updateTime' + output: true + description: | + The time when the Membership was last updated. + - !ruby/object:Api::Type::Array + name: 'roles' + required: true + description: | + The MembershipRoles that apply to the Membership. + Must not contain duplicate MembershipRoles with the same name. + item_type: !ruby/object:Api::Type::NestedObject + properties: + - !ruby/object:Api::Type::Enum + name: 'name' + required: true + description: | + The name of the MembershipRole. Must be one of OWNER, MANAGER, MEMBER. + values: + - :OWNER + - :MANAGER + - :MEMBER + - !ruby/object:Api::Type::String + name: 'type' + output: true + description: | + The type of the membership. diff --git a/products/cloudidentity/terraform.yaml b/products/cloudidentity/terraform.yaml new file mode 100644 index 000000000000..d6583f133597 --- /dev/null +++ b/products/cloudidentity/terraform.yaml @@ -0,0 +1,75 @@ +# Copyright 2020 Google Inc. +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +--- !ruby/object:Provider::Terraform::Config +overrides: !ruby/object:Overrides::ResourceOverrides + Group: !ruby/object:Overrides::Terraform::ResourceOverride + import_format: ["{{name}}"] + examples: + - !ruby/object:Provider::Terraform::Examples + name: "cloud_identity_groups_basic" + primary_resource_id: "cloud_identity_group_basic" + min_version: beta + vars: + id_group: "my-identity-group" + test_env_vars: + org_domain: :ORG_DOMAIN + cust_id: :CUST_ID + ### The full API doesn't seem to be implemented yet + # - !ruby/object:Provider::Terraform::Examples + # name: "cloud_identity_groups_full" + # primary_resource_id: "cloud_identity_group_full" + # min_version: beta + # vars: + # id_group: "my-identity-group" + # test_env_vars: + # org_domain: :ORG_DOMAIN + # cust_id: :CUST_ID + custom_code: !ruby/object:Provider::Terraform::CustomCode + post_create: templates/terraform/post_create/set_computed_name.erb + GroupMembership: !ruby/object:Overrides::Terraform::ResourceOverride + import_format: ["{{name}}"] + examples: + - !ruby/object:Provider::Terraform::Examples + name: "cloud_identity_group_membership" + primary_resource_id: "cloud_identity_group_membership_basic" + min_version: beta + vars: + id_group: "my-identity-group" + test_env_vars: + org_domain: :ORG_DOMAIN + cust_id: :CUST_ID + - !ruby/object:Provider::Terraform::Examples + name: "cloud_identity_group_membership_user" + primary_resource_id: "cloud_identity_group_membership_basic" + min_version: beta + vars: + id_group: "my-identity-group" + test_env_vars: + org_domain: :ORG_DOMAIN + cust_id: :CUST_ID + identity_user: :IDENTITY_USER + properties: + memberKey: !ruby/object:Overrides::Terraform::PropertyOverride + default_from_api: true + preferredMemberKey: !ruby/object:Overrides::Terraform::PropertyOverride + default_from_api: true + custom_code: !ruby/object:Provider::Terraform::CustomCode + post_create: templates/terraform/post_create/set_computed_name.erb + +# This is for copying files over +files: !ruby/object:Provider::Config::Files + # These files have templating (ERB) code that will be run. + # This is usually to add licensing info, autogeneration notices, etc. + compile: +<%= lines(indent(compile('provider/terraform/product~compile.yaml'), 4)) -%> diff --git a/products/cloudiot/api.yaml b/products/cloudiot/api.yaml new file mode 100644 index 000000000000..d0f82d7e952f --- /dev/null +++ b/products/cloudiot/api.yaml @@ -0,0 +1,405 @@ +# Copyright 2020 Google Inc. +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +--- !ruby/object:Api::Product +name: CloudIot +display_name: Cloud IoT Core +versions: + - !ruby/object:Api::Product::Version + name: ga + base_url: https://cloudiot.googleapis.com/v1/ +scopes: + - https://www.googleapis.com/auth/cloudiot + - https://www.googleapis.com/auth/cloud-platform +objects: + - !ruby/object:Api::Resource + name: 'DeviceRegistry' + base_url: 'projects/{{project}}/locations/{{region}}/registries' + self_link: 'projects/{{project}}/locations/{{region}}/registries/{{name}}' + update_verb: :PATCH + update_mask: true + description: | + A Google Cloud IoT Core device registry. + references: !ruby/object:Api::Resource::ReferenceLinks + guides: + 'Official Documentation': + 'https://cloud.google.com/iot/docs/' + api: 'https://cloud.google.com/iot/docs/reference/cloudiot/rest/' + parameters: + - !ruby/object:Api::Type::String + name: region + input: true + url_param_only: true + required: true + description: | + The region of this Device Registry. + properties: + - !ruby/object:Api::Type::String + name: 'id' + input: true + required: true + description: | + The unique identifier for the device registry. For example, + `myRegistry`. + - !ruby/object:Api::Type::String + name: 'name' + description: | + The resource path name. For example, + `projects/example-proj/locations/us-central1/registries/my-registry`. + - !ruby/object:Api::Type::Array + name: 'eventNotificationConfigs' + description: | + List of configurations for event notifications, such as PubSub topics + to publish device events to. + max_size: 10 + item_type: !ruby/object:Api::Type::NestedObject + properties: + - !ruby/object:Api::Type::String + name: 'subfolderMatches' + description: | + If the subfolder name matches this string exactly, this + configuration will be used. The string must not include the + leading '/' character. If empty, all strings are matched. Empty + value can only be used for the last `event_notification_configs` + item. + - !ruby/object:Api::Type::String + name: 'pubsubTopicName' + required: true + description: | + PubSub topic name to publish device events. + - !ruby/object:Api::Type::NestedObject + name: 'stateNotificationConfig' + description: | + A PubSub topic to publish device state updates. + properties: + - !ruby/object:Api::Type::String + name: 'pubsubTopicName' + required: true + description: | + PubSub topic name to publish device state updates. + - !ruby/object:Api::Type::NestedObject + name: 'mqttConfig' + description: | + Activate or deactivate MQTT. + properties: + - !ruby/object:Api::Type::Enum + name: 'mqttEnabledState' + description: | + The field allows `MQTT_ENABLED` or `MQTT_DISABLED` + required: true + values: + - :MQTT_ENABLED + - :MQTT_DISABLED + - !ruby/object:Api::Type::NestedObject + name: 'httpConfig' + description: | + Activate or deactivate HTTP. + properties: + - !ruby/object:Api::Type::Enum + name: 'httpEnabledState' + required: true + description: | + The field allows `HTTP_ENABLED` or `HTTP_DISABLED`. + values: + - :HTTP_ENABLED + - :HTTP_DISABLED + - !ruby/object:Api::Type::Enum + name: 'logLevel' + default_value: :NONE + description: | + The default logging verbosity for activity from devices in this + registry. Specifies which events should be written to logs. For + example, if the LogLevel is ERROR, only events that terminate in + errors will be logged. LogLevel is inclusive; enabling INFO logging + will also enable ERROR logging. 
+ values: + - :NONE + - :ERROR + - :INFO + - :DEBUG + - !ruby/object:Api::Type::Array + name: 'credentials' + description: | + List of public key certificates to authenticate devices. + max_size: 10 + item_type: !ruby/object:Api::Type::NestedObject + properties: + - !ruby/object:Api::Type::NestedObject + name: 'publicKeyCertificate' + required: true + description: | + A public key certificate format and data. + properties: + - !ruby/object:Api::Type::Enum + name: 'format' + required: true + description: | + The field allows only `X509_CERTIFICATE_PEM`. + values: + - :X509_CERTIFICATE_PEM + - !ruby/object:Api::Type::String + name: 'certificate' + required: true + description: | + The certificate data. + - !ruby/object:Api::Type::NestedObject + name: 'x509Details' + output: true + description: | + The certificate details. Used only for X.509 certificates. + properties: + - !ruby/object:Api::Type::String + name: 'issuer' + output: true + description: | + The entity that signed the certificate. + - !ruby/object:Api::Type::String + name: 'subject' + output: true + description: | + The entity the certificate and public key belong to. + - !ruby/object:Api::Type::String + name: 'startTime' + output: true + description: | + The time the certificate becomes valid. A timestamp in + RFC3339 UTC "Zulu" format, accurate to nanoseconds. + Example: "2014-10-02T15:01:23.045123456Z". + - !ruby/object:Api::Type::String + name: 'expiryTime' + output: true + description: | + The time the certificate becomes invalid. A timestamp in + RFC3339 UTC "Zulu" format, accurate to nanoseconds. + Example: "2014-10-02T15:01:23.045123456Z". + - !ruby/object:Api::Type::String + name: 'signatureAlgorithm' + output: true + description: | + The algorithm used to sign the certificate. + - !ruby/object:Api::Type::String + name: 'publicKeyType' + output: true + description: | + The type of public key in the certificate. + - !ruby/object:Api::Resource + name: 'Device' + base_url: '{{registry}}/devices' + self_link: '{{registry}}/devices/{{name}}' + update_verb: :PATCH + update_mask: true + description: | + A Google Cloud IoT Core device. + references: !ruby/object:Api::Resource::ReferenceLinks + guides: + 'Official Documentation': + 'https://cloud.google.com/iot/docs/' + api: 'https://cloud.google.com/iot/docs/reference/cloudiot/rest/' + parameters: + - !ruby/object:Api::Type::String + name: registry + input: true + url_param_only: true + required: true + description: | + The name of the device registry where this device should be created. + properties: + - !ruby/object:Api::Type::String + name: 'id' + input: true + required: true + description: | + The unique identifier for the device. For example, + `Device0`. + - !ruby/object:Api::Type::String + name: 'name' + description: | + The resource path name. For example, + `projects/example-proj/locations/us-central1/registries/my-registry/devices/device0`. + - !ruby/object:Api::Type::String + name: 'numId' + output: true + description: | + A server-defined unique numeric ID for the device. + This is a more compact way to identify devices, and it is globally unique. + - !ruby/object:Api::Type::Array + name: 'credentials' + description: | + The credentials used to authenticate this device. + max_size: 3 + item_type: !ruby/object:Api::Type::NestedObject + properties: + - !ruby/object:Api::Type::Time + name: 'expirationTime' + description: | + The time at which this credential becomes invalid. 
+ - !ruby/object:Api::Type::NestedObject + name: 'publicKey' + required: true + description: | + A public key used to verify the signature of JSON Web Tokens (JWTs). + properties: + - !ruby/object:Api::Type::Enum + name: 'format' + required: true + description: | + The format of the key. + values: + - :RSA_PEM + - :RSA_X509_PEM + - :ES256_PEM + - :ES256_X509_PEM + - !ruby/object:Api::Type::String + name: 'key' + required: true + description: | + The key data. + - !ruby/object:Api::Type::Time + name: 'lastHeartbeatTime' + output: true + description: | + The last time an MQTT PINGREQ was received. + - !ruby/object:Api::Type::Time + name: 'lastEventTime' + output: true + description: | + The last time a telemetry event was received. + - !ruby/object:Api::Type::Time + name: 'lastStateTime' + output: true + description: | + The last time a state event was received. + - !ruby/object:Api::Type::Time + name: 'lastConfigAckTime' + output: true + description: | + The last time a cloud-to-device config version acknowledgment was received from the device. + - !ruby/object:Api::Type::Time + name: 'lastConfigSendTime' + output: true + description: | + The last time a cloud-to-device config version was sent to the device. + - !ruby/object:Api::Type::Boolean + name: 'blocked' + description: | + If a device is blocked, connections or requests from this device will fail. + - !ruby/object:Api::Type::Time + name: 'lastErrorTime' + output: true + description: | + The time the most recent error occurred, such as a failure to publish to Cloud Pub/Sub. + - !ruby/object:Api::Type::NestedObject + name: 'lastErrorStatus' + output: true + description: | + The error message of the most recent error, such as a failure to publish to Cloud Pub/Sub. + properties: + - !ruby/object:Api::Type::Integer + name: 'number' + description: | + The status code, which should be an enum value of google.rpc.Code. + - !ruby/object:Api::Type::String + name: 'message' + description: | + A developer-facing error message, which should be in English. + - !ruby/object:Api::Type::Array + name: 'details' + description: | + A list of messages that carry the error details. + item_type: Api::Type::KeyValuePairs + - !ruby/object:Api::Type::NestedObject + name: 'config' + output: true + description: | + The most recent device configuration, which is eventually sent from Cloud IoT Core to the device. + properties: + - !ruby/object:Api::Type::String + name: 'version' + output: true + description: | + The version of this update. + - !ruby/object:Api::Type::String + name: 'cloudUpdateTime' + output: true + description: | + The time at which this configuration version was updated in Cloud IoT Core. + - !ruby/object:Api::Type::String + name: 'deviceAckTime' + output: true + description: | + The time at which Cloud IoT Core received the acknowledgment from the device, + indicating that the device has received this configuration version. + - !ruby/object:Api::Type::String + name: 'binaryData' + description: | + The device configuration data. + - !ruby/object:Api::Type::NestedObject + name: 'state' + output: true + description: | + The state most recently received from the device. + properties: + - !ruby/object:Api::Type::Time + name: 'updateTime' + description: | + The time at which this state version was updated in Cloud IoT Core. + - !ruby/object:Api::Type::String + name: 'binaryData' + description: | + The device state data. 
+      - !ruby/object:Api::Type::Enum
+        name: 'logLevel'
+        allow_empty_object: true
+        description: |
+          The logging verbosity for device activity.
+        values:
+          - :NONE
+          - :ERROR
+          - :INFO
+          - :DEBUG
+      - !ruby/object:Api::Type::KeyValuePairs
+        name: 'metadata'
+        description: |
+          The metadata key-value pairs assigned to the device.
+      - !ruby/object:Api::Type::NestedObject
+        name: 'gatewayConfig'
+        description: |
+          Gateway-related configuration and state.
+        properties:
+          - !ruby/object:Api::Type::Enum
+            name: 'gatewayType'
+            default_value: :NON_GATEWAY
+            input: true
+            description: |
+              Indicates whether the device is a gateway.
+            values:
+              - :GATEWAY
+              - :NON_GATEWAY
+          - !ruby/object:Api::Type::Enum
+            name: 'gatewayAuthMethod'
+            description: |
+              Indicates how to authorize and/or authenticate devices to access the gateway.
+            values:
+              - :ASSOCIATION_ONLY
+              - :DEVICE_AUTH_TOKEN_ONLY
+              - :ASSOCIATION_AND_DEVICE_AUTH_TOKEN
+          - !ruby/object:Api::Type::String
+            name: 'lastAccessedGatewayId'
+            output: true
+            description: |
+              The ID of the gateway the device accessed most recently.
+          - !ruby/object:Api::Type::Time
+            name: 'lastAccessedGatewayTime'
+            output: true
+            description: |
+              The most recent time at which the device accessed the gateway specified in last_accessed_gateway.
diff --git a/products/cloudiot/terraform.yaml b/products/cloudiot/terraform.yaml
new file mode 100644
index 000000000000..3a2d52cbb3e7
--- /dev/null
+++ b/products/cloudiot/terraform.yaml
@@ -0,0 +1,192 @@
+# Copyright 2020 Google Inc.
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+--- !ruby/object:Provider::Terraform::Config
+legacy_name: "cloudiot"
+overrides: !ruby/object:Overrides::ResourceOverrides
+  DeviceRegistry: !ruby/object:Overrides::Terraform::ResourceOverride
+    legacy_name: "google_cloudiot_registry"
+    import_format: ["{{project}}/locations/{{region}}/registries/{{name}}"]
+    id_format: "projects/{{project}}/locations/{{region}}/registries/{{name}}"
+    custom_code: !ruby/object:Provider::Terraform::CustomCode
+      constants: templates/terraform/constants/cloudiot.go.erb
+      decoder: templates/terraform/decoders/cloudiot_device_registry.go.erb
+      encoder: templates/terraform/encoders/cloudiot_device_registry.go.erb
+      extra_schema_entry: templates/terraform/extra_schema_entry/cloudiot_device_registry.go.erb
+      pre_update: templates/terraform/pre_update/cloudiot_device_registry.go.erb
+    docs: !ruby/object:Provider::Terraform::Docs
+      optional_properties: |+
+        * `state_notification_config` - A PubSub topic to publish device state updates.
+          The structure is documented below.
+
+        * `mqtt_config` - Activate or deactivate MQTT.
+          The structure is documented below.
+
+        * `http_config` - Activate or deactivate HTTP.
+          The structure is documented below.
+
+        * `credentials` - List of public key certificates to authenticate devices.
+          The structure is documented below.
+
+        The `state_notification_config` block supports:
+
+        * `pubsub_topic_name` - PubSub topic name to publish device state updates.
+ + The `mqtt_config` block supports: + + * `mqtt_enabled_state` - The field allows `MQTT_ENABLED` or `MQTT_DISABLED`. + + The `http_config` block supports: + + * `http_enabled_state` - The field allows `HTTP_ENABLED` or `HTTP_DISABLED`. + + The `credentials` block supports: + + * `public_key_certificate` - A public key certificate format and data. + + The `public_key_certificate` block supports: + + * `format` - The field allows only `X509_CERTIFICATE_PEM`. + + * `certificate` - The certificate data. + examples: + - !ruby/object:Provider::Terraform::Examples + name: "cloudiot_device_registry_basic" + primary_resource_id: "test-registry" + vars: + cloudiot_registry_name: "cloudiot-registry" + test_env_vars: + project: :PROJECT_NAME + region: :REGION + - !ruby/object:Provider::Terraform::Examples + name: "cloudiot_device_registry_single_event_notification_configs" + primary_resource_id: "test-registry" + vars: + cloudiot_registry_name: "cloudiot-registry" + cloudiot_device_telemetry_topic_name: "default-telemetry" + test_env_vars: + project: :PROJECT_NAME + region: :REGION + - !ruby/object:Provider::Terraform::Examples + name: "cloudiot_device_registry_full" + primary_resource_id: "test-registry" + vars: + cloudiot_registry_name: "cloudiot-registry" + cloudiot_device_status_topic_name: "default-devicestatus" + cloudiot_device_telemetry_topic_name: "default-telemetry" + cloudiot_additional_device_telemetry_topic_name: "additional-telemetry" + cloudiot_subfolder_matches_additional_device_telemetry_topic: "test/path" + test_env_vars: + project: :PROJECT_NAME + region: :REGION + properties: + id: !ruby/object:Overrides::Terraform::PropertyOverride + required: true + name: 'name' + description: | + A unique name for the resource, required by device registry. + validation: !ruby/object:Provider::Terraform::Validation + function: 'validateCloudIotDeviceRegistryID' + name: !ruby/object:Overrides::Terraform::PropertyOverride + # We don't need this field, because it has the same format as the ID + exclude: true + region: !ruby/object:Overrides::Terraform::PropertyOverride + ignore_read: true + required: false + default_from_api: true + description: | + The region in which the created registry should reside. + If it is not provided, the provider region is used. + logLevel: !ruby/object:Overrides::Terraform::PropertyOverride + diff_suppress_func: 'emptyOrDefaultStringSuppress("NONE")' + eventNotificationConfigs: !ruby/object:Overrides::Terraform::PropertyOverride + default_from_api: true + eventNotificationConfigs.subfolderMatches: !ruby/object:Overrides::Terraform::PropertyOverride + validation: !ruby/object:Provider::Terraform::Validation + function: 'validateCloudIotDeviceRegistrySubfolderMatch' + eventNotificationConfigs.pubsubTopicName: !ruby/object:Overrides::Terraform::PropertyOverride + diff_suppress_func: 'compareSelfLinkOrResourceName' + stateNotificationConfig: !ruby/object:Overrides::Terraform::PropertyOverride + # Excluding this because the original (manually-generated) implementation + # wrongly set this to be a Map, instead of a NestedObject. To avoid breaking + # changes, we observe that behaviour by excluding this field and adding a + # corresponding custom element (of type Map) to the schema. When we're + # ready to introduce this breaking change, remove this "exclude" directive + # along with the corresponding custom schema element. 
+        exclude: true
+    stateNotificationConfig.pubsubTopicName: !ruby/object:Overrides::Terraform::PropertyOverride
+      # See the comment on stateNotificationConfig.exclude
+      exclude: true
+    mqttConfig: !ruby/object:Overrides::Terraform::PropertyOverride
+      # See the comment on stateNotificationConfig.exclude
+      exclude: true
+    mqttConfig.mqttEnabledState: !ruby/object:Overrides::Terraform::PropertyOverride
+      # See the comment on stateNotificationConfig.exclude
+      exclude: true
+    httpConfig: !ruby/object:Overrides::Terraform::PropertyOverride
+      # See the comment on stateNotificationConfig.exclude
+      exclude: true
+    httpConfig.httpEnabledState: !ruby/object:Overrides::Terraform::PropertyOverride
+      # See the comment on stateNotificationConfig.exclude
+      exclude: true
+    credentials: !ruby/object:Overrides::Terraform::PropertyOverride
+      # See the comment on stateNotificationConfig.exclude
+      exclude: true
+    credentials.publicKeyCertificate: !ruby/object:Overrides::Terraform::PropertyOverride
+      # See the comment on stateNotificationConfig.exclude
+      exclude: true
+  Device: !ruby/object:Overrides::Terraform::ResourceOverride
+    import_format: [ "{{%registry}}/devices/{{name}}" ]
+    properties:
+      id: !ruby/object:Overrides::Terraform::PropertyOverride
+        required: true
+        name: 'name'
+        description: |
+          A unique name for the resource.
+      name: !ruby/object:Overrides::Terraform::PropertyOverride
+        # We don't need this field, because it has the same format as the ID
+        exclude: true
+      credentials.expirationTime: !ruby/object:Overrides::Terraform::PropertyOverride
+        # If you don't set an expirationTime for a key, the API returns
+        # 1970-01-01T00:00:00Z, so we have to accept that value eventually.
+        default_from_api: true
+      gatewayConfig: !ruby/object:Overrides::Terraform::PropertyOverride
+        # The only mutable gateway_config field is gateway_auth_method,
+        # at least according to the API responses.
+        update_mask_fields:
+          - "gateway_config.gateway_auth_method"
+    examples:
+      - !ruby/object:Provider::Terraform::Examples
+        name: "cloudiot_device_basic"
+        primary_resource_id: "test-device"
+        vars:
+          cloudiot_device_name: "cloudiot-device"
+          cloudiot_device_registry_name: "cloudiot-device-registry"
+        test_env_vars:
+          project: :PROJECT_NAME
+          region: :REGION
+      - !ruby/object:Provider::Terraform::Examples
+        name: "cloudiot_device_full"
+        primary_resource_id: "test-device"
+        vars:
+          cloudiot_device_name: "cloudiot-device"
+          cloudiot_device_registry_name: "cloudiot-device-registry"
+        test_env_vars:
+          project: :PROJECT_NAME
+          region: :REGION
+# This is for copying files over
+files: !ruby/object:Provider::Config::Files
+  # These files have templating (ERB) code that will be run.
+  # This is usually to add licensing info, autogeneration notices, etc.
+  compile:
+<%= lines(indent(compile('provider/terraform/product~compile.yaml'), 4)) -%>
diff --git a/products/cloudrun/api.yaml b/products/cloudrun/api.yaml
index 07f7d6db82e5..380fb7f080a7 100644
--- a/products/cloudrun/api.yaml
+++ b/products/cloudrun/api.yaml
@@ -509,6 +509,24 @@ objects:
         references will never be expanded, regardless of whether the variable
         exists or not.
         Defaults to "".
+  - !ruby/object:Api::Type::Array
+    name: ports
+    description: |-
+      List of open ports in the container.
+      More Info:
+      https://cloud.google.com/run/docs/reference/rest/v1/RevisionSpec#ContainerPort
+    item_type: !ruby/object:Api::Type::NestedObject
+      properties:
+        - !ruby/object:Api::Type::String
+          name: name
+          description: Name of the port.
+        - !ruby/object:Api::Type::String
+          name: protocol
+          description: Protocol used on the port. Defaults to TCP.
+        - !ruby/object:Api::Type::Integer
+          name: containerPort
+          description: Port number.
+          required: true
   - !ruby/object:Api::Type::NestedObject
     name: resources
     description: |-
@@ -540,6 +558,10 @@ objects:
           the default value.
           - `1` not-thread-safe. Single concurrency
           - `2-N` thread-safe, max concurrency of N
+      - !ruby/object:Api::Type::Integer
+        name: timeoutSeconds
+        description: |-
+          TimeoutSeconds holds the maximum duration, in seconds, that the instance is allowed to respond to a request.
       - !ruby/object:Api::Type::String
         name: serviceAccountName
         description: |-
diff --git a/products/cloudrun/terraform.yaml b/products/cloudrun/terraform.yaml
index 1969381dfc56..495918258175 100644
--- a/products/cloudrun/terraform.yaml
+++ b/products/cloudrun/terraform.yaml
@@ -17,7 +17,7 @@ overrides: !ruby/object:Overrides::ResourceOverrides
     id_format: "locations/{{location}}/namespaces/{{project}}/domainmappings/{{name}}"
     import_format: ["locations/{{location}}/namespaces/{{project}}/domainmappings/{{name}}"]
     async: !ruby/object:Provider::Terraform::PollAsync
-      check_response_func: PollCheckKnativeStatus
+      check_response_func_existence: PollCheckKnativeStatus
       actions: ['create', 'update']
       operation: !ruby/object:Api::Async::Operation
         timeouts: !ruby/object:Api::Timeouts
@@ -58,7 +58,7 @@ overrides: !ruby/object:Overrides::ResourceOverrides
     id_format: "locations/{{location}}/namespaces/{{project}}/services/{{name}}"
     import_format: ["locations/{{location}}/namespaces/{{project}}/services/{{name}}"]
     async: !ruby/object:Provider::Terraform::PollAsync
-      check_response_func: PollCheckKnativeStatus
+      check_response_func_existence: PollCheckKnativeStatus
       actions: ['create', 'update']
       operation: !ruby/object:Api::Async::Operation
         timeouts: !ruby/object:Api::Timeouts
@@ -102,9 +102,19 @@ overrides: !ruby/object:Overrides::ResourceOverrides
           project: :PROJECT_NAME
         ignore_read_extra:
           - "autogenerate_revision_name"
+      - !ruby/object:Provider::Terraform::Examples
+        name: "cloud_run_service_traffic_split"
+        skip_test: true
+        primary_resource_id: "default"
+        primary_resource_name: "fmt.Sprintf(\"tf-test-cloudrun-srv%s\", context[\"random_suffix\"])"
+        vars:
+          cloud_run_service_name: "cloudrun-srv"
+        test_env_vars:
+          project: :PROJECT_NAME
     virtual_fields:
-      - !ruby/object:Provider::Terraform::VirtualFields
+      - !ruby/object:Api::Type::Boolean
         name: 'autogenerate_revision_name'
+        default_value: false
         description: |
           If set to `true`, the revision name (template.metadata.name) will
           be omitted and autogenerated by Cloud Run.
This cannot be set to `true` while `template.metadata.name` @@ -142,8 +152,12 @@ overrides: !ruby/object:Overrides::ResourceOverrides default_from_api: true spec.template.spec.containerConcurrency: !ruby/object:Overrides::Terraform::PropertyOverride default_from_api: true + spec.template.spec.timeoutSeconds: !ruby/object:Overrides::Terraform::PropertyOverride + default_from_api: true spec.template.spec.containers: !ruby/object:Overrides::Terraform::PropertyOverride default_from_api: true + spec.template.spec.containers.ports: !ruby/object:Overrides::Terraform::PropertyOverride + default_from_api: true spec.template.spec.containers.resources: !ruby/object:Overrides::Terraform::PropertyOverride default_from_api: true spec.template.spec.containers.resources.limits: !ruby/object:Overrides::Terraform::PropertyOverride diff --git a/products/cloudscheduler/api.yaml b/products/cloudscheduler/api.yaml index 6a6d3729c643..c212e6f269a7 100644 --- a/products/cloudscheduler/api.yaml +++ b/products/cloudscheduler/api.yaml @@ -180,7 +180,7 @@ objects: name: topicName description: | The full resource name for the Cloud Pub/Sub topic to which - messages will be published when a job is delivered. ~>**NOTE**: + messages will be published when a job is delivered. ~>**NOTE:** The topic name must be in the same format as required by PubSub's PublishRequest.name, e.g. `projects/my-project/topics/my-topic`. required: true diff --git a/products/cloudscheduler/terraform.yaml b/products/cloudscheduler/terraform.yaml index a14505bc0475..411b0e87fbcc 100644 --- a/products/cloudscheduler/terraform.yaml +++ b/products/cloudscheduler/terraform.yaml @@ -77,6 +77,16 @@ overrides: !ruby/object:Overrides::ResourceOverrides region: !ruby/object:Overrides::Terraform::PropertyOverride default_from_api: true ignore_read: true + retryConfig.retryCount: !ruby/object:Overrides::Terraform::PropertyOverride + default_from_api: true + retryConfig.maxRetryDuration: !ruby/object:Overrides::Terraform::PropertyOverride + default_from_api: true + retryConfig.minBackoffDuration: !ruby/object:Overrides::Terraform::PropertyOverride + default_from_api: true + retryConfig.maxBackoffDuration: !ruby/object:Overrides::Terraform::PropertyOverride + default_from_api: true + retryConfig.maxDoublings: !ruby/object:Overrides::Terraform::PropertyOverride + default_from_api: true # This is for copying files over files: !ruby/object:Provider::Config::Files diff --git a/products/compute/ansible.yaml b/products/compute/ansible.yaml index 9e6858b82e07..f62507069980 100644 --- a/products/compute/ansible.yaml +++ b/products/compute/ansible.yaml @@ -31,6 +31,8 @@ datasources: !ruby/object:Overrides::ResourceOverrides exclude: true License: !ruby/object:Overrides::Ansible::ResourceOverride exclude: true + MachineImage: !ruby/object:Overrides::Ansible::ResourceOverride + exclude: true MachineType: !ruby/object:Overrides::Ansible::ResourceOverride exclude: true NetworkEndpoint: !ruby/object:Overrides::Ansible::ResourceOverride @@ -51,8 +53,14 @@ datasources: !ruby/object:Overrides::ResourceOverrides exclude: true RouterNat: !ruby/object:Overrides::Ansible::ResourceOverride exclude: true + SecurityPolicy: !ruby/object:Overrides::Ansible::ResourceOverride + exclude: true Zone: !ruby/object:Overrides::Ansible::ResourceOverride exclude: true + PerInstanceConfig: !ruby/object:Overrides::Ansible::ResourceOverride + exclude: true + RegionPerInstanceConfig: !ruby/object:Overrides::Ansible::ResourceOverride + exclude: true overrides: 
!ruby/object:Overrides::ResourceOverrides Autoscaler: !ruby/object:Overrides::Ansible::ResourceOverride properties: @@ -261,6 +269,14 @@ overrides: !ruby/object:Overrides::ResourceOverrides description: | The source snapshot used to create this disk. You can provide this as a partial or full URL to the resource. + RegionUrlMap: !ruby/object:Overrides::Ansible::ResourceOverride + properties: + pathMatchers.defaultUrlRedirect.stripQuery: !ruby/object:Overrides::Ansible::PropertyOverride + default_value: false + defaultUrlRedirect.stripQuery: !ruby/object:Overrides::Ansible::PropertyOverride + default_value: false + pathMatchers.pathRules.urlRedirect.stripQuery: !ruby/object:Overrides::Ansible::PropertyOverride + default_value: false Reservation: !ruby/object:Overrides::Ansible::ResourceOverride properties: specificReservation.instanceProperties.minCpuPlatform: !ruby/object:Overrides::Ansible::PropertyOverride @@ -309,6 +325,8 @@ overrides: !ruby/object:Overrides::ResourceOverrides exclude: true MachineType: !ruby/object:Overrides::Ansible::ResourceOverride exclude: true + MachineImage: !ruby/object:Overrides::Ansible::ResourceOverride + exclude: true NetworkEndpoint: !ruby/object:Overrides::Ansible::ResourceOverride exclude: true GlobalNetworkEndpoint: !ruby/object:Overrides::Ansible::ResourceOverride @@ -329,8 +347,22 @@ overrides: !ruby/object:Overrides::ResourceOverrides exclude: true RouterNat: !ruby/object:Overrides::Ansible::ResourceOverride exclude: true + SecurityPolicy: !ruby/object:Overrides::Ansible::ResourceOverride + exclude: true + UrlMap: !ruby/object:Overrides::Ansible::ResourceOverride + properties: + pathMatchers.defaultUrlRedirect.stripQuery: !ruby/object:Overrides::Ansible::PropertyOverride + default_value: false + defaultUrlRedirect.stripQuery: !ruby/object:Overrides::Ansible::PropertyOverride + default_value: false + pathMatchers.pathRules.urlRedirect.stripQuery: !ruby/object:Overrides::Ansible::PropertyOverride + default_value: false Zone: !ruby/object:Overrides::Ansible::ResourceOverride exclude: true + PerInstanceConfig: !ruby/object:Overrides::Ansible::ResourceOverride + exclude: true + RegionPerInstanceConfig: !ruby/object:Overrides::Ansible::ResourceOverride + exclude: true files: !ruby/object:Provider::Config::Files resource: <%= lines(indent(compile('provider/ansible/resource~compile.yaml'), 4)) -%> diff --git a/products/compute/api.yaml b/products/compute/api.yaml index 0cf418d8f1fa..813aca4836e8 100644 --- a/products/compute/api.yaml +++ b/products/compute/api.yaml @@ -94,8 +94,7 @@ objects: - !ruby/object:Api::Type::Enum name: 'addressType' description: | - The type of address to reserve, either INTERNAL or EXTERNAL. - If unspecified, defaults to EXTERNAL. + The type of address to reserve. values: - :INTERNAL - :EXTERNAL @@ -123,6 +122,7 @@ objects: required: true - !ruby/object:Api::Type::Enum name: purpose + exact_version: ga description: | The purpose of this resource, which can be one of the following values: @@ -131,11 +131,21 @@ objects: This should only be set when using an Internal address. values: - :GCE_ENDPOINT + - !ruby/object:Api::Type::Enum + name: purpose + exact_version: beta + description: | + The purpose of this resource, which can be one of the following values: + - GCE_ENDPOINT for addresses that are used by VM instances, alias IP ranges, internal load balancers, and similar resources. + - SHARED_LOADBALANCER_VIP for an address that can be used by multiple internal load balancers + This should only be set when using an Internal address. 
+      values:
+        - :GCE_ENDPOINT
+        - :SHARED_LOADBALANCER_VIP
     - !ruby/object:Api::Type::Enum
       name: 'networkTier'
       description: |
-        The networking tier used for configuring this address. This field can
-        take the following values: PREMIUM or STANDARD. If this field is not
+        The networking tier used for configuring this address. If this field is not
         specified, it is assumed to be PREMIUM.
       values:
         - :PREMIUM
@@ -291,6 +301,46 @@ objects:
             instance may take to initialize. To do this, create an instance
             and time the startup process.
           default_value: 60
+        - !ruby/object:Api::Type::Enum
+          name: 'mode'
+          default_value: :ON
+          description: |
+            Defines the operating mode for this policy.
+          values:
+            - :OFF
+            - :ONLY_UP
+            - :ON
+        - !ruby/object:Api::Type::NestedObject
+          name: 'scaleDownControl'
+          min_version: beta
+          at_least_one_of:
+            - scale_down_control.0.max_scaled_down_replicas
+            - scale_down_control.0.time_window_sec
+          description: |
+            Defines scale down controls to reduce the risk of response latency
+            and outages due to abrupt scale-in events.
+          properties:
+            - !ruby/object:Api::Type::NestedObject
+              name: 'maxScaledDownReplicas'
+              at_least_one_of:
+                - scale_down_control.0.max_scaled_down_replicas.0.fixed
+                - scale_down_control.0.max_scaled_down_replicas.0.percent
+              properties:
+                - !ruby/object:Api::Type::Integer
+                  name: 'fixed'
+                  description: |
+                    Specifies a fixed number of VM instances. This must be a positive
+                    integer.
+                - !ruby/object:Api::Type::Integer
+                  name: 'percent'
+                  description: |
+                    Specifies a percentage of instances between 0 and 100%, inclusive.
+                    For example, specify 80 for 80%.
+            - !ruby/object:Api::Type::Integer
+              name: 'timeWindowSec'
+              description: |
+                How far back autoscaling should look when computing recommendations
+                to include directives regarding slower scale down, as described above.
         - !ruby/object:Api::Type::NestedObject
           name: 'cpuUtilization'
           description: |
@@ -364,8 +414,7 @@ objects:
               name: 'utilizationTargetType'
               description: |
                 Defines how target utilization value is expressed for a
-                Stackdriver Monitoring metric. Either GAUGE, DELTA_PER_SECOND,
-                or DELTA_PER_MINUTE.
+                Stackdriver Monitoring metric.
               values:
                 - :GAUGE
                 - :DELTA_PER_SECOND
@@ -741,10 +790,10 @@ objects:
           description: |
             Settings controlling the volume of connections to a backend service. This field
            is applicable only when the load_balancing_scheme is set to INTERNAL_SELF_MANAGED.
-          min_version: beta
           properties:
             - !ruby/object:Api::Type::NestedObject
               name: 'connectTimeout'
+              min_version: beta
               at_least_one_of:
                 - circuit_breakers.0.connect_timeout
                 - circuit_breakers.0.max_requests_per_connection
@@ -845,7 +894,6 @@ objects:
            hashing. This field only applies if the load_balancing_scheme is set to
            INTERNAL_SELF_MANAGED. This field is only applicable when locality_lb_policy is
            set to MAGLEV or RING_HASH.
-          min_version: beta
           properties:
             - !ruby/object:Api::Type::NestedObject
               name: 'httpCookie'
@@ -1039,7 +1087,6 @@ objects:
           output: true
         - !ruby/object:Api::Type::Array
           name: 'customRequestHeaders'
-          min_version: beta
           item_type: Api::Type::String
           description: |
             Headers that the HTTP/S load balancer should add to proxied
@@ -1061,13 +1108,14 @@
         - !ruby/object:Api::Type::Array
           name: 'healthChecks'
           item_type: Api::Type::String
-          required: true
           min_size: 1
           max_size: 1
           description: |
             The set of URLs to the HttpHealthCheck or HttpsHealthCheck resource
             for health checking this BackendService. Currently at most one health
-            check can be specified, and a health check is required.
+            check can be specified.
+ + A health check must be specified unless the backend service uses an internet NEG as a backend. For internal load balancing, a URL to a HealthCheck resource must be specified instead. - !ruby/object:Api::Type::Integer @@ -1102,8 +1150,7 @@ objects: description: | Indicates whether the backend service will be used with internal or external load balancing. A backend service created for one type of - load balancing cannot be used with the other. Must be `EXTERNAL` or - `INTERNAL_SELF_MANAGED` for a global backend service. Defaults to `EXTERNAL`. + load balancing cannot be used with the other. default_value: :EXTERNAL # If you're modifying this value, it probably means Global ILB is now # an option. If that's the case, all of the documentation is based on @@ -1113,8 +1160,6 @@ objects: - :INTERNAL_SELF_MANAGED - !ruby/object:Api::Type::Enum name: 'localityLbPolicy' - input: true - min_version: beta values: - :ROUND_ROBIN - :LEAST_REQUEST @@ -1166,7 +1211,6 @@ objects: character, which cannot be a dash. - !ruby/object:Api::Type::NestedObject name: 'outlierDetection' - min_version: beta description: | Settings controlling eviction of unhealthy hosts from the load balancing pool. This field is applicable only when the load_balancing_scheme is set @@ -1420,8 +1464,7 @@ objects: name: 'protocol' description: | The protocol this BackendService uses to communicate with backends. - Possible values are HTTP, HTTPS, HTTP2, TCP, and SSL. The default is - HTTP. **NOTE**: HTTP2 is only valid for beta HTTP/2 load balancer + The default is HTTP. **NOTE**: HTTP2 is only valid for beta HTTP/2 load balancer types and may result in errors if used with the GA API. values: - :HTTP @@ -1457,7 +1500,6 @@ objects: failed request. Default is 30 seconds. Valid range is [1, 86400]. - !ruby/object:Api::Type::NestedObject name: 'logConfig' - min_version: beta description: | This field denotes the logging options for the load balancer traffic served by this backend service. If logging is enabled, logs will be exported to Stackdriver. @@ -1521,7 +1563,6 @@ objects: properties: - !ruby/object:Api::Type::Integer name: 'affinityCookieTtlSec' - min_version: beta description: | Lifetime of cookies in seconds if session_affinity is GENERATED_COOKIE. If set to 0, the cookie is non-persistent and lasts @@ -1543,7 +1584,7 @@ objects: - :RATE - :CONNECTION description: | - Specifies the balancing mode for this backend. Defaults to CONNECTION. + Specifies the balancing mode for this backend. - !ruby/object:Api::Type::Double name: 'capacityScaler' description: | @@ -1568,7 +1609,6 @@ objects: description: | This field designates whether this is a failover backend. More than one failover backend can be configured for a given RegionBackendService. - min_version: beta - !ruby/object:Api::Type::String name: 'group' required: true @@ -1663,10 +1703,10 @@ objects: Settings controlling the volume of connections to a backend service. This field is applicable only when the `load_balancing_scheme` is set to INTERNAL_MANAGED and the `protocol` is set to HTTP, HTTPS, or HTTP2. 
- min_version: beta properties: - !ruby/object:Api::Type::NestedObject name: 'connectTimeout' + min_version: beta at_least_one_of: - circuit_breakers.0.connect_timeout - circuit_breakers.0.max_requests_per_connection @@ -1769,7 +1809,6 @@ objects: * `load_balancing_scheme` is set to INTERNAL_MANAGED * `protocol` is set to HTTP, HTTPS, or HTTP2 * `locality_lb_policy` is set to MAGLEV or RING_HASH - min_version: beta properties: - !ruby/object:Api::Type::NestedObject name: 'httpCookie' @@ -1867,7 +1906,6 @@ objects: An optional description of this resource. - !ruby/object:Api::Type::NestedObject name: 'failoverPolicy' - min_version: beta description: | Policy for failovers. properties: @@ -1938,16 +1976,13 @@ objects: description: | Indicates what kind of load balancing this regional backend service will be used for. A backend service created for one type of load - balancing cannot be used with the other(s). Must be `INTERNAL` or - `INTERNAL_MANAGED`. Defaults to `INTERNAL`. + balancing cannot be used with the other(s). default_value: :INTERNAL values: - :INTERNAL - :INTERNAL_MANAGED - !ruby/object:Api::Type::Enum name: 'localityLbPolicy' - input: true - min_version: beta values: - :ROUND_ROBIN - :LEAST_REQUEST @@ -1999,7 +2034,6 @@ objects: character, which cannot be a dash. - !ruby/object:Api::Type::NestedObject name: 'outlierDetection' - min_version: beta description: | Settings controlling eviction of unhealthy hosts from the load balancing pool. This field is applicable only when the `load_balancing_scheme` is set @@ -2242,12 +2276,21 @@ objects: success rate: mean - (stdev * success_rate_stdev_factor). This factor is divided by a thousand to get a double. That is, if the desired factor is 1.9, the runtime value should be 1900. Defaults to 1900. + - !ruby/object:Api::Type::String + name: 'portName' + description: | + A named port on a backend instance group representing the port for + communication to the backend VMs in that group. Required when the + loadBalancingScheme is EXTERNAL, INTERNAL_MANAGED, or INTERNAL_SELF_MANAGED + and the backends are instance groups. The named port must be defined on each + backend instance group. This parameter has no meaning if the backends are NEGs. API sets a + default of "http" if not given. + Must be omitted when the loadBalancingScheme is INTERNAL (Internal TCP/UDP Load Balancing). - !ruby/object:Api::Type::Enum name: 'protocol' description: | The protocol this RegionBackendService uses to communicate with backends. - Possible values are HTTP, HTTPS, HTTP2, SSL, TCP, and UDP. The default is - HTTP. **NOTE**: HTTP2 is only valid for beta HTTP/2 load balancer + The default is HTTP. **NOTE**: HTTP2 is only valid for beta HTTP/2 load balancer types and may result in errors if used with the GA API. # This is removed to avoid breaking terraform, as default values cannot be # unspecified. Providers should include this as needed via overrides @@ -2279,7 +2322,6 @@ objects: failed request. Default is 30 seconds. Valid range is [1, 86400]. - !ruby/object:Api::Type::NestedObject name: 'logConfig' - min_version: beta description: | This field denotes the logging options for the load balancer traffic served by this backend service. If logging is enabled, logs will be exported to Stackdriver. @@ -2948,13 +2990,9 @@ objects: The list of ALLOW rules specified by this firewall. Each rule specifies a protocol and port-range tuple that describes a permitted connection. 
- # TODO(terraform-providers/terraform-provider-google#5193): Change back to exactly_one_of - # once hashicorp/terraform-plugin-sdk#280 is fixed - at_least_one_of: + exactly_one_of: - allow - deny - conflicts: - - denied item_type: !ruby/object:Api::Type::NestedObject properties: # IPProtocol has to be string, instead of Enum because user can @@ -2985,13 +3023,9 @@ objects: output: true - !ruby/object:Api::Type::Array name: 'denied' - # TODO(terraform-providers/terraform-provider-google#5193): Change back to exactly_one_of - # once hashicorp/terraform-plugin-sdk#280 is fixed - at_least_one_of: + exactly_one_of: - allow - deny - conflicts: - - allowed description: | The list of DENY rules specified by this firewall. Each rule specifies a protocol and port-range tuple that describes a denied connection. @@ -3055,17 +3089,22 @@ objects: - !ruby/object:Api::Type::NestedObject name: 'logConfig' description: | - This field denotes whether to enable logging for a particular - firewall rule. If logging is enabled, logs will be exported to - Stackdriver. + This field denotes the logging options for a particular firewall rule. + If logging is enabled, logs will be exported to Cloud Logging. properties: - !ruby/object:Api::Type::Boolean - name: 'enableLogging' - api_name: enable + name: 'enable' description: | This field denotes whether to enable logging for a particular firewall rule. If logging is enabled, logs will be exported to Stackdriver. + - !ruby/object:Api::Type::Enum + name: 'metadata' + description: | + This field denotes whether to include or exclude metadata for firewall logs. + values: + - :EXCLUDE_ALL_METADATA + - :INCLUDE_ALL_METADATA - !ruby/object:Api::Type::Integer name: 'id' description: 'The unique identifier for the resource.' @@ -3281,8 +3320,7 @@ objects: - !ruby/object:Api::Type::Enum name: 'IPProtocol' description: | - The IP protocol to which this rule applies. Valid options are TCP, - UDP, ESP, AH, SCTP or ICMP. + The IP protocol to which this rule applies. When the load balancing scheme is INTERNAL, only TCP and UDP are valid. @@ -3436,8 +3474,7 @@ objects: - !ruby/object:Api::Type::Enum name: 'networkTier' description: | - The networking tier used for configuring this address. This field can - take the following values: PREMIUM or STANDARD. If this field is not + The networking tier used for configuring this address. If this field is not specified, it is assumed to be PREMIUM. values: - :PREMIUM @@ -3548,8 +3585,7 @@ objects: - !ruby/object:Api::Type::Enum name: 'ipVersion' description: | - The IP Version that will be used by this address. Valid options are - `IPV4` or `IPV6`. The default value is `IPV4`. + The IP Version that will be used by this address. The default value is `IPV4`. values: - :IPV4 - :IPV6 @@ -3570,7 +3606,7 @@ objects: - !ruby/object:Api::Type::Enum name: 'addressType' description: | - The type of the address to reserve, default is EXTERNAL. + The type of the address to reserve. * EXTERNAL indicates public/external single IP address. * INTERNAL indicates internal IP ranges belonging to some network. @@ -3679,8 +3715,7 @@ objects: - !ruby/object:Api::Type::Enum name: 'IPProtocol' description: | - The IP protocol to which this rule applies. Valid options are TCP, - UDP, ESP, AH, SCTP or ICMP. When the load balancing scheme is + The IP protocol to which this rule applies. When the load balancing scheme is INTERNAL_SELF_MANAGED, only TCP is valid. 
values: - :TCP @@ -3693,7 +3728,6 @@ objects: name: 'ipVersion' description: | The IP Version that will be used by this global forwarding rule. - Valid options are IPV4 or IPV6. values: - :IPV4 - :IPV6 @@ -4148,19 +4182,12 @@ objects: - :HTTP2 - !ruby/object:Api::Type::NestedObject name: 'httpHealthCheck' - # TODO(terraform-providers/terraform-provider-google#5193): Change back to exactly_one_of - # once hashicorp/terraform-plugin-sdk#280 is fixed - at_least_one_of: + exactly_one_of: - http_health_check - https_health_check - http2_health_check - tcp_health_check - ssl_health_check - conflicts: - - httpsHealthCheck - - http2HealthCheck - - tcpHealthCheck - - sslHealthCheck properties: - !ruby/object:Api::Type::String name: 'host' @@ -4242,7 +4269,7 @@ objects: - http_health_check.0.port_specification description: | Specifies the type of proxy header to append before sending data to the - backend, either NONE or PROXY_V1. The default is NONE. + backend. values: - :NONE - :PROXY_V1 @@ -4278,19 +4305,12 @@ objects: - :USE_SERVING_PORT - !ruby/object:Api::Type::NestedObject name: 'httpsHealthCheck' - # TODO(terraform-providers/terraform-provider-google#5193): Change back to exactly_one_of - # once hashicorp/terraform-plugin-sdk#280 is fixed - at_least_one_of: + exactly_one_of: - http_health_check - https_health_check - http2_health_check - tcp_health_check - ssl_health_check - conflicts: - - httpHealthCheck - - http2HealthCheck - - tcpHealthCheck - - sslHealthCheck properties: - !ruby/object:Api::Type::String name: 'host' @@ -4372,7 +4392,7 @@ objects: - https_health_check.0.port_specification description: | Specifies the type of proxy header to append before sending data to the - backend, either NONE or PROXY_V1. The default is NONE. + backend. values: - :NONE - :PROXY_V1 @@ -4408,19 +4428,12 @@ objects: - :USE_SERVING_PORT - !ruby/object:Api::Type::NestedObject name: 'tcpHealthCheck' - # TODO(terraform-providers/terraform-provider-google#5193): Change back to exactly_one_of - # once hashicorp/terraform-plugin-sdk#280 is fixed - at_least_one_of: + exactly_one_of: - http_health_check - https_health_check - http2_health_check - tcp_health_check - ssl_health_check - conflicts: - - httpHealthCheck - - httpsHealthCheck - - http2HealthCheck - - sslHealthCheck properties: - !ruby/object:Api::Type::String name: 'request' @@ -4484,7 +4497,7 @@ objects: - tcp_health_check.0.port_specification description: | Specifies the type of proxy header to append before sending data to the - backend, either NONE or PROXY_V1. The default is NONE. + backend. values: - :NONE - :PROXY_V1 @@ -4519,19 +4532,12 @@ objects: - :USE_SERVING_PORT - !ruby/object:Api::Type::NestedObject name: 'sslHealthCheck' - # TODO(terraform-providers/terraform-provider-google#5193): Change back to exactly_one_of - # once hashicorp/terraform-plugin-sdk#280 is fixed - at_least_one_of: + exactly_one_of: - http_health_check - https_health_check - http2_health_check - tcp_health_check - ssl_health_check - conflicts: - - httpHealthCheck - - httpsHealthCheck - - http2HealthCheck - - tcpHealthCheck properties: - !ruby/object:Api::Type::String name: 'request' @@ -4595,7 +4601,7 @@ objects: - ssl_health_check.0.port_specification description: | Specifies the type of proxy header to append before sending data to the - backend, either NONE or PROXY_V1. The default is NONE. + backend. 
          values:
            - :NONE
            - :PROXY_V1
@@ -4630,19 +4636,12 @@ objects:
           - :USE_SERVING_PORT
     - !ruby/object:Api::Type::NestedObject
       name: 'http2HealthCheck'
-      # TODO(terraform-providers/terraform-provider-google#5193): Change back to exactly_one_of
-      # once hashicorp/terraform-plugin-sdk#280 is fixed
-      at_least_one_of:
+      exactly_one_of:
         - http_health_check
         - https_health_check
         - http2_health_check
         - tcp_health_check
         - ssl_health_check
-      conflicts:
-        - httpHealthCheck
-        - httpsHealthCheck
-        - tcpHealthCheck
-        - sslHealthCheck
       properties:
         - !ruby/object:Api::Type::String
           name: 'host'
@@ -4724,7 +4723,7 @@ objects:
             - http2_health_check.0.port_specification
           description: |
             Specifies the type of proxy header to append before sending data to the
-            backend, either NONE or PROXY_V1. The default is NONE.
+            backend.
           values:
             - :NONE
             - :PROXY_V1
@@ -6982,6 +6981,36 @@ objects:
         description: |
           The IEEE 802.1Q VLAN tag for this attachment, in the range 2-4094. When
           using PARTNER type this will be managed upstream.
+  - !ruby/object:Api::Resource
+    name: 'MachineImage'
+    kind: 'compute#machineImage'
+    base_url: projects/{{project}}/global/machineImages
+    collection_url_key: 'items'
+    has_self_link: true
+    description: |
+      Represents a MachineImage resource. Machine images store all the configuration,
+      metadata, permissions, and data from one or more disks required to create a
+      virtual machine (VM) instance.
+    references: !ruby/object:Api::Resource::ReferenceLinks
+      guides:
+        'Official Documentation': 'https://cloud.google.com/compute/docs/machine-images'
+      api: 'https://cloud.google.com/compute/docs/reference/rest/beta/machineImages'
+    min_version: beta
+
+    properties:
+      - !ruby/object:Api::Type::String
+        name: name
+        description: 'Name of the resource.'
+        required: true
+      - !ruby/object:Api::Type::String
+        name: description
+        description: 'A text description of the resource.'
+      - !ruby/object:Api::Type::ResourceRef
+        name: sourceInstance
+        description: 'The source instance used to create the machine image. You can provide this as a partial or full URL to the resource.'
+        resource: 'Instance'
+        imports: 'selfLink'
+        required: true
   - !ruby/object:Api::Resource
     name: 'MachineType'
     kind: 'compute#machineType'
@@ -7356,7 +7385,7 @@ objects:
       - !ruby/object:Api::Type::Enum
         name: 'networkEndpointType'
         description: |
-          Type of network endpoints in this network endpoint group. The only supported value is GCE_VM_IP_PORT
+          Type of network endpoints in this network endpoint group.
         values:
           - :GCE_VM_IP_PORT
         default_value: :GCE_VM_IP_PORT
@@ -7515,9 +7544,7 @@ objects:
         name: 'networkEndpointType'
         required: true
         description: |
-          Type of network endpoints in this network endpoint group. Supported values are:
-          * INTERNET_IP_PORT
-          * INTERNET_FQDN_PORT
+          Type of network endpoints in this network endpoint group.
         values:
           - :INTERNET_IP_PORT
           - :INTERNET_FQDN_PORT
@@ -7609,8 +7636,8 @@ objects:
             The autoscaling mode. Set to one of the following:
               - OFF: Disables the autoscaler.
               - ON: Enables scaling in and scaling out.
-              - ONLY_SCALE_OUT: Enables only scaling out. 
-                You must use this mode if your node groups are configured to 
+              - ONLY_SCALE_OUT: Enables only scaling out.
+                You must use this mode if your node groups are configured to
                 restart their hosted VMs on minimal servers.
           values:
             - :OFF
@@ -7619,7 +7646,7 @@ objects:
         - !ruby/object:Api::Type::Integer
           name: 'minNodes'
          description: |
-            Minimum size of the node group. Must be less 
+            Minimum size of the node group. Must be less
             than or equal to max-nodes. The default value is 0.
- !ruby/object:Api::Type::Integer name: 'maxNodes' @@ -7817,6 +7844,15 @@ objects: values: - :RESTART_NODE_ON_ANY_SERVER - :RESTART_NODE_ON_MINIMAL_SERVERS + - !ruby/object:Api::Type::Enum + name: 'cpuOvercommitType' + description: | + CPU overcommit. + min_version: beta + values: + - :ENABLED + - :NONE + default_value: :NONE - !ruby/object:Api::Resource name: 'PacketMirroring' min_version: beta @@ -7856,7 +7892,7 @@ objects: description: The name of the packet mirroring rule required: true - !ruby/object:Api::Type::String - name: description + name: description description: A human-readable description of the rule. input: true - !ruby/object:Api::Type::String @@ -7900,7 +7936,7 @@ objects: imports: 'selfLink' description: The URL of the forwarding rule. - !ruby/object:Api::Type::NestedObject - name: filter + name: filter description: | A filter for mirrored traffic. If unset, all traffic is mirrored. properties: @@ -7973,56 +8009,297 @@ objects: description: | All instances with these tags will be mirrored. item_type: Api::Type::String - + - !ruby/object:Api::Resource - name: 'ProjectInfo' - base_url: projects - self_link: projects/{{project}} - readonly: true + name: 'PerInstanceConfig' + base_url: 'projects/{{project}}/zones/{{zone}}/instanceGroupManagers/{{instance_group_manager}}' + min_version: beta description: | - Information about the project specifically for compute. + A config defined for a single managed instance that belongs to an instance group manager. It preserves the instance name + across instance group manager operations and can define stateful disks or metadata that are unique to the instance. + create_verb: :POST + create_url: projects/{{project}}/zones/{{zone}}/instanceGroupManagers/{{instance_group_manager}}/createInstances + update_verb: :POST + update_url: projects/{{project}}/zones/{{zone}}/instanceGroupManagers/{{instance_group_manager}}/updatePerInstanceConfigs + delete_verb: :POST + delete_url: projects/{{project}}/zones/{{zone}}/instanceGroupManagers/{{instance_group_manager}}/deletePerInstanceConfigs + read_verb: :POST + self_link: projects/{{project}}/zones/{{zone}}/instanceGroupManagers/{{instance_group_manager}}/listPerInstanceConfigs + identity: + - name + nested_query: !ruby/object:Api::Resource::NestedQuery + keys: + - items + references: !ruby/object:Api::Resource::ReferenceLinks + guides: + 'Official Documentation': 'https://cloud.google.com/compute/docs/instance-groups/stateful-migs#per-instance_configs' + api: 'https://cloud.google.com/compute/docs/reference/rest/beta/instanceGroupManagers' + async: !ruby/object:Api::OpAsync + operation: !ruby/object:Api::OpAsync::Operation + kind: 'compute#operation' + path: 'name' + base_url: 'projects/{{project}}/zones/{{zone}}/operations/{{op_id}}' + wait_ms: 1000 + timeouts: !ruby/object:Api::Timeouts + insert_minutes: 15 + update_minutes: 6 + delete_minutes: 15 + result: !ruby/object:Api::OpAsync::Result + path: 'targetLink' + status: !ruby/object:Api::OpAsync::Status + path: 'status' + complete: 'DONE' + allowed: + - 'PENDING' + - 'RUNNING' + - 'DONE' + error: !ruby/object:Api::OpAsync::Error + path: 'error/errors' + message: 'message' + parameters: + - !ruby/object:Api::Type::ResourceRef + name: 'zone' + resource: 'Zone' + imports: 'name' + description: | + Zone where the containing instance group manager is located + required: true + url_param_only: true + input: true + - !ruby/object:Api::Type::ResourceRef + name: 'instanceGroupManager' + resource: 'InstanceGroupManager' + imports: 'name' + description: | 
+        The instance group manager this instance config is part of.
+      required: true
+      url_param_only: true
+      input: true
     properties:
       - !ruby/object:Api::Type::String
-        name: name
+        name: 'name'
+        description: |
+          The name for this per-instance config and its corresponding instance.
+        required: true
+        input: true
       - !ruby/object:Api::Type::NestedObject
-        name: 'commonInstanceMetadata'
-        description: 'Metadata shared for all instances in this project'
+        name: 'preservedState'
+        description: 'The preserved state for this instance.'
+        update_verb: :POST
+        update_url: 'projects/{{project}}/zones/{{zone}}/instanceGroupManagers/{{instance_group_manager}}/updatePerInstanceConfigs'
         properties:
+          - !ruby/object:Api::Type::KeyValuePairs
+            name: 'metadata'
+            description: |
+              Preserved metadata defined for this instance. This is a list of key->value pairs.
           - !ruby/object:Api::Type::Array
-            name: 'items'
+            name: 'disk'
+            api_name: disks
             description: |
-              Array of key/values
+              Stateful disks for the instance.
             item_type: !ruby/object:Api::Type::NestedObject
               properties:
                 - !ruby/object:Api::Type::String
-                  name: 'key'
-                  description: 'Key of the metadata key/value pair'
+                  name: deviceName
+                  required: true
+                  description: |
+                    A unique device name that is reflected into the /dev/ tree of a Linux operating system running within the instance.
                 - !ruby/object:Api::Type::String
-                  name: 'value'
-                  description: 'Value of the metadata key/value pair'
-          - !ruby/object:Api::Type::Array
-            name: 'enabledFeatures'
+                  name: source
+                  required: true
+                  description: |
+                    The URI of an existing persistent disk to attach under the specified device-name in the format
+                    `projects/project-id/zones/zone/disks/disk-name`.
+                - !ruby/object:Api::Type::Enum
+                  name: mode
+                  description: |
+                    The mode of the disk.
+                  values:
+                    - :READ_ONLY
+                    - :READ_WRITE
+                  default_value: :READ_WRITE
+                - !ruby/object:Api::Type::Enum
+                  name: deleteRule
+                  description: |
+                    A value that prescribes what should happen to the stateful disk when the VM instance is deleted.
+                    The available options are `NEVER` and `ON_PERMANENT_INSTANCE_DELETION`.
+                    `NEVER` will detach the disk when the VM is deleted, but will not delete the disk.
+                    `ON_PERMANENT_INSTANCE_DELETION` will delete the stateful disk when the VM is permanently
+                    deleted from the instance group.
+                  values:
+                    - :NEVER
+                    - :ON_PERMANENT_INSTANCE_DELETION
+                  default_value: :NEVER
+  - !ruby/object:Api::Resource
+    name: 'RegionPerInstanceConfig'
+    base_url: 'projects/{{project}}/regions/{{region}}/instanceGroupManagers/{{region_instance_group_manager}}'
+    min_version: beta
+    description: |
+      A config defined for a single managed instance that belongs to an instance group manager. It preserves the instance name
+      across instance group manager operations and can define stateful disks or metadata that are unique to the instance.
+      This resource works with regional instance group managers.
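+    # A hedged sketch, not part of the schema: once generated, this resource is
+    # expected to surface in the beta Terraform provider roughly as
+    #
+    #   resource "google_compute_region_per_instance_config" "example" {
+    #     region                        = "us-central1"
+    #     region_instance_group_manager = google_compute_region_instance_group_manager.igm.name
+    #     name                          = "instance-1"
+    #     preserved_state {
+    #       metadata = { foo = "bar" }
+    #       disk {
+    #         device_name = "data-disk"
+    #         source      = google_compute_disk.data.id
+    #         delete_rule = "NEVER"
+    #       }
+    #     }
+    #   }
+    #
+    # Field names are the snake_case forms of the properties defined below; the
+    # example names and values are illustrative assumptions.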
+ create_verb: :POST + create_url: projects/{{project}}/regions/{{region}}/instanceGroupManagers/{{region_instance_group_manager}}/createInstances + update_verb: :POST + update_url: projects/{{project}}/regions/{{region}}/instanceGroupManagers/{{region_instance_group_manager}}/updatePerInstanceConfigs + delete_verb: :POST + delete_url: projects/{{project}}/regions/{{region}}/instanceGroupManagers/{{region_instance_group_manager}}/deletePerInstanceConfigs + read_verb: :POST + self_link: projects/{{project}}/regions/{{region}}/instanceGroupManagers/{{region_instance_group_manager}}/listPerInstanceConfigs + identity: + - name + nested_query: !ruby/object:Api::Resource::NestedQuery + keys: + - items + references: !ruby/object:Api::Resource::ReferenceLinks + guides: + 'Official Documentation': 'https://cloud.google.com/compute/docs/instance-groups/stateful-migs#per-instance_configs' + api: 'https://cloud.google.com/compute/docs/reference/rest/beta/instanceGroupManagers' + async: !ruby/object:Api::OpAsync + operation: !ruby/object:Api::OpAsync::Operation + kind: 'compute#operation' + path: 'name' + base_url: 'projects/{{project}}/regions/{{region}}/operations/{{op_id}}' + wait_ms: 1000 + timeouts: !ruby/object:Api::Timeouts + insert_minutes: 15 + update_minutes: 6 + delete_minutes: 15 + result: !ruby/object:Api::OpAsync::Result + path: 'targetLink' + status: !ruby/object:Api::OpAsync::Status + path: 'status' + complete: 'DONE' + allowed: + - 'PENDING' + - 'RUNNING' + - 'DONE' + error: !ruby/object:Api::OpAsync::Error + path: 'error/errors' + message: 'message' + parameters: + - !ruby/object:Api::Type::ResourceRef + name: 'region' + resource: 'Region' + imports: 'name' description: | - Restricted features enabled for use on this project - item_type: Api::Type::String - - !ruby/object:Api::Type::String - name: defaultServiceAccount - description: Default service account used by VMs in this project - - !ruby/object:Api::Type::String - name: xpnProjectStatus - description: The role this project has in a shared VPC configuration. + Region where the containing instance group manager is located + required: true + url_param_only: true + input: true + - !ruby/object:Api::Type::ResourceRef + name: 'regionInstanceGroupManager' + resource: 'RegionInstanceGroupManager' + imports: 'name' + description: | + The region instance group manager this instance config is part of. + required: true + url_param_only: true + input: true + properties: - !ruby/object:Api::Type::String - name: defaultNetworkTier - description: The default network tier used for configuring resources in this project - - !ruby/object:Api::Type::Array - name: 'quotas' + name: 'name' description: | - Quotas applied to this project - item_type: !ruby/object:Api::Type::NestedObject - properties: - - !ruby/object:Api::Type::String - name: 'metric' + The name for this per-instance config and its corresponding instance. + required: true + input: true + - !ruby/object:Api::Type::NestedObject + name: 'preservedState' + description: 'The preserved state for this instance.' + update_verb: :POST + update_url: 'projects/{{project}}/regions/{{region}}/instanceGroupManagers/{{region_instance_group_manager}}/updatePerInstanceConfigs' + properties: + - !ruby/object:Api::Type::KeyValuePairs + name: 'metadata' + description: | + Preserved metadata defined for this instance. This is a list of key->value pairs. + - !ruby/object:Api::Type::Array + name: 'disk' + api_name: disks + description: | + Stateful disks for the instance. 
+            item_type: !ruby/object:Api::Type::NestedObject
+              properties:
+                - !ruby/object:Api::Type::String
+                  name: deviceName
+                  required: true
+                  description: |
+                    A unique device name that is reflected into the /dev/ tree of a Linux operating system running within the instance.
+                - !ruby/object:Api::Type::String
+                  name: source
+                  required: true
+                  description: |
+                    The URI of an existing persistent disk to attach under the specified device-name in the format
+                    `projects/project-id/zones/zone/disks/disk-name`.
+                - !ruby/object:Api::Type::Enum
+                  name: mode
+                  description: |
+                    The mode of the disk.
+                  values:
+                    - :READ_ONLY
+                    - :READ_WRITE
+                  default_value: :READ_WRITE
+                - !ruby/object:Api::Type::Enum
+                  name: deleteRule
+                  description: |
+                    A value that prescribes what should happen to the stateful disk when the VM instance is deleted.
+                    The available options are `NEVER` and `ON_PERMANENT_INSTANCE_DELETION`.
+                    `NEVER` will detach the disk when the VM is deleted, but will not delete the disk.
+                    `ON_PERMANENT_INSTANCE_DELETION` will delete the stateful disk when the VM is permanently
+                    deleted from the instance group.
+                  values:
+                    - :NEVER
+                    - :ON_PERMANENT_INSTANCE_DELETION
+                  default_value: :NEVER
+  - !ruby/object:Api::Resource
+    name: 'ProjectInfo'
+    base_url: projects
+    self_link: projects/{{project}}
+    readonly: true
+    description: |
+      Information about the project specifically for compute.
+    properties:
+      - !ruby/object:Api::Type::String
+        name: name
+        description: The name of this project
+      - !ruby/object:Api::Type::NestedObject
+        name: 'commonInstanceMetadata'
+        description: 'Metadata shared for all instances in this project'
+        properties:
+          - !ruby/object:Api::Type::Array
+            name: 'items'
+            description: |
+              Array of key/values
+            item_type: !ruby/object:Api::Type::NestedObject
+              properties:
+                - !ruby/object:Api::Type::String
+                  name: 'key'
+                  description: 'Key of the metadata key/value pair'
+                - !ruby/object:Api::Type::String
+                  name: 'value'
+                  description: 'Value of the metadata key/value pair'
+      - !ruby/object:Api::Type::Array
+        name: 'enabledFeatures'
+        description: |
+          Restricted features enabled for use on this project
+        item_type: Api::Type::String
+      - !ruby/object:Api::Type::String
+        name: defaultServiceAccount
+        description: Default service account used by VMs in this project
+      - !ruby/object:Api::Type::String
+        name: xpnProjectStatus
+        description: The role this project has in a shared VPC configuration.
+      - !ruby/object:Api::Type::String
+        name: defaultNetworkTier
+        description: The default network tier used for configuring resources in this project
+      - !ruby/object:Api::Type::Array
+        name: 'quotas'
+        description: |
+          Quotas applied to this project
+        item_type: !ruby/object:Api::Type::NestedObject
+          properties:
+            - !ruby/object:Api::Type::String
+              name: 'metric'
               description: 'Name of the quota metric'
             - !ruby/object:Api::Type::String
               name: 'limit'
@@ -8249,6 +8526,46 @@ objects:
             instance may take to initialize. To do this, create an instance
             and time the startup process.
           default_value: 60
+        - !ruby/object:Api::Type::Enum
+          name: 'mode'
+          default_value: :ON
+          description: |
+            Defines the operating mode for this policy.
+          values:
+            - :OFF
+            - :ONLY_UP
+            - :ON
+        - !ruby/object:Api::Type::NestedObject
+          name: 'scaleDownControl'
+          min_version: beta
+          at_least_one_of:
+            - scale_down_control.0.max_scaled_down_replicas
+            - scale_down_control.0.time_window_sec
+          description: |
+            Defines scale down controls to reduce the risk of response latency
+            and outages due to abrupt scale-in events.
+          properties:
+            - !ruby/object:Api::Type::NestedObject
+              name: 'maxScaledDownReplicas'
+              at_least_one_of:
+                - scale_down_control.0.max_scaled_down_replicas.0.fixed
+                - scale_down_control.0.max_scaled_down_replicas.0.percent
+              properties:
+                - !ruby/object:Api::Type::Integer
+                  name: 'fixed'
+                  description: |
+                    Specifies a fixed number of VM instances. This must be a positive
+                    integer.
+                - !ruby/object:Api::Type::Integer
+                  name: 'percent'
+                  description: |
+                    Specifies a percentage of instances between 0 and 100%, inclusive.
+                    For example, specify 80 for 80%.
+            - !ruby/object:Api::Type::Integer
+              name: 'timeWindowSec'
+              description: |
+                How far back autoscaling should look when computing recommendations
+                to include directives regarding slower scale down, as described above.
         - !ruby/object:Api::Type::NestedObject
           name: 'cpuUtilization'
           description: |
@@ -8322,8 +8639,7 @@ objects:
               name: 'utilizationTargetType'
               description: |
                 Defines how target utilization value is expressed for a
-                Stackdriver Monitoring metric. Either GAUGE, DELTA_PER_SECOND,
-                or DELTA_PER_MINUTE.
+                Stackdriver Monitoring metric.
               values:
                 - :GAUGE
                 - :DELTA_PER_SECOND
@@ -8498,7 +8814,7 @@ objects:
       guides:
         'Adding or Resizing Regional Persistent Disks':
           'https://cloud.google.com/compute/docs/disks/regional-persistent-disk'
-      api: 'https://cloud.google.com/compute/docs/reference/rest/beta/regionDisks'
+      api: 'https://cloud.google.com/compute/docs/reference/rest/v1/regionDisks'
     async: !ruby/object:Api::OpAsync
       operation: !ruby/object:Api::OpAsync::Operation
         kind: 'compute#operation'
@@ -8750,11 +9066,20 @@ objects:
       output: true
     - !ruby/object:Api::Type::ResourceRef
       name: 'defaultService'
+      # TODO: add defaultRouteAction.weightedBackendService here once they are supported.
+      exactly_one_of:
+        - default_service
+        - default_url_redirect
       resource: 'RegionBackendService'
       imports: 'selfLink'
-      description:
-        A reference to RegionBackendService resource if none of the hostRules match.
-      required: true
+      description: |
+        The full or partial URL of the defaultService resource to which traffic is directed if
+        none of the hostRules match. If defaultRouteAction is additionally specified, advanced
+        routing actions like URL Rewrites, etc. take effect prior to sending the request to the
+        backend. However, if defaultService is specified, defaultRouteAction cannot contain any
+        weightedBackendServices. Conversely, if defaultRouteAction specifies any
+        weightedBackendServices, defaultService must not be specified. Only one of defaultService,
+        defaultUrlRedirect or defaultRouteAction.weightedBackendService must be set.
     - !ruby/object:Api::Type::String
       name: 'description'
       description: |
@@ -8814,6 +9139,10 @@ objects:
         properties:
           - !ruby/object:Api::Type::ResourceRef
             name: 'defaultService'
+            # TODO: add defaultRouteAction.weightedBackendService here once they are supported.
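+            # A hedged illustration, not schema: in generated HCL a path_matcher
+            # sets exactly one of these, e.g.
+            #
+            #   path_matcher {
+            #     name            = "pathmatcher1"
+            #     default_service = google_compute_region_backend_service.home.id
+            #     # ...or, instead of default_service:
+            #     # default_url_redirect { strip_query = false }
+            #   }
+            #
+            # The referenced resource names are illustrative assumptions.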
+ exactly_one_of: + - path_matchers.0.default_service + - path_matchers.0.default_url_redirect required: true resource: 'RegionBackendService' imports: 'selfLink' @@ -9463,36 +9792,52 @@ objects: - !ruby/object:Api::Type::String name: 'hostRedirect' description: | - The host that will be used in the redirect response instead of the one that was - supplied in the request. The value must be between 1 and 255 characters. + The host that will be used in the redirect response instead of the one + that was supplied in the request. The value must be between 1 and 255 + characters. - !ruby/object:Api::Type::Boolean name: 'httpsRedirect' default_value: false description: | - If set to true, the URL scheme in the redirected request is set to https. If set - to false, the URL scheme of the redirected request will remain the same as that - of the request. This must only be set for UrlMaps used in TargetHttpProxys. - Setting this true for TargetHttpsProxy is not permitted. Defaults to false. + If set to true, the URL scheme in the redirected request is set to https. + If set to false, the URL scheme of the redirected request will remain the + same as that of the request. This must only be set for UrlMaps used in + TargetHttpProxys. Setting this true for TargetHttpsProxy is not + permitted. The default is set to false. - !ruby/object:Api::Type::String name: 'pathRedirect' description: | - The path that will be used in the redirect response instead of the one that was - supplied in the request. Only one of pathRedirect or prefixRedirect must be - specified. The value must be between 1 and 1024 characters. + The path that will be used in the redirect response instead of the one + that was supplied in the request. pathRedirect cannot be supplied + together with prefixRedirect. Supply one alone or neither. If neither is + supplied, the path of the original request will be used for the redirect. + The value must be between 1 and 1024 characters. - !ruby/object:Api::Type::String name: 'prefixRedirect' description: | - The prefix that replaces the prefixMatch specified in the HttpRouteRuleMatch, - retaining the remaining portion of the URL before redirecting the request. + The prefix that replaces the prefixMatch specified in the + HttpRouteRuleMatch, retaining the remaining portion of the URL before + redirecting the request. prefixRedirect cannot be supplied together with + pathRedirect. Supply one alone or neither. If neither is supplied, the + path of the original request will be used for the redirect. The value + must be between 1 and 1024 characters. - !ruby/object:Api::Type::Enum name: 'redirectResponseCode' description: | - The HTTP Status code to use for this RedirectAction. Supported values are: - - MOVED_PERMANENTLY_DEFAULT, which is the default value and corresponds to 301. - - FOUND, which corresponds to 302. - SEE_OTHER which corresponds to 303. - - TEMPORARY_REDIRECT, which corresponds to 307. In this case, the request method - will be retained. - PERMANENT_REDIRECT, which corresponds to 308. In this case, + The HTTP Status code to use for this RedirectAction. Supported values are: + + * MOVED_PERMANENTLY_DEFAULT, which is the default value and corresponds to 301. + + * FOUND, which corresponds to 302. + + * SEE_OTHER which corresponds to 303. + + * TEMPORARY_REDIRECT, which corresponds to 307. In this case, the request method + will be retained. + + * PERMANENT_REDIRECT, which corresponds to 308. In this case, the request method will be retained. 
+ skip_docs_values: true values: - :FOUND - :MOVED_PERMANENTLY_DEFAULT @@ -9503,9 +9848,9 @@ objects: name: 'stripQuery' default_value: false description: | - If set to true, any accompanying query portion of the original URL is removed - prior to redirecting the request. If set to false, the query portion of the - original URL is retained. Defaults to false. + If set to true, any accompanying query portion of the original URL is + removed prior to redirecting the request. If set to false, the query + portion of the original URL is retained. The default value is false. - !ruby/object:Api::Type::Array name: 'pathRules' description: | @@ -9865,45 +10210,59 @@ objects: - !ruby/object:Api::Type::NestedObject name: 'urlRedirect' description: | - When a path pattern is matched, the request is redirected to a URL specified by - urlRedirect. If urlRedirect is specified, service or routeAction must not be - set. + When a path pattern is matched, the request is redirected to a URL specified + by urlRedirect. If urlRedirect is specified, service or routeAction must not + be set. properties: - !ruby/object:Api::Type::String name: 'hostRedirect' description: | - The host that will be used in the redirect response instead of the one that was - supplied in the request. The value must be between 1 and 255 characters. + The host that will be used in the redirect response instead of the one + that was supplied in the request. The value must be between 1 and 255 + characters. - !ruby/object:Api::Type::Boolean name: 'httpsRedirect' default_value: false description: | - If set to true, the URL scheme in the redirected request is set to https. If set - to false, the URL scheme of the redirected request will remain the same as that - of the request. This must only be set for UrlMaps used in TargetHttpProxys. - Setting this true for TargetHttpsProxy is not permitted. Defaults to false. + If set to true, the URL scheme in the redirected request is set to https. + If set to false, the URL scheme of the redirected request will remain the + same as that of the request. This must only be set for UrlMaps used in + TargetHttpProxys. Setting this true for TargetHttpsProxy is not + permitted. The default is set to false. - !ruby/object:Api::Type::String name: 'pathRedirect' description: | - The path that will be used in the redirect response instead of the one that was - supplied in the request. Only one of pathRedirect or prefixRedirect must be - specified. The value must be between 1 and 1024 characters. + The path that will be used in the redirect response instead of the one + that was supplied in the request. pathRedirect cannot be supplied + together with prefixRedirect. Supply one alone or neither. If neither is + supplied, the path of the original request will be used for the redirect. + The value must be between 1 and 1024 characters. - !ruby/object:Api::Type::String name: 'prefixRedirect' description: | - The prefix that replaces the prefixMatch specified in the HttpRouteRuleMatch, - retaining the remaining portion of the URL before redirecting the request. + The prefix that replaces the prefixMatch specified in the + HttpRouteRuleMatch, retaining the remaining portion of the URL before + redirecting the request. prefixRedirect cannot be supplied together with + pathRedirect. Supply one alone or neither. If neither is supplied, the + path of the original request will be used for the redirect. The value + must be between 1 and 1024 characters. 
- !ruby/object:Api::Type::Enum name: 'redirectResponseCode' description: | The HTTP Status code to use for this RedirectAction. Supported values are: - - MOVED_PERMANENTLY_DEFAULT, which is the default value and corresponds to 301. - - FOUND, which corresponds to 302. - - SEE_OTHER which corresponds to 303. - - TEMPORARY_REDIRECT, which corresponds to 307. In this case, the request method + + * MOVED_PERMANENTLY_DEFAULT, which is the default value and corresponds to 301. + + * FOUND, which corresponds to 302. + + * SEE_OTHER which corresponds to 303. + + * TEMPORARY_REDIRECT, which corresponds to 307. In this case, the request method will be retained. - - PERMANENT_REDIRECT, which corresponds to 308. In this case, + + * PERMANENT_REDIRECT, which corresponds to 308. In this case, the request method will be retained. + skip_docs_values: true values: - :FOUND - :MOVED_PERMANENTLY_DEFAULT @@ -9912,11 +10271,79 @@ objects: - :TEMPORARY_REDIRECT - !ruby/object:Api::Type::Boolean name: 'stripQuery' - required: true description: | If set to true, any accompanying query portion of the original URL is removed prior to redirecting the request. If set to false, the query portion of the original URL is retained. + - !ruby/object:Api::Type::NestedObject + name: 'defaultUrlRedirect' + # TODO: add defaultRouteAction.weightedBackendService here once they are supported. + exactly_one_of: + - path_matchers.0.default_service + - path_matchers.0.default_url_redirect + description: | + When none of the specified pathRules or routeRules match, the request is redirected to a URL specified + by defaultUrlRedirect. If defaultUrlRedirect is specified, defaultService or + defaultRouteAction must not be set. + properties: + - !ruby/object:Api::Type::String + name: 'hostRedirect' + description: | + The host that will be used in the redirect response instead of the one that was + supplied in the request. The value must be between 1 and 255 characters. + - !ruby/object:Api::Type::Boolean + name: 'httpsRedirect' + default_value: false + description: | + If set to true, the URL scheme in the redirected request is set to https. If set to + false, the URL scheme of the redirected request will remain the same as that of the + request. This must only be set for UrlMaps used in TargetHttpProxys. Setting this + true for TargetHttpsProxy is not permitted. The default is set to false. + - !ruby/object:Api::Type::String + name: 'pathRedirect' + description: | + The path that will be used in the redirect response instead of the one that was + supplied in the request. pathRedirect cannot be supplied together with + prefixRedirect. Supply one alone or neither. If neither is supplied, the path of the + original request will be used for the redirect. The value must be between 1 and 1024 + characters. + - !ruby/object:Api::Type::String + name: 'prefixRedirect' + description: | + The prefix that replaces the prefixMatch specified in the HttpRouteRuleMatch, + retaining the remaining portion of the URL before redirecting the request. + prefixRedirect cannot be supplied together with pathRedirect. Supply one alone or + neither. If neither is supplied, the path of the original request will be used for + the redirect. The value must be between 1 and 1024 characters. + - !ruby/object:Api::Type::Enum + name: 'redirectResponseCode' + description: | + The HTTP Status code to use for this RedirectAction. Supported values are: + + * MOVED_PERMANENTLY_DEFAULT, which is the default value and corresponds to 301. + + * FOUND, which corresponds to 302.
+ + * SEE_OTHER which corresponds to 303. + + * TEMPORARY_REDIRECT, which corresponds to 307. In this case, the request method + will be retained. + + * PERMANENT_REDIRECT, which corresponds to 308. In this case, + the request method will be retained. + skip_docs_values: true + values: + - :FOUND + - :MOVED_PERMANENTLY_DEFAULT + - :PERMANENT_REDIRECT + - :SEE_OTHER + - :TEMPORARY_REDIRECT + - !ruby/object:Api::Type::Boolean + name: 'stripQuery' + description: | + If set to true, any accompanying query portion of the original URL is removed prior + to redirecting the request. If set to false, the query portion of the original URL is + retained. - !ruby/object:Api::Type::Array name: 'tests' description: | @@ -9943,6 +10370,75 @@ objects: description: A reference to expected RegionBackendService resource the given URL should be mapped to. + - !ruby/object:Api::Type::NestedObject + name: 'defaultUrlRedirect' + # TODO: add defaultRouteAction.weightedBackendService here once they are supported. + exactly_one_of: + - default_service + - default_url_redirect + description: | + When none of the specified hostRules match, the request is redirected to a URL specified + by defaultUrlRedirect. If defaultUrlRedirect is specified, defaultService or + defaultRouteAction must not be set. + properties: + - !ruby/object:Api::Type::String + name: 'hostRedirect' + description: | + The host that will be used in the redirect response instead of the one that was + supplied in the request. The value must be between 1 and 255 characters. + - !ruby/object:Api::Type::Boolean + name: 'httpsRedirect' + default_value: false + description: | + If set to true, the URL scheme in the redirected request is set to https. If set to + false, the URL scheme of the redirected request will remain the same as that of the + request. This must only be set for UrlMaps used in TargetHttpProxys. Setting this + true for TargetHttpsProxy is not permitted. The default is set to false. + - !ruby/object:Api::Type::String + name: 'pathRedirect' + description: | + The path that will be used in the redirect response instead of the one that was + supplied in the request. pathRedirect cannot be supplied together with + prefixRedirect. Supply one alone or neither. If neither is supplied, the path of the + original request will be used for the redirect. The value must be between 1 and 1024 + characters. + - !ruby/object:Api::Type::String + name: 'prefixRedirect' + description: | + The prefix that replaces the prefixMatch specified in the HttpRouteRuleMatch, + retaining the remaining portion of the URL before redirecting the request. + prefixRedirect cannot be supplied together with pathRedirect. Supply one alone or + neither. If neither is supplied, the path of the original request will be used for + the redirect. The value must be between 1 and 1024 characters. + - !ruby/object:Api::Type::Enum + name: 'redirectResponseCode' + description: | + The HTTP Status code to use for this RedirectAction. Supported values are: + + * MOVED_PERMANENTLY_DEFAULT, which is the default value and corresponds to 301. + + * FOUND, which corresponds to 302. + + * SEE_OTHER which corresponds to 303. + + * TEMPORARY_REDIRECT, which corresponds to 307. In this case, the request method + will be retained. + + * PERMANENT_REDIRECT, which corresponds to 308. In this case, + the request method will be retained. 
+ skip_docs_values: true + values: + - :FOUND + - :MOVED_PERMANENTLY_DEFAULT + - :PERMANENT_REDIRECT + - :SEE_OTHER + - :TEMPORARY_REDIRECT + - !ruby/object:Api::Type::Boolean + name: 'stripQuery' + description: | + If set to true, any accompanying query portion of the original URL is removed prior + to redirecting the request. If set to false, the query portion of the original URL is + retained. - !ruby/object:Api::Resource name: 'RegionHealthCheck' kind: 'compute#healthCheck' @@ -9952,7 +10448,7 @@ objects: references: !ruby/object:Api::Resource::ReferenceLinks guides: 'Official Documentation': 'https://cloud.google.com/load-balancing/docs/health-checks' - api: 'https://cloud.google.com/compute/docs/reference/rest/beta/regionHealthChecks' + api: 'https://cloud.google.com/compute/docs/reference/rest/v1/regionHealthChecks' description: | Health Checks determine whether instances are responsive and able to do work. They are an important part of a comprehensive load balancing configuration, @@ -10059,19 +10555,12 @@ objects: - :HTTP2 - !ruby/object:Api::Type::NestedObject name: 'httpHealthCheck' - # TODO(terraform-providers/terraform-provider-google#5193): Change back to exactly_one_of - # once hashicorp/terraform-plugin-sdk#280 is fixed - at_least_one_of: + exactly_one_of: - http_health_check - https_health_check - http2_health_check - tcp_health_check - ssl_health_check - conflicts: - - httpsHealthCheck - - http2HealthCheck - - tcpHealthCheck - - sslHealthCheck properties: - !ruby/object:Api::Type::String name: 'host' @@ -10153,7 +10642,7 @@ objects: - http_health_check.0.port_specification description: | Specifies the type of proxy header to append before sending data to the - backend, either NONE or PROXY_V1. The default is NONE. + backend. values: - :NONE - :PROXY_V1 @@ -10189,19 +10678,12 @@ objects: - :USE_SERVING_PORT - !ruby/object:Api::Type::NestedObject name: 'httpsHealthCheck' - # TODO(terraform-providers/terraform-provider-google#5193): Change back to exactly_one_of - # once hashicorp/terraform-plugin-sdk#280 is fixed - at_least_one_of: + exactly_one_of: - http_health_check - https_health_check - http2_health_check - tcp_health_check - ssl_health_check - conflicts: - - httpHealthCheck - - http2HealthCheck - - tcpHealthCheck - - sslHealthCheck properties: - !ruby/object:Api::Type::String name: 'host' @@ -10283,7 +10765,7 @@ objects: - https_health_check.0.port_specification description: | Specifies the type of proxy header to append before sending data to the - backend, either NONE or PROXY_V1. The default is NONE. + backend. values: - :NONE - :PROXY_V1 @@ -10319,19 +10801,12 @@ objects: - :USE_SERVING_PORT - !ruby/object:Api::Type::NestedObject name: 'tcpHealthCheck' - # TODO(terraform-providers/terraform-provider-google#5193): Change back to exactly_one_of - # once hashicorp/terraform-plugin-sdk#280 is fixed - at_least_one_of: + exactly_one_of: - http_health_check - https_health_check - http2_health_check - tcp_health_check - ssl_health_check - conflicts: - - httpHealthCheck - - httpsHealthCheck - - http2HealthCheck - - sslHealthCheck properties: - !ruby/object:Api::Type::String name: 'request' @@ -10395,7 +10870,7 @@ objects: - tcp_health_check.0.port_specification description: | Specifies the type of proxy header to append before sending data to the - backend, either NONE or PROXY_V1. The default is NONE. + backend. 
values: - :NONE - :PROXY_V1 @@ -10430,19 +10905,12 @@ objects: - :USE_SERVING_PORT - !ruby/object:Api::Type::NestedObject name: 'sslHealthCheck' - # TODO(terraform-providers/terraform-provider-google#5193): Change back to exactly_one_of - # once hashicorp/terraform-plugin-sdk#280 is fixed - at_least_one_of: + exactly_one_of: - http_health_check - https_health_check - http2_health_check - tcp_health_check - ssl_health_check - conflicts: - - httpHealthCheck - - httpsHealthCheck - - http2HealthCheck - - tcpHealthCheck properties: - !ruby/object:Api::Type::String name: 'request' @@ -10506,7 +10974,7 @@ objects: - ssl_health_check.0.port_specification description: | Specifies the type of proxy header to append before sending data to the - backend, either NONE or PROXY_V1. The default is NONE. + backend. values: - :NONE - :PROXY_V1 @@ -10541,19 +11009,12 @@ objects: - :USE_SERVING_PORT - !ruby/object:Api::Type::NestedObject name: 'http2HealthCheck' - # TODO(terraform-providers/terraform-provider-google#5193): Change back to exactly_one_of - # once hashicorp/terraform-plugin-sdk#280 is fixed - at_least_one_of: + exactly_one_of: - http_health_check - https_health_check - http2_health_check - tcp_health_check - ssl_health_check - conflicts: - - httpHealthCheck - - httpsHealthCheck - - tcpHealthCheck - - sslHealthCheck properties: - !ruby/object:Api::Type::String name: 'host' @@ -10635,7 +11096,7 @@ objects: - http2_health_check.0.port_specification description: | Specifies the type of proxy header to append before sending data to the - backend, either NONE or PROXY_V1. The default is NONE. + backend. values: - :NONE - :PROXY_V1 @@ -10729,6 +11190,8 @@ objects: which cannot be a dash. - !ruby/object:Api::Type::NestedObject name: 'snapshotSchedulePolicy' + conflicts: + - 'groupPlacementPolicy' description: | Policy for creating snapshots of persistent disks. properties: @@ -10833,7 +11296,6 @@ objects: description: | Specifies the behavior to apply to scheduled snapshots when the source disk is deleted. - Valid options are KEEP_AUTO_SNAPSHOTS and APPLY_RETENTION_POLICY default_value: :KEEP_AUTO_SNAPSHOTS values: - :KEEP_AUTO_SNAPSHOTS @@ -10870,6 +11332,37 @@ objects: - snapshot_schedule_policy.0.snapshot_properties.0.guest_flush description: | Whether to perform a 'guest aware' snapshot. + - !ruby/object:Api::Type::NestedObject + name: 'groupPlacementPolicy' + conflicts: + - 'snapshotSchedulePolicy' + description: | + Policy for the placement of VM instances in a group. + properties: + - !ruby/object:Api::Type::Integer + name: 'vmCount' + at_least_one_of: + - group_placement_policy.0.vm_count + - group_placement_policy.0.availability_domain_count + description: | + Number of VMs in this placement group. + - !ruby/object:Api::Type::Integer + name: 'availabilityDomainCount' + at_least_one_of: + - group_placement_policy.0.vm_count + - group_placement_policy.0.availability_domain_count + description: | + The number of availability domains instances will be spread across. If two instances are in different + availability domains, they will not be put in the same low-latency network. + - !ruby/object:Api::Type::Enum + name: 'collocation' + description: | + Collocation specifies whether to place VMs inside the same availability domain on the same low-latency network. + Specify `COLLOCATED` to enable collocation. Can only be specified with `vm_count`.
If compute instances are created + with a COLLOCATED policy, then exactly `vm_count` instances must be created at the same time with the resource policy + attached. + values: + - :COLLOCATED - !ruby/object:Api::Resource name: 'Route' kind: 'compute#route' @@ -11153,8 +11646,6 @@ objects: name: advertiseMode description: | User-specified flag to indicate which mode to use for advertisement. - - Valid values of this enum field are: DEFAULT, CUSTOM values: - :DEFAULT - :CUSTOM @@ -11387,8 +11878,7 @@ objects: - !ruby/object:Api::Type::Enum name: 'filter' description: | - Specifies the desired filtering of logs on this NAT. Valid - values are: `"ERRORS_ONLY"`, `"TRANSLATIONS_ONLY"`, `"ALL"` + Specifies the desired filtering of logs on this NAT. required: true values: - :ERRORS_ONLY @@ -11564,6 +12054,106 @@ objects: PARTNER InterconnectAttachment is created, updated, or deleted. output: true + - !ruby/object:Api::Resource + name: 'SecurityPolicy' + kind: 'compute#securityPolicy' + base_url: projects/{{project}}/global/securityPolicies + collection_url_key: 'items' + has_self_link: true + references: !ruby/object:Api::Resource::ReferenceLinks + guides: + 'Official Documentation': 'https://cloud.google.com/armor/docs/security-policy-concepts' + api: 'https://cloud.google.com/compute/docs/reference/rest/v1/securityPolicies' + description: | + Represents a Cloud Armor Security Policy resource. + properties: + - !ruby/object:Api::Type::String + name: 'name' + description: 'Name of the security policy.' + required: true + - !ruby/object:Api::Type::Integer + name: 'id' + description: 'The unique identifier for the resource.' + output: true + - !ruby/object:Api::Type::Array + name: 'rules' + description: | + A list of rules that belong to this policy. + There must always be a default rule (rule with priority 2147483647 and match "*"). + If no rules are provided when creating a security policy, a default rule with action "allow" will be added. + item_type: !ruby/object:Api::Type::NestedObject + properties: + - !ruby/object:Api::Type::String + name: 'description' + description: | + A description of the rule. + - !ruby/object:Api::Type::Integer + name: 'priority' + description: | + An integer indicating the priority of a rule in the list. The priority must be a positive value + between 0 and 2147483647. Rules are evaluated from highest to lowest priority where 0 is the + highest priority and 2147483647 is the lowest priority. + - !ruby/object:Api::Type::String + name: 'action' + description: | + The Action to perform when the client connection triggers the rule. Can currently be either + "allow" or "deny(status)", where valid values for status are 403, 404, and 502. + - !ruby/object:Api::Type::Boolean + name: 'preview' + description: | + If set to true, the specified action is not enforced. + - !ruby/object:Api::Type::NestedObject + name: 'match' + description: + A match condition that incoming traffic is evaluated against. If it evaluates to true, + the corresponding 'action' is enforced. + properties: + - !ruby/object:Api::Type::String + name: 'description' + description: | + A description of the rule. + - !ruby/object:Api::Type::NestedObject + name: 'expr' + description: + User defined CEVAL expression. A CEVAL expression is used to specify match criteria such as origin.ip, + source.region_code and contents in the request header. + properties: + - !ruby/object:Api::Type::String + name: 'expression' + description: | + Textual representation of an expression in Common Expression Language syntax.
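+                  # Editor's sketch (illustrative comment, not part of the schema): a
+                  # typical Cloud Armor match expression restricting by source range,
+                  # assuming the documented CEL attribute origin.ip:
+                  #   expression: "inIpRange(origin.ip, '9.9.9.0/24')"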
+ - !ruby/object:Api::Type::String + name: 'title' + description: | + Optional. Title for the expression, i.e. a short string describing its purpose. + This can be used e.g. in UIs which allow the expression to be entered. + - !ruby/object:Api::Type::String + name: 'description' + description: | + Optional. Description of the expression. This is a longer text which describes the expression, + e.g. when hovered over it in a UI. + - !ruby/object:Api::Type::String + name: 'location' + description: | + Optional. String indicating the location of the expression for error reporting, + e.g. a file name and a position in the file. + - !ruby/object:Api::Type::String + name: 'versionedExpr' + description: | + Preconfigured versioned expression. If this field is specified, config must also be specified. + Available preconfigured expressions along with their requirements are: `SRC_IPS_V1` - must specify + the corresponding srcIpRange field in config. + - !ruby/object:Api::Type::NestedObject + name: 'config' + description: + The configuration options available when specifying versionedExpr. This field must be specified + if versionedExpr is specified and cannot be specified if versionedExpr is not specified. + properties: + - !ruby/object:Api::Type::Array + name: 'srcIpRanges' + description: | + CIDR IP address range. + item_type: Api::Type::String - !ruby/object:Api::Resource name: 'Snapshot' kind: 'compute#snapshot' @@ -11713,7 +12303,7 @@ objects: - !ruby/object:Api::Type::Integer name: 'storageBytes' description: | - A size of the the storage used by the snapshot. As snapshots share + A size of the storage used by the snapshot. As snapshots share storage, this number is expected to change with snapshot creation/deletion. output: true @@ -12147,8 +12737,7 @@ objects: - :SCSI - :NVME description: | - The disk interface to use for attaching this disk, one - of `SCSI` or `NVME`. The default is `SCSI`. + The disk interface to use for attaching this disk. - !ruby/object:Api::Type::Integer name: 'diskSizeGb' required: true @@ -12216,8 +12805,7 @@ objects: name: 'profile' description: | Profile specifies the set of SSL features that can be used by the - load balancer when negotiating SSL with clients. This can be one of - `COMPATIBLE`, `MODERN`, `RESTRICTED`, or `CUSTOM`. If using `CUSTOM`, + load balancer when negotiating SSL with clients. If using `CUSTOM`, the set of SSL features to enable must be specified in the `customFeatures` field. values: @@ -12229,8 +12817,7 @@ objects: name: 'minTlsVersion' description: | The minimum version of SSL protocol that can be used by the clients - to establish a connection with the load balancer. This can be one of - `TLS_1_0`, `TLS_1_1`, `TLS_1_2`. + to establish a connection with the load balancer. values: - :TLS_1_0 - :TLS_1_1 @@ -12446,7 +13033,7 @@ objects: resource: 'Region' imports: 'name' description: | - URL of the GCP region for this subnetwork. + The GCP region for this subnetwork. required: true input: true - !ruby/object:Api::Type::NestedObject @@ -12506,7 +13093,7 @@ objects: description: | Can only be specified if VPC flow logging for this subnetwork is enabled. Configures whether metadata fields should be added to the reported VPC - flow logs. Default is `INCLUDE_ALL_METADATA`. + flow logs. values: - :EXCLUDE_ALL_METADATA - :INCLUDE_ALL_METADATA @@ -12648,8 +13235,7 @@ objects: whether the load balancer will attempt to negotiate QUIC with clients or not. Can specify one of NONE, ENABLE, or DISABLE.
If NONE is specified, uses the QUIC policy with no user overrides, which is - equivalent to DISABLE. Not specifying this field is equivalent to - specifying NONE. + equivalent to DISABLE. values: - :NONE - :ENABLE @@ -12707,7 +13293,7 @@ objects: guides: 'Official Documentation': 'https://cloud.google.com/compute/docs/load-balancing/http/target-proxies' - api: 'https://cloud.google.com/compute/docs/reference/rest/beta/regionTargetHttpProxies' + api: 'https://cloud.google.com/compute/docs/reference/rest/v1/regionTargetHttpProxies' async: !ruby/object:Api::OpAsync operation: !ruby/object:Api::OpAsync::Operation kind: 'compute#operation' @@ -12781,7 +13367,7 @@ objects: references: !ruby/object:Api::Resource::ReferenceLinks guides: 'Official Documentation': 'https://cloud.google.com/compute/docs/load-balancing/http/target-proxies' - api: 'https://cloud.google.com/compute/docs/reference/rest/beta/regionTargetHttpsProxies' + api: 'https://cloud.google.com/compute/docs/reference/rest/v1/regionTargetHttpsProxies' async: !ruby/object:Api::OpAsync operation: !ruby/object:Api::OpAsync::Operation kind: 'compute#operation' @@ -13175,7 +13761,7 @@ objects: name: 'proxyHeader' description: | Specifies the type of proxy header to append before sending data to - the backend, either NONE or PROXY_V1. The default is NONE. + the backend. values: - :NONE - :PROXY_V1 @@ -13279,7 +13865,7 @@ objects: name: 'proxyHeader' description: | Specifies the type of proxy header to append before sending data to - the backend, either NONE or PROXY_V1. The default is NONE. + the backend. values: - :NONE - :PROXY_V1 @@ -13588,17 +14174,20 @@ objects: output: true - !ruby/object:Api::Type::ResourceRef name: 'defaultService' + exactly_one_of: + - default_service + - default_url_redirect + - default_route_action.0.weighted_backend_services resource: 'BackendService' imports: 'selfLink' description: | - The BackendService resource to which traffic is - directed if none of the hostRules match. If defaultRouteAction is additionally - specified, advanced routing actions like URL Rewrites, etc. take effect prior to - sending the request to the backend. However, if defaultService is specified, - defaultRouteAction cannot contain any weightedBackendServices. Conversely, if - routeAction specifies any weightedBackendServices, service must not be - specified. Only one of defaultService, defaultUrlRedirect or - defaultRouteAction.weightedBackendService must be set. + The full or partial URL of the defaultService resource to which traffic is directed if + none of the hostRules match. If defaultRouteAction is additionally specified, advanced + routing actions like URL Rewrites, etc. take effect prior to sending the request to the + backend. However, if defaultService is specified, defaultRouteAction cannot contain any + weightedBackendServices. Conversely, if routeAction specifies any + weightedBackendServices, service must not be specified. Only one of defaultService, + defaultUrlRedirect or defaultRouteAction.weightedBackendService must be set. - !ruby/object:Api::Type::String name: 'description' description: | @@ -13743,25 +14332,30 @@ objects: properties: - !ruby/object:Api::Type::ResourceRef name: 'defaultService' + # TODO: (mbang) won't work for array path matchers yet, uncomment here once they are supported. 
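+    # Editor's note (illustrative comment, not part of the schema): both full and
+    # partial URLs are accepted here, e.g.
+    #   https://www.googleapis.com/compute/v1/projects/myproj/global/backendServices/default
+    #   global/backendServices/default
+    # (project and service names above are hypothetical.)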
+ # (github.com/hashicorp/terraform-plugin-sdk/issues/470) + # exactly_one_of: + # - path_matchers.0.default_service + # - path_matchers.0.default_url_redirect + # - path_matchers.0.default_route_action.0.weighted_backend_services resource: 'BackendService' imports: 'selfLink' description: | - The BackendService resource. This will be used if - none of the pathRules or routeRules defined by this PathMatcher are matched. For - example, the following are all valid URLs to a BackendService resource: - - https://www.googleapis.com/compute/v1/projects/project/global/backendServices/backen - dService + The full or partial URL to the BackendService resource. This will be used if none + of the pathRules or routeRules defined by this PathMatcher are matched. For example, + the following are all valid URLs to a BackendService resource: + - https://www.googleapis.com/compute/v1/projects/project/global/backendServices/backendService - compute/v1/projects/project/global/backendServices/backendService - global/backendServices/backendService - If defaultRouteAction is additionally - specified, advanced routing actions like URL Rewrites, etc. take effect prior to - sending the request to the backend. However, if defaultService is specified, - defaultRouteAction cannot contain any weightedBackendServices. Conversely, if - defaultRouteAction specifies any weightedBackendServices, defaultService must - not be specified. Only one of defaultService, defaultUrlRedirect or - defaultRouteAction.weightedBackendService must be set. Authorization requires - one or more of the following Google IAM permissions on the specified resource - default_service: + If defaultRouteAction is additionally specified, advanced routing actions like URL + Rewrites, etc. take effect prior to sending the request to the backend. However, if + defaultService is specified, defaultRouteAction cannot contain any + weightedBackendServices. Conversely, if defaultRouteAction specifies any + weightedBackendServices, defaultService must not be specified. + Only one of defaultService, defaultUrlRedirect or + defaultRouteAction.weightedBackendService must be set. Authorization requires one + or more of the following Google IAM permissions on the + specified resource defaultService: - compute.backendBuckets.use - compute.backendServices.use - !ruby/object:Api::Type::String @@ -14199,45 +14793,59 @@ objects: - !ruby/object:Api::Type::NestedObject name: 'urlRedirect' description: | - When a path pattern is matched, the request is redirected to a URL specified by - urlRedirect. If urlRedirect is specified, service or routeAction must not be - set. + When a path pattern is matched, the request is redirected to a URL specified + by urlRedirect. If urlRedirect is specified, service or routeAction must not + be set. properties: - !ruby/object:Api::Type::String name: 'hostRedirect' description: | - The host that will be used in the redirect response instead of the one that was - supplied in the request. The value must be between 1 and 255 characters. + The host that will be used in the redirect response instead of the one + that was supplied in the request. The value must be between 1 and 255 + characters. - !ruby/object:Api::Type::Boolean name: 'httpsRedirect' default_value: false description: | - If set to true, the URL scheme in the redirected request is set to https. If set - to false, the URL scheme of the redirected request will remain the same as that - of the request. This must only be set for UrlMaps used in TargetHttpProxys. 
- Setting this true for TargetHttpsProxy is not permitted. Defaults to false. + If set to true, the URL scheme in the redirected request is set to https. + If set to false, the URL scheme of the redirected request will remain the + same as that of the request. This must only be set for UrlMaps used in + TargetHttpProxys. Setting this true for TargetHttpsProxy is not + permitted. The default is set to false. - !ruby/object:Api::Type::String name: 'pathRedirect' description: | - The path that will be used in the redirect response instead of the one that was - supplied in the request. Only one of pathRedirect or prefixRedirect must be - specified. The value must be between 1 and 1024 characters. + The path that will be used in the redirect response instead of the one + that was supplied in the request. pathRedirect cannot be supplied + together with prefixRedirect. Supply one alone or neither. If neither is + supplied, the path of the original request will be used for the redirect. + The value must be between 1 and 1024 characters. - !ruby/object:Api::Type::String name: 'prefixRedirect' description: | - The prefix that replaces the prefixMatch specified in the HttpRouteRuleMatch, - retaining the remaining portion of the URL before redirecting the request. + The prefix that replaces the prefixMatch specified in the + HttpRouteRuleMatch, retaining the remaining portion of the URL before + redirecting the request. prefixRedirect cannot be supplied together with + pathRedirect. Supply one alone or neither. If neither is supplied, the + path of the original request will be used for the redirect. The value + must be between 1 and 1024 characters. - !ruby/object:Api::Type::Enum name: 'redirectResponseCode' description: | The HTTP Status code to use for this RedirectAction. Supported values are: - - MOVED_PERMANENTLY_DEFAULT, which is the default value and corresponds to 301. - - FOUND, which corresponds to 302. - - SEE_OTHER which corresponds to 303. - - TEMPORARY_REDIRECT, which corresponds to 307. In this case, the request method + + * MOVED_PERMANENTLY_DEFAULT, which is the default value and corresponds to 301. + + * FOUND, which corresponds to 302. + + * SEE_OTHER which corresponds to 303. + + * TEMPORARY_REDIRECT, which corresponds to 307. In this case, the request method will be retained. - - PERMANENT_REDIRECT, which corresponds to 308. In this case, + + * PERMANENT_REDIRECT, which corresponds to 308. In this case, the request method will be retained. + skip_docs_values: true values: - :FOUND - :MOVED_PERMANENTLY_DEFAULT @@ -14246,11 +14854,10 @@ objects: - :TEMPORARY_REDIRECT - !ruby/object:Api::Type::Boolean name: 'stripQuery' - required: true description: | - If set to true, any accompanying query portion of the original URL is removed - prior to redirecting the request. If set to false, the query portion of the - original URL is retained. + If set to true, any accompanying query portion of the original URL is + removed prior to redirecting the request. If set to false, the query + portion of the original URL is retained. - !ruby/object:Api::Type::Array name: 'routeRules' description: | @@ -14911,12 +15518,18 @@ objects: - !ruby/object:Api::Type::Enum name: 'redirectResponseCode' description: | - The HTTP Status code to use for this RedirectAction. Supported values are: - - MOVED_PERMANENTLY_DEFAULT, which is the default value and corresponds to 301. - - FOUND, which corresponds to 302. - SEE_OTHER which corresponds to 303. - - TEMPORARY_REDIRECT, which corresponds to 307. 
In this case, the request method - will be retained. - PERMANENT_REDIRECT, which corresponds to 308. In this case, - the request method will be retained. + The HTTP Status code to use for this RedirectAction. Supported values are: + + * MOVED_PERMANENTLY_DEFAULT, which is the default value and corresponds to 301. + + * FOUND, which corresponds to 302. + + * SEE_OTHER which corresponds to 303. + + * TEMPORARY_REDIRECT, which corresponds to 307. In this case, the request method will be retained. + + * PERMANENT_REDIRECT, which corresponds to 308. In this case, the request method will be retained. + skip_docs_values: true values: - :FOUND - :MOVED_PERMANENTLY_DEFAULT @@ -14930,6 +15543,388 @@ objects: If set to true, any accompanying query portion of the original URL is removed prior to redirecting the request. If set to false, the query portion of the original URL is retained. Defaults to false. + - !ruby/object:Api::Type::NestedObject + name: 'defaultUrlRedirect' + # TODO: (mbang) won't work for array path matchers yet, uncomment here once they are supported. + # (github.com/hashicorp/terraform-plugin-sdk/issues/470) + # exactly_one_of: + # - path_matchers.0.default_service + # - path_matchers.0.default_url_redirect + # - path_matchers.0.default_route_action.0.weighted_backend_services + description: | + When none of the specified pathRules or routeRules match, the request is redirected to a URL specified + by defaultUrlRedirect. If defaultUrlRedirect is specified, defaultService or + defaultRouteAction must not be set. + properties: + - !ruby/object:Api::Type::String + name: 'hostRedirect' + description: | + The host that will be used in the redirect response instead of the one that was + supplied in the request. The value must be between 1 and 255 characters. + - !ruby/object:Api::Type::Boolean + name: 'httpsRedirect' + default_value: false + description: | + If set to true, the URL scheme in the redirected request is set to https. If set to + false, the URL scheme of the redirected request will remain the same as that of the + request. This must only be set for UrlMaps used in TargetHttpProxys. Setting this + true for TargetHttpsProxy is not permitted. The default is set to false. + - !ruby/object:Api::Type::String + name: 'pathRedirect' + description: | + The path that will be used in the redirect response instead of the one that was + supplied in the request. pathRedirect cannot be supplied together with + prefixRedirect. Supply one alone or neither. If neither is supplied, the path of the + original request will be used for the redirect. The value must be between 1 and 1024 + characters. + - !ruby/object:Api::Type::String + name: 'prefixRedirect' + description: | + The prefix that replaces the prefixMatch specified in the HttpRouteRuleMatch, + retaining the remaining portion of the URL before redirecting the request. + prefixRedirect cannot be supplied together with pathRedirect. Supply one alone or + neither. If neither is supplied, the path of the original request will be used for + the redirect. The value must be between 1 and 1024 characters. + - !ruby/object:Api::Type::Enum + name: 'redirectResponseCode' + description: | + The HTTP Status code to use for this RedirectAction. Supported values are: + + * MOVED_PERMANENTLY_DEFAULT, which is the default value and corresponds to 301. + + * FOUND, which corresponds to 302. + + * SEE_OTHER which corresponds to 303. + + * TEMPORARY_REDIRECT, which corresponds to 307. In this case, the request method + will be retained.
+ + * PERMANENT_REDIRECT, which corresponds to 308. In this case, + the request method will be retained. + skip_docs_values: true + values: + - :FOUND + - :MOVED_PERMANENTLY_DEFAULT + - :PERMANENT_REDIRECT + - :SEE_OTHER + - :TEMPORARY_REDIRECT + - !ruby/object:Api::Type::Boolean + name: 'stripQuery' + description: | + If set to true, any accompanying query portion of the original URL is removed prior + to redirecting the request. If set to false, the query portion of the original URL is + retained. + - !ruby/object:Api::Type::NestedObject + name: 'defaultRouteAction' + # TODO: (mbang) conflicts also won't work for array path matchers yet, uncomment here once supported. + # conflicts: + # - defaultUrlRedirect + description: | + defaultRouteAction takes effect when none of the pathRules or routeRules match. The load balancer performs + advanced routing actions like URL rewrites, header transformations, etc. prior to forwarding the request + to the selected backend. If defaultRouteAction specifies any weightedBackendServices, defaultService must not be set. + Conversely if defaultService is set, defaultRouteAction cannot contain any weightedBackendServices. + + Only one of defaultRouteAction or defaultUrlRedirect must be set. + properties: + - !ruby/object:Api::Type::Array + name: 'weightedBackendServices' + # TODO: (mbang) won't work for array path matchers yet, uncomment here once they are supported. + # (github.com/hashicorp/terraform-plugin-sdk/issues/470) + # exactly_one_of: + # - path_matchers.0.default_service + # - path_matchers.0.default_url_redirect + # - path_matchers.0.default_route_action.0.weighted_backend_services + description: | + A list of weighted backend services to send traffic to when a route match occurs. + The weights determine the fraction of traffic that flows to their corresponding backend service. + If all traffic needs to go to a single backend service, there must be one weightedBackendService + with weight set to a non-zero number. + + Once a backendService is identified and before forwarding the request to the backend service, + advanced routing actions like URL rewrites and header transformations are applied depending on + additional settings specified in this HttpRouteAction. + item_type: !ruby/object:Api::Type::NestedObject + properties: + - !ruby/object:Api::Type::ResourceRef + name: 'backendService' + resource: 'BackendService' + imports: 'selfLink' + description: | + The full or partial URL to the default BackendService resource. Before forwarding the + request to backendService, the loadbalancer applies any relevant headerActions + specified as part of this backendServiceWeight. + - !ruby/object:Api::Type::Integer + name: 'weight' + description: | + Specifies the fraction of traffic sent to backendService, computed as + weight / (sum of all weightedBackendService weights in routeAction). + + The selection of a backend service is determined only for new traffic. Once a user's request + has been directed to a backendService, subsequent requests will be sent to the same backendService + as determined by the BackendService's session affinity policy. + + The value must be between 0 and 1000. + - !ruby/object:Api::Type::NestedObject + name: 'headerAction' + description: | + Specifies changes to request and response headers that need to take effect for + the selected backendService. + + The headerAction specified here takes effect before headerAction in the enclosing + HttpRouteRule, PathMatcher and UrlMap.
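+                      # Editor's sketch (illustrative comment, not part of the schema):
+                      # with two weightedBackendServices of weight 900 and 100, a new
+                      # request is routed to the first backend with probability
+                      # 900/(900+100) = 90%.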
+ properties: + - !ruby/object:Api::Type::Array + name: 'requestHeadersToRemove' + description: | + A list of header names for headers that need to be removed from the request prior to + forwarding the request to the backendService. + item_type: Api::Type::String + - !ruby/object:Api::Type::Array + name: 'requestHeadersToAdd' + description: | + Headers to add to a matching request prior to forwarding the request to the backendService. + item_type: !ruby/object:Api::Type::NestedObject + properties: + - !ruby/object:Api::Type::String + name: 'headerName' + description: | + The name of the header to add. + - !ruby/object:Api::Type::String + name: 'headerValue' + description: | + The value of the header to add. + - !ruby/object:Api::Type::Boolean + name: 'replace' + description: | + If false, headerValue is appended to any values that already exist for the header. + If true, headerValue is set for the header, discarding any values that were set for that header. + default_value: false + - !ruby/object:Api::Type::Array + name: 'responseHeadersToRemove' + description: | + A list of header names for headers that need to be removed from the response prior to sending the + response back to the client. + item_type: Api::Type::String + - !ruby/object:Api::Type::Array + name: 'responseHeadersToAdd' + description: | + Headers to add to the response prior to sending the response back to the client. + item_type: !ruby/object:Api::Type::NestedObject + properties: + - !ruby/object:Api::Type::String + name: 'headerName' + description: | + The name of the header to add. + - !ruby/object:Api::Type::String + name: 'headerValue' + description: | + The value of the header to add. + - !ruby/object:Api::Type::Boolean + name: 'replace' + description: | + If false, headerValue is appended to any values that already exist for the header. + If true, headerValue is set for the header, discarding any values that were set for that header. + default_value: false + - !ruby/object:Api::Type::NestedObject + name: 'urlRewrite' + description: | + The spec to modify the URL of the request, prior to forwarding the request to the matched service. + properties: + - !ruby/object:Api::Type::String + name: 'pathPrefixRewrite' + description: | + Prior to forwarding the request to the selected backend service, the matching portion of the + request's path is replaced by pathPrefixRewrite. + + The value must be between 1 and 1024 characters. + - !ruby/object:Api::Type::String + name: 'hostRewrite' + description: | + Prior to forwarding the request to the selected service, the request's host header is replaced + with contents of hostRewrite. + + The value must be between 1 and 255 characters. + - !ruby/object:Api::Type::NestedObject + name: 'timeout' + description: | + Specifies the timeout for the selected route. Timeout is computed from the time the request has been + fully processed (i.e. end-of-stream) up until the response has been completely processed. Timeout includes all retries. + + If not specified, will use the largest timeout among all backend services associated with the route. + properties: + - !ruby/object:Api::Type::String + name: 'seconds' + description: | + Span of time at a resolution of a second. Must be from 0 to 315,576,000,000 inclusive. + Note: these bounds are computed from: 60 sec/min * 60 min/hr * 24 hr/day * 365.25 days/year * 10000 years + - !ruby/object:Api::Type::Integer + name: 'nanos' + description: | + Span of time that's a fraction of a second at nanosecond resolution.
Durations less than one second are represented + with a 0 seconds field and a positive nanos field. Must be from 0 to 999,999,999 inclusive. + - !ruby/object:Api::Type::NestedObject + name: 'retryPolicy' + description: | + Specifies the retry policy associated with this route. + properties: + - !ruby/object:Api::Type::Array + name: 'retryConditions' + description: | + Specifies one or more conditions when this retry rule applies. Valid values are: + + 5xx: Loadbalancer will attempt a retry if the backend service responds with any 5xx response code, + or if the backend service does not respond at all, example: disconnects, reset, read timeout, + connection failure, and refused streams. + gateway-error: Similar to 5xx, but only applies to response codes 502, 503 or 504. + connect-failure: Loadbalancer will retry on failures connecting to backend services, + for example due to connection timeouts. + retriable-4xx: Loadbalancer will retry for retriable 4xx response codes. + Currently the only retriable error supported is 409. + refused-stream: Loadbalancer will retry if the backend service resets the stream with a REFUSED_STREAM error code. + This reset type indicates that it is safe to retry. + cancelled: Loadbalancer will retry if the gRPC status code in the response header is set to cancelled + deadline-exceeded: Loadbalancer will retry if the gRPC status code in the response header is set to deadline-exceeded + resource-exhausted: Loadbalancer will retry if the gRPC status code in the response header is set to resource-exhausted + unavailable: Loadbalancer will retry if the gRPC status code in the response header is set to unavailable + item_type: Api::Type::String + - !ruby/object:Api::Type::Integer + name: 'numRetries' + description: | + Specifies the allowed number of retries. This number must be > 0. If not specified, defaults to 1. + default_value: 1 + - !ruby/object:Api::Type::NestedObject + name: 'perTryTimeout' + description: | + Specifies a non-zero timeout per retry attempt. + + If not specified, will use the timeout set in HttpRouteAction. If timeout in HttpRouteAction is not set, + will use the largest timeout among all backend services associated with the route. + properties: + - !ruby/object:Api::Type::String + name: 'seconds' + description: | + Span of time at a resolution of a second. Must be from 0 to 315,576,000,000 inclusive. + Note: these bounds are computed from: 60 sec/min * 60 min/hr * 24 hr/day * 365.25 days/year * 10000 years + - !ruby/object:Api::Type::Integer + name: 'nanos' + description: | + Span of time that's a fraction of a second at nanosecond resolution. Durations less than one second are + represented with a 0 seconds field and a positive nanos field. Must be from 0 to 999,999,999 inclusive. + - !ruby/object:Api::Type::NestedObject + name: 'requestMirrorPolicy' + description: | + Specifies the policy on how requests intended for the route's backends are shadowed to a separate mirrored backend service. + Loadbalancer does not wait for responses from the shadow service. Prior to sending traffic to the shadow service, + the host / authority header is suffixed with -shadow. + properties: + - !ruby/object:Api::Type::ResourceRef + name: 'backendService' + resource: 'BackendService' + imports: 'selfLink' + description: | + The full or partial URL to the BackendService resource being mirrored to. + required: true + - !ruby/object:Api::Type::NestedObject + name: 'corsPolicy' + description: | + The specification for allowing client side cross-origin requests.
Please see + [W3C Recommendation for Cross Origin Resource Sharing](https://www.w3.org/TR/cors/) + properties: + - !ruby/object:Api::Type::Array + name: 'allowOrigins' + description: | + Specifies the list of origins that will be allowed to do CORS requests. + An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. + item_type: Api::Type::String + - !ruby/object:Api::Type::Array + name: 'allowOriginRegexes' + description: | + Specifies the regular expression patterns that match allowed origins. For regular expression grammar + please see en.cppreference.com/w/cpp/regex/ecmascript + An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. + item_type: Api::Type::String + - !ruby/object:Api::Type::Array + name: 'allowMethods' + description: | + Specifies the content for the Access-Control-Allow-Methods header. + item_type: Api::Type::String + - !ruby/object:Api::Type::Array + name: 'allowHeaders' + description: | + Specifies the content for the Access-Control-Allow-Headers header. + item_type: Api::Type::String + - !ruby/object:Api::Type::Array + name: 'exposeHeaders' + description: | + Specifies the content for the Access-Control-Expose-Headers header. + item_type: Api::Type::String + - !ruby/object:Api::Type::Integer + name: 'maxAge' + description: | + Specifies how long results of a preflight request can be cached, in seconds. + This translates to the Access-Control-Max-Age header. + - !ruby/object:Api::Type::Boolean + name: 'allowCredentials' + description: | + In response to a preflight request, setting this to true indicates that the actual request can include user credentials. + This translates to the Access-Control-Allow-Credentials header. + default_value: false + - !ruby/object:Api::Type::Boolean + name: 'disabled' + description: | + If true, specifies the CORS policy is disabled. The default value is false, which indicates that the CORS policy is in effect. + default_value: false + - !ruby/object:Api::Type::NestedObject + name: 'faultInjectionPolicy' + description: | + The specification for fault injection introduced into traffic to test the resiliency of clients to backend service failure. + As part of fault injection, when clients send requests to a backend service, delays can be introduced by Loadbalancer on a + percentage of requests before sending those requests to the backend service. Similarly, requests from clients can be aborted + by the Loadbalancer for a percentage of requests. + + timeout and retryPolicy will be ignored by clients that are configured with a faultInjectionPolicy. + properties: + - !ruby/object:Api::Type::NestedObject + name: 'delay' + description: | + The specification for how client requests are delayed as part of fault injection, before being sent to a backend service. + properties: + - !ruby/object:Api::Type::NestedObject + name: 'fixedDelay' + description: | + Specifies the value of the fixed delay interval. + properties: + - !ruby/object:Api::Type::String + name: 'seconds' + description: | + Span of time at a resolution of a second. Must be from 0 to 315,576,000,000 inclusive. + Note: these bounds are computed from: 60 sec/min * 60 min/hr * 24 hr/day * 365.25 days/year * 10000 years + - !ruby/object:Api::Type::Integer + name: 'nanos' + description: | + Span of time that's a fraction of a second at nanosecond resolution. Durations less than one second are + represented with a 0 seconds field and a positive nanos field. Must be from 0 to 999,999,999 inclusive.
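+                          # Editor's sketch (illustrative comment, not part of the schema):
+                          # a fixed delay of 1.5 seconds is encoded as
+                          #   seconds: "1"
+                          #   nanos: 500000000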
+ - !ruby/object:Api::Type::Double + name: 'percentage' + description: | + The percentage of traffic (connections/operations/requests) on which delay will be introduced as part of fault injection. + The value must be between 0.0 and 100.0 inclusive. + - !ruby/object:Api::Type::NestedObject + name: 'abort' + description: | + The specification for how client requests are aborted as part of fault injection. + properties: + - !ruby/object:Api::Type::Integer + name: 'httpStatus' + description: | + The HTTP status code used to abort the request. + The value must be between 200 and 599 inclusive. + - !ruby/object:Api::Type::Double + name: 'percentage' + description: | + The percentage of traffic (connections/operations/requests) which will be aborted as part of fault injection. + The value must be between 0.0 and 100.0 inclusive. - !ruby/object:Api::Type::Array name: 'tests' description: | @@ -14959,6 +15954,567 @@ objects: required: true description: | Expected BackendService resource the given URL should be mapped to. + - !ruby/object:Api::Type::NestedObject + name: 'defaultUrlRedirect' + exactly_one_of: + - default_service + - default_url_redirect + - default_route_action.0.weighted_backend_services + conflicts: + - defaultRouteAction + description: | + When none of the specified hostRules match, the request is redirected to a URL specified + by defaultUrlRedirect. If defaultUrlRedirect is specified, defaultService or + defaultRouteAction must not be set. + properties: + - !ruby/object:Api::Type::String + name: 'hostRedirect' + description: | + The host that will be used in the redirect response instead of the one that was + supplied in the request. The value must be between 1 and 255 characters. + - !ruby/object:Api::Type::Boolean + name: 'httpsRedirect' + default_value: false + description: | + If set to true, the URL scheme in the redirected request is set to https. If set to + false, the URL scheme of the redirected request will remain the same as that of the + request. This must only be set for UrlMaps used in TargetHttpProxys. Setting this + true for TargetHttpsProxy is not permitted. The default is set to false. + - !ruby/object:Api::Type::String + name: 'pathRedirect' + description: | + The path that will be used in the redirect response instead of the one that was + supplied in the request. pathRedirect cannot be supplied together with + prefixRedirect. Supply one alone or neither. If neither is supplied, the path of the + original request will be used for the redirect. The value must be between 1 and 1024 + characters. + - !ruby/object:Api::Type::String + name: 'prefixRedirect' + description: | + The prefix that replaces the prefixMatch specified in the HttpRouteRuleMatch, + retaining the remaining portion of the URL before redirecting the request. + prefixRedirect cannot be supplied together with pathRedirect. Supply one alone or + neither. If neither is supplied, the path of the original request will be used for + the redirect. The value must be between 1 and 1024 characters. + - !ruby/object:Api::Type::Enum + name: 'redirectResponseCode' + description: | + The HTTP Status code to use for this RedirectAction. Supported values are: + + * MOVED_PERMANENTLY_DEFAULT, which is the default value and corresponds to 301. + + * FOUND, which corresponds to 302. + + * SEE_OTHER which corresponds to 303. + + * TEMPORARY_REDIRECT, which corresponds to 307. In this case, the request method + will be retained. + + * PERMANENT_REDIRECT, which corresponds to 308. 
In this case, + the request method will be retained. + skip_docs_values: true + values: + - :FOUND + - :MOVED_PERMANENTLY_DEFAULT + - :PERMANENT_REDIRECT + - :SEE_OTHER + - :TEMPORARY_REDIRECT + - !ruby/object:Api::Type::Boolean + name: 'stripQuery' + description: | + If set to true, any accompanying query portion of the original URL is removed prior + to redirecting the request. If set to false, the query portion of the original URL is + retained. The default is set to false. + - !ruby/object:Api::Type::NestedObject + name: 'defaultRouteAction' + conflicts: + - defaultUrlRedirect + description: | + defaultRouteAction takes effect when none of the hostRules match. The load balancer performs advanced routing actions + like URL rewrites, header transformations, etc. prior to forwarding the request to the selected backend. + If defaultRouteAction specifies any weightedBackendServices, defaultService must not be set. Conversely if defaultService + is set, defaultRouteAction cannot contain any weightedBackendServices. + + Only one of defaultRouteAction or defaultUrlRedirect must be set. + properties: + - !ruby/object:Api::Type::Array + name: 'weightedBackendServices' + exactly_one_of: + - default_service + - default_url_redirect + - default_route_action.0.weighted_backend_services + description: | + A list of weighted backend services to send traffic to when a route match occurs. + The weights determine the fraction of traffic that flows to their corresponding backend service. + If all traffic needs to go to a single backend service, there must be one weightedBackendService + with weight set to a non-zero number. + + Once a backendService is identified and before forwarding the request to the backend service, + advanced routing actions like URL rewrites and header transformations are applied depending on + additional settings specified in this HttpRouteAction. + at_least_one_of: + - default_route_action.0.weighted_backend_services + - default_route_action.0.url_rewrite + - default_route_action.0.timeout + - default_route_action.0.retry_policy + - default_route_action.0.request_mirror_policy + - default_route_action.0.cors_policy + - default_route_action.0.fault_injection_policy + item_type: !ruby/object:Api::Type::NestedObject + properties: + - !ruby/object:Api::Type::ResourceRef + name: 'backendService' + resource: 'BackendService' + imports: 'selfLink' + description: | + The full or partial URL to the default BackendService resource. Before forwarding the + request to backendService, the loadbalancer applies any relevant headerActions + specified as part of this backendServiceWeight. + - !ruby/object:Api::Type::Integer + name: 'weight' + description: | + Specifies the fraction of traffic sent to backendService, computed as + weight / (sum of all weightedBackendService weights in routeAction). + + The selection of a backend service is determined only for new traffic. Once a user's request + has been directed to a backendService, subsequent requests will be sent to the same backendService + as determined by the BackendService's session affinity policy. + + The value must be between 0 and 1000. + - !ruby/object:Api::Type::NestedObject + name: 'headerAction' + description: | + Specifies changes to request and response headers that need to take effect for + the selected backendService. + + The headerAction specified here takes effect before headerAction in the enclosing + HttpRouteRule, PathMatcher and UrlMap.
+ properties: + - !ruby/object:Api::Type::Array + name: 'requestHeadersToRemove' + description: | + A list of header names for headers that need to be removed from the request prior to + forwarding the request to the backendService. + item_type: Api::Type::String + - !ruby/object:Api::Type::Array + name: 'requestHeadersToAdd' + description: | + Headers to add to a matching request prior to forwarding the request to the backendService. + item_type: !ruby/object:Api::Type::NestedObject + properties: + - !ruby/object:Api::Type::String + name: 'headerName' + description: | + The name of the header to add. + - !ruby/object:Api::Type::String + name: 'headerValue' + description: | + The value of the header to add. + - !ruby/object:Api::Type::Boolean + name: 'replace' + description: | + If false, headerValue is appended to any values that already exist for the header. + If true, headerValue is set for the header, discarding any values that were set for that header. + default_value: false + - !ruby/object:Api::Type::Array + name: 'responseHeadersToRemove' + description: | + A list of header names for headers that need to be removed from the response prior to sending the + response back to the client. + item_type: Api::Type::String + - !ruby/object:Api::Type::Array + name: 'responseHeadersToAdd' + description: | + Headers to add to the response prior to sending the response back to the client. + item_type: !ruby/object:Api::Type::NestedObject + properties: + - !ruby/object:Api::Type::String + name: 'headerName' + description: | + The name of the header to add. + - !ruby/object:Api::Type::String + name: 'headerValue' + description: | + The value of the header to add. + - !ruby/object:Api::Type::Boolean + name: 'replace' + description: | + If false, headerValue is appended to any values that already exist for the header. + If true, headerValue is set for the header, discarding any values that were set for that header. + default_value: false + - !ruby/object:Api::Type::NestedObject + name: 'urlRewrite' + description: | + The spec to modify the URL of the request, prior to forwarding the request to the matched service. + at_least_one_of: + - default_route_action.0.weighted_backend_services + - default_route_action.0.url_rewrite + - default_route_action.0.timeout + - default_route_action.0.retry_policy + - default_route_action.0.request_mirror_policy + - default_route_action.0.cors_policy + - default_route_action.0.fault_injection_policy + properties: + - !ruby/object:Api::Type::String + name: 'pathPrefixRewrite' + description: | + Prior to forwarding the request to the selected backend service, the matching portion of the + request's path is replaced by pathPrefixRewrite. + + The value must be between 1 and 1024 characters. + at_least_one_of: + - default_route_action.0.url_rewrite.0.path_prefix_rewrite + - default_route_action.0.url_rewrite.0.host_rewrite + - !ruby/object:Api::Type::String + name: 'hostRewrite' + description: | + Prior to forwarding the request to the selected service, the request's host header is replaced + with contents of hostRewrite. + + The value must be between 1 and 255 characters. + at_least_one_of: + - default_route_action.0.url_rewrite.0.path_prefix_rewrite + - default_route_action.0.url_rewrite.0.host_rewrite + - !ruby/object:Api::Type::NestedObject + name: 'timeout' + description: | + Specifies the timeout for the selected route. Timeout is computed from the time the request has been + fully processed (i.e. end-of-stream) up until the response has been completely processed.
Timeout includes all retries. + + If not specified, will use the largest timeout among all backend services associated with the route. + at_least_one_of: + - default_route_action.0.weighted_backend_services + - default_route_action.0.url_rewrite + - default_route_action.0.timeout + - default_route_action.0.retry_policy + - default_route_action.0.request_mirror_policy + - default_route_action.0.cors_policy + - default_route_action.0.fault_injection_policy + properties: + - !ruby/object:Api::Type::String + name: 'seconds' + description: | + Span of time at a resolution of a second. Must be from 0 to 315,576,000,000 inclusive. + Note: these bounds are computed from: 60 sec/min * 60 min/hr * 24 hr/day * 365.25 days/year * 10000 years + at_least_one_of: + - default_route_action.0.timeout.0.seconds + - default_route_action.0.timeout.0.nanos + - !ruby/object:Api::Type::Integer + name: 'nanos' + description: | + Span of time that's a fraction of a second at nanosecond resolution. Durations less than one second are represented + with a 0 seconds field and a positive nanos field. Must be from 0 to 999,999,999 inclusive. + at_least_one_of: + - default_route_action.0.timeout.0.seconds + - default_route_action.0.timeout.0.nanos + - !ruby/object:Api::Type::NestedObject + name: 'retryPolicy' + description: | + Specifies the retry policy associated with this route. + at_least_one_of: + - default_route_action.0.weighted_backend_services + - default_route_action.0.url_rewrite + - default_route_action.0.timeout + - default_route_action.0.retry_policy + - default_route_action.0.request_mirror_policy + - default_route_action.0.cors_policy + - default_route_action.0.fault_injection_policy + properties: + - !ruby/object:Api::Type::Array + name: 'retryConditions' + description: | + Specifies one or more conditions when this retry rule applies. Valid values are: + + 5xx: Loadbalancer will attempt a retry if the backend service responds with any 5xx response code, + or if the backend service does not respond at all, example: disconnects, reset, read timeout, + connection failure, and refused streams. + gateway-error: Similar to 5xx, but only applies to response codes 502, 503 or 504. + connect-failure: Loadbalancer will retry on failures connecting to backend services, + for example due to connection timeouts. + retriable-4xx: Loadbalancer will retry for retriable 4xx response codes. + Currently the only retriable error supported is 409. + refused-stream: Loadbalancer will retry if the backend service resets the stream with a REFUSED_STREAM error code. + This reset type indicates that it is safe to retry. + cancelled: Loadbalancer will retry if the gRPC status code in the response header is set to cancelled + deadline-exceeded: Loadbalancer will retry if the gRPC status code in the response header is set to deadline-exceeded + resource-exhausted: Loadbalancer will retry if the gRPC status code in the response header is set to resource-exhausted + unavailable: Loadbalancer will retry if the gRPC status code in the response header is set to unavailable + at_least_one_of: + - default_route_action.0.retry_policy.0.retry_conditions + - default_route_action.0.retry_policy.0.num_retries + - default_route_action.0.retry_policy.0.per_try_timeout + item_type: Api::Type::String + - !ruby/object:Api::Type::Integer + name: 'numRetries' + description: | + Specifies the allowed number of retries. This number must be > 0. If not specified, defaults to 1.
+ at_least_one_of: + - default_route_action.0.retry_policy.0.retry_conditions + - default_route_action.0.retry_policy.0.num_retries + - default_route_action.0.retry_policy.0.per_try_timeout + default_value: 1 + - !ruby/object:Api::Type::NestedObject + name: 'perTryTimeout' + description: | + Specifies a non-zero timeout per retry attempt. + + If not specified, will use the timeout set in HttpRouteAction. If timeout in HttpRouteAction is not set, + will use the largest timeout among all backend services associated with the route. + at_least_one_of: + - default_route_action.0.retry_policy.0.retry_conditions + - default_route_action.0.retry_policy.0.num_retries + - default_route_action.0.retry_policy.0.per_try_timeout + properties: + - !ruby/object:Api::Type::String + name: 'seconds' + description: | + Span of time at a resolution of a second. Must be from 0 to 315,576,000,000 inclusive. + Note: these bounds are computed from: 60 sec/min * 60 min/hr * 24 hr/day * 365.25 days/year * 10000 years + at_least_one_of: + - default_route_action.0.retry_policy.0.per_try_timeout.0.seconds + - default_route_action.0.retry_policy.0.per_try_timeout.0.nanos + - !ruby/object:Api::Type::Integer + name: 'nanos' + description: | + Span of time that's a fraction of a second at nanosecond resolution. Durations less than one second are + represented with a 0 seconds field and a positive nanos field. Must be from 0 to 999,999,999 inclusive. + at_least_one_of: + - default_route_action.0.retry_policy.0.per_try_timeout.0.seconds + - default_route_action.0.retry_policy.0.per_try_timeout.0.nanos + - !ruby/object:Api::Type::NestedObject + name: 'requestMirrorPolicy' + description: | + Specifies the policy on how requests intended for the route's backends are shadowed to a separate mirrored backend service. + Loadbalancer does not wait for responses from the shadow service. Prior to sending traffic to the shadow service, + the host / authority header is suffixed with -shadow. + at_least_one_of: + - default_route_action.0.weighted_backend_services + - default_route_action.0.url_rewrite + - default_route_action.0.timeout + - default_route_action.0.retry_policy + - default_route_action.0.request_mirror_policy + - default_route_action.0.cors_policy + - default_route_action.0.fault_injection_policy + properties: + - !ruby/object:Api::Type::ResourceRef + name: 'backendService' + resource: 'BackendService' + imports: 'selfLink' + description: | + The full or partial URL to the BackendService resource being mirrored to. + required: true + - !ruby/object:Api::Type::NestedObject + name: 'corsPolicy' + description: | + The specification for allowing client side cross-origin requests. Please see + [W3C Recommendation for Cross Origin Resource Sharing](https://www.w3.org/TR/cors/) + at_least_one_of: + - default_route_action.0.weighted_backend_services + - default_route_action.0.url_rewrite + - default_route_action.0.timeout + - default_route_action.0.retry_policy + - default_route_action.0.request_mirror_policy + - default_route_action.0.cors_policy + - default_route_action.0.fault_injection_policy + properties: + - !ruby/object:Api::Type::Array + name: 'allowOrigins' + description: | + Specifies the list of origins that will be allowed to do CORS requests. + An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. 
+ at_least_one_of: + - default_route_action.0.cors_policy.0.allow_origins + - default_route_action.0.cors_policy.0.allow_origin_regexes + - default_route_action.0.cors_policy.0.allow_methods + - default_route_action.0.cors_policy.0.allow_headers + - default_route_action.0.cors_policy.0.expose_headers + - default_route_action.0.cors_policy.0.max_age + - default_route_action.0.cors_policy.0.allow_credentials + - default_route_action.0.cors_policy.0.disabled + item_type: Api::Type::String + - !ruby/object:Api::Type::Array + name: 'allowOriginRegexes' + description: | + Specifies the regular expression patterns that match allowed origins. For regular expression grammar + please see en.cppreference.com/w/cpp/regex/ecmascript + An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. + at_least_one_of: + - default_route_action.0.cors_policy.0.allow_origins + - default_route_action.0.cors_policy.0.allow_origin_regexes + - default_route_action.0.cors_policy.0.allow_methods + - default_route_action.0.cors_policy.0.allow_headers + - default_route_action.0.cors_policy.0.expose_headers + - default_route_action.0.cors_policy.0.max_age + - default_route_action.0.cors_policy.0.allow_credentials + - default_route_action.0.cors_policy.0.disabled + item_type: Api::Type::String + - !ruby/object:Api::Type::Array + name: 'allowMethods' + description: | + Specifies the content for the Access-Control-Allow-Methods header. + at_least_one_of: + - default_route_action.0.cors_policy.0.allow_origins + - default_route_action.0.cors_policy.0.allow_origin_regexes + - default_route_action.0.cors_policy.0.allow_methods + - default_route_action.0.cors_policy.0.allow_headers + - default_route_action.0.cors_policy.0.expose_headers + - default_route_action.0.cors_policy.0.max_age + - default_route_action.0.cors_policy.0.allow_credentials + - default_route_action.0.cors_policy.0.disabled + item_type: Api::Type::String + - !ruby/object:Api::Type::Array + name: 'allowHeaders' + description: | + Specifies the content for the Access-Control-Allow-Headers header. + at_least_one_of: + - default_route_action.0.cors_policy.0.allow_origins + - default_route_action.0.cors_policy.0.allow_origin_regexes + - default_route_action.0.cors_policy.0.allow_methods + - default_route_action.0.cors_policy.0.allow_headers + - default_route_action.0.cors_policy.0.expose_headers + - default_route_action.0.cors_policy.0.max_age + - default_route_action.0.cors_policy.0.allow_credentials + - default_route_action.0.cors_policy.0.disabled + item_type: Api::Type::String + - !ruby/object:Api::Type::Array + name: 'exposeHeaders' + description: | + Specifies the content for the Access-Control-Expose-Headers header. + at_least_one_of: + - default_route_action.0.cors_policy.0.allow_origins + - default_route_action.0.cors_policy.0.allow_origin_regexes + - default_route_action.0.cors_policy.0.allow_methods + - default_route_action.0.cors_policy.0.allow_headers + - default_route_action.0.cors_policy.0.expose_headers + - default_route_action.0.cors_policy.0.max_age + - default_route_action.0.cors_policy.0.allow_credentials + - default_route_action.0.cors_policy.0.disabled + item_type: Api::Type::String + - !ruby/object:Api::Type::Integer + name: 'maxAge' + description: | + Specifies how long results of a preflight request can be cached, in seconds. + This translates to the Access-Control-Max-Age header.
+ at_least_one_of: + - default_route_action.0.cors_policy.0.allow_origins + - default_route_action.0.cors_policy.0.allow_origin_regexes + - default_route_action.0.cors_policy.0.allow_methods + - default_route_action.0.cors_policy.0.allow_headers + - default_route_action.0.cors_policy.0.expose_headers + - default_route_action.0.cors_policy.0.max_age + - default_route_action.0.cors_policy.0.allow_credentials + - default_route_action.0.cors_policy.0.disabled + - !ruby/object:Api::Type::Boolean + name: 'allowCredentials' + description: | + In response to a preflight request, setting this to true indicates that the actual request can include user credentials. + This translates to the Access-Control-Allow-Credentials header. + default_value: false + at_least_one_of: + - default_route_action.0.cors_policy.0.allow_origins + - default_route_action.0.cors_policy.0.allow_origin_regexes + - default_route_action.0.cors_policy.0.allow_methods + - default_route_action.0.cors_policy.0.allow_headers + - default_route_action.0.cors_policy.0.expose_headers + - default_route_action.0.cors_policy.0.max_age + - default_route_action.0.cors_policy.0.allow_credentials + - default_route_action.0.cors_policy.0.disabled + - !ruby/object:Api::Type::Boolean + name: 'disabled' + description: | + If true, specifies the CORS policy is disabled. The default value is false, which indicates that the CORS policy is in effect. + default_value: false + at_least_one_of: + - default_route_action.0.cors_policy.0.allow_origins + - default_route_action.0.cors_policy.0.allow_origin_regexes + - default_route_action.0.cors_policy.0.allow_methods + - default_route_action.0.cors_policy.0.allow_headers + - default_route_action.0.cors_policy.0.expose_headers + - default_route_action.0.cors_policy.0.max_age + - default_route_action.0.cors_policy.0.allow_credentials + - default_route_action.0.cors_policy.0.disabled + - !ruby/object:Api::Type::NestedObject + name: 'faultInjectionPolicy' + description: | + The specification for fault injection introduced into traffic to test the resiliency of clients to backend service failure. + As part of fault injection, when clients send requests to a backend service, delays can be introduced by Loadbalancer on a + percentage of requests before sending those request to the backend service. Similarly requests from clients can be aborted + by the Loadbalancer for a percentage of requests. + + timeout and retryPolicy will be ignored by clients that are configured with a faultInjectionPolicy. + at_least_one_of: + - default_route_action.0.weighted_backend_services + - default_route_action.0.url_rewrite + - default_route_action.0.timeout + - default_route_action.0.retry_policy + - default_route_action.0.request_mirror_policy + - default_route_action.0.cors_policy + - default_route_action.0.fault_injection_policy + properties: + - !ruby/object:Api::Type::NestedObject + name: 'delay' + description: | + The specification for how client requests are delayed as part of fault injection, before being sent to a backend service. + at_least_one_of: + - default_route_action.0.fault_injection_policy.0.delay + - default_route_action.0.fault_injection_policy.0.abort + properties: + - !ruby/object:Api::Type::NestedObject + name: 'fixedDelay' + description: | + Specifies the value of the fixed delay interval. 
+ at_least_one_of: + - default_route_action.0.fault_injection_policy.0.delay.0.fixed_delay + - default_route_action.0.fault_injection_policy.0.delay.0.percentage + properties: + - !ruby/object:Api::Type::String + name: 'seconds' + description: | + Span of time at a resolution of a second. Must be from 0 to 315,576,000,000 inclusive. + Note: these bounds are computed from: 60 sec/min * 60 min/hr * 24 hr/day * 365.25 days/year * 10000 years + at_least_one_of: + - default_route_action.0.fault_injection_policy.0.delay.0.fixed_delay.0.seconds + - default_route_action.0.fault_injection_policy.0.delay.0.fixed_delay.0.nanos + - !ruby/object:Api::Type::Integer + name: 'nanos' + description: | + Span of time that's a fraction of a second at nanosecond resolution. Durations less than one second are + represented with a 0 seconds field and a positive nanos field. Must be from 0 to 999,999,999 inclusive. + at_least_one_of: + - default_route_action.0.fault_injection_policy.0.delay.0.fixed_delay.0.seconds + - default_route_action.0.fault_injection_policy.0.delay.0.fixed_delay.0.nanos + - !ruby/object:Api::Type::Double + name: 'percentage' + description: | + The percentage of traffic (connections/operations/requests) on which delay will be introduced as part of fault injection. + The value must be between 0.0 and 100.0 inclusive. + at_least_one_of: + - default_route_action.0.fault_injection_policy.0.delay.0.fixed_delay + - default_route_action.0.fault_injection_policy.0.delay.0.percentage + - !ruby/object:Api::Type::NestedObject + name: 'abort' + description: | + The specification for how client requests are aborted as part of fault injection. + at_least_one_of: + - default_route_action.0.fault_injection_policy.0.delay + - default_route_action.0.fault_injection_policy.0.abort + properties: + - !ruby/object:Api::Type::Integer + name: 'httpStatus' + description: | + The HTTP status code used to abort the request. + The value must be between 200 and 599 inclusive. + at_least_one_of: + - default_route_action.0.fault_injection_policy.0.abort.0.http_status + - default_route_action.0.fault_injection_policy.0.abort.0.percentage + - !ruby/object:Api::Type::Double + name: 'percentage' + description: | + The percentage of traffic (connections/operations/requests) which will be aborted as part of fault injection. + The value must be between 0.0 and 100.0 inclusive.
+ at_least_one_of: + - default_route_action.0.fault_injection_policy.0.abort.0.http_status + - default_route_action.0.fault_injection_policy.0.abort.0.percentage - !ruby/object:Api::Resource name: 'VpnTunnel' kind: 'compute#vpnTunnel' diff --git a/products/compute/inspec.yaml b/products/compute/inspec.yaml index d00aa66a49a1..0453560b0801 100644 --- a/products/compute/inspec.yaml +++ b/products/compute/inspec.yaml @@ -93,6 +93,8 @@ overrides: !ruby/object:Overrides::ResourceOverrides exclude: true MachineType: !ruby/object:Overrides::Inspec::ResourceOverride exclude: true + MachineImage: !ruby/object:Overrides::Inspec::ResourceOverride + exclude: true ManagedSslCertificate: !ruby/object:Overrides::Inspec::ResourceOverride exclude: true Network: !ruby/object:Overrides::Inspec::ResourceOverride @@ -183,6 +185,8 @@ overrides: !ruby/object:Overrides::ResourceOverrides exclude: true pathMatchers.pathRules: !ruby/object:Overrides::Inspec::PropertyOverride exclude: true + pathMatchers.defaultRouteAction.weightedBackendServices: !ruby/object:Overrides::Inspec::PropertyOverride + exclude: true VpnGateway: !ruby/object:Overrides::Inspec::ResourceOverride exclude: true VpnTunnel: !ruby/object:Overrides::Inspec::ResourceOverride @@ -202,3 +206,7 @@ overrides: !ruby/object:Overrides::ResourceOverrides override_name: "zone_status" id: !ruby/object:Overrides::Inspec::PropertyOverride override_name: "zone_id" + PerInstanceConfig: !ruby/object:Overrides::Inspec::ResourceOverride + exclude: true + RegionPerInstanceConfig: !ruby/object:Overrides::Inspec::ResourceOverride + exclude: true diff --git a/products/compute/terraform.yaml b/products/compute/terraform.yaml index f1d7eeef7186..7eb456a56138 100644 --- a/products/compute/terraform.yaml +++ b/products/compute/terraform.yaml @@ -32,6 +32,13 @@ overrides: !ruby/object:Overrides::ResourceOverrides primary_resource_id: "internal_with_gce_endpoint" vars: address_name: "my-internal-address-" + - !ruby/object:Provider::Terraform::Examples + name: "address_with_shared_loadbalancer_vip" + primary_resource_id: "internal_with_shared_loadbalancer_vip" + min_version: 'beta' + vars: + address_name: "my-internal-address" + skip_docs: true # It is almost identical to internal_with_gce_endpoint # TODO(rileykarson): Remove this example when instance is supported - !ruby/object:Provider::Terraform::Examples name: "instance_with_ip" @@ -107,6 +114,9 @@ overrides: !ruby/object:Overrides::ResourceOverrides name: maxReplicas autoscalingPolicy.coolDownPeriodSec: !ruby/object:Overrides::Terraform::PropertyOverride name: cooldownPeriod + autoscalingPolicy.scaleDownControl: !ruby/object:Overrides::Terraform::PropertyOverride + required: false + default_from_api: true autoscalingPolicy.cpuUtilization: !ruby/object:Overrides::Terraform::PropertyOverride default_from_api: true autoscalingPolicy.cpuUtilization.utilizationTarget: !ruby/object:Overrides::Terraform::PropertyOverride @@ -166,12 +176,6 @@ overrides: !ruby/object:Overrides::ResourceOverrides keyValue: !ruby/object:Overrides::Terraform::PropertyOverride sensitive: true ignore_read: true - docs: !ruby/object:Provider::Terraform::Docs - warning: | - All arguments including the key's value will be stored in the raw - state as plain-text. [Read more about sensitive data in state](/docs/state/sensitive-data.html). - Because the API does not return the sensitive key value, - we cannot confirm or reverse changes to a key outside of Terraform. 
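The defaultRouteAction schema above is dense, so a usage sketch may help before the diff moves on to the provider overrides. In provider versions that include this change, the schema surfaces as a `default_route_action` block on `google_compute_url_map`. The resource and backend service names below are hypothetical, and this is a minimal sketch rather than the generated example code:

```hcl
# Sketch only: assumes backend services "login" and "home" are defined elsewhere.
resource "google_compute_url_map" "urlmap" {
  name = "urlmap"

  default_route_action {
    # Traffic fraction = weight / (sum of all weights); weights must be 0-1000.
    weighted_backend_services {
      backend_service = google_compute_backend_service.login.id
      weight          = 200
    }
    weighted_backend_services {
      backend_service = google_compute_backend_service.home.id
      weight          = 800
    }

    url_rewrite {
      host_rewrite = "internal.example.com" # must be 1-255 characters
    }

    retry_policy {
      num_retries      = 4
      retry_conditions = ["5xx", "gateway-error"]
      per_try_timeout {
        seconds = 30
      }
    }
  }
}
```

Note how the `at_least_one_of`/`exactly_one_of` lists in the schema enforce the API's documented constraints: `default_route_action` and `default_url_redirect` conflict, and `weighted_backend_services` excludes `default_service`.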
BackendService: !ruby/object:Overrides::Terraform::ResourceOverride examples: - !ruby/object:Provider::Terraform::Examples @@ -194,6 +198,12 @@ overrides: !ruby/object:Overrides::ResourceOverrides vars: backend_service_name: "backend-service" health_check_name: "health-check" + - !ruby/object:Provider::Terraform::Examples + name: "backend_service_network_endpoint" + primary_resource_id: "default" + vars: + backend_service_name: "backend-service" + neg_name: "network-endpoint" custom_code: !ruby/object:Provider::Terraform::CustomCode constants: 'templates/terraform/constants/backend_service.go.erb' encoder: 'templates/terraform/encoders/backend_service.go.erb' @@ -263,21 +273,18 @@ overrides: !ruby/object:Overrides::ResourceOverrides health_check_name: "rbs-health-check" - !ruby/object:Provider::Terraform::Examples name: "region_backend_service_ilb_round_robin" - min_version: beta primary_resource_id: "default" vars: region_backend_service_name: "region-service" health_check_name: "rbs-health-check" - !ruby/object:Provider::Terraform::Examples name: "region_backend_service_ilb_ring_hash" - min_version: beta primary_resource_id: "default" vars: region_backend_service_name: "region-service" health_check_name: "rbs-health-check" - !ruby/object:Provider::Terraform::Examples name: "region_backend_service_balancing_mode" - min_version: beta primary_resource_id: "default" vars: region_backend_service_name: "region-service" @@ -287,6 +294,7 @@ overrides: !ruby/object:Overrides::ResourceOverrides custom_code: !ruby/object:Provider::Terraform::CustomCode constants: templates/terraform/constants/region_backend_service.go.erb encoder: templates/terraform/encoders/region_backend_service.go.erb + decoder: templates/terraform/decoders/region_backend_service.go.erb resource_definition: 'templates/terraform/resource_definition/region_backend_service.go.erb' properties: region: !ruby/object:Overrides::Terraform::PropertyOverride @@ -319,6 +327,8 @@ overrides: !ruby/object:Overrides::ResourceOverrides exclude: true protocol: !ruby/object:Overrides::Terraform::PropertyOverride default_from_api: true + portName: !ruby/object:Overrides::Terraform::PropertyOverride + default_from_api: true sessionAffinity: !ruby/object:Overrides::Terraform::PropertyOverride default_from_api: true timeoutSec: !ruby/object:Overrides::Terraform::PropertyOverride @@ -346,18 +356,12 @@ overrides: !ruby/object:Overrides::ResourceOverrides keyValue: !ruby/object:Overrides::Terraform::PropertyOverride sensitive: true ignore_read: true - docs: !ruby/object:Provider::Terraform::Docs - warning: | - All arguments including the key's value will be stored in the raw - state as plain-text. [Read more about sensitive data in state](/docs/state/sensitive-data.html). - Because the API does not return the sensitive key value, - we cannot confirm or reverse changes to a key outside of Terraform. RegionDiskResourcePolicyAttachment: !ruby/object:Overrides::Terraform::ResourceOverride description: | Adds existing resource policies to a disk. You can only add one policy which will be applied to this disk for scheduling snapshot creation. - ~> **Note:** This resource does not support zonal disks (`google_compute_disk`). + ~> **Note:** This resource does not support zonal disks (`google_compute_disk`). 
For zonal disks, please refer to [`google_compute_disk_resource_policy_attachment`](https://www.terraform.io/docs/providers/google/r/compute_disk_resource_policy_attachment.html) examples: - !ruby/object:Provider::Terraform::Examples name: "region_disk_resource_policy_attachment_basic" @@ -381,7 +385,7 @@ overrides: !ruby/object:Overrides::ResourceOverrides Adds existing resource policies to a disk. You can only add one policy which will be applied to this disk for scheduling snapshot creation. - ~> **Note:** This resource does not support regional disks (`google_compute_region_disk`). + ~> **Note:** This resource does not support regional disks (`google_compute_region_disk`). For regional disks, please refer to [`google_compute_region_disk_resource_policy_attachment`](https://www.terraform.io/docs/providers/google/r/compute_region_disk_resource_policy_attachment.html) examples: - !ruby/object:Provider::Terraform::Examples name: "disk_resource_policy_attachment_basic" @@ -445,7 +449,7 @@ overrides: !ruby/object:Overrides::ResourceOverrides `global/images/family/{family}`, `family/{family}`, `{project}/{family}`, `{project}/{image}`, `{family}`, or `{image}`. If referred by family, the images names must include the family name. If they don't, use the - [google_compute_image data source](/docs/providers/google/d/datasource_compute_image.html). + [google_compute_image data source](/docs/providers/google/d/compute_image.html). For instance, the image `centos-6-v20180104` includes its family name `centos-6`. These images can be referred by family name here. diskEncryptionKey.rawKey: !ruby/object:Overrides::Terraform::PropertyOverride @@ -481,17 +485,20 @@ overrides: !ruby/object:Overrides::ResourceOverrides default_from_api: true resourcePolicies: !ruby/object:Overrides::Terraform::PropertyOverride default_from_api: true + description: | + {{description}} + + ~>**NOTE** This value does not support updating the + resource policy, as resource policies can not be updated more than + one at a time. Use + [`google_compute_disk_resource_policy_attachment`](https://www.terraform.io/docs/providers/google/r/compute_disk_resource_policy_attachment.html) + to allow for updating the resource policy attached to the disk. custom_code: !ruby/object:Provider::Terraform::CustomCode pre_delete: templates/terraform/pre_delete/detach_disk.erb constants: templates/terraform/constants/disk.erb encoder: templates/terraform/encoders/disk.erb decoder: templates/terraform/decoders/disk.erb resource_definition: templates/terraform/resource_definition/disk.erb - docs: !ruby/object:Provider::Terraform::Docs - warning: | - All arguments including the disk encryption key will be stored in the raw - state as plain-text. - [Read more about sensitive data in state](/docs/state/sensitive-data.html). examples: - !ruby/object:Provider::Terraform::Examples name: "disk_basic" @@ -501,6 +508,10 @@ overrides: !ruby/object:Overrides::ResourceOverrides DiskType: !ruby/object:Overrides::Terraform::ResourceOverride exclude: true Firewall: !ruby/object:Overrides::Terraform::ResourceOverride + docs: !ruby/object:Provider::Terraform::Docs + optional_properties: |+ + * `enable_logging` - (Optional, Deprecated) This field denotes whether to enable logging for a particular firewall rule. + If logging is enabled, logs will be exported to Stackdriver. 
Deprecated in favor of `log_config` examples: - !ruby/object:Provider::Terraform::Examples name: "firewall_basic" @@ -511,6 +522,7 @@ overrides: !ruby/object:Overrides::ResourceOverrides custom_code: !ruby/object:Provider::Terraform::CustomCode constants: templates/terraform/constants/firewall.erb resource_definition: templates/terraform/resource_definition/firewall.erb + extra_schema_entry: templates/terraform/extra_schema_entry/firewall.erb properties: id: !ruby/object:Overrides::Terraform::PropertyOverride exclude: true @@ -539,9 +551,17 @@ overrides: !ruby/object:Overrides::ResourceOverrides # See terraform issue #2713 for more context. input: true logConfig: !ruby/object:Overrides::Terraform::PropertyOverride - flatten_object: true - logConfig.enableLogging: !ruby/object:Overrides::Terraform::PropertyOverride + description: | + This field denotes the logging options for a particular firewall rule. + If defined, logging is enabled, and logs will be exported to Cloud Logging. send_empty_value: true + custom_expand: 'templates/terraform/custom_expand/firewall_log_config.go.erb' + custom_flatten: 'templates/terraform/custom_flatten/firewall_log_config.go.erb' + diff_suppress_func: 'diffSuppressEnableLogging' + logConfig.enable: !ruby/object:Overrides::Terraform::PropertyOverride + exclude: true + logConfig.metadata: !ruby/object:Overrides::Terraform::PropertyOverride + required: true name: !ruby/object:Overrides::Terraform::PropertyOverride validation: !ruby/object:Provider::Terraform::Validation function: 'validateGCPName' @@ -594,6 +614,7 @@ overrides: !ruby/object:Overrides::ResourceOverrides region_url_map_name: "website-map" region_backend_service_name: "website-backend" region_health_check_name: "website-hc" + rigm_name: "website-rigm" network_name: "website-net" fw_name: "website-fw" custom_code: !ruby/object:Provider::Terraform::CustomCode @@ -631,7 +652,7 @@ overrides: !ruby/object:Overrides::ResourceOverrides internal IP address will be automatically allocated from the IP range of the subnet or network configured for this forwarding rule. - An address must be specified by a literal IP address. ~> **NOTE**: While + An address must be specified by a literal IP address. 
~> **NOTE:** While the API allows you to specify various resource paths for an address resource instead, Terraform requires this to specifically be an IP address to avoid needing to fetch the IP address from resource paths on refresh @@ -956,6 +977,8 @@ overrides: !ruby/object:Overrides::ResourceOverrides - !ruby/object:Provider::Terraform::Examples name: "instance_group_named_port_gke" primary_resource_id: "my_port" + # Multiple fine-grained resources + skip_vcr: true vars: network_name: "container-network" subnetwork_name: "container-subnetwork" @@ -1001,12 +1024,23 @@ overrides: !ruby/object:Overrides::ResourceOverrides default_from_api: true candidateSubnets: !ruby/object:Overrides::Terraform::PropertyOverride ignore_read: true + edgeAvailabilityDomain: !ruby/object:Overrides::Terraform::PropertyOverride + default_from_api: true custom_code: !ruby/object:Provider::Terraform::CustomCode constants: templates/terraform/constants/interconnect_attachment.go.erb post_create: templates/terraform/post_create/interconnect_attachment.go.erb pre_delete: templates/terraform/pre_delete/interconnect_attachment.go.erb License: !ruby/object:Overrides::Terraform::ResourceOverride exclude: true + MachineImage: !ruby/object:Overrides::Terraform::ResourceOverride + examples: + - !ruby/object:Provider::Terraform::Examples + name: "machine_image_basic" + primary_resource_id: "image" + vars: + vm_name: "vm" + image_name: "image" + MachineType: !ruby/object:Overrides::Terraform::ResourceOverride exclude: true Network: !ruby/object:Overrides::Terraform::ResourceOverride @@ -1017,8 +1051,9 @@ overrides: !ruby/object:Overrides::ResourceOverrides vars: network_name: "vpc-network" virtual_fields: - - !ruby/object:Provider::Terraform::VirtualFields + - !ruby/object:Api::Type::Boolean name: 'delete_default_routes_on_create' + default_value: false description: | If set to `true`, default routes (`0.0.0.0/0`) will be deleted immediately after network creation. Defaults to `false`. @@ -1176,6 +1211,10 @@ overrides: !ruby/object:Overrides::ResourceOverrides properties: network: !ruby/object:Overrides::Terraform::PropertyOverride custom_flatten: 'templates/terraform/custom_flatten/name_from_self_link.erb' + importCustomRoutes: !ruby/object:Overrides::Terraform::PropertyOverride + send_empty_value: true + exportCustomRoutes: !ruby/object:Overrides::Terraform::PropertyOverride + send_empty_value: true custom_code: !ruby/object:Provider::Terraform::CustomCode custom_delete: 'templates/terraform/custom_delete/skip_delete.go.erb' encoder: 'templates/terraform/encoders/network_peering_routes_config.go.erb' @@ -1272,6 +1311,140 @@ overrides: !ruby/object:Overrides::ResourceOverrides The Region in which the created address should reside. If it is not provided, the provider region is used.
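Stepping back from the diff for a moment: the Firewall overrides a few hunks above replace the old boolean `enable_logging` field with a `log_config` block whose `metadata` value is required whenever the block is set (the old flag survives only as a deprecated extra schema entry). A minimal sketch of the resulting user-facing surface, with a hypothetical network name:

```hcl
resource "google_compute_network" "net" {
  name = "test-network"
}

resource "google_compute_firewall" "fw" {
  name    = "test-firewall"
  network = google_compute_network.net.name

  allow {
    protocol = "icmp"
  }

  # Presence of the block enables logging; metadata is required by the
  # override above so an empty block can never be sent.
  log_config {
    metadata = "INCLUDE_ALL_METADATA"
  }
}
```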
+ PerInstanceConfig: !ruby/object:Overrides::Terraform::ResourceOverride + id_format: "{{project}}/{{zone}}/{{instance_group_manager}}/{{name}}" + mutex: instanceGroupManager/{{project}}/{{zone}}/{{instance_group_manager}} + # Fine-grained resources don't actually exist as standalone GCP resources + # in Cloud Asset Inventory + exclude_validator: true + examples: + - !ruby/object:Provider::Terraform::Examples + name: "stateful_igm" + primary_resource_id: "stateful-instance" + # Fine-grained resources need different autogenerated tests, as + # we need to check destroy during a test step where the parent resource + # still exists, rather than during CheckDestroy (when read returns + # nothing because the parent resource has then also been destroyed) + skip_test: true + vars: + template_name: "my-template" + igm_name: "my-igm" + properties: + preservedState.disk: !ruby/object:Overrides::Terraform::PropertyOverride + is_set: true + custom_flatten: templates/terraform/custom_flatten/preserved_state_disks.go.erb + custom_expand: templates/terraform/custom_expand/preserved_state_disks.go.erb + virtual_fields: + - !ruby/object:Api::Type::Enum + name: 'minimal_action' + description: | + The minimal action to perform on the instance during an update. + Default is `NONE`. Possible values are: + * REPLACE + * RESTART + * REFRESH + * NONE + values: + - :REPLACE + - :RESTART + - :REFRESH + - :NONE + default_value: :NONE + - !ruby/object:Api::Type::Enum + name: 'most_disruptive_allowed_action' + description: | + The most disruptive action to perform on the instance during an update. + Default is `REPLACE`. Possible values are: + * REPLACE + * RESTART + * REFRESH + * NONE + values: + - :REPLACE + - :RESTART + - :REFRESH + - :NONE + default_value: :REPLACE + - !ruby/object:Api::Type::Boolean + name: 'remove_instance_state_on_destroy' + description: | + When true, deleting this config will immediately remove any specified state from the underlying instance. + When false, deleting this config will *not* immediately remove any state from the underlying instance. + State will be removed on the next instance recreation or update.
+ default_value: false + custom_code: !ruby/object:Provider::Terraform::CustomCode + encoder: templates/terraform/encoders/compute_per_instance_config.go.erb + update_encoder: templates/terraform/update_encoder/compute_per_instance_config.go.erb + pre_delete: templates/terraform/pre_delete/compute_per_instance_config.go.erb + post_update: templates/terraform/post_update/compute_per_instance_config.go.erb + custom_delete: templates/terraform/custom_delete/per_instance_config.go.erb + RegionPerInstanceConfig: !ruby/object:Overrides::Terraform::ResourceOverride + id_format: "{{project}}/{{region}}/{{region_instance_group_manager}}/{{name}}" + mutex: instanceGroupManager/{{project}}/{{region}}/{{region_instance_group_manager}} + # Fine-grained resources don't actually exist as standalone GCP resources + # in Cloud Asset Inventory + exclude_validator: true + examples: + - !ruby/object:Provider::Terraform::Examples + name: "stateful_rigm" + primary_resource_id: "stateful-instance" + # Fine-grained resources need different autogenerated tests, as + # we need to check destroy during a test step where the parent resource + # still exists, rather than during CheckDestroy (when read returns + # nothing because the parent resource has then also been destroyed) + skip_test: true + vars: + template_name: "my-template" + igm_name: "my-rigm" + properties: + preservedState.disk: !ruby/object:Overrides::Terraform::PropertyOverride + is_set: true + custom_flatten: templates/terraform/custom_flatten/preserved_state_disks.go.erb + custom_expand: templates/terraform/custom_expand/preserved_state_disks.go.erb + virtual_fields: + - !ruby/object:Api::Type::Enum + name: 'minimal_action' + description: | + The minimal action to perform on the instance during an update. + Default is `NONE`. Possible values are: + * REPLACE + * RESTART + * REFRESH + * NONE + values: + - :REPLACE + - :RESTART + - :REFRESH + - :NONE + default_value: :NONE + - !ruby/object:Api::Type::Enum + name: 'most_disruptive_allowed_action' + description: | + The most disruptive action to perform on the instance during an update. + Default is `REPLACE`. Possible values are: + * REPLACE + * RESTART + * REFRESH + * NONE + values: + - :REPLACE + - :RESTART + - :REFRESH + - :NONE + default_value: :REPLACE + - !ruby/object:Api::Type::Boolean + name: 'remove_instance_state_on_destroy' + description: | + When true, deleting this config will immediately remove any specified state from the underlying instance. + When false, deleting this config will *not* immediately remove any state from the underlying instance. + State will be removed on the next instance recreation or update.
+ default_value: false + custom_code: !ruby/object:Provider::Terraform::CustomCode + encoder: templates/terraform/encoders/compute_per_instance_config.go.erb + update_encoder: templates/terraform/update_encoder/compute_per_instance_config.go.erb + pre_delete: templates/terraform/pre_delete/compute_per_instance_config.go.erb + post_update: templates/terraform/post_update/compute_region_per_instance_config.go.erb + custom_delete: templates/terraform/custom_delete/region_per_instance_config.go.erb ProjectInfo: !ruby/object:Overrides::Terraform::ResourceOverride exclude: true Region: !ruby/object:Overrides::Terraform::ResourceOverride @@ -1355,16 +1528,13 @@ overrides: !ruby/object:Overrides::ResourceOverrides custom_flatten: 'templates/terraform/custom_flatten/name_from_self_link.erb' physicalBlockSizeBytes: !ruby/object:Overrides::Terraform::PropertyOverride default_from_api: true + diskEncryptionKey.rawKey: !ruby/object:Overrides::Terraform::PropertyOverride + sensitive: true custom_code: !ruby/object:Provider::Terraform::CustomCode pre_delete: templates/terraform/pre_delete/detach_disk.erb encoder: templates/terraform/encoders/disk.erb decoder: templates/terraform/decoders/disk.erb resource_definition: templates/terraform/resource_definition/disk.erb - docs: !ruby/object:Provider::Terraform::Docs - warning: | - All arguments including the disk encryption key will be stored in the raw - state as plain-text. - [Read more about sensitive data in state](/docs/state/sensitive-data.html). examples: - !ruby/object:Provider::Terraform::Examples name: "region_disk_basic" @@ -1460,7 +1630,6 @@ overrides: !ruby/object:Overrides::ResourceOverrides sslHealthCheck: !ruby/object:Overrides::Terraform::PropertyOverride diff_suppress_func: 'portDiffSuppress' RegionUrlMap: !ruby/object:Overrides::Terraform::ResourceOverride - min_version: 'beta' examples: - !ruby/object:Provider::Terraform::Examples name: "region_url_map_basic" @@ -1521,6 +1690,15 @@ overrides: !ruby/object:Overrides::ResourceOverrides is_set: true tests: !ruby/object:Overrides::Terraform::PropertyOverride name: "test" + pathMatchers.defaultUrlRedirect.stripQuery: !ruby/object:Overrides::Terraform::PropertyOverride + required: true + description: '{{description}} This field is required to ensure an empty block is not set. The normal default value is false.' + defaultUrlRedirect.stripQuery: !ruby/object:Overrides::Terraform::PropertyOverride + required: true + description: '{{description}} This field is required to ensure an empty block is not set. The normal default value is false.' + pathMatchers.pathRules.urlRedirect.stripQuery: !ruby/object:Overrides::Terraform::PropertyOverride + required: true + description: '{{description}} This field is required to ensure an empty block is not set. The normal default value is false.' 
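The PerInstanceConfig and RegionPerInstanceConfig overrides above describe fine-grained resources that attach per-instance state to an instance group manager. The real generated examples live in the `stateful_igm`/`stateful_rigm` templates referenced above; the following is only a rough sketch of the zonal variant, with hypothetical IGM and instance names, showing how the three virtual fields sit alongside the API fields:

```hcl
# Sketch only: assumes google_compute_instance_group_manager.igm exists.
resource "google_compute_per_instance_config" "stateful_instance" {
  zone                   = google_compute_instance_group_manager.igm.zone
  instance_group_manager = google_compute_instance_group_manager.igm.name
  name                   = "instance-1"

  # Virtual fields from the override: Terraform-only knobs controlling how
  # the managed instance is updated and what happens on destroy.
  minimal_action                   = "NONE"
  most_disruptive_allowed_action   = "REPLACE"
  remove_instance_state_on_destroy = false

  preserved_state {
    metadata = {
      foo = "bar"
    }
  }
}
```

The `mutex` entries above serialize all per-instance-config operations against a single instance group manager, which is why both resources key the lock on the IGM rather than on the config itself.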
ResourcePolicy: !ruby/object:Overrides::Terraform::ResourceOverride examples: - !ruby/object:Provider::Terraform::Examples @@ -1533,6 +1711,11 @@ overrides: !ruby/object:Overrides::ResourceOverrides primary_resource_id: "bar" vars: name: "policy" + - !ruby/object:Provider::Terraform::Examples + name: "resource_policy_placement_policy" + primary_resource_id: "baz" + vars: + name: "policy" properties: region: !ruby/object:Overrides::Terraform::PropertyOverride required: false @@ -1565,6 +1748,9 @@ overrides: !ruby/object:Overrides::ResourceOverrides validation: !ruby/object:Provider::Terraform::Validation function: 'validation.IntAtLeast(1)' Route: !ruby/object:Overrides::Terraform::ResourceOverride + # Route cannot be added while a peering is in progress on the network + mutex: 'projects/{{project}}/global/networks/{{network}}/peerings' + error_retry_predicates: ["isPeeringOperationInProgress"] examples: - !ruby/object:Provider::Terraform::Examples name: "route_basic" @@ -1726,6 +1912,8 @@ overrides: !ruby/object:Overrides::ResourceOverrides router_name: "my-router" peer_name: "my-router-peer" properties: + advertiseMode: !ruby/object:Overrides::Terraform::PropertyOverride + custom_flatten: 'templates/terraform/custom_flatten/default_if_empty.erb' name: !ruby/object:Overrides::Terraform::PropertyOverride validation: !ruby/object:Provider::Terraform::Validation function: 'validateRFC1035Name(2, 63)' @@ -1738,6 +1926,8 @@ overrides: !ruby/object:Overrides::ResourceOverrides description: | {{description}} If it is not provided, the provider region is used. + SecurityPolicy: !ruby/object:Overrides::Terraform::ResourceOverride + exclude: true Snapshot: !ruby/object:Overrides::Terraform::ResourceOverride timeouts: !ruby/object:Api::Timeouts insert_minutes: 5 @@ -1814,6 +2004,10 @@ overrides: !ruby/object:Overrides::ResourceOverrides dns_zone_name: "dnszone" forwarding_rule_name: "forwarding-rule" http_health_check_name: "http-health-check" + - !ruby/object:Provider::Terraform::Examples + name: "managed_ssl_certificate_recreation" + primary_resource_id: "default" + min_version: beta description: | {{description}} For a resource where you provide the key, see the @@ -1838,14 +2032,20 @@ overrides: !ruby/object:Overrides::ResourceOverrides - !ruby/object:Provider::Terraform::Examples name: "ssl_certificate_basic" primary_resource_id: "default" + # Uses resource.UniqueId + skip_vcr: true ignore_read_extra: - "name_prefix" - !ruby/object:Provider::Terraform::Examples name: "ssl_certificate_random_provider" primary_resource_id: "default" + # Uses resource.UniqueId + skip_vcr: true - !ruby/object:Provider::Terraform::Examples name: "ssl_certificate_target_https_proxies" primary_resource_id: "default" + # Uses resource.UniqueId + skip_vcr: true vars: target_https_proxy_name: "test-proxy" url_map_name: "url-map" @@ -1883,15 +2083,20 @@ overrides: !ruby/object:Overrides::ResourceOverrides - !ruby/object:Provider::Terraform::Examples name: "region_ssl_certificate_basic" primary_resource_id: "default" + # Uses resource.UniqueId + skip_vcr: true ignore_read_extra: - "name_prefix" - !ruby/object:Provider::Terraform::Examples name: "region_ssl_certificate_random_provider" primary_resource_id: "default" + # Uses resource.UniqueId + skip_vcr: true - !ruby/object:Provider::Terraform::Examples name: "region_ssl_certificate_target_https_proxies" - min_version: "beta" primary_resource_id: "default" + # Uses resource.UniqueId + skip_vcr: true vars: region_target_https_proxy_name: "test-proxy" region_url_map_name:
"url-map" @@ -1968,10 +2173,8 @@ overrides: !ruby/object:Overrides::ResourceOverrides See the [official documentation](https://cloud.google.com/compute/docs/load-balancing/ssl-policies#profilefeaturesupport) for information on what cipher suites each profile provides. If `CUSTOM` is used, the `custom_features` attribute **must be set**. - Default is `COMPATIBLE`. minTlsVersion: !ruby/object:Overrides::Terraform::PropertyOverride default_value: :TLS_1_0 - description : '{{description}} Default is `TLS_1_0`.' warnings: !ruby/object:Overrides::Terraform::PropertyOverride exclude: true Subnetwork: !ruby/object:Overrides::Terraform::ResourceOverride @@ -2021,7 +2224,6 @@ overrides: !ruby/object:Overrides::ResourceOverrides function: 'validateIpCidrRange' region: !ruby/object:Overrides::Terraform::PropertyOverride required: false - input: false default_from_api: true custom_flatten: 'templates/terraform/custom_flatten/name_from_self_link.erb' purpose: !ruby/object:Overrides::Terraform::PropertyOverride @@ -2061,6 +2263,12 @@ overrides: !ruby/object:Overrides::ResourceOverrides url_map_name: "url-map" backend_service_name: "backend-service" http_health_check_name: "http-health-check" + - !ruby/object:Provider::Terraform::Examples + name: "target_http_proxy_https_redirect" + primary_resource_id: "default" + vars: + target_http_proxy_name: "test-https-redirect-proxy" + url_map_name: "url-map" properties: id: !ruby/object:Overrides::Terraform::PropertyOverride name: proxyId @@ -2082,7 +2290,6 @@ overrides: !ruby/object:Overrides::ResourceOverrides default_value: :NONE custom_flatten: 'templates/terraform/custom_flatten/default_if_empty.erb' RegionTargetHttpProxy: !ruby/object:Overrides::Terraform::ResourceOverride - min_version: 'beta' examples: - !ruby/object:Provider::Terraform::Examples name: "region_target_http_proxy_basic" @@ -2092,6 +2299,12 @@ overrides: !ruby/object:Overrides::ResourceOverrides region_url_map_name: "url-map" region_backend_service_name: "backend-service" region_health_check_name: "http-health-check" + - !ruby/object:Provider::Terraform::Examples + name: "region_target_http_proxy_https_redirect" + primary_resource_id: "default" + vars: + region_target_http_proxy_name: "test-https-redirect-proxy" + region_url_map_name: "url-map" properties: region: !ruby/object:Overrides::Terraform::PropertyOverride default_from_api: true @@ -2103,7 +2316,6 @@ overrides: !ruby/object:Overrides::ResourceOverrides id: !ruby/object:Overrides::Terraform::PropertyOverride name: proxyId RegionTargetHttpsProxy: !ruby/object:Overrides::Terraform::ResourceOverride - min_version: 'beta' examples: - !ruby/object:Provider::Terraform::Examples name: "region_target_https_proxy_basic" @@ -2299,6 +2511,24 @@ overrides: !ruby/object:Overrides::ResourceOverrides url_map_name: "urlmap" home_backend_service_name: "home" health_check_name: "health-check" + - !ruby/object:Provider::Terraform::Examples + name: "url_map_header_based_routing" + primary_resource_id: "urlmap" + vars: + url_map_name: "urlmap" + default_backend_service_name: "default" + service_a_backend_service_name: "service-a" + service_b_backend_service_name: "service-b" + health_check_name: "health-check" + - !ruby/object:Provider::Terraform::Examples + name: "url_map_parameter_based_routing" + primary_resource_id: "urlmap" + vars: + url_map_name: "urlmap" + default_backend_service_name: "default" + service_a_backend_service_name: "service-a" + service_b_backend_service_name: "service-b" + health_check_name: "health-check" properties: id: 
!ruby/object:Overrides::Terraform::PropertyOverride name: "map_id" @@ -2335,6 +2565,49 @@ overrides: !ruby/object:Overrides::ResourceOverrides description: The backend service or backend bucket link that should be matched by this test. tests: !ruby/object:Overrides::Terraform::PropertyOverride name: "test" + pathMatchers.defaultUrlRedirect.stripQuery: !ruby/object:Overrides::Terraform::PropertyOverride + required: true + description: '{{description}} This field is required to ensure an empty block is not set. The normal default value is false.' + defaultUrlRedirect.stripQuery: !ruby/object:Overrides::Terraform::PropertyOverride + required: true + description: '{{description}} This field is required to ensure an empty block is not set. The normal default value is false.' + pathMatchers.pathRules.urlRedirect.stripQuery: !ruby/object:Overrides::Terraform::PropertyOverride + required: true + description: '{{description}} This field is required to ensure an empty block is not set. The normal default value is false.' + pathMatchers.defaultRouteAction.weightedBackendServices.weight: !ruby/object:Overrides::Terraform::PropertyOverride + validation: !ruby/object:Provider::Terraform::Validation + function: 'validation.IntBetween(0, 1000)' + pathMatchers.defaultRouteAction.timeout: !ruby/object:Overrides::Terraform::PropertyOverride + default_from_api: true + pathMatchers.defaultRouteAction.retryPolicy.numRetries: !ruby/object:Overrides::Terraform::PropertyOverride + validation: !ruby/object:Provider::Terraform::Validation + function: 'validation.IntAtLeast(1)' + pathMatchers.defaultRouteAction.faultInjectionPolicy.delay.percentage: !ruby/object:Overrides::Terraform::PropertyOverride + validation: !ruby/object:Provider::Terraform::Validation + function: 'validation.FloatBetween(0, 100)' + pathMatchers.defaultRouteAction.faultInjectionPolicy.abort.httpStatus: !ruby/object:Overrides::Terraform::PropertyOverride + validation: !ruby/object:Provider::Terraform::Validation + function: 'validation.IntBetween(200, 599)' + pathMatchers.defaultRouteAction.faultInjectionPolicy.abort.percentage: !ruby/object:Overrides::Terraform::PropertyOverride + validation: !ruby/object:Provider::Terraform::Validation + function: 'validation.FloatBetween(0, 100)' + defaultRouteAction.weightedBackendServices.weight: !ruby/object:Overrides::Terraform::PropertyOverride + validation: !ruby/object:Provider::Terraform::Validation + function: 'validation.IntBetween(0, 1000)' + defaultRouteAction.timeout: !ruby/object:Overrides::Terraform::PropertyOverride + default_from_api: true + defaultRouteAction.retryPolicy.numRetries: !ruby/object:Overrides::Terraform::PropertyOverride + validation: !ruby/object:Provider::Terraform::Validation + function: 'validation.IntAtLeast(1)' + defaultRouteAction.faultInjectionPolicy.delay.percentage: !ruby/object:Overrides::Terraform::PropertyOverride + validation: !ruby/object:Provider::Terraform::Validation + function: 'validation.FloatBetween(0, 100)' + defaultRouteAction.faultInjectionPolicy.abort.httpStatus: !ruby/object:Overrides::Terraform::PropertyOverride + validation: !ruby/object:Provider::Terraform::Validation + function: 'validation.IntBetween(200, 599)' + defaultRouteAction.faultInjectionPolicy.abort.percentage: !ruby/object:Overrides::Terraform::PropertyOverride + validation: !ruby/object:Provider::Terraform::Validation + function: 'validation.FloatBetween(0, 100)' VpnTunnel: !ruby/object:Overrides::Terraform::ResourceOverride examples: - !ruby/object:Provider::Terraform::Examples @@ 
-2362,11 +2635,6 @@ overrides: !ruby/object:Overrides::ResourceOverrides udp500_forwarding_rule_name: "fr-udp500" udp4500_forwarding_rule_name: "fr-udp4500" route_name: "route1" - docs: !ruby/object:Provider::Terraform::Docs - warning: | - All arguments including the shared secret will be stored in the raw - state as plain-text. - [Read more about sensitive data in state](/docs/state/sensitive-data.html). properties: targetVpnGateway: !ruby/object:Overrides::Terraform::PropertyOverride resource: 'VpnGateway' @@ -2415,7 +2683,6 @@ overrides: !ruby/object:Overrides::ResourceOverrides post_create: templates/terraform/post_create/labels.erb Zone: !ruby/object:Overrides::Terraform::ResourceOverride exclude: true - # This is for copying files over files: !ruby/object:Provider::Config::Files # These files have templating (ERB) code that will be run. diff --git a/products/container/ansible.yaml b/products/container/ansible.yaml index eaaf7ca05386..c95ed2d2fe5c 100644 --- a/products/container/ansible.yaml +++ b/products/container/ansible.yaml @@ -17,6 +17,8 @@ datasources: !ruby/object:Overrides::ResourceOverrides properties: location: !ruby/object:Overrides::Ansible::PropertyOverride aliases: ["region", "zone"] + initialClusterVersion: !ruby/object:Overrides::Ansible::PropertyOverride + aliases: ["cluster_version"] kubectlPath: !ruby/object:Overrides::Ansible::PropertyOverride exclude: true kubectlContext: !ruby/object:Overrides::Ansible::PropertyOverride diff --git a/products/container/ansible_version_added.yaml b/products/container/ansible_version_added.yaml index 156dac302c40..79b1c9fb10c4 100644 --- a/products/container/ansible_version_added.yaml +++ b/products/container/ansible_version_added.yaml @@ -53,6 +53,12 @@ :version_added: '2.9' :effect: :version_added: '2.9' + :shieldedInstanceConfig: + :version_added: '2.10' + :enableSecureBoot: + :version_added: '2.10' + :enableIntegrityMonitoring: + :version_added: '2.10' :masterAuth: :version_added: '2.6' :username: @@ -79,6 +85,8 @@ :version_added: '2.8' :clusterIpv4Cidr: :version_added: '2.6' + :enableTpu: + :version_added: '2.9' :addonsConfig: :version_added: '2.6' :httpLoadBalancing: @@ -133,10 +141,10 @@ :version_added: '2.9' :tpuIpv4CidrBlock: :version_added: '2.9' - :enableTpu: - :version_added: '2.9' - :tpuIpv4CidrBlock: - :version_added: '2.9' + :tpuIpv4CidrBlock: + :version_added: '2.9' + :initialClusterVersion: + :version_added: '2.10' :masterAuthorizedNetworksConfig: :version_added: '2.10' :enabled: @@ -147,6 +155,14 @@ :version_added: '2.10' :cidrBlock: :version_added: '2.10' + :binaryAuthorization: + :version_added: '2.10' + :enabled: + :version_added: '2.10' + :shieldedNodes: + :version_added: '2.10' + :enabled: + :version_added: '2.10' :location: :version_added: '2.8' :kubectlPath: @@ -197,6 +213,12 @@ :version_added: '2.9' :effect: :version_added: '2.9' + :shieldedInstanceConfig: + :version_added: '2.10' + :enableSecureBoot: + :version_added: '2.10' + :enableIntegrityMonitoring: + :version_added: '2.10' :initialNodeCount: :version_added: '2.6' :version: diff --git a/products/container/api.yaml b/products/container/api.yaml index 3aae7b0ffb9e..6ceea5747630 100644 --- a/products/container/api.yaml +++ b/products/container/api.yaml @@ -257,6 +257,27 @@ objects: - "NO_SCHEDULE" - "PREFER_NO_SCHEDULE" - "NO_EXECUTE" + - !ruby/object:Api::Type::NestedObject + name: 'shieldedInstanceConfig' + description: 'Shielded Instance options.' 
+ properties: + - !ruby/object:Api::Type::Boolean + name: 'enableSecureBoot' + description: | + Defines whether the instance has Secure Boot enabled. + + Secure Boot helps ensure that the system only runs authentic software by + verifying the digital signature of all boot components, and halting the + boot process if signature verification fails. + - !ruby/object:Api::Type::Boolean + name: 'enableIntegrityMonitoring' + description: | + Defines whether the instance has integrity monitoring enabled. + + Enables monitoring and attestation of the boot integrity of the instance. + The attestation is performed against the integrity policy baseline. This + baseline is initially derived from the implicitly trusted boot image when + the instance is created. - !ruby/object:Api::Type::NestedObject name: 'masterAuth' description: | @@ -576,7 +597,6 @@ objects: The software version of the master endpoint and kubelets used in the cluster when it was first created. The version can be upgraded over time. - output: true - !ruby/object:Api::Type::String name: 'currentMasterVersion' description: 'The current software version of the master endpoint.' @@ -688,6 +708,29 @@ objects: - !ruby/object:Api::Type::Boolean name: 'enabled' description: If enabled, all container images will be validated by Binary Authorization. + - !ruby/object:Api::Type::NestedObject + min_version: beta + name: 'releaseChannel' + description: | + ReleaseChannel indicates which release channel a cluster is subscribed to. + Release channels are arranged in order of risk and frequency of updates. + properties: + - !ruby/object:Api::Type::Enum + name: 'channel' + description: 'Which release channel the cluster is subscribed to.' + values: + - UNSPECIFIED + - RAPID + - REGULAR + - STABLE + - !ruby/object:Api::Type::NestedObject + name: 'shieldedNodes' + description: 'Shielded Nodes configuration.' + properties: + - !ruby/object:Api::Type::Boolean + name: 'enabled' + description: | + Whether Shielded Nodes features are enabled on all nodes in this + cluster. - !ruby/object:Api::Resource name: 'NodePool' base_url: projects/{{project}}/locations/{{location}}/clusters/{{cluster}}/nodePools @@ -853,6 +896,27 @@ objects: - !ruby/object:Api::Type::String name: 'effect' description: Effect for taint + - !ruby/object:Api::Type::NestedObject + name: 'shieldedInstanceConfig' + description: 'Shielded Instance options.' + properties: + - !ruby/object:Api::Type::Boolean + name: 'enableSecureBoot' + description: | + Defines whether the instance has Secure Boot enabled. + + Secure Boot helps ensure that the system only runs authentic software by + verifying the digital signature of all boot components, and halting the + boot process if signature verification fails. + - !ruby/object:Api::Type::Boolean + name: 'enableIntegrityMonitoring' + description: | + Defines whether the instance has integrity monitoring enabled. + + Enables monitoring and attestation of the boot integrity of the instance. + The attestation is performed against the integrity policy baseline. This + baseline is initially derived from the implicitly trusted boot image when + the instance is created.
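Since shieldedInstanceConfig is added here to both the cluster's nodeConfig and the NodePool config, a brief sketch of the end-user surface may help. In Terraform's google_container_cluster this maps to a nested block; the cluster name and location below are hypothetical:

```hcl
resource "google_container_cluster" "primary" {
  name               = "my-cluster"
  location           = "us-central1-a"
  initial_node_count = 1

  node_config {
    # Mirrors the enableSecureBoot / enableIntegrityMonitoring booleans above;
    # on GCE, integrity monitoring typically defaults on and Secure Boot off.
    shielded_instance_config {
      enable_secure_boot          = true
      enable_integrity_monitoring = true
    }
  }
}
```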
- !ruby/object:Api::Type::Integer name: 'initialNodeCount' description: | diff --git a/products/containeranalysis/api.yaml b/products/containeranalysis/api.yaml index 6a765cbe1c38..ff8d62a82f3c 100644 --- a/products/containeranalysis/api.yaml +++ b/products/containeranalysis/api.yaml @@ -29,11 +29,14 @@ objects: base_url: projects/{{project}}/notes?noteId={{name}} self_link: projects/{{project}}/notes/{{name}} update_verb: :PATCH + update_mask: true description: | - Provides a detailed description of a Note. + A Container Analysis note is a high-level piece of metadata that + describes a type of analysis that can be done for a resource. references: !ruby/object:Api::Resource::ReferenceLinks guides: 'Official Documentation': 'https://cloud.google.com/container-analysis/' + 'Creating Attestations (Occurrences)': 'https://cloud.google.com/binary-authorization/docs/making-attestations' api: 'https://cloud.google.com/container-analysis/api/reference/rest/' properties: - !ruby/object:Api::Type::String @@ -42,6 +45,61 @@ objects: The name of the note. required: true input: true + - !ruby/object:Api::Type::String + name: shortDescription + description: | + A one sentence description of the note. + - !ruby/object:Api::Type::String + name: longDescription + description: | + A detailed description of the note + - !ruby/object:Api::Type::Enum + name: 'kind' + description: | + The type of analysis this note describes + values: + - NOTE_KIND_UNSPECIFIED + - VULNERABILITY + - BUILD + - IMAGE + - PACKAGE + - DEPLOYMENT + - DISCOVERY + - ATTESTATION + - UPGRADE + output: true + - !ruby/object:Api::Type::Array + name: relatedUrl + description: | + URLs associated with this note and related metadata. + item_type: !ruby/object:Api::Type::NestedObject + properties: + - !ruby/object:Api::Type::String + name: url + description: | + Specific URL associated with the resource. + required: true + - !ruby/object:Api::Type::String + name: label + description: | + Label to describe usage of the URL + - !ruby/object:Api::Type::Time + name: expirationTime + description: | + Time of expiration for this note. Leave empty if note does not expire. + - !ruby/object:Api::Type::Time + name: createTime + description: The time this note was created. + output: true + - !ruby/object:Api::Type::Time + name: updateTime + description: The time this note was last updated. + output: true + - !ruby/object:Api::Type::Array + name: relatedNoteNames + description: | + Names of other notes related to this note. + item_type: Api::Type::String - !ruby/object:Api::Type::NestedObject name: attestationAuthority description: | @@ -75,3 +133,113 @@ objects: The human readable name of this Attestation Authority, for example "qa". required: true + + - !ruby/object:Api::Resource + name: 'Occurrence' + base_url: projects/{{project}}/occurrences + self_link: projects/{{project}}/occurrences/{{name}} + update_verb: :PATCH + update_mask: true + description: | + An occurrence is an instance of a Note, or type of analysis that + can be done for a resource. + references: !ruby/object:Api::Resource::ReferenceLinks + guides: + 'Official Documentation': 'https://cloud.google.com/container-analysis/' + api: 'https://cloud.google.com/container-analysis/api/reference/rest/' + properties: + - !ruby/object:Api::Type::String + name: name + description: | + The name of the occurrence. + output: true + - !ruby/object:Api::Type::String + name: resourceUri + description: | + Required. Immutable. A URI that represents the resource for which + the occurrence applies. 
For example,
+        https://gcr.io/project/image@sha256:123abc for a Docker image.
+      required: true
+      input: true
+    - !ruby/object:Api::Type::String
+      name: noteName
+      description: |
+        The analysis note associated with this occurrence, in the form of
+        projects/[PROJECT]/notes/[NOTE_ID]. This field can be used as a
+        filter in list requests.
+      required: true
+      input: true
+    - !ruby/object:Api::Type::String
+      name: kind
+      description: |
+        The note kind which explicitly denotes which of the occurrence
+        details are specified. This field can be used as a filter in list
+        requests.
+      output: true
+    - !ruby/object:Api::Type::String
+      name: remediation
+      description: |
+        A description of actions that can be taken to remedy the note.
+    - !ruby/object:Api::Type::Time
+      name: createTime
+      description: The time when the occurrence was created.
+      output: true
+    - !ruby/object:Api::Type::Time
+      name: updateTime
+      description: The time when the occurrence was last updated.
+      output: true
+    - !ruby/object:Api::Type::NestedObject
+      name: attestation
+      description: |
+        Occurrence that represents a single "attestation". The authenticity
+        of an attestation can be verified using the attached signature.
+        If the verifier trusts the public key of the signer, then verifying
+        the signature is sufficient to establish trust. In this circumstance,
+        the authority to which this attestation is attached is primarily
+        useful for lookup (how to find this attestation if you already
+        know the authority and artifact to be verified) and intent (for
+        which authority this attestation was intended to sign).
+      required: true
+      properties:
+        - !ruby/object:Api::Type::String
+          name: serializedPayload
+          description: |
+            The serialized payload that is verified by one or
+            more signatures. A base64-encoded string.
+          required: true
+        - !ruby/object:Api::Type::Array
+          name: signatures
+          description: |
+            One or more signatures over serializedPayload.
+            Verifier implementations should consider this attestation
+            message verified if at least one signature verifies
+            serializedPayload. See Signature in common.proto for more
+            details on signature structure and verification.
+          required: true
+          item_type: !ruby/object:Api::Type::NestedObject
+            properties:
+              - !ruby/object:Api::Type::String
+                name: signature
+                description: |
+                  The content of the signature, an opaque bytestring.
+                  The payload that this signature verifies MUST be
+                  unambiguously provided with the Signature during
+                  verification. A wrapper message might provide the
+                  payload explicitly. Alternatively, a message might
+                  have a canonical serialization that can always be
+                  unambiguously computed to derive the payload.
+              - !ruby/object:Api::Type::String
+                name: publicKeyId
+                required: true
+                description: |
+                  The identifier for the public key that verifies this
+                  signature. MUST be an RFC3986 conformant
+                  URI. * When possible, the key id should be an
+                  immutable reference, such as a cryptographic digest.
+                  Examples of valid values:
+
+                  * OpenPGP V4 public key fingerprint. See https://www.iana.org/assignments/uri-schemes/prov/openpgp4fpr
+                    for more details on this scheme.
+ * `openpgp4fpr:74FAF3B861BDA0870C7B6DEF607E48D2A663AEEA` + * RFC6920 digest-named SubjectPublicKeyInfo (digest of the DER serialization): + * "ni:///sha-256;cD9o9Cq6LG3jD0iKXqEi_vdjJGecm_iXkbqVoScViaU" diff --git a/products/containeranalysis/terraform.yaml b/products/containeranalysis/terraform.yaml index 99a3579f59f6..3a87f106bd92 100644 --- a/products/containeranalysis/terraform.yaml +++ b/products/containeranalysis/terraform.yaml @@ -14,10 +14,10 @@ --- !ruby/object:Provider::Terraform::Config overrides: !ruby/object:Overrides::ResourceOverrides Note: !ruby/object:Overrides::Terraform::ResourceOverride + mutex: "projects/{{project}}/notes/{{name}}" id_format: "projects/{{project}}/notes/{{name}}" import_format: ["projects/{{project}}/notes/{{name}}"] custom_code: !ruby/object:Provider::Terraform::CustomCode - pre_update: 'templates/terraform/pre_update/containeranalysis_note.erb' encoder: templates/terraform/encoders/containeranalysis_attestation_field_name.go.erb decoder: templates/terraform/decoders/containeranalysis_attestation_field_name.go.erb examples: @@ -25,10 +25,42 @@ overrides: !ruby/object:Overrides::ResourceOverrides name: "container_analysis_note_basic" primary_resource_id: "note" vars: - note_name: "test-attestor-note" + note_name: "attestor-note" + - !ruby/object:Provider::Terraform::Examples + name: "container_analysis_note_attestation_full" + primary_resource_id: "note" + vars: + note_name: "attestor-note" properties: name: !ruby/object:Overrides::Terraform::PropertyOverride custom_flatten: 'templates/terraform/custom_flatten/name_from_self_link.erb' + relatedUrl: !ruby/object:Overrides::Terraform::PropertyOverride + is_set: true + relatedNoteNames: !ruby/object:Overrides::Terraform::PropertyOverride + is_set: true + Occurrence: !ruby/object:Overrides::Terraform::ResourceOverride + # "projects/{{project}}/notes/{{name}}" + mutex: "{{note_name}}" + id_format: "projects/{{project}}/occurrences/{{name}}" + import_format: ["projects/{{project}}/occurrences/{{name}}"] + examples: + - !ruby/object:Provider::Terraform::Examples + name: "container_analysis_occurrence_kms" + # Occurrence requires custom logic for signing payloads. + skip_test: true + primary_resource_id: "occurrence" + vars: + note_name: "attestation-note" + attestor: "attestor" + custom_code: !ruby/object:Provider::Terraform::CustomCode + encoder: templates/terraform/encoders/containeranalysis_occurrence.go.erb + update_encoder: templates/terraform/update_encoder/containeranalysis_occurrence.go.erb + decoder: templates/terraform/decoders/containeranalysis_occurrence.go.erb + properties: + name: !ruby/object:Overrides::Terraform::PropertyOverride + custom_flatten: templates/terraform/custom_flatten/name_from_self_link.erb + attestation.signatures: !ruby/object:Overrides::Terraform::PropertyOverride + is_set: true # This is for copying files over files: !ruby/object:Provider::Config::Files diff --git a/products/datacatalog/api.yaml b/products/datacatalog/api.yaml new file mode 100644 index 000000000000..25476010e224 --- /dev/null +++ b/products/datacatalog/api.yaml @@ -0,0 +1,488 @@ +# Copyright 2020 Google Inc. +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +--- !ruby/object:Api::Product +name: DataCatalog +versions: + - !ruby/object:Api::Product::Version + name: ga + base_url: https://datacatalog.googleapis.com/v1/ +scopes: + - https://www.googleapis.com/auth/cloud-platform +apis_required: + - !ruby/object:Api::Product::ApiReference + name: Google Cloud Data Catalog API + url: https://console.cloud.google.com/apis/library/datacatalog.googleapis.com +objects: + - !ruby/object:Api::Resource + name: EntryGroup + base_url: projects/{{project}}/locations/{{region}}/entryGroups + create_url: projects/{{project}}/locations/{{region}}/entryGroups?entryGroupId={{entry_group_id}} + self_link: "{{name}}" + update_verb: :PATCH + update_mask: true + description: | + An EntryGroup resource represents a logical grouping of zero or more Data Catalog Entry resources. + references: !ruby/object:Api::Resource::ReferenceLinks + guides: + 'Official Documentation': https://cloud.google.com/data-catalog/docs + api: https://cloud.google.com/data-catalog/docs/reference/rest/v1/projects.locations.entryGroups + iam_policy: !ruby/object:Api::Resource::IamPolicy + method_name_separator: ':' + fetch_iam_policy_verb: :POST + parent_resource_attribute: 'entry_group' + import_format: ["projects/{{project}}/locations/{{region}}/entryGroups/{{entry_group}}", "{{entry_group}}"] + base_url: projects/{{project}}/locations/{{region}}/entryGroups/{{entry_group}} + parameters: + - !ruby/object:Api::Type::String + name: region + url_param_only: true + input: true + description: | + EntryGroup location region. + - !ruby/object:Api::Type::String + name: entryGroupId + required: true + url_param_only: true + input: true + description: | + The id of the entry group to create. The id must begin with a letter or underscore, + contain only English letters, numbers and underscores, and be at most 64 characters. + properties: + - !ruby/object:Api::Type::String + name: name + description: | + The resource name of the entry group in URL format. Example: projects/{project}/locations/{location}/entryGroups/{entryGroupId} + output: true + - !ruby/object:Api::Type::String + name: displayName + description: | + A short name to identify the entry group, for example, "analytics data - jan 2011". + - !ruby/object:Api::Type::String + name: description + description: | + Entry group description, which can consist of several sentences or paragraphs that describe entry group contents. + - !ruby/object:Api::Resource + name: Entry + base_url: '{{entry_group}}/entries' + create_url: '{{entry_group}}/entries?entryId={{entry_id}}' + self_link: "{{name}}" + update_verb: :PATCH + update_mask: true + description: | + Entry Metadata. A Data Catalog Entry resource represents another resource in Google Cloud Platform + (such as a BigQuery dataset or a Pub/Sub topic) or outside of Google Cloud Platform. Clients can use + the linkedResource field in the Entry resource to refer to the original resource ID of the source system. + + An Entry resource contains resource details, such as its schema. An Entry can also be used to attach + flexible metadata, such as a Tag. 
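The EntryGroup resource defined above generates a Terraform resource roughly like the following. This is a minimal sketch, assuming the generated name `google_data_catalog_entry_group` and the usual snake_case argument mapping; the example values are illustrative only:

```hcl
# Sketch only: a basic Data Catalog entry group derived from the schema above.
resource "google_data_catalog_entry_group" "example" {
  # Must begin with a letter or underscore; letters, numbers, and
  # underscores only; at most 64 characters (per entryGroupId above).
  entry_group_id = "my_group"
  region         = "us-central1"

  display_name = "analytics data - jan 2011"
  description  = "Entry group holding analytics fileset entries"
}
```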
+ references: !ruby/object:Api::Resource::ReferenceLinks + guides: + 'Official Documentation': https://cloud.google.com/data-catalog/docs + api: https://cloud.google.com/data-catalog/docs/reference/rest/v1/projects.locations.entryGroups.entries + parameters: + - !ruby/object:Api::Type::String + name: entryGroup + required: true + url_param_only: true + input: true + description: | + The name of the entry group this entry is in. + - !ruby/object:Api::Type::String + name: entryId + required: true + url_param_only: true + input: true + description: | + The id of the entry to create. + properties: + - !ruby/object:Api::Type::String + name: name + description: | + The Data Catalog resource name of the entry in URL format. + Example: projects/{project_id}/locations/{location}/entryGroups/{entryGroupId}/entries/{entryId}. + Note that this Entry and its child resources may not actually be stored in the location in this name. + output: true + - !ruby/object:Api::Type::String + name: linkedResource + description: | + The resource this metadata entry refers to. + For Google Cloud Platform resources, linkedResource is the full name of the resource. + For example, the linkedResource for a table resource from BigQuery is: + //bigquery.googleapis.com/projects/projectId/datasets/datasetId/tables/tableId + Output only when Entry is of type in the EntryType enum. For entries with userSpecifiedType, + this field is optional and defaults to an empty string. + - !ruby/object:Api::Type::String + name: displayName + description: | + Display information such as title and description. A short name to identify the entry, + for example, "Analytics Data - Jan 2011". + - !ruby/object:Api::Type::String + name: description + description: | + Entry description, which can consist of several sentences or paragraphs that describe entry contents. + - !ruby/object:Api::Type::String + # This is a string instead of a NestedObject because schemas contain ColumnSchemas, which can contain nested ColumnSchemas. + # We'll have people provide the json blob for the schema instead. + name: schema + description: | + Schema of the entry (e.g. BigQuery, GoogleSQL, Avro schema), as a json string. An entry might not have any schema + attached to it. See + https://cloud.google.com/data-catalog/docs/reference/rest/v1/projects.locations.entryGroups.entries#schema + for what fields this schema can contain. + - !ruby/object:Api::Type::Enum + name: type + description: | + The type of the entry. Only used for Entries with types in the EntryType enum. + Currently, only FILESET enum value is allowed. All other entries created through Data Catalog must use userSpecifiedType. + values: + - :FILESET + input: true + exactly_one_of: + - type + - user_specified_type + - !ruby/object:Api::Type::String + name: userSpecifiedType + description: | + Entry type if it does not fit any of the input-allowed values listed in EntryType enum above. + When creating an entry, users should check the enum values first, if nothing matches the entry + to be created, then provide a custom value, for example "my_special_type". + userSpecifiedType strings must begin with a letter or underscore and can only contain letters, + numbers, and underscores; are case insensitive; must be at least 1 character and at most 64 characters long. + exactly_one_of: + - type + - user_specified_type + - !ruby/object:Api::Type::String + name: integratedSystem + description: | + This field indicates the entry's source system that Data Catalog integrates with, such as BigQuery or Pub/Sub. 
+ output: true + - !ruby/object:Api::Type::String + name: userSpecifiedSystem + description: | + This field indicates the entry's source system that Data Catalog does not integrate with. + userSpecifiedSystem strings must begin with a letter or underscore and can only contain letters, numbers, + and underscores; are case insensitive; must be at least 1 character and at most 64 characters long. + - !ruby/object:Api::Type::NestedObject + name: gcsFilesetSpec + description: | + Specification that applies to a Cloud Storage fileset. This is only valid on entries of type FILESET. + properties: + - !ruby/object:Api::Type::Array + name: filePatterns + description: | + Patterns to identify a set of files in Google Cloud Storage. + See [Cloud Storage documentation](https://cloud.google.com/storage/docs/gsutil/addlhelp/WildcardNames) + for more information. Note that bucket wildcards are currently not supported. Examples of valid filePatterns: + + * gs://bucket_name/dir/*: matches all files within bucket_name/dir directory. + * gs://bucket_name/dir/**: matches all files in bucket_name/dir spanning all subdirectories. + * gs://bucket_name/file*: matches files prefixed by file in bucket_name + * gs://bucket_name/??.txt: matches files with two characters followed by .txt in bucket_name + * gs://bucket_name/[aeiou].txt: matches files that contain a single vowel character followed by .txt in bucket_name + * gs://bucket_name/[a-m].txt: matches files that contain a, b, ... or m followed by .txt in bucket_name + * gs://bucket_name/a/*/b: matches all files in bucket_name that match a/*/b pattern, such as a/c/b, a/d/b + * gs://another_bucket/a.txt: matches gs://another_bucket/a.txt + required: true + item_type: Api::Type::String + - !ruby/object:Api::Type::Array + name: sampleGcsFileSpecs + description: | + Sample files contained in this fileset, not all files contained in this fileset are represented here. + output: true + item_type: !ruby/object:Api::Type::NestedObject + properties: + - !ruby/object:Api::Type::String + name: filePath + description: | + The full file path + output: true + - !ruby/object:Api::Type::Integer + name: sizeBytes + description: | + The size of the file, in bytes. + output: true + - !ruby/object:Api::Type::NestedObject + name: bigqueryTableSpec + description: | + Specification that applies to a BigQuery table. This is only valid on entries of type TABLE. + output: true + properties: + - !ruby/object:Api::Type::String + name: tableSourceType + description: | + The table source type. + output: true + - !ruby/object:Api::Type::NestedObject + name: viewSpec + description: | + Table view specification. This field should only be populated if tableSourceType is BIGQUERY_VIEW. + output: true + properties: + - !ruby/object:Api::Type::String + name: viewQuery + description: | + The query that defines the table view. + output: true + - !ruby/object:Api::Type::NestedObject + name: tableSpec + description: | + Spec of a BigQuery table. This field should only be populated if tableSourceType is BIGQUERY_TABLE. + output: true + properties: + - !ruby/object:Api::Type::String + name: groupedEntry + description: | + If the table is a dated shard, i.e., with name pattern [prefix]YYYYMMDD, groupedEntry is the + Data Catalog resource name of the date sharded grouped entry, for example, + projects/{project_id}/locations/{location}/entrygroups/{entryGroupId}/entries/{entryId}. + Otherwise, groupedEntry is empty. 
+ output: true + - !ruby/object:Api::Type::NestedObject + name: bigqueryDateShardedSpec + description: | + Specification for a group of BigQuery tables with name pattern [prefix]YYYYMMDD. + Context: https://cloud.google.com/bigquery/docs/partitioned-tables#partitioning_versus_sharding. + output: true + properties: + - !ruby/object:Api::Type::String + name: dataset + description: | + The Data Catalog resource name of the dataset entry the current table belongs to, for example, + projects/{project_id}/locations/{location}/entrygroups/{entryGroupId}/entries/{entryId} + output: true + - !ruby/object:Api::Type::String + name: tablePrefix + description: | + The table name prefix of the shards. The name of any given shard is [tablePrefix]YYYYMMDD, + for example, for shard MyTable20180101, the tablePrefix is MyTable. + output: true + - !ruby/object:Api::Type::Integer + name: shardCount + description: | + Total number of shards. + output: true + - !ruby/object:Api::Resource + name: TagTemplate + base_url: projects/{{project}}/locations/{{region}}/tagTemplates + self_link: "{{name}}" + create_url: projects/{{project}}/locations/{{region}}/tagTemplates?tagTemplateId={{tag_template_id}} + delete_url: "{{name}}?force={{force_delete}}" + update_verb: :PATCH + update_mask: true + description: | + A tag template defines a tag, which can have one or more typed fields. + The template is used to create and attach the tag to GCP resources. + references: !ruby/object:Api::Resource::ReferenceLinks + guides: + 'Official Documentation': https://cloud.google.com/data-catalog/docs + api: https://cloud.google.com/data-catalog/docs/reference/rest/v1/projects.locations.tagTemplates + parameters: + - !ruby/object:Api::Type::String + name: region + url_param_only: true + input: true + description: | + Template location region. + - !ruby/object:Api::Type::String + name: tagTemplateId + required: true + url_param_only: true + input: true + description: | + The id of the tag template to create. + - !ruby/object:Api::Type::Boolean + name: forceDelete + url_param_only: true + description: | + This confirms the deletion of any possible tags using this template. Must be set to true in order to delete the tag template. + properties: + - !ruby/object:Api::Type::String + name: name + description: | + The resource name of the tag template in URL format. Example: projects/{project_id}/locations/{location}/tagTemplates/{tagTemplateId} + output: true + - !ruby/object:Api::Type::String + name: displayName + description: | + The display name for this template. + - !ruby/object:Api::Type::Map + name: fields + description: | + Map of tag template field IDs to the settings for the field. This map is an exhaustive list of the allowed fields. This map must contain at least one field and at most 500 fields. + required: true + input: true # TODO(danawillow): update logic + key_name: field_id + value_type: !ruby/object:Api::Type::NestedObject + name: field + properties: + - !ruby/object:Api::Type::String + name: name + description: | + The resource name of the tag template field in URL format. Example: projects/{project_id}/locations/{location}/tagTemplates/{tagTemplateId}/fields/{field} + output: true + - !ruby/object:Api::Type::String + name: displayName + description: | + The display name for this field. + - !ruby/object:Api::Type::NestedObject + name: type + description: | + The type of value this tag field can contain. 
+            required: true
+            properties:
+              - !ruby/object:Api::Type::Enum
+                name: primitiveType
+                description: |
+                  Represents primitive types - string, bool etc.
+                values:
+                  - :DOUBLE
+                  - :STRING
+                  - :BOOL
+                  - :TIMESTAMP
+              - !ruby/object:Api::Type::NestedObject
+                name: enumType
+                description: |
+                  Represents an enum type.
+                properties:
+                  - !ruby/object:Api::Type::Array
+                    name: allowedValues
+                    description: |
+                      The set of allowed values for this enum. The display names of the
+                      values must be case-insensitively unique within this set. Currently,
+                      enum values can only be added to the list of allowed values. Deletion
+                      and renaming of enum values are not supported.
+                      Can have up to 500 allowed values.
+                    required: true
+                    item_type: !ruby/object:Api::Type::NestedObject
+                      properties:
+                        - !ruby/object:Api::Type::String
+                          name: displayName
+                          description: |
+                            The display name of the enum value.
+                          required: true
+          - !ruby/object:Api::Type::Boolean
+            name: isRequired
+            description: |
+              Whether this is a required field. Defaults to false.
+          - !ruby/object:Api::Type::Integer
+            name: order
+            description: |
+              The order of this field with respect to other fields in this tag template.
+              A higher value indicates a more important field. The value can be negative.
+              Multiple fields can have the same order, and field orders within a tag do not have to be sequential.
+  - !ruby/object:Api::Resource
+    name: Tag
+    base_url: '{{parent}}/tags'
+    update_url: '{{name}}'
+    update_verb: :PATCH
+    update_mask: true
+    self_link: '{{parent}}/tags'
+    delete_url: '{{name}}'
+    nested_query: !ruby/object:Api::Resource::NestedQuery
+      keys:
+        - tags
+    description: |
+      Tags are used to attach custom metadata to Data Catalog resources. Tags conform to the specifications within their tag template.
+
+      See [Data Catalog IAM](https://cloud.google.com/data-catalog/docs/concepts/iam) for information on the permissions needed to create or view tags.
+    references: !ruby/object:Api::Resource::ReferenceLinks
+      guides:
+        'Official Documentation': https://cloud.google.com/data-catalog/docs
+      api: https://cloud.google.com/data-catalog/docs/reference/rest/v1/projects.locations.entryGroups.tags
+    parameters:
+      - !ruby/object:Api::Type::String
+        name: parent
+        url_param_only: true
+        description: |
+          The name of the parent this tag is attached to. This can be the name of an entry or an entry group. If an entry group, the tag will be attached to
+          all entries in that group.
+    properties:
+      - !ruby/object:Api::Type::String
+        name: name
+        description: |
+          The resource name of the tag in URL format. Example:
+          projects/{project_id}/locations/{location}/entrygroups/{entryGroupId}/entries/{entryId}/tags/{tag_id} or
+          projects/{project_id}/locations/{location}/entrygroups/{entryGroupId}/tags/{tag_id}
+          where tag_id is a system-generated identifier. Note that this Tag may not actually be stored in the location in this name.
+        output: true
+      - !ruby/object:Api::Type::String
+        name: template
+        description: |
+          The resource name of the tag template that this tag uses. Example:
+          projects/{project_id}/locations/{location}/tagTemplates/{tagTemplateId}
+          This field cannot be modified after creation.
+        required: true
+        input: true
+      - !ruby/object:Api::Type::String
+        name: templateDisplayName
+        description: |
+          The display name of the tag template.
+        output: true
+      - !ruby/object:Api::Type::Map
+        name: fields
+        description: |
+          This maps the ID of a tag field to the value of, and additional information about, that field.
+ Valid field IDs are defined by the tag's template. A tag must have at least 1 field and at most 500 fields. + required: true + key_name: field_name + value_type: !ruby/object:Api::Type::NestedObject + name: field_value + properties: + - !ruby/object:Api::Type::String + name: display_name + description: | + The display name of this field + output: true + - !ruby/object:Api::Type::Integer + name: order + description: | + The order of this field with respect to other fields in this tag. For example, a higher value can indicate + a more important field. The value can be negative. Multiple fields can have the same order, and field orders + within a tag do not have to be sequential. + output: true + - !ruby/object:Api::Type::Double + name: doubleValue + description: | + Holds the value for a tag field with double type. + - !ruby/object:Api::Type::String + name: stringValue + description: | + Holds the value for a tag field with string type. + - !ruby/object:Api::Type::Boolean + name: boolValue + description: | + Holds the value for a tag field with boolean type. + - !ruby/object:Api::Type::String + name: timestampValue + description: | + Holds the value for a tag field with timestamp type. + - !ruby/object:Api::Type::NestedObject + name: enumValue + description: | + Holds the value for a tag field with enum type. This value must be one of the allowed values in the definition of this enum. + properties: + - !ruby/object:Api::Type::String + name: displayName + description: | + The display name of the enum value. + required: true + - !ruby/object:Api::Type::String + name: column + description: | + Resources like Entry can have schemas associated with them. This scope allows users to attach tags to an + individual column based on that schema. + + For attaching a tag to a nested column, use `.` to separate the column names. Example: + `outer_column.inner_column` diff --git a/products/datacatalog/terraform.yaml b/products/datacatalog/terraform.yaml new file mode 100644 index 000000000000..4aadf710af56 --- /dev/null +++ b/products/datacatalog/terraform.yaml @@ -0,0 +1,179 @@ +# Copyright 2020 Google Inc. +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +--- !ruby/object:Provider::Terraform::Config +overrides: !ruby/object:Overrides::ResourceOverrides + EntryGroup: !ruby/object:Overrides::Terraform::ResourceOverride + import_format: ["{{name}}"] + examples: + - !ruby/object:Provider::Terraform::Examples + name: "data_catalog_entry_group_basic" + primary_resource_id: "basic_entry_group" + primary_resource_name: "fmt.Sprintf(\"tf_test_my_group%s\", context[\"random_suffix\"])" + vars: + entry_group_id: "my_group" + - !ruby/object:Provider::Terraform::Examples + name: "data_catalog_entry_group_full" + primary_resource_id: "basic_entry_group" + primary_resource_name: "fmt.Sprintf(\"tf_test_my_group%s\", context[\"random_suffix\"])" + vars: + entry_group_id: "my_group" + properties: + entryGroupId: !ruby/object:Overrides::Terraform::PropertyOverride + validation: !ruby/object:Provider::Terraform::Validation + regex: '^[A-z_][A-z0-9_]{0,63}$' + region: !ruby/object:Overrides::Terraform::PropertyOverride + ignore_read: true + required: false + default_from_api: true + custom_code: !ruby/object:Provider::Terraform::CustomCode + custom_import: templates/terraform/custom_import/data_catalog_entry_group.go.erb + Entry: !ruby/object:Overrides::Terraform::ResourceOverride + import_format: ["{{name}}"] + supports_indirect_user_project_override: true + examples: + - !ruby/object:Provider::Terraform::Examples + name: "data_catalog_entry_basic" + primary_resource_id: "basic_entry" + vars: + entry_id: "my_entry" + entry_group_id: "my_group" + - !ruby/object:Provider::Terraform::Examples + name: "data_catalog_entry_fileset" + primary_resource_id: "basic_entry" + vars: + entry_id: "my_entry" + entry_group_id: "my_group" + - !ruby/object:Provider::Terraform::Examples + name: "data_catalog_entry_full" + primary_resource_id: "basic_entry" + vars: + entry_id: "my_entry" + entry_group_id: "my_group" + properties: + linkedResource: !ruby/object:Overrides::Terraform::PropertyOverride + default_from_api: true + schema: !ruby/object:Overrides::Terraform::PropertyOverride + custom_expand: 'templates/terraform/custom_expand/json_schema.erb' + custom_flatten: 'templates/terraform/custom_flatten/json_schema.erb' + state_func: 'func(v interface{}) string { s, _ := structure.NormalizeJsonString(v); return s }' + validation: !ruby/object:Provider::Terraform::Validation + function: 'validation.ValidateJsonString' + userSpecifiedSystem: !ruby/object:Overrides::Terraform::PropertyOverride + validation: !ruby/object:Provider::Terraform::Validation + regex: '^[A-z_][A-z0-9_]{0,63}$' + userSpecifiedType: !ruby/object:Overrides::Terraform::PropertyOverride + validation: !ruby/object:Provider::Terraform::Validation + regex: '^[A-z_][A-z0-9_]{0,63}$' + custom_code: !ruby/object:Provider::Terraform::CustomCode + custom_import: templates/terraform/custom_import/data_catalog_entry.go.erb + TagTemplate: !ruby/object:Overrides::Terraform::ResourceOverride + import_format: ["{{name}}"] + skip_sweeper: true # no list endpoint plus variables in delete URL + examples: + - !ruby/object:Provider::Terraform::Examples + name: "data_catalog_tag_template_basic" + primary_resource_id: "basic_tag_template" + vars: + tag_template_id: "my_template" + force_delete: "false" + test_vars_overrides: + force_delete: "true" + oics_vars_overrides: + force_delete: "true" + properties: + tagTemplateId: !ruby/object:Overrides::Terraform::PropertyOverride + validation: !ruby/object:Provider::Terraform::Validation + regex: '^[a-z_][a-z0-9_]{0,63}$' + fields: !ruby/object:Overrides::Terraform::PropertyOverride + 
description: | + Set of tag template field IDs and the settings for the field. This set is an exhaustive list of the allowed fields. This set must contain at least one field and at most 500 fields. + fields.type.enumType: !ruby/object:Overrides::Terraform::PropertyOverride + description: | + {{description}} Exactly one of `primitive_type` or `enum_type` must be set + fields.type.enumType.allowedValues: !ruby/object:Overrides::Terraform::PropertyOverride + is_set: true + fields.type.primitiveType: !ruby/object:Overrides::Terraform::PropertyOverride + description: | + {{description}} Exactly one of `primitive_type` or `enum_type` must be set + region: !ruby/object:Overrides::Terraform::PropertyOverride + ignore_read: true + required: false + default_from_api: true + custom_code: !ruby/object:Provider::Terraform::CustomCode + custom_import: templates/terraform/custom_import/data_catalog_tag_template.go.erb + Tag: !ruby/object:Overrides::Terraform::ResourceOverride + import_format: ["{{name}}"] + id_format: "{{name}}" + examples: + - !ruby/object:Provider::Terraform::Examples + name: "data_catalog_entry_tag_basic" + primary_resource_id: "basic_tag" + vars: + entry_group_id: "my_entry_group" + entry_id: "my_entry" + tag_template_id: "my_template" + force_delete: "false" + test_vars_overrides: + force_delete: "true" + oics_vars_overrides: + force_delete: "true" + - !ruby/object:Provider::Terraform::Examples + name: "data_catalog_entry_group_tag" + primary_resource_id: "entry_group_tag" + vars: + entry_group_id: "my_entry_group" + first_entry: "first_entry" + second_entry: "second_entry" + tag_template_id: "my_template" + force_delete: "false" + test_vars_overrides: + force_delete: "true" + oics_vars_overrides: + force_delete: "true" + - !ruby/object:Provider::Terraform::Examples + name: "data_catalog_entry_tag_full" + primary_resource_id: "basic_tag" + vars: + entry_group_id: "my_entry_group" + entry_id: "my_entry" + tag_template_id: "my_template" + force_delete: "false" + test_vars_overrides: + force_delete: "true" + oics_vars_overrides: + force_delete: "true" + properties: + # Changing the name here so when mm generates methods like `flattenDataCatalogTagTemplateDisplayName` + # this doesn't conflict with tag template's display name methods + templateDisplayName: !ruby/object:Overrides::Terraform::PropertyOverride + name: template_displayname + fields.enumValue: !ruby/object:Overrides::Terraform::PropertyOverride + flatten_object: true + # because `fields` is a set, the current generated expand code can't properly retrieve + # enum_value by d.Get("enum_value") without knowing the set id, however, the value + # `v` is the correct value and passed to expand, so this custom expand will use + # that as the correct `enum_value.display_name` value + custom_expand: templates/terraform/custom_expand/data_catalog_tag.go.erb + custom_flatten: templates/terraform/custom_flatten/data_catalog_tag.go.erb + fields.enumValue.displayName: !ruby/object:Overrides::Terraform::PropertyOverride + name: enum_value + required: false + custom_code: !ruby/object:Provider::Terraform::CustomCode + custom_import: templates/terraform/custom_import/data_catalog_tag.go.erb +# This is for copying files over +files: !ruby/object:Provider::Config::Files + # These files have templating (ERB) code that will be run. + # This is usually to add licensing info, autogeneration notices, etc. 
+ compile: +<%= lines(indent(compile('provider/terraform/product~compile.yaml'), 4)) -%> \ No newline at end of file diff --git a/products/datafusion/api.yaml b/products/datafusion/api.yaml index 3954a63b0a77..023f1d127ee1 100644 --- a/products/datafusion/api.yaml +++ b/products/datafusion/api.yaml @@ -147,7 +147,7 @@ objects: Endpoint on which the Data Fusion UI and REST APIs are accessible. - !ruby/object:Api::Type::String name: 'version' - output: true + input: true description: | Current version of the Data Fusion. - !ruby/object:Api::Type::String diff --git a/products/datafusion/terraform.yaml b/products/datafusion/terraform.yaml index 262fbaa838fc..2dbaabf7dfab 100644 --- a/products/datafusion/terraform.yaml +++ b/products/datafusion/terraform.yaml @@ -37,6 +37,8 @@ overrides: !ruby/object:Overrides::ResourceOverrides ignore_read: true required: false default_from_api: true + version: !ruby/object:Overrides::Terraform::PropertyOverride + default_from_api: true name: !ruby/object:Overrides::Terraform::PropertyOverride custom_expand: 'templates/terraform/custom_expand/shortname_to_url.go.erb' custom_flatten: 'templates/terraform/custom_flatten/name_from_self_link.erb' diff --git a/products/dataproc/api.yaml b/products/dataproc/api.yaml index f0532959a3f9..d633bceda4ad 100644 --- a/products/dataproc/api.yaml +++ b/products/dataproc/api.yaml @@ -13,7 +13,6 @@ --- !ruby/object:Api::Product name: Dataproc -display_name: Cloud Dataproc versions: - !ruby/object:Api::Product::Version name: ga @@ -32,6 +31,7 @@ objects: name: 'AutoscalingPolicy' base_url: "projects/{{project}}/locations/{{location}}/autoscalingPolicies" self_link: "projects/{{project}}/locations/{{location}}/autoscalingPolicies/{{id}}" + collection_url_key: 'policies' description: | Describes an autoscaling policy for Dataproc cluster autoscaler. parameters: @@ -486,10 +486,13 @@ objects: description: | The set of optional components to activate on the cluster. - Possible values include: COMPONENT_UNSPECIFIED, ANACONDA, HIVE_WEBHCAT, JUPYTER, ZEPPELIN + Possible values include: COMPONENT_UNSPECIFIED, ANACONDA, HIVE_WEBHCAT, JUPYTER, ZEPPELIN, HBASE, SOLR, and RANGER values: - :COMPONENT_UNSPECIFIED - :ANACONDA + - :HBASE + - :RANGER + - :SOLR - :HIVE_WEBHCAT - :JUPYTER - :ZEPPELIN diff --git a/products/datastore/api.yaml b/products/datastore/api.yaml index fb6fac991e8b..b041e7a372b0 100644 --- a/products/datastore/api.yaml +++ b/products/datastore/api.yaml @@ -13,7 +13,6 @@ --- !ruby/object:Api::Product name: Datastore -display_name: Cloud Datastore versions: - !ruby/object:Api::Product::Version name: ga @@ -74,8 +73,7 @@ objects: - :NONE - :ALL_ANCESTORS description: | - Policy for including ancestors in the index. Either `ALL_ANCESTORS` or `NONE`, - the default is `NONE`. + Policy for including ancestors in the index. - !ruby/object:Api::Type::Array name: 'properties' description: | @@ -95,4 +93,4 @@ objects: - :ASCENDING - :DESCENDING description: | - The direction the index should optimize for sorting. Possible values are ASCENDING and DESCENDING. + The direction the index should optimize for sorting. 
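The Datastore Index description changes above trim redundant value lists from the generated docs; the resource itself is `google_datastore_index`. A minimal sketch of a composite index using the `ancestor` and `properties` fields described above (argument names assumed from the schema, values illustrative):

```hcl
# Sketch only: a composite Datastore index over two properties of kind "foo".
resource "google_datastore_index" "example" {
  kind     = "foo"
  ancestor = "NONE" # Policy for including ancestors: NONE or ALL_ANCESTORS.

  properties {
    name      = "property_a"
    direction = "ASCENDING"
  }

  properties {
    name      = "property_b"
    direction = "DESCENDING"
  }
}
```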
diff --git a/products/datastore/terraform.yaml b/products/datastore/terraform.yaml index 2760b70867bd..e29556779c6b 100644 --- a/products/datastore/terraform.yaml +++ b/products/datastore/terraform.yaml @@ -16,6 +16,7 @@ overrides: !ruby/object:Overrides::ResourceOverrides Index: !ruby/object:Overrides::Terraform::ResourceOverride id_format: "projects/{{project}}/indexes/{{index_id}}" self_link: "projects/{{project}}/indexes/{{index_id}}" + error_retry_predicates: ["datastoreIndex409Contention"] autogen_async: true # TODO(ndmckinley): This resource doesn't have a name, so the current # sweeper won't ever sweep it - might as well not have one for now, diff --git a/products/deploymentmanager/api.yaml b/products/deploymentmanager/api.yaml index dccd1c7c2760..42bdfb144370 100644 --- a/products/deploymentmanager/api.yaml +++ b/products/deploymentmanager/api.yaml @@ -13,7 +13,7 @@ --- !ruby/object:Api::Product name: DeploymentManager -display_name: Deployment Manager +display_name: Cloud Deployment Manager versions: - !ruby/object:Api::Product::Version name: ga diff --git a/products/deploymentmanager/terraform.yaml b/products/deploymentmanager/terraform.yaml index 4edaa5818ae7..683d885c1e61 100644 --- a/products/deploymentmanager/terraform.yaml +++ b/products/deploymentmanager/terraform.yaml @@ -53,7 +53,7 @@ overrides: !ruby/object:Overrides::ResourceOverrides properties: preview: !ruby/object:Overrides::Terraform::PropertyOverride description: | - {{description}} ~>**NOTE**: Deployment Manager does not allow update + {{description}} ~>**NOTE:** Deployment Manager does not allow update of a deployment in preview (unless updating to preview=false). Thus, Terraform will force-recreate deployments if either preview is updated to true or if other fields are updated while preview is true. diff --git a/products/dialogflow/api.yaml b/products/dialogflow/api.yaml index c750e9d91b5b..8ca33a365c98 100644 --- a/products/dialogflow/api.yaml +++ b/products/dialogflow/api.yaml @@ -60,7 +60,6 @@ objects: The list of all languages supported by this agent (except for the defaultLanguageCode). - !ruby/object:Api::Type::String name: 'timeZone' - input: true description: | The time zone of this agent from the [time zone database](https://www.iana.org/time-zones), e.g., America/New_York, Europe/Paris. @@ -253,4 +252,70 @@ objects: name: 'parentFollowupIntentName' description: | The unique identifier of the followup intent's parent. - Format: projects//agent/intents/. \ No newline at end of file + Format: projects//agent/intents/. + - !ruby/object:Api::Resource + name: 'EntityType' + base_url: "projects/{{project}}/agent/entityTypes/" + self_link: "{{name}}" + update_verb: :PATCH + description: | + Represents an entity type. Entity types serve as a tool for extracting parameter values from natural language queries. + references: !ruby/object:Api::Resource::ReferenceLinks + guides: + 'Official Documentation': + 'https://cloud.google.com/dialogflow/docs/' + api: 'https://cloud.google.com/dialogflow/docs/reference/rest/v2/projects.agent.entityTypes' + properties: + - !ruby/object:Api::Type::String + name: 'name' + output: true + description: | + The unique identifier of the entity type. + Format: projects//agent/entityTypes/. + - !ruby/object:Api::Type::String + name: 'displayName' + required: true + description: | + The name of this entity type to be displayed on the console. + - !ruby/object:Api::Type::Enum + name: 'kind' + required: true + description: | + Indicates the kind of entity type. 
+ * KIND_MAP: Map entity types allow mapping of a group of synonyms to a reference value. + * KIND_LIST: List entity types contain a set of entries that do not map to reference values. However, list entity + types can contain references to other entity types (with or without aliases). + * KIND_REGEXP: Regexp entity types allow to specify regular expressions in entries values. + values: + - :KIND_MAP + - :KIND_LIST + - :KIND_REGEXP + - !ruby/object:Api::Type::Boolean + name: 'enableFuzzyExtraction' + description: | + Enables fuzzy entity extraction during classification. + - !ruby/object:Api::Type::Array + name: 'entities' + description: | + The collection of entity entries associated with the entity type. + item_type: !ruby/object:Api::Type::NestedObject + properties: + - !ruby/object:Api::Type::String + name: 'value' + required: true + description: | + The primary value associated with this entity entry. For example, if the entity type is vegetable, the value + could be scallions. + For KIND_MAP entity types: + * A reference value to be used in place of synonyms. + For KIND_LIST entity types: + * A string that can contain references to other entity types (with or without aliases). + - !ruby/object:Api::Type::Array + name: 'synonyms' + required: true + item_type: Api::Type::String + description: | + A collection of value synonyms. For example, if the entity type is vegetable, and value is scallions, a synonym + could be green onions. + For KIND_LIST entity types: + * This collection must contain exactly one synonym equal to value. \ No newline at end of file diff --git a/products/dialogflow/terraform.yaml b/products/dialogflow/terraform.yaml index 694b21f5cdd7..d9ee953383e2 100644 --- a/products/dialogflow/terraform.yaml +++ b/products/dialogflow/terraform.yaml @@ -79,6 +79,21 @@ overrides: !ruby/object:Overrides::ResourceOverrides custom_code: !ruby/object:Provider::Terraform::CustomCode custom_import: templates/terraform/custom_import/self_link_as_name_set_project.go.erb post_create: 'templates/terraform/post_create/set_computed_name.erb' + EntityType: !ruby/object:Overrides::Terraform::ResourceOverride + examples: + - !ruby/object:Provider::Terraform::Examples + name: "dialogflow_entity_type_basic" + primary_resource_id: "basic_entity_type" + skip_test: true + vars: + intent_name: "basic-entity-type" + # Skip sweeper gen since this is a child resource. + skip_sweeper: true + id_format: "{{name}}" + import_format: ["{{name}}"] + custom_code: !ruby/object:Provider::Terraform::CustomCode + custom_import: templates/terraform/custom_import/self_link_as_name_set_project.go.erb + post_create: 'templates/terraform/post_create/set_computed_name.erb' # This is for copying files over files: !ruby/object:Provider::Config::Files # These files have templating (ERB) code that will be run. diff --git a/products/dns/ansible.yaml b/products/dns/ansible.yaml index bcf41ba13942..12b8d8209f46 100644 --- a/products/dns/ansible.yaml +++ b/products/dns/ansible.yaml @@ -14,6 +14,8 @@ --- !ruby/object:Provider::Ansible::Config # This is where custom code would be defined eventually. 
datasources: !ruby/object:Overrides::ResourceOverrides + Policy: !ruby/object:Overrides::Ansible::ResourceOverride + exclude: true Project: !ruby/object:Overrides::Ansible::ResourceOverride exclude: true ResourceRecordSet: !ruby/object:Overrides::Ansible::ResourceOverride @@ -33,6 +35,8 @@ datasources: !ruby/object:Overrides::ResourceOverrides query_options: false filter_api_param: dnsName overrides: !ruby/object:Overrides::ResourceOverrides + Policy: !ruby/object:Overrides::Ansible::ResourceOverride + exclude: true ResourceRecordSet: !ruby/object:Overrides::Ansible::ResourceOverride access_api_results: true imports: diff --git a/products/dns/api.yaml b/products/dns/api.yaml index 8d850318d1a3..7cd63659fb8e 100644 --- a/products/dns/api.yaml +++ b/products/dns/api.yaml @@ -173,7 +173,6 @@ objects: description: | The zone's visibility: public zones are exposed to the Internet, while private zones are visible only to Virtual Private Cloud resources. - Must be one of: `public`, `private`. values: - :private - :public @@ -228,9 +227,7 @@ objects: values: - :default - :private - min_version: beta - !ruby/object:Api::Type::NestedObject - min_version: beta name: 'peeringConfig' description: | The presence of this field indicates that DNS Peering is enabled for this @@ -259,6 +256,25 @@ objects: Specifies if this is a managed reverse lookup zone. If true, Cloud DNS will resolve reverse lookup queries using automatically configured records for VPC resources. This only applies to networks listed under `private_visibility_config`. + - !ruby/object:Api::Type::NestedObject + min_version: beta + input: true + name: 'serviceDirectoryConfig' + description: + The presence of this field indicates that this zone is backed by Service Directory. The value + of this field contains information related to the namespace associated with the zone. + properties: + - !ruby/object:Api::Type::NestedObject + name: 'namespace' + required: true + description: 'The namespace associated with the zone.' + properties: + - !ruby/object:Api::Type::String + name: 'namespaceUrl' + required: true + description: | + The fully qualified URL of the service directory namespace that should be + associated with the zone. Ignored for `public` visibility zones. 
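The `serviceDirectoryConfig` block added above (beta-only, create-time) lets a private zone be backed by a Service Directory namespace. A minimal sketch of the generated usage, assuming the `google-beta` provider and a hypothetical fully qualified namespace URL (both the project and namespace IDs here are placeholders):

```hcl
# Sketch only: a private DNS zone backed by Service Directory.
resource "google_dns_managed_zone" "sd_zone" {
  provider   = google-beta
  name       = "sd-zone"
  dns_name   = "services.example.com."
  visibility = "private"

  service_directory_config {
    namespace {
      # Fully qualified URL of the namespace, per the schema above.
      namespace_url = "https://servicedirectory.googleapis.com/v1/projects/my-project/locations/us-central1/namespaces/example-namespace"
    }
  }
}
```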
references: !ruby/object:Api::Resource::ReferenceLinks guides: 'Managing Zones': @@ -352,7 +368,6 @@ objects: 'Using DNS server policies': 'https://cloud.google.com/dns/zones/#using-dns-server-policies' api: 'https://cloud.google.com/dns/docs/reference/v1beta2/policies' - min_version: beta - !ruby/object:Api::Resource name: 'Project' kind: 'dns#project' diff --git a/products/dns/terraform.yaml b/products/dns/terraform.yaml index 1c6d0c14fd42..33ed5fe3e1e7 100644 --- a/products/dns/terraform.yaml +++ b/products/dns/terraform.yaml @@ -19,6 +19,8 @@ overrides: !ruby/object:Overrides::ResourceOverrides - !ruby/object:Provider::Terraform::Examples name: "dns_managed_zone_basic" primary_resource_id: "example-zone" + # Randomness from random provider + skip_vcr: true - !ruby/object:Provider::Terraform::Examples name: "dns_managed_zone_private" primary_resource_id: "private-zone" @@ -36,12 +38,18 @@ overrides: !ruby/object:Overrides::ResourceOverrides network_2_name: "network-2" - !ruby/object:Provider::Terraform::Examples name: "dns_managed_zone_private_peering" - min_version: 'beta' primary_resource_id: "peering-zone" vars: zone_name: "peering-zone" network_source_name: "network-source" network_target_name: "network-target" + - !ruby/object:Provider::Terraform::Examples + name: "dns_managed_zone_service_directory" + min_version: 'beta' + primary_resource_id: "sd-zone" + vars: + zone_name: "peering-zone" + network_name: "network" properties: creationTime: !ruby/object:Overrides::Terraform::PropertyOverride exclude: true @@ -81,7 +89,12 @@ overrides: !ruby/object:Overrides::ResourceOverrides and apply an incorrect update to the resource. If you encounter this issue, remove all `networks` blocks in an update and then apply another update adding all of them back simultaneously. privateVisibilityConfig.networks.networkUrl: !ruby/object:Overrides::Terraform::PropertyOverride + custom_expand: 'templates/terraform/custom_expand/network_full_url.erb' diff_suppress_func: 'compareSelfLinkOrResourceName' + description: | + The id or fully qualified URL of the VPC network to bind to. + This should be formatted like `projects/{project}/global/networks/{network}` or + `https://www.googleapis.com/compute/v1/projects/{project}/global/networks/{network}` forwardingConfig.targetNameServers: !ruby/object:Overrides::Terraform::PropertyOverride is_set: true set_hash_func: |- @@ -94,9 +107,25 @@ overrides: !ruby/object:Overrides::ResourceOverrides schema.SerializeResourceForHash(&buf, raw, dnsManagedZoneForwardingConfigTargetNameServersSchema()) return hashcode.String(buf.String()) } + serviceDirectoryConfig.namespace.namespaceUrl: !ruby/object:Overrides::Terraform::PropertyOverride + custom_expand: 'templates/terraform/custom_expand/sd_full_url.erb' + custom_flatten: 'templates/terraform/custom_flatten/full_to_relative_path.erb' + description: | + The fully qualified or partial URL of the service directory namespace that should be + associated with the zone. This should be formatted like + `https://servicedirectory.googleapis.com/v1/projects/{project}/locations/{location}/namespaces/{namespace_id}` + or simply `projects/{project}/locations/{location}/namespaces/{namespace_id}` + Ignored for `public` visibility zones. 
visibility: !ruby/object:Overrides::Terraform::PropertyOverride diff_suppress_func: 'caseDiffSuppress' custom_flatten: templates/terraform/custom_flatten/default_if_empty.erb + peeringConfig.targetNetwork.networkUrl: !ruby/object:Overrides::Terraform::PropertyOverride + custom_expand: templates/terraform/custom_expand/network_full_url.erb + diff_suppress_func: 'compareSelfLinkOrResourceName' + description: | + The id or fully qualified URL of the VPC network to forward queries to. + This should be formatted like `projects/{project}/global/networks/{network}` or + `https://www.googleapis.com/compute/v1/projects/{project}/global/networks/{network}` reverseLookup: !ruby/object:Overrides::Terraform::PropertyOverride custom_flatten: templates/terraform/custom_flatten/object_to_bool.go.erb custom_expand: templates/terraform/custom_expand/bool_to_object.go.erb @@ -142,6 +171,13 @@ overrides: !ruby/object:Overrides::ResourceOverrides schema.SerializeResourceForHash(&buf, raw, dnsPolicyNetworksSchema()) return hashcode.String(buf.String()) } + networks.networkUrl: !ruby/object:Overrides::Terraform::PropertyOverride + custom_expand: templates/terraform/custom_expand/network_full_url.erb + diff_suppress_func: 'compareSelfLinkOrResourceName' + description: | + The id or fully qualified URL of the VPC network to forward queries to. + This should be formatted like `projects/{project}/global/networks/{network}` or + `https://www.googleapis.com/compute/v1/projects/{project}/global/networks/{network}` custom_code: !ruby/object:Provider::Terraform::CustomCode pre_delete: templates/terraform/pre_delete/detach_network.erb ResourceRecordSet: !ruby/object:Overrides::Terraform::ResourceOverride diff --git a/products/filestore/api.yaml b/products/filestore/api.yaml index 85e277d8c75e..b7484c39bfd0 100644 --- a/products/filestore/api.yaml +++ b/products/filestore/api.yaml @@ -19,11 +19,13 @@ # include a small hack to rename the library - see # templates/terraform/constants/filestore.erb. name: Filestore -display_name: Cloud Filestore versions: - !ruby/object:Api::Product::Version name: ga base_url: https://file.googleapis.com/v1/ + - !ruby/object:Api::Product::Version + name: beta + base_url: https://file.googleapis.com/v1beta1/ scopes: - https://www.googleapis.com/auth/cloud-platform async: !ruby/object:Api::OpAsync @@ -105,9 +107,12 @@ objects: required: true input: true values: - - TIER_UNSPECIFIED - - STANDARD - - PREMIUM + - :TIER_UNSPECIFIED + - :STANDARD + - :PREMIUM + - :BASIC_HDD + - :BASIC_SSD + - :HIGH_SCALE_SSD - !ruby/object:Api::Type::KeyValuePairs name: 'labels' description: | @@ -133,6 +138,51 @@ objects: File share capacity in GiB. This must be at least 1024 GiB for the standard tier, or 2560 GiB for the premium tier. required: true + - !ruby/object:Api::Type::Array + name: 'nfsExportOptions' + description: | + Nfs Export Options. There is a limit of 10 export options per file share. + max_size: 10 + min_version: beta + item_type: !ruby/object:Api::Type::NestedObject + properties: + - !ruby/object:Api::Type::Array + name: 'ipRanges' + description: | + List of either IPv4 addresses, or ranges in CIDR notation which may mount the file share. + Overlapping IP ranges are not allowed, both within and across NfsExportOptions. An error will be returned. + The limit is 64 IP ranges/addresses for each FileShareConfig among all NfsExportOptions. 
+ item_type: Api::Type::String + - !ruby/object:Api::Type::Enum + name: 'accessMode' + description: | + Either READ_ONLY, for allowing only read requests on the exported directory, + or READ_WRITE, for allowing both read and write requests. The default is READ_WRITE. + default_value: :READ_WRITE + values: + - :READ_ONLY + - :READ_WRITE + - !ruby/object:Api::Type::Enum + name: 'squashMode' + description: | + Either NO_ROOT_SQUASH, for allowing root access on the exported directory, or ROOT_SQUASH, + for not allowing root access. The default is NO_ROOT_SQUASH. + default_value: :NO_ROOT_SQUASH + values: + - :NO_ROOT_SQUASH + - :ROOT_SQUASH + - !ruby/object:Api::Type::Integer + name: 'anonUid' + description: | + An integer representing the anonymous user id with a default value of 65534. + Anon_uid may only be set with squashMode of ROOT_SQUASH. An error will be returned + if this field is specified for other squashMode settings. + - !ruby/object:Api::Type::Integer + name: 'anonGid' + description: | + An integer representing the anonymous group id with a default value of 65534. + Anon_gid may only be set with squashMode of ROOT_SQUASH. An error will be returned + if this field is specified for other squashMode settings. - !ruby/object:Api::Type::Array name: 'networks' description: | diff --git a/products/filestore/terraform.yaml b/products/filestore/terraform.yaml index f20a9567914f..2633995e0fea 100644 --- a/products/filestore/terraform.yaml +++ b/products/filestore/terraform.yaml @@ -25,6 +25,12 @@ overrides: !ruby/object:Overrides::ResourceOverrides primary_resource_id: "instance" vars: instance_name: "test-instance" + - !ruby/object:Provider::Terraform::Examples + name: "filestore_instance_full" + min_version: beta + primary_resource_id: "instance" + vars: + instance_name: "test-instance" properties: name: !ruby/object:Overrides::Terraform::PropertyOverride custom_flatten: 'templates/terraform/custom_flatten/name_from_self_link.erb' diff --git a/products/firebase/api.yaml b/products/firebase/api.yaml index ca1fc9b7df01..a6f84d7245d2 100644 --- a/products/firebase/api.yaml +++ b/products/firebase/api.yaml @@ -100,3 +100,55 @@ objects: description: | The ID of the default GCP resource location for the Project. The location must be one of the available GCP resource locations. + - !ruby/object:Api::Resource + name: 'WebApp' + min_version: beta + base_url: projects/{{project}}/webApps + self_link: '{{name}}' + update_verb: :PATCH + update_mask: true + description: | + A Google Cloud Firebase web application instance + references: !ruby/object:Api::Resource::ReferenceLinks + guides: + 'Official Documentation': + 'https://firebase.google.com/' + api: 'https://firebase.google.com/docs/projects/api/reference/rest/v1beta1/projects.webApps' + async: !ruby/object:Api::OpAsync + actions: ["create"] + operation: !ruby/object:Api::OpAsync::Operation + path: 'name' + base_url: '{{op_id}}' + wait_ms: 1000 + result: !ruby/object:Api::OpAsync::Result + path: 'response' + resource_inside_response: true + status: !ruby/object:Api::OpAsync::Status + path: 'done' + complete: true + allowed: + - true + - false + error: !ruby/object:Api::OpAsync::Error + path: 'error' + message: 'message' + properties: + - !ruby/object:Api::Type::String + name: name + description: | + The fully qualified resource name of the App, for example: + + projects/projectId/webApps/appId + output: true + - !ruby/object:Api::Type::String + name: displayName + required: true + description: | + The user-assigned display name of the App. 
+ - !ruby/object:Api::Type::String + name: appId + output: true + description: | + Immutable. The globally unique, Firebase-assigned identifier of the App. + + This identifier should be treated as an opaque token, as the data format is not specified. diff --git a/products/firebase/terraform.yaml b/products/firebase/terraform.yaml index 62f8bc6d0050..a61601bd524f 100644 --- a/products/firebase/terraform.yaml +++ b/products/firebase/terraform.yaml @@ -17,8 +17,6 @@ overrides: !ruby/object:Overrides::ResourceOverrides import_format: ["projects/{{project}}", "{{project}}"] timeouts: !ruby/object:Api::Timeouts insert_minutes: 10 - update_minutes: 10 - delete_minutes: 10 autogen_async: true skip_delete: true skip_sweeper: true @@ -44,8 +42,27 @@ overrides: !ruby/object:Overrides::ResourceOverrides primary_resource_id: "basic" test_env_vars: org_id: :ORG_ID - properties: - + WebApp: !ruby/object:Overrides::Terraform::ResourceOverride + # id_format: '{{name}}' + import_format: ['{{name}}'] + timeouts: !ruby/object:Api::Timeouts + insert_minutes: 10 + update_minutes: 10 + autogen_async: true + skip_delete: true #currently only able to delete a webapp through the Firebase Admin console + skip_sweeper: true + examples: + - !ruby/object:Provider::Terraform::Examples + name: "firebase_web_app_basic" + min_version: "beta" + primary_resource_id: "basic" + vars: + display_name: "Display Name Basic" + bucket_name: "fb-webapp-" + test_env_vars: + org_id: :ORG_ID + custom_code: !ruby/object:Provider::Terraform::CustomCode + custom_import: templates/terraform/custom_import/self_link_as_name.erb # This is for copying files over files: !ruby/object:Provider::Config::Files # These files have templating (ERB) code that will be run. diff --git a/products/firestore/api.yaml b/products/firestore/api.yaml index 7441f165db35..9a6afefdeb2b 100644 --- a/products/firestore/api.yaml +++ b/products/firestore/api.yaml @@ -13,7 +13,6 @@ --- !ruby/object:Api::Product name: Firestore -display_name: Cloud Firestore versions: - !ruby/object:Api::Product::Version name: ga @@ -69,8 +68,7 @@ objects: - !ruby/object:Api::Type::Enum name: queryScope description: | - The scope at which a query is run. One of `"COLLECTION"` or - `"COLLECTION_GROUP"`. Defaults to `"COLLECTION"`. + The scope at which a query is run. 
default_value: :COLLECTION values: - :COLLECTION diff --git a/products/gameservices/api.yaml b/products/gameservices/api.yaml index a1d914e95b0e..024c07b07794 100644 --- a/products/gameservices/api.yaml +++ b/products/gameservices/api.yaml @@ -13,7 +13,7 @@ --- !ruby/object:Api::Product name: GameServices -display_name: Google Game Services +display_name: Game Servers scopes: - https://www.googleapis.com/auth/compute versions: diff --git a/products/healthcare/api.yaml b/products/healthcare/api.yaml index 981cd1afee52..0a1403a98b68 100644 --- a/products/healthcare/api.yaml +++ b/products/healthcare/api.yaml @@ -15,6 +15,9 @@ name: Healthcare display_name: Cloud Healthcare versions: + - !ruby/object:Api::Product::Version + name: ga + base_url: https://healthcare.googleapis.com/v1/ - !ruby/object:Api::Product::Version name: beta base_url: https://healthcare.googleapis.com/v1beta1/ @@ -73,7 +76,7 @@ objects: guides: 'Creating a dataset': 'https://cloud.google.com/healthcare/docs/how-tos/datasets' - api: 'https://cloud.google.com/healthcare/docs/reference/rest/v1beta1/projects.locations.datasets' + api: 'https://cloud.google.com/healthcare/docs/reference/rest/v1/projects.locations.datasets' - !ruby/object:Api::Resource name: 'DicomStore' kind: "healthcare#dicomStore" @@ -153,7 +156,7 @@ objects: guides: 'Creating a DICOM store': 'https://cloud.google.com/healthcare/docs/how-tos/dicom' - api: 'https://cloud.google.com/healthcare/docs/reference/rest/v1beta1/projects.locations.datasets.dicomStores' + api: 'https://cloud.google.com/healthcare/docs/reference/rest/v1/projects.locations.datasets.dicomStores' - !ruby/object:Api::Resource name: 'FhirStore' kind: "healthcare#fhirStore" @@ -184,17 +187,30 @@ objects: ** Changing this property may recreate the FHIR store (removing all data) ** required: true input: true + # Version is duplicated because it is optional in beta but required in GA. - !ruby/object:Api::Type::Enum name: version description: | - The FHIR specification version. Supported values include DSTU2, STU3 and R4. Defaults to STU3. - required: false # TODO: Make this field required in GA. + The FHIR specification version. + exact_version: beta + required: false input: true default_value: :STU3 values: - :DSTU2 - :STU3 - :R4 + - !ruby/object:Api::Type::Enum + name: version + description: | + The FHIR specification version. + exact_version: ga + required: true + input: true + values: + - :DSTU2 + - :STU3 + - :R4 - !ruby/object:Api::Type::Boolean name: 'enableUpdateCreate' description: | @@ -288,11 +304,67 @@ objects: description: | The fully qualified name of this dataset output: true + - !ruby/object:Api::Type::Array + name: streamConfigs + description: |- + A list of streaming configs that configure the destinations of streaming export for every resource mutation in + this FHIR store. Each store is allowed to have up to 10 streaming configs. After a new config is added, the next + resource mutation is streamed to the new location in addition to the existing ones. When a location is removed + from the list, the server stops streaming to that location. Before adding a new config, you must add the required + bigquery.dataEditor role to your project's Cloud Healthcare Service Agent service account. Some lag (typically on + the order of dozens of seconds) is expected before the results show up in the streaming destination. 
+ item_type: !ruby/object:Api::Type::NestedObject + properties: + - !ruby/object:Api::Type::Array + name: 'resourceTypes' + description: | + Supply a FHIR resource type (such as "Patient" or "Observation"). See + https://www.hl7.org/fhir/valueset-resource-types.html for a list of all FHIR resource types. The server treats + an empty list as an intent to stream all the supported resource types in this FHIR store. + item_type: Api::Type::String + - !ruby/object:Api::Type::NestedObject + name: bigqueryDestination + required: true + description: | + The destination BigQuery structure that contains both the dataset location and corresponding schema config. + The output is organized in one table per resource type. The server reuses the existing tables (if any) that + are named after the resource types, e.g. "Patient", "Observation". When there is no existing table for a given + resource type, the server attempts to create one. + See the [streaming config reference](https://cloud.google.com/healthcare/docs/reference/rest/v1beta1/projects.locations.datasets.fhirStores#streamconfig) for more details. + properties: + - !ruby/object:Api::Type::String + name: datasetUri + required: true + description: | + BigQuery URI to a dataset, up to 2000 characters long, in the format bq://projectId.bqDatasetId + - !ruby/object:Api::Type::NestedObject + name: schemaConfig + required: true + description: | + The configuration for the exported BigQuery schema. + properties: + - !ruby/object:Api::Type::Enum + name: schemaType + description: | + Specifies the output schema type. Only ANALYTICS is supported at this time. + * ANALYTICS: Analytics schema defined by the FHIR community. + See https://github.com/FHIR/sql-on-fhir/blob/master/sql-on-fhir.md. + default_value: :ANALYTICS + values: + - :ANALYTICS + - !ruby/object:Api::Type::Integer + name: recursiveStructureDepth + required: true + description: | + The depth for all recursive structures in the output analytics schema. For example, concept in the CodeSystem + resource is a recursive structure; when the depth is 2, the CodeSystem table will have a column called + concept.concept but not concept.concept.concept. If not specified or set to 0, the server will use the default + value 2. The maximum depth allowed is 5. references: !ruby/object:Api::Resource::ReferenceLinks guides: 'Creating a FHIR store': 'https://cloud.google.com/healthcare/docs/how-tos/fhir' - api: 'https://cloud.google.com/healthcare/docs/reference/rest/v1beta1/projects.locations.datasets.fhirStores' + api: 'https://cloud.google.com/healthcare/docs/reference/rest/v1/projects.locations.datasets.fhirStores' - !ruby/object:Api::Resource name: 'Hl7V2Store' kind: "healthcare#hl7V2Store" @@ -333,6 +405,7 @@ objects: at_least_one_of: - parser_config.0.allow_null_header - parser_config.0.segment_terminator + - parser_config.0.schema description: | Determines whether messages with no header are allowed. - !ruby/object:Api::Type::String @@ -340,11 +413,20 @@ objects: at_least_one_of: - parser_config.0.allow_null_header - parser_config.0.segment_terminator + - parser_config.0.schema description: | Byte(s) to be used as the segment terminator. If this is unset, '\r' will be used as segment terminator. A base64-encoded string. 
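Since `segmentTerminator` above takes a base64-encoded byte string, here is a tiny illustration (standard library only, not part of this codebase) showing that the default `\r` terminator encodes to `DQ==`; the `parserConfig` property list continues below:

```go
package main

import (
	"encoding/base64"
	"fmt"
)

func main() {
	// The default HL7v2 segment terminator is a single carriage return.
	terminator := []byte("\r")
	fmt.Println(base64.StdEncoding.EncodeToString(terminator)) // DQ==
}
```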
- + - !ruby/object:Api::Type::String + name: schema + at_least_one_of: + - parser_config.0.allow_null_header + - parser_config.0.segment_terminator + - parser_config.0.schema + description: | + JSON encoded string for schemas used to parse messages in this + store if schematized parsing is desired. - !ruby/object:Api::Type::KeyValuePairs name: labels required: false @@ -362,9 +444,62 @@ objects: An object containing a list of "key": value pairs. Example: { "name": "wrench", "mass": "1.3kg", "count": "3" }. + - !ruby/object:Api::Type::Array + name: notificationConfigs + description: |- + A list of notification configs. Each configuration uses a filter to determine whether to publish a + message (both Ingest & Create) on the corresponding notification destination. Only the message name + is sent as part of the notification. Supplied by the client. + item_type: !ruby/object:Api::Type::NestedObject + properties: + - !ruby/object:Api::Type::String + name: pubsubTopic + description: | + The Cloud Pub/Sub topic that notifications of changes are published on. Supplied by the client. + PubsubMessage.Data will contain the resource name. PubsubMessage.MessageId is the ID of this message. + It is guaranteed to be unique within the topic. PubsubMessage.PublishTime is the time at which the message + was published. Notifications are only sent if the topic is non-empty. Topic names must be scoped to a + project. cloud-healthcare@system.gserviceaccount.com must have publisher permissions on the given + Cloud Pub/Sub topic. Not having adequate permissions will cause the calls that send notifications to fail. + + If a notification cannot be published to Cloud Pub/Sub, errors will be logged to Stackdriver + required: true + - !ruby/object:Api::Type::String + name: filter + description: | + Restricts notifications sent for messages matching a filter. If this is empty, all messages + are matched. Syntax: https://cloud.google.com/appengine/docs/standard/python/search/query_strings + Fields/functions available for filtering are: + + * messageType, from the MSH-9.1 field. For example, NOT messageType = "ADT". + * send_date or sendDate, the YYYY-MM-DD date the message was sent in the dataset's timeZone, from the MSH-7 segment. For example, send_date < "2017-01-02". + * sendTime, the timestamp when the message was sent, using the RFC3339 time format for comparisons, from the MSH-7 segment. For example, sendTime < "2017-01-02T00:00:00-05:00". + * sendFacility, the care center that the message came from, from the MSH-4 segment. For example, sendFacility = "ABC". + * PatientId(value, type), which matches if the message lists a patient having an ID of the given value and type in the PID-2, PID-3, or PID-4 segments. For example, PatientId("123456", "MRN"). + * labels.x, a string value of the label with key x as set using the Message.labels map. For example, labels."priority"="high". The operator :* can be used to assert the existence of a label. For example, labels."priority":*. - !ruby/object:Api::Type::NestedObject name: notificationConfig + removed_message: This field has been replaced by notificationConfigs + exact_version: ga + required: false + update_url: '{{dataset}}/hl7V2Stores/{{name}}' + properties: + - !ruby/object:Api::Type::String + name: pubsubTopic + description: | + The Cloud Pub/Sub topic that notifications of changes are published on. Supplied by the client. + PubsubMessage.Data will contain the resource name. PubsubMessage.MessageId is the ID of this message. 
+ It is guaranteed to be unique within the topic. PubsubMessage.PublishTime is the time at which the message + was published. Notifications are only sent if the topic is non-empty. Topic names must be scoped to a + project. cloud-healthcare@system.gserviceaccount.com must have publisher permissions on the given + Cloud Pub/Sub topic. Not having adequate permissions will cause the calls that send notifications to fail. + required: true + - !ruby/object:Api::Type::NestedObject + name: notificationConfig + # This field is duplicated because beta and ga have different behaviors. + deprecation_message: This field has been replaced by notificationConfigs + exact_version: beta required: false update_url: '{{dataset}}/hl7V2Stores/{{name}}' properties: @@ -378,7 +513,6 @@ objects: project. cloud-healthcare@system.gserviceaccount.com must have publisher permissions on the given Cloud Pub/Sub topic. Not having adequate permissions will cause the calls that send notifications to fail. required: true - - !ruby/object:Api::Type::Time name: 'creationTime' description: | @@ -394,4 +528,4 @@ objects: guides: 'Creating a HL7v2 Store': 'https://cloud.google.com/healthcare/docs/how-tos/hl7v2' - api: 'https://cloud.google.com/healthcare/docs/reference/rest/v1beta1/projects.locations.datasets.hl7V2Stores' + api: 'https://cloud.google.com/healthcare/docs/reference/rest/v1/projects.locations.datasets.hl7V2Stores' diff --git a/products/healthcare/terraform.yaml b/products/healthcare/terraform.yaml index 92c80245ca94..02f0eed55ae7 100644 --- a/products/healthcare/terraform.yaml +++ b/products/healthcare/terraform.yaml @@ -21,7 +21,6 @@ overrides: !ruby/object:Overrides::ResourceOverrides examples: - !ruby/object:Provider::Terraform::Examples name: "healthcare_dataset_basic" - skip_test: true primary_resource_id: "default" vars: dataset_name: "example-dataset" @@ -41,15 +40,24 @@ overrides: !ruby/object:Overrides::ResourceOverrides {{description}} id_format: "{{dataset}}/fhirStores/{{name}}" import_format: ["{{dataset}}/fhirStores/{{name}}"] + # FhirStore datastores will be sweeped by the Dataset sweeper + skip_sweeper: true examples: - !ruby/object:Provider::Terraform::Examples name: "healthcare_fhir_store_basic" - skip_test: true primary_resource_id: "default" vars: dataset_name: "example-dataset" fhir_store_name: "example-fhir-store" pubsub_topic: "fhir-notifications" + - !ruby/object:Provider::Terraform::Examples + name: "healthcare_fhir_store_streaming_config" + primary_resource_id: "default" + vars: + dataset_name: "example-dataset" + fhir_store_name: "example-fhir-store" + pubsub_topic: "fhir-notifications" + bq_dataset_name: "bq_example_dataset" properties: creationTime: !ruby/object:Overrides::Terraform::PropertyOverride exclude: true @@ -63,10 +71,11 @@ overrides: !ruby/object:Overrides::ResourceOverrides {{description}} id_format: "{{dataset}}/dicomStores/{{name}}" import_format: ["{{dataset}}/dicomStores/{{name}}"] + # DicomStore datastores will be sweeped by the Dataset sweeper + skip_sweeper: true examples: - !ruby/object:Provider::Terraform::Examples name: "healthcare_dicom_store_basic" - skip_test: true primary_resource_id: "default" vars: dataset_name: "example-dataset" @@ -85,18 +94,32 @@ overrides: !ruby/object:Overrides::ResourceOverrides {{description}} id_format: "{{dataset}}/hl7V2Stores/{{name}}" import_format: ["{{dataset}}/hl7V2Stores/{{name}}"] + # Hl7V2Store datastores will be sweeped by the Dataset sweeper + skip_sweeper: true examples: - !ruby/object:Provider::Terraform::Examples name: 
"healthcare_hl7_v2_store_basic" - skip_test: true primary_resource_id: "default" vars: dataset_name: "example-dataset" hl7_v2_store_name: "example-hl7-v2-store" pubsub_topic: "hl7-v2-notifications" + - !ruby/object:Provider::Terraform::Examples + name: "healthcare_hl7_v2_store_parser_config" + min_version: beta + primary_resource_id: "store" + vars: + dataset_name: "example-dataset" + hl7_v2_store_name: "example-hl7-v2-store" properties: creationTime: !ruby/object:Overrides::Terraform::PropertyOverride exclude: true + parserConfig.schema: !ruby/object:Overrides::Terraform::PropertyOverride + custom_expand: 'templates/terraform/custom_expand/json_schema.erb' + custom_flatten: 'templates/terraform/custom_flatten/json_schema.erb' + state_func: 'func(v interface{}) string { s, _ := structure.NormalizeJsonString(v); return s }' + validation: !ruby/object:Provider::Terraform::Validation + function: 'validation.ValidateJsonString' selfLink: !ruby/object:Overrides::Terraform::PropertyOverride ignore_read: true custom_code: !ruby/object:Provider::Terraform::CustomCode diff --git a/products/iam/api.yaml b/products/iam/api.yaml index dd634970b3cc..9a7b00df287b 100644 --- a/products/iam/api.yaml +++ b/products/iam/api.yaml @@ -97,6 +97,7 @@ objects: - !ruby/object:Api::Resource name: 'ServiceAccountKey' base_url: projects/{{project}}/serviceAccounts/{{service_account}}/keys + collection_url_key: 'keys' description: | A service account in the Identity and Access Management API. parameters: diff --git a/products/iam/helpers/ansible/service_account_key_template.erb b/products/iam/helpers/ansible/service_account_key_template.erb index 0c9b69a3a261..a8f57fdebe7a 100644 --- a/products/iam/helpers/ansible/service_account_key_template.erb +++ b/products/iam/helpers/ansible/service_account_key_template.erb @@ -3,12 +3,12 @@ # # Copyright (C) 2017 Google # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) -<%= lines(autogen_notice :python) -%> +<%= lines(autogen_notice(:python, pwd)) -%> from __future__ import absolute_import, division, print_function __metaclass__ = type -<%= lines(compile('templates/ansible/documentation.erb'), 1) -%> +<%= lines(compile(pwd + '/templates/ansible/documentation.erb'), 1) -%> ################################################################################ # Imports ################################################################################ @@ -48,7 +48,7 @@ import base64 def main(): """Main function""" -<%= lines(indent(compile('templates/ansible/module.erb'), 4)) -%> +<%= lines(indent(compile(pwd + '/templates/ansible/module.erb'), 4)) -%> if not module.params['scopes']: module.params['scopes'] = <%= python_literal(object.__product.scopes) %> diff --git a/products/iap/api.yaml b/products/iap/api.yaml index fc1b65f12866..d4fbb8f24e91 100644 --- a/products/iap/api.yaml +++ b/products/iap/api.yaml @@ -182,6 +182,10 @@ objects: input: true description: | Contains the data that describes an Identity Aware Proxy owned client. + + ~> **Note:** Only internal org clients can be created via declarative tools. Other types of clients must be + manually created via the GCP console. This restriction is due to the existing APIs and not lack of support + in this tool. 
parameters: - !ruby/object:Api::Type::String name: 'clientId' diff --git a/products/iap/terraform.yaml b/products/iap/terraform.yaml index 84a2fcb29930..d2db04e498f3 100644 --- a/products/iap/terraform.yaml +++ b/products/iap/terraform.yaml @@ -152,7 +152,7 @@ overrides: !ruby/object:Overrides::ResourceOverrides primary_resource_name: "fmt.Sprintf(\"tf-test-tunnel-vm%s\", context[\"random_suffix\"])" Brand: !ruby/object:Overrides::Terraform::ResourceOverride async: !ruby/object:Provider::Terraform::PollAsync - check_response_func: PollCheckForExistence + check_response_func_existence: PollCheckForExistence actions: ['create'] operation: !ruby/object:Api::Async::Operation timeouts: !ruby/object:Api::Timeouts @@ -184,6 +184,8 @@ overrides: !ruby/object:Overrides::ResourceOverrides id_format: '{{brand}}/identityAwareProxyClients/{{client_id}}' self_link: '{{brand}}/identityAwareProxyClients/{{client_id}}' import_format: ['{{brand}}/identityAwareProxyClients/{{client_id}}'] + # Child of iap brand resource + skip_sweeper: true examples: - !ruby/object:Provider::Terraform::Examples name: "iap_client" diff --git a/products/identityplatform/terraform.yaml b/products/identityplatform/terraform.yaml index ce93e10624a8..aa205b723850 100644 --- a/products/identityplatform/terraform.yaml +++ b/products/identityplatform/terraform.yaml @@ -24,6 +24,8 @@ overrides: !ruby/object:Overrides::ResourceOverrides skip_test: true TenantDefaultSupportedIdpConfig: !ruby/object:Overrides::Terraform::ResourceOverride import_format: ["projects/{{project}}/tenants/{{tenant}}/defaultSupportedIdpConfigs/{{idp_id}}"] + # Child of idp Tenant resource + skip_sweeper: true examples: - !ruby/object:Provider::Terraform::Examples name: "identity_platform_tenant_default_supported_idp_config_basic" @@ -41,7 +43,7 @@ overrides: !ruby/object:Overrides::ResourceOverrides idp_entity_id: tf-idp sp_entity_id: tf-sp test_vars_overrides: - name: '"saml.tf-config-" + acctest.RandString(10)' + name: '"saml.tf-config-" + randString(t, 10)' TenantInboundSamlConfig: !ruby/object:Overrides::Terraform::ResourceOverride properties: name: !ruby/object:Overrides::Terraform::PropertyOverride @@ -55,7 +57,7 @@ overrides: !ruby/object:Overrides::ResourceOverrides idp_entity_id: tf-idp sp_entity_id: tf-sp test_vars_overrides: - name: '"saml.tf-config-" + acctest.RandString(10)' + name: '"saml.tf-config-" + randString(t, 10)' OauthIdpConfig: !ruby/object:Overrides::Terraform::ResourceOverride properties: name: !ruby/object:Overrides::Terraform::PropertyOverride @@ -67,7 +69,7 @@ overrides: !ruby/object:Overrides::ResourceOverrides vars: name: oidc.oauth-idp-config test_vars_overrides: - name: '"oidc.oauth-idp-config-" + acctest.RandString(10)' + name: '"oidc.oauth-idp-config-" + randString(t, 10)' TenantOauthIdpConfig: !ruby/object:Overrides::Terraform::ResourceOverride properties: name: !ruby/object:Overrides::Terraform::PropertyOverride @@ -79,7 +81,7 @@ overrides: !ruby/object:Overrides::ResourceOverrides vars: name: oidc.oauth-idp-config test_vars_overrides: - name: '"oidc.oauth-idp-config-" + acctest.RandString(10)' + name: '"oidc.oauth-idp-config-" + randString(t, 10)' Tenant: !ruby/object:Overrides::Terraform::ResourceOverride properties: name: !ruby/object:Overrides::Terraform::PropertyOverride diff --git a/products/kms/ansible.yaml b/products/kms/ansible.yaml index 0f7bda77214c..87d654836145 100644 --- a/products/kms/ansible.yaml +++ b/products/kms/ansible.yaml @@ -60,6 +60,8 @@ overrides: !ruby/object:Overrides::ResourceOverrides Immutable 
           purpose of CryptoKey. See
           https://cloud.google.com/kms/docs/reference/rest/v1/projects.locations.keyRings.cryptoKeys#CryptoKeyPurpose
           for inputs.
+  KeyRingImportJob: !ruby/object:Overrides::Ansible::ResourceOverride
+    exclude: true
   SecretCiphertext: !ruby/object:Overrides::Ansible::ResourceOverride
     exclude: true
 files: !ruby/object:Provider::Config::Files
diff --git a/products/kms/api.yaml b/products/kms/api.yaml
index bb37ac0818b7..01d8df9a2ea0 100644
--- a/products/kms/api.yaml
+++ b/products/kms/api.yaml
@@ -13,7 +13,7 @@
 --- !ruby/object:Api::Product
 name: KMS
-display_name: Cloud KMS
+display_name: Cloud Key Management Service
 versions:
   - !ruby/object:Api::Product::Version
     name: ga
@@ -146,6 +146,124 @@ objects:
       'Creating a key': 'https://cloud.google.com/kms/docs/creating-keys#create_a_key'
       api: 'https://cloud.google.com/kms/docs/reference/rest/v1/projects.locations.keyRings.cryptoKeys'
+  - !ruby/object:Api::Resource
+    name: 'KeyRingImportJob'
+    base_url: '{{key_ring}}/importJobs'
+    create_url: '{{key_ring}}/importJobs?importJobId={{import_job_id}}'
+    self_link: '{{name}}'
+    input: true
+    references: !ruby/object:Api::Resource::ReferenceLinks
+      guides:
+        'Importing a key':
+          'https://cloud.google.com/kms/docs/importing-a-key'
+      api: 'https://cloud.google.com/kms/docs/reference/rest/v1/projects.locations.keyRings.importJobs'
+    description: |
+      A `KeyRingImportJob` can be used to create `CryptoKeys` and `CryptoKeyVersions` using pre-existing
+      key material, generated outside of Cloud KMS. A `KeyRingImportJob` expires 3 days after it is created.
+      Once expired, Cloud KMS will no longer be able to import or unwrap any key material that
+      was wrapped with the `KeyRingImportJob`'s public key.
+    parameters:
+      - !ruby/object:Api::Type::String
+        name: 'keyRing'
+        description: |
+          The KeyRing that this import job belongs to.
+          Format: `'projects/{{project}}/locations/{{location}}/keyRings/{{keyRing}}'`.
+        required: true
+        input: true
+        url_param_only: true
+      - !ruby/object:Api::Type::String
+        name: 'importJobId'
+        description: |
+          It must be unique within a KeyRing and match the regular expression [a-zA-Z0-9_-]{1,63}.
+        required: true
+        input: true
+        url_param_only: true
+    properties:
+      - !ruby/object:Api::Type::String
+        name: 'name'
+        description: |
+          The resource name for this ImportJob in the format projects/*/locations/*/keyRings/*/importJobs/*.
+        output: true
+      - !ruby/object:Api::Type::Enum
+        name: 'importMethod'
+        input: true
+        required: true
+        description: |
+          The wrapping method to be used for incoming key material.
+        values:
+          - :RSA_OAEP_3072_SHA1_AES_256
+          - :RSA_OAEP_4096_SHA1_AES_256
+      - !ruby/object:Api::Type::Enum
+        name: 'protectionLevel'
+        input: true
+        required: true
+        description: |
+          The protection level of the ImportJob. This must match the protectionLevel of the
+          versionTemplate on the CryptoKey you attempt to import into.
+        values:
+          - :SOFTWARE
+          - :HSM
+          - :EXTERNAL
+      - !ruby/object:Api::Type::Time
+        name: 'createTime'
+        description: |
+          The time that this resource was created on the server.
+          This is in RFC3339 text format.
+        output: true
+      - !ruby/object:Api::Type::Time
+        name: 'generateTime'
+        description: |
+          The time that this resource was generated.
+          This is in RFC3339 text format.
+        output: true
+      - !ruby/object:Api::Type::Time
+        name: 'expireTime'
+        description: |
+          The time at which this resource is scheduled for expiration and can no longer be used.
+          This is in RFC3339 text format.
+ output: true + - !ruby/object:Api::Type::Time + name: 'expireEventTime' + description: | + The time this resource expired. Only present if state is EXPIRED. + output: true + - !ruby/object:Api::Type::String + name: 'state' + description: | + The current state of the ImportJob, indicating if it can be used. + output: true + - !ruby/object:Api::Type::NestedObject + name: 'publicKey' + description: | + The public key with which to wrap key material prior to import. Only returned if state is `ACTIVE`. + output: true + properties: + - !ruby/object:Api::Type::String + name: 'pem' + description: | + The public key, encoded in PEM format. For more information, see the RFC 7468 sections + for General Considerations and Textual Encoding of Subject Public Key Info. + output: true + - !ruby/object:Api::Type::NestedObject + name: 'attestation' + description: | + Statement that was generated and signed by the key creator (for example, an HSM) at key creation time. + Use this statement to verify attributes of the key as stored on the HSM, independently of Google. + Only present if the chosen ImportMethod is one with a protection level of HSM. + output: true + properties: + - !ruby/object:Api::Type::String + name: 'format' + description: | + The format of the attestation data. + output: true + - !ruby/object:Api::Type::String + name: 'content' + description: | + The attestation data provided by the HSM when the key operation was performed. + A base64-encoded string. + output: true - !ruby/object:Api::Resource name: 'SecretCiphertext' base_url: '{{crypto_key}}' diff --git a/products/kms/inspec.yaml b/products/kms/inspec.yaml index 9eb70a924790..e9263e4d9fca 100644 --- a/products/kms/inspec.yaml +++ b/products/kms/inspec.yaml @@ -49,5 +49,7 @@ overrides: !ruby/object:Overrides::ResourceOverrides base_url: 'projects/{{project}}/locations/{{location}}/keyRings/{{key_ring_name}}/cryptoKeys/{{crypto_key_name}}' exclude: false method_name_separator: ':' + KeyRingImportJob: !ruby/object:Overrides::Inspec::ResourceOverride + exclude: true SecretCiphertext: !ruby/object:Overrides::Inspec::ResourceOverride exclude: true \ No newline at end of file diff --git a/products/kms/terraform.yaml b/products/kms/terraform.yaml index ab48f697aa00..6cff368ef88b 100644 --- a/products/kms/terraform.yaml +++ b/products/kms/terraform.yaml @@ -102,11 +102,41 @@ overrides: !ruby/object:Overrides::ResourceOverrides encoder: templates/terraform/encoders/kms_crypto_key.go.erb update_encoder: templates/terraform/update_encoder/kms_crypto_key.go.erb extra_schema_entry: templates/terraform/extra_schema_entry/kms_self_link.erb + KeyRingImportJob: !ruby/object:Overrides::Terraform::ResourceOverride + description: | + {{description}} + + ~> **Note:** KeyRingImportJobs cannot be deleted from Google Cloud Platform. 
+      Destroying a Terraform-managed KeyRingImportJob will remove it from state but
+      *will not delete the resource on the server.*
+    id_format: "{{name}}"
+    import_format: ["{{name}}"]
+    examples:
+      - !ruby/object:Provider::Terraform::Examples
+        name: "kms_key_ring_import_job"
+        primary_resource_id: "import-job"
+        vars:
+          keyring: "keyring-example"
+        skip_test: true
+    properties:
+      createTime: !ruby/object:Overrides::Terraform::PropertyOverride
+        exclude: true
+      keyRing: !ruby/object:Overrides::Terraform::PropertyOverride
+        diff_suppress_func: 'kmsCryptoKeyRingsEquivalent'
+        ignore_read: true
+      generateTime: !ruby/object:Overrides::Terraform::PropertyOverride
+        exclude: true
+      expireEventTime: !ruby/object:Overrides::Terraform::PropertyOverride
+        exclude: true
+    custom_code: !ruby/object:Provider::Terraform::CustomCode
+      custom_import: templates/terraform/custom_import/kms_key_ring_import_job.go.erb
   SecretCiphertext: !ruby/object:Overrides::Terraform::ResourceOverride
     description: |
       {{description}}

-      ~> **NOTE**: Using this resource will allow you to conceal secret data within your
+      ~> **NOTE:** Using this resource will allow you to conceal secret data within your
       resource definitions, but it does not take care of protecting that data in the
       logging output, plan output, or state output. Please take care to secure your
       secret data outside of resource definitions.
diff --git a/products/logging/api.yaml b/products/logging/api.yaml
index 1ad1dd07ca5c..49375dddf39a 100644
--- a/products/logging/api.yaml
+++ b/products/logging/api.yaml
@@ -13,7 +13,7 @@
 --- !ruby/object:Api::Product
 name: Logging
-display_name: Stackdriver Logging
+display_name: Cloud (Stackdriver) Logging
 versions:
   - !ruby/object:Api::Product::Version
     name: ga
diff --git a/products/memcache/api.yaml b/products/memcache/api.yaml
new file mode 100644
index 000000000000..a7bd2e4c69ad
--- /dev/null
+++ b/products/memcache/api.yaml
@@ -0,0 +1,152 @@
+# Copyright 2020 Google Inc.
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+ +--- !ruby/object:Api::Product +name: Memcache +versions: + - !ruby/object:Api::Product::Version + name: beta + base_url: https://memcache.googleapis.com/v1beta2/ +scopes: + - https://www.googleapis.com/auth/cloud-platform +async: !ruby/object:Api::OpAsync + operation: !ruby/object:Api::OpAsync::Operation + path: 'name' + base_url: '{{op_id}}' + wait_ms: 1000 + result: !ruby/object:Api::OpAsync::Result + path: 'response' + resource_inside_response: true + status: !ruby/object:Api::OpAsync::Status + path: 'done' + complete: True + allowed: + - True + - False + error: !ruby/object:Api::OpAsync::Error + path: 'error' + message: 'message' +objects: + - !ruby/object:Api::Resource + name: 'Instance' + min_version: beta + create_url: projects/{{project}}/locations/{{region}}/instances?instanceId={{name}} + self_link: projects/{{project}}/locations/{{region}}/instances/{{name}} + base_url: projects/{{project}}/locations/{{region}}/instances + update_verb: :PATCH + update_mask: true + description: | + A Google Cloud Memcache instance. + references: !ruby/object:Api::Resource::ReferenceLinks + guides: + 'Official Documentation': + 'https://cloud.google.com/memcache/docs/creating-instances' + parameters: + - !ruby/object:Api::Type::String + name: 'region' + description: | + The name of the Memcache region of the instance. + required: true + input: true + url_param_only: true + properties: + - !ruby/object:Api::Type::String + name: 'name' + description: | + The resource name of the instance. + required: true + input: true + url_param_only: true + pattern: projects/{{project}}/locations/{{region}}/instances/{{name}} + - !ruby/object:Api::Type::String + name: 'displayName' + description: | + A user-visible name for the instance. + - !ruby/object:Api::Type::String + name: 'state' + description: | + The instance state - short description. + output: true + exclude: true + - !ruby/object:Api::Type::Array + name: 'instanceMessages' + description: | + Additional information about the instance state, if available. + output: true + exclude: true + item_type: !ruby/object:Api::Type::NestedObject + properties: + - !ruby/object:Api::Type::String + name: 'code' + description: An error code. + - !ruby/object:Api::Type::String + name: 'message' + description: The message to be displayed to a user. + - !ruby/object:Api::Type::Time + name: 'createTime' + description: Creation timestamp in RFC3339 text format. + output: true + - !ruby/object:Api::Type::KeyValuePairs + name: 'labels' + description: | + Resource labels to represent user-provided metadata. + - !ruby/object:Api::Type::Array + name: 'zones' + input: true + description: | + Zones where memcache nodes should be provisioned. If not + provided, all zones will be used. + item_type: Api::Type::String + - !ruby/object:Api::Type::String + name: 'authorizedNetwork' + input: true + description: | + The full name of the GCE network to connect the instance to. If not provided, + 'default' will be used. + - !ruby/object:Api::Type::Integer + name: nodeCount + description: | + Number of nodes in the memcache instance. + required: true + - !ruby/object:Api::Type::NestedObject + name: nodeConfig + description: | + Configuration for memcache nodes. + required: true + input: true + properties: + - !ruby/object:Api::Type::Integer + name: cpuCount + description: | + Number of CPUs per node. + required: true + - !ruby/object:Api::Type::Integer + name: memorySizeMb + description: | + Memory size in Mebibytes for each memcache node. 
+ required: true + - !ruby/object:Api::Type::NestedObject + name: parameters + description: | + User-specified parameters for this memcache instance. + input: true + properties: + - !ruby/object:Api::Type::String + name: id + output: true + description: | + This is a unique ID associated with this set of parameters. + - !ruby/object:Api::Type::KeyValuePairs + name: params + description: | + User-defined set of parameters to use in the memcache process. diff --git a/products/memcache/inspec.yaml b/products/memcache/inspec.yaml new file mode 100644 index 000000000000..ed65c44f040a --- /dev/null +++ b/products/memcache/inspec.yaml @@ -0,0 +1,17 @@ +# Copyright 2019 Google Inc. +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +--- !ruby/object:Provider::Inspec::Config +overrides: !ruby/object:Overrides::ResourceOverrides + Instance: !ruby/object:Overrides::Inspec::ResourceOverride + collection_url_key: resources \ No newline at end of file diff --git a/products/memcache/terraform.yaml b/products/memcache/terraform.yaml new file mode 100644 index 000000000000..85801e23b860 --- /dev/null +++ b/products/memcache/terraform.yaml @@ -0,0 +1,48 @@ +# Copyright 2020 Google Inc. +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +--- !ruby/object:Provider::Terraform::Config +overrides: !ruby/object:Overrides::ResourceOverrides + Instance: !ruby/object:Overrides::Terraform::ResourceOverride + timeouts: !ruby/object:Api::Timeouts + insert_minutes: 20 + update_minutes: 20 + delete_minutes: 20 + autogen_async: true + examples: + - !ruby/object:Provider::Terraform::Examples + min_version: beta + name: "memcache_instance_basic" + primary_resource_id: "instance" + vars: + instance_name: "test-instance" + properties: + name: !ruby/object:Overrides::Terraform::PropertyOverride + custom_flatten: 'templates/terraform/custom_flatten/name_from_self_link.erb' + displayName: !ruby/object:Overrides::Terraform::PropertyOverride + default_from_api: true + zones: !ruby/object:Overrides::Terraform::PropertyOverride + default_from_api: true + is_set: true + region: !ruby/object:Overrides::Terraform::PropertyOverride + ignore_read: true + parameters: !ruby/object:Overrides::Terraform::PropertyOverride + name: memcacheParameters + + +# This is for copying files over +files: !ruby/object:Provider::Config::Files + # These files have templating (ERB) code that will be run. + # This is usually to add licensing info, autogeneration notices, etc. 
+ compile: +<%= lines(indent(compile('provider/terraform/product~compile.yaml'), 4)) -%> diff --git a/products/monitoring/api.yaml b/products/monitoring/api.yaml index 91a41033b270..f9a37208e7bd 100644 --- a/products/monitoring/api.yaml +++ b/products/monitoring/api.yaml @@ -12,11 +12,11 @@ # limitations under the License. --- !ruby/object:Api::Product name: Monitoring -display_name: Stackdriver Monitoring +display_name: Cloud (Stackdriver) Monitoring versions: - !ruby/object:Api::Product::Version name: ga - base_url: https://monitoring.googleapis.com/v3/ + base_url: https://monitoring.googleapis.com/ scopes: - https://www.googleapis.com/auth/cloud-platform apis_required: @@ -26,8 +26,8 @@ apis_required: objects: - !ruby/object:Api::Resource name: 'AlertPolicy' - base_url: projects/{{project}}/alertPolicies - self_link: "{{name}}" + base_url: v3/projects/{{project}}/alertPolicies + self_link: "v3/{{name}}" update_verb: :PATCH update_mask: true description: | @@ -739,11 +739,10 @@ objects: The format of the content field. Presently, only the value "text/markdown" is supported. - - !ruby/object:Api::Resource name: 'Group' - base_url: projects/{{project}}/groups - self_link: "{{name}}" + base_url: v3/projects/{{project}}/groups + self_link: "v3/{{name}}" update_verb: :PUT description: | The description of a dynamic collection of monitored resources. Each group @@ -788,11 +787,10 @@ objects: The filter used to determine which monitored resources belong to this group. - - !ruby/object:Api::Resource name: NotificationChannel - base_url: projects/{{project}}/notificationChannels - self_link: "{{name}}" + base_url: v3/projects/{{project}}/notificationChannels + self_link: "v3/{{name}}" update_verb: :PATCH description: | A NotificationChannel is a medium through which an alert is delivered @@ -918,9 +916,9 @@ objects: - !ruby/object:Api::Resource name: Service - base_url: projects/{{project}}/services - create_url: projects/{{project}}/services?serviceId={{service_id}} - self_link: "{{name}}" + base_url: v3/projects/{{project}}/services + create_url: v3/projects/{{project}}/services?serviceId={{service_id}} + self_link: "v3/{{name}}" update_verb: :PATCH update_mask: true description: | @@ -963,6 +961,614 @@ objects: Formatted as described in https://cloud.google.com/apis/design/resource_names. + - !ruby/object:Api::Resource + name: Slo + base_url: v3/projects/{{project}}/services/{{service}}/serviceLevelObjectives + # name = projects/{{project}}/services/{{service}}/serviceLevelObjectives/{{slo_id}} + self_link: "v3/{{name}}" + create_url: v3/projects/{{project}}/services/{{service}}/serviceLevelObjectives?serviceLevelObjectiveId={{slo_id}} + update_verb: :PATCH + update_mask: true + description: | + A Service-Level Objective (SLO) describes the level of desired good + service. It consists of a service-level indicator (SLI), a performance + goal, and a period over which the objective is to be evaluated against + that goal. The SLO can use SLIs defined in a number of different manners. + Typical SLOs might include "99% of requests in each rolling week have + latency below 200 milliseconds" or "99.5% of requests in each calendar + month return successfully." 
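One subtlety in the `Slo` resource introduced here (its definition continues just below): the schema exposes `rollingPeriodDays` as an integer, while `api_name: rollingPeriod` remaps it onto the API field, which expects a duration string in seconds. A hypothetical expander sketching that day-to-duration conversion; the real provider wires this up through generated expand/flatten code:

```go
package main

import "fmt"

// expandRollingPeriod converts a user-facing rolling_period_days value
// into the "<seconds>s" duration string the API's rollingPeriod expects.
func expandRollingPeriod(days int) string {
	return fmt.Sprintf("%ds", days*24*60*60)
}

func main() {
	fmt.Println(expandRollingPeriod(7)) // 604800s, i.e. a 7-day rolling window
}
```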
+    references: !ruby/object:Api::Resource::ReferenceLinks
+      guides:
+        'Service Monitoring': 'https://cloud.google.com/monitoring/service-monitoring'
+        'Monitoring API Documentation': 'https://cloud.google.com/monitoring/api/v3/'
+      api: 'https://cloud.google.com/monitoring/api/ref_v3/rest/v3/services.serviceLevelObjectives'
+    parameters:
+      - !ruby/object:Api::Type::String
+        name: service
+        required: true
+        url_param_only: true
+        input: true
+        description: |
+          ID of the service to which this SLO belongs.
+      - !ruby/object:Api::Type::String
+        name: sloId
+        description: |
+          The id to use for this ServiceLevelObjective. If omitted, an id will be generated instead.
+        input: true
+    properties:
+      - !ruby/object:Api::Type::String
+        name: name
+        description: |
+          The full resource name for this ServiceLevelObjective. The syntax is:
+          projects/[PROJECT_ID_OR_NUMBER]/services/[SERVICE_ID]/serviceLevelObjectives/[SLO_NAME]
+        output: true
+      - !ruby/object:Api::Type::String
+        name: displayName
+        description: |
+          Name used for UI elements listing this SLO.
+      - !ruby/object:Api::Type::Double
+        name: goal
+        required: true
+        description: |
+          The fraction of service that must be good in order for this objective
+          to be met. 0 < goal <= 0.999
+      - !ruby/object:Api::Type::Integer
+        name: rollingPeriodDays
+        api_name: rollingPeriod
+        exactly_one_of:
+          - rolling_period_days
+          - calendar_period
+        description: |
+          A rolling time period, semantically "in the past X days".
+          Must be between 1 and 30 days, inclusive.
+      - !ruby/object:Api::Type::Enum
+        name: calendarPeriod
+        exactly_one_of:
+          - rolling_period_days
+          - calendar_period
+        description: |
+          A calendar period, semantically "since the start of the current
+          <calendarPeriod>".
+        values:
+          - DAY
+          - WEEK
+          - FORTNIGHT
+          - MONTH
+      - !ruby/object:Api::Type::NestedObject
+        name: serviceLevelIndicator
+        description: |
+          The serviceLevelIndicator (SLI) describes a good service.
+          It is used to measure and calculate the quality of the Service's
+          performance with respect to a single aspect of service quality.
+        properties:
+          - !ruby/object:Api::Type::NestedObject
+            name: basicSli
+            exactly_one_of:
+              - service_level_indicator.0.basic_sli
+              - service_level_indicator.0.request_based_sli
+              - service_level_indicator.0.windows_based_sli
+            description: |
+              Basic Service-Level Indicator (SLI) on a well-known service type.
+              Performance will be computed on the basis of pre-defined metrics.
+
+              SLIs are used to measure and calculate the quality of the Service's
+              performance with respect to a single aspect of service quality.
+
+              Exactly one of the following must be set:
+              `basic_sli`, `request_based_sli`, `windows_based_sli`
+            properties:
+              - !ruby/object:Api::Type::Array
+                name: method
+                description: |
+                  An optional set of RPCs to which this SLI is relevant.
+                  Telemetry from other methods will not be used to calculate
+                  performance for this SLI. If omitted, this SLI applies to all
+                  the Service's methods. For service types that don't support
+                  breaking down by method, setting this field will result in an
+                  error.
+                item_type: Api::Type::String
+              - !ruby/object:Api::Type::Array
+                name: location
+                description: |
+                  An optional set of locations to which this SLI is relevant.
+                  Telemetry from other locations will not be used to calculate
+                  performance for this SLI. If omitted, this SLI applies to all
+                  locations in which the Service has activity. For service types
+                  that don't support breaking down by location, setting this
+                  field will result in an error.
+ item_type: Api::Type::String + - !ruby/object:Api::Type::Array + name: version + description: | + The set of API versions to which this SLI is relevant. + Telemetry from other API versions will not be used to + calculate performance for this SLI. If omitted, + this SLI applies to all API versions. For service types + that don't support breaking down by version, setting this + field will result in an error. + item_type: Api::Type::String + - !ruby/object:Api::Type::NestedObject + name: latency + description: | + Parameters for a latency threshold SLI. + required: true + properties: + - !ruby/object:Api::Type::String + required: true + name: threshold + description: | + A duration string, e.g. 10s. + Good service is defined to be the count of requests made to + this service that return in no more than threshold. + - !ruby/object:Api::Type::NestedObject + name: requestBasedSli + api_name: 'requestBased' + exactly_one_of: + - service_level_indicator.0.basic_sli + - service_level_indicator.0.request_based_sli + - service_level_indicator.0.windows_based_sli + description: | + A request-based SLI defines a SLI for which atomic units of + service are counted directly. + + A SLI describes a good service. + It is used to measure and calculate the quality of the Service's + performance with respect to a single aspect of service quality. + Exactly one of the following must be set: + `basic_sli`, `request_based_sli`, `windows_based_sli` + properties: + # NOTE: If adding properties to requestBasedSli, remember to add to the + # custom updateMask fields in property overrides. + - !ruby/object:Api::Type::NestedObject + name: goodTotalRatio + exactly_one_of: + - service_level_indicator.0.request_based_sli.0.good_total_ratio + - service_level_indicator.0.request_based_sli.0.distribution_cut + description: | + A means to compute a ratio of `good_service` to `total_service`. + Defines computing this ratio with two TimeSeries [monitoring filters](https://cloud.google.com/monitoring/api/v3/filters) + Must specify exactly two of good, bad, and total service filters. + The relationship good_service + bad_service = total_service + will be assumed. + + Exactly one of `distribution_cut` or `good_total_ratio` can be set. + properties: + - !ruby/object:Api::Type::String + name: goodServiceFilter + at_least_one_of: + - service_level_indicator.0.request_based_sli.0.good_total_ratio.0.good_service_filter + - service_level_indicator.0.request_based_sli.0.good_total_ratio.0.bad_service_filter + - service_level_indicator.0.request_based_sli.0.good_total_ratio.0.total_service_filter + description: | + A TimeSeries [monitoring filter](https://cloud.google.com/monitoring/api/v3/filters) + quantifying good service provided. + Must have ValueType = DOUBLE or ValueType = INT64 and + must have MetricKind = DELTA or MetricKind = CUMULATIVE. + + Exactly two of `good_service_filter`,`bad_service_filter`,`total_service_filter` + must be set (good + bad = total is assumed). 
+ - !ruby/object:Api::Type::String + name: badServiceFilter + at_least_one_of: + - service_level_indicator.0.request_based_sli.0.good_total_ratio.0.good_service_filter + - service_level_indicator.0.request_based_sli.0.good_total_ratio.0.bad_service_filter + - service_level_indicator.0.request_based_sli.0.good_total_ratio.0.total_service_filter + description: | + A TimeSeries [monitoring filter](https://cloud.google.com/monitoring/api/v3/filters) + quantifying bad service provided, either demanded service that + was not provided or demanded service that was of inadequate + quality. + + Must have ValueType = DOUBLE or ValueType = INT64 and + must have MetricKind = DELTA or MetricKind = CUMULATIVE. + + Exactly two of `good_service_filter`,`bad_service_filter`,`total_service_filter` + must be set (good + bad = total is assumed). + - !ruby/object:Api::Type::String + name: totalServiceFilter + at_least_one_of: + - service_level_indicator.0.request_based_sli.0.good_total_ratio.0.good_service_filter + - service_level_indicator.0.request_based_sli.0.good_total_ratio.0.bad_service_filter + - service_level_indicator.0.request_based_sli.0.good_total_ratio.0.total_service_filter + description: | + A TimeSeries [monitoring filter](https://cloud.google.com/monitoring/api/v3/filters) + quantifying total demanded service. + + Must have ValueType = DOUBLE or ValueType = INT64 and + must have MetricKind = DELTA or MetricKind = CUMULATIVE. + + Exactly two of `good_service_filter`,`bad_service_filter`,`total_service_filter` + must be set (good + bad = total is assumed). + - !ruby/object:Api::Type::NestedObject + name: distributionCut + exactly_one_of: + - service_level_indicator.0.request_based_sli.0.good_total_ratio + - service_level_indicator.0.request_based_sli.0.distribution_cut + description: | + Used when good_service is defined by a count of values aggregated in a + Distribution that fall into a good range. The total_service is the + total count of all values aggregated in the Distribution. + Defines a distribution TimeSeries filter and thresholds used for + measuring good service and total service. + + Exactly one of `distribution_cut` or `good_total_ratio` can be set. + properties: + - !ruby/object:Api::Type::String + name: distributionFilter + required: true + description: | + A TimeSeries [monitoring filter](https://cloud.google.com/monitoring/api/v3/filters) + aggregating values to quantify the good service provided. + + Must have ValueType = DISTRIBUTION and + MetricKind = DELTA or MetricKind = CUMULATIVE. + - !ruby/object:Api::Type::NestedObject + name: range + required: true + description: | + Range of numerical values. The computed good_service + will be the count of values x in the Distribution such + that range.min <= x < range.max. inclusive of min and + exclusive of max. Open ranges can be defined by setting + just one of min or max. + properties: + - !ruby/object:Api::Type::Integer + name: min + at_least_one_of: + - service_level_indicator.0.request_based_sli.0.distribution_cut.0.range.0.min + - service_level_indicator.0.request_based_sli.0.distribution_cut.0.range.0.max + description: | + Min value for the range (inclusive). If not given, + will be set to "-infinity", defining an open range + "< range.max" + - !ruby/object:Api::Type::Integer + name: max + at_least_one_of: + - service_level_indicator.0.request_based_sli.0.distribution_cut.0.range.0.min + - service_level_indicator.0.request_based_sli.0.distribution_cut.0.range.0.max + description: | + max value for the range (inclusive). 
If not given, + will be set to "infinity", defining an open range + ">= range.min" + - !ruby/object:Api::Type::NestedObject + name: windowsBasedSli + api_name: 'windowsBased' + exactly_one_of: + - service_level_indicator.0.basic_sli + - service_level_indicator.0.request_based_sli + - service_level_indicator.0.windows_based_sli + description: | + A windows-based SLI defines the criteria for time windows. + good_service is defined based off the count of these time windows + for which the provided service was of good quality. + + A SLI describes a good service. It is used to measure and calculate + the quality of the Service's performance with respect to a single + aspect of service quality. + + Exactly one of the following must be set: + `basic_sli`, `request_based_sli`, `windows_based_sli` + properties: + # NOTE: If adding properties to windowsBasedSli, remember to add to the + # custom updateMask fields in property overrides. + - !ruby/object:Api::Type::String + name: windowPeriod + description: | + Duration over which window quality is evaluated, given as a + duration string "{X}s" representing X seconds. Must be an + integer fraction of a day and at least 60s. + # START window_criterion FIELDS + - !ruby/object:Api::Type::String + name: goodBadMetricFilter + exactly_one_of: + - service_level_indicator.0.windows_based_sli.0.good_bad_metric_filter + - service_level_indicator.0.windows_based_sli.0.good_total_ratio_threshold + - service_level_indicator.0.windows_based_sli.0.metric_mean_in_range + - service_level_indicator.0.windows_based_sli.0.metric_sum_in_range + description: | + A TimeSeries [monitoring filter](https://cloud.google.com/monitoring/api/v3/filters) + with ValueType = BOOL. The window is good if any true values + appear in the window. One of `good_bad_metric_filter`, + `good_total_ratio_threshold`, `metric_mean_in_range`, + `metric_sum_in_range` must be set for `windows_based_sli`. + - !ruby/object:Api::Type::NestedObject + name: goodTotalRatioThreshold + exactly_one_of: + - service_level_indicator.0.windows_based_sli.0.good_bad_metric_filter + - service_level_indicator.0.windows_based_sli.0.good_total_ratio_threshold + - service_level_indicator.0.windows_based_sli.0.metric_mean_in_range + - service_level_indicator.0.windows_based_sli.0.metric_sum_in_range + description: | + Criterion that describes a window as good if its performance is + high enough. One of `good_bad_metric_filter`, + `good_total_ratio_threshold`, `metric_mean_in_range`, + `metric_sum_in_range` must be set for `windows_based_sli`. + properties: + - !ruby/object:Api::Type::Double + name: threshold + description: | + If window performance >= threshold, the window is counted + as good. + - !ruby/object:Api::Type::NestedObject + name: performance + exactly_one_of: + - service_level_indicator.0.windows_based_sli.0.good_total_ratio_threshold.0.performance + - service_level_indicator.0.windows_based_sli.0.good_total_ratio_threshold.0.basic_sli_performance + description: | + Request-based SLI to evaluate to judge window quality. + properties: + - !ruby/object:Api::Type::NestedObject + name: goodTotalRatio + exactly_one_of: + - service_level_indicator.0.windows_based_sli.0.good_total_ratio_threshold.0.performance.0.good_total_ratio + - service_level_indicator.0.windows_based_sli.0.good_total_ratio_threshold.0.performance.0.distribution_cut + description: | + A means to compute a ratio of `good_service` to `total_service`. 
+ Defines computing this ratio with two TimeSeries [monitoring filters](https://cloud.google.com/monitoring/api/v3/filters) + Must specify exactly two of good, bad, and total service filters. + The relationship good_service + bad_service = total_service + will be assumed. + properties: + - !ruby/object:Api::Type::String + name: goodServiceFilter + at_least_one_of: + - service_level_indicator.0.windows_based_sli.0.good_total_ratio_threshold.0.performance.0.good_total_ratio.0.good_service_filter + - service_level_indicator.0.windows_based_sli.0.good_total_ratio_threshold.0.performance.0.good_total_ratio.0.bad_service_filter + - service_level_indicator.0.windows_based_sli.0.good_total_ratio_threshold.0.performance.0.good_total_ratio.0.total_service_filter + description: | + A TimeSeries [monitoring filter](https://cloud.google.com/monitoring/api/v3/filters) + quantifying good service provided. Exactly two of + good, bad, or total service filter must be defined (where + good + bad = total is assumed) + + Must have ValueType = DOUBLE or ValueType = INT64 and + must have MetricKind = DELTA or MetricKind = CUMULATIVE. + - !ruby/object:Api::Type::String + name: badServiceFilter + at_least_one_of: + - service_level_indicator.0.windows_based_sli.0.good_total_ratio_threshold.0.performance.0.good_total_ratio.0.good_service_filter + - service_level_indicator.0.windows_based_sli.0.good_total_ratio_threshold.0.performance.0.good_total_ratio.0.bad_service_filter + - service_level_indicator.0.windows_based_sli.0.good_total_ratio_threshold.0.performance.0.good_total_ratio.0.total_service_filter + description: | + A TimeSeries [monitoring filter](https://cloud.google.com/monitoring/api/v3/filters) + quantifying bad service provided, either demanded service that + was not provided or demanded service that was of inadequate + quality. Exactly two of + good, bad, or total service filter must be defined (where + good + bad = total is assumed) + + Must have ValueType = DOUBLE or ValueType = INT64 and + must have MetricKind = DELTA or MetricKind = CUMULATIVE. + - !ruby/object:Api::Type::String + name: totalServiceFilter + at_least_one_of: + - service_level_indicator.0.windows_based_sli.0.good_total_ratio_threshold.0.performance.0.good_total_ratio.0.good_service_filter + - service_level_indicator.0.windows_based_sli.0.good_total_ratio_threshold.0.performance.0.good_total_ratio.0.bad_service_filter + - service_level_indicator.0.windows_based_sli.0.good_total_ratio_threshold.0.performance.0.good_total_ratio.0.total_service_filter + description: | + A TimeSeries [monitoring filter](https://cloud.google.com/monitoring/api/v3/filters) + quantifying total demanded service. Exactly two of + good, bad, or total service filter must be defined (where + good + bad = total is assumed) + + Must have ValueType = DOUBLE or ValueType = INT64 and + must have MetricKind = DELTA or MetricKind = CUMULATIVE. + - !ruby/object:Api::Type::NestedObject + name: distributionCut + exactly_one_of: + - service_level_indicator.0.windows_based_sli.0.good_total_ratio_threshold.0.performance.0.good_total_ratio + - service_level_indicator.0.windows_based_sli.0.good_total_ratio_threshold.0.performance.0.distribution_cut + description: | + Used when good_service is defined by a count of values aggregated in a + Distribution that fall into a good range. The total_service is the + total count of all values aggregated in the Distribution. + Defines a distribution TimeSeries filter and thresholds used for + measuring good service and total service. 
+ properties: + - !ruby/object:Api::Type::String + name: distributionFilter + required: true + description: | + A TimeSeries [monitoring filter](https://cloud.google.com/monitoring/api/v3/filters) + aggregating values to quantify the good service provided. + + Must have ValueType = DISTRIBUTION and + MetricKind = DELTA or MetricKind = CUMULATIVE. + - !ruby/object:Api::Type::NestedObject + name: range + required: true + description: | + Range of numerical values. The computed good_service + will be the count of values x in the Distribution such + that range.min <= x < range.max. inclusive of min and + exclusive of max. Open ranges can be defined by setting + just one of min or max. + properties: + - !ruby/object:Api::Type::Integer + name: min + at_least_one_of: + - service_level_indicator.0.windows_based_sli.0.good_total_ratio_threshold.0.performance.0.distribution_cut.0.range.0.min + - service_level_indicator.0.windows_based_sli.0.good_total_ratio_threshold.0.performance.0.distribution_cut.0.range.0.max + description: | + Min value for the range (inclusive). If not given, + will be set to "-infinity", defining an open range + "< range.max" + - !ruby/object:Api::Type::Integer + name: max + at_least_one_of: + - service_level_indicator.0.windows_based_sli.0.good_total_ratio_threshold.0.performance.0.distribution_cut.0.range.0.min + - service_level_indicator.0.windows_based_sli.0.good_total_ratio_threshold.0.performance.0.distribution_cut.0.range.0.max + description: | + max value for the range (inclusive). If not given, + will be set to "infinity", defining an open range + ">= range.min" + - !ruby/object:Api::Type::NestedObject + name: basicSliPerformance + exactly_one_of: + - service_level_indicator.0.windows_based_sli.0.good_total_ratio_threshold.0.performance + - service_level_indicator.0.windows_based_sli.0.good_total_ratio_threshold.0.basic_sli_performance + description: | + Basic SLI to evaluate to judge window quality. + properties: + - !ruby/object:Api::Type::Array + name: method + description: | + An optional set of RPCs to which this SLI is relevant. + Telemetry from other methods will not be used to calculate + performance for this SLI. If omitted, this SLI applies to all + the Service's methods. For service types that don't support + breaking down by method, setting this field will result in an + error. + item_type: Api::Type::String + - !ruby/object:Api::Type::Array + name: location + description: | + An optional set of locations to which this SLI is relevant. + Telemetry from other locations will not be used to calculate + performance for this SLI. If omitted, this SLI applies to all + locations in which the Service has activity. For service types + that don't support breaking down by location, setting this + field will result in an error. + item_type: Api::Type::String + - !ruby/object:Api::Type::Array + name: version + description: | + The set of API versions to which this SLI is relevant. + Telemetry from other API versions will not be used to + calculate performance for this SLI. If omitted, + this SLI applies to all API versions. For service types + that don't support breaking down by version, setting this + field will result in an error. + item_type: Api::Type::String + - !ruby/object:Api::Type::NestedObject + name: latency + required: true + description: | + Parameters for a latency threshold SLI. + properties: + - !ruby/object:Api::Type::String + required: true + name: threshold + description: | + A duration string, e.g. 10s. 
+ Good service is defined to be the count of requests made to
+ this service that return in no more than threshold.
+ - !ruby/object:Api::Type::NestedObject
+ name: metricMeanInRange
+ exactly_one_of:
+ - service_level_indicator.0.windows_based_sli.0.good_bad_metric_filter
+ - service_level_indicator.0.windows_based_sli.0.good_total_ratio_threshold
+ - service_level_indicator.0.windows_based_sli.0.metric_mean_in_range
+ - service_level_indicator.0.windows_based_sli.0.metric_sum_in_range
+ description: |
+ Criterion that describes a window as good if the metric's value
+ is in a good range, *averaged* across returned streams.
+ Average value X of `time_series` should satisfy
+ `range.min <= X < range.max` for a good window.
+
+ One of `good_bad_metric_filter`,
+ `good_total_ratio_threshold`, `metric_mean_in_range`,
+ `metric_sum_in_range` must be set for `windows_based_sli`.
+ properties:
+ - !ruby/object:Api::Type::String
+ name: timeSeries
+ required: true
+ description: |
+ A [monitoring filter](https://cloud.google.com/monitoring/api/v3/filters)
+ specifying the TimeSeries to use for evaluating window
+ quality. The provided TimeSeries must have ValueType = INT64 or
+ ValueType = DOUBLE and MetricKind = GAUGE. Mean value `X`
+ should satisfy `range.min <= X < range.max`
+ for a good window.
+ - !ruby/object:Api::Type::NestedObject
+ name: range
+ required: true
+ description: |
+ Range of numerical values. The computed good_service
+ will be the count of values x in the Distribution such
+ that range.min <= x < range.max (inclusive of min and
+ exclusive of max). Open ranges can be defined by setting
+ just one of min or max. Mean value `X` of `time_series`
+ values should satisfy `range.min <= X < range.max` for a
+ good window.
+ properties:
+ - !ruby/object:Api::Type::Integer
+ name: min
+ at_least_one_of:
+ - service_level_indicator.0.windows_based_sli.0.metric_mean_in_range.0.range.0.min
+ - service_level_indicator.0.windows_based_sli.0.metric_mean_in_range.0.range.0.max
+ description: |
+ Min value for the range (inclusive). If not given,
+ will be set to "-infinity", defining an open range
+ "< range.max"
+ - !ruby/object:Api::Type::Integer
+ name: max
+ at_least_one_of:
+ - service_level_indicator.0.windows_based_sli.0.metric_mean_in_range.0.range.0.min
+ - service_level_indicator.0.windows_based_sli.0.metric_mean_in_range.0.range.0.max
+ description: |
+ Max value for the range (exclusive). If not given,
+ will be set to "infinity", defining an open range
+ ">= range.min"
+ - !ruby/object:Api::Type::NestedObject
+ name: metricSumInRange
+ exactly_one_of:
+ - service_level_indicator.0.windows_based_sli.0.good_bad_metric_filter
+ - service_level_indicator.0.windows_based_sli.0.good_total_ratio_threshold
+ - service_level_indicator.0.windows_based_sli.0.metric_mean_in_range
+ - service_level_indicator.0.windows_based_sli.0.metric_sum_in_range
+ description: |
+ Criterion that describes a window as good if the metric's value
+ is in a good range, *summed* across returned streams.
+ Summed value `X` of `time_series` should satisfy
+ `range.min <= X < range.max` for a good window.
+
+ One of `good_bad_metric_filter`,
+ `good_total_ratio_threshold`, `metric_mean_in_range`,
+ `metric_sum_in_range` must be set for `windows_based_sli`.
+ properties:
+ - !ruby/object:Api::Type::String
+ name: timeSeries
+ required: true
+ description: |
+ A [monitoring filter](https://cloud.google.com/monitoring/api/v3/filters)
+ specifying the TimeSeries to use for evaluating window
+ quality.
+ The provided TimeSeries must have
+ ValueType = INT64 or ValueType = DOUBLE and
+ MetricKind = GAUGE.
+
+ Summed value `X` should satisfy
+ `range.min <= X < range.max` for a good window.
+ - !ruby/object:Api::Type::NestedObject
+ name: range
+ required: true
+ description: |
+ Range of numerical values. The computed good_service
+ will be the count of values x in the Distribution such
+ that range.min <= x < range.max (inclusive of min and
+ exclusive of max). Open ranges can be defined by setting
+ just one of min or max. Summed value `X` should satisfy
+ `range.min <= X < range.max` for a good window.
+ properties:
+ - !ruby/object:Api::Type::Integer
+ name: min
+ at_least_one_of:
+ - service_level_indicator.0.windows_based_sli.0.metric_sum_in_range.0.range.0.min
+ - service_level_indicator.0.windows_based_sli.0.metric_sum_in_range.0.range.0.max
+ description: |
+ Min value for the range (inclusive). If not given,
+ will be set to "-infinity", defining an open range
+ "< range.max"
+ - !ruby/object:Api::Type::Integer
+ name: max
+ at_least_one_of:
+ - service_level_indicator.0.windows_based_sli.0.metric_sum_in_range.0.range.0.min
+ - service_level_indicator.0.windows_based_sli.0.metric_sum_in_range.0.range.0.max
+ description: |
+ Max value for the range (exclusive). If not given,
+ will be set to "infinity", defining an open range
+ ">= range.min"
+ # END window_criterion FIELDS
 - !ruby/object:Api::Resource
 name: UptimeCheckConfig
 update_verb: :PATCH
@@ -972,8 +1578,8 @@ objects:
 'Official Documentation':
 'https://cloud.google.com/monitoring/uptime-checks/'
 api: 'https://cloud.google.com/monitoring/api/ref_v3/rest/v3/projects.uptimeCheckConfigs'
- base_url: projects/{{project}}/uptimeCheckConfigs
- self_link: "{{name}}"
+ base_url: v3/projects/{{project}}/uptimeCheckConfigs
+ self_link: "v3/{{name}}"
 description: This message configures which resources and services to monitor
 for availability.
 properties:
@@ -1017,6 +1623,16 @@ objects:
 name: content
 description: String or regex content to match (max 1024 bytes)
 required: true
+ - !ruby/object:Api::Type::Enum
+ name: matcher
+ description: The type of content matcher that will be applied to the server output,
+ compared to the content string when the check is run.
+ default_value: :CONTAINS_STRING
+ values:
+ - :CONTAINS_STRING
+ - :NOT_CONTAINS_STRING
+ - :MATCHES_REGEX
+ - :NOT_MATCHES_REGEX
 - !ruby/object:Api::Type::Array
 name: selectedRegions
 description: The list of regions from which the check will be run. Some regions
@@ -1032,6 +1648,22 @@
 - http_check
 - tcp_check
 properties:
+ - !ruby/object:Api::Type::Enum
+ name: requestMethod
+ input: true
+ description: The HTTP request method to use for the check. If set to
+ METHOD_UNSPECIFIED then requestMethod defaults to GET.
+ default_value: :GET
+ values:
+ - :METHOD_UNSPECIFIED
+ - :GET
+ - :POST
+ - !ruby/object:Api::Type::Enum
+ name: contentType
+ description: The content type to use for the check.
+ values:
+ - :TYPE_UNSPECIFIED
+ - :URL_ENCODED
 - !ruby/object:Api::Type::NestedObject
 name: authInfo
 at_least_one_of:
@@ -1122,6 +1754,15 @@
 you do not wish to be seen when retrieving the configuration. The server
 will be responsible for encrypting the headers. On Get/List calls, if mask_headers
 is set to True then the headers will be obscured with ******.
+ - !ruby/object:Api::Type::String
+ name: body
+ description: The request body associated with the HTTP POST request. If contentType
+ is URL_ENCODED, the body passed in must be URL-encoded.
+ Users can provide a
+ Content-Length header via the headers field or the API will do so. If the
+ requestMethod is GET and body is not empty, the API will return an error. The
+ maximum byte size is 1 megabyte. Note - As with all bytes fields, JSON
+ representations are base64 encoded. e.g. "foo=bar" in URL-encoded form is
+ "foo%3Dbar" and in base64 encoding is "Zm9vJTNEYmFy".
 - !ruby/object:Api::Type::NestedObject
 name: tcpCheck
 description: Contains information needed to make a TCP check.
@@ -1187,4 +1828,166 @@
 required: true
 description: Values for all of the labels listed in the associated monitored
 resource descriptor. For example, Compute Engine VM instances use
- the labels "project_id", "instance_id", and "zone".
\ No newline at end of file
+ the labels "project_id", "instance_id", and "zone".
+
+ - !ruby/object:Api::Resource
+ name: MetricDescriptor
+ references: !ruby/object:Api::Resource::ReferenceLinks
+ guides:
+ 'Official Documentation':
+ 'https://cloud.google.com/monitoring/custom-metrics/'
+ api: 'https://cloud.google.com/monitoring/api/ref_v3/rest/v3/projects.metricDescriptors'
+ base_url: v3/projects/{{project}}/metricDescriptors
+ self_link: "v3/{{name}}"
+ update_verb: :POST
+ update_url: v3/projects/{{project}}/metricDescriptors
+ description: Defines a metric type and its schema. Once a metric descriptor is created,
+ deleting or altering it stops data collection and makes the metric type's existing data
+ unusable.
+ properties:
+ - !ruby/object:Api::Type::String
+ name: name
+ output: true
+ description: The resource name of the metric descriptor.
+ - !ruby/object:Api::Type::String
+ name: type
+ input: true
+ required: true
+ description: The metric type, including its DNS name prefix. The type is not
+ URL-encoded. All service defined metrics must be prefixed with the service name,
+ in the format of {service name}/{relative metric name}, such as
+ cloudsql.googleapis.com/database/cpu/utilization. The relative metric name may
+ contain only upper and lower-case letters, digits, '/' and underscores ('_').
+ Additionally, the maximum number of characters allowed for the
+ relative_metric_name is 100. All user-defined metric types have the DNS name
+ custom.googleapis.com, external.googleapis.com, or logging.googleapis.com/user/.
+ - !ruby/object:Api::Type::Array
+ name: labels
+ description: The set of labels that can be used to describe a specific instance of this
+ metric type. In order to delete a label, the entire resource must be deleted,
+ then created with the desired labels.
+ item_type: !ruby/object:Api::Type::NestedObject
+ properties:
+ - !ruby/object:Api::Type::String
+ name: key
+ required: true
+ description: The key for this label. The key must not exceed 100 characters. The
+ first character of the key must be an upper- or lower-case letter, the remaining
+ characters must be letters, digits or underscores, and the key must match the
+ regular expression [a-zA-Z][a-zA-Z0-9_]*
+ - !ruby/object:Api::Type::Enum
+ name: valueType
+ description: The type of data that can be assigned to the label.
+ default_value: :STRING
+ values:
+ - :STRING
+ - :BOOL
+ - :INT64
+ - !ruby/object:Api::Type::String
+ name: description
+ description: A human-readable description for the label.
+ - !ruby/object:Api::Type::Enum
+ name: metricKind
+ input: true
+ required: true
+ description: Whether the metric records instantaneous values, changes to a value, etc.
+ Some combinations of metricKind and valueType might not be supported.
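+ # For intuition (illustrative, not exhaustive): a GAUGE metric records a
+ # point-in-time measurement such as current queue depth, a DELTA metric
+ # records the change over each sampling interval, and a CUMULATIVE metric
+ # records a running total since a start time.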
+ values: + - :METRIC_KIND_UNSPECIFIED + - :GAUGE + - :DELTA + - :CUMULATIVE + - !ruby/object:Api::Type::Enum + name: valueType + input: true + required: true + description: Whether the measurement is an integer, a floating-point number, etc. Some + combinations of metricKind and valueType might not be supported. + values: + - :BOOL + - :INT64 + - :DOUBLE + - :STRING + - :DISTRIBUTION + - !ruby/object:Api::Type::String + name: unit + input: true + description: | + The units in which the metric value is reported. It is only applicable if the + valueType is INT64, DOUBLE, or DISTRIBUTION. The unit defines the representation of + the stored metric values. + + Different systems may scale the values to be more easily displayed (so a value of + 0.02KBy might be displayed as 20By, and a value of 3523KBy might be displayed as + 3.5MBy). However, if the unit is KBy, then the value of the metric is always in + thousands of bytes, no matter how it may be displayed. + + If you want a custom metric to record the exact number of CPU-seconds used by a job, + you can create an INT64 CUMULATIVE metric whose unit is s{CPU} (or equivalently + 1s{CPU} or just s). If the job uses 12,005 CPU-seconds, then the value is written as + 12005. + + Alternatively, if you want a custom metric to record data in a more granular way, you + can create a DOUBLE CUMULATIVE metric whose unit is ks{CPU}, and then write the value + 12.005 (which is 12005/1000), or use Kis{CPU} and write 11.723 (which is 12005/1024). + The supported units are a subset of The Unified Code for Units of Measure standard. + More info can be found in the API documentation + (https://cloud.google.com/monitoring/api/ref_v3/rest/v3/projects.metricDescriptors). + - !ruby/object:Api::Type::String + name: description + input: true + required: true + description: A detailed description of the metric, which can be used in documentation. + - !ruby/object:Api::Type::String + name: displayName + input: true + required: true + description: A concise name for the metric, which can be displayed in user interfaces. + Use sentence case without an ending period, for example "Request count". + - !ruby/object:Api::Type::NestedObject + name: metadata + input: true + description: Metadata which can be used to guide usage of the metric. + properties: + - !ruby/object:Api::Type::String + name: samplePeriod + at_least_one_of: + - metadata.0.sample_period + - metadata.0.ingest_delay + description: The sampling period of metric data points. For metrics which are + written periodically, consecutive data points are stored at this time interval, + excluding data loss due to errors. Metrics with a higher granularity have a + smaller sampling period. In + `[duration format](https://developers.google.com/protocol-buffers/docs/reference/google.protobuf?&_ga=2.264881487.1507873253.1593446723-935052455.1591817775#google.protobuf.Duration)`. + - !ruby/object:Api::Type::String + name: ingestDelay + at_least_one_of: + - metadata.0.sample_period + - metadata.0.ingest_delay + description: The delay of data points caused by ingestion. Data points older than + this age are guaranteed to be ingested and available to be read, excluding data + loss due to errors. In + `[duration format](https://developers.google.com/protocol-buffers/docs/reference/google.protobuf?&_ga=2.264881487.1507873253.1593446723-935052455.1591817775#google.protobuf.Duration)`. + - !ruby/object:Api::Type::Enum + name: launchStage + input: true + description: The launch stage of the metric definition. 
+ values: + - :LAUNCH_STAGE_UNSPECIFIED + - :UNIMPLEMENTED + - :PRELAUNCH + - :EARLY_ACCESS + - :ALPHA + - :BETA + - :GA + - :DEPRECATED + - !ruby/object:Api::Type::Array + name: monitoredResourceTypes + output: true + description: If present, then a time series, which is identified partially by + a metric type and a MonitoredResourceDescriptor, that is associated with this metric + type can only be associated with one of the monitored resource types listed here. + This field allows time series to be associated with the intersection of this metric + type and the monitored resource types in this list. + item_type: Api::Type::String + diff --git a/products/monitoring/inspec.yaml b/products/monitoring/inspec.yaml index 1f65cfd128e6..c7f0d95ff02d 100644 --- a/products/monitoring/inspec.yaml +++ b/products/monitoring/inspec.yaml @@ -18,7 +18,7 @@ overrides: !ruby/object:Overrides::ResourceOverrides additional_functions: third_party/inspec/custom_functions/alert_policy.erb singular_extra_examples: third_party/inspec/documentation/google_project_alert_policy.md plural_extra_examples: third_party/inspec/documentation/google_project_alert_policies.md - self_link: projects/{{project}}/alertPolicies/{{name}} + self_link: v3/projects/{{project}}/alertPolicies/{{name}} properties: name: !ruby/object:Overrides::Inspec::PropertyOverride override_name: policy_names @@ -32,5 +32,9 @@ overrides: !ruby/object:Overrides::ResourceOverrides exclude: true Service: !ruby/object:Overrides::Inspec::ResourceOverride exclude: true + Slo: !ruby/object:Overrides::Inspec::ResourceOverride + exclude: true UptimeCheckConfig: !ruby/object:Overrides::Inspec::ResourceOverride - exclude: true \ No newline at end of file + exclude: true + MetricDescriptor: !ruby/object:Overrides::Inspec::ResourceOverride + exclude: true diff --git a/products/monitoring/terraform.yaml b/products/monitoring/terraform.yaml index 190941a69742..7fb081990097 100644 --- a/products/monitoring/terraform.yaml +++ b/products/monitoring/terraform.yaml @@ -16,7 +16,7 @@ overrides: !ruby/object:Overrides::ResourceOverrides id_format: "{{name}}" import_format: ["{{name}}"] mutex: alertPolicy/{{project}} - error_retry_predicates: ["isMonitoringRetryableError"] + error_retry_predicates: ["isMonitoringConcurrentEditError"] examples: - !ruby/object:Provider::Terraform::Examples # skipping tests because the API is full of race conditions @@ -35,7 +35,7 @@ overrides: !ruby/object:Overrides::ResourceOverrides id_format: "{{name}}" import_format: ["{{name}}"] mutex: stackdriver/groups/{{project}} - error_retry_predicates: ["isMonitoringRetryableError"] + error_retry_predicates: ["isMonitoringConcurrentEditError"] examples: - !ruby/object:Provider::Terraform::Examples name: "monitoring_group_basic" @@ -59,7 +59,7 @@ overrides: !ruby/object:Overrides::ResourceOverrides id_format: "{{name}}" import_format: ["{{name}}"] mutex: stackdriver/notifications/{{project}} - error_retry_predicates: ["isMonitoringRetryableError"] + error_retry_predicates: ["isMonitoringConcurrentEditError"] examples: - !ruby/object:Provider::Terraform::Examples name: "notification_channel_basic" @@ -111,7 +111,7 @@ overrides: !ruby/object:Overrides::ResourceOverrides legacy_name: 'google_monitoring_custom_service' id_format: "{{name}}" import_format: ["{{name}}"] - error_retry_predicates: ["isMonitoringRetryableError"] + error_retry_predicates: ["isMonitoringConcurrentEditError"] properties: serviceId: !ruby/object:Overrides::Terraform::PropertyOverride api_name: 'name' @@ -130,10 +130,113 @@ 
 overrides: !ruby/object:Overrides::ResourceOverrides
 custom_import: templates/terraform/custom_import/self_link_as_name.erb
 encoder: templates/terraform/encoders/monitoring_service.go.erb
+ Slo: !ruby/object:Overrides::Terraform::ResourceOverride
+ id_format: "{{name}}"
+ import_format: ["{{name}}"]
+ mutex: monitoring/project/{{project}}/service/{{service}}
+ examples:
+ - !ruby/object:Provider::Terraform::Examples
+ name: "monitoring_slo_appengine"
+ primary_resource_id: "appeng_slo"
+ vars:
+ slo_id: "ae-slo"
+ - !ruby/object:Provider::Terraform::Examples
+ name: "monitoring_slo_request_based"
+ primary_resource_id: "request_based_slo"
+ test_env_vars:
+ project: :PROJECT_NAME
+ vars:
+ service_id: "custom-srv-request-slos"
+ slo_id: "consumed-api-slo"
+ - !ruby/object:Provider::Terraform::Examples
+ name: 'monitoring_slo_windows_based_good_bad_metric_filter'
+ primary_resource_id: "windows_based"
+ vars:
+ service_id: "custom-srv-windows-slos"
+ slo_id: "good-bad-metric-filter"
+ - !ruby/object:Provider::Terraform::Examples
+ name: 'monitoring_slo_windows_based_metric_mean'
+ primary_resource_id: "windows_based"
+ vars:
+ service_id: "custom-srv-windows-slos"
+ slo_id: "metric-mean-range"
+ - !ruby/object:Provider::Terraform::Examples
+ name: 'monitoring_slo_windows_based_metric_sum'
+ primary_resource_id: "windows_based"
+ vars:
+ service_id: "custom-srv-windows-slos"
+ slo_id: "metric-sum-range"
+ - !ruby/object:Provider::Terraform::Examples
+ name: 'monitoring_slo_windows_based_ratio_threshold'
+ primary_resource_id: "windows_based"
+ vars:
+ service_id: "custom-srv-windows-slos"
+ slo_id: "ratio-threshold"
+ properties:
+ rollingPeriodDays: !ruby/object:Overrides::Terraform::PropertyOverride
+ api_name: rollingPeriod
+ custom_flatten: templates/terraform/custom_flatten/duration_string_to_days.go.erb
+ custom_expand: templates/terraform/custom_expand/days_to_duration_string.go.erb
+ validation: !ruby/object:Provider::Terraform::Validation
+ function: 'validation.IntBetween(1, 30)'
+ sloId: !ruby/object:Overrides::Terraform::PropertyOverride
+ api_name: 'name'
+ custom_flatten: templates/terraform/custom_flatten/name_from_self_link.erb
+ default_from_api: true
+ validation: !ruby/object:Provider::Terraform::Validation
+ regex: '^[a-z0-9\-]+$'
+ goal: !ruby/object:Overrides::Terraform::PropertyOverride
+ validation: !ruby/object:Provider::Terraform::Validation
+ function: validateMonitoringSloGoal
+ serviceLevelIndicator: !ruby/object:Overrides::Terraform::PropertyOverride
+ flatten_object: true
+ serviceLevelIndicator.basicSli.method: !ruby/object:Overrides::Terraform::PropertyOverride
+ is_set: true
+ serviceLevelIndicator.basicSli.location: !ruby/object:Overrides::Terraform::PropertyOverride
+ is_set: true
+ serviceLevelIndicator.basicSli.version: !ruby/object:Overrides::Terraform::PropertyOverride
+ is_set: true
+ serviceLevelIndicator.requestBasedSli: !ruby/object:Overrides::Terraform::PropertyOverride
+ # Force update all nested fields to allow for unsetting values.
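+ # Sketch of the effect (behavior summarized, not a literal request): listing
+ # e.g. serviceLevelIndicator.requestBased.goodTotalRatio.badServiceFilter in
+ # the PATCH updateMask lets an update clear that filter when the new config
+ # omits it, rather than leaving the previous value in place on the API side.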
+ update_mask_fields: + - "serviceLevelIndicator.requestBased.goodTotalRatio.badServiceFilter" + - "serviceLevelIndicator.requestBased.goodTotalRatio.goodServiceFilter" + - "serviceLevelIndicator.requestBased.goodTotalRatio.totalServiceFilter" + - "serviceLevelIndicator.requestBased.distributionCut.range" + - "serviceLevelIndicator.requestBased.distributionCut.distributionFilter" + serviceLevelIndicator.windowsBasedSli: !ruby/object:Overrides::Terraform::PropertyOverride + # Force update nested fields to allow for unsetting values. + update_mask_fields: + - "serviceLevelIndicator.windowsBased.windowPeriod" + - "serviceLevelIndicator.windowsBased.goodBadMetricFilter" + - "serviceLevelIndicator.windowsBased.goodTotalRatioThreshold.threshold" + - "serviceLevelIndicator.windowsBased.goodTotalRatioThreshold.performance.goodTotalRatio.badServiceFilter" + - "serviceLevelIndicator.windowsBased.goodTotalRatioThreshold.performance.goodTotalRatio.goodServiceFilter" + - "serviceLevelIndicator.windowsBased.goodTotalRatioThreshold.performance.goodTotalRatio.totalServiceFilter" + - "serviceLevelIndicator.windowsBased.goodTotalRatioThreshold.performance.distributionCut.range" + - "serviceLevelIndicator.windowsBased.goodTotalRatioThreshold.performance.distributionCut.distributionFilter" + - "serviceLevelIndicator.windowsBased.goodTotalRatioThreshold.basicSliPerformance" + - "serviceLevelIndicator.windowsBased.metricMeanInRange.timeSeries" + - "serviceLevelIndicator.windowsBased.metricMeanInRange.range" + - "serviceLevelIndicator.windowsBased.metricSumInRange.timeSeries" + - "serviceLevelIndicator.windowsBased.metricSumInRange.range" + serviceLevelIndicator.windowsBasedSli.goodTotalRatioThreshold.basicSliPerformance.method: !ruby/object:Overrides::Terraform::PropertyOverride + is_set: true + serviceLevelIndicator.windowsBasedSli.goodTotalRatioThreshold.basicSliPerformance.location: !ruby/object:Overrides::Terraform::PropertyOverride + is_set: true + serviceLevelIndicator.windowsBasedSli.goodTotalRatioThreshold.basicSliPerformance.version: !ruby/object:Overrides::Terraform::PropertyOverride + is_set: true + + custom_code: !ruby/object:Provider::Terraform::CustomCode + constants: templates/terraform/constants/monitoring_slo.go.erb + custom_import: templates/terraform/custom_import/self_link_as_name.erb + encoder: templates/terraform/encoders/monitoring_slo.go.erb + UptimeCheckConfig: !ruby/object:Overrides::Terraform::ResourceOverride id_format: "{{name}}" import_format: ["{{name}}"] - error_retry_predicates: ["isMonitoringRetryableError"] + error_retry_predicates: ["isMonitoringConcurrentEditError"] + mutex: stackdriver/groups/{{project}} examples: - !ruby/object:Provider::Terraform::Examples name: "uptime_check_config_http" @@ -168,10 +271,52 @@ overrides: !ruby/object:Overrides::ResourceOverrides custom_flatten: "templates/terraform/custom_flatten/uptime_check_http_password.erb" httpCheck.port: !ruby/object:Overrides::Terraform::PropertyOverride default_from_api: true + httpCheck.headers: !ruby/object:Overrides::Terraform::PropertyOverride + default_from_api: true resourceGroup.groupId: !ruby/object:Overrides::Terraform::PropertyOverride custom_expand: "templates/terraform/custom_expand/resource_from_self_link.go.erb" custom_flatten: "templates/terraform/custom_flatten/group_id_to_name.erb" + MetricDescriptor: !ruby/object:Overrides::Terraform::ResourceOverride + async: !ruby/object:Provider::Terraform::PollAsync + check_response_func_existence: PollCheckForExistence + check_response_func_absence: 
PollCheckForAbsence + target_occurrences: 20 + actions: ['create', 'update', 'delete'] + operation: !ruby/object:Api::Async::Operation + timeouts: !ruby/object:Api::Timeouts + insert_minutes: 6 + update_minutes: 6 + delete_minutes: 6 + id_format: "{{name}}" + import_format: ["{{name}}"] + error_retry_predicates: ["isMonitoringConcurrentEditError"] + properties: + labels: !ruby/object:Overrides::Terraform::PropertyOverride + is_set: true + labels.valueType: !ruby/object:Overrides::Terraform::PropertyOverride + custom_flatten: "templates/terraform/custom_flatten/default_if_empty.erb" + metadata: !ruby/object:Overrides::Terraform::PropertyOverride + ignore_read: true + launchStage: !ruby/object:Overrides::Terraform::PropertyOverride + ignore_read: true + examples: + - !ruby/object:Provider::Terraform::Examples + name: "monitoring_metric_descriptor_basic" + primary_resource_id: "basic" + vars: + display_name: "metric-descriptor" + type: "daily_sales" + - !ruby/object:Provider::Terraform::Examples + name: "monitoring_metric_descriptor_alert" + primary_resource_id: "with_alert" + vars: + display_name: "metric-descriptor" + type: "daily_sales" + custom_code: !ruby/object:Provider::Terraform::CustomCode + custom_import: templates/terraform/custom_import/self_link_as_name.erb + + files: !ruby/object:Provider::Config::Files # These files have templating (ERB) code that will be run. # This is usually to add licensing info, autogeneration notices, etc. diff --git a/products/networkmanagement/api.yaml b/products/networkmanagement/api.yaml new file mode 100644 index 000000000000..7c1781b9b9d8 --- /dev/null +++ b/products/networkmanagement/api.yaml @@ -0,0 +1,212 @@ +# Copyright 2020 Google Inc. +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+
+--- !ruby/object:Api::Product
+name: NetworkManagement
+display_name: NetworkManagement
+scopes:
+ - https://www.googleapis.com/auth/cloud-platform
+versions:
+ - !ruby/object:Api::Product::Version
+ name: ga
+ base_url: https://networkmanagement.googleapis.com/v1/
+apis_required:
+ - !ruby/object:Api::Product::ApiReference
+ name: Network Management API
+ url: https://console.cloud.google.com/apis/library/networkmanagement.googleapis.com/
+async: !ruby/object:Api::OpAsync
+ operation: !ruby/object:Api::OpAsync::Operation
+ path: 'name'
+ base_url: '{{op_id}}'
+ wait_ms: 1000
+ result: !ruby/object:Api::OpAsync::Result
+ path: 'response'
+ resource_inside_response: true
+ status: !ruby/object:Api::OpAsync::Status
+ path: 'done'
+ complete: true
+ allowed:
+ - true
+ - false
+ error: !ruby/object:Api::OpAsync::Error
+ path: 'error'
+ message: 'message'
+objects:
+ - !ruby/object:Api::Resource
+ name: 'ConnectivityTest'
+ base_url: projects/{{project}}/locations/global/connectivityTests
+ create_url: projects/{{project}}/locations/global/connectivityTests?testId={{name}}
+ update_verb: :PATCH
+ update_mask: true
+ description: |
+ A connectivity test is a static analysis of your resource configurations
+ that enables you to evaluate connectivity to and from Google Cloud
+ resources in your Virtual Private Cloud (VPC) network.
+ references: !ruby/object:Api::Resource::ReferenceLinks
+ guides:
+ 'Official Documentation':
+ 'https://cloud.google.com/network-intelligence-center/docs'
+ api: 'https://cloud.google.com/network-intelligence-center/docs/connectivity-tests/reference/networkmanagement/rest/v1/projects.locations.global.connectivityTests'
+ iam_policy: !ruby/object:Api::Resource::IamPolicy
+ exclude: true
+ method_name_separator: ':'
+ parent_resource_attribute: 'connectivityTest'
+ import_format: ["projects/{{project}}/locations/global/connectivityTests/{{connectivityTest}}", "{{connectivityTest}}"]
+ properties:
+ - !ruby/object:Api::Type::String
+ name: name
+ description: |-
+ Unique name for the connectivity test.
+ required: true
+ input: true
+ - !ruby/object:Api::Type::String
+ name: description
+ description: |-
+ The user-supplied description of the Connectivity Test.
+ Maximum of 512 characters.
+ - !ruby/object:Api::Type::NestedObject
+ name: 'source'
+ required: true
+ description: |
+ Required. Source specification of the Connectivity Test.
+
+ You can use a combination of source IP address, virtual machine
+ (VM) instance, or Compute Engine network to uniquely identify the
+ source location.
+
+ Examples: If the source IP address is an internal IP address within
+ a Google Cloud Virtual Private Cloud (VPC) network, then you must
+ also specify the VPC network. Otherwise, specify the VM instance,
+ which already contains its internal IP address and VPC network
+ information.
+
+ If the source of the test is within an on-premises network, then
+ you must provide the destination VPC network.
+
+ If the source endpoint is a Compute Engine VM instance with multiple
+ network interfaces, the instance itself is not sufficient to
+ identify the endpoint. So, you must also specify the source IP
+ address or VPC network.
+
+ A reachability analysis proceeds even if the source location is
+ ambiguous. However, the test result may include endpoints that
+ you don't intend to test.
+ properties:
+ - !ruby/object:Api::Type::String
+ name: ipAddress
+ description: |-
+ The IP address of the endpoint, which can be an external or
+ internal IP.
+ An IPv6 address is only allowed when the test's
+ destination is a global load balancer VIP.
+ - !ruby/object:Api::Type::Integer
+ name: port
+ description: |-
+ The IP protocol port of the endpoint. Only applicable when
+ protocol is TCP or UDP.
+ - !ruby/object:Api::Type::String
+ name: instance
+ description: |-
+ A Compute Engine instance URI.
+ - !ruby/object:Api::Type::String
+ name: network
+ description: |-
+ A Compute Engine network URI.
+ - !ruby/object:Api::Type::Enum
+ name: networkType
+ description: |-
+ Type of the network where the endpoint is located.
+ values:
+ - :GCP_NETWORK
+ - :NON_GCP_NETWORK
+ - !ruby/object:Api::Type::String
+ name: projectId
+ description: |-
+ Project ID where the endpoint is located. The Project ID can be
+ derived from the URI if you provide a VM instance or network URI.
+ The following are two cases where you must provide the project ID:
+
+ 1. Only the IP address is specified, and the IP address is
+ within a GCP project.
+ 2. When you are using Shared VPC and the IP address
+ that you provide is from the service project. In this case,
+ the network that the IP address resides in is defined in the
+ host project.
+ - !ruby/object:Api::Type::NestedObject
+ name: 'destination'
+ required: true
+ description: |
+ Required. Destination specification of the Connectivity Test.
+
+ You can use a combination of destination IP address, Compute
+ Engine VM instance, or VPC network to uniquely identify the
+ destination location.
+
+ Even if the destination IP address is not unique, the source IP
+ location is unique. Usually, the analysis can infer the destination
+ endpoint from route information.
+
+ If the destination you specify is a VM instance and the instance has
+ multiple network interfaces, then you must also specify either a
+ destination IP address or VPC network to identify the destination
+ interface.
+
+ A reachability analysis proceeds even if the destination location
+ is ambiguous. However, the result can include endpoints that you
+ don't intend to test.
+ properties:
+ - !ruby/object:Api::Type::String
+ name: ipAddress
+ description: |-
+ The IP address of the endpoint, which can be an external or
+ internal IP. An IPv6 address is only allowed when the test's
+ destination is a global load balancer VIP.
+ - !ruby/object:Api::Type::Integer
+ name: port
+ description: |-
+ The IP protocol port of the endpoint. Only applicable when
+ protocol is TCP or UDP.
+ - !ruby/object:Api::Type::String
+ name: instance
+ description: |-
+ A Compute Engine instance URI.
+ - !ruby/object:Api::Type::String
+ name: network
+ description: |-
+ A Compute Engine network URI.
+ - !ruby/object:Api::Type::String
+ name: projectId
+ description: |-
+ Project ID where the endpoint is located. The Project ID can be
+ derived from the URI if you provide a VM instance or network URI.
+ The following are two cases where you must provide the project ID:
+
+ 1. Only the IP address is specified, and the IP address is
+ within a GCP project.
+ 2. When you are using Shared VPC and the IP address
+ that you provide is from the service project. In this case,
+ the network that the IP address resides in is defined in the
+ host project.
+ - !ruby/object:Api::Type::String
+ name: protocol
+ description: |-
+ IP Protocol of the test. When not provided, "TCP" is assumed.
+ default_value: "TCP"
+ - !ruby/object:Api::Type::Array
+ name: relatedProjects
+ description: |-
+ Other projects that may be relevant for reachability analysis.
+ This is applicable to scenarios where a test can cross project + boundaries. + item_type: Api::Type::String + - !ruby/object:Api::Type::KeyValuePairs + name: 'labels' + description: | + Resource labels to represent user-provided metadata. diff --git a/products/networkmanagement/terraform.yaml b/products/networkmanagement/terraform.yaml new file mode 100644 index 000000000000..2954652b4638 --- /dev/null +++ b/products/networkmanagement/terraform.yaml @@ -0,0 +1,55 @@ +# Copyright 2019 Google Inc. +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +--- !ruby/object:Provider::Terraform::Config +overrides: !ruby/object:Overrides::ResourceOverrides + ConnectivityTest: !ruby/object:Overrides::Terraform::ResourceOverride + filename_override: 'connectivity_test_resource' + id_format: projects/{{project}}/locations/global/connectivityTests/{{name}} + autogen_async: true + examples: + - !ruby/object:Provider::Terraform::Examples + name: "network_management_connectivity_test_instances" + primary_resource_id: "instance-test" + vars: + primary_resource_name: "conn-test-instances" + network_name: "conn-test-net" + source_instance: "source-vm" + dest_instance: "dest-vm" + - !ruby/object:Provider::Terraform::Examples + name: "network_management_connectivity_test_addresses" + primary_resource_id: "address-test" + vars: + primary_resource_name: "conn-test-addr" + network: "connectivity-vpc" + source_addr: "src-addr" + dest_addr: "dest-addr" + properties: + name: !ruby/object:Overrides::Terraform::PropertyOverride + custom_expand: 'templates/terraform/custom_expand/network_management_connectivity_test_name.go.erb' + custom_flatten: 'templates/terraform/custom_flatten/name_from_self_link.erb' + source: !ruby/object:Overrides::Terraform::PropertyOverride + update_mask_fields: + - "source.ipAddress" + - "source.port" + - "source.instance" + - "source.network" + - "source.networkType" + - "source.projectId" + destination: !ruby/object:Overrides::Terraform::PropertyOverride + update_mask_fields: + - "destination.ipAddress" + - "destination.port" + - "destination.instance" + - "destination.network" + - "destination.projectId" diff --git a/products/notebooks/api.yaml b/products/notebooks/api.yaml new file mode 100644 index 000000000000..3f5105abff77 --- /dev/null +++ b/products/notebooks/api.yaml @@ -0,0 +1,400 @@ +# Copyright 2020 Google Inc. +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +--- !ruby/object:Api::Product +name: Notebooks +display_name: Cloud AI Notebooks +versions: + - !ruby/object:Api::Product::Version + name: beta + base_url: https://notebooks.googleapis.com/v1beta1/ +scopes: + - https://www.googleapis.com/auth/cloud-platform +apis_required: + - !ruby/object:Api::Product::ApiReference + name: Cloud Notebooks API + url: https://console.cloud.google.com/apis/api/notebooks.googleapis.com +async: !ruby/object:Api::OpAsync + operation: !ruby/object:Api::OpAsync::Operation + base_url: '{{op_id}}' + path: 'name' + wait_ms: 1000 + result: !ruby/object:Api::OpAsync::Result + path: 'response' + resource_inside_response: true + status: !ruby/object:Api::OpAsync::Status + path: 'done' + complete: True + allowed: + - True + - False + error: !ruby/object:Api::OpAsync::Error + path: 'error' + message: 'message' +objects: + # Notebooks Environment + - !ruby/object:Api::Resource + name: 'Environment' + description: | + A Cloud AI Platform Notebook environment. + min_version: beta + base_url: projects/{{project}}/locations/{{location}}/environments + create_url: projects/{{project}}/locations/{{location}}/environments?environmentId={{name}} + self_link: projects/{{project}}/locations/{{location}}/environments/{{name}} + create_verb: :POST + references: !ruby/object:Api::Resource::ReferenceLinks + guides: + 'Official Documentation': + 'https://cloud.google.com/ai-platform-notebooks' + api: 'https://cloud.google.com/ai-platform/notebooks/docs/reference/rest' + properties: + - !ruby/object:Api::Type::String + name: 'name' + description: | + The name specified for the Environment instance. + Format: projects/{project_id}/locations/{location}/environments/{environmentId} + required: true + input: true + url_param_only: true + pattern: projects/{{project}}/locations/{{location}}/environments/{{name}} + - !ruby/object:Api::Type::ResourceRef + name: 'location' + description: 'A reference to the zone where the machine resides.' + resource: 'Location' + imports: 'name' + required: true + url_param_only: true + - !ruby/object:Api::Type::String + name: 'displayName' + description: | + Display name of this environment for the UI. + - !ruby/object:Api::Type::String + name: 'description' + description: | + A brief description of this environment. + - !ruby/object:Api::Type::String + name: 'postStartupScript' + description: | + Path to a Bash script that automatically runs after a notebook instance fully boots up. + The path must be a URL or Cloud Storage path. Example: "gs://path-to-file/file-name" + - !ruby/object:Api::Type::Time + name: 'createTime' + description: 'Instance creation time' + output: true + - !ruby/object:Api::Type::NestedObject + name: 'vmImage' + exactly_one_of: + - vm_image + - container_image + description: | + Use a Compute Engine VM image to start the notebook instance. + properties: + - !ruby/object:Api::Type::String + name: 'project' + description: | + The name of the Google Cloud project that this VM image belongs to. + Format: projects/{project_id} + required: true + - !ruby/object:Api::Type::String + name: 'imageName' + description: | + Use VM image name to find the image. + - !ruby/object:Api::Type::String + name: 'imageFamily' + description: | + Use this VM image family to find the image; the newest image in this family will be used. + - !ruby/object:Api::Type::NestedObject + name: 'containerImage' + exactly_one_of: + - vm_image + - container_image + description: | + Use a container image to start the notebook instance. 
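+ # Hypothetical example values: repository "gcr.io/deeplearning-platform-release/tf2-gpu"
+ # with tag "latest" would start the environment from the newest build of that
+ # container image (both values are illustrative, not defaults).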
+ properties: + - !ruby/object:Api::Type::String + name: 'repository' + description: | + The path to the container image repository. + For example: gcr.io/{project_id}/{imageName} + required: true + - !ruby/object:Api::Type::String + name: 'tag' + description: | + The tag of the container image. If not specified, this defaults to the latest tag. + # Notebooks Instance + - !ruby/object:Api::Resource + name: 'Instance' + description: | + A Cloud AI Platform Notebook instance. + min_version: beta + base_url: projects/{{project}}/locations/{{location}}/instances + create_url: projects/{{project}}/locations/{{location}}/instances?instanceId={{name}} + self_link: projects/{{project}}/locations/{{location}}/instances/{{name}} + create_verb: :POST + input: true + references: !ruby/object:Api::Resource::ReferenceLinks + guides: + 'Official Documentation': + 'https://cloud.google.com/ai-platform-notebooks' + api: 'https://cloud.google.com/ai-platform/notebooks/docs/reference/rest' + parameters: + - !ruby/object:Api::Type::ResourceRef + name: 'location' + description: 'A reference to the zone where the machine resides.' + resource: 'Location' + imports: 'selfLink' + required: true + input: true + url_param_only: true + properties: + - !ruby/object:Api::Type::String + name: 'name' + description: | + The name specified for the Notebook instance. + required: true + input: true + url_param_only: true + pattern: projects/{{project}}/locations/{{location}}/instances/{{name}} + - !ruby/object:Api::Type::String + name: 'machineType' + description: | + A reference to a machine type which defines VM kind. + required: true + # Machine Type is updatable, but requires the instance to be stopped, just like + # for compute instances. + # TODO: Implement allow_stopping_for_update here and for acceleratorConfig + # update_verb: :PATCH + # update_url: 'projects/{{project}}/locations/{{location}}/instances/{{name}}:setMachineType' + pattern: projects/{{project}}/zones/{{location}}/machineTypes/{{name}} + - !ruby/object:Api::Type::String + name: 'postStartupScript' + description: | + Path to a Bash script that automatically runs after a + notebook instance fully boots up. The path must be a URL + or Cloud Storage path (gs://path-to-file/file-name). + - !ruby/object:Api::Type::String + name: 'proxyUri' + description: | + The proxy endpoint that is used to access the Jupyter notebook. + output: true + - !ruby/object:Api::Type::String + name: 'instanceOwners' + description: | + The owner of this instance after creation. + Format: alias@example.com. + Currently supports one owner only. + If not specified, all of the service account users of + your VM instance's service account can use the instance. + - !ruby/object:Api::Type::String + name: 'serviceAccount' + description: | + The service account on this instance, giving access to other + Google Cloud services. You can use any service account within + the same project, but you must have the service account user + permission to use the instance. If not specified, + the Compute Engine default service account is used. + - !ruby/object:Api::Type::NestedObject + name: 'acceleratorConfig' + description: | + The hardware accelerator used on this instance. If you use accelerators, + make sure that your configuration has enough vCPUs and memory to support the + machineType you have selected. + # AcceleratorConfig is updatable, but requires the instance to be stopped, just like + # for compute instances. 
+ # TODO: Implement allow_stopping_for_update here and for machineType
+ # update_verb: :PATCH
+ # update_url: 'projects/{{project}}/locations/{{location}}/instances/{{name}}:setAccelerator'
+ properties:
+ - !ruby/object:Api::Type::Enum
+ name: 'type'
+ values:
+ - ACCELERATOR_TYPE_UNSPECIFIED
+ - NVIDIA_TESLA_K80
+ - NVIDIA_TESLA_P100
+ - NVIDIA_TESLA_V100
+ - NVIDIA_TESLA_P4
+ - NVIDIA_TESLA_T4
+ - NVIDIA_TESLA_T4_VWS
+ - NVIDIA_TESLA_P100_VWS
+ - NVIDIA_TESLA_P4_VWS
+ - TPU_V2
+ - TPU_V3
+ required: true
+ description: |
+ Type of this accelerator.
+ - !ruby/object:Api::Type::Integer
+ name: 'coreCount'
+ required: true
+ description: |
+ Count of cores of this accelerator.
+ - !ruby/object:Api::Type::Enum
+ name: 'state'
+ values:
+ - STATE_UNSPECIFIED
+ - STARTING
+ - PROVISIONING
+ - ACTIVE
+ - STOPPING
+ - STOPPED
+ - DELETED
+ description: |
+ The state of this instance.
+ output: true
+ - !ruby/object:Api::Type::Boolean
+ name: 'installGpuDriver'
+ description: |
+ Whether the end user authorizes Google Cloud to install a GPU driver
+ on this instance. If this field is empty or set to false, the GPU driver
+ won't be installed. Only applicable to instances with GPUs.
+ input: true
+ - !ruby/object:Api::Type::String
+ name: 'customGpuDriverPath'
+ description: |
+ Specify a custom Cloud Storage path where the GPU driver is stored.
+ If not specified, we'll automatically choose from official GPU drivers.
+ - !ruby/object:Api::Type::Enum
+ name: 'bootDiskType'
+ values:
+ - DISK_TYPE_UNSPECIFIED
+ - PD_STANDARD
+ - PD_SSD
+ description: |
+ Possible disk types for notebook instances.
+ - !ruby/object:Api::Type::Integer
+ name: 'bootDiskSizeGb'
+ description: |
+ The size of the boot disk in GB attached to this instance,
+ up to a maximum of 64000 GB (64 TB). The minimum recommended value is 100 GB.
+ If not specified, this defaults to 100.
+ - !ruby/object:Api::Type::Enum
+ name: 'dataDiskType'
+ values:
+ - DISK_TYPE_UNSPECIFIED
+ - PD_STANDARD
+ - PD_SSD
+ description: |
+ Possible disk types for notebook instances.
+ - !ruby/object:Api::Type::Integer
+ name: 'dataDiskSizeGb'
+ description: |
+ The size of the data disk in GB attached to this instance,
+ up to a maximum of 64000 GB (64 TB).
+ You can choose the size of the data disk based on how big your notebooks and data are.
+ If not specified, this defaults to 100.
+ - !ruby/object:Api::Type::Boolean
+ name: 'noRemoveDataDisk'
+ description: |
+ If true, the data disk will not be auto deleted when deleting the instance.
+ - !ruby/object:Api::Type::Enum
+ name: 'diskEncryption'
+ values:
+ - DISK_ENCRYPTION_UNSPECIFIED
+ - GMEK
+ - CMEK
+ description: |
+ Disk encryption method used on the boot and data disks, defaults to GMEK.
+ - !ruby/object:Api::Type::String
+ name: 'kmsKey'
+ description: |
+ The KMS key used to encrypt the disks, only applicable if diskEncryption is CMEK.
+ Format: projects/{project_id}/locations/{location}/keyRings/{key_ring_id}/cryptoKeys/{key_id}
+ - !ruby/object:Api::Type::Boolean
+ name: 'noPublicIp'
+ description: |
+ If true, no public IP will be assigned to this instance.
+ - !ruby/object:Api::Type::Boolean
+ name: 'noProxyAccess'
+ description: |
+ If true, the notebook instance will not register with the proxy.
+ - !ruby/object:Api::Type::String
+ name: 'network'
+ description: |
+ The name of the VPC that this instance is in.
+ Format: projects/{project_id}/global/networks/{network_id}
+ - !ruby/object:Api::Type::String
+ name: 'subnet'
+ description: |
+ The name of the subnet that this instance is in.
+ Format: projects/{project_id}/regions/{region}/subnetworks/{subnetwork_id} + - !ruby/object:Api::Type::KeyValuePairs + name: 'labels' + description: | + Labels to apply to this instance. These can be later modified by the setLabels method. + An object containing a list of "key": value pairs. Example: { "name": "wrench", "mass": "1.3kg", "count": "3" }. + update_verb: :PATCH + update_url: 'projects/{{project}}/locations/{{location}}/instances/{{name}}:setLabels' + - !ruby/object:Api::Type::KeyValuePairs + name: 'metadata' + description: | + Custom metadata to apply to this instance. + An object containing a list of "key": value pairs. Example: { "name": "wrench", "mass": "1.3kg", "count": "3" }. + - !ruby/object:Api::Type::Time + name: 'createTime' + description: 'Instance creation time' + output: true + - !ruby/object:Api::Type::Time + name: 'updateTime' + description: 'Instance update time.' + output: true + - !ruby/object:Api::Type::NestedObject + name: 'vmImage' + exactly_one_of: + - vm_image + - container_image + description: | + Use a Compute Engine VM image to start the notebook instance. + properties: + - !ruby/object:Api::Type::String + name: 'project' + description: | + The name of the Google Cloud project that this VM image belongs to. + Format: projects/{project_id} + required: true + - !ruby/object:Api::Type::String + name: 'imageFamily' + description: | + Use this VM image family to find the image; the newest image in this family will be used. + - !ruby/object:Api::Type::String + name: 'imageName' + description: | + Use VM image name to find the image. + - !ruby/object:Api::Type::NestedObject + name: 'containerImage' + exactly_one_of: + - vm_image + - container_image + description: | + Use a container image to start the notebook instance. + properties: + - !ruby/object:Api::Type::String + name: 'repository' + description: | + The path to the container image repository. + For example: gcr.io/{project_id}/{imageName} + required: true + - !ruby/object:Api::Type::String + name: 'tag' + description: | + The tag of the container image. If not specified, this defaults to the latest tag. + # Compute Zone (Location) + - !ruby/object:Api::Resource + name: 'Location' + kind: 'compute#zone' + base_url: projects/{{project}}/locations + collection_url_key: 'items' + has_self_link: true + readonly: true + description: 'Represents a Location resource.' + properties: + - !ruby/object:Api::Type::String + name: 'name' + description: 'Name of the Location resource.' \ No newline at end of file diff --git a/products/notebooks/terraform.yaml b/products/notebooks/terraform.yaml new file mode 100644 index 000000000000..e966c62b85e3 --- /dev/null +++ b/products/notebooks/terraform.yaml @@ -0,0 +1,113 @@ +# Copyright 2020 Google Inc. +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +--- !ruby/object:Provider::Terraform::Config +overrides: !ruby/object:Overrides::ResourceOverrides + Environment: !ruby/object:Overrides::Terraform::ResourceOverride + properties: + location: !ruby/object:Overrides::Terraform::PropertyOverride + diff_suppress_func: 'compareSelfLinkOrResourceName' + custom_expand: templates/terraform/custom_expand/resource_from_self_link.go.erb + custom_flatten: templates/terraform/custom_flatten/name_from_self_link.erb + examples: + - !ruby/object:Provider::Terraform::Examples + min_version: beta + name: "notebook_environment_basic" + primary_resource_id: "environment" + vars: + environment_name: "notebooks-environment" + Instance: !ruby/object:Overrides::Terraform::ResourceOverride + timeouts: !ruby/object:Api::Timeouts + insert_minutes: 15 + update_minutes: 15 + delete_minutes: 15 + autogen_async: true + examples: + - !ruby/object:Provider::Terraform::Examples + min_version: beta + name: "notebook_instance_basic" + primary_resource_id: "instance" + vars: + instance_name: "notebooks-instance" + - !ruby/object:Provider::Terraform::Examples + min_version: beta + name: "notebook_instance_basic_container" + primary_resource_id: "instance" + vars: + instance_name: "notebooks-instance" + - !ruby/object:Provider::Terraform::Examples + min_version: beta + name: "notebook_instance_basic_gpu" + primary_resource_id: "instance" + vars: + instance_name: "notebooks-instance" + - !ruby/object:Provider::Terraform::Examples + min_version: beta + name: "notebook_instance_full" + primary_resource_id: "instance" + vars: + instance_name: "notebooks-instance" + test_env_vars: + service_account: :SERVICE_ACCT + description: | + {{description}} + + ~> **Note:** Due to limitations of the Notebooks Instance API, many fields + in this resource do not properly detect drift. These fields will also not + appear in state once imported. + properties: + bootDiskSizeGb: !ruby/object:Overrides::Terraform::PropertyOverride + ignore_read: true + bootDiskType: !ruby/object:Overrides::Terraform::PropertyOverride + ignore_read: true + createTime: !ruby/object:Overrides::Terraform::PropertyOverride + default_from_api: true + containerImage: !ruby/object:Overrides::Terraform::PropertyOverride + ignore_read: true + dataDiskSizeGb: !ruby/object:Overrides::Terraform::PropertyOverride + ignore_read: true + instanceOwners: !ruby/object:Overrides::Terraform::PropertyOverride + ignore_read: true + machineType: !ruby/object:Overrides::Terraform::PropertyOverride + diff_suppress_func: 'compareSelfLinkOrResourceName' + custom_flatten: templates/terraform/custom_flatten/name_from_self_link.erb + metadata: !ruby/object:Overrides::Terraform::PropertyOverride + # This is not a traditional ignore_read. Metadata is returned from the API, + # but it gets merged with metadata that the server sets. This prevents us + # from detecting drift. 
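+ # (Illustrative): metadata set in config as {"owner": "team-a"} may read
+ # back with additional server-managed keys merged in, so a config/state
+ # comparison would always show a diff; hence reads are ignored here.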
+ ignore_read: true + name: !ruby/object:Overrides::Terraform::PropertyOverride + custom_flatten: 'templates/terraform/custom_flatten/name_from_self_link.erb' + network: !ruby/object:Overrides::Terraform::PropertyOverride + diff_suppress_func: compareSelfLinkOrResourceName + default_from_api: true + serviceAccount: !ruby/object:Overrides::Terraform::PropertyOverride + default_from_api: true + subnet: !ruby/object:Overrides::Terraform::PropertyOverride + diff_suppress_func: compareSelfLinkOrResourceName + default_from_api: true + updateTime: !ruby/object:Overrides::Terraform::PropertyOverride + default_from_api: true + vmImage: !ruby/object:Overrides::Terraform::PropertyOverride + ignore_read: true + Location: !ruby/object:Overrides::Terraform::ResourceOverride + properties: + name: !ruby/object:Overrides::Terraform::PropertyOverride + custom_flatten: templates/terraform/custom_flatten/name_from_self_link.erb + +# This is for copying files over +files: !ruby/object:Provider::Config::Files + # These files have templating (ERB) code that will be run. + # This is usually to add licensing info, autogeneration notices, etc. + compile: +<%= lines(indent(compile('provider/terraform/product~compile.yaml'), 4)) -%> \ No newline at end of file diff --git a/products/osconfig/api.yaml b/products/osconfig/api.yaml new file mode 100644 index 000000000000..0d79cf286e15 --- /dev/null +++ b/products/osconfig/api.yaml @@ -0,0 +1,1544 @@ +# Copyright 2020 google Inc. +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +--- !ruby/object:Api::Product +name: OSConfig +display_name: OS Config +versions: + - !ruby/object:Api::Product::Version + name: ga + base_url: https://osconfig.googleapis.com/v1/ + - !ruby/object:Api::Product::Version + name: beta + base_url: https://osconfig.googleapis.com/v1beta/ +apis_required: + - !ruby/object:Api::Product::ApiReference + name: Identity and Access Management (IAM) API + url: https://console.cloud.google.com/apis/library/iam.googleapis.com/ +scopes: + - https://www.googleapis.com/auth/cloud-platform + - https://www.googleapis.com/auth/compute +objects: + - !ruby/object:Api::Resource + name: 'PatchDeployment' + base_url: "projects/{{project}}/patchDeployments" + create_url: "projects/{{project}}/patchDeployments?patchDeploymentId={{patch_deployment_id}}" + self_link: "{{name}}" + description: | + Patch deployments are configurations that individual patch jobs use to complete a patch. + These configurations include instance filter, package repository settings, and a schedule. + references: !ruby/object:Api::Resource::ReferenceLinks + guides: + 'Official Documentation': + 'https://cloud.google.com/compute/docs/os-patch-management' + api: 'https://cloud.google.com/compute/docs/osconfig/rest' + input: true + parameters: + - !ruby/object:Api::Type::String + name: 'patchDeploymentId' + description: | + A name for the patch deployment in the project. When creating a name the following rules apply: + * Must contain only lowercase letters, numbers, and hyphens. + * Must start with a letter. 
+          * Must be between 1-63 characters.
+          * Must end with a number or a letter.
+          * Must be unique within the project.
+        required: true
+        url_param_only: true
+    properties:
+      - !ruby/object:Api::Type::String
+        name: 'name'
+        description: |
+          Unique name for the patch deployment resource in a project.
+          The patch deployment name is in the form: projects/{project_id}/patchDeployments/{patchDeploymentId}.
+        output: true
+      - !ruby/object:Api::Type::String
+        name: 'description'
+        description: |
+          Description of the patch deployment. Length of the description is limited to 1024 characters.
+      - !ruby/object:Api::Type::NestedObject
+        name: 'instanceFilter'
+        required: true
+        description: |
+          VM instances to patch.
+        properties:
+          - !ruby/object:Api::Type::Boolean
+            name: 'all'
+            at_least_one_of:
+              - instance_filter.0.all
+              - instance_filter.0.group_labels
+              - instance_filter.0.zones
+              - instance_filter.0.instances
+              - instance_filter.0.instance_name_prefixes
+            description: |
+              Target all VM instances in the project. If true, no other criteria are permitted.
+          - !ruby/object:Api::Type::Array
+            name: 'groupLabels'
+            at_least_one_of:
+              - instance_filter.0.all
+              - instance_filter.0.group_labels
+              - instance_filter.0.zones
+              - instance_filter.0.instances
+              - instance_filter.0.instance_name_prefixes
+            description: |
+              Targets VM instances matching ANY of these GroupLabels. This allows targeting of disparate groups of VM instances.
+            item_type: !ruby/object:Api::Type::NestedObject
+              properties:
+                - !ruby/object:Api::Type::KeyValuePairs
+                  name: 'labels'
+                  required: true
+                  description: |
+                    Compute Engine instance labels that must be present for a VM instance to be targeted by this filter.
+          - !ruby/object:Api::Type::Array
+            name: 'zones'
+            at_least_one_of:
+              - instance_filter.0.all
+              - instance_filter.0.group_labels
+              - instance_filter.0.zones
+              - instance_filter.0.instances
+              - instance_filter.0.instance_name_prefixes
+            description: |
+              Targets VM instances in ANY of these zones. Leave empty to target VM instances in any zone.
+            item_type: Api::Type::String
+          - !ruby/object:Api::Type::Array
+            name: 'instances'
+            at_least_one_of:
+              - instance_filter.0.all
+              - instance_filter.0.group_labels
+              - instance_filter.0.zones
+              - instance_filter.0.instances
+              - instance_filter.0.instance_name_prefixes
+            description: |
+              Targets any of the VM instances specified. Instances are specified by their URI in the form
+              `zones/{{zone}}/instances/{{instance_name}}`,
+              `projects/{{project_id}}/zones/{{zone}}/instances/{{instance_name}}`, or
+              `https://www.googleapis.com/compute/v1/projects/{{project_id}}/zones/{{zone}}/instances/{{instance_name}}`
+            item_type: Api::Type::String
+          - !ruby/object:Api::Type::Array
+            name: 'instanceNamePrefixes'
+            at_least_one_of:
+              - instance_filter.0.all
+              - instance_filter.0.group_labels
+              - instance_filter.0.zones
+              - instance_filter.0.instances
+              - instance_filter.0.instance_name_prefixes
+            description: |
+              Targets VMs whose name starts with one of these prefixes. Similar to labels, this is another way to group
+              VMs when targeting configs, for example prefix="prod-".
+            item_type: Api::Type::String
+      - !ruby/object:Api::Type::NestedObject
+        name: 'patchConfig'
+        description: |
+          Patch configuration that is applied.
+        properties:
+          - !ruby/object:Api::Type::Enum
+            name: 'rebootConfig'
+            description: |
+              Post-patch reboot settings.
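+            # For orientation, a hedged HCL sketch of how this config is expected to
+            # surface in the generated resource (the resource type
+            # google_os_config_patch_deployment and the snake_case field names are
+            # assumed, not confirmed by this file; values are hypothetical):
+            #
+            #   resource "google_os_config_patch_deployment" "patch" {
+            #     patch_deployment_id = "patch-deploy"
+            #     instance_filter {
+            #       all = true
+            #     }
+            #     one_time_schedule {
+            #       execute_time = "2999-10-10T10:10:10.045123456Z"
+            #     }
+            #     patch_config {
+            #       reboot_config = "ALWAYS"  # one of DEFAULT, ALWAYS, NEVER (values below)
+            #     }
+            #   }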
+            at_least_one_of:
+              - patch_config.0.reboot_config
+              - patch_config.0.apt
+              - patch_config.0.yum
+              - patch_config.0.goo
+              - patch_config.0.zypper
+              - patch_config.0.window_update
+              - patch_config.0.pre_step
+              - patch_config.0.post_step
+            values:
+              - :DEFAULT
+              - :ALWAYS
+              - :NEVER
+          - !ruby/object:Api::Type::NestedObject
+            name: 'apt'
+            description: |
+              Apt update settings. Use this setting to override the default apt patch rules.
+            at_least_one_of:
+              - patch_config.0.reboot_config
+              - patch_config.0.apt
+              - patch_config.0.yum
+              - patch_config.0.goo
+              - patch_config.0.zypper
+              - patch_config.0.window_update
+              - patch_config.0.pre_step
+              - patch_config.0.post_step
+            properties:
+              - !ruby/object:Api::Type::Enum
+                name: 'type'
+                at_least_one_of:
+                  - patch_config.0.apt.type
+                  - patch_config.0.apt.excludes
+                  - patch_config.0.apt.exclusive_packages
+                description: |
+                  By changing the type to DIST, the patching is performed using apt-get dist-upgrade instead.
+                values:
+                  - :DIST
+                  - :UPGRADE
+              - !ruby/object:Api::Type::Array
+                name: 'excludes'
+                at_least_one_of:
+                  - patch_config.0.apt.type
+                  - patch_config.0.apt.excludes
+                  - patch_config.0.apt.exclusive_packages
+                description: |
+                  List of packages to exclude from update.
+                item_type: Api::Type::String
+              - !ruby/object:Api::Type::Array
+                name: 'exclusivePackages'
+                at_least_one_of:
+                  - patch_config.0.apt.type
+                  - patch_config.0.apt.excludes
+                  - patch_config.0.apt.exclusive_packages
+                description: |
+                  An exclusive list of packages to be updated. These are the only packages that will be updated.
+                  If these packages are not installed, they will be ignored. This field cannot be specified with
+                  any other patch configuration fields.
+                item_type: Api::Type::String
+          - !ruby/object:Api::Type::NestedObject
+            name: 'yum'
+            description: |
+              Yum update settings. Use this setting to override the default yum patch rules.
+            at_least_one_of:
+              - patch_config.0.reboot_config
+              - patch_config.0.apt
+              - patch_config.0.yum
+              - patch_config.0.goo
+              - patch_config.0.zypper
+              - patch_config.0.window_update
+              - patch_config.0.pre_step
+              - patch_config.0.post_step
+            properties:
+              - !ruby/object:Api::Type::Boolean
+                name: 'security'
+                at_least_one_of:
+                  - patch_config.0.yum.security
+                  - patch_config.0.yum.minimal
+                  - patch_config.0.yum.excludes
+                  - patch_config.0.yum.exclusive_packages
+                description: |
+                  Adds the --security flag to yum update. Not supported on all platforms.
+              - !ruby/object:Api::Type::Boolean
+                name: 'minimal'
+                at_least_one_of:
+                  - patch_config.0.yum.security
+                  - patch_config.0.yum.minimal
+                  - patch_config.0.yum.excludes
+                  - patch_config.0.yum.exclusive_packages
+                description: |
+                  Will cause patch to run yum update-minimal instead.
+              - !ruby/object:Api::Type::Array
+                name: 'excludes'
+                at_least_one_of:
+                  - patch_config.0.yum.security
+                  - patch_config.0.yum.minimal
+                  - patch_config.0.yum.excludes
+                  - patch_config.0.yum.exclusive_packages
+                description: |
+                  List of packages to exclude from update.
+                item_type: Api::Type::String
+              - !ruby/object:Api::Type::Array
+                name: 'exclusivePackages'
+                at_least_one_of:
+                  - patch_config.0.yum.security
+                  - patch_config.0.yum.minimal
+                  - patch_config.0.yum.excludes
+                  - patch_config.0.yum.exclusive_packages
+                description: |
+                  An exclusive list of packages to be updated. These are the only packages that will be updated.
+                  If these packages are not installed, they will be ignored. This field cannot be specified with
+                  any other patch configuration fields.
+                item_type: Api::Type::String
+          - !ruby/object:Api::Type::NestedObject
+            name: 'goo'
+            description: |
+              goo update settings. Use this setting to override the default goo patch rules.
+            at_least_one_of:
+              - patch_config.0.reboot_config
+              - patch_config.0.apt
+              - patch_config.0.yum
+              - patch_config.0.goo
+              - patch_config.0.zypper
+              - patch_config.0.window_update
+              - patch_config.0.pre_step
+              - patch_config.0.post_step
+            properties:
+              - !ruby/object:Api::Type::Boolean
+                name: enabled
+                description: |
+                  goo update settings. Use this setting to override the default goo patch rules.
+                required: true
+          - !ruby/object:Api::Type::NestedObject
+            name: 'zypper'
+            description: |
+              zypper update settings. Use this setting to override the default zypper patch rules.
+            at_least_one_of:
+              - patch_config.0.reboot_config
+              - patch_config.0.apt
+              - patch_config.0.yum
+              - patch_config.0.goo
+              - patch_config.0.zypper
+              - patch_config.0.window_update
+              - patch_config.0.pre_step
+              - patch_config.0.post_step
+            properties:
+              - !ruby/object:Api::Type::Boolean
+                name: 'withOptional'
+                at_least_one_of:
+                  - patch_config.0.zypper.withOptional
+                  - patch_config.0.zypper.withUpdate
+                  - patch_config.0.zypper.categories
+                  - patch_config.0.zypper.severities
+                  - patch_config.0.zypper.excludes
+                  - patch_config.0.zypper.exclusive_patches
+                description: |
+                  Adds the --with-optional flag to zypper patch.
+              - !ruby/object:Api::Type::Boolean
+                name: 'withUpdate'
+                at_least_one_of:
+                  - patch_config.0.zypper.withOptional
+                  - patch_config.0.zypper.withUpdate
+                  - patch_config.0.zypper.categories
+                  - patch_config.0.zypper.severities
+                  - patch_config.0.zypper.excludes
+                  - patch_config.0.zypper.exclusive_patches
+                description: |
+                  Adds the --with-update flag to zypper patch.
+              - !ruby/object:Api::Type::Array
+                name: 'categories'
+                at_least_one_of:
+                  - patch_config.0.zypper.withOptional
+                  - patch_config.0.zypper.withUpdate
+                  - patch_config.0.zypper.categories
+                  - patch_config.0.zypper.severities
+                  - patch_config.0.zypper.excludes
+                  - patch_config.0.zypper.exclusive_patches
+                description: |
+                  Install only patches with these categories. Common categories include security, recommended, and feature.
+                item_type: Api::Type::String
+              - !ruby/object:Api::Type::Array
+                name: 'severities'
+                at_least_one_of:
+                  - patch_config.0.zypper.withOptional
+                  - patch_config.0.zypper.withUpdate
+                  - patch_config.0.zypper.categories
+                  - patch_config.0.zypper.severities
+                  - patch_config.0.zypper.excludes
+                  - patch_config.0.zypper.exclusive_patches
+                description: |
+                  Install only patches with these severities. Common severities include critical, important, moderate, and low.
+                item_type: Api::Type::String
+              - !ruby/object:Api::Type::Array
+                name: 'excludes'
+                at_least_one_of:
+                  - patch_config.0.zypper.withOptional
+                  - patch_config.0.zypper.withUpdate
+                  - patch_config.0.zypper.categories
+                  - patch_config.0.zypper.severities
+                  - patch_config.0.zypper.excludes
+                  - patch_config.0.zypper.exclusive_patches
+                description: |
+                  List of packages to exclude from update.
+                item_type: Api::Type::String
+              - !ruby/object:Api::Type::Array
+                name: 'exclusivePatches'
+                at_least_one_of:
+                  - patch_config.0.zypper.withOptional
+                  - patch_config.0.zypper.withUpdate
+                  - patch_config.0.zypper.categories
+                  - patch_config.0.zypper.severities
+                  - patch_config.0.zypper.excludes
+                  - patch_config.0.zypper.exclusive_patches
+                description: |
+                  An exclusive list of patches to be updated. These are the only patches that will be installed using the 'zypper patch patch:' command.
+                  This field must not be used with any other patch configuration fields.
+                item_type: Api::Type::String
+          - !ruby/object:Api::Type::NestedObject
+            name: 'windowsUpdate'
+            description: |
+              Windows update settings. Use this setting to override the default Windows patch rules.
+            at_least_one_of:
+              - patch_config.0.reboot_config
+              - patch_config.0.apt
+              - patch_config.0.yum
+              - patch_config.0.goo
+              - patch_config.0.zypper
+              - patch_config.0.window_update
+              - patch_config.0.pre_step
+              - patch_config.0.post_step
+            properties:
+              - !ruby/object:Api::Type::Enum
+                name: 'classifications'
+                at_least_one_of:
+                  - patch_config.0.windowsUpdate.classifications
+                  - patch_config.0.windowsUpdate.excludes
+                  - patch_config.0.windowsUpdate.exclusive_patches
+                description: |
+                  Only apply updates of these Windows update classifications. If empty, all updates are applied.
+                values:
+                  - :CRITICAL
+                  - :SECURITY
+                  - :DEFINITION
+                  - :DRIVER
+                  - :FEATURE_PACK
+                  - :SERVICE_PACK
+                  - :TOOL
+                  - :UPDATE_ROLLUP
+                  - :UPDATE
+              - !ruby/object:Api::Type::Array
+                name: 'excludes'
+                at_least_one_of:
+                  - patch_config.0.windowsUpdate.classifications
+                  - patch_config.0.windowsUpdate.excludes
+                  - patch_config.0.windowsUpdate.exclusive_patches
+                description: |
+                  List of KBs to exclude from update.
+                item_type: Api::Type::String
+              - !ruby/object:Api::Type::Array
+                name: 'exclusivePatches'
+                at_least_one_of:
+                  - patch_config.0.windowsUpdate.classifications
+                  - patch_config.0.windowsUpdate.excludes
+                  - patch_config.0.windowsUpdate.exclusive_patches
+                description: |
+                  An exclusive list of KBs to be updated. These are the only patches that will be updated.
+                  This field must not be used with other patch configurations.
+                item_type: Api::Type::String
+          - !ruby/object:Api::Type::NestedObject
+            name: 'preStep'
+            description: |
+              The ExecStep to run before the patch update.
+            at_least_one_of:
+              - patch_config.0.reboot_config
+              - patch_config.0.apt
+              - patch_config.0.yum
+              - patch_config.0.goo
+              - patch_config.0.zypper
+              - patch_config.0.window_update
+              - patch_config.0.pre_step
+              - patch_config.0.post_step
+            properties:
+              - !ruby/object:Api::Type::NestedObject
+                name: 'linuxExecStepConfig'
+                at_least_one_of:
+                  - patch_config.0.preStep.linux_exec_step_config
+                  - patch_config.0.preStep.windows_exec_step_config
+                description: |
+                  The ExecStepConfig for all Linux VMs targeted by the PatchJob.
+                properties:
+                  - !ruby/object:Api::Type::Array
+                    name: 'allowedSuccessCodes'
+                    description: |
+                      Defaults to [0]. A list of possible return values that the execution can return to indicate a success.
+                    item_type: Api::Type::Integer
+                  - !ruby/object:Api::Type::Enum
+                    name: 'interpreter'
+                    description: |
+                      The script interpreter to use to run the script. If no interpreter is specified the script will
+                      be executed directly, which will likely only succeed for scripts with shebang lines.
+                    values:
+                      - :SHELL
+                      - :POWERSHELL
+                  - !ruby/object:Api::Type::String
+                    name: 'localPath'
+                    description: |
+                      An absolute path to the executable on the VM.
+                    exactly_one_of:
+                      - patch_config.0.pre_step.0.linux_exec_step_config.0.local_path
+                      - patch_config.0.pre_step.0.linux_exec_step_config.0.gcs_object
+                  - !ruby/object:Api::Type::NestedObject
+                    name: 'gcsObject'
+                    description: |
+                      A Cloud Storage object containing the executable.
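+                    # A hedged HCL sketch of how this block is expected to surface in the
+                    # generated resource (snake_case field names assumed from the
+                    # properties defined below; the bucket/object values are hypothetical):
+                    #
+                    #   gcs_object {
+                    #     bucket            = "my-patch-scripts"
+                    #     object            = "pre_patch_script.sh"
+                    #     generation_number = 1234567
+                    #   }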
+ exactly_one_of: + - patch_config.0.pre_step.0.linux_exec_step_config.0.local_path + - patch_config.0.pre_step.0.linux_exec_step_config.0.gcs_object + properties: + - !ruby/object:Api::Type::String + name: 'bucket' + required: true + description: | + Bucket of the Cloud Storage object. + - !ruby/object:Api::Type::String + name: 'object' + required: true + description: | + Name of the Cloud Storage object. + - !ruby/object:Api::Type::String + name: 'generationNumber' + required: true + description: | + Generation number of the Cloud Storage object. This is used to ensure that the ExecStep specified by this PatchJob does not change. + - !ruby/object:Api::Type::NestedObject + name: 'windowsExecStepConfig' + at_least_one_of: + - patch_config.0.preStep.linux_exec_step_config + - patch_config.0.preStep.windows_exec_step_config + description: | + The ExecStepConfig for all Windows VMs targeted by the PatchJob. + properties: + - !ruby/object:Api::Type::Array + name: 'allowedSuccessCodes' + description: | + Defaults to [0]. A list of possible return values that the execution can return to indicate a success. + item_type: Api::Type::Integer + - !ruby/object:Api::Type::Enum + name: 'interpreter' + description: | + The script interpreter to use to run the script. If no interpreter is specified the script will + be executed directly, which will likely only succeed for scripts with shebang lines. + values: + - :SHELL + - :POWERSHELL + - !ruby/object:Api::Type::String + name: 'localPath' + description: | + An absolute path to the executable on the VM. + exactly_one_of: + - patch_config.0.pre_step.0.windows_exec_step_config.0.local_path + - patch_config.0.pre_step.0.windows_exec_step_config.0.gcs_object + - !ruby/object:Api::Type::NestedObject + name: 'gcsObject' + description: | + A Cloud Storage object containing the executable. + exactly_one_of: + - patch_config.0.pre_step.0.windows_exec_step_config.0.local_path + - patch_config.0.pre_step.0.windows_exec_step_config.0.gcs_object + properties: + - !ruby/object:Api::Type::String + name: 'bucket' + required: true + description: | + Bucket of the Cloud Storage object. + - !ruby/object:Api::Type::String + name: 'object' + required: true + description: | + Name of the Cloud Storage object. + - !ruby/object:Api::Type::String + name: 'generationNumber' + required: true + description: | + Generation number of the Cloud Storage object. This is used to ensure that the ExecStep specified by this PatchJob does not change. + - !ruby/object:Api::Type::NestedObject + name: 'postStep' + description: | + The ExecStep to run after the patch update. + at_least_one_of: + - patch_config.0.reboot_config + - patch_config.0.apt + - patch_config.0.yum + - patch_config.0.goo + - patch_config.0.zypper + - patch_config.0.window_update + - patch_config.0.pre_step + - patch_config.0.post_step + properties: + - !ruby/object:Api::Type::NestedObject + name: 'linuxExecStepConfig' + at_least_one_of: + - patch_config.0.post_step.linux_exec_step_config + - patch_config.0.post_step.windows_exec_step_config + description: | + The ExecStepConfig for all Linux VMs targeted by the PatchJob. + properties: + - !ruby/object:Api::Type::Array + name: 'allowedSuccessCodes' + description: | + Defaults to [0]. A list of possible return values that the execution can return to indicate a success. + item_type: Api::Type::Integer + - !ruby/object:Api::Type::Enum + name: 'interpreter' + description: | + The script interpreter to use to run the script. 
If no interpreter is specified the script will + be executed directly, which will likely only succeed for scripts with shebang lines. + values: + - :SHELL + - :POWERSHELL + - !ruby/object:Api::Type::String + name: 'localPath' + description: | + An absolute path to the executable on the VM. + exactly_one_of: + - patch_config.0.post_step.0.linux_exec_step_config.0.local_path + - patch_config.0.post_step.0.linux_exec_step_config.0.gcs_object + - !ruby/object:Api::Type::NestedObject + name: 'gcsObject' + description: | + A Cloud Storage object containing the executable. + exactly_one_of: + - patch_config.0.post_step.0.linux_exec_step_config.0.local_path + - patch_config.0.post_step.0.linux_exec_step_config.0.gcs_object + properties: + - !ruby/object:Api::Type::String + name: 'bucket' + required: true + description: | + Bucket of the Cloud Storage object. + - !ruby/object:Api::Type::String + name: 'object' + required: true + description: | + Name of the Cloud Storage object. + - !ruby/object:Api::Type::String + name: 'generationNumber' + required: true + description: | + Generation number of the Cloud Storage object. This is used to ensure that the ExecStep specified by this PatchJob does not change. + - !ruby/object:Api::Type::NestedObject + name: 'windowsExecStepConfig' + at_least_one_of: + - patch_config.0.post_step.linux_exec_step_config + - patch_config.0.post_step.windows_exec_step_config + description: | + The ExecStepConfig for all Windows VMs targeted by the PatchJob. + properties: + - !ruby/object:Api::Type::Array + name: 'allowedSuccessCodes' + description: | + Defaults to [0]. A list of possible return values that the execution can return to indicate a success. + item_type: Api::Type::Integer + - !ruby/object:Api::Type::Enum + name: 'interpreter' + description: | + The script interpreter to use to run the script. If no interpreter is specified the script will + be executed directly, which will likely only succeed for scripts with shebang lines. + values: + - :SHELL + - :POWERSHELL + - !ruby/object:Api::Type::String + name: 'localPath' + description: | + An absolute path to the executable on the VM. + exactly_one_of: + - patch_config.0.post_step.0.windows_exec_step_config.0.local_path + - patch_config.0.post_step.0.windows_exec_step_config.0.gcs_object + - !ruby/object:Api::Type::NestedObject + name: 'gcsObject' + description: | + A Cloud Storage object containing the executable. + exactly_one_of: + - patch_config.0.post_step.0.windows_exec_step_config.0.local_path + - patch_config.0.post_step.0.windows_exec_step_config.0.gcs_object + properties: + - !ruby/object:Api::Type::String + name: 'bucket' + required: true + description: | + Bucket of the Cloud Storage object. + - !ruby/object:Api::Type::String + name: 'object' + required: true + description: | + Name of the Cloud Storage object. + - !ruby/object:Api::Type::String + name: 'generationNumber' + required: true + description: | + Generation number of the Cloud Storage object. This is used to ensure that the ExecStep specified by this PatchJob does not change. + - !ruby/object:Api::Type::String + name: 'duration' + description: | + Duration of the patch. After the duration ends, the patch times out. + A duration in seconds with up to nine fractional digits, terminated by 's'. Example: "3.5s" + - !ruby/object:Api::Type::String + name: 'createTime' + output: true + description: | + Time the patch deployment was created. Timestamp is in RFC3339 text format. + A timestamp in RFC3339 UTC "Zulu" format, accurate to nanoseconds. 
Example: "2014-10-02T15:01:23.045123456Z". + - !ruby/object:Api::Type::String + name: 'updateTime' + output: true + description: | + Time the patch deployment was last updated. Timestamp is in RFC3339 text format. + A timestamp in RFC3339 UTC "Zulu" format, accurate to nanoseconds. Example: "2014-10-02T15:01:23.045123456Z". + - !ruby/object:Api::Type::String + name: 'lastExecuteTime' + output: true + description: | + The last time a patch job was started by this deployment. Timestamp is in RFC3339 text format. + A timestamp in RFC3339 UTC "Zulu" format, accurate to nanoseconds. Example: "2014-10-02T15:01:23.045123456Z". + - !ruby/object:Api::Type::NestedObject + name: 'oneTimeSchedule' + exactly_one_of: + - one_time_schedule + - recurring_schedule + description: | + Schedule a one-time execution. + properties: + - !ruby/object:Api::Type::String + name: 'executeTime' + required: true + description: | + The desired patch job execution time. A timestamp in RFC3339 UTC "Zulu" format, + accurate to nanoseconds. Example: "2014-10-02T15:01:23.045123456Z". + - !ruby/object:Api::Type::NestedObject + name: 'recurringSchedule' + exactly_one_of: + - one_time_schedule + - recurring_schedule + description: | + Schedule recurring executions. + properties: + - !ruby/object:Api::Type::NestedObject + name: 'timeZone' + required: true + description: | + Defines the time zone that timeOfDay is relative to. The rules for daylight saving time are + determined by the chosen time zone. + properties: + - !ruby/object:Api::Type::String + name: 'id' + required: true + description: | + IANA Time Zone Database time zone, e.g. "America/New_York". + - !ruby/object:Api::Type::String + name: 'version' + description: | + IANA Time Zone Database version number, e.g. "2019a". + - !ruby/object:Api::Type::String + name: 'startTime' + description: | + The time that the recurring schedule becomes effective. Defaults to createTime of the patch deployment. + A timestamp in RFC3339 UTC "Zulu" format, accurate to nanoseconds. Example: "2014-10-02T15:01:23.045123456Z". + - !ruby/object:Api::Type::String + name: 'endTime' + description: | + The end time at which a recurring patch deployment schedule is no longer active. + A timestamp in RFC3339 UTC "Zulu" format, accurate to nanoseconds. Example: "2014-10-02T15:01:23.045123456Z". + - !ruby/object:Api::Type::NestedObject + name: 'timeOfDay' + required: true + description: | + Time of the day to run a recurring deployment. + properties: + - !ruby/object:Api::Type::Integer + name: 'hours' + at_least_one_of: + - recurring_schedule.0.time_of_day.0.hours + - recurring_schedule.0.time_of_day.0.minutes + - recurring_schedule.0.time_of_day.0.seconds + - recurring_schedule.0.time_of_day.0.nanos + description: | + Hours of day in 24 hour format. Should be from 0 to 23. + An API may choose to allow the value "24:00:00" for scenarios like business closing time. + - !ruby/object:Api::Type::Integer + name: 'minutes' + at_least_one_of: + - recurring_schedule.0.time_of_day.0.hours + - recurring_schedule.0.time_of_day.0.minutes + - recurring_schedule.0.time_of_day.0.seconds + - recurring_schedule.0.time_of_day.0.nanos + description: | + Minutes of hour of day. Must be from 0 to 59. + - !ruby/object:Api::Type::Integer + name: 'seconds' + at_least_one_of: + - recurring_schedule.0.time_of_day.0.hours + - recurring_schedule.0.time_of_day.0.minutes + - recurring_schedule.0.time_of_day.0.seconds + - recurring_schedule.0.time_of_day.0.nanos + description: | + Seconds of minutes of the time. 
+                    Must normally be from 0 to 59. An API may allow the value 60 if it allows leap-seconds.
+                - !ruby/object:Api::Type::Integer
+                  name: 'nanos'
+                  at_least_one_of:
+                    - recurring_schedule.0.time_of_day.0.hours
+                    - recurring_schedule.0.time_of_day.0.minutes
+                    - recurring_schedule.0.time_of_day.0.seconds
+                    - recurring_schedule.0.time_of_day.0.nanos
+                  description: |
+                    Fractions of seconds in nanoseconds. Must be from 0 to 999,999,999.
+          - !ruby/object:Api::Type::Enum
+            name: 'frequency'
+            required: true
+            description: |
+              The frequency unit of this recurring schedule.
+            values:
+              - :WEEKLY
+              - :MONTHLY
+          - !ruby/object:Api::Type::String
+            name: 'lastExecuteTime'
+            output: true
+            description: |
+              The time the last patch job ran successfully.
+              A timestamp in RFC3339 UTC "Zulu" format, accurate to nanoseconds. Example: "2014-10-02T15:01:23.045123456Z".
+          - !ruby/object:Api::Type::String
+            name: 'nextExecuteTime'
+            output: true
+            description: |
+              The time the next patch job is scheduled to run.
+              A timestamp in RFC3339 UTC "Zulu" format, accurate to nanoseconds. Example: "2014-10-02T15:01:23.045123456Z".
+          - !ruby/object:Api::Type::NestedObject
+            name: 'weekly'
+            exactly_one_of:
+              - recurring_schedule.0.weekly
+              - recurring_schedule.0.monthly
+            description: |
+              Schedule with weekly executions.
+            properties:
+              - !ruby/object:Api::Type::Enum
+                name: 'dayOfWeek'
+                required: true
+                description: |
+                  A day of the week.
+                values:
+                  - :MONDAY
+                  - :TUESDAY
+                  - :WEDNESDAY
+                  - :THURSDAY
+                  - :FRIDAY
+                  - :SATURDAY
+                  - :SUNDAY
+          - !ruby/object:Api::Type::NestedObject
+            name: 'monthly'
+            exactly_one_of:
+              - recurring_schedule.0.weekly
+              - recurring_schedule.0.monthly
+            description: |
+              Schedule with monthly executions.
+            properties:
+              - !ruby/object:Api::Type::NestedObject
+                name: 'weekDayOfMonth'
+                exactly_one_of:
+                  - recurring_schedule.0.monthly.0.week_day_of_month
+                  - recurring_schedule.0.monthly.0.month_day
+                description: |
+                  Week day in a month.
+                properties:
+                  - !ruby/object:Api::Type::Integer
+                    name: 'weekOrdinal'
+                    required: true
+                    description: |
+                      Week number in a month. 1-4 indicates the 1st to 4th week of the month. -1 indicates the last week of the month.
+                  - !ruby/object:Api::Type::Enum
+                    name: 'dayOfWeek'
+                    required: true
+                    description: |
+                      A day of the week.
+                    values:
+                      - :MONDAY
+                      - :TUESDAY
+                      - :WEDNESDAY
+                      - :THURSDAY
+                      - :FRIDAY
+                      - :SATURDAY
+                      - :SUNDAY
+              - !ruby/object:Api::Type::Integer
+                name: 'monthDay'
+                exactly_one_of:
+                  - recurring_schedule.0.monthly.0.week_day_of_month
+                  - recurring_schedule.0.monthly.0.month_day
+                description: |
+                  One day of the month. 1-31 indicates the 1st to the 31st day. -1 indicates the last day of the month.
+                  Months without the target day will be skipped. For example, a schedule to run "every month on the 31st"
+                  will not run in February, April, June, etc.
+  - !ruby/object:Api::Resource
+    name: 'GuestPolicies'
+    base_url: "projects/{{project}}/guestPolicies"
+    create_url: "projects/{{project}}/guestPolicies?guestPolicyId={{guest_policy_id}}"
+    update_verb: :PATCH
+    self_link: "projects/{{project}}/guestPolicies/{{guest_policy_id}}"
+    min_version: beta
+    description: |
+      An OS Config resource representing a guest configuration policy. These policies represent
+      the desired state for VM instance guest environments including packages to install or remove,
+      package repository configurations, and software to install.
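+    # A hedged HCL sketch of the resource this definition is expected to generate
+    # (the resource type google_os_config_guest_policies in the beta provider and
+    # the snake_case field names are assumed; values are hypothetical):
+    #
+    #   resource "google_os_config_guest_policies" "example" {
+    #     provider        = google-beta
+    #     guest_policy_id = "guest-policy"
+    #     assignment {
+    #       zones = ["us-central1-a"]
+    #     }
+    #     packages {
+    #       name          = "my-package"
+    #       desired_state = "INSTALLED"
+    #     }
+    #   }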
+ references: !ruby/object:Api::Resource::ReferenceLinks + guides: + 'Official Documentation': + 'https://cloud.google.com/compute/docs/os-config-management' + api: 'https://cloud.google.com/compute/docs/osconfig/rest' + parameters: + - !ruby/object:Api::Type::String + name: 'guestPolicyId' + description: | + The logical name of the guest policy in the project with the following restrictions: + * Must contain only lowercase letters, numbers, and hyphens. + * Must start with a letter. + * Must be between 1-63 characters. + * Must end with a number or a letter. + * Must be unique within the project. + required: true + url_param_only: true + properties: + - !ruby/object:Api::Type::String + name: 'name' + description: | + Unique name of the resource in this project using one of the following forms: projects/{project_number}/guestPolicies/{guestPolicyId}. + output: true + - !ruby/object:Api::Type::String + name: 'description' + description: | + Description of the guest policy. Length of the description is limited to 1024 characters. + - !ruby/object:Api::Type::NestedObject + name: 'assignment' + required: true + description: | + Specifies the VM instances that are assigned to this policy. This allows you to target sets + or groups of VM instances by different parameters such as labels, names, OS, or zones. + If left empty, all VM instances underneath this policy are targeted. + At the same level in the resource hierarchy (that is within a project), the service prevents + the creation of multiple policies that conflict with each other. + For more information, see how the service + [handles assignment conflicts](https://cloud.google.com/compute/docs/os-config-management/create-guest-policy#handle-conflicts). + properties: + - !ruby/object:Api::Type::Array + name: 'groupLabels' + at_least_one_of: + - assignment.0.group_labels + - assignment.0.zones + - assignment.0.instances + - assignment.0.instance_name_prefixes + - assignment.0.os_types + description: | + Targets instances matching at least one of these label sets. This allows an assignment to target disparate groups, + for example "env=prod or env=staging". + item_type: !ruby/object:Api::Type::NestedObject + properties: + - !ruby/object:Api::Type::KeyValuePairs + name: 'labels' + required: true + description: | + Google Compute Engine instance labels that must be present for an instance to be included in this assignment group. + - !ruby/object:Api::Type::Array + name: 'zones' + at_least_one_of: + - assignment.0.group_labels + - assignment.0.zones + - assignment.0.instances + - assignment.0.instance_name_prefixes + - assignment.0.os_types + description: | + Targets instances in any of these zones. Leave empty to target instances in any zone. + Zonal targeting is uncommon and is supported to facilitate the management of changes by zone. + item_type: Api::Type::String + - !ruby/object:Api::Type::Array + name: 'instances' + at_least_one_of: + - assignment.0.group_labels + - assignment.0.zones + - assignment.0.instances + - assignment.0.instance_name_prefixes + - assignment.0.os_types + description: | + Targets any of the instances specified. Instances are specified by their URI in the form + zones/[ZONE]/instances/[INSTANCE_NAME]. + Instance targeting is uncommon and is supported to facilitate the management of changes + by the instance or to target specific VM instances for development and testing. + Only supported for project-level policies and must reference instances within this project. 
+            item_type: Api::Type::String
+          - !ruby/object:Api::Type::Array
+            name: 'instanceNamePrefixes'
+            at_least_one_of:
+              - assignment.0.group_labels
+              - assignment.0.zones
+              - assignment.0.instances
+              - assignment.0.instance_name_prefixes
+              - assignment.0.os_types
+            description: |
+              Targets VM instances whose name starts with one of these prefixes.
+              Like labels, this is another way to group VM instances when targeting configs,
+              for example prefix="prod-".
+              Only supported for project-level policies.
+            item_type: Api::Type::String
+          - !ruby/object:Api::Type::Array
+            name: 'osTypes'
+            at_least_one_of:
+              - assignment.0.group_labels
+              - assignment.0.zones
+              - assignment.0.instances
+              - assignment.0.instance_name_prefixes
+              - assignment.0.os_types
+            description: |
+              Targets VM instances matching at least one of the following OS types.
+              VM instances must match all supplied criteria for a given OsType to be included.
+            item_type: !ruby/object:Api::Type::NestedObject
+              properties:
+                - !ruby/object:Api::Type::String
+                  name: 'osShortName'
+                  description: |
+                    Targets VM instances with OS Inventory enabled and having the following OS short name, for example "debian" or "windows".
+                - !ruby/object:Api::Type::String
+                  name: 'osVersion'
+                  description: |
+                    Targets VM instances with OS Inventory enabled and having the following OS version.
+                - !ruby/object:Api::Type::String
+                  name: 'osArchitecture'
+                  description: |
+                    Targets VM instances with OS Inventory enabled and having the following OS architecture.
+      - !ruby/object:Api::Type::Array
+        name: 'packages'
+        description: |
+          The software packages to be managed by this policy.
+        item_type: !ruby/object:Api::Type::NestedObject
+          properties:
+            - !ruby/object:Api::Type::String
+              name: 'name'
+              description: |
+                The name of the package. A package is uniquely identified for conflict validation
+                by checking the package name and the manager(s) that the package targets.
+              required: true
+            - !ruby/object:Api::Type::Enum
+              name: 'desiredState'
+              description: |
+                The desiredState the agent should maintain for this package. The default is to ensure the package is installed.
+              values:
+                - :INSTALLED
+                - :UPDATED
+                - :REMOVED
+            - !ruby/object:Api::Type::Enum
+              name: 'manager'
+              description: |
+                Type of package manager that can be used to install this package. If a system does not have the package manager,
+                the package is not installed or removed, and no error message is returned. By default, or if you specify ANY,
+                the agent attempts to install and remove this package using the default package manager.
+                This is useful when creating a policy that applies to different types of systems.
+                The default behavior is ANY.
+              default_value: :ANY
+              values:
+                - :ANY
+                - :APT
+                - :YUM
+                - :ZYPPER
+                - :GOO
+      - !ruby/object:Api::Type::Array
+        name: 'packageRepositories'
+        description: |
+          A list of package repositories to configure on the VM instance.
+          This is done before any other configs are applied so they can use these repos.
+          Package repositories are only configured if the corresponding package manager(s) are available.
+        item_type: !ruby/object:Api::Type::NestedObject
+          properties:
+            - !ruby/object:Api::Type::NestedObject
+              name: 'apt'
+              description: |
+                An Apt Repository.
+              # TODO (mbang): add exactly_one_of when it can be applied to lists (https://github.com/hashicorp/terraform-plugin-sdk/issues/470)
+              properties:
+                - !ruby/object:Api::Type::Enum
+                  name: 'archiveType'
+                  description: |
+                    Type of archive files in this repository. The default behavior is DEB.
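+                  # A hedged HCL sketch of an apt repository entry (snake_case field
+                  # names assumed from the properties in this block; the URI and
+                  # distribution values are hypothetical):
+                  #
+                  #   package_repositories {
+                  #     apt {
+                  #       archive_type = "DEB"
+                  #       uri          = "https://packages.example.com/debian"
+                  #       distribution = "stable"
+                  #       components   = ["main"]
+                  #     }
+                  #   }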
+ default_value: :DEB + values: + - :DEB + - :DEB_SRC + - !ruby/object:Api::Type::String + name: 'uri' + description: | + URI for this repository. + required: true + - !ruby/object:Api::Type::String + name: 'distribution' + description: | + Distribution of this repository. + required: true + - !ruby/object:Api::Type::Array + name: 'components' + description: | + List of components for this repository. Must contain at least one item. + required: true + item_type: Api::Type::String + - !ruby/object:Api::Type::String + name: 'gpgKey' + description: | + URI of the key file for this repository. The agent maintains a keyring at + /etc/apt/trusted.gpg.d/osconfig_agent_managed.gpg containing all the keys in any applied guest policy. + - !ruby/object:Api::Type::NestedObject + name: 'yum' + description: | + A Yum Repository. + # TODO (mbang): add exactly_one_of when it can be applied to lists (https://github.com/hashicorp/terraform-plugin-sdk/issues/470) + properties: + - !ruby/object:Api::Type::String + name: 'id' + description: | + A one word, unique name for this repository. This is the repo id in the Yum config file and also the displayName + if displayName is omitted. This id is also used as the unique identifier when checking for guest policy conflicts. + required: true + - !ruby/object:Api::Type::String + name: 'displayName' + description: | + The display name of the repository. + - !ruby/object:Api::Type::String + name: 'baseUrl' + description: | + The location of the repository directory. + required: true + - !ruby/object:Api::Type::Array + name: 'gpgKeys' + description: | + URIs of GPG keys. + item_type: Api::Type::String + - !ruby/object:Api::Type::NestedObject + name: 'zypper' + description: | + A Zypper Repository. + # TODO (mbang): add exactly_one_of when it can be applied to lists (https://github.com/hashicorp/terraform-plugin-sdk/issues/470) + properties: + - !ruby/object:Api::Type::String + name: 'id' + description: | + A one word, unique name for this repository. This is the repo id in the zypper config file and also the displayName + if displayName is omitted. This id is also used as the unique identifier when checking for guest policy conflicts. + required: true + - !ruby/object:Api::Type::String + name: 'displayName' + description: | + The display name of the repository. + - !ruby/object:Api::Type::String + name: 'baseUrl' + description: | + The location of the repository directory. + required: true + - !ruby/object:Api::Type::Array + name: 'gpgKeys' + description: | + URIs of GPG keys. + item_type: Api::Type::String + - !ruby/object:Api::Type::NestedObject + name: 'goo' + description: | + A Goo Repository. + # TODO (mbang): add exactly_one_of when it can be applied to lists (https://github.com/hashicorp/terraform-plugin-sdk/issues/470) + properties: + - !ruby/object:Api::Type::String + name: 'name' + description: | + The name of the repository. + required: true + - !ruby/object:Api::Type::String + name: 'url' + description: | + The url of the repository. + required: true + - !ruby/object:Api::Type::Array + name: 'recipes' + description: | + A list of Recipes to install on the VM instance. + item_type: !ruby/object:Api::Type::NestedObject + properties: + - !ruby/object:Api::Type::String + name: 'name' + description: | + Unique identifier for the recipe. Only one recipe with a given name is installed on an instance. + Names are also used to identify resources which helps to determine whether guest policies have conflicts. 
+                This means that requests to create multiple recipes with the same name and version are rejected since they
+                could potentially have conflicting assignments.
+              required: true
+            - !ruby/object:Api::Type::String
+              name: 'version'
+              description: |
+                The version of this software recipe. Version can be up to 4 period separated numbers (e.g. 12.34.56.78).
+            - !ruby/object:Api::Type::Array
+              name: 'artifacts'
+              description: |
+                Resources available to be used in the steps in the recipe.
+              item_type: !ruby/object:Api::Type::NestedObject
+                properties:
+                  - !ruby/object:Api::Type::String
+                    name: 'id'
+                    description: |
+                      Id of the artifact, which the installation and update steps of this recipe can reference.
+                      Artifacts in a recipe cannot have the same id.
+                    required: true
+                  - !ruby/object:Api::Type::Boolean
+                    name: 'allowInsecure'
+                    description: |
+                      Defaults to false. When false, recipes are subject to validations based on the artifact type:
+                      Remote: A checksum must be specified, and only protocols with transport-layer security are permitted.
+                      GCS: An object generation number must be specified.
+                    default_value: false
+                  - !ruby/object:Api::Type::NestedObject
+                    name: 'remote'
+                    description: |
+                      A generic remote artifact.
+                    # TODO (mbang): add conflicts_with when it can be applied to lists (https://github.com/hashicorp/terraform-plugin-sdk/issues/470)
+                    properties:
+                      - !ruby/object:Api::Type::String
+                        name: 'uri'
+                        description: |
+                          URI from which to fetch the object. It should contain both the protocol and path following the format {protocol}://{location}.
+                      - !ruby/object:Api::Type::String
+                        name: 'checkSum'
+                        description: |
+                          Must be provided if allowInsecure is false. SHA256 checksum in hex format, to compare to the checksum of the artifact.
+                          If the checksum is not empty and it doesn't match the artifact then the recipe installation fails before running any
+                          of the steps.
+                  - !ruby/object:Api::Type::NestedObject
+                    name: 'gcs'
+                    description: |
+                      A Google Cloud Storage artifact.
+                    # TODO (mbang): add conflicts_with when it can be applied to lists (https://github.com/hashicorp/terraform-plugin-sdk/issues/470)
+                    properties:
+                      - !ruby/object:Api::Type::String
+                        name: 'bucket'
+                        description: |
+                          Bucket of the Google Cloud Storage object. Given an example URL: https://storage.googleapis.com/my-bucket/foo/bar#1234567
+                          this value would be my-bucket.
+                      - !ruby/object:Api::Type::String
+                        name: 'object'
+                        description: |
+                          Name of the Google Cloud Storage object. Given an example URL: https://storage.googleapis.com/my-bucket/foo/bar#1234567
+                          this value would be foo/bar.
+                      - !ruby/object:Api::Type::Integer
+                        name: 'generation'
+                        description: |
+                          Must be provided if allowInsecure is false. Generation number of the Google Cloud Storage object. Given an example URL:
+                          https://storage.googleapis.com/my-bucket/foo/bar#1234567 this value would be 1234567.
+            - !ruby/object:Api::Type::Array
+              name: 'installSteps'
+              description: |
+                Actions to be taken for installing this recipe. On failure it stops executing steps and does not attempt another installation.
+                Any steps taken (including partially completed steps) are not rolled back.
+              item_type: !ruby/object:Api::Type::NestedObject
+                properties:
+                  - !ruby/object:Api::Type::NestedObject
+                    name: 'fileCopy'
+                    description: |
+                      Copies a file onto the instance.
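+                    # A hedged HCL sketch of an install step using this block (snake_case
+                    # field names assumed; the artifact id and paths are hypothetical):
+                    #
+                    #   install_steps {
+                    #     file_copy {
+                    #       artifact_id = "my-artifact"
+                    #       destination = "/tmp/my_file"
+                    #       overwrite   = true
+                    #       permissions = "750"
+                    #     }
+                    #   }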
+                    # TODO (mbang): add exactly_one_of when it can be applied to lists (https://github.com/hashicorp/terraform-plugin-sdk/issues/470)
+                    properties:
+                      - !ruby/object:Api::Type::String
+                        name: 'artifactId'
+                        description: |
+                          The id of the relevant artifact in the recipe.
+                        required: true
+                      - !ruby/object:Api::Type::String
+                        name: 'destination'
+                        description: |
+                          The absolute path on the instance to put the file.
+                        required: true
+                      - !ruby/object:Api::Type::Boolean
+                        name: 'overwrite'
+                        description: |
+                          Whether to allow this step to overwrite existing files. If this is false and the file already exists, the file
+                          is not overwritten and the step is considered a success. Defaults to false.
+                        default_value: false
+                      - !ruby/object:Api::Type::String
+                        name: 'permissions'
+                        description: |
+                          Consists of three octal digits which represent, in order, the permissions of the owner, group, and other users
+                          for the file (similar to the numeric mode used in the Linux chmod utility). Each digit represents a three-bit
+                          number: the 4 bit corresponds to read, the 2 bit to write, and the 1 bit to execute. Default behavior is 755.
+
+                          Below are some examples of permissions and their associated values:
+                          read, write, and execute: 7
+                          read and execute: 5
+                          read and write: 6
+                          read only: 4
+                  - !ruby/object:Api::Type::NestedObject
+                    name: 'archiveExtraction'
+                    description: |
+                      Extracts an archive into the specified directory.
+                    # TODO (mbang): add exactly_one_of when it can be applied to lists (https://github.com/hashicorp/terraform-plugin-sdk/issues/470)
+                    properties:
+                      - !ruby/object:Api::Type::String
+                        name: 'artifactId'
+                        description: |
+                          The id of the relevant artifact in the recipe.
+                        required: true
+                      - !ruby/object:Api::Type::String
+                        name: 'destination'
+                        description: |
+                          Directory to extract archive to. Defaults to / on Linux or C:\ on Windows.
+                      - !ruby/object:Api::Type::Enum
+                        name: 'type'
+                        description: |
+                          The type of the archive to extract.
+                        required: true
+                        values:
+                          - :TAR
+                          - :TAR_GZIP
+                          - :TAR_BZIP
+                          - :TAR_LZMA
+                          - :TAR_XZ
+                          - :ZIP
+                  - !ruby/object:Api::Type::NestedObject
+                    name: 'msiInstallation'
+                    description: |
+                      Installs an MSI file.
+                    # TODO (mbang): add exactly_one_of when it can be applied to lists (https://github.com/hashicorp/terraform-plugin-sdk/issues/470)
+                    properties:
+                      - !ruby/object:Api::Type::String
+                        name: 'artifactId'
+                        description: |
+                          The id of the relevant artifact in the recipe.
+                        required: true
+                      - !ruby/object:Api::Type::Array
+                        name: 'flags'
+                        description: |
+                          The flags to use when installing the MSI. Defaults to the install flag.
+                        item_type: Api::Type::String
+                      - !ruby/object:Api::Type::Array
+                        name: 'allowedExitCodes'
+                        description: |
+                          Return codes that indicate that the software installed or updated successfully. Behaviour defaults to [0].
+                        item_type: Api::Type::Integer
+                  - !ruby/object:Api::Type::NestedObject
+                    name: 'dpkgInstallation'
+                    description: |
+                      Installs a deb file via dpkg.
+                    # TODO (mbang): add exactly_one_of when it can be applied to lists (https://github.com/hashicorp/terraform-plugin-sdk/issues/470)
+                    properties:
+                      - !ruby/object:Api::Type::String
+                        name: 'artifactId'
+                        description: |
+                          The id of the relevant artifact in the recipe.
+                        required: true
+                  - !ruby/object:Api::Type::NestedObject
+                    name: 'rpmInstallation'
+                    description: |
+                      Installs an rpm file via the rpm utility.
+                    # TODO (mbang): add exactly_one_of when it can be applied to lists (https://github.com/hashicorp/terraform-plugin-sdk/issues/470)
+                    properties:
+                      - !ruby/object:Api::Type::String
+                        name: 'artifactId'
+                        description: |
+                          The id of the relevant artifact in the recipe.
+                        required: true
+                  - !ruby/object:Api::Type::NestedObject
+                    name: 'fileExec'
+                    description: |
+                      Executes an artifact or local file.
+                    # TODO (mbang): add exactly_one_of when it can be applied to lists (https://github.com/hashicorp/terraform-plugin-sdk/issues/470)
+                    properties:
+                      - !ruby/object:Api::Type::Array
+                        name: 'args'
+                        description: |
+                          Arguments to be passed to the provided executable.
+                        item_type: Api::Type::String
+                      - !ruby/object:Api::Type::Array
+                        name: 'allowedExitCodes'
+                        description: |
+                          A list of possible return values that the program can return to indicate a success. Defaults to [0].
+                        item_type: Api::Type::Integer
+                      - !ruby/object:Api::Type::String
+                        name: 'artifactId'
+                        description: |
+                          The id of the relevant artifact in the recipe.
+                        # TODO (mbang): add exactly_one_of when it can be applied to lists (https://github.com/hashicorp/terraform-plugin-sdk/issues/470)
+                      - !ruby/object:Api::Type::String
+                        name: 'localPath'
+                        description: |
+                          The absolute path of the file on the local filesystem.
+                        # TODO (mbang): add exactly_one_of when it can be applied to lists (https://github.com/hashicorp/terraform-plugin-sdk/issues/470)
+                  - !ruby/object:Api::Type::NestedObject
+                    name: 'scriptRun'
+                    description: |
+                      Runs commands in a shell.
+                    # TODO (mbang): add exactly_one_of when it can be applied to lists (https://github.com/hashicorp/terraform-plugin-sdk/issues/470)
+                    properties:
+                      - !ruby/object:Api::Type::String
+                        name: 'script'
+                        description: |
+                          The shell script to be executed.
+                        required: true
+                      - !ruby/object:Api::Type::Array
+                        name: 'allowedExitCodes'
+                        description: |
+                          Return codes that indicate that the software installed or updated successfully. Behaviour defaults to [0].
+                        item_type: Api::Type::Integer
+                      - !ruby/object:Api::Type::Enum
+                        name: 'interpreter'
+                        description: |
+                          The script interpreter to use to run the script. If no interpreter is specified the script is executed directly,
+                          which will likely only succeed for scripts with shebang lines.
+                        values:
+                          - :SHELL
+                          - :POWERSHELL
+            - !ruby/object:Api::Type::Array
+              name: 'updateSteps'
+              description: |
+                Actions to be taken for updating this recipe. On failure it stops executing steps and does not attempt another update for this recipe.
+                Any steps taken (including partially completed steps) are not rolled back.
+              item_type: !ruby/object:Api::Type::NestedObject
+                properties:
+                  - !ruby/object:Api::Type::NestedObject
+                    name: 'fileCopy'
+                    description: |
+                      Copies a file onto the instance.
+                    # TODO (mbang): add exactly_one_of when it can be applied to lists (https://github.com/hashicorp/terraform-plugin-sdk/issues/470)
+                    properties:
+                      - !ruby/object:Api::Type::String
+                        name: 'artifactId'
+                        description: |
+                          The id of the relevant artifact in the recipe.
+                        required: true
+                      - !ruby/object:Api::Type::String
+                        name: 'destination'
+                        description: |
+                          The absolute path on the instance to put the file.
+                        required: true
+                      - !ruby/object:Api::Type::Boolean
+                        name: 'overwrite'
+                        description: |
+                          Whether to allow this step to overwrite existing files. If this is false and the file already exists, the file
+                          is not overwritten and the step is considered a success. Defaults to false.
+                        default_value: false
+                      - !ruby/object:Api::Type::String
+                        name: 'permissions'
+                        description: |
+                          Consists of three octal digits which represent, in order, the permissions of the owner, group, and other users
+                          for the file (similar to the numeric mode used in the Linux chmod utility). Each digit represents a three-bit
+                          number: the 4 bit corresponds to read, the 2 bit to write, and the 1 bit to execute. Default behavior is 755.
+
+                          Below are some examples of permissions and their associated values:
+                          read, write, and execute: 7
+                          read and execute: 5
+                          read and write: 6
+                          read only: 4
+                  - !ruby/object:Api::Type::NestedObject
+                    name: 'archiveExtraction'
+                    description: |
+                      Extracts an archive into the specified directory.
+                    # TODO (mbang): add exactly_one_of when it can be applied to lists (https://github.com/hashicorp/terraform-plugin-sdk/issues/470)
+                    properties:
+                      - !ruby/object:Api::Type::String
+                        name: 'artifactId'
+                        description: |
+                          The id of the relevant artifact in the recipe.
+                        required: true
+                      - !ruby/object:Api::Type::String
+                        name: 'destination'
+                        description: |
+                          Directory to extract archive to. Defaults to / on Linux or C:\ on Windows.
+                      - !ruby/object:Api::Type::Enum
+                        name: 'type'
+                        description: |
+                          The type of the archive to extract.
+                        required: true
+                        values:
+                          - :TAR
+                          - :TAR_GZIP
+                          - :TAR_BZIP
+                          - :TAR_LZMA
+                          - :TAR_XZ
+                          - :ZIP
+                  - !ruby/object:Api::Type::NestedObject
+                    name: 'msiInstallation'
+                    description: |
+                      Installs an MSI file.
+                    # TODO (mbang): add exactly_one_of when it can be applied to lists (https://github.com/hashicorp/terraform-plugin-sdk/issues/470)
+                    properties:
+                      - !ruby/object:Api::Type::String
+                        name: 'artifactId'
+                        description: |
+                          The id of the relevant artifact in the recipe.
+                        required: true
+                      - !ruby/object:Api::Type::Array
+                        name: 'flags'
+                        description: |
+                          The flags to use when installing the MSI. Defaults to the install flag.
+                        item_type: Api::Type::String
+                      - !ruby/object:Api::Type::Array
+                        name: 'allowedExitCodes'
+                        description: |
+                          Return codes that indicate that the software installed or updated successfully. Behaviour defaults to [0].
+                        item_type: Api::Type::Integer
+                  - !ruby/object:Api::Type::NestedObject
+                    name: 'dpkgInstallation'
+                    description: |
+                      Installs a deb file via dpkg.
+                    # TODO (mbang): add exactly_one_of when it can be applied to lists (https://github.com/hashicorp/terraform-plugin-sdk/issues/470)
+                    properties:
+                      - !ruby/object:Api::Type::String
+                        name: 'artifactId'
+                        description: |
+                          The id of the relevant artifact in the recipe.
+                        required: true
+                  - !ruby/object:Api::Type::NestedObject
+                    name: 'rpmInstallation'
+                    description: |
+                      Installs an rpm file via the rpm utility.
+                    # TODO (mbang): add exactly_one_of when it can be applied to lists (https://github.com/hashicorp/terraform-plugin-sdk/issues/470)
+                    properties:
+                      - !ruby/object:Api::Type::String
+                        name: 'artifactId'
+                        description: |
+                          The id of the relevant artifact in the recipe.
+                        required: true
+                  - !ruby/object:Api::Type::NestedObject
+                    name: 'fileExec'
+                    description: |
+                      Executes an artifact or local file.
+                    # TODO (mbang): add exactly_one_of when it can be applied to lists (https://github.com/hashicorp/terraform-plugin-sdk/issues/470)
+                    properties:
+                      - !ruby/object:Api::Type::Array
+                        name: 'args'
+                        description: |
+                          Arguments to be passed to the provided executable.
+                        item_type: Api::Type::String
+                      - !ruby/object:Api::Type::Array
+                        name: 'allowedExitCodes'
+                        description: |
+                          A list of possible return values that the program can return to indicate a success. Defaults to [0].
+                        item_type: Api::Type::Integer
+                      - !ruby/object:Api::Type::String
+                        name: 'artifactId'
+                        description: |
+                          The id of the relevant artifact in the recipe.
+                        # TODO (mbang): add exactly_one_of when it can be applied to lists (https://github.com/hashicorp/terraform-plugin-sdk/issues/470)
+                      - !ruby/object:Api::Type::String
+                        name: 'localPath'
+                        description: |
+                          The absolute path of the file on the local filesystem.
+                        # TODO (mbang): add exactly_one_of when it can be applied to lists (https://github.com/hashicorp/terraform-plugin-sdk/issues/470)
+                  - !ruby/object:Api::Type::NestedObject
+                    name: 'scriptRun'
+                    description: |
+                      Runs commands in a shell.
+                    # TODO (mbang): add exactly_one_of when it can be applied to lists (https://github.com/hashicorp/terraform-plugin-sdk/issues/470)
+                    properties:
+                      - !ruby/object:Api::Type::String
+                        name: 'script'
+                        description: |
+                          The shell script to be executed.
+                        required: true
+                      - !ruby/object:Api::Type::Array
+                        name: 'allowedExitCodes'
+                        description: |
+                          Return codes that indicate that the software installed or updated successfully. Behaviour defaults to [0].
+                        item_type: Api::Type::Integer
+                      - !ruby/object:Api::Type::Enum
+                        name: 'interpreter'
+                        description: |
+                          The script interpreter to use to run the script. If no interpreter is specified the script is executed directly,
+                          which will likely only succeed for scripts with shebang lines.
+                        values:
+                          - :SHELL
+                          - :POWERSHELL
+            - !ruby/object:Api::Type::Enum
+              name: 'desiredState'
+              description: |
+                Default is INSTALLED. The desired state the agent should maintain for this recipe.
+
+                INSTALLED: The software recipe is installed on the instance but won't be updated to new versions.
+                INSTALLED_KEEP_UPDATED: The software recipe is installed on the instance. The recipe is updated to a higher version,
+                if a higher version of the recipe is assigned to this instance.
+                REMOVE: Remove is unsupported for software recipes and attempts to create or update a recipe to the REMOVE state are rejected.
+              default_value: :INSTALLED
+              values:
+                - :INSTALLED
+                - :UPDATED
+                - :REMOVED
+      - !ruby/object:Api::Type::String
+        name: 'createTime'
+        output: true
+        description: |
+          Time this guest policy was created. A timestamp in RFC3339 UTC "Zulu" format, accurate to nanoseconds.
+          Example: "2014-10-02T15:01:23.045123456Z".
+      - !ruby/object:Api::Type::String
+        name: 'updateTime'
+        output: true
+        description: |
+          Last time this guest policy was updated. A timestamp in RFC3339 UTC "Zulu" format, accurate to nanoseconds.
+          Example: "2014-10-02T15:01:23.045123456Z".
+      - !ruby/object:Api::Type::String
+        name: 'etag'
+        description: |
+          The etag for this guest policy. If this is provided on update, it must match the server's etag.
diff --git a/products/osconfig/terraform.yaml b/products/osconfig/terraform.yaml
new file mode 100644
index 000000000000..913b88b85bd9
--- /dev/null
+++ b/products/osconfig/terraform.yaml
@@ -0,0 +1,120 @@
+# Copyright 2020 Google Inc.
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +--- !ruby/object:Provider::Terraform::Config +overrides: !ruby/object:Overrides::ResourceOverrides + PatchDeployment: !ruby/object:Overrides::Terraform::ResourceOverride + id_format: "{{name}}" + examples: + - !ruby/object:Provider::Terraform::Examples + name: "os_config_patch_deployment_basic" + primary_resource_id: "patch" + vars: + instance_name: "patch-deploy-inst" + patch_deployment_id: "patch-deploy" + - !ruby/object:Provider::Terraform::Examples + name: "os_config_patch_deployment_instance" + primary_resource_id: "patch" + vars: + instance_name: "patch-deploy-inst" + patch_deployment_id: "patch-deploy" + - !ruby/object:Provider::Terraform::Examples + name: "os_config_patch_deployment_full" + primary_resource_id: "patch" + vars: + instance_name: "patch-deploy-inst" + patch_deployment_id: "patch-deploy" + properties: + patchDeploymentId: !ruby/object:Overrides::Terraform::PropertyOverride + validation: !ruby/object:Provider::Terraform::Validation + regex: "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z0-9](?:[-a-z0-9]{0,61}[a-z0-9])?))" + recurringSchedule.timeOfDay.hours: !ruby/object:Overrides::Terraform::PropertyOverride + validation: !ruby/object:Provider::Terraform::Validation + function: 'validation.IntBetween(0,23)' + recurringSchedule.timeOfDay.minutes: !ruby/object:Overrides::Terraform::PropertyOverride + validation: !ruby/object:Provider::Terraform::Validation + function: 'validation.IntBetween(0,59)' + recurringSchedule.timeOfDay.seconds: !ruby/object:Overrides::Terraform::PropertyOverride + validation: !ruby/object:Provider::Terraform::Validation + function: 'validation.IntBetween(0,60)' + recurringSchedule.timeOfDay.nanos: !ruby/object:Overrides::Terraform::PropertyOverride + validation: !ruby/object:Provider::Terraform::Validation + function: 'validation.IntBetween(0,999999999)' + recurringSchedule.monthly.weekDayOfMonth.weekOrdinal: !ruby/object:Overrides::Terraform::PropertyOverride + validation: !ruby/object:Provider::Terraform::Validation + function: 'validation.IntBetween(-1,4)' + recurringSchedule.monthly.monthDay: !ruby/object:Overrides::Terraform::PropertyOverride + validation: !ruby/object:Provider::Terraform::Validation + function: 'validation.IntBetween(-1,31)' + recurringSchedule.frequency: !ruby/object:Overrides::Terraform::PropertyOverride + exclude: true + custom_code: !ruby/object:Provider::Terraform::CustomCode + post_create: templates/terraform/post_create/set_computed_name.erb + encoder: templates/terraform/encoders/os_config_patch_deployment.go.erb + decoder: templates/terraform/decoders/os_config_patch_deployment.go.erb + custom_import: templates/terraform/custom_import/self_link_as_name.erb + GuestPolicies: !ruby/object:Overrides::Terraform::ResourceOverride + id_format: "{{name}}" + examples: + - !ruby/object:Provider::Terraform::Examples + name: "os_config_guest_policies_basic" + primary_resource_id: "guest_policies" + vars: + instance_name: "guest-policy-inst" + guest_policy_id: "guest-policy" + - !ruby/object:Provider::Terraform::Examples + name: "os_config_guest_policies_packages" + 
primary_resource_id: "guest_policies" + vars: + guest_policy_id: "guest-policy" + - !ruby/object:Provider::Terraform::Examples + name: "os_config_guest_policies_recipes" + primary_resource_id: "guest_policies" + vars: + guest_policy_id: "guest-policy" + properties: + guestPolicyId: !ruby/object:Overrides::Terraform::PropertyOverride + validation: !ruby/object:Provider::Terraform::Validation + regex: "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z0-9](?:[-a-z0-9]{0,61}[a-z0-9])?))" + recipes.installSteps.archiveExtraction.destination: !ruby/object:Overrides::Terraform::PropertyOverride + default_from_api: true + recipes.installSteps.msiInstallation.flags: !ruby/object:Overrides::Terraform::PropertyOverride + default_from_api: true + recipes.installSteps.msiInstallation.allowedExitCodes: !ruby/object:Overrides::Terraform::PropertyOverride + default_from_api: true + recipes.installSteps.fileExec.allowedExitCodes: !ruby/object:Overrides::Terraform::PropertyOverride + default_from_api: true + recipes.installSteps.scriptRun.allowedExitCodes: !ruby/object:Overrides::Terraform::PropertyOverride + default_from_api: true + recipes.updateSteps.archiveExtraction.destination: !ruby/object:Overrides::Terraform::PropertyOverride + default_from_api: true + recipes.updateSteps.msiInstallation.flags: !ruby/object:Overrides::Terraform::PropertyOverride + default_from_api: true + recipes.updateSteps.msiInstallation.allowedExitCodes: !ruby/object:Overrides::Terraform::PropertyOverride + default_from_api: true + recipes.updateSteps.fileExec.allowedExitCodes: !ruby/object:Overrides::Terraform::PropertyOverride + default_from_api: true + recipes.updateSteps.scriptRun.allowedExitCodes: !ruby/object:Overrides::Terraform::PropertyOverride + default_from_api: true + etag: !ruby/object:Overrides::Terraform::PropertyOverride + default_from_api: true + custom_code: !ruby/object:Provider::Terraform::CustomCode + post_create: templates/terraform/post_create/set_computed_name.erb + custom_import: templates/terraform/custom_import/self_link_as_name.erb + +# This is for copying files over +files: !ruby/object:Provider::Config::Files + # These files have templating (ERB) code that will be run. + # This is usually to add licensing info, autogeneration notices, etc. + compile: +<%= lines(indent(compile('provider/terraform/product~compile.yaml'), 4)) -%> \ No newline at end of file diff --git a/products/oslogin/api.yaml b/products/oslogin/api.yaml index fa028120e485..98c9b674d122 100644 --- a/products/oslogin/api.yaml +++ b/products/oslogin/api.yaml @@ -60,6 +60,7 @@ objects: description: | Public key text in SSH format, defined by RFC4253 section 6.6. required: true + input: true - !ruby/object:Api::Type::String name: 'expirationTimeUsec' description: | diff --git a/products/pubsub/terraform.yaml b/products/pubsub/terraform.yaml index b620ca35159b..ae05af868bf8 100644 --- a/products/pubsub/terraform.yaml +++ b/products/pubsub/terraform.yaml @@ -21,7 +21,7 @@ overrides: !ruby/object:Overrides::ResourceOverrides # resource until it exists and the negative cached result goes away. 
# Context: terraform-providers/terraform-provider-google#4993 async: !ruby/object:Provider::Terraform::PollAsync - check_response_func: PollCheckForExistence + check_response_func_existence: PollCheckForExistence actions: ['create'] operation: !ruby/object:Api::Async::Operation timeouts: !ruby/object:Api::Timeouts @@ -71,7 +71,7 @@ overrides: !ruby/object:Overrides::ResourceOverrides # resource until it exists and the negative cached result goes away. # Context: terraform-providers/terraform-provider-google#4993 async: !ruby/object:Provider::Terraform::PollAsync - check_response_func: PollCheckForExistence + check_response_func_existence: PollCheckForExistence actions: ['create'] operation: !ruby/object:Api::Async::Operation timeouts: !ruby/object:Api::Timeouts diff --git a/products/redis/api.yaml b/products/redis/api.yaml index 52eaa1c7a88b..e9fd2e0a0075 100644 --- a/products/redis/api.yaml +++ b/products/redis/api.yaml @@ -13,7 +13,7 @@ --- !ruby/object:Api::Product name: Redis -display_name: Cloud Memorystore +display_name: Memorystore (Redis) versions: - !ruby/object:Api::Product::Version name: ga @@ -81,9 +81,7 @@ objects: - !ruby/object:Api::Type::Enum name: connectMode description: | - The connection mode of the Redis instance. Can be either - `DIRECT_PEERING` or `PRIVATE_SERVICE_ACCESS`. The default - connect mode if not provided is `DIRECT_PEERING`. + The connection mode of the Redis instance. input: true values: - :DIRECT_PEERING diff --git a/products/redis/terraform.yaml b/products/redis/terraform.yaml index 71a58ca5eabd..f828da1e6c5f 100644 --- a/products/redis/terraform.yaml +++ b/products/redis/terraform.yaml @@ -50,6 +50,8 @@ overrides: !ruby/object:Overrides::ResourceOverrides name: !ruby/object:Overrides::Terraform::PropertyOverride custom_expand: 'templates/terraform/custom_expand/shortname_to_url.go.erb' custom_flatten: 'templates/terraform/custom_flatten/name_from_self_link.erb' + validation: !ruby/object:Provider::Terraform::Validation + regex: '^[a-z][a-z0-9-]{0,39}[a-z0-9]$' redisVersion: !ruby/object:Overrides::Terraform::PropertyOverride default_from_api: true region: !ruby/object:Overrides::Terraform::PropertyOverride diff --git a/products/runtimeconfig/api.yaml b/products/runtimeconfig/api.yaml index 0f8d2ffd8965..4b5e51bacb2b 100644 --- a/products/runtimeconfig/api.yaml +++ b/products/runtimeconfig/api.yaml @@ -13,7 +13,7 @@ --- !ruby/object:Api::Product name: RuntimeConfig -display_name: Cloud Runtime Configuration +display_name: Runtime Configurator versions: - !ruby/object:Api::Product::Version name: ga diff --git a/products/secretmanager/api.yaml b/products/secretmanager/api.yaml index ca3699c624c9..4ea62d22f78e 100644 --- a/products/secretmanager/api.yaml +++ b/products/secretmanager/api.yaml @@ -15,6 +15,9 @@ name: SecretManager display_name: Secret Manager versions: + - !ruby/object:Api::Product::Version + name: ga + base_url: https://secretmanager.googleapis.com/v1/ - !ruby/object:Api::Product::Version name: beta base_url: https://secretmanager.googleapis.com/v1beta1/ @@ -27,7 +30,6 @@ apis_required: objects: - !ruby/object:Api::Resource name: Secret - min_version: beta self_link: projects/{{project}}/secrets/{{secret_id}} base_url: projects/{{project}}/secrets create_url: projects/{{project}}/secrets?secretId={{secret_id}} @@ -37,8 +39,9 @@ objects: parent_resource_attribute: secret_id method_name_separator: ':' exclude: false + allowed_iam_role: roles/secretmanager.secretAccessor references: !ruby/object:Api::Resource::ReferenceLinks - api: 
'https://cloud.google.com/secret-manager/docs/reference/rest/v1beta1/projects.secrets'
+ api: 'https://cloud.google.com/secret-manager/docs/reference/rest/v1/projects.secrets'
description: |
A Secret is a logical secret whose value and versions can be accessed.
parameters:
@@ -114,7 +117,6 @@ objects:
The canonical IDs of the location to replicate data.
For example: "us-east1".
- !ruby/object:Api::Resource
name: SecretVersion
- min_version: beta
base_url: '{{name}}'
self_link: '{{name}}'
create_url: '{{secret}}:addVersion'
diff --git a/products/secretmanager/terraform.yaml b/products/secretmanager/terraform.yaml
index 5b1a14699bab..4afdb8ff2404 100644
--- a/products/secretmanager/terraform.yaml
+++ b/products/secretmanager/terraform.yaml
@@ -17,8 +17,7 @@ overrides: !ruby/object:Overrides::ResourceOverrides
- !ruby/object:Provider::Terraform::Examples
name: "secret_config_basic"
primary_resource_id: "secret-basic"
- primary_resource_name: "fmt.Sprintf(\"tf-test-test-secret-basic%s\", context[\"random_suffix\"])"
- min_version: beta
+ primary_resource_name: "fmt.Sprintf(\"secret%s\", context[\"random_suffix\"])"
vars:
secret_id: "secret"
import_format: ["projects/{{project}}/secrets/{{secret_id}}"]
@@ -28,11 +27,12 @@ overrides: !ruby/object:Overrides::ResourceOverrides
custom_expand: templates/terraform/custom_expand/bool_to_object.go.erb
SecretVersion: !ruby/object:Overrides::Terraform::ResourceOverride
+ # Versions will be swept by the Secret sweeper
+ skip_sweeper: true
examples:
- !ruby/object:Provider::Terraform::Examples
name: "secret_version_basic"
primary_resource_id: "secret-version-basic"
- min_version: beta
vars:
secret_id: "secret-version"
data: "secret-data"
diff --git a/products/securitycenter/api.yaml b/products/securitycenter/api.yaml
index 6ba463bd2a65..67d7dbc891a6 100644
--- a/products/securitycenter/api.yaml
+++ b/products/securitycenter/api.yaml
@@ -13,7 +13,7 @@
--- !ruby/object:Api::Product
name: SecurityCenter
-display_name: Cloud Security Command Center
+display_name: Security Command Center (SCC)
versions:
- !ruby/object:Api::Product::Version
name: ga
diff --git a/products/servicedirectory/api.yaml b/products/servicedirectory/api.yaml
new file mode 100644
index 000000000000..3723d861410b
--- /dev/null
+++ b/products/servicedirectory/api.yaml
@@ -0,0 +1,180 @@
+# Copyright 2020 Google Inc.
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
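The `skip_sweeper: true` override in the Secret Manager changes above relies on test sweepers operating at the parent level: deleting a leaked `Secret` also deletes its versions, so `SecretVersion` needs no sweeper of its own. As a rough sketch only (the resource name, function name, and package layout here are assumptions, not the generated code), a sweeper registered through the Terraform plugin SDK looks something like this:

```go
package google

import (
	"log"

	"github.com/hashicorp/terraform-plugin-sdk/helper/resource"
)

// Hypothetical sketch of a sweeper for the parent Secret resource.
// Because deleting a Secret cascades to its versions, the SecretVersion
// override above can safely set skip_sweeper: true.
func init() {
	resource.AddTestSweepers("SecretManagerSecret", &resource.Sweeper{
		Name: "SecretManagerSecret",
		F:    testSweepSecretManagerSecret,
	})
}

func testSweepSecretManagerSecret(region string) error {
	// A real generated sweeper would list secrets carrying the shared
	// test prefix and issue a DELETE for each; that logic is elided here.
	log.Printf("[INFO] sweeping leaked test secrets in %s", region)
	return nil
}
```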
+ +--- !ruby/object:Api::Product +name: ServiceDirectory +display_name: Service Directory +versions: + - !ruby/object:Api::Product::Version + name: beta + base_url: https://servicedirectory.googleapis.com/v1beta1/ +scopes: + - https://www.googleapis.com/auth/cloud-platform +apis_required: + - !ruby/object:Api::Product::ApiReference + name: Service Directory API + url: https://console.cloud.google.com/apis/library/servicedirectory.googleapis.com/ +objects: + - !ruby/object:Api::Resource + name: 'Namespace' + base_url: '{{name}}' + create_url: 'projects/{{project}}/locations/{{location}}/namespaces?namespaceId={{namespace_id}}' + self_link: '{{name}}' + update_verb: :PATCH + update_mask: true + references: !ruby/object:Api::Resource::ReferenceLinks + guides: + 'Configuring a namespace': 'https://cloud.google.com/service-directory/docs/configuring-service-directory#configuring_a_namespace' + api: 'https://cloud.google.com/service-directory/docs/reference/rest/v1beta1/projects.locations.namespaces' + iam_policy: !ruby/object:Api::Resource::IamPolicy + exclude: false + parent_resource_attribute: 'name' + method_name_separator: ':' + fetch_iam_policy_verb: :POST + set_iam_policy_verb: :POST + min_version: beta + description: | + A container for `services`. Namespaces allow administrators to group services + together and define permissions for a collection of services. + parameters: + - !ruby/object:Api::Type::String + name: 'location' + description: | + The location for the Namespace. + A full list of valid locations can be found by running + `gcloud beta service-directory locations list`. + required: true + url_param_only: true + - !ruby/object:Api::Type::String + name: namespaceId + description: | + The Resource ID must be 1-63 characters long, including digits, + lowercase letters or the hyphen character. + required: true + input: true + url_param_only: true + properties: + - !ruby/object:Api::Type::String + name: 'name' + description: | + The resource name for the namespace + in the format `projects/*/locations/*/namespaces/*`. + output: true + - !ruby/object:Api::Type::KeyValuePairs + name: 'labels' + description: | + Resource labels associated with this Namespace. No more than 64 user + labels can be associated with a given resource. Label keys and values can + be no longer than 63 characters. + - !ruby/object:Api::Resource + name: 'Service' + base_url: '{{name}}' + create_url: '{{namespace}}/services?serviceId={{service_id}}' + self_link: '{{name}}' + update_verb: :PATCH + update_mask: true + references: !ruby/object:Api::Resource::ReferenceLinks + guides: + 'Configuring a service': 'https://cloud.google.com/service-directory/docs/configuring-service-directory#configuring_a_service' + api: 'https://cloud.google.com/service-directory/docs/reference/rest/v1beta1/projects.locations.namespaces.services' + iam_policy: !ruby/object:Api::Resource::IamPolicy + exclude: false + parent_resource_attribute: 'name' + method_name_separator: ':' + fetch_iam_policy_verb: :POST + set_iam_policy_verb: :POST + min_version: beta + description: | + An individual service. A service contains a name and optional metadata. + parameters: + - !ruby/object:Api::Type::String + name: 'namespace' + description: | + The resource name of the namespace this service will belong to. + required: true + url_param_only: true + - !ruby/object:Api::Type::String + name: serviceId + description: | + The Resource ID must be 1-63 characters long, including digits, + lowercase letters or the hyphen character. 
+ required: true
+ input: true
+ url_param_only: true
+ properties:
+ - !ruby/object:Api::Type::String
+ name: 'name'
+ description: |
+ The resource name for the service in the
+ format `projects/*/locations/*/namespaces/*/services/*`.
+ output: true
+ - !ruby/object:Api::Type::KeyValuePairs
+ name: 'metadata'
+ description: |
+ Metadata for the service. This data can be consumed
+ by service clients. The entire metadata dictionary may contain
+ up to 2000 characters, spread across all key-value pairs.
+ Metadata that goes beyond any of these limits will be rejected.
+ - !ruby/object:Api::Resource
+ name: 'Endpoint'
+ base_url: '{{name}}'
+ create_url: '{{service}}/endpoints?endpointId={{endpoint_id}}'
+ self_link: '{{name}}'
+ update_verb: :PATCH
+ update_mask: true
+ references: !ruby/object:Api::Resource::ReferenceLinks
+ guides:
+ 'Configuring an endpoint': 'https://cloud.google.com/service-directory/docs/configuring-service-directory#configuring_an_endpoint'
+ api: 'https://cloud.google.com/service-directory/docs/reference/rest/v1beta1/projects.locations.namespaces.services.endpoints'
+ min_version: beta
+ description: |
+ An individual endpoint that provides a service.
+ parameters:
+ - !ruby/object:Api::Type::String
+ name: 'service'
+ description: |
+ The resource name of the service that this endpoint provides.
+ required: true
+ url_param_only: true
+ - !ruby/object:Api::Type::String
+ name: endpointId
+ description: |
+ The Resource ID must be 1-63 characters long, including digits,
+ lowercase letters or the hyphen character.
+ required: true
+ input: true
+ url_param_only: true
+ properties:
+ - !ruby/object:Api::Type::String
+ name: 'name'
+ description: |
+ The resource name for the endpoint in the format
+ `projects/*/locations/*/namespaces/*/services/*/endpoints/*`.
+ output: true
+ - !ruby/object:Api::Type::String
+ name: 'address'
+ description: |
+ IPv4 or IPv6 address of the endpoint.
+ - !ruby/object:Api::Type::Integer
+ name: 'port'
+ description: |
+ Port that the endpoint is running on, must be in the
+ range of [0, 65535]. If unspecified, the default is 0.
+ - !ruby/object:Api::Type::KeyValuePairs
+ name: 'metadata'
+ description: |
+ Metadata for the endpoint. This data can be consumed
+ by service clients. The entire metadata dictionary may contain
+ up to 512 characters, spread across all key-value pairs.
+ Metadata that goes beyond any of these limits will be rejected.
+
diff --git a/products/servicedirectory/terraform.yaml b/products/servicedirectory/terraform.yaml
new file mode 100644
index 000000000000..343b1d128ee8
--- /dev/null
+++ b/products/servicedirectory/terraform.yaml
@@ -0,0 +1,80 @@
+# Copyright 2020 Google Inc.
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
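The Service Directory overrides that follow validate resource IDs with `validateRFC1035Name(2, 63)` and endpoint addresses with `validateIpAddress`. The real helpers ship with the provider; as an illustrative approximation only (the exact rules enforced are an assumption here), an RFC1035-style name check can be sketched like this:

```go
package google

import (
	"fmt"
	"regexp"
)

// Illustrative approximation of an RFC1035-style name validator: a
// lowercase letter first, then lowercase letters, digits, or hyphens,
// ending in a letter or digit, with an overall length bound.
var rfc1035Re = regexp.MustCompile(`^[a-z][a-z0-9-]*[a-z0-9]$`)

func checkRFC1035Name(min, max int, value string) error {
	if len(value) < min || len(value) > max {
		return fmt.Errorf("name must be %d to %d characters long, got %q", min, max, value)
	}
	if !rfc1035Re.MatchString(value) {
		return fmt.Errorf("name %q must start with a lowercase letter and contain only lowercase letters, digits, and hyphens", value)
	}
	return nil
}
```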
+ +--- !ruby/object:Provider::Terraform::Config +overrides: !ruby/object:Overrides::ResourceOverrides + Namespace: !ruby/object:Overrides::Terraform::ResourceOverride + import_format: ["projects/{{project}}/locations/{{location}}/namespaces/{{namespace_id}}"] + examples: + - !ruby/object:Provider::Terraform::Examples + name: "service_directory_namespace_basic" + primary_resource_id: "example" + vars: + namespace_id: "example-namespace" + min_version: beta + properties: + location: !ruby/object:Overrides::Terraform::PropertyOverride + ignore_read: true + namespaceId: !ruby/object:Overrides::Terraform::PropertyOverride + validation: !ruby/object:Provider::Terraform::Validation + function: 'validateRFC1035Name(2, 63)' + custom_code: !ruby/object:Provider::Terraform::CustomCode + custom_import: templates/terraform/custom_import/service_directory_namespace.go.erb + Service: !ruby/object:Overrides::Terraform::ResourceOverride + import_format: ["projects/{{project}}/locations/{{location}}/namespaces/{{namespace_id}}/services/{{service_id}}"] + examples: + - !ruby/object:Provider::Terraform::Examples + name: "service_directory_service_basic" + primary_resource_id: "example" + vars: + service_id: "example-service" + namespace_id: "example-namespace" + min_version: beta + properties: + namespace: !ruby/object:Overrides::Terraform::PropertyOverride + ignore_read: true + serviceId: !ruby/object:Overrides::Terraform::PropertyOverride + validation: !ruby/object:Provider::Terraform::Validation + function: 'validateRFC1035Name(2, 63)' + custom_code: !ruby/object:Provider::Terraform::CustomCode + custom_import: templates/terraform/custom_import/service_directory_service.go.erb + Endpoint: !ruby/object:Overrides::Terraform::ResourceOverride + import_format: ["projects/{{project}}/locations/{{location}}/namespaces/{{namespace_id}}/services/{{service_id}}/endpoints/{{endpoint_id}}"] + examples: + - !ruby/object:Provider::Terraform::Examples + name: "service_directory_endpoint_basic" + primary_resource_id: "example" + vars: + service_id: "example-service" + namespace_id: "example-namespace" + endpoint_id: "example-endpoint" + min_version: beta + properties: + service: !ruby/object:Overrides::Terraform::PropertyOverride + ignore_read: true + endpointId: !ruby/object:Overrides::Terraform::PropertyOverride + validation: !ruby/object:Provider::Terraform::Validation + function: 'validateRFC1035Name(2, 63)' + address: !ruby/object:Overrides::Terraform::PropertyOverride + validation: !ruby/object:Provider::Terraform::Validation + function: 'validateIpAddress' + port: !ruby/object:Overrides::Terraform::PropertyOverride + validation: !ruby/object:Provider::Terraform::Validation + function: 'validation.IntBetween(0, 65535)' + custom_code: !ruby/object:Provider::Terraform::CustomCode + custom_import: templates/terraform/custom_import/service_directory_endpoint.go.erb +files: !ruby/object:Provider::Config::Files + # These files have templating (ERB) code that will be run. + # This is usually to add licensing info, autogeneration notices, etc. 
+ compile: +<%= lines(indent(compile('provider/terraform/product~compile.yaml'), 4)) -%> diff --git a/products/servicemanagement/api.yaml b/products/servicemanagement/api.yaml index 9b170d21aeb9..a214d516478f 100644 --- a/products/servicemanagement/api.yaml +++ b/products/servicemanagement/api.yaml @@ -13,7 +13,7 @@ --- !ruby/object:Api::Product name: ServiceManagement -display_name: Service Management +display_name: Cloud Endpoints versions: - !ruby/object:Api::Product::Version name: ga diff --git a/products/sourcerepo/terraform.yaml b/products/sourcerepo/terraform.yaml index 688c25f64cb7..31aa0f98a20a 100644 --- a/products/sourcerepo/terraform.yaml +++ b/products/sourcerepo/terraform.yaml @@ -49,9 +49,11 @@ overrides: !ruby/object:Overrides::ResourceOverrides A Cloud Pub/Sub topic in this repo's project. Values are of the form `projects//topics/` or `` (where the topic will be inferred). + set_hash_func: 'resourceSourceRepoRepositoryPubSubConfigsHash' pubsubConfigs.serviceAccountEmail: !ruby/object:Overrides::Terraform::PropertyOverride default_from_api: true custom_code: !ruby/object:Provider::Terraform::CustomCode + constants: templates/terraform/constants/source_repo_repository.go.erb update_encoder: templates/terraform/update_encoder/source_repo_repository.erb post_create: templates/terraform/post_create/source_repo_repository_update.go.erb # This is for copying files over diff --git a/products/spanner/terraform.yaml b/products/spanner/terraform.yaml index 53438b2a8f1a..73e9f7bbe016 100644 --- a/products/spanner/terraform.yaml +++ b/products/spanner/terraform.yaml @@ -17,6 +17,7 @@ overrides: !ruby/object:Overrides::ResourceOverrides autogen_async: true # This resource is a child resource skip_sweeper: true + id_format: "{{instance}}/{{name}}" import_format: - "projects/{{project}}/instances/{{instance}}/databases/{{name}}" - "instances/{{instance}}/databases/{{name}}" @@ -26,6 +27,8 @@ overrides: !ruby/object:Overrides::ResourceOverrides - !ruby/object:Provider::Terraform::Examples name: "spanner_database_basic" primary_resource_id: "database" + # Randomness due to spanner instance + skip_vcr: true vars: database_name: "my-database" properties: @@ -55,6 +58,8 @@ overrides: !ruby/object:Overrides::ResourceOverrides - !ruby/object:Provider::Terraform::Examples name: "spanner_instance_basic" primary_resource_id: "example" + # Randomness + skip_vcr: true properties: name: !ruby/object:Overrides::Terraform::PropertyOverride description: | diff --git a/products/sql/api.yaml b/products/sql/api.yaml index ae6d8420ffea..222e4f7f9353 100644 --- a/products/sql/api.yaml +++ b/products/sql/api.yaml @@ -395,6 +395,10 @@ objects: update method to make sure concurrent updates are handled properly. During update, use the most recent settingsVersion value for this instance and do not try to update this value. + - !ruby/object:Api::Type::KeyValuePairs + name: 'userLabels' + description: | + User-provided labels, represented as a dictionary where each label is a single key value pair. 
- !ruby/object:Api::Type::String
name: 'gceZone'
output: true
@@ -415,6 +419,50 @@ objects:
- :PENDING_CREATE
- :MAINTENANCE
- :FAILED
+ - !ruby/object:Api::Type::NestedObject
+ name: 'diskEncryptionConfiguration'
+ description: 'Disk encryption settings'
+ properties:
+ - !ruby/object:Api::Type::String
+ name: 'kmsKeyName'
+ description: |
+ The KMS key used to encrypt the Cloud SQL instance
+ - !ruby/object:Api::Type::NestedObject
+ name: 'diskEncryptionStatus'
+ description: 'Disk encryption status'
+ properties:
+ - !ruby/object:Api::Type::String
+ name: 'kmsKeyVersionName'
+ description: |
+ The KMS key version used to encrypt the Cloud SQL instance
+ - !ruby/object:Api::Type::NestedObject
+ name: 'serverCaCert'
+ description: 'SSL configuration'
+ output: true
+ properties:
+ - !ruby/object:Api::Type::String
+ name: 'cert'
+ description: 'PEM representation of the X.509 certificate.'
+ - !ruby/object:Api::Type::String
+ name: 'certSerialNumber'
+ description: 'Serial number, as extracted from the certificate.'
+ - !ruby/object:Api::Type::String
+ name: 'commonName'
+ description: 'User supplied name. Constrained to [a-zA-Z.-_ ]+.'
+ - !ruby/object:Api::Type::Time
+ name: 'createTime'
+ description: |
+ The time when the certificate was created in RFC 3339 format, for
+ example 2012-11-15T16:19:00.094Z.
+ - !ruby/object:Api::Type::Time
+ name: 'expirationTime'
+ description: |
+ The time when the certificate expires in RFC 3339 format, for example
+ 2012-11-15T16:19:00.094Z.
+ - !ruby/object:Api::Type::String
+ name: 'sha1Fingerprint'
+ description: |
+ SHA-1 fingerprint of the certificate.
- !ruby/object:Api::Resource
name: 'Database'
kind: 'sql#database'
@@ -654,7 +702,7 @@ objects:
- !ruby/object:Api::Type::Enum
name: 'databaseVersion'
description: |
- The MySQL version running on your source database server: MYSQL_5_6 or MYSQL_5_7.
+ The MySQL version running on your source database server.
required: true values: - :MYSQL_5_6 diff --git a/products/sql/terraform.yaml b/products/sql/terraform.yaml index 9d64768e49be..e0cac2542390 100644 --- a/products/sql/terraform.yaml +++ b/products/sql/terraform.yaml @@ -16,6 +16,7 @@ client_name: 'SqlAdmin' overrides: !ruby/object:Overrides::ResourceOverrides Database: !ruby/object:Overrides::Terraform::ResourceOverride mutex: "google-sql-database-instance-{{project}}-{{instance}}" + read_error_transform: "transformSQLDatabaseReadError" import_format: ["projects/{{project}}/instances/{{instance}}/databases/{{name}}", "{{project}}/{{instance}}/{{name}}", "instances/{{instance}}/databases/{{name}}", diff --git a/products/storage/ansible.yaml b/products/storage/ansible.yaml index f8b4b6400267..976c6561aa92 100644 --- a/products/storage/ansible.yaml +++ b/products/storage/ansible.yaml @@ -25,6 +25,12 @@ datasources: !ruby/object:Overrides::ResourceOverrides Object: !ruby/object:Overrides::Ansible::ResourceOverride exclude: true overrides: !ruby/object:Overrides::ResourceOverrides + Bucket: !ruby/object:Overrides::Ansible::ResourceOverride + properties: + retentionPolicy: !ruby/object:Overrides::Ansible::PropertyOverride + exclude: true + encryption: !ruby/object:Overrides::Ansible::PropertyOverride + exclude: true ObjectAccessControl: !ruby/object:Overrides::Ansible::ResourceOverride exclude: true Object: !ruby/object:Overrides::Ansible::ResourceOverride diff --git a/products/storage/api.yaml b/products/storage/api.yaml index ab50fcc12258..b52ac786a957 100644 --- a/products/storage/api.yaml +++ b/products/storage/api.yaml @@ -407,6 +407,39 @@ objects: object is missing, if applicable, the service will return the named object from this bucket as the content for a 404 Not Found result. + - !ruby/object:Api::Type::KeyValuePairs + name: 'labels' + description: | + Labels applied to this bucket. A list of key->value pairs. + - !ruby/object:Api::Type::NestedObject + name: 'encryption' + description: | + Encryption configuration for the bucket + properties: + - !ruby/object:Api::Type::String + name: 'defaultKmsKeyName' + description: | + A Cloud KMS key that will be used to encrypt objects inserted into this bucket, + if no encryption method is specified. + - !ruby/object:Api::Type::NestedObject + name: 'retentionPolicy' + description: | + Retention policy for the bucket + properties: + - !ruby/object:Api::Type::Time + name: 'effectiveTime' + description: | + The time from which the retention policy was effective + - !ruby/object:Api::Type::Boolean + name: 'isLocked' + description: | + If the retention policy is locked. If true, the retention policy cannot be removed and the period cannot + be reduced. + - !ruby/object:Api::Type::Integer + name: 'retentionPeriod' + description: | + The period of time, in seconds, that objects in the bucket must be retained and cannot be deleted, + overwritten, or made noncurrent. parameters: - !ruby/object:Api::Type::String name: 'project' @@ -438,10 +471,6 @@ objects: - :projectPrivate - :publicRead input: true - - !ruby/object:Api::Type::KeyValuePairs - name: 'labels' - description: | - Labels applied to this bucket. A list of key->value pairs. 
- !ruby/object:Api::Resource name: 'BucketAccessControl' kind: 'storage#bucketAccessControl' diff --git a/products/storage/terraform.yaml b/products/storage/terraform.yaml index af92aaa18cf1..56fd4bfe903d 100644 --- a/products/storage/terraform.yaml +++ b/products/storage/terraform.yaml @@ -15,6 +15,7 @@ overrides: !ruby/object:Overrides::ResourceOverrides Bucket: !ruby/object:Overrides::Terraform::ResourceOverride exclude_resource: true + error_retry_predicates: ["isStoragePreconditionError"] import_format: ["{{name}}"] examples: - !ruby/object:Provider::Terraform::Examples diff --git a/provider/ansible.rb b/provider/ansible.rb index 32b40ff46175..a15208919545 100644 --- a/provider/ansible.rb +++ b/provider/ansible.rb @@ -94,7 +94,7 @@ def module_name(object) object.name.underscore].join('_') end - def build_object_data(object, output_folder, version) + def build_object_data(pwd, object, output_folder, version) # Method is overridden to add Ansible example objects to the data object. data = AnsibleProductFileTemplate.file_for_resource( output_folder, @@ -105,7 +105,7 @@ def build_object_data(object, output_folder, version) ) prod_name = data.object.name.underscore - path = ["products/#{data.product.api_name}", + path = [pwd + "/products/#{data.product.api_name}", "examples/ansible/#{prod_name}.yaml"].join('/') data.example = get_example(path) if File.file?(path) @@ -233,19 +233,20 @@ def get_example(cfg_file) ex end - def generate_resource(data) + def generate_resource(pwd, data) target_folder = data.output_folder name = module_name(data.object) path = File.join(target_folder, "plugins/modules/#{name}.py") data.generate( + pwd, data.object.template || 'templates/ansible/resource.erb', path, self ) end - def generate_resource_tests(data) + def generate_resource_tests(pwd, data) prod_name = data.object.name.underscore path = ["products/#{data.product.api_name}", "examples/ansible/#{prod_name}.yaml"].join('/') @@ -260,6 +261,7 @@ def generate_resource_tests(data) path = File.join(target_folder, "tests/integration/targets/#{name}/tasks/main.yml") data.generate( + pwd, 'templates/ansible/tests_main.erb', path, self @@ -270,6 +272,7 @@ def generate_resource_tests(data) path = File.join(target_folder, "tests/integration/targets/#{name}/tasks/#{t.name}.yml") data.generate( + pwd, t.path, path, self @@ -280,20 +283,22 @@ def generate_resource_tests(data) path = File.join(target_folder, "tests/integration/targets/#{name}/defaults/main.yml") data.generate( + pwd, 'templates/ansible/integration_test_variables.erb', path, self ) end - def generate_resource_sweepers(data) + def generate_resource_sweepers(pwd, data) # No generated sweepers for this provider end - def compile_datasource(data) + def compile_datasource(pwd, data) target_folder = data.output_folder name = module_name(data.object) - data.generate('templates/ansible/facts.erb', + data.generate(pwd, + 'templates/ansible/facts.erb', File.join(target_folder, "plugins/modules/#{name}_info.py"), self) @@ -344,7 +349,7 @@ def regex_url(url) # Generates files on a per-resource basis. # All paths are allowed a '%s' where the module name # will be added. 
- def generate_resource_files(data) + def generate_resource_files(pwd, data) return unless @config&.files&.resource files = @config.files.resource @@ -358,7 +363,7 @@ def generate_resource_files(data) data.version, build_env ) - compile_file_list(data.output_folder, files, file_template) + compile_file_list(data.output_folder, files, file_template, pwd) end def copy_common_files(output_folder, provider_name = 'ansible') diff --git a/provider/ansible/example.rb b/provider/ansible/example.rb index 6591304402a2..83fc317937e6 100644 --- a/provider/ansible/example.rb +++ b/provider/ansible/example.rb @@ -229,7 +229,7 @@ class NoVerifier < Verifier attr_reader :reason def validate() end - def build_task(_state, _object) + def build_task(_state, _object, _pwd) '' end end @@ -262,9 +262,9 @@ def validate true end - def build_task(_state, object) + def build_task(_state, object, pwd) @parameters = build_parameters(object) - compile 'templates/ansible/verifiers/facts.yaml.erb' + compile(pwd + '/templates/ansible/verifiers/facts.yaml.erb') end private diff --git a/provider/ansible_devel.rb b/provider/ansible_devel.rb index 08d745a0fba6..51a3a5ae692b 100644 --- a/provider/ansible_devel.rb +++ b/provider/ansible_devel.rb @@ -34,22 +34,24 @@ def module_utils_import_path 'ansible.module_utils.gcp_utils' end - def generate_resource(data) + def generate_resource(pwd, data) target_folder = data.output_folder name = module_name(data.object) path = File.join(target_folder, "lib/ansible/modules/cloud/google/#{name}.py") data.generate( + pwd, data.object.template || 'templates/ansible/resource.erb', path, self ) end - def compile_datasource(data) + def compile_datasource(pwd, data) target_folder = data.output_folder name = module_name(data.object) - data.generate('templates/ansible/facts.erb', + data.generate(pwd, + 'templates/ansible/facts.erb', File.join(target_folder, "lib/ansible/modules/cloud/google/#{name}_info.py"), self) @@ -64,7 +66,7 @@ def compile_datasource(data) File.symlink "#{name}_info.py", deprecated_facts_path end - def generate_resource_tests(data) + def generate_resource_tests(pwd, data) prod_name = data.object.name.underscore path = ["products/#{data.product.api_name}", "examples/ansible/#{prod_name}.yaml"].join('/') @@ -79,6 +81,7 @@ def generate_resource_tests(data) path = File.join(target_folder, "test/integration/targets/#{name}/tasks/main.yml") data.generate( + pwd, 'templates/ansible/tests_main.erb', path, self @@ -89,6 +92,7 @@ def generate_resource_tests(data) path = File.join(target_folder, "test/integration/targets/#{name}/tasks/#{t.name}.yml") data.generate( + pwd, t.path, path, self @@ -96,7 +100,7 @@ def generate_resource_tests(data) end end - def generate_resource_sweepers(data) end + def generate_resource_sweepers(pwd, data) end def compile_common_files(_arg1, _arg2, _arg3) end @@ -115,7 +119,7 @@ def copy_common_files(output_folder, provider_name = nil) copy_file_list(output_folder, files) end - def generate_resource_files(data) + def generate_resource_files(pwd, data) return unless @config&.files&.resource files = @config.files.resource @@ -132,7 +136,7 @@ def generate_resource_files(data) data.version, build_env ) - compile_file_list(data.output_folder, files, file_template) + compile_file_list(data.output_folder, files, file_template, pwd) end end end diff --git a/provider/core.rb b/provider/core.rb index e43a5bf29fe6..a9d4361b6317 100644 --- a/provider/core.rb +++ b/provider/core.rb @@ -89,10 +89,14 @@ def generate(output_folder, types, product_path, dump_yaml) 
compile_product_files(output_folder) \ unless @config.files.nil? || @config.files.compile.nil? - generate_datasources(output_folder, types) \ + FileUtils.mkpath output_folder unless Dir.exist?(output_folder) + pwd = Dir.pwd + Dir.chdir output_folder + generate_datasources(pwd, output_folder, types) \ unless @config.datasources.nil? - generate_operation(output_folder, types) + generate_operation(pwd, output_folder, types) + Dir.chdir pwd # Write a file with the final version of the api, after overrides # have been applied. @@ -107,7 +111,7 @@ def generate(output_folder, types, product_path, dump_yaml) end end - def generate_operation(output_folder, types); end + def generate_operation(pwd, output_folder, types); end def copy_files(output_folder) copy_file_list(output_folder, @config.files.copy) @@ -181,14 +185,16 @@ def compile_common_files( compile_file_list(output_folder, files, file_template) end - def compile_file_list(output_folder, files, file_template) + def compile_file_list(output_folder, files, file_template, pwd = Dir.pwd) + FileUtils.mkpath output_folder unless Dir.exist?(output_folder) + Dir.chdir output_folder files.map do |target, source| Thread.new do Google::LOGGER.debug "Compiling #{source} => #{target}" - target_file = File.join(output_folder, target) - file_template.generate(source, target_file, self) + file_template.generate(pwd, source, target, self) end end.map(&:join) + Dir.chdir pwd end def generate_objects(output_folder, types) @@ -215,27 +221,34 @@ def generate_objects(output_folder, types) end def generate_object(object, output_folder, version_name) - data = build_object_data(object, output_folder, version_name) + pwd = Dir.pwd + data = build_object_data(pwd, object, output_folder, version_name) unless object.exclude_resource + FileUtils.mkpath output_folder unless Dir.exist?(output_folder) + Dir.chdir output_folder Google::LOGGER.debug "Generating #{object.name} resource" - generate_resource data.clone + generate_resource(pwd, data.clone) Google::LOGGER.debug "Generating #{object.name} tests" - generate_resource_tests data.clone - generate_resource_sweepers data.clone - generate_resource_files data.clone + generate_resource_tests(pwd, data.clone) + generate_resource_sweepers(pwd, data.clone) + generate_resource_files(pwd, data.clone) + Dir.chdir pwd end # if iam_policy is not defined or excluded, don't generate it return if object.iam_policy.nil? || object.iam_policy.exclude + FileUtils.mkpath output_folder unless Dir.exist?(output_folder) + Dir.chdir output_folder Google::LOGGER.debug "Generating #{object.name} IAM policy" - generate_iam_policy data.clone + generate_iam_policy(pwd, data.clone) + Dir.chdir pwd end # Generate files at a per-resource basis. 
- def generate_resource_files(data) end
+ def generate_resource_files(pwd, data) end
- def generate_datasources(output_folder, types)
+ def generate_datasources(pwd, output_folder, types)
# We need to apply overrides for datasources
@api = Overrides::Runner.build(@api, @config.datasources,
@config.resource_override,
@@ -257,18 +270,18 @@ def generate_datasources(output_folder, types)
"Excluding #{object.name} datasource per API version"
)
else
- generate_datasource object, output_folder
+ generate_datasource(pwd, object, output_folder)
end
end
end
- def generate_datasource(object, output_folder)
- data = build_object_data(object, output_folder, @target_version_name)
+ def generate_datasource(pwd, object, output_folder)
+ data = build_object_data(pwd, object, output_folder, @target_version_name)
- compile_datasource data.clone
+ compile_datasource(pwd, data.clone)
end
- def build_object_data(object, output_folder, version)
+ def build_object_data(_pwd, object, output_folder, version)
ProductFileTemplate.file_for_resource(output_folder, object, version, @config, build_env)
end
@@ -329,7 +342,7 @@ def update_uri(resource, url_part)
url_part
end
- def generate_iam_policy(data) end
+ def generate_iam_policy(pwd, data) end
# TODO(nelsonjr): Review all object interfaces and move to private methods
# that should not be exposed outside the object hierarchy.
diff --git a/provider/file_template.rb b/provider/file_template.rb
index 1694ca9a511a..5831be9aca0f 100644
--- a/provider/file_template.rb
+++ b/provider/file_template.rb
@@ -35,10 +35,7 @@ class FileTemplate
#
# Once the file's contents are written, set the proper [chmod] mode and
# format the file with a language-appropriate formatter.
- def generate(template, path, provider)
- folder = File.dirname(path)
- FileUtils.mkpath folder unless Dir.exist?(folder)
-
+ def generate(pwd, template, path, provider)
# If we've modified a file since starting an MM run, it's a reasonable
# assumption that it was this run that modified it.
if File.exist?(path) && File.mtime(path) > @env[:start_time]
@@ -58,17 +55,19 @@ def generate(template, path, provider)
end
# This variable is used in ansible/resource.erb
- ctx.local_variable_set('file_relative', relative_path(path, @output_folder).to_s)
+ ctx.local_variable_set('file_relative',
+ relative_path(@output_folder + '/' + path, @output_folder).to_s)
+ ctx.local_variable_set('pwd', pwd)
Google::LOGGER.debug "Generating #{path}"
- File.open(path, 'w') { |f| f.puts compile_file(ctx, template) }
+ File.open(path, 'w') { |f| f.puts compile_file(ctx, pwd + '/' + template) }
# Files are often generated in parallel.
# We can use thread-local variables to ensure that autogen checking
# stays specific to the file each thread represents.
raise "#{path} missing autogen" unless Thread.current[:autogen]
- old_file_chmod_mode = File.stat(template).mode
+ old_file_chmod_mode = File.stat(pwd + '/' + template).mode
FileUtils.chmod(old_file_chmod_mode, path)
format_output_file(path)
diff --git a/provider/inspec.rb b/provider/inspec.rb
index 00fe1a5a0c26..e6eab888dc68 100644
--- a/provider/inspec.rb
+++ b/provider/inspec.rb
@@ -61,36 +61,39 @@ class NestedObjectProductFileTemplate < Provider::ProductFileTemplate
# This function uses the resource templates to create singular and plural
# resources that can be used by InSpec
- def generate_resource(data)
+ def generate_resource(pwd, data)
target_folder = File.join(data.output_folder, 'libraries')
name = data.object.name.underscore
data.generate(
+ pwd,
'templates/inspec/singular_resource.erb',
File.join(target_folder, "#{resource_name(data.object, data.product)}.rb"),
self
)
- generate_documentation(data.clone, name, false)
+ generate_documentation(pwd, data.clone, name, false)
unless data.object.singular_only
data.generate(
+ pwd,
'templates/inspec/plural_resource.erb',
File.join(target_folder, resource_name(data.object, data.product).pluralize + '.rb'),
self
)
- generate_documentation(data.clone, name, true)
+ generate_documentation(pwd, data.clone, name, true)
end
- generate_properties(data.clone, data.object.all_user_properties)
+ generate_properties(pwd, data.clone, data.object.all_user_properties)
end
# Generate the IAM policy for this object. This is used to query and test
# IAM policies separately from the resource itself
- def generate_iam_policy(data)
+ def generate_iam_policy(pwd, data)
target_folder = File.join(data.output_folder, 'libraries')
iam_policy_resource_name = "#{resource_name(data.object, data.product)}_iam_policy"
data.generate(
+ pwd,
'templates/inspec/iam_policy/iam_policy.erb',
File.join(target_folder, "#{iam_policy_resource_name}.rb"),
self
@@ -98,21 +101,23 @@ def generate_iam_policy(data)
markdown_target_folder = File.join(data.output_folder, 'docs/resources')
data.generate(
+ pwd,
'templates/inspec/iam_policy/iam_policy.md.erb',
File.join(markdown_target_folder, "#{iam_policy_resource_name}.md"),
self
)
- generate_iam_binding(data)
+ generate_iam_binding(pwd, data)
end
# Generate the IAM binding for this object. This is used to query and test
# IAM bindings in a more convenient way than using the IAM policy resource
- def generate_iam_binding(data)
+ def generate_iam_binding(pwd, data)
target_folder = File.join(data.output_folder, 'libraries')
iam_binding_resource_name = "#{resource_name(data.object, data.product)}_iam_binding"
data.generate(
+ pwd,
'templates/inspec/iam_binding/iam_binding.erb',
File.join(target_folder, "#{iam_binding_resource_name}.rb"),
self
@@ -120,25 +125,26 @@ def generate_iam_binding(data)
markdown_target_folder = File.join(data.output_folder, 'docs/resources')
data.generate(
+ pwd,
'templates/inspec/iam_binding/iam_binding.md.erb',
File.join(markdown_target_folder, "#{iam_binding_resource_name}.md"),
self
)
end
- def generate_properties(data, props)
+ def generate_properties(pwd, data, props)
nested_objects = props.select(&:nested_properties?)
return if nested_objects.empty?
# Create property files for any nested objects.
- generate_property_files(nested_objects, data)
+ generate_property_files(pwd, nested_objects, data)
# Create property files for any deeper nested objects.
- nested_objects.each { |prop| generate_properties(data, prop.nested_properties) } + nested_objects.each { |prop| generate_properties(pwd, data, prop.nested_properties) } end # Generate the files for the properties - def generate_property_files(properties, data) + def generate_property_files(pwd, properties, data) properties.flatten.compact.each do |property| nested_object_template = NestedObjectProductFileTemplate.new( data.output_folder, @@ -153,11 +159,11 @@ def generate_property_files(properties, data) nested_object_template.output_folder, "libraries/#{nested_object_requires(property)}.rb" ) - nested_object_template.generate(source, target, self) + nested_object_template.generate(pwd, source, target, self) end end - def build_object_data(object, output_folder, version) + def build_object_data(_pwd, object, output_folder, version) InspecProductFileTemplate.file_for_resource( output_folder, object, @@ -168,7 +174,7 @@ def build_object_data(object, output_folder, version) end # Generates InSpec markdown documents for the resource - def generate_documentation(data, base_name, plural) + def generate_documentation(pwd, data, base_name, plural) docs_folder = File.join(data.output_folder, 'docs', 'resources') name = plural ? base_name.pluralize : base_name @@ -179,6 +185,7 @@ def generate_documentation(data, base_name, plural) file_name = resource_name(data.object, data.product) file_name = file_name.pluralize if plural data.generate( + pwd, 'templates/inspec/doc_template.md.erb', File.join(docs_folder, "#{file_name}.md"), self @@ -193,28 +200,29 @@ def format_url(url) end # Copies InSpec tests to build folder - def generate_resource_tests(data) + def generate_resource_tests(pwd, data) target_folder = File.join(data.output_folder, 'test') FileUtils.mkpath target_folder - FileUtils.cp_r 'templates/inspec/tests/.', target_folder + FileUtils.cp_r pwd + '/templates/inspec/tests/.', target_folder name = resource_name(data.object, data.product) - generate_inspec_test(data.clone, name, target_folder, name) + generate_inspec_test(pwd, data.clone, name, target_folder, name) # Build test for plural resource - generate_inspec_test(data.clone, name.pluralize, target_folder, name)\ + generate_inspec_test(pwd, data.clone, name.pluralize, target_folder, name)\ unless data.object.singular_only end - def generate_inspec_test(data, name, target_folder, attribute_file_name) + def generate_inspec_test(pwd, data, name, target_folder, attribute_file_name) data.name = name data.attribute_file_name = attribute_file_name data.doc_generation = false data.privileged = data.object.privileged data.generate( + pwd, 'templates/inspec/integration_test_template.erb', File.join( target_folder, @@ -225,7 +233,7 @@ def generate_inspec_test(data, name, target_folder, attribute_file_name) ) end - def generate_resource_sweepers(data) + def generate_resource_sweepers(pwd, data) # No generated sweepers for this provider end @@ -306,22 +314,26 @@ def markdown_format(property, indent = 1) description_arr += property.item_type.properties.map\ { |prop| markdown_format(prop, indent + 1) } description = description_arr.join("\n\n") + elsif property.is_a?(Api::Type::Enum) + description_arr = [description, "#{' ' * indent}Possible values:"] + description_arr += property.values.map { |v| "#{' ' * (indent + 1)}* #{v}" } + description = description_arr.join("\n") end description end - def grab_attributes - YAML.load_file('templates/inspec/tests/integration/configuration/mm-attributes.yml') + def grab_attributes(pwd) + YAML.load_file(pwd + 
'/templates/inspec/tests/integration/configuration/mm-attributes.yml')
end
# Returns a variable name OR default value for that variable based on
# defaults from the existing inspec-gcp tests that do not exist within MM
# Default values are used within documentation to show realistic examples
- def external_attribute(attribute_name, doc_generation = false)
+ def external_attribute(pwd, attribute_name, doc_generation = false)
return attribute_name unless doc_generation
external_attribute_file = 'templates/inspec/examples/attributes/external_attributes.yml'
- "'#{YAML.load_file(external_attribute_file)[attribute_name]}'"
+ "'#{YAML.load_file(pwd + '/' + external_attribute_file)[attribute_name]}'"
end
def qualified_property_class(property)
@@ -381,8 +393,11 @@ def beta_api_url(object)
end
def ga_api_url(object)
- ga_version = object.__product.version_obj_or_closest('ga')
- object.product_url || ga_version.base_url
+ if object.__product.exists_at_version('ga')
+ ga_version = object.__product.version_obj_or_closest('ga')
+ return object.product_url || ga_version.base_url
+ end
+ beta_api_url(object)
end
end
end
diff --git a/provider/terraform.rb b/provider/terraform.rb
index e694501f30ff..671c9466b461 100644
--- a/provider/terraform.rb
+++ b/provider/terraform.rb
@@ -1,4 +1,4 @@
-# Copyright 2017 Google Inc.
+# Copyright 2020 Google Inc.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
@@ -100,23 +100,48 @@ def force_new?(property, resource)
force_new?(property.parent, resource))))
end
- # Returns the property for a given Terraform field path (e.g.
+ # Returns tuples of (fieldName, list of update masks) for
+ # top-level updatable fields. Schema path refers to a given Terraform
+ # field name (e.g. d.GetChange('fieldName'))
+ def get_property_update_masks_groups(properties, mask_prefix: '')
+ mask_groups = []
+ properties.each do |prop|
+ if prop.flatten_object
+ mask_groups += get_property_update_masks_groups(
+ prop.properties, mask_prefix: "#{prop.api_name}."
+ )
+ elsif prop.update_mask_fields
+ mask_groups << [prop.name.underscore, prop.update_mask_fields]
+ else
+ mask_groups << [prop.name.underscore, [mask_prefix + prop.api_name]]
+ end
+ end
+ mask_groups
+ end
+
+ # Returns an updated path for a given Terraform field path (e.g.
# 'a_field', 'parent_field.0.child_name'). Returns nil if the property
- # is not included in the resource's properties.
- def property_for_schema_path(schema_path, resource)
+ # is not included in the resource's properties and removes keys that have
+ # been flattened
+ # TODO(emilymye): Change format of input for
+ # exactly_one_of/at_least_one_of/etc to use camelcase, MM properties and
+ # convert to snake in this method
+ def get_property_schema_path(schema_path, resource)
nested_props = resource.properties
prop = nil
-
- schema_path.split('.').each_with_index do |pname, i|
- next if i.odd?
-
- pname = pname.camelize(:lower)
- prop = nested_props.find { |p| p.name == pname }
- break if prop.nil?
+ path_tkns = schema_path.split('.0.').map do |pname|
+ camel_pname = pname.camelize(:lower)
+ prop = nested_props.find { |p| p.name == camel_pname }
+ return nil if prop.nil?
nested_props = prop.nested_properties || []
+ prop.flatten_object ? nil : pname.underscore
+ end
+ if path_tkns.empty? || path_tkns[-1].nil?
+ nil + else + path_tkns.compact.join('.0.') end - prop end # Transforms a format string with field markers to a regex string with @@ -153,30 +178,31 @@ def folder_name(version) # This function uses the resource.erb template to create one file # per resource. The resource.erb template forms the basis of a single # GCP Resource on Terraform. - def generate_resource(data) - target_folder = File.join(data.output_folder, folder_name(data.version)) - - name = data.object.name.underscore + def generate_resource(pwd, data) + name = data.object.filename_override || data.object.name.underscore product_name = data.product.name.underscore - filepath = File.join(target_folder, "resource_#{product_name}_#{name}.go") - data.generate('templates/terraform/resource.erb', filepath, self) - generate_documentation(data) + FileUtils.mkpath folder_name(data.version) unless Dir.exist?(folder_name(data.version)) + data.generate(pwd, + '/templates/terraform/resource.erb', + "#{folder_name(data.version)}/resource_#{product_name}_#{name}.go", + self) + generate_documentation(pwd, data) end - def generate_documentation(data) + def generate_documentation(pwd, data) target_folder = data.output_folder target_folder = File.join(target_folder, 'website', 'docs', 'r') FileUtils.mkpath target_folder - name = data.object.name.underscore + name = data.object.filename_override || data.object.name.underscore product_name = @config.legacy_name || data.product.name.underscore filepath = File.join(target_folder, "#{product_name}_#{name}.html.markdown") - data.generate('templates/terraform/resource.html.markdown.erb', filepath, self) + data.generate(pwd, 'templates/terraform/resource.html.markdown.erb', filepath, self) end - def generate_resource_tests(data) + def generate_resource_tests(pwd, data) return if data.object.examples .reject(&:skip_test) .reject do |e| @@ -185,96 +211,91 @@ def generate_resource_tests(data) end .empty? 
- target_folder = File.join(data.output_folder, folder_name(data.version)) - - name = data.object.name.underscore + name = data.object.filename_override || data.object.name.underscore product_name = data.product.name.underscore - filepath = - File.join( - target_folder, - "resource_#{product_name}_#{name}_generated_test.go" - ) data.product = data.product.name data.resource_name = data.object.name.camelize(:upper) - data.generate('templates/terraform/examples/base_configs/test_file.go.erb', - filepath, self) + FileUtils.mkpath folder_name(data.version) unless Dir.exist?(folder_name(data.version)) + data.generate( + pwd, + 'templates/terraform/examples/base_configs/test_file.go.erb', + "#{folder_name(data.version)}/resource_#{product_name}_#{name}_generated_test.go", + self + ) end - def generate_resource_sweepers(data) + def generate_resource_sweepers(pwd, data) return if data.object.skip_sweeper || data.object.custom_code.custom_delete || - data.object.custom_code.pre_delete + data.object.custom_code.pre_delete || + data.object.skip_delete - target_folder = File.join(data.output_folder, folder_name(data.version)) - - name = data.object.name.underscore + name = data.object.filename_override || data.object.name.underscore product_name = data.product.name.underscore - filepath = - File.join( - target_folder, - "resource_#{product_name}_#{name}_sweeper_test.go" - ) data.product = data.product.name data.resource_name = data.object.name.camelize(:upper) - data.generate('templates/terraform/sweeper_file.go.erb', - filepath, self) + FileUtils.mkpath folder_name(data.version) unless Dir.exist?(folder_name(data.version)) + data.generate(pwd, + 'templates/terraform/sweeper_file.go.erb', + "#{folder_name(data.version)}/resource_#{product_name}_#{name}_sweeper_test.go", + self) end - def generate_operation(output_folder, _types) + def generate_operation(pwd, output_folder, _types) return if @api.objects.select(&:autogen_async).empty? product_name = @api.name.underscore - data = build_object_data(@api.objects.first, output_folder, @target_version_name) - target_folder = File.join(data.output_folder, folder_name(data.version)) + data = build_object_data(pwd, @api.objects.first, output_folder, @target_version_name) data.object = @api.objects.select(&:autogen_async).first data.async = data.object.async - data.generate('templates/terraform/operation.go.erb', - File.join(target_folder, - "#{product_name}_operation.go"), + FileUtils.mkpath folder_name(data.version) unless Dir.exist?(folder_name(data.version)) + data.generate(pwd, + 'templates/terraform/operation.go.erb', + "#{folder_name(data.version)}/#{product_name}_operation.go", self) end # Generate the IAM policy for this object. This is used to query and test # IAM policies separately from the resource itself - def generate_iam_policy(data) - target_folder = File.join(data.output_folder, folder_name(data.version)) - - name = data.object.name.underscore + def generate_iam_policy(pwd, data) + name = data.object.filename_override || data.object.name.underscore product_name = data.product.name.underscore - filepath = File.join(target_folder, "iam_#{product_name}_#{name}.go") - data.generate('templates/terraform/iam_policy.go.erb', filepath, self) + FileUtils.mkpath folder_name(data.version) unless Dir.exist?(folder_name(data.version)) + data.generate(pwd, + 'templates/terraform/iam_policy.go.erb', + "#{folder_name(data.version)}/iam_#{product_name}_#{name}.go", + self) # Only generate test if testable examples exist. 
unless data.object.examples.reject(&:skip_test).empty? - generated_test_name = "iam_#{product_name}_#{name}_generated_test.go" - filepath = File.join(target_folder, generated_test_name) data.generate( + pwd, 'templates/terraform/examples/base_configs/iam_test_file.go.erb', - filepath, + "#{folder_name(data.version)}/iam_#{product_name}_#{name}_generated_test.go", self ) end - generate_iam_documentation(data) + generate_iam_documentation(pwd, data) end - def generate_iam_documentation(data) + def generate_iam_documentation(pwd, data) target_folder = data.output_folder target_folder = File.join(target_folder, 'website', 'docs', 'r') FileUtils.mkpath target_folder - name = data.object.name.underscore + name = data.object.filename_override || data.object.name.underscore product_name = @config.legacy_name || data.product.name.underscore filepath = File.join(target_folder, "#{product_name}_#{name}_iam.html.markdown") - data.generate('templates/terraform/resource_iam.html.markdown.erb', filepath, self) + data.generate(pwd, 'templates/terraform/resource_iam.html.markdown.erb', filepath, self) end - def build_object_data(object, output_folder, version) + def build_object_data(_pwd, object, output_folder, version) TerraformProductFileTemplate.file_for_resource( output_folder, object, diff --git a/provider/terraform/async.rb b/provider/terraform/async.rb index 08b2c589ece7..33b5b36b1b7b 100644 --- a/provider/terraform/async.rb +++ b/provider/terraform/async.rb @@ -20,8 +20,13 @@ class Terraform < Provider::AbstractCore class PollAsync < Api::Async # Details how to poll for an eventually-consistent resource state. - # Function to call for checking the Poll response - attr_reader :check_response_func + # Function to call for checking the Poll response for + # creating and updating a resource + attr_reader :check_response_func_existence + + # Function to call for checking the Poll response for + # deleting a resource + attr_reader :check_response_func_absence # Custom code to get a poll response, if needed. # Will default to same logic as Read() to get current resource @@ -31,12 +36,18 @@ class PollAsync < Api::Async # result of the final Read() attr_reader :suppress_error + # Number of times the desired state has to occur continuously + # during polling before returning a success + attr_reader :target_occurrences + def validate super - check :check_response_func, type: String, required: true + check :check_response_func_existence, type: String, required: true + check :check_response_func_absence, type: String, default: 'PollCheckForAbsence' check :custom_poll_read, type: String check :suppress_error, type: :boolean, default: false + check :target_occurrences, type: Integer, default: 1 end end end diff --git a/provider/terraform/common~compile.yaml b/provider/terraform/common~compile.yaml index 4878b9a14490..d75b6f6a172a 100644 --- a/provider/terraform/common~compile.yaml +++ b/provider/terraform/common~compile.yaml @@ -15,7 +15,6 @@ <% dir = @target_version_name == 'ga' ? 'google' : "google-#{@target_version_name}" -%> -'website/google.erb': 'third_party/terraform/website-compiled/google.erb' <% Dir["third_party/terraform/tests/*.go.erb"].each do |file_path| fname = file_path.split('/')[-1] -%> diff --git a/provider/terraform/custom_code.rb b/provider/terraform/custom_code.rb index 1f46b314a411..f1b2ef4dee58 100644 --- a/provider/terraform/custom_code.rb +++ b/provider/terraform/custom_code.rb @@ -75,6 +75,9 @@ class CustomCode < Api::Object # (e.g. 
"fooBarDiffSuppress") and regexes that are necessarily # exported (e.g. "fooBarValidationRegex"). attr_reader :constants + # This code is run before the Create call happens. It's placed + # in the Create function, just before the Create call is made. + attr_reader :pre_create # This code is run after the Create call succeeds. It's placed # in the Create function directly without modification. attr_reader :post_create @@ -126,6 +129,7 @@ def validate check :update_encoder, type: String check :decoder, type: String check :constants, type: String + check :pre_create, type: String check :post_create, type: String check :custom_create, type: String check :pre_update, type: String diff --git a/provider/terraform/examples.rb b/provider/terraform/examples.rb index 83c0283e0f43..be8c6c379740 100644 --- a/provider/terraform/examples.rb +++ b/provider/terraform/examples.rb @@ -59,6 +59,8 @@ class Examples < Api::Object # - :ORG_TARGET # - :BILLING_ACCT # - :SERVICE_ACCT + # - :CUST_ID + # - :IDENTITY_USER # This list corresponds to the `get*FromEnv` methods in provider_test.go. attr_reader :test_env_vars @@ -81,6 +83,10 @@ class Examples < Api::Object # } attr_reader :test_vars_overrides + # Hash to provider custom override values for generating oics config + # See test_vars_overrides for more details + attr_reader :oics_vars_overrides + # The version name of of the example's version if it's different than the # resource version, eg. `beta` # @@ -107,6 +113,9 @@ class Examples < Api::Object # Whether to skip generating tests for this resource attr_reader :skip_test + # Whether to skip generating docs for this example + attr_reader :skip_docs + # The name of the primary resource for use in IAM tests. IAM tests need # a reference to the primary resource to create IAM policies for attr_reader :primary_resource_name @@ -115,7 +124,13 @@ class Examples < Api::Object # Defaults to `templates/terraform/examples/{{name}}.tf.erb` attr_reader :config_path - def config_documentation + # If the example should be skipped during VCR testing. 
+ # This is the case when something about the resource or config causes VCR to fail, for example + # a resource with a unique identifier generated within the resource via resource.UniqueId(), + # or a config with two fine-grained resources that have a race condition during create + attr_reader :skip_vcr + + def config_documentation(pwd) docs_defaults = { PROJECT_NAME: 'my-project-name', FIRESTORE_PROJECT_NAME: 'my-project-name', @@ -125,7 +140,9 @@ def config_documentation ORG_DOMAIN: 'example.com', ORG_TARGET: '123456789', BILLING_ACCT: '000000-0000000-0000000-000000', - SERVICE_ACCT: 'emailAddress:my@service-account.com' + SERVICE_ACCT: 'emailAddress:my@service-account.com', + CUST_ID: 'A01b123xz', + IDENTITY_USER: 'cloud_identity_user' } @vars ||= {} @test_env_vars ||= {} @@ -135,26 +152,26 @@ def config_documentation test_env_vars: test_env_vars.map { |k, v| [k, docs_defaults[v]] }.to_h, primary_resource_id: primary_resource_id }, - config_path + pwd + '/' + config_path )) lines(compile_file( { content: body }, - 'templates/terraform/examples/base_configs/documentation.tf.erb' + pwd + '/templates/terraform/examples/base_configs/documentation.tf.erb' )) end - def config_test - body = config_test_body + def config_test(pwd) + body = config_test_body(pwd) lines(compile_file( { content: body }, - 'templates/terraform/examples/base_configs/test_body.go.erb' + pwd + '/templates/terraform/examples/base_configs/test_body.go.erb' )) end # rubocop:disable Style/FormatStringToken - def config_test_body + def config_test_body(pwd) @vars ||= {} @test_env_vars ||= {} @test_vars_overrides ||= {} @@ -185,21 +202,25 @@ def config_test_body test_env_vars: test_env_vars.map { |k, _| [k, "%{#{k}}"] }.to_h, primary_resource_id: primary_resource_id }, - config_path + pwd + '/' + config_path )) substitute_test_paths body end - def config_example + def config_example(pwd) @vars ||= [] + @oics_vars_overrides ||= {} + + rand_vars = vars.map { |k, str| [k, "#{str}-${local.name_suffix}"] }.to_h + + # Examples with test_env_vars are skipped elsewhere body = lines(compile_file( { - vars: vars.map { |k, str| [k, "#{str}-${local.name_suffix}"] }.to_h, + vars: rand_vars.merge(oics_vars_overrides), primary_resource_id: primary_resource_id }, - config_path + pwd + '/' + config_path )) substitute_example_paths body @@ -253,7 +274,9 @@ def validate check :ignore_read_extra, type: Array, item_type: String, default: [] check :primary_resource_name, type: String check :skip_test, type: TrueClass + check :skip_docs, type: TrueClass check :config_path, type: String, default: "templates/terraform/examples/#{name}.tf.erb" + check :skip_vcr, type: TrueClass end end end diff --git a/provider/terraform/sub_template.rb b/provider/terraform/sub_template.rb index a4e38113d3d6..03c58ab37139 100644 --- a/provider/terraform/sub_template.rb +++ b/provider/terraform/sub_template.rb @@ -17,51 +17,62 @@ module Provider class Terraform < Provider::AbstractCore # Functions to compile sub-templates.
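For readers skimming the hunks above: here is a rough sketch of how the new `PollAsync` and `custom_code` knobs might be wired up in a resource definition. This fragment is illustrative only - the check function name and template path are hypothetical, and in practice the async block lives on the resource while `custom_code` sits in the Terraform override:

```yaml
# Illustrative sketch only -- names and values below are hypothetical.
async: !ruby/object:Provider::Terraform::PollAsync
  # renamed from check_response_func; used when polling after create/update
  check_response_func_existence: 'PollCheckInstanceExists'
  # check_response_func_absence defaults to 'PollCheckForAbsence' for deletes
  suppress_error: true
  # require 10 consecutive polls in the desired state before succeeding
  target_occurrences: 10
custom_code: !ruby/object:Provider::Terraform::CustomCode
  # injected into Create(), just before the Create call is made
  pre_create: 'templates/terraform/pre_create/my_resource.go.erb'
```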
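Likewise, a hedged sketch of an `Examples` entry exercising the new fields (`skip_vcr`, `skip_docs`, `oics_vars_overrides`, and the `CUST_ID`/`IDENTITY_USER` test env vars); the example name, vars, and values are made up for illustration:

```yaml
# Illustrative sketch only -- this example does not exist in the PR.
- !ruby/object:Provider::Terraform::Examples
  name: 'cloud_identity_group_basic'
  primary_resource_id: 'group'
  skip_vcr: true   # e.g. the resource calls resource.UniqueId(), which breaks VCR replay
  skip_docs: true  # generate the test, but no website docs, for this example
  vars:
    group_name: 'my-group'
  test_env_vars:
    cust_id: :CUST_ID
    identity_user: :IDENTITY_USER
  oics_vars_overrides:
    group_name: 'oics-group'  # replaces the randomized value in generated oics configs
```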
module SubTemplate - def build_schema_property(property, object) - compile_template'templates/terraform/schema_property.erb', - property: property, - object: object + def build_schema_property(property, object, pwd) + compile_template pwd + '/templates/terraform/schema_property.erb', + property: property, + object: object, + pwd: pwd end - def build_subresource_schema(property, object) - compile_template'templates/terraform/schema_subresource.erb', - property: property, - object: object + def build_subresource_schema(property, object, pwd) + compile_template pwd + '/templates/terraform/schema_subresource.erb', + property: property, + object: object, + pwd: pwd end # Transforms a Cloud API representation of a property into a Terraform # schema representation. - def build_flatten_method(prefix, property, object) - compile_template 'templates/terraform/flatten_property_method.erb', + def build_flatten_method(prefix, property, object, pwd) + compile_template pwd + '/templates/terraform/flatten_property_method.erb', prefix: prefix, property: property, - object: object + object: object, + pwd: pwd end # Transforms a Terraform schema representation of a property into a # representation used by the Cloud API. - def build_expand_method(prefix, property, object) - compile_template 'templates/terraform/expand_property_method.erb', + def build_expand_method(prefix, property, object, pwd) + compile_template pwd + '/templates/terraform/expand_property_method.erb', prefix: prefix, property: property, - object: object + object: object, + pwd: pwd end - def build_expand_resource_ref(var_name, property) - compile_template 'templates/terraform/expand_resource_ref.erb', + def build_expand_resource_ref(var_name, property, pwd) + compile_template pwd + '/templates/terraform/expand_resource_ref.erb', var_name: var_name, - property: property + property: property, + pwd: pwd end - def build_property_documentation(property) - compile_template 'templates/terraform/property_documentation.erb', - property: property + def build_property_documentation(property, pwd) + return if property.removed? + + compile_template pwd + '/templates/terraform/property_documentation.erb', + property: property, + pwd: pwd end - def build_nested_property_documentation(property) + def build_nested_property_documentation(property, pwd) + return if property.removed? + compile_template( - 'templates/terraform/nested_property_documentation.erb', - property: property + pwd + '/templates/terraform/nested_property_documentation.erb', + property: property, + pwd: pwd ) end diff --git a/provider/terraform/virtual_fields.rb b/provider/terraform/virtual_fields.rb index 12a0b2673c92..e81bdbffe52e 100644 --- a/provider/terraform/virtual_fields.rb +++ b/provider/terraform/virtual_fields.rb @@ -43,10 +43,18 @@ class VirtualFields < Api::Object # The description / docs for the field. 
attr_reader :description + # The API type of the field (defaults to boolean) + attr_reader :type + + # The default value for the field (defaults to false) + attr_reader :default_value + def validate super check :name, type: String, required: true check :description, type: String, required: true + check :type, type: Class, default: Api::Type::Boolean + check :default_value, default: false end end end diff --git a/provider/terraform_object_library.rb b/provider/terraform_object_library.rb index bc4ccb0f0dc6..46ac2058cb1f 100644 --- a/provider/terraform_object_library.rb +++ b/provider/terraform_object_library.rb @@ -30,11 +30,12 @@ def generate_object(object, output_folder, version_name) super(object, output_folder, version_name) end - def generate_resource(data) + def generate_resource(pwd, data) target_folder = data.output_folder product_ns = data.object.__product.name - data.generate('templates/terraform/objectlib/base.go.erb', + data.generate(pwd, + 'templates/terraform/objectlib/base.go.erb', File.join(target_folder, "google/#{product_ns.downcase}_#{data.object.name.underscore}.go"), self) @@ -115,7 +116,7 @@ def copy_common_files(output_folder) ['google/compute_shared_operation.go', 'third_party/terraform/utils/compute_shared_operation.go'], ['google/compute_instance_helpers.go', - 'third_party/terraform/utils/compute_instance_helpers.go'], + 'third_party/terraform/utils/compute_instance_helpers.go.erb'], ['google/convert.go', 'third_party/terraform/utils/convert.go'], ['google/metadata.go', @@ -137,10 +138,10 @@ def copy_common_files(output_folder) ]) end - def generate_resource_tests(data) end + def generate_resource_tests(pwd, data) end - def generate_iam_policy(data) end + def generate_iam_policy(pwd, data) end - def generate_resource_sweepers(data) end + def generate_resource_sweepers(pwd, data) end end end diff --git a/provider/terraform_oics.rb b/provider/terraform_oics.rb index c7590169e404..f4fba172e7fe 100644 --- a/provider/terraform_oics.rb +++ b/provider/terraform_oics.rb @@ -24,7 +24,7 @@ def generate(output_folder, types, _product_path, _dump_yaml) end # Create a directory of examples per resource - def generate_resource(data) + def generate_resource(pwd, data) examples = data.object.examples .reject(&:skip_test) .reject { |e| !e.test_env_vars.nil? && e.test_env_vars.any? } @@ -37,41 +37,37 @@ def generate_resource(data) data.example = example - data.generate( - 'templates/terraform/examples/base_configs/example_file.tf.erb', - File.join(target_folder, 'main.tf'), - self - ) + data.generate(pwd, + 'templates/terraform/examples/base_configs/example_file.tf.erb', + File.join(target_folder, 'main.tf'), + self) - data.generate( - 'templates/terraform/examples/base_configs/tutorial.md.erb', - File.join(target_folder, 'tutorial.md'), - self - ) + data.generate(pwd, + 'templates/terraform/examples/base_configs/tutorial.md.erb', + File.join(target_folder, 'tutorial.md'), + self) - data.generate( - 'templates/terraform/examples/base_configs/example_backing_file.tf.erb', - File.join(target_folder, 'backing_file.tf'), - self - ) + data.generate(pwd, + 'templates/terraform/examples/base_configs/example_backing_file.tf.erb', + File.join(target_folder, 'backing_file.tf'), + self) - data.generate( - 'templates/terraform/examples/static/motd', - File.join(target_folder, 'motd'), - self - ) + data.generate(pwd, + 'templates/terraform/examples/static/motd', + File.join(target_folder, 'motd'), + self) end end # We don't want to generate anything but the resource. 
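To illustrate the `virtual_fields` change above: with the new defaults (`type` falls back to `Api::Type::Boolean` and `default_value` to `false`), a field only needs to declare what differs. The field below is hypothetical:

```yaml
# Illustrative sketch only -- the field name and description are hypothetical.
virtual_fields:
  - !ruby/object:Provider::Terraform::VirtualFields
    name: 'deletion_protection'
    description: |
      Whether Terraform is prevented from destroying the resource.
    default_value: true  # overrides the new false default; type stays boolean
```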
- def generate_resource_tests(data) end + def generate_resource_tests(pwd, data) end - def generate_resource_sweepers(data) end + def generate_resource_sweepers(pwd, data) end def compile_common_files(output_folder, products, common_compile_file) end def copy_common_files(output_folder) end - def generate_iam_policy(data) end + def generate_iam_policy(pwd, data) end end end diff --git a/spec/compiler_spec.rb b/spec/compiler_spec.rb index 43bc8e6d7b37..e3a8627f0599 100644 --- a/spec/compiler_spec.rb +++ b/spec/compiler_spec.rb @@ -48,7 +48,7 @@ it { is_expected.to be_instance_of Api::Product } it { is_expected.to have_attributes(api_name: 'myproduct') } - it { is_expected.to have_attribute_of_length(objects: 4) } + it { is_expected.to have_attribute_of_length(objects: 5) } end context 'should only accept product' do diff --git a/spec/data/good-file.yaml b/spec/data/good-file.yaml index 98d2d34a6b74..eb4683e127c0 100644 --- a/spec/data/good-file.yaml +++ b/spec/data/good-file.yaml @@ -114,6 +114,43 @@ objects: - !ruby/object:Api::Type::String name: 'nv-prop1' description: 'the first property in my namevalues' + - !ruby/object:Api::Resource + name: 'ResourceWithTerraformOverride' + kind: 'terraform#resourceWithTerraformOverride' + base_url: 'resourceWithTerraformOverride' + description: 'a description' + properties: + - !ruby/object:Api::Type::String + name: 'stringOne' + description: 'a string property (depth 0)' + - !ruby/object:Api::Type::NestedObject + name: 'objectOne' + description: 'a NestedObject property (depth 0)' + properties: + - !ruby/object:Api::Type::String + name: 'objectOneString' + description: 'a string property (depth 1)' + - !ruby/object:Api::Type::NestedObject + name: 'objectOneFlattenedObject' + description: 'a nested NestedObject (depth 1)' + properties: + - !ruby/object:Api::Type::Integer + name: 'objectOneNestedNestedInteger' + description: 'a nested integer (depth 2)' + - !ruby/object:Api::Type::NestedObject + name: 'objectTwoFlattened' + description: 'a NestedObject property that is flattened (depth 0)' + properties: + - !ruby/object:Api::Type::String + name: 'objectTwoString' + description: 'a nested string (depth 1)' + - !ruby/object:Api::Type::NestedObject + name: 'objectTwoNestedObject' + description: 'a nested NestedObject (depth 1)' + properties: + - !ruby/object:Api::Type::String + name: 'objectTwoNestedNestedString' + description: 'a nested String (depth 2)' - !ruby/object:Api::Resource name: 'TerraformImportIdTest' description: 'Used for spec/provider_terraform_import_spec' diff --git a/spec/data/good-file-config.yaml b/spec/data/good-tf-override.yaml similarity index 57% rename from spec/data/good-file-config.yaml rename to spec/data/good-tf-override.yaml index fff83f0f2b5f..83180541b559 100644 --- a/spec/data/good-file-config.yaml +++ b/spec/data/good-tf-override.yaml @@ -12,13 +12,12 @@ # limitations under the License. 
--- !ruby/object:Provider::Terraform::Config -overrides: !ruby/object:Provider::ResourceOverrides - AnotherResource: !ruby/object:Provider::Terraform::ResourceOverride - description: '{{description}} bar' +overrides: !ruby/object:Overrides::ResourceOverrides + ResourceWithTerraformOverride: !ruby/object:Overrides::Terraform::ResourceOverride properties: - property1: !ruby/object:Provider::Terraform::PropertyOverride - description: 'foo' - nested-property.property1: !ruby/object:Provider::Terraform::PropertyOverride - description: 'bar' - array-property.property1: !ruby/object:Provider::Terraform::PropertyOverride - description: 'baz' + objectTwoFlattened: !ruby/object:Overrides::Terraform::PropertyOverride + flatten_object: true + objectTwoFlattened.objectTwoNestedObject: !ruby/object:Overrides::Terraform::PropertyOverride + update_mask_fields: + - 'overrideFoo' + - 'nested.overrideBar' diff --git a/spec/data/terraform-config.yaml b/spec/data/terraform-config.yaml index 9aea0ef12e01..76dcb92477d0 100644 --- a/spec/data/terraform-config.yaml +++ b/spec/data/terraform-config.yaml @@ -13,3 +13,13 @@ --- !ruby/object:Provider::Terraform::Config overrides: !ruby/object:Overrides::ResourceOverrides + ResourceWithTerraformOverride: !ruby/object:Overrides::Terraform::ResourceOverride + properties: + objectOne.objectOneFlattenedObject: !ruby/object:Overrides::Terraform::PropertyOverride + flatten_object: true + objectTwoFlattened: !ruby/object:Overrides::Terraform::PropertyOverride + flatten_object: true + objectTwoFlattened.objectTwoString: !ruby/object:Overrides::Terraform::PropertyOverride + update_mask_fields: + - 'overrideFoo' + - 'nested.overrideBar' diff --git a/spec/provider_terraform_spec.rb b/spec/provider_terraform_spec.rb index 76fdff0f9e17..13e30f5c16cc 100644 --- a/spec/provider_terraform_spec.rb +++ b/spec/provider_terraform_spec.rb @@ -23,11 +23,14 @@ class << self describe Provider::Terraform do context 'good file product' do let(:product) { Api::Compiler.new(File.read('spec/data/good-file.yaml')).run } - let(:config) do - Provider::Config.parse('spec/data/terraform-config.yaml', product)[1] - end + let(:parsed) { Provider::Config.parse('spec/data/terraform-config.yaml', product) } + let(:config) { parsed[1] } + let(:override_product) { parsed[0] } let(:provider) { Provider::Terraform.new(config, product, 'ga', Time.now) } let(:resource) { product.objects[0] } + let(:override_resource) do + override_product.objects.find { |o| o.name == 'ResourceWithTerraformOverride' } + end before do allow_open 'spec/data/good-file.yaml' @@ -139,6 +142,94 @@ class << self ) end end + + describe '#get_property_update_masks_groups' do + subject do + provider.get_property_update_masks_groups(override_resource.properties) + end + + it do + is_expected.to eq( + [ + ['string_one', ['stringOne']], + ['object_one', ['objectOne']], + ['object_two_string', ['overrideFoo', 'nested.overrideBar']], + [ + 'object_two_nested_object', [ + 'objectTwoFlattened.objectTwoNestedObject' + ] + ] + ] + ) + end + end + + describe '#get_property_schema_path nonexistant' do + let(:test_paths) do + [ + 'not_a_field', + 'object_one.0.not_a_field', + 'object_one.0.object_one_nested_object.0.not_a_field' + ] + end + subject do + test_paths.map do |test_path| + provider.get_property_schema_path(test_path, override_resource) + end + end + + it do + is_expected.to eq([nil] * test_paths.size) + end + end + + describe '#get_property_schema_path no changes' do + let(:test_paths) do + [ + 'string_one', + 'object_one', + 
'object_one.0.object_one_string' + ] + end + subject do + test_paths.map do |test_path| + provider.get_property_schema_path(test_path, override_resource) + end + end + + it do + is_expected.to eq(test_paths) + end + end + + describe '#get_property_schema_path flattened objects' do + let(:test_paths) do + [ + 'object_one.0.object_one_flattened_object', + 'object_one.0.object_one_flattened_object.0.object_one_nested_nested_integer', + 'object_two_flattened.0.object_two_string', + 'object_two_flattened.0.object_two_nested_object', + 'object_two_flattened.0.object_two_nested_object.0.object_two_nested_nested_string' + ] + end + subject do + test_paths.map do |test_path| + provider.get_property_schema_path(test_path, override_resource) + end + end + + it do + is_expected.to eq( + [ + nil, + 'object_one.0.object_one_nested_nested_integer', + 'object_two_string', + 'object_two_nested_object', + 'object_two_nested_object.0.object_two_nested_nested_string' + ] + ) + end + end end def allow_open(file_name) diff --git a/templates/ansible/facts.erb b/templates/ansible/facts.erb index c7d7506672c4..ceb233356088 100644 --- a/templates/ansible/facts.erb +++ b/templates/ansible/facts.erb @@ -3,7 +3,7 @@ # # Copyright (C) 2017 Google # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) -<%= lines(autogen_notice :python) -%> +<%= lines(autogen_notice(:python, pwd)) -%> from __future__ import absolute_import, division, print_function __metaclass__ = type diff --git a/templates/ansible/integration_test.erb b/templates/ansible/integration_test.erb index 6cb30d90812e..0cf793faa2cb 100644 --- a/templates/ansible/integration_test.erb +++ b/templates/ansible/integration_test.erb @@ -1,5 +1,5 @@ --- -<%= lines(autogen_notice :yaml) -%> +<%= lines(autogen_notice(:yaml, pwd)) -%> # Pre-test setup <% unless example.dependencies.nil? -%> <% example.dependencies.each do |depend| -%> @@ -25,7 +25,7 @@ - result.changed == true <% end # if object.readonly -%> <% unless example.verifier.nil? -%> -<%= lines(example.verifier.build_task('present', object)) -%> +<%= lines(example.verifier.build_task('present', object, pwd)) -%> <% end -%> <% unless object.readonly -%> # ---------------------------------------------------------------------------- @@ -43,7 +43,7 @@ that: - result.changed == true <% unless example.verifier.nil? 
-%> -<%= lines(example.verifier.build_task('absent', object)) -%> +<%= lines(example.verifier.build_task('absent', object, pwd)) -%> <% end -%> # ---------------------------------------------------------------------------- <%= lines(example.task.build_test('absent', object, true)) -%> diff --git a/templates/ansible/provider_helpers.erb b/templates/ansible/provider_helpers.erb index db854570bd65..34380b2ef8bf 100644 --- a/templates/ansible/provider_helpers.erb +++ b/templates/ansible/provider_helpers.erb @@ -4,9 +4,9 @@ <%= object.provider_helpers.map do |f| if f == object.provider_helpers.last - lines(compile(f)) + lines(compile(pwd + '/' + f)) else - lines(compile(f), 2) + lines(compile(pwd + '/' + f), 2) end end.join -%> diff --git a/templates/ansible/resource.erb b/templates/ansible/resource.erb index 51ecd909b5e4..801cf167353c 100644 --- a/templates/ansible/resource.erb +++ b/templates/ansible/resource.erb @@ -3,12 +3,12 @@ # # Copyright (C) 2017 Google # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) -<%= lines(autogen_notice :python) -%> +<%= lines(autogen_notice(:python, pwd)) -%> from __future__ import absolute_import, division, print_function __metaclass__ = type -<%= lines(compile('templates/ansible/documentation.erb'), 1) -%> +<%= lines(compile(pwd + '/templates/ansible/documentation.erb'), 1) -%> ################################################################################ # Imports ################################################################################ @@ -44,7 +44,7 @@ import json def main(): """Main function""" -<%= lines(indent(compile('templates/ansible/module.erb'), 4)) -%> +<%= lines(indent(compile(pwd + '/templates/ansible/module.erb'), 4)) -%> if not module.params['scopes']: module.params['scopes'] = <%= python_literal(object.__product.scopes) %> @@ -295,7 +295,7 @@ def <%= func_name -%>(module, request, response): <% end -%> <% end # unless update_props.empty? -%> <% if object.update_mask -%> -<%= lines(compile('templates/ansible/update_mask.erb')) -%> +<%= lines(compile(pwd + '/templates/ansible/update_mask.erb')) -%> <% end # if update_mask -%> <%= lines(method_decl('delete', ['module', 'link', ('kind' if object.kind?), @@ -378,7 +378,7 @@ def unwrap_resource(result, module): <% end -%> -<%= lines(compile('templates/ansible/transport.erb'), 2) -%> +<%= lines(compile(pwd + '/templates/ansible/transport.erb'), 2) -%> <%= lines(emit_link('self_link', build_url(object.self_link_url), object), 2) -%> <%= lines(emit_link('collection', build_url(object.collection_url), object), 2) -%> <%- unless object.create_url.nil? 
-%> @@ -446,7 +446,7 @@ def is_different(module, response): def response_to_hash(module, response): return <%= lines(python_literal(response_properties(object.properties), use_hash_brackets: true)) -%> <% if object.all_user_properties.any?(&:pattern) -%> -<%= lines(compile('templates/ansible/pattern.py.erb')) -%> +<%= lines(compile(pwd + '/templates/ansible/pattern.py.erb')) -%> <% end -%> <% readonly_selflink_rrefs.each do |resource| -%> @@ -459,9 +459,9 @@ def <%= resource.name.underscore -%>_selflink(name, params): name = <%= build_url(resource.self_link_url).gsub('{name}', '%s') -%>.format(**params) % name return name <% end -%> -<%= lines_before(compile('templates/ansible/async.erb'), 1) -%> -<%= lines_before(compile('templates/ansible/provider_helpers.erb'), 1) -%> -<%= lines_before(compile('templates/ansible/properties.erb'), 1) -%> +<%= lines_before(compile(pwd + '/templates/ansible/async.erb'), 1) -%> +<%= lines_before(compile(pwd + '/templates/ansible/provider_helpers.erb'), 1) -%> +<%= lines_before(compile(pwd + '/templates/ansible/properties.erb'), 1) -%> if __name__ == '__main__': diff --git a/templates/inspec/doc_template.md.erb b/templates/inspec/doc_template.md.erb index 8c78c74d5eee..937912f3657f 100644 --- a/templates/inspec/doc_template.md.erb +++ b/templates/inspec/doc_template.md.erb @@ -30,17 +30,17 @@ This resource has beta fields available. To retrieve these fields, include `beta <% end -%> ## Examples ``` -<%= compile("templates/inspec/examples/#{resource_name(object, product)}/#{resource_underscored_name}.erb") -%> +<%= compile(pwd + "/templates/inspec/examples/#{resource_name(object, product)}/#{resource_underscored_name}.erb") -%> ``` <% if object.singular_extra_examples && !plural -%> -<%= compile(object.singular_extra_examples) -%> +<%= compile(pwd + '/' + object.singular_extra_examples) -%> <% end -%> <% if object.plural_extra_examples && plural -%> -<%= compile(object.plural_extra_examples) -%> +<%= compile(pwd + '/' + object.plural_extra_examples) -%> <% end -%> diff --git a/templates/inspec/examples/google_access_context_manager_access_policy/google_access_context_manager_access_policies.erb b/templates/inspec/examples/google_access_context_manager_access_policy/google_access_context_manager_access_policies.erb index 2ecb2957f474..93894243caa5 100644 --- a/templates/inspec/examples/google_access_context_manager_access_policy/google_access_context_manager_access_policies.erb +++ b/templates/inspec/examples/google_access_context_manager_access_policy/google_access_context_manager_access_policies.erb @@ -1,5 +1,5 @@ -<% gcp_organization_id = "#{external_attribute('gcp_organization_id', doc_generation)}" -%> -<% service_perimeter = grab_attributes['service_perimeter'] -%> +<% gcp_organization_id = "#{external_attribute(pwd, 'gcp_organization_id', doc_generation)}" -%> +<% service_perimeter = grab_attributes(pwd)['service_perimeter'] -%> describe google_access_context_manager_access_policies(org_id: <%= gcp_organization_id %>) do its('count') { should be >= 1 } diff --git a/templates/inspec/examples/google_access_context_manager_access_policy/google_access_context_manager_access_policy.erb b/templates/inspec/examples/google_access_context_manager_access_policy/google_access_context_manager_access_policy.erb index b26f00b8863d..d85d846714dd 100644 --- a/templates/inspec/examples/google_access_context_manager_access_policy/google_access_context_manager_access_policy.erb +++ 
b/templates/inspec/examples/google_access_context_manager_access_policy/google_access_context_manager_access_policy.erb @@ -1,5 +1,5 @@ -<% gcp_organization_id = "#{external_attribute('gcp_organization_id', doc_generation)}" -%> -<% service_perimeter = grab_attributes['service_perimeter'] -%> +<% gcp_organization_id = "#{external_attribute(pwd, 'gcp_organization_id', doc_generation)}" -%> +<% service_perimeter = grab_attributes(pwd)['service_perimeter'] -%> describe.one do google_access_context_manager_access_policies(org_id: <%= gcp_organization_id %>).names.each do |policy_name| diff --git a/templates/inspec/examples/google_access_context_manager_access_policy/google_access_context_manager_access_policy_attributes.erb b/templates/inspec/examples/google_access_context_manager_access_policy/google_access_context_manager_access_policy_attributes.erb index 4ef4a50d8d00..d7eb5a975b1c 100644 --- a/templates/inspec/examples/google_access_context_manager_access_policy/google_access_context_manager_access_policy_attributes.erb +++ b/templates/inspec/examples/google_access_context_manager_access_policy/google_access_context_manager_access_policy_attributes.erb @@ -1,3 +1,3 @@ -gcp_organization_id = attribute(:gcp_organization_id, default: <%= external_attribute('gcp_organization_id') -%>, description: 'The identifier of the organization that is the parent of the perimeter') +gcp_organization_id = attribute(:gcp_organization_id, default: <%= external_attribute(pwd, 'gcp_organization_id') -%>, description: 'The identifier of the organization that is the parent of the perimeter') gcp_enable_privileged_resources = attribute(:gcp_enable_privileged_resources, default:0, description:'Flag to enable privileged resources requiring elevated privileges in GCP.') -service_perimeter = attribute('service_perimeter', default: <%= JSON.pretty_generate(grab_attributes['service_perimeter']) -%>, description: 'Service perimeter definition') \ No newline at end of file +service_perimeter = attribute('service_perimeter', default: <%= JSON.pretty_generate(grab_attributes(pwd)['service_perimeter']) -%>, description: 'Service perimeter definition') \ No newline at end of file diff --git a/templates/inspec/examples/google_access_context_manager_service_perimeter/google_access_context_manager_service_perimeter.erb b/templates/inspec/examples/google_access_context_manager_service_perimeter/google_access_context_manager_service_perimeter.erb index e93362478d83..e23dca2d8df3 100644 --- a/templates/inspec/examples/google_access_context_manager_service_perimeter/google_access_context_manager_service_perimeter.erb +++ b/templates/inspec/examples/google_access_context_manager_service_perimeter/google_access_context_manager_service_perimeter.erb @@ -1,5 +1,5 @@ -<% gcp_organization_id = "#{external_attribute('gcp_organization_id', doc_generation)}" -%> -<% service_perimeter = grab_attributes['service_perimeter'] -%> +<% gcp_organization_id = "#{external_attribute(pwd, 'gcp_organization_id', doc_generation)}" -%> +<% service_perimeter = grab_attributes(pwd)['service_perimeter'] -%> describe.one do google_access_context_manager_access_policies(org_id: <%= gcp_organization_id %>).names.each do |policy_name| diff --git a/templates/inspec/examples/google_access_context_manager_service_perimeter/google_access_context_manager_service_perimeter_attributes.erb b/templates/inspec/examples/google_access_context_manager_service_perimeter/google_access_context_manager_service_perimeter_attributes.erb index 4ef4a50d8d00..d7eb5a975b1c 100644 --- 
a/templates/inspec/examples/google_access_context_manager_service_perimeter/google_access_context_manager_service_perimeter_attributes.erb +++ b/templates/inspec/examples/google_access_context_manager_service_perimeter/google_access_context_manager_service_perimeter_attributes.erb @@ -1,3 +1,3 @@ -gcp_organization_id = attribute(:gcp_organization_id, default: <%= external_attribute('gcp_organization_id') -%>, description: 'The identifier of the organization that is the parent of the perimeter') +gcp_organization_id = attribute(:gcp_organization_id, default: <%= external_attribute(pwd, 'gcp_organization_id') -%>, description: 'The identifier of the organization that is the parent of the perimeter') gcp_enable_privileged_resources = attribute(:gcp_enable_privileged_resources, default:0, description:'Flag to enable privileged resources requiring elevated privileges in GCP.') -service_perimeter = attribute('service_perimeter', default: <%= JSON.pretty_generate(grab_attributes['service_perimeter']) -%>, description: 'Service perimeter definition') \ No newline at end of file +service_perimeter = attribute('service_perimeter', default: <%= JSON.pretty_generate(grab_attributes(pwd)['service_perimeter']) -%>, description: 'Service perimeter definition') \ No newline at end of file diff --git a/templates/inspec/examples/google_access_context_manager_service_perimeter/google_access_context_manager_service_perimeters.erb b/templates/inspec/examples/google_access_context_manager_service_perimeter/google_access_context_manager_service_perimeters.erb index ed65843deb61..5ed933bff520 100644 --- a/templates/inspec/examples/google_access_context_manager_service_perimeter/google_access_context_manager_service_perimeters.erb +++ b/templates/inspec/examples/google_access_context_manager_service_perimeter/google_access_context_manager_service_perimeters.erb @@ -1,5 +1,5 @@ -<% gcp_organization_id = "#{external_attribute('gcp_organization_id', doc_generation)}" -%> -<% service_perimeter = grab_attributes['service_perimeter'] -%> +<% gcp_organization_id = "#{external_attribute(pwd, 'gcp_organization_id', doc_generation)}" -%> +<% service_perimeter = grab_attributes(pwd)['service_perimeter'] -%> describe.one do google_access_context_manager_access_policies(org_id: <%= gcp_organization_id %>).names.each do |policy_name| diff --git a/templates/inspec/examples/google_appengine_standard_app_version/google_appengine_standard_app_version.erb b/templates/inspec/examples/google_appengine_standard_app_version/google_appengine_standard_app_version.erb index c53e464c7c70..cdb414532564 100644 --- a/templates/inspec/examples/google_appengine_standard_app_version/google_appengine_standard_app_version.erb +++ b/templates/inspec/examples/google_appengine_standard_app_version/google_appengine_standard_app_version.erb @@ -1,6 +1,6 @@ -<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%> -<% gcp_location = "#{external_attribute('gcp_location', doc_generation)}" -%> -<% standardappversion = grab_attributes['standardappversion'] -%> +<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%> +<% gcp_location = "#{external_attribute(pwd, 'gcp_location', doc_generation)}" -%> +<% standardappversion = grab_attributes(pwd)['standardappversion'] -%> describe google_appengine_standard_app_version(project: <%= gcp_project_id -%>, location: <%= gcp_location -%>, version_id: <%= doc_generation ? 
"'#{standardappversion['version_id']}'" : "standardappversion['version_id']" -%>, service: <%= doc_generation ? "'#{standardappversion['service']}'" : "standardappversion['service']" -%>) do it { should exist } diff --git a/templates/inspec/examples/google_appengine_standard_app_version/google_appengine_standard_app_version_attributes.erb b/templates/inspec/examples/google_appengine_standard_app_version/google_appengine_standard_app_version_attributes.erb index 425ce2537a4f..47f6338b6dc5 100644 --- a/templates/inspec/examples/google_appengine_standard_app_version/google_appengine_standard_app_version_attributes.erb +++ b/templates/inspec/examples/google_appengine_standard_app_version/google_appengine_standard_app_version_attributes.erb @@ -1,5 +1,5 @@ -gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute('gcp_project_id') -%>', description: 'The GCP project identifier.') -gcp_location = attribute(:gcp_location, default: '<%= external_attribute('gcp_location') -%>', description: 'The GCP project location.') -standardappversion = attribute('standardappversion', default: <%= JSON.pretty_generate(grab_attributes['standardappversion']) -%>, description: 'Cloud App Engine definition') +gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute(pwd, 'gcp_project_id') -%>', description: 'The GCP project identifier.') +gcp_location = attribute(:gcp_location, default: '<%= external_attribute(pwd, 'gcp_location') -%>', description: 'The GCP project location.') +standardappversion = attribute('standardappversion', default: <%= JSON.pretty_generate(grab_attributes(pwd)['standardappversion']) -%>, description: 'Cloud App Engine definition') gcp_enable_privileged_resources = attribute(:gcp_enable_privileged_resources, default:0, description:'Flag to enable privileged resources requiring elevated privileges in GCP.') -gcp_organization_id = attribute(:gcp_organization_id, default: <%= external_attribute('gcp_organization_id') -%>, description: 'The identifier of the organization') \ No newline at end of file +gcp_organization_id = attribute(:gcp_organization_id, default: <%= external_attribute(pwd, 'gcp_organization_id') -%>, description: 'The identifier of the organization') \ No newline at end of file diff --git a/templates/inspec/examples/google_appengine_standard_app_version/google_appengine_standard_app_versions.erb b/templates/inspec/examples/google_appengine_standard_app_version/google_appengine_standard_app_versions.erb index 3ecf79e4469f..485a69908c25 100644 --- a/templates/inspec/examples/google_appengine_standard_app_version/google_appengine_standard_app_versions.erb +++ b/templates/inspec/examples/google_appengine_standard_app_version/google_appengine_standard_app_versions.erb @@ -1,6 +1,6 @@ -<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%> -<% gcp_location = "#{external_attribute('gcp_location', doc_generation)}" -%> -<% standardappversion = grab_attributes['standardappversion'] -%> +<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%> +<% gcp_location = "#{external_attribute(pwd, 'gcp_location', doc_generation)}" -%> +<% standardappversion = grab_attributes(pwd)['standardappversion'] -%> describe google_appengine_standard_app_versions(project: <%= gcp_project_id -%>, location: <%= gcp_location -%>,service: <%= doc_generation ? "'#{standardappversion['service']}'" : "standardappversion['service']" -%>) do its('runtimes') { should include <%= doc_generation ? 
"'#{standardappversion['runtime']}'" : "standardappversion['runtime']" -%> } diff --git a/templates/inspec/examples/google_bigquery_dataset/google_bigquery_dataset.erb b/templates/inspec/examples/google_bigquery_dataset/google_bigquery_dataset.erb index 1fa49e1f1bc2..f8be9fe14eb6 100644 --- a/templates/inspec/examples/google_bigquery_dataset/google_bigquery_dataset.erb +++ b/templates/inspec/examples/google_bigquery_dataset/google_bigquery_dataset.erb @@ -1,5 +1,5 @@ -<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%> -<% dataset = grab_attributes['dataset'] -%> +<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%> +<% dataset = grab_attributes(pwd)['dataset'] -%> describe google_bigquery_dataset(project: <%= doc_generation ? "#{gcp_project_id}" : "gcp_project_id" -%>, name: <%= doc_generation ? "'#{dataset['dataset_id']}'" : "dataset['dataset_id']" -%>) do it { should exist } diff --git a/templates/inspec/examples/google_bigquery_dataset/google_bigquery_dataset_attributes.erb b/templates/inspec/examples/google_bigquery_dataset/google_bigquery_dataset_attributes.erb index 24d9298e551f..874450396a53 100644 --- a/templates/inspec/examples/google_bigquery_dataset/google_bigquery_dataset_attributes.erb +++ b/templates/inspec/examples/google_bigquery_dataset/google_bigquery_dataset_attributes.erb @@ -1,2 +1,2 @@ -gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute('gcp_project_id') -%>', description: 'The GCP project identifier.') -dataset = attribute('dataset', default: <%= JSON.pretty_generate(grab_attributes['dataset']) -%>, description: 'BigQuery dataset definition') \ No newline at end of file +gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute(pwd, 'gcp_project_id') -%>', description: 'The GCP project identifier.') +dataset = attribute('dataset', default: <%= JSON.pretty_generate(grab_attributes(pwd)['dataset']) -%>, description: 'BigQuery dataset definition') \ No newline at end of file diff --git a/templates/inspec/examples/google_bigquery_dataset/google_bigquery_datasets.erb b/templates/inspec/examples/google_bigquery_dataset/google_bigquery_datasets.erb index 326afb3e0ae7..a293484ca1a3 100644 --- a/templates/inspec/examples/google_bigquery_dataset/google_bigquery_datasets.erb +++ b/templates/inspec/examples/google_bigquery_dataset/google_bigquery_datasets.erb @@ -1,5 +1,5 @@ -<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%> -<% dataset = grab_attributes['dataset'] -%> +<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%> +<% dataset = grab_attributes(pwd)['dataset'] -%> describe google_bigquery_datasets(project: <%= doc_generation ? "#{gcp_project_id}" : "gcp_project_id" -%>) do its('count') { should be >= 1 } its('friendly_names') { should include <%= doc_generation ? 
"'#{dataset['friendly_name']}'" : "dataset['friendly_name']" -%> } diff --git a/templates/inspec/examples/google_bigquery_table/google_bigquery_table.erb b/templates/inspec/examples/google_bigquery_table/google_bigquery_table.erb index 770313f62ea3..32e42aca5389 100644 --- a/templates/inspec/examples/google_bigquery_table/google_bigquery_table.erb +++ b/templates/inspec/examples/google_bigquery_table/google_bigquery_table.erb @@ -1,6 +1,6 @@ -<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%> -<% bigquery_table = grab_attributes['bigquery_table'] -%> -<% dataset = grab_attributes['dataset'] -%> +<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%> +<% bigquery_table = grab_attributes(pwd)['bigquery_table'] -%> +<% dataset = grab_attributes(pwd)['dataset'] -%> describe google_bigquery_table(project: <%= doc_generation ? "#{gcp_project_id}" : "gcp_project_id" -%>, dataset: <%= doc_generation ? "'#{dataset['dataset_id']}'" : "dataset['dataset_id']" -%>, name: <%= doc_generation ? "'#{bigquery_table['table_id']}'" : "bigquery_table['table_id']" -%>) do it { should exist } diff --git a/templates/inspec/examples/google_bigquery_table/google_bigquery_table_attributes.erb b/templates/inspec/examples/google_bigquery_table/google_bigquery_table_attributes.erb index cadbae7d6ce1..adbb802ba3ce 100644 --- a/templates/inspec/examples/google_bigquery_table/google_bigquery_table_attributes.erb +++ b/templates/inspec/examples/google_bigquery_table/google_bigquery_table_attributes.erb @@ -1,3 +1,3 @@ -gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute('gcp_project_id') -%>', description: 'The GCP project identifier.') -bigquery_table = attribute('bigquery_table', default: <%= JSON.pretty_generate(grab_attributes['bigquery_table']) -%>, description: 'BigQuery table definition') -dataset = attribute('dataset', default: <%= JSON.pretty_generate(grab_attributes['dataset']) -%>, description: 'BigQuery dataset definition') \ No newline at end of file +gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute(pwd, 'gcp_project_id') -%>', description: 'The GCP project identifier.') +bigquery_table = attribute('bigquery_table', default: <%= JSON.pretty_generate(grab_attributes(pwd)['bigquery_table']) -%>, description: 'BigQuery table definition') +dataset = attribute('dataset', default: <%= JSON.pretty_generate(grab_attributes(pwd)['dataset']) -%>, description: 'BigQuery dataset definition') \ No newline at end of file diff --git a/templates/inspec/examples/google_bigquery_table/google_bigquery_tables.erb b/templates/inspec/examples/google_bigquery_table/google_bigquery_tables.erb index 9071803b6104..bf256a6a96ac 100644 --- a/templates/inspec/examples/google_bigquery_table/google_bigquery_tables.erb +++ b/templates/inspec/examples/google_bigquery_table/google_bigquery_tables.erb @@ -1,6 +1,6 @@ -<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%> -<% bigquery_table = grab_attributes['bigquery_table'] -%> -<% dataset = grab_attributes['dataset'] -%> +<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%> +<% bigquery_table = grab_attributes(pwd)['bigquery_table'] -%> +<% dataset = grab_attributes(pwd)['dataset'] -%> describe.one do google_bigquery_tables(project: <%= doc_generation ? "#{gcp_project_id}" : "gcp_project_id" -%>, dataset: <%= doc_generation ? 
"'#{dataset['dataset_id']}'" : "dataset['dataset_id']" -%>).table_references.each do |table_reference| describe google_bigquery_table(project: <%= doc_generation ? "#{gcp_project_id}" : "gcp_project_id" -%>, dataset: <%= doc_generation ? "'#{dataset['dataset_id']}'" : "dataset['dataset_id']" -%>, name: table_reference.table_id) do diff --git a/templates/inspec/examples/google_billing_project_billing_info/google_billing_project_billing_info.erb b/templates/inspec/examples/google_billing_project_billing_info/google_billing_project_billing_info.erb index 7e300f5bf870..e62897471a67 100644 --- a/templates/inspec/examples/google_billing_project_billing_info/google_billing_project_billing_info.erb +++ b/templates/inspec/examples/google_billing_project_billing_info/google_billing_project_billing_info.erb @@ -1,5 +1,5 @@ -<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%> -<% gcp_billing_account = "#{external_attribute('gcp_billing_account', doc_generation)}" -%> +<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%> +<% gcp_billing_account = "#{external_attribute(pwd, 'gcp_billing_account', doc_generation)}" -%> describe google_billing_project_billing_info(project_id: <%= doc_generation ? "#{gcp_project_id}" : "gcp_project_id" -%>) do it { should exist } diff --git a/templates/inspec/examples/google_billing_project_billing_info/google_billing_project_billing_info_attributes.erb b/templates/inspec/examples/google_billing_project_billing_info/google_billing_project_billing_info_attributes.erb index 7e92ede4c630..d99baef0d604 100644 --- a/templates/inspec/examples/google_billing_project_billing_info/google_billing_project_billing_info_attributes.erb +++ b/templates/inspec/examples/google_billing_project_billing_info/google_billing_project_billing_info_attributes.erb @@ -1,4 +1,4 @@ -gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute('gcp_project_id') -%>', description: 'The GCP project identifier.') -gcp_billing_account = attribute(:gcp_billing_account, default: '<%= external_attribute('gcp_billing_account') -%>', description: 'The GCP billing account name.') -gcp_organization_id = attribute(:gcp_organization_id, default: <%= external_attribute('gcp_organization_id') -%>, description: 'The identifier of the organization') +gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute(pwd, 'gcp_project_id') -%>', description: 'The GCP project identifier.') +gcp_billing_account = attribute(:gcp_billing_account, default: '<%= external_attribute(pwd, 'gcp_billing_account') -%>', description: 'The GCP billing account name.') +gcp_organization_id = attribute(:gcp_organization_id, default: <%= external_attribute(pwd, 'gcp_organization_id') -%>, description: 'The identifier of the organization') gcp_enable_privileged_resources = attribute(:gcp_enable_privileged_resources, default:0, description:'Flag to enable privileged resources requiring elevated privileges in GCP.') \ No newline at end of file diff --git a/templates/inspec/examples/google_cloud_scheduler_job/google_cloud_scheduler_job.erb b/templates/inspec/examples/google_cloud_scheduler_job/google_cloud_scheduler_job.erb index 5b73eb3fb4d0..de9fc2f37ca9 100644 --- a/templates/inspec/examples/google_cloud_scheduler_job/google_cloud_scheduler_job.erb +++ b/templates/inspec/examples/google_cloud_scheduler_job/google_cloud_scheduler_job.erb @@ -1,5 +1,5 @@ -<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%> -<% 
scheduler_job = grab_attributes['scheduler_job'] -%> +<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%> +<% scheduler_job = grab_attributes(pwd)['scheduler_job'] -%> describe google_cloud_scheduler_job(project: <%= doc_generation ? "#{gcp_project_id}" : "gcp_project_id" -%>, region: <%= doc_generation ? "#{scheduler_job['region']}" : "scheduler_job['region']" -%>, name: <%= doc_generation ? "'#{scheduler_job['name']}'" : "scheduler_job['name']" -%>) do it { should exist } diff --git a/templates/inspec/examples/google_cloud_scheduler_job/google_cloud_scheduler_job_attributes.erb b/templates/inspec/examples/google_cloud_scheduler_job/google_cloud_scheduler_job_attributes.erb index 3a89f2879e3b..d11ee01546a4 100644 --- a/templates/inspec/examples/google_cloud_scheduler_job/google_cloud_scheduler_job_attributes.erb +++ b/templates/inspec/examples/google_cloud_scheduler_job/google_cloud_scheduler_job_attributes.erb @@ -1,2 +1,2 @@ -gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute('gcp_project_id') -%>', description: 'The GCP project identifier.') -scheduler_job = attribute('scheduler_job', default: <%= JSON.pretty_generate(grab_attributes['scheduler_job']) -%>, description: 'Cloud Scheduler Job configuration') \ No newline at end of file +gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute(pwd, 'gcp_project_id') -%>', description: 'The GCP project identifier.') +scheduler_job = attribute('scheduler_job', default: <%= JSON.pretty_generate(grab_attributes(pwd)['scheduler_job']) -%>, description: 'Cloud Scheduler Job configuration') \ No newline at end of file diff --git a/templates/inspec/examples/google_cloud_scheduler_job/google_cloud_scheduler_jobs.erb b/templates/inspec/examples/google_cloud_scheduler_job/google_cloud_scheduler_jobs.erb index 4406cf0a2311..945af3861a02 100644 --- a/templates/inspec/examples/google_cloud_scheduler_job/google_cloud_scheduler_jobs.erb +++ b/templates/inspec/examples/google_cloud_scheduler_job/google_cloud_scheduler_jobs.erb @@ -1,5 +1,5 @@ -<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%> -<% scheduler_job = grab_attributes['scheduler_job'] -%> +<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%> +<% scheduler_job = grab_attributes(pwd)['scheduler_job'] -%> google_cloud_scheduler_jobs(project: <%= doc_generation ? "#{gcp_project_id}" : "gcp_project_id" -%>, region: <%= doc_generation ? "#{scheduler_job['location']}" : "scheduler_job['location']" -%>).names.each do |name| describe google_cloud_scheduler_job(project: <%= doc_generation ? "#{gcp_project_id}" : "gcp_project_id" -%>, region: <%= doc_generation ? 
"#{scheduler_job['region']}" : "scheduler_job['region']" -%>, name: name) do it { should exist } diff --git a/templates/inspec/examples/google_cloudbuild_trigger/google_cloudbuild_trigger.erb b/templates/inspec/examples/google_cloudbuild_trigger/google_cloudbuild_trigger.erb index 3e85fc74ff3c..2658bfa017ef 100644 --- a/templates/inspec/examples/google_cloudbuild_trigger/google_cloudbuild_trigger.erb +++ b/templates/inspec/examples/google_cloudbuild_trigger/google_cloudbuild_trigger.erb @@ -1,5 +1,5 @@ -<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%> -<% trigger = grab_attributes['trigger'] -%> +<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%> +<% trigger = grab_attributes(pwd)['trigger'] -%> describe google_cloudbuild_triggers(project: <%= gcp_project_id -%>) do its('count') { should eq 1 } end diff --git a/templates/inspec/examples/google_cloudbuild_trigger/google_cloudbuild_trigger_attributes.erb b/templates/inspec/examples/google_cloudbuild_trigger/google_cloudbuild_trigger_attributes.erb index c96172035e84..32243cf57e81 100644 --- a/templates/inspec/examples/google_cloudbuild_trigger/google_cloudbuild_trigger_attributes.erb +++ b/templates/inspec/examples/google_cloudbuild_trigger/google_cloudbuild_trigger_attributes.erb @@ -1,2 +1,2 @@ -gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute('gcp_project_id') -%>', description: 'The GCP project identifier.') -trigger = attribute('trigger', default: <%= JSON.pretty_generate(grab_attributes['trigger']) -%>, description: 'CloudBuild trigger definition') \ No newline at end of file +gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute(pwd, 'gcp_project_id') -%>', description: 'The GCP project identifier.') +trigger = attribute('trigger', default: <%= JSON.pretty_generate(grab_attributes(pwd)['trigger']) -%>, description: 'CloudBuild trigger definition') \ No newline at end of file diff --git a/templates/inspec/examples/google_cloudbuild_trigger/google_cloudbuild_triggers.erb b/templates/inspec/examples/google_cloudbuild_trigger/google_cloudbuild_triggers.erb index 3e85fc74ff3c..2658bfa017ef 100644 --- a/templates/inspec/examples/google_cloudbuild_trigger/google_cloudbuild_triggers.erb +++ b/templates/inspec/examples/google_cloudbuild_trigger/google_cloudbuild_triggers.erb @@ -1,5 +1,5 @@ -<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%> -<% trigger = grab_attributes['trigger'] -%> +<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%> +<% trigger = grab_attributes(pwd)['trigger'] -%> describe google_cloudbuild_triggers(project: <%= gcp_project_id -%>) do its('count') { should eq 1 } end diff --git a/templates/inspec/examples/google_cloudfunctions_cloud_function/google_cloudfunctions_cloud_function.erb b/templates/inspec/examples/google_cloudfunctions_cloud_function/google_cloudfunctions_cloud_function.erb index 6a31a1bf46a7..e88608a51cdd 100644 --- a/templates/inspec/examples/google_cloudfunctions_cloud_function/google_cloudfunctions_cloud_function.erb +++ b/templates/inspec/examples/google_cloudfunctions_cloud_function/google_cloudfunctions_cloud_function.erb @@ -1,11 +1,11 @@ -<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%> -<% gcp_cloud_function_region = "#{external_attribute('gcp_cloud_function_region', doc_generation)}" -%> -<% cloudfunction = grab_attributes['cloudfunction'] -%> +<% gcp_project_id = 
"#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%> +<% gcp_cloud_function_region = "#{external_attribute(pwd, 'gcp_cloud_function_region', doc_generation)}" -%> +<% cloudfunction = grab_attributes(pwd)['cloudfunction'] -%> describe google_cloudfunctions_cloud_function(project: <%= gcp_project_id -%>, location: <%= gcp_cloud_function_region -%>, name: <%= doc_generation ? "'#{cloudfunction['name']}'" : "cloudfunction['name']" -%>) do it { should exist } its('description') { should eq <%= doc_generation ? "'#{cloudfunction['description']}'" : "cloudfunction['description']" -%> } its('available_memory_mb') { should eq <%= doc_generation ? "'#{cloudfunction['available_memory_mb']}'" : "cloudfunction['available_memory_mb']" -%> } - its('https_trigger.url') { should match /\/<%= "#{grab_attributes['cloudfunction']['name']}" -%>$/ } + its('https_trigger.url') { should match /\/<%= "#{grab_attributes(pwd)['cloudfunction']['name']}" -%>$/ } its('entry_point') { should eq <%= doc_generation ? "'#{cloudfunction['entry_point']}'" : "cloudfunction['entry_point']" -%> } its('environment_variables') { should include('MY_ENV_VAR' => <%= doc_generation ? "'#{cloudfunction['env_var_value']}'" : "cloudfunction['env_var_value']" -%>) } end diff --git a/templates/inspec/examples/google_cloudfunctions_cloud_function/google_cloudfunctions_cloud_function_attributes.erb b/templates/inspec/examples/google_cloudfunctions_cloud_function/google_cloudfunctions_cloud_function_attributes.erb index 7665cfa9ee65..d16cc55b34d8 100644 --- a/templates/inspec/examples/google_cloudfunctions_cloud_function/google_cloudfunctions_cloud_function_attributes.erb +++ b/templates/inspec/examples/google_cloudfunctions_cloud_function/google_cloudfunctions_cloud_function_attributes.erb @@ -1,3 +1,3 @@ -gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute('gcp_project_id') -%>', description: 'The GCP project identifier.') -gcp_cloud_function_region = attribute(:gcp_cloud_function_region, default: '<%= external_attribute('gcp_cloud_function_region') -%>', description: 'The Cloud Function region.') -cloudfunction = attribute('cloudfunction', default: <%= JSON.pretty_generate(grab_attributes['cloudfunction']) -%>, description: 'Cloud Function definition') \ No newline at end of file +gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute(pwd, 'gcp_project_id') -%>', description: 'The GCP project identifier.') +gcp_cloud_function_region = attribute(:gcp_cloud_function_region, default: '<%= external_attribute(pwd, 'gcp_cloud_function_region') -%>', description: 'The Cloud Function region.') +cloudfunction = attribute('cloudfunction', default: <%= JSON.pretty_generate(grab_attributes(pwd)['cloudfunction']) -%>, description: 'Cloud Function definition') \ No newline at end of file diff --git a/templates/inspec/examples/google_cloudfunctions_cloud_function/google_cloudfunctions_cloud_functions.erb b/templates/inspec/examples/google_cloudfunctions_cloud_function/google_cloudfunctions_cloud_functions.erb index 687c59436314..acecfacba322 100644 --- a/templates/inspec/examples/google_cloudfunctions_cloud_function/google_cloudfunctions_cloud_functions.erb +++ b/templates/inspec/examples/google_cloudfunctions_cloud_function/google_cloudfunctions_cloud_functions.erb @@ -1,6 +1,6 @@ -<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%> -<% gcp_cloud_function_region = "#{external_attribute('gcp_cloud_function_region', doc_generation)}" -%> -<% cloudfunction = 
grab_attributes['cloudfunction'] -%>
+<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%>
+<% gcp_cloud_function_region = "#{external_attribute(pwd, 'gcp_cloud_function_region', doc_generation)}" -%>
+<% cloudfunction = grab_attributes(pwd)['cloudfunction'] -%>
 describe google_cloudfunctions_cloud_functions(project: <%= gcp_project_id -%>, location: <%= gcp_cloud_function_region -%>) do
   its('descriptions') { should include <%= doc_generation ? "'#{cloudfunction['description']}'" : "cloudfunction['description']" -%> }
   its('entry_points') { should include <%= doc_generation ? "'#{cloudfunction['entry_point']}'" : "cloudfunction['entry_point']" -%> }
diff --git a/templates/inspec/examples/google_compute_address/google_compute_address.erb b/templates/inspec/examples/google_compute_address/google_compute_address.erb
index 446d4ff7cc9d..0cee97fb8549 100644
--- a/templates/inspec/examples/google_compute_address/google_compute_address.erb
+++ b/templates/inspec/examples/google_compute_address/google_compute_address.erb
@@ -1,6 +1,6 @@
-<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%>
-<% gcp_location = "#{external_attribute('gcp_location', doc_generation)}" -%>
-<% address = grab_attributes['address'] -%>
+<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%>
+<% gcp_location = "#{external_attribute(pwd, 'gcp_location', doc_generation)}" -%>
+<% address = grab_attributes(pwd)['address'] -%>
 describe google_compute_address(project: <%= gcp_project_id -%>, location: <%= gcp_location -%>, name: <%= doc_generation ? "'#{address['name']}'" : "address['name']" -%>) do
   it { should exist }
   its('address') { should eq <%= doc_generation ? "'#{address['address']}'" : "address['address']" -%> }
diff --git a/templates/inspec/examples/google_compute_address/google_compute_address_attributes.erb b/templates/inspec/examples/google_compute_address/google_compute_address_attributes.erb
index b73f6a88cc98..fcb18e906c5e 100644
--- a/templates/inspec/examples/google_compute_address/google_compute_address_attributes.erb
+++ b/templates/inspec/examples/google_compute_address/google_compute_address_attributes.erb
@@ -1,3 +1,3 @@
-gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute('gcp_project_id') -%>', description: 'The GCP project identifier.')
-gcp_location = attribute(:gcp_location, default: '<%= external_attribute('gcp_location') -%>', description: 'The GCP project region.')
-address = attribute('address', default: <%= JSON.pretty_generate(grab_attributes['address']) -%>, description: 'Address definition')
\ No newline at end of file
+gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute(pwd, 'gcp_project_id') -%>', description: 'The GCP project identifier.')
+gcp_location = attribute(:gcp_location, default: '<%= external_attribute(pwd, 'gcp_location') -%>', description: 'The GCP project region.')
+address = attribute('address', default: <%= JSON.pretty_generate(grab_attributes(pwd)['address']) -%>, description: 'Address definition')
\ No newline at end of file
diff --git a/templates/inspec/examples/google_compute_address/google_compute_addresses.erb b/templates/inspec/examples/google_compute_address/google_compute_addresses.erb
index 49075598d54f..264a7bf03ba3 100644
--- a/templates/inspec/examples/google_compute_address/google_compute_addresses.erb
+++ b/templates/inspec/examples/google_compute_address/google_compute_addresses.erb
@@ -1,6 +1,6 @@
-<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%>
-<% gcp_location = "#{external_attribute('gcp_location', doc_generation)}" -%>
-<% address = grab_attributes['address'] -%>
+<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%>
+<% gcp_location = "#{external_attribute(pwd, 'gcp_location', doc_generation)}" -%>
+<% address = grab_attributes(pwd)['address'] -%>
 describe google_compute_addresses(project: <%= gcp_project_id -%>, location: <%= gcp_location -%>) do
   its('addresses') { should include <%= doc_generation ? "'#{address['address']}'" : "address['address']" -%> }
   its('names') { should include <%= doc_generation ? "'#{address['name']}'" : "address['name']" -%> }
diff --git a/templates/inspec/examples/google_compute_autoscaler/google_compute_autoscaler.erb b/templates/inspec/examples/google_compute_autoscaler/google_compute_autoscaler.erb
index b5c47a0a8498..1f58db41928e 100644
--- a/templates/inspec/examples/google_compute_autoscaler/google_compute_autoscaler.erb
+++ b/templates/inspec/examples/google_compute_autoscaler/google_compute_autoscaler.erb
@@ -1,10 +1,10 @@
-<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%>
-<% gcp_zone = "#{external_attribute('gcp_zone', doc_generation)}" -%>
-<% autoscaler = grab_attributes['autoscaler'] -%>
+<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%>
+<% gcp_zone = "#{external_attribute(pwd, 'gcp_zone', doc_generation)}" -%>
+<% autoscaler = grab_attributes(pwd)['autoscaler'] -%>
 describe google_compute_autoscaler(project: <%= doc_generation ? "#{gcp_project_id}" : "gcp_project_id" -%>, zone: <%= doc_generation ? "#{gcp_zone}" : "gcp_zone" -%>, name: <%= doc_generation ? "'#{autoscaler['name']}'" : "autoscaler['name']" -%>) do
   it { should exist }
-  its('target') { should match /\/<%= "#{grab_attributes['instance_group_manager']['name']}" -%>$/ }
+  its('target') { should match /\/<%= "#{grab_attributes(pwd)['instance_group_manager']['name']}" -%>$/ }
   its('autoscaling_policy.max_num_replicas') { should eq <%= doc_generation ? "'#{autoscaler['max_replicas']}'" : "autoscaler['max_replicas']" -%> }
   its('autoscaling_policy.min_num_replicas') { should eq <%= doc_generation ? "'#{autoscaler['min_replicas']}'" : "autoscaler['min_replicas']" -%> }
   its('autoscaling_policy.cool_down_period_sec') { should eq <%= doc_generation ? "'#{autoscaler['cooldown_period']}'" : "autoscaler['cooldown_period']" -%> }

diff --git a/templates/inspec/examples/google_compute_autoscaler/google_compute_autoscaler_attributes.erb b/templates/inspec/examples/google_compute_autoscaler/google_compute_autoscaler_attributes.erb
index f478953f4152..5d6315ede0bb 100644
--- a/templates/inspec/examples/google_compute_autoscaler/google_compute_autoscaler_attributes.erb
+++ b/templates/inspec/examples/google_compute_autoscaler/google_compute_autoscaler_attributes.erb
@@ -1,4 +1,4 @@
-gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute('gcp_project_id') -%>', description: 'The GCP project identifier.')
-gcp_zone = attribute(:gcp_zone, default: '<%= external_attribute('gcp_zone') -%>', description: 'The GCP project zone.')
-instance_group_manager = attribute('instance_group_manager', default: <%= JSON.pretty_generate(grab_attributes['instance_group_manager']) -%>, description: 'Instance group manager definition')
-autoscaler = attribute('autoscaler', default: <%= JSON.pretty_generate(grab_attributes['autoscaler']) -%>, description: 'Autoscaler definition')
\ No newline at end of file
+gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute(pwd, 'gcp_project_id') -%>', description: 'The GCP project identifier.')
+gcp_zone = attribute(:gcp_zone, default: '<%= external_attribute(pwd, 'gcp_zone') -%>', description: 'The GCP project zone.')
+instance_group_manager = attribute('instance_group_manager', default: <%= JSON.pretty_generate(grab_attributes(pwd)['instance_group_manager']) -%>, description: 'Instance group manager definition')
+autoscaler = attribute('autoscaler', default: <%= JSON.pretty_generate(grab_attributes(pwd)['autoscaler']) -%>, description: 'Autoscaler definition')
\ No newline at end of file
diff --git a/templates/inspec/examples/google_compute_autoscaler/google_compute_autoscalers.erb b/templates/inspec/examples/google_compute_autoscaler/google_compute_autoscalers.erb
index fc9d3bd4656e..3c889c152a88 100644
--- a/templates/inspec/examples/google_compute_autoscaler/google_compute_autoscalers.erb
+++ b/templates/inspec/examples/google_compute_autoscaler/google_compute_autoscalers.erb
@@ -1,6 +1,6 @@
-<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%>
-<% gcp_zone = "#{external_attribute('gcp_zone', doc_generation)}" -%>
-<% autoscaler = grab_attributes['autoscaler'] -%>
+<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%>
+<% gcp_zone = "#{external_attribute(pwd, 'gcp_zone', doc_generation)}" -%>
+<% autoscaler = grab_attributes(pwd)['autoscaler'] -%>
 autoscalers = google_compute_autoscalers(project: <%= doc_generation ? "#{gcp_project_id}" : "gcp_project_id" -%>, zone: <%= doc_generation ? "#{gcp_zone}" : "gcp_zone" -%>)
 describe.one do
   autoscalers.autoscaling_policies.each do |autoscaling_policy|
diff --git a/templates/inspec/examples/google_compute_backend_bucket/google_compute_backend_bucket.erb b/templates/inspec/examples/google_compute_backend_bucket/google_compute_backend_bucket.erb
index 67d8f0798112..9c7d881256bc 100644
--- a/templates/inspec/examples/google_compute_backend_bucket/google_compute_backend_bucket.erb
+++ b/templates/inspec/examples/google_compute_backend_bucket/google_compute_backend_bucket.erb
@@ -1,6 +1,6 @@
-<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%>
-<% gcp_storage_bucket_name = "#{external_attribute('gcp_storage_bucket_name', doc_generation)}" -%>
-<% backend_bucket = grab_attributes['backend_bucket'] -%>
+<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%>
+<% gcp_storage_bucket_name = "#{external_attribute(pwd, 'gcp_storage_bucket_name', doc_generation)}" -%>
+<% backend_bucket = grab_attributes(pwd)['backend_bucket'] -%>
 describe google_compute_backend_bucket(project: <%= gcp_project_id -%>, name: <%= doc_generation ? "'#{backend_bucket['name']}'" : "backend_bucket['name']" -%>) do
   it { should exist }
   its('description') { should eq <%= doc_generation ? "'#{backend_bucket['description']}'" : "backend_bucket['description']" -%> }
diff --git a/templates/inspec/examples/google_compute_backend_bucket/google_compute_backend_bucket_attributes.erb b/templates/inspec/examples/google_compute_backend_bucket/google_compute_backend_bucket_attributes.erb
index 2a46730a8143..e1b6caaee2ef 100644
--- a/templates/inspec/examples/google_compute_backend_bucket/google_compute_backend_bucket_attributes.erb
+++ b/templates/inspec/examples/google_compute_backend_bucket/google_compute_backend_bucket_attributes.erb
@@ -1,3 +1,3 @@
-gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute('gcp_project_id') -%>', description: 'The GCP project identifier.')
-gcp_storage_bucket_name = attribute(:gcp_storage_bucket_name, default: '<%= external_attribute('gcp_storage_bucket_name') -%>', description: 'The GCS bucket name to use for the backend bucket.')
-backend_bucket = attribute('backend_bucket', default: <%= JSON.pretty_generate(grab_attributes['backend_bucket']) -%>, description: 'Backend bucket definition')
\ No newline at end of file
+gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute(pwd, 'gcp_project_id') -%>', description: 'The GCP project identifier.')
+gcp_storage_bucket_name = attribute(:gcp_storage_bucket_name, default: '<%= external_attribute(pwd, 'gcp_storage_bucket_name') -%>', description: 'The GCS bucket name to use for the backend bucket.')
+backend_bucket = attribute('backend_bucket', default: <%= JSON.pretty_generate(grab_attributes(pwd)['backend_bucket']) -%>, description: 'Backend bucket definition')
\ No newline at end of file
diff --git a/templates/inspec/examples/google_compute_backend_bucket/google_compute_backend_buckets.erb b/templates/inspec/examples/google_compute_backend_bucket/google_compute_backend_buckets.erb
index 2d5fd5a9c5fc..7306eaffe1bd 100644
--- a/templates/inspec/examples/google_compute_backend_bucket/google_compute_backend_buckets.erb
+++ b/templates/inspec/examples/google_compute_backend_bucket/google_compute_backend_buckets.erb
@@ -1,6 +1,6 @@
-<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%>
-<% gcp_storage_bucket_name = "#{external_attribute('gcp_storage_bucket_name', doc_generation)}" -%>
-<% backend_bucket = grab_attributes['backend_bucket'] -%>
+<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%>
+<% gcp_storage_bucket_name = "#{external_attribute(pwd, 'gcp_storage_bucket_name', doc_generation)}" -%>
+<% backend_bucket = grab_attributes(pwd)['backend_bucket'] -%>
 describe google_compute_backend_buckets(project: <%= gcp_project_id -%>, name: <%= doc_generation ? "'#{backend_bucket['name']}'" : "backend_bucket['name']" -%>) do
   its('descriptions') { should include <%= doc_generation ? "'#{backend_bucket['description']}'" : "backend_bucket['description']" -%> }
 <% if doc_generation # bucket name is partially random, this ruins VCR in integration tests -%>
diff --git a/templates/inspec/examples/google_compute_backend_service/google_compute_backend_service.erb b/templates/inspec/examples/google_compute_backend_service/google_compute_backend_service.erb
index 7fcd87046cf4..68635ea8dd11 100644
--- a/templates/inspec/examples/google_compute_backend_service/google_compute_backend_service.erb
+++ b/templates/inspec/examples/google_compute_backend_service/google_compute_backend_service.erb
@@ -1,5 +1,5 @@
-<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%>
-<% backend_service = grab_attributes['backend_service'] -%>
+<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%>
+<% backend_service = grab_attributes(pwd)['backend_service'] -%>
 describe google_compute_backend_service(project: <%= gcp_project_id -%>, name: <%= doc_generation ? "'#{backend_service['name']}'" : "backend_service['name']" -%>) do
   it { should exist }
   its('description') { should eq <%= doc_generation ? "'#{backend_service['description']}'" : "backend_service['description']" -%> }
diff --git a/templates/inspec/examples/google_compute_backend_service/google_compute_backend_service_attributes.erb b/templates/inspec/examples/google_compute_backend_service/google_compute_backend_service_attributes.erb
index a672702c6a72..8a62223696f1 100644
--- a/templates/inspec/examples/google_compute_backend_service/google_compute_backend_service_attributes.erb
+++ b/templates/inspec/examples/google_compute_backend_service/google_compute_backend_service_attributes.erb
@@ -1,2 +1,2 @@
-gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute('gcp_project_id') -%>', description: 'The GCP project identifier.')
-backend_service = attribute('backend_service', default: <%= JSON.pretty_generate(grab_attributes['backend_service']) -%>, description: 'Backend service definition')
\ No newline at end of file
+gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute(pwd, 'gcp_project_id') -%>', description: 'The GCP project identifier.')
+backend_service = attribute('backend_service', default: <%= JSON.pretty_generate(grab_attributes(pwd)['backend_service']) -%>, description: 'Backend service definition')
\ No newline at end of file
diff --git a/templates/inspec/examples/google_compute_backend_service/google_compute_backend_services.erb b/templates/inspec/examples/google_compute_backend_service/google_compute_backend_services.erb
index 7c7e7cc29d58..8462d7df9045 100644
--- a/templates/inspec/examples/google_compute_backend_service/google_compute_backend_services.erb
+++ b/templates/inspec/examples/google_compute_backend_service/google_compute_backend_services.erb
@@ -1,5 +1,5 @@
-<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%>
-<% backend_service = grab_attributes['backend_service'] -%>
+<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%>
+<% backend_service = grab_attributes(pwd)['backend_service'] -%>
 describe google_compute_backend_services(project: <%= gcp_project_id -%>) do
   its('count') { should be >= 1 }
   its('names') { should include <%= doc_generation ? "'#{backend_service['name']}'" : "backend_service['name']" -%> }
diff --git a/templates/inspec/examples/google_compute_disk/google_compute_disk.erb b/templates/inspec/examples/google_compute_disk/google_compute_disk.erb
index d00ed63614b0..a9cc19a9811f 100644
--- a/templates/inspec/examples/google_compute_disk/google_compute_disk.erb
+++ b/templates/inspec/examples/google_compute_disk/google_compute_disk.erb
@@ -1,6 +1,6 @@
-<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%>
-<% gcp_zone = "#{external_attribute('gcp_zone', doc_generation)}" %>
-<% snapshot = grab_attributes['snapshot'] -%>
+<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%>
+<% gcp_zone = "#{external_attribute(pwd, 'gcp_zone', doc_generation)}" %>
+<% snapshot = grab_attributes(pwd)['snapshot'] -%>
 <% gcp_compute_disk_name = snapshot["disk_name"] -%>
 <% gcp_compute_disk_image = snapshot["disk_image"] -%>
 <% gcp_compute_disk_type = snapshot["disk_type"] -%>
diff --git a/templates/inspec/examples/google_compute_disk/google_compute_disk_attributes.erb b/templates/inspec/examples/google_compute_disk/google_compute_disk_attributes.erb
index 9de6cdaa0312..b426d63eebf8 100644
--- a/templates/inspec/examples/google_compute_disk/google_compute_disk_attributes.erb
+++ b/templates/inspec/examples/google_compute_disk/google_compute_disk_attributes.erb
@@ -1,6 +1,6 @@
-gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute('gcp_project_id') -%>', description: 'The GCP project identifier.')
-gcp_zone = attribute(:gcp_zone, default: '<%= external_attribute('gcp_zone') -%>', description: 'The GCP project zone.')
-snapshot = attribute('snapshot', default: <%= JSON.pretty_generate(grab_attributes['snapshot']) -%>, description: 'Disk snapshot description')
+gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute(pwd, 'gcp_project_id') -%>', description: 'The GCP project identifier.')
+gcp_zone = attribute(:gcp_zone, default: '<%= external_attribute(pwd, 'gcp_zone') -%>', description: 'The GCP project zone.')
+snapshot = attribute('snapshot', default: <%= JSON.pretty_generate(grab_attributes(pwd)['snapshot']) -%>, description: 'Disk snapshot description')
 gcp_compute_disk_name = snapshot["disk_name"]
 gcp_compute_disk_image = snapshot["disk_image"]
 gcp_compute_disk_type = snapshot["disk_type"]
\ No newline at end of file
diff --git a/templates/inspec/examples/google_compute_disk/google_compute_disks.erb b/templates/inspec/examples/google_compute_disk/google_compute_disks.erb
index f09c940b9de2..5e2196b501a1 100644
--- a/templates/inspec/examples/google_compute_disk/google_compute_disks.erb
+++ b/templates/inspec/examples/google_compute_disk/google_compute_disks.erb
@@ -1,7 +1,7 @@
-<% snapshot = grab_attributes['snapshot'] -%>
+<% snapshot = grab_attributes(pwd)['snapshot'] -%>
 <% gcp_compute_disk_image = "#{snapshot["disk_image"].gsub('\'', '')}" -%>
 most_recent_image = google_compute_image(project: <%= doc_generation ? "'#{gcp_compute_disk_image.split('/').first}'" : "gcp_compute_disk_image.split('/').first" -%>, name: <%= doc_generation ? "'#{gcp_compute_disk_image.split('/').last}'" : "gcp_compute_disk_image.split('/').last" -%>)
-describe google_compute_disks(project: <%= "#{external_attribute('gcp_project_id', doc_generation)}" -%>, zone: <%= "#{external_attribute('gcp_zone', doc_generation)}" -%>) do
+describe google_compute_disks(project: <%= "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%>, zone: <%= "#{external_attribute(pwd, 'gcp_zone', doc_generation)}" -%>) do
   it { should exist }
   its('names') { should include <%= doc_generation ? "'#{snapshot['disk_name']}'" : "snapshot['disk_name']" -%> }
   its('source_images') { should include most_recent_image.self_link }
diff --git a/templates/inspec/examples/google_compute_firewall/google_compute_firewall.erb b/templates/inspec/examples/google_compute_firewall/google_compute_firewall.erb
index 0b21ca50971a..000738d3b490 100644
--- a/templates/inspec/examples/google_compute_firewall/google_compute_firewall.erb
+++ b/templates/inspec/examples/google_compute_firewall/google_compute_firewall.erb
@@ -1,5 +1,5 @@
-<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%>
-<% firewall = grab_attributes['firewall'] -%>
+<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%>
+<% firewall = grab_attributes(pwd)['firewall'] -%>
 describe google_compute_firewall(project: <%= gcp_project_id -%>, name: <%= doc_generation ? "'#{firewall['name']}'" : "firewall['name']" -%>) do
   its('direction') { should cmp 'INGRESS' }
   its('log_config_enabled?') { should be true }
diff --git a/templates/inspec/examples/google_compute_firewall/google_compute_firewall_attributes.erb b/templates/inspec/examples/google_compute_firewall/google_compute_firewall_attributes.erb
index 018e6e33fd4a..35324e120e66 100644
--- a/templates/inspec/examples/google_compute_firewall/google_compute_firewall_attributes.erb
+++ b/templates/inspec/examples/google_compute_firewall/google_compute_firewall_attributes.erb
@@ -1,2 +1,2 @@
-gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute('gcp_project_id') -%>', description: 'The GCP project identifier.')
-firewall = attribute('firewall', default: <%= JSON.pretty_generate(grab_attributes['firewall']) -%>, description: 'Firewall rule definition')
\ No newline at end of file
+gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute(pwd, 'gcp_project_id') -%>', description: 'The GCP project identifier.')
+firewall = attribute('firewall', default: <%= JSON.pretty_generate(grab_attributes(pwd)['firewall']) -%>, description: 'Firewall rule definition')
\ No newline at end of file
diff --git a/templates/inspec/examples/google_compute_firewall/google_compute_firewalls.erb b/templates/inspec/examples/google_compute_firewall/google_compute_firewalls.erb
index 32f52b6350e2..eaba7efcb441 100644
--- a/templates/inspec/examples/google_compute_firewall/google_compute_firewalls.erb
+++ b/templates/inspec/examples/google_compute_firewall/google_compute_firewalls.erb
@@ -1,5 +1,5 @@
-<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%>
-<% firewall = grab_attributes['firewall'] -%>
+<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%>
+<% firewall = grab_attributes(pwd)['firewall'] -%>
 describe google_compute_firewalls(project: <%= gcp_project_id -%>) do
   its('count') { should be >= 1 }
   its('firewall_names') { should include <%= doc_generation ? "'#{firewall['name']}'" : "firewall['name']" -%> }
diff --git a/templates/inspec/examples/google_compute_forwarding_rule/google_compute_forwarding_rule.erb b/templates/inspec/examples/google_compute_forwarding_rule/google_compute_forwarding_rule.erb
index ff1bbd869e27..42b81d8dd856 100644
--- a/templates/inspec/examples/google_compute_forwarding_rule/google_compute_forwarding_rule.erb
+++ b/templates/inspec/examples/google_compute_forwarding_rule/google_compute_forwarding_rule.erb
@@ -1,6 +1,6 @@
-<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%>
-<% gcp_lb_region = "#{external_attribute('gcp_lb_region', doc_generation)}" -%>
-<% gcp_fr_udp_name = "#{external_attribute('gcp_fr_udp_name', doc_generation)}" -%>
+<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%>
+<% gcp_lb_region = "#{external_attribute(pwd, 'gcp_lb_region', doc_generation)}" -%>
+<% gcp_fr_udp_name = "#{external_attribute(pwd, 'gcp_fr_udp_name', doc_generation)}" -%>
 describe google_compute_forwarding_rule(project: <%= gcp_project_id -%>, region: <%= gcp_lb_region -%>, name: <%= doc_generation ? gcp_fr_udp_name : "\"\#{gcp_fr_udp_name}-500\"" -%>) do
   it { should exist }

diff --git a/templates/inspec/examples/google_compute_forwarding_rule/google_compute_forwarding_rule_attributes.erb b/templates/inspec/examples/google_compute_forwarding_rule/google_compute_forwarding_rule_attributes.erb
index d9f39a586f8d..e6aa0dab921f 100644
--- a/templates/inspec/examples/google_compute_forwarding_rule/google_compute_forwarding_rule_attributes.erb
+++ b/templates/inspec/examples/google_compute_forwarding_rule/google_compute_forwarding_rule_attributes.erb
@@ -1,3 +1,3 @@
-gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute('gcp_project_id') -%>', description: 'The GCP project identifier.')
-gcp_lb_region = attribute(:gcp_lb_region, default: '<%= external_attribute('gcp_lb_region') -%>', description: 'The region used for the forwarding rule.')
-gcp_fr_udp_name = attribute(:gcp_fr_udp_name, default: '<%= external_attribute('gcp_fr_udp_name') -%>', description: 'The forwarding rule name.')
+gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute(pwd, 'gcp_project_id') -%>', description: 'The GCP project identifier.')
+gcp_lb_region = attribute(:gcp_lb_region, default: '<%= external_attribute(pwd, 'gcp_lb_region') -%>', description: 'The region used for the forwarding rule.')
+gcp_fr_udp_name = attribute(:gcp_fr_udp_name, default: '<%= external_attribute(pwd, 'gcp_fr_udp_name') -%>', description: 'The forwarding rule name.')
diff --git a/templates/inspec/examples/google_compute_forwarding_rule/google_compute_forwarding_rules.erb b/templates/inspec/examples/google_compute_forwarding_rule/google_compute_forwarding_rules.erb
index 1054439a7ae1..4663581f0f5b 100644
--- a/templates/inspec/examples/google_compute_forwarding_rule/google_compute_forwarding_rules.erb
+++ b/templates/inspec/examples/google_compute_forwarding_rule/google_compute_forwarding_rules.erb
@@ -1,6 +1,6 @@
-<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%>
-<% gcp_lb_region = "#{external_attribute('gcp_lb_region', doc_generation)}" -%>
-<% gcp_fr_udp_name = "#{external_attribute('gcp_fr_udp_name', doc_generation)}" -%>
+<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%>
+<% gcp_lb_region = "#{external_attribute(pwd, 'gcp_lb_region', doc_generation)}" -%>
+<% gcp_fr_udp_name = "#{external_attribute(pwd, 'gcp_fr_udp_name', doc_generation)}" -%>
 describe google_compute_forwarding_rules(project: <%= gcp_project_id -%>, region: <%= gcp_lb_region -%>) do
   its('forwarding_rule_names') { should include <%= doc_generation ? gcp_fr_udp_name : "\"\#{gcp_fr_udp_name}-500\"" -%> }

diff --git a/templates/inspec/examples/google_compute_global_address/google_compute_global_address.erb b/templates/inspec/examples/google_compute_global_address/google_compute_global_address.erb
index 56ed5d7d1ea9..7be07c5888aa 100644
--- a/templates/inspec/examples/google_compute_global_address/google_compute_global_address.erb
+++ b/templates/inspec/examples/google_compute_global_address/google_compute_global_address.erb
@@ -1,5 +1,5 @@
-<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%>
-<% global_address = grab_attributes['global_address'] -%>
+<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%>
+<% global_address = grab_attributes(pwd)['global_address'] -%>
 describe google_compute_global_address(project: <%= gcp_project_id -%>, name: <%= doc_generation ? "'#{global_address['name']}'" : "global_address['name']" -%>) do
   it { should exist }
   its('ip_version') { should eq <%= doc_generation ? "'#{global_address['ip_version']}'" : "global_address['ip_version']" -%> }
diff --git a/templates/inspec/examples/google_compute_global_address/google_compute_global_address_attributes.erb b/templates/inspec/examples/google_compute_global_address/google_compute_global_address_attributes.erb
index 6649541bc771..9f8b152276e7 100644
--- a/templates/inspec/examples/google_compute_global_address/google_compute_global_address_attributes.erb
+++ b/templates/inspec/examples/google_compute_global_address/google_compute_global_address_attributes.erb
@@ -1,2 +1,2 @@
-gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute('gcp_project_id') -%>', description: 'The GCP project identifier.')
-global_address = attribute('global_address', default: <%= JSON.pretty_generate(grab_attributes['global_address']) -%>, description: 'Compute Global Address definition')
\ No newline at end of file
+gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute(pwd, 'gcp_project_id') -%>', description: 'The GCP project identifier.')
+global_address = attribute('global_address', default: <%= JSON.pretty_generate(grab_attributes(pwd)['global_address']) -%>, description: 'Compute Global Address definition')
\ No newline at end of file
diff --git a/templates/inspec/examples/google_compute_global_address/google_compute_global_addresses.erb b/templates/inspec/examples/google_compute_global_address/google_compute_global_addresses.erb
index 9d2facd88721..b241e0f7e9f9 100644
--- a/templates/inspec/examples/google_compute_global_address/google_compute_global_addresses.erb
+++ b/templates/inspec/examples/google_compute_global_address/google_compute_global_addresses.erb
@@ -1,5 +1,5 @@
-<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%>
-<% global_address = grab_attributes['global_address'] -%>
+<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%>
+<% global_address = grab_attributes(pwd)['global_address'] -%>
 describe google_compute_global_addresses(project: <%= gcp_project_id -%>, name: <%= doc_generation ? "'#{global_address['name']}'" : "global_address['name']" -%>) do
   its('count') { should be >= 1 }
   its('names') { should include <%= doc_generation ? "'#{global_address['name']}'" : "global_address['name']" -%> }
diff --git a/templates/inspec/examples/google_compute_global_forwarding_rule/google_compute_global_forwarding_rule.erb b/templates/inspec/examples/google_compute_global_forwarding_rule/google_compute_global_forwarding_rule.erb
index 451617be56bd..379e7ef49d02 100644
--- a/templates/inspec/examples/google_compute_global_forwarding_rule/google_compute_global_forwarding_rule.erb
+++ b/templates/inspec/examples/google_compute_global_forwarding_rule/google_compute_global_forwarding_rule.erb
@@ -1,9 +1,9 @@
-<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%>
-<% global_forwarding_rule = grab_attributes['global_forwarding_rule'] -%>
+<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%>
+<% global_forwarding_rule = grab_attributes(pwd)['global_forwarding_rule'] -%>
 describe google_compute_global_forwarding_rule(project: <%= doc_generation ? "#{gcp_project_id}" : "gcp_project_id" -%>, name: <%= doc_generation ? "'#{global_forwarding_rule['name']}'" : "global_forwarding_rule['name']" -%>) do
   it { should exist }
   its('port_range') { should eq <%= doc_generation ? "'#{global_forwarding_rule['port_range']}'" : "global_forwarding_rule['port_range']" -%> }
-  its('target') { should match /\/<%= "#{grab_attributes['http_proxy']['name']}" -%>$/ }
+  its('target') { should match /\/<%= "#{grab_attributes(pwd)['http_proxy']['name']}" -%>$/ }
 end

 describe google_compute_global_forwarding_rule(project: <%= doc_generation ? "#{gcp_project_id}" : "gcp_project_id" -%>, name: 'nonexistent') do
diff --git a/templates/inspec/examples/google_compute_global_forwarding_rule/google_compute_global_forwarding_rule_attributes.erb b/templates/inspec/examples/google_compute_global_forwarding_rule/google_compute_global_forwarding_rule_attributes.erb
index b28de737a3de..afb45a68c4c5 100644
--- a/templates/inspec/examples/google_compute_global_forwarding_rule/google_compute_global_forwarding_rule_attributes.erb
+++ b/templates/inspec/examples/google_compute_global_forwarding_rule/google_compute_global_forwarding_rule_attributes.erb
@@ -1,2 +1,2 @@
-gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute('gcp_project_id') -%>', description: 'The GCP project identifier.')
-global_forwarding_rule = attribute('global_forwarding_rule', default: <%= JSON.pretty_generate(grab_attributes['global_forwarding_rule']) -%>, description: 'Compute global forwarding rule definition')
\ No newline at end of file
+gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute(pwd, 'gcp_project_id') -%>', description: 'The GCP project identifier.')
+global_forwarding_rule = attribute('global_forwarding_rule', default: <%= JSON.pretty_generate(grab_attributes(pwd)['global_forwarding_rule']) -%>, description: 'Compute global forwarding rule definition')
\ No newline at end of file
diff --git a/templates/inspec/examples/google_compute_global_forwarding_rule/google_compute_global_forwarding_rules.erb b/templates/inspec/examples/google_compute_global_forwarding_rule/google_compute_global_forwarding_rules.erb
index 109c4857056a..a74d39ead5ee 100644
--- a/templates/inspec/examples/google_compute_global_forwarding_rule/google_compute_global_forwarding_rules.erb
+++ b/templates/inspec/examples/google_compute_global_forwarding_rule/google_compute_global_forwarding_rules.erb
@@ -1,5 +1,5 @@
-<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%>
-<% global_forwarding_rule = grab_attributes['global_forwarding_rule'] -%>
+<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%>
+<% global_forwarding_rule = grab_attributes(pwd)['global_forwarding_rule'] -%>
 describe google_compute_global_forwarding_rules(project: <%= doc_generation ? "#{gcp_project_id}" : "gcp_project_id" -%>) do
   its('count') { should be >= 1 }
   its('port_ranges') { should include <%= doc_generation ? "'#{global_forwarding_rule['port_range']}'" : "global_forwarding_rule['port_range']" -%> }
diff --git a/templates/inspec/examples/google_compute_health_check/google_compute_health_check.erb b/templates/inspec/examples/google_compute_health_check/google_compute_health_check.erb
index 30931f44ceca..562caa826e16 100644
--- a/templates/inspec/examples/google_compute_health_check/google_compute_health_check.erb
+++ b/templates/inspec/examples/google_compute_health_check/google_compute_health_check.erb
@@ -1,5 +1,5 @@
-<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%>
-<% health_check = grab_attributes['health_check'] -%>
+<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%>
+<% health_check = grab_attributes(pwd)['health_check'] -%>
 describe google_compute_health_check(project: <%= doc_generation ? "#{gcp_project_id}" : "gcp_project_id" -%>, name: <%= doc_generation ? "'#{health_check['name']}'" : "health_check['name']" -%>) do
   it { should exist }
   its('timeout_sec') { should eq <%= doc_generation ? "'#{health_check['timeout_sec']}'" : "health_check['timeout_sec']" -%> }
diff --git a/templates/inspec/examples/google_compute_health_check/google_compute_health_check_attributes.erb b/templates/inspec/examples/google_compute_health_check/google_compute_health_check_attributes.erb
index 7b601264bd64..9a4940ad4fe0 100644
--- a/templates/inspec/examples/google_compute_health_check/google_compute_health_check_attributes.erb
+++ b/templates/inspec/examples/google_compute_health_check/google_compute_health_check_attributes.erb
@@ -1,2 +1,2 @@
-gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute('gcp_project_id') -%>', description: 'The GCP project identifier.')
-health_check = attribute('health_check', default: <%= JSON.pretty_generate(grab_attributes['health_check']) -%>, description: 'Health check definition')
\ No newline at end of file
+gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute(pwd, 'gcp_project_id') -%>', description: 'The GCP project identifier.')
+health_check = attribute('health_check', default: <%= JSON.pretty_generate(grab_attributes(pwd)['health_check']) -%>, description: 'Health check definition')
\ No newline at end of file
diff --git a/templates/inspec/examples/google_compute_health_check/google_compute_health_checks.erb b/templates/inspec/examples/google_compute_health_check/google_compute_health_checks.erb
index c367e583701c..1abe9f3c5b9d 100644
--- a/templates/inspec/examples/google_compute_health_check/google_compute_health_checks.erb
+++ b/templates/inspec/examples/google_compute_health_check/google_compute_health_checks.erb
@@ -1,5 +1,5 @@
-<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%>
-<% health_check = grab_attributes['health_check'] -%>
+<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%>
+<% health_check = grab_attributes(pwd)['health_check'] -%>
 describe google_compute_health_checks(project: <%= doc_generation ? "#{gcp_project_id}" : "gcp_project_id" -%>) do
   its('names') { should include <%= doc_generation ? "'#{health_check['name']}'" : "health_check['name']" -%> }
   its('timeout_secs') { should include <%= doc_generation ? "'#{health_check['timeout_sec']}'" : "health_check['timeout_sec']" -%> }
diff --git a/templates/inspec/examples/google_compute_http_health_check/google_compute_http_health_check.erb b/templates/inspec/examples/google_compute_http_health_check/google_compute_http_health_check.erb
index b8e7b5d6c652..fd8d25a2c163 100644
--- a/templates/inspec/examples/google_compute_http_health_check/google_compute_http_health_check.erb
+++ b/templates/inspec/examples/google_compute_http_health_check/google_compute_http_health_check.erb
@@ -1,5 +1,5 @@
-<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%>
-<% http_health_check = grab_attributes['http_health_check'] -%>
+<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%>
+<% http_health_check = grab_attributes(pwd)['http_health_check'] -%>
 describe google_compute_http_health_check(project: <%= doc_generation ? "#{gcp_project_id}" : "gcp_project_id" -%>, name: <%= doc_generation ? "'#{http_health_check['name']}'" : "http_health_check['name']" -%>) do
   it { should exist }
   its('timeout_sec') { should eq <%= doc_generation ? "'#{http_health_check['timeout_sec']}'" : "http_health_check['timeout_sec']" -%> }
diff --git a/templates/inspec/examples/google_compute_http_health_check/google_compute_http_health_check_attributes.erb b/templates/inspec/examples/google_compute_http_health_check/google_compute_http_health_check_attributes.erb
index 80e5c6a6a9b8..3a8648e846aa 100644
--- a/templates/inspec/examples/google_compute_http_health_check/google_compute_http_health_check_attributes.erb
+++ b/templates/inspec/examples/google_compute_http_health_check/google_compute_http_health_check_attributes.erb
@@ -1,2 +1,2 @@
-gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute('gcp_project_id') -%>', description: 'The GCP project identifier.')
-http_health_check = attribute('http_health_check', default: <%= JSON.pretty_generate(grab_attributes['http_health_check']) -%>, description: 'HTTP health check definition')
\ No newline at end of file
+gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute(pwd, 'gcp_project_id') -%>', description: 'The GCP project identifier.')
+http_health_check = attribute('http_health_check', default: <%= JSON.pretty_generate(grab_attributes(pwd)['http_health_check']) -%>, description: 'HTTP health check definition')
\ No newline at end of file
diff --git a/templates/inspec/examples/google_compute_http_health_check/google_compute_http_health_checks.erb b/templates/inspec/examples/google_compute_http_health_check/google_compute_http_health_checks.erb
index 7e19b4f967e0..afc9f3f7cffe 100644
--- a/templates/inspec/examples/google_compute_http_health_check/google_compute_http_health_checks.erb
+++ b/templates/inspec/examples/google_compute_http_health_check/google_compute_http_health_checks.erb
@@ -1,5 +1,5 @@
-<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%>
-<% http_health_check = grab_attributes['http_health_check'] -%>
+<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%>
+<% http_health_check = grab_attributes(pwd)['http_health_check'] -%>
 describe google_compute_http_health_checks(project: <%= doc_generation ? "#{gcp_project_id}" : "gcp_project_id" -%>) do
   its('names') { should include <%= doc_generation ? "'#{http_health_check['name']}'" : "http_health_check['name']" -%> }
   its('timeout_secs') { should include <%= doc_generation ? "'#{http_health_check['timeout_sec']}'" : "http_health_check['timeout_sec']" -%> }
diff --git a/templates/inspec/examples/google_compute_https_health_check/google_compute_https_health_check.erb b/templates/inspec/examples/google_compute_https_health_check/google_compute_https_health_check.erb
index aa8a2440eb3c..d23793ace65c 100644
--- a/templates/inspec/examples/google_compute_https_health_check/google_compute_https_health_check.erb
+++ b/templates/inspec/examples/google_compute_https_health_check/google_compute_https_health_check.erb
@@ -1,5 +1,5 @@
-<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%>
-<% https_health_check = grab_attributes['https_health_check'] -%>
+<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%>
+<% https_health_check = grab_attributes(pwd)['https_health_check'] -%>
 describe google_compute_https_health_check(project: <%= doc_generation ? "#{gcp_project_id}" : "gcp_project_id" -%>, name: <%= doc_generation ? "'#{https_health_check['name']}'" : "https_health_check['name']" -%>) do
   it { should exist }
   its('timeout_sec') { should eq <%= doc_generation ? "'#{https_health_check['timeout_sec']}'" : "https_health_check['timeout_sec']" -%> }
diff --git a/templates/inspec/examples/google_compute_https_health_check/google_compute_https_health_check_attributes.erb b/templates/inspec/examples/google_compute_https_health_check/google_compute_https_health_check_attributes.erb
index 45e945adfb91..515e2be4f1c9 100644
--- a/templates/inspec/examples/google_compute_https_health_check/google_compute_https_health_check_attributes.erb
+++ b/templates/inspec/examples/google_compute_https_health_check/google_compute_https_health_check_attributes.erb
@@ -1,2 +1,2 @@
-gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute('gcp_project_id') -%>', description: 'The GCP project identifier.')
-https_health_check = attribute('https_health_check', default: <%= JSON.pretty_generate(grab_attributes['https_health_check']) -%>, description: 'HTTPS health check definition')
\ No newline at end of file
+gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute(pwd, 'gcp_project_id') -%>', description: 'The GCP project identifier.')
+https_health_check = attribute('https_health_check', default: <%= JSON.pretty_generate(grab_attributes(pwd)['https_health_check']) -%>, description: 'HTTPS health check definition')
\ No newline at end of file
diff --git a/templates/inspec/examples/google_compute_https_health_check/google_compute_https_health_checks.erb b/templates/inspec/examples/google_compute_https_health_check/google_compute_https_health_checks.erb
index 3d88ada99330..2e034c04f66d 100644
--- a/templates/inspec/examples/google_compute_https_health_check/google_compute_https_health_checks.erb
+++ b/templates/inspec/examples/google_compute_https_health_check/google_compute_https_health_checks.erb
@@ -1,5 +1,5 @@
-<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%>
-<% https_health_check = grab_attributes['https_health_check'] -%>
+<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%>
+<% https_health_check = grab_attributes(pwd)['https_health_check'] -%>
 describe google_compute_https_health_checks(project: <%= doc_generation ? "#{gcp_project_id}" : "gcp_project_id" -%>) do
   its('names') { should include <%= doc_generation ? "'#{https_health_check['name']}'" : "https_health_check['name']" -%> }
   its('timeout_secs') { should include <%= doc_generation ? "'#{https_health_check['timeout_sec']}'" : "https_health_check['timeout_sec']" -%> }
diff --git a/templates/inspec/examples/google_compute_image/google_compute_image.erb b/templates/inspec/examples/google_compute_image/google_compute_image.erb
index 34a3d6fe7e83..93e3d1b2ae15 100644
--- a/templates/inspec/examples/google_compute_image/google_compute_image.erb
+++ b/templates/inspec/examples/google_compute_image/google_compute_image.erb
@@ -1,5 +1,5 @@
-<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%>
-<% compute_image = grab_attributes['compute_image'] -%>
+<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%>
+<% compute_image = grab_attributes(pwd)['compute_image'] -%>
 describe google_compute_image(project: <%= gcp_project_id -%>, name: <%= doc_generation ? "'#{compute_image['name']}'" : "compute_image['name']" -%>) do
   it { should exist }
   its('disk_size_gb') { should cmp 3 }
diff --git a/templates/inspec/examples/google_compute_image/google_compute_image_attributes.erb b/templates/inspec/examples/google_compute_image/google_compute_image_attributes.erb
index 146c3fefc2a3..fb038a9af531 100644
--- a/templates/inspec/examples/google_compute_image/google_compute_image_attributes.erb
+++ b/templates/inspec/examples/google_compute_image/google_compute_image_attributes.erb
@@ -1,2 +1,2 @@
-gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute('gcp_project_id') -%>', description: 'The GCP project identifier.')
-compute_image = attribute('compute_image', default: <%= JSON.pretty_generate(grab_attributes['compute_image']) -%>, description: 'Compute image description')
\ No newline at end of file
+gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute(pwd, 'gcp_project_id') -%>', description: 'The GCP project identifier.')
+compute_image = attribute('compute_image', default: <%= JSON.pretty_generate(grab_attributes(pwd)['compute_image']) -%>, description: 'Compute image description')
\ No newline at end of file
diff --git a/templates/inspec/examples/google_compute_instance/google_compute_instance.erb b/templates/inspec/examples/google_compute_instance/google_compute_instance.erb
index 90492b58ead5..d5cd1612cc13 100644
--- a/templates/inspec/examples/google_compute_instance/google_compute_instance.erb
+++ b/templates/inspec/examples/google_compute_instance/google_compute_instance.erb
@@ -1,6 +1,6 @@
-<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%>
-<% gcp_zone = "#{external_attribute('gcp_zone', doc_generation)}" %>
-<% instance = grab_attributes['instance'] -%>
+<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%>
+<% gcp_zone = "#{external_attribute(pwd, 'gcp_zone', doc_generation)}" %>
+<% instance = grab_attributes(pwd)['instance'] -%>
 describe google_compute_instance(project: <%= gcp_project_id -%>, zone: <%= gcp_zone -%>, name: <%= doc_generation ? "'#{instance['name']}'" : "instance['name']" -%>) do
   it { should exist }
   its('machine_type') { should match <%= doc_generation ? "'#{instance['machine_type']}'" : "instance['machine_type']" -%> }
@@ -8,6 +8,8 @@ describe google_compute_instance(project: <%= gcp_project_id -%>, zone: <%= gcp_
   its('tags.items') { should include <%= doc_generation ? "'#{instance['tag_2']}'" : "instance['tag_2']" -%> }
   its('tag_count') { should cmp 2 }
   its('service_account_scopes') { should include <%= doc_generation ? "'#{instance['sa_scope']}'" : "instance['sa_scope']" -%> }
+  its('metadata_keys') { should include <%= doc_generation ? "'#{instance['metadata_key']}'" : "instance['metadata_key']" -%> }
+  its('metadata_values') { should include <%= doc_generation ? "'#{instance['metadata_value']}'" : "instance['metadata_value']" -%> }
 end

 describe google_compute_instance(project: <%= gcp_project_id -%>, zone: <%= gcp_zone -%>, name: 'nonexistent') do
diff --git a/templates/inspec/examples/google_compute_instance/google_compute_instance_attributes.erb b/templates/inspec/examples/google_compute_instance/google_compute_instance_attributes.erb
index 293e916a8baa..ff87256cf185 100644
--- a/templates/inspec/examples/google_compute_instance/google_compute_instance_attributes.erb
+++ b/templates/inspec/examples/google_compute_instance/google_compute_instance_attributes.erb
@@ -1,3 +1,3 @@
-gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute('gcp_project_id') -%>', description: 'The GCP project identifier.')
-gcp_zone = attribute(:gcp_zone, default: '<%= external_attribute('gcp_zone') -%>', description: 'GCP zone name of the compute disk')
-instance = attribute('instance', default: <%= JSON.pretty_generate(grab_attributes['instance']) -%>, description: 'Compute instance description')
\ No newline at end of file
+gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute(pwd, 'gcp_project_id') -%>', description: 'The GCP project identifier.')
+gcp_zone = attribute(:gcp_zone, default: '<%= external_attribute(pwd, 'gcp_zone') -%>', description: 'GCP zone name of the compute disk')
+instance = attribute('instance', default: <%= JSON.pretty_generate(grab_attributes(pwd)['instance']) -%>, description: 'Compute instance description')
\ No newline at end of file
diff --git a/templates/inspec/examples/google_compute_instance/google_compute_instances.erb b/templates/inspec/examples/google_compute_instance/google_compute_instances.erb
index cb05c7971485..7d2472b3e5ad 100644
--- a/templates/inspec/examples/google_compute_instance/google_compute_instances.erb
+++ b/templates/inspec/examples/google_compute_instance/google_compute_instances.erb
@@ -1,6 +1,6 @@
-<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%>
-<% gcp_zone = "#{external_attribute('gcp_zone', doc_generation)}" %>
-<% instance = grab_attributes['instance'] -%>
+<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%>
+<% gcp_zone = "#{external_attribute(pwd, 'gcp_zone', doc_generation)}" %>
+<% instance = grab_attributes(pwd)['instance'] -%>
 describe google_compute_instances(project: <%= gcp_project_id -%>, zone: <%= gcp_zone -%>) do
   its('instance_names') { should include <%= doc_generation ? "'#{instance['name']}'" : "instance['name']" -%> }
 end
\ No newline at end of file
diff --git a/templates/inspec/examples/google_compute_instance_group/google_compute_instance_group.erb b/templates/inspec/examples/google_compute_instance_group/google_compute_instance_group.erb
index f8d2da23c6f5..e8624744dd93 100644
--- a/templates/inspec/examples/google_compute_instance_group/google_compute_instance_group.erb
+++ b/templates/inspec/examples/google_compute_instance_group/google_compute_instance_group.erb
@@ -1,6 +1,6 @@
-<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%>
-<% gcp_zone = "#{external_attribute('gcp_zone', doc_generation)}" %>
-<% instance_group = grab_attributes['instance_group'] -%>
+<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%>
+<% gcp_zone = "#{external_attribute(pwd, 'gcp_zone', doc_generation)}" %>
+<% instance_group = grab_attributes(pwd)['instance_group'] -%>
 describe google_compute_instance_group(project: <%= gcp_project_id -%>, zone: <%= gcp_zone -%>, name: <%= doc_generation ? "'#{instance_group['name']}'" : "instance_group['name']" -%>) do
   it { should exist }
   its('description') { should cmp <%= doc_generation ? "'#{instance_group['description']}'" : "instance_group['description']" -%> }
diff --git a/templates/inspec/examples/google_compute_instance_group/google_compute_instance_group_attributes.erb b/templates/inspec/examples/google_compute_instance_group/google_compute_instance_group_attributes.erb
index 048964c15651..392989d3f94e 100644
--- a/templates/inspec/examples/google_compute_instance_group/google_compute_instance_group_attributes.erb
+++ b/templates/inspec/examples/google_compute_instance_group/google_compute_instance_group_attributes.erb
@@ -1,3 +1,3 @@
-gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute('gcp_project_id') -%>', description: 'The GCP project identifier.')
-gcp_zone = attribute(:gcp_zone, default: '<%= external_attribute('gcp_zone') -%>', description: 'GCP zone name')
-instance_group = attribute('instance_group', default: <%= JSON.pretty_generate(grab_attributes['instance_group']) -%>, description: 'Instance group')
\ No newline at end of file
+gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute(pwd, 'gcp_project_id') -%>', description: 'The GCP project identifier.')
+gcp_zone = attribute(:gcp_zone, default: '<%= external_attribute(pwd, 'gcp_zone') -%>', description: 'GCP zone name')
+instance_group = attribute('instance_group', default: <%= JSON.pretty_generate(grab_attributes(pwd)['instance_group']) -%>, description: 'Instance group')
\ No newline at end of file
diff --git a/templates/inspec/examples/google_compute_instance_group/google_compute_instance_groups.erb b/templates/inspec/examples/google_compute_instance_group/google_compute_instance_groups.erb
index c4340cfa99ce..e83fcf0d7e85 100644
--- a/templates/inspec/examples/google_compute_instance_group/google_compute_instance_groups.erb
+++ b/templates/inspec/examples/google_compute_instance_group/google_compute_instance_groups.erb
@@ -1,6 +1,6 @@
-<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%>
-<% gcp_zone = "#{external_attribute('gcp_zone', doc_generation)}" %>
-<% instance_group = grab_attributes['instance_group'] -%>
+<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%>
+<% gcp_zone = "#{external_attribute(pwd, 'gcp_zone', doc_generation)}" %>
+<% instance_group = grab_attributes(pwd)['instance_group'] -%>
 describe google_compute_instance_groups(project: <%= gcp_project_id -%>, zone: <%= gcp_zone -%>) do
   its('instance_group_names') { should include <%= doc_generation ? "'#{instance_group['name']}'" : "instance_group['name']" -%> }
 end
\ No newline at end of file
diff --git a/templates/inspec/examples/google_compute_instance_group_manager/google_compute_instance_group_manager.erb b/templates/inspec/examples/google_compute_instance_group_manager/google_compute_instance_group_manager.erb
index e16516c1e30c..739979dbb39b 100644
--- a/templates/inspec/examples/google_compute_instance_group_manager/google_compute_instance_group_manager.erb
+++ b/templates/inspec/examples/google_compute_instance_group_manager/google_compute_instance_group_manager.erb
@@ -1,7 +1,7 @@
-<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%>
-<% gcp_zone = "#{external_attribute('gcp_zone', doc_generation)}" -%>
-<% gcp_lb_mig1_name = "#{external_attribute('gcp_lb_mig1_name', doc_generation)}" -%>
-<% instance_group_manager = grab_attributes['instance_group_manager'] -%>
+<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%>
+<% gcp_zone = "#{external_attribute(pwd, 'gcp_zone', doc_generation)}" -%>
+<% gcp_lb_mig1_name = "#{external_attribute(pwd, 'gcp_lb_mig1_name', doc_generation)}" -%>
+<% instance_group_manager = grab_attributes(pwd)['instance_group_manager'] -%>
 describe google_compute_instance_group_manager(project: <%= doc_generation ? "#{gcp_project_id}" : "gcp_project_id" -%>, zone: <%= doc_generation ? "#{gcp_zone}" : "gcp_zone" -%>, name: <%= doc_generation ? "'#{instance_group_manager['name']}'" : "instance_group_manager['name']" -%>) do
   it { should exist }

diff --git a/templates/inspec/examples/google_compute_instance_group_manager/google_compute_instance_group_manager_attributes.erb b/templates/inspec/examples/google_compute_instance_group_manager/google_compute_instance_group_manager_attributes.erb
index c69f8feb36af..7e7a7eee553e 100644
--- a/templates/inspec/examples/google_compute_instance_group_manager/google_compute_instance_group_manager_attributes.erb
+++ b/templates/inspec/examples/google_compute_instance_group_manager/google_compute_instance_group_manager_attributes.erb
@@ -1,3 +1,3 @@
-gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute('gcp_project_id') -%>', description: 'The GCP project identifier.')
-gcp_zone = attribute(:gcp_zone, default: '<%= external_attribute('gcp_zone') -%>', description: 'The GCP project zone.')
-instance_group_manager = attribute('instance_group_manager', default: <%= JSON.pretty_generate(grab_attributes['instance_group_manager']) -%>, description: 'Instance group manager definition')
\ No newline at end of file
+gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute(pwd, 'gcp_project_id') -%>', description: 'The GCP project identifier.')
+gcp_zone = attribute(:gcp_zone, default: '<%= external_attribute(pwd, 'gcp_zone') -%>', description: 'The GCP project zone.')
+instance_group_manager = attribute('instance_group_manager', default: <%= JSON.pretty_generate(grab_attributes(pwd)['instance_group_manager']) -%>, description: 'Instance group manager definition')
\ No newline at end of file
diff --git a/templates/inspec/examples/google_compute_instance_group_manager/google_compute_instance_group_managers.erb b/templates/inspec/examples/google_compute_instance_group_manager/google_compute_instance_group_managers.erb
index d774663ccec5..5ec0875ac2d5 100644
--- a/templates/inspec/examples/google_compute_instance_group_manager/google_compute_instance_group_managers.erb
+++ b/templates/inspec/examples/google_compute_instance_group_manager/google_compute_instance_group_managers.erb
@@ -1,7 +1,7 @@
-<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%>
-<% gcp_zone = "#{external_attribute('gcp_zone', doc_generation)}" -%>
-<% gcp_lb_mig1_name = "#{external_attribute('gcp_lb_mig1_name', doc_generation)}" -%>
-<% instance_group_manager = grab_attributes['instance_group_manager'] -%>
+<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%>
+<% gcp_zone = "#{external_attribute(pwd, 'gcp_zone', doc_generation)}" -%>
+<% gcp_lb_mig1_name = "#{external_attribute(pwd, 'gcp_lb_mig1_name', doc_generation)}" -%>
+<% instance_group_manager = grab_attributes(pwd)['instance_group_manager'] -%>
 describe google_compute_instance_group_managers(project: <%= doc_generation ? "#{gcp_project_id}" : "gcp_project_id" -%>, zone: <%= doc_generation ? "#{gcp_zone}" : "gcp_zone" -%>) do
   its('base_instance_names') { should include <%= doc_generation ? "'#{instance_group_manager['base_instance_name']}'" : "instance_group_manager['base_instance_name']" -%> }
 end
\ No newline at end of file
diff --git a/templates/inspec/examples/google_compute_instance_template/google_compute_instance_template.erb b/templates/inspec/examples/google_compute_instance_template/google_compute_instance_template.erb
index ad95cb041b4d..0dc861ba1e62 100644
--- a/templates/inspec/examples/google_compute_instance_template/google_compute_instance_template.erb
+++ b/templates/inspec/examples/google_compute_instance_template/google_compute_instance_template.erb
@@ -1,5 +1,5 @@
-<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%>
-<% instance_template = grab_attributes['instance_template'] -%>
+<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%>
+<% instance_template = grab_attributes(pwd)['instance_template'] -%>
 describe google_compute_instance_template(project: <%= doc_generation ? "#{gcp_project_id}" : "gcp_project_id" -%>, name: <%= doc_generation ? "'#{instance_template['name']}'" : "instance_template['name']" -%>) do
   it { should exist }
   its('description') { should eq <%= doc_generation ? "'#{instance_template['description']}'" : "instance_template['description']" -%> }
diff --git a/templates/inspec/examples/google_compute_instance_template/google_compute_instance_template_attributes.erb b/templates/inspec/examples/google_compute_instance_template/google_compute_instance_template_attributes.erb
index ccc90bb07712..43ee62466cb8 100644
--- a/templates/inspec/examples/google_compute_instance_template/google_compute_instance_template_attributes.erb
+++ b/templates/inspec/examples/google_compute_instance_template/google_compute_instance_template_attributes.erb
@@ -1,2 +1,2 @@
-gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute('gcp_project_id') -%>', description: 'The GCP project identifier.')
-instance_template = attribute('instance_template', default: <%= JSON.pretty_generate(grab_attributes['instance_template']) -%>, description: 'An instance template definition')
\ No newline at end of file
+gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute(pwd, 'gcp_project_id') -%>', description: 'The GCP project identifier.')
+instance_template = attribute('instance_template', default: <%= JSON.pretty_generate(grab_attributes(pwd)['instance_template']) -%>, description: 'An instance template definition')
\ No newline at end of file
diff --git a/templates/inspec/examples/google_compute_instance_template/google_compute_instance_templates.erb b/templates/inspec/examples/google_compute_instance_template/google_compute_instance_templates.erb
index dd968bfbf06a..a6691a2c2661 100644
--- a/templates/inspec/examples/google_compute_instance_template/google_compute_instance_templates.erb
+++ b/templates/inspec/examples/google_compute_instance_template/google_compute_instance_templates.erb
@@ -1,5 +1,5 @@
-<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%>
-<% instance_template = grab_attributes['instance_template'] -%>
+<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%>
+<% instance_template = grab_attributes(pwd)['instance_template'] -%>
 describe google_compute_instance_templates(project: <%= doc_generation ? "#{gcp_project_id}" : "gcp_project_id" -%>) do
   its('names') { should include <%= doc_generation ? "'#{instance_template['name']}'" : "instance_template['name']" -%> }
 end
\ No newline at end of file
diff --git a/templates/inspec/examples/google_compute_network/google_compute_network.erb b/templates/inspec/examples/google_compute_network/google_compute_network.erb
index 66f9c910a0c4..b24f523a9cec 100644
--- a/templates/inspec/examples/google_compute_network/google_compute_network.erb
+++ b/templates/inspec/examples/google_compute_network/google_compute_network.erb
@@ -1,5 +1,5 @@
-<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%>
-<% network = grab_attributes['network'] -%>
+<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%>
+<% network = grab_attributes(pwd)['network'] -%>
 describe google_compute_network(project: <%= gcp_project_id -%>, name: <%= doc_generation ? "'#{network['name']}'" : "network['name']" -%>) do
   it { should exist }
   its('routing_config.routing_mode') { should cmp <%= doc_generation ? "'#{network['routing_mode']}'" : "network['routing_mode']" -%> }
diff --git a/templates/inspec/examples/google_compute_network/google_compute_network_attributes.erb b/templates/inspec/examples/google_compute_network/google_compute_network_attributes.erb
index 205e68a34291..96f6dab07361 100644
--- a/templates/inspec/examples/google_compute_network/google_compute_network_attributes.erb
+++ b/templates/inspec/examples/google_compute_network/google_compute_network_attributes.erb
@@ -1,2 +1,2 @@
-gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute('gcp_project_id') -%>', description: 'The GCP project identifier.')
-network = attribute('network', default: <%= JSON.pretty_generate(grab_attributes['network']) -%>, description: 'Network description')
\ No newline at end of file
+gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute(pwd, 'gcp_project_id') -%>', description: 'The GCP project identifier.')
+network = attribute('network', default: <%= JSON.pretty_generate(grab_attributes(pwd)['network']) -%>, description: 'Network description')
\ No newline at end of file
diff --git a/templates/inspec/examples/google_compute_network/google_compute_networks.erb b/templates/inspec/examples/google_compute_network/google_compute_networks.erb
index ea617bf94aff..c77f9b4eefec 100644
--- a/templates/inspec/examples/google_compute_network/google_compute_networks.erb
+++ b/templates/inspec/examples/google_compute_network/google_compute_networks.erb
@@ -1,5 +1,5 @@
-<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%>
-<% network = grab_attributes['network'] -%>
+<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%>
+<% network = grab_attributes(pwd)['network'] -%>
 describe google_compute_networks(project: <%= gcp_project_id -%>) do
   its('network_names') { should include <%= doc_generation ? "'#{network['name']}'" : "network['name']" -%> }
 end
\ No newline at end of file
diff --git a/templates/inspec/examples/google_compute_network_endpoint_group/google_compute_network_endpoint_group.erb b/templates/inspec/examples/google_compute_network_endpoint_group/google_compute_network_endpoint_group.erb
index 6034a69db465..dcd80daaa106 100644
--- a/templates/inspec/examples/google_compute_network_endpoint_group/google_compute_network_endpoint_group.erb
+++ b/templates/inspec/examples/google_compute_network_endpoint_group/google_compute_network_endpoint_group.erb
@@ -1,6 +1,6 @@
-<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%>
-<% network_endpoint_group = grab_attributes['network_endpoint_group'] -%>
-<% gcp_zone = "#{external_attribute('gcp_zone', doc_generation)}" %>
+<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%>
+<% network_endpoint_group = grab_attributes(pwd)['network_endpoint_group'] -%>
+<% gcp_zone = "#{external_attribute(pwd, 'gcp_zone', doc_generation)}" %>
 describe google_compute_network_endpoint_group(project: <%= gcp_project_id -%>, zone: <%= gcp_zone -%>, name: <%= doc_generation ? "'#{network_endpoint_group['name']}'" : "network_endpoint_group['name']" -%>) do
   it { should exist }
   its('default_port') { should cmp <%= doc_generation ? "'#{network_endpoint_group['default_port']}'" : "network_endpoint_group['default_port']" -%> }
diff --git a/templates/inspec/examples/google_compute_network_endpoint_group/google_compute_network_endpoint_group_attributes.erb b/templates/inspec/examples/google_compute_network_endpoint_group/google_compute_network_endpoint_group_attributes.erb
index e6fe6011cafe..c2827e4a70a5 100644
--- a/templates/inspec/examples/google_compute_network_endpoint_group/google_compute_network_endpoint_group_attributes.erb
+++ b/templates/inspec/examples/google_compute_network_endpoint_group/google_compute_network_endpoint_group_attributes.erb
@@ -1,3 +1,3 @@
-gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute('gcp_project_id') -%>', description: 'The GCP project identifier.')
-network_endpoint_group = attribute('network_endpoint_group', default: <%= JSON.pretty_generate(grab_attributes['network_endpoint_group']) -%>, description: 'Network endpoint group description')
-gcp_zone = attribute(:gcp_zone, default: '<%= external_attribute('gcp_zone') -%>', description: 'GCP zone name')
\ No newline at end of file
+gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute(pwd, 'gcp_project_id') -%>', description: 'The GCP project identifier.')
+network_endpoint_group = attribute('network_endpoint_group', default: <%= JSON.pretty_generate(grab_attributes(pwd)['network_endpoint_group']) -%>, description: 'Network endpoint group description')
+gcp_zone = attribute(:gcp_zone, default: '<%= external_attribute(pwd, 'gcp_zone') -%>', description: 'GCP zone name')
\ No newline at end of file
diff --git a/templates/inspec/examples/google_compute_network_endpoint_group/google_compute_network_endpoint_groups.erb b/templates/inspec/examples/google_compute_network_endpoint_group/google_compute_network_endpoint_groups.erb
index b416a5075442..24ea8c229dc3 100644
--- a/templates/inspec/examples/google_compute_network_endpoint_group/google_compute_network_endpoint_groups.erb
+++ b/templates/inspec/examples/google_compute_network_endpoint_group/google_compute_network_endpoint_groups.erb
@@ -1,6 +1,6 @@
-<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%>
-<% network_endpoint_group = grab_attributes['network_endpoint_group'] -%>
-<% gcp_zone = "#{external_attribute('gcp_zone', doc_generation)}" %>
+<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%>
+<% network_endpoint_group = grab_attributes(pwd)['network_endpoint_group'] -%>
+<% gcp_zone = "#{external_attribute(pwd, 'gcp_zone', doc_generation)}" %>
 describe google_compute_network_endpoint_groups(project: <%= gcp_project_id -%>, zone: <%= gcp_zone -%>) do
   its('default_ports') { should include <%= doc_generation ? "'#{network_endpoint_group['default_port']}'" : "network_endpoint_group['default_port']" -%> }
   its('names') { should include <%= doc_generation ?
"'#{network_endpoint_group['name']}'" : "network_endpoint_group['name']" -%> } diff --git a/templates/inspec/examples/google_compute_node_group/google_compute_node_group.erb b/templates/inspec/examples/google_compute_node_group/google_compute_node_group.erb index a4567eaa3e9c..aa6a57057f64 100644 --- a/templates/inspec/examples/google_compute_node_group/google_compute_node_group.erb +++ b/templates/inspec/examples/google_compute_node_group/google_compute_node_group.erb @@ -1,6 +1,6 @@ -<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%> -<% node_group = grab_attributes['node_group'] -%> -<% gcp_zone = "#{external_attribute('gcp_zone', doc_generation)}" %> +<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%> +<% node_group = grab_attributes(pwd)['node_group'] -%> +<% gcp_zone = "#{external_attribute(pwd, 'gcp_zone', doc_generation)}" %> describe google_compute_node_group(project: <%= gcp_project_id -%>, zone: <%= gcp_zone -%>, name: <%= doc_generation ? "'#{node_group['name']}'" : "node_group['name']" -%>) do it { should exist } its('description') { should cmp <%= doc_generation ? "'#{node_group['description']}'" : "node_group['description']" -%> } diff --git a/templates/inspec/examples/google_compute_node_group/google_compute_node_group_attributes.erb b/templates/inspec/examples/google_compute_node_group/google_compute_node_group_attributes.erb index 5d11c86bc48e..5f8d7f99f5bc 100644 --- a/templates/inspec/examples/google_compute_node_group/google_compute_node_group_attributes.erb +++ b/templates/inspec/examples/google_compute_node_group/google_compute_node_group_attributes.erb @@ -1,3 +1,3 @@ -gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute('gcp_project_id') -%>', description: 'The GCP project identifier.') -node_group = attribute('node_group', default: <%= JSON.pretty_generate(grab_attributes['node_group']) -%>, description: 'Node group description') -gcp_zone = attribute(:gcp_zone, default: '<%= external_attribute('gcp_zone') -%>', description: 'GCP zone name') \ No newline at end of file +gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute(pwd, 'gcp_project_id') -%>', description: 'The GCP project identifier.') +node_group = attribute('node_group', default: <%= JSON.pretty_generate(grab_attributes(pwd)['node_group']) -%>, description: 'Node group description') +gcp_zone = attribute(:gcp_zone, default: '<%= external_attribute(pwd, 'gcp_zone') -%>', description: 'GCP zone name') \ No newline at end of file diff --git a/templates/inspec/examples/google_compute_node_group/google_compute_node_groups.erb b/templates/inspec/examples/google_compute_node_group/google_compute_node_groups.erb index 4eea2831b019..a6f7c5352fa5 100644 --- a/templates/inspec/examples/google_compute_node_group/google_compute_node_groups.erb +++ b/templates/inspec/examples/google_compute_node_group/google_compute_node_groups.erb @@ -1,6 +1,6 @@ -<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%> -<% node_group = grab_attributes['node_group'] -%> -<% gcp_zone = "#{external_attribute('gcp_zone', doc_generation)}" %> +<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%> +<% node_group = grab_attributes(pwd)['node_group'] -%> +<% gcp_zone = "#{external_attribute(pwd, 'gcp_zone', doc_generation)}" %> describe google_compute_node_groups(project: <%= gcp_project_id -%>, zone: <%= gcp_zone -%>) do it { should exist } its('descriptions') { should 
include <%= doc_generation ? "'#{node_group['description']}'" : "node_group['description']" -%> } diff --git a/templates/inspec/examples/google_compute_node_template/google_compute_node_template.erb b/templates/inspec/examples/google_compute_node_template/google_compute_node_template.erb index 574227febdff..5fc714c79fc8 100644 --- a/templates/inspec/examples/google_compute_node_template/google_compute_node_template.erb +++ b/templates/inspec/examples/google_compute_node_template/google_compute_node_template.erb @@ -1,6 +1,6 @@ -<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%> -<% node_template = grab_attributes['node_template'] -%> -<% gcp_location = "#{external_attribute('gcp_location', doc_generation)}" %> +<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%> +<% node_template = grab_attributes(pwd)['node_template'] -%> +<% gcp_location = "#{external_attribute(pwd, 'gcp_location', doc_generation)}" %> describe google_compute_node_template(project: <%= gcp_project_id -%>, region: <%= gcp_location -%>, name: <%= doc_generation ? "'#{node_template['name']}'" : "node_template['name']" -%>) do it { should exist } its('node_affinity_labels') { should include(<%= doc_generation ? "'#{node_template['label_key']}'" : "node_template['label_key']" -%> => <%= doc_generation ? "'#{node_template['label_value']}'" : "node_template['label_value']" -%>) } diff --git a/templates/inspec/examples/google_compute_node_template/google_compute_node_template_attributes.erb b/templates/inspec/examples/google_compute_node_template/google_compute_node_template_attributes.erb index 7ecffbee4933..9e4022c57d63 100644 --- a/templates/inspec/examples/google_compute_node_template/google_compute_node_template_attributes.erb +++ b/templates/inspec/examples/google_compute_node_template/google_compute_node_template_attributes.erb @@ -1,3 +1,3 @@ -gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute('gcp_project_id') -%>', description: 'The GCP project identifier.') -gcp_location = attribute(:gcp_location, default: '<%= external_attribute('gcp_location') -%>', description: 'The GCP project region.') -node_template = attribute('node_template', default: <%= JSON.pretty_generate(grab_attributes['node_template']) -%>, description: 'Node template description') \ No newline at end of file +gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute(pwd, 'gcp_project_id') -%>', description: 'The GCP project identifier.') +gcp_location = attribute(:gcp_location, default: '<%= external_attribute(pwd, 'gcp_location') -%>', description: 'The GCP project region.') +node_template = attribute('node_template', default: <%= JSON.pretty_generate(grab_attributes(pwd)['node_template']) -%>, description: 'Node template description') \ No newline at end of file diff --git a/templates/inspec/examples/google_compute_node_template/google_compute_node_templates.erb b/templates/inspec/examples/google_compute_node_template/google_compute_node_templates.erb index aaa36f94d300..9904f65a728c 100644 --- a/templates/inspec/examples/google_compute_node_template/google_compute_node_templates.erb +++ b/templates/inspec/examples/google_compute_node_template/google_compute_node_templates.erb @@ -1,6 +1,6 @@ -<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%> -<% node_template = grab_attributes['node_template'] -%> -<% gcp_location = "#{external_attribute('gcp_location', doc_generation)}" %> +<% gcp_project_id = 
"#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%> +<% node_template = grab_attributes(pwd)['node_template'] -%> +<% gcp_location = "#{external_attribute(pwd, 'gcp_location', doc_generation)}" %> describe google_compute_node_templates(project: <%= gcp_project_id -%>, region: <%= gcp_location -%>) do its('names') { should include <%= doc_generation ? "'#{node_template['name']}'" : "node_template['name']" -%> } end \ No newline at end of file diff --git a/templates/inspec/examples/google_compute_project_info/google_compute_project_info.erb b/templates/inspec/examples/google_compute_project_info/google_compute_project_info.erb index f101eaf31e58..0bdc41f0eaec 100644 --- a/templates/inspec/examples/google_compute_project_info/google_compute_project_info.erb +++ b/templates/inspec/examples/google_compute_project_info/google_compute_project_info.erb @@ -1,4 +1,4 @@ -<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%> +<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%> describe google_compute_project_info(project: <%= gcp_project_id -%>) do it { should exist } its('default_service_account') { should match "developer.gserviceaccount.com" } diff --git a/templates/inspec/examples/google_compute_project_info/google_compute_project_info_attributes.erb b/templates/inspec/examples/google_compute_project_info/google_compute_project_info_attributes.erb index a2863dfa3703..9e434667ef77 100644 --- a/templates/inspec/examples/google_compute_project_info/google_compute_project_info_attributes.erb +++ b/templates/inspec/examples/google_compute_project_info/google_compute_project_info_attributes.erb @@ -1 +1 @@ -gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute('gcp_project_id') -%>', description: 'The GCP project identifier.') \ No newline at end of file +gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute(pwd, 'gcp_project_id') -%>', description: 'The GCP project identifier.') \ No newline at end of file diff --git a/templates/inspec/examples/google_compute_region/google_compute_region.erb b/templates/inspec/examples/google_compute_region/google_compute_region.erb index 5ca45d827c3f..d47c7a9a551c 100644 --- a/templates/inspec/examples/google_compute_region/google_compute_region.erb +++ b/templates/inspec/examples/google_compute_region/google_compute_region.erb @@ -1,5 +1,5 @@ -<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%> -<% gcp_location = "#{external_attribute('gcp_location', doc_generation)}" -%> +<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%> +<% gcp_location = "#{external_attribute(pwd, 'gcp_location', doc_generation)}" -%> describe google_compute_region(project: <%= gcp_project_id -%>, name: <%= gcp_location -%>) do it { should exist } it { should be_up } diff --git a/templates/inspec/examples/google_compute_region/google_compute_region_attributes.erb b/templates/inspec/examples/google_compute_region/google_compute_region_attributes.erb index 8241a8af352d..7a694ac5cb22 100644 --- a/templates/inspec/examples/google_compute_region/google_compute_region_attributes.erb +++ b/templates/inspec/examples/google_compute_region/google_compute_region_attributes.erb @@ -1,2 +1,2 @@ -gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute('gcp_project_id') -%>', description: 'The GCP project identifier.') -gcp_location = attribute(:gcp_location, default: '<%= 
external_attribute('gcp_location') -%>', description: 'The GCP project region.') \ No newline at end of file +gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute(pwd, 'gcp_project_id') -%>', description: 'The GCP project identifier.') +gcp_location = attribute(:gcp_location, default: '<%= external_attribute(pwd, 'gcp_location') -%>', description: 'The GCP project region.') \ No newline at end of file diff --git a/templates/inspec/examples/google_compute_region/google_compute_regions.erb b/templates/inspec/examples/google_compute_region/google_compute_regions.erb index 05bf6392abd4..8c3193d9f05c 100644 --- a/templates/inspec/examples/google_compute_region/google_compute_regions.erb +++ b/templates/inspec/examples/google_compute_region/google_compute_regions.erb @@ -1,5 +1,5 @@ -<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%> -<% gcp_location = "#{external_attribute('gcp_location', doc_generation)}" -%> +<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%> +<% gcp_location = "#{external_attribute(pwd, 'gcp_location', doc_generation)}" -%> describe google_compute_regions(project: <%= gcp_project_id -%>) do its('count') { should be >= 1 } its('region_names') { should include "#{gcp_location}" } diff --git a/templates/inspec/examples/google_compute_region_backend_service/google_compute_region_backend_service.erb b/templates/inspec/examples/google_compute_region_backend_service/google_compute_region_backend_service.erb index 8ceca054d24c..064154375feb 100644 --- a/templates/inspec/examples/google_compute_region_backend_service/google_compute_region_backend_service.erb +++ b/templates/inspec/examples/google_compute_region_backend_service/google_compute_region_backend_service.erb @@ -1,6 +1,6 @@ -<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%> -<% gcp_location = "#{external_attribute('gcp_location', doc_generation)}" -%> -<% region_backend_service = grab_attributes['region_backend_service'] -%> +<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%> +<% gcp_location = "#{external_attribute(pwd, 'gcp_location', doc_generation)}" -%> +<% region_backend_service = grab_attributes(pwd)['region_backend_service'] -%> describe google_compute_region_backend_service(project: <%= gcp_project_id -%>, region: <%= gcp_location -%>, name: <%= doc_generation ? "'#{region_backend_service['name']}'" : "region_backend_service['name']" -%>) do it { should exist } its('description') { should eq <%= doc_generation ? 
"'#{region_backend_service['description']}'" : "region_backend_service['description']" -%> } diff --git a/templates/inspec/examples/google_compute_region_backend_service/google_compute_region_backend_service_attributes.erb b/templates/inspec/examples/google_compute_region_backend_service/google_compute_region_backend_service_attributes.erb index 4b6866b7922d..298773c5f630 100644 --- a/templates/inspec/examples/google_compute_region_backend_service/google_compute_region_backend_service_attributes.erb +++ b/templates/inspec/examples/google_compute_region_backend_service/google_compute_region_backend_service_attributes.erb @@ -1,3 +1,3 @@ -gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute('gcp_project_id') -%>', description: 'The GCP project identifier.') -gcp_location = attribute(:gcp_location, default: '<%= external_attribute('gcp_location') -%>', description: 'The GCP project region.') -region_backend_service = attribute('region_backend_service', default: <%= JSON.pretty_generate(grab_attributes['region_backend_service']) -%>, description: 'Backend service definition') \ No newline at end of file +gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute(pwd, 'gcp_project_id') -%>', description: 'The GCP project identifier.') +gcp_location = attribute(:gcp_location, default: '<%= external_attribute(pwd, 'gcp_location') -%>', description: 'The GCP project region.') +region_backend_service = attribute('region_backend_service', default: <%= JSON.pretty_generate(grab_attributes(pwd)['region_backend_service']) -%>, description: 'Backend service definition') \ No newline at end of file diff --git a/templates/inspec/examples/google_compute_region_backend_service/google_compute_region_backend_services.erb b/templates/inspec/examples/google_compute_region_backend_service/google_compute_region_backend_services.erb index 3d6224dc4960..5707d919233a 100644 --- a/templates/inspec/examples/google_compute_region_backend_service/google_compute_region_backend_services.erb +++ b/templates/inspec/examples/google_compute_region_backend_service/google_compute_region_backend_services.erb @@ -1,6 +1,6 @@ -<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%> -<% gcp_location = "#{external_attribute('gcp_location', doc_generation)}" -%> -<% region_backend_service = grab_attributes['region_backend_service'] -%> +<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%> +<% gcp_location = "#{external_attribute(pwd, 'gcp_location', doc_generation)}" -%> +<% region_backend_service = grab_attributes(pwd)['region_backend_service'] -%> describe google_compute_region_backend_services(project: <%= gcp_project_id -%>, region: <%= gcp_location -%>) do its('count') { should be >= 1 } its('names') { should include <%= doc_generation ? 
"'#{region_backend_service['name']}'" : "region_backend_service['name']" -%> } diff --git a/templates/inspec/examples/google_compute_region_instance_group_manager/google_compute_region_instance_group_manager.erb b/templates/inspec/examples/google_compute_region_instance_group_manager/google_compute_region_instance_group_manager.erb index 2231b0c0f0e8..4269de5aca53 100644 --- a/templates/inspec/examples/google_compute_region_instance_group_manager/google_compute_region_instance_group_manager.erb +++ b/templates/inspec/examples/google_compute_region_instance_group_manager/google_compute_region_instance_group_manager.erb @@ -1,6 +1,6 @@ -<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%> -<% gcp_location = "#{external_attribute('gcp_location', doc_generation)}" -%> -<% rigm = grab_attributes['rigm'] -%> +<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%> +<% gcp_location = "#{external_attribute(pwd, 'gcp_location', doc_generation)}" -%> +<% rigm = grab_attributes(pwd)['rigm'] -%> describe google_compute_region_instance_group_manager(project: <%= gcp_project_id -%>, region: <%= gcp_location -%>, name: <%= doc_generation ? "'#{rigm['name']}'" : "rigm['name']" -%>) do it { should exist } its('base_instance_name') { should eq <%= doc_generation ? "'#{rigm['base_instance_name']}'" : "rigm['base_instance_name']" -%> } diff --git a/templates/inspec/examples/google_compute_region_instance_group_manager/google_compute_region_instance_group_manager_attributes.erb b/templates/inspec/examples/google_compute_region_instance_group_manager/google_compute_region_instance_group_manager_attributes.erb index 3d1f22a10758..5799971ab6b2 100644 --- a/templates/inspec/examples/google_compute_region_instance_group_manager/google_compute_region_instance_group_manager_attributes.erb +++ b/templates/inspec/examples/google_compute_region_instance_group_manager/google_compute_region_instance_group_manager_attributes.erb @@ -1,3 +1,3 @@ -gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute('gcp_project_id') -%>', description: 'The GCP project identifier.') -gcp_location = attribute(:gcp_location, default: '<%= external_attribute('gcp_location') -%>', description: 'The GCP project region.') -rigm = attribute('rigm', default: <%= JSON.pretty_generate(grab_attributes['rigm']) -%>, description: 'Compute region instance group manager description') \ No newline at end of file +gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute(pwd, 'gcp_project_id') -%>', description: 'The GCP project identifier.') +gcp_location = attribute(:gcp_location, default: '<%= external_attribute(pwd, 'gcp_location') -%>', description: 'The GCP project region.') +rigm = attribute('rigm', default: <%= JSON.pretty_generate(grab_attributes(pwd)['rigm']) -%>, description: 'Compute region instance group manager description') \ No newline at end of file diff --git a/templates/inspec/examples/google_compute_region_instance_group_manager/google_compute_region_instance_group_managers.erb b/templates/inspec/examples/google_compute_region_instance_group_manager/google_compute_region_instance_group_managers.erb index 318fc5b07199..be3faf38ac69 100644 --- a/templates/inspec/examples/google_compute_region_instance_group_manager/google_compute_region_instance_group_managers.erb +++ b/templates/inspec/examples/google_compute_region_instance_group_manager/google_compute_region_instance_group_managers.erb @@ -1,6 +1,6 @@ -<% gcp_project_id = 
"#{external_attribute('gcp_project_id', doc_generation)}" -%> -<% gcp_location = "#{external_attribute('gcp_location', doc_generation)}" -%> -<% rigm = grab_attributes['rigm'] -%> +<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%> +<% gcp_location = "#{external_attribute(pwd, 'gcp_location', doc_generation)}" -%> +<% rigm = grab_attributes(pwd)['rigm'] -%> describe google_compute_region_instance_group_managers(project: <%= gcp_project_id -%>, region: <%= gcp_location -%>) do its('instance_group_manager_names') { should include <%= doc_generation ? "'#{rigm['name']}'" : "rigm['name']" -%> } its('base_instance_names') { should include <%= doc_generation ? "'#{rigm['base_instance_name']}'" : "rigm['base_instance_name']" -%> } diff --git a/templates/inspec/examples/google_compute_route/google_compute_route.erb b/templates/inspec/examples/google_compute_route/google_compute_route.erb index 2dfa1fed280c..61fd4cbea81f 100644 --- a/templates/inspec/examples/google_compute_route/google_compute_route.erb +++ b/templates/inspec/examples/google_compute_route/google_compute_route.erb @@ -1,6 +1,6 @@ -<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%> -<% route = grab_attributes['route'] -%> -<% gcp_network_name = "#{external_attribute('gcp_network_name', doc_generation)}" -%> +<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%> +<% route = grab_attributes(pwd)['route'] -%> +<% gcp_network_name = "#{external_attribute(pwd, 'gcp_network_name', doc_generation)}" -%> describe google_compute_route(project: <%= gcp_project_id -%>, name: <%= doc_generation ? "'#{route['name']}'" : "route['name']" -%>) do it { should exist } its('dest_range') { should eq <%= doc_generation ? 
"'#{route['dest_range']}'" : "route['dest_range']" -%> } diff --git a/templates/inspec/examples/google_compute_route/google_compute_route_attributes.erb b/templates/inspec/examples/google_compute_route/google_compute_route_attributes.erb index 31eb5dbf7302..4912798bb9d8 100644 --- a/templates/inspec/examples/google_compute_route/google_compute_route_attributes.erb +++ b/templates/inspec/examples/google_compute_route/google_compute_route_attributes.erb @@ -1,3 +1,3 @@ -gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute('gcp_project_id') -%>', description: 'The GCP project identifier.') -route = attribute('route', default: <%= JSON.pretty_generate(grab_attributes['route']) -%>, description: 'Compute route description') -gcp_network_name = attribute(:gcp_network_name, default: '<%= external_attribute('gcp_network_name') -%>', description: 'GCP network name') \ No newline at end of file +gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute(pwd, 'gcp_project_id') -%>', description: 'The GCP project identifier.') +route = attribute('route', default: <%= JSON.pretty_generate(grab_attributes(pwd)['route']) -%>, description: 'Compute route description') +gcp_network_name = attribute(:gcp_network_name, default: '<%= external_attribute(pwd, 'gcp_network_name') -%>', description: 'GCP network name') \ No newline at end of file diff --git a/templates/inspec/examples/google_compute_route/google_compute_routes.erb b/templates/inspec/examples/google_compute_route/google_compute_routes.erb index 0da181373ea7..413789b35fc2 100644 --- a/templates/inspec/examples/google_compute_route/google_compute_routes.erb +++ b/templates/inspec/examples/google_compute_route/google_compute_routes.erb @@ -1,5 +1,5 @@ -<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%> -<% route = grab_attributes['route'] -%> +<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%> +<% route = grab_attributes(pwd)['route'] -%> describe google_compute_routes(project: <%= gcp_project_id -%>) do its('count') { should be >= 1 } its('dest_ranges') { should include <%= doc_generation ? "'#{route['dest_range']}'" : "route['dest_range']" -%> } diff --git a/templates/inspec/examples/google_compute_router/google_compute_router.erb b/templates/inspec/examples/google_compute_router/google_compute_router.erb index b4f0467e637d..fecb0bcfcf67 100644 --- a/templates/inspec/examples/google_compute_router/google_compute_router.erb +++ b/templates/inspec/examples/google_compute_router/google_compute_router.erb @@ -1,7 +1,7 @@ -<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%> -<% gcp_location = "#{external_attribute('gcp_location', doc_generation)}" -%> -<% router = grab_attributes['router'] -%> -<% gcp_network_name = "#{external_attribute('gcp_network_name', doc_generation)}" -%> +<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%> +<% gcp_location = "#{external_attribute(pwd, 'gcp_location', doc_generation)}" -%> +<% router = grab_attributes(pwd)['router'] -%> +<% gcp_network_name = "#{external_attribute(pwd, 'gcp_network_name', doc_generation)}" -%> describe google_compute_router(project: <%= gcp_project_id -%>, region: <%= gcp_location -%>, name: <%= doc_generation ? "'#{router['name']}'" : "router['name']" -%>) do it { should exist } its('bgp.asn') { should eq <%= doc_generation ? 
"'#{router['bgp_asn']}'" : "router['bgp_asn']" -%> } diff --git a/templates/inspec/examples/google_compute_router/google_compute_router_attributes.erb b/templates/inspec/examples/google_compute_router/google_compute_router_attributes.erb index 6319dea87fa0..9d7e97fc6da4 100644 --- a/templates/inspec/examples/google_compute_router/google_compute_router_attributes.erb +++ b/templates/inspec/examples/google_compute_router/google_compute_router_attributes.erb @@ -1,4 +1,4 @@ -gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute('gcp_project_id') -%>', description: 'The GCP project identifier.') -gcp_location = attribute(:gcp_location, default: '<%= external_attribute('gcp_location') -%>', description: 'The GCP project region.') -router = attribute('router', default: <%= JSON.pretty_generate(grab_attributes['router']) -%>, description: 'Compute router description') -gcp_network_name = attribute(:gcp_network_name, default: '<%= external_attribute('gcp_network_name') -%>', description: 'GCP network name') \ No newline at end of file +gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute(pwd, 'gcp_project_id') -%>', description: 'The GCP project identifier.') +gcp_location = attribute(:gcp_location, default: '<%= external_attribute(pwd, 'gcp_location') -%>', description: 'The GCP project region.') +router = attribute('router', default: <%= JSON.pretty_generate(grab_attributes(pwd)['router']) -%>, description: 'Compute router description') +gcp_network_name = attribute(:gcp_network_name, default: '<%= external_attribute(pwd, 'gcp_network_name') -%>', description: 'GCP network name') \ No newline at end of file diff --git a/templates/inspec/examples/google_compute_router/google_compute_routers.erb b/templates/inspec/examples/google_compute_router/google_compute_routers.erb index 3adf63690476..9b02a8421407 100644 --- a/templates/inspec/examples/google_compute_router/google_compute_routers.erb +++ b/templates/inspec/examples/google_compute_router/google_compute_routers.erb @@ -1,6 +1,6 @@ -<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%> -<% gcp_location = "#{external_attribute('gcp_location', doc_generation)}" -%> -<% router = grab_attributes['router'] -%> +<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%> +<% gcp_location = "#{external_attribute(pwd, 'gcp_location', doc_generation)}" -%> +<% router = grab_attributes(pwd)['router'] -%> describe google_compute_routers(project: <%= gcp_project_id -%>, region: <%= gcp_location -%>) do its('names') { should include <%= doc_generation ? 
"'#{router['name']}'" : "router['name']" -%> } end \ No newline at end of file diff --git a/templates/inspec/examples/google_compute_router_nat/google_compute_router_nat.erb b/templates/inspec/examples/google_compute_router_nat/google_compute_router_nat.erb index 89f3a5539cd6..192462efbf7f 100644 --- a/templates/inspec/examples/google_compute_router_nat/google_compute_router_nat.erb +++ b/templates/inspec/examples/google_compute_router_nat/google_compute_router_nat.erb @@ -1,7 +1,7 @@ -<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%> -<% gcp_location = "#{external_attribute('gcp_location', doc_generation)}" -%> -<% router = grab_attributes['router'] -%> -<% router_nat = grab_attributes['router_nat'] -%> +<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%> +<% gcp_location = "#{external_attribute(pwd, 'gcp_location', doc_generation)}" -%> +<% router = grab_attributes(pwd)['router'] -%> +<% router_nat = grab_attributes(pwd)['router_nat'] -%> describe google_compute_router_nat(project: <%= gcp_project_id -%>, region: <%= gcp_location -%>, router: <%= doc_generation ? "'#{router['name']}'" : "router['name']" -%>, name: <%= doc_generation ? "'#{router_nat['name']}'" : "router_nat['name']" -%>) do it { should exist } its('nat_ip_allocate_option') { should cmp <%= doc_generation ? "'#{router_nat['nat_ip_allocate_option']}'" : "router_nat['nat_ip_allocate_option']" -%> } diff --git a/templates/inspec/examples/google_compute_router_nat/google_compute_router_nat_attributes.erb b/templates/inspec/examples/google_compute_router_nat/google_compute_router_nat_attributes.erb index bfada640e1dc..9316bb528181 100644 --- a/templates/inspec/examples/google_compute_router_nat/google_compute_router_nat_attributes.erb +++ b/templates/inspec/examples/google_compute_router_nat/google_compute_router_nat_attributes.erb @@ -1,4 +1,4 @@ -gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute('gcp_project_id') -%>', description: 'The GCP project identifier.') -gcp_location = attribute(:gcp_location, default: '<%= external_attribute('gcp_location') -%>', description: 'The GCP project region.') -router = attribute('router', default: <%= JSON.pretty_generate(grab_attributes['router']) -%>, description: 'Compute router description') -router_nat = attribute('router_nat', default: <%= JSON.pretty_generate(grab_attributes['router_nat']) -%>, description: 'Compute router NAT description') +gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute(pwd, 'gcp_project_id') -%>', description: 'The GCP project identifier.') +gcp_location = attribute(:gcp_location, default: '<%= external_attribute(pwd, 'gcp_location') -%>', description: 'The GCP project region.') +router = attribute('router', default: <%= JSON.pretty_generate(grab_attributes(pwd)['router']) -%>, description: 'Compute router description') +router_nat = attribute('router_nat', default: <%= JSON.pretty_generate(grab_attributes(pwd)['router_nat']) -%>, description: 'Compute router NAT description') diff --git a/templates/inspec/examples/google_compute_router_nat/google_compute_router_nats.erb b/templates/inspec/examples/google_compute_router_nat/google_compute_router_nats.erb index 13e0e53ad843..e655821801df 100644 --- a/templates/inspec/examples/google_compute_router_nat/google_compute_router_nats.erb +++ b/templates/inspec/examples/google_compute_router_nat/google_compute_router_nats.erb @@ -1,7 +1,7 @@ -<% gcp_project_id = 
"#{external_attribute('gcp_project_id', doc_generation)}" -%> -<% gcp_location = "#{external_attribute('gcp_location', doc_generation)}" -%> -<% router = grab_attributes['router'] -%> -<% router_nat = grab_attributes['router_nat'] -%> +<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%> +<% gcp_location = "#{external_attribute(pwd, 'gcp_location', doc_generation)}" -%> +<% router = grab_attributes(pwd)['router'] -%> +<% router_nat = grab_attributes(pwd)['router_nat'] -%> describe google_compute_router_nats(project: <%= gcp_project_id -%>, region: <%= gcp_location -%>, router: <%= doc_generation ? "'#{router['name']}'" : "router['name']" -%>) do its('names') { should include <%= doc_generation ? "'#{router_nat['name']}'" : "router_nat['name']" -%> } end \ No newline at end of file diff --git a/templates/inspec/examples/google_compute_security_policy/google_compute_security_policies.erb b/templates/inspec/examples/google_compute_security_policy/google_compute_security_policies.erb new file mode 100644 index 000000000000..54f336d392da --- /dev/null +++ b/templates/inspec/examples/google_compute_security_policy/google_compute_security_policies.erb @@ -0,0 +1,6 @@ +<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%> +<% security_policy = grab_attributes(pwd)['security_policy'] -%> +describe google_compute_security_policies(project: <%= gcp_project_id -%>) do + its('count') { should be >= 1 } + its('names') { should include <%= doc_generation ? "'#{security_policy['name']}'" : "security_policy['name']" -%> } +end \ No newline at end of file diff --git a/templates/inspec/examples/google_compute_security_policy/google_compute_security_policy.erb b/templates/inspec/examples/google_compute_security_policy/google_compute_security_policy.erb new file mode 100644 index 000000000000..8487a9deb68f --- /dev/null +++ b/templates/inspec/examples/google_compute_security_policy/google_compute_security_policy.erb @@ -0,0 +1,12 @@ +<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%> +<% security_policy = grab_attributes(pwd)['security_policy'] -%> +describe google_compute_security_policy(project: <%= gcp_project_id -%>, name: <%= doc_generation ? "'#{security_policy['name']}'" : "security_policy['name']" -%>) do + it { should exist } + its('rules.size') { should cmp 2 } + its('rules.first.priority') { should cmp <%= doc_generation ? "'#{security_policy['priority']}'" : "security_policy['priority']" -%> } + its('rules.first.match.config.src_ip_ranges.first') { should cmp <%= doc_generation ? 
"'#{security_policy['ip_range']}'" : "security_policy['ip_range']" -%> } +end + +describe google_compute_security_policy(project: <%= gcp_project_id -%>, name: 'nonexistent') do + it { should_not exist } +end \ No newline at end of file diff --git a/templates/inspec/examples/google_compute_security_policy/google_compute_security_policy_attributes.erb b/templates/inspec/examples/google_compute_security_policy/google_compute_security_policy_attributes.erb new file mode 100644 index 000000000000..ba9ba34bfce3 --- /dev/null +++ b/templates/inspec/examples/google_compute_security_policy/google_compute_security_policy_attributes.erb @@ -0,0 +1,2 @@ +gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute(pwd, 'gcp_project_id') -%>', description: 'The GCP project identifier.') +security_policy = attribute('security_policy', default: <%= JSON.pretty_generate(grab_attributes(pwd)['security_policy']) -%>, description: 'Security Policy description') \ No newline at end of file diff --git a/templates/inspec/examples/google_compute_snapshot/google_compute_snapshot.erb b/templates/inspec/examples/google_compute_snapshot/google_compute_snapshot.erb index 233f7f4e3b4a..8c74c1f1c34c 100644 --- a/templates/inspec/examples/google_compute_snapshot/google_compute_snapshot.erb +++ b/templates/inspec/examples/google_compute_snapshot/google_compute_snapshot.erb @@ -1,6 +1,6 @@ -<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%> -<% snapshot = grab_attributes['snapshot'] -%> -<% gcp_zone = "#{external_attribute('gcp_zone', doc_generation)}" %> +<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%> +<% snapshot = grab_attributes(pwd)['snapshot'] -%> +<% gcp_zone = "#{external_attribute(pwd, 'gcp_zone', doc_generation)}" %> describe google_compute_snapshot(project: <%= gcp_project_id -%>, name: <%= doc_generation ? "'#{snapshot['name']}'" : "snapshot['name']" -%>) do it { should exist } its('source_disk') { should match <%= doc_generation ? 
"'#{snapshot['disk_name']}'" : "snapshot['disk_name']" -%> } diff --git a/templates/inspec/examples/google_compute_snapshot/google_compute_snapshot_attributes.erb b/templates/inspec/examples/google_compute_snapshot/google_compute_snapshot_attributes.erb index 03a36fba93c1..4de08ded0b22 100644 --- a/templates/inspec/examples/google_compute_snapshot/google_compute_snapshot_attributes.erb +++ b/templates/inspec/examples/google_compute_snapshot/google_compute_snapshot_attributes.erb @@ -1,3 +1,3 @@ -gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute('gcp_project_id') -%>', description: 'The GCP project identifier.') -gcp_zone = attribute(:gcp_zone, default: '<%= external_attribute('gcp_zone') -%>', description: 'GCP zone name of the compute disk') -snapshot = attribute('snapshot', default: <%= JSON.pretty_generate(grab_attributes['snapshot']) -%>, description: 'Compute disk snapshot description') \ No newline at end of file +gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute(pwd, 'gcp_project_id') -%>', description: 'The GCP project identifier.') +gcp_zone = attribute(:gcp_zone, default: '<%= external_attribute(pwd, 'gcp_zone') -%>', description: 'GCP zone name of the compute disk') +snapshot = attribute('snapshot', default: <%= JSON.pretty_generate(grab_attributes(pwd)['snapshot']) -%>, description: 'Compute disk snapshot description') \ No newline at end of file diff --git a/templates/inspec/examples/google_compute_snapshot/google_compute_snapshots.erb b/templates/inspec/examples/google_compute_snapshot/google_compute_snapshots.erb index a5e7f9c982c4..63aaef15581c 100644 --- a/templates/inspec/examples/google_compute_snapshot/google_compute_snapshots.erb +++ b/templates/inspec/examples/google_compute_snapshot/google_compute_snapshots.erb @@ -1,6 +1,6 @@ -<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%> -<% snapshot = grab_attributes['snapshot'] -%> -<% gcp_zone = "#{external_attribute('gcp_zone', doc_generation)}" %> +<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%> +<% snapshot = grab_attributes(pwd)['snapshot'] -%> +<% gcp_zone = "#{external_attribute(pwd, 'gcp_zone', doc_generation)}" %> describe google_compute_snapshots(project: <%= gcp_project_id -%>) do its('count') { should be >= 1 } end diff --git a/templates/inspec/examples/google_compute_ssl_certificate/google_compute_ssl_certificate.erb b/templates/inspec/examples/google_compute_ssl_certificate/google_compute_ssl_certificate.erb index 0ab4c66bfcba..cb20905fdefd 100644 --- a/templates/inspec/examples/google_compute_ssl_certificate/google_compute_ssl_certificate.erb +++ b/templates/inspec/examples/google_compute_ssl_certificate/google_compute_ssl_certificate.erb @@ -1,5 +1,5 @@ -<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%> -<% ssl_certificate = grab_attributes['ssl_certificate'] -%> +<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%> +<% ssl_certificate = grab_attributes(pwd)['ssl_certificate'] -%> describe google_compute_ssl_certificate(project: <%= gcp_project_id -%>, name: <%= doc_generation ? "'#{ssl_certificate['name']}'" : "ssl_certificate['name']" -%>) do it { should exist } its('description') { should eq <%= doc_generation ? 
"'#{ssl_certificate['description']}'" : "ssl_certificate['description']" -%> } diff --git a/templates/inspec/examples/google_compute_ssl_certificate/google_compute_ssl_certificate_attributes.erb b/templates/inspec/examples/google_compute_ssl_certificate/google_compute_ssl_certificate_attributes.erb index 469761ce7b14..33fffe66ec75 100644 --- a/templates/inspec/examples/google_compute_ssl_certificate/google_compute_ssl_certificate_attributes.erb +++ b/templates/inspec/examples/google_compute_ssl_certificate/google_compute_ssl_certificate_attributes.erb @@ -1,2 +1,2 @@ -gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute('gcp_project_id') -%>', description: 'The GCP project identifier.') -ssl_certificate = attribute('ssl_certificate', default: <%= JSON.pretty_generate(grab_attributes['ssl_certificate']) -%>, description: 'A GCP SSL certificate definition') \ No newline at end of file +gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute(pwd, 'gcp_project_id') -%>', description: 'The GCP project identifier.') +ssl_certificate = attribute('ssl_certificate', default: <%= JSON.pretty_generate(grab_attributes(pwd)['ssl_certificate']) -%>, description: 'A GCP SSL certificate definition') \ No newline at end of file diff --git a/templates/inspec/examples/google_compute_ssl_certificate/google_compute_ssl_certificates.erb b/templates/inspec/examples/google_compute_ssl_certificate/google_compute_ssl_certificates.erb index fea38dd0962b..533f4f533fd6 100644 --- a/templates/inspec/examples/google_compute_ssl_certificate/google_compute_ssl_certificates.erb +++ b/templates/inspec/examples/google_compute_ssl_certificate/google_compute_ssl_certificates.erb @@ -1,5 +1,5 @@ -<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%> -<% ssl_certificate = grab_attributes['ssl_certificate'] -%> +<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%> +<% ssl_certificate = grab_attributes(pwd)['ssl_certificate'] -%> describe google_compute_ssl_certificates(project: <%= gcp_project_id -%>) do its('names') { should include <%= doc_generation ? "'#{ssl_certificate['name']}'" : "ssl_certificate['name']" -%> } diff --git a/templates/inspec/examples/google_compute_ssl_policy/google_compute_ssl_policies.erb b/templates/inspec/examples/google_compute_ssl_policy/google_compute_ssl_policies.erb index e37ae469ebd0..f34d533eda63 100644 --- a/templates/inspec/examples/google_compute_ssl_policy/google_compute_ssl_policies.erb +++ b/templates/inspec/examples/google_compute_ssl_policy/google_compute_ssl_policies.erb @@ -1,5 +1,5 @@ -<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%> -<% ssl_policy = grab_attributes['ssl_policy'] -%> +<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%> +<% ssl_policy = grab_attributes(pwd)['ssl_policy'] -%> describe google_compute_ssl_policies(project: <%= gcp_project_id -%>) do it { should exist } its('names') { should include <%= doc_generation ? 
"'#{ssl_policy['name']}'" : "ssl_policy['name']" -%> } diff --git a/templates/inspec/examples/google_compute_ssl_policy/google_compute_ssl_policy.erb b/templates/inspec/examples/google_compute_ssl_policy/google_compute_ssl_policy.erb index 1caa810b5c53..ccc5f477add0 100644 --- a/templates/inspec/examples/google_compute_ssl_policy/google_compute_ssl_policy.erb +++ b/templates/inspec/examples/google_compute_ssl_policy/google_compute_ssl_policy.erb @@ -1,5 +1,5 @@ -<% ssl_policy = grab_attributes['ssl_policy'] -%> -describe google_compute_ssl_policy(project: <%= "#{external_attribute('gcp_project_id', doc_generation)}" -%>, name: <%= doc_generation ? "'#{ssl_policy['name']}'" : "ssl_policy['name']" -%>) do +<% ssl_policy = grab_attributes(pwd)['ssl_policy'] -%> +describe google_compute_ssl_policy(project: <%= "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%>, name: <%= doc_generation ? "'#{ssl_policy['name']}'" : "ssl_policy['name']" -%>) do it { should exist } its('min_tls_version') { should eq <%= doc_generation ? "'#{ssl_policy['min_tls_version']}'" : "ssl_policy['min_tls_version']" -%> } its('profile') { should eq <%= doc_generation ? "'#{ssl_policy['profile']}'" : "ssl_policy['profile']" -%> } @@ -7,6 +7,6 @@ describe google_compute_ssl_policy(project: <%= "#{external_attribute('gcp_proje its('custom_features') { should include <%= doc_generation ? "'#{ssl_policy['custom_feature2']}'" : "ssl_policy['custom_feature2']" -%> } end -describe google_compute_ssl_policy(project: <%= "#{external_attribute('gcp_project_id', doc_generation)}" -%>, name: 'nonexistent') do +describe google_compute_ssl_policy(project: <%= "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%>, name: 'nonexistent') do it { should_not exist } end \ No newline at end of file diff --git a/templates/inspec/examples/google_compute_ssl_policy/google_compute_ssl_policy_attributes.erb b/templates/inspec/examples/google_compute_ssl_policy/google_compute_ssl_policy_attributes.erb index 8d9d8728df95..c8766b09bae3 100644 --- a/templates/inspec/examples/google_compute_ssl_policy/google_compute_ssl_policy_attributes.erb +++ b/templates/inspec/examples/google_compute_ssl_policy/google_compute_ssl_policy_attributes.erb @@ -1,2 +1,2 @@ -gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute('gcp_project_id') -%>', description: 'The GCP project identifier.') -ssl_policy = attribute('ssl_policy', default: <%= JSON.pretty_generate(grab_attributes['ssl_policy']) -%>) \ No newline at end of file +gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute(pwd, 'gcp_project_id') -%>', description: 'The GCP project identifier.') +ssl_policy = attribute('ssl_policy', default: <%= JSON.pretty_generate(grab_attributes(pwd)['ssl_policy']) -%>) \ No newline at end of file diff --git a/templates/inspec/examples/google_compute_subnetwork/google_compute_subnetwork.erb b/templates/inspec/examples/google_compute_subnetwork/google_compute_subnetwork.erb index b2de1f6a0924..849ae695ebb6 100644 --- a/templates/inspec/examples/google_compute_subnetwork/google_compute_subnetwork.erb +++ b/templates/inspec/examples/google_compute_subnetwork/google_compute_subnetwork.erb @@ -1,6 +1,6 @@ -<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%> -<% gcp_location = "#{external_attribute('gcp_location', doc_generation)}" -%> -<% subnetwork = grab_attributes['subnetwork'] -%> +<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%> +<% 
gcp_location = "#{external_attribute(pwd, 'gcp_location', doc_generation)}" -%>
+<% subnetwork = grab_attributes(pwd)['subnetwork'] -%>
 describe google_compute_subnetwork(project: <%= gcp_project_id -%>, region: <%= gcp_location -%>, name: <%= doc_generation ? "'#{subnetwork['name']}'" : "subnetwork['name']" -%>) do
   it { should exist }
   its('ip_cidr_range') { should eq <%= doc_generation ? "'#{subnetwork['ip_cidr_range']}'" : "subnetwork['ip_cidr_range']" -%> }
diff --git a/templates/inspec/examples/google_compute_subnetwork/google_compute_subnetwork_attributes.erb b/templates/inspec/examples/google_compute_subnetwork/google_compute_subnetwork_attributes.erb
index 0ab67d9c7a01..dd4489bdc090 100644
--- a/templates/inspec/examples/google_compute_subnetwork/google_compute_subnetwork_attributes.erb
+++ b/templates/inspec/examples/google_compute_subnetwork/google_compute_subnetwork_attributes.erb
@@ -1,3 +1,3 @@
-gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute('gcp_project_id') -%>', description: 'The GCP project identifier.')
-gcp_location = attribute(:gcp_location, default: '<%= external_attribute('gcp_location') -%>', description: 'The GCP project region.')
-subnetwork = attribute('subnetwork', default: <%= JSON.pretty_generate(grab_attributes['subnetwork']) -%>, description: 'Compute subnetwork description')
\ No newline at end of file
+gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute(pwd, 'gcp_project_id') -%>', description: 'The GCP project identifier.')
+gcp_location = attribute(:gcp_location, default: '<%= external_attribute(pwd, 'gcp_location') -%>', description: 'The GCP project region.')
+subnetwork = attribute('subnetwork', default: <%= JSON.pretty_generate(grab_attributes(pwd)['subnetwork']) -%>, description: 'Compute subnetwork description')
\ No newline at end of file
diff --git a/templates/inspec/examples/google_compute_subnetwork/google_compute_subnetworks.erb b/templates/inspec/examples/google_compute_subnetwork/google_compute_subnetworks.erb
index 82adb03df3d9..5b53106795cb 100644
--- a/templates/inspec/examples/google_compute_subnetwork/google_compute_subnetworks.erb
+++ b/templates/inspec/examples/google_compute_subnetwork/google_compute_subnetworks.erb
@@ -1,6 +1,6 @@
-<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%>
-<% gcp_location = "#{external_attribute('gcp_location', doc_generation)}" -%>
-<% subnetwork = grab_attributes['subnetwork'] -%>
+<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%>
+<% gcp_location = "#{external_attribute(pwd, 'gcp_location', doc_generation)}" -%>
+<% subnetwork = grab_attributes(pwd)['subnetwork'] -%>
 describe google_compute_subnetworks(project: <%= gcp_project_id -%>, region: <%= gcp_location -%>) do
   its('ip_cidr_ranges') { should include <%= doc_generation ? "'#{subnetwork['ip_cidr_range']}'" : "subnetwork['ip_cidr_range']" -%> }
   its('subnetwork_names') { should include <%= doc_generation ? "'#{subnetwork['name']}'" : "subnetwork['name']" -%> }
diff --git a/templates/inspec/examples/google_compute_target_http_proxy/google_compute_target_http_proxies.erb b/templates/inspec/examples/google_compute_target_http_proxy/google_compute_target_http_proxies.erb
index c9827e42d975..cf7393d3c47f 100644
--- a/templates/inspec/examples/google_compute_target_http_proxy/google_compute_target_http_proxies.erb
+++ b/templates/inspec/examples/google_compute_target_http_proxy/google_compute_target_http_proxies.erb
@@ -1,5 +1,5 @@
-<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%>
-<% http_proxy = grab_attributes['http_proxy'] -%>
+<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%>
+<% http_proxy = grab_attributes(pwd)['http_proxy'] -%>
 describe google_compute_target_http_proxies(project: <%= doc_generation ? "#{gcp_project_id}" : "gcp_project_id" -%>) do
   its('names') { should include <%= doc_generation ? "'#{http_proxy['name']}'" : "http_proxy['name']" -%> }
   its('descriptions') { should include <%= doc_generation ? "'#{http_proxy['description']}'" : "http_proxy['description']" -%> }
diff --git a/templates/inspec/examples/google_compute_target_http_proxy/google_compute_target_http_proxy.erb b/templates/inspec/examples/google_compute_target_http_proxy/google_compute_target_http_proxy.erb
index dd58cf30094e..672e331eab85 100644
--- a/templates/inspec/examples/google_compute_target_http_proxy/google_compute_target_http_proxy.erb
+++ b/templates/inspec/examples/google_compute_target_http_proxy/google_compute_target_http_proxy.erb
@@ -1,9 +1,9 @@
-<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%>
-<% http_proxy = grab_attributes['http_proxy'] -%>
+<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%>
+<% http_proxy = grab_attributes(pwd)['http_proxy'] -%>
 describe google_compute_target_http_proxy(project: <%= doc_generation ? "#{gcp_project_id}" : "gcp_project_id" -%>, name: <%= doc_generation ? "'#{http_proxy['name']}'" : "http_proxy['name']" -%>) do
   it { should exist }
   its('description') { should eq <%= doc_generation ? "'#{http_proxy['description']}'" : "http_proxy['description']" -%> }
-  its('url_map') { should match /\/<%= "#{grab_attributes['url_map']['name']}" -%>$/ }
+  its('url_map') { should match /\/<%= "#{grab_attributes(pwd)['url_map']['name']}" -%>$/ }
 end
 
 describe google_compute_target_http_proxy(project: <%= doc_generation ? "#{gcp_project_id}" : "gcp_project_id" -%>, name: 'nonexistent') do
diff --git a/templates/inspec/examples/google_compute_target_http_proxy/google_compute_target_http_proxy_attributes.erb b/templates/inspec/examples/google_compute_target_http_proxy/google_compute_target_http_proxy_attributes.erb
index 0d67e44aad7e..c22a4ccdc5a9 100644
--- a/templates/inspec/examples/google_compute_target_http_proxy/google_compute_target_http_proxy_attributes.erb
+++ b/templates/inspec/examples/google_compute_target_http_proxy/google_compute_target_http_proxy_attributes.erb
@@ -1,2 +1,2 @@
-gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute('gcp_project_id') -%>', description: 'The GCP project identifier.')
-http_proxy = attribute('http_proxy', default: <%= JSON.pretty_generate(grab_attributes['http_proxy']) -%>, description: 'Compute HTTP proxy definition')
\ No newline at end of file
+gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute(pwd, 'gcp_project_id') -%>', description: 'The GCP project identifier.')
+http_proxy = attribute('http_proxy', default: <%= JSON.pretty_generate(grab_attributes(pwd)['http_proxy']) -%>, description: 'Compute HTTP proxy definition')
\ No newline at end of file
diff --git a/templates/inspec/examples/google_compute_target_https_proxy/google_compute_target_https_proxies.erb b/templates/inspec/examples/google_compute_target_https_proxy/google_compute_target_https_proxies.erb
index 9725d667a03a..c0fd2c13db94 100644
--- a/templates/inspec/examples/google_compute_target_https_proxy/google_compute_target_https_proxies.erb
+++ b/templates/inspec/examples/google_compute_target_https_proxy/google_compute_target_https_proxies.erb
@@ -1,5 +1,5 @@
-<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%>
-<% https_proxy = grab_attributes['https_proxy'] -%>
+<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%>
+<% https_proxy = grab_attributes(pwd)['https_proxy'] -%>
 describe google_compute_target_https_proxies(project: <%= doc_generation ? "#{gcp_project_id}" : "gcp_project_id" -%>) do
   its('names') { should include <%= doc_generation ? "'#{https_proxy['name']}'" : "https_proxy['name']" -%> }
   its('descriptions') { should include <%= doc_generation ? "'#{https_proxy['description']}'" : "https_proxy['description']" -%> }
diff --git a/templates/inspec/examples/google_compute_target_https_proxy/google_compute_target_https_proxy.erb b/templates/inspec/examples/google_compute_target_https_proxy/google_compute_target_https_proxy.erb
index 1c15594fd533..4d4314ac1a19 100644
--- a/templates/inspec/examples/google_compute_target_https_proxy/google_compute_target_https_proxy.erb
+++ b/templates/inspec/examples/google_compute_target_https_proxy/google_compute_target_https_proxy.erb
@@ -1,8 +1,8 @@
-<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%>
-<% https_proxy = grab_attributes['https_proxy'] -%>
+<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%>
+<% https_proxy = grab_attributes(pwd)['https_proxy'] -%>
 describe google_compute_target_https_proxy(project: <%= gcp_project_id -%>, name: <%= doc_generation ? "'#{https_proxy['name']}'" : "https_proxy['name']" -%>) do
   it { should exist }
-  its('url_map') { should match /\/<%= "#{grab_attributes['url_map']['name']}" -%>$/ }
+  its('url_map') { should match /\/<%= "#{grab_attributes(pwd)['url_map']['name']}" -%>$/ }
   its('description') { should eq <%= doc_generation ? "'#{https_proxy['description']}'" : "https_proxy['description']" -%> }
 end
diff --git a/templates/inspec/examples/google_compute_target_https_proxy/google_compute_target_https_proxy_attributes.erb b/templates/inspec/examples/google_compute_target_https_proxy/google_compute_target_https_proxy_attributes.erb
index db9531e0f747..6456f0f0171f 100644
--- a/templates/inspec/examples/google_compute_target_https_proxy/google_compute_target_https_proxy_attributes.erb
+++ b/templates/inspec/examples/google_compute_target_https_proxy/google_compute_target_https_proxy_attributes.erb
@@ -1,2 +1,2 @@
-gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute('gcp_project_id') -%>', description: 'The GCP project identifier.')
-https_proxy = attribute('https_proxy', default: <%= JSON.pretty_generate(grab_attributes['https_proxy']) -%>, description: 'Compute HTTPS proxy definition')
\ No newline at end of file
+gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute(pwd, 'gcp_project_id') -%>', description: 'The GCP project identifier.')
+https_proxy = attribute('https_proxy', default: <%= JSON.pretty_generate(grab_attributes(pwd)['https_proxy']) -%>, description: 'Compute HTTPS proxy definition')
\ No newline at end of file
diff --git a/templates/inspec/examples/google_compute_target_pool/google_compute_target_pool.erb b/templates/inspec/examples/google_compute_target_pool/google_compute_target_pool.erb
index 9a1de14a8043..bd45ee4c2deb 100644
--- a/templates/inspec/examples/google_compute_target_pool/google_compute_target_pool.erb
+++ b/templates/inspec/examples/google_compute_target_pool/google_compute_target_pool.erb
@@ -1,8 +1,8 @@
-<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%>
-<% gcp_ext_vm_name = "#{external_attribute('gcp_ext_vm_name', doc_generation)}" -%>
-<% gcp_location = "#{external_attribute('gcp_location', doc_generation)}" -%>
-<% gcp_zone = "#{external_attribute('gcp_zone', doc_generation)}" -%>
-<% target_pool = grab_attributes['target_pool'] -%>
+<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%>
+<% gcp_ext_vm_name = "#{external_attribute(pwd, 'gcp_ext_vm_name', doc_generation)}" -%>
+<% gcp_location = "#{external_attribute(pwd, 'gcp_location', doc_generation)}" -%>
+<% gcp_zone = "#{external_attribute(pwd, 'gcp_zone', doc_generation)}" -%>
+<% target_pool = grab_attributes(pwd)['target_pool'] -%>
 describe google_compute_target_pool(project: <%= gcp_project_id -%>, region: <%= gcp_location -%>, name: <%= doc_generation ? "'#{target_pool['name']}'" : "target_pool['name']" -%>) do
   it { should exist }
   its('session_affinity') { should eq <%= doc_generation ? "'#{target_pool['session_affinity']}'" : "target_pool['session_affinity']" -%> }
diff --git a/templates/inspec/examples/google_compute_target_pool/google_compute_target_pool_attributes.erb b/templates/inspec/examples/google_compute_target_pool/google_compute_target_pool_attributes.erb
index d0a8f64fafd2..0a5c482b570e 100644
--- a/templates/inspec/examples/google_compute_target_pool/google_compute_target_pool_attributes.erb
+++ b/templates/inspec/examples/google_compute_target_pool/google_compute_target_pool_attributes.erb
@@ -1,5 +1,5 @@
-gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute('gcp_project_id') -%>', description: 'The GCP project identifier.')
-gcp_location = attribute(:gcp_location, default: '<%= external_attribute('gcp_location') -%>', description: 'The GCP project region.')
-gcp_ext_vm_name = attribute(:gcp_ext_vm_name, default: '<%= external_attribute('gcp_ext_vm_name') -%>', description: 'The name of a VM instance.')
-target_pool = attribute('target_pool', default: <%= JSON.pretty_generate(grab_attributes['target_pool']) -%>, description: 'Target pool definition')
-gcp_zone = attribute(:gcp_zone, default: '<%= external_attribute('gcp_zone') -%>', description: 'The GCP zone.')
\ No newline at end of file
+gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute(pwd, 'gcp_project_id') -%>', description: 'The GCP project identifier.')
+gcp_location = attribute(:gcp_location, default: '<%= external_attribute(pwd, 'gcp_location') -%>', description: 'The GCP project region.')
+gcp_ext_vm_name = attribute(:gcp_ext_vm_name, default: '<%= external_attribute(pwd, 'gcp_ext_vm_name') -%>', description: 'The name of a VM instance.')
+target_pool = attribute('target_pool', default: <%= JSON.pretty_generate(grab_attributes(pwd)['target_pool']) -%>, description: 'Target pool definition')
+gcp_zone = attribute(:gcp_zone, default: '<%= external_attribute(pwd, 'gcp_zone') -%>', description: 'The GCP zone.')
\ No newline at end of file
diff --git a/templates/inspec/examples/google_compute_target_pool/google_compute_target_pools.erb b/templates/inspec/examples/google_compute_target_pool/google_compute_target_pools.erb
index ab2f526f5bfa..c62ad3783d5c 100644
--- a/templates/inspec/examples/google_compute_target_pool/google_compute_target_pools.erb
+++ b/templates/inspec/examples/google_compute_target_pool/google_compute_target_pools.erb
@@ -1,7 +1,7 @@
-<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%>
-<% gcp_ext_vm_name = "#{external_attribute('gcp_ext_vm_name', doc_generation)}" -%>
-<% gcp_location = "#{external_attribute('gcp_location', doc_generation)}" -%>
-<% target_pool = grab_attributes['target_pool'] -%>
+<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%>
+<% gcp_ext_vm_name = "#{external_attribute(pwd, 'gcp_ext_vm_name', doc_generation)}" -%>
+<% gcp_location = "#{external_attribute(pwd, 'gcp_location', doc_generation)}" -%>
+<% target_pool = grab_attributes(pwd)['target_pool'] -%>
 describe google_compute_target_pools(project: <%= gcp_project_id -%>, region: <%= gcp_location -%>) do
   its('names') { should include <%= doc_generation ? "'#{target_pool['name']}'" : "target_pool['name']" -%> }
   its('session_affinities') { should include <%= doc_generation ? "'#{target_pool['session_affinity']}'" : "target_pool['session_affinity']" -%> }
diff --git a/templates/inspec/examples/google_compute_target_tcp_proxy/google_compute_target_tcp_proxies.erb b/templates/inspec/examples/google_compute_target_tcp_proxy/google_compute_target_tcp_proxies.erb
index 8f7611821908..43d110e4ed2b 100644
--- a/templates/inspec/examples/google_compute_target_tcp_proxy/google_compute_target_tcp_proxies.erb
+++ b/templates/inspec/examples/google_compute_target_tcp_proxy/google_compute_target_tcp_proxies.erb
@@ -1,5 +1,5 @@
-<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%>
-<% target_tcp_proxy = grab_attributes['target_tcp_proxy'] -%>
+<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%>
+<% target_tcp_proxy = grab_attributes(pwd)['target_tcp_proxy'] -%>
 describe google_compute_target_tcp_proxies(project: <%= doc_generation ? "#{gcp_project_id}" : "gcp_project_id" -%>) do
   its('names') { should include <%= doc_generation ? "'#{target_tcp_proxy['name']}'" : "target_tcp_proxy['name']" -%> }
   its('proxy_headers') { should include <%= doc_generation ? "'#{target_tcp_proxy['proxy_header']}'" : "target_tcp_proxy['proxy_header']" -%> }
diff --git a/templates/inspec/examples/google_compute_target_tcp_proxy/google_compute_target_tcp_proxy.erb b/templates/inspec/examples/google_compute_target_tcp_proxy/google_compute_target_tcp_proxy.erb
index 83559e32eeec..778255958f68 100644
--- a/templates/inspec/examples/google_compute_target_tcp_proxy/google_compute_target_tcp_proxy.erb
+++ b/templates/inspec/examples/google_compute_target_tcp_proxy/google_compute_target_tcp_proxy.erb
@@ -1,9 +1,9 @@
-<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%>
-<% target_tcp_proxy = grab_attributes['target_tcp_proxy'] -%>
+<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%>
+<% target_tcp_proxy = grab_attributes(pwd)['target_tcp_proxy'] -%>
 describe google_compute_target_tcp_proxy(project: <%= doc_generation ? "#{gcp_project_id}" : "gcp_project_id" -%>, name: <%= doc_generation ? "'#{target_tcp_proxy['name']}'" : "target_tcp_proxy['name']" -%>) do
   it { should exist }
   its('proxy_header') { should eq <%= doc_generation ? "'#{target_tcp_proxy['proxy_header']}'" : "target_tcp_proxy['proxy_header']" -%> }
-  its('service') { should match /\/<%= "#{grab_attributes['target_tcp_proxy']['tcp_backend_service_name']}" -%>$/ }
+  its('service') { should match /\/<%= "#{grab_attributes(pwd)['target_tcp_proxy']['tcp_backend_service_name']}" -%>$/ }
 end
 
 describe google_compute_target_tcp_proxy(project: <%= doc_generation ? "#{gcp_project_id}" : "gcp_project_id" -%>, name: 'nonexistent') do
diff --git a/templates/inspec/examples/google_compute_target_tcp_proxy/google_compute_target_tcp_proxy_attributes.erb b/templates/inspec/examples/google_compute_target_tcp_proxy/google_compute_target_tcp_proxy_attributes.erb
index 549aba6f34d5..c1fc11c82636 100644
--- a/templates/inspec/examples/google_compute_target_tcp_proxy/google_compute_target_tcp_proxy_attributes.erb
+++ b/templates/inspec/examples/google_compute_target_tcp_proxy/google_compute_target_tcp_proxy_attributes.erb
@@ -1,2 +1,2 @@
-gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute('gcp_project_id') -%>', description: 'The GCP project identifier.')
-target_tcp_proxy = attribute('target_tcp_proxy', default: <%= JSON.pretty_generate(grab_attributes['target_tcp_proxy']) -%>, description: 'Compute TCP proxy definition')
\ No newline at end of file
+gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute(pwd, 'gcp_project_id') -%>', description: 'The GCP project identifier.')
+target_tcp_proxy = attribute('target_tcp_proxy', default: <%= JSON.pretty_generate(grab_attributes(pwd)['target_tcp_proxy']) -%>, description: 'Compute TCP proxy definition')
\ No newline at end of file
diff --git a/templates/inspec/examples/google_compute_url_map/google_compute_url_map.erb b/templates/inspec/examples/google_compute_url_map/google_compute_url_map.erb
index aa7bd6c24723..4e2a476964e6 100644
--- a/templates/inspec/examples/google_compute_url_map/google_compute_url_map.erb
+++ b/templates/inspec/examples/google_compute_url_map/google_compute_url_map.erb
@@ -1,14 +1,14 @@
-<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%>
-<% url_map = grab_attributes['url_map'] -%>
-<% backend_service = grab_attributes['backend_service'] -%>
+<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%>
+<% url_map = grab_attributes(pwd)['url_map'] -%>
+<% backend_service = grab_attributes(pwd)['backend_service'] -%>
 describe google_compute_url_map(project: <%= gcp_project_id -%>, name: <%= doc_generation ? "'#{url_map['name']}'" : "url_map['name']" -%>) do
   it { should exist }
   its('description') { should eq <%= doc_generation ? "'#{url_map['description']}'" : "url_map['description']" -%> }
-  its('default_service') { should match /\/<%= "#{grab_attributes['backend_service']['name']}" -%>$/ }
+  its('default_service') { should match /\/<%= "#{grab_attributes(pwd)['backend_service']['name']}" -%>$/ }
   its('host_rules.count') { should eq 1 }
   its('host_rules.first.hosts') { should include <%= doc_generation ? "'#{url_map['host_rule_host']}'" : "url_map['host_rule_host']" -%> }
   its('path_matchers.count') { should eq 1 }
-  its('path_matchers.first.default_service') { should match /\/<%= "#{grab_attributes['backend_service']['name']}" -%>$/ }
+  its('path_matchers.first.default_service') { should match /\/<%= "#{grab_attributes(pwd)['backend_service']['name']}" -%>$/ }
   its('tests.count') { should eq 1 }
   its('tests.first.host') { should eq <%= doc_generation ? "'#{url_map['test_host']}'" : "url_map['test_host']" -%> }
   its('tests.first.path') { should eq <%= doc_generation ? "'#{url_map['test_path']}'" : "url_map['test_path']" -%> }
diff --git a/templates/inspec/examples/google_compute_url_map/google_compute_url_map_attributes.erb b/templates/inspec/examples/google_compute_url_map/google_compute_url_map_attributes.erb
index 7af8bba0b9d3..eff3291ab96e 100644
--- a/templates/inspec/examples/google_compute_url_map/google_compute_url_map_attributes.erb
+++ b/templates/inspec/examples/google_compute_url_map/google_compute_url_map_attributes.erb
@@ -1,3 +1,3 @@
-gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute('gcp_project_id') -%>', description: 'The GCP project identifier.')
-url_map = attribute('url_map', default: <%= JSON.pretty_generate(grab_attributes['url_map']) -%>, description: 'Compute URL map definition')
-backend_service = attribute('backend_service', default: <%= JSON.pretty_generate(grab_attributes['backend_service']) -%>, description: 'Backend service definition')
\ No newline at end of file
+gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute(pwd, 'gcp_project_id') -%>', description: 'The GCP project identifier.')
+url_map = attribute('url_map', default: <%= JSON.pretty_generate(grab_attributes(pwd)['url_map']) -%>, description: 'Compute URL map definition')
+backend_service = attribute('backend_service', default: <%= JSON.pretty_generate(grab_attributes(pwd)['backend_service']) -%>, description: 'Backend service definition')
\ No newline at end of file
diff --git a/templates/inspec/examples/google_compute_url_map/google_compute_url_maps.erb b/templates/inspec/examples/google_compute_url_map/google_compute_url_maps.erb
index c62b0e148c0a..1bbae986c8bc 100644
--- a/templates/inspec/examples/google_compute_url_map/google_compute_url_maps.erb
+++ b/templates/inspec/examples/google_compute_url_map/google_compute_url_maps.erb
@@ -1,6 +1,6 @@
-<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%>
-<% url_map = grab_attributes['url_map'] -%>
-<% backend_service = grab_attributes['backend_service'] -%>
+<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%>
+<% url_map = grab_attributes(pwd)['url_map'] -%>
+<% backend_service = grab_attributes(pwd)['backend_service'] -%>
 describe google_compute_url_maps(project: <%= gcp_project_id -%>) do
   its('names') { should include <%= doc_generation ? "'#{url_map['name']}'" : "url_map['name']" -%> }
 end
\ No newline at end of file
diff --git a/templates/inspec/examples/google_compute_vpn_tunnel/google_compute_vpn_tunnel.erb b/templates/inspec/examples/google_compute_vpn_tunnel/google_compute_vpn_tunnel.erb
index c143581ba957..d19d23bbba8d 100644
--- a/templates/inspec/examples/google_compute_vpn_tunnel/google_compute_vpn_tunnel.erb
+++ b/templates/inspec/examples/google_compute_vpn_tunnel/google_compute_vpn_tunnel.erb
@@ -1,6 +1,6 @@
-<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%>
-<% gcp_location = "#{external_attribute('gcp_location', doc_generation)}" -%>
-<% vpn_tunnel = grab_attributes['vpn_tunnel'] -%>
+<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%>
+<% gcp_location = "#{external_attribute(pwd, 'gcp_location', doc_generation)}" -%>
+<% vpn_tunnel = grab_attributes(pwd)['vpn_tunnel'] -%>
 describe google_compute_vpn_tunnel(project: <%= gcp_project_id -%>, region: <%= gcp_location -%>, name: <%= doc_generation ? "'#{vpn_tunnel['name']}'" : "vpn_tunnel['name']" -%>) do
   it { should exist }
   its('peer_ip') { should eq <%= doc_generation ? "'#{vpn_tunnel['peer_ip']}'" : "vpn_tunnel['peer_ip']" -%> }
diff --git a/templates/inspec/examples/google_compute_vpn_tunnel/google_compute_vpn_tunnel_attributes.erb b/templates/inspec/examples/google_compute_vpn_tunnel/google_compute_vpn_tunnel_attributes.erb
index e1bb650b7f1e..cb78252b3ae4 100644
--- a/templates/inspec/examples/google_compute_vpn_tunnel/google_compute_vpn_tunnel_attributes.erb
+++ b/templates/inspec/examples/google_compute_vpn_tunnel/google_compute_vpn_tunnel_attributes.erb
@@ -1,3 +1,3 @@
-gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute('gcp_project_id') -%>', description: 'The GCP project identifier.')
-gcp_location = attribute(:gcp_location, default: '<%= external_attribute('gcp_location') -%>', description: 'The GCP project region.')
-vpn_tunnel = attribute('vpn_tunnel', default: <%= JSON.pretty_generate(grab_attributes['vpn_tunnel']) -%>, description: 'Compute VPN tunnel description')
\ No newline at end of file
+gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute(pwd, 'gcp_project_id') -%>', description: 'The GCP project identifier.')
+gcp_location = attribute(:gcp_location, default: '<%= external_attribute(pwd, 'gcp_location') -%>', description: 'The GCP project region.')
+vpn_tunnel = attribute('vpn_tunnel', default: <%= JSON.pretty_generate(grab_attributes(pwd)['vpn_tunnel']) -%>, description: 'Compute VPN tunnel description')
\ No newline at end of file
diff --git a/templates/inspec/examples/google_compute_vpn_tunnel/google_compute_vpn_tunnels.erb b/templates/inspec/examples/google_compute_vpn_tunnel/google_compute_vpn_tunnels.erb
index 51b4759aaccb..a7fced882e4f 100644
--- a/templates/inspec/examples/google_compute_vpn_tunnel/google_compute_vpn_tunnels.erb
+++ b/templates/inspec/examples/google_compute_vpn_tunnel/google_compute_vpn_tunnels.erb
@@ -1,6 +1,6 @@
-<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%>
-<% gcp_location = "#{external_attribute('gcp_location', doc_generation)}" -%>
-<% vpn_tunnel = grab_attributes['vpn_tunnel'] -%>
+<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%>
+<% gcp_location = "#{external_attribute(pwd, 'gcp_location', doc_generation)}" -%>
+<% vpn_tunnel = grab_attributes(pwd)['vpn_tunnel'] -%>
 describe google_compute_vpn_tunnels(project: <%= gcp_project_id -%>, region: <%= gcp_location -%>) do
   its('vpn_tunnel_names') { should include <%= doc_generation ? "'#{vpn_tunnel['name']}'" : "vpn_tunnel['name']" -%> }
   its('peer_ips') { should include <%= doc_generation ? "'#{vpn_tunnel['peer_ip']}'" : "vpn_tunnel['peer_ip']" -%> }
diff --git a/templates/inspec/examples/google_compute_zone/google_compute_zone.erb b/templates/inspec/examples/google_compute_zone/google_compute_zone.erb
index d9c1b73625d7..e28f451d9145 100644
--- a/templates/inspec/examples/google_compute_zone/google_compute_zone.erb
+++ b/templates/inspec/examples/google_compute_zone/google_compute_zone.erb
@@ -1,4 +1,4 @@
-<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%>
+<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%>
 describe google_compute_zone(project: <%= gcp_project_id -%>, name: "us-central1-a") do
   it { should exist }
   it { should be_up }
diff --git a/templates/inspec/examples/google_compute_zone/google_compute_zone_attributes.erb b/templates/inspec/examples/google_compute_zone/google_compute_zone_attributes.erb
index a2863dfa3703..9e434667ef77 100644
--- a/templates/inspec/examples/google_compute_zone/google_compute_zone_attributes.erb
+++ b/templates/inspec/examples/google_compute_zone/google_compute_zone_attributes.erb
@@ -1 +1 @@
-gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute('gcp_project_id') -%>', description: 'The GCP project identifier.')
\ No newline at end of file
+gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute(pwd, 'gcp_project_id') -%>', description: 'The GCP project identifier.')
\ No newline at end of file
diff --git a/templates/inspec/examples/google_compute_zone/google_compute_zones.erb b/templates/inspec/examples/google_compute_zone/google_compute_zones.erb
index 3b0b0b5695bb..9c3187b81c81 100644
--- a/templates/inspec/examples/google_compute_zone/google_compute_zones.erb
+++ b/templates/inspec/examples/google_compute_zone/google_compute_zones.erb
@@ -1,4 +1,4 @@
-<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%>
+<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%>
 google_compute_zones(project: <%= gcp_project_id -%>).zone_names.each do |zone_name|
   describe google_compute_zone(project: <%= gcp_project_id -%>, name: zone_name) do
     it { should exist }
diff --git a/templates/inspec/examples/google_container_cluster/google_container_cluster.erb b/templates/inspec/examples/google_container_cluster/google_container_cluster.erb
index bbd98deaed4a..5464efe90d4d 100644
--- a/templates/inspec/examples/google_container_cluster/google_container_cluster.erb
+++ b/templates/inspec/examples/google_container_cluster/google_container_cluster.erb
@@ -1,10 +1,10 @@
-<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%>
-<% gcp_kube_cluster_name = "#{external_attribute('gcp_kube_cluster_name', doc_generation)}" -%>
-<% gcp_kube_cluster_zone = "#{external_attribute('gcp_kube_cluster_zone', doc_generation)}" -%>
-<% gcp_kube_cluster_size = "#{external_attribute('gcp_kube_cluster_size', doc_generation)}" -%>
-<% gcp_kube_cluster_zone_extra1 = "#{external_attribute('gcp_kube_cluster_zone_extra1', doc_generation)}" -%>
-<% gcp_kube_cluster_zone_extra2 = "#{external_attribute('gcp_kube_cluster_zone_extra2', doc_generation)}" -%>
-<% gcp_kube_cluster_master_user = "#{external_attribute('gcp_kube_cluster_master_user', doc_generation)}" -%>
+<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%>
+<% gcp_kube_cluster_name = "#{external_attribute(pwd, 'gcp_kube_cluster_name', doc_generation)}" -%>
+<% gcp_kube_cluster_zone = "#{external_attribute(pwd, 'gcp_kube_cluster_zone', doc_generation)}" -%>
+<% gcp_kube_cluster_size = "#{external_attribute(pwd, 'gcp_kube_cluster_size', doc_generation)}" -%>
+<% gcp_kube_cluster_zone_extra1 = "#{external_attribute(pwd, 'gcp_kube_cluster_zone_extra1', doc_generation)}" -%>
+<% gcp_kube_cluster_zone_extra2 = "#{external_attribute(pwd, 'gcp_kube_cluster_zone_extra2', doc_generation)}" -%>
+<% gcp_kube_cluster_master_user = "#{external_attribute(pwd, 'gcp_kube_cluster_master_user', doc_generation)}" -%>
 describe google_container_cluster(project: <%= gcp_project_id -%>, location: <%= gcp_kube_cluster_zone -%>, name: <%= gcp_kube_cluster_name -%>) do
   it { should exist }
   its('locations.sort'){ should cmp [ <%= gcp_kube_cluster_zone -%>, <%= gcp_kube_cluster_zone_extra1 -%>, <%= gcp_kube_cluster_zone_extra2 -%> ].sort }
@@ -14,4 +14,9 @@ end
 
 describe google_container_cluster(project: <%= gcp_project_id -%>, location: <%= gcp_kube_cluster_zone -%>, name: 'nonexistent') do
   it { should_not exist }
+end
+
+describe google_container_cluster(project: <%= gcp_project_id -%>, location: <%= gcp_kube_cluster_zone -%>, name: <%= gcp_kube_cluster_name -%>, beta: true) do
+  it { should exist }
+  its('release_channel.channel') { should cmp "RAPID" }
 end
\ No newline at end of file
diff --git a/templates/inspec/examples/google_container_cluster/google_container_cluster_attributes.erb b/templates/inspec/examples/google_container_cluster/google_container_cluster_attributes.erb
index c0a43756537c..2305faa0df95 100644
--- a/templates/inspec/examples/google_container_cluster/google_container_cluster_attributes.erb
+++ b/templates/inspec/examples/google_container_cluster/google_container_cluster_attributes.erb
@@ -1,7 +1,7 @@
-gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute('gcp_project_id') -%>', description: 'The GCP project identifier.')
-gcp_kube_cluster_name = attribute(:gcp_kube_cluster_name, default: '<%= external_attribute('gcp_kube_cluster_name') -%>', description: 'GCP container cluster name')
-gcp_kube_cluster_zone = attribute(:gcp_kube_cluster_zone, default: '<%= external_attribute('gcp_kube_cluster_zone') -%>', description: 'GCP container cluster zone')
-gcp_kube_cluster_size = attribute(:gcp_kube_cluster_size, default: '<%= external_attribute('gcp_kube_cluster_size') -%>', description: 'GCP container cluster size')
-gcp_kube_cluster_zone_extra1 = attribute(:gcp_kube_cluster_zone_extra1, default: '<%= external_attribute('gcp_kube_cluster_zone_extra1') -%>', description: 'First extra zone for the cluster')
-gcp_kube_cluster_zone_extra2 = attribute(:gcp_kube_cluster_zone_extra2, default: '<%= external_attribute('gcp_kube_cluster_zone_extra2') -%>', description: 'Second extra zone for the cluster')
-gcp_kube_cluster_master_user = attribute(:gcp_kube_cluster_master_user, default: '<%= external_attribute('gcp_kube_cluster_master_user') -%>', description: 'GCP container cluster admin username')
\ No newline at end of file
+gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute(pwd, 'gcp_project_id') -%>', description: 'The GCP project identifier.')
+gcp_kube_cluster_name = attribute(:gcp_kube_cluster_name, default: '<%= external_attribute(pwd, 'gcp_kube_cluster_name') -%>', description: 'GCP container cluster name')
+gcp_kube_cluster_zone = attribute(:gcp_kube_cluster_zone, default: '<%= external_attribute(pwd, 'gcp_kube_cluster_zone') -%>', description: 'GCP container cluster zone')
+gcp_kube_cluster_size = attribute(:gcp_kube_cluster_size, default: '<%= external_attribute(pwd, 'gcp_kube_cluster_size') -%>', description: 'GCP container cluster size')
+gcp_kube_cluster_zone_extra1 = attribute(:gcp_kube_cluster_zone_extra1, default: '<%= external_attribute(pwd, 'gcp_kube_cluster_zone_extra1') -%>', description: 'First extra zone for the cluster')
+gcp_kube_cluster_zone_extra2 = attribute(:gcp_kube_cluster_zone_extra2, default: '<%= external_attribute(pwd, 'gcp_kube_cluster_zone_extra2') -%>', description: 'Second extra zone for the cluster')
+gcp_kube_cluster_master_user = attribute(:gcp_kube_cluster_master_user, default: '<%= external_attribute(pwd, 'gcp_kube_cluster_master_user') -%>', description: 'GCP container cluster admin username')
\ No newline at end of file
diff --git a/templates/inspec/examples/google_container_cluster/google_container_clusters.erb b/templates/inspec/examples/google_container_cluster/google_container_clusters.erb
index 0015fc63a5e9..0bdeb62af514 100644
--- a/templates/inspec/examples/google_container_cluster/google_container_clusters.erb
+++ b/templates/inspec/examples/google_container_cluster/google_container_clusters.erb
@@ -1,6 +1,6 @@
-<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%>
-<% gcp_kube_cluster_name = "#{external_attribute('gcp_kube_cluster_name', doc_generation)}" -%>
-<% gcp_kube_cluster_zone = "#{external_attribute('gcp_kube_cluster_zone', doc_generation)}" -%>
+<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%>
+<% gcp_kube_cluster_name = "#{external_attribute(pwd, 'gcp_kube_cluster_name', doc_generation)}" -%>
+<% gcp_kube_cluster_zone = "#{external_attribute(pwd, 'gcp_kube_cluster_zone', doc_generation)}" -%>
 describe google_container_clusters(project: <%= gcp_project_id -%>, location: <%= gcp_kube_cluster_zone -%>) do
   its('cluster_names') { should include <%= gcp_kube_cluster_name -%> }
 end
\ No newline at end of file
diff --git a/templates/inspec/examples/google_container_node_pool/google_container_node_pool.erb b/templates/inspec/examples/google_container_node_pool/google_container_node_pool.erb
index 1696c4e039ee..92888408e356 100644
--- a/templates/inspec/examples/google_container_node_pool/google_container_node_pool.erb
+++ b/templates/inspec/examples/google_container_node_pool/google_container_node_pool.erb
@@ -1,7 +1,7 @@
-<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%>
-<% gcp_kube_cluster_zone = "#{external_attribute('gcp_kube_cluster_zone', doc_generation)}" -%>
-<% gcp_kube_cluster_name = "#{external_attribute('gcp_kube_cluster_name', doc_generation)}" -%>
-<% regional_node_pool = grab_attributes['regional_node_pool'] -%>
+<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%>
+<% gcp_kube_cluster_zone = "#{external_attribute(pwd, 'gcp_kube_cluster_zone', doc_generation)}" -%>
+<% gcp_kube_cluster_name = "#{external_attribute(pwd, 'gcp_kube_cluster_name', doc_generation)}" -%>
+<% regional_node_pool = grab_attributes(pwd)['regional_node_pool'] -%>
 describe google_container_node_pool(project: <%= gcp_project_id -%>, location: <%= gcp_kube_cluster_zone -%>, cluster_name: <%= gcp_kube_cluster_name -%>, nodepool_name: <%= doc_generation ? "'#{regional_node_pool['name']}'" : "regional_node_pool['name']" -%>) do
   it { should exist }
   its('initial_node_count') { should eq <%= doc_generation ? "'#{regional_node_pool['initial_node_count']}'" : "regional_node_pool['initial_node_count']" -%>}
diff --git a/templates/inspec/examples/google_container_node_pool/google_container_node_pool_attributes.erb b/templates/inspec/examples/google_container_node_pool/google_container_node_pool_attributes.erb
index c7f6605ecf02..9e87d0275330 100644
--- a/templates/inspec/examples/google_container_node_pool/google_container_node_pool_attributes.erb
+++ b/templates/inspec/examples/google_container_node_pool/google_container_node_pool_attributes.erb
@@ -1,4 +1,4 @@
-gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute('gcp_project_id') -%>', description: 'The GCP project identifier.')
-gcp_kube_cluster_zone = attribute(:gcp_kube_cluster_zone, default: '<%= external_attribute('gcp_kube_cluster_zone') -%>', description: 'The zone that the kube cluster resides in.')
-gcp_kube_cluster_name = attribute(:gcp_kube_cluster_name, default: '<%= external_attribute('gcp_kube_cluster_name') -%>', description: 'The parent container clusters name.')
-regional_node_pool = attribute('regional_node_pool', default: <%= JSON.pretty_generate(grab_attributes['regional_node_pool']) -%>, description: 'Regional Node Pool definition')
\ No newline at end of file
+gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute(pwd, 'gcp_project_id') -%>', description: 'The GCP project identifier.')
+gcp_kube_cluster_zone = attribute(:gcp_kube_cluster_zone, default: '<%= external_attribute(pwd, 'gcp_kube_cluster_zone') -%>', description: 'The zone that the kube cluster resides in.')
+gcp_kube_cluster_name = attribute(:gcp_kube_cluster_name, default: '<%= external_attribute(pwd, 'gcp_kube_cluster_name') -%>', description: 'The parent container clusters name.')
+regional_node_pool = attribute('regional_node_pool', default: <%= JSON.pretty_generate(grab_attributes(pwd)['regional_node_pool']) -%>, description: 'Regional Node Pool definition')
\ No newline at end of file
diff --git a/templates/inspec/examples/google_container_node_pool/google_container_node_pools.erb b/templates/inspec/examples/google_container_node_pool/google_container_node_pools.erb
index ee69dd74f731..143676e83699 100644
--- a/templates/inspec/examples/google_container_node_pool/google_container_node_pools.erb
+++ b/templates/inspec/examples/google_container_node_pool/google_container_node_pools.erb
@@ -1,7 +1,7 @@
-<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%>
-<% gcp_kube_cluster_zone = "#{external_attribute('gcp_kube_cluster_zone', doc_generation)}" -%>
-<% gcp_kube_cluster_name = "#{external_attribute('gcp_kube_cluster_name', doc_generation)}" -%>
-<% regional_node_pool = grab_attributes['regional_node_pool'] -%>
+<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%>
+<% gcp_kube_cluster_zone = "#{external_attribute(pwd, 'gcp_kube_cluster_zone', doc_generation)}" -%>
+<% gcp_kube_cluster_name = "#{external_attribute(pwd, 'gcp_kube_cluster_name', doc_generation)}" -%>
+<% regional_node_pool = grab_attributes(pwd)['regional_node_pool'] -%>
 describe google_container_node_pools(project: <%= gcp_project_id -%>, location: <%= gcp_kube_cluster_zone -%>, cluster_name: <%= gcp_kube_cluster_name -%>) do
   its('initial_node_counts') { should include <%= doc_generation ? "'#{regional_node_pool['initial_node_count']}'" : "regional_node_pool['initial_node_count']" -%>}
 end
\ No newline at end of file
diff --git a/templates/inspec/examples/google_dataproc_cluster/google_dataproc_cluster.erb b/templates/inspec/examples/google_dataproc_cluster/google_dataproc_cluster.erb
index 2ca9b6559a6f..b0bc5f65c1b6 100644
--- a/templates/inspec/examples/google_dataproc_cluster/google_dataproc_cluster.erb
+++ b/templates/inspec/examples/google_dataproc_cluster/google_dataproc_cluster.erb
@@ -1,6 +1,6 @@
-<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%>
-<% gcp_location = "#{external_attribute('gcp_location', doc_generation)}" -%>
-<% dataproc_cluster = grab_attributes['dataproc_cluster'] -%>
+<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%>
+<% gcp_location = "#{external_attribute(pwd, 'gcp_location', doc_generation)}" -%>
+<% dataproc_cluster = grab_attributes(pwd)['dataproc_cluster'] -%>
 describe google_dataproc_cluster(project: <%= gcp_project_id -%>, region: <%= gcp_location -%>, cluster_name: <%= doc_generation ? "'#{dataproc_cluster['name']}'" : "dataproc_cluster['name']" -%>) do
   it { should exist }
   its('labels') { should include(<%= doc_generation ? "'#{dataproc_cluster['label_key']}'" : "dataproc_cluster['label_key']" -%> => <%= doc_generation ? "'#{dataproc_cluster['label_value']}'" : "dataproc_cluster['label_value']" -%>) }
diff --git a/templates/inspec/examples/google_dataproc_cluster/google_dataproc_cluster_attributes.erb b/templates/inspec/examples/google_dataproc_cluster/google_dataproc_cluster_attributes.erb
index be9eb6d44303..d264472ff326 100644
--- a/templates/inspec/examples/google_dataproc_cluster/google_dataproc_cluster_attributes.erb
+++ b/templates/inspec/examples/google_dataproc_cluster/google_dataproc_cluster_attributes.erb
@@ -1,3 +1,3 @@
-gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute('gcp_project_id') -%>', description: 'The GCP project identifier.')
-gcp_location = attribute(:gcp_location, default: '<%= external_attribute('gcp_location') -%>', description: 'The GCP project region.')
-dataproc_cluster = attribute('dataproc_cluster', default: <%= JSON.pretty_generate(grab_attributes['dataproc_cluster']) -%>, description: 'Dataproc cluster definition')
\ No newline at end of file
+gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute(pwd, 'gcp_project_id') -%>', description: 'The GCP project identifier.')
+gcp_location = attribute(:gcp_location, default: '<%= external_attribute(pwd, 'gcp_location') -%>', description: 'The GCP project region.')
+dataproc_cluster = attribute('dataproc_cluster', default: <%= JSON.pretty_generate(grab_attributes(pwd)['dataproc_cluster']) -%>, description: 'Dataproc cluster definition')
\ No newline at end of file
diff --git a/templates/inspec/examples/google_dataproc_cluster/google_dataproc_clusters.erb b/templates/inspec/examples/google_dataproc_cluster/google_dataproc_clusters.erb
index 72f6ded703dc..fd46d12cf017 100644
--- a/templates/inspec/examples/google_dataproc_cluster/google_dataproc_clusters.erb
+++ b/templates/inspec/examples/google_dataproc_cluster/google_dataproc_clusters.erb
@@ -1,6 +1,6 @@
-<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%>
-<% gcp_location = "#{external_attribute('gcp_location', doc_generation)}" -%>
-<% dataproc_cluster = grab_attributes['dataproc_cluster'] -%>
+<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%>
+<% gcp_location = "#{external_attribute(pwd, 'gcp_location', doc_generation)}" -%>
+<% dataproc_cluster = grab_attributes(pwd)['dataproc_cluster'] -%>
 describe google_dataproc_clusters(project: <%= gcp_project_id -%>, region: <%= gcp_location -%>) do
   its('count') { should be >= 1 }
   its('cluster_names') { should include <%= doc_generation ? "'#{dataproc_cluster['name']}'" : "dataproc_cluster['name']" -%> }
diff --git a/templates/inspec/examples/google_dns_managed_zone/google_dns_managed_zone.erb b/templates/inspec/examples/google_dns_managed_zone/google_dns_managed_zone.erb
index e8bfb724cc0e..fc3e0a0d34cc 100644
--- a/templates/inspec/examples/google_dns_managed_zone/google_dns_managed_zone.erb
+++ b/templates/inspec/examples/google_dns_managed_zone/google_dns_managed_zone.erb
@@ -1,6 +1,6 @@
-<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%>
-<% gcp_dns_zone_name = "#{external_attribute('gcp_dns_zone_name', doc_generation)}" -%>
-<% dns_managed_zone = grab_attributes['dns_managed_zone'] -%>
+<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%>
+<% gcp_dns_zone_name = "#{external_attribute(pwd, 'gcp_dns_zone_name', doc_generation)}" -%>
+<% dns_managed_zone = grab_attributes(pwd)['dns_managed_zone'] -%>
 describe google_dns_managed_zone(project: <%= gcp_project_id -%>, zone: <%= doc_generation ? "'#{dns_managed_zone['name']}'" : "dns_managed_zone['name']" -%>) do
   it { should exist }
   its('dns_name') { should cmp <%= gcp_dns_zone_name -%> }
diff --git a/templates/inspec/examples/google_dns_managed_zone/google_dns_managed_zone_attributes.erb b/templates/inspec/examples/google_dns_managed_zone/google_dns_managed_zone_attributes.erb
index 880ff948b058..bcb68a6122d0 100644
--- a/templates/inspec/examples/google_dns_managed_zone/google_dns_managed_zone_attributes.erb
+++ b/templates/inspec/examples/google_dns_managed_zone/google_dns_managed_zone_attributes.erb
@@ -1,3 +1,3 @@
-gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute('gcp_project_id') -%>', description: 'The GCP project identifier.')
-gcp_dns_zone_name = attribute(:gcp_dns_zone_name, default: '<%= external_attribute('gcp_dns_zone_name') -%>', description: 'The DNS name of the DNS zone.')
-dns_managed_zone = attribute('dns_managed_zone', default: <%= grab_attributes['dns_managed_zone'] -%>)
\ No newline at end of file
+gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute(pwd, 'gcp_project_id') -%>', description: 'The GCP project identifier.')
+gcp_dns_zone_name = attribute(:gcp_dns_zone_name, default: '<%= external_attribute(pwd, 'gcp_dns_zone_name') -%>', description: 'The DNS name of the DNS zone.')
+dns_managed_zone = attribute('dns_managed_zone', default: <%= grab_attributes(pwd)['dns_managed_zone'] -%>)
\ No newline at end of file
diff --git a/templates/inspec/examples/google_dns_managed_zone/google_dns_managed_zones.erb b/templates/inspec/examples/google_dns_managed_zone/google_dns_managed_zones.erb
index 81fbbaf38d5f..49413c899ee0 100644
--- a/templates/inspec/examples/google_dns_managed_zone/google_dns_managed_zones.erb
+++ b/templates/inspec/examples/google_dns_managed_zone/google_dns_managed_zones.erb
@@ -1,6 +1,6 @@
-<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%>
-<% gcp_dns_zone_name = "#{external_attribute('gcp_dns_zone_name', doc_generation)}" -%>
-<% dns_managed_zone = grab_attributes['dns_managed_zone'] -%>
+<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%>
+<% gcp_dns_zone_name = "#{external_attribute(pwd, 'gcp_dns_zone_name', doc_generation)}" -%>
+<% dns_managed_zone = grab_attributes(pwd)['dns_managed_zone'] -%>
 describe google_dns_managed_zones(project: <%= gcp_project_id -%>) do
   it { should exist }
   its('zone_names') { should include <%= doc_generation ? "'#{dns_managed_zone['name']}'" : "dns_managed_zone['name']" -%> }
diff --git a/templates/inspec/examples/google_dns_resource_record_set/google_dns_resource_record_set.erb b/templates/inspec/examples/google_dns_resource_record_set/google_dns_resource_record_set.erb
index 2860e3877e03..6684b8a4dbb7 100644
--- a/templates/inspec/examples/google_dns_resource_record_set/google_dns_resource_record_set.erb
+++ b/templates/inspec/examples/google_dns_resource_record_set/google_dns_resource_record_set.erb
@@ -1,6 +1,6 @@
-<% record_set = grab_attributes['record_set'] -%>
-<% managed_zone = grab_attributes['managed_zone'] -%>
-describe google_dns_resource_record_set(project: <%= "#{external_attribute('gcp_project_id', doc_generation)}" -%>, name: <%= doc_generation ? "'#{record_set['name']}'" : "record_set['name']" -%>, type: <%= doc_generation ? "'#{record_set['type']}'" : "record_set['type']" -%>, managed_zone: <%= doc_generation ? "'#{managed_zone['name']}'" : "managed_zone['name']" -%>) do
+<% record_set = grab_attributes(pwd)['record_set'] -%>
+<% managed_zone = grab_attributes(pwd)['managed_zone'] -%>
+describe google_dns_resource_record_set(project: <%= "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%>, name: <%= doc_generation ? "'#{record_set['name']}'" : "record_set['name']" -%>, type: <%= doc_generation ? "'#{record_set['type']}'" : "record_set['type']" -%>, managed_zone: <%= doc_generation ? "'#{managed_zone['name']}'" : "managed_zone['name']" -%>) do
   it { should exist }
   its('type') { should eq <%= doc_generation ? "'#{record_set['type']}'" : "record_set['type']" -%> }
   its('ttl') { should eq <%= doc_generation ? "'#{record_set['ttl']}'" : "record_set['ttl']" -%> }
diff --git a/templates/inspec/examples/google_dns_resource_record_set/google_dns_resource_record_set_attributes.erb b/templates/inspec/examples/google_dns_resource_record_set/google_dns_resource_record_set_attributes.erb
index ffd8dfb667a4..7813cc172a26 100644
--- a/templates/inspec/examples/google_dns_resource_record_set/google_dns_resource_record_set_attributes.erb
+++ b/templates/inspec/examples/google_dns_resource_record_set/google_dns_resource_record_set_attributes.erb
@@ -1,3 +1,3 @@
-gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute('gcp_project_id') -%>', description: 'The GCP project identifier.')
-record_set = attribute('record_set', default: <%= JSON.pretty_generate(grab_attributes['record_set']) -%>)
-managed_zone = attribute('managed_zone', default: <%= JSON.pretty_generate(grab_attributes['managed_zone']) -%>)
\ No newline at end of file
+gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute(pwd, 'gcp_project_id') -%>', description: 'The GCP project identifier.')
+record_set = attribute('record_set', default: <%= JSON.pretty_generate(grab_attributes(pwd)['record_set']) -%>)
+managed_zone = attribute('managed_zone', default: <%= JSON.pretty_generate(grab_attributes(pwd)['managed_zone']) -%>)
\ No newline at end of file
diff --git a/templates/inspec/examples/google_dns_resource_record_set/google_dns_resource_record_sets.erb b/templates/inspec/examples/google_dns_resource_record_set/google_dns_resource_record_sets.erb
index cb2f186f4f4f..4a121c13165f 100644
--- a/templates/inspec/examples/google_dns_resource_record_set/google_dns_resource_record_sets.erb
+++ b/templates/inspec/examples/google_dns_resource_record_set/google_dns_resource_record_sets.erb
@@ -1,6 +1,6 @@
-<% record_set = grab_attributes['record_set'] -%>
-<% managed_zone = grab_attributes['managed_zone'] -%>
-describe google_dns_resource_record_sets(project: <%= "#{external_attribute('gcp_project_id', doc_generation)}" -%>, name: <%= doc_generation ? "'#{record_set['name']}'" : "record_set['name']" -%>, managed_zone: <%= doc_generation ? "'#{managed_zone['name']}'" : "managed_zone['name']" -%>) do
+<% record_set = grab_attributes(pwd)['record_set'] -%>
+<% managed_zone = grab_attributes(pwd)['managed_zone'] -%>
+describe google_dns_resource_record_sets(project: <%= "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%>, name: <%= doc_generation ? "'#{record_set['name']}'" : "record_set['name']" -%>, managed_zone: <%= doc_generation ? "'#{managed_zone['name']}'" : "managed_zone['name']" -%>) do
   its('count') { should eq 3 }
   its('types') { should include <%= doc_generation ? "'#{record_set['type']}'" : "record_set['type']" -%> }
   its('ttls') { should include <%= doc_generation ? "'#{record_set['ttl']}'" : "record_set['ttl']" -%> }
diff --git a/templates/inspec/examples/google_filestore_instance/google_filestore_instance.erb b/templates/inspec/examples/google_filestore_instance/google_filestore_instance.erb
index 10a5a7c1d596..7c09bc70125d 100644
--- a/templates/inspec/examples/google_filestore_instance/google_filestore_instance.erb
+++ b/templates/inspec/examples/google_filestore_instance/google_filestore_instance.erb
@@ -1,5 +1,5 @@
-<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%>
-<% filestore_instance = grab_attributes['filestore_instance'] -%>
+<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%>
+<% filestore_instance = grab_attributes(pwd)['filestore_instance'] -%>
 describe google_filestore_instance(project: <%= gcp_project_id -%>, zone: <%= doc_generation ? "'#{filestore_instance['zone']}'" : "filestore_instance['zone']" -%>, name: <%= doc_generation ? "'#{filestore_instance['name']}'" : "filestore_instance['name']" -%>) do
   it { should exist }
   its('tier') { should cmp <%= doc_generation ? "'#{filestore_instance['tier']}'" : "filestore_instance['tier']" -%> }
diff --git a/templates/inspec/examples/google_filestore_instance/google_filestore_instance_attributes.erb b/templates/inspec/examples/google_filestore_instance/google_filestore_instance_attributes.erb
index 455ff911c660..e2aa864df739 100644
--- a/templates/inspec/examples/google_filestore_instance/google_filestore_instance_attributes.erb
+++ b/templates/inspec/examples/google_filestore_instance/google_filestore_instance_attributes.erb
@@ -1,2 +1,2 @@
-gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute('gcp_project_id') -%>', description: 'The GCP project identifier.')
-filestore_instance = attribute('filestore_instance', default: <%= grab_attributes['filestore_instance'] -%>)
\ No newline at end of file
+gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute(pwd, 'gcp_project_id') -%>', description: 'The GCP project identifier.')
+filestore_instance = attribute('filestore_instance', default: <%= grab_attributes(pwd)['filestore_instance'] -%>)
\ No newline at end of file
diff --git a/templates/inspec/examples/google_filestore_instance/google_filestore_instances.erb b/templates/inspec/examples/google_filestore_instance/google_filestore_instances.erb
index 2d0937151334..81a4fbe3e5b1 100644
--- a/templates/inspec/examples/google_filestore_instance/google_filestore_instances.erb
+++ b/templates/inspec/examples/google_filestore_instance/google_filestore_instances.erb
@@ -1,5 +1,5 @@
-<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%>
-<% filestore_instance = grab_attributes['filestore_instance'] -%>
+<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%>
+<% filestore_instance = grab_attributes(pwd)['filestore_instance'] -%>
 describe google_filestore_instances(project: <%= gcp_project_id -%>, zone: <%= doc_generation ? "'#{filestore_instance['zone']}'" : "filestore_instance['zone']" -%>) do
   its('tiers') { should include <%= doc_generation ? "'#{filestore_instance['tier']}'" : "filestore_instance['tier']" -%> }
 end
\ No newline at end of file
diff --git a/templates/inspec/examples/google_iam_organization_custom_role/google_iam_organization_custom_role.erb b/templates/inspec/examples/google_iam_organization_custom_role/google_iam_organization_custom_role.erb
index f22abe83d203..c3c1ff1eb05d 100644
--- a/templates/inspec/examples/google_iam_organization_custom_role/google_iam_organization_custom_role.erb
+++ b/templates/inspec/examples/google_iam_organization_custom_role/google_iam_organization_custom_role.erb
@@ -1,5 +1,5 @@
-<% gcp_organization_id = "#{external_attribute('gcp_organization_id', doc_generation)}" -%>
-<% gcp_organization_iam_custom_role_id = "#{external_attribute('gcp_organization_iam_custom_role_id', doc_generation)}" -%>
+<% gcp_organization_id = "#{external_attribute(pwd, 'gcp_organization_id', doc_generation)}" -%>
+<% gcp_organization_iam_custom_role_id = "#{external_attribute(pwd, 'gcp_organization_iam_custom_role_id', doc_generation)}" -%>
 describe google_iam_organization_custom_role(org_id: <%= doc_generation ? "'12345'" : "gcp_organization_id" -%>, name: <%= gcp_organization_iam_custom_role_id -%>) do
   it { should exist }
   its('stage') { should eq 'GA' }
diff --git a/templates/inspec/examples/google_iam_organization_custom_role/google_iam_organization_custom_role_attributes.erb b/templates/inspec/examples/google_iam_organization_custom_role/google_iam_organization_custom_role_attributes.erb
index cf9ae4c17e6a..47bf93a86119 100644
--- a/templates/inspec/examples/google_iam_organization_custom_role/google_iam_organization_custom_role_attributes.erb
+++ b/templates/inspec/examples/google_iam_organization_custom_role/google_iam_organization_custom_role_attributes.erb
@@ -1,3 +1,3 @@
-gcp_organization_id = attribute(:gcp_organization_id, default: <%= external_attribute('gcp_organization_id') -%>, description: 'The identifier of the organization that is the parent of the folder')
-gcp_organization_iam_custom_role_id = attribute(:gcp_organization_iam_custom_role_id, default: '<%= external_attribute('gcp_organization_iam_custom_role_id') -%>', description: 'The IAM custom role identifier.')
+gcp_organization_id = attribute(:gcp_organization_id, default: <%= external_attribute(pwd, 'gcp_organization_id') -%>, description: 'The identifier of the organization that is the parent of the folder')
+gcp_organization_iam_custom_role_id = attribute(:gcp_organization_iam_custom_role_id, default: '<%= external_attribute(pwd, 'gcp_organization_iam_custom_role_id') -%>', description: 'The IAM custom role identifier.')
 gcp_enable_privileged_resources = attribute(:gcp_enable_privileged_resources, default:0, description:'Flag to enable privileged resources requiring elevated privileges in GCP.')
\ No newline at end of file
diff --git a/templates/inspec/examples/google_iam_organization_custom_role/google_iam_organization_custom_roles.erb b/templates/inspec/examples/google_iam_organization_custom_role/google_iam_organization_custom_roles.erb
index 2cd25f40250d..743c59acbd7c 100644
--- a/templates/inspec/examples/google_iam_organization_custom_role/google_iam_organization_custom_roles.erb
+++ b/templates/inspec/examples/google_iam_organization_custom_role/google_iam_organization_custom_roles.erb
@@ -1,5 +1,5 @@
-<% gcp_organization_id = "#{external_attribute('gcp_organization_id', doc_generation)}" -%>
-<% gcp_organization_iam_custom_role_id = "#{external_attribute('gcp_organization_iam_custom_role_id', doc_generation)}" -%>
+<% gcp_organization_id = "#{external_attribute(pwd, 'gcp_organization_id', doc_generation)}" -%>
+<% gcp_organization_iam_custom_role_id = "#{external_attribute(pwd, 'gcp_organization_iam_custom_role_id', doc_generation)}" -%>
 describe google_iam_organization_custom_roles(org_id: <%= gcp_organization_id -%>) do
   its('names') { should include "organizations/<%= doc_generation ? "123456" : "\#{gcp_organization_id}" -%>/roles/<%= doc_generation ? "role-id" : "\#{gcp_organization_iam_custom_role_id}" -%>" }
 end
\ No newline at end of file
diff --git a/templates/inspec/examples/google_kms_crypto_key/google_kms_crypto_key.erb b/templates/inspec/examples/google_kms_crypto_key/google_kms_crypto_key.erb
index 900a5bd51281..55fd13a5474e 100644
--- a/templates/inspec/examples/google_kms_crypto_key/google_kms_crypto_key.erb
+++ b/templates/inspec/examples/google_kms_crypto_key/google_kms_crypto_key.erb
@@ -1,7 +1,7 @@
-<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%>
-<% gcp_location = "#{external_attribute('gcp_location', doc_generation)}" -%>
-<% gcp_kms_key_ring_policy_name = "#{external_attribute('gcp_kms_key_ring_policy_name', doc_generation)}" -%>
-<% gcp_kms_crypto_key_name_policy = "#{external_attribute('gcp_kms_crypto_key_name_policy', doc_generation)}" -%>
+<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%>
+<% gcp_location = "#{external_attribute(pwd, 'gcp_location', doc_generation)}" -%>
+<% gcp_kms_key_ring_policy_name = "#{external_attribute(pwd, 'gcp_kms_key_ring_policy_name', doc_generation)}" -%>
+<% gcp_kms_crypto_key_name_policy = "#{external_attribute(pwd, 'gcp_kms_crypto_key_name_policy', doc_generation)}" -%>
 describe google_kms_crypto_key(project: <%= gcp_project_id -%>, location: <%= gcp_location -%>, key_ring_name: <%= gcp_kms_key_ring_policy_name -%>, name: <%= gcp_kms_crypto_key_name_policy -%>) do
   it { should exist }
   its('crypto_key_name') { should cmp <%= gcp_kms_crypto_key_name_policy -%> }
diff --git a/templates/inspec/examples/google_kms_crypto_key/google_kms_crypto_key_attributes.erb b/templates/inspec/examples/google_kms_crypto_key/google_kms_crypto_key_attributes.erb
index 1cb85513506e..3c3fd43f8123 100644
--- a/templates/inspec/examples/google_kms_crypto_key/google_kms_crypto_key_attributes.erb
+++ b/templates/inspec/examples/google_kms_crypto_key/google_kms_crypto_key_attributes.erb
@@ -1,6 +1,6 @@
-gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute('gcp_project_id') -%>', description: 'The GCP project identifier.')
-gcp_location = attribute(:gcp_location, default: '<%= external_attribute('gcp_location') -%>', description: 'GCP location')
-gcp_kms_key_ring_policy_name = attribute(:gcp_kms_key_ring_policy_name, default: '<%= external_attribute('gcp_kms_key_ring_policy_name') -%>', description: 'Key ring name')
-gcp_kms_crypto_key_name_policy = attribute(:gcp_kms_crypto_key_name_policy, default: '<%= external_attribute('gcp_kms_crypto_key_name_policy') -%>', description: 'Key name')
-gcp_enable_privileged_resources = attribute(:gcp_enable_privileged_resources, default: '<%= external_attribute('gcp_enable_privileged_resources') -%>', description: 'If we are running tests with escalated permissions(required for this test)')
-gcp_organization_id = attribute(:gcp_organization_id, default: <%= external_attribute('gcp_organization_id') -%>, description: 'The identifier of the organization')
\ No newline at end of file
+gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute(pwd, 'gcp_project_id') -%>', description: 'The GCP project identifier.')
+gcp_location = attribute(:gcp_location, default: '<%= external_attribute(pwd, 'gcp_location') -%>', description: 'GCP location')
+gcp_kms_key_ring_policy_name = attribute(:gcp_kms_key_ring_policy_name, default: '<%= external_attribute(pwd, 'gcp_kms_key_ring_policy_name') -%>', description: 'Key ring name')
+gcp_kms_crypto_key_name_policy = attribute(:gcp_kms_crypto_key_name_policy, default: '<%= external_attribute(pwd, 'gcp_kms_crypto_key_name_policy') -%>', description: 'Key name')
+gcp_enable_privileged_resources = attribute(:gcp_enable_privileged_resources, default: '<%= external_attribute(pwd, 'gcp_enable_privileged_resources') -%>', description: 'If we are running tests with escalated permissions(required for this test)')
+gcp_organization_id = attribute(:gcp_organization_id, default: <%= external_attribute(pwd, 'gcp_organization_id') -%>, description: 'The identifier of the organization')
\ No newline at end of file
diff --git a/templates/inspec/examples/google_kms_crypto_key/google_kms_crypto_keys.erb b/templates/inspec/examples/google_kms_crypto_key/google_kms_crypto_keys.erb
index 97a25781bbaa..da66280eb1cd 100644
--- a/templates/inspec/examples/google_kms_crypto_key/google_kms_crypto_keys.erb
+++ b/templates/inspec/examples/google_kms_crypto_key/google_kms_crypto_keys.erb
@@ -1,7 +1,7 @@
-<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%>
-<% gcp_location = "#{external_attribute('gcp_location', doc_generation)}" -%>
-<% gcp_kms_key_ring_policy_name = "#{external_attribute('gcp_kms_key_ring_policy_name', doc_generation)}" -%>
-<% gcp_kms_crypto_key_name_policy = "#{external_attribute('gcp_kms_crypto_key_name_policy', doc_generation)}" -%>
+<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%>
+<% gcp_location = "#{external_attribute(pwd, 'gcp_location', doc_generation)}" -%>
+<% gcp_kms_key_ring_policy_name = "#{external_attribute(pwd, 'gcp_kms_key_ring_policy_name', doc_generation)}" -%>
+<% gcp_kms_crypto_key_name_policy = "#{external_attribute(pwd, 'gcp_kms_crypto_key_name_policy', doc_generation)}" -%>
 describe google_kms_crypto_keys(project: <%= gcp_project_id -%>, location: <%= gcp_location -%>, key_ring_name: <%= gcp_kms_key_ring_policy_name -%>) do
   its('count') { should be >= 1 }
   its('crypto_key_names') { should include <%= gcp_kms_crypto_key_name_policy -%> }
diff --git a/templates/inspec/examples/google_kms_key_ring/google_kms_key_ring.erb b/templates/inspec/examples/google_kms_key_ring/google_kms_key_ring.erb
index 4648cbe70bfb..8667bc6d2e2a 100644
--- a/templates/inspec/examples/google_kms_key_ring/google_kms_key_ring.erb
+++ b/templates/inspec/examples/google_kms_key_ring/google_kms_key_ring.erb
@@ -1,6 +1,6 @@
-<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%>
-<% gcp_location = "#{external_attribute('gcp_location', doc_generation)}" -%>
-<% gcp_kms_key_ring_policy_name = "#{external_attribute('gcp_kms_key_ring_policy_name', doc_generation)}" -%>
+<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%>
+<% gcp_location = "#{external_attribute(pwd, 'gcp_location', doc_generation)}" -%>
+<% gcp_kms_key_ring_policy_name = "#{external_attribute(pwd, 'gcp_kms_key_ring_policy_name', doc_generation)}" -%>
 describe google_kms_key_ring(project: <%= gcp_project_id -%>, location: <%= gcp_location -%>, name: <%= gcp_kms_key_ring_policy_name -%>) do
   it { should exist }
   its('create_time') { should be > Time.now - 365*60*60*24*10 }
diff --git a/templates/inspec/examples/google_kms_key_ring/google_kms_key_ring_attributes.erb b/templates/inspec/examples/google_kms_key_ring/google_kms_key_ring_attributes.erb
index 13f1c1fd66c1..7dbcf06ad173 100644
--- a/templates/inspec/examples/google_kms_key_ring/google_kms_key_ring_attributes.erb
+++ b/templates/inspec/examples/google_kms_key_ring/google_kms_key_ring_attributes.erb
@@ -1,5 +1,5 @@
-gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute('gcp_project_id') -%>', description: 'The GCP project identifier.')
-gcp_location = attribute(:gcp_location, default: '<%= external_attribute('gcp_location') -%>', description: 'GCP location')
-gcp_kms_key_ring_policy_name = attribute(:gcp_kms_key_ring_policy_name, default: '<%= external_attribute('gcp_kms_key_ring_policy_name') -%>', description: 'Key ring name')
-gcp_enable_privileged_resources = attribute(:gcp_enable_privileged_resources, default: '<%= external_attribute('gcp_enable_privileged_resources') -%>', description: 'If we are running tests with escalated permissions(required for this test)')
-gcp_organization_id = attribute(:gcp_organization_id, default: <%= external_attribute('gcp_organization_id') -%>, description: 'The identifier of the organization')
\ No newline at end of file
+gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute(pwd, 'gcp_project_id') -%>', description: 'The GCP project identifier.')
+gcp_location = attribute(:gcp_location, default: '<%= external_attribute(pwd, 'gcp_location') -%>', description: 'GCP location')
+gcp_kms_key_ring_policy_name = attribute(:gcp_kms_key_ring_policy_name, default: '<%= external_attribute(pwd, 'gcp_kms_key_ring_policy_name') -%>', description: 'Key ring name')
+gcp_enable_privileged_resources = attribute(:gcp_enable_privileged_resources, default: '<%= external_attribute(pwd, 'gcp_enable_privileged_resources') -%>', description: 'If we are running tests with escalated permissions(required for this test)')
+gcp_organization_id = attribute(:gcp_organization_id, default: <%= external_attribute(pwd, 'gcp_organization_id') -%>, description: 'The identifier of the organization')
\ No newline at end of file
diff --git a/templates/inspec/examples/google_kms_key_ring/google_kms_key_rings.erb b/templates/inspec/examples/google_kms_key_ring/google_kms_key_rings.erb
index 7c904825d9fb..fa4bfa94abf7 100644
--- a/templates/inspec/examples/google_kms_key_ring/google_kms_key_rings.erb
+++ b/templates/inspec/examples/google_kms_key_ring/google_kms_key_rings.erb
@@ -1,6 +1,6 @@
-<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%>
-<% gcp_location = "#{external_attribute('gcp_location', doc_generation)}" -%>
-<% gcp_kms_key_ring_policy_name = "#{external_attribute('gcp_kms_key_ring_policy_name', doc_generation)}" -%>
+<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%>
+<% gcp_location = "#{external_attribute(pwd, 'gcp_location', doc_generation)}" -%>
+<% gcp_kms_key_ring_policy_name = "#{external_attribute(pwd, 'gcp_kms_key_ring_policy_name', doc_generation)}" -%>
 describe google_kms_key_rings(project: <%= gcp_project_id -%>, location: <%= gcp_location -%>) do
   its('key_ring_names'){ should include <%= gcp_kms_key_ring_policy_name -%> }
 end
diff --git a/templates/inspec/examples/google_logging_folder_exclusion/google_logging_folder_exclusion.erb
b/templates/inspec/examples/google_logging_folder_exclusion/google_logging_folder_exclusion.erb index 8970222e1d72..2b729b2a3127 100644 --- a/templates/inspec/examples/google_logging_folder_exclusion/google_logging_folder_exclusion.erb +++ b/templates/inspec/examples/google_logging_folder_exclusion/google_logging_folder_exclusion.erb @@ -1,5 +1,5 @@ -<% gcp_organization_id = "#{external_attribute('gcp_organization_id', doc_generation)}" -%> -<% folder_exclusion = grab_attributes['folder_exclusion'] -%> +<% gcp_organization_id = "#{external_attribute(pwd, 'gcp_organization_id', doc_generation)}" -%> +<% folder_exclusion = grab_attributes(pwd)['folder_exclusion'] -%> # Getting folder exclusions is complicated due to the name being generated by the server. # This can be drastically simplified if you have the name when writing the test describe.one do diff --git a/templates/inspec/examples/google_logging_folder_exclusion/google_logging_folder_exclusion_attributes.erb b/templates/inspec/examples/google_logging_folder_exclusion/google_logging_folder_exclusion_attributes.erb index 0a91d7605666..4e5581f1b1c5 100644 --- a/templates/inspec/examples/google_logging_folder_exclusion/google_logging_folder_exclusion_attributes.erb +++ b/templates/inspec/examples/google_logging_folder_exclusion/google_logging_folder_exclusion_attributes.erb @@ -1,3 +1,3 @@ -folder_exclusion = attribute('folder_exclusion', default: <%= grab_attributes['folder_exclusion'] -%>) -gcp_organization_id = attribute(:gcp_organization_id, default: <%= external_attribute('gcp_organization_id') -%>, description: 'The identifier of the organization that is the parent of the folder') +folder_exclusion = attribute('folder_exclusion', default: <%= grab_attributes(pwd)['folder_exclusion'] -%>) +gcp_organization_id = attribute(:gcp_organization_id, default: <%= external_attribute(pwd, 'gcp_organization_id') -%>, description: 'The identifier of the organization that is the parent of the folder') gcp_enable_privileged_resources = attribute(:gcp_enable_privileged_resources, default:0, description:'Flag to enable privileged resources requiring elevated privileges in GCP.') \ No newline at end of file diff --git a/templates/inspec/examples/google_logging_folder_exclusion/google_logging_folder_exclusions.erb b/templates/inspec/examples/google_logging_folder_exclusion/google_logging_folder_exclusions.erb index fed3a9f22d13..d90b0d09131d 100644 --- a/templates/inspec/examples/google_logging_folder_exclusion/google_logging_folder_exclusions.erb +++ b/templates/inspec/examples/google_logging_folder_exclusion/google_logging_folder_exclusions.erb @@ -1,5 +1,5 @@ -<% gcp_organization_id = "#{external_attribute('gcp_organization_id', doc_generation)}" -%> -<% folder_exclusion = grab_attributes['folder_exclusion'] -%> +<% gcp_organization_id = "#{external_attribute(pwd, 'gcp_organization_id', doc_generation)}" -%> +<% folder_exclusion = grab_attributes(pwd)['folder_exclusion'] -%> # Getting folder exclusions is complicated due to the name being generated by the server. 
# This can be drastically simplified if you have the name when writing the test describe.one do diff --git a/templates/inspec/examples/google_logging_folder_log_sink/google_logging_folder_log_sink.erb b/templates/inspec/examples/google_logging_folder_log_sink/google_logging_folder_log_sink.erb index cccb000e3c6f..7952632ccf2b 100644 --- a/templates/inspec/examples/google_logging_folder_log_sink/google_logging_folder_log_sink.erb +++ b/templates/inspec/examples/google_logging_folder_log_sink/google_logging_folder_log_sink.erb @@ -1,5 +1,5 @@ -<% gcp_organization_id = "#{external_attribute('gcp_organization_id', doc_generation)}" -%> -<% folder_sink = grab_attributes['folder_sink'] -%> +<% gcp_organization_id = "#{external_attribute(pwd, 'gcp_organization_id', doc_generation)}" -%> +<% folder_sink = grab_attributes(pwd)['folder_sink'] -%> # Getting folder sinks is complicated due to the name being generated by the server. # This can be drastically simplified if you have the folder name when writing the test describe.one do diff --git a/templates/inspec/examples/google_logging_folder_log_sink/google_logging_folder_log_sink_attributes.erb b/templates/inspec/examples/google_logging_folder_log_sink/google_logging_folder_log_sink_attributes.erb index 8d148423dd73..d0087bce3601 100644 --- a/templates/inspec/examples/google_logging_folder_log_sink/google_logging_folder_log_sink_attributes.erb +++ b/templates/inspec/examples/google_logging_folder_log_sink/google_logging_folder_log_sink_attributes.erb @@ -1,3 +1,3 @@ -folder_sink = attribute('folder_sink', default: <%= grab_attributes['folder_sink'] -%>) -gcp_organization_id = attribute(:gcp_organization_id, default: <%= external_attribute('gcp_organization_id') -%>, description: 'The identifier of the organization that is the parent of the folder') +folder_sink = attribute('folder_sink', default: <%= grab_attributes(pwd)['folder_sink'] -%>) +gcp_organization_id = attribute(:gcp_organization_id, default: <%= external_attribute(pwd, 'gcp_organization_id') -%>, description: 'The identifier of the organization that is the parent of the folder') gcp_enable_privileged_resources = attribute(:gcp_enable_privileged_resources, default:0, description:'Flag to enable privileged resources requiring elevated privileges in GCP.') \ No newline at end of file diff --git a/templates/inspec/examples/google_logging_folder_log_sink/google_logging_folder_log_sinks.erb b/templates/inspec/examples/google_logging_folder_log_sink/google_logging_folder_log_sinks.erb index cd016eea01a3..e6a247bf46c9 100644 --- a/templates/inspec/examples/google_logging_folder_log_sink/google_logging_folder_log_sinks.erb +++ b/templates/inspec/examples/google_logging_folder_log_sink/google_logging_folder_log_sinks.erb @@ -1,5 +1,5 @@ -<% gcp_organization_id = "#{external_attribute('gcp_organization_id', doc_generation)}" -%> -<% folder_sink = grab_attributes['folder_sink'] -%> +<% gcp_organization_id = "#{external_attribute(pwd, 'gcp_organization_id', doc_generation)}" -%> +<% folder_sink = grab_attributes(pwd)['folder_sink'] -%> # Getting folder sinks is complicated due to the name being generated by the server. 
# This can be drastically simplified if you have the folder name when writing the test describe.one do diff --git a/templates/inspec/examples/google_logging_organization_log_sink/google_logging_organization_log_sink.erb b/templates/inspec/examples/google_logging_organization_log_sink/google_logging_organization_log_sink.erb index 0210d6976c93..1217591d71d1 100644 --- a/templates/inspec/examples/google_logging_organization_log_sink/google_logging_organization_log_sink.erb +++ b/templates/inspec/examples/google_logging_organization_log_sink/google_logging_organization_log_sink.erb @@ -1,5 +1,5 @@ -<% gcp_organization_id = "#{external_attribute('gcp_organization_id', doc_generation)}" -%> -<% org_sink = grab_attributes['org_sink'] -%> +<% gcp_organization_id = "#{external_attribute(pwd, 'gcp_organization_id', doc_generation)}" -%> +<% org_sink = grab_attributes(pwd)['org_sink'] -%> describe google_logging_organization_log_sink(organization: <%= gcp_organization_id -%>, name: <%= doc_generation ? "'#{org_sink['name']}'" : "org_sink['name']" -%>) do it { should exist } its('filter') { should cmp <%= doc_generation ? "'#{org_sink['filter']}'" : "org_sink['filter']" -%> } diff --git a/templates/inspec/examples/google_logging_organization_log_sink/google_logging_organization_log_sink_attributes.erb b/templates/inspec/examples/google_logging_organization_log_sink/google_logging_organization_log_sink_attributes.erb index 082ea57f55a3..d90fc842b908 100644 --- a/templates/inspec/examples/google_logging_organization_log_sink/google_logging_organization_log_sink_attributes.erb +++ b/templates/inspec/examples/google_logging_organization_log_sink/google_logging_organization_log_sink_attributes.erb @@ -1,3 +1,3 @@ -org_sink = attribute('org_sink', default: <%= grab_attributes['org_sink'] -%>) -gcp_organization_id = attribute(:gcp_organization_id, default: <%= external_attribute('gcp_organization_id') -%>, description: 'The identifier of the organization that is the parent of this folder') +org_sink = attribute('org_sink', default: <%= grab_attributes(pwd)['org_sink'] -%>) +gcp_organization_id = attribute(:gcp_organization_id, default: <%= external_attribute(pwd, 'gcp_organization_id') -%>, description: 'The identifier of the organization that is the parent of this folder') gcp_enable_privileged_resources = attribute(:gcp_enable_privileged_resources, default:0, description:'Flag to enable privileged resources requiring elevated privileges in GCP.') \ No newline at end of file diff --git a/templates/inspec/examples/google_logging_organization_log_sink/google_logging_organization_log_sinks.erb b/templates/inspec/examples/google_logging_organization_log_sink/google_logging_organization_log_sinks.erb index d3d3c21ebd1a..7bb879a01b58 100644 --- a/templates/inspec/examples/google_logging_organization_log_sink/google_logging_organization_log_sinks.erb +++ b/templates/inspec/examples/google_logging_organization_log_sink/google_logging_organization_log_sinks.erb @@ -1,5 +1,5 @@ -<% gcp_organization_id = "#{external_attribute('gcp_organization_id', doc_generation)}" -%> -<% org_sink = grab_attributes['org_sink'] -%> +<% gcp_organization_id = "#{external_attribute(pwd, 'gcp_organization_id', doc_generation)}" -%> +<% org_sink = grab_attributes(pwd)['org_sink'] -%> describe google_logging_organization_log_sinks(organization: <%= gcp_organization_id -%>) do its('names') { should include <%= doc_generation ? 
"'#{org_sink['name']}'" : "org_sink['name']" -%> } end \ No newline at end of file diff --git a/templates/inspec/examples/google_logging_project_exclusion/google_logging_project_exclusion.erb b/templates/inspec/examples/google_logging_project_exclusion/google_logging_project_exclusion.erb index 6db10667dce5..6b8e23505f55 100644 --- a/templates/inspec/examples/google_logging_project_exclusion/google_logging_project_exclusion.erb +++ b/templates/inspec/examples/google_logging_project_exclusion/google_logging_project_exclusion.erb @@ -1,5 +1,5 @@ -<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%> -<% project_exclusion = grab_attributes['project_exclusion'] -%> +<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%> +<% project_exclusion = grab_attributes(pwd)['project_exclusion'] -%> describe google_logging_project_exclusion(project: <%= gcp_project_id -%>, name: <%= doc_generation ? "'#{project_exclusion['name']}'" : "project_exclusion['name']" -%>) do it { should exist } diff --git a/templates/inspec/examples/google_logging_project_exclusion/google_logging_project_exclusion_attributes.erb b/templates/inspec/examples/google_logging_project_exclusion/google_logging_project_exclusion_attributes.erb index 02c2fc6b43bb..4e5e29cb1c98 100644 --- a/templates/inspec/examples/google_logging_project_exclusion/google_logging_project_exclusion_attributes.erb +++ b/templates/inspec/examples/google_logging_project_exclusion/google_logging_project_exclusion_attributes.erb @@ -1,4 +1,4 @@ -project_exclusion = attribute('project_exclusion', default: <%= grab_attributes['project_exclusion'] -%>) -gcp_project_id = attribute(:gcp_project_id, default: <%= external_attribute('gcp_project_id') -%>, description: 'The project identifier') +project_exclusion = attribute('project_exclusion', default: <%= grab_attributes(pwd)['project_exclusion'] -%>) +gcp_project_id = attribute(:gcp_project_id, default: <%= external_attribute(pwd, 'gcp_project_id') -%>, description: 'The project identifier') gcp_enable_privileged_resources = attribute(:gcp_enable_privileged_resources, default:0, description:'Flag to enable privileged resources requiring elevated privileges in GCP.') -gcp_organization_id = attribute(:gcp_organization_id, default: <%= external_attribute('gcp_organization_id') -%>, description: 'The identifier of the organization') \ No newline at end of file +gcp_organization_id = attribute(:gcp_organization_id, default: <%= external_attribute(pwd, 'gcp_organization_id') -%>, description: 'The identifier of the organization') \ No newline at end of file diff --git a/templates/inspec/examples/google_logging_project_exclusion/google_logging_project_exclusions.erb b/templates/inspec/examples/google_logging_project_exclusion/google_logging_project_exclusions.erb index fe5a979a6c82..489a8c3e1e83 100644 --- a/templates/inspec/examples/google_logging_project_exclusion/google_logging_project_exclusions.erb +++ b/templates/inspec/examples/google_logging_project_exclusion/google_logging_project_exclusions.erb @@ -1,5 +1,5 @@ -<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%> -<% project_exclusion = grab_attributes['folder_exclusion'] -%> +<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%> +<% project_exclusion = grab_attributes(pwd)['folder_exclusion'] -%> describe google_logging_project_exclusions(project: <%= gcp_project_id -%>) do its('names'){ should include <%= doc_generation ? 
"'#{project_exclusion['name']}'" : "project_exclusion['name']" -%> } diff --git a/templates/inspec/examples/google_logging_project_sink/google_logging_project_sink.erb b/templates/inspec/examples/google_logging_project_sink/google_logging_project_sink.erb index 779931415c16..2d1cf7ad91fb 100644 --- a/templates/inspec/examples/google_logging_project_sink/google_logging_project_sink.erb +++ b/templates/inspec/examples/google_logging_project_sink/google_logging_project_sink.erb @@ -1,5 +1,5 @@ -<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%> -<% project_sink = grab_attributes['project_sink'] -%> +<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%> +<% project_sink = grab_attributes(pwd)['project_sink'] -%> describe google_logging_project_sink(project: <%= gcp_project_id -%>, name: <%= doc_generation ? "'#{project_sink['name']}'" : "project_sink['name']" -%>) do it { should exist } its('filter') { should cmp <%= doc_generation ? "'#{project_sink['filter']}'" : "project_sink['filter']" -%> } diff --git a/templates/inspec/examples/google_logging_project_sink/google_logging_project_sink_attributes.erb b/templates/inspec/examples/google_logging_project_sink/google_logging_project_sink_attributes.erb index 82457f0d454c..2361137989bf 100644 --- a/templates/inspec/examples/google_logging_project_sink/google_logging_project_sink_attributes.erb +++ b/templates/inspec/examples/google_logging_project_sink/google_logging_project_sink_attributes.erb @@ -1,4 +1,4 @@ -project_sink = attribute('project_sink', default: <%= grab_attributes['project_sink'] -%>) -gcp_project_id = attribute(:gcp_project_id, default: <%= external_attribute('gcp_project_id') -%>, description: 'The project id.') +project_sink = attribute('project_sink', default: <%= grab_attributes(pwd)['project_sink'] -%>) +gcp_project_id = attribute(:gcp_project_id, default: <%= external_attribute(pwd, 'gcp_project_id') -%>, description: 'The project id.') gcp_enable_privileged_resources = attribute(:gcp_enable_privileged_resources, default:0, description:'Flag to enable privileged resources requiring elevated privileges in GCP.') -gcp_organization_id = attribute(:gcp_organization_id, default: <%= external_attribute('gcp_organization_id') -%>, description: 'The identifier of the organization') \ No newline at end of file +gcp_organization_id = attribute(:gcp_organization_id, default: <%= external_attribute(pwd, 'gcp_organization_id') -%>, description: 'The identifier of the organization') \ No newline at end of file diff --git a/templates/inspec/examples/google_logging_project_sink/google_logging_project_sinks.erb b/templates/inspec/examples/google_logging_project_sink/google_logging_project_sinks.erb index e20bcb6d95b7..0aff061cd876 100644 --- a/templates/inspec/examples/google_logging_project_sink/google_logging_project_sinks.erb +++ b/templates/inspec/examples/google_logging_project_sink/google_logging_project_sinks.erb @@ -1,5 +1,5 @@ -<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%> -<% project_sink = grab_attributes['project_sink'] -%> +<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%> +<% project_sink = grab_attributes(pwd)['project_sink'] -%> describe google_logging_project_sinks(project: <%= gcp_project_id -%>) do its('names') { should include <%= doc_generation ? 
"'#{project_sink['name']}'" : "project_sink['name']" -%> } end \ No newline at end of file diff --git a/templates/inspec/examples/google_memcache_instance/google_memcache_instance.erb b/templates/inspec/examples/google_memcache_instance/google_memcache_instance.erb new file mode 100644 index 000000000000..6dd3f038cbd8 --- /dev/null +++ b/templates/inspec/examples/google_memcache_instance/google_memcache_instance.erb @@ -0,0 +1,11 @@ +<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%> +<% gcp_location = "#{external_attribute(pwd, 'gcp_location', doc_generation)}" -%> +<% memcache_instance = grab_attributes(pwd)['memcache_instance'] -%> +describe google_memcache_instance(project: <%= gcp_project_id -%>, region: <%= gcp_location %>, name: <%= doc_generation ? "'#{memcache_instance['name']}'" : "memcache_instance['name']" -%>) do + it { should exist } + its('node_count') { should cmp 1 } +end + +describe google_memcache_instance(project: <%= gcp_project_id -%>, region: <%= gcp_location %>, name: "nonexistent") do + it { should_not exist } +end \ No newline at end of file diff --git a/templates/inspec/examples/google_memcache_instance/google_memcache_instance_attributes.erb b/templates/inspec/examples/google_memcache_instance/google_memcache_instance_attributes.erb new file mode 100644 index 000000000000..795917fcc5a0 --- /dev/null +++ b/templates/inspec/examples/google_memcache_instance/google_memcache_instance_attributes.erb @@ -0,0 +1,3 @@ +gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute(pwd, 'gcp_project_id') -%>', description: 'The GCP project identifier.') +gcp_location = attribute(:gcp_location, default: '<%= external_attribute(pwd, 'gcp_location') -%>', description: 'The GCP project region.') +memcache_instance = attribute('memcache_instance', default: <%= JSON.pretty_generate(grab_attributes(pwd)['memcache_instance']) -%>, description: 'Memcache settings') \ No newline at end of file diff --git a/templates/inspec/examples/google_memcache_instance/google_memcache_instances.erb b/templates/inspec/examples/google_memcache_instance/google_memcache_instances.erb new file mode 100644 index 000000000000..9ae45434a1e6 --- /dev/null +++ b/templates/inspec/examples/google_memcache_instance/google_memcache_instances.erb @@ -0,0 +1,7 @@ +<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%> +<% gcp_location = "#{external_attribute(pwd, 'gcp_location', doc_generation)}" -%> +<% memcache_instance = grab_attributes(pwd)['memcache_instance'] -%> +describe google_memcache_instances(project: <%= gcp_project_id -%>, region: <%= gcp_location %>) do + its('count') { should be >= 1 } + its('node_counts') { should include 1 } +end \ No newline at end of file diff --git a/templates/inspec/examples/google_ml_engine_model/google_ml_engine_model.erb b/templates/inspec/examples/google_ml_engine_model/google_ml_engine_model.erb index bc42beafdc4e..c4f25f3477c9 100644 --- a/templates/inspec/examples/google_ml_engine_model/google_ml_engine_model.erb +++ b/templates/inspec/examples/google_ml_engine_model/google_ml_engine_model.erb @@ -1,5 +1,5 @@ -<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%> -<% ml_model = grab_attributes['ml_model'] -%> +<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%> +<% ml_model = grab_attributes(pwd)['ml_model'] -%> describe google_ml_engine_model(project: <%= gcp_project_id -%>, name: <%= doc_generation ? 
"'#{ml_model['name']}'" : "ml_model['name']" -%>) do it { should exist } its('description') { should cmp <%= doc_generation ? "'#{ml_model['description']}'" : "ml_model['description']" -%> } diff --git a/templates/inspec/examples/google_ml_engine_model/google_ml_engine_model_attributes.erb b/templates/inspec/examples/google_ml_engine_model/google_ml_engine_model_attributes.erb index bfbbdabf66e9..1020108510eb 100644 --- a/templates/inspec/examples/google_ml_engine_model/google_ml_engine_model_attributes.erb +++ b/templates/inspec/examples/google_ml_engine_model/google_ml_engine_model_attributes.erb @@ -1,3 +1,3 @@ -gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute('gcp_project_id') -%>', description: 'The GCP project identifier.') -gcp_location = attribute(:gcp_location, default: '<%= external_attribute('gcp_location') -%>', description: 'The GCP project region.') -ml_model = attribute('ml_model', default: <%= JSON.pretty_generate(grab_attributes['ml_model']) -%>, description: 'Machine learning model definition') \ No newline at end of file +gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute(pwd, 'gcp_project_id') -%>', description: 'The GCP project identifier.') +gcp_location = attribute(:gcp_location, default: '<%= external_attribute(pwd, 'gcp_location') -%>', description: 'The GCP project region.') +ml_model = attribute('ml_model', default: <%= JSON.pretty_generate(grab_attributes(pwd)['ml_model']) -%>, description: 'Machine learning model definition') \ No newline at end of file diff --git a/templates/inspec/examples/google_ml_engine_model/google_ml_engine_models.erb b/templates/inspec/examples/google_ml_engine_model/google_ml_engine_models.erb index c1ff8750517c..41c01626400a 100644 --- a/templates/inspec/examples/google_ml_engine_model/google_ml_engine_models.erb +++ b/templates/inspec/examples/google_ml_engine_model/google_ml_engine_models.erb @@ -1,6 +1,6 @@ -<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%> -<% gcp_location = "#{external_attribute('gcp_location', doc_generation)}" -%> -<% ml_model = grab_attributes['ml_model'] -%> +<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%> +<% gcp_location = "#{external_attribute(pwd, 'gcp_location', doc_generation)}" -%> +<% ml_model = grab_attributes(pwd)['ml_model'] -%> describe google_ml_engine_models(project: <%= gcp_project_id -%>) do its('descriptions') { should include <%= doc_generation ? "'#{ml_model['description']}'" : "ml_model['description']" -%> } its('online_prediction_loggings') { should include <%= doc_generation ? "'#{ml_model['online_prediction_logging']}'" : "ml_model['online_prediction_logging']" -%> } diff --git a/templates/inspec/examples/google_organization/google_organization.erb b/templates/inspec/examples/google_organization/google_organization.erb index e47486ca67c8..8bf19875f501 100644 --- a/templates/inspec/examples/google_organization/google_organization.erb +++ b/templates/inspec/examples/google_organization/google_organization.erb @@ -1,4 +1,4 @@ -<% gcp_organization_id = "#{external_attribute('gcp_organization_id', doc_generation)}" -%> +<% gcp_organization_id = "#{external_attribute(pwd, 'gcp_organization_id', doc_generation)}" -%> describe google_organization(name: "organizations/<%= doc_generation ? '123456' : "\#{gcp_organization_id}" -%>") do its('name') { should eq "organizations/<%= doc_generation ? 
'123456' : "\#{gcp_organization_id}" -%>" } diff --git a/templates/inspec/examples/google_organization/google_organization_attributes.erb b/templates/inspec/examples/google_organization/google_organization_attributes.erb index 1d55f221dbde..8eca5f43aaf7 100644 --- a/templates/inspec/examples/google_organization/google_organization_attributes.erb +++ b/templates/inspec/examples/google_organization/google_organization_attributes.erb @@ -1,2 +1,2 @@ -gcp_organization_id = attribute(:gcp_organization_id, default: <%= external_attribute('gcp_organization_id') -%>, description: 'The identifier of the organization that is the parent of this folder') +gcp_organization_id = attribute(:gcp_organization_id, default: <%= external_attribute(pwd, 'gcp_organization_id') -%>, description: 'The identifier of the organization that is the parent of this folder') gcp_enable_privileged_resources = attribute(:gcp_enable_privileged_resources, default:0, description:'Flag to enable privileged resources requiring elevated privileges in GCP.') \ No newline at end of file diff --git a/templates/inspec/examples/google_organization/google_organizations.erb b/templates/inspec/examples/google_organization/google_organizations.erb index ac1a89f2a7ec..9468ac1695a2 100644 --- a/templates/inspec/examples/google_organization/google_organizations.erb +++ b/templates/inspec/examples/google_organization/google_organizations.erb @@ -1,5 +1,5 @@ -<% gcp_organization_id = "#{external_attribute('gcp_organization_id', doc_generation)}" -%> -<% gcp_organization_display_name = "#{external_attribute('gcp_organization_display_name', doc_generation)}" -%> +<% gcp_organization_id = "#{external_attribute(pwd, 'gcp_organization_id', doc_generation)}" -%> +<% gcp_organization_display_name = "#{external_attribute(pwd, 'gcp_organization_display_name', doc_generation)}" -%> describe google_organizations do its('names') { should include "organizations/<%= doc_generation ? 
'123456' : "\#{gcp_organization_id}" -%>" } diff --git a/templates/inspec/examples/google_project/google_project.erb b/templates/inspec/examples/google_project/google_project.erb index 36aa988624e4..1066abd2a61f 100644 --- a/templates/inspec/examples/google_project/google_project.erb +++ b/templates/inspec/examples/google_project/google_project.erb @@ -1,4 +1,4 @@ -<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%> +<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%> describe google_project(project: <%= gcp_project_id -%>) do it { should exist } its('project_id') { should cmp <%= gcp_project_id -%> } diff --git a/templates/inspec/examples/google_project/google_project_attributes.erb b/templates/inspec/examples/google_project/google_project_attributes.erb index a2863dfa3703..9e434667ef77 100644 --- a/templates/inspec/examples/google_project/google_project_attributes.erb +++ b/templates/inspec/examples/google_project/google_project_attributes.erb @@ -1 +1 @@ -gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute('gcp_project_id') -%>', description: 'The GCP project identifier.') \ No newline at end of file +gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute(pwd, 'gcp_project_id') -%>', description: 'The GCP project identifier.') \ No newline at end of file diff --git a/templates/inspec/examples/google_project/google_projects.erb b/templates/inspec/examples/google_project/google_projects.erb index 15a4a36bf583..e28d01579b35 100644 --- a/templates/inspec/examples/google_project/google_projects.erb +++ b/templates/inspec/examples/google_project/google_projects.erb @@ -1,4 +1,4 @@ -<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%> +<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%> describe google_projects() do its('count') { should be >= 1 } its('project_ids') { should include <%= gcp_project_id -%> } diff --git a/templates/inspec/examples/google_project_alert_policy/google_project_alert_policies.erb b/templates/inspec/examples/google_project_alert_policy/google_project_alert_policies.erb index eeca2f996aa2..341771ebd46c 100644 --- a/templates/inspec/examples/google_project_alert_policy/google_project_alert_policies.erb +++ b/templates/inspec/examples/google_project_alert_policy/google_project_alert_policies.erb @@ -1,5 +1,5 @@ -<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%> -<% alert_policy = grab_attributes['alert_policy'] -%> +<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%> +<% alert_policy = grab_attributes(pwd)['alert_policy'] -%> describe google_project_alert_policies(project: <%= gcp_project_id -%>) do it { should exist } its('policy_display_names') { should include <%= doc_generation ? 
"'#{alert_policy['display_name']}'" : "alert_policy['display_name']" -%>} diff --git a/templates/inspec/examples/google_project_alert_policy/google_project_alert_policy.erb b/templates/inspec/examples/google_project_alert_policy/google_project_alert_policy.erb index 19ce1f0be671..c8e898daefc7 100644 --- a/templates/inspec/examples/google_project_alert_policy/google_project_alert_policy.erb +++ b/templates/inspec/examples/google_project_alert_policy/google_project_alert_policy.erb @@ -1,5 +1,5 @@ -<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%> -<% alert_policy = grab_attributes['alert_policy'] -%> +<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%> +<% alert_policy = grab_attributes(pwd)['alert_policy'] -%> describe.one do google_project_alert_policies(project: <%= gcp_project_id -%>).policy_names do |policy_name| describe google_project_alert_policy(project: <%= gcp_project_id -%>, name: policy_name) do diff --git a/templates/inspec/examples/google_project_alert_policy/google_project_alert_policy_attributes.erb b/templates/inspec/examples/google_project_alert_policy/google_project_alert_policy_attributes.erb index 4df0922e7a5b..d68387f2452b 100644 --- a/templates/inspec/examples/google_project_alert_policy/google_project_alert_policy_attributes.erb +++ b/templates/inspec/examples/google_project_alert_policy/google_project_alert_policy_attributes.erb @@ -1,2 +1,2 @@ -gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute('gcp_project_id') -%>', description: 'The GCP project identifier.') -alert_policy = attribute('alert_policy', default: <%= JSON.pretty_generate(grab_attributes['alert_policy']) -%>, description: 'Alert Policy description') \ No newline at end of file +gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute(pwd, 'gcp_project_id') -%>', description: 'The GCP project identifier.') +alert_policy = attribute('alert_policy', default: <%= JSON.pretty_generate(grab_attributes(pwd)['alert_policy']) -%>, description: 'Alert Policy description') \ No newline at end of file diff --git a/templates/inspec/examples/google_project_iam_custom_role/google_project_iam_custom_role.erb b/templates/inspec/examples/google_project_iam_custom_role/google_project_iam_custom_role.erb index f5129ebaa952..8f523d0733b5 100644 --- a/templates/inspec/examples/google_project_iam_custom_role/google_project_iam_custom_role.erb +++ b/templates/inspec/examples/google_project_iam_custom_role/google_project_iam_custom_role.erb @@ -1,5 +1,5 @@ -<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%> -<% gcp_project_iam_custom_role_id = "#{external_attribute('gcp_project_iam_custom_role_id', doc_generation)}" -%> +<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%> +<% gcp_project_iam_custom_role_id = "#{external_attribute(pwd, 'gcp_project_iam_custom_role_id', doc_generation)}" -%> describe google_project_iam_custom_role(project: <%= gcp_project_id -%>, name: <%= gcp_project_iam_custom_role_id -%>) do it { should exist } its('stage') { should eq 'GA' } diff --git a/templates/inspec/examples/google_project_iam_custom_role/google_project_iam_custom_role_attributes.erb b/templates/inspec/examples/google_project_iam_custom_role/google_project_iam_custom_role_attributes.erb index 3238df07fe9c..a27a02e66a28 100644 --- a/templates/inspec/examples/google_project_iam_custom_role/google_project_iam_custom_role_attributes.erb +++ 
b/templates/inspec/examples/google_project_iam_custom_role/google_project_iam_custom_role_attributes.erb @@ -1,4 +1,4 @@ -gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute('gcp_project_id') -%>', description: 'The GCP project identifier.') -gcp_project_iam_custom_role_id = attribute(:gcp_project_iam_custom_role_id, default: '<%= external_attribute('gcp_project_iam_custom_role_id') -%>', description: 'The IAM custom role identifier.') +gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute(pwd, 'gcp_project_id') -%>', description: 'The GCP project identifier.') +gcp_project_iam_custom_role_id = attribute(:gcp_project_iam_custom_role_id, default: '<%= external_attribute(pwd, 'gcp_project_iam_custom_role_id') -%>', description: 'The IAM custom role identifier.') gcp_enable_privileged_resources = attribute(:gcp_enable_privileged_resources, default:0, description:'Flag to enable privileged resources requiring elevated privileges in GCP.') -gcp_organization_id = attribute(:gcp_organization_id, default: <%= external_attribute('gcp_organization_id') -%>, description: 'The identifier of the organization') \ No newline at end of file +gcp_organization_id = attribute(:gcp_organization_id, default: <%= external_attribute(pwd, 'gcp_organization_id') -%>, description: 'The identifier of the organization') \ No newline at end of file diff --git a/templates/inspec/examples/google_project_iam_custom_role/google_project_iam_custom_roles.erb b/templates/inspec/examples/google_project_iam_custom_role/google_project_iam_custom_roles.erb index db14e31ea613..26f1fee6203f 100644 --- a/templates/inspec/examples/google_project_iam_custom_role/google_project_iam_custom_roles.erb +++ b/templates/inspec/examples/google_project_iam_custom_role/google_project_iam_custom_roles.erb @@ -1,5 +1,5 @@ -<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%> -<% gcp_project_iam_custom_role_id = "#{external_attribute('gcp_project_iam_custom_role_id', doc_generation)}" -%> +<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%> +<% gcp_project_iam_custom_role_id = "#{external_attribute(pwd, 'gcp_project_iam_custom_role_id', doc_generation)}" -%> describe google_project_iam_custom_roles(project: <%= gcp_project_id -%>) do its('names') { should include "projects/<%= doc_generation ? "project-id" : "\#{gcp_project_id}" -%>/roles/<%= doc_generation ? "role-id" : "\#{gcp_project_iam_custom_role_id}" -%>" } end \ No newline at end of file diff --git a/templates/inspec/examples/google_project_metric/google_project_metric.erb b/templates/inspec/examples/google_project_metric/google_project_metric.erb index 24569fcfbc0e..a3bc78606f56 100644 --- a/templates/inspec/examples/google_project_metric/google_project_metric.erb +++ b/templates/inspec/examples/google_project_metric/google_project_metric.erb @@ -1,5 +1,5 @@ -<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%> -<% logging_metric = grab_attributes['logging_metric'] -%> +<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%> +<% logging_metric = grab_attributes(pwd)['logging_metric'] -%> describe google_project_metric(project: <%= gcp_project_id -%>, name: <%= doc_generation ? "'#{logging_metric['name']}'" : "logging_metric['name']" -%>) do it { should exist } its('filter') { should cmp <%= doc_generation ? 
"'#{logging_metric['filter']}'" : "logging_metric['filter']" -%> } diff --git a/templates/inspec/examples/google_project_metric/google_project_metric_attributes.erb b/templates/inspec/examples/google_project_metric/google_project_metric_attributes.erb index 22313211290d..74e822ad83a1 100644 --- a/templates/inspec/examples/google_project_metric/google_project_metric_attributes.erb +++ b/templates/inspec/examples/google_project_metric/google_project_metric_attributes.erb @@ -1,2 +1,2 @@ -gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute('gcp_project_id') -%>', description: 'The GCP project identifier.') -logging_metric = attribute('logging_metric', default: <%= JSON.pretty_generate(grab_attributes['logging_metric']) -%>, description: 'Logging metric definition') \ No newline at end of file +gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute(pwd, 'gcp_project_id') -%>', description: 'The GCP project identifier.') +logging_metric = attribute('logging_metric', default: <%= JSON.pretty_generate(grab_attributes(pwd)['logging_metric']) -%>, description: 'Logging metric definition') \ No newline at end of file diff --git a/templates/inspec/examples/google_project_metric/google_project_metrics.erb b/templates/inspec/examples/google_project_metric/google_project_metrics.erb index e00776c14c05..6f53f5ae9840 100644 --- a/templates/inspec/examples/google_project_metric/google_project_metrics.erb +++ b/templates/inspec/examples/google_project_metric/google_project_metrics.erb @@ -1,5 +1,5 @@ -<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%> -<% logging_metric = grab_attributes['logging_metric'] -%> +<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%> +<% logging_metric = grab_attributes(pwd)['logging_metric'] -%> describe google_project_metrics(project: <%= gcp_project_id -%>) do it { should exist } its('metric_filters') { should include <%= doc_generation ? "'#{logging_metric['filter']}'" : "logging_metric['filter']" -%> } diff --git a/templates/inspec/examples/google_project_service/google_project_service.erb b/templates/inspec/examples/google_project_service/google_project_service.erb index 7e5e67045fc1..f2a20336d408 100644 --- a/templates/inspec/examples/google_project_service/google_project_service.erb +++ b/templates/inspec/examples/google_project_service/google_project_service.erb @@ -1,5 +1,5 @@ -<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%> -<% service = grab_attributes['service'] -%> +<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%> +<% service = grab_attributes(pwd)['service'] -%> describe google_project_service(project: <%= gcp_project_id -%>, name: <%= doc_generation ? 
"'#{service['name']}'" : "service['name']" -%>) do it { should exist } its('state') { should cmp "ENABLED" } diff --git a/templates/inspec/examples/google_project_service/google_project_service_attributes.erb b/templates/inspec/examples/google_project_service/google_project_service_attributes.erb index f7b5ec33a865..a0fc0c587797 100644 --- a/templates/inspec/examples/google_project_service/google_project_service_attributes.erb +++ b/templates/inspec/examples/google_project_service/google_project_service_attributes.erb @@ -1,2 +1,2 @@ -gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute('gcp_project_id') -%>', description: 'The GCP project identifier.') -service = attribute('service', default: <%= JSON.pretty_generate(grab_attributes['service']) -%>, description: 'Service description') \ No newline at end of file +gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute(pwd, 'gcp_project_id') -%>', description: 'The GCP project identifier.') +service = attribute('service', default: <%= JSON.pretty_generate(grab_attributes(pwd)['service']) -%>, description: 'Service description') \ No newline at end of file diff --git a/templates/inspec/examples/google_project_service/google_project_services.erb b/templates/inspec/examples/google_project_service/google_project_services.erb index 4494e84fc468..5eef0819ae9f 100644 --- a/templates/inspec/examples/google_project_service/google_project_services.erb +++ b/templates/inspec/examples/google_project_service/google_project_services.erb @@ -1,5 +1,5 @@ -<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%> -<% service = grab_attributes['service'] -%> +<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%> +<% service = grab_attributes(pwd)['service'] -%> describe.one do google_project_services(project: <%= gcp_project_id -%>).names.each do |name| describe name do diff --git a/templates/inspec/examples/google_pubsub_subscription/google_pubsub_subscription.erb b/templates/inspec/examples/google_pubsub_subscription/google_pubsub_subscription.erb index 8748a88cea45..e6f79dc05774 100644 --- a/templates/inspec/examples/google_pubsub_subscription/google_pubsub_subscription.erb +++ b/templates/inspec/examples/google_pubsub_subscription/google_pubsub_subscription.erb @@ -1,5 +1,5 @@ -<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%> -<% subscription = grab_attributes['subscription'] -%> +<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%> +<% subscription = grab_attributes(pwd)['subscription'] -%> describe google_pubsub_subscription(project: <%= gcp_project_id -%>, name: <%= doc_generation ? 
"'#{subscription['name']}'" : "subscription['name']" -%>) do it { should exist } end diff --git a/templates/inspec/examples/google_pubsub_subscription/google_pubsub_subscription_attributes.erb b/templates/inspec/examples/google_pubsub_subscription/google_pubsub_subscription_attributes.erb index a4fc9b29b37d..336577f3407c 100644 --- a/templates/inspec/examples/google_pubsub_subscription/google_pubsub_subscription_attributes.erb +++ b/templates/inspec/examples/google_pubsub_subscription/google_pubsub_subscription_attributes.erb @@ -1,2 +1,2 @@ -gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute('gcp_project_id') -%>', description: 'The GCP project identifier.') -subscription = attribute('subscription', default: <%= grab_attributes['subscription'] -%>) +gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute(pwd, 'gcp_project_id') -%>', description: 'The GCP project identifier.') +subscription = attribute('subscription', default: <%= grab_attributes(pwd)['subscription'] -%>) diff --git a/templates/inspec/examples/google_pubsub_subscription/google_pubsub_subscriptions.erb b/templates/inspec/examples/google_pubsub_subscription/google_pubsub_subscriptions.erb index 3c062ba63735..a934744ac1e3 100644 --- a/templates/inspec/examples/google_pubsub_subscription/google_pubsub_subscriptions.erb +++ b/templates/inspec/examples/google_pubsub_subscription/google_pubsub_subscriptions.erb @@ -1,5 +1,5 @@ -<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%> -<% subscription = grab_attributes['subscription'] -%> +<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%> +<% subscription = grab_attributes(pwd)['subscription'] -%> describe google_pubsub_subscriptions(project: <%= gcp_project_id -%>) do its('count') { should be >= 1 } end diff --git a/templates/inspec/examples/google_pubsub_topic/google_pubsub_topic.erb b/templates/inspec/examples/google_pubsub_topic/google_pubsub_topic.erb index c3c761e71c81..b773b8ee83cf 100644 --- a/templates/inspec/examples/google_pubsub_topic/google_pubsub_topic.erb +++ b/templates/inspec/examples/google_pubsub_topic/google_pubsub_topic.erb @@ -1,5 +1,5 @@ -<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%> -<% topic = grab_attributes['topic'] -%> +<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%> +<% topic = grab_attributes(pwd)['topic'] -%> describe google_pubsub_topic(project: <%= gcp_project_id -%>, name: <%= doc_generation ? 
"'#{topic['name']}'" : "topic['name']" -%>) do it { should exist } end diff --git a/templates/inspec/examples/google_pubsub_topic/google_pubsub_topic_attributes.erb b/templates/inspec/examples/google_pubsub_topic/google_pubsub_topic_attributes.erb index f0628e0d3d22..961a6d248b72 100644 --- a/templates/inspec/examples/google_pubsub_topic/google_pubsub_topic_attributes.erb +++ b/templates/inspec/examples/google_pubsub_topic/google_pubsub_topic_attributes.erb @@ -1,2 +1,2 @@ -gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute('gcp_project_id') -%>', description: 'The GCP project identifier.') -topic = attribute('topic', default: <%= grab_attributes['topic'] -%>) +gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute(pwd, 'gcp_project_id') -%>', description: 'The GCP project identifier.') +topic = attribute('topic', default: <%= grab_attributes(pwd)['topic'] -%>) diff --git a/templates/inspec/examples/google_pubsub_topic/google_pubsub_topics.erb b/templates/inspec/examples/google_pubsub_topic/google_pubsub_topics.erb index 0e722b93a536..ffa3b8d07497 100644 --- a/templates/inspec/examples/google_pubsub_topic/google_pubsub_topics.erb +++ b/templates/inspec/examples/google_pubsub_topic/google_pubsub_topics.erb @@ -1,5 +1,5 @@ -<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%> -<% topic = grab_attributes['topic'] -%> +<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%> +<% topic = grab_attributes(pwd)['topic'] -%> describe google_pubsub_topics(project: <%= gcp_project_id -%>) do it { should exist } its('names') { should include <%= doc_generation ? "'#{topic['name']}'" : "topic['name']" -%> } diff --git a/templates/inspec/examples/google_redis_instance/google_redis_instance.erb b/templates/inspec/examples/google_redis_instance/google_redis_instance.erb index 352b8fbbe600..3684e715611b 100644 --- a/templates/inspec/examples/google_redis_instance/google_redis_instance.erb +++ b/templates/inspec/examples/google_redis_instance/google_redis_instance.erb @@ -1,5 +1,5 @@ -<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%> -<% redis = grab_attributes['redis'] -%> +<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%> +<% redis = grab_attributes(pwd)['redis'] -%> describe google_redis_instance(project: <%= gcp_project_id -%>, region: <%= doc_generation ? "'#{redis['region']}'" : "redis['region']" -%>, name: <%= doc_generation ? "'#{redis['name']}'" : "redis['name']" -%>) do it { should exist } its('tier') { should cmp <%= doc_generation ? 
"'#{redis['tier']}'" : "redis['tier']" -%> } diff --git a/templates/inspec/examples/google_redis_instance/google_redis_instance_attributes.erb b/templates/inspec/examples/google_redis_instance/google_redis_instance_attributes.erb index 006e5a5d0593..47e7bd056295 100644 --- a/templates/inspec/examples/google_redis_instance/google_redis_instance_attributes.erb +++ b/templates/inspec/examples/google_redis_instance/google_redis_instance_attributes.erb @@ -1,2 +1,2 @@ -gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute('gcp_project_id') -%>', description: 'The GCP project identifier.') -redis = attribute('redis', default: <%= grab_attributes['redis'] -%>) +gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute(pwd, 'gcp_project_id') -%>', description: 'The GCP project identifier.') +redis = attribute('redis', default: <%= grab_attributes(pwd)['redis'] -%>) diff --git a/templates/inspec/examples/google_redis_instance/google_redis_instances.erb b/templates/inspec/examples/google_redis_instance/google_redis_instances.erb index 549e71f7b8f9..21e6d5cd9186 100644 --- a/templates/inspec/examples/google_redis_instance/google_redis_instances.erb +++ b/templates/inspec/examples/google_redis_instance/google_redis_instances.erb @@ -1,5 +1,5 @@ -<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%> -<% redis = grab_attributes['redis'] -%> +<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%> +<% redis = grab_attributes(pwd)['redis'] -%> describe google_redis_instances(project: <%= gcp_project_id -%>, region: <%= doc_generation ? "'#{redis['region']}'" : "redis['region']" -%>) do its('tiers') { should include <%= doc_generation ? "'#{redis['tier']}'" : "redis['tier']" -%> } its('memory_size_gbs') { should include <%= doc_generation ? "'#{redis['memory_size_gb']}'" : "redis['memory_size_gb']" -%> } diff --git a/templates/inspec/examples/google_resourcemanager_folder/google_resourcemanager_folder.erb b/templates/inspec/examples/google_resourcemanager_folder/google_resourcemanager_folder.erb index 53e80492d88c..b24306bc65e5 100644 --- a/templates/inspec/examples/google_resourcemanager_folder/google_resourcemanager_folder.erb +++ b/templates/inspec/examples/google_resourcemanager_folder/google_resourcemanager_folder.erb @@ -1,4 +1,4 @@ -<% folder = grab_attributes['folder'] -%> +<% folder = grab_attributes(pwd)['folder'] -%> describe.one do google_resourcemanager_folders(parent: <%= doc_generation ? 
"'organizations/12345'" : "\"organizations/\#{gcp_organization_id}\"" -%>).names.each do |name| describe google_resourcemanager_folder(name: name) do diff --git a/templates/inspec/examples/google_resourcemanager_folder/google_resourcemanager_folder_attributes.erb b/templates/inspec/examples/google_resourcemanager_folder/google_resourcemanager_folder_attributes.erb index 2676d1ca4f6b..bb0eff2865fd 100644 --- a/templates/inspec/examples/google_resourcemanager_folder/google_resourcemanager_folder_attributes.erb +++ b/templates/inspec/examples/google_resourcemanager_folder/google_resourcemanager_folder_attributes.erb @@ -1,3 +1,3 @@ -folder = attribute('folder', default: <%= grab_attributes['folder'] -%>) -gcp_organization_id = attribute(:gcp_organization_id, default: <%= external_attribute('gcp_organization_id') -%>, description: 'The identifier of the organization that is the parent of this folder') +folder = attribute('folder', default: <%= grab_attributes(pwd)['folder'] -%>) +gcp_organization_id = attribute(:gcp_organization_id, default: <%= external_attribute(pwd, 'gcp_organization_id') -%>, description: 'The identifier of the organization that is the parent of this folder') gcp_enable_privileged_resources = attribute(:gcp_enable_privileged_resources, default:0, description:'Flag to enable privileged resources requiring elevated privileges in GCP.') \ No newline at end of file diff --git a/templates/inspec/examples/google_resourcemanager_folder/google_resourcemanager_folders.erb b/templates/inspec/examples/google_resourcemanager_folder/google_resourcemanager_folders.erb index 7b7b4c9c8c4c..d0ce2596ddba 100644 --- a/templates/inspec/examples/google_resourcemanager_folder/google_resourcemanager_folders.erb +++ b/templates/inspec/examples/google_resourcemanager_folder/google_resourcemanager_folders.erb @@ -1,4 +1,4 @@ -<% folder = grab_attributes['folder'] -%> +<% folder = grab_attributes(pwd)['folder'] -%> describe.one do google_resourcemanager_folders(parent: <%= doc_generation ? "'organizations/12345'" : "\"organizations/\#{gcp_organization_id}\"" -%>).display_names.each do |display_name| describe display_name do diff --git a/templates/inspec/examples/google_runtime_config_config/google_runtime_config_config.erb b/templates/inspec/examples/google_runtime_config_config/google_runtime_config_config.erb index 1a51cf193be4..9be54812764d 100644 --- a/templates/inspec/examples/google_runtime_config_config/google_runtime_config_config.erb +++ b/templates/inspec/examples/google_runtime_config_config/google_runtime_config_config.erb @@ -1,5 +1,5 @@ -<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%> -<% runtimeconfig_config = grab_attributes['runtimeconfig_config'] -%> +<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%> +<% runtimeconfig_config = grab_attributes(pwd)['runtimeconfig_config'] -%> describe google_runtime_config_config(project: <%= gcp_project_id -%>, name: <%= doc_generation ? "'#{runtimeconfig_config['name']}'" : "runtimeconfig_config['name']" -%>) do it { should exist } its('description') { should cmp <%= doc_generation ? 
"'#{runtimeconfig_config['description']}'" : "runtimeconfig_config['description']" -%> } diff --git a/templates/inspec/examples/google_runtime_config_config/google_runtime_config_config_attributes.erb b/templates/inspec/examples/google_runtime_config_config/google_runtime_config_config_attributes.erb index 5e80fedc0e38..dd6c245f54b7 100644 --- a/templates/inspec/examples/google_runtime_config_config/google_runtime_config_config_attributes.erb +++ b/templates/inspec/examples/google_runtime_config_config/google_runtime_config_config_attributes.erb @@ -1,2 +1,2 @@ -gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute('gcp_project_id') -%>', description: 'The GCP project identifier.') -runtimeconfig_config = attribute('runtimeconfig_config', default: <%= grab_attributes['runtimeconfig_config'] -%>) +gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute(pwd, 'gcp_project_id') -%>', description: 'The GCP project identifier.') +runtimeconfig_config = attribute('runtimeconfig_config', default: <%= grab_attributes(pwd)['runtimeconfig_config'] -%>) diff --git a/templates/inspec/examples/google_runtime_config_config/google_runtime_config_configs.erb b/templates/inspec/examples/google_runtime_config_config/google_runtime_config_configs.erb index 478c554a2b8f..7879479232d1 100644 --- a/templates/inspec/examples/google_runtime_config_config/google_runtime_config_configs.erb +++ b/templates/inspec/examples/google_runtime_config_config/google_runtime_config_configs.erb @@ -1,5 +1,5 @@ -<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%> -<% runtimeconfig_config = grab_attributes['runtimeconfig_config'] -%> +<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%> +<% runtimeconfig_config = grab_attributes(pwd)['runtimeconfig_config'] -%> describe google_runtime_config_configs(project: <%= gcp_project_id -%>) do its('descriptions') { should include <%= doc_generation ? "'#{runtimeconfig_config['description']}'" : "runtimeconfig_config['description']" -%> } end \ No newline at end of file diff --git a/templates/inspec/examples/google_runtime_config_variable/google_runtime_config_variable.erb b/templates/inspec/examples/google_runtime_config_variable/google_runtime_config_variable.erb index cdc396302d0a..d530f48413b3 100644 --- a/templates/inspec/examples/google_runtime_config_variable/google_runtime_config_variable.erb +++ b/templates/inspec/examples/google_runtime_config_variable/google_runtime_config_variable.erb @@ -1,6 +1,6 @@ -<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%> -<% runtimeconfig_config = grab_attributes['runtimeconfig_config'] -%> -<% runtimeconfig_variable = grab_attributes['runtimeconfig_variable'] -%> +<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%> +<% runtimeconfig_config = grab_attributes(pwd)['runtimeconfig_config'] -%> +<% runtimeconfig_variable = grab_attributes(pwd)['runtimeconfig_variable'] -%> describe google_runtime_config_variable(project: <%= gcp_project_id -%>, config: <%= doc_generation ? "'#{runtimeconfig_config['name']}'" : "runtimeconfig_config['name']" -%>, name: <%= doc_generation ? "'#{runtimeconfig_variable['name']}'" : "runtimeconfig_variable['name']" -%>) do it { should exist } its('text') { should cmp <%= doc_generation ? 
"'#{runtimeconfig_variable['text']}'" : "runtimeconfig_variable['text']" -%> } diff --git a/templates/inspec/examples/google_runtime_config_variable/google_runtime_config_variable_attributes.erb b/templates/inspec/examples/google_runtime_config_variable/google_runtime_config_variable_attributes.erb index eb4cf653d070..8fe1e21ecc0e 100644 --- a/templates/inspec/examples/google_runtime_config_variable/google_runtime_config_variable_attributes.erb +++ b/templates/inspec/examples/google_runtime_config_variable/google_runtime_config_variable_attributes.erb @@ -1,3 +1,3 @@ -gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute('gcp_project_id') -%>', description: 'The GCP project identifier.') -runtimeconfig_config = attribute('runtimeconfig_config', default: <%= grab_attributes['runtimeconfig_config'] -%>) -runtimeconfig_variable = attribute('runtimeconfig_variable', default: <%= grab_attributes['runtimeconfig_variable'] -%>) +gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute(pwd, 'gcp_project_id') -%>', description: 'The GCP project identifier.') +runtimeconfig_config = attribute('runtimeconfig_config', default: <%= grab_attributes(pwd)['runtimeconfig_config'] -%>) +runtimeconfig_variable = attribute('runtimeconfig_variable', default: <%= grab_attributes(pwd)['runtimeconfig_variable'] -%>) diff --git a/templates/inspec/examples/google_runtime_config_variable/google_runtime_config_variables.erb b/templates/inspec/examples/google_runtime_config_variable/google_runtime_config_variables.erb index a34007a185f2..611d15a77f2d 100644 --- a/templates/inspec/examples/google_runtime_config_variable/google_runtime_config_variables.erb +++ b/templates/inspec/examples/google_runtime_config_variable/google_runtime_config_variables.erb @@ -1,6 +1,6 @@ -<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%> -<% runtimeconfig_config = grab_attributes['runtimeconfig_config'] -%> -<% runtimeconfig_variable = grab_attributes['runtimeconfig_variable'] -%> +<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%> +<% runtimeconfig_config = grab_attributes(pwd)['runtimeconfig_config'] -%> +<% runtimeconfig_variable = grab_attributes(pwd)['runtimeconfig_variable'] -%> describe google_runtime_config_variables(project: <%= gcp_project_id -%>, config: <%= doc_generation ? "'#{runtimeconfig_config['name']}'" : "runtimeconfig_config['name']" -%>) do its('texts') { should include <%= doc_generation ? "'#{runtimeconfig_variable['text']}'" : "runtimeconfig_variable['text']" -%> } end \ No newline at end of file diff --git a/templates/inspec/examples/google_service_account/google_service_account.erb b/templates/inspec/examples/google_service_account/google_service_account.erb index 724fa9bdf614..6027fde92248 100644 --- a/templates/inspec/examples/google_service_account/google_service_account.erb +++ b/templates/inspec/examples/google_service_account/google_service_account.erb @@ -1,5 +1,5 @@ -<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%> -<% gcp_service_account_display_name = "#{external_attribute('gcp_service_account_display_name', doc_generation)}" -%> +<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%> +<% gcp_service_account_display_name = "#{external_attribute(pwd, 'gcp_service_account_display_name', doc_generation)}" -%> describe google_service_account(project: <%= gcp_project_id -%>, name: "<%= doc_generation ? 
"display-name" : "\#{gcp_service_account_display_name}" -%>@<%= doc_generation ? "project-id" : "\#{gcp_project_id}" -%>.iam.gserviceaccount.com") do it { should exist } its('display_name') { should cmp <%= gcp_service_account_display_name -%> } diff --git a/templates/inspec/examples/google_service_account/google_service_account_attributes.erb b/templates/inspec/examples/google_service_account/google_service_account_attributes.erb index ea7e09771830..0ec35cfa0e64 100644 --- a/templates/inspec/examples/google_service_account/google_service_account_attributes.erb +++ b/templates/inspec/examples/google_service_account/google_service_account_attributes.erb @@ -1,4 +1,4 @@ -gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute('gcp_project_id') -%>', description: 'The GCP project identifier.') -gcp_service_account_display_name = attribute(:gcp_service_account_display_name, default: '<%= external_attribute('gcp_service_account_display_name') -%>', description: 'The IAM service account display name.') +gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute(pwd, 'gcp_project_id') -%>', description: 'The GCP project identifier.') +gcp_service_account_display_name = attribute(:gcp_service_account_display_name, default: '<%= external_attribute(pwd, 'gcp_service_account_display_name') -%>', description: 'The IAM service account display name.') gcp_enable_privileged_resources = attribute(:gcp_enable_privileged_resources, default:0, description:'Flag to enable privileged resources requiring elevated privileges in GCP.') -gcp_organization_id = attribute(:gcp_organization_id, default: <%= external_attribute('gcp_organization_id') -%>, description: 'The identifier of the organization') \ No newline at end of file +gcp_organization_id = attribute(:gcp_organization_id, default: <%= external_attribute(pwd, 'gcp_organization_id') -%>, description: 'The identifier of the organization') \ No newline at end of file diff --git a/templates/inspec/examples/google_service_account/google_service_accounts.erb b/templates/inspec/examples/google_service_account/google_service_accounts.erb index a882be63d2ed..72ff1a464f85 100644 --- a/templates/inspec/examples/google_service_account/google_service_accounts.erb +++ b/templates/inspec/examples/google_service_account/google_service_accounts.erb @@ -1,5 +1,5 @@ -<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%> -<% gcp_service_account_display_name = "#{external_attribute('gcp_service_account_display_name', doc_generation)}" -%> +<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%> +<% gcp_service_account_display_name = "#{external_attribute(pwd, 'gcp_service_account_display_name', doc_generation)}" -%> describe google_service_accounts(project: <%= gcp_project_id -%>, name: "<%= doc_generation ? "display-name" : "\#{gcp_service_account_display_name}" -%>@<%= doc_generation ? "project-id" : "\#{gcp_project_id}" -%>.iam.gserviceaccount.com") do its('service_account_emails') { should include "<%= doc_generation ? "display-name" : "\#{gcp_service_account_display_name}" -%>@<%= doc_generation ? 
"project-id" : "\#{gcp_project_id}" -%>.iam.gserviceaccount.com" } its('count') { should be <= 1000 } diff --git a/templates/inspec/examples/google_service_account_key/google_service_account_key.erb b/templates/inspec/examples/google_service_account_key/google_service_account_key.erb index 86286b87c5af..fc5b5c137f93 100644 --- a/templates/inspec/examples/google_service_account_key/google_service_account_key.erb +++ b/templates/inspec/examples/google_service_account_key/google_service_account_key.erb @@ -1,5 +1,8 @@ -<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%> -<% gcp_service_account_display_name = "#{external_attribute('gcp_service_account_display_name', doc_generation)}" -%> +<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%> +<% gcp_service_account_display_name = "#{external_attribute(pwd, 'gcp_service_account_display_name', doc_generation)}" -%> google_service_account_keys(project: <%= gcp_project_id -%>, service_account: "<%= doc_generation ? "display-name" : "\#{gcp_service_account_display_name}" -%>@<%= doc_generation ? "project-id" : "\#{gcp_project_id}" -%>.iam.gserviceaccount.com").key_names.each do |sa_key_name| - describe + describe google_service_account_key(project: <%= gcp_project_id -%>, service_account: "<%= doc_generation ? "display-name" : "\#{gcp_service_account_display_name}" -%>@<%= doc_generation ? "project-id" : "\#{gcp_project_id}" -%>.iam.gserviceaccount.com", name: sa_key_name.split('/').last) do + it { should exist } + its('key_type') { should_not cmp 'USER_MANAGED' } + end end \ No newline at end of file diff --git a/templates/inspec/examples/google_service_account_key/google_service_account_key_attributes.erb b/templates/inspec/examples/google_service_account_key/google_service_account_key_attributes.erb index ea7e09771830..0ec35cfa0e64 100644 --- a/templates/inspec/examples/google_service_account_key/google_service_account_key_attributes.erb +++ b/templates/inspec/examples/google_service_account_key/google_service_account_key_attributes.erb @@ -1,4 +1,4 @@ -gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute('gcp_project_id') -%>', description: 'The GCP project identifier.') -gcp_service_account_display_name = attribute(:gcp_service_account_display_name, default: '<%= external_attribute('gcp_service_account_display_name') -%>', description: 'The IAM service account display name.') +gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute(pwd, 'gcp_project_id') -%>', description: 'The GCP project identifier.') +gcp_service_account_display_name = attribute(:gcp_service_account_display_name, default: '<%= external_attribute(pwd, 'gcp_service_account_display_name') -%>', description: 'The IAM service account display name.') gcp_enable_privileged_resources = attribute(:gcp_enable_privileged_resources, default:0, description:'Flag to enable privileged resources requiring elevated privileges in GCP.') -gcp_organization_id = attribute(:gcp_organization_id, default: <%= external_attribute('gcp_organization_id') -%>, description: 'The identifier of the organization') \ No newline at end of file +gcp_organization_id = attribute(:gcp_organization_id, default: <%= external_attribute(pwd, 'gcp_organization_id') -%>, description: 'The identifier of the organization') \ No newline at end of file diff --git a/templates/inspec/examples/google_service_account_key/google_service_account_keys.erb 
diff --git a/templates/inspec/examples/google_service_account_key/google_service_account_keys.erb b/templates/inspec/examples/google_service_account_key/google_service_account_keys.erb
index 363d7a6056f0..052d62f3ee30 100644
--- a/templates/inspec/examples/google_service_account_key/google_service_account_keys.erb
+++ b/templates/inspec/examples/google_service_account_key/google_service_account_keys.erb
@@ -1,5 +1,6 @@
-<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%>
-<% gcp_service_account_display_name = "#{external_attribute('gcp_service_account_display_name', doc_generation)}" -%>
+<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%>
+<% gcp_service_account_display_name = "#{external_attribute(pwd, 'gcp_service_account_display_name', doc_generation)}" -%>
 describe google_service_account_keys(project: <%= gcp_project_id -%>, service_account: "<%= doc_generation ? "display-name" : "\#{gcp_service_account_display_name}" -%>@<%= doc_generation ? "project-id" : "\#{gcp_project_id}" -%>.iam.gserviceaccount.com") do
   its('count') { should be <= 1000 }
+  its('key_types') { should_not include 'USER_MANAGED' }
 end
\ No newline at end of file
diff --git a/templates/inspec/examples/google_sourcerepo_repository/google_sourcerepo_repositories.erb b/templates/inspec/examples/google_sourcerepo_repository/google_sourcerepo_repositories.erb
index 2f85226d9ad3..0c3e3adf3f99 100644
--- a/templates/inspec/examples/google_sourcerepo_repository/google_sourcerepo_repositories.erb
+++ b/templates/inspec/examples/google_sourcerepo_repository/google_sourcerepo_repositories.erb
@@ -1,5 +1,5 @@
-<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%>
-<% repository = grab_attributes['repository'] -%>
+<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%>
+<% repository = grab_attributes(pwd)['repository'] -%>
 repo_name = <%= doc_generation ? "'#{repository['name']}'" : "repository['name']" %>
 describe.one do
   google_sourcerepo_repositories(project: <%= gcp_project_id -%>).names.each do |name|
diff --git a/templates/inspec/examples/google_sourcerepo_repository/google_sourcerepo_repository.erb b/templates/inspec/examples/google_sourcerepo_repository/google_sourcerepo_repository.erb
index b4c9bbc6193f..469e77755211 100644
--- a/templates/inspec/examples/google_sourcerepo_repository/google_sourcerepo_repository.erb
+++ b/templates/inspec/examples/google_sourcerepo_repository/google_sourcerepo_repository.erb
@@ -1,5 +1,5 @@
-<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%>
-<% repository = grab_attributes['repository'] -%>
+<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%>
+<% repository = grab_attributes(pwd)['repository'] -%>
 describe google_sourcerepo_repository(project: <%= gcp_project_id -%>, name: <%= doc_generation ? "'#{repository['name']}'" : "repository['name']" -%>) do
   it { should exist }
 end
diff --git a/templates/inspec/examples/google_sourcerepo_repository/google_sourcerepo_repository_attributes.erb b/templates/inspec/examples/google_sourcerepo_repository/google_sourcerepo_repository_attributes.erb
index eb1280931aee..027498314e4d 100644
--- a/templates/inspec/examples/google_sourcerepo_repository/google_sourcerepo_repository_attributes.erb
+++ b/templates/inspec/examples/google_sourcerepo_repository/google_sourcerepo_repository_attributes.erb
@@ -1,2 +1,2 @@
-gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute('gcp_project_id') -%>', description: 'The GCP project identifier.')
-repository = attribute('repository', default: <%= JSON.pretty_generate(grab_attributes['repository']) -%>, description: 'Source Repository definition')
\ No newline at end of file
+gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute(pwd, 'gcp_project_id') -%>', description: 'The GCP project identifier.')
+repository = attribute('repository', default: <%= JSON.pretty_generate(grab_attributes(pwd)['repository']) -%>, description: 'Source Repository definition')
\ No newline at end of file
diff --git a/templates/inspec/examples/google_spanner_database/google_spanner_database.erb b/templates/inspec/examples/google_spanner_database/google_spanner_database.erb
index 487ce68cc262..62db1c769db2 100644
--- a/templates/inspec/examples/google_spanner_database/google_spanner_database.erb
+++ b/templates/inspec/examples/google_spanner_database/google_spanner_database.erb
@@ -1,5 +1,5 @@
-<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%>
-<% spannerdatabase = grab_attributes['spannerdatabase'] -%>
+<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%>
+<% spannerdatabase = grab_attributes(pwd)['spannerdatabase'] -%>
 describe google_spanner_database(project: <%= gcp_project_id -%>, instance: <%= doc_generation ? "'#{spannerdatabase['instance']}'" : "spannerdatabase['instance']" -%>, name: <%= doc_generation ? "'#{spannerdatabase['name']}'" : "spannerdatabase['name']" -%>) do
   it { should exist }
diff --git a/templates/inspec/examples/google_spanner_database/google_spanner_database_attributes.erb b/templates/inspec/examples/google_spanner_database/google_spanner_database_attributes.erb
index 93a24ba09df1..a31c01b02921 100644
--- a/templates/inspec/examples/google_spanner_database/google_spanner_database_attributes.erb
+++ b/templates/inspec/examples/google_spanner_database/google_spanner_database_attributes.erb
@@ -1,2 +1,2 @@
-gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute('gcp_project_id') -%>', description: 'The GCP project identifier.')
-spannerdatabase = attribute('spannerdatabase', default: <%= JSON.pretty_generate(grab_attributes['spannerdatabase']) -%>, description: 'Cloud Spanner definition')
+gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute(pwd, 'gcp_project_id') -%>', description: 'The GCP project identifier.')
+spannerdatabase = attribute('spannerdatabase', default: <%= JSON.pretty_generate(grab_attributes(pwd)['spannerdatabase']) -%>, description: 'Cloud Spanner definition')
diff --git a/templates/inspec/examples/google_spanner_database/google_spanner_databases.erb b/templates/inspec/examples/google_spanner_database/google_spanner_databases.erb
index c29f3cb3c7ab..064ec0622361 100644
--- a/templates/inspec/examples/google_spanner_database/google_spanner_databases.erb
+++ b/templates/inspec/examples/google_spanner_database/google_spanner_databases.erb
@@ -1,5 +1,5 @@
-<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%>
-<% spannerdatabase = grab_attributes['spannerdatabase'] -%>
+<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%>
+<% spannerdatabase = grab_attributes(pwd)['spannerdatabase'] -%>
 describe.one do
   google_spanner_databases(project: <%= gcp_project_id -%>, instance: <%= doc_generation ? "'#{spannerdatabase['instance']}'" : "spannerdatabase['instance']" -%>).names.each do |name|
"'#{spannerinstance['config']}'" : "spannerinstance['config']" -%>) do it { should exist } diff --git a/templates/inspec/examples/google_spanner_instance/google_spanner_instance_attributes.erb b/templates/inspec/examples/google_spanner_instance/google_spanner_instance_attributes.erb index ba0396d56f0e..1c8ab339e0c8 100644 --- a/templates/inspec/examples/google_spanner_instance/google_spanner_instance_attributes.erb +++ b/templates/inspec/examples/google_spanner_instance/google_spanner_instance_attributes.erb @@ -1,2 +1,2 @@ -gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute('gcp_project_id') -%>', description: 'The GCP project identifier.') -spannerinstance = attribute('spannerinstance', default: <%= JSON.pretty_generate(grab_attributes['spannerinstance']) -%>, description: 'Cloud Spanner definition') +gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute(pwd, 'gcp_project_id') -%>', description: 'The GCP project identifier.') +spannerinstance = attribute('spannerinstance', default: <%= JSON.pretty_generate(grab_attributes(pwd)['spannerinstance']) -%>, description: 'Cloud Spanner definition') diff --git a/templates/inspec/examples/google_spanner_instance/google_spanner_instances.erb b/templates/inspec/examples/google_spanner_instance/google_spanner_instances.erb index 3ddaa51ab622..0b11a9afddca 100644 --- a/templates/inspec/examples/google_spanner_instance/google_spanner_instances.erb +++ b/templates/inspec/examples/google_spanner_instance/google_spanner_instances.erb @@ -1,5 +1,5 @@ -<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%> -<% spannerinstance = grab_attributes['spannerinstance'] -%> +<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%> +<% spannerinstance = grab_attributes(pwd)['spannerinstance'] -%> describe.one do google_spanner_instances(project: <%= gcp_project_id -%>, config: <%= doc_generation ? 
"'#{spannerinstance['config']}'" : "spannerinstance['config']" -%>).configs.each do |config| diff --git a/templates/inspec/examples/google_sql_database_instance/google_sql_database_instance.erb b/templates/inspec/examples/google_sql_database_instance/google_sql_database_instance.erb index f2e28abcf105..728842daa274 100644 --- a/templates/inspec/examples/google_sql_database_instance/google_sql_database_instance.erb +++ b/templates/inspec/examples/google_sql_database_instance/google_sql_database_instance.erb @@ -1,6 +1,6 @@ -<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%> -<% gcp_location = "#{external_attribute('gcp_location', doc_generation)}" -%> -<% gcp_db_instance_name = "#{external_attribute('gcp_db_instance_name', doc_generation)}" -%> +<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%> +<% gcp_location = "#{external_attribute(pwd, 'gcp_location', doc_generation)}" -%> +<% gcp_db_instance_name = "#{external_attribute(pwd, 'gcp_db_instance_name', doc_generation)}" -%> describe google_sql_database_instance(project: <%= gcp_project_id -%>, database: <%= gcp_db_instance_name -%>) do it { should exist } diff --git a/templates/inspec/examples/google_sql_database_instance/google_sql_database_instance_attributes.erb b/templates/inspec/examples/google_sql_database_instance/google_sql_database_instance_attributes.erb index a1a3b190f72e..d5b9382713c6 100644 --- a/templates/inspec/examples/google_sql_database_instance/google_sql_database_instance_attributes.erb +++ b/templates/inspec/examples/google_sql_database_instance/google_sql_database_instance_attributes.erb @@ -1,3 +1,3 @@ -gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute('gcp_project_id') -%>', description: 'The GCP project identifier.') -gcp_location = attribute(:gcp_location, default: '<%= external_attribute('gcp_location') -%>', description: 'The GCP project location.') -gcp_db_instance_name = attribute(:gcp_db_instance_name, default: '<%= external_attribute('gcp_db_instance_name') -%>', description: 'Database instance name.') \ No newline at end of file +gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute(pwd, 'gcp_project_id') -%>', description: 'The GCP project identifier.') +gcp_location = attribute(:gcp_location, default: '<%= external_attribute(pwd, 'gcp_location') -%>', description: 'The GCP project location.') +gcp_db_instance_name = attribute(:gcp_db_instance_name, default: '<%= external_attribute(pwd, 'gcp_db_instance_name') -%>', description: 'Database instance name.') \ No newline at end of file diff --git a/templates/inspec/examples/google_sql_database_instance/google_sql_database_instances.erb b/templates/inspec/examples/google_sql_database_instance/google_sql_database_instances.erb index e67dc17344e9..6d9fd569ef8a 100644 --- a/templates/inspec/examples/google_sql_database_instance/google_sql_database_instances.erb +++ b/templates/inspec/examples/google_sql_database_instance/google_sql_database_instances.erb @@ -1,6 +1,6 @@ -<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%> -<% gcp_location = "#{external_attribute('gcp_location', doc_generation)}" -%> -<% gcp_db_instance_name = "#{external_attribute('gcp_db_instance_name', doc_generation)}" -%> +<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%> +<% gcp_location = "#{external_attribute(pwd, 'gcp_location', doc_generation)}" -%> +<% gcp_db_instance_name = "#{external_attribute(pwd, 
'gcp_db_instance_name', doc_generation)}" -%> describe google_sql_database_instances(project: <%= gcp_project_id -%>) do its('instance_states') { should include 'RUNNABLE' } diff --git a/templates/inspec/examples/google_sql_user/google_sql_user.erb b/templates/inspec/examples/google_sql_user/google_sql_user.erb index cb5fff46f9e5..df867ff15c8e 100644 --- a/templates/inspec/examples/google_sql_user/google_sql_user.erb +++ b/templates/inspec/examples/google_sql_user/google_sql_user.erb @@ -1,7 +1,7 @@ -<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%> -<% gcp_location = "#{external_attribute('gcp_location', doc_generation)}" -%> -<% gcp_db_instance_name = "#{external_attribute('gcp_db_instance_name', doc_generation)}" -%> -<% gcp_db_user_name = "#{external_attribute('gcp_db_user_name', doc_generation)}" -%> +<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%> +<% gcp_location = "#{external_attribute(pwd, 'gcp_location', doc_generation)}" -%> +<% gcp_db_instance_name = "#{external_attribute(pwd, 'gcp_db_instance_name', doc_generation)}" -%> +<% gcp_db_user_name = "#{external_attribute(pwd, 'gcp_db_user_name', doc_generation)}" -%> describe google_sql_user(project: <%= gcp_project_id -%>, database: <%= gcp_db_instance_name -%>, name: <%= gcp_db_user_name -%>, host: "example.com") do it { should exist } diff --git a/templates/inspec/examples/google_sql_user/google_sql_user_attributes.erb b/templates/inspec/examples/google_sql_user/google_sql_user_attributes.erb index 0586a7363233..58477507e9a1 100644 --- a/templates/inspec/examples/google_sql_user/google_sql_user_attributes.erb +++ b/templates/inspec/examples/google_sql_user/google_sql_user_attributes.erb @@ -1,4 +1,4 @@ -gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute('gcp_project_id') -%>', description: 'The GCP project identifier.') -gcp_location = attribute(:gcp_location, default: '<%= external_attribute('gcp_location') -%>', description: 'The GCP project location.') -gcp_db_instance_name = attribute(:gcp_db_instance_name, default: '<%= external_attribute('gcp_db_instance_name') -%>', description: 'Database instance name.') -gcp_db_user_name = attribute(:gcp_db_user_name, default: '<%= external_attribute('gcp_db_user_name') -%>', description: 'SQL database user name.') \ No newline at end of file +gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute(pwd, 'gcp_project_id') -%>', description: 'The GCP project identifier.') +gcp_location = attribute(:gcp_location, default: '<%= external_attribute(pwd, 'gcp_location') -%>', description: 'The GCP project location.') +gcp_db_instance_name = attribute(:gcp_db_instance_name, default: '<%= external_attribute(pwd, 'gcp_db_instance_name') -%>', description: 'Database instance name.') +gcp_db_user_name = attribute(:gcp_db_user_name, default: '<%= external_attribute(pwd, 'gcp_db_user_name') -%>', description: 'SQL database user name.') \ No newline at end of file diff --git a/templates/inspec/examples/google_sql_user/google_sql_users.erb b/templates/inspec/examples/google_sql_user/google_sql_users.erb index 004c03fbce49..b5c3dad3ffdb 100644 --- a/templates/inspec/examples/google_sql_user/google_sql_users.erb +++ b/templates/inspec/examples/google_sql_user/google_sql_users.erb @@ -1,7 +1,7 @@ -<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%> -<% gcp_location = "#{external_attribute('gcp_location', doc_generation)}" -%> -<% gcp_db_instance_name = 
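Nearly every example template in this diff leans on the same ternary: in doc-generation mode, inline the attribute's value as a quoted literal for the generated README; otherwise emit a reference to the attribute variable so the compiled test reads it at run time. A minimal sketch of the two render modes (the `STANDARD_HA` value is made up):

```ruby
# redis stands in for any hash returned by grab_attributes(pwd).
redis = { 'tier' => 'STANDARD_HA' }

def tier_fragment(redis, doc_generation)
  doc_generation ? "'#{redis['tier']}'" : "redis['tier']"
end

tier_fragment(redis, true)   # => "'STANDARD_HA'"  (literal, for docs)
tier_fragment(redis, false)  # => "redis['tier']"  (reference, for tests)
```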
"#{external_attribute('gcp_db_instance_name', doc_generation)}" -%> -<% gcp_db_user_name = "#{external_attribute('gcp_db_user_name', doc_generation)}" -%> +<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%> +<% gcp_location = "#{external_attribute(pwd, 'gcp_location', doc_generation)}" -%> +<% gcp_db_instance_name = "#{external_attribute(pwd, 'gcp_db_instance_name', doc_generation)}" -%> +<% gcp_db_user_name = "#{external_attribute(pwd, 'gcp_db_user_name', doc_generation)}" -%> describe google_sql_users(project: <%= gcp_project_id -%>, database: <%= gcp_db_instance_name -%>) do its('user_names') { should include <%= gcp_db_user_name -%> } diff --git a/templates/inspec/examples/google_storage_bucket/google_storage_bucket.erb b/templates/inspec/examples/google_storage_bucket/google_storage_bucket.erb index 2e4ba4e8ed03..5cb98f6ee78b 100644 --- a/templates/inspec/examples/google_storage_bucket/google_storage_bucket.erb +++ b/templates/inspec/examples/google_storage_bucket/google_storage_bucket.erb @@ -1,11 +1,12 @@ -<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%> -<% gcp_location = "#{external_attribute('gcp_location', doc_generation)}" -%> +<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%> +<% gcp_location = "#{external_attribute(pwd, 'gcp_location', doc_generation)}" -%> describe google_storage_bucket(name: <%= doc_generation ? "bucket-name" : "\"inspec-gcp-static-\#{gcp_project_id}\"" -%>) do it { should exist } its('location') { should cmp <%= gcp_location -%>.upcase } its('storage_class') { should eq "STANDARD" } its('labels') { should include("key" => "value") } + its('retention_policy.retention_period') { should cmp 1000 } end describe google_storage_bucket(name: "nonexistent") do diff --git a/templates/inspec/examples/google_storage_bucket/google_storage_bucket_attributes.erb b/templates/inspec/examples/google_storage_bucket/google_storage_bucket_attributes.erb index fab87fb26e8a..447e1ec7dbf9 100644 --- a/templates/inspec/examples/google_storage_bucket/google_storage_bucket_attributes.erb +++ b/templates/inspec/examples/google_storage_bucket/google_storage_bucket_attributes.erb @@ -1,2 +1,2 @@ -gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute('gcp_project_id') -%>', description: 'The GCP project identifier.') -gcp_location = attribute(:gcp_location, default: '<%= external_attribute('gcp_location') -%>', description: 'GCP location') \ No newline at end of file +gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute(pwd, 'gcp_project_id') -%>', description: 'The GCP project identifier.') +gcp_location = attribute(:gcp_location, default: '<%= external_attribute(pwd, 'gcp_location') -%>', description: 'GCP location') \ No newline at end of file diff --git a/templates/inspec/examples/google_storage_bucket/google_storage_buckets.erb b/templates/inspec/examples/google_storage_bucket/google_storage_buckets.erb index 234d7f8a1760..b3dba11223f5 100644 --- a/templates/inspec/examples/google_storage_bucket/google_storage_buckets.erb +++ b/templates/inspec/examples/google_storage_bucket/google_storage_buckets.erb @@ -1,5 +1,5 @@ -<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%> -<% gcp_location = "#{external_attribute('gcp_location', doc_generation)}" -%> +<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%> +<% gcp_location = "#{external_attribute(pwd, 'gcp_location', 
doc_generation)}" -%> describe google_storage_buckets(project: <%= gcp_project_id -%>) do its('bucket_names') { should include <%= doc_generation ? "bucket-name" : "\"inspec-gcp-static-\#{gcp_project_id}\"" -%> } end \ No newline at end of file diff --git a/templates/inspec/examples/google_storage_bucket_acl/google_storage_bucket_acl.erb b/templates/inspec/examples/google_storage_bucket_acl/google_storage_bucket_acl.erb index 00bf6af334e9..699ba5f199b1 100644 --- a/templates/inspec/examples/google_storage_bucket_acl/google_storage_bucket_acl.erb +++ b/templates/inspec/examples/google_storage_bucket_acl/google_storage_bucket_acl.erb @@ -1,7 +1,7 @@ -<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%> -<% gcp_storage_bucket_acl = "#{external_attribute('gcp_storage_bucket_acl', doc_generation)}" -%> -<% gcp_service_account_display_name = "#{external_attribute('gcp_service_account_display_name', doc_generation)}" -%> -<% gcp_enable_privileged_resources = "#{external_attribute('gcp_enable_privileged_resources', doc_generation)}" -%> +<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%> +<% gcp_storage_bucket_acl = "#{external_attribute(pwd, 'gcp_storage_bucket_acl', doc_generation)}" -%> +<% gcp_service_account_display_name = "#{external_attribute(pwd, 'gcp_service_account_display_name', doc_generation)}" -%> +<% gcp_enable_privileged_resources = "#{external_attribute(pwd, 'gcp_enable_privileged_resources', doc_generation)}" -%> describe google_storage_bucket_acl(bucket: <%= gcp_storage_bucket_acl -%>, entity: <%= doc_generation ? "user-email" : "\"user-\#{gcp_service_account_display_name}@\#{gcp_project_id}.iam.gserviceaccount.com\"" -%>) do it { should exist } its('role') { should cmp "OWNER" } diff --git a/templates/inspec/examples/google_storage_bucket_acl/google_storage_bucket_acl_attributes.erb b/templates/inspec/examples/google_storage_bucket_acl/google_storage_bucket_acl_attributes.erb index 9067ed2710b6..deaa5d4869ae 100644 --- a/templates/inspec/examples/google_storage_bucket_acl/google_storage_bucket_acl_attributes.erb +++ b/templates/inspec/examples/google_storage_bucket_acl/google_storage_bucket_acl_attributes.erb @@ -1,5 +1,5 @@ -gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute('gcp_project_id') -%>', description: 'The GCP project identifier.') -gcp_storage_bucket_acl = attribute(:gcp_storage_bucket_acl, default: '<%= external_attribute('gcp_storage_bucket_acl') -%>', description: 'The name of the storage bucket with ACLs attached') -gcp_service_account_display_name = attribute(:gcp_service_account_display_name, default: '<%= external_attribute('gcp_service_account_display_name') -%>', description: 'The name of the service account assigned permissions') -gcp_enable_privileged_resources = attribute(:gcp_enable_privileged_resources, default: '<%= external_attribute('gcp_enable_privileged_resources') -%>', description: 'If we are running tests with escalated permissions(required for this test)') -gcp_organization_id = attribute(:gcp_organization_id, default: <%= external_attribute('gcp_organization_id') -%>, description: 'The identifier of the organization') \ No newline at end of file +gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute(pwd, 'gcp_project_id') -%>', description: 'The GCP project identifier.') +gcp_storage_bucket_acl = attribute(:gcp_storage_bucket_acl, default: '<%= external_attribute(pwd, 'gcp_storage_bucket_acl') -%>', description: 'The name of the 
storage bucket with ACLs attached') +gcp_service_account_display_name = attribute(:gcp_service_account_display_name, default: '<%= external_attribute(pwd, 'gcp_service_account_display_name') -%>', description: 'The name of the service account assigned permissions') +gcp_enable_privileged_resources = attribute(:gcp_enable_privileged_resources, default: '<%= external_attribute(pwd, 'gcp_enable_privileged_resources') -%>', description: 'If we are running tests with escalated permissions(required for this test)') +gcp_organization_id = attribute(:gcp_organization_id, default: <%= external_attribute(pwd, 'gcp_organization_id') -%>, description: 'The identifier of the organization') \ No newline at end of file diff --git a/templates/inspec/examples/google_storage_bucket_object/google_storage_bucket_object.erb b/templates/inspec/examples/google_storage_bucket_object/google_storage_bucket_object.erb index 54c7b928837e..56ee2223cae4 100644 --- a/templates/inspec/examples/google_storage_bucket_object/google_storage_bucket_object.erb +++ b/templates/inspec/examples/google_storage_bucket_object/google_storage_bucket_object.erb @@ -1,8 +1,8 @@ -<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%> -<% gcp_storage_bucket_object = "#{external_attribute('gcp_storage_bucket_object', doc_generation)}" -%> -<% gcp_service_account_display_name = "#{external_attribute('gcp_service_account_display_name', doc_generation)}" -%> -<% gcp_storage_bucket_object_name = "#{external_attribute('gcp_storage_bucket_object_name', doc_generation)}" -%> -<% gcp_enable_privileged_resources = "#{external_attribute('gcp_enable_privileged_resources', doc_generation)}" -%> +<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%> +<% gcp_storage_bucket_object = "#{external_attribute(pwd, 'gcp_storage_bucket_object', doc_generation)}" -%> +<% gcp_service_account_display_name = "#{external_attribute(pwd, 'gcp_service_account_display_name', doc_generation)}" -%> +<% gcp_storage_bucket_object_name = "#{external_attribute(pwd, 'gcp_storage_bucket_object_name', doc_generation)}" -%> +<% gcp_enable_privileged_resources = "#{external_attribute(pwd, 'gcp_enable_privileged_resources', doc_generation)}" -%> describe google_storage_bucket_object(bucket: <%= gcp_storage_bucket_object -%>, object: <%= gcp_storage_bucket_object_name -%>) do it { should exist } its('size.to_i') { should be > 0 } diff --git a/templates/inspec/examples/google_storage_bucket_object/google_storage_bucket_object_attributes.erb b/templates/inspec/examples/google_storage_bucket_object/google_storage_bucket_object_attributes.erb index 325d3112c7d7..1a74e108fac9 100644 --- a/templates/inspec/examples/google_storage_bucket_object/google_storage_bucket_object_attributes.erb +++ b/templates/inspec/examples/google_storage_bucket_object/google_storage_bucket_object_attributes.erb @@ -1,6 +1,6 @@ -gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute('gcp_project_id') -%>', description: 'The GCP project identifier.') -gcp_storage_bucket_object = attribute(:gcp_storage_bucket_object, default: '<%= external_attribute('gcp_storage_bucket_object') -%>', description: 'The name of the storage bucket with an object') -gcp_service_account_display_name = attribute(:gcp_service_account_display_name, default: '<%= external_attribute('gcp_service_account_display_name') -%>', description: 'The name of the service account assigned permissions') -gcp_enable_privileged_resources = 
attribute(:gcp_enable_privileged_resources, default: '<%= external_attribute('gcp_enable_privileged_resources') -%>', description: 'If we are running tests with escalated permissions(required for this test)') -gcp_storage_bucket_object_name = attribute(:gcp_storage_bucket_object_name, default: '<%= external_attribute('gcp_storage_bucket_object_name') -%>', description: 'The name of the object') -gcp_organization_id = attribute(:gcp_organization_id, default: <%= external_attribute('gcp_organization_id') -%>, description: 'The identifier of the organization') \ No newline at end of file +gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute(pwd, 'gcp_project_id') -%>', description: 'The GCP project identifier.') +gcp_storage_bucket_object = attribute(:gcp_storage_bucket_object, default: '<%= external_attribute(pwd, 'gcp_storage_bucket_object') -%>', description: 'The name of the storage bucket with an object') +gcp_service_account_display_name = attribute(:gcp_service_account_display_name, default: '<%= external_attribute(pwd, 'gcp_service_account_display_name') -%>', description: 'The name of the service account assigned permissions') +gcp_enable_privileged_resources = attribute(:gcp_enable_privileged_resources, default: '<%= external_attribute(pwd, 'gcp_enable_privileged_resources') -%>', description: 'If we are running tests with escalated permissions(required for this test)') +gcp_storage_bucket_object_name = attribute(:gcp_storage_bucket_object_name, default: '<%= external_attribute(pwd, 'gcp_storage_bucket_object_name') -%>', description: 'The name of the object') +gcp_organization_id = attribute(:gcp_organization_id, default: <%= external_attribute(pwd, 'gcp_organization_id') -%>, description: 'The identifier of the organization') \ No newline at end of file diff --git a/templates/inspec/examples/google_storage_bucket_object/google_storage_bucket_objects.erb b/templates/inspec/examples/google_storage_bucket_object/google_storage_bucket_objects.erb index b64afeba2b2a..24721f9a0139 100644 --- a/templates/inspec/examples/google_storage_bucket_object/google_storage_bucket_objects.erb +++ b/templates/inspec/examples/google_storage_bucket_object/google_storage_bucket_objects.erb @@ -1,8 +1,8 @@ -<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%> -<% gcp_storage_bucket_object = "#{external_attribute('gcp_storage_bucket_object', doc_generation)}" -%> -<% gcp_service_account_display_name = "#{external_attribute('gcp_service_account_display_name', doc_generation)}" -%> -<% gcp_storage_bucket_object_name = "#{external_attribute('gcp_storage_bucket_object_name', doc_generation)}" -%> -<% gcp_enable_privileged_resources = "#{external_attribute('gcp_enable_privileged_resources', doc_generation)}" -%> +<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%> +<% gcp_storage_bucket_object = "#{external_attribute(pwd, 'gcp_storage_bucket_object', doc_generation)}" -%> +<% gcp_service_account_display_name = "#{external_attribute(pwd, 'gcp_service_account_display_name', doc_generation)}" -%> +<% gcp_storage_bucket_object_name = "#{external_attribute(pwd, 'gcp_storage_bucket_object_name', doc_generation)}" -%> +<% gcp_enable_privileged_resources = "#{external_attribute(pwd, 'gcp_enable_privileged_resources', doc_generation)}" -%> describe google_storage_bucket_objects(bucket: <%= gcp_storage_bucket_object -%>) do its('object_names') { should include <%= gcp_storage_bucket_object_name -%> } its('count') { should be <= 10 } 
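The `google_storage_bucket.erb` hunk above adds a check on `retention_policy.retention_period`; its fixture is the `retention_policy { retention_period = 1000 }` block added to `gcp-mm.tf` near the end of this diff. In doc mode the new assertion renders roughly as:

```ruby
describe google_storage_bucket(name: 'bucket-name') do
  it { should exist }
  # 1000 seconds, matching the retention_policy added to gcp-mm.tf below
  its('retention_policy.retention_period') { should cmp 1000 }
end
```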
diff --git a/templates/inspec/examples/google_storage_default_object_acl/google_storage_default_object_acl.erb b/templates/inspec/examples/google_storage_default_object_acl/google_storage_default_object_acl.erb
index ff8e0785d1d3..87340b7b0642 100644
--- a/templates/inspec/examples/google_storage_default_object_acl/google_storage_default_object_acl.erb
+++ b/templates/inspec/examples/google_storage_default_object_acl/google_storage_default_object_acl.erb
@@ -1,7 +1,7 @@
-<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%>
-<% gcp_storage_bucket_name = "#{external_attribute('gcp_storage_bucket_name', doc_generation)}" -%>
-<% gcp_service_account_display_name = "#{external_attribute('gcp_service_account_display_name', doc_generation)}" -%>
-<% gcp_enable_privileged_resources = "#{external_attribute('gcp_enable_privileged_resources', doc_generation)}" -%>
+<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%>
+<% gcp_storage_bucket_name = "#{external_attribute(pwd, 'gcp_storage_bucket_name', doc_generation)}" -%>
+<% gcp_service_account_display_name = "#{external_attribute(pwd, 'gcp_service_account_display_name', doc_generation)}" -%>
+<% gcp_enable_privileged_resources = "#{external_attribute(pwd, 'gcp_enable_privileged_resources', doc_generation)}" -%>
 describe google_storage_default_object_acl(bucket: <%= gcp_storage_bucket_name -%>, entity: <%= doc_generation ? "user-email" : "\"user-\#{gcp_service_account_display_name}@\#{gcp_project_id}.iam.gserviceaccount.com\"" -%>) do
   it { should exist }
   its('role') { should cmp "OWNER" }
diff --git a/templates/inspec/examples/google_storage_default_object_acl/google_storage_default_object_acl_attributes.erb b/templates/inspec/examples/google_storage_default_object_acl/google_storage_default_object_acl_attributes.erb
index beaea827ad37..dc05a00799a1 100644
--- a/templates/inspec/examples/google_storage_default_object_acl/google_storage_default_object_acl_attributes.erb
+++ b/templates/inspec/examples/google_storage_default_object_acl/google_storage_default_object_acl_attributes.erb
@@ -1,5 +1,5 @@
-gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute('gcp_project_id') -%>', description: 'The GCP project identifier.')
-gcp_storage_bucket_name = attribute(:gcp_storage_bucket_name, default: '<%= external_attribute('gcp_storage_bucket_name') -%>', description: 'The name of the storage bucket with the default object ACL')
-gcp_service_account_display_name = attribute(:gcp_service_account_display_name, default: '<%= external_attribute('gcp_service_account_display_name') -%>', description: 'The name of the service account assigned permissions')
-gcp_enable_privileged_resources = attribute(:gcp_enable_privileged_resources, default: '<%= external_attribute('gcp_enable_privileged_resources') -%>', description: 'If we are running tests with escalated permissions(required for this test)')
-gcp_organization_id = attribute(:gcp_organization_id, default: <%= external_attribute('gcp_organization_id') -%>, description: 'The identifier of the organization')
\ No newline at end of file
+gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute(pwd, 'gcp_project_id') -%>', description: 'The GCP project identifier.')
+gcp_storage_bucket_name = attribute(:gcp_storage_bucket_name, default: '<%= external_attribute(pwd, 'gcp_storage_bucket_name') -%>', description: 'The name of the storage bucket with the default object ACL')
+gcp_service_account_display_name = attribute(:gcp_service_account_display_name, default: '<%= external_attribute(pwd, 'gcp_service_account_display_name') -%>', description: 'The name of the service account assigned permissions')
+gcp_enable_privileged_resources = attribute(:gcp_enable_privileged_resources, default: '<%= external_attribute(pwd, 'gcp_enable_privileged_resources') -%>', description: 'If we are running tests with escalated permissions(required for this test)')
+gcp_organization_id = attribute(:gcp_organization_id, default: <%= external_attribute(pwd, 'gcp_organization_id') -%>, description: 'The identifier of the organization')
\ No newline at end of file
diff --git a/templates/inspec/examples/google_storage_object_acl/google_storage_object_acl.erb b/templates/inspec/examples/google_storage_object_acl/google_storage_object_acl.erb
index 1a183f728e93..8642e26fc994 100644
--- a/templates/inspec/examples/google_storage_object_acl/google_storage_object_acl.erb
+++ b/templates/inspec/examples/google_storage_object_acl/google_storage_object_acl.erb
@@ -1,8 +1,8 @@
-<% gcp_project_id = "#{external_attribute('gcp_project_id', doc_generation)}" -%>
-<% gcp_storage_bucket_object = "#{external_attribute('gcp_storage_bucket_object', doc_generation)}" -%>
-<% gcp_service_account_display_name = "#{external_attribute('gcp_service_account_display_name', doc_generation)}" -%>
-<% gcp_storage_bucket_object_name = "#{external_attribute('gcp_storage_bucket_object_name', doc_generation)}" -%>
-<% gcp_enable_privileged_resources = "#{external_attribute('gcp_enable_privileged_resources', doc_generation)}" -%>
+<% gcp_project_id = "#{external_attribute(pwd, 'gcp_project_id', doc_generation)}" -%>
+<% gcp_storage_bucket_object = "#{external_attribute(pwd, 'gcp_storage_bucket_object', doc_generation)}" -%>
+<% gcp_service_account_display_name = "#{external_attribute(pwd, 'gcp_service_account_display_name', doc_generation)}" -%>
+<% gcp_storage_bucket_object_name = "#{external_attribute(pwd, 'gcp_storage_bucket_object_name', doc_generation)}" -%>
+<% gcp_enable_privileged_resources = "#{external_attribute(pwd, 'gcp_enable_privileged_resources', doc_generation)}" -%>
 describe google_storage_object_acl(bucket: <%= gcp_storage_bucket_object -%>, object: <%= gcp_storage_bucket_object_name -%>, entity: <%= doc_generation ? "user-email" : "\"user-\#{gcp_service_account_display_name}@\#{gcp_project_id}.iam.gserviceaccount.com\"" -%>) do
   it { should exist }
   its('role') { should cmp "OWNER" }
diff --git a/templates/inspec/examples/google_storage_object_acl/google_storage_object_acl_attributes.erb b/templates/inspec/examples/google_storage_object_acl/google_storage_object_acl_attributes.erb
index 3e738dbe025e..410358a1f54a 100644
--- a/templates/inspec/examples/google_storage_object_acl/google_storage_object_acl_attributes.erb
+++ b/templates/inspec/examples/google_storage_object_acl/google_storage_object_acl_attributes.erb
@@ -1,6 +1,6 @@
-gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute('gcp_project_id') -%>', description: 'The GCP project identifier.')
-gcp_storage_bucket_object = attribute(:gcp_storage_bucket_object, default: '<%= external_attribute('gcp_storage_bucket_object') -%>', description: 'The name of the storage bucket with ACLs attached')
-gcp_service_account_display_name = attribute(:gcp_service_account_display_name, default: '<%= external_attribute('gcp_service_account_display_name') -%>', description: 'The name of the service account assigned permissions')
-gcp_enable_privileged_resources = attribute(:gcp_enable_privileged_resources, default: '<%= external_attribute('gcp_enable_privileged_resources') -%>', description: 'If we are running tests with escalated permissions(required for this test)')
-gcp_storage_bucket_object_name = attribute(:gcp_storage_bucket_object_name, default: '<%= external_attribute('gcp_storage_bucket_object_name') -%>', description: 'The name of the object with ACLs')
-gcp_organization_id = attribute(:gcp_organization_id, default: <%= external_attribute('gcp_organization_id') -%>, description: 'The identifier of the organization')
\ No newline at end of file
+gcp_project_id = attribute(:gcp_project_id, default: '<%= external_attribute(pwd, 'gcp_project_id') -%>', description: 'The GCP project identifier.')
+gcp_storage_bucket_object = attribute(:gcp_storage_bucket_object, default: '<%= external_attribute(pwd, 'gcp_storage_bucket_object') -%>', description: 'The name of the storage bucket with ACLs attached')
+gcp_service_account_display_name = attribute(:gcp_service_account_display_name, default: '<%= external_attribute(pwd, 'gcp_service_account_display_name') -%>', description: 'The name of the service account assigned permissions')
+gcp_enable_privileged_resources = attribute(:gcp_enable_privileged_resources, default: '<%= external_attribute(pwd, 'gcp_enable_privileged_resources') -%>', description: 'If we are running tests with escalated permissions(required for this test)')
+gcp_storage_bucket_object_name = attribute(:gcp_storage_bucket_object_name, default: '<%= external_attribute(pwd, 'gcp_storage_bucket_object_name') -%>', description: 'The name of the object with ACLs')
+gcp_organization_id = attribute(:gcp_organization_id, default: <%= external_attribute(pwd, 'gcp_organization_id') -%>, description: 'The identifier of the organization')
\ No newline at end of file
diff --git a/templates/inspec/iam_binding/iam_binding.erb b/templates/inspec/iam_binding/iam_binding.erb
index 70279a97df70..bf22e29ac5fa 100644
--- a/templates/inspec/iam_binding/iam_binding.erb
+++ b/templates/inspec/iam_binding/iam_binding.erb
@@ -1,10 +1,10 @@
 # frozen_string_literal: false
-<%= lines(autogen_notice :ruby) -%>
+<%= lines(autogen_notice(:ruby,pwd)) -%>
 require 'gcp_backend'
 require 'google/iam/property/iam_policy_bindings'
-# A provider to manage <%= @api.product_full_name -%> IAM Binding resources.
+# A provider to manage <%= @api.display_name -%> IAM Binding resources.
 class <%= object.name -%>IamBinding < GcpResourceBase
   name '<%= resource_name(object, product) -%>_iam_binding'
   desc '<%= object.name -%> Iam Binding'
diff --git a/templates/inspec/iam_policy/iam_policy.erb b/templates/inspec/iam_policy/iam_policy.erb
index 984d202c81dc..62e249de1295 100644
--- a/templates/inspec/iam_policy/iam_policy.erb
+++ b/templates/inspec/iam_policy/iam_policy.erb
@@ -1,11 +1,11 @@
 # frozen_string_literal: false
-<%= lines(autogen_notice :ruby) -%>
+<%= lines(autogen_notice(:ruby,pwd)) -%>
 require 'gcp_backend'
 require 'google/iam/property/iam_policy_audit_configs'
 require 'google/iam/property/iam_policy_bindings'
-# A provider to manage <%= @api.product_full_name -%> IAM Policy resources.
+# A provider to manage <%= @api.display_name -%> IAM Policy resources.
 class <%= object.name -%>IamPolicy < GcpResourceBase
   name '<%= resource_name(object, product) -%>_iam_policy'
   desc '<%= object.name -%> Iam Policy'
diff --git a/templates/inspec/integration_test_template.erb b/templates/inspec/integration_test_template.erb
index 1646ca707468..06209f1229d7 100644
--- a/templates/inspec/integration_test_template.erb
+++ b/templates/inspec/integration_test_template.erb
@@ -1,4 +1,4 @@
-<%= lines(autogen_notice :ruby) -%>
+<%= lines(autogen_notice(:ruby, pwd)) -%>
 <% vcr_mode = ENV['VCR_MODE'] -%>
 <% raise "Bad VCR_MODE environment variable set. Should be nil, 'none' or 'all'" unless vcr_mode.nil? || vcr_mode == 'all' || vcr_mode == 'none' -%>
 <% if vcr_mode -%>
@@ -8,7 +8,7 @@
 title 'Test GCP <%= name -%> resource.'
-<%= compile("templates/inspec/examples/#{attribute_file_name}/#{attribute_file_name}_attributes.erb") -%>
+<%= compile(pwd + "/templates/inspec/examples/#{attribute_file_name}/#{attribute_file_name}_attributes.erb") -%>
 control '<%= name -%>-1.0' do
   impact 1.0
@@ -20,7 +20,7 @@ control '<%= name -%>-1.0' do
 <% if vcr_mode -%>
   VCR.use_cassette('<%= name -%>', :record => :<%= vcr_mode -%>) do
 <% end # if vcr_mode -%>
-<%= indent(compile("templates/inspec/examples/#{attribute_file_name}/#{name}.erb"), vcr_mode ? 4 : 2) %>
+<%= indent(compile(pwd + "/templates/inspec/examples/#{attribute_file_name}/#{name}.erb"), vcr_mode ? 4 : 2) %>
 <% if vcr_mode -%>
   end
 <% end # if vcr_mode -%>
diff --git a/templates/inspec/nested_object.erb b/templates/inspec/nested_object.erb
index 4a3d3e48eb55..05a4382b2e05 100644
--- a/templates/inspec/nested_object.erb
+++ b/templates/inspec/nested_object.erb
@@ -24,7 +24,7 @@
 -%>
 # frozen_string_literal: false
-<%= lines(autogen_notice :ruby) -%>
+<%= lines(autogen_notice(:ruby, pwd)) -%>
 <% requires = generate_requires(nested_property.nested_properties) -%>
diff --git a/templates/inspec/plural_resource.erb b/templates/inspec/plural_resource.erb
index 9d764db159b5..a067301ea831 100644
--- a/templates/inspec/plural_resource.erb
+++ b/templates/inspec/plural_resource.erb
@@ -14,7 +14,7 @@
 -%>
 # frozen_string_literal: false
-<%= lines(autogen_notice :ruby) -%>
+<%= lines(autogen_notice(:ruby, pwd)) -%>
 require 'gcp_backend'
 class <%= object.__product.name.camelize(:upper) -%><%= object.name -%>s < GcpResourceBase
 <%
@@ -64,7 +64,7 @@ link_query_items = object&.nested_query&.keys&.first || object.collection_url_ke
       hash_with_symbols[name] = value
     end
 <% if object.plural_custom_logic -%>
-<%= lines(indent(compile(object.plural_custom_logic), 8)) -%>
+<%= lines(indent(compile(pwd + '/' + object.plural_custom_logic), 8)) -%>
 <% end -%>
       converted.push(hash_with_symbols)
     end
@@ -96,7 +96,7 @@ link_query_items = object&.nested_query&.keys&.first || object.collection_url_ke
   private
-<%= compile('templates/inspec/product_url.erb') -%>
+<%= compile(pwd + '/templates/inspec/product_url.erb') -%>
   def resource_base_url
     '<%= object.base_url %>'
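The `integration_test_template.erb` hunks above also show the VCR wiring: when `VCR_MODE` is set (`all` records HTTP traffic, `none` replays saved cassettes), each compiled example is wrapped in a cassette. For a hypothetical `google_redis_instance` test, the rendered control would look roughly like:

```ruby
title 'Test GCP google_redis_instance resource.'

control 'google_redis_instance-1.0' do
  impact 1.0
  # record: :all re-records API traffic; :none replays saved cassettes
  VCR.use_cassette('google_redis_instance', record: :all) do
    # compiled example body, indented 4 spaces (2 when VCR is disabled)
  end
end
```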
-%> -<%= lines(indent(compile(object.additional_functions), 2)) -%> +<%= lines(indent(compile(pwd + '/' + object.additional_functions), 2)) -%> <% end -%> private -<%= compile('templates/inspec/product_url.erb') -%> +<%= compile(pwd + '/templates/inspec/product_url.erb') -%> <% url = object.self_link || object.base_url + '/{{name}}' -%> <% url_params = extract_identifiers(individual_url) -%> diff --git a/templates/inspec/tests/integration/build/gcp-mm.tf b/templates/inspec/tests/integration/build/gcp-mm.tf index 3e77c33cb4cd..feba1dff37de 100644 --- a/templates/inspec/tests/integration/build/gcp-mm.tf +++ b/templates/inspec/tests/integration/build/gcp-mm.tf @@ -656,6 +656,10 @@ resource "google_storage_bucket" "bucket" { labels = { "key" = "value" } + + retention_policy { + retention_period = 1000 + } } resource "google_storage_bucket_object" "object" { @@ -906,6 +910,11 @@ resource "google_service_account" "spanner_service_account" { display_name = "${var.gcp_service_account_display_name}-sp" } +resource "google_service_account_key" "userkey" { + service_account_id = google_service_account.spanner_service_account.name + public_key_type = "TYPE_X509_PEM_FILE" +} + resource "google_spanner_instance" "spanner_instance" { project = var.gcp_project_id config = var.spannerinstance["config"] @@ -1215,3 +1224,77 @@ resource "google_organization_iam_custom_role" "generic_org_iam_custom_role" { description = "Custom role allowing to list IAM roles only" permissions = ["iam.roles.list"] } + +variable "security_policy" { + type = any +} + +resource "google_compute_security_policy" "policy" { + project = var.gcp_project_id + name = var.security_policy["name"] + + rule { + action = var.security_policy["action"] + priority = var.security_policy["priority"] + match { + versioned_expr = "SRC_IPS_V1" + config { + src_ip_ranges = [var.security_policy["ip_range"]] + } + } + description = var.security_policy["description"] + } + + rule { + action = "allow" + priority = "2147483647" + match { + versioned_expr = "SRC_IPS_V1" + config { + src_ip_ranges = ["*"] + } + } + description = "default rule" + } +} + +variable "memcache_instance" { + type = any +} + +resource "google_compute_network" "memcache_network" { + provider = google-beta + project = var.gcp_project_id + name = "inspec-gcp-memcache" +} + +resource "google_compute_global_address" "service_range" { + provider = google-beta + project = var.gcp_project_id + name = "inspec-gcp-memcache" + purpose = "VPC_PEERING" + address_type = "INTERNAL" + prefix_length = 16 + network = google_compute_network.memcache_network.id +} + +resource "google_service_networking_connection" "private_service_connection" { + provider = google-beta + network = google_compute_network.memcache_network.id + service = "servicenetworking.googleapis.com" + reserved_peering_ranges = [google_compute_global_address.service_range.name] +} + +resource "google_memcache_instance" "instance" { + provider = google-beta + name = var.memcache_instance["name"] + project = var.gcp_project_id + region = var.gcp_location + authorized_network = google_service_networking_connection.private_service_connection.network + + node_config { + cpu_count = 1 + memory_size_mb = 1024 + } + node_count = 1 +} diff --git a/templates/inspec/tests/integration/configuration/mm-attributes.yml b/templates/inspec/tests/integration/configuration/mm-attributes.yml index ab78ff156af9..faf675d00529 100644 --- a/templates/inspec/tests/integration/configuration/mm-attributes.yml +++ 
b/templates/inspec/tests/integration/configuration/mm-attributes.yml @@ -437,4 +437,13 @@ logging_metric: compute_image: name: inspec-image source: https://storage.googleapis.com/bosh-cpi-artifacts/bosh-stemcell-3262.4-google-kvm-ubuntu-trusty-go_agent-raw.tar.gz - \ No newline at end of file + +security_policy: + name: sec-policy + action: deny(403) + priority: "1000" + ip_range: "9.9.9.0/24" + description: my description + +memcache_instance: + name: mem-instance diff --git a/templates/stackdriver.json b/templates/stackdriver.json new file mode 100644 index 000000000000..33430fbacff5 --- /dev/null +++ b/templates/stackdriver.json @@ -0,0 +1,421 @@ +{ + "parameters": { + "key": { + "description": "API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.", + "type": "string", + "location": "query" + }, + "access_token": { + "location": "query", + "description": "OAuth access token.", + "type": "string" + }, + "upload_protocol": { + "location": "query", + "description": "Upload protocol for media (e.g. \"raw\", \"multipart\").", + "type": "string" + }, + "prettyPrint": { + "location": "query", + "description": "Returns response with indentations and line breaks.", + "type": "boolean", + "default": "true" + }, + "quotaUser": { + "description": "Available to use for quota purposes for server-side applications. Can be any arbitrary string assigned to a user, but should not exceed 40 characters.", + "type": "string", + "location": "query" + }, + "uploadType": { + "location": "query", + "description": "Legacy upload protocol for media (e.g. \"media\", \"multipart\").", + "type": "string" + }, + "fields": { + "type": "string", + "location": "query", + "description": "Selector specifying which fields to include in a partial response." + }, + "$.xgafv": { + "description": "V1 error format.", + "type": "string", + "enumDescriptions": [ + "v1 error format", + "v2 error format" + ], + "location": "query", + "enum": [ + "1", + "2" + ] + }, + "oauth_token": { + "description": "OAuth 2.0 token for the current user.", + "type": "string", + "location": "query" + }, + "callback": { + "description": "JSONP", + "type": "string", + "location": "query" + }, + "alt": { + "default": "json", + "enum": [ + "json", + "media", + "proto" + ], + "type": "string", + "enumDescriptions": [ + "Responses with Content-Type of application/json", + "Media download with context-dependent Content-Type", + "Responses with Content-Type of application/x-protobuf" + ], + "location": "query", + "description": "Data format for response." + } + }, + "version": "v2", + "baseUrl": "https://stackdriver.googleapis.com/", + "kind": "discovery#restDescription", + "description": "Provides users with programmatic access to Stackdriver endpoints that allow putting VM instances and other resources into maintenance mode.", + "servicePath": "", + "basePath": "", + "revision": "20200323", + "documentationLink": "https://cloud.google.com/stackdriver/docs/", + "id": "stackdriver:v2", + "discoveryVersion": "v1", + "version_module": true, + "schemas": { + "Status": { + "description": "The `Status` type defines a logical error model that is suitable for\ndifferent programming environments, including REST APIs and RPC APIs. It is\nused by [gRPC](https://github.com/grpc). 
Each `Status` message contains\nthree pieces of data: error code, error message, and error details.\n\nYou can find out more about this error model and how to work with it in the\n[API Design Guide](https://cloud.google.com/apis/design/errors).", + "type": "object", + "properties": { + "details": { + "type": "array", + "items": { + "type": "object", + "additionalProperties": { + "type": "any", + "description": "Properties of the object. Contains field @type with type URL." + } + }, + "description": "A list of messages that carry the error details. There is a common set of\nmessage types for APIs to use." + }, + "code": { + "description": "The status code, which should be an enum value of google.rpc.Code.", + "format": "int32", + "type": "integer" + }, + "message": { + "description": "A developer-facing error message, which should be in English. Any\nuser-facing error message should be localized and sent in the\ngoogle.rpc.Status.details field, or localized by the client.", + "type": "string" + } + }, + "id": "Status" + }, + "Operation": { + "id": "Operation", + "description": "This resource represents a long-running operation that is the result of a\nnetwork API call.", + "type": "object", + "properties": { + "response": { + "description": "The normal response of the operation in case of success. If the original\nmethod returns no data on success, such as `Delete`, the response is\n`google.protobuf.Empty`. If the original method is standard\n`Get`/`Create`/`Update`, the response should be the resource. For other\nmethods, the response should have the type `XxxResponse`, where `Xxx`\nis the original method name. For example, if the original method name\nis `TakeSnapshot()`, the inferred response type is\n`TakeSnapshotResponse`.", + "type": "object", + "additionalProperties": { + "description": "Properties of the object. Contains field @type with type URL.", + "type": "any" + } + }, + "name": { + "description": "The server-assigned name, which is only unique within the same service that\noriginally returns it. If you use the default HTTP mapping, the\n`name` should be a resource name ending with `operations/{unique_id}`.", + "type": "string" + }, + "error": { + "$ref": "Status", + "description": "The error result of the operation in case of failure or cancellation." + }, + "metadata": { + "type": "object", + "additionalProperties": { + "type": "any", + "description": "Properties of the object. Contains field @type with type URL." + }, + "description": "Service-specific metadata associated with the operation. It typically\ncontains progress information and common metadata such as create time.\nSome services might not provide such metadata. Any method that returns a\nlong-running operation should document the metadata type, if any." + }, + "done": { + "description": "If the value is `false`, it means the operation is still in progress.\nIf `true`, the operation is completed, and either `error` or `response` is\navailable.", + "type": "boolean" + } + } + }, + "OperationMetadata": { + "id": "OperationMetadata", + "description": "Contains metadata for longrunning operations in the Stackdriver API.", + "type": "object", + "properties": { + "state": { + "enum": [ + "STATE_UNSPECIFIED", + "CREATED", + "RUNNING", + "DONE", + "CANCELLED" + ], + "description": "Current state of the batch operation.", + "type": "string", + "enumDescriptions": [ + "Invalid.", + "Request is received.", + "Request is actively being processed.", + "The batch processing is done.", + "The batch processing was cancelled." 
+ ] + }, + "updateTime": { + "type": "string", + "description": "The time when the operation result was last updated.", + "format": "google-datetime" + }, + "createTime": { + "description": "The time when the batch request was received.", + "format": "google-datetime", + "type": "string" + } + } + }, + "MonitoredProject": { + "description": "A single cloud account being monitored within a Stackdriver account.", + "type": "object", + "properties": { + "projectNumber": { + "description": "Output only. The GCP-assigned project number.", + "format": "int64", + "type": "string" + }, + "createTime": { + "description": "Output only. The instant when this monitored project was created.", + "format": "google-datetime", + "type": "string" + }, + "updateTime": { + "description": "Output only. The instant when this monitored project was last updated.", + "format": "google-datetime", + "type": "string" + }, + "name": { + "description": "The resource name of the monitored project within a Stackdriver account.\nIncludes the host project id and monitored project id. On output it\nwill always contain the project number.\nExample: \u003ccode\u003eaccounts/my-project/projects/my-other-project\u003c/code\u003e", + "type": "string" + }, + "projectId": { + "description": "Output only. The GCP-assigned project id.\nExample: \u003ccode\u003eprojecty-project-101\u003c/code\u003e", + "type": "string" + }, + "organizationId": { + "description": "Optional, input only. The Id of the organization to hold the GCP Project\nfor a newly created monitored project.\nThis field is ignored if the GCP project already exists.", + "type": "string" + } + }, + "id": "MonitoredProject" + }, + "StackdriverAccount": { + "id": "StackdriverAccount", + "description": "A Workspace in Stackdriver Monitoring, which specifies one or more GCP\nprojects and zero or more AWS accounts to monitor together.\nOne GCP project acts as the Workspace's host.\nGCP projects and AWS accounts cannot be monitored until they are associated\nwith a Workspace.", + "type": "object", + "properties": { + "monitoredProjects": { + "description": "Output only. The GCP projects monitored in this Stackdriver account.", + "type": "array", + "items": { + "$ref": "MonitoredProject" + } + }, + "createTime": { + "type": "string", + "description": "Output only. The instant when this account was created.", + "format": "google-datetime" + }, + "hostProjectId": { + "description": "Output only. The GCP project id for the host project of this account.", + "type": "string" + }, + "updateTime": { + "description": "Output only. The instant when this account record was last updated.", + "format": "google-datetime", + "type": "string" + }, + "hostProjectNumber": { + "description": "Output only. The GCP project number for the host project of this account.", + "format": "int64", + "type": "string" + }, + "name": { + "description": "The resource name of the Stackdriver account, including the host project\nid or number. On output it will always be the host project number.\nExample: \u003ccode\u003eaccounts/[PROJECT_ID]\u003c/code\u003e or\n\u003ccode\u003eaccounts/[PROJECT_NUMBER]\u003c/code\u003e", + "type": "string" + }, + "organizationId": { + "description": "Optional, input only. 
The Id of the organization to hold the GCP Project\nfor a newly created Stackdriver account.\nThis field is ignored if the GCP project already exists.", + "type": "string" + } + } + } + }, + "protocol": "rest", + "icons": { + "x32": "http://www.google.com/images/icons/product/search-32.gif", + "x16": "http://www.google.com/images/icons/product/search-16.gif" + }, + "canonicalName": "Stackdriver", + "auth": { + "oauth2": { + "scopes": { + "https://www.googleapis.com/auth/monitoring": { + "description": "View and write monitoring data for all of your Google and third-party Cloud and API projects" + }, + "https://www.googleapis.com/auth/monitoring.write": { + "description": "Publish metric data to your Google Cloud projects" + }, + "https://www.googleapis.com/auth/cloud-platform": { + "description": "View and manage your data across Google Cloud Platform services" + }, + "https://www.googleapis.com/auth/monitoring.read": { + "description": "View monitoring data for all of your Google Cloud and third-party projects" + } + } + } + }, + "rootUrl": "https://stackdriver.googleapis.com/", + "ownerDomain": "google.com", + "name": "stackdriver", + "batchPath": "batch", + "mtlsRootUrl": "https://stackdriver.mtls.googleapis.com/", + "fullyEncodeReservedExpansion": true, + "title": "Stackdriver API", + "ownerName": "Google", + "resources": { + "accounts": { + "methods": { + "get": { + "httpMethod": "GET", + "parameterOrder": [ + "name" + ], + "response": { + "$ref": "StackdriverAccount" + }, + "parameters": { + "name": { + "location": "path", + "description": "The unique name of the Stackdriver account.\nCaller needs stackdriver.projects.get permission on the host project.", + "required": true, + "type": "string", + "pattern": "^accounts/[^/]+$" + }, + "includeProjects": { + "type": "boolean", + "location": "query", + "description": "If true the monitored_projects collection will be populated with any\nentries, if false it will be empty." + } + }, + "scopes": [ + "https://www.googleapis.com/auth/cloud-platform", + "https://www.googleapis.com/auth/monitoring", + "https://www.googleapis.com/auth/monitoring.read" + ], + "flatPath": "v2/accounts/{accountsId}", + "id": "stackdriver.accounts.get", + "path": "v2/{+name}", + "description": "Fetches a specific Stackdriver account." 
+ }, + "create": { + "response": { + "$ref": "Operation" + }, + "parameterOrder": [], + "httpMethod": "POST", + "scopes": [ + "https://www.googleapis.com/auth/cloud-platform", + "https://www.googleapis.com/auth/monitoring", + "https://www.googleapis.com/auth/monitoring.write" + ], + "parameters": {}, + "flatPath": "v2/accounts", + "path": "v2/accounts", + "id": "stackdriver.accounts.create", + "request": { + "$ref": "StackdriverAccount" + }, + "description": "Creates a new Stackdriver account with a given host project.\nA MonitoredProject for that project will be attached to it if successful.\n\nOperation\u003cresponse: StackdriverAccount\u003e" + } + }, + "resources": { + "projects": { + "methods": { + "create": { + "response": { + "$ref": "Operation" + }, + "parameterOrder": [ + "parent" + ], + "httpMethod": "POST", + "parameters": { + "parent": { + "description": "The unique name of the Stackdriver account that will host this project.\nCaller needs stackdriver.projects.edit permission on the host project.", + "required": true, + "type": "string", + "pattern": "^accounts/[^/]+$", + "location": "path" + } + }, + "scopes": [ + "https://www.googleapis.com/auth/cloud-platform", + "https://www.googleapis.com/auth/monitoring", + "https://www.googleapis.com/auth/monitoring.write" + ], + "flatPath": "v2/accounts/{accountsId}/projects", + "path": "v2/{+parent}/projects", + "id": "stackdriver.accounts.projects.create", + "description": "Creates a new monitored project in a Stackdriver account.\nOperation\u003cresponse: MonitoredProject\u003e", + "request": { + "$ref": "MonitoredProject" + } + } + } + } + } + }, + "operations": { + "methods": { + "get": { + "httpMethod": "GET", + "response": { + "$ref": "Operation" + }, + "parameterOrder": [ + "name" + ], + "parameters": { + "name": { + "pattern": "^operations/.*$", + "location": "path", + "description": "The name of the operation resource.", + "required": true, + "type": "string" + } + }, + "scopes": [ + "https://www.googleapis.com/auth/cloud-platform", + "https://www.googleapis.com/auth/monitoring", + "https://www.googleapis.com/auth/monitoring.read" + ], + "flatPath": "v2/operations/{operationsId}", + "id": "stackdriver.operations.get", + "path": "v2/{+name}", + "description": "Gets the latest state of a long-running operation. Clients can use this\nmethod to poll the operation result at intervals as recommended by the API\nservice." + } + } + } + } +} \ No newline at end of file diff --git a/templates/terraform/constants/bigquery_dataset_access.go b/templates/terraform/constants/bigquery_dataset_access.go new file mode 100644 index 000000000000..fcf90d4c1d60 --- /dev/null +++ b/templates/terraform/constants/bigquery_dataset_access.go @@ -0,0 +1,12 @@ +var bigqueryAccessRoleToPrimitiveMap = map[string]string { + "roles/bigquery.dataOwner": "OWNER", + "roles/bigquery.dataEditor": "WRITER", + "roles/bigquery.dataViewer": "READER", +} + +func resourceBigQueryDatasetAccessRoleDiffSuppress(k, old, new string, d *schema.ResourceData) bool { + if primitiveRole, ok := bigqueryAccessRoleToPrimitiveMap[new]; ok { + return primitiveRole == old + } + return false +} diff --git a/templates/terraform/constants/bigquery_job.go b/templates/terraform/constants/bigquery_job.go new file mode 100644 index 000000000000..2a9923e099a9 --- /dev/null +++ b/templates/terraform/constants/bigquery_job.go @@ -0,0 +1,18 @@ +<%# The license inside this block applies to this file. + # Copyright 2020 Google Inc. 
+ # Licensed under the Apache License, Version 2.0 (the "License"); + # you may not use this file except in compliance with the License. + # You may obtain a copy of the License at + # + # http://www.apache.org/licenses/LICENSE-2.0 + # + # Unless required by applicable law or agreed to in writing, software + # distributed under the License is distributed on an "AS IS" BASIS, + # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + # See the License for the specific language governing permissions and + # limitations under the License. +-%> +var ( + bigqueryDatasetRegexp = regexp.MustCompile("projects/(.+)/datasets/(.+)") + bigqueryTableRegexp = regexp.MustCompile("projects/(.+)/datasets/(.+)/tables/(.+)") +) \ No newline at end of file diff --git a/templates/terraform/constants/cloudiot.go.erb b/templates/terraform/constants/cloudiot.go.erb new file mode 100644 index 000000000000..94bd4f5abb2b --- /dev/null +++ b/templates/terraform/constants/cloudiot.go.erb @@ -0,0 +1,241 @@ +func expandCloudIotDeviceRegistryHTTPConfig(v interface{}, d TerraformResourceData, config *Config) (interface{}, error) { + original := v.(map[string]interface{}) + transformed := make(map[string]interface{}) + + transformedHTTPEnabledState, err := expandCloudIotDeviceRegistryHTTPEnabledState(original["http_enabled_state"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedHTTPEnabledState); val.IsValid() && !isEmptyValue(val) { + transformed["httpEnabledState"] = transformedHTTPEnabledState + } + + return transformed, nil +} + +func expandCloudIotDeviceRegistryHTTPEnabledState(v interface{}, d TerraformResourceData, config *Config) (interface{}, error) { + return v, nil +} + +func expandCloudIotDeviceRegistryMqttConfig(v interface{}, d TerraformResourceData, config *Config) (interface{}, error) { + original := v.(map[string]interface{}) + transformed := make(map[string]interface{}) + + transformedMqttEnabledState, err := expandCloudIotDeviceRegistryMqttEnabledState(original["mqtt_enabled_state"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedMqttEnabledState); val.IsValid() && !isEmptyValue(val) { + transformed["mqttEnabledState"] = transformedMqttEnabledState + } + + return transformed, nil +} + +func expandCloudIotDeviceRegistryMqttEnabledState(v interface{}, d TerraformResourceData, config *Config) (interface{}, error) { + return v, nil +} + +func expandCloudIotDeviceRegistryStateNotificationConfig(v interface{}, d TerraformResourceData, config *Config) (interface{}, error) { + original := v.(map[string]interface{}) + transformed := make(map[string]interface{}) + + transformedPubsubTopicName, err := expandCloudIotDeviceRegistryStateNotificationConfigPubsubTopicName(original["pubsub_topic_name"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedPubsubTopicName); val.IsValid() && !isEmptyValue(val) { + transformed["pubsubTopicName"] = transformedPubsubTopicName + } + + return transformed, nil +} + +func expandCloudIotDeviceRegistryStateNotificationConfigPubsubTopicName(v interface{}, d TerraformResourceData, config *Config) (interface{}, error) { + return v, nil +} + +func expandCloudIotDeviceRegistryCredentials(v interface{}, d TerraformResourceData, config *Config) (interface{}, error) { + l := v.([]interface{}) + req := make([]interface{}, 0, len(l)) + + for _, raw := range l { + if raw == nil { + continue + } + original := raw.(map[string]interface{}) + 
transformed := make(map[string]interface{}) + + transformedPublicKeyCertificate, err := expandCloudIotDeviceRegistryCredentialsPublicKeyCertificate(original["public_key_certificate"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedPublicKeyCertificate); val.IsValid() && !isEmptyValue(val) { + transformed["publicKeyCertificate"] = transformedPublicKeyCertificate + } + + req = append(req, transformed) + } + + return req, nil +} + +func expandCloudIotDeviceRegistryCredentialsPublicKeyCertificate(v interface{}, d TerraformResourceData, config *Config) (interface{}, error) { + original := v.(map[string]interface{}) + transformed := make(map[string]interface{}) + + transformedFormat, err := expandCloudIotDeviceRegistryPublicKeyCertificateFormat(original["format"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedFormat); val.IsValid() && !isEmptyValue(val) { + transformed["format"] = transformedFormat + } + + transformedCertificate, err := expandCloudIotDeviceRegistryPublicKeyCertificateCertificate(original["certificate"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedCertificate); val.IsValid() && !isEmptyValue(val) { + transformed["certificate"] = transformedCertificate + } + + return transformed, nil +} + +func expandCloudIotDeviceRegistryPublicKeyCertificateFormat(v interface{}, d TerraformResourceData, config *Config) (interface{}, error) { + return v, nil +} + +func expandCloudIotDeviceRegistryPublicKeyCertificateCertificate(v interface{}, d TerraformResourceData, config *Config) (interface{}, error) { + return v, nil +} + +func flattenCloudIotDeviceRegistryCredentials(v interface{}, d *schema.ResourceData, config *Config) interface{} { + log.Printf("[DEBUG] Flattening device registry credentials: %q", d.Id()) + if v == nil { + log.Printf("[DEBUG] The credentials array is nil: %q", d.Id()) + return v + } + l := v.([]interface{}) + transformed := make([]interface{}, 0, len(l)) + for _, raw := range l { + original := raw.(map[string]interface{}) + log.Printf("[DEBUG] Original credential: %+v", original) + if len(original) < 1 { + log.Printf("[DEBUG] Excluding empty credential that the API returned. 
%q", d.Id()) + continue + } + log.Printf("[DEBUG] Credentials array before appending a new credential: %+v", transformed) + transformed = append(transformed, map[string]interface{}{ + "public_key_certificate": flattenCloudIotDeviceRegistryCredentialsPublicKeyCertificate(original["publicKeyCertificate"], d, config), + }) + log.Printf("[DEBUG] Credentials array after appending a new credential: %+v", transformed) + } + return transformed +} + +func flattenCloudIotDeviceRegistryCredentialsPublicKeyCertificate(v interface{}, d *schema.ResourceData, config *Config) interface{} { + log.Printf("[DEBUG] Flattening device registry credentials public key certificate: %q", d.Id()) + if v == nil { + log.Printf("[DEBUG] The public key certificate is nil: %q", d.Id()) + return v + } + + original := v.(map[string]interface{}) + log.Printf("[DEBUG] Original public key certificate: %+v", original) + transformed := make(map[string]interface{}) + + transformedPublicKeyCertificateFormat := flattenCloudIotDeviceRegistryPublicKeyCertificateFormat(original["format"], d, config) + transformed["format"] = transformedPublicKeyCertificateFormat + + transformedPublicKeyCertificateCertificate := flattenCloudIotDeviceRegistryPublicKeyCertificateCertificate(original["certificate"], d, config) + transformed["certificate"] = transformedPublicKeyCertificateCertificate + + log.Printf("[DEBUG] Transformed public key certificate: %+v", transformed) + + return transformed +} + +func flattenCloudIotDeviceRegistryPublicKeyCertificateFormat(v interface{}, d *schema.ResourceData, config *Config) interface{} { + return v +} + +func flattenCloudIotDeviceRegistryPublicKeyCertificateCertificate(v interface{}, d *schema.ResourceData, config *Config) interface{} { + return v +} + +func flattenCloudIotDeviceRegistryHTTPConfig(v interface{}, d *schema.ResourceData, config *Config) interface{} { + if v == nil { + return v + } + + original := v.(map[string]interface{}) + transformed := make(map[string]interface{}) + + transformedHTTPEnabledState := flattenCloudIotDeviceRegistryHTTPConfigHTTPEnabledState(original["httpEnabledState"], d, config) + transformed["http_enabled_state"] = transformedHTTPEnabledState + + return transformed +} + +func flattenCloudIotDeviceRegistryHTTPConfigHTTPEnabledState(v interface{}, d *schema.ResourceData, config *Config) interface{} { + return v +} + +func flattenCloudIotDeviceRegistryMqttConfig(v interface{}, d *schema.ResourceData, config *Config) interface{} { + if v == nil { + return v + } + + original := v.(map[string]interface{}) + transformed := make(map[string]interface{}) + + transformedMqttEnabledState := flattenCloudIotDeviceRegistryMqttConfigMqttEnabledState(original["mqttEnabledState"], d, config) + transformed["mqtt_enabled_state"] = transformedMqttEnabledState + + return transformed +} + +func flattenCloudIotDeviceRegistryMqttConfigMqttEnabledState(v interface{}, d *schema.ResourceData, config *Config) interface{} { + return v +} + +func flattenCloudIotDeviceRegistryStateNotificationConfig(v interface{}, d *schema.ResourceData, config *Config) interface{} { + log.Printf("[DEBUG] Flattening state notification config: %+v", v) + if v == nil { + return v + } + + original := v.(map[string]interface{}) + transformed := make(map[string]interface{}) + + transformedPubsubTopicName := flattenCloudIotDeviceRegistryStateNotificationConfigPubsubTopicName(original["pubsubTopicName"], d, config) + if val := reflect.ValueOf(transformedPubsubTopicName); val.IsValid() && !isEmptyValue(val) { + log.Printf("[DEBUG] 
pubsub topic name is not null: %v", d.Get("pubsub_topic_name")) + transformed["pubsub_topic_name"] = transformedPubsubTopicName + } + + return transformed +} + +func flattenCloudIotDeviceRegistryStateNotificationConfigPubsubTopicName(v interface{}, d *schema.ResourceData, config *Config) interface{} { + return v +} + +func validateCloudIotDeviceRegistryID(v interface{}, k string) (warnings []string, errors []error) { + value := v.(string) + if strings.HasPrefix(value, "goog") { + errors = append(errors, fmt.Errorf( + "%q (%q) cannot start with \"goog\"", k, value)) + } + if !regexp.MustCompile(CloudIoTIdRegex).MatchString(value) { + errors = append(errors, fmt.Errorf( + "%q (%q) doesn't match regexp %q", k, value, CloudIoTIdRegex)) + } + return +} + +func validateCloudIotDeviceRegistrySubfolderMatch(v interface{}, k string) (warnings []string, errors []error) { + value := v.(string) + if strings.HasPrefix(value, "/") { + errors = append(errors, fmt.Errorf( + "%q (%q) cannot start with '/'", k, value)) + } + return +} diff --git a/templates/terraform/constants/firewall.erb b/templates/terraform/constants/firewall.erb index aaac9f436038..1f949a416c02 100644 --- a/templates/terraform/constants/firewall.erb +++ b/templates/terraform/constants/firewall.erb @@ -34,3 +34,27 @@ func resourceComputeFirewallRuleHash(v interface{}) int { func compareCaseInsensitive(k, old, new string, d *schema.ResourceData) bool { return strings.ToLower(old) == strings.ToLower(new) } + +func diffSuppressEnableLogging(k, old, new string, d *schema.ResourceData) bool { + if k == "log_config.#" { + if new == "0" && d.Get("enable_logging").(bool) { + return true + } + } + + return false +} + +func resourceComputeFirewallEnableLoggingCustomizeDiff(diff *schema.ResourceDiff, v interface{}) error { + enableLogging, enableExists := diff.GetOkExists("enable_logging") + if !enableExists { + return nil + } + + logConfigExists := diff.Get("log_config.#").(int) != 0 + if logConfigExists && enableLogging == false { + return fmt.Errorf("log_config cannot be defined when enable_logging is false") + } + + return nil +} \ No newline at end of file diff --git a/templates/terraform/constants/monitoring_slo.go.erb b/templates/terraform/constants/monitoring_slo.go.erb new file mode 100644 index 000000000000..23b42305d7ae --- /dev/null +++ b/templates/terraform/constants/monitoring_slo.go.erb @@ -0,0 +1,7 @@ +func validateMonitoringSloGoal(v interface{}, k string) (warnings []string, errors []error) { + goal := v.(float64) + if goal <= 0 || goal > 0.999 { + errors = append(errors, fmt.Errorf("goal %f must be > 0 and <= 0.999", goal)) + } + return +} \ No newline at end of file diff --git a/templates/terraform/constants/source_repo_repository.go.erb b/templates/terraform/constants/source_repo_repository.go.erb new file mode 100644 index 000000000000..d523a7869673 --- /dev/null +++ b/templates/terraform/constants/source_repo_repository.go.erb @@ -0,0 +1,30 @@ +<%- # the license inside this block applies to this file + # Copyright 2019 Google Inc. + # Licensed under the Apache License, Version 2.0 (the "License"); + # you may not use this file except in compliance with the License. + # You may obtain a copy of the License at + # + # http://www.apache.org/licenses/LICENSE-2.0 + # + # Unless required by applicable law or agreed to in writing, software + # distributed under the License is distributed on an "AS IS" BASIS, + # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+ # See the License for the specific language governing permissions and + # limitations under the License. +-%> +func resourceSourceRepoRepositoryPubSubConfigsHash(v interface{}) int { + if v == nil { + return 0 + } + + var buf bytes.Buffer + m := v.(map[string]interface{}) + + buf.WriteString(fmt.Sprintf("%s-", GetResourceNameFromSelfLink(m["topic"].(string)))) + buf.WriteString(fmt.Sprintf("%s-", m["message_format"].(string))) + if v, ok := m["service_account_email"]; ok { + buf.WriteString(fmt.Sprintf("%s-", v.(string))) + } + + return hashcode.String(buf.String()) +} diff --git a/templates/terraform/custom_check_destroy/storage_hmac_key.go.erb b/templates/terraform/custom_check_destroy/storage_hmac_key.go.erb index c7208fbdd1fc..1677f4c44b7d 100644 --- a/templates/terraform/custom_check_destroy/storage_hmac_key.go.erb +++ b/templates/terraform/custom_check_destroy/storage_hmac_key.go.erb @@ -1,4 +1,4 @@ -config := testAccProvider.Meta().(*Config) +config := googleProviderConfig(t) url, err := replaceVarsForTest(config, rs, "{{StorageBasePath}}projects/{{project}}/hmacKeys/{{access_id}}") if err != nil { diff --git a/templates/terraform/custom_delete/appversion_delete.go.erb b/templates/terraform/custom_delete/appversion_delete.go.erb index d3e6aa0a9bfd..ad11d36ffb92 100644 --- a/templates/terraform/custom_delete/appversion_delete.go.erb +++ b/templates/terraform/custom_delete/appversion_delete.go.erb @@ -30,7 +30,7 @@ if d.Get("delete_service_on_destroy") == true { } err = appEngineOperationWaitTime( config, res, project, "Deleting Service", - int(d.Timeout(schema.TimeoutDelete).Minutes())) + d.Timeout(schema.TimeoutDelete)) if err != nil { return err @@ -50,7 +50,7 @@ if d.Get("delete_service_on_destroy") == true { } err = appEngineOperationWaitTime( config, res, project, "Deleting AppVersion", - int(d.Timeout(schema.TimeoutDelete).Minutes())) + d.Timeout(schema.TimeoutDelete)) if err != nil { return err diff --git a/templates/terraform/custom_delete/per_instance_config.go.erb b/templates/terraform/custom_delete/per_instance_config.go.erb new file mode 100644 index 000000000000..59d9096f9c0c --- /dev/null +++ b/templates/terraform/custom_delete/per_instance_config.go.erb @@ -0,0 +1,78 @@ + config := meta.(*Config) + + project, err := getProject(d, config) + if err != nil { + return err + } + + lockName, err := replaceVars(d, config, "instanceGroupManager/{{project}}/{{zone}}/{{instance_group_manager}}") + if err != nil { + return err + } + mutexKV.Lock(lockName) + defer mutexKV.Unlock(lockName) + + url, err := replaceVars(d, config, "{{ComputeBasePath}}projects/{{project}}/zones/{{zone}}/instanceGroupManagers/{{instance_group_manager}}/deletePerInstanceConfigs") + if err != nil { + return err + } + + var obj map[string]interface{} + obj = map[string]interface{}{ + "names": [1]string{d.Get("name").(string)}, + } + log.Printf("[DEBUG] Deleting PerInstanceConfig %q", d.Id()) + + res, err := sendRequestWithTimeout(config, "POST", project, url, obj, d.Timeout(schema.TimeoutDelete)) + if err != nil { + return handleNotFoundError(err, d, "PerInstanceConfig") + } + + err = computeOperationWaitTime( + config, res, project, "Deleting PerInstanceConfig", + d.Timeout(schema.TimeoutDelete)) + + if err != nil { + return err + } + + // Potentially delete the state managed by this config + if d.Get("remove_instance_state_on_destroy").(bool) { + // Instance name in applyUpdatesToInstances request must include zone + instanceName, err := replaceVars(d, config, "zones/{{zone}}/instances/{{name}}") + if 
err != nil { + return err + } + + obj = make(map[string]interface{}) + obj["instances"] = []string{instanceName} + + // The deletion must be applied to the instance after the PerInstanceConfig is deleted + url, err = replaceVars(d, config, "{{ComputeBasePath}}projects/{{project}}/zones/{{zone}}/instanceGroupManagers/{{instance_group_manager}}/applyUpdatesToInstances") + if err != nil { + return err + } + + log.Printf("[DEBUG] Applying updates to PerInstanceConfig %q: %#v", d.Id(), obj) + res, err = sendRequestWithTimeout(config, "POST", project, url, obj, d.Timeout(schema.TimeoutUpdate)) + + if err != nil { + return fmt.Errorf("Error deleting PerInstanceConfig %q: %s", d.Id(), err) + } + + err = computeOperationWaitTime( + config, res, project, "Applying update to PerInstanceConfig", + d.Timeout(schema.TimeoutUpdate)) + if err != nil { + return fmt.Errorf("Error deleting PerInstanceConfig %q: %s", d.Id(), err) + } + + // PerInstanceConfig goes into "DELETING" state while the instance is actually deleted + err = PollingWaitTime(resourceComputePerInstanceConfigPollRead(d, meta), PollCheckInstanceConfigDeleted, "Deleting PerInstanceConfig", d.Timeout(schema.TimeoutDelete), 1) + if err != nil { + return fmt.Errorf("Error waiting for delete on PerInstanceConfig %q: %s", d.Id(), err) + } + } + + log.Printf("[DEBUG] Finished deleting PerInstanceConfig %q: %#v", d.Id(), res) + return nil \ No newline at end of file diff --git a/templates/terraform/custom_delete/region_per_instance_config.go.erb b/templates/terraform/custom_delete/region_per_instance_config.go.erb new file mode 100644 index 000000000000..4daf052f733a --- /dev/null +++ b/templates/terraform/custom_delete/region_per_instance_config.go.erb @@ -0,0 +1,79 @@ + config := meta.(*Config) + + project, err := getProject(d, config) + if err != nil { + return err + } + + lockName, err := replaceVars(d, config, "instanceGroupManager/{{project}}/{{region}}/{{region_instance_group_manager}}") + if err != nil { + return err + } + mutexKV.Lock(lockName) + defer mutexKV.Unlock(lockName) + + url, err := replaceVars(d, config, "{{ComputeBasePath}}projects/{{project}}/regions/{{region}}/instanceGroupManagers/{{region_instance_group_manager}}/deletePerInstanceConfigs") + if err != nil { + return err + } + + var obj map[string]interface{} + obj = map[string]interface{}{ + "names": [1]string{d.Get("name").(string)}, + } + log.Printf("[DEBUG] Deleting RegionPerInstanceConfig %q", d.Id()) + + res, err := sendRequestWithTimeout(config, "POST", project, url, obj, d.Timeout(schema.TimeoutDelete)) + if err != nil { + return handleNotFoundError(err, d, "RegionPerInstanceConfig") + } + + err = computeOperationWaitTime( + config, res, project, "Deleting RegionPerInstanceConfig", + d.Timeout(schema.TimeoutDelete)) + + if err != nil { + return err + } + + // Potentially delete the state managed by this config + if d.Get("remove_instance_state_on_destroy").(bool) { + // Instance name in applyUpdatesToInstances request must include zone + instanceName, err := findInstanceName(d, config) + if err != nil { + return err + } + + obj = make(map[string]interface{}) + obj["instances"] = []string{instanceName} + + // Updates must be applied to the instance after deleting the PerInstanceConfig + url, err = replaceVars(d, config, "{{ComputeBasePath}}projects/{{project}}/regions/{{region}}/instanceGroupManagers/{{region_instance_group_manager}}/applyUpdatesToInstances") + if err != nil { + return err + } + + log.Printf("[DEBUG] Applying updates to PerInstanceConfig %q: %#v", 
d.Id(), obj) + res, err = sendRequestWithTimeout(config, "POST", project, url, obj, d.Timeout(schema.TimeoutUpdate)) + + if err != nil { + return fmt.Errorf("Error updating PerInstanceConfig %q: %s", d.Id(), err) + } + + err = computeOperationWaitTime( + config, res, project, "Applying update to PerInstanceConfig", + d.Timeout(schema.TimeoutUpdate)) + + if err != nil { + return fmt.Errorf("Error deleting PerInstanceConfig %q: %s", d.Id(), err) + } + + // RegionPerInstanceConfig goes into "DELETING" state while the instance is actually deleted + err = PollingWaitTime(resourceComputeRegionPerInstanceConfigPollRead(d, meta), PollCheckInstanceConfigDeleted, "Deleting RegionPerInstanceConfig", d.Timeout(schema.TimeoutDelete), 1) + if err != nil { + return fmt.Errorf("Error waiting for delete on RegionPerInstanceConfig %q: %s", d.Id(), err) + } + } + + log.Printf("[DEBUG] Finished deleting RegionPerInstanceConfig %q: %#v", d.Id(), res) + return nil \ No newline at end of file diff --git a/templates/terraform/custom_expand/bigquery_access_role.go.erb b/templates/terraform/custom_expand/bigquery_access_role.go.erb new file mode 100644 index 000000000000..5a5aae5115c9 --- /dev/null +++ b/templates/terraform/custom_expand/bigquery_access_role.go.erb @@ -0,0 +1,24 @@ +<%- # the license inside this block applies to this file + # Copyright 2020 Google Inc. + # Licensed under the Apache License, Version 2.0 (the "License"); + # you may not use this file except in compliance with the License. + # You may obtain a copy of the License at + # + # http://www.apache.org/licenses/LICENSE-2.0 + # + # Unless required by applicable law or agreed to in writing, software + # distributed under the License is distributed on an "AS IS" BASIS, + # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + # See the License for the specific language governing permissions and + # limitations under the License. +-%> +func expand<%= prefix -%><%= titlelize_property(property) -%>(v interface{}, d TerraformResourceData, config *Config) (interface{}, error) { + if v == nil { + return nil, nil + } + + if primitiveRole, ok := bigqueryAccessRoleToPrimitiveMap[v.(string)]; ok { + return primitiveRole, nil + } + return v, nil +} diff --git a/templates/terraform/custom_expand/bigquery_dataset_ref.go.erb b/templates/terraform/custom_expand/bigquery_dataset_ref.go.erb new file mode 100644 index 000000000000..a0f45311c80f --- /dev/null +++ b/templates/terraform/custom_expand/bigquery_dataset_ref.go.erb @@ -0,0 +1,40 @@ +<%# # the license inside this if block pertains to this file + # Copyright 2020 Google Inc. + # Licensed under the Apache License, Version 2.0 (the "License"); + # you may not use this file except in compliance with the License. + # You may obtain a copy of the License at + # + # http://www.apache.org/licenses/LICENSE-2.0 + # + # Unless required by applicable law or agreed to in writing, software + # distributed under the License is distributed on an "AS IS" BASIS, + # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + # See the License for the specific language governing permissions and + # limitations under the License. 
+#%> +func expand<%= prefix -%><%= titlelize_property(property) -%>(v interface{}, d TerraformResourceData, config *Config) (interface{}, error) { + l := v.([]interface{}) + if len(l) == 0 || l[0] == nil { + return nil, nil + } + raw := l[0] + original := raw.(map[string]interface{}) + transformed := make(map[string]interface{}) + + transformedProjectId := original["project_id"] + if val := reflect.ValueOf(transformedProjectId); val.IsValid() && !isEmptyValue(val) { + transformed["projectId"] = transformedProjectId + } + + transformedDatasetId := original["dataset_id"] + if val := reflect.ValueOf(transformedDatasetId); val.IsValid() && !isEmptyValue(val) { + transformed["datasetId"] = transformedDatasetId + } + + if parts := bigqueryDatasetRegexp.FindStringSubmatch(transformedDatasetId.(string)); parts != nil { + transformed["projectId"] = parts[1] + transformed["datasetId"] = parts[2] + } + + return transformed, nil +} diff --git a/templates/terraform/custom_expand/bigquery_table_ref.go.erb b/templates/terraform/custom_expand/bigquery_table_ref.go.erb new file mode 100644 index 000000000000..817dc252aa6a --- /dev/null +++ b/templates/terraform/custom_expand/bigquery_table_ref.go.erb @@ -0,0 +1,46 @@ +<%# # the license inside this if block pertains to this file + # Copyright 2020 Google Inc. + # Licensed under the Apache License, Version 2.0 (the "License"); + # you may not use this file except in compliance with the License. + # You may obtain a copy of the License at + # + # http://www.apache.org/licenses/LICENSE-2.0 + # + # Unless required by applicable law or agreed to in writing, software + # distributed under the License is distributed on an "AS IS" BASIS, + # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + # See the License for the specific language governing permissions and + # limitations under the License. +#%> +func expand<%= prefix -%><%= titlelize_property(property) -%>(v interface{}, d TerraformResourceData, config *Config) (interface{}, error) { + l := v.([]interface{}) + if len(l) == 0 || l[0] == nil { + return nil, nil + } + raw := l[0] + original := raw.(map[string]interface{}) + transformed := make(map[string]interface{}) + + transformedProjectId := original["project_id"] + if val := reflect.ValueOf(transformedProjectId); val.IsValid() && !isEmptyValue(val) { + transformed["projectId"] = transformedProjectId + } + + transformedDatasetId := original["dataset_id"] + if val := reflect.ValueOf(transformedDatasetId); val.IsValid() && !isEmptyValue(val) { + transformed["datasetId"] = transformedDatasetId + } + + transformedTableId := original["table_id"] + if val := reflect.ValueOf(transformedTableId); val.IsValid() && !isEmptyValue(val) { + transformed["tableId"] = transformedTableId + } + + if parts := bigqueryTableRegexp.FindStringSubmatch(transformedTableId.(string)); parts != nil { + transformed["projectId"] = parts[1] + transformed["datasetId"] = parts[2] + transformed["tableId"] = parts[3] + } + + return transformed, nil +} diff --git a/templates/terraform/custom_expand/bigquery_table_ref_array.go.erb b/templates/terraform/custom_expand/bigquery_table_ref_array.go.erb new file mode 100644 index 000000000000..ce30ad84fe3b --- /dev/null +++ b/templates/terraform/custom_expand/bigquery_table_ref_array.go.erb @@ -0,0 +1,50 @@ +<%# # the license inside this if block pertains to this file + # Copyright 2020 Google Inc. 
+ # Licensed under the Apache License, Version 2.0 (the "License"); + # you may not use this file except in compliance with the License. + # You may obtain a copy of the License at + # + # http://www.apache.org/licenses/LICENSE-2.0 + # + # Unless required by applicable law or agreed to in writing, software + # distributed under the License is distributed on an "AS IS" BASIS, + # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + # See the License for the specific language governing permissions and + # limitations under the License. +#%> +func expand<%= prefix -%><%= titlelize_property(property) -%>(v interface{}, d TerraformResourceData, config *Config) (interface{}, error) { + l := v.([]interface{}) + req := make([]interface{}, 0, len(l)) + for _, raw := range l { + if raw == nil { + continue + } + original := raw.(map[string]interface{}) + transformed := make(map[string]interface{}) + + transformedProjectId := original["project_id"] + if val := reflect.ValueOf(transformedProjectId); val.IsValid() && !isEmptyValue(val) { + transformed["projectId"] = transformedProjectId + } + + transformedDatasetId := original["dataset_id"] + if val := reflect.ValueOf(transformedDatasetId); val.IsValid() && !isEmptyValue(val) { + transformed["datasetId"] = transformedDatasetId + } + + transformedTableId := original["table_id"] + if val := reflect.ValueOf(transformedTableId); val.IsValid() && !isEmptyValue(val) { + transformed["tableId"] = transformedTableId + } + + tableRef := regexp.MustCompile("projects/(.+)/datasets/(.+)/tables/(.+)") + if parts := tableRef.FindStringSubmatch(transformedTableId.(string)); parts != nil { + transformed["projectId"] = parts[1] + transformed["datasetId"] = parts[2] + transformed["tableId"] = parts[3] + } + + req = append(req, transformed) + } + return req, nil +} diff --git a/templates/terraform/custom_expand/compute_full_url.erb b/templates/terraform/custom_expand/compute_full_url.erb index 32dbb4da8be5..637528d1adf8 100644 --- a/templates/terraform/custom_expand/compute_full_url.erb +++ b/templates/terraform/custom_expand/compute_full_url.erb @@ -16,7 +16,7 @@ func expand<%= prefix -%><%= titlelize_property(property) -%>(v interface{}, d T if v == nil || v.(string) == "" { return "", nil } - f, err := <%= build_expand_resource_ref('v.(string)', property) %> + f, err := <%= build_expand_resource_ref('v.(string)', property, pwd) %> if err != nil { return nil, fmt.Errorf("Invalid value for <%= property.name.underscore -%>: %s", err) } diff --git a/templates/terraform/custom_expand/data_catalog_tag.go.erb b/templates/terraform/custom_expand/data_catalog_tag.go.erb new file mode 100644 index 000000000000..667da5a756ca --- /dev/null +++ b/templates/terraform/custom_expand/data_catalog_tag.go.erb @@ -0,0 +1,24 @@ +<%# # the license inside this if block pertains to this file + # Copyright 2020 Google Inc. + # Licensed under the Apache License, Version 2.0 (the "License"); + # you may not use this file except in compliance with the License. + # You may obtain a copy of the License at + # + # http://www.apache.org/licenses/LICENSE-2.0 + # + # Unless required by applicable law or agreed to in writing, software + # distributed under the License is distributed on an "AS IS" BASIS, + # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + # See the License for the specific language governing permissions and + # limitations under the License. 
+#%> +func expand<%= prefix -%><%= titlelize_property(property) -%>(v interface{}, d TerraformResourceData, config *Config) (interface{}, error) { + // we flattened the original["enum_value"]["display_name"] object to be just original["enum_value"] so here, + // v is the value we want from the config + transformed := make(map[string]interface{}) + if val := reflect.ValueOf(v); val.IsValid() && !isEmptyValue(val) { + transformed["displayName"] = v + } + + return transformed, nil +} diff --git a/templates/terraform/custom_expand/days_to_duration_string.go.erb b/templates/terraform/custom_expand/days_to_duration_string.go.erb new file mode 100644 index 000000000000..694efb7334d9 --- /dev/null +++ b/templates/terraform/custom_expand/days_to_duration_string.go.erb @@ -0,0 +1,28 @@ +<%- # the license inside this block applies to this file + # Copyright 2020 Google Inc. + # Licensed under the Apache License, Version 2.0 (the "License"); + # you may not use this file except in compliance with the License. + # You may obtain a copy of the License at + # + # http://www.apache.org/licenses/LICENSE-2.0 + # + # Unless required by applicable law or agreed to in writing, software + # distributed under the License is distributed on an "AS IS" BASIS, + # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + # See the License for the specific language governing permissions and + # limitations under the License. +-%> +func expand<%= prefix -%><%= titlelize_property(property) -%>(v interface{}, d TerraformResourceData, config *Config) (interface{}, error) { + if v == nil { + return nil, nil + } + i, ok := v.(int) + if !ok { + return nil, fmt.Errorf("unexpected value is not int: %v", v) + } + if i == 0 { + return "", nil + } + // Day = 86400s + return fmt.Sprintf("%ds", i * 86400), nil +} diff --git a/templates/terraform/custom_expand/firewall_log_config.go.erb b/templates/terraform/custom_expand/firewall_log_config.go.erb new file mode 100644 index 000000000000..9db2f00592f5 --- /dev/null +++ b/templates/terraform/custom_expand/firewall_log_config.go.erb @@ -0,0 +1,33 @@ +<%- # the license inside this block applies to this file + # Copyright 2020 Google Inc. + # Licensed under the Apache License, Version 2.0 (the "License"); + # you may not use this file except in compliance with the License. + # You may obtain a copy of the License at + # + # http://www.apache.org/licenses/LICENSE-2.0 + # + # Unless required by applicable law or agreed to in writing, software + # distributed under the License is distributed on an "AS IS" BASIS, + # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + # See the License for the specific language governing permissions and + # limitations under the License. 
+-%> +func expand<%= prefix -%><%= titlelize_property(property) -%>(v interface{}, d TerraformResourceData, config *Config) (interface{}, error) { + l := v.([]interface{}) + transformed := make(map[string]interface{}) + + if len(l) == 0 || l[0] == nil { + // send enable = enable_logging value to ensure correct logging status if there is no config + transformed["enable"] = d.Get("enable_logging").(bool) + return transformed, nil + } + + raw := l[0] + original := raw.(map[string]interface{}) + + // The log_config block is specified, so logging should be enabled + transformed["enable"] = true + transformed["metadata"] = original["metadata"] + + return transformed, nil +} diff --git a/templates/terraform/custom_expand/json_schema.erb b/templates/terraform/custom_expand/json_schema.erb new file mode 100644 index 000000000000..ef3f0dec09e3 --- /dev/null +++ b/templates/terraform/custom_expand/json_schema.erb @@ -0,0 +1,25 @@ +<%- # the license inside this block applies to this file + # Copyright 2020 Google Inc. + # Licensed under the Apache License, Version 2.0 (the "License"); + # you may not use this file except in compliance with the License. + # You may obtain a copy of the License at + # + # http://www.apache.org/licenses/LICENSE-2.0 + # + # Unless required by applicable law or agreed to in writing, software + # distributed under the License is distributed on an "AS IS" BASIS, + # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + # See the License for the specific language governing permissions and + # limitations under the License. +-%> +func expand<%= prefix -%><%= titlelize_property(property) -%>(v interface{}, d TerraformResourceData, config *Config) (interface{}, error) { + b := []byte(v.(string)) + if len(b) == 0 { + return nil, nil + } + m := make(map[string]interface{}) + if err := json.Unmarshal(b, &m); err != nil { + return nil, err + } + return m, nil +} diff --git a/templates/terraform/custom_expand/network_full_url.erb b/templates/terraform/custom_expand/network_full_url.erb new file mode 100644 index 000000000000..befc8e314a47 --- /dev/null +++ b/templates/terraform/custom_expand/network_full_url.erb @@ -0,0 +1,12 @@ +func expand<%= prefix -%><%= titlelize_property(property) -%>(v interface{}, d TerraformResourceData, config *Config) (interface{}, error) { + if v == nil || v.(string) == "" { + return "", nil + } else if strings.HasPrefix(v.(string), "https://") { + return v, nil + } + url, err := replaceVars(d, config, "{{ComputeBasePath}}" + v.(string)) + if err != nil { + return "", err + } + return ConvertSelfLinkToV1(url), nil +} diff --git a/templates/terraform/custom_expand/network_management_connectivity_test_name.go.erb b/templates/terraform/custom_expand/network_management_connectivity_test_name.go.erb new file mode 100644 index 000000000000..2fd5a83ffe4e --- /dev/null +++ b/templates/terraform/custom_expand/network_management_connectivity_test_name.go.erb @@ -0,0 +1,22 @@ +<%- # the license inside this block applies to this file + # Copyright 2020 Google Inc. + # Licensed under the Apache License, Version 2.0 (the "License"); + # you may not use this file except in compliance with the License. + # You may obtain a copy of the License at + # + # http://www.apache.org/licenses/LICENSE-2.0 + # + # Unless required by applicable law or agreed to in writing, software + # distributed under the License is distributed on an "AS IS" BASIS, + # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+ # See the License for the specific language governing permissions and + # limitations under the License. +-%> +func expand<%= prefix -%><%= titlelize_property(property) -%>(v interface{}, d TerraformResourceData, config *Config) (interface{}, error) { + // projects/X/tests/Y - note not "connectivityTests" + f, err := parseGlobalFieldValue("tests", v.(string), "project", d, config, true) + if err != nil { + return nil, fmt.Errorf("Invalid value for name: %s", err) + } + return f.RelativeLink(), nil +} diff --git a/templates/terraform/custom_expand/preserved_state_disks.go.erb b/templates/terraform/custom_expand/preserved_state_disks.go.erb new file mode 100644 index 000000000000..dacc8d489e08 --- /dev/null +++ b/templates/terraform/custom_expand/preserved_state_disks.go.erb @@ -0,0 +1,43 @@ +<%- # the license inside this block applies to this file + # Copyright 2020 Google Inc. + # Licensed under the Apache License, Version 2.0 (the "License"); + # you may not use this file except in compliance with the License. + # You may obtain a copy of the License at + # + # http://www.apache.org/licenses/LICENSE-2.0 + # + # Unless required by applicable law or agreed to in writing, software + # distributed under the License is distributed on an "AS IS" BASIS, + # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + # See the License for the specific language governing permissions and + # limitations under the License. +-%> +func expand<%= prefix -%><%= titlelize_property(property) -%>(v interface{}, d TerraformResourceData, config *Config) (interface{}, error) { + if v == nil { + return map[string]interface{}{}, nil + } + l := v.(*schema.Set).List() + req := make(map[string]interface{}) + for _, raw := range l { + if raw == nil { + continue + } + original := raw.(map[string]interface{}) + deviceName := original["device_name"].(string) + diskObj := make(map[string]interface{}) + deleteRule := original["delete_rule"].(string) + if deleteRule != "" { + diskObj["autoDelete"] = deleteRule + } + source := original["source"] + if source != "" { + diskObj["source"] = source + } + mode := original["mode"] + if mode != "" { + diskObj["mode"] = mode + } + req[deviceName] = diskObj + } + return req, nil +} diff --git a/templates/terraform/custom_expand/sd_full_url.erb b/templates/terraform/custom_expand/sd_full_url.erb new file mode 100644 index 000000000000..71543d60488c --- /dev/null +++ b/templates/terraform/custom_expand/sd_full_url.erb @@ -0,0 +1,12 @@ +func expand<%= prefix -%><%= titlelize_property(property) -%>(v interface{}, d TerraformResourceData, config *Config) (interface{}, error) { + if v == nil || v.(string) == "" { + return "", nil + } else if strings.HasPrefix(v.(string), "https://") { + return v, nil + } + url, err := replaceVars(d, config, "{{ServiceDirectoryBasePath}}" + v.(string)) + if err != nil { + return "", err + } + return url, nil +} diff --git a/templates/terraform/custom_flatten/bigquery_connection_flatten.go.erb b/templates/terraform/custom_flatten/bigquery_connection_flatten.go.erb new file mode 100644 index 000000000000..62b67471440b --- /dev/null +++ b/templates/terraform/custom_flatten/bigquery_connection_flatten.go.erb @@ -0,0 +1,22 @@ +<%# The license inside this block applies to this file. + # Copyright 2020 Google Inc. + # Licensed under the Apache License, Version 2.0 (the "License"); + # you may not use this file except in compliance with the License. 
+ # You may obtain a copy of the License at + # + # http://www.apache.org/licenses/LICENSE-2.0 + # + # Unless required by applicable law or agreed to in writing, software + # distributed under the License is distributed on an "AS IS" BASIS, + # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + # See the License for the specific language governing permissions and + # limitations under the License. +-%> +func flatten<%= prefix -%><%= titlelize_property(property) -%>(v interface{}, d *schema.ResourceData, config *Config) interface{} { + return []interface{}{ + map[string]interface{}{ + "username": d.Get("cloud_sql.0.credential.0.username"), + "password": d.Get("cloud_sql.0.credential.0.password"), + }, + } +} diff --git a/templates/terraform/custom_flatten/bigquery_dataset_ref.go.erb b/templates/terraform/custom_flatten/bigquery_dataset_ref.go.erb new file mode 100644 index 000000000000..d3d56745ea69 --- /dev/null +++ b/templates/terraform/custom_flatten/bigquery_dataset_ref.go.erb @@ -0,0 +1,32 @@ +<%# The license inside this block applies to this file. + # Copyright 2020 Google Inc. + # Licensed under the Apache License, Version 2.0 (the "License"); + # you may not use this file except in compliance with the License. + # You may obtain a copy of the License at + # + # http://www.apache.org/licenses/LICENSE-2.0 + # + # Unless required by applicable law or agreed to in writing, software + # distributed under the License is distributed on an "AS IS" BASIS, + # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + # See the License for the specific language governing permissions and + # limitations under the License. +-%> +func flatten<%= prefix -%><%= titlelize_property(property) -%>(v interface{}, d *schema.ResourceData, config *Config) interface{} { + if v == nil { + return nil + } + original := v.(map[string]interface{}) + if len(original) == 0 { + return nil + } + transformed := make(map[string]interface{}) + transformed["project_id"] = original["projectId"] + transformed["dataset_id"] = original["datasetId"] + + if bigqueryDatasetRegexp.MatchString(d.Get("query.0.default_dataset.0.dataset_id").(string)) { + // The user specified the dataset_id as a URL, so store it in state that way + transformed["dataset_id"] = fmt.Sprintf("projects/%s/datasets/%s", transformed["project_id"], transformed["dataset_id"]) + } + return []interface{}{transformed} +} diff --git a/templates/terraform/custom_flatten/bigquery_table_ref.go.erb b/templates/terraform/custom_flatten/bigquery_table_ref.go.erb new file mode 100644 index 000000000000..ba39eac461eb --- /dev/null +++ b/templates/terraform/custom_flatten/bigquery_table_ref.go.erb @@ -0,0 +1,33 @@ +<%# The license inside this block applies to this file. + # Copyright 2020 Google Inc. + # Licensed under the Apache License, Version 2.0 (the "License"); + # you may not use this file except in compliance with the License. + # You may obtain a copy of the License at + # + # http://www.apache.org/licenses/LICENSE-2.0 + # + # Unless required by applicable law or agreed to in writing, software + # distributed under the License is distributed on an "AS IS" BASIS, + # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + # See the License for the specific language governing permissions and + # limitations under the License. 
+-%> +func flatten<%= prefix -%><%= titlelize_property(property) -%>(v interface{}, d *schema.ResourceData, config *Config) interface{} { + if v == nil { + return nil + } + original := v.(map[string]interface{}) + if len(original) == 0 { + return nil + } + transformed := make(map[string]interface{}) + transformed["project_id"] = original["projectId"] + transformed["dataset_id"] = original["datasetId"] + transformed["table_id"] = original["tableId"] + + if bigqueryTableRegexp.MatchString(d.Get("<%= prop_path -%>").(string)) { + // The user specified the table_id as a URL, so store it in state that way + transformed["table_id"] = fmt.Sprintf("projects/%s/datasets/%s/tables/%s", transformed["project_id"], transformed["dataset_id"], transformed["table_id"]) + } + return []interface{}{transformed} +} diff --git a/templates/terraform/custom_flatten/bigquery_table_ref_copy_destinationtable.go.erb b/templates/terraform/custom_flatten/bigquery_table_ref_copy_destinationtable.go.erb new file mode 100644 index 000000000000..fc4d6f3f55bc --- /dev/null +++ b/templates/terraform/custom_flatten/bigquery_table_ref_copy_destinationtable.go.erb @@ -0,0 +1,18 @@ +<%# The license inside this block applies to this file. + # Copyright 2020 Google Inc. + # Licensed under the Apache License, Version 2.0 (the "License"); + # you may not use this file except in compliance with the License. + # You may obtain a copy of the License at + # + # http://www.apache.org/licenses/LICENSE-2.0 + # + # Unless required by applicable law or agreed to in writing, software + # distributed under the License is distributed on an "AS IS" BASIS, + # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + # See the License for the specific language governing permissions and + # limitations under the License. +-%> +<%= lines(compile_template(pwd + '/templates/terraform/custom_flatten/bigquery_table_ref.go.erb', + prefix: prefix, + property: property, + prop_path: 'copy.0.destination_table.0.table_id')) -%> diff --git a/templates/terraform/custom_flatten/bigquery_table_ref_copy_sourcetables.go.erb b/templates/terraform/custom_flatten/bigquery_table_ref_copy_sourcetables.go.erb new file mode 100644 index 000000000000..98132a5e5271 --- /dev/null +++ b/templates/terraform/custom_flatten/bigquery_table_ref_copy_sourcetables.go.erb @@ -0,0 +1,41 @@ +<%# The license inside this block applies to this file. + # Copyright 2020 Google Inc. + # Licensed under the Apache License, Version 2.0 (the "License"); + # you may not use this file except in compliance with the License. + # You may obtain a copy of the License at + # + # http://www.apache.org/licenses/LICENSE-2.0 + # + # Unless required by applicable law or agreed to in writing, software + # distributed under the License is distributed on an "AS IS" BASIS, + # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + # See the License for the specific language governing permissions and + # limitations under the License. 
+-%> +func flatten<%= prefix -%><%= titlelize_property(property) -%>(v interface{}, d *schema.ResourceData, config *Config) interface{} { + if v == nil { + return v + } + l := v.([]interface{}) + transformed := make([]interface{}, 0, len(l)) + for i, raw := range l { + original := raw.(map[string]interface{}) + if len(original) < 1 { + // Do not include empty json objects coming back from the api + continue + } + t := map[string]interface{}{ + "project_id": original["projectId"], + "dataset_id": original["datasetId"], + "table_id": original["tableId"], + } + + if bigqueryTableRegexp.MatchString(d.Get(fmt.Sprintf("copy.0.source_tables.%d.table_id", i)).(string)) { + // The user specified the table_id as a URL, so store it in state that way + t["table_id"] = fmt.Sprintf("projects/%s/datasets/%s/tables/%s", t["project_id"], t["dataset_id"], t["table_id"]) + } + transformed = append(transformed, t) + } + + return transformed +} diff --git a/templates/terraform/custom_flatten/bigquery_table_ref_extract_sourcetable.go.erb b/templates/terraform/custom_flatten/bigquery_table_ref_extract_sourcetable.go.erb new file mode 100644 index 000000000000..702bd0f39ac7 --- /dev/null +++ b/templates/terraform/custom_flatten/bigquery_table_ref_extract_sourcetable.go.erb @@ -0,0 +1,18 @@ +<%# The license inside this block applies to this file. + # Copyright 2020 Google Inc. + # Licensed under the Apache License, Version 2.0 (the "License"); + # you may not use this file except in compliance with the License. + # You may obtain a copy of the License at + # + # http://www.apache.org/licenses/LICENSE-2.0 + # + # Unless required by applicable law or agreed to in writing, software + # distributed under the License is distributed on an "AS IS" BASIS, + # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + # See the License for the specific language governing permissions and + # limitations under the License. +-%> +<%= lines(compile_template(pwd + '/templates/terraform/custom_flatten/bigquery_table_ref.go.erb', + prefix: prefix, + property: property, + prop_path: 'extract.0.source_table.0.table_id')) -%> diff --git a/templates/terraform/custom_flatten/bigquery_table_ref_load_destinationtable.go.erb b/templates/terraform/custom_flatten/bigquery_table_ref_load_destinationtable.go.erb new file mode 100644 index 000000000000..5d258fd07710 --- /dev/null +++ b/templates/terraform/custom_flatten/bigquery_table_ref_load_destinationtable.go.erb @@ -0,0 +1,18 @@ +<%# The license inside this block applies to this file. + # Copyright 2020 Google Inc. + # Licensed under the Apache License, Version 2.0 (the "License"); + # you may not use this file except in compliance with the License. + # You may obtain a copy of the License at + # + # http://www.apache.org/licenses/LICENSE-2.0 + # + # Unless required by applicable law or agreed to in writing, software + # distributed under the License is distributed on an "AS IS" BASIS, + # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + # See the License for the specific language governing permissions and + # limitations under the License. 
+-%> +<%= lines(compile_template(pwd + '/templates/terraform/custom_flatten/bigquery_table_ref.go.erb', + prefix: prefix, + property: property, + prop_path: 'load.0.destination_table.0.table_id')) -%> diff --git a/templates/terraform/custom_flatten/bigquery_table_ref_query_destinationtable.go.erb b/templates/terraform/custom_flatten/bigquery_table_ref_query_destinationtable.go.erb new file mode 100644 index 000000000000..c70252648382 --- /dev/null +++ b/templates/terraform/custom_flatten/bigquery_table_ref_query_destinationtable.go.erb @@ -0,0 +1,18 @@ +<%# The license inside this block applies to this file. + # Copyright 2020 Google Inc. + # Licensed under the Apache License, Version 2.0 (the "License"); + # you may not use this file except in compliance with the License. + # You may obtain a copy of the License at + # + # http://www.apache.org/licenses/LICENSE-2.0 + # + # Unless required by applicable law or agreed to in writing, software + # distributed under the License is distributed on an "AS IS" BASIS, + # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + # See the License for the specific language governing permissions and + # limitations under the License. +-%> +<%= lines(compile_template(pwd + '/templates/terraform/custom_flatten/bigquery_table_ref.go.erb', + prefix: prefix, + property: property, + prop_path: 'query.0.destination_table.0.table_id')) -%> diff --git a/templates/terraform/custom_flatten/data_catalog_tag.go.erb b/templates/terraform/custom_flatten/data_catalog_tag.go.erb new file mode 100644 index 000000000000..d996ef47e3dc --- /dev/null +++ b/templates/terraform/custom_flatten/data_catalog_tag.go.erb @@ -0,0 +1,21 @@ +<%# The license inside this block applies to this file. + # Copyright 2020 Google Inc. + # Licensed under the Apache License, Version 2.0 (the "License"); + # you may not use this file except in compliance with the License. + # You may obtain a copy of the License at + # + # http://www.apache.org/licenses/LICENSE-2.0 + # + # Unless required by applicable law or agreed to in writing, software + # distributed under the License is distributed on an "AS IS" BASIS, + # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + # See the License for the specific language governing permissions and + # limitations under the License. +-%> +func flatten<%= prefix -%><%= titlelize_property(property) -%>(v interface{}, d *schema.ResourceData, config *Config) interface{} { + if v == nil { + return nil + } + + return v.(map[string]interface{})["displayName"] +} \ No newline at end of file diff --git a/templates/terraform/custom_flatten/duration_string_to_days.go.erb b/templates/terraform/custom_flatten/duration_string_to_days.go.erb new file mode 100644 index 000000000000..8ab8600a3d2a --- /dev/null +++ b/templates/terraform/custom_flatten/duration_string_to_days.go.erb @@ -0,0 +1,27 @@ +<%- # the license inside this block applies to this file + # Copyright 2020 Google Inc. + # Licensed under the Apache License, Version 2.0 (the "License"); + # you may not use this file except in compliance with the License. + # You may obtain a copy of the License at + # + # http://www.apache.org/licenses/LICENSE-2.0 + # + # Unless required by applicable law or agreed to in writing, software + # distributed under the License is distributed on an "AS IS" BASIS, + # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + # See the License for the specific language governing permissions and + # limitations under the License. 
+-%> +func flatten<%= prefix -%><%= titlelize_property(property) -%>(v interface{}, d *schema.ResourceData, config *Config) interface{} { + if v == nil { + return nil + } + if v.(string) == "" { + return nil + } + dur, err := time.ParseDuration(v.(string)) + if err != nil { + return nil + } + return int(dur/(time.Hour*24)) +} diff --git a/templates/terraform/custom_flatten/firewall_log_config.go.erb b/templates/terraform/custom_flatten/firewall_log_config.go.erb new file mode 100644 index 000000000000..ff7ff4d7903f --- /dev/null +++ b/templates/terraform/custom_flatten/firewall_log_config.go.erb @@ -0,0 +1,32 @@ +<%# The license inside this block applies to this file. + # Copyright 2020 Google Inc. + # Licensed under the Apache License, Version 2.0 (the "License"); + # you may not use this file except in compliance with the License. + # You may obtain a copy of the License at + # + # http://www.apache.org/licenses/LICENSE-2.0 + # + # Unless required by applicable law or agreed to in writing, software + # distributed under the License is distributed on an "AS IS" BASIS, + # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + # See the License for the specific language governing permissions and + # limitations under the License. +-%> +func flatten<%= prefix -%><%= titlelize_property(property) -%>(v interface{}, d *schema.ResourceData, config *Config) interface{} { + if v == nil { + return nil + } + original := v.(map[string]interface{}) + if len(original) == 0 { + return nil + } + + v, ok := original["enable"] + if ok && !v.(bool) { + return nil + } + + transformed := make(map[string]interface{}) + transformed["metadata"] = original["metadata"] + return []interface{}{transformed} +} diff --git a/templates/terraform/custom_flatten/full_to_relative_path.erb b/templates/terraform/custom_flatten/full_to_relative_path.erb new file mode 100644 index 000000000000..c2d751040da9 --- /dev/null +++ b/templates/terraform/custom_flatten/full_to_relative_path.erb @@ -0,0 +1,10 @@ +func flatten<%= prefix -%><%= titlelize_property(property) -%>(v interface{}, d *schema.ResourceData, config *Config) interface{} { + if v == nil { + return v + } + relative, err := getRelativePath(v.(string)) + if err != nil { + return v + } + return relative +} diff --git a/templates/terraform/custom_flatten/json_schema.erb b/templates/terraform/custom_flatten/json_schema.erb new file mode 100644 index 000000000000..ee6080405942 --- /dev/null +++ b/templates/terraform/custom_flatten/json_schema.erb @@ -0,0 +1,25 @@ +<%# The license inside this block applies to this file. + # Copyright 2020 Google Inc. + # Licensed under the Apache License, Version 2.0 (the "License"); + # you may not use this file except in compliance with the License. + # You may obtain a copy of the License at + # + # http://www.apache.org/licenses/LICENSE-2.0 + # + # Unless required by applicable law or agreed to in writing, software + # distributed under the License is distributed on an "AS IS" BASIS, + # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + # See the License for the specific language governing permissions and + # limitations under the License. +-%> +func flatten<%= prefix -%><%= titlelize_property(property) -%>(v interface{}, d *schema.ResourceData, config *Config) interface{} { + if v == nil { + return nil + } + b, err := json.Marshal(v) + if err != nil { + // TODO: return error once https://github.com/GoogleCloudPlatform/magic-modules/issues/3257 is fixed. 
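+ // On failure b is nil, so the string(b) below stores "" in state.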
+ log.Printf("[ERROR] failed to marshal schema to JSON: %v", err) + } + return string(b) +} diff --git a/templates/terraform/custom_flatten/preserved_state_disks.go.erb b/templates/terraform/custom_flatten/preserved_state_disks.go.erb new file mode 100644 index 000000000000..d3008d7c1d0e --- /dev/null +++ b/templates/terraform/custom_flatten/preserved_state_disks.go.erb @@ -0,0 +1,35 @@ +<%# The license inside this block applies to this file. + # Copyright 2019 Google Inc. + # Licensed under the Apache License, Version 2.0 (the "License"); + # you may not use this file except in compliance with the License. + # You may obtain a copy of the License at + # + # http://www.apache.org/licenses/LICENSE-2.0 + # + # Unless required by applicable law or agreed to in writing, software + # distributed under the License is distributed on an "AS IS" BASIS, + # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + # See the License for the specific language governing permissions and + # limitations under the License. +-%> +func flatten<%= prefix -%><%= titlelize_property(property) -%>(v interface{}, d *schema.ResourceData, config *Config) interface{} { + if v == nil { + return v + } + disks := v.(map[string]interface{}) + transformed := make([]interface{}, 0, len(disks)) + for devName, deleteRuleRaw := range disks { + diskObj := deleteRuleRaw.(map[string]interface{}) + source, err := getRelativePath(diskObj["source"].(string)) + if err != nil { + source = diskObj["source"].(string) + } + transformed = append(transformed, map[string]interface{}{ + "device_name": devName, + "delete_rule": diskObj["autoDelete"], + "source": source, + "mode": diskObj["mode"], + }) + } + return transformed +} \ No newline at end of file diff --git a/templates/terraform/custom_import/cloud_asset_feed.go.erb b/templates/terraform/custom_import/cloud_asset_feed.go.erb new file mode 100644 index 000000000000..a089f93aee9a --- /dev/null +++ b/templates/terraform/custom_import/cloud_asset_feed.go.erb @@ -0,0 +1,4 @@ +if err := d.Set("name", d.Id()); err != nil { + return nil, err +} +return []*schema.ResourceData{d}, nil \ No newline at end of file diff --git a/templates/terraform/custom_import/data_catalog_entry.go.erb b/templates/terraform/custom_import/data_catalog_entry.go.erb new file mode 100644 index 000000000000..5f39756fdfdb --- /dev/null +++ b/templates/terraform/custom_import/data_catalog_entry.go.erb @@ -0,0 +1,17 @@ + config := meta.(*Config) + + // current import_formats can't import fields with forward slashes in their value + if err := parseImportId([]string{"(?P.+)"}, d, config); err != nil { + return nil, err + } + + name := d.Get("name").(string) + egRegex := regexp.MustCompile("(projects/.+/locations/.+/entryGroups/.+)/entries/(.+)") + + parts := egRegex.FindStringSubmatch(name) + if len(parts) != 3 { + return nil, fmt.Errorf("entry name does not fit the format %s", egRegex) + } + d.Set("entry_group", parts[1]) + d.Set("entry_id", parts[2]) + return []*schema.ResourceData{d}, nil diff --git a/templates/terraform/custom_import/data_catalog_entry_group.go.erb b/templates/terraform/custom_import/data_catalog_entry_group.go.erb new file mode 100644 index 000000000000..d9b1ffaefc85 --- /dev/null +++ b/templates/terraform/custom_import/data_catalog_entry_group.go.erb @@ -0,0 +1,18 @@ + config := meta.(*Config) + + // current import_formats can't import fields with forward slashes in their value + if err := parseImportId([]string{"(?P.+)"}, d, config); err != nil { + return nil, err + } + + name 
:= d.Get("name").(string) + egRegex := regexp.MustCompile("projects/(.+)/locations/(.+)/entryGroups/(.+)") + + parts := egRegex.FindStringSubmatch(name) + if len(parts) != 4 { + return nil, fmt.Errorf("entry group name does not fit the format %s", egRegex) + } + d.Set("project", parts[1]) + d.Set("region", parts[2]) + d.Set("entry_group_id", parts[3]) + return []*schema.ResourceData{d}, nil diff --git a/templates/terraform/custom_import/data_catalog_tag.go.erb b/templates/terraform/custom_import/data_catalog_tag.go.erb new file mode 100644 index 000000000000..0f676f1ed82e --- /dev/null +++ b/templates/terraform/custom_import/data_catalog_tag.go.erb @@ -0,0 +1,17 @@ + config := meta.(*Config) + + // current import_formats can't import fields with forward slashes in their value + if err := parseImportId([]string{"(?P.+)"}, d, config); err != nil { + return nil, err + } + + name := d.Get("name").(string) + egRegex := regexp.MustCompile("(.+)/tags") + + parts := egRegex.FindStringSubmatch(name) + if len(parts) != 2 { + return nil, fmt.Errorf("entry name does not fit the format %s", egRegex) + } + + d.Set("parent", parts[1]) + return []*schema.ResourceData{d}, nil diff --git a/templates/terraform/custom_import/data_catalog_tag_template.go.erb b/templates/terraform/custom_import/data_catalog_tag_template.go.erb new file mode 100644 index 000000000000..203eb935c301 --- /dev/null +++ b/templates/terraform/custom_import/data_catalog_tag_template.go.erb @@ -0,0 +1,18 @@ + config := meta.(*Config) + + // current import_formats can't import fields with forward slashes in their value + if err := parseImportId([]string{"(?P.+)"}, d, config); err != nil { + return nil, err + } + + name := d.Get("name").(string) + egRegex := regexp.MustCompile("projects/(.+)/locations/(.+)/tagTemplates/(.+)") + + parts := egRegex.FindStringSubmatch(name) + if len(parts) != 4 { + return nil, fmt.Errorf("tag template name does not fit the format %s", egRegex) + } + d.Set("project", parts[1]) + d.Set("region", parts[2]) + d.Set("tag_template_id", parts[3]) + return []*schema.ResourceData{d}, nil diff --git a/templates/terraform/custom_import/kms_key_ring_import_job.go.erb b/templates/terraform/custom_import/kms_key_ring_import_job.go.erb new file mode 100644 index 000000000000..c4e04cd91b2e --- /dev/null +++ b/templates/terraform/custom_import/kms_key_ring_import_job.go.erb @@ -0,0 +1,20 @@ + + config := meta.(*Config) + + // current import_formats can't import fields with forward slashes in their value + if err := parseImportId([]string{"(?P.+)"}, d, config); err != nil { + return nil, err + } + + stringParts := strings.Split(d.Get("name").(string), "/") + if len(stringParts) != 8 { + return nil, fmt.Errorf( + "Saw %s when the name is expected to have shape %s", + d.Get("name"), + "projects/{{project}}/locations/{{location}}/keyRings/{{keyRing}}/importJobs/{{importJobId}}", + ) + } + + d.Set("key_ring", stringParts[3]) + d.Set("import_job_id", stringParts[5]) + return []*schema.ResourceData{d}, nil diff --git a/templates/terraform/custom_import/service_directory_endpoint.go.erb b/templates/terraform/custom_import/service_directory_endpoint.go.erb new file mode 100644 index 000000000000..e31eebb61713 --- /dev/null +++ b/templates/terraform/custom_import/service_directory_endpoint.go.erb @@ -0,0 +1,39 @@ +config := meta.(*Config) + +// current import_formats cannot import fields with forward slashes in their value +if err := parseImportId([]string{"(?P.+)"}, d, config); err != nil { + return nil, err +} + +nameParts := 
strings.Split(d.Get("name").(string), "/") +if len(nameParts) == 10 { + // `projects/{{project}}/locations/{{location}}/namespaces/{{namespace_id}}/services/{{service_id}}/endpoints/{{endpoint_id}}` + d.Set("service", fmt.Sprintf("projects/%s/locations/%s/namespaces/%s/services/%s", nameParts[1], nameParts[3], nameParts[5], nameParts[7])) + d.Set("endpoint_id", nameParts[9]) +} else if len(nameParts) == 5 { + // `{{project}}/{{location}}/{{namespace_id}}/{{service_id}}/{{endpoint_id}}` + d.Set("service", fmt.Sprintf("projects/%s/locations/%s/namespaces/%s/services/%s", nameParts[0], nameParts[1], nameParts[2], nameParts[3])) + d.Set("endpoint_id", nameParts[4]) + id := fmt.Sprintf("projects/%s/locations/%s/namespaces/%s/services/%s/endpoints/%s", nameParts[0], nameParts[1], nameParts[2], nameParts[3], nameParts[4]) + d.Set("name", id) + d.SetId(id) +} else if len(nameParts) == 4 { + // `{{location}}/{{namespace_id}}/{{service_id}}/{{endpoint_id}}` + project, err := getProject(d, config) + if err != nil { + return nil, err + } + d.Set("service", fmt.Sprintf("projects/%s/locations/%s/namespaces/%s/services/%s", project, nameParts[0], nameParts[1], nameParts[2])) + d.Set("endpoint_id", nameParts[3]) + id := fmt.Sprintf("projects/%s/locations/%s/namespaces/%s/services/%s/endpoints/%s", project, nameParts[0], nameParts[1], nameParts[2], nameParts[3]) + d.Set("name", id) + d.SetId(id) +} else { + return nil, fmt.Errorf( + "Saw %s when the name is expected to have shape %s, %s or %s", + d.Get("name"), + "projects/{{project}}/locations/{{location}}/namespaces/{{namespace_id}}/services/{{service_id}}/endpoints/{{endpoint_id}}", + "{{project}}/{{location}}/{{namespace_id}}/{{service_id}}/{{endpoint_id}}", + "{{location}}/{{namespace_id}}/{{service_id}}/{{endpoint_id}}") +} +return []*schema.ResourceData{d}, nil diff --git a/templates/terraform/custom_import/service_directory_namespace.go.erb b/templates/terraform/custom_import/service_directory_namespace.go.erb new file mode 100644 index 000000000000..153aceda8764 --- /dev/null +++ b/templates/terraform/custom_import/service_directory_namespace.go.erb @@ -0,0 +1,42 @@ +config := meta.(*Config) + +// current import_formats cannot import fields with forward slashes in their value +if err := parseImportId([]string{"(?P.+)"}, d, config); err != nil { + return nil, err +} + +nameParts := strings.Split(d.Get("name").(string), "/") +if len(nameParts) == 6 { + // `projects/{{project}}/locations/{{location}}/namespaces/{{namespace_id}}` + d.Set("project", nameParts[1]) + d.Set("location", nameParts[3]) + d.Set("namespace_id", nameParts[5]) +} else if len(nameParts) == 3 { + // `{{project}}/{{location}}/{{namespace_id}}` + d.Set("project", nameParts[0]) + d.Set("location", nameParts[1]) + d.Set("namespace_id", nameParts[2]) + id := fmt.Sprintf("projects/%s/locations/%s/namespaces/%s", nameParts[0], nameParts[1], nameParts[2]) + d.Set("name", id) + d.SetId(id) +} else if len(nameParts) == 2 { + // `{{location}}/{{namespace_id}}` + project, err := getProject(d, config) + if err != nil { + return nil, err + } + d.Set("project", project) + d.Set("location", nameParts[0]) + d.Set("namespace_id", nameParts[1]) + id := fmt.Sprintf("projects/%s/locations/%s/namespaces/%s", project, nameParts[0], nameParts[1]) + d.Set("name", id) + d.SetId(id) +} else { + return nil, fmt.Errorf( + "Saw %s when the name is expected to have shape %s, %s or %s", + d.Get("name"), + "projects/{{project}}/locations/{{location}}/namespaces/{{namespace_id}}", + 
"{{project}}/{{location}}/{{namespace_id}}", + "{{location}}/{{namespace_id}}") +} +return []*schema.ResourceData{d}, nil diff --git a/templates/terraform/custom_import/service_directory_service.go.erb b/templates/terraform/custom_import/service_directory_service.go.erb new file mode 100644 index 000000000000..8b7da023ea4e --- /dev/null +++ b/templates/terraform/custom_import/service_directory_service.go.erb @@ -0,0 +1,40 @@ +config := meta.(*Config) + +// current import_formats cannot import fields with forward slashes in their value +if err := parseImportId([]string{"(?P.+)"}, d, config); err != nil { + return nil, err +} + +nameParts := strings.Split(d.Get("name").(string), "/") +if len(nameParts) == 8 { + // `projects/{{project}}/locations/{{location}}/namespaces/{{namespace_id}}/services/{{service_id}}` + d.Set("namespace", fmt.Sprintf("projects/%s/locations/%s/namespaces/%s", nameParts[1], nameParts[3], nameParts[5])) + d.Set("service_id", nameParts[7]) +} else if len(nameParts) == 4 { + // `{{project}}/{{location}}/{{namespace_id}}/{{service_id}}` + d.Set("namespace", fmt.Sprintf("projects/%s/locations/%s/namespaces/%s", nameParts[0], nameParts[1], nameParts[2])) + d.Set("service_id", nameParts[3]) + id := fmt.Sprintf("projects/%s/locations/%s/namespaces/%s/services/%s", nameParts[0], nameParts[1], nameParts[2], nameParts[3]) + d.Set("name", id) + d.SetId(id) +} else if len(nameParts) == 3 { + // `{{location}}/{{namespace_id}}/{{service_id}}` + project, err := getProject(d, config) + if err != nil { + return nil, err + } + d.Set("namespace", fmt.Sprintf("projects/%s/locations/%s/namespaces/%s", project, nameParts[0], nameParts[1])) + d.Set("service_id", nameParts[2]) + id := fmt.Sprintf("projects/%s/locations/%s/namespaces/%s/services/%s", project, nameParts[0], nameParts[1], nameParts[2]) + d.Set("name", id) + d.SetId(id) +} else { + return nil, fmt.Errorf( + "Saw %s when the name is expected to have shape %s, %s or %s", + d.Get("name"), + "projects/{{project}}/locations/{{location}}/namespaces/{{namespace_id}}/services/{{service_id}}", + "{{project}}/{{location}}/{{namespace_id}}/{{service_id}}", + "{{location}}/{{namespace_id}}/{{service_id}}") +} +return []*schema.ResourceData{d}, nil + diff --git a/templates/terraform/decoders/backend_service.go.erb b/templates/terraform/decoders/backend_service.go.erb index 50df79d6a47b..3e0796f151a5 100644 --- a/templates/terraform/decoders/backend_service.go.erb +++ b/templates/terraform/decoders/backend_service.go.erb @@ -24,4 +24,17 @@ if ok && m["enabled"] == false { delete(res, "iap") } +// Requests with consistentHash will error for specific values of +// localityLbPolicy. However, the API will not remove it if the backend +// service is updated to from supporting to non-supporting localityLbPolicy +// (e.g. RING_HASH to RANDOM), which causes an error on subsequent update. +// In order to prevent errors, we ignore any consistentHash returned +// from the API when the localityLbPolicy doesn't support it. 
+if v, ok := res["localityLbPolicy"]; ok { + lbPolicy := v.(string) + if lbPolicy != "MAGLEV" && lbPolicy != "RING_HASH" { + delete(res, "consistentHash") + } +} + return res, nil diff --git a/templates/terraform/decoders/cloudiot_device_registry.go.erb b/templates/terraform/decoders/cloudiot_device_registry.go.erb new file mode 100644 index 000000000000..1ce63da41715 --- /dev/null +++ b/templates/terraform/decoders/cloudiot_device_registry.go.erb @@ -0,0 +1,31 @@ +config := meta.(*Config) + +log.Printf("[DEBUG] Decoding state notification config: %q", d.Id()) +log.Printf("[DEBUG] State notification config before decoding: %v", d.Get("state_notification_config")) +if err := d.Set("state_notification_config", flattenCloudIotDeviceRegistryStateNotificationConfig(res["stateNotificationConfig"], d, config)); err != nil { + return nil, fmt.Errorf("Error reading DeviceRegistry: %s", err) +} +log.Printf("[DEBUG] State notification config after decoding: %v", d.Get("state_notification_config")) + +log.Printf("[DEBUG] Decoding HTTP config: %q", d.Id()) +log.Printf("[DEBUG] HTTP config before decoding: %v", d.Get("http_config")) +if err := d.Set("http_config", flattenCloudIotDeviceRegistryHTTPConfig(res["httpConfig"], d, config)); err != nil { + return nil, fmt.Errorf("Error reading DeviceRegistry: %s", err) +} +log.Printf("[DEBUG] HTTP config after decoding: %v", d.Get("http_config")) + +log.Printf("[DEBUG] Decoding MQTT config: %q", d.Id()) +log.Printf("[DEBUG] MQTT config before decoding: %v", d.Get("mqtt_config")) +if err := d.Set("mqtt_config", flattenCloudIotDeviceRegistryMqttConfig(res["mqttConfig"], d, config)); err != nil { + return nil, fmt.Errorf("Error reading DeviceRegistry: %s", err) +} +log.Printf("[DEBUG] MQTT config after decoding: %v", d.Get("mqtt_config")) + +log.Printf("[DEBUG] Decoding credentials: %q", d.Id()) +log.Printf("[DEBUG] credentials before decoding: %v", d.Get("credentials")) +if err := d.Set("credentials", flattenCloudIotDeviceRegistryCredentials(res["credentials"], d, config)); err != nil { + return nil, fmt.Errorf("Error reading DeviceRegistry: %s", err) +} +log.Printf("[DEBUG] credentials after decoding: %v", d.Get("credentials")) + +return res, nil diff --git a/templates/terraform/decoders/containeranalysis_occurrence.go.erb b/templates/terraform/decoders/containeranalysis_occurrence.go.erb new file mode 100644 index 000000000000..0b7dbc5c91bc --- /dev/null +++ b/templates/terraform/decoders/containeranalysis_occurrence.go.erb @@ -0,0 +1,43 @@ +<%# The license inside this block applies to this file. + # Copyright 2020 Google Inc. + # Licensed under the Apache License, Version 2.0 (the "License"); + # you may not use this file except in compliance with the License. + # You may obtain a copy of the License at + # + # http://www.apache.org/licenses/LICENSE-2.0 + # + # Unless required by applicable law or agreed to in writing, software + # distributed under the License is distributed on an "AS IS" BASIS, + # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + # See the License for the specific language governing permissions and + # limitations under the License. 
+-%>
+<% unless version == 'ga' -%>
+// Resource object was flattened in GA API
+if nestedResource, ok := res["resource"]; ok {
+ if resObj, ok := nestedResource.(map[string]interface{}); ok {
+ res["resourceUri"] = resObj["uri"]
+ delete(res, "resource")
+ }
+}
+
+// Beta attestation.attestation.genericSignedAttestation
+// => GA attestation
+if attV, ok := res["attestation"]; ok && attV != nil {
+ att := attV.(map[string]interface{})
+ if nestedAttV, ok := att["attestation"]; ok && nestedAttV != nil {
+ nestedAtt := nestedAttV.(map[string]interface{})
+ if genericV, ok := nestedAtt["genericSignedAttestation"]; ok {
+ genericAtt := genericV.(map[string]interface{})
+ res["attestation"] = map[string]interface{}{
+ "serializedPayload": genericAtt["serializedPayload"],
+ "signatures": genericAtt["signatures"],
+ }
+ }
+ }
+}
+
+<% else -%>
+// decoder logic only in non-GA versions
+<% end -%>
+return res, nil
diff --git a/templates/terraform/decoders/os_config_patch_deployment.go.erb b/templates/terraform/decoders/os_config_patch_deployment.go.erb
new file mode 100644
index 000000000000..fce9e9366fc3
--- /dev/null
+++ b/templates/terraform/decoders/os_config_patch_deployment.go.erb
@@ -0,0 +1,9 @@
+if res["patchConfig"] != nil {
+ patchConfig := res["patchConfig"].(map[string]interface{})
+ if patchConfig["goo"] != nil {
+ patchConfig["goo"].(map[string]interface{})["enabled"] = true
+ res["patchConfig"] = patchConfig
+ }
+}
+
+return res, nil
diff --git a/templates/terraform/decoders/region_backend_service.go.erb b/templates/terraform/decoders/region_backend_service.go.erb
new file mode 100644
index 000000000000..1d1eacba8e53
--- /dev/null
+++ b/templates/terraform/decoders/region_backend_service.go.erb
@@ -0,0 +1,28 @@
+<%# The license inside this block applies to this file.
+ # Copyright 2020 Google Inc.
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ # http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+-%>
+// Requests with consistentHash will error for specific values of
+// localityLbPolicy. However, the API will not remove it if the backend
+// service is updated from a supporting to a non-supporting localityLbPolicy
+// (e.g. RING_HASH to RANDOM), which causes an error on subsequent update.
+// In order to prevent errors, we ignore any consistentHash returned
+// from the API when the localityLbPolicy doesn't support it.
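+// MAGLEV and RING_HASH are the only localityLbPolicy values that accept
+// consistentHash, so it is stripped below for every other policy.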
+if v, ok := res["localityLbPolicy"]; ok { + lbPolicy := v.(string) + if lbPolicy != "MAGLEV" && lbPolicy != "RING_HASH" { + delete(res, "consistentHash") + } +} + +return res, nil diff --git a/templates/terraform/encoders/backend_service.go.erb b/templates/terraform/encoders/backend_service.go.erb index 93ae452dfb7f..db47dca273af 100644 --- a/templates/terraform/encoders/backend_service.go.erb +++ b/templates/terraform/encoders/backend_service.go.erb @@ -32,5 +32,22 @@ if iapVal == nil { obj["iap"] = iap } +backendsRaw, ok := obj["backends"] +if !ok { + return obj, nil +} +backends := backendsRaw.([]interface{}) +for _, backendRaw := range backends { + backend := backendRaw.(map[string]interface{}) + backendGroup, ok := backend["group"] + if !ok { + continue + } + if strings.Contains(backendGroup.(string), "global/networkEndpointGroups") { + // Remove `max_utilization` from any backend that belongs to a global NEG. This field + // has a default value and causes API validation errors + backend["maxUtilization"] = nil + } +} return obj, nil diff --git a/templates/terraform/encoders/bigquery_job.go.erb b/templates/terraform/encoders/bigquery_job.go.erb new file mode 100644 index 000000000000..f9c94f684fcb --- /dev/null +++ b/templates/terraform/encoders/bigquery_job.go.erb @@ -0,0 +1,20 @@ +<%# The license inside this block applies to this file. + # Copyright 2020 Google Inc. + # Licensed under the Apache License, Version 2.0 (the "License"); + # you may not use this file except in compliance with the License. + # You may obtain a copy of the License at + # + # http://www.apache.org/licenses/LICENSE-2.0 + # + # Unless required by applicable law or agreed to in writing, software + # distributed under the License is distributed on an "AS IS" BASIS, + # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + # See the License for the specific language governing permissions and + # limitations under the License. +-%> +project, err := getProject(d, meta.(*Config)) +if err != nil { + return nil, err +} +obj["jobReference"].(map[string]interface{})["project"] = project +return obj, nil \ No newline at end of file diff --git a/templates/terraform/encoders/cloud_asset_feed.go.erb b/templates/terraform/encoders/cloud_asset_feed.go.erb new file mode 100644 index 000000000000..5d2c986d9ace --- /dev/null +++ b/templates/terraform/encoders/cloud_asset_feed.go.erb @@ -0,0 +1,8 @@ +// Remove the "folders/" prefix from the folder ID +if folder, ok := d.GetOkExists("folder"); ok { + d.Set("folder_id", strings.TrimPrefix(folder.(string), "folders/")) +} +// The feed object must be under the "feed" attribute on the request. 
+newObj := make(map[string]interface{}) +newObj["feed"] = obj +return newObj, nil \ No newline at end of file diff --git a/templates/terraform/encoders/cloudiot_device_registry.go.erb b/templates/terraform/encoders/cloudiot_device_registry.go.erb new file mode 100644 index 000000000000..695c0bca5679 --- /dev/null +++ b/templates/terraform/encoders/cloudiot_device_registry.go.erb @@ -0,0 +1,43 @@ +config := meta.(*Config) + +log.Printf("[DEBUG] Resource data before encoding extra schema entries %q: %#v", d.Id(), obj) + +log.Printf("[DEBUG] Encoding state notification config: %q", d.Id()) +stateNotificationConfigProp, err := expandCloudIotDeviceRegistryStateNotificationConfig(d.Get("state_notification_config"), d, config) +if err != nil { + return nil, err +} else if v, ok := d.GetOkExists("state_notification_config"); !isEmptyValue(reflect.ValueOf(stateNotificationConfigProp)) && (ok || !reflect.DeepEqual(v, stateNotificationConfigProp)) { + log.Printf("[DEBUG] Encoding %q. Setting stateNotificationConfig: %#v", d.Id(), stateNotificationConfigProp) + obj["stateNotificationConfig"] = stateNotificationConfigProp +} + +log.Printf("[DEBUG] Encoding HTTP config: %q", d.Id()) +httpConfigProp, err := expandCloudIotDeviceRegistryHTTPConfig(d.Get("http_config"), d, config) +if err != nil { + return nil, err +} else if v, ok := d.GetOkExists("http_config"); !isEmptyValue(reflect.ValueOf(httpConfigProp)) && (ok || !reflect.DeepEqual(v, httpConfigProp)) { + log.Printf("[DEBUG] Encoding %q. Setting httpConfig: %#v", d.Id(), httpConfigProp) + obj["httpConfig"] = httpConfigProp +} + +log.Printf("[DEBUG] Encoding MQTT config: %q", d.Id()) +mqttConfigProp, err := expandCloudIotDeviceRegistryMqttConfig(d.Get("mqtt_config"), d, config) +if err != nil { + return nil, err +} else if v, ok := d.GetOkExists("mqtt_config"); !isEmptyValue(reflect.ValueOf(mqttConfigProp)) && (ok || !reflect.DeepEqual(v, mqttConfigProp)) { + log.Printf("[DEBUG] Encoding %q. Setting mqttConfig: %#v", d.Id(), mqttConfigProp) + obj["mqttConfig"] = mqttConfigProp +} + +log.Printf("[DEBUG] Encoding credentials: %q", d.Id()) +credentialsProp, err := expandCloudIotDeviceRegistryCredentials(d.Get("credentials"), d, config) +if err != nil { + return nil, err +} else if v, ok := d.GetOkExists("credentials"); !isEmptyValue(reflect.ValueOf(credentialsProp)) && (ok || !reflect.DeepEqual(v, credentialsProp)) { + log.Printf("[DEBUG] Encoding %q. Setting credentials: %#v", d.Id(), credentialsProp) + obj["credentials"] = credentialsProp +} + +log.Printf("[DEBUG] Resource data after encoding extra schema entries %q: %#v", d.Id(), obj) + +return obj, nil diff --git a/templates/terraform/encoders/compute_per_instance_config.go.erb b/templates/terraform/encoders/compute_per_instance_config.go.erb new file mode 100644 index 000000000000..6daa41778f8a --- /dev/null +++ b/templates/terraform/encoders/compute_per_instance_config.go.erb @@ -0,0 +1,18 @@ +<%# The license inside this block applies to this file. + # Copyright 2017 Google Inc. + # Licensed under the Apache License, Version 2.0 (the "License"); + # you may not use this file except in compliance with the License. + # You may obtain a copy of the License at + # + # http://www.apache.org/licenses/LICENSE-2.0 + # + # Unless required by applicable law or agreed to in writing, software + # distributed under the License is distributed on an "AS IS" BASIS, + # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+ # See the License for the specific language governing permissions and + # limitations under the License. +-%> +wrappedReq := map[string]interface{}{ + "instances": []interface{}{obj}, +} +return wrappedReq, nil diff --git a/templates/terraform/encoders/containeranalysis_occurrence.go.erb b/templates/terraform/encoders/containeranalysis_occurrence.go.erb new file mode 100644 index 000000000000..5697d9c8fa66 --- /dev/null +++ b/templates/terraform/encoders/containeranalysis_occurrence.go.erb @@ -0,0 +1,43 @@ +<%# The license inside this block applies to this file. + # Copyright 2020 Google Inc. + # Licensed under the Apache License, Version 2.0 (the "License"); + # you may not use this file except in compliance with the License. + # You may obtain a copy of the License at + # + # http://www.apache.org/licenses/LICENSE-2.0 + # + # Unless required by applicable law or agreed to in writing, software + # distributed under the License is distributed on an "AS IS" BASIS, + # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + # See the License for the specific language governing permissions and + # limitations under the License. +-%> +<% unless version == 'ga' -%> +// Resource object was flattened in GA API +if resourceuri, ok := obj["resourceUri"]; ok { + obj["resource"] = map[string]interface{}{ + "uri": resourceuri, + } + delete(obj, "resourceUri") +} + + +// Beta `attestation.genericSignedAttestation` was flattened to just +// `attestation` (no contentType) in GA +if v, ok := obj["attestation"]; ok && v != nil { + gaAtt := v.(map[string]interface{}) + obj["attestation"] = map[string]interface{}{ + "attestation": map[string]interface{}{ + "genericSignedAttestation": map[string]interface{}{ + "contentType": "SIMPLE_SIGNING_JSON", + "serializedPayload": gaAtt["serializedPayload"], + "signatures": gaAtt["signatures"], + }, + }, + } +} +<% else -%> +// encoder logic only in non-GA versions +<% end -%> + +return obj, nil diff --git a/templates/terraform/encoders/monitoring_slo.go.erb b/templates/terraform/encoders/monitoring_slo.go.erb new file mode 100644 index 000000000000..6a8a950629a4 --- /dev/null +++ b/templates/terraform/encoders/monitoring_slo.go.erb @@ -0,0 +1,18 @@ +<%# The license inside this block applies to this file. + # Copyright 2020 Google Inc. + # Licensed under the Apache License, Version 2.0 (the "License"); + # you may not use this file except in compliance with the License. + # You may obtain a copy of the License at + # + # http://www.apache.org/licenses/LICENSE-2.0 + # + # Unless required by applicable law or agreed to in writing, software + # distributed under the License is distributed on an "AS IS" BASIS, + # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + # See the License for the specific language governing permissions and + # limitations under the License. 
+-%> +// Name/Service Level Objective ID is a query parameter and cannot +// be given in data +delete(obj, "sloId") +return obj, nil \ No newline at end of file diff --git a/templates/terraform/encoders/os_config_patch_deployment.go.erb b/templates/terraform/encoders/os_config_patch_deployment.go.erb new file mode 100644 index 000000000000..a146d237e8c6 --- /dev/null +++ b/templates/terraform/encoders/os_config_patch_deployment.go.erb @@ -0,0 +1,24 @@ +schedule := obj["recurringSchedule"].(map[string]interface{}) +if schedule["monthly"] != nil { + obj["recurringSchedule"].(map[string]interface{})["frequency"] = "MONTHLY" +} else if schedule["weekly"] != nil { + obj["recurringSchedule"].(map[string]interface{})["frequency"] = "WEEKLY" +} + +if obj["patchConfig"] != nil { + patchConfig := obj["patchConfig"].(map[string]interface{}) + if patchConfig["goo"] != nil { + goo := patchConfig["goo"].(map[string]interface{}) + + if goo["enabled"] == true { + delete(goo, "enabled") + patchConfig["goo"] = goo + } else { + delete(patchConfig, "goo") + } + + obj["patchConfig"] = patchConfig + } +} + +return obj, nil diff --git a/templates/terraform/env_var_context.go.erb b/templates/terraform/env_var_context.go.erb index d0809f7520be..3d9ac7b88a0d 100644 --- a/templates/terraform/env_var_context.go.erb +++ b/templates/terraform/env_var_context.go.erb @@ -17,5 +17,9 @@ "<%= var_name -%>": getTestProjectFromEnv(), <% elsif var_type == :FIRESTORE_PROJECT_NAME -%> "<%= var_name -%>": getTestFirestoreProjectFromEnv(t), + <% elsif var_type == :CUST_ID -%> + "<%= var_name -%>": getTestCustIdFromEnv(t), + <% elsif var_type == :IDENTITY_USER -%> + "<%= var_name -%>": getTestIdentityUserFromEnv(t), <% end -%> <% end -%> diff --git a/templates/terraform/examples/access_context_manager_access_level_basic.tf.erb b/templates/terraform/examples/access_context_manager_access_level_basic.tf.erb index 26f57256e105..ec6e32c0cf45 100644 --- a/templates/terraform/examples/access_context_manager_access_level_basic.tf.erb +++ b/templates/terraform/examples/access_context_manager_access_level_basic.tf.erb @@ -1,11 +1,11 @@ resource "google_access_context_manager_access_level" "<%= ctx[:primary_resource_id] %>" { - parent = "accessPolicies/${google_access_context_manager_access_policy.test-access.name}" - name = "accessPolicies/${google_access_context_manager_access_policy.test-access.name}/accessLevels/<%= ctx[:vars]['access_level_name'] %>" + parent = "accessPolicies/${google_access_context_manager_access_policy.access-policy.name}" + name = "accessPolicies/${google_access_context_manager_access_policy.access-policy.name}/accessLevels/<%= ctx[:vars]['access_level_name'] %>" title = "<%= ctx[:vars]['access_level_name'] %>" basic { conditions { device_policy { - require_screen_lock = false + require_screen_lock = true os_constraints { os_type = "DESKTOP_CHROME_OS" } diff --git a/templates/terraform/examples/active_directory_domain_basic.tf.erb b/templates/terraform/examples/active_directory_domain_basic.tf.erb new file mode 100644 index 000000000000..caf8f8871c7c --- /dev/null +++ b/templates/terraform/examples/active_directory_domain_basic.tf.erb @@ -0,0 +1,5 @@ +resource "google_active_directory_domain" "ad-domain" { + domain_name = "mydomain.org.com" + locations = ["us-central1"] + reserved_ip_range = "192.168.255.0/24" +} \ No newline at end of file diff --git a/templates/terraform/examples/address_with_shared_loadbalancer_vip.tf.erb b/templates/terraform/examples/address_with_shared_loadbalancer_vip.tf.erb new file mode 
100644 index 000000000000..a06579f539af --- /dev/null +++ b/templates/terraform/examples/address_with_shared_loadbalancer_vip.tf.erb @@ -0,0 +1,6 @@ +resource "google_compute_address" "<%= ctx[:primary_resource_id] %>" { + provider = google-beta + name = "<%= ctx[:vars]['address_name'] %>" + address_type = "INTERNAL" + purpose = "SHARED_LOADBALANCER_VIP" +} diff --git a/templates/terraform/examples/app_engine_flexible_app_version.tf.erb b/templates/terraform/examples/app_engine_flexible_app_version.tf.erb index 4b220e9cd5db..a65b47ed63b8 100644 --- a/templates/terraform/examples/app_engine_flexible_app_version.tf.erb +++ b/templates/terraform/examples/app_engine_flexible_app_version.tf.erb @@ -1,12 +1,32 @@ +resource "google_project" "my_project" { + name = "<%= ctx[:vars]['project'] %>" + project_id = "<%= ctx[:vars]['project'] %>" + org_id = "<%= ctx[:test_env_vars]['org_id'] %>" + billing_account = "<%= ctx[:test_env_vars]['billing_account'] %>" +} + +resource "google_app_engine_application" "app" { + project = google_project.my_project.project_id + location_id = "us-central" +} + resource "google_project_service" "service" { + project = google_project.my_project.project_id service = "appengineflex.googleapis.com" disable_dependent_services = false } +resource "google_project_iam_member" "gae_api" { + project = google_project_service.service.project + role = "roles/compute.networkUser" + member = "serviceAccount:service-${google_project.my_project.number}@gae-api-prod.google.com.iam.gserviceaccount.com" +} + resource "google_app_engine_flexible_app_version" "<%= ctx[:primary_resource_id] %>" { version_id = "v1" - service = "<%= ctx[:vars]['service_name'] %>" + project = google_project_iam_member.gae_api.project + service = "default" runtime = "nodejs" entrypoint { @@ -31,6 +51,18 @@ resource "google_app_engine_flexible_app_version" "<%= ctx[:primary_resource_id] port = "8080" } + handlers { + url_regex = ".*\\/my-path\\/*" + security_level = "SECURE_ALWAYS" + login = "LOGIN_REQUIRED" + auth_fail_action = "AUTH_FAIL_ACTION_REDIRECT" + + static_files { + path = "my-other-path" + upload_path_regex = ".*\\/my-path\\/*" + } + } + automatic_scaling { cool_down_period = "120s" cpu_utilization { @@ -38,10 +70,11 @@ resource "google_app_engine_flexible_app_version" "<%= ctx[:primary_resource_id] } } - delete_service_on_destroy = true + noop_on_destroy = true } resource "google_storage_bucket" "bucket" { + project = google_project.my_project.project_id name = "<%= ctx[:vars]['bucket_name'] %>" } diff --git a/templates/terraform/examples/app_engine_standard_app_version.tf.erb b/templates/terraform/examples/app_engine_standard_app_version.tf.erb index cdf15ef6acbb..9af3a5cceaf3 100644 --- a/templates/terraform/examples/app_engine_standard_app_version.tf.erb +++ b/templates/terraform/examples/app_engine_standard_app_version.tf.erb @@ -17,6 +17,20 @@ resource "google_app_engine_standard_app_version" "<%= ctx[:primary_resource_id] port = "8080" } + automatic_scaling { + max_concurrent_requests = 10 + min_idle_instances = 1 + max_idle_instances = 3 + min_pending_latency = "1s" + max_pending_latency = "5s" + standard_scheduler_settings { + target_cpu_utilization = 0.5 + target_throughput_utilization = 0.75 + min_instances = 2 + max_instances = 10 + } + } + delete_service_on_destroy = true } @@ -39,6 +53,10 @@ resource "google_app_engine_standard_app_version" "myapp_v2" { port = "8080" } + basic_scaling { + max_instances = 5 + } + noop_on_destroy = true } diff --git 
a/templates/terraform/examples/artifact_registry_repository_basic.tf.erb b/templates/terraform/examples/artifact_registry_repository_basic.tf.erb new file mode 100644 index 000000000000..28a74bdbf584 --- /dev/null +++ b/templates/terraform/examples/artifact_registry_repository_basic.tf.erb @@ -0,0 +1,8 @@ +resource "google_artifact_registry_repository" "<%= ctx[:primary_resource_id] %>" { + provider = google-beta + + location = "us-central1" + repository_id = "<%= ctx[:vars]['repository_id'] %>" + description = "<%= ctx[:vars]['description'] %>" + format = "DOCKER" +} diff --git a/templates/terraform/examples/artifact_registry_repository_cmek.tf.erb b/templates/terraform/examples/artifact_registry_repository_cmek.tf.erb new file mode 100644 index 000000000000..6217d3b2d052 --- /dev/null +++ b/templates/terraform/examples/artifact_registry_repository_cmek.tf.erb @@ -0,0 +1,9 @@ +resource "google_artifact_registry_repository" "<%= ctx[:primary_resource_id] %>" { + provider = google-beta + + location = "us-central1" + repository_id = "<%= ctx[:vars]['repository_id'] %>" + description = "example docker repository with cmek" + format = "DOCKER" + kms_key_name = "<%= ctx[:vars]['kms_key_name'] %>" +} diff --git a/templates/terraform/examples/artifact_registry_repository_iam.tf.erb b/templates/terraform/examples/artifact_registry_repository_iam.tf.erb new file mode 100644 index 000000000000..3ac1ebdfa31c --- /dev/null +++ b/templates/terraform/examples/artifact_registry_repository_iam.tf.erb @@ -0,0 +1,24 @@ +resource "google_artifact_registry_repository" "<%= ctx[:primary_resource_id] %>" { + provider = google-beta + + location = "us-central1" + repository_id = "<%= ctx[:vars]['repository_id'] %>" + description = "<%= ctx[:vars]['description'] %>" + format = "DOCKER" +} + +resource "google_service_account" "test-account" { + provider = google-beta + + account_id = "<%= ctx[:vars]['account_id'] %>" + display_name = "Test Service Account" +} + +resource "google_artifact_registry_repository_iam_member" "test-iam" { + provider = google-beta + + location = google_artifact_registry_repository.<%= ctx[:primary_resource_id] %>.location + repository = google_artifact_registry_repository.<%= ctx[:primary_resource_id] %>.name + role = "roles/artifactregistry.reader" + member = "serviceAccount:${google_service_account.test-account.email}" +} diff --git a/templates/terraform/examples/autoscaler_basic.tf.erb b/templates/terraform/examples/autoscaler_basic.tf.erb index e0a644b14b0d..f8333408ace6 100644 --- a/templates/terraform/examples/autoscaler_basic.tf.erb +++ b/templates/terraform/examples/autoscaler_basic.tf.erb @@ -22,7 +22,7 @@ resource "google_compute_instance_template" "foobar" { tags = ["foo", "bar"] disk { - source_image = data.google_compute_image.debian_9.self_link + source_image = data.google_compute_image.debian_9.id } network_interface { diff --git a/templates/terraform/examples/autoscaler_single_instance.tf.erb b/templates/terraform/examples/autoscaler_single_instance.tf.erb index c7aac8d1d70a..fc603e073f88 100644 --- a/templates/terraform/examples/autoscaler_single_instance.tf.erb +++ b/templates/terraform/examples/autoscaler_single_instance.tf.erb @@ -28,7 +28,7 @@ resource "google_compute_instance_template" "default" { tags = ["foo", "bar"] disk { - source_image = data.google_compute_image.debian_9.self_link + source_image = data.google_compute_image.debian_9.id } network_interface { diff --git a/templates/terraform/examples/backend_service_basic.tf.erb 
b/templates/terraform/examples/backend_service_basic.tf.erb index c3779e7cb9d2..f46281cdb9ce 100644 --- a/templates/terraform/examples/backend_service_basic.tf.erb +++ b/templates/terraform/examples/backend_service_basic.tf.erb @@ -1,6 +1,6 @@ resource "google_compute_backend_service" "<%= ctx[:primary_resource_id] %>" { name = "<%= ctx[:vars]['backend_service_name'] %>" - health_checks = [google_compute_http_health_check.default.self_link] + health_checks = [google_compute_http_health_check.default.id] } resource "google_compute_http_health_check" "default" { diff --git a/templates/terraform/examples/backend_service_network_endpoint.tf.erb b/templates/terraform/examples/backend_service_network_endpoint.tf.erb new file mode 100644 index 000000000000..53cb96620ee7 --- /dev/null +++ b/templates/terraform/examples/backend_service_network_endpoint.tf.erb @@ -0,0 +1,24 @@ +resource "google_compute_global_network_endpoint_group" "external_proxy" { + name = "<%= ctx[:vars]['neg_name'] %>" + network_endpoint_type = "INTERNET_FQDN_PORT" + default_port = "443" +} + +resource "google_compute_global_network_endpoint" "proxy" { + global_network_endpoint_group = google_compute_global_network_endpoint_group.external_proxy.id + fqdn = "test.example.com" + port = google_compute_global_network_endpoint_group.external_proxy.default_port +} + +resource "google_compute_backend_service" "<%= ctx[:primary_resource_id] %>" { + name = "<%= ctx[:vars]['backend_service_name'] %>" + enable_cdn = true + timeout_sec = 10 + connection_draining_timeout_sec = 10 + + custom_request_headers = ["host: ${google_compute_global_network_endpoint.proxy.fqdn}"] + + backend { + group = google_compute_global_network_endpoint_group.external_proxy.id + } +} diff --git a/templates/terraform/examples/backend_service_signed_url_key.tf.erb b/templates/terraform/examples/backend_service_signed_url_key.tf.erb index b9570befa984..7ce1d3b5a635 100644 --- a/templates/terraform/examples/backend_service_signed_url_key.tf.erb +++ b/templates/terraform/examples/backend_service_signed_url_key.tf.erb @@ -16,14 +16,14 @@ resource "google_compute_backend_service" "example_backend" { group = google_compute_instance_group_manager.webservers.instance_group } - health_checks = [google_compute_http_health_check.default.self_link] + health_checks = [google_compute_http_health_check.default.id] } resource "google_compute_instance_group_manager" "webservers" { name = "my-webservers" version { - instance_template = google_compute_instance_template.webserver.self_link + instance_template = google_compute_instance_template.webserver.id name = "primary" } diff --git a/templates/terraform/examples/backend_service_traffic_director_ring_hash.tf.erb b/templates/terraform/examples/backend_service_traffic_director_ring_hash.tf.erb index 080fb2aee9de..46b584e3c4f8 100644 --- a/templates/terraform/examples/backend_service_traffic_director_ring_hash.tf.erb +++ b/templates/terraform/examples/backend_service_traffic_director_ring_hash.tf.erb @@ -2,7 +2,7 @@ resource "google_compute_backend_service" "<%= ctx[:primary_resource_id] %>" { provider = google-beta name = "<%= ctx[:vars]['backend_service_name'] %>" - health_checks = [google_compute_health_check.health_check.self_link] + health_checks = [google_compute_health_check.health_check.id] load_balancing_scheme = "INTERNAL_SELF_MANAGED" locality_lb_policy = "RING_HASH" session_affinity = "HTTP_COOKIE" diff --git a/templates/terraform/examples/backend_service_traffic_director_round_robin.tf.erb 
b/templates/terraform/examples/backend_service_traffic_director_round_robin.tf.erb index 068013aee910..e45e8f354ba8 100644 --- a/templates/terraform/examples/backend_service_traffic_director_round_robin.tf.erb +++ b/templates/terraform/examples/backend_service_traffic_director_round_robin.tf.erb @@ -2,7 +2,7 @@ resource "google_compute_backend_service" "<%= ctx[:primary_resource_id] %>" { provider = google-beta name = "<%= ctx[:vars]['backend_service_name'] %>" - health_checks = [google_compute_health_check.health_check.self_link] + health_checks = [google_compute_health_check.health_check.id] load_balancing_scheme = "INTERNAL_SELF_MANAGED" locality_lb_policy = "ROUND_ROBIN" } diff --git a/templates/terraform/examples/base_configs/example_file.tf.erb b/templates/terraform/examples/base_configs/example_file.tf.erb index 5d2155f0173b..bad51ff17c74 100644 --- a/templates/terraform/examples/base_configs/example_file.tf.erb +++ b/templates/terraform/examples/base_configs/example_file.tf.erb @@ -1,2 +1,2 @@ <% autogen_exception -%> -<%= example.config_example -%> +<%= example.config_example(pwd) -%> diff --git a/templates/terraform/examples/base_configs/iam_test_file.go.erb b/templates/terraform/examples/base_configs/iam_test_file.go.erb index c7c939f8c9e9..dca7bd54cbcf 100644 --- a/templates/terraform/examples/base_configs/iam_test_file.go.erb +++ b/templates/terraform/examples/base_configs/iam_test_file.go.erb @@ -1,4 +1,4 @@ -<%= lines(autogen_notice :go) -%> +<%= lines(autogen_notice(:go, pwd)) -%> package google @@ -52,9 +52,9 @@ import_url = import_format.gsub(/({{)(\w+)(}})/, '%s').gsub(object.__product.bas func TestAcc<%= resource_name -%>IamBindingGenerated(t *testing.T) { t.Parallel() -<%= lines(compile('templates/terraform/iam/iam_context.go.erb')) -%> +<%= lines(compile(pwd + '/templates/terraform/iam/iam_context.go.erb')) -%> - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, <% unless object.min_version.name == "ga" -%> Providers: testAccProvidersOiCS, @@ -92,9 +92,9 @@ func TestAcc<%= resource_name -%>IamBindingGenerated(t *testing.T) { func TestAcc<%= resource_name -%>IamMemberGenerated(t *testing.T) { t.Parallel() -<%= lines(compile('templates/terraform/iam/iam_context.go.erb')) -%> +<%= lines(compile(pwd + '/templates/terraform/iam/iam_context.go.erb')) -%> - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, <% unless object.min_version.name == "ga" -%> Providers: testAccProvidersOiCS, @@ -121,12 +121,16 @@ func TestAcc<%= resource_name -%>IamMemberGenerated(t *testing.T) { func TestAcc<%= resource_name -%>IamPolicyGenerated(t *testing.T) { t.Parallel() -<%= lines(compile('templates/terraform/iam/iam_context.go.erb')) -%> <% unless object.iam_policy.admin_iam_role.nil? -%> - context["service_account"] = getTestServiceAccountFromEnv(t) + // This may skip test, so do it first + sa := getTestServiceAccountFromEnv(t) +<% end -%> +<%= lines(compile(pwd + '/templates/terraform/iam/iam_context.go.erb')) -%> +<% unless object.iam_policy.admin_iam_role.nil? 
-%> + context["service_account"] = sa <% end -%> - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, <% unless object.min_version.name == "ga" -%> Providers: testAccProvidersOiCS, @@ -164,9 +168,9 @@ func TestAcc<%= resource_name -%>IamPolicyGenerated(t *testing.T) { func TestAcc<%= resource_name -%>IamBindingGenerated_withCondition(t *testing.T) { t.Parallel() -<%= lines(compile('templates/terraform/iam/iam_context.go.erb')) -%> +<%= lines(compile(pwd + '/templates/terraform/iam/iam_context.go.erb')) -%> - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, <% unless object.min_version.name == "ga" -%> Providers: testAccProvidersOiCS, @@ -192,9 +196,9 @@ func TestAcc<%= resource_name -%>IamBindingGenerated_withCondition(t *testing.T) func TestAcc<%= resource_name -%>IamBindingGenerated_withAndWithoutCondition(t *testing.T) { t.Parallel() -<%= lines(compile('templates/terraform/iam/iam_context.go.erb')) -%> +<%= lines(compile(pwd + '/templates/terraform/iam/iam_context.go.erb')) -%> - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, <% unless object.min_version.name == "ga" -%> Providers: testAccProvidersOiCS, @@ -228,9 +232,9 @@ func TestAcc<%= resource_name -%>IamBindingGenerated_withAndWithoutCondition(t * func TestAcc<%= resource_name -%>IamMemberGenerated_withCondition(t *testing.T) { t.Parallel() -<%= lines(compile('templates/terraform/iam/iam_context.go.erb')) -%> +<%= lines(compile(pwd + '/templates/terraform/iam/iam_context.go.erb')) -%> - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, <% unless object.min_version.name == "ga" -%> Providers: testAccProvidersOiCS, @@ -256,9 +260,9 @@ func TestAcc<%= resource_name -%>IamMemberGenerated_withCondition(t *testing.T) func TestAcc<%= resource_name -%>IamMemberGenerated_withAndWithoutCondition(t *testing.T) { t.Parallel() -<%= lines(compile('templates/terraform/iam/iam_context.go.erb')) -%> +<%= lines(compile(pwd + '/templates/terraform/iam/iam_context.go.erb')) -%> - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, <% unless object.min_version.name == "ga" -%> Providers: testAccProvidersOiCS, @@ -292,12 +296,16 @@ func TestAcc<%= resource_name -%>IamMemberGenerated_withAndWithoutCondition(t *t func TestAcc<%= resource_name -%>IamPolicyGenerated_withCondition(t *testing.T) { t.Parallel() -<%= lines(compile('templates/terraform/iam/iam_context.go.erb')) -%> <% unless object.iam_policy.admin_iam_role.nil? -%> - context["service_account"] = getTestServiceAccountFromEnv(t) + // This may skip test, so do it first + sa := getTestServiceAccountFromEnv(t) +<% end -%> +<%= lines(compile(pwd + '/templates/terraform/iam/iam_context.go.erb')) -%> +<% unless object.iam_policy.admin_iam_role.nil? 
-%> + context["service_account"] = sa <% end -%> - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, <% unless object.min_version.name == "ga" -%> Providers: testAccProvidersOiCS, @@ -323,13 +331,13 @@ func TestAcc<%= resource_name -%>IamPolicyGenerated_withCondition(t *testing.T) func testAcc<%= resource_name -%>IamMember_basicGenerated(context map[string]interface{}) string { return Nprintf(` -<%= example.config_test_body -%> +<%= example.config_test_body(pwd) -%> resource "<%= resource_ns_iam -%>_member" "foo" { <% unless object.min_version.name == "ga" -%> provider = google-beta <% end -%> -<%= lines(compile(object.iam_policy.example_config_body)) -%> +<%= lines(compile(pwd + '/' + object.iam_policy.example_config_body)) -%> role = "%{role}" member = "user:admin@hashicorptest.com" } @@ -338,7 +346,7 @@ resource "<%= resource_ns_iam -%>_member" "foo" { func testAcc<%= resource_name -%>IamPolicy_basicGenerated(context map[string]interface{}) string { return Nprintf(` -<%= example.config_test_body -%> +<%= example.config_test_body(pwd) -%> data "google_iam_policy" "foo" { <% unless object.min_version.name == "ga" -%> @@ -360,7 +368,7 @@ resource "<%= resource_ns_iam -%>_policy" "foo" { <% unless object.min_version.name == "ga" -%> provider = google-beta <% end -%> -<%= lines(compile(object.iam_policy.example_config_body)) -%> +<%= lines(compile(pwd + '/' + object.iam_policy.example_config_body)) -%> policy_data = data.google_iam_policy.foo.policy_data } `, context) @@ -368,7 +376,7 @@ resource "<%= resource_ns_iam -%>_policy" "foo" { func testAcc<%= resource_name -%>IamPolicy_emptyBinding(context map[string]interface{}) string { return Nprintf(` -<%= example.config_test_body -%> +<%= example.config_test_body(pwd) -%> data "google_iam_policy" "foo" { <% unless object.min_version.name == "ga" -%> @@ -380,7 +388,7 @@ resource "<%= resource_ns_iam -%>_policy" "foo" { <% unless object.min_version.name == "ga" -%> provider = google-beta <% end -%> -<%= lines(compile(object.iam_policy.example_config_body)) -%> +<%= lines(compile(pwd + '/' + object.iam_policy.example_config_body)) -%> policy_data = data.google_iam_policy.foo.policy_data } `, context) @@ -388,13 +396,13 @@ resource "<%= resource_ns_iam -%>_policy" "foo" { func testAcc<%= resource_name -%>IamBinding_basicGenerated(context map[string]interface{}) string { return Nprintf(` -<%= example.config_test_body -%> +<%= example.config_test_body(pwd) -%> resource "<%= resource_ns_iam -%>_binding" "foo" { <% unless object.min_version.name == "ga" -%> provider = google-beta <% end -%> -<%= lines(compile(object.iam_policy.example_config_body)) -%> +<%= lines(compile(pwd + '/' + object.iam_policy.example_config_body)) -%> role = "%{role}" members = ["user:admin@hashicorptest.com"] } @@ -403,13 +411,13 @@ resource "<%= resource_ns_iam -%>_binding" "foo" { func testAcc<%= resource_name -%>IamBinding_updateGenerated(context map[string]interface{}) string { return Nprintf(` -<%= example.config_test_body -%> +<%= example.config_test_body(pwd) -%> resource "<%= resource_ns_iam -%>_binding" "foo" { <% unless object.min_version.name == "ga" -%> provider = google-beta <% end -%> -<%= lines(compile(object.iam_policy.example_config_body)) -%> +<%= lines(compile(pwd + '/' + object.iam_policy.example_config_body)) -%> role = "%{role}" members = ["user:admin@hashicorptest.com", "user:paddy@hashicorp.com"] } @@ -419,13 +427,13 @@ resource "<%= resource_ns_iam -%>_binding" "foo" { <% unless version == 
'ga' || object.iam_policy.iam_conditions_request_type.nil? -%> func testAcc<%= resource_name -%>IamBinding_withConditionGenerated(context map[string]interface{}) string { return Nprintf(` -<%= example.config_test_body -%> +<%= example.config_test_body(pwd) -%> resource "<%= resource_ns_iam -%>_binding" "foo" { <% unless object.min_version.name == "ga" -%> provider = google-beta <% end -%> -<%= lines(compile(object.iam_policy.example_config_body)) -%> +<%= lines(compile(pwd + '/' + object.iam_policy.example_config_body)) -%> role = "%{role}" members = ["user:admin@hashicorptest.com"] condition { @@ -439,13 +447,13 @@ resource "<%= resource_ns_iam -%>_binding" "foo" { func testAcc<%= resource_name -%>IamBinding_withAndWithoutConditionGenerated(context map[string]interface{}) string { return Nprintf(` -<%= example.config_test_body -%> +<%= example.config_test_body(pwd) -%> resource "<%= resource_ns_iam -%>_binding" "foo" { <% unless object.min_version.name == "ga" -%> provider = google-beta <% end -%> -<%= lines(compile(object.iam_policy.example_config_body)) -%> +<%= lines(compile(pwd + '/' + object.iam_policy.example_config_body)) -%> role = "%{role}" members = ["user:admin@hashicorptest.com"] } @@ -454,7 +462,7 @@ resource "<%= resource_ns_iam -%>_binding" "foo2" { <% unless object.min_version.name == "ga" -%> provider = google-beta <% end -%> -<%= lines(compile(object.iam_policy.example_config_body)) -%> +<%= lines(compile(pwd + '/' + object.iam_policy.example_config_body)) -%> role = "%{role}" members = ["user:admin@hashicorptest.com"] condition { @@ -468,13 +476,13 @@ resource "<%= resource_ns_iam -%>_binding" "foo2" { func testAcc<%= resource_name -%>IamMember_withConditionGenerated(context map[string]interface{}) string { return Nprintf(` -<%= example.config_test_body -%> +<%= example.config_test_body(pwd) -%> resource "<%= resource_ns_iam -%>_member" "foo" { <% unless object.min_version.name == "ga" -%> provider = google-beta <% end -%> -<%= lines(compile(object.iam_policy.example_config_body)) -%> +<%= lines(compile(pwd + '/' + object.iam_policy.example_config_body)) -%> role = "%{role}" member = "user:admin@hashicorptest.com" condition { @@ -488,13 +496,13 @@ resource "<%= resource_ns_iam -%>_member" "foo" { func testAcc<%= resource_name -%>IamMember_withAndWithoutConditionGenerated(context map[string]interface{}) string { return Nprintf(` -<%= example.config_test_body -%> +<%= example.config_test_body(pwd) -%> resource "<%= resource_ns_iam -%>_member" "foo" { <% unless object.min_version.name == "ga" -%> provider = google-beta <% end -%> -<%= lines(compile(object.iam_policy.example_config_body)) -%> +<%= lines(compile(pwd + '/' + object.iam_policy.example_config_body)) -%> role = "%{role}" member = "user:admin@hashicorptest.com" } @@ -503,7 +511,7 @@ resource "<%= resource_ns_iam -%>_member" "foo2" { <% unless object.min_version.name == "ga" -%> provider = google-beta <% end -%> -<%= lines(compile(object.iam_policy.example_config_body)) -%> +<%= lines(compile(pwd + '/' + object.iam_policy.example_config_body)) -%> role = "%{role}" member = "user:admin@hashicorptest.com" condition { @@ -517,7 +525,7 @@ resource "<%= resource_ns_iam -%>_member" "foo2" { func testAcc<%= resource_name -%>IamPolicy_withConditionGenerated(context map[string]interface{}) string { return Nprintf(` -<%= example.config_test_body -%> +<%= example.config_test_body(pwd) -%> data "google_iam_policy" "foo" { <% unless object.min_version.name == "ga" -%> @@ -544,7 +552,7 @@ resource "<%= resource_ns_iam 
-%>_policy" "foo" { <% unless object.min_version.name == "ga" -%> provider = google-beta <% end -%> -<%= lines(compile(object.iam_policy.example_config_body)) -%> +<%= lines(compile(pwd + '/' + object.iam_policy.example_config_body)) -%> policy_data = data.google_iam_policy.foo.policy_data } `, context) diff --git a/templates/terraform/examples/base_configs/test_file.go.erb b/templates/terraform/examples/base_configs/test_file.go.erb index 2c023942abb0..26323863ee6e 100644 --- a/templates/terraform/examples/base_configs/test_file.go.erb +++ b/templates/terraform/examples/base_configs/test_file.go.erb @@ -1,4 +1,4 @@ -<%= lines(autogen_notice :go) -%> +<%= lines(autogen_notice(:go, pwd)) -%> package google @@ -6,7 +6,6 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" ) <% @@ -40,20 +39,23 @@ object.examples -%> func TestAcc<%= test_slug -%>(t *testing.T) { +<% if example.skip_vcr -%> + skipIfVcr(t) +<% end -%> t.Parallel() context := map[string]interface{} { -<%= lines(indent(compile('templates/terraform/env_var_context.go.erb'), 4)) -%> +<%= lines(indent(compile(pwd + '/templates/terraform/env_var_context.go.erb'), 4)) -%> <% unless example.test_vars_overrides.nil? -%> <% example.test_vars_overrides.each do |var_name, override| -%> "<%= var_name %>": <%= override %>, <% end -%> <% end -%> - "random_suffix": acctest.RandString(10), + "random_suffix": randString(t, 10), } <% versioned_provider = !example_version.nil? && example_version != 'ga' -%> - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, <% unless versioned_provider -%> Providers: testAccProviders, @@ -61,7 +63,7 @@ func TestAcc<%= test_slug -%>(t *testing.T) { Providers: testAccProvidersOiCS, <% end -%> <% unless object.skip_delete -%> - CheckDestroy: testAccCheck<%= "#{resource_name}" -%>Destroy, + CheckDestroy: testAccCheck<%= "#{resource_name}" -%>DestroyProducer(t), <% end -%> Steps: []resource.TestStep{ { @@ -83,37 +85,39 @@ func TestAcc<%= test_slug -%>(t *testing.T) { } func testAcc<%= test_slug -%>(context map[string]interface{}) string { -<%= example.config_test -%> +<%= example.config_test(pwd) -%> } <%- end %> <% unless object.skip_delete -%> -func testAccCheck<%= resource_name -%>Destroy(s *terraform.State) error { - for name, rs := range s.RootModule().Resources { - if rs.Type != "<%= terraform_name -%>" { - continue - } - if strings.HasPrefix(name, "data.") { - continue - } - - <% if object.custom_code.test_check_destroy -%> -<%= lines(compile(object.custom_code.test_check_destroy)) -%> - <% else -%> - config := testAccProvider.Meta().(*Config) +func testAccCheck<%= resource_name -%>DestroyProducer(t *testing.T) func(s *terraform.State) error { + return func(s *terraform.State) error { + for name, rs := range s.RootModule().Resources { + if rs.Type != "<%= terraform_name -%>" { + continue + } + if strings.HasPrefix(name, "data.") { + continue + } + + <% if object.custom_code.test_check_destroy -%> + <%= lines(compile(pwd + '/' + object.custom_code.test_check_destroy)) -%> + <% else -%> + config := googleProviderConfig(t) - url, err := replaceVarsForTest(config, rs, "<%= "{{#{object.__product.name}BasePath}}#{object.self_link_uri}" -%>") - if err != nil { - return err - } + url, err := replaceVarsForTest(config, rs, "<%= "{{#{object.__product.name}BasePath}}#{object.self_link_uri}" -%>") + if err != nil { + return err + } - _, err = sendRequest(config, "<%= 
object.read_verb.to_s.upcase -%>", "", url, nil<%= object.error_retry_predicates ? ", " + object.error_retry_predicates.join(',') : "" -%>) - if err == nil { - return fmt.Errorf("<%= resource_name -%> still exists at %s", url) + _, err = sendRequest(config, "<%= object.read_verb.to_s.upcase -%>", "", url, nil<%= object.error_retry_predicates ? ", " + object.error_retry_predicates.join(',') : "" -%>) + if err == nil { + return fmt.Errorf("<%= resource_name -%> still exists at %s", url) + } + <% end -%> } - <% end -%> - } - return nil + return nil + } } -<%- end %> \ No newline at end of file +<% end -%> diff --git a/templates/terraform/examples/bigquery_connection_basic.tf.erb b/templates/terraform/examples/bigquery_connection_basic.tf.erb new file mode 100644 index 000000000000..5e1f8beb4630 --- /dev/null +++ b/templates/terraform/examples/bigquery_connection_basic.tf.erb @@ -0,0 +1,42 @@ +resource "google_sql_database_instance" "instance" { + provider = google-beta + name = "<%= ctx[:vars]['database_instance_name'] %>" + database_version = "POSTGRES_11" + region = "us-central1" + settings { + tier = "db-f1-micro" + } +} + +resource "google_sql_database" "db" { + provider = google-beta + instance = google_sql_database_instance.instance.name + name = "db" +} + +resource "random_password" "pwd" { + length = 16 + special = false +} + +resource "google_sql_user" "user" { + provider = google-beta + name = "<%= ctx[:vars]['username'] %>" + instance = google_sql_database_instance.instance.name + password = random_password.pwd.result +} + +resource "google_bigquery_connection" "<%= ctx[:primary_resource_id] %>" { + provider = google-beta + friendly_name = "👋" + description = "a riveting description" + cloud_sql { + instance_id = google_sql_database_instance.instance.connection_name + database = google_sql_database.db.name + type = "POSTGRES" + credential { + username = google_sql_user.user.name + password = google_sql_user.user.password + } + } +} diff --git a/templates/terraform/examples/bigquery_connection_full.tf.erb b/templates/terraform/examples/bigquery_connection_full.tf.erb new file mode 100644 index 000000000000..07291bc69538 --- /dev/null +++ b/templates/terraform/examples/bigquery_connection_full.tf.erb @@ -0,0 +1,44 @@ +resource "google_sql_database_instance" "instance" { + provider = google-beta + name = "<%= ctx[:vars]['database_instance_name'] %>" + database_version = "POSTGRES_11" + region = "us-central1" + settings { + tier = "db-f1-micro" + } +} + +resource "google_sql_database" "db" { + provider = google-beta + instance = google_sql_database_instance.instance.name + name = "db" +} + +resource "random_password" "pwd" { + length = 16 + special = false +} + +resource "google_sql_user" "user" { + provider = google-beta + name = "<%= ctx[:vars]['username'] %>" + instance = google_sql_database_instance.instance.name + password = random_password.pwd.result +} + +resource "google_bigquery_connection" "<%= ctx[:primary_resource_id] %>" { + provider = google-beta + connection_id = "<%= ctx[:vars]['connection_id'] %>" + location = "US" + friendly_name = "👋" + description = "a riveting description" + cloud_sql { + instance_id = google_sql_database_instance.instance.connection_name + database = google_sql_database.db.name + type = "POSTGRES" + credential { + username = google_sql_user.user.name + password = google_sql_user.user.password + } + } +} diff --git a/templates/terraform/examples/bigquery_dataset_cmek.tf.erb b/templates/terraform/examples/bigquery_dataset_cmek.tf.erb index 
5bec43c07e05..589c400ba84b 100644 --- a/templates/terraform/examples/bigquery_dataset_cmek.tf.erb +++ b/templates/terraform/examples/bigquery_dataset_cmek.tf.erb @@ -6,13 +6,13 @@ resource "google_bigquery_dataset" "<%= ctx[:primary_resource_id] %>" { default_table_expiration_ms = 3600000 default_encryption_configuration { - kms_key_name = google_kms_crypto_key.crypto_key.self_link + kms_key_name = google_kms_crypto_key.crypto_key.id } } resource "google_kms_crypto_key" "crypto_key" { name = "<%= ctx[:vars]['key_name'] %>" - key_ring = google_kms_key_ring.key_ring.self_link + key_ring = google_kms_key_ring.key_ring.id } resource "google_kms_key_ring" "key_ring" { diff --git a/templates/terraform/examples/bigquery_job_copy.tf.erb b/templates/terraform/examples/bigquery_job_copy.tf.erb new file mode 100644 index 000000000000..a827a1bf6c36 --- /dev/null +++ b/templates/terraform/examples/bigquery_job_copy.tf.erb @@ -0,0 +1,122 @@ +resource "google_bigquery_table" "source" { + count = length(google_bigquery_dataset.source) + + dataset_id = google_bigquery_dataset.source[count.index].dataset_id + table_id = "<%= ctx[:vars]['job_id'] %>_${count.index}_table" + + schema = <" { + job_id = "<%= ctx[:vars]['job_id'] %>" + + copy { + source_tables { + project_id = google_bigquery_table.source.0.project + dataset_id = google_bigquery_table.source.0.dataset_id + table_id = google_bigquery_table.source.0.table_id + } + + source_tables { + project_id = google_bigquery_table.source.1.project + dataset_id = google_bigquery_table.source.1.dataset_id + table_id = google_bigquery_table.source.1.table_id + } + + destination_table { + project_id = google_bigquery_table.dest.project + dataset_id = google_bigquery_table.dest.dataset_id + table_id = google_bigquery_table.dest.table_id + } + + destination_encryption_configuration { + kms_key_name = google_kms_crypto_key.crypto_key.id + } + } + + depends_on = ["google_project_iam_member.encrypt_role"] +} diff --git a/templates/terraform/examples/bigquery_job_copy_table_reference.tf.erb b/templates/terraform/examples/bigquery_job_copy_table_reference.tf.erb new file mode 100644 index 000000000000..3083a5b87505 --- /dev/null +++ b/templates/terraform/examples/bigquery_job_copy_table_reference.tf.erb @@ -0,0 +1,116 @@ +resource "google_bigquery_table" "source" { + count = length(google_bigquery_dataset.source) + + dataset_id = google_bigquery_dataset.source[count.index].dataset_id + table_id = "<%= ctx[:vars]['job_id'] %>_${count.index}_table" + + schema = <" { + job_id = "<%= ctx[:vars]['job_id'] %>" + + copy { + source_tables { + table_id = google_bigquery_table.source.0.id + } + + source_tables { + table_id = google_bigquery_table.source.1.id + } + + destination_table { + table_id = google_bigquery_table.dest.id + } + + destination_encryption_configuration { + kms_key_name = google_kms_crypto_key.crypto_key.id + } + } + + depends_on = ["google_project_iam_member.encrypt_role"] +} diff --git a/templates/terraform/examples/bigquery_job_extract.tf.erb b/templates/terraform/examples/bigquery_job_extract.tf.erb new file mode 100644 index 000000000000..58f008dd384d --- /dev/null +++ b/templates/terraform/examples/bigquery_job_extract.tf.erb @@ -0,0 +1,54 @@ +resource "google_bigquery_table" "source-one" { + dataset_id = google_bigquery_dataset.source-one.dataset_id + table_id = "<%= ctx[:vars]['job_id'] %>_table" + + schema = <" { + job_id = "<%= ctx[:vars]['job_id'] %>" + + extract { + destination_uris = ["${google_storage_bucket.dest.url}/extract"] + + source_table { + 
project_id = google_bigquery_table.source-one.project + dataset_id = google_bigquery_table.source-one.dataset_id + table_id = google_bigquery_table.source-one.table_id + } + + destination_format = "NEWLINE_DELIMITED_JSON" + compression = "GZIP" + } +} \ No newline at end of file diff --git a/templates/terraform/examples/bigquery_job_extract_table_reference.tf.erb b/templates/terraform/examples/bigquery_job_extract_table_reference.tf.erb new file mode 100644 index 000000000000..0a2ce51ea1b0 --- /dev/null +++ b/templates/terraform/examples/bigquery_job_extract_table_reference.tf.erb @@ -0,0 +1,52 @@ +resource "google_bigquery_table" "source-one" { + dataset_id = google_bigquery_dataset.source-one.dataset_id + table_id = "<%= ctx[:vars]['job_id'] %>_table" + + schema = <" { + job_id = "<%= ctx[:vars]['job_id'] %>" + + extract { + destination_uris = ["${google_storage_bucket.dest.url}/extract"] + + source_table { + table_id = google_bigquery_table.source-one.id + } + + destination_format = "NEWLINE_DELIMITED_JSON" + compression = "GZIP" + } +} \ No newline at end of file diff --git a/templates/terraform/examples/bigquery_job_load.tf.erb b/templates/terraform/examples/bigquery_job_load.tf.erb new file mode 100644 index 000000000000..3643a86492f8 --- /dev/null +++ b/templates/terraform/examples/bigquery_job_load.tf.erb @@ -0,0 +1,37 @@ +resource "google_bigquery_table" "foo" { + dataset_id = google_bigquery_dataset.bar.dataset_id + table_id = "<%= ctx[:vars]['job_id'] %>_table" +} + +resource "google_bigquery_dataset" "bar" { + dataset_id = "<%= ctx[:vars]['job_id'] %>_dataset" + friendly_name = "test" + description = "This is a test description" + location = "US" +} + +resource "google_bigquery_job" "<%= ctx[:primary_resource_id] %>" { + job_id = "<%= ctx[:vars]['job_id'] %>" + + labels = { + "my_job" ="load" + } + + load { + source_uris = [ + "gs://cloud-samples-data/bigquery/us-states/us-states-by-date.csv", + ] + + destination_table { + project_id = google_bigquery_table.foo.project + dataset_id = google_bigquery_table.foo.dataset_id + table_id = google_bigquery_table.foo.table_id + } + + skip_leading_rows = 1 + schema_update_options = ["ALLOW_FIELD_RELAXATION", "ALLOW_FIELD_ADDITION"] + + write_disposition = "WRITE_APPEND" + autodetect = true + } +} \ No newline at end of file diff --git a/templates/terraform/examples/bigquery_job_load_table_reference.tf.erb b/templates/terraform/examples/bigquery_job_load_table_reference.tf.erb new file mode 100644 index 000000000000..c9a07cb92962 --- /dev/null +++ b/templates/terraform/examples/bigquery_job_load_table_reference.tf.erb @@ -0,0 +1,35 @@ +resource "google_bigquery_table" "foo" { + dataset_id = google_bigquery_dataset.bar.dataset_id + table_id = "<%= ctx[:vars]['job_id'] %>_table" +} + +resource "google_bigquery_dataset" "bar" { + dataset_id = "<%= ctx[:vars]['job_id'] %>_dataset" + friendly_name = "test" + description = "This is a test description" + location = "US" +} + +resource "google_bigquery_job" "<%= ctx[:primary_resource_id] %>" { + job_id = "<%= ctx[:vars]['job_id'] %>" + + labels = { + "my_job" ="load" + } + + load { + source_uris = [ + "gs://cloud-samples-data/bigquery/us-states/us-states-by-date.csv", + ] + + destination_table { + table_id = google_bigquery_table.foo.id + } + + skip_leading_rows = 1 + schema_update_options = ["ALLOW_FIELD_RELAXATION", "ALLOW_FIELD_ADDITION"] + + write_disposition = "WRITE_APPEND" + autodetect = true + } +} \ No newline at end of file diff --git 
a/templates/terraform/examples/bigquery_job_query.tf.erb b/templates/terraform/examples/bigquery_job_query.tf.erb new file mode 100644 index 000000000000..c848701ef448 --- /dev/null +++ b/templates/terraform/examples/bigquery_job_query.tf.erb @@ -0,0 +1,36 @@ +resource "google_bigquery_table" "foo" { + dataset_id = google_bigquery_dataset.bar.dataset_id + table_id = "<%= ctx[:vars]['job_id'] %>_table" +} + +resource "google_bigquery_dataset" "bar" { + dataset_id = "<%= ctx[:vars]['job_id'] %>_dataset" + friendly_name = "test" + description = "This is a test description" + location = "US" +} + +resource "google_bigquery_job" "<%= ctx[:primary_resource_id] %>" { + job_id = "<%= ctx[:vars]['job_id'] %>" + + labels = { + "example-label" ="example-value" + } + + query { + query = "SELECT state FROM [lookerdata:cdc.project_tycho_reports]" + + destination_table { + project_id = google_bigquery_table.foo.project + dataset_id = google_bigquery_table.foo.dataset_id + table_id = google_bigquery_table.foo.table_id + } + + allow_large_results = true + flatten_results = true + + script_options { + key_result_statement = "LAST" + } + } +} \ No newline at end of file diff --git a/templates/terraform/examples/bigquery_job_query_table_reference.tf.erb b/templates/terraform/examples/bigquery_job_query_table_reference.tf.erb new file mode 100644 index 000000000000..52e34e0b79dc --- /dev/null +++ b/templates/terraform/examples/bigquery_job_query_table_reference.tf.erb @@ -0,0 +1,38 @@ +resource "google_bigquery_table" "foo" { + dataset_id = google_bigquery_dataset.bar.dataset_id + table_id = "<%= ctx[:vars]['job_id'] %>_table" +} + +resource "google_bigquery_dataset" "bar" { + dataset_id = "<%= ctx[:vars]['job_id'] %>_dataset" + friendly_name = "test" + description = "This is a test description" + location = "US" +} + +resource "google_bigquery_job" "<%= ctx[:primary_resource_id] %>" { + job_id = "<%= ctx[:vars]['job_id'] %>" + + labels = { + "example-label" ="example-value" + } + + query { + query = "SELECT state FROM [lookerdata:cdc.project_tycho_reports]" + + destination_table { + table_id = google_bigquery_table.foo.id + } + + default_dataset { + dataset_id = google_bigquery_dataset.bar.id + } + + allow_large_results = true + flatten_results = true + + script_options { + key_result_statement = "LAST" + } + } +} \ No newline at end of file diff --git a/templates/terraform/examples/bigquery_reservation_basic.tf.erb b/templates/terraform/examples/bigquery_reservation_basic.tf.erb index 2bb683049da8..f1c5754927ad 100644 --- a/templates/terraform/examples/bigquery_reservation_basic.tf.erb +++ b/templates/terraform/examples/bigquery_reservation_basic.tf.erb @@ -5,5 +5,5 @@ resource "google_bigquery_reservation" "<%= ctx[:primary_resource_id] %>" { // Set to 0 for testing purposes // In reality this would be larger than zero slot_capacity = 0 - ignore_idle_slots = true + ignore_idle_slots = false } \ No newline at end of file diff --git a/templates/terraform/examples/scheduled_query.tf.erb b/templates/terraform/examples/bigquerydatatransfer_config_scheduled_query.tf.erb similarity index 95% rename from templates/terraform/examples/scheduled_query.tf.erb rename to templates/terraform/examples/bigquerydatatransfer_config_scheduled_query.tf.erb index 2c5ae0b1c57d..bd7ed87b04a3 100644 --- a/templates/terraform/examples/scheduled_query.tf.erb +++ b/templates/terraform/examples/bigquerydatatransfer_config_scheduled_query.tf.erb @@ -15,7 +15,7 @@ resource "google_bigquery_data_transfer_config" "<%= 
ctx[:primary_resource_id] % schedule = "first sunday of quarter 00:00" destination_dataset_id = google_bigquery_dataset.my_dataset.dataset_id params = { - destination_table_name_template = "my-table" + destination_table_name_template = "my_table" write_disposition = "WRITE_APPEND" query = "SELECT name FROM tabl WHERE x = 'y'" } diff --git a/templates/terraform/examples/bigtable_app_profile_multicluster.tf.erb b/templates/terraform/examples/bigtable_app_profile_multicluster.tf.erb index 52748eeb377b..4314b7728b5e 100644 --- a/templates/terraform/examples/bigtable_app_profile_multicluster.tf.erb +++ b/templates/terraform/examples/bigtable_app_profile_multicluster.tf.erb @@ -6,6 +6,8 @@ resource "google_bigtable_instance" "instance" { num_nodes = 3 storage_type = "HDD" } + + deletion_protection = "<%= ctx[:vars]['deletion_protection'] %>" } resource "google_bigtable_app_profile" "ap" { diff --git a/templates/terraform/examples/bigtable_app_profile_singlecluster.tf.erb b/templates/terraform/examples/bigtable_app_profile_singlecluster.tf.erb index 3aee678a14fe..3e76ca678e37 100644 --- a/templates/terraform/examples/bigtable_app_profile_singlecluster.tf.erb +++ b/templates/terraform/examples/bigtable_app_profile_singlecluster.tf.erb @@ -6,6 +6,8 @@ resource "google_bigtable_instance" "instance" { num_nodes = 3 storage_type = "HDD" } + + deletion_protection = "<%= ctx[:vars]['deletion_protection'] %>" } resource "google_bigtable_app_profile" "ap" { diff --git a/templates/terraform/examples/cloud_asset_folder_feed.tf.erb b/templates/terraform/examples/cloud_asset_folder_feed.tf.erb new file mode 100644 index 000000000000..34fe8e58850a --- /dev/null +++ b/templates/terraform/examples/cloud_asset_folder_feed.tf.erb @@ -0,0 +1,51 @@ +# Create a feed that sends notifications about network resource updates under a +# particular folder. +resource "google_cloud_asset_folder_feed" "<%= ctx[:primary_resource_id] %>" { + billing_project = "<%= ctx[:test_env_vars]["project"] %>" + folder = google_folder.my_folder.folder_id + feed_id = "<%= ctx[:vars]["feed_id"] %>" + content_type = "RESOURCE" + + asset_types = [ + "compute.googleapis.com/Subnetwork", + "compute.googleapis.com/Network", + ] + + feed_output_config { + pubsub_destination { + topic = google_pubsub_topic.feed_output.id + } + } + + # Wait for the permission to be ready on the destination topic. + depends_on = [ + google_pubsub_topic_iam_member.cloud_asset_writer, + ] +} + +# The topic where the resource change notifications will be sent. +resource "google_pubsub_topic" "feed_output" { + project = "<%= ctx[:test_env_vars]["project"] %>" + name = "<%= ctx[:vars]["feed_id"] %>" +} + +# The folder that will be monitored for resource updates. +resource "google_folder" "my_folder" { + display_name = "<%= ctx[:vars]["folder_name"] %>" + parent = "organizations/<%= ctx[:test_env_vars]["org_id"] %>" +} + +# Find the project number of the project whose identity will be used for sending +# the asset change notifications. +data "google_project" "project" { + project_id = "<%= ctx[:test_env_vars]["project"] %>" +} + +# Allow the publishing role to the Cloud Asset service account of the project that +# was used for sending the notifications. 
+resource "google_pubsub_topic_iam_member" "cloud_asset_writer" { + project = "<%= ctx[:test_env_vars]["project"] %>" + topic = google_pubsub_topic.feed_output.id + role = "roles/pubsub.publisher" + member = "serviceAccount:service-${data.google_project.project.number}@gcp-sa-cloudasset.iam.gserviceaccount.com" +} \ No newline at end of file diff --git a/templates/terraform/examples/cloud_asset_organization_feed.tf.erb b/templates/terraform/examples/cloud_asset_organization_feed.tf.erb new file mode 100644 index 000000000000..7546b178d35e --- /dev/null +++ b/templates/terraform/examples/cloud_asset_organization_feed.tf.erb @@ -0,0 +1,45 @@ +# Create a feed that sends notifications about network resource updates under a +# particular organization. +resource "google_cloud_asset_organization_feed" "<%= ctx[:primary_resource_id] %>" { + billing_project = "<%= ctx[:test_env_vars]["project"] %>" + org_id = "<%= ctx[:test_env_vars]["org_id"] %>" + feed_id = "<%= ctx[:vars]["feed_id"] %>" + content_type = "RESOURCE" + + asset_types = [ + "compute.googleapis.com/Subnetwork", + "compute.googleapis.com/Network", + ] + + feed_output_config { + pubsub_destination { + topic = google_pubsub_topic.feed_output.id + } + } + + # Wait for the permission to be ready on the destination topic. + depends_on = [ + google_pubsub_topic_iam_member.cloud_asset_writer, + ] +} + +# The topic where the resource change notifications will be sent. +resource "google_pubsub_topic" "feed_output" { + project = "<%= ctx[:test_env_vars]["project"] %>" + name = "<%= ctx[:vars]["feed_id"] %>" +} + +# Find the project number of the project whose identity will be used for sending +# the asset change notifications. +data "google_project" "project" { + project_id = "<%= ctx[:test_env_vars]["project"] %>" +} + +# Allow the publishing role to the Cloud Asset service account of the project that +# was used for sending the notifications. +resource "google_pubsub_topic_iam_member" "cloud_asset_writer" { + project = "<%= ctx[:test_env_vars]["project"] %>" + topic = google_pubsub_topic.feed_output.id + role = "roles/pubsub.publisher" + member = "serviceAccount:service-${data.google_project.project.number}@gcp-sa-cloudasset.iam.gserviceaccount.com" +} \ No newline at end of file diff --git a/templates/terraform/examples/cloud_asset_project_feed.tf.erb b/templates/terraform/examples/cloud_asset_project_feed.tf.erb new file mode 100644 index 000000000000..786f209e9481 --- /dev/null +++ b/templates/terraform/examples/cloud_asset_project_feed.tf.erb @@ -0,0 +1,43 @@ +# Create a feed that sends notifications about network resource updates. +resource "google_cloud_asset_project_feed" "<%= ctx[:primary_resource_id] %>" { + project = "<%= ctx[:test_env_vars]["project"] %>" + feed_id = "<%= ctx[:vars]["feed_id"] %>" + content_type = "RESOURCE" + + asset_types = [ + "compute.googleapis.com/Subnetwork", + "compute.googleapis.com/Network", + ] + + feed_output_config { + pubsub_destination { + topic = google_pubsub_topic.feed_output.id + } + } + + # Wait for the permission to be ready on the destination topic. + depends_on = [ + google_pubsub_topic_iam_member.cloud_asset_writer, + ] +} + +# The topic where the resource change notifications will be sent. +resource "google_pubsub_topic" "feed_output" { + project = "<%= ctx[:test_env_vars]["project"] %>" + name = "<%= ctx[:vars]["feed_id"] %>" +} + +# Find the project number of the project whose identity will be used for sending +# the asset change notifications. 
+data "google_project" "project" { + project_id = "<%= ctx[:test_env_vars]["project"] %>" +} + +# Allow the publishing role to the Cloud Asset service account of the project that +# was used for sending the notifications. +resource "google_pubsub_topic_iam_member" "cloud_asset_writer" { + project = "<%= ctx[:test_env_vars]["project"] %>" + topic = google_pubsub_topic.feed_output.id + role = "roles/pubsub.publisher" + member = "serviceAccount:service-${data.google_project.project.number}@gcp-sa-cloudasset.iam.gserviceaccount.com" +} \ No newline at end of file diff --git a/templates/terraform/examples/cloud_identity_group_membership.tf.erb b/templates/terraform/examples/cloud_identity_group_membership.tf.erb new file mode 100644 index 000000000000..d1edd447f956 --- /dev/null +++ b/templates/terraform/examples/cloud_identity_group_membership.tf.erb @@ -0,0 +1,42 @@ +resource "google_cloud_identity_group" "group" { + provider = google-beta + display_name = "<%= ctx[:vars]['id_group'] %>" + + parent = "customers/<%= ctx[:test_env_vars]['cust_id'] %>" + + group_key { + id = "<%= ctx[:vars]['id_group'] %>@<%= ctx[:test_env_vars]['org_domain'] %>" + } + + labels = { + "cloudidentity.googleapis.com/groups.discussion_forum" = "" + } +} + +resource "google_cloud_identity_group" "child-group" { + provider = google-beta + display_name = "<%= ctx[:vars]['id_group'] %>-child" + + parent = "customers/<%= ctx[:test_env_vars]['cust_id'] %>" + + group_key { + id = "<%= ctx[:vars]['id_group'] %>-child@<%= ctx[:test_env_vars]['org_domain'] %>" + } + + labels = { + "cloudidentity.googleapis.com/groups.discussion_forum" = "" + } +} + +resource "google_cloud_identity_group_membership" "<%= ctx[:primary_resource_id] %>" { + provider = google-beta + group = google_cloud_identity_group.group.id + + member_key { + id = google_cloud_identity_group.child-group.group_key[0].id + } + + roles { + name = "MEMBER" + } +} diff --git a/templates/terraform/examples/cloud_identity_group_membership_user.tf.erb b/templates/terraform/examples/cloud_identity_group_membership_user.tf.erb new file mode 100644 index 000000000000..d3a708ec7ea3 --- /dev/null +++ b/templates/terraform/examples/cloud_identity_group_membership_user.tf.erb @@ -0,0 +1,31 @@ +resource "google_cloud_identity_group" "group" { + provider = google-beta + display_name = "<%= ctx[:vars]['id_group'] %>" + + parent = "customers/<%= ctx[:test_env_vars]['cust_id'] %>" + + group_key { + id = "<%= ctx[:vars]['id_group'] %>@<%= ctx[:test_env_vars]['org_domain'] %>" + } + + labels = { + "cloudidentity.googleapis.com/groups.discussion_forum" = "" + } +} + +resource "google_cloud_identity_group_membership" "cloud_identity_group_membership_basic" { + provider = google-beta + group = google_cloud_identity_group.group.id + + member_key { + id = "<%= ctx[:test_env_vars]['identity_user'] %>@<%= ctx[:test_env_vars]['org_domain'] %>" + } + + roles { + name = "MEMBER" + } + + roles { + name = "MANAGER" + } +} \ No newline at end of file diff --git a/templates/terraform/examples/cloud_identity_groups_basic.tf.erb b/templates/terraform/examples/cloud_identity_groups_basic.tf.erb new file mode 100644 index 000000000000..6b5ea7405134 --- /dev/null +++ b/templates/terraform/examples/cloud_identity_groups_basic.tf.erb @@ -0,0 +1,14 @@ +resource "google_cloud_identity_group" "<%= ctx[:primary_resource_id] %>" { + provider = google-beta + display_name = "<%= ctx[:vars]['id_group'] %>" + + parent = "customers/<%= ctx[:test_env_vars]['cust_id'] %>" + + group_key { + id = "<%= 
ctx[:vars]['id_group'] %>@<%= ctx[:test_env_vars]['org_domain'] %>" + } + + labels = { + "cloudidentity.googleapis.com/groups.discussion_forum" = "" + } +} diff --git a/templates/terraform/examples/cloud_identity_groups_full.tf.erb b/templates/terraform/examples/cloud_identity_groups_full.tf.erb new file mode 100644 index 000000000000..2e01f47b5f97 --- /dev/null +++ b/templates/terraform/examples/cloud_identity_groups_full.tf.erb @@ -0,0 +1,26 @@ +resource "google_cloud_identity_group" "<%= ctx[:primary_resource_id] %>" { + provider = google-beta + display_name = "<%= ctx[:vars]['id_group'] %>" + description = "my new cloud identity group" + + parent = "customers/<%= ctx[:test_env_vars]['cust_id'] %>" + + group_key { + id = "<%= ctx[:vars]['id_group'] %>@<%= ctx[:test_env_vars]['org_domain'] %>" + } + + additional_group_keys { + id = "<%= ctx[:vars]['id_group'] %>-two@<%= ctx[:test_env_vars]['org_domain'] %>" + } + + labels = { + "cloudidentity.googleapis.com/groups.discussion_forum" = "" + } + + dynamic_group_metadata { + queries { + resource_type = "USER" + query = "organizations.department.exists(org, org.department=='engineering')" + } + } +} diff --git a/templates/terraform/examples/cloud_run_service_sql.tf.erb b/templates/terraform/examples/cloud_run_service_sql.tf.erb index 42ab5d8b4f93..16d54ae53878 100644 --- a/templates/terraform/examples/cloud_run_service_sql.tf.erb +++ b/templates/terraform/examples/cloud_run_service_sql.tf.erb @@ -13,7 +13,7 @@ resource "google_cloud_run_service" "<%= ctx[:primary_resource_id] %>" { annotations = { "autoscaling.knative.dev/maxScale" = "1000" "run.googleapis.com/cloudsql-instances" = "<%= ctx[:test_env_vars]['project'] %>:us-central1:${google_sql_database_instance.instance.name}" - "run.googleapis.com/client-name" = "cloud-console" + "run.googleapis.com/client-name" = "terraform" } } } diff --git a/templates/terraform/examples/cloud_run_service_traffic_split.tf.erb b/templates/terraform/examples/cloud_run_service_traffic_split.tf.erb new file mode 100644 index 000000000000..b3dcffce5a0a --- /dev/null +++ b/templates/terraform/examples/cloud_run_service_traffic_split.tf.erb @@ -0,0 +1,26 @@ +resource "google_cloud_run_service" "<%= ctx[:primary_resource_id] %>" { + name = "<%= ctx[:vars]['cloud_run_service_name'] %>" + location = "us-central1" + + template { + spec { + containers { + image = "gcr.io/cloudrun/hello" + } + } + metadata { + name = "<%= ctx[:vars]['cloud_run_service_name'] %>-green" + } + } + + traffic { + percent = 25 + revision_name = "<%= ctx[:vars]['cloud_run_service_name'] %>-green" + } + + traffic { + percent = 75 + # This revision needs to already exist + revision_name = "<%= ctx[:vars]['cloud_run_service_name'] %>-blue" + } +} diff --git a/templates/terraform/examples/cloudiot_device_basic.tf.erb b/templates/terraform/examples/cloudiot_device_basic.tf.erb new file mode 100644 index 000000000000..dbfd02b17e4d --- /dev/null +++ b/templates/terraform/examples/cloudiot_device_basic.tf.erb @@ -0,0 +1,8 @@ +resource "google_cloudiot_registry" "registry" { + name = "<%= ctx[:vars]['cloudiot_device_registry_name'] %>" +} + +resource "google_cloudiot_device" "<%= ctx[:primary_resource_id] %>" { + name = "<%= ctx[:vars]['cloudiot_device_name'] %>" + registry = google_cloudiot_registry.registry.id +} diff --git a/templates/terraform/examples/cloudiot_device_full.tf.erb b/templates/terraform/examples/cloudiot_device_full.tf.erb new file mode 100644 index 000000000000..0c3457d4029c --- /dev/null +++
b/templates/terraform/examples/cloudiot_device_full.tf.erb @@ -0,0 +1,27 @@ +resource "google_cloudiot_registry" "registry" { + name = "<%= ctx[:vars]['cloudiot_device_registry_name'] %>" +} + +resource "google_cloudiot_device" "<%= ctx[:primary_resource_id] %>" { + name = "<%= ctx[:vars]['cloudiot_device_name'] %>" + registry = google_cloudiot_registry.registry.id + + credentials { + public_key { + format = "RSA_PEM" + key = file("test-fixtures/rsa_public.pem") + } + } + + blocked = false + + log_level = "INFO" + + metadata = { + test_key_1 = "test_value_1" + } + + gateway_config { + gateway_type = "NON_GATEWAY" + } +} diff --git a/templates/terraform/examples/cloudiot_device_registry_basic.tf.erb b/templates/terraform/examples/cloudiot_device_registry_basic.tf.erb new file mode 100644 index 000000000000..67bf5e108304 --- /dev/null +++ b/templates/terraform/examples/cloudiot_device_registry_basic.tf.erb @@ -0,0 +1,3 @@ +resource "google_cloudiot_registry" "<%= ctx[:primary_resource_id] %>" { + name = "<%= ctx[:vars]['cloudiot_registry_name'] %>" +} diff --git a/templates/terraform/examples/cloudiot_device_registry_full.tf.erb b/templates/terraform/examples/cloudiot_device_registry_full.tf.erb new file mode 100644 index 000000000000..f5c11d9abdcc --- /dev/null +++ b/templates/terraform/examples/cloudiot_device_registry_full.tf.erb @@ -0,0 +1,46 @@ +resource "google_pubsub_topic" "default-devicestatus" { + name = "<%= ctx[:vars]['cloudiot_device_status_topic_name'] %>" +} + +resource "google_pubsub_topic" "default-telemetry" { + name = "<%= ctx[:vars]['cloudiot_device_telemetry_topic_name'] %>" +} + +resource "google_pubsub_topic" "additional-telemetry" { + name = "<%= ctx[:vars]['cloudiot_additional_device_telemetry_topic_name'] %>" +} + +resource "google_cloudiot_registry" "<%= ctx[:primary_resource_id] %>" { + name = "<%= ctx[:vars]['cloudiot_registry_name'] %>" + + event_notification_configs { + pubsub_topic_name = google_pubsub_topic.additional-telemetry.id + subfolder_matches = "<%= ctx[:vars]['cloudiot_subfolder_matches_additional_device_telemetry_topic'] %>" + } + + event_notification_configs { + pubsub_topic_name = google_pubsub_topic.default-telemetry.id + subfolder_matches = "" + } + + state_notification_config = { + pubsub_topic_name = google_pubsub_topic.default-devicestatus.id + } + + mqtt_config = { + mqtt_enabled_state = "MQTT_ENABLED" + } + + http_config = { + http_enabled_state = "HTTP_ENABLED" + } + + log_level = "INFO" + + credentials { + public_key_certificate = { + format = "X509_CERTIFICATE_PEM" + certificate = file("test-fixtures/rsa_cert.pem") + } + } +} diff --git a/templates/terraform/examples/cloudiot_device_registry_single_event_notification_configs.tf.erb b/templates/terraform/examples/cloudiot_device_registry_single_event_notification_configs.tf.erb new file mode 100644 index 000000000000..45139229a27c --- /dev/null +++ b/templates/terraform/examples/cloudiot_device_registry_single_event_notification_configs.tf.erb @@ -0,0 +1,13 @@ +resource "google_pubsub_topic" "default-telemetry" { + name = "<%= ctx[:vars]['cloudiot_device_telemetry_topic_name'] %>" +} + +resource "google_cloudiot_registry" "<%= ctx[:primary_resource_id] %>" { + name = "<%= ctx[:vars]['cloudiot_registry_name'] %>" + + event_notification_configs { + pubsub_topic_name = google_pubsub_topic.default-telemetry.id + subfolder_matches = "" + } + +} diff --git a/templates/terraform/examples/compute_packet_mirroring_full.tf.erb b/templates/terraform/examples/compute_packet_mirroring_full.tf.erb 
index fddffadab0ec..07ea295f8908 100644 --- a/templates/terraform/examples/compute_packet_mirroring_full.tf.erb +++ b/templates/terraform/examples/compute_packet_mirroring_full.tf.erb @@ -10,7 +10,7 @@ resource "google_compute_instance" "mirror" { } network_interface { - network = google_compute_network.default.self_link + network = google_compute_network.default.id access_config { } } @@ -21,15 +21,15 @@ resource "google_compute_packet_mirroring" "<%= ctx[:primary_resource_id] %>" { provider = google-beta description = "bar" network { - url = google_compute_network.default.self_link + url = google_compute_network.default.id } collector_ilb { - url = google_compute_forwarding_rule.default.self_link + url = google_compute_forwarding_rule.default.id } mirrored_resources { tags = ["foo"] instances { - url = google_compute_instance.mirror.self_link + url = google_compute_instance.mirror.id } } filter { @@ -45,7 +45,7 @@ resource "google_compute_network" "default" { resource "google_compute_subnetwork" "default" { name = "<%= ctx[:vars]['subnetwork_name'] %>" provider = google-beta - network = google_compute_network.default.self_link + network = google_compute_network.default.id ip_cidr_range = "10.2.0.0/16" } @@ -53,7 +53,7 @@ resource "google_compute_subnetwork" "default" { resource "google_compute_region_backend_service" "default" { name = "<%= ctx[:vars]['service_name'] %>" provider = google-beta - health_checks = ["${google_compute_health_check.default.self_link}"] + health_checks = [google_compute_health_check.default.id] } resource "google_compute_health_check" "default" { @@ -74,9 +74,9 @@ resource "google_compute_forwarding_rule" "default" { is_mirroring_collector = true ip_protocol = "TCP" load_balancing_scheme = "INTERNAL" - backend_service = google_compute_region_backend_service.default.self_link + backend_service = google_compute_region_backend_service.default.id all_ports = true - network = google_compute_network.default.self_link - subnetwork = google_compute_subnetwork.default.self_link + network = google_compute_network.default.id + subnetwork = google_compute_subnetwork.default.id network_tier = "PREMIUM" } diff --git a/templates/terraform/examples/container_analysis_note_attestation_full.tf.erb b/templates/terraform/examples/container_analysis_note_attestation_full.tf.erb new file mode 100644 index 000000000000..994ec46b55d0 --- /dev/null +++ b/templates/terraform/examples/container_analysis_note_attestation_full.tf.erb @@ -0,0 +1,22 @@ +resource "google_container_analysis_note" "<%= ctx[:primary_resource_id] %>" { + name = "<%= ctx[:vars]["note_name"] %>" + + short_description = "test note" + long_description = "a longer description of test note" + expiration_time = "2120-10-02T15:01:23.045123456Z" + + related_url { + url = "some.url" + label = "foo" + } + + related_url { + url = "google.com" + } + + attestation_authority { + hint { + human_readable_name = "Attestor Note" + } + } +} diff --git a/templates/terraform/examples/container_analysis_occurence_attestation.tf.erb b/templates/terraform/examples/container_analysis_occurence_attestation.tf.erb new file mode 100644 index 000000000000..8d0fceec2b8d --- /dev/null +++ b/templates/terraform/examples/container_analysis_occurence_attestation.tf.erb @@ -0,0 +1,35 @@ +resource "google_binary_authorization_attestor" "<%= ctx[:primary_resource_id] %>" { + name = "<%= ctx[:vars]["attestor_name"] %>" + attestation_authority_note { + note_reference = google_container_analysis_note.note.name + public_keys { + 
ascii_armored_pgp_public_key = <" + attestation_authority { + hint { + human_readable_name = "Attestor Note" + } + } +} diff --git a/templates/terraform/examples/container_analysis_occurrence_kms.tf.erb b/templates/terraform/examples/container_analysis_occurrence_kms.tf.erb new file mode 100644 index 000000000000..505393076c54 --- /dev/null +++ b/templates/terraform/examples/container_analysis_occurrence_kms.tf.erb @@ -0,0 +1,51 @@ +resource "google_binary_authorization_attestor" "attestor" { + name = "<%= ctx[:vars]["attestor"] %>" + attestation_authority_note { + note_reference = google_container_analysis_note.note.name + public_keys { + id = data.google_kms_crypto_key_version.version.id + pkix_public_key { + public_key_pem = data.google_kms_crypto_key_version.version.public_key[0].pem + signature_algorithm = data.google_kms_crypto_key_version.version.public_key[0].algorithm + } + } + } +} + +resource "google_container_analysis_note" "note" { + name = "<%= ctx[:vars]["note_name"] %>" + attestation_authority { + hint { + human_readable_name = "Attestor Note" + } + } +} + +data "google_kms_key_ring" "keyring" { + name = "my-key-ring" + location = "global" +} + +data "google_kms_crypto_key" "crypto-key" { + name = "my-key" + key_ring = data.google_kms_key_ring.keyring.self_link +} + +data "google_kms_crypto_key_version" "version" { + crypto_key = data.google_kms_crypto_key.crypto-key.self_link +} + +resource "google_container_analysis_occurrence" "<%= ctx[:primary_resource_id] %>" { + resource_uri = "gcr.io/my-project/my-image" + note_name = google_container_analysis_note.note.id + + // See "Creating Attestations" Guide for expected + // payload and signature formats. + attestation { + serialized_payload = filebase64("path/to/my/payload.json") + signatures { + public_key_id = data.google_kms_crypto_key_version.version.id + serialized_payload = filebase64("path/to/my/payload.json.sig") + } + } +} diff --git a/templates/terraform/examples/data_catalog_entry_basic.tf.erb b/templates/terraform/examples/data_catalog_entry_basic.tf.erb new file mode 100644 index 000000000000..b59545415f74 --- /dev/null +++ b/templates/terraform/examples/data_catalog_entry_basic.tf.erb @@ -0,0 +1,11 @@ +resource "google_data_catalog_entry" "<%= ctx[:primary_resource_id] %>" { + entry_group = google_data_catalog_entry_group.entry_group.id + entry_id = "<%= ctx[:vars]['entry_id'] %>" + + user_specified_type = "my_custom_type" + user_specified_system = "SomethingExternal" +} + +resource "google_data_catalog_entry_group" "entry_group" { + entry_group_id = "<%= ctx[:vars]['entry_group_id'] %>" +} \ No newline at end of file diff --git a/templates/terraform/examples/data_catalog_entry_fileset.tf.erb b/templates/terraform/examples/data_catalog_entry_fileset.tf.erb new file mode 100644 index 000000000000..ad7ea3e213b2 --- /dev/null +++ b/templates/terraform/examples/data_catalog_entry_fileset.tf.erb @@ -0,0 +1,14 @@ +resource "google_data_catalog_entry" "<%= ctx[:primary_resource_id] %>" { + entry_group = google_data_catalog_entry_group.entry_group.id + entry_id = "<%= ctx[:vars]['entry_id'] %>" + + type = "FILESET" + + gcs_fileset_spec { + file_patterns = ["gs://fake_bucket/dir/*"] + } +} + +resource "google_data_catalog_entry_group" "entry_group" { + entry_group_id = "<%= ctx[:vars]['entry_group_id'] %>" +} \ No newline at end of file diff --git a/templates/terraform/examples/data_catalog_entry_full.tf.erb b/templates/terraform/examples/data_catalog_entry_full.tf.erb new file mode 100644 index 000000000000..56d995b91bff 
--- /dev/null +++ b/templates/terraform/examples/data_catalog_entry_full.tf.erb @@ -0,0 +1,54 @@ +resource "google_data_catalog_entry" "<%= ctx[:primary_resource_id] %>" { + entry_group = google_data_catalog_entry_group.entry_group.id + entry_id = "<%= ctx[:vars]['entry_id'] %>" + + user_specified_type = "my_user_specified_type" + user_specified_system = "Something_custom" + linked_resource = "my/linked/resource" + + display_name = "my custom type entry" + description = "a custom type entry for a user specified system" + + schema = <" { + entry_group_id = "<%= ctx[:vars]['entry_group_id'] %>" +} diff --git a/templates/terraform/examples/data_catalog_entry_group_full.tf.erb b/templates/terraform/examples/data_catalog_entry_group_full.tf.erb new file mode 100644 index 000000000000..0aea77513532 --- /dev/null +++ b/templates/terraform/examples/data_catalog_entry_group_full.tf.erb @@ -0,0 +1,6 @@ +resource "google_data_catalog_entry_group" "<%= ctx[:primary_resource_id] %>" { + entry_group_id = "<%= ctx[:vars]['entry_group_id'] %>" + + display_name = "terraform entry group" + description = "entry group created by Terraform" +} diff --git a/templates/terraform/examples/data_catalog_entry_group_tag.tf.erb b/templates/terraform/examples/data_catalog_entry_group_tag.tf.erb new file mode 100644 index 000000000000..3eddbfe6e767 --- /dev/null +++ b/templates/terraform/examples/data_catalog_entry_group_tag.tf.erb @@ -0,0 +1,72 @@ +resource "google_data_catalog_entry" "first_entry" { + entry_group = google_data_catalog_entry_group.entry_group.id + entry_id = "<%= ctx[:vars]['first_entry'] %>" + + user_specified_type = "my_custom_type" + user_specified_system = "SomethingExternal" +} + +resource "google_data_catalog_entry" "second_entry" { + entry_group = google_data_catalog_entry_group.entry_group.id + entry_id = "<%= ctx[:vars]['second_entry'] %>" + + user_specified_type = "another_custom_type" + user_specified_system = "SomethingElseExternal" +} + +resource "google_data_catalog_entry_group" "entry_group" { + entry_group_id = "<%= ctx[:vars]['entry_group_id'] %>" +} + +resource "google_data_catalog_tag_template" "tag_template" { + tag_template_id = "<%= ctx[:vars]['tag_template_id'] %>" + region = "us-central1" + display_name = "Demo Tag Template" + + fields { + field_id = "source" + display_name = "Source of data asset" + type { + primitive_type = "STRING" + } + is_required = true + } + + fields { + field_id = "num_rows" + display_name = "Number of rows in the data asset" + type { + primitive_type = "DOUBLE" + } + } + + fields { + field_id = "pii_type" + display_name = "PII type" + type { + enum_type { + allowed_values { + display_name = "EMAIL" + } + allowed_values { + display_name = "SOCIAL SECURITY NUMBER" + } + allowed_values { + display_name = "NONE" + } + } + } + } + + force_delete = "<%= ctx[:vars]['force_delete'] %>" +} + +resource "google_data_catalog_tag" "<%= ctx[:primary_resource_id] %>" { + parent = google_data_catalog_entry_group.entry_group.id + template = google_data_catalog_tag_template.tag_template.id + + fields { + field_name = "source" + string_value = "my-string" + } +} \ No newline at end of file diff --git a/templates/terraform/examples/data_catalog_entry_tag_basic.tf.erb b/templates/terraform/examples/data_catalog_entry_tag_basic.tf.erb new file mode 100644 index 000000000000..162f86409ec1 --- /dev/null +++ b/templates/terraform/examples/data_catalog_entry_tag_basic.tf.erb @@ -0,0 +1,64 @@ +resource "google_data_catalog_entry" "entry" { + entry_group = 
google_data_catalog_entry_group.entry_group.id + entry_id = "<%= ctx[:vars]['entry_id'] %>" + + user_specified_type = "my_custom_type" + user_specified_system = "SomethingExternal" +} + +resource "google_data_catalog_entry_group" "entry_group" { + entry_group_id = "<%= ctx[:vars]['entry_group_id'] %>" +} + +resource "google_data_catalog_tag_template" "tag_template" { + tag_template_id = "<%= ctx[:vars]['tag_template_id'] %>" + region = "us-central1" + display_name = "Demo Tag Template" + + fields { + field_id = "source" + display_name = "Source of data asset" + type { + primitive_type = "STRING" + } + is_required = true + } + + fields { + field_id = "num_rows" + display_name = "Number of rows in the data asset" + type { + primitive_type = "DOUBLE" + } + } + + fields { + field_id = "pii_type" + display_name = "PII type" + type { + enum_type { + allowed_values { + display_name = "EMAIL" + } + allowed_values { + display_name = "SOCIAL SECURITY NUMBER" + } + allowed_values { + display_name = "NONE" + } + } + } + } + + force_delete = "<%= ctx[:vars]['force_delete'] %>" +} + +resource "google_data_catalog_tag" "<%= ctx[:primary_resource_id] %>" { + parent = google_data_catalog_entry.entry.id + template = google_data_catalog_tag_template.tag_template.id + + fields { + field_name = "source" + string_value = "my-string" + } +} \ No newline at end of file diff --git a/templates/terraform/examples/data_catalog_entry_tag_full.tf.erb b/templates/terraform/examples/data_catalog_entry_tag_full.tf.erb new file mode 100644 index 000000000000..0eae34529a17 --- /dev/null +++ b/templates/terraform/examples/data_catalog_entry_tag_full.tf.erb @@ -0,0 +1,132 @@ +resource "google_data_catalog_entry" "entry" { + entry_group = google_data_catalog_entry_group.entry_group.id + entry_id = "<%= ctx[:vars]['entry_id'] %>" + + user_specified_type = "my_custom_type" + user_specified_system = "SomethingExternal" + + schema = <" { + parent = google_data_catalog_entry.entry.id + template = google_data_catalog_tag_template.tag_template.id + + fields { + field_name = "source" + string_value = "my-string" + } + + fields { + field_name = "num_rows" + double_value = 5 + } + + fields { + field_name = "pii_type" + enum_value = "EMAIL" + } + + column = "address" +} + +resource "google_data_catalog_tag" "second-tag" { + parent = google_data_catalog_entry.entry.id + template = google_data_catalog_tag_template.tag_template.id + + fields { + field_name = "source" + string_value = "my-string" + } + + fields { + field_name = "pii_type" + enum_value = "NONE" + } + + column = "first_name" +} \ No newline at end of file diff --git a/templates/terraform/examples/data_catalog_tag_template_basic.tf.erb b/templates/terraform/examples/data_catalog_tag_template_basic.tf.erb new file mode 100644 index 000000000000..02d0d67c20ab --- /dev/null +++ b/templates/terraform/examples/data_catalog_tag_template_basic.tf.erb @@ -0,0 +1,42 @@ +resource "google_data_catalog_tag_template" "<%= ctx[:primary_resource_id] %>" { + tag_template_id = "<%= ctx[:vars]['tag_template_id'] %>" + region = "us-central1" + display_name = "Demo Tag Template" + + fields { + field_id = "source" + display_name = "Source of data asset" + type { + primitive_type = "STRING" + } + is_required = true + } + + fields { + field_id = "num_rows" + display_name = "Number of rows in the data asset" + type { + primitive_type = "DOUBLE" + } + } + + fields { + field_id = "pii_type" + display_name = "PII type" + type { + enum_type { + allowed_values { + display_name = "EMAIL" + } + 
allowed_values { + display_name = "SOCIAL SECURITY NUMBER" + } + allowed_values { + display_name = "NONE" + } + } + } + } + + force_delete = "<%= ctx[:vars]['force_delete'] %>" +} diff --git a/templates/terraform/examples/data_fusion_instance_full.tf.erb b/templates/terraform/examples/data_fusion_instance_full.tf.erb index f1d47153f95c..91e9f6bff87e 100644 --- a/templates/terraform/examples/data_fusion_instance_full.tf.erb +++ b/templates/terraform/examples/data_fusion_instance_full.tf.erb @@ -14,4 +14,5 @@ resource "google_data_fusion_instance" "<%= ctx[:primary_resource_id] %>" { network = "default" ip_allocation = "10.89.48.0/22" } + version = "6.1.1" } \ No newline at end of file diff --git a/templates/terraform/examples/dialogflow_entity_type_basic.tf.erb b/templates/terraform/examples/dialogflow_entity_type_basic.tf.erb new file mode 100644 index 000000000000..bafe6d533f82 --- /dev/null +++ b/templates/terraform/examples/dialogflow_entity_type_basic.tf.erb @@ -0,0 +1,19 @@ +resource "google_dialogflow_agent" "basic_agent" { + display_name = "example_agent" + default_language_code = "en" + time_zone = "America/New_York" +} + +resource "google_dialogflow_entity_type" "<%= ctx[:primary_resource_id] %>" { + depends_on = [google_dialogflow_agent.basic_agent] + display_name = "<%= ctx[:vars]["entity_type_name"] %>" + kind = "KIND_MAP" + entities { + value = "value1" + synonyms = ["synonym1","synonym2"] + } + entities { + value = "value2" + synonyms = ["synonym3","synonym4"] + } +} \ No newline at end of file diff --git a/templates/terraform/examples/dns_managed_zone_private.tf.erb b/templates/terraform/examples/dns_managed_zone_private.tf.erb index b3afb2178df1..2b6ca85e12ab 100644 --- a/templates/terraform/examples/dns_managed_zone_private.tf.erb +++ b/templates/terraform/examples/dns_managed_zone_private.tf.erb @@ -10,10 +10,10 @@ resource "google_dns_managed_zone" "<%= ctx[:primary_resource_id] %>" { private_visibility_config { networks { - network_url = google_compute_network.network-1.self_link + network_url = google_compute_network.network-1.id } networks { - network_url = google_compute_network.network-2.self_link + network_url = google_compute_network.network-2.id } } } diff --git a/templates/terraform/examples/dns_managed_zone_private_forwarding.tf.erb b/templates/terraform/examples/dns_managed_zone_private_forwarding.tf.erb index 6b38db0716d3..e5f49dd70e80 100644 --- a/templates/terraform/examples/dns_managed_zone_private_forwarding.tf.erb +++ b/templates/terraform/examples/dns_managed_zone_private_forwarding.tf.erb @@ -1,5 +1,4 @@ resource "google_dns_managed_zone" "<%= ctx[:primary_resource_id] %>" { - provider = google-beta name = "<%= ctx[:vars]['zone_name'] %>" dns_name = "private.example.com." 
description = "Example private DNS zone" @@ -11,10 +10,10 @@ resource "google_dns_managed_zone" "<%= ctx[:primary_resource_id] %>" { private_visibility_config { networks { - network_url = google_compute_network.network-1.self_link + network_url = google_compute_network.network-1.id } networks { - network_url = google_compute_network.network-2.self_link + network_url = google_compute_network.network-2.id } } diff --git a/templates/terraform/examples/dns_managed_zone_private_peering.tf.erb b/templates/terraform/examples/dns_managed_zone_private_peering.tf.erb index 129b1aa8e25d..e574801bafae 100644 --- a/templates/terraform/examples/dns_managed_zone_private_peering.tf.erb +++ b/templates/terraform/examples/dns_managed_zone_private_peering.tf.erb @@ -1,6 +1,4 @@ resource "google_dns_managed_zone" "<%= ctx[:primary_resource_id] %>" { - provider = google-beta - name = "<%= ctx[:vars]['zone_name'] %>" dns_name = "peering.example.com." description = "Example private DNS peering zone" @@ -9,32 +7,24 @@ resource "google_dns_managed_zone" "<%= ctx[:primary_resource_id] %>" { private_visibility_config { networks { - network_url = google_compute_network.network-source.self_link + network_url = google_compute_network.network-source.id } } peering_config { target_network { - network_url = google_compute_network.network-target.self_link + network_url = google_compute_network.network-target.id } } } resource "google_compute_network" "network-source" { - provider = google-beta - name = "<%= ctx[:vars]['network_source_name'] %>" auto_create_subnetworks = false } resource "google_compute_network" "network-target" { - provider = google-beta - name = "<%= ctx[:vars]['network_target_name'] %>" auto_create_subnetworks = false } -provider "google-beta" { - region = "us-central1" - zone = "us-central1-a" -} diff --git a/templates/terraform/examples/dns_managed_zone_service_directory.tf.erb b/templates/terraform/examples/dns_managed_zone_service_directory.tf.erb new file mode 100644 index 000000000000..6960aa263971 --- /dev/null +++ b/templates/terraform/examples/dns_managed_zone_service_directory.tf.erb @@ -0,0 +1,29 @@ +resource "google_dns_managed_zone" "<%= ctx[:primary_resource_id] %>" { + provider = google-beta + + name = "<%= ctx[:vars]['zone_name'] %>" + dns_name = "services.example.com." 
+ description = "Example private DNS Service Directory zone" + + visibility = "private" + + service_directory_config { + namespace { + namespace_url = google_service_directory_namespace.example.id + } + } +} + +resource "google_service_directory_namespace" "example" { + provider = google-beta + + namespace_id = "example" + location = "us-central1" +} + +resource "google_compute_network" "network" { + provider = google-beta + + name = "<%= ctx[:vars]['network_name'] %>" + auto_create_subnetworks = false +} \ No newline at end of file diff --git a/templates/terraform/examples/dns_policy_basic.tf.erb b/templates/terraform/examples/dns_policy_basic.tf.erb index 89b0f3f3b599..117499f69a31 100644 --- a/templates/terraform/examples/dns_policy_basic.tf.erb +++ b/templates/terraform/examples/dns_policy_basic.tf.erb @@ -1,6 +1,4 @@ resource "google_dns_policy" "<%= ctx[:primary_resource_id] %>" { - provider = google-beta - name = "<%= ctx[:vars]['policy_name'] %>" enable_inbound_forwarding = true @@ -16,28 +14,19 @@ resource "google_dns_policy" "<%= ctx[:primary_resource_id] %>" { } networks { - network_url = google_compute_network.network-1.self_link + network_url = google_compute_network.network-1.id } networks { - network_url = google_compute_network.network-2.self_link + network_url = google_compute_network.network-2.id } } resource "google_compute_network" "network-1" { - provider = google-beta - name = "<%= ctx[:vars]['network_1_name'] %>" auto_create_subnetworks = false } resource "google_compute_network" "network-2" { - provider = google-beta - name = "<%= ctx[:vars]['network_2_name'] %>" auto_create_subnetworks = false } - -provider "google-beta" { - region = "us-central1" - zone = "us-central1-a" -} diff --git a/templates/terraform/examples/external_vpn_gateway.tf.erb b/templates/terraform/examples/external_vpn_gateway.tf.erb index 2b45778d9edd..8851c143be8a 100644 --- a/templates/terraform/examples/external_vpn_gateway.tf.erb +++ b/templates/terraform/examples/external_vpn_gateway.tf.erb @@ -2,7 +2,7 @@ resource "google_compute_ha_vpn_gateway" "ha_gateway" { provider = google-beta region = "us-central1" name = "<%= ctx[:vars]['ha_vpn_gateway_name'] %>" - network = google_compute_network.network.self_link + network = google_compute_network.network.id } resource "google_compute_external_vpn_gateway" "external_gateway" { @@ -28,7 +28,7 @@ resource "google_compute_subnetwork" "network_subnet1" { name = "ha-vpn-subnet-1" ip_cidr_range = "10.0.1.0/24" region = "us-central1" - network = google_compute_network.network.self_link + network = google_compute_network.network.id } resource "google_compute_subnetwork" "network_subnet2" { @@ -36,7 +36,7 @@ resource "google_compute_subnetwork" "network_subnet2" { name = "ha-vpn-subnet-2" ip_cidr_range = "10.0.2.0/24" region = "us-west1" - network = google_compute_network.network.self_link + network = google_compute_network.network.id } resource "google_compute_router" "router1" { @@ -52,11 +52,11 @@ resource "google_compute_vpn_tunnel" "tunnel1" { provider = google-beta name = "ha-vpn-tunnel1" region = "us-central1" - vpn_gateway = google_compute_ha_vpn_gateway.ha_gateway.self_link - peer_external_gateway = google_compute_external_vpn_gateway.external_gateway.self_link + vpn_gateway = google_compute_ha_vpn_gateway.ha_gateway.id + peer_external_gateway = google_compute_external_vpn_gateway.external_gateway.id peer_external_gateway_interface = 0 shared_secret = "a secret message" - router = google_compute_router.router1.self_link + router = 
google_compute_router.router1.id vpn_gateway_interface = 0 } @@ -64,11 +64,11 @@ resource "google_compute_vpn_tunnel" "tunnel2" { provider = google-beta name = "ha-vpn-tunnel2" region = "us-central1" - vpn_gateway = google_compute_ha_vpn_gateway.ha_gateway.self_link - peer_external_gateway = google_compute_external_vpn_gateway.external_gateway.self_link + vpn_gateway = google_compute_ha_vpn_gateway.ha_gateway.id + peer_external_gateway = google_compute_external_vpn_gateway.external_gateway.id peer_external_gateway_interface = 0 shared_secret = "a secret message" - router = " ${google_compute_router.router1.self_link}" + router = google_compute_router.router1.id vpn_gateway_interface = 1 } diff --git a/templates/terraform/examples/filestore_instance_full.tf.erb b/templates/terraform/examples/filestore_instance_full.tf.erb new file mode 100644 index 000000000000..988dc0431ca6 --- /dev/null +++ b/templates/terraform/examples/filestore_instance_full.tf.erb @@ -0,0 +1,30 @@ +resource "google_filestore_instance" "<%= ctx[:primary_resource_id] %>" { + provider = google-beta + name = "<%= ctx[:vars]["instance_name"] %>" + zone = "us-central1-b" + tier = "BASIC_SSD" + + file_shares { + capacity_gb = 2660 + name = "share1" + + nfs_export_options { + ip_ranges = ["10.0.0.0/24"] + access_mode = "READ_WRITE" + squash_mode = "NO_ROOT_SQUASH" + } + + nfs_export_options { + ip_ranges = ["10.10.0.0/24"] + access_mode = "READ_ONLY" + squash_mode = "ROOT_SQUASH" + anon_uid = 123 + anon_gid = 456 + } + } + + networks { + network = "default" + modes = ["MODE_IPV4"] + } +} diff --git a/templates/terraform/examples/firebase_web_app_basic.tf.erb b/templates/terraform/examples/firebase_web_app_basic.tf.erb new file mode 100644 index 000000000000..5a7b4f99120e --- /dev/null +++ b/templates/terraform/examples/firebase_web_app_basic.tf.erb @@ -0,0 +1,46 @@ +resource "google_project" "default" { + provider = google-beta + + project_id = "tf-test%{random_suffix}" + name = "tf-test%{random_suffix}" + org_id = "<%= ctx[:test_env_vars]['org_id'] %>" +} + +resource "google_firebase_project" "default" { + provider = google-beta + project = google_project.default.project_id +} + +resource "google_firebase_web_app" "<%= ctx[:primary_resource_id] %>" { + provider = google-beta + project = google_project.default.project_id + display_name = "<%= ctx[:vars]['display_name'] %>" + + depends_on = [google_firebase_project.default] +} + +data "google_firebase_web_app_config" "basic" { + provider = google-beta + web_app_id = google_firebase_web_app.basic.app_id +} + +resource "google_storage_bucket" "default" { + provider = google-beta + name = "<%= ctx[:vars]['bucket_name'] %>" +} + +resource "google_storage_bucket_object" "default" { + provider = google-beta + bucket = google_storage_bucket.default.name + name = "firebase-config.json" + + content = jsonencode({ + appId = google_firebase_web_app.basic.app_id + apiKey = data.google_firebase_web_app_config.basic.api_key + authDomain = data.google_firebase_web_app_config.basic.auth_domain + databaseURL = lookup(data.google_firebase_web_app_config.basic, "database_url", "") + storageBucket = lookup(data.google_firebase_web_app_config.basic, "storage_bucket", "") + messagingSenderId = lookup(data.google_firebase_web_app_config.basic, "messaging_sender_id", "") + measurementId = lookup(data.google_firebase_web_app_config.basic, "measurement_id", "") + }) +} diff --git a/templates/terraform/examples/forwarding_rule_basic.tf.erb
b/templates/terraform/examples/forwarding_rule_basic.tf.erb index 47fa4a17245c..1774fdda51bb 100644 --- a/templates/terraform/examples/forwarding_rule_basic.tf.erb +++ b/templates/terraform/examples/forwarding_rule_basic.tf.erb @@ -1,6 +1,6 @@ resource "google_compute_forwarding_rule" "<%= ctx[:primary_resource_id] %>" { name = "<%= ctx[:vars]['forwarding_rule_name'] %>" - target = google_compute_target_pool.default.self_link + target = google_compute_target_pool.default.id port_range = "80" } diff --git a/templates/terraform/examples/forwarding_rule_global_internallb.tf.erb b/templates/terraform/examples/forwarding_rule_global_internallb.tf.erb index f8f96a8c84c1..baba424e88d8 100644 --- a/templates/terraform/examples/forwarding_rule_global_internallb.tf.erb +++ b/templates/terraform/examples/forwarding_rule_global_internallb.tf.erb @@ -3,16 +3,16 @@ resource "google_compute_forwarding_rule" "<%= ctx[:primary_resource_id] %>" { name = "<%= ctx[:vars]['forwarding_rule_name'] %>" region = "us-central1" load_balancing_scheme = "INTERNAL" - backend_service = "${google_compute_region_backend_service.backend.self_link}" + backend_service = google_compute_region_backend_service.backend.id all_ports = true allow_global_access = true - network = "${google_compute_network.default.name}" - subnetwork = "${google_compute_subnetwork.default.name}" + network = google_compute_network.default.name + subnetwork = google_compute_subnetwork.default.name } resource "google_compute_region_backend_service" "backend" { name = "<%= ctx[:vars]['backend_name'] %>" region = "us-central1" - health_checks = ["${google_compute_health_check.hc.self_link}"] + health_checks = [google_compute_health_check.hc.id] } resource "google_compute_health_check" "hc" { name = "check-<%= ctx[:vars]['backend_name'] %>" @@ -30,5 +30,5 @@ resource "google_compute_subnetwork" "default" { name = "<%= ctx[:vars]['network_name'] %>" ip_cidr_range = "10.0.0.0/16" region = "us-central1" - network = "${google_compute_network.default.self_link}" -} \ No newline at end of file + network = google_compute_network.default.id +} diff --git a/templates/terraform/examples/forwarding_rule_http_lb.tf.erb b/templates/terraform/examples/forwarding_rule_http_lb.tf.erb index c7381e2dc9f2..b04eb8da4f97 100644 --- a/templates/terraform/examples/forwarding_rule_http_lb.tf.erb +++ b/templates/terraform/examples/forwarding_rule_http_lb.tf.erb @@ -8,9 +8,9 @@ resource "google_compute_forwarding_rule" "<%= ctx[:primary_resource_id] %>" { ip_protocol = "TCP" load_balancing_scheme = "INTERNAL_MANAGED" port_range = "80" - target = google_compute_region_target_http_proxy.default.self_link - network = google_compute_network.default.self_link - subnetwork = google_compute_subnetwork.default.self_link + target = google_compute_region_target_http_proxy.default.id + network = google_compute_network.default.id + subnetwork = google_compute_subnetwork.default.id network_tier = "PREMIUM" } @@ -19,7 +19,7 @@ resource "google_compute_region_target_http_proxy" "default" { region = "us-central1" name = "<%= ctx[:vars]['region_target_http_proxy_name'] %>" - url_map = google_compute_region_url_map.default.self_link + url_map = google_compute_region_url_map.default.id } resource "google_compute_region_url_map" "default" { @@ -27,7 +27,7 @@ resource "google_compute_region_url_map" "default" { region = "us-central1" name = "<%= ctx[:vars]['region_url_map_name'] %>" - default_service = google_compute_region_backend_service.default.self_link + default_service = 
google_compute_region_backend_service.default.id } resource "google_compute_region_backend_service" "default" { @@ -46,7 +46,7 @@ resource "google_compute_region_backend_service" "default" { protocol = "HTTP" timeout_sec = 10 - health_checks = [google_compute_region_health_check.default.self_link] + health_checks = [google_compute_region_health_check.default.id] } data "google_compute_image" "debian_image" { @@ -58,9 +58,9 @@ data "google_compute_image" "debian_image" { resource "google_compute_region_instance_group_manager" "rigm" { provider = google-beta region = "us-central1" - name = "rigm-internal" + name = "<%= ctx[:vars]['rigm_name'] %>" version { - instance_template = google_compute_instance_template.instance_template.self_link + instance_template = google_compute_instance_template.instance_template.id name = "primary" } base_instance_name = "internal-glb" @@ -73,8 +73,8 @@ resource "google_compute_instance_template" "instance_template" { machine_type = "n1-standard-1" network_interface { - network = google_compute_network.default.self_link - subnetwork = google_compute_subnetwork.default.self_link + network = google_compute_network.default.id + subnetwork = google_compute_subnetwork.default.id } disk { @@ -100,7 +100,7 @@ resource "google_compute_region_health_check" "default" { resource "google_compute_firewall" "fw1" { provider = google-beta name = "<%= ctx[:vars]['fw_name'] %>-1" - network = google_compute_network.default.self_link + network = google_compute_network.default.id source_ranges = ["10.1.2.0/24"] allow { protocol = "tcp" @@ -118,7 +118,7 @@ resource "google_compute_firewall" "fw2" { depends_on = [google_compute_firewall.fw1] provider = google-beta name = "<%= ctx[:vars]['fw_name'] %>-2" - network = google_compute_network.default.self_link + network = google_compute_network.default.id source_ranges = ["0.0.0.0/0"] allow { protocol = "tcp" @@ -132,7 +132,7 @@ resource "google_compute_firewall" "fw3" { depends_on = [google_compute_firewall.fw2] provider = google-beta name = "<%= ctx[:vars]['fw_name'] %>-3" - network = google_compute_network.default.self_link + network = google_compute_network.default.id source_ranges = ["130.211.0.0/22", "35.191.0.0/16"] allow { protocol = "tcp" @@ -145,7 +145,7 @@ resource "google_compute_firewall" "fw4" { depends_on = [google_compute_firewall.fw3] provider = google-beta name = "<%= ctx[:vars]['fw_name'] %>-4" - network = google_compute_network.default.self_link + network = google_compute_network.default.id source_ranges = ["10.129.0.0/26"] target_tags = ["load-balanced-backend"] allow { @@ -175,7 +175,7 @@ resource "google_compute_subnetwork" "default" { name = "<%= ctx[:vars]['network_name'] %>-default" ip_cidr_range = "10.1.2.0/24" region = "us-central1" - network = google_compute_network.default.self_link + network = google_compute_network.default.id } resource "google_compute_subnetwork" "proxy" { @@ -183,7 +183,7 @@ resource "google_compute_subnetwork" "proxy" { name = "<%= ctx[:vars]['network_name'] %>-proxy" ip_cidr_range = "10.129.0.0/26" region = "us-central1" - network = google_compute_network.default.self_link + network = google_compute_network.default.id purpose = "INTERNAL_HTTPS_LOAD_BALANCER" role = "ACTIVE" } diff --git a/templates/terraform/examples/forwarding_rule_internallb.tf.erb b/templates/terraform/examples/forwarding_rule_internallb.tf.erb index d78072f6392e..886884b60af6 100644 --- a/templates/terraform/examples/forwarding_rule_internallb.tf.erb +++ 
b/templates/terraform/examples/forwarding_rule_internallb.tf.erb @@ -4,7 +4,7 @@ resource "google_compute_forwarding_rule" "<%= ctx[:primary_resource_id] %>" { region = "us-central1" load_balancing_scheme = "INTERNAL" - backend_service = google_compute_region_backend_service.backend.self_link + backend_service = google_compute_region_backend_service.backend.id all_ports = true network = google_compute_network.default.name subnetwork = google_compute_subnetwork.default.name @@ -13,7 +13,7 @@ resource "google_compute_forwarding_rule" "<%= ctx[:primary_resource_id] %>" { resource "google_compute_region_backend_service" "backend" { name = "<%= ctx[:vars]['backend_name'] %>" region = "us-central1" - health_checks = [google_compute_health_check.hc.self_link] + health_checks = [google_compute_health_check.hc.id] } resource "google_compute_health_check" "hc" { @@ -35,5 +35,5 @@ resource "google_compute_subnetwork" "default" { name = "<%= ctx[:vars]['network_name'] %>" ip_cidr_range = "10.0.0.0/16" region = "us-central1" - network = google_compute_network.default.self_link + network = google_compute_network.default.id } diff --git a/templates/terraform/examples/global_forwarding_rule_http.tf.erb b/templates/terraform/examples/global_forwarding_rule_http.tf.erb index 71225b15e162..8d9793018471 100644 --- a/templates/terraform/examples/global_forwarding_rule_http.tf.erb +++ b/templates/terraform/examples/global_forwarding_rule_http.tf.erb @@ -1,19 +1,19 @@ resource "google_compute_global_forwarding_rule" "default" { name = "<%= ctx[:vars]['forwarding_rule_name'] %>" - target = google_compute_target_http_proxy.default.self_link + target = google_compute_target_http_proxy.default.id port_range = "80" } resource "google_compute_target_http_proxy" "default" { name = "<%= ctx[:vars]['http_proxy_name'] %>" description = "a description" - url_map = google_compute_url_map.default.self_link + url_map = google_compute_url_map.default.id } resource "google_compute_url_map" "default" { name = "url-map-<%= ctx[:vars]['http_proxy_name'] %>" description = "a description" - default_service = google_compute_backend_service.default.self_link + default_service = google_compute_backend_service.default.id host_rule { hosts = ["mysite.com"] @@ -22,11 +22,11 @@ resource "google_compute_url_map" "default" { path_matcher { name = "allpaths" - default_service = google_compute_backend_service.default.self_link + default_service = google_compute_backend_service.default.id path_rule { paths = ["/*"] - service = google_compute_backend_service.default.self_link + service = google_compute_backend_service.default.id } } } @@ -37,7 +37,7 @@ resource "google_compute_backend_service" "default" { protocol = "HTTP" timeout_sec = 10 - health_checks = [google_compute_http_health_check.default.self_link] + health_checks = [google_compute_http_health_check.default.id] } resource "google_compute_http_health_check" "default" { diff --git a/templates/terraform/examples/global_forwarding_rule_internal.tf.erb b/templates/terraform/examples/global_forwarding_rule_internal.tf.erb index aa01994cb81e..6b1baac7090f 100644 --- a/templates/terraform/examples/global_forwarding_rule_internal.tf.erb +++ b/templates/terraform/examples/global_forwarding_rule_internal.tf.erb @@ -1,7 +1,7 @@ resource "google_compute_global_forwarding_rule" "default" { provider = google-beta name = "<%= ctx[:vars]['forwarding_rule_name'] %>" - target = google_compute_target_http_proxy.default.self_link + target = google_compute_target_http_proxy.default.id port_range = "80" 
load_balancing_scheme = "INTERNAL_SELF_MANAGED" ip_address = "0.0.0.0" @@ -18,14 +18,14 @@ resource "google_compute_target_http_proxy" "default" { provider = google-beta name = "<%= ctx[:vars]['http_proxy_name'] %>" description = "a description" - url_map = google_compute_url_map.default.self_link + url_map = google_compute_url_map.default.id } resource "google_compute_url_map" "default" { provider = google-beta name = "url-map-<%= ctx[:vars]['http_proxy_name'] %>" description = "a description" - default_service = google_compute_backend_service.default.self_link + default_service = google_compute_backend_service.default.id host_rule { hosts = ["mysite.com"] @@ -34,11 +34,11 @@ resource "google_compute_url_map" "default" { path_matcher { name = "allpaths" - default_service = google_compute_backend_service.default.self_link + default_service = google_compute_backend_service.default.id path_rule { paths = ["/*"] - service = google_compute_backend_service.default.self_link + service = google_compute_backend_service.default.id } } } @@ -58,7 +58,7 @@ resource "google_compute_backend_service" "default" { max_rate_per_instance = 50 } - health_checks = [google_compute_health_check.default.self_link] + health_checks = [google_compute_health_check.default.id] } data "google_compute_image" "debian_image" { @@ -71,7 +71,7 @@ resource "google_compute_instance_group_manager" "igm" { provider = google-beta name = "igm-internal" version { - instance_template = google_compute_instance_template.instance_template.self_link + instance_template = google_compute_instance_template.instance_template.id name = "primary" } base_instance_name = "internal-glb" diff --git a/templates/terraform/examples/global_network_endpoint.tf.erb b/templates/terraform/examples/global_network_endpoint.tf.erb index 5dbf2770adc7..32592e299c85 100644 --- a/templates/terraform/examples/global_network_endpoint.tf.erb +++ b/templates/terraform/examples/global_network_endpoint.tf.erb @@ -1,18 +1,13 @@ resource "google_compute_global_network_endpoint" "<%= ctx[:primary_resource_id] %>" { - global_network_endpoint_group = google_compute_network_endpoint_group.neg.name + global_network_endpoint_group = google_compute_global_network_endpoint_group.neg.name fqdn = "www.example.com" - port = google_compute_network_endpoint_group.neg.default_port - ip_address = google_compute_instance.endpoint-instance.network_interface[0].network_ip + port = 90 + ip_address = "8.8.8.8" } -resource "google_compute_global_network_endpoint_group" "group" { - name = "<%= ctx[:vars]['neg_name'] %>" - network = google_compute_network.default.self_link - default_port = "90" -} - -resource "google_compute_network" "default" { - name = "<%= ctx[:vars]['network_name'] %>" - auto_create_subnetworks = false +resource "google_compute_global_network_endpoint_group" "neg" { + name = "<%= ctx[:vars]['neg_name'] %>" + default_port = "90" + network_endpoint_type = "INTERNET_IP_PORT" } diff --git a/templates/terraform/examples/ha_vpn_gateway_basic.tf.erb b/templates/terraform/examples/ha_vpn_gateway_basic.tf.erb index 98c135c1621d..0352a7e92621 100644 --- a/templates/terraform/examples/ha_vpn_gateway_basic.tf.erb +++ b/templates/terraform/examples/ha_vpn_gateway_basic.tf.erb @@ -2,7 +2,7 @@ resource "google_compute_ha_vpn_gateway" "ha_gateway1" { provider = google-beta region = "us-central1" name = "<%= ctx[:vars]['ha_vpn_gateway1_name'] %>" - network = google_compute_network.network1.self_link + network = google_compute_network.network1.id } resource "google_compute_network" 
"network1" { diff --git a/templates/terraform/examples/ha_vpn_gateway_gcp_to_gcp.tf.erb b/templates/terraform/examples/ha_vpn_gateway_gcp_to_gcp.tf.erb index abc47c260485..dd7139e1b4a7 100644 --- a/templates/terraform/examples/ha_vpn_gateway_gcp_to_gcp.tf.erb +++ b/templates/terraform/examples/ha_vpn_gateway_gcp_to_gcp.tf.erb @@ -2,14 +2,14 @@ resource "google_compute_ha_vpn_gateway" "ha_gateway1" { provider = google-beta region = "us-central1" name = "<%= ctx[:vars]['ha_vpn_gateway1_name'] %>" - network = google_compute_network.network1.self_link + network = google_compute_network.network1.id } resource "google_compute_ha_vpn_gateway" "ha_gateway2" { provider = google-beta region = "us-central1" name = "<%= ctx[:vars]['ha_vpn_gateway2_name'] %>" - network = google_compute_network.network2.self_link + network = google_compute_network.network2.id } resource "google_compute_network" "network1" { @@ -31,7 +31,7 @@ resource "google_compute_subnetwork" "network1_subnet1" { name = "ha-vpn-subnet-1" ip_cidr_range = "10.0.1.0/24" region = "us-central1" - network = google_compute_network.network1.self_link + network = google_compute_network.network1.id } resource "google_compute_subnetwork" "network1_subnet2" { @@ -39,7 +39,7 @@ resource "google_compute_subnetwork" "network1_subnet2" { name = "ha-vpn-subnet-2" ip_cidr_range = "10.0.2.0/24" region = "us-west1" - network = google_compute_network.network1.self_link + network = google_compute_network.network1.id } resource "google_compute_subnetwork" "network2_subnet1" { @@ -47,7 +47,7 @@ resource "google_compute_subnetwork" "network2_subnet1" { name = "ha-vpn-subnet-3" ip_cidr_range = "192.168.1.0/24" region = "us-central1" - network = google_compute_network.network2.self_link + network = google_compute_network.network2.id } resource "google_compute_subnetwork" "network2_subnet2" { @@ -55,7 +55,7 @@ resource "google_compute_subnetwork" "network2_subnet2" { name = "ha-vpn-subnet-4" ip_cidr_range = "192.168.2.0/24" region = "us-east1" - network = google_compute_network.network2.self_link + network = google_compute_network.network2.id } resource "google_compute_router" "router1" { @@ -80,10 +80,10 @@ resource "google_compute_vpn_tunnel" "tunnel1" { provider = google-beta name = "ha-vpn-tunnel1" region = "us-central1" - vpn_gateway = google_compute_ha_vpn_gateway.ha_gateway1.self_link - peer_gcp_gateway = google_compute_ha_vpn_gateway.ha_gateway2.self_link + vpn_gateway = google_compute_ha_vpn_gateway.ha_gateway1.id + peer_gcp_gateway = google_compute_ha_vpn_gateway.ha_gateway2.id shared_secret = "a secret message" - router = google_compute_router.router1.self_link + router = google_compute_router.router1.id vpn_gateway_interface = 0 } @@ -91,10 +91,10 @@ resource "google_compute_vpn_tunnel" "tunnel2" { provider = google-beta name = "ha-vpn-tunnel2" region = "us-central1" - vpn_gateway = google_compute_ha_vpn_gateway.ha_gateway1.self_link - peer_gcp_gateway = google_compute_ha_vpn_gateway.ha_gateway2.self_link + vpn_gateway = google_compute_ha_vpn_gateway.ha_gateway1.id + peer_gcp_gateway = google_compute_ha_vpn_gateway.ha_gateway2.id shared_secret = "a secret message" - router = google_compute_router.router1.self_link + router = google_compute_router.router1.id vpn_gateway_interface = 1 } @@ -102,10 +102,10 @@ resource "google_compute_vpn_tunnel" "tunnel3" { provider = google-beta name = "ha-vpn-tunnel3" region = "us-central1" - vpn_gateway = google_compute_ha_vpn_gateway.ha_gateway2.self_link - peer_gcp_gateway = 
google_compute_ha_vpn_gateway.ha_gateway1.self_link + vpn_gateway = google_compute_ha_vpn_gateway.ha_gateway2.id + peer_gcp_gateway = google_compute_ha_vpn_gateway.ha_gateway1.id shared_secret = "a secret message" - router = google_compute_router.router2.self_link + router = google_compute_router.router2.id vpn_gateway_interface = 0 } @@ -113,10 +113,10 @@ resource "google_compute_vpn_tunnel" "tunnel4" { provider = google-beta name = "ha-vpn-tunnel4" region = "us-central1" - vpn_gateway = google_compute_ha_vpn_gateway.ha_gateway2.self_link - peer_gcp_gateway = google_compute_ha_vpn_gateway.ha_gateway1.self_link + vpn_gateway = google_compute_ha_vpn_gateway.ha_gateway2.id + peer_gcp_gateway = google_compute_ha_vpn_gateway.ha_gateway1.id shared_secret = "a secret message" - router = google_compute_router.router2.self_link + router = google_compute_router.router2.id vpn_gateway_interface = 1 } diff --git a/templates/terraform/examples/healthcare_dataset_basic.tf.erb b/templates/terraform/examples/healthcare_dataset_basic.tf.erb index a8c8c6b98615..8f9719a6108d 100644 --- a/templates/terraform/examples/healthcare_dataset_basic.tf.erb +++ b/templates/terraform/examples/healthcare_dataset_basic.tf.erb @@ -2,5 +2,4 @@ resource "google_healthcare_dataset" "default" { name = "<%= ctx[:vars]['dataset_name'] %>" location = "us-central1" time_zone = "UTC" - provider = google-beta } diff --git a/templates/terraform/examples/healthcare_dicom_store_basic.tf.erb b/templates/terraform/examples/healthcare_dicom_store_basic.tf.erb index 8828bb2294bb..00db67aba347 100644 --- a/templates/terraform/examples/healthcare_dicom_store_basic.tf.erb +++ b/templates/terraform/examples/healthcare_dicom_store_basic.tf.erb @@ -9,16 +9,13 @@ resource "google_healthcare_dicom_store" "default" { labels = { label1 = "labelvalue1" } - provider = google-beta } resource "google_pubsub_topic" "topic" { name = "<%= ctx[:vars]['pubsub_topic']%>" - provider = google-beta } resource "google_healthcare_dataset" "dataset" { name = "<%= ctx[:vars]['dataset_name'] %>" location = "us-central1" - provider = google-beta } diff --git a/templates/terraform/examples/healthcare_fhir_store_basic.tf.erb b/templates/terraform/examples/healthcare_fhir_store_basic.tf.erb index 1f167f660a6d..1ff31c2fce66 100644 --- a/templates/terraform/examples/healthcare_fhir_store_basic.tf.erb +++ b/templates/terraform/examples/healthcare_fhir_store_basic.tf.erb @@ -15,16 +15,13 @@ resource "google_healthcare_fhir_store" "default" { labels = { label1 = "labelvalue1" } - provider = google-beta } resource "google_pubsub_topic" "topic" { name = "<%= ctx[:vars]['pubsub_topic']%>" - provider = google-beta } resource "google_healthcare_dataset" "dataset" { name = "<%= ctx[:vars]['dataset_name'] %>" location = "us-central1" - provider = google-beta } diff --git a/templates/terraform/examples/healthcare_fhir_store_streaming_config.tf.erb b/templates/terraform/examples/healthcare_fhir_store_streaming_config.tf.erb new file mode 100644 index 000000000000..7418e6334566 --- /dev/null +++ b/templates/terraform/examples/healthcare_fhir_store_streaming_config.tf.erb @@ -0,0 +1,41 @@ +resource "google_healthcare_fhir_store" "default" { + name = "<%= ctx[:vars]['fhir_store_name'] %>" + dataset = google_healthcare_dataset.dataset.id + version = "R4" + + enable_update_create = false + disable_referential_integrity = false + disable_resource_versioning = false + enable_history_import = false + + labels = { + label1 = "labelvalue1" + } + + stream_configs { + resource_types = 
["Observation"] + bigquery_destination { + dataset_uri = "bq://${google_bigquery_dataset.bq_dataset.project}.${google_bigquery_dataset.bq_dataset.dataset_id}" + schema_config { + recursive_structure_depth = 3 + } + } + } +} + +resource "google_pubsub_topic" "topic" { + name = "<%= ctx[:vars]['pubsub_topic']%>" +} + +resource "google_healthcare_dataset" "dataset" { + name = "<%= ctx[:vars]['dataset_name'] %>" + location = "us-central1" +} + +resource "google_bigquery_dataset" "bq_dataset" { + dataset_id = "<%= ctx[:vars]['bq_dataset_name'] %>" + friendly_name = "test" + description = "This is a test description" + location = "US" + delete_contents_on_destroy = true +} \ No newline at end of file diff --git a/templates/terraform/examples/healthcare_hl7_v2_store_basic.tf.erb b/templates/terraform/examples/healthcare_hl7_v2_store_basic.tf.erb index c49aba55018e..908532fe2aa0 100644 --- a/templates/terraform/examples/healthcare_hl7_v2_store_basic.tf.erb +++ b/templates/terraform/examples/healthcare_hl7_v2_store_basic.tf.erb @@ -2,28 +2,20 @@ resource "google_healthcare_hl7_v2_store" "default" { name = "<%= ctx[:vars]['hl7_v2_store_name'] %>" dataset = google_healthcare_dataset.dataset.id - parser_config { - allow_null_header = false - segment_terminator = "Jw==" - } - - notification_config { + notification_configs { pubsub_topic = google_pubsub_topic.topic.id } labels = { label1 = "labelvalue1" } - provider = google-beta } resource "google_pubsub_topic" "topic" { name = "<%= ctx[:vars]['pubsub_topic']%>" - provider = google-beta } resource "google_healthcare_dataset" "dataset" { name = "<%= ctx[:vars]['dataset_name'] %>" location = "us-central1" - provider = google-beta } diff --git a/templates/terraform/examples/healthcare_hl7_v2_store_parser_config.tf.erb b/templates/terraform/examples/healthcare_hl7_v2_store_parser_config.tf.erb new file mode 100644 index 000000000000..04f73fb405ae --- /dev/null +++ b/templates/terraform/examples/healthcare_hl7_v2_store_parser_config.tf.erb @@ -0,0 +1,96 @@ +resource "google_healthcare_hl7_v2_store" "default" { + provider = google-beta + name = "<%= ctx[:vars]['hl7_v2_store_name'] %>" + dataset = google_healthcare_dataset.dataset.id + + parser_config { + allow_null_header = false + segment_terminator = "Jw==" + schema = <" { name = "<%= ctx[:vars]['interconnect_attachment_name'] %>" interconnect = "my-interconnect-id" - router = google_compute_router.foobar.self_link + router = google_compute_router.foobar.id } resource "google_compute_router" "foobar" { diff --git a/templates/terraform/examples/kms_crypto_key_asymmetric_sign.tf.erb b/templates/terraform/examples/kms_crypto_key_asymmetric_sign.tf.erb index 2d3152d67351..9020af7f6854 100644 --- a/templates/terraform/examples/kms_crypto_key_asymmetric_sign.tf.erb +++ b/templates/terraform/examples/kms_crypto_key_asymmetric_sign.tf.erb @@ -5,7 +5,7 @@ resource "google_kms_key_ring" "keyring" { resource "google_kms_crypto_key" "<%= ctx[:primary_resource_id] %>" { name = "crypto-key-example" - key_ring = google_kms_key_ring.keyring.self_link + key_ring = google_kms_key_ring.keyring.id purpose = "ASYMMETRIC_SIGN" version_template { diff --git a/templates/terraform/examples/kms_crypto_key_basic.tf.erb b/templates/terraform/examples/kms_crypto_key_basic.tf.erb index 9e95e976cce0..6730375aee43 100644 --- a/templates/terraform/examples/kms_crypto_key_basic.tf.erb +++ b/templates/terraform/examples/kms_crypto_key_basic.tf.erb @@ -5,7 +5,7 @@ resource "google_kms_key_ring" "keyring" { resource "google_kms_crypto_key" 
"<%= ctx[:primary_resource_id] %>" { name = "crypto-key-example" - key_ring = google_kms_key_ring.keyring.self_link + key_ring = google_kms_key_ring.keyring.id rotation_period = "100000s" lifecycle { diff --git a/templates/terraform/examples/kms_key_ring_import_job.tf.erb b/templates/terraform/examples/kms_key_ring_import_job.tf.erb new file mode 100644 index 000000000000..3c316881495f --- /dev/null +++ b/templates/terraform/examples/kms_key_ring_import_job.tf.erb @@ -0,0 +1,12 @@ +resource "google_kms_key_ring" "keyring" { + name = "<%= ctx[:vars]['keyring'] %>" + location = "global" +} + +resource "google_kms_key_ring_import_job" "<%= ctx[:primary_resource_id] %>" { + key_ring = google_kms_key_ring.keyring.id + import_job_id = "my-import-job" + + import_method = "RSA_OAEP_3072_SHA1_AES_256" + protection_level = "SOFTWARE" +} diff --git a/templates/terraform/examples/machine_image_basic.tf.erb b/templates/terraform/examples/machine_image_basic.tf.erb new file mode 100644 index 000000000000..3cbc359540be --- /dev/null +++ b/templates/terraform/examples/machine_image_basic.tf.erb @@ -0,0 +1,21 @@ +resource "google_compute_instance" "vm" { + provider = google-beta + name = "<%= ctx[:vars]['vm_name'] %>" + machine_type = "n1-standard-1" + + boot_disk { + initialize_params { + image = "debian-cloud/debian-9" + } + } + + network_interface { + network = "default" + } +} + +resource "google_compute_machine_image" "<%= ctx[:primary_resource_id] %>" { + provider = google-beta + name = "<%= ctx[:vars]['image_name'] %>" + source_instance = google_compute_instance.vm.self_link +} diff --git a/templates/terraform/examples/managed_ssl_certificate_basic.tf.erb b/templates/terraform/examples/managed_ssl_certificate_basic.tf.erb index fce3f3fe2403..af30857c9213 100644 --- a/templates/terraform/examples/managed_ssl_certificate_basic.tf.erb +++ b/templates/terraform/examples/managed_ssl_certificate_basic.tf.erb @@ -12,8 +12,8 @@ resource "google_compute_target_https_proxy" "default" { provider = google-beta name = "<%= ctx[:vars]['proxy_name'] %>" - url_map = google_compute_url_map.default.self_link - ssl_certificates = [google_compute_managed_ssl_certificate.default.self_link] + url_map = google_compute_url_map.default.id + ssl_certificates = [google_compute_managed_ssl_certificate.default.id] } resource "google_compute_url_map" "default" { @@ -22,7 +22,7 @@ resource "google_compute_url_map" "default" { name = "<%= ctx[:vars]['url_map_name'] %>" description = "a description" - default_service = google_compute_backend_service.default.self_link + default_service = google_compute_backend_service.default.id host_rule { hosts = ["sslcert.tf-test.club"] @@ -31,11 +31,11 @@ resource "google_compute_url_map" "default" { path_matcher { name = "allpaths" - default_service = google_compute_backend_service.default.self_link + default_service = google_compute_backend_service.default.id path_rule { paths = ["/*"] - service = google_compute_backend_service.default.self_link + service = google_compute_backend_service.default.id } } } @@ -48,7 +48,7 @@ resource "google_compute_backend_service" "default" { protocol = "HTTP" timeout_sec = 10 - health_checks = [google_compute_http_health_check.default.self_link] + health_checks = [google_compute_http_health_check.default.id] } resource "google_compute_http_health_check" "default" { @@ -71,7 +71,7 @@ resource "google_compute_global_forwarding_rule" "default" { provider = google-beta name = "<%= ctx[:vars]['forwarding_rule_name'] %>" - target = 
google_compute_target_https_proxy.default.self_link + target = google_compute_target_https_proxy.default.id port_range = 443 } diff --git a/templates/terraform/examples/managed_ssl_certificate_recreation.tf.erb b/templates/terraform/examples/managed_ssl_certificate_recreation.tf.erb new file mode 100644 index 000000000000..b8156337374b --- /dev/null +++ b/templates/terraform/examples/managed_ssl_certificate_recreation.tf.erb @@ -0,0 +1,71 @@ +// This example allows the list of managed domains to be modified and will +// recreate the ssl certificate and update the target https proxy correctly + +resource "google_compute_target_https_proxy" "default" { + provider = google-beta + name = "test-proxy" + url_map = google_compute_url_map.default.id + ssl_certificates = [google_compute_managed_ssl_certificate.cert.id] +} + +locals { + managed_domains = list("test.example.com") +} + +resource "random_id" "certificate" { + byte_length = 4 + prefix = "issue6147-cert-" + + keepers = { + domains = join(",", local.managed_domains) + } +} + +resource "google_compute_managed_ssl_certificate" "cert" { + provider = google-beta + name = random_id.certificate.hex + + lifecycle { + create_before_destroy = true + } + + managed { + domains = local.managed_domains + } +} + +resource "google_compute_url_map" "default" { + provider = google-beta + name = "url-map" + description = "a description" + default_service = google_compute_backend_service.default.id + host_rule { + hosts = ["mysite.com"] + path_matcher = "allpaths" + } + path_matcher { + name = "allpaths" + default_service = google_compute_backend_service.default.id + path_rule { + paths = ["/*"] + service = google_compute_backend_service.default.id + } + } +} + +resource "google_compute_backend_service" "default" { + provider = google-beta + name = "backend-service" + port_name = "http" + protocol = "HTTP" + timeout_sec = 10 + health_checks = [google_compute_http_health_check.default.id] +} + +resource "google_compute_http_health_check" "default" { + provider = google-beta + name = "http-health-check" + request_path = "/" + check_interval_sec = 1 + timeout_sec = 1 +} diff --git a/templates/terraform/examples/memcache_instance_basic.tf.erb b/templates/terraform/examples/memcache_instance_basic.tf.erb new file mode 100644 index 000000000000..94611af6af40 --- /dev/null +++ b/templates/terraform/examples/memcache_instance_basic.tf.erb @@ -0,0 +1,33 @@ +resource "google_compute_network" "network" { + provider = google-beta + name = "tf-test%{random_suffix}" +} + +resource "google_compute_global_address" "service_range" { + provider = google-beta + name = "tf-test%{random_suffix}" + purpose = "VPC_PEERING" + address_type = "INTERNAL" + prefix_length = 16 + network = google_compute_network.network.id +} + +resource "google_service_networking_connection" "private_service_connection" { + provider = google-beta + network = google_compute_network.network.id + service = "servicenetworking.googleapis.com" + reserved_peering_ranges = [google_compute_global_address.service_range.name] +} + +resource "google_memcache_instance" "<%= ctx[:primary_resource_id] %>" { + provider = google-beta + name = "<%= ctx[:vars]["instance_name"] %>" + region = "us-central1" + authorized_network = google_service_networking_connection.private_service_connection.network + + node_config { + cpu_count = 1 + memory_size_mb = 1024 + } + node_count = 1 +} diff --git a/templates/terraform/examples/monitoring_metric_descriptor_alert.tf.erb 
b/templates/terraform/examples/monitoring_metric_descriptor_alert.tf.erb new file mode 100644 index 000000000000..8bb1d5355007 --- /dev/null +++ b/templates/terraform/examples/monitoring_metric_descriptor_alert.tf.erb @@ -0,0 +1,21 @@ +resource "google_monitoring_metric_descriptor" "<%= ctx[:primary_resource_id] %>" { + description = "Daily sales records from all branch stores." + display_name = "<%= ctx[:vars]["display_name"] %>" + type = "custom.googleapis.com/stores/<%= ctx[:vars]["type"] %>" + metric_kind = "GAUGE" + value_type = "DOUBLE" + unit = "{USD}" +} + +resource "google_monitoring_alert_policy" "alert_policy" { + display_name = "<%= ctx[:vars]["display_name"] %>" + combiner = "OR" + conditions { + display_name = "test condition" + condition_threshold { + filter = "metric.type=\"${google_monitoring_metric_descriptor.<%= ctx[:primary_resource_id] %>.type}\" AND resource.type=\"gce_instance\"" + duration = "60s" + comparison = "COMPARISON_GT" + } + } +} diff --git a/templates/terraform/examples/monitoring_metric_descriptor_basic.tf.erb b/templates/terraform/examples/monitoring_metric_descriptor_basic.tf.erb new file mode 100644 index 000000000000..206c1df9535b --- /dev/null +++ b/templates/terraform/examples/monitoring_metric_descriptor_basic.tf.erb @@ -0,0 +1,18 @@ +resource "google_monitoring_metric_descriptor" "<%= ctx[:primary_resource_id] %>" { + description = "Daily sales records from all branch stores." + display_name = "<%= ctx[:vars]["display_name"] %>" + type = "custom.googleapis.com/stores/<%= ctx[:vars]["type"] %>" + metric_kind = "GAUGE" + value_type = "DOUBLE" + unit = "{USD}" + labels { + key = "store_id" + value_type = "STRING" + description = "The ID of the store." + } + launch_stage = "BETA" + metadata { + sample_period = "60s" + ingest_delay = "30s" + } +} diff --git a/templates/terraform/examples/monitoring_slo_appengine.tf.erb b/templates/terraform/examples/monitoring_slo_appengine.tf.erb new file mode 100644 index 000000000000..6025a44379e0 --- /dev/null +++ b/templates/terraform/examples/monitoring_slo_appengine.tf.erb @@ -0,0 +1,19 @@ +data "google_monitoring_app_engine_service" "default" { + module_id = "default" +} + +resource "google_monitoring_slo" "<%= ctx[:primary_resource_id] -%>" { + service = data.google_monitoring_app_engine_service.default.service_id + + slo_id = "<%= ctx[:vars]['slo_id'] -%>" + display_name = "Terraform Test SLO for App Engine" + + goal = 0.9 + calendar_period = "DAY" + + basic_sli { + latency { + threshold = "1s" + } + } +} \ No newline at end of file diff --git a/templates/terraform/examples/monitoring_slo_request_based.tf.erb b/templates/terraform/examples/monitoring_slo_request_based.tf.erb new file mode 100644 index 000000000000..7ef0d61856e1 --- /dev/null +++ b/templates/terraform/examples/monitoring_slo_request_based.tf.erb @@ -0,0 +1,27 @@ +resource "google_monitoring_custom_service" "customsrv" { + service_id = "<%= ctx[:vars]['service_id'] %>" + display_name = "My Custom Service" +} + +resource "google_monitoring_slo" "<%= ctx[:primary_resource_id] %>" { + service = google_monitoring_custom_service.customsrv.service_id + slo_id = "<%= ctx[:vars]['slo_id'] %>" + display_name = "Terraform Test SLO with request based SLI (good total ratio)" + + goal = 0.9 + rolling_period_days = 30 + + request_based_sli { + distribution_cut { + distribution_filter = join(" AND ", [ + "metric.type=\"serviceruntime.googleapis.com/api/request_latencies\"", + "resource.type=\"consumed_api\"", + "resource.label.\"project_id\"=\"<%= 
ctx[:test_env_vars]['project'] -%>\"", + ]) + + range { + max = 10 + } + } + } +} diff --git a/templates/terraform/examples/monitoring_slo_windows_based_good_bad_metric_filter.tf.erb b/templates/terraform/examples/monitoring_slo_windows_based_good_bad_metric_filter.tf.erb new file mode 100644 index 000000000000..f43f1adf18ab --- /dev/null +++ b/templates/terraform/examples/monitoring_slo_windows_based_good_bad_metric_filter.tf.erb @@ -0,0 +1,20 @@ +resource "google_monitoring_custom_service" "customsrv" { + service_id = "<%= ctx[:vars]['service_id'] %>" + display_name = "My Custom Service" +} + +resource "google_monitoring_slo" "<%= ctx[:primary_resource_id] %>" { + service = google_monitoring_custom_service.customsrv.service_id + display_name = "Terraform Test SLO with window based SLI" + + goal = 0.95 + calendar_period = "FORTNIGHT" + + windows_based_sli { + window_period = "400s" + good_bad_metric_filter = join(" AND ", [ + "metric.type=\"monitoring.googleapis.com/uptime_check/check_passed\"", + "resource.type=\"uptime_url\"", + ]) + } +} diff --git a/templates/terraform/examples/monitoring_slo_windows_based_metric_mean.tf.erb b/templates/terraform/examples/monitoring_slo_windows_based_metric_mean.tf.erb new file mode 100644 index 000000000000..f7a38919b97d --- /dev/null +++ b/templates/terraform/examples/monitoring_slo_windows_based_metric_mean.tf.erb @@ -0,0 +1,26 @@ +resource "google_monitoring_custom_service" "customsrv" { + service_id = "<%= ctx[:vars]['service_id'] %>" + display_name = "My Custom Service" +} + +resource "google_monitoring_slo" "<%= ctx[:primary_resource_id] %>" { + service = google_monitoring_custom_service.customsrv.service_id + display_name = "Terraform Test SLO with window based SLI" + + goal = 0.9 + rolling_period_days = 20 + + windows_based_sli { + window_period = "600s" + metric_mean_in_range { + time_series = join(" AND ", [ + "metric.type=\"agent.googleapis.com/cassandra/client_request/latency/95p\"", + "resource.type=\"gce_instance\"", + ]) + + range { + max = 5 + } + } + } +} diff --git a/templates/terraform/examples/monitoring_slo_windows_based_metric_sum.tf.erb b/templates/terraform/examples/monitoring_slo_windows_based_metric_sum.tf.erb new file mode 100644 index 000000000000..aee54b150f02 --- /dev/null +++ b/templates/terraform/examples/monitoring_slo_windows_based_metric_sum.tf.erb @@ -0,0 +1,26 @@ +resource "google_monitoring_custom_service" "customsrv" { + service_id = "<%= ctx[:vars]['service_id'] %>" + display_name = "My Custom Service" +} + +resource "google_monitoring_slo" "<%= ctx[:primary_resource_id] %>" { + service = google_monitoring_custom_service.customsrv.service_id + display_name = "Terraform Test SLO with window based SLI" + + goal = 0.9 + rolling_period_days = 20 + + windows_based_sli { + window_period = "400s" + metric_sum_in_range { + time_series = join(" AND ", [ + "metric.type=\"monitoring.googleapis.com/uptime_check/request_latency\"", + "resource.type=\"uptime_url\"", + ]) + + range { + max = 5000 + } + } + } +} diff --git a/templates/terraform/examples/monitoring_slo_windows_based_ratio_threshold.tf.erb b/templates/terraform/examples/monitoring_slo_windows_based_ratio_threshold.tf.erb new file mode 100644 index 000000000000..0bacbe7e9c18 --- /dev/null +++ b/templates/terraform/examples/monitoring_slo_windows_based_ratio_threshold.tf.erb @@ -0,0 +1,33 @@ +resource "google_monitoring_custom_service" "customsrv" { + service_id = "<%= ctx[:vars]['service_id'] %>" + display_name = "My Custom Service" +} + +resource 
"google_monitoring_slo" "<%= ctx[:primary_resource_id] %>" { + service = google_monitoring_custom_service.customsrv.service_id + display_name = "Terraform Test SLO with window based SLI" + + goal = 0.9 + rolling_period_days = 20 + + windows_based_sli { + window_period = "100s" + + good_total_ratio_threshold { + threshold = 0.1 + performance { + distribution_cut { + distribution_filter = join(" AND ", [ + "metric.type=\"serviceruntime.googleapis.com/api/request_latencies\"", + "resource.type=\"consumed_api\"", + ]) + + range { + min = 1 + max = 9 + } + } + } + } + } +} diff --git a/templates/terraform/examples/network_endpoint.tf.erb b/templates/terraform/examples/network_endpoint.tf.erb index 52ad5025ebb5..9d52e2a6e9b0 100644 --- a/templates/terraform/examples/network_endpoint.tf.erb +++ b/templates/terraform/examples/network_endpoint.tf.erb @@ -22,7 +22,7 @@ resource "google_compute_instance" "endpoint-instance" { } network_interface { - subnetwork = google_compute_subnetwork.default.self_link + subnetwork = google_compute_subnetwork.default.id access_config { } } @@ -30,8 +30,8 @@ resource "google_compute_instance" "endpoint-instance" { resource "google_compute_network_endpoint_group" "group" { name = "<%= ctx[:vars]['neg_name'] %>" - network = google_compute_network.default.self_link - subnetwork = google_compute_subnetwork.default.self_link + network = google_compute_network.default.id + subnetwork = google_compute_subnetwork.default.id default_port = "90" zone = "us-central1-a" } @@ -45,5 +45,5 @@ resource "google_compute_subnetwork" "default" { name = "<%= ctx[:vars]['subnetwork_name'] %>" ip_cidr_range = "10.0.0.1/16" region = "us-central1" - network = google_compute_network.default.self_link + network = google_compute_network.default.id } diff --git a/templates/terraform/examples/network_endpoint_group.tf.erb b/templates/terraform/examples/network_endpoint_group.tf.erb index 693d60782028..2001acc3c070 100644 --- a/templates/terraform/examples/network_endpoint_group.tf.erb +++ b/templates/terraform/examples/network_endpoint_group.tf.erb @@ -1,7 +1,7 @@ resource "google_compute_network_endpoint_group" "<%= ctx[:primary_resource_id] %>" { name = "<%= ctx[:vars]['neg_name'] %>" - network = google_compute_network.default.self_link - subnetwork = google_compute_subnetwork.default.self_link + network = google_compute_network.default.id + subnetwork = google_compute_subnetwork.default.id default_port = "90" zone = "us-central1-a" } @@ -15,5 +15,5 @@ resource "google_compute_subnetwork" "default" { name = "<%= ctx[:vars]['subnetwork_name'] %>" ip_cidr_range = "10.0.0.0/16" region = "us-central1" - network = google_compute_network.default.self_link + network = google_compute_network.default.id } diff --git a/templates/terraform/examples/network_management_connectivity_test_addresses.tf.erb b/templates/terraform/examples/network_management_connectivity_test_addresses.tf.erb new file mode 100644 index 000000000000..f67260dd4f94 --- /dev/null +++ b/templates/terraform/examples/network_management_connectivity_test_addresses.tf.erb @@ -0,0 +1,44 @@ +resource "google_network_management_connectivity_test" "<%= ctx[:primary_resource_id] %>" { + name = "<%= ctx[:vars]['primary_resource_name'] %>" + source { + ip_address = google_compute_address.source-addr.address + project_id = google_compute_address.source-addr.project + network = google_compute_network.vpc.id + network_type = "GCP_NETWORK" + } + + destination { + ip_address = google_compute_address.dest-addr.address + project_id = 
google_compute_address.dest-addr.project + network = google_compute_network.vpc.id + } + + protocol = "UDP" +} + +resource "google_compute_network" "vpc" { + name = "<%= ctx[:vars]['network'] %>" +} + +resource "google_compute_subnetwork" "subnet" { + name = "<%= ctx[:vars]['network'] %>-subnet" + ip_cidr_range = "10.0.0.0/16" + region = "us-central1" + network = google_compute_network.vpc.id +} + +resource "google_compute_address" "source-addr" { + name = "<%= ctx[:vars]['source_addr'] %>" + subnetwork = google_compute_subnetwork.subnet.id + address_type = "INTERNAL" + address = "10.0.42.42" + region = "us-central1" +} + +resource "google_compute_address" "dest-addr" { + name = "<%= ctx[:vars]['dest_addr'] %>" + subnetwork = google_compute_subnetwork.subnet.id + address_type = "INTERNAL" + address = "10.0.43.43" + region = "us-central1" +} diff --git a/templates/terraform/examples/network_management_connectivity_test_instances.tf.erb b/templates/terraform/examples/network_management_connectivity_test_instances.tf.erb new file mode 100644 index 000000000000..30572705d2c7 --- /dev/null +++ b/templates/terraform/examples/network_management_connectivity_test_instances.tf.erb @@ -0,0 +1,55 @@ +resource "google_network_management_connectivity_test" "<%= ctx[:primary_resource_id] %>" { + name = "<%= ctx[:vars]['primary_resource_name'] %>" + source { + instance = google_compute_instance.source.id + } + + destination { + instance = google_compute_instance.destination.id + } + + protocol = "TCP" +} + +resource "google_compute_instance" "source" { + name = "<%= ctx[:vars]['source_instance'] %>" + machine_type = "n1-standard-1" + + boot_disk { + initialize_params { + image = data.google_compute_image.debian_9.id + } + } + + network_interface { + network = google_compute_network.vpc.id + access_config { + } + } +} + +resource "google_compute_instance" "destination" { + name = "<%= ctx[:vars]['dest_instance'] %>" + machine_type = "n1-standard-1" + + boot_disk { + initialize_params { + image = data.google_compute_image.debian_9.id + } + } + + network_interface { + network = google_compute_network.vpc.id + access_config { + } + } +} + +resource "google_compute_network" "vpc" { + name = "<%= ctx[:vars]['network_name'] %>" +} + +data "google_compute_image" "debian_9" { + family = "debian-9" + project = "debian-cloud" +} diff --git a/templates/terraform/examples/network_peering_routes_config_basic.tf.erb b/templates/terraform/examples/network_peering_routes_config_basic.tf.erb index eb33b6c5fbe0..92fecd467a8c 100644 --- a/templates/terraform/examples/network_peering_routes_config_basic.tf.erb +++ b/templates/terraform/examples/network_peering_routes_config_basic.tf.erb @@ -8,8 +8,8 @@ resource "google_compute_network_peering_routes_config" "<%= ctx[:primary_resour resource "google_compute_network_peering" "peering_primary" { name = "<%= ctx[:vars]['peering_primary_name'] %>" - network = google_compute_network.network_primary.self_link - peer_network = google_compute_network.network_secondary.self_link + network = google_compute_network.network_primary.id + peer_network = google_compute_network.network_secondary.id import_custom_routes = true export_custom_routes = true @@ -17,8 +17,8 @@ resource "google_compute_network_peering" "peering_primary" { resource "google_compute_network_peering" "peering_secondary" { name = "<%= ctx[:vars]['peering_secondary_name'] %>" - network = google_compute_network.network_secondary.self_link - peer_network = google_compute_network.network_primary.self_link + network = 
google_compute_network.network_secondary.id + peer_network = google_compute_network.network_primary.id } resource "google_compute_network" "network_primary" { diff --git a/templates/terraform/examples/node_group_autoscaling_policy.tf.erb b/templates/terraform/examples/node_group_autoscaling_policy.tf.erb index da1be0e9a3a8..abc9694b7e16 100644 --- a/templates/terraform/examples/node_group_autoscaling_policy.tf.erb +++ b/templates/terraform/examples/node_group_autoscaling_policy.tf.erb @@ -1,13 +1,8 @@ -data "google_compute_node_types" "central1a" { - provider = google-beta - zone = "us-central1-a" -} - resource "google_compute_node_template" "soletenant-tmpl" { provider = google-beta name = "<%= ctx[:vars]['template_name'] %>" region = "us-central1" - node_type = data.google_compute_node_types.central1a.names[0] + node_type = "n1-node-96-624" } resource "google_compute_node_group" "<%= ctx[:primary_resource_id] %>" { @@ -17,7 +12,7 @@ resource "google_compute_node_group" "<%= ctx[:primary_resource_id] %>" { description = "example google_compute_node_group for Terraform Google Provider" size = 1 - node_template = google_compute_node_template.soletenant-tmpl.self_link + node_template = google_compute_node_template.soletenant-tmpl.id autoscaling_policy { mode = "ON" min_nodes = 1 diff --git a/templates/terraform/examples/node_group_basic.tf.erb b/templates/terraform/examples/node_group_basic.tf.erb index 821d7e16cb40..9de28cdaf0aa 100644 --- a/templates/terraform/examples/node_group_basic.tf.erb +++ b/templates/terraform/examples/node_group_basic.tf.erb @@ -1,11 +1,7 @@ -data "google_compute_node_types" "central1a" { - zone = "us-central1-a" -} - resource "google_compute_node_template" "soletenant-tmpl" { name = "<%= ctx[:vars]['template_name'] %>" region = "us-central1" - node_type = data.google_compute_node_types.central1a.names[0] + node_type = "n1-node-96-624" } resource "google_compute_node_group" "<%= ctx[:primary_resource_id] %>" { @@ -14,5 +10,5 @@ resource "google_compute_node_group" "<%= ctx[:primary_resource_id] %>" { description = "example google_compute_node_group for Terraform Google Provider" size = 1 - node_template = google_compute_node_template.soletenant-tmpl.self_link + node_template = google_compute_node_template.soletenant-tmpl.id } diff --git a/templates/terraform/examples/node_template_basic.tf.erb b/templates/terraform/examples/node_template_basic.tf.erb index 7dc0ce41c03d..4b932196b9b4 100644 --- a/templates/terraform/examples/node_template_basic.tf.erb +++ b/templates/terraform/examples/node_template_basic.tf.erb @@ -1,9 +1,5 @@ -data "google_compute_node_types" "central1a" { - zone = "us-central1-a" -} - resource "google_compute_node_template" "<%= ctx[:primary_resource_id] %>" { name = "<%= ctx[:vars]['template_name'] %>" region = "us-central1" - node_type = data.google_compute_node_types.central1a.names[0] + node_type = "n1-node-96-624" } diff --git a/templates/terraform/examples/node_template_server_binding.tf.erb b/templates/terraform/examples/node_template_server_binding.tf.erb index f300437aa4dc..391355935e43 100644 --- a/templates/terraform/examples/node_template_server_binding.tf.erb +++ b/templates/terraform/examples/node_template_server_binding.tf.erb @@ -13,7 +13,7 @@ resource "google_compute_node_template" "<%= ctx[:primary_resource_id] %>" { name = "<%= ctx[:vars]['template_name'] %>" region = "us-central1" - node_type = data.google_compute_node_types.central1a.names[0] + node_type = "n1-node-96-624" node_affinity_labels = { foo = "baz" diff --git 
a/templates/terraform/examples/notebook_environment_basic.tf.erb b/templates/terraform/examples/notebook_environment_basic.tf.erb new file mode 100644 index 000000000000..b89f2e6a8ac7 --- /dev/null +++ b/templates/terraform/examples/notebook_environment_basic.tf.erb @@ -0,0 +1,8 @@ +resource "google_notebooks_environment" "<%= ctx[:primary_resource_id] %>" { + provider = google-beta + name = "<%= ctx[:vars]["environment_name"] %>" + location = "us-west1-a" + container_image { + repository = "gcr.io/deeplearning-platform-release/base-cpu" + } +} diff --git a/templates/terraform/examples/notebook_instance_basic.tf.erb b/templates/terraform/examples/notebook_instance_basic.tf.erb new file mode 100644 index 000000000000..5a418bfef193 --- /dev/null +++ b/templates/terraform/examples/notebook_instance_basic.tf.erb @@ -0,0 +1,10 @@ +resource "google_notebooks_instance" "<%= ctx[:primary_resource_id] %>" { + provider = google-beta + name = "<%= ctx[:vars]["instance_name"] %>" + location = "us-west1-a" + machine_type = "n1-standard-1" + vm_image { + project = "deeplearning-platform-release" + image_family = "tf-latest-cpu" + } +} diff --git a/templates/terraform/examples/notebook_instance_basic_container.tf.erb b/templates/terraform/examples/notebook_instance_basic_container.tf.erb new file mode 100644 index 000000000000..cff8598c9907 --- /dev/null +++ b/templates/terraform/examples/notebook_instance_basic_container.tf.erb @@ -0,0 +1,14 @@ +resource "google_notebooks_instance" "<%= ctx[:primary_resource_id] %>" { + provider = google-beta + name = "<%= ctx[:vars]["instance_name"] %>" + location = "us-west1-a" + machine_type = "n1-standard-1" + metadata = { + proxy-mode = "service_account" + terraform = "true" + } + container_image { + repository = "gcr.io/deeplearning-platform-release/base-cpu" + tag = "latest" + } +} diff --git a/templates/terraform/examples/notebook_instance_basic_gpu.tf.erb b/templates/terraform/examples/notebook_instance_basic_gpu.tf.erb new file mode 100644 index 000000000000..491280d1f3db --- /dev/null +++ b/templates/terraform/examples/notebook_instance_basic_gpu.tf.erb @@ -0,0 +1,16 @@ +resource "google_notebooks_instance" "<%= ctx[:primary_resource_id] %>" { + provider = google-beta + name = "<%= ctx[:vars]["instance_name"] %>" + location = "us-west1-a" + machine_type = "n1-standard-1" + + install_gpu_driver = true + accelerator_config { + type = "NVIDIA_TESLA_T4" + core_count = 1 + } + vm_image { + project = "deeplearning-platform-release" + image_family = "tf-latest-gpu" + } +} diff --git a/templates/terraform/examples/notebook_instance_full.tf.erb b/templates/terraform/examples/notebook_instance_full.tf.erb new file mode 100644 index 000000000000..df2404b8b2a1 --- /dev/null +++ b/templates/terraform/examples/notebook_instance_full.tf.erb @@ -0,0 +1,43 @@ +resource "google_notebooks_instance" "<%= ctx[:primary_resource_id] %>" { + provider = google-beta + name = "<%= ctx[:vars]["instance_name"] %>" + location = "us-central1-a" + machine_type = "n1-standard-1" + + vm_image { + project = "deeplearning-platform-release" + image_family = "tf-latest-cpu" + } + + instance_owners = "admin@hashicorptest.com" + service_account = "<%= ctx[:test_env_vars]["service_account"] %>" + + install_gpu_driver = true + boot_disk_type = "PD_SSD" + boot_disk_size_gb = 110 + + no_public_ip = true + no_proxy_access = true + + network = data.google_compute_network.my_network.id + subnet = data.google_compute_subnetwork.my_subnetwork.id + + labels = { + k = "val" + } + + metadata = { + terraform = 
"true" + } +} + +data "google_compute_network" "my_network" { + provider = google-beta + name = "default" +} + +data "google_compute_subnetwork" "my_subnetwork" { + provider = google-beta + name = "default" + region = "us-central1" +} \ No newline at end of file diff --git a/templates/terraform/examples/os_config_guest_policies_basic.tf.erb b/templates/terraform/examples/os_config_guest_policies_basic.tf.erb new file mode 100644 index 000000000000..29eadc0991ee --- /dev/null +++ b/templates/terraform/examples/os_config_guest_policies_basic.tf.erb @@ -0,0 +1,42 @@ +data "google_compute_image" "my_image" { + provider = google-beta + family = "debian-9" + project = "debian-cloud" +} + +resource "google_compute_instance" "foobar" { + provider = google-beta + name = "<%= ctx[:vars]['instance_name'] %>" + machine_type = "n1-standard-1" + zone = "us-central1-a" + can_ip_forward = false + tags = ["foo", "bar"] + + boot_disk { + initialize_params { + image = data.google_compute_image.my_image.self_link + } + } + + network_interface { + network = "default" + } + + metadata = { + foo = "bar" + } +} + +resource "google_os_config_guest_policies" "<%= ctx[:primary_resource_id] %>" { + provider = google-beta + guest_policy_id = "<%= ctx[:vars]['guest_policy_id'] %>" + + assignment { + instances = [google_compute_instance.foobar.id] + } + + packages { + name = "my-package" + desired_state = "UPDATED" + } +} \ No newline at end of file diff --git a/templates/terraform/examples/os_config_guest_policies_packages.tf.erb b/templates/terraform/examples/os_config_guest_policies_packages.tf.erb new file mode 100644 index 000000000000..a8fe5e23332b --- /dev/null +++ b/templates/terraform/examples/os_config_guest_policies_packages.tf.erb @@ -0,0 +1,54 @@ +resource "google_os_config_guest_policies" "<%= ctx[:primary_resource_id] %>" { + provider = google-beta + guest_policy_id = "<%= ctx[:vars]['guest_policy_id'] %>" + + assignment { + group_labels { + labels = { + color = "red", + env = "test" + } + } + + group_labels { + labels = { + color = "blue", + env = "test" + } + } + } + + packages { + name = "my-package" + desired_state = "INSTALLED" + } + + packages { + name = "bad-package-1" + desired_state = "REMOVED" + } + + packages { + name = "bad-package-2" + desired_state = "REMOVED" + manager = "APT" + } + + package_repositories { + apt { + uri = "https://packages.cloud.google.com/apt" + archive_type = "DEB" + distribution = "cloud-sdk-stretch" + components = ["main"] + } + } + + package_repositories { + yum { + id = "google-cloud-sdk" + display_name = "Google Cloud SDK" + base_url = "https://packages.cloud.google.com/yum/repos/cloud-sdk-el7-x86_64" + gpg_keys = ["https://packages.cloud.google.com/yum/doc/yum-key.gpg", "https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg"] + } + } +} \ No newline at end of file diff --git a/templates/terraform/examples/os_config_guest_policies_recipes.tf.erb b/templates/terraform/examples/os_config_guest_policies_recipes.tf.erb new file mode 100644 index 000000000000..718ca22a1154 --- /dev/null +++ b/templates/terraform/examples/os_config_guest_policies_recipes.tf.erb @@ -0,0 +1,29 @@ +resource "google_os_config_guest_policies" "<%= ctx[:primary_resource_id] %>" { + provider = google-beta + guest_policy_id = "<%= ctx[:vars]['guest_policy_id'] %>" + + assignment { + zones = ["us-east1-b", "us-east1-d"] + } + + recipes { + name = "<%= ctx[:vars]['guest_policy_id'] %>-recipe" + desired_state = "INSTALLED" + + artifacts { + id = "<%= ctx[:vars]['guest_policy_id'] 
%>-artifact-id" + + gcs { + bucket = "my-bucket" + object = "executable.msi" + generation = 1546030865175603 + } + } + + install_steps { + msi_installation { + artifact_id = "<%= ctx[:vars]['guest_policy_id'] %>-artifact-id" + } + } + } +} \ No newline at end of file diff --git a/templates/terraform/examples/os_config_patch_deployment_basic.tf.erb b/templates/terraform/examples/os_config_patch_deployment_basic.tf.erb new file mode 100644 index 000000000000..0a173a68e43d --- /dev/null +++ b/templates/terraform/examples/os_config_patch_deployment_basic.tf.erb @@ -0,0 +1,21 @@ +resource "google_os_config_patch_deployment" "<%= ctx[:primary_resource_id] %>" { + patch_deployment_id = "<%= ctx[:vars]['instance_name'] %>" + + instance_filter { + all = true + } + + recurring_schedule { + time_zone { + id = "America/New_York" + } + + time_of_day { + hours = 1 + } + + weekly { + day_of_week = "MONDAY" + } + } +} \ No newline at end of file diff --git a/templates/terraform/examples/os_config_patch_deployment_full.tf.erb b/templates/terraform/examples/os_config_patch_deployment_full.tf.erb new file mode 100644 index 000000000000..76178e499c28 --- /dev/null +++ b/templates/terraform/examples/os_config_patch_deployment_full.tf.erb @@ -0,0 +1,97 @@ +resource "google_os_config_patch_deployment" "<%= ctx[:primary_resource_id] %>" { + patch_deployment_id = "<%= ctx[:vars]['instance_name'] %>" + + instance_filter { + group_labels { + labels = { + env = "dev", + app = "web" + } + } + + instance_name_prefixes = ["test-"] + + zones = ["us-central1-a", "us-central-1c"] + } + + patch_config { + reboot_config = "ALWAYS" + + apt { + type = "DIST" + excludes = ["python"] + } + + yum { + security = true + minimal = true + excludes = ["bash"] + } + + goo { + enabled = true + } + + zypper { + categories = ["security"] + } + + windows_update { + exclusive_patches = ["KB4339284"] + } + + pre_step { + linux_exec_step_config { + allowed_success_codes = [0,3] + local_path = "/tmp/pre_patch_script.sh" + } + + windows_exec_step_config { + interpreter = "SHELL" + allowed_success_codes = [0,2] + local_path = "C:\\Users\\user\\pre-patch-script.cmd" + } + } + + post_step { + linux_exec_step_config { + gcs_object { + bucket = "my-patch-scripts" + generation_number = "1523477886880" + object = "linux/post_patch_script" + } + } + + windows_exec_step_config { + interpreter = "POWERSHELL" + gcs_object { + bucket = "my-patch-scripts" + generation_number = "135920493447" + object = "windows/post_patch_script.ps1" + } + } + } + } + + duration = "10s" + + recurring_schedule { + time_zone { + id = "America/New_York" + } + + time_of_day { + hours = 0 + minutes = 30 + seconds = 30 + nanos = 20 + } + + monthly { + week_day_of_month { + week_ordinal = -1 + day_of_week = "TUESDAY" + } + } + } +} \ No newline at end of file diff --git a/templates/terraform/examples/os_config_patch_deployment_instance.tf.erb b/templates/terraform/examples/os_config_patch_deployment_instance.tf.erb new file mode 100644 index 000000000000..c8ac959c81e2 --- /dev/null +++ b/templates/terraform/examples/os_config_patch_deployment_instance.tf.erb @@ -0,0 +1,59 @@ +data "google_compute_image" "my_image" { + family = "debian-9" + project = "debian-cloud" +} + +resource "google_compute_instance" "foobar" { + name = "<%= ctx[:vars]['instance_name'] %>" + machine_type = "n1-standard-1" + zone = "us-central1-a" + can_ip_forward = false + tags = ["foo", "bar"] + + boot_disk { + initialize_params { + image = data.google_compute_image.my_image.self_link + } + } + + 
network_interface { + network = "default" + } + + metadata = { + foo = "bar" + } +} + +resource "google_os_config_patch_deployment" "<%= ctx[:primary_resource_id] %>" { + patch_deployment_id = "<%= ctx[:vars]['instance_name'] %>" + + instance_filter { + instances = [google_compute_instance.foobar.id] + } + + patch_config { + yum { + security = true + minimal = true + excludes = ["bash"] + } + } + + recurring_schedule { + time_zone { + id = "America/New_York" + } + + time_of_day { + hours = 0 + minutes = 30 + seconds = 30 + nanos = 20 + } + + monthly { + month_day = 1 + } + } +} \ No newline at end of file diff --git a/templates/terraform/examples/pubsub_topic_cmek.tf.erb b/templates/terraform/examples/pubsub_topic_cmek.tf.erb index 974de2457c3a..258dfacd5455 100644 --- a/templates/terraform/examples/pubsub_topic_cmek.tf.erb +++ b/templates/terraform/examples/pubsub_topic_cmek.tf.erb @@ -1,11 +1,11 @@ resource "google_pubsub_topic" "<%= ctx[:primary_resource_id] %>" { name = "<%= ctx[:vars]['topic_name'] %>" - kms_key_name = google_kms_crypto_key.crypto_key.self_link + kms_key_name = google_kms_crypto_key.crypto_key.id } resource "google_kms_crypto_key" "crypto_key" { name = "<%= ctx[:vars]['key_name'] %>" - key_ring = google_kms_key_ring.key_ring.self_link + key_ring = google_kms_key_ring.key_ring.id } resource "google_kms_key_ring" "key_ring" { diff --git a/templates/terraform/examples/redis_instance_full.tf.erb b/templates/terraform/examples/redis_instance_full.tf.erb index 9ac81a29cc85..a76043b58a4d 100644 --- a/templates/terraform/examples/redis_instance_full.tf.erb +++ b/templates/terraform/examples/redis_instance_full.tf.erb @@ -6,7 +6,7 @@ resource "google_redis_instance" "<%= ctx[:primary_resource_id] %>" { location_id = "us-central1-a" alternative_location_id = "us-central1-f" - authorized_network = data.google_compute_network.redis-network.self_link + authorized_network = data.google_compute_network.redis-network.id redis_version = "REDIS_3_2" display_name = "Terraform Test Instance" diff --git a/templates/terraform/examples/redis_instance_private_service.tf.erb b/templates/terraform/examples/redis_instance_private_service.tf.erb index c70cb77d84a9..65db1ff4aa80 100644 --- a/templates/terraform/examples/redis_instance_private_service.tf.erb +++ b/templates/terraform/examples/redis_instance_private_service.tf.erb @@ -7,11 +7,11 @@ resource "google_compute_global_address" "service_range" { purpose = "VPC_PEERING" address_type = "INTERNAL" prefix_length = 16 - network = google_compute_network.network.self_link + network = google_compute_network.network.id } resource "google_service_networking_connection" "private_service_connection" { - network = google_compute_network.network.self_link + network = google_compute_network.network.id service = "servicenetworking.googleapis.com" reserved_peering_ranges = [google_compute_global_address.service_range.name] } @@ -24,7 +24,7 @@ resource "google_redis_instance" "<%= ctx[:primary_resource_id] %>" { location_id = "us-central1-a" alternative_location_id = "us-central1-f" - authorized_network = google_compute_network.network.self_link + authorized_network = google_compute_network.network.id connect_mode = "PRIVATE_SERVICE_ACCESS" redis_version = "REDIS_3_2" diff --git a/templates/terraform/examples/region_autoscaler_basic.tf.erb b/templates/terraform/examples/region_autoscaler_basic.tf.erb index 355d02440f8b..c3c4ef4a2528 100644 --- a/templates/terraform/examples/region_autoscaler_basic.tf.erb +++ 
b/templates/terraform/examples/region_autoscaler_basic.tf.erb @@ -22,7 +22,7 @@ resource "google_compute_instance_template" "foobar" { tags = ["foo", "bar"] disk { - source_image = data.google_compute_image.debian_9.self_link + source_image = data.google_compute_image.debian_9.id } network_interface { diff --git a/templates/terraform/examples/region_backend_service_balancing_mode.tf.erb b/templates/terraform/examples/region_backend_service_balancing_mode.tf.erb index fb29e0ddf88c..fb521a835700 100644 --- a/templates/terraform/examples/region_backend_service_balancing_mode.tf.erb +++ b/templates/terraform/examples/region_backend_service_balancing_mode.tf.erb @@ -1,6 +1,4 @@ resource "google_compute_region_backend_service" "default" { - provider = google-beta - load_balancing_scheme = "INTERNAL_MANAGED" backend { @@ -14,23 +12,19 @@ resource "google_compute_region_backend_service" "default" { protocol = "HTTP" timeout_sec = 10 - health_checks = [google_compute_region_health_check.default.self_link] + health_checks = [google_compute_region_health_check.default.id] } data "google_compute_image" "debian_image" { - provider = google-beta - family = "debian-9" project = "debian-cloud" } resource "google_compute_region_instance_group_manager" "rigm" { - provider = google-beta - region = "us-central1" name = "<%= ctx[:vars]['rigm_name'] %>" version { - instance_template = google_compute_instance_template.instance_template.self_link + instance_template = google_compute_instance_template.instance_template.id name = "primary" } base_instance_name = "internal-glb" @@ -38,14 +32,12 @@ resource "google_compute_region_instance_group_manager" "rigm" { } resource "google_compute_instance_template" "instance_template" { - provider = google-beta - name = "template-<%= ctx[:vars]['region_backend_service_name'] %>" machine_type = "n1-standard-1" network_interface { - network = google_compute_network.default.self_link - subnetwork = google_compute_subnetwork.default.self_link + network = google_compute_network.default.id + subnetwork = google_compute_subnetwork.default.id } disk { @@ -58,8 +50,6 @@ resource "google_compute_instance_template" "instance_template" { } resource "google_compute_region_health_check" "default" { - provider = google-beta - region = "us-central1" name = "<%= ctx[:vars]['region_health_check_name'] %>" http_health_check { @@ -68,18 +58,14 @@ resource "google_compute_region_health_check" "default" { } resource "google_compute_network" "default" { - provider = google-beta - name = "<%= ctx[:vars]['network_name'] %>" auto_create_subnetworks = false routing_mode = "REGIONAL" } resource "google_compute_subnetwork" "default" { - provider = google-beta - name = "<%= ctx[:vars]['network_name'] %>-default" ip_cidr_range = "10.1.2.0/24" region = "us-central1" - network = google_compute_network.default.self_link + network = google_compute_network.default.id } diff --git a/templates/terraform/examples/region_backend_service_basic.tf.erb b/templates/terraform/examples/region_backend_service_basic.tf.erb index b7a04720e926..acbc9a75e482 100644 --- a/templates/terraform/examples/region_backend_service_basic.tf.erb +++ b/templates/terraform/examples/region_backend_service_basic.tf.erb @@ -1,7 +1,7 @@ resource "google_compute_region_backend_service" "<%= ctx[:primary_resource_id] %>" { name = "<%= ctx[:vars]['region_backend_service_name'] %>" region = "us-central1" - health_checks = [google_compute_health_check.default.self_link] + health_checks = [google_compute_health_check.default.id] 
connection_draining_timeout_sec = 10 session_affinity = "CLIENT_IP" } diff --git a/templates/terraform/examples/region_backend_service_ilb_ring_hash.tf.erb b/templates/terraform/examples/region_backend_service_ilb_ring_hash.tf.erb index 4c754df00b24..cce5eb0ae5c3 100644 --- a/templates/terraform/examples/region_backend_service_ilb_ring_hash.tf.erb +++ b/templates/terraform/examples/region_backend_service_ilb_ring_hash.tf.erb @@ -1,9 +1,7 @@ resource "google_compute_region_backend_service" "<%= ctx[:primary_resource_id] %>" { - provider = "google-beta" - region = "us-central1" name = "<%= ctx[:vars]['region_backend_service_name'] %>" - health_checks = ["${google_compute_health_check.health_check.self_link}"] + health_checks = [google_compute_health_check.health_check.id] load_balancing_scheme = "INTERNAL_MANAGED" locality_lb_policy = "RING_HASH" session_affinity = "HTTP_COOKIE" @@ -26,8 +24,6 @@ resource "google_compute_region_backend_service" "<%= ctx[:primary_resource_id] } resource "google_compute_health_check" "health_check" { - provider = "google-beta" - name = "<%= ctx[:vars]['health_check_name'] %>" http_health_check { port = 80 diff --git a/templates/terraform/examples/region_backend_service_ilb_round_robin.tf.erb b/templates/terraform/examples/region_backend_service_ilb_round_robin.tf.erb index 488de257e029..2d93c3b2c2f3 100644 --- a/templates/terraform/examples/region_backend_service_ilb_round_robin.tf.erb +++ b/templates/terraform/examples/region_backend_service_ilb_round_robin.tf.erb @@ -1,17 +1,13 @@ resource "google_compute_region_backend_service" "<%= ctx[:primary_resource_id] %>" { - provider = "google-beta" - region = "us-central1" name = "<%= ctx[:vars]['region_backend_service_name'] %>" - health_checks = ["${google_compute_health_check.health_check.self_link}"] + health_checks = [google_compute_health_check.health_check.id] protocol = "HTTP" load_balancing_scheme = "INTERNAL_MANAGED" locality_lb_policy = "ROUND_ROBIN" } resource "google_compute_health_check" "health_check" { - provider = "google-beta" - name = "<%= ctx[:vars]['health_check_name'] %>" http_health_check { port = 80 diff --git a/templates/terraform/examples/region_disk_basic.tf.erb b/templates/terraform/examples/region_disk_basic.tf.erb index 52726d6d9277..4945baa0a3b0 100644 --- a/templates/terraform/examples/region_disk_basic.tf.erb +++ b/templates/terraform/examples/region_disk_basic.tf.erb @@ -1,6 +1,6 @@ resource "google_compute_region_disk" "regiondisk" { name = "<%= ctx[:vars]['region_disk_name'] %>" - snapshot = google_compute_snapshot.snapdisk.self_link + snapshot = google_compute_snapshot.snapdisk.id type = "pd-ssd" region = "us-central1" physical_block_size_bytes = 4096 diff --git a/templates/terraform/examples/region_disk_resource_policy_attachment_basic.tf.erb b/templates/terraform/examples/region_disk_resource_policy_attachment_basic.tf.erb index aae4fa46e7dc..d7fea5727985 100644 --- a/templates/terraform/examples/region_disk_resource_policy_attachment_basic.tf.erb +++ b/templates/terraform/examples/region_disk_resource_policy_attachment_basic.tf.erb @@ -21,7 +21,7 @@ resource "google_compute_snapshot" "snapdisk" { resource "google_compute_region_disk" "ssd" { name = "<%= ctx[:vars]['disk_name'] %>" replica_zones = ["us-central1-a", "us-central1-f"] - snapshot = google_compute_snapshot.snapdisk.self_link + snapshot = google_compute_snapshot.snapdisk.id size = 50 type = "pd-ssd" region = "us-central1" diff --git a/templates/terraform/examples/region_ssl_certificate_target_https_proxies.tf.erb 
b/templates/terraform/examples/region_ssl_certificate_target_https_proxies.tf.erb index 50a6bf311b36..9be83a3f210c 100644 --- a/templates/terraform/examples/region_ssl_certificate_target_https_proxies.tf.erb +++ b/templates/terraform/examples/region_ssl_certificate_target_https_proxies.tf.erb @@ -9,7 +9,6 @@ // name with name_prefix, or use random_id resource. Example: resource "google_compute_region_ssl_certificate" "default" { - provider = google-beta region = "us-central1" name_prefix = "my-certificate-" private_key = file("path/to/private.key") @@ -21,20 +20,18 @@ resource "google_compute_region_ssl_certificate" "default" { } resource "google_compute_region_target_https_proxy" "default" { - provider = google-beta region = "us-central1" name = "<%= ctx[:vars]['region_target_https_proxy_name'] %>" - url_map = google_compute_region_url_map.default.self_link - ssl_certificates = [google_compute_region_ssl_certificate.default.self_link] + url_map = google_compute_region_url_map.default.id + ssl_certificates = [google_compute_region_ssl_certificate.default.id] } resource "google_compute_region_url_map" "default" { - provider = google-beta region = "us-central1" name = "<%= ctx[:vars]['region_url_map_name'] %>" description = "a description" - default_service = google_compute_region_backend_service.default.self_link + default_service = google_compute_region_backend_service.default.id host_rule { hosts = ["mysite.com"] @@ -43,27 +40,25 @@ resource "google_compute_region_url_map" "default" { path_matcher { name = "allpaths" - default_service = google_compute_region_backend_service.default.self_link + default_service = google_compute_region_backend_service.default.id path_rule { paths = ["/*"] - service = google_compute_region_backend_service.default.self_link + service = google_compute_region_backend_service.default.id } } } resource "google_compute_region_backend_service" "default" { - provider = google-beta region = "us-central1" name = "<%= ctx[:vars]['region_backend_service_name'] %>" protocol = "HTTP" timeout_sec = 10 - health_checks = [google_compute_region_health_check.default.self_link] + health_checks = [google_compute_region_health_check.default.id] } resource "google_compute_region_health_check" "default" { - provider = google-beta region = "us-central1" name = "<%= ctx[:vars]['region_health_check_name'] %>" http_health_check { diff --git a/templates/terraform/examples/region_target_http_proxy_basic.tf.erb b/templates/terraform/examples/region_target_http_proxy_basic.tf.erb index dff5528e3958..ca44fc4f296f 100644 --- a/templates/terraform/examples/region_target_http_proxy_basic.tf.erb +++ b/templates/terraform/examples/region_target_http_proxy_basic.tf.erb @@ -1,17 +1,13 @@ resource "google_compute_region_target_http_proxy" "default" { - provider = google-beta - region = "us-central1" name = "<%= ctx[:vars]['region_target_http_proxy_name'] %>" - url_map = google_compute_region_url_map.default.self_link + url_map = google_compute_region_url_map.default.id } resource "google_compute_region_url_map" "default" { - provider = google-beta - region = "us-central1" name = "<%= ctx[:vars]['region_url_map_name'] %>" - default_service = google_compute_region_backend_service.default.self_link + default_service = google_compute_region_backend_service.default.id host_rule { hosts = ["mysite.com"] @@ -20,29 +16,25 @@ resource "google_compute_region_url_map" "default" { path_matcher { name = "allpaths" - default_service = google_compute_region_backend_service.default.self_link + default_service = 
google_compute_region_backend_service.default.id path_rule { paths = ["/*"] - service = google_compute_region_backend_service.default.self_link + service = google_compute_region_backend_service.default.id } } } resource "google_compute_region_backend_service" "default" { - provider = google-beta - region = "us-central1" name = "<%= ctx[:vars]['region_backend_service_name'] %>" protocol = "HTTP" timeout_sec = 10 - health_checks = [google_compute_region_health_check.default.self_link] + health_checks = [google_compute_region_health_check.default.id] } resource "google_compute_region_health_check" "default" { - provider = google-beta - region = "us-central1" name = "<%= ctx[:vars]['region_health_check_name'] %>" http_health_check { diff --git a/templates/terraform/examples/region_target_http_proxy_https_redirect.tf.erb b/templates/terraform/examples/region_target_http_proxy_https_redirect.tf.erb new file mode 100644 index 000000000000..b4cca54587e9 --- /dev/null +++ b/templates/terraform/examples/region_target_http_proxy_https_redirect.tf.erb @@ -0,0 +1,14 @@ +resource "google_compute_region_target_http_proxy" "default" { + region = "us-central1" + name = "<%= ctx[:vars]['region_target_http_proxy_name'] %>" + url_map = google_compute_region_url_map.default.id +} + +resource "google_compute_region_url_map" "default" { + region = "us-central1" + name = "<%= ctx[:vars]['region_url_map_name'] %>" + default_url_redirect { + https_redirect = true + strip_query = false + } +} diff --git a/templates/terraform/examples/region_target_https_proxy_basic.tf.erb b/templates/terraform/examples/region_target_https_proxy_basic.tf.erb index bde6dde8671c..3924678ac3c5 100644 --- a/templates/terraform/examples/region_target_https_proxy_basic.tf.erb +++ b/templates/terraform/examples/region_target_https_proxy_basic.tf.erb @@ -1,15 +1,11 @@ resource "google_compute_region_target_https_proxy" "default" { - provider = google-beta - region = "us-central1" name = "<%= ctx[:vars]['region_target_https_proxy_name'] %>" - url_map = google_compute_region_url_map.default.self_link - ssl_certificates = [google_compute_region_ssl_certificate.default.self_link] + url_map = google_compute_region_url_map.default.id + ssl_certificates = [google_compute_region_ssl_certificate.default.id] } resource "google_compute_region_ssl_certificate" "default" { - provider = google-beta - region = "us-central1" name = "<%= ctx[:vars]['region_ssl_certificate_name'] %>" private_key = file("path/to/private.key") @@ -17,13 +13,11 @@ resource "google_compute_region_ssl_certificate" "default" { } resource "google_compute_region_url_map" "default" { - provider = google-beta - region = "us-central1" name = "<%= ctx[:vars]['region_url_map_name'] %>" description = "a description" - default_service = google_compute_region_backend_service.default.self_link + default_service = google_compute_region_backend_service.default.id host_rule { hosts = ["mysite.com"] @@ -32,29 +26,25 @@ resource "google_compute_region_url_map" "default" { path_matcher { name = "allpaths" - default_service = google_compute_region_backend_service.default.self_link + default_service = google_compute_region_backend_service.default.id path_rule { paths = ["/*"] - service = google_compute_region_backend_service.default.self_link + service = google_compute_region_backend_service.default.id } } } resource "google_compute_region_backend_service" "default" { - provider = google-beta - region = "us-central1" name = "<%= ctx[:vars]['region_backend_service_name'] %>" protocol = "HTTP" 
timeout_sec = 10 - health_checks = [google_compute_region_health_check.default.self_link] + health_checks = [google_compute_region_health_check.default.id] } resource "google_compute_region_health_check" "default" { - provider = google-beta - region = "us-central1" name = "<%= ctx[:vars]['region_health_check_name'] %>" http_health_check { diff --git a/templates/terraform/examples/region_url_map_basic.tf.erb b/templates/terraform/examples/region_url_map_basic.tf.erb index f819aaaa8dd3..01b4b5a18958 100644 --- a/templates/terraform/examples/region_url_map_basic.tf.erb +++ b/templates/terraform/examples/region_url_map_basic.tf.erb @@ -1,12 +1,10 @@ resource "google_compute_region_url_map" "<%= ctx[:primary_resource_id] %>" { - provider = google-beta - region = "us-central1" name = "<%= ctx[:vars]['region_url_map_name'] %>" description = "a description" - default_service = google_compute_region_backend_service.home.self_link + default_service = google_compute_region_backend_service.home.id host_rule { hosts = ["mysite.com"] @@ -15,53 +13,47 @@ resource "google_compute_region_url_map" "<%= ctx[:primary_resource_id] %>" { path_matcher { name = "allpaths" - default_service = google_compute_region_backend_service.home.self_link + default_service = google_compute_region_backend_service.home.id path_rule { paths = ["/home"] - service = google_compute_region_backend_service.home.self_link + service = google_compute_region_backend_service.home.id } path_rule { paths = ["/login"] - service = google_compute_region_backend_service.login.self_link + service = google_compute_region_backend_service.login.id } } test { - service = google_compute_region_backend_service.home.self_link + service = google_compute_region_backend_service.home.id host = "hi.com" path = "/home" } } resource "google_compute_region_backend_service" "login" { - provider = google-beta - region = "us-central1" name = "<%= ctx[:vars]['login_region_backend_service_name'] %>" protocol = "HTTP" timeout_sec = 10 - health_checks = [google_compute_region_health_check.default.self_link] + health_checks = [google_compute_region_health_check.default.id] } resource "google_compute_region_backend_service" "home" { - provider = google-beta - region = "us-central1" name = "<%= ctx[:vars]['home_region_backend_service_name'] %>" protocol = "HTTP" timeout_sec = 10 - health_checks = [google_compute_region_health_check.default.self_link] + health_checks = [google_compute_region_health_check.default.id] } resource "google_compute_region_health_check" "default" { - provider = google-beta - region = "us-central1" name = "<%= ctx[:vars]['region_health_check_name'] %>" diff --git a/templates/terraform/examples/region_url_map_l7_ilb_path.tf.erb b/templates/terraform/examples/region_url_map_l7_ilb_path.tf.erb index 13eb610fd80a..43523de9f88e 100644 --- a/templates/terraform/examples/region_url_map_l7_ilb_path.tf.erb +++ b/templates/terraform/examples/region_url_map_l7_ilb_path.tf.erb @@ -1,8 +1,7 @@ resource "google_compute_region_url_map" "<%= ctx[:primary_resource_id] %>" { - provider = "google-beta" name = "<%= ctx[:vars]['region_url_map_name'] %>" description = "a description" - default_service = google_compute_region_backend_service.home.self_link + default_service = google_compute_region_backend_service.home.id host_rule { hosts = ["mysite.com"] @@ -11,7 +10,7 @@ resource "google_compute_region_url_map" "<%= ctx[:primary_resource_id] %>" { path_matcher { name = "allpaths" - default_service = google_compute_region_backend_service.home.self_link + 
default_service = google_compute_region_backend_service.home.id path_rule { paths = ["/home"] @@ -39,7 +38,7 @@ resource "google_compute_region_url_map" "<%= ctx[:primary_resource_id] %>" { } } request_mirror_policy { - backend_service = google_compute_region_backend_service.home.self_link + backend_service = google_compute_region_backend_service.home.id } retry_policy { num_retries = 4 @@ -57,7 +56,7 @@ resource "google_compute_region_url_map" "<%= ctx[:primary_resource_id] %>" { path_prefix_rewrite = "A replacement path" } weighted_backend_services { - backend_service = google_compute_region_backend_service.home.self_link + backend_service = google_compute_region_backend_service.home.id weight = 400 header_action { request_headers_to_remove = ["RemoveMe"] @@ -79,24 +78,22 @@ resource "google_compute_region_url_map" "<%= ctx[:primary_resource_id] %>" { } test { - service = google_compute_region_backend_service.home.self_link + service = google_compute_region_backend_service.home.id host = "hi.com" path = "/home" } } resource "google_compute_region_backend_service" "home" { - provider = "google-beta" name = "<%= ctx[:vars]['home_region_backend_service_name'] %>" protocol = "HTTP" timeout_sec = 10 - health_checks = [google_compute_region_health_check.default.self_link] + health_checks = [google_compute_region_health_check.default.id] load_balancing_scheme = "INTERNAL_MANAGED" } resource "google_compute_region_health_check" "default" { - provider = "google-beta" name = "<%= ctx[:vars]['region_health_check_name'] %>" http_health_check { port = 80 diff --git a/templates/terraform/examples/region_url_map_l7_ilb_path_partial.tf.erb b/templates/terraform/examples/region_url_map_l7_ilb_path_partial.tf.erb index b5b264144248..48eebdab4d3e 100644 --- a/templates/terraform/examples/region_url_map_l7_ilb_path_partial.tf.erb +++ b/templates/terraform/examples/region_url_map_l7_ilb_path_partial.tf.erb @@ -1,8 +1,7 @@ resource "google_compute_region_url_map" "<%= ctx[:primary_resource_id] %>" { - provider = "google-beta" name = "<%= ctx[:vars]['region_url_map_name'] %>" description = "a description" - default_service = google_compute_region_backend_service.home.self_link + default_service = google_compute_region_backend_service.home.id host_rule { hosts = ["mysite.com"] @@ -11,7 +10,7 @@ resource "google_compute_region_url_map" "<%= ctx[:primary_resource_id] %>" { path_matcher { name = "allpaths" - default_service = google_compute_region_backend_service.home.self_link + default_service = google_compute_region_backend_service.home.id path_rule { paths = ["/home"] @@ -32,7 +31,7 @@ resource "google_compute_region_url_map" "<%= ctx[:primary_resource_id] %>" { path_prefix_rewrite = "A replacement path" } weighted_backend_services { - backend_service = google_compute_region_backend_service.home.self_link + backend_service = google_compute_region_backend_service.home.id weight = 400 header_action { response_headers_to_add { @@ -47,24 +46,22 @@ resource "google_compute_region_url_map" "<%= ctx[:primary_resource_id] %>" { } test { - service = google_compute_region_backend_service.home.self_link + service = google_compute_region_backend_service.home.id host = "hi.com" path = "/home" } } resource "google_compute_region_backend_service" "home" { - provider = "google-beta" name = "<%= ctx[:vars]['home_region_backend_service_name'] %>" protocol = "HTTP" timeout_sec = 10 - health_checks = [google_compute_region_health_check.default.self_link] + health_checks = [google_compute_region_health_check.default.id] 
load_balancing_scheme = "INTERNAL_MANAGED" } resource "google_compute_region_health_check" "default" { - provider = "google-beta" name = "<%= ctx[:vars]['region_health_check_name'] %>" http_health_check { port = 80 diff --git a/templates/terraform/examples/region_url_map_l7_ilb_route.tf.erb b/templates/terraform/examples/region_url_map_l7_ilb_route.tf.erb index e6f691367ebb..017cea1c1095 100644 --- a/templates/terraform/examples/region_url_map_l7_ilb_route.tf.erb +++ b/templates/terraform/examples/region_url_map_l7_ilb_route.tf.erb @@ -1,8 +1,7 @@ resource "google_compute_region_url_map" "<%= ctx[:primary_resource_id] %>" { - provider = "google-beta" name = "<%= ctx[:vars]['region_url_map_name'] %>" description = "a description" - default_service = google_compute_region_backend_service.home.self_link + default_service = google_compute_region_backend_service.home.id host_rule { hosts = ["mysite.com"] @@ -11,7 +10,7 @@ resource "google_compute_region_url_map" "<%= ctx[:primary_resource_id] %>" { path_matcher { name = "allpaths" - default_service = google_compute_region_backend_service.home.self_link + default_service = google_compute_region_backend_service.home.id route_rules { priority = 1 @@ -60,24 +59,22 @@ resource "google_compute_region_url_map" "<%= ctx[:primary_resource_id] %>" { } test { - service = google_compute_region_backend_service.home.self_link + service = google_compute_region_backend_service.home.id host = "hi.com" path = "/home" } } resource "google_compute_region_backend_service" "home" { - provider = "google-beta" name = "<%= ctx[:vars]['home_region_backend_service_name'] %>" protocol = "HTTP" timeout_sec = 10 - health_checks = [google_compute_region_health_check.default.self_link] + health_checks = [google_compute_region_health_check.default.id] load_balancing_scheme = "INTERNAL_MANAGED" } resource "google_compute_region_health_check" "default" { - provider = "google-beta" name = "<%= ctx[:vars]['region_health_check_name'] %>" http_health_check { port = 80 diff --git a/templates/terraform/examples/region_url_map_l7_ilb_route_partial.tf.erb b/templates/terraform/examples/region_url_map_l7_ilb_route_partial.tf.erb index 7060f3ed6937..a15b770a5a8d 100644 --- a/templates/terraform/examples/region_url_map_l7_ilb_route_partial.tf.erb +++ b/templates/terraform/examples/region_url_map_l7_ilb_route_partial.tf.erb @@ -1,8 +1,7 @@ resource "google_compute_region_url_map" "<%= ctx[:primary_resource_id] %>" { - provider = "google-beta" name = "<%= ctx[:vars]['region_url_map_name'] %>" description = "a description" - default_service = google_compute_region_backend_service.home.self_link + default_service = google_compute_region_backend_service.home.id host_rule { hosts = ["mysite.com"] @@ -11,11 +10,11 @@ resource "google_compute_region_url_map" "<%= ctx[:primary_resource_id] %>" { path_matcher { name = "allpaths" - default_service = google_compute_region_backend_service.home.self_link + default_service = google_compute_region_backend_service.home.id route_rules { priority = 1 - service = google_compute_region_backend_service.home.self_link + service = google_compute_region_backend_service.home.id header_action { request_headers_to_remove = ["RemoveMe2"] } @@ -35,24 +34,22 @@ resource "google_compute_region_url_map" "<%= ctx[:primary_resource_id] %>" { } test { - service = google_compute_region_backend_service.home.self_link + service = google_compute_region_backend_service.home.id host = "hi.com" path = "/home" } } resource "google_compute_region_backend_service" "home" { - provider 
= "google-beta" name = "<%= ctx[:vars]['home_region_backend_service_name'] %>" protocol = "HTTP" timeout_sec = 10 - health_checks = [google_compute_region_health_check.default.self_link] + health_checks = [google_compute_region_health_check.default.id] load_balancing_scheme = "INTERNAL_MANAGED" } resource "google_compute_region_health_check" "default" { - provider = "google-beta" name = "<%= ctx[:vars]['region_health_check_name'] %>" http_health_check { port = 80 diff --git a/templates/terraform/examples/resource_policy_placement_policy.tf.erb b/templates/terraform/examples/resource_policy_placement_policy.tf.erb new file mode 100644 index 000000000000..d7b986612db4 --- /dev/null +++ b/templates/terraform/examples/resource_policy_placement_policy.tf.erb @@ -0,0 +1,8 @@ +resource "google_compute_resource_policy" "baz" { + name = "<%= ctx[:vars]['name'] %>" + region = "us-central1" + group_placement_policy { + vm_count = 2 + collocation = "COLLOCATED" + } +} diff --git a/templates/terraform/examples/route_ilb.tf.erb b/templates/terraform/examples/route_ilb.tf.erb index 8d9ba6902f22..8a6376c32031 100644 --- a/templates/terraform/examples/route_ilb.tf.erb +++ b/templates/terraform/examples/route_ilb.tf.erb @@ -7,7 +7,7 @@ resource "google_compute_subnetwork" "default" { name = "<%= ctx[:vars]['subnet_name'] %>" ip_cidr_range = "10.0.1.0/24" region = "us-central1" - network = google_compute_network.default.self_link + network = google_compute_network.default.id } resource "google_compute_health_check" "hc" { @@ -23,7 +23,7 @@ resource "google_compute_health_check" "hc" { resource "google_compute_region_backend_service" "backend" { name = "<%= ctx[:vars]['backend_name'] %>" region = "us-central1" - health_checks = [google_compute_health_check.hc.self_link] + health_checks = [google_compute_health_check.hc.id] } resource "google_compute_forwarding_rule" "default" { @@ -31,7 +31,7 @@ resource "google_compute_forwarding_rule" "default" { region = "us-central1" load_balancing_scheme = "INTERNAL" - backend_service = google_compute_region_backend_service.backend.self_link + backend_service = google_compute_region_backend_service.backend.id all_ports = true network = google_compute_network.default.name subnetwork = google_compute_subnetwork.default.name @@ -41,6 +41,6 @@ resource "google_compute_route" "<%= ctx[:primary_resource_id] %>" { name = "<%= ctx[:vars]['route_name'] %>" dest_range = "0.0.0.0/0" network = google_compute_network.default.name - next_hop_ilb = google_compute_forwarding_rule.default.self_link + next_hop_ilb = google_compute_forwarding_rule.default.id priority = 2000 } diff --git a/templates/terraform/examples/router_nat_basic.tf.erb b/templates/terraform/examples/router_nat_basic.tf.erb index 8dfb43925135..7bb8629a90c1 100644 --- a/templates/terraform/examples/router_nat_basic.tf.erb +++ b/templates/terraform/examples/router_nat_basic.tf.erb @@ -4,7 +4,7 @@ resource "google_compute_network" "net" { resource "google_compute_subnetwork" "subnet" { name = "<%= ctx[:vars]['subnet_name'] %>" - network = google_compute_network.net.self_link + network = google_compute_network.net.id ip_cidr_range = "10.0.0.0/16" region = "us-central1" } @@ -12,7 +12,7 @@ resource "google_compute_subnetwork" "subnet" { resource "google_compute_router" "router" { name = "<%= ctx[:vars]['router_name'] %>" region = google_compute_subnetwork.subnet.region - network = google_compute_network.net.self_link + network = google_compute_network.net.id bgp { asn = 64514 diff --git 
a/templates/terraform/examples/router_nat_manual_ips.tf.erb b/templates/terraform/examples/router_nat_manual_ips.tf.erb index 231ee13a102c..fa549eeec86b 100644 --- a/templates/terraform/examples/router_nat_manual_ips.tf.erb +++ b/templates/terraform/examples/router_nat_manual_ips.tf.erb @@ -4,7 +4,7 @@ resource "google_compute_network" "net" { resource "google_compute_subnetwork" "subnet" { name = "<%= ctx[:vars]['subnet_name'] %>" - network = google_compute_network.net.self_link + network = google_compute_network.net.id ip_cidr_range = "10.0.0.0/16" region = "us-central1" } @@ -12,7 +12,7 @@ resource "google_compute_subnetwork" "subnet" { resource "google_compute_router" "router" { name = "<%= ctx[:vars]['router_name'] %>" region = google_compute_subnetwork.subnet.region - network = google_compute_network.net.self_link + network = google_compute_network.net.id } resource "google_compute_address" "address" { @@ -31,7 +31,7 @@ resource "google_compute_router_nat" "<%= ctx[:primary_resource_id] %>" { source_subnetwork_ip_ranges_to_nat = "LIST_OF_SUBNETWORKS" subnetwork { - name = google_compute_subnetwork.default.self_link + name = google_compute_subnetwork.subnet.id source_ip_ranges_to_nat = ["ALL_IP_RANGES"] } } diff --git a/templates/terraform/examples/scheduler_job_app_engine.tf.erb b/templates/terraform/examples/scheduler_job_app_engine.tf.erb index d3e1586acd25..259b563b0ba0 100644 --- a/templates/terraform/examples/scheduler_job_app_engine.tf.erb +++ b/templates/terraform/examples/scheduler_job_app_engine.tf.erb @@ -5,6 +5,13 @@ resource "google_cloud_scheduler_job" "job" { time_zone = "Europe/London" attempt_deadline = "320s" + retry_config { + min_backoff_duration = "1s" + max_retry_duration = "10s" + max_doublings = 2 + retry_count = 3 + } + app_engine_http_target { http_method = "POST" diff --git a/templates/terraform/examples/scheduler_job_http.tf.erb b/templates/terraform/examples/scheduler_job_http.tf.erb index 5cf2cbebe681..e994f49bcb53 100644 --- a/templates/terraform/examples/scheduler_job_http.tf.erb +++ b/templates/terraform/examples/scheduler_job_http.tf.erb @@ -5,6 +5,10 @@ resource "google_cloud_scheduler_job" "job" { time_zone = "America/New_York" attempt_deadline = "320s" + retry_config { + retry_count = 1 + } + http_target { http_method = "POST" uri = "https://example.com/ping" diff --git a/templates/terraform/examples/secret_config_basic.tf.erb b/templates/terraform/examples/secret_config_basic.tf.erb index 1fcb383d7dea..9e4daaa00033 100644 --- a/templates/terraform/examples/secret_config_basic.tf.erb +++ b/templates/terraform/examples/secret_config_basic.tf.erb @@ -1,6 +1,4 @@ resource "google_secret_manager_secret" "<%= ctx[:primary_resource_id] %>" { - provider = google-beta - secret_id = "<%= ctx[:vars]['secret_id'] %>" labels = { diff --git a/templates/terraform/examples/secret_version_basic.tf.erb b/templates/terraform/examples/secret_version_basic.tf.erb index 13a1a74c42dc..3df2522be8f0 100644 --- a/templates/terraform/examples/secret_version_basic.tf.erb +++ b/templates/terraform/examples/secret_version_basic.tf.erb @@ -1,6 +1,4 @@ resource "google_secret_manager_secret" "secret-basic" { - provider = google-beta - secret_id = "<%= ctx[:vars]['secret_id'] %>" labels = { @@ -14,8 +12,6 @@ resource "google_secret_manager_secret" "secret-basic" { resource "google_secret_manager_secret_version" "<%= ctx[:primary_resource_id] %>" { - provider = google-beta - secret = google_secret_manager_secret.secret-basic.id secret_data = "<%= ctx[:vars]['data'] %>" diff --git 
a/templates/terraform/examples/service_directory_endpoint_basic.tf.erb b/templates/terraform/examples/service_directory_endpoint_basic.tf.erb new file mode 100644 index 000000000000..2f55c715beff --- /dev/null +++ b/templates/terraform/examples/service_directory_endpoint_basic.tf.erb @@ -0,0 +1,25 @@ +resource "google_service_directory_namespace" "<%= ctx[:primary_resource_id] %>" { + provider = google-beta + namespace_id = "<%= ctx[:vars]["namespace_id"] %>" + location = "us-central1" +} + +resource "google_service_directory_service" "<%= ctx[:primary_resource_id] %>" { + provider = google-beta + service_id = "<%= ctx[:vars]["service_id"] %>" + namespace = google_service_directory_namespace.<%= ctx[:primary_resource_id] %>.id +} + +resource "google_service_directory_endpoint" "<%= ctx[:primary_resource_id] %>" { + provider = google-beta + endpoint_id = "<%= ctx[:vars]["endpoint_id"] %>" + service = google_service_directory_service.<%= ctx[:primary_resource_id] %>.id + + metadata = { + stage = "prod" + region = "us-central1" + } + + address = "1.2.3.4" + port = 5353 +} diff --git a/templates/terraform/examples/service_directory_namespace_basic.tf.erb b/templates/terraform/examples/service_directory_namespace_basic.tf.erb new file mode 100644 index 000000000000..28b6e4a5b5b9 --- /dev/null +++ b/templates/terraform/examples/service_directory_namespace_basic.tf.erb @@ -0,0 +1,10 @@ +resource "google_service_directory_namespace" "<%= ctx[:primary_resource_id] %>" { + provider = google-beta + namespace_id = "<%= ctx[:vars]["namespace_id"] %>" + location = "us-central1" + + labels = { + key = "value" + foo = "bar" + } +} diff --git a/templates/terraform/examples/service_directory_service_basic.tf.erb b/templates/terraform/examples/service_directory_service_basic.tf.erb new file mode 100644 index 000000000000..c62ff6711961 --- /dev/null +++ b/templates/terraform/examples/service_directory_service_basic.tf.erb @@ -0,0 +1,16 @@ +resource "google_service_directory_namespace" "<%= ctx[:primary_resource_id] %>" { + provider = google-beta + namespace_id = "<%= ctx[:vars]["namespace_id"] %>" + location = "us-central1" +} + +resource "google_service_directory_service" "<%= ctx[:primary_resource_id] %>" { + provider = google-beta + service_id = "<%= ctx[:vars]["service_id"] %>" + namespace = google_service_directory_namespace.<%= ctx[:primary_resource_id] %>.id + + metadata = { + stage = "prod" + region = "us-central1" + } +} diff --git a/templates/terraform/examples/ssl_certificate_target_https_proxies.tf.erb b/templates/terraform/examples/ssl_certificate_target_https_proxies.tf.erb index 85f0644444dd..62b3082dabd0 100644 --- a/templates/terraform/examples/ssl_certificate_target_https_proxies.tf.erb +++ b/templates/terraform/examples/ssl_certificate_target_https_proxies.tf.erb @@ -20,15 +20,15 @@ resource "google_compute_ssl_certificate" "default" { resource "google_compute_target_https_proxy" "default" { name = "<%= ctx[:vars]['target_https_proxy_name'] %>" - url_map = google_compute_url_map.default.self_link - ssl_certificates = [google_compute_ssl_certificate.default.self_link] + url_map = google_compute_url_map.default.id + ssl_certificates = [google_compute_ssl_certificate.default.id] } resource "google_compute_url_map" "default" { name = "<%= ctx[:vars]['url_map_name'] %>" description = "a description" - default_service = google_compute_backend_service.default.self_link + default_service = google_compute_backend_service.default.id host_rule { hosts = ["mysite.com"] @@ -37,11 +37,11 @@ resource 
"google_compute_url_map" "default" { path_matcher { name = "allpaths" - default_service = google_compute_backend_service.default.self_link + default_service = google_compute_backend_service.default.id path_rule { paths = ["/*"] - service = google_compute_backend_service.default.self_link + service = google_compute_backend_service.default.id } } } @@ -52,7 +52,7 @@ resource "google_compute_backend_service" "default" { protocol = "HTTP" timeout_sec = 10 - health_checks = [google_compute_http_health_check.default.self_link] + health_checks = [google_compute_http_health_check.default.id] } resource "google_compute_http_health_check" "default" { diff --git a/templates/terraform/examples/stateful_igm.tf.erb b/templates/terraform/examples/stateful_igm.tf.erb new file mode 100644 index 000000000000..4df648bb26a8 --- /dev/null +++ b/templates/terraform/examples/stateful_igm.tf.erb @@ -0,0 +1,64 @@ +data "google_compute_image" "my_image" { + family = "debian-9" + project = "debian-cloud" +} + +resource "google_compute_instance_template" "igm-basic" { + name = "<%= ctx[:vars]['template_name'] %>" + machine_type = "n1-standard-1" + can_ip_forward = false + tags = ["foo", "bar"] + + disk { + source_image = data.google_compute_image.my_image.self_link + auto_delete = true + boot = true + } + + network_interface { + network = "default" + } + + service_account { + scopes = ["userinfo-email", "compute-ro", "storage-ro"] + } +} + +resource "google_compute_instance_group_manager" "igm-no-tp" { + description = "Terraform test instance group manager" + name = "<%= ctx[:vars]['igm_name'] %>" + + version { + name = "prod" + instance_template = google_compute_instance_template.igm-basic.self_link + } + + base_instance_name = "igm-no-tp" + zone = "us-central1-c" + target_size = 2 +} + +resource "google_compute_disk" "default" { + name = "test-disk-%{random_suffix}" + type = "pd-ssd" + zone = google_compute_instance_group_manager.igm.zone + image = "debian-8-jessie-v20170523" + physical_block_size_bytes = 4096 +} + +resource "google_compute_per_instance_config" "with_disk" { + zone = google_compute_instance_group_manager.igm.zone + instance_group_manager = google_compute_instance_group_manager.igm.name + name = "instance-1" + preserved_state { + metadata = { + foo = "bar" + } + + disk { + device_name = "my-stateful-disk" + source = google_compute_disk.default.id + mode = "READ_ONLY" + } + } +} \ No newline at end of file diff --git a/templates/terraform/examples/stateful_rigm.tf.erb b/templates/terraform/examples/stateful_rigm.tf.erb new file mode 100644 index 000000000000..c6ca87ac404b --- /dev/null +++ b/templates/terraform/examples/stateful_rigm.tf.erb @@ -0,0 +1,64 @@ +data "google_compute_image" "my_image" { + family = "debian-9" + project = "debian-cloud" +} + +resource "google_compute_instance_template" "igm-basic" { + name = "<%= ctx[:vars]['template_name'] %>" + machine_type = "n1-standard-1" + can_ip_forward = false + tags = ["foo", "bar"] + + disk { + source_image = data.google_compute_image.my_image.self_link + auto_delete = true + boot = true + } + + network_interface { + network = "default" + } + + service_account { + scopes = ["userinfo-email", "compute-ro", "storage-ro"] + } +} + +resource "google_compute_region_instance_group_manager" "rigm" { + description = "Terraform test instance group manager" + name = "<%= ctx[:vars]['igm_name'] %>" + + version { + name = "prod" + instance_template = google_compute_instance_template.igm-basic.self_link + } + + base_instance_name = "rigm" + region = 
"us-central1" + target_size = 2 +} + +resource "google_compute_disk" "default" { + name = "test-disk-%{random_suffix}" + type = "pd-ssd" + zone = "us-central1-a" + image = "debian-8-jessie-v20170523" + physical_block_size_bytes = 4096 +} + +resource "google_compute_region_per_instance_config" "with_disk" { + region = google_compute_instance_group_manager.igm.region + region_instance_group_manager = google_compute_region_instance_group_manager.rigm.name + name = "instance-1" + preserved_state { + metadata = { + foo = "bar" + } + + disk { + device_name = "my-stateful-disk" + source = google_compute_disk.default.id + mode = "READ_ONLY" + } + } +} \ No newline at end of file diff --git a/templates/terraform/examples/subnetwork_basic.tf.erb b/templates/terraform/examples/subnetwork_basic.tf.erb index 4cd85ab1e126..f5e224b07d2e 100644 --- a/templates/terraform/examples/subnetwork_basic.tf.erb +++ b/templates/terraform/examples/subnetwork_basic.tf.erb @@ -2,7 +2,7 @@ resource "google_compute_subnetwork" "network-with-private-secondary-ip-ranges" name = "<%= ctx[:vars]['subnetwork_name'] %>" ip_cidr_range = "10.2.0.0/16" region = "us-central1" - network = google_compute_network.custom-test.self_link + network = google_compute_network.custom-test.id secondary_ip_range { range_name = "tf-test-secondary-range-update1" ip_cidr_range = "192.168.10.0/24" diff --git a/templates/terraform/examples/subnetwork_internal_l7lb.tf.erb b/templates/terraform/examples/subnetwork_internal_l7lb.tf.erb index 12bf8b878ecb..a59d00fcd673 100644 --- a/templates/terraform/examples/subnetwork_internal_l7lb.tf.erb +++ b/templates/terraform/examples/subnetwork_internal_l7lb.tf.erb @@ -6,7 +6,7 @@ resource "google_compute_subnetwork" "network-for-l7lb" { region = "us-central1" purpose = "INTERNAL_HTTPS_LOAD_BALANCER" role = "ACTIVE" - network = google_compute_network.custom-test.self_link + network = google_compute_network.custom-test.id } resource "google_compute_network" "custom-test" { diff --git a/templates/terraform/examples/subnetwork_logging_config.tf.erb b/templates/terraform/examples/subnetwork_logging_config.tf.erb index 048a98af49a5..159c6df99094 100644 --- a/templates/terraform/examples/subnetwork_logging_config.tf.erb +++ b/templates/terraform/examples/subnetwork_logging_config.tf.erb @@ -2,7 +2,7 @@ resource "google_compute_subnetwork" "subnet-with-logging" { name = "<%= ctx[:vars]['subnetwork_name'] %>" ip_cidr_range = "10.2.0.0/16" region = "us-central1" - network = google_compute_network.custom-test.self_link + network = google_compute_network.custom-test.id log_config { aggregation_interval = "INTERVAL_10_MIN" diff --git a/templates/terraform/examples/target_http_proxy_basic.tf.erb b/templates/terraform/examples/target_http_proxy_basic.tf.erb index a3e099b98fca..86877bfe8c86 100644 --- a/templates/terraform/examples/target_http_proxy_basic.tf.erb +++ b/templates/terraform/examples/target_http_proxy_basic.tf.erb @@ -1,11 +1,11 @@ resource "google_compute_target_http_proxy" "default" { name = "<%= ctx[:vars]['target_http_proxy_name'] %>" - url_map = google_compute_url_map.default.self_link + url_map = google_compute_url_map.default.id } resource "google_compute_url_map" "default" { name = "<%= ctx[:vars]['url_map_name'] %>" - default_service = google_compute_backend_service.default.self_link + default_service = google_compute_backend_service.default.id host_rule { hosts = ["mysite.com"] @@ -14,11 +14,11 @@ resource "google_compute_url_map" "default" { path_matcher { name = "allpaths" - default_service = 
google_compute_backend_service.default.self_link + default_service = google_compute_backend_service.default.id path_rule { paths = ["/*"] - service = google_compute_backend_service.default.self_link + service = google_compute_backend_service.default.id } } } @@ -29,7 +29,7 @@ resource "google_compute_backend_service" "default" { protocol = "HTTP" timeout_sec = 10 - health_checks = [google_compute_http_health_check.default.self_link] + health_checks = [google_compute_http_health_check.default.id] } resource "google_compute_http_health_check" "default" { diff --git a/templates/terraform/examples/target_http_proxy_https_redirect.tf.erb b/templates/terraform/examples/target_http_proxy_https_redirect.tf.erb new file mode 100644 index 000000000000..a3afb4721400 --- /dev/null +++ b/templates/terraform/examples/target_http_proxy_https_redirect.tf.erb @@ -0,0 +1,12 @@ +resource "google_compute_target_http_proxy" "default" { + name = "<%= ctx[:vars]['target_http_proxy_name'] %>" + url_map = google_compute_url_map.default.id +} + +resource "google_compute_url_map" "default" { + name = "<%= ctx[:vars]['url_map_name'] %>" + default_url_redirect { + https_redirect = true + strip_query = false + } +} diff --git a/templates/terraform/examples/target_https_proxy_basic.tf.erb b/templates/terraform/examples/target_https_proxy_basic.tf.erb index d8de469b10c1..bf515488eeab 100644 --- a/templates/terraform/examples/target_https_proxy_basic.tf.erb +++ b/templates/terraform/examples/target_https_proxy_basic.tf.erb @@ -1,7 +1,7 @@ resource "google_compute_target_https_proxy" "default" { name = "<%= ctx[:vars]['target_https_proxy_name'] %>" - url_map = google_compute_url_map.default.self_link - ssl_certificates = [google_compute_ssl_certificate.default.self_link] + url_map = google_compute_url_map.default.id + ssl_certificates = [google_compute_ssl_certificate.default.id] } resource "google_compute_ssl_certificate" "default" { @@ -14,7 +14,7 @@ resource "google_compute_url_map" "default" { name = "<%= ctx[:vars]['url_map_name'] %>" description = "a description" - default_service = google_compute_backend_service.default.self_link + default_service = google_compute_backend_service.default.id host_rule { hosts = ["mysite.com"] @@ -23,11 +23,11 @@ resource "google_compute_url_map" "default" { path_matcher { name = "allpaths" - default_service = google_compute_backend_service.default.self_link + default_service = google_compute_backend_service.default.id path_rule { paths = ["/*"] - service = google_compute_backend_service.default.self_link + service = google_compute_backend_service.default.id } } } @@ -38,7 +38,7 @@ resource "google_compute_backend_service" "default" { protocol = "HTTP" timeout_sec = 10 - health_checks = [google_compute_http_health_check.default.self_link] + health_checks = [google_compute_http_health_check.default.id] } resource "google_compute_http_health_check" "default" { diff --git a/templates/terraform/examples/target_instance_basic.tf.erb b/templates/terraform/examples/target_instance_basic.tf.erb index c1ea4e2daacb..2a83b88b2c9e 100644 --- a/templates/terraform/examples/target_instance_basic.tf.erb +++ b/templates/terraform/examples/target_instance_basic.tf.erb @@ -1,6 +1,6 @@ resource "google_compute_target_instance" "<%= ctx[:primary_resource_id] %>" { name = "<%= ctx[:vars]['target_name'] %>" - instance = google_compute_instance.target-vm.self_link + instance = google_compute_instance.target-vm.id } data "google_compute_image" "vmimage" { diff --git 
a/templates/terraform/examples/target_ssl_proxy_basic.tf.erb b/templates/terraform/examples/target_ssl_proxy_basic.tf.erb index cd1207a9cc69..8bbfcf259643 100644 --- a/templates/terraform/examples/target_ssl_proxy_basic.tf.erb +++ b/templates/terraform/examples/target_ssl_proxy_basic.tf.erb @@ -1,7 +1,7 @@ resource "google_compute_target_ssl_proxy" "default" { name = "<%= ctx[:vars]['target_ssl_proxy_name'] %>" - backend_service = google_compute_backend_service.default.self_link - ssl_certificates = [google_compute_ssl_certificate.default.self_link] + backend_service = google_compute_backend_service.default.id + ssl_certificates = [google_compute_ssl_certificate.default.id] } resource "google_compute_ssl_certificate" "default" { @@ -13,7 +13,7 @@ resource "google_compute_ssl_certificate" "default" { resource "google_compute_backend_service" "default" { name = "<%= ctx[:vars]['backend_service_name'] %>" protocol = "SSL" - health_checks = [google_compute_health_check.default.self_link] + health_checks = [google_compute_health_check.default.id] } resource "google_compute_health_check" "default" { diff --git a/templates/terraform/examples/target_tcp_proxy_basic.tf.erb b/templates/terraform/examples/target_tcp_proxy_basic.tf.erb index fd9c04fc5346..69ad592fdeb0 100644 --- a/templates/terraform/examples/target_tcp_proxy_basic.tf.erb +++ b/templates/terraform/examples/target_tcp_proxy_basic.tf.erb @@ -1,6 +1,6 @@ resource "google_compute_target_tcp_proxy" "default" { name = "<%= ctx[:vars]['target_tcp_proxy_name'] %>" - backend_service = google_compute_backend_service.default.self_link + backend_service = google_compute_backend_service.default.id } resource "google_compute_backend_service" "default" { @@ -8,7 +8,7 @@ resource "google_compute_backend_service" "default" { protocol = "TCP" timeout_sec = 10 - health_checks = [google_compute_health_check.default.self_link] + health_checks = [google_compute_health_check.default.id] } resource "google_compute_health_check" "default" { diff --git a/templates/terraform/examples/target_vpn_gateway_basic.tf.erb b/templates/terraform/examples/target_vpn_gateway_basic.tf.erb index 824ce65f7772..0b57481df59d 100644 --- a/templates/terraform/examples/target_vpn_gateway_basic.tf.erb +++ b/templates/terraform/examples/target_vpn_gateway_basic.tf.erb @@ -1,6 +1,6 @@ resource "google_compute_vpn_gateway" "target_gateway" { name = "<%= ctx[:vars]['target_vpn_gateway_name'] %>" - network = google_compute_network.network1.self_link + network = google_compute_network.network1.id } resource "google_compute_network" "network1" { @@ -15,7 +15,7 @@ resource "google_compute_forwarding_rule" "fr_esp" { name = "<%= ctx[:vars]['esp_forwarding_rule_name'] %>" ip_protocol = "ESP" ip_address = google_compute_address.vpn_static_ip.address - target = google_compute_vpn_gateway.target_gateway.self_link + target = google_compute_vpn_gateway.target_gateway.id } resource "google_compute_forwarding_rule" "fr_udp500" { @@ -23,7 +23,7 @@ resource "google_compute_forwarding_rule" "fr_udp500" { ip_protocol = "UDP" port_range = "500" ip_address = google_compute_address.vpn_static_ip.address - target = google_compute_vpn_gateway.target_gateway.self_link + target = google_compute_vpn_gateway.target_gateway.id } resource "google_compute_forwarding_rule" "fr_udp4500" { @@ -31,7 +31,7 @@ resource "google_compute_forwarding_rule" "fr_udp4500" { ip_protocol = "UDP" port_range = "4500" ip_address = google_compute_address.vpn_static_ip.address - target = 
google_compute_vpn_gateway.target_gateway.self_link + target = google_compute_vpn_gateway.target_gateway.id } resource "google_compute_vpn_tunnel" "tunnel1" { @@ -39,7 +39,7 @@ resource "google_compute_vpn_tunnel" "tunnel1" { peer_ip = "15.0.0.120" shared_secret = "a secret message" - target_vpn_gateway = google_compute_vpn_gateway.target_gateway.self_link + target_vpn_gateway = google_compute_vpn_gateway.target_gateway.id depends_on = [ google_compute_forwarding_rule.fr_esp, @@ -54,5 +54,5 @@ resource "google_compute_route" "route1" { dest_range = "15.0.0.0/24" priority = 1000 - next_hop_vpn_tunnel = google_compute_vpn_tunnel.tunnel1.self_link + next_hop_vpn_tunnel = google_compute_vpn_tunnel.tunnel1.id } diff --git a/templates/terraform/examples/uptime_check_config_http.tf.erb b/templates/terraform/examples/uptime_check_config_http.tf.erb index 4c10c16a974b..738eb5eab249 100644 --- a/templates/terraform/examples/uptime_check_config_http.tf.erb +++ b/templates/terraform/examples/uptime_check_config_http.tf.erb @@ -5,6 +5,9 @@ resource "google_monitoring_uptime_check_config" "<%= ctx[:primary_resource_id] http_check { path = "/some-path" port = "8010" + request_method = "POST" + content_type = "URL_ENCODED" + body = "Zm9vJTI1M0RiYXI=" } monitored_resource { diff --git a/templates/terraform/examples/url_map_basic.tf.erb b/templates/terraform/examples/url_map_basic.tf.erb index 5d870c445cf2..586ad7593ad2 100644 --- a/templates/terraform/examples/url_map_basic.tf.erb +++ b/templates/terraform/examples/url_map_basic.tf.erb @@ -2,7 +2,7 @@ resource "google_compute_url_map" "<%= ctx[:primary_resource_id] %>" { name = "<%= ctx[:vars]['url_map_name'] %>" description = "a description" - default_service = google_compute_backend_service.home.self_link + default_service = google_compute_backend_service.home.id host_rule { hosts = ["mysite.com"] @@ -16,31 +16,31 @@ resource "google_compute_url_map" "<%= ctx[:primary_resource_id] %>" { path_matcher { name = "mysite" - default_service = google_compute_backend_service.home.self_link + default_service = google_compute_backend_service.home.id path_rule { paths = ["/home"] - service = google_compute_backend_service.home.self_link + service = google_compute_backend_service.home.id } path_rule { paths = ["/login"] - service = google_compute_backend_service.login.self_link + service = google_compute_backend_service.login.id } path_rule { paths = ["/static"] - service = google_compute_backend_bucket.static.self_link + service = google_compute_backend_bucket.static.id } } path_matcher { name = "otherpaths" - default_service = google_compute_backend_service.home.self_link + default_service = google_compute_backend_service.home.id } test { - service = google_compute_backend_service.home.self_link + service = google_compute_backend_service.home.id host = "hi.com" path = "/home" } @@ -52,7 +52,7 @@ resource "google_compute_backend_service" "login" { protocol = "HTTP" timeout_sec = 10 - health_checks = [google_compute_http_health_check.default.self_link] + health_checks = [google_compute_http_health_check.default.id] } resource "google_compute_backend_service" "home" { @@ -61,7 +61,7 @@ resource "google_compute_backend_service" "home" { protocol = "HTTP" timeout_sec = 10 - health_checks = [google_compute_http_health_check.default.self_link] + health_checks = [google_compute_http_health_check.default.id] } resource "google_compute_http_health_check" "default" { diff --git a/templates/terraform/examples/url_map_header_based_routing.tf.erb 
b/templates/terraform/examples/url_map_header_based_routing.tf.erb new file mode 100644 index 000000000000..dbc35a54325f --- /dev/null +++ b/templates/terraform/examples/url_map_header_based_routing.tf.erb @@ -0,0 +1,75 @@ +resource "google_compute_url_map" "<%= ctx[:primary_resource_id] %>" { + name = "<%= ctx[:vars]['url_map_name'] %>" + description = "header-based routing example" + default_service = google_compute_backend_service.default.id + + host_rule { + hosts = ["*"] + path_matcher = "allpaths" + } + + path_matcher { + name = "allpaths" + default_service = google_compute_backend_service.default.id + + route_rules { + priority = 1 + service = google_compute_backend_service.service-a.id + match_rules { + prefix_match = "/" + ignore_case = true + header_matches { + header_name = "abtest" + exact_match = "a" + } + } + } + route_rules { + priority = 2 + service = google_compute_backend_service.service-b.id + match_rules { + ignore_case = true + prefix_match = "/" + header_matches { + header_name = "abtest" + exact_match = "b" + } + } + } + } +} + +resource "google_compute_backend_service" "default" { + name = "<%= ctx[:vars]['default_backend_service_name'] %>" + port_name = "http" + protocol = "HTTP" + timeout_sec = 10 + + health_checks = [google_compute_http_health_check.default.id] +} + +resource "google_compute_backend_service" "service-a" { + name = "<%= ctx[:vars]['service_a_backend_service_name'] %>" + port_name = "http" + protocol = "HTTP" + timeout_sec = 10 + + health_checks = [google_compute_http_health_check.default.id] +} + +resource "google_compute_backend_service" "service-b" { + name = "<%= ctx[:vars]['service_b_backend_service_name'] %>" + port_name = "http" + protocol = "HTTP" + timeout_sec = 10 + + health_checks = [google_compute_http_health_check.default.id] +} + +resource "google_compute_http_health_check" "default" { + name = "<%= ctx[:vars]['health_check_name'] %>" + request_path = "/" + check_interval_sec = 1 + timeout_sec = 1 +} + diff --git a/templates/terraform/examples/url_map_parameter_based_routing.tf.erb b/templates/terraform/examples/url_map_parameter_based_routing.tf.erb new file mode 100644 index 000000000000..bef10d1a4e97 --- /dev/null +++ b/templates/terraform/examples/url_map_parameter_based_routing.tf.erb @@ -0,0 +1,75 @@ +resource "google_compute_url_map" "<%= ctx[:primary_resource_id] %>" { + name = "<%= ctx[:vars]['url_map_name'] %>" + description = "parameter-based routing example" + default_service = google_compute_backend_service.default.id + + host_rule { + hosts = ["*"] + path_matcher = "allpaths" + } + + path_matcher { + name = "allpaths" + default_service = google_compute_backend_service.default.id + + route_rules { + priority = 1 + service = google_compute_backend_service.service-a.id + match_rules { + prefix_match = "/" + ignore_case = true + query_parameter_matches { + name = "abtest" + exact_match = "a" + } + } + } + route_rules { + priority = 2 + service = google_compute_backend_service.service-b.id + match_rules { + ignore_case = true + prefix_match = "/" + query_parameter_matches { + name = "abtest" + exact_match = "b" + } + } + } + } +} + +resource "google_compute_backend_service" "default" { + name = "<%= ctx[:vars]['default_backend_service_name'] %>" + port_name = "http" + protocol = "HTTP" + timeout_sec = 10 + + health_checks = [google_compute_http_health_check.default.id] +} + +resource "google_compute_backend_service" "service-a" { + name = "<%= ctx[:vars]['service_a_backend_service_name'] %>" + port_name = "http" + protocol = 
"HTTP" + timeout_sec = 10 + + health_checks = [google_compute_http_health_check.default.id] +} + +resource "google_compute_backend_service" "service-b" { + name = "<%= ctx[:vars]['service_b_backend_service_name'] %>" + port_name = "http" + protocol = "HTTP" + timeout_sec = 10 + + health_checks = [google_compute_http_health_check.default.id] +} + +resource "google_compute_http_health_check" "default" { + name = "<%= ctx[:vars]['health_check_name'] %>" + request_path = "/" + check_interval_sec = 1 + timeout_sec = 1 +} + diff --git a/templates/terraform/examples/url_map_traffic_director_path.tf.erb b/templates/terraform/examples/url_map_traffic_director_path.tf.erb index d20634c96738..d61fbc0d7989 100644 --- a/templates/terraform/examples/url_map_traffic_director_path.tf.erb +++ b/templates/terraform/examples/url_map_traffic_director_path.tf.erb @@ -1,7 +1,7 @@ resource "google_compute_url_map" "<%= ctx[:primary_resource_id] %>" { name = "<%= ctx[:vars]['url_map_name'] %>" description = "a description" - default_service = google_compute_backend_service.home.self_link + default_service = google_compute_backend_service.home.id host_rule { hosts = ["mysite.com"] @@ -10,7 +10,7 @@ resource "google_compute_url_map" "<%= ctx[:primary_resource_id] %>" { path_matcher { name = "allpaths" - default_service = google_compute_backend_service.home.self_link + default_service = google_compute_backend_service.home.id path_rule { paths = ["/home"] @@ -39,7 +39,7 @@ resource "google_compute_url_map" "<%= ctx[:primary_resource_id] %>" { } } request_mirror_policy { - backend_service = google_compute_backend_service.home.self_link + backend_service = google_compute_backend_service.home.id } retry_policy { num_retries = 4 @@ -57,7 +57,7 @@ resource "google_compute_url_map" "<%= ctx[:primary_resource_id] %>" { path_prefix_rewrite = "A replacement path" } weighted_backend_services { - backend_service = google_compute_backend_service.home.self_link + backend_service = google_compute_backend_service.home.id weight = 400 header_action { request_headers_to_remove = ["RemoveMe"] @@ -79,7 +79,7 @@ resource "google_compute_url_map" "<%= ctx[:primary_resource_id] %>" { } test { - service = google_compute_backend_service.home.self_link + service = google_compute_backend_service.home.id host = "hi.com" path = "/home" } @@ -91,7 +91,7 @@ resource "google_compute_backend_service" "home" { protocol = "HTTP" timeout_sec = 10 - health_checks = [google_compute_health_check.default.self_link] + health_checks = [google_compute_health_check.default.id] load_balancing_scheme = "INTERNAL_SELF_MANAGED" } diff --git a/templates/terraform/examples/url_map_traffic_director_path_partial.tf.erb b/templates/terraform/examples/url_map_traffic_director_path_partial.tf.erb index 97b40b773645..aea9bc2f8333 100644 --- a/templates/terraform/examples/url_map_traffic_director_path_partial.tf.erb +++ b/templates/terraform/examples/url_map_traffic_director_path_partial.tf.erb @@ -1,7 +1,7 @@ resource "google_compute_url_map" "<%= ctx[:primary_resource_id] %>" { name = "<%= ctx[:vars]['url_map_name'] %>" description = "a description" - default_service = google_compute_backend_service.home.self_link + default_service = google_compute_backend_service.home.id host_rule { hosts = ["mysite.com"] @@ -10,7 +10,7 @@ resource "google_compute_url_map" "<%= ctx[:primary_resource_id] %>" { path_matcher { name = "allpaths" - default_service = google_compute_backend_service.home.self_link + default_service = google_compute_backend_service.home.id path_rule { paths = 
["/home"] @@ -26,7 +26,7 @@ resource "google_compute_url_map" "<%= ctx[:primary_resource_id] %>" { disabled = false } weighted_backend_services { - backend_service = google_compute_backend_service.home.self_link + backend_service = google_compute_backend_service.home.id weight = 400 header_action { request_headers_to_remove = ["RemoveMe"] @@ -48,7 +48,7 @@ resource "google_compute_url_map" "<%= ctx[:primary_resource_id] %>" { } test { - service = google_compute_backend_service.home.self_link + service = google_compute_backend_service.home.id host = "hi.com" path = "/home" } @@ -60,7 +60,7 @@ resource "google_compute_backend_service" "home" { protocol = "HTTP" timeout_sec = 10 - health_checks = [google_compute_health_check.default.self_link] + health_checks = [google_compute_health_check.default.id] load_balancing_scheme = "INTERNAL_SELF_MANAGED" } diff --git a/templates/terraform/examples/url_map_traffic_director_route.tf.erb b/templates/terraform/examples/url_map_traffic_director_route.tf.erb index 2f4cb7f4f0cc..e959621a45e1 100644 --- a/templates/terraform/examples/url_map_traffic_director_route.tf.erb +++ b/templates/terraform/examples/url_map_traffic_director_route.tf.erb @@ -1,7 +1,7 @@ resource "google_compute_url_map" "<%= ctx[:primary_resource_id] %>" { name = "<%= ctx[:vars]['url_map_name'] %>" description = "a description" - default_service = google_compute_backend_service.home.self_link + default_service = google_compute_backend_service.home.id host_rule { hosts = ["mysite.com"] @@ -10,7 +10,7 @@ resource "google_compute_url_map" "<%= ctx[:primary_resource_id] %>" { path_matcher { name = "allpaths" - default_service = google_compute_backend_service.home.self_link + default_service = google_compute_backend_service.home.id route_rules { priority = 1 @@ -59,7 +59,7 @@ resource "google_compute_url_map" "<%= ctx[:primary_resource_id] %>" { } test { - service = google_compute_backend_service.home.self_link + service = google_compute_backend_service.home.id host = "hi.com" path = "/home" } @@ -71,7 +71,7 @@ resource "google_compute_backend_service" "home" { protocol = "HTTP" timeout_sec = 10 - health_checks = [google_compute_health_check.default.self_link] + health_checks = [google_compute_health_check.default.id] load_balancing_scheme = "INTERNAL_SELF_MANAGED" } diff --git a/templates/terraform/examples/url_map_traffic_director_route_partial.tf.erb b/templates/terraform/examples/url_map_traffic_director_route_partial.tf.erb index cfa07f561772..c0c8777ee9f6 100644 --- a/templates/terraform/examples/url_map_traffic_director_route_partial.tf.erb +++ b/templates/terraform/examples/url_map_traffic_director_route_partial.tf.erb @@ -1,7 +1,7 @@ resource "google_compute_url_map" "<%= ctx[:primary_resource_id] %>" { name = "<%= ctx[:vars]['url_map_name'] %>" description = "a description" - default_service = google_compute_backend_service.home.self_link + default_service = google_compute_backend_service.home.id host_rule { hosts = ["mysite.com"] @@ -10,7 +10,7 @@ resource "google_compute_url_map" "<%= ctx[:primary_resource_id] %>" { path_matcher { name = "allpaths" - default_service = google_compute_backend_service.home.self_link + default_service = google_compute_backend_service.home.id route_rules { priority = 1 @@ -30,7 +30,7 @@ resource "google_compute_url_map" "<%= ctx[:primary_resource_id] %>" { } test { - service = google_compute_backend_service.home.self_link + service = google_compute_backend_service.home.id host = "hi.com" path = "/home" } @@ -42,7 +42,7 @@ resource 
"google_compute_backend_service" "home" { protocol = "HTTP" timeout_sec = 10 - health_checks = [google_compute_health_check.default.self_link] + health_checks = [google_compute_health_check.default.id] load_balancing_scheme = "INTERNAL_SELF_MANAGED" } diff --git a/templates/terraform/examples/vpn_tunnel_basic.tf.erb b/templates/terraform/examples/vpn_tunnel_basic.tf.erb index 4991d0d0234b..6ee49cb02c60 100644 --- a/templates/terraform/examples/vpn_tunnel_basic.tf.erb +++ b/templates/terraform/examples/vpn_tunnel_basic.tf.erb @@ -3,7 +3,7 @@ resource "google_compute_vpn_tunnel" "tunnel1" { peer_ip = "15.0.0.120" shared_secret = "a secret message" - target_vpn_gateway = google_compute_vpn_gateway.target_gateway.self_link + target_vpn_gateway = google_compute_vpn_gateway.target_gateway.id depends_on = [ google_compute_forwarding_rule.fr_esp, @@ -14,7 +14,7 @@ resource "google_compute_vpn_tunnel" "tunnel1" { resource "google_compute_vpn_gateway" "target_gateway" { name = "<%= ctx[:vars]['target_vpn_gateway_name'] %>" - network = google_compute_network.network1.self_link + network = google_compute_network.network1.id } resource "google_compute_network" "network1" { @@ -29,7 +29,7 @@ resource "google_compute_forwarding_rule" "fr_esp" { name = "<%= ctx[:vars]['esp_forwarding_rule_name'] %>" ip_protocol = "ESP" ip_address = google_compute_address.vpn_static_ip.address - target = google_compute_vpn_gateway.target_gateway.self_link + target = google_compute_vpn_gateway.target_gateway.id } resource "google_compute_forwarding_rule" "fr_udp500" { @@ -37,7 +37,7 @@ resource "google_compute_forwarding_rule" "fr_udp500" { ip_protocol = "UDP" port_range = "500" ip_address = google_compute_address.vpn_static_ip.address - target = google_compute_vpn_gateway.target_gateway.self_link + target = google_compute_vpn_gateway.target_gateway.id } resource "google_compute_forwarding_rule" "fr_udp4500" { @@ -45,7 +45,7 @@ resource "google_compute_forwarding_rule" "fr_udp4500" { ip_protocol = "UDP" port_range = "4500" ip_address = google_compute_address.vpn_static_ip.address - target = google_compute_vpn_gateway.target_gateway.self_link + target = google_compute_vpn_gateway.target_gateway.id } resource "google_compute_route" "route1" { @@ -54,5 +54,5 @@ resource "google_compute_route" "route1" { dest_range = "15.0.0.0/24" priority = 1000 - next_hop_vpn_tunnel = google_compute_vpn_tunnel.tunnel1.self_link + next_hop_vpn_tunnel = google_compute_vpn_tunnel.tunnel1.id } diff --git a/templates/terraform/examples/vpn_tunnel_beta.tf.erb b/templates/terraform/examples/vpn_tunnel_beta.tf.erb index a363eb6b5da0..0b2415d4e1df 100644 --- a/templates/terraform/examples/vpn_tunnel_beta.tf.erb +++ b/templates/terraform/examples/vpn_tunnel_beta.tf.erb @@ -4,7 +4,7 @@ resource "google_compute_vpn_tunnel" "tunnel1" { peer_ip = "15.0.0.120" shared_secret = "a secret message" - target_vpn_gateway = google_compute_vpn_gateway.target_gateway.self_link + target_vpn_gateway = google_compute_vpn_gateway.target_gateway.id depends_on = [ google_compute_forwarding_rule.fr_esp, @@ -20,7 +20,7 @@ resource "google_compute_vpn_tunnel" "tunnel1" { resource "google_compute_vpn_gateway" "target_gateway" { provider = google-beta name = "<%= ctx[:vars]['target_vpn_gateway_name'] %>" - network = google_compute_network.network1.self_link + network = google_compute_network.network1.id } resource "google_compute_network" "network1" { @@ -38,7 +38,7 @@ resource "google_compute_forwarding_rule" "fr_esp" { name = "<%= ctx[:vars]['esp_forwarding_rule_name'] %>" 
ip_protocol = "ESP" ip_address = google_compute_address.vpn_static_ip.address - target = google_compute_vpn_gateway.target_gateway.self_link + target = google_compute_vpn_gateway.target_gateway.id } resource "google_compute_forwarding_rule" "fr_udp500" { @@ -47,7 +47,7 @@ resource "google_compute_forwarding_rule" "fr_udp500" { ip_protocol = "UDP" port_range = "500" ip_address = google_compute_address.vpn_static_ip.address - target = google_compute_vpn_gateway.target_gateway.self_link + target = google_compute_vpn_gateway.target_gateway.id } resource "google_compute_forwarding_rule" "fr_udp4500" { @@ -56,7 +56,7 @@ resource "google_compute_forwarding_rule" "fr_udp4500" { ip_protocol = "UDP" port_range = "4500" ip_address = google_compute_address.vpn_static_ip.address - target = google_compute_vpn_gateway.target_gateway.self_link + target = google_compute_vpn_gateway.target_gateway.id } resource "google_compute_route" "route1" { @@ -66,7 +66,7 @@ resource "google_compute_route" "route1" { dest_range = "15.0.0.0/24" priority = 1000 - next_hop_vpn_tunnel = google_compute_vpn_tunnel.tunnel1.self_link + next_hop_vpn_tunnel = google_compute_vpn_tunnel.tunnel1.id } provider "google-beta" { diff --git a/templates/terraform/expand_property_method.erb b/templates/terraform/expand_property_method.erb index f1169861172d..0c7301f9ebad 100644 --- a/templates/terraform/expand_property_method.erb +++ b/templates/terraform/expand_property_method.erb @@ -13,10 +13,11 @@ # limitations under the License. -%> <% if property.custom_expand -%> -<%= lines(compile_template(property.custom_expand, +<%= lines(compile_template(pwd + '/' + property.custom_expand, prefix: prefix, property: property, - object: object)) -%> + object: object, + pwd: pwd)) -%> <% else -%> <%# Generate expanders for Maps %> @@ -35,8 +36,14 @@ func expand<%= prefix -%><%= titlelize_property(property) -%>(v interface{}, d T transformed<%= titlelize_property(prop) -%>, err := expand<%= prefix -%><%= titlelize_property(property) -%><%= titlelize_property(prop) -%>(original["<%= Google::StringUtils.underscore(prop.name) -%>"], d, config) if err != nil { return nil, err + <% if prop.send_empty_value -%> + } else { + transformed["<%= prop.api_name -%>"] = transformed<%= titlelize_property(prop) -%> + <% else -%> + } else if val := reflect.ValueOf(transformed<%= titlelize_property(prop) -%>); val.IsValid() && !isEmptyValue(val) { + transformed["<%= prop.api_name -%>"] = transformed<%= titlelize_property(prop) -%> + <% end -%> } - transformed["<%= prop.api_name -%>"] = transformed<%= titlelize_property(prop) -%> <% end -%> @@ -150,7 +157,7 @@ func expand<%= prefix -%><%= titlelize_property(property) -%>(v interface{}, d T if raw == nil { return nil, fmt.Errorf("Invalid value for <%= property.name.underscore -%>: nil") } - f, err := <%= build_expand_resource_ref('raw.(string)', property.item_type) %> + f, err := <%= build_expand_resource_ref('raw.(string)', property.item_type, pwd) %> if err != nil { return nil, fmt.Errorf("Invalid value for <%= property.name.underscore -%>: %s", err) } @@ -160,7 +167,7 @@ func expand<%= prefix -%><%= titlelize_property(property) -%>(v interface{}, d T } <% else -%> <% if property.is_a?(Api::Type::ResourceRef) -%> - f, err := <%= build_expand_resource_ref('v.(string)', property) %> + f, err := <%= build_expand_resource_ref('v.(string)', property, pwd) %> if err != nil { return nil, fmt.Errorf("Invalid value for <%= property.name.underscore -%>: %s", err) } @@ -180,7 +187,7 @@ func expand<%= prefix -%><%= 
titlelize_property(property) -%>(v interface{}, d T <% property.nested_properties.each do |prop| -%> <%# Map is a map from {key -> object} in the API, but Terraform can't represent that so we treat the key as a property of the object in Terraform schema. %> -<%= lines(build_expand_method(prefix + titlelize_property(property), prop, object), 1) -%> +<%= lines(build_expand_method(prefix + titlelize_property(property), prop, object, pwd), 1) -%> <% end -%> <% end -%> diff --git a/templates/terraform/extra_schema_entry/cloudiot_device_registry.go.erb b/templates/terraform/extra_schema_entry/cloudiot_device_registry.go.erb new file mode 100644 index 000000000000..4f9c955aa201 --- /dev/null +++ b/templates/terraform/extra_schema_entry/cloudiot_device_registry.go.erb @@ -0,0 +1,80 @@ +"state_notification_config": { + Type: schema.TypeMap, + Description: `A PubSub topic to publish device state updates.`, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "pubsub_topic_name": { + Type: schema.TypeString, + Description: `PubSub topic name to publish device state updates.`, + Required: true, + DiffSuppressFunc: compareSelfLinkOrResourceName, + }, + }, + }, +}, +"mqtt_config": { + Type: schema.TypeMap, + Description: `Activate or deactivate MQTT.`, + Computed: true, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "mqtt_enabled_state": { + Type: schema.TypeString, + Description: `The field allows MQTT_ENABLED or MQTT_DISABLED`, + Required: true, + ValidateFunc: validation.StringInSlice( + []string{"MQTT_DISABLED", "MQTT_ENABLED"}, false), + }, + }, + }, +}, +"http_config": { + Type: schema.TypeMap, + Description: `Activate or deactivate HTTP.`, + Computed: true, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "http_enabled_state": { + Type: schema.TypeString, + Description: `The field allows HTTP_ENABLED or HTTP_DISABLED`, + Required: true, + ValidateFunc: validation.StringInSlice( + []string{"HTTP_DISABLED", "HTTP_ENABLED"}, false), + }, + }, + }, +}, +"credentials": { + Type: schema.TypeList, + Description: `List of public key certificates to authenticate devices.`, + Optional: true, + MaxItems: 10, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "public_key_certificate": { + Type: schema.TypeMap, + Description: `A public key certificate format and data.`, + Required: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "format": { + Type: schema.TypeString, + Description: `The field allows only X509_CERTIFICATE_PEM.`, + Required: true, + ValidateFunc: validation.StringInSlice( + []string{"X509_CERTIFICATE_PEM"}, false), + }, + "certificate": { + Type: schema.TypeString, + Description: `The certificate data.`, + Required: true, + }, + }, + }, + }, + }, + }, +}, diff --git a/templates/terraform/extra_schema_entry/firewall.erb b/templates/terraform/extra_schema_entry/firewall.erb new file mode 100644 index 000000000000..2997c9083647 --- /dev/null +++ b/templates/terraform/extra_schema_entry/firewall.erb @@ -0,0 +1,21 @@ +<%# The license inside this block applies to this file. + # Copyright 2020 Google Inc. + # Licensed under the Apache License, Version 2.0 (the "License"); + # you may not use this file except in compliance with the License. 
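With the `expand_property_method.erb` hunk earlier in this diff, generated expanders stop writing empty nested values into the request body unless the property is marked `send_empty_value`. A minimal, self-contained sketch of the guard the generated Go now emits; the field names are hypothetical and `isEmptyValue` is simplified from the provider helper of the same name:

```go
package main

import (
	"fmt"
	"reflect"
)

// isEmptyValue is a simplified model of the helper the generated code calls.
func isEmptyValue(v reflect.Value) bool {
	switch v.Kind() {
	case reflect.String, reflect.Map, reflect.Slice, reflect.Array:
		return v.Len() == 0
	case reflect.Bool:
		return !v.Bool()
	case reflect.Int, reflect.Int64:
		return v.Int() == 0
	case reflect.Interface, reflect.Ptr:
		return v.IsNil()
	}
	return false
}

func main() {
	transformed := map[string]interface{}{}

	// Without send_empty_value, an empty expansion result is now dropped
	// instead of being sent to the API as "".
	var transformedFooBar interface{} = ""
	if val := reflect.ValueOf(transformedFooBar); val.IsValid() && !isEmptyValue(val) {
		transformed["fooBar"] = transformedFooBar
	}
	fmt.Println(transformed) // map[] (the empty string was skipped)
}
```
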
+ # You may obtain a copy of the License at + # + # http://www.apache.org/licenses/LICENSE-2.0 + # + # Unless required by applicable law or agreed to in writing, software + # distributed under the License is distributed on an "AS IS" BASIS, + # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + # See the License for the specific language governing permissions and + # limitations under the License. +-%> +"enable_logging": { + Type: schema.TypeBool, + Optional: true, + Computed: true, + Deprecated: "Deprecated in favor of log_config", + Description: "This field denotes whether to enable logging for a particular firewall rule. If logging is enabled, logs will be exported to Stackdriver.", +}, \ No newline at end of file diff --git a/templates/terraform/flatten_property_method.erb b/templates/terraform/flatten_property_method.erb index f3258bd2ddda..70e6d46b7ab0 100644 --- a/templates/terraform/flatten_property_method.erb +++ b/templates/terraform/flatten_property_method.erb @@ -13,9 +13,10 @@ # limitations under the License. -%> <% if property.custom_flatten -%> -<%= lines(compile_template(property.custom_flatten, +<%= lines(compile_template(pwd + '/' + property.custom_flatten, prefix: prefix, - property: property)) -%> + property: property, + pwd: pwd)) -%> <% else -%> <% if tf_types.include?(property.class) -%> func flatten<%= prefix -%><%= titlelize_property(property) -%>(v interface{}, d *schema.ResourceData, config *Config) interface{} { @@ -132,7 +133,7 @@ func flatten<%= prefix -%><%= titlelize_property(property) -%>(v interface{}, d } <% if property.nested_properties? -%> <% property.nested_properties.each do |prop| -%> - <%= lines(build_flatten_method(prefix + titlelize_property(property), prop, object), 1) -%> + <%= lines(build_flatten_method(prefix + titlelize_property(property), prop, object, pwd), 1) -%> <% end -%> <% end -%> <% else -%> diff --git a/templates/terraform/iam/iam_context.go.erb b/templates/terraform/iam/iam_context.go.erb index 70e551ede8e7..55bafd393547 100644 --- a/templates/terraform/iam/iam_context.go.erb +++ b/templates/terraform/iam/iam_context.go.erb @@ -1,13 +1,13 @@ context := map[string]interface{}{ - "random_suffix": acctest.RandString(10), + "random_suffix": randString(t, 10), "role": "<%= object.iam_policy.allowed_iam_role -%>", <% unless object.iam_policy.admin_iam_role.nil? -%> "admin_role": "<%= object.iam_policy.admin_iam_role-%>", <% end -%> <% unless object.iam_policy.test_project_name.nil? -%> - "project_id" : fmt.Sprintf("<%= object.iam_policy.test_project_name -%>%s", acctest.RandString(10)), + "project_id" : fmt.Sprintf("<%= object.iam_policy.test_project_name -%>%s", randString(t, 10)), <% end -%> -<%= lines(compile('templates/terraform/env_var_context.go.erb')) -%> +<%= lines(compile(pwd + '/templates/terraform/env_var_context.go.erb')) -%> <% unless example.test_vars_overrides.nil? -%> <% example.test_vars_overrides.each do |var_name, override| -%> "<%= var_name %>": <%= override %>, diff --git a/templates/terraform/iam_policy.go.erb b/templates/terraform/iam_policy.go.erb index 546534926f9e..3ea2a255365b 100644 --- a/templates/terraform/iam_policy.go.erb +++ b/templates/terraform/iam_policy.go.erb @@ -12,7 +12,7 @@ # See the License for the specific language governing permissions and # limitations under the License. 
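For context on the `randString(t, 10)` calls in `iam_context.go.erb` above: the generated tests build a context map whose values are spliced into `%{...}` tokens, like the `%{random_suffix}` disk names in the stateful IGM examples earlier. A rough, self-contained sketch of that flow; `randString` and `nprintf` here are simplified stand-ins for the provider's helpers (the real `randString` takes a `*testing.T` so randomness can be controlled per test):

```go
package main

import (
	"fmt"
	"math/rand"
	"strings"
	"time"
)

// randString stands in for the test helper: a random lowercase/digit string.
func randString(length int) string {
	r := rand.New(rand.NewSource(time.Now().UnixNano()))
	const chars = "abcdefghijklmnopqrstuvwxyz0123456789"
	b := make([]byte, length)
	for i := range b {
		b[i] = chars[r.Intn(len(chars))]
	}
	return string(b)
}

// nprintf interpolates %{key} tokens from a context map, in the spirit of
// the test framework's interpolation helper.
func nprintf(format string, params map[string]interface{}) string {
	for key, val := range params {
		format = strings.Replace(format, "%{"+key+"}", fmt.Sprintf("%v", val), -1)
	}
	return format
}

func main() {
	context := map[string]interface{}{
		"random_suffix": randString(10),
	}
	// "test-disk-%{random_suffix}" becomes a unique name on every test run.
	fmt.Println(nprintf("test-disk-%{random_suffix}", context))
}
```
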
-%> -<%= lines(autogen_notice :go) -%> +<%= lines(autogen_notice(:go, pwd)) -%> package google import ( @@ -57,7 +57,7 @@ var <%= resource_name -%>IamSchema = map[string]*schema.Schema{ } <% unless object.iam_policy.custom_diff_suppress.nil? -%> -<%= lines(compile(object.iam_policy.custom_diff_suppress)) -%> +<%= lines(compile(pwd + '/' + object.iam_policy.custom_diff_suppress)) -%> <% end -%> type <%= resource_name -%>IamUpdater struct { diff --git a/templates/terraform/nested_property_documentation.erb b/templates/terraform/nested_property_documentation.erb index f52def3caa49..50c53b8efeba 100644 --- a/templates/terraform/nested_property_documentation.erb +++ b/templates/terraform/nested_property_documentation.erb @@ -1,7 +1,7 @@ <% if property.flatten_object -%> <% property.nested_properties.each do |prop| -%> -<%= lines(build_nested_property_documentation(prop)) -%> +<%= lines(build_nested_property_documentation(prop, pwd)) -%> <% end -%> <% elsif property.nested_properties? @@ -11,10 +11,10 @@ The `<%= property.name.underscore -%>` block <%= if property.output then "contai * `<%= property.key_name.underscore -%>` - (Required) The identifier for this object. Format specified above. <% end -%> <% property.nested_properties.each do |prop| -%> -<%= lines(build_property_documentation(prop)) -%> +<%= lines(build_property_documentation(prop, pwd)) -%> <% end -%> <% property.nested_properties.each do |prop| -%> -<%= lines(build_nested_property_documentation(prop)) -%> +<%= lines(build_nested_property_documentation(prop, pwd)) -%> <% end -%> <% end -%> diff --git a/templates/terraform/nested_query.go.erb b/templates/terraform/nested_query.go.erb index 79e0e04d9351..2a989c64a70b 100644 --- a/templates/terraform/nested_query.go.erb +++ b/templates/terraform/nested_query.go.erb @@ -13,9 +13,7 @@ func flattenNested<%= resource_name -%>(d *schema.ResourceData, meta interface{} <% end -%> v, ok = res["<%=object.nested_query.keys[-1]-%>"] if !ok || v == nil { - // It's possible that there is only one of these resources and - // that res represents that resource. - v = res + return nil,nil } switch v.(type) { diff --git a/templates/terraform/objectlib/base.go.erb b/templates/terraform/objectlib/base.go.erb index 917fecc91388..15f9804fd6ba 100644 --- a/templates/terraform/objectlib/base.go.erb +++ b/templates/terraform/objectlib/base.go.erb @@ -1,16 +1,18 @@ -<%= lines(autogen_notice :go) -%> +<%= lines(autogen_notice(:go, pwd)) -%> package google <% resource_name = product_ns + object.name properties = object.all_user_properties - api_version = @base_url.split("/")[-1] # See discussion on asset name here: https://github.com/GoogleCloudPlatform/magic-modules/pull/1520 asset_name_template = '//' + product_ns.downcase + '.googleapis.com/' + (!object.self_link.nil? && !object.self_link.empty? ? object.self_link : object.base_url + '/{{name}}') + version_regex = /\/(v\d[^\/]*)\// + api_version = version_regex.match?(asset_name_template) ? 
version_regex.match(asset_name_template)[1] : @base_url.split("/")[-1] + asset_name_template.gsub!(version_regex, '/') %> -<%= lines(compile(object.custom_code.constants)) if object.custom_code.constants -%> +<%= lines(compile(pwd + '/' + object.custom_code.constants)) if object.custom_code.constants -%> func Get<%= resource_name -%>CaiObject(d TerraformResourceData, config *Config) (Asset, error) { name, err := assetName(d, config, "<%= asset_name_template -%>") @@ -61,11 +63,11 @@ func Get<%= resource_name -%>ApiObject(d TerraformResourceData, config *Config) <% if object.custom_code.encoder -%> func resource<%= resource_name -%>Encoder(d TerraformResourceData, meta interface{}, obj map[string]interface{}) (map[string]interface{}, error) { -<%= lines(compile(object.custom_code.encoder)) -%> +<%= lines(compile(pwd + '/' + object.custom_code.encoder)) -%> } <% end -%> <% object.settable_properties.each do |prop| -%> -<%= lines(build_expand_method(resource_name, prop, object), 1) -%> +<%= lines(build_expand_method(resource_name, prop, object, pwd), 1) -%> <% end -%> diff --git a/templates/terraform/operation.go.erb b/templates/terraform/operation.go.erb index 60a8e69de3a7..c5c92d51fdb7 100644 --- a/templates/terraform/operation.go.erb +++ b/templates/terraform/operation.go.erb @@ -2,7 +2,7 @@ product_name = object.__product.name has_project = object.base_url.include?("{{project}}") -%> -<%= lines(autogen_notice :go) -%> +<%= lines(autogen_notice(:go, pwd)) -%> package google import ( @@ -49,24 +49,24 @@ func create<%= product_name %>Waiter(config *Config, op map[string]interface{}, Might as well just nolint it so we can pass the linter checks. -%> // nolint: deadcode,unused -func <%= product_name.camelize(:lower) -%>OperationWaitTimeWithResponse(config *Config, op map[string]interface{}, response *map[string]interface{},<% if has_project -%> project,<% end -%> activity string, timeoutMinutes int) error { +func <%= product_name.camelize(:lower) -%>OperationWaitTimeWithResponse(config *Config, op map[string]interface{}, response *map[string]interface{},<% if has_project -%> project,<% end -%> activity string, timeout time.Duration) error { w, err := create<%= product_name %>Waiter(config, op, <% if has_project -%> project, <%end-%> activity) if err != nil || w == nil { // If w is nil, the op was synchronous. return err } - if err := OperationWait(w, activity, timeoutMinutes, config.PollInterval); err != nil { + if err := OperationWait(w, activity, timeout, config.PollInterval); err != nil { return err } return json.Unmarshal([]byte(w.CommonOperationWaiter.Op.Response), response) } <% end -%> -func <%= product_name.camelize(:lower) -%>OperationWaitTime(config *Config, op map[string]interface{}, <% if has_project -%> project,<% end -%> activity string, timeoutMinutes int) error { +func <%= product_name.camelize(:lower) -%>OperationWaitTime(config *Config, op map[string]interface{}, <% if has_project -%> project,<% end -%> activity string, timeout time.Duration) error { w, err := create<%= product_name %>Waiter(config, op, <% if has_project -%> project, <%end-%> activity) if err != nil || w == nil { // If w is nil, the op was synchronous. 
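// A nil waiter means the API call completed inline and returned the finished
// resource rather than an operation object, so there is nothing to poll;
// whatever err holds is the final result.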
return err } - return OperationWait(w, activity, timeoutMinutes, config.PollInterval) + return OperationWait(w, activity, timeout, config.PollInterval) } diff --git a/templates/terraform/post_create/cloud_asset_feed.go.erb b/templates/terraform/post_create/cloud_asset_feed.go.erb new file mode 100644 index 000000000000..ea3cf4fe41b0 --- /dev/null +++ b/templates/terraform/post_create/cloud_asset_feed.go.erb @@ -0,0 +1,2 @@ +// Restore the original value of user_project_override. +config.UserProjectOverride = origUserProjectOverride \ No newline at end of file diff --git a/templates/terraform/post_create/compute_backend_service_security_policy.go.erb b/templates/terraform/post_create/compute_backend_service_security_policy.go.erb index 451631628976..a34cf43d8639 100644 --- a/templates/terraform/post_create/compute_backend_service_security_policy.go.erb +++ b/templates/terraform/post_create/compute_backend_service_security_policy.go.erb @@ -11,7 +11,8 @@ if o, n := d.GetChange("security_policy"); o.(string) != n.(string) { if err != nil { return errwrap.Wrapf("Error setting Backend Service security policy: {{err}}", err) } - waitErr := computeOperationWait(config, op, project, "Setting Backend Service Security Policy") + // This uses the create timeout for simplicity, though technically this code appears in both create and update + waitErr := computeOperationWaitTime(config, op, project, "Setting Backend Service Security Policy", d.Timeout(schema.TimeoutCreate)) if waitErr != nil { return waitErr } diff --git a/templates/terraform/post_create/compute_network_delete_default_route.erb b/templates/terraform/post_create/compute_network_delete_default_route.erb index 31f31ef5612a..70f1b1d3f0a2 100644 --- a/templates/terraform/post_create/compute_network_delete_default_route.erb +++ b/templates/terraform/post_create/compute_network_delete_default_route.erb @@ -16,7 +16,7 @@ if d.Get("delete_default_routes_on_create").(bool) { if err != nil { return fmt.Errorf("Error deleting route: %s", err) } - err = computeOperationWait(config, op, project, "Deleting Route") + err = computeOperationWaitTime(config, op, project, "Deleting Route", d.Timeout(schema.TimeoutCreate)) if err != nil { return err } diff --git a/templates/terraform/post_create/labels.erb b/templates/terraform/post_create/labels.erb index db554f32914d..030fd48fd2c4 100644 --- a/templates/terraform/post_create/labels.erb +++ b/templates/terraform/post_create/labels.erb @@ -27,7 +27,7 @@ if v, ok := d.GetOkExists("labels"); !isEmptyValue(reflect.ValueOf(v)) && (ok || err = computeOperationWaitTime( config, res, project, "Updating <%= resource_name -%> Labels", - int(d.Timeout(schema.TimeoutUpdate).Minutes())) + d.Timeout(schema.TimeoutUpdate)) if err != nil { return err diff --git a/templates/terraform/post_create/set_computed_name.erb b/templates/terraform/post_create/set_computed_name.erb index 54638e146a87..4a3342ee8d8d 100644 --- a/templates/terraform/post_create/set_computed_name.erb +++ b/templates/terraform/post_create/set_computed_name.erb @@ -1,7 +1,15 @@ // `name` is autogenerated from the api so needs to be set post-create name, ok := res["name"] if !ok { - return fmt.Errorf("Create response didn't contain critical fields. Create may not have succeeded.") + respBody, ok := res["response"] + if !ok { + return fmt.Errorf("Create response didn't contain critical fields. 
Create may not have succeeded.")
+  }
+
+  name, ok = respBody.(map[string]interface{})["name"]
+  if !ok {
+    return fmt.Errorf("Create response didn't contain critical fields. Create may not have succeeded.")
+  }
 }
 d.Set("name", name.(string))
 d.SetId(name.(string))
diff --git a/templates/terraform/post_update/compute_per_instance_config.go.erb b/templates/terraform/post_update/compute_per_instance_config.go.erb
new file mode 100644
index 000000000000..2d641b55ea40
--- /dev/null
+++ b/templates/terraform/post_update/compute_per_instance_config.go.erb
@@ -0,0 +1,40 @@
+// Instance name in applyUpdatesToInstances request must include zone
+instanceName, err := replaceVars(d, config, "zones/{{zone}}/instances/{{name}}")
+if err != nil {
+	return err
+}
+
+obj = make(map[string]interface{})
+obj["instances"] = []string{instanceName}
+
+minAction := d.Get("minimal_action")
+if minAction == "" {
+	minAction = "NONE"
+}
+obj["minimalAction"] = minAction
+
+mostDisruptiveAction := d.Get("most_disruptive_action_allowed")
+if mostDisruptiveAction == "" {
+	mostDisruptiveAction = "REPLACE"
+}
+obj["mostDisruptiveActionAllowed"] = mostDisruptiveAction
+
+url, err = replaceVars(d, config, "{{ComputeBasePath}}projects/{{project}}/zones/{{zone}}/instanceGroupManagers/{{instance_group_manager}}/applyUpdatesToInstances")
+if err != nil {
+	return err
+}
+
+log.Printf("[DEBUG] Applying updates to PerInstanceConfig %q: %#v", d.Id(), obj)
+res, err = sendRequestWithTimeout(config, "POST", project, url, obj, d.Timeout(schema.TimeoutUpdate))
+
+if err != nil {
+	return fmt.Errorf("Error updating PerInstanceConfig %q: %s", d.Id(), err)
+}
+
+err = computeOperationWaitTime(
+	config, res, project, "Applying update to PerInstanceConfig",
+	d.Timeout(schema.TimeoutUpdate))
+
+if err != nil {
+	return err
+}
\ No newline at end of file
diff --git a/templates/terraform/post_update/compute_region_per_instance_config.go.erb b/templates/terraform/post_update/compute_region_per_instance_config.go.erb
new file mode 100644
index 000000000000..75870d1a5568
--- /dev/null
+++ b/templates/terraform/post_update/compute_region_per_instance_config.go.erb
@@ -0,0 +1,40 @@
+// Instance name in applyUpdatesToInstances request must include zone
+instanceName, err := findInstanceName(d, config)
+if err != nil {
+	return err
+}
+
+obj = make(map[string]interface{})
+obj["instances"] = []string{instanceName}
+
+minAction := d.Get("minimal_action")
+if minAction == "" {
+	minAction = "NONE"
+}
+obj["minimalAction"] = minAction
+
+mostDisruptiveAction := d.Get("most_disruptive_action_allowed")
+if mostDisruptiveAction == "" {
+	mostDisruptiveAction = "REPLACE"
+}
+obj["mostDisruptiveActionAllowed"] = mostDisruptiveAction
+
+url, err = replaceVars(d, config, "{{ComputeBasePath}}projects/{{project}}/regions/{{region}}/instanceGroupManagers/{{region_instance_group_manager}}/applyUpdatesToInstances")
+if err != nil {
+	return err
+}
+
+log.Printf("[DEBUG] Applying updates to PerInstanceConfig %q: %#v", d.Id(), obj)
+res, err = sendRequestWithTimeout(config, "POST", project, url, obj, d.Timeout(schema.TimeoutUpdate))
+
+if err != nil {
+	return fmt.Errorf("Error updating PerInstanceConfig %q: %s", d.Id(), err)
+}
+
+err = computeOperationWaitTime(
+	config, res, project, "Applying update to PerInstanceConfig",
+	d.Timeout(schema.TimeoutUpdate))
+
+if err != nil {
+	return err
+}
\ No newline at end of file
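One note on the `computeOperationWaitTime(..., d.Timeout(schema.TimeoutUpdate))` calls in these new templates: as the `operation.go.erb` hunk earlier shows, the waiters now take a `time.Duration` instead of whole minutes. The old `int(d.Timeout(...).Minutes())` plumbing truncated sub-minute timeouts, which this small runnable check illustrates:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Old plumbing: timeouts were passed as int(d.Timeout(...).Minutes()).
	fmt.Println(int((90 * time.Second).Minutes())) // 1 (30s silently dropped)
	fmt.Println(int((30 * time.Second).Minutes())) // 0 (timeout lost entirely)

	// New plumbing: the waiter receives the exact duration.
	fmt.Println(90 * time.Second) // 1m30s
}
```

diff --git a/templates/terraform/pre_create/cloud_asset_feed.go.erb b/templates/terraform/pre_create/cloud_asset_feed.go.erb
new file mode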
100644 index 000000000000..a8354702cc68 --- /dev/null +++ b/templates/terraform/pre_create/cloud_asset_feed.go.erb @@ -0,0 +1,16 @@ +// This should never happen, but the linter complains otherwise with ineffectual assignment to `project` +if project == "dummy lint" { + log.Printf("[DEBUG] Found project in url: %s", project) +} +// Send the project ID in the X-Goog-User-Project header. +origUserProjectOverride := config.UserProjectOverride +config.UserProjectOverride = true +// If we have a billing project, use that one in the header. +bp, bpok := d.GetOk("billing_project") +if bpok && bp != "" { + project = bp.(string) +} else { + // otherwise, use the resource's project + rp, _ := d.GetOk("project") + project = rp.(string) +} diff --git a/templates/terraform/pre_delete/compute_per_instance_config.go.erb b/templates/terraform/pre_delete/compute_per_instance_config.go.erb new file mode 100644 index 000000000000..0fcf58a17620 --- /dev/null +++ b/templates/terraform/pre_delete/compute_per_instance_config.go.erb @@ -0,0 +1,3 @@ +obj = map[string]interface{}{ + "names": [1]string{d.Get("name").(string)}, +} \ No newline at end of file diff --git a/templates/terraform/pre_delete/detach_disk.erb b/templates/terraform/pre_delete/detach_disk.erb index aecc82e09a1e..2f5a4e923801 100644 --- a/templates/terraform/pre_delete/detach_disk.erb +++ b/templates/terraform/pre_delete/detach_disk.erb @@ -41,8 +41,8 @@ if v, ok := readRes["users"].([]interface{}); ok { return fmt.Errorf("Error detaching disk %s from instance %s/%s/%s: %s", call.deviceName, call.project, call.zone, call.instance, err.Error()) } - err = computeOperationWait(config, op, call.project, - fmt.Sprintf("Detaching disk from %s/%s/%s", call.project, call.zone, call.instance)) + err = computeOperationWaitTime(config, op, call.project, + fmt.Sprintf("Detaching disk from %s/%s/%s", call.project, call.zone, call.instance), d.Timeout(schema.TimeoutDelete)) if err != nil { if opErr, ok := err.(ComputeOperationError); ok && len(opErr.Errors) == 1 && opErr.Errors[0].Code == "RESOURCE_NOT_FOUND" { log.Printf("[WARN] instance %q was deleted while awaiting detach", call.instance) diff --git a/templates/terraform/pre_update/cloudiot_device_registry.go.erb b/templates/terraform/pre_update/cloudiot_device_registry.go.erb new file mode 100644 index 000000000000..8170993ab0c5 --- /dev/null +++ b/templates/terraform/pre_update/cloudiot_device_registry.go.erb @@ -0,0 +1,35 @@ +log.Printf("[DEBUG] updateMask before adding extra schema entries %q: %v", d.Id(), updateMask) + +log.Printf("[DEBUG] Pre-update on state notification config: %q", d.Id()) +if d.HasChange("state_notification_config") { + log.Printf("[DEBUG] %q stateNotificationConfig.pubsubTopicName has a change. Adding it to the update mask", d.Id()) + updateMask = append(updateMask, "stateNotificationConfig.pubsubTopicName") +} + +log.Printf("[DEBUG] Pre-update on MQTT config: %q", d.Id()) +if d.HasChange("mqtt_config") { + log.Printf("[DEBUG] %q mqttConfig.mqttEnabledState has a change. Adding it to the update mask", d.Id()) + updateMask = append(updateMask, "mqttConfig.mqttEnabledState") +} + +log.Printf("[DEBUG] Pre-update on HTTP config: %q", d.Id()) +if d.HasChange("http_config") { + log.Printf("[DEBUG] %q httpConfig.httpEnabledState has a change. 
Adding it to the update mask", d.Id()) + updateMask = append(updateMask, "httpConfig.httpEnabledState") +} + +log.Printf("[DEBUG] Pre-update on credentials: %q", d.Id()) +if d.HasChange("credentials") { + log.Printf("[DEBUG] %q credentials has a change. Adding it to the update mask", d.Id()) + updateMask = append(updateMask, "credentials") +} + +log.Printf("[DEBUG] updateMask after adding extra schema entries %q: %v", d.Id(), updateMask) + +// Refreshing updateMask after adding extra schema entries +url, err = addQueryParams(url, map[string]string{"updateMask": strings.Join(updateMask, ",")}) +if err != nil { + return err +} + +log.Printf("[DEBUG] Update URL %q: %v", d.Id(), url) diff --git a/templates/terraform/property_documentation.erb b/templates/terraform/property_documentation.erb index 3f4bf27169d3..ca30bc5216ab 100644 --- a/templates/terraform/property_documentation.erb +++ b/templates/terraform/property_documentation.erb @@ -14,6 +14,21 @@ <% end -%> <% end -%> <%= indent(property.description.strip.gsub("\n\n", "\n"), 2) -%> +<% if property.is_a?(Api::Type::Enum) && !property.output && !property.skip_docs_values -%> + + +<% unless property.default_value.nil? || property.default_value == "" -%> + Default value: `<%= property.default_value %>` + +<% end -%> + Possible values are: +<% property.values.select { |v| v != "" }.each do |v| -%> + * `<%= v %>` +<% end -%> +<% end -%> +<% if property.sensitive -%> + **Note**: This property is sensitive and will not be displayed in the plan. +<% end -%> <% if property.is_a?(Api::Type::NestedObject) || property.is_a?(Api::Type::Map) || (property.is_a?(Api::Type::Array) && property.item_type.is_a?(Api::Type::NestedObject)) -%> Structure is documented below. <% end -%> diff --git a/templates/terraform/resource.erb b/templates/terraform/resource.erb index e6396d09897d..e2c52e66b3f8 100644 --- a/templates/terraform/resource.erb +++ b/templates/terraform/resource.erb @@ -12,11 +12,11 @@ # See the License for the specific language governing permissions and # limitations under the License. -%> -<%= lines(autogen_notice :go) -%> +<%= lines(autogen_notice(:go, pwd)) -%> package google -<%= lines(compile(object.custom_code.constants)) if object.custom_code.constants -%> +<%= lines(compile(pwd + '/' + object.custom_code.constants)) if object.custom_code.constants -%> <% resource_name = product_ns + object.name @@ -28,7 +28,7 @@ package google client_name_camel = client_name.camelize(:lower) client_name_pascal = client_name.camelize(:upper) client_name_lower = client_name.downcase - has_project = object.base_url.include?('{{project}}') + has_project = object.base_url.include?('{{project}}') || (object.create_url && object.create_url.include?('{{project}}')) has_region = object.base_url.include?('{{region}}') && object.parameters.any?{ |p| p.name == 'region' && p.ignore_read } # In order of preference, use TF override, # general defined timeouts, or default Timeouts @@ -82,22 +82,22 @@ func resource<%= resource_name -%>() *schema.Resource { <% end -%> <% end -%> -<%= lines(compile(object.custom_code.resource_definition)) if object.custom_code.resource_definition -%> +<%= lines(compile(pwd + '/' + object.custom_code.resource_definition)) if object.custom_code.resource_definition -%> Schema: map[string]*schema.Schema{ <% order_properties(properties).each do |prop| -%> -<%= lines(build_schema_property(prop, object)) -%> +<%= lines(build_schema_property(prop, object, pwd)) -%> <% end -%> <%- unless object.virtual_fields.empty? 
-%> <%- object.virtual_fields.each do |field| -%> "<%= field.name -%>": { - Type: schema.TypeBool, + Type: <%= tf_type(field) -%>, Optional: true, - Default: false, + Default: <%= go_literal(field.default_value) -%>, }, <% end -%> <% end -%> -<%= lines(compile(object.custom_code.extra_schema_entry)) if object.custom_code.extra_schema_entry -%> +<%= lines(compile(pwd + '/' + object.custom_code.extra_schema_entry)) if object.custom_code.extra_schema_entry -%> <% if has_project -%> "project": { Type: schema.TypeString, @@ -117,13 +117,13 @@ func resource<%= resource_name -%>() *schema.Resource { } <% properties.each do |prop| -%> -<%= lines(build_subresource_schema(prop, object), 1) -%> +<%= lines(build_subresource_schema(prop, object, pwd), 1) -%> <% end -%> <% object.settable_properties.select {|p| p.unordered_list}.each do |prop| -%> func resource<%= resource_name -%><%= prop.name.camelize(:upper) -%>SetStyleDiff(diff *schema.ResourceDiff, meta interface{}) error { <%= - compile_template('templates/terraform/unordered_list_customize_diff.erb', + compile_template(pwd + '/templates/terraform/unordered_list_customize_diff.erb', prop: prop, resource_name: resource_name) -%> @@ -132,7 +132,7 @@ func resource<%= resource_name -%><%= prop.name.camelize(:upper) -%>SetStyleDiff func resource<%= resource_name -%>Create(d *schema.ResourceData, meta interface{}) error { <% if object.custom_code.custom_create -%> - <%= lines(compile(object.custom_code.custom_create)) -%> + <%= lines(compile(pwd + '/' + object.custom_code.custom_create)) -%> <% else -%> config := meta.(*Config) @@ -201,6 +201,7 @@ func resource<%= resource_name -%>Create(d *schema.ResourceData, meta interface{ project = parts[1] } <% end -%> +<%= lines(compile(pwd + '/' + object.custom_code.pre_create)) if object.custom_code.pre_create -%> res, err := sendRequestWithTimeout(config, "<%= object.create_verb.to_s.upcase -%>", <% if has_project || object.supports_indirect_user_project_override %>project<% else %>""<% end %>, url, obj, d.Timeout(schema.TimeoutCreate)<%= object.error_retry_predicates ? ", " + object.error_retry_predicates.join(',') : "" -%>) if err != nil { <% if object.custom_code.post_create_failure && object.async.nil? # Only add if not handled by async error handling -%> @@ -228,7 +229,7 @@ func resource<%= resource_name -%>Create(d *schema.ResourceData, meta interface{ <% if object.async&.allow?('create') -%> <% if object.async.is_a? 
Provider::Terraform::PollAsync -%> - err = PollingWaitTime(resource<%= resource_name -%>PollRead(d, meta), <%= object.async.check_response_func -%>, "Creating <%= object.name -%>", d.Timeout(schema.TimeoutCreate)) + err = PollingWaitTime(resource<%= resource_name -%>PollRead(d, meta), <%= object.async.check_response_func_existence -%>, "Creating <%= object.name -%>", d.Timeout(schema.TimeoutCreate), <%= object.async.target_occurrences -%>) if err != nil { <% if object.async.suppress_error -%> log.Printf("[ERROR] Unable to confirm eventually consistent <%= object.name -%> %q finished updating: %q", d.Id(), err) @@ -248,7 +249,7 @@ func resource<%= resource_name -%>Create(d *schema.ResourceData, meta interface{ var opRes map[string]interface{} err = <%= client_name_camel -%>OperationWaitTimeWithResponse( config, res, &opRes, <% if has_project -%> project, <% end -%> "Creating <%= object.name -%>", - int(d.Timeout(schema.TimeoutCreate).Minutes())) + d.Timeout(schema.TimeoutCreate)) if err != nil { <% if object.custom_code.post_create_failure -%> resource<%= resource_name -%>PostCreateFailure(d, meta) @@ -298,7 +299,7 @@ func resource<%= resource_name -%>Create(d *schema.ResourceData, meta interface{ <% else -%> err = <%= client_name_camel -%>OperationWaitTime( config, res, <% if has_project -%> project, <% end -%> "Creating <%= object.name -%>", - int(d.Timeout(schema.TimeoutCreate).Minutes())) + d.Timeout(schema.TimeoutCreate)) if err != nil { <% if object.custom_code.post_create_failure -%> @@ -315,7 +316,7 @@ func resource<%= resource_name -%>Create(d *schema.ResourceData, meta interface{ log.Printf("[DEBUG] Finished creating <%= object.name -%> %q: %#v", d.Id(), res) -<%= lines(compile(object.custom_code.post_create)) if object.custom_code.post_create -%> +<%= lines(compile(pwd + '/' + object.custom_code.post_create)) if object.custom_code.post_create -%> return resource<%= resource_name -%>Read(d, meta) <% end # if custom_create -%> @@ -325,7 +326,7 @@ func resource<%= resource_name -%>Create(d *schema.ResourceData, meta interface{ func resource<%= resource_name -%>PollRead(d *schema.ResourceData, meta interface{}) PollReadFunc { return func() (map[string]interface{}, error) { <% if object.async.custom_poll_read -%> -<%= lines(compile(object.async.custom_poll_read)) -%> +<%= lines(compile(pwd + '/' + object.async.custom_poll_read)) -%> <% else -%> config := meta.(*Config) @@ -407,7 +408,11 @@ func resource<%= resource_name -%>Read(d *schema.ResourceData, meta interface{}) <% end -%> res, err := sendRequest(config, "<%= object.read_verb.to_s.upcase -%>", <% if has_project || object.supports_indirect_user_project_override %>project<% else %>""<% end %>, url, nil<%= object.error_retry_predicates ? 
", " + object.error_retry_predicates.join(',') : "" -%>) if err != nil { +<% if object.read_error_transform -%> + return handleNotFoundError(<%= object.read_error_transform %>(err), d, fmt.Sprintf("<%= resource_name -%> %q", d.Id())) +<% else -%> return handleNotFoundError(err, d, fmt.Sprintf("<%= resource_name -%> %q", d.Id())) +<% end -%> } <% if object.nested_query -%> @@ -443,7 +448,7 @@ func resource<%= resource_name -%>Read(d *schema.ResourceData, meta interface{}) // Explicitly set virtual fields to default values if unset <%- object.virtual_fields.each do |field| -%> if _, ok := d.GetOk("<%= field.name -%>"); !ok { - d.Set("<%= field.name -%>", false) + d.Set("<%= field.name -%>", <%= go_literal(field.default_value) -%>) } <% end -%> <% end -%> @@ -594,7 +599,7 @@ if <%= props.map { |prop| "d.HasChange(\"#{prop.name.underscore}\")" }.join ' || project = parts[1] } <% end -%> -<% if object.async.is_a? Api::OpAsync-%> +<% if object.async&.allow?('update') && object.async.is_a?(Api::OpAsync) -%> res, err := sendRequestWithTimeout(config, "<%= key[:update_verb] -%>", <% if has_project || object.supports_indirect_user_project_override %>project<% else %>""<% end %>, url, obj, d.Timeout(schema.TimeoutUpdate)<%= object.error_retry_predicates ? ", " + object.error_retry_predicates.join(',') : "" -%>) <% else -%> _, err = sendRequestWithTimeout(config, "<%= key[:update_verb] -%>", <% if has_project || object.supports_indirect_user_project_override %>project<% else %>""<% end %>, url, obj, d.Timeout(schema.TimeoutUpdate)<%= object.error_retry_predicates ? ", " + object.error_retry_predicates.join(',') : "" -%>) @@ -607,12 +612,12 @@ if <%= props.map { |prop| "d.HasChange(\"#{prop.name.underscore}\")" }.join ' || <% if object.async.is_a? Api::OpAsync-%> err = <%= client_name_camel -%>OperationWaitTime( config, res, <% if has_project -%> project, <% end -%> "Updating <%= object.name -%>", - int(d.Timeout(schema.TimeoutUpdate).Minutes())) + d.Timeout(schema.TimeoutUpdate)) if err != nil { return err } <% elsif object.async.is_a? 
Provider::Terraform::PollAsync -%> - err = PollingWaitTime(resource<%= resource_name -%>PollRead(d, meta), <%= object.async.check_response_func -%>, "Updating <%= object.name -%>", d.Timeout(schema.TimeoutUpdate)) + err = PollingWaitTime(resource<%= resource_name -%>PollRead(d, meta), <%= object.async.check_response_func_existence -%>, "Updating <%= object.name -%>", d.Timeout(schema.TimeoutUpdate), <%= object.async.target_occurrences -%>) if err != nil { <% if object.async.suppress_error-%> log.Printf("[ERROR] Unable to confirm eventually consistent <%= object.name -%> %q finished updating: %q", d.Id(), err) @@ -677,8 +682,8 @@ if <%= props.map { |prop| "d.HasChange(\"#{prop.name.underscore}\")" }.join ' || } log.Printf("[DEBUG] Updating <%= object.name -%> %q: %#v", d.Id(), obj) -<%= lines(compile('templates/terraform/update_mask.erb')) if object.update_mask -%> -<%= lines(compile(object.custom_code.pre_update)) if object.custom_code.pre_update -%> +<%= lines(compile(pwd + '/templates/terraform/update_mask.erb')) if object.update_mask -%> +<%= lines(compile(pwd + '/' + object.custom_code.pre_update)) if object.custom_code.pre_update -%> <% if object.nested_query&.modify_by_patch -%> <%# Keep this after mutex - patch request data relies on current resource state %> obj, err = resource<%= resource_name -%>PatchUpdateEncoder(d, meta, obj) @@ -692,7 +697,7 @@ if <%= props.map { |prop| "d.HasChange(\"#{prop.name.underscore}\")" }.join ' || project = parts[1] } <% end -%> -<% if object.async.is_a? Api::OpAsync-%> +<% if object.async&.allow?('update') && object.async.is_a?(Api::OpAsync) -%> res, err := sendRequestWithTimeout(config, "<%= object.update_verb -%>", <% if has_project || object.supports_indirect_user_project_override %>project<% else %>""<% end %>, url, obj, d.Timeout(schema.TimeoutUpdate)<%= object.error_retry_predicates ? ", " + object.error_retry_predicates.join(',') : "" -%>) <% else -%> _, err = sendRequestWithTimeout(config, "<%= object.update_verb -%>", <% if has_project || object.supports_indirect_user_project_override %>project<% else %>""<% end %>, url, obj, d.Timeout(schema.TimeoutUpdate)<%= object.error_retry_predicates ? ", " + object.error_retry_predicates.join(',') : "" -%>) @@ -706,13 +711,13 @@ if <%= props.map { |prop| "d.HasChange(\"#{prop.name.underscore}\")" }.join ' || <% if object.async.is_a? Api::OpAsync -%> err = <%= client_name_camel -%>OperationWaitTime( config, res, <% if has_project -%> project, <% end -%> "Updating <%= object.name -%>", - int(d.Timeout(schema.TimeoutUpdate).Minutes())) + d.Timeout(schema.TimeoutUpdate)) if err != nil { return err } <% elsif object.async.is_a? 
Provider::Terraform::PollAsync -%> - err = PollingWaitTime(resource<%= resource_name -%>PollRead(d, meta), <%= object.async.check_response_func -%>, "Updating <%= object.name -%>", d.Timeout(schema.TimeoutUpdate)) + err = PollingWaitTime(resource<%= resource_name -%>PollRead(d, meta), <%= object.async.check_response_func_existence -%>, "Updating <%= object.name -%>", d.Timeout(schema.TimeoutUpdate), <%= object.async.target_occurrences -%>) if err != nil { <% if object.async.suppress_error-%> log.Printf("[ERROR] Unable to confirm eventually consistent <%= object.name -%> %q finished updating: %q", d.Id(), err) @@ -724,7 +729,7 @@ if <%= props.map { |prop| "d.HasChange(\"#{prop.name.underscore}\")" }.join ' || <% end -%> <% end # if object.input -%> -<%= lines(compile(object.custom_code.post_update)) if object.custom_code.post_update -%> +<%= lines(compile(pwd + '/' + object.custom_code.post_update)) if object.custom_code.post_update -%> return resource<%= resource_name -%>Read(d, meta) } <% end # if updatable? -%> @@ -738,7 +743,7 @@ func resource<%= resource_name -%>Delete(d *schema.ResourceData, meta interface{ return nil <% elsif object.custom_code.custom_delete -%> -<%= lines(compile(object.custom_code.custom_delete)) -%> +<%= lines(compile(pwd + '/' + object.custom_code.custom_delete)) -%> <% else -%> config := meta.(*Config) @@ -765,7 +770,7 @@ func resource<%= resource_name -%>Delete(d *schema.ResourceData, meta interface{ <%# If the deletion of the object requires sending a request body, the custom code will set 'obj' -%> var obj map[string]interface{} -<%= lines(compile(object.custom_code.pre_delete)) if object.custom_code.pre_delete -%> +<%= lines(compile(pwd + '/' + object.custom_code.pre_delete)) if object.custom_code.pre_delete -%> <% if object.nested_query&.modify_by_patch -%> <%# Keep this after mutex - patch request data relies on current resource state %> obj, err = resource<%= resource_name -%>PatchDeleteEncoder(d, meta, obj) @@ -793,14 +798,24 @@ func resource<%= resource_name -%>Delete(d *schema.ResourceData, meta interface{ } <% if object.async&.allow?('delete') -%> +<% if object.async.is_a? 
Provider::Terraform::PollAsync -%> + err = PollingWaitTime(resource<%= resource_name -%>PollRead(d, meta), <%= object.async.check_response_func_absence -%>, "Deleting <%= object.name -%>", d.Timeout(schema.TimeoutDelete), <%= object.async.target_occurrences -%>) + if err != nil { +<% if object.async.suppress_error -%> + log.Printf("[ERROR] Unable to confirm eventually consistent <%= object.name -%> %q finished deleting: %q", d.Id(), err) +<% else -%> + return fmt.Errorf("Error waiting to delete <%= object.name -%>: %s", err) +<% end -%> + } +<% else -%> err = <%= client_name_camel -%>OperationWaitTime( config, res, <% if has_project -%> project, <% end -%> "Deleting <%= object.name -%>", - int(d.Timeout(schema.TimeoutDelete).Minutes())) + d.Timeout(schema.TimeoutDelete)) if err != nil { return err } - +<% end -%> <% end -%> log.Printf("[DEBUG] Finished deleting <%= object.name -%> %q: %#v", d.Id(), res) @@ -811,7 +826,7 @@ func resource<%= resource_name -%>Delete(d *schema.ResourceData, meta interface{ <% unless object.exclude_import -%> func resource<%= resource_name -%>Import(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { <% if object.custom_code.custom_import -%> -<%= lines(compile(object.custom_code.custom_import)) -%> +<%= lines(compile(pwd + '/' + object.custom_code.custom_import)) -%> <% else -%> config := meta.(*Config) if err := parseImportId([]string{ @@ -832,10 +847,10 @@ func resource<%= resource_name -%>Import(d *schema.ResourceData, meta interface{ <%- unless object.virtual_fields.empty? -%> // Explicitly set virtual fields to default values on import <%- object.virtual_fields.each do |field| -%> - d.Set("<%= field.name %>", false) + d.Set("<%= field.name %>", <%= go_literal(field.default_value) -%>) <% end -%> <% end -%> -<%= lines(compile(object.custom_code.post_import)) if object.custom_code.post_import -%> +<%= lines(compile(pwd + '/' + object.custom_code.post_import)) if object.custom_code.post_import -%> return []*schema.ResourceData{d}, nil <% end -%> @@ -844,27 +859,27 @@ func resource<%= resource_name -%>Import(d *schema.ResourceData, meta interface{ <%- nested_prefix = object.nested_query ?
"Nested" : "" -%> <% object.gettable_properties.reject(&:ignore_read).each do |prop| -%> -<%= lines(build_flatten_method(nested_prefix+resource_name, prop, object), 1) -%> +<%= lines(build_flatten_method(nested_prefix+resource_name, prop, object, pwd), 1) -%> <% end -%> <% object.settable_properties.each do |prop| -%> -<%= lines(build_expand_method(nested_prefix+resource_name, prop, object), 1) -%> +<%= lines(build_expand_method(nested_prefix+resource_name, prop, object, pwd), 1) -%> <% end -%> <% if object.custom_code.encoder -%> func resource<%= resource_name -%>Encoder(d *schema.ResourceData, meta interface{}, obj map[string]interface{}) (map[string]interface{}, error) { -<%= lines(compile(object.custom_code.encoder)) -%> +<%= lines(compile(pwd + '/' + object.custom_code.encoder)) -%> } <% end -%> <% if object.custom_code.update_encoder-%> func resource<%= resource_name -%>UpdateEncoder(d *schema.ResourceData, meta interface{}, obj map[string]interface{}) (map[string]interface{}, error) { -<%= lines(compile(object.custom_code.update_encoder)) -%> +<%= lines(compile(pwd + '/' + object.custom_code.update_encoder)) -%> } <% end -%> <% if object.nested_query -%> -<%= compile_template('templates/terraform/nested_query.go.erb', +<%= compile_template(pwd + '/templates/terraform/nested_query.go.erb', object: object, settable_properties: object.settable_properties, resource_name: resource_name) -%> @@ -872,17 +887,17 @@ func resource<%= resource_name -%>UpdateEncoder(d *schema.ResourceData, meta int <% if object.custom_code.decoder -%> func resource<%= resource_name -%>Decoder(d *schema.ResourceData, meta interface{}, res map[string]interface{}) (map[string]interface{}, error) { -<%= lines(compile(object.custom_code.decoder)) -%> +<%= lines(compile(pwd + '/' + object.custom_code.decoder)) -%> } <% end -%> <% if object.custom_code.post_create_failure -%> func resource<%= resource_name -%>PostCreateFailure(d *schema.ResourceData, meta interface{}) { -<%= lines(compile(object.custom_code.post_create_failure)) -%> +<%= lines(compile(pwd + '/' + object.custom_code.post_create_failure)) -%> } <% end -%> <% if object.schema_version -%> -<%= lines(compile("templates/terraform/state_migrations/#{product_ns.underscore}_#{object.name.underscore}.go.erb")) -%> +<%= lines(compile(pwd + "/templates/terraform/state_migrations/#{product_ns.underscore}_#{object.name.underscore}.go.erb")) -%> <% end -%> diff --git a/templates/terraform/resource.html.markdown.erb b/templates/terraform/resource.html.markdown.erb index 4136380dfcb8..9c2017316773 100644 --- a/templates/terraform/resource.html.markdown.erb +++ b/templates/terraform/resource.html.markdown.erb @@ -40,6 +40,7 @@ tf_subcategory = (object.__product.display_name) terraform_name = object.legacy_name || "google_#{tf_product}_#{object.name.underscore}" properties = object.all_user_properties + sensitive_props = object.all_nested_properties(object.root_properties).select(&:sensitive) # In order of preference, use TF override, # general defined timeouts, or default Timeouts timeouts = object.timeouts @@ -47,7 +48,7 @@ timeouts ||= Api::Timeouts.new -%> --- -<%= lines(autogen_notice :yaml) -%> +<%= lines(autogen_notice(:yaml, pwd)) -%> subcategory: "<%= tf_subcategory -%>" layout: "google" page_title: "Google: <%= terraform_name -%>" @@ -82,11 +83,19 @@ To get more information about <%= object.name -%>, see: ~> **Warning:** <%= object.docs.warning -%> <%- end -%> +<%- if !sensitive_props.empty? -%> +<%- + sense_props = sensitive_props.map! 
{|prop| "`"+prop.lineage+"`"}.to_sentence +-%> + +~> **Warning:** All arguments including <%= sense_props -%> will be stored in the raw +state as plain-text. [Read more about sensitive data in state](/docs/state/sensitive-data.html). +<%- end -%> <%#- We over-generate examples/oics buttons here; they'll all be _valid_ just not necessarily intended for this provider version. Unless/Until we split our docs, this is a non-issue. -%> <% unless object.examples.empty? -%> - <%- object.examples.each do |example| -%> + <%- object.examples.reject(&:skip_docs).each do |example| -%> <%- unless example.skip_test -%>
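The warning text added above is assembled from each sensitive property's `lineage`, joined with `Array#to_sentence`. That helper comes from ActiveSupport, which these templates already lean on for extensions like `camelize`; a minimal sketch of the assembly, with hypothetical property lineages standing in for real ones:

```ruby
# Sketch only: assumes ActiveSupport for Array#to_sentence; the two
# property lineages below are hypothetical stand-ins.
require 'active_support/core_ext/array/conversions'

sensitive_props = ['private_key', 'password']
sense_props = sensitive_props.map { |prop| "`#{prop}`" }.to_sentence
# sense_props == "`private_key` and `password`"
puts "~> **Warning:** All arguments including #{sense_props} will be stored " \
     "in the raw state as plain-text."
```

One caveat the sketch sidesteps: the template uses `map!`, which mutates `sensitive_props` in place; a plain `map` produces the same sentence without the side effect.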
@@ -97,7 +106,7 @@ To get more information about <%= object.name -%>, see: ## Example Usage - <%= example.name.camelize(:upper).titleize %> -<%= example.config_documentation -%> +<%= example.config_documentation(pwd) -%> <%- end %> <%- end -%> ## Argument Reference @@ -105,11 +114,11 @@ To get more information about <%= object.name -%>, see: The following arguments are supported: <% object.root_properties.select(&:required).each do |prop| -%> -<%= lines(build_property_documentation(prop)) -%> +<%= lines(build_property_documentation(prop, pwd)) -%> <% end -%> <% properties.select(&:required).each do |prop| -%> -<%= lines(build_nested_property_documentation(prop)) -%> +<%= lines(build_nested_property_documentation(prop, pwd)) -%> <% end -%> <%- unless object.docs.required_properties.nil? -%> <%= "\n" + object.docs.required_properties -%> @@ -118,9 +127,9 @@ The following arguments are supported: - - - <% object.root_properties.reject(&:required).reject(&:output).each do |prop| -%> -<%= lines(build_property_documentation(prop)) -%> +<%= lines(build_property_documentation(prop, pwd)) -%> <% end -%> -<% if object.base_url.include?("{{project}}")-%> +<% if object.base_url.include?("{{project}}") || (object.create_url && object.create_url.include?('{{project}}'))-%> <%# The following new line allow for project to be bullet-formatted properly. -%> * `project` - (Optional) The ID of the project in which the resource belongs. @@ -136,7 +145,7 @@ The following arguments are supported: <%= "\n" + object.docs.optional_properties -%> <% end -%> <% properties.reject(&:required).reject(&:output).each do |prop| -%> -<%= lines(build_nested_property_documentation(prop)) -%> +<%= lines(build_nested_property_documentation(prop, pwd)) -%> <% end -%> ## Attributes Reference @@ -145,14 +154,14 @@ In addition to the arguments listed above, the following computed attributes are * `id` - an identifier for the resource with format `<%= id_format(object) %>` <% object.root_properties.select(&:output).each do |prop| -%> -<%= lines(build_property_documentation(prop)) -%> +<%= lines(build_property_documentation(prop, pwd)) -%> <% end -%> <% if object.has_self_link -%> * `self_link` - The URI of the created resource. <% end -%> <% properties.select(&:output).each do |prop| -%> -<%= lines(build_nested_property_documentation(prop)) -%> +<%= lines(build_nested_property_documentation(prop, pwd)) -%> <% end -%> <%- unless object.docs.attributes.nil? 
-%> <%= "\n" + object.docs.attributes -%> diff --git a/templates/terraform/resource_definition/firewall.erb b/templates/terraform/resource_definition/firewall.erb index 7375e2971a3d..da5a6e482bba 100644 --- a/templates/terraform/resource_definition/firewall.erb +++ b/templates/terraform/resource_definition/firewall.erb @@ -14,3 +14,4 @@ -%> SchemaVersion: 1, MigrateState: resourceComputeFirewallMigrateState, +CustomizeDiff: resourceComputeFirewallEnableLoggingCustomizeDiff, diff --git a/templates/terraform/resource_iam.html.markdown.erb b/templates/terraform/resource_iam.html.markdown.erb index 028ff18be3ee..b6bdef0565f5 100644 --- a/templates/terraform/resource_iam.html.markdown.erb +++ b/templates/terraform/resource_iam.html.markdown.erb @@ -50,7 +50,7 @@ timeouts ||= Api::Timeouts.new -%> --- -<%= lines(autogen_notice :yaml) -%> +<%= lines(autogen_notice(:yaml, pwd)) -%> subcategory: "<%= tf_subcategory -%>" layout: "google" page_title: "Google: <%= resource_ns_iam -%>" @@ -95,7 +95,7 @@ data "google_iam_policy" "admin" { } resource "<%= resource_ns_iam -%>_policy" "policy" { -<%= lines(compile(object.iam_policy.example_config_body)) -%> +<%= lines(compile(pwd + '/' + object.iam_policy.example_config_body)) -%> policy_data = data.google_iam_policy.admin.policy_data } ``` @@ -120,7 +120,7 @@ data "google_iam_policy" "admin" { } resource "<%= resource_ns_iam -%>_policy" "policy" { -<%= lines(compile(object.iam_policy.example_config_body)) -%> +<%= lines(compile(pwd + '/' + object.iam_policy.example_config_body)) -%> policy_data = data.google_iam_policy.admin.policy_data } ``` @@ -129,7 +129,7 @@ resource "<%= resource_ns_iam -%>_policy" "policy" { ```hcl resource "<%= resource_ns_iam -%>_binding" "binding" { -<%= lines(compile(object.iam_policy.example_config_body)) -%> +<%= lines(compile(pwd + '/' + object.iam_policy.example_config_body)) -%> role = "<%= object.iam_policy.admin_iam_role || object.iam_policy.allowed_iam_role -%>" members = [ "user:jane@example.com", @@ -142,7 +142,7 @@ With IAM Conditions ([beta](https://terraform.io/docs/providers/google/provider_ ```hcl resource "<%= resource_ns_iam -%>_binding" "binding" { -<%= lines(compile(object.iam_policy.example_config_body)) -%> +<%= lines(compile(pwd + '/' + object.iam_policy.example_config_body)) -%> role = "<%= object.iam_policy.admin_iam_role || object.iam_policy.allowed_iam_role -%>" members = [ "user:jane@example.com", @@ -160,7 +160,7 @@ resource "<%= resource_ns_iam -%>_binding" "binding" { ```hcl resource "<%= resource_ns_iam -%>_member" "member" { -<%= lines(compile(object.iam_policy.example_config_body)) -%> +<%= lines(compile(pwd + '/' + object.iam_policy.example_config_body)) -%> role = "<%= object.iam_policy.admin_iam_role || object.iam_policy.allowed_iam_role -%>" member = "user:jane@example.com" } @@ -171,7 +171,7 @@ With IAM Conditions ([beta](https://terraform.io/docs/providers/google/provider_ ```hcl resource "<%= resource_ns_iam -%>_member" "member" { -<%= lines(compile(object.iam_policy.example_config_body)) -%> +<%= lines(compile(pwd + '/' + object.iam_policy.example_config_body)) -%> role = "<%= object.iam_policy.admin_iam_role || object.iam_policy.allowed_iam_role -%>" member = "user:jane@example.com" @@ -262,7 +262,7 @@ Any variables not passed in the import command will be taken from the provider c IAM member imports use space-delimited identifiers: the resource in question, the role, and the member identity, e.g. 
``` -$ terraform import <% if object.min_version.name == 'beta' %>-provider=google-beta <% end -%><%= resource_ns_iam -%>_member.editor "<%= id_format(object).gsub('{{name}}', "{{#{object.name.underscore}}}") -%> <%= object.iam_policy.allowed_iam_role -%> jane@example.com" +$ terraform import <% if object.min_version.name == 'beta' %>-provider=google-beta <% end -%><%= resource_ns_iam -%>_member.editor "<%= all_formats.first.gsub('{{name}}', "{{#{object.name.underscore}}}") -%> <%= object.iam_policy.allowed_iam_role -%> jane@example.com" ``` IAM binding imports use space-delimited identifiers: the resource in question and the role, e.g. diff --git a/templates/terraform/schema_property.erb b/templates/terraform/schema_property.erb index 689b910b2b0d..a82fc5ccdce5 100644 --- a/templates/terraform/schema_property.erb +++ b/templates/terraform/schema_property.erb @@ -14,7 +14,7 @@ -%> <% if property.flatten_object -%> <% order_properties(property.properties).each do |prop| -%> - <%= lines(build_schema_property(prop, object)) -%> + <%= lines(build_schema_property(prop, object, pwd)) -%> <% end -%> <% elsif tf_types.include?(property.class) -%> "<%= property.name.underscore -%>": { @@ -36,7 +36,9 @@ <% else -%> Optional: true, <% end -%> -<% if property.deprecated? -%> +<% if property.removed? -%> + Removed: "<%= property.removed_message %>", +<% elsif property.deprecated? -%> Deprecated: "<%= property.deprecation_message %>", <% end -%> <% if force_new?(property, object) -%> @@ -64,13 +66,22 @@ <% unless property.state_func.nil? -%> StateFunc: <%= property.state_func %>, <% end -%> - Description: `<%= property.description.strip.gsub("`", "'") -%>`, +<% enum_values_description = "" -%> +<% if property.is_a?(Api::Type::Enum) && !property.output -%> +<% unless property.default_value.nil? || property.default_value == "" -%> +<% enum_values_description += " Default value: \"#{property.default_value}\"" -%> +<% end -%> +<% enum_values_description += " Possible values: [" -%> +<% enum_values_description += property.values.select { |v| v != "" }.map { |v| "\"#{v}\"" }.join(', ') -%> +<% enum_values_description += "]" -%> +<% end -%> + Description: `<%= property.description.strip.gsub("`", "'") + enum_values_description -%>`, <% if property.is_a?(Api::Type::NestedObject) -%> MaxItems: 1, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ <% order_properties(property.properties).each do |prop| -%> - <%= lines(build_schema_property(prop, object)) -%> + <%= lines(build_schema_property(prop, object, pwd)) -%> <% end -%> }, }, @@ -88,7 +99,7 @@ Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ <% order_properties(property.item_type.properties).each do |prop| -%> - <%= lines(build_schema_property(prop, object)) -%> + <%= lines(build_schema_property(prop, object, pwd)) -%> <% end -%> }, }, @@ -130,7 +141,7 @@ <% end -%> }, <% order_properties(property.value_type.properties).each do |prop| -%> - <%= lines(build_schema_property(prop, object)) -%> + <%= lines(build_schema_property(prop, object, pwd)) -%> <% end -%> }, }, @@ -146,13 +157,13 @@ <% end -%> <% unless property.conflicting().empty? -%> <% conflicting_props = property.conflicting().map(&:name).map(&:underscore) -%> - ConflictsWith: <%= go_literal(conflicting_props.reject {|sp| property_for_schema_path(sp, object).nil? }) -%>, + ConflictsWith: <%= go_literal(conflicting_props.map {|sp| get_property_schema_path(sp, object) }.compact) -%>, <% end -%> <% unless property.at_least_one_of_list().empty? 
-%> - AtLeastOneOf: <%= go_literal(property.at_least_one_of_list.reject {|sp| property_for_schema_path(sp, object).nil? }) -%>, + AtLeastOneOf: <%= go_literal(property.at_least_one_of_list.map {|sp| get_property_schema_path(sp, object) }.compact) -%>, <% end -%> <% unless property.exactly_one_of_list().empty? -%> - ExactlyOneOf: <%= go_literal(property.exactly_one_of_list.reject {|sp| property_for_schema_path(sp, object).nil? }) -%>, + ExactlyOneOf: <%= go_literal(property.exactly_one_of_list.map {|sp| get_property_schema_path(sp, object) }.compact) -%>, <% end -%> }, <% else -%> diff --git a/templates/terraform/schema_subresource.erb b/templates/terraform/schema_subresource.erb index 3d5ba12434d0..a62023a5d884 100644 --- a/templates/terraform/schema_subresource.erb +++ b/templates/terraform/schema_subresource.erb @@ -19,7 +19,7 @@ func <%= namespace_property_from_object(property, object) -%>Schema() *schema.Re return &schema.Resource{ Schema: map[string]*schema.Schema{ <% order_properties(property.item_type.properties).each do |prop| -%> - <%= lines(build_schema_property(prop, object)) -%> + <%= lines(build_schema_property(prop, object, pwd)) -%> <% end -%> }, } @@ -27,5 +27,5 @@ func <%= namespace_property_from_object(property, object) -%>Schema() *schema.Re <% end %> <% property.nested_properties.each do |prop| -%> -<%= lines(build_subresource_schema(prop, object), 1) -%> +<%= lines(build_subresource_schema(prop, object, pwd), 1) -%> <% end -%> diff --git a/templates/terraform/sweeper_file.go.erb b/templates/terraform/sweeper_file.go.erb index edf65bb8de51..67375d164e88 100644 --- a/templates/terraform/sweeper_file.go.erb +++ b/templates/terraform/sweeper_file.go.erb @@ -1,4 +1,4 @@ -<%= lines(autogen_notice :go) -%> +<%= lines(autogen_notice(:go, pwd)) -%> package google @@ -19,6 +19,7 @@ listUrlTemplate.sub! "zones/{{zone}}", "aggregated" aggregatedList = listUrlTemplate.include? "/aggregated/" deleteUrlTemplate = object.__product.base_url + object.delete_uri +delete_id = deleteUrlTemplate.include? "_id" -%> func init() { @@ -45,6 +46,9 @@ func testSweep<%= sweeper_name -%>(region string) error { return err } + t := &testing.T{} + billingId := getTestBillingAccountFromEnv(t) + // Setup variables to replace in list template d := &ResourceDataMock{ FieldsInSchema: map[string]interface{}{ @@ -52,6 +56,7 @@ func testSweep<%= sweeper_name -%>(region string) error { "region":region, "location":region, "zone":"-", + "billing_account":billingId, }, } @@ -97,12 +102,25 @@ func testSweep<%= sweeper_name -%>(region string) error { nonPrefixCount := 0 for _, ri := range rl { obj := ri.(map[string]interface{}) + <% if delete_id -%> + var name string + // Id detected in the delete URL, attempt to use id. 
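+ // Some list responses only expose the identifier under "id" (matching the
+ // "_id" placeholder in the delete URL); prefer it, falling back to "name".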
+ if obj["id"] != nil { + name = GetResourceNameFromSelfLink(obj["id"].(string)) + } else if obj["name"] != nil { + name = GetResourceNameFromSelfLink(obj["name"].(string)) + } else { + log.Printf("[INFO][SWEEPER_LOG] %s resource name and id were nil", resourceName) + return nil + } + <% else -%> if obj["name"] == nil { log.Printf("[INFO][SWEEPER_LOG] %s resource name was nil", resourceName) return nil } name := GetResourceNameFromSelfLink(obj["name"].(string)) + <% end -%> // Skip resources that shouldn't be sweeped if !isSweepableTestResource(name) { nonPrefixCount++ diff --git a/templates/terraform/update_encoder/compute_per_instance_config.go.erb b/templates/terraform/update_encoder/compute_per_instance_config.go.erb new file mode 100644 index 000000000000..12c2c63915ee --- /dev/null +++ b/templates/terraform/update_encoder/compute_per_instance_config.go.erb @@ -0,0 +1,19 @@ +<%# The license inside this block applies to this file. + # Copyright 2017 Google Inc. + # Licensed under the Apache License, Version 2.0 (the "License"); + # you may not use this file except in compliance with the License. + # You may obtain a copy of the License at + # + # http://www.apache.org/licenses/LICENSE-2.0 + # + # Unless required by applicable law or agreed to in writing, software + # distributed under the License is distributed on an "AS IS" BASIS, + # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + # See the License for the specific language governing permissions and + # limitations under the License. +-%> +// updates and creates use different wrapping object names +wrappedReq := map[string]interface{}{ + "perInstanceConfigs": []interface{}{obj}, +} +return wrappedReq, nil diff --git a/templates/terraform/update_encoder/containeranalysis_occurrence.go.erb b/templates/terraform/update_encoder/containeranalysis_occurrence.go.erb new file mode 100644 index 000000000000..4ce716cf1483 --- /dev/null +++ b/templates/terraform/update_encoder/containeranalysis_occurrence.go.erb @@ -0,0 +1,23 @@ +<%# The license inside this block applies to this file. + # Copyright 2020 Google Inc. + # Licensed under the Apache License, Version 2.0 (the "License"); + # you may not use this file except in compliance with the License. + # You may obtain a copy of the License at + # + # http://www.apache.org/licenses/LICENSE-2.0 + # + # Unless required by applicable law or agreed to in writing, software + # distributed under the License is distributed on an "AS IS" BASIS, + # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + # See the License for the specific language governing permissions and + # limitations under the License. 
+-%> +// Note is required, even for PATCH +noteNameProp, err := expandContainerAnalysisOccurrenceNoteName(d.Get("note_name"), d, meta.(*Config)) +if err != nil { + return nil, err +} else if v, ok := d.GetOkExists("note_name"); !isEmptyValue(reflect.ValueOf(noteNameProp)) && (ok || !reflect.DeepEqual(v, noteNameProp)) { + obj["noteName"] = noteNameProp +} + +return resource<%= resource_name -%>Encoder(d, meta, obj) diff --git a/templates/terraform/update_mask.erb b/templates/terraform/update_mask.erb index 68f9d67c33d1..fc5aea548778 100644 --- a/templates/terraform/update_mask.erb +++ b/templates/terraform/update_mask.erb @@ -1,22 +1,21 @@ -updateMask := []string{} -<% update_body_properties.each do |prop| -%> -<%# UpdateMask documentation is not not obvious about which fields are supported or +<%# Template for code adding update mask query parameter to update URL. + + UpdateMask documentation is not obvious about which fields are supported or how deeply nesting is supported. For instance, if we change the field foo.bar.baz, it seems that *sometimes*, 'foo' is a valid value. Other times, it needs to be 'foo.bar', and other times 'foo.bar.baz'. If the defaults don't work for you, - You can customize the exact list of fields that are passed for a property + you can customize the exact list of fields that are passed for a property using `update_mask_fields`. --#%> -if d.HasChange("<%= prop.name.underscore -%>") { -<% -mask = prop.api_name -if prop.update_mask_fields - mask = prop.update_mask_fields.join(',') -end -%> - updateMask = append(updateMask, "<%= mask -%>") +updateMask := []string{} +<% + masks_for_props = get_property_update_masks_groups(update_body_properties) + masks_for_props.each do |prop_name, masks| -%> + +if d.HasChange("<%= prop_name %>") { + updateMask = append(updateMask, <%= masks.map{|m| "\"#{m}\"" }.join(",\n") %>) } -<% end -%> +<% end # update_body_properties.each -%> // updateMask is a URL parameter but not present in the schema, so replaceVars // won't set it url, err = addQueryParams(url, map[string]string{"updateMask": strings.Join(updateMask, ",")}) diff --git a/third_party/inspec/custom_functions/dns_managed_zone.erb b/third_party/inspec/custom_functions/dns_managed_zone.erb index d3e79309945a..7a231037088c 100644 --- a/third_party/inspec/custom_functions/dns_managed_zone.erb +++ b/third_party/inspec/custom_functions/dns_managed_zone.erb @@ -1,12 +1,12 @@ def key_signing_key_algorithm - specs = @dnssec_config&.default_key_specs | [] + specs = @dnssec_config&.default_key_specs || [] specs.each do |spec| return spec.algorithm if spec.key_type == 'keySigning' end end def zone_signing_key_algorithm - specs = @dnssec_config&.default_key_specs | [] + specs = @dnssec_config&.default_key_specs || [] specs.each do |spec| return spec.algorithm if spec.key_type == 'zoneSigning' end diff --git a/third_party/inspec/custom_functions/google_compute_instance.erb b/third_party/inspec/custom_functions/google_compute_instance.erb index f0f058efb96f..9e62a03ba488 100644 --- a/third_party/inspec/custom_functions/google_compute_instance.erb +++ b/third_party/inspec/custom_functions/google_compute_instance.erb @@ -82,19 +82,19 @@ end def metadata_keys return [] if !defined?(@metadata) || @metadata.nil? - @metadata.item[:items].map { |m| m[:key] } + @metadata['items']&.map { |m| m['key'] } end def metadata_values return [] if !defined?(@metadata) || @metadata.nil?
- @metadata.item[:items].map { |m| m[:value] } + @metadata['items']&.map { |m| m['value'] } end def metadata_value_by_key(metadata_key) return [] if !defined?(@metadata) || @metadata.nil? - @metadata.item[:items].each do |item| - if item[:key] == metadata_key - return item[:value] + @metadata['items']&.each do |item| + if item['key'] == metadata_key + return item['value'] end end [] @@ -107,21 +107,21 @@ def service_account_scopes end def block_project_ssh_keys - return false if !defined?(@metadata.items) || @metadata.items.nil? - @metadata.items.each do |element| - return true if element.key=='block-project-ssh-keys' and element.value.casecmp('true').zero? - return true if element.key=='block-project-ssh-keys' and element.value=='1' + return false if !defined?(@metadata['items']) || @metadata['items'].nil? + @metadata['items'].each do |element| + return true if element['key']=='block-project-ssh-keys' and element['value'].casecmp('true').zero? + return true if element['key']=='block-project-ssh-keys' and element['value']=='1' end false end def has_serial_port_disabled? - return false if !defined?(@metadata.items) || @metadata.items.nil? - @metadata.items.each do |element| - return true if element.key=='serial-port-enable' and element.value.casecmp('false').zero? - return true if element.key=='serial-port-enable' and element.value=='0' + return false if !defined?(@metadata['items']) || @metadata['items'].nil? + @metadata['items'].each do |element| + return false if element['key']=='serial-port-enable' and element['value'].casecmp('true').zero? + return false if element['key']=='serial-port-enable' and element['value']=='1' end - false + true end def has_disks_encrypted_with_csek? diff --git a/third_party/inspec/documentation/google_service_account.md b/third_party/inspec/documentation/google_service_account.md index dd45a4adb683..2afc82333d17 100644 --- a/third_party/inspec/documentation/google_service_account.md +++ b/third_party/inspec/documentation/google_service_account.md @@ -1,17 +1,17 @@ ### Test that a GCP project IAM service account has the expected unique identifier - describe google_service_account(name: 'projects/sample-project/serviceAccounts/sample-account@sample-project.iam.gserviceaccount.com') do + describe google_service_account(project: 'sample-project', name: 'sample-account@sample-project.iam.gserviceaccount.com') do its('unique_id') { should eq 12345678 } end ### Test that a GCP project IAM service account has the expected oauth2 client identifier - describe google_service_account(name: 'projects/sample-project/serviceAccounts/sample-account@sample-project.iam.gserviceaccount.com') do + describe google_service_account(project: 'sample-project', name: 'sample-account@sample-project.iam.gserviceaccount.com') do its('oauth2_client_id') { should eq 12345678 } end ### Test that a GCP project IAM service account does not have user managed keys - describe google_service_account(name: 'projects/sample-project/serviceAccounts/sample-account@sample-project.iam.gserviceaccount.com') do - it { should have_user_managed_keys } + describe google_service_account_keys(project: 'sample-project', service_account: 'sample-account@sample-project.iam.gserviceaccount.com') do + its('key_types') { should_not include 'USER_MANAGED' } end \ No newline at end of file diff --git a/third_party/inspec/documentation/google_service_account_keys.md b/third_party/inspec/documentation/google_service_account_keys.md index f5719a429fb0..4b72ffc90327 100644 ---
a/third_party/inspec/documentation/google_service_account_keys.md +++ b/third_party/inspec/documentation/google_service_account_keys.md @@ -1,11 +1,11 @@ ### Test that there are no more than a specified number of keys for the service account - describe google_service_account_keys(service_account: 'projects/sample-project/serviceAccounts/sample-account@sample-project.iam.gserviceaccount.com') do + describe google_service_account_keys(project: 'sample-project', service_account: 'sample-account@sample-project.iam.gserviceaccount.com') do its('count') { should be <= 1000} end ### Test that a service account with expected name is available - describe google_service_account_keys(service_account: 'projects/sample-project/serviceAccounts/sample-account@sample-project.iam.gserviceaccount.com') do + describe google_service_account_keys(project: 'sample-project', service_account: 'sample-account@sample-project.iam.gserviceaccount.com') do its('key_names'){ should include "projects/sample-project/serviceAccounts/test-sa@sample-project.iam.gserviceaccount.com/keys/c6bd986da9fac6d71178db41d1741cbe751a5080" } end \ No newline at end of file diff --git a/third_party/inspec/documentation/google_storage_bucket_object.md b/third_party/inspec/documentation/google_storage_bucket_object.md index eb5b5253164f..09292ead86a4 100644 --- a/third_party/inspec/documentation/google_storage_bucket_object.md +++ b/third_party/inspec/documentation/google_storage_bucket_object.md @@ -27,5 +27,5 @@ ### Test that a GCP storage bucket object was last updated within a certain time period describe google_storage_bucket_object(bucket: 'bucket-buvsjjcndqz', object: 'bucket-object-pmxbiikq') do - its('updated_date') { should be > Time.now - 365*60*60*24*10 } + its('time_updated') { should be > Time.now - 365*60*60*24*10 } end \ No newline at end of file diff --git a/third_party/terraform/data_sources/data_google_game_services_game_server_deployment_rollout.go.erb b/third_party/terraform/data_sources/data_google_game_services_game_server_deployment_rollout.go.erb new file mode 100644 index 000000000000..b27c465aeaa7 --- /dev/null +++ b/third_party/terraform/data_sources/data_google_game_services_game_server_deployment_rollout.go.erb @@ -0,0 +1,33 @@ +<% autogen_exception -%> +package google + +<% unless version == 'ga' -%> +import ( + "fmt" + "github.com/hashicorp/terraform-plugin-sdk/helper/schema" +) + +func dataSourceGameServicesGameServerDeploymentRollout() *schema.Resource { + + dsSchema := datasourceSchemaFromResourceSchema(resourceGameServicesGameServerDeploymentRollout().Schema) + addRequiredFieldsToSchema(dsSchema, "deployment_id") + + return &schema.Resource{ + Read: dataSourceGameServicesGameServerDeploymentRolloutRead, + Schema: dsSchema, + } +} + +func dataSourceGameServicesGameServerDeploymentRolloutRead(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + + id, err := replaceVars(d, config, "projects/{{project}}/locations/global/gameServerDeployments/{{deployment_id}}/rollout") + if err != nil { + return fmt.Errorf("Error constructing id: %s", err) + } + d.SetId(id) + + return resourceGameServicesGameServerDeploymentRolloutRead(d, meta) + +} +<% end -%> diff --git a/third_party/terraform/data_sources/data_source_cloud_identity_group_memberships.go.erb b/third_party/terraform/data_sources/data_source_cloud_identity_group_memberships.go.erb new file mode 100644 index 000000000000..3dd6ef81bbf6 --- /dev/null +++ 
b/third_party/terraform/data_sources/data_source_cloud_identity_group_memberships.go.erb @@ -0,0 +1,73 @@ +<% autogen_exception -%> +package google + +<% unless version == 'ga' -%> +import ( + "fmt" + "time" + + "github.com/hashicorp/terraform-plugin-sdk/helper/schema" + cloudidentity "google.golang.org/api/cloudidentity/v1beta1" +) + +func dataSourceGoogleCloudIdentityGroupMemberships() *schema.Resource { + // Generate datasource schema from resource + dsSchema := datasourceSchemaFromResourceSchema(resourceCloudIdentityGroupMembership().Schema) + + return &schema.Resource{ + Read: dataSourceGoogleCloudIdentityGroupMembershipsRead, + + Schema: map[string]*schema.Schema{ + "memberships": { + Type: schema.TypeList, + Computed: true, + Description: `List of Cloud Identity group memberships.`, + Elem: &schema.Resource{ + Schema: dsSchema, + }, + }, + "group": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + DiffSuppressFunc: compareSelfLinkOrResourceName, + Description: `The name of the Group to get memberships from.`, + }, + }, + } +} + +func dataSourceGoogleCloudIdentityGroupMembershipsRead(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + + resp, err := config.clientCloudIdentity.Groups.Memberships.List(d.Get("group").(string)).View("FULL").Do() + if err != nil { + return handleNotFoundError(err, d, fmt.Sprintf("CloudIdentityGroups %q", d.Id())) + } + + result := []map[string]interface{}{} + for _, member := range resp.Memberships { + result = append(result, map[string]interface{}{ + "name": member.Name, + "roles": flattenCloudIdentityGroupMembershipsRoles(member.Roles), + "member_key": flattenCloudIdentityGroupsEntityKey(member.MemberKey), + "preferred_member_key": flattenCloudIdentityGroupsEntityKey(member.PreferredMemberKey), + }) + } + + d.Set("memberships", result) + d.SetId(time.Now().UTC().String()) + return nil +} + +func flattenCloudIdentityGroupMembershipsRoles(roles []*cloudidentity.MembershipRole) []interface{} { + transformed := []interface{}{} + + for _, role := range roles { + transformed = append(transformed, map[string]interface{}{ + "name": role.Name, + }) + } + return transformed +} +<% end -%> diff --git a/third_party/terraform/data_sources/data_source_cloud_identity_groups.go.erb b/third_party/terraform/data_sources/data_source_cloud_identity_groups.go.erb new file mode 100644 index 000000000000..d148cd86e6c9 --- /dev/null +++ b/third_party/terraform/data_sources/data_source_cloud_identity_groups.go.erb @@ -0,0 +1,74 @@ +<% autogen_exception -%> +package google + +<% unless version == 'ga' -%> +import ( + "fmt" + "time" + + "github.com/hashicorp/terraform-plugin-sdk/helper/schema" + cloudidentity "google.golang.org/api/cloudidentity/v1beta1" +) + +func dataSourceGoogleCloudIdentityGroups() *schema.Resource { + // Generate datasource schema from resource + dsSchema := datasourceSchemaFromResourceSchema(resourceCloudIdentityGroup().Schema) + + return &schema.Resource{ + Read: dataSourceGoogleCloudIdentityGroupsRead, + + Schema: map[string]*schema.Schema{ + "groups": { + Type: schema.TypeList, + Computed: true, + Description: `List of Cloud Identity groups.`, + Elem: &schema.Resource{ + Schema: dsSchema, + }, + }, + "parent": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + Description: `The resource name of the entity under which this Group resides in the +Cloud Identity resource hierarchy. 
+ +Must be of the form identitysources/{identity_source_id} for external-identity-mapped +groups or customers/{customer_id} for Google Groups.`, + }, + }, + } +} + +func dataSourceGoogleCloudIdentityGroupsRead(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + + resp, err := config.clientCloudIdentity.Groups.List().Parent(d.Get("parent").(string)).View("FULL").Do() + if err != nil { + return handleNotFoundError(err, d, fmt.Sprintf("CloudIdentityGroups %q", d.Id())) + } + + result := []map[string]interface{}{} + for _, group := range resp.Groups { + result = append(result, map[string]interface{}{ + "name": group.Name, + "display_name": group.DisplayName, + "labels": group.Labels, + "description": group.Description, + "group_key": flattenCloudIdentityGroupsEntityKey(group.GroupKey), + }) + } + + d.Set("groups", result) + d.SetId(time.Now().UTC().String()) + return nil +} + +func flattenCloudIdentityGroupsEntityKey(entityKey *cloudidentity.EntityKey) []interface{} { + transformed := map[string]interface{}{ + "id": entityKey.Id, + "namespace": entityKey.Namespace, + } + return []interface{}{transformed} +} +<% end -%> diff --git a/third_party/terraform/data_sources/data_source_compute_network_endpoint_group_test.go b/third_party/terraform/data_sources/data_source_compute_network_endpoint_group_test.go index f8ccbf73fe4a..59a400ae6bd4 100644 --- a/third_party/terraform/data_sources/data_source_compute_network_endpoint_group_test.go +++ b/third_party/terraform/data_sources/data_source_compute_network_endpoint_group_test.go @@ -4,7 +4,6 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" ) @@ -13,10 +12,10 @@ func TestAccDataSourceComputeNetworkEndpointGroup(t *testing.T) { t.Parallel() context := map[string]interface{}{ - "random_suffix": acctest.RandString(10), + "random_suffix": randString(t, 10), } - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ diff --git a/third_party/terraform/data_sources/data_source_google_compute_image.go b/third_party/terraform/data_sources/data_source_google_compute_image.go index a09aa70f8453..d7eab4410775 100644 --- a/third_party/terraform/data_sources/data_source_google_compute_image.go +++ b/third_party/terraform/data_sources/data_source_google_compute_image.go @@ -4,7 +4,6 @@ import ( "fmt" "log" "strconv" - "strings" "github.com/hashicorp/terraform-plugin-sdk/helper/schema" compute "google.golang.org/api/compute/v1" @@ -113,15 +112,12 @@ func dataSourceGoogleComputeImageRead(d *schema.ResourceData, meta interface{}) return err } - params := []string{project} var image *compute.Image if v, ok := d.GetOk("name"); ok { - params = append(params, v.(string)) log.Printf("[DEBUG] Fetching image %s", v.(string)) image, err = config.clientCompute.Images.Get(project, v.(string)).Do() log.Printf("[DEBUG] Fetched image %s", v.(string)) } else if v, ok := d.GetOk("family"); ok { - params = append(params, "family", v.(string)) log.Printf("[DEBUG] Fetching latest non-deprecated image from family %s", v.(string)) image, err = config.clientCompute.Images.GetFromFamily(project, v.(string)).Do() log.Printf("[DEBUG] Fetched latest non-deprecated image from family %s", v.(string)) @@ -162,7 +158,11 @@ func dataSourceGoogleComputeImageRead(d *schema.ResourceData, meta interface{}) 
d.Set("source_image_id", image.SourceImageId) d.Set("status", image.Status) - d.SetId(strings.Join(params, "/")) + id, err := replaceVars(d, config, "projects/{{project}}/global/images/{{name}}") + if err != nil { + return fmt.Errorf("Error constructing id: %s", err) + } + d.SetId(id) return nil } diff --git a/third_party/terraform/data_sources/data_source_google_container_engine_versions.go.erb b/third_party/terraform/data_sources/data_source_google_container_engine_versions.go.erb index b8d1f519ccae..601c55d276f5 100644 --- a/third_party/terraform/data_sources/data_source_google_container_engine_versions.go.erb +++ b/third_party/terraform/data_sources/data_source_google_container_engine_versions.go.erb @@ -58,6 +58,14 @@ func dataSourceGoogleContainerEngineVersions() *schema.Resource { Computed: true, Elem: &schema.Schema{Type: schema.TypeString}, }, + +<% unless version == 'ga' -%> + "release_channel_default_version": { + Type: schema.TypeMap, + Computed: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, +<% end %> }, } } @@ -110,6 +118,14 @@ func dataSourceGoogleContainerEngineVersionsRead(d *schema.ResourceData, meta in d.Set("default_cluster_version", resp.DefaultClusterVersion) +<% unless version == 'ga' -%> + m := map[string]string{} + for _, v := range resp.Channels { + m[v.Channel] = v.DefaultVersion + } + d.Set("release_channel_default_version", m) +<% end %> + d.SetId(time.Now().UTC().String()) return nil } diff --git a/third_party/terraform/data_sources/data_source_google_firebase_web_app.go.erb b/third_party/terraform/data_sources/data_source_google_firebase_web_app.go.erb new file mode 100644 index 000000000000..b10239c1a6f7 --- /dev/null +++ b/third_party/terraform/data_sources/data_source_google_firebase_web_app.go.erb @@ -0,0 +1,36 @@ +<% autogen_exception -%> +package google +<% unless version == 'ga' -%> +import ( + "errors" + "fmt" + + "github.com/hashicorp/terraform-plugin-sdk/helper/schema" +) + +func dataSourceGoogleFirebaseWebApp() *schema.Resource { + // Generate datasource schema from resource + dsSchema := datasourceSchemaFromResourceSchema(resourceFirebaseWebApp().Schema) + + // Set 'Required' schema elements + addRequiredFieldsToSchema(dsSchema, "app_id") + + return &schema.Resource{ + Read: dataSourceGoogleFirebaseWebAppRead, + Schema: dsSchema, + } +} + +func dataSourceGoogleFirebaseWebAppRead(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + appId := d.Get("app_id") + project, err := getProject(d, config) + if err != nil { + return err + } + name := fmt.Sprintf("projects/%s/webApps/%s", project, appId.(string)) + d.SetId(name) + d.Set("name", name) + return resourceFirebaseWebAppRead(d, meta) +} +<% end -%> diff --git a/third_party/terraform/data_sources/data_source_google_firebase_web_app_config.go.erb b/third_party/terraform/data_sources/data_source_google_firebase_web_app_config.go.erb new file mode 100644 index 000000000000..7656404d8e50 --- /dev/null +++ b/third_party/terraform/data_sources/data_source_google_firebase_web_app_config.go.erb @@ -0,0 +1,131 @@ +<% autogen_exception -%> +package google +<% unless version == 'ga' -%> + +import ( + "fmt" + + "github.com/hashicorp/terraform-plugin-sdk/helper/schema" +) + +func dataSourceGoogleFirebaseWebappConfig() *schema.Resource { + return &schema.Resource{ + Read: dataSourceGoogleFirebaseWebappConfigRead, + + Schema: map[string]*schema.Schema{ + "web_app_id": { + Type: schema.TypeString, + Required: true, + Description: `The id of the Firebase web App.`, + }, + 
"project": { + Type: schema.TypeString, + Optional: true, + Description: `The project id of the Firebase web App.`, + }, + "api_key": { + Type: schema.TypeString, + Computed: true, + Description: `The API key associated with the web App.`, + }, + "auth_domain": { + Type: schema.TypeString, + Computed: true, + Description: `The domain Firebase Auth configures for OAuth redirects, in the format: + +projectId.firebaseapp.com`, + }, + "database_url": { + Type: schema.TypeString, + Computed: true, + Description: `The default Firebase Realtime Database URL.`, + }, + "location_id": { + Type: schema.TypeString, + Computed: true, + Description: `The ID of the project's default GCP resource location. The location is one of the available GCP resource +locations. + +This field is omitted if the default GCP resource location has not been finalized yet. To set your project's +default GCP resource location, call defaultLocation.finalize after you add Firebase services to your project.`, + }, + "measurement_id": { + Type: schema.TypeString, + Computed: true, + Description: `The unique Google-assigned identifier of the Google Analytics web stream associated with the Firebase Web App. +Firebase SDKs use this ID to interact with Google Analytics APIs. + +This field is only present if the App is linked to a web stream in a Google Analytics App + Web property. +Learn more about this ID and Google Analytics web streams in the Analytics documentation. + +To generate a measurementId and link the Web App with a Google Analytics web stream, +call projects.addGoogleAnalytics.`, + }, + "messaging_sender_id": { + Type: schema.TypeString, + Computed: true, + Description: `The sender ID for use with Firebase Cloud Messaging.`, + }, + "storage_bucket": { + Type: schema.TypeString, + Computed: true, + Description: `The default Cloud Storage for Firebase storage bucket name.`, + }, + }, + } + +} + +func dataSourceGoogleFirebaseWebappConfigRead(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + + id := d.Get("web_app_id").(string) + + project, err := getProject(d, config) + if err != nil { + return err + } + + url, err := replaceVars(d, config, "{{FirebaseBasePath}}projects/{{project}}/webApps/{{web_app_id}}/config") + if err != nil { + return err + } + + res, err := sendRequest(config, "GET", project, url, nil) + if err != nil { + return handleNotFoundError(err, d, fmt.Sprintf("FirebaseWebApp config %q", d.Id())) + } + + err = d.Set("api_key", res["apiKey"]) + if err != nil { + return err + } + err = d.Set("auth_domain", res["authDomain"]) + if err != nil { + return err + } + err = d.Set("database_url", res["databaseURL"]) + if err != nil { + return err + } + err = d.Set("location_id", res["locationId"]) + if err != nil { + return err + } + err = d.Set("measurement_id", res["measurementId"]) + if err != nil { + return err + } + err = d.Set("messaging_sender_id", res["messagingSenderId"]) + if err != nil { + return err + } + err = d.Set("storage_bucket", res["storageBucket"]) + if err != nil { + return err + } + + d.SetId(id) + return nil +} +<% end -%> diff --git a/third_party/terraform/data_sources/data_source_google_folder.go b/third_party/terraform/data_sources/data_source_google_folder.go index fc2f3e08c65b..6960228f8968 100644 --- a/third_party/terraform/data_sources/data_source_google_folder.go +++ b/third_party/terraform/data_sources/data_source_google_folder.go @@ -15,6 +15,10 @@ func dataSourceGoogleFolder() *schema.Resource { Type: schema.TypeString, Required: true, }, + "folder_id": { 
+ Type: schema.TypeString, + Computed: true, + }, "name": { Type: schema.TypeString, Computed: true, diff --git a/third_party/terraform/data_sources/data_source_google_iam_policy.go.erb b/third_party/terraform/data_sources/data_source_google_iam_policy.go.erb index 96ab2c5508da..ba2d7c9b2dbe 100644 --- a/third_party/terraform/data_sources/data_source_google_iam_policy.go.erb +++ b/third_party/terraform/data_sources/data_source_google_iam_policy.go.erb @@ -49,7 +49,6 @@ func dataSourceGoogleIamPolicy() *schema.Resource { }, Set: schema.HashString, }, -<% unless version == 'ga' -%> "condition": { Type: schema.TypeList, Optional: true, @@ -71,7 +70,6 @@ func dataSourceGoogleIamPolicy() *schema.Resource { }, }, }, -<% end -%> }, }, }, @@ -130,9 +128,7 @@ func dataSourceGoogleIamPolicyRead(d *schema.ResourceData, meta interface{}) err for i, v := range bset.List() { binding := v.(map[string]interface{}) members := convertStringSet(binding["members"].(*schema.Set)) -<% unless version == 'ga' -%> condition := expandIamCondition(binding["condition"]) -<% end -%> // Sort members to get simpler diffs as it's what the API does sort.Strings(members) @@ -140,9 +136,7 @@ func dataSourceGoogleIamPolicyRead(d *schema.ResourceData, meta interface{}) err policy.Bindings[i] = &cloudresourcemanager.Binding{ Role: binding["role"].(string), Members: members, -<% unless version == 'ga' -%> Condition: condition, -<% end -%> } } diff --git a/third_party/terraform/data_sources/data_source_google_iam_testable_permissions.go b/third_party/terraform/data_sources/data_source_google_iam_testable_permissions.go new file mode 100644 index 000000000000..1206b1fa41c7 --- /dev/null +++ b/third_party/terraform/data_sources/data_source_google_iam_testable_permissions.go @@ -0,0 +1,136 @@ +package google + +import ( + "fmt" + "strings" + + "github.com/hashicorp/terraform-plugin-sdk/helper/schema" + "github.com/hashicorp/terraform-plugin-sdk/helper/validation" +) + +func dataSourceGoogleIamTestablePermissions() *schema.Resource { + return &schema.Resource{ + Read: dataSourceGoogleIamTestablePermissionsRead, + Schema: map[string]*schema.Schema{ + "full_resource_name": { + Type: schema.TypeString, + Required: true, + }, + "stages": { + Type: schema.TypeList, + Optional: true, + Elem: &schema.Schema{ + Type: schema.TypeString, + ValidateFunc: validation.StringInSlice([]string{"ALPHA", "BETA", "GA", "DEPRECATED"}, true), + }, + }, + "custom_support_level": { + Type: schema.TypeString, + Optional: true, + Default: "SUPPORTED", + ValidateFunc: validation.StringInSlice([]string{"NOT_SUPPORTED", "SUPPORTED", "TESTING"}, true), + }, + "permissions": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "name": { + Type: schema.TypeString, + Computed: true, + }, + "title": { + Type: schema.TypeString, + Computed: true, + }, + "custom_support_level": { + Type: schema.TypeString, + Computed: true, + }, + "stage": { + Type: schema.TypeString, + Computed: true, + }, + "api_disabled": { + Type: schema.TypeBool, + Computed: true, + }, + }, + }, + }, + }, + } +} + +func dataSourceGoogleIamTestablePermissionsRead(d *schema.ResourceData, meta interface{}) (err error) { + config := meta.(*Config) + body := make(map[string]interface{}) + body["pageSize"] = 500 + permissions := make([]map[string]interface{}, 0) + + custom_support_level := strings.ToUpper(d.Get("custom_support_level").(string)) + stages := []string{} + for _, e := range d.Get("stages").([]interface{}) { + stages = append(stages, 
strings.ToUpper(e.(string))) + } + if len(stages) == 0 { + // Since schema.TypeLists cannot specify defaults, we'll specify it here + stages = append(stages, "GA") + } + for { + url := "https://iam.googleapis.com/v1/permissions:queryTestablePermissions" + body["fullResourceName"] = d.Get("full_resource_name").(string) + res, err := sendRequest(config, "POST", "", url, body) + if err != nil { + return fmt.Errorf("Error retrieving permissions: %s", err) + } + + pagePermissions := flattenTestablePermissionsList(res["permissions"], custom_support_level, stages) + permissions = append(permissions, pagePermissions...) + pToken, ok := res["nextPageToken"] + if ok && pToken != nil && pToken.(string) != "" { + body["pageToken"] = pToken.(string) + } else { + break + } + } + + if err := d.Set("permissions", permissions); err != nil { + return fmt.Errorf("Error retrieving permissions: %s", err) + } + + d.SetId(d.Get("full_resource_name").(string)) + return nil +} + +func flattenTestablePermissionsList(v interface{}, custom_support_level string, stages []string) []map[string]interface{} { + if v == nil { + return make([]map[string]interface{}, 0) + } + + ls := v.([]interface{}) + permissions := make([]map[string]interface{}, 0, len(ls)) + for _, raw := range ls { + p := raw.(map[string]interface{}) + + if _, ok := p["name"]; ok { + var csl bool + if custom_support_level == "SUPPORTED" { + csl = p["customRolesSupportLevel"] == nil || p["customRolesSupportLevel"] == "SUPPORTED" + } else { + csl = p["customRolesSupportLevel"] == custom_support_level + } + if csl && p["stage"] != nil && stringInSlice(stages, p["stage"].(string)) { + permissions = append(permissions, map[string]interface{}{ + "name": p["name"], + "title": p["title"], + "stage": p["stage"], + "api_disabled": p["apiDisabled"], + "custom_support_level": p["customRolesSupportLevel"], + }) + } + } + } + + return permissions +} diff --git a/third_party/terraform/data_sources/data_source_google_kms_crypto_key_version.go b/third_party/terraform/data_sources/data_source_google_kms_crypto_key_version.go index cee92f025381..af279db92714 100644 --- a/third_party/terraform/data_sources/data_source_google_kms_crypto_key_version.go +++ b/third_party/terraform/data_sources/data_source_google_kms_crypto_key_version.go @@ -117,7 +117,7 @@ func dataSourceGoogleKmsCryptoKeyVersionRead(d *schema.ResourceData, meta interf return fmt.Errorf("Error reading CryptoKeyVersion public key: %s", err) } } - d.SetId(fmt.Sprintf("//cloudkms.googleapis.com/%s/cryptoKeyVersions/%d", d.Get("crypto_key"), d.Get("version"))) + d.SetId(fmt.Sprintf("//cloudkms.googleapis.com/v1/%s/cryptoKeyVersions/%d", d.Get("crypto_key"), d.Get("version"))) return nil } diff --git a/third_party/terraform/data_sources/data_source_google_netblock_ip_ranges.go b/third_party/terraform/data_sources/data_source_google_netblock_ip_ranges.go index bdd68e957be2..21f5bde46b76 100644 --- a/third_party/terraform/data_sources/data_source_google_netblock_ip_ranges.go +++ b/third_party/terraform/data_sources/data_source_google_netblock_ip_ranges.go @@ -1,13 +1,26 @@ package google import ( + "encoding/json" "fmt" - "github.com/hashicorp/terraform-plugin-sdk/helper/schema" "io/ioutil" "net/http" "strings" + + "github.com/hashicorp/terraform-plugin-sdk/helper/schema" ) +type googRanges struct { + SyncToken string `json:"syncToken"` + CreationTime string `json:"creationTime"` + Prefixes []prefixes `json:"prefixes"` +} + +type prefixes struct { + Ipv4Prefix string `json:"ipv4Prefix"` + Ipv6Prefix string 
`json:"ipv6Prefix"`
+}
+
 func dataSourceGoogleNetblockIpRanges() *schema.Resource {
 return &schema.Resource{
 Read: dataSourceGoogleNetblockIpRangesRead,
@@ -47,7 +60,7 @@ func dataSourceGoogleNetblockIpRangesRead(d *schema.ResourceData, meta interface
 case "cloud-netblocks":
 // https://cloud.google.com/compute/docs/faq#where_can_i_find_product_name_short_ip_ranges
 const CLOUD_NETBLOCK_DNS = "_cloud-netblocks.googleusercontent.com"
- CidrBlocks, err := getCidrBlocks(CLOUD_NETBLOCK_DNS)
+ CidrBlocks, err := getCidrBlocksFromDns(CLOUD_NETBLOCK_DNS)
 if err != nil {
 return err
@@ -56,9 +69,9 @@ func dataSourceGoogleNetblockIpRangesRead(d *schema.ResourceData, meta interface
 d.Set("cidr_blocks_ipv4", CidrBlocks["cidr_blocks_ipv4"])
 d.Set("cidr_blocks_ipv6", CidrBlocks["cidr_blocks_ipv6"])
 case "google-netblocks":
- // https://support.google.com/a/answer/33786?hl=en
- const GOOGLE_NETBLOCK_DNS = "_spf.google.com"
- CidrBlocks, err := getCidrBlocks(GOOGLE_NETBLOCK_DNS)
+ // https://cloud.google.com/vpc/docs/configure-private-google-access?hl=en#ip-addr-defaults
+ const GOOGLE_NETBLOCK_URL = "http://www.gstatic.com/ipranges/goog.json"
+ CidrBlocks, err := getCidrBlocksFromUrl(GOOGLE_NETBLOCK_URL)
 if err != nil {
 return err
@@ -132,7 +145,7 @@ func netblock_request(name string) (string, error) {
 return string(body), nil
 }
-func getCidrBlocks(netblock string) (map[string][]string, error) {
+func getCidrBlocksFromDns(netblock string) (map[string][]string, error) {
 var dnsNetblockList []string
 cidrBlocks := make(map[string][]string)
@@ -186,3 +199,40 @@
 return cidrBlocks, nil
 }
+
+func getCidrBlocksFromUrl(url string) (map[string][]string, error) {
+ cidrBlocks := make(map[string][]string)
+
+ response, err := http.Get(url)
+
+ if err != nil {
+ return nil, fmt.Errorf("Error: %s", err)
+ }
+
+ defer response.Body.Close()
+ body, err := ioutil.ReadAll(response.Body)
+
+ if err != nil {
+ return nil, fmt.Errorf("Error retrieving the CIDR list: %s", err)
+ }
+
+ ranges := googRanges{}
+ jsonErr := json.Unmarshal(body, &ranges)
+ if jsonErr != nil {
+ return nil, fmt.Errorf("Error reading JSON list: %s", jsonErr)
+ }
+
+ for _, element := range ranges.Prefixes {
+
+ if len(element.Ipv4Prefix) > 0 {
+ cidrBlocks["cidr_blocks_ipv4"] = append(cidrBlocks["cidr_blocks_ipv4"], element.Ipv4Prefix)
+ cidrBlocks["cidr_blocks"] = append(cidrBlocks["cidr_blocks"], element.Ipv4Prefix)
+ } else if len(element.Ipv6Prefix) > 0 {
+ cidrBlocks["cidr_blocks_ipv6"] = append(cidrBlocks["cidr_blocks_ipv6"], element.Ipv6Prefix)
+ cidrBlocks["cidr_blocks"] = append(cidrBlocks["cidr_blocks"], element.Ipv6Prefix)
+ }
+
+ }
+
+ return cidrBlocks, nil
+}
diff --git a/third_party/terraform/data_sources/data_source_google_organization.go b/third_party/terraform/data_sources/data_source_google_organization.go
index b0df4dccb924..519a0c1680d0 100644
--- a/third_party/terraform/data_sources/data_source_google_organization.go
+++ b/third_party/terraform/data_sources/data_source_google_organization.go
@@ -70,10 +70,20 @@ func dataSourceOrganizationRead(d *schema.ResourceData, meta interface{}) error
 }
 if len(resp.Organizations) > 1 {
- return fmt.Errorf("More than one matching organization found")
+ // Attempt to find an exact domain match
+ for _, org := range resp.Organizations {
+ if org.DisplayName == v.(string) {
+ organization = org
+ break
+ }
+ }
+ if organization == nil {
+ return fmt.Errorf("Received multiple organizations in the response, but could not find an
exact domain match.") + } + } else { + organization = resp.Organizations[0] } - organization = resp.Organizations[0] } else if v, ok := d.GetOk("organization"); ok { var resp *cloudresourcemanager.Organization err := retryTimeDuration(func() (err error) { diff --git a/third_party/terraform/data_sources/data_source_google_redis_instance.go b/third_party/terraform/data_sources/data_source_google_redis_instance.go new file mode 100644 index 000000000000..aa8e99c38e2e --- /dev/null +++ b/third_party/terraform/data_sources/data_source_google_redis_instance.go @@ -0,0 +1,29 @@ +package google + +import "github.com/hashicorp/terraform-plugin-sdk/helper/schema" + +func dataSourceGoogleRedisInstance() *schema.Resource { + // Generate datasource schema from resource + dsSchema := datasourceSchemaFromResourceSchema(resourceRedisInstance().Schema) + + // Set 'Required' schema elements + addRequiredFieldsToSchema(dsSchema, "name") + + // Set 'Optional' schema elements + addOptionalFieldsToSchema(dsSchema, "project", "region") + + return &schema.Resource{ + Read: dataSourceGoogleRedisInstanceRead, + Schema: dsSchema, + } +} + +func dataSourceGoogleRedisInstanceRead(d *schema.ResourceData, meta interface{}) error { + id, err := replaceVars(d, meta.(*Config), "projects/{{project}}/locations/{{region}}/instances/{{name}}") + if err != nil { + return err + } + d.SetId(id) + + return resourceRedisInstanceRead(d, meta) +} diff --git a/third_party/terraform/data_sources/data_source_google_service_account_id_token.go b/third_party/terraform/data_sources/data_source_google_service_account_id_token.go new file mode 100644 index 000000000000..7c7b32a9488c --- /dev/null +++ b/third_party/terraform/data_sources/data_source_google_service_account_id_token.go @@ -0,0 +1,127 @@ +package google + +import ( + "time" + + "fmt" + "strings" + + iamcredentials "google.golang.org/api/iamcredentials/v1" + "google.golang.org/api/idtoken" + "google.golang.org/api/option" + + "github.com/hashicorp/terraform-plugin-sdk/helper/schema" + "golang.org/x/net/context" +) + +const ( + userInfoScope = "https://www.googleapis.com/auth/userinfo.email" +) + +func dataSourceGoogleServiceAccountIdToken() *schema.Resource { + + return &schema.Resource{ + Read: dataSourceGoogleServiceAccountIdTokenRead, + Schema: map[string]*schema.Schema{ + "target_audience": { + Type: schema.TypeString, + Required: true, + }, + "target_service_account": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: validateRegexp("(" + strings.Join(PossibleServiceAccountNames, "|") + ")"), + }, + "delegates": { + Type: schema.TypeSet, + Optional: true, + Elem: &schema.Schema{ + Type: schema.TypeString, + ValidateFunc: validateRegexp(ServiceAccountLinkRegex), + }, + }, + "include_email": { + Type: schema.TypeBool, + Optional: true, + Default: false, + }, + // Not used currently + // https://github.com/googleapis/google-api-go-client/issues/542 + // "format": { + // Type: schema.TypeString, + // Optional: true, + // ValidateFunc: validation.StringInSlice([]string{ + // "FULL", "STANDARD"}, true), + // Default: "STANDARD", + // }, + "id_token": { + Type: schema.TypeString, + Sensitive: true, + Computed: true, + }, + }, + } +} + +func dataSourceGoogleServiceAccountIdTokenRead(d *schema.ResourceData, meta interface{}) error { + + config := meta.(*Config) + targetAudience := d.Get("target_audience").(string) + creds, err := config.GetCredentials([]string{userInfoScope}) + if err != nil { + return fmt.Errorf("error calling getCredentials(): %v", err) + } + + ts := 
creds.TokenSource + + // If the source token is just an access_token, all we can do is use the iamcredentials api to get an id_token + if _, ok := ts.(staticTokenSource); ok { + // Use + // https://cloud.google.com/iam/docs/reference/credentials/rest/v1/projects.serviceAccounts/generateIdToken + service := config.clientIamCredentials + name := fmt.Sprintf("projects/-/serviceAccounts/%s", d.Get("target_service_account").(string)) + tokenRequest := &iamcredentials.GenerateIdTokenRequest{ + Audience: targetAudience, + IncludeEmail: d.Get("include_email").(bool), + Delegates: convertStringSet(d.Get("delegates").(*schema.Set)), + } + at, err := service.Projects.ServiceAccounts.GenerateIdToken(name, tokenRequest).Do() + if err != nil { + return fmt.Errorf("error calling iamcredentials.GenerateIdToken: %v", err) + } + + d.SetId(time.Now().UTC().String()) + d.Set("id_token", at.Token) + + return nil + } + + tok, err := ts.Token() + if err != nil { + return fmt.Errorf("unable to get Token() from tokenSource: %v", err) + } + + // only user-credential TokenSources have refreshTokens + if tok.RefreshToken != "" { + return fmt.Errorf("unsupported Credential Type supplied. Use serviceAccount credentials") + } + ctx := context.Background() + co := []option.ClientOption{} + if creds.JSON != nil { + co = append(co, idtoken.WithCredentialsJSON(creds.JSON)) + } + + idTokenSource, err := idtoken.NewTokenSource(ctx, targetAudience, co...) + if err != nil { + return fmt.Errorf("unable to retrieve TokenSource: %v", err) + } + idToken, err := idTokenSource.Token() + if err != nil { + return fmt.Errorf("unable to retrieve Token: %v", err) + } + + d.SetId(time.Now().UTC().String()) + d.Set("id_token", idToken.AccessToken) + + return nil +} diff --git a/third_party/terraform/data_sources/data_source_google_storage_bucket_object.go b/third_party/terraform/data_sources/data_source_google_storage_bucket_object.go index 6350d436a7fa..8c3f8ae12326 100644 --- a/third_party/terraform/data_sources/data_source_google_storage_bucket_object.go +++ b/third_party/terraform/data_sources/data_source_google_storage_bucket_object.go @@ -49,6 +49,7 @@ func dataSourceGoogleStorageBucketObjectRead(d *schema.ResourceData, meta interf d.Set("self_link", res["selfLink"]) d.Set("storage_class", res["storageClass"]) d.Set("md5hash", res["md5Hash"]) + d.Set("metadata", res["metadata"]) d.SetId(bucket + "-" + name) diff --git a/third_party/terraform/data_sources/data_source_monitoring_notification_channel.go b/third_party/terraform/data_sources/data_source_monitoring_notification_channel.go index fe550c810343..6aac37b78e76 100644 --- a/third_party/terraform/data_sources/data_source_monitoring_notification_channel.go +++ b/third_party/terraform/data_sources/data_source_monitoring_notification_channel.go @@ -26,7 +26,7 @@ func dataSourceMonitoringNotificationChannel() *schema.Resource { func dataSourceMonitoringNotificationChannelRead(d *schema.ResourceData, meta interface{}) error { config := meta.(*Config) - url, err := replaceVars(d, config, "{{MonitoringBasePath}}projects/{{project}}/notificationChannels") + url, err := replaceVars(d, config, "{{MonitoringBasePath}}v3/projects/{{project}}/notificationChannels") if err != nil { return err } diff --git a/third_party/terraform/data_sources/data_source_monitoring_service.go b/third_party/terraform/data_sources/data_source_monitoring_service.go index 49e998496f75..34edd11c03a7 100644 --- a/third_party/terraform/data_sources/data_source_monitoring_service.go +++ 
b/third_party/terraform/data_sources/data_source_monitoring_service.go @@ -2,8 +2,9 @@ package google import ( "fmt" - "github.com/hashicorp/terraform-plugin-sdk/helper/schema" neturl "net/url" + + "github.com/hashicorp/terraform-plugin-sdk/helper/schema" ) type monitoringServiceTypeStateSetter func(map[string]interface{}, *schema.ResourceData, interface{}) error @@ -47,13 +48,13 @@ func dataSourceMonitoringServiceTypeReadFromList(listFilter string, typeStateSet return err } - listUrlTmpl := "{{MonitoringBasePath}}projects/{{project}}/services?filter=" + neturl.QueryEscape(filters) + listUrlTmpl := "{{MonitoringBasePath}}v3/projects/{{project}}/services?filter=" + neturl.QueryEscape(filters) url, err := replaceVars(d, config, listUrlTmpl) if err != nil { return err } - resp, err := sendRequest(config, "GET", project, url, nil, isMonitoringRetryableError) + resp, err := sendRequest(config, "GET", project, url, nil, isMonitoringConcurrentEditError) if err != nil { return fmt.Errorf("unable to list Monitoring Service for data source: %v", err) } diff --git a/third_party/terraform/data_sources/data_source_monitoring_service_test.go b/third_party/terraform/data_sources/data_source_monitoring_service_test.go index c68a1a5a0eee..b6f359c75d5c 100644 --- a/third_party/terraform/data_sources/data_source_monitoring_service_test.go +++ b/third_party/terraform/data_sources/data_source_monitoring_service_test.go @@ -8,7 +8,7 @@ import ( ) func TestAccDataSourceMonitoringService_AppEngine(t *testing.T) { - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ diff --git a/third_party/terraform/data_sources/data_source_secret_manager_secret_version.go.erb b/third_party/terraform/data_sources/data_source_secret_manager_secret_version.go.erb index 137f0c2cdad1..714b5ab07d78 100644 --- a/third_party/terraform/data_sources/data_source_secret_manager_secret_version.go.erb +++ b/third_party/terraform/data_sources/data_source_secret_manager_secret_version.go.erb @@ -1,6 +1,5 @@ <% autogen_exception -%> package google -<% unless version == "ga" -%> import ( "fmt" @@ -125,5 +124,3 @@ func dataSourceSecretManagerSecretVersionRead(d *schema.ResourceData, meta inter d.SetId(time.Now().UTC().String()) return nil } - -<% end -%> diff --git a/third_party/terraform/data_sources/data_source_storage_object_signed_url.go b/third_party/terraform/data_sources/data_source_storage_object_signed_url.go index b57d314e6cfd..a14a0ec5dd14 100644 --- a/third_party/terraform/data_sources/data_source_storage_object_signed_url.go +++ b/third_party/terraform/data_sources/data_source_storage_object_signed_url.go @@ -51,8 +51,9 @@ func dataSourceGoogleSignedUrl() *schema.Resource { Default: "", }, "credentials": { - Type: schema.TypeString, - Optional: true, + Type: schema.TypeString, + Sensitive: true, + Optional: true, }, "duration": { Type: schema.TypeString, diff --git a/third_party/terraform/data_sources/data_sql_database_instance.go b/third_party/terraform/data_sources/data_sql_database_instance.go new file mode 100644 index 000000000000..aa00e14cb58d --- /dev/null +++ b/third_party/terraform/data_sources/data_sql_database_instance.go @@ -0,0 +1,22 @@ +package google + +import ( + "github.com/hashicorp/terraform-plugin-sdk/helper/schema" +) + +func dataSourceSqlDatabaseInstance() *schema.Resource { + + dsSchema := datasourceSchemaFromResourceSchema(resourceSqlDatabaseInstance().Schema) + addRequiredFieldsToSchema(dsSchema, 
"name") + + return &schema.Resource{ + Read: dataSourceSqlDatabaseInstanceRead, + Schema: dsSchema, + } +} + +func dataSourceSqlDatabaseInstanceRead(d *schema.ResourceData, meta interface{}) error { + + return resourceSqlDatabaseInstanceRead(d, meta) + +} diff --git a/third_party/terraform/resources/common_operation_test.go b/third_party/terraform/resources/common_operation_test.go index 7b3c168dd1ec..db0b6722a9bd 100644 --- a/third_party/terraform/resources/common_operation_test.go +++ b/third_party/terraform/resources/common_operation_test.go @@ -55,7 +55,7 @@ func TestOperationWait_TimeoutsShouldRetry(t *testing.T) { testWaiter := TestWaiter{ runCount: 0, } - err := OperationWait(&testWaiter, "my-activity", 1, 0*time.Second) + err := OperationWait(&testWaiter, "my-activity", 1*time.Minute, 0*time.Second) if err != nil { t.Fatalf("unexpected error waiting for operation: got '%v', want 'nil'", err) } diff --git a/third_party/terraform/resources/resource_app_engine_application.go b/third_party/terraform/resources/resource_app_engine_application.go index 4baaab1ffeb6..892ce434289c 100644 --- a/third_party/terraform/resources/resource_app_engine_application.go +++ b/third_party/terraform/resources/resource_app_engine_application.go @@ -3,6 +3,7 @@ package google import ( "fmt" "log" + "time" "github.com/hashicorp/terraform-plugin-sdk/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/helper/schema" @@ -21,6 +22,11 @@ func resourceAppEngineApplication() *schema.Resource { State: schema.ImportStatePassthrough, }, + Timeouts: &schema.ResourceTimeout{ + Create: schema.DefaultTimeout(4 * time.Minute), + Update: schema.DefaultTimeout(4 * time.Minute), + }, + CustomizeDiff: customdiff.All( appEngineApplicationLocationIDCustomizeDiff, ), @@ -32,15 +38,18 @@ func resourceAppEngineApplication() *schema.Resource { Computed: true, ForceNew: true, ValidateFunc: validateProjectID(), + Description: `The project ID to create the application under.`, }, "auth_domain": { - Type: schema.TypeString, - Optional: true, - Computed: true, + Type: schema.TypeString, + Optional: true, + Computed: true, + Description: `The domain to authenticate users with when using App Engine's User API.`, }, "location_id": { - Type: schema.TypeString, - Required: true, + Type: schema.TypeString, + Required: true, + Description: `The location to serve the app from.`, }, "serving_status": { Type: schema.TypeString, @@ -51,69 +60,91 @@ func resourceAppEngineApplication() *schema.Resource { "USER_DISABLED", "SYSTEM_DISABLED", }, false), - Computed: true, + Computed: true, + Description: `The serving status of the app.`, }, - "feature_settings": { - Type: schema.TypeList, + "database_type": { + Type: schema.TypeString, Optional: true, + ValidateFunc: validation.StringInSlice([]string{ + "CLOUD_FIRESTORE", + "CLOUD_DATASTORE_COMPATIBILITY", + }, false), Computed: true, - MaxItems: 1, - Elem: appEngineApplicationFeatureSettingsResource(), + }, + "feature_settings": { + Type: schema.TypeList, + Optional: true, + Computed: true, + MaxItems: 1, + Description: `A block of optional settings to configure specific App Engine features:`, + Elem: appEngineApplicationFeatureSettingsResource(), }, "name": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `Unique name of the app.`, }, "app_id": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `Identifier of the app.`, }, "url_dispatch_rule": { - Type: schema.TypeList, - 
Computed: true, - Elem: appEngineApplicationURLDispatchRuleResource(), + Type: schema.TypeList, + Computed: true, + Description: `A list of dispatch rule blocks. Each block has a domain, path, and service field.`, + Elem: appEngineApplicationURLDispatchRuleResource(), }, "code_bucket": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `The GCS bucket code is being stored in for this app.`, }, "default_hostname": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `The default hostname for this app.`, }, "default_bucket": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `The GCS bucket content is being stored in for this app.`, }, "gcr_domain": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `The GCR domain used for storing managed Docker images for this app.`, }, "iap": { Type: schema.TypeList, Optional: true, - Description: `Settings for enabling Cloud Identity Aware Proxy`, MaxItems: 1, + Description: `Settings for enabling Cloud Identity Aware Proxy`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "enabled": { - Type: schema.TypeBool, - Optional: true, - Default: false, + Type: schema.TypeBool, + Optional: true, + Default: false, + Description: `Adapted for use with the app`, }, "oauth2_client_id": { - Type: schema.TypeString, - Required: true, + Type: schema.TypeString, + Required: true, + Description: `OAuth2 client ID to use for the authentication flow.`, }, "oauth2_client_secret": { - Type: schema.TypeString, - Required: true, - Sensitive: true, + Type: schema.TypeString, + Required: true, + Sensitive: true, + Description: `OAuth2 client secret to use for the authentication flow. 
The SHA-256 hash of the value is returned in the oauth2ClientSecretSha256 field.`, }, "oauth2_client_secret_sha256": { - Type: schema.TypeString, - Computed: true, - Sensitive: true, + Type: schema.TypeString, + Computed: true, + Sensitive: true, + Description: `Hex-encoded SHA-256 hash of the client secret.`, }, }, }, @@ -188,7 +219,7 @@ func resourceAppEngineApplicationCreate(d *schema.ResourceData, meta interface{} d.SetId(project) // Wait for the operation to complete - waitErr := appEngineOperationWait(config, op, project, "App Engine app to create") + waitErr := appEngineOperationWaitTime(config, op, project, "App Engine app to create", d.Timeout(schema.TimeoutCreate)) if waitErr != nil { d.SetId("") return waitErr @@ -215,6 +246,7 @@ func resourceAppEngineApplicationRead(d *schema.ResourceData, meta interface{}) d.Set("app_id", app.Id) d.Set("serving_status", app.ServingStatus) d.Set("gcr_domain", app.GcrDomain) + d.Set("database_type", app.DatabaseType) d.Set("project", pid) dispatchRules, err := flattenAppEngineApplicationDispatchRules(app.DispatchRules) if err != nil { @@ -259,13 +291,13 @@ func resourceAppEngineApplicationUpdate(d *schema.ResourceData, meta interface{} defer mutexKV.Unlock(lockName) log.Printf("[DEBUG] Updating App Engine App") - op, err := config.clientAppEngine.Apps.Patch(pid, app).UpdateMask("authDomain,servingStatus,featureSettings.splitHealthChecks").Do() + op, err := config.clientAppEngine.Apps.Patch(pid, app).UpdateMask("authDomain,databaseType,servingStatus,featureSettings.splitHealthChecks,iap").Do() if err != nil { return fmt.Errorf("Error updating App Engine application: %s", err.Error()) } // Wait for the operation to complete - waitErr := appEngineOperationWait(config, op, pid, "App Engine app to update") + waitErr := appEngineOperationWaitTime(config, op, pid, "App Engine app to update", d.Timeout(schema.TimeoutUpdate)) if waitErr != nil { return waitErr } @@ -285,6 +317,7 @@ func expandAppEngineApplication(d *schema.ResourceData, project string) (*appeng LocationId: d.Get("location_id").(string), Id: project, GcrDomain: d.Get("gcr_domain").(string), + DatabaseType: d.Get("database_type").(string), ServingStatus: d.Get("serving_status").(string), } featureSettings, err := expandAppEngineApplicationFeatureSettings(d) diff --git a/third_party/terraform/resources/resource_bigquery_table.go.erb b/third_party/terraform/resources/resource_bigquery_table.go similarity index 59% rename from third_party/terraform/resources/resource_bigquery_table.go.erb rename to third_party/terraform/resources/resource_bigquery_table.go index b46bbf817a27..3aada3d6018d 100644 --- a/third_party/terraform/resources/resource_bigquery_table.go.erb +++ b/third_party/terraform/resources/resource_bigquery_table.go @@ -1,5 +1,3 @@ -// <% autogen_exception -%> - package google import ( @@ -7,7 +5,8 @@ import ( "errors" "fmt" "log" - "regexp" + "reflect" + "sort" "github.com/hashicorp/terraform-plugin-sdk/helper/schema" "github.com/hashicorp/terraform-plugin-sdk/helper/structure" @@ -15,6 +14,40 @@ import ( "google.golang.org/api/bigquery/v2" ) +// JSONBytesEqual compares the JSON in two byte slices. 
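+// In this file it is used to compare BigQuery table schemas, which are JSON
+// arrays of field objects: both arrays are sorted by their "name" key before
+// a deep comparison, so a pure reordering of columns compares as equal.
+// For example, these two schemas are considered equal:
+//   [{"name":"b","type":"STRING"},{"name":"a","type":"INT64"}]
+//   [{"name":"a","type":"INT64"},{"name":"b","type":"STRING"}]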
+// Reference: https://stackoverflow.com/questions/32408890/how-to-compare-two-json-requests
+func JSONBytesEqual(a, b []byte) (bool, error) {
+ var j, j2 interface{}
+ if err := json.Unmarshal(a, &j); err != nil {
+ return false, err
+ }
+ jList := j.([]interface{})
+ sort.Slice(jList, func(i, k int) bool {
+ return jList[i].(map[string]interface{})["name"].(string) < jList[k].(map[string]interface{})["name"].(string)
+ })
+ if err := json.Unmarshal(b, &j2); err != nil {
+ return false, err
+ }
+ j2List := j2.([]interface{})
+ sort.Slice(j2List, func(i, k int) bool {
+ return j2List[i].(map[string]interface{})["name"].(string) < j2List[k].(map[string]interface{})["name"].(string)
+ })
+ return reflect.DeepEqual(j2List, jList), nil
+}
+
+// bigQueryTableSchemaDiffSuppress reports whether the old and new JSON schema strings are semantically equal.
+func bigQueryTableSchemaDiffSuppress(_, old, new string, _ *schema.ResourceData) bool {
+ oldBytes := []byte(old)
+ newBytes := []byte(new)
+
+ eq, err := JSONBytesEqual(oldBytes, newBytes)
+ if err != nil {
+ log.Printf("[DEBUG] Error comparing JSON bytes: %v, %v", old, new)
+ }
+
+ return eq
+}
+
 func resourceBigQueryTable() *schema.Resource {
 return &schema.Resource{
 Create: resourceBigQueryTableCreate,
@@ -29,30 +62,34 @@ func resourceBigQueryTable() *schema.Resource {
 // letters (a-z, A-Z), numbers (0-9), or underscores (_). The maximum
 // length is 1,024 characters.
 "table_id": {
- Type: schema.TypeString,
- Required: true,
- ForceNew: true,
+ Type: schema.TypeString,
+ Required: true,
+ ForceNew: true,
+ Description: `A unique ID for the resource. Changing this forces a new resource to be created.`,
 },
 // DatasetId: [Required] The ID of the dataset containing this table.
 "dataset_id": {
- Type: schema.TypeString,
- Required: true,
- ForceNew: true,
+ Type: schema.TypeString,
+ Required: true,
+ ForceNew: true,
+ Description: `The dataset ID to create the table in. Changing this forces a new resource to be created.`,
 },
 // ProjectId: [Required] The ID of the project containing this table.
 "project": {
- Type: schema.TypeString,
- Optional: true,
- Computed: true,
- ForceNew: true,
+ Type: schema.TypeString,
+ Optional: true,
+ Computed: true,
+ ForceNew: true,
+ Description: `The ID of the project in which the resource belongs.`,
 },
 // Description: [Optional] A user-friendly description of this table.
 "description": {
- Type: schema.TypeString,
- Optional: true,
+ Type: schema.TypeString,
+ Optional: true,
+ Description: `The field description.`,
 },
 // ExpirationTime: [Optional] The time when this table expires, in
@@ -60,9 +97,10 @@
 // indefinitely. Expired tables will be deleted and their storage
 // reclaimed.
 "expiration_time": {
- Type: schema.TypeInt,
- Optional: true,
- Computed: true,
+ Type: schema.TypeInt,
+ Optional: true,
+ Computed: true,
+ Description: `The time when this table expires, in milliseconds since the epoch. If not present, the table will persist indefinitely. Expired tables will be deleted and their storage reclaimed.`,
 },
 // ExternalDataConfiguration [Optional] Describes the data format,
@@ -70,30 +108,34 @@
 // By defining these properties, the data source can then be queried as
 // if it were a standard BigQuery table.
 "external_data_configuration": {
- Type: schema.TypeList,
- Optional: true,
- MaxItems: 1,
+ Type: schema.TypeList,
+ Optional: true,
+ MaxItems: 1,
+ Description: `Describes the data format, location, and other properties of a table stored outside of BigQuery.
By defining these properties, the data source can then be queried as if it were a standard BigQuery table.`,
 Elem: &schema.Resource{
 Schema: map[string]*schema.Schema{
 // Autodetect : [Required] If true, let BigQuery try to autodetect the
 // schema and format of the table.
 "autodetect": {
- Type: schema.TypeBool,
- Required: true,
+ Type: schema.TypeBool,
+ Required: true,
+ Description: `Let BigQuery try to autodetect the schema and format of the table.`,
 },
 // SourceFormat [Required] The data format.
 "source_format": {
- Type: schema.TypeString,
- Required: true,
+ Type: schema.TypeString,
+ Required: true,
+ Description: `The data format. Supported values are: "CSV", "GOOGLE_SHEETS", "NEWLINE_DELIMITED_JSON", "AVRO", "PARQUET", and "DATSTORE_BACKUP". To use "GOOGLE_SHEETS" the scopes must include "googleapis.com/auth/drive.readonly".`,
 ValidateFunc: validation.StringInSlice([]string{
 "CSV", "GOOGLE_SHEETS", "NEWLINE_DELIMITED_JSON", "AVRO", "DATSTORE_BACKUP", "PARQUET",
 }, false),
 },
 // SourceURIs [Required] The fully-qualified URIs that point to your data in Google Cloud.
 "source_uris": {
- Type: schema.TypeList,
- Required: true,
- Elem: &schema.Schema{Type: schema.TypeString},
+ Type: schema.TypeList,
+ Required: true,
+ Description: `A list of the fully-qualified URIs that point to your data in Google Cloud.`,
+ Elem: &schema.Schema{Type: schema.TypeString},
 },
 // Compression: [Optional] The compression type of the data source.
 "compression": {
@@ -101,35 +143,55 @@
 Optional: true,
 ValidateFunc: validation.StringInSlice([]string{"NONE", "GZIP"}, false),
 Default: "NONE",
+ Description: `The compression type of the data source. Valid values are "NONE" or "GZIP".`,
+ },
+ // Schema: [Optional] The schema for the data.
+ // Schema is required for CSV and JSON formats if autodetect is not on.
+ // Schema is disallowed for Google Cloud Bigtable, Cloud Datastore backups, Avro, ORC and Parquet formats.
+ "schema": {
+ Type: schema.TypeString,
+ Optional: true,
+ Computed: true,
+ ForceNew: true,
+ ValidateFunc: validation.ValidateJsonString,
+ StateFunc: func(v interface{}) string {
+ json, _ := structure.NormalizeJsonString(v)
+ return json
+ },
+ Description: `A JSON schema for the external table. Schema is required for CSV and JSON formats and is disallowed for Google Cloud Bigtable, Cloud Datastore backups, and Avro formats when using external tables.`,
 },
 // CsvOptions: [Optional] Additional properties to set if
 // sourceFormat is set to CSV.
 "csv_options": {
- Type: schema.TypeList,
- Optional: true,
- MaxItems: 1,
+ Type: schema.TypeList,
+ Optional: true,
+ MaxItems: 1,
+ Description: `Additional properties to set if source_format is set to "CSV".`,
 Elem: &schema.Resource{
 Schema: map[string]*schema.Schema{
 // Quote: [Required] The value that is used to quote data
 // sections in a CSV file.
 "quote": {
- Type: schema.TypeString,
- Required: true,
+ Type: schema.TypeString,
+ Required: true,
+ Description: `The value that is used to quote data sections in a CSV file. If your data does not contain quoted sections, set the property value to an empty string. If your data contains quoted newline characters, you must also set the allow_quoted_newlines property to true. The API-side default is ", specified in Terraform escaped as \".
Due to limitations with Terraform default values, this value is required to be explicitly set.`, }, // AllowJaggedRows: [Optional] Indicates if BigQuery should // accept rows that are missing trailing optional columns. "allow_jagged_rows": { - Type: schema.TypeBool, - Optional: true, - Default: false, + Type: schema.TypeBool, + Optional: true, + Default: false, + Description: `Indicates if BigQuery should accept rows that are missing trailing optional columns.`, }, // AllowQuotedNewlines: [Optional] Indicates if BigQuery // should allow quoted data sections that contain newline // characters in a CSV file. The default value is false. "allow_quoted_newlines": { - Type: schema.TypeBool, - Optional: true, - Default: false, + Type: schema.TypeBool, + Optional: true, + Default: false, + Description: `Indicates if BigQuery should allow quoted data sections that contain newline characters in a CSV file. The default value is false.`, }, // Encoding: [Optional] The character encoding of the data. // The supported values are UTF-8 or ISO-8859-1. @@ -138,43 +200,83 @@ func resourceBigQueryTable() *schema.Resource { Optional: true, ValidateFunc: validation.StringInSlice([]string{"ISO-8859-1", "UTF-8"}, false), Default: "UTF-8", + Description: `The character encoding of the data. The supported values are UTF-8 or ISO-8859-1.`, }, // FieldDelimiter: [Optional] The separator for fields in a CSV file. "field_delimiter": { - Type: schema.TypeString, - Optional: true, - Default: ",", + Type: schema.TypeString, + Optional: true, + Default: ",", + Description: `The separator for fields in a CSV file.`, }, // SkipLeadingRows: [Optional] The number of rows at the top // of a CSV file that BigQuery will skip when reading the data. "skip_leading_rows": { - Type: schema.TypeInt, - Optional: true, - Default: 0, + Type: schema.TypeInt, + Optional: true, + Default: 0, + Description: `The number of rows at the top of a CSV file that BigQuery will skip when reading the data.`, }, }, }, }, // GoogleSheetsOptions: [Optional] Additional options if sourceFormat is set to GOOGLE_SHEETS. "google_sheets_options": { - Type: schema.TypeList, - Optional: true, - MaxItems: 1, + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Description: `Additional options if source_format is set to "GOOGLE_SHEETS".`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ // Range: [Optional] Range of a sheet to query from. Only used when non-empty. // Typical format: !: "range": { - Type: schema.TypeString, - Optional: true, - AtLeastOneOf: []string{"external_data_configuration.0.google_sheets_options.0.range"}, + Type: schema.TypeString, + Optional: true, + Description: `Range of a sheet to query from. Only used when non-empty. At least one of range or skip_leading_rows must be set. Typical format: "sheet_name!top_left_cell_id:bottom_right_cell_id" For example: "sheet1!A1:B20"`, + AtLeastOneOf: []string{ + "external_data_configuration.0.google_sheets_options.0.skip_leading_rows", + "external_data_configuration.0.google_sheets_options.0.range", + }, }, // SkipLeadingRows: [Optional] The number of rows at the top // of the sheet that BigQuery will skip when reading the data. "skip_leading_rows": { - Type: schema.TypeInt, - Optional: true, - AtLeastOneOf: []string{"external_data_configuration.0.google_sheets_options.0.skip_leading_rows"}, + Type: schema.TypeInt, + Optional: true, + Description: `The number of rows at the top of the sheet that BigQuery will skip when reading the data. 
At least one of range or skip_leading_rows must be set.`,
+ AtLeastOneOf: []string{
+ "external_data_configuration.0.google_sheets_options.0.skip_leading_rows",
+ "external_data_configuration.0.google_sheets_options.0.range",
+ },
+ },
+ },
+ },
+
+ // HivePartitioningOptions: [Optional] Options for configuring hive partitioning detection.
+ "hive_partitioning_options": {
+ Type: schema.TypeList,
+ Optional: true,
+ MaxItems: 1,
+ Description: `When set, configures hive partitioning support. Not all storage formats support hive partitioning -- requesting hive partitioning on an unsupported format will lead to an error, as will providing an invalid specification.`,
+ Elem: &schema.Resource{
+ Schema: map[string]*schema.Schema{
+ // Mode: [Optional] [Experimental] When set, what mode of hive partitioning to use when reading data.
+ // Two modes are supported.
+ // * AUTO: automatically infer partition key name(s) and type(s).
+ // * STRINGS: automatically infer partition key name(s).
+ "mode": {
+ Type: schema.TypeString,
+ Optional: true,
+ Description: `When set, what mode of hive partitioning to use when reading data.`,
+ },
+ // SourceUriPrefix: [Optional] [Experimental] When hive partition detection is requested, a common prefix for all source URIs is required.
+ // The prefix must end immediately before the partition key encoding begins.
+ "source_uri_prefix": {
+ Type: schema.TypeString,
+ Optional: true,
+ Description: `When hive partition detection is requested, a common prefix for all source URIs is required. The prefix must end immediately before the partition key encoding begins.`,
 },
 },
 },
@@ -187,14 +289,16 @@
 // many bad records, an invalid error is returned in the job result.
 // The default value is false.
 "ignore_unknown_values": {
- Type: schema.TypeBool,
- Optional: true,
+ Type: schema.TypeBool,
+ Optional: true,
+ Description: `Indicates if BigQuery should allow extra values that are not represented in the table schema. If true, the extra values are ignored. If false, records with extra columns are treated as bad records, and if there are too many bad records, an invalid error is returned in the job result. The default value is false.`,
 },
 // MaxBadRecords: [Optional] The maximum number of bad records that
 // BigQuery can ignore when reading data.
 "max_bad_records": {
- Type: schema.TypeInt,
- Optional: true,
+ Type: schema.TypeInt,
+ Optional: true,
+ Description: `The maximum number of bad records that BigQuery can ignore when reading data.`,
 },
 },
 },
@@ -202,8 +306,9 @@
 // FriendlyName: [Optional] A descriptive name for this table.
 "friendly_name": {
- Type: schema.TypeString,
- Optional: true,
+ Type: schema.TypeString,
+ Optional: true,
+ Description: `A descriptive name for the table.`,
 },
 // Labels: [Experimental] The labels associated with this table. You can
@@ -214,15 +319,13 @@
 // start with a letter and each label in the list must have a different
 // key.
 "labels": {
- Type: schema.TypeMap,
- Optional: true,
- Elem: &schema.Schema{Type: schema.TypeString},
+ Type: schema.TypeMap,
+ Optional: true,
+ Elem: &schema.Schema{Type: schema.TypeString},
+ Description: `A mapping of labels to assign to the resource.`,
 },
 // Schema: [Optional] Describes the schema of this table.
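+ // Note: the configured JSON is normalized via the StateFunc and compared
+ // with bigQueryTableSchemaDiffSuppress (the DiffSuppressFunc added below),
+ // so schemas differing only in field order do not register as a diff.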
- // Schema is required for external tables in CSV and JSON formats - // and disallowed for Google Cloud Bigtable, Cloud Datastore backups, - // and Avro formats. "schema": { Type: schema.TypeString, Optional: true, @@ -232,29 +335,34 @@ func resourceBigQueryTable() *schema.Resource { json, _ := structure.NormalizeJsonString(v) return json }, + DiffSuppressFunc: bigQueryTableSchemaDiffSuppress, + Description: `A JSON schema for the table.`, }, // View: [Optional] If specified, configures this table as a view. "view": { - Type: schema.TypeList, - Optional: true, - MaxItems: 1, + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Description: `If specified, configures this table as a view.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ // Query: [Required] A query that BigQuery executes when the view is // referenced. "query": { - Type: schema.TypeString, - Required: true, + Type: schema.TypeString, + Required: true, + Description: `A query that BigQuery executes when the view is referenced.`, }, // UseLegacySQL: [Optional] Specifies whether to use BigQuery's // legacy SQL for this view. The default value is true. If set to // false, the view will use BigQuery's standard SQL: "use_legacy_sql": { - Type: schema.TypeBool, - Optional: true, - Default: true, + Type: schema.TypeBool, + Optional: true, + Default: true, + Description: `Specifies whether to use BigQuery's legacy SQL for this view. The default value is true. If set to false, the view will use BigQuery's standard SQL`, }, }, }, @@ -263,86 +371,96 @@ func resourceBigQueryTable() *schema.Resource { // TimePartitioning: [Experimental] If specified, configures time-based // partitioning for this table. "time_partitioning": { - Type: schema.TypeList, - Optional: true, - MaxItems: 1, + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Description: `If specified, configures time-based partitioning for this table.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ // ExpirationMs: [Optional] Number of milliseconds for which to keep the // storage for a partition. "expiration_ms": { - Type: schema.TypeInt, - Optional: true, + Type: schema.TypeInt, + Optional: true, + Description: `Number of milliseconds for which to keep the storage for a partition.`, }, - // Type: [Required] The only type supported is DAY, which will generate - // one partition per day based on data loading time. + // Type: [Required] The supported types are DAY and HOUR, which will generate + // one partition per day or hour based on data loading time. "type": { Type: schema.TypeString, Required: true, - ValidateFunc: validation.StringInSlice([]string{"DAY"}, false), + Description: `The supported types are DAY and HOUR, which will generate one partition per day or hour based on data loading time.`, + ValidateFunc: validation.StringInSlice([]string{"DAY", "HOUR"}, false), }, // Field: [Optional] The field used to determine how to create a time-based // partition. If time-based partitioning is enabled without this value, the // table is partitioned based on the load time. "field": { - Type: schema.TypeString, - Optional: true, - ForceNew: true, + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Description: `The field used to determine how to create a time-based partition. 
If time-based partitioning is enabled without this value, the table is partitioned based on the load time.`, }, // RequirePartitionFilter: [Optional] If set to true, queries over this table // require a partition filter that can be used for partition elimination to be // specified. "require_partition_filter": { - Type: schema.TypeBool, - Optional: true, + Type: schema.TypeBool, + Optional: true, + Description: `If set to true, queries over this table require a partition filter that can be used for partition elimination to be specified.`, }, }, }, }, - <% unless version == 'ga' -%> // RangePartitioning: [Optional] If specified, configures range-based // partitioning for this table. - "range_partitioning": &schema.Schema{ - Type: schema.TypeList, - Optional: true, - MaxItems: 1, + "range_partitioning": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Description: `If specified, configures range-based partitioning for this table.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ // Field: [Required] The field used to determine how to create a range-based // partition. "field": { - Type: schema.TypeString, - Required: true, - ForceNew: true, + Type: schema.TypeString, + Required: true, + ForceNew: true, + Description: `The field used to determine how to create a range-based partition.`, }, // Range: [Required] Information required to partition based on ranges. - "range": &schema.Schema{ - Type: schema.TypeList, - Required: true, - MaxItems: 1, + "range": { + Type: schema.TypeList, + Required: true, + MaxItems: 1, + Description: `Information required to partition based on ranges. Structure is documented below.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ // Start: [Required] Start of the range partitioning, inclusive. "start": { - Type: schema.TypeInt, - Required: true, + Type: schema.TypeInt, + Required: true, + Description: `Start of the range partitioning, inclusive.`, }, // End: [Required] End of the range partitioning, exclusive. "end": { - Type: schema.TypeInt, - Required: true, + Type: schema.TypeInt, + Required: true, + Description: `End of the range partitioning, exclusive.`, }, // Interval: [Required] The width of each range within the partition. "interval": { - Type: schema.TypeInt, - Required: true, + Type: schema.TypeInt, + Required: true, + Description: `The width of each range within the partition.`, }, }, }, @@ -350,27 +468,29 @@ func resourceBigQueryTable() *schema.Resource { }, }, }, - <% end -%> // Clustering: [Optional] Specifies column names to use for data clustering. Up to four // top-level columns are allowed, and should be specified in descending priority order. - "clustering": &schema.Schema{ - Type: schema.TypeList, - Optional: true, - ForceNew: true, - MaxItems: 4, - Elem: &schema.Schema{Type: schema.TypeString}, + "clustering": { + Type: schema.TypeList, + Optional: true, + ForceNew: true, + MaxItems: 4, + Description: `Specifies column names to use for data clustering. Up to four top-level columns are allowed, and should be specified in descending priority order.`, + Elem: &schema.Schema{Type: schema.TypeString}, }, "encryption_configuration": { - Type: schema.TypeList, - Optional: true, - ForceNew: true, - MaxItems: 1, + Type: schema.TypeList, + Optional: true, + ForceNew: true, + MaxItems: 1, + Description: `Specifies how the table should be encrypted. 
If left blank, the table will be encrypted with a Google-managed key; that process is transparent to the user.`,
 Elem: &schema.Resource{
 Schema: map[string]*schema.Schema{
 "kms_key_name": {
- Type: schema.TypeString,
- Required: true,
+ Type: schema.TypeString,
+ Required: true,
+ Description: `The self link or full name of a key which should be used to encrypt this table. Note that the default bigquery service account will need to have encrypt/decrypt permissions on this key - you may want to see the google_bigquery_default_service_account datasource and the google_kms_crypto_key_iam_binding resource.`,
 },
 },
 },
@@ -379,56 +499,64 @@
 // CreationTime: [Output-only] The time when this table was created, in
 // milliseconds since the epoch.
 "creation_time": {
- Type: schema.TypeInt,
- Computed: true,
+ Type: schema.TypeInt,
+ Computed: true,
+ Description: `The time when this table was created, in milliseconds since the epoch.`,
 },
 // Etag: [Output-only] A hash of this resource.
 "etag": {
- Type: schema.TypeString,
- Computed: true,
+ Type: schema.TypeString,
+ Computed: true,
+ Description: `A hash of the resource.`,
 },
 // LastModifiedTime: [Output-only] The time when this table was last
 // modified, in milliseconds since the epoch.
 "last_modified_time": {
- Type: schema.TypeInt,
- Computed: true,
+ Type: schema.TypeInt,
+ Computed: true,
+ Description: `The time when this table was last modified, in milliseconds since the epoch.`,
 },
 // Location: [Output-only] The geographic location where the table
 // resides. This value is inherited from the dataset.
 "location": {
- Type: schema.TypeString,
- Computed: true,
+ Type: schema.TypeString,
+ Computed: true,
+ Description: `The geographic location where the table resides. This value is inherited from the dataset.`,
 },
 // NumBytes: [Output-only] The size of this table in bytes, excluding
 // any data in the streaming buffer.
 "num_bytes": {
- Type: schema.TypeInt,
- Computed: true,
+ Type: schema.TypeInt,
+ Computed: true,
+ Description: `The size of this table in bytes, excluding any data in the streaming buffer.`,
 },
 // NumLongTermBytes: [Output-only] The number of bytes in the table that
 // are considered "long-term storage".
 "num_long_term_bytes": {
- Type: schema.TypeInt,
- Computed: true,
+ Type: schema.TypeInt,
+ Computed: true,
+ Description: `The number of bytes in the table that are considered "long-term storage".`,
 },
 // NumRows: [Output-only] The number of rows of data in this table,
 // excluding any data in the streaming buffer.
 "num_rows": {
- Type: schema.TypeInt,
- Computed: true,
+ Type: schema.TypeInt,
+ Computed: true,
+ Description: `The number of rows of data in this table, excluding any data in the streaming buffer.`,
 },
 // SelfLink: [Output-only] A URL that can be used to access this
 // resource again.
 "self_link": {
- Type: schema.TypeString,
- Computed: true,
+ Type: schema.TypeString,
+ Computed: true,
+ Description: `The URI of the created resource.`,
 },
 // Type: [Output-only] Describes the table type. The following values
@@ -437,8 +565,9 @@
 // in an external storage system, such as Google Cloud Storage. The
 // default value is TABLE.
 "type": {
- Type: schema.TypeString,
- Computed: true,
+ Type: schema.TypeString,
+ Computed: true,
+ Description: `Describes the table type.`,
 },
 },
 }
@@ -514,7 +643,6 @@ func resourceTable(d *schema.ResourceData, meta interface{}) (*bigquery.Table, e
 table.TimePartitioning = expandTimePartitioning(v)
 }
- <% unless version == 'ga' -%>
 if v, ok := d.GetOk("range_partitioning"); ok {
 rangePartitioning, err := expandRangePartitioning(v)
 if err != nil {
@@ -523,7 +651,6 @@ func resourceTable(d *schema.ResourceData, meta interface{}) (*bigquery.Table, e
 table.RangePartitioning = rangePartitioning
 }
- <% end -%>
 if v, ok := d.GetOk("clustering"); ok {
 table.Clustering = &bigquery.Clustering{
@@ -558,7 +685,6 @@ func resourceBigQueryTableCreate(d *schema.ResourceData, meta interface{}) error
 }
 log.Printf("[INFO] BigQuery table %s has been created", res.Id)
-
 d.SetId(fmt.Sprintf("projects/%s/datasets/%s/tables/%s", res.TableReference.ProjectId, res.TableReference.DatasetId, res.TableReference.TableId))
 return resourceBigQueryTableRead(d, meta)
@@ -605,6 +731,24 @@ func resourceBigQueryTableRead(d *schema.ResourceData, meta interface{}) error {
 return err
 }
+ if v, ok := d.GetOk("external_data_configuration"); ok {
+ // The API response doesn't return the `external_data_configuration.schema`
+ // used when creating the table, and it cannot be queried.
+ // After creation, a computed schema is stored in the toplevel `schema`,
+ // which combines `external_data_configuration.schema`
+ // with any hive partitioning fields found in the `source_uri_prefix`.
+ // So just assume the configured schema has been applied after successful
+ // creation, by copying the configured value back into the resource schema.
+ // This prevents the value read back here from being reported as a change.
+ // (external_data_configuration is a MaxItems: 1 list, so the first element
+ // below is the only one.)
+ // The `ForceNew=true` on `external_data_configuration.schema` ensures that
+ // changing the configured input schema will recreate the resource, as
+ // users expect.
+ edc := v.([]interface{})[0].(map[string]interface{}) + if edc["schema"] != nil { + externalDataConfiguration[0]["schema"] = edc["schema"] + } + } + d.Set("external_data_configuration", externalDataConfiguration) } @@ -614,13 +758,11 @@ func resourceBigQueryTableRead(d *schema.ResourceData, meta interface{}) error { } } - <% unless version == 'ga' -%> if res.RangePartitioning != nil { if err := d.Set("range_partitioning", flattenRangePartitioning(res.RangePartitioning)); err != nil { return err } } - <% end -%> if res.Clustering != nil { d.Set("clustering", res.Clustering.Fields) @@ -719,12 +861,22 @@ func expandExternalDataConfiguration(cfg interface{}) (*bigquery.ExternalDataCon if v, ok := raw["google_sheets_options"]; ok { edc.GoogleSheetsOptions = expandGoogleSheetsOptions(v) } + if v, ok := raw["hive_partitioning_options"]; ok { + edc.HivePartitioningOptions = expandHivePartitioningOptions(v) + } if v, ok := raw["ignore_unknown_values"]; ok { edc.IgnoreUnknownValues = v.(bool) } if v, ok := raw["max_bad_records"]; ok { edc.MaxBadRecords = int64(v.(int)) } + if v, ok := raw["schema"]; ok { + schema, err := expandSchema(v) + if err != nil { + return nil, err + } + edc.Schema = schema + } if v, ok := raw["source_format"]; ok { edc.SourceFormat = v.(string) } @@ -751,6 +903,10 @@ func flattenExternalDataConfiguration(edc *bigquery.ExternalDataConfiguration) ( result["google_sheets_options"] = flattenGoogleSheetsOptions(edc.GoogleSheetsOptions) } + if edc.HivePartitioningOptions != nil { + result["hive_partitioning_options"] = flattenHivePartitioningOptions(edc.HivePartitioningOptions) + } + if edc.IgnoreUnknownValues { result["ignore_unknown_values"] = edc.IgnoreUnknownValues } @@ -865,6 +1021,39 @@ func flattenGoogleSheetsOptions(opts *bigquery.GoogleSheetsOptions) []map[string return []map[string]interface{}{result} } +func expandHivePartitioningOptions(configured interface{}) *bigquery.HivePartitioningOptions { + if len(configured.([]interface{})) == 0 { + return nil + } + + raw := configured.([]interface{})[0].(map[string]interface{}) + opts := &bigquery.HivePartitioningOptions{} + + if v, ok := raw["mode"]; ok { + opts.Mode = v.(string) + } + + if v, ok := raw["source_uri_prefix"]; ok { + opts.SourceUriPrefix = v.(string) + } + + return opts +} + +func flattenHivePartitioningOptions(opts *bigquery.HivePartitioningOptions) []map[string]interface{} { + result := map[string]interface{}{} + + if opts.Mode != "" { + result["mode"] = opts.Mode + } + + if opts.SourceUriPrefix != "" { + result["source_uri_prefix"] = opts.SourceUriPrefix + } + + return []map[string]interface{}{result} +} + func expandSchema(raw interface{}) (*bigquery.TableSchema, error) { var fields []*bigquery.TableFieldSchema @@ -907,7 +1096,6 @@ func expandTimePartitioning(configured interface{}) *bigquery.TimePartitioning { return tp } -<% unless version == 'ga' -%> func expandRangePartitioning(configured interface{}) (*bigquery.RangePartitioning, error) { if configured == nil { return nil, nil @@ -931,15 +1119,15 @@ func expandRangePartitioning(configured interface{}) (*bigquery.RangePartitionin rangeJson := rangeLs[0].(map[string]interface{}) rp.Range = &bigquery.RangePartitioningRange{ - Start: int64(rangeJson["start"].(int)), - End: int64(rangeJson["end"].(int)), - Interval: int64(rangeJson["interval"].(int)), + Start: int64(rangeJson["start"].(int)), + End: int64(rangeJson["end"].(int)), + Interval: int64(rangeJson["interval"].(int)), + ForceSendFields: []string{"Start"}, } } return rp, nil } -<% end -%> func 
flattenEncryptionConfiguration(ec *bigquery.EncryptionConfiguration) []map[string]interface{} { return []map[string]interface{}{{"kms_key_name": ec.KmsKeyName}} @@ -963,7 +1151,6 @@ func flattenTimePartitioning(tp *bigquery.TimePartitioning) []map[string]interfa return []map[string]interface{}{result} } -<% unless version == 'ga' -%> func flattenRangePartitioning(rp *bigquery.RangePartitioning) []map[string]interface{} { result := map[string]interface{}{ "field": rp.Field, @@ -978,7 +1165,6 @@ func flattenRangePartitioning(rp *bigquery.RangePartitioning) []map[string]inter return []map[string]interface{}{result} } -<% end -%> func expandView(configured interface{}) *bigquery.ViewDefinition { raw := configured.([]interface{})[0].(map[string]interface{}) diff --git a/third_party/terraform/resources/resource_bigtable_gc_policy.go b/third_party/terraform/resources/resource_bigtable_gc_policy.go index 28caf60de89c..30b6252315a6 100644 --- a/third_party/terraform/resources/resource_bigtable_gc_policy.go +++ b/third_party/terraform/resources/resource_bigtable_gc_policy.go @@ -28,60 +28,69 @@ func resourceBigtableGCPolicy() *schema.Resource { Required: true, ForceNew: true, DiffSuppressFunc: compareResourceNames, + Description: `The name of the Bigtable instance.`, }, "table": { - Type: schema.TypeString, - Required: true, - ForceNew: true, + Type: schema.TypeString, + Required: true, + ForceNew: true, + Description: `The name of the table.`, }, "column_family": { - Type: schema.TypeString, - Required: true, - ForceNew: true, + Type: schema.TypeString, + Required: true, + ForceNew: true, + Description: `The name of the column family.`, }, "mode": { Type: schema.TypeString, Optional: true, ForceNew: true, + Description: `If multiple policies are set, you should choose between UNION and INTERSECTION.`, ValidateFunc: validation.StringInSlice([]string{GCPolicyModeIntersection, GCPolicyModeUnion}, false), }, "max_age": { - Type: schema.TypeList, - Optional: true, - ForceNew: true, + Type: schema.TypeList, + Optional: true, + ForceNew: true, + Description: `GC policy that applies to all cells older than the given age.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "days": { - Type: schema.TypeInt, - Required: true, + Type: schema.TypeInt, + Required: true, + Description: `Number of days before applying the GC policy.`, }, }, }, }, "max_version": { - Type: schema.TypeList, - Optional: true, - ForceNew: true, + Type: schema.TypeList, + Optional: true, + ForceNew: true, + Description: `GC policy that applies to all versions of a cell except for the most recent.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "number": { - Type: schema.TypeInt, - Required: true, + Type: schema.TypeInt, + Required: true, + Description: `Number of versions before applying the GC policy.`, }, }, }, }, "project": { - Type: schema.TypeString, - Optional: true, - Computed: true, - ForceNew: true, + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + Description: `The ID of the project in which the resource belongs. 
If it is not provided, the provider project is used.`, }, }, } diff --git a/third_party/terraform/resources/resource_bigtable_instance.go b/third_party/terraform/resources/resource_bigtable_instance.go index 0f76789b3e2d..1e18c103d5fc 100644 --- a/third_party/terraform/resources/resource_bigtable_instance.go +++ b/third_party/terraform/resources/resource_bigtable_instance.go @@ -27,26 +27,43 @@ func resourceBigtableInstance() *schema.Resource { resourceBigtableInstanceClusterReorderTypeList, ), + SchemaVersion: 1, + StateUpgraders: []schema.StateUpgrader{ + { + Type: resourceBigtableInstanceResourceV0().CoreConfigSchema().ImpliedType(), + Upgrade: resourceBigtableInstanceUpgradeV0, + Version: 0, + }, + }, + + // ---------------------------------------------------------------------- + // IMPORTANT: Do not add any additional ForceNew fields to this resource. + // Destroying/recreating instances can lead to data loss for users. + // ---------------------------------------------------------------------- Schema: map[string]*schema.Schema{ "name": { - Type: schema.TypeString, - Required: true, - ForceNew: true, + Type: schema.TypeString, + Required: true, + ForceNew: true, + Description: `The name (also called Instance Id in the Cloud Console) of the Cloud Bigtable instance.`, }, "cluster": { - Type: schema.TypeList, - Optional: true, - Computed: true, + Type: schema.TypeList, + Optional: true, + Computed: true, + Description: `A block of cluster configuration options. This can be specified at least once, and up to 4 times.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "cluster_id": { - Type: schema.TypeString, - Required: true, + Type: schema.TypeString, + Required: true, + Description: `The ID of the Cloud Bigtable cluster.`, }, "zone": { - Type: schema.TypeString, - Required: true, + Type: schema.TypeString, + Required: true, + Description: `The zone to create the Cloud Bigtable cluster in. Each cluster must have a different zone in the same region. Zones that support Bigtable instances are noted on the Cloud Bigtable locations page.`, }, "num_nodes": { Type: schema.TypeInt, @@ -54,21 +71,24 @@ func resourceBigtableInstance() *schema.Resource { // DEVELOPMENT instances could get returned with either zero or one node, // so mark as computed. Computed: true, - ValidateFunc: validation.IntAtLeast(3), + ValidateFunc: validation.IntAtLeast(1), + Description: `The number of nodes in your Cloud Bigtable cluster. Required, with a minimum of 1 for a PRODUCTION instance. Must be left unset for a DEVELOPMENT instance.`, }, "storage_type": { Type: schema.TypeString, Optional: true, Default: "SSD", ValidateFunc: validation.StringInSlice([]string{"SSD", "HDD"}, false), + Description: `The storage type to use. One of "SSD" or "HDD". Defaults to "SSD".`, }, }, }, }, "display_name": { - Type: schema.TypeString, - Optional: true, - Computed: true, + Type: schema.TypeString, + Optional: true, + Computed: true, + Description: `The human-readable display name of the Bigtable instance. Defaults to the instance name.`, }, "instance_type": { @@ -76,13 +96,22 @@ func resourceBigtableInstance() *schema.Resource { Optional: true, Default: "PRODUCTION", ValidateFunc: validation.StringInSlice([]string{"DEVELOPMENT", "PRODUCTION"}, false), + Description: `The instance type to create. One of "DEVELOPMENT" or "PRODUCTION". 
Defaults to "PRODUCTION".`, + }, + + "deletion_protection": { + Type: schema.TypeBool, + Optional: true, + Default: true, + Description: `Whether or not to allow Terraform to destroy the instance. Unless this field is set to false in Terraform state, a terraform destroy or terraform apply that would delete the instance will fail.`, }, "project": { - Type: schema.TypeString, - Optional: true, - Computed: true, - ForceNew: true, + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + Description: `The ID of the project in which the resource belongs. If it is not provided, the provider project is used.`, }, }, } @@ -231,6 +260,9 @@ func resourceBigtableInstanceUpdate(d *schema.ResourceData, meta interface{}) er } func resourceBigtableInstanceDestroy(d *schema.ResourceData, meta interface{}) error { + if d.Get("deletion_protection").(bool) { + return fmt.Errorf("cannot destroy instance without setting deletion_protection=false and running `terraform apply`") + } config := meta.(*Config) ctx := context.Background() diff --git a/third_party/terraform/resources/resource_bigtable_instance_migrate.go b/third_party/terraform/resources/resource_bigtable_instance_migrate.go new file mode 100644 index 000000000000..ebaaf741be54 --- /dev/null +++ b/third_party/terraform/resources/resource_bigtable_instance_migrate.go @@ -0,0 +1,80 @@ +package google + +import ( + "log" + + "github.com/hashicorp/terraform-plugin-sdk/helper/schema" + "github.com/hashicorp/terraform-plugin-sdk/helper/validation" +) + +func resourceBigtableInstanceResourceV0() *schema.Resource { + return &schema.Resource{ + Schema: map[string]*schema.Schema{ + "name": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "cluster": { + Type: schema.TypeList, + Optional: true, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "cluster_id": { + Type: schema.TypeString, + Required: true, + }, + "zone": { + Type: schema.TypeString, + Required: true, + }, + "num_nodes": { + Type: schema.TypeInt, + Optional: true, + // DEVELOPMENT instances could get returned with either zero or one node, + // so mark as computed. 
+ Computed: true, + ValidateFunc: validation.IntAtLeast(1), + }, + "storage_type": { + Type: schema.TypeString, + Optional: true, + Default: "SSD", + ValidateFunc: validation.StringInSlice([]string{"SSD", "HDD"}, false), + }, + }, + }, + }, + "display_name": { + Type: schema.TypeString, + Optional: true, + Computed: true, + }, + + "instance_type": { + Type: schema.TypeString, + Optional: true, + Default: "PRODUCTION", + ValidateFunc: validation.StringInSlice([]string{"DEVELOPMENT", "PRODUCTION"}, false), + }, + + "project": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + }, + }, + } +} + +func resourceBigtableInstanceUpgradeV0(rawState map[string]interface{}, meta interface{}) (map[string]interface{}, error) { + log.Printf("[DEBUG] Attributes before migration: %#v", rawState) + + rawState["deletion_protection"] = true + + log.Printf("[DEBUG] Attributes after migration: %#v", rawState) + return rawState, nil +} diff --git a/third_party/terraform/resources/resource_bigtable_table.go b/third_party/terraform/resources/resource_bigtable_table.go index c5d001547d77..57ee358601c1 100644 --- a/third_party/terraform/resources/resource_bigtable_table.go +++ b/third_party/terraform/resources/resource_bigtable_table.go @@ -12,28 +12,35 @@ func resourceBigtableTable() *schema.Resource { return &schema.Resource{ Create: resourceBigtableTableCreate, Read: resourceBigtableTableRead, + Update: resourceBigtableTableUpdate, Delete: resourceBigtableTableDestroy, Importer: &schema.ResourceImporter{ State: resourceBigtableTableImport, }, + // ---------------------------------------------------------------------- + // IMPORTANT: Do not add any additional ForceNew fields to this resource. + // Destroying/recreating tables can lead to data loss for users. + // ---------------------------------------------------------------------- Schema: map[string]*schema.Schema{ "name": { - Type: schema.TypeString, - Required: true, - ForceNew: true, + Type: schema.TypeString, + Required: true, + ForceNew: true, + Description: `The name of the table.`, }, "column_family": { - Type: schema.TypeSet, - Optional: true, - ForceNew: true, + Type: schema.TypeSet, + Optional: true, + Description: `A group of columns within a table which share a common configuration. This can be specified multiple times.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "family": { - Type: schema.TypeString, - Required: true, + Type: schema.TypeString, + Required: true, + Description: `The name of the column family.`, }, }, }, @@ -44,20 +51,23 @@ func resourceBigtableTable() *schema.Resource { Required: true, ForceNew: true, DiffSuppressFunc: compareResourceNames, + Description: `The name of the Bigtable instance.`, }, "split_keys": { - Type: schema.TypeList, - Optional: true, - ForceNew: true, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeList, + Optional: true, + ForceNew: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Description: `A list of predefined keys to split the table on. !> Warning: Modifying the split_keys of an existing table will cause Terraform to delete/recreate the entire google_bigtable_table resource.`, }, "project": { - Type: schema.TypeString, - Optional: true, - Computed: true, - ForceNew: true, + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + Description: `The ID of the project in which the resource belongs. 
If it is not provided, the provider project is used.`, }, }, } @@ -153,6 +163,54 @@ func resourceBigtableTableRead(d *schema.ResourceData, meta interface{}) error { return nil } +func resourceBigtableTableUpdate(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + ctx := context.Background() + + project, err := getProject(d, config) + if err != nil { + return err + } + + instanceName := GetResourceNameFromSelfLink(d.Get("instance_name").(string)) + c, err := config.bigtableClientFactory.NewAdminClient(project, instanceName) + if err != nil { + return fmt.Errorf("Error starting admin client. %s", err) + } + defer c.Close() + + o, n := d.GetChange("column_family") + oSet := o.(*schema.Set) + nSet := n.(*schema.Set) + name := d.Get("name").(string) + + // Add column families that are in new but not in old + for _, new := range nSet.Difference(oSet).List() { + column := new.(map[string]interface{}) + + if v, ok := column["family"]; ok { + log.Printf("[DEBUG] adding column family %q", v) + if err := c.CreateColumnFamily(ctx, name, v.(string)); err != nil { + return fmt.Errorf("Error creating column family %q: %s", v, err) + } + } + } + + // Remove column families that are in old but not in new + for _, old := range oSet.Difference(nSet).List() { + column := old.(map[string]interface{}) + + if v, ok := column["family"]; ok { + log.Printf("[DEBUG] removing column family %q", v) + if err := c.DeleteColumnFamily(ctx, name, v.(string)); err != nil { + return fmt.Errorf("Error deleting column family %q: %s", v, err) + } + } + } + + return resourceBigtableTableRead(d, meta) +} + func resourceBigtableTableDestroy(d *schema.ResourceData, meta interface{}) error { config := meta.(*Config) ctx := context.Background() diff --git a/third_party/terraform/resources/resource_cloudfunctions_function.go b/third_party/terraform/resources/resource_cloudfunctions_function.go index 5023aaba6164..8ae4bacb4ebb 100644 --- a/third_party/terraform/resources/resource_cloudfunctions_function.go +++ b/third_party/terraform/resources/resource_cloudfunctions_function.go @@ -1,6 +1,8 @@ package google import ( + "regexp" + "github.com/hashicorp/terraform-plugin-sdk/helper/schema" "github.com/hashicorp/terraform-plugin-sdk/helper/validation" "google.golang.org/api/cloudfunctions/v1" @@ -50,6 +52,23 @@ func (s *cloudFunctionId) cloudFunctionId() string { return fmt.Sprintf("projects/%s/locations/%s/functions/%s", s.Project, s.Region, s.Name) } +// Matches all international lowercase letters, numbers, underscores, and dashes. +var labelKeyRegex = regexp.MustCompile(`^[\p{Ll}0-9_-]+$`) + +func labelKeyValidator(val interface{}, key string) (warns []string, errs []error) { + if val == nil { + return + } + + m := val.(map[string]interface{}) + for k := range m { + if !labelKeyRegex.MatchString(k) { + errs = append(errs, fmt.Errorf("%q is an invalid label key. See https://cloud.google.com/resource-manager/docs/creating-managing-labels#requirements", k)) + } + } + return +} + func (s *cloudFunctionId) locationId() string { return fmt.Sprintf("projects/%s/locations/%s", s.Project, s.Region) } @@ -113,47 +132,55 @@ func resourceCloudFunctionsFunction() *schema.Resource { Type: schema.TypeString, Required: true, ForceNew: true, + Description: `A user-defined name of the function. 
Function names must be unique globally.`, ValidateFunc: validateResourceCloudFunctionsFunctionName, }, "source_archive_bucket": { - Type: schema.TypeString, - Optional: true, + Type: schema.TypeString, + Optional: true, + Description: `The GCS bucket containing the zip archive which contains the function.`, }, "source_archive_object": { - Type: schema.TypeString, - Optional: true, + Type: schema.TypeString, + Optional: true, + Description: `The source archive object (file) in the archive bucket.`, }, "source_repository": { Type: schema.TypeList, Optional: true, MaxItems: 1, + Description: `Represents parameters related to the source repository where a function is hosted. Cannot be set alongside source_archive_bucket or source_archive_object.`, ConflictsWith: []string{"source_archive_bucket", "source_archive_object"}, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "url": { - Type: schema.TypeString, - Required: true, + Type: schema.TypeString, + Required: true, + Description: `The URL pointing to the hosted repository where the function is defined.`, }, "deployed_url": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `The URL pointing to the hosted repository where the function was defined at the time of deployment.`, }, }, }, }, "description": { - Type: schema.TypeString, - Optional: true, + Type: schema.TypeString, + Optional: true, + Description: `Description of the function.`, }, "available_memory_mb": { - Type: schema.TypeInt, - Optional: true, - Default: functionDefaultAllowedMemoryMb, + Type: schema.TypeInt, + Optional: true, + Default: functionDefaultAllowedMemoryMb, + Description: `Memory (in MB) available to the function. Default value is 256MB. Allowed values are: 128MB, 256MB, 512MB, 1024MB, and 2048MB.`, ValidateFunc: func(v interface{}, k string) (ws []string, errors []error) { availableMemoryMB := v.(int) @@ -170,12 +197,14 @@ func resourceCloudFunctionsFunction() *schema.Resource { Optional: true, Default: functionDefaultTimeout, ValidateFunc: validation.IntBetween(functionTimeOutMin, functionTimeOutMax), + Description: `Timeout (in seconds) for the function. Default value is 60 seconds. Cannot be more than 540 seconds.`, }, "entry_point": { - Type: schema.TypeString, - Optional: true, - ForceNew: true, + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Description: `Name of the function that will be executed when the Google Cloud Function is triggered.`, }, "ingress_settings": { @@ -183,6 +212,7 @@ func resourceCloudFunctionsFunction() *schema.Resource { Optional: true, Default: functionDefaultIngressSettings, ValidateFunc: validation.StringInSlice(allowedIngressSettings, true), + Description: `String value that controls what traffic can reach the function. Allowed values are ALLOW_ALL and ALLOW_INTERNAL_ONLY. Changes to this field will recreate the cloud function.`, }, "vpc_connector_egress_settings": { @@ -190,40 +220,48 @@ func resourceCloudFunctionsFunction() *schema.Resource { Optional: true, Computed: true, ValidateFunc: validation.StringInSlice(allowedVpcConnectorEgressSettings, true), + Description: `The egress settings for the connector, controlling what traffic is diverted through it. Allowed values are ALL_TRAFFIC and PRIVATE_RANGES_ONLY. Defaults to PRIVATE_RANGES_ONLY. 
If unset, this field preserves the previously set value.`, }, "labels": { - Type: schema.TypeMap, - Optional: true, + Type: schema.TypeMap, + ValidateFunc: labelKeyValidator, + Optional: true, + Description: `A set of key/value label pairs to assign to the function. Label keys must follow the requirements at https://cloud.google.com/resource-manager/docs/creating-managing-labels#requirements.`, }, "runtime": { - Type: schema.TypeString, - Required: true, + Type: schema.TypeString, + Required: true, + Description: `The runtime in which the function is going to run. E.g. "nodejs8", "nodejs10", "python37", "go111".`, }, "service_account_email": { - Type: schema.TypeString, - Optional: true, - Computed: true, - ForceNew: true, + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + Description: `If provided, the self-provided service account to run the function with.`, }, "vpc_connector": { Type: schema.TypeString, Optional: true, DiffSuppressFunc: compareSelfLinkOrResourceName, + Description: `The VPC Network Connector that this cloud function can connect to. It can be either the fully-qualified URI, or the short name of the network connector resource. The format of this field is projects/*/locations/*/connectors/*.`, }, "environment_variables": { - Type: schema.TypeMap, - Optional: true, + Type: schema.TypeMap, + Optional: true, + Description: `A set of key/value environment variable pairs to assign to the function.`, }, "trigger_http": { - Type: schema.TypeBool, - Optional: true, - ForceNew: true, + Type: schema.TypeBool, + Optional: true, + ForceNew: true, + Description: `Boolean variable. Any HTTP request (of a supported type) to the endpoint will trigger function execution. Supported HTTP request types are: POST, PUT, GET, DELETE, and OPTIONS. Endpoint is returned as https_trigger_url. Cannot be used with trigger_bucket and trigger_topic.`, }, "event_trigger": { @@ -232,29 +270,34 @@ func resourceCloudFunctionsFunction() *schema.Resource { Computed: true, ConflictsWith: []string{"trigger_http"}, MaxItems: 1, + Description: `A source that fires events in response to a condition in another service. Cannot be used with trigger_http.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "event_type": { - Type: schema.TypeString, - ForceNew: true, - Required: true, + Type: schema.TypeString, + ForceNew: true, + Required: true, + Description: `The type of event to observe. For example: "google.storage.object.finalize". See the documentation on calling Cloud Functions for a full reference of accepted triggers.`, }, "resource": { Type: schema.TypeString, Required: true, DiffSuppressFunc: compareSelfLinkOrResourceNameWithMultipleParts, + Description: `The name or partial URI of the resource from which to observe events. For example, "myBucket" or "projects/my-project/topics/my-topic".`, }, "failure_policy": { - Type: schema.TypeList, - Optional: true, - Computed: true, - MaxItems: 1, + Type: schema.TypeList, + Optional: true, + Computed: true, + MaxItems: 1, + Description: `Specifies policy for failed executions.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "retry": { Type: schema.TypeBool, // not strictly required, but this way an empty block can't be specified - Required: true, + Required: true, + Description: `Whether the function should be retried on failure. 
Defaults to false.`, }, }}, }, @@ -263,9 +306,10 @@ func resourceCloudFunctionsFunction() *schema.Resource { }, "https_trigger_url": { - Type: schema.TypeString, - Optional: true, - Computed: true, + Type: schema.TypeString, + Optional: true, + Computed: true, + Description: `URL which triggers function execution. Returned only if trigger_http is used.`, }, "max_instances": { @@ -273,20 +317,23 @@ func resourceCloudFunctionsFunction() *schema.Resource { Optional: true, Default: 0, ValidateFunc: validation.IntAtLeast(0), + Description: `The limit on the maximum number of function instances that may coexist at a given time.`, }, "project": { - Type: schema.TypeString, - Optional: true, - Computed: true, - ForceNew: true, + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + Description: `Project of the function. If it is not provided, the provider project is used.`, }, "region": { - Type: schema.TypeString, - Optional: true, - Computed: true, - ForceNew: true, + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + Description: `Region of function. Currently can be only "us-central1". If it is not provided, the provider region is used.`, }, }, } @@ -381,21 +428,27 @@ func resourceCloudFunctionsCreate(d *schema.ResourceData, meta interface{}) erro } log.Printf("[DEBUG] Creating cloud function: %s", function.Name) - op, err := config.clientCloudFunctions.Projects.Locations.Functions.Create( - cloudFuncId.locationId(), function).Do() - if err != nil { - return err - } - // Name of function should be unique - d.SetId(cloudFuncId.cloudFunctionId()) + // We retry the whole create-and-wait because Cloud Functions + // will sometimes fail a creation operation entirely if it fails to pull + // source code and we need to try the whole creation again. 
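+ // Here retryTimeDuration re-runs the whole closure below until it
+ // succeeds, the create timeout elapses, or it returns an error that
+ // isCloudFunctionsSourceCodeError does not match.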
+ rerr := retryTimeDuration(func() error { + op, err := config.clientCloudFunctions.Projects.Locations.Functions.Create( + cloudFuncId.locationId(), function).Do() + if err != nil { + return err + } - err = cloudFunctionsOperationWait(config, op, "Creating CloudFunctions Function", - int(d.Timeout(schema.TimeoutCreate).Minutes())) - if err != nil { - return err - } + // Name of function should be unique + d.SetId(cloudFuncId.cloudFunctionId()) + return cloudFunctionsOperationWait(config, op, "Creating CloudFunctions Function", + d.Timeout(schema.TimeoutCreate)) + }, d.Timeout(schema.TimeoutCreate), isCloudFunctionsSourceCodeError) + if rerr != nil { + return rerr + } + log.Printf("[DEBUG] Finished creating cloud function: %s", function.Name) return resourceCloudFunctionsRead(d, meta) } @@ -556,7 +609,7 @@ func resourceCloudFunctionsUpdate(d *schema.ResourceData, meta interface{}) erro } err = cloudFunctionsOperationWait(config, op, "Updating CloudFunctions Function", - int(d.Timeout(schema.TimeoutUpdate).Minutes())) + d.Timeout(schema.TimeoutUpdate)) if err != nil { return err } @@ -579,7 +632,7 @@ func resourceCloudFunctionsDestroy(d *schema.ResourceData, meta interface{}) err return err } err = cloudFunctionsOperationWait(config, op, "Deleting CloudFunctions Function", - int(d.Timeout(schema.TimeoutDelete).Minutes())) + d.Timeout(schema.TimeoutDelete)) if err != nil { return err } diff --git a/third_party/terraform/resources/resource_cloudiot_registry.go b/third_party/terraform/resources/resource_cloudiot_registry.go deleted file mode 100644 index 21916631c45b..000000000000 --- a/third_party/terraform/resources/resource_cloudiot_registry.go +++ /dev/null @@ -1,463 +0,0 @@ -package google - -import ( - "fmt" - "github.com/hashicorp/terraform-plugin-sdk/helper/validation" - "regexp" - "strings" - - "github.com/hashicorp/terraform-plugin-sdk/helper/schema" - "google.golang.org/api/cloudiot/v1" -) - -const ( - mqttEnabled = "MQTT_ENABLED" - mqttDisabled = "MQTT_DISABLED" - httpEnabled = "HTTP_ENABLED" - httpDisabled = "HTTP_DISABLED" - x509CertificatePEM = "X509_CERTIFICATE_PEM" -) - -func resourceCloudIoTRegistry() *schema.Resource { - return &schema.Resource{ - Create: resourceCloudIoTRegistryCreate, - Update: resourceCloudIoTRegistryUpdate, - Read: resourceCloudIoTRegistryRead, - Delete: resourceCloudIoTRegistryDelete, - - Importer: &schema.ResourceImporter{ - State: resourceCloudIoTRegistryStateImporter, - }, - - Schema: map[string]*schema.Schema{ - "name": { - Type: schema.TypeString, - Required: true, - ForceNew: true, - ValidateFunc: validateCloudIotID, - }, - "project": { - Type: schema.TypeString, - Optional: true, - Computed: true, - ForceNew: true, - }, - "region": { - Type: schema.TypeString, - Optional: true, - Computed: true, - ForceNew: true, - }, - "log_level": { - Type: schema.TypeString, - Optional: true, - DiffSuppressFunc: emptyOrDefaultStringSuppress(""), - ValidateFunc: validation.StringInSlice( - []string{"", "NONE", "ERROR", "INFO", "DEBUG"}, false), - }, - "event_notification_config": { - Type: schema.TypeMap, - Optional: true, - Computed: true, - Removed: "Please use event_notification_configs instead", - }, - "event_notification_configs": { - Type: schema.TypeList, - Optional: true, - Computed: true, - MaxItems: 10, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "pubsub_topic_name": { - Type: schema.TypeString, - Required: true, - DiffSuppressFunc: compareSelfLinkOrResourceName, - }, - "subfolder_matches": { - Type: schema.TypeString, - Optional: 
true, - ValidateFunc: validateCloudIotRegistrySubfolderMatch, - }, - }, - }, - }, - "state_notification_config": { - Type: schema.TypeMap, - Optional: true, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "pubsub_topic_name": { - Type: schema.TypeString, - Required: true, - DiffSuppressFunc: compareSelfLinkOrResourceName, - }, - }, - }, - }, - "mqtt_config": { - Type: schema.TypeMap, - Computed: true, - Optional: true, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "mqtt_enabled_state": { - Type: schema.TypeString, - Required: true, - ValidateFunc: validation.StringInSlice( - []string{mqttEnabled, mqttDisabled}, false), - }, - }, - }, - }, - "http_config": { - Type: schema.TypeMap, - Computed: true, - Optional: true, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "http_enabled_state": { - Type: schema.TypeString, - Required: true, - ValidateFunc: validation.StringInSlice( - []string{httpEnabled, httpDisabled}, false), - }, - }, - }, - }, - "credentials": { - Type: schema.TypeList, - Optional: true, - MaxItems: 10, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "public_key_certificate": { - Type: schema.TypeMap, - Required: true, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "format": { - Type: schema.TypeString, - Required: true, - ValidateFunc: validation.StringInSlice( - []string{x509CertificatePEM}, false), - }, - "certificate": { - Type: schema.TypeString, - Required: true, - }, - }, - }, - }, - }, - }, - }, - }, - } -} - -func buildEventNotificationConfigs(v []interface{}) []*cloudiot.EventNotificationConfig { - cfgList := make([]*cloudiot.EventNotificationConfig, 0, len(v)) - for _, cfgRaw := range v { - if cfgRaw == nil { - continue - } - cfgList = append(cfgList, buildEventNotificationConfig(cfgRaw.(map[string]interface{}))) - } - return cfgList -} - -func buildEventNotificationConfig(config map[string]interface{}) *cloudiot.EventNotificationConfig { - if len(config) == 0 { - return nil - } - cfg := &cloudiot.EventNotificationConfig{} - if v, ok := config["pubsub_topic_name"]; ok { - cfg.PubsubTopicName = v.(string) - } - if v, ok := config["subfolder_matches"]; ok { - cfg.SubfolderMatches = v.(string) - } - return cfg -} - -func buildStateNotificationConfig(config map[string]interface{}) *cloudiot.StateNotificationConfig { - if v, ok := config["pubsub_topic_name"]; ok { - return &cloudiot.StateNotificationConfig{ - PubsubTopicName: v.(string), - } - } - return nil -} - -func buildMqttConfig(config map[string]interface{}) *cloudiot.MqttConfig { - if v, ok := config["mqtt_enabled_state"]; ok { - return &cloudiot.MqttConfig{ - MqttEnabledState: v.(string), - } - } - return nil -} - -func buildHttpConfig(config map[string]interface{}) *cloudiot.HttpConfig { - if v, ok := config["http_enabled_state"]; ok { - return &cloudiot.HttpConfig{ - HttpEnabledState: v.(string), - } - } - return nil -} - -func buildPublicKeyCertificate(certificate map[string]interface{}) *cloudiot.PublicKeyCertificate { - cert := &cloudiot.PublicKeyCertificate{ - Format: certificate["format"].(string), - Certificate: certificate["certificate"].(string), - } - return cert -} - -func expandCredentials(credentials []interface{}) []*cloudiot.RegistryCredential { - certificates := make([]*cloudiot.RegistryCredential, len(credentials)) - for i, raw := range credentials { - cred := raw.(map[string]interface{}) - certificates[i] = &cloudiot.RegistryCredential{ - PublicKeyCertificate: 
buildPublicKeyCertificate(cred["public_key_certificate"].(map[string]interface{})), - } - } - return certificates -} - -func createDeviceRegistry(d *schema.ResourceData) *cloudiot.DeviceRegistry { - deviceRegistry := &cloudiot.DeviceRegistry{} - if v, ok := d.GetOk("event_notification_configs"); ok { - deviceRegistry.EventNotificationConfigs = buildEventNotificationConfigs(v.([]interface{})) - } - - if v, ok := d.GetOk("state_notification_config"); ok { - deviceRegistry.StateNotificationConfig = buildStateNotificationConfig(v.(map[string]interface{})) - } - if v, ok := d.GetOk("mqtt_config"); ok { - deviceRegistry.MqttConfig = buildMqttConfig(v.(map[string]interface{})) - } - if v, ok := d.GetOk("http_config"); ok { - deviceRegistry.HttpConfig = buildHttpConfig(v.(map[string]interface{})) - } - if v, ok := d.GetOk("credentials"); ok { - deviceRegistry.Credentials = expandCredentials(v.([]interface{})) - } - if v, ok := d.GetOk("log_level"); ok { - deviceRegistry.LogLevel = v.(string) - } - deviceRegistry.ForceSendFields = append(deviceRegistry.ForceSendFields, "logLevel") - - return deviceRegistry -} - -func resourceCloudIoTRegistryCreate(d *schema.ResourceData, meta interface{}) error { - config := meta.(*Config) - project, err := getProject(d, config) - if err != nil { - return err - } - region, err := getRegion(d, config) - if err != nil { - return err - } - deviceRegistry := createDeviceRegistry(d) - deviceRegistry.Id = d.Get("name").(string) - parent := fmt.Sprintf("projects/%s/locations/%s", project, region) - registryId := fmt.Sprintf("%s/registries/%s", parent, deviceRegistry.Id) - d.SetId(registryId) - - err = retryTime(func() error { - _, err := config.clientCloudIoT.Projects.Locations.Registries.Create(parent, deviceRegistry).Do() - return err - }, 5) - if err != nil { - d.SetId("") - return err - } - - // If we infer project and region, they are never actually set so we set them here - d.Set("project", project) - d.Set("region", region) - - return resourceCloudIoTRegistryRead(d, meta) -} - -func resourceCloudIoTRegistryUpdate(d *schema.ResourceData, meta interface{}) error { - config := meta.(*Config) - updateMask := make([]string, 0, 5) - hasChanged := false - deviceRegistry := &cloudiot.DeviceRegistry{} - - d.Partial(true) - - if d.HasChange("event_notification_configs") { - hasChanged = true - updateMask = append(updateMask, "event_notification_configs") - if v, ok := d.GetOk("event_notification_configs"); ok { - deviceRegistry.EventNotificationConfigs = buildEventNotificationConfigs(v.([]interface{})) - } - } - - if d.HasChange("state_notification_config") { - hasChanged = true - updateMask = append(updateMask, "state_notification_config.pubsub_topic_name") - if v, ok := d.GetOk("state_notification_config"); ok { - deviceRegistry.StateNotificationConfig = buildStateNotificationConfig(v.(map[string]interface{})) - } - } - if d.HasChange("mqtt_config") { - hasChanged = true - updateMask = append(updateMask, "mqtt_config.mqtt_enabled_state") - if v, ok := d.GetOk("mqtt_config"); ok { - deviceRegistry.MqttConfig = buildMqttConfig(v.(map[string]interface{})) - } - } - if d.HasChange("http_config") { - hasChanged = true - updateMask = append(updateMask, "http_config.http_enabled_state") - if v, ok := d.GetOk("http_config"); ok { - deviceRegistry.HttpConfig = buildHttpConfig(v.(map[string]interface{})) - } - } - if d.HasChange("credentials") { - hasChanged = true - updateMask = append(updateMask, "credentials") - if v, ok := d.GetOk("credentials"); ok { - 
deviceRegistry.Credentials = expandCredentials(v.([]interface{})) - } - } - if d.HasChange("log_level") { - hasChanged = true - updateMask = append(updateMask, "log_level") - if v, ok := d.GetOk("log_level"); ok { - deviceRegistry.LogLevel = v.(string) - deviceRegistry.ForceSendFields = append(deviceRegistry.ForceSendFields, "logLevel") - } - } - if hasChanged { - _, err := config.clientCloudIoT.Projects.Locations.Registries.Patch(d.Id(), - deviceRegistry).UpdateMask(strings.Join(updateMask, ",")).Do() - if err != nil { - return fmt.Errorf("Error updating registry %s: %s", d.Get("name").(string), err) - } - for _, updateMaskItem := range updateMask { - d.SetPartial(updateMaskItem) - } - } - - d.Partial(false) - return resourceCloudIoTRegistryRead(d, meta) -} - -func flattenCloudIotRegistryEventNotificationConfigs(cfgs []*cloudiot.EventNotificationConfig, d *schema.ResourceData) []interface{} { - ls := make([]interface{}, 0, len(cfgs)) - for _, cfg := range cfgs { - if cfg == nil { - continue - } - ls = append(ls, map[string]interface{}{ - "subfolder_matches": cfg.SubfolderMatches, - "pubsub_topic_name": cfg.PubsubTopicName, - }) - } - return ls -} - -func resourceCloudIoTRegistryRead(d *schema.ResourceData, meta interface{}) error { - config := meta.(*Config) - name := d.Id() - res, err := config.clientCloudIoT.Projects.Locations.Registries.Get(name).Do() - if err != nil { - return handleNotFoundError(err, d, fmt.Sprintf("Registry %q", name)) - } - d.Set("name", res.Id) - - if len(res.EventNotificationConfigs) > 0 { - cfgs := flattenCloudIotRegistryEventNotificationConfigs(res.EventNotificationConfigs, d) - if err := d.Set("event_notification_configs", cfgs); err != nil { - return fmt.Errorf("Error reading Registry: %s", err) - } - } else { - d.Set("event_notification_configs", nil) - } - - pubsubTopicName := res.StateNotificationConfig.PubsubTopicName - if pubsubTopicName != "" { - d.Set("state_notification_config", - map[string]string{"pubsub_topic_name": pubsubTopicName}) - } else { - d.Set("state_notification_config", nil) - } - - d.Set("mqtt_config", map[string]string{"mqtt_enabled_state": res.MqttConfig.MqttEnabledState}) - d.Set("http_config", map[string]string{"http_enabled_state": res.HttpConfig.HttpEnabledState}) - - credentials := make([]map[string]interface{}, len(res.Credentials)) - for i, item := range res.Credentials { - pubcert := make(map[string]interface{}) - pubcert["format"] = item.PublicKeyCertificate.Format - pubcert["certificate"] = item.PublicKeyCertificate.Certificate - credentials[i] = make(map[string]interface{}) - credentials[i]["public_key_certificate"] = pubcert - } - d.Set("credentials", credentials) - d.Set("log_level", res.LogLevel) - // Removed Computed field must be set to nil to prevent spurious diffs - d.Set("event_notification_config", nil) - - return nil -} - -func resourceCloudIoTRegistryDelete(d *schema.ResourceData, meta interface{}) error { - config := meta.(*Config) - name := d.Id() - call := config.clientCloudIoT.Projects.Locations.Registries.Delete(name) - _, err := call.Do() - if err != nil { - return err - } - d.SetId("") - return nil -} - -func resourceCloudIoTRegistryStateImporter(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { - r, _ := regexp.Compile("^projects/(.+)/locations/(.+)/registries/(.+)$") - if !r.MatchString(d.Id()) { - return nil, fmt.Errorf("Invalid registry specifier. 
" + - "Expecting: projects/{project}/locations/{region}/registries/{name}") - } - parms := r.FindAllStringSubmatch(d.Id(), -1)[0] - project := parms[1] - region := parms[2] - name := parms[3] - - id := fmt.Sprintf("projects/%s/locations/%s/registries/%s", project, region, name) - d.Set("project", project) - d.Set("region", region) - d.SetId(id) - return []*schema.ResourceData{d}, nil -} - -func validateCloudIotID(v interface{}, k string) (warnings []string, errors []error) { - value := v.(string) - if strings.HasPrefix(value, "goog") { - errors = append(errors, fmt.Errorf( - "%q (%q) can not start with \"goog\"", k, value)) - } - if !regexp.MustCompile(CloudIoTIdRegex).MatchString(value) { - errors = append(errors, fmt.Errorf( - "%q (%q) doesn't match regexp %q", k, value, CloudIoTIdRegex)) - } - return -} - -func validateCloudIotRegistrySubfolderMatch(v interface{}, k string) (warnings []string, errors []error) { - value := v.(string) - if strings.HasPrefix(value, "/") { - errors = append(errors, fmt.Errorf( - "%q (%q) can not start with '/'", k, value)) - } - return -} diff --git a/third_party/terraform/resources/resource_composer_environment.go.erb b/third_party/terraform/resources/resource_composer_environment.go.erb index a6964e07461b..4fa03ca3a132 100644 --- a/third_party/terraform/resources/resource_composer_environment.go.erb +++ b/third_party/terraform/resources/resource_composer_environment.go.erb @@ -50,8 +50,28 @@ var ( "config.0.node_config", "config.0.software_config", "config.0.private_environment_config", +<% unless version == "ga" -%> + "config.0.web_server_network_access_control", +<% end -%> } +<% unless version == "ga" -%> + allowedIpRangesConfig = &schema.Resource{ + Schema: map[string]*schema.Schema{ + "value": { + Type: schema.TypeString, + Required: true, + Description: `IP address or range, defined using CIDR notation, of requests that this rule applies to. Examples: 192.168.1.1 or 192.168.0.0/16 or 2001:db8::/32 or 2001:0db8:0000:0042:0000:8a2e:0370:7334. IP range prefixes should be properly truncated. For example, 1.2.3.4/24 should be truncated to 1.2.3.0/24. Similarly, for IPv6, 2001:db8::1/32 should be truncated to 2001:db8::/32.`, + }, + "description": { + Type: schema.TypeString, + Optional: true, + Description: `A description of this ip range.`, + }, + }, + } + +<% end -%> ) func resourceComposerEnvironment() *schema.Resource { @@ -78,23 +98,27 @@ func resourceComposerEnvironment() *schema.Resource { Required: true, ForceNew: true, ValidateFunc: validateGCPName, + Description: `Name of the environment.`, }, "region": { - Type: schema.TypeString, - Optional: true, - ForceNew: true, + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Description: `The location or Compute Engine region for the environment.`, }, "project": { - Type: schema.TypeString, - Optional: true, - Computed: true, - ForceNew: true, + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + Description: `The ID of the project in which the resource belongs. 
If it is not provided, the provider project is used.`, }, "config": { - Type: schema.TypeList, - Optional: true, - Computed: true, - MaxItems: 1, + Type: schema.TypeList, + Optional: true, + Computed: true, + MaxItems: 1, + Description: `Configuration parameters for this environment.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "node_count": { @@ -103,6 +127,7 @@ func resourceComposerEnvironment() *schema.Resource { Optional: true, AtLeastOneOf: composerConfigKeys, ValidateFunc: validation.IntAtLeast(3), + Description: `The number of nodes in the Kubernetes Engine cluster that will be used to run this environment.`, }, "node_config": { Type: schema.TypeList, @@ -110,6 +135,7 @@ func resourceComposerEnvironment() *schema.Resource { Optional: true, AtLeastOneOf: composerConfigKeys, MaxItems: 1, + Description: `The configuration used for the Kubernetes Engine cluster.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "zone": { @@ -117,6 +143,7 @@ func resourceComposerEnvironment() *schema.Resource { Required: true, ForceNew: true, DiffSuppressFunc: compareSelfLinkOrResourceName, + Description: `The Compute Engine zone in which to deploy the VMs running the Apache Airflow software, specified as the zone name or relative resource name (e.g. "projects/{project}/zones/{zone}"). Must belong to the enclosing environment's project and region.`, }, "machine_type": { Type: schema.TypeString, @@ -124,6 +151,7 @@ func resourceComposerEnvironment() *schema.Resource { Optional: true, ForceNew: true, DiffSuppressFunc: compareSelfLinkOrResourceName, + Description: `The Compute Engine machine type used for cluster instances, specified as a name or relative resource name. For example: "projects/{project}/zones/{zone}/machineTypes/{machineType}". Must belong to the enclosing environment's project and region/zone.`, }, "network": { Type: schema.TypeString, @@ -131,18 +159,21 @@ func resourceComposerEnvironment() *schema.Resource { Optional: true, ForceNew: true, DiffSuppressFunc: compareSelfLinkOrResourceName, + Description: `The Compute Engine network to be used for machine communications, specified as a self-link, relative resource name (e.g. "projects/{project}/global/networks/{network}"), or by name. The network must belong to the environment's project. If unspecified, the "default" network ID in the environment's project is used. If a Custom Subnet Network is provided, subnetwork must also be provided.`, }, "subnetwork": { Type: schema.TypeString, Optional: true, ForceNew: true, DiffSuppressFunc: compareSelfLinkOrResourceName, + Description: `The Compute Engine subnetwork to be used for machine communications, specified as a self-link, relative resource name (e.g. "projects/{project}/regions/{region}/subnetworks/{subnetwork}"), or by name. If subnetwork is provided, network must also be provided and the subnetwork must belong to the enclosing environment's project and region.`, }, "disk_size_gb": { - Type: schema.TypeInt, - Computed: true, - Optional: true, - ForceNew: true, + Type: schema.TypeInt, + Computed: true, + Optional: true, + ForceNew: true, + Description: `The disk size in GB used for node VMs. Minimum size is 20GB. If unspecified, defaults to 100GB. 
Cannot be updated.`, }, "oauth_scopes": { Type: schema.TypeSet, @@ -152,7 +183,8 @@ func resourceComposerEnvironment() *schema.Resource { Elem: &schema.Schema{ Type: schema.TypeString, }, - Set: schema.HashString, + Set: schema.HashString, + Description: `The set of Google API scopes to be made available on all node VMs. Cannot be updated. If empty, defaults to ["https://www.googleapis.com/auth/cloud-platform"].`, }, "service_account": { Type: schema.TypeString, @@ -161,6 +193,7 @@ func resourceComposerEnvironment() *schema.Resource { ForceNew: true, ValidateFunc: validateServiceAccountRelativeNameOrEmail, DiffSuppressFunc: compareServiceAccountEmailToLink, + Description: `The Google Cloud Platform Service Account to be used by the node VMs. If a service account is not specified, the "default" Compute Engine service account is used. Cannot be updated. If given, note that the service account must have roles/composer.worker for any GCP resources created under the Cloud Composer Environment.`, }, "tags": { Type: schema.TypeSet, @@ -169,47 +202,54 @@ func resourceComposerEnvironment() *schema.Resource { Elem: &schema.Schema{ Type: schema.TypeString, }, - Set: schema.HashString, + Set: schema.HashString, + Description: `The list of instance tags applied to all node VMs. Tags are used to identify valid sources or targets for network firewalls. Each tag within the list must comply with RFC1035. Cannot be updated.`, }, "ip_allocation_policy": { - Type: schema.TypeList, - Optional: true, - Computed: true, - ForceNew: true, - ConfigMode: schema.SchemaConfigModeAttr, - MaxItems: 1, + Type: schema.TypeList, + Optional: true, + Computed: true, + ForceNew: true, + ConfigMode: schema.SchemaConfigModeAttr, + MaxItems: 1, + Description: `Configuration for controlling how IPs are allocated in the GKE cluster. Cannot be updated.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "use_ip_aliases": { - Type: schema.TypeBool, - Required: true, - ForceNew: true, + Type: schema.TypeBool, + Required: true, + ForceNew: true, + Description: `Whether or not to enable Alias IPs in the GKE cluster. If true, a VPC-native cluster is created. Defaults to true if the ip_allocation_policy block is present in config.`, }, "cluster_secondary_range_name": { - Type: schema.TypeString, - Optional: true, - ForceNew: true, + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Description: `The name of the cluster's secondary range used to allocate IP addresses to pods. Specify either cluster_secondary_range_name or cluster_ipv4_cidr_block but not both. This field is applicable only when use_ip_aliases is true.`, ConflictsWith: []string{"config.0.node_config.0.ip_allocation_policy.0.cluster_ipv4_cidr_block"}, }, "services_secondary_range_name": { - Type: schema.TypeString, - Optional: true, - ForceNew: true, + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Description: `The name of the services' secondary range used to allocate IP addresses to the cluster. Specify either services_secondary_range_name or services_ipv4_cidr_block but not both. This field is applicable only when use_ip_aliases is true.`, ConflictsWith: []string{"config.0.node_config.0.ip_allocation_policy.0.services_ipv4_cidr_block"}, }, "cluster_ipv4_cidr_block": { - Type: schema.TypeString, - Optional: true, - ForceNew: true, + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Description: `The IP address range used to allocate IP addresses to pods in the cluster. 
Set to blank to have GKE choose a range with the default size. Set to /netmask (e.g. /14) to have GKE choose a range with a specific netmask. Set to a CIDR notation (e.g. 10.96.0.0/14) from the RFC-1918 private networks (e.g. 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16) to pick a specific range to use. Specify either cluster_secondary_range_name or cluster_ipv4_cidr_block but not both.`, DiffSuppressFunc: cidrOrSizeDiffSuppress, - ConflictsWith: []string{"config.0.node_config.0.ip_allocation_policy.0.cluster_secondary_range_name"}, + ConflictsWith: []string{"config.0.node_config.0.ip_allocation_policy.0.cluster_secondary_range_name"}, }, "services_ipv4_cidr_block": { - Type: schema.TypeString, - Optional: true, - ForceNew: true, + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Description: `The IP address range used to allocate IP addresses in this cluster. Set to blank to have GKE choose a range with the default size. Set to /netmask (e.g. /14) to have GKE choose a range with a specific netmask. Set to a CIDR notation (e.g. 10.96.0.0/14) from the RFC-1918 private networks (e.g. 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16) to pick a specific range to use. Specify either services_secondary_range_name or services_ipv4_cidr_block but not both.`, DiffSuppressFunc: cidrOrSizeDiffSuppress, - ConflictsWith: []string{"config.0.node_config.0.ip_allocation_policy.0.services_secondary_range_name"}, + ConflictsWith: []string{"config.0.node_config.0.ip_allocation_policy.0.services_secondary_range_name"}, }, }, }, @@ -223,13 +263,15 @@ func resourceComposerEnvironment() *schema.Resource { Computed: true, AtLeastOneOf: composerConfigKeys, MaxItems: 1, + Description: `The configuration settings for software inside the environment.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "airflow_config_overrides": { - Type: schema.TypeMap, - Optional: true, + Type: schema.TypeMap, + Optional: true, AtLeastOneOf: composerSoftwareConfigKeys, - Elem: &schema.Schema{Type: schema.TypeString}, + Elem: &schema.Schema{Type: schema.TypeString}, + Description: `Apache Airflow configuration properties to override. Property keys contain the section and property names, separated by a hyphen, for example "core-dags_are_paused_at_creation". Section names must not contain hyphens ("-"), opening square brackets ("["), or closing square brackets ("]"). The property name must not be empty and cannot contain "=" or ";". Section and property names cannot contain characters: "." Apache Airflow configuration property names must be written in snake_case. Property values can contain any character, and can be written in any lower/upper case format. Certain Apache Airflow configuration property values are blacklisted, and cannot be overridden.`, }, "pypi_packages": { Type: schema.TypeMap, @@ -237,6 +279,7 @@ func resourceComposerEnvironment() *schema.Resource { AtLeastOneOf: composerSoftwareConfigKeys, Elem: &schema.Schema{Type: schema.TypeString}, ValidateFunc: validateComposerEnvironmentPypiPackages, + Description: `Custom Python Package Index (PyPI) packages to be installed in the environment. Keys refer to the lowercase package name (e.g. "numpy"). Values are the lowercase extras and version specifier (e.g. "==1.12.0", "[devel,gcp_api]", "[devel]>=1.8.2, <1.9.2"). 
To specify a package without pinning it to a version specifier, use the empty string as the value.`, }, "env_variables": { Type: schema.TypeMap, @@ -244,21 +287,24 @@ func resourceComposerEnvironment() *schema.Resource { AtLeastOneOf: composerSoftwareConfigKeys, Elem: &schema.Schema{Type: schema.TypeString}, ValidateFunc: validateComposerEnvironmentEnvVariables, + Description: `Additional environment variables to provide to the Apache Airflow scheduler, worker, and webserver processes. Environment variable names must match the regular expression [a-zA-Z_][a-zA-Z0-9_]*. They cannot specify Apache Airflow software configuration overrides (they cannot match the regular expression AIRFLOW__[A-Z0-9_]+__[A-Z0-9_]+), and they cannot match any of the following reserved names: AIRFLOW_HOME C_FORCE_ROOT CONTAINER_NAME DAGS_FOLDER GCP_PROJECT GCS_BUCKET GKE_CLUSTER_NAME SQL_DATABASE SQL_INSTANCE SQL_PASSWORD SQL_PROJECT SQL_REGION SQL_USER.`, }, "image_version": { - Type: schema.TypeString, - Computed: true, - Optional: true, - AtLeastOneOf: composerSoftwareConfigKeys, - ValidateFunc: validateRegexp(composerEnvironmentVersionRegexp), + Type: schema.TypeString, + Computed: true, + Optional: true, + AtLeastOneOf: composerSoftwareConfigKeys, + ValidateFunc: validateRegexp(composerEnvironmentVersionRegexp), DiffSuppressFunc: composerImageVersionDiffSuppress, + Description: `The version of the software running in the environment. This encapsulates both the version of Cloud Composer functionality and the version of Apache Airflow. It must match the regular expression composer-[0-9]+\.[0-9]+(\.[0-9]+)?-airflow-[0-9]+\.[0-9]+(\.[0-9]+.*)?. The Cloud Composer portion of the version is a semantic version. The portion of the image version following 'airflow-' is an official Apache Airflow repository release name. See documentation for allowed release names.`, }, "python_version": { - Type: schema.TypeString, - Optional: true, + Type: schema.TypeString, + Optional: true, AtLeastOneOf: composerSoftwareConfigKeys, - Computed: true, - ForceNew: true, + Computed: true, + ForceNew: true, + Description: `The major version of Python used to run the Apache Airflow scheduler, worker, and webserver processes. Can be set to '2' or '3'. If not specified, the default is '2'. 
Cannot be updated.`, }, }, }, @@ -270,50 +316,107 @@ func resourceComposerEnvironment() *schema.Resource { AtLeastOneOf: composerConfigKeys, MaxItems: 1, ForceNew: true, + Description: `The configuration used for the Private IP Cloud Composer environment.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "enable_private_endpoint": { - Type: schema.TypeBool, - Optional: true, - Default: true, + Type: schema.TypeBool, + Optional: true, + Default: true, AtLeastOneOf: []string{ "config.0.private_environment_config.0.enable_private_endpoint", "config.0.private_environment_config.0.master_ipv4_cidr_block", + "config.0.private_environment_config.0.cloud_sql_ipv4_cidr_block", + "config.0.private_environment_config.0.web_server_ipv4_cidr_block", }, - ForceNew: true, + ForceNew: true, + Description: `If true, access to the public endpoint of the GKE cluster is denied.`, }, "master_ipv4_cidr_block": { - Type: schema.TypeString, - Optional: true, + Type: schema.TypeString, + Optional: true, AtLeastOneOf: []string{ "config.0.private_environment_config.0.enable_private_endpoint", "config.0.private_environment_config.0.master_ipv4_cidr_block", + "config.0.private_environment_config.0.cloud_sql_ipv4_cidr_block", + "config.0.private_environment_config.0.web_server_ipv4_cidr_block", }, - ForceNew: true, - Default: "172.16.0.0/28", + ForceNew: true, + Default: "172.16.0.0/28", + Description: `The IP range in CIDR notation to use for the hosted master network. This range is used for assigning internal IP addresses to the cluster master or set of masters and to the internal load balancer virtual IP. This range must not overlap with any other ranges in use within the cluster's network. If left blank, the default value of '172.16.0.0/28' is used.`, + }, + "web_server_ipv4_cidr_block": { + Type: schema.TypeString, + Optional: true, + Computed: true, + AtLeastOneOf: []string{ + "config.0.private_environment_config.0.enable_private_endpoint", + "config.0.private_environment_config.0.master_ipv4_cidr_block", + "config.0.private_environment_config.0.cloud_sql_ipv4_cidr_block", + "config.0.private_environment_config.0.web_server_ipv4_cidr_block", + }, + ForceNew: true, + Description: `The CIDR block from which IP range for web server will be reserved. Needs to be disjoint from master_ipv4_cidr_block and cloud_sql_ipv4_cidr_block.`, + }, + "cloud_sql_ipv4_cidr_block": { + Type: schema.TypeString, + Optional: true, + Computed: true, + AtLeastOneOf: []string{ + "config.0.private_environment_config.0.enable_private_endpoint", + "config.0.private_environment_config.0.master_ipv4_cidr_block", + "config.0.private_environment_config.0.cloud_sql_ipv4_cidr_block", + "config.0.private_environment_config.0.web_server_ipv4_cidr_block", + }, + ForceNew: true, + Description: `The CIDR block from which IP range in tenant project will be reserved for Cloud SQL. Needs to be disjoint from web_server_ipv4_cidr_block.`, + }, + }, + }, + }, +<% unless version == "ga" -%> + "web_server_network_access_control": { + Type: schema.TypeList, + Optional: true, + Computed: true, + MaxItems: 1, + Description: `The network-level access control policy for the Airflow web server. 
If unspecified, no network-level access restrictions will be applied.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "allowed_ip_range": { + Type: schema.TypeSet, + Computed: true, + Optional: true, + Elem: allowedIpRangesConfig, + Description: `A collection of allowed IP ranges with descriptions.`, }, }, }, }, +<% end -%> "airflow_uri": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `The URI of the Apache Airflow Web UI hosted within this environment.`, }, "dag_gcs_prefix": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `The Cloud Storage prefix of the DAGs for this environment. Although Cloud Storage objects reside in a flat namespace, a hierarchical file tree can be simulated using '/'-delimited object name prefixes. DAG objects for this environment reside in a simulated directory with this prefix.`, }, "gke_cluster": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `The Kubernetes Engine cluster used to run this environment.`, }, }, }, }, "labels": { - Type: schema.TypeMap, - Optional: true, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeMap, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Description: `User-defined labels for this environment. The labels map can contain no more than 64 entries. Entries of the labels map are UTF8 strings that comply with the following restrictions: Label keys must be between 1 and 63 characters long and must conform to the following regular expression: [a-z]([-a-z0-9]*[a-z0-9])?. Label values must be between 0 and 63 characters long and must conform to the regular expression ([a-z]([-a-z0-9]*[a-z0-9])?)?. No more than 64 labels can be associated with a given environment. Both keys and values must be <= 128 bytes in size.`, }, }, } @@ -356,7 +459,7 @@ func resourceComposerEnvironmentCreate(d *schema.ResourceData, meta interface{}) waitErr := composerOperationWaitTime( config, op, envName.Project, "Creating Environment", - int(d.Timeout(schema.TimeoutCreate).Minutes())) + d.Timeout(schema.TimeoutCreate)) if waitErr != nil { // The resource didn't actually get created, remove from state. @@ -514,6 +617,20 @@ func resourceComposerEnvironmentUpdate(d *schema.ResourceData, meta interface{}) } d.SetPartial("config") } + + // If web_server_network_access_control has more fields added it may require changes here. + // This is scoped specifically to allowed_ip_range due to https://github.com/hashicorp/terraform-plugin-sdk/issues/98 + if d.HasChange("config.0.web_server_network_access_control.0.allowed_ip_range") { + patchObj := &composer.Environment{Config: &composer.EnvironmentConfig{}} + if config != nil { + patchObj.Config.WebServerNetworkAccessControl = config.WebServerNetworkAccessControl + } + err = resourceComposerEnvironmentPatchField("config.webServerNetworkAccessControl", patchObj, d, tfConfig) + if err != nil { + return err + } + d.SetPartial("config") + } } if d.HasChange("labels") { @@ -567,7 +684,7 @@ func resourceComposerEnvironmentPatchField(updateMask string, env *composer.Envi waitErr := composerOperationWaitTime( config, op, envName.Project, "Updating newly created Environment", - int(d.Timeout(schema.TimeoutCreate).Minutes())) + d.Timeout(schema.TimeoutCreate)) if waitErr != nil { // The resource didn't actually update. 
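+ // (This helper also runs while backfilling config on a newly created
+ // environment, which is why it waits on the create timeout above.)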
return fmt.Errorf("Error waiting to update Environment: %s", waitErr) @@ -593,7 +710,7 @@ func resourceComposerEnvironmentDelete(d *schema.ResourceData, meta interface{}) err = composerOperationWaitTime( config, op, envName.Project, "Deleting Environment", - int(d.Timeout(schema.TimeoutDelete).Minutes())) + d.Timeout(schema.TimeoutDelete)) if err != nil { return err } @@ -630,10 +747,36 @@ func flattenComposerEnvironmentConfig(envCfg *composer.EnvironmentConfig) interf transformed["node_config"] = flattenComposerEnvironmentConfigNodeConfig(envCfg.NodeConfig) transformed["software_config"] = flattenComposerEnvironmentConfigSoftwareConfig(envCfg.SoftwareConfig) transformed["private_environment_config"] = flattenComposerEnvironmentConfigPrivateEnvironmentConfig(envCfg.PrivateEnvironmentConfig) +<% unless version == "ga" -%> + transformed["web_server_network_access_control"] = flattenComposerEnvironmentConfigWebServerNetworkAccessControl(envCfg.WebServerNetworkAccessControl) +<% end -%> return []interface{}{transformed} } +<% unless version == "ga" -%> +func flattenComposerEnvironmentConfigWebServerNetworkAccessControl(accessControl *composer.WebServerNetworkAccessControl) interface{} { + if accessControl == nil || accessControl.AllowedIpRanges == nil { + return nil + } + + transformed := make([]interface{}, 0, len(accessControl.AllowedIpRanges)) + for _, ipRange := range accessControl.AllowedIpRanges { + data := map[string]interface{}{ + "value": ipRange.Value, + "description": ipRange.Description, + } + transformed = append(transformed, data) + } + + webServerNetworkAccessControl := make(map[string]interface{}) + + webServerNetworkAccessControl["allowed_ip_range"] = schema.NewSet(schema.HashResource(allowedIpRangesConfig), transformed) + + return []interface{}{webServerNetworkAccessControl} +} + +<% end -%> func flattenComposerEnvironmentConfigPrivateEnvironmentConfig(envCfg *composer.PrivateEnvironmentConfig) interface{} { if envCfg == nil { return nil @@ -642,6 +785,8 @@ func flattenComposerEnvironmentConfigPrivateEnvironmentConfig(envCfg *composer.P transformed := make(map[string]interface{}) transformed["enable_private_endpoint"] = envCfg.PrivateClusterConfig.EnablePrivateEndpoint transformed["master_ipv4_cidr_block"] = envCfg.PrivateClusterConfig.MasterIpv4CidrBlock + transformed["cloud_sql_ipv4_cidr_block"] = envCfg.CloudSqlIpv4CidrBlock + transformed["web_server_ipv4_cidr_block"] = envCfg.WebServerIpv4CidrBlock return []interface{}{transformed} } @@ -738,6 +883,14 @@ func expandComposerEnvironmentConfig(v interface{}, d *schema.ResourceData, conf } transformed.PrivateEnvironmentConfig = transformedPrivateEnvironmentConfig +<% unless version == "ga" -%> + transformedWebServerNetworkAccessControl, err := expandComposerEnvironmentConfigWebServerNetworkAccessControl(original["web_server_network_access_control"], d, config) + if err != nil { + return nil, err + } + transformed.WebServerNetworkAccessControl = transformedWebServerNetworkAccessControl + +<% end -%> return transformed, nil } @@ -748,6 +901,37 @@ func expandComposerEnvironmentConfigNodeCount(v interface{}, d *schema.ResourceD return int64(v.(int)), nil } +<% unless version == "ga" -%> +func expandComposerEnvironmentConfigWebServerNetworkAccessControl(v interface{}, d *schema.ResourceData, config *Config) (*composer.WebServerNetworkAccessControl, error) { + l := v.([]interface{}) + if len(l) == 0 { + return nil, nil + } + raw := l[0] + original := raw.(map[string]interface{}) + + allowedIpRangesRaw := 
original["allowed_ip_range"].(*schema.Set).List() + if len(allowedIpRangesRaw) == 0 { + return nil, nil + } + + transformed := &composer.WebServerNetworkAccessControl{} + allowedIpRanges := make([]*composer.AllowedIpRange, 0, len(original)) + + for _, originalIpRange := range allowedIpRangesRaw { + originalRangeRaw := originalIpRange.(map[string]interface{}) + transformedRange := &composer.AllowedIpRange{Value: originalRangeRaw["value"].(string)} + if v, ok := originalRangeRaw["description"]; ok { + transformedRange.Description = v.(string) + } + allowedIpRanges = append(allowedIpRanges, transformedRange) + } + + transformed.AllowedIpRanges = allowedIpRanges + return transformed, nil +} + +<% end -%> func expandComposerEnvironmentConfigPrivateEnvironmentConfig(v interface{}, d *schema.ResourceData, config *Config) (*composer.PrivateEnvironmentConfig, error) { l := v.([]interface{}) if len(l) == 0 { @@ -769,6 +953,14 @@ func expandComposerEnvironmentConfigPrivateEnvironmentConfig(v interface{}, d *s subBlock.MasterIpv4CidrBlock = v.(string) } + if v, ok := original["cloud_sql_ipv4_cidr_block"]; ok { + transformed.CloudSqlIpv4CidrBlock = v.(string) + } + + if v, ok := original["web_server_ipv4_cidr_block"]; ok { + transformed.WebServerIpv4CidrBlock = v.(string) + } + transformed.PrivateClusterConfig = subBlock return transformed, nil @@ -1051,7 +1243,7 @@ func handleComposerEnvironmentCreationOpFailure(id string, envName *composerEnvi waitErr := composerOperationWaitTime( config, op, envName.Project, fmt.Sprintf("Deleting invalid created Environment with state %q", env.State), - int(d.Timeout(schema.TimeoutCreate).Minutes())) + d.Timeout(schema.TimeoutCreate)) if waitErr != nil { return fmt.Errorf("Error waiting to delete invalid Environment with state %q: %s", env.State, waitErr) } diff --git a/third_party/terraform/resources/resource_compute_attached_disk.go b/third_party/terraform/resources/resource_compute_attached_disk.go index fd1886434cd3..da1399662d6b 100644 --- a/third_party/terraform/resources/resource_compute_attached_disk.go +++ b/third_party/terraform/resources/resource_compute_attached_disk.go @@ -31,37 +31,43 @@ func resourceComputeAttachedDisk() *schema.Resource { Type: schema.TypeString, Required: true, ForceNew: true, + Description: `name or self_link of the disk that will be attached.`, DiffSuppressFunc: compareSelfLinkOrResourceName, }, "instance": { Type: schema.TypeString, Required: true, ForceNew: true, + Description: `name or self_link of the compute instance that the disk will be attached to. If the self_link is provided then zone and project are extracted from the self link. If only the name is used then zone and project must be defined as properties on the resource or provider.`, DiffSuppressFunc: compareSelfLinkOrResourceName, }, "project": { - Type: schema.TypeString, - ForceNew: true, - Computed: true, - Optional: true, + Type: schema.TypeString, + ForceNew: true, + Computed: true, + Optional: true, + Description: `The project that the referenced compute instance is a part of. If instance is referenced by its self_link the project defined in the link will take precedence.`, }, "zone": { - Type: schema.TypeString, - ForceNew: true, - Computed: true, - Optional: true, + Type: schema.TypeString, + ForceNew: true, + Computed: true, + Optional: true, + Description: `The zone that the referenced compute instance is located within. 
If instance is referenced by its self_link the zone defined in the link will take precedence.`, }, "device_name": { - Type: schema.TypeString, - ForceNew: true, - Optional: true, - Computed: true, + Type: schema.TypeString, + ForceNew: true, + Optional: true, + Computed: true, + Description: `Specifies a unique device name of your choice that is reflected into the /dev/disk/by-id/google-* tree of a Linux operating system running within the instance. This name can be used to reference the device for mounting, resizing, and so on, from within the instance. If not specified, the server chooses a default device name to apply to this disk, in the form persistent-disks-x, where x is a number assigned by Google Compute Engine.`, }, "mode": { Type: schema.TypeString, ForceNew: true, Optional: true, Default: "READ_WRITE", + Description: `The mode in which to attach this disk, either READ_WRITE or READ_ONLY. If not specified, the default is to attach the disk in READ_WRITE mode.`, ValidateFunc: validation.StringInSlice([]string{"READ_ONLY", "READ_WRITE"}, false), }, }, @@ -103,7 +109,7 @@ func resourceAttachedDiskCreate(d *schema.ResourceData, meta interface{}) error d.SetId(fmt.Sprintf("projects/%s/zones/%s/instances/%s/%s", zv.Project, zv.Zone, zv.Name, diskName)) waitErr := computeOperationWaitTime(config, op, zv.Project, - "disk to attach", int(d.Timeout(schema.TimeoutCreate).Minutes())) + "disk to attach", d.Timeout(schema.TimeoutCreate)) if waitErr != nil { d.SetId("") return waitErr @@ -184,7 +190,7 @@ func resourceAttachedDiskDelete(d *schema.ResourceData, meta interface{}) error } waitErr := computeOperationWaitTime(config, op, zv.Project, - fmt.Sprintf("Detaching disk from %s", zv.Name), int(d.Timeout(schema.TimeoutDelete).Minutes())) + fmt.Sprintf("Detaching disk from %s", zv.Name), d.Timeout(schema.TimeoutDelete)) if waitErr != nil { return waitErr } diff --git a/third_party/terraform/resources/resource_compute_instance.go b/third_party/terraform/resources/resource_compute_instance.go.erb similarity index 81% rename from third_party/terraform/resources/resource_compute_instance.go rename to third_party/terraform/resources/resource_compute_instance.go.erb index 326af1f9d84c..a4efa69275f4 100644 --- a/third_party/terraform/resources/resource_compute_instance.go +++ b/third_party/terraform/resources/resource_compute_instance.go.erb @@ -1,3 +1,5 @@ +// <% autogen_exception -%> + package google import ( @@ -42,6 +44,9 @@ var ( "scheduling.0.automatic_restart", "scheduling.0.preemptible", "scheduling.0.node_affinities", +<% unless version == 'ga' -%> + "scheduling.0.min_node_cpus", +<% end -%> } shieldedInstanceConfigKeys = []string{ @@ -75,10 +80,11 @@ func resourceComputeInstance() *schema.Resource { // resource_compute_instance_template schema when updating this one. 
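The rename of resource_compute_instance.go to resource_compute_instance.go.erb, together with the `// <% autogen_exception -%>` header, turns the handwritten file into an ERB template so that beta-only schema such as `min_node_cpus` can be fenced off from the GA provider build. The gating pattern, restated from the hunk above with the surrounding schema elided:

```go
// <% autogen_exception -%>

// Blocks wrapped this way are rendered only into non-GA (beta) builds of the
// provider; the generated GA source omits them entirely.
<% unless version == 'ga' -%>
	"min_node_cpus": {
		Type:         schema.TypeInt,
		Optional:     true,
		AtLeastOneOf: schedulingKeys,
	},
<% end -%>
```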
Schema: map[string]*schema.Schema{ "boot_disk": { - Type: schema.TypeList, - Required: true, - ForceNew: true, - MaxItems: 1, + Type: schema.TypeList, + Required: true, + ForceNew: true, + MaxItems: 1, + Description: `The boot disk for the instance.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "auto_delete": { @@ -87,6 +93,7 @@ func resourceComputeInstance() *schema.Resource { AtLeastOneOf: bootDiskKeys, Default: true, ForceNew: true, + Description: `Whether the disk will be auto-deleted when the instance is deleted.`, }, "device_name": { @@ -95,6 +102,7 @@ func resourceComputeInstance() *schema.Resource { AtLeastOneOf: bootDiskKeys, Computed: true, ForceNew: true, + Description: `Name with which attached disk will be accessible under /dev/disk/by-id/`, }, "disk_encryption_key_raw": { @@ -104,11 +112,13 @@ func resourceComputeInstance() *schema.Resource { ForceNew: true, ConflictsWith: []string{"boot_disk.0.kms_key_self_link"}, Sensitive: true, + Description: `A 256-bit customer-supplied encryption key, encoded in RFC 4648 base64 to encrypt this disk. Only one of kms_key_self_link and disk_encryption_key_raw may be set.`, }, "disk_encryption_key_sha256": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `The RFC 4648 base64 encoded SHA-256 hash of the customer-supplied encryption key that protects this resource.`, }, "kms_key_self_link": { @@ -119,6 +129,7 @@ func resourceComputeInstance() *schema.Resource { ConflictsWith: []string{"boot_disk.0.disk_encryption_key_raw"}, DiffSuppressFunc: compareSelfLinkRelativePaths, Computed: true, + Description: `The self_link of the encryption key that is stored in Google Cloud KMS to encrypt this disk. Only one of kms_key_self_link and disk_encryption_key_raw may be set.`, }, "initialize_params": { @@ -128,6 +139,7 @@ func resourceComputeInstance() *schema.Resource { Computed: true, ForceNew: true, MaxItems: 1, + Description: `Parameters with which a disk was created alongside the instance.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "size": { @@ -137,6 +149,7 @@ func resourceComputeInstance() *schema.Resource { Computed: true, ForceNew: true, ValidateFunc: validation.IntAtLeast(1), + Description: `The size of the image in gigabytes.`, }, "type": { @@ -146,6 +159,7 @@ func resourceComputeInstance() *schema.Resource { Computed: true, ForceNew: true, ValidateFunc: validation.StringInSlice([]string{"pd-standard", "pd-ssd"}, false), + Description: `The GCE disk type. One of pd-standard or pd-ssd.`, }, "image": { @@ -155,6 +169,7 @@ func resourceComputeInstance() *schema.Resource { Computed: true, ForceNew: true, DiffSuppressFunc: diskImageDiffSuppress, + Description: `The image from which this disk was initialised.`, }, "labels": { @@ -163,6 +178,7 @@ func resourceComputeInstance() *schema.Resource { AtLeastOneOf: initializeParamsKeys, Computed: true, ForceNew: true, + Description: `A set of key/value label pairs assigned to the disk.`, }, }, }, @@ -175,6 +191,7 @@ func resourceComputeInstance() *schema.Resource { ForceNew: true, Default: "READ_WRITE", ValidateFunc: validation.StringInSlice([]string{"READ_WRITE", "READ_ONLY"}, false), + Description: `Read/write mode for the disk. 
One of "READ_ONLY" or "READ_WRITE".`, }, "source": { @@ -185,26 +202,30 @@ func resourceComputeInstance() *schema.Resource { ForceNew: true, ConflictsWith: []string{"boot_disk.initialize_params"}, DiffSuppressFunc: compareSelfLinkOrResourceName, + Description: `The name or self_link of the disk attached to this instance.`, }, }, }, }, "machine_type": { - Type: schema.TypeString, - Required: true, + Type: schema.TypeString, + Required: true, + Description: `The machine type to create.`, }, "name": { - Type: schema.TypeString, - Required: true, - ForceNew: true, + Type: schema.TypeString, + Required: true, + ForceNew: true, + Description: `The name of the instance. One of name or self_link must be provided.`, }, "network_interface": { - Type: schema.TypeList, - Required: true, - ForceNew: true, + Type: schema.TypeList, + Required: true, + ForceNew: true, + Description: `The networks attached to the instance.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "network": { @@ -213,6 +234,7 @@ func resourceComputeInstance() *schema.Resource { Computed: true, ForceNew: true, DiffSuppressFunc: compareSelfLinkOrResourceName, + Description: `The name or self_link of the network attached to this interface.`, }, "subnetwork": { @@ -221,36 +243,42 @@ func resourceComputeInstance() *schema.Resource { Computed: true, ForceNew: true, DiffSuppressFunc: compareSelfLinkOrResourceName, + Description: `The name or self_link of the subnetwork attached to this interface.`, }, "subnetwork_project": { - Type: schema.TypeString, - Optional: true, - Computed: true, - ForceNew: true, + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + Description: `The project in which the subnetwork belongs.`, }, "network_ip": { - Type: schema.TypeString, - Optional: true, - ForceNew: true, - Computed: true, + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Computed: true, + Description: `The private IP address assigned to the instance.`, }, "name": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `The name of the interface`, }, "access_config": { - Type: schema.TypeList, - Optional: true, + Type: schema.TypeList, + Optional: true, + Description: `Access configurations, i.e. IPs via which this instance can be accessed via the Internet.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "nat_ip": { - Type: schema.TypeString, - Optional: true, - Computed: true, + Type: schema.TypeString, + Optional: true, + Computed: true, + Description: `The IP address that is be 1:1 mapped to the instance's network ip.`, }, "network_tier": { @@ -258,29 +286,34 @@ func resourceComputeInstance() *schema.Resource { Optional: true, Computed: true, ValidateFunc: validation.StringInSlice([]string{"PREMIUM", "STANDARD"}, false), + Description: `The networking tier used for configuring this instance. 
One of PREMIUM or STANDARD.`, }, "public_ptr_domain_name": { - Type: schema.TypeString, - Optional: true, + Type: schema.TypeString, + Optional: true, + Description: `The DNS domain name for the public PTR record.`, }, }, }, }, "alias_ip_range": { - Type: schema.TypeList, - Optional: true, + Type: schema.TypeList, + Optional: true, + Description: `An array of alias IP ranges for this network interface.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "ip_cidr_range": { Type: schema.TypeString, Required: true, DiffSuppressFunc: ipCidrRangeDiffSuppress, + Description: `The IP CIDR range represented by this alias IP range.`, }, "subnetwork_range_name": { - Type: schema.TypeString, - Optional: true, + Type: schema.TypeString, + Optional: true, + Description: `The subnetwork secondary range name specifying the secondary range from which to allocate the IP CIDR range for this alias IP range.`, }, }, }, @@ -290,25 +323,29 @@ func resourceComputeInstance() *schema.Resource { }, "allow_stopping_for_update": { - Type: schema.TypeBool, - Optional: true, + Type: schema.TypeBool, + Optional: true, + Description: `If true, allows Terraform to stop the instance to update its properties. If you try to update a property that requires stopping the instance without setting this field, the update will fail.`, }, "attached_disk": { - Type: schema.TypeList, - Optional: true, + Type: schema.TypeList, + Optional: true, + Description: `List of disks attached to the instance`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "source": { Type: schema.TypeString, Required: true, DiffSuppressFunc: compareSelfLinkOrResourceName, + Description: `The name or self_link of the disk attached to this instance.`, }, "device_name": { - Type: schema.TypeString, - Optional: true, - Computed: true, + Type: schema.TypeString, + Optional: true, + Computed: true, + Description: `Name with which the attached disk is accessible under /dev/disk/by-id/`, }, "mode": { @@ -316,12 +353,14 @@ func resourceComputeInstance() *schema.Resource { Optional: true, Default: "READ_WRITE", ValidateFunc: validation.StringInSlice([]string{"READ_WRITE", "READ_ONLY"}, false), + Description: `Read/write mode for the disk. One of "READ_ONLY" or "READ_WRITE".`, }, "disk_encryption_key_raw": { - Type: schema.TypeString, - Optional: true, - Sensitive: true, + Type: schema.TypeString, + Optional: true, + Sensitive: true, + Description: `A 256-bit customer-supplied encryption key, encoded in RFC 4648 base64 to encrypt this disk. Only one of kms_key_self_link and disk_encryption_key_raw may be set.`, }, "kms_key_self_link": { @@ -329,99 +368,114 @@ func resourceComputeInstance() *schema.Resource { Optional: true, DiffSuppressFunc: compareSelfLinkRelativePaths, Computed: true, + Description: `The self_link of the encryption key that is stored in Google Cloud KMS to encrypt this disk. 
Only one of kms_key_self_link and disk_encryption_key_raw may be set.`, }, "disk_encryption_key_sha256": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `The RFC 4648 base64 encoded SHA-256 hash of the customer-supplied encryption key that protects this resource.`, }, }, }, }, "can_ip_forward": { - Type: schema.TypeBool, - Optional: true, - Default: false, - ForceNew: true, + Type: schema.TypeBool, + Optional: true, + Default: false, + ForceNew: true, + Description: `Whether sending and receiving of packets with non-matching source or destination IPs is allowed.`, }, "description": { - Type: schema.TypeString, - Optional: true, - ForceNew: true, + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Description: `A brief description of the resource.`, }, "deletion_protection": { - Type: schema.TypeBool, - Optional: true, - Default: false, + Type: schema.TypeBool, + Optional: true, + Default: false, + Description: `Whether deletion protection is enabled on this instance.`, }, "enable_display": { - Type: schema.TypeBool, - Optional: true, + Type: schema.TypeBool, + Optional: true, + Description: `Whether the instance has virtual displays enabled.`, }, "guest_accelerator": { - Type: schema.TypeList, - Optional: true, - Computed: true, - ForceNew: true, - ConfigMode: schema.SchemaConfigModeAttr, + Type: schema.TypeList, + Optional: true, + Computed: true, + ForceNew: true, + ConfigMode: schema.SchemaConfigModeAttr, + Description: `List of the type and count of accelerator cards attached to the instance.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "count": { - Type: schema.TypeInt, - Required: true, - ForceNew: true, + Type: schema.TypeInt, + Required: true, + ForceNew: true, + Description: `The number of the guest accelerator cards exposed to this instance.`, }, "type": { Type: schema.TypeString, Required: true, ForceNew: true, DiffSuppressFunc: compareSelfLinkOrResourceName, + Description: `The accelerator type resource exposed to this instance. E.g. nvidia-tesla-k80.`, }, }, }, }, "labels": { - Type: schema.TypeMap, - Optional: true, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeMap, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Description: `A set of key/value label pairs assigned to the instance.`, }, "metadata": { - Type: schema.TypeMap, - Optional: true, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeMap, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Description: `Metadata key/value pairs made available within the instance.`, }, "metadata_startup_script": { - Type: schema.TypeString, - Optional: true, - ForceNew: true, + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Description: `Metadata startup scripts made available within the instance.`, }, "min_cpu_platform": { - Type: schema.TypeString, - Optional: true, - Computed: true, + Type: schema.TypeString, + Optional: true, + Computed: true, + Description: `The minimum CPU platform specified for the VM instance.`, }, "project": { - Type: schema.TypeString, - Optional: true, - Computed: true, - ForceNew: true, + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + Description: `The ID of the project in which the resource belongs. If self_link is provided, this value is ignored. 
If neither self_link nor project are provided, the provider project is used.`, }, "scheduling": { - Type: schema.TypeList, - MaxItems: 1, - Optional: true, - Computed: true, + Type: schema.TypeList, + MaxItems: 1, + Optional: true, + Computed: true, + Description: `The scheduling strategy being used by the instance.`, Elem: &schema.Resource{ // !!! IMPORTANT !!! // We have a custom diff function for the scheduling block due to issues with Terraform's @@ -433,6 +487,7 @@ func resourceComputeInstance() *schema.Resource { Optional: true, Computed: true, AtLeastOneOf: schedulingKeys, + Description: `Describes maintenance behavior for the instance. One of MIGRATE or TERMINATE.`, }, "automatic_restart": { @@ -440,6 +495,7 @@ func resourceComputeInstance() *schema.Resource { Optional: true, AtLeastOneOf: schedulingKeys, Default: true, + Description: `Specifies if the instance should be restarted if it was terminated by Compute Engine (not a user).`, }, "preemptible": { @@ -448,6 +504,7 @@ func resourceComputeInstance() *schema.Resource { Default: false, AtLeastOneOf: schedulingKeys, ForceNew: true, + Description: `Whether the instance is preemptible.`, }, "node_affinities": { @@ -457,41 +514,54 @@ func resourceComputeInstance() *schema.Resource { ForceNew: true, Elem: instanceSchedulingNodeAffinitiesElemSchema(), DiffSuppressFunc: emptyOrDefaultStringSuppress(""), + Description: `Specifies node affinities or anti-affinities to determine which sole-tenant nodes your instances and managed instance groups will use as host systems.`, }, +<% unless version == 'ga' -%> + "min_node_cpus": { + Type: schema.TypeInt, + Optional: true, + AtLeastOneOf: schedulingKeys, + }, +<% end -%> }, }, }, "scratch_disk": { - Type: schema.TypeList, - Optional: true, - ForceNew: true, + Type: schema.TypeList, + Optional: true, + ForceNew: true, + Description: `The scratch disks attached to the instance.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "interface": { Type: schema.TypeString, Required: true, ValidateFunc: validation.StringInSlice([]string{"SCSI", "NVME"}, false), + Description: `The disk interface used for attaching this disk. One of SCSI or NVME.`, }, }, }, }, "service_account": { - Type: schema.TypeList, - MaxItems: 1, - Optional: true, + Type: schema.TypeList, + MaxItems: 1, + Optional: true, + Description: `The service account to attach to the instance.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "email": { - Type: schema.TypeString, - Optional: true, - Computed: true, + Type: schema.TypeString, + Optional: true, + Computed: true, + Description: `The service account e-mail address.`, }, "scopes": { - Type: schema.TypeSet, - Required: true, + Type: schema.TypeSet, + Required: true, + Description: `A list of service scopes.`, Elem: &schema.Schema{ Type: schema.TypeString, StateFunc: func(v interface{}) string { @@ -512,6 +582,7 @@ func resourceComputeInstance() *schema.Resource { // image being used, the field needs to be marked as Computed.
Computed: true, DiffSuppressFunc: emptyOrDefaultStringSuppress(""), + Description: `The shielded vm config being used by the instance.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "enable_secure_boot": { @@ -519,6 +590,7 @@ func resourceComputeInstance() *schema.Resource { Optional: true, AtLeastOneOf: shieldedInstanceConfigKeys, Default: false, + Description: `Whether secure boot is enabled for the instance.`, }, "enable_vtpm": { @@ -526,6 +598,7 @@ func resourceComputeInstance() *schema.Resource { Optional: true, AtLeastOneOf: shieldedInstanceConfigKeys, Default: true, + Description: `Whether the instance uses vTPM.`, }, "enable_integrity_monitoring": { @@ -533,6 +606,7 @@ func resourceComputeInstance() *schema.Resource { Optional: true, AtLeastOneOf: shieldedInstanceConfigKeys, Default: true, + Description: `Whether integrity monitoring is enabled for the instance.`, }, }, }, @@ -542,59 +616,80 @@ func resourceComputeInstance() *schema.Resource { Type: schema.TypeString, Optional: true, ValidateFunc: validation.StringInSlice([]string{"RUNNING", "TERMINATED"}, false), + Description: `Desired status of the instance. Either "RUNNING" or "TERMINATED".`, }, "current_status": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `Current status of the instance.`, }, "tags": { - Type: schema.TypeSet, - Optional: true, - Elem: &schema.Schema{Type: schema.TypeString}, - Set: schema.HashString, + Type: schema.TypeSet, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Set: schema.HashString, + Description: `The list of tags attached to the instance.`, }, "zone": { - Type: schema.TypeString, - Optional: true, - Computed: true, - ForceNew: true, + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + Description: `The zone of the instance. If self_link is provided, this value is ignored. If neither self_link nor zone are provided, the provider zone is used.`, }, "cpu_platform": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `The CPU platform used by this instance.`, }, "instance_id": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `The server-assigned unique identifier of this instance.`, }, "label_fingerprint": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `The unique fingerprint of the labels.`, }, "metadata_fingerprint": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `The unique fingerprint of the metadata.`, }, "self_link": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `The URI of the created resource.`, }, "tags_fingerprint": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `The unique fingerprint of the tags.`, }, "hostname": { - Type: schema.TypeString, - Optional: true, - ForceNew: true, + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Description: `A custom hostname for the instance. Must be a fully qualified DNS name and RFC-1035-valid. Valid format is a series of labels 1-63 characters long matching the regular expression [a-z]([-a-z0-9]*[a-z0-9]), concatenated with periods. The entire hostname must not exceed 253 characters. 
Changing this forces a new resource to be created.`, + }, + + "resource_policies": { + Type: schema.TypeList, + Elem: &schema.Schema{Type: schema.TypeString}, + DiffSuppressFunc: compareSelfLinkRelativePaths, + Optional: true, + ForceNew: true, + MaxItems: 1, + Description: `A list of short names or self_links of resource policies to attach to the instance. Modifying this list will cause the instance to recreate. Currently a max of 1 resource policy is supported.`, }, }, CustomizeDiff: customdiff.All( @@ -723,6 +818,7 @@ func expandComputeInstance(project string, d *schema.ResourceData, config *Confi ForceSendFields: []string{"CanIpForward", "DeletionProtection"}, ShieldedVmConfig: expandShieldedVmConfigs(d), DisplayDevice: expandDisplayDevice(d), + ResourcePolicies: convertStringArr(d.Get("resource_policies").([]interface{})), }, nil } @@ -804,9 +900,6 @@ func resourceComputeInstanceCreate(d *schema.ResourceData, meta interface{}) err return err } - // Read create timeout - createTimeout := int(d.Timeout(schema.TimeoutCreate).Minutes()) - log.Printf("[INFO] Requesting instance creation") op, err := config.clientComputeBeta.Instances.Insert(project, zone.Name, instance).Do() if err != nil { @@ -817,7 +910,7 @@ func resourceComputeInstanceCreate(d *schema.ResourceData, meta interface{}) err d.SetId(fmt.Sprintf("projects/%s/zones/%s/instances/%s", project, z, instance.Name)) // Wait for the operation to complete - waitErr := computeOperationWaitTime(config, op, project, "instance to create", createTimeout) + waitErr := computeOperationWaitTime(config, op, project, "instance to create", d.Timeout(schema.TimeoutCreate)) if waitErr != nil { // The resource didn't actually create d.SetId("") @@ -986,6 +1079,9 @@ func resourceComputeInstanceRead(d *schema.ResourceData, meta interface{}) error } } } + + d.Set("resource_policies", instance.ResourcePolicies) + // Remove nils from map in case there were disks in the config that were not present on read; // i.e. 
a disk was detached out of band ads := []map[string]interface{}{} @@ -1076,7 +1172,7 @@ func resourceComputeInstanceUpdate(d *schema.ResourceData, meta interface{}) err return fmt.Errorf("Error updating metadata: %s", err) } - opErr := computeOperationWaitTime(config, op, project, "metadata to update", int(d.Timeout(schema.TimeoutUpdate).Minutes())) + opErr := computeOperationWaitTime(config, op, project, "metadata to update", d.Timeout(schema.TimeoutUpdate)) if opErr != nil { return opErr } @@ -1104,7 +1200,7 @@ func resourceComputeInstanceUpdate(d *schema.ResourceData, meta interface{}) err return fmt.Errorf("Error updating tags: %s", err) } - opErr := computeOperationWaitTime(config, op, project, "tags to update", int(d.Timeout(schema.TimeoutUpdate).Minutes())) + opErr := computeOperationWaitTime(config, op, project, "tags to update", d.Timeout(schema.TimeoutUpdate)) if opErr != nil { return opErr } @@ -1122,7 +1218,7 @@ func resourceComputeInstanceUpdate(d *schema.ResourceData, meta interface{}) err return fmt.Errorf("Error updating labels: %s", err) } - opErr := computeOperationWaitTime(config, op, project, "labels to update", int(d.Timeout(schema.TimeoutUpdate).Minutes())) + opErr := computeOperationWaitTime(config, op, project, "labels to update", d.Timeout(schema.TimeoutUpdate)) if opErr != nil { return opErr } @@ -1144,7 +1240,7 @@ func resourceComputeInstanceUpdate(d *schema.ResourceData, meta interface{}) err opErr := computeOperationWaitTime( config, op, project, "scheduling policy update", - int(d.Timeout(schema.TimeoutUpdate).Minutes())) + d.Timeout(schema.TimeoutUpdate)) if opErr != nil { return opErr } @@ -1182,7 +1278,7 @@ func resourceComputeInstanceUpdate(d *schema.ResourceData, meta interface{}) err if err != nil { return fmt.Errorf("Error deleting old access_config: %s", err) } - opErr := computeOperationWaitTime(config, op, project, "old access_config to delete", int(d.Timeout(schema.TimeoutUpdate).Minutes())) + opErr := computeOperationWaitTime(config, op, project, "old access_config to delete", d.Timeout(schema.TimeoutUpdate)) if opErr != nil { return opErr } @@ -1207,7 +1303,7 @@ func resourceComputeInstanceUpdate(d *schema.ResourceData, meta interface{}) err if err != nil { return fmt.Errorf("Error adding new access_config: %s", err) } - opErr := computeOperationWaitTime(config, op, project, "new access_config to add", int(d.Timeout(schema.TimeoutUpdate).Minutes())) + opErr := computeOperationWaitTime(config, op, project, "new access_config to add", d.Timeout(schema.TimeoutUpdate)) if opErr != nil { return opErr } @@ -1227,7 +1323,7 @@ func resourceComputeInstanceUpdate(d *schema.ResourceData, meta interface{}) err if err != nil { return errwrap.Wrapf("Error removing alias_ip_range: {{err}}", err) } - opErr := computeOperationWaitTime(config, op, project, "updating alias ip ranges", int(d.Timeout(schema.TimeoutUpdate).Minutes())) + opErr := computeOperationWaitTime(config, op, project, "updating alias ip ranges", d.Timeout(schema.TimeoutUpdate)) if opErr != nil { return opErr } @@ -1251,7 +1347,7 @@ func resourceComputeInstanceUpdate(d *schema.ResourceData, meta interface{}) err if err != nil { return errwrap.Wrapf("Error adding alias_ip_range: {{err}}", err) } - opErr := computeOperationWaitTime(config, op, project, "updating alias ip ranges", int(d.Timeout(schema.TimeoutUpdate).Minutes())) + opErr := computeOperationWaitTime(config, op, project, "updating alias ip ranges", d.Timeout(schema.TimeoutUpdate)) if opErr != nil { return opErr } @@ -1329,7 +1425,7 @@ 
func resourceComputeInstanceUpdate(d *schema.ResourceData, meta interface{}) err return errwrap.Wrapf("Error detaching disk: %s", err) } - opErr := computeOperationWaitTime(config, op, project, "detaching disk", int(d.Timeout(schema.TimeoutUpdate).Minutes())) + opErr := computeOperationWaitTime(config, op, project, "detaching disk", d.Timeout(schema.TimeoutUpdate)) if opErr != nil { return opErr } @@ -1344,7 +1440,7 @@ func resourceComputeInstanceUpdate(d *schema.ResourceData, meta interface{}) err return errwrap.Wrapf("Error attaching disk : {{err}}", err) } - opErr := computeOperationWaitTime(config, op, project, "attaching disk", int(d.Timeout(schema.TimeoutUpdate).Minutes())) + opErr := computeOperationWaitTime(config, op, project, "attaching disk", d.Timeout(schema.TimeoutUpdate)) if opErr != nil { return opErr } @@ -1378,7 +1474,7 @@ func resourceComputeInstanceUpdate(d *schema.ResourceData, meta interface{}) err return fmt.Errorf("Error updating deletion protection flag: %s", err) } - opErr := computeOperationWaitTime(config, op, project, "deletion protection to update", int(d.Timeout(schema.TimeoutUpdate).Minutes())) + opErr := computeOperationWaitTime(config, op, project, "deletion protection to update", d.Timeout(schema.TimeoutUpdate)) if opErr != nil { return opErr } @@ -1407,7 +1503,7 @@ func resourceComputeInstanceUpdate(d *schema.ResourceData, meta interface{}) err } opErr := computeOperationWaitTime( config, op, project, "updating status", - int(d.Timeout(schema.TimeoutUpdate).Minutes())) + d.Timeout(schema.TimeoutUpdate)) if opErr != nil { return opErr } @@ -1432,7 +1528,7 @@ func resourceComputeInstanceUpdate(d *schema.ResourceData, meta interface{}) err return errwrap.Wrapf("Error stopping instance: {{err}}", err) } - opErr := computeOperationWaitTime(config, op, project, "stopping instance", int(d.Timeout(schema.TimeoutUpdate).Minutes())) + opErr := computeOperationWaitTime(config, op, project, "stopping instance", d.Timeout(schema.TimeoutUpdate)) if opErr != nil { return opErr } @@ -1450,7 +1546,7 @@ func resourceComputeInstanceUpdate(d *schema.ResourceData, meta interface{}) err if err != nil { return err } - opErr := computeOperationWaitTime(config, op, project, "updating machinetype", int(d.Timeout(schema.TimeoutUpdate).Minutes())) + opErr := computeOperationWaitTime(config, op, project, "updating machinetype", d.Timeout(schema.TimeoutUpdate)) if opErr != nil { return opErr } @@ -1472,7 +1568,7 @@ func resourceComputeInstanceUpdate(d *schema.ResourceData, meta interface{}) err if err != nil { return err } - opErr := computeOperationWaitTime(config, op, project, "updating min cpu platform", int(d.Timeout(schema.TimeoutUpdate).Minutes())) + opErr := computeOperationWaitTime(config, op, project, "updating min cpu platform", d.Timeout(schema.TimeoutUpdate)) if opErr != nil { return opErr } @@ -1491,7 +1587,7 @@ func resourceComputeInstanceUpdate(d *schema.ResourceData, meta interface{}) err if err != nil { return err } - opErr := computeOperationWaitTime(config, op, project, "updating service account", int(d.Timeout(schema.TimeoutUpdate).Minutes())) + opErr := computeOperationWaitTime(config, op, project, "updating service account", d.Timeout(schema.TimeoutUpdate)) if opErr != nil { return opErr } @@ -1507,7 +1603,7 @@ func resourceComputeInstanceUpdate(d *schema.ResourceData, meta interface{}) err if err != nil { return fmt.Errorf("Error updating display device: %s", err) } - opErr := computeOperationWaitTime(config, op, project, "updating display device", 
int(d.Timeout(schema.TimeoutUpdate).Minutes())) + opErr := computeOperationWaitTime(config, op, project, "updating display device", d.Timeout(schema.TimeoutUpdate)) if opErr != nil { return opErr } @@ -1522,7 +1618,7 @@ func resourceComputeInstanceUpdate(d *schema.ResourceData, meta interface{}) err } opErr := computeOperationWaitTime(config, op, project, - "starting instance", int(d.Timeout(schema.TimeoutUpdate).Minutes())) + "starting instance", d.Timeout(schema.TimeoutUpdate)) if opErr != nil { return opErr } @@ -1538,7 +1634,7 @@ func resourceComputeInstanceUpdate(d *schema.ResourceData, meta interface{}) err } opErr := computeOperationWaitTime(config, op, project, - "shielded vm config update", int(d.Timeout(schema.TimeoutUpdate).Minutes())) + "shielded vm config update", d.Timeout(schema.TimeoutUpdate)) if opErr != nil { return opErr } @@ -1762,7 +1858,7 @@ func resourceComputeInstanceDelete(d *schema.ResourceData, meta interface{}) err } // Wait for the operation to complete - opErr := computeOperationWaitTime(config, op, project, "instance to delete", int(d.Timeout(schema.TimeoutDelete).Minutes())) + opErr := computeOperationWaitTime(config, op, project, "instance to delete", d.Timeout(schema.TimeoutDelete)) if opErr != nil { return opErr } diff --git a/third_party/terraform/resources/resource_compute_instance_from_template.go b/third_party/terraform/resources/resource_compute_instance_from_template.go index a09aa731605f..373942ef4eaf 100644 --- a/third_party/terraform/resources/resource_compute_instance_from_template.go +++ b/third_party/terraform/resources/resource_compute_instance_from_template.go @@ -68,9 +68,10 @@ func computeInstanceFromTemplateSchema() map[string]*schema.Schema { }) s["source_instance_template"] = &schema.Schema{ - Type: schema.TypeString, - Required: true, - ForceNew: true, + Type: schema.TypeString, + Required: true, + ForceNew: true, + Description: `Name or self link of an instance template to create the instance based on.`, } return s @@ -163,7 +164,7 @@ func resourceComputeInstanceFromTemplateCreate(d *schema.ResourceData, meta inte // Wait for the operation to complete waitErr := computeOperationWaitTime(config, op, project, - "instance to create", int(d.Timeout(schema.TimeoutCreate).Minutes())) + "instance to create", d.Timeout(schema.TimeoutCreate)) if waitErr != nil { // The resource didn't actually create d.SetId("") diff --git a/third_party/terraform/resources/resource_compute_instance_group.go b/third_party/terraform/resources/resource_compute_instance_group.go index 744e140baf61..0f25b4757849 100644 --- a/third_party/terraform/resources/resource_compute_instance_group.go +++ b/third_party/terraform/resources/resource_compute_instance_group.go @@ -4,6 +4,7 @@ import ( "fmt" "log" "strings" + "time" "google.golang.org/api/compute/v1" "google.golang.org/api/googleapi" @@ -21,50 +22,63 @@ func resourceComputeInstanceGroup() *schema.Resource { State: resourceComputeInstanceGroupImportState, }, + Timeouts: &schema.ResourceTimeout{ + Create: schema.DefaultTimeout(6 * time.Minute), + Update: schema.DefaultTimeout(6 * time.Minute), + Delete: schema.DefaultTimeout(6 * time.Minute), + }, + SchemaVersion: 2, MigrateState: resourceComputeInstanceGroupMigrateState, Schema: map[string]*schema.Schema{ "name": { - Type: schema.TypeString, - Required: true, - ForceNew: true, + Type: schema.TypeString, + Required: true, + ForceNew: true, + Description: `The name of the instance group. Must be 1-63 characters long and comply with RFC1035. 
Supported characters include lowercase letters, numbers, and hyphens.`, }, "zone": { - Type: schema.TypeString, - Optional: true, - Computed: true, - ForceNew: true, + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + Description: `The zone that this instance group should be created in.`, }, "description": { - Type: schema.TypeString, - Optional: true, - ForceNew: true, + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Description: `An optional textual description of the instance group.`, }, "instances": { - Type: schema.TypeSet, - Optional: true, - Computed: true, - Elem: &schema.Schema{Type: schema.TypeString}, - Set: selfLinkRelativePathHash, + Type: schema.TypeSet, + Optional: true, + Computed: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Set: selfLinkRelativePathHash, + Description: `List of instances in the group. They should be given as self_link URLs. When adding instances they must all be in the same network and zone as the instance group.`, }, "named_port": { - Type: schema.TypeList, - Optional: true, + Type: schema.TypeList, + Optional: true, + Description: `The named port configuration.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "name": { - Type: schema.TypeString, - Required: true, + Type: schema.TypeString, + Required: true, + Description: `The name which the port will be mapped to.`, }, "port": { - Type: schema.TypeInt, - Required: true, + Type: schema.TypeInt, + Required: true, + Description: `The port number to map the name to.`, }, }, }, @@ -76,23 +90,27 @@ func resourceComputeInstanceGroup() *schema.Resource { Computed: true, DiffSuppressFunc: compareSelfLinkOrResourceName, ForceNew: true, + Description: `The URL of the network the instance group is in. If this is different from the network where the instances are in, the creation fails. Defaults to the network where the instances are in (if neither network nor instances is specified, this field will be blank).`, }, "project": { - Type: schema.TypeString, - Optional: true, - Computed: true, - ForceNew: true, + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + Description: `The ID of the project in which the resource belongs. 
If it is not provided, the provider project is used.`, }, "self_link": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `The URI of the created resource.`, }, "size": { - Type: schema.TypeInt, - Computed: true, + Type: schema.TypeInt, + Computed: true, + Description: `The number of instances in the group.`, }, }, } @@ -159,16 +177,26 @@ func resourceComputeInstanceGroupCreate(d *schema.ResourceData, meta interface{} d.SetId(fmt.Sprintf("projects/%s/zones/%s/instanceGroups/%s", project, zone, name)) // Wait for the operation to complete - err = computeOperationWait(config, op, project, "Creating InstanceGroup") + err = computeOperationWaitTime(config, op, project, "Creating InstanceGroup", d.Timeout(schema.TimeoutCreate)) if err != nil { d.SetId("") return err } if v, ok := d.GetOk("instances"); ok { - instanceUrls := convertStringArr(v.(*schema.Set).List()) - if !validInstanceURLs(instanceUrls) { - return fmt.Errorf("Error invalid instance URLs: %v", instanceUrls) + tmpUrls := convertStringArr(v.(*schema.Set).List()) + + var instanceUrls []string + for _, v := range tmpUrls { + if strings.HasPrefix(v, "https://") { + instanceUrls = append(instanceUrls, v) + } else { + url, err := replaceVars(d, config, "{{ComputeBasePath}}"+v) + if err != nil { + return err + } + instanceUrls = append(instanceUrls, url) + } } addInstanceReq := &compute.InstanceGroupsAddInstancesRequest{ @@ -183,7 +211,7 @@ func resourceComputeInstanceGroupCreate(d *schema.ResourceData, meta interface{} } // Wait for the operation to complete - err = computeOperationWait(config, op, project, "Adding instances to InstanceGroup") + err = computeOperationWaitTime(config, op, project, "Adding instances to InstanceGroup", d.Timeout(schema.TimeoutCreate)) if err != nil { return err } @@ -295,7 +323,7 @@ func resourceComputeInstanceGroupUpdate(d *schema.ResourceData, meta interface{} } } else { // Wait for the operation to complete - err = computeOperationWait(config, removeOp, project, "Updating InstanceGroup") + err = computeOperationWaitTime(config, removeOp, project, "Updating InstanceGroup", d.Timeout(schema.TimeoutUpdate)) if err != nil { return err } @@ -316,7 +344,7 @@ func resourceComputeInstanceGroupUpdate(d *schema.ResourceData, meta interface{} } // Wait for the operation to complete - err = computeOperationWait(config, addOp, project, "Updating InstanceGroup") + err = computeOperationWaitTime(config, addOp, project, "Updating InstanceGroup", d.Timeout(schema.TimeoutUpdate)) if err != nil { return err } @@ -339,7 +367,7 @@ func resourceComputeInstanceGroupUpdate(d *schema.ResourceData, meta interface{} return fmt.Errorf("Error updating named ports for InstanceGroup: %s", err) } - err = computeOperationWait(config, op, project, "Updating InstanceGroup") + err = computeOperationWaitTime(config, op, project, "Updating InstanceGroup", d.Timeout(schema.TimeoutUpdate)) if err != nil { return err } @@ -369,7 +397,7 @@ func resourceComputeInstanceGroupDelete(d *schema.ResourceData, meta interface{} return fmt.Errorf("Error deleting InstanceGroup: %s", err) } - err = computeOperationWait(config, op, project, "Deleting InstanceGroup") + err = computeOperationWaitTime(config, op, project, "Deleting InstanceGroup", d.Timeout(schema.TimeoutDelete)) if err != nil { return err } diff --git a/third_party/terraform/resources/resource_compute_instance_group_manager.go b/third_party/terraform/resources/resource_compute_instance_group_manager.go.erb similarity index 67% rename from 
third_party/terraform/resources/resource_compute_instance_group_manager.go rename to third_party/terraform/resources/resource_compute_instance_group_manager.go.erb index 42f21d003d09..f702ab9c9850 100644 --- a/third_party/terraform/resources/resource_compute_instance_group_manager.go +++ b/third_party/terraform/resources/resource_compute_instance_group_manager.go.erb @@ -1,3 +1,5 @@ +// <% autogen_exception -%> + package google import ( @@ -31,49 +33,57 @@ func resourceComputeInstanceGroupManager() *schema.Resource { Schema: map[string]*schema.Schema{ "base_instance_name": { - Type: schema.TypeString, - Required: true, - ForceNew: true, + Type: schema.TypeString, + Required: true, + ForceNew: true, + Description: `The base instance name to use for instances in this group. The value must be a valid RFC1035 name. Supported characters are lowercase letters, numbers, and hyphens (-). Instances are named by appending a hyphen and a random four-character string to the base instance name.`, }, "instance_template": { - Type: schema.TypeString, - Optional: true, - Computed: true, - Removed: "This field has been replaced by `version.instance_template`", + Type: schema.TypeString, + Optional: true, + Computed: true, + Removed: "This field has been replaced by `version.instance_template`", + Description: `The full URL to an instance template from which all new instances of this version will be created.`, }, "version": { - Type: schema.TypeList, - Required: true, + Type: schema.TypeList, + Required: true, + Description: `Application versions managed by this instance group. Each version deals with a specific instance template, allowing canary release scenarios.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "name": { - Type: schema.TypeString, - Optional: true, + Type: schema.TypeString, + Optional: true, + Description: `Version name.`, }, "instance_template": { Type: schema.TypeString, Required: true, DiffSuppressFunc: compareSelfLinkRelativePaths, + Description: `The full URL to an instance template from which all new instances of this version will be created.`, }, "target_size": { - Type: schema.TypeList, - Optional: true, - MaxItems: 1, + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Description: `The number of instances calculated as a fixed number or a percentage depending on the settings.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "fixed": { - Type: schema.TypeInt, - Optional: true, + Type: schema.TypeInt, + Optional: true, + Description: `The number of instances which are managed for this version. Conflicts with percent.`, }, "percent": { Type: schema.TypeInt, Optional: true, ValidateFunc: validation.IntBetween(0, 100), + Description: `The number of instances (calculated as percentage) which are managed for this version. Conflicts with fixed. Note that when using percent, rounding will be in favor of explicitly set target_size values; a managed instance group with 2 instances and 2 versions, one of which has a target_size.percent of 60 will create 2 instances of that version.`, }, }, }, @@ -83,62 +93,72 @@ func resourceComputeInstanceGroupManager() *schema.Resource { }, "name": { - Type: schema.TypeString, - Required: true, - ForceNew: true, + Type: schema.TypeString, + Required: true, + ForceNew: true, + Description: `The name of the instance group manager. Must be 1-63 characters long and comply with RFC1035. 
Supported characters include lowercase letters, numbers, and hyphens.`, }, "zone": { - Type: schema.TypeString, - Optional: true, - Computed: true, - ForceNew: true, + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + Description: `The zone that instances in this group should be created in.`, }, "description": { - Type: schema.TypeString, - Optional: true, - ForceNew: true, + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Description: `An optional textual description of the instance group manager.`, }, "fingerprint": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `The fingerprint of the instance group manager.`, }, "instance_group": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `The full URL of the instance group created by the manager.`, }, "named_port": { - Type: schema.TypeSet, - Optional: true, + Type: schema.TypeSet, + Optional: true, + Description: `The named port configuration.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "name": { - Type: schema.TypeString, - Required: true, + Type: schema.TypeString, + Required: true, + Description: `The name of the port.`, }, "port": { - Type: schema.TypeInt, - Required: true, + Type: schema.TypeInt, + Required: true, + Description: `The port number.`, }, }, }, }, "project": { - Type: schema.TypeString, - Optional: true, - ForceNew: true, - Computed: true, + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Computed: true, + Description: `The ID of the project in which the resource belongs. If it is not provided, the provider project is used.`, }, "self_link": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `The URL of the created resource.`, }, "update_strategy": { @@ -153,53 +173,61 @@ func resourceComputeInstanceGroupManager() *schema.Resource { Elem: &schema.Schema{ Type: schema.TypeString, }, - Set: selfLinkRelativePathHash, + Set: selfLinkRelativePathHash, + Description: `The full URL of all target pools to which new instances in the group are added. Updating the target pools attribute does not affect existing instances.`, }, "target_size": { - Type: schema.TypeInt, - Computed: true, - Optional: true, + Type: schema.TypeInt, + Computed: true, + Optional: true, + Description: `The target number of running instances for this managed instance group. This value should always be explicitly set unless this resource is attached to an autoscaler, in which case it should never be set. Defaults to 0.`, }, "auto_healing_policies": { - Type: schema.TypeList, - Optional: true, - MaxItems: 1, + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Description: `The autohealing policies for this managed instance group. You can specify only one value.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "health_check": { Type: schema.TypeString, Required: true, DiffSuppressFunc: compareSelfLinkRelativePaths, + Description: `The health check resource that signals autohealing.`, }, "initial_delay_sec": { Type: schema.TypeInt, Required: true, ValidateFunc: validation.IntBetween(0, 3600), + Description: `The number of seconds that the managed instance group waits before it applies autohealing policies to new instances or recently recreated instances. 
Between 0 and 3600.`, }, }, }, }, "update_policy": { - Computed: true, - Type: schema.TypeList, - Optional: true, - MaxItems: 1, + Computed: true, + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Description: `The update policy for this managed instance group.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "minimal_action": { Type: schema.TypeString, Required: true, ValidateFunc: validation.StringInSlice([]string{"RESTART", "REPLACE"}, false), + Description: `Minimal action to be taken on an instance. You can specify either RESTART to restart existing instances or REPLACE to delete and create new instances from the target template. If you specify a RESTART, the Updater will attempt to perform that action only. However, if the Updater determines that the minimal action you specify is not enough to perform the update, it might perform a more disruptive action.`, }, "type": { Type: schema.TypeString, Required: true, ValidateFunc: validation.StringInSlice([]string{"OPPORTUNISTIC", "PROACTIVE"}, false), + Description: `The type of update process. You can specify either PROACTIVE so that the instance group manager proactively executes actions in order to bring instances to their target versions or OPPORTUNISTIC so that no action is proactively executed but the update will be performed as part of other actions (for example, resizes or recreateInstances calls).`, }, "max_surge_fixed": { @@ -207,6 +235,7 @@ func resourceComputeInstanceGroupManager() *schema.Resource { Optional: true, Computed: true, ConflictsWith: []string{"update_policy.0.max_surge_percent"}, + Description: `The maximum number of instances that can be created above the specified targetSize during the update process. Conflicts with max_surge_percent. If neither is set, defaults to 1.`, }, "max_surge_percent": { @@ -214,6 +243,7 @@ func resourceComputeInstanceGroupManager() *schema.Resource { Optional: true, ConflictsWith: []string{"update_policy.0.max_surge_fixed"}, ValidateFunc: validation.IntBetween(0, 100), + Description: `The maximum number of instances (calculated as percentage) that can be created above the specified targetSize during the update process. Conflicts with max_surge_fixed.`, }, "max_unavailable_fixed": { @@ -221,6 +251,7 @@ func resourceComputeInstanceGroupManager() *schema.Resource { Optional: true, Computed: true, ConflictsWith: []string{"update_policy.0.max_unavailable_percent"}, + Description: `The maximum number of instances that can be unavailable during the update process. Conflicts with max_unavailable_percent. If neither is set, defaults to 1.`, }, "max_unavailable_percent": { @@ -228,22 +259,49 @@ func resourceComputeInstanceGroupManager() *schema.Resource { Optional: true, ConflictsWith: []string{"update_policy.0.max_unavailable_fixed"}, ValidateFunc: validation.IntBetween(0, 100), + Description: `The maximum number of instances (calculated as percentage) that can be unavailable during the update process. Conflicts with max_unavailable_fixed.`, }, "min_ready_sec": { Type: schema.TypeInt, Optional: true, ValidateFunc: validation.IntBetween(0, 3600), + Description: `Minimum number of seconds to wait after a newly created instance becomes available. This value must be in the range [0, 3600].`, }, }, }, }, "wait_for_instances": { - Type: schema.TypeBool, - Optional: true, - Default: false, + Type: schema.TypeBool, + Optional: true, + Default: false, + Description: `Whether to wait for all instances to be created/updated before returning.
Note that if this is set to true and the operation does not succeed, Terraform will continue trying until it times out.`, + }, +<% unless version == 'ga' -%> + "stateful_disk": { + Type: schema.TypeSet, + Optional: true, + Description: `Disks created on the instances that will be preserved on instance delete, update, etc.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "device_name": { + Type: schema.TypeString, + Required: true, + Description: `The device name of the disk to be attached.`, + }, + + "delete_rule": { + Type: schema.TypeString, + Default: "NEVER", + Optional: true, + ValidateFunc: validation.StringInSlice([]string{"NEVER", "ON_PERMANENT_INSTANCE_DELETION"}, true), + Description: `A value that prescribes what should happen to the stateful disk when the VM instance is deleted. The available options are NEVER and ON_PERMANENT_INSTANCE_DELETION. NEVER will detach the disk when the VM is deleted, but will not delete the disk. ON_PERMANENT_INSTANCE_DELETION will delete the stateful disk when the VM is permanently deleted from the instance group. The default is NEVER.`, + }, + }, + }, }, +<% end -%> }, } } @@ -298,6 +356,9 @@ func resourceComputeInstanceGroupManagerCreate(d *schema.ResourceData, meta inte AutoHealingPolicies: expandAutoHealingPolicies(d.Get("auto_healing_policies").([]interface{})), Versions: expandVersions(d.Get("version").([]interface{})), UpdatePolicy: expandUpdatePolicy(d.Get("update_policy").([]interface{})), +<% unless version == 'ga' -%> + StatefulPolicy: expandStatefulPolicy(d.Get("stateful_disk").(*schema.Set).List()), +<% end -%> // Force send TargetSize to allow a value of 0. ForceSendFields: []string{"TargetSize"}, } @@ -318,8 +379,7 @@ func resourceComputeInstanceGroupManagerCreate(d *schema.ResourceData, meta inte d.SetId(id) // Wait for the operation to complete - timeoutInMinutes := int(d.Timeout(schema.TimeoutUpdate).Minutes()) - err = computeOperationWaitTime(config, op, project, "Creating InstanceGroupManager", timeoutInMinutes) + err = computeOperationWaitTime(config, op, project, "Creating InstanceGroupManager", d.Timeout(schema.TimeoutCreate)) if err != nil { return err } @@ -420,6 +480,11 @@ func resourceComputeInstanceGroupManagerRead(d *schema.ResourceData, meta interf if err = d.Set("named_port", flattenNamedPortsBeta(manager.NamedPorts)); err != nil { return fmt.Errorf("Error setting named_port in state: %s", err.Error()) } +<% unless version == 'ga' -%> + if err = d.Set("stateful_disk", flattenStatefulPolicy(manager.StatefulPolicy)); err != nil { + return fmt.Errorf("Error setting stateful_disk in state: %s", err.Error()) + } +<% end -%> d.Set("fingerprint", manager.Fingerprint) d.Set("instance_group", ConvertSelfLinkToV1(manager.InstanceGroup)) d.Set("self_link", ConvertSelfLinkToV1(manager.SelfLink)) @@ -470,6 +535,7 @@ func resourceComputeInstanceGroupManagerUpdate(d *schema.ResourceData, meta inte if d.HasChange("target_pools") { updatedManager.TargetPools = convertStringSet(d.Get("target_pools").(*schema.Set)) + updatedManager.ForceSendFields = append(updatedManager.ForceSendFields, "TargetPools") change = true } @@ -489,14 +555,20 @@ func resourceComputeInstanceGroupManagerUpdate(d *schema.ResourceData, meta inte change = true } +<% unless version == 'ga' -%> + if d.HasChange("stateful_disk") { + updatedManager.StatefulPolicy = expandStatefulPolicy(d.Get("stateful_disk").(*schema.Set).List()) + change = true + } + +<% end -%> if change { op, err := config.clientComputeBeta.InstanceGroupManagers.Patch(project, zone,
d.Get("name").(string), updatedManager).Do() if err != nil { return fmt.Errorf("Error updating managed group instances: %s", err) } - timeoutInMinutes := int(d.Timeout(schema.TimeoutUpdate).Minutes()) - err = computeOperationWaitTime(config, op, project, "Updating managed group instances", timeoutInMinutes) + err = computeOperationWaitTime(config, op, project, "Updating managed group instances", d.Timeout(schema.TimeoutUpdate)) if err != nil { return err } @@ -522,8 +594,7 @@ func resourceComputeInstanceGroupManagerUpdate(d *schema.ResourceData, meta inte } // Wait for the operation to complete: - timeoutInMinutes := int(d.Timeout(schema.TimeoutUpdate).Minutes()) - err = computeOperationWaitTime(config, op, project, "Updating InstanceGroupManager", timeoutInMinutes) + err = computeOperationWaitTime(config, op, project, "Updating InstanceGroupManager", d.Timeout(schema.TimeoutUpdate)) if err != nil { return err } @@ -543,8 +614,7 @@ func resourceComputeInstanceGroupManagerUpdate(d *schema.ResourceData, meta inte } // Wait for the operation to complete - timeoutInMinutes := int(d.Timeout(schema.TimeoutUpdate).Minutes()) - err = computeOperationWaitTime(config, op, project, "Updating InstanceGroupManager", timeoutInMinutes) + err = computeOperationWaitTime(config, op, project, "Updating InstanceGroupManager", d.Timeout(schema.TimeoutUpdate)) if err != nil { return err } @@ -582,8 +652,7 @@ func resourceComputeInstanceGroupManagerDelete(d *schema.ResourceData, meta inte currentSize := int64(d.Get("target_size").(int)) // Wait for the operation to complete - timeoutInMinutes := int(d.Timeout(schema.TimeoutDelete).Minutes()) - err = computeOperationWaitTime(config, op, project, "Deleting InstanceGroupManager", timeoutInMinutes) + err = computeOperationWaitTime(config, op, project, "Deleting InstanceGroupManager", d.Timeout(schema.TimeoutDelete)) for err != nil && currentSize > 0 { if !strings.Contains(err.Error(), "timeout") { @@ -604,8 +673,7 @@ func resourceComputeInstanceGroupManagerDelete(d *schema.ResourceData, meta inte log.Printf("[INFO] timeout occurred, but instance group is shrinking (%d < %d)", instanceGroupSize, currentSize) currentSize = instanceGroupSize - timeoutInMinutes := int(d.Timeout(schema.TimeoutDelete).Minutes()) - err = computeOperationWaitTime(config, op, project, "Deleting InstanceGroupManager", timeoutInMinutes) + err = computeOperationWaitTime(config, op, project, "Deleting InstanceGroupManager", d.Timeout(schema.TimeoutDelete)) } d.SetId("") @@ -626,6 +694,23 @@ func expandAutoHealingPolicies(configured []interface{}) []*computeBeta.Instance return autoHealingPolicies } +<% unless version == 'ga' -%> +func expandStatefulPolicy(configured []interface{}) *computeBeta.StatefulPolicy { + disks := make(map[string]computeBeta.StatefulPolicyPreservedStateDiskDevice) + for _, raw := range configured { + data := raw.(map[string]interface{}) + disk := computeBeta.StatefulPolicyPreservedStateDiskDevice{ + AutoDelete: data["delete_rule"].(string), + } + disks[data["device_name"].(string)] = disk + } + if len(disks) > 0 { + return &computeBeta.StatefulPolicy{PreservedState: &computeBeta.StatefulPolicyPreservedState{Disks: disks}} + } + return nil +} + +<% end -%> func expandVersions(configured []interface{}) []*computeBeta.InstanceGroupManagerVersion { versions := make([]*computeBeta.InstanceGroupManagerVersion, 0, len(configured)) for _, raw := range configured { @@ -716,6 +801,24 @@ func flattenAutoHealingPolicies(autoHealingPolicies []*computeBeta.InstanceGroup return 
autoHealingPoliciesSchema } +<% unless version == 'ga' -%> +func flattenStatefulPolicy(statefulPolicy *computeBeta.StatefulPolicy) []map[string]interface{} { + if statefulPolicy == nil || statefulPolicy.PreservedState == nil || statefulPolicy.PreservedState.Disks == nil { + return make([]map[string]interface{}, 0, 0) + } + result := make([]map[string]interface{}, 0, len(statefulPolicy.PreservedState.Disks)) + for deviceName, disk := range statefulPolicy.PreservedState.Disks { + data := map[string]interface{}{ + "device_name": deviceName, + "delete_rule": disk.AutoDelete, + } + + result = append(result, data) + } + return result +} + +<% end -%> func flattenUpdatePolicy(updatePolicy *computeBeta.InstanceGroupManagerUpdatePolicy) []map[string]interface{} { results := []map[string]interface{}{} if updatePolicy != nil { diff --git a/third_party/terraform/resources/resource_compute_instance_iam_test.go b/third_party/terraform/resources/resource_compute_instance_iam_test.go index 0922cd8ca545..272cf0c038d3 100644 --- a/third_party/terraform/resources/resource_compute_instance_iam_test.go +++ b/third_party/terraform/resources/resource_compute_instance_iam_test.go @@ -4,7 +4,6 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" ) @@ -16,9 +15,9 @@ func TestAccComputeInstanceIamPolicy(t *testing.T) { project := getTestProjectFromEnv() role := "roles/compute.osLogin" zone := getTestZoneFromEnv() - instanceName := fmt.Sprintf("tf-test-instance-%s", acctest.RandString(10)) + instanceName := fmt.Sprintf("tf-test-instance-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ diff --git a/third_party/terraform/resources/resource_compute_instance_template.go b/third_party/terraform/resources/resource_compute_instance_template.go.erb similarity index 68% rename from third_party/terraform/resources/resource_compute_instance_template.go rename to third_party/terraform/resources/resource_compute_instance_template.go.erb index 1d38e6c9c598..d3c36e319795 100644 --- a/third_party/terraform/resources/resource_compute_instance_template.go +++ b/third_party/terraform/resources/resource_compute_instance_template.go.erb @@ -1,9 +1,12 @@ +// <% autogen_exception -%> + package google import ( "fmt" "reflect" "strings" + "time" "github.com/hashicorp/errwrap" "github.com/hashicorp/terraform-plugin-sdk/helper/customdiff" @@ -20,6 +23,9 @@ var ( "scheduling.0.automatic_restart", "scheduling.0.preemptible", "scheduling.0.node_affinities", +<% unless version == 'ga' -%> + "scheduling.0.min_node_cpus", +<% end -%> } shieldedInstanceTemplateConfigKeys = []string{ @@ -45,6 +51,11 @@ func resourceComputeInstanceTemplate() *schema.Resource { ), MigrateState: resourceComputeInstanceTemplateMigrateState, + Timeouts: &schema.ResourceTimeout{ + Create: schema.DefaultTimeout(4 * time.Minute), + Delete: schema.DefaultTimeout(4 * time.Minute), + }, + // A compute instance template is more or less a subset of a compute // instance. Please attempt to maintain consistency with the // resource_compute_instance schema when updating this one. @@ -56,13 +67,15 @@ func resourceComputeInstanceTemplate() *schema.Resource { ForceNew: true, ConflictsWith: []string{"name_prefix"}, ValidateFunc: validateGCPName, + Description: `The name of the instance template. 
If you leave this blank, Terraform will auto-generate a unique name.`, }, "name_prefix": { - Type: schema.TypeString, - Optional: true, - Computed: true, - ForceNew: true, + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + Description: `Creates a unique name beginning with the specified prefix. Conflicts with name.`, ValidateFunc: func(v interface{}, k string) (ws []string, errors []error) { // https://cloud.google.com/compute/docs/reference/latest/instanceTemplates#resource // uuid is 26 characters, limit the prefix to 37. @@ -76,49 +89,56 @@ func resourceComputeInstanceTemplate() *schema.Resource { }, "disk": { - Type: schema.TypeList, - Required: true, - ForceNew: true, + Type: schema.TypeList, + Required: true, + ForceNew: true, + Description: `Disks to attach to instances created from this template. This can be specified multiple times for multiple disks.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "auto_delete": { - Type: schema.TypeBool, - Optional: true, - Default: true, - ForceNew: true, + Type: schema.TypeBool, + Optional: true, + Default: true, + ForceNew: true, + Description: `Whether or not the disk should be auto-deleted. This defaults to true.`, }, "boot": { - Type: schema.TypeBool, - Optional: true, - ForceNew: true, - Computed: true, + Type: schema.TypeBool, + Optional: true, + ForceNew: true, + Computed: true, + Description: `Indicates that this is a boot disk.`, }, "device_name": { - Type: schema.TypeString, - Optional: true, - Computed: true, - ForceNew: true, + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + Description: `A unique device name that is reflected into the /dev/ tree of a Linux operating system running within the instance. If not specified, the server chooses a default device name to apply to this disk.`, }, "disk_name": { - Type: schema.TypeString, - Optional: true, - ForceNew: true, + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Description: `Name of the disk. When not provided, this defaults to the name of the instance.`, }, "disk_size_gb": { - Type: schema.TypeInt, - Optional: true, - ForceNew: true, + Type: schema.TypeInt, + Optional: true, + ForceNew: true, + Description: `The size of the image in gigabytes. If not specified, it will inherit the size of its base image. For SCRATCH disks, the size must be exactly 375GB.`, }, "disk_type": { - Type: schema.TypeString, - Optional: true, - ForceNew: true, - Computed: true, + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Computed: true, + Description: `The GCE disk type. Can be either "pd-ssd", "local-ssd", or "pd-standard".`, }, "labels": { @@ -128,47 +148,54 @@ func resourceComputeInstanceTemplate() *schema.Resource { Elem: &schema.Schema{ Type: schema.TypeString, }, + Description: `A set of key/value label pairs to assign to disks.`, }, "source_image": { - Type: schema.TypeString, - Optional: true, - Computed: true, - ForceNew: true, + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + Description: `The image from which to initialize this disk. This can be one of: the image's self_link, projects/{project}/global/images/{image}, projects/{project}/global/images/family/{family}, global/images/{image}, global/images/family/{family}, family/{family}, {project}/{family}, {project}/{image}, {family}, or {image}.
~> Note: Either source or source_image is required when creating a new instance except for when creating a local SSD.`, }, "interface": { - Type: schema.TypeString, - Optional: true, - ForceNew: true, - Computed: true, + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Computed: true, + Description: `Specifies the disk interface to use for attaching this disk.`, }, "mode": { - Type: schema.TypeString, - Optional: true, - ForceNew: true, - Computed: true, + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Computed: true, + Description: `The mode in which to attach this disk, either READ_WRITE or READ_ONLY. If you are attaching or creating a boot disk, this must be READ_WRITE mode.`, }, "source": { - Type: schema.TypeString, - Optional: true, - ForceNew: true, + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Description: `The name (not self_link) of the disk (such as those managed by google_compute_disk) to attach. ~> Note: Either source or source_image is required when creating a new instance except for when creating a local SSD.`, }, "type": { - Type: schema.TypeString, - Optional: true, - ForceNew: true, - Computed: true, + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Computed: true, + Description: `The type of GCE disk. Can be either "SCRATCH" or "PERSISTENT".`, }, "disk_encryption_key": { - Type: schema.TypeList, - Optional: true, - ForceNew: true, - MaxItems: 1, + Type: schema.TypeList, + Optional: true, + ForceNew: true, + MaxItems: 1, + Description: `Encrypts or decrypts a disk using a customer-supplied encryption key.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "kms_key_self_link": { @@ -176,6 +203,7 @@ func resourceComputeInstanceTemplate() *schema.Resource { Required: true, ForceNew: true, DiffSuppressFunc: compareSelfLinkRelativePaths, + Description: `The self link of the encryption key that is stored in Google Cloud KMS.`, }, }, }, @@ -185,57 +213,66 @@ func resourceComputeInstanceTemplate() *schema.Resource { }, "machine_type": { - Type: schema.TypeString, - Required: true, - ForceNew: true, + Type: schema.TypeString, + Required: true, + ForceNew: true, + Description: `The machine type to create. To create a machine with a custom type (such as extended memory), format the value like custom-VCPUS-MEM_IN_MB, e.g. custom-6-20480 for 6 vCPU and 20GB of RAM.`, }, "can_ip_forward": { - Type: schema.TypeBool, - Optional: true, - Default: false, - ForceNew: true, + Type: schema.TypeBool, + Optional: true, + Default: false, + ForceNew: true, + Description: `Whether to allow sending and receiving of packets with non-matching source or destination IPs. This defaults to false.`, }, "description": { - Type: schema.TypeString, - Optional: true, - ForceNew: true, + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Description: `A brief description of this resource.`, }, "enable_display": { - Type: schema.TypeBool, - Optional: true, - ForceNew: true, + Type: schema.TypeBool, + Optional: true, + ForceNew: true, + Description: `Enable Virtual Displays on this instance.
Note: allow_stopping_for_update must be set to true in order to update this field.`, }, "instance_description": { - Type: schema.TypeString, - Optional: true, - ForceNew: true, + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Description: `A description of the instance.`, }, "metadata": { - Type: schema.TypeMap, - Optional: true, - ForceNew: true, + Type: schema.TypeMap, + Optional: true, + ForceNew: true, + Description: `Metadata key/value pairs to make available from within instances created from this template.`, }, "metadata_startup_script": { - Type: schema.TypeString, - Optional: true, - ForceNew: true, + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Description: `An alternative to using the startup-script metadata key, mostly to match the compute_instance resource. This replaces the startup-script metadata key on the created instance and thus the two mechanisms are not allowed to be used simultaneously.`, }, "metadata_fingerprint": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `The unique fingerprint of the metadata.`, }, "network_interface": { - Type: schema.TypeList, - Optional: true, - ForceNew: true, + Type: schema.TypeList, + Optional: true, + ForceNew: true, + Description: `Networks to attach to instances created from this template. This can be specified multiple times for multiple networks.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "network": { @@ -244,6 +281,7 @@ func resourceComputeInstanceTemplate() *schema.Resource { ForceNew: true, Computed: true, DiffSuppressFunc: compareSelfLinkOrResourceName, + Description: `The name or self_link of the network to attach this interface to. Use network attribute for Legacy or Auto subnetted networks and subnetwork for custom subnetted networks.`, }, "subnetwork": { @@ -252,57 +290,67 @@ func resourceComputeInstanceTemplate() *schema.Resource { ForceNew: true, Computed: true, DiffSuppressFunc: compareSelfLinkOrResourceName, + Description: `The name of the subnetwork to attach this interface to. The subnetwork must exist in the same region this instance will be created in. Either network or subnetwork must be provided.`, }, "subnetwork_project": { - Type: schema.TypeString, - Optional: true, - ForceNew: true, - Computed: true, + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Computed: true, + Description: `The ID of the project in which the subnetwork belongs. If it is not provided, the provider project is used.`, }, "network_ip": { - Type: schema.TypeString, - Optional: true, - ForceNew: true, + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Description: `The private IP address to assign to the instance. If empty, the address will be automatically assigned.`, }, "name": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `The name of the network_interface.`, }, "access_config": { - Type: schema.TypeList, - Optional: true, - ForceNew: true, + Type: schema.TypeList, + Optional: true, + ForceNew: true, + Description: `Access configurations, i.e. IPs via which this instance can be accessed via the Internet. Omit to ensure that the instance is not accessible from the Internet (this means that ssh provisioners will not work unless Terraform can send traffic to the instance's network, e.g. via tunnel or because it is running on another cloud instance on that network).
This block can be repeated multiple times.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "nat_ip": { - Type: schema.TypeString, - Optional: true, - ForceNew: true, - Computed: true, + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Computed: true, + Description: `The IP address that will be 1:1 mapped to the instance's network IP. If not given, one will be generated.`, }, "network_tier": { Type: schema.TypeString, Optional: true, Computed: true, + ForceNew: true, + Description: `The networking tier used for configuring this instance template. This field can take the following values: PREMIUM or STANDARD. If this field is not specified, it is assumed to be PREMIUM.`, ValidateFunc: validation.StringInSlice([]string{"PREMIUM", "STANDARD"}, false), }, // Possibly configurable- this was added so we don't break if it's inadvertently set "public_ptr_domain_name": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `The DNS domain name for the public PTR record.`, }, }, }, }, "alias_ip_range": { - Type: schema.TypeList, - Optional: true, - ForceNew: true, + Type: schema.TypeList, + Optional: true, + ForceNew: true, + Description: `An array of alias IP ranges for this network interface. Can only be specified for network interfaces on subnet-mode networks.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "ip_cidr_range": { @@ -310,11 +358,13 @@ func resourceComputeInstanceTemplate() *schema.Resource { Required: true, ForceNew: true, DiffSuppressFunc: ipCidrRangeDiffSuppress, + Description: `The IP CIDR range represented by this alias IP range. This IP CIDR range must belong to the specified subnetwork and cannot contain IP addresses reserved by system or used by other network interfaces. At the time of writing only a netmask (e.g. /24) may be supplied, with a CIDR format resulting in an API error.`, }, "subnetwork_range_name": { - Type: schema.TypeString, - Optional: true, - ForceNew: true, + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Description: `The subnetwork secondary range name specifying the secondary range from which to allocate the IP CIDR range for this alias IP range. If left unspecified, the primary range of the subnetwork will be used.`, }, }, }, @@ -324,25 +374,28 @@ }, }, "project": { - Type: schema.TypeString, - Optional: true, - ForceNew: true, - Computed: true, + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Computed: true, + Description: `The ID of the project in which the resource belongs. If it is not provided, the provider project is used.`, }, "region": { - Type: schema.TypeString, - Optional: true, - ForceNew: true, - Computed: true, + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Computed: true, + Description: `An instance template is a global resource that is not bound to a zone or a region. However, you can still specify some regional resources in an instance template, which restricts the template to the region where that resource resides. For example, a custom subnetwork resource is tied to a specific region.
Defaults to the region of the Provider if no value is given.`, }, "scheduling": { - Type: schema.TypeList, - Optional: true, - Computed: true, - ForceNew: true, - MaxItems: 1, + Type: schema.TypeList, + Optional: true, + Computed: true, + ForceNew: true, + MaxItems: 1, + Description: `The scheduling strategy to use.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "preemptible": { @@ -351,6 +404,7 @@ func resourceComputeInstanceTemplate() *schema.Resource { AtLeastOneOf: schedulingInstTemplateKeys, Default: false, ForceNew: true, + Description: `Allows instance to be preempted. This defaults to false.`, }, "automatic_restart": { @@ -359,6 +413,7 @@ func resourceComputeInstanceTemplate() *schema.Resource { AtLeastOneOf: schedulingInstTemplateKeys, Default: true, ForceNew: true, + Description: `Specifies whether the instance should be automatically restarted if it is terminated by Compute Engine (not terminated by a user). This defaults to true.`, }, "on_host_maintenance": { @@ -367,6 +422,7 @@ func resourceComputeInstanceTemplate() *schema.Resource { Computed: true, AtLeastOneOf: schedulingInstTemplateKeys, ForceNew: true, + Description: `Defines the maintenance behavior for this instance.`, }, "node_affinities": { @@ -376,34 +432,47 @@ func resourceComputeInstanceTemplate() *schema.Resource { ForceNew: true, Elem: instanceSchedulingNodeAffinitiesElemSchema(), DiffSuppressFunc: emptyOrDefaultStringSuppress(""), + Description: `Specifies node affinities or anti-affinities to determine which sole-tenant nodes your instances and managed instance groups will use as host systems.`, + }, +<% unless version == 'ga' -%> + "min_node_cpus": { + Type: schema.TypeInt, + Optional: true, + AtLeastOneOf: schedulingInstTemplateKeys, + Description: `Minimum number of cpus for the instance.`, }, +<% end -%> }, }, }, "self_link": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `The URI of the created resource.`, }, "service_account": { - Type: schema.TypeList, - MaxItems: 1, - Optional: true, - ForceNew: true, + Type: schema.TypeList, + MaxItems: 1, + Optional: true, + ForceNew: true, + Description: `Service account to attach to the instance.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "email": { - Type: schema.TypeString, - Optional: true, - Computed: true, - ForceNew: true, + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + Description: `The service account e-mail address. If not given, the default Google Compute Engine service account is used.`, }, "scopes": { - Type: schema.TypeSet, - Required: true, - ForceNew: true, + Type: schema.TypeSet, + Required: true, + ForceNew: true, + Description: `A list of service scopes. Both OAuth2 URLs and gcloud short names are supported. To allow full access to all Cloud APIs, use the cloud-platform scope.`, Elem: &schema.Schema{ Type: schema.TypeString, StateFunc: func(v interface{}) string { @@ -417,10 +486,11 @@ func resourceComputeInstanceTemplate() *schema.Resource { }, "shielded_instance_config": { - Type: schema.TypeList, - MaxItems: 1, - Optional: true, - ForceNew: true, + Type: schema.TypeList, + MaxItems: 1, + Optional: true, + ForceNew: true, + Description: `Enable Shielded VM on this instance. Shielded VM provides verifiable integrity to prevent against malware and rootkits. Defaults to disabled. 
Note: shielded_instance_config can only be used with boot images with shielded vm support.`, // Since this block is used by the API based on which // image being used, the field needs to be marked as Computed. Computed: true, @@ -433,6 +503,7 @@ func resourceComputeInstanceTemplate() *schema.Resource { AtLeastOneOf: shieldedInstanceTemplateConfigKeys, Default: false, ForceNew: true, + Description: `Verify the digital signature of all boot components, and halt the boot process if signature verification fails. Defaults to false.`, }, "enable_vtpm": { @@ -441,6 +512,7 @@ func resourceComputeInstanceTemplate() *schema.Resource { AtLeastOneOf: shieldedInstanceTemplateConfigKeys, Default: true, ForceNew: true, + Description: `Use a virtualized trusted platform module, which is a specialized computer chip you can use to encrypt objects like keys and certificates. Defaults to true.`, }, "enable_integrity_monitoring": { @@ -449,57 +521,65 @@ func resourceComputeInstanceTemplate() *schema.Resource { AtLeastOneOf: shieldedInstanceTemplateConfigKeys, Default: true, ForceNew: true, + Description: `Compare the most recent boot measurements to the integrity policy baseline and return a pair of pass/fail results depending on whether they match or not. Defaults to true.`, }, }, }, }, "guest_accelerator": { - Type: schema.TypeList, - Optional: true, - ForceNew: true, + Type: schema.TypeList, + Optional: true, + ForceNew: true, + Description: `List of the type and count of accelerator cards attached to the instance.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "count": { - Type: schema.TypeInt, - Required: true, - ForceNew: true, + Type: schema.TypeInt, + Required: true, + ForceNew: true, + Description: `The number of the guest accelerator cards exposed to this instance.`, }, "type": { Type: schema.TypeString, Required: true, ForceNew: true, DiffSuppressFunc: compareSelfLinkOrResourceName, + Description: `The accelerator type resource to expose to this instance. E.g. nvidia-tesla-k80.`, }, }, }, }, "min_cpu_platform": { - Type: schema.TypeString, - Optional: true, - ForceNew: true, + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Description: `Specifies a minimum CPU platform. 
Applicable values are the friendly names of CPU platforms, such as Intel Haswell or Intel Skylake.`, }, "tags": { - Type: schema.TypeSet, - Optional: true, - ForceNew: true, - Elem: &schema.Schema{Type: schema.TypeString}, - Set: schema.HashString, + Type: schema.TypeSet, + Optional: true, + ForceNew: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Set: schema.HashString, + Description: `Tags to attach to the instance.`, }, "tags_fingerprint": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `The unique fingerprint of the tags.`, }, "labels": { - Type: schema.TypeMap, - Optional: true, - ForceNew: true, - Elem: &schema.Schema{Type: schema.TypeString}, - Set: schema.HashString, + Type: schema.TypeMap, + Optional: true, + ForceNew: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Set: schema.HashString, + Description: `A set of key/value label pairs to assign to instances created from this template.`, }, }, } @@ -785,7 +865,7 @@ func resourceComputeInstanceTemplateCreate(d *schema.ResourceData, meta interfac // Store the ID now d.SetId(fmt.Sprintf("projects/%s/global/instanceTemplates/%s", project, instanceTemplate.Name)) - err = computeOperationWait(config, op, project, "Creating Instance Template") + err = computeOperationWaitTime(config, op, project, "Creating Instance Template", d.Timeout(schema.TimeoutCreate)) if err != nil { return err } @@ -1146,7 +1226,7 @@ func resourceComputeInstanceTemplateDelete(d *schema.ResourceData, meta interfac return fmt.Errorf("Error deleting instance template: %s", err) } - err = computeOperationWait(config, op, project, "Deleting Instance Template") + err = computeOperationWaitTime(config, op, project, "Deleting Instance Template", d.Timeout(schema.TimeoutDelete)) if err != nil { return err } diff --git a/third_party/terraform/resources/resource_compute_network_peering.go.erb b/third_party/terraform/resources/resource_compute_network_peering.go similarity index 73% rename from third_party/terraform/resources/resource_compute_network_peering.go.erb rename to third_party/terraform/resources/resource_compute_network_peering.go index acccdf4a8f1c..aad41b7b0227 100644 --- a/third_party/terraform/resources/resource_compute_network_peering.go.erb +++ b/third_party/terraform/resources/resource_compute_network_peering.go @@ -1,10 +1,11 @@ -<% autogen_exception -%> package google import ( "fmt" "log" "sort" + "strings" + "time" "github.com/hashicorp/terraform-plugin-sdk/helper/schema" "google.golang.org/api/compute/v1" @@ -22,12 +23,18 @@ func resourceComputeNetworkPeering() *schema.Resource { State: resourceComputeNetworkPeeringImport, }, + Timeouts: &schema.ResourceTimeout{ + Create: schema.DefaultTimeout(4 * time.Minute), + Delete: schema.DefaultTimeout(4 * time.Minute), + }, + Schema: map[string]*schema.Schema{ "name": { Type: schema.TypeString, Required: true, ForceNew: true, ValidateFunc: validateGCPName, + Description: `Name of the peering.`, }, "network": { @@ -36,6 +43,7 @@ ForceNew: true, ValidateFunc: validateRegexp(peerNetworkLinkRegex), DiffSuppressFunc: compareSelfLinkRelativePaths, + Description: `The primary network of the peering.`, }, "peer_network": { @@ -44,36 +52,54 @@ ForceNew: true, ValidateFunc: validateRegexp(peerNetworkLinkRegex), DiffSuppressFunc: compareSelfLinkRelativePaths, + Description: `The peer network in the peering.
The peer network may belong to a different project.`, }, "export_custom_routes": { + Type: schema.TypeBool, + ForceNew: true, + Optional: true, + Default: false, + Description: `Whether to export the custom routes to the peer network. Defaults to false.`, + }, + + "import_custom_routes": { + Type: schema.TypeBool, + ForceNew: true, + Optional: true, + Default: false, + Description: `Whether to import the custom routes from the peer network. Defaults to false.`, + }, + + "export_subnet_routes_with_public_ip": { Type: schema.TypeBool, ForceNew: true, Optional: true, - Default: false, + Default: true, }, - "import_custom_routes": { + "import_subnet_routes_with_public_ip": { Type: schema.TypeBool, ForceNew: true, Optional: true, - Default: false, }, "state": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `State for the peering, either ACTIVE or INACTIVE. The peering is ACTIVE when there's a matching configuration in the peer network.`, }, "state_details": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `Details about the current state of the peering.`, }, "auto_create_routes": { - Type: schema.TypeBool, - Optional: true, - Removed: "auto_create_routes has been removed because it's redundant and not user-configurable. It can safely be removed from your config", + Type: schema.TypeBool, + Optional: true, + Removed: "auto_create_routes has been removed because it's redundant and not user-configurable. It can safely be removed from your config", Computed: true, }, }, @@ -107,7 +133,7 @@ func resourceComputeNetworkPeeringCreate(d *schema.ResourceData, meta interface{ return fmt.Errorf("Error adding network peering: %s", err) } - err = computeOperationWait(config, addOp, networkFieldValue.Project, "Adding Network Peering") + err = computeOperationWaitTime(config, addOp, networkFieldValue.Project, "Adding Network Peering", d.Timeout(schema.TimeoutCreate)) if err != nil { return err } @@ -142,6 +168,8 @@ func resourceComputeNetworkPeeringRead(d *schema.ResourceData, meta interface{}) d.Set("name", peering.Name) d.Set("import_custom_routes", peering.ImportCustomRoutes) d.Set("export_custom_routes", peering.ExportCustomRoutes) + d.Set("import_subnet_routes_with_public_ip", peering.ImportSubnetRoutesWithPublicIp) + d.Set("export_subnet_routes_with_public_ip", peering.ExportSubnetRoutesWithPublicIp) d.Set("state", peering.State) d.Set("state_details", peering.StateDetails) @@ -182,7 +210,7 @@ func resourceComputeNetworkPeeringDelete(d *schema.ResourceData, meta interface{ return fmt.Errorf("Error removing peering `%s` from network `%s`: %s", name, networkFieldValue.Name, err) } } else { - err = computeOperationWait(config, removeOp, networkFieldValue.Project, "Removing Network Peering") + err = computeOperationWaitTime(config, removeOp, networkFieldValue.Project, "Removing Network Peering", d.Timeout(schema.TimeoutDelete)) if err != nil { return err } @@ -201,11 +229,14 @@ func findPeeringFromNetwork(network *compute.Network, peeringName string) *compu } func expandNetworkPeering(d *schema.ResourceData) *compute.NetworkPeering { return &compute.NetworkPeering{ - ExchangeSubnetRoutes: true, - Name: d.Get("name").(string), - Network: d.Get("peer_network").(string), - ExportCustomRoutes: d.Get("export_custom_routes").(bool), - ImportCustomRoutes: d.Get("import_custom_routes").(bool), +
ExportCustomRoutes: d.Get("export_custom_routes").(bool), + ImportCustomRoutes: d.Get("import_custom_routes").(bool), + ExportSubnetRoutesWithPublicIp: d.Get("export_subnet_routes_with_public_ip").(bool), + ImportSubnetRoutesWithPublicIp: d.Get("import_subnet_routes_with_public_ip").(bool), + ForceSendFields: []string{"ExportSubnetRoutesWithPublicIp"}, } } diff --git a/third_party/terraform/resources/resource_compute_project_default_network_tier.go b/third_party/terraform/resources/resource_compute_project_default_network_tier.go index 7110fe240572..97252ef67ab9 100644 --- a/third_party/terraform/resources/resource_compute_project_default_network_tier.go +++ b/third_party/terraform/resources/resource_compute_project_default_network_tier.go @@ -2,8 +2,10 @@ package google import ( "fmt" - "github.com/hashicorp/terraform-plugin-sdk/helper/validation" "log" + "time" + + "github.com/hashicorp/terraform-plugin-sdk/helper/validation" "github.com/hashicorp/terraform-plugin-sdk/helper/schema" "google.golang.org/api/compute/v1" @@ -19,20 +21,26 @@ func resourceComputeProjectDefaultNetworkTier() *schema.Resource { State: schema.ImportStatePassthrough, }, + Timeouts: &schema.ResourceTimeout{ + Create: schema.DefaultTimeout(4 * time.Minute), + }, + SchemaVersion: 0, Schema: map[string]*schema.Schema{ "network_tier": { Type: schema.TypeString, Required: true, + Description: `The default network tier to be configured for the project. This field can take the following values: PREMIUM or STANDARD.`, ValidateFunc: validation.StringInSlice([]string{"PREMIUM", "STANDARD"}, false), }, "project": { - Type: schema.TypeString, - Optional: true, - Computed: true, - ForceNew: true, + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + Description: `The ID of the project in which the resource belongs. 
If it is not provided, the provider project is used.`, }, }, } @@ -55,7 +63,7 @@ func resourceComputeProjectDefaultNetworkTierCreateOrUpdate(d *schema.ResourceDa } log.Printf("[DEBUG] SetDefaultNetworkTier: %d (%s)", op.Id, op.SelfLink) - err = computeOperationWait(config, op, projectID, "SetDefaultNetworkTier") + err = computeOperationWaitTime(config, op, projectID, "SetDefaultNetworkTier", d.Timeout(schema.TimeoutCreate)) if err != nil { return fmt.Errorf("SetDefaultNetworkTier failed: %s", err) } diff --git a/third_party/terraform/resources/resource_compute_project_metadata.go b/third_party/terraform/resources/resource_compute_project_metadata.go index 7dbb65b58e77..94969c3787c7 100644 --- a/third_party/terraform/resources/resource_compute_project_metadata.go +++ b/third_party/terraform/resources/resource_compute_project_metadata.go @@ -3,6 +3,7 @@ package google import ( "fmt" "log" + "time" "github.com/hashicorp/terraform-plugin-sdk/helper/schema" "google.golang.org/api/compute/v1" @@ -18,20 +19,27 @@ func resourceComputeProjectMetadata() *schema.Resource { State: schema.ImportStatePassthrough, }, + Timeouts: &schema.ResourceTimeout{ + Create: schema.DefaultTimeout(4 * time.Minute), + Delete: schema.DefaultTimeout(4 * time.Minute), + }, + SchemaVersion: 0, Schema: map[string]*schema.Schema{ "metadata": { - Type: schema.TypeMap, - Required: true, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeMap, + Required: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Description: `A series of key value pairs.`, }, "project": { - Type: schema.TypeString, - Optional: true, - Computed: true, - ForceNew: true, + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + Description: `The ID of the project in which the resource belongs. 
If it is not provided, the provider project is used.`, }, }, } @@ -49,7 +57,7 @@ func resourceComputeProjectMetadataCreateOrUpdate(d *schema.ResourceData, meta i Items: expandComputeMetadata(d.Get("metadata").(map[string]interface{})), } - err = resourceComputeProjectMetadataSet(projectID, config, md) + err = resourceComputeProjectMetadataSet(projectID, config, md, d.Timeout(schema.TimeoutCreate)) if err != nil { return fmt.Errorf("SetCommonInstanceMetadata failed: %s", err) } @@ -97,7 +105,7 @@ func resourceComputeProjectMetadataDelete(d *schema.ResourceData, meta interface } md := &compute.Metadata{} - err = resourceComputeProjectMetadataSet(projectID, config, md) + err = resourceComputeProjectMetadataSet(projectID, config, md, d.Timeout(schema.TimeoutDelete)) if err != nil { return fmt.Errorf("SetCommonInstanceMetadata failed: %s", err) } @@ -105,7 +113,7 @@ func resourceComputeProjectMetadataDelete(d *schema.ResourceData, meta interface return resourceComputeProjectMetadataRead(d, meta) } -func resourceComputeProjectMetadataSet(projectID string, config *Config, md *compute.Metadata) error { +func resourceComputeProjectMetadataSet(projectID string, config *Config, md *compute.Metadata, timeout time.Duration) error { createMD := func() error { log.Printf("[DEBUG] Loading project service: %s", projectID) project, err := config.clientCompute.Projects.Get(projectID).Do() @@ -120,7 +128,7 @@ func resourceComputeProjectMetadataSet(projectID string, config *Config, md *com } log.Printf("[DEBUG] SetCommonMetadata: %d (%s)", op.Id, op.SelfLink) - return computeOperationWait(config, op, project.Name, "SetCommonMetadata") + return computeOperationWaitTime(config, op, project.Name, "SetCommonMetadata", timeout) } err := MetadataRetryWrapper(createMD) diff --git a/third_party/terraform/resources/resource_compute_project_metadata_item.go b/third_party/terraform/resources/resource_compute_project_metadata_item.go index d18fee581d71..7564871a8f43 100644 --- a/third_party/terraform/resources/resource_compute_project_metadata_item.go +++ b/third_party/terraform/resources/resource_compute_project_metadata_item.go @@ -28,19 +28,22 @@ func resourceComputeProjectMetadataItem() *schema.Resource { Schema: map[string]*schema.Schema{ "key": { - Type: schema.TypeString, - Required: true, - ForceNew: true, + Type: schema.TypeString, + Required: true, + ForceNew: true, + Description: `The metadata key to set.`, }, "value": { - Type: schema.TypeString, - Required: true, + Type: schema.TypeString, + Required: true, + Description: `The value to set for the given metadata key.`, }, "project": { - Type: schema.TypeString, - Optional: true, - Computed: true, - ForceNew: true, + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + Description: `The ID of the project in which the resource belongs. 
If it is not provided, the provider project is used.`, }, }, @@ -63,7 +66,7 @@ func resourceComputeProjectMetadataItemCreate(d *schema.ResourceData, meta inter key := d.Get("key").(string) val := d.Get("value").(string) - err = updateComputeCommonInstanceMetadata(config, projectID, key, &val, int(d.Timeout(schema.TimeoutCreate).Minutes()), failIfPresent) + err = updateComputeCommonInstanceMetadata(config, projectID, key, &val, d.Timeout(schema.TimeoutCreate), failIfPresent) if err != nil { return err } @@ -115,7 +118,7 @@ func resourceComputeProjectMetadataItemUpdate(d *schema.ResourceData, meta inter _, n := d.GetChange("value") new := n.(string) - err = updateComputeCommonInstanceMetadata(config, projectID, key, &new, int(d.Timeout(schema.TimeoutUpdate).Minutes()), overwritePresent) + err = updateComputeCommonInstanceMetadata(config, projectID, key, &new, d.Timeout(schema.TimeoutUpdate), overwritePresent) if err != nil { return err } @@ -133,7 +136,7 @@ func resourceComputeProjectMetadataItemDelete(d *schema.ResourceData, meta inter key := d.Get("key").(string) - err = updateComputeCommonInstanceMetadata(config, projectID, key, nil, int(d.Timeout(schema.TimeoutDelete).Minutes()), overwritePresent) + err = updateComputeCommonInstanceMetadata(config, projectID, key, nil, d.Timeout(schema.TimeoutDelete), overwritePresent) if err != nil { return err } @@ -142,7 +145,7 @@ func resourceComputeProjectMetadataItemDelete(d *schema.ResourceData, meta inter return nil } -func updateComputeCommonInstanceMetadata(config *Config, projectID string, key string, afterVal *string, timeout int, failIfPresent metadataPresentBehavior) error { +func updateComputeCommonInstanceMetadata(config *Config, projectID string, key string, afterVal *string, timeout time.Duration, failIfPresent metadataPresentBehavior) error { updateMD := func() error { log.Printf("[DEBUG] Loading project metadata: %s", projectID) project, err := config.clientCompute.Projects.Get(projectID).Do() diff --git a/third_party/terraform/resources/resource_compute_region_instance_group_manager.go b/third_party/terraform/resources/resource_compute_region_instance_group_manager.go.erb similarity index 64% rename from third_party/terraform/resources/resource_compute_region_instance_group_manager.go rename to third_party/terraform/resources/resource_compute_region_instance_group_manager.go.erb index 59c9a271eb1a..d0bb0a5b4187 100644 --- a/third_party/terraform/resources/resource_compute_region_instance_group_manager.go +++ b/third_party/terraform/resources/resource_compute_region_instance_group_manager.go.erb @@ -1,3 +1,5 @@ +// <% autogen_exception -%> + package google import ( @@ -31,48 +33,56 @@ func resourceComputeRegionInstanceGroupManager() *schema.Resource { Schema: map[string]*schema.Schema{ "base_instance_name": { - Type: schema.TypeString, - Required: true, - ForceNew: true, + Type: schema.TypeString, + Required: true, + ForceNew: true, + Description: `The base instance name to use for instances in this group. The value must be a valid RFC1035 name. Supported characters are lowercase letters, numbers, and hyphens (-). 
Instances are named by appending a hyphen and a random four-character string to the base instance name.`, }, "instance_template": { - Type: schema.TypeString, - Computed: true, - Removed: "This field has been replaced by `version.instance_template` in 3.0.0", + Type: schema.TypeString, + Computed: true, + Removed: "This field has been replaced by `version.instance_template` in 3.0.0", + Description: `The full URL to an instance template from which all new instances of this version will be created.`, }, "version": { - Type: schema.TypeList, - Required: true, + Type: schema.TypeList, + Required: true, + Description: `Application versions managed by this instance group. Each version deals with a specific instance template, allowing canary release scenarios.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "name": { - Type: schema.TypeString, - Optional: true, + Type: schema.TypeString, + Optional: true, + Description: `Version name.`, }, "instance_template": { Type: schema.TypeString, Required: true, DiffSuppressFunc: compareSelfLinkRelativePaths, + Description: `The full URL to an instance template from which all new instances of this version will be created.`, }, "target_size": { - Type: schema.TypeList, - Optional: true, - MaxItems: 1, + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Description: `The number of instances calculated as a fixed number or a percentage depending on the settings.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "fixed": { - Type: schema.TypeInt, - Optional: true, + Type: schema.TypeInt, + Optional: true, + Description: `The number of instances which are managed for this version. Conflicts with percent.`, }, "percent": { Type: schema.TypeInt, Optional: true, ValidateFunc: validation.IntBetween(0, 100), + Description: `The number of instances (calculated as percentage) which are managed for this version. Conflicts with fixed. Note that when using percent, rounding will be in favor of explicitly set target_size values; a managed instance group with 2 instances and 2 versions, one of which has a target_size.percent of 60 will create 2 instances of that version.`, }, }, }, @@ -82,61 +92,71 @@ func resourceComputeRegionInstanceGroupManager() *schema.Resource { }, "name": { - Type: schema.TypeString, - Required: true, - ForceNew: true, + Type: schema.TypeString, + Required: true, + ForceNew: true, + Description: `The name of the instance group manager. Must be 1-63 characters long and comply with RFC1035. 
Supported characters include lowercase letters, numbers, and hyphens.`, }, "region": { - Type: schema.TypeString, - Required: true, - ForceNew: true, + Type: schema.TypeString, + Required: true, + ForceNew: true, + Description: `The region where the managed instance group resides.`, }, "description": { - Type: schema.TypeString, - Optional: true, - ForceNew: true, + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Description: `An optional textual description of the instance group manager.`, }, "fingerprint": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `The fingerprint of the instance group manager.`, }, "instance_group": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `The full URL of the instance group created by the manager.`, }, "named_port": { - Type: schema.TypeSet, - Optional: true, + Type: schema.TypeSet, + Optional: true, + Description: `The named port configuration.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "name": { - Type: schema.TypeString, - Required: true, + Type: schema.TypeString, + Required: true, + Description: `The name of the port.`, }, "port": { - Type: schema.TypeInt, - Required: true, + Type: schema.TypeInt, + Required: true, + Description: `The port number.`, }, }, }, }, "project": { - Type: schema.TypeString, - Optional: true, - ForceNew: true, - Computed: true, + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Computed: true, + Description: `The ID of the project in which the resource belongs. If it is not provided, the provider project is used.`, }, "self_link": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `The URL of the created resource.`, }, "update_strategy": { @@ -152,50 +172,57 @@ func resourceComputeRegionInstanceGroupManager() *schema.Resource { Elem: &schema.Schema{ Type: schema.TypeString, }, - Set: selfLinkRelativePathHash, + Set: selfLinkRelativePathHash, + Description: `The full URL of all target pools to which new instances in the group are added. Updating the target pools attribute does not affect existing instances.`, }, "target_size": { - Type: schema.TypeInt, - Computed: true, - Optional: true, + Type: schema.TypeInt, + Computed: true, + Optional: true, + Description: `The target number of running instances for this managed instance group. This value should always be explicitly set unless this resource is attached to an autoscaler, in which case it should never be set. Defaults to 0.`, }, // If true, the resource will report ready only after no instances are being created. // This will not block future reads if instances are being recreated, and it respects // the "createNoRetry" parameter that's available for this resource. "wait_for_instances": { - Type: schema.TypeBool, - Optional: true, - Default: false, + Type: schema.TypeBool, + Optional: true, + Default: false, + Description: `Whether to wait for all instances to be created/updated before returning. Note that if this is set to true and the operation does not succeed, Terraform will continue trying until it times out.`, }, "auto_healing_policies": { - Type: schema.TypeList, - Optional: true, - MaxItems: 1, + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Description: `The autohealing policies for this managed instance group. 
You can specify only one value.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "health_check": { Type: schema.TypeString, Required: true, DiffSuppressFunc: compareSelfLinkRelativePaths, + Description: `The health check resource that signals autohealing.`, }, "initial_delay_sec": { Type: schema.TypeInt, Required: true, ValidateFunc: validation.IntBetween(0, 3600), + Description: `The number of seconds that the managed instance group waits before it applies autohealing policies to new instances or recently recreated instances. Between 0 and 3600.`, }, }, }, }, "distribution_policy_zones": { - Type: schema.TypeSet, - Optional: true, - ForceNew: true, - Computed: true, - Set: hashZoneFromSelfLinkOrResourceName, + Type: schema.TypeSet, + Optional: true, + ForceNew: true, + Computed: true, + Description: `The distribution policy for this managed instance group. You can specify one or more values.`, + Set: hashZoneFromSelfLinkOrResourceName, Elem: &schema.Schema{ Type: schema.TypeString, DiffSuppressFunc: compareSelfLinkOrResourceName, @@ -203,22 +230,25 @@ func resourceComputeRegionInstanceGroupManager() *schema.Resource { }, "update_policy": { - Type: schema.TypeList, - Computed: true, - Optional: true, - MaxItems: 1, + Type: schema.TypeList, + Computed: true, + Optional: true, + MaxItems: 1, + Description: `The update policy for this managed instance group.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "minimal_action": { Type: schema.TypeString, Required: true, ValidateFunc: validation.StringInSlice([]string{"RESTART", "REPLACE"}, false), + Description: `Minimal action to be taken on an instance. You can specify either RESTART to restart existing instances or REPLACE to delete and create new instances from the target template. If you specify a RESTART, the Updater will attempt to perform that action only. However, if the Updater determines that the minimal action you specify is not enough to perform the update, it might perform a more disruptive action.`, }, "type": { Type: schema.TypeString, Required: true, ValidateFunc: validation.StringInSlice([]string{"OPPORTUNISTIC", "PROACTIVE"}, false), + Description: `The type of update process. You can specify either PROACTIVE so that the instance group manager proactively executes actions in order to bring instances to their target versions or OPPORTUNISTIC so that no action is proactively executed but the update will be performed as part of other actions (for example, resizes or recreateInstances calls).`, }, "max_surge_fixed": { @@ -226,12 +256,14 @@ Optional: true, Computed: true, ConflictsWith: []string{"update_policy.0.max_surge_percent"}, + Description: `The maximum number of instances that can be created above the specified targetSize during the update process. Conflicts with max_surge_percent. It has to be either 0 or at least equal to the number of zones. If fixed values are used, at least one of max_unavailable_fixed or max_surge_fixed must be greater than 0.`, }, "max_surge_percent": { Type: schema.TypeInt, Optional: true, ConflictsWith: []string{"update_policy.0.max_surge_fixed"}, + Description: `The maximum number of instances (calculated as a percentage) that can be created above the specified targetSize during the update process. Conflicts with max_surge_fixed.
Percent value is only allowed for regional managed instance groups with size at least 10.`, ValidateFunc: validation.IntBetween(0, 100), }, @@ -239,6 +271,7 @@ Type: schema.TypeInt, Optional: true, Computed: true, + Description: `The maximum number of instances that can be unavailable during the update process. Conflicts with max_unavailable_percent. It has to be either 0 or at least equal to the number of zones. If fixed values are used, at least one of max_unavailable_fixed or max_surge_fixed must be greater than 0.`, ConflictsWith: []string{"update_policy.0.max_unavailable_percent"}, }, @@ -247,22 +280,50 @@ Optional: true, ConflictsWith: []string{"update_policy.0.max_unavailable_fixed"}, ValidateFunc: validation.IntBetween(0, 100), + Description: `The maximum number of instances (calculated as a percentage) that can be unavailable during the update process. Conflicts with max_unavailable_fixed. Percent value is only allowed for regional managed instance groups with size at least 10.`, }, "min_ready_sec": { Type: schema.TypeInt, Optional: true, ValidateFunc: validation.IntBetween(0, 3600), + Description: `Minimum number of seconds to wait after a newly created instance becomes available. This value must be from range [0, 3600].`, }, "instance_redistribution_type": { Type: schema.TypeString, Optional: true, ValidateFunc: validation.StringInSlice([]string{"PROACTIVE", "NONE", ""}, false), DiffSuppressFunc: emptyOrDefaultStringSuppress("PROACTIVE"), + Description: `The instance redistribution policy for regional managed instance groups. Valid values are: "PROACTIVE", "NONE". If PROACTIVE (default), the group attempts to maintain an even distribution of VM instances across zones in the region. If NONE, proactive redistribution is disabled.`, }, }, }, }, +<% unless version == 'ga' -%> + + "stateful_disk": { + Type: schema.TypeSet, + Optional: true, + Description: `Disks created on the instances that will be preserved on instance delete, update, etc. Structure is documented below. For more information see the official documentation. Proactive cross zone instance redistribution must be disabled before you can update stateful disks on existing instance group managers. This can be controlled via the update_policy.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "device_name": { + Type: schema.TypeString, + Required: true, + Description: `The device name of the disk to be attached.`, + }, + + "delete_rule": { + Type: schema.TypeString, + Default: "NEVER", + Optional: true, + Description: `A value that prescribes what should happen to the stateful disk when the VM instance is deleted. The available options are NEVER and ON_PERMANENT_INSTANCE_DELETION. NEVER will detach the disk when the VM is deleted, but will not delete the disk. ON_PERMANENT_INSTANCE_DELETION will delete the stateful disk when the VM is permanently deleted from the instance group.
@@ -291,6 +352,9 @@ func resourceComputeRegionInstanceGroupManagerCreate(d *schema.ResourceData, met Versions: expandVersions(d.Get("version").([]interface{})), UpdatePolicy: expandRegionUpdatePolicy(d.Get("update_policy").([]interface{})), DistributionPolicy: expandDistributionPolicy(d.Get("distribution_policy_zones").(*schema.Set)), +<% unless version == 'ga' -%> + StatefulPolicy: expandStatefulPolicy(d.Get("stateful_disk").(*schema.Set).List()), +<% end -%> // Force send TargetSize to allow size of 0. ForceSendFields: []string{"TargetSize"}, } @@ -308,8 +372,7 @@ func resourceComputeRegionInstanceGroupManagerCreate(d *schema.ResourceData, met d.SetId(id) // Wait for the operation to complete - timeoutInMinutes := int(d.Timeout(schema.TimeoutCreate).Minutes()) - err = computeOperationWaitTime(config, op, project, "Creating InstanceGroupManager", timeoutInMinutes) + err = computeOperationWaitTime(config, op, project, "Creating InstanceGroupManager", d.Timeout(schema.TimeoutCreate)) if err != nil { return err } @@ -400,7 +463,12 @@ func resourceComputeRegionInstanceGroupManagerRead(d *schema.ResourceData, meta if err := d.Set("update_policy", flattenRegionUpdatePolicy(manager.UpdatePolicy)); err != nil { return fmt.Errorf("Error setting update_policy in state: %s", err.Error()) } +<% unless version == 'ga' -%> + if err = d.Set("stateful_disk", flattenStatefulPolicy(manager.StatefulPolicy)); err != nil { + return fmt.Errorf("Error setting stateful_disk in state: %s", err.Error()) + } +<% end -%> if d.Get("wait_for_instances").(bool) { conf := resource.StateChangeConf{ Pending: []string{"creating", "error"}, @@ -437,6 +505,7 @@ func resourceComputeRegionInstanceGroupManagerUpdate(d *schema.ResourceData, met if d.HasChange("target_pools") { updatedManager.TargetPools = convertStringSet(d.Get("target_pools").(*schema.Set)) + updatedManager.ForceSendFields = append(updatedManager.ForceSendFields, "TargetPools") change = true } @@ -456,14 +525,20 @@ func resourceComputeRegionInstanceGroupManagerUpdate(d *schema.ResourceData, met change = true } +<% unless version == 'ga' -%> + if d.HasChange("stateful_disk") { + updatedManager.StatefulPolicy = expandStatefulPolicy(d.Get("stateful_disk").(*schema.Set).List()) + change = true + } + +<% end -%> if change { op, err := config.clientComputeBeta.RegionInstanceGroupManagers.Patch(project, region, d.Get("name").(string), updatedManager).Do() if err != nil { return fmt.Errorf("Error updating region managed group instances: %s", err) } - timeoutInMinutes := int(d.Timeout(schema.TimeoutUpdate).Minutes()) - err = computeOperationWaitTime(config, op, project, "Updating region managed group instances", timeoutInMinutes) + err = computeOperationWaitTime(config, op, project, "Updating region managed group instances", d.Timeout(schema.TimeoutUpdate)) if err != nil { return err } @@ -485,8 +560,7 @@ func resourceComputeRegionInstanceGroupManagerUpdate(d *schema.ResourceData, met return fmt.Errorf("Error updating RegionInstanceGroupManager: %s", err) } - timeoutInMinutes := int(d.Timeout(schema.TimeoutUpdate).Minutes()) - err = computeOperationWaitTime(config, op, project, "Updating RegionInstanceGroupManager", timeoutInMinutes) + err = computeOperationWaitTime(config, op, project, "Updating RegionInstanceGroupManager", d.Timeout(schema.TimeoutUpdate)) if err != nil { return err } @@ -504,8 +578,7 @@
func resourceComputeRegionInstanceGroupManagerUpdate(d *schema.ResourceData, met return fmt.Errorf("Error resizing RegionInstanceGroupManager: %s", err) } - timeoutInMinutes := int(d.Timeout(schema.TimeoutUpdate).Minutes()) - err = computeOperationWaitTime(config, op, project, "Resizing RegionInstanceGroupManager", timeoutInMinutes) + err = computeOperationWaitTime(config, op, project, "Resizing RegionInstanceGroupManager", d.Timeout(schema.TimeoutUpdate)) if err != nil { return err } @@ -539,8 +612,7 @@ func resourceComputeRegionInstanceGroupManagerDelete(d *schema.ResourceData, met } // Wait for the operation to complete - timeoutInMinutes := int(d.Timeout(schema.TimeoutDelete).Minutes()) - err = computeOperationWaitTime(config, op, project, "Deleting RegionInstanceGroupManager", timeoutInMinutes) + err = computeOperationWaitTime(config, op, project, "Deleting RegionInstanceGroupManager", d.Timeout(schema.TimeoutDelete)) if err != nil { return fmt.Errorf("Error waiting for delete to complete: %s", err) } diff --git a/third_party/terraform/resources/resource_compute_router_interface.go b/third_party/terraform/resources/resource_compute_router_interface.go index 5b2ed866d1e4..ce154987eea1 100644 --- a/third_party/terraform/resources/resource_compute_router_interface.go +++ b/third_party/terraform/resources/resource_compute_router_interface.go @@ -3,6 +3,7 @@ package google import ( "fmt" "log" + "time" "strings" @@ -20,16 +21,23 @@ func resourceComputeRouterInterface() *schema.Resource { State: resourceComputeRouterInterfaceImportState, }, + Timeouts: &schema.ResourceTimeout{ + Create: schema.DefaultTimeout(4 * time.Minute), + Delete: schema.DefaultTimeout(4 * time.Minute), + }, + Schema: map[string]*schema.Schema{ "name": { - Type: schema.TypeString, - Required: true, - ForceNew: true, + Type: schema.TypeString, + Required: true, + ForceNew: true, + Description: `A unique name for the interface, required by GCE. Changing this forces a new interface to be created.`, }, "router": { - Type: schema.TypeString, - Required: true, - ForceNew: true, + Type: schema.TypeString, + Required: true, + ForceNew: true, + Description: `The name of the router this interface will be attached to. Changing this forces a new interface to be created.`, }, "vpn_tunnel": { Type: schema.TypeString, @@ -38,6 +46,7 @@ func resourceComputeRouterInterface() *schema.Resource { ForceNew: true, DiffSuppressFunc: compareSelfLinkOrResourceName, AtLeastOneOf: []string{"vpn_tunnel", "interconnect_attachment", "ip_range"}, + Description: `The name or resource link to the VPN tunnel this interface will be linked to. Changing this forces a new interface to be created. Only one of vpn_tunnel and interconnect_attachment can be specified.`, }, "interconnect_attachment": { Type: schema.TypeString, @@ -46,25 +55,29 @@ func resourceComputeRouterInterface() *schema.Resource { ForceNew: true, DiffSuppressFunc: compareSelfLinkOrResourceName, AtLeastOneOf: []string{"vpn_tunnel", "interconnect_attachment", "ip_range"}, + Description: `The name or resource link to the VLAN interconnect for this interface. Changing this forces a new interface to be created. Only one of vpn_tunnel and interconnect_attachment can be specified.`, }, "ip_range": { Type: schema.TypeString, Optional: true, ForceNew: true, AtLeastOneOf: []string{"vpn_tunnel", "interconnect_attachment", "ip_range"}, + Description: `IP address and range of the interface. The IP range must be in the RFC3927 link-local IP space. 
Changing this forces a new interface to be created.`, }, "project": { - Type: schema.TypeString, - Optional: true, - Computed: true, - ForceNew: true, + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + Description: `The ID of the project in which this interface's router belongs. If it is not provided, the provider project is used. Changing this forces a new interface to be created.`, }, "region": { - Type: schema.TypeString, - Optional: true, - Computed: true, - ForceNew: true, + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + Description: `The region this interface's router sits in. If not specified, the project region will be used. Changing this forces a new interface to be created.`, }, }, } @@ -146,7 +159,7 @@ func resourceComputeRouterInterfaceCreate(d *schema.ResourceData, meta interface return fmt.Errorf("Error patching router %s/%s: %s", region, routerName, err) } d.SetId(fmt.Sprintf("%s/%s/%s", region, routerName, ifaceName)) - err = computeOperationWait(config, op, project, "Patching router") + err = computeOperationWaitTime(config, op, project, "Patching router", d.Timeout(schema.TimeoutCreate)) if err != nil { d.SetId("") return fmt.Errorf("Error waiting to patch router %s/%s: %s", region, routerName, err) @@ -271,7 +284,7 @@ func resourceComputeRouterInterfaceDelete(d *schema.ResourceData, meta interface return fmt.Errorf("Error patching router %s/%s: %s", region, routerName, err) } - err = computeOperationWait(config, op, project, "Patching router") + err = computeOperationWaitTime(config, op, project, "Patching router", d.Timeout(schema.TimeoutDelete)) if err != nil { return fmt.Errorf("Error waiting to patch router %s/%s: %s", region, routerName, err) }
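The two wait calls above follow a pattern repeated throughout this PR: each resource gains a Timeouts block, and the old computeOperationWait call (or the minute-truncating int(d.Timeout(...).Minutes()) conversion) is replaced by passing the configured time.Duration straight through. A condensed before/after, with computeOperationWaitTime's new signature inferred from the call sites in this diff:

```go
// Condensed sketch of the operation-wait migration; the helper's signature
// is inferred from call sites in this PR, not defined in this diff.
func waitForPatchSketch(config *Config, op *compute.Operation, project string, d *schema.ResourceData) error {
	// Old pattern (removed): truncate the timeout to whole minutes.
	//   timeoutInMinutes := int(d.Timeout(schema.TimeoutCreate).Minutes())
	//   return computeOperationWaitTime(config, op, project, "Patching router", timeoutInMinutes)

	// New pattern: hand the resource's configured time.Duration straight through.
	return computeOperationWaitTime(config, op, project, "Patching router", d.Timeout(schema.TimeoutCreate))
}
```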
diff --git a/third_party/terraform/resources/resource_compute_security_policy.go.erb b/third_party/terraform/resources/resource_compute_security_policy.go.erb index 7a3afc557118..b552b537f6e9 100644 --- a/third_party/terraform/resources/resource_compute_security_policy.go.erb +++ b/third_party/terraform/resources/resource_compute_security_policy.go.erb @@ -36,18 +36,21 @@ func resourceComputeSecurityPolicy() *schema.Resource { Required: true, ForceNew: true, ValidateFunc: validateGCPName, + Description: `The name of the security policy.`, }, "description": { - Type: schema.TypeString, - Optional: true, + Type: schema.TypeString, + Optional: true, + Description: `An optional description of this security policy. Max size is 2048.`, }, "project": { - Type: schema.TypeString, - Optional: true, - Computed: true, - ForceNew: true, + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + Description: `The project in which the resource belongs. If it is not provided, the provider project is used.`, }, "rule": { @@ -60,11 +63,13 @@ func resourceComputeSecurityPolicy() *schema.Resource { Type: schema.TypeString, Required: true, ValidateFunc: validation.StringInSlice([]string{"allow", "deny(403)", "deny(404)", "deny(502)"}, false), + Description: `Action to take when match matches the request. Valid values: "allow" : allow access to target, "deny(status)" : deny access to target, returns the HTTP response code specified (valid values are 403, 404 and 502)`, }, "priority": { - Type: schema.TypeInt, - Required: true, + Type: schema.TypeInt, + Required: true, + Description: `A unique positive integer indicating the priority of evaluation for a rule. Rules are evaluated from highest priority (lowest numerically) to lowest priority (highest numerically) in order.`, }, "match": { @@ -80,20 +85,23 @@ func resourceComputeSecurityPolicy() *schema.Resource { Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "src_ip_ranges": { - Type: schema.TypeSet, - Required: true, - MinItems: 1, - MaxItems: 5, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeSet, + Required: true, + MinItems: 1, + MaxItems: 10, + Elem: &schema.Schema{Type: schema.TypeString}, + Description: `Set of IP addresses or ranges (IPV4 or IPV6) in CIDR notation to match against inbound traffic. There is a limit of 10 IP ranges per rule. A value of '*' matches all IPs (can be used to override the default behavior).`, }, }, }, + Description: `The configuration options available when specifying versioned_expr. This field must be specified if versioned_expr is specified and cannot be specified if versioned_expr is not specified.`, }, "versioned_expr": { Type: schema.TypeString, Optional: true, ValidateFunc: validation.StringInSlice([]string{"SRC_IPS_V1"}, false), + Description: `Predefined rule expression. If this field is specified, config must also be specified. Available options: SRC_IPS_V1: Must specify the corresponding src_ip_ranges field in config.`, }, "expr": { @@ -103,8 +111,9 @@ func resourceComputeSecurityPolicy() *schema.Resource { Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "expression": { - Type: schema.TypeString, - Required: true, + Type: schema.TypeString, + Required: true, + Description: `Textual representation of an expression in Common Expression Language syntax. The application context of the containing message determines which well-known feature set of CEL is supported.`, }, // These fields are not yet supported (Issue terraform-providers/terraform-provider-google#4497: mbang) // "title": { @@ -121,32 +130,39 @@ func resourceComputeSecurityPolicy() *schema.Resource { // }, }, }, + Description: `User defined CEVAL expression. A CEVAL expression is used to specify match criteria such as origin.ip, source.region_code and contents in the request header.`, }, }, }, + Description: `A match condition that incoming traffic is evaluated against. If it evaluates to true, the corresponding action is enforced.`, }, "description": { - Type: schema.TypeString, - Optional: true, + Type: schema.TypeString, + Optional: true, + Description: `An optional description of this rule. Max size is 64.`, }, "preview": { - Type: schema.TypeBool, - Optional: true, + Type: schema.TypeBool, + Optional: true, + Description: `When set to true, the action specified above is not enforced. Stackdriver logs for requests that trigger a preview action are annotated as such.`, }, }, }, + Description: `The set of rules that belong to this policy. There must always be a default rule (rule with priority 2147483647 and match "*").
If no rules are provided when creating a security policy, a default rule with action "allow" will be added.`, }, "fingerprint": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `Fingerprint of this resource.`, }, "self_link": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `The URI of the created resource.`, }, }, } @@ -199,7 +215,7 @@ func resourceComputeSecurityPolicyCreate(d *schema.ResourceData, meta interface{ } d.SetId(id) - err = computeOperationWaitTime(config, op, project, fmt.Sprintf("Creating SecurityPolicy %q", sp), int(d.Timeout(schema.TimeoutCreate).Minutes())) + err = computeOperationWaitTime(config, op, project, fmt.Sprintf("Creating SecurityPolicy %q", sp), d.Timeout(schema.TimeoutCreate)) if err != nil { return err } @@ -255,7 +271,7 @@ func resourceComputeSecurityPolicyUpdate(d *schema.ResourceData, meta interface{ return errwrap.Wrapf(fmt.Sprintf("Error updating SecurityPolicy %q: {{err}}", sp), err) } - err = computeOperationWaitTime(config, op, project, fmt.Sprintf("Updating SecurityPolicy %q", sp), int(d.Timeout(schema.TimeoutCreate).Minutes())) + err = computeOperationWaitTime(config, op, project, fmt.Sprintf("Updating SecurityPolicy %q", sp), d.Timeout(schema.TimeoutUpdate)) if err != nil { return err } @@ -283,7 +299,7 @@ func resourceComputeSecurityPolicyUpdate(d *schema.ResourceData, meta interface{ return errwrap.Wrapf(fmt.Sprintf("Error updating SecurityPolicy %q: {{err}}", sp), err) } - err = computeOperationWaitTime(config, op, project, fmt.Sprintf("Updating SecurityPolicy %q", sp), int(d.Timeout(schema.TimeoutCreate).Minutes())) + err = computeOperationWaitTime(config, op, project, fmt.Sprintf("Updating SecurityPolicy %q", sp), d.Timeout(schema.TimeoutUpdate)) if err != nil { return err } @@ -295,7 +311,7 @@ func resourceComputeSecurityPolicyUpdate(d *schema.ResourceData, meta interface{ return errwrap.Wrapf(fmt.Sprintf("Error updating SecurityPolicy %q: {{err}}", sp), err) } - err = computeOperationWaitTime(config, op, project, fmt.Sprintf("Updating SecurityPolicy %q", sp), int(d.Timeout(schema.TimeoutCreate).Minutes())) + err = computeOperationWaitTime(config, op, project, fmt.Sprintf("Updating SecurityPolicy %q", sp), d.Timeout(schema.TimeoutUpdate)) if err != nil { return err } @@ -312,7 +328,7 @@ func resourceComputeSecurityPolicyUpdate(d *schema.ResourceData, meta interface{ return errwrap.Wrapf(fmt.Sprintf("Error updating SecurityPolicy %q: {{err}}", sp), err) } - err = computeOperationWaitTime(config, op, project, fmt.Sprintf("Updating SecurityPolicy %q", sp), int(d.Timeout(schema.TimeoutCreate).Minutes())) + err = computeOperationWaitTime(config, op, project, fmt.Sprintf("Updating SecurityPolicy %q", sp), d.Timeout(schema.TimeoutUpdate)) if err != nil { return err } @@ -337,7 +353,7 @@ func resourceComputeSecurityPolicyDelete(d *schema.ResourceData, meta interface{ return errwrap.Wrapf("Error deleting SecurityPolicy: {{err}}", err) } - err = computeOperationWaitTime(config, op, project, "Deleting SecurityPolicy", int(d.Timeout(schema.TimeoutDelete).Minutes())) + err = computeOperationWaitTime(config, op, project, "Deleting SecurityPolicy", d.Timeout(schema.TimeoutDelete)) if err != nil { return err } diff --git a/third_party/terraform/resources/resource_compute_shared_vpc_host_project.go b/third_party/terraform/resources/resource_compute_shared_vpc_host_project.go index 7e09f52e9010..d916dfc14783 100644 --- 
a/third_party/terraform/resources/resource_compute_shared_vpc_host_project.go +++ b/third_party/terraform/resources/resource_compute_shared_vpc_host_project.go @@ -3,6 +3,7 @@ package google import ( "fmt" "log" + "time" "github.com/hashicorp/terraform-plugin-sdk/helper/schema" ) @@ -16,11 +17,17 @@ func resourceComputeSharedVpcHostProject() *schema.Resource { State: schema.ImportStatePassthrough, }, + Timeouts: &schema.ResourceTimeout{ + Create: schema.DefaultTimeout(4 * time.Minute), + Delete: schema.DefaultTimeout(4 * time.Minute), + }, + Schema: map[string]*schema.Schema{ "project": { - Type: schema.TypeString, - Required: true, - ForceNew: true, + Type: schema.TypeString, + Required: true, + ForceNew: true, + Description: `The ID of the project that will serve as a Shared VPC host project`, }, }, } @@ -37,7 +44,7 @@ func resourceComputeSharedVpcHostProjectCreate(d *schema.ResourceData, meta inte d.SetId(hostProject) - err = computeOperationWaitTime(config, op, hostProject, "Enabling Shared VPC Host", int(d.Timeout(schema.TimeoutCreate).Minutes())) + err = computeOperationWaitTime(config, op, hostProject, "Enabling Shared VPC Host", d.Timeout(schema.TimeoutCreate)) if err != nil { d.SetId("") return err @@ -75,7 +82,7 @@ func resourceComputeSharedVpcHostProjectDelete(d *schema.ResourceData, meta inte return fmt.Errorf("Error disabling Shared VPC Host %q: %s", hostProject, err) } - err = computeOperationWaitTime(config, op, hostProject, "Disabling Shared VPC Host", int(d.Timeout(schema.TimeoutCreate).Minutes())) + err = computeOperationWaitTime(config, op, hostProject, "Disabling Shared VPC Host", d.Timeout(schema.TimeoutDelete)) if err != nil { return err } diff --git a/third_party/terraform/resources/resource_compute_shared_vpc_service_project.go b/third_party/terraform/resources/resource_compute_shared_vpc_service_project.go index e84b3e75b037..3b5e5746b1d4 100644 --- a/third_party/terraform/resources/resource_compute_shared_vpc_service_project.go +++ b/third_party/terraform/resources/resource_compute_shared_vpc_service_project.go @@ -3,6 +3,7 @@ package google import ( "fmt" "strings" + "time" computeBeta "google.golang.org/api/compute/v0.beta" @@ -21,16 +22,23 @@ func resourceComputeSharedVpcServiceProject() *schema.Resource { State: schema.ImportStatePassthrough, }, + Timeouts: &schema.ResourceTimeout{ + Create: schema.DefaultTimeout(4 * time.Minute), + Delete: schema.DefaultTimeout(4 * time.Minute), + }, + Schema: map[string]*schema.Schema{ "host_project": { - Type: schema.TypeString, - Required: true, - ForceNew: true, + Type: schema.TypeString, + Required: true, + ForceNew: true, + Description: `The ID of a host project to associate.`, }, "service_project": { - Type: schema.TypeString, - Required: true, - ForceNew: true, + Type: schema.TypeString, + Required: true, + ForceNew: true, + Description: `The ID of the project that will serve as a Shared VPC service project.`, }, }, } @@ -52,7 +60,7 @@ func resourceComputeSharedVpcServiceProjectCreate(d *schema.ResourceData, meta i if err != nil { return err } - err = computeOperationWaitTime(config, op, hostProject, "Enabling Shared VPC Resource", int(d.Timeout(schema.TimeoutCreate).Minutes())) + err = computeOperationWaitTime(config, op, hostProject, "Enabling Shared VPC Resource", d.Timeout(schema.TimeoutCreate)) if err != nil { return err } @@ -118,7 +126,7 @@ func disableXpnResource(d *schema.ResourceData, config *Config, hostProject, pro if err != nil { return err } - err = computeOperationWaitTime(config, op, hostProject, 
"Disabling Shared VPC Resource", int(d.Timeout(schema.TimeoutCreate).Minutes())) + err = computeOperationWaitTime(config, op, hostProject, "Disabling Shared VPC Resource", d.Timeout(schema.TimeoutDelete)) if err != nil { return err } diff --git a/third_party/terraform/resources/resource_compute_target_pool.go b/third_party/terraform/resources/resource_compute_target_pool.go index aa8a68916904..53650ec52e09 100644 --- a/third_party/terraform/resources/resource_compute_target_pool.go +++ b/third_party/terraform/resources/resource_compute_target_pool.go @@ -5,6 +5,7 @@ import ( "log" "regexp" "strings" + "time" "github.com/hashicorp/terraform-plugin-sdk/helper/schema" "google.golang.org/api/compute/v1" @@ -23,29 +24,39 @@ func resourceComputeTargetPool() *schema.Resource { State: resourceTargetPoolStateImporter, }, + Timeouts: &schema.ResourceTimeout{ + Create: schema.DefaultTimeout(4 * time.Minute), + Update: schema.DefaultTimeout(4 * time.Minute), + Delete: schema.DefaultTimeout(4 * time.Minute), + }, + Schema: map[string]*schema.Schema{ "name": { - Type: schema.TypeString, - Required: true, - ForceNew: true, + Type: schema.TypeString, + Required: true, + ForceNew: true, + Description: `A unique name for the resource, required by GCE. Changing this forces a new resource to be created.`, }, "backup_pool": { - Type: schema.TypeString, - Optional: true, - ForceNew: false, + Type: schema.TypeString, + Optional: true, + ForceNew: false, + Description: `URL to the backup target pool. Must also set failover_ratio.`, }, "description": { - Type: schema.TypeString, - Optional: true, - ForceNew: true, + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Description: `Textual description field.`, }, "failover_ratio": { - Type: schema.TypeFloat, - Optional: true, - ForceNew: true, + Type: schema.TypeFloat, + Optional: true, + ForceNew: true, + Description: `Ratio (0 to 1) of failed nodes before using the backup pool (which must also be set).`, }, "health_checks": { @@ -57,6 +68,7 @@ func resourceComputeTargetPool() *schema.Resource { Type: schema.TypeString, DiffSuppressFunc: compareSelfLinkOrResourceName, }, + Description: `List of zero or one health check name or self_link. Only legacy google_compute_http_health_check is supported.`, }, "instances": { @@ -73,32 +85,37 @@ func resourceComputeTargetPool() *schema.Resource { Set: func(v interface{}) int { return schema.HashString(canonicalizeInstanceRef(v.(string))) }, + Description: `List of instances in the pool. They can be given as URLs, or in the form of "zone/name". Note that the instances need not exist at the time of target pool creation, so there is no need to use the Terraform interpolators to create a dependency on the instances from the target pool.`, }, "project": { - Type: schema.TypeString, - Optional: true, - ForceNew: true, - Computed: true, + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Computed: true, + Description: `The ID of the project in which the resource belongs. If it is not provided, the provider project is used.`, }, "region": { - Type: schema.TypeString, - Optional: true, - ForceNew: true, - Computed: true, + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Computed: true, + Description: `Where the target pool resides. 
Defaults to project region.`, }, "self_link": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `The URI of the created resource.`, }, "session_affinity": { - Type: schema.TypeString, - Optional: true, - ForceNew: true, - Default: "NONE", + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Default: "NONE", + Description: `How to distribute load. Options are "NONE" (no affinity), "CLIENT_IP" (hash of the source/dest addresses/ports), and "CLIENT_IP_PROTO", which also includes the protocol (default "NONE").`, }, }, } @@ -213,7 +230,7 @@ func resourceComputeTargetPoolCreate(d *schema.ResourceData, meta interface{}) e } d.SetId(id) - err = computeOperationWait(config, op, project, "Creating Target Pool") + err = computeOperationWaitTime(config, op, project, "Creating Target Pool", d.Timeout(schema.TimeoutCreate)) if err != nil { return err } @@ -262,7 +279,7 @@ func resourceComputeTargetPoolUpdate(d *schema.ResourceData, meta interface{}) e return fmt.Errorf("Error updating health_check: %s", err) } - err = computeOperationWait(config, op, project, "Updating Target Pool") + err = computeOperationWaitTime(config, op, project, "Updating Target Pool", d.Timeout(schema.TimeoutUpdate)) if err != nil { return err } @@ -278,7 +295,7 @@ func resourceComputeTargetPoolUpdate(d *schema.ResourceData, meta interface{}) e return fmt.Errorf("Error updating health_check: %s", err) } - err = computeOperationWait(config, op, project, "Updating Target Pool") + err = computeOperationWaitTime(config, op, project, "Updating Target Pool", d.Timeout(schema.TimeoutUpdate)) if err != nil { return err } @@ -312,7 +329,7 @@ func resourceComputeTargetPoolUpdate(d *schema.ResourceData, meta interface{}) e return fmt.Errorf("Error updating instances: %s", err) } - err = computeOperationWait(config, op, project, "Updating Target Pool") + err = computeOperationWaitTime(config, op, project, "Updating Target Pool", d.Timeout(schema.TimeoutUpdate)) if err != nil { return err } @@ -327,7 +344,7 @@ func resourceComputeTargetPoolUpdate(d *schema.ResourceData, meta interface{}) e if err != nil { return fmt.Errorf("Error updating instances: %s", err) } - err = computeOperationWait(config, op, project, "Updating Target Pool") + err = computeOperationWaitTime(config, op, project, "Updating Target Pool", d.Timeout(schema.TimeoutUpdate)) if err != nil { return err } @@ -345,7 +362,7 @@ func resourceComputeTargetPoolUpdate(d *schema.ResourceData, meta interface{}) e return fmt.Errorf("Error updating backup_pool: %s", err) } - err = computeOperationWait(config, op, project, "Updating Target Pool") + err = computeOperationWaitTime(config, op, project, "Updating Target Pool", d.Timeout(schema.TimeoutUpdate)) if err != nil { return err } @@ -423,7 +440,7 @@ func resourceComputeTargetPoolDelete(d *schema.ResourceData, meta interface{}) e return fmt.Errorf("Error deleting TargetPool: %s", err) } - err = computeOperationWait(config, op, project, "Deleting Target Pool") + err = computeOperationWaitTime(config, op, project, "Deleting Target Pool", d.Timeout(schema.TimeoutDelete)) if err != nil { return err } diff --git a/third_party/terraform/resources/resource_container_cluster.go.erb b/third_party/terraform/resources/resource_container_cluster.go.erb index 4045f88d19f2..35e3803df20d 100644 --- a/third_party/terraform/resources/resource_container_cluster.go.erb +++ b/third_party/terraform/resources/resource_container_cluster.go.erb @@ -23,12 +23,13 @@ var ( networkConfig =
&schema.Resource{ Schema: map[string]*schema.Schema{ "cidr_blocks": { - Type: schema.TypeSet, + Type: schema.TypeSet, // Despite being the only entry in a nested block, this should be kept // Optional. Expressing the parent with no entries and omitting the // parent entirely are semantically different. Optional: true, Elem: cidrBlockConfig, + Description: `External networks that can access the Kubernetes cluster master through HTTPS.`, }, }, } @@ -38,10 +39,12 @@ var ( Type: schema.TypeString, Required: true, ValidateFunc: validation.CIDRNetwork(0, 32), + Description: `External network that can access Kubernetes master through HTTPS. Must be specified in CIDR notation.`, }, "display_name": { Type: schema.TypeString, Optional: true, + Description: `Field for users to identify CIDR blocks.`, }, }, } @@ -53,14 +56,36 @@ var ( "addons_config.0.http_load_balancing", "addons_config.0.horizontal_pod_autoscaling", "addons_config.0.network_policy_config", + "addons_config.0.cloudrun_config", <% unless version == 'ga' -%> "addons_config.0.istio_config", - "addons_config.0.cloudrun_config", "addons_config.0.dns_cache_config", + "addons_config.0.gce_persistent_disk_csi_driver_config", + "addons_config.0.kalm_config", + "addons_config.0.config_connector_config", + <% end -%> + } + + forceNewClusterNodeConfigFields = []string{ + <% unless version == 'ga' -%> + "workload_metadata_config", <% end -%> } ) +// This uses the node pool nodeConfig schema but sets +// node-pool-only updatable fields to ForceNew +func clusterSchemaNodeConfig() *schema.Schema { + nodeConfigSch := schemaNodeConfig() + schemaMap := nodeConfigSch.Elem.(*schema.Resource).Schema + for _, k := range forceNewClusterNodeConfigFields { + if sch, ok := schemaMap[k]; ok { + changeFieldSchemaToForceNew(sch) + } + } + return nodeConfigSch +} + func rfc5545RecurrenceDiffSuppress(k, o, n string, d *schema.ResourceData) bool { // This diff gets applied in the cloud console if you specify // "FREQ=DAILY" in your config and add a maintenance exclusion. @@ -88,6 +113,7 @@ func resourceContainerCluster() *schema.Resource { Timeouts: &schema.ResourceTimeout{ Create: schema.DefaultTimeout(40 * time.Minute), + Read: schema.DefaultTimeout(40 * time.Minute), Update: schema.DefaultTimeout(60 * time.Minute), Delete: schema.DefaultTimeout(40 * time.Minute), }, @@ -101,9 +127,10 @@ func resourceContainerCluster() *schema.Resource { Schema: map[string]*schema.Schema{ "name": { - Type: schema.TypeString, - Required: true, - ForceNew: true, + Type: schema.TypeString, + Required: true, + ForceNew: true, + Description: `The name of the cluster, unique within the project and location.`, ValidateFunc: func(v interface{}, k string) (ws []string, errors []error) { value := v.(string)
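The clusterSchemaNodeConfig helper added above reuses the node pool's nodeConfig schema while forcing cluster-level recreation for fields that are only updatable through the node pool API. The changeFieldSchemaToForceNew helper it calls is defined outside this diff; a plausible sketch, assuming it recursively marks a field and any nested block's children (the body is conjecture):

```go
// Conjectural sketch of changeFieldSchemaToForceNew: mark a field ForceNew
// and recurse into nested blocks so that child fields force recreation too.
func changeFieldSchemaToForceNewSketch(sch *schema.Schema) {
	sch.ForceNew = true
	if res, ok := sch.Elem.(*schema.Resource); ok {
		for _, child := range res.Schema {
			changeFieldSchemaToForceNewSketch(child)
		}
	}
}
```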
@@ -133,46 +160,52 @@ }, "location": { - Type: schema.TypeString, - Optional: true, - Computed: true, - ForceNew: true, + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + Description: `The location (region or zone) in which the cluster master will be created, as well as the default node location. If you specify a zone (such as us-central1-a), the cluster will be a zonal cluster with a single cluster master. If you specify a region (such as us-west1), the cluster will be a regional cluster with multiple masters spread across zones in the region, and with default node locations in those zones as well.`, }, "region": { - Type: schema.TypeString, - Optional: true, - Removed: "Use location instead", - Computed: true, + Type: schema.TypeString, + Optional: true, + Removed: "Use location instead", + Computed: true, + Description: `The region in which the cluster master will be created. Zone and region have been removed in favor of location.`, }, "zone": { - Type: schema.TypeString, - Optional: true, - Removed: "Use location instead", - Computed: true, + Type: schema.TypeString, + Optional: true, + Removed: "Use location instead", + Computed: true, + Description: `The zone in which the cluster master will be created. Zone and region have been removed in favor of location.`, }, "node_locations": { - Type: schema.TypeSet, - Optional: true, - Computed: true, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeSet, + Optional: true, + Computed: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Description: `The list of zones in which the cluster's nodes are located. Nodes must be in the region of their regional cluster or in the same region as their cluster's zone for zonal clusters. If this is specified for a zonal cluster, omit the cluster's zone.`, }, "additional_zones": { - Type: schema.TypeSet, - Optional: true, - Removed: "Use node_locations instead", - Computed: true, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeSet, + Optional: true, + Removed: "Use node_locations instead", + Computed: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Description: `Additional_zones has been removed in favor of node_locations.`, }, "addons_config": { - Type: schema.TypeList, - Optional: true, - Computed: true, - MaxItems: 1, + Type: schema.TypeList, + Optional: true, + Computed: true, + MaxItems: 1, + Description: `The configuration for addons supported by GKE.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "http_load_balancing": { @@ -181,6 +214,7 @@ func resourceContainerCluster() *schema.Resource { Computed: true, AtLeastOneOf: addonsConfigKeys, MaxItems: 1, + Description: `The status of the HTTP (L7) load balancing controller addon, which makes it easy to set up HTTP load balancers for services in a cluster. It is enabled by default; set disabled = true to disable.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "disabled": { @@ -196,6 +230,7 @@ func resourceContainerCluster() *schema.Resource { Computed: true, AtLeastOneOf: addonsConfigKeys, MaxItems: 1, + Description: `The status of the Horizontal Pod Autoscaling addon, which increases or decreases the number of replica pods a replication controller has based on the resource usage of the existing pods. It ensures that a Heapster pod is running in the cluster, which is also used by the Cloud Monitoring service.
It is enabled by default; set disabled = true to disable.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "disabled": { @@ -206,11 +241,12 @@ func resourceContainerCluster() *schema.Resource { }, }, "kubernetes_dashboard": { - Type: schema.TypeList, - Optional: true, - Removed: "The Kubernetes Dashboard addon is removed for clusters on GKE.", - Computed: true, - MaxItems: 1, + Type: schema.TypeList, + Optional: true, + Removed: "The Kubernetes Dashboard addon is removed for clusters on GKE.", + Computed: true, + MaxItems: 1, + Description: `The status of Kubernetes Dashboard addon.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "disabled": { @@ -226,6 +262,7 @@ func resourceContainerCluster() *schema.Resource { Computed: true, AtLeastOneOf: addonsConfigKeys, MaxItems: 1, + Description: `Whether we should enable the network policy addon for the master. This must be enabled in order to enable network policy for the nodes. To enable this, you must also define a network_policy block, otherwise nothing will happen. It can only be disabled if the nodes already do not have network policies enabled. Defaults to disabled; set disabled = false to enable.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "disabled": { @@ -235,50 +272,71 @@ func resourceContainerCluster() *schema.Resource { }, }, }, - <% unless version == 'ga' -%> - "istio_config": { + "cloudrun_config": { Type: schema.TypeList, Optional: true, Computed: true, AtLeastOneOf: addonsConfigKeys, MaxItems: 1, + Description: `The status of the CloudRun addon. It is disabled by default. Set disabled = false to enable.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "disabled": { Type: schema.TypeBool, Required: true, }, + }, + }, + }, + <% unless version == 'ga' -%> + "istio_config": { + Type: schema.TypeList, + Optional: true, + Computed: true, + AtLeastOneOf: addonsConfigKeys, + MaxItems: 1, + Description: `The status of the Istio addon.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "disabled": { + Type: schema.TypeBool, + Required: true, + Description: `The status of the Istio addon, which makes it easy to set up Istio for services in a cluster. It is disabled by default. Set disabled = false to enable.`, + }, "auth": { - Type: schema.TypeString, - Optional: true, + Type: schema.TypeString, + Optional: true, // We can't use a Terraform-level default because it won't be true when the block is disabled: true DiffSuppressFunc: emptyOrDefaultStringSuppress("AUTH_NONE"), - ValidateFunc: validation.StringInSlice([]string{"AUTH_NONE", "AUTH_MUTUAL_TLS"}, false), + ValidateFunc: validation.StringInSlice([]string{"AUTH_NONE", "AUTH_MUTUAL_TLS"}, false), + Description: `The authentication type between services in Istio. Available options include AUTH_MUTUAL_TLS.`, }, }, }, }, - "cloudrun_config": { + "dns_cache_config": { Type: schema.TypeList, Optional: true, Computed: true, AtLeastOneOf: addonsConfigKeys, MaxItems: 1, + Description: `The status of the NodeLocal DNSCache addon. It is disabled by default. Set enabled = true to enable.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ - "disabled": { + "enabled": { Type: schema.TypeBool, Required: true, }, }, }, }, - "dns_cache_config": { + "gce_persistent_disk_csi_driver_config": { Type: schema.TypeList, Optional: true, Computed: true, AtLeastOneOf: addonsConfigKeys, MaxItems: 1, + Description: `Whether this cluster should enable the Google Compute Engine Persistent Disk Container Storage Interface (CSI) Driver. 
Defaults to disabled; set enabled = true to enable.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "enabled": { @@ -288,6 +346,39 @@ func resourceContainerCluster() *schema.Resource { }, }, }, + "kalm_config": { + Type: schema.TypeList, + Optional: true, + Computed: true, + AtLeastOneOf: addonsConfigKeys, + MaxItems: 1, + Description: `Configuration for the KALM addon, which manages the lifecycle of k8s applications. It is disabled by default; set enabled = true to enable.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "enabled": { + Type: schema.TypeBool, + Required: true, + }, + }, + }, + }, + "config_connector_config": { + Type: schema.TypeList, + Optional: true, + Computed: true, + AtLeastOneOf: addonsConfigKeys, + MaxItems: 1, + Description: `The status of the Config Connector addon.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "enabled": { + Type: schema.TypeBool, + Required: true, + }, + }, + }, + }, + <% end -%> }, }, @@ -298,62 +389,80 @@ func resourceContainerCluster() *schema.Resource { MaxItems: 1, // This field is Optional + Computed because we automatically set the // enabled value to false if the block is not returned in API responses. - Optional: true, - Computed: true, + Optional: true, + Computed: true, + Description: `Per-cluster configuration of Node Auto-Provisioning with Cluster Autoscaler to automatically adjust the size of the cluster and create/delete node pools based on the current needs of the cluster's workload. See the guide to using Node Auto-Provisioning for more details.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "enabled": { - Type: schema.TypeBool, - Required: true, + Type: schema.TypeBool, + Required: true, + Description: `Whether node auto-provisioning is enabled. Resource limits for cpu and memory must be defined to enable node auto-provisioning.`, }, "resource_limits": { - Type: schema.TypeList, - Optional: true, + Type: schema.TypeList, + Optional: true, + Description: `Global constraints for machine resources in the cluster. Configuring the cpu and memory types is required if node auto-provisioning is enabled. These limits will apply to node pool autoscaling in addition to node auto-provisioning.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "resource_type": { - Type: schema.TypeString, - Required: true, + Type: schema.TypeString, + Required: true, + Description: `The type of the resource. For example, cpu and memory.
See the guide to using Node Auto-Provisioning for a list of types.`, }, "minimum": { - Type: schema.TypeInt, - Optional: true, + Type: schema.TypeInt, + Optional: true, + Description: `Minimum amount of the resource in the cluster.`, }, "maximum": { - Type: schema.TypeInt, - Optional: true, + Type: schema.TypeInt, + Optional: true, + Description: `Maximum amount of the resource in the cluster.`, }, }, }, }, "auto_provisioning_defaults": { - Type: schema.TypeList, - MaxItems: 1, - Optional: true, - Computed: true, + Type: schema.TypeList, + MaxItems: 1, + Optional: true, + Computed: true, + Description: `Contains defaults for a node pool created by NAP.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "oauth_scopes": { - Type: schema.TypeList, - Optional: true, - Computed: true, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeList, + Optional: true, + Computed: true, + Elem: &schema.Schema{Type: schema.TypeString}, DiffSuppressFunc: containerClusterAddedScopesSuppress, + Description: `Scopes that are used by NAP when creating node pools.`, }, "service_account": { - Type: schema.TypeString, - Optional: true, - Default: "default", + Type: schema.TypeString, + Optional: true, + Default: "default", + Description: `The Google Cloud Platform Service Account to be used by the node VMs.`, }, + <% unless version == 'ga' -%> + "min_cpu_platform": { + Type: schema.TypeString, + Optional: true, + DiffSuppressFunc: emptyOrDefaultStringSuppress("automatic"), + Description: `Minimum CPU platform to be used by this instance. The instance may be scheduled on the specified or newer CPU platform. Applicable values are the friendly names of CPU platforms, such as Intel Haswell.`, + }, + <% end -%> }, }, }, <% unless version == 'ga' -%> "autoscaling_profile": { - Type: schema.TypeString, - Default: "BALANCED", - Optional: true, - ValidateFunc: validation.StringInSlice([]string{"BALANCED", "OPTIMIZE_UTILIZATION"}, false), + Type: schema.TypeString, + Default: "BALANCED", + Optional: true, + ValidateFunc: validation.StringInSlice([]string{"BALANCED", "OPTIMIZE_UTILIZATION"}, false), + Description: `Configuration options for the Autoscaling profile feature, which lets you choose whether the cluster autoscaler should optimize for resource utilization or resource availability when deciding to remove nodes from a cluster. Can be BALANCED or OPTIMIZE_UTILIZATION. Defaults to BALANCED.`, }, <% end -%> }, @@ -361,103 +470,117 @@ func resourceContainerCluster() *schema.Resource { }, "cluster_ipv4_cidr": { - Type: schema.TypeString, - Optional: true, - Computed: true, - ForceNew: true, - ValidateFunc: orEmpty(validateRFC1918Network(8, 32)), + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + ValidateFunc: orEmpty(validateRFC1918Network(8, 32)), ConflictsWith: []string{"ip_allocation_policy"}, + Description: `The IP address range of the Kubernetes pods in this cluster in CIDR notation (e.g. 10.96.0.0/14). Leave blank to have one automatically chosen or specify a /14 block in 10.0.0.0/8. 
This field will only work for routes-based clusters, where ip_allocation_policy is not defined.`, }, "description": { - Type: schema.TypeString, - Optional: true, - ForceNew: true, + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Description: `Description of the cluster.`, }, "enable_binary_authorization": { - Default: false, - Type: schema.TypeBool, - Optional: true, + Default: false, + Type: schema.TypeBool, + Optional: true, + Description: `Enable Binary Authorization for this cluster. If enabled, all container images will be validated by Google Binary Authorization.`, }, "enable_kubernetes_alpha": { - Type: schema.TypeBool, - Optional: true, - ForceNew: true, - Default: false, + Type: schema.TypeBool, + Optional: true, + ForceNew: true, + Default: false, + Description: `Whether to enable Kubernetes Alpha features for this cluster. Note that when this option is enabled, the cluster cannot be upgraded and will be automatically deleted after 30 days.`, }, "enable_tpu": { - Type: schema.TypeBool, - Optional: true, - ForceNew: true, + Type: schema.TypeBool, + Optional: true, + ForceNew: true, + Description: `Whether to enable Cloud TPU resources in this cluster.`, <% if version == 'ga' -%> Removed: "This field is in beta. Use it in the the google-beta provider instead. See https://terraform.io/docs/providers/google/guides/provider_versions.html for more details.", Computed: true, <% else -%> - Default: false, + Default: false, <% end -%> }, "enable_legacy_abac": { - Type: schema.TypeBool, - Optional: true, - Default: false, + Type: schema.TypeBool, + Optional: true, + Default: false, + Description: `Whether the ABAC authorizer is enabled for this cluster. When enabled, identities in the system, including service accounts, nodes, and controllers, will have statically granted permissions beyond those provided by the RBAC configuration or IAM. Defaults to false.`, }, -<% unless version == 'ga' -%> "enable_shielded_nodes": { - Type: schema.TypeBool, - Optional: true, - Default: false, + Type: schema.TypeBool, + Optional: true, + Default: false, + Description: `Enable Shielded Nodes features on all nodes in this cluster. Defaults to false.`, }, -<% end -%> "authenticator_groups_config": { - Type: schema.TypeList, - Optional: true, - Computed: true, - ForceNew: true, - MaxItems: 1, + Type: schema.TypeList, + Optional: true, + Computed: true, + ForceNew: true, + MaxItems: 1, + Description: `Configuration for the Google Groups for GKE feature.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "security_group": { - Type: schema.TypeString, - Required: true, - ForceNew: true, + Type: schema.TypeString, + Required: true, + ForceNew: true, + Description: `The name of the RBAC security group for use with Google security groups in Kubernetes RBAC. Group name must be in format gke-security-groups@yourdomain.com.`, }, }, }, }, "initial_node_count": { - Type: schema.TypeInt, - Optional: true, - ForceNew: true, + Type: schema.TypeInt, + Optional: true, + ForceNew: true, + Description: `The number of nodes to create in this cluster's default node pool. In regional or multi-zonal clusters, this is the number of nodes per zone. Must be set if node_pool is not set. If you're using google_container_node_pool objects with no default node pool, you'll need to set this to a value of at least 1, alongside setting remove_default_node_pool to true.`, }, "logging_service": { Type: schema.TypeString, Optional: true, - Default: "logging.googleapis.com/kubernetes", + Computed: true, +<% unless version == 'ga' -%> + ConflictsWith: []string{"cluster_telemetry"}, +<% end -%> ValidateFunc: validation.StringInSlice([]string{"logging.googleapis.com", "logging.googleapis.com/kubernetes", "none"}, false), + Description: `The logging service that the cluster should write logs to. Available options include logging.googleapis.com(Legacy Stackdriver), logging.googleapis.com/kubernetes(Stackdriver Kubernetes Engine Logging), and none. Defaults to logging.googleapis.com/kubernetes.`, }, "maintenance_policy": { - Type: schema.TypeList, - Optional: true, - MaxItems: 1, + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Description: `The maintenance policy to use for the cluster.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "daily_maintenance_window": { Type: schema.TypeList, - Optional: true, + Optional: true, ExactlyOneOf: []string{ "maintenance_policy.0.daily_maintenance_window", "maintenance_policy.0.recurring_window", }, - MaxItems: 1, + MaxItems: 1, + Description: `Time window specified for daily maintenance operations. Specify start_time in RFC3339 format "HH:MM", where HH : [00-23] and MM : [00-59] GMT.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "start_time": { @@ -474,28 +597,29 @@ func resourceContainerCluster() *schema.Resource { }, }, "recurring_window": { - Type: schema.TypeList, - Optional: true, - MaxItems: 1, + Type: schema.TypeList, + Optional: true, + MaxItems: 1, ExactlyOneOf: []string{ "maintenance_policy.0.daily_maintenance_window", "maintenance_policy.0.recurring_window", }, + Description: `Time window for recurring maintenance operations.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "start_time": { - Type: schema.TypeString, - Required: true, + Type: schema.TypeString, + Required: true, ValidateFunc: validateRFC3339Date, }, "end_time": { - Type: schema.TypeString, - Required: true, + Type: schema.TypeString, + Required: true, ValidateFunc: validateRFC3339Date, }, "recurrence": { - Type: schema.TypeString, - Required: true, + Type: schema.TypeString, + Required: true, DiffSuppressFunc: rfc5545RecurrenceDiffSuppress, }, }, @@ -506,23 +630,26 @@ func resourceContainerCluster() *schema.Resource { }, "master_auth": { - Type: schema.TypeList, - Optional: true, - MaxItems: 1, - Computed: true, + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Computed: true, + Description: `The authentication information for accessing the Kubernetes master. Some values in this block are only returned by the API if your service account has permission to get credentials for your GKE cluster. If you see an unexpected diff removing a username/password or unsetting your client cert, ensure you have the container.clusters.getCredentials permission.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "password": { - Type: schema.TypeString, - Optional: true, + Type: schema.TypeString, + Optional: true, AtLeastOneOf: []string{"master_auth.0.password", "master_auth.0.username", "master_auth.0.client_certificate_config"}, - Sensitive: true, + Sensitive: true, + Description: `The password to use for HTTP basic authentication when accessing the Kubernetes master endpoint.`, }, "username": { - Type: schema.TypeString, - Optional: true, + Type: schema.TypeString, + Optional: true, AtLeastOneOf: []string{"master_auth.0.password", "master_auth.0.username", "master_auth.0.client_certificate_config"}, + Description: `The username to use for HTTP basic authentication when accessing the Kubernetes master endpoint. If not present, basic auth will be disabled.`, }, // Ideally, this would be Optional (and not Computed). @@ -530,59 +657,70 @@ func resourceContainerCluster() *schema.Resource { // though, being unset was considered identical to set // and the issue_client_certificate value being true. "client_certificate_config": { - Type: schema.TypeList, - MaxItems: 1, - Optional: true, - Computed: true, + Type: schema.TypeList, + MaxItems: 1, + Optional: true, + Computed: true, AtLeastOneOf: []string{"master_auth.0.password", "master_auth.0.username", "master_auth.0.client_certificate_config"}, - ForceNew: true, + ForceNew: true, + Description: `Whether client certificate authorization is enabled for this cluster.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "issue_client_certificate": { - Type: schema.TypeBool, - Required: true, - ForceNew: true, + Type: schema.TypeBool, + Required: true, + ForceNew: true, + Description: `Whether client certificate authorization is enabled for this cluster.`, }, }, }, }, "client_certificate": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `Base64 encoded public certificate used by clients to authenticate to the cluster endpoint.`, }, "client_key": { - Type: schema.TypeString, - Computed: true, - Sensitive: true, + Type: schema.TypeString, + Computed: true, + Sensitive: true, + Description: `Base64 encoded private key used by clients to authenticate to the cluster endpoint.`, }, "cluster_ca_certificate": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `Base64 encoded public certificate that is the root of trust for the cluster.`, }, }, }, }, "master_authorized_networks_config": { - Type: schema.TypeList, - Optional: true, - MaxItems: 1, - Elem: networkConfig, + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: networkConfig, + Description: `The desired configuration options for master authorized networks. Omit the nested cidr_blocks attribute to disallow external access (except the cluster node IPs, which GKE automatically whitelists).`, }, "min_master_version": { - Type: schema.TypeString, - Optional: true, + Type: schema.TypeString, + Optional: true, + Description: `The minimum version of the master. GKE will auto-update the master to new versions, so this does not guarantee the current master version--use the read-only master_version field to obtain that. If unset, the cluster's version will be set by GKE to the version of the most recent official release (which is not necessarily the latest version).`, },
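Several string fields in this resource pair a server-side default with emptyOrDefaultStringSuppress (network_policy.provider just below, istio_config.auth above, and instance_redistribution_type earlier in this PR) so that leaving the attribute unset does not produce a perpetual diff against the API's default. The helper is defined outside this diff; a minimal sketch of the idea, assuming it treats "unset" and "server default" as interchangeable:

```go
// Conjectural sketch of emptyOrDefaultStringSuppress: suppress the diff when
// both the state value and the config value are either empty or the known
// server-side default.
func emptyOrDefaultStringSuppressSketch(defaultVal string) schema.SchemaDiffSuppressFunc {
	return func(k, old, new string, d *schema.ResourceData) bool {
		oldIsDefault := old == "" || old == defaultVal
		newIsDefault := new == "" || new == defaultVal
		return oldIsDefault && newIsDefault
	}
}
```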
"monitoring_service": { Type: schema.TypeString, Optional: true, - Default: "monitoring.googleapis.com/kubernetes", + Computed: true, +<% unless version == 'ga' -%> + ConflictsWith: []string{"cluster_telemetry"}, +<% end -%> ValidateFunc: validation.StringInSlice([]string{"monitoring.googleapis.com", "monitoring.googleapis.com/kubernetes", "none"}, false), + Description: `The monitoring service that the cluster should write metrics to. Automatically send metrics from pods in the cluster to the Google Cloud Monitoring API. VM metrics will be collected by Google Compute Engine regardless of this setting. Available options include monitoring.googleapis.com(Legacy Stackdriver), monitoring.googleapis.com/kubernetes(Stackdriver Kubernetes Engine Monitoring), and none. Defaults to monitoring.googleapis.com/kubernetes.`, }, "network": { @@ -591,18 +729,21 @@ func resourceContainerCluster() *schema.Resource { Default: "default", ForceNew: true, DiffSuppressFunc: compareSelfLinkOrResourceName, + Description: `The name or self_link of the Google Compute Engine network to which the cluster is connected. For Shared VPC, set this to the self link of the shared network.`, }, "network_policy": { - Type: schema.TypeList, - Optional: true, - Computed: true, - MaxItems: 1, + Type: schema.TypeList, + Optional: true, + Computed: true, + MaxItems: 1, + Description: `Configuration options for the NetworkPolicy feature.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "enabled": { - Type: schema.TypeBool, - Required: true, + Type: schema.TypeBool, + Required: true, + Description: `Whether network policy is enabled on the cluster.`, }, "provider": { Type: schema.TypeString, @@ -610,12 +751,13 @@ func resourceContainerCluster() *schema.Resource { Optional: true, ValidateFunc: validation.StringInSlice([]string{"PROVIDER_UNSPECIFIED", "CALICO"}, false), DiffSuppressFunc: emptyOrDefaultStringSuppress("PROVIDER_UNSPECIFIED"), + Description: `The selected network policy provider. Defaults to PROVIDER_UNSPECIFIED.`, }, }, }, }, - "node_config": schemaNodeConfig, + "node_config": clusterSchemaNodeConfig(), "node_pool": { Type: schema.TypeList, @@ -625,40 +767,45 @@ func resourceContainerCluster() *schema.Resource { Elem: &schema.Resource{ Schema: schemaNodePool, }, + Description: `List of node pools associated with this cluster. See google_container_node_pool for schema. Warning: node pools defined inside a cluster can't be changed (or added/removed) after cluster creation without deleting and recreating the entire cluster. Unless you absolutely need the ability to say "these are the only node pools associated with this cluster", use the google_container_node_pool resource instead of this property.`, }, "node_version": { - Type: schema.TypeString, - Optional: true, - Computed: true, + Type: schema.TypeString, + Optional: true, + Computed: true, + Description: `The Kubernetes version on the nodes. Must either be unset or set to the same value as min_master_version on create. Defaults to the default version set by GKE which is not necessarily the latest version. This only affects nodes in the default node pool. While a fuzzy version can be specified, it's recommended that you specify explicit versions as Terraform will see spurious diffs when fuzzy versions are used.
See the google_container_engine_versions data source's version_prefix field to approximate fuzzy versions in a Terraform-compatible way. To update nodes in other node pools, use the version attribute on the node pool.`, }, "pod_security_policy_config": { <% if version == 'ga' -%> // Remove return nil from expand when this is removed for good. - Removed: "This field is in beta. Use it in the the google-beta provider instead. See https://terraform.io/docs/providers/google/guides/provider_versions.html for more details.", + Removed: "This field is in beta. Use it in the the google-beta provider instead. See https://terraform.io/docs/providers/google/guides/provider_versions.html for more details.", Computed: true, <% else -%> DiffSuppressFunc: podSecurityPolicyCfgSuppress, <% end -%> - Type: schema.TypeList, - Optional: true, - MaxItems: 1, + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Description: `Configuration for the PodSecurityPolicy feature.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "enabled": { - Type: schema.TypeBool, - Required: true, + Type: schema.TypeBool, + Required: true, + Description: `Enable the PodSecurityPolicy controller for this cluster. If enabled, pods must be valid under a PodSecurityPolicy to be created.`, }, }, }, }, "project": { - Type: schema.TypeString, - Optional: true, - Computed: true, - ForceNew: true, + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + Description: `The ID of the project in which the resource belongs. If it is not provided, the provider project is used.`, }, "subnetwork": { @@ -667,35 +814,41 @@ func resourceContainerCluster() *schema.Resource { Computed: true, ForceNew: true, DiffSuppressFunc: compareSelfLinkOrResourceName, + Description: `The name or self_link of the Google Compute Engine subnetwork in which the cluster's instances are launched.`, }, "endpoint": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `The IP address of this cluster's Kubernetes master.`, }, "instance_group_urls": { - Type: schema.TypeList, - Computed: true, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeList, + Computed: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Description: `List of instance group URLs which have been assigned to the cluster.`, }, "master_version": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `The current version of the master in the cluster. This may be different than the min_master_version set in the config if the master has been updated by GKE.`, }, "services_ipv4_cidr": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `The IP address range of the Kubernetes services in this cluster, in CIDR notation (e.g. 1.2.3.4/29). Service addresses are typically put in the last /16 from the container CIDR.`, }, "ip_allocation_policy": { - Type: schema.TypeList, - MaxItems: 1, - ForceNew: true, - Optional: true, + Type: schema.TypeList, + MaxItems: 1, + ForceNew: true, + Optional: true, ConflictsWith: []string{"cluster_ipv4_cidr"}, + Description: `Configuration of cluster IP allocation for VPC-native clusters. 
Adding this block enables IP aliasing, making the cluster VPC-native instead of routes-based.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ // GKE creates/deletes secondary ranges in VPC @@ -706,6 +859,7 @@ func resourceContainerCluster() *schema.Resource { ForceNew: true, ConflictsWith: ipAllocationRangeFields, DiffSuppressFunc: cidrOrSizeDiffSuppress, + Description: `The IP address range for the cluster pod IPs. Set to blank to have a range chosen with the default size. Set to /netmask (e.g. /14) to have a range chosen with a specific netmask. Set to a CIDR notation (e.g. 10.96.0.0/14) from the RFC-1918 private networks (e.g. 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16) to pick a specific range to use.`, }, "services_ipv4_cidr_block": { @@ -715,6 +869,7 @@ func resourceContainerCluster() *schema.Resource { ForceNew: true, ConflictsWith: ipAllocationRangeFields, DiffSuppressFunc: cidrOrSizeDiffSuppress, + Description: `The IP address range of the services IPs in this cluster. Set to blank to have a range chosen with the default size. Set to /netmask (e.g. /14) to have a range chosen with a specific netmask. Set to a CIDR notation (e.g. 10.96.0.0/14) from the RFC-1918 private networks (e.g. 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16) to pick a specific range to use.`, }, // User manages secondary ranges manually @@ -724,6 +879,7 @@ func resourceContainerCluster() *schema.Resource { Computed: true, ForceNew: true, ConflictsWith: ipAllocationCidrBlockFields, + Description: `The name of the existing secondary range in the cluster's subnetwork to use for cluster pod IPs. Alternatively, cluster_ipv4_cidr_block can be used to automatically create a GKE-managed one.`, }, "services_secondary_range_name": { @@ -732,6 +888,7 @@ func resourceContainerCluster() *schema.Resource { Computed: true, ForceNew: true, ConflictsWith: ipAllocationCidrBlockFields, + Description: `The name of the existing secondary range in the cluster's subnetwork to use for service ClusterIPs. Alternatively, services_ipv4_cidr_block can be used to automatically create a GKE-managed one.`, }, "subnetwork_name": { @@ -751,9 +908,21 @@ func resourceContainerCluster() *schema.Resource { }, }, +<% unless version == 'ga' -%> + "networking_mode": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + ValidateFunc: validation.StringInSlice([]string{"VPC_NATIVE", "ROUTES"}, false), + Description: `Determines whether alias IPs or routes will be used for pod IPs in the cluster.`, + }, +<% end -%> + "remove_default_node_pool": { - Type: schema.TypeBool, - Optional: true, + Type: schema.TypeBool, + Optional: true, + Description: `If true, deletes the default node pool upon cluster creation. 
If you're using google_container_node_pool resources with no default node pool, this should be set to true, alongside setting initial_node_count to at least 1.`, }, "private_cluster_config": { @@ -762,180 +931,252 @@ func resourceContainerCluster() *schema.Resource { Optional: true, Computed: true, DiffSuppressFunc: containerClusterPrivateClusterConfigSuppress, + Description: `Configuration for private clusters, clusters with private nodes.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "enable_private_endpoint": { - Type: schema.TypeBool, - Required: true, - ForceNew: true, + Type: schema.TypeBool, + Required: true, + ForceNew: true, DiffSuppressFunc: containerClusterPrivateClusterConfigSuppress, + Description: `When true, the cluster's private endpoint is used as the cluster endpoint and access through the public endpoint is disabled. When false, either endpoint can be used. This field only applies to private clusters, when enable_private_nodes is true.`, }, "enable_private_nodes": { - Type: schema.TypeBool, - Optional: true, - ForceNew: true, + Type: schema.TypeBool, + Optional: true, + ForceNew: true, DiffSuppressFunc: containerClusterPrivateClusterConfigSuppress, + Description: `Enables the private cluster feature, creating a private endpoint on the cluster. In a private cluster, nodes only have RFC 1918 private addresses and communicate with the master's private endpoint via private networking.`, }, "master_ipv4_cidr_block": { Type: schema.TypeString, Optional: true, ForceNew: true, ValidateFunc: orEmpty(validation.CIDRNetwork(28, 28)), + Description: `The IP range in CIDR notation to use for the hosted master network. This range will be used for assigning private IP addresses to the cluster master(s) and the ILB VIP. This range must not overlap with any other ranges in use within the cluster's network, and it must be a /28 subnet. See Private Cluster Limitations for more details. 
This field only applies to private clusters, when enable_private_nodes is true.`, }, "peering_name": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `The name of the peering between this cluster and the Google owned VPC.`, }, "private_endpoint": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `The internal IP address of this cluster's master endpoint.`, }, "public_endpoint": { - Type: schema.TypeString, + Type: schema.TypeString, + Computed: true, + Description: `The external IP address of this cluster's master endpoint.`, + }, +<% unless version == 'ga' -%> + "master_global_access_config": { + Type: schema.TypeList, + MaxItems: 1, + Optional: true, Computed: true, + Description: "Controls cluster master global access settings.", + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "enabled": { + Type: schema.TypeBool, + Required: true, + Description: `Whether the cluster master is accessible globally or not.`, + }, + }, + }, }, +<% end -%> }, }, }, "resource_labels": { - Type: schema.TypeMap, - Optional: true, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeMap, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Description: `The GCE resource labels (a map of key/value pairs) to be applied to the cluster.`, }, "label_fingerprint": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `The fingerprint of the set of labels for this cluster.`, }, "default_max_pods_per_node": { - Type: schema.TypeInt, - Optional: true, - ForceNew: true, - Computed: true, + Type: schema.TypeInt, + Optional: true, + ForceNew: true, + Computed: true, + Description: `The default maximum number of pods per node in this cluster. This doesn't work on "routes-based" clusters, clusters that don't have IP Aliasing enabled.`, }, "vertical_pod_autoscaling": { + Type: schema.TypeList, + MaxItems: 1, + Optional: true, + Description: `Vertical Pod Autoscaling automatically adjusts the resources of pods controlled by it.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "enabled": { + Type: schema.TypeBool, + Required: true, + Description: `Enables vertical pod autoscaling.`, + }, + }, + }, + }, + "workload_identity_config": { Type: schema.TypeList, MaxItems: 1, Optional: true, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ - "enabled": { - Type: schema.TypeBool, + "identity_namespace": { + Type: schema.TypeString, Required: true, }, }, }, }, + "database_encryption": { + Type: schema.TypeList, + MaxItems: 1, + Optional: true, + Computed: true, + Description: `Application-layer Secrets Encryption settings. The object format is {state = string, key_name = string}. Valid values of state are: "ENCRYPTED"; "DECRYPTED". 
key_name is the name of a CloudKMS key.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "state": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringInSlice([]string{"ENCRYPTED", "DECRYPTED"}, false), + Description: `ENCRYPTED or DECRYPTED.`, + }, + "key_name": { + Type: schema.TypeString, + Optional: true, + Description: `The key to use to encrypt/decrypt secrets.`, + }, + }, + }, + }, + <% unless version == 'ga' -%> "release_channel": { - Type: schema.TypeList, - ForceNew: true, - Optional: true, - Computed: true, - MaxItems: 1, + Type: schema.TypeList, + Optional: true, + Computed: true, + MaxItems: 1, + Description: `Configuration options for the Release channel feature, which provide more control over automatic upgrades of your GKE clusters.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "channel": { Type: schema.TypeString, Required: true, - ForceNew: true, ValidateFunc: validation.StringInSlice([]string{"UNSPECIFIED", "RAPID", "REGULAR", "STABLE"}, false), DiffSuppressFunc: emptyOrDefaultStringSuppress("UNSPECIFIED"), + Description: `The selected release channel.`, }, }, }, }, - "workload_identity_config": { + + "tpu_ipv4_cidr_block": { + Computed: true, + Type: schema.TypeString, + Description: `The IP address range of the Cloud TPUs in this cluster, in CIDR notation (e.g. 1.2.3.4/29).`, + }, + + "cluster_telemetry": { Type: schema.TypeList, - MaxItems: 1, Optional: true, + Computed: true, + MaxItems: 1, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ - "identity_namespace": { - Type: schema.TypeString, - Required: true, + "type": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringInSlice([]string{"DISABLED", "ENABLED", "SYSTEM_ONLY"}, false), }, }, }, }, - - "tpu_ipv4_cidr_block": { - Computed: true, - Type: schema.TypeString, - }, - - "database_encryption": { - Type: schema.TypeList, + + "default_snat_status": { + Type: schema.TypeList, MaxItems: 1, Optional: true, - ForceNew: true, Computed: true, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "state": { - Type: schema.TypeString, - ForceNew: true, - Required: true, - ValidateFunc: validation.StringInSlice([]string{"ENCRYPTED", "DECRYPTED"}, false), - }, - "key_name": { - Type: schema.TypeString, - ForceNew: true, - Optional: true, + Description: `Whether the cluster disables default in-node sNAT rules. In-node sNAT rules will be disabled when defaultSnatStatus is disabled.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "disabled": { + Type: schema.TypeBool, + Required: true, + Description: `When disabled is set to false, default IP masquerade rules will be applied to the nodes to prevent sNAT on cluster internal traffic.`, }, }, }, }, <% end -%> + "enable_intranode_visibility": { + Type: schema.TypeBool, + Optional: true, + Description: `Whether Intra-node visibility is enabled for this cluster. This makes same-node pod-to-pod traffic visible to the VPC network.`, +<% if version == 'ga' -%> + Removed: "This field is in beta. Use it in the google-beta provider instead. 
See https://terraform.io/docs/providers/google/guides/provider_versions.html for more details.", + Computed: true, +<% else -%> + Default: false, +<% end -%> + }, "resource_usage_export_config": { - Type: schema.TypeList, - MaxItems: 1, - Optional: true, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "enable_network_egress_metering": { - Type: schema.TypeBool, - Optional: true, - Default: false, - }, - "enable_resource_consumption_metering": { - Type: schema.TypeBool, - Optional: true, - Default: true, - }, - "bigquery_destination": { - Type: schema.TypeList, - MaxItems: 1, - Required: true, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "dataset_id": { - Type: schema.TypeString, - Required: true, - }, - }, - }, + Type: schema.TypeList, + MaxItems: 1, + Optional: true, + Description: `Configuration for the ResourceUsageExportConfig feature.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "enable_network_egress_metering": { + Type: schema.TypeBool, + Optional: true, + Default: false, + Description: `Whether to enable network egress metering for this cluster. If enabled, a daemonset will be created in the cluster to meter network egress traffic.`, + }, + "enable_resource_consumption_metering": { + Type: schema.TypeBool, + Optional: true, + Default: true, + Description: `Whether to enable resource consumption metering on this cluster. When enabled, a table will be created in the resource export BigQuery dataset to store resource consumption data. The resulting table can be joined with the resource usage table or with BigQuery billing export. Defaults to true.`, + }, + "bigquery_destination": { + Type: schema.TypeList, + MaxItems: 1, + Required: true, + Description: `Parameters for using BigQuery as the destination of resource usage export.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "dataset_id": { + Type: schema.TypeString, + Required: true, + Description: `The ID of a BigQuery Dataset.`, }, + }, }, + }, }, + }, }, - "enable_intranode_visibility": { - Type: schema.TypeBool, - Optional: true, -<% if version == 'ga' -%> - Removed: "This field is in beta. Use it in the the google-beta provider instead. 
See https://terraform.io/docs/providers/google/guides/provider_versions.html for more details.", - Computed: true, -<% else -%> - Default: false, -<% end -%> - }, }, } } @@ -1012,6 +1253,18 @@ func resourceContainerClusterCreate(d *schema.ResourceData, meta interface{}) er clusterName := d.Get("name").(string) +<% unless version == 'ga' -%> + ipAllocationBlock, err := expandIPAllocationPolicy(d.Get("ip_allocation_policy"), d.Get("networking_mode").(string)) + if err != nil { + return err + } +<% else -%> + ipAllocationBlock, err := expandIPAllocationPolicy(d.Get("ip_allocation_policy")) + if err != nil { + return err + } +<% end -%> + cluster := &containerBeta.Cluster{ Name: clusterName, InitialNodeCount: int64(d.Get("initial_node_count").(int)), @@ -1029,24 +1282,26 @@ func resourceContainerClusterCreate(d *schema.ResourceData, meta interface{}) er NetworkPolicy: expandNetworkPolicy(d.Get("network_policy")), AddonsConfig: expandClusterAddonsConfig(d.Get("addons_config")), EnableKubernetesAlpha: d.Get("enable_kubernetes_alpha").(bool), - IpAllocationPolicy: expandIPAllocationPolicy(d.Get("ip_allocation_policy")), + IpAllocationPolicy: ipAllocationBlock, PodSecurityPolicyConfig: expandPodSecurityPolicyConfig(d.Get("pod_security_policy_config")), Autoscaling: expandClusterAutoscaling(d.Get("cluster_autoscaling"), d), - BinaryAuthorization: &containerBeta.BinaryAuthorization{ + BinaryAuthorization: &containerBeta.BinaryAuthorization{ Enabled: d.Get("enable_binary_authorization").(bool), ForceSendFields: []string{"Enabled"}, }, -<% unless version == 'ga' -%> ShieldedNodes: &containerBeta.ShieldedNodes{ Enabled: d.Get("enable_shielded_nodes").(bool), ForceSendFields: []string{"Enabled"}, }, +<% unless version == 'ga' -%> ReleaseChannel: expandReleaseChannel(d.Get("release_channel")), + ClusterTelemetry: expandClusterTelemetry(d.Get("cluster_telemetry")), EnableTpu: d.Get("enable_tpu").(bool), NetworkConfig: &containerBeta.NetworkConfig{ EnableIntraNodeVisibility: d.Get("enable_intranode_visibility").(bool), - }, -<% end -%> + DefaultSnatStatus: expandDefaultSnatStatus(d.Get("default_snat_status")), + }, +<% end -%> MasterAuth: expandMasterAuth(d.Get("master_auth")), ResourceLabels: expandStringMap(d, "resource_labels"), } @@ -1090,7 +1345,7 @@ func resourceContainerClusterCreate(d *schema.ResourceData, meta interface{}) er } if v, ok := d.GetOk("subnetwork"); ok { - subnetwork, err := ParseSubnetworkFieldValue(v.(string), d, config) + subnetwork, err := parseRegionalFieldValue("subnetworks", v.(string), "project", "location", "location", d, config, true) // variant of ParseSubnetworkFieldValue if err != nil { return err } @@ -1131,7 +1386,6 @@ func resourceContainerClusterCreate(d *schema.ResourceData, meta interface{}) er cluster.VerticalPodAutoscaling = expandVerticalPodAutoscaling(v) } -<% unless version == 'ga' -%> if v, ok := d.GetOk("database_encryption"); ok { cluster.DatabaseEncryption = expandDatabaseEncryption(v) } @@ -1139,7 +1393,6 @@ func resourceContainerClusterCreate(d *schema.ResourceData, meta interface{}) er if v, ok := d.GetOk("workload_identity_config"); ok { cluster.WorkloadIdentityConfig = expandWorkloadIdentityConfig(v) } -<% end -%> if v, ok := d.GetOk("resource_usage_export_config"); ok { cluster.ResourceUsageExportConfig = expandResourceUsageExportConfig(v) @@ -1165,8 +1418,7 @@ func resourceContainerClusterCreate(d *schema.ResourceData, meta interface{}) er d.SetId(containerClusterFullName(project, location, clusterName)) // Wait until it's created - timeoutInMinutes 
:= int(d.Timeout(schema.TimeoutCreate).Minutes()) - waitErr := containerOperationWait(config, op, project, location, "creating GKE cluster", timeoutInMinutes) + waitErr := containerOperationWait(config, op, project, location, "creating GKE cluster", d.Timeout(schema.TimeoutCreate)) if waitErr != nil { // Check if the create operation failed because Terraform was prematurely terminated. If it was we can persist the // operation id to state so that a subsequent refresh of this resource will wait until the operation has terminated @@ -1209,7 +1461,7 @@ func resourceContainerClusterCreate(d *schema.ResourceData, meta interface{}) er if err != nil { return errwrap.Wrapf("Error deleting default node pool: {{err}}", err) } - err = containerOperationWait(config, op, project, location, "removing default node pool", timeoutInMinutes) + err = containerOperationWait(config, op, project, location, "removing default node pool", d.Timeout(schema.TimeoutCreate)) if err != nil { return errwrap.Wrapf("Error while waiting to delete default node pool: {{err}}", err) } @@ -1251,7 +1503,7 @@ func resourceContainerClusterRead(d *schema.ResourceData, meta interface{}) erro Name: operation, } d.Set("operation", "") - waitErr := containerOperationWait(config, op, project, location, "resuming GKE cluster", int(d.Timeout(schema.TimeoutCreate).Minutes())) + waitErr := containerOperationWait(config, op, project, location, "resuming GKE cluster", d.Timeout(schema.TimeoutRead)) if waitErr != nil { return waitErr } @@ -1301,18 +1553,22 @@ func resourceContainerClusterRead(d *schema.ResourceData, meta interface{}) erro return err } d.Set("enable_binary_authorization", cluster.BinaryAuthorization != nil && cluster.BinaryAuthorization.Enabled) -<% unless version == 'ga' -%> if cluster.ShieldedNodes != nil { d.Set("enable_shielded_nodes", cluster.ShieldedNodes.Enabled) } +<% unless version == 'ga' -%> d.Set("enable_tpu", cluster.EnableTpu) d.Set("tpu_ipv4_cidr_block", cluster.TpuIpv4CidrBlock) if err := d.Set("release_channel", flattenReleaseChannel(cluster.ReleaseChannel)); err != nil { return err } + + if err := d.Set("default_snat_status", flattenDefaultSnatStatus(cluster.NetworkConfig.DefaultSnatStatus)); err != nil { + return err + } d.Set("enable_intranode_visibility", cluster.NetworkConfig.EnableIntraNodeVisibility) -<% end -%> +<% end -%> if err := d.Set("authenticator_groups_config", flattenAuthenticatorGroupsConfig(cluster.AuthenticatorGroupsConfig)); err != nil { return err } @@ -1354,16 +1610,21 @@ func resourceContainerClusterRead(d *schema.ResourceData, meta interface{}) erro return err } -<% unless version == 'ga' -%> if err := d.Set("workload_identity_config", flattenWorkloadIdentityConfig(cluster.WorkloadIdentityConfig)); err != nil { return err } + if err := d.Set("database_encryption", flattenDatabaseEncryption(cluster.DatabaseEncryption)); err != nil { + return err + } + +<% unless version == 'ga' -%> + if err := d.Set("pod_security_policy_config", flattenPodSecurityPolicyConfig(cluster.PodSecurityPolicyConfig)); err != nil { return err } - if err := d.Set("database_encryption", flattenDatabaseEncryption(cluster.DatabaseEncryption)); err != nil { + if err := d.Set("cluster_telemetry", flattenClusterTelemetry(cluster.ClusterTelemetry)); err != nil { return err } <% end -%> @@ -1372,7 +1633,7 @@ func resourceContainerClusterRead(d *schema.ResourceData, meta interface{}) erro d.Set("label_fingerprint", cluster.LabelFingerprint) if err := d.Set("resource_usage_export_config", 
flattenResourceUsageExportConfig(cluster.ResourceUsageExportConfig)); err != nil { - return err + return err } return nil @@ -1392,7 +1653,6 @@ func resourceContainerClusterUpdate(d *schema.ResourceData, meta interface{}) er } clusterName := d.Get("name").(string) - timeoutInMinutes := int(d.Timeout(schema.TimeoutUpdate).Minutes()) if _, err := containerClusterAwaitRestingState(config, project, location, clusterName, d.Timeout(schema.TimeoutUpdate)); err != nil { return err @@ -1410,7 +1670,7 @@ func resourceContainerClusterUpdate(d *schema.ResourceData, meta interface{}) er return err } // Wait until it's updated - return containerOperationWait(config, op, project, location, updateDescription, timeoutInMinutes) + return containerOperationWait(config, op, project, location, updateDescription, d.Timeout(schema.TimeoutUpdate)) } } @@ -1493,7 +1753,6 @@ func resourceContainerClusterUpdate(d *schema.ResourceData, meta interface{}) er d.SetPartial("enable_binary_authorization") } -<% unless version == 'ga' -%> if d.HasChange("enable_shielded_nodes") { enabled := d.Get("enable_shielded_nodes").(bool) req := &containerBeta.UpdateClusterRequest{ @@ -1516,6 +1775,7 @@ func resourceContainerClusterUpdate(d *schema.ResourceData, meta interface{}) er d.SetPartial("enable_shielded_nodes") } +<% unless version == 'ga' -%> if d.HasChange("enable_intranode_visibility") { enabled := d.Get("enable_intranode_visibility").(bool) req := &containerBeta.UpdateClusterRequest{ @@ -1535,7 +1795,7 @@ func resourceContainerClusterUpdate(d *schema.ResourceData, meta interface{}) er } // Wait until it's updated - err = containerOperationWait(config, op, project, location, "updating GKE Intra Node Visibility", timeoutInMinutes) + err = containerOperationWait(config, op, project, location, "updating GKE Intra Node Visibility", d.Timeout(schema.TimeoutUpdate)) log.Println("[DEBUG] done updating enable_intranode_visibility") return err } @@ -1549,8 +1809,67 @@ func resourceContainerClusterUpdate(d *schema.ResourceData, meta interface{}) er d.SetPartial("enable_intranode_visibility") } + + if d.HasChange("default_snat_status") { + req := &containerBeta.UpdateClusterRequest{ + Update: &containerBeta.ClusterUpdate{ + DesiredDefaultSnatStatus: expandDefaultSnatStatus(d.Get("default_snat_status")), + }, + } + updateF := func() error { + log.Println("[DEBUG] updating default_snat_status") + name := containerClusterFullName(project, location, clusterName) + op, err := config.clientContainerBeta.Projects.Locations.Clusters.Update(name, req).Do() + if err != nil { + return err + } + + // Wait until it's updated + err = containerOperationWait(config, op, project, location, "updating GKE Default SNAT status", d.Timeout(schema.TimeoutUpdate)) + log.Println("[DEBUG] done updating default_snat_status") + return err + } + + // Call update serially. 
+ if err := lockedCall(lockKey, updateF); err != nil { + return err + } + + log.Printf("[INFO] GKE cluster %s Default SNAT status has been updated", d.Id()) + + d.SetPartial("default_snat_status") + } + + if d.HasChange("release_channel") { + req := &containerBeta.UpdateClusterRequest{ + Update: &containerBeta.ClusterUpdate{ + DesiredReleaseChannel: expandReleaseChannel(d.Get("release_channel")), + }, + } + updateF := func() error { + log.Println("[DEBUG] updating release_channel") + name := containerClusterFullName(project, location, clusterName) + op, err := config.clientContainerBeta.Projects.Locations.Clusters.Update(name, req).Do() + if err != nil { + return err + } + + // Wait until it's updated + err = containerOperationWait(config, op, project, location, "updating Release Channel", d.Timeout(schema.TimeoutUpdate)) + log.Println("[DEBUG] done updating release_channel") + return err + } + + // Call update serially. + if err := lockedCall(lockKey, updateF); err != nil { + return err + } + + log.Printf("[INFO] GKE cluster %s Release Channel has been updated to %#v", d.Id(), req.Update.DesiredReleaseChannel) + + d.SetPartial("release_channel") + } <% end -%> - if d.HasChange("maintenance_policy") { req := &containerBeta.SetMaintenancePolicyRequest{ MaintenancePolicy: expandMaintenancePolicy(d, meta), @@ -1565,7 +1884,7 @@ func resourceContainerClusterUpdate(d *schema.ResourceData, meta interface{}) er } // Wait until it's updated - return containerOperationWait(config, op, project, location, "updating GKE cluster maintenance policy", timeoutInMinutes) + return containerOperationWait(config, op, project, location, "updating GKE cluster maintenance policy", d.Timeout(schema.TimeoutUpdate)) } // Call update serially. @@ -1578,7 +1897,7 @@ func resourceContainerClusterUpdate(d *schema.ResourceData, meta interface{}) er d.SetPartial("maintenance_policy") } - if d.HasChange("node_locations") { + if d.HasChange("node_locations") { azSetOldI, azSetNewI := d.GetChange("node_locations") azSetNew := azSetNewI.(*schema.Set) azSetOld := azSetOldI.(*schema.Set) @@ -1643,7 +1962,7 @@ func resourceContainerClusterUpdate(d *schema.ResourceData, meta interface{}) er } // Wait until it's updated - err = containerOperationWait(config, op, project, location, "updating GKE legacy ABAC", timeoutInMinutes) + err = containerOperationWait(config, op, project, location, "updating GKE legacy ABAC", d.Timeout(schema.TimeoutUpdate)) log.Println("[DEBUG] done updating enable_legacy_abac") return err } @@ -1676,7 +1995,7 @@ func resourceContainerClusterUpdate(d *schema.ResourceData, meta interface{}) er } // Wait until it's updated - return containerOperationWait(config, op, project, location, "updating GKE logging+monitoring service", timeoutInMinutes) + return containerOperationWait(config, op, project, location, "updating GKE logging+monitoring service", d.Timeout(schema.TimeoutUpdate)) } // Call update serially. 
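Every wait in the update hunks above now receives the raw `time.Duration` from `d.Timeout(...)` instead of the old `timeoutInMinutes := int(d.Timeout(...).Minutes())` value. One concrete difference worth noting: the integer conversion truncated toward zero, so any sub-minute remainder of the configured budget was silently dropped. A minimal, self-contained sketch of that arithmetic (the durations are illustrative only):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// The old pattern converted the configured timeout to whole minutes.
	timeout := 90 * time.Second
	fmt.Println(int(timeout.Minutes())) // 1 -- the trailing 30s is truncated away

	// Timeouts under a minute collapsed to zero entirely.
	fmt.Println(int((45 * time.Second).Minutes())) // 0

	// Passing the time.Duration through unchanged preserves the full budget.
	fmt.Println(timeout) // 1m30s
}
```

Whether that truncation ever bit in practice isn't established by this diff alone, but passing the duration through keeps the wait helpers aligned with exactly what the user configured.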
@@ -1704,7 +2023,7 @@ func resourceContainerClusterUpdate(d *schema.ResourceData, meta interface{}) er } // Wait until it's updated - err = containerOperationWait(config, op, project, location, "updating GKE cluster network policy", timeoutInMinutes) + err = containerOperationWait(config, op, project, location, "updating GKE cluster network policy", d.Timeout(schema.TimeoutUpdate)) log.Println("[DEBUG] done updating network_policy") return err } @@ -1727,7 +2046,7 @@ func resourceContainerClusterUpdate(d *schema.ResourceData, meta interface{}) er return err } - if err := nodePoolUpdate(d, meta, nodePoolInfo, fmt.Sprintf("node_pool.%d.", i), timeoutInMinutes); err != nil { + if err := nodePoolUpdate(d, meta, nodePoolInfo, fmt.Sprintf("node_pool.%d.", i), d.Timeout(schema.TimeoutUpdate)); err != nil { return err } } @@ -1735,13 +2054,14 @@ func resourceContainerClusterUpdate(d *schema.ResourceData, meta interface{}) er } // The master must be updated before the nodes - if d.HasChange("min_master_version") { - desiredMasterVersion := d.Get("min_master_version").(string) - currentMasterVersion := d.Get("master_version").(string) - des, err := version.NewVersion(desiredMasterVersion) + // If set to "", skip this step - any master version satisfies that minimum. + if ver := d.Get("min_master_version").(string); d.HasChange("min_master_version") && ver != "" { + des, err := version.NewVersion(ver) if err != nil { return err } + + currentMasterVersion := d.Get("master_version").(string) cur, err := version.NewVersion(currentMasterVersion) if err != nil { return err @@ -1751,7 +2071,7 @@ func resourceContainerClusterUpdate(d *schema.ResourceData, meta interface{}) er if cur.LessThan(des) { req := &containerBeta.UpdateClusterRequest{ Update: &containerBeta.ClusterUpdate{ - DesiredMasterVersion: desiredMasterVersion, + DesiredMasterVersion: ver, }, } @@ -1760,7 +2080,7 @@ func resourceContainerClusterUpdate(d *schema.ResourceData, meta interface{}) er if err := lockedCall(lockKey, updateF); err != nil { return err } - log.Printf("[INFO] GKE cluster %s: master has been updated to %s", d.Id(), desiredMasterVersion) + log.Printf("[INFO] GKE cluster %s: master has been updated to %s", d.Id(), ver) } d.SetPartial("min_master_version") } @@ -1816,7 +2136,7 @@ func resourceContainerClusterUpdate(d *schema.ResourceData, meta interface{}) er } // Wait until it's updated - return containerOperationWait(config, op, project, location, "updating GKE image type", timeoutInMinutes) + return containerOperationWait(config, op, project, location, "updating GKE image type", d.Timeout(schema.TimeoutUpdate)) } // Call update serially. @@ -1853,7 +2173,7 @@ func resourceContainerClusterUpdate(d *schema.ResourceData, meta interface{}) er } // Wait until it's updated - return containerOperationWait(config, op, project, location, "updating master auth", timeoutInMinutes) + return containerOperationWait(config, op, project, location, "updating master auth", d.Timeout(schema.TimeoutUpdate)) } // Call update serially. 
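The reworked min_master_version handling above short-circuits when the field is set to the empty string (any master version satisfies an empty minimum) and only sends DesiredMasterVersion when the current master is strictly older than the requested minimum. For intuition, a small standalone example of the comparison semantics using github.com/hashicorp/go-version, the same library this file imports as `version`; both version strings are illustrative only:

```go
package main

import (
	"fmt"

	version "github.com/hashicorp/go-version"
)

func main() {
	// Hypothetical desired minimum, as min_master_version would supply it.
	des, err := version.NewVersion("1.16")
	if err != nil {
		panic(err)
	}

	// Hypothetical current master, as reported back by GKE; a fuzzy minimum
	// like "1.16" still compares correctly against the fully-qualified value.
	cur, err := version.NewVersion("1.15.12-gke.2")
	if err != nil {
		panic(err)
	}

	// Only a strictly older master triggers the upgrade request.
	if cur.LessThan(des) {
		fmt.Println("would send DesiredMasterVersion")
	}
}
```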
@@ -1885,6 +2205,31 @@ func resourceContainerClusterUpdate(d *schema.ResourceData, meta interface{}) er } } + if d.HasChange("database_encryption") { + c := d.Get("database_encryption") + req := &containerBeta.UpdateClusterRequest{ + Update: &containerBeta.ClusterUpdate{ + DesiredDatabaseEncryption: expandDatabaseEncryption(c), + }, + } + + updateF := func() error { + name := containerClusterFullName(project, location, clusterName) + op, err := config.clientContainerBeta.Projects.Locations.Clusters.Update(name, req).Do() + if err != nil { + return err + } + // Wait until it's updated + return containerOperationWait(config, op, project, location, "updating GKE cluster database encryption config", d.Timeout(schema.TimeoutUpdate)) + } + if err := lockedCall(lockKey, updateF); err != nil { + return err + } + log.Printf("[INFO] GKE cluster %s database encryption config has been updated", d.Id()) + + d.SetPartial("database_encryption") + } + <% unless version == 'ga' -%> if d.HasChange("pod_security_policy_config") { c := d.Get("pod_security_policy_config") @@ -1901,7 +2246,7 @@ func resourceContainerClusterUpdate(d *schema.ResourceData, meta interface{}) er return err } // Wait until it's updated - return containerOperationWait(config, op, project, location, "updating GKE cluster pod security policy config", timeoutInMinutes) + return containerOperationWait(config, op, project, location, "updating GKE cluster pod security policy config", d.Timeout(schema.TimeoutUpdate)) } if err := lockedCall(lockKey, updateF); err != nil { return err @@ -1910,6 +2255,7 @@ func resourceContainerClusterUpdate(d *schema.ResourceData, meta interface{}) er d.SetPartial("pod_security_policy_config") } +<% end -%> if d.HasChange("workload_identity_config") { // Because GKE uses a non-RESTful update function, when removing the @@ -1939,13 +2285,12 @@ func resourceContainerClusterUpdate(d *schema.ResourceData, meta interface{}) er d.SetPartial("workload_identity_config") } -<% end -%> if d.HasChange("resource_labels") { resourceLabels := d.Get("resource_labels").(map[string]interface{}) labelFingerprint := d.Get("label_fingerprint").(string) req := &containerBeta.SetLabelsRequest{ - ResourceLabels: convertStringMap(resourceLabels), + ResourceLabels: convertStringMap(resourceLabels), LabelFingerprint: labelFingerprint, } updateF := func() error { @@ -1956,7 +2301,7 @@ func resourceContainerClusterUpdate(d *schema.ResourceData, meta interface{}) er } // Wait until it's updated - return containerOperationWait(config, op, project, location, "updating GKE resource labels", timeoutInMinutes) + return containerOperationWait(config, op, project, location, "updating GKE resource labels", d.Timeout(schema.TimeoutUpdate)) } // Call update serially. 
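A pattern that recurs throughout this patch (the BinaryAuthorization and ShieldedNodes literals in create, expandDefaultSnatStatus, expandResourceUsageExportConfig, and others) is pairing a boolean field with ForceSendFields. The generated client in google.golang.org/api omits zero-valued fields when marshaling, so an explicit false would otherwise vanish from the request body; ForceSendFields pins the field into the JSON. A short runnable illustration using one of the types this diff already relies on:

```go
package main

import (
	"fmt"

	containerBeta "google.golang.org/api/container/v1beta1"
)

func main() {
	// A zero-valued bool is dropped by the generated MarshalJSON...
	snat := &containerBeta.DefaultSnatStatus{Disabled: false}
	b, _ := snat.MarshalJSON()
	fmt.Println(string(b)) // {}

	// ...unless it is pinned via ForceSendFields, which is what lets the
	// provider express "explicitly disabled" rather than "unset".
	snat.ForceSendFields = []string{"Disabled"}
	b, _ = snat.MarshalJSON()
	fmt.Println(string(b)) // {"disabled":false}
}
```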
@@ -1976,7 +2321,7 @@ func resourceContainerClusterUpdate(d *schema.ResourceData, meta interface{}) er } log.Printf("[WARN] Container cluster %q default node pool already removed, no change", d.Id()) } else { - err = containerOperationWait(config, op, project, location, "removing default node pool", timeoutInMinutes) + err = containerOperationWait(config, op, project, location, "removing default node pool", d.Timeout(schema.TimeoutUpdate)) if err != nil { return errwrap.Wrapf("Error deleting default node pool: {{err}}", err) } @@ -1984,32 +2329,64 @@ func resourceContainerClusterUpdate(d *schema.ResourceData, meta interface{}) er } if d.HasChange("resource_usage_export_config") { - c := d.Get("resource_usage_export_config") - req := &containerBeta.UpdateClusterRequest{ - Update: &containerBeta.ClusterUpdate{ - DesiredResourceUsageExportConfig: expandResourceUsageExportConfig(c), - }, - } + c := d.Get("resource_usage_export_config") + req := &containerBeta.UpdateClusterRequest{ + Update: &containerBeta.ClusterUpdate{ + DesiredResourceUsageExportConfig: expandResourceUsageExportConfig(c), + }, + } - updateF := func() error { - name := containerClusterFullName(project, location, clusterName) - op, err := config.clientContainerBeta.Projects.Locations.Clusters.Update(name, req).Do() - if err != nil { - return err - } - // Wait until it's updated - return containerOperationWait(config, op, project, location, "updating GKE cluster resource usage export config", timeoutInMinutes) - } - if err := lockedCall(lockKey, updateF); err != nil { - return err + updateF := func() error { + name := containerClusterFullName(project, location, clusterName) + op, err := config.clientContainerBeta.Projects.Locations.Clusters.Update(name, req).Do() + if err != nil { + return err } - log.Printf("[INFO] GKE cluster %s resource usage export config has been updated", d.Id()) + // Wait until it's updated + return containerOperationWait(config, op, project, location, "updating GKE cluster resource usage export config", d.Timeout(schema.TimeoutUpdate)) + } + if err := lockedCall(lockKey, updateF); err != nil { + return err + } + log.Printf("[INFO] GKE cluster %s resource usage export config has been updated", d.Id()) - d.SetPartial("resource_usage_export_config") + d.SetPartial("resource_usage_export_config") } d.Partial(false) +<% unless version == 'ga' -%> + if d.HasChange("cluster_telemetry") { + req := &containerBeta.UpdateClusterRequest{ + Update: &containerBeta.ClusterUpdate{ + DesiredClusterTelemetry: expandClusterTelemetry(d.Get("cluster_telemetry")), + }, + } + updateF := func() error { + log.Println("[DEBUG] updating cluster_telemetry") + name := containerClusterFullName(project, location, clusterName) + op, err := config.clientContainerBeta.Projects.Locations.Clusters.Update(name, req).Do() + if err != nil { + return err + } + + // Wait until it's updated + err = containerOperationWait(config, op, project, location, "updating Cluster Telemetry", d.Timeout(schema.TimeoutUpdate)) + log.Println("[DEBUG] done updating cluster_telemetry") + return err + } + + // Call update serially. 
+ if err := lockedCall(lockKey, updateF); err != nil { + return err + } + + log.Printf("[INFO] GKE cluster %s Cluster Telemetry has been updated to %#v", d.Id(), req.Update.DesiredClusterTelemetry) + + d.SetPartial("cluster_telemetry") + } +<% end -%> + if _, err := containerClusterAwaitRestingState(config, project, location, clusterName, d.Timeout(schema.TimeoutUpdate)); err != nil { return err } @@ -2031,7 +2408,6 @@ func resourceContainerClusterDelete(d *schema.ResourceData, meta interface{}) er } clusterName := d.Get("name").(string) - timeoutInMinutes := int(d.Timeout(schema.TimeoutDelete).Minutes()) if _, err := containerClusterAwaitRestingState(config, project, location, clusterName, d.Timeout(schema.TimeoutDelete)); err != nil { return err @@ -2065,7 +2441,7 @@ func resourceContainerClusterDelete(d *schema.ResourceData, meta interface{}) er } // Wait until it's deleted - waitErr := containerOperationWait(config, op, project, location, "deleting GKE cluster", timeoutInMinutes) + waitErr := containerOperationWait(config, op, project, location, "deleting GKE cluster", d.Timeout(schema.TimeoutDelete)) if waitErr != nil { return waitErr } @@ -2104,8 +2480,7 @@ func cleanFailedContainerCluster(d *schema.ResourceData, meta interface{}) error } // Wait until it's deleted - timeoutInMinutes := int(d.Timeout(schema.TimeoutDelete).Minutes()) - waitErr := containerOperationWait(config, op, project, location, "deleting GKE cluster", timeoutInMinutes) + waitErr := containerOperationWait(config, op, project, location, "deleting GKE cluster", d.Timeout(schema.TimeoutDelete)) if waitErr != nil { return waitErr } @@ -2159,6 +2534,10 @@ func getInstanceGroupUrlsFromManagerUrls(config *Config, igmUrls []string) ([]st } matches := instanceGroupManagerURL.FindStringSubmatch(u) instanceGroupManager, err := config.clientCompute.InstanceGroupManagers.Get(matches[1], matches[2], matches[3]).Do() + if isGoogleApiErrorWithCode(err, 404) { + // The IGM URL is stale; don't include it + continue + } if err != nil { return nil, fmt.Errorf("Error reading instance group manager returned as an instance group URL: %s", err) } @@ -2200,20 +2579,20 @@ func expandClusterAddonsConfig(configured interface{}) *containerBeta.AddonsConf } } -<% unless version == 'ga' -%> - if v, ok := config["istio_config"]; ok && len(v.([]interface{})) > 0 { + if v, ok := config["cloudrun_config"]; ok && len(v.([]interface{})) > 0 { addon := v.([]interface{})[0].(map[string]interface{}) - ac.IstioConfig = &containerBeta.IstioConfig{ + ac.CloudRunConfig = &containerBeta.CloudRunConfig{ Disabled: addon["disabled"].(bool), - Auth: addon["auth"].(string), ForceSendFields: []string{"Disabled"}, } } - if v, ok := config["cloudrun_config"]; ok && len(v.([]interface{})) > 0 { +<% unless version == 'ga' -%> + if v, ok := config["istio_config"]; ok && len(v.([]interface{})) > 0 { addon := v.([]interface{})[0].(map[string]interface{}) - ac.CloudRunConfig = &containerBeta.CloudRunConfig{ + ac.IstioConfig = &containerBeta.IstioConfig{ Disabled: addon["disabled"].(bool), + Auth: addon["auth"].(string), ForceSendFields: []string{"Disabled"}, } } @@ -2221,7 +2600,30 @@ func expandClusterAddonsConfig(configured interface{}) *containerBeta.AddonsConf if v, ok := config["dns_cache_config"]; ok && len(v.([]interface{})) > 0 { addon := v.([]interface{})[0].(map[string]interface{}) ac.DnsCacheConfig = &containerBeta.DnsCacheConfig{ - Enabled: addon["enabled"].(bool), + Enabled: addon["enabled"].(bool), + ForceSendFields: []string{"Enabled"}, + } + } + + if v, ok 
:= config["gce_persistent_disk_csi_driver_config"]; ok && len(v.([]interface{})) > 0 { + addon := v.([]interface{})[0].(map[string]interface{}) + ac.GcePersistentDiskCsiDriverConfig = &containerBeta.GcePersistentDiskCsiDriverConfig{ + Enabled: addon["enabled"].(bool), + ForceSendFields: []string{"Enabled"}, + } + } + + if v, ok := config["kalm_config"]; ok && len(v.([]interface{})) > 0 { + addon := v.([]interface{})[0].(map[string]interface{}) + ac.KalmConfig = &containerBeta.KalmConfig{ + Enabled: addon["enabled"].(bool), + ForceSendFields: []string{"Enabled"}, + } + } + if v, ok := config["config_connector_config"]; ok && len(v.([]interface{})) > 0 { + addon := v.([]interface{})[0].(map[string]interface{}) + ac.ConfigConnectorConfig = &containerBeta.ConfigConnectorConfig{ + Enabled: addon["enabled"].(bool), ForceSendFields: []string{"Enabled"}, } } @@ -2230,25 +2632,41 @@ func expandClusterAddonsConfig(configured interface{}) *containerBeta.AddonsConf return ac } -func expandIPAllocationPolicy(configured interface{}) *containerBeta.IPAllocationPolicy { +<% unless version == 'ga' -%> +func expandIPAllocationPolicy(configured interface{}, networkingMode string) (*containerBeta.IPAllocationPolicy, error) { +<% else -%> +func expandIPAllocationPolicy(configured interface{}) (*containerBeta.IPAllocationPolicy, error) { +<% end -%> l := configured.([]interface{}) if len(l) == 0 || l[0] == nil { +<% unless version == 'ga' -%> + if networkingMode == "VPC_NATIVE" { + return nil, fmt.Errorf("`ip_allocation_policy` block is required for VPC_NATIVE clusters.") + } +<% end -%> return &containerBeta.IPAllocationPolicy{ - UseIpAliases: false, + UseIpAliases: false, ForceSendFields: []string{"UseIpAliases"}, - } + }, nil } config := l[0].(map[string]interface{}) return &containerBeta.IPAllocationPolicy{ - UseIpAliases: true, +<% unless version == 'ga' -%> + UseIpAliases: networkingMode == "VPC_NATIVE" || networkingMode == "", +<% else -%> + UseIpAliases: true, +<% end -%> ClusterIpv4CidrBlock: config["cluster_ipv4_cidr_block"].(string), ServicesIpv4CidrBlock: config["services_ipv4_cidr_block"].(string), ClusterSecondaryRangeName: config["cluster_secondary_range_name"].(string), ServicesSecondaryRangeName: config["services_secondary_range_name"].(string), - ForceSendFields: []string{"UseIpAliases"}, - } + ForceSendFields: []string{"UseIpAliases"}, +<% unless version == 'ga' -%> + UseRoutes: networkingMode == "ROUTES", +<% end -%> + }, nil } func expandMaintenancePolicy(d *schema.ResourceData, meta interface{}) *containerBeta.MaintenancePolicy { @@ -2307,7 +2725,7 @@ func expandMaintenancePolicy(d *schema.ResourceData, meta interface{}) *containe RecurringWindow: &containerBeta.RecurringTimeWindow{ Window: &containerBeta.TimeWindow{ StartTime: rw["start_time"].(string), - EndTime: rw["end_time"].(string), + EndTime: rw["end_time"].(string), }, Recurrence: rw["recurrence"].(string), }, @@ -2355,9 +2773,9 @@ func expandClusterAutoscaling(configured interface{}, d *schema.ResourceData) *c EnableNodeAutoprovisioning: config["enabled"].(bool), ResourceLimits: resourceLimits, <% unless version == 'ga' -%> - AutoscalingProfile: config["autoscaling_profile"].(string), + AutoscalingProfile: config["autoscaling_profile"].(string), <% end -%> - AutoprovisioningNodePoolDefaults: expandAutoProvisioningDefaults(config["auto_provisioning_defaults"], d), + AutoprovisioningNodePoolDefaults: expandAutoProvisioningDefaults(config["auto_provisioning_defaults"], d), } } @@ -2368,10 +2786,20 @@ func 
expandAutoProvisioningDefaults(configured interface{}, d *schema.ResourceDa } config := l[0].(map[string]interface{}) - return &containerBeta.AutoprovisioningNodePoolDefaults{ + npd := &containerBeta.AutoprovisioningNodePoolDefaults{ OauthScopes: convertStringArr(config["oauth_scopes"].([]interface{})), ServiceAccount: config["service_account"].(string), } + +<% unless version == 'ga' -%> + cpu := config["min_cpu_platform"].(string) + // the only way to unset the field is to pass "automatic" as its value + if cpu == "" { + cpu = "automatic" + } + npd.MinCpuPlatform = cpu +<% end -%> + return npd } func expandAuthenticatorGroupsConfig(configured interface{}) *containerBeta.AuthenticatorGroupsConfig { @@ -2462,12 +2890,31 @@ func expandPrivateClusterConfig(configured interface{}) *containerBeta.PrivateCl } config := l[0].(map[string]interface{}) return &containerBeta.PrivateClusterConfig{ - EnablePrivateEndpoint: config["enable_private_endpoint"].(bool), - EnablePrivateNodes: config["enable_private_nodes"].(bool), - MasterIpv4CidrBlock: config["master_ipv4_cidr_block"].(string), - ForceSendFields: []string{"EnablePrivateEndpoint", "EnablePrivateNodes", "MasterIpv4CidrBlock"}, + EnablePrivateEndpoint: config["enable_private_endpoint"].(bool), + EnablePrivateNodes: config["enable_private_nodes"].(bool), + MasterIpv4CidrBlock: config["master_ipv4_cidr_block"].(string), +<% unless version == 'ga' -%> + MasterGlobalAccessConfig: expandPrivateClusterConfigMasterGlobalAccessConfig(config["master_global_access_config"]), + ForceSendFields: []string{"EnablePrivateEndpoint", "EnablePrivateNodes", "MasterIpv4CidrBlock", "MasterGlobalAccessConfig"}, +<% else -%> + ForceSendFields: []string{"EnablePrivateEndpoint", "EnablePrivateNodes", "MasterIpv4CidrBlock"}, +<% end -%> + } +} + +<% unless version == 'ga' -%> +func expandPrivateClusterConfigMasterGlobalAccessConfig(configured interface{}) *containerBeta.PrivateClusterMasterGlobalAccessConfig { + l := configured.([]interface{}) + if len(l) == 0 { + return nil + } + config := l[0].(map[string]interface{}) + return &containerBeta.PrivateClusterMasterGlobalAccessConfig{ + Enabled: config["enabled"].(bool), + ForceSendFields: []string{"Enabled"}, } } +<% end -%> func expandVerticalPodAutoscaling(configured interface{}) *containerBeta.VerticalPodAutoscaling { l := configured.([]interface{}) @@ -2480,6 +2927,18 @@ func expandVerticalPodAutoscaling(configured interface{}) *containerBeta.Vertica } } +func expandDatabaseEncryption(configured interface{}) *containerBeta.DatabaseEncryption { + l := configured.([]interface{}) + if len(l) == 0 { + return nil + } + config := l[0].(map[string]interface{}) + return &containerBeta.DatabaseEncryption{ + State: config["state"].(string), + KeyName: config["key_name"].(string), + } +} + <% unless version == 'ga' -%> func expandReleaseChannel(configured interface{}) *containerBeta.ReleaseChannel { l := configured.([]interface{}) @@ -2492,18 +2951,32 @@ func expandReleaseChannel(configured interface{}) *containerBeta.ReleaseChannel } } -func expandDatabaseEncryption(configured interface{}) *containerBeta.DatabaseEncryption { +func expandClusterTelemetry(configured interface{}) *containerBeta.ClusterTelemetry { l := configured.([]interface{}) - if len(l) == 0 { + if len(l) == 0 || l[0] == nil { return nil } config := l[0].(map[string]interface{}) - return &containerBeta.DatabaseEncryption{ - State: config["state"].(string), - KeyName: config["key_name"].(string), + return &containerBeta.ClusterTelemetry{ + Type: 
config["type"].(string), + } +} + +func expandDefaultSnatStatus(configured interface{}) *containerBeta.DefaultSnatStatus { + l := configured.([]interface{}) + if len(l) == 0 || l[0] == nil { + return nil } + config := l[0].(map[string]interface{}) + return &containerBeta.DefaultSnatStatus{ + Disabled: config["disabled"].(bool), + ForceSendFields: []string{"Disabled"}, + } + } +<% end -%> + func expandWorkloadIdentityConfig(configured interface{}) *containerBeta.WorkloadIdentityConfig { l := configured.([]interface{}) if len(l) == 0 || l[0] == nil { @@ -2514,7 +2987,6 @@ func expandWorkloadIdentityConfig(configured interface{}) *containerBeta.Workloa IdentityNamespace: config["identity_namespace"].(string), } } -<% end -%> func expandPodSecurityPolicyConfig(configured interface{}) *containerBeta.PodSecurityPolicyConfig { <% unless version == 'ga' -%> @@ -2547,28 +3019,29 @@ func expandDefaultMaxPodsConstraint(v interface{}) *containerBeta.MaxPodsConstra func expandResourceUsageExportConfig(configured interface{}) *containerBeta.ResourceUsageExportConfig { l := configured.([]interface{}) if len(l) == 0 || l[0] == nil { - return &containerBeta.ResourceUsageExportConfig{} + return &containerBeta.ResourceUsageExportConfig{} } resourceUsageConfig := l[0].(map[string]interface{}) result := &containerBeta.ResourceUsageExportConfig{ - EnableNetworkEgressMetering: resourceUsageConfig["enable_network_egress_metering"].(bool), - ConsumptionMeteringConfig: &containerBeta.ConsumptionMeteringConfig{ - Enabled: resourceUsageConfig["enable_resource_consumption_metering"].(bool), - ForceSendFields: []string{"Enabled"}, - }, - ForceSendFields: []string{"EnableNetworkEgressMetering"}, + EnableNetworkEgressMetering: resourceUsageConfig["enable_network_egress_metering"].(bool), + ConsumptionMeteringConfig: &containerBeta.ConsumptionMeteringConfig{ + Enabled: resourceUsageConfig["enable_resource_consumption_metering"].(bool), + ForceSendFields: []string{"Enabled"}, + }, + ForceSendFields: []string{"EnableNetworkEgressMetering"}, } if _, ok := resourceUsageConfig["bigquery_destination"]; ok { - if len(resourceUsageConfig["bigquery_destination"].([]interface{})) > 0 { - bigqueryDestination := resourceUsageConfig["bigquery_destination"].([]interface{})[0].(map[string]interface{}) - if _, ok := bigqueryDestination["dataset_id"]; ok { - result.BigqueryDestination = &containerBeta.BigQueryDestination{ - DatasetId: bigqueryDestination["dataset_id"].(string), - } - } + destinationArr := resourceUsageConfig["bigquery_destination"].([]interface{}) + if len(destinationArr) > 0 && destinationArr[0] != nil { + bigqueryDestination := destinationArr[0].(map[string]interface{}) + if _, ok := bigqueryDestination["dataset_id"]; ok { + result.BigqueryDestination = &containerBeta.BigQueryDestination{ + DatasetId: bigqueryDestination["dataset_id"].(string), + } } + } } return result } @@ -2617,6 +3090,14 @@ func flattenClusterAddonsConfig(c *containerBeta.AddonsConfig) []map[string]inte } } + if c.CloudRunConfig != nil { + result["cloudrun_config"] = []map[string]interface{}{ + { + "disabled": c.CloudRunConfig.Disabled, + }, + } + } + <% unless version == 'ga' -%> if c.IstioConfig != nil { result["istio_config"] = []map[string]interface{}{ @@ -2627,21 +3108,36 @@ func flattenClusterAddonsConfig(c *containerBeta.AddonsConfig) []map[string]inte } } - if c.CloudRunConfig != nil { - result["cloudrun_config"] = []map[string]interface{}{ + if c.DnsCacheConfig != nil { + result["dns_cache_config"] = []map[string]interface{}{ { - 
"disabled": c.CloudRunConfig.Disabled, + "enabled": c.DnsCacheConfig.Enabled, }, } } - if c.DnsCacheConfig != nil { - result["dns_cache_config"] = []map[string]interface{}{ - { - "enabled": c.DnsCacheConfig.Enabled, - }, - } - } + if c.GcePersistentDiskCsiDriverConfig != nil { + result["gce_persistent_disk_csi_driver_config"] = []map[string]interface{}{ + { + "enabled": c.GcePersistentDiskCsiDriverConfig.Enabled, + }, + } + } + + if c.KalmConfig != nil { + result["kalm_config"] = []map[string]interface{}{ + { + "enabled": c.KalmConfig.Enabled, + }, + } + } + if c.ConfigConnectorConfig != nil { + result["config_connector_config"] = []map[string]interface{}{ + { + "enabled": c.ConfigConnectorConfig.Enabled, + }, + } + } <% end -%> return []map[string]interface{}{result} } @@ -2677,15 +3173,31 @@ func flattenPrivateClusterConfig(c *containerBeta.PrivateClusterConfig) []map[st } return []map[string]interface{}{ { - "enable_private_endpoint": c.EnablePrivateEndpoint, - "enable_private_nodes": c.EnablePrivateNodes, - "master_ipv4_cidr_block": c.MasterIpv4CidrBlock, - "peering_name": c.PeeringName, - "private_endpoint": c.PrivateEndpoint, - "public_endpoint": c.PublicEndpoint, + "enable_private_endpoint": c.EnablePrivateEndpoint, + "enable_private_nodes": c.EnablePrivateNodes, + "master_ipv4_cidr_block": c.MasterIpv4CidrBlock, +<% unless version == 'ga' -%> + "master_global_access_config": flattenPrivateClusterConfigMasterGlobalAccessConfig(c.MasterGlobalAccessConfig), +<% end -%> + "peering_name": c.PeeringName, + "private_endpoint": c.PrivateEndpoint, + "public_endpoint": c.PublicEndpoint, + }, + } +} + +<% unless version == 'ga' -%> +func flattenPrivateClusterConfigMasterGlobalAccessConfig(c *containerBeta.PrivateClusterMasterGlobalAccessConfig) []map[string]interface{} { + if c == nil { + return nil + } + return []map[string]interface{}{ + { + "enabled": c.Enabled, }, } } +<% end -%> func flattenVerticalPodAutoscaling(c *containerBeta.VerticalPodAutoscaling) []map[string]interface{} { if c == nil { @@ -2714,6 +3226,27 @@ func flattenReleaseChannel(c *containerBeta.ReleaseChannel) []map[string]interfa return result } +func flattenClusterTelemetry(c *containerBeta.ClusterTelemetry) []map[string]interface{} { + result := []map[string]interface{}{} + if c != nil { + result = append(result, map[string]interface{}{ + "type": c.Type, + }) + } + return result +} + +func flattenDefaultSnatStatus(c *containerBeta.DefaultSnatStatus) []map[string]interface{} { + result := []map[string]interface{}{} + if c != nil { + result = append(result, map[string]interface{}{ + "disabled": c.Disabled, + }) + } + return result +} + +<% end -%> func flattenWorkloadIdentityConfig(c *containerBeta.WorkloadIdentityConfig) []map[string]interface{} { if c == nil { return nil @@ -2724,19 +3257,24 @@ func flattenWorkloadIdentityConfig(c *containerBeta.WorkloadIdentityConfig) []ma }, } } -<% end -%> func flattenIPAllocationPolicy(c *containerBeta.Cluster, d *schema.ResourceData, config *Config) []map[string]interface{} { // If IP aliasing isn't enabled, none of the values in this block can be set. 
if c == nil || c.IpAllocationPolicy == nil || !c.IpAllocationPolicy.UseIpAliases { +<% unless version == 'ga' -%> + d.Set("networking_mode", "ROUTES") +<% end -%> return nil } +<% unless version == 'ga' -%> + d.Set("networking_mode", "VPC_NATIVE") +<% end -%> p := c.IpAllocationPolicy return []map[string]interface{}{ { - "cluster_ipv4_cidr_block": p.ClusterIpv4CidrBlock, - "services_ipv4_cidr_block": p.ServicesIpv4CidrBlock, + "cluster_ipv4_cidr_block": p.ClusterIpv4CidrBlock, + "services_ipv4_cidr_block": p.ServicesIpv4CidrBlock, "cluster_secondary_range_name": p.ClusterSecondaryRangeName, "services_secondary_range_name": p.ServicesSecondaryRangeName, }, @@ -2765,7 +3303,7 @@ func flattenMaintenancePolicy(mp *containerBeta.MaintenancePolicy) []map[string] "recurring_window": []map[string]interface{}{ { "start_time": mp.Window.RecurringWindow.Window.StartTime, - "end_time": mp.Window.RecurringWindow.Window.EndTime, + "end_time": mp.Window.RecurringWindow.Window.EndTime, "recurrence": mp.Window.RecurringWindow.Recurrence, }, }, @@ -2836,6 +3374,9 @@ func flattenAutoProvisioningDefaults(a *containerBeta.AutoprovisioningNodePoolDe r := make(map[string]interface{}) r["oauth_scopes"] = a.OauthScopes r["service_account"] = a.ServiceAccount + <% unless version == 'ga' -%> + r["min_cpu_platform"] = a.MinCpuPlatform + <% end -%> return []map[string]interface{}{r} } @@ -2861,7 +3402,11 @@ func flattenMasterAuthorizedNetworksConfig(c *containerBeta.MasterAuthorizedNetw <% unless version == 'ga' -%> func flattenPodSecurityPolicyConfig(c *containerBeta.PodSecurityPolicyConfig) []map[string]interface{} { if c == nil { - return nil + return []map[string]interface{}{ + { + "enabled": false, + }, + } } return []map[string]interface{}{ { @@ -2869,30 +3414,30 @@ func flattenPodSecurityPolicyConfig(c *containerBeta.PodSecurityPolicyConfig) [] }, } } + <% end -%> func flattenResourceUsageExportConfig(c *containerBeta.ResourceUsageExportConfig) []map[string]interface{} { - if c == nil { - return nil - } + if c == nil { + return nil + } - enableResourceConsumptionMetering := false - if c.ConsumptionMeteringConfig != nil && c.ConsumptionMeteringConfig.Enabled == true { - enableResourceConsumptionMetering = true - } + enableResourceConsumptionMetering := false + if c.ConsumptionMeteringConfig != nil && c.ConsumptionMeteringConfig.Enabled == true { + enableResourceConsumptionMetering = true + } - return []map[string]interface{}{ - { - "enable_network_egress_metering": c.EnableNetworkEgressMetering, - "enable_resource_consumption_metering": enableResourceConsumptionMetering, - "bigquery_destination": []map[string]interface{}{ - {"dataset_id": c.BigqueryDestination.DatasetId}, - }, - }, - } + return []map[string]interface{}{ + { + "enable_network_egress_metering": c.EnableNetworkEgressMetering, + "enable_resource_consumption_metering": enableResourceConsumptionMetering, + "bigquery_destination": []map[string]interface{}{ + {"dataset_id": c.BigqueryDestination.DatasetId}, + }, + }, + } } -<% unless version == 'ga' -%> func flattenDatabaseEncryption(c *containerBeta.DatabaseEncryption) []map[string]interface{} { if c == nil { return nil @@ -2904,7 +3449,6 @@ func flattenDatabaseEncryption(c *containerBeta.DatabaseEncryption) []map[string }, } } -<% end -%> func resourceContainerClusterStateImporter(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*Config) @@ -2984,7 +3528,7 @@ func containerClusterAddedScopesSuppress(k, old, new string, d *schema.ResourceD } // combine what 
the default scopes are with what was passed - m := golangSetFromStringSlice(append(addedScopes, convertStringArr(n.([]interface{}))... )) + m := golangSetFromStringSlice(append(addedScopes, convertStringArr(n.([]interface{}))...)) combined := stringSliceFromGolangSet(m) // compare if the combined new scopes and default scopes differ from the old scopes @@ -3060,4 +3604,5 @@ func podSecurityPolicyCfgSuppress(k, old, new string, r *schema.ResourceData) bo } return false } + <% end -%> diff --git a/third_party/terraform/resources/resource_container_node_pool.go.erb b/third_party/terraform/resources/resource_container_node_pool.go.erb index f6e897c47ba1..8ed9aae21c14 100644 --- a/third_party/terraform/resources/resource_container_node_pool.go.erb +++ b/third_party/terraform/resources/resource_container_node_pool.go.erb @@ -42,33 +42,38 @@ func resourceContainerNodePool() *schema.Resource { schemaNodePool, map[string]*schema.Schema{ "project": { - Type: schema.TypeString, - Optional: true, - Computed: true, - ForceNew: true, + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + Description: `The ID of the project in which to create the node pool. If blank, the provider-configured project will be used.`, }, "cluster": { - Type: schema.TypeString, - Required: true, - ForceNew: true, + Type: schema.TypeString, + Required: true, + ForceNew: true, + Description: `The cluster to create the node pool for. Cluster must be present in the location provided for zonal clusters.`, }, "zone": { - Type: schema.TypeString, - Optional: true, - Removed: "use location instead", - Computed: true, + Type: schema.TypeString, + Optional: true, + Removed: "use location instead", + Computed: true, + Description: `The zone of the cluster.`, }, "region": { - Type: schema.TypeString, - Optional: true, - Removed: "use location instead", - Computed: true, + Type: schema.TypeString, + Optional: true, + Removed: "use location instead", + Computed: true, + Description: `The region of the cluster.`, }, "location": { - Type: schema.TypeString, - Optional: true, - Computed: true, - ForceNew: true, + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + Description: `The location (region or zone) of the cluster.`, }, }), } @@ -76,126 +81,141 @@ func resourceContainerNodePool() *schema.Resource { var schemaNodePool = map[string]*schema.Schema{ "autoscaling": &schema.Schema{ - Type: schema.TypeList, - Optional: true, - MaxItems: 1, + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Description: `Configuration required by cluster autoscaler to adjust the size of the node pool to the current cluster usage.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "min_node_count": &schema.Schema{ Type: schema.TypeInt, Required: true, ValidateFunc: validation.IntAtLeast(0), + Description: `Minimum number of nodes in the NodePool. Must be >=0 and <= max_node_count.`, }, "max_node_count": &schema.Schema{ Type: schema.TypeInt, Required: true, ValidateFunc: validation.IntAtLeast(1), + Description: `Maximum number of nodes in the NodePool. Must be >= min_node_count.`, }, }, }, }, "max_pods_per_node": &schema.Schema{ - Type: schema.TypeInt, - Optional: true, - ForceNew: true, - Computed: true, + Type: schema.TypeInt, + Optional: true, + ForceNew: true, + Computed: true, + Description: `The maximum number of pods per node in this node pool. Note that this does not work on node pools which are "route-based" - that is, node pools belonging to clusters that do not have IP Aliasing enabled.`, }, -<% unless version == 'ga' -%> "node_locations": { - Type: schema.TypeSet, - Optional: true, - Computed: true, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeSet, + Optional: true, + Computed: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Description: `The list of zones in which the node pool's nodes should be located. Nodes must be in the region of their regional cluster or in the same region as their cluster's zone for zonal clusters. If unspecified, the cluster-level node_locations will be used.`, }, -<% end -%> "upgrade_settings": { - Type: schema.TypeList, - Optional: true, - Computed: true, - MaxItems: 1, + Type: schema.TypeList, + Optional: true, + Computed: true, + MaxItems: 1, + Description: `Specify node upgrade settings to change how many nodes GKE attempts to upgrade at once. The number of nodes upgraded simultaneously is the sum of max_surge and max_unavailable. The maximum number of nodes upgraded simultaneously is limited to 20.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "max_surge": { Type: schema.TypeInt, Required: true, ValidateFunc: validation.IntAtLeast(0), + Description: `The number of additional nodes that can be added to the node pool during an upgrade. Increasing max_surge raises the number of nodes that can be upgraded simultaneously. Can be set to 0 or greater.`, }, "max_unavailable": { Type: schema.TypeInt, Required: true, ValidateFunc: validation.IntAtLeast(0), + Description: `The number of nodes that can be simultaneously unavailable during an upgrade. Increasing max_unavailable raises the number of nodes that can be upgraded in parallel. Can be set to 0 or greater.`, }, }, }, }, "initial_node_count": &schema.Schema{ - Type: schema.TypeInt, - Optional: true, - ForceNew: true, - Computed: true, + Type: schema.TypeInt, + Optional: true, + ForceNew: true, + Computed: true, + Description: `The initial number of nodes for the pool. In regional or multi-zonal clusters, this is the number of nodes per zone. Changing this will force recreation of the resource.`, }, "instance_group_urls": { - Type: schema.TypeList, - Computed: true, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeList, + Computed: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Description: `The resource URLs of the managed instance groups associated with this node pool.`, }, "management": { - Type: schema.TypeList, - Optional: true, - Computed: true, - MaxItems: 1, + Type: schema.TypeList, + Optional: true, + Computed: true, + MaxItems: 1, + Description: `Node management configuration, wherein auto-repair and auto-upgrade are configured.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "auto_repair": { - Type: schema.TypeBool, - Optional: true, - Default: false, + Type: schema.TypeBool, + Optional: true, + Default: false, + Description: `Whether the nodes will be automatically repaired.`, }, "auto_upgrade": { - Type: schema.TypeBool, - Optional: true, - Default: false, + Type: schema.TypeBool, + Optional: true, + Default: false, + Description: `Whether the nodes will be automatically upgraded.`, }, }, }, }, "name": &schema.Schema{ - Type: schema.TypeString, - Optional: true, - Computed: true, - ForceNew: true, + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + Description: `The name of the node pool. 
If left blank, Terraform will auto-generate a unique name.`, }, "name_prefix": &schema.Schema{ - Type: schema.TypeString, - Optional: true, - Computed: true, - ForceNew: true, + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + Description: `Creates a unique name for the node pool beginning with the specified prefix. Conflicts with name.`, }, - "node_config": schemaNodeConfig, + "node_config": schemaNodeConfig(), "node_count": { Type: schema.TypeInt, Optional: true, Computed: true, ValidateFunc: validation.IntAtLeast(0), + Description: `The number of nodes per instance group. This field can be used to update the number of nodes per instance group but should not be used alongside autoscaling.`, }, "version": { - Type: schema.TypeString, - Optional: true, - Computed: true, + Type: schema.TypeString, + Optional: true, + Computed: true, + Description: `The Kubernetes version for the nodes in this pool. Note that if this field and auto_upgrade are both specified, they will fight each other for what the node version should be, so setting both is highly discouraged. While a fuzzy version can be specified, it's recommended that you specify explicit versions as Terraform will see spurious diffs when fuzzy versions are used. See the google_container_engine_versions data source's version_prefix field to approximate fuzzy versions in a Terraform-compatible way.`, }, } @@ -269,6 +289,11 @@ func resourceContainerNodePoolCreate(d *schema.ResourceData, meta interface{}) e timeout := d.Timeout(schema.TimeoutCreate) startTime := time.Now() + // Set the ID before we attempt to create - that way, if we receive an error but + // the resource is created anyway, it will be refreshed on the next call to + // apply. + d.SetId(fmt.Sprintf("projects/%s/locations/%s/clusters/%s/nodePools/%s", nodePoolInfo.project, nodePoolInfo.location, nodePoolInfo.cluster, nodePool.Name)) + var operation *containerBeta.Operation err = resource.Retry(timeout, func() *resource.RetryError { operation, err = config.clientContainerBeta. @@ -289,11 +314,9 @@ func resourceContainerNodePoolCreate(d *schema.ResourceData, meta interface{}) e } timeout -= time.Since(startTime) - d.SetId(fmt.Sprintf("projects/%s/locations/%s/clusters/%s/nodePools/%s", nodePoolInfo.project, nodePoolInfo.location, nodePoolInfo.cluster, nodePool.Name)) - waitErr := containerOperationWait(config, operation, nodePoolInfo.project, - nodePoolInfo.location, "creating GKE NodePool", int(timeout.Minutes())) + nodePoolInfo.location, "creating GKE NodePool", timeout) if waitErr != nil { // The resource didn't actually create @@ -322,27 +345,13 @@ func resourceContainerNodePoolCreate(d *schema.ResourceData, meta interface{}) e func resourceContainerNodePoolRead(d *schema.ResourceData, meta interface{}) error { config := meta.(*Config) nodePoolInfo, err := extractNodePoolInformation(d, config) - - name := getNodePoolName(d.Id()) - if err != nil { return err } - var nodePool = &containerBeta.NodePool{} - err = resource.Retry(2*time.Minute, func() *resource.RetryError { - nodePool, err = config.clientContainerBeta. 
- Projects.Locations.Clusters.NodePools.Get(nodePoolInfo.fullyQualifiedName(name)).Do() - - if err != nil { - return resource.NonRetryableError(err) - } - if nodePool.Status != "RUNNING" { - return resource.RetryableError(fmt.Errorf("Nodepool %q has status %q with message %q", d.Get("name"), nodePool.Status, nodePool.StatusMessage)) - } - return nil - }) + name := getNodePoolName(d.Id()) + nodePool, err := config.clientContainerBeta.Projects.Locations.Clusters.NodePools.Get(nodePoolInfo.fullyQualifiedName(name)).Do() if err != nil { return handleNotFoundError(err, d, fmt.Sprintf("NodePool %q from cluster %q", name, nodePoolInfo.cluster)) } @@ -364,7 +373,6 @@ func resourceContainerNodePoolRead(d *schema.ResourceData, meta interface{}) err func resourceContainerNodePoolUpdate(d *schema.ResourceData, meta interface{}) error { config := meta.(*Config) - timeoutInMinutes := int(d.Timeout(schema.TimeoutUpdate).Minutes()) nodePoolInfo, err := extractNodePoolInformation(d, config) if err != nil { @@ -378,7 +386,7 @@ func resourceContainerNodePoolUpdate(d *schema.ResourceData, meta interface{}) e } d.Partial(true) - if err := nodePoolUpdate(d, meta, nodePoolInfo, "", timeoutInMinutes); err != nil { + if err := nodePoolUpdate(d, meta, nodePoolInfo, "", d.Timeout(schema.TimeoutUpdate)); err != nil { return err } d.Partial(false) @@ -406,25 +414,27 @@ func resourceContainerNodePoolDelete(d *schema.ResourceData, meta interface{}) e return err } - timeoutInMinutes := int(d.Timeout(schema.TimeoutDelete).Minutes()) - mutexKV.Lock(nodePoolInfo.lockKey()) defer mutexKV.Unlock(nodePoolInfo.lockKey()) - var op = &containerBeta.Operation{} - var count = 0 - err = resource.Retry(30*time.Second, func() *resource.RetryError { - count++ - op, err = config.clientContainerBeta.Projects.Locations. - Clusters.NodePools.Delete(nodePoolInfo.fullyQualifiedName(name)).Do() + + timeout := d.Timeout(schema.TimeoutDelete) + startTime := time.Now() + + var operation *containerBeta.Operation + err = resource.Retry(timeout, func() *resource.RetryError { + operation, err = config.clientContainerBeta. + Projects.Locations.Clusters.NodePools.Delete(nodePoolInfo.fullyQualifiedName(name)).Do() if err != nil { - return resource.RetryableError(err) + if isFailedPreconditionError(err) { + // We get failed precondition errors if the cluster is updating + // while we try to delete the node pool. 
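+ // Retrying on FAILED_PRECONDITION lets the delete go through once the
+ // in-flight cluster operation finishes; any other error is surfaced as
+ // non-retryable below so we fail fast instead of spinning until the timeout.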
+ return resource.RetryableError(err) + } + return resource.NonRetryableError(err) } - if count == 15 { - return resource.NonRetryableError(fmt.Errorf("Error retrying to delete node pool %s", name)) - } return nil }) @@ -432,8 +442,10 @@ func resourceContainerNodePoolDelete(d *schema.ResourceData, meta interface{}) e return fmt.Errorf("Error deleting NodePool: %s", err) } + timeout -= time.Since(startTime) + // Wait until it's deleted - waitErr := containerOperationWait(config, op, nodePoolInfo.project, nodePoolInfo.location, "deleting GKE NodePool", timeoutInMinutes) + waitErr := containerOperationWait(config, operation, nodePoolInfo.project, nodePoolInfo.location, "deleting GKE NodePool", timeout) if waitErr != nil { return waitErr } @@ -475,7 +487,7 @@ func resourceContainerNodePoolStateImporter(d *schema.ResourceData, meta interfa id, err := replaceVars(d, config, "projects/{{project}}/locations/{{location}}/clusters/{{cluster}}/nodePools/{{name}}") if err != nil { return nil, err -} + } d.SetId(id) @@ -510,21 +522,16 @@ func expandNodePool(d *schema.ResourceData, prefix string) (*containerBeta.NodeP nodeCount = nc.(int) } - -<% unless version == 'ga' -%> var locations []string if v, ok := d.GetOk("node_locations"); ok && v.(*schema.Set).Len() > 0 { locations = convertStringSet(v.(*schema.Set)) } -<% end -%> np := &containerBeta.NodePool{ Name: name, InitialNodeCount: int64(nodeCount), Config: expandNodeConfig(d.Get(prefix + "node_config")), -<% unless version == 'ga' -%> Locations: locations, -<% end -%> Version: d.Get(prefix + "version").(string), } @@ -578,6 +585,7 @@ func flattenNodePool(d *schema.ResourceData, config *Config, np *containerBeta.N // instance groups instead. They should all have the same size, but in case a resize // failed or something else strange happened, we'll just use the average size. 
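+ // As a hypothetical example: three IGMs reporting target sizes 3, 3, and 4
+ // (one resize still in flight) are recorded as (3+3+4)/3 = 3 nodes. IGMs that
+ // 404 below are treated as stale and skipped, so the average divides only by
+ // the managers we could actually read.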
size := 0
+ igmUrls := []string{}
for _, url := range np.InstanceGroupUrls {
// retrieve instance group manager (InstanceGroupUrls are actually URLs for InstanceGroupManagers)
matches := instanceGroupManagerURL.FindStringSubmatch(url)
@@ -585,21 +593,28 @@ func flattenNodePool(d *schema.ResourceData, config *Config, np *containerBeta.N
return nil, fmt.Errorf("Error reading instance group manager URL '%q'", url)
}
igm, err := config.clientComputeBeta.InstanceGroupManagers.Get(matches[1], matches[2], matches[3]).Do()
+ if isGoogleApiErrorWithCode(err, 404) {
+ // The IGM URL is stale; don't include it
+ continue
+ }
if err != nil {
return nil, fmt.Errorf("Error reading instance group manager returned as an instance group URL: %q", err)
}
size += int(igm.TargetSize)
+ igmUrls = append(igmUrls, url)
+ }
+ nodeCount := 0
+ if len(igmUrls) > 0 {
+ nodeCount = size / len(igmUrls)
}
nodePool := map[string]interface{}{
"name": np.Name,
"name_prefix": d.Get(prefix + "name_prefix"),
"initial_node_count": np.InitialNodeCount,
-<% unless version == 'ga' -%>
"node_locations": schema.NewSet(schema.HashString, convertStringArrToInterface(np.Locations)),
-<% end -%>
- "node_count": size / len(np.InstanceGroupUrls),
+ "node_count": nodeCount,
"node_config": flattenNodeConfig(np.Config),
- "instance_group_urls": np.InstanceGroupUrls,
+ "instance_group_urls": igmUrls,
"version": np.Version,
}
@@ -630,7 +645,7 @@ func flattenNodePool(d *schema.ResourceData, config *Config, np *containerBeta.N
if np.UpgradeSettings != nil {
nodePool["upgrade_settings"] = []map[string]interface{}{
{
- "max_surge": np.UpgradeSettings.MaxSurge,
+ "max_surge": np.UpgradeSettings.MaxSurge,
"max_unavailable": np.UpgradeSettings.MaxUnavailable,
},
}
@@ -641,7 +656,7 @@ func flattenNodePool(d *schema.ResourceData, config *Config, np *containerBeta.N
return nodePool, nil
}
-func nodePoolUpdate(d *schema.ResourceData, meta interface{}, nodePoolInfo *NodePoolInformation, prefix string, timeoutInMinutes int) error {
+func nodePoolUpdate(d *schema.ResourceData, meta interface{}, nodePoolInfo *NodePoolInformation, prefix string, timeout time.Duration) error {
config := meta.(*Config)
name := d.Get(prefix + "name").(string)
@@ -680,7 +695,7 @@ func nodePoolUpdate(d *schema.ResourceData, meta interface{}, nodePoolInfo *Node
return containerOperationWait(config, op,
nodePoolInfo.project,
nodePoolInfo.location,
"updating GKE node pool",
- timeoutInMinutes)
+ timeout)
}
// Call update serially.
@@ -714,7 +729,7 @@ func nodePoolUpdate(d *schema.ResourceData, meta interface{}, nodePoolInfo *Node
return containerOperationWait(config, op,
nodePoolInfo.project,
nodePoolInfo.location,
"updating GKE node pool",
- timeoutInMinutes)
+ timeout)
}
// Call update serially.
@@ -724,7 +739,40 @@ func nodePoolUpdate(d *schema.ResourceData, meta interface{}, nodePoolInfo *Node
log.Printf("[INFO] Updated image type in Node Pool %s", d.Id())
}
+<% unless version == 'ga' -%>
+ if d.HasChange(prefix + "node_config.0.workload_metadata_config") {
+ req := &containerBeta.UpdateNodePoolRequest{
+ NodePoolId: name,
+ WorkloadMetadataConfig: expandWorkloadMetadataConfig(
+ d.Get(prefix + "node_config.0.workload_metadata_config")),
+ }
+ if req.WorkloadMetadataConfig == nil {
+ req.ForceSendFields = []string{"WorkloadMetadataConfig"}
+ }
+ updateF := func() error {
+ op, err := config.clientContainerBeta.Projects.Locations.Clusters.NodePools.
+ Update(nodePoolInfo.fullyQualifiedName(name), req).Do() + if err != nil { + return err + } + + // Wait until it's updated + return containerOperationWait(config, op, + nodePoolInfo.project, + nodePoolInfo.location, + "updating GKE node pool workload_metadata_config", + timeout) + } + + // Call update serially. + if err := lockedCall(lockKey, updateF); err != nil { + return err + } + + log.Printf("[INFO] Updated workload_metadata_config for node pool %s", name) + } +<% end -%> if prefix == "" { d.SetPartial("node_config") } @@ -746,7 +794,7 @@ func nodePoolUpdate(d *schema.ResourceData, meta interface{}, nodePoolInfo *Node return containerOperationWait(config, op, nodePoolInfo.project, nodePoolInfo.location, "updating GKE node pool size", - timeoutInMinutes) + timeout) } // Call update serially. @@ -784,7 +832,7 @@ func nodePoolUpdate(d *schema.ResourceData, meta interface{}, nodePoolInfo *Node // Wait until it's updated return containerOperationWait(config, op, nodePoolInfo.project, - nodePoolInfo.location, "updating GKE node pool management", timeoutInMinutes) + nodePoolInfo.location, "updating GKE node pool management", timeout) } // Call update serially. @@ -815,7 +863,7 @@ func nodePoolUpdate(d *schema.ResourceData, meta interface{}, nodePoolInfo *Node // Wait until it's updated return containerOperationWait(config, op, nodePoolInfo.project, - nodePoolInfo.location, "updating GKE node pool version", timeoutInMinutes) + nodePoolInfo.location, "updating GKE node pool version", timeout) } // Call update serially. @@ -830,7 +878,6 @@ func nodePoolUpdate(d *schema.ResourceData, meta interface{}, nodePoolInfo *Node } } -<% unless version == 'ga' -%> if d.HasChange(prefix + "node_locations") { req := &containerBeta.UpdateNodePoolRequest{ Locations: convertStringSet(d.Get(prefix + "node_locations").(*schema.Set)), @@ -843,7 +890,7 @@ func nodePoolUpdate(d *schema.ResourceData, meta interface{}, nodePoolInfo *Node } // Wait until it's updated - return containerOperationWait(config, op, nodePoolInfo.project, nodePoolInfo.location, "updating GKE node pool node locations", timeoutInMinutes) + return containerOperationWait(config, op, nodePoolInfo.project, nodePoolInfo.location, "updating GKE node pool node locations", timeout) } // Call update serially. @@ -857,7 +904,6 @@ func nodePoolUpdate(d *schema.ResourceData, meta interface{}, nodePoolInfo *Node d.SetPartial("node_locations") } } -<% end -%> if d.HasChange(prefix + "upgrade_settings") { upgradeSettings := &containerBeta.UpgradeSettings{} @@ -877,7 +923,7 @@ func nodePoolUpdate(d *schema.ResourceData, meta interface{}, nodePoolInfo *Node } // Wait until it's updated - return containerOperationWait(config, op, nodePoolInfo.project, nodePoolInfo.location, "updating GKE node pool upgrade settings", timeoutInMinutes) + return containerOperationWait(config, op, nodePoolInfo.project, nodePoolInfo.location, "updating GKE node pool upgrade settings", timeout) } // Call update serially. diff --git a/third_party/terraform/resources/resource_container_registry.go b/third_party/terraform/resources/resource_container_registry.go index e843f168a3eb..74046a37630d 100644 --- a/third_party/terraform/resources/resource_container_registry.go +++ b/third_party/terraform/resources/resource_container_registry.go @@ -22,18 +22,21 @@ func resourceContainerRegistry() *schema.Resource { StateFunc: func(s interface{}) string { return strings.ToUpper(s.(string)) }, + Description: `The location of the registry. One of ASIA, EU, US or not specified. 
See the official documentation for more information on registry locations.`, }, "project": { - Type: schema.TypeString, - Optional: true, - Computed: true, - ForceNew: true, + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + Description: `The ID of the project in which the resource belongs. If it is not provided, the provider project is used.`, }, "bucket_self_link": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `The URI of the created resource.`, }, }, } diff --git a/third_party/terraform/resources/resource_dataflow_flex_template_job.go.erb b/third_party/terraform/resources/resource_dataflow_flex_template_job.go.erb new file mode 100644 index 000000000000..d84913427a83 --- /dev/null +++ b/third_party/terraform/resources/resource_dataflow_flex_template_job.go.erb @@ -0,0 +1,236 @@ +<% autogen_exception -%> +package google +<% unless version == 'ga' -%> + +import ( + "fmt" + "log" + "strings" + "time" + + "github.com/hashicorp/terraform-plugin-sdk/helper/resource" + "google.golang.org/api/googleapi" + + "github.com/hashicorp/terraform-plugin-sdk/helper/schema" + "github.com/hashicorp/terraform-plugin-sdk/helper/validation" + dataflow "google.golang.org/api/dataflow/v1b3" +) + +// NOTE: resource_dataflow_flex_template currently does not support updating existing jobs. +// Changing any non-computed field will result in the job being deleted (according to its +// on_delete policy) and recreated with the updated parameters. + +// resourceDataflowFlexTemplateJob defines the schema for Dataflow FlexTemplate jobs. +func resourceDataflowFlexTemplateJob() *schema.Resource { + return &schema.Resource{ + Create: resourceDataflowFlexTemplateJobCreate, + Read: resourceDataflowFlexTemplateJobRead, + Update: resourceDataflowFlexTemplateJobUpdate, + Delete: resourceDataflowFlexTemplateJobDelete, + Schema: map[string]*schema.Schema{ + + "container_spec_gcs_path": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "name": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "on_delete": { + Type: schema.TypeString, + ValidateFunc: validation.StringInSlice([]string{"cancel", "drain"}, false), + Optional: true, + Default: "cancel", + }, + + "labels": { + Type: schema.TypeMap, + Optional: true, + DiffSuppressFunc: resourceDataflowJobLabelDiffSuppress, + ForceNew: true, + }, + + "parameters": { + Type: schema.TypeMap, + Optional: true, + ForceNew: true, + }, + + "project": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + }, + + "job_id": { + Type: schema.TypeString, + Computed: true, + }, + + "state": { + Type: schema.TypeString, + Computed: true, + }, + }, + } +} + +// resourceDataflowFlexTemplateJobCreate creates a Flex Template Job from TF code. 
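+// It builds a LaunchFlexTemplateRequest from container_spec_gcs_path, name,
+// and parameters, launches the template through the Dataflow API, and records
+// the returned job ID before delegating to the read function for the rest of
+// the state.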
+func resourceDataflowFlexTemplateJobCreate(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + + project, err := getProject(d, config) + if err != nil { + return err + } + + region, err := getRegion(d, config) + if err != nil { + return err + } + + request := dataflow.LaunchFlexTemplateRequest{ + LaunchParameter: &dataflow.LaunchFlexTemplateParameter{ + ContainerSpecGcsPath: d.Get("container_spec_gcs_path").(string), + JobName: d.Get("name").(string), + Parameters: expandStringMap(d, "parameters"), + }, + } + + response, err := config.clientDataflow.Projects.Locations.FlexTemplates.Launch(project, region, &request).Do() + if err != nil { + return err + } + + job := response.Job + d.SetId(job.Id) + d.Set("job_id", job.Id) + + return resourceDataflowFlexTemplateJobRead(d, meta) +} + +// resourceDataflowFlexTemplateJobRead reads a Flex Template Job resource. +func resourceDataflowFlexTemplateJobRead(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + + project, err := getProject(d, config) + if err != nil { + return err + } + + region, err := getRegion(d, config) + if err != nil { + return err + } + + jobId := d.Id() + + job, err := resourceDataflowJobGetJob(config, project, region, jobId) + if err != nil { + return handleNotFoundError(err, d, fmt.Sprintf("Dataflow job %s", jobId)) + } + + d.Set("state", job.CurrentState) + d.Set("name", job.Name) + d.Set("project", project) + d.Set("labels", job.Labels) + + if _, ok := dataflowTerminalStatesMap[job.CurrentState]; ok { + log.Printf("[DEBUG] Removing resource '%s' because it is in state %s.\n", job.Name, job.CurrentState) + d.SetId("") + return nil + } + + return nil +} + +// resourceDataflowFlexTemplateJobUpdate is a blank method to enable updating +// the on_delete virtual field +func resourceDataflowFlexTemplateJobUpdate(d *schema.ResourceData, meta interface{}) error { + return nil +} + +func resourceDataflowFlexTemplateJobDelete(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + + project, err := getProject(d, config) + if err != nil { + return err + } + + region, err := getRegion(d, config) + if err != nil { + return err + } + + id := d.Id() + + requestedState, err := resourceDataflowJobMapRequestedState(d.Get("on_delete").(string)) + if err != nil { + return err + } + + // Retry updating the state while the job is not ready to be canceled/drained. + err = resource.Retry(time.Minute*time.Duration(15), func() *resource.RetryError { + // To terminate a dataflow job, we update the job with a requested + // terminal state. + job := &dataflow.Job{ + RequestedState: requestedState, + } + + _, updateErr := resourceDataflowJobUpdateJob(config, project, region, id, job) + if updateErr != nil { + gerr, isGoogleErr := updateErr.(*googleapi.Error) + if !isGoogleErr { + // If we have an error and it's not a google-specific error, we should go ahead and return. + return resource.NonRetryableError(updateErr) + } + + if strings.Contains(gerr.Message, "not yet ready for canceling") { + // Retry cancelling job if it's not ready. + // Sleep to avoid hitting update quota with repeated attempts. + time.Sleep(5 * time.Second) + return resource.RetryableError(updateErr) + } + + if strings.Contains(gerr.Message, "Job has terminated") { + // Job has already been terminated, skip. 
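+ // Returning nil here ends the retry loop early; the polling loop
+ // below still waits for the job to report a terminal state before
+ // the resource is removed from state.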
+ return nil + } + } + + return nil + }) + if err != nil { + return err + } + + // Wait for state to reach terminal state (canceled/drained/done) + _, ok := dataflowTerminalStatesMap[d.Get("state").(string)] + for !ok { + log.Printf("[DEBUG] Waiting for job with job state %q to terminate...", d.Get("state").(string)) + time.Sleep(5 * time.Second) + + err = resourceDataflowFlexTemplateJobRead(d, meta) + if err != nil { + return fmt.Errorf("Error while reading job to see if it was properly terminated: %v", err) + } + _, ok = dataflowTerminalStatesMap[d.Get("state").(string)] + } + + // Only remove the job from state if it's actually successfully canceled. + if _, ok := dataflowTerminalStatesMap[d.Get("state").(string)]; ok { + log.Printf("[DEBUG] Removing dataflow job with final state %q", d.Get("state").(string)) + d.SetId("") + return nil + } + return fmt.Errorf("Unable to cancel the dataflow job '%s' - final state was %q.", d.Id(), d.Get("state").(string)) +} + +<% end -%> diff --git a/third_party/terraform/resources/resource_dataflow_job.go b/third_party/terraform/resources/resource_dataflow_job.go index e2fc47281e1e..37315e38847a 100644 --- a/third_party/terraform/resources/resource_dataflow_job.go +++ b/third_party/terraform/resources/resource_dataflow_job.go @@ -6,6 +6,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/helper/schema" "github.com/hashicorp/terraform-plugin-sdk/helper/validation" @@ -44,55 +45,70 @@ func resourceDataflowJob() *schema.Resource { return &schema.Resource{ Create: resourceDataflowJobCreate, Read: resourceDataflowJobRead, + Update: resourceDataflowJobUpdateByReplacement, Delete: resourceDataflowJobDelete, + Timeouts: &schema.ResourceTimeout{ + Update: schema.DefaultTimeout(10 * time.Minute), + }, + CustomizeDiff: customdiff.All( + resourceDataflowJobTypeCustomizeDiff, + ), Schema: map[string]*schema.Schema{ "name": { Type: schema.TypeString, Required: true, - ForceNew: true, + // ForceNew applies to both stream and batch jobs + ForceNew: true, + Description: `A unique name for the resource, required by Dataflow.`, }, "template_gcs_path": { - Type: schema.TypeString, - Required: true, - ForceNew: true, + Type: schema.TypeString, + Required: true, + Description: `The GCS path to the Dataflow job template.`, }, "temp_gcs_location": { - Type: schema.TypeString, - Required: true, - ForceNew: true, + Type: schema.TypeString, + Required: true, + Description: `A writeable location on GCS for the Dataflow job to dump its temporary data.`, }, "zone": { Type: schema.TypeString, Optional: true, - ForceNew: true, + // ForceNew applies to both stream and batch jobs + ForceNew: true, + Description: `The zone in which the created job should run. If it is not provided, the provider zone is used.`, }, "region": { Type: schema.TypeString, Optional: true, - ForceNew: true, + // ForceNew applies to both stream and batch jobs + ForceNew: true, + Description: `The region in which the created job should run.`, }, "max_workers": { Type: schema.TypeInt, Optional: true, - ForceNew: true, + // ForceNew applies to both stream and batch jobs + ForceNew: true, + Description: `The number of workers permitted to work on the job. 
More workers may improve processing speed at additional cost.`, }, "parameters": { - Type: schema.TypeMap, - Optional: true, - ForceNew: true, + Type: schema.TypeMap, + Optional: true, + Description: `Key/Value pairs to be passed to the Dataflow job (as used in the template).`, }, "labels": { Type: schema.TypeMap, Optional: true, - ForceNew: true, DiffSuppressFunc: resourceDataflowJobLabelDiffSuppress, + Description: `User labels to be specified for the job. Keys and values should follow the restrictions specified in the labeling restrictions page. NOTE: Google-provided Dataflow templates often provide default labels that begin with goog-dataflow-provided. Unless explicitly set in config, these labels will be ignored to prevent diffs on re-apply.`, }, "on_delete": { @@ -100,65 +116,99 @@ func resourceDataflowJob() *schema.Resource { ValidateFunc: validation.StringInSlice([]string{"cancel", "drain"}, false), Optional: true, Default: "drain", - ForceNew: true, + Description: `One of "drain" or "cancel". Specifies behavior of deletion during terraform destroy.`, }, "project": { Type: schema.TypeString, Optional: true, Computed: true, - ForceNew: true, + // ForceNew applies to both stream and batch jobs + ForceNew: true, + Description: `The project in which the resource belongs.`, }, "state": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `The current state of the resource, selected from the JobState enum.`, }, "type": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `The type of this job, selected from the JobType enum.`, }, "service_account_email": { - Type: schema.TypeString, - Optional: true, - ForceNew: true, + Type: schema.TypeString, + Optional: true, + Description: `The Service Account email used to create the job.`, }, "network": { Type: schema.TypeString, Optional: true, - ForceNew: true, DiffSuppressFunc: compareSelfLinkOrResourceName, + Description: `The network to which VMs will be assigned. If it is not provided, "default" will be used.`, }, "subnetwork": { Type: schema.TypeString, Optional: true, - ForceNew: true, DiffSuppressFunc: compareSelfLinkOrResourceName, + Description: `The subnetwork to which VMs will be assigned. Should be of the form "regions/REGION/subnetworks/SUBNETWORK".`, }, "machine_type": { - Type: schema.TypeString, - Optional: true, - ForceNew: true, + Type: schema.TypeString, + Optional: true, + Description: `The machine type to use for the job.`, }, "ip_configuration": { Type: schema.TypeString, Optional: true, - ForceNew: true, + Description: `The configuration for VM IPs. Options are "WORKER_IP_PUBLIC" or "WORKER_IP_PRIVATE".`, ValidateFunc: validation.StringInSlice([]string{"WORKER_IP_PUBLIC", "WORKER_IP_PRIVATE", ""}, false), }, + "additional_experiments": { + Type: schema.TypeSet, + Optional: true, + Description: `List of experiments that should be used by the job. 
An example value is ["enable_stackdriver_agent_metrics"].`, + Elem: &schema.Schema{ + Type: schema.TypeString, + }, + }, + "job_id": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `The unique ID of this job.`, }, }, } } +func resourceDataflowJobTypeCustomizeDiff(d *schema.ResourceDiff, meta interface{}) error { + // All non-virtual fields are ForceNew for batch jobs + if d.Get("type") == "JOB_TYPE_BATCH" { + resourceSchema := resourceDataflowJob().Schema + for field := range resourceSchema { + if field == "on_delete" { + continue + } + // Labels map will likely have suppressed changes, so we check each key instead of the parent field + if field == "labels" { + resourceDataflowJobIterateMapForceNew(field, d) + } else if d.HasChange(field) { + d.ForceNew(field) + } + } + } + + return nil +} + func resourceDataflowJobCreate(d *schema.ResourceData, meta interface{}) error { config := meta.(*Config) @@ -167,29 +217,16 @@ func resourceDataflowJobCreate(d *schema.ResourceData, meta interface{}) error { return err } - zone, err := getZone(d, config) - if err != nil { - return err - } - region, err := getRegion(d, config) if err != nil { return err } params := expandStringMap(d, "parameters") - labels := expandStringMap(d, "labels") - env := dataflow.RuntimeEnvironment{ - MaxWorkers: int64(d.Get("max_workers").(int)), - Network: d.Get("network").(string), - ServiceAccountEmail: d.Get("service_account_email").(string), - Subnetwork: d.Get("subnetwork").(string), - TempLocation: d.Get("temp_gcs_location").(string), - MachineType: d.Get("machine_type").(string), - IpConfiguration: d.Get("ip_configuration").(string), - AdditionalUserLabels: labels, - Zone: zone, + env, err := resourceDataflowJobSetupEnv(d, config) + if err != nil { + return err } request := dataflow.CreateJobFromTemplateRequest{ @@ -235,6 +272,14 @@ func resourceDataflowJobRead(d *schema.ResourceData, meta interface{}) error { d.Set("project", project) d.Set("labels", job.Labels) + sdkPipelineOptions, err := ConvertToMap(job.Environment.SdkPipelineOptions) + if err != nil { + return err + } + optionsMap := sdkPipelineOptions["options"].(map[string]interface{}) + d.Set("template_gcs_path", optionsMap["templateLocation"]) + d.Set("temp_gcs_location", optionsMap["tempLocation"]) + if _, ok := dataflowTerminalStatesMap[job.CurrentState]; ok { log.Printf("[DEBUG] Removing resource '%s' because it is in state %s.\n", job.Name, job.CurrentState) d.SetId("") @@ -245,6 +290,57 @@ func resourceDataflowJobRead(d *schema.ResourceData, meta interface{}) error { return nil } +// Stream update method. 
Batch job changes should have been set to ForceNew via custom diff +func resourceDataflowJobUpdateByReplacement(d *schema.ResourceData, meta interface{}) error { + // Don't send an update request if only virtual fields have changes + if resourceDataflowJobIsVirtualUpdate(d) { + return nil + } + + config := meta.(*Config) + + project, err := getProject(d, config) + if err != nil { + return err + } + + region, err := getRegion(d, config) + if err != nil { + return err + } + + params := expandStringMap(d, "parameters") + + env, err := resourceDataflowJobSetupEnv(d, config) + if err != nil { + return err + } + + request := dataflow.LaunchTemplateParameters{ + JobName: d.Get("name").(string), + Parameters: params, + Environment: &env, + Update: true, + } + + var response *dataflow.LaunchTemplateResponse + err = retryTimeDuration(func() (updateErr error) { + response, updateErr = resourceDataflowJobLaunchTemplate(config, project, region, d.Get("template_gcs_path").(string), &request) + return updateErr + }, time.Minute*time.Duration(5), isDataflowJobUpdateRetryableError) + if err != nil { + return err + } + + if err := waitForDataflowJobToBeUpdated(d, config, response.Job.Id, d.Timeout(schema.TimeoutUpdate)); err != nil { + return fmt.Errorf("Error updating job with job ID %q: %v", d.Id(), err) + } + + d.SetId(response.Job.Id) + + return resourceDataflowJobRead(d, meta) +} + func resourceDataflowJobDelete(d *schema.ResourceData, meta interface{}) error { config := meta.(*Config) @@ -342,9 +438,9 @@ func resourceDataflowJobCreateJob(config *Config, project string, region string, func resourceDataflowJobGetJob(config *Config, project string, region string, id string) (*dataflow.Job, error) { if region == "" { - return config.clientDataflow.Projects.Jobs.Get(project, id).Do() + return config.clientDataflow.Projects.Jobs.Get(project, id).View("JOB_VIEW_ALL").Do() } - return config.clientDataflow.Projects.Locations.Jobs.Get(project, region, id).Do() + return config.clientDataflow.Projects.Locations.Jobs.Get(project, region, id).View("JOB_VIEW_ALL").Do() } func resourceDataflowJobUpdateJob(config *Config, project string, region string, id string, job *dataflow.Job) (*dataflow.Job, error) { @@ -353,3 +449,113 @@ func resourceDataflowJobUpdateJob(config *Config, project string, region string, } return config.clientDataflow.Projects.Locations.Jobs.Update(project, region, id, job).Do() } + +func resourceDataflowJobLaunchTemplate(config *Config, project string, region string, gcsPath string, request *dataflow.LaunchTemplateParameters) (*dataflow.LaunchTemplateResponse, error) { + if region == "" { + return config.clientDataflow.Projects.Templates.Launch(project, request).GcsPath(gcsPath).Do() + } + return config.clientDataflow.Projects.Locations.Templates.Launch(project, region, request).GcsPath(gcsPath).Do() +} + +func resourceDataflowJobSetupEnv(d *schema.ResourceData, config *Config) (dataflow.RuntimeEnvironment, error) { + zone, err := getZone(d, config) + if err != nil { + return dataflow.RuntimeEnvironment{}, err + } + + labels := expandStringMap(d, "labels") + + additionalExperiments := convertStringSet(d.Get("additional_experiments").(*schema.Set)) + + env := dataflow.RuntimeEnvironment{ + MaxWorkers: int64(d.Get("max_workers").(int)), + Network: d.Get("network").(string), + ServiceAccountEmail: d.Get("service_account_email").(string), + Subnetwork: d.Get("subnetwork").(string), + TempLocation: d.Get("temp_gcs_location").(string), + MachineType: d.Get("machine_type").(string), + IpConfiguration: 
d.Get("ip_configuration").(string), + AdditionalUserLabels: labels, + Zone: zone, + AdditionalExperiments: additionalExperiments, + } + return env, nil +} + +func resourceDataflowJobIterateMapForceNew(mapKey string, d *schema.ResourceDiff) { + obj := d.Get(mapKey).(map[string]interface{}) + for k := range obj { + entrySchemaKey := mapKey + "." + k + if d.HasChange(entrySchemaKey) { + // ForceNew must be called on the parent map to trigger + d.ForceNew(mapKey) + break + } + } +} + +func resourceDataflowJobIterateMapHasChange(mapKey string, d *schema.ResourceData) bool { + obj := d.Get(mapKey).(map[string]interface{}) + for k := range obj { + entrySchemaKey := mapKey + "." + k + if d.HasChange(entrySchemaKey) { + return true + } + } + return false +} + +func resourceDataflowJobIsVirtualUpdate(d *schema.ResourceData) bool { + // on_delete is the only virtual field + if d.HasChange("on_delete") { + // Check if other fields have changes, which would require an actual update request + resourceSchema := resourceDataflowJob().Schema + for field := range resourceSchema { + if field == "on_delete" { + continue + } + // Labels map will likely have suppressed changes, so we check each key instead of the parent field + if (field == "labels" && resourceDataflowJobIterateMapHasChange(field, d)) || + (field != "labels" && d.HasChange(field)) { + return false + } + } + // on_delete is changing, but nothing else + return true + } + + return false +} + +func waitForDataflowJobToBeUpdated(d *schema.ResourceData, config *Config, replacementJobID string, timeout time.Duration) error { + return resource.Retry(timeout, func() *resource.RetryError { + project, err := getProject(d, config) + if err != nil { + return resource.NonRetryableError(err) + } + + region, err := getRegion(d, config) + if err != nil { + return resource.NonRetryableError(err) + } + + replacementJob, err := resourceDataflowJobGetJob(config, project, region, replacementJobID) + if err != nil { + if isRetryableError(err) { + return resource.RetryableError(err) + } + return resource.NonRetryableError(err) + } + + state := replacementJob.CurrentState + switch state { + case "", "JOB_STATE_PENDING": + return resource.RetryableError(fmt.Errorf("the replacement job with ID %q has pending state %q.", replacementJobID, state)) + case "JOB_STATE_FAILED": + return resource.NonRetryableError(fmt.Errorf("the replacement job with ID %q failed with state %q.", replacementJobID, state)) + default: + log.Printf("[DEBUG] the replacement job with ID %q has state %q.", replacementJobID, state) + return nil + } + }) +} diff --git a/third_party/terraform/resources/resource_dataproc_cluster.go.erb b/third_party/terraform/resources/resource_dataproc_cluster.go.erb index c4839b896d11..03a0c97d7c44 100644 --- a/third_party/terraform/resources/resource_dataproc_cluster.go.erb +++ b/third_party/terraform/resources/resource_dataproc_cluster.go.erb @@ -55,6 +55,7 @@ var ( "cluster_config.0.autoscaling_config", <% unless version == 'ga' -%> "cluster_config.0.lifecycle_config", + "cluster_config.0.endpoint_config", <% end -%> } ) @@ -74,9 +75,10 @@ func resourceDataprocCluster() *schema.Resource { Schema: map[string]*schema.Schema{ "name": { - Type: schema.TypeString, - Required: true, - ForceNew: true, + Type: schema.TypeString, + Required: true, + ForceNew: true, + Description: `The name of the cluster, unique within the project and zone.`, ValidateFunc: func(v interface{}, k string) (ws []string, errors []error) { value := v.(string) @@ -101,17 +103,19 @@ func 
resourceDataprocCluster() *schema.Resource {
},
"project": {
- Type: schema.TypeString,
- Optional: true,
- Computed: true,
- ForceNew: true,
+ Type: schema.TypeString,
+ Optional: true,
+ Computed: true,
+ ForceNew: true,
+ Description: `The ID of the project in which the cluster will exist. If it is not provided, the provider project is used.`,
},
"region": {
- Type: schema.TypeString,
- Optional: true,
- Default: "global",
- ForceNew: true,
+ Type: schema.TypeString,
+ Optional: true,
+ Default: "global",
+ ForceNew: true,
+ Description: `The region in which the cluster and associated nodes will be created. Defaults to global.`,
},
"labels": {
@@ -121,14 +125,16 @@ func resourceDataprocCluster() *schema.Resource {
// GCP automatically adds two labels
// 'goog-dataproc-cluster-uuid'
// 'goog-dataproc-cluster-name'
- Computed: true,
+ Computed: true,
+ Description: `The list of labels (key/value pairs) to be applied to instances in the cluster. GCP generates some itself including goog-dataproc-cluster-name which is the name of the cluster.`,
},
"cluster_config": {
- Type: schema.TypeList,
- Optional: true,
- Computed: true,
- MaxItems: 1,
+ Type: schema.TypeList,
+ Optional: true,
+ Computed: true,
+ MaxItems: 1,
+ Description: `Allows you to configure various aspects of the cluster.`,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
@@ -137,6 +143,7 @@ func resourceDataprocCluster() *schema.Resource {
Optional: true,
AtLeastOneOf: clusterConfigKeys,
ForceNew: true,
+ Description: `The Cloud Storage staging bucket used to stage files, such as Hadoop jars, between client machines and the cluster. Note: If you don't explicitly specify a staging_bucket then GCP will auto create / assign one for you. However, you are not guaranteed an auto generated bucket which is solely dedicated to your cluster; it may be shared with other clusters in the same region/zone also choosing to use the auto generation option.`,
},
// If the user does not specify a staging bucket, GCP will allocate one automatically.
// The staging_bucket field provides a way for the user to supply their own
@@ -144,8 +151,9 @@
// the definitive bucket allocated and in use (either the user supplied one via
// staging_bucket, or the GCP generated one)
"bucket": {
- Type: schema.TypeString,
- Computed: true,
+ Type: schema.TypeString,
+ Computed: true,
+ Description: `The name of the Cloud Storage bucket ultimately used to house the staging data for the cluster. If staging_bucket is specified, it will contain this value, otherwise it will be the auto generated name.`,
},
"gce_cluster_config": {
@@ -154,6 +162,7 @@ func resourceDataprocCluster() *schema.Resource {
AtLeastOneOf: clusterConfigKeys,
Computed: true,
MaxItems: 1,
+ Description: `Common config settings for resources of Google Compute Engine cluster instances, applicable to all instances in the cluster.`,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
@@ -163,6 +172,7 @@ func resourceDataprocCluster() *schema.Resource {
Computed: true,
AtLeastOneOf: gceClusterConfigKeys,
ForceNew: true,
+ Description: `The GCP zone where your data is stored and used (i.e. where the master and the worker nodes will be created). If region is set to 'global' (default) then zone is mandatory, otherwise GCP is able to make use of Auto Zone Placement to determine this automatically for you.
Note: This setting additionally determines and restricts which computing resources are available for use with other configs such as cluster_config.master_config.machine_type and cluster_config.worker_config.machine_type.`, }, "network": { @@ -173,6 +183,7 @@ func resourceDataprocCluster() *schema.Resource { ForceNew: true, ConflictsWith: []string{"cluster_config.0.gce_cluster_config.0.subnetwork"}, DiffSuppressFunc: compareSelfLinkOrResourceName, + Description: `The name or self_link of the Google Compute Engine network to the cluster will be part of. Conflicts with subnetwork. If neither is specified, this defaults to the "default" network.`, }, "subnetwork": { @@ -182,6 +193,7 @@ func resourceDataprocCluster() *schema.Resource { ForceNew: true, ConflictsWith: []string{"cluster_config.0.gce_cluster_config.0.network"}, DiffSuppressFunc: compareSelfLinkOrResourceName, + Description: `The name or self_link of the Google Compute Engine subnetwork the cluster will be part of. Conflicts with network.`, }, "tags": { @@ -190,6 +202,7 @@ func resourceDataprocCluster() *schema.Resource { AtLeastOneOf: gceClusterConfigKeys, ForceNew: true, Elem: &schema.Schema{Type: schema.TypeString}, + Description: `The list of instance tags applied to instances in the cluster. Tags are used to identify valid sources or targets for network firewalls.`, }, "service_account": { @@ -197,6 +210,7 @@ func resourceDataprocCluster() *schema.Resource { Optional: true, AtLeastOneOf: gceClusterConfigKeys, ForceNew: true, + Description: `The service account to be used by the Node VMs. If not specified, the "default" service account is used.`, }, "service_account_scopes": { @@ -205,6 +219,7 @@ func resourceDataprocCluster() *schema.Resource { Computed: true, AtLeastOneOf: gceClusterConfigKeys, ForceNew: true, + Description: `The set of Google API scopes to be made available on all of the node VMs under the service_account specified. These can be either FQDNs, or scope aliases.`, Elem: &schema.Schema{ Type: schema.TypeString, StateFunc: func(v interface{}) string { @@ -220,6 +235,7 @@ func resourceDataprocCluster() *schema.Resource { AtLeastOneOf: gceClusterConfigKeys, ForceNew: true, Default: false, + Description: `By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. If set to true, all instances in the cluster will only have internal IP addresses. Note: Private Google Access (also known as privateIpGoogleAccess) must be enabled on the subnetwork that the cluster will be launched in.`, }, "metadata": { @@ -228,6 +244,7 @@ func resourceDataprocCluster() *schema.Resource { AtLeastOneOf: gceClusterConfigKeys, Elem: &schema.Schema{Type: schema.TypeString}, ForceNew: true, + Description: `A map of the Compute Engine metadata entries to add to all instances`, }, }, }, @@ -242,12 +259,14 @@ func resourceDataprocCluster() *schema.Resource { AtLeastOneOf: clusterConfigKeys, Computed: true, MaxItems: 1, + Description: `The Google Compute Engine config settings for the additional (aka preemptible) instances in a cluster.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "num_instances": { - Type: schema.TypeInt, - Optional: true, - Computed: true, + Type: schema.TypeInt, + Optional: true, + Computed: true, + Description: `Specifies the number of preemptible nodes to create. 
Defaults to 0.`, AtLeastOneOf: []string{ "cluster_config.0.preemptible_worker_config.0.num_instances", "cluster_config.0.preemptible_worker_config.0.disk_config", @@ -259,9 +278,10 @@ func resourceDataprocCluster() *schema.Resource { // "machine_type": { ... } // "min_cpu_platform": { ... } "disk_config": { - Type: schema.TypeList, - Optional: true, - Computed: true, + Type: schema.TypeList, + Optional: true, + Computed: true, + Description: `Disk Config`, AtLeastOneOf: []string{ "cluster_config.0.preemptible_worker_config.0.num_instances", "cluster_config.0.preemptible_worker_config.0.disk_config", @@ -276,6 +296,7 @@ func resourceDataprocCluster() *schema.Resource { Computed: true, AtLeastOneOf: preemptibleWorkerDiskConfigKeys, ForceNew: true, + Description: `The amount of local SSD disks that will be attached to each preemptible worker node. Defaults to 0.`, }, "boot_disk_size_gb": { @@ -285,6 +306,7 @@ func resourceDataprocCluster() *schema.Resource { AtLeastOneOf: preemptibleWorkerDiskConfigKeys, ForceNew: true, ValidateFunc: validation.IntAtLeast(10), + Description: `Size of the primary disk attached to each preemptible worker node, specified in GB. The smallest allowed disk size is 10GB. GCP will default to a predetermined computed value if not set (currently 500GB). Note: If SSDs are not attached, it also contains the HDFS data blocks and Hadoop working directories.`, }, "boot_disk_type": { @@ -294,15 +316,17 @@ func resourceDataprocCluster() *schema.Resource { ForceNew: true, ValidateFunc: validation.StringInSlice([]string{"pd-standard", "pd-ssd", ""}, false), Default: "pd-standard", + Description: `The disk type of the primary disk attached to each preemptible worker node. One of "pd-ssd" or "pd-standard". Defaults to "pd-standard".`, }, }, }, }, "instance_names": { - Type: schema.TypeList, - Computed: true, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeList, + Computed: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Description: `List of preemptible instance names which have been assigned to the cluster.`, }, }, }, @@ -311,8 +335,8 @@ func resourceDataprocCluster() *schema.Resource { "security_config": { Type: schema.TypeList, Optional: true, - Description: "Security related configuration", MaxItems: 1, + Description: `Security related configuration.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "kerberos_config": { @@ -364,8 +388,8 @@ Kerberos realm and the remote trusted realm, in a cross realm trust relationship Description: `The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.`, }, "keystore_password_uri": { - Type: schema.TypeString, - Optional: true, + Type: schema.TypeString, + Optional: true, Description: `The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc`, @@ -413,7 +437,7 @@ by Dataproc`, AtLeastOneOf: clusterConfigKeys, Computed: true, MaxItems: 1, - + Description: `The config settings for software inside the cluster.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "image_version": { @@ -423,19 +447,21 @@ by Dataproc`, AtLeastOneOf: clusterSoftwareConfigKeys, ForceNew: true, DiffSuppressFunc: dataprocImageVersionDiffSuppress, + Description: `The Cloud Dataproc image version to use for the cluster - this controls the sets of software versions installed onto the nodes when you create clusters. 
If not specified, defaults to the latest version.`, }, - "override_properties": { Type: schema.TypeMap, Optional: true, AtLeastOneOf: clusterSoftwareConfigKeys, ForceNew: true, Elem: &schema.Schema{Type: schema.TypeString}, + Description: `A list of override and additional properties (key/value pairs) used to modify various aspects of the common configuration files used when creating a cluster.`, }, "properties": { - Type: schema.TypeMap, - Computed: true, + Type: schema.TypeMap, + Computed: true, + Description: `A list of the properties used to set the daemon config files. This will include any values supplied by the user via cluster_config.software_config.override_properties`, }, // We have two versions of the properties field here because by default @@ -451,10 +477,11 @@ by Dataproc`, Type: schema.TypeSet, Optional: true, AtLeastOneOf: clusterSoftwareConfigKeys, + Description: `The set of optional components to activate on the cluster.`, Elem: &schema.Schema{ Type: schema.TypeString, - ValidateFunc: validation.StringInSlice([]string{"COMPONENT_UNSPECIFIED", "ANACONDA", "DRUID", "HIVE_WEBHCAT", - "JUPYTER", "KERBEROS", "PRESTO", "ZEPPELIN", "ZOOKEEPER"}, false), + ValidateFunc: validation.StringInSlice([]string{"COMPONENT_UNSPECIFIED", "ANACONDA", "DRUID", "HBASE", "HIVE_WEBHCAT", + "JUPYTER", "KERBEROS", "PRESTO", "RANGER", "SOLR", "ZEPPELIN", "ZOOKEEPER"}, false), }, }, }, @@ -466,19 +493,22 @@ by Dataproc`, Optional: true, AtLeastOneOf: clusterConfigKeys, ForceNew: true, + Description: `Commands to execute on each node after config is completed. You can specify multiple versions of these.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "script": { - Type: schema.TypeString, - Required: true, - ForceNew: true, + Type: schema.TypeString, + Required: true, + ForceNew: true, + Description: `The script to be executed during initialization of the cluster. The script must be a GCS file with a gs:// prefix.`, }, "timeout_sec": { - Type: schema.TypeInt, - Optional: true, - Default: 300, - ForceNew: true, + Type: schema.TypeInt, + Optional: true, + Default: 300, + ForceNew: true, + Description: `The maximum duration (in seconds) which script is allowed to take to execute its action. 
GCP will default to a predetermined computed value if not set (currently 300).`, }, }, }, @@ -488,11 +518,13 @@ by Dataproc`, Optional: true, AtLeastOneOf: clusterConfigKeys, MaxItems: 1, + Description: `The Customer managed encryption keys settings for the cluster.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "kms_key_name": { - Type: schema.TypeString, - Required: true, + Type: schema.TypeString, + Required: true, + Description: `The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.`, }, }, }, @@ -502,42 +534,48 @@ by Dataproc`, Optional: true, AtLeastOneOf: clusterConfigKeys, MaxItems: 1, + Description: `The autoscaling policy config associated with the cluster.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "policy_uri": { - Type: schema.TypeString, - Required: true, + Type: schema.TypeString, + Required: true, + Description: `The autoscaling policy used by the cluster.`, }, }, }, }, <% unless version == 'ga' -%> "lifecycle_config": { - Type: schema.TypeList, - Optional: true, - MaxItems: 1, - AtLeastOneOf: clusterConfigKeys, + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + AtLeastOneOf: clusterConfigKeys, + Description: `The settings for auto deletion cluster schedule.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "idle_delete_ttl": { - Type: schema.TypeString, - Optional: true, + Type: schema.TypeString, + Optional: true, + Description: `The duration to keep the cluster alive while idling (no jobs running). After this TTL, the cluster will be deleted. Valid range: [10m, 14d].`, AtLeastOneOf: []string{ "cluster_config.0.lifecycle_config.0.idle_delete_ttl", "cluster_config.0.lifecycle_config.0.auto_delete_time", }, }, "idle_start_time": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `Time when the cluster became idle (most recent job finished) and became eligible for deletion due to idleness.`, }, // the API also has the auto_delete_ttl option in its request, however, // the value is not returned in the response, rather the auto_delete_time // after calculating ttl with the update time is returned, thus, for now // we will only allow auto_delete_time to updated. "auto_delete_time": { - Type: schema.TypeString, - Optional: true, + Type: schema.TypeString, + Optional: true, + Description: `The time when cluster will be auto-deleted. A timestamp in RFC3339 UTC "Zulu" format, accurate to nanoseconds. Example: "2014-10-02T15:01:23.045123456Z".`, DiffSuppressFunc: timestampDiffSuppress(time.RFC3339Nano), AtLeastOneOf: []string{ "cluster_config.0.lifecycle_config.0.idle_delete_ttl", @@ -547,6 +585,29 @@ by Dataproc`, }, }, }, + "endpoint_config": { + Type: schema.TypeList, + Optional: true, + Computed: true, + MaxItems: 1, + Description: `The config settings for port access on the cluster. Structure defined below.`, + AtLeastOneOf: clusterConfigKeys, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "enable_http_port_access": { + Type: schema.TypeBool, + Required: true, + ForceNew: true, + Description: `The flag to enable http access to specific ports on the cluster from external sources (aka Component Gateway). Defaults to false.`, + }, + "http_ports": { + Type: schema.TypeMap, + Computed: true, + Description: `The map of port descriptions to URLs. 
Will only be populated if enable_http_port_access is true.`, + }, + }, + }, + }, <% end -%> }, }, @@ -557,12 +618,12 @@ by Dataproc`, func instanceConfigSchema(parent string) *schema.Schema { var instanceConfigKeys = []string{ - "cluster_config.0."+parent+".0.num_instances", - "cluster_config.0."+parent+".0.image_uri", - "cluster_config.0."+parent+".0.machine_type", - "cluster_config.0."+parent+".0.min_cpu_platform", - "cluster_config.0."+parent+".0.disk_config", - "cluster_config.0."+parent+".0.accelerators", + "cluster_config.0." + parent + ".0.num_instances", + "cluster_config.0." + parent + ".0.image_uri", + "cluster_config.0." + parent + ".0.machine_type", + "cluster_config.0." + parent + ".0.min_cpu_platform", + "cluster_config.0." + parent + ".0.disk_config", + "cluster_config.0." + parent + ".0.accelerators", } return &schema.Schema{ @@ -571,12 +632,14 @@ func instanceConfigSchema(parent string) *schema.Schema { Computed: true, AtLeastOneOf: clusterConfigKeys, MaxItems: 1, + Description: `The Google Compute Engine config settings for the master/worker instances in a cluster.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "num_instances": { Type: schema.TypeInt, Optional: true, Computed: true, + Description: `Specifies the number of master/worker nodes to create. If not specified, GCP will default to a predetermined computed value.`, AtLeastOneOf: instanceConfigKeys, }, @@ -586,6 +649,7 @@ func instanceConfigSchema(parent string) *schema.Schema { Computed: true, AtLeastOneOf: instanceConfigKeys, ForceNew: true, + Description: `The URI for the image to use for this master/worker`, }, "machine_type": { @@ -594,6 +658,7 @@ func instanceConfigSchema(parent string) *schema.Schema { Computed: true, AtLeastOneOf: instanceConfigKeys, ForceNew: true, + Description: `The name of a Google Compute Engine machine type to create for the master/worker`, }, "min_cpu_platform": { @@ -602,6 +667,7 @@ func instanceConfigSchema(parent string) *schema.Schema { Computed: true, AtLeastOneOf: instanceConfigKeys, ForceNew: true, + Description: `The name of a minimum generation of CPU family for the master/worker. If not specified, GCP will default to a predetermined computed value for each zone.`, }, "disk_config": { Type: schema.TypeList, @@ -609,41 +675,44 @@ func instanceConfigSchema(parent string) *schema.Schema { Computed: true, AtLeastOneOf: instanceConfigKeys, MaxItems: 1, - + Description: `Disk Config`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "num_local_ssds": { - Type: schema.TypeInt, - Optional: true, - Computed: true, + Type: schema.TypeInt, + Optional: true, + Computed: true, + Description: `The amount of local SSD disks that will be attached to each master cluster node. Defaults to 0.`, AtLeastOneOf: []string{ - "cluster_config.0."+parent+".0.disk_config.0.num_local_ssds", - "cluster_config.0."+parent+".0.disk_config.0.boot_disk_size_gb", - "cluster_config.0."+parent+".0.disk_config.0.boot_disk_type", + "cluster_config.0." + parent + ".0.disk_config.0.num_local_ssds", + "cluster_config.0." + parent + ".0.disk_config.0.boot_disk_size_gb", + "cluster_config.0." + parent + ".0.disk_config.0.boot_disk_type", }, - ForceNew: true, + ForceNew: true, }, "boot_disk_size_gb": { - Type: schema.TypeInt, - Optional: true, - Computed: true, + Type: schema.TypeInt, + Optional: true, + Computed: true, + Description: `Size of the primary disk attached to each node, specified in GB. 
The primary disk contains the boot volume and system libraries, and the smallest allowed disk size is 10GB. GCP will default to a predetermined computed value if not set (currently 500GB). Note: If SSDs are not attached, it also contains the HDFS data blocks and Hadoop working directories.`, AtLeastOneOf: []string{ - "cluster_config.0."+parent+".0.disk_config.0.num_local_ssds", - "cluster_config.0."+parent+".0.disk_config.0.boot_disk_size_gb", - "cluster_config.0."+parent+".0.disk_config.0.boot_disk_type", + "cluster_config.0." + parent + ".0.disk_config.0.num_local_ssds", + "cluster_config.0." + parent + ".0.disk_config.0.boot_disk_size_gb", + "cluster_config.0." + parent + ".0.disk_config.0.boot_disk_type", }, ForceNew: true, ValidateFunc: validation.IntAtLeast(10), }, "boot_disk_type": { - Type: schema.TypeString, - Optional: true, + Type: schema.TypeString, + Optional: true, + Description: `The disk type of the primary disk attached to each node. One of "pd-ssd" or "pd-standard". Defaults to "pd-standard".`, AtLeastOneOf: []string{ - "cluster_config.0."+parent+".0.disk_config.0.num_local_ssds", - "cluster_config.0."+parent+".0.disk_config.0.boot_disk_size_gb", - "cluster_config.0."+parent+".0.disk_config.0.boot_disk_type", + "cluster_config.0." + parent + ".0.disk_config.0.num_local_ssds", + "cluster_config.0." + parent + ".0.disk_config.0.boot_disk_size_gb", + "cluster_config.0." + parent + ".0.disk_config.0.boot_disk_type", }, ForceNew: true, ValidateFunc: validation.StringInSlice([]string{"pd-standard", "pd-ssd", ""}, false), @@ -660,12 +729,14 @@ func instanceConfigSchema(parent string) *schema.Schema { AtLeastOneOf: instanceConfigKeys, ForceNew: true, Elem: acceleratorsSchema(), + Description: `The Compute Engine accelerator (GPU) configuration for these instances. Can be specified multiple times.`, }, "instance_names": { - Type: schema.TypeList, - Computed: true, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeList, + Computed: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Description: `List of master/worker instance names which have been assigned to the cluster.`, }, }, }, @@ -677,15 +748,17 @@ func acceleratorsSchema() *schema.Resource { return &schema.Resource{ Schema: map[string]*schema.Schema{ "accelerator_type": { - Type: schema.TypeString, - Required: true, - ForceNew: true, + Type: schema.TypeString, + Required: true, + ForceNew: true, + Description: `The short name of the accelerator type to expose to this instance. For example, nvidia-tesla-k80.`, }, "accelerator_count": { - Type: schema.TypeInt, - Required: true, - ForceNew: true, + Type: schema.TypeInt, + Required: true, + ForceNew: true, + Description: `The number of the accelerator cards of this type exposed to this instance. 
Often restricted to one of 1, 2, 4, or 8.`, }, }, } @@ -730,8 +803,7 @@ func resourceDataprocClusterCreate(d *schema.ResourceData, meta interface{}) err d.SetId(fmt.Sprintf("projects/%s/regions/%s/clusters/%s", project, region, cluster.ClusterName)) // Wait until it's created - timeoutInMinutes := int(d.Timeout(schema.TimeoutCreate).Minutes()) - waitErr := dataprocClusterOperationWait(config, op, "creating Dataproc cluster", timeoutInMinutes) + waitErr := dataprocClusterOperationWait(config, op, "creating Dataproc cluster", d.Timeout(schema.TimeoutCreate)) if waitErr != nil { // The resource didn't actually create // Note that we do not remove the ID here - this resource tends to leave @@ -792,6 +864,10 @@ func expandClusterConfig(d *schema.ResourceData, config *Config) (*dataproc.Clus if cfg, ok := configOptions(d, "cluster_config.0.lifecycle_config"); ok { conf.LifecycleConfig = expandLifecycleConfig(cfg) } + + if cfg, ok := configOptions(d, "cluster_config.0.endpoint_config"); ok { + conf.EndpointConfig = expandEndpointConfig(cfg) + } <% end -%> if cfg, ok := configOptions(d, "cluster_config.0.master_config"); ok { @@ -874,7 +950,7 @@ func expandSecurityConfig(cfg map[string]interface{}) *dataproc.SecurityConfig { } func expandKerberosConfig(cfg map[string]interface{}) *dataproc.KerberosConfig { - conf := &dataproc.KerberosConfig{} + conf := &dataproc.KerberosConfig{} if v, ok := cfg["enable_kerberos"]; ok { conf.EnableKerberos = v.(bool) } @@ -885,13 +961,13 @@ func expandKerberosConfig(cfg map[string]interface{}) *dataproc.KerberosConfig { conf.KmsKeyUri = v.(string) } if v, ok := cfg["keystore_uri"]; ok { - conf.KeystoreUri= v.(string) + conf.KeystoreUri = v.(string) } if v, ok := cfg["truststore_uri"]; ok { conf.TruststoreUri = v.(string) } if v, ok := cfg["keystore_password_uri"]; ok { - conf.KeystorePasswordUri= v.(string) + conf.KeystorePasswordUri = v.(string) } if v, ok := cfg["key_password_uri"]; ok { conf.KeyPasswordUri = v.(string) @@ -974,6 +1050,15 @@ func expandLifecycleConfig(cfg map[string]interface{}) *dataproc.LifecycleConfig } return conf } + +func expandEndpointConfig(cfg map[string]interface{}) *dataproc.EndpointConfig { + conf := &dataproc.EndpointConfig{} + if v, ok := cfg["enable_http_port_access"]; ok { + conf.EnableHttpPortAccess = v.(bool) + } + return conf +} + <% end -%> func expandInitializationActions(v interface{}) []*dataproc.NodeInitializationAction { @@ -1083,7 +1168,6 @@ func resourceDataprocClusterUpdate(d *schema.ResourceData, meta interface{}) err region := d.Get("region").(string) clusterName := d.Get("name").(string) - timeoutInMinutes := int(d.Timeout(schema.TimeoutUpdate).Minutes()) cluster := &dataproc.Cluster{ ClusterName: clusterName, @@ -1155,7 +1239,7 @@ func resourceDataprocClusterUpdate(d *schema.ResourceData, meta interface{}) err } // Wait until it's updated - waitErr := dataprocClusterOperationWait(config, op, "updating Dataproc cluster ", timeoutInMinutes) + waitErr := dataprocClusterOperationWait(config, op, "updating Dataproc cluster ", d.Timeout(schema.TimeoutUpdate)) if waitErr != nil { return waitErr } @@ -1215,7 +1299,8 @@ func flattenClusterConfig(d *schema.ResourceData, cfg *dataproc.ClusterConfig) ( "encryption_config": flattenEncryptionConfig(d, cfg.EncryptionConfig), "autoscaling_config": flattenAutoscalingConfig(d, cfg.AutoscalingConfig), <% unless version == 'ga' -%> - "lifecycle_config": flattenLifecycleConfig(d, cfg.LifecycleConfig), + "lifecycle_config": flattenLifecycleConfig(d, cfg.LifecycleConfig), + 
"endpoint_config": flattenEndpointConfig(d, cfg.EndpointConfig), <% end -%> } @@ -1310,6 +1395,20 @@ func flattenLifecycleConfig(d *schema.ResourceData, lc *dataproc.LifecycleConfig return []map[string]interface{}{data} } + +func flattenEndpointConfig(d *schema.ResourceData, ec *dataproc.EndpointConfig) []map[string]interface{} { + if ec == nil { + return nil + } + + data := map[string]interface{}{ + "enable_http_port_access": ec.EnableHttpPortAccess, + "http_ports": ec.HttpPorts, + } + + return []map[string]interface{}{data} +} + <% end -%> func flattenAccelerators(accelerators []*dataproc.AcceleratorConfig) interface{} { @@ -1371,6 +1470,22 @@ func flattenGceClusterConfig(d *schema.ResourceData, gcc *dataproc.GceClusterCon } func flattenPreemptibleInstanceGroupConfig(d *schema.ResourceData, icg *dataproc.InstanceGroupConfig) []map[string]interface{} { + // if num_instances is 0, icg will always be returned nil. This means the + // server has discarded diskconfig etc. However, the only way to remove the + // preemptible group is to set the size to 0, because it's O+C. Many users + // won't remove the rest of the config (eg disk config). Therefore, we need to + // preserve the other set fields by using the old state to stop users from + // getting a diff. + if icg == nil { + icgSchema := d.Get("cluster_config.0.preemptible_worker_config") + log.Printf("[DEBUG] state of preemptible is %#v", icgSchema) + if v, ok := icgSchema.([]interface{}); ok && len(v) > 0 { + if m, ok := v[0].(map[string]interface{}); ok { + return []map[string]interface{}{m} + } + } + } + disk := map[string]interface{}{} data := map[string]interface{}{} @@ -1429,7 +1544,6 @@ func resourceDataprocClusterDelete(d *schema.ResourceData, meta interface{}) err region := d.Get("region").(string) clusterName := d.Get("name").(string) - timeoutInMinutes := int(d.Timeout(schema.TimeoutDelete).Minutes()) log.Printf("[DEBUG] Deleting Dataproc cluster %s", clusterName) op, err := config.clientDataprocBeta.Projects.Regions.Clusters.Delete( @@ -1439,7 +1553,7 @@ func resourceDataprocClusterDelete(d *schema.ResourceData, meta interface{}) err } // Wait until it's deleted - waitErr := dataprocClusterOperationWait(config, op, "deleting Dataproc cluster", timeoutInMinutes) + waitErr := dataprocClusterOperationWait(config, op, "deleting Dataproc cluster", d.Timeout(schema.TimeoutDelete)) if waitErr != nil { return waitErr } diff --git a/third_party/terraform/resources/resource_dataproc_job.go b/third_party/terraform/resources/resource_dataproc_job.go index e4ced4e41b63..e35d0039e8a5 100644 --- a/third_party/terraform/resources/resource_dataproc_job.go +++ b/third_party/terraform/resources/resource_dataproc_job.go @@ -25,34 +25,38 @@ func resourceDataprocJob() *schema.Resource { Schema: map[string]*schema.Schema{ "project": { - Type: schema.TypeString, - Optional: true, - Computed: true, - ForceNew: true, + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + Description: `The project in which the cluster can be found and jobs subsequently run against. If it is not provided, the provider project is used.`, }, // Ref: https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs#JobReference "region": { - Type: schema.TypeString, - Optional: true, - Default: "global", - ForceNew: true, + Type: schema.TypeString, + Optional: true, + Default: "global", + ForceNew: true, + Description: `The Cloud Dataproc region. 
This essentially determines which clusters are available for this job to be submitted to. If not specified, defaults to global.`, }, // If a job is still running, trying to delete a job will fail. Setting // this flag to true however will force the deletion by first cancelling // the job and then deleting it "force_delete": { - Type: schema.TypeBool, - Default: false, - Optional: true, + Type: schema.TypeBool, + Default: false, + Optional: true, + Description: `By default, you can only delete inactive jobs within Dataproc. Setting this to true, and calling destroy, will ensure that the job is first cancelled before issuing the delete.`, }, "reference": { - Type: schema.TypeList, - Optional: true, - Computed: true, - MaxItems: 1, + Type: schema.TypeList, + Optional: true, + Computed: true, + MaxItems: 1, + Description: `The reference of the job`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "job_id": { @@ -68,9 +72,10 @@ func resourceDataprocJob() *schema.Resource { }, "placement": { - Type: schema.TypeList, - Required: true, - MaxItems: 1, + Type: schema.TypeList, + Required: true, + MaxItems: 1, + Description: `The config of job placement.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "cluster_name": { @@ -89,9 +94,10 @@ func resourceDataprocJob() *schema.Resource { }, "status": { - Type: schema.TypeList, - Computed: true, - MaxItems: 1, + Type: schema.TypeList, + Computed: true, + MaxItems: 1, + Description: `The status of the job.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "state": { @@ -241,9 +247,8 @@ func resourceDataprocJobCreate(d *schema.ResourceData, meta interface{}) error { } d.SetId(fmt.Sprintf("projects/%s/regions/%s/jobs/%s", project, region, job.Reference.JobId)) - timeoutInMinutes := int(d.Timeout(schema.TimeoutCreate).Minutes()) waitErr := dataprocJobOperationWait(config, region, project, job.Reference.JobId, - "Creating Dataproc job", timeoutInMinutes, 1) + "Creating Dataproc job", d.Timeout(schema.TimeoutCreate)) if waitErr != nil { return waitErr } @@ -310,7 +315,6 @@ func resourceDataprocJobDelete(d *schema.ResourceData, meta interface{}) error { region := d.Get("region").(string) forceDelete := d.Get("force_delete").(bool) - timeoutInMinutes := int(d.Timeout(schema.TimeoutDelete).Minutes()) parts := strings.Split(d.Id(), "/") jobId := parts[len(parts)-1] @@ -323,7 +327,7 @@ func resourceDataprocJobDelete(d *schema.ResourceData, meta interface{}) error { _, _ = config.clientDataproc.Projects.Regions.Jobs.Cancel(project, region, jobId, &dataproc.CancelJobRequest{}).Do() waitErr := dataprocJobOperationWait(config, region, project, jobId, - "Cancelling Dataproc job", timeoutInMinutes, 1) + "Cancelling Dataproc job", d.Timeout(schema.TimeoutDelete)) if waitErr != nil { return waitErr } @@ -338,7 +342,7 @@ func resourceDataprocJobDelete(d *schema.ResourceData, meta interface{}) error { } waitErr := dataprocDeleteOperationWait(config, region, project, jobId, - "Deleting Dataproc job", timeoutInMinutes, 1) + "Deleting Dataproc job", d.Timeout(schema.TimeoutDelete)) if waitErr != nil { return waitErr } @@ -375,6 +379,7 @@ var pySparkSchema = &schema.Schema{ Optional: true, ForceNew: true, MaxItems: 1, + Description: `The config of pySpark job.`, ExactlyOneOf: []string{"pyspark_config", "spark_config", "hadoop_config", "hive_config", "pig_config", "sparksql_config"}, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ @@ -492,6 +497,7 @@ var sparkSchema = &schema.Schema{ Optional: true, ForceNew: true, MaxItems: 1, + 
Description: `The config of the Spark job.`, ExactlyOneOf: []string{"pyspark_config", "spark_config", "hadoop_config", "hive_config", "pig_config", "sparksql_config"}, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ @@ -500,6 +506,7 @@ var sparkSchema = &schema.Schema{ Type: schema.TypeString, Optional: true, ForceNew: true, + Description: `The class containing the main method of the driver. Must be in a provided jar or jar that is already on the classpath. Conflicts with main_jar_file_uri`, ExactlyOneOf: []string{"spark_config.0.main_class", "spark_config.0.main_jar_file_uri"}, }, @@ -507,42 +514,48 @@ var sparkSchema = &schema.Schema{ Type: schema.TypeString, Optional: true, ForceNew: true, + Description: `The HCFS URI of jar file containing the driver jar. Conflicts with main_class`, ExactlyOneOf: []string{"spark_config.0.main_jar_file_uri", "spark_config.0.main_class"}, }, "args": { - Type: schema.TypeList, - Optional: true, - ForceNew: true, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeList, + Optional: true, + ForceNew: true, + Description: `The arguments to pass to the driver.`, + Elem: &schema.Schema{Type: schema.TypeString}, }, "jar_file_uris": { - Type: schema.TypeList, - Optional: true, - ForceNew: true, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeList, + Optional: true, + ForceNew: true, + Description: `HCFS URIs of jar files to add to the CLASSPATHs of the Spark driver and tasks.`, + Elem: &schema.Schema{Type: schema.TypeString}, }, "file_uris": { - Type: schema.TypeList, - Optional: true, - ForceNew: true, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeList, + Optional: true, + ForceNew: true, + Description: `HCFS URIs of files to be copied to the working directory of Spark drivers and distributed tasks. Useful for naively parallel tasks.`, + Elem: &schema.Schema{Type: schema.TypeString}, }, "archive_uris": { - Type: schema.TypeList, - Optional: true, - ForceNew: true, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeList, + Optional: true, + ForceNew: true, + Description: `HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.`, + Elem: &schema.Schema{Type: schema.TypeString}, }, "properties": { - Type: schema.TypeMap, - Optional: true, - ForceNew: true, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeMap, + Optional: true, + ForceNew: true, + Description: `A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.`, + Elem: &schema.Schema{Type: schema.TypeString}, }, "logging_config": loggingConfig, @@ -605,6 +618,7 @@ var hadoopSchema = &schema.Schema{ Optional: true, ForceNew: true, MaxItems: 1, + Description: `The config of the Hadoop job.`, ExactlyOneOf: []string{"spark_config", "pyspark_config", "hadoop_config", "hive_config", "pig_config", "sparksql_config"}, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ @@ -613,6 +627,7 @@ var hadoopSchema = &schema.Schema{ Type: schema.TypeString, Optional: true, ForceNew: true, + Description: `The class containing the main method of the driver. Must be in a provided jar or jar that is already on the classpath.
Conflicts with main_jar_file_uri`, ExactlyOneOf: []string{"hadoop_config.0.main_jar_file_uri", "hadoop_config.0.main_class"}, }, @@ -620,42 +635,48 @@ var hadoopSchema = &schema.Schema{ Type: schema.TypeString, Optional: true, ForceNew: true, + Description: `The HCFS URI of jar file containing the driver jar. Conflicts with main_class`, ExactlyOneOf: []string{"hadoop_config.0.main_jar_file_uri", "hadoop_config.0.main_class"}, }, "args": { - Type: schema.TypeList, - Optional: true, - ForceNew: true, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeList, + Optional: true, + ForceNew: true, + Description: `The arguments to pass to the driver.`, + Elem: &schema.Schema{Type: schema.TypeString}, }, "jar_file_uris": { - Type: schema.TypeList, - Optional: true, - ForceNew: true, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeList, + Optional: true, + ForceNew: true, + Description: `HCFS URIs of jar files to add to the CLASSPATHs of the Hadoop driver and tasks.`, + Elem: &schema.Schema{Type: schema.TypeString}, }, "file_uris": { - Type: schema.TypeList, - Optional: true, - ForceNew: true, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeList, + Optional: true, + ForceNew: true, + Description: `HCFS URIs of files to be copied to the working directory of Hadoop drivers and distributed tasks. Useful for naively parallel tasks.`, + Elem: &schema.Schema{Type: schema.TypeString}, }, "archive_uris": { - Type: schema.TypeList, - Optional: true, - ForceNew: true, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeList, + Optional: true, + ForceNew: true, + Description: `HCFS URIs of archives to be extracted into the working directory of Hadoop drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.`, + Elem: &schema.Schema{Type: schema.TypeString}, }, "properties": { - Type: schema.TypeMap, - Optional: true, - ForceNew: true, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeMap, + Optional: true, + ForceNew: true, + Description: `A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml and classes in user code.`, + Elem: &schema.Schema{Type: schema.TypeString}, }, "logging_config": loggingConfig, @@ -718,6 +739,7 @@ var hiveSchema = &schema.Schema{ Optional: true, ForceNew: true, MaxItems: 1, + Description: `The config of the Hive job.`, ExactlyOneOf: []string{"spark_config", "pyspark_config", "hadoop_config", "hive_config", "pig_config", "sparksql_config"}, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ @@ -726,6 +748,7 @@ var hiveSchema = &schema.Schema{ Type: schema.TypeList, Optional: true, ForceNew: true, + Description: `The list of Hive queries or statements to execute as part of the job. Conflicts with query_file_uri`, Elem: &schema.Schema{Type: schema.TypeString}, ExactlyOneOf: []string{"hive_config.0.query_file_uri", "hive_config.0.query_list"}, }, @@ -734,34 +757,39 @@ var hiveSchema = &schema.Schema{ Type: schema.TypeString, Optional: true, ForceNew: true, + Description: `HCFS URI of file containing Hive script to execute as the job. Conflicts with query_list`, ExactlyOneOf: []string{"hive_config.0.query_file_uri", "hive_config.0.query_list"}, }, "continue_on_failure": { - Type: schema.TypeBool, - Optional: true, - ForceNew: true, + Type: schema.TypeBool, + Optional: true, + ForceNew: true, + Description: `Whether to continue executing queries if a query fails.
The default value is false. Setting to true can be useful when executing independent parallel queries.`, }, "script_variables": { - Type: schema.TypeMap, - Optional: true, - ForceNew: true, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeMap, + Optional: true, + ForceNew: true, + Description: `Mapping of query variable names to values (equivalent to the Hive command: SET name="value";).`, + Elem: &schema.Schema{Type: schema.TypeString}, }, "properties": { - Type: schema.TypeMap, - Optional: true, - ForceNew: true, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeMap, + Optional: true, + ForceNew: true, + Description: `A mapping of property names and values, used to configure Hive. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/hive/conf/hive-site.xml, and classes in user code.`, + Elem: &schema.Schema{Type: schema.TypeString}, }, "jar_file_uris": { - Type: schema.TypeList, - Optional: true, - ForceNew: true, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeList, + Optional: true, + ForceNew: true, + Description: `HCFS URIs of jar files to add to the CLASSPATH of the Hive server and Hadoop MapReduce (MR) tasks. Can contain Hive SerDes and UDFs.`, + Elem: &schema.Schema{Type: schema.TypeString}, }, }, }, @@ -817,6 +845,7 @@ var pigSchema = &schema.Schema{ Optional: true, ForceNew: true, MaxItems: 1, + Description: `The config of the Pig job.`, ExactlyOneOf: []string{"spark_config", "pyspark_config", "hadoop_config", "hive_config", "pig_config", "sparksql_config"}, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ @@ -825,6 +854,7 @@ var pigSchema = &schema.Schema{ Type: schema.TypeList, Optional: true, ForceNew: true, + Description: `The list of Pig queries or statements to execute as part of the job. Conflicts with query_file_uri`, Elem: &schema.Schema{Type: schema.TypeString}, ExactlyOneOf: []string{"pig_config.0.query_file_uri", "pig_config.0.query_list"}, }, @@ -833,34 +863,39 @@ var pigSchema = &schema.Schema{ Type: schema.TypeString, Optional: true, ForceNew: true, + Description: `HCFS URI of file containing the Pig script to execute as the job. Conflicts with query_list`, ExactlyOneOf: []string{"pig_config.0.query_file_uri", "pig_config.0.query_list"}, }, "continue_on_failure": { - Type: schema.TypeBool, - Optional: true, - ForceNew: true, + Type: schema.TypeBool, + Optional: true, + ForceNew: true, + Description: `Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.`, }, "script_variables": { - Type: schema.TypeMap, - Optional: true, - ForceNew: true, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeMap, + Optional: true, + ForceNew: true, + Description: `Mapping of query variable names to values (equivalent to the Pig command: name=[value]).`, + Elem: &schema.Schema{Type: schema.TypeString}, }, "properties": { - Type: schema.TypeMap, - Optional: true, - ForceNew: true, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeMap, + Optional: true, + ForceNew: true, + Description: `A mapping of property names to values, used to configure Pig. Properties that conflict with values set by the Cloud Dataproc API may be overwritten.
Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/pig/conf/pig.properties, and classes in user code.`, + Elem: &schema.Schema{Type: schema.TypeString}, }, "jar_file_uris": { - Type: schema.TypeList, - Optional: true, - ForceNew: true, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeList, + Optional: true, + ForceNew: true, + Description: `HCFS URIs of jar files to add to the CLASSPATH of the Pig Client and Hadoop MapReduce (MR) tasks. Can contain Pig UDFs.`, + Elem: &schema.Schema{Type: schema.TypeString}, }, "logging_config": loggingConfig, @@ -919,6 +954,7 @@ var sparkSqlSchema = &schema.Schema{ Optional: true, ForceNew: true, MaxItems: 1, + Description: `The config of the Spark SQL job.`, ExactlyOneOf: []string{"spark_config", "pyspark_config", "hadoop_config", "hive_config", "pig_config", "sparksql_config"}, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ @@ -927,6 +963,7 @@ var sparkSqlSchema = &schema.Schema{ Type: schema.TypeList, Optional: true, ForceNew: true, + Description: `The list of SQL queries or statements to execute as part of the job. Conflicts with query_file_uri`, Elem: &schema.Schema{Type: schema.TypeString}, ExactlyOneOf: []string{"sparksql_config.0.query_file_uri", "sparksql_config.0.query_list"}, }, @@ -935,28 +972,32 @@ var sparkSqlSchema = &schema.Schema{ Type: schema.TypeString, Optional: true, ForceNew: true, + Description: `The HCFS URI of the script that contains SQL queries. Conflicts with query_list`, ExactlyOneOf: []string{"sparksql_config.0.query_file_uri", "sparksql_config.0.query_list"}, }, "script_variables": { - Type: schema.TypeMap, - Optional: true, - ForceNew: true, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeMap, + Optional: true, + ForceNew: true, + Description: `Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).`, + Elem: &schema.Schema{Type: schema.TypeString}, }, "properties": { - Type: schema.TypeMap, - Optional: true, - ForceNew: true, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeMap, + Optional: true, + ForceNew: true, + Description: `A mapping of property names to values, used to configure Spark SQL's SparkConf.
Properties that conflict with values set by the Cloud Dataproc API may be overwritten.`, + Elem: &schema.Schema{Type: schema.TypeString}, }, "jar_file_uris": { - Type: schema.TypeList, - Optional: true, - ForceNew: true, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeList, + Optional: true, + ForceNew: true, + Description: `HCFS URIs of jar files to be added to the Spark CLASSPATH.`, + Elem: &schema.Schema{Type: schema.TypeString}, }, "logging_config": loggingConfig, diff --git a/third_party/terraform/resources/resource_dns_record_set.go b/third_party/terraform/resources/resource_dns_record_set.go index e031c31b2394..29d676b0e4fe 100644 --- a/third_party/terraform/resources/resource_dns_record_set.go +++ b/third_party/terraform/resources/resource_dns_record_set.go @@ -24,15 +24,17 @@ func resourceDnsRecordSet() *schema.Resource { Schema: map[string]*schema.Schema{ "managed_zone": { - Type: schema.TypeString, - Required: true, - ForceNew: true, + Type: schema.TypeString, + Required: true, + ForceNew: true, + Description: `The name of the zone in which this record set will reside.`, }, "name": { - Type: schema.TypeString, - Required: true, - ForceNew: true, + Type: schema.TypeString, + Required: true, + ForceNew: true, + Description: `The DNS name this record set will apply to.`, }, "rrdatas": { @@ -50,23 +52,27 @@ func resourceDnsRecordSet() *schema.Resource { DiffSuppressFunc: func(k, old, new string, d *schema.ResourceData) bool { return strings.ToLower(strings.Trim(old, `"`)) == strings.ToLower(strings.Trim(new, `"`)) }, + Description: `The string data for the records in this record set whose meaning depends on the DNS type. For TXT record, if the string data contains spaces, add surrounding \" if you don't want your string to get split on spaces. To specify a single record value longer than 255 characters such as a TXT record for DKIM, add \"\" inside the Terraform configuration string (e.g. "first255characters\"\"morecharacters").`, }, "ttl": { - Type: schema.TypeInt, - Required: true, + Type: schema.TypeInt, + Required: true, + Description: `The time-to-live of this record set (seconds).`, }, "type": { - Type: schema.TypeString, - Required: true, + Type: schema.TypeString, + Required: true, + Description: `The DNS record set type.`, }, "project": { - Type: schema.TypeString, - Optional: true, - Computed: true, - ForceNew: true, + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + Description: `The ID of the project in which the resource belongs. 
If it is not provided, the provider project is used.`, }, }, } diff --git a/third_party/terraform/resources/resource_endpoints_service.go b/third_party/terraform/resources/resource_endpoints_service.go index 330b7ef32b17..040a7353802d 100644 --- a/third_party/terraform/resources/resource_endpoints_service.go +++ b/third_party/terraform/resources/resource_endpoints_service.go @@ -4,7 +4,12 @@ import ( "encoding/base64" "encoding/json" "errors" + "fmt" "log" + "regexp" + "strconv" + "strings" + "time" "github.com/hashicorp/terraform-plugin-sdk/helper/schema" "google.golang.org/api/servicemanagement/v1" @@ -21,72 +26,93 @@ func resourceEndpointsService() *schema.Resource { SchemaVersion: 1, MigrateState: migrateEndpointsService, + Timeouts: &schema.ResourceTimeout{ + Create: schema.DefaultTimeout(10 * time.Minute), + Update: schema.DefaultTimeout(10 * time.Minute), + Delete: schema.DefaultTimeout(10 * time.Minute), + }, + Schema: map[string]*schema.Schema{ "service_name": { - Type: schema.TypeString, - Required: true, - ForceNew: true, + Type: schema.TypeString, + Required: true, + ForceNew: true, + Description: `The name of the service. Usually of the form $apiname.endpoints.$projectid.cloud.goog.`, }, "openapi_config": { Type: schema.TypeString, Optional: true, ConflictsWith: []string{"grpc_config", "protoc_output_base64"}, + Description: `The full text of the OpenAPI YAML configuration as described here. Either this, or both of grpc_config and protoc_output_base64 must be specified.`, }, "grpc_config": { - Type: schema.TypeString, - Optional: true, + Type: schema.TypeString, + Optional: true, + Description: `The full text of the Service Config YAML file (Example located here). If provided, must also provide protoc_output_base64. open_api config must not be provided.`, }, "protoc_output_base64": { - Type: schema.TypeString, - Optional: true, + Type: schema.TypeString, + Optional: true, + Description: `The full contents of the Service Descriptor File generated by protoc. This should be a compiled .pb file, base64-encoded.`, }, "project": { - Type: schema.TypeString, - Optional: true, - Computed: true, - ForceNew: true, + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + Description: `The project ID that the service belongs to. If not provided, provider project is used.`, }, "config_id": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `The autogenerated ID for the configuration that is rolled out as part of the creation of this resource. Must be provided to compute engine instances as a tag.`, }, "apis": { - Type: schema.TypeList, - Computed: true, + Type: schema.TypeList, + Computed: true, + Description: `A list of API objects.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "name": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `The FQDN of the API as described in the provided config.`, }, "syntax": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `SYNTAX_PROTO2 or SYNTAX_PROTO3.`, }, "version": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `A version string for this api. If specified, will have the form major-version.minor-version, e.g. 
1.10.`, }, "methods": { - Type: schema.TypeList, - Computed: true, + Type: schema.TypeList, + Computed: true, + Description: `A list of Method objects.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "name": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `The simple name of this method as described in the provided config.`, }, "syntax": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `SYNTAX_PROTO2 or SYNTAX_PROTO3.`, }, "request_type": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `The type URL for the request to this API.`, }, "response_type": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `The type URL for the response from this API.`, }, }, }, @@ -95,27 +121,60 @@ func resourceEndpointsService() *schema.Resource { }, }, "dns_address": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `The address at which the service can be found - usually the same as the service name.`, }, "endpoints": { - Type: schema.TypeList, - Computed: true, + Type: schema.TypeList, + Computed: true, + Description: `A list of Endpoint objects.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "name": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `The simple name of the endpoint as described in the config.`, }, "address": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `The FQDN of the endpoint as described in the config.`, }, }, }, }, }, + CustomizeDiff: predictServiceId, + } +} + +func predictServiceId(d *schema.ResourceDiff, meta interface{}) error { + if !d.HasChange("openapi_config") && !d.HasChange("grpc_config") && !d.HasChange("protoc_output_base64") { + return nil + } + loc, err := time.LoadLocation("America/Los_Angeles") + if err != nil { + // Timezone data may not be present on some machines, in that case skip + return nil } + baseDate := time.Now().In(loc).Format("2006-01-02") + oldConfigId := d.Get("config_id").(string) + if match, err := regexp.MatchString(`\d\d\d\d-\d\d-\d\dr\d*`, oldConfigId); !match || err != nil { + // If we do not match the expected format, we will guess + // wrong and that is worse than not guessing. 
+ return nil + } + if strings.HasPrefix(oldConfigId, baseDate) { + n, err := strconv.Atoi(strings.Split(oldConfigId, "r")[1]) + if err != nil { + return err + } + d.SetNew("config_id", fmt.Sprintf("%sr%d", baseDate, n+1)) + } else { + d.SetNew("config_id", baseDate+"r0") + } + return nil } func getEndpointServiceOpenAPIConfigSource(configText string) *servicemanagement.ConfigSource { @@ -181,7 +240,7 @@ func resourceEndpointsServiceCreate(d *schema.ResourceData, meta interface{}) er return err } - _, err = serviceManagementOperationWait(config, op, "Creating new ManagedService.") + _, err = serviceManagementOperationWaitTime(config, op, "Creating new ManagedService.", d.Timeout(schema.TimeoutCreate)) if err != nil { return err } @@ -246,7 +305,7 @@ func resourceEndpointsServiceUpdate(d *schema.ResourceData, meta interface{}) er if err != nil { return err } - s, err := serviceManagementOperationWait(config, op, "Submitting service config.") + s, err := serviceManagementOperationWaitTime(config, op, "Submitting service config.", d.Timeout(schema.TimeoutUpdate)) if err != nil { return err } @@ -268,7 +327,7 @@ func resourceEndpointsServiceUpdate(d *schema.ResourceData, meta interface{}) er if err != nil { return err } - _, err = serviceManagementOperationWait(config, op, "Performing service rollout.") + _, err = serviceManagementOperationWaitTime(config, op, "Performing service rollout.", d.Timeout(schema.TimeoutUpdate)) if err != nil { return err } @@ -285,7 +344,7 @@ func resourceEndpointsServiceDelete(d *schema.ResourceData, meta interface{}) er if err != nil { return err } - _, err = serviceManagementOperationWait(config, op, "Deleting service.") + _, err = serviceManagementOperationWaitTime(config, op, "Deleting service.", d.Timeout(schema.TimeoutDelete)) d.SetId("") return err } diff --git a/third_party/terraform/resources/resource_google_folder.go b/third_party/terraform/resources/resource_google_folder.go index 9d91477db843..61bfb4195f9a 100644 --- a/third_party/terraform/resources/resource_google_folder.go +++ b/third_party/terraform/resources/resource_google_folder.go @@ -3,10 +3,11 @@ package google import ( "encoding/json" "fmt" - "github.com/hashicorp/terraform-plugin-sdk/helper/schema" - resourceManagerV2Beta1 "google.golang.org/api/cloudresourcemanager/v2beta1" "strings" "time" + + "github.com/hashicorp/terraform-plugin-sdk/helper/schema" + resourceManagerV2Beta1 "google.golang.org/api/cloudresourcemanager/v2beta1" ) func resourceGoogleFolder() *schema.Resource { @@ -30,28 +31,37 @@ func resourceGoogleFolder() *schema.Resource { Schema: map[string]*schema.Schema{ // Format is either folders/{folder_id} or organizations/{org_id}. "parent": { - Type: schema.TypeString, - Required: true, + Type: schema.TypeString, + Required: true, + Description: `The resource name of the parent Folder or Organization. Must be of the form folders/{folder_id} or organizations/{org_id}.`, }, // Must be unique amongst its siblings. "display_name": { - Type: schema.TypeString, - Required: true, + Type: schema.TypeString, + Required: true, + Description: `The folder's display name. A folder's display name must be unique amongst its siblings, e.g. no two folders with the same parent can share the same display name. 
The display name must start and end with a letter or digit, may contain letters, digits, spaces, hyphens and underscores and can be no longer than 30 characters.`, + }, + "folder_id": { + Type: schema.TypeString, + Computed: true, + Description: `The folder id from the name "folders/{folder_id}"`, }, - // Format is 'folders/{folder_id}. // The terraform id holds the same value. "name": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `The resource name of the Folder. Its format is folders/{folder_id}.`, }, "lifecycle_state": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `The lifecycle state of the folder such as ACTIVE or DELETE_REQUESTED.`, }, "create_time": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `Timestamp when the Folder was created. Assigned by the server. A timestamp in RFC3339 UTC "Zulu" format, accurate to nanoseconds. Example: "2014-10-02T15:01:23.045123456Z".`, }, }, } @@ -80,7 +90,7 @@ func resourceGoogleFolderCreate(d *schema.ResourceData, meta interface{}) error return err } - err = resourceManagerOperationWaitTime(config, opAsMap, "creating folder", int(d.Timeout(schema.TimeoutCreate).Minutes())) + err = resourceManagerOperationWaitTime(config, opAsMap, "creating folder", d.Timeout(schema.TimeoutCreate)) if err != nil { return fmt.Errorf("Error creating folder '%s' in '%s': %s", displayName, parent, err) } @@ -113,6 +123,8 @@ func resourceGoogleFolderRead(d *schema.ResourceData, meta interface{}) error { } d.Set("name", folder.Name) + folderId := strings.TrimPrefix(folder.Name, "folders/") + d.Set("folder_id", folderId) d.Set("parent", folder.Parent) d.Set("display_name", folder.DisplayName) d.Set("lifecycle_state", folder.LifecycleState) @@ -160,7 +172,7 @@ func resourceGoogleFolderUpdate(d *schema.ResourceData, meta interface{}) error return err } - err = resourceManagerOperationWaitTime(config, opAsMap, "move folder", int(d.Timeout(schema.TimeoutCreate).Minutes())) + err = resourceManagerOperationWaitTime(config, opAsMap, "move folder", d.Timeout(schema.TimeoutUpdate)) if err != nil { return fmt.Errorf("Error moving folder '%s' to '%s': %s", displayName, newParent, err) } diff --git a/third_party/terraform/resources/resource_google_folder_organization_policy.go b/third_party/terraform/resources/resource_google_folder_organization_policy.go index 0dbb206190dd..2be8431ee7df 100644 --- a/third_party/terraform/resources/resource_google_folder_organization_policy.go +++ b/third_party/terraform/resources/resource_google_folder_organization_policy.go @@ -30,9 +30,10 @@ func resourceGoogleFolderOrganizationPolicy() *schema.Resource { schemaOrganizationPolicy, map[string]*schema.Schema{ "folder": { - Type: schema.TypeString, - Required: true, - ForceNew: true, + Type: schema.TypeString, + Required: true, + ForceNew: true, + Description: `The resource name of the folder to set the policy for. 
Its format is folders/{folder_id}.`, }, }, ), diff --git a/third_party/terraform/resources/resource_google_organization_iam_custom_role.go b/third_party/terraform/resources/resource_google_organization_iam_custom_role.go index bde1b7269e91..ee23550f2344 100644 --- a/third_party/terraform/resources/resource_google_organization_iam_custom_role.go +++ b/third_party/terraform/resources/resource_google_organization_iam_custom_role.go @@ -21,39 +21,51 @@ func resourceGoogleOrganizationIamCustomRole() *schema.Resource { Schema: map[string]*schema.Schema{ "role_id": { - Type: schema.TypeString, - Required: true, - ForceNew: true, + Type: schema.TypeString, + Required: true, + ForceNew: true, + Description: `The role id to use for this role.`, }, "org_id": { - Type: schema.TypeString, - Required: true, - ForceNew: true, + Type: schema.TypeString, + Required: true, + ForceNew: true, + Description: `The numeric ID of the organization in which you want to create a custom role.`, }, "title": { - Type: schema.TypeString, - Required: true, + Type: schema.TypeString, + Required: true, + Description: `A human-readable title for the role.`, }, "permissions": { - Type: schema.TypeSet, - Required: true, - MinItems: 1, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeSet, + Required: true, + MinItems: 1, + Description: `The names of the permissions this role grants when bound in an IAM policy. At least one permission must be specified.`, + Elem: &schema.Schema{Type: schema.TypeString}, }, "stage": { Type: schema.TypeString, Optional: true, Default: "GA", + Description: `The current launch stage of the role. Defaults to GA.`, ValidateFunc: validation.StringInSlice([]string{"ALPHA", "BETA", "GA", "DEPRECATED", "DISABLED", "EAP"}, false), DiffSuppressFunc: emptyOrDefaultStringSuppress("ALPHA"), }, "description": { - Type: schema.TypeString, - Optional: true, + Type: schema.TypeString, + Optional: true, + Description: `A human-readable description for the role.`, }, "deleted": { - Type: schema.TypeBool, - Computed: true, + Type: schema.TypeBool, + Computed: true, + Description: `The current deleted state of the role.`, + }, + "name": { + Type: schema.TypeString, + Computed: true, + Description: `The name of the role in the format organizations/{{org_id}}/roles/{{role_id}}. 
Like id, this field can be used as a reference in other resources such as IAM role bindings.`, }, }, } @@ -122,6 +134,7 @@ func resourceGoogleOrganizationIamCustomRoleRead(d *schema.ResourceData, meta in d.Set("role_id", parsedRoleName.Name) d.Set("org_id", parsedRoleName.OrgId) d.Set("title", role.Title) + d.Set("name", role.Name) d.Set("description", role.Description) d.Set("permissions", role.IncludedPermissions) d.Set("stage", role.Stage) diff --git a/third_party/terraform/resources/resource_google_organization_policy.go b/third_party/terraform/resources/resource_google_organization_policy.go index 81ef18d6c954..5b0dc4bcc378 100644 --- a/third_party/terraform/resources/resource_google_organization_policy.go +++ b/third_party/terraform/resources/resource_google_organization_policy.go @@ -19,45 +19,49 @@ var schemaOrganizationPolicy = map[string]*schema.Schema{ Required: true, ForceNew: true, DiffSuppressFunc: compareSelfLinkOrResourceName, + Description: `The name of the Constraint the Policy is configuring, for example, serviceuser.services.`, }, "boolean_policy": { - Type: schema.TypeList, - Optional: true, - MaxItems: 1, + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Description: `A boolean policy is a constraint that is either enforced or not.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "enforced": { - Type: schema.TypeBool, - Required: true, + Type: schema.TypeBool, + Required: true, + Description: `If true, then the Policy is enforced. If false, then any configuration is acceptable.`, }, }, }, }, "list_policy": { - Type: schema.TypeList, - Optional: true, - MaxItems: 1, + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Description: `A policy that can define specific values that are allowed or denied for the given constraint. It can also be used to allow or deny all values. 
`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "allow": { - Type: schema.TypeList, - Optional: true, - MaxItems: 1, - // TODO(terraform-providers/terraform-provider-google#5193): Change back to exactly_one_of - // once hashicorp/terraform-plugin-sdk#280 is fixed - AtLeastOneOf: []string{"list_policy.0.allow", "list_policy.0.deny"}, - ConflictsWith: []string{"list_policy.0.deny"}, + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Description: `One or the other must be set.`, + ExactlyOneOf: []string{"list_policy.0.allow", "list_policy.0.deny"}, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "all": { Type: schema.TypeBool, Optional: true, Default: false, + Description: `The policy allows or denies all values.`, ExactlyOneOf: []string{"list_policy.0.allow.0.all", "list_policy.0.allow.0.values"}, }, "values": { Type: schema.TypeSet, Optional: true, + Description: `The policy can define specific values that are allowed or denied.`, ExactlyOneOf: []string{"list_policy.0.allow.0.all", "list_policy.0.allow.0.values"}, Elem: &schema.Schema{Type: schema.TypeString}, Set: schema.HashString, @@ -66,24 +70,24 @@ var schemaOrganizationPolicy = map[string]*schema.Schema{ }, }, "deny": { - Type: schema.TypeList, - Optional: true, - MaxItems: 1, - // TODO(terraform-providers/terraform-provider-google#5193): Change back to exactly_one_of - // once hashicorp/terraform-plugin-sdk#280 is fixed - AtLeastOneOf: []string{"list_policy.0.allow", "list_policy.0.deny"}, - ConflictsWith: []string{"list_policy.0.allow"}, + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Description: `One or the other must be set.`, + ExactlyOneOf: []string{"list_policy.0.allow", "list_policy.0.deny"}, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "all": { Type: schema.TypeBool, Optional: true, Default: false, + Description: `The policy allows or denies all values.`, ExactlyOneOf: []string{"list_policy.0.deny.0.all", "list_policy.0.deny.0.values"}, }, "values": { Type: schema.TypeSet, Optional: true, + Description: `The policy can define specific values that are allowed or denied.`, ExactlyOneOf: []string{"list_policy.0.deny.0.all", "list_policy.0.deny.0.values"}, Elem: &schema.Schema{Type: schema.TypeString}, Set: schema.HashString, @@ -92,39 +96,46 @@ var schemaOrganizationPolicy = map[string]*schema.Schema{ }, }, "suggested_value": { - Type: schema.TypeString, - Optional: true, - Computed: true, + Type: schema.TypeString, + Optional: true, + Computed: true, + Description: `The Google Cloud Console will try to default to a configuration that matches the value specified in this field.`, }, "inherit_from_parent": { - Type: schema.TypeBool, - Optional: true, + Type: schema.TypeBool, + Optional: true, + Description: `If set to true, the values from the effective Policy of the parent resource are inherited, meaning the values set in this Policy are added to the values inherited up the hierarchy.`, }, }, }, }, "version": { - Type: schema.TypeInt, - Optional: true, - Computed: true, + Type: schema.TypeInt, + Optional: true, + Computed: true, + Description: `Version of the Policy. Default version is 0.`, }, "etag": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `The etag of the organization policy. 
etag is used for optimistic concurrency control as a way to help prevent simultaneous updates of a policy from overwriting each other.`, }, "update_time": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `The timestamp in RFC3339 UTC "Zulu" format, accurate to nanoseconds, representing when the Policy was last updated. Example: "2016-10-09T12:33:37.578138407Z".`, }, "restore_policy": { - Type: schema.TypeList, - Optional: true, - MaxItems: 1, + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Description: `A restore policy is a constraint to restore the default policy.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "default": { - Type: schema.TypeBool, - Required: true, + Type: schema.TypeBool, + Required: true, + Description: `May only be set to true. If set, then the default Policy is restored.`, }, }, }, diff --git a/third_party/terraform/resources/resource_google_project.go b/third_party/terraform/resources/resource_google_project.go index 5bf38d717e17..21128e442741 100644 --- a/third_party/terraform/resources/resource_google_project.go +++ b/third_party/terraform/resources/resource_google_project.go @@ -33,10 +33,10 @@ func resourceGoogleProject() *schema.Resource { }, Timeouts: &schema.ResourceTimeout{ - Create: schema.DefaultTimeout(4 * time.Minute), - Update: schema.DefaultTimeout(4 * time.Minute), - Read: schema.DefaultTimeout(4 * time.Minute), - Delete: schema.DefaultTimeout(4 * time.Minute), + Create: schema.DefaultTimeout(10 * time.Minute), + Update: schema.DefaultTimeout(10 * time.Minute), + Read: schema.DefaultTimeout(10 * time.Minute), + Delete: schema.DefaultTimeout(10 * time.Minute), }, MigrateState: resourceGoogleProjectMigrateState, @@ -47,45 +47,54 @@ func resourceGoogleProject() *schema.Resource { Required: true, ForceNew: true, ValidateFunc: validateProjectID(), + Description: `The project ID. Changing this forces a new project to be created.`, }, "skip_delete": { - Type: schema.TypeBool, - Optional: true, - Computed: true, + Type: schema.TypeBool, + Optional: true, + Computed: true, + Description: `If true, the Terraform resource can be deleted without deleting the Project via the Google API.`, }, "auto_create_network": { - Type: schema.TypeBool, - Optional: true, - Default: true, + Type: schema.TypeBool, + Optional: true, + Default: true, + Description: `Create the 'default' network automatically. Default true. If set to false, the default network will be deleted. Note that, for quota purposes, you will still need to have 1 network slot available to create the project successfully, even if you set auto_create_network to false, since the network will exist momentarily.`, }, "name": { Type: schema.TypeString, Required: true, ValidateFunc: validateProjectName(), + Description: `The display name of the project.`, }, "org_id": { - Type: schema.TypeString, - Optional: true, - Computed: true, + Type: schema.TypeString, + Optional: true, + Computed: true, + Description: `The numeric ID of the organization this project belongs to. Only one of org_id or folder_id may be specified. If the org_id is specified then the project is created at the top level.
Changing this forces the project to be migrated to the newly specified organization.`, }, "folder_id": { - Type: schema.TypeString, - Optional: true, - Computed: true, - StateFunc: parseFolderId, + Type: schema.TypeString, + Optional: true, + Computed: true, + StateFunc: parseFolderId, + Description: `The numeric ID of the folder this project should be created under. Only one of org_id or folder_id may be specified. If the folder_id is specified, then the project is created under the specified folder. Changing this forces the project to be migrated to the newly specified folder.`, }, "number": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `The numeric identifier of the project.`, }, "billing_account": { - Type: schema.TypeString, - Optional: true, + Type: schema.TypeString, + Optional: true, + Description: `The alphanumeric ID of the billing account this project belongs to. The user or service account performing this operation with Terraform must have Billing Account Administrator privileges (roles/billing.admin) in the organization. See Google Cloud Billing API Access Control for more details.`, }, "labels": { - Type: schema.TypeMap, - Optional: true, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeMap, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Description: `A set of key/value label pairs to assign to the project.`, }, }, } @@ -136,7 +145,7 @@ func resourceGoogleProjectCreate(d *schema.ResourceData, meta interface{}) error return err } - waitErr := resourceManagerOperationWaitTime(config, opAsMap, "creating folder", int(d.Timeout(schema.TimeoutCreate).Minutes())) + waitErr := resourceManagerOperationWaitTime(config, opAsMap, "creating folder", d.Timeout(schema.TimeoutCreate)) if waitErr != nil { // The resource wasn't actually created d.SetId("") @@ -151,6 +160,10 @@ func resourceGoogleProjectCreate(d *schema.ResourceData, meta interface{}) error } } + // Sleep for 10s, letting the billing account settle before other resources + // try to use this project. 
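+ // For example, a google_project_service created immediately after the + // project can otherwise fail transiently, because the new billing + // association propagates asynchronously.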
+ time.Sleep(10 * time.Second) + err = resourceGoogleProjectRead(d, meta) if err != nil { return err } @@ -204,6 +217,9 @@ func resourceGoogleProjectRead(d *schema.ResourceData, meta interface{}) error { p, err := readGoogleProject(d, config) if err != nil { + if gerr, ok := err.(*googleapi.Error); ok && gerr.Code == 403 && strings.Contains(gerr.Message, "caller does not have permission") { + return fmt.Errorf("the user does not have permission to access Project %q or it may not exist", pid) + } return handleNotFoundError(err, d, fmt.Sprintf("Project %q", pid)) } @@ -433,7 +449,7 @@ func forceDeleteComputeNetwork(d *schema.ResourceData, config *Config, projectId if err != nil { return errwrap.Wrapf("Error deleting firewall: {{err}}", err) } - err = computeOperationWait(config, op, projectId, "Deleting Firewall") + err = computeOperationWaitTime(config, op, projectId, "Deleting Firewall", d.Timeout(schema.TimeoutCreate)) if err != nil { return err } @@ -493,7 +509,7 @@ func deleteComputeNetwork(project, network string, config *Config) error { return errwrap.Wrapf("Error deleting network: {{err}}", err) } - err = computeOperationWaitTime(config, op, project, "Deleting Network", 10) + err = computeOperationWaitTime(config, op, project, "Deleting Network", 10*time.Minute) if err != nil { return err } @@ -561,7 +577,7 @@ func doEnableServicesRequest(services []string, project string, config *Config, return errwrap.Wrapf("failed to send enable services request: {{err}}", err) } // Poll for the API to return - waitErr := serviceUsageOperationWait(config, op, project, fmt.Sprintf("Enable Project %q Services: %+v", project, services)) + waitErr := serviceUsageOperationWait(config, op, project, fmt.Sprintf("Enable Project %q Services: %+v", project, services), timeout) if waitErr != nil { return waitErr } diff --git a/third_party/terraform/resources/resource_google_project_iam_custom_role.go b/third_party/terraform/resources/resource_google_project_iam_custom_role.go index c3df43e2a3f7..f39509298c7d 100644 --- a/third_party/terraform/resources/resource_google_project_iam_custom_role.go +++ b/third_party/terraform/resources/resource_google_project_iam_custom_role.go @@ -25,38 +25,50 @@ func resourceGoogleProjectIamCustomRole() *schema.Resource { Type: schema.TypeString, Required: true, ForceNew: true, + Description: `The camel case role id to use for this role. Cannot contain - characters.`, ValidateFunc: validateIAMCustomRoleID, }, "title": { - Type: schema.TypeString, - Required: true, + Type: schema.TypeString, + Required: true, + Description: `A human-readable title for the role.`, }, "permissions": { - Type: schema.TypeSet, - Required: true, - MinItems: 1, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeSet, + Required: true, + MinItems: 1, + Description: `The names of the permissions this role grants when bound in an IAM policy. At least one permission must be specified.`, + Elem: &schema.Schema{Type: schema.TypeString}, }, "project": { - Type: schema.TypeString, - Optional: true, - Computed: true, - ForceNew: true, + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + Description: `The project that the custom role will be created in. Defaults to the provider project configuration.`, }, "stage": { Type: schema.TypeString, Optional: true, Default: "GA", + Description: `The current launch stage of the role.
Defaults to GA.`, ValidateFunc: validation.StringInSlice([]string{"ALPHA", "BETA", "GA", "DEPRECATED", "DISABLED", "EAP"}, false), DiffSuppressFunc: emptyOrDefaultStringSuppress("ALPHA"), }, "description": { - Type: schema.TypeString, - Optional: true, + Type: schema.TypeString, + Optional: true, + Description: `A human-readable description for the role.`, }, "deleted": { - Type: schema.TypeBool, - Computed: true, + Type: schema.TypeBool, + Computed: true, + Description: `The current deleted state of the role.`, + }, + "name": { + Type: schema.TypeString, + Computed: true, + Description: `The name of the role in the format projects/{{project}}/roles/{{role_id}}. Like id, this field can be used as a reference in other resources such as IAM role bindings.`, }, }, } @@ -126,6 +138,7 @@ func resourceGoogleProjectIamCustomRoleRead(d *schema.ResourceData, meta interfa d.Set("role_id", GetResourceNameFromSelfLink(role.Name)) d.Set("title", role.Title) + d.Set("name", role.Name) d.Set("description", role.Description) d.Set("permissions", role.IncludedPermissions) d.Set("stage", role.Stage) diff --git a/third_party/terraform/resources/resource_google_project_iam_policy.go.erb b/third_party/terraform/resources/resource_google_project_iam_policy.go.erb deleted file mode 100644 index 820937a43c41..000000000000 --- a/third_party/terraform/resources/resource_google_project_iam_policy.go.erb +++ /dev/null @@ -1,187 +0,0 @@ -<% autogen_exception -%> -package google - -import ( - "encoding/json" - "fmt" - "github.com/hashicorp/errwrap" - "github.com/hashicorp/terraform-plugin-sdk/helper/schema" - "google.golang.org/api/cloudresourcemanager/v1" - "log" -) - -func resourceGoogleProjectIamPolicy() *schema.Resource { - return &schema.Resource{ - Create: resourceGoogleProjectIamPolicyCreate, - Read: resourceGoogleProjectIamPolicyRead, - Update: resourceGoogleProjectIamPolicyUpdate, - Delete: resourceGoogleProjectIamPolicyDelete, - Importer: &schema.ResourceImporter{ - State: resourceGoogleProjectIamPolicyImport, - }, - - Schema: map[string]*schema.Schema{ - "project": { - Type: schema.TypeString, - Required: true, - ForceNew: true, - DiffSuppressFunc: compareProjectName, - }, - "policy_data": { - Type: schema.TypeString, - Required: true, - DiffSuppressFunc: jsonPolicyDiffSuppress, - }, - "etag": { - Type: schema.TypeString, - Computed: true, - }, - }, - } -} - -func compareProjectName(_, old, new string, _ *schema.ResourceData) bool { - // We can either get "projects/project-id" or "project-id", so strip any prefixes - return GetResourceNameFromSelfLink(old) == GetResourceNameFromSelfLink(new) -} - -func resourceGoogleProjectIamPolicyCreate(d *schema.ResourceData, meta interface{}) error { - config := meta.(*Config) - project := GetResourceNameFromSelfLink(d.Get("project").(string)) - - mutexKey := getProjectIamPolicyMutexKey(project) - mutexKV.Lock(mutexKey) - defer mutexKV.Unlock(mutexKey) - - // Get the policy in the template - policy, err := getResourceIamPolicy(d) - if err != nil { - return fmt.Errorf("Could not get valid 'policy_data' from resource: %v", err) - } - - log.Printf("[DEBUG] Setting IAM policy for project %q", project) - err = setProjectIamPolicy(policy, config, project) - if err != nil { - return err - } - - d.SetId(project) - return resourceGoogleProjectIamPolicyRead(d, meta) -} - -func resourceGoogleProjectIamPolicyRead(d *schema.ResourceData, meta interface{}) error { - config := meta.(*Config) - project := GetResourceNameFromSelfLink(d.Get("project").(string)) - - policy, err := 
getProjectIamPolicy(project, config) - if err != nil { - return err - } - - policyBytes, err := json.Marshal(&cloudresourcemanager.Policy{Bindings: policy.Bindings, AuditConfigs: policy.AuditConfigs}) - if err != nil { - return fmt.Errorf("Error marshaling IAM policy: %v", err) - } - - d.Set("etag", policy.Etag) - d.Set("policy_data", string(policyBytes)) - d.Set("project", project) - return nil -} - -func resourceGoogleProjectIamPolicyUpdate(d *schema.ResourceData, meta interface{}) error { - config := meta.(*Config) - project := GetResourceNameFromSelfLink(d.Get("project").(string)) - - mutexKey := getProjectIamPolicyMutexKey(project) - mutexKV.Lock(mutexKey) - defer mutexKV.Unlock(mutexKey) - - // Get the policy in the template - policy, err := getResourceIamPolicy(d) - if err != nil { - return fmt.Errorf("Could not get valid 'policy_data' from resource: %v", err) - } - - log.Printf("[DEBUG] Updating IAM policy for project %q", project) - err = setProjectIamPolicy(policy, config, project) - if err != nil { - return fmt.Errorf("Error setting project IAM policy: %v", err) - } - - return resourceGoogleProjectIamPolicyRead(d, meta) -} - -func resourceGoogleProjectIamPolicyDelete(d *schema.ResourceData, meta interface{}) error { - log.Printf("[DEBUG]: Deleting google_project_iam_policy") - config := meta.(*Config) - project := GetResourceNameFromSelfLink(d.Get("project").(string)) - - mutexKey := getProjectIamPolicyMutexKey(project) - mutexKV.Lock(mutexKey) - defer mutexKV.Unlock(mutexKey) - - // Get the existing IAM policy from the API so we can repurpose the etag and audit config - ep, err := getProjectIamPolicy(project, config) - if err != nil { - return fmt.Errorf("Error retrieving IAM policy from project API: %v", err) - } - - ep.Bindings = make([]*cloudresourcemanager.Binding, 0) - if err = setProjectIamPolicy(ep, config, project); err != nil { - return fmt.Errorf("Error applying IAM policy to project: %v", err) - } - - d.SetId("") - return nil -} - -func resourceGoogleProjectIamPolicyImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { - d.Set("project", d.Id()) - return []*schema.ResourceData{d}, nil -} - -func setProjectIamPolicy(policy *cloudresourcemanager.Policy, config *Config, pid string) error { - policy.Version = iamPolicyVersion - - // Apply the policy - pbytes, _ := json.Marshal(policy) - log.Printf("[DEBUG] Setting policy %#v for project: %s", string(pbytes), pid) - _, err := config.clientResourceManager.Projects.SetIamPolicy(pid, - &cloudresourcemanager.SetIamPolicyRequest{Policy: policy, UpdateMask: "bindings,etag,auditConfigs"}).Do() - - if err != nil { - return errwrap.Wrapf(fmt.Sprintf("Error applying IAM policy for project %q. Policy is %#v, error is {{err}}", pid, policy), err) - } - return nil -} - -// Get a cloudresourcemanager.Policy from a schema.ResourceData -func getResourceIamPolicy(d *schema.ResourceData) (*cloudresourcemanager.Policy, error) { - ps := d.Get("policy_data").(string) - // The policy string is just a marshaled cloudresourcemanager.Policy. 
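> For context on the file being deleted in this hunk: the core of `getResourceIamPolicy` is just a JSON round-trip into the client library's `Policy` type, since `policy_data` is nothing more than a marshaled policy. A minimal standalone sketch of that round-trip, assuming the `google.golang.org/api/cloudresourcemanager/v1` types used above:

```go
package sketch

import (
	"encoding/json"
	"fmt"

	"google.golang.org/api/cloudresourcemanager/v1"
)

// parsePolicyData mirrors the deleted getResourceIamPolicy helper:
// the policy_data attribute is a marshaled cloudresourcemanager.Policy.
func parsePolicyData(policyData string) (*cloudresourcemanager.Policy, error) {
	policy := &cloudresourcemanager.Policy{}
	if err := json.Unmarshal([]byte(policyData), policy); err != nil {
		return nil, fmt.Errorf("could not unmarshal policy_data %q: %v", policyData, err)
	}
	return policy, nil
}
```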
- policy := &cloudresourcemanager.Policy{} - if err := json.Unmarshal([]byte(ps), policy); err != nil { - return nil, fmt.Errorf("Could not unmarshal %s:\n: %v", ps, err) - } - return policy, nil -} - -// Retrieve the existing IAM Policy for a Project -func getProjectIamPolicy(project string, config *Config) (*cloudresourcemanager.Policy, error) { - p, err := config.clientResourceManager.Projects.GetIamPolicy(project, - &cloudresourcemanager.GetIamPolicyRequest{ - Options: &cloudresourcemanager.GetPolicyOptions{ - RequestedPolicyVersion: iamPolicyVersion, - }, - }).Do() - - if err != nil { - return nil, fmt.Errorf("Error retrieving IAM policy for project %q: %s", project, err) - } - return p, nil -} - -func getProjectIamPolicyMutexKey(pid string) string { - return fmt.Sprintf("iam-project-%s", pid) -} diff --git a/third_party/terraform/resources/resource_google_project_migrate.go b/third_party/terraform/resources/resource_google_project_migrate.go index 5735e8803cef..0d6cc996fe72 100644 --- a/third_party/terraform/resources/resource_google_project_migrate.go +++ b/third_party/terraform/resources/resource_google_project_migrate.go @@ -5,6 +5,7 @@ import ( "log" "github.com/hashicorp/terraform-plugin-sdk/terraform" + "google.golang.org/api/cloudresourcemanager/v1" ) func resourceGoogleProjectMigrateState(v int, s *terraform.InstanceState, meta interface{}) (*terraform.InstanceState, error) { @@ -45,3 +46,18 @@ func migrateGoogleProjectStateV0toV1(s *terraform.InstanceState, config *Config) log.Printf("[DEBUG] Attributes after migration: %#v", s.Attributes) return s, nil } + +// Retrieve the existing IAM Policy for a Project +func getProjectIamPolicy(project string, config *Config) (*cloudresourcemanager.Policy, error) { + p, err := config.clientResourceManager.Projects.GetIamPolicy(project, + &cloudresourcemanager.GetIamPolicyRequest{ + Options: &cloudresourcemanager.GetPolicyOptions{ + RequestedPolicyVersion: iamPolicyVersion, + }, + }).Do() + + if err != nil { + return nil, fmt.Errorf("Error retrieving IAM policy for project %q: %s", project, err) + } + return p, nil +} diff --git a/third_party/terraform/resources/resource_google_project_organization_policy.go b/third_party/terraform/resources/resource_google_project_organization_policy.go index 202ae91a2ebe..f12fd8436506 100644 --- a/third_party/terraform/resources/resource_google_project_organization_policy.go +++ b/third_party/terraform/resources/resource_google_project_organization_policy.go @@ -30,9 +30,10 @@ func resourceGoogleProjectOrganizationPolicy() *schema.Resource { schemaOrganizationPolicy, map[string]*schema.Schema{ "project": { - Type: schema.TypeString, - Required: true, - ForceNew: true, + Type: schema.TypeString, + Required: true, + ForceNew: true, + Description: `The project ID.`, }, }, ), diff --git a/third_party/terraform/resources/resource_google_project_service.go b/third_party/terraform/resources/resource_google_project_service.go index bf47af71c91f..e8d0cbbe293d 100644 --- a/third_party/terraform/resources/resource_google_project_service.go +++ b/third_party/terraform/resources/resource_google_project_service.go @@ -229,7 +229,7 @@ func disableServiceUsageProjectService(service, project string, d *schema.Resour return err } // Wait for the operation to complete - waitErr := serviceUsageOperationWait(config, sop, project, "api to disable") + waitErr := serviceUsageOperationWait(config, sop, project, "api to disable", d.Timeout(schema.TimeoutDelete)) if waitErr != nil { return waitErr } diff --git 
a/third_party/terraform/resources/resource_google_service_account.go b/third_party/terraform/resources/resource_google_service_account.go index 2356b4bbcc01..31a455a8189b 100644 --- a/third_party/terraform/resources/resource_google_service_account.go +++ b/third_party/terraform/resources/resource_google_service_account.go @@ -19,39 +19,49 @@ func resourceGoogleServiceAccount() *schema.Resource { Importer: &schema.ResourceImporter{ State: resourceGoogleServiceAccountImport, }, + Timeouts: &schema.ResourceTimeout{ + Create: schema.DefaultTimeout(5 * time.Minute), + }, Schema: map[string]*schema.Schema{ "email": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `The e-mail address of the service account. This value should be referenced from any google_iam_policy data sources that would grant the service account privileges.`, }, "unique_id": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `The unique id of the service account.`, }, "name": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `The fully-qualified name of the service account.`, }, "account_id": { Type: schema.TypeString, Required: true, ForceNew: true, ValidateFunc: validateRFC1035Name(6, 30), + Description: `The account id that is used to generate the service account email address and a stable unique id. It is unique within a project, must be 6-30 characters long, and match the regular expression [a-z]([-a-z0-9]*[a-z0-9]) to comply with RFC1035. Changing this forces a new service account to be created.`, }, "display_name": { - Type: schema.TypeString, - Optional: true, + Type: schema.TypeString, + Optional: true, + Description: `The display name for the service account. Can be updated without creating a new resource.`, }, "description": { Type: schema.TypeString, Optional: true, ValidateFunc: validation.StringLenBetween(0, 256), + Description: `A text description of the service account. Must be less than or equal to 256 UTF-8 bytes.`, }, "project": { - Type: schema.TypeString, - Computed: true, - Optional: true, - ForceNew: true, + Type: schema.TypeString, + Computed: true, + Optional: true, + ForceNew: true, + Description: `The ID of the project that the service account will be created in. Defaults to the provider project configuration.`, }, }, } @@ -83,10 +93,15 @@ func resourceGoogleServiceAccountCreate(d *schema.ResourceData, meta interface{} } d.SetId(sa.Name) - // This API is meant to be synchronous, but in practice it shows the old value for - // a few milliseconds after the update goes through. A second is more than enough - // time to ensure following reads are correct. - time.Sleep(time.Second) + + err = retryTimeDuration(func() (operr error) { + _, saerr := config.clientIAM.Projects.ServiceAccounts.Get(d.Id()).Do() + return saerr + }, d.Timeout(schema.TimeoutCreate), isNotFoundRetryableError("service account creation")) + + if err != nil { + return fmt.Errorf("Error reading service account after creation: %s", err) + } return resourceGoogleServiceAccountRead(d, meta) } @@ -146,8 +161,10 @@ func resourceGoogleServiceAccountUpdate(d *schema.ResourceData, meta interface{} if err != nil { return err } - // See comment in Create. - time.Sleep(time.Second) + // This API is meant to be synchronous, but in practice it shows the old value for + // a few milliseconds after the update goes through. 
5 seconds is more than enough + // time to ensure following reads are correct. + time.Sleep(time.Second * 5) return nil } diff --git a/third_party/terraform/resources/resource_google_service_account_key.go b/third_party/terraform/resources/resource_google_service_account_key.go index 0623cb9c542a..dc5d04b4a6cc 100644 --- a/third_party/terraform/resources/resource_google_service_account_key.go +++ b/third_party/terraform/resources/resource_google_service_account_key.go @@ -3,6 +3,7 @@ package google import ( "fmt" "log" + "time" "github.com/hashicorp/terraform-plugin-sdk/helper/schema" "github.com/hashicorp/terraform-plugin-sdk/helper/validation" @@ -17,9 +18,10 @@ func resourceGoogleServiceAccountKey() *schema.Resource { Schema: map[string]*schema.Schema{ // Required "service_account_id": { - Type: schema.TypeString, - Required: true, - ForceNew: true, + Type: schema.TypeString, + Required: true, + ForceNew: true, + Description: `The ID of the parent service account of the key. This can be a string in the format {ACCOUNT} or projects/{PROJECT_ID}/serviceAccounts/{ACCOUNT}, where {ACCOUNT} is the email address or unique id of the service account. If the {ACCOUNT} syntax is used, the project will be inferred from the account.`, }, // Optional "key_algorithm": { @@ -28,6 +30,7 @@ func resourceGoogleServiceAccountKey() *schema.Resource { Optional: true, ForceNew: true, ValidateFunc: validation.StringInSlice([]string{"KEY_ALG_UNSPECIFIED", "KEY_ALG_RSA_1024", "KEY_ALG_RSA_2048"}, false), + Description: `The algorithm used to generate the key, used only on create. KEY_ALG_RSA_2048 is the default algorithm. Valid values are: "KEY_ALG_RSA_1024", "KEY_ALG_RSA_2048".`, }, "pgp_key": { Type: schema.TypeString, @@ -51,27 +54,32 @@ func resourceGoogleServiceAccountKey() *schema.Resource { }, // Computed "name": { - Type: schema.TypeString, - Computed: true, - ForceNew: true, + Type: schema.TypeString, + Computed: true, + ForceNew: true, + Description: `The name used for this key pair`, }, "public_key": { - Type: schema.TypeString, - Computed: true, - ForceNew: true, + Type: schema.TypeString, + Computed: true, + ForceNew: true, + Description: `The public key, base64 encoded`, }, "private_key": { - Type: schema.TypeString, - Computed: true, - Sensitive: true, + Type: schema.TypeString, + Computed: true, + Sensitive: true, + Description: `The private key in JSON format, base64 encoded. This is what you normally get as a file when creating service account keys through the CLI or web console. This is only populated when creating a new key.`, }, "valid_after": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `The key can be used after this timestamp. A timestamp in RFC3339 UTC "Zulu" format, accurate to nanoseconds. Example: "2014-10-02T15:01:23.045123456Z".`, }, "valid_before": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `The key can be used before this timestamp. A timestamp in RFC3339 UTC "Zulu" format, accurate to nanoseconds. 
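> The service-account create path above replaces a blind one-second sleep with a bounded poll: it re-reads the new account until the API stops returning not-found, spending at most the create timeout. A minimal sketch of that shape, with hypothetical `get` and `isRetryableNotFound` parameters standing in for `config.clientIAM.Projects.ServiceAccounts.Get` and `isNotFoundRetryableError`:

```go
package sketch

import (
	"fmt"
	"time"
)

// waitUntilVisible polls get() until it stops failing with a retryable
// not-found error, or until the timeout budget is exhausted. It mirrors
// the retryTimeDuration call in resourceGoogleServiceAccountCreate above.
func waitUntilVisible(get func() error, isRetryableNotFound func(error) bool, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		err := get()
		if err == nil {
			return nil
		}
		if !isRetryableNotFound(err) || time.Now().After(deadline) {
			return fmt.Errorf("resource never became visible: %v", err)
		}
		time.Sleep(time.Second) // backoff policy is an assumption
	}
}
```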
Example: "2014-10-02T15:01:23.045123456Z".`, }, "private_key_encrypted": { Type: schema.TypeString, @@ -111,7 +119,7 @@ func resourceGoogleServiceAccountKeyCreate(d *schema.ResourceData, meta interfac d.Set("valid_before", sak.ValidBeforeTime) d.Set("private_key", sak.PrivateKeyData) - err = serviceAccountKeyWaitTime(config.clientIAM.Projects.ServiceAccounts.Keys, d.Id(), d.Get("public_key_type").(string), "Creating Service account key", 4) + err = serviceAccountKeyWaitTime(config.clientIAM.Projects.ServiceAccounts.Keys, d.Id(), d.Get("public_key_type").(string), "Creating Service account key", 4*time.Minute) if err != nil { return err } diff --git a/third_party/terraform/resources/resource_iam_audit_config.go b/third_party/terraform/resources/resource_iam_audit_config.go index bec5b1634d9d..3196d5dd725b 100644 --- a/third_party/terraform/resources/resource_iam_audit_config.go +++ b/third_party/terraform/resources/resource_iam_audit_config.go @@ -12,29 +12,34 @@ import ( var iamAuditConfigSchema = map[string]*schema.Schema{ "service": { - Type: schema.TypeString, - Required: true, + Type: schema.TypeString, + Required: true, + Description: `Service which will be enabled for audit logging. The special value allServices covers all services.`, }, "audit_log_config": { - Type: schema.TypeSet, - Required: true, + Type: schema.TypeSet, + Required: true, + Description: `The configuration for logging of each type of permission. This can be specified multiple times.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "log_type": { - Type: schema.TypeString, - Required: true, + Type: schema.TypeString, + Required: true, + Description: `Permission type for which logging is to be configured. Must be one of DATA_READ, DATA_WRITE, or ADMIN_READ.`, }, "exempted_members": { - Type: schema.TypeSet, - Elem: &schema.Schema{Type: schema.TypeString}, - Optional: true, + Type: schema.TypeSet, + Elem: &schema.Schema{Type: schema.TypeString}, + Optional: true, + Description: `Identities that do not cause logging for this type of permission. Each entry can have one of the following values:user:{emailid}: An email address that represents a specific Google account. For example, alice@gmail.com or joe@example.com. serviceAccount:{emailid}: An email address that represents a service account. For example, my-other-app@appspot.gserviceaccount.com. group:{emailid}: An email address that represents a Google group. For example, admins@example.com. domain:{domain}: A G Suite domain (primary, instead of alias) name that represents all the users of that domain. 
For example, google.com or example.com.`, }, }, }, }, "etag": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `The etag of iam policy`, }, } diff --git a/third_party/terraform/resources/resource_iam_binding.go.erb b/third_party/terraform/resources/resource_iam_binding.go.erb index f429eccebad1..56d6ad8ce58c 100644 --- a/third_party/terraform/resources/resource_iam_binding.go.erb +++ b/third_party/terraform/resources/resource_iam_binding.go.erb @@ -31,7 +31,6 @@ var iamBindingSchema = map[string]*schema.Schema{ return schema.HashString(strings.ToLower(v.(string))) }, }, -<% unless version == 'ga' -%> "condition": { Type: schema.TypeList, Optional: true, @@ -57,7 +56,6 @@ var iamBindingSchema = map[string]*schema.Schema{ }, }, }, -<% end -%> "etag": { Type: schema.TypeString, Computed: true, @@ -109,11 +107,9 @@ func resourceIamBindingCreateUpdate(newUpdaterFunc newResourceIamUpdaterFunc, en } d.SetId(updater.GetResourceId() + "/" + binding.Role) -<% unless version == 'ga' -%> if k := conditionKeyFromCondition(binding.Condition); !k.Empty() { d.SetId(d.Id() + "/" + k.String()) } -<% end -%> return resourceIamBindingRead(newUpdaterFunc)(d, meta) } } @@ -152,9 +148,7 @@ func resourceIamBindingRead(newUpdaterFunc newResourceIamUpdaterFunc) schema.Rea } else { d.Set("role", binding.Role) d.Set("members", binding.Members) -<% unless version == 'ga' -%> d.Set("condition", flattenIamCondition(binding.Condition)) -<% end -%> } d.Set("etag", p.Etag) return nil @@ -169,13 +163,6 @@ func iamBindingImport(newUpdaterFunc newResourceIamUpdaterFunc, resourceIdParser config := m.(*Config) s := strings.Fields(d.Id()) var id, role string -<% if version == 'ga' -%> - if len(s) != 2 { - d.SetId("") - return nil, fmt.Errorf("Wrong number of parts to Binding id %s; expected 'resource_name role'.", s) - } - id, role = s[0], s[1] -<% else -%> if len(s) < 2 { d.SetId("") return nil, fmt.Errorf("Wrong number of parts to Binding id %s; expected 'resource_name role [condition_title]'.", s) @@ -188,7 +175,6 @@ func iamBindingImport(newUpdaterFunc newResourceIamUpdaterFunc, resourceIdParser // condition titles can have any characters in them, so re-join the split string id, role, conditionTitle = s[0], s[1], strings.Join(s[2:], " ") } -<% end -%> // Set the ID only to the first part so all IAM types can share the same resourceIdParserFunc. d.SetId(id) @@ -202,7 +188,6 @@ func iamBindingImport(newUpdaterFunc newResourceIamUpdaterFunc, resourceIdParser // Use the current ID in case it changed in the resourceIdParserFunc. d.SetId(d.Id() + "/" + role) -<% unless version == 'ga' -%> // Since condition titles can have any character in them, we can't separate them from any other // field the user might set in import (like the condition description and expression). So, we // have the user just specify the title and then read the upstream policy to set the full @@ -231,7 +216,6 @@ func iamBindingImport(newUpdaterFunc newResourceIamUpdaterFunc, resourceIdParser d.SetId(d.Id() + "/" + k.String()) } } -<% end -%> // It is possible to return multiple bindings, since we can learn about all the bindings // for this resource here. 
Unfortunately, `terraform import` has some messy behavior here - @@ -281,15 +265,12 @@ func getResourceIamBinding(d *schema.ResourceData) *cloudresourcemanager.Binding Members: convertStringArr(members), Role: d.Get("role").(string), } -<% unless version == 'ga' -%> if c := expandIamCondition(d.Get("condition")); c != nil { b.Condition = c } -<% end -%> return b } -<% unless version == 'ga' -%> func expandIamCondition(v interface{}) *cloudresourcemanager.Expr { l := v.([]interface{}) if len(l) == 0 || l[0] == nil { @@ -316,4 +297,3 @@ func flattenIamCondition(condition *cloudresourcemanager.Expr) []map[string]inte }, } } -<% end -%> diff --git a/third_party/terraform/resources/resource_iam_member.go.erb b/third_party/terraform/resources/resource_iam_member.go.erb index 837c036afd17..e57e306c9a23 100644 --- a/third_party/terraform/resources/resource_iam_member.go.erb +++ b/third_party/terraform/resources/resource_iam_member.go.erb @@ -25,7 +25,6 @@ var IamMemberBaseSchema = map[string]*schema.Schema{ DiffSuppressFunc: caseDiffSuppress, ValidateFunc: validation.StringDoesNotMatch(regexp.MustCompile("^deleted:"), "Terraform does not support IAM members for deleted principals"), }, -<% unless version == 'ga' -%> "condition": { Type: schema.TypeList, Optional: true, @@ -51,7 +50,6 @@ var IamMemberBaseSchema = map[string]*schema.Schema{ }, }, }, -<% end -%> "etag": { Type: schema.TypeString, Computed: true, @@ -66,13 +64,6 @@ func iamMemberImport(newUpdaterFunc newResourceIamUpdaterFunc, resourceIdParser config := m.(*Config) s := strings.Fields(d.Id()) var id, role, member string -<% if version == 'ga' -%> - if len(s) != 3 { - d.SetId("") - return nil, fmt.Errorf("Wrong number of parts to Member id %s; expected 'resource_name role member'.", s) - } - id, role, member = s[0], s[1], s[2] -<% else -%> if len(s) < 3 { d.SetId("") return nil, fmt.Errorf("Wrong number of parts to Member id %s; expected 'resource_name role member [condition_title]'.", s) @@ -85,7 +76,6 @@ func iamMemberImport(newUpdaterFunc newResourceIamUpdaterFunc, resourceIdParser // condition titles can have any characters in them, so re-join the split string id, role, member, conditionTitle = s[0], s[1], s[2], strings.Join(s[3:], " ") } -<% end -%> // Set the ID only to the first part so all IAM types can share the same resourceIdParserFunc. d.SetId(id) @@ -101,7 +91,6 @@ func iamMemberImport(newUpdaterFunc newResourceIamUpdaterFunc, resourceIdParser // Use the current ID in case it changed in the resourceIdParserFunc. d.SetId(d.Id() + "/" + role + "/" + strings.ToLower(member)) -<% unless version == 'ga' -%> // Read the upstream policy so we can set the full condition. 
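> The erb changes above promote IAM conditions to GA, so the import path now always accepts an optional trailing condition title. A standalone sketch of that `strings.Fields`-based split, rejoining everything after the member because titles may contain spaces (the helper name is illustrative):

```go
package sketch

import (
	"fmt"
	"strings"
)

// parseMemberImportID splits "resource_name role member [condition_title]",
// mirroring iamMemberImport above: anything past the member is re-joined
// into the condition title, since titles can contain arbitrary characters.
func parseMemberImportID(importID string) (id, role, member, conditionTitle string, err error) {
	s := strings.Fields(importID)
	if len(s) < 3 {
		return "", "", "", "", fmt.Errorf("wrong number of parts to Member id %q; expected 'resource_name role member [condition_title]'", importID)
	}
	id, role, member = s[0], s[1], s[2]
	if len(s) > 3 {
		conditionTitle = strings.Join(s[3:], " ")
	}
	return id, role, member, conditionTitle, nil
}
```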
updater, err := newUpdaterFunc(d, config) if err != nil { @@ -138,7 +127,6 @@ func iamMemberImport(newUpdaterFunc newResourceIamUpdaterFunc, resourceIdParser if k := conditionKeyFromCondition(binding.Condition); !k.Empty() { d.SetId(d.Id() + "/" + k.String()) } -<% end -%> return []*schema.ResourceData{d}, nil } @@ -165,11 +153,9 @@ func getResourceIamMember(d *schema.ResourceData) *cloudresourcemanager.Binding Members: []string{d.Get("member").(string)}, Role: d.Get("role").(string), } -<% unless version == 'ga' -%> if c := expandIamCondition(d.Get("condition")); c != nil { b.Condition = c } -<% end -%> return b } @@ -198,11 +184,9 @@ func resourceIamMemberCreate(newUpdaterFunc newResourceIamUpdaterFunc, enableBat return err } d.SetId(updater.GetResourceId() + "/" + memberBind.Role + "/" + strings.ToLower(memberBind.Members[0])) -<% unless version == 'ga' -%> if k := conditionKeyFromCondition(memberBind.Condition); !k.Empty() { d.SetId(d.Id() + "/" + k.String()) } -<% end -%> return resourceIamMemberRead(newUpdaterFunc)(d, meta) } } @@ -255,9 +239,7 @@ func resourceIamMemberRead(newUpdaterFunc newResourceIamUpdaterFunc) schema.Read d.Set("etag", p.Etag) d.Set("member", member) d.Set("role", binding.Role) -<% unless version == 'ga' -%> d.Set("condition", flattenIamCondition(binding.Condition)) -<% end -%> return nil } } diff --git a/third_party/terraform/resources/resource_logging_billing_account_bucket_config.go b/third_party/terraform/resources/resource_logging_billing_account_bucket_config.go new file mode 100644 index 000000000000..d476b86f9569 --- /dev/null +++ b/third_party/terraform/resources/resource_logging_billing_account_bucket_config.go @@ -0,0 +1,35 @@ +package google + +import ( + "fmt" + "strings" + + "github.com/hashicorp/terraform-plugin-sdk/helper/schema" +) + +var loggingBillingAccountBucketConfigSchema = map[string]*schema.Schema{ + "billing_account": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + Description: `The parent resource that contains the logging bucket.`, + }, +} + +func billingAccountBucketConfigID(d *schema.ResourceData, config *Config) (string, error) { + billingAccount := d.Get("billing_account").(string) + location := d.Get("location").(string) + bucketID := d.Get("bucket_id").(string) + + if !strings.HasPrefix(billingAccount, "billingAccounts") { + billingAccount = "billingAccounts/" + billingAccount + } + + id := fmt.Sprintf("%s/locations/%s/buckets/%s", billingAccount, location, bucketID) + return id, nil +} + +// Create Logging Bucket config +func ResourceLoggingBillingAccountBucketConfig() *schema.Resource { + return ResourceLoggingBucketConfig("billing_account", loggingBillingAccountBucketConfigSchema, billingAccountBucketConfigID) +} diff --git a/third_party/terraform/resources/resource_logging_billing_account_sink.go b/third_party/terraform/resources/resource_logging_billing_account_sink.go index 9d13505e7c03..e272ea60d489 100644 --- a/third_party/terraform/resources/resource_logging_billing_account_sink.go +++ b/third_party/terraform/resources/resource_logging_billing_account_sink.go @@ -18,9 +18,10 @@ func resourceLoggingBillingAccountSink() *schema.Resource { }, } schm.Schema["billing_account"] = &schema.Schema{ - Type: schema.TypeString, - Required: true, - ForceNew: true, + Type: schema.TypeString, + Required: true, + ForceNew: true, + Description: `The billing account exported to the sink.`, } return schm } diff --git a/third_party/terraform/resources/resource_logging_bucket_config.go 
b/third_party/terraform/resources/resource_logging_bucket_config.go new file mode 100644 index 000000000000..2305e660bcf0 --- /dev/null +++ b/third_party/terraform/resources/resource_logging_bucket_config.go @@ -0,0 +1,183 @@ +package google + +import ( + "fmt" + "log" + "regexp" + "strings" + + "github.com/hashicorp/terraform-plugin-sdk/helper/schema" +) + +var loggingBucketConfigSchema = map[string]*schema.Schema{ + "name": { + Type: schema.TypeString, + Computed: true, + Description: `The resource name of the bucket`, + }, + "location": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + Description: `The location of the bucket. The supported locations are: "global" "us-central1"`, + }, + "bucket_id": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + Description: `The name of the logging bucket. Logging automatically creates two log buckets: _Required and _Default.`, + }, + "description": { + Type: schema.TypeString, + Optional: true, + Computed: true, + Description: `An optional description for this bucket.`, + }, + "retention_days": { + Type: schema.TypeInt, + Optional: true, + Default: 30, + Description: `Logs will be retained by default for this amount of time, after which they will automatically be deleted. The minimum retention period is 1 day. If this value is set to zero at bucket creation time, the default time of 30 days will be used.`, + }, + "lifecycle_state": { + Type: schema.TypeString, + Computed: true, + Description: `The bucket's lifecycle such as active or deleted.`, + }, +} + +type loggingBucketConfigIDFunc func(d *schema.ResourceData, config *Config) (string, error) + +// ResourceLoggingBucketConfig creates a resource definition by merging a unique field (eg: folder) to a generic logging bucket +// config resource. In practice the only difference between these resources is the url location. +func ResourceLoggingBucketConfig(parentType string, parentSpecificSchema map[string]*schema.Schema, iDFunc loggingBucketConfigIDFunc) *schema.Resource { + return &schema.Resource{ + Create: resourceLoggingBucketConfigAcquire(iDFunc), + Read: resourceLoggingBucketConfigRead, + Update: resourceLoggingBucketConfigUpdate, + Delete: resourceLoggingBucketConfigDelete, + Importer: &schema.ResourceImporter{ + State: resourceLoggingBucketConfigImportState(parentType), + }, + Schema: mergeSchemas(loggingBucketConfigSchema, parentSpecificSchema), + } +} + +var loggingBucketConfigIDRegex = regexp.MustCompile("(.+)/(.+)/locations/(.+)/buckets/(.+)") + +func resourceLoggingBucketConfigImportState(parent string) schema.StateFunc { + return func(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { + parts := loggingBucketConfigIDRegex.FindStringSubmatch(d.Id()) + if parts == nil { + return nil, fmt.Errorf("unable to parse logging sink id %#v", d.Id()) + } + + if len(parts) != 5 { + return nil, fmt.Errorf("Invalid id format. Format should be '{{parent}}/{{parent_id}}/locations/{{location}}/buckets/{{bucket_id}} with parent in %s", loggingSinkResourceTypes) + } + + validLoggingType := false + for _, v := range loggingSinkResourceTypes { + if v == parts[1] { + validLoggingType = true + break + } + } + if !validLoggingType { + return nil, fmt.Errorf("Logging parent type %s is not valid. 
Valid resource types: %#v", parts[1], + loggingSinkResourceTypes) + } + + d.Set(parent, parts[1]+"/"+parts[2]) + + d.Set("location", parts[3]) + + d.Set("bucket_id", parts[4]) + + return []*schema.ResourceData{d}, nil + } +} + +func resourceLoggingBucketConfigAcquire(iDFunc loggingBucketConfigIDFunc) func(*schema.ResourceData, interface{}) error { + return func(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + + id, err := iDFunc(d, config) + if err != nil { + return err + } + + d.SetId(id) + + return resourceLoggingBucketConfigUpdate(d, meta) + } +} + +func resourceLoggingBucketConfigRead(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + + log.Printf("[DEBUG] Fetching logging bucket config: %#v", d.Id()) + + url, err := replaceVars(d, config, fmt.Sprintf("{{LoggingBasePath}}%s", d.Id())) + if err != nil { + return err + } + + res, err := sendRequest(config, "GET", "", url, nil) + if err != nil { + log.Printf("[WARN] Unable to acquire logging bucket config at %s", d.Id()) + + d.SetId("") + return err + } + + d.Set("name", res["name"]) + d.Set("description", res["description"]) + d.Set("lifecycle_state", res["lifecycleState"]) + d.Set("retention_days", res["retentionDays"]) + + return nil + +} + +func resourceLoggingBucketConfigUpdate(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + + obj := make(map[string]interface{}) + + url, err := replaceVars(d, config, fmt.Sprintf("{{LoggingBasePath}}%s", d.Id())) + if err != nil { + return err + } + + obj["retentionDays"] = d.Get("retention_days") + obj["description"] = d.Get("description") + + updateMask := []string{} + if d.HasChange("retention_days") { + updateMask = append(updateMask, "retentionDays") + } + if d.HasChange("description") { + updateMask = append(updateMask, "description") + } + url, err = addQueryParams(url, map[string]string{"updateMask": strings.Join(updateMask, ",")}) + if err != nil { + return err + } + + _, err = sendRequestWithTimeout(config, "PATCH", "", url, obj, d.Timeout(schema.TimeoutUpdate)) + if err != nil { + return fmt.Errorf("Error updating Logging Bucket Config %q: %s", d.Id(), err) + } + + return resourceLoggingBucketConfigRead(d, meta) + +} + +func resourceLoggingBucketConfigDelete(d *schema.ResourceData, meta interface{}) error { + + log.Printf("[WARN] Logging bucket configs cannot be deleted. Removing logging bucket config from state: %#v", d.Id()) + d.SetId("") + + return nil +} diff --git a/third_party/terraform/resources/resource_logging_exclusion.go b/third_party/terraform/resources/resource_logging_exclusion.go index ab7d63c9d3ad..14c5169c57e5 100644 --- a/third_party/terraform/resources/resource_logging_exclusion.go +++ b/third_party/terraform/resources/resource_logging_exclusion.go @@ -11,21 +11,25 @@ import ( var LoggingExclusionBaseSchema = map[string]*schema.Schema{ "filter": { - Type: schema.TypeString, - Required: true, + Type: schema.TypeString, + Required: true, + Description: `The filter to apply when excluding logs. 
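> The import regex above captures four groups, so a successful `FindStringSubmatch` yields five elements (the full match plus the captures), which is what the `len(parts) != 5` check enforces. A small sketch of the same parse:

```go
package sketch

import (
	"fmt"
	"regexp"
)

var bucketConfigIDRegex = regexp.MustCompile("(.+)/(.+)/locations/(.+)/buckets/(.+)")

// parseBucketConfigID mirrors resourceLoggingBucketConfigImportState:
// index 0 is the whole match; 1..4 are parent type, parent id,
// location, and bucket id. FindStringSubmatch returns nil on no match,
// which the length check also covers.
func parseBucketConfigID(id string) (parent, location, bucketID string, err error) {
	parts := bucketConfigIDRegex.FindStringSubmatch(id)
	if len(parts) != 5 {
		return "", "", "", fmt.Errorf("unable to parse logging bucket config id %q", id)
	}
	return parts[1] + "/" + parts[2], parts[3], parts[4], nil
}
```

> For example, `projects/my-project/locations/global/buckets/_Default` parses to parent `projects/my-project`, location `global`, bucket `_Default`.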
Only log entries that match the filter are excluded.`, }, "name": { - Type: schema.TypeString, - Required: true, - ForceNew: true, + Type: schema.TypeString, + Required: true, + ForceNew: true, + Description: `The name of the logging exclusion.`, }, "description": { - Type: schema.TypeString, - Optional: true, + Type: schema.TypeString, + Optional: true, + Description: `A human-readable description.`, }, "disabled": { - Type: schema.TypeBool, - Optional: true, + Type: schema.TypeBool, + Optional: true, + Description: `Whether this exclusion rule should be disabled or not. This defaults to false.`, }, } diff --git a/third_party/terraform/resources/resource_logging_folder_bucket_config.go b/third_party/terraform/resources/resource_logging_folder_bucket_config.go new file mode 100644 index 000000000000..d1697696294a --- /dev/null +++ b/third_party/terraform/resources/resource_logging_folder_bucket_config.go @@ -0,0 +1,35 @@ +package google + +import ( + "fmt" + "strings" + + "github.com/hashicorp/terraform-plugin-sdk/helper/schema" +) + +var loggingFolderBucketConfigSchema = map[string]*schema.Schema{ + "folder": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + Description: `The parent resource that contains the logging bucket.`, + }, +} + +func folderBucketConfigID(d *schema.ResourceData, config *Config) (string, error) { + folder := d.Get("folder").(string) + location := d.Get("location").(string) + bucketID := d.Get("bucket_id").(string) + + if !strings.HasPrefix(folder, "folder") { + folder = "folders/" + folder + } + + id := fmt.Sprintf("%s/locations/%s/buckets/%s", folder, location, bucketID) + return id, nil +} + +// Create Logging Bucket config +func ResourceLoggingFolderBucketConfig() *schema.Resource { + return ResourceLoggingBucketConfig("folder", loggingFolderBucketConfigSchema, folderBucketConfigID) +} diff --git a/third_party/terraform/resources/resource_logging_folder_sink.go b/third_party/terraform/resources/resource_logging_folder_sink.go index 4d23e790c891..72af44ac6b3c 100644 --- a/third_party/terraform/resources/resource_logging_folder_sink.go +++ b/third_party/terraform/resources/resource_logging_folder_sink.go @@ -19,18 +19,20 @@ func resourceLoggingFolderSink() *schema.Resource { }, } schm.Schema["folder"] = &schema.Schema{ - Type: schema.TypeString, - Required: true, - ForceNew: true, + Type: schema.TypeString, + Required: true, + ForceNew: true, + Description: `The folder to be exported to the sink. Note that either [FOLDER_ID] or "folders/[FOLDER_ID]" is accepted.`, StateFunc: func(v interface{}) string { return strings.Replace(v.(string), "folders/", "", 1) }, } schm.Schema["include_children"] = &schema.Schema{ - Type: schema.TypeBool, - Optional: true, - ForceNew: true, - Default: false, + Type: schema.TypeBool, + Optional: true, + ForceNew: true, + Default: false, + Description: `Whether or not to include children folders in the sink export. 
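> `folderBucketConfigID` above (like its billing-account sibling earlier in this diff) normalizes a bare parent id by prepending its collection prefix before composing the bucket path. A generalized sketch of that normalization, with illustrative helper names:

```go
package sketch

import (
	"fmt"
	"strings"
)

// normalizeParent mirrors the prefix handling in folderBucketConfigID and
// friends: a bare id such as "1234" gains its collection prefix, while an
// already qualified name like "folders/1234" is left alone.
func normalizeParent(prefix, parent string) string {
	if strings.HasPrefix(parent, prefix) {
		return parent
	}
	return prefix + "/" + parent
}

// bucketConfigID composes the resource id used by these config resources.
func bucketConfigID(parent, location, bucketID string) string {
	return fmt.Sprintf("%s/locations/%s/buckets/%s", parent, location, bucketID)
}
```

> Usage: `bucketConfigID(normalizeParent("folders", "1234"), "global", "_Default")` yields `folders/1234/locations/global/buckets/_Default`.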
If true, logs associated with child projects are also exported; otherwise only logs relating to the provided folder are included.`, } return schm diff --git a/third_party/terraform/resources/resource_logging_organization_bucket_config.go b/third_party/terraform/resources/resource_logging_organization_bucket_config.go new file mode 100644 index 000000000000..3ecd0f1c59cd --- /dev/null +++ b/third_party/terraform/resources/resource_logging_organization_bucket_config.go @@ -0,0 +1,35 @@ +package google + +import ( + "fmt" + "strings" + + "github.com/hashicorp/terraform-plugin-sdk/helper/schema" +) + +var loggingOrganizationBucketConfigSchema = map[string]*schema.Schema{ + "organization": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + Description: `The parent resource that contains the logging bucket.`, + }, +} + +func organizationBucketConfigID(d *schema.ResourceData, config *Config) (string, error) { + organization := d.Get("organization").(string) + location := d.Get("location").(string) + bucketID := d.Get("bucket_id").(string) + + if !strings.HasPrefix(organization, "organization") { + organization = "organizations/" + organization + } + + id := fmt.Sprintf("%s/locations/%s/buckets/%s", organization, location, bucketID) + return id, nil +} + +// Create Logging Bucket config +func ResourceLoggingOrganizationBucketConfig() *schema.Resource { + return ResourceLoggingBucketConfig("organization", loggingOrganizationBucketConfigSchema, organizationBucketConfigID) +} diff --git a/third_party/terraform/resources/resource_logging_organization_sink.go b/third_party/terraform/resources/resource_logging_organization_sink.go index cbc93a41aac0..e58fe931ece9 100644 --- a/third_party/terraform/resources/resource_logging_organization_sink.go +++ b/third_party/terraform/resources/resource_logging_organization_sink.go @@ -19,17 +19,19 @@ func resourceLoggingOrganizationSink() *schema.Resource { }, } schm.Schema["org_id"] = &schema.Schema{ - Type: schema.TypeString, - Required: true, + Type: schema.TypeString, + Required: true, + Description: `The numeric ID of the organization to be exported to the sink.`, StateFunc: func(v interface{}) string { return strings.Replace(v.(string), "organizations/", "", 1) }, } schm.Schema["include_children"] = &schema.Schema{ - Type: schema.TypeBool, - Optional: true, - ForceNew: true, - Default: false, + Type: schema.TypeBool, + Optional: true, + ForceNew: true, + Default: false, + Description: `Whether or not to include children organizations in the sink export. 
If true, logs associated with child projects are also exported; otherwise only logs relating to the provided organization are included.`, } return schm diff --git a/third_party/terraform/resources/resource_logging_project_bucket_config.go b/third_party/terraform/resources/resource_logging_project_bucket_config.go new file mode 100644 index 000000000000..8bf40339afaf --- /dev/null +++ b/third_party/terraform/resources/resource_logging_project_bucket_config.go @@ -0,0 +1,35 @@ +package google + +import ( + "fmt" + "strings" + + "github.com/hashicorp/terraform-plugin-sdk/helper/schema" +) + +var loggingProjectBucketConfigSchema = map[string]*schema.Schema{ + "project": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + Description: `The parent project that contains the logging bucket.`, + }, +} + +func projectBucketConfigID(d *schema.ResourceData, config *Config) (string, error) { + project := d.Get("project").(string) + location := d.Get("location").(string) + bucketID := d.Get("bucket_id").(string) + + if !strings.HasPrefix(project, "project") { + project = "projects/" + project + } + + id := fmt.Sprintf("%s/locations/%s/buckets/%s", project, location, bucketID) + return id, nil +} + +// Create Logging Bucket config +func ResourceLoggingProjectBucketConfig() *schema.Resource { + return ResourceLoggingBucketConfig("project", loggingProjectBucketConfigSchema, projectBucketConfigID) +} diff --git a/third_party/terraform/resources/resource_logging_project_sink.go b/third_party/terraform/resources/resource_logging_project_sink.go index 2c7e3351e76e..40c377f0b8cd 100644 --- a/third_party/terraform/resources/resource_logging_project_sink.go +++ b/third_party/terraform/resources/resource_logging_project_sink.go @@ -20,16 +20,18 @@ func resourceLoggingProjectSink() *schema.Resource { }, } schm.Schema["project"] = &schema.Schema{ - Type: schema.TypeString, - Optional: true, - Computed: true, - ForceNew: true, + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + Description: `The ID of the project to create the sink in. If omitted, the project associated with the provider is used.`, } schm.Schema["unique_writer_identity"] = &schema.Schema{ - Type: schema.TypeBool, - Optional: true, - Default: false, - ForceNew: true, + Type: schema.TypeBool, + Optional: true, + Default: false, + ForceNew: true, + Description: `Whether or not to create a unique identity associated with this sink. If false (the default), then the writer_identity used is serviceAccount:cloud-logs@system.gserviceaccount.com. If true, then a unique service account is created and used for this sink. If you wish to publish logs across projects, you must set unique_writer_identity to true.`, } return schm } diff --git a/third_party/terraform/resources/resource_logging_sink.go b/third_party/terraform/resources/resource_logging_sink.go index 21b8370f9d94..0fd881a30038 100644 --- a/third_party/terraform/resources/resource_logging_sink.go +++ b/third_party/terraform/resources/resource_logging_sink.go @@ -10,37 +10,43 @@ import ( func resourceLoggingSinkSchema() map[string]*schema.Schema { return map[string]*schema.Schema{ "name": { - Type: schema.TypeString, - Required: true, - ForceNew: true, + Type: schema.TypeString, + Required: true, + ForceNew: true, + Description: `The name of the logging sink.`, }, "destination": { - Type: schema.TypeString, - Required: true, + Type: schema.TypeString, + Required: true, + Description: `The destination of the sink (or, in other words, where logs are written to). 
Can be a Cloud Storage bucket, a PubSub topic, or a BigQuery dataset. Examples: "storage.googleapis.com/[GCS_BUCKET]" "bigquery.googleapis.com/projects/[PROJECT_ID]/datasets/[DATASET]" "pubsub.googleapis.com/projects/[PROJECT_ID]/topics/[TOPIC_ID]" The writer associated with the sink must have access to write to the above resource.`, }, "filter": { Type: schema.TypeString, Optional: true, DiffSuppressFunc: optionalSurroundingSpacesSuppress, + Description: `The filter to apply when exporting logs. Only log entries that match the filter are exported.`, }, "writer_identity": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `The identity associated with this sink. This identity must be granted write access to the configured destination.`, }, "bigquery_options": { - Type: schema.TypeList, - Optional: true, - Computed: true, - MaxItems: 1, + Type: schema.TypeList, + Optional: true, + Computed: true, + MaxItems: 1, + Description: `Options that affect sinks exporting data to BigQuery.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "use_partitioned_tables": { - Type: schema.TypeBool, - Required: true, + Type: schema.TypeBool, + Required: true, + Description: `Whether to use BigQuery's partition tables. By default, Logging creates dated tables based on the log entries' timestamps, e.g. syslog_20170523. With partitioned tables the date suffix is no longer present and special query syntax has to be used instead. In both cases, tables are sharded based on UTC timezone.`, }, }, }, diff --git a/third_party/terraform/resources/resource_monitoring_dashboard.go b/third_party/terraform/resources/resource_monitoring_dashboard.go new file mode 100644 index 000000000000..10a7b49c6331 --- /dev/null +++ b/third_party/terraform/resources/resource_monitoring_dashboard.go @@ -0,0 +1,195 @@ +package google + +import ( + "fmt" + "reflect" + "time" + + "github.com/hashicorp/terraform-plugin-sdk/helper/schema" + "github.com/hashicorp/terraform-plugin-sdk/helper/structure" + "github.com/hashicorp/terraform-plugin-sdk/helper/validation" +) + +func monitoringDashboardDiffSuppress(k, old, new string, d *schema.ResourceData) bool { + computedFields := []string{"etag", "name"} + + oldMap, err := structure.ExpandJsonFromString(old) + if err != nil { + return false + } + + newMap, err := structure.ExpandJsonFromString(new) + if err != nil { + return false + } + + for _, f := range computedFields { + delete(oldMap, f) + delete(newMap, f) + } + + return reflect.DeepEqual(oldMap, newMap) +} + +func resourceMonitoringDashboard() *schema.Resource { + return &schema.Resource{ + Create: resourceMonitoringDashboardCreate, + Read: resourceMonitoringDashboardRead, + Update: resourceMonitoringDashboardUpdate, + Delete: resourceMonitoringDashboardDelete, + + Importer: &schema.ResourceImporter{ + State: resourceMonitoringDashboardImport, + }, + + Timeouts: &schema.ResourceTimeout{ + Create: schema.DefaultTimeout(4 * time.Minute), + Update: schema.DefaultTimeout(4 * time.Minute), + Delete: schema.DefaultTimeout(4 * time.Minute), + }, + + Schema: map[string]*schema.Schema{ + "dashboard_json": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.ValidateJsonString, + DiffSuppressFunc: monitoringDashboardDiffSuppress, + StateFunc: func(v interface{}) string { + json, _ := structure.NormalizeJsonString(v) + return json + }, + Description: `The JSON representation of a dashboard, following the format at 
https://cloud.google.com/monitoring/api/ref_v3/rest/v1/projects.dashboards.`, + }, + "project": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + Description: `The ID of the project in which the resource belongs. If it is not provided, the provider project is used.`, + }, + }, + } +} + +func resourceMonitoringDashboardCreate(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + + obj, err := structure.ExpandJsonFromString(d.Get("dashboard_json").(string)) + if err != nil { + return err + } + + project, err := getProject(d, config) + if err != nil { + return err + } + + url, err := replaceVars(d, config, "{{MonitoringBasePath}}v1/projects/{{project}}/dashboards") + if err != nil { + return err + } + res, err := sendRequestWithTimeout(config, "POST", project, url, obj, d.Timeout(schema.TimeoutCreate), isMonitoringConcurrentEditError) + if err != nil { + return fmt.Errorf("Error creating Dashboard: %s", err) + } + + name, ok := res["name"] + if !ok { + return fmt.Errorf("Create response didn't contain critical fields. Create may not have succeeded.") + } + d.SetId(name.(string)) + + return resourceMonitoringDashboardRead(d, config) +} + +func resourceMonitoringDashboardRead(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + + url := config.MonitoringBasePath + "v1/" + d.Id() + + project, err := getProject(d, config) + if err != nil { + return err + } + + res, err := sendRequest(config, "GET", project, url, nil, isMonitoringConcurrentEditError) + if err != nil { + return handleNotFoundError(err, d, fmt.Sprintf("MonitoringDashboard %q", d.Id())) + } + + if err := d.Set("project", project); err != nil { + return fmt.Errorf("Error reading Dashboard: %s", err) + } + + str, err := structure.FlattenJsonToString(res) + if err != nil { + return fmt.Errorf("Error reading Dashboard: %s", err) + } + if err = d.Set("dashboard_json", str); err != nil { + return fmt.Errorf("Error reading Dashboard: %s", err) + } + + return nil +} + +func resourceMonitoringDashboardUpdate(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + + o, n := d.GetChange("dashboard_json") + oObj, err := structure.ExpandJsonFromString(o.(string)) + if err != nil { + return err + } + nObj, err := structure.ExpandJsonFromString(n.(string)) + if err != nil { + return err + } + + nObj["etag"] = oObj["etag"] + + project, err := getProject(d, config) + if err != nil { + return err + } + + url := config.MonitoringBasePath + "v1/" + d.Id() + _, err = sendRequestWithTimeout(config, "PATCH", project, url, nObj, d.Timeout(schema.TimeoutUpdate), isMonitoringConcurrentEditError) + if err != nil { + return fmt.Errorf("Error updating Dashboard %q: %s", d.Id(), err) + } + + return resourceMonitoringDashboardRead(d, config) +} + +func resourceMonitoringDashboardDelete(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + + url := config.MonitoringBasePath + "v1/" + d.Id() + + project, err := getProject(d, config) + if err != nil { + return err + } + + _, err = sendRequestWithTimeout(config, "DELETE", project, url, nil, d.Timeout(schema.TimeoutDelete), isMonitoringConcurrentEditError) + if err != nil { + return handleNotFoundError(err, d, fmt.Sprintf("MonitoringDashboard %q", d.Id())) + } + + return nil +} + +func resourceMonitoringDashboardImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { + config := meta.(*Config) + + // current import_formats can't import fields with forward 
slashes in their value + parts, err := getImportIdQualifiers([]string{"projects/(?P[^/]+)/dashboards/(?P[^/]+)", "(?P[^/]+)"}, d, config, d.Id()) + if err != nil { + return nil, err + } + + d.Set("project", parts["project"]) + d.SetId(fmt.Sprintf("projects/%s/dashboards/%s", parts["project"], parts["id"])) + + return []*schema.ResourceData{d}, nil +} diff --git a/third_party/terraform/resources/resource_runtimeconfig_config.go b/third_party/terraform/resources/resource_runtimeconfig_config.go index b8c2e94fe932..0c718e1abbb5 100644 --- a/third_party/terraform/resources/resource_runtimeconfig_config.go +++ b/third_party/terraform/resources/resource_runtimeconfig_config.go @@ -27,18 +27,21 @@ func resourceRuntimeconfigConfig() *schema.Resource { Required: true, ForceNew: true, ValidateFunc: validateRegexp("[0-9A-Za-z](?:[_.A-Za-z0-9-]{0,62}[_.A-Za-z0-9])?"), + Description: `The name of the runtime config.`, }, "description": { - Type: schema.TypeString, - Optional: true, + Type: schema.TypeString, + Optional: true, + Description: `The description to associate with the runtime config.`, }, "project": { - Type: schema.TypeString, - Optional: true, - Computed: true, - ForceNew: true, + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + Description: `The ID of the project in which the resource belongs. If it is not provided, the provider project is used.`, }, }, } @@ -78,7 +81,7 @@ func resourceRuntimeconfigConfigRead(d *schema.ResourceData, meta interface{}) e fullName := d.Id() runConfig, err := config.clientRuntimeconfig.Projects.Configs.Get(fullName).Do() if err != nil { - return err + return handleNotFoundError(err, d, fmt.Sprintf("RuntimeConfig %q", d.Id())) } project, name, err := resourceRuntimeconfigParseFullName(runConfig.Name) diff --git a/third_party/terraform/resources/resource_runtimeconfig_variable.go b/third_party/terraform/resources/resource_runtimeconfig_variable.go index 02f472b99b5e..df9f2f7e7409 100644 --- a/third_party/terraform/resources/resource_runtimeconfig_variable.go +++ b/third_party/terraform/resources/resource_runtimeconfig_variable.go @@ -21,22 +21,25 @@ func resourceRuntimeconfigVariable() *schema.Resource { Schema: map[string]*schema.Schema{ "name": { - Type: schema.TypeString, - Required: true, - ForceNew: true, + Type: schema.TypeString, + Required: true, + ForceNew: true, + Description: `The name of the variable to manage. Note that variable names can be hierarchical using slashes (e.g. "prod-variables/hostname").`, }, "parent": { - Type: schema.TypeString, - Required: true, - ForceNew: true, + Type: schema.TypeString, + Required: true, + ForceNew: true, + Description: `The name of the RuntimeConfig resource containing this variable.`, }, "project": { - Type: schema.TypeString, - Optional: true, - Computed: true, - ForceNew: true, + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + Description: `The ID of the project in which the resource belongs. If it is not provided, the provider project is used.`, }, "value": { @@ -52,8 +55,9 @@ func resourceRuntimeconfigVariable() *schema.Resource { }, "update_time": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `The timestamp in RFC3339 UTC "Zulu" format, accurate to nanoseconds, representing when the variable was last updated. 
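> The runtimeconfig read fix above routes API errors through `handleNotFoundError`, so a config deleted out-of-band is dropped from state rather than failing the plan. A minimal sketch of what such a helper does, assuming the `googleapi.Error` type; the repo's actual helper may differ in detail:

```go
package sketch

import (
	"fmt"
	"log"

	"google.golang.org/api/googleapi"
)

// resourceDataLike is the small surface of *schema.ResourceData this sketch
// needs; it is declared here only to keep the example self-contained.
type resourceDataLike interface {
	SetId(string)
}

// handleNotFound clears the resource from state on a 404 and propagates
// every other error, which is the behavior the read function above relies on.
func handleNotFound(err error, d resourceDataLike, resource string) error {
	if gerr, ok := err.(*googleapi.Error); ok && gerr.Code == 404 {
		log.Printf("[WARN] Removing %s because it no longer exists", resource)
		d.SetId("")
		return nil
	}
	return fmt.Errorf("error reading %s: %v", resource, err)
}
```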
Example: "2016-10-09T12:33:37.578138407Z".`, }, }, } diff --git a/third_party/terraform/resources/resource_service_networking_connection.go b/third_party/terraform/resources/resource_service_networking_connection.go index ef25558964c8..f96f62529b31 100644 --- a/third_party/terraform/resources/resource_service_networking_connection.go +++ b/third_party/terraform/resources/resource_service_networking_connection.go @@ -6,6 +6,7 @@ import ( "net/url" "regexp" "strings" + "time" "github.com/hashicorp/errwrap" "github.com/hashicorp/terraform-plugin-sdk/helper/schema" @@ -23,12 +24,19 @@ func resourceServiceNetworkingConnection() *schema.Resource { State: resourceServiceNetworkingConnectionImportState, }, + Timeouts: &schema.ResourceTimeout{ + Create: schema.DefaultTimeout(10 * time.Minute), + Update: schema.DefaultTimeout(10 * time.Minute), + Delete: schema.DefaultTimeout(10 * time.Minute), + }, + Schema: map[string]*schema.Schema{ "network": { Type: schema.TypeString, Required: true, ForceNew: true, DiffSuppressFunc: compareSelfLinkOrResourceName, + Description: `Name of VPC network connected with service producers using VPC peering.`, }, // NOTE(craigatgoogle): This field is weird, it's required to make the Insert/List calls as a parameter // named "parent", however it's also defined in the response as an output field called "peering", which @@ -37,14 +45,16 @@ func resourceServiceNetworkingConnection() *schema.Resource { // delimiter. // See: https://cloud.google.com/vpc/docs/configure-private-services-access#creating-connection "service": { - Type: schema.TypeString, - Required: true, - ForceNew: true, + Type: schema.TypeString, + Required: true, + ForceNew: true, + Description: `Provider peering service that is managing peering connectivity for a service provider organization. For Google services that support this functionality it is 'servicenetworking.googleapis.com'.`, }, "reserved_peering_ranges": { - Type: schema.TypeList, - Required: true, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeList, + Required: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Description: `Named IP address range(s) of PEERING type reserved for this service provider. 
Note that invoking this method with a different range when connection is already established will not reallocate already provisioned service producer subnetworks.`, }, "peering": { Type: schema.TypeString, @@ -88,7 +98,7 @@ func resourceServiceNetworkingConnectionCreate(d *schema.ResourceData, meta inte return err } - if err := serviceNetworkingOperationWait(config, op, "Create Service Networking Connection"); err != nil { + if err := serviceNetworkingOperationWaitTime(config, op, "Create Service Networking Connection", d.Timeout(schema.TimeoutCreate)); err != nil { return err } @@ -170,7 +180,7 @@ func resourceServiceNetworkingConnectionUpdate(d *schema.ResourceData, meta inte if err != nil { return err } - if err := serviceNetworkingOperationWait(config, op, "Update Service Networking Connection"); err != nil { + if err := serviceNetworkingOperationWaitTime(config, op, "Update Service Networking Connection", d.Timeout(schema.TimeoutUpdate)); err != nil { return err } } @@ -197,7 +207,7 @@ func resourceServiceNetworkingConnectionDelete(d *schema.ResourceData, meta inte } project := networkFieldValue.Project - res, err := sendRequestWithTimeout(config, "POST", project, url, obj, d.Timeout(schema.TimeoutUpdate)) + res, err := sendRequestWithTimeout(config, "POST", project, url, obj, d.Timeout(schema.TimeoutDelete)) if err != nil { return handleNotFoundError(err, d, fmt.Sprintf("ServiceNetworkingConnection %q", d.Id())) } @@ -209,8 +219,7 @@ func resourceServiceNetworkingConnectionDelete(d *schema.ResourceData, meta inte } err = computeOperationWaitTime( - config, op, project, "Updating Network", - int(d.Timeout(schema.TimeoutUpdate).Minutes())) + config, op, project, "Updating Network", d.Timeout(schema.TimeoutDelete)) if err != nil { return err } diff --git a/third_party/terraform/resources/resource_sql_database_instance.go.erb b/third_party/terraform/resources/resource_sql_database_instance.go.erb index cec33aeffcd2..039708373414 100644 --- a/third_party/terraform/resources/resource_sql_database_instance.go.erb +++ b/third_party/terraform/resources/resource_sql_database_instance.go.erb @@ -101,10 +101,11 @@ func resourceSqlDatabaseInstance() *schema.Resource { Schema: map[string]*schema.Schema{ "region": { - Type: schema.TypeString, - Optional: true, - Computed: true, - ForceNew: true, + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + Description: `The region the instance will sit in. Note, Cloud SQL is not available in all regions - choose from one of the options listed here. A valid region must be provided to use this resource. If a region is not provided in the resource definition, the provider region will be used instead, but this will be an apply-time error for instances if the provider region is not supported with Cloud SQL. If you choose not to provide the region argument for this resource, make sure you understand this.`, }, "settings": { @@ -114,24 +115,28 @@ func resourceSqlDatabaseInstance() *schema.Resource { Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "version": { - Type: schema.TypeInt, - Computed: true, + Type: schema.TypeInt, + Computed: true, + Description: `Used to make sure changes to the settings block are atomic.`, }, "tier": { - Type: schema.TypeString, - Required: true, + Type: schema.TypeString, + Required: true, + Description: `The machine type to use. See tiers for more details and supported versions. 
Postgres supports only shared-core machine types such as db-f1-micro, and custom machine types such as db-custom-2-13312. See the Custom Machine Type Documentation to learn about specifying custom machine types.`, }, "activation_policy": { Type: schema.TypeString, Optional: true, // Defaults differ between first and second gen instances - Computed: true, + Computed: true, + Description: `This specifies when the instance should be active. Can be either ALWAYS, NEVER or ON_DEMAND.`, }, "authorized_gae_applications": { - Type: schema.TypeList, - Optional: true, - Elem: &schema.Schema{Type: schema.TypeString}, - Deprecated: "This property is only applicable to First Generation instances, and First Generation instances are now deprecated.", + Type: schema.TypeList, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Deprecated: "This property is only applicable to First Generation instances, and First Generation instances are now deprecated.", + Description: `This property is only applicable to First Generation instances. First Generation instances are now deprecated, see https://cloud.google.com/sql/docs/mysql/deprecation-notice for information on how to upgrade to Second Generation instances. A list of Google App Engine (GAE) project names that are allowed to access this instance.`, }, "availability_type": { Type: schema.TypeString, @@ -142,6 +147,10 @@ func resourceSqlDatabaseInstance() *schema.Resource { // configuration. Computed: true, ValidateFunc: validation.StringInSlice([]string{"REGIONAL", "ZONAL"}, false), + Description: `The availability type of the Cloud SQL instance, high availability +(REGIONAL) or single zone (ZONAL). For MySQL instances, ensure that +settings.backup_configuration.enabled and +settings.backup_configuration.binary_log_enabled are both set to true.`, }, "backup_configuration": { Type: schema.TypeList, @@ -154,11 +163,13 @@ func resourceSqlDatabaseInstance() *schema.Resource { Type: schema.TypeBool, Optional: true, AtLeastOneOf: backupConfigurationKeys, + Description: `True if binary logging is enabled. If settings.backup_configuration.enabled is false, this must be as well. Cannot be used with Postgres.`, }, "enabled": { Type: schema.TypeBool, Optional: true, AtLeastOneOf: backupConfigurationKeys, + Description: `True if backup configuration is enabled.`, }, "start_time": { Type: schema.TypeString, @@ -166,20 +177,23 @@ func resourceSqlDatabaseInstance() *schema.Resource { // start_time is randomly assigned if not set Computed: true, AtLeastOneOf: backupConfigurationKeys, + Description: `HH:MM format time indicating when backup configuration starts.`, }, "location": { Type: schema.TypeString, Optional: true, AtLeastOneOf: backupConfigurationKeys, + Description: `Location of the backup configuration.`, }, }, }, }, "crash_safe_replication": { - Type: schema.TypeBool, - Optional: true, - Computed: true, - Deprecated: "This property is only applicable to First Generation instances, and First Generation instances are now deprecated.", + Type: schema.TypeBool, + Optional: true, + Computed: true, + Deprecated: "This property is only applicable to First Generation instances, and First Generation instances are now deprecated.", + Description: `This property is only applicable to First Generation instances. First Generation instances are now deprecated, see here for information on how to upgrade to Second Generation instances. 
Specific to read instances, indicates when crash-safe replication flags are enabled.`, }, "database_flags": { Type: schema.TypeList, @@ -187,12 +201,14 @@ func resourceSqlDatabaseInstance() *schema.Resource { Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "value": { - Type: schema.TypeString, - Required: true, + Type: schema.TypeString, + Required: true, + Description: `Value of the flag.`, }, "name": { - Type: schema.TypeString, - Required: true, + Type: schema.TypeString, + Required: true, + Description: `Name of the flag.`, }, }, }, @@ -202,18 +218,21 @@ func resourceSqlDatabaseInstance() *schema.Resource { Optional: true, Default: true, DiffSuppressFunc: suppressFirstGen, + Description: `Configuration to increase storage size automatically. Note that future terraform apply calls will attempt to resize the disk to the value specified in disk_size - if this is set, do not set disk_size.`, }, "disk_size": { Type: schema.TypeInt, Optional: true, // Defaults differ between first and second gen instances - Computed: true, + Computed: true, + Description: `The size of data disk, in GB. Size of a running instance cannot be reduced but can be increased.`, }, "disk_type": { Type: schema.TypeString, Optional: true, // Set computed instead of default because this property is for second-gen only. - Computed: true, + Computed: true, + Description: `The type of data disk: PD_SSD or PD_HDD.`, }, "ip_configuration": { Type: schema.TypeList, @@ -234,6 +253,7 @@ func resourceSqlDatabaseInstance() *schema.Resource { Optional: true, Default: true, AtLeastOneOf: ipConfigurationKeys, + Description: `Whether this Cloud SQL instance should be assigned a public IPV4 address. Either ipv4_enabled must be enabled or a private_network must be configured.`, }, "require_ssl": { Type: schema.TypeBool, @@ -246,6 +266,7 @@ func resourceSqlDatabaseInstance() *schema.Resource { ValidateFunc: orEmpty(validateRegexp(privateNetworkLinkRegex)), DiffSuppressFunc: compareSelfLinkRelativePaths, AtLeastOneOf: ipConfigurationKeys, + Description: `The VPC network from which the Cloud SQL instance is accessible for private IP. For example, projects/myProject/global/networks/default. Specifying a network enables private IP. Either ipv4_enabled must be enabled or a private_network must be configured. This setting can be updated, but it cannot be removed after it is set.`, }, }, }, @@ -261,11 +282,13 @@ func resourceSqlDatabaseInstance() *schema.Resource { Type: schema.TypeString, Optional: true, AtLeastOneOf: []string{"settings.0.location_preference.0.follow_gae_application", "settings.0.location_preference.0.zone"}, + Description: `A GAE application whose zone to remain in. 
Must be in the same region as this instance.`, }, "zone": { Type: schema.TypeString, Optional: true, AtLeastOneOf: []string{"settings.0.location_preference.0.follow_gae_application", "settings.0.location_preference.0.zone"}, + Description: `The preferred compute engine zone.`, }, }, }, @@ -281,51 +304,61 @@ func resourceSqlDatabaseInstance() *schema.Resource { Optional: true, ValidateFunc: validation.IntBetween(1, 7), AtLeastOneOf: maintenanceWindowKeys, + Description: `Day of week (1-7), starting on Monday.`, }, "hour": { Type: schema.TypeInt, Optional: true, ValidateFunc: validation.IntBetween(0, 23), AtLeastOneOf: maintenanceWindowKeys, + Description: `Hour of day (0-23), ignored if day not set.`, }, "update_track": { Type: schema.TypeString, Optional: true, AtLeastOneOf: maintenanceWindowKeys, + Description: `Receive updates earlier (canary) or later (stable).`, }, }, }, + Description: `Declares a one-hour maintenance window when an Instance can automatically restart to apply updates. The maintenance window is specified in UTC time.`, }, "pricing_plan": { - Type: schema.TypeString, - Optional: true, - Default: "PER_USE", + Type: schema.TypeString, + Optional: true, + Default: "PER_USE", + Description: `Pricing plan for this instance, can only be PER_USE.`, }, "replication_type": { - Type: schema.TypeString, - Optional: true, - Deprecated: "This property is only applicable to First Generation instances, and First Generation instances are now deprecated.", - Default: "SYNCHRONOUS", + Type: schema.TypeString, + Optional: true, + Deprecated: "This property is only applicable to First Generation instances, and First Generation instances are now deprecated.", + Default: "SYNCHRONOUS", + Description: `This property is only applicable to First Generation instances. First Generation instances are now deprecated, see here for information on how to upgrade to Second Generation instances. Replication type for this instance, can be one of ASYNCHRONOUS or SYNCHRONOUS.`, }, "user_labels": { - Type: schema.TypeMap, - Optional: true, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeMap, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Description: `A set of key/value user label pairs to assign to the instance.`, }, }, }, + Description: `The settings to use for the database. The configuration is detailed below.`, }, "connection_name": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `The connection name of the instance to be used in connection strings. For example, when connecting with Cloud SQL Proxy.`, }, "database_version": { - Type: schema.TypeString, - Optional: true, - Default: "MYSQL_5_6", - ForceNew: true, + Type: schema.TypeString, + Optional: true, + Default: "MYSQL_5_6", + ForceNew: true, + Description: `The MySQL, PostgreSQL or SQL Server (beta) version to use. Supported values include MYSQL_5_6, MYSQL_5_7, POSTGRES_9_6, POSTGRES_11, SQLSERVER_2017_STANDARD, SQLSERVER_2017_ENTERPRISE, SQLSERVER_2017_EXPRESS, SQLSERVER_2017_WEB. Database Version Policies includes an up-to-date reference of supported versions.`, }, <% unless version == 'ga' -%> @@ -339,14 +372,13 @@ func resourceSqlDatabaseInstance() *schema.Resource { <% end -%> - <% unless version == 'ga' -%> "root_password": { - Type: schema.TypeString, - Optional: true, - ForceNew: true, - Sensitive: true, + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Sensitive: true, + Description: `Initial root password. 
Required for MS SQL Server, ignored by MySQL and PostgreSQL.`, }, - <% end -%> "ip_address": { Type: schema.TypeList, @@ -370,39 +402,45 @@ func resourceSqlDatabaseInstance() *schema.Resource { }, "first_ip_address": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `The first IPv4 address of any type assigned. This is to support accessing the first address in the list in a terraform output when the resource is configured with a count.`, }, "public_ip_address": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `IPv4 address assigned. This is a workaround for an issue fixed in Terraform 0.12 but also provides a convenient way to access an IP of a specific type without performing filtering in a Terraform config.`, }, "private_ip_address": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `IPv4 address assigned. This is a workaround for an issue fixed in Terraform 0.12 but also provides a convenient way to access an IP of a specific type without performing filtering in a Terraform config.`, }, "name": { - Type: schema.TypeString, - Optional: true, - Computed: true, - ForceNew: true, + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + Description: `The name of the instance. If the name is left blank, Terraform will randomly generate one when the instance is first created. This is done because after a name is used, it cannot be reused for up to one week.`, }, "master_instance_name": { - Type: schema.TypeString, - Optional: true, - Computed: true, - ForceNew: true, + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + Description: `The name of the instance that will act as the master in the replication setup. Note, this requires the master to have binary_log_enabled set, as well as existing backups.`, }, "project": { - Type: schema.TypeString, - Optional: true, - Computed: true, - ForceNew: true, + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + Description: `The ID of the project in which the resource belongs. If it is not provided, the provider project is used.`, }, "replica_configuration": { @@ -418,42 +456,49 @@ func resourceSqlDatabaseInstance() *schema.Resource { Optional: true, ForceNew: true, AtLeastOneOf: replicaConfigurationKeys, + Description: `PEM representation of the trusted CA's x509 certificate.`, }, "client_certificate": { Type: schema.TypeString, Optional: true, ForceNew: true, AtLeastOneOf: replicaConfigurationKeys, + Description: `PEM representation of the slave's x509 certificate.`, }, "client_key": { Type: schema.TypeString, Optional: true, ForceNew: true, AtLeastOneOf: replicaConfigurationKeys, + Description: `PEM representation of the slave's private key. The corresponding public key is encoded in the client_certificate.`, }, "connect_retry_interval": { Type: schema.TypeInt, Optional: true, ForceNew: true, AtLeastOneOf: replicaConfigurationKeys, + Description: `The number of seconds between connect retries.`, }, "dump_file_path": { Type: schema.TypeString, Optional: true, ForceNew: true, AtLeastOneOf: replicaConfigurationKeys, + Description: `Path to a SQL file in GCS from which slave instances are created. 
Format is gs://bucket/filename.`, }, "failover_target": { Type: schema.TypeBool, Optional: true, ForceNew: true, AtLeastOneOf: replicaConfigurationKeys, + Description: `Specifies if the replica is the failover target. If the field is set to true the replica will be designated as a failover replica. If the master instance fails, the replica instance will be promoted as the new master instance.`, }, "master_heartbeat_period": { Type: schema.TypeInt, Optional: true, ForceNew: true, AtLeastOneOf: replicaConfigurationKeys, + Description: `Time in ms between replication heartbeats.`, }, "password": { Type: schema.TypeString, @@ -461,27 +506,32 @@ func resourceSqlDatabaseInstance() *schema.Resource { ForceNew: true, Sensitive: true, AtLeastOneOf: replicaConfigurationKeys, + Description: `Password for the replication connection.`, }, "ssl_cipher": { Type: schema.TypeString, Optional: true, ForceNew: true, AtLeastOneOf: replicaConfigurationKeys, + Description: `Permissible ciphers for use in SSL encryption.`, }, "username": { Type: schema.TypeString, Optional: true, ForceNew: true, AtLeastOneOf: replicaConfigurationKeys, + Description: `Username for replication connection.`, }, "verify_server_certificate": { Type: schema.TypeBool, Optional: true, ForceNew: true, AtLeastOneOf: replicaConfigurationKeys, + Description: `True if the master's common name value is checked during the SSL handshake.`, }, }, }, + Description: `The configuration for replication.`, }, "server_ca_cert": { Type: schema.TypeList, @@ -493,37 +543,44 @@ func resourceSqlDatabaseInstance() *schema.Resource { Type: schema.TypeString, Computed: true, AtLeastOneOf: serverCertsKeys, + Description: `The CA Certificate used to connect to the SQL Instance via SSL.`, }, "common_name": { Type: schema.TypeString, Computed: true, AtLeastOneOf: serverCertsKeys, + Description: `The CN valid for the CA Cert.`, }, "create_time": { Type: schema.TypeString, Computed: true, AtLeastOneOf: serverCertsKeys, + Description: `Creation time of the CA Cert.`, }, "expiration_time": { Type: schema.TypeString, Computed: true, AtLeastOneOf: serverCertsKeys, + Description: `Expiration time of the CA Cert.`, }, "sha1_fingerprint": { Type: schema.TypeString, Computed: true, AtLeastOneOf: serverCertsKeys, + Description: `SHA Fingerprint of the CA Cert.`, }, }, }, }, "service_account_email_address": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `The service account email address assigned to the instance.`, }, "self_link": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `The URI of the created resource.`, }, }, } @@ -582,12 +639,10 @@ func resourceSqlDatabaseInstanceCreate(d *schema.ResourceData, meta interface{}) ReplicaConfiguration: expandReplicaConfiguration(d.Get("replica_configuration").([]interface{})), } - <% unless version == 'ga' -%> // MSSQL Server require rootPassword to be set if strings.Contains(instance.DatabaseVersion, "SQLSERVER") { instance.RootPassword = d.Get("root_password").(string) } - <% end -%> // Modifying a replica during Create can cause problems if the master is // modified at the same time. 
Lock the master until we're done in order @@ -619,7 +674,7 @@ func resourceSqlDatabaseInstanceCreate(d *schema.ResourceData, meta interface{}) } d.SetId(id) - err = sqlAdminOperationWaitTime(config, op, project, "Create Instance", int(d.Timeout(schema.TimeoutCreate).Minutes())) + err = sqlAdminOperationWaitTime(config, op, project, "Create Instance", d.Timeout(schema.TimeoutCreate)) if err != nil { d.SetId("") return err @@ -646,7 +701,7 @@ func resourceSqlDatabaseInstanceCreate(d *schema.ResourceData, meta interface{}) err = retry(func() error { op, err = config.clientSqlAdmin.Users.Delete(project, instance.Name).Host(u.Host).Name(u.Name).Do() if err == nil { - err = sqlAdminOperationWaitTime(config, op, project, "Delete default root User", int(d.Timeout(schema.TimeoutCreate).Minutes())) + err = sqlAdminOperationWaitTime(config, op, project, "Delete default root User", d.Timeout(schema.TimeoutCreate)) } return err }) @@ -690,7 +745,8 @@ func expandSqlDatabaseInstanceSettings(configured []interface{}, secondGen bool) // 1st Generation instances don't support the disk_autoresize parameter // and it defaults to true - so we shouldn't set it if this is first gen if secondGen { - settings.StorageAutoResize = _settings["disk_autoresize"].(bool) + resize := _settings["disk_autoresize"].(bool) + settings.StorageAutoResize = &resize } return settings @@ -809,6 +865,7 @@ func expandBackupConfiguration(configured []interface{}) *sqladmin.BackupConfigu Enabled: _backupConfiguration["enabled"].(bool), StartTime: _backupConfiguration["start_time"].(string), Location: _backupConfiguration["location"].(string), + ForceSendFields: []string{"BinaryLogEnabled", "Enabled"}, } } @@ -912,7 +969,7 @@ func resourceSqlDatabaseInstanceUpdate(d *schema.ResourceData, meta interface{}) return fmt.Errorf("Error, failed to update instance settings for %s: %s", instance.Name, err) } - err = sqlAdminOperationWaitTime(config, op, project, "Update Instance", int(d.Timeout(schema.TimeoutUpdate).Minutes())) + err = sqlAdminOperationWaitTime(config, op, project, "Update Instance", d.Timeout(schema.TimeoutUpdate)) if err != nil { return err } @@ -938,17 +995,18 @@ func resourceSqlDatabaseInstanceDelete(d *schema.ResourceData, meta interface{}) var op *sqladmin.Operation err = retryTimeDuration(func() (rerr error) { op, rerr = config.clientSqlAdmin.Instances.Delete(project, d.Get("name").(string)).Do() - return rerr - }, d.Timeout(schema.TimeoutDelete)) + if rerr != nil { + return rerr + } + err = sqlAdminOperationWaitTime(config, op, project, "Delete Instance", d.Timeout(schema.TimeoutDelete)) + if err != nil { + return err + } + return nil + }, d.Timeout(schema.TimeoutDelete), isSqlOperationInProgressError, isSqlInternalError) if err != nil { return fmt.Errorf("Error, failed to delete instance %s: %s", d.Get("name").(string), err) } - - err = sqlAdminOperationWaitTime(config, op, project, "Delete Instance", int(d.Timeout(schema.TimeoutDelete).Minutes())) - if err != nil { - return err - } - return nil } diff --git a/third_party/terraform/resources/resource_sql_ssl_cert.go b/third_party/terraform/resources/resource_sql_ssl_cert.go index 73274b23e67e..a3470ce139f4 100644 --- a/third_party/terraform/resources/resource_sql_ssl_cert.go +++ b/third_party/terraform/resources/resource_sql_ssl_cert.go @@ -3,6 +3,7 @@ package google import ( "fmt" "log" + "time" "github.com/hashicorp/terraform-plugin-sdk/helper/schema" sqladmin "google.golang.org/api/sqladmin/v1beta4" @@ -16,60 +17,75 @@ func resourceSqlSslCert() *schema.Resource { 
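// Note on the pair of sqladmin request changes above: StorageAutoResize
// becoming a *bool and ForceSendFields being set on the backup configuration
// solve the same problem. The generated google.golang.org/api structs tag
// their fields with `json:",omitempty"`, so Go zero values (false, 0, "") are
// silently dropped from the request body; a pointer field or a ForceSendFields
// entry is needed to send an explicit false. A minimal sketch of the idea,
// using a hypothetical struct rather than the real sqladmin types:
//
//	type backupConfig struct {
//		Enabled         bool     `json:"enabled,omitempty"`
//		ForceSendFields []string `json:"-"`
//	}
//
// With omitempty alone, Enabled=false never reaches the API, so backups could
// not be explicitly disabled; listing "Enabled" in ForceSendFields (as done
// above) or modeling the field as a *bool restores that ability.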
SchemaVersion: 1, + Timeouts: &schema.ResourceTimeout{ + Create: schema.DefaultTimeout(10 * time.Minute), + Delete: schema.DefaultTimeout(10 * time.Minute), + }, + Schema: map[string]*schema.Schema{ "common_name": { - Type: schema.TypeString, - Required: true, - ForceNew: true, + Type: schema.TypeString, + Required: true, + ForceNew: true, + Description: `The common name to be used in the certificate to identify the client. Constrained to [a-zA-Z.-_ ]+. Changing this forces a new resource to be created.`, }, "instance": { - Type: schema.TypeString, - Required: true, - ForceNew: true, + Type: schema.TypeString, + Required: true, + ForceNew: true, + Description: `The name of the Cloud SQL instance. Changing this forces a new resource to be created.`, }, "project": { - Type: schema.TypeString, - Optional: true, - Computed: true, - ForceNew: true, + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + Description: `The ID of the project in which the resource belongs. If it is not provided, the provider project is used.`, }, "cert": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `The actual certificate data for this client certificate.`, }, "cert_serial_number": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `The serial number extracted from the certificate data.`, }, "create_time": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `The time when the certificate was created in RFC 3339 format, for example 2012-11-15T16:19:00.094Z.`, }, "expiration_time": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `The time when the certificate expires in RFC 3339 format, for example 2012-11-15T16:19:00.094Z.`, }, "private_key": { - Type: schema.TypeString, - Computed: true, - Sensitive: true, + Type: schema.TypeString, + Computed: true, + Sensitive: true, + Description: `The private key associated with the client certificate.`, }, "server_ca_cert": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `The CA cert of the server this client cert was generated from.`, }, "sha1_fingerprint": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `The SHA1 Fingerprint of the certificate.`, }, }, } @@ -98,7 +114,7 @@ func resourceSqlSslCertCreate(d *schema.ResourceData, meta interface{}) error { "ssl cert %s into instance %s: %s", commonName, instance, err) } - err = sqlAdminOperationWait(config, resp.Operation, project, "Create Ssl Cert") + err = sqlAdminOperationWaitTime(config, resp.Operation, project, "Create Ssl Cert", d.Timeout(schema.TimeoutCreate)) if err != nil { return fmt.Errorf("Error, failure waiting for creation of %q "+ "in %q: %s", commonName, instance, err) @@ -174,7 +190,7 @@ func resourceSqlSslCertDelete(d *schema.ResourceData, meta interface{}) error { instance, err) } - err = sqlAdminOperationWait(config, op, project, "Delete Ssl Cert") + err = sqlAdminOperationWaitTime(config, op, project, "Delete Ssl Cert", d.Timeout(schema.TimeoutDelete)) if err != nil { return fmt.Errorf("Error, failure waiting for deletion of ssl cert %q "+ diff --git a/third_party/terraform/resources/resource_sql_ssl_cert_test.go b/third_party/terraform/resources/resource_sql_ssl_cert_test.go index 77464b8d26fa..c952e5f0d111 100644 --- 
a/third_party/terraform/resources/resource_sql_ssl_cert_test.go +++ b/third_party/terraform/resources/resource_sql_ssl_cert_test.go @@ -4,7 +4,6 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" ) @@ -12,17 +11,17 @@ import ( func TestAccSqlClientCert_mysql(t *testing.T) { t.Parallel() - instance := acctest.RandomWithPrefix("i") - resource.Test(t, resource.TestCase{ + instance := fmt.Sprintf("tf-test-%s", randString(t, 10)) + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccSqlClientCertDestroy, + CheckDestroy: testAccSqlClientCertDestroyProducer(t), Steps: []resource.TestStep{ { Config: testGoogleSqlClientCert_mysql(instance), Check: resource.ComposeTestCheckFunc( - testAccCheckGoogleSqlClientCertExists("google_sql_ssl_cert.cert1"), - testAccCheckGoogleSqlClientCertExists("google_sql_ssl_cert.cert2"), + testAccCheckGoogleSqlClientCertExists(t, "google_sql_ssl_cert.cert1"), + testAccCheckGoogleSqlClientCertExists(t, "google_sql_ssl_cert.cert2"), ), }, }, @@ -32,25 +31,25 @@ func TestAccSqlClientCert_mysql(t *testing.T) { func TestAccSqlClientCert_postgres(t *testing.T) { t.Parallel() - instance := acctest.RandomWithPrefix("i") - resource.Test(t, resource.TestCase{ + instance := fmt.Sprintf("tf-test-%s", randString(t, 10)) + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccSqlClientCertDestroy, + CheckDestroy: testAccSqlClientCertDestroyProducer(t), Steps: []resource.TestStep{ { Config: testGoogleSqlClientCert_postgres(instance), Check: resource.ComposeTestCheckFunc( - testAccCheckGoogleSqlClientCertExists("google_sql_ssl_cert.cert"), + testAccCheckGoogleSqlClientCertExists(t, "google_sql_ssl_cert.cert"), ), }, }, }) } -func testAccCheckGoogleSqlClientCertExists(n string) resource.TestCheckFunc { +func testAccCheckGoogleSqlClientCertExists(t *testing.T, n string) resource.TestCheckFunc { return func(s *terraform.State) error { - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) rs, ok := s.RootModule().Resources[n] if !ok { return fmt.Errorf("Resource not found: %s", n) @@ -72,26 +71,28 @@ func testAccCheckGoogleSqlClientCertExists(n string) resource.TestCheckFunc { } } -func testAccSqlClientCertDestroy(s *terraform.State) error { - for _, rs := range s.RootModule().Resources { - config := testAccProvider.Meta().(*Config) - if rs.Type != "google_sql_ssl_cert" { - continue - } +func testAccSqlClientCertDestroyProducer(t *testing.T) func(s *terraform.State) error { + return func(s *terraform.State) error { + for _, rs := range s.RootModule().Resources { + config := googleProviderConfig(t) + if rs.Type != "google_sql_ssl_cert" { + continue + } - fingerprint := rs.Primary.Attributes["sha1_fingerprint"] - instance := rs.Primary.Attributes["instance"] - sslCert, _ := config.clientSqlAdmin.SslCerts.Get(config.Project, instance, fingerprint).Do() + fingerprint := rs.Primary.Attributes["sha1_fingerprint"] + instance := rs.Primary.Attributes["instance"] + sslCert, _ := config.clientSqlAdmin.SslCerts.Get(config.Project, instance, fingerprint).Do() - commonName := rs.Primary.Attributes["common_name"] - if sslCert != nil { - return fmt.Errorf("Client cert %q still exists, should have been destroyed", commonName) + commonName := rs.Primary.Attributes["common_name"] + if sslCert != 
nil { + return fmt.Errorf("Client cert %q still exists, should have been destroyed", commonName) + } + + return nil } return nil } - - return nil } func testGoogleSqlClientCert_mysql(instance string) string { diff --git a/third_party/terraform/resources/resource_sql_user.go b/third_party/terraform/resources/resource_sql_user.go index 30b67b313ab7..3c4a8d242c7b 100644 --- a/third_party/terraform/resources/resource_sql_user.go +++ b/third_party/terraform/resources/resource_sql_user.go @@ -4,6 +4,7 @@ import ( "fmt" "log" "strings" + "time" "github.com/hashicorp/terraform-plugin-sdk/helper/schema" sqladmin "google.golang.org/api/sqladmin/v1beta4" @@ -19,39 +20,50 @@ func resourceSqlUser() *schema.Resource { State: resourceSqlUserImporter, }, + Timeouts: &schema.ResourceTimeout{ + Create: schema.DefaultTimeout(10 * time.Minute), + Update: schema.DefaultTimeout(10 * time.Minute), + Delete: schema.DefaultTimeout(10 * time.Minute), + }, + SchemaVersion: 1, MigrateState: resourceSqlUserMigrateState, Schema: map[string]*schema.Schema{ "host": { - Type: schema.TypeString, - Optional: true, - ForceNew: true, + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Description: `The host the user can connect from. This is only supported for MySQL instances. Don't set this field for PostgreSQL instances. Can be an IP address. Changing this forces a new resource to be created.`, }, "instance": { - Type: schema.TypeString, - Required: true, - ForceNew: true, + Type: schema.TypeString, + Required: true, + ForceNew: true, + Description: `The name of the Cloud SQL instance. Changing this forces a new resource to be created.`, }, "name": { - Type: schema.TypeString, - Required: true, - ForceNew: true, + Type: schema.TypeString, + Required: true, + ForceNew: true, + Description: `The name of the user. Changing this forces a new resource to be created.`, }, "password": { - Type: schema.TypeString, - Optional: true, - Sensitive: true, + Type: schema.TypeString, + Optional: true, + Sensitive: true, + Description: `The password for the user. Can be updated. For Postgres instances this is a Required field.`, }, "project": { - Type: schema.TypeString, - Optional: true, - Computed: true, - ForceNew: true, + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + Description: `The ID of the project in which the resource belongs. If it is not provided, the provider project is used.`, }, }, } @@ -96,7 +108,7 @@ func resourceSqlUserCreate(d *schema.ResourceData, meta interface{}) error { // for which user.Host is an empty string. That's okay. 
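// The same timeout plumbing recurs across every resource touched in this
// change: a Timeouts block declares per-action defaults, and the CRUD
// functions hand d.Timeout(...) to the operation waiters as a time.Duration
// rather than a hand-converted minute count. A minimal sketch of how the SDK
// resolves that value (hypothetical resource, not one of the real ones here):
//
//	r := &schema.Resource{
//		Timeouts: &schema.ResourceTimeout{
//			Create: schema.DefaultTimeout(10 * time.Minute),
//		},
//	}
//	// Inside Create, d.Timeout(schema.TimeoutCreate) returns the user's
//	// `timeouts { create = "20m" }` override when one is set, and the
//	// DefaultTimeout value otherwise.
//
// Passing the duration straight through also drops the lossy
// int(d.Timeout(...).Minutes()) round-trip that the old wait helpers forced
// on callers.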
d.SetId(fmt.Sprintf("%s/%s/%s", user.Name, user.Host, user.Instance)) - err = sqlAdminOperationWait(config, op, project, "Insert User") + err = sqlAdminOperationWaitTime(config, op, project, "Insert User", d.Timeout(schema.TimeoutCreate)) if err != nil { return fmt.Errorf("Error, failure waiting for insertion of %s "+ @@ -189,7 +201,7 @@ func resourceSqlUserUpdate(d *schema.ResourceData, meta interface{}) error { "user %s into user %s: %s", name, instance, err) } - err = sqlAdminOperationWait(config, op, project, "Insert User") + err = sqlAdminOperationWaitTime(config, op, project, "Insert User", d.Timeout(schema.TimeoutUpdate)) if err != nil { return fmt.Errorf("Error, failure waiting for update of %s "+ @@ -220,8 +232,15 @@ func resourceSqlUserDelete(d *schema.ResourceData, meta interface{}) error { var op *sqladmin.Operation err = retryTimeDuration(func() error { op, err = config.clientSqlAdmin.Users.Delete(project, instance).Host(host).Name(name).Do() - return err - }, d.Timeout(schema.TimeoutDelete)) + if err != nil { + return err + } + + if err := sqlAdminOperationWaitTime(config, op, project, "Delete User", d.Timeout(schema.TimeoutDelete)); err != nil { + return err + } + return nil + }, d.Timeout(schema.TimeoutDelete), isSqlOperationInProgressError, isSqlInternalError) if err != nil { return fmt.Errorf("Error, failed to delete"+ @@ -229,13 +248,6 @@ func resourceSqlUserDelete(d *schema.ResourceData, meta interface{}) error { instance, err) } - err = sqlAdminOperationWait(config, op, project, "Delete User") - - if err != nil { - return fmt.Errorf("Error, failure waiting for deletion of %s "+ - "in %s: %s", name, instance, err) - } - return nil } diff --git a/third_party/terraform/resources/resource_storage_bucket.go b/third_party/terraform/resources/resource_storage_bucket.go index 7cd63c830d56..8302a7d388dc 100644 --- a/third_party/terraform/resources/resource_storage_bucket.go +++ b/third_party/terraform/resources/resource_storage_bucket.go @@ -37,9 +37,10 @@ func resourceStorageBucket() *schema.Resource { Schema: map[string]*schema.Schema{ "name": { - Type: schema.TypeString, - Required: true, - ForceNew: true, + Type: schema.TypeString, + Required: true, + ForceNew: true, + Description: `The name of the bucket.`, }, "encryption": { @@ -49,28 +50,33 @@ func resourceStorageBucket() *schema.Resource { Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "default_kms_key_name": { - Type: schema.TypeString, - Required: true, + Type: schema.TypeString, + Required: true, + Description: `A Cloud KMS key that will be used to encrypt objects inserted into this bucket, if no encryption method is specified. You must pay attention to whether the crypto key is available in the location that this bucket is created in. See the docs for more details.`, }, }, }, + Description: `The bucket's encryption configuration.`, }, "requester_pays": { - Type: schema.TypeBool, - Optional: true, + Type: schema.TypeBool, + Optional: true, + Description: `Enables Requester Pays on a storage bucket.`, }, "force_destroy": { - Type: schema.TypeBool, - Optional: true, - Default: false, + Type: schema.TypeBool, + Optional: true, + Default: false, + Description: `When deleting a bucket, this boolean option will delete all contained objects. 
If you try to delete a bucket that contains objects, Terraform will fail that run.`, }, "labels": { - Type: schema.TypeMap, - Optional: true, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeMap, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Description: `A set of key/value label pairs to assign to the bucket.`, }, "location": { @@ -81,29 +87,34 @@ func resourceStorageBucket() *schema.Resource { StateFunc: func(s interface{}) string { return strings.ToUpper(s.(string)) }, + Description: `The GCS location.`, }, "project": { - Type: schema.TypeString, - Optional: true, - Computed: true, - ForceNew: true, + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + Description: `The ID of the project in which the resource belongs. If it is not provided, the provider project is used.`, }, "self_link": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `The URI of the created resource.`, }, "url": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `The base URL of the bucket, in the format gs://<bucket-name>.`, }, "storage_class": { - Type: schema.TypeString, - Optional: true, - Default: "STANDARD", + Type: schema.TypeString, + Optional: true, + Default: "STANDARD", + Description: `The Storage Class of the new bucket. Supported values include: STANDARD, MULTI_REGIONAL, REGIONAL, NEARLINE, COLDLINE.`, }, "lifecycle_rule": { @@ -121,15 +132,18 @@ func resourceStorageBucket() *schema.Resource { Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "type": { - Type: schema.TypeString, - Required: true, + Type: schema.TypeString, + Required: true, + Description: `The type of the action of this Lifecycle Rule. Supported values include: Delete and SetStorageClass.`, }, "storage_class": { - Type: schema.TypeString, - Optional: true, + Type: schema.TypeString, + Optional: true, + Description: `The target Storage Class of objects affected by this Lifecycle Rule. Supported values include: MULTI_REGIONAL, REGIONAL, NEARLINE, COLDLINE.`, }, }, }, + Description: `The Lifecycle Rule's action configuration. A single block of this type is supported.`, }, "condition": { Type: schema.TypeSet, @@ -140,12 +154,14 @@ func resourceStorageBucket() *schema.Resource { Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "age": { - Type: schema.TypeInt, - Optional: true, + Type: schema.TypeInt, + Optional: true, + Description: `Minimum age of an object in days to satisfy this condition.`, }, "created_before": { - Type: schema.TypeString, - Optional: true, + Type: schema.TypeString, + Optional: true, + Description: `Creation date of an object in RFC 3339 (e.g. 2017-06-13) to satisfy this condition.`, }, "is_live": { Type: schema.TypeBool, @@ -158,22 +174,27 @@ func resourceStorageBucket() *schema.Resource { Computed: true, Optional: true, ValidateFunc: validation.StringInSlice([]string{"LIVE", "ARCHIVED", "ANY", ""}, false), + Description: `Match to live and/or archived objects. Unversioned buckets have only live objects. Supported values include: "LIVE", "ARCHIVED", "ANY".`, }, "matches_storage_class": { - Type: schema.TypeList, - Optional: true, - MinItems: 1, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeList, + Optional: true, + MinItems: 1, + Elem: &schema.Schema{Type: schema.TypeString}, + Description: `Storage Class of objects to satisfy this condition. 
Supported values include: MULTI_REGIONAL, REGIONAL, NEARLINE, COLDLINE, STANDARD, DURABLE_REDUCED_AVAILABILITY.`, }, "num_newer_versions": { - Type: schema.TypeInt, - Optional: true, + Type: schema.TypeInt, + Optional: true, + Description: `Relevant only for versioned objects. The number of newer versions of an object to satisfy this condition.`, }, }, }, + Description: `The Lifecycle Rule's condition configuration.`, }, }, }, + Description: `The bucket's Lifecycle Rules configuration.`, }, "versioning": { @@ -183,11 +204,13 @@ func resourceStorageBucket() *schema.Resource { Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "enabled": { - Type: schema.TypeBool, - Required: true, + Type: schema.TypeBool, + Required: true, + Description: `While set to true, versioning is fully enabled for this bucket.`, }, }, }, + Description: `The bucket's Versioning configuration.`, }, "website": { @@ -200,14 +223,17 @@ func resourceStorageBucket() *schema.Resource { Type: schema.TypeString, Optional: true, AtLeastOneOf: []string{"website.0.not_found_page", "website.0.main_page_suffix"}, + Description: `Behaves as the bucket's directory index where missing objects are treated as potential directories.`, }, "not_found_page": { Type: schema.TypeString, Optional: true, AtLeastOneOf: []string{"website.0.main_page_suffix", "website.0.not_found_page"}, + Description: `The custom object to return when a requested resource is not found.`, }, }, }, + Description: `Configuration if the bucket acts as a website.`, }, "retention_policy": { @@ -217,17 +243,20 @@ func resourceStorageBucket() *schema.Resource { Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "is_locked": { - Type: schema.TypeBool, - Optional: true, - Default: false, + Type: schema.TypeBool, + Optional: true, + Default: false, + Description: `If set to true, the bucket will be locked and permanently restrict edits to the bucket's retention policy. Caution: Locking a bucket is an irreversible action.`, }, "retention_period": { Type: schema.TypeInt, Required: true, ValidateFunc: validation.IntBetween(1, math.MaxInt32), + Description: `The period of time, in seconds, that objects in the bucket must be retained and cannot be deleted, overwritten, or archived. The value must be less than 3,155,760,000 seconds.`, }, }, }, + Description: `Configuration of the bucket's data retention policy for how long objects in the bucket should be retained.`, }, "cors": { @@ -241,6 +270,7 @@ func resourceStorageBucket() *schema.Resource { Elem: &schema.Schema{ Type: schema.TypeString, }, + Description: `The list of Origins eligible to receive CORS response headers. 
Note: "*" is permitted in the list of origins, and means "any Origin".`, }, "method": { Type: schema.TypeList, @@ -248,6 +278,7 @@ func resourceStorageBucket() *schema.Resource { Elem: &schema.Schema{ Type: schema.TypeString, }, + Description: `The list of HTTP methods on which to include CORS response headers, (GET, OPTIONS, POST, etc) Note: "*" is permitted in the list of methods, and means "any method".`, }, "response_header": { Type: schema.TypeList, @@ -255,13 +286,16 @@ func resourceStorageBucket() *schema.Resource { Elem: &schema.Schema{ Type: schema.TypeString, }, + Description: `The list of HTTP headers other than the simple response headers to give permission for the user-agent to share across domains.`, }, "max_age_seconds": { - Type: schema.TypeInt, - Optional: true, + Type: schema.TypeInt, + Optional: true, + Description: `The value, in seconds, to return in the Access-Control-Max-Age header used in preflight responses.`, }, }, }, + Description: `The bucket's Cross-Origin Resource Sharing (CORS) configuration.`, }, "default_event_based_hold": { @@ -276,21 +310,25 @@ func resourceStorageBucket() *schema.Resource { Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "log_bucket": { - Type: schema.TypeString, - Required: true, + Type: schema.TypeString, + Required: true, + Description: `The bucket that will receive log objects.`, }, "log_object_prefix": { - Type: schema.TypeString, - Optional: true, - Computed: true, + Type: schema.TypeString, + Optional: true, + Computed: true, + Description: `The object prefix for log objects. If it's not provided, by default GCS sets this to this bucket's name.`, }, }, }, + Description: `The bucket's Access & Storage Logs configuration.`, }, "bucket_policy_only": { - Type: schema.TypeBool, - Optional: true, - Computed: true, + Type: schema.TypeBool, + Optional: true, + Computed: true, + Description: `Enables Bucket Policy Only access to a bucket.`, }, }, } @@ -645,7 +683,7 @@ func resourceStorageBucketDelete(d *schema.ResourceData, meta interface{}) error } if !d.Get("force_destroy").(bool) { - deleteErr := errors.New("Error trying to delete a bucket containing objects without `force_destroy` set to true") + deleteErr := fmt.Errorf("Error trying to delete bucket %s containing objects without `force_destroy` set to true", bucket) log.Printf("Error! %s : %s\n\n", bucket, deleteErr) return deleteErr } diff --git a/third_party/terraform/resources/resource_storage_bucket_acl.go b/third_party/terraform/resources/resource_storage_bucket_acl.go index d352a1e2c47f..aa1b5aed893b 100644 --- a/third_party/terraform/resources/resource_storage_bucket_acl.go +++ b/third_party/terraform/resources/resource_storage_bucket_acl.go @@ -21,14 +21,16 @@ func resourceStorageBucketAcl() *schema.Resource { Schema: map[string]*schema.Schema{ "bucket": { - Type: schema.TypeString, - Required: true, - ForceNew: true, + Type: schema.TypeString, + Required: true, + ForceNew: true, + Description: `The name of the bucket it applies to.`, }, "default_acl": { - Type: schema.TypeString, - Optional: true, + Type: schema.TypeString, + Optional: true, + Description: `Configure this ACL to be the default ACL.`, }, "predefined_acl": { @@ -36,6 +38,7 @@ func resourceStorageBucketAcl() *schema.Resource { Optional: true, ForceNew: true, ConflictsWith: []string{"role_entity"}, + Description: `The canned GCS ACL to apply. 
Must be set if role_entity is not.`, }, "role_entity": { @@ -44,6 +47,7 @@ func resourceStorageBucketAcl() *schema.Resource { Computed: true, Elem: &schema.Schema{Type: schema.TypeString}, ConflictsWith: []string{"predefined_acl"}, + Description: `List of role/entity pairs in the form ROLE:entity. See GCS Bucket ACL documentation for more details. Must be set if predefined_acl is not.`, }, }, } diff --git a/third_party/terraform/resources/resource_storage_bucket_object.go b/third_party/terraform/resources/resource_storage_bucket_object.go index f3e7c02a35a6..bbe85b682d55 100644 --- a/third_party/terraform/resources/resource_storage_bucket_object.go +++ b/third_party/terraform/resources/resource_storage_bucket_object.go @@ -25,46 +25,53 @@ func resourceStorageBucketObject() *schema.Resource { Schema: map[string]*schema.Schema{ "bucket": { - Type: schema.TypeString, - Required: true, - ForceNew: true, + Type: schema.TypeString, + Required: true, + ForceNew: true, + Description: `The name of the containing bucket.`, }, "name": { - Type: schema.TypeString, - Required: true, - ForceNew: true, + Type: schema.TypeString, + Required: true, + ForceNew: true, + Description: `The name of the object. If you're interpolating the name of this object, see output_name instead.`, }, "cache_control": { - Type: schema.TypeString, - ForceNew: true, - Optional: true, + Type: schema.TypeString, + ForceNew: true, + Optional: true, + Description: `Cache-Control directive to specify caching behavior of object data. If omitted and object is accessible to all anonymous users, the default will be public, max-age=3600`, }, "content_disposition": { - Type: schema.TypeString, - ForceNew: true, - Optional: true, + Type: schema.TypeString, + ForceNew: true, + Optional: true, + Description: `Content-Disposition of the object data.`, }, "content_encoding": { - Type: schema.TypeString, - ForceNew: true, - Optional: true, + Type: schema.TypeString, + ForceNew: true, + Optional: true, + Description: `Content-Encoding of the object data.`, }, "content_language": { - Type: schema.TypeString, - ForceNew: true, - Optional: true, + Type: schema.TypeString, + ForceNew: true, + Optional: true, + Description: `Content-Language of the object data.`, }, "content_type": { - Type: schema.TypeString, - Optional: true, - ForceNew: true, - Computed: true, + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Computed: true, + Description: `Content-Type of the object data. Defaults to "application/octet-stream" or "text/plain; charset=utf-8".`, }, "content": { @@ -73,16 +80,19 @@ func resourceStorageBucketObject() *schema.Resource { ForceNew: true, ConflictsWith: []string{"source"}, Sensitive: true, + Description: `Data as string to be uploaded. Must be defined if source is not. Note: The content field is marked as sensitive. To view the raw contents of the object, please define an output.`, }, "crc32c": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `Base 64 CRC32 hash of the uploaded data.`, }, "md5hash": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `Base 64 MD5 hash of the uploaded data.`, }, "source": { @@ -90,6 +100,7 @@ func resourceStorageBucketObject() *schema.Resource { Optional: true, ForceNew: true, ConflictsWith: []string{"content"}, + Description: `A path to the data you want to upload. 
Must be defined if content is not.`, }, // Detect changes to local file or changes made outside of Terraform to the file stored on the server. @@ -133,28 +144,32 @@ func resourceStorageBucketObject() *schema.Resource { }, "storage_class": { - Type: schema.TypeString, - Optional: true, - ForceNew: true, - Computed: true, + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Computed: true, + Description: `The StorageClass of the new bucket object. Supported values include: MULTI_REGIONAL, REGIONAL, NEARLINE, COLDLINE. If not provided, this defaults to the bucket's default storage class or to a standard class.`, }, "metadata": { - Type: schema.TypeMap, - Optional: true, - ForceNew: true, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeMap, + Optional: true, + ForceNew: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Description: `User-provided metadata, in key/value pairs.`, }, "self_link": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `A url reference to this object.`, }, // https://github.com/hashicorp/terraform/issues/19052 "output_name": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `The name of the object. Use this field in interpolations with google_storage_object_acl to recreate google_storage_object_acl resources when your google_storage_bucket_object is recreated.`, }, }, } @@ -206,6 +221,10 @@ func resourceStorageBucketObjectCreate(d *schema.ResourceData, meta interface{}) object.ContentType = v.(string) } + if v, ok := d.GetOk("metadata"); ok { + object.Metadata = convertStringMap(v.(map[string]interface{})) + } + if v, ok := d.GetOk("storage_class"); ok { object.StorageClass = v.(string) } @@ -249,6 +268,7 @@ func resourceStorageBucketObjectRead(d *schema.ResourceData, meta interface{}) e d.Set("storage_class", res.StorageClass) d.Set("self_link", res.SelfLink) d.Set("output_name", res.Name) + d.Set("metadata", res.Metadata) d.SetId(objectGetId(res)) diff --git a/third_party/terraform/resources/resource_storage_notification.go b/third_party/terraform/resources/resource_storage_notification.go index fb7bfc56981b..570406fb9d59 100644 --- a/third_party/terraform/resources/resource_storage_notification.go +++ b/third_party/terraform/resources/resource_storage_notification.go @@ -20,9 +20,10 @@ func resourceStorageNotification() *schema.Resource { Schema: map[string]*schema.Schema{ "bucket": { - Type: schema.TypeString, - Required: true, - ForceNew: true, + Type: schema.TypeString, + Required: true, + ForceNew: true, + Description: `The name of the bucket.`, }, "payload_format": { @@ -30,6 +31,7 @@ func resourceStorageNotification() *schema.Resource { Required: true, ForceNew: true, ValidateFunc: validation.StringInSlice([]string{"JSON_API_V1", "NONE"}, false), + Description: `The desired content of the Payload. One of "JSON_API_V1" or "NONE".`, }, "topic": { @@ -37,6 +39,7 @@ func resourceStorageNotification() *schema.Resource { Required: true, ForceNew: true, DiffSuppressFunc: compareSelfLinkOrResourceName, + Description: `The Cloud PubSub topic to which this subscription publishes. Expects either the topic name, assumed to belong to the default GCP provider project, or the project-level name, i.e. projects/my-gcp-project/topics/my-topic or my-topic. 
If the project is not set in the provider, you will need to use the project-level name.`, }, "custom_attributes": { @@ -46,6 +49,7 @@ Elem: &schema.Schema{ Type: schema.TypeString, }, + Description: `A set of key/value attribute pairs to attach to each Cloud PubSub message published for this notification subscription.`, }, "event_types": { @@ -58,22 +62,26 @@ "OBJECT_FINALIZE", "OBJECT_METADATA_UPDATE", "OBJECT_DELETE", "OBJECT_ARCHIVE"}, false), }, + Description: `List of event type filters for this notification config. If not specified, Cloud Storage will send notifications for all event types. The valid types are: "OBJECT_FINALIZE", "OBJECT_METADATA_UPDATE", "OBJECT_DELETE", "OBJECT_ARCHIVE".`, }, "object_name_prefix": { - Type: schema.TypeString, - Optional: true, - ForceNew: true, + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Description: `Specifies a prefix path filter for this notification config. Cloud Storage will only send notifications for objects in this bucket whose names begin with the specified prefix.`, }, "notification_id": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `The ID of the created notification.`, }, "self_link": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `The URI of the created resource.`, }, }, } diff --git a/third_party/terraform/resources/resource_storage_transfer_job.go b/third_party/terraform/resources/resource_storage_transfer_job.go index 494e330b77da..e0622ef59f51 100644 --- a/third_party/terraform/resources/resource_storage_transfer_job.go +++ b/third_party/terraform/resources/resource_storage_transfer_job.go @@ -44,19 +44,22 @@ func resourceStorageTransferJob() *schema.Resource { Schema: map[string]*schema.Schema{ "name": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `The name of the Transfer Job.`, }, "description": { Type: schema.TypeString, Required: true, ValidateFunc: validation.StringLenBetween(0, 1024), + Description: `Unique description to identify the Transfer Job.`, }, "project": { - Type: schema.TypeString, - Optional: true, - Computed: true, - ForceNew: true, + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + Description: `The project in which the resource belongs. 
If it is not provided, the provider project is used.`, }, "transfer_spec": { Type: schema.TypeList, @@ -67,10 +70,11 @@ func resourceStorageTransferJob() *schema.Resource { "object_conditions": objectConditionsSchema(), "transfer_options": transferOptionsSchema(), "gcs_data_sink": { - Type: schema.TypeList, - Optional: true, - MaxItems: 1, - Elem: gcsDataSchema(), + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: gcsDataSchema(), + Description: `A Google Cloud Storage data sink.`, }, "gcs_data_source": { Type: schema.TypeList, @@ -78,6 +82,7 @@ MaxItems: 1, Elem: gcsDataSchema(), ExactlyOneOf: transferSpecDataSourceKeys, + Description: `A Google Cloud Storage data source.`, }, "aws_s3_data_source": { Type: schema.TypeList, @@ -85,6 +90,7 @@ MaxItems: 1, Elem: awsS3DataSchema(), ExactlyOneOf: transferSpecDataSourceKeys, + Description: `An AWS S3 data source.`, }, "http_data_source": { Type: schema.TypeList, @@ -92,9 +98,11 @@ MaxItems: 1, Elem: httpDataSchema(), ExactlyOneOf: transferSpecDataSourceKeys, + Description: `An HTTP URL data source.`, }, }, }, + Description: `Transfer specification.`, }, "schedule": { Type: schema.TypeList, @@ -103,18 +111,20 @@ func resourceStorageTransferJob() *schema.Resource { Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "schedule_start_date": { - Type: schema.TypeList, - Required: true, - ForceNew: true, - MaxItems: 1, - Elem: dateObjectSchema(), + Type: schema.TypeList, + Required: true, + ForceNew: true, + MaxItems: 1, + Elem: dateObjectSchema(), + Description: `The first day the recurring transfer is scheduled to run. If schedule_start_date is in the past, the transfer will run for the first time on the following day.`, }, "schedule_end_date": { - Type: schema.TypeList, - Optional: true, - ForceNew: true, - MaxItems: 1, - Elem: dateObjectSchema(), + Type: schema.TypeList, + Optional: true, + ForceNew: true, + MaxItems: 1, + Elem: dateObjectSchema(), + Description: `The last day the recurring transfer will be run. If schedule_end_date is the same as schedule_start_date, the transfer will be executed only once.`, }, "start_time_of_day": { Type: schema.TypeList, @@ -123,27 +133,33 @@ func resourceStorageTransferJob() *schema.Resource { MaxItems: 1, Elem: timeObjectSchema(), DiffSuppressFunc: diffSuppressEmptyStartTimeOfDay, + Description: `The time in UTC at which the transfer will be scheduled to start in a day. Transfers may start later than this time. If not specified, recurring and one-time transfers that are scheduled to run today will run immediately; recurring transfers that are scheduled to run on a future date will start at approximately midnight UTC on that date. Note that when configuring a transfer with the Cloud Platform Console, the transfer's start time in a day is specified in your local timezone.`, }, }, }, + Description: `Schedule specification defining when the Transfer Job should be scheduled to start and end, and what time to run.`, }, "status": { Type: schema.TypeString, Optional: true, Default: "ENABLED", ValidateFunc: validation.StringInSlice([]string{"ENABLED", "DISABLED", "DELETED"}, false), + Description: `Status of the job. Default: ENABLED. NOTE: The effect of the new job status takes place during a subsequent job run. 
For example, if you change the job status from ENABLED to DISABLED, and an operation spawned by the transfer is running, the status change would not affect the current operation.`, }, "creation_time": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `When the Transfer Job was created.`, }, "last_modification_time": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `When the Transfer Job was last modified.`, }, "deletion_time": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Description: `When the Transfer Job was deleted.`, }, }, } @@ -161,12 +177,14 @@ func objectConditionsSchema() *schema.Schema { ValidateFunc: validateDuration(), Optional: true, AtLeastOneOf: objectConditionsKeys, + Description: `A duration in seconds with up to nine fractional digits, terminated by 's'. Example: "3.5s".`, }, "max_time_elapsed_since_last_modification": { Type: schema.TypeString, ValidateFunc: validateDuration(), Optional: true, AtLeastOneOf: objectConditionsKeys, + Description: `A duration in seconds with up to nine fractional digits, terminated by 's'. Example: "3.5s".`, }, "include_prefixes": { Type: schema.TypeList, @@ -176,6 +194,7 @@ func objectConditionsSchema() *schema.Schema { MaxItems: 1000, Type: schema.TypeString, }, + Description: `If include_prefixes is specified, objects that satisfy the object conditions must have names that start with one of the include_prefixes and that do not start with any of the exclude_prefixes. If include_prefixes is not specified, all objects except those that have names starting with one of the exclude_prefixes must satisfy the object conditions.`, }, "exclude_prefixes": { Type: schema.TypeList, @@ -185,9 +204,11 @@ func objectConditionsSchema() *schema.Schema { MaxItems: 1000, Type: schema.TypeString, }, + Description: `exclude_prefixes must follow the requirements described for include_prefixes.`, }, }, }, + Description: `Only objects that satisfy these object conditions are included in the set of data source and data sink objects. Object conditions based on objects' last_modification_time do not exclude objects in a data sink.`, } } @@ -202,21 +223,25 @@ func transferOptionsSchema() *schema.Schema { Type: schema.TypeBool, Optional: true, AtLeastOneOf: transferOptionsKeys, + Description: `Whether overwriting objects that already exist in the sink is allowed.`, }, "delete_objects_unique_in_sink": { Type: schema.TypeBool, Optional: true, AtLeastOneOf: transferOptionsKeys, ConflictsWith: []string{"transfer_spec.transfer_options.delete_objects_from_source_after_transfer"}, + Description: `Whether objects that exist only in the sink should be deleted. Note that this option and delete_objects_from_source_after_transfer are mutually exclusive.`, }, "delete_objects_from_source_after_transfer": { Type: schema.TypeBool, Optional: true, AtLeastOneOf: transferOptionsKeys, ConflictsWith: []string{"transfer_spec.transfer_options.delete_objects_unique_in_sink"}, + Description: `Whether objects should be deleted from the source after they are transferred to the sink. Note that this option and delete_objects_unique_in_sink are mutually exclusive.`, }, }, }, + Description: `Characteristics of how to treat files from data source and sink during job. 
+	Description: `Characteristics of how to treat files from data source and sink during job. If the option delete_objects_unique_in_sink is true, object conditions based on objects' last_modification_time are ignored and do not exclude objects in a data source or a data sink.`,
 }
}

@@ -228,24 +253,28 @@ func timeObjectSchema() *schema.Resource {
 				Required:     true,
 				ForceNew:     true,
 				ValidateFunc: validation.IntBetween(0, 24),
+				Description:  `Hours of day in 24 hour format. Should be from 0 to 23.`,
 			},
 			"minutes": {
 				Type:         schema.TypeInt,
 				Required:     true,
 				ForceNew:     true,
 				ValidateFunc: validation.IntBetween(0, 59),
+				Description:  `Minutes of hour of day. Must be from 0 to 59.`,
 			},
 			"seconds": {
 				Type:         schema.TypeInt,
 				Required:     true,
 				ForceNew:     true,
 				ValidateFunc: validation.IntBetween(0, 60),
+				Description:  `Seconds of minutes of the time. Must normally be from 0 to 59.`,
 			},
 			"nanos": {
 				Type:         schema.TypeInt,
 				Required:     true,
 				ForceNew:     true,
 				ValidateFunc: validation.IntBetween(0, 999999999),
+				Description:  `Fractions of seconds in nanoseconds. Must be from 0 to 999,999,999.`,
 			},
 		},
 	}
@@ -259,6 +288,7 @@ func dateObjectSchema() *schema.Resource {
 				Required:     true,
 				ForceNew:     true,
 				ValidateFunc: validation.IntBetween(0, 9999),
+				Description:  `Year of date. Must be from 1 to 9999.`,
 			},

 			"month": {
@@ -266,6 +296,7 @@ func dateObjectSchema() *schema.Resource {
 				Required:     true,
 				ForceNew:     true,
 				ValidateFunc: validation.IntBetween(1, 12),
+				Description:  `Month of year. Must be from 1 to 12.`,
 			},

 			"day": {
@@ -273,6 +304,7 @@ func dateObjectSchema() *schema.Resource {
 				Required:     true,
 				ForceNew:     true,
 				ValidateFunc: validation.IntBetween(0, 31),
+				Description:  `Day of month. Must be from 1 to 31 and valid for the year and month.`,
 			},
 		},
 	}
@@ -282,8 +314,9 @@ func gcsDataSchema() *schema.Resource {
 	return &schema.Resource{
 		Schema: map[string]*schema.Schema{
 			"bucket_name": {
-				Required: true,
-				Type:     schema.TypeString,
+				Required:    true,
+				Type:        schema.TypeString,
+				Description: `Google Cloud Storage bucket name.`,
 			},
 		},
 	}
@@ -293,8 +326,9 @@ func awsS3DataSchema() *schema.Resource {
 	return &schema.Resource{
 		Schema: map[string]*schema.Schema{
 			"bucket_name": {
-				Required: true,
-				Type:     schema.TypeString,
+				Required:    true,
+				Type:        schema.TypeString,
+				Description: `S3 Bucket name.`,
 			},
 			"aws_access_key": {
 				Type: schema.TypeList,
@@ -303,17 +337,20 @@ func awsS3DataSchema() *schema.Resource {
 				Elem: &schema.Resource{
 					Schema: map[string]*schema.Schema{
 						"access_key_id": {
-							Type:      schema.TypeString,
-							Required:  true,
-							Sensitive: true,
+							Type:        schema.TypeString,
+							Required:    true,
+							Sensitive:   true,
+							Description: `AWS Key ID.`,
 						},
 						"secret_access_key": {
-							Type:      schema.TypeString,
-							Required:  true,
-							Sensitive: true,
+							Type:        schema.TypeString,
+							Required:    true,
+							Sensitive:   true,
+							Description: `AWS Secret Access Key.`,
 						},
 					},
 				},
+				Description: `AWS credentials block.`,
 			},
 		},
 	}
@@ -323,8 +360,9 @@ func httpDataSchema() *schema.Resource {
 	return &schema.Resource{
 		Schema: map[string]*schema.Schema{
 			"list_url": {
-				Type:     schema.TypeString,
-				Required: true,
+				Type:        schema.TypeString,
+				Required:    true,
+				Description: `The URL that points to the file that stores the object list entries. This file must allow public access. Currently, only URLs with HTTP and HTTPS schemes are supported.`,
 			},
 		},
 	}
diff --git a/third_party/terraform/resources/resource_usage_export_bucket.go b/third_party/terraform/resources/resource_usage_export_bucket.go
index ccd964041249..1502fb410b6e 100644
--- a/third_party/terraform/resources/resource_usage_export_bucket.go
+++ b/third_party/terraform/resources/resource_usage_export_bucket.go
@@ -3,6 +3,7 @@ package google
 import (
 	"fmt"
 	"log"
+	"time"

 	"github.com/hashicorp/terraform-plugin-sdk/helper/schema"
 	"google.golang.org/api/compute/v1"
@@ -17,22 +18,30 @@ func resourceProjectUsageBucket() *schema.Resource {
 			State: resourceProjectUsageBucketImportState,
 		},

+		Timeouts: &schema.ResourceTimeout{
+			Create: schema.DefaultTimeout(4 * time.Minute),
+			Delete: schema.DefaultTimeout(4 * time.Minute),
+		},
+
 		Schema: map[string]*schema.Schema{
 			"bucket_name": {
-				Type:     schema.TypeString,
-				Required: true,
-				ForceNew: true,
+				Type:        schema.TypeString,
+				Required:    true,
+				ForceNew:    true,
+				Description: `The bucket to store reports in.`,
 			},
 			"prefix": {
-				Type:     schema.TypeString,
-				Optional: true,
-				ForceNew: true,
+				Type:        schema.TypeString,
+				Optional:    true,
+				ForceNew:    true,
+				Description: `A prefix for the reports, for instance, the project name.`,
 			},
 			"project": {
-				Type:     schema.TypeString,
-				Optional: true,
-				Computed: true,
-				ForceNew: true,
+				Type:        schema.TypeString,
+				Optional:    true,
+				Computed:    true,
+				ForceNew:    true,
+				Description: `The project to set the export bucket on. If it is not provided, the provider project is used.`,
 			},
 		},
 	}
@@ -79,7 +88,7 @@ func resourceProjectUsageBucketCreate(d *schema.ResourceData, meta interface{})
 		return err
 	}
 	d.SetId(project)
-	err = computeOperationWait(config, op, project, "Setting usage export bucket.")
+	err = computeOperationWaitTime(config, op, project, "Setting usage export bucket.", d.Timeout(schema.TimeoutCreate))
 	if err != nil {
 		d.SetId("")
 		return err
@@ -103,8 +112,8 @@ func resourceProjectUsageBucketDelete(d *schema.ResourceData, meta interface{})
 		return err
 	}

-	err = computeOperationWait(config, op, project,
-		"Setting usage export bucket to nil, automatically disabling usage export.")
+	err = computeOperationWaitTime(config, op, project,
+		"Setting usage export bucket to nil, automatically disabling usage export.", d.Timeout(schema.TimeoutDelete))
 	if err != nil {
 		return err
 	}
diff --git a/third_party/terraform/scripts/sidebar/sidebar.go b/third_party/terraform/scripts/sidebar/sidebar.go
new file mode 100644
index 000000000000..d51f059ab586
--- /dev/null
+++ b/third_party/terraform/scripts/sidebar/sidebar.go
@@ -0,0 +1,112 @@
+//go:generate go run sidebar.go
+package main
+
+import (
+	"io/ioutil"
+	"log"
+	"os"
+	"path/filepath"
+	"regexp"
+	"runtime"
+	"strings"
+	"text/template"
+)
+
+type Entry struct {
+	Filename string
+	Product  string
+	Resource string
+}
+
+type Entries struct {
+	Resources   []Entry
+	DataSources []Entry
+}
+
+func main() {
+	_, scriptPath, _, ok := runtime.Caller(0)
+	if !ok {
+		log.Fatal("Could not determine the path of the current file")
+	}
+	// Walk upward from this file until we reach the provider checkout root.
+	tpgDir := scriptPath
+	for !strings.HasPrefix(filepath.Base(tpgDir), "terraform-provider-") && tpgDir != "/" {
+		tpgDir = filepath.Clean(tpgDir + "/..")
+	}
+	if tpgDir == "/" {
+		log.Fatal("Script was run outside of google provider directory")
+	}
+
+	resourcesByProduct, err := entriesByProduct(tpgDir + "/website/docs/r")
+	if err != nil {
+		panic(err)
+	}
+	dataSourcesByProduct, err := entriesByProduct(tpgDir + "/website/docs/d")
+	if err != nil {
+		panic(err)
+	}
+
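+	// Merge the per-product resource and data source maps into a single map
+	// keyed by product, so the template can render one sidebar section per
+	// product containing both lists.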
allEntriesByProduct := make(map[string]Entries) + for p, e := range resourcesByProduct { + v := allEntriesByProduct[p] + v.Resources = e + allEntriesByProduct[p] = v + } + for p, e := range dataSourcesByProduct { + v := allEntriesByProduct[p] + v.DataSources = e + allEntriesByProduct[p] = v + } + + tmpl, err := template.ParseFiles(tpgDir + "/website/google.erb.tmpl") + if err != nil { + panic(err) + } + f, err := os.Create(tpgDir + "/website/google.erb") + if err != nil { + panic(err) + } + defer f.Close() + err = tmpl.Execute(f, allEntriesByProduct) + if err != nil { + panic(err) + } +} + +func entriesByProduct(dir string) (map[string][]Entry, error) { + d, err := ioutil.ReadDir(dir) + if err != nil { + return nil, err + } + + entriesByProduct := make(map[string][]Entry) + for _, f := range d { + entry, err := getEntry(dir, f.Name()) + if err != nil { + return nil, err + } + entriesByProduct[entry.Product] = append(entriesByProduct[entry.Product], entry) + } + + return entriesByProduct, nil +} + +func getEntry(dir, filename string) (Entry, error) { + file, err := ioutil.ReadFile(dir + "/" + filename) + if err != nil { + return Entry{}, err + } + + return Entry{ + Filename: strings.TrimSuffix(filename, ".markdown"), + Product: findRegex(file, `subcategory: "(.*)"`), + Resource: findRegex(file, `page_title: "Google: (.*)"`), + }, nil +} + +func findRegex(contents []byte, regex string) string { + r := regexp.MustCompile(regex) + sm := r.FindStringSubmatch(string(contents)) + if len(sm) > 1 { + return sm[1] + } + return "" +} diff --git a/third_party/terraform/tests/data_google_game_services_game_server_deployment_rollout_test.go.erb b/third_party/terraform/tests/data_google_game_services_game_server_deployment_rollout_test.go.erb new file mode 100644 index 000000000000..fc41d98f53d1 --- /dev/null +++ b/third_party/terraform/tests/data_google_game_services_game_server_deployment_rollout_test.go.erb @@ -0,0 +1,72 @@ +<% autogen_exception -%> +package google + +<% unless version == 'ga' -%> +import ( + "fmt" + "github.com/hashicorp/terraform-plugin-sdk/helper/resource" + "github.com/hashicorp/terraform-plugin-sdk/terraform" + "testing" +) + +func TestAccDataSourceGameServicesGameServerDeploymentRollout_basic(t *testing.T) { + t.Parallel() + + context := map[string]interface{}{ + "random_suffix": randString(t, 10), + } + + vcrTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProvidersOiCS, + CheckDestroy: testAccCheckGameServicesGameServerDeploymentRolloutDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccDataSourceGameServicesGameServerDeploymentRollout_basic(context), + Check: resource.ComposeTestCheckFunc( + checkDataSourceStateMatchesResourceState("data.google_game_services_game_server_deployment_rollout.qa", "google_game_services_game_server_deployment_rollout.foo"), + ), + }, + }, + }) +} + +func testAccDataSourceGameServicesGameServerDeploymentRollout_basic(context map[string]interface{}) string { + return Nprintf(` +resource "google_game_services_game_server_deployment" "default" { + provider = google-beta + + deployment_id = "tf-test-deployment-%{random_suffix}" + description = "a deployment description" +} + +resource "google_game_services_game_server_config" "default" { + provider = google-beta + + config_id = "tf-test-config-%{random_suffix}" + deployment_id = google_game_services_game_server_deployment.default.deployment_id + description = "a config description" + + fleet_configs { + name = "some-non-guid" + fleet_spec = 
jsonencode({ "replicas" : 1, "scheduling" : "Packed", "template" : { "metadata" : { "name" : "tf-test-game-server-template" }, "spec" : { "template" : { "spec" : { "containers" : [{ "name" : "simple-udp-server", "image" : "gcr.io/agones-images/udp-server:0.14" }] } } } } }) + + // Alternate usage: + // fleet_spec = file(fleet_configs.json) + } +} + +resource "google_game_services_game_server_deployment_rollout" "foo" { + provider = google-beta + + deployment_id = google_game_services_game_server_deployment.default.deployment_id + default_game_server_config = google_game_services_game_server_config.default.name +} + +data "google_game_services_game_server_deployment_rollout" "qa" { + provider = google-beta + deployment_id = google_game_services_game_server_deployment_rollout.foo.deployment_id +} +`, context) +} +<% end -%> diff --git a/third_party/terraform/tests/data_source_cloud_identity_group_memberships_test.go.erb b/third_party/terraform/tests/data_source_cloud_identity_group_memberships_test.go.erb new file mode 100644 index 000000000000..623ab0f71109 --- /dev/null +++ b/third_party/terraform/tests/data_source_cloud_identity_group_memberships_test.go.erb @@ -0,0 +1,51 @@ +<% autogen_exception -%> +package google + +<% unless version == 'ga' -%> +import ( + "testing" + + "github.com/hashicorp/terraform-plugin-sdk/helper/resource" +) + +func TestAccDataSourceCloudIdentityGroupMemberships_basic(t *testing.T) { + + context := map[string]interface{}{ + "org_domain": getTestOrgDomainFromEnv(t), + "cust_id": getTestCustIdFromEnv(t), + "identity_user": getTestIdentityUserFromEnv(t), + "random_suffix": randString(t, 10), + } + + memberId := Nprintf("%{identity_user}@%{org_domain}", context) + + vcrTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProvidersOiCS, + Steps: []resource.TestStep{ + { + Config: testAccCloudIdentityGroupMembershipConfig(context), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr("data.google_cloud_identity_group_memberships.members", + "memberships.#", "1"), + resource.TestCheckResourceAttr("data.google_cloud_identity_group_memberships.members", + "memberships.0.roles.#", "2"), + resource.TestCheckResourceAttr("data.google_cloud_identity_group_memberships.members", + "memberships.0.member_key.0.id", memberId), + ), + }, + }, + }) +} + +func testAccCloudIdentityGroupMembershipConfig(context map[string]interface{}) string { + return testAccCloudIdentityGroupMembership_cloudIdentityGroupMembershipUserExample(context) + Nprintf(` + +data "google_cloud_identity_group_memberships" "members" { + provider = google-beta + + group = google_cloud_identity_group_membership.cloud_identity_group_membership_basic.group +} +`, context) +} +<% end -%> diff --git a/third_party/terraform/tests/data_source_cloud_identity_groups_test.go.erb b/third_party/terraform/tests/data_source_cloud_identity_groups_test.go.erb new file mode 100644 index 000000000000..4aa96785cda1 --- /dev/null +++ b/third_party/terraform/tests/data_source_cloud_identity_groups_test.go.erb @@ -0,0 +1,47 @@ +<% autogen_exception -%> +package google + +<% unless version == 'ga' -%> +import ( + "regexp" + "testing" + + "github.com/hashicorp/terraform-plugin-sdk/helper/resource" +) + +func TestAccDataSourceCloudIdentityGroups_basic(t *testing.T) { + + context := map[string]interface{}{ + "org_domain": getTestOrgDomainFromEnv(t), + "cust_id": getTestCustIdFromEnv(t), + "random_suffix": randString(t, 10), + } + + vcrTest(t, resource.TestCase{ + PreCheck: func() 
{ testAccPreCheck(t) }, + Providers: testAccProvidersOiCS, + Steps: []resource.TestStep{ + { + Config: testAccCloudIdentityGroupConfig(context), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttrSet("data.google_cloud_identity_groups.groups", + "groups.#"), + resource.TestMatchResourceAttr("data.google_cloud_identity_groups.groups", + "groups.0.name", regexp.MustCompile("^groups/.*$")), + ), + }, + }, + }) +} + +func testAccCloudIdentityGroupConfig(context map[string]interface{}) string { + return testAccCloudIdentityGroup_cloudIdentityGroupsBasicExample(context) + Nprintf(` + +data "google_cloud_identity_groups" "groups" { + provider = google-beta + + parent = google_cloud_identity_group.cloud_identity_group_basic.parent +} +`, context) +} +<% end -%> diff --git a/third_party/terraform/tests/data_source_compute_lb_ip_ranges_test.go b/third_party/terraform/tests/data_source_compute_lb_ip_ranges_test.go index 8950d39b3874..38d700d4ac56 100644 --- a/third_party/terraform/tests/data_source_compute_lb_ip_ranges_test.go +++ b/third_party/terraform/tests/data_source_compute_lb_ip_ranges_test.go @@ -8,7 +8,7 @@ import ( ) func TestAccDataSourceComputeLbIpRanges_basic(t *testing.T) { - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ diff --git a/third_party/terraform/tests/data_source_container_registry_test.go b/third_party/terraform/tests/data_source_container_registry_test.go index d10a8caa79fe..6bcc6a885cf8 100644 --- a/third_party/terraform/tests/data_source_container_registry_test.go +++ b/third_party/terraform/tests/data_source_container_registry_test.go @@ -11,7 +11,7 @@ func TestDataSourceGoogleContainerRegistryRepository(t *testing.T) { resourceName := "data.google_container_registry_repository.test" - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -45,7 +45,7 @@ func TestDataSourceGoogleContainerRegistryImage(t *testing.T) { resourceName := "data.google_container_registry_image.test" - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ diff --git a/third_party/terraform/tests/data_source_dns_key_test.go b/third_party/terraform/tests/data_source_dns_key_test.go index c7bc9605405b..a2e11cf9fef2 100644 --- a/third_party/terraform/tests/data_source_dns_key_test.go +++ b/third_party/terraform/tests/data_source_dns_key_test.go @@ -4,7 +4,6 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" ) @@ -12,12 +11,12 @@ import ( func TestAccDataSourceDNSKeys_basic(t *testing.T) { t.Parallel() - dnsZoneName := fmt.Sprintf("data-dnskey-test-%s", acctest.RandString(10)) + dnsZoneName := fmt.Sprintf("data-dnskey-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckDNSManagedZoneDestroy, + CheckDestroy: testAccCheckDNSManagedZoneDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccDataSourceDNSKeysConfig(dnsZoneName, "on"), @@ -36,12 +35,12 @@ func TestAccDataSourceDNSKeys_basic(t *testing.T) { func TestAccDataSourceDNSKeys_noDnsSec(t 
*testing.T) { t.Parallel() - dnsZoneName := fmt.Sprintf("data-dnskey-test-%s", acctest.RandString(10)) + dnsZoneName := fmt.Sprintf("data-dnskey-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckDNSManagedZoneDestroy, + CheckDestroy: testAccCheckDNSManagedZoneDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccDataSourceDNSKeysConfig(dnsZoneName, "off"), diff --git a/third_party/terraform/tests/data_source_dns_managed_zone_test.go.erb b/third_party/terraform/tests/data_source_dns_managed_zone_test.go.erb index 936d4b9940ff..1ac0c26155a3 100644 --- a/third_party/terraform/tests/data_source_dns_managed_zone_test.go.erb +++ b/third_party/terraform/tests/data_source_dns_managed_zone_test.go.erb @@ -5,29 +5,28 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" ) func TestAccDataSourceDnsManagedZone_basic(t *testing.T) { t.Parallel() - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckDNSManagedZoneDestroy, + CheckDestroy: testAccCheckDNSManagedZoneDestroyProducer(t), Steps: []resource.TestStep{ { - Config: testAccDataSourceDnsManagedZone_basic(), + Config: testAccDataSourceDnsManagedZone_basic(randString(t, 10)), Check: checkDataSourceStateMatchesResourceStateWithIgnores( "data.google_dns_managed_zone.qa", "google_dns_managed_zone.foo", map[string]struct{}{ "dnssec_config.#": {}, "private_visibility_config.#": {}, -<% unless version == "ga" -%> "peering_config.#": {}, "forwarding_config.#": {}, +<% unless version == "ga" -%> "reverse_lookup": {}, <% end -%> }, @@ -37,7 +36,7 @@ func TestAccDataSourceDnsManagedZone_basic(t *testing.T) { }) } -func testAccDataSourceDnsManagedZone_basic() string { +func testAccDataSourceDnsManagedZone_basic(managedZoneName string) string { return fmt.Sprintf(` resource "google_dns_managed_zone" "foo" { name = "qa-zone-%s" @@ -48,5 +47,5 @@ resource "google_dns_managed_zone" "foo" { data "google_dns_managed_zone" "qa" { name = google_dns_managed_zone.foo.name } -`, acctest.RandString(10)) +`, managedZoneName) } diff --git a/third_party/terraform/tests/data_source_google_active_folder_test.go b/third_party/terraform/tests/data_source_google_active_folder_test.go index 6a209a91e200..18fbefb8b328 100644 --- a/third_party/terraform/tests/data_source_google_active_folder_test.go +++ b/third_party/terraform/tests/data_source_google_active_folder_test.go @@ -4,7 +4,6 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" ) @@ -13,9 +12,9 @@ func TestAccDataSourceGoogleActiveFolder_default(t *testing.T) { org := getTestOrgFromEnv(t) parent := fmt.Sprintf("organizations/%s", org) - displayName := "terraform-test-" + acctest.RandString(10) + displayName := "terraform-test-" + randString(t, 10) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -33,9 +32,9 @@ func TestAccDataSourceGoogleActiveFolder_space(t *testing.T) { org := getTestOrgFromEnv(t) parent := fmt.Sprintf("organizations/%s", org) - displayName := "terraform test " + 
acctest.RandString(10) + displayName := "terraform test " + randString(t, 10) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ diff --git a/third_party/terraform/tests/data_source_google_bigquery_default_service_account_test.go b/third_party/terraform/tests/data_source_google_bigquery_default_service_account_test.go index dc5132b4e33e..21357bb7edcb 100644 --- a/third_party/terraform/tests/data_source_google_bigquery_default_service_account_test.go +++ b/third_party/terraform/tests/data_source_google_bigquery_default_service_account_test.go @@ -11,7 +11,7 @@ func TestAccDataSourceGoogleBigqueryDefaultServiceAccount_basic(t *testing.T) { resourceName := "data.google_bigquery_default_service_account.bq_account" - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ diff --git a/third_party/terraform/tests/data_source_google_billing_account_test.go b/third_party/terraform/tests/data_source_google_billing_account_test.go index 749d6d3b3790..45ef5014c91e 100644 --- a/third_party/terraform/tests/data_source_google_billing_account_test.go +++ b/third_party/terraform/tests/data_source_google_billing_account_test.go @@ -5,8 +5,6 @@ import ( "regexp" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" - "github.com/hashicorp/terraform-plugin-sdk/helper/resource" ) @@ -14,7 +12,7 @@ func TestAccDataSourceGoogleBillingAccount_byFullName(t *testing.T) { billingId := getTestBillingAccountFromEnv(t) name := "billingAccounts/" + billingId - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -34,7 +32,7 @@ func TestAccDataSourceGoogleBillingAccount_byShortName(t *testing.T) { billingId := getTestBillingAccountFromEnv(t) name := "billingAccounts/" + billingId - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -54,7 +52,7 @@ func TestAccDataSourceGoogleBillingAccount_byFullNameClosed(t *testing.T) { billingId := getTestBillingAccountFromEnv(t) name := "billingAccounts/" + billingId - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -67,9 +65,9 @@ func TestAccDataSourceGoogleBillingAccount_byFullNameClosed(t *testing.T) { } func TestAccDataSourceGoogleBillingAccount_byDisplayName(t *testing.T) { - name := acctest.RandString(16) + name := randString(t, 16) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ diff --git a/third_party/terraform/tests/data_source_google_client_config_test.go b/third_party/terraform/tests/data_source_google_client_config_test.go index 2249029c0d17..ce4b8b66b3ee 100644 --- a/third_party/terraform/tests/data_source_google_client_config_test.go +++ b/third_party/terraform/tests/data_source_google_client_config_test.go @@ -11,7 +11,7 @@ func TestAccDataSourceGoogleClientConfig_basic(t *testing.T) { resourceName := "data.google_client_config.current" - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: 
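// A note on the harness swap running through these test hunks: vcrTest replaces
// resource.Test, and randString/randInt replace the acctest helpers. A minimal
// sketch of the wrapper's likely shape (hedged: enableVcr and recordedProviders
// are hypothetical names used only for illustration; the real helper lives
// elsewhere in this repo):
//
//	func vcrTest(t *testing.T, c resource.TestCase) {
//		if enableVcr() { // e.g. driven by an environment variable
//			c.Providers = recordedProviders(t) // route provider HTTP through a recorder
//		}
//		resource.Test(t, c) // otherwise behaves exactly like resource.Test
//	}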
testAccProviders, Steps: []resource.TestStep{ diff --git a/third_party/terraform/tests/data_source_google_client_openid_userinfo_test.go b/third_party/terraform/tests/data_source_google_client_openid_userinfo_test.go index f293f8b17fda..822df9231c7c 100644 --- a/third_party/terraform/tests/data_source_google_client_openid_userinfo_test.go +++ b/third_party/terraform/tests/data_source_google_client_openid_userinfo_test.go @@ -9,7 +9,7 @@ import ( func TestAccDataSourceGoogleClientOpenIDUserinfo_basic(t *testing.T) { t.Parallel() - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ diff --git a/third_party/terraform/tests/data_source_google_cloudfunctions_function_test.go b/third_party/terraform/tests/data_source_google_cloudfunctions_function_test.go index 9c473fa8c9b3..f2034fb06399 100644 --- a/third_party/terraform/tests/data_source_google_cloudfunctions_function_test.go +++ b/third_party/terraform/tests/data_source_google_cloudfunctions_function_test.go @@ -5,7 +5,6 @@ import ( "os" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" ) @@ -13,15 +12,15 @@ func TestAccDataSourceGoogleCloudFunctionsFunction_basic(t *testing.T) { t.Parallel() funcDataNameHttp := "data.google_cloudfunctions_function.function_http" - functionName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) - bucketName := fmt.Sprintf("tf-test-bucket-%d", acctest.RandInt()) + functionName := fmt.Sprintf("tf-test-%s", randString(t, 10)) + bucketName := fmt.Sprintf("tf-test-bucket-%d", randInt(t)) zipFilePath := createZIPArchiveForCloudFunctionSource(t, testHTTPTriggerPath) defer os.Remove(zipFilePath) // clean up - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckCloudFunctionsFunctionDestroy, + CheckDestroy: testAccCheckCloudFunctionsFunctionDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccDataSourceGoogleCloudFunctionsFunctionConfig(functionName, diff --git a/third_party/terraform/tests/data_source_google_composer_image_versions_test.go b/third_party/terraform/tests/data_source_google_composer_image_versions_test.go index d0a608ddfed9..5cef3944f85a 100644 --- a/third_party/terraform/tests/data_source_google_composer_image_versions_test.go +++ b/third_party/terraform/tests/data_source_google_composer_image_versions_test.go @@ -13,7 +13,7 @@ import ( func TestAccDataSourceComposerImageVersions_basic(t *testing.T) { t.Parallel() - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ diff --git a/third_party/terraform/tests/data_source_google_compute_address_test.go b/third_party/terraform/tests/data_source_google_compute_address_test.go index 7e618003c7c6..742e513491c7 100644 --- a/third_party/terraform/tests/data_source_google_compute_address_test.go +++ b/third_party/terraform/tests/data_source_google_compute_address_test.go @@ -76,22 +76,22 @@ func TestAccDataSourceComputeAddress(t *testing.T) { dsName := "my_address" dsFullName := fmt.Sprintf("data.google_compute_address.%s", dsName) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: 
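// CheckDestroy helpers become producers that accept *testing.T: the returned
// check resolves the per-test provider configuration via googleProviderConfig(t)
// rather than the shared testAccProvider global, as the hunk below shows.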
testAccCheckDataSourceComputeAddressDestroy(rsFullName), + CheckDestroy: testAccCheckDataSourceComputeAddressDestroy(t, rsFullName), Steps: []resource.TestStep{ { Config: testAccDataSourceComputeAddressConfig(rsName, dsName), Check: resource.ComposeTestCheckFunc( - testAccDataSourceComputeAddressCheck(dsFullName, rsFullName), + testAccDataSourceComputeAddressCheck(t, dsFullName, rsFullName), ), }, }, }) } -func testAccDataSourceComputeAddressCheck(data_source_name string, resource_name string) resource.TestCheckFunc { +func testAccDataSourceComputeAddressCheck(t *testing.T, data_source_name string, resource_name string) resource.TestCheckFunc { return func(s *terraform.State) error { ds, ok := s.RootModule().Resources[data_source_name] if !ok { @@ -134,9 +134,9 @@ func testAccDataSourceComputeAddressCheck(data_source_name string, resource_name } } -func testAccCheckDataSourceComputeAddressDestroy(resource_name string) resource.TestCheckFunc { +func testAccCheckDataSourceComputeAddressDestroy(t *testing.T, resource_name string) resource.TestCheckFunc { return func(s *terraform.State) error { - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) rs, ok := s.RootModule().Resources[resource_name] if !ok { diff --git a/third_party/terraform/tests/data_source_google_compute_backend_bucket_test.go b/third_party/terraform/tests/data_source_google_compute_backend_bucket_test.go index 8082193794b0..70f147017c1b 100644 --- a/third_party/terraform/tests/data_source_google_compute_backend_bucket_test.go +++ b/third_party/terraform/tests/data_source_google_compute_backend_bucket_test.go @@ -4,20 +4,19 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" ) func TestAccDataSourceComputeBackendBucket_basic(t *testing.T) { t.Parallel() - backendBucketName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) - bucketName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + backendBucketName := fmt.Sprintf("tf-test-%s", randString(t, 10)) + bucketName := fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeBackendBucketDestroy, + CheckDestroy: testAccCheckComputeBackendBucketDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccDataSourceComputeBackendBucket_basic(backendBucketName, bucketName), diff --git a/third_party/terraform/tests/data_source_google_compute_backend_service_test.go b/third_party/terraform/tests/data_source_google_compute_backend_service_test.go index 6009f3b85368..0339d3adbef9 100644 --- a/third_party/terraform/tests/data_source_google_compute_backend_service_test.go +++ b/third_party/terraform/tests/data_source_google_compute_backend_service_test.go @@ -4,20 +4,19 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" ) func TestAccDataSourceComputeBackendService_basic(t *testing.T) { t.Parallel() - serviceName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) - checkName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + serviceName := fmt.Sprintf("tf-test-%s", randString(t, 10)) + checkName := fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - 
CheckDestroy: testAccCheckComputeBackendServiceDestroy, + CheckDestroy: testAccCheckComputeBackendServiceDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccDataSourceComputeBackendService_basic(serviceName, checkName), diff --git a/third_party/terraform/tests/data_source_google_compute_default_service_account_test.go b/third_party/terraform/tests/data_source_google_compute_default_service_account_test.go index f35c4863a76e..e5cf8ee3c1a6 100644 --- a/third_party/terraform/tests/data_source_google_compute_default_service_account_test.go +++ b/third_party/terraform/tests/data_source_google_compute_default_service_account_test.go @@ -11,7 +11,7 @@ func TestAccDataSourceGoogleComputeDefaultServiceAccount_basic(t *testing.T) { resourceName := "data.google_compute_default_service_account.default" - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ diff --git a/third_party/terraform/tests/data_source_google_compute_forwarding_rule_test.go b/third_party/terraform/tests/data_source_google_compute_forwarding_rule_test.go index 8205113a01b1..edbf76d282f4 100644 --- a/third_party/terraform/tests/data_source_google_compute_forwarding_rule_test.go +++ b/third_party/terraform/tests/data_source_google_compute_forwarding_rule_test.go @@ -4,7 +4,6 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" ) @@ -12,10 +11,10 @@ import ( func TestAccDataSourceGoogleForwardingRule(t *testing.T) { t.Parallel() - poolName := fmt.Sprintf("tf-%s", acctest.RandString(10)) - ruleName := fmt.Sprintf("tf-%s", acctest.RandString(10)) + poolName := fmt.Sprintf("tf-%s", randString(t, 10)) + ruleName := fmt.Sprintf("tf-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ diff --git a/third_party/terraform/tests/data_source_google_compute_global_address_test.go b/third_party/terraform/tests/data_source_google_compute_global_address_test.go index 3f9ff0bc5420..587052f5a848 100644 --- a/third_party/terraform/tests/data_source_google_compute_global_address_test.go +++ b/third_party/terraform/tests/data_source_google_compute_global_address_test.go @@ -16,10 +16,10 @@ func TestAccDataSourceComputeGlobalAddress(t *testing.T) { dsName := "my_address" dsFullName := fmt.Sprintf("data.google_compute_global_address.%s", dsName) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeGlobalAddressDestroy, + CheckDestroy: testAccCheckComputeGlobalAddressDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccDataSourceComputeGlobalAddressConfig(rsName, dsName), diff --git a/third_party/terraform/tests/data_source_google_compute_image_test.go b/third_party/terraform/tests/data_source_google_compute_image_test.go index 24c08ad753e7..199d8033698c 100644 --- a/third_party/terraform/tests/data_source_google_compute_image_test.go +++ b/third_party/terraform/tests/data_source_google_compute_image_test.go @@ -4,7 +4,6 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" 
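	// The helper/acctest import is dropped wherever randomness moves to the local
	// randString/randInt helpers, presumably so generated names stay deterministic
	// when VCR cassettes are replayed (see the skipIfVcr call further down for a
	// case where the randomness cannot be captured).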
"github.com/hashicorp/terraform-plugin-sdk/terraform" ) @@ -12,13 +11,13 @@ import ( func TestAccDataSourceComputeImage(t *testing.T) { t.Parallel() - family := acctest.RandomWithPrefix("tf-test") - name := acctest.RandomWithPrefix("tf-test") + family := fmt.Sprintf("tf-test-%d", randInt(t)) + name := fmt.Sprintf("tf-test-%d", randInt(t)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeImageDestroy, + CheckDestroy: testAccCheckComputeImageDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccDataSourcePublicImageConfig, diff --git a/third_party/terraform/tests/data_source_google_compute_instance_group_test.go.erb b/third_party/terraform/tests/data_source_google_compute_instance_group_test.go.erb index 89f96acb9858..4ef0f1e4cea4 100644 --- a/third_party/terraform/tests/data_source_google_compute_instance_group_test.go.erb +++ b/third_party/terraform/tests/data_source_google_compute_instance_group_test.go.erb @@ -10,7 +10,6 @@ import ( "strings" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" ) @@ -18,12 +17,12 @@ import ( func TestAccDataSourceGoogleComputeInstanceGroup_basic(t *testing.T) { t.Parallel() - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ { - Config: testAccCheckDataSourceGoogleComputeInstanceGroupConfig(), + Config: testAccCheckDataSourceGoogleComputeInstanceGroupConfig(randString(t, 10), randString(t, 10)), Check: resource.ComposeTestCheckFunc( testAccCheckDataSourceGoogleComputeInstanceGroup("data.google_compute_instance_group.test"), ), @@ -35,12 +34,12 @@ func TestAccDataSourceGoogleComputeInstanceGroup_basic(t *testing.T) { func TestAccDataSourceGoogleComputeInstanceGroup_withNamedPort(t *testing.T) { t.Parallel() - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ { - Config: testAccCheckDataSourceGoogleComputeInstanceGroupConfigWithNamedPort(), + Config: testAccCheckDataSourceGoogleComputeInstanceGroupConfigWithNamedPort(randString(t, 10), randString(t, 10)), Check: resource.ComposeTestCheckFunc( testAccCheckDataSourceGoogleComputeInstanceGroup("data.google_compute_instance_group.test"), ), @@ -52,12 +51,12 @@ func TestAccDataSourceGoogleComputeInstanceGroup_withNamedPort(t *testing.T) { func TestAccDataSourceGoogleComputeInstanceGroup_fromIGM(t *testing.T) { t.Parallel() - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ { - Config: testAccCheckDataSourceGoogleComputeInstanceGroup_fromIGM(), + Config: testAccCheckDataSourceGoogleComputeInstanceGroup_fromIGM(fmt.Sprintf("test-igm-%d", randInt(t)), fmt.Sprintf("test-igm-%d", randInt(t))), Check: resource.ComposeTestCheckFunc( resource.TestCheckResourceAttr("data.google_compute_instance_group.test", "instances.#", "10"), ), @@ -198,7 +197,7 @@ func testAccCheckDataSourceGoogleComputeInstanceGroup(dataSourceName string) res } } -func testAccCheckDataSourceGoogleComputeInstanceGroupConfig() string { +func testAccCheckDataSourceGoogleComputeInstanceGroupConfig(instanceName, igName string) string { return 
fmt.Sprintf(` data "google_compute_image" "my_image" { family = "debian-9" @@ -238,10 +237,10 @@ data "google_compute_instance_group" "test" { name = google_compute_instance_group.test.name zone = google_compute_instance_group.test.zone } -`, acctest.RandString(10), acctest.RandString(10)) +`, instanceName, igName) } -func testAccCheckDataSourceGoogleComputeInstanceGroupConfigWithNamedPort() string { +func testAccCheckDataSourceGoogleComputeInstanceGroupConfigWithNamedPort(instanceName, igName string) string { return fmt.Sprintf(` data "google_compute_image" "my_image" { family = "debian-9" @@ -291,10 +290,10 @@ data "google_compute_instance_group" "test" { name = google_compute_instance_group.test.name zone = google_compute_instance_group.test.zone } -`, acctest.RandString(10), acctest.RandString(10)) +`, instanceName, igName) } -func testAccCheckDataSourceGoogleComputeInstanceGroup_fromIGM() string { +func testAccCheckDataSourceGoogleComputeInstanceGroup_fromIGM(igmName, secondIgmName string) string { return fmt.Sprintf(` data "google_compute_image" "my_image" { family = "debian-9" @@ -332,5 +331,5 @@ resource "google_compute_instance_group_manager" "igm" { data "google_compute_instance_group" "test" { self_link = google_compute_instance_group_manager.igm.instance_group } -`, acctest.RandomWithPrefix("test-igm"), acctest.RandomWithPrefix("test-igm")) +`, igmName, secondIgmName) } diff --git a/third_party/terraform/tests/data_source_google_compute_instance_serial_port_test.go b/third_party/terraform/tests/data_source_google_compute_instance_serial_port_test.go index c11eb62da4e7..e1006f2127bb 100644 --- a/third_party/terraform/tests/data_source_google_compute_instance_serial_port_test.go +++ b/third_party/terraform/tests/data_source_google_compute_instance_serial_port_test.go @@ -5,13 +5,12 @@ import ( "regexp" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" ) func TestAccDataSourceComputeInstanceSerialPort_basic(t *testing.T) { - instanceName := fmt.Sprintf("tf-test-serial-data-%s", acctest.RandString(10)) - resource.Test(t, resource.TestCase{ + instanceName := fmt.Sprintf("tf-test-serial-data-%s", randString(t, 10)) + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ diff --git a/third_party/terraform/tests/data_source_google_compute_instance_test.go b/third_party/terraform/tests/data_source_google_compute_instance_test.go index ff374791afa4..b53007892605 100644 --- a/third_party/terraform/tests/data_source_google_compute_instance_test.go +++ b/third_party/terraform/tests/data_source_google_compute_instance_test.go @@ -4,7 +4,6 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" ) @@ -12,12 +11,12 @@ import ( func TestAccDataSourceComputeInstance_basic(t *testing.T) { t.Parallel() - instanceName := fmt.Sprintf("data-instance-test-%s", acctest.RandString(10)) + instanceName := fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeInstanceDestroy, + CheckDestroy: testAccCheckComputeInstanceDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccDataSourceComputeInstanceConfig(instanceName), diff --git 
a/third_party/terraform/tests/data_source_google_compute_network_test.go b/third_party/terraform/tests/data_source_google_compute_network_test.go index b4bd6edcb1f8..7c0b1f4bcd91 100644 --- a/third_party/terraform/tests/data_source_google_compute_network_test.go +++ b/third_party/terraform/tests/data_source_google_compute_network_test.go @@ -4,7 +4,6 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" ) @@ -12,8 +11,8 @@ import ( func TestAccDataSourceGoogleNetwork(t *testing.T) { t.Parallel() - networkName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) - resource.Test(t, resource.TestCase{ + networkName := fmt.Sprintf("tf-test-%s", randString(t, 10)) + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ diff --git a/third_party/terraform/tests/data_source_google_compute_node_types_test.go b/third_party/terraform/tests/data_source_google_compute_node_types_test.go index 304c3fdf6a65..108fd7f1d79c 100644 --- a/third_party/terraform/tests/data_source_google_compute_node_types_test.go +++ b/third_party/terraform/tests/data_source_google_compute_node_types_test.go @@ -14,7 +14,7 @@ import ( func TestAccDataSourceComputeNodeTypes_basic(t *testing.T) { t.Parallel() - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ diff --git a/third_party/terraform/tests/data_source_google_compute_region_instance_group_test.go.erb b/third_party/terraform/tests/data_source_google_compute_region_instance_group_test.go.erb index a02ce0822b9a..309aa2df1e7a 100644 --- a/third_party/terraform/tests/data_source_google_compute_region_instance_group_test.go.erb +++ b/third_party/terraform/tests/data_source_google_compute_region_instance_group_test.go.erb @@ -5,19 +5,20 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" ) func TestAccDataSourceRegionInstanceGroup(t *testing.T) { + // Randomness in instance template + skipIfVcr(t) t.Parallel() - name := "acctest-" + acctest.RandString(6) - resource.Test(t, resource.TestCase{ + name := "acctest-" + randString(t, 6) + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ { - Config: testAccDataSourceRegionInstanceGroup_basic(name), + Config: testAccDataSourceRegionInstanceGroup_basic(fmt.Sprintf("test-rigm--%d", randInt(t)), name), Check: resource.ComposeTestCheckFunc( resource.TestCheckResourceAttr("data.google_compute_region_instance_group.data_source", "name", name), resource.TestCheckResourceAttr("data.google_compute_region_instance_group.data_source", "project", getTestProjectFromEnv()), @@ -27,7 +28,7 @@ func TestAccDataSourceRegionInstanceGroup(t *testing.T) { }) } -func testAccDataSourceRegionInstanceGroup_basic(instanceManagerName string) string { +func testAccDataSourceRegionInstanceGroup_basic(rigmName, instanceManagerName string) string { return fmt.Sprintf(` resource "google_compute_target_pool" "foo" { name = "%s" @@ -71,5 +72,5 @@ resource "google_compute_region_instance_group_manager" "foo" { data "google_compute_region_instance_group" "data_source" { self_link = google_compute_region_instance_group_manager.foo.instance_group } -`, 
acctest.RandomWithPrefix("test-rigm-"), instanceManagerName) +`, rigmName, instanceManagerName) } diff --git a/third_party/terraform/tests/data_source_google_compute_regions_test.go b/third_party/terraform/tests/data_source_google_compute_regions_test.go index bbe7373105cd..86fb4b825aa4 100644 --- a/third_party/terraform/tests/data_source_google_compute_regions_test.go +++ b/third_party/terraform/tests/data_source_google_compute_regions_test.go @@ -13,7 +13,7 @@ import ( func TestAccComputeRegions_basic(t *testing.T) { t.Parallel() - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ diff --git a/third_party/terraform/tests/data_source_google_compute_resource_policy_test.go.erb b/third_party/terraform/tests/data_source_google_compute_resource_policy_test.go.erb index 3ffeab736fce..ed4ff90ddab0 100644 --- a/third_party/terraform/tests/data_source_google_compute_resource_policy_test.go.erb +++ b/third_party/terraform/tests/data_source_google_compute_resource_policy_test.go.erb @@ -6,7 +6,6 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" ) @@ -14,29 +13,29 @@ import ( func TestAccDataSourceComputeResourcePolicy(t *testing.T) { t.Parallel() - randomSuffix := acctest.RandString(10) + randomSuffix := randString(t, 10) rsName := "foo_" + randomSuffix rsFullName := fmt.Sprintf("google_compute_resource_policy.%s", rsName) dsName := "my_policy_" + randomSuffix dsFullName := fmt.Sprintf("data.google_compute_resource_policy.%s", dsName) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckDataSourceComputeResourcePolicyDestroy(rsFullName), + CheckDestroy: testAccCheckDataSourceComputeResourcePolicyDestroy(t, rsFullName), Steps: []resource.TestStep{ { Config: testAccDataSourceComputeResourcePolicyConfig(rsName, dsName, randomSuffix), Check: resource.ComposeTestCheckFunc( - testAccDataSourceComputeResourcePolicyCheck(dsFullName, rsFullName), + testAccDataSourceComputeResourcePolicyCheck(t, dsFullName, rsFullName), ), }, }, }) } -func testAccDataSourceComputeResourcePolicyCheck(dataSourceName string, resourceName string) resource.TestCheckFunc { +func testAccDataSourceComputeResourcePolicyCheck(t *testing.T, dataSourceName string, resourceName string) resource.TestCheckFunc { return func(s *terraform.State) error { ds, ok := s.RootModule().Resources[dataSourceName] if !ok { @@ -74,9 +73,9 @@ func testAccDataSourceComputeResourcePolicyCheck(dataSourceName string, resource } } -func testAccCheckDataSourceComputeResourcePolicyDestroy(resourceName string) resource.TestCheckFunc { +func testAccCheckDataSourceComputeResourcePolicyDestroy(t *testing.T, resourceName string) resource.TestCheckFunc { return func(s *terraform.State) error { - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) rs, ok := s.RootModule().Resources[resourceName] if !ok { diff --git a/third_party/terraform/tests/data_source_google_compute_router_test.go b/third_party/terraform/tests/data_source_google_compute_router_test.go index 020a0c627575..f4740ba21074 100644 --- a/third_party/terraform/tests/data_source_google_compute_router_test.go +++ b/third_party/terraform/tests/data_source_google_compute_router_test.go @@ -4,15 +4,14 @@ 
import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" ) func TestAccDataSourceComputeRouter(t *testing.T) { t.Parallel() - name := acctest.RandomWithPrefix("router-test") + name := fmt.Sprintf("tf-test-router-%d", randInt(t)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ diff --git a/third_party/terraform/tests/data_source_google_compute_ssl_certificate_test.go b/third_party/terraform/tests/data_source_google_compute_ssl_certificate_test.go index 1e0c4a502e09..cbf013cc88a7 100644 --- a/third_party/terraform/tests/data_source_google_compute_ssl_certificate_test.go +++ b/third_party/terraform/tests/data_source_google_compute_ssl_certificate_test.go @@ -4,19 +4,18 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" ) func TestAccDataSourceComputeSslCertificate(t *testing.T) { t.Parallel() - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ { - Config: testAccDataSourceComputeSslCertificateConfig(), + Config: testAccDataSourceComputeSslCertificateConfig(randString(t, 10)), Check: resource.ComposeTestCheckFunc( checkDataSourceStateMatchesResourceStateWithIgnores( "data.google_compute_ssl_certificate.cert", @@ -31,7 +30,7 @@ func TestAccDataSourceComputeSslCertificate(t *testing.T) { }) } -func testAccDataSourceComputeSslCertificateConfig() string { +func testAccDataSourceComputeSslCertificateConfig(certName string) string { return fmt.Sprintf(` resource "google_compute_ssl_certificate" "foobar" { name = "cert-test-%s" @@ -43,5 +42,5 @@ resource "google_compute_ssl_certificate" "foobar" { data "google_compute_ssl_certificate" "cert" { name = google_compute_ssl_certificate.foobar.name } -`, acctest.RandString(10)) +`, certName) } diff --git a/third_party/terraform/tests/data_source_google_compute_ssl_policy_test.go b/third_party/terraform/tests/data_source_google_compute_ssl_policy_test.go index 7d8ce20a1fa9..2817d7e93bc9 100644 --- a/third_party/terraform/tests/data_source_google_compute_ssl_policy_test.go +++ b/third_party/terraform/tests/data_source_google_compute_ssl_policy_test.go @@ -4,7 +4,6 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" ) @@ -12,12 +11,12 @@ import ( func TestAccDataSourceGoogleSslPolicy(t *testing.T) { t.Parallel() - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ { - Config: testAccDataSourceGoogleSslPolicy(), + Config: testAccDataSourceGoogleSslPolicy(fmt.Sprintf("test-ssl-policy-%d", randInt(t))), Check: resource.ComposeTestCheckFunc( testAccDataSourceGoogleSslPolicyCheck("data.google_compute_ssl_policy.ssl_policy", "google_compute_ssl_policy.foobar"), ), @@ -66,7 +65,7 @@ func testAccDataSourceGoogleSslPolicyCheck(data_source_name string, resource_nam } } -func testAccDataSourceGoogleSslPolicy() string { +func testAccDataSourceGoogleSslPolicy(policyName string) string { return fmt.Sprintf(` resource "google_compute_ssl_policy" "foobar" { name = "%s" @@ -78,5 +77,5 @@ resource 
"google_compute_ssl_policy" "foobar" { data "google_compute_ssl_policy" "ssl_policy" { name = google_compute_ssl_policy.foobar.name } -`, acctest.RandomWithPrefix("test-ssl-policy")) +`, policyName) } diff --git a/third_party/terraform/tests/data_source_google_compute_subnetwork_test.go b/third_party/terraform/tests/data_source_google_compute_subnetwork_test.go index 8a1f81667ace..9d170ca0a6bc 100644 --- a/third_party/terraform/tests/data_source_google_compute_subnetwork_test.go +++ b/third_party/terraform/tests/data_source_google_compute_subnetwork_test.go @@ -4,7 +4,6 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" ) @@ -12,12 +11,12 @@ import ( func TestAccDataSourceGoogleSubnetwork(t *testing.T) { t.Parallel() - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ { - Config: testAccDataSourceGoogleSubnetwork(), + Config: testAccDataSourceGoogleSubnetwork(fmt.Sprintf("network-test-%d", randInt(t))), Check: resource.ComposeTestCheckFunc( testAccDataSourceGoogleSubnetworkCheck("data.google_compute_subnetwork.my_subnetwork", "google_compute_subnetwork.foobar"), testAccDataSourceGoogleSubnetworkCheck("data.google_compute_subnetwork.my_subnetwork_self_link", "google_compute_subnetwork.foobar"), @@ -74,7 +73,7 @@ func testAccDataSourceGoogleSubnetworkCheck(data_source_name string, resource_na } } -func testAccDataSourceGoogleSubnetwork() string { +func testAccDataSourceGoogleSubnetwork(networkName string) string { return fmt.Sprintf(` resource "google_compute_network" "foobar" { name = "%s" @@ -100,5 +99,5 @@ data "google_compute_subnetwork" "my_subnetwork" { data "google_compute_subnetwork" "my_subnetwork_self_link" { self_link = google_compute_subnetwork.foobar.self_link } -`, acctest.RandomWithPrefix("network-test")) +`, networkName) } diff --git a/third_party/terraform/tests/data_source_google_compute_vpn_gateway_test.go b/third_party/terraform/tests/data_source_google_compute_vpn_gateway_test.go index 8202cf514e00..fbefc25e78cb 100644 --- a/third_party/terraform/tests/data_source_google_compute_vpn_gateway_test.go +++ b/third_party/terraform/tests/data_source_google_compute_vpn_gateway_test.go @@ -4,7 +4,6 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" ) @@ -12,8 +11,8 @@ import ( func TestAccDataSourceGoogleVpnGateway(t *testing.T) { t.Parallel() - vpnGatewayName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) - resource.Test(t, resource.TestCase{ + vpnGatewayName := fmt.Sprintf("tf-test-%s", randString(t, 10)) + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ diff --git a/third_party/terraform/tests/data_source_google_compute_zones_test.go b/third_party/terraform/tests/data_source_google_compute_zones_test.go index 3fd0527f9a90..6619ceb07d75 100644 --- a/third_party/terraform/tests/data_source_google_compute_zones_test.go +++ b/third_party/terraform/tests/data_source_google_compute_zones_test.go @@ -13,7 +13,7 @@ import ( func TestAccComputeZones_basic(t *testing.T) { t.Parallel() - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { 
testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ diff --git a/third_party/terraform/tests/data_source_google_container_cluster_test.go b/third_party/terraform/tests/data_source_google_container_cluster_test.go index 840b550b8286..5089ff004de1 100644 --- a/third_party/terraform/tests/data_source_google_container_cluster_test.go +++ b/third_party/terraform/tests/data_source_google_container_cluster_test.go @@ -4,19 +4,18 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" ) func TestAccContainerClusterDatasource_zonal(t *testing.T) { t.Parallel() - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ { - Config: testAccContainerClusterDatasource_zonal(), + Config: testAccContainerClusterDatasource_zonal(randString(t, 10)), Check: resource.ComposeTestCheckFunc( checkDataSourceStateMatchesResourceStateWithIgnores( "data.google_container_cluster.kubes", @@ -37,12 +36,12 @@ func TestAccContainerClusterDatasource_zonal(t *testing.T) { func TestAccContainerClusterDatasource_regional(t *testing.T) { t.Parallel() - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ { - Config: testAccContainerClusterDatasource_regional(), + Config: testAccContainerClusterDatasource_regional(randString(t, 10)), Check: resource.ComposeTestCheckFunc( checkDataSourceStateMatchesResourceStateWithIgnores( "data.google_container_cluster.kubes", @@ -60,7 +59,7 @@ func TestAccContainerClusterDatasource_regional(t *testing.T) { }) } -func testAccContainerClusterDatasource_zonal() string { +func testAccContainerClusterDatasource_zonal(suffix string) string { return fmt.Sprintf(` resource "google_container_cluster" "kubes" { name = "tf-test-cluster-%s" @@ -77,10 +76,10 @@ data "google_container_cluster" "kubes" { name = google_container_cluster.kubes.name location = google_container_cluster.kubes.location } -`, acctest.RandString(10)) +`, suffix) } -func testAccContainerClusterDatasource_regional() string { +func testAccContainerClusterDatasource_regional(suffix string) string { return fmt.Sprintf(` resource "google_container_cluster" "kubes" { name = "tf-test-cluster-%s" @@ -92,5 +91,5 @@ data "google_container_cluster" "kubes" { name = google_container_cluster.kubes.name location = google_container_cluster.kubes.location } -`, acctest.RandString(10)) +`, suffix) } diff --git a/third_party/terraform/tests/data_source_google_container_engine_versions_test.go b/third_party/terraform/tests/data_source_google_container_engine_versions_test.go index eef37286ab6d..6a0b4d579462 100644 --- a/third_party/terraform/tests/data_source_google_container_engine_versions_test.go +++ b/third_party/terraform/tests/data_source_google_container_engine_versions_test.go @@ -13,7 +13,7 @@ import ( func TestAccContainerEngineVersions_basic(t *testing.T) { t.Parallel() - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -30,7 +30,7 @@ func TestAccContainerEngineVersions_basic(t *testing.T) { func TestAccContainerEngineVersions_filtered(t *testing.T) { t.Parallel() - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: 
testAccProviders, Steps: []resource.TestStep{ diff --git a/third_party/terraform/tests/data_source_google_folder_organization_policy_test.go b/third_party/terraform/tests/data_source_google_folder_organization_policy_test.go index 63935f80134e..88ca4bce217f 100644 --- a/third_party/terraform/tests/data_source_google_folder_organization_policy_test.go +++ b/third_party/terraform/tests/data_source_google_folder_organization_policy_test.go @@ -4,15 +4,14 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" ) func TestAccDataSourceGoogleFolderOrganizationPolicy_basic(t *testing.T) { - folder := acctest.RandomWithPrefix("tf-test") + folder := fmt.Sprintf("tf-test-%d", randInt(t)) org := getTestOrgFromEnv(t) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ diff --git a/third_party/terraform/tests/data_source_google_folder_test.go b/third_party/terraform/tests/data_source_google_folder_test.go index 91e4be72bddd..a394f575db16 100644 --- a/third_party/terraform/tests/data_source_google_folder_test.go +++ b/third_party/terraform/tests/data_source_google_folder_test.go @@ -5,7 +5,6 @@ import ( "regexp" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" ) @@ -14,9 +13,9 @@ func TestAccDataSourceGoogleFolder_byFullName(t *testing.T) { org := getTestOrgFromEnv(t) parent := fmt.Sprintf("organizations/%s", org) - displayName := "terraform-test-" + acctest.RandString(10) + displayName := "terraform-test-" + randString(t, 10) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -34,9 +33,9 @@ func TestAccDataSourceGoogleFolder_byShortName(t *testing.T) { org := getTestOrgFromEnv(t) parent := fmt.Sprintf("organizations/%s", org) - displayName := "terraform-test-" + acctest.RandString(10) + displayName := "terraform-test-" + randString(t, 10) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -54,9 +53,9 @@ func TestAccDataSourceGoogleFolder_lookupOrganization(t *testing.T) { org := getTestOrgFromEnv(t) parent := fmt.Sprintf("organizations/%s", org) - displayName := "terraform-test-" + acctest.RandString(10) + displayName := "terraform-test-" + randString(t, 10) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -72,9 +71,9 @@ func TestAccDataSourceGoogleFolder_lookupOrganization(t *testing.T) { } func TestAccDataSourceGoogleFolder_byFullNameNotFound(t *testing.T) { - name := "folders/" + acctest.RandString(16) + name := "folders/" + randString(t, 16) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ diff --git a/third_party/terraform/tests/data_source_google_iam_role_test.go b/third_party/terraform/tests/data_source_google_iam_role_test.go index 049da7e0bb6b..f6692b4a4ae2 100644 --- a/third_party/terraform/tests/data_source_google_iam_role_test.go +++ 
b/third_party/terraform/tests/data_source_google_iam_role_test.go @@ -12,7 +12,7 @@ import ( func TestAccDataSourceIAMRole(t *testing.T) { name := "roles/viewer" - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ diff --git a/third_party/terraform/tests/data_source_google_iam_testable_permissions_test.go b/third_party/terraform/tests/data_source_google_iam_testable_permissions_test.go new file mode 100644 index 000000000000..3f69f82d47ac --- /dev/null +++ b/third_party/terraform/tests/data_source_google_iam_testable_permissions_test.go @@ -0,0 +1,154 @@ +package google + +import ( + "fmt" + "strconv" + "testing" + + "github.com/hashicorp/terraform-plugin-sdk/helper/resource" + "github.com/hashicorp/terraform-plugin-sdk/terraform" +) + +func TestAccDataSourceGoogleIamTestablePermissions_basic(t *testing.T) { + t.Parallel() + + project := getTestProjectFromEnv() + vcrTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + { + Config: fmt.Sprintf(` + data "google_iam_testable_permissions" "perms" { + full_resource_name = "//cloudresourcemanager.googleapis.com/projects/%s" + } + `, project), + Check: resource.ComposeTestCheckFunc( + testAccCheckGoogleIamTestablePermissionsMeta( + project, + "data.google_iam_testable_permissions.perms", + []string{"GA"}, + "", + ), + ), + }, + { + Config: fmt.Sprintf(` + data "google_iam_testable_permissions" "perms" { + full_resource_name = "//cloudresourcemanager.googleapis.com/projects/%s" + stages = ["GA"] + } + `, project), + Check: resource.ComposeTestCheckFunc( + testAccCheckGoogleIamTestablePermissionsMeta( + project, + "data.google_iam_testable_permissions.perms", + []string{"GA"}, + "", + ), + ), + }, + { + Config: fmt.Sprintf(` + data "google_iam_testable_permissions" "perms" { + full_resource_name = "//cloudresourcemanager.googleapis.com/projects/%s" + custom_support_level = "NOT_SUPPORTED" + stages = ["BETA"] + } + `, project), + Check: resource.ComposeTestCheckFunc( + testAccCheckGoogleIamTestablePermissionsMeta( + project, + "data.google_iam_testable_permissions.perms", + []string{"BETA"}, + "NOT_SUPPORTED", + ), + ), + }, + { + Config: fmt.Sprintf(` + data "google_iam_testable_permissions" "perms" { + full_resource_name = "//cloudresourcemanager.googleapis.com/projects/%s" + custom_support_level = "not_supported" + stages = ["beta"] + } + `, project), + Check: resource.ComposeTestCheckFunc( + testAccCheckGoogleIamTestablePermissionsMeta( + project, + "data.google_iam_testable_permissions.perms", + []string{"BETA"}, + "NOT_SUPPORTED", + ), + ), + }, + { + Config: fmt.Sprintf(` + data "google_iam_testable_permissions" "perms" { + full_resource_name = "//cloudresourcemanager.googleapis.com/projects/%s" + stages = ["ga", "beta"] + } + `, project), + Check: resource.ComposeTestCheckFunc( + testAccCheckGoogleIamTestablePermissionsMeta( + project, + "data.google_iam_testable_permissions.perms", + []string{"GA", "BETA"}, + "", + ), + ), + }, + }, + }) +} + +func testAccCheckGoogleIamTestablePermissionsMeta(project string, n string, expectedStages []string, expectedSupportLevel string) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Can't find perms data source: %s", n) + } + expectedId := fmt.Sprintf("//cloudresourcemanager.googleapis.com/projects/%s", project) + if 
rs.Primary.ID != expectedId { + return fmt.Errorf("perms data source ID set incorrectly: got %s, want %s", rs.Primary.ID, expectedId) + } + attrs := rs.Primary.Attributes + count, ok := attrs["permissions.#"] + if !ok { + return fmt.Errorf("can't find 'permissions' attribute") + } + permCount, err := strconv.Atoi(count) + if err != nil { + return err + } + if permCount < 2 { + return fmt.Errorf("permissions count should be at least 2, got %d", permCount) + } + foundStageCounter := len(expectedStages) + foundSupport := false + + for i := 0; i < permCount; i++ { + for s := 0; s < len(expectedStages); s++ { + stageKey := "permissions." + strconv.Itoa(i) + ".stage" + supportKey := "permissions." + strconv.Itoa(i) + ".custom_support_level" + if stringInSlice(expectedStages, attrs[stageKey]) { + foundStageCounter -= 1 + } + if attrs[supportKey] == expectedSupportLevel { + foundSupport = true + } + if foundSupport && foundStageCounter == 0 { + return nil + } + } + } + + if foundSupport { // This means we didn't find a stage + return fmt.Errorf("Could not find stages %v in output", expectedStages) + } + if foundStageCounter == 0 { // This means we didn't find a custom_support_level + return fmt.Errorf("Could not find custom_support_level %s in output", expectedSupportLevel) + } + return fmt.Errorf("Unable to find custom_support_level or stages in output") + } +} diff --git a/third_party/terraform/tests/data_source_google_kms_crypto_key_test.go b/third_party/terraform/tests/data_source_google_kms_crypto_key_test.go index b9d7ec3147be..9cb431548283 100644 --- a/third_party/terraform/tests/data_source_google_kms_crypto_key_test.go +++ b/third_party/terraform/tests/data_source_google_kms_crypto_key_test.go @@ -16,7 +16,7 @@ func TestAccDataSourceGoogleKmsCryptoKey_basic(t *testing.T) { keyParts := strings.Split(kms.CryptoKey.Name, "/") cryptoKeyId := keyParts[len(keyParts)-1] - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ diff --git a/third_party/terraform/tests/data_source_google_kms_crypto_key_version_test.go b/third_party/terraform/tests/data_source_google_kms_crypto_key_version_test.go index a009f06a075c..3eb315b40a38 100644 --- a/third_party/terraform/tests/data_source_google_kms_crypto_key_version_test.go +++ b/third_party/terraform/tests/data_source_google_kms_crypto_key_version_test.go @@ -12,7 +12,7 @@ func TestAccDataSourceGoogleKmsCryptoKeyVersion_basic(t *testing.T) { asymDecrKey := BootstrapKMSKeyWithPurpose(t, "ASYMMETRIC_DECRYPT") symKey := BootstrapKMSKey(t) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ diff --git a/third_party/terraform/tests/data_source_google_kms_key_ring_test.go b/third_party/terraform/tests/data_source_google_kms_key_ring_test.go index 21239d508c65..f38011c415db 100644 --- a/third_party/terraform/tests/data_source_google_kms_key_ring_test.go +++ b/third_party/terraform/tests/data_source_google_kms_key_ring_test.go @@ -15,7 +15,7 @@ func TestAccDataSourceGoogleKmsKeyRing_basic(t *testing.T) { keyParts := strings.Split(kms.KeyRing.Name, "/") keyRingId := keyParts[len(keyParts)-1] - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ diff --git a/third_party/terraform/tests/data_source_google_kms_secret_ciphertext_test.go
b/third_party/terraform/tests/data_source_google_kms_secret_ciphertext_test.go index d20b40f80bd8..5729f099c950 100644 --- a/third_party/terraform/tests/data_source_google_kms_secret_ciphertext_test.go +++ b/third_party/terraform/tests/data_source_google_kms_secret_ciphertext_test.go @@ -4,7 +4,6 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" ) @@ -14,16 +13,16 @@ func TestAccDataKmsSecretCiphertext_basic(t *testing.T) { kms := BootstrapKMSKey(t) - plaintext := fmt.Sprintf("secret-%s", acctest.RandString(10)) + plaintext := fmt.Sprintf("secret-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ { Config: testGoogleKmsSecretCiphertext_datasource(kms.CryptoKey.Name, plaintext), Check: func(s *terraform.State) error { - plaintext, err := testAccDecryptSecretDataWithCryptoKey(s, kms.CryptoKey.Name, "data.google_kms_secret_ciphertext.acceptance", "") + plaintext, err := testAccDecryptSecretDataWithCryptoKey(t, s, kms.CryptoKey.Name, "data.google_kms_secret_ciphertext.acceptance", "") if err != nil { return err diff --git a/third_party/terraform/tests/data_source_google_kms_secret_test.go b/third_party/terraform/tests/data_source_google_kms_secret_test.go index e7766487d81b..34023d483e5b 100644 --- a/third_party/terraform/tests/data_source_google_kms_secret_test.go +++ b/third_party/terraform/tests/data_source_google_kms_secret_test.go @@ -6,41 +6,42 @@ import ( "log" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" "google.golang.org/api/cloudkms/v1" ) func TestAccKmsSecret_basic(t *testing.T) { + // Nested tests confuse VCR + skipIfVcr(t) t.Parallel() projectOrg := getTestOrgFromEnv(t) projectBillingAccount := getTestBillingAccountFromEnv(t) - projectId := "terraform-" + acctest.RandString(10) - keyRingName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) - cryptoKeyName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + projectId := "terraform-" + randString(t, 10) + keyRingName := fmt.Sprintf("tf-test-%s", randString(t, 10)) + cryptoKeyName := fmt.Sprintf("tf-test-%s", randString(t, 10)) - plaintext := fmt.Sprintf("secret-%s", acctest.RandString(10)) + plaintext := fmt.Sprintf("secret-%s", randString(t, 10)) aad := "plainaad" // The first test creates resources needed to encrypt plaintext and produce ciphertext - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ { Config: testGoogleKmsCryptoKey_basic(projectId, projectOrg, projectBillingAccount, keyRingName, cryptoKeyName), Check: func(s *terraform.State) error { - ciphertext, cryptoKeyId, err := testAccEncryptSecretDataWithCryptoKey(s, "google_kms_crypto_key.crypto_key", plaintext, "") + ciphertext, cryptoKeyId, err := testAccEncryptSecretDataWithCryptoKey(t, s, "google_kms_crypto_key.crypto_key", plaintext, "") if err != nil { return err } // The second test asserts that the data source has the correct plaintext, given the created ciphertext - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: 
[]resource.TestStep{ @@ -58,14 +59,14 @@ func TestAccKmsSecret_basic(t *testing.T) { { Config: testGoogleKmsCryptoKey_basic(projectId, projectOrg, projectBillingAccount, keyRingName, cryptoKeyName), Check: func(s *terraform.State) error { - ciphertext, cryptoKeyId, err := testAccEncryptSecretDataWithCryptoKey(s, "google_kms_crypto_key.crypto_key", plaintext, aad) + ciphertext, cryptoKeyId, err := testAccEncryptSecretDataWithCryptoKey(t, s, "google_kms_crypto_key.crypto_key", plaintext, aad) if err != nil { return err } // The second test asserts that the data source has the correct plaintext, given the created ciphertext - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -83,8 +84,8 @@ func TestAccKmsSecret_basic(t *testing.T) { }) } -func testAccEncryptSecretDataWithCryptoKey(s *terraform.State, cryptoKeyResourceName, plaintext, aad string) (string, *kmsCryptoKeyId, error) { - config := testAccProvider.Meta().(*Config) +func testAccEncryptSecretDataWithCryptoKey(t *testing.T, s *terraform.State, cryptoKeyResourceName, plaintext, aad string) (string, *kmsCryptoKeyId, error) { + config := googleProviderConfig(t) rs, ok := s.RootModule().Resources[cryptoKeyResourceName] if !ok { diff --git a/third_party/terraform/tests/data_source_google_monitoring_uptime_check_ips_test.go b/third_party/terraform/tests/data_source_google_monitoring_uptime_check_ips_test.go index 1fea2fda9039..e70de1187473 100644 --- a/third_party/terraform/tests/data_source_google_monitoring_uptime_check_ips_test.go +++ b/third_party/terraform/tests/data_source_google_monitoring_uptime_check_ips_test.go @@ -8,7 +8,7 @@ import ( ) func TestAccDataSourceGoogleMonitoringUptimeCheckIps_basic(t *testing.T) { - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ diff --git a/third_party/terraform/tests/data_source_google_netblock_ip_ranges_test.go b/third_party/terraform/tests/data_source_google_netblock_ip_ranges_test.go index c45c9e830f54..6177c99128e6 100644 --- a/third_party/terraform/tests/data_source_google_netblock_ip_ranges_test.go +++ b/third_party/terraform/tests/data_source_google_netblock_ip_ranges_test.go @@ -8,7 +8,7 @@ import ( ) func TestAccDataSourceGoogleNetblockIpRanges_basic(t *testing.T) { - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ diff --git a/third_party/terraform/tests/data_source_google_organization_test.go b/third_party/terraform/tests/data_source_google_organization_test.go index ffb4dc4aabeb..c9706dafb8b4 100644 --- a/third_party/terraform/tests/data_source_google_organization_test.go +++ b/third_party/terraform/tests/data_source_google_organization_test.go @@ -5,7 +5,6 @@ import ( "regexp" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" ) @@ -13,7 +12,7 @@ func TestAccDataSourceGoogleOrganization_byFullName(t *testing.T) { orgId := getTestOrgFromEnv(t) name := "organizations/" + orgId - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -32,7 +31,7 @@ func TestAccDataSourceGoogleOrganization_byShortName(t *testing.T) { orgId := getTestOrgFromEnv(t) name 
:= "organizations/" + orgId - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -48,9 +47,9 @@ func TestAccDataSourceGoogleOrganization_byShortName(t *testing.T) { } func TestAccDataSourceGoogleOrganization_byDomain(t *testing.T) { - name := acctest.RandString(16) + ".com" + name := randString(t, 16) + ".com" - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ diff --git a/third_party/terraform/tests/data_source_google_project_organization_policy_test.go b/third_party/terraform/tests/data_source_google_project_organization_policy_test.go index 14f672f7d969..a18023feab33 100644 --- a/third_party/terraform/tests/data_source_google_project_organization_policy_test.go +++ b/third_party/terraform/tests/data_source_google_project_organization_policy_test.go @@ -10,7 +10,7 @@ import ( func TestAccDataSourceGoogleProjectOrganizationPolicy_basic(t *testing.T) { project := getTestProjectFromEnv() - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ diff --git a/third_party/terraform/tests/data_source_google_project_test.go b/third_party/terraform/tests/data_source_google_project_test.go index 6136d044df8e..dba4ff1e2d65 100644 --- a/third_party/terraform/tests/data_source_google_project_test.go +++ b/third_party/terraform/tests/data_source_google_project_test.go @@ -4,16 +4,15 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" ) func TestAccDataSourceGoogleProject_basic(t *testing.T) { t.Parallel() org := getTestOrgFromEnv(t) - project := acctest.RandomWithPrefix("tf-test") + project := fmt.Sprintf("tf-test-%d", randInt(t)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ diff --git a/third_party/terraform/tests/data_source_google_projects_test.go b/third_party/terraform/tests/data_source_google_projects_test.go index c7eee7d2c363..9564b2dbbfa3 100644 --- a/third_party/terraform/tests/data_source_google_projects_test.go +++ b/third_party/terraform/tests/data_source_google_projects_test.go @@ -12,7 +12,7 @@ func TestAccDataSourceGoogleProjects_basic(t *testing.T) { project := getTestProjectFromEnv() - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ diff --git a/third_party/terraform/tests/data_source_google_redis_instance_test.go b/third_party/terraform/tests/data_source_google_redis_instance_test.go new file mode 100644 index 000000000000..bd9914b1fa5b --- /dev/null +++ b/third_party/terraform/tests/data_source_google_redis_instance_test.go @@ -0,0 +1,38 @@ +package google + +import ( + "fmt" + "testing" + + "github.com/hashicorp/terraform-plugin-sdk/helper/resource" +) + +func TestAccRedisInstanceDatasource_basic(t *testing.T) { + t.Parallel() + + vcrTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + { + Config: testAccRedisInstanceDatasourceConfig(randString(t, 10)), + Check: resource.ComposeTestCheckFunc( + 
checkDataSourceStateMatchesResourceState("data.google_redis_instance.redis", "google_redis_instance.redis"), + ), + }, + }, + }) +} + +func testAccRedisInstanceDatasourceConfig(suffix string) string { + return fmt.Sprintf(` +resource "google_redis_instance" "redis" { + name = "redis-test-%s" + memory_size_gb = 1 +} + +data "google_redis_instance" "redis" { + name = google_redis_instance.redis.name +} +`, suffix) +} diff --git a/third_party/terraform/tests/data_source_google_service_account_access_token_test.go b/third_party/terraform/tests/data_source_google_service_account_access_token_test.go index 3c8f45c49142..d84122189221 100644 --- a/third_party/terraform/tests/data_source_google_service_account_access_token_test.go +++ b/third_party/terraform/tests/data_source_google_service_account_access_token_test.go @@ -33,7 +33,7 @@ func TestAccDataSourceGoogleServiceAccountAccessToken_basic(t *testing.T) { serviceAccount := getTestServiceAccountFromEnv(t) targetServiceAccountEmail := BootstrapServiceAccount(t, getTestProjectFromEnv(), serviceAccount) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ diff --git a/third_party/terraform/tests/data_source_google_service_account_id_token_test.go b/third_party/terraform/tests/data_source_google_service_account_id_token_test.go new file mode 100644 index 000000000000..9ef4b2599358 --- /dev/null +++ b/third_party/terraform/tests/data_source_google_service_account_id_token_test.go @@ -0,0 +1,111 @@ +package google + +import ( + "context" + "testing" + + "fmt" + + "google.golang.org/api/idtoken" + + "github.com/hashicorp/terraform-plugin-sdk/helper/resource" + "github.com/hashicorp/terraform-plugin-sdk/terraform" +) + +const targetAudience = "https://foo.bar/" + +func testAccCheckServiceAccountIdTokenValue(name, audience string) resource.TestCheckFunc { + return func(s *terraform.State) error { + ms := s.RootModule() + + rs, ok := ms.Resources[name] + if !ok { + return fmt.Errorf("can't find %s in state", name) + } + + v, ok := rs.Primary.Attributes["id_token"] + if !ok { + return fmt.Errorf("id_token not found") + } + + _, err := idtoken.Validate(context.Background(), v, audience) + if err != nil { + return fmt.Errorf("token validation failed: %v", err) + } + + return nil + } +} + +func TestAccDataSourceGoogleServiceAccountIdToken_basic(t *testing.T) { + t.Parallel() + + resourceName := "data.google_service_account_id_token.default" + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + { + Config: testAccCheckGoogleServiceAccountIdToken_basic(targetAudience), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr(resourceName, "target_audience", targetAudience), + testAccCheckServiceAccountIdTokenValue(resourceName, targetAudience), + ), + }, + }, + }) +} + +func testAccCheckGoogleServiceAccountIdToken_basic(targetAudience string) string { + + return fmt.Sprintf(` +data "google_service_account_id_token" "default" { + target_audience = "%s" +} +`, targetAudience) +} + +func TestAccDataSourceGoogleServiceAccountIdToken_impersonation(t *testing.T) { + t.Parallel() + + resourceName := "data.google_service_account_id_token.default" + serviceAccount := getTestServiceAccountFromEnv(t) + targetServiceAccountEmail := BootstrapServiceAccount(t, getTestProjectFromEnv(), serviceAccount) + + resource.Test(t, resource.TestCase{ +
PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + { + Config: testAccCheckGoogleServiceAccountIdToken_impersonation_datasource(targetAudience, targetServiceAccountEmail), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr(resourceName, "target_audience", targetAudience), + testAccCheckServiceAccountIdTokenValue(resourceName, targetAudience), + ), + }, + }, + }) +} + +func testAccCheckGoogleServiceAccountIdToken_impersonation_datasource(targetAudience string, targetServiceAccount string) string { + + return fmt.Sprintf(` +data "google_service_account_access_token" "default" { + target_service_account = "%s" + scopes = ["userinfo-email", "https://www.googleapis.com/auth/cloud-platform"] + lifetime = "30s" +} + +provider google { + alias = "impersonated" + access_token = data.google_service_account_access_token.default.access_token +} + +data "google_service_account_id_token" "default" { + provider = google.impersonated + target_service_account = "%s" + target_audience = "%s" +} +`, targetServiceAccount, targetServiceAccount, targetAudience) +} diff --git a/third_party/terraform/tests/data_source_google_service_account_key_test.go b/third_party/terraform/tests/data_source_google_service_account_key_test.go index 6ab5d8982669..9375caf17959 100644 --- a/third_party/terraform/tests/data_source_google_service_account_key_test.go +++ b/third_party/terraform/tests/data_source_google_service_account_key_test.go @@ -5,7 +5,6 @@ import ( "regexp" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" ) @@ -13,7 +12,7 @@ func TestAccDatasourceGoogleServiceAccountKey_basic(t *testing.T) { t.Parallel() resourceName := "data.google_service_account_key.acceptance" - account := acctest.RandomWithPrefix("tf-test") + account := fmt.Sprintf("tf-test-%d", randInt(t)) serviceAccountName := fmt.Sprintf( "projects/%s/serviceAccounts/%s@%s.iam.gserviceaccount.com", getTestProjectFromEnv(), @@ -21,14 +20,14 @@ func TestAccDatasourceGoogleServiceAccountKey_basic(t *testing.T) { getTestProjectFromEnv(), ) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ { Config: testAccDatasourceGoogleServiceAccountKey(account), Check: resource.ComposeTestCheckFunc( - testAccCheckGoogleServiceAccountKeyExists(resourceName), + testAccCheckGoogleServiceAccountKeyExists(t, resourceName), // Check that the 'name' starts with the service account name resource.TestMatchResourceAttr(resourceName, "name", regexp.MustCompile(serviceAccountName)), resource.TestCheckResourceAttrSet(resourceName, "key_algorithm"), diff --git a/third_party/terraform/tests/data_source_google_service_account_test.go b/third_party/terraform/tests/data_source_google_service_account_test.go index 7d52ab956406..2b675da2b95c 100644 --- a/third_party/terraform/tests/data_source_google_service_account_test.go +++ b/third_party/terraform/tests/data_source_google_service_account_test.go @@ -4,7 +4,6 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" ) @@ -12,9 +11,9 @@ func TestAccDatasourceGoogleServiceAccount_basic(t *testing.T) { t.Parallel() resourceName := "data.google_service_account.acceptance" - account := acctest.RandomWithPrefix("tf-test") + account := fmt.Sprintf("tf-test-%d", randInt(t)) - 
resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ diff --git a/third_party/terraform/tests/data_source_google_sql_ca_certs_test.go b/third_party/terraform/tests/data_source_google_sql_ca_certs_test.go index f42a685074b6..565df9d889a2 100644 --- a/third_party/terraform/tests/data_source_google_sql_ca_certs_test.go +++ b/third_party/terraform/tests/data_source_google_sql_ca_certs_test.go @@ -4,7 +4,6 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" ) @@ -12,12 +11,12 @@ import ( func TestAccDataSourceGoogleSQLCaCerts_basic(t *testing.T) { t.Parallel() - instanceName := fmt.Sprintf("data-ssl-ca-cert-test-%s", acctest.RandString(10)) + instanceName := fmt.Sprintf("data-ssl-ca-cert-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeInstanceDestroy, + CheckDestroy: testAccCheckComputeInstanceDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccDataSourceGoogleSQLCaCertsConfig(instanceName), diff --git a/third_party/terraform/tests/data_source_google_storage_project_service_account_test.go b/third_party/terraform/tests/data_source_google_storage_project_service_account_test.go index e731021b4cb9..7d2b7782027e 100644 --- a/third_party/terraform/tests/data_source_google_storage_project_service_account_test.go +++ b/third_party/terraform/tests/data_source_google_storage_project_service_account_test.go @@ -11,7 +11,7 @@ func TestAccDataSourceGoogleStorageProjectServiceAccount_basic(t *testing.T) { resourceName := "data.google_storage_project_service_account.gcs_account" - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ diff --git a/third_party/terraform/tests/data_source_google_storage_transfer_project_service_account_test.go b/third_party/terraform/tests/data_source_google_storage_transfer_project_service_account_test.go index e14c8216814b..01675487aafb 100644 --- a/third_party/terraform/tests/data_source_google_storage_transfer_project_service_account_test.go +++ b/third_party/terraform/tests/data_source_google_storage_transfer_project_service_account_test.go @@ -11,7 +11,7 @@ func TestAccDataSourceGoogleStorageTransferProjectServiceAccount_basic(t *testin resourceName := "data.google_storage_transfer_project_service_account.default" - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ diff --git a/third_party/terraform/tests/data_source_monitoring_notification_channel_test.go b/third_party/terraform/tests/data_source_monitoring_notification_channel_test.go index 651ebca4e0bd..81f1f3c7a25d 100644 --- a/third_party/terraform/tests/data_source_monitoring_notification_channel_test.go +++ b/third_party/terraform/tests/data_source_monitoring_notification_channel_test.go @@ -5,17 +5,16 @@ import ( "regexp" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" ) func TestAccDataSourceGoogleMonitoringNotificationChannel_byDisplayName(t *testing.T) { - resource.Test(t, 
resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ { - Config: testAccDataSourceGoogleMonitoringNotificationChannel_byDisplayName(acctest.RandomWithPrefix("tf-test")), + Config: testAccDataSourceGoogleMonitoringNotificationChannel_byDisplayName(fmt.Sprintf("tf-test-%d", randInt(t))), Check: resource.ComposeTestCheckFunc( checkDataSourceStateMatchesResourceState( "data.google_monitoring_notification_channel.default", @@ -27,12 +26,12 @@ func TestAccDataSourceGoogleMonitoringNotificationChannel_byDisplayName(t *testi } func TestAccDataSourceGoogleMonitoringNotificationChannel_byTypeAndLabel(t *testing.T) { - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ { - Config: testAccDataSourceGoogleMonitoringNotificationChannel_byTypeAndLabel(acctest.RandomWithPrefix("tf-test")), + Config: testAccDataSourceGoogleMonitoringNotificationChannel_byTypeAndLabel(fmt.Sprintf("tf-test-%d", randInt(t))), Check: resource.ComposeTestCheckFunc( checkDataSourceStateMatchesResourceState( "data.google_monitoring_notification_channel.default", @@ -44,12 +43,12 @@ func TestAccDataSourceGoogleMonitoringNotificationChannel_byTypeAndLabel(t *test } func TestAccDataSourceGoogleMonitoringNotificationChannel_UserLabel(t *testing.T) { - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ { - Config: testAccDataSourceGoogleMonitoringNotificationChannel_byTypeAndUserLabel(acctest.RandomWithPrefix("tf-test")), + Config: testAccDataSourceGoogleMonitoringNotificationChannel_byTypeAndUserLabel(fmt.Sprintf("tf-test-%d", randInt(t))), Check: resource.ComposeTestCheckFunc( checkDataSourceStateMatchesResourceState( "data.google_monitoring_notification_channel.default", @@ -61,12 +60,12 @@ func TestAccDataSourceGoogleMonitoringNotificationChannel_UserLabel(t *testing.T } func TestAccDataSourceGoogleMonitoringNotificationChannel_byDisplayNameAndType(t *testing.T) { - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ { - Config: testAccDataSourceGoogleMonitoringNotificationChannel_byDisplayNameAndType(acctest.RandomWithPrefix("tf-test")), + Config: testAccDataSourceGoogleMonitoringNotificationChannel_byDisplayNameAndType(fmt.Sprintf("tf-test-%d", randInt(t))), Check: resource.ComposeTestCheckFunc( checkDataSourceStateMatchesResourceState( "data.google_monitoring_notification_channel.email", @@ -78,7 +77,7 @@ func TestAccDataSourceGoogleMonitoringNotificationChannel_byDisplayNameAndType(t } func TestAccDataSourceGoogleMonitoringNotificationChannel_ErrorNoDisplayNameOrType(t *testing.T) { - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -91,9 +90,9 @@ func TestAccDataSourceGoogleMonitoringNotificationChannel_ErrorNoDisplayNameOrTy } func TestAccDataSourceGoogleMonitoringNotificationChannel_ErrorNotFound(t *testing.T) { - displayName := acctest.RandomWithPrefix("tf-test") + displayName := fmt.Sprintf("tf-test-%d", randInt(t)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: 
[]resource.TestStep{ @@ -106,8 +105,8 @@ func TestAccDataSourceGoogleMonitoringNotificationChannel_ErrorNotFound(t *testi } func TestAccDataSourceGoogleMonitoringNotificationChannel_ErrorNotUnique(t *testing.T) { - displayName := acctest.RandomWithPrefix("tf-test") - resource.Test(t, resource.TestCase{ + displayName := fmt.Sprintf("tf-test-%d", randInt(t)) + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ diff --git a/third_party/terraform/tests/data_source_secret_manager_secret_version_test.go.erb b/third_party/terraform/tests/data_source_secret_manager_secret_version_test.go.erb index fd4c3bc3affc..c8699fd19456 100644 --- a/third_party/terraform/tests/data_source_secret_manager_secret_version_test.go.erb +++ b/third_party/terraform/tests/data_source_secret_manager_secret_version_test.go.erb @@ -1,6 +1,5 @@ <% autogen_exception -%> package google -<% unless version == "ga" -%> import ( "errors" @@ -15,12 +14,12 @@ import ( func TestAccDatasourceSecretManagerSecretVersion_basic(t *testing.T) { t.Parallel() - randomString := acctest.RandString(10) + randomString := randString(t, 10) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, - Providers: testAccProvidersOiCS, - CheckDestroy: testAccCheckSecretManagerSecretVersionDestroy, + Providers: testAccProviders, + CheckDestroy: testAccCheckSecretManagerSecretVersionDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccDatasourceSecretManagerSecretVersion_basic(randomString), @@ -35,12 +34,12 @@ func TestAccDatasourceSecretManagerSecretVersion_basic(t *testing.T) { func TestAccDatasourceSecretManagerSecretVersion_latest(t *testing.T) { t.Parallel() - randomString := acctest.RandString(10) + randomString := randString(t, 10) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, - Providers: testAccProvidersOiCS, - CheckDestroy: testAccCheckSecretManagerSecretVersionDestroy, + Providers: testAccProviders, + CheckDestroy: testAccCheckSecretManagerSecretVersionDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccDatasourceSecretManagerSecretVersion_latest(randomString), @@ -78,7 +77,6 @@ func testAccCheckDatasourceSecretManagerSecretVersion(n, expected string) resour func testAccDatasourceSecretManagerSecretVersion_latest(randomString string) string { return fmt.Sprintf(` resource "google_secret_manager_secret" "secret-basic" { - provider = google-beta secret_id = "tf-test-secret-version-%s" labels = { label = "my-label" @@ -89,13 +87,11 @@ resource "google_secret_manager_secret" "secret-basic" { } resource "google_secret_manager_secret_version" "secret-version-basic-1" { - provider = google-beta secret = google_secret_manager_secret.secret-basic.name secret_data = "my-tf-test-secret-first" } resource "google_secret_manager_secret_version" "secret-version-basic-2" { - provider = google-beta secret = google_secret_manager_secret.secret-basic.name secret_data = "my-tf-test-secret-second" @@ -103,7 +99,6 @@ resource "google_secret_manager_secret_version" "secret-version-basic-2" { } data "google_secret_manager_secret_version" "latest" { - provider = google-beta secret = google_secret_manager_secret_version.secret-version-basic-2.secret } `, randomString) @@ -112,7 +107,6 @@ data "google_secret_manager_secret_version" "latest" { func testAccDatasourceSecretManagerSecretVersion_basic(randomString string) string { return fmt.Sprintf(` 
resource "google_secret_manager_secret" "secret-basic" { - provider = google-beta secret_id = "tf-test-secret-version-%s" labels = { label = "my-label" @@ -123,16 +117,13 @@ resource "google_secret_manager_secret" "secret-basic" { } resource "google_secret_manager_secret_version" "secret-version-basic" { - provider = google-beta secret = google_secret_manager_secret.secret-basic.name secret_data = "my-tf-test-secret-%s" } data "google_secret_manager_secret_version" "basic" { - provider = google-beta secret = google_secret_manager_secret_version.secret-version-basic.secret version = 1 } `, randomString, randomString) } -<% end -%> diff --git a/third_party/terraform/tests/data_source_storage_object_signed_url_test.go b/third_party/terraform/tests/data_source_storage_object_signed_url_test.go index 1dce1f6ef1fa..d5488fd08c52 100644 --- a/third_party/terraform/tests/data_source_storage_object_signed_url_test.go +++ b/third_party/terraform/tests/data_source_storage_object_signed_url_test.go @@ -11,7 +11,6 @@ import ( "net/url" "github.com/hashicorp/go-cleanhttp" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" "golang.org/x/oauth2/google" @@ -103,14 +102,14 @@ func TestUrlData_SignedUrl(t *testing.T) { func TestAccStorageSignedUrl_basic(t *testing.T) { t.Parallel() - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ { Config: testGoogleSignedUrlConfig, Check: resource.ComposeTestCheckFunc( - testAccSignedUrlExists("data.google_storage_object_signed_url.blerg"), + testAccSignedUrlExists(t, "data.google_storage_object_signed_url.blerg"), ), }, }, @@ -118,16 +117,18 @@ func TestAccStorageSignedUrl_basic(t *testing.T) { } func TestAccStorageSignedUrl_accTest(t *testing.T) { + // URL includes an expires time + skipIfVcr(t) t.Parallel() - bucketName := fmt.Sprintf("tf-test-bucket-%d", acctest.RandInt()) + bucketName := fmt.Sprintf("tf-test-bucket-%d", randInt(t)) headers := map[string]string{ "x-goog-test": "foo", "x-goog-if-generation-match": "1", } - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -144,7 +145,7 @@ func TestAccStorageSignedUrl_accTest(t *testing.T) { }) } -func testAccSignedUrlExists(n string) resource.TestCheckFunc { +func testAccSignedUrlExists(t *testing.T, n string) resource.TestCheckFunc { return func(s *terraform.State) error { r := s.RootModule().Resources[n] diff --git a/third_party/terraform/tests/data_source_tpu_tensorflow_versions_test.go b/third_party/terraform/tests/data_source_tpu_tensorflow_versions_test.go index b08f4998b439..9f05a0ae5ac2 100644 --- a/third_party/terraform/tests/data_source_tpu_tensorflow_versions_test.go +++ b/third_party/terraform/tests/data_source_tpu_tensorflow_versions_test.go @@ -13,7 +13,7 @@ import ( func TestAccTPUTensorflowVersions_basic(t *testing.T) { t.Parallel() - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ diff --git a/third_party/terraform/tests/data_sql_database_instance_test.go b/third_party/terraform/tests/data_sql_database_instance_test.go new file mode 100644 index 000000000000..990c96b937f5 --- /dev/null +++ 
b/third_party/terraform/tests/data_sql_database_instance_test.go @@ -0,0 +1,48 @@ +package google + +import ( + "github.com/hashicorp/terraform-plugin-sdk/helper/resource" + "testing" +) + +func TestAccDataSourceSqlDatabaseInstance_basic(t *testing.T) { + t.Parallel() + + context := map[string]interface{}{ + "random_suffix": randString(t, 10), + } + + vcrTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccSqlDatabaseInstanceDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccDataSourceSqlDatabaseInstance_basic(context), + Check: resource.ComposeTestCheckFunc( + checkDataSourceStateMatchesResourceState("data.google_sql_database_instance.qa", "google_sql_database_instance.master"), + ), + }, + }, + }) +} + +func testAccDataSourceSqlDatabaseInstance_basic(context map[string]interface{}) string { + return Nprintf(` +resource "google_sql_database_instance" "master" { + name = "master-instance-%{random_suffix}" + database_version = "POSTGRES_11" + region = "us-central1" + + settings { + # Second-generation instance tiers are based on the + # machine type. + tier = "db-f1-micro" + } +} + +data "google_sql_database_instance" "qa" { + name = google_sql_database_instance.master.name +} +`, context) +} diff --git a/third_party/terraform/tests/resource_access_context_manager_access_level_test.go.erb b/third_party/terraform/tests/resource_access_context_manager_access_level_test.go.erb index 79285803c46d..9197180cb802 100644 --- a/third_party/terraform/tests/resource_access_context_manager_access_level_test.go.erb +++ b/third_party/terraform/tests/resource_access_context_manager_access_level_test.go.erb @@ -15,10 +15,10 @@ import ( func testAccAccessContextManagerAccessLevel_basicTest(t *testing.T) { org := getTestOrgFromEnv(t) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckAccessContextManagerAccessLevelDestroy, + CheckDestroy: testAccCheckAccessContextManagerAccessLevelDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccAccessContextManagerAccessLevel_basic(org, "my policy", "level"), @@ -43,10 +43,10 @@ func testAccAccessContextManagerAccessLevel_fullTest(t *testing.T) { org := getTestOrgFromEnv(t) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckAccessContextManagerAccessLevelDestroy, + CheckDestroy: testAccCheckAccessContextManagerAccessLevelDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccAccessContextManagerAccessLevel_full(org, "my policy", "level"), @@ -60,26 +60,48 @@ func testAccAccessContextManagerAccessLevel_fullTest(t *testing.T) { }) } -func testAccCheckAccessContextManagerAccessLevelDestroy(s *terraform.State) error { - for _, rs := range s.RootModule().Resources { - if rs.Type != "google_access_context_manager_access_level" { - continue - } +func testAccCheckAccessContextManagerAccessLevelDestroyProducer(t *testing.T) func(s *terraform.State) error { + return func(s *terraform.State) error { + for _, rs := range s.RootModule().Resources { + if rs.Type != "google_access_context_manager_access_level" { + continue + } - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) - url, err := replaceVarsForTest(config,
rs, "{{AccessContextManagerBasePath}}{{name}}") - if err != nil { - return err - } + url, err := replaceVarsForTest(config, rs, "{{AccessContextManagerBasePath}}{{name}}") + if err != nil { + return err + } - _, err = sendRequest(config, "GET", "", url, nil) - if err == nil { - return fmt.Errorf("AccessLevel still exists at %s", url) + _, err = sendRequest(config, "GET", "", url, nil) + if err == nil { + return fmt.Errorf("AccessLevel still exists at %s", url) + } } + + return nil } +} - return nil +func testAccAccessContextManagerAccessLevel_customTest(t *testing.T) { + org := getTestOrgFromEnv(t) + + vcrTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAccessContextManagerAccessLevelDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccAccessContextManagerAccessLevel_custom(org, "my policy", "level"), + }, + { + ResourceName: "google_access_context_manager_access_level.test-access", + ImportState: true, + ImportStateVerify: true, + }, + }, + }) } func testAccAccessContextManagerAccessLevel_basic(org, policyTitle, levelTitleName string) string { @@ -104,6 +126,27 @@ resource "google_access_context_manager_access_level" "test-access" { `, org, policyTitle, levelTitleName, levelTitleName) } +func testAccAccessContextManagerAccessLevel_custom(org, policyTitle, levelTitleName string) string { + return fmt.Sprintf(` +resource "google_access_context_manager_access_policy" "test-access" { + parent = "organizations/%s" + title = "%s" +} + +resource "google_access_context_manager_access_level" "test-access" { + parent = "accessPolicies/${google_access_context_manager_access_policy.test-access.name}" + name = "accessPolicies/${google_access_context_manager_access_policy.test-access.name}/accessLevels/%s" + title = "%s" + description = "hello" + custom { + expr { + expression = "device.os_type == OsType.DESKTOP_MAC" + } + } +} +`, org, policyTitle, levelTitleName, levelTitleName) +} + func testAccAccessContextManagerAccessLevel_basicUpdated(org, policyTitle, levelTitleName string) string { return fmt.Sprintf(` resource "google_access_context_manager_access_policy" "test-access" { diff --git a/third_party/terraform/tests/resource_access_context_manager_access_policy_test.go.erb b/third_party/terraform/tests/resource_access_context_manager_access_policy_test.go.erb index 263010e7a641..8c30b957ea69 100644 --- a/third_party/terraform/tests/resource_access_context_manager_access_policy_test.go.erb +++ b/third_party/terraform/tests/resource_access_context_manager_access_policy_test.go.erb @@ -84,6 +84,7 @@ func TestAccAccessContextManager(t *testing.T) { "service_perimeter_resource": testAccAccessContextManagerServicePerimeterResource_basicTest, "access_level": testAccAccessContextManagerAccessLevel_basicTest, "access_level_full": testAccAccessContextManagerAccessLevel_fullTest, + "access_level_custom": testAccAccessContextManagerAccessLevel_customTest, } for name, tc := range testCases { @@ -101,10 +102,10 @@ func TestAccAccessContextManager(t *testing.T) { func testAccAccessContextManagerAccessPolicy_basicTest(t *testing.T) { org := getTestOrgFromEnv(t) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckAccessContextManagerAccessPolicyDestroy, + CheckDestroy: testAccCheckAccessContextManagerAccessPolicyDestroyProducer(t), Steps: []resource.TestStep{ { Config: 
testAccAccessContextManagerAccessPolicy_basic(org, "my policy"), @@ -126,26 +127,28 @@ func testAccAccessContextManagerAccessPolicy_basicTest(t *testing.T) { }) } -func testAccCheckAccessContextManagerAccessPolicyDestroy(s *terraform.State) error { - for _, rs := range s.RootModule().Resources { - if rs.Type != "google_access_context_manager_access_policy" { - continue - } +func testAccCheckAccessContextManagerAccessPolicyDestroyProducer(t *testing.T) func(s *terraform.State) error { + return func(s *terraform.State) error { + for _, rs := range s.RootModule().Resources { + if rs.Type != "google_access_context_manager_access_policy" { + continue + } - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) - url, err := replaceVarsForTest(config, rs, "{{AccessContextManagerBasePath}}accessPolicies/{{name}}") - if err != nil { - return err - } + url, err := replaceVarsForTest(config, rs, "{{AccessContextManagerBasePath}}accessPolicies/{{name}}") + if err != nil { + return err + } - _, err = sendRequest(config, "GET", "", url, nil) - if err == nil { - return fmt.Errorf("AccessPolicy still exists at %s", url) + _, err = sendRequest(config, "GET", "", url, nil) + if err == nil { + return fmt.Errorf("AccessPolicy still exists at %s", url) + } } - } - return nil + return nil + } } func testAccAccessContextManagerAccessPolicy_basic(org, title string) string { diff --git a/third_party/terraform/tests/resource_access_context_manager_service_perimeter_resource_test.go b/third_party/terraform/tests/resource_access_context_manager_service_perimeter_resource_test.go index 9380e6e191fd..aee837de0a53 100644 --- a/third_party/terraform/tests/resource_access_context_manager_service_perimeter_resource_test.go +++ b/third_party/terraform/tests/resource_access_context_manager_service_perimeter_resource_test.go @@ -12,12 +12,14 @@ import ( // can exist, they need to be ran serially. See AccessPolicy for the test runner. 
func testAccAccessContextManagerServicePerimeterResource_basicTest(t *testing.T) { + // Multiple fine-grained resources + skipIfVcr(t) org := getTestOrgFromEnv(t) projects := BootstrapServicePerimeterProjects(t, 2) policyTitle := "my policy" perimeterTitle := "perimeter" - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -37,50 +39,52 @@ func testAccAccessContextManagerServicePerimeterResource_basicTest(t *testing.T) // Use a separate TestStep rather than a CheckDestroy because we need the service perimeter to still exist { Config: testAccAccessContextManagerServicePerimeterResource_destroy(org, policyTitle, perimeterTitle), - Check: testAccCheckAccessContextManagerServicePerimeterResourceDestroy, + Check: testAccCheckAccessContextManagerServicePerimeterResourceDestroyProducer(t), }, }, }) } -func testAccCheckAccessContextManagerServicePerimeterResourceDestroy(s *terraform.State) error { - for _, rs := range s.RootModule().Resources { - if rs.Type != "google_access_context_manager_service_perimeter_resource" { - continue +func testAccCheckAccessContextManagerServicePerimeterResourceDestroyProducer(t *testing.T) func(s *terraform.State) error { + return func(s *terraform.State) error { + for _, rs := range s.RootModule().Resources { + if rs.Type != "google_access_context_manager_service_perimeter_resource" { + continue + } + + config := googleProviderConfig(t) + + url, err := replaceVarsForTest(config, rs, "{{AccessContextManagerBasePath}}{{perimeter_name}}") + if err != nil { + return err + } + + res, err := sendRequest(config, "GET", "", url, nil) + if err != nil { + return err + } + + v, ok := res["status"] + if !ok || v == nil { + return nil + } + + res = v.(map[string]interface{}) + v, ok = res["resources"] + if !ok || v == nil { + return nil + } + + resources := v.([]interface{}) + if len(resources) == 0 { + return nil + } + + return fmt.Errorf("expected 0 resources in perimeter, found %d: %v", len(resources), resources) } - config := testAccProvider.Meta().(*Config) - - url, err := replaceVarsForTest(config, rs, "{{AccessContextManagerBasePath}}{{perimeter_name}}") - if err != nil { - return err - } - - res, err := sendRequest(config, "GET", "", url, nil) - if err != nil { - return err - } - - v, ok := res["status"] - if !ok || v == nil { - return nil - } - - res = v.(map[string]interface{}) - v, ok = res["resources"] - if !ok || v == nil { - return nil - } - - resources := v.([]interface{}) - if len(resources) == 0 { - return nil - } - - return fmt.Errorf("expected 0 resources in perimeter, found %d: %v", len(resources), resources) + return nil } - - return nil } func testAccAccessContextManagerServicePerimeterResource_basic(org, policyTitle, perimeterTitleName string, projectNumber1, projectNumber2 int64) string { diff --git a/third_party/terraform/tests/resource_access_context_manager_service_perimeter_test.go.erb b/third_party/terraform/tests/resource_access_context_manager_service_perimeter_test.go.erb index afc2c59599ab..3a7d44490444 100644 --- a/third_party/terraform/tests/resource_access_context_manager_service_perimeter_test.go.erb +++ b/third_party/terraform/tests/resource_access_context_manager_service_perimeter_test.go.erb @@ -14,10 +14,10 @@ import ( func testAccAccessContextManagerServicePerimeter_basicTest(t *testing.T) { org := getTestOrgFromEnv(t) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { 
testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckAccessContextManagerServicePerimeterDestroy, + CheckDestroy: testAccCheckAccessContextManagerServicePerimeterDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccAccessContextManagerServicePerimeter_basic(org, "my policy", "level", "perimeter"), @@ -34,10 +34,10 @@ func testAccAccessContextManagerServicePerimeter_basicTest(t *testing.T) { func testAccAccessContextManagerServicePerimeter_updateTest(t *testing.T) { org := getTestOrgFromEnv(t) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckAccessContextManagerServicePerimeterDestroy, + CheckDestroy: testAccCheckAccessContextManagerServicePerimeterDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccAccessContextManagerServicePerimeter_basic(org, "my policy", "level", "perimeter"), @@ -83,26 +83,28 @@ func testAccAccessContextManagerServicePerimeter_updateTest(t *testing.T) { }) } -func testAccCheckAccessContextManagerServicePerimeterDestroy(s *terraform.State) error { - for _, rs := range s.RootModule().Resources { - if rs.Type != "google_access_context_manager_service_perimeter" { - continue - } +func testAccCheckAccessContextManagerServicePerimeterDestroyProducer(t *testing.T) func(s *terraform.State) error { + return func(s *terraform.State) error { + for _, rs := range s.RootModule().Resources { + if rs.Type != "google_access_context_manager_service_perimeter" { + continue + } - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) - url, err := replaceVarsForTest(config, rs, "{{AccessContextManagerBasePath}}{{name}}") - if err != nil { - return err - } + url, err := replaceVarsForTest(config, rs, "{{AccessContextManagerBasePath}}{{name}}") + if err != nil { + return err + } - _, err = sendRequest(config, "GET", "", url, nil) - if err == nil { - return fmt.Errorf("ServicePerimeter still exists at %s", url) + _, err = sendRequest(config, "GET", "", url, nil) + if err == nil { + return fmt.Errorf("ServicePerimeter still exists at %s", url) + } } - } - return nil + return nil + } } func testAccAccessContextManagerServicePerimeter_basic(org, policyTitle, levelTitleName, perimeterTitleName string) string { diff --git a/third_party/terraform/tests/resource_active_directory_domain_update_test.go b/third_party/terraform/tests/resource_active_directory_domain_update_test.go new file mode 100644 index 000000000000..fd832566b5cb --- /dev/null +++ b/third_party/terraform/tests/resource_active_directory_domain_update_test.go @@ -0,0 +1,80 @@ +package google + +import ( + "fmt" + "testing" + + "github.com/hashicorp/terraform-plugin-sdk/helper/resource" +) + +func TestAccActiveDirectoryDomain_update(t *testing.T) { + t.Parallel() + + domain := fmt.Sprintf("mydomain%s.org1.com", randString(t, 5)) + context := map[string]interface{}{ + "domain": domain, + "resource_name": "ad-domain", + } + + resourceName := Nprintf("google_active_directory_domain.%{resource_name}", context) + + vcrTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckActiveDirectoryDomainDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccADDomainBasic(context), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"domain_name"}, + }, + { + Config: testAccADDomainUpdate(context), + 
}, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"domain_name"}, + }, + { + Config: testAccADDomainBasic(context), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"domain_name"}, + }, + }, + }) +} + +func testAccADDomainBasic(context map[string]interface{}) string { + + return Nprintf(` + resource "google_active_directory_domain" "%{resource_name}" { + domain_name = "%{domain}" + locations = ["us-central1"] + reserved_ip_range = "192.168.255.0/24" + } + `, context) +} + +func testAccADDomainUpdate(context map[string]interface{}) string { + return Nprintf(` + resource "google_active_directory_domain" "%{resource_name}" { + domain_name = "%{domain}" + locations = ["us-central1", "us-west1"] + reserved_ip_range = "192.168.255.0/24" + labels = { + env = "test" + } + } + `, context) + +} diff --git a/third_party/terraform/tests/resource_app_engine_application_test.go b/third_party/terraform/tests/resource_app_engine_application_test.go index d96215dcba17..b5400a235d0a 100644 --- a/third_party/terraform/tests/resource_app_engine_application_test.go +++ b/third_party/terraform/tests/resource_app_engine_application_test.go @@ -4,7 +4,6 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" ) @@ -12,8 +11,8 @@ func TestAccAppEngineApplication_basic(t *testing.T) { t.Parallel() org := getTestOrgFromEnv(t) - pid := acctest.RandomWithPrefix("tf-test") - resource.Test(t, resource.TestCase{ + pid := fmt.Sprintf("tf-test-%d", randInt(t)) + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -48,9 +47,9 @@ func TestAccAppEngineApplication_withIAP(t *testing.T) { t.Parallel() org := getTestOrgFromEnv(t) - pid := acctest.RandomWithPrefix("tf-test") + pid := fmt.Sprintf("tf-test-%d", randInt(t)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -102,6 +101,7 @@ resource "google_app_engine_application" "acceptance" { project = google_project.acceptance.project_id auth_domain = "hashicorptest.com" location_id = "us-central" + database_type = "CLOUD_DATASTORE_COMPATIBILITY" serving_status = "SERVING" } `, pid, pid, org) @@ -119,6 +119,7 @@ resource "google_app_engine_application" "acceptance" { project = google_project.acceptance.project_id auth_domain = "tf-test.club" location_id = "us-central" + database_type = "CLOUD_DATASTORE_COMPATIBILITY" serving_status = "USER_DISABLED" } `, pid, pid, org) diff --git a/third_party/terraform/tests/resource_app_engine_domain_mapping_test.go b/third_party/terraform/tests/resource_app_engine_domain_mapping_test.go index 206b468ecefa..b00937d9cb1e 100644 --- a/third_party/terraform/tests/resource_app_engine_domain_mapping_test.go +++ b/third_party/terraform/tests/resource_app_engine_domain_mapping_test.go @@ -4,19 +4,18 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" ) func TestAccAppEngineDomainMapping_update(t *testing.T) { t.Parallel() - domainName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + domainName := fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: 
func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckAppEngineDomainMappingDestroy, + CheckDestroy: testAccCheckAppEngineDomainMappingDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccAppEngineDomainMapping_basic(domainName), diff --git a/third_party/terraform/tests/resource_app_engine_flexible_app_version_test.go b/third_party/terraform/tests/resource_app_engine_flexible_app_version_test.go index 8f6170799487..3917da9a7183 100644 --- a/third_party/terraform/tests/resource_app_engine_flexible_app_version_test.go +++ b/third_party/terraform/tests/resource_app_engine_flexible_app_version_test.go @@ -1,8 +1,6 @@ package google import ( - "fmt" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "testing" ) @@ -10,46 +8,70 @@ import ( func TestAccAppEngineFlexibleAppVersion_update(t *testing.T) { t.Parallel() - resourceName := fmt.Sprintf("tf-test-ae-service-%s", acctest.RandString(10)) + context := map[string]interface{}{ + "org_id": getTestOrgFromEnv(t), + "billing_account": getTestBillingAccountFromEnv(t), + "random_suffix": randString(t, 10), + } - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckAppEngineFlexibleAppVersionDestroy, + CheckDestroy: testAccCheckAppEngineFlexibleAppVersionDestroyProducer(t), Steps: []resource.TestStep{ { - Config: testAccAppEngineFlexibleAppVersion_python(resourceName), + Config: testAccAppEngineFlexibleAppVersion_python(context), }, { ResourceName: "google_app_engine_flexible_app_version.foo", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"env_variables", "deployment", "entrypoint", "service", "delete_service_on_destroy"}, + ImportStateVerifyIgnore: []string{"env_variables", "deployment", "entrypoint", "service", "noop_on_destroy"}, }, { - Config: testAccAppEngineFlexibleAppVersion_pythonUpdate(resourceName), + Config: testAccAppEngineFlexibleAppVersion_pythonUpdate(context), }, { ResourceName: "google_app_engine_flexible_app_version.foo", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"env_variables", "deployment", "entrypoint", "service", "delete_service_on_destroy"}, + ImportStateVerifyIgnore: []string{"env_variables", "deployment", "entrypoint", "service", "noop_on_destroy"}, }, }, }) } -func testAccAppEngineFlexibleAppVersion_python(resourceName string) string { - return fmt.Sprintf(` +func testAccAppEngineFlexibleAppVersion_python(context map[string]interface{}) string { + return Nprintf(` +resource "google_project" "my_project" { + name = "tf-test-appeng-flex%{random_suffix}" + project_id = "tf-test-appeng-flex%{random_suffix}" + org_id = "%{org_id}" + billing_account = "%{billing_account}" +} + +resource "google_app_engine_application" "app" { + project = google_project.my_project.project_id + location_id = "us-central" +} + resource "google_project_service" "project" { + project = google_project.my_project.project_id service = "appengineflex.googleapis.com" disable_dependent_services = false } +resource "google_project_iam_member" "gae_api" { + project = google_project_service.project.project + role = "roles/compute.networkUser" + member = "serviceAccount:service-${google_project.my_project.number}@gae-api-prod.google.com.iam.gserviceaccount.com" +} + resource "google_app_engine_flexible_app_version" "foo" { + project = 
google_project_iam_member.gae_api.project version_id = "v1" - service = "%s" + service = "default" runtime = "python" runtime_api_version = "1" @@ -104,11 +126,12 @@ resource "google_app_engine_flexible_app_version" "foo" { instances = 1 } - delete_service_on_destroy = true + noop_on_destroy = true } resource "google_storage_bucket" "bucket" { - name = "%s-bucket" + project = google_project.my_project.project_id + name = "tf-test-%{random_suffix}-flex-ae-bucket" } resource "google_storage_bucket_object" "yaml" { @@ -127,20 +150,40 @@ resource "google_storage_bucket_object" "main" { name = "main.py" bucket = google_storage_bucket.bucket.name source = "./test-fixtures/appengine/hello-world-flask/main.py" -}`, resourceName, resourceName) +}`, context) +} + +func testAccAppEngineFlexibleAppVersion_pythonUpdate(context map[string]interface{}) string { + return Nprintf(` +resource "google_project" "my_project" { + name = "tf-test-appeng-flex%{random_suffix}" + project_id = "tf-test-appeng-flex%{random_suffix}" + org_id = "%{org_id}" + billing_account = "%{billing_account}" +} + +resource "google_app_engine_application" "app" { + project = google_project.my_project.project_id + location_id = "us-central" } -func testAccAppEngineFlexibleAppVersion_pythonUpdate(resourceName string) string { - return fmt.Sprintf(` resource "google_project_service" "project" { + project = google_project.my_project.project_id service = "appengineflex.googleapis.com" disable_dependent_services = false } +resource "google_project_iam_member" "gae_api" { + project = google_project_service.project.project + role = "roles/compute.networkUser" + member = "serviceAccount:service-${google_project.my_project.number}@gae-api-prod.google.com.iam.gserviceaccount.com" +} + resource "google_app_engine_flexible_app_version" "foo" { + project = google_project_iam_member.gae_api.project version_id = "v1" - service = "%s" + service = "default" runtime = "python" runtime_api_version = "1" @@ -195,11 +238,12 @@ resource "google_app_engine_flexible_app_version" "foo" { instances = 2 } - delete_service_on_destroy = true + noop_on_destroy = true } resource "google_storage_bucket" "bucket" { - name = "%s-bucket" + project = google_project.my_project.project_id + name = "tf-test-%{random_suffix}-flex-ae-bucket" } resource "google_storage_bucket_object" "yaml" { @@ -218,5 +262,5 @@ resource "google_storage_bucket_object" "main" { name = "main.py" bucket = google_storage_bucket.bucket.name source = "./test-fixtures/appengine/hello-world-flask/main.py" -}`, resourceName, resourceName) +}`, context) } diff --git a/third_party/terraform/tests/resource_app_engine_standard_app_version_test.go b/third_party/terraform/tests/resource_app_engine_standard_app_version_test.go new file mode 100644 index 000000000000..5f6c099830a2 --- /dev/null +++ b/third_party/terraform/tests/resource_app_engine_standard_app_version_test.go @@ -0,0 +1,204 @@ +package google + +import ( + "github.com/hashicorp/terraform-plugin-sdk/helper/resource" + "testing" +) + +func TestAccAppEngineStandardAppVersion_update(t *testing.T) { + t.Parallel() + + context := map[string]interface{}{ + "org_id": getTestOrgFromEnv(t), + "billing_account": getTestBillingAccountFromEnv(t), + "random_suffix": randString(t, 10), + } + + vcrTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAppEngineStandardAppVersionDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: 
testAccAppEngineStandardAppVersion_python(context), + }, + { + ResourceName: "google_app_engine_standard_app_version.foo", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"env_variables", "deployment", "entrypoint", "service", "noop_on_destroy"}, + }, + { + Config: testAccAppEngineStandardAppVersion_pythonUpdate(context), + }, + { + ResourceName: "google_app_engine_standard_app_version.foo", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"env_variables", "deployment", "entrypoint", "service", "noop_on_destroy"}, + }, + }, + }) +} + +func testAccAppEngineStandardAppVersion_python(context map[string]interface{}) string { + return Nprintf(` +resource "google_project" "my_project" { + name = "tf-test-appeng-std%{random_suffix}" + project_id = "tf-test-appeng-std%{random_suffix}" + org_id = "%{org_id}" + billing_account = "%{billing_account}" +} + +resource "google_app_engine_application" "app" { + project = google_project.my_project.project_id + location_id = "us-central" +} + +resource "google_project_service" "project" { + project = google_project.my_project.project_id + service = "appengine.googleapis.com" + + disable_dependent_services = false +} + +resource "google_app_engine_standard_app_version" "foo" { + project = google_project_service.project.project + version_id = "v1" + service = "default" + runtime = "python38" + + entrypoint { + shell = "gunicorn -b :$PORT main:app" + } + + deployment { + files { + name = "main.py" + source_url = "https://storage.googleapis.com/${google_storage_bucket.bucket.name}/${google_storage_bucket_object.main.name}" + } + + files { + name = "requirements.txt" + source_url = "https://storage.googleapis.com/${google_storage_bucket.bucket.name}/${google_storage_bucket_object.requirements.name}" + } + } + + inbound_services = ["INBOUND_SERVICE_WARMUP", "INBOUND_SERVICE_MAIL"] + + env_variables = { + port = "8000" + } + + instance_class = "F2" + + automatic_scaling { + max_concurrent_requests = 10 + min_idle_instances = 1 + max_idle_instances = 3 + min_pending_latency = "1s" + max_pending_latency = "5s" + standard_scheduler_settings { + target_cpu_utilization = 0.5 + target_throughput_utilization = 0.75 + min_instances = 2 + max_instances = 10 + } + } + + noop_on_destroy = true +} + +resource "google_storage_bucket" "bucket" { + project = google_project.my_project.project_id + name = "tf-test-%{random_suffix}-standard-ae-bucket" +} + +resource "google_storage_bucket_object" "requirements" { + name = "requirements.txt" + bucket = google_storage_bucket.bucket.name + source = "./test-fixtures/appengine/hello-world-flask/requirements.txt" +} + +resource "google_storage_bucket_object" "main" { + name = "main.py" + bucket = google_storage_bucket.bucket.name + source = "./test-fixtures/appengine/hello-world-flask/main.py" +}`, context) +} + +func testAccAppEngineStandardAppVersion_pythonUpdate(context map[string]interface{}) string { + return Nprintf(` +resource "google_project" "my_project" { + name = "tf-test-appeng-std%{random_suffix}" + project_id = "tf-test-appeng-std%{random_suffix}" + org_id = "%{org_id}" + billing_account = "%{billing_account}" +} + +resource "google_app_engine_application" "app" { + project = google_project.my_project.project_id + location_id = "us-central" +} + +resource "google_project_service" "project" { + project = google_project.my_project.project_id + service = "appengine.googleapis.com" + + disable_dependent_services = false +} + +resource 
"google_app_engine_standard_app_version" "foo" { + project = google_project_service.project.project + version_id = "v1" + service = "default" + runtime = "python38" + + entrypoint { + shell = "gunicorn -b :$PORT main:app" + } + + deployment { + files { + name = "main.py" + source_url = "https://storage.googleapis.com/${google_storage_bucket.bucket.name}/${google_storage_bucket_object.main.name}" + } + + files { + name = "requirements.txt" + source_url = "https://storage.googleapis.com/${google_storage_bucket.bucket.name}/${google_storage_bucket_object.requirements.name}" + } + } + + inbound_services = [] + + env_variables = { + port = "8000" + } + + instance_class = "B2" + + basic_scaling { + max_instances = 5 + } + + noop_on_destroy = true +} + +resource "google_storage_bucket" "bucket" { + project = google_project.my_project.project_id + name = "tf-test-%{random_suffix}-standard-ae-bucket" +} + +resource "google_storage_bucket_object" "requirements" { + name = "requirements.txt" + bucket = google_storage_bucket.bucket.name + source = "./test-fixtures/appengine/hello-world-flask/requirements.txt" +} + +resource "google_storage_bucket_object" "main" { + name = "main.py" + bucket = google_storage_bucket.bucket.name + source = "./test-fixtures/appengine/hello-world-flask/main.py" +}`, context) +} diff --git a/third_party/terraform/tests/resource_big_query_dataset_test.go b/third_party/terraform/tests/resource_big_query_dataset_test.go index 41cb5a3938e4..ed4be18e04ec 100644 --- a/third_party/terraform/tests/resource_big_query_dataset_test.go +++ b/third_party/terraform/tests/resource_big_query_dataset_test.go @@ -4,7 +4,6 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" "google.golang.org/api/bigquery/v2" @@ -13,12 +12,12 @@ import ( func TestAccBigQueryDataset_basic(t *testing.T) { t.Parallel() - datasetID := fmt.Sprintf("tf_test_%s", acctest.RandString(10)) + datasetID := fmt.Sprintf("tf_test_%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckBigQueryDatasetDestroy, + CheckDestroy: testAccCheckBigQueryDatasetDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccBigQueryDataset(datasetID), @@ -43,17 +42,17 @@ func TestAccBigQueryDataset_basic(t *testing.T) { func TestAccBigQueryDataset_datasetWithContents(t *testing.T) { t.Parallel() - datasetID := fmt.Sprintf("tf_test_%s", acctest.RandString(10)) - tableID := fmt.Sprintf("tf_test_%s", acctest.RandString(10)) + datasetID := fmt.Sprintf("tf_test_%s", randString(t, 10)) + tableID := fmt.Sprintf("tf_test_%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckBigQueryDatasetDestroy, + CheckDestroy: testAccCheckBigQueryDatasetDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccBigQueryDatasetDeleteContents(datasetID), - Check: testAccAddTable(datasetID, tableID), + Check: testAccAddTable(t, datasetID, tableID), }, { ResourceName: "google_bigquery_dataset.contents_test", @@ -68,14 +67,14 @@ func TestAccBigQueryDataset_datasetWithContents(t *testing.T) { func TestAccBigQueryDataset_access(t *testing.T) { t.Parallel() - datasetID := fmt.Sprintf("tf_test_access_%s", acctest.RandString(10)) - 
otherDatasetID := fmt.Sprintf("tf_test_other_%s", acctest.RandString(10)) - otherTableID := fmt.Sprintf("tf_test_other_%s", acctest.RandString(10)) + datasetID := fmt.Sprintf("tf_test_access_%s", randString(t, 10)) + otherDatasetID := fmt.Sprintf("tf_test_other_%s", randString(t, 10)) + otherTableID := fmt.Sprintf("tf_test_other_%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckBigQueryDatasetDestroy, + CheckDestroy: testAccCheckBigQueryDatasetDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccBigQueryDatasetWithOneAccess(datasetID), @@ -116,12 +115,12 @@ func TestAccBigQueryDataset_access(t *testing.T) { func TestAccBigQueryDataset_regionalLocation(t *testing.T) { t.Parallel() - datasetID1 := fmt.Sprintf("tf_test_%s", acctest.RandString(10)) + datasetID1 := fmt.Sprintf("tf_test_%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckBigQueryDatasetDestroy, + CheckDestroy: testAccCheckBigQueryDatasetDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccBigQueryRegionalDataset(datasetID1, "asia-south1"), @@ -140,9 +139,9 @@ func TestAccBigQueryDataset_cmek(t *testing.T) { kms := BootstrapKMSKeyInLocation(t, "us") pid := getTestProjectFromEnv() - datasetID1 := fmt.Sprintf("tf_test_%s", acctest.RandString(10)) + datasetID1 := fmt.Sprintf("tf_test_%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -158,10 +157,10 @@ func TestAccBigQueryDataset_cmek(t *testing.T) { }) } -func testAccAddTable(datasetID string, tableID string) resource.TestCheckFunc { +func testAccAddTable(t *testing.T, datasetID string, tableID string) resource.TestCheckFunc { // Not actually a check, but adds a table independently of terraform return func(s *terraform.State) error { - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) table := &bigquery.Table{ TableReference: &bigquery.TableReference{ DatasetId: datasetID, diff --git a/third_party/terraform/tests/resource_bigquery_connection_test.go.erb b/third_party/terraform/tests/resource_bigquery_connection_test.go.erb new file mode 100644 index 000000000000..5e05c35bc5b4 --- /dev/null +++ b/third_party/terraform/tests/resource_bigquery_connection_test.go.erb @@ -0,0 +1,132 @@ +<% autogen_exception -%> +package google +<% unless version == 'ga' -%> + +import ( + "testing" + + "github.com/hashicorp/terraform-plugin-sdk/helper/resource" +) + +func TestAccBigqueryConnectionConnection_bigqueryConnectionBasic(t *testing.T) { + t.Parallel() + + context := map[string]interface{}{ + "random_suffix": randString(t, 10), + } + + vcrTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProvidersOiCS, + CheckDestroy: testAccCheckBigqueryConnectionConnectionDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccBigqueryConnectionConnection_bigqueryConnectionBasic(context), + }, + { + Config: testAccBigqueryConnectionConnection_bigqueryConnectionBasicUpdate(context), + }, + }, + }) +} + +func testAccBigqueryConnectionConnection_bigqueryConnectionBasic(context map[string]interface{}) string { + return Nprintf(` +resource "google_sql_database_instance" "instance" { + 
provider = google-beta
+  name             = "tf-test-pg-database-instance%{random_suffix}"
+  database_version = "POSTGRES_11"
+  region           = "us-central1"
+  settings {
+    tier = "db-f1-micro"
+  }
+}
+
+resource "google_sql_database" "db" {
+  provider = google-beta
+  instance = google_sql_database_instance.instance.name
+  name     = "db"
+}
+
+resource "random_password" "pwd" {
+  length  = 16
+  special = false
+}
+
+resource "google_sql_user" "user" {
+  provider = google-beta
+  name     = "username"
+  instance = google_sql_database_instance.instance.name
+  password = random_password.pwd.result
+}
+
+resource "google_bigquery_connection" "connection" {
+  provider      = google-beta
+  connection_id = "tf-test-my-connection%{random_suffix}"
+  location      = "US"
+  friendly_name = "👋"
+  description   = "a riveting description"
+  cloud_sql {
+    instance_id = google_sql_database_instance.instance.connection_name
+    database    = google_sql_database.db.name
+    type        = "POSTGRES"
+    credential {
+      username = google_sql_user.user.name
+      password = google_sql_user.user.password
+    }
+  }
+}
+`, context)
+}
+
+func testAccBigqueryConnectionConnection_bigqueryConnectionBasicUpdate(context map[string]interface{}) string {
+	return Nprintf(`
+resource "google_sql_database_instance" "instance" {
+  provider         = google-beta
+  name             = "tf-test-mysql-database-instance%{random_suffix}"
+  database_version = "MYSQL_5_6"
+  region           = "us-central1"
+  settings {
+    tier = "db-f1-micro"
+  }
+}
+
+resource "google_sql_database" "db" {
+  provider = google-beta
+  instance = google_sql_database_instance.instance.name
+  name     = "db2"
+}
+
+resource "random_password" "pwd" {
+  length  = 16
+  special = false
+}
+
+resource "google_sql_user" "user" {
+  provider = google-beta
+  name     = "username"
+  instance = google_sql_database_instance.instance.name
+  password = random_password.pwd.result
+}
+
+resource "google_bigquery_connection" "connection" {
+  provider      = google-beta
+  connection_id = "tf-test-my-connection%{random_suffix}"
+  location      = "US"
+  friendly_name = "👋👋"
+  description   = "a very riveting description"
+  cloud_sql {
+    instance_id = google_sql_database_instance.instance.connection_name
+    database    = google_sql_database.db.name
+    type        = "MYSQL"
+    credential {
+      username = google_sql_user.user.name
+      password = google_sql_user.user.password
+    }
+  }
+}
+`, context)
+}
+<% else %>
+// Magic Modules doesn't let us remove files - blank out beta-only common-compile files for now.
+<% end -%>
diff --git a/third_party/terraform/tests/resource_bigquery_data_transfer_config_test.go b/third_party/terraform/tests/resource_bigquery_data_transfer_config_test.go
index ebaa3eb9f482..273d1ce25be3 100644
--- a/third_party/terraform/tests/resource_bigquery_data_transfer_config_test.go
+++ b/third_party/terraform/tests/resource_bigquery_data_transfer_config_test.go
@@ -5,7 +5,6 @@ import (
 	"strings"
 	"testing"
 
-	"github.com/hashicorp/terraform-plugin-sdk/helper/acctest"
 	"github.com/hashicorp/terraform-plugin-sdk/helper/resource"
 	"github.com/hashicorp/terraform-plugin-sdk/terraform"
 )
@@ -14,9 +13,10 @@ import (
 // but it will get deleted by parallel tests, so they need to be run serially.
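 //
 // (For reference, a minimal sketch of the serialization pattern, assuming the
 // loop body elided from this hunk hands each case to t.Run and the subtests
 // never call t.Parallel():
 //
 //     for name, tc := range testCases {
 //         tc := tc // re-scope the loop variable before the closure captures it
 //         t.Run(name, func(t *testing.T) { tc(t) })
 //     }
 // )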
func TestAccBigqueryDataTransferConfig(t *testing.T) { testCases := map[string]func(t *testing.T){ - "basic": testAccBigqueryDataTransferConfig_scheduledQuery_basic, - "update": testAccBigqueryDataTransferConfig_scheduledQuery_update, - "booleanParam": testAccBigqueryDataTransferConfig_copy_booleanParam, + "basic": testAccBigqueryDataTransferConfig_scheduledQuery_basic, + "update": testAccBigqueryDataTransferConfig_scheduledQuery_update, + "service_account": testAccBigqueryDataTransferConfig_scheduledQuery_with_service_account, + "booleanParam": testAccBigqueryDataTransferConfig_copy_booleanParam, } for name, tc := range testCases { @@ -32,12 +32,12 @@ func TestAccBigqueryDataTransferConfig(t *testing.T) { } func testAccBigqueryDataTransferConfig_scheduledQuery_basic(t *testing.T) { - random_suffix := acctest.RandString(10) + random_suffix := randString(t, 10) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckBigqueryDataTransferConfigDestroy, + CheckDestroy: testAccCheckBigqueryDataTransferConfigDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccBigqueryDataTransferConfig_scheduledQuery(random_suffix, "third", "y"), @@ -53,12 +53,12 @@ func testAccBigqueryDataTransferConfig_scheduledQuery_basic(t *testing.T) { } func testAccBigqueryDataTransferConfig_scheduledQuery_update(t *testing.T) { - random_suffix := acctest.RandString(10) + random_suffix := randString(t, 10) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckBigqueryDataTransferConfigDestroy, + CheckDestroy: testAccCheckBigqueryDataTransferConfigDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccBigqueryDataTransferConfig_scheduledQuery(random_suffix, "first", "y"), @@ -76,13 +76,34 @@ func testAccBigqueryDataTransferConfig_scheduledQuery_update(t *testing.T) { }) } +func testAccBigqueryDataTransferConfig_scheduledQuery_with_service_account(t *testing.T) { + random_suffix := randString(t, 10) + + vcrTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckBigqueryDataTransferConfigDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccBigqueryDataTransferConfig_scheduledQuery_service_account(random_suffix), + }, + { + ResourceName: "google_bigquery_data_transfer_config.query_config", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"location", "service_account_name"}, + }, + }, + }) +} + func testAccBigqueryDataTransferConfig_copy_booleanParam(t *testing.T) { - random_suffix := acctest.RandString(10) + random_suffix := randString(t, 10) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckBigqueryDataTransferConfigDestroy, + CheckDestroy: testAccCheckBigqueryDataTransferConfigDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccBigqueryDataTransferConfig_booleanParam(random_suffix), @@ -97,29 +118,31 @@ func testAccBigqueryDataTransferConfig_copy_booleanParam(t *testing.T) { }) } -func testAccCheckBigqueryDataTransferConfigDestroy(s *terraform.State) error { - for name, rs := range s.RootModule().Resources { - if rs.Type != "google_bigquery_data_transfer_config" { - continue - } - if strings.HasPrefix(name, "data.") { - continue 
- } +func testAccCheckBigqueryDataTransferConfigDestroyProducer(t *testing.T) func(s *terraform.State) error { + return func(s *terraform.State) error { + for name, rs := range s.RootModule().Resources { + if rs.Type != "google_bigquery_data_transfer_config" { + continue + } + if strings.HasPrefix(name, "data.") { + continue + } - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) - url, err := replaceVarsForTest(config, rs, "{{BigqueryDataTransferBasePath}}{{name}}") - if err != nil { - return err - } + url, err := replaceVarsForTest(config, rs, "{{BigqueryDataTransferBasePath}}{{name}}") + if err != nil { + return err + } - _, err = sendRequest(config, "GET", "", url, nil) - if err == nil { - return fmt.Errorf("BigqueryDataTransferConfig still exists at %s", url) + _, err = sendRequest(config, "GET", "", url, nil) + if err == nil { + return fmt.Errorf("BigqueryDataTransferConfig still exists at %s", url) + } } - } - return nil + return nil + } } func testAccBigqueryDataTransferConfig_scheduledQuery(random_suffix, schedule, letter string) string { @@ -150,7 +173,7 @@ resource "google_bigquery_data_transfer_config" "query_config" { schedule = "%s sunday of quarter 00:00" destination_dataset_id = google_bigquery_dataset.my_dataset.dataset_id params = { - destination_table_name_template = "my-table" + destination_table_name_template = "my_table" write_disposition = "WRITE_APPEND" query = "SELECT name FROM tabl WHERE x = '%s'" } @@ -158,6 +181,44 @@ resource "google_bigquery_data_transfer_config" "query_config" { `, random_suffix, random_suffix, schedule, letter) } +func testAccBigqueryDataTransferConfig_scheduledQuery_service_account(random_suffix string) string { + return fmt.Sprintf(` +data "google_project" "project" {} + +resource "google_service_account" "bqwriter" { + account_id = "bqwriter%s" +} + +resource "google_project_iam_member" "data_editor" { + role = "roles/bigquery.dataEditor" + member = "serviceAccount:${google_service_account.bqwriter.email}" +} + +resource "google_bigquery_dataset" "my_dataset" { + dataset_id = "my_dataset%s" + friendly_name = "foo" + description = "bar" + location = "asia-northeast1" +} + +resource "google_bigquery_data_transfer_config" "query_config" { + depends_on = [google_project_iam_member.data_editor] + + display_name = "my-query-%s" + location = "asia-northeast1" + data_source_id = "scheduled_query" + schedule = "every day 00:00" + destination_dataset_id = google_bigquery_dataset.my_dataset.dataset_id + service_account_name = google_service_account.bqwriter.email + params = { + destination_table_name_template = "my_table" + write_disposition = "WRITE_APPEND" + query = "SELECT 1 AS a" + } +} +`, random_suffix, random_suffix, random_suffix) +} + func testAccBigqueryDataTransferConfig_booleanParam(random_suffix string) string { return fmt.Sprintf(` data "google_project" "project" {} diff --git a/third_party/terraform/tests/resource_bigquery_dataset_access_test.go b/third_party/terraform/tests/resource_bigquery_dataset_access_test.go index d631739f739c..5e1aea640d30 100644 --- a/third_party/terraform/tests/resource_bigquery_dataset_access_test.go +++ b/third_party/terraform/tests/resource_bigquery_dataset_access_test.go @@ -5,7 +5,6 @@ import ( "reflect" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" ) @@ -13,26 +12,26 @@ import ( func TestAccBigQueryDatasetAccess_basic(t *testing.T) { 
t.Parallel() - datasetID := fmt.Sprintf("tf_test_%s", acctest.RandString(10)) - saID := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + datasetID := fmt.Sprintf("tf_test_%s", randString(t, 10)) + saID := fmt.Sprintf("tf-test-%s", randString(t, 10)) expected := map[string]interface{}{ "role": "OWNER", "userByEmail": fmt.Sprintf("%s@%s.iam.gserviceaccount.com", saID, getTestProjectFromEnv()), } - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ { Config: testAccBigQueryDatasetAccess_basic(datasetID, saID), - Check: testAccCheckBigQueryDatasetAccessPresent("google_bigquery_dataset.dataset", expected), + Check: testAccCheckBigQueryDatasetAccessPresent(t, "google_bigquery_dataset.dataset", expected), }, { // Destroy step instead of CheckDestroy so we can check the access is removed without deleting the dataset Config: testAccBigQueryDatasetAccess_destroy(datasetID, "dataset"), - Check: testAccCheckBigQueryDatasetAccessAbsent("google_bigquery_dataset.dataset", expected), + Check: testAccCheckBigQueryDatasetAccessAbsent(t, "google_bigquery_dataset.dataset", expected), }, }, }) @@ -41,9 +40,9 @@ func TestAccBigQueryDatasetAccess_basic(t *testing.T) { func TestAccBigQueryDatasetAccess_view(t *testing.T) { t.Parallel() - datasetID := fmt.Sprintf("tf_test_%s", acctest.RandString(10)) - datasetID2 := fmt.Sprintf("tf_test_%s", acctest.RandString(10)) - tableID := fmt.Sprintf("tf_test_%s", acctest.RandString(10)) + datasetID := fmt.Sprintf("tf_test_%s", randString(t, 10)) + datasetID2 := fmt.Sprintf("tf_test_%s", randString(t, 10)) + tableID := fmt.Sprintf("tf_test_%s", randString(t, 10)) expected := map[string]interface{}{ "view": map[string]interface{}{ @@ -53,26 +52,28 @@ func TestAccBigQueryDatasetAccess_view(t *testing.T) { }, } - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ { Config: testAccBigQueryDatasetAccess_view(datasetID, datasetID2, tableID), - Check: testAccCheckBigQueryDatasetAccessPresent("google_bigquery_dataset.private", expected), + Check: testAccCheckBigQueryDatasetAccessPresent(t, "google_bigquery_dataset.private", expected), }, { Config: testAccBigQueryDatasetAccess_destroy(datasetID, "private"), - Check: testAccCheckBigQueryDatasetAccessAbsent("google_bigquery_dataset.private", expected), + Check: testAccCheckBigQueryDatasetAccessAbsent(t, "google_bigquery_dataset.private", expected), }, }, }) } func TestAccBigQueryDatasetAccess_multiple(t *testing.T) { + // Multiple fine-grained resources + skipIfVcr(t) t.Parallel() - datasetID := fmt.Sprintf("tf_test_%s", acctest.RandString(10)) + datasetID := fmt.Sprintf("tf_test_%s", randString(t, 10)) expected1 := map[string]interface{}{ "role": "WRITER", @@ -84,45 +85,88 @@ func TestAccBigQueryDatasetAccess_multiple(t *testing.T) { "specialGroup": "projectWriters", } - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ { Config: testAccBigQueryDatasetAccess_multiple(datasetID), Check: resource.ComposeTestCheckFunc( - testAccCheckBigQueryDatasetAccessPresent("google_bigquery_dataset.dataset", expected1), - testAccCheckBigQueryDatasetAccessPresent("google_bigquery_dataset.dataset", expected2), + testAccCheckBigQueryDatasetAccessPresent(t, "google_bigquery_dataset.dataset", expected1), + 
testAccCheckBigQueryDatasetAccessPresent(t, "google_bigquery_dataset.dataset", expected2), ), }, { // Destroy step instead of CheckDestroy so we can check the access is removed without deleting the dataset Config: testAccBigQueryDatasetAccess_destroy(datasetID, "dataset"), Check: resource.ComposeTestCheckFunc( - testAccCheckBigQueryDatasetAccessAbsent("google_bigquery_dataset.dataset", expected1), - testAccCheckBigQueryDatasetAccessAbsent("google_bigquery_dataset.dataset", expected2), + testAccCheckBigQueryDatasetAccessAbsent(t, "google_bigquery_dataset.dataset", expected1), + testAccCheckBigQueryDatasetAccessAbsent(t, "google_bigquery_dataset.dataset", expected2), ), }, }, }) } -func testAccCheckBigQueryDatasetAccessPresent(n string, expected map[string]interface{}) resource.TestCheckFunc { - return testAccCheckBigQueryDatasetAccess(n, expected, true) +func TestAccBigQueryDatasetAccess_predefinedRole(t *testing.T) { + t.Parallel() + + datasetID := fmt.Sprintf("tf_test_%s", randString(t, 10)) + + expected1 := map[string]interface{}{ + "role": "WRITER", + "domain": "google.com", + } + + expected2 := map[string]interface{}{ + "role": "READER", + "domain": "google.com", + } + + vcrTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + { + Config: testAccBigQueryDatasetAccess_predefinedRole("roles/bigquery.dataEditor", datasetID), + Check: resource.ComposeTestCheckFunc( + testAccCheckBigQueryDatasetAccessPresent(t, "google_bigquery_dataset.dataset", expected1), + ), + }, + { + // Update role + Config: testAccBigQueryDatasetAccess_predefinedRole("roles/bigquery.dataViewer", datasetID), + Check: resource.ComposeTestCheckFunc( + testAccCheckBigQueryDatasetAccessPresent(t, "google_bigquery_dataset.dataset", expected2), + ), + }, + { + // Destroy step instead of CheckDestroy so we can check the access is removed without deleting the dataset + Config: testAccBigQueryDatasetAccess_destroy(datasetID, "dataset"), + Check: resource.ComposeTestCheckFunc( + testAccCheckBigQueryDatasetAccessAbsent(t, "google_bigquery_dataset.dataset", expected1), + ), + }, + }, + }) +} + +func testAccCheckBigQueryDatasetAccessPresent(t *testing.T, n string, expected map[string]interface{}) resource.TestCheckFunc { + return testAccCheckBigQueryDatasetAccess(t, n, expected, true) } -func testAccCheckBigQueryDatasetAccessAbsent(n string, expected map[string]interface{}) resource.TestCheckFunc { - return testAccCheckBigQueryDatasetAccess(n, expected, false) +func testAccCheckBigQueryDatasetAccessAbsent(t *testing.T, n string, expected map[string]interface{}) resource.TestCheckFunc { + return testAccCheckBigQueryDatasetAccess(t, n, expected, false) } -func testAccCheckBigQueryDatasetAccess(n string, expected map[string]interface{}, expectPresent bool) resource.TestCheckFunc { +func testAccCheckBigQueryDatasetAccess(t *testing.T, n string, expected map[string]interface{}, expectPresent bool) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] if !ok { return fmt.Errorf("Not found: %s", n) } - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) url, err := replaceVarsForTest(config, rs, "{{BigQueryBasePath}}projects/{{project}}/datasets/{{dataset_id}}") if err != nil { return err @@ -225,3 +269,17 @@ resource "google_bigquery_dataset" "dataset" { } `, datasetID) } + +func testAccBigQueryDatasetAccess_predefinedRole(role, datasetID string) string { + return fmt.Sprintf(` 
+resource "google_bigquery_dataset_access" "access" { + dataset_id = google_bigquery_dataset.dataset.dataset_id + role = "%s" + domain = "google.com" +} + +resource "google_bigquery_dataset" "dataset" { + dataset_id = "%s" +} +`, role, datasetID) +} diff --git a/third_party/terraform/tests/resource_bigquery_dataset_iam_member_test.go b/third_party/terraform/tests/resource_bigquery_dataset_iam_member_test.go new file mode 100644 index 000000000000..5b4d6a4bdad2 --- /dev/null +++ b/third_party/terraform/tests/resource_bigquery_dataset_iam_member_test.go @@ -0,0 +1,62 @@ +package google + +import ( + "fmt" + "testing" + + "github.com/hashicorp/terraform-plugin-sdk/helper/resource" +) + +func TestAccBigqueryDatasetIamMember_basic(t *testing.T) { + t.Parallel() + + datasetID := fmt.Sprintf("tf_test_%s", randString(t, 10)) + saID := fmt.Sprintf("tf-test-%s", randString(t, 10)) + + expected := map[string]interface{}{ + "role": "roles/viewer", + "userByEmail": fmt.Sprintf("%s@%s.iam.gserviceaccount.com", saID, getTestProjectFromEnv()), + } + + vcrTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + { + Config: testAccBigqueryDatasetIamMember_basic(datasetID, saID), + Check: testAccCheckBigQueryDatasetAccessPresent(t, "google_bigquery_dataset.dataset", expected), + }, + { + // Destroy step instead of CheckDestroy so we can check the access is removed without deleting the dataset + Config: testAccBigqueryDatasetIamMember_destroy(datasetID, "dataset"), + Check: testAccCheckBigQueryDatasetAccessAbsent(t, "google_bigquery_dataset.dataset", expected), + }, + }, + }) +} + +func testAccBigqueryDatasetIamMember_destroy(datasetID, rs string) string { + return fmt.Sprintf(` +resource "google_bigquery_dataset" "%s" { + dataset_id = "%s" +} +`, rs, datasetID) +} + +func testAccBigqueryDatasetIamMember_basic(datasetID, saID string) string { + return fmt.Sprintf(` +resource "google_bigquery_dataset_iam_member" "access" { + dataset_id = google_bigquery_dataset.dataset.dataset_id + role = "roles/viewer" + member = "serviceAccount:${google_service_account.bqviewer.email}" +} + +resource "google_bigquery_dataset" "dataset" { + dataset_id = "%s" +} + +resource "google_service_account" "bqviewer" { + account_id = "%s" +} +`, datasetID, saID) +} diff --git a/third_party/terraform/tests/resource_bigquery_dataset_iam_test.go b/third_party/terraform/tests/resource_bigquery_dataset_iam_test.go new file mode 100644 index 000000000000..3e41cbcbc72d --- /dev/null +++ b/third_party/terraform/tests/resource_bigquery_dataset_iam_test.go @@ -0,0 +1,202 @@ +package google + +import ( + "fmt" + "testing" + + "github.com/hashicorp/terraform-plugin-sdk/helper/resource" +) + +func TestAccBigqueryDatasetIamBinding(t *testing.T) { + t.Parallel() + + dataset := "tf_test_dataset_iam_" + randString(t, 10) + account := "tf-test-bq-iam-" + randString(t, 10) + role := "roles/bigquery.dataViewer" + + importId := fmt.Sprintf("projects/%s/datasets/%s %s", + getTestProjectFromEnv(), dataset, role) + + vcrTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + { + // Test IAM Binding creation + Config: testAccBigqueryDatasetIamBinding_basic(dataset, account, role), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr( + "google_bigquery_dataset_iam_binding.binding", "role", role), + ), + }, + { + ResourceName: "google_bigquery_dataset_iam_binding.binding", + ImportStateId: 
importId,
+        ImportState:       true,
+        ImportStateVerify: true,
+      },
+      {
+        // Test IAM Binding update
+        Config: testAccBigqueryDatasetIamBinding_update(dataset, account, role),
+      },
+      {
+        ResourceName:      "google_bigquery_dataset_iam_binding.binding",
+        ImportStateId:     importId,
+        ImportState:       true,
+        ImportStateVerify: true,
+      },
+    },
+  })
+}
+
+func TestAccBigqueryDatasetIamMember(t *testing.T) {
+  t.Parallel()
+
+  dataset := "tf_test_dataset_iam_" + randString(t, 10)
+  account := "tf-test-bq-iam-" + randString(t, 10)
+  role := "roles/editor"
+
+  importId := fmt.Sprintf("projects/%s/datasets/%s %s serviceAccount:%s",
+    getTestProjectFromEnv(),
+    dataset,
+    role,
+    serviceAccountCanonicalEmail(account))
+
+  vcrTest(t, resource.TestCase{
+    PreCheck:  func() { testAccPreCheck(t) },
+    Providers: testAccProviders,
+    Steps: []resource.TestStep{
+      {
+        // Test IAM Member creation
+        Config: testAccBigqueryDatasetIamMember(dataset, account, role),
+        Check: resource.ComposeTestCheckFunc(
+          resource.TestCheckResourceAttr(
+            "google_bigquery_dataset_iam_member.member", "role", role),
+          resource.TestCheckResourceAttr(
+            "google_bigquery_dataset_iam_member.member", "member", "serviceAccount:"+serviceAccountCanonicalEmail(account)),
+        ),
+      },
+      {
+        ResourceName:      "google_bigquery_dataset_iam_member.member",
+        ImportStateId:     importId,
+        ImportState:       true,
+        ImportStateVerify: true,
+      },
+    },
+  })
+}
+
+func TestAccBigqueryDatasetIamPolicy(t *testing.T) {
+  t.Parallel()
+
+  dataset := "tf_test_dataset_iam_" + randString(t, 10)
+  account := "tf-test-bq-iam-" + randString(t, 10)
+  role := "roles/bigquery.dataOwner"
+
+  importId := fmt.Sprintf("projects/%s/datasets/%s",
+    getTestProjectFromEnv(), dataset)
+
+  vcrTest(t, resource.TestCase{
+    PreCheck:  func() { testAccPreCheck(t) },
+    Providers: testAccProviders,
+    Steps: []resource.TestStep{
+      {
+        // Test IAM Policy creation
+        Config: testAccBigqueryDatasetIamPolicy(dataset, account, role),
+      },
+      {
+        ResourceName:      "google_bigquery_dataset_iam_policy.policy",
+        ImportStateId:     importId,
+        ImportState:       true,
+        ImportStateVerify: true,
+      },
+    },
+  })
+}
+
+func testAccBigqueryDatasetIamBinding_basic(dataset, account, role string) string {
+  return fmt.Sprintf(testBigqueryDatasetIam+`
+resource "google_service_account" "test-account1" {
+  account_id   = "%s-1"
+  display_name = "Bigquery Dataset IAM Testing Account"
+}
+
+resource "google_service_account" "test-account2" {
+  account_id   = "%s-2"
+  display_name = "Bigquery Dataset IAM Testing Account"
+}
+
+resource "google_bigquery_dataset_iam_binding" "binding" {
+  dataset_id = google_bigquery_dataset.dataset.dataset_id
+  role       = "%s"
+  members = [
+    "serviceAccount:${google_service_account.test-account1.email}",
+  ]
+}
+`, dataset, account, account, role)
+}
+
+func testAccBigqueryDatasetIamBinding_update(dataset, account, role string) string {
+  return fmt.Sprintf(testBigqueryDatasetIam+`
+resource "google_service_account" "test-account1" {
+  account_id   = "%s-1"
+  display_name = "Bigquery Dataset IAM Testing Account"
+}
+
+resource "google_service_account" "test-account2" {
+  account_id   = "%s-2"
+  display_name = "Bigquery Dataset IAM Testing Account"
+}
+
+resource "google_bigquery_dataset_iam_binding" "binding" {
+  dataset_id = google_bigquery_dataset.dataset.dataset_id
+  role       = "%s"
+  members = [
+    "serviceAccount:${google_service_account.test-account1.email}",
+    "serviceAccount:${google_service_account.test-account2.email}",
+  ]
+}
+`, dataset, account, account, role)
+}
+
+func testAccBigqueryDatasetIamMember(dataset,
account, role string) string { + return fmt.Sprintf(testBigqueryDatasetIam+` +resource "google_service_account" "test-account" { + account_id = "%s" + display_name = "Bigquery Dataset IAM Testing Account" +} + +resource "google_bigquery_dataset_iam_member" "member" { + dataset_id = google_bigquery_dataset.dataset.dataset_id + role = "%s" + member = "serviceAccount:${google_service_account.test-account.email}" +} +`, dataset, account, role) +} + +func testAccBigqueryDatasetIamPolicy(dataset, account, role string) string { + return fmt.Sprintf(testBigqueryDatasetIam+` +resource "google_service_account" "test-account" { + account_id = "%s" + display_name = "Bigquery Dataset IAM Testing Account" +} + +data "google_iam_policy" "policy" { + binding { + role = "%s" + members = ["serviceAccount:${google_service_account.test-account.email}"] + } +} + +resource "google_bigquery_dataset_iam_policy" "policy" { + dataset_id = google_bigquery_dataset.dataset.dataset_id + policy_data = data.google_iam_policy.policy.policy_data +} +`, dataset, account, role) +} + +var testBigqueryDatasetIam = ` +resource "google_bigquery_dataset" "dataset" { + dataset_id = "%s" +} +` diff --git a/third_party/terraform/tests/resource_bigquery_reservation_test.go.erb b/third_party/terraform/tests/resource_bigquery_reservation_test.go.erb index 74fcbc4c4b86..61bb078d077d 100644 --- a/third_party/terraform/tests/resource_bigquery_reservation_test.go.erb +++ b/third_party/terraform/tests/resource_bigquery_reservation_test.go.erb @@ -7,24 +7,23 @@ import ( "strings" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" ) -func TestAccBigqueryReservationReservation_bigqueryReservationUpdate(t *testing.T) { +func TestAccBigqueryReservationReservation_bigqueryReservation(t *testing.T) { t.Parallel() location := "asia-northeast1" context := map[string]interface{}{ - "random_suffix": acctest.RandString(10), + "random_suffix": randString(t, 10), "location": location, } - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckBigqueryReservationReservationDestroy, + CheckDestroy: testAccCheckBigqueryReservationReservationDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccBigqueryReservationReservation_bigqueryReservationBasic(context), @@ -34,33 +33,12 @@ func TestAccBigqueryReservationReservation_bigqueryReservationUpdate(t *testing. 
ImportState: true, ImportStateVerify: true, }, - { - Config: testAccBigqueryReservationReservation_bigqueryReservationUpdate(context), - }, - { - ResourceName: "google_bigquery_reservation.reservation", - ImportState: true, - ImportStateVerify: true, - }, }, }) } func testAccBigqueryReservationReservation_bigqueryReservationBasic(context map[string]interface{}) string { return Nprintf(` -resource "google_bigquery_reservation" "reservation" { - name = "reservation%{random_suffix}" - location = "%{location}" - // Set to 0 for testing purposes - // In reality this would be larger than zero - slot_capacity = 0 - ignore_idle_slots = true -} -`, context) -} - -func testAccBigqueryReservationReservation_bigqueryReservationUpdate(context map[string]interface{}) string { - return Nprintf(` resource "google_bigquery_reservation" "reservation" { name = "reservation%{random_suffix}" location = "%{location}" diff --git a/third_party/terraform/tests/resource_bigquery_table_test.go.erb b/third_party/terraform/tests/resource_bigquery_table_test.go similarity index 50% rename from third_party/terraform/tests/resource_bigquery_table_test.go.erb rename to third_party/terraform/tests/resource_bigquery_table_test.go index c2580a85d285..28777be94af0 100644 --- a/third_party/terraform/tests/resource_bigquery_table_test.go.erb +++ b/third_party/terraform/tests/resource_bigquery_table_test.go @@ -1,11 +1,9 @@ -<% autogen_exception -%> package google import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" ) @@ -13,16 +11,16 @@ import ( func TestAccBigQueryTable_Basic(t *testing.T) { t.Parallel() - datasetID := fmt.Sprintf("tf_test_%s", acctest.RandString(10)) - tableID := fmt.Sprintf("tf_test_%s", acctest.RandString(10)) + datasetID := fmt.Sprintf("tf_test_%s", randString(t, 10)) + tableID := fmt.Sprintf("tf_test_%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckBigQueryTableDestroy, + CheckDestroy: testAccCheckBigQueryTableDestroyProducer(t), Steps: []resource.TestStep{ { - Config: testAccBigQueryTable(datasetID, tableID), + Config: testAccBigQueryTableDailyTimePartitioning(datasetID, tableID), }, { ResourceName: "google_bigquery_table.test", @@ -44,15 +42,15 @@ func TestAccBigQueryTable_Basic(t *testing.T) { func TestAccBigQueryTable_Kms(t *testing.T) { t.Parallel() resourceName := "google_bigquery_table.test" - datasetID := fmt.Sprintf("tf_test_%s", acctest.RandString(10)) - tableID := fmt.Sprintf("tf_test_%s", acctest.RandString(10)) + datasetID := fmt.Sprintf("tf_test_%s", randString(t, 10)) + tableID := fmt.Sprintf("tf_test_%s", randString(t, 10)) kms := BootstrapKMSKey(t) cryptoKeyName := kms.CryptoKey.Name - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckBigQueryTableDestroy, + CheckDestroy: testAccCheckBigQueryTableDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccBigQueryTableKms(cryptoKeyName, datasetID, tableID), @@ -66,17 +64,96 @@ func TestAccBigQueryTable_Kms(t *testing.T) { }) } -<% unless version == 'ga' -%> +func TestAccBigQueryTable_HourlyTimePartitioning(t *testing.T) { + t.Parallel() + + datasetID := fmt.Sprintf("tf_test_%s", randString(t, 10)) + tableID := fmt.Sprintf("tf_test_%s", 
randString(t, 10)) + + vcrTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckBigQueryTableDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccBigQueryTableHourlyTimePartitioning(datasetID, tableID), + }, + { + ResourceName: "google_bigquery_table.test", + ImportState: true, + ImportStateVerify: true, + }, + { + Config: testAccBigQueryTableUpdated(datasetID, tableID), + }, + { + ResourceName: "google_bigquery_table.test", + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccBigQueryTable_HivePartitioning(t *testing.T) { + t.Parallel() + bucketName := testBucketName(t) + resourceName := "google_bigquery_table.test" + datasetID := fmt.Sprintf("tf_test_%s", randString(t, 10)) + tableID := fmt.Sprintf("tf_test_%s", randString(t, 10)) + + vcrTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckBigQueryTableDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccBigQueryTableHivePartitioning(bucketName, datasetID, tableID), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccBigQueryTable_HivePartitioningCustomSchema(t *testing.T) { + t.Parallel() + bucketName := testBucketName(t) + resourceName := "google_bigquery_table.test" + datasetID := fmt.Sprintf("tf_test_%s", randString(t, 10)) + tableID := fmt.Sprintf("tf_test_%s", randString(t, 10)) + + vcrTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckBigQueryTableDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccBigQueryTableHivePartitioningCustomSchema(bucketName, datasetID, tableID), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"external_data_configuration.0.schema"}, + }, + }, + }) +} + func TestAccBigQueryTable_RangePartitioning(t *testing.T) { t.Parallel() resourceName := "google_bigquery_table.test" - datasetID := fmt.Sprintf("tf_test_%s", acctest.RandString(10)) - tableID := fmt.Sprintf("tf_test_%s", acctest.RandString(10)) + datasetID := fmt.Sprintf("tf_test_%s", randString(t, 10)) + tableID := fmt.Sprintf("tf_test_%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckBigQueryTableDestroy, + CheckDestroy: testAccCheckBigQueryTableDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccBigQueryTableRangePartitioning(datasetID, tableID), @@ -89,18 +166,17 @@ func TestAccBigQueryTable_RangePartitioning(t *testing.T) { }, }) } -<% end -%> func TestAccBigQueryTable_View(t *testing.T) { t.Parallel() - datasetID := fmt.Sprintf("tf_test_%s", acctest.RandString(10)) - tableID := fmt.Sprintf("tf_test_%s", acctest.RandString(10)) + datasetID := fmt.Sprintf("tf_test_%s", randString(t, 10)) + tableID := fmt.Sprintf("tf_test_%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckBigQueryTableDestroy, + CheckDestroy: testAccCheckBigQueryTableDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccBigQueryTableWithView(datasetID, tableID), @@ -117,13 +193,13 @@ func TestAccBigQueryTable_View(t *testing.T) { func 
TestAccBigQueryTable_updateView(t *testing.T) { t.Parallel() - datasetID := fmt.Sprintf("tf_test_%s", acctest.RandString(10)) - tableID := fmt.Sprintf("tf_test_%s", acctest.RandString(10)) + datasetID := fmt.Sprintf("tf_test_%s", randString(t, 10)) + tableID := fmt.Sprintf("tf_test_%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckBigQueryTableDestroy, + CheckDestroy: testAccCheckBigQueryTableDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccBigQueryTableWithView(datasetID, tableID), @@ -148,37 +224,61 @@ func TestAccBigQueryTable_updateView(t *testing.T) { func TestAccBigQueryExternalDataTable_CSV(t *testing.T) { t.Parallel() - bucketName := testBucketName() - objectName := fmt.Sprintf("tf_test_%s.csv", acctest.RandString(10)) + bucketName := testBucketName(t) + objectName := fmt.Sprintf("tf_test_%s.csv", randString(t, 10)) - datasetID := fmt.Sprintf("tf_test_%s", acctest.RandString(10)) - tableID := fmt.Sprintf("tf_test_%s", acctest.RandString(10)) + datasetID := fmt.Sprintf("tf_test_%s", randString(t, 10)) + tableID := fmt.Sprintf("tf_test_%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckBigQueryTableDestroy, + CheckDestroy: testAccCheckBigQueryTableDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccBigQueryTableFromGCS(datasetID, tableID, bucketName, objectName, TEST_CSV, "CSV", "\\\""), - Check: testAccCheckBigQueryExtData("\""), + Check: testAccCheckBigQueryExtData(t, "\""), }, { Config: testAccBigQueryTableFromGCS(datasetID, tableID, bucketName, objectName, TEST_CSV, "CSV", ""), - Check: testAccCheckBigQueryExtData(""), + Check: testAccCheckBigQueryExtData(t, ""), + }, + }, + }) +} + +func TestAccBigQueryDataTable_sheet(t *testing.T) { + t.Parallel() + + context := map[string]interface{}{ + "random_suffix": randString(t, 10), + } + + vcrTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckBigQueryTableDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccBigQueryTableFromSheet(context), + }, + { + ResourceName: "google_bigquery_table.table", + ImportState: true, + ImportStateVerify: true, }, }, }) } -func testAccCheckBigQueryExtData(expectedQuoteChar string) resource.TestCheckFunc { +func testAccCheckBigQueryExtData(t *testing.T, expectedQuoteChar string) resource.TestCheckFunc { return func(s *terraform.State) error { for _, rs := range s.RootModule().Resources { if rs.Type != "google_bigquery_table" { continue } - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) dataset := rs.Primary.Attributes["dataset_id"] table := rs.Primary.Attributes["table_id"] res, err := config.clientBigQuery.Tables.Get(config.Project, dataset, table).Do() @@ -199,23 +299,82 @@ func testAccCheckBigQueryExtData(expectedQuoteChar string) resource.TestCheckFun } } -func testAccCheckBigQueryTableDestroy(s *terraform.State) error { - for _, rs := range s.RootModule().Resources { - if rs.Type != "google_bigquery_table" { - continue +func testAccCheckBigQueryTableDestroyProducer(t *testing.T) func(s *terraform.State) error { + return func(s *terraform.State) error { + for _, rs := range s.RootModule().Resources { + if rs.Type != "google_bigquery_table" { + continue + } + + config := 
googleProviderConfig(t) + _, err := config.clientBigQuery.Tables.Get(config.Project, rs.Primary.Attributes["dataset_id"], rs.Primary.Attributes["table_id"]).Do() + if err == nil { + return fmt.Errorf("Table still present") + } } - config := testAccProvider.Meta().(*Config) - _, err := config.clientBigQuery.Tables.Get(config.Project, rs.Primary.Attributes["dataset_id"], rs.Primary.Attributes["table_id"]).Do() - if err == nil { - return fmt.Errorf("Table still present") + return nil + } +} + +func testAccBigQueryTableDailyTimePartitioning(datasetID, tableID string) string { + return fmt.Sprintf(` +resource "google_bigquery_dataset" "test" { + dataset_id = "%s" +} + +resource "google_bigquery_table" "test" { + table_id = "%s" + dataset_id = google_bigquery_dataset.test.dataset_id + + time_partitioning { + type = "DAY" + field = "ts" + require_partition_filter = true + } + clustering = ["some_int", "some_string"] + schema = < +func testAccBigQueryTableHivePartitioning(bucketName, datasetID, tableID string) string { + return fmt.Sprintf(` +resource "google_storage_bucket" "test" { + name = "%s" + force_destroy = true +} + +resource "google_storage_bucket_object" "test" { + name = "key1=20200330/init.csv" + content = ";" + bucket = google_storage_bucket.test.name +} + +resource "google_bigquery_dataset" "test" { + dataset_id = "%s" +} + +resource "google_bigquery_table" "test" { + table_id = "%s" + dataset_id = google_bigquery_dataset.test.dataset_id + + external_data_configuration { + source_format = "CSV" + autodetect = true + source_uris= ["gs://${google_storage_bucket.test.name}/*"] + + hive_partitioning_options { + mode = "AUTO" + source_uri_prefix = "gs://${google_storage_bucket.test.name}/" + } + + } + depends_on = ["google_storage_bucket_object.test"] +} +`, bucketName, datasetID, tableID) +} + +func testAccBigQueryTableHivePartitioningCustomSchema(bucketName, datasetID, tableID string) string { + return fmt.Sprintf(` +resource "google_storage_bucket" "test" { + name = "%s" + force_destroy = true +} + +resource "google_storage_bucket_object" "test" { + name = "key1=20200330/data.json" + content = "{\"name\":\"test\", \"last_modification\":\"2020-04-01\"}" + bucket = google_storage_bucket.test.name +} + +resource "google_bigquery_dataset" "test" { + dataset_id = "%s" +} + +resource "google_bigquery_table" "test" { + table_id = "%s" + dataset_id = google_bigquery_dataset.test.dataset_id + + external_data_configuration { + source_format = "NEWLINE_DELIMITED_JSON" + autodetect = false + source_uris= ["gs://${google_storage_bucket.test.name}/*"] + + hive_partitioning_options { + mode = "CUSTOM" + source_uri_prefix = "gs://${google_storage_bucket.test.name}/{key1:STRING}" + } + + schema = < func testAccBigQueryTableWithView(datasetID, tableID string) string { return fmt.Sprintf(` @@ -513,6 +756,58 @@ resource "google_bigquery_table" "test" { `, datasetID, bucketName, objectName, content, tableID, format, quoteChar) } +func testAccBigQueryTableFromSheet(context map[string]interface{}) string { + return Nprintf(` + resource "google_bigquery_table" "table" { + dataset_id = google_bigquery_dataset.dataset.dataset_id + table_id = "tf_test_sheet_%{random_suffix}" + + external_data_configuration { + autodetect = true + source_format = "GOOGLE_SHEETS" + ignore_unknown_values = true + + google_sheets_options { + skip_leading_rows = 1 + } + + source_uris = [ + "https://drive.google.com/open?id=xxxx", + ] + } + + schema = < -func testAccCheckBinaryAuthorizationPolicyDefault(pid string) 
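The `CheckDestroy` rewrites throughout this diff follow a single template as well: a package-level `func(*terraform.State) error` that reached for the shared `testAccProvider.Meta().(*Config)` becomes a producer that closes over `*testing.T` and asks `googleProviderConfig(t)` for the per-test client config. A sketch of the general shape, with `google_example`/`clientExample` as stand-ins for a real resource and API client:

func testAccCheckExampleDestroyProducer(t *testing.T) func(s *terraform.State) error {
    return func(s *terraform.State) error {
        config := googleProviderConfig(t) // per-test, VCR-aware config
        for _, rs := range s.RootModule().Resources {
            if rs.Type != "google_example" {
                continue
            }
            // If the GET still succeeds, destroy did not actually happen.
            if _, err := config.clientExample.Get(rs.Primary.ID).Do(); err == nil {
                return fmt.Errorf("example %q still present", rs.Primary.ID)
            }
        }
        return nil
    }
}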
resource.TestCheckFunc { +func testAccCheckBinaryAuthorizationPolicyDefault(t *testing.T, pid string) resource.TestCheckFunc { return func(s *terraform.State) error { - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) url := fmt.Sprintf("https://binaryauthorization.googleapis.com/v1beta1/projects/%s/policy", pid) pol, err := sendRequest(config, "GET", "", url, nil) if err != nil { diff --git a/third_party/terraform/tests/resource_cloud_identity_group_test.go.erb b/third_party/terraform/tests/resource_cloud_identity_group_test.go.erb new file mode 100644 index 000000000000..61622fa3f6b0 --- /dev/null +++ b/third_party/terraform/tests/resource_cloud_identity_group_test.go.erb @@ -0,0 +1,54 @@ +<% autogen_exception -%> +package google +<% unless version == 'ga' -%> + +import ( + "testing" + + "github.com/hashicorp/terraform-plugin-sdk/helper/resource" +) + +func TestAccCloudIdentityGroup_update(t *testing.T) { + t.Parallel() + + context := map[string]interface{}{ + "org_domain": getTestOrgDomainFromEnv(t), + "cust_id": getTestCustIdFromEnv(t), + "random_suffix": randString(t, 10), + } + + vcrTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProvidersOiCS, + CheckDestroy: testAccCheckCloudIdentityGroupDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccCloudIdentityGroup_cloudIdentityGroupsBasicExample(context), + }, + { + Config: testAccCloudIdentityGroup_update(context), + }, + }, + }) +} + +func testAccCloudIdentityGroup_update(context map[string]interface{}) string { + return Nprintf(` +resource "google_cloud_identity_group" "cloud_identity_group_basic" { + provider = google-beta + display_name = "tf-test-my-identity-group%{random_suffix}-update" + description = "my-description" + + parent = "customers/%{cust_id}" + + group_key { + id = "tf-test-my-identity-group%{random_suffix}@%{org_domain}" + } + + labels = { + "cloudidentity.googleapis.com/groups.discussion_forum" = "" + } +} +`, context) +} +<% end -%> diff --git a/third_party/terraform/tests/resource_cloud_iot_device_update_test.go b/third_party/terraform/tests/resource_cloud_iot_device_update_test.go new file mode 100644 index 000000000000..a01060e01502 --- /dev/null +++ b/third_party/terraform/tests/resource_cloud_iot_device_update_test.go @@ -0,0 +1,103 @@ +package google + +import ( + "fmt" + "testing" + + "github.com/hashicorp/terraform-plugin-sdk/helper/resource" +) + +func TestAccCloudIoTDevice_update(t *testing.T) { + t.Parallel() + + registryName := fmt.Sprintf("psregistry-test-%s", randString(t, 10)) + deviceName := fmt.Sprintf("psdevice-test-%s", randString(t, 10)) + resourceName := fmt.Sprintf("google_cloudiot_device.%s", deviceName) + + vcrTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckCloudIotDeviceDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccCloudIoTDeviceBasic(deviceName, registryName), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + { + Config: testAccCloudIoTDeviceExtended(deviceName, registryName), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + { + Config: testAccCloudIoTDeviceBasic(deviceName, registryName), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func testAccCloudIoTDeviceBasic(deviceName string, registryName string) string { + return fmt.Sprintf(` + +resource 
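`TestAccBigQueryDataTable_sheet` and the Cloud Identity test above build their configs the other way this diff does it: a `map[string]interface{}` context handed to `Nprintf`, which, judging from these call sites, substitutes `%{key}` placeholders from the map. A sketch with a placeholder resource:

func testAccExample_withContext(context map[string]interface{}) string {
    return Nprintf(`
resource "google_example" "example" {
  name = "tf-test-example%{random_suffix}"
}
`, context)
}

// In the test body:
//   context := map[string]interface{}{"random_suffix": randString(t, 10)}
//   ... Config: testAccExample_withContext(context) ...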
"google_cloudiot_registry" "%s" { + name = "%s" +} + +resource "google_cloudiot_device" "%s" { + name = "%s" + registry = google_cloudiot_registry.%s.id + + gateway_config { + gateway_auth_method = "DEVICE_AUTH_TOKEN_ONLY" + gateway_type = "GATEWAY" + } +} + + +`, registryName, registryName, deviceName, deviceName, registryName) +} + +func testAccCloudIoTDeviceExtended(deviceName string, registryName string) string { + return fmt.Sprintf(` + +resource "google_cloudiot_registry" "%s" { + name = "%s" +} + +resource "google_cloudiot_device" "%s" { + name = "%s" + registry = google_cloudiot_registry.%s.id + + credentials { + public_key { + format = "RSA_PEM" + key = file("test-fixtures/rsa_public.pem") + } + } + + blocked = false + + log_level = "INFO" + + metadata = { + test_key_1 = "test_value_1" + } + + gateway_config { + gateway_auth_method = "ASSOCIATION_AND_DEVICE_AUTH_TOKEN" + gateway_type = "GATEWAY" + } +} +`, registryName, registryName, deviceName, deviceName, registryName) +} diff --git a/third_party/terraform/tests/resource_cloud_run_service_test.go b/third_party/terraform/tests/resource_cloud_run_service_test.go index 08abb9ed60d0..58835313aeb8 100644 --- a/third_party/terraform/tests/resource_cloud_run_service_test.go +++ b/third_party/terraform/tests/resource_cloud_run_service_test.go @@ -4,7 +4,6 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" ) @@ -12,14 +11,14 @@ func TestAccCloudRunService_cloudRunServiceUpdate(t *testing.T) { t.Parallel() project := getTestProjectFromEnv() - name := "tftest-cloudrun-" + acctest.RandString(6) + name := "tftest-cloudrun-" + randString(t, 6) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ { - Config: testAccCloudRunService_cloudRunServiceUpdate(name, project, "10"), + Config: testAccCloudRunService_cloudRunServiceUpdate(name, project, "10", "600"), }, { ResourceName: "google_cloud_run_service.default", @@ -28,7 +27,7 @@ func TestAccCloudRunService_cloudRunServiceUpdate(t *testing.T) { ImportStateVerifyIgnore: []string{"metadata.0.resource_version", "status.0.conditions"}, }, { - Config: testAccCloudRunService_cloudRunServiceUpdate(name, project, "50"), + Config: testAccCloudRunService_cloudRunServiceUpdate(name, project, "50", "300"), }, { ResourceName: "google_cloud_run_service.default", @@ -40,7 +39,7 @@ func TestAccCloudRunService_cloudRunServiceUpdate(t *testing.T) { }) } -func testAccCloudRunService_cloudRunServiceUpdate(name, project, concurrency string) string { +func testAccCloudRunService_cloudRunServiceUpdate(name, project, concurrency, timeoutSeconds string) string { return fmt.Sprintf(` resource "google_cloud_run_service" "default" { name = "%s" @@ -55,8 +54,12 @@ resource "google_cloud_run_service" "default" { containers { image = "gcr.io/cloudrun/hello" args = ["arrgs"] + ports { + container_port = 8080 + } } container_concurrency = %s + timeout_seconds = %s } } @@ -65,5 +68,5 @@ resource "google_cloud_run_service" "default" { latest_revision = true } } -`, name, project, concurrency) +`, name, project, concurrency, timeoutSeconds) } diff --git a/third_party/terraform/tests/resource_cloudscheduler_job_test.go.erb b/third_party/terraform/tests/resource_cloud_scheduler_job_test.go similarity index 93% rename from third_party/terraform/tests/resource_cloudscheduler_job_test.go.erb rename to 
third_party/terraform/tests/resource_cloud_scheduler_job_test.go index 762626ca9d71..1b03acacf030 100644 --- a/third_party/terraform/tests/resource_cloudscheduler_job_test.go.erb +++ b/third_party/terraform/tests/resource_cloud_scheduler_job_test.go @@ -1,6 +1,4 @@ -<% autogen_exception -%> package google -<% unless version == 'ga' -%> import ( "reflect" @@ -95,6 +93,3 @@ func TestCloudScheduler_FlattenHttpHeaders(t *testing.T) { } } } -<% else %> -// Magic Modules doesn't let us remove files - blank out beta-only common-compile files for now. -<% end -%> diff --git a/third_party/terraform/tests/resource_cloud_tasks_queue_test.go.erb b/third_party/terraform/tests/resource_cloud_tasks_queue_test.go.erb index 2f0b0690d8e5..2219f60cfdb0 100644 --- a/third_party/terraform/tests/resource_cloud_tasks_queue_test.go.erb +++ b/third_party/terraform/tests/resource_cloud_tasks_queue_test.go.erb @@ -5,16 +5,15 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" ) func TestAccCloudTasksQueue_update(t *testing.T) { t.Parallel() - name := "cloudtasksqueuetest-" + acctest.RandString(10) + name := "cloudtasksqueuetest-" + randString(t, 10) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -43,9 +42,9 @@ func TestAccCloudTasksQueue_update(t *testing.T) { func TestAccCloudTasksQueue_update2Basic(t *testing.T) { t.Parallel() - name := "cloudtasksqueuetest-" + acctest.RandString(10) + name := "cloudtasksqueuetest-" + randString(t, 10) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ diff --git a/third_party/terraform/tests/resource_cloudbuild_trigger_test.go b/third_party/terraform/tests/resource_cloudbuild_trigger_test.go index 533774c9f084..e00c347f4a62 100644 --- a/third_party/terraform/tests/resource_cloudbuild_trigger_test.go +++ b/third_party/terraform/tests/resource_cloudbuild_trigger_test.go @@ -5,18 +5,17 @@ import ( "regexp" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" ) func TestAccCloudBuildTrigger_basic(t *testing.T) { t.Parallel() - name := acctest.RandomWithPrefix("tf-test") + name := fmt.Sprintf("tf-test-%d", randInt(t)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckCloudBuildTriggerDestroy, + CheckDestroy: testAccCheckCloudBuildTriggerDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccCloudBuildTrigger_basic(name), @@ -41,12 +40,12 @@ func TestAccCloudBuildTrigger_basic(t *testing.T) { func TestAccCloudBuildTrigger_customizeDiffTimeoutSum(t *testing.T) { t.Parallel() - name := acctest.RandomWithPrefix("tf-test") + name := fmt.Sprintf("tf-test-%d", randInt(t)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckCloudBuildTriggerDestroy, + CheckDestroy: testAccCheckCloudBuildTriggerDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccCloudBuildTrigger_customizeDiffTimeoutSum(name), @@ -59,12 +58,12 @@ func TestAccCloudBuildTrigger_customizeDiffTimeoutSum(t *testing.T) { func 
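The Cloud Run hunk above illustrates the convention for making one more field vary across update steps: thread a new `timeoutSeconds` argument through the config helper and give it its own `%s` verb next to `concurrency`, so successive steps can change both. The same shape with placeholder names:

// Hypothetical helper mirroring testAccCloudRunService_cloudRunServiceUpdate:
// every value the steps vary between applies is an explicit parameter.
func testAccExampleService(name, concurrency, timeoutSeconds string) string {
    return fmt.Sprintf(`
resource "google_example_service" "default" {
  name                  = "%s"
  container_concurrency = %s
  timeout_seconds       = %s
}
`, name, concurrency, timeoutSeconds)
}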
TestAccCloudBuildTrigger_customizeDiffTimeoutFormat(t *testing.T) { t.Parallel() - name := acctest.RandomWithPrefix("tf-test") + name := fmt.Sprintf("tf-test-%d", randInt(t)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckCloudBuildTriggerDestroy, + CheckDestroy: testAccCheckCloudBuildTriggerDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccCloudBuildTrigger_customizeDiffTimeoutFormat(name), @@ -76,12 +75,12 @@ func TestAccCloudBuildTrigger_customizeDiffTimeoutFormat(t *testing.T) { func TestAccCloudBuildTrigger_disable(t *testing.T) { t.Parallel() - name := acctest.RandomWithPrefix("tf-test") + name := fmt.Sprintf("tf-test-%d", randInt(t)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckCloudBuildTriggerDestroy, + CheckDestroy: testAccCheckCloudBuildTriggerDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccCloudBuildTrigger_basic(name), @@ -106,10 +105,10 @@ func TestAccCloudBuildTrigger_disable(t *testing.T) { func TestAccCloudBuildTrigger_fullStep(t *testing.T) { t.Parallel() - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckCloudBuildTriggerDestroy, + CheckDestroy: testAccCheckCloudBuildTriggerDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccCloudBuildTrigger_fullStep(), @@ -195,6 +194,7 @@ resource "google_cloudbuild_trigger" "build_trigger" { trigger_template { branch_name = "master" repo_name = "some-repo" + invert_regex = false } build { images = ["gcr.io/$PROJECT_ID/$REPO_NAME:$COMMIT_SHA"] @@ -222,6 +222,7 @@ resource "google_cloudbuild_trigger" "build_trigger" { trigger_template { branch_name = "master-updated" repo_name = "some-repo-updated" + invert_regex = true } build { images = ["gcr.io/$PROJECT_ID/$REPO_NAME:$SHORT_SHA"] diff --git a/third_party/terraform/tests/resource_cloudfunctions_function_test.go.erb b/third_party/terraform/tests/resource_cloudfunctions_function_test.go.erb index 6ccfb03ab10b..e3af7df8159b 100644 --- a/third_party/terraform/tests/resource_cloudfunctions_function_test.go.erb +++ b/third_party/terraform/tests/resource_cloudfunctions_function_test.go.erb @@ -10,7 +10,6 @@ import ( "archive/zip" "io/ioutil" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" "google.golang.org/api/cloudfunctions/v1" @@ -71,27 +70,72 @@ func TestCloudFunctionsFunction_nameValidator(t *testing.T) { } } +func TestValidLabelKeys(t *testing.T) { + testCases := []struct { + labelKey string + valid bool + }{ + { + "test-label", true, + }, + { + "test_label", true, + }, + { + "MixedCase", false, + }, + { + "number-09-dash", true, + }, + { + "", false, + }, + { + "test-label", true, + }, + { + "mixed*symbol", false, + }, + { + "intérnätional", true, + }, + } + + for _, tc := range testCases { + labels := make(map[string]interface{}) + labels[tc.labelKey] = "test value" + + _, errs := labelKeyValidator(labels, "") + if tc.valid && len(errs) > 0 { + t.Errorf("Validation failure, key: '%s' should be valid but actual errors were %q", tc.labelKey, errs) + } + if !tc.valid && len(errs) < 1 { + t.Errorf("Validation failure, key: '%s' should fail but actual errors were 
%q", tc.labelKey, errs) + } + } +} + func TestAccCloudFunctionsFunction_basic(t *testing.T) { t.Parallel() var function cloudfunctions.CloudFunction funcResourceName := "google_cloudfunctions_function.function" - functionName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) - bucketName := fmt.Sprintf("tf-test-bucket-%d", acctest.RandInt()) + functionName := fmt.Sprintf("tf-test-%s", randString(t, 10)) + bucketName := fmt.Sprintf("tf-test-bucket-%d", randInt(t)) zipFilePath := createZIPArchiveForCloudFunctionSource(t, testHTTPTriggerPath) defer os.Remove(zipFilePath) // clean up - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckCloudFunctionsFunctionDestroy, + CheckDestroy: testAccCheckCloudFunctionsFunctionDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccCloudFunctionsFunction_basic(functionName, bucketName, zipFilePath), Check: resource.ComposeTestCheckFunc( testAccCloudFunctionsFunctionExists( - funcResourceName, &function), + t, funcResourceName, &function), resource.TestCheckResourceAttr(funcResourceName, "name", functionName), resource.TestCheckResourceAttr(funcResourceName, @@ -130,13 +174,13 @@ func TestAccCloudFunctionsFunction_update(t *testing.T) { var function cloudfunctions.CloudFunction funcResourceName := "google_cloudfunctions_function.function" - functionName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) - bucketName := fmt.Sprintf("tf-test-bucket-%d", acctest.RandInt()) + functionName := fmt.Sprintf("tf-test-%s", randString(t, 10)) + bucketName := fmt.Sprintf("tf-test-bucket-%d", randInt(t)) zipFilePath := createZIPArchiveForCloudFunctionSource(t, testHTTPTriggerPath) zipFileUpdatePath := createZIPArchiveForCloudFunctionSource(t, testHTTPTriggerUpdatePath) defer os.Remove(zipFilePath) // clean up - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -144,7 +188,7 @@ func TestAccCloudFunctionsFunction_update(t *testing.T) { Config: testAccCloudFunctionsFunction_basic(functionName, bucketName, zipFilePath), Check: resource.ComposeTestCheckFunc( testAccCloudFunctionsFunctionExists( - funcResourceName, &function), + t, funcResourceName, &function), resource.TestCheckResourceAttr(funcResourceName, "available_memory_mb", "128"), testAccCloudFunctionsFunctionHasLabel("my-label", "my-label-value", &function), @@ -159,7 +203,7 @@ func TestAccCloudFunctionsFunction_update(t *testing.T) { Config: testAccCloudFunctionsFunction_updated(functionName, bucketName, zipFileUpdatePath), Check: resource.ComposeTestCheckFunc( testAccCloudFunctionsFunctionExists( - funcResourceName, &function), + t, funcResourceName, &function), resource.TestCheckResourceAttr(funcResourceName, "available_memory_mb", "256"), resource.TestCheckResourceAttr(funcResourceName, @@ -191,16 +235,16 @@ func TestAccCloudFunctionsFunction_pubsub(t *testing.T) { t.Parallel() funcResourceName := "google_cloudfunctions_function.function" - functionName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) - bucketName := fmt.Sprintf("tf-test-bucket-%d", acctest.RandInt()) - topicName := fmt.Sprintf("tf-test-sub-%s", acctest.RandString(10)) + functionName := fmt.Sprintf("tf-test-%s", randString(t, 10)) + bucketName := fmt.Sprintf("tf-test-bucket-%d", randInt(t)) + topicName := fmt.Sprintf("tf-test-sub-%s", randString(t, 10)) zipFilePath := 
createZIPArchiveForCloudFunctionSource(t, testPubSubTriggerPath) defer os.Remove(zipFilePath) // clean up - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckCloudFunctionsFunctionDestroy, + CheckDestroy: testAccCheckCloudFunctionsFunctionDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccCloudFunctionsFunction_pubsub(functionName, bucketName, @@ -218,15 +262,15 @@ func TestAccCloudFunctionsFunction_pubsub(t *testing.T) { func TestAccCloudFunctionsFunction_bucket(t *testing.T) { t.Parallel() funcResourceName := "google_cloudfunctions_function.function" - functionName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) - bucketName := fmt.Sprintf("tf-test-bucket-%d", acctest.RandInt()) + functionName := fmt.Sprintf("tf-test-%s", randString(t, 10)) + bucketName := fmt.Sprintf("tf-test-bucket-%d", randInt(t)) zipFilePath := createZIPArchiveForCloudFunctionSource(t, testBucketTriggerPath) defer os.Remove(zipFilePath) // clean up - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckCloudFunctionsFunctionDestroy, + CheckDestroy: testAccCheckCloudFunctionsFunctionDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccCloudFunctionsFunction_bucket(functionName, bucketName, zipFilePath), @@ -251,15 +295,15 @@ func TestAccCloudFunctionsFunction_bucket(t *testing.T) { func TestAccCloudFunctionsFunction_firestore(t *testing.T) { t.Parallel() funcResourceName := "google_cloudfunctions_function.function" - functionName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) - bucketName := fmt.Sprintf("tf-test-bucket-%d", acctest.RandInt()) + functionName := fmt.Sprintf("tf-test-%s", randString(t, 10)) + bucketName := fmt.Sprintf("tf-test-bucket-%d", randInt(t)) zipFilePath := createZIPArchiveForCloudFunctionSource(t, testFirestoreTriggerPath) defer os.Remove(zipFilePath) // clean up - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckCloudFunctionsFunctionDestroy, + CheckDestroy: testAccCheckCloudFunctionsFunctionDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccCloudFunctionsFunction_firestore(functionName, bucketName, zipFilePath), @@ -277,13 +321,13 @@ func TestAccCloudFunctionsFunction_sourceRepo(t *testing.T) { t.Parallel() funcResourceName := "google_cloudfunctions_function.function" - functionName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + functionName := fmt.Sprintf("tf-test-%s", randString(t, 10)) proj := getTestProjectFromEnv() - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckCloudFunctionsFunctionDestroy, + CheckDestroy: testAccCheckCloudFunctionsFunctionDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccCloudFunctionsFunction_sourceRepo(functionName, proj), @@ -301,15 +345,15 @@ func TestAccCloudFunctionsFunction_serviceAccountEmail(t *testing.T) { t.Parallel() funcResourceName := "google_cloudfunctions_function.function" - functionName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) - bucketName := fmt.Sprintf("tf-test-bucket-%d", acctest.RandInt()) + functionName := fmt.Sprintf("tf-test-%s", randString(t, 10)) + bucketName := fmt.Sprintf("tf-test-bucket-%d", 
randInt(t)) zipFilePath := createZIPArchiveForCloudFunctionSource(t, testHTTPTriggerPath) defer os.Remove(zipFilePath) // clean up - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckCloudFunctionsFunctionDestroy, + CheckDestroy: testAccCheckCloudFunctionsFunctionDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccCloudFunctionsFunction_serviceAccountEmail(functionName, bucketName, zipFilePath), @@ -327,18 +371,18 @@ func TestAccCloudFunctionsFunction_vpcConnector(t *testing.T) { t.Parallel() funcResourceName := "google_cloudfunctions_function.function" - functionName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) - bucketName := fmt.Sprintf("tf-test-bucket-%d", acctest.RandInt()) - networkName := fmt.Sprintf("tf-test-net-%d", acctest.RandInt()) - vpcConnectorName := fmt.Sprintf("tf-test-conn-%s", acctest.RandString(5)) + functionName := fmt.Sprintf("tf-test-%s", randString(t, 10)) + bucketName := fmt.Sprintf("tf-test-bucket-%d", randInt(t)) + networkName := fmt.Sprintf("tf-test-net-%d", randInt(t)) + vpcConnectorName := fmt.Sprintf("tf-test-conn-%s", randString(t, 5)) zipFilePath := createZIPArchiveForCloudFunctionSource(t, testHTTPTriggerPath) projectNumber := os.Getenv("GOOGLE_PROJECT_NUMBER") defer os.Remove(zipFilePath) // clean up - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckCloudFunctionsFunctionDestroy, + CheckDestroy: testAccCheckCloudFunctionsFunctionDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccCloudFunctionsFunction_vpcConnector(projectNumber, networkName, functionName, bucketName, zipFilePath, "10.10.0.0/28", vpcConnectorName), @@ -360,33 +404,35 @@ func TestAccCloudFunctionsFunction_vpcConnector(t *testing.T) { }) } -func testAccCheckCloudFunctionsFunctionDestroy(s *terraform.State) error { - config := testAccProvider.Meta().(*Config) +func testAccCheckCloudFunctionsFunctionDestroyProducer(t *testing.T) func(s *terraform.State) error { + return func(s *terraform.State) error { + config := googleProviderConfig(t) + + for _, rs := range s.RootModule().Resources { + if rs.Type != "google_cloudfunctions_function" { + continue + } - for _, rs := range s.RootModule().Resources { - if rs.Type != "google_cloudfunctions_function" { - continue - } + name := rs.Primary.Attributes["name"] + project := rs.Primary.Attributes["project"] + region := rs.Primary.Attributes["region"] + cloudFuncId := &cloudFunctionId{ + Project: project, + Region: region, + Name: name, + } + _, err := config.clientCloudFunctions.Projects.Locations.Functions.Get(cloudFuncId.cloudFunctionId()).Do() + if err == nil { + return fmt.Errorf("Function still exists") + } - name := rs.Primary.Attributes["name"] - project := rs.Primary.Attributes["project"] - region := rs.Primary.Attributes["region"] - cloudFuncId := &cloudFunctionId{ - Project: project, - Region: region, - Name: name, - } - _, err := config.clientCloudFunctions.Projects.Locations.Functions.Get(cloudFuncId.cloudFunctionId()).Do() - if err == nil { - return fmt.Errorf("Function still exists") } + return nil } - - return nil } -func testAccCloudFunctionsFunctionExists(n string, function *cloudfunctions.CloudFunction) resource.TestCheckFunc { +func testAccCloudFunctionsFunctionExists(t *testing.T, n string, function *cloudfunctions.CloudFunction) resource.TestCheckFunc { return func(s 
*terraform.State) error { rs, ok := s.RootModule().Resources[n] if !ok { @@ -396,7 +442,7 @@ func testAccCloudFunctionsFunctionExists(n string, function *cloudfunctions.Clou if rs.Primary.ID == "" { return fmt.Errorf("No ID is set") } - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) name := rs.Primary.Attributes["name"] project := rs.Primary.Attributes["project"] region := rs.Primary.Attributes["region"] diff --git a/third_party/terraform/tests/resource_cloudiot_device_registry_id_test.go b/third_party/terraform/tests/resource_cloudiot_device_registry_id_test.go new file mode 100644 index 000000000000..10468bc7ed1a --- /dev/null +++ b/third_party/terraform/tests/resource_cloudiot_device_registry_id_test.go @@ -0,0 +1,30 @@ +package google + +import ( + "strings" + "testing" +) + +func TestValidateCloudIoTDeviceRegistryId(t *testing.T) { + x := []StringValidationTestCase{ + // No errors + {TestName: "basic", Value: "foobar"}, + {TestName: "with numbers", Value: "foobar123"}, + {TestName: "short", Value: "foo"}, + {TestName: "long", Value: "foobarfoobarfoobarfoobarfoobarfoobarfoobarfoobarfoobarfoobarfoo"}, + {TestName: "has a hyphen", Value: "foo-bar"}, + + // With errors + {TestName: "empty", Value: "", ExpectError: true}, + {TestName: "starts with a goog", Value: "googfoobar", ExpectError: true}, + {TestName: "starts with a number", Value: "1foobar", ExpectError: true}, + {TestName: "has a slash", Value: "foo/bar", ExpectError: true}, + {TestName: "has a backslash", Value: "foo\bar", ExpectError: true}, + {TestName: "too long", Value: strings.Repeat("f", 260), ExpectError: true}, + } + + es := testStringValidationCases(x, validateCloudIotDeviceRegistryID) + if len(es) > 0 { + t.Errorf("Failed to validate CloudIoT ID names: %v", es) + } +} diff --git a/third_party/terraform/tests/resource_cloudiot_device_registry_update_test.go b/third_party/terraform/tests/resource_cloudiot_device_registry_update_test.go new file mode 100644 index 000000000000..c0ae37897345 --- /dev/null +++ b/third_party/terraform/tests/resource_cloudiot_device_registry_update_test.go @@ -0,0 +1,109 @@ +package google + +import ( + "fmt" + "testing" + + "github.com/hashicorp/terraform-plugin-sdk/helper/resource" +) + +func TestAccCloudIoTRegistry_update(t *testing.T) { + t.Parallel() + + registryName := fmt.Sprintf("psregistry-test-%s", randString(t, 10)) + resourceName := fmt.Sprintf("google_cloudiot_registry.%s", registryName) + deviceStatus := fmt.Sprintf("psregistry-test-devicestatus-%s", randString(t, 10)) + defaultTelemetry := fmt.Sprintf("psregistry-test-telemetry-%s", randString(t, 10)) + additionalTelemetry := fmt.Sprintf("psregistry-additional-test-telemetry-%s", randString(t, 10)) + + vcrTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckCloudIotDeviceRegistryDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccCloudIoTRegistryBasic(registryName), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + { + Config: testAccCloudIoTRegistryExtended(registryName, deviceStatus, defaultTelemetry, additionalTelemetry), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + { + Config: testAccCloudIoTRegistryBasic(registryName), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func testAccCloudIoTRegistryBasic(registryName string) string { + return
fmt.Sprintf(` + +resource "google_cloudiot_registry" "%s" { + name = "%s" +} +`, registryName, registryName) +} + +func testAccCloudIoTRegistryExtended(registryName string, deviceStatus string, defaultTelemetry string, additionalTelemetry string) string { + return fmt.Sprintf(` + +resource "google_pubsub_topic" "default-devicestatus" { + name = "psregistry-test-devicestatus-%s" +} + +resource "google_pubsub_topic" "default-telemetry" { + name = "psregistry-test-telemetry-%s" +} + +resource "google_pubsub_topic" "additional-telemetry" { + name = "psregistry-additional-test-telemetry-%s" +} + +resource "google_cloudiot_registry" "%s" { + name = "%s" + + event_notification_configs { + pubsub_topic_name = google_pubsub_topic.additional-telemetry.id + subfolder_matches = "test/directory" + } + + event_notification_configs { + pubsub_topic_name = google_pubsub_topic.default-telemetry.id + subfolder_matches = "" + } + + state_notification_config = { + pubsub_topic_name = google_pubsub_topic.default-devicestatus.id + } + + mqtt_config = { + mqtt_enabled_state = "MQTT_DISABLED" + } + + http_config = { + http_enabled_state = "HTTP_DISABLED" + } + + credentials { + public_key_certificate = { + format = "X509_CERTIFICATE_PEM" + certificate = file("test-fixtures/rsa_cert.pem") + } + } +} +`, deviceStatus, defaultTelemetry, additionalTelemetry, registryName, registryName) +} diff --git a/third_party/terraform/tests/resource_cloudiot_registry_test.go b/third_party/terraform/tests/resource_cloudiot_registry_test.go deleted file mode 100644 index 83686673ac82..000000000000 --- a/third_party/terraform/tests/resource_cloudiot_registry_test.go +++ /dev/null @@ -1,269 +0,0 @@ -package google - -import ( - "fmt" - "strings" - "testing" - - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" - "github.com/hashicorp/terraform-plugin-sdk/helper/resource" - "github.com/hashicorp/terraform-plugin-sdk/terraform" -) - -func TestValidateCloudIoTID(t *testing.T) { - x := []StringValidationTestCase{ - // No errors - {TestName: "basic", Value: "foobar"}, - {TestName: "with numbers", Value: "foobar123"}, - {TestName: "short", Value: "foo"}, - {TestName: "long", Value: "foobarfoobarfoobarfoobarfoobarfoobarfoobarfoobarfoobarfoobarfoo"}, - {TestName: "has a hyphen", Value: "foo-bar"}, - - // With errors - {TestName: "empty", Value: "", ExpectError: true}, - {TestName: "starts with a goog", Value: "googfoobar", ExpectError: true}, - {TestName: "starts with a number", Value: "1foobar", ExpectError: true}, - {TestName: "has an slash", Value: "foo/bar", ExpectError: true}, - {TestName: "has an backslash", Value: "foo\bar", ExpectError: true}, - {TestName: "too long", Value: strings.Repeat("f", 260), ExpectError: true}, - } - - es := testStringValidationCases(x, validateCloudIotID) - if len(es) > 0 { - t.Errorf("Failed to validate CloudIoT ID names: %v", es) - } -} - -func TestAccCloudIoTRegistry_basic(t *testing.T) { - t.Parallel() - - registryName := fmt.Sprintf("psregistry-test-%s", acctest.RandString(10)) - - resource.Test(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t) }, - Providers: testAccProviders, - CheckDestroy: testAccCheckCloudIoTRegistryDestroy, - Steps: []resource.TestStep{ - { - Config: testAccCloudIoTRegistry_basic(registryName), - }, - { - ResourceName: "google_cloudiot_registry.foobar", - ImportState: true, - ImportStateVerify: true, - }, - }, - }) -} - -func TestAccCloudIoTRegistry_extended(t *testing.T) { - t.Parallel() - - registryName := fmt.Sprintf("psregistry-test-%s", 
acctest.RandString(10)) - - resource.Test(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t) }, - Providers: testAccProviders, - CheckDestroy: testAccCheckCloudIoTRegistryDestroy, - Steps: []resource.TestStep{ - { - Config: testAccCloudIoTRegistry_extended(registryName), - }, - { - ResourceName: "google_cloudiot_registry.foobar", - ImportState: true, - ImportStateVerify: true, - }, - }, - }) -} - -func TestAccCloudIoTRegistry_update(t *testing.T) { - t.Parallel() - - registryName := fmt.Sprintf("psregistry-test-%s", acctest.RandString(10)) - - resource.Test(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t) }, - Providers: testAccProviders, - CheckDestroy: testAccCheckCloudIoTRegistryDestroy, - Steps: []resource.TestStep{ - { - Config: testAccCloudIoTRegistry_basic(registryName), - }, - { - ResourceName: "google_cloudiot_registry.foobar", - ImportState: true, - ImportStateVerify: true, - }, - { - Config: testAccCloudIoTRegistry_extended(registryName), - }, - { - ResourceName: "google_cloudiot_registry.foobar", - ImportState: true, - ImportStateVerify: true, - }, - { - Config: testAccCloudIoTRegistry_basic(registryName), - }, - { - ResourceName: "google_cloudiot_registry.foobar", - ImportState: true, - ImportStateVerify: true, - }, - }, - }) -} - -func TestAccCloudIoTRegistry_eventNotificationConfigsSingle(t *testing.T) { - t.Parallel() - - registryName := fmt.Sprintf("tf-registry-test-%s", acctest.RandString(10)) - topic := fmt.Sprintf("tf-registry-test-%s", acctest.RandString(10)) - - resource.Test(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t) }, - Providers: testAccProviders, - CheckDestroy: testAccCheckCloudIoTRegistryDestroy, - Steps: []resource.TestStep{ - { - Config: testAccCloudIoTRegistry_singleEventNotificationConfigs(topic, registryName), - }, - { - ResourceName: "google_cloudiot_registry.foobar", - ImportState: true, - ImportStateVerify: true, - }, - }, - }) -} - -func TestAccCloudIoTRegistry_eventNotificationConfigsMultiple(t *testing.T) { - t.Parallel() - - registryName := fmt.Sprintf("tf-registry-test-%s", acctest.RandString(10)) - topic := fmt.Sprintf("tf-registry-test-%s", acctest.RandString(10)) - - resource.Test(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t) }, - Providers: testAccProviders, - CheckDestroy: testAccCheckCloudIoTRegistryDestroy, - Steps: []resource.TestStep{ - { - Config: testAccCloudIoTRegistry_multipleEventNotificationConfigs(topic, registryName), - }, - { - ResourceName: "google_cloudiot_registry.foobar", - ImportState: true, - ImportStateVerify: true, - }, - }, - }) -} - -func testAccCheckCloudIoTRegistryDestroy(s *terraform.State) error { - for _, rs := range s.RootModule().Resources { - if rs.Type != "google_cloudiot_registry" { - continue - } - config := testAccProvider.Meta().(*Config) - registry, _ := config.clientCloudIoT.Projects.Locations.Registries.Get(rs.Primary.ID).Do() - if registry != nil { - return fmt.Errorf("Registry still present") - } - } - return nil -} - -func testAccCloudIoTRegistry_basic(registryName string) string { - return fmt.Sprintf(` -resource "google_cloudiot_registry" "foobar" { - name = "%s" -} -`, registryName) -} - -func testAccCloudIoTRegistry_extended(registryName string) string { - return fmt.Sprintf(` -resource "google_pubsub_topic" "default-devicestatus" { - name = "psregistry-test-devicestatus-%s" -} - -resource "google_pubsub_topic" "default-telemetry" { - name = "psregistry-test-telemetry-%s" -} - -resource "google_cloudiot_registry" "foobar" { - name = "%s" - 
- event_notification_configs { - pubsub_topic_name = google_pubsub_topic.default-devicestatus.id - } - - state_notification_config = { - pubsub_topic_name = google_pubsub_topic.default-telemetry.id - } - - http_config = { - http_enabled_state = "HTTP_DISABLED" - } - - mqtt_config = { - mqtt_enabled_state = "MQTT_DISABLED" - } - - log_level = "INFO" - - credentials { - public_key_certificate = { - format = "X509_CERTIFICATE_PEM" - certificate = file("test-fixtures/rsa_cert.pem") - } - } -} -`, acctest.RandString(10), acctest.RandString(10), registryName) -} - -func testAccCloudIoTRegistry_singleEventNotificationConfigs(topic, registryName string) string { - return fmt.Sprintf(` -resource "google_pubsub_topic" "event-topic-1" { - name = "%s" -} - -resource "google_cloudiot_registry" "foobar" { - name = "%s" - - event_notification_configs { - pubsub_topic_name = google_pubsub_topic.event-topic-1.id - subfolder_matches = "" - } -} -`, topic, registryName) -} - -func testAccCloudIoTRegistry_multipleEventNotificationConfigs(topic, registryName string) string { - return fmt.Sprintf(` -resource "google_pubsub_topic" "event-topic-1" { - name = "%s" -} - -resource "google_pubsub_topic" "event-topic-2" { - name = "%s-alt" -} - -resource "google_cloudiot_registry" "foobar" { - name = "%s" - - event_notification_configs { - pubsub_topic_name = google_pubsub_topic.event-topic-1.id - subfolder_matches = "test" - } - - event_notification_configs { - pubsub_topic_name = google_pubsub_topic.event-topic-2.id - subfolder_matches = "" - } -} -`, topic, topic, registryName) -} diff --git a/third_party/terraform/tests/resource_composer_environment_test.go.erb b/third_party/terraform/tests/resource_composer_environment_test.go.erb index 39722ed37173..f22f4eb4f8f7 100644 --- a/third_party/terraform/tests/resource_composer_environment_test.go.erb +++ b/third_party/terraform/tests/resource_composer_environment_test.go.erb @@ -10,15 +10,14 @@ import ( "time" "github.com/hashicorp/go-multierror" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" "google.golang.org/api/composer/v1beta1" "google.golang.org/api/storage/v1" ) -const testComposerEnvironmentPrefix = "tf-cc-testenv" -const testComposerNetworkPrefix = "tf-cc-testnet" +const testComposerEnvironmentPrefix = "tf-test-composer-env" +const testComposerNetworkPrefix = "tf-test-composer-net" func init() { resource.AddTestSweepers("gcp_composer_environment", &resource.Sweeper{ @@ -55,13 +54,13 @@ func TestComposerImageVersionDiffSuppress(t *testing.T) { func TestAccComposerEnvironment_basic(t *testing.T) { t.Parallel() - envName := acctest.RandomWithPrefix(testComposerEnvironmentPrefix) - network := acctest.RandomWithPrefix(testComposerNetworkPrefix) + envName := fmt.Sprintf("%s-%d", testComposerEnvironmentPrefix, randInt(t)) + network := fmt.Sprintf("%s-%d", testComposerNetworkPrefix, randInt(t)) subnetwork := network + "-1" - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccComposerEnvironmentDestroy, + CheckDestroy: testAccComposerEnvironmentDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComposerEnvironment_basic(envName, network, subnetwork), @@ -90,7 +89,7 @@ func TestAccComposerEnvironment_basic(t *testing.T) { PlanOnly: true, ExpectNonEmptyPlan: false, Config: testAccComposerEnvironment_basic(envName, network, 
subnetwork), - Check: testAccCheckClearComposerEnvironmentFirewalls(network), + Check: testAccCheckClearComposerEnvironmentFirewalls(t, network), }, }, }) @@ -101,14 +100,14 @@ func TestAccComposerEnvironment_basic(t *testing.T) { func TestAccComposerEnvironment_update(t *testing.T) { t.Parallel() - envName := acctest.RandomWithPrefix(testComposerEnvironmentPrefix) - network := acctest.RandomWithPrefix(testComposerNetworkPrefix) + envName := fmt.Sprintf("%s-%d", testComposerEnvironmentPrefix, randInt(t)) + network := fmt.Sprintf("%s-%d", testComposerNetworkPrefix, randInt(t)) subnetwork := network + "-1" - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccComposerEnvironmentDestroy, + CheckDestroy: testAccComposerEnvironmentDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComposerEnvironment_basic(envName, network, subnetwork), @@ -128,7 +127,7 @@ func TestAccComposerEnvironment_update(t *testing.T) { PlanOnly: true, ExpectNonEmptyPlan: false, Config: testAccComposerEnvironment_update(envName, network, subnetwork), - Check: testAccCheckClearComposerEnvironmentFirewalls(network), + Check: testAccCheckClearComposerEnvironmentFirewalls(t, network), }, }, }) @@ -138,14 +137,14 @@ func TestAccComposerEnvironment_update(t *testing.T) { func TestAccComposerEnvironment_private(t *testing.T) { t.Parallel() - envName := acctest.RandomWithPrefix(testComposerEnvironmentPrefix) - network := acctest.RandomWithPrefix(testComposerNetworkPrefix) + envName := fmt.Sprintf("%s-%d", testComposerEnvironmentPrefix, randInt(t)) + network := fmt.Sprintf("%s-%d", testComposerNetworkPrefix, randInt(t)) subnetwork := network + "-1" - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccComposerEnvironmentDestroy, + CheckDestroy: testAccComposerEnvironmentDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComposerEnvironment_private(envName, network, subnetwork), @@ -168,25 +167,75 @@ func TestAccComposerEnvironment_private(t *testing.T) { PlanOnly: true, ExpectNonEmptyPlan: false, Config: testAccComposerEnvironment_private(envName, network, subnetwork), - Check: testAccCheckClearComposerEnvironmentFirewalls(network), + Check: testAccCheckClearComposerEnvironmentFirewalls(t, network), }, }, }) } +<% unless version == "ga" -%> +// Checks private environment creation with web server network access control.
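+// The test applies the config below, imports it, applies the *Updated variant,
+// imports again (once by the full projects/{project}/locations/{region}/
+// environments/{name} path), and finishes with the same plan-only step the
+// other Composer tests use to clear leaked GKE firewall rules before destroy.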
+func TestAccComposerEnvironment_privateWithWebServerControl(t *testing.T) { + t.Parallel() + + envName := fmt.Sprintf("%s-%d", testComposerEnvironmentPrefix, randInt(t)) + network := fmt.Sprintf("%s-%d", testComposerNetworkPrefix, randInt(t)) + subnetwork := network + "-1" + + vcrTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccComposerEnvironmentDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccComposerEnvironment_privateWithWebServerControl(envName, network, subnetwork), + }, + { + ResourceName: "google_composer_environment.test", + ImportState: true, + ImportStateVerify: true, + }, + { + Config: testAccComposerEnvironment_privateWithWebServerControlUpdated(envName, network, subnetwork), + }, + { + ResourceName: "google_composer_environment.test", + ImportState: true, + ImportStateVerify: true, + }, + { + ResourceName: "google_composer_environment.test", + ImportState: true, + ImportStateId: fmt.Sprintf("projects/%s/locations/%s/environments/%s", getTestProjectFromEnv(), "us-central1", envName), + ImportStateVerify: true, + }, + // This is a terrible clean-up step in order to get destroy to succeed, + // due to dangling firewall rules left by the Composer Environment blocking network deletion. + // TODO(emilyye): Remove this check if firewall rules bug gets fixed by Composer. + { + PlanOnly: true, + ExpectNonEmptyPlan: false, + Config: testAccComposerEnvironment_privateWithWebServerControlUpdated(envName, network, subnetwork), + Check: testAccCheckClearComposerEnvironmentFirewalls(t, network), + }, + }, + }) +} + +<% end -%> // Checks behavior of node config, including dependencies on Compute resources. func TestAccComposerEnvironment_withNodeConfig(t *testing.T) { t.Parallel() - envName := acctest.RandomWithPrefix(testComposerEnvironmentPrefix) - network := acctest.RandomWithPrefix(testComposerNetworkPrefix) + envName := fmt.Sprintf("%s-%d", testComposerEnvironmentPrefix, randInt(t)) + network := fmt.Sprintf("%s-%d", testComposerNetworkPrefix, randInt(t)) subnetwork := network + "-1" - serviceAccount := acctest.RandomWithPrefix("tf-test") + serviceAccount := fmt.Sprintf("tf-test-%d", randInt(t)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccComposerEnvironmentDestroy, + CheckDestroy: testAccComposerEnvironmentDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComposerEnvironment_nodeCfg(envName, network, subnetwork, serviceAccount), @@ -203,7 +252,7 @@ func TestAccComposerEnvironment_withNodeConfig(t *testing.T) { PlanOnly: true, ExpectNonEmptyPlan: false, Config: testAccComposerEnvironment_nodeCfg(envName, network, subnetwork, serviceAccount), - Check: testAccCheckClearComposerEnvironmentFirewalls(network), + Check: testAccCheckClearComposerEnvironmentFirewalls(t, network), }, }, }) @@ -211,14 +260,14 @@ func TestAccComposerEnvironment_withNodeConfig(t *testing.T) { func TestAccComposerEnvironment_withSoftwareConfig(t *testing.T) { t.Parallel() - envName := acctest.RandomWithPrefix(testComposerEnvironmentPrefix) - network := acctest.RandomWithPrefix(testComposerNetworkPrefix) + envName := fmt.Sprintf("%s-%d", testComposerEnvironmentPrefix, randInt(t)) + network := fmt.Sprintf("%s-%d", testComposerNetworkPrefix, randInt(t)) subnetwork := network + "-1" - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, 
Providers: testAccProviders, - CheckDestroy: testAccComposerEnvironmentDestroy, + CheckDestroy: testAccComposerEnvironmentDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComposerEnvironment_softwareCfg(envName, network, subnetwork), @@ -235,7 +284,7 @@ func TestAccComposerEnvironment_withSoftwareConfig(t *testing.T) { PlanOnly: true, ExpectNonEmptyPlan: false, Config: testAccComposerEnvironment_softwareCfg(envName, network, subnetwork), - Check: testAccCheckClearComposerEnvironmentFirewalls(network), + Check: testAccCheckClearComposerEnvironmentFirewalls(t, network), }, }, }) @@ -246,14 +295,14 @@ func TestAccComposerEnvironment_withSoftwareConfig(t *testing.T) { func TestAccComposerEnvironment_withUpdateOnCreate(t *testing.T) { t.Parallel() - envName := acctest.RandomWithPrefix(testComposerEnvironmentPrefix) - network := acctest.RandomWithPrefix(testComposerNetworkPrefix) + envName := fmt.Sprintf("%s-%d", testComposerEnvironmentPrefix, randInt(t)) + network := fmt.Sprintf("%s-%d", testComposerNetworkPrefix, randInt(t)) subnetwork := network + "-1" - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccComposerEnvironmentDestroy, + CheckDestroy: testAccComposerEnvironmentDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComposerEnvironment_updateOnlyFields(envName, network, subnetwork), @@ -270,37 +319,39 @@ func TestAccComposerEnvironment_withUpdateOnCreate(t *testing.T) { PlanOnly: true, ExpectNonEmptyPlan: false, Config: testAccComposerEnvironment_updateOnlyFields(envName, network, subnetwork), - Check: testAccCheckClearComposerEnvironmentFirewalls(network), + Check: testAccCheckClearComposerEnvironmentFirewalls(t, network), }, }, }) } -func testAccComposerEnvironmentDestroy(s *terraform.State) error { - config := testAccProvider.Meta().(*Config) +func testAccComposerEnvironmentDestroyProducer(t *testing.T) func(s *terraform.State) error { + return func(s *terraform.State) error { + config := googleProviderConfig(t) - for _, rs := range s.RootModule().Resources { - if rs.Type != "google_composer_environment" { - continue - } + for _, rs := range s.RootModule().Resources { + if rs.Type != "google_composer_environment" { + continue + } - idTokens := strings.Split(rs.Primary.ID, "/") - if len(idTokens) != 6 { - return fmt.Errorf("Invalid ID %q, expected format projects/{project}/regions/{region}/environments/{environment}", rs.Primary.ID) - } - envName := &composerEnvironmentName{ - Project: idTokens[1], - Region: idTokens[3], - Environment: idTokens[5], - } + idTokens := strings.Split(rs.Primary.ID, "/") + if len(idTokens) != 6 { + return fmt.Errorf("Invalid ID %q, expected format projects/{project}/regions/{region}/environments/{environment}", rs.Primary.ID) + } + envName := &composerEnvironmentName{ + Project: idTokens[1], + Region: idTokens[3], + Environment: idTokens[5], + } - _, err := config.clientComposer.Projects.Locations.Environments.Get(envName.resourceName()).Do() - if err == nil { - return fmt.Errorf("environment %s still exists", envName.resourceName()) + _, err := config.clientComposer.Projects.Locations.Environments.Get(envName.resourceName()).Do() + if err == nil { + return fmt.Errorf("environment %s still exists", envName.resourceName()) + } } - } - return nil + return nil + } } func testAccComposerEnvironment_basic(name, network, subnetwork string) string { @@ -372,6 +423,112 @@ resource "google_compute_subnetwork" "test" { `, name, 
network, subnetwork) } +<% unless version == "ga" -%> +func testAccComposerEnvironment_privateWithWebServerControl(name, network, subnetwork string) string { + return fmt.Sprintf(` +resource "google_composer_environment" "test" { + name = "%s" + region = "us-central1" + + config { + node_config { + network = google_compute_network.test.self_link + subnetwork = google_compute_subnetwork.test.self_link + zone = "us-central1-a" + ip_allocation_policy { + use_ip_aliases = true + cluster_ipv4_cidr_block = "10.56.0.0/14" + services_ipv4_cidr_block = "10.122.0.0/20" + } + } + private_environment_config { + enable_private_endpoint = false + web_server_ipv4_cidr_block = "172.30.240.0/24" + cloud_sql_ipv4_cidr_block = "10.32.0.0/12" + master_ipv4_cidr_block = "172.17.50.0/28" + } + web_server_network_access_control { + allowed_ip_range { + value = "192.168.0.1" + description = "my range1" + } + allowed_ip_range { + value = "0.0.0.0/0" + } + } + } +} + +// use a separate network to avoid conflicts with other tests running in parallel +// that use the default network/subnet +resource "google_compute_network" "test" { + name = "%s" + auto_create_subnetworks = false +} + +resource "google_compute_subnetwork" "test" { + name = "%s" + ip_cidr_range = "10.2.0.0/16" + region = "us-central1" + network = google_compute_network.test.self_link + private_ip_google_access = true +} +`, name, network, subnetwork) +} + +func testAccComposerEnvironment_privateWithWebServerControlUpdated(name, network, subnetwork string) string { + return fmt.Sprintf(` +resource "google_composer_environment" "test" { + name = "%s" + region = "us-central1" + + config { + node_config { + network = google_compute_network.test.self_link + subnetwork = google_compute_subnetwork.test.self_link + zone = "us-central1-a" + ip_allocation_policy { + use_ip_aliases = true + cluster_ipv4_cidr_block = "10.56.0.0/14" + services_ipv4_cidr_block = "10.122.0.0/20" + } + } + private_environment_config { + enable_private_endpoint = false + web_server_ipv4_cidr_block = "172.30.240.0/24" + cloud_sql_ipv4_cidr_block = "10.32.0.0/12" + master_ipv4_cidr_block = "172.17.50.0/28" + } + web_server_network_access_control { + allowed_ip_range { + value = "192.168.0.1" + description = "my range1" + } + allowed_ip_range { + value = "0.0.0.0/0" + } + } + } +} + +// use a separate network to avoid conflicts with other tests running in parallel +// that use the default network/subnet +resource "google_compute_network" "test" { + name = "%s" + auto_create_subnetworks = false +} + +resource "google_compute_subnetwork" "test" { + name = "%s" + ip_cidr_range = "10.2.0.0/16" + region = "us-central1" + network = google_compute_network.test.self_link + private_ip_google_access = true +} +`, name, network, subnetwork) +} + +<% end -%> func testAccComposerEnvironment_update(name, network, subnetwork string) string { return fmt.Sprintf(` data "google_composer_image_versions" "all" { @@ -609,7 +766,7 @@ func testSweepComposerEnvironments(config *Config) error { allErrors = multierror.Append(allErrors, fmt.Errorf("Unable to delete environment %q: %s", e.Name, deleteErr)) continue } - waitErr := composerOperationWaitTime(config, op, config.Project, "Sweeping old test environments", 10) + waitErr := composerOperationWaitTime(config, op, config.Project, "Sweeping old test environments", 10*time.Minute) if waitErr != nil { allErrors = multierror.Append(allErrors, fmt.Errorf("Unable to delete environment %q: %s", e.Name, waitErr)) } @@ -683,9 +840,9 @@ func 
testSweepComposerEnvironmentCleanUpBucket(config *Config, bucket *storage.B // but will not remove them when the Environment is deleted. // // Destroy test step for config with a network will fail unless we clean up the firewalls before. -func testAccCheckClearComposerEnvironmentFirewalls(networkName string) resource.TestCheckFunc { +func testAccCheckClearComposerEnvironmentFirewalls(t *testing.T, networkName string) resource.TestCheckFunc { return func(s *terraform.State) error { - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) config.Project = getTestProjectFromEnv() network, err := config.clientCompute.Networks.Get(getTestProjectFromEnv(), networkName).Do() if err != nil { diff --git a/third_party/terraform/tests/resource_compute_address_test.go b/third_party/terraform/tests/resource_compute_address_test.go index 231eb0faeec7..ace7124b74a6 100644 --- a/third_party/terraform/tests/resource_compute_address_test.go +++ b/third_party/terraform/tests/resource_compute_address_test.go @@ -4,20 +4,19 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" ) func TestAccComputeAddress_networkTier(t *testing.T) { t.Parallel() - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeAddressDestroy, + CheckDestroy: testAccCheckComputeAddressDestroyProducer(t), Steps: []resource.TestStep{ { - Config: testAccComputeAddress_networkTier(acctest.RandString(10)), + Config: testAccComputeAddress_networkTier(randString(t, 10)), }, { ResourceName: "google_compute_address.foobar", @@ -29,13 +28,13 @@ func TestAccComputeAddress_networkTier(t *testing.T) { } func TestAccComputeAddress_internal(t *testing.T) { - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeAddressDestroy, + CheckDestroy: testAccCheckComputeAddressDestroyProducer(t), Steps: []resource.TestStep{ { - Config: testAccComputeAddress_internal(acctest.RandString(10)), + Config: testAccComputeAddress_internal(randString(t, 10)), }, { ResourceName: "google_compute_address.internal", diff --git a/third_party/terraform/tests/resource_compute_attached_disk_test.go b/third_party/terraform/tests/resource_compute_attached_disk_test.go index 10bc163d62db..3b81d09c6636 100644 --- a/third_party/terraform/tests/resource_compute_attached_disk_test.go +++ b/third_party/terraform/tests/resource_compute_attached_disk_test.go @@ -4,7 +4,6 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" ) @@ -12,11 +11,11 @@ import ( func TestAccComputeAttachedDisk_basic(t *testing.T) { t.Parallel() - diskName := acctest.RandomWithPrefix("tf-test-disk") - instanceName := acctest.RandomWithPrefix("tf-test-inst") + diskName := fmt.Sprintf("tf-test-disk-%d", randInt(t)) + instanceName := fmt.Sprintf("tf-test-inst-%d", randInt(t)) importID := fmt.Sprintf("%s/us-central1-a/%s/%s", getTestProjectFromEnv(), instanceName, diskName) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, // Check destroy isn't a good test here, see comment on testCheckAttachedDiskIsNowDetached @@ -34,7 +33,7 @@ 
func TestAccComputeAttachedDisk_basic(t *testing.T) { { Config: testAttachedDiskResource(diskName, instanceName), Check: resource.ComposeTestCheckFunc( - testCheckAttachedDiskIsNowDetached(instanceName, diskName), + testCheckAttachedDiskIsNowDetached(t, instanceName, diskName), ), }, }, @@ -44,11 +43,11 @@ func TestAccComputeAttachedDisk_basic(t *testing.T) { func TestAccComputeAttachedDisk_full(t *testing.T) { t.Parallel() - diskName := acctest.RandomWithPrefix("tf-test") - instanceName := acctest.RandomWithPrefix("tf-test") + diskName := fmt.Sprintf("tf-test-%d", randInt(t)) + instanceName := fmt.Sprintf("tf-test-%d", randInt(t)) importID := fmt.Sprintf("%s/us-central1-a/%s/%s", getTestProjectFromEnv(), instanceName, diskName) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, // Check destroy isn't a good test here, see comment on testCheckAttachedDiskIsNowDetached @@ -71,11 +70,11 @@ func TestAccComputeAttachedDisk_full(t *testing.T) { func TestAccComputeAttachedDisk_region(t *testing.T) { t.Parallel() - diskName := acctest.RandomWithPrefix("tf-test") - instanceName := acctest.RandomWithPrefix("tf-test") + diskName := fmt.Sprintf("tf-test-%d", randInt(t)) + instanceName := fmt.Sprintf("tf-test-%d", randInt(t)) importID := fmt.Sprintf("%s/us-central1-a/%s/%s", getTestProjectFromEnv(), instanceName, diskName) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, // Check destroy isn't a good test here, see comment on testCheckAttachedDiskIsNowDetached @@ -98,11 +97,11 @@ func TestAccComputeAttachedDisk_region(t *testing.T) { func TestAccComputeAttachedDisk_count(t *testing.T) { t.Parallel() - diskPrefix := acctest.RandomWithPrefix("tf-test") - instanceName := acctest.RandomWithPrefix("tf-test") + diskPrefix := fmt.Sprintf("tf-test-%d", randInt(t)) + instanceName := fmt.Sprintf("tf-test-%d", randInt(t)) count := 2 - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: nil, @@ -110,7 +109,7 @@ func TestAccComputeAttachedDisk_count(t *testing.T) { { Config: testAttachedDiskResourceCount(diskPrefix, instanceName, count), Check: resource.ComposeTestCheckFunc( - testCheckAttachedDiskContainsManyDisks(instanceName, count), + testCheckAttachedDiskContainsManyDisks(t, instanceName, count), ), }, }, @@ -125,9 +124,9 @@ func TestAccComputeAttachedDisk_count(t *testing.T) { // instance and the disk, whereas destroying just the attached disk should only detach the disk but // leave the instance and disk around. So just using a normal check destroy could end up with a // situation where the detach fails but since the instance/disk get destroyed we wouldn't notice. -func testCheckAttachedDiskIsNowDetached(instanceName, diskName string) resource.TestCheckFunc { +func testCheckAttachedDiskIsNowDetached(t *testing.T, instanceName, diskName string) resource.TestCheckFunc { return func(s *terraform.State) error { - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) instance, err := config.clientCompute.Instances.Get(getTestProjectFromEnv(), "us-central1-a", instanceName).Do() if err != nil { @@ -143,9 +142,9 @@ func testCheckAttachedDiskIsNowDetached(instanceName, diskName string) resource. 
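The `acctest.RandString`/`acctest.RandomWithPrefix` to `randString(t, 10)`/`randInt(t)` substitutions above are not cosmetic. Generated resource names end up baked into recorded HTTP cassettes, so replaying a test requires the "random" names to be reproducible; the provider-local helpers take `t` so they can seed from the test itself. The exact seeding scheme is not shown in this diff; the sketch below only illustrates the idea of test-seeded determinism:

```go
package main

import (
	"fmt"
	"hash/crc32"
	"math/rand"
)

// seededString shows the idea behind randString(t, n): derive the RNG seed
// from a stable per-test key, so the same test always produces the same
// suffix and replayed VCR cassettes keep matching request bodies.
func seededString(testName string, n int) string {
	r := rand.New(rand.NewSource(int64(crc32.ChecksumIEEE([]byte(testName)))))
	const letters = "abcdefghijklmnopqrstuvwxyz"
	b := make([]byte, n)
	for i := range b {
		b[i] = letters[r.Intn(len(letters))]
	}
	return string(b)
}

func main() {
	// Identical inputs give identical outputs across runs.
	fmt.Println(seededString("TestAccComputeAttachedDisk_basic", 10))
}
```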
} } -func testCheckAttachedDiskContainsManyDisks(instanceName string, count int) resource.TestCheckFunc { +func testCheckAttachedDiskContainsManyDisks(t *testing.T, instanceName string, count int) resource.TestCheckFunc { return func(s *terraform.State) error { - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) instance, err := config.clientCompute.Instances.Get(getTestProjectFromEnv(), "us-central1-a", instanceName).Do() if err != nil { diff --git a/third_party/terraform/tests/resource_compute_autoscaler_test.go.erb b/third_party/terraform/tests/resource_compute_autoscaler_test.go.erb index 57b444841174..2f7a8fc55b35 100644 --- a/third_party/terraform/tests/resource_compute_autoscaler_test.go.erb +++ b/third_party/terraform/tests/resource_compute_autoscaler_test.go.erb @@ -6,7 +6,6 @@ import ( "strings" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" "google.golang.org/api/compute/v1" @@ -15,15 +14,15 @@ import ( func TestAccComputeAutoscaler_update(t *testing.T) { t.Parallel() - var it_name = fmt.Sprintf("autoscaler-test-%s", acctest.RandString(10)) - var tp_name = fmt.Sprintf("autoscaler-test-%s", acctest.RandString(10)) - var igm_name = fmt.Sprintf("autoscaler-test-%s", acctest.RandString(10)) - var autoscaler_name = fmt.Sprintf("autoscaler-test-%s", acctest.RandString(10)) + var it_name = fmt.Sprintf("autoscaler-test-%s", randString(t, 10)) + var tp_name = fmt.Sprintf("autoscaler-test-%s", randString(t, 10)) + var igm_name = fmt.Sprintf("autoscaler-test-%s", randString(t, 10)) + var autoscaler_name = fmt.Sprintf("autoscaler-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeAutoscalerDestroy, + CheckDestroy: testAccCheckComputeAutoscalerDestroyProducer(t), Steps: []resource.TestStep{ resource.TestStep{ Config: testAccComputeAutoscaler_basic(it_name, tp_name, igm_name, autoscaler_name), @@ -49,15 +48,15 @@ func TestAccComputeAutoscaler_update(t *testing.T) { func TestAccComputeAutoscaler_multicondition(t *testing.T) { t.Parallel() - var it_name = fmt.Sprintf("autoscaler-test-%s", acctest.RandString(10)) - var tp_name = fmt.Sprintf("autoscaler-test-%s", acctest.RandString(10)) - var igm_name = fmt.Sprintf("autoscaler-test-%s", acctest.RandString(10)) - var autoscaler_name = fmt.Sprintf("autoscaler-test-%s", acctest.RandString(10)) + var it_name = fmt.Sprintf("autoscaler-test-%s", randString(t, 10)) + var tp_name = fmt.Sprintf("autoscaler-test-%s", randString(t, 10)) + var igm_name = fmt.Sprintf("autoscaler-test-%s", randString(t, 10)) + var autoscaler_name = fmt.Sprintf("autoscaler-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeAutoscalerDestroy, + CheckDestroy: testAccCheckComputeAutoscalerDestroyProducer(t), Steps: []resource.TestStep{ resource.TestStep{ Config: testAccComputeAutoscaler_multicondition(it_name, tp_name, igm_name, autoscaler_name), @@ -71,6 +70,33 @@ func TestAccComputeAutoscaler_multicondition(t *testing.T) { }) } +<% unless version == 'ga' -%> +func TestAccComputeAutoscaler_scaleDownControl(t *testing.T) { + t.Parallel() + + var it_name = fmt.Sprintf("autoscaler-test-%s", randString(t, 10)) + var 
tp_name = fmt.Sprintf("autoscaler-test-%s", randString(t, 10)) + var igm_name = fmt.Sprintf("autoscaler-test-%s", randString(t, 10)) + var autoscaler_name = fmt.Sprintf("autoscaler-test-%s", randString(t, 10)) + + vcrTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckComputeAutoscalerDestroyProducer(t), + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccComputeAutoscaler_scaleDownControl(it_name, tp_name, igm_name, autoscaler_name), + }, + resource.TestStep{ + ResourceName: "google_compute_autoscaler.foobar", + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} +<% end -%> + func testAccComputeAutoscaler_scaffolding(it_name, tp_name, igm_name string) string { return fmt.Sprintf(` data "google_compute_image" "my_image" { @@ -189,3 +215,30 @@ resource "google_compute_autoscaler" "foobar" { } `, autoscaler_name) } + +<% unless version == 'ga' -%> +func testAccComputeAutoscaler_scaleDownControl(it_name, tp_name, igm_name, autoscaler_name string) string { + return testAccComputeAutoscaler_scaffolding(it_name, tp_name, igm_name) + fmt.Sprintf(` +resource "google_compute_autoscaler" "foobar" { + description = "Resource created for Terraform acceptance testing" + name = "%s" + zone = "us-central1-a" + target = google_compute_instance_group_manager.foobar.self_link + autoscaling_policy { + max_replicas = 10 + min_replicas = 1 + cooldown_period = 60 + cpu_utilization { + target = 0.5 + } + scale_down_control { + max_scaled_down_replicas { + percent = 80 + } + time_window_sec = 300 + } + } +} +`, autoscaler_name) +} +<% end -%> diff --git a/third_party/terraform/tests/resource_compute_backend_bucket_signed_url_key_test.go b/third_party/terraform/tests/resource_compute_backend_bucket_signed_url_key_test.go index 341f1ceec759..ae6eb3bda204 100644 --- a/third_party/terraform/tests/resource_compute_backend_bucket_signed_url_key_test.go +++ b/third_party/terraform/tests/resource_compute_backend_bucket_signed_url_key_test.go @@ -6,7 +6,6 @@ import ( "strings" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" ) @@ -15,17 +14,17 @@ func TestAccComputeBackendBucketSignedUrlKey_basic(t *testing.T) { t.Parallel() context := map[string]interface{}{ - "random_suffix": acctest.RandString(10), + "random_suffix": randString(t, 10), } - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeBackendBucketSignedUrlKeyDestroy, + CheckDestroy: testAccCheckComputeBackendBucketSignedUrlKeyDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeBackendBucketSignedUrlKey_basic(context), - Check: testAccCheckComputeBackendBucketSignedUrlKeyCreated, + Check: testAccCheckComputeBackendBucketSignedUrlKeyCreatedProducer(t), }, }, }) @@ -53,29 +52,33 @@ resource "google_storage_bucket" "bucket" { `, context) } -func testAccCheckComputeBackendBucketSignedUrlKeyDestroy(s *terraform.State) error { - exists, err := checkComputeBackendBucketSignedUrlKeyExists(s) - if err != nil && !isGoogleApiErrorWithCode(err, 404) { - return err - } - if exists { - return fmt.Errorf("ComputeBackendBucketSignedUrlKey still exists") +func testAccCheckComputeBackendBucketSignedUrlKeyDestroyProducer(t *testing.T) func(s *terraform.State) error { + return func(s *terraform.State) error { + 
exists, err := checkComputeBackendBucketSignedUrlKeyExists(t, s) + if err != nil && !isGoogleApiErrorWithCode(err, 404) { + return err + } + if exists { + return fmt.Errorf("ComputeBackendBucketSignedUrlKey still exists") + } + return nil } - return nil } -func testAccCheckComputeBackendBucketSignedUrlKeyCreated(s *terraform.State) error { - exists, err := checkComputeBackendBucketSignedUrlKeyExists(s) - if err != nil { - return err - } - if !exists { - return fmt.Errorf("expected ComputeBackendBucketSignedUrlKey to have been created") +func testAccCheckComputeBackendBucketSignedUrlKeyCreatedProducer(t *testing.T) func(s *terraform.State) error { + return func(s *terraform.State) error { + exists, err := checkComputeBackendBucketSignedUrlKeyExists(t, s) + if err != nil { + return err + } + if !exists { + return fmt.Errorf("expected ComputeBackendBucketSignedUrlKey to have been created") + } + return nil } - return nil } -func checkComputeBackendBucketSignedUrlKeyExists(s *terraform.State) (bool, error) { +func checkComputeBackendBucketSignedUrlKeyExists(t *testing.T, s *terraform.State) (bool, error) { for name, rs := range s.RootModule().Resources { if rs.Type != "google_compute_backend_bucket_signed_url_key" { continue @@ -84,7 +87,7 @@ func checkComputeBackendBucketSignedUrlKeyExists(s *terraform.State) (bool, erro continue } - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) keyName := rs.Primary.Attributes["name"] url, err := replaceVarsForTest(config, rs, "{{ComputeBasePath}}projects/{{project}}/global/backendBuckets/{{backend_bucket}}") diff --git a/third_party/terraform/tests/resource_compute_backend_bucket_test.go b/third_party/terraform/tests/resource_compute_backend_bucket_test.go index 0f1a71f382a2..16d3816e5610 100644 --- a/third_party/terraform/tests/resource_compute_backend_bucket_test.go +++ b/third_party/terraform/tests/resource_compute_backend_bucket_test.go @@ -4,21 +4,20 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" ) func TestAccComputeBackendBucket_basicModified(t *testing.T) { t.Parallel() - backendName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) - storageName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) - secondStorageName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + backendName := fmt.Sprintf("tf-test-%s", randString(t, 10)) + storageName := fmt.Sprintf("tf-test-%s", randString(t, 10)) + secondStorageName := fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeBackendBucketDestroy, + CheckDestroy: testAccCheckComputeBackendBucketDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeBackendBucket_basic(backendName, storageName), @@ -44,13 +43,13 @@ func TestAccComputeBackendBucket_basicModified(t *testing.T) { func TestAccComputeBackendBucket_withCdnPolicy(t *testing.T) { t.Parallel() - backendName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) - storageName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + backendName := fmt.Sprintf("tf-test-%s", randString(t, 10)) + storageName := fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: 
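The signed-URL-key hunks show the full "producer" conversion that most other files in this diff apply mechanically: a bare `func(s *terraform.State) error` becomes a function of `t` that returns the closure, and the destroy variant still tolerates a 404 from the existence probe, since Not Found is exactly what a successful destroy should leave behind. A condensed in-package sketch around an assumed existence checker (`checkExampleExists` is hypothetical; `isGoogleApiErrorWithCode` is the helper used above):

```go
// Wired into a test case as: CheckDestroy: testAccCheckExampleDestroyProducer(t)
func testAccCheckExampleDestroyProducer(t *testing.T) func(s *terraform.State) error {
	return func(s *terraform.State) error {
		exists, err := checkExampleExists(t, s) // assumed (exists, err) probe
		if err != nil && !isGoogleApiErrorWithCode(err, 404) {
			return err // a real API failure; a 404 just means "already gone"
		}
		if exists {
			return fmt.Errorf("resource still exists")
		}
		return nil
	}
}
```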
testAccCheckComputeBackendBucketDestroy, + CheckDestroy: testAccCheckComputeBackendBucketDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeBackendBucket_withCdnPolicy(backendName, storageName), diff --git a/third_party/terraform/tests/resource_compute_backend_service_signed_url_key_test.go b/third_party/terraform/tests/resource_compute_backend_service_signed_url_key_test.go index 005c9dfd9b34..c081cbb185a3 100644 --- a/third_party/terraform/tests/resource_compute_backend_service_signed_url_key_test.go +++ b/third_party/terraform/tests/resource_compute_backend_service_signed_url_key_test.go @@ -6,7 +6,6 @@ import ( "strings" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" ) @@ -15,17 +14,17 @@ func TestAccComputeBackendServiceSignedUrlKey_basic(t *testing.T) { t.Parallel() context := map[string]interface{}{ - "random_suffix": acctest.RandString(10), + "random_suffix": randString(t, 10), } - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeBackendServiceSignedUrlKeyDestroy, + CheckDestroy: testAccCheckComputeBackendServiceSignedUrlKeyDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeBackendServiceSignedUrlKey_basic(context), - Check: testAccCheckComputeBackendServiceSignedUrlKeyCreated, + Check: testAccCheckComputeBackendServiceSignedUrlKeyCreatedProducer(t), }, }, }) @@ -53,29 +52,33 @@ resource "google_compute_http_health_check" "zero" { `, context) } -func testAccCheckComputeBackendServiceSignedUrlKeyDestroy(s *terraform.State) error { - exists, err := checkComputeBackendServiceSignedUrlKeyExists(s) - if err != nil && !isGoogleApiErrorWithCode(err, 404) { - return err - } - if exists { - return fmt.Errorf("ComputeBackendServiceSignedUrlKey still exists") +func testAccCheckComputeBackendServiceSignedUrlKeyDestroyProducer(t *testing.T) func(s *terraform.State) error { + return func(s *terraform.State) error { + exists, err := checkComputeBackendServiceSignedUrlKeyExists(t, s) + if err != nil && !isGoogleApiErrorWithCode(err, 404) { + return err + } + if exists { + return fmt.Errorf("ComputeBackendServiceSignedUrlKey still exists") + } + return nil } - return nil } -func testAccCheckComputeBackendServiceSignedUrlKeyCreated(s *terraform.State) error { - exists, err := checkComputeBackendServiceSignedUrlKeyExists(s) - if err != nil { - return err - } - if !exists { - return fmt.Errorf("expected ComputeBackendServiceSignedUrlKey to have been created") +func testAccCheckComputeBackendServiceSignedUrlKeyCreatedProducer(t *testing.T) func(s *terraform.State) error { + return func(s *terraform.State) error { + exists, err := checkComputeBackendServiceSignedUrlKeyExists(t, s) + if err != nil { + return err + } + if !exists { + return fmt.Errorf("expected ComputeBackendServiceSignedUrlKey to have been created") + } + return nil } - return nil } -func checkComputeBackendServiceSignedUrlKeyExists(s *terraform.State) (bool, error) { +func checkComputeBackendServiceSignedUrlKeyExists(t *testing.T, s *terraform.State) (bool, error) { for name, rs := range s.RootModule().Resources { if rs.Type != "google_compute_backend_service_signed_url_key" { continue @@ -84,7 +87,7 @@ func checkComputeBackendServiceSignedUrlKeyExists(s *terraform.State) (bool, err continue } - config := testAccProvider.Meta().(*Config) + config 
:= googleProviderConfig(t) keyName := rs.Primary.Attributes["name"] url, err := replaceVarsForTest(config, rs, "{{ComputeBasePath}}projects/{{project}}/global/backendServices/{{backend_service}}") diff --git a/third_party/terraform/tests/resource_compute_backend_service_test.go.erb b/third_party/terraform/tests/resource_compute_backend_service_test.go.erb index ce539c53eeb4..143941f0e3e0 100644 --- a/third_party/terraform/tests/resource_compute_backend_service_test.go.erb +++ b/third_party/terraform/tests/resource_compute_backend_service_test.go.erb @@ -6,7 +6,6 @@ import ( "regexp" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" "google.golang.org/api/compute/v1" @@ -15,14 +14,14 @@ import ( func TestAccComputeBackendService_basic(t *testing.T) { t.Parallel() - serviceName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) - checkName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) - extraCheckName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + serviceName := fmt.Sprintf("tf-test-%s", randString(t, 10)) + checkName := fmt.Sprintf("tf-test-%s", randString(t, 10)) + extraCheckName := fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeBackendServiceDestroy, + CheckDestroy: testAccCheckComputeBackendServiceDestroyProducer(t), Steps: []resource.TestStep{ resource.TestStep{ Config: testAccComputeBackendService_basic(serviceName, checkName), @@ -48,14 +47,14 @@ func TestAccComputeBackendService_basic(t *testing.T) { func TestAccComputeBackendService_withBackend(t *testing.T) { t.Parallel() - serviceName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) - igName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) - itName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) - checkName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) - resource.Test(t, resource.TestCase{ + serviceName := fmt.Sprintf("tf-test-%s", randString(t, 10)) + igName := fmt.Sprintf("tf-test-%s", randString(t, 10)) + itName := fmt.Sprintf("tf-test-%s", randString(t, 10)) + checkName := fmt.Sprintf("tf-test-%s", randString(t, 10)) + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeBackendServiceDestroy, + CheckDestroy: testAccCheckComputeBackendServiceDestroyProducer(t), Steps: []resource.TestStep{ resource.TestStep{ Config: testAccComputeBackendService_withBackend( @@ -80,14 +79,14 @@ func TestAccComputeBackendService_withBackend(t *testing.T) { } func TestAccComputeBackendService_withBackendAndIAP(t *testing.T) { - serviceName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) - igName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) - itName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) - checkName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) - resource.Test(t, resource.TestCase{ + serviceName := fmt.Sprintf("tf-test-%s", randString(t, 10)) + igName := fmt.Sprintf("tf-test-%s", randString(t, 10)) + itName := fmt.Sprintf("tf-test-%s", randString(t, 10)) + checkName := fmt.Sprintf("tf-test-%s", randString(t, 10)) + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeBackendServiceDestroy, + CheckDestroy: 
testAccCheckComputeBackendServiceDestroyProducer(t), Steps: []resource.TestStep{ resource.TestStep{ Config: testAccComputeBackendService_withBackendAndIAP( @@ -115,13 +114,13 @@ func TestAccComputeBackendService_withBackendAndIAP(t *testing.T) { func TestAccComputeBackendService_updatePreservesOptionalParameters(t *testing.T) { t.Parallel() - serviceName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) - checkName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + serviceName := fmt.Sprintf("tf-test-%s", randString(t, 10)) + checkName := fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeBackendServiceDestroy, + CheckDestroy: testAccCheckComputeBackendServiceDestroyProducer(t), Steps: []resource.TestStep{ resource.TestStep{ Config: testAccComputeBackendService_withSessionAffinity( @@ -148,13 +147,13 @@ func TestAccComputeBackendService_updatePreservesOptionalParameters(t *testing.T func TestAccComputeBackendService_withConnectionDraining(t *testing.T) { t.Parallel() - serviceName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) - checkName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + serviceName := fmt.Sprintf("tf-test-%s", randString(t, 10)) + checkName := fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeBackendServiceDestroy, + CheckDestroy: testAccCheckComputeBackendServiceDestroyProducer(t), Steps: []resource.TestStep{ resource.TestStep{ Config: testAccComputeBackendService_withConnectionDraining(serviceName, checkName, 10), @@ -171,13 +170,13 @@ func TestAccComputeBackendService_withConnectionDraining(t *testing.T) { func TestAccComputeBackendService_withConnectionDrainingAndUpdate(t *testing.T) { t.Parallel() - serviceName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) - checkName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + serviceName := fmt.Sprintf("tf-test-%s", randString(t, 10)) + checkName := fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeBackendServiceDestroy, + CheckDestroy: testAccCheckComputeBackendServiceDestroyProducer(t), Steps: []resource.TestStep{ resource.TestStep{ Config: testAccComputeBackendService_withConnectionDraining(serviceName, checkName, 10), @@ -202,13 +201,13 @@ func TestAccComputeBackendService_withConnectionDrainingAndUpdate(t *testing.T) func TestAccComputeBackendService_withHttpsHealthCheck(t *testing.T) { t.Parallel() - serviceName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) - checkName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + serviceName := fmt.Sprintf("tf-test-%s", randString(t, 10)) + checkName := fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeBackendServiceDestroy, + CheckDestroy: testAccCheckComputeBackendServiceDestroyProducer(t), Steps: []resource.TestStep{ resource.TestStep{ Config: testAccComputeBackendService_withHttpsHealthCheck(serviceName, checkName), @@ -225,13 +224,13 @@ func 
TestAccComputeBackendService_withHttpsHealthCheck(t *testing.T) { func TestAccComputeBackendService_withCdnPolicy(t *testing.T) { t.Parallel() - serviceName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) - checkName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + serviceName := fmt.Sprintf("tf-test-%s", randString(t, 10)) + checkName := fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeBackendServiceDestroy, + CheckDestroy: testAccCheckComputeBackendServiceDestroyProducer(t), Steps: []resource.TestStep{ resource.TestStep{ Config: testAccComputeBackendService_withCdnPolicy(serviceName, checkName), @@ -248,14 +247,14 @@ func TestAccComputeBackendService_withCdnPolicy(t *testing.T) { func TestAccComputeBackendService_withSecurityPolicy(t *testing.T) { t.Parallel() - serviceName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) - checkName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) - polName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + serviceName := fmt.Sprintf("tf-test-%s", randString(t, 10)) + checkName := fmt.Sprintf("tf-test-%s", randString(t, 10)) + polName := fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeBackendServiceDestroy, + CheckDestroy: testAccCheckComputeBackendServiceDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeBackendService_withSecurityPolicy(serviceName, checkName, polName, "google_compute_security_policy.policy.self_link"), @@ -280,13 +279,13 @@ func TestAccComputeBackendService_withSecurityPolicy(t *testing.T) { func TestAccComputeBackendService_withCDNEnabled(t *testing.T) { t.Parallel() - serviceName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) - checkName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + serviceName := fmt.Sprintf("tf-test-%s", randString(t, 10)) + checkName := fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeBackendServiceDestroy, + CheckDestroy: testAccCheckComputeBackendServiceDestroyProducer(t), Steps: []resource.TestStep{ resource.TestStep{ Config: testAccComputeBackendService_withCDNEnabled( @@ -304,13 +303,13 @@ func TestAccComputeBackendService_withCDNEnabled(t *testing.T) { func TestAccComputeBackendService_withSessionAffinity(t *testing.T) { t.Parallel() - serviceName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) - checkName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + serviceName := fmt.Sprintf("tf-test-%s", randString(t, 10)) + checkName := fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeBackendServiceDestroy, + CheckDestroy: testAccCheckComputeBackendServiceDestroyProducer(t), Steps: []resource.TestStep{ resource.TestStep{ Config: testAccComputeBackendService_withSessionAffinity( @@ -337,13 +336,13 @@ func TestAccComputeBackendService_withSessionAffinity(t *testing.T) { func TestAccComputeBackendService_withAffinityCookieTtlSec(t *testing.T) { t.Parallel() - serviceName := 
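Every `resource.Test` in these files becomes `vcrTest` with an otherwise unchanged test case, which is what makes the conversion purely mechanical. The wrapper itself is not part of this diff; from the call sites, a plausible minimal shape is a pass-through that arranges HTTP record/replay around the test (`isVcrEnabled` and `closeRecorder` below are assumptions standing in for whatever the provider actually uses):

```go
// Sketch only: same signature as resource.Test, so call sites change 1:1.
func vcrTest(t *testing.T, c resource.TestCase) {
	if isVcrEnabled() { // assumed: are we recording or replaying cassettes?
		defer closeRecorder(t) // assumed: flush this test's cassette when done
	}
	resource.Test(t, c)
}
```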
fmt.Sprintf("tf-test-%s", acctest.RandString(10)) - checkName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + serviceName := fmt.Sprintf("tf-test-%s", randString(t, 10)) + checkName := fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeBackendServiceDestroy, + CheckDestroy: testAccCheckComputeBackendServiceDestroyProducer(t), Steps: []resource.TestStep{ resource.TestStep{ Config: testAccComputeBackendService_withAffinityCookieTtlSec( @@ -361,15 +360,15 @@ func TestAccComputeBackendService_withAffinityCookieTtlSec(t *testing.T) { func TestAccComputeBackendService_withMaxConnections(t *testing.T) { t.Parallel() - serviceName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) - igName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) - itName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) - checkName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + serviceName := fmt.Sprintf("tf-test-%s", randString(t, 10)) + igName := fmt.Sprintf("tf-test-%s", randString(t, 10)) + itName := fmt.Sprintf("tf-test-%s", randString(t, 10)) + checkName := fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeBackendServiceDestroy, + CheckDestroy: testAccCheckComputeBackendServiceDestroyProducer(t), Steps: []resource.TestStep{ resource.TestStep{ Config: testAccComputeBackendService_withMaxConnections( @@ -396,15 +395,15 @@ func TestAccComputeBackendService_withMaxConnections(t *testing.T) { func TestAccComputeBackendService_withMaxConnectionsPerInstance(t *testing.T) { t.Parallel() - serviceName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) - igName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) - itName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) - checkName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + serviceName := fmt.Sprintf("tf-test-%s", randString(t, 10)) + igName := fmt.Sprintf("tf-test-%s", randString(t, 10)) + itName := fmt.Sprintf("tf-test-%s", randString(t, 10)) + checkName := fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeBackendServiceDestroy, + CheckDestroy: testAccCheckComputeBackendServiceDestroyProducer(t), Steps: []resource.TestStep{ resource.TestStep{ Config: testAccComputeBackendService_withMaxConnectionsPerInstance( @@ -431,17 +430,17 @@ func TestAccComputeBackendService_withMaxConnectionsPerInstance(t *testing.T) { func TestAccComputeBackendService_withMaxRatePerEndpoint(t *testing.T) { t.Parallel() - randSuffix := acctest.RandString(10) + randSuffix := randString(t, 10) service := fmt.Sprintf("tf-test-%s", randSuffix) instance := fmt.Sprintf("tf-test-%s", randSuffix) neg := fmt.Sprintf("tf-test-%s", randSuffix) network := fmt.Sprintf("tf-test-%s", randSuffix) check := fmt.Sprintf("tf-test-%s", randSuffix) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeBackendServiceDestroy, + CheckDestroy: testAccCheckComputeBackendServiceDestroyProducer(t), Steps: []resource.TestStep{ { Config: 
testAccComputeBackendService_withMaxRatePerEndpoint( @@ -468,17 +467,17 @@ func TestAccComputeBackendService_withMaxRatePerEndpoint(t *testing.T) { func TestAccComputeBackendService_withMaxConnectionsPerEndpoint(t *testing.T) { t.Parallel() - randSuffix := acctest.RandString(10) + randSuffix := randString(t, 10) service := fmt.Sprintf("tf-test-%s", randSuffix) instance := fmt.Sprintf("tf-test-%s", randSuffix) neg := fmt.Sprintf("tf-test-%s", randSuffix) network := fmt.Sprintf("tf-test-%s", randSuffix) check := fmt.Sprintf("tf-test-%s", randSuffix) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeBackendServiceDestroy, + CheckDestroy: testAccCheckComputeBackendServiceDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeBackendService_withMaxConnectionsPerEndpoint( @@ -502,17 +501,16 @@ func TestAccComputeBackendService_withMaxConnectionsPerEndpoint(t *testing.T) { }) } -<% unless version == 'ga' -%> func TestAccComputeBackendService_withCustomHeaders(t *testing.T) { t.Parallel() - serviceName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) - checkName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + serviceName := fmt.Sprintf("tf-test-%s", randString(t, 10)) + checkName := fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeBackendServiceDestroy, + CheckDestroy: testAccCheckComputeBackendServiceDestroyProducer(t), Steps: []resource.TestStep{ resource.TestStep{ Config: testAccComputeBackendService_withCustomHeaders(serviceName, checkName), @@ -533,22 +531,20 @@ func TestAccComputeBackendService_withCustomHeaders(t *testing.T) { }, }) } -<% end -%> -<% unless version == 'ga' -%> func TestAccComputeBackendService_internalLoadBalancing(t *testing.T) { t.Parallel() - fr := fmt.Sprintf("forwardrule-test-%s", acctest.RandString(10)) - proxy := fmt.Sprintf("forwardrule-test-%s", acctest.RandString(10)) - backend := fmt.Sprintf("forwardrule-test-%s", acctest.RandString(10)) - hc := fmt.Sprintf("forwardrule-test-%s", acctest.RandString(10)) - urlmap := fmt.Sprintf("forwardrule-test-%s", acctest.RandString(10)) + fr := fmt.Sprintf("forwardrule-test-%s", randString(t, 10)) + proxy := fmt.Sprintf("forwardrule-test-%s", randString(t, 10)) + backend := fmt.Sprintf("forwardrule-test-%s", randString(t, 10)) + hc := fmt.Sprintf("forwardrule-test-%s", randString(t, 10)) + urlmap := fmt.Sprintf("forwardrule-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeBackendServiceDestroy, + CheckDestroy: testAccCheckComputeBackendServiceDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeBackendService_internalLoadBalancing(fr, proxy, backend, hc, urlmap), @@ -561,19 +557,17 @@ func TestAccComputeBackendService_internalLoadBalancing(t *testing.T) { }, }) } -<% end -%> -<% unless version == 'ga' -%> func TestAccComputeBackendService_withLogConfig(t *testing.T) { t.Parallel() - serviceName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) - checkName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + serviceName := fmt.Sprintf("tf-test-%s", randString(t, 10)) + checkName := fmt.Sprintf("tf-test-%s", randString(t, 10)) - 
resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeBackendServiceDestroy, + CheckDestroy: testAccCheckComputeBackendServiceDestroyProducer(t), Steps: []resource.TestStep{ resource.TestStep{ Config: testAccComputeBackendService_withLogConfig(serviceName, checkName, 0.7), @@ -594,19 +588,17 @@ func TestAccComputeBackendService_withLogConfig(t *testing.T) { }, }) } -<% end -%> -<% unless version == 'ga' -%> func TestAccComputeBackendService_trafficDirectorUpdateBasic(t *testing.T) { t.Parallel() - backendName := fmt.Sprintf("foo-%s", acctest.RandString(10)) - checkName := fmt.Sprintf("bar-%s", acctest.RandString(10)) + backendName := fmt.Sprintf("foo-%s", randString(t, 10)) + checkName := fmt.Sprintf("bar-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeBackendServiceDestroy, + CheckDestroy: testAccCheckComputeBackendServiceDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeBackendService_trafficDirectorBasic(backendName, checkName), @@ -627,19 +619,18 @@ func TestAccComputeBackendService_trafficDirectorUpdateBasic(t *testing.T) { }, }) } -<% end -%> <% unless version == 'ga' -%> func TestAccComputeBackendService_trafficDirectorUpdateFull(t *testing.T) { t.Parallel() - backendName := fmt.Sprintf("foo-%s", acctest.RandString(10)) - checkName := fmt.Sprintf("bar-%s", acctest.RandString(10)) + backendName := fmt.Sprintf("foo-%s", randString(t, 10)) + checkName := fmt.Sprintf("bar-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeBackendServiceDestroy, + CheckDestroy: testAccCheckComputeBackendServiceDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeBackendService_trafficDirectorFull(backendName, checkName), @@ -662,7 +653,6 @@ func TestAccComputeBackendService_trafficDirectorUpdateFull(t *testing.T) { } <% end -%> -<% unless version == 'ga' -%> func testAccComputeBackendService_trafficDirectorBasic(serviceName, checkName string) string { return fmt.Sprintf(` resource "google_compute_backend_service" "foobar" { @@ -695,9 +685,7 @@ resource "google_compute_health_check" "health_check" { } `, serviceName, checkName) } -<% end -%> -<% unless version == 'ga' -%> func testAccComputeBackendService_trafficDirectorUpdateBasic(serviceName, checkName string) string { return fmt.Sprintf(` resource "google_compute_backend_service" "foobar" { @@ -721,7 +709,6 @@ resource "google_compute_health_check" "health_check" { } `, serviceName, checkName) } -<% end -%> <% unless version == 'ga' -%> func testAccComputeBackendService_trafficDirectorFull(serviceName, checkName string) string { @@ -1385,7 +1372,6 @@ resource "google_compute_health_check" "default" { `, service, maxRate, instance, neg, network, network, check) } -<% unless version == 'ga' -%> func testAccComputeBackendService_withCustomHeaders(serviceName, checkName string) string { return fmt.Sprintf(` resource "google_compute_backend_service" "foobar" { @@ -1403,9 +1389,7 @@ resource "google_compute_http_health_check" "zero" { } `, serviceName, checkName) } -<% end -%> -<% unless version == 'ga' -%> <%# This test is for import functionality. 
It can be removed and added to examples when this goes GA %> func testAccComputeBackendService_internalLoadBalancing(fr, proxy, backend, hc, urlmap string) string { return fmt.Sprintf(` @@ -1503,9 +1487,7 @@ resource "google_compute_instance_template" "foobar" { } `, fr, proxy, backend, hc, urlmap) } -<% end -%> -<% unless version == 'ga' -%> func testAccComputeBackendService_withLogConfig(serviceName, checkName string, sampleRate float64) string { return fmt.Sprintf(` resource "google_compute_backend_service" "foobar" { @@ -1526,4 +1508,3 @@ resource "google_compute_http_health_check" "zero" { } `, serviceName, sampleRate, checkName) } -<% end -%> diff --git a/third_party/terraform/tests/resource_compute_disk_resource_policy_attachment_test.go b/third_party/terraform/tests/resource_compute_disk_resource_policy_attachment_test.go index 8ab3a2c89629..1a21c0f8dc6a 100644 --- a/third_party/terraform/tests/resource_compute_disk_resource_policy_attachment_test.go +++ b/third_party/terraform/tests/resource_compute_disk_resource_policy_attachment_test.go @@ -4,18 +4,17 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" ) func TestAccComputeDiskResourcePolicyAttachment_update(t *testing.T) { t.Parallel() - diskName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) - policyName := fmt.Sprintf("tf-test-policy-%s", acctest.RandString(10)) - policyName2 := fmt.Sprintf("tf-test-policy-%s", acctest.RandString(10)) + diskName := fmt.Sprintf("tf-test-%s", randString(t, 10)) + policyName := fmt.Sprintf("tf-test-policy-%s", randString(t, 10)) + policyName2 := fmt.Sprintf("tf-test-policy-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ diff --git a/third_party/terraform/tests/resource_compute_disk_test.go.erb b/third_party/terraform/tests/resource_compute_disk_test.go.erb index fc867a95342b..ff086a01cb8a 100644 --- a/third_party/terraform/tests/resource_compute_disk_test.go.erb +++ b/third_party/terraform/tests/resource_compute_disk_test.go.erb @@ -8,7 +8,6 @@ import ( "strconv" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" "google.golang.org/api/compute/v1" @@ -207,10 +206,12 @@ func TestAccComputeDisk_imageDiffSuppressPublicVendorsFamilyNames(t *testing.T) } func TestAccComputeDisk_timeout(t *testing.T) { + // Vcr speeds up test, so it doesn't time out + skipIfVcr(t) t.Parallel() - diskName := acctest.RandomWithPrefix("tf-test-disk") - resource.Test(t, resource.TestCase{ + diskName := fmt.Sprintf("tf-test-disk-%d", randInt(t)) + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -225,9 +226,9 @@ func TestAccComputeDisk_timeout(t *testing.T) { func TestAccComputeDisk_update(t *testing.T) { t.Parallel() - diskName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + diskName := fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -254,15 +255,15 @@ func TestAccComputeDisk_update(t *testing.T) { func TestAccComputeDisk_fromSnapshot(t *testing.T) { t.Parallel() - diskName := 
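Several `<% unless version == 'ga' -%> ... <% end -%>` guards are dropped in the backend-service hunks above. These test files are ERB templates: the guard excludes a block from the generated GA provider, so deleting the guard (while keeping the body) promotes a formerly beta-only test and its config helpers to both providers. The gating pattern, as used throughout these `.go.erb` files:

```go
<% unless version == 'ga' -%>
func TestAccExample_betaOnly(t *testing.T) {
	t.Parallel()
	// generated into the google-beta provider only; the GA build omits it
}
<% end -%>
```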
fmt.Sprintf("tf-test-%s", acctest.RandString(10)) - firstDiskName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) - snapshotName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + diskName := fmt.Sprintf("tf-test-%s", randString(t, 10)) + firstDiskName := fmt.Sprintf("tf-test-%s", randString(t, 10)) + snapshotName := fmt.Sprintf("tf-test-%s", randString(t, 10)) projectName := getTestProjectFromEnv() - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeDiskDestroy, + CheckDestroy: testAccCheckComputeDiskDestroyProducer(t), Steps: []resource.TestStep{ resource.TestStep{ Config: testAccComputeDisk_fromSnapshot(projectName, firstDiskName, snapshotName, diskName, "self_link"), @@ -287,21 +288,21 @@ func TestAccComputeDisk_fromSnapshot(t *testing.T) { func TestAccComputeDisk_encryption(t *testing.T) { t.Parallel() - diskName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + diskName := fmt.Sprintf("tf-test-%s", randString(t, 10)) var disk compute.Disk - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeDiskDestroy, + CheckDestroy: testAccCheckComputeDiskDestroyProducer(t), Steps: []resource.TestStep{ resource.TestStep{ Config: testAccComputeDisk_encryption(diskName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeDiskExists( - "google_compute_disk.foobar", getTestProjectFromEnv(), &disk), + t, "google_compute_disk.foobar", getTestProjectFromEnv(), &disk), testAccCheckEncryptionKey( - "google_compute_disk.foobar", &disk), + t, "google_compute_disk.foobar", &disk), ), }, }, @@ -313,22 +314,22 @@ func TestAccComputeDisk_encryptionKMS(t *testing.T) { kms := BootstrapKMSKey(t) pid := getTestProjectFromEnv() - diskName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + diskName := fmt.Sprintf("tf-test-%s", randString(t, 10)) importID := fmt.Sprintf("%s/%s/%s", pid, "us-central1-a", diskName) var disk compute.Disk - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeDiskDestroy, + CheckDestroy: testAccCheckComputeDiskDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeDisk_encryptionKMS(pid, diskName, kms.CryptoKey.Name), Check: resource.ComposeTestCheckFunc( testAccCheckComputeDiskExists( - "google_compute_disk.foobar", pid, &disk), + t, "google_compute_disk.foobar", pid, &disk), testAccCheckEncryptionKey( - "google_compute_disk.foobar", &disk), + t, "google_compute_disk.foobar", &disk), ), }, { @@ -344,13 +345,13 @@ func TestAccComputeDisk_encryptionKMS(t *testing.T) { func TestAccComputeDisk_deleteDetach(t *testing.T) { t.Parallel() - diskName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) - instanceName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + diskName := fmt.Sprintf("tf-test-%s", randString(t, 10)) + instanceName := fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeDiskDestroy, + CheckDestroy: testAccCheckComputeDiskDestroyProducer(t), Steps: []resource.TestStep{ resource.TestStep{ Config: testAccComputeDisk_deleteDetach(instanceName, diskName), @@ -377,16 +378,18 @@ func 
TestAccComputeDisk_deleteDetach(t *testing.T) { } func TestAccComputeDisk_deleteDetachIGM(t *testing.T) { + // Randomness in instance template + skipIfVcr(t) t.Parallel() - diskName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) - diskName2 := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) - mgrName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + diskName := fmt.Sprintf("tf-test-%s", randString(t, 10)) + diskName2 := fmt.Sprintf("tf-test-%s", randString(t, 10)) + mgrName := fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeDiskDestroy, + CheckDestroy: testAccCheckComputeDiskDestroyProducer(t), Steps: []resource.TestStep{ resource.TestStep{ Config: testAccComputeDisk_deleteDetachIGM(diskName, mgrName), @@ -434,10 +437,10 @@ func TestAccComputeDisk_deleteDetachIGM(t *testing.T) { func TestAccComputeDisk_resourcePolicies(t *testing.T) { t.Parallel() - diskName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) - policyName := fmt.Sprintf("tf-test-policy-%s", acctest.RandString(10)) + diskName := fmt.Sprintf("tf-test-%s", randString(t, 10)) + policyName := fmt.Sprintf("tf-test-policy-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -454,7 +457,7 @@ func TestAccComputeDisk_resourcePolicies(t *testing.T) { } <% end -%> -func testAccCheckComputeDiskExists(n, p string, disk *compute.Disk) resource.TestCheckFunc { +func testAccCheckComputeDiskExists(t *testing.T, n, p string, disk *compute.Disk) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] if !ok { @@ -465,7 +468,7 @@ func testAccCheckComputeDiskExists(n, p string, disk *compute.Disk) resource.Tes return fmt.Errorf("No ID is set") } - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) found, err := config.clientCompute.Disks.Get( p, rs.Primary.Attributes["zone"], rs.Primary.Attributes["name"]).Do() @@ -483,7 +486,7 @@ func testAccCheckComputeDiskExists(n, p string, disk *compute.Disk) resource.Tes } } -func testAccCheckEncryptionKey(n string, disk *compute.Disk) resource.TestCheckFunc { +func testAccCheckEncryptionKey(t *testing.T, n string, disk *compute.Disk) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] if !ok { diff --git a/third_party/terraform/tests/resource_compute_firewall_test.go.erb b/third_party/terraform/tests/resource_compute_firewall_test.go.erb index 11ffbfbeffc0..2e94835cac34 100644 --- a/third_party/terraform/tests/resource_compute_firewall_test.go.erb +++ b/third_party/terraform/tests/resource_compute_firewall_test.go.erb @@ -5,20 +5,19 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" ) func TestAccComputeFirewall_update(t *testing.T) { t.Parallel() - networkName := fmt.Sprintf("firewall-test-%s", acctest.RandString(10)) - firewallName := fmt.Sprintf("firewall-test-%s", acctest.RandString(10)) + networkName := fmt.Sprintf("firewall-test-%s", randString(t, 10)) + firewallName := fmt.Sprintf("firewall-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, 
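Two disk tests above gain `skipIfVcr(t)` with one-line justifications: `TestAccComputeDisk_timeout` exists to provoke a timeout that replayed (much faster) requests would never hit, and `TestAccComputeDisk_deleteDetachIGM` generates randomness inside an instance template that a recorded cassette cannot reproduce. Only the call sites appear in this diff; a plausible sketch of the guard, assuming a VCR-mode environment switch:

```go
// Assumed implementation: the real helper may key off a different variable.
func skipIfVcr(t *testing.T) {
	if os.Getenv("VCR_MODE") != "" {
		t.Skipf("%s is not VCR-compatible, skipping", t.Name())
	}
}
```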
- CheckDestroy: testAccCheckComputeFirewallDestroy, + CheckDestroy: testAccCheckComputeFirewallDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeFirewall_basic(networkName, firewallName), @@ -51,13 +50,13 @@ func TestAccComputeFirewall_update(t *testing.T) { func TestAccComputeFirewall_priority(t *testing.T) { t.Parallel() - networkName := fmt.Sprintf("firewall-test-%s", acctest.RandString(10)) - firewallName := fmt.Sprintf("firewall-test-%s", acctest.RandString(10)) + networkName := fmt.Sprintf("firewall-test-%s", randString(t, 10)) + firewallName := fmt.Sprintf("firewall-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeFirewallDestroy, + CheckDestroy: testAccCheckComputeFirewallDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeFirewall_priority(networkName, firewallName, 1001), @@ -74,13 +73,13 @@ func TestAccComputeFirewall_priority(t *testing.T) { func TestAccComputeFirewall_noSource(t *testing.T) { t.Parallel() - networkName := fmt.Sprintf("firewall-test-%s", acctest.RandString(10)) - firewallName := fmt.Sprintf("firewall-test-%s", acctest.RandString(10)) + networkName := fmt.Sprintf("firewall-test-%s", randString(t, 10)) + firewallName := fmt.Sprintf("firewall-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeFirewallDestroy, + CheckDestroy: testAccCheckComputeFirewallDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeFirewall_noSource(networkName, firewallName), @@ -97,13 +96,13 @@ func TestAccComputeFirewall_noSource(t *testing.T) { func TestAccComputeFirewall_denied(t *testing.T) { t.Parallel() - networkName := fmt.Sprintf("firewall-test-%s", acctest.RandString(10)) - firewallName := fmt.Sprintf("firewall-test-%s", acctest.RandString(10)) + networkName := fmt.Sprintf("firewall-test-%s", randString(t, 10)) + firewallName := fmt.Sprintf("firewall-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeFirewallDestroy, + CheckDestroy: testAccCheckComputeFirewallDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeFirewall_denied(networkName, firewallName), @@ -120,13 +119,13 @@ func TestAccComputeFirewall_denied(t *testing.T) { func TestAccComputeFirewall_egress(t *testing.T) { t.Parallel() - networkName := fmt.Sprintf("firewall-test-%s", acctest.RandString(10)) - firewallName := fmt.Sprintf("firewall-test-%s", acctest.RandString(10)) + networkName := fmt.Sprintf("firewall-test-%s", randString(t, 10)) + firewallName := fmt.Sprintf("firewall-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeFirewallDestroy, + CheckDestroy: testAccCheckComputeFirewallDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeFirewall_egress(networkName, firewallName), @@ -143,16 +142,16 @@ func TestAccComputeFirewall_egress(t *testing.T) { func TestAccComputeFirewall_serviceAccounts(t *testing.T) { t.Parallel() - networkName := fmt.Sprintf("firewall-test-%s", acctest.RandString(10)) - firewallName := 
fmt.Sprintf("firewall-test-%s", acctest.RandString(10)) + networkName := fmt.Sprintf("firewall-test-%s", randString(t, 10)) + firewallName := fmt.Sprintf("firewall-test-%s", randString(t, 10)) - sourceSa := fmt.Sprintf("firewall-test-%s", acctest.RandString(10)) - targetSa := fmt.Sprintf("firewall-test-%s", acctest.RandString(10)) + sourceSa := fmt.Sprintf("firewall-test-%s", randString(t, 10)) + targetSa := fmt.Sprintf("firewall-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeFirewallDestroy, + CheckDestroy: testAccCheckComputeFirewallDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeFirewall_serviceAccounts(sourceSa, targetSa, networkName, firewallName), @@ -169,13 +168,13 @@ func TestAccComputeFirewall_serviceAccounts(t *testing.T) { func TestAccComputeFirewall_disabled(t *testing.T) { t.Parallel() - networkName := fmt.Sprintf("firewall-test-%s", acctest.RandString(10)) - firewallName := fmt.Sprintf("firewall-test-%s", acctest.RandString(10)) + networkName := fmt.Sprintf("firewall-test-%s", randString(t, 10)) + firewallName := fmt.Sprintf("firewall-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeFirewallDestroy, + CheckDestroy: testAccCheckComputeFirewallDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeFirewall_disabled(networkName, firewallName), @@ -200,16 +199,16 @@ func TestAccComputeFirewall_disabled(t *testing.T) { func TestAccComputeFirewall_enableLogging(t *testing.T) { t.Parallel() - networkName := fmt.Sprintf("firewall-test-%s", acctest.RandString(10)) - firewallName := fmt.Sprintf("firewall-test-%s", acctest.RandString(10)) + networkName := fmt.Sprintf("firewall-test-%s", randString(t, 10)) + firewallName := fmt.Sprintf("firewall-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeFirewallDestroy, + CheckDestroy: testAccCheckComputeFirewallDestroyProducer(t), Steps: []resource.TestStep{ { - Config: testAccComputeFirewall_enableLogging(networkName, firewallName, false), + Config: testAccComputeFirewall_enableLogging(networkName, firewallName, ""), }, { ResourceName: "google_compute_firewall.foobar", @@ -217,7 +216,7 @@ func TestAccComputeFirewall_enableLogging(t *testing.T) { ImportStateVerify: true, }, { - Config: testAccComputeFirewall_enableLogging(networkName, firewallName, true), + Config: testAccComputeFirewall_enableLogging(networkName, firewallName, "INCLUDE_ALL_METADATA"), }, { ResourceName: "google_compute_firewall.foobar", @@ -225,7 +224,15 @@ func TestAccComputeFirewall_enableLogging(t *testing.T) { ImportStateVerify: true, }, { - Config: testAccComputeFirewall_enableLogging(networkName, firewallName, false), + Config: testAccComputeFirewall_enableLogging(networkName, firewallName, "EXCLUDE_ALL_METADATA"), + }, + { + ResourceName: "google_compute_firewall.foobar", + ImportState: true, + ImportStateVerify: true, + }, + { + Config: testAccComputeFirewall_enableLogging(networkName, firewallName, ""), }, { ResourceName: "google_compute_firewall.foobar", @@ -413,10 +420,13 @@ resource "google_compute_firewall" "foobar" { `, network, firewall) } -func 
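The firewall logging steps above change more than plumbing, and the helper rewritten just below follows suit: the boolean `enable_logging` attribute gives way to a `log_config` block whose presence enables logging and whose `metadata` field (`INCLUDE_ALL_METADATA` or `EXCLUDE_ALL_METADATA`) selects how much is captured, so the helper now takes the metadata string and renders no block at all for `""`. A runnable sketch mirroring that conditional rendering:

```go
package main

import "fmt"

// logConfigSnippet mirrors the rewritten helper: empty metadata renders no
// log_config block (logging off); otherwise the block enables logging with
// the requested metadata level.
func logConfigSnippet(metadata string) string {
	if metadata == "" {
		return ""
	}
	return fmt.Sprintf("log_config {\n  metadata = %q\n}\n", metadata)
}

func main() {
	fmt.Print(logConfigSnippet(""))                     // logging disabled
	fmt.Print(logConfigSnippet("INCLUDE_ALL_METADATA")) // log with full metadata
	fmt.Print(logConfigSnippet("EXCLUDE_ALL_METADATA")) // log without metadata
}
```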
testAccComputeFirewall_enableLogging(network, firewall string, enableLogging bool) string { +func testAccComputeFirewall_enableLogging(network, firewall, logging string) string { enableLoggingCfg := "" - if enableLogging { - enableLoggingCfg = "enable_logging= true" + if logging != "" { + enableLoggingCfg = fmt.Sprintf(`log_config { + metadata = "%s" + } + `, logging) } return fmt.Sprintf(` resource "google_compute_network" "foobar" { diff --git a/third_party/terraform/tests/resource_compute_forwarding_rule_test.go.erb b/third_party/terraform/tests/resource_compute_forwarding_rule_test.go.erb index 2b13e46f2d6f..d1564c7e823c 100644 --- a/third_party/terraform/tests/resource_compute_forwarding_rule_test.go.erb +++ b/third_party/terraform/tests/resource_compute_forwarding_rule_test.go.erb @@ -5,20 +5,19 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" ) func TestAccComputeForwardingRule_update(t *testing.T) { t.Parallel() - poolName := fmt.Sprintf("tf-%s", acctest.RandString(10)) - ruleName := fmt.Sprintf("tf-%s", acctest.RandString(10)) + poolName := fmt.Sprintf("tf-%s", randString(t, 10)) + ruleName := fmt.Sprintf("tf-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeForwardingRuleDestroy, + CheckDestroy: testAccCheckComputeForwardingRuleDestroyProducer(t), Steps: []resource.TestStep{ resource.TestStep{ Config: testAccComputeForwardingRule_basic(poolName, ruleName), @@ -43,14 +42,14 @@ func TestAccComputeForwardingRule_update(t *testing.T) { func TestAccComputeForwardingRule_ip(t *testing.T) { t.Parallel() - addrName := fmt.Sprintf("tf-%s", acctest.RandString(10)) - poolName := fmt.Sprintf("tf-%s", acctest.RandString(10)) - ruleName := fmt.Sprintf("tf-%s", acctest.RandString(10)) + addrName := fmt.Sprintf("tf-%s", randString(t, 10)) + poolName := fmt.Sprintf("tf-%s", randString(t, 10)) + ruleName := fmt.Sprintf("tf-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeForwardingRuleDestroy, + CheckDestroy: testAccCheckComputeForwardingRuleDestroyProducer(t), Steps: []resource.TestStep{ resource.TestStep{ Config: testAccComputeForwardingRule_ip(addrName, poolName, ruleName), @@ -67,13 +66,13 @@ func TestAccComputeForwardingRule_ip(t *testing.T) { func TestAccComputeForwardingRule_networkTier(t *testing.T) { t.Parallel() - poolName := fmt.Sprintf("tf-%s", acctest.RandString(10)) - ruleName := fmt.Sprintf("tf-%s", acctest.RandString(10)) + poolName := fmt.Sprintf("tf-%s", randString(t, 10)) + ruleName := fmt.Sprintf("tf-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeForwardingRuleDestroy, + CheckDestroy: testAccCheckComputeForwardingRuleDestroyProducer(t), Steps: []resource.TestStep{ resource.TestStep{ Config: testAccComputeForwardingRule_networkTier(poolName, ruleName), diff --git a/third_party/terraform/tests/resource_compute_global_address_test.go.erb b/third_party/terraform/tests/resource_compute_global_address_test.go.erb index 63b8638e7710..0302b983ea42 100644 --- a/third_party/terraform/tests/resource_compute_global_address_test.go.erb +++ 
b/third_party/terraform/tests/resource_compute_global_address_test.go.erb @@ -5,7 +5,6 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" @@ -15,13 +14,13 @@ import ( func TestAccComputeGlobalAddress_ipv6(t *testing.T) { t.Parallel() - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeGlobalAddressDestroy, + CheckDestroy: testAccCheckComputeGlobalAddressDestroyProducer(t), Steps: []resource.TestStep{ resource.TestStep{ - Config: testAccComputeGlobalAddress_ipv6(), + Config: testAccComputeGlobalAddress_ipv6(randString(t, 10)), }, resource.TestStep{ ResourceName: "google_compute_global_address.foobar", @@ -35,13 +34,13 @@ func TestAccComputeGlobalAddress_ipv6(t *testing.T) { func TestAccComputeGlobalAddress_internal(t *testing.T) { t.Parallel() - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeGlobalAddressDestroy, + CheckDestroy: testAccCheckComputeGlobalAddressDestroyProducer(t), Steps: []resource.TestStep{ resource.TestStep{ - Config: testAccComputeGlobalAddress_internal(), + Config: testAccComputeGlobalAddress_internal(randString(t, 10), randString(t, 10)), }, resource.TestStep{ ResourceName: "google_compute_global_address.foobar", @@ -52,17 +51,17 @@ func TestAccComputeGlobalAddress_internal(t *testing.T) { }) } -func testAccComputeGlobalAddress_ipv6() string { +func testAccComputeGlobalAddress_ipv6(addressName string) string { return fmt.Sprintf(` resource "google_compute_global_address" "foobar" { name = "address-test-%s" description = "Created for Terraform acceptance testing" ip_version = "IPV6" } -`, acctest.RandString(10)) +`, addressName) } -func testAccComputeGlobalAddress_internal() string { +func testAccComputeGlobalAddress_internal(networkName, addressName string) string { return fmt.Sprintf(` resource "google_compute_network" "foobar" { name = "address-test-%s" @@ -76,5 +75,5 @@ resource "google_compute_global_address" "foobar" { address = "172.20.181.0" network = google_compute_network.foobar.self_link } -`, acctest.RandString(10), acctest.RandString(10)) +`, networkName, addressName) } diff --git a/third_party/terraform/tests/resource_compute_global_forwarding_rule_test.go.erb b/third_party/terraform/tests/resource_compute_global_forwarding_rule_test.go.erb index cc0c6673d7c2..751f20c0db98 100644 --- a/third_party/terraform/tests/resource_compute_global_forwarding_rule_test.go.erb +++ b/third_party/terraform/tests/resource_compute_global_forwarding_rule_test.go.erb @@ -5,7 +5,6 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" ) @@ -13,17 +12,17 @@ import ( func TestAccComputeGlobalForwardingRule_updateTarget(t *testing.T) { t.Parallel() - fr := fmt.Sprintf("forwardrule-test-%s", acctest.RandString(10)) - proxy := fmt.Sprintf("forwardrule-test-%s", acctest.RandString(10)) - proxyUpdated := fmt.Sprintf("forwardrule-test-%s", acctest.RandString(10)) - backend := fmt.Sprintf("forwardrule-test-%s", acctest.RandString(10)) - hc := fmt.Sprintf("forwardrule-test-%s", acctest.RandString(10)) - urlmap := fmt.Sprintf("forwardrule-test-%s", 
acctest.RandString(10)) + fr := fmt.Sprintf("forwardrule-test-%s", randString(t, 10)) + proxy := fmt.Sprintf("forwardrule-test-%s", randString(t, 10)) + proxyUpdated := fmt.Sprintf("forwardrule-test-%s", randString(t, 10)) + backend := fmt.Sprintf("forwardrule-test-%s", randString(t, 10)) + hc := fmt.Sprintf("forwardrule-test-%s", randString(t, 10)) + urlmap := fmt.Sprintf("forwardrule-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeGlobalForwardingRuleDestroy, + CheckDestroy: testAccCheckComputeGlobalForwardingRuleDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeGlobalForwardingRule_httpProxy(fr, "proxy", proxy, proxyUpdated, backend, hc, urlmap), @@ -56,16 +55,16 @@ func TestAccComputeGlobalForwardingRule_updateTarget(t *testing.T) { func TestAccComputeGlobalForwardingRule_ipv6(t *testing.T) { t.Parallel() - fr := fmt.Sprintf("forwardrule-test-%s", acctest.RandString(10)) - proxy := fmt.Sprintf("forwardrule-test-%s", acctest.RandString(10)) - backend := fmt.Sprintf("forwardrule-test-%s", acctest.RandString(10)) - hc := fmt.Sprintf("forwardrule-test-%s", acctest.RandString(10)) - urlmap := fmt.Sprintf("forwardrule-test-%s", acctest.RandString(10)) + fr := fmt.Sprintf("forwardrule-test-%s", randString(t, 10)) + proxy := fmt.Sprintf("forwardrule-test-%s", randString(t, 10)) + backend := fmt.Sprintf("forwardrule-test-%s", randString(t, 10)) + hc := fmt.Sprintf("forwardrule-test-%s", randString(t, 10)) + urlmap := fmt.Sprintf("forwardrule-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeGlobalForwardingRuleDestroy, + CheckDestroy: testAccCheckComputeGlobalForwardingRuleDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeGlobalForwardingRule_ipv6(fr, proxy, backend, hc, urlmap), @@ -87,16 +86,16 @@ func TestAccComputeGlobalForwardingRule_ipv6(t *testing.T) { func TestAccComputeGlobalForwardingRule_labels(t *testing.T) { t.Parallel() - fr := fmt.Sprintf("forwardrule-test-%s", acctest.RandString(10)) - proxy := fmt.Sprintf("forwardrule-test-%s", acctest.RandString(10)) - backend := fmt.Sprintf("forwardrule-test-%s", acctest.RandString(10)) - hc := fmt.Sprintf("forwardrule-test-%s", acctest.RandString(10)) - urlmap := fmt.Sprintf("forwardrule-test-%s", acctest.RandString(10)) + fr := fmt.Sprintf("forwardrule-test-%s", randString(t, 10)) + proxy := fmt.Sprintf("forwardrule-test-%s", randString(t, 10)) + backend := fmt.Sprintf("forwardrule-test-%s", randString(t, 10)) + hc := fmt.Sprintf("forwardrule-test-%s", randString(t, 10)) + urlmap := fmt.Sprintf("forwardrule-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeGlobalForwardingRuleDestroy, + CheckDestroy: testAccCheckComputeGlobalForwardingRuleDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeGlobalForwardingRule_labels(fr, proxy, backend, hc, urlmap), @@ -123,18 +122,18 @@ func TestAccComputeGlobalForwardingRule_labels(t *testing.T) { func TestAccComputeGlobalForwardingRule_internalLoadBalancing(t *testing.T) { t.Parallel() - fr := fmt.Sprintf("forwardrule-test-%s", acctest.RandString(10)) - proxy := 
fmt.Sprintf("forwardrule-test-%s", acctest.RandString(10)) - backend := fmt.Sprintf("forwardrule-test-%s", acctest.RandString(10)) - hc := fmt.Sprintf("forwardrule-test-%s", acctest.RandString(10)) - urlmap := fmt.Sprintf("forwardrule-test-%s", acctest.RandString(10)) - igm := fmt.Sprintf("forwardrule-test-%s", acctest.RandString(10)) - it := fmt.Sprintf("forwardrule-test-%s", acctest.RandString(10)) + fr := fmt.Sprintf("forwardrule-test-%s", randString(t, 10)) + proxy := fmt.Sprintf("forwardrule-test-%s", randString(t, 10)) + backend := fmt.Sprintf("forwardrule-test-%s", randString(t, 10)) + hc := fmt.Sprintf("forwardrule-test-%s", randString(t, 10)) + urlmap := fmt.Sprintf("forwardrule-test-%s", randString(t, 10)) + igm := fmt.Sprintf("forwardrule-test-%s", randString(t, 10)) + it := fmt.Sprintf("forwardrule-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeGlobalForwardingRuleDestroy, + CheckDestroy: testAccCheckComputeGlobalForwardingRuleDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeGlobalForwardingRule_internalLoadBalancing(fr, proxy, backend, hc, urlmap, igm, it), @@ -388,7 +387,7 @@ func testAccComputeGlobalForwardingRule_internalLoadBalancing(fr, proxy, backend resource "google_compute_global_forwarding_rule" "forwarding_rule" { name = "%s" target = google_compute_target_http_proxy.default.self_link - port_range = "80" + port_range = "8080" load_balancing_scheme = "INTERNAL_SELF_MANAGED" ip_address = "0.0.0.0" metadata_filters { @@ -492,7 +491,7 @@ func testAccComputeGlobalForwardingRule_internalLoadBalancingUpdate(fr, proxy, b resource "google_compute_global_forwarding_rule" "forwarding_rule" { name = "%s" target = google_compute_target_http_proxy.default.self_link - port_range = "80" + port_range = "8080" load_balancing_scheme = "INTERNAL_SELF_MANAGED" ip_address = "0.0.0.0" metadata_filters { diff --git a/third_party/terraform/tests/resource_compute_global_network_endpoint_test.go.erb b/third_party/terraform/tests/resource_compute_global_network_endpoint_test.go.erb index b3725eaf6be9..6e4c5e49921f 100644 --- a/third_party/terraform/tests/resource_compute_global_network_endpoint_test.go.erb +++ b/third_party/terraform/tests/resource_compute_global_network_endpoint_test.go.erb @@ -13,14 +13,14 @@ func TestAccComputeGlobalNetworkEndpoint_networkEndpointsBasic(t *testing.T) { t.Parallel() context := map[string]interface{}{ - "random_suffix": acctest.RandString(10), + "random_suffix": randString(t, 10), "default_port": 90, "modified_port": 100, } negId := fmt.Sprintf("projects/%s/global/networkEndpointGroups/neg-%s", getTestProjectFromEnv(), context["random_suffix"]) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -37,7 +37,7 @@ func TestAccComputeGlobalNetworkEndpoint_networkEndpointsBasic(t *testing.T) { // Force-recreate old endpoint Config: testAccComputeGlobalNetworkEndpoint_networkEndpointsModified(context), Check: resource.ComposeTestCheckFunc( - testAccCheckComputeNetworkEndpointWithPortsDestroyed(negId, "90"), + testAccCheckComputeNetworkEndpointWithPortsDestroyed(t, negId, "90"), ), }, { @@ -49,7 +49,7 @@ func TestAccComputeGlobalNetworkEndpoint_networkEndpointsBasic(t *testing.T) { // delete all endpoints Config: 
testAccComputeGlobalNetworkEndpoint_noNetworkEndpoints(context), Check: resource.ComposeTestCheckFunc( - testAccCheckComputeNetworkEndpointWithPortsDestroyed(negId, "100"), + testAccCheckComputeNetworkEndpointWithPortsDestroyed(t, negId, "100"), ), }, }, diff --git a/third_party/terraform/tests/resource_compute_health_check_test.go b/third_party/terraform/tests/resource_compute_health_check_test.go index 930100846a62..55c0fe35270e 100644 --- a/third_party/terraform/tests/resource_compute_health_check_test.go +++ b/third_party/terraform/tests/resource_compute_health_check_test.go @@ -5,19 +5,18 @@ import ( "regexp" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" ) func TestAccComputeHealthCheck_tcp_update(t *testing.T) { t.Parallel() - hckName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + hckName := fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeHealthCheckDestroy, + CheckDestroy: testAccCheckComputeHealthCheckDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeHealthCheck_tcp(hckName), @@ -42,12 +41,12 @@ func TestAccComputeHealthCheck_tcp_update(t *testing.T) { func TestAccComputeHealthCheck_ssl_port_spec(t *testing.T) { t.Parallel() - hckName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + hckName := fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeHealthCheckDestroy, + CheckDestroy: testAccCheckComputeHealthCheckDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeHealthCheck_ssl_fixed_port(hckName), @@ -64,12 +63,12 @@ func TestAccComputeHealthCheck_ssl_port_spec(t *testing.T) { func TestAccComputeHealthCheck_http_port_spec(t *testing.T) { t.Parallel() - hckName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + hckName := fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeHealthCheckDestroy, + CheckDestroy: testAccCheckComputeHealthCheckDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeHealthCheck_http_port_spec(hckName), @@ -85,12 +84,12 @@ func TestAccComputeHealthCheck_http_port_spec(t *testing.T) { func TestAccComputeHealthCheck_https_serving_port(t *testing.T) { t.Parallel() - hckName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + hckName := fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeHealthCheckDestroy, + CheckDestroy: testAccCheckComputeHealthCheckDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeHealthCheck_https_serving_port(hckName), @@ -107,12 +106,12 @@ func TestAccComputeHealthCheck_https_serving_port(t *testing.T) { func TestAccComputeHealthCheck_typeTransition(t *testing.T) { t.Parallel() - hckName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + hckName := fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { 
testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeHealthCheckDestroy, + CheckDestroy: testAccCheckComputeHealthCheckDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeHealthCheck_https(hckName), @@ -139,16 +138,16 @@ func TestAccComputeHealthCheck_typeTransition(t *testing.T) { func TestAccComputeHealthCheck_tcpAndSsl_shouldFail(t *testing.T) { t.Parallel() - hckName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + hckName := fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeHealthCheckDestroy, + CheckDestroy: testAccCheckComputeHealthCheckDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeHealthCheck_tcpAndSsl_shouldFail(hckName), - ExpectError: regexp.MustCompile("conflicts with tcp_health_check"), + ExpectError: regexp.MustCompile("only one of `http2_health_check,http_health_check,https_health_check,ssl_health_check,tcp_health_check` can be specified"), }, }, }) diff --git a/third_party/terraform/tests/resource_compute_http_health_check_test.go b/third_party/terraform/tests/resource_compute_http_health_check_test.go index aefee40471a9..da8d59eccc46 100644 --- a/third_party/terraform/tests/resource_compute_http_health_check_test.go +++ b/third_party/terraform/tests/resource_compute_http_health_check_test.go @@ -4,7 +4,6 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" "google.golang.org/api/compute/v1" @@ -15,18 +14,18 @@ func TestAccComputeHttpHealthCheck_update(t *testing.T) { var healthCheck compute.HttpHealthCheck - hhckName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + hhckName := fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeHttpHealthCheckDestroy, + CheckDestroy: testAccCheckComputeHttpHealthCheckDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeHttpHealthCheck_update1(hhckName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeHttpHealthCheckExists( - "google_compute_http_health_check.foobar", &healthCheck), + t, "google_compute_http_health_check.foobar", &healthCheck), testAccCheckComputeHttpHealthCheckRequestPath( "/not_default", &healthCheck), testAccCheckComputeHttpHealthCheckThresholds( @@ -37,7 +36,7 @@ func TestAccComputeHttpHealthCheck_update(t *testing.T) { Config: testAccComputeHttpHealthCheck_update2(hhckName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeHttpHealthCheckExists( - "google_compute_http_health_check.foobar", &healthCheck), + t, "google_compute_http_health_check.foobar", &healthCheck), testAccCheckComputeHttpHealthCheckRequestPath( "/", &healthCheck), testAccCheckComputeHttpHealthCheckThresholds( @@ -48,7 +47,7 @@ func TestAccComputeHttpHealthCheck_update(t *testing.T) { }) } -func testAccCheckComputeHttpHealthCheckExists(n string, healthCheck *compute.HttpHealthCheck) resource.TestCheckFunc { +func testAccCheckComputeHttpHealthCheckExists(t *testing.T, n string, healthCheck *compute.HttpHealthCheck) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] if !ok { @@ -59,7 +58,7 @@ func 
testAccCheckComputeHttpHealthCheckExists(n string, healthCheck *compute.Htt return fmt.Errorf("No name is set") } - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) found, err := config.clientCompute.HttpHealthChecks.Get( config.Project, rs.Primary.Attributes["name"]).Do() diff --git a/third_party/terraform/tests/resource_compute_https_health_check_test.go b/third_party/terraform/tests/resource_compute_https_health_check_test.go index 84e69ff6e848..814e0512ad75 100644 --- a/third_party/terraform/tests/resource_compute_https_health_check_test.go +++ b/third_party/terraform/tests/resource_compute_https_health_check_test.go @@ -4,19 +4,18 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" ) func TestAccComputeHttpsHealthCheck_update(t *testing.T) { t.Parallel() - hhckName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + hhckName := fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeHttpsHealthCheckDestroy, + CheckDestroy: testAccCheckComputeHttpsHealthCheckDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeHttpsHealthCheck_update1(hhckName), diff --git a/third_party/terraform/tests/resource_compute_image_test.go b/third_party/terraform/tests/resource_compute_image_test.go index ec9bce49e5cd..e6f404ab8d6a 100644 --- a/third_party/terraform/tests/resource_compute_image_test.go +++ b/third_party/terraform/tests/resource_compute_image_test.go @@ -4,7 +4,6 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" "google.golang.org/api/compute/v1" @@ -13,13 +12,13 @@ import ( func TestAccComputeImage_withLicense(t *testing.T) { t.Parallel() - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeImageDestroy, + CheckDestroy: testAccCheckComputeImageDestroyProducer(t), Steps: []resource.TestStep{ { - Config: testAccComputeImage_license("image-test-" + acctest.RandString(10)), + Config: testAccComputeImage_license("image-test-" + randString(t, 10)), }, { ResourceName: "google_compute_image.foobar", @@ -35,32 +34,30 @@ func TestAccComputeImage_update(t *testing.T) { var image compute.Image - name := "image-test-" + acctest.RandString(10) + name := "image-test-" + randString(t, 10) // Only labels supports an update - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeImageDestroy, + CheckDestroy: testAccCheckComputeImageDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeImage_basic(name), Check: resource.ComposeTestCheckFunc( testAccCheckComputeImageExists( - "google_compute_image.foobar", &image), + t, "google_compute_image.foobar", &image), testAccCheckComputeImageContainsLabel(&image, "my-label", "my-label-value"), testAccCheckComputeImageContainsLabel(&image, "empty-label", ""), - testAccCheckComputeImageHasComputedFingerprint(&image, "google_compute_image.foobar"), ), }, { Config: testAccComputeImage_update(name), Check: resource.ComposeTestCheckFunc( 
testAccCheckComputeImageExists( - "google_compute_image.foobar", &image), + t, "google_compute_image.foobar", &image), testAccCheckComputeImageDoesNotContainLabel(&image, "my-label"), testAccCheckComputeImageContainsLabel(&image, "empty-label", "oh-look-theres-a-label-now"), testAccCheckComputeImageContainsLabel(&image, "new-field", "only-shows-up-when-updated"), - testAccCheckComputeImageHasComputedFingerprint(&image, "google_compute_image.foobar"), ), }, { @@ -78,16 +75,16 @@ func TestAccComputeImage_basedondisk(t *testing.T) { var image compute.Image - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeImageDestroy, + CheckDestroy: testAccCheckComputeImageDestroyProducer(t), Steps: []resource.TestStep{ { - Config: testAccComputeImage_basedondisk(), + Config: testAccComputeImage_basedondisk(randString(t, 10), randString(t, 10)), Check: resource.ComposeTestCheckFunc( testAccCheckComputeImageExists( - "google_compute_image.foobar", &image), + t, "google_compute_image.foobar", &image), testAccCheckComputeImageHasSourceDisk(&image), ), }, @@ -100,7 +97,7 @@ func TestAccComputeImage_basedondisk(t *testing.T) { }) } -func testAccCheckComputeImageExists(n string, image *compute.Image) resource.TestCheckFunc { +func testAccCheckComputeImageExists(t *testing.T, n string, image *compute.Image) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] if !ok { @@ -111,7 +108,7 @@ func testAccCheckComputeImageExists(n string, image *compute.Image) resource.Tes return fmt.Errorf("No name is set") } - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) found, err := config.clientCompute.Images.Get( config.Project, rs.Primary.Attributes["name"]).Do() @@ -133,30 +130,30 @@ func TestAccComputeImage_resolveImage(t *testing.T) { t.Parallel() var image compute.Image - rand := acctest.RandString(10) + rand := randString(t, 10) name := fmt.Sprintf("test-image-%s", rand) fam := fmt.Sprintf("test-image-family-%s", rand) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeImageDestroy, + CheckDestroy: testAccCheckComputeImageDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeImage_resolving(name, fam), Check: resource.ComposeTestCheckFunc( testAccCheckComputeImageExists( - "google_compute_image.foobar", &image), - testAccCheckComputeImageResolution("google_compute_image.foobar"), + t, "google_compute_image.foobar", &image), + testAccCheckComputeImageResolution(t, "google_compute_image.foobar"), ), }, }, }) } -func testAccCheckComputeImageResolution(n string) resource.TestCheckFunc { +func testAccCheckComputeImageResolution(t *testing.T, n string) resource.TestCheckFunc { return func(s *terraform.State) error { - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) project := config.Project rs, ok := s.RootModule().Resources[n] @@ -239,28 +236,6 @@ func testAccCheckComputeImageDoesNotContainLabel(image *compute.Image, key strin } } -func testAccCheckComputeImageHasComputedFingerprint(image *compute.Image, resource string) resource.TestCheckFunc { - return func(s *terraform.State) error { - // First ensure we actually have a fingerprint - if image.LabelFingerprint == "" { - return fmt.Errorf("No fingerprint set in API read result") - } - - state := 
s.RootModule().Resources[resource] - if state == nil { - return fmt.Errorf("Unable to find resource named %s in resources", resource) - } - - storedFingerprint := state.Primary.Attributes["label_fingerprint"] - if storedFingerprint != image.LabelFingerprint { - return fmt.Errorf("Stored fingerprint doesn't match fingerprint found on server; stored '%s', server '%s'", - storedFingerprint, image.LabelFingerprint) - } - - return nil - } -} - func testAccCheckComputeImageHasSourceDisk(image *compute.Image) resource.TestCheckFunc { return func(s *terraform.State) error { if image.SourceType == "" { @@ -354,7 +329,7 @@ resource "google_compute_image" "foobar" { `, name) } -func testAccComputeImage_basedondisk() string { +func testAccComputeImage_basedondisk(diskName, imageName string) string { return fmt.Sprintf(` data "google_compute_image" "my_image" { family = "debian-9" @@ -371,5 +346,5 @@ resource "google_compute_image" "foobar" { name = "image-test-%s" source_disk = google_compute_disk.foobar.self_link } -`, acctest.RandString(10), acctest.RandString(10)) +`, diskName, imageName) } diff --git a/third_party/terraform/tests/resource_compute_instance_from_template_test.go b/third_party/terraform/tests/resource_compute_instance_from_template_test.go index 86decbd97795..e452b0378ea7 100644 --- a/third_party/terraform/tests/resource_compute_instance_from_template_test.go +++ b/third_party/terraform/tests/resource_compute_instance_from_template_test.go @@ -5,7 +5,6 @@ import ( "regexp" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" compute "google.golang.org/api/compute/v1" @@ -15,19 +14,19 @@ func TestAccComputeInstanceFromTemplate_basic(t *testing.T) { t.Parallel() var instance compute.Instance - instanceName := fmt.Sprintf("terraform-test-%s", acctest.RandString(10)) - templateName := fmt.Sprintf("terraform-test-%s", acctest.RandString(10)) + instanceName := fmt.Sprintf("terraform-test-%s", randString(t, 10)) + templateName := fmt.Sprintf("terraform-test-%s", randString(t, 10)) resourceName := "google_compute_instance_from_template.foobar" - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeInstanceFromTemplateDestroy, + CheckDestroy: testAccCheckComputeInstanceFromTemplateDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeInstanceFromTemplate_basic(instanceName, templateName), Check: resource.ComposeTestCheckFunc( - testAccCheckComputeInstanceExists(resourceName, &instance), + testAccCheckComputeInstanceExists(t, resourceName, &instance), // Check that fields were set based on the template resource.TestCheckResourceAttr(resourceName, "machine_type", "n1-standard-1"), @@ -43,21 +42,21 @@ func TestAccComputeInstanceFromTemplate_overrideBootDisk(t *testing.T) { t.Parallel() var instance compute.Instance - instanceName := fmt.Sprintf("terraform-test-%s", acctest.RandString(10)) - templateName := fmt.Sprintf("terraform-test-%s", acctest.RandString(10)) - templateDisk := fmt.Sprintf("terraform-test-%s", acctest.RandString(10)) - overrideDisk := fmt.Sprintf("terraform-test-%s", acctest.RandString(10)) + instanceName := fmt.Sprintf("terraform-test-%s", randString(t, 10)) + templateName := fmt.Sprintf("terraform-test-%s", randString(t, 10)) + templateDisk := fmt.Sprintf("terraform-test-%s", randString(t, 10)) + overrideDisk 
:= fmt.Sprintf("terraform-test-%s", randString(t, 10)) resourceName := "google_compute_instance_from_template.inst" - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeInstanceFromTemplateDestroy, + CheckDestroy: testAccCheckComputeInstanceFromTemplateDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeInstanceFromTemplate_overrideBootDisk(templateDisk, overrideDisk, templateName, instanceName), Check: resource.ComposeTestCheckFunc( - testAccCheckComputeInstanceExists(resourceName, &instance), + testAccCheckComputeInstanceExists(t, resourceName, &instance), // Check that fields were set based on the template resource.TestCheckResourceAttr(resourceName, "boot_disk.#", "1"), @@ -72,21 +71,21 @@ func TestAccComputeInstanceFromTemplate_overrideAttachedDisk(t *testing.T) { t.Parallel() var instance compute.Instance - instanceName := fmt.Sprintf("terraform-test-%s", acctest.RandString(10)) - templateName := fmt.Sprintf("terraform-test-%s", acctest.RandString(10)) - templateDisk := fmt.Sprintf("terraform-test-%s", acctest.RandString(10)) - overrideDisk := fmt.Sprintf("terraform-test-%s", acctest.RandString(10)) + instanceName := fmt.Sprintf("terraform-test-%s", randString(t, 10)) + templateName := fmt.Sprintf("terraform-test-%s", randString(t, 10)) + templateDisk := fmt.Sprintf("terraform-test-%s", randString(t, 10)) + overrideDisk := fmt.Sprintf("terraform-test-%s", randString(t, 10)) resourceName := "google_compute_instance_from_template.inst" - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeInstanceFromTemplateDestroy, + CheckDestroy: testAccCheckComputeInstanceFromTemplateDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeInstanceFromTemplate_overrideAttachedDisk(templateDisk, overrideDisk, templateName, instanceName), Check: resource.ComposeTestCheckFunc( - testAccCheckComputeInstanceExists(resourceName, &instance), + testAccCheckComputeInstanceExists(t, resourceName, &instance), // Check that fields were set based on the template resource.TestCheckResourceAttr(resourceName, "attached_disk.#", "1"), @@ -101,21 +100,21 @@ func TestAccComputeInstanceFromTemplate_overrideScratchDisk(t *testing.T) { t.Parallel() var instance compute.Instance - instanceName := fmt.Sprintf("terraform-test-%s", acctest.RandString(10)) - templateName := fmt.Sprintf("terraform-test-%s", acctest.RandString(10)) - templateDisk := fmt.Sprintf("terraform-test-%s", acctest.RandString(10)) - overrideDisk := fmt.Sprintf("terraform-test-%s", acctest.RandString(10)) + instanceName := fmt.Sprintf("terraform-test-%s", randString(t, 10)) + templateName := fmt.Sprintf("terraform-test-%s", randString(t, 10)) + templateDisk := fmt.Sprintf("terraform-test-%s", randString(t, 10)) + overrideDisk := fmt.Sprintf("terraform-test-%s", randString(t, 10)) resourceName := "google_compute_instance_from_template.inst" - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeInstanceFromTemplateDestroy, + CheckDestroy: testAccCheckComputeInstanceFromTemplateDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeInstanceFromTemplate_overrideScratchDisk(templateDisk, overrideDisk, templateName, instanceName), Check: 
resource.ComposeTestCheckFunc( - testAccCheckComputeInstanceExists(resourceName, &instance), + testAccCheckComputeInstanceExists(t, resourceName, &instance), // Check that fields were set based on the template resource.TestCheckResourceAttr(resourceName, "scratch_disk.#", "1"), @@ -130,20 +129,20 @@ func TestAccComputeInstanceFromTemplate_overrideScheduling(t *testing.T) { t.Parallel() var instance compute.Instance - instanceName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) - templateName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) - templateDisk := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + instanceName := fmt.Sprintf("tf-test-%s", randString(t, 10)) + templateName := fmt.Sprintf("tf-test-%s", randString(t, 10)) + templateDisk := fmt.Sprintf("tf-test-%s", randString(t, 10)) resourceName := "google_compute_instance_from_template.inst" - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeInstanceFromTemplateDestroy, + CheckDestroy: testAccCheckComputeInstanceFromTemplateDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeInstanceFromTemplate_overrideScheduling(templateDisk, templateName, instanceName), Check: resource.ComposeTestCheckFunc( - testAccCheckComputeInstanceExists(resourceName, &instance), + testAccCheckComputeInstanceExists(t, resourceName, &instance), ), }, }, @@ -154,8 +153,8 @@ func TestAccComputeInstanceFromTemplate_012_removableFields(t *testing.T) { t.Parallel() var instance compute.Instance - instanceName := fmt.Sprintf("terraform-test-%s", acctest.RandString(10)) - templateName := fmt.Sprintf("terraform-test-%s", acctest.RandString(10)) + instanceName := fmt.Sprintf("terraform-test-%s", randString(t, 10)) + templateName := fmt.Sprintf("terraform-test-%s", randString(t, 10)) resourceName := "google_compute_instance_from_template.inst" // First config is a basic instance from template, second tests the empty list syntax @@ -164,15 +163,15 @@ func TestAccComputeInstanceFromTemplate_012_removableFields(t *testing.T) { config2 := testAccComputeInstanceFromTemplate_012_removableFieldsTpl(templateName) + testAccComputeInstanceFromTemplate_012_removableFields2(instanceName) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeInstanceFromTemplateDestroy, + CheckDestroy: testAccCheckComputeInstanceFromTemplateDestroyProducer(t), Steps: []resource.TestStep{ { Config: config1, Check: resource.ComposeTestCheckFunc( - testAccCheckComputeInstanceExists(resourceName, &instance), + testAccCheckComputeInstanceExists(t, resourceName, &instance), resource.TestCheckResourceAttr(resourceName, "service_account.#", "1"), resource.TestCheckResourceAttr(resourceName, "service_account.0.scopes.#", "3"), @@ -181,7 +180,7 @@ func TestAccComputeInstanceFromTemplate_012_removableFields(t *testing.T) { { Config: config2, Check: resource.ComposeTestCheckFunc( - testAccCheckComputeInstanceExists(resourceName, &instance), + testAccCheckComputeInstanceExists(t, resourceName, &instance), // Check that fields were able to be removed resource.TestCheckResourceAttr(resourceName, "scratch_disk.#", "0"), @@ -195,19 +194,19 @@ func TestAccComputeInstanceFromTemplate_012_removableFields(t *testing.T) { func TestAccComputeInstanceFromTemplate_overrideMetadataDotStartupScript(t *testing.T) { var instance compute.Instance - 
instanceName := fmt.Sprintf("terraform-test-%s", acctest.RandString(10)) - templateName := fmt.Sprintf("terraform-test-%s", acctest.RandString(10)) + instanceName := fmt.Sprintf("terraform-test-%s", randString(t, 10)) + templateName := fmt.Sprintf("terraform-test-%s", randString(t, 10)) resourceName := "google_compute_instance_from_template.inst" - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeInstanceFromTemplateDestroy, + CheckDestroy: testAccCheckComputeInstanceFromTemplateDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeInstanceFromTemplate_overrideMetadataDotStartupScript(instanceName, templateName), Check: resource.ComposeTestCheckFunc( - testAccCheckComputeInstanceExists(resourceName, &instance), + testAccCheckComputeInstanceExists(t, resourceName, &instance), resource.TestCheckResourceAttr(resourceName, "metadata.startup-script", ""), ), }, @@ -216,22 +215,24 @@ func TestAccComputeInstanceFromTemplate_overrideMetadataDotStartupScript(t *test } -func testAccCheckComputeInstanceFromTemplateDestroy(s *terraform.State) error { - config := testAccProvider.Meta().(*Config) +func testAccCheckComputeInstanceFromTemplateDestroyProducer(t *testing.T) func(s *terraform.State) error { + return func(s *terraform.State) error { + config := googleProviderConfig(t) - for _, rs := range s.RootModule().Resources { - if rs.Type != "google_compute_instance_from_template" { - continue - } + for _, rs := range s.RootModule().Resources { + if rs.Type != "google_compute_instance_from_template" { + continue + } - _, err := config.clientCompute.Instances.Get( - config.Project, rs.Primary.Attributes["zone"], rs.Primary.ID).Do() - if err == nil { - return fmt.Errorf("Instance still exists") + _, err := config.clientCompute.Instances.Get( + config.Project, rs.Primary.Attributes["zone"], rs.Primary.ID).Do() + if err == nil { + return fmt.Errorf("Instance still exists") + } } - } - return nil + return nil + } } func testAccComputeInstanceFromTemplate_basic(instance, template string) string { diff --git a/third_party/terraform/tests/resource_compute_instance_group_manager_test.go b/third_party/terraform/tests/resource_compute_instance_group_manager_test.go.erb similarity index 70% rename from third_party/terraform/tests/resource_compute_instance_group_manager_test.go rename to third_party/terraform/tests/resource_compute_instance_group_manager_test.go.erb index 44a24c599e49..d993d095f71b 100644 --- a/third_party/terraform/tests/resource_compute_instance_group_manager_test.go +++ b/third_party/terraform/tests/resource_compute_instance_group_manager_test.go.erb @@ -1,10 +1,10 @@ +<% autogen_exception -%> package google import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" ) @@ -12,15 +12,15 @@ import ( func TestAccInstanceGroupManager_basic(t *testing.T) { t.Parallel() - template := fmt.Sprintf("igm-test-%s", acctest.RandString(10)) - target := fmt.Sprintf("igm-test-%s", acctest.RandString(10)) - igm1 := fmt.Sprintf("igm-test-%s", acctest.RandString(10)) - igm2 := fmt.Sprintf("igm-test-%s", acctest.RandString(10)) + template := fmt.Sprintf("tf-test-igm-%s", randString(t, 10)) + target := fmt.Sprintf("tf-test-igm-%s", randString(t, 10)) + igm1 := fmt.Sprintf("tf-test-igm-%s", randString(t, 10)) + igm2 := 
fmt.Sprintf("tf-test-igm-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckInstanceGroupManagerDestroy, + CheckDestroy: testAccCheckInstanceGroupManagerDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccInstanceGroupManager_basic(template, target, igm1, igm2), @@ -42,13 +42,13 @@ func TestAccInstanceGroupManager_basic(t *testing.T) { func TestAccInstanceGroupManager_targetSizeZero(t *testing.T) { t.Parallel() - templateName := fmt.Sprintf("igm-test-%s", acctest.RandString(10)) - igmName := fmt.Sprintf("igm-test-%s", acctest.RandString(10)) + templateName := fmt.Sprintf("tf-test-igm-%s", randString(t, 10)) + igmName := fmt.Sprintf("tf-test-igm-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckInstanceGroupManagerDestroy, + CheckDestroy: testAccCheckInstanceGroupManagerDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccInstanceGroupManager_targetSizeZero(templateName, igmName), @@ -65,16 +65,16 @@ func TestAccInstanceGroupManager_targetSizeZero(t *testing.T) { func TestAccInstanceGroupManager_update(t *testing.T) { t.Parallel() - template1 := fmt.Sprintf("igm-test-%s", acctest.RandString(10)) - target1 := fmt.Sprintf("igm-test-%s", acctest.RandString(10)) - target2 := fmt.Sprintf("igm-test-%s", acctest.RandString(10)) - template2 := fmt.Sprintf("igm-test-%s", acctest.RandString(10)) - igm := fmt.Sprintf("igm-test-%s", acctest.RandString(10)) + template1 := fmt.Sprintf("tf-test-igm-%s", randString(t, 10)) + target1 := fmt.Sprintf("tf-test-igm-%s", randString(t, 10)) + target2 := fmt.Sprintf("tf-test-igm-%s", randString(t, 10)) + template2 := fmt.Sprintf("tf-test-igm-%s", randString(t, 10)) + igm := fmt.Sprintf("tf-test-igm-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckInstanceGroupManagerDestroy, + CheckDestroy: testAccCheckInstanceGroupManagerDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccInstanceGroupManager_update(template1, target1, igm), @@ -92,21 +92,31 @@ func TestAccInstanceGroupManager_update(t *testing.T) { ImportState: true, ImportStateVerify: true, }, + { + Config: testAccInstanceGroupManager_update3(template1, target1, target2, template2, igm), + }, + { + ResourceName: "google_compute_instance_group_manager.igm-update", + ImportState: true, + ImportStateVerify: true, + }, }, }) } func TestAccInstanceGroupManager_updateLifecycle(t *testing.T) { + // Randomness in instance template + skipIfVcr(t) t.Parallel() tag1 := "tag1" tag2 := "tag2" - igm := fmt.Sprintf("igm-test-%s", acctest.RandString(10)) + igm := fmt.Sprintf("tf-test-igm-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckInstanceGroupManagerDestroy, + CheckDestroy: testAccCheckInstanceGroupManagerDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccInstanceGroupManager_updateLifecycle(tag1, igm), @@ -129,14 +139,16 @@ func TestAccInstanceGroupManager_updateLifecycle(t *testing.T) { } func TestAccInstanceGroupManager_updatePolicy(t *testing.T) { + // Randomness in instance template + skipIfVcr(t) 
t.Parallel() - igm := fmt.Sprintf("igm-test-%s", acctest.RandString(10)) + igm := fmt.Sprintf("tf-test-igm-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckInstanceGroupManagerDestroy, + CheckDestroy: testAccCheckInstanceGroupManagerDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccInstanceGroupManager_rollingUpdatePolicy(igm), @@ -176,15 +188,17 @@ func TestAccInstanceGroupManager_updatePolicy(t *testing.T) { } func TestAccInstanceGroupManager_separateRegions(t *testing.T) { + // Randomness in instance template + skipIfVcr(t) t.Parallel() - igm1 := fmt.Sprintf("igm-test-%s", acctest.RandString(10)) - igm2 := fmt.Sprintf("igm-test-%s", acctest.RandString(10)) + igm1 := fmt.Sprintf("tf-test-igm-%s", randString(t, 10)) + igm2 := fmt.Sprintf("tf-test-igm-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckInstanceGroupManagerDestroy, + CheckDestroy: testAccCheckInstanceGroupManagerDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccInstanceGroupManager_separateRegions(igm1, igm2), @@ -206,14 +220,14 @@ func TestAccInstanceGroupManager_separateRegions(t *testing.T) { func TestAccInstanceGroupManager_versions(t *testing.T) { t.Parallel() - primaryTemplate := fmt.Sprintf("igm-test-%s", acctest.RandString(10)) - canaryTemplate := fmt.Sprintf("igm-test-%s", acctest.RandString(10)) - igm := fmt.Sprintf("igm-test-%s", acctest.RandString(10)) + primaryTemplate := fmt.Sprintf("tf-test-igm-%s", randString(t, 10)) + canaryTemplate := fmt.Sprintf("tf-test-igm-%s", randString(t, 10)) + igm := fmt.Sprintf("tf-test-igm-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckInstanceGroupManagerDestroy, + CheckDestroy: testAccCheckInstanceGroupManagerDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccInstanceGroupManager_versions(primaryTemplate, canaryTemplate, igm), @@ -230,15 +244,15 @@ func TestAccInstanceGroupManager_versions(t *testing.T) { func TestAccInstanceGroupManager_autoHealingPolicies(t *testing.T) { t.Parallel() - template := fmt.Sprintf("igm-test-%s", acctest.RandString(10)) - target := fmt.Sprintf("igm-test-%s", acctest.RandString(10)) - igm := fmt.Sprintf("igm-test-%s", acctest.RandString(10)) - hck := fmt.Sprintf("igm-test-%s", acctest.RandString(10)) + template := fmt.Sprintf("tf-test-igm-%s", randString(t, 10)) + target := fmt.Sprintf("tf-test-igm-%s", randString(t, 10)) + igm := fmt.Sprintf("tf-test-igm-%s", randString(t, 10)) + hck := fmt.Sprintf("tf-test-igm-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckInstanceGroupManagerDestroy, + CheckDestroy: testAccCheckInstanceGroupManagerDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccInstanceGroupManager_autoHealingPolicies(template, target, igm, hck), @@ -260,21 +274,58 @@ func TestAccInstanceGroupManager_autoHealingPolicies(t *testing.T) { }) } -func testAccCheckInstanceGroupManagerDestroy(s *terraform.State) error { - config := testAccProvider.Meta().(*Config) +<% unless version == 'ga' -%> +func 
TestAccInstanceGroupManager_stateful(t *testing.T) { + t.Parallel() + + template := fmt.Sprintf("tf-test-igm-%s", randString(t, 10)) + target := fmt.Sprintf("tf-test-igm-%s", randString(t, 10)) + igm := fmt.Sprintf("tf-test-igm-%s", randString(t, 10)) + hck := fmt.Sprintf("tf-test-igm-%s", randString(t, 10)) - for _, rs := range s.RootModule().Resources { - if rs.Type != "google_compute_instance_group_manager" { - continue - } - _, err := config.clientCompute.InstanceGroupManagers.Get( - config.Project, rs.Primary.Attributes["zone"], rs.Primary.Attributes["name"]).Do() - if err == nil { - return fmt.Errorf("InstanceGroupManager still exists") + vcrTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckInstanceGroupManagerDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccInstanceGroupManager_stateful(template, target, igm, hck), + }, + { + ResourceName: "google_compute_instance_group_manager.igm-basic", + ImportState: true, + ImportStateVerify: true, + }, + { + Config: testAccInstanceGroupManager_statefulUpdated(template, target, igm, hck), + }, + { + ResourceName: "google_compute_instance_group_manager.igm-basic", + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +<% end -%> +func testAccCheckInstanceGroupManagerDestroyProducer(t *testing.T) func(s *terraform.State) error { + return func(s *terraform.State) error { + config := googleProviderConfig(t) + + for _, rs := range s.RootModule().Resources { + if rs.Type != "google_compute_instance_group_manager" { + continue + } + _, err := config.clientCompute.InstanceGroupManagers.Get( + config.Project, rs.Primary.Attributes["zone"], rs.Primary.Attributes["name"]).Do() + if err == nil { + return fmt.Errorf("InstanceGroupManager still exists") + } } - } - return nil + return nil + } } func testAccInstanceGroupManager_basic(template, target, igm1, igm2 string) string { @@ -530,6 +581,92 @@ resource "google_compute_instance_group_manager" "igm-update" { `, template1, target1, target2, template2, igm) } +// Remove target pools +func testAccInstanceGroupManager_update3(template1, target1, target2, template2, igm string) string { + return fmt.Sprintf(` +data "google_compute_image" "my_image" { + family = "debian-9" + project = "debian-cloud" +} + +resource "google_compute_instance_template" "igm-update" { + name = "%s" + machine_type = "n1-standard-1" + can_ip_forward = false + tags = ["foo", "bar"] + + disk { + source_image = data.google_compute_image.my_image.self_link + auto_delete = true + boot = true + } + + network_interface { + network = "default" + } + + service_account { + scopes = ["userinfo-email", "compute-ro", "storage-ro"] + } +} + +resource "google_compute_target_pool" "igm-update" { + description = "Resource created for Terraform acceptance testing" + name = "%s" + session_affinity = "CLIENT_IP_PROTO" +} + +resource "google_compute_target_pool" "igm-update2" { + description = "Resource created for Terraform acceptance testing" + name = "%s" + session_affinity = "CLIENT_IP_PROTO" +} + +resource "google_compute_instance_template" "igm-update2" { + name = "%s" + machine_type = "n1-standard-1" + can_ip_forward = false + tags = ["foo", "bar"] + + disk { + source_image = data.google_compute_image.my_image.self_link + auto_delete = true + boot = true + } + + network_interface { + network = "default" + } + + service_account { + scopes = ["userinfo-email", "compute-ro", "storage-ro"] + } +} + +resource 
"google_compute_instance_group_manager" "igm-update" { + description = "Terraform test instance group manager" + name = "%s" + + version { + name = "prod" + instance_template = google_compute_instance_template.igm-update2.self_link + } + + base_instance_name = "igm-update" + zone = "us-central1-c" + target_size = 3 + named_port { + name = "customhttp" + port = 8080 + } + named_port { + name = "customhttps" + port = 8443 + } +} +`, template1, target1, target2, template2, igm) +} + func testAccInstanceGroupManager_updateLifecycle(tag, igm string) string { return fmt.Sprintf(` data "google_compute_image" "my_image" { @@ -1023,3 +1160,154 @@ resource "google_compute_instance_group_manager" "igm-basic" { } `, primaryTemplate, canaryTemplate, igm) } + +<% unless version == 'ga' -%> +func testAccInstanceGroupManager_stateful(template, target, igm, hck string) string { + return fmt.Sprintf(` +data "google_compute_image" "my_image" { + family = "debian-9" + project = "debian-cloud" +} + +resource "google_compute_instance_template" "igm-basic" { + name = "%s" + machine_type = "n1-standard-1" + can_ip_forward = false + tags = ["foo", "bar"] + disk { + source_image = data.google_compute_image.my_image.self_link + auto_delete = true + boot = true + device_name = "my-stateful-disk" + } + + disk { + source_image = data.google_compute_image.my_image.self_link + auto_delete = true + device_name = "non-stateful" + } + + disk { + source_image = data.google_compute_image.my_image.self_link + auto_delete = true + device_name = "my-stateful-disk2" + } + + network_interface { + network = "default" + } + + service_account { + scopes = ["userinfo-email", "compute-ro", "storage-ro"] + } +} + +resource "google_compute_target_pool" "igm-basic" { + description = "Resource created for Terraform acceptance testing" + name = "%s" + session_affinity = "CLIENT_IP_PROTO" +} + +resource "google_compute_instance_group_manager" "igm-basic" { + description = "Terraform test instance group manager" + name = "%s" + version { + instance_template = google_compute_instance_template.igm-basic.self_link + name = "prod" + } + target_pools = [google_compute_target_pool.igm-basic.self_link] + base_instance_name = "igm-basic" + zone = "us-central1-c" + target_size = 2 + stateful_disk { + device_name = "my-stateful-disk" + delete_rule = "NEVER" + } +} + +resource "google_compute_http_health_check" "zero" { + name = "%s" + request_path = "/" + check_interval_sec = 1 + timeout_sec = 1 +} +`, template, target, igm, hck) +} + +func testAccInstanceGroupManager_statefulUpdated(template, target, igm, hck string) string { + return fmt.Sprintf(` +data "google_compute_image" "my_image" { + family = "debian-9" + project = "debian-cloud" +} + +resource "google_compute_instance_template" "igm-basic" { + name = "%s" + machine_type = "n1-standard-1" + can_ip_forward = false + tags = ["foo", "bar"] + disk { + source_image = data.google_compute_image.my_image.self_link + auto_delete = true + boot = true + device_name = "my-stateful-disk" + } + + disk { + source_image = data.google_compute_image.my_image.self_link + auto_delete = true + device_name = "non-stateful" + } + + disk { + source_image = data.google_compute_image.my_image.self_link + auto_delete = true + device_name = "my-stateful-disk2" + } + + network_interface { + network = "default" + } + + service_account { + scopes = ["userinfo-email", "compute-ro", "storage-ro"] + } +} + +resource "google_compute_target_pool" "igm-basic" { + description = "Resource created for Terraform acceptance testing" + 
name = "%s" + session_affinity = "CLIENT_IP_PROTO" +} + +resource "google_compute_instance_group_manager" "igm-basic" { + description = "Terraform test instance group manager" + name = "%s" + version { + instance_template = google_compute_instance_template.igm-basic.self_link + name = "prod" + } + target_pools = [google_compute_target_pool.igm-basic.self_link] + base_instance_name = "igm-basic" + zone = "us-central1-c" + target_size = 2 + stateful_disk { + device_name = "my-stateful-disk" + delete_rule = "NEVER" + } + + stateful_disk { + device_name = "my-stateful-disk2" + delete_rule = "ON_PERMANENT_INSTANCE_DELETION" + } +} + +resource "google_compute_http_health_check" "zero" { + name = "%s" + request_path = "/" + check_interval_sec = 1 + timeout_sec = 1 +} +`, template, target, igm, hck) +} +<% end -%> diff --git a/third_party/terraform/tests/resource_compute_instance_group_test.go b/third_party/terraform/tests/resource_compute_instance_group_test.go index 42bd99ae72cb..afe10ab3b138 100644 --- a/third_party/terraform/tests/resource_compute_instance_group_test.go +++ b/third_party/terraform/tests/resource_compute_instance_group_test.go @@ -6,7 +6,6 @@ import ( "google.golang.org/api/compute/v1" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" ) @@ -16,21 +15,21 @@ func TestAccComputeInstanceGroup_basic(t *testing.T) { var instanceGroup compute.InstanceGroup var resourceName = "google_compute_instance_group.basic" - var instanceName = fmt.Sprintf("instancegroup-test-%s", acctest.RandString(10)) + var instanceName = fmt.Sprintf("tf-test-%s", randString(t, 10)) var zone = "us-central1-c" - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccComputeInstanceGroup_destroy, + CheckDestroy: testAccComputeInstanceGroup_destroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeInstanceGroup_basic(zone, instanceName), Check: resource.ComposeTestCheckFunc( testAccComputeInstanceGroup_exists( - "google_compute_instance_group.basic", &instanceGroup), + t, "google_compute_instance_group.basic", &instanceGroup), testAccComputeInstanceGroup_exists( - "google_compute_instance_group.empty", &instanceGroup), + t, "google_compute_instance_group.empty", &instanceGroup), ), }, { @@ -51,15 +50,15 @@ func TestAccComputeInstanceGroup_basic(t *testing.T) { func TestAccComputeInstanceGroup_rename(t *testing.T) { t.Parallel() - var instanceName = fmt.Sprintf("instancegroup-test-%s", acctest.RandString(10)) - var instanceGroupName = fmt.Sprintf("instancegroup-test-%s", acctest.RandString(10)) - var backendName = fmt.Sprintf("instancegroup-test-%s", acctest.RandString(10)) - var healthName = fmt.Sprintf("instancegroup-test-%s", acctest.RandString(10)) + var instanceName = fmt.Sprintf("tf-test-%s", randString(t, 10)) + var instanceGroupName = fmt.Sprintf("tf-test-%s", randString(t, 10)) + var backendName = fmt.Sprintf("tf-test-%s", randString(t, 10)) + var healthName = fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccComputeInstanceGroup_destroy, + CheckDestroy: testAccComputeInstanceGroup_destroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeInstanceGroup_rename(instanceName, instanceGroupName, 
backendName, healthName), @@ -85,19 +84,20 @@ func TestAccComputeInstanceGroup_update(t *testing.T) { t.Parallel() var instanceGroup compute.InstanceGroup - var instanceName = fmt.Sprintf("instancegroup-test-%s", acctest.RandString(10)) + var instanceName = fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccComputeInstanceGroup_destroy, + CheckDestroy: testAccComputeInstanceGroup_destroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeInstanceGroup_update(instanceName), Check: resource.ComposeTestCheckFunc( testAccComputeInstanceGroup_exists( - "google_compute_instance_group.update", &instanceGroup), + t, "google_compute_instance_group.update", &instanceGroup), testAccComputeInstanceGroup_named_ports( + t, "google_compute_instance_group.update", map[string]int64{"http": 8080, "https": 8443}, &instanceGroup), @@ -107,10 +107,11 @@ func TestAccComputeInstanceGroup_update(t *testing.T) { Config: testAccComputeInstanceGroup_update2(instanceName), Check: resource.ComposeTestCheckFunc( testAccComputeInstanceGroup_exists( - "google_compute_instance_group.update", &instanceGroup), + t, "google_compute_instance_group.update", &instanceGroup), testAccComputeInstanceGroup_updated( - "google_compute_instance_group.update", 1, &instanceGroup), + t, "google_compute_instance_group.update", 1, &instanceGroup), testAccComputeInstanceGroup_named_ports( + t, "google_compute_instance_group.update", map[string]int64{"http": 8081, "test": 8444}, &instanceGroup), @@ -124,18 +125,18 @@ func TestAccComputeInstanceGroup_outOfOrderInstances(t *testing.T) { t.Parallel() var instanceGroup compute.InstanceGroup - var instanceName = fmt.Sprintf("instancegroup-test-%s", acctest.RandString(10)) + var instanceName = fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccComputeInstanceGroup_destroy, + CheckDestroy: testAccComputeInstanceGroup_destroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeInstanceGroup_outOfOrderInstances(instanceName), Check: resource.ComposeTestCheckFunc( testAccComputeInstanceGroup_exists( - "google_compute_instance_group.group", &instanceGroup), + t, "google_compute_instance_group.group", &instanceGroup), ), }, }, @@ -146,48 +147,50 @@ func TestAccComputeInstanceGroup_network(t *testing.T) { t.Parallel() var instanceGroup compute.InstanceGroup - var instanceName = fmt.Sprintf("instancegroup-test-%s", acctest.RandString(10)) + var instanceName = fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccComputeInstanceGroup_destroy, + CheckDestroy: testAccComputeInstanceGroup_destroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeInstanceGroup_network(instanceName), Check: resource.ComposeTestCheckFunc( testAccComputeInstanceGroup_exists( - "google_compute_instance_group.with_instance", &instanceGroup), + t, "google_compute_instance_group.with_instance", &instanceGroup), testAccComputeInstanceGroup_hasCorrectNetwork( - "google_compute_instance_group.with_instance", "google_compute_network.ig_network", &instanceGroup), + t, "google_compute_instance_group.with_instance", 
"google_compute_network.ig_network", &instanceGroup), testAccComputeInstanceGroup_exists( - "google_compute_instance_group.without_instance", &instanceGroup), + t, "google_compute_instance_group.without_instance", &instanceGroup), testAccComputeInstanceGroup_hasCorrectNetwork( - "google_compute_instance_group.without_instance", "google_compute_network.ig_network", &instanceGroup), + t, "google_compute_instance_group.without_instance", "google_compute_network.ig_network", &instanceGroup), ), }, }, }) } -func testAccComputeInstanceGroup_destroy(s *terraform.State) error { - config := testAccProvider.Meta().(*Config) +func testAccComputeInstanceGroup_destroyProducer(t *testing.T) func(s *terraform.State) error { + return func(s *terraform.State) error { + config := googleProviderConfig(t) - for _, rs := range s.RootModule().Resources { - if rs.Type != "google_compute_instance_group" { - continue - } - _, err := config.clientCompute.InstanceGroups.Get( - config.Project, rs.Primary.Attributes["zone"], rs.Primary.Attributes["name"]).Do() - if err == nil { - return fmt.Errorf("InstanceGroup still exists") + for _, rs := range s.RootModule().Resources { + if rs.Type != "google_compute_instance_group" { + continue + } + _, err := config.clientCompute.InstanceGroups.Get( + config.Project, rs.Primary.Attributes["zone"], rs.Primary.Attributes["name"]).Do() + if err == nil { + return fmt.Errorf("InstanceGroup still exists") + } } - } - return nil + return nil + } } -func testAccComputeInstanceGroup_exists(n string, instanceGroup *compute.InstanceGroup) resource.TestCheckFunc { +func testAccComputeInstanceGroup_exists(t *testing.T, n string, instanceGroup *compute.InstanceGroup) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] if !ok { @@ -198,7 +201,7 @@ func testAccComputeInstanceGroup_exists(n string, instanceGroup *compute.Instanc return fmt.Errorf("No ID is set") } - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) found, err := config.clientCompute.InstanceGroups.Get( config.Project, rs.Primary.Attributes["zone"], rs.Primary.Attributes["name"]).Do() @@ -212,7 +215,7 @@ func testAccComputeInstanceGroup_exists(n string, instanceGroup *compute.Instanc } } -func testAccComputeInstanceGroup_updated(n string, size int64, instanceGroup *compute.InstanceGroup) resource.TestCheckFunc { +func testAccComputeInstanceGroup_updated(t *testing.T, n string, size int64, instanceGroup *compute.InstanceGroup) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] if !ok { @@ -223,7 +226,7 @@ func testAccComputeInstanceGroup_updated(n string, size int64, instanceGroup *co return fmt.Errorf("No ID is set") } - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) instanceGroup, err := config.clientCompute.InstanceGroups.Get( config.Project, rs.Primary.Attributes["zone"], rs.Primary.Attributes["name"]).Do() @@ -241,7 +244,7 @@ func testAccComputeInstanceGroup_updated(n string, size int64, instanceGroup *co } } -func testAccComputeInstanceGroup_named_ports(n string, np map[string]int64, instanceGroup *compute.InstanceGroup) resource.TestCheckFunc { +func testAccComputeInstanceGroup_named_ports(t *testing.T, n string, np map[string]int64, instanceGroup *compute.InstanceGroup) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] if !ok { @@ -252,7 +255,7 @@ func testAccComputeInstanceGroup_named_ports(n string, np 
map[string]int64, inst return fmt.Errorf("No ID is set") } - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) instanceGroup, err := config.clientCompute.InstanceGroups.Get( config.Project, rs.Primary.Attributes["zone"], rs.Primary.Attributes["name"]).Do() @@ -277,9 +280,9 @@ func testAccComputeInstanceGroup_named_ports(n string, np map[string]int64, inst } } -func testAccComputeInstanceGroup_hasCorrectNetwork(nInstanceGroup string, nNetwork string, instanceGroup *compute.InstanceGroup) resource.TestCheckFunc { +func testAccComputeInstanceGroup_hasCorrectNetwork(t *testing.T, nInstanceGroup string, nNetwork string, instanceGroup *compute.InstanceGroup) resource.TestCheckFunc { return func(s *terraform.State) error { - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) rsInstanceGroup, ok := s.RootModule().Resources[nInstanceGroup] if !ok { @@ -343,7 +346,7 @@ resource "google_compute_instance_group" "basic" { description = "Terraform test instance group" name = "%s" zone = "%s" - instances = [google_compute_instance.ig_instance.self_link] + instances = [google_compute_instance.ig_instance.id] named_port { name = "http" port = "8080" diff --git a/third_party/terraform/tests/resource_compute_instance_migrate_test.go b/third_party/terraform/tests/resource_compute_instance_migrate_test.go index 3f04f42d5a10..3790cc9e8b74 100644 --- a/third_party/terraform/tests/resource_compute_instance_migrate_test.go +++ b/third_party/terraform/tests/resource_compute_instance_migrate_test.go @@ -7,10 +7,10 @@ import ( "os" "strings" "testing" + "time" "google.golang.org/api/compute/v1" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" ) @@ -85,7 +85,7 @@ func TestAccComputeInstanceMigrateState(t *testing.T) { config := getInitializedConfig(t) - instanceName := fmt.Sprintf("instance-test-%s", acctest.RandString(10)) + instanceName := fmt.Sprintf("instance-test-%s", randString(t, 10)) instance := &compute.Instance{ Name: instanceName, Disks: []*compute.AttachedDisk{ @@ -108,7 +108,7 @@ func TestAccComputeInstanceMigrateState(t *testing.T) { if err != nil { t.Fatalf("Error creating instance: %s", err) } - waitErr := computeOperationWait(config, op, config.Project, "instance to create") + waitErr := computeOperationWaitTime(config, op, config.Project, "instance to create", 4*time.Minute) if waitErr != nil { t.Fatal(waitErr) } @@ -157,7 +157,7 @@ func TestAccComputeInstanceMigrateState_bootDisk(t *testing.T) { zone := "us-central1-f" // Seed test data - instanceName := fmt.Sprintf("instance-test-%s", acctest.RandString(10)) + instanceName := fmt.Sprintf("instance-test-%s", randString(t, 10)) instance := &compute.Instance{ Name: instanceName, Disks: []*compute.AttachedDisk{ @@ -181,7 +181,7 @@ func TestAccComputeInstanceMigrateState_bootDisk(t *testing.T) { if err != nil { t.Fatalf("Error creating instance: %s", err) } - waitErr := computeOperationWait(config, op, config.Project, "instance to create") + waitErr := computeOperationWaitTime(config, op, config.Project, "instance to create", 4*time.Minute) if waitErr != nil { t.Fatal(waitErr) } @@ -225,7 +225,7 @@ func TestAccComputeInstanceMigrateState_v4FixBootDisk(t *testing.T) { zone := "us-central1-f" // Seed test data - instanceName := fmt.Sprintf("instance-test-%s", acctest.RandString(10)) + instanceName := fmt.Sprintf("instance-test-%s", randString(t, 10)) instance := 
&compute.Instance{ Name: instanceName, Disks: []*compute.AttachedDisk{ @@ -249,7 +249,7 @@ func TestAccComputeInstanceMigrateState_v4FixBootDisk(t *testing.T) { if err != nil { t.Fatalf("Error creating instance: %s", err) } - waitErr := computeOperationWait(config, op, config.Project, "instance to create") + waitErr := computeOperationWaitTime(config, op, config.Project, "instance to create", 4*time.Minute) if waitErr != nil { t.Fatal(waitErr) } @@ -292,7 +292,7 @@ func TestAccComputeInstanceMigrateState_attachedDiskFromSource(t *testing.T) { zone := "us-central1-f" // Seed test data - diskName := fmt.Sprintf("instance-test-%s", acctest.RandString(10)) + diskName := fmt.Sprintf("instance-test-%s", randString(t, 10)) disk := &compute.Disk{ Name: diskName, SourceImage: "projects/debian-cloud/global/images/family/debian-9", @@ -302,13 +302,13 @@ func TestAccComputeInstanceMigrateState_attachedDiskFromSource(t *testing.T) { if err != nil { t.Fatalf("Error creating disk: %s", err) } - waitErr := computeOperationWait(config, op, config.Project, "disk to create") + waitErr := computeOperationWaitTime(config, op, config.Project, "disk to create", 4*time.Minute) if waitErr != nil { t.Fatal(waitErr) } defer cleanUpDisk(config, diskName, zone) - instanceName := fmt.Sprintf("instance-test-%s", acctest.RandString(10)) + instanceName := fmt.Sprintf("instance-test-%s", randString(t, 10)) instance := &compute.Instance{ Name: instanceName, Disks: []*compute.AttachedDisk{ @@ -334,7 +334,7 @@ func TestAccComputeInstanceMigrateState_attachedDiskFromSource(t *testing.T) { if err != nil { t.Fatalf("Error creating instance: %s", err) } - waitErr = computeOperationWait(config, op, config.Project, "instance to create") + waitErr = computeOperationWaitTime(config, op, config.Project, "instance to create", 4*time.Minute) if waitErr != nil { t.Fatal(waitErr) } @@ -373,7 +373,7 @@ func TestAccComputeInstanceMigrateState_v4FixAttachedDiskFromSource(t *testing.T zone := "us-central1-f" // Seed test data - diskName := fmt.Sprintf("instance-test-%s", acctest.RandString(10)) + diskName := fmt.Sprintf("instance-test-%s", randString(t, 10)) disk := &compute.Disk{ Name: diskName, SourceImage: "projects/debian-cloud/global/images/family/debian-9", @@ -383,13 +383,13 @@ func TestAccComputeInstanceMigrateState_v4FixAttachedDiskFromSource(t *testing.T if err != nil { t.Fatalf("Error creating disk: %s", err) } - waitErr := computeOperationWait(config, op, config.Project, "disk to create") + waitErr := computeOperationWaitTime(config, op, config.Project, "disk to create", 4*time.Minute) if waitErr != nil { t.Fatal(waitErr) } defer cleanUpDisk(config, diskName, zone) - instanceName := fmt.Sprintf("instance-test-%s", acctest.RandString(10)) + instanceName := fmt.Sprintf("instance-test-%s", randString(t, 10)) instance := &compute.Instance{ Name: instanceName, Disks: []*compute.AttachedDisk{ @@ -415,7 +415,7 @@ func TestAccComputeInstanceMigrateState_v4FixAttachedDiskFromSource(t *testing.T if err != nil { t.Fatalf("Error creating instance: %s", err) } - waitErr = computeOperationWait(config, op, config.Project, "instance to create") + waitErr = computeOperationWaitTime(config, op, config.Project, "instance to create", 4*time.Minute) if waitErr != nil { t.Fatal(waitErr) } @@ -452,7 +452,7 @@ func TestAccComputeInstanceMigrateState_attachedDiskFromEncryptionKey(t *testing config := getInitializedConfig(t) zone := "us-central1-f" - instanceName := fmt.Sprintf("instance-test-%s", acctest.RandString(10)) + instanceName := 
fmt.Sprintf("instance-test-%s", randString(t, 10)) instance := &compute.Instance{ Name: instanceName, Disks: []*compute.AttachedDisk{ @@ -484,7 +484,7 @@ func TestAccComputeInstanceMigrateState_attachedDiskFromEncryptionKey(t *testing if err != nil { t.Fatalf("Error creating instance: %s", err) } - waitErr := computeOperationWait(config, op, config.Project, "instance to create") + waitErr := computeOperationWaitTime(config, op, config.Project, "instance to create", 4*time.Minute) if waitErr != nil { t.Fatal(waitErr) } @@ -521,7 +521,7 @@ func TestAccComputeInstanceMigrateState_v4FixAttachedDiskFromEncryptionKey(t *te config := getInitializedConfig(t) zone := "us-central1-f" - instanceName := fmt.Sprintf("instance-test-%s", acctest.RandString(10)) + instanceName := fmt.Sprintf("instance-test-%s", randString(t, 10)) instance := &compute.Instance{ Name: instanceName, Disks: []*compute.AttachedDisk{ @@ -553,7 +553,7 @@ func TestAccComputeInstanceMigrateState_v4FixAttachedDiskFromEncryptionKey(t *te if err != nil { t.Fatalf("Error creating instance: %s", err) } - waitErr := computeOperationWait(config, op, config.Project, "instance to create") + waitErr := computeOperationWaitTime(config, op, config.Project, "instance to create", 4*time.Minute) if waitErr != nil { t.Fatal(waitErr) } @@ -589,7 +589,7 @@ func TestAccComputeInstanceMigrateState_attachedDiskFromAutoDeleteAndImage(t *te config := getInitializedConfig(t) zone := "us-central1-f" - instanceName := fmt.Sprintf("instance-test-%s", acctest.RandString(10)) + instanceName := fmt.Sprintf("instance-test-%s", randString(t, 10)) instance := &compute.Instance{ Name: instanceName, Disks: []*compute.AttachedDisk{ @@ -624,7 +624,7 @@ func TestAccComputeInstanceMigrateState_attachedDiskFromAutoDeleteAndImage(t *te if err != nil { t.Fatalf("Error creating instance: %s", err) } - waitErr := computeOperationWait(config, op, config.Project, "instance to create") + waitErr := computeOperationWaitTime(config, op, config.Project, "instance to create", 4*time.Minute) if waitErr != nil { t.Fatal(waitErr) } @@ -662,7 +662,7 @@ func TestAccComputeInstanceMigrateState_v4FixAttachedDiskFromAutoDeleteAndImage( config := getInitializedConfig(t) zone := "us-central1-f" - instanceName := fmt.Sprintf("instance-test-%s", acctest.RandString(10)) + instanceName := fmt.Sprintf("instance-test-%s", randString(t, 10)) instance := &compute.Instance{ Name: instanceName, Disks: []*compute.AttachedDisk{ @@ -697,7 +697,7 @@ func TestAccComputeInstanceMigrateState_v4FixAttachedDiskFromAutoDeleteAndImage( if err != nil { t.Fatalf("Error creating instance: %s", err) } - waitErr := computeOperationWait(config, op, config.Project, "instance to create") + waitErr := computeOperationWaitTime(config, op, config.Project, "instance to create", 4*time.Minute) if waitErr != nil { t.Fatal(waitErr) } @@ -735,7 +735,7 @@ func TestAccComputeInstanceMigrateState_scratchDisk(t *testing.T) { zone := "us-central1-f" // Seed test data - instanceName := fmt.Sprintf("instance-test-%s", acctest.RandString(10)) + instanceName := fmt.Sprintf("instance-test-%s", randString(t, 10)) instance := &compute.Instance{ Name: instanceName, Disks: []*compute.AttachedDisk{ @@ -765,7 +765,7 @@ func TestAccComputeInstanceMigrateState_scratchDisk(t *testing.T) { if err != nil { t.Fatalf("Error creating instance: %s", err) } - waitErr := computeOperationWait(config, op, config.Project, "instance to create") + waitErr := computeOperationWaitTime(config, op, config.Project, "instance to create", 4*time.Minute) if waitErr 
!= nil { t.Fatal(waitErr) } @@ -800,7 +800,7 @@ func TestAccComputeInstanceMigrateState_v4FixScratchDisk(t *testing.T) { zone := "us-central1-f" // Seed test data - instanceName := fmt.Sprintf("instance-test-%s", acctest.RandString(10)) + instanceName := fmt.Sprintf("instance-test-%s", randString(t, 10)) instance := &compute.Instance{ Name: instanceName, Disks: []*compute.AttachedDisk{ @@ -830,7 +830,7 @@ func TestAccComputeInstanceMigrateState_v4FixScratchDisk(t *testing.T) { if err != nil { t.Fatalf("Error creating instance: %s", err) } - waitErr := computeOperationWait(config, op, config.Project, "instance to create") + waitErr := computeOperationWaitTime(config, op, config.Project, "instance to create", 4*time.Minute) if waitErr != nil { t.Fatal(waitErr) } @@ -909,7 +909,7 @@ func cleanUpInstance(config *Config, instanceName, zone string) { } // Wait for the operation to complete - opErr := computeOperationWait(config, op, config.Project, "instance to delete") + opErr := computeOperationWaitTime(config, op, config.Project, "instance to delete", 4*time.Minute) if opErr != nil { log.Printf("[WARNING] Error deleting instance %q, dangling resources may exist: %s", instanceName, opErr) } @@ -923,13 +923,15 @@ func cleanUpDisk(config *Config, diskName, zone string) { } // Wait for the operation to complete - opErr := computeOperationWait(config, op, config.Project, "disk to delete") + opErr := computeOperationWaitTime(config, op, config.Project, "disk to delete", 4*time.Minute) if opErr != nil { log.Printf("[WARNING] Error deleting disk %q, dangling resources may exist: %s", diskName, opErr) } } func getInitializedConfig(t *testing.T) *Config { + // Migrate tests are non standard and handle the config directly + skipIfVcr(t) // Check that all required environment variables are set testAccPreCheck(t) diff --git a/third_party/terraform/tests/resource_compute_instance_template_test.go b/third_party/terraform/tests/resource_compute_instance_template_test.go.erb similarity index 83% rename from third_party/terraform/tests/resource_compute_instance_template_test.go rename to third_party/terraform/tests/resource_compute_instance_template_test.go.erb index aff6642b3847..b0746b0e3e38 100644 --- a/third_party/terraform/tests/resource_compute_instance_template_test.go +++ b/third_party/terraform/tests/resource_compute_instance_template_test.go.erb @@ -1,3 +1,5 @@ +// <% autogen_exception -%> + package google import ( @@ -8,7 +10,6 @@ import ( "testing" "time" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" computeBeta "google.golang.org/api/compute/v0.beta" @@ -214,16 +215,16 @@ func TestAccComputeInstanceTemplate_basic(t *testing.T) { var instanceTemplate computeBeta.InstanceTemplate - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeInstanceTemplateDestroy, + CheckDestroy: testAccCheckComputeInstanceTemplateDestroyProducer(t), Steps: []resource.TestStep{ { - Config: testAccComputeInstanceTemplate_basic(), + Config: testAccComputeInstanceTemplate_basic(randString(t, 10)), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceTemplateExists( - "google_compute_instance_template.foobar", &instanceTemplate), + t, "google_compute_instance_template.foobar", &instanceTemplate), testAccCheckComputeInstanceTemplateTag(&instanceTemplate, "foo"), 
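Every test in these hunks gets the same three-part conversion: `resource.Test` becomes `vcrTest`, `acctest.RandString(10)` becomes the test-scoped `randString(t, 10)`, and the package-level `CheckDestroy` function becomes a producer that closes over `*testing.T`, so the check can call `googleProviderConfig(t)` instead of reaching into a shared `testAccProvider`. Pulled out of the diff noise, the converted shape looks roughly like this (a sketch; `ComputeWidget` and its helpers are hypothetical stand-ins, while `vcrTest`, `randString`, and `testAccPreCheck` are the real helpers this diff migrates to):

```go
// Sketch of the converted test shape used throughout this diff.
func TestAccComputeWidget_basic(t *testing.T) {
	t.Parallel()

	// randString draws from a per-test source, so replayed VCR runs see
	// the same "random" names that were recorded.
	name := fmt.Sprintf("tf-test-%s", randString(t, 10))

	vcrTest(t, resource.TestCase{
		PreCheck:  func() { testAccPreCheck(t) },
		Providers: testAccProviders,
		// The producer closes over t so the destroy check can fetch the
		// per-test (possibly VCR-backed) provider config.
		CheckDestroy: testAccCheckComputeWidgetDestroyProducer(t),
		Steps: []resource.TestStep{
			{Config: testAccComputeWidget_basic(name)},
		},
	})
}
```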
testAccCheckComputeInstanceTemplateMetadata(&instanceTemplate, "foo", "bar"), testAccCheckComputeInstanceTemplateContainsLabel(&instanceTemplate, "my_label", "foobar"), @@ -244,16 +245,16 @@ func TestAccComputeInstanceTemplate_imageShorthand(t *testing.T) { var instanceTemplate compute.InstanceTemplate - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeInstanceTemplateDestroy, + CheckDestroy: testAccCheckComputeInstanceTemplateDestroyProducer(t), Steps: []resource.TestStep{ { - Config: testAccComputeInstanceTemplate_imageShorthand(), + Config: testAccComputeInstanceTemplate_imageShorthand(randString(t, 10)), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceTemplateExists( - "google_compute_instance_template.foobar", &instanceTemplate), + t, "google_compute_instance_template.foobar", &instanceTemplate), ), }, { @@ -270,16 +271,16 @@ func TestAccComputeInstanceTemplate_preemptible(t *testing.T) { var instanceTemplate compute.InstanceTemplate - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeInstanceTemplateDestroy, + CheckDestroy: testAccCheckComputeInstanceTemplateDestroyProducer(t), Steps: []resource.TestStep{ { - Config: testAccComputeInstanceTemplate_preemptible(), + Config: testAccComputeInstanceTemplate_preemptible(randString(t, 10)), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceTemplateExists( - "google_compute_instance_template.foobar", &instanceTemplate), + t, "google_compute_instance_template.foobar", &instanceTemplate), testAccCheckComputeInstanceTemplateAutomaticRestart(&instanceTemplate, false), testAccCheckComputeInstanceTemplatePreemptible(&instanceTemplate, true), ), @@ -298,16 +299,16 @@ func TestAccComputeInstanceTemplate_IP(t *testing.T) { var instanceTemplate compute.InstanceTemplate - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeInstanceTemplateDestroy, + CheckDestroy: testAccCheckComputeInstanceTemplateDestroyProducer(t), Steps: []resource.TestStep{ { - Config: testAccComputeInstanceTemplate_ip(), + Config: testAccComputeInstanceTemplate_ip(randString(t, 10)), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceTemplateExists( - "google_compute_instance_template.foobar", &instanceTemplate), + t, "google_compute_instance_template.foobar", &instanceTemplate), testAccCheckComputeInstanceTemplateNetwork(&instanceTemplate), ), }, @@ -323,13 +324,13 @@ func TestAccComputeInstanceTemplate_IP(t *testing.T) { func TestAccComputeInstanceTemplate_networkTier(t *testing.T) { t.Parallel() - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeInstanceTemplateDestroy, + CheckDestroy: testAccCheckComputeInstanceTemplateDestroyProducer(t), Steps: []resource.TestStep{ { - Config: testAccComputeInstanceTemplate_networkTier(), + Config: testAccComputeInstanceTemplate_networkTier(randString(t, 10)), }, { ResourceName: "google_compute_instance_template.foobar", @@ -346,16 +347,16 @@ func TestAccComputeInstanceTemplate_networkIP(t *testing.T) { var instanceTemplate compute.InstanceTemplate networkIP := "10.128.0.2" - resource.Test(t, resource.TestCase{ + 
vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeInstanceTemplateDestroy, + CheckDestroy: testAccCheckComputeInstanceTemplateDestroyProducer(t), Steps: []resource.TestStep{ { - Config: testAccComputeInstanceTemplate_networkIP(networkIP), + Config: testAccComputeInstanceTemplate_networkIP(randString(t, 10), networkIP), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceTemplateExists( - "google_compute_instance_template.foobar", &instanceTemplate), + t, "google_compute_instance_template.foobar", &instanceTemplate), testAccCheckComputeInstanceTemplateNetwork(&instanceTemplate), testAccCheckComputeInstanceTemplateNetworkIP( "google_compute_instance_template.foobar", networkIP, &instanceTemplate), @@ -376,16 +377,16 @@ func TestAccComputeInstanceTemplate_networkIPAddress(t *testing.T) { var instanceTemplate compute.InstanceTemplate ipAddress := "10.128.0.2" - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeInstanceTemplateDestroy, + CheckDestroy: testAccCheckComputeInstanceTemplateDestroyProducer(t), Steps: []resource.TestStep{ { - Config: testAccComputeInstanceTemplate_networkIPAddress(ipAddress), + Config: testAccComputeInstanceTemplate_networkIPAddress(randString(t, 10), ipAddress), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceTemplateExists( - "google_compute_instance_template.foobar", &instanceTemplate), + t, "google_compute_instance_template.foobar", &instanceTemplate), testAccCheckComputeInstanceTemplateNetwork(&instanceTemplate), testAccCheckComputeInstanceTemplateNetworkIPAddress( "google_compute_instance_template.foobar", ipAddress, &instanceTemplate), @@ -403,13 +404,13 @@ func TestAccComputeInstanceTemplate_networkIPAddress(t *testing.T) { func TestAccComputeInstanceTemplate_disks(t *testing.T) { t.Parallel() - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeInstanceTemplateDestroy, + CheckDestroy: testAccCheckComputeInstanceTemplateDestroyProducer(t), Steps: []resource.TestStep{ { - Config: testAccComputeInstanceTemplate_disks(), + Config: testAccComputeInstanceTemplate_disks(randString(t, 10)), }, { ResourceName: "google_compute_instance_template.foobar", @@ -423,13 +424,13 @@ func TestAccComputeInstanceTemplate_disks(t *testing.T) { func TestAccComputeInstanceTemplate_disksInvalid(t *testing.T) { t.Parallel() - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeInstanceTemplateDestroy, + CheckDestroy: testAccCheckComputeInstanceTemplateDestroyProducer(t), Steps: []resource.TestStep{ { - Config: testAccComputeInstanceTemplate_disksInvalid(), + Config: testAccComputeInstanceTemplate_disksInvalid(randString(t, 10)), ExpectError: regexp.MustCompile("Cannot use `source`.*"), }, }, @@ -439,13 +440,13 @@ func TestAccComputeInstanceTemplate_disksInvalid(t *testing.T) { func TestAccComputeInstanceTemplate_regionDisks(t *testing.T) { t.Parallel() - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeInstanceTemplateDestroy, + CheckDestroy: 
testAccCheckComputeInstanceTemplateDestroyProducer(t), Steps: []resource.TestStep{ { - Config: testAccComputeInstanceTemplate_regionDisks(), + Config: testAccComputeInstanceTemplate_regionDisks(randString(t, 10)), }, { ResourceName: "google_compute_instance_template.foobar", @@ -460,18 +461,18 @@ func TestAccComputeInstanceTemplate_subnet_auto(t *testing.T) { t.Parallel() var instanceTemplate compute.InstanceTemplate - network := "network-" + acctest.RandString(10) + network := "network-" + randString(t, 10) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeInstanceTemplateDestroy, + CheckDestroy: testAccCheckComputeInstanceTemplateDestroyProducer(t), Steps: []resource.TestStep{ { - Config: testAccComputeInstanceTemplate_subnet_auto(network), + Config: testAccComputeInstanceTemplate_subnet_auto(network, randString(t, 10)), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceTemplateExists( - "google_compute_instance_template.foobar", &instanceTemplate), + t, "google_compute_instance_template.foobar", &instanceTemplate), testAccCheckComputeInstanceTemplateNetworkName(&instanceTemplate, network), ), }, @@ -489,16 +490,16 @@ func TestAccComputeInstanceTemplate_subnet_custom(t *testing.T) { var instanceTemplate compute.InstanceTemplate - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeInstanceTemplateDestroy, + CheckDestroy: testAccCheckComputeInstanceTemplateDestroyProducer(t), Steps: []resource.TestStep{ { - Config: testAccComputeInstanceTemplate_subnet_custom(), + Config: testAccComputeInstanceTemplate_subnet_custom(randString(t, 10)), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceTemplateExists( - "google_compute_instance_template.foobar", &instanceTemplate), + t, "google_compute_instance_template.foobar", &instanceTemplate), testAccCheckComputeInstanceTemplateSubnetwork(&instanceTemplate), ), }, @@ -512,6 +513,8 @@ func TestAccComputeInstanceTemplate_subnet_custom(t *testing.T) { } func TestAccComputeInstanceTemplate_subnet_xpn(t *testing.T) { + // Randomness + skipIfVcr(t) t.Parallel() var instanceTemplate compute.InstanceTemplate @@ -519,16 +522,16 @@ func TestAccComputeInstanceTemplate_subnet_xpn(t *testing.T) { billingId := getTestBillingAccountFromEnv(t) projectName := fmt.Sprintf("tf-testxpn-%d", time.Now().Unix()) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeInstanceTemplateDestroy, + CheckDestroy: testAccCheckComputeInstanceTemplateDestroyProducer(t), Steps: []resource.TestStep{ { - Config: testAccComputeInstanceTemplate_subnet_xpn(org, billingId, projectName), + Config: testAccComputeInstanceTemplate_subnet_xpn(org, billingId, projectName, randString(t, 10)), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceTemplateExistsInProject( - "google_compute_instance_template.foobar", fmt.Sprintf("%s-service", projectName), + t, "google_compute_instance_template.foobar", fmt.Sprintf("%s-service", projectName), &instanceTemplate), testAccCheckComputeInstanceTemplateSubnetwork(&instanceTemplate), ), @@ -542,16 +545,16 @@ func TestAccComputeInstanceTemplate_metadata_startup_script(t *testing.T) { var instanceTemplate compute.InstanceTemplate - resource.Test(t, 
resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeInstanceTemplateDestroy, + CheckDestroy: testAccCheckComputeInstanceTemplateDestroyProducer(t), Steps: []resource.TestStep{ { - Config: testAccComputeInstanceTemplate_startup_script(), + Config: testAccComputeInstanceTemplate_startup_script(randString(t, 10)), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceTemplateExists( - "google_compute_instance_template.foobar", &instanceTemplate), + t, "google_compute_instance_template.foobar", &instanceTemplate), testAccCheckComputeInstanceTemplateStartupScript(&instanceTemplate, "echo 'Hello'"), ), }, @@ -564,15 +567,15 @@ func TestAccComputeInstanceTemplate_primaryAliasIpRange(t *testing.T) { var instanceTemplate compute.InstanceTemplate - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeInstanceTemplateDestroy, + CheckDestroy: testAccCheckComputeInstanceTemplateDestroyProducer(t), Steps: []resource.TestStep{ { - Config: testAccComputeInstanceTemplate_primaryAliasIpRange(acctest.RandString(10)), + Config: testAccComputeInstanceTemplate_primaryAliasIpRange(randString(t, 10)), Check: resource.ComposeTestCheckFunc( - testAccCheckComputeInstanceTemplateExists("google_compute_instance_template.foobar", &instanceTemplate), + testAccCheckComputeInstanceTemplateExists(t, "google_compute_instance_template.foobar", &instanceTemplate), testAccCheckComputeInstanceTemplateHasAliasIpRange(&instanceTemplate, "", "/24"), ), }, @@ -590,15 +593,15 @@ func TestAccComputeInstanceTemplate_secondaryAliasIpRange(t *testing.T) { var instanceTemplate compute.InstanceTemplate - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeInstanceTemplateDestroy, + CheckDestroy: testAccCheckComputeInstanceTemplateDestroyProducer(t), Steps: []resource.TestStep{ { - Config: testAccComputeInstanceTemplate_secondaryAliasIpRange(acctest.RandString(10)), + Config: testAccComputeInstanceTemplate_secondaryAliasIpRange(randString(t, 10)), Check: resource.ComposeTestCheckFunc( - testAccCheckComputeInstanceTemplateExists("google_compute_instance_template.foobar", &instanceTemplate), + testAccCheckComputeInstanceTemplateExists(t, "google_compute_instance_template.foobar", &instanceTemplate), testAccCheckComputeInstanceTemplateHasAliasIpRange(&instanceTemplate, "inst-test-secondary", "/24"), ), }, @@ -616,15 +619,15 @@ func TestAccComputeInstanceTemplate_guestAccelerator(t *testing.T) { var instanceTemplate compute.InstanceTemplate - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeInstanceTemplateDestroy, + CheckDestroy: testAccCheckComputeInstanceTemplateDestroyProducer(t), Steps: []resource.TestStep{ { - Config: testAccComputeInstanceTemplate_guestAccelerator(acctest.RandString(10), 1), + Config: testAccComputeInstanceTemplate_guestAccelerator(randString(t, 10), 1), Check: resource.ComposeTestCheckFunc( - testAccCheckComputeInstanceTemplateExists("google_compute_instance_template.foobar", &instanceTemplate), + testAccCheckComputeInstanceTemplateExists(t, "google_compute_instance_template.foobar", &instanceTemplate), 
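A few tests in this file are instead opted out of VCR entirely with `skipIfVcr(t)`: `subnet_xpn` above, because its project name comes from `time.Now()` and so can never replay deterministically, and `imageResourceTest` below, because of its multiple fine-grained resources. A minimal sketch of what such a guard can look like, assuming VCR mode is signaled through an environment variable (the variable name here is an assumption, not confirmed by this diff):

```go
// Hypothetical sketch of a VCR opt-out guard. The real skipIfVcr lives in
// this repo's test utilities; the VCR_MODE variable is an assumption.
func skipIfVcr(t *testing.T) {
	if os.Getenv("VCR_MODE") != "" {
		t.Skip("VCR testing is not supported for this test")
	}
}
```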
testAccCheckComputeInstanceTemplateHasGuestAccelerator(&instanceTemplate, "nvidia-tesla-k80", 1), ), }, @@ -643,15 +646,15 @@ func TestAccComputeInstanceTemplate_guestAcceleratorSkip(t *testing.T) { var instanceTemplate compute.InstanceTemplate - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeInstanceTemplateDestroy, + CheckDestroy: testAccCheckComputeInstanceTemplateDestroyProducer(t), Steps: []resource.TestStep{ { - Config: testAccComputeInstanceTemplate_guestAccelerator(acctest.RandString(10), 0), + Config: testAccComputeInstanceTemplate_guestAccelerator(randString(t, 10), 0), Check: resource.ComposeTestCheckFunc( - testAccCheckComputeInstanceTemplateExists("google_compute_instance_template.foobar", &instanceTemplate), + testAccCheckComputeInstanceTemplateExists(t, "google_compute_instance_template.foobar", &instanceTemplate), testAccCheckComputeInstanceTemplateLacksGuestAccelerator(&instanceTemplate), ), }, @@ -665,15 +668,15 @@ func TestAccComputeInstanceTemplate_minCpuPlatform(t *testing.T) { var instanceTemplate compute.InstanceTemplate - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeInstanceTemplateDestroy, + CheckDestroy: testAccCheckComputeInstanceTemplateDestroyProducer(t), Steps: []resource.TestStep{ { - Config: testAccComputeInstanceTemplate_minCpuPlatform(acctest.RandString(10)), + Config: testAccComputeInstanceTemplate_minCpuPlatform(randString(t, 10)), Check: resource.ComposeTestCheckFunc( - testAccCheckComputeInstanceTemplateExists("google_compute_instance_template.foobar", &instanceTemplate), + testAccCheckComputeInstanceTemplateExists(t, "google_compute_instance_template.foobar", &instanceTemplate), testAccCheckComputeInstanceTemplateHasMinCpuPlatform(&instanceTemplate, DEFAULT_MIN_CPU_TEST_VALUE), ), }, @@ -692,15 +695,15 @@ func TestAccComputeInstanceTemplate_EncryptKMS(t *testing.T) { var instanceTemplate compute.InstanceTemplate kms := BootstrapKMSKey(t) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeInstanceTemplateDestroy, + CheckDestroy: testAccCheckComputeInstanceTemplateDestroyProducer(t), Steps: []resource.TestStep{ { - Config: testAccComputeInstanceTemplate_encryptionKMS(kms.CryptoKey.Name), + Config: testAccComputeInstanceTemplate_encryptionKMS(randString(t, 10), kms.CryptoKey.Name), Check: resource.ComposeTestCheckFunc( - testAccCheckComputeInstanceTemplateExists("google_compute_instance_template.foobar", &instanceTemplate), + testAccCheckComputeInstanceTemplateExists(t, "google_compute_instance_template.foobar", &instanceTemplate), ), }, { @@ -715,13 +718,13 @@ func TestAccComputeInstanceTemplate_EncryptKMS(t *testing.T) { func TestAccComputeInstanceTemplate_soleTenantNodeAffinities(t *testing.T) { t.Parallel() - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeInstanceTemplateDestroy, + CheckDestroy: testAccCheckComputeInstanceTemplateDestroyProducer(t), Steps: []resource.TestStep{ { - Config: testAccComputeInstanceTemplate_soleTenantInstanceTemplate(), + Config: testAccComputeInstanceTemplate_soleTenantInstanceTemplate(randString(t, 10)), }, { ResourceName: 
"google_compute_instance_template.foobar", @@ -737,15 +740,15 @@ func TestAccComputeInstanceTemplate_shieldedVmConfig1(t *testing.T) { var instanceTemplate computeBeta.InstanceTemplate - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeInstanceTemplateDestroy, + CheckDestroy: testAccCheckComputeInstanceTemplateDestroyProducer(t), Steps: []resource.TestStep{ { - Config: testAccComputeInstanceTemplate_shieldedVmConfig(true, true, true), + Config: testAccComputeInstanceTemplate_shieldedVmConfig(randString(t, 10), true, true, true), Check: resource.ComposeTestCheckFunc( - testAccCheckComputeInstanceTemplateExists("google_compute_instance_template.foobar", &instanceTemplate), + testAccCheckComputeInstanceTemplateExists(t, "google_compute_instance_template.foobar", &instanceTemplate), testAccCheckComputeInstanceTemplateHasShieldedVmConfig(&instanceTemplate, true, true, true), ), }, @@ -763,15 +766,15 @@ func TestAccComputeInstanceTemplate_shieldedVmConfig2(t *testing.T) { var instanceTemplate computeBeta.InstanceTemplate - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeInstanceTemplateDestroy, + CheckDestroy: testAccCheckComputeInstanceTemplateDestroyProducer(t), Steps: []resource.TestStep{ { - Config: testAccComputeInstanceTemplate_shieldedVmConfig(true, true, false), + Config: testAccComputeInstanceTemplate_shieldedVmConfig(randString(t, 10), true, true, false), Check: resource.ComposeTestCheckFunc( - testAccCheckComputeInstanceTemplateExists("google_compute_instance_template.foobar", &instanceTemplate), + testAccCheckComputeInstanceTemplateExists(t, "google_compute_instance_template.foobar", &instanceTemplate), testAccCheckComputeInstanceTemplateHasShieldedVmConfig(&instanceTemplate, true, true, false), ), }, @@ -787,13 +790,13 @@ func TestAccComputeInstanceTemplate_shieldedVmConfig2(t *testing.T) { func TestAccComputeInstanceTemplate_enableDisplay(t *testing.T) { t.Parallel() - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeInstanceTemplateDestroy, + CheckDestroy: testAccCheckComputeInstanceTemplateDestroyProducer(t), Steps: []resource.TestStep{ { - Config: testAccComputeInstanceTemplate_enableDisplay(), + Config: testAccComputeInstanceTemplate_enableDisplay(randString(t, 10)), }, { ResourceName: "google_compute_instance_template.foobar", @@ -807,12 +810,12 @@ func TestAccComputeInstanceTemplate_enableDisplay(t *testing.T) { func TestAccComputeInstanceTemplate_invalidDiskType(t *testing.T) { t.Parallel() - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ { - Config: testAccComputeInstanceTemplate_invalidDiskType(), + Config: testAccComputeInstanceTemplate_invalidDiskType(randString(t, 10)), ExpectError: regexp.MustCompile("SCRATCH disks must have a disk_type of local-ssd"), }, }, @@ -820,16 +823,18 @@ func TestAccComputeInstanceTemplate_invalidDiskType(t *testing.T) { } func TestAccComputeInstanceTemplate_imageResourceTest(t *testing.T) { + // Multiple fine-grained resources + skipIfVcr(t) t.Parallel() - diskName := "tf-test-disk-" + acctest.RandString(10) - computeImage := "tf-test-image-" + 
acctest.RandString(10) + diskName := "tf-test-disk-" + randString(t, 10) + computeImage := "tf-test-image-" + randString(t, 10) imageDesc1 := "Some description" imageDesc2 := "Some other description" - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeInstanceTemplateDestroy, + CheckDestroy: testAccCheckComputeInstanceTemplateDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeInstanceTemplate_imageResourceTest(diskName, computeImage, imageDesc1), @@ -853,41 +858,43 @@ func TestAccComputeInstanceTemplate_imageResourceTest(t *testing.T) { }) } -func testAccCheckComputeInstanceTemplateDestroy(s *terraform.State) error { - config := testAccProvider.Meta().(*Config) +func testAccCheckComputeInstanceTemplateDestroyProducer(t *testing.T) func(s *terraform.State) error { + return func(s *terraform.State) error { + config := googleProviderConfig(t) - for _, rs := range s.RootModule().Resources { - if rs.Type != "google_compute_instance_template" { - continue - } + for _, rs := range s.RootModule().Resources { + if rs.Type != "google_compute_instance_template" { + continue + } - splits := strings.Split(rs.Primary.ID, "/") - _, err := config.clientCompute.InstanceTemplates.Get( - config.Project, splits[len(splits)-1]).Do() - if err == nil { - return fmt.Errorf("Instance template still exists") + splits := strings.Split(rs.Primary.ID, "/") + _, err := config.clientCompute.InstanceTemplates.Get( + config.Project, splits[len(splits)-1]).Do() + if err == nil { + return fmt.Errorf("Instance template still exists") + } } - } - return nil + return nil + } } -func testAccCheckComputeInstanceTemplateExists(n string, instanceTemplate interface{}) resource.TestCheckFunc { +func testAccCheckComputeInstanceTemplateExists(t *testing.T, n string, instanceTemplate interface{}) resource.TestCheckFunc { if instanceTemplate == nil { panic("Attempted to check existence of Instance template that was nil.") } switch instanceTemplate.(type) { case *compute.InstanceTemplate: - return testAccCheckComputeInstanceTemplateExistsInProject(n, getTestProjectFromEnv(), instanceTemplate.(*compute.InstanceTemplate)) + return testAccCheckComputeInstanceTemplateExistsInProject(t, n, getTestProjectFromEnv(), instanceTemplate.(*compute.InstanceTemplate)) case *computeBeta.InstanceTemplate: - return testAccCheckComputeBetaInstanceTemplateExistsInProject(n, getTestProjectFromEnv(), instanceTemplate.(*computeBeta.InstanceTemplate)) + return testAccCheckComputeBetaInstanceTemplateExistsInProject(t, n, getTestProjectFromEnv(), instanceTemplate.(*computeBeta.InstanceTemplate)) default: panic("Attempted to check existence of an Instance template of unknown type.") } } -func testAccCheckComputeInstanceTemplateExistsInProject(n, p string, instanceTemplate *compute.InstanceTemplate) resource.TestCheckFunc { +func testAccCheckComputeInstanceTemplateExistsInProject(t *testing.T, n, p string, instanceTemplate *compute.InstanceTemplate) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] if !ok { @@ -898,7 +905,7 @@ func testAccCheckComputeInstanceTemplateExistsInProject(n, p string, instanceTem return fmt.Errorf("No ID is set") } - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) splits := strings.Split(rs.Primary.ID, "/") templateName := splits[len(splits)-1] @@ -918,7 +925,7 @@ func 
testAccCheckComputeInstanceTemplateExistsInProject(n, p string, instanceTem } } -func testAccCheckComputeBetaInstanceTemplateExistsInProject(n, p string, instanceTemplate *computeBeta.InstanceTemplate) resource.TestCheckFunc { +func testAccCheckComputeBetaInstanceTemplateExistsInProject(t *testing.T, n, p string, instanceTemplate *computeBeta.InstanceTemplate) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] if !ok { @@ -929,7 +936,7 @@ func testAccCheckComputeBetaInstanceTemplateExistsInProject(n, p string, instanc return fmt.Errorf("No ID is set") } - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) splits := strings.Split(rs.Primary.ID, "/") templateName := splits[len(splits)-1] @@ -1189,7 +1196,7 @@ func testAccCheckComputeInstanceTemplateLacksShieldedVmConfig(instanceTemplate * } } -func testAccComputeInstanceTemplate_basic() string { +func testAccComputeInstanceTemplate_basic(suffix string) string { return fmt.Sprintf(` data "google_compute_image" "my_image" { family = "debian-9" @@ -1229,10 +1236,10 @@ resource "google_compute_instance_template" "foobar" { my_label = "foobar" } } -`, acctest.RandString(10)) +`, suffix) } -func testAccComputeInstanceTemplate_imageShorthand() string { +func testAccComputeInstanceTemplate_imageShorthand(suffix string) string { return fmt.Sprintf(` resource "google_compute_image" "foobar" { name = "test-%s" @@ -1283,10 +1290,10 @@ resource "google_compute_instance_template" "foobar" { my_label = "foobar" } } -`, acctest.RandString(10), acctest.RandString(10)) +`, suffix, suffix) } -func testAccComputeInstanceTemplate_preemptible() string { +func testAccComputeInstanceTemplate_preemptible(suffix string) string { return fmt.Sprintf(` data "google_compute_image" "my_image" { family = "debian-9" @@ -1322,10 +1329,10 @@ resource "google_compute_instance_template" "foobar" { scopes = ["userinfo-email", "compute-ro", "storage-ro"] } } -`, acctest.RandString(10)) +`, suffix) } -func testAccComputeInstanceTemplate_ip() string { +func testAccComputeInstanceTemplate_ip(suffix string) string { return fmt.Sprintf(` resource "google_compute_address" "foo" { name = "instancet-test-%s" @@ -1356,10 +1363,10 @@ resource "google_compute_instance_template" "foobar" { foo = "bar" } } -`, acctest.RandString(10), acctest.RandString(10)) +`, suffix, suffix) } -func testAccComputeInstanceTemplate_networkTier() string { +func testAccComputeInstanceTemplate_networkTier(suffix string) string { return fmt.Sprintf(` data "google_compute_image" "my_image" { family = "debian-9" @@ -1381,10 +1388,10 @@ resource "google_compute_instance_template" "foobar" { } } } -`, acctest.RandString(10)) +`, suffix) } -func testAccComputeInstanceTemplate_networkIP(networkIP string) string { +func testAccComputeInstanceTemplate_networkIP(suffix, networkIP string) string { return fmt.Sprintf(` data "google_compute_image" "my_image" { family = "debian-9" @@ -1409,10 +1416,10 @@ resource "google_compute_instance_template" "foobar" { foo = "bar" } } -`, acctest.RandString(10), networkIP) +`, suffix, networkIP) } -func testAccComputeInstanceTemplate_networkIPAddress(ipAddress string) string { +func testAccComputeInstanceTemplate_networkIPAddress(suffix, ipAddress string) string { return fmt.Sprintf(` data "google_compute_image" "my_image" { family = "debian-9" @@ -1437,10 +1444,10 @@ resource "google_compute_instance_template" "foobar" { foo = "bar" } } -`, acctest.RandString(10), ipAddress) +`, suffix, ipAddress) } -func 
testAccComputeInstanceTemplate_disks() string { +func testAccComputeInstanceTemplate_disks(suffix string) string { return fmt.Sprintf(` data "google_compute_image" "my_image" { family = "debian-9" @@ -1483,10 +1490,10 @@ resource "google_compute_instance_template" "foobar" { foo = "bar" } } -`, acctest.RandString(10), acctest.RandString(10)) +`, suffix, suffix) } -func testAccComputeInstanceTemplate_disksInvalid() string { +func testAccComputeInstanceTemplate_disksInvalid(suffix string) string { return fmt.Sprintf(` data "google_compute_image" "my_image" { family = "debian-9" @@ -1527,10 +1534,10 @@ resource "google_compute_instance_template" "foobar" { foo = "bar" } } -`, acctest.RandString(10), acctest.RandString(10)) +`, suffix, suffix) } -func testAccComputeInstanceTemplate_regionDisks() string { +func testAccComputeInstanceTemplate_regionDisks(suffix string) string { return fmt.Sprintf(` data "google_compute_image" "my_image" { family = "debian-9" @@ -1570,10 +1577,10 @@ resource "google_compute_instance_template" "foobar" { foo = "bar" } } -`, acctest.RandString(10), acctest.RandString(10)) +`, suffix, suffix) } -func testAccComputeInstanceTemplate_subnet_auto(network string) string { +func testAccComputeInstanceTemplate_subnet_auto(network, suffix string) string { return fmt.Sprintf(` data "google_compute_image" "my_image" { family = "debian-9" @@ -1604,10 +1611,10 @@ resource "google_compute_instance_template" "foobar" { foo = "bar" } } -`, network, acctest.RandString(10)) +`, network, suffix) } -func testAccComputeInstanceTemplate_subnet_custom() string { +func testAccComputeInstanceTemplate_subnet_custom(suffix string) string { return fmt.Sprintf(` resource "google_compute_network" "network" { name = "network-%s" @@ -1646,10 +1653,10 @@ resource "google_compute_instance_template" "foobar" { foo = "bar" } } -`, acctest.RandString(10), acctest.RandString(10), acctest.RandString(10)) +`, suffix, suffix, suffix) } -func testAccComputeInstanceTemplate_subnet_xpn(org, billingId, projectName string) string { +func testAccComputeInstanceTemplate_subnet_xpn(org, billingId, projectName, suffix string) string { return fmt.Sprintf(` resource "google_project" "host_project" { name = "Test Project XPN Host" @@ -1725,10 +1732,10 @@ resource "google_compute_instance_template" "foobar" { } project = google_compute_shared_vpc_service_project.service_project.service_project } -`, projectName, org, billingId, projectName, org, billingId, acctest.RandString(10), acctest.RandString(10), acctest.RandString(10)) +`, projectName, org, billingId, projectName, org, billingId, suffix, suffix, suffix) } -func testAccComputeInstanceTemplate_startup_script() string { +func testAccComputeInstanceTemplate_startup_script(suffix string) string { return fmt.Sprintf(` data "google_compute_image" "my_image" { family = "debian-9" @@ -1756,7 +1763,7 @@ resource "google_compute_instance_template" "foobar" { metadata_startup_script = "echo 'Hello'" } -`, acctest.RandString(10)) +`, suffix) } func testAccComputeInstanceTemplate_primaryAliasIpRange(i string) string { @@ -1912,7 +1919,7 @@ resource "google_compute_instance_template" "foobar" { `, i, DEFAULT_MIN_CPU_TEST_VALUE) } -func testAccComputeInstanceTemplate_encryptionKMS(kmsLink string) string { +func testAccComputeInstanceTemplate_encryptionKMS(suffix, kmsLink string) string { return fmt.Sprintf(` data "google_compute_image" "my_image" { family = "debian-9" @@ -1943,10 +1950,10 @@ resource "google_compute_instance_template" "foobar" { my_label = "foobar" } } -`, 
acctest.RandString(10), kmsLink) +`, suffix, kmsLink) } -func testAccComputeInstanceTemplate_soleTenantInstanceTemplate() string { +func testAccComputeInstanceTemplate_soleTenantInstanceTemplate(suffix string) string { return fmt.Sprintf(` data "google_compute_image" "my_image" { family = "debian-9" @@ -1955,7 +1962,7 @@ data "google_compute_image" "my_image" { resource "google_compute_instance_template" "foobar" { name = "instancet-test-%s" - machine_type = "n1-standard-1" + machine_type = "n1-standard-4" disk { source_image = data.google_compute_image.my_image.self_link @@ -1975,16 +1982,20 @@ resource "google_compute_instance_template" "foobar" { operator = "IN" values = ["testinstancetemplate"] } + +<% unless version == 'ga' -%> + min_node_cpus = 2 +<% end -%> } service_account { scopes = ["userinfo-email", "compute-ro", "storage-ro"] } } -`, acctest.RandString(10)) +`, suffix) } -func testAccComputeInstanceTemplate_shieldedVmConfig(enableSecureBoot bool, enableVtpm bool, enableIntegrityMonitoring bool) string { +func testAccComputeInstanceTemplate_shieldedVmConfig(suffix string, enableSecureBoot bool, enableVtpm bool, enableIntegrityMonitoring bool) string { return fmt.Sprintf(` data "google_compute_image" "my_image" { family = "centos-7" @@ -2012,10 +2023,10 @@ resource "google_compute_instance_template" "foobar" { enable_integrity_monitoring = %t } } -`, acctest.RandString(10), enableSecureBoot, enableVtpm, enableIntegrityMonitoring) +`, suffix, enableSecureBoot, enableVtpm, enableIntegrityMonitoring) } -func testAccComputeInstanceTemplate_enableDisplay() string { +func testAccComputeInstanceTemplate_enableDisplay(suffix string) string { return fmt.Sprintf(` data "google_compute_image" "my_image" { family = "centos-7" @@ -2036,10 +2047,10 @@ resource "google_compute_instance_template" "foobar" { } enable_display = true } -`, acctest.RandString(10)) +`, suffix) } -func testAccComputeInstanceTemplate_invalidDiskType() string { +func testAccComputeInstanceTemplate_invalidDiskType(suffix string) string { return fmt.Sprintf(` # Use this datasource instead of hardcoded values when https://github.com/hashicorp/terraform/issues/22679 # is resolved.
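The long run of config-generator changes above is one mechanical refactor: helpers stop minting their own randomness and instead take a `suffix` argument supplied by the test. That keeps every generated name under the test's replayable random source, and lets a single suffix be shared across all the `%s` verbs in one template. Schematically (`testAccWidget_basic` is a hypothetical helper):

```go
// Before: the helper generated randomness internally, invisible to the test
// and to VCR.
func testAccWidget_basicOld() string {
	return fmt.Sprintf(`
resource "google_compute_address" "foo" {
  name = "tf-test-%s"
}
`, acctest.RandString(10))
}

// After: the caller passes suffix (typically randString(t, 10)), so the
// generated name is deterministic under cassette replay.
func testAccWidget_basic(suffix string) string {
	return fmt.Sprintf(`
resource "google_compute_address" "foo" {
  name = "tf-test-%s"
}
`, suffix)
}
```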
@@ -2071,7 +2082,7 @@ resource "google_compute_instance_template" "foobar" { network = "default" } } -`, acctest.RandString(10)) +`, suffix) } func testAccComputeInstanceTemplate_imageResourceTest(diskName string, imageName string, imageDescription string) string { diff --git a/third_party/terraform/tests/resource_compute_instance_test.go b/third_party/terraform/tests/resource_compute_instance_test.go.erb similarity index 81% rename from third_party/terraform/tests/resource_compute_instance_test.go rename to third_party/terraform/tests/resource_compute_instance_test.go.erb index 3e728d511209..b2ba5eb7f155 100644 --- a/third_party/terraform/tests/resource_compute_instance_test.go +++ b/third_party/terraform/tests/resource_compute_instance_test.go.erb @@ -1,20 +1,81 @@ +// <% autogen_exception -%> + package google import ( + "context" "fmt" + "log" "regexp" + "sort" "strconv" "strings" "testing" "time" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" computeBeta "google.golang.org/api/compute/v0.beta" "google.golang.org/api/compute/v1" ) +func init() { + resource.AddTestSweepers("ComputeInstance", &resource.Sweeper{ + Name: "ComputeInstance", + F: testSweepComputeInstance, + }) +} + +// At the time of writing, the CI only passes us-central1 as the region. +// Since we can read all instances across zones, we don't really use this param. +func testSweepComputeInstance(region string) error { + resourceName := "ComputeInstance" + log.Printf("[INFO][SWEEPER_LOG] Starting sweeper for %s", resourceName) + + config, err := sharedConfigForRegion(region) + if err != nil { + log.Printf("[INFO][SWEEPER_LOG] error getting shared config for region: %s", err) + return err + } + + err = config.LoadAndValidate(context.Background()) + if err != nil { + log.Printf("[INFO][SWEEPER_LOG] error loading: %s", err) + return err + } + + found, err := config.clientCompute.Instances.AggregatedList(config.Project).Do() + if err != nil { + log.Printf("[INFO][SWEEPER_LOG] Error in response from request: %s", err) + return nil + } + + // Keep count of items that aren't sweepable for logging. 
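The sweeper above deletes only instances whose names pass `isSweepableTestResource`, which is also why tests in this diff move from `instance-test-`/`instancegroup-test-` prefixes to `tf-test-`: a recognizable prefix is what marks a resource as sweepable test debris rather than something a human created. A plausible sketch of that guard (the exact prefix list is an assumption; the real helper is defined elsewhere in this repo):

```go
// Hedged sketch of isSweepableTestResource; the actual prefix list used by
// this repo's sweepers is an assumption here.
func isSweepableTestResource(resourceName string) bool {
	for _, prefix := range []string{"tf-test", "tf_test"} {
		if strings.HasPrefix(resourceName, prefix) {
			return true
		}
	}
	return false
}
```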
+ nonPrefixCount := 0 + for zone, itemList := range found.Items { + for _, instance := range itemList.Instances { + if !isSweepableTestResource(instance.Name) { + nonPrefixCount++ + continue + } + + // Don't wait on operations as we may have a lot to delete + _, err := config.clientCompute.Instances.Delete(config.Project, GetResourceNameFromSelfLink(zone), instance.Name).Do() + if err != nil { + log.Printf("[INFO][SWEEPER_LOG] Error deleting %s resource %s : %s", resourceName, instance.Name, err) + } else { + log.Printf("[INFO][SWEEPER_LOG] Sent delete request for %s resource: %s", resourceName, instance.Name) + } + } + } + + if nonPrefixCount > 0 { + log.Printf("[INFO][SWEEPER_LOG] %d items were non-sweepable and skipped.", nonPrefixCount) + } + + return nil +} + func computeInstanceImportStep(zone, instanceName string, additionalImportIgnores []string) resource.TestStep { // metadata is only read into state if set in the config // since importing doesn't know whether metadata.startup_script vs metadata_startup_script is set in the config, @@ -34,18 +95,18 @@ func TestAccComputeInstance_basic1(t *testing.T) { t.Parallel() var instance compute.Instance - var instanceName = fmt.Sprintf("instance-test-%s", acctest.RandString(10)) + var instanceName = fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeInstanceDestroy, + CheckDestroy: testAccCheckComputeInstanceDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeInstance_basic(instanceName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceExists( - "google_compute_instance.foobar", &instance), + t, "google_compute_instance.foobar", &instance), testAccCheckComputeInstanceHasInstanceId(&instance, "google_compute_instance.foobar"), testAccCheckComputeInstanceTag(&instance, "foo"), testAccCheckComputeInstanceLabel(&instance, "my_key", "my_value"), @@ -67,18 +128,18 @@ func TestAccComputeInstance_basic2(t *testing.T) { t.Parallel() var instance compute.Instance - var instanceName = fmt.Sprintf("instance-test-%s", acctest.RandString(10)) + var instanceName = fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeInstanceDestroy, + CheckDestroy: testAccCheckComputeInstanceDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeInstance_basic2(instanceName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceExists( - "google_compute_instance.foobar", &instance), + t, "google_compute_instance.foobar", &instance), testAccCheckComputeInstanceTag(&instance, "foo"), testAccCheckComputeInstanceMetadata(&instance, "foo", "bar"), testAccCheckComputeInstanceDisk(&instance, instanceName, true, true), @@ -92,18 +153,18 @@ func TestAccComputeInstance_basic3(t *testing.T) { t.Parallel() var instance compute.Instance - var instanceName = fmt.Sprintf("instance-test-%s", acctest.RandString(10)) + var instanceName = fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeInstanceDestroy, + CheckDestroy: testAccCheckComputeInstanceDestroyProducer(t), Steps: []resource.TestStep{ { Config: 
testAccComputeInstance_basic3(instanceName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceExists( - "google_compute_instance.foobar", &instance), + t, "google_compute_instance.foobar", &instance), testAccCheckComputeInstanceTag(&instance, "foo"), testAccCheckComputeInstanceMetadata(&instance, "foo", "bar"), testAccCheckComputeInstanceDisk(&instance, instanceName, true, true), @@ -117,18 +178,18 @@ func TestAccComputeInstance_basic4(t *testing.T) { t.Parallel() var instance compute.Instance - var instanceName = fmt.Sprintf("instance-test-%s", acctest.RandString(10)) + var instanceName = fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeInstanceDestroy, + CheckDestroy: testAccCheckComputeInstanceDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeInstance_basic4(instanceName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceExists( - "google_compute_instance.foobar", &instance), + t, "google_compute_instance.foobar", &instance), testAccCheckComputeInstanceTag(&instance, "foo"), testAccCheckComputeInstanceMetadata(&instance, "foo", "bar"), testAccCheckComputeInstanceDisk(&instance, instanceName, true, true), @@ -142,18 +203,18 @@ func TestAccComputeInstance_basic5(t *testing.T) { t.Parallel() var instance compute.Instance - var instanceName = fmt.Sprintf("instance-test-%s", acctest.RandString(10)) + var instanceName = fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeInstanceDestroy, + CheckDestroy: testAccCheckComputeInstanceDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeInstance_basic5(instanceName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceExists( - "google_compute_instance.foobar", &instance), + t, "google_compute_instance.foobar", &instance), testAccCheckComputeInstanceTag(&instance, "foo"), testAccCheckComputeInstanceMetadata(&instance, "foo", "bar"), testAccCheckComputeInstanceDisk(&instance, instanceName, true, true), @@ -167,19 +228,19 @@ func TestAccComputeInstance_IP(t *testing.T) { t.Parallel() var instance compute.Instance - var ipName = fmt.Sprintf("instance-test-%s", acctest.RandString(10)) - var instanceName = fmt.Sprintf("instance-test-%s", acctest.RandString(10)) + var ipName = fmt.Sprintf("tf-test-%s", randString(t, 10)) + var instanceName = fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeInstanceDestroy, + CheckDestroy: testAccCheckComputeInstanceDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeInstance_ip(ipName, instanceName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceExists( - "google_compute_instance.foobar", &instance), + t, "google_compute_instance.foobar", &instance), testAccCheckComputeInstanceAccessConfigHasNatIP(&instance), ), }, @@ -191,20 +252,20 @@ func TestAccComputeInstance_PTRRecord(t *testing.T) { t.Parallel() var instance compute.Instance - var ptrName = fmt.Sprintf("instance-test-%s", acctest.RandString(10)) - var ipName = fmt.Sprintf("instance-test-%s", acctest.RandString(10)) - var instanceName = 
fmt.Sprintf("instance-test-%s", acctest.RandString(10)) + var ptrName = fmt.Sprintf("tf-test-%s", randString(t, 10)) + var ipName = fmt.Sprintf("tf-test-%s", randString(t, 10)) + var instanceName = fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeInstanceDestroy, + CheckDestroy: testAccCheckComputeInstanceDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeInstance_PTRRecord(ptrName, instanceName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceExists( - "google_compute_instance.foobar", &instance), + t, "google_compute_instance.foobar", &instance), testAccCheckComputeInstanceAccessConfigHasPTR(&instance), ), }, @@ -213,7 +274,7 @@ func TestAccComputeInstance_PTRRecord(t *testing.T) { Config: testAccComputeInstance_ip(ipName, instanceName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceExists( - "google_compute_instance.foobar", &instance), + t, "google_compute_instance.foobar", &instance), testAccCheckComputeInstanceAccessConfigHasNatIP(&instance), ), }, @@ -224,18 +285,18 @@ func TestAccComputeInstance_PTRRecord(t *testing.T) { func TestAccComputeInstance_networkTier(t *testing.T) { var instance compute.Instance - var instanceName = fmt.Sprintf("instance-test-%s", acctest.RandString(10)) + var instanceName = fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeInstanceDestroy, + CheckDestroy: testAccCheckComputeInstanceDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeInstance_networkTier(instanceName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceExists( - "google_compute_instance.foobar", &instance), + t, "google_compute_instance.foobar", &instance), testAccCheckComputeInstanceAccessConfigHasNatIP(&instance), testAccCheckComputeInstanceHasAssignedNatIP, ), @@ -249,34 +310,34 @@ func TestAccComputeInstance_diskEncryption(t *testing.T) { t.Parallel() var instance compute.Instance - var instanceName = fmt.Sprintf("instance-test-%s", acctest.RandString(10)) + var instanceName = fmt.Sprintf("tf-test-%s", randString(t, 10)) bootEncryptionKey := "SGVsbG8gZnJvbSBHb29nbGUgQ2xvdWQgUGxhdGZvcm0=" bootEncryptionKeyHash := "esTuF7d4eatX4cnc4JsiEiaI+Rff78JgPhA/v1zxX9E=" diskNameToEncryptionKey := map[string]*compute.CustomerEncryptionKey{ - fmt.Sprintf("instance-testd-%s", acctest.RandString(10)): { + fmt.Sprintf("tf-testd-%s", randString(t, 10)): { RawKey: "Ym9vdDU2Nzg5MDEyMzQ1Njc4OTAxMjM0NTY3ODkwMTI=", Sha256: "awJ7p57H+uVZ9axhJjl1D3lfC2MgA/wnt/z88Ltfvss=", }, - fmt.Sprintf("instance-testd-%s", acctest.RandString(10)): { + fmt.Sprintf("tf-testd-%s", randString(t, 10)): { RawKey: "c2Vjb25kNzg5MDEyMzQ1Njc4OTAxMjM0NTY3ODkwMTI=", Sha256: "7TpIwUdtCOJpq2m+3nt8GFgppu6a2Xsj1t0Gexk13Yc=", }, - fmt.Sprintf("instance-testd-%s", acctest.RandString(10)): { + fmt.Sprintf("tf-testd-%s", randString(t, 10)): { RawKey: "dGhpcmQ2Nzg5MDEyMzQ1Njc4OTAxMjM0NTY3ODkwMTI=", Sha256: "b3pvaS7BjDbCKeLPPTx7yXBuQtxyMobCHN1QJR43xeM=", }, } - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeInstanceDestroy, + CheckDestroy: testAccCheckComputeInstanceDestroyProducer(t), Steps: 
[]resource.TestStep{ { - Config: testAccComputeInstance_disks_encryption(bootEncryptionKey, diskNameToEncryptionKey, instanceName), + Config: testAccComputeInstance_disks_encryption(bootEncryptionKey, diskNameToEncryptionKey, instanceName, randString(t, 10)), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceExists( - "google_compute_instance.foobar", &instance), + t, "google_compute_instance.foobar", &instance), testAccCheckComputeInstanceDiskEncryptionKey("google_compute_instance.foobar", &instance, bootEncryptionKeyHash, diskNameToEncryptionKey), ), }, @@ -288,26 +349,26 @@ func TestAccComputeInstance_diskEncryptionRestart(t *testing.T) { t.Parallel() var instance compute.Instance - var instanceName = fmt.Sprintf("instance-test-%s", acctest.RandString(10)) + var instanceName = fmt.Sprintf("tf-test-%s", randString(t, 10)) bootEncryptionKey := "SGVsbG8gZnJvbSBHb29nbGUgQ2xvdWQgUGxhdGZvcm0=" bootEncryptionKeyHash := "esTuF7d4eatX4cnc4JsiEiaI+Rff78JgPhA/v1zxX9E=" diskNameToEncryptionKey := map[string]*compute.CustomerEncryptionKey{ - fmt.Sprintf("instance-testd-%s", acctest.RandString(10)): { + fmt.Sprintf("tf-testd-%s", randString(t, 10)): { RawKey: "Ym9vdDU2Nzg5MDEyMzQ1Njc4OTAxMjM0NTY3ODkwMTI=", Sha256: "awJ7p57H+uVZ9axhJjl1D3lfC2MgA/wnt/z88Ltfvss=", }, } - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeInstanceDestroy, + CheckDestroy: testAccCheckComputeInstanceDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeInstance_disks_encryption_restart(bootEncryptionKey, diskNameToEncryptionKey, instanceName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceExists( - "google_compute_instance.foobar", &instance), + t, "google_compute_instance.foobar", &instance), testAccCheckComputeInstanceDiskEncryptionKey("google_compute_instance.foobar", &instance, bootEncryptionKeyHash, diskNameToEncryptionKey), ), }, @@ -315,7 +376,7 @@ func TestAccComputeInstance_diskEncryptionRestart(t *testing.T) { Config: testAccComputeInstance_disks_encryption_restartUpdate(bootEncryptionKey, diskNameToEncryptionKey, instanceName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceExists( - "google_compute_instance.foobar", &instance), + t, "google_compute_instance.foobar", &instance), testAccCheckComputeInstanceDiskEncryptionKey("google_compute_instance.foobar", &instance, bootEncryptionKeyHash, diskNameToEncryptionKey), ), }, @@ -327,31 +388,31 @@ func TestAccComputeInstance_kmsDiskEncryption(t *testing.T) { t.Parallel() var instance compute.Instance - var instanceName = fmt.Sprintf("instance-test-%s", acctest.RandString(10)) + var instanceName = fmt.Sprintf("tf-test-%s", randString(t, 10)) kms := BootstrapKMSKey(t) bootKmsKeyName := kms.CryptoKey.Name diskNameToEncryptionKey := map[string]*compute.CustomerEncryptionKey{ - fmt.Sprintf("instance-testd-%s", acctest.RandString(10)): { + fmt.Sprintf("tf-testd-%s", randString(t, 10)): { KmsKeyName: kms.CryptoKey.Name, }, - fmt.Sprintf("instance-testd-%s", acctest.RandString(10)): { + fmt.Sprintf("tf-testd-%s", randString(t, 10)): { KmsKeyName: kms.CryptoKey.Name, }, - fmt.Sprintf("instance-testd-%s", acctest.RandString(10)): { + fmt.Sprintf("tf-testd-%s", randString(t, 10)): { KmsKeyName: kms.CryptoKey.Name, }, } - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: 
testAccCheckComputeInstanceDestroy, + CheckDestroy: testAccCheckComputeInstanceDestroyProducer(t), Steps: []resource.TestStep{ { - Config: testAccComputeInstance_disks_kms(getTestProjectFromEnv(), bootKmsKeyName, diskNameToEncryptionKey, instanceName), + Config: testAccComputeInstance_disks_kms(getTestProjectFromEnv(), bootKmsKeyName, diskNameToEncryptionKey, instanceName, randString(t, 10)), Check: resource.ComposeTestCheckFunc( - testAccCheckComputeInstanceExists("google_compute_instance.foobar", &instance), + testAccCheckComputeInstanceExists(t, "google_compute_instance.foobar", &instance), testAccCheckComputeInstanceDiskKmsEncryptionKey("google_compute_instance.foobar", &instance, bootKmsKeyName, diskNameToEncryptionKey), ), }, @@ -364,19 +425,19 @@ func TestAccComputeInstance_attachedDisk(t *testing.T) { t.Parallel() var instance compute.Instance - var instanceName = fmt.Sprintf("instance-test-%s", acctest.RandString(10)) - var diskName = fmt.Sprintf("instance-testd-%s", acctest.RandString(10)) + var instanceName = fmt.Sprintf("tf-test-%s", randString(t, 10)) + var diskName = fmt.Sprintf("tf-testd-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeInstanceDestroy, + CheckDestroy: testAccCheckComputeInstanceDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeInstance_attachedDisk(diskName, instanceName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceExists( - "google_compute_instance.foobar", &instance), + t, "google_compute_instance.foobar", &instance), testAccCheckComputeInstanceDisk(&instance, diskName, false, false), ), }, @@ -389,19 +450,19 @@ func TestAccComputeInstance_attachedDisk_sourceUrl(t *testing.T) { t.Parallel() var instance compute.Instance - var instanceName = fmt.Sprintf("instance-test-%s", acctest.RandString(10)) - var diskName = fmt.Sprintf("instance-testd-%s", acctest.RandString(10)) + var instanceName = fmt.Sprintf("tf-test-%s", randString(t, 10)) + var diskName = fmt.Sprintf("tf-testd-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeInstanceDestroy, + CheckDestroy: testAccCheckComputeInstanceDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeInstance_attachedDisk_sourceUrl(diskName, instanceName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceExists( - "google_compute_instance.foobar", &instance), + t, "google_compute_instance.foobar", &instance), testAccCheckComputeInstanceDisk(&instance, diskName, false, false), ), }, @@ -414,19 +475,19 @@ func TestAccComputeInstance_attachedDisk_modeRo(t *testing.T) { t.Parallel() var instance compute.Instance - var instanceName = fmt.Sprintf("instance-test-%s", acctest.RandString(10)) - var diskName = fmt.Sprintf("instance-testd-%s", acctest.RandString(10)) + var instanceName = fmt.Sprintf("tf-test-%s", randString(t, 10)) + var diskName = fmt.Sprintf("tf-testd-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeInstanceDestroy, + CheckDestroy: testAccCheckComputeInstanceDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeInstance_attachedDisk_modeRo(diskName, instanceName), Check: 
resource.ComposeTestCheckFunc( testAccCheckComputeInstanceExists( - "google_compute_instance.foobar", &instance), + t, "google_compute_instance.foobar", &instance), testAccCheckComputeInstanceDisk(&instance, diskName, false, false), ), }, @@ -439,20 +500,20 @@ func TestAccComputeInstance_attachedDiskUpdate(t *testing.T) { t.Parallel() var instance compute.Instance - var instanceName = fmt.Sprintf("instance-test-%s", acctest.RandString(10)) - var diskName = fmt.Sprintf("instance-testd-%s", acctest.RandString(10)) - var diskName2 = fmt.Sprintf("instance-testd-%s", acctest.RandString(10)) + var instanceName = fmt.Sprintf("tf-test-%s", randString(t, 10)) + var diskName = fmt.Sprintf("tf-testd-%s", randString(t, 10)) + var diskName2 = fmt.Sprintf("tf-testd-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeInstanceDestroy, + CheckDestroy: testAccCheckComputeInstanceDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeInstance_attachedDisk(diskName, instanceName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceExists( - "google_compute_instance.foobar", &instance), + t, "google_compute_instance.foobar", &instance), testAccCheckComputeInstanceDisk(&instance, diskName, false, false), ), }, @@ -461,7 +522,7 @@ func TestAccComputeInstance_attachedDiskUpdate(t *testing.T) { Config: testAccComputeInstance_addAttachedDisk(diskName, diskName2, instanceName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceExists( - "google_compute_instance.foobar", &instance), + t, "google_compute_instance.foobar", &instance), testAccCheckComputeInstanceDisk(&instance, diskName, false, false), testAccCheckComputeInstanceDisk(&instance, diskName2, false, false), ), @@ -471,7 +532,7 @@ func TestAccComputeInstance_attachedDiskUpdate(t *testing.T) { Config: testAccComputeInstance_detachDisk(diskName, diskName2, instanceName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceExists( - "google_compute_instance.foobar", &instance), + t, "google_compute_instance.foobar", &instance), testAccCheckComputeInstanceDisk(&instance, diskName, false, false), ), }, @@ -480,7 +541,7 @@ func TestAccComputeInstance_attachedDiskUpdate(t *testing.T) { Config: testAccComputeInstance_updateAttachedDiskEncryptionKey(diskName, instanceName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceExists( - "google_compute_instance.foobar", &instance), + t, "google_compute_instance.foobar", &instance), testAccCheckComputeInstanceDisk(&instance, diskName, false, false), ), }, @@ -492,19 +553,19 @@ func TestAccComputeInstance_bootDisk_source(t *testing.T) { t.Parallel() var instance compute.Instance - var instanceName = fmt.Sprintf("instance-test-%s", acctest.RandString(10)) - var diskName = fmt.Sprintf("instance-test-%s", acctest.RandString(10)) + var instanceName = fmt.Sprintf("tf-test-%s", randString(t, 10)) + var diskName = fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeInstanceDestroy, + CheckDestroy: testAccCheckComputeInstanceDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeInstance_bootDisk_source(diskName, instanceName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceExists( - 
"google_compute_instance.foobar", &instance), + t, "google_compute_instance.foobar", &instance), testAccCheckComputeInstanceBootDisk(&instance, diskName), ), }, @@ -517,19 +578,19 @@ func TestAccComputeInstance_bootDisk_sourceUrl(t *testing.T) { t.Parallel() var instance compute.Instance - var instanceName = fmt.Sprintf("instance-test-%s", acctest.RandString(10)) - var diskName = fmt.Sprintf("instance-test-%s", acctest.RandString(10)) + var instanceName = fmt.Sprintf("tf-test-%s", randString(t, 10)) + var diskName = fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeInstanceDestroy, + CheckDestroy: testAccCheckComputeInstanceDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeInstance_bootDisk_sourceUrl(diskName, instanceName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceExists( - "google_compute_instance.foobar", &instance), + t, "google_compute_instance.foobar", &instance), testAccCheckComputeInstanceBootDisk(&instance, diskName), ), }, @@ -542,20 +603,20 @@ func TestAccComputeInstance_bootDisk_type(t *testing.T) { t.Parallel() var instance compute.Instance - var instanceName = fmt.Sprintf("instance-test-%s", acctest.RandString(10)) + var instanceName = fmt.Sprintf("tf-test-%s", randString(t, 10)) var diskType = "pd-ssd" - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeInstanceDestroy, + CheckDestroy: testAccCheckComputeInstanceDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeInstance_bootDisk_type(instanceName, diskType), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceExists( - "google_compute_instance.foobar", &instance), - testAccCheckComputeInstanceBootDiskType(instanceName, diskType), + t, "google_compute_instance.foobar", &instance), + testAccCheckComputeInstanceBootDiskType(t, instanceName, diskType), ), }, }, @@ -565,13 +626,13 @@ func TestAccComputeInstance_bootDisk_type(t *testing.T) { func TestAccComputeInstance_bootDisk_mode(t *testing.T) { t.Parallel() - var instanceName = fmt.Sprintf("instance-test-%s", acctest.RandString(10)) + var instanceName = fmt.Sprintf("tf-test-%s", randString(t, 10)) var diskMode = "READ_WRITE" - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeInstanceDestroy, + CheckDestroy: testAccCheckComputeInstanceDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeInstance_bootDisk_mode(instanceName, diskMode), @@ -585,18 +646,18 @@ func TestAccComputeInstance_scratchDisk(t *testing.T) { t.Parallel() var instance compute.Instance - var instanceName = fmt.Sprintf("instance-test-%s", acctest.RandString(10)) + var instanceName = fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeInstanceDestroy, + CheckDestroy: testAccCheckComputeInstanceDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeInstance_scratchDisk(instanceName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceExists( - "google_compute_instance.scratch", &instance), + t, 
"google_compute_instance.scratch", &instance), testAccCheckComputeInstanceScratchDisk(&instance, []string{"NVME", "SCSI"}), ), }, @@ -609,25 +670,25 @@ func TestAccComputeInstance_forceNewAndChangeMetadata(t *testing.T) { t.Parallel() var instance compute.Instance - var instanceName = fmt.Sprintf("instance-test-%s", acctest.RandString(10)) + var instanceName = fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeInstanceDestroy, + CheckDestroy: testAccCheckComputeInstanceDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeInstance_basic(instanceName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceExists( - "google_compute_instance.foobar", &instance), + t, "google_compute_instance.foobar", &instance), ), }, { Config: testAccComputeInstance_forceNewAndChangeMetadata(instanceName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceExists( - "google_compute_instance.foobar", &instance), + t, "google_compute_instance.foobar", &instance), testAccCheckComputeInstanceMetadata( &instance, "qux", "true"), ), @@ -640,25 +701,25 @@ func TestAccComputeInstance_update(t *testing.T) { t.Parallel() var instance compute.Instance - var instanceName = fmt.Sprintf("instance-test-%s", acctest.RandString(10)) + var instanceName = fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeInstanceDestroy, + CheckDestroy: testAccCheckComputeInstanceDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeInstance_basic(instanceName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceExists( - "google_compute_instance.foobar", &instance), + t, "google_compute_instance.foobar", &instance), ), }, { Config: testAccComputeInstance_update(instanceName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceExists( - "google_compute_instance.foobar", &instance), + t, "google_compute_instance.foobar", &instance), testAccCheckComputeInstanceMetadata( &instance, "bar", "baz"), testAccCheckComputeInstanceLabel(&instance, "only_me", "nothing_else"), @@ -674,19 +735,19 @@ func TestAccComputeInstance_stopInstanceToUpdate(t *testing.T) { t.Parallel() var instance compute.Instance - var instanceName = fmt.Sprintf("instance-test-%s", acctest.RandString(10)) + var instanceName = fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeInstanceDestroy, + CheckDestroy: testAccCheckComputeInstanceDestroyProducer(t), Steps: []resource.TestStep{ // Set fields that require stopping the instance { Config: testAccComputeInstance_stopInstanceToUpdate(instanceName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceExists( - "google_compute_instance.foobar", &instance), + t, "google_compute_instance.foobar", &instance), ), }, computeInstanceImportStep("us-central1-a", instanceName, []string{"allow_stopping_for_update"}), @@ -695,7 +756,7 @@ func TestAccComputeInstance_stopInstanceToUpdate(t *testing.T) { Config: testAccComputeInstance_stopInstanceToUpdate2(instanceName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceExists( - 
"google_compute_instance.foobar", &instance), + t, "google_compute_instance.foobar", &instance), ), }, computeInstanceImportStep("us-central1-a", instanceName, []string{"allow_stopping_for_update"}), @@ -704,7 +765,7 @@ func TestAccComputeInstance_stopInstanceToUpdate(t *testing.T) { Config: testAccComputeInstance_stopInstanceToUpdate3(instanceName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceExists( - "google_compute_instance.foobar", &instance), + t, "google_compute_instance.foobar", &instance), ), }, computeInstanceImportStep("us-central1-a", instanceName, []string{"allow_stopping_for_update"}), @@ -716,18 +777,18 @@ func TestAccComputeInstance_serviceAccount(t *testing.T) { t.Parallel() var instance compute.Instance - var instanceName = fmt.Sprintf("instance-test-%s", acctest.RandString(10)) + var instanceName = fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeInstanceDestroy, + CheckDestroy: testAccCheckComputeInstanceDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeInstance_serviceAccount(instanceName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceExists( - "google_compute_instance.foobar", &instance), + t, "google_compute_instance.foobar", &instance), testAccCheckComputeInstanceServiceAccount(&instance, "https://www.googleapis.com/auth/compute.readonly"), testAccCheckComputeInstanceServiceAccount(&instance, @@ -745,18 +806,18 @@ func TestAccComputeInstance_scheduling(t *testing.T) { t.Parallel() var instance compute.Instance - var instanceName = fmt.Sprintf("instance-test-%s", acctest.RandString(10)) + var instanceName = fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeInstanceDestroy, + CheckDestroy: testAccCheckComputeInstanceDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeInstance_scheduling(instanceName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceExists( - "google_compute_instance.foobar", &instance), + t, "google_compute_instance.foobar", &instance), ), }, computeInstanceImportStep("us-central1-a", instanceName, []string{}), @@ -764,7 +825,7 @@ func TestAccComputeInstance_scheduling(t *testing.T) { Config: testAccComputeInstance_schedulingUpdated(instanceName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceExists( - "google_compute_instance.foobar", &instance), + t, "google_compute_instance.foobar", &instance), ), }, computeInstanceImportStep("us-central1-a", instanceName, []string{}), @@ -775,14 +836,14 @@ func TestAccComputeInstance_scheduling(t *testing.T) { func TestAccComputeInstance_soleTenantNodeAffinities(t *testing.T) { t.Parallel() - var instanceName = fmt.Sprintf("soletenanttest-%s", acctest.RandString(10)) - var templateName = fmt.Sprintf("nodetmpl-%s", acctest.RandString(10)) - var groupName = fmt.Sprintf("nodegroup-%s", acctest.RandString(10)) + var instanceName = fmt.Sprintf("tf-test-soletenant-%s", randString(t, 10)) + var templateName = fmt.Sprintf("tf-test-nodetmpl-%s", randString(t, 10)) + var groupName = fmt.Sprintf("tf-test-nodegroup-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: 
testAccProviders, - CheckDestroy: testAccCheckComputeInstanceDestroy, + CheckDestroy: testAccCheckComputeInstanceDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeInstance_soleTenantNodeAffinities(instanceName, templateName, groupName), @@ -800,18 +861,18 @@ func TestAccComputeInstance_subnet_auto(t *testing.T) { t.Parallel() var instance compute.Instance - var instanceName = fmt.Sprintf("instance-test-%s", acctest.RandString(10)) + var instanceName = fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeInstanceDestroy, + CheckDestroy: testAccCheckComputeInstanceDestroyProducer(t), Steps: []resource.TestStep{ { - Config: testAccComputeInstance_subnet_auto(instanceName), + Config: testAccComputeInstance_subnet_auto(randString(t, 10), instanceName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceExists( - "google_compute_instance.foobar", &instance), + t, "google_compute_instance.foobar", &instance), testAccCheckComputeInstanceHasSubnet(&instance), ), }, @@ -824,18 +885,18 @@ func TestAccComputeInstance_subnet_custom(t *testing.T) { t.Parallel() var instance compute.Instance - var instanceName = fmt.Sprintf("instance-test-%s", acctest.RandString(10)) + var instanceName = fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeInstanceDestroy, + CheckDestroy: testAccCheckComputeInstanceDestroyProducer(t), Steps: []resource.TestStep{ { - Config: testAccComputeInstance_subnet_custom(instanceName), + Config: testAccComputeInstance_subnet_custom(randString(t, 10), instanceName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceExists( - "google_compute_instance.foobar", &instance), + t, "google_compute_instance.foobar", &instance), testAccCheckComputeInstanceHasSubnet(&instance), ), }, @@ -845,24 +906,26 @@ func TestAccComputeInstance_subnet_custom(t *testing.T) { } func TestAccComputeInstance_subnet_xpn(t *testing.T) { + // Multiple fine-grained resources + skipIfVcr(t) t.Parallel() var instance compute.Instance - var instanceName = fmt.Sprintf("instance-test-%s", acctest.RandString(10)) + var instanceName = fmt.Sprintf("tf-test-%s", randString(t, 10)) org := getTestOrgFromEnv(t) billingId := getTestBillingAccountFromEnv(t) - projectName := fmt.Sprintf("tf-xpntest-%d", time.Now().Unix()) + projectName := fmt.Sprintf("tf-test-xpn-%d", time.Now().Unix()) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeInstanceDestroy, + CheckDestroy: testAccCheckComputeInstanceDestroyProducer(t), Steps: []resource.TestStep{ { - Config: testAccComputeInstance_subnet_xpn(org, billingId, projectName, instanceName), + Config: testAccComputeInstance_subnet_xpn(org, billingId, projectName, instanceName, randString(t, 10)), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceExistsInProject( - "google_compute_instance.foobar", fmt.Sprintf("%s-service", projectName), + t, "google_compute_instance.foobar", fmt.Sprintf("%s-service", projectName), &instance), testAccCheckComputeInstanceHasSubnet(&instance), ), @@ -875,18 +938,18 @@ func TestAccComputeInstance_networkIPAuto(t *testing.T) { t.Parallel() var 
instance compute.Instance - var instanceName = fmt.Sprintf("instance-test-%s", acctest.RandString(10)) + var instanceName = fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeInstanceDestroy, + CheckDestroy: testAccCheckComputeInstanceDestroyProducer(t), Steps: []resource.TestStep{ { - Config: testAccComputeInstance_networkIPAuto(instanceName), + Config: testAccComputeInstance_networkIPAuto(randString(t, 10), instanceName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceExists( - "google_compute_instance.foobar", &instance), + t, "google_compute_instance.foobar", &instance), testAccCheckComputeInstanceHasAnyNetworkIP(&instance), ), }, @@ -898,18 +961,18 @@ func TestAccComputeInstance_network_ip_custom(t *testing.T) { t.Parallel() var instance compute.Instance - var instanceName = fmt.Sprintf("instance-test-%s", acctest.RandString(10)) + var instanceName = fmt.Sprintf("tf-test-%s", randString(t, 10)) var ipAddress = "10.0.200.200" - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeInstanceDestroy, + CheckDestroy: testAccCheckComputeInstanceDestroyProducer(t), Steps: []resource.TestStep{ { - Config: testAccComputeInstance_network_ip_custom(instanceName, ipAddress), + Config: testAccComputeInstance_network_ip_custom(randString(t, 10), instanceName, ipAddress), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceExists( - "google_compute_instance.foobar", &instance), + t, "google_compute_instance.foobar", &instance), testAccCheckComputeInstanceHasNetworkIP(&instance, ipAddress), ), }, @@ -921,20 +984,20 @@ func TestAccComputeInstance_private_image_family(t *testing.T) { t.Parallel() var instance compute.Instance - var instanceName = fmt.Sprintf("instance-test-%s", acctest.RandString(10)) - var diskName = fmt.Sprintf("instance-testd-%s", acctest.RandString(10)) - var familyName = fmt.Sprintf("instance-testf-%s", acctest.RandString(10)) + var instanceName = fmt.Sprintf("tf-test-%s", randString(t, 10)) + var diskName = fmt.Sprintf("tf-testd-%s", randString(t, 10)) + var familyName = fmt.Sprintf("tf-testf-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeInstanceDestroy, + CheckDestroy: testAccCheckComputeInstanceDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeInstance_private_image_family(diskName, familyName, instanceName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceExists( - "google_compute_instance.foobar", &instance), + t, "google_compute_instance.foobar", &instance), ), }, }, @@ -945,18 +1008,18 @@ func TestAccComputeInstance_forceChangeMachineTypeManually(t *testing.T) { t.Parallel() var instance compute.Instance - var instanceName = fmt.Sprintf("instance-test-%s", acctest.RandString(10)) + var instanceName = fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeInstanceDestroy, + CheckDestroy: testAccCheckComputeInstanceDestroyProducer(t), Steps: []resource.TestStep{ { Config: 
testAccComputeInstance_basic(instanceName), Check: resource.ComposeTestCheckFunc( - testAccCheckComputeInstanceExists("google_compute_instance.foobar", &instance), - testAccCheckComputeInstanceUpdateMachineType("google_compute_instance.foobar"), + testAccCheckComputeInstanceExists(t, "google_compute_instance.foobar", &instance), + testAccCheckComputeInstanceUpdateMachineType(t, "google_compute_instance.foobar"), ), ExpectNonEmptyPlan: true, }, @@ -969,19 +1032,19 @@ func TestAccComputeInstance_multiNic(t *testing.T) { t.Parallel() var instance compute.Instance - instanceName := fmt.Sprintf("terraform-test-%s", acctest.RandString(10)) - networkName := fmt.Sprintf("terraform-test-%s", acctest.RandString(10)) - subnetworkName := fmt.Sprintf("terraform-test-%s", acctest.RandString(10)) + instanceName := fmt.Sprintf("tf-test-%s", randString(t, 10)) + networkName := fmt.Sprintf("tf-test-%s", randString(t, 10)) + subnetworkName := fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeInstanceDestroy, + CheckDestroy: testAccCheckComputeInstanceDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeInstance_multiNic(instanceName, networkName, subnetworkName), Check: resource.ComposeTestCheckFunc( - testAccCheckComputeInstanceExists("google_compute_instance.foobar", &instance), + testAccCheckComputeInstanceExists(t, "google_compute_instance.foobar", &instance), testAccCheckComputeInstanceHasMultiNic(&instance), ), }, @@ -994,17 +1057,17 @@ func TestAccComputeInstance_guestAccelerator(t *testing.T) { t.Parallel() var instance compute.Instance - instanceName := fmt.Sprintf("terraform-test-%s", acctest.RandString(10)) + instanceName := fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeInstanceDestroy, + CheckDestroy: testAccCheckComputeInstanceDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeInstance_guestAccelerator(instanceName, 1), Check: resource.ComposeTestCheckFunc( - testAccCheckComputeInstanceExists("google_compute_instance.foobar", &instance), + testAccCheckComputeInstanceExists(t, "google_compute_instance.foobar", &instance), testAccCheckComputeInstanceHasGuestAccelerator(&instance, "nvidia-tesla-k80", 1), ), }, @@ -1018,17 +1081,17 @@ func TestAccComputeInstance_guestAcceleratorSkip(t *testing.T) { t.Parallel() var instance compute.Instance - instanceName := fmt.Sprintf("terraform-test-%s", acctest.RandString(10)) + instanceName := fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeInstanceDestroy, + CheckDestroy: testAccCheckComputeInstanceDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeInstance_guestAccelerator(instanceName, 0), Check: resource.ComposeTestCheckFunc( - testAccCheckComputeInstanceExists("google_compute_instance.foobar", &instance), + testAccCheckComputeInstanceExists(t, "google_compute_instance.foobar", &instance), testAccCheckComputeInstanceLacksGuestAccelerator(&instance), ), }, @@ -1041,17 +1104,17 @@ func TestAccComputeInstance_minCpuPlatform(t *testing.T) { t.Parallel() var instance compute.Instance - 
instanceName := fmt.Sprintf("terraform-test-%s", acctest.RandString(10)) + instanceName := fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeInstanceDestroy, + CheckDestroy: testAccCheckComputeInstanceDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeInstance_minCpuPlatform(instanceName), Check: resource.ComposeTestCheckFunc( - testAccCheckComputeInstanceExists("google_compute_instance.foobar", &instance), + testAccCheckComputeInstanceExists(t, "google_compute_instance.foobar", &instance), testAccCheckComputeInstanceHasMinCpuPlatform(&instance, "Intel Haswell"), ), }, @@ -1064,18 +1127,18 @@ func TestAccComputeInstance_deletionProtectionExplicitFalse(t *testing.T) { t.Parallel() var instance compute.Instance - var instanceName = fmt.Sprintf("instance-test-%s", acctest.RandString(10)) + var instanceName = fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeInstanceDestroy, + CheckDestroy: testAccCheckComputeInstanceDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeInstance_basic_deletionProtectionFalse(instanceName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceExists( - "google_compute_instance.foobar", &instance), + t, "google_compute_instance.foobar", &instance), testAccCheckComputeInstanceHasConfiguredDeletionProtection(&instance, false), ), }, @@ -1087,18 +1150,18 @@ func TestAccComputeInstance_deletionProtectionExplicitTrueAndUpdateFalse(t *test t.Parallel() var instance compute.Instance - var instanceName = fmt.Sprintf("instance-test-%s", acctest.RandString(10)) + var instanceName = fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeInstanceDestroy, + CheckDestroy: testAccCheckComputeInstanceDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeInstance_basic_deletionProtectionTrue(instanceName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceExists( - "google_compute_instance.foobar", &instance), + t, "google_compute_instance.foobar", &instance), testAccCheckComputeInstanceHasConfiguredDeletionProtection(&instance, true), ), }, @@ -1108,7 +1171,7 @@ func TestAccComputeInstance_deletionProtectionExplicitTrueAndUpdateFalse(t *test Config: testAccComputeInstance_basic_deletionProtectionFalse(instanceName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceExists( - "google_compute_instance.foobar", &instance), + t, "google_compute_instance.foobar", &instance), testAccCheckComputeInstanceHasConfiguredDeletionProtection(&instance, false), ), }, @@ -1120,17 +1183,17 @@ func TestAccComputeInstance_primaryAliasIpRange(t *testing.T) { t.Parallel() var instance compute.Instance - instanceName := fmt.Sprintf("terraform-test-%s", acctest.RandString(10)) + instanceName := fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeInstanceDestroy, + CheckDestroy: testAccCheckComputeInstanceDestroyProducer(t), Steps: 
[]resource.TestStep{ { Config: testAccComputeInstance_primaryAliasIpRange(instanceName), Check: resource.ComposeTestCheckFunc( - testAccCheckComputeInstanceExists("google_compute_instance.foobar", &instance), + testAccCheckComputeInstanceExists(t, "google_compute_instance.foobar", &instance), testAccCheckComputeInstanceHasAliasIpRange(&instance, "", "/24"), ), }, @@ -1143,19 +1206,19 @@ func TestAccComputeInstance_secondaryAliasIpRange(t *testing.T) { t.Parallel() var instance compute.Instance - instanceName := fmt.Sprintf("terraform-test-%s", acctest.RandString(10)) - networkName := fmt.Sprintf("terraform-test-%s", acctest.RandString(10)) - subnetName := fmt.Sprintf("terraform-test-%s", acctest.RandString(10)) + instanceName := fmt.Sprintf("tf-test-%s", randString(t, 10)) + networkName := fmt.Sprintf("tf-test-%s", randString(t, 10)) + subnetName := fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeInstanceDestroy, + CheckDestroy: testAccCheckComputeInstanceDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeInstance_secondaryAliasIpRange(networkName, subnetName, instanceName), Check: resource.ComposeTestCheckFunc( - testAccCheckComputeInstanceExists("google_compute_instance.foobar", &instance), + testAccCheckComputeInstanceExists(t, "google_compute_instance.foobar", &instance), testAccCheckComputeInstanceHasAliasIpRange(&instance, "inst-test-secondary", "172.16.0.0/24"), ), }, @@ -1163,7 +1226,7 @@ func TestAccComputeInstance_secondaryAliasIpRange(t *testing.T) { { Config: testAccComputeInstance_secondaryAliasIpRangeUpdate(networkName, subnetName, instanceName), Check: resource.ComposeTestCheckFunc( - testAccCheckComputeInstanceExists("google_compute_instance.foobar", &instance), + testAccCheckComputeInstanceExists(t, "google_compute_instance.foobar", &instance), testAccCheckComputeInstanceHasAliasIpRange(&instance, "", "10.0.1.0/24"), ), }, @@ -1176,12 +1239,12 @@ func TestAccComputeInstance_hostname(t *testing.T) { t.Parallel() var instance computeBeta.Instance - instanceName := fmt.Sprintf("instance-test-%s", acctest.RandString(10)) + instanceName := fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeInstanceDestroy, + CheckDestroy: testAccCheckComputeInstanceDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeInstance_hostname(instanceName), @@ -1199,17 +1262,17 @@ func TestAccComputeInstance_shieldedVmConfig1(t *testing.T) { t.Parallel() var instance computeBeta.Instance - instanceName := fmt.Sprintf("terraform-test-%s", acctest.RandString(10)) + instanceName := fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeInstanceDestroy, + CheckDestroy: testAccCheckComputeInstanceDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeInstance_shieldedVmConfig(instanceName, true, true, true), Check: resource.ComposeTestCheckFunc( - testAccCheckComputeInstanceExists("google_compute_instance.foobar", &instance), + testAccCheckComputeInstanceExists(t, "google_compute_instance.foobar", &instance), 
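+	// The three booleans passed above and checked below presumably correspond to
+	// the secure boot, vTPM, and integrity monitoring toggles of the shielded VM
+	// config, in that order.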
testAccCheckComputeInstanceHasShieldedVmConfig(&instance, true, true, true), ), }, @@ -1222,17 +1285,17 @@ func TestAccComputeInstance_shieldedVmConfig2(t *testing.T) { t.Parallel() var instance computeBeta.Instance - instanceName := fmt.Sprintf("terraform-test-%s", acctest.RandString(10)) + instanceName := fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeInstanceDestroy, + CheckDestroy: testAccCheckComputeInstanceDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeInstance_shieldedVmConfig(instanceName, true, true, false), Check: resource.ComposeTestCheckFunc( - testAccCheckComputeInstanceExists("google_compute_instance.foobar", &instance), + testAccCheckComputeInstanceExists(t, "google_compute_instance.foobar", &instance), testAccCheckComputeInstanceHasShieldedVmConfig(&instance, true, true, false), ), }, @@ -1244,12 +1307,12 @@ func TestAccComputeInstance_shieldedVmConfig2(t *testing.T) { func TestAccComputeInstance_enableDisplay(t *testing.T) { t.Parallel() - instanceName := fmt.Sprintf("terraform-test-%s", acctest.RandString(10)) + instanceName := fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeInstanceDestroy, + CheckDestroy: testAccCheckComputeInstanceDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeInstance_enableDisplay(instanceName), @@ -1267,12 +1330,12 @@ func TestAccComputeInstance_desiredStatusOnCreation(t *testing.T) { t.Parallel() var instance compute.Instance - var instanceName = fmt.Sprintf("instance-test-%s", acctest.RandString(10)) + var instanceName = fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeInstanceDestroy, + CheckDestroy: testAccCheckComputeInstanceDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeInstance_machineType_desiredStatus_allowStoppingForUpdate(instanceName, "n1-standard-1", "TERMINATED", false), @@ -1282,7 +1345,7 @@ func TestAccComputeInstance_desiredStatusOnCreation(t *testing.T) { Config: testAccComputeInstance_machineType_desiredStatus_allowStoppingForUpdate(instanceName, "n1-standard-1", "RUNNING", false), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceExists( - "google_compute_instance.foobar", &instance), + t, "google_compute_instance.foobar", &instance), testAccCheckComputeInstanceHasStatus(&instance, "RUNNING"), ), }, @@ -1294,25 +1357,25 @@ func TestAccComputeInstance_desiredStatusUpdateBasic(t *testing.T) { t.Parallel() var instance compute.Instance - var instanceName = fmt.Sprintf("instance-test-%s", acctest.RandString(10)) + var instanceName = fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeInstanceDestroy, + CheckDestroy: testAccCheckComputeInstanceDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeInstance_basic2(instanceName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceExists( - "google_compute_instance.foobar", &instance), + t, 
"google_compute_instance.foobar", &instance), ), }, { Config: testAccComputeInstance_machineType_desiredStatus_allowStoppingForUpdate(instanceName, "n1-standard-1", "RUNNING", false), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceExists( - "google_compute_instance.foobar", &instance), + t, "google_compute_instance.foobar", &instance), testAccCheckComputeInstanceHasStatus(&instance, "RUNNING"), ), }, @@ -1320,7 +1383,7 @@ func TestAccComputeInstance_desiredStatusUpdateBasic(t *testing.T) { Config: testAccComputeInstance_machineType_desiredStatus_allowStoppingForUpdate(instanceName, "n1-standard-1", "TERMINATED", false), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceExists( - "google_compute_instance.foobar", &instance), + t, "google_compute_instance.foobar", &instance), testAccCheckComputeInstanceHasStatus(&instance, "TERMINATED"), ), }, @@ -1328,7 +1391,7 @@ func TestAccComputeInstance_desiredStatusUpdateBasic(t *testing.T) { Config: testAccComputeInstance_basic2(instanceName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceExists( - "google_compute_instance.foobar", &instance), + t, "google_compute_instance.foobar", &instance), testAccCheckComputeInstanceHasStatus(&instance, "TERMINATED"), ), }, @@ -1336,7 +1399,7 @@ func TestAccComputeInstance_desiredStatusUpdateBasic(t *testing.T) { Config: testAccComputeInstance_machineType_desiredStatus_allowStoppingForUpdate(instanceName, "n1-standard-1", "RUNNING", false), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceExists( - "google_compute_instance.foobar", &instance), + t, "google_compute_instance.foobar", &instance), testAccCheckComputeInstanceHasStatus(&instance, "RUNNING"), ), }, @@ -1348,25 +1411,25 @@ func TestAccComputeInstance_desiredStatusTerminatedUpdateFields(t *testing.T) { t.Parallel() var instance compute.Instance - var instanceName = fmt.Sprintf("instance-test-%s", acctest.RandString(10)) + var instanceName = fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeInstanceDestroy, + CheckDestroy: testAccCheckComputeInstanceDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeInstance_basic2(instanceName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceExists( - "google_compute_instance.foobar", &instance), + t, "google_compute_instance.foobar", &instance), ), }, { Config: testAccComputeInstance_machineType_desiredStatus_allowStoppingForUpdate(instanceName, "n1-standard-1", "TERMINATED", false), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceExists( - "google_compute_instance.foobar", &instance), + t, "google_compute_instance.foobar", &instance), testAccCheckComputeInstanceHasStatus(&instance, "TERMINATED"), ), }, @@ -1374,7 +1437,7 @@ func TestAccComputeInstance_desiredStatusTerminatedUpdateFields(t *testing.T) { Config: testAccComputeInstance_desiredStatusTerminatedUpdate(instanceName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceExists( - "google_compute_instance.foobar", &instance), + t, "google_compute_instance.foobar", &instance), testAccCheckComputeInstanceMetadata( &instance, "bar", "baz"), testAccCheckComputeInstanceLabel(&instance, "only_me", "nothing_else"), @@ -1390,18 +1453,18 @@ func TestAccComputeInstance_updateRunning_desiredStatusRunning_allowStoppingForU t.Parallel() var instance compute.Instance - var 
instanceName = fmt.Sprintf("instance-test-%s", acctest.RandString(10)) + var instanceName = fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeInstanceDestroy, + CheckDestroy: testAccCheckComputeInstanceDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeInstance_basic2(instanceName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceExists( - "google_compute_instance.foobar", &instance), + t, "google_compute_instance.foobar", &instance), testAccCheckComputeInstanceHasStatus(&instance, "RUNNING"), ), }, @@ -1409,7 +1472,7 @@ func TestAccComputeInstance_updateRunning_desiredStatusRunning_allowStoppingForU Config: testAccComputeInstance_machineType_desiredStatus_allowStoppingForUpdate(instanceName, "n1-standard-2", "RUNNING", true), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceExists( - "google_compute_instance.foobar", &instance), + t, "google_compute_instance.foobar", &instance), testAccCheckComputeInstanceHasMachineType(&instance, "n1-standard-2"), testAccCheckComputeInstanceHasStatus(&instance, "RUNNING"), ), @@ -1422,18 +1485,18 @@ func TestAccComputeInstance_updateRunning_desiredStatusNotSet_notAllowStoppingFo t.Parallel() var instance compute.Instance - var instanceName = fmt.Sprintf("instance-test-%s", acctest.RandString(10)) + var instanceName = fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeInstanceDestroy, + CheckDestroy: testAccCheckComputeInstanceDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeInstance_basic2(instanceName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceExists( - "google_compute_instance.foobar", &instance), + t, "google_compute_instance.foobar", &instance), testAccCheckComputeInstanceHasStatus(&instance, "RUNNING"), ), }, @@ -1453,18 +1516,18 @@ func TestAccComputeInstance_updateRunning_desiredStatusRunning_notAllowStoppingF t.Parallel() var instance compute.Instance - var instanceName = fmt.Sprintf("instance-test-%s", acctest.RandString(10)) + var instanceName = fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeInstanceDestroy, + CheckDestroy: testAccCheckComputeInstanceDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeInstance_basic2(instanceName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceExists( - "google_compute_instance.foobar", &instance), + t, "google_compute_instance.foobar", &instance), testAccCheckComputeInstanceHasStatus(&instance, "RUNNING"), ), }, @@ -1484,18 +1547,18 @@ func TestAccComputeInstance_updateRunning_desiredStatusTerminated_allowStoppingF t.Parallel() var instance compute.Instance - var instanceName = fmt.Sprintf("instance-test-%s", acctest.RandString(10)) + var instanceName = fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeInstanceDestroy, + CheckDestroy: testAccCheckComputeInstanceDestroyProducer(t), Steps: 
[]resource.TestStep{ { Config: testAccComputeInstance_basic2(instanceName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceExists( - "google_compute_instance.foobar", &instance), + t, "google_compute_instance.foobar", &instance), testAccCheckComputeInstanceHasStatus(&instance, "RUNNING"), ), }, @@ -1503,7 +1566,7 @@ func TestAccComputeInstance_updateRunning_desiredStatusTerminated_allowStoppingF Config: testAccComputeInstance_machineType_desiredStatus_allowStoppingForUpdate(instanceName, "n1-standard-2", "TERMINATED", true), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceExists( - "google_compute_instance.foobar", &instance), + t, "google_compute_instance.foobar", &instance), testAccCheckComputeInstanceHasMachineType(&instance, "n1-standard-2"), testAccCheckComputeInstanceHasStatus(&instance, "TERMINATED"), ), @@ -1516,18 +1579,18 @@ func TestAccComputeInstance_updateRunning_desiredStatusTerminated_notAllowStoppi t.Parallel() var instance compute.Instance - var instanceName = fmt.Sprintf("instance-test-%s", acctest.RandString(10)) + var instanceName = fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeInstanceDestroy, + CheckDestroy: testAccCheckComputeInstanceDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeInstance_basic2(instanceName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceExists( - "google_compute_instance.foobar", &instance), + t, "google_compute_instance.foobar", &instance), testAccCheckComputeInstanceHasStatus(&instance, "RUNNING"), ), }, @@ -1535,7 +1598,7 @@ func TestAccComputeInstance_updateRunning_desiredStatusTerminated_notAllowStoppi Config: testAccComputeInstance_machineType_desiredStatus_allowStoppingForUpdate(instanceName, "n1-standard-2", "TERMINATED", false), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceExists( - "google_compute_instance.foobar", &instance), + t, "google_compute_instance.foobar", &instance), testAccCheckComputeInstanceHasMachineType(&instance, "n1-standard-2"), testAccCheckComputeInstanceHasStatus(&instance, "TERMINATED"), ), @@ -1548,18 +1611,18 @@ func TestAccComputeInstance_updateTerminated_desiredStatusNotSet_allowStoppingFo t.Parallel() var instance compute.Instance - var instanceName = fmt.Sprintf("instance-test-%s", acctest.RandString(10)) + var instanceName = fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeInstanceDestroy, + CheckDestroy: testAccCheckComputeInstanceDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeInstance_basic2(instanceName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceExists( - "google_compute_instance.foobar", &instance), + t, "google_compute_instance.foobar", &instance), testAccCheckComputeInstanceHasStatus(&instance, "RUNNING"), ), }, @@ -1567,7 +1630,7 @@ func TestAccComputeInstance_updateTerminated_desiredStatusNotSet_allowStoppingFo Config: testAccComputeInstance_machineType_desiredStatus_allowStoppingForUpdate(instanceName, "n1-standard-1", "TERMINATED", false), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceExists( - "google_compute_instance.foobar", &instance), + t, "google_compute_instance.foobar", &instance), 
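+	// desired_status is TERMINATED here, so the check below expects a stopped
+	// instance; the following step then changes the machine type while the
+	// instance stays TERMINATED.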
testAccCheckComputeInstanceHasStatus(&instance, "TERMINATED"), ), }, @@ -1575,7 +1638,7 @@ func TestAccComputeInstance_updateTerminated_desiredStatusNotSet_allowStoppingFo Config: testAccComputeInstance_machineType_desiredStatus_allowStoppingForUpdate(instanceName, "n1-standard-2", "", true), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceExists( - "google_compute_instance.foobar", &instance), + t, "google_compute_instance.foobar", &instance), testAccCheckComputeInstanceHasMachineType(&instance, "n1-standard-2"), testAccCheckComputeInstanceHasStatus(&instance, "TERMINATED"), ), @@ -1588,18 +1651,18 @@ func TestAccComputeInstance_updateTerminated_desiredStatusTerminated_allowStoppi t.Parallel() var instance compute.Instance - var instanceName = fmt.Sprintf("instance-test-%s", acctest.RandString(10)) + var instanceName = fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeInstanceDestroy, + CheckDestroy: testAccCheckComputeInstanceDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeInstance_basic2(instanceName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceExists( - "google_compute_instance.foobar", &instance), + t, "google_compute_instance.foobar", &instance), testAccCheckComputeInstanceHasStatus(&instance, "RUNNING"), ), }, @@ -1607,7 +1670,7 @@ func TestAccComputeInstance_updateTerminated_desiredStatusTerminated_allowStoppi Config: testAccComputeInstance_machineType_desiredStatus_allowStoppingForUpdate(instanceName, "n1-standard-1", "TERMINATED", false), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceExists( - "google_compute_instance.foobar", &instance), + t, "google_compute_instance.foobar", &instance), testAccCheckComputeInstanceHasStatus(&instance, "TERMINATED"), ), }, @@ -1615,7 +1678,7 @@ func TestAccComputeInstance_updateTerminated_desiredStatusTerminated_allowStoppi Config: testAccComputeInstance_machineType_desiredStatus_allowStoppingForUpdate(instanceName, "n1-standard-2", "TERMINATED", true), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceExists( - "google_compute_instance.foobar", &instance), + t, "google_compute_instance.foobar", &instance), testAccCheckComputeInstanceHasMachineType(&instance, "n1-standard-2"), testAccCheckComputeInstanceHasStatus(&instance, "TERMINATED"), ), @@ -1628,18 +1691,18 @@ func TestAccComputeInstance_updateTerminated_desiredStatusNotSet_notAllowStoppin t.Parallel() var instance compute.Instance - var instanceName = fmt.Sprintf("instance-test-%s", acctest.RandString(10)) + var instanceName = fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeInstanceDestroy, + CheckDestroy: testAccCheckComputeInstanceDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeInstance_basic2(instanceName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceExists( - "google_compute_instance.foobar", &instance), + t, "google_compute_instance.foobar", &instance), testAccCheckComputeInstanceHasStatus(&instance, "RUNNING"), ), }, @@ -1647,7 +1710,7 @@ func TestAccComputeInstance_updateTerminated_desiredStatusNotSet_notAllowStoppin Config: testAccComputeInstance_machineType_desiredStatus_allowStoppingForUpdate(instanceName, 
"n1-standard-1", "TERMINATED", false), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceExists( - "google_compute_instance.foobar", &instance), + t, "google_compute_instance.foobar", &instance), testAccCheckComputeInstanceHasStatus(&instance, "TERMINATED"), ), }, @@ -1655,7 +1718,7 @@ func TestAccComputeInstance_updateTerminated_desiredStatusNotSet_notAllowStoppin Config: testAccComputeInstance_machineType_desiredStatus_allowStoppingForUpdate(instanceName, "n1-standard-2", "", false), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceExists( - "google_compute_instance.foobar", &instance), + t, "google_compute_instance.foobar", &instance), testAccCheckComputeInstanceHasMachineType(&instance, "n1-standard-2"), testAccCheckComputeInstanceHasStatus(&instance, "TERMINATED"), ), @@ -1668,18 +1731,18 @@ func TestAccComputeInstance_updateTerminated_desiredStatusTerminated_notAllowSto t.Parallel() var instance compute.Instance - var instanceName = fmt.Sprintf("instance-test-%s", acctest.RandString(10)) + var instanceName = fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeInstanceDestroy, + CheckDestroy: testAccCheckComputeInstanceDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeInstance_basic2(instanceName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceExists( - "google_compute_instance.foobar", &instance), + t, "google_compute_instance.foobar", &instance), testAccCheckComputeInstanceHasStatus(&instance, "RUNNING"), ), }, @@ -1687,7 +1750,7 @@ func TestAccComputeInstance_updateTerminated_desiredStatusTerminated_notAllowSto Config: testAccComputeInstance_machineType_desiredStatus_allowStoppingForUpdate(instanceName, "n1-standard-1", "TERMINATED", false), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceExists( - "google_compute_instance.foobar", &instance), + t, "google_compute_instance.foobar", &instance), testAccCheckComputeInstanceHasStatus(&instance, "TERMINATED"), ), }, @@ -1695,7 +1758,7 @@ func TestAccComputeInstance_updateTerminated_desiredStatusTerminated_notAllowSto Config: testAccComputeInstance_machineType_desiredStatus_allowStoppingForUpdate(instanceName, "n1-standard-2", "TERMINATED", false), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceExists( - "google_compute_instance.foobar", &instance), + t, "google_compute_instance.foobar", &instance), testAccCheckComputeInstanceHasMachineType(&instance, "n1-standard-2"), testAccCheckComputeInstanceHasStatus(&instance, "TERMINATED"), ), @@ -1708,18 +1771,18 @@ func TestAccComputeInstance_updateTerminated_desiredStatusRunning_allowStoppingF t.Parallel() var instance compute.Instance - var instanceName = fmt.Sprintf("instance-test-%s", acctest.RandString(10)) + var instanceName = fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeInstanceDestroy, + CheckDestroy: testAccCheckComputeInstanceDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeInstance_basic2(instanceName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceExists( - "google_compute_instance.foobar", &instance), + t, "google_compute_instance.foobar", &instance), testAccCheckComputeInstanceHasStatus(&instance, 
"RUNNING"), ), }, @@ -1727,7 +1790,7 @@ func TestAccComputeInstance_updateTerminated_desiredStatusRunning_allowStoppingF Config: testAccComputeInstance_machineType_desiredStatus_allowStoppingForUpdate(instanceName, "n1-standard-1", "TERMINATED", false), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceExists( - "google_compute_instance.foobar", &instance), + t, "google_compute_instance.foobar", &instance), testAccCheckComputeInstanceHasStatus(&instance, "TERMINATED"), ), }, @@ -1735,7 +1798,7 @@ func TestAccComputeInstance_updateTerminated_desiredStatusRunning_allowStoppingF Config: testAccComputeInstance_machineType_desiredStatus_allowStoppingForUpdate(instanceName, "n1-standard-2", "RUNNING", true), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceExists( - "google_compute_instance.foobar", &instance), + t, "google_compute_instance.foobar", &instance), testAccCheckComputeInstanceHasMachineType(&instance, "n1-standard-2"), testAccCheckComputeInstanceHasStatus(&instance, "RUNNING"), ), @@ -1748,18 +1811,18 @@ func TestAccComputeInstance_updateTerminated_desiredStatusRunning_notAllowStoppi t.Parallel() var instance compute.Instance - var instanceName = fmt.Sprintf("instance-test-%s", acctest.RandString(10)) + var instanceName = fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeInstanceDestroy, + CheckDestroy: testAccCheckComputeInstanceDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeInstance_basic2(instanceName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceExists( - "google_compute_instance.foobar", &instance), + t, "google_compute_instance.foobar", &instance), testAccCheckComputeInstanceHasStatus(&instance, "RUNNING"), ), }, @@ -1767,7 +1830,7 @@ func TestAccComputeInstance_updateTerminated_desiredStatusRunning_notAllowStoppi Config: testAccComputeInstance_machineType_desiredStatus_allowStoppingForUpdate(instanceName, "n1-standard-1", "TERMINATED", false), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceExists( - "google_compute_instance.foobar", &instance), + t, "google_compute_instance.foobar", &instance), testAccCheckComputeInstanceHasStatus(&instance, "TERMINATED"), ), }, @@ -1775,7 +1838,7 @@ func TestAccComputeInstance_updateTerminated_desiredStatusRunning_notAllowStoppi Config: testAccComputeInstance_machineType_desiredStatus_allowStoppingForUpdate(instanceName, "n1-standard-2", "RUNNING", false), Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceExists( - "google_compute_instance.foobar", &instance), + t, "google_compute_instance.foobar", &instance), testAccCheckComputeInstanceHasMachineType(&instance, "n1-standard-2"), testAccCheckComputeInstanceHasStatus(&instance, "RUNNING"), ), @@ -1784,7 +1847,25 @@ func TestAccComputeInstance_updateTerminated_desiredStatusRunning_notAllowStoppi }) } -func testAccCheckComputeInstanceUpdateMachineType(n string) resource.TestCheckFunc { +func TestAccComputeInstance_resourcePolicyCollocate(t *testing.T) { + t.Parallel() + + instanceName := fmt.Sprintf("tf-test-%s", randString(t, 10)) + + vcrTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckComputeInstanceDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccComputeInstance_resourcePolicyCollocate(instanceName, randString(t, 
10)), + }, + computeInstanceImportStep("us-east4-b", instanceName, []string{"allow_stopping_for_update"}), + }, + }) +} + +func testAccCheckComputeInstanceUpdateMachineType(t *testing.T, n string) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] if !ok { @@ -1795,13 +1876,13 @@ func testAccCheckComputeInstanceUpdateMachineType(n string) resource.TestCheckFu return fmt.Errorf("No ID is set") } - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) op, err := config.clientCompute.Instances.Stop(config.Project, rs.Primary.Attributes["zone"], rs.Primary.Attributes["name"]).Do() if err != nil { return fmt.Errorf("Could not stop instance: %s", err) } - err = computeOperationWaitTime(config, op, config.Project, "Waiting on stop", 20) + err = computeOperationWaitTime(config, op, config.Project, "Waiting on stop", 20*time.Minute) if err != nil { return fmt.Errorf("Could not stop instance: %s", err) } @@ -1815,7 +1896,7 @@ func testAccCheckComputeInstanceUpdateMachineType(n string) resource.TestCheckFu if err != nil { return fmt.Errorf("Could not change machine type: %s", err) } - err = computeOperationWaitTime(config, op, config.Project, "Waiting machine type change", 20) + err = computeOperationWaitTime(config, op, config.Project, "Waiting machine type change", 20*time.Minute) if err != nil { return fmt.Errorf("Could not change machine type: %s", err) } @@ -1823,40 +1904,42 @@ func testAccCheckComputeInstanceUpdateMachineType(n string) resource.TestCheckFu } } -func testAccCheckComputeInstanceDestroy(s *terraform.State) error { - config := testAccProvider.Meta().(*Config) +func testAccCheckComputeInstanceDestroyProducer(t *testing.T) func(s *terraform.State) error { + return func(s *terraform.State) error { + config := googleProviderConfig(t) - for _, rs := range s.RootModule().Resources { - if rs.Type != "google_compute_instance" { - continue - } + for _, rs := range s.RootModule().Resources { + if rs.Type != "google_compute_instance" { + continue + } - _, err := config.clientCompute.Instances.Get( - config.Project, rs.Primary.Attributes["zone"], rs.Primary.Attributes["name"]).Do() - if err == nil { - return fmt.Errorf("Instance still exists") + _, err := config.clientCompute.Instances.Get( + config.Project, rs.Primary.Attributes["zone"], rs.Primary.Attributes["name"]).Do() + if err == nil { + return fmt.Errorf("Instance still exists") + } } - } - return nil + return nil + } } -func testAccCheckComputeInstanceExists(n string, instance interface{}) resource.TestCheckFunc { +func testAccCheckComputeInstanceExists(t *testing.T, n string, instance interface{}) resource.TestCheckFunc { if instance == nil { panic("Attempted to check existence of Instance that was nil.") } switch instance.(type) { case *compute.Instance: - return testAccCheckComputeInstanceExistsInProject(n, getTestProjectFromEnv(), instance.(*compute.Instance)) + return testAccCheckComputeInstanceExistsInProject(t, n, getTestProjectFromEnv(), instance.(*compute.Instance)) case *computeBeta.Instance: - return testAccCheckComputeBetaInstanceExistsInProject(n, getTestProjectFromEnv(), instance.(*computeBeta.Instance)) + return testAccCheckComputeBetaInstanceExistsInProject(t, n, getTestProjectFromEnv(), instance.(*computeBeta.Instance)) default: panic("Attempted to check existence of an Instance of unknown type.") } } -func testAccCheckComputeInstanceExistsInProject(n, p string, instance *compute.Instance) resource.TestCheckFunc { +func 
testAccCheckComputeInstanceExistsInProject(t *testing.T, n, p string, instance *compute.Instance) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] if !ok { @@ -1867,7 +1950,7 @@ func testAccCheckComputeInstanceExistsInProject(n, p string, instance *compute.I return fmt.Errorf("No ID is set") } - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) found, err := config.clientCompute.Instances.Get( p, rs.Primary.Attributes["zone"], rs.Primary.Attributes["name"]).Do() @@ -1885,7 +1968,7 @@ func testAccCheckComputeInstanceExistsInProject(n, p string, instance *compute.I } } -func testAccCheckComputeBetaInstanceExistsInProject(n, p string, instance *computeBeta.Instance) resource.TestCheckFunc { +func testAccCheckComputeBetaInstanceExistsInProject(t *testing.T, n, p string, instance *computeBeta.Instance) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] if !ok { @@ -1896,7 +1979,7 @@ func testAccCheckComputeBetaInstanceExistsInProject(n, p string, instance *compu return fmt.Errorf("No ID is set") } - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) found, err := config.clientComputeBeta.Instances.Get( p, rs.Primary.Attributes["zone"], rs.Primary.Attributes["name"]).Do() @@ -2031,9 +2114,9 @@ func testAccCheckComputeInstanceBootDisk(instance *compute.Instance, source stri } } -func testAccCheckComputeInstanceBootDiskType(instanceName string, diskType string) resource.TestCheckFunc { +func testAccCheckComputeInstanceBootDiskType(t *testing.T, instanceName string, diskType string) resource.TestCheckFunc { return func(s *terraform.State) error { - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) // boot disk is named the same as the Instance disk, err := config.clientCompute.Disks.Get(config.Project, "us-central1-a", instanceName).Do() @@ -2781,11 +2864,12 @@ resource "google_compute_instance" "foobar" { `, instance) } -func testAccComputeInstance_disks_encryption(bootEncryptionKey string, diskNameToEncryptionKey map[string]*compute.CustomerEncryptionKey, instance string) string { +func testAccComputeInstance_disks_encryption(bootEncryptionKey string, diskNameToEncryptionKey map[string]*compute.CustomerEncryptionKey, instance, suffix string) string { diskNames := []string{} for k := range diskNameToEncryptionKey { diskNames = append(diskNames, k) } + sort.Strings(diskNames) return fmt.Sprintf(` data "google_compute_image" "my_image" { family = "debian-9" @@ -2876,7 +2960,7 @@ resource "google_compute_instance" "foobar" { `, diskNames[0], diskNameToEncryptionKey[diskNames[0]].RawKey, diskNames[1], diskNameToEncryptionKey[diskNames[1]].RawKey, diskNames[2], diskNameToEncryptionKey[diskNames[2]].RawKey, - "instance-testd-"+acctest.RandString(10), + "tf-testd-"+suffix, instance, bootEncryptionKey, diskNameToEncryptionKey[diskNames[0]].RawKey, diskNameToEncryptionKey[diskNames[1]].RawKey, diskNameToEncryptionKey[diskNames[2]].RawKey) } @@ -2989,11 +3073,12 @@ resource "google_compute_instance" "foobar" { diskNameToEncryptionKey[diskNames[0]].RawKey) } -func testAccComputeInstance_disks_kms(pid string, bootEncryptionKey string, diskNameToEncryptionKey map[string]*compute.CustomerEncryptionKey, instance string) string { +func testAccComputeInstance_disks_kms(pid string, bootEncryptionKey string, diskNameToEncryptionKey map[string]*compute.CustomerEncryptionKey, instance, suffix string) string { diskNames := 
[]string{} for k := range diskNameToEncryptionKey { diskNames = append(diskNames, k) } + sort.Strings(diskNames) return fmt.Sprintf(` data "google_project" "project" { project_id = "%s" @@ -3099,7 +3184,7 @@ resource "google_compute_instance" "foobar" { `, pid, diskNames[0], diskNameToEncryptionKey[diskNames[0]].KmsKeyName, diskNames[1], diskNameToEncryptionKey[diskNames[1]].KmsKeyName, diskNames[2], diskNameToEncryptionKey[diskNames[2]].KmsKeyName, - "instance-testd-"+acctest.RandString(10), + "tf-testd-"+suffix, instance, bootEncryptionKey, diskNameToEncryptionKey[diskNames[0]].KmsKeyName, diskNameToEncryptionKey[diskNames[1]].KmsKeyName) } @@ -3580,7 +3665,7 @@ resource "google_compute_instance" "foobar" { `, instance) } -func testAccComputeInstance_subnet_auto(instance string) string { +func testAccComputeInstance_subnet_auto(suffix, instance string) string { return fmt.Sprintf(` data "google_compute_image" "my_image" { family = "debian-9" @@ -3610,10 +3695,10 @@ resource "google_compute_instance" "foobar" { } } } -`, acctest.RandString(10), instance) +`, suffix, instance) } -func testAccComputeInstance_subnet_custom(instance string) string { +func testAccComputeInstance_subnet_custom(suffix, instance string) string { return fmt.Sprintf(` data "google_compute_image" "my_image" { family = "debian-9" @@ -3650,10 +3735,10 @@ resource "google_compute_instance" "foobar" { } } } -`, acctest.RandString(10), acctest.RandString(10), instance) +`, suffix, suffix, instance) } -func testAccComputeInstance_subnet_xpn(org, billingId, projectName, instance string) string { +func testAccComputeInstance_subnet_xpn(org, billingId, projectName, instance, suffix string) string { return fmt.Sprintf(` data "google_compute_image" "my_image" { family = "debian-9" @@ -3701,7 +3786,7 @@ resource "google_compute_network" "inst-test-network" { } resource "google_compute_subnetwork" "inst-test-subnetwork" { - name = "inst-test-subnetwork-%s" + name = "tf-test-subnetwork-%s" ip_cidr_range = "10.0.0.0/16" region = "us-central1" network = google_compute_network.inst-test-network.self_link @@ -3727,10 +3812,10 @@ resource "google_compute_instance" "foobar" { } } } -`, projectName, org, billingId, projectName, org, billingId, acctest.RandString(10), acctest.RandString(10), instance) +`, projectName, org, billingId, projectName, org, billingId, suffix, suffix, instance) } -func testAccComputeInstance_networkIPAuto(instance string) string { +func testAccComputeInstance_networkIPAuto(suffix, instance string) string { return fmt.Sprintf(` data "google_compute_image" "my_image" { family = "debian-9" @@ -3742,7 +3827,7 @@ resource "google_compute_network" "inst-test-network" { } resource "google_compute_subnetwork" "inst-test-subnetwork" { - name = "inst-test-subnetwork-%s" + name = "tf-test-subnetwork-%s" ip_cidr_range = "10.0.0.0/16" region = "us-central1" network = google_compute_network.inst-test-network.self_link @@ -3765,10 +3850,10 @@ resource "google_compute_instance" "foobar" { } } } -`, acctest.RandString(10), acctest.RandString(10), instance) +`, suffix, suffix, instance) } -func testAccComputeInstance_network_ip_custom(instance, ipAddress string) string { +func testAccComputeInstance_network_ip_custom(suffix, instance, ipAddress string) string { return fmt.Sprintf(` data "google_compute_image" "my_image" { family = "debian-9" @@ -3780,7 +3865,7 @@ resource "google_compute_network" "inst-test-network" { } resource "google_compute_subnetwork" "inst-test-subnetwork" { - name = "inst-test-subnetwork-%s" + name = 
"tf-test-subnetwork-%s" ip_cidr_range = "10.0.0.0/16" region = "us-central1" network = google_compute_network.inst-test-network.self_link @@ -3804,7 +3889,7 @@ resource "google_compute_instance" "foobar" { } } } -`, acctest.RandString(10), acctest.RandString(10), instance, ipAddress) +`, suffix, suffix, instance, ipAddress) } func testAccComputeInstance_private_image_family(disk, family, instance string) string { @@ -4220,7 +4305,7 @@ data "google_compute_image" "my_image" { resource "google_compute_instance" "foobar" { name = "%s" - machine_type = "n1-standard-2" + machine_type = "n1-standard-8" zone = "us-central1-a" boot_disk { @@ -4251,11 +4336,11 @@ resource "google_compute_instance" "foobar" { operator = "IN" values = [google_compute_node_group.nodes.name] } - } -} -data "google_compute_node_types" "central1a" { - zone = "us-central1-a" +<% unless version == 'ga' -%> + min_node_cpus = 4 +<% end -%> + } } resource "google_compute_node_template" "nodetmpl" { @@ -4266,7 +4351,11 @@ resource "google_compute_node_template" "nodetmpl" { tfacc = "test" } - node_type = data.google_compute_node_types.central1a.names[0] + node_type = "n1-node-96-624" + +<% unless version == 'ga' -%> + cpu_overcommit_type = "ENABLED" +<% end -%> } resource "google_compute_node_group" "nodes" { @@ -4288,7 +4377,7 @@ data "google_compute_image" "my_image" { resource "google_compute_instance" "foobar" { name = "%s" - machine_type = "n1-standard-2" + machine_type = "n1-standard-8" zone = "us-central1-a" boot_disk { @@ -4319,11 +4408,11 @@ resource "google_compute_instance" "foobar" { operator = "IN" values = [google_compute_node_group.nodes.name] } - } -} -data "google_compute_node_types" "central1a" { - zone = "us-central1-a" +<% unless version == 'ga' -%> + min_node_cpus = 6 +<% end -%> + } } resource "google_compute_node_template" "nodetmpl" { @@ -4334,7 +4423,11 @@ resource "google_compute_node_template" "nodetmpl" { tfacc = "test" } - node_type = data.google_compute_node_types.central1a.names[0] + node_type = "n1-node-96-624" + +<% unless version == 'ga' -%> + cpu_overcommit_type = "ENABLED" +<% end -%> } resource "google_compute_node_group" "nodes" { @@ -4518,3 +4611,78 @@ resource "google_compute_instance" "foobar" { } `, instance) } + +func testAccComputeInstance_resourcePolicyCollocate(instance, suffix string) string { + return fmt.Sprintf(` +data "google_compute_image" "my_image" { + family = "debian-9" + project = "debian-cloud" +} + +resource "google_compute_instance" "foobar" { + name = "%s" + machine_type = "c2-standard-4" + zone = "us-east4-b" + can_ip_forward = false + tags = ["foo", "bar"] + + //deletion_protection = false is implicit in this config due to default value + + boot_disk { + initialize_params { + image = data.google_compute_image.my_image.self_link + } + } + + network_interface { + network = "default" + } + + scheduling { + # Instances with resource policies do not support live migration. 
+ on_host_maintenance = "TERMINATE" + automatic_restart = false + } + + resource_policies = [google_compute_resource_policy.foo.self_link] +} + +resource "google_compute_instance" "second" { + name = "%s-2" + machine_type = "c2-standard-4" + zone = "us-east4-b" + can_ip_forward = false + tags = ["foo", "bar"] + + //deletion_protection = false is implicit in this config due to default value + + boot_disk { + initialize_params { + image = data.google_compute_image.my_image.self_link + } + } + + network_interface { + network = "default" + } + + scheduling { + # Instances with resource policies do not support live migration. + on_host_maintenance = "TERMINATE" + automatic_restart = false + } + + resource_policies = [google_compute_resource_policy.foo.self_link] +} + +resource "google_compute_resource_policy" "foo" { + name = "tf-test-policy-%s" + region = "us-east4" + group_placement_policy { + vm_count = 2 + collocation = "COLLOCATED" + } +} + +`, instance, instance, suffix) +} diff --git a/third_party/terraform/tests/resource_compute_network_endpoint_test.go.erb b/third_party/terraform/tests/resource_compute_network_endpoint_test.go.erb index accd6f5bef39..35d1b8e73ecb 100644 --- a/third_party/terraform/tests/resource_compute_network_endpoint_test.go.erb +++ b/third_party/terraform/tests/resource_compute_network_endpoint_test.go.erb @@ -4,16 +4,17 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" ) func TestAccComputeNetworkEndpoint_networkEndpointsBasic(t *testing.T) { + // Multiple fine-grained resources + skipIfVcr(t) t.Parallel() context := map[string]interface{}{ - "random_suffix": acctest.RandString(10), + "random_suffix": randString(t, 10), "default_port": 90, "modified_port": 100, "add1_port": 101, @@ -22,7 +23,7 @@ func TestAccComputeNetworkEndpoint_networkEndpointsBasic(t *testing.T) { negId := fmt.Sprintf("projects/%s/zones/%s/networkEndpointGroups/neg-%s", getTestProjectFromEnv(), getTestZoneFromEnv(), context["random_suffix"]) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -39,7 +40,7 @@ func TestAccComputeNetworkEndpoint_networkEndpointsBasic(t *testing.T) { // Force-recreate old endpoint Config: testAccComputeNetworkEndpoint_networkEndpointsModified(context), Check: resource.ComposeTestCheckFunc( - testAccCheckComputeNetworkEndpointWithPortsDestroyed(negId, "90"), + testAccCheckComputeNetworkEndpointWithPortsDestroyed(t, negId, "90"), ), }, { @@ -70,7 +71,7 @@ func TestAccComputeNetworkEndpoint_networkEndpointsBasic(t *testing.T) { // delete all endpoints Config: testAccComputeNetworkEndpoint_noNetworkEndpoints(context), Check: resource.ComposeTestCheckFunc( - testAccCheckComputeNetworkEndpointWithPortsDestroyed(negId, "100"), + testAccCheckComputeNetworkEndpointWithPortsDestroyed(t, negId, "100"), ), }, }, @@ -157,7 +158,7 @@ resource "google_compute_subnetwork" "default" { } resource "google_compute_instance" "default" { - name = "neg-instance1-%{random_suffix}" + name = "tf-test-neg-%{random_suffix}" machine_type = "n1-standard-1" boot_disk { @@ -183,9 +184,9 @@ data "google_compute_image" "my_image" { // testAccCheckComputeNetworkEndpointDestroyed makes sure the endpoint with // given Terraform resource name and previous information (obtained from Exists) // was destroyed properly. 
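Stepping back to the `sort.Strings(diskNames)` lines added to the `disks_encryption` and `disks_kms` config generators above: Go deliberately randomizes map iteration order, so without the sort the rendered test config could list the disks in a different order on every run, which defeats record/replay testing and produces spurious config diffs. A self-contained illustration (the map contents here are made up):

```go
package main

import (
	"fmt"
	"sort"
)

// orderedKeys: ranging over a map yields keys in an unspecified order, so any
// config rendered from it must sort the keys first to be reproducible.
func orderedKeys(m map[string]string) []string {
	keys := make([]string, 0, len(m))
	for k := range m {
		keys = append(keys, k)
	}
	sort.Strings(keys) // without this line, output order can change run to run
	return keys
}

func main() {
	disks := map[string]string{"disk-c": "key3", "disk-a": "key1", "disk-b": "key2"}
	fmt.Println(orderedKeys(disks)) // always [disk-a disk-b disk-c]
}
```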
-func testAccCheckComputeNetworkEndpointWithPortsDestroyed(negId string, ports ...string) resource.TestCheckFunc { +func testAccCheckComputeNetworkEndpointWithPortsDestroyed(t *testing.T, negId string, ports ...string) resource.TestCheckFunc { return func(s *terraform.State) error { - foundPorts, err := testAccComputeNetworkEndpointsListEndpointPorts(negId) + foundPorts, err := testAccComputeNetworkEndpointsListEndpointPorts(t, negId) if err != nil { return fmt.Errorf("unable to confirm endpoints with ports %+v was destroyed: %v", ports, err) } @@ -199,8 +200,8 @@ func testAccCheckComputeNetworkEndpointWithPortsDestroyed(negId string, ports .. } } -func testAccComputeNetworkEndpointsListEndpointPorts(negId string) (map[string]struct{}, error) { - config := testAccProvider.Meta().(*Config) +func testAccComputeNetworkEndpointsListEndpointPorts(t *testing.T, negId string) (map[string]struct{}, error) { + config := googleProviderConfig(t) url := fmt.Sprintf("https://www.googleapis.com/compute/beta/%s/listNetworkEndpoints", negId) res, err := sendRequest(config, "POST", "", url, nil) diff --git a/third_party/terraform/tests/resource_compute_network_peering_test.go b/third_party/terraform/tests/resource_compute_network_peering_test.go new file mode 100644 index 000000000000..cec5ceed1854 --- /dev/null +++ b/third_party/terraform/tests/resource_compute_network_peering_test.go @@ -0,0 +1,130 @@ +package google + +import ( + "fmt" + "testing" + + "github.com/hashicorp/terraform-plugin-sdk/helper/resource" + "github.com/hashicorp/terraform-plugin-sdk/terraform" +) + +func TestAccComputeNetworkPeering_basic(t *testing.T) { + t.Parallel() + + primaryNetworkName := fmt.Sprintf("network-test-1-%d", randInt(t)) + peeringName := fmt.Sprintf("peering-test-1-%d", randInt(t)) + importId := fmt.Sprintf("%s/%s/%s", getTestProjectFromEnv(), primaryNetworkName, peeringName) + + vcrTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccComputeNetworkPeeringDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccComputeNetworkPeering_basic(primaryNetworkName, peeringName, randString(t, 10)), + }, + { + ResourceName: "google_compute_network_peering.foo", + ImportState: true, + ImportStateVerify: true, + ImportStateId: importId, + }, + }, + }) + +} + +func TestAccComputeNetworkPeering_subnetRoutes(t *testing.T) { + t.Parallel() + + primaryNetworkName := fmt.Sprintf("network-test-1-%d", randInt(t)) + peeringName := fmt.Sprintf("peering-test-%d", randInt(t)) + importId := fmt.Sprintf("%s/%s/%s", getTestProjectFromEnv(), primaryNetworkName, peeringName) + + vcrTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccComputeNetworkPeeringDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccComputeNetworkPeering_subnetRoutes(primaryNetworkName, peeringName, randString(t, 10)), + }, + { + ResourceName: "google_compute_network_peering.bar", + ImportState: true, + ImportStateVerify: true, + ImportStateId: importId, + }, + }, + }) +} + +func testAccComputeNetworkPeeringDestroyProducer(t *testing.T) func(s *terraform.State) error { + return func(s *terraform.State) error { + config := googleProviderConfig(t) + + for _, rs := range s.RootModule().Resources { + if rs.Type != "google_compute_network_peering" { + continue + } + + _, err := config.clientCompute.Networks.Get( + config.Project, rs.Primary.ID).Do() + if err == nil { + return fmt.Errorf("Network 
peering still exists") + } + } + + return nil + } +} + +func testAccComputeNetworkPeering_basic(primaryNetworkName, peeringName, suffix string) string { + return fmt.Sprintf(` +resource "google_compute_network" "network1" { + name = "%s" + auto_create_subnetworks = false +} + +resource "google_compute_network_peering" "foo" { + name = "%s" + network = google_compute_network.network1.self_link + peer_network = google_compute_network.network2.self_link +} + +resource "google_compute_network" "network2" { + name = "network-test-2-%s" + auto_create_subnetworks = false +} + +resource "google_compute_network_peering" "bar" { + network = google_compute_network.network2.self_link + peer_network = google_compute_network.network1.self_link + name = "peering-test-2-%s" + import_custom_routes = true + export_custom_routes = true +} +`, primaryNetworkName, peeringName, suffix, suffix) +} + +func testAccComputeNetworkPeering_subnetRoutes(primaryNetworkName, peeringName, suffix string) string { + return fmt.Sprintf(` +resource "google_compute_network" "network1" { + name = "%s" + auto_create_subnetworks = false +} + +resource "google_compute_network" "network2" { + name = "network-test-2-%s" + auto_create_subnetworks = false +} + +resource "google_compute_network_peering" "bar" { + network = google_compute_network.network1.self_link + peer_network = google_compute_network.network2.self_link + name = "%s" + import_subnet_routes_with_public_ip = true + export_subnet_routes_with_public_ip = false +} +`, primaryNetworkName, suffix, peeringName) +} diff --git a/third_party/terraform/tests/resource_compute_network_peering_test.go.erb b/third_party/terraform/tests/resource_compute_network_peering_test.go.erb deleted file mode 100644 index 8a75a19d5ede..000000000000 --- a/third_party/terraform/tests/resource_compute_network_peering_test.go.erb +++ /dev/null @@ -1,167 +0,0 @@ -<% autogen_exception -%> -package google - -import ( - "fmt" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" - "github.com/hashicorp/terraform-plugin-sdk/helper/resource" - "github.com/hashicorp/terraform-plugin-sdk/terraform" - "strings" - "testing" - "google.golang.org/api/compute/v1" -) - -func TestAccComputeNetworkPeering_basic(t *testing.T) { - t.Parallel() - var peering_beta compute.NetworkPeering - - primaryNetworkName := acctest.RandomWithPrefix("network-test-1") - peeringName := acctest.RandomWithPrefix("peering-test-1") - importId := fmt.Sprintf("%s/%s/%s", getTestProjectFromEnv(), primaryNetworkName, peeringName) - - resource.Test(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t) }, - Providers: testAccProviders, - CheckDestroy: testAccComputeNetworkPeeringDestroy, - Steps: []resource.TestStep{ - { - Config: testAccComputeNetworkPeering_basic(primaryNetworkName, peeringName), - Check: resource.ComposeTestCheckFunc( - // network foo - testAccCheckComputeNetworkPeeringExist("google_compute_network_peering.foo", &peering_beta), - testAccCheckComputeNetworkPeeringAutoCreateRoutes(true, &peering_beta), - testAccCheckComputeNetworkPeeringImportCustomRoutes(false, &peering_beta), - testAccCheckComputeNetworkPeeringExportCustomRoutes(false, &peering_beta), - - // network bar - testAccCheckComputeNetworkPeeringExist("google_compute_network_peering.bar", &peering_beta), - testAccCheckComputeNetworkPeeringAutoCreateRoutes(true, &peering_beta), - testAccCheckComputeNetworkPeeringImportCustomRoutes(true, &peering_beta), - testAccCheckComputeNetworkPeeringExportCustomRoutes(true, &peering_beta), - ), - }, - { - 
ResourceName: "google_compute_network_peering.foo", - ImportState: true, - ImportStateVerify: true, - ImportStateId: importId, - }, - }, - }) - -} - -func testAccComputeNetworkPeeringDestroy(s *terraform.State) error { - config := testAccProvider.Meta().(*Config) - - for _, rs := range s.RootModule().Resources { - if rs.Type != "google_compute_network_peering" { - continue - } - - _, err := config.clientCompute.Networks.Get( - config.Project, rs.Primary.ID).Do() - if err == nil { - return fmt.Errorf("Network peering still exists") - } - } - - return nil -} - -func testAccCheckComputeNetworkPeeringExist(n string, peering *compute.NetworkPeering) resource.TestCheckFunc { - return func(s *terraform.State) error { - rs, ok := s.RootModule().Resources[n] - if !ok { - return fmt.Errorf("Not found: %s", n) - } - - if rs.Primary.ID == "" { - return fmt.Errorf("No ID is set") - } - - config := testAccProvider.Meta().(*Config) - - parts := strings.Split(rs.Primary.ID, "/") - if len(parts) != 2 { - return fmt.Errorf("Invalid network peering identifier: %s", rs.Primary.ID) - } - - networkName, peeringName := parts[0], parts[1] - - network, err := config.clientCompute.Networks.Get(config.Project, networkName).Do() - if err != nil { - return err - } - - found := findPeeringFromNetwork(network, peeringName) - if found == nil { - return fmt.Errorf("Network peering '%s' not found in network '%s'", peeringName, network.Name) - } - *peering = *found - - return nil - } -} - -func testAccCheckComputeNetworkPeeringAutoCreateRoutes(v bool, peering *compute.NetworkPeering) resource.TestCheckFunc { - return func(s *terraform.State) error { - - if peering.ExchangeSubnetRoutes != v { - return fmt.Errorf("should ExchangeSubnetRouts set to %t if AutoCreateRoutes is set to %t", v, v) - } - return nil - } -} - -func testAccCheckComputeNetworkPeeringImportCustomRoutes(v bool, peering *compute.NetworkPeering) resource.TestCheckFunc { - return func(s *terraform.State) error { - if peering.ImportCustomRoutes != v { - return fmt.Errorf("should ImportCustomRoutes set to %t", v) - } - - return nil - } -} - -func testAccCheckComputeNetworkPeeringExportCustomRoutes(v bool, peering *compute.NetworkPeering) resource.TestCheckFunc { - return func(s *terraform.State) error { - if peering.ExportCustomRoutes != v { - return fmt.Errorf("should ExportCustomRoutes set to %t", v) - } - - return nil - } -} - -func testAccComputeNetworkPeering_basic(primaryNetworkName, peeringName string) string { - s := ` -resource "google_compute_network" "network1" { - name = "%s" - auto_create_subnetworks = false -} - -resource "google_compute_network_peering" "foo" { - name = "%s" - network = google_compute_network.network1.self_link - peer_network = google_compute_network.network2.self_link -} - -resource "google_compute_network" "network2" { - name = "network-test-2-%s" - auto_create_subnetworks = false -} - -resource "google_compute_network_peering" "bar" { - network = google_compute_network.network2.self_link - peer_network = google_compute_network.network1.self_link - name = "peering-test-2-%s" -` - - s = s + - `import_custom_routes = true - export_custom_routes = true - ` - s = s + `}` - return fmt.Sprintf(s, primaryNetworkName, peeringName, acctest.RandString(10), acctest.RandString(10)) -} diff --git a/third_party/terraform/tests/resource_compute_network_test.go b/third_party/terraform/tests/resource_compute_network_test.go index adadf03a1b25..a630249445a0 100644 --- a/third_party/terraform/tests/resource_compute_network_test.go +++ 
b/third_party/terraform/tests/resource_compute_network_test.go @@ -4,7 +4,6 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" "google.golang.org/api/compute/v1" @@ -15,18 +14,18 @@ func TestAccComputeNetwork_explicitAutoSubnet(t *testing.T) { var network compute.Network - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeNetworkDestroy, + CheckDestroy: testAccCheckComputeNetworkDestroyProducer(t), Steps: []resource.TestStep{ { - Config: testAccComputeNetwork_basic(), + Config: testAccComputeNetwork_basic(randString(t, 10)), Check: resource.ComposeTestCheckFunc( testAccCheckComputeNetworkExists( - "google_compute_network.bar", &network), + t, "google_compute_network.bar", &network), testAccCheckComputeNetworkIsAutoSubnet( - "google_compute_network.bar", &network), + t, "google_compute_network.bar", &network), ), }, { @@ -43,18 +42,18 @@ func TestAccComputeNetwork_customSubnet(t *testing.T) { var network compute.Network - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeNetworkDestroy, + CheckDestroy: testAccCheckComputeNetworkDestroyProducer(t), Steps: []resource.TestStep{ { - Config: testAccComputeNetwork_custom_subnet(), + Config: testAccComputeNetwork_custom_subnet(randString(t, 10)), Check: resource.ComposeTestCheckFunc( testAccCheckComputeNetworkExists( - "google_compute_network.baz", &network), + t, "google_compute_network.baz", &network), testAccCheckComputeNetworkIsCustomSubnet( - "google_compute_network.baz", &network), + t, "google_compute_network.baz", &network), ), }, { @@ -70,20 +69,20 @@ func TestAccComputeNetwork_routingModeAndUpdate(t *testing.T) { t.Parallel() var network compute.Network - networkName := acctest.RandString(10) + networkName := randString(t, 10) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeNetworkDestroy, + CheckDestroy: testAccCheckComputeNetworkDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeNetwork_routing_mode(networkName, "GLOBAL"), Check: resource.ComposeTestCheckFunc( testAccCheckComputeNetworkExists( - "google_compute_network.acc_network_routing_mode", &network), + t, "google_compute_network.acc_network_routing_mode", &network), testAccCheckComputeNetworkHasRoutingMode( - "google_compute_network.acc_network_routing_mode", &network, "GLOBAL"), + t, "google_compute_network.acc_network_routing_mode", &network, "GLOBAL"), ), }, // Test updating the routing field (only updatable field). 
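The routing-mode hunks that follow show the provider's standard update-test shape: two steps apply configs that differ only in the field under test, so the second step must exercise an in-place update rather than a destroy/recreate. Reassembled from the fragments in this diff, with the step `Check`s elided for brevity; every helper named here appears elsewhere in the patch:

```go
// Sketch of the two-step update pattern used by routingModeAndUpdate.
func TestAccComputeNetwork_routingModeAndUpdate_sketch(t *testing.T) {
	t.Parallel()
	networkName := randString(t, 10)

	vcrTest(t, resource.TestCase{
		PreCheck:     func() { testAccPreCheck(t) },
		Providers:    testAccProviders,
		CheckDestroy: testAccCheckComputeNetworkDestroyProducer(t),
		Steps: []resource.TestStep{
			{
				// Step 1: create with routing_mode = "GLOBAL".
				Config: testAccComputeNetwork_routing_mode(networkName, "GLOBAL"),
			},
			{
				// Step 2: same name, new routing mode; must update in place.
				Config: testAccComputeNetwork_routing_mode(networkName, "REGIONAL"),
			},
		},
	})
}
```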
@@ -91,9 +90,9 @@ func TestAccComputeNetwork_routingModeAndUpdate(t *testing.T) { Config: testAccComputeNetwork_routing_mode(networkName, "REGIONAL"), Check: resource.ComposeTestCheckFunc( testAccCheckComputeNetworkExists( - "google_compute_network.acc_network_routing_mode", &network), + t, "google_compute_network.acc_network_routing_mode", &network), testAccCheckComputeNetworkHasRoutingMode( - "google_compute_network.acc_network_routing_mode", &network, "REGIONAL"), + t, "google_compute_network.acc_network_routing_mode", &network, "REGIONAL"), ), }, }, @@ -107,18 +106,18 @@ func TestAccComputeNetwork_default_routing_mode(t *testing.T) { expectedRoutingMode := "REGIONAL" - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeNetworkDestroy, + CheckDestroy: testAccCheckComputeNetworkDestroyProducer(t), Steps: []resource.TestStep{ { - Config: testAccComputeNetwork_basic(), + Config: testAccComputeNetwork_basic(randString(t, 10)), Check: resource.ComposeTestCheckFunc( testAccCheckComputeNetworkExists( - "google_compute_network.bar", &network), + t, "google_compute_network.bar", &network), testAccCheckComputeNetworkHasRoutingMode( - "google_compute_network.bar", &network, expectedRoutingMode), + t, "google_compute_network.bar", &network, expectedRoutingMode), ), }, }, @@ -128,19 +127,19 @@ func TestAccComputeNetwork_default_routing_mode(t *testing.T) { func TestAccComputeNetwork_networkDeleteDefaultRoute(t *testing.T) { t.Parallel() - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeNetworkDestroy, + CheckDestroy: testAccCheckComputeNetworkDestroyProducer(t), Steps: []resource.TestStep{ { - Config: testAccComputeNetwork_deleteDefaultRoute(), + Config: testAccComputeNetwork_deleteDefaultRoute(randString(t, 10)), }, }, }) } -func testAccCheckComputeNetworkExists(n string, network *compute.Network) resource.TestCheckFunc { +func testAccCheckComputeNetworkExists(t *testing.T, n string, network *compute.Network) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] if !ok { @@ -151,7 +150,7 @@ func testAccCheckComputeNetworkExists(n string, network *compute.Network) resour return fmt.Errorf("No ID is set") } - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) found, err := config.clientCompute.Networks.Get( config.Project, rs.Primary.Attributes["name"]).Do() @@ -169,9 +168,9 @@ func testAccCheckComputeNetworkExists(n string, network *compute.Network) resour } } -func testAccCheckComputeNetworkIsAutoSubnet(n string, network *compute.Network) resource.TestCheckFunc { +func testAccCheckComputeNetworkIsAutoSubnet(t *testing.T, n string, network *compute.Network) resource.TestCheckFunc { return func(s *terraform.State) error { - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) found, err := config.clientCompute.Networks.Get( config.Project, network.Name).Do() @@ -191,9 +190,9 @@ func testAccCheckComputeNetworkIsAutoSubnet(n string, network *compute.Network) } } -func testAccCheckComputeNetworkIsCustomSubnet(n string, network *compute.Network) resource.TestCheckFunc { +func testAccCheckComputeNetworkIsCustomSubnet(t *testing.T, n string, network *compute.Network) resource.TestCheckFunc { return func(s *terraform.State) error { - config := 
testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) found, err := config.clientCompute.Networks.Get( config.Project, network.Name).Do() @@ -213,9 +212,9 @@ func testAccCheckComputeNetworkIsCustomSubnet(n string, network *compute.Network } } -func testAccCheckComputeNetworkHasRoutingMode(n string, network *compute.Network, routingMode string) resource.TestCheckFunc { +func testAccCheckComputeNetworkHasRoutingMode(t *testing.T, n string, network *compute.Network, routingMode string) resource.TestCheckFunc { return func(s *terraform.State) error { - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) rs, ok := s.RootModule().Resources[n] if !ok { @@ -242,22 +241,22 @@ func testAccCheckComputeNetworkHasRoutingMode(n string, network *compute.Network } } -func testAccComputeNetwork_basic() string { +func testAccComputeNetwork_basic(suffix string) string { return fmt.Sprintf(` resource "google_compute_network" "bar" { name = "network-test-%s" auto_create_subnetworks = true } -`, acctest.RandString(10)) +`, suffix) } -func testAccComputeNetwork_custom_subnet() string { +func testAccComputeNetwork_custom_subnet(suffix string) string { return fmt.Sprintf(` resource "google_compute_network" "baz" { name = "network-test-%s" auto_create_subnetworks = false } -`, acctest.RandString(10)) +`, suffix) } func testAccComputeNetwork_routing_mode(network, routingMode string) string { @@ -269,12 +268,12 @@ resource "google_compute_network" "acc_network_routing_mode" { `, network, routingMode) } -func testAccComputeNetwork_deleteDefaultRoute() string { +func testAccComputeNetwork_deleteDefaultRoute(suffix string) string { return fmt.Sprintf(` resource "google_compute_network" "bar" { name = "network-test-%s" delete_default_routes_on_create = true auto_create_subnetworks = false } -`, acctest.RandString(10)) +`, suffix) } diff --git a/third_party/terraform/tests/resource_compute_node_group_test.go.erb b/third_party/terraform/tests/resource_compute_node_group_test.go.erb index 27b1630ebc86..b7b7f69dde11 100644 --- a/third_party/terraform/tests/resource_compute_node_group_test.go.erb +++ b/third_party/terraform/tests/resource_compute_node_group_test.go.erb @@ -8,7 +8,6 @@ import ( "strings" "time" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" ) @@ -16,14 +15,14 @@ import ( func TestAccComputeNodeGroup_updateNodeTemplate(t *testing.T) { t.Parallel() - groupName := acctest.RandomWithPrefix("group-") - tmplPrefix := acctest.RandomWithPrefix("tmpl-") + groupName := fmt.Sprintf("group--%d", randInt(t)) + tmplPrefix := fmt.Sprintf("tmpl--%d", randInt(t)) var timeCreated time.Time - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeNodeGroupDestroy, + CheckDestroy: testAccCheckComputeNodeGroupDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeNodeGroup_updateNodeTemplate(groupName, tmplPrefix, "tmpl1"), @@ -88,20 +87,16 @@ func testAccCheckComputeNodeGroupCreationTimeBefore(prevTimeCreated *time.Time) func testAccComputeNodeGroup_updateNodeTemplate(groupName, tmplPrefix, tmplToUse string) string { return fmt.Sprintf(` -data "google_compute_node_types" "central1a" { - zone = "us-central1-a" -} - resource "google_compute_node_template" "tmpl1" { name = "%s-first" region = "us-central1" - node_type = 
data.google_compute_node_types.central1a.names[0] + node_type = "n1-node-96-624" } resource "google_compute_node_template" "tmpl2" { name = "%s-second" region = "us-central1" - node_type = data.google_compute_node_types.central1a.names[0] + node_type = "n1-node-96-624" } resource "google_compute_node_group" "nodes" { diff --git a/third_party/terraform/tests/resource_compute_per_instance_config_test.go.erb b/third_party/terraform/tests/resource_compute_per_instance_config_test.go.erb new file mode 100644 index 000000000000..0302ca4a2c9d --- /dev/null +++ b/third_party/terraform/tests/resource_compute_per_instance_config_test.go.erb @@ -0,0 +1,333 @@ +<% autogen_exception -%> +package google + +<% unless version == 'ga' -%> +import ( + "fmt" + "testing" + + "github.com/hashicorp/terraform-plugin-sdk/helper/resource" + "github.com/hashicorp/terraform-plugin-sdk/terraform" +) + +func TestAccComputePerInstanceConfig_statefulBasic(t *testing.T) { + t.Parallel() + + context := map[string]interface{}{ + "random_suffix": randString(t, 10), + "config_name" : fmt.Sprintf("instance-%s", randString(t, 10)), + "config_name2" : fmt.Sprintf("instance-%s", randString(t, 10)), + "config_name3" : fmt.Sprintf("instance-%s", randString(t, 10)), + "config_name4" : fmt.Sprintf("instance-%s", randString(t, 10)), + } + igmId := fmt.Sprintf("projects/%s/zones/%s/instanceGroupManagers/igm-%s", + getTestProjectFromEnv(), "us-central1-c", context["random_suffix"]) + + vcrTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + { + // Create one config + Config: testAccComputePerInstanceConfig_statefulBasic(context), + }, + { + ResourceName: "google_compute_per_instance_config.default", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"remove_instance_state_on_destroy"}, + }, + { + // Force-recreate old config + Config: testAccComputePerInstanceConfig_statefulModified(context), + Check: resource.ComposeTestCheckFunc( + testAccCheckComputePerInstanceConfigDestroyed(t, igmId, context["config_name"].(string)), + ), + }, + { + ResourceName: "google_compute_per_instance_config.default", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"remove_instance_state_on_destroy"}, + }, + { + // Add two new configs + Config: testAccComputePerInstanceConfig_statefulAdditional(context), + }, + { + ResourceName: "google_compute_per_instance_config.default", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"remove_instance_state_on_destroy"}, + }, + { + ResourceName: "google_compute_per_instance_config.with_disks", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"most_disruptive_allowed_action", "minimal_action", "remove_instance_state_on_destroy"}, + }, + { + ResourceName: "google_compute_per_instance_config.add2", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"remove_instance_state_on_destroy"}, + }, + { + // delete all configs + Config: testAccComputePerInstanceConfig_igm(context), + Check: resource.ComposeTestCheckFunc( + // Config with remove_instance_state_on_destroy = false won't be destroyed (config4) + testAccCheckComputePerInstanceConfigDestroyed(t, igmId, context["config_name2"].(string)), + testAccCheckComputePerInstanceConfigDestroyed(t, igmId, context["config_name3"].(string)), + ), + }, + }, + }) +} + +func TestAccComputePerInstanceConfig_update(t *testing.T) { + 
t.Parallel() + + context := map[string]interface{}{ + "random_suffix": randString(t, 10), + "config_name" : fmt.Sprintf("instance-%s", randString(t, 10)), + } + + vcrTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + { + // Create one config + Config: testAccComputePerInstanceConfig_statefulBasic(context), + }, + { + ResourceName: "google_compute_per_instance_config.default", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"remove_instance_state_on_destroy"}, + }, + { + // Update an existing config + Config: testAccComputePerInstanceConfig_update(context), + }, + { + ResourceName: "google_compute_per_instance_config.default", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"remove_instance_state_on_destroy"}, + }, + }, + }) +} + +func testAccComputePerInstanceConfig_statefulBasic(context map[string]interface{}) string { + return Nprintf(` +resource "google_compute_per_instance_config" "default" { + zone = google_compute_instance_group_manager.igm.zone + instance_group_manager = google_compute_instance_group_manager.igm.name + name = "%{config_name}" + remove_instance_state_on_destroy = true + preserved_state { + metadata = { + asdf = "asdf" + } + } +} +`, context) + testAccComputePerInstanceConfig_igm(context) +} + +func testAccComputePerInstanceConfig_update(context map[string]interface{}) string { + return Nprintf(` +resource "google_compute_per_instance_config" "default" { + zone = google_compute_instance_group_manager.igm.zone + instance_group_manager = google_compute_instance_group_manager.igm.name + name = "%{config_name}" + remove_instance_state_on_destroy = true + preserved_state { + metadata = { + asdf = "asdf" + update = "12345" + } + } +} +`, context) + testAccComputePerInstanceConfig_igm(context) +} + +func testAccComputePerInstanceConfig_statefulModified(context map[string]interface{}) string { + return Nprintf(` +resource "google_compute_per_instance_config" "default" { + zone = google_compute_instance_group_manager.igm.zone + instance_group_manager = google_compute_instance_group_manager.igm.name + name = "%{config_name2}" + remove_instance_state_on_destroy = true + preserved_state { + metadata = { + asdf = "asdf" + } + } +} +`, context) + testAccComputePerInstanceConfig_igm(context) +} + +func testAccComputePerInstanceConfig_statefulAdditional(context map[string]interface{}) string { + return Nprintf(` +resource "google_compute_per_instance_config" "default" { + zone = google_compute_instance_group_manager.igm.zone + instance_group_manager = google_compute_instance_group_manager.igm.name + name = "%{config_name2}" + remove_instance_state_on_destroy = true + preserved_state { + metadata = { + asdf = "asdf" + } + } +} + +resource "google_compute_per_instance_config" "with_disks" { + zone = google_compute_instance_group_manager.igm.zone + instance_group_manager = google_compute_instance_group_manager.igm.name + name = "%{config_name3}" + most_disruptive_allowed_action = "REFRESH" + minimal_action = "REFRESH" + remove_instance_state_on_destroy = true + preserved_state { + metadata = { + meta = "123" + } + + disk { + device_name = "my-stateful-disk1" + source = google_compute_disk.disk.id + } + + disk { + device_name = "my-stateful-disk2" + source = google_compute_disk.disk1.id + } + + disk { + device_name = "my-stateful-disk3" + source = google_compute_disk.disk2.id + } + } +} + +resource "google_compute_per_instance_config" 
"add2" { + zone = google_compute_instance_group_manager.igm.zone + instance_group_manager = google_compute_instance_group_manager.igm.name + name = "%{config_name4}" + preserved_state { + metadata = { + foo = "abc" + } + } +} + +resource "google_compute_disk" "disk" { + name = "test-disk-%{random_suffix}" + type = "pd-ssd" + zone = google_compute_instance_group_manager.igm.zone + image = "debian-8-jessie-v20170523" + physical_block_size_bytes = 4096 +} + +resource "google_compute_disk" "disk1" { + name = "test-disk2-%{random_suffix}" + type = "pd-ssd" + zone = google_compute_instance_group_manager.igm.zone + image = "debian-cloud/debian-9" + physical_block_size_bytes = 4096 +} + +resource "google_compute_disk" "disk2" { + name = "test-disk3-%{random_suffix}" + type = "pd-ssd" + zone = google_compute_instance_group_manager.igm.zone + image = "https://www.googleapis.com/compute/v1/projects/gce-uefi-images/global/images/centos-7-v20190729" + physical_block_size_bytes = 4096 +} +`, context) + testAccComputePerInstanceConfig_igm(context) +} + +func testAccComputePerInstanceConfig_igm(context map[string]interface{}) string { + return Nprintf(` +data "google_compute_image" "my_image" { + family = "debian-9" + project = "debian-cloud" +} + +resource "google_compute_instance_template" "igm-basic" { + name = "igm-temp-%{random_suffix}" + machine_type = "n1-standard-1" + can_ip_forward = false + tags = ["foo", "bar"] + + disk { + source_image = data.google_compute_image.my_image.self_link + auto_delete = true + boot = true + device_name = "my-stateful-disk" + } + + network_interface { + network = "default" + } + + service_account { + scopes = ["userinfo-email", "compute-ro", "storage-ro"] + } +} + +resource "google_compute_instance_group_manager" "igm" { + description = "Terraform test instance group manager" + name = "igm-%{random_suffix}" + + version { + name = "prod" + instance_template = google_compute_instance_template.igm-basic.self_link + } + + base_instance_name = "igm-no-tp" + zone = "us-central1-c" +} +`, context) +} + +// Checks that the per instance config with the given name was destroyed +func testAccCheckComputePerInstanceConfigDestroyed(t *testing.T, igmId, configName string) resource.TestCheckFunc { + return func(s *terraform.State) error { + foundNames, err := testAccComputePerInstanceConfigListNames(t, igmId) + if err != nil { + return fmt.Errorf("unable to confirm config with name %s was destroyed: %v", configName, err) + } + if _, ok := foundNames[configName]; ok { + return fmt.Errorf("config with name %s still exists", configName) + } + + return nil + } +} + +func testAccComputePerInstanceConfigListNames(t *testing.T, igmId string) (map[string]struct{}, error) { + config := googleProviderConfig(t) + + url := fmt.Sprintf("https://www.googleapis.com/compute/beta/%s/listPerInstanceConfigs", igmId) + res, err := sendRequest(config, "POST", "", url, nil) + if err != nil { + return nil, err + } + + v, ok := res["items"] + if !ok || v == nil { + return nil, nil + } + items := v.([]interface{}) + instanceConfigs := make(map[string]struct{}) + for _, item := range items { + perInstanceConfig := item.(map[string]interface{}) + instanceConfigs[fmt.Sprintf("%v", perInstanceConfig["name"])] = struct{}{} + } + return instanceConfigs, nil +} +<% end -%> diff --git a/third_party/terraform/tests/resource_compute_project_default_network_tier_test.go b/third_party/terraform/tests/resource_compute_project_default_network_tier_test.go index 234968d38f1e..301ff4c9a820 100644 --- 
diff --git a/third_party/terraform/tests/resource_compute_project_default_network_tier_test.go b/third_party/terraform/tests/resource_compute_project_default_network_tier_test.go
index 234968d38f1e..301ff4c9a820 100644
--- a/third_party/terraform/tests/resource_compute_project_default_network_tier_test.go
+++ b/third_party/terraform/tests/resource_compute_project_default_network_tier_test.go
@@ -4,7 +4,6 @@ import (
   "fmt"
   "testing"
 
-  "github.com/hashicorp/terraform-plugin-sdk/helper/acctest"
   "github.com/hashicorp/terraform-plugin-sdk/helper/resource"
 )
 
@@ -13,9 +12,9 @@ func TestAccComputeProjectDefaultNetworkTier_basic(t *testing.T) {
   org := getTestOrgFromEnv(t)
   billingId := getTestBillingAccountFromEnv(t)
 
-  projectID := acctest.RandomWithPrefix("tf-test")
+  projectID := fmt.Sprintf("tf-test-%d", randInt(t))
 
-  resource.Test(t, resource.TestCase{
+  vcrTest(t, resource.TestCase{
     PreCheck:  func() { testAccPreCheck(t) },
     Providers: testAccProviders,
     Steps: []resource.TestStep{
@@ -36,9 +35,9 @@ func TestAccComputeProjectDefaultNetworkTier_modify(t *testing.T) {
   org := getTestOrgFromEnv(t)
   billingId := getTestBillingAccountFromEnv(t)
 
-  projectID := acctest.RandomWithPrefix("tf-test")
+  projectID := fmt.Sprintf("tf-test-%d", randInt(t))
 
-  resource.Test(t, resource.TestCase{
+  vcrTest(t, resource.TestCase{
     PreCheck:  func() { testAccPreCheck(t) },
     Providers: testAccProviders,
     Steps: []resource.TestStep{
diff --git a/third_party/terraform/tests/resource_compute_project_metadata_item_test.go b/third_party/terraform/tests/resource_compute_project_metadata_item_test.go
index 63f6d78bf6b4..213b014b0043 100644
--- a/third_party/terraform/tests/resource_compute_project_metadata_item_test.go
+++ b/third_party/terraform/tests/resource_compute_project_metadata_item_test.go
@@ -5,7 +5,6 @@ import (
   "regexp"
   "testing"
 
-  "github.com/hashicorp/terraform-plugin-sdk/helper/acctest"
   "github.com/hashicorp/terraform-plugin-sdk/helper/resource"
   "github.com/hashicorp/terraform-plugin-sdk/terraform"
 )
@@ -14,12 +13,12 @@ func TestAccComputeProjectMetadataItem_basic(t *testing.T) {
   t.Parallel()
 
   // Key must be unique to avoid concurrent tests interfering with each other
-  key := "myKey" + acctest.RandString(10)
+  key := "myKey" + randString(t, 10)
 
-  resource.Test(t, resource.TestCase{
+  vcrTest(t, resource.TestCase{
     PreCheck:  func() { testAccPreCheck(t) },
     Providers: testAccProviders,
-    CheckDestroy: testAccCheckProjectMetadataItemDestroy,
+    CheckDestroy: testAccCheckProjectMetadataItemDestroyProducer(t),
     Steps: []resource.TestStep{
       {
         Config: testAccProjectMetadataItem_basic("foobar", key, "myValue"),
@@ -34,18 +33,20 @@ func TestAccComputeProjectMetadataItem_basic(t *testing.T) {
 }
 
 func TestAccComputeProjectMetadataItem_basicMultiple(t *testing.T) {
+  // Multiple fine grained items applied in same config
+  skipIfVcr(t)
   t.Parallel()
 
   // Generate a config of two config keys
-  key1 := "myKey" + acctest.RandString(10)
-  key2 := "myKey" + acctest.RandString(10)
+  key1 := "myKey" + randString(t, 10)
+  key2 := "myKey" + randString(t, 10)
   config := testAccProjectMetadataItem_basic("foobar", key1, "myValue") +
     testAccProjectMetadataItem_basic("foobar2", key2, "myOtherValue")
 
-  resource.Test(t, resource.TestCase{
+  vcrTest(t, resource.TestCase{
     PreCheck:  func() { testAccPreCheck(t) },
     Providers: testAccProviders,
-    CheckDestroy: testAccCheckProjectMetadataItemDestroy,
+    CheckDestroy: testAccCheckProjectMetadataItemDestroyProducer(t),
     Steps: []resource.TestStep{
       {
         Config: config,
@@ -68,12 +69,12 @@ func TestAccComputeProjectMetadataItem_basicWithEmptyVal(t *testing.T) {
   t.Parallel()
 
   // Key must be unique to avoid concurrent tests interfering with each other
-  key := "myKey" + acctest.RandString(10)
+  key := "myKey" + randString(t, 10)
 
-  resource.Test(t, resource.TestCase{
+  vcrTest(t, resource.TestCase{
     PreCheck:  func() { testAccPreCheck(t) },
     Providers: testAccProviders,
-    CheckDestroy: testAccCheckProjectMetadataItemDestroy,
+    CheckDestroy: testAccCheckProjectMetadataItemDestroyProducer(t),
     Steps: []resource.TestStep{
       {
         Config: testAccProjectMetadataItem_basic("foobar", key, ""),
@@ -91,12 +92,12 @@ func TestAccComputeProjectMetadataItem_basicUpdate(t *testing.T) {
   t.Parallel()
 
   // Key must be unique to avoid concurrent tests interfering with each other
-  key := "myKey" + acctest.RandString(10)
+  key := "myKey" + randString(t, 10)
 
-  resource.Test(t, resource.TestCase{
+  vcrTest(t, resource.TestCase{
     PreCheck:  func() { testAccPreCheck(t) },
     Providers: testAccProviders,
-    CheckDestroy: testAccCheckProjectMetadataItemDestroy,
+    CheckDestroy: testAccCheckProjectMetadataItemDestroyProducer(t),
     Steps: []resource.TestStep{
       {
         Config: testAccProjectMetadataItem_basic("foobar", key, "myValue"),
@@ -122,13 +123,13 @@ func TestAccComputeProjectMetadataItem_exists(t *testing.T) {
   t.Parallel()
 
   // Key must be unique to avoid concurrent tests interfering with each other
-  key := "myKey" + acctest.RandString(10)
+  key := "myKey" + randString(t, 10)
 
   originalConfig := testAccProjectMetadataItem_basic("foobar", key, "myValue")
 
-  resource.Test(t, resource.TestCase{
+  vcrTest(t, resource.TestCase{
     PreCheck:  func() { testAccPreCheck(t) },
     Providers: testAccProviders,
-    CheckDestroy: testAccCheckProjectMetadataItemDestroy,
+    CheckDestroy: testAccCheckProjectMetadataItemDestroyProducer(t),
     Steps: []resource.TestStep{
       {
         Config: originalConfig,
@@ -147,28 +148,30 @@ func TestAccComputeProjectMetadataItem_exists(t *testing.T) {
   })
 }
 
-func testAccCheckProjectMetadataItemDestroy(s *terraform.State) error {
-  config := testAccProvider.Meta().(*Config)
+func testAccCheckProjectMetadataItemDestroyProducer(t *testing.T) func(s *terraform.State) error {
+  return func(s *terraform.State) error {
+    config := googleProviderConfig(t)
 
-  project, err := config.clientCompute.Projects.Get(config.Project).Do()
-  if err != nil {
-    return err
-  }
+    project, err := config.clientCompute.Projects.Get(config.Project).Do()
+    if err != nil {
+      return err
+    }
 
-  metadata := flattenMetadata(project.CommonInstanceMetadata)
+    metadata := flattenMetadata(project.CommonInstanceMetadata)
 
-  for _, rs := range s.RootModule().Resources {
-    if rs.Type != "google_compute_project_metadata_item" {
-      continue
-    }
+    for _, rs := range s.RootModule().Resources {
+      if rs.Type != "google_compute_project_metadata_item" {
+        continue
+      }
 
-    _, ok := metadata[rs.Primary.ID]
-    if ok {
-      return fmt.Errorf("Metadata key/value '%s': '%s' still exist", rs.Primary.Attributes["key"], rs.Primary.Attributes["value"])
+      _, ok := metadata[rs.Primary.ID]
+      if ok {
+        return fmt.Errorf("Metadata key/value '%s': '%s' still exist", rs.Primary.Attributes["key"], rs.Primary.Attributes["value"])
+      }
     }
-  }
 
-  return nil
+    return nil
+  }
 }
 
 func testAccProjectMetadataItem_basic(resourceName, key, val string) string {
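`skipIfVcr` guards tests whose recorded HTTP traffic cannot replay deterministically (here, two fine-grained items applied in one config; later, randomness inside an instance template). A sketch of what such a guard might look like, assuming VCR replay is signalled via an environment variable — the variable name is an assumption, not the provider's actual contract:

```go
package google

import (
	"os"
	"testing"
)

// skipIfVcr skips tests that cannot run reproducibly against recorded
// cassettes. Assumes VCR mode is signalled via the VCR_MODE environment
// variable; the real helper may key off different variables.
func skipIfVcr(t *testing.T) {
	if os.Getenv("VCR_MODE") != "" {
		t.Skip("This test is not VCR-compatible")
	}
}
```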
"github.com/hashicorp/terraform-plugin-sdk/terraform" ) @@ -15,12 +14,12 @@ func TestAccComputeProjectMetadata_basic(t *testing.T) { org := getTestOrgFromEnv(t) billingId := getTestBillingAccountFromEnv(t) - projectID := acctest.RandomWithPrefix("tf-test") + projectID := fmt.Sprintf("tf-test-%d", randInt(t)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeProjectMetadataDestroy, + CheckDestroy: testAccCheckComputeProjectMetadataDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeProject_basic0_metadata(projectID, pname, org, billingId), @@ -40,12 +39,12 @@ func TestAccComputeProjectMetadata_modify_1(t *testing.T) { org := getTestOrgFromEnv(t) billingId := getTestBillingAccountFromEnv(t) - projectID := acctest.RandomWithPrefix("tf-test") + projectID := fmt.Sprintf("tf-test-%d", randInt(t)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeProjectMetadataDestroy, + CheckDestroy: testAccCheckComputeProjectMetadataDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeProject_modify0_metadata(projectID, pname, org, billingId), @@ -74,12 +73,12 @@ func TestAccComputeProjectMetadata_modify_2(t *testing.T) { org := getTestOrgFromEnv(t) billingId := getTestBillingAccountFromEnv(t) - projectID := acctest.RandomWithPrefix("tf-test") + projectID := fmt.Sprintf("tf-test-%d", randInt(t)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeProjectMetadataDestroy, + CheckDestroy: testAccCheckComputeProjectMetadataDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeProject_basic0_metadata(projectID, pname, org, billingId), @@ -102,21 +101,23 @@ func TestAccComputeProjectMetadata_modify_2(t *testing.T) { }) } -func testAccCheckComputeProjectMetadataDestroy(s *terraform.State) error { - config := testAccProvider.Meta().(*Config) +func testAccCheckComputeProjectMetadataDestroyProducer(t *testing.T) func(s *terraform.State) error { + return func(s *terraform.State) error { + config := googleProviderConfig(t) - for _, rs := range s.RootModule().Resources { - if rs.Type != "google_compute_project_metadata" { - continue - } + for _, rs := range s.RootModule().Resources { + if rs.Type != "google_compute_project_metadata" { + continue + } - project, err := config.clientCompute.Projects.Get(rs.Primary.ID).Do() - if err == nil && len(project.CommonInstanceMetadata.Items) > 0 { - return fmt.Errorf("Error, metadata items still exist in %s", rs.Primary.ID) + project, err := config.clientCompute.Projects.Get(rs.Primary.ID).Do() + if err == nil && len(project.CommonInstanceMetadata.Items) > 0 { + return fmt.Errorf("Error, metadata items still exist in %s", rs.Primary.ID) + } } - } - return nil + return nil + } } func testAccComputeProject_basic0_metadata(projectID, name, org, billing string) string { diff --git a/third_party/terraform/tests/resource_compute_region_autoscaler_test.go.erb b/third_party/terraform/tests/resource_compute_region_autoscaler_test.go.erb index 8899709bc1c4..fa54faf391ec 100644 --- a/third_party/terraform/tests/resource_compute_region_autoscaler_test.go.erb +++ b/third_party/terraform/tests/resource_compute_region_autoscaler_test.go.erb @@ -6,22 +6,21 @@ import ( "strings" 
"testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" "google.golang.org/api/compute/v1" ) func TestAccComputeRegionAutoscaler_update(t *testing.T) { - var it_name = fmt.Sprintf("region-autoscaler-test-%s", acctest.RandString(10)) - var tp_name = fmt.Sprintf("region-autoscaler-test-%s", acctest.RandString(10)) - var igm_name = fmt.Sprintf("region-autoscaler-test-%s", acctest.RandString(10)) - var autoscaler_name = fmt.Sprintf("region-autoscaler-test-%s", acctest.RandString(10)) + var it_name = fmt.Sprintf("region-autoscaler-test-%s", randString(t, 10)) + var tp_name = fmt.Sprintf("region-autoscaler-test-%s", randString(t, 10)) + var igm_name = fmt.Sprintf("region-autoscaler-test-%s", randString(t, 10)) + var autoscaler_name = fmt.Sprintf("region-autoscaler-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeRegionAutoscalerDestroy, + CheckDestroy: testAccCheckComputeRegionAutoscalerDestroyProducer(t), Steps: []resource.TestStep{ resource.TestStep{ Config: testAccComputeRegionAutoscaler_basic(it_name, tp_name, igm_name, autoscaler_name), @@ -43,7 +42,34 @@ func TestAccComputeRegionAutoscaler_update(t *testing.T) { }) } -func testAccComputeRegionAutoscaler_basic(it_name, tp_name, igm_name, autoscaler_name string) string { +<% unless version == 'ga' -%> +func TestAccComputeRegionAutoscaler_scaleDownControl(t *testing.T) { + t.Parallel() + + var it_name = fmt.Sprintf("region-autoscaler-test-%s", randString(t, 10)) + var tp_name = fmt.Sprintf("region-autoscaler-test-%s", randString(t, 10)) + var igm_name = fmt.Sprintf("region-autoscaler-test-%s", randString(t, 10)) + var autoscaler_name = fmt.Sprintf("region-autoscaler-test-%s", randString(t, 10)) + + vcrTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckComputeRegionAutoscalerDestroyProducer(t), + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccComputeRegionAutoscaler_scaleDownControl(it_name, tp_name, igm_name, autoscaler_name), + }, + resource.TestStep{ + ResourceName: "google_compute_region_autoscaler.foobar", + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} +<% end -%> + +func testAccComputeRegionAutoscaler_scaffolding(it_name, tp_name, igm_name string) string { return fmt.Sprintf(` data "google_compute_image" "my_image" { family = "debian-9" @@ -89,6 +115,11 @@ resource "google_compute_region_instance_group_manager" "foobar" { region = "us-central1" } +`, it_name, tp_name, igm_name) +} + +func testAccComputeRegionAutoscaler_basic(it_name, tp_name, igm_name, autoscaler_name string) string { + return testAccComputeRegionAutoscaler_scaffolding(it_name, tp_name, igm_name) + fmt.Sprintf(` resource "google_compute_region_autoscaler" "foobar" { description = "Resource created for Terraform acceptance testing" name = "%s" @@ -103,55 +134,31 @@ resource "google_compute_region_autoscaler" "foobar" { } } } -`, it_name, tp_name, igm_name, autoscaler_name) +`, autoscaler_name) } func testAccComputeRegionAutoscaler_update(it_name, tp_name, igm_name, autoscaler_name string) string { - return fmt.Sprintf(` -data "google_compute_image" "my_image" { - family = "debian-9" - project = "debian-cloud" -} - -resource "google_compute_instance_template" "foobar" { - name = 
"%s" - machine_type = "n1-standard-1" - can_ip_forward = false - tags = ["foo", "bar"] - - disk { - source_image = data.google_compute_image.my_image.self_link - auto_delete = true - boot = true - } - - network_interface { - network = "default" - } - - service_account { - scopes = ["userinfo-email", "compute-ro", "storage-ro"] - } -} - -resource "google_compute_target_pool" "foobar" { - description = "Resource created for Terraform acceptance testing" - name = "%s" - session_affinity = "CLIENT_IP_PROTO" -} - -resource "google_compute_region_instance_group_manager" "foobar" { - description = "Terraform test instance group manager" + return testAccComputeRegionAutoscaler_scaffolding(it_name, tp_name, igm_name) + fmt.Sprintf(` +resource "google_compute_region_autoscaler" "foobar" { + description = "Resource created for Terraform acceptance testing" name = "%s" - version { - instance_template = google_compute_instance_template.foobar.self_link - name = "primary" + region = "us-central1" + target = google_compute_region_instance_group_manager.foobar.self_link + autoscaling_policy { + max_replicas = 10 + min_replicas = 1 + cooldown_period = 60 + cpu_utilization { + target = 0.5 + } } - target_pools = [google_compute_target_pool.foobar.self_link] - base_instance_name = "foobar" - region = "us-central1" +} +`, autoscaler_name) } +<% unless version == 'ga' -%> +func testAccComputeRegionAutoscaler_scaleDownControl(it_name, tp_name, igm_name, autoscaler_name string) string { + return testAccComputeRegionAutoscaler_scaffolding(it_name, tp_name, igm_name) + fmt.Sprintf(` resource "google_compute_region_autoscaler" "foobar" { description = "Resource created for Terraform acceptance testing" name = "%s" @@ -164,7 +171,14 @@ resource "google_compute_region_autoscaler" "foobar" { cpu_utilization { target = 0.5 } + scale_down_control { + max_scaled_down_replicas { + percent = 80 + } + time_window_sec = 300 + } } } -`, it_name, tp_name, igm_name, autoscaler_name) +`, autoscaler_name) } +<% end -%> diff --git a/third_party/terraform/tests/resource_compute_region_backend_service_test.go.erb b/third_party/terraform/tests/resource_compute_region_backend_service_test.go.erb index 08da2f2b5ced..16d64aa196b3 100644 --- a/third_party/terraform/tests/resource_compute_region_backend_service_test.go.erb +++ b/third_party/terraform/tests/resource_compute_region_backend_service_test.go.erb @@ -5,7 +5,6 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" "google.golang.org/api/compute/v1" @@ -14,14 +13,14 @@ import ( func TestAccComputeRegionBackendService_basic(t *testing.T) { t.Parallel() - serviceName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) - checkName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) - extraCheckName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + serviceName := fmt.Sprintf("tf-test-%s", randString(t, 10)) + checkName := fmt.Sprintf("tf-test-%s", randString(t, 10)) + extraCheckName := fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeRegionBackendServiceDestroy, + CheckDestroy: testAccCheckComputeRegionBackendServiceDestroyProducer(t), Steps: []resource.TestStep{ resource.TestStep{ Config: testAccComputeRegionBackendService_basic(serviceName, checkName), @@ -47,14 
diff --git a/third_party/terraform/tests/resource_compute_region_backend_service_test.go.erb b/third_party/terraform/tests/resource_compute_region_backend_service_test.go.erb
index 08da2f2b5ced..16d64aa196b3 100644
--- a/third_party/terraform/tests/resource_compute_region_backend_service_test.go.erb
+++ b/third_party/terraform/tests/resource_compute_region_backend_service_test.go.erb
@@ -5,7 +5,6 @@ import (
   "fmt"
   "testing"
 
-  "github.com/hashicorp/terraform-plugin-sdk/helper/acctest"
   "github.com/hashicorp/terraform-plugin-sdk/helper/resource"
   "github.com/hashicorp/terraform-plugin-sdk/terraform"
 
   "google.golang.org/api/compute/v1"
@@ -14,14 +13,14 @@ import (
 func TestAccComputeRegionBackendService_basic(t *testing.T) {
   t.Parallel()
 
-  serviceName := fmt.Sprintf("tf-test-%s", acctest.RandString(10))
-  checkName := fmt.Sprintf("tf-test-%s", acctest.RandString(10))
-  extraCheckName := fmt.Sprintf("tf-test-%s", acctest.RandString(10))
+  serviceName := fmt.Sprintf("tf-test-%s", randString(t, 10))
+  checkName := fmt.Sprintf("tf-test-%s", randString(t, 10))
+  extraCheckName := fmt.Sprintf("tf-test-%s", randString(t, 10))
 
-  resource.Test(t, resource.TestCase{
+  vcrTest(t, resource.TestCase{
     PreCheck:  func() { testAccPreCheck(t) },
     Providers: testAccProviders,
-    CheckDestroy: testAccCheckComputeRegionBackendServiceDestroy,
+    CheckDestroy: testAccCheckComputeRegionBackendServiceDestroyProducer(t),
     Steps: []resource.TestStep{
       resource.TestStep{
         Config: testAccComputeRegionBackendService_basic(serviceName, checkName),
@@ -47,14 +46,14 @@ func TestAccComputeRegionBackendService_basic(t *testing.T) {
 
 func TestAccComputeRegionBackendService_withBackendInternal(t *testing.T) {
   t.Parallel()
 
-  serviceName := fmt.Sprintf("tf-test-%s", acctest.RandString(10))
-  igName := fmt.Sprintf("tf-test-%s", acctest.RandString(10))
-  itName := fmt.Sprintf("tf-test-%s", acctest.RandString(10))
-  checkName := fmt.Sprintf("tf-test-%s", acctest.RandString(10))
-  resource.Test(t, resource.TestCase{
+  serviceName := fmt.Sprintf("tf-test-%s", randString(t, 10))
+  igName := fmt.Sprintf("tf-test-%s", randString(t, 10))
+  itName := fmt.Sprintf("tf-test-%s", randString(t, 10))
+  checkName := fmt.Sprintf("tf-test-%s", randString(t, 10))
+  vcrTest(t, resource.TestCase{
     PreCheck:  func() { testAccPreCheck(t) },
     Providers: testAccProviders,
-    CheckDestroy: testAccCheckComputeRegionBackendServiceDestroy,
+    CheckDestroy: testAccCheckComputeRegionBackendServiceDestroyProducer(t),
     Steps: []resource.TestStep{
       {
         Config: testAccComputeRegionBackendService_withInvalidInternalBackend(
@@ -86,13 +85,13 @@ func TestAccComputeRegionBackendService_withBackendInternal(t *testing.T) {
 func TestAccComputeRegionBackendService_withBackendInternalManaged(t *testing.T) {
   t.Parallel()
 
-  serviceName := fmt.Sprintf("tf-test-%s", acctest.RandString(10))
-  igmName := fmt.Sprintf("tf-test-%s", acctest.RandString(10))
-  hcName := fmt.Sprintf("tf-test-%s", acctest.RandString(10))
-  resource.Test(t, resource.TestCase{
+  serviceName := fmt.Sprintf("tf-test-%s", randString(t, 10))
+  igmName := fmt.Sprintf("tf-test-%s", randString(t, 10))
+  hcName := fmt.Sprintf("tf-test-%s", randString(t, 10))
+  vcrTest(t, resource.TestCase{
     PreCheck:  func() { testAccPreCheck(t) },
     Providers: testAccProviders,
-    CheckDestroy: testAccCheckComputeRegionBackendServiceDestroy,
+    CheckDestroy: testAccCheckComputeRegionBackendServiceDestroyProducer(t),
     Steps: []resource.TestStep{
       {
         Config: testAccComputeRegionBackendService_internalManagedInvalidBackend(serviceName, igmName, hcName),
@@ -117,16 +116,16 @@ func TestAccComputeRegionBackendService_withBackendInternalManaged(t *testing.T)
 func TestAccComputeRegionBackendService_withBackendMultiNic(t *testing.T) {
   t.Parallel()
 
-  serviceName := fmt.Sprintf("tf-test-%s", acctest.RandString(10))
-  net1Name := fmt.Sprintf("tf-test-%s", acctest.RandString(10))
-  net2Name := fmt.Sprintf("tf-test-%s", acctest.RandString(10))
-  igName := fmt.Sprintf("tf-test-%s", acctest.RandString(10))
-  itName := fmt.Sprintf("tf-test-%s", acctest.RandString(10))
-  checkName := fmt.Sprintf("tf-test-%s", acctest.RandString(10))
-  resource.Test(t, resource.TestCase{
+  serviceName := fmt.Sprintf("tf-test-%s", randString(t, 10))
+  net1Name := fmt.Sprintf("tf-test-%s", randString(t, 10))
+  net2Name := fmt.Sprintf("tf-test-%s", randString(t, 10))
+  igName := fmt.Sprintf("tf-test-%s", randString(t, 10))
+  itName := fmt.Sprintf("tf-test-%s", randString(t, 10))
+  checkName := fmt.Sprintf("tf-test-%s", randString(t, 10))
+  vcrTest(t, resource.TestCase{
     PreCheck:  func() { testAccPreCheck(t) },
     Providers: testAccProviders,
-    CheckDestroy: testAccCheckComputeRegionBackendServiceDestroy,
+    CheckDestroy: testAccCheckComputeRegionBackendServiceDestroyProducer(t),
     Steps: []resource.TestStep{
       resource.TestStep{
         Config: testAccComputeRegionBackendService_withBackendMultiNic(
@@ -144,13 +143,13 @@ func TestAccComputeRegionBackendService_withBackendMultiNic(t *testing.T) {
 func TestAccComputeRegionBackendService_withConnectionDrainingAndUpdate(t *testing.T) {
   t.Parallel()
 
-  serviceName := fmt.Sprintf("tf-test-%s", acctest.RandString(10))
-  checkName := fmt.Sprintf("tf-test-%s", acctest.RandString(10))
+  serviceName := fmt.Sprintf("tf-test-%s", randString(t, 10))
+  checkName := fmt.Sprintf("tf-test-%s", randString(t, 10))
 
-  resource.Test(t, resource.TestCase{
+  vcrTest(t, resource.TestCase{
     PreCheck:  func() { testAccPreCheck(t) },
     Providers: testAccProviders,
-    CheckDestroy: testAccCheckComputeRegionBackendServiceDestroy,
+    CheckDestroy: testAccCheckComputeRegionBackendServiceDestroyProducer(t),
     Steps: []resource.TestStep{
       resource.TestStep{
         Config: testAccComputeRegionBackendService_withConnectionDraining(serviceName, checkName, 10),
@@ -172,17 +171,16 @@ func TestAccComputeRegionBackendService_withConnectionDrainingAndUpdate(t *testi
   })
 }
 
-<% unless version == 'ga' -%>
 func TestAccComputeRegionBackendService_ilbUpdateBasic(t *testing.T) {
   t.Parallel()
 
-  backendName := fmt.Sprintf("foo-%s", acctest.RandString(10))
-  checkName := fmt.Sprintf("bar-%s", acctest.RandString(10))
+  backendName := fmt.Sprintf("foo-%s", randString(t, 10))
+  checkName := fmt.Sprintf("bar-%s", randString(t, 10))
 
-  resource.Test(t, resource.TestCase{
+  vcrTest(t, resource.TestCase{
     PreCheck:  func() { testAccPreCheck(t) },
     Providers: testAccProviders,
-    CheckDestroy: testAccCheckComputeRegionBackendServiceDestroy,
+    CheckDestroy: testAccCheckComputeRegionBackendServiceDestroyProducer(t),
     Steps: []resource.TestStep{
       {
         Config: testAccComputeRegionBackendService_ilbBasic(backendName, checkName),
@@ -203,23 +201,22 @@ func TestAccComputeRegionBackendService_ilbUpdateBasic(t *testing.T) {
     },
   })
 }
-<% end -%>
 
 <% unless version == 'ga' -%>
 func TestAccComputeRegionBackendService_ilbUpdateFull(t *testing.T) {
   t.Parallel()
 
-  randString := acctest.RandString(10)
+  randString := randString(t, 10)
 
   backendName := fmt.Sprintf("foo-%s", randString)
   checkName := fmt.Sprintf("bar-%s", randString)
   igName := fmt.Sprintf("baz-%s", randString)
-  instanceName := fmt.Sprintf("boz-%s", randString)
+  instanceName := fmt.Sprintf("tf-test-%s", randString)
 
-  resource.Test(t, resource.TestCase{
+  vcrTest(t, resource.TestCase{
     PreCheck:  func() { testAccPreCheck(t) },
     Providers: testAccProviders,
-    CheckDestroy: testAccCheckComputeRegionBackendServiceDestroy,
+    CheckDestroy: testAccCheckComputeRegionBackendServiceDestroyProducer(t),
     Steps: []resource.TestStep{
       {
         Config: testAccComputeRegionBackendService_ilbFull(backendName, checkName),
@@ -242,12 +239,12 @@ func TestAccComputeRegionBackendService_ilbUpdateFull(t *testing.T) {
 }
 <% end -%>
 
-<% unless version == 'ga' -%>
 func testAccComputeRegionBackendService_ilbBasic(serviceName, checkName string) string {
   return fmt.Sprintf(`
 resource "google_compute_region_backend_service" "foobar" {
   name          = "%s"
   health_checks = [google_compute_health_check.health_check.self_link]
+  port_name     = "http"
   protocol      = "HTTP"
   load_balancing_scheme = "INTERNAL_MANAGED"
   locality_lb_policy    = "RING_HASH"
@@ -276,14 +273,13 @@ resource "google_compute_health_check" "health_check" {
 }
 `, serviceName, checkName)
 }
-<% end -%>
 
-<% unless version == 'ga' -%>
 func testAccComputeRegionBackendService_ilbUpdateBasic(serviceName, checkName string) string {
   return fmt.Sprintf(`
 resource "google_compute_region_backend_service" "foobar" {
   name          = "%s"
   health_checks = [google_compute_health_check.health_check.self_link]
+  port_name     = "https"
   protocol      = "HTTP"
   load_balancing_scheme = "INTERNAL_MANAGED"
   locality_lb_policy    = "RANDOM"
@@ -303,7 +299,6 @@ resource "google_compute_health_check" "health_check" {
 }
 `, serviceName, checkName)
 }
-<% end -%>
 
 <% unless version == 'ga' -%>
 func testAccComputeRegionBackendService_ilbFull(serviceName, checkName string) string {
@@ -311,6 +306,7 @@ func testAccComputeRegionBackendService_ilbFull(serviceName, checkName string) s
 resource "google_compute_region_backend_service" "foobar" {
   name          = "%s"
   health_checks = [google_compute_health_check.health_check.self_link]
+  port_name     = "http"
   protocol      = "HTTP"
   load_balancing_scheme = "INTERNAL_MANAGED"
   locality_lb_policy    = "MAGLEV"
@@ -347,6 +343,7 @@ func testAccComputeRegionBackendService_ilbUpdateFull(serviceName, igName, insta
 resource "google_compute_region_backend_service" "foobar" {
   name          = "%s"
   health_checks = [google_compute_health_check.health_check.self_link]
+  port_name     = "https"
   protocol      = "HTTP"
   load_balancing_scheme = "INTERNAL_MANAGED"
   locality_lb_policy    = "MAGLEV"
@@ -406,6 +403,16 @@ resource "google_compute_region_backend_service" "foobar" {
 resource "google_compute_instance_group" "group" {
   name      = "%s"
   instances = [google_compute_instance.ig_instance.self_link]
+
+  named_port {
+    name = "http"
+    port = "8080"
+  }
+
+  named_port {
+    name = "https"
+    port = "8443"
+  }
 }
 
 data "google_compute_image" "my_image" {
@@ -669,6 +676,7 @@ data "google_compute_image" "my_image" {
 resource "google_compute_region_backend_service" "lipsum" {
   name        = "%s"
   description = "Hello World 1234"
+  port_name   = "http"
   protocol    = "TCP"
 
   region = "us-central1"
@@ -731,6 +739,7 @@ resource "google_compute_region_backend_service" "default" {
   }
 
   region      = "us-central1"
+  port_name   = "http"
   protocol    = "HTTP"
   timeout_sec = 10
 
@@ -798,6 +807,7 @@ resource "google_compute_region_backend_service" "default" {
   }
 
   region      = "us-central1"
+  port_name   = "http"
   protocol    = "HTTP"
   timeout_sec = 10
 
@@ -875,6 +885,7 @@ resource "google_compute_region_backend_service" "default" {
   }
 
   region      = "us-central1"
+  port_name   = "http"
   protocol    = "HTTP"
   timeout_sec = 10
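Every `acctest.RandString(10)` call above becomes `randString(t, 10)`, a test-scoped helper. For VCR replay to work, a replayed run must regenerate the same resource names it recorded, so the helper presumably draws from a per-test seed rather than global entropy. A sketch of that idea; the seeding strategy shown here is an assumption, not the provider's actual mechanism:

```go
package google

import (
	"math/rand"
	"testing"
)

// randStringFromSeed sketches a VCR-friendly name generator: a stable
// per-test seed makes replayed runs produce identical names. The real
// randString/randInt helpers may derive their seed differently (e.g.
// from the recorded cassette).
func randStringFromSeed(t *testing.T, length int, seed int64) string {
	const chars = "abcdefghijklmnopqrstuvwxyz0123456789"
	r := rand.New(rand.NewSource(seed))
	out := make([]byte, length)
	for i := range out {
		out[i] = chars[r.Intn(len(chars))]
	}
	return string(out)
}
```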
+ t, "google_compute_region_disk.regiondisk", &disk), ), }, { @@ -54,20 +53,20 @@ func TestAccComputeRegionDisk_basic(t *testing.T) { func TestAccComputeRegionDisk_basicUpdate(t *testing.T) { t.Parallel() - diskName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + diskName := fmt.Sprintf("tf-test-%s", randString(t, 10)) var disk computeBeta.Disk - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeRegionDiskDestroy, + CheckDestroy: testAccCheckComputeRegionDiskDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeRegionDisk_basic(diskName, "self_link"), Check: resource.ComposeTestCheckFunc( testAccCheckComputeRegionDiskExists( - "google_compute_region_disk.regiondisk", &disk), + t, "google_compute_region_disk.regiondisk", &disk), ), }, { @@ -79,7 +78,7 @@ func TestAccComputeRegionDisk_basicUpdate(t *testing.T) { Config: testAccComputeRegionDisk_basicUpdated(diskName, "self_link"), Check: resource.ComposeTestCheckFunc( testAccCheckComputeRegionDiskExists( - "google_compute_region_disk.regiondisk", &disk), + t, "google_compute_region_disk.regiondisk", &disk), resource.TestCheckResourceAttr("google_compute_region_disk.regiondisk", "size", "100"), testAccCheckComputeRegionDiskHasLabel(&disk, "my-label", "my-updated-label-value"), testAccCheckComputeRegionDiskHasLabel(&disk, "a-new-label", "a-new-label-value"), @@ -98,19 +97,19 @@ func TestAccComputeRegionDisk_basicUpdate(t *testing.T) { func TestAccComputeRegionDisk_encryption(t *testing.T) { t.Parallel() - diskName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + diskName := fmt.Sprintf("tf-test-%s", randString(t, 10)) var disk computeBeta.Disk - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeRegionDiskDestroy, + CheckDestroy: testAccCheckComputeRegionDiskDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeRegionDisk_encryption(diskName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeRegionDiskExists( - "google_compute_region_disk.regiondisk", &disk), + t, "google_compute_region_disk.regiondisk", &disk), testAccCheckRegionDiskEncryptionKey( "google_compute_region_disk.regiondisk", &disk), ), @@ -122,22 +121,22 @@ func TestAccComputeRegionDisk_encryption(t *testing.T) { func TestAccComputeRegionDisk_deleteDetach(t *testing.T) { t.Parallel() - diskName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) - regionDiskName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) - regionDiskName2 := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) - instanceName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + diskName := fmt.Sprintf("tf-test-%s", randString(t, 10)) + regionDiskName := fmt.Sprintf("tf-test-%s", randString(t, 10)) + regionDiskName2 := fmt.Sprintf("tf-test-%s", randString(t, 10)) + instanceName := fmt.Sprintf("tf-test-%s", randString(t, 10)) var disk computeBeta.Disk - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeRegionDiskDestroy, + CheckDestroy: testAccCheckComputeRegionDiskDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeRegionDisk_deleteDetach(instanceName, diskName, regionDiskName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeRegionDiskExists( - 
"google_compute_region_disk.regiondisk", &disk), + t, "google_compute_region_disk.regiondisk", &disk), ), }, // this needs to be an additional step so we refresh and see the instance @@ -148,7 +147,7 @@ func TestAccComputeRegionDisk_deleteDetach(t *testing.T) { Config: testAccComputeRegionDisk_deleteDetach(instanceName, diskName, regionDiskName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeRegionDiskExists( - "google_compute_region_disk.regiondisk", &disk), + t, "google_compute_region_disk.regiondisk", &disk), testAccCheckComputeRegionDiskInstances( "google_compute_region_disk.regiondisk", &disk), ), @@ -158,7 +157,7 @@ func TestAccComputeRegionDisk_deleteDetach(t *testing.T) { Config: testAccComputeRegionDisk_deleteDetach(instanceName, diskName, regionDiskName2), Check: resource.ComposeTestCheckFunc( testAccCheckComputeRegionDiskExists( - "google_compute_region_disk.regiondisk", &disk), + t, "google_compute_region_disk.regiondisk", &disk), ), }, // Add the extra step like before @@ -166,7 +165,7 @@ func TestAccComputeRegionDisk_deleteDetach(t *testing.T) { Config: testAccComputeRegionDisk_deleteDetach(instanceName, diskName, regionDiskName2), Check: resource.ComposeTestCheckFunc( testAccCheckComputeRegionDiskExists( - "google_compute_region_disk.regiondisk", &disk), + t, "google_compute_region_disk.regiondisk", &disk), testAccCheckComputeRegionDiskInstances( "google_compute_region_disk.regiondisk", &disk), ), @@ -175,7 +174,7 @@ func TestAccComputeRegionDisk_deleteDetach(t *testing.T) { }) } -func testAccCheckComputeRegionDiskExists(n string, disk *computeBeta.Disk) resource.TestCheckFunc { +func testAccCheckComputeRegionDiskExists(t *testing.T, n string, disk *computeBeta.Disk) resource.TestCheckFunc { return func(s *terraform.State) error { p := getTestProjectFromEnv() rs, ok := s.RootModule().Resources[n] @@ -187,7 +186,7 @@ func testAccCheckComputeRegionDiskExists(n string, disk *computeBeta.Disk) resou return fmt.Errorf("No ID is set") } - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) found, err := config.clientComputeBeta.RegionDisks.Get( p, rs.Primary.Attributes["region"], rs.Primary.Attributes["name"]).Do() diff --git a/third_party/terraform/tests/resource_compute_region_health_check_test.go.erb b/third_party/terraform/tests/resource_compute_region_health_check_test.go.erb index a9e1c104bb83..9e6a90d86417 100644 --- a/third_party/terraform/tests/resource_compute_region_health_check_test.go.erb +++ b/third_party/terraform/tests/resource_compute_region_health_check_test.go.erb @@ -6,7 +6,6 @@ import ( "regexp" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" ) @@ -14,12 +13,12 @@ import ( func TestAccComputeRegionHealthCheck_tcp_update(t *testing.T) { t.Parallel() - hckName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + hckName := fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeRegionHealthCheckDestroy, + CheckDestroy: testAccCheckComputeRegionHealthCheckDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeRegionHealthCheck_tcp(hckName), @@ -44,12 +43,12 @@ func TestAccComputeRegionHealthCheck_tcp_update(t *testing.T) { func TestAccComputeRegionHealthCheck_ssl_port_spec(t *testing.T) { t.Parallel() - hckName 
:= fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + hckName := fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeRegionHealthCheckDestroy, + CheckDestroy: testAccCheckComputeRegionHealthCheckDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeRegionHealthCheck_ssl_fixed_port(hckName), @@ -66,12 +65,12 @@ func TestAccComputeRegionHealthCheck_ssl_port_spec(t *testing.T) { func TestAccComputeRegionHealthCheck_http_port_spec(t *testing.T) { t.Parallel() - hckName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + hckName := fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeRegionHealthCheckDestroy, + CheckDestroy: testAccCheckComputeRegionHealthCheckDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeRegionHealthCheck_http_port_spec(hckName), @@ -92,12 +91,12 @@ func TestAccComputeRegionHealthCheck_http_port_spec(t *testing.T) { func TestAccComputeRegionHealthCheck_https_serving_port(t *testing.T) { t.Parallel() - hckName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + hckName := fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeRegionHealthCheckDestroy, + CheckDestroy: testAccCheckComputeRegionHealthCheckDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeRegionHealthCheck_https_serving_port(hckName), @@ -114,12 +113,12 @@ func TestAccComputeRegionHealthCheck_https_serving_port(t *testing.T) { func TestAccComputeRegionHealthCheck_typeTransition(t *testing.T) { t.Parallel() - hckName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + hckName := fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeRegionHealthCheckDestroy, + CheckDestroy: testAccCheckComputeRegionHealthCheckDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeRegionHealthCheck_https(hckName), @@ -146,16 +145,16 @@ func TestAccComputeRegionHealthCheck_typeTransition(t *testing.T) { func TestAccComputeRegionHealthCheck_tcpAndSsl_shouldFail(t *testing.T) { t.Parallel() - hckName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + hckName := fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeRegionHealthCheckDestroy, + CheckDestroy: testAccCheckComputeRegionHealthCheckDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeRegionHealthCheck_tcpAndSsl_shouldFail(hckName), - ExpectError: regexp.MustCompile("conflicts with tcp_health_check"), + ExpectError: regexp.MustCompile("only one of `http2_health_check,http_health_check,https_health_check,ssl_health_check,tcp_health_check` can be specified"), }, }, }) diff --git a/third_party/terraform/tests/resource_compute_region_instance_group_manager_test.go b/third_party/terraform/tests/resource_compute_region_instance_group_manager_test.go.erb 
diff --git a/third_party/terraform/tests/resource_compute_region_instance_group_manager_test.go b/third_party/terraform/tests/resource_compute_region_instance_group_manager_test.go.erb
similarity index 72%
rename from third_party/terraform/tests/resource_compute_region_instance_group_manager_test.go
rename to third_party/terraform/tests/resource_compute_region_instance_group_manager_test.go.erb
index c6db92936c96..8405cf866e8e 100644
--- a/third_party/terraform/tests/resource_compute_region_instance_group_manager_test.go
+++ b/third_party/terraform/tests/resource_compute_region_instance_group_manager_test.go.erb
@@ -1,3 +1,5 @@
+<% autogen_exception -%>
+
 package google
 
 import (
@@ -5,7 +7,6 @@ import (
   "strings"
   "testing"
 
-  "github.com/hashicorp/terraform-plugin-sdk/helper/acctest"
   "github.com/hashicorp/terraform-plugin-sdk/helper/resource"
   "github.com/hashicorp/terraform-plugin-sdk/terraform"
 )
@@ -13,15 +14,15 @@ import (
 func TestAccRegionInstanceGroupManager_basic(t *testing.T) {
   t.Parallel()
 
-  template := fmt.Sprintf("igm-test-%s", acctest.RandString(10))
-  target := fmt.Sprintf("igm-test-%s", acctest.RandString(10))
-  igm1 := fmt.Sprintf("igm-test-%s", acctest.RandString(10))
-  igm2 := fmt.Sprintf("igm-test-%s", acctest.RandString(10))
+  template := fmt.Sprintf("tf-test-rigm-%s", randString(t, 10))
+  target := fmt.Sprintf("tf-test-rigm-%s", randString(t, 10))
+  igm1 := fmt.Sprintf("tf-test-rigm-%s", randString(t, 10))
+  igm2 := fmt.Sprintf("tf-test-rigm-%s", randString(t, 10))
 
-  resource.Test(t, resource.TestCase{
+  vcrTest(t, resource.TestCase{
     PreCheck:  func() { testAccPreCheck(t) },
     Providers: testAccProviders,
-    CheckDestroy: testAccCheckRegionInstanceGroupManagerDestroy,
+    CheckDestroy: testAccCheckRegionInstanceGroupManagerDestroyProducer(t),
     Steps: []resource.TestStep{
       {
         Config: testAccRegionInstanceGroupManager_basic(template, target, igm1, igm2),
@@ -43,13 +44,13 @@ func TestAccRegionInstanceGroupManager_basic(t *testing.T) {
 func TestAccRegionInstanceGroupManager_targetSizeZero(t *testing.T) {
   t.Parallel()
 
-  templateName := fmt.Sprintf("igm-test-%s", acctest.RandString(10))
-  igmName := fmt.Sprintf("igm-test-%s", acctest.RandString(10))
+  templateName := fmt.Sprintf("tf-test-rigm-%s", randString(t, 10))
+  igmName := fmt.Sprintf("tf-test-rigm-%s", randString(t, 10))
 
-  resource.Test(t, resource.TestCase{
+  vcrTest(t, resource.TestCase{
     PreCheck:  func() { testAccPreCheck(t) },
     Providers: testAccProviders,
-    CheckDestroy: testAccCheckRegionInstanceGroupManagerDestroy,
+    CheckDestroy: testAccCheckRegionInstanceGroupManagerDestroyProducer(t),
     Steps: []resource.TestStep{
       {
         Config: testAccRegionInstanceGroupManager_targetSizeZero(templateName, igmName),
@@ -66,16 +67,16 @@ func TestAccRegionInstanceGroupManager_targetSizeZero(t *testing.T) {
 func TestAccRegionInstanceGroupManager_update(t *testing.T) {
   t.Parallel()
 
-  template1 := fmt.Sprintf("igm-test-%s", acctest.RandString(10))
-  target1 := fmt.Sprintf("igm-test-%s", acctest.RandString(10))
-  target2 := fmt.Sprintf("igm-test-%s", acctest.RandString(10))
-  template2 := fmt.Sprintf("igm-test-%s", acctest.RandString(10))
-  igm := fmt.Sprintf("igm-test-%s", acctest.RandString(10))
+  template1 := fmt.Sprintf("tf-test-rigm-%s", randString(t, 10))
+  target1 := fmt.Sprintf("tf-test-rigm-%s", randString(t, 10))
+  target2 := fmt.Sprintf("tf-test-rigm-%s", randString(t, 10))
+  template2 := fmt.Sprintf("tf-test-rigm-%s", randString(t, 10))
+  igm := fmt.Sprintf("tf-test-rigm-%s", randString(t, 10))
 
-  resource.Test(t, resource.TestCase{
+  vcrTest(t, resource.TestCase{
     PreCheck:  func() { testAccPreCheck(t) },
     Providers: testAccProviders,
-    CheckDestroy: testAccCheckRegionInstanceGroupManagerDestroy,
+    CheckDestroy: testAccCheckRegionInstanceGroupManagerDestroyProducer(t),
     Steps: []resource.TestStep{
       {
         Config: testAccRegionInstanceGroupManager_update(template1, target1, igm),
@@ -93,21 +94,31 @@ func TestAccRegionInstanceGroupManager_update(t *testing.T) {
         ImportState:       true,
         ImportStateVerify: true,
       },
+      {
+        Config: testAccRegionInstanceGroupManager_update3(template1, target1, target2, template2, igm),
+      },
+      {
+        ResourceName:      "google_compute_region_instance_group_manager.igm-update",
+        ImportState:       true,
+        ImportStateVerify: true,
+      },
     },
   })
 }
 
 func TestAccRegionInstanceGroupManager_updateLifecycle(t *testing.T) {
+  // Randomness in instance template
+  skipIfVcr(t)
   t.Parallel()
 
   tag1 := "tag1"
   tag2 := "tag2"
-  igm := fmt.Sprintf("igm-test-%s", acctest.RandString(10))
+  igm := fmt.Sprintf("tf-test-rigm-%s", randString(t, 10))
 
-  resource.Test(t, resource.TestCase{
+  vcrTest(t, resource.TestCase{
     PreCheck:  func() { testAccPreCheck(t) },
     Providers: testAccProviders,
-    CheckDestroy: testAccCheckRegionInstanceGroupManagerDestroy,
+    CheckDestroy: testAccCheckRegionInstanceGroupManagerDestroyProducer(t),
     Steps: []resource.TestStep{
       {
         Config: testAccRegionInstanceGroupManager_updateLifecycle(tag1, igm),
@@ -130,14 +141,16 @@ func TestAccRegionInstanceGroupManager_updateLifecycle(t *testing.T) {
 }
 
 func TestAccRegionInstanceGroupManager_rollingUpdatePolicy(t *testing.T) {
+  // Randomness in instance template
+  skipIfVcr(t)
   t.Parallel()
 
-  igm := fmt.Sprintf("igm-test-%s", acctest.RandString(10))
+  igm := fmt.Sprintf("tf-test-rigm-%s", randString(t, 10))
 
-  resource.Test(t, resource.TestCase{
+  vcrTest(t, resource.TestCase{
     PreCheck:  func() { testAccPreCheck(t) },
     Providers: testAccProviders,
-    CheckDestroy: testAccCheckInstanceGroupManagerDestroy,
+    CheckDestroy: testAccCheckInstanceGroupManagerDestroyProducer(t),
     Steps: []resource.TestStep{
       {
         Config: testAccRegionInstanceGroupManager_rollingUpdatePolicy(igm),
@@ -165,15 +178,17 @@ func TestAccRegionInstanceGroupManager_rollingUpdatePolicy(t *testing.T) {
 }
 
 func TestAccRegionInstanceGroupManager_separateRegions(t *testing.T) {
+  // Randomness in instance template
+  skipIfVcr(t)
   t.Parallel()
 
-  igm1 := fmt.Sprintf("igm-test-%s", acctest.RandString(10))
-  igm2 := fmt.Sprintf("igm-test-%s", acctest.RandString(10))
+  igm1 := fmt.Sprintf("tf-test-rigm-%s", randString(t, 10))
+  igm2 := fmt.Sprintf("tf-test-rigm-%s", randString(t, 10))
 
-  resource.Test(t, resource.TestCase{
+  vcrTest(t, resource.TestCase{
     PreCheck:  func() { testAccPreCheck(t) },
     Providers: testAccProviders,
-    CheckDestroy: testAccCheckRegionInstanceGroupManagerDestroy,
+    CheckDestroy: testAccCheckRegionInstanceGroupManagerDestroyProducer(t),
     Steps: []resource.TestStep{
       {
         Config: testAccRegionInstanceGroupManager_separateRegions(igm1, igm2),
@@ -195,14 +210,14 @@ func TestAccRegionInstanceGroupManager_separateRegions(t *testing.T) {
 func TestAccRegionInstanceGroupManager_versions(t *testing.T) {
   t.Parallel()
 
-  primaryTemplate := fmt.Sprintf("igm-test-%s", acctest.RandString(10))
-  canaryTemplate := fmt.Sprintf("igm-test-%s", acctest.RandString(10))
-  igm := fmt.Sprintf("igm-test-%s", acctest.RandString(10))
+  primaryTemplate := fmt.Sprintf("tf-test-rigm-%s", randString(t, 10))
+  canaryTemplate := fmt.Sprintf("tf-test-rigm-%s", randString(t, 10))
+  igm := fmt.Sprintf("tf-test-rigm-%s", randString(t, 10))
 
-  resource.Test(t, resource.TestCase{
+  vcrTest(t, resource.TestCase{
     PreCheck:  func() { testAccPreCheck(t) },
     Providers: testAccProviders,
-    CheckDestroy: testAccCheckRegionInstanceGroupManagerDestroy,
+    CheckDestroy: testAccCheckRegionInstanceGroupManagerDestroyProducer(t),
     Steps: []resource.TestStep{
       {
         Config: testAccRegionInstanceGroupManager_versions(primaryTemplate, canaryTemplate, igm),
@@ -219,15 +234,15 @@ func TestAccRegionInstanceGroupManager_versions(t *testing.T) {
 func TestAccRegionInstanceGroupManager_autoHealingPolicies(t *testing.T) {
   t.Parallel()
 
-  template := fmt.Sprintf("igm-test-%s", acctest.RandString(10))
-  target := fmt.Sprintf("igm-test-%s", acctest.RandString(10))
-  igm := fmt.Sprintf("igm-test-%s", acctest.RandString(10))
-  hck := fmt.Sprintf("igm-test-%s", acctest.RandString(10))
+  template := fmt.Sprintf("tf-test-rigm-%s", randString(t, 10))
+  target := fmt.Sprintf("tf-test-rigm-%s", randString(t, 10))
+  igm := fmt.Sprintf("tf-test-rigm-%s", randString(t, 10))
+  hck := fmt.Sprintf("tf-test-rigm-%s", randString(t, 10))
 
-  resource.Test(t, resource.TestCase{
+  vcrTest(t, resource.TestCase{
     PreCheck:  func() { testAccPreCheck(t) },
     Providers: testAccProviders,
-    CheckDestroy: testAccCheckRegionInstanceGroupManagerDestroy,
+    CheckDestroy: testAccCheckRegionInstanceGroupManagerDestroyProducer(t),
     Steps: []resource.TestStep{
       {
         Config: testAccRegionInstanceGroupManager_autoHealingPolicies(template, target, igm, hck),
@@ -252,14 +267,14 @@ func TestAccRegionInstanceGroupManager_autoHealingPolicies(t *testing.T) {
 func TestAccRegionInstanceGroupManager_distributionPolicy(t *testing.T) {
   t.Parallel()
 
-  template := fmt.Sprintf("igm-test-%s", acctest.RandString(10))
-  igm := fmt.Sprintf("igm-test-%s", acctest.RandString(10))
+  template := fmt.Sprintf("tf-test-rigm-%s", randString(t, 10))
+  igm := fmt.Sprintf("tf-test-rigm-%s", randString(t, 10))
   zones := []string{"us-central1-a", "us-central1-b"}
 
-  resource.Test(t, resource.TestCase{
+  vcrTest(t, resource.TestCase{
     PreCheck:  func() { testAccPreCheck(t) },
     Providers: testAccProviders,
-    CheckDestroy: testAccCheckRegionInstanceGroupManagerDestroy,
+    CheckDestroy: testAccCheckRegionInstanceGroupManagerDestroyProducer(t),
     Steps: []resource.TestStep{
       {
         Config: testAccRegionInstanceGroupManager_distributionPolicy(template, igm, zones),
@@ -273,21 +288,55 @@ func TestAccRegionInstanceGroupManager_distributionPolicy(t *testing.T) {
   })
 }
 
-func testAccCheckRegionInstanceGroupManagerDestroy(s *terraform.State) error {
-  config := testAccProvider.Meta().(*Config)
+<% unless version == 'ga' -%>
+func TestAccRegionInstanceGroupManager_stateful(t *testing.T) {
+  t.Parallel()
+
+  template := fmt.Sprintf("tf-test-rigm-%s", randString(t, 10))
+  igm := fmt.Sprintf("tf-test-rigm-%s", randString(t, 10))
 
-  for _, rs := range s.RootModule().Resources {
-    if rs.Type != "google_compute_region_instance_group_manager" {
-      continue
-    }
-    _, err := config.clientCompute.RegionInstanceGroupManagers.Get(
-      rs.Primary.Attributes["project"], rs.Primary.Attributes["region"], rs.Primary.Attributes["name"]).Do()
-    if err == nil {
-      return fmt.Errorf("RegionInstanceGroupManager still exists")
+  vcrTest(t, resource.TestCase{
+    PreCheck:  func() { testAccPreCheck(t) },
+    Providers: testAccProviders,
+    CheckDestroy: testAccCheckRegionInstanceGroupManagerDestroyProducer(t),
+    Steps: []resource.TestStep{
+      {
+        Config: testAccRegionInstanceGroupManager_stateful(template, igm),
+      },
+      {
+        ResourceName:      "google_compute_region_instance_group_manager.igm-basic",
+        ImportState:       true,
+        ImportStateVerify: true,
+      },
+      {
+        Config: testAccRegionInstanceGroupManager_statefulUpdate(template, igm),
+      },
+      {
+        ResourceName:      "google_compute_region_instance_group_manager.igm-basic",
+        ImportState:       true,
+        ImportStateVerify: true,
+      },
+    },
+  })
+}
+
+<% end -%>
+func testAccCheckRegionInstanceGroupManagerDestroyProducer(t *testing.T) func(s *terraform.State) error {
+  return func(s *terraform.State) error {
+    config := googleProviderConfig(t)
+
+    for _, rs := range s.RootModule().Resources {
+      if rs.Type != "google_compute_region_instance_group_manager" {
+        continue
+      }
+      _, err := config.clientCompute.RegionInstanceGroupManagers.Get(
+        rs.Primary.Attributes["project"], rs.Primary.Attributes["region"], rs.Primary.Attributes["name"]).Do()
+      if err == nil {
+        return fmt.Errorf("RegionInstanceGroupManager still exists")
+      }
     }
-  }
 
-  return nil
+    return nil
+  }
 }
 
 func testAccRegionInstanceGroupManager_basic(template, target, igm1, igm2 string) string {
@@ -543,6 +592,92 @@ resource "google_compute_region_instance_group_manager" "igm-update" {
 `, template1, target1, target2, template2, igm)
 }
 
+// Remove target pools
+func testAccRegionInstanceGroupManager_update3(template1, target1, target2, template2, igm string) string {
+  return fmt.Sprintf(`
+data "google_compute_image" "my_image" {
+  family  = "debian-9"
+  project = "debian-cloud"
+}
+
+resource "google_compute_instance_template" "igm-update" {
+  name           = "%s"
+  machine_type   = "n1-standard-1"
+  can_ip_forward = false
+  tags           = ["foo", "bar"]
+
+  disk {
+    source_image = data.google_compute_image.my_image.self_link
+    auto_delete  = true
+    boot         = true
+  }
+
+  network_interface {
+    network = "default"
+  }
+
+  service_account {
+    scopes = ["userinfo-email", "compute-ro", "storage-ro"]
+  }
+}
+
+resource "google_compute_target_pool" "igm-update" {
+  description      = "Resource created for Terraform acceptance testing"
+  name             = "%s"
+  session_affinity = "CLIENT_IP_PROTO"
+}
+
+resource "google_compute_target_pool" "igm-update2" {
+  description      = "Resource created for Terraform acceptance testing"
+  name             = "%s"
+  session_affinity = "CLIENT_IP_PROTO"
+}
+
+resource "google_compute_instance_template" "igm-update2" {
+  name           = "%s"
+  machine_type   = "n1-standard-1"
+  can_ip_forward = false
+  tags           = ["foo", "bar"]
+
+  disk {
+    source_image = data.google_compute_image.my_image.self_link
+    auto_delete  = true
+    boot         = true
+  }
+
+  network_interface {
+    network = "default"
+  }
+
+  service_account {
+    scopes = ["userinfo-email", "compute-ro", "storage-ro"]
+  }
+}
+
+resource "google_compute_region_instance_group_manager" "igm-update" {
+  description = "Terraform test instance group manager"
+  name        = "%s"
+
+  version {
+    instance_template = google_compute_instance_template.igm-update2.self_link
+    name              = "primary"
+  }
+
+  base_instance_name = "igm-update"
+  region             = "us-central1"
+  target_size        = 3
+
+  named_port {
+    name = "customhttp"
+    port = 8080
+  }
+
+  named_port {
+    name = "customhttps"
+    port = 8443
+  }
+}
+`, template1, target1, target2, template2, igm)
+}
+
 func testAccRegionInstanceGroupManager_updateLifecycle(tag, igm string) string {
   return fmt.Sprintf(`
 data "google_compute_image" "my_image" {
@@ -1042,3 +1177,122 @@ resource "google_compute_region_instance_group_manager" "igm-rolling-update-poli
 }
 `, igm)
 }
+
+<% unless version == 'ga' -%>
+func testAccRegionInstanceGroupManager_stateful(template, igm string) string {
+  return fmt.Sprintf(`
+data "google_compute_image" "my_image" {
+  family  = "debian-9"
+  project = "debian-cloud"
+}
+
+resource "google_compute_instance_template" "igm-basic" {
+  name           = "%s"
+  machine_type   = "n1-standard-1"
+  can_ip_forward = false
+  tags           = ["foo", "bar"]
+
+  disk {
+    source_image = data.google_compute_image.my_image.self_link
+    auto_delete  = true
+    boot         = true
+    device_name  = "stateful-disk"
+  }
+
+  disk {
+    source_image = data.google_compute_image.my_image.self_link
+    auto_delete  = true
+    device_name  = "stateful-disk2"
+  }
+
+  network_interface {
+    network = "default"
+  }
+}
+
+resource "google_compute_region_instance_group_manager" "igm-basic" {
+  description = "Terraform test instance group manager"
+  name        = "%s"
+
+  version {
+    instance_template = google_compute_instance_template.igm-basic.self_link
+    name              = "primary"
+  }
+
+  base_instance_name = "igm-basic"
+  region             = "us-central1"
+  target_size        = 2
+
+  update_policy {
+    instance_redistribution_type = "NONE"
+    type                         = "OPPORTUNISTIC"
+    minimal_action               = "REPLACE"
+    max_surge_fixed              = 0
+    max_unavailable_fixed        = 6
+    min_ready_sec                = 20
+  }
+
+  stateful_disk {
+    device_name = "stateful-disk"
+    delete_rule = "NEVER"
+  }
+}
+`, template, igm)
+}
+
+func testAccRegionInstanceGroupManager_statefulUpdate(template, igm string) string {
+  return fmt.Sprintf(`
+data "google_compute_image" "my_image" {
+  family  = "debian-9"
+  project = "debian-cloud"
+}
+
+resource "google_compute_instance_template" "igm-basic" {
+  name           = "%s"
+  machine_type   = "n1-standard-1"
+  can_ip_forward = false
+  tags           = ["foo", "bar"]
+
+  disk {
+    source_image = data.google_compute_image.my_image.self_link
+    auto_delete  = true
+    boot         = true
+    device_name  = "stateful-disk"
+  }
+
+  disk {
+    source_image = data.google_compute_image.my_image.self_link
+    auto_delete  = true
+    device_name  = "stateful-disk2"
+  }
+
+  network_interface {
+    network = "default"
+  }
+}
+
+resource "google_compute_region_instance_group_manager" "igm-basic" {
+  description = "Terraform test instance group manager"
+  name        = "%s"
+
+  version {
+    instance_template = google_compute_instance_template.igm-basic.self_link
+    name              = "primary"
+  }
+
+  base_instance_name = "igm-basic"
+  region             = "us-central1"
+  target_size        = 2
+
+  update_policy {
+    instance_redistribution_type = "NONE"
+    type                         = "OPPORTUNISTIC"
+    minimal_action               = "REPLACE"
+    max_surge_fixed              = 0
+    max_unavailable_fixed        = 6
+    min_ready_sec                = 20
+  }
+
+  stateful_disk {
+    device_name = "stateful-disk"
+    delete_rule = "NEVER"
+  }
+
+  stateful_disk {
+    device_name = "stateful-disk2"
+    delete_rule = "NEVER"
+  }
+}
+`, template, igm)
+}
+<% end -%>
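`vcrTest` replaces `resource.Test` throughout these files; it presumably records HTTP traffic on a first live run and replays it afterwards. A reduced sketch of the wrapper's likely shape — the environment variable names and fixture wiring are assumptions for illustration, not the provider's actual implementation:

```go
package google

import (
	"os"
	"testing"

	"github.com/hashicorp/terraform-plugin-sdk/helper/resource"
)

// vcrTestSketch: with no VCR environment configured, behave exactly like
// resource.Test; otherwise a recording/replaying HTTP transport would be
// wired into the provider before the case runs (elided here).
func vcrTestSketch(t *testing.T, c resource.TestCase) {
	if os.Getenv("VCR_PATH") == "" || os.Getenv("VCR_MODE") == "" {
		resource.Test(t, c) // no cassette configured; run live
		return
	}
	// A real implementation would install a recorder keyed by t.Name()
	// and seed the test's rand source before running the case.
	resource.Test(t, c)
}
```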
diff --git a/third_party/terraform/tests/resource_compute_region_per_instance_config_test.go.erb b/third_party/terraform/tests/resource_compute_region_per_instance_config_test.go.erb
new file mode 100644
index 000000000000..802e85279890
--- /dev/null
+++ b/third_party/terraform/tests/resource_compute_region_per_instance_config_test.go.erb
@@ -0,0 +1,320 @@
+<% autogen_exception -%>
+package google
+
+<% unless version == 'ga' -%>
+import (
+  "fmt"
+  "testing"
+
+  "github.com/hashicorp/terraform-plugin-sdk/helper/resource"
+  "github.com/hashicorp/terraform-plugin-sdk/terraform"
+)
+
+func TestAccComputeRegionPerInstanceConfig_statefulBasic(t *testing.T) {
+  t.Parallel()
+
+  context := map[string]interface{}{
+    "random_suffix": randString(t, 10),
+    "config_name":   fmt.Sprintf("instance-%s", randString(t, 10)),
+    "config_name2":  fmt.Sprintf("instance-%s", randString(t, 10)),
+    "config_name3":  fmt.Sprintf("instance-%s", randString(t, 10)),
+    "config_name4":  fmt.Sprintf("instance-%s", randString(t, 10)),
+  }
+  rigmId := fmt.Sprintf("projects/%s/regions/%s/instanceGroupManagers/rigm-%s",
+    getTestProjectFromEnv(), "us-central1", context["random_suffix"])
+
+  vcrTest(t, resource.TestCase{
+    PreCheck:  func() { testAccPreCheck(t) },
+    Providers: testAccProviders,
+    Steps: []resource.TestStep{
+      {
+        // Create one config
+        Config: testAccComputeRegionPerInstanceConfig_statefulBasic(context),
+      },
+      {
+        ResourceName:            "google_compute_region_per_instance_config.default",
+        ImportState:             true,
+        ImportStateVerify:       true,
+        ImportStateVerifyIgnore: []string{"remove_instance_state_on_destroy"},
+      },
+      {
+        // Force-recreate old config
+        Config: testAccComputeRegionPerInstanceConfig_statefulModified(context),
+        Check: resource.ComposeTestCheckFunc(
+          testAccCheckComputeRegionPerInstanceConfigDestroyed(t, rigmId, context["config_name"].(string)),
+        ),
+      },
+      {
+        ResourceName:            "google_compute_region_per_instance_config.default",
+        ImportState:             true,
+        ImportStateVerify:       true,
+        ImportStateVerifyIgnore: []string{"remove_instance_state_on_destroy"},
+      },
+      {
+        // Add two new configs
+        Config: testAccComputeRegionPerInstanceConfig_statefulAdditional(context),
+      },
+      {
+        ResourceName:            "google_compute_region_per_instance_config.default",
+        ImportState:             true,
+        ImportStateVerify:       true,
+        ImportStateVerifyIgnore: []string{"remove_instance_state_on_destroy"},
+      },
+      {
+        ResourceName:            "google_compute_region_per_instance_config.with_disks",
+        ImportState:             true,
+        ImportStateVerify:       true,
+        ImportStateVerifyIgnore: []string{"most_disruptive_allowed_action", "minimal_action", "remove_instance_state_on_destroy"},
+      },
+      {
+        ResourceName:            "google_compute_region_per_instance_config.add2",
+        ImportState:             true,
+        ImportStateVerify:       true,
+        ImportStateVerifyIgnore: []string{"remove_instance_state_on_destroy"},
+      },
+      {
+        // delete all configs
+        Config: testAccComputeRegionPerInstanceConfig_rigm(context),
+        Check: resource.ComposeTestCheckFunc(
+          // Config with remove_instance_state_on_destroy = false won't be destroyed (config4)
+          testAccCheckComputeRegionPerInstanceConfigDestroyed(t, rigmId, context["config_name2"].(string)),
+          testAccCheckComputeRegionPerInstanceConfigDestroyed(t, rigmId, context["config_name3"].(string)),
+        ),
+      },
+    },
+  })
+}
+
+func TestAccComputeRegionPerInstanceConfig_update(t *testing.T) {
+  t.Parallel()
+
+  context := map[string]interface{}{
+    "random_suffix": randString(t, 10),
+    "config_name":   fmt.Sprintf("instance-%s", randString(t, 10)),
+  }
+
+  vcrTest(t, resource.TestCase{
+    PreCheck:  func() { testAccPreCheck(t) },
+    Providers: testAccProviders,
+    Steps: []resource.TestStep{
+      {
+        // Create one config
+        Config: testAccComputeRegionPerInstanceConfig_statefulBasic(context),
+      },
+      {
+        ResourceName:            "google_compute_region_per_instance_config.default",
+        ImportState:             true,
+        ImportStateVerify:       true,
+        ImportStateVerifyIgnore: []string{"remove_instance_state_on_destroy"},
+      },
+      {
+        // Update an existing config
+        Config: testAccComputeRegionPerInstanceConfig_update(context),
+      },
+      {
+        ResourceName:            "google_compute_region_per_instance_config.default",
+        ImportState:             true,
+        ImportStateVerify:       true,
+        ImportStateVerifyIgnore: []string{"remove_instance_state_on_destroy"},
+      },
+    },
+  })
+}
+
+func testAccComputeRegionPerInstanceConfig_statefulBasic(context map[string]interface{}) string {
+  return Nprintf(`
+resource "google_compute_region_per_instance_config" "default" {
+  region = google_compute_region_instance_group_manager.rigm.region
+  region_instance_group_manager = google_compute_region_instance_group_manager.rigm.name
+  name = "%{config_name}"
+  remove_instance_state_on_destroy = true
+  preserved_state {
+    metadata = {
+      asdf = "asdf"
+    }
+  }
+}
+`, context) + testAccComputeRegionPerInstanceConfig_rigm(context)
+}
+
+func testAccComputeRegionPerInstanceConfig_update(context map[string]interface{}) string {
+  return Nprintf(`
+resource "google_compute_region_per_instance_config" "default" {
+  region = google_compute_region_instance_group_manager.rigm.region
+  region_instance_group_manager = google_compute_region_instance_group_manager.rigm.name
+  name = "%{config_name}"
+  remove_instance_state_on_destroy = true
+  preserved_state {
+    metadata = {
+      asdf    = "foo"
+      updated = "12345"
+    }
+  }
+}
+`, context) + testAccComputeRegionPerInstanceConfig_rigm(context)
+}
+
+func testAccComputeRegionPerInstanceConfig_statefulModified(context map[string]interface{}) string {
+  return Nprintf(`
+resource "google_compute_region_per_instance_config" "default" {
+  region = google_compute_region_instance_group_manager.rigm.region
+  region_instance_group_manager = google_compute_region_instance_group_manager.rigm.name
+  name = "%{config_name2}"
+  remove_instance_state_on_destroy = true
+  preserved_state {
+    metadata = {
+      asdf = "asdf"
+    }
+  }
+}
+`, context) + testAccComputeRegionPerInstanceConfig_rigm(context)
+}
+
+func testAccComputeRegionPerInstanceConfig_statefulAdditional(context map[string]interface{}) string {
+  return Nprintf(`
+resource "google_compute_region_per_instance_config" "default" {
+  region = google_compute_region_instance_group_manager.rigm.region
+  region_instance_group_manager = google_compute_region_instance_group_manager.rigm.name
+  name = "%{config_name2}"
+  remove_instance_state_on_destroy = true
+  preserved_state {
+    metadata = {
+      asdf = "asdf"
+    }
+  }
+}
+
+resource "google_compute_region_per_instance_config" "with_disks" {
+  region = google_compute_region_instance_group_manager.rigm.region
+  region_instance_group_manager = google_compute_region_instance_group_manager.rigm.name
+  name = "%{config_name3}"
+  most_disruptive_allowed_action = "REFRESH"
+  minimal_action = "REFRESH"
+  remove_instance_state_on_destroy = true
+  preserved_state {
+    metadata = {
+      meta = "123"
+    }
+
+    disk {
+      device_name = "my-stateful-disk1"
+      source      = google_compute_disk.disk.id
+    }
+
+    disk {
+      device_name = "my-stateful-disk2"
+      source      = google_compute_disk.disk1.id
+    }
+
+    disk {
+      device_name = "my-stateful-disk3"
+      source      = google_compute_disk.disk2.id
+    }
+  }
+}
+
+resource "google_compute_region_per_instance_config" "add2" {
+  region = google_compute_region_instance_group_manager.rigm.region
+  region_instance_group_manager = google_compute_region_instance_group_manager.rigm.name
+  name = "%{config_name4}"
+  preserved_state {
+    metadata = {
+      foo = "abc"
+    }
+  }
+}
+
+resource "google_compute_disk" "disk" {
+  name  = "test-disk-%{random_suffix}"
+  type  = "pd-ssd"
+  zone  = "us-central1-c"
+  image = "debian-8-jessie-v20170523"
+  physical_block_size_bytes = 4096
+}
+
+resource "google_compute_disk" "disk1" {
+  name  = "test-disk2-%{random_suffix}"
+  type  = "pd-ssd"
+  zone  = "us-central1-c"
+  image = "debian-cloud/debian-9"
+  physical_block_size_bytes = 4096
+}
+
+resource "google_compute_disk" "disk2" {
+  name  = "test-disk3-%{random_suffix}"
+  type  = "pd-ssd"
+  zone  = "us-central1-c"
+  image = "https://www.googleapis.com/compute/v1/projects/gce-uefi-images/global/images/centos-7-v20190729"
+  physical_block_size_bytes = 4096
+}
+`, context) + testAccComputeRegionPerInstanceConfig_rigm(context)
+}
+
+func testAccComputeRegionPerInstanceConfig_rigm(context map[string]interface{}) string {
+  return Nprintf(`
+data "google_compute_image" "my_image" {
+  family  = "debian-9"
+  project = "debian-cloud"
+}
+
+resource "google_compute_instance_template" "rigm-basic" {
+  name           = "rigm-temp-%{random_suffix}"
+  machine_type   = "n1-standard-1"
+  can_ip_forward = false
+  tags           = ["foo", "bar"]
+
+  disk {
+    source_image = data.google_compute_image.my_image.self_link
+    auto_delete  = true
+    boot         = true
+    device_name  = "my-stateful-disk"
+  }
+
+  network_interface {
+    network = "default"
+  }
+
+  service_account {
+    scopes = ["userinfo-email", "compute-ro", "storage-ro"]
+  }
+}
+
+resource "google_compute_region_instance_group_manager" "rigm" {
+  description = "Terraform test instance group manager"
+  name        = "rigm-%{random_suffix}"
+
+  version {
+    name              = "prod"
+    instance_template = google_compute_instance_template.rigm-basic.self_link
+  }
+
+  base_instance_name = "rigm-no-tp"
+  region             = "us-central1"
+
+  update_policy {
+    instance_redistribution_type = "NONE"
+    type                         = "OPPORTUNISTIC"
+    minimal_action               = "REPLACE"
+    max_surge_fixed              = 0
+    max_unavailable_fixed        = 6
+    min_ready_sec                = 20
+  }
+}
+`, context)
+}
+
+// Checks that the per instance config with the given name was destroyed
+func testAccCheckComputeRegionPerInstanceConfigDestroyed(t *testing.T, rigmId, configName string) resource.TestCheckFunc {
+  return func(s *terraform.State) error {
+    foundNames, err := testAccComputePerInstanceConfigListNames(t, rigmId)
+    if err != nil {
+      return fmt.Errorf("unable to confirm config with name %s was destroyed: %v", configName, err)
+    }
+    if _, ok := foundNames[configName]; ok {
+      return fmt.Errorf("config with name %s still exists", configName)
+    }
+
+    return nil
+  }
+}
+<% end -%>
diff --git a/third_party/terraform/tests/resource_compute_region_target_http_proxy_test.go.erb b/third_party/terraform/tests/resource_compute_region_target_http_proxy_test.go.erb
index 661ca2072195..125223e31544 100644
--- a/third_party/terraform/tests/resource_compute_region_target_http_proxy_test.go.erb
+++ b/third_party/terraform/tests/resource_compute_region_target_http_proxy_test.go.erb
@@ -1,28 +1,26 @@
 <% autogen_exception -%>
 package google
 
-<% unless version == 'ga' -%>
 import (
   "fmt"
   "testing"
 
-  "github.com/hashicorp/terraform-plugin-sdk/helper/acctest"
   "github.com/hashicorp/terraform-plugin-sdk/helper/resource"
 )
 
 func TestAccComputeRegionTargetHttpProxy_update(t *testing.T) {
   t.Parallel()
 
-  target := fmt.Sprintf("thttp-test-%s", acctest.RandString(10))
-  backend := fmt.Sprintf("thttp-test-%s", acctest.RandString(10))
-  hc := fmt.Sprintf("thttp-test-%s", acctest.RandString(10))
-  urlmap1 := fmt.Sprintf("thttp-test-%s", acctest.RandString(10))
-  urlmap2 := fmt.Sprintf("thttp-test-%s", acctest.RandString(10))
+  target := fmt.Sprintf("thttp-test-%s", randString(t, 10))
+  backend := fmt.Sprintf("thttp-test-%s", randString(t, 10))
+  hc := fmt.Sprintf("thttp-test-%s", randString(t, 10))
+  urlmap1 := fmt.Sprintf("thttp-test-%s", randString(t, 10))
+  urlmap2 := fmt.Sprintf("thttp-test-%s", randString(t, 10))
 
-  resource.Test(t, resource.TestCase{
+  vcrTest(t, resource.TestCase{
     PreCheck:  func() { testAccPreCheck(t) },
     Providers: testAccProviders,
-    CheckDestroy: testAccCheckComputeTargetHttpProxyDestroy,
+    CheckDestroy: testAccCheckComputeTargetHttpProxyDestroyProducer(t),
     Steps: []resource.TestStep{
       {
         Config: testAccComputeRegionTargetHttpProxy_basic1(target, backend, hc, urlmap1, urlmap2),
@@ -177,4 +175,3 @@ resource "google_compute_region_url_map" "foobar2" {
 }
 `, target, backend, hc, urlmap1, urlmap2)
 }
-<% end -%>
diff --git 
a/third_party/terraform/tests/resource_compute_region_target_https_proxy_test.go.erb b/third_party/terraform/tests/resource_compute_region_target_https_proxy_test.go.erb index e38474cdf5b4..1d321115acac 100644 --- a/third_party/terraform/tests/resource_compute_region_target_https_proxy_test.go.erb +++ b/third_party/terraform/tests/resource_compute_region_target_https_proxy_test.go.erb @@ -1,24 +1,22 @@ <% autogen_exception -%> package google -<% unless version == 'ga' -%> import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" ) func TestAccComputeRegionTargetHttpsProxy_update(t *testing.T) { t.Parallel() - resourceSuffix := acctest.RandString(10) + resourceSuffix := randString(t, 10) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeTargetHttpsProxyDestroy, + CheckDestroy: testAccCheckComputeTargetHttpsProxyDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeRegionTargetHttpsProxy_basic1(resourceSuffix), @@ -232,4 +230,3 @@ resource "google_compute_region_ssl_certificate" "foobar2" { } `, id, id, id, id, id, id, id, id, id) } -<% end -%> diff --git a/third_party/terraform/tests/resource_compute_region_url_map_test.go.erb b/third_party/terraform/tests/resource_compute_region_url_map_test.go.erb index 7c63851083be..482770e727d7 100644 --- a/third_party/terraform/tests/resource_compute_region_url_map_test.go.erb +++ b/third_party/terraform/tests/resource_compute_region_url_map_test.go.erb @@ -1,12 +1,10 @@ <% autogen_exception -%> package google -<% unless version == 'ga' -%> import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" ) @@ -14,12 +12,12 @@ import ( func TestAccComputeRegionUrlMap_update_path_matcher(t *testing.T) { t.Parallel() - randomSuffix := acctest.RandString(10) + randomSuffix := randString(t, 10) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeUrlMapDestroy, + CheckDestroy: testAccCheckComputeUrlMapDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeRegionUrlMap_basic1(randomSuffix), @@ -44,12 +42,12 @@ func TestAccComputeRegionUrlMap_update_path_matcher(t *testing.T) { func TestAccComputeRegionUrlMap_advanced(t *testing.T) { t.Parallel() - randomSuffix := acctest.RandString(10) + randomSuffix := randString(t, 10) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeUrlMapDestroy, + CheckDestroy: testAccCheckComputeUrlMapDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeRegionUrlMap_advanced1(randomSuffix), @@ -74,12 +72,12 @@ func TestAccComputeRegionUrlMap_advanced(t *testing.T) { func TestAccComputeRegionUrlMap_noPathRulesWithUpdate(t *testing.T) { t.Parallel() - randomSuffix := acctest.RandString(10) + randomSuffix := randString(t, 10) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeUrlMapDestroy, + CheckDestroy: testAccCheckComputeUrlMapDestroyProducer(t), Steps: []resource.TestStep{ { 
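// Create the URL map without any path rules; later steps import it and then apply the update.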
Config: testAccComputeRegionUrlMap_noPathRules(randomSuffix), @@ -104,12 +102,12 @@ func TestAccComputeRegionUrlMap_noPathRulesWithUpdate(t *testing.T) { func TestAccComputeRegionUrlMap_ilbPathUpdate(t *testing.T) { t.Parallel() - randomSuffix := acctest.RandString(10) + randomSuffix := randString(t, 10) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeUrlMapDestroy, + CheckDestroy: testAccCheckComputeUrlMapDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeRegionUrlMap_ilbPath(randomSuffix), @@ -134,12 +132,12 @@ func TestAccComputeRegionUrlMap_ilbPathUpdate(t *testing.T) { func TestAccComputeRegionUrlMap_ilbRouteUpdate(t *testing.T) { t.Parallel() - randomSuffix := acctest.RandString(10) + randomSuffix := randString(t, 10) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeUrlMapDestroy, + CheckDestroy: testAccCheckComputeUrlMapDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeRegionUrlMap_ilbRoute(randomSuffix), @@ -161,6 +159,28 @@ func TestAccComputeRegionUrlMap_ilbRouteUpdate(t *testing.T) { }) } +func TestAccComputeRegionUrlMap_defaultUrlRedirect(t *testing.T) { + t.Parallel() + + randomSuffix := randString(t, 10) + + vcrTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckComputeUrlMapDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccComputeRegionUrlMap_defaultUrlRedirectConfig(randomSuffix), + }, + { + ResourceName: "google_compute_region_url_map.foobar", + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + func testAccComputeRegionUrlMap_basic1(randomSuffix string) string { return fmt.Sprintf(` resource "google_compute_region_backend_service" "foobar" { @@ -823,4 +843,15 @@ resource "google_compute_region_health_check" "default" { } `, randomSuffix, randomSuffix, randomSuffix, randomSuffix) } -<% end -%> + +func testAccComputeRegionUrlMap_defaultUrlRedirectConfig(randomSuffix string) string { + return fmt.Sprintf(` +resource "google_compute_region_url_map" "foobar" { + name = "urlmap-test-%s" + default_url_redirect { + https_redirect = true + strip_query = false + } +} +`, randomSuffix) +} diff --git a/third_party/terraform/tests/resource_compute_reservation_test.go b/third_party/terraform/tests/resource_compute_reservation_test.go index bc6b0de1e4b4..29b6939cf127 100644 --- a/third_party/terraform/tests/resource_compute_reservation_test.go +++ b/third_party/terraform/tests/resource_compute_reservation_test.go @@ -4,19 +4,18 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" ) func TestAccComputeReservation_update(t *testing.T) { t.Parallel() - reservationName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + reservationName := fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeReservationDestroy, + CheckDestroy: testAccCheckComputeReservationDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeReservation_basic(reservationName, "2"), diff --git 
a/third_party/terraform/tests/resource_compute_resource_policy_test.go b/third_party/terraform/tests/resource_compute_resource_policy_test.go new file mode 100644 index 000000000000..ea0f0c30369c --- /dev/null +++ b/third_party/terraform/tests/resource_compute_resource_policy_test.go @@ -0,0 +1,79 @@ +package google + +import ( + "fmt" + "testing" + + "github.com/hashicorp/terraform-plugin-sdk/helper/resource" +) + +func TestAccComputeResourcePolicy_attached(t *testing.T) { + t.Parallel() + + vcrTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckComputeResourcePolicyDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccComputeResourcePolicy_attached(randString(t, 10)), + }, + { + ResourceName: "google_compute_resource_policy.foo", + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func testAccComputeResourcePolicy_attached(suffix string) string { + return fmt.Sprintf(` +data "google_compute_image" "my_image" { + family = "debian-9" + project = "debian-cloud" +} + +resource "google_compute_instance" "foobar" { + name = "tf-test-%s" + machine_type = "n1-standard-1" + zone = "us-central1-a" + can_ip_forward = false + tags = ["foo", "bar"] + + //deletion_protection = false is implicit in this config due to default value + + boot_disk { + initialize_params { + image = data.google_compute_image.my_image.self_link + } + } + + network_interface { + network = "default" + } + + metadata = { + foo = "bar" + baz = "qux" + startup-script = "echo Hello" + } + + labels = { + my_key = "my_value" + my_other_key = "my_other_value" + } + + resource_policies = [google_compute_resource_policy.foo.self_link] +} + +resource "google_compute_resource_policy" "foo" { + name = "tf-test-policy-%s" + region = "us-central1" + group_placement_policy { + availability_domain_count = 2 + } +} + +`, suffix, suffix) +} diff --git a/third_party/terraform/tests/resource_compute_route_test.go b/third_party/terraform/tests/resource_compute_route_test.go index 34dce3134831..a261393cdc2e 100644 --- a/third_party/terraform/tests/resource_compute_route_test.go +++ b/third_party/terraform/tests/resource_compute_route_test.go @@ -4,20 +4,19 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" ) func TestAccComputeRoute_defaultInternetGateway(t *testing.T) { t.Parallel() - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeRouteDestroy, + CheckDestroy: testAccCheckComputeRouteDestroyProducer(t), Steps: []resource.TestStep{ { - Config: testAccComputeRoute_defaultInternetGateway(), + Config: testAccComputeRoute_defaultInternetGateway(randString(t, 10)), }, { ResourceName: "google_compute_route.foobar", @@ -29,16 +28,16 @@ func TestAccComputeRoute_defaultInternetGateway(t *testing.T) { } func TestAccComputeRoute_hopInstance(t *testing.T) { - instanceName := "tf" + acctest.RandString(10) + instanceName := "tf-test-" + randString(t, 10) zone := "us-central1-b" - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeRouteDestroy, + CheckDestroy: testAccCheckComputeRouteDestroyProducer(t), Steps: []resource.TestStep{ { - Config: testAccComputeRoute_hopInstance(instanceName, zone), + Config: 
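// The random name suffix is now generated in the test body and passed in, rather than inside the config helper.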
testAccComputeRoute_hopInstance(instanceName, zone, randString(t, 10)), }, { ResourceName: "google_compute_route.foobar", @@ -49,7 +48,7 @@ func TestAccComputeRoute_hopInstance(t *testing.T) { }) } -func testAccComputeRoute_defaultInternetGateway() string { +func testAccComputeRoute_defaultInternetGateway(suffix string) string { return fmt.Sprintf(` resource "google_compute_route" "foobar" { name = "route-test-%s" @@ -58,10 +57,10 @@ resource "google_compute_route" "foobar" { next_hop_gateway = "default-internet-gateway" priority = 100 } -`, acctest.RandString(10)) +`, suffix) } -func testAccComputeRoute_hopInstance(instanceName, zone string) string { +func testAccComputeRoute_hopInstance(instanceName, zone, suffix string) string { return fmt.Sprintf(` data "google_compute_image" "my_image" { family = "debian-9" @@ -92,5 +91,5 @@ resource "google_compute_route" "foobar" { next_hop_instance_zone = google_compute_instance.foo.zone priority = 100 } -`, instanceName, zone, acctest.RandString(10)) +`, instanceName, zone, suffix) } diff --git a/third_party/terraform/tests/resource_compute_router_bgp_peer_test.go b/third_party/terraform/tests/resource_compute_router_bgp_peer_test.go index 8ce3fe76712c..621e1f94c3bb 100644 --- a/third_party/terraform/tests/resource_compute_router_bgp_peer_test.go +++ b/third_party/terraform/tests/resource_compute_router_bgp_peer_test.go @@ -4,7 +4,6 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" ) @@ -12,16 +11,16 @@ import ( func TestAccComputeRouterPeer_basic(t *testing.T) { t.Parallel() - testId := acctest.RandString(10) - resource.Test(t, resource.TestCase{ + routerName := fmt.Sprintf("tf-test-router-%s", randString(t, 10)) + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeRouterPeerDestroy, + CheckDestroy: testAccCheckComputeRouterPeerDestroyProducer(t), Steps: []resource.TestStep{ { - Config: testAccComputeRouterPeerBasic(testId), + Config: testAccComputeRouterPeerBasic(routerName), Check: testAccCheckComputeRouterPeerExists( - "google_compute_router_peer.foobar"), + t, "google_compute_router_peer.foobar"), }, { ResourceName: "google_compute_router_peer.foobar", @@ -29,9 +28,9 @@ func TestAccComputeRouterPeer_basic(t *testing.T) { ImportStateVerify: true, }, { - Config: testAccComputeRouterPeerKeepRouter(testId), + Config: testAccComputeRouterPeerKeepRouter(routerName), Check: testAccCheckComputeRouterPeerDelete( - "google_compute_router_peer.foobar"), + t, "google_compute_router_peer.foobar"), }, }, }) @@ -40,16 +39,16 @@ func TestAccComputeRouterPeer_basic(t *testing.T) { func TestAccComputeRouterPeer_advertiseMode(t *testing.T) { t.Parallel() - testId := acctest.RandString(10) - resource.Test(t, resource.TestCase{ + routerName := fmt.Sprintf("tf-test-router-%s", randString(t, 10)) + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeRouterPeerDestroy, + CheckDestroy: testAccCheckComputeRouterPeerDestroyProducer(t), Steps: []resource.TestStep{ { - Config: testAccComputeRouterPeerAdvertiseMode(testId), + Config: testAccComputeRouterPeerAdvertiseMode(routerName), Check: testAccCheckComputeRouterPeerExists( - "google_compute_router_peer.foobar"), + t, "google_compute_router_peer.foobar"), }, { ResourceName: 
"google_compute_router_peer.foobar", @@ -60,42 +59,44 @@ func TestAccComputeRouterPeer_advertiseMode(t *testing.T) { }) } -func testAccCheckComputeRouterPeerDestroy(s *terraform.State) error { - config := testAccProvider.Meta().(*Config) +func testAccCheckComputeRouterPeerDestroyProducer(t *testing.T) func(s *terraform.State) error { + return func(s *terraform.State) error { + config := googleProviderConfig(t) - routersService := config.clientCompute.Routers + routersService := config.clientCompute.Routers - for _, rs := range s.RootModule().Resources { - if rs.Type != "google_compute_router" { - continue - } + for _, rs := range s.RootModule().Resources { + if rs.Type != "google_compute_router" { + continue + } - project, err := getTestProject(rs.Primary, config) - if err != nil { - return err - } + project, err := getTestProject(rs.Primary, config) + if err != nil { + return err + } - region, err := getTestRegion(rs.Primary, config) - if err != nil { - return err - } + region, err := getTestRegion(rs.Primary, config) + if err != nil { + return err + } - routerName := rs.Primary.Attributes["router"] + routerName := rs.Primary.Attributes["router"] - _, err = routersService.Get(project, region, routerName).Do() + _, err = routersService.Get(project, region, routerName).Do() - if err == nil { - return fmt.Errorf("Error, Router %s in region %s still exists", - routerName, region) + if err == nil { + return fmt.Errorf("Error, Router %s in region %s still exists", + routerName, region) + } } - } - return nil + return nil + } } -func testAccCheckComputeRouterPeerDelete(n string) resource.TestCheckFunc { +func testAccCheckComputeRouterPeerDelete(t *testing.T, n string) resource.TestCheckFunc { return func(s *terraform.State) error { - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) routersService := config.clientCompute.Routers @@ -136,7 +137,7 @@ func testAccCheckComputeRouterPeerDelete(n string) resource.TestCheckFunc { } } -func testAccCheckComputeRouterPeerExists(n string) resource.TestCheckFunc { +func testAccCheckComputeRouterPeerExists(t *testing.T, n string) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] if !ok { @@ -147,7 +148,7 @@ func testAccCheckComputeRouterPeerExists(n string) resource.TestCheckFunc { return fmt.Errorf("No ID is set") } - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) project, err := getTestProject(rs.Primary, config) if err != nil { @@ -180,32 +181,32 @@ func testAccCheckComputeRouterPeerExists(n string) resource.TestCheckFunc { } } -func testAccComputeRouterPeerBasic(testId string) string { +func testAccComputeRouterPeerBasic(routerName string) string { return fmt.Sprintf(` resource "google_compute_network" "foobar" { - name = "router-peer-test-%s" + name = "%s-net" } resource "google_compute_subnetwork" "foobar" { - name = "router-peer-test-subnetwork-%s" + name = "%s-subnet" network = google_compute_network.foobar.self_link ip_cidr_range = "10.0.0.0/16" region = "us-central1" } resource "google_compute_address" "foobar" { - name = "router-peer-test-%s" + name = "%s" region = google_compute_subnetwork.foobar.region } resource "google_compute_vpn_gateway" "foobar" { - name = "router-peer-test-%s" + name = "%s-gateway" network = google_compute_network.foobar.self_link region = google_compute_subnetwork.foobar.region } resource "google_compute_forwarding_rule" "foobar_esp" { - name = "router-peer-test-%s-1" + name = "%s-frfr1" region = 
google_compute_vpn_gateway.foobar.region ip_protocol = "ESP" ip_address = google_compute_address.foobar.address @@ -213,7 +214,7 @@ resource "google_compute_forwarding_rule" "foobar_esp" { } resource "google_compute_forwarding_rule" "foobar_udp500" { - name = "router-peer-test-%s-2" + name = "%s-fr2" region = google_compute_forwarding_rule.foobar_esp.region ip_protocol = "UDP" port_range = "500-500" @@ -222,7 +223,7 @@ resource "google_compute_forwarding_rule" "foobar_udp500" { } resource "google_compute_forwarding_rule" "foobar_udp4500" { - name = "router-peer-test-%s-3" + name = "%s-fr3" region = google_compute_forwarding_rule.foobar_udp500.region ip_protocol = "UDP" port_range = "4500-4500" @@ -231,7 +232,7 @@ resource "google_compute_forwarding_rule" "foobar_udp4500" { } resource "google_compute_router" "foobar" { - name = "router-peer-test-%s" + name = "%s" region = google_compute_forwarding_rule.foobar_udp500.region network = google_compute_network.foobar.self_link bgp { @@ -240,7 +241,7 @@ resource "google_compute_router" "foobar" { } resource "google_compute_vpn_tunnel" "foobar" { - name = "router-peer-test-%s" + name = "%s" region = google_compute_forwarding_rule.foobar_udp4500.region target_vpn_gateway = google_compute_vpn_gateway.foobar.self_link shared_secret = "unguessable" @@ -249,7 +250,7 @@ resource "google_compute_vpn_tunnel" "foobar" { } resource "google_compute_router_interface" "foobar" { - name = "router-peer-test-%s" + name = "%s" router = google_compute_router.foobar.name region = google_compute_router.foobar.region ip_range = "169.254.3.1/30" @@ -257,7 +258,7 @@ resource "google_compute_router_interface" "foobar" { } resource "google_compute_router_peer" "foobar" { - name = "router-peer-test-%s" + name = "%s" router = google_compute_router.foobar.name region = google_compute_router.foobar.region peer_ip_address = "169.254.3.2" @@ -265,35 +266,35 @@ resource "google_compute_router_peer" "foobar" { advertised_route_priority = 100 interface = google_compute_router_interface.foobar.name } -`, testId, testId, testId, testId, testId, testId, testId, testId, testId, testId, testId) +`, routerName, routerName, routerName, routerName, routerName, routerName, routerName, routerName, routerName, routerName, routerName) } -func testAccComputeRouterPeerKeepRouter(testId string) string { +func testAccComputeRouterPeerKeepRouter(routerName string) string { return fmt.Sprintf(` resource "google_compute_network" "foobar" { - name = "router-peer-test-%s" + name = "%s-net" } resource "google_compute_subnetwork" "foobar" { - name = "router-peer-test-subnetwork-%s" + name = "%s-subnet" network = google_compute_network.foobar.self_link ip_cidr_range = "10.0.0.0/16" region = "us-central1" } resource "google_compute_address" "foobar" { - name = "router-peer-test-%s" + name = "%s" region = google_compute_subnetwork.foobar.region } resource "google_compute_vpn_gateway" "foobar" { - name = "router-peer-test-%s" + name = "%s-gateway" network = google_compute_network.foobar.self_link region = google_compute_subnetwork.foobar.region } resource "google_compute_forwarding_rule" "foobar_esp" { - name = "router-peer-test-%s-1" + name = "%s-fr1" region = google_compute_vpn_gateway.foobar.region ip_protocol = "ESP" ip_address = google_compute_address.foobar.address @@ -301,7 +302,7 @@ resource "google_compute_forwarding_rule" "foobar_esp" { } resource "google_compute_forwarding_rule" "foobar_udp500" { - name = "router-peer-test-%s-2" + name = "%s-fr2" region = 
google_compute_forwarding_rule.foobar_esp.region ip_protocol = "UDP" port_range = "500-500" @@ -310,7 +311,7 @@ resource "google_compute_forwarding_rule" "foobar_udp500" { } resource "google_compute_forwarding_rule" "foobar_udp4500" { - name = "router-peer-test-%s-3" + name = "%s-fr3" region = google_compute_forwarding_rule.foobar_udp500.region ip_protocol = "UDP" port_range = "4500-4500" @@ -319,7 +320,7 @@ resource "google_compute_forwarding_rule" "foobar_udp4500" { } resource "google_compute_router" "foobar" { - name = "router-peer-test-%s" + name = "%s" region = google_compute_forwarding_rule.foobar_udp500.region network = google_compute_network.foobar.self_link bgp { @@ -328,7 +329,7 @@ resource "google_compute_router" "foobar" { } resource "google_compute_vpn_tunnel" "foobar" { - name = "router-peer-test-%s" + name = "%s" region = google_compute_forwarding_rule.foobar_udp4500.region target_vpn_gateway = google_compute_vpn_gateway.foobar.self_link shared_secret = "unguessable" @@ -337,41 +338,41 @@ resource "google_compute_vpn_tunnel" "foobar" { } resource "google_compute_router_interface" "foobar" { - name = "router-peer-test-%s" + name = "%s" router = google_compute_router.foobar.name region = google_compute_router.foobar.region ip_range = "169.254.3.1/30" vpn_tunnel = google_compute_vpn_tunnel.foobar.name } -`, testId, testId, testId, testId, testId, testId, testId, testId, testId, testId) +`, routerName, routerName, routerName, routerName, routerName, routerName, routerName, routerName, routerName, routerName) } -func testAccComputeRouterPeerAdvertiseMode(testId string) string { +func testAccComputeRouterPeerAdvertiseMode(routerName string) string { return fmt.Sprintf(` resource "google_compute_network" "foobar" { - name = "router-peer-test-%s" + name = "%s-net" } resource "google_compute_subnetwork" "foobar" { - name = "router-peer-test-subnetwork-%s" + name = "%s-subnet" network = google_compute_network.foobar.self_link ip_cidr_range = "10.0.0.0/16" region = "us-central1" } resource "google_compute_address" "foobar" { - name = "router-peer-test-%s" + name = "%s-addr" region = google_compute_subnetwork.foobar.region } resource "google_compute_vpn_gateway" "foobar" { - name = "router-peer-test-%s" + name = "%s-gateway" network = google_compute_network.foobar.self_link region = google_compute_subnetwork.foobar.region } resource "google_compute_forwarding_rule" "foobar_esp" { - name = "router-peer-test-%s-1" + name = "%s-fr1" region = google_compute_vpn_gateway.foobar.region ip_protocol = "ESP" ip_address = google_compute_address.foobar.address @@ -379,7 +380,7 @@ resource "google_compute_forwarding_rule" "foobar_esp" { } resource "google_compute_forwarding_rule" "foobar_udp500" { - name = "router-peer-test-%s-2" + name = "%s-fr2" region = google_compute_forwarding_rule.foobar_esp.region ip_protocol = "UDP" port_range = "500-500" @@ -388,7 +389,7 @@ resource "google_compute_forwarding_rule" "foobar_udp500" { } resource "google_compute_forwarding_rule" "foobar_udp4500" { - name = "router-peer-test-%s-3" + name = "%s-fr3" region = google_compute_forwarding_rule.foobar_udp500.region ip_protocol = "UDP" port_range = "4500-4500" @@ -397,7 +398,7 @@ resource "google_compute_forwarding_rule" "foobar_udp4500" { } resource "google_compute_router" "foobar" { - name = "router-peer-test-%s" + name = "%s" region = google_compute_forwarding_rule.foobar_udp500.region network = google_compute_network.foobar.self_link bgp { @@ -406,7 +407,7 @@ resource "google_compute_router" "foobar" { } resource 
"google_compute_vpn_tunnel" "foobar" { - name = "router-peer-test-%s" + name = "%s" region = google_compute_forwarding_rule.foobar_udp4500.region target_vpn_gateway = google_compute_vpn_gateway.foobar.self_link shared_secret = "unguessable" @@ -415,7 +416,7 @@ resource "google_compute_vpn_tunnel" "foobar" { } resource "google_compute_router_interface" "foobar" { - name = "router-peer-test-%s" + name = "%s" router = google_compute_router.foobar.name region = google_compute_router.foobar.region ip_range = "169.254.3.1/30" @@ -423,7 +424,7 @@ resource "google_compute_router_interface" "foobar" { } resource "google_compute_router_peer" "foobar" { - name = "router-peer-test-%s" + name = "%s" router = google_compute_router.foobar.name region = google_compute_router.foobar.region peer_ip_address = "169.254.3.2" @@ -436,5 +437,5 @@ resource "google_compute_router_peer" "foobar" { } interface = google_compute_router_interface.foobar.name } -`, testId, testId, testId, testId, testId, testId, testId, testId, testId, testId, testId) +`, routerName, routerName, routerName, routerName, routerName, routerName, routerName, routerName, routerName, routerName, routerName) } diff --git a/third_party/terraform/tests/resource_compute_router_interface_test.go b/third_party/terraform/tests/resource_compute_router_interface_test.go index 82afd0773756..cfe7ed59363d 100644 --- a/third_party/terraform/tests/resource_compute_router_interface_test.go +++ b/third_party/terraform/tests/resource_compute_router_interface_test.go @@ -4,7 +4,6 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" ) @@ -12,16 +11,16 @@ import ( func TestAccComputeRouterInterface_basic(t *testing.T) { t.Parallel() - testId := acctest.RandString(10) - resource.Test(t, resource.TestCase{ + routerName := fmt.Sprintf("tf-test-router-%s", randString(t, 10)) + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeRouterInterfaceDestroy, + CheckDestroy: testAccCheckComputeRouterInterfaceDestroyProducer(t), Steps: []resource.TestStep{ { - Config: testAccComputeRouterInterfaceBasic(testId), + Config: testAccComputeRouterInterfaceBasic(routerName), Check: testAccCheckComputeRouterInterfaceExists( - "google_compute_router_interface.foobar"), + t, "google_compute_router_interface.foobar"), }, { ResourceName: "google_compute_router_interface.foobar", @@ -29,9 +28,9 @@ func TestAccComputeRouterInterface_basic(t *testing.T) { ImportStateVerify: true, }, { - Config: testAccComputeRouterInterfaceKeepRouter(testId), + Config: testAccComputeRouterInterfaceKeepRouter(routerName), Check: testAccCheckComputeRouterInterfaceDelete( - "google_compute_router_interface.foobar"), + t, "google_compute_router_interface.foobar"), }, }, }) @@ -40,16 +39,16 @@ func TestAccComputeRouterInterface_basic(t *testing.T) { func TestAccComputeRouterInterface_withTunnel(t *testing.T) { t.Parallel() - testId := acctest.RandString(10) - resource.Test(t, resource.TestCase{ + routerName := fmt.Sprintf("tf-test-router-%s", randString(t, 10)) + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeRouterInterfaceDestroy, + CheckDestroy: testAccCheckComputeRouterInterfaceDestroyProducer(t), Steps: []resource.TestStep{ { - Config: testAccComputeRouterInterfaceWithTunnel(testId), + 
Config: testAccComputeRouterInterfaceWithTunnel(routerName), Check: testAccCheckComputeRouterInterfaceExists( - "google_compute_router_interface.foobar"), + t, "google_compute_router_interface.foobar"), }, { ResourceName: "google_compute_router_interface.foobar", @@ -60,42 +59,44 @@ func TestAccComputeRouterInterface_withTunnel(t *testing.T) { }) } -func testAccCheckComputeRouterInterfaceDestroy(s *terraform.State) error { - config := testAccProvider.Meta().(*Config) +func testAccCheckComputeRouterInterfaceDestroyProducer(t *testing.T) func(s *terraform.State) error { + return func(s *terraform.State) error { + config := googleProviderConfig(t) - routersService := config.clientCompute.Routers + routersService := config.clientCompute.Routers - for _, rs := range s.RootModule().Resources { - if rs.Type != "google_compute_router" { - continue - } + for _, rs := range s.RootModule().Resources { + if rs.Type != "google_compute_router" { + continue + } - project, err := getTestProject(rs.Primary, config) - if err != nil { - return err - } + project, err := getTestProject(rs.Primary, config) + if err != nil { + return err + } - region, err := getTestRegion(rs.Primary, config) - if err != nil { - return err - } + region, err := getTestRegion(rs.Primary, config) + if err != nil { + return err + } - routerName := rs.Primary.Attributes["router"] + routerName := rs.Primary.Attributes["router"] - _, err = routersService.Get(project, region, routerName).Do() + _, err = routersService.Get(project, region, routerName).Do() - if err == nil { - return fmt.Errorf("Error, Router %s in region %s still exists", - routerName, region) + if err == nil { + return fmt.Errorf("Error, Router %s in region %s still exists", + routerName, region) + } } - } - return nil + return nil + } } -func testAccCheckComputeRouterInterfaceDelete(n string) resource.TestCheckFunc { +func testAccCheckComputeRouterInterfaceDelete(t *testing.T, n string) resource.TestCheckFunc { return func(s *terraform.State) error { - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) routersService := config.clientCompute.Routers @@ -136,7 +137,7 @@ func testAccCheckComputeRouterInterfaceDelete(n string) resource.TestCheckFunc { } } -func testAccCheckComputeRouterInterfaceExists(n string) resource.TestCheckFunc { +func testAccCheckComputeRouterInterfaceExists(t *testing.T, n string) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] if !ok { @@ -147,7 +148,7 @@ func testAccCheckComputeRouterInterfaceExists(n string) resource.TestCheckFunc { return fmt.Errorf("No ID is set") } - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) project, err := getTestProject(rs.Primary, config) if err != nil { @@ -180,32 +181,32 @@ func testAccCheckComputeRouterInterfaceExists(n string) resource.TestCheckFunc { } } -func testAccComputeRouterInterfaceBasic(testId string) string { +func testAccComputeRouterInterfaceBasic(routerName string) string { return fmt.Sprintf(` resource "google_compute_network" "foobar" { - name = "router-interface-test-%s" + name = "%s-net" } resource "google_compute_subnetwork" "foobar" { - name = "router-interface-test-subnetwork-%s" + name = "%s-subnet" network = google_compute_network.foobar.self_link ip_cidr_range = "10.0.0.0/16" region = "us-central1" } resource "google_compute_address" "foobar" { - name = "router-interface-test-%s" + name = "%s-addr" region = google_compute_subnetwork.foobar.region } resource 
"google_compute_vpn_gateway" "foobar" { - name = "router-interface-test-%s" + name = "%s-gateway" network = google_compute_network.foobar.self_link region = google_compute_subnetwork.foobar.region } resource "google_compute_forwarding_rule" "foobar_esp" { - name = "router-interface-test-%s-1" + name = "%s-fr1" region = google_compute_vpn_gateway.foobar.region ip_protocol = "ESP" ip_address = google_compute_address.foobar.address @@ -213,7 +214,7 @@ resource "google_compute_forwarding_rule" "foobar_esp" { } resource "google_compute_forwarding_rule" "foobar_udp500" { - name = "router-interface-test-%s-2" + name = "%s-fr2" region = google_compute_forwarding_rule.foobar_esp.region ip_protocol = "UDP" port_range = "500-500" @@ -222,7 +223,7 @@ resource "google_compute_forwarding_rule" "foobar_udp500" { } resource "google_compute_forwarding_rule" "foobar_udp4500" { - name = "router-interface-test-%s-3" + name = "%s-fr3" region = google_compute_forwarding_rule.foobar_udp500.region ip_protocol = "UDP" port_range = "4500-4500" @@ -231,7 +232,7 @@ resource "google_compute_forwarding_rule" "foobar_udp4500" { } resource "google_compute_router" "foobar" { - name = "router-interface-test-%s" + name = "%s" region = google_compute_forwarding_rule.foobar_udp500.region network = google_compute_network.foobar.self_link bgp { @@ -240,40 +241,40 @@ resource "google_compute_router" "foobar" { } resource "google_compute_router_interface" "foobar" { - name = "router-interface-test-%s" + name = "%s" router = google_compute_router.foobar.name region = google_compute_router.foobar.region ip_range = "169.254.3.1/30" } -`, testId, testId, testId, testId, testId, testId, testId, testId, testId) +`, routerName, routerName, routerName, routerName, routerName, routerName, routerName, routerName, routerName) } -func testAccComputeRouterInterfaceKeepRouter(testId string) string { +func testAccComputeRouterInterfaceKeepRouter(routerName string) string { return fmt.Sprintf(` resource "google_compute_network" "foobar" { - name = "router-interface-test-%s" + name = "tf-test-%s" } resource "google_compute_subnetwork" "foobar" { - name = "router-interface-test-subnetwork-%s" + name = "tf-test-router-interface-subnetwork-%s" network = google_compute_network.foobar.self_link ip_cidr_range = "10.0.0.0/16" region = "us-central1" } resource "google_compute_address" "foobar" { - name = "router-interface-test-%s" + name = "%s" region = google_compute_subnetwork.foobar.region } resource "google_compute_vpn_gateway" "foobar" { - name = "router-interface-test-%s" + name = "%s" network = google_compute_network.foobar.self_link region = google_compute_subnetwork.foobar.region } resource "google_compute_forwarding_rule" "foobar_esp" { - name = "router-interface-test-%s-1" + name = "%s-1" region = google_compute_vpn_gateway.foobar.region ip_protocol = "ESP" ip_address = google_compute_address.foobar.address @@ -281,7 +282,7 @@ resource "google_compute_forwarding_rule" "foobar_esp" { } resource "google_compute_forwarding_rule" "foobar_udp500" { - name = "router-interface-test-%s-2" + name = "%s-2" region = google_compute_forwarding_rule.foobar_esp.region ip_protocol = "UDP" port_range = "500-500" @@ -290,7 +291,7 @@ resource "google_compute_forwarding_rule" "foobar_udp500" { } resource "google_compute_forwarding_rule" "foobar_udp4500" { - name = "router-interface-test-%s-3" + name = "%s-3" region = google_compute_forwarding_rule.foobar_udp500.region ip_protocol = "UDP" port_range = "4500-4500" @@ -299,42 +300,42 @@ resource 
"google_compute_forwarding_rule" "foobar_udp4500" { } resource "google_compute_router" "foobar" { - name = "router-interface-test-%s" + name = "%s" region = google_compute_forwarding_rule.foobar_udp500.region network = google_compute_network.foobar.self_link bgp { asn = 64514 } } -`, testId, testId, testId, testId, testId, testId, testId, testId) +`, routerName, routerName, routerName, routerName, routerName, routerName, routerName, routerName) } -func testAccComputeRouterInterfaceWithTunnel(testId string) string { +func testAccComputeRouterInterfaceWithTunnel(routerName string) string { return fmt.Sprintf(` resource "google_compute_network" "foobar" { - name = "router-interface-test-%s" + name = "tf-test-%s" } resource "google_compute_subnetwork" "foobar" { - name = "router-interface-test-subnetwork-%s" + name = "tf-test-router-interface-subnetwork-%s" network = google_compute_network.foobar.self_link ip_cidr_range = "10.0.0.0/16" region = "us-central1" } resource "google_compute_address" "foobar" { - name = "router-interface-test-%s" + name = "%s-addr" region = google_compute_subnetwork.foobar.region } resource "google_compute_vpn_gateway" "foobar" { - name = "router-interface-test-%s" + name = "%s-gateway" network = google_compute_network.foobar.self_link region = google_compute_subnetwork.foobar.region } resource "google_compute_forwarding_rule" "foobar_esp" { - name = "router-interface-test-%s-1" + name = "%s-fr1" region = google_compute_vpn_gateway.foobar.region ip_protocol = "ESP" ip_address = google_compute_address.foobar.address @@ -342,7 +343,7 @@ resource "google_compute_forwarding_rule" "foobar_esp" { } resource "google_compute_forwarding_rule" "foobar_udp500" { - name = "router-interface-test-%s-2" + name = "%s-fr2" region = google_compute_forwarding_rule.foobar_esp.region ip_protocol = "UDP" port_range = "500-500" @@ -351,7 +352,7 @@ resource "google_compute_forwarding_rule" "foobar_udp500" { } resource "google_compute_forwarding_rule" "foobar_udp4500" { - name = "router-interface-test-%s-3" + name = "%s-fr3" region = google_compute_forwarding_rule.foobar_udp500.region ip_protocol = "UDP" port_range = "4500-4500" @@ -360,7 +361,7 @@ resource "google_compute_forwarding_rule" "foobar_udp4500" { } resource "google_compute_router" "foobar" { - name = "router-interface-test-%s" + name = "%s" region = google_compute_forwarding_rule.foobar_udp500.region network = google_compute_network.foobar.self_link bgp { @@ -369,7 +370,7 @@ resource "google_compute_router" "foobar" { } resource "google_compute_vpn_tunnel" "foobar" { - name = "router-interface-test-%s" + name = "%s" region = google_compute_forwarding_rule.foobar_udp4500.region target_vpn_gateway = google_compute_vpn_gateway.foobar.self_link shared_secret = "unguessable" @@ -378,11 +379,11 @@ resource "google_compute_vpn_tunnel" "foobar" { } resource "google_compute_router_interface" "foobar" { - name = "router-interface-test-%s" + name = "%s" router = google_compute_router.foobar.name region = google_compute_router.foobar.region ip_range = "169.254.3.1/30" vpn_tunnel = google_compute_vpn_tunnel.foobar.name } -`, testId, testId, testId, testId, testId, testId, testId, testId, testId, testId) +`, routerName, routerName, routerName, routerName, routerName, routerName, routerName, routerName, routerName, routerName) } diff --git a/third_party/terraform/tests/resource_compute_router_nat_test.go.erb b/third_party/terraform/tests/resource_compute_router_nat_test.go.erb index c3426571cb21..ff060103b805 100644 --- 
a/third_party/terraform/tests/resource_compute_router_nat_test.go.erb +++ b/third_party/terraform/tests/resource_compute_router_nat_test.go.erb @@ -6,7 +6,6 @@ import ( "regexp" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" ) @@ -17,43 +16,45 @@ func TestAccComputeRouterNat_basic(t *testing.T) { project := getTestProjectFromEnv() region := getTestRegionFromEnv() - testId := acctest.RandString(10) - resource.Test(t, resource.TestCase{ + testId := randString(t, 10) + routerName := fmt.Sprintf("tf-test-router-nat-%s", testId) + + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeRouterNatDestroy, + CheckDestroy: testAccCheckComputeRouterNatDestroyProducer(t), Steps: []resource.TestStep{ { - Config: testAccComputeRouterNatBasic(testId), + Config: testAccComputeRouterNatBasic(routerName), }, { + // implicitly full ImportStateId ResourceName: "google_compute_router_nat.foobar", - // implicitly: ImportStateId: fmt.Sprintf("%s/%s/router-nat-test-%s/router-nat-test-%s", project, region, testId, testId), ImportState: true, ImportStateVerify: true, }, { ResourceName: "google_compute_router_nat.foobar", - ImportStateId: fmt.Sprintf("%s/%s/router-nat-test-%s/router-nat-test-%s", project, region, testId, testId), + ImportStateId: fmt.Sprintf("%s/%s/%s/%s", project, region, routerName, routerName), ImportState: true, ImportStateVerify: true, }, { ResourceName: "google_compute_router_nat.foobar", - ImportStateId: fmt.Sprintf("%s/router-nat-test-%s/router-nat-test-%s", region, testId, testId), + ImportStateId: fmt.Sprintf("%s/%s/%s", region, routerName, routerName), ImportState: true, ImportStateVerify: true, }, { ResourceName: "google_compute_router_nat.foobar", - ImportStateId: fmt.Sprintf("router-nat-test-%s/router-nat-test-%s", testId, testId), + ImportStateId: fmt.Sprintf("%s/%s", routerName, routerName), ImportState: true, ImportStateVerify: true, }, { - Config: testAccComputeRouterNatKeepRouter(testId), + Config: testAccComputeRouterNatKeepRouter(routerName), Check: testAccCheckComputeRouterNatDelete( - "google_compute_router_nat.foobar"), + t, "google_compute_router_nat.foobar"), }, }, }) @@ -62,14 +63,16 @@ func TestAccComputeRouterNat_basic(t *testing.T) { func TestAccComputeRouterNat_update(t *testing.T) { t.Parallel() - testId := acctest.RandString(10) - resource.Test(t, resource.TestCase{ + testId := randString(t, 10) + routerName := fmt.Sprintf("tf-test-router-nat-%s", testId) + + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeRouterNatDestroy, + CheckDestroy: testAccCheckComputeRouterNatDestroyProducer(t), Steps: []resource.TestStep{ { - Config: testAccComputeRouterNatBasicBeforeUpdate(testId), + Config: testAccComputeRouterNatBasicBeforeUpdate(routerName), }, { ResourceName: "google_compute_router_nat.foobar", @@ -77,7 +80,7 @@ func TestAccComputeRouterNat_update(t *testing.T) { ImportStateVerify: true, }, { - Config: testAccComputeRouterNatUpdated(testId), + Config: testAccComputeRouterNatUpdated(routerName), }, { ResourceName: "google_compute_router_nat.foobar", @@ -85,7 +88,7 @@ func TestAccComputeRouterNat_update(t *testing.T) { ImportStateVerify: true, }, { - Config: testAccComputeRouterNatBasicBeforeUpdate(testId), + Config: testAccComputeRouterNatBasicBeforeUpdate(routerName), 
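// Re-applying the original config verifies the NAT update is cleanly reversible.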
}, { ResourceName: "google_compute_router_nat.foobar", @@ -99,14 +102,16 @@ func TestAccComputeRouterNat_update(t *testing.T) { func TestAccComputeRouterNat_withManualIpAndSubnetConfiguration(t *testing.T) { t.Parallel() - testId := acctest.RandString(10) - resource.Test(t, resource.TestCase{ + testId := randString(t, 10) + routerName := fmt.Sprintf("tf-test-router-nat-%s", testId) + + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeRouterNatDestroy, + CheckDestroy: testAccCheckComputeRouterNatDestroyProducer(t), Steps: []resource.TestStep{ { - Config: testAccComputeRouterNatWithManualIpAndSubnetConfiguration(testId), + Config: testAccComputeRouterNatWithManualIpAndSubnetConfiguration(routerName), }, { ResourceName: "google_compute_router_nat.foobar", @@ -121,20 +126,22 @@ func TestAccComputeRouterNat_withManualIpAndSubnetConfiguration(t *testing.T) { func TestAccComputeRouterNat_withNatIpsAndDrainNatIps(t *testing.T) { t.Parallel() - testId := acctest.RandString(10) - resource.Test(t, resource.TestCase{ + testId := randString(t, 10) + routerName := fmt.Sprintf("tf-test-router-nat-%s", testId) + + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeRouterNatDestroy, + CheckDestroy: testAccCheckComputeRouterNatDestroyProducer(t), Steps: []resource.TestStep{ // (ERROR): Creation with drain nat IPs should fail { - Config: testAccComputeRouterNatWithOneDrainOneRemovedNatIps(testId), + Config: testAccComputeRouterNatWithOneDrainOneRemovedNatIps(routerName), ExpectError: regexp.MustCompile("New RouterNat cannot have drain_nat_ips"), }, // Create NAT with three nat IPs { - Config: testAccComputeRouterNatWithNatIps(testId), + Config: testAccComputeRouterNatWithNatIps(routerName), }, { ResourceName: "google_compute_router_nat.foobar", @@ -143,12 +150,12 @@ func TestAccComputeRouterNat_withNatIpsAndDrainNatIps(t *testing.T) { }, // (ERROR) - Should not allow draining IPs still in natIps { - Config: testAccComputeRouterNatWithInvalidDrainNatIpsStillInNatIps(testId), + Config: testAccComputeRouterNatWithInvalidDrainNatIpsStillInNatIps(routerName), ExpectError: regexp.MustCompile("cannot be drained if still set in nat_ips"), }, // natIps #1, #2, #3--> natIp #2, drainNatIp #3 { - Config: testAccComputeRouterNatWithOneDrainOneRemovedNatIps(testId), + Config: testAccComputeRouterNatWithOneDrainOneRemovedNatIps(routerName), }, { ResourceName: "google_compute_router_nat.foobar", @@ -157,7 +164,7 @@ func TestAccComputeRouterNat_withNatIpsAndDrainNatIps(t *testing.T) { }, // (ERROR): Should not be able to drain previously removed natIps (#1) { - Config: testAccComputeRouterNatWithInvalidDrainMissingNatIp(testId), + Config: testAccComputeRouterNatWithInvalidDrainMissingNatIp(routerName), ExpectError: regexp.MustCompile("was not previously set in nat_ips"), }, }, @@ -166,41 +173,43 @@ func TestAccComputeRouterNat_withNatIpsAndDrainNatIps(t *testing.T) { <% end -%> -func testAccCheckComputeRouterNatDestroy(s *terraform.State) error { - config := testAccProvider.Meta().(*Config) +func testAccCheckComputeRouterNatDestroyProducer(t *testing.T) func(s *terraform.State) error { + return func(s *terraform.State) error { + config := googleProviderConfig(t) - routersService := config.clientCompute.Routers + routersService := config.clientCompute.Routers - for _, rs := range s.RootModule().Resources { - if rs.Type != "google_compute_router" { - continue 
- } + for _, rs := range s.RootModule().Resources { + if rs.Type != "google_compute_router" { + continue + } - project, err := getTestProject(rs.Primary, config) - if err != nil { - return err - } + project, err := getTestProject(rs.Primary, config) + if err != nil { + return err + } - region, err := getTestRegion(rs.Primary, config) - if err != nil { - return err - } + region, err := getTestRegion(rs.Primary, config) + if err != nil { + return err + } - routerName := rs.Primary.Attributes["router"] + routerName := rs.Primary.Attributes["router"] - _, err = routersService.Get(project, region, routerName).Do() + _, err = routersService.Get(project, region, routerName).Do() - if err == nil { - return fmt.Errorf("Error, Router %s in region %s still exists", routerName, region) + if err == nil { + return fmt.Errorf("Error, Router %s in region %s still exists", routerName, region) + } } - } - return nil + return nil + } } -func testAccCheckComputeRouterNatDelete(n string) resource.TestCheckFunc { +func testAccCheckComputeRouterNatDelete(t *testing.T, n string) resource.TestCheckFunc { return func(s *terraform.State) error { - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) routersService := config.clientComputeBeta.Routers @@ -240,27 +249,27 @@ func testAccCheckComputeRouterNatDelete(n string) resource.TestCheckFunc { } } -func testAccComputeRouterNatBasic(testId string) string { +func testAccComputeRouterNatBasic(routerName string) string { return fmt.Sprintf(` resource "google_compute_network" "foobar" { - name = "router-nat-test-%s" + name = "%s-net" } resource "google_compute_subnetwork" "foobar" { - name = "router-nat-test-subnetwork-%s" + name = "%s-subnet" network = google_compute_network.foobar.self_link ip_cidr_range = "10.0.0.0/16" region = "us-central1" } resource "google_compute_router" "foobar" { - name = "router-nat-test-%s" + name = "%s" region = google_compute_subnetwork.foobar.region network = google_compute_network.foobar.self_link } resource "google_compute_router_nat" "foobar" { - name = "router-nat-test-%s" + name = "%s" router = google_compute_router.foobar.name region = google_compute_router.foobar.region nat_ip_allocate_option = "AUTO_ONLY" @@ -270,36 +279,36 @@ resource "google_compute_router_nat" "foobar" { filter = "ERRORS_ONLY" } } -`, testId, testId, testId, testId) +`, routerName, routerName, routerName, routerName) } // Like basic but with extra resources -func testAccComputeRouterNatBasicBeforeUpdate(randPrefix string) string { +func testAccComputeRouterNatBasicBeforeUpdate(routerName string) string { return fmt.Sprintf(` resource "google_compute_router" "foobar" { - name = "router-nat-test-%s" + name = "%s" region = google_compute_subnetwork.foobar.region network = google_compute_network.foobar.self_link } resource "google_compute_network" "foobar" { - name = "router-nat-test-%s" + name = "%s-net" } resource "google_compute_subnetwork" "foobar" { - name = "router-nat-test-subnetwork-%s" + name = "%s-subnet" network = google_compute_network.foobar.self_link ip_cidr_range = "10.0.0.0/16" region = "us-central1" } resource "google_compute_address" "foobar" { - name = "router-nat-test-%s" + name = "%s-addr" region = google_compute_subnetwork.foobar.region } resource "google_compute_router_nat" "foobar" { - name = "router-nat-test-%s" + name = "%s" router = google_compute_router.foobar.name region = google_compute_router.foobar.region nat_ip_allocate_option = "AUTO_ONLY" @@ -310,35 +319,35 @@ resource "google_compute_router_nat" "foobar" 
{ filter = "ERRORS_ONLY" } } -`, randPrefix, randPrefix, randPrefix, randPrefix, randPrefix) +`, routerName, routerName, routerName, routerName, routerName) } -func testAccComputeRouterNatUpdated(randPrefix string) string { +func testAccComputeRouterNatUpdated(routerName string) string { return fmt.Sprintf(` resource "google_compute_router" "foobar" { - name = "router-nat-test-%s" + name = "%s" region = google_compute_subnetwork.foobar.region network = google_compute_network.foobar.self_link } resource "google_compute_network" "foobar" { - name = "router-nat-test-%s" + name = "%s-net" } resource "google_compute_subnetwork" "foobar" { - name = "router-nat-test-subnetwork-%s" + name = "%s-subnet" network = google_compute_network.foobar.self_link ip_cidr_range = "10.0.0.0/16" region = "us-central1" } resource "google_compute_address" "foobar" { - name = "router-nat-test-%s" + name = "%s-addr" region = google_compute_subnetwork.foobar.region } resource "google_compute_router_nat" "foobar" { - name = "router-nat-test-%s" + name = "%s" router = google_compute_router.foobar.name region = google_compute_router.foobar.region @@ -362,30 +371,30 @@ resource "google_compute_router_nat" "foobar" { filter = "TRANSLATIONS_ONLY" } } -`, randPrefix, randPrefix, randPrefix, randPrefix, randPrefix) +`, routerName, routerName, routerName, routerName, routerName) } -func testAccComputeRouterNatWithManualIpAndSubnetConfiguration(testId string) string { +func testAccComputeRouterNatWithManualIpAndSubnetConfiguration(routerName string) string { return fmt.Sprintf(` resource "google_compute_network" "foobar" { - name = "router-nat-test-%s" + name = "%s-net" auto_create_subnetworks = "false" } resource "google_compute_subnetwork" "foobar" { - name = "router-nat-test-subnetwork-%s" + name = "%s-subnet" network = google_compute_network.foobar.self_link ip_cidr_range = "10.0.0.0/16" region = "us-central1" } resource "google_compute_address" "foobar" { - name = "router-nat-test-%s" + name = "router-nat-%s-addr" region = google_compute_subnetwork.foobar.region } resource "google_compute_router" "foobar" { - name = "router-nat-test-%s" + name = "%s" region = google_compute_subnetwork.foobar.region network = google_compute_network.foobar.self_link bgp { @@ -394,7 +403,7 @@ resource "google_compute_router" "foobar" { } resource "google_compute_router_nat" "foobar" { - name = "router-nat-test-%s" + name = "%s" router = google_compute_router.foobar.name region = google_compute_router.foobar.region nat_ip_allocate_option = "MANUAL_ONLY" @@ -405,53 +414,53 @@ resource "google_compute_router_nat" "foobar" { source_ip_ranges_to_nat = ["ALL_IP_RANGES"] } } -`, testId, testId, testId, testId, testId) +`, routerName, routerName, routerName, routerName, routerName) } <% unless version == 'ga' -%> -func testAccComputeRouterNatBaseResourcesWithNatIps(testId string) string { +func testAccComputeRouterNatBaseResourcesWithNatIps(routerName string) string { return fmt.Sprintf(` resource "google_compute_network" "foobar" { - name = "router-nat-test-%s" + name = "%s-net" auto_create_subnetworks = "false" } resource "google_compute_subnetwork" "foobar" { - name = "router-nat-test-subnetwork-%s" + name = "%s-subnet" network = google_compute_network.foobar.self_link ip_cidr_range = "10.0.0.0/16" region = "us-central1" } resource "google_compute_address" "addr1" { - name = "router-nat-test-%s-1" + name = "%s-addr1" region = google_compute_subnetwork.foobar.region } resource "google_compute_address" "addr2" { - name = "router-nat-test-%s-2" + 
name = "%s-addr2" region = google_compute_subnetwork.foobar.region } resource "google_compute_address" "addr3" { - name = "router-nat-test-%s-3" + name = "%s-addr3" region = google_compute_subnetwork.foobar.region } resource "google_compute_router" "foobar" { - name = "router-nat-test-%s" + name = "%s" region = google_compute_subnetwork.foobar.region network = google_compute_network.foobar.self_link } -`, testId, testId, testId, testId, testId, testId) +`, routerName, routerName, routerName, routerName, routerName, routerName) } -func testAccComputeRouterNatWithNatIps(testId string) string { +func testAccComputeRouterNatWithNatIps(routerName string) string { return fmt.Sprintf(` %s resource "google_compute_router_nat" "foobar" { - name = "router-nat-test-%s" + name = "%s" router = google_compute_router.foobar.name region = google_compute_router.foobar.region @@ -468,15 +477,15 @@ resource "google_compute_router_nat" "foobar" { source_ip_ranges_to_nat = ["ALL_IP_RANGES"] } } -`, testAccComputeRouterNatBaseResourcesWithNatIps(testId), testId) +`, testAccComputeRouterNatBaseResourcesWithNatIps(routerName), routerName) } -func testAccComputeRouterNatWithOneDrainOneRemovedNatIps(testId string) string { +func testAccComputeRouterNatWithOneDrainOneRemovedNatIps(routerName string) string { return fmt.Sprintf(` %s resource "google_compute_router_nat" "foobar" { - name = "router-nat-test-%s" + name = "%s" router = google_compute_router.foobar.name region = google_compute_router.foobar.region @@ -495,15 +504,15 @@ resource "google_compute_router_nat" "foobar" { google_compute_address.addr3.self_link, ] } -`, testAccComputeRouterNatBaseResourcesWithNatIps(testId), testId) +`, testAccComputeRouterNatBaseResourcesWithNatIps(routerName), routerName) } -func testAccComputeRouterNatWithInvalidDrainMissingNatIp(testId string) string { +func testAccComputeRouterNatWithInvalidDrainMissingNatIp(routerName string) string { return fmt.Sprintf(` %s resource "google_compute_router_nat" "foobar" { - name = "router-nat-test-%s" + name = "%s" router = google_compute_router.foobar.name region = google_compute_router.foobar.region @@ -523,15 +532,15 @@ resource "google_compute_router_nat" "foobar" { google_compute_address.addr3.self_link, ] } -`, testAccComputeRouterNatBaseResourcesWithNatIps(testId), testId) +`, testAccComputeRouterNatBaseResourcesWithNatIps(routerName), routerName) } -func testAccComputeRouterNatWithInvalidDrainNatIpsStillInNatIps(testId string) string { +func testAccComputeRouterNatWithInvalidDrainNatIpsStillInNatIps(routerName string) string { return fmt.Sprintf(` %s resource "google_compute_router_nat" "foobar" { - name = "router-nat-test-%s" + name = "%s" router = google_compute_router.foobar.name region = google_compute_router.foobar.region @@ -552,29 +561,29 @@ resource "google_compute_router_nat" "foobar" { google_compute_address.addr3.self_link, ] } -`, testAccComputeRouterNatBaseResourcesWithNatIps(testId), testId) +`, testAccComputeRouterNatBaseResourcesWithNatIps(routerName), routerName) } <% end -%> -func testAccComputeRouterNatKeepRouter(testId string) string { +func testAccComputeRouterNatKeepRouter(routerName string) string { return fmt.Sprintf(` resource "google_compute_network" "foobar" { - name = "router-nat-test-%s" + name = "%s" auto_create_subnetworks = "false" } resource "google_compute_subnetwork" "foobar" { - name = "router-nat-test-subnetwork-%s" + name = "%s" network = google_compute_network.foobar.self_link ip_cidr_range = "10.0.0.0/16" region = "us-central1" } resource 
"google_compute_router" "foobar" { - name = "router-nat-test-%s" + name = "%s" region = google_compute_subnetwork.foobar.region network = google_compute_network.foobar.self_link } -`, testId, testId, testId) +`, routerName, routerName, routerName) } diff --git a/third_party/terraform/tests/resource_compute_router_test.go b/third_party/terraform/tests/resource_compute_router_test.go index d0f043c89ea9..2b0eb304d072 100644 --- a/third_party/terraform/tests/resource_compute_router_test.go +++ b/third_party/terraform/tests/resource_compute_router_test.go @@ -4,22 +4,22 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" ) func TestAccComputeRouter_basic(t *testing.T) { t.Parallel() - testId := acctest.RandString(10) + testId := randString(t, 10) + routerName := fmt.Sprintf("tf-test-router-%s", testId) resourceRegion := "europe-west1" - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeRouterDestroy, + CheckDestroy: testAccCheckComputeRouterDestroyProducer(t), Steps: []resource.TestStep{ { - Config: testAccComputeRouterBasic(testId, resourceRegion), + Config: testAccComputeRouterBasic(routerName, resourceRegion), }, { ResourceName: "google_compute_router.foobar", @@ -33,15 +33,16 @@ func TestAccComputeRouter_basic(t *testing.T) { func TestAccComputeRouter_noRegion(t *testing.T) { t.Parallel() - testId := acctest.RandString(10) + testId := randString(t, 10) + routerName := fmt.Sprintf("tf-test-router-%s", testId) providerRegion := "us-central1" - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeRouterDestroy, + CheckDestroy: testAccCheckComputeRouterDestroyProducer(t), Steps: []resource.TestStep{ { - Config: testAccComputeRouterNoRegion(testId, providerRegion), + Config: testAccComputeRouterNoRegion(routerName, providerRegion), }, { ResourceName: "google_compute_router.foobar", @@ -55,14 +56,15 @@ func TestAccComputeRouter_noRegion(t *testing.T) { func TestAccComputeRouter_full(t *testing.T) { t.Parallel() - testId := acctest.RandString(10) - resource.Test(t, resource.TestCase{ + testId := randString(t, 10) + routerName := fmt.Sprintf("tf-test-router-%s", testId) + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeRouterDestroy, + CheckDestroy: testAccCheckComputeRouterDestroyProducer(t), Steps: []resource.TestStep{ { - Config: testAccComputeRouterFull(testId), + Config: testAccComputeRouterFull(routerName), }, { ResourceName: "google_compute_router.foobar", @@ -76,15 +78,16 @@ func TestAccComputeRouter_full(t *testing.T) { func TestAccComputeRouter_update(t *testing.T) { t.Parallel() - testId := acctest.RandString(10) + testId := randString(t, 10) + routerName := fmt.Sprintf("tf-test-router-%s", testId) region := getTestRegionFromEnv() - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeRouterDestroy, + CheckDestroy: testAccCheckComputeRouterDestroyProducer(t), Steps: []resource.TestStep{ { - Config: testAccComputeRouterBasic(testId, region), + Config: testAccComputeRouterBasic(routerName, region), }, { ResourceName: "google_compute_router.foobar", @@ 
-92,7 +95,7 @@ func TestAccComputeRouter_update(t *testing.T) { ImportStateVerify: true, }, { - Config: testAccComputeRouterFull(testId), + Config: testAccComputeRouterFull(routerName), }, { ResourceName: "google_compute_router.foobar", @@ -100,7 +103,7 @@ func TestAccComputeRouter_update(t *testing.T) { ImportStateVerify: true, }, { - Config: testAccComputeRouterBasic(testId, region), + Config: testAccComputeRouterBasic(routerName, region), }, { ResourceName: "google_compute_router.foobar", @@ -114,15 +117,16 @@ func TestAccComputeRouter_update(t *testing.T) { func TestAccComputeRouter_updateAddRemoveBGP(t *testing.T) { t.Parallel() - testId := acctest.RandString(10) + testId := randString(t, 10) + routerName := fmt.Sprintf("tf-test-router-%s", testId) region := getTestRegionFromEnv() - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeRouterDestroy, + CheckDestroy: testAccCheckComputeRouterDestroyProducer(t), Steps: []resource.TestStep{ { - Config: testAccComputeRouterBasic(testId, region), + Config: testAccComputeRouterBasic(routerName, region), }, { ResourceName: "google_compute_router.foobar", @@ -130,7 +134,7 @@ func TestAccComputeRouter_updateAddRemoveBGP(t *testing.T) { ImportStateVerify: true, }, { - Config: testAccComputeRouter_noBGP(testId, region), + Config: testAccComputeRouter_noBGP(routerName, region), }, { ResourceName: "google_compute_router.foobar", @@ -138,7 +142,7 @@ func TestAccComputeRouter_updateAddRemoveBGP(t *testing.T) { ImportStateVerify: true, }, { - Config: testAccComputeRouterBasic(testId, region), + Config: testAccComputeRouterBasic(routerName, region), }, { ResourceName: "google_compute_router.foobar", @@ -149,64 +153,64 @@ func TestAccComputeRouter_updateAddRemoveBGP(t *testing.T) { }) } -func testAccComputeRouterBasic(testId, resourceRegion string) string { +func testAccComputeRouterBasic(routerName, resourceRegion string) string { return fmt.Sprintf(` resource "google_compute_network" "foobar" { - name = "router-test-%s" + name = "%s-net" auto_create_subnetworks = false } resource "google_compute_subnetwork" "foobar" { - name = "router-test-subnetwork-%s" + name = "%s-subnet" network = google_compute_network.foobar.self_link ip_cidr_range = "10.0.0.0/16" region = "%s" } resource "google_compute_router" "foobar" { - name = "router-test-%s" + name = "%s" region = google_compute_subnetwork.foobar.region network = google_compute_network.foobar.name bgp { asn = 4294967294 } } -`, testId, testId, resourceRegion, testId) +`, routerName, routerName, resourceRegion, routerName) } -func testAccComputeRouterNoRegion(testId, providerRegion string) string { +func testAccComputeRouterNoRegion(routerName, providerRegion string) string { return fmt.Sprintf(` resource "google_compute_network" "foobar" { - name = "router-test-%s" + name = "%s-net" auto_create_subnetworks = false } resource "google_compute_subnetwork" "foobar" { - name = "router-test-subnetwork-%s" + name = "%s-subnet" network = google_compute_network.foobar.self_link ip_cidr_range = "10.0.0.0/16" region = "%s" } resource "google_compute_router" "foobar" { - name = "router-test-%s" + name = "%s" network = google_compute_network.foobar.name bgp { asn = 64514 } } -`, testId, testId, providerRegion, testId) +`, routerName, routerName, providerRegion, routerName) } -func testAccComputeRouterFull(testId string) string { +func testAccComputeRouterFull(routerName string) string { return 
fmt.Sprintf(` resource "google_compute_network" "foobar" { - name = "router-test-%s" + name = "%s-net" auto_create_subnetworks = false } resource "google_compute_router" "foobar" { - name = "router-test-%s" + name = "%s" network = google_compute_network.foobar.name bgp { asn = 64514 @@ -220,27 +224,27 @@ resource "google_compute_router" "foobar" { } } } -`, testId, testId) +`, routerName, routerName) } -func testAccComputeRouter_noBGP(testId, resourceRegion string) string { +func testAccComputeRouter_noBGP(routerName, resourceRegion string) string { return fmt.Sprintf(` resource "google_compute_network" "foobar" { - name = "router-test-%s" + name = "%s-net" auto_create_subnetworks = false } resource "google_compute_subnetwork" "foobar" { - name = "router-test-subnetwork-%s" + name = "%s-subnet" network = google_compute_network.foobar.self_link ip_cidr_range = "10.0.0.0/16" region = "%s" } resource "google_compute_router" "foobar" { - name = "router-test-%s" + name = "%s" region = google_compute_subnetwork.foobar.region network = google_compute_network.foobar.name } -`, testId, testId, resourceRegion, testId) +`, routerName, routerName, resourceRegion, routerName) } diff --git a/third_party/terraform/tests/resource_compute_security_policy_test.go.erb b/third_party/terraform/tests/resource_compute_security_policy_test.go.erb index 3e2dd0a4d7e7..30672e66acc5 100644 --- a/third_party/terraform/tests/resource_compute_security_policy_test.go.erb +++ b/third_party/terraform/tests/resource_compute_security_policy_test.go.erb @@ -5,7 +5,6 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" ) @@ -13,12 +12,12 @@ import ( func TestAccComputeSecurityPolicy_basic(t *testing.T) { t.Parallel() - spName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + spName := fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeSecurityPolicyDestroy, + CheckDestroy: testAccCheckComputeSecurityPolicyDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeSecurityPolicy_basic(spName), @@ -35,12 +34,12 @@ func TestAccComputeSecurityPolicy_basic(t *testing.T) { func TestAccComputeSecurityPolicy_withRule(t *testing.T) { t.Parallel() - spName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + spName := fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeSecurityPolicyDestroy, + CheckDestroy: testAccCheckComputeSecurityPolicyDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeSecurityPolicy_withRule(spName), @@ -58,12 +57,12 @@ func TestAccComputeSecurityPolicy_withRule(t *testing.T) { func TestAccComputeSecurityPolicy_withRuleExpr(t *testing.T) { t.Parallel() - spName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + spName := fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeSecurityPolicyDestroy, + CheckDestroy: testAccCheckComputeSecurityPolicyDestroyProducer(t), Steps: []resource.TestStep{ { Config: 
testAccComputeSecurityPolicy_withRuleExpr(spName), @@ -81,12 +80,12 @@ func TestAccComputeSecurityPolicy_withRuleExpr(t *testing.T) { func TestAccComputeSecurityPolicy_update(t *testing.T) { t.Parallel() - spName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + spName := fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeSecurityPolicyDestroy, + CheckDestroy: testAccCheckComputeSecurityPolicyDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeSecurityPolicy_withRule(spName), @@ -123,23 +122,25 @@ func TestAccComputeSecurityPolicy_update(t *testing.T) { }) } -func testAccCheckComputeSecurityPolicyDestroy(s *terraform.State) error { - config := testAccProvider.Meta().(*Config) +func testAccCheckComputeSecurityPolicyDestroyProducer(t *testing.T) func(s *terraform.State) error { + return func(s *terraform.State) error { + config := googleProviderConfig(t) - for _, rs := range s.RootModule().Resources { - if rs.Type != "google_compute_security_policy" { - continue - } + for _, rs := range s.RootModule().Resources { + if rs.Type != "google_compute_security_policy" { + continue + } - pol := rs.Primary.Attributes["name"] + pol := rs.Primary.Attributes["name"] - _, err := config.clientComputeBeta.SecurityPolicies.Get(config.Project, pol).Do() - if err == nil { - return fmt.Errorf("Security policy %q still exists", pol) + _, err := config.clientComputeBeta.SecurityPolicies.Get(config.Project, pol).Do() + if err == nil { + return fmt.Errorf("Security policy %q still exists", pol) + } } - } - return nil + return nil + } } func testAccComputeSecurityPolicy_basic(spName string) string { diff --git a/third_party/terraform/tests/resource_compute_shared_vpc_test.go b/third_party/terraform/tests/resource_compute_shared_vpc_test.go index f2ec7e1d3c20..661865f1458f 100644 --- a/third_party/terraform/tests/resource_compute_shared_vpc_test.go +++ b/third_party/terraform/tests/resource_compute_shared_vpc_test.go @@ -4,7 +4,6 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" ) @@ -13,21 +12,21 @@ func TestAccComputeSharedVpc_basic(t *testing.T) { org := getTestOrgFromEnv(t) billingId := getTestBillingAccountFromEnv(t) - hostProject := acctest.RandomWithPrefix("tf-test-h") - serviceProject := acctest.RandomWithPrefix("tf-test-s") + hostProject := fmt.Sprintf("tf-test-h-%d", randInt(t)) + serviceProject := fmt.Sprintf("tf-test-s-%d", randInt(t)) hostProjectResourceName := "google_compute_shared_vpc_host_project.host" serviceProjectResourceName := "google_compute_shared_vpc_service_project.service" - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ { Config: testAccComputeSharedVpc_basic(hostProject, serviceProject, org, billingId), Check: resource.ComposeTestCheckFunc( - testAccCheckComputeSharedVpcHostProject(hostProject, true), - testAccCheckComputeSharedVpcServiceProject(hostProject, serviceProject, true), + testAccCheckComputeSharedVpcHostProject(t, hostProject, true), + testAccCheckComputeSharedVpcServiceProject(t, hostProject, serviceProject, true), ), }, // Test import. 
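The hunks above and below repeat one three-part conversion across every test file: `acctest.RandString`/`acctest.RandomWithPrefix` become the test-scoped `randString(t, n)` and `randInt(t)` helpers, `resource.Test` becomes `vcrTest`, and each package-level `CheckDestroy` function becomes a producer that takes `*testing.T`. Threading `t` through everything is what lets the helpers cooperate with VCR record/replay: random names and API clients are derived per test instead of from package globals. Below is a minimal sketch of the shape every converted test converges on, assuming it lives in the provider's test package; `google_compute_widget` and its helpers are hypothetical stand-ins, while `vcrTest`, `randString`, `testAccPreCheck`, and `testAccProviders` are the real helpers this diff uses.

```go
package google

import (
	"fmt"
	"testing"

	"github.com/hashicorp/terraform-plugin-sdk/helper/resource"
	"github.com/hashicorp/terraform-plugin-sdk/terraform"
)

func TestAccComputeWidget_basic(t *testing.T) {
	t.Parallel()

	// randString(t, 10) replaces acctest.RandString(10): deriving randomness
	// from t is what lets a VCR replay regenerate the same resource names
	// that were recorded alongside the cassette.
	widgetName := fmt.Sprintf("tf-test-%s", randString(t, 10))

	// vcrTest replaces resource.Test and wraps the case in record/replay.
	vcrTest(t, resource.TestCase{
		PreCheck:  func() { testAccPreCheck(t) },
		Providers: testAccProviders,
		// The destroy check is produced from t rather than declared at
		// package level; see the annotated producer at the end of this section.
		CheckDestroy: testAccCheckComputeWidgetDestroyProducer(t),
		Steps: []resource.TestStep{
			{Config: testAccComputeWidget_basic(widgetName)},
			{
				ResourceName:      "google_compute_widget.foobar",
				ImportState:       true,
				ImportStateVerify: true,
			},
		},
	})
}

// Hypothetical config producer, mirroring the testAccCompute*_basic helpers
// in this diff: the caller passes in the full randomized name, and the
// template derives any related names from it ("%s-net", "%s-subnet", ...).
func testAccComputeWidget_basic(widgetName string) string {
	return fmt.Sprintf(`
resource "google_compute_widget" "foobar" {
  name = "%s"
}
`, widgetName)
}

// Stub so the sketch is self-contained; real producers query the API, as in
// the annotated target-pool version at the end of this section.
func testAccCheckComputeWidgetDestroyProducer(t *testing.T) func(s *terraform.State) error {
	return func(s *terraform.State) error { return nil }
}
```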
@@ -45,17 +44,17 @@ func TestAccComputeSharedVpc_basic(t *testing.T) { { Config: testAccComputeSharedVpc_disabled(hostProject, serviceProject, org, billingId), Check: resource.ComposeTestCheckFunc( - testAccCheckComputeSharedVpcHostProject(hostProject, false), - testAccCheckComputeSharedVpcServiceProject(hostProject, serviceProject, false), + testAccCheckComputeSharedVpcHostProject(t, hostProject, false), + testAccCheckComputeSharedVpcServiceProject(t, hostProject, serviceProject, false), ), }, }, }) } -func testAccCheckComputeSharedVpcHostProject(hostProject string, enabled bool) resource.TestCheckFunc { +func testAccCheckComputeSharedVpcHostProject(t *testing.T, hostProject string, enabled bool) resource.TestCheckFunc { return func(s *terraform.State) error { - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) found, err := config.clientCompute.Projects.Get(hostProject).Do() if err != nil { @@ -74,9 +73,9 @@ func testAccCheckComputeSharedVpcHostProject(hostProject string, enabled bool) r } } -func testAccCheckComputeSharedVpcServiceProject(hostProject, serviceProject string, enabled bool) resource.TestCheckFunc { +func testAccCheckComputeSharedVpcServiceProject(t *testing.T, hostProject, serviceProject string, enabled bool) resource.TestCheckFunc { return func(s *terraform.State) error { - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) serviceHostProject, err := config.clientCompute.Projects.GetXpnHost(serviceProject).Do() if err != nil { if enabled { diff --git a/third_party/terraform/tests/resource_compute_snapshot_test.go b/third_party/terraform/tests/resource_compute_snapshot_test.go index e1bd6444d2fa..7ef534885eb2 100644 --- a/third_party/terraform/tests/resource_compute_snapshot_test.go +++ b/third_party/terraform/tests/resource_compute_snapshot_test.go @@ -4,20 +4,19 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" ) func TestAccComputeSnapshot_encryption(t *testing.T) { t.Parallel() - snapshotName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) - diskName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + snapshotName := fmt.Sprintf("tf-test-%s", randString(t, 10)) + diskName := fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeSnapshotDestroy, + CheckDestroy: testAccCheckComputeSnapshotDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeSnapshot_encryption(snapshotName, diskName), diff --git a/third_party/terraform/tests/resource_compute_ssl_certificate_test.go b/third_party/terraform/tests/resource_compute_ssl_certificate_test.go index 5d6b981b5b76..0fe97c84eff1 100644 --- a/third_party/terraform/tests/resource_compute_ssl_certificate_test.go +++ b/third_party/terraform/tests/resource_compute_ssl_certificate_test.go @@ -9,18 +9,20 @@ import ( ) func TestAccComputeSslCertificate_no_name(t *testing.T) { + // Randomness + skipIfVcr(t) t.Parallel() - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeSslCertificateDestroy, + CheckDestroy: testAccCheckComputeSslCertificateDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeSslCertificate_no_name(), Check: resource.ComposeTestCheckFunc( 
testAccCheckComputeSslCertificateExists( - "google_compute_ssl_certificate.foobar"), + t, "google_compute_ssl_certificate.foobar"), ), }, { @@ -33,7 +35,7 @@ func TestAccComputeSslCertificate_no_name(t *testing.T) { }) } -func testAccCheckComputeSslCertificateExists(n string) resource.TestCheckFunc { +func testAccCheckComputeSslCertificateExists(t *testing.T, n string) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] if !ok { @@ -44,7 +46,7 @@ func testAccCheckComputeSslCertificateExists(n string) resource.TestCheckFunc { return fmt.Errorf("No ID is set") } - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) // We don't specify a name, but it is saved during create name := rs.Primary.Attributes["name"] diff --git a/third_party/terraform/tests/resource_compute_ssl_policy_test.go b/third_party/terraform/tests/resource_compute_ssl_policy_test.go index dee4fabfdd0b..87168f6f9820 100644 --- a/third_party/terraform/tests/resource_compute_ssl_policy_test.go +++ b/third_party/terraform/tests/resource_compute_ssl_policy_test.go @@ -4,7 +4,6 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" compute "google.golang.org/api/compute/v1" @@ -14,18 +13,18 @@ func TestAccComputeSslPolicy_update(t *testing.T) { t.Parallel() var sslPolicy compute.SslPolicy - sslPolicyName := fmt.Sprintf("test-ssl-policy-%s", acctest.RandString(10)) + sslPolicyName := fmt.Sprintf("test-ssl-policy-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeSslPolicyDestroy, + CheckDestroy: testAccCheckComputeSslPolicyDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeSslUpdate1(sslPolicyName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeSslPolicyExists( - "google_compute_ssl_policy.update", &sslPolicy), + t, "google_compute_ssl_policy.update", &sslPolicy), resource.TestCheckResourceAttr( "google_compute_ssl_policy.update", "profile", "MODERN"), resource.TestCheckResourceAttr( @@ -41,7 +40,7 @@ func TestAccComputeSslPolicy_update(t *testing.T) { Config: testAccComputeSslUpdate2(sslPolicyName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeSslPolicyExists( - "google_compute_ssl_policy.update", &sslPolicy), + t, "google_compute_ssl_policy.update", &sslPolicy), resource.TestCheckResourceAttr( "google_compute_ssl_policy.update", "profile", "RESTRICTED"), resource.TestCheckResourceAttr( @@ -61,18 +60,18 @@ func TestAccComputeSslPolicy_update_to_custom(t *testing.T) { t.Parallel() var sslPolicy compute.SslPolicy - sslPolicyName := fmt.Sprintf("test-ssl-policy-%s", acctest.RandString(10)) + sslPolicyName := fmt.Sprintf("test-ssl-policy-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeSslPolicyDestroy, + CheckDestroy: testAccCheckComputeSslPolicyDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeSslUpdate1(sslPolicyName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeSslPolicyExists( - "google_compute_ssl_policy.update", &sslPolicy), + t, "google_compute_ssl_policy.update", &sslPolicy), resource.TestCheckResourceAttr( 
"google_compute_ssl_policy.update", "profile", "MODERN"), resource.TestCheckResourceAttr( @@ -88,7 +87,7 @@ func TestAccComputeSslPolicy_update_to_custom(t *testing.T) { Config: testAccComputeSslUpdate3(sslPolicyName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeSslPolicyExists( - "google_compute_ssl_policy.update", &sslPolicy), + t, "google_compute_ssl_policy.update", &sslPolicy), resource.TestCheckResourceAttr( "google_compute_ssl_policy.update", "profile", "CUSTOM"), resource.TestCheckResourceAttr( @@ -108,18 +107,18 @@ func TestAccComputeSslPolicy_update_from_custom(t *testing.T) { t.Parallel() var sslPolicy compute.SslPolicy - sslPolicyName := fmt.Sprintf("test-ssl-policy-%s", acctest.RandString(10)) + sslPolicyName := fmt.Sprintf("test-ssl-policy-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeSslPolicyDestroy, + CheckDestroy: testAccCheckComputeSslPolicyDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeSslUpdate3(sslPolicyName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeSslPolicyExists( - "google_compute_ssl_policy.update", &sslPolicy), + t, "google_compute_ssl_policy.update", &sslPolicy), resource.TestCheckResourceAttr( "google_compute_ssl_policy.update", "profile", "CUSTOM"), resource.TestCheckResourceAttr( @@ -135,7 +134,7 @@ func TestAccComputeSslPolicy_update_from_custom(t *testing.T) { Config: testAccComputeSslUpdate1(sslPolicyName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeSslPolicyExists( - "google_compute_ssl_policy.update", &sslPolicy), + t, "google_compute_ssl_policy.update", &sslPolicy), resource.TestCheckResourceAttr( "google_compute_ssl_policy.update", "profile", "MODERN"), resource.TestCheckResourceAttr( @@ -151,7 +150,7 @@ func TestAccComputeSslPolicy_update_from_custom(t *testing.T) { }) } -func testAccCheckComputeSslPolicyExists(n string, sslPolicy *compute.SslPolicy) resource.TestCheckFunc { +func testAccCheckComputeSslPolicyExists(t *testing.T, n string, sslPolicy *compute.SslPolicy) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] if !ok { @@ -162,7 +161,7 @@ func testAccCheckComputeSslPolicyExists(n string, sslPolicy *compute.SslPolicy) return fmt.Errorf("No ID is set") } - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) project, err := getTestProject(rs.Primary, config) if err != nil { diff --git a/third_party/terraform/tests/resource_compute_subnetwork_iam_test.go b/third_party/terraform/tests/resource_compute_subnetwork_iam_test.go index f90caba8d0a6..892a0e4b5ccf 100644 --- a/third_party/terraform/tests/resource_compute_subnetwork_iam_test.go +++ b/third_party/terraform/tests/resource_compute_subnetwork_iam_test.go @@ -4,7 +4,6 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" ) @@ -14,12 +13,12 @@ func TestAccComputeSubnetworkIamPolicy(t *testing.T) { t.Parallel() project := getTestProjectFromEnv() - account := acctest.RandomWithPrefix("tf-test") + account := fmt.Sprintf("tf-test-%d", randInt(t)) role := "roles/compute.networkUser" region := getTestRegionFromEnv() - subnetwork := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + subnetwork := fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ 
PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ diff --git a/third_party/terraform/tests/resource_compute_subnetwork_test.go b/third_party/terraform/tests/resource_compute_subnetwork_test.go index 8ef69140c62e..dad396194c31 100644 --- a/third_party/terraform/tests/resource_compute_subnetwork_test.go +++ b/third_party/terraform/tests/resource_compute_subnetwork_test.go @@ -4,7 +4,6 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" "google.golang.org/api/compute/v1" @@ -54,23 +53,23 @@ func TestAccComputeSubnetwork_basic(t *testing.T) { var subnetwork1 compute.Subnetwork var subnetwork2 compute.Subnetwork - cnName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) - subnetwork1Name := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) - subnetwork2Name := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) - subnetwork3Name := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + cnName := fmt.Sprintf("tf-test-%s", randString(t, 10)) + subnetwork1Name := fmt.Sprintf("tf-test-%s", randString(t, 10)) + subnetwork2Name := fmt.Sprintf("tf-test-%s", randString(t, 10)) + subnetwork3Name := fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeSubnetworkDestroy, + CheckDestroy: testAccCheckComputeSubnetworkDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeSubnetwork_basic(cnName, subnetwork1Name, subnetwork2Name, subnetwork3Name), Check: resource.ComposeTestCheckFunc( testAccCheckComputeSubnetworkExists( - "google_compute_subnetwork.network-ref-by-url", &subnetwork1), + t, "google_compute_subnetwork.network-ref-by-url", &subnetwork1), testAccCheckComputeSubnetworkExists( - "google_compute_subnetwork.network-ref-by-name", &subnetwork2), + t, "google_compute_subnetwork.network-ref-by-name", &subnetwork2), ), }, { @@ -92,19 +91,19 @@ func TestAccComputeSubnetwork_update(t *testing.T) { var subnetwork compute.Subnetwork - cnName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) - subnetworkName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + cnName := fmt.Sprintf("tf-test-%s", randString(t, 10)) + subnetworkName := fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeSubnetworkDestroy, + CheckDestroy: testAccCheckComputeSubnetworkDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeSubnetwork_update1(cnName, "10.2.0.0/24", subnetworkName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeSubnetworkExists( - "google_compute_subnetwork.network-with-private-google-access", &subnetwork), + t, "google_compute_subnetwork.network-with-private-google-access", &subnetwork), ), }, { @@ -112,7 +111,7 @@ func TestAccComputeSubnetwork_update(t *testing.T) { Config: testAccComputeSubnetwork_update2(cnName, "10.2.0.0/16", subnetworkName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeSubnetworkExists( - "google_compute_subnetwork.network-with-private-google-access", &subnetwork), + t, "google_compute_subnetwork.network-with-private-google-access", &subnetwork), ), }, { @@ -120,7 +119,7 @@ func 
TestAccComputeSubnetwork_update(t *testing.T) { Config: testAccComputeSubnetwork_update2(cnName, "10.2.0.0/24", subnetworkName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeSubnetworkExists( - "google_compute_subnetwork.network-with-private-google-access", &subnetwork), + t, "google_compute_subnetwork.network-with-private-google-access", &subnetwork), ), }, { @@ -128,7 +127,7 @@ func TestAccComputeSubnetwork_update(t *testing.T) { Config: testAccComputeSubnetwork_update3(cnName, "10.2.0.0/24", subnetworkName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeSubnetworkExists( - "google_compute_subnetwork.network-with-private-google-access", &subnetwork), + t, "google_compute_subnetwork.network-with-private-google-access", &subnetwork), ), }, { @@ -149,25 +148,25 @@ func TestAccComputeSubnetwork_secondaryIpRanges(t *testing.T) { var subnetwork compute.Subnetwork - cnName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) - subnetworkName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + cnName := fmt.Sprintf("tf-test-%s", randString(t, 10)) + subnetworkName := fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeSubnetworkDestroy, + CheckDestroy: testAccCheckComputeSubnetworkDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeSubnetwork_secondaryIpRanges_update1(cnName, subnetworkName), Check: resource.ComposeTestCheckFunc( - testAccCheckComputeSubnetworkExists("google_compute_subnetwork.network-with-private-secondary-ip-ranges", &subnetwork), + testAccCheckComputeSubnetworkExists(t, "google_compute_subnetwork.network-with-private-secondary-ip-ranges", &subnetwork), testAccCheckComputeSubnetworkHasSecondaryIpRange(&subnetwork, "tf-test-secondary-range-update1", "192.168.10.0/24"), ), }, { Config: testAccComputeSubnetwork_secondaryIpRanges_update2(cnName, subnetworkName), Check: resource.ComposeTestCheckFunc( - testAccCheckComputeSubnetworkExists("google_compute_subnetwork.network-with-private-secondary-ip-ranges", &subnetwork), + testAccCheckComputeSubnetworkExists(t, "google_compute_subnetwork.network-with-private-secondary-ip-ranges", &subnetwork), testAccCheckComputeSubnetworkHasSecondaryIpRange(&subnetwork, "tf-test-secondary-range-update1", "192.168.10.0/24"), testAccCheckComputeSubnetworkHasSecondaryIpRange(&subnetwork, "tf-test-secondary-range-update2", "192.168.11.0/24"), ), @@ -175,7 +174,7 @@ func TestAccComputeSubnetwork_secondaryIpRanges(t *testing.T) { { Config: testAccComputeSubnetwork_secondaryIpRanges_update3(cnName, subnetworkName), Check: resource.ComposeTestCheckFunc( - testAccCheckComputeSubnetworkExists("google_compute_subnetwork.network-with-private-secondary-ip-ranges", &subnetwork), + testAccCheckComputeSubnetworkExists(t, "google_compute_subnetwork.network-with-private-secondary-ip-ranges", &subnetwork), testAccCheckComputeSubnetworkHasSecondaryIpRange(&subnetwork, "tf-test-secondary-range-update1", "192.168.10.0/24"), testAccCheckComputeSubnetworkHasSecondaryIpRange(&subnetwork, "tf-test-secondary-range-update2", "192.168.11.0/24"), ), @@ -183,7 +182,7 @@ func TestAccComputeSubnetwork_secondaryIpRanges(t *testing.T) { { Config: testAccComputeSubnetwork_secondaryIpRanges_update4(cnName, subnetworkName), Check: resource.ComposeTestCheckFunc( - testAccCheckComputeSubnetworkExists("google_compute_subnetwork.network-with-private-secondary-ip-ranges", 
&subnetwork), + testAccCheckComputeSubnetworkExists(t, "google_compute_subnetwork.network-with-private-secondary-ip-ranges", &subnetwork), testAccCheckComputeSubnetworkHasNotSecondaryIpRange(&subnetwork, "tf-test-secondary-range-update1", "192.168.10.0/24"), testAccCheckComputeSubnetworkHasNotSecondaryIpRange(&subnetwork, "tf-test-secondary-range-update2", "192.168.11.0/24"), ), @@ -191,7 +190,7 @@ func TestAccComputeSubnetwork_secondaryIpRanges(t *testing.T) { { Config: testAccComputeSubnetwork_secondaryIpRanges_update1(cnName, subnetworkName), Check: resource.ComposeTestCheckFunc( - testAccCheckComputeSubnetworkExists("google_compute_subnetwork.network-with-private-secondary-ip-ranges", &subnetwork), + testAccCheckComputeSubnetworkExists(t, "google_compute_subnetwork.network-with-private-secondary-ip-ranges", &subnetwork), testAccCheckComputeSubnetworkHasSecondaryIpRange(&subnetwork, "tf-test-secondary-range-update1", "192.168.10.0/24"), testAccCheckComputeSubnetworkHasNotSecondaryIpRange(&subnetwork, "tf-test-secondary-range-update2", "192.168.11.0/24"), ), @@ -205,19 +204,19 @@ func TestAccComputeSubnetwork_flowLogs(t *testing.T) { var subnetwork compute.Subnetwork - cnName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) - subnetworkName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + cnName := fmt.Sprintf("tf-test-%s", randString(t, 10)) + subnetworkName := fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeSubnetworkDestroy, + CheckDestroy: testAccCheckComputeSubnetworkDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeSubnetwork_flowLogs(cnName, subnetworkName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeSubnetworkExists( - "google_compute_subnetwork.network-with-flow-logs", &subnetwork), + t, "google_compute_subnetwork.network-with-flow-logs", &subnetwork), ), }, { @@ -229,7 +228,7 @@ func TestAccComputeSubnetwork_flowLogs(t *testing.T) { Config: testAccComputeSubnetwork_flowLogsUpdate(cnName, subnetworkName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeSubnetworkExists( - "google_compute_subnetwork.network-with-flow-logs", &subnetwork), + t, "google_compute_subnetwork.network-with-flow-logs", &subnetwork), ), }, { @@ -241,7 +240,7 @@ func TestAccComputeSubnetwork_flowLogs(t *testing.T) { Config: testAccComputeSubnetwork_flowLogsDelete(cnName, subnetworkName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeSubnetworkExists( - "google_compute_subnetwork.network-with-flow-logs", &subnetwork), + t, "google_compute_subnetwork.network-with-flow-logs", &subnetwork), ), }, { @@ -258,19 +257,19 @@ func TestAccComputeSubnetwork_flowLogsMigrate(t *testing.T) { var subnetwork compute.Subnetwork - cnName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) - subnetworkName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + cnName := fmt.Sprintf("tf-test-%s", randString(t, 10)) + subnetworkName := fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeSubnetworkDestroy, + CheckDestroy: testAccCheckComputeSubnetworkDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeSubnetwork_flowLogsMigrate(cnName, subnetworkName), Check: resource.ComposeTestCheckFunc( 
testAccCheckComputeSubnetworkExists( - "google_compute_subnetwork.network-with-flow-logs", &subnetwork), + t, "google_compute_subnetwork.network-with-flow-logs", &subnetwork), ), }, { @@ -282,7 +281,7 @@ func TestAccComputeSubnetwork_flowLogsMigrate(t *testing.T) { Config: testAccComputeSubnetwork_flowLogsMigrate2(cnName, subnetworkName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeSubnetworkExists( - "google_compute_subnetwork.network-with-flow-logs", &subnetwork), + t, "google_compute_subnetwork.network-with-flow-logs", &subnetwork), ), }, { @@ -294,7 +293,7 @@ func TestAccComputeSubnetwork_flowLogsMigrate(t *testing.T) { Config: testAccComputeSubnetwork_flowLogsMigrate3(cnName, subnetworkName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeSubnetworkExists( - "google_compute_subnetwork.network-with-flow-logs", &subnetwork), + t, "google_compute_subnetwork.network-with-flow-logs", &subnetwork), ), }, { @@ -306,7 +305,7 @@ func TestAccComputeSubnetwork_flowLogsMigrate(t *testing.T) { }) } -func testAccCheckComputeSubnetworkExists(n string, subnetwork *compute.Subnetwork) resource.TestCheckFunc { +func testAccCheckComputeSubnetworkExists(t *testing.T, n string, subnetwork *compute.Subnetwork) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] if !ok { @@ -317,7 +316,7 @@ func testAccCheckComputeSubnetworkExists(n string, subnetwork *compute.Subnetwor return fmt.Errorf("No ID is set") } - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) region := rs.Primary.Attributes["region"] subnet_name := rs.Primary.Attributes["name"] diff --git a/third_party/terraform/tests/resource_compute_target_http_proxy_test.go b/third_party/terraform/tests/resource_compute_target_http_proxy_test.go index f6d9132af030..a85fc9ace97a 100644 --- a/third_party/terraform/tests/resource_compute_target_http_proxy_test.go +++ b/third_party/terraform/tests/resource_compute_target_http_proxy_test.go @@ -4,7 +4,6 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" ) @@ -12,22 +11,22 @@ import ( func TestAccComputeTargetHttpProxy_update(t *testing.T) { t.Parallel() - target := fmt.Sprintf("thttp-test-%s", acctest.RandString(10)) - backend := fmt.Sprintf("thttp-test-%s", acctest.RandString(10)) - hc := fmt.Sprintf("thttp-test-%s", acctest.RandString(10)) - urlmap1 := fmt.Sprintf("thttp-test-%s", acctest.RandString(10)) - urlmap2 := fmt.Sprintf("thttp-test-%s", acctest.RandString(10)) + target := fmt.Sprintf("thttp-test-%s", randString(t, 10)) + backend := fmt.Sprintf("thttp-test-%s", randString(t, 10)) + hc := fmt.Sprintf("thttp-test-%s", randString(t, 10)) + urlmap1 := fmt.Sprintf("thttp-test-%s", randString(t, 10)) + urlmap2 := fmt.Sprintf("thttp-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeTargetHttpProxyDestroy, + CheckDestroy: testAccCheckComputeTargetHttpProxyDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeTargetHttpProxy_basic1(target, backend, hc, urlmap1, urlmap2), Check: resource.ComposeTestCheckFunc( testAccCheckComputeTargetHttpProxyExists( - "google_compute_target_http_proxy.foobar"), + t, "google_compute_target_http_proxy.foobar"), ), }, @@ -35,14 +34,14 @@ func 
TestAccComputeTargetHttpProxy_update(t *testing.T) { Config: testAccComputeTargetHttpProxy_basic2(target, backend, hc, urlmap1, urlmap2), Check: resource.ComposeTestCheckFunc( testAccCheckComputeTargetHttpProxyExists( - "google_compute_target_http_proxy.foobar"), + t, "google_compute_target_http_proxy.foobar"), ), }, }, }) } -func testAccCheckComputeTargetHttpProxyExists(n string) resource.TestCheckFunc { +func testAccCheckComputeTargetHttpProxyExists(t *testing.T, n string) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] if !ok { @@ -53,7 +52,7 @@ func testAccCheckComputeTargetHttpProxyExists(n string) resource.TestCheckFunc { return fmt.Errorf("No ID is set") } - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) name := rs.Primary.Attributes["name"] found, err := config.clientCompute.TargetHttpProxies.Get( diff --git a/third_party/terraform/tests/resource_compute_target_https_proxy_test.go b/third_party/terraform/tests/resource_compute_target_https_proxy_test.go index 12ff7230052b..145fbe75f2e9 100644 --- a/third_party/terraform/tests/resource_compute_target_https_proxy_test.go +++ b/third_party/terraform/tests/resource_compute_target_https_proxy_test.go @@ -4,7 +4,6 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" "google.golang.org/api/compute/v1" @@ -18,20 +17,20 @@ func TestAccComputeTargetHttpsProxy_update(t *testing.T) { t.Parallel() var proxy compute.TargetHttpsProxy - resourceSuffix := acctest.RandString(10) + resourceSuffix := randString(t, 10) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeTargetHttpsProxyDestroy, + CheckDestroy: testAccCheckComputeTargetHttpsProxyDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeTargetHttpsProxy_basic1(resourceSuffix), Check: resource.ComposeTestCheckFunc( testAccCheckComputeTargetHttpsProxyExists( - "google_compute_target_https_proxy.foobar", &proxy), + t, "google_compute_target_https_proxy.foobar", &proxy), testAccComputeTargetHttpsProxyDescription("Resource created for Terraform acceptance testing", &proxy), - testAccComputeTargetHttpsProxyHasSslCertificate("httpsproxy-test-cert1-"+resourceSuffix, &proxy), + testAccComputeTargetHttpsProxyHasSslCertificate(t, "httpsproxy-test-cert1-"+resourceSuffix, &proxy), ), }, @@ -39,17 +38,17 @@ func TestAccComputeTargetHttpsProxy_update(t *testing.T) { Config: testAccComputeTargetHttpsProxy_basic2(resourceSuffix), Check: resource.ComposeTestCheckFunc( testAccCheckComputeTargetHttpsProxyExists( - "google_compute_target_https_proxy.foobar", &proxy), + t, "google_compute_target_https_proxy.foobar", &proxy), testAccComputeTargetHttpsProxyDescription("Resource created for Terraform acceptance testing", &proxy), - testAccComputeTargetHttpsProxyHasSslCertificate("httpsproxy-test-cert1-"+resourceSuffix, &proxy), - testAccComputeTargetHttpsProxyHasSslCertificate("httpsproxy-test-cert2-"+resourceSuffix, &proxy), + testAccComputeTargetHttpsProxyHasSslCertificate(t, "httpsproxy-test-cert1-"+resourceSuffix, &proxy), + testAccComputeTargetHttpsProxyHasSslCertificate(t, "httpsproxy-test-cert2-"+resourceSuffix, &proxy), ), }, }, }) } -func testAccCheckComputeTargetHttpsProxyExists(n string, proxy *compute.TargetHttpsProxy) 
resource.TestCheckFunc { +func testAccCheckComputeTargetHttpsProxyExists(t *testing.T, n string, proxy *compute.TargetHttpsProxy) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] if !ok { @@ -60,7 +59,7 @@ func testAccCheckComputeTargetHttpsProxyExists(n string, proxy *compute.TargetHt return fmt.Errorf("No ID is set") } - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) name := rs.Primary.Attributes["name"] found, err := config.clientCompute.TargetHttpsProxies.Get( @@ -88,9 +87,9 @@ func testAccComputeTargetHttpsProxyDescription(description string, proxy *comput } } -func testAccComputeTargetHttpsProxyHasSslCertificate(cert string, proxy *compute.TargetHttpsProxy) resource.TestCheckFunc { +func testAccComputeTargetHttpsProxyHasSslCertificate(t *testing.T, cert string, proxy *compute.TargetHttpsProxy) resource.TestCheckFunc { return func(s *terraform.State) error { - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) certUrl := fmt.Sprintf(canonicalSslCertificateTemplate, config.Project, cert) for _, sslCertificate := range proxy.SslCertificates { diff --git a/third_party/terraform/tests/resource_compute_target_pool_test.go b/third_party/terraform/tests/resource_compute_target_pool_test.go index b346c6414923..a2ce53482efb 100644 --- a/third_party/terraform/tests/resource_compute_target_pool_test.go +++ b/third_party/terraform/tests/resource_compute_target_pool_test.go @@ -4,7 +4,6 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" ) @@ -12,19 +11,19 @@ import ( func TestAccComputeTargetPool_basic(t *testing.T) { t.Parallel() - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeTargetPoolDestroy, + CheckDestroy: testAccCheckComputeTargetPoolDestroyProducer(t), Steps: []resource.TestStep{ { - Config: testAccComputeTargetPool_basic(), + Config: testAccComputeTargetPool_basic(randString(t, 10)), Check: resource.ComposeTestCheckFunc( testAccCheckComputeTargetPoolExists( - "google_compute_target_pool.foo"), + t, "google_compute_target_pool.foo"), testAccCheckComputeTargetPoolHealthCheck("google_compute_target_pool.foo", "google_compute_http_health_check.foobar"), testAccCheckComputeTargetPoolExists( - "google_compute_target_pool.bar"), + t, "google_compute_target_pool.bar"), testAccCheckComputeTargetPoolHealthCheck("google_compute_target_pool.bar", "google_compute_http_health_check.foobar"), ), }, @@ -40,14 +39,14 @@ func TestAccComputeTargetPool_basic(t *testing.T) { func TestAccComputeTargetPool_update(t *testing.T) { t.Parallel() - tpname := fmt.Sprintf("tptest-%s", acctest.RandString(10)) - name1 := fmt.Sprintf("tptest-%s", acctest.RandString(10)) - name2 := fmt.Sprintf("tptest-%s", acctest.RandString(10)) + tpname := fmt.Sprintf("tf-test-%s", randString(t, 10)) + name1 := fmt.Sprintf("tf-test-%s", randString(t, 10)) + name2 := fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeTargetPoolDestroy, + CheckDestroy: testAccCheckComputeTargetPoolDestroyProducer(t), Steps: []resource.TestStep{ { // Create target pool with no instances attached @@ -80,25 
+79,27 @@ func TestAccComputeTargetPool_update(t *testing.T) { }) } -func testAccCheckComputeTargetPoolDestroy(s *terraform.State) error { - config := testAccProvider.Meta().(*Config) - - for _, rs := range s.RootModule().Resources { - if rs.Type != "google_compute_target_pool" { - continue +func testAccCheckComputeTargetPoolDestroyProducer(t *testing.T) func(s *terraform.State) error { + return func(s *terraform.State) error { + config := googleProviderConfig(t) + + for _, rs := range s.RootModule().Resources { + if rs.Type != "google_compute_target_pool" { + continue + } + + _, err := config.clientCompute.TargetPools.Get( + config.Project, config.Region, rs.Primary.Attributes["name"]).Do() + if err == nil { + return fmt.Errorf("TargetPool still exists") + } } - _, err := config.clientCompute.TargetPools.Get( - config.Project, config.Region, rs.Primary.Attributes["name"]).Do() - if err == nil { - return fmt.Errorf("TargetPool still exists") - } + return nil } - - return nil } -func testAccCheckComputeTargetPoolExists(n string) resource.TestCheckFunc { +func testAccCheckComputeTargetPoolExists(t *testing.T, n string) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] if !ok { @@ -109,7 +110,7 @@ func testAccCheckComputeTargetPoolExists(n string) resource.TestCheckFunc { return fmt.Errorf("No ID is set") } - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) found, err := config.clientCompute.TargetPools.Get( config.Project, config.Region, rs.Primary.Attributes["name"]).Do() @@ -146,7 +147,7 @@ func testAccCheckComputeTargetPoolHealthCheck(targetPool, healthCheck string) re } } -func testAccComputeTargetPool_basic() string { +func testAccComputeTargetPool_basic(suffix string) string { return fmt.Sprintf(` data "google_compute_image" "my_image" { family = "debian-9" @@ -159,7 +160,7 @@ resource "google_compute_http_health_check" "foobar" { } resource "google_compute_instance" "foobar" { - name = "inst-tp-test-%s" + name = "tf-test-%s" machine_type = "n1-standard-1" zone = "us-central1-a" @@ -186,19 +187,19 @@ resource "google_compute_target_pool" "foo" { resource "google_compute_target_pool" "bar" { description = "Resource created for Terraform acceptance testing" - name = "tpool-test-%s" + name = "tpool-test-2-%s" health_checks = [ google_compute_http_health_check.foobar.self_link, ] } -`, acctest.RandString(10), acctest.RandString(10), acctest.RandString(10), acctest.RandString(10)) +`, suffix, suffix, suffix, suffix) } func testAccComputeTargetPool_update(tpname, instances, name1, name2 string) string { return fmt.Sprintf(` resource "google_compute_target_pool" "foo" { description = "Resource created for Terraform acceptance testing" - name = "tpool-test-%s" + name = "%s" instances = [%s] } diff --git a/third_party/terraform/tests/resource_compute_target_ssl_proxy_test.go b/third_party/terraform/tests/resource_compute_target_ssl_proxy_test.go index 4a0d57621208..eb55d652093c 100644 --- a/third_party/terraform/tests/resource_compute_target_ssl_proxy_test.go +++ b/third_party/terraform/tests/resource_compute_target_ssl_proxy_test.go @@ -4,44 +4,43 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" ) func TestAccComputeTargetSslProxy_update(t *testing.T) { - target := fmt.Sprintf("tssl-test-%s", acctest.RandString(10)) - sslPolicy := fmt.Sprintf("tssl-test-%s", 
acctest.RandString(10)) - cert1 := fmt.Sprintf("tssl-test-%s", acctest.RandString(10)) - cert2 := fmt.Sprintf("tssl-test-%s", acctest.RandString(10)) - backend1 := fmt.Sprintf("tssl-test-%s", acctest.RandString(10)) - backend2 := fmt.Sprintf("tssl-test-%s", acctest.RandString(10)) - hc := fmt.Sprintf("tssl-test-%s", acctest.RandString(10)) - - resource.Test(t, resource.TestCase{ + target := fmt.Sprintf("tssl-test-%s", randString(t, 10)) + sslPolicy := fmt.Sprintf("tssl-test-%s", randString(t, 10)) + cert1 := fmt.Sprintf("tssl-test-%s", randString(t, 10)) + cert2 := fmt.Sprintf("tssl-test-%s", randString(t, 10)) + backend1 := fmt.Sprintf("tssl-test-%s", randString(t, 10)) + backend2 := fmt.Sprintf("tssl-test-%s", randString(t, 10)) + hc := fmt.Sprintf("tssl-test-%s", randString(t, 10)) + + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeTargetSslProxyDestroy, + CheckDestroy: testAccCheckComputeTargetSslProxyDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeTargetSslProxy_basic1(target, sslPolicy, cert1, backend1, hc), Check: resource.ComposeTestCheckFunc( testAccCheckComputeTargetSslProxy( - "google_compute_target_ssl_proxy.foobar", "NONE", cert1), + t, "google_compute_target_ssl_proxy.foobar", "NONE", cert1), ), }, { Config: testAccComputeTargetSslProxy_basic2(target, sslPolicy, cert1, cert2, backend1, backend2, hc), Check: resource.ComposeTestCheckFunc( testAccCheckComputeTargetSslProxy( - "google_compute_target_ssl_proxy.foobar", "PROXY_V1", cert2), + t, "google_compute_target_ssl_proxy.foobar", "PROXY_V1", cert2), ), }, }, }) } -func testAccCheckComputeTargetSslProxy(n, proxyHeader, sslCert string) resource.TestCheckFunc { +func testAccCheckComputeTargetSslProxy(t *testing.T, n, proxyHeader, sslCert string) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] if !ok { @@ -52,7 +51,7 @@ func testAccCheckComputeTargetSslProxy(n, proxyHeader, sslCert string) resource. 
return fmt.Errorf("No ID is set") } - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) name := rs.Primary.Attributes["name"] found, err := config.clientCompute.TargetSslProxies.Get( diff --git a/third_party/terraform/tests/resource_compute_target_tcp_proxy_test.go b/third_party/terraform/tests/resource_compute_target_tcp_proxy_test.go index 6536ae2c5464..c056146e2679 100644 --- a/third_party/terraform/tests/resource_compute_target_tcp_proxy_test.go +++ b/third_party/terraform/tests/resource_compute_target_tcp_proxy_test.go @@ -4,7 +4,6 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" ) @@ -12,34 +11,34 @@ import ( func TestAccComputeTargetTcpProxy_update(t *testing.T) { t.Parallel() - target := fmt.Sprintf("ttcp-test-%s", acctest.RandString(10)) - backend := fmt.Sprintf("ttcp-test-%s", acctest.RandString(10)) - hc := fmt.Sprintf("ttcp-test-%s", acctest.RandString(10)) + target := fmt.Sprintf("ttcp-test-%s", randString(t, 10)) + backend := fmt.Sprintf("ttcp-test-%s", randString(t, 10)) + hc := fmt.Sprintf("ttcp-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeTargetTcpProxyDestroy, + CheckDestroy: testAccCheckComputeTargetTcpProxyDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeTargetTcpProxy_basic1(target, backend, hc), Check: resource.ComposeTestCheckFunc( testAccCheckComputeTargetTcpProxyExists( - "google_compute_target_tcp_proxy.foobar"), + t, "google_compute_target_tcp_proxy.foobar"), ), }, { Config: testAccComputeTargetTcpProxy_basic2(target, backend, hc), Check: resource.ComposeTestCheckFunc( testAccCheckComputeTargetTcpProxyExists( - "google_compute_target_tcp_proxy.foobar"), + t, "google_compute_target_tcp_proxy.foobar"), ), }, }, }) } -func testAccCheckComputeTargetTcpProxyExists(n string) resource.TestCheckFunc { +func testAccCheckComputeTargetTcpProxyExists(t *testing.T, n string) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] if !ok { @@ -50,7 +49,7 @@ func testAccCheckComputeTargetTcpProxyExists(n string) resource.TestCheckFunc { return fmt.Errorf("No ID is set") } - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) name := rs.Primary.Attributes["name"] found, err := config.clientCompute.TargetTcpProxies.Get( diff --git a/third_party/terraform/tests/resource_compute_url_map_test.go.erb b/third_party/terraform/tests/resource_compute_url_map_test.go.erb index 7d20134bffe9..c17e6e91b121 100644 --- a/third_party/terraform/tests/resource_compute_url_map_test.go.erb +++ b/third_party/terraform/tests/resource_compute_url_map_test.go.erb @@ -5,7 +5,6 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" ) @@ -13,19 +12,19 @@ import ( func TestAccComputeUrlMap_update_path_matcher(t *testing.T) { t.Parallel() - bsName := fmt.Sprintf("urlmap-test-%s", acctest.RandString(10)) - hcName := fmt.Sprintf("urlmap-test-%s", acctest.RandString(10)) - umName := fmt.Sprintf("urlmap-test-%s", acctest.RandString(10)) - resource.Test(t, resource.TestCase{ + bsName := fmt.Sprintf("urlmap-test-%s", randString(t, 10)) + 
hcName := fmt.Sprintf("urlmap-test-%s", randString(t, 10)) + umName := fmt.Sprintf("urlmap-test-%s", randString(t, 10)) + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeUrlMapDestroy, + CheckDestroy: testAccCheckComputeUrlMapDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeUrlMap_basic1(bsName, hcName, umName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeUrlMapExists( - "google_compute_url_map.foobar"), + t, "google_compute_url_map.foobar"), ), }, @@ -33,7 +32,7 @@ func TestAccComputeUrlMap_update_path_matcher(t *testing.T) { Config: testAccComputeUrlMap_basic2(bsName, hcName, umName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeUrlMapExists( - "google_compute_url_map.foobar"), + t, "google_compute_url_map.foobar"), ), }, }, @@ -43,60 +42,113 @@ func TestAccComputeUrlMap_update_path_matcher(t *testing.T) { func TestAccComputeUrlMap_advanced(t *testing.T) { t.Parallel() - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeUrlMapDestroy, + CheckDestroy: testAccCheckComputeUrlMapDestroyProducer(t), Steps: []resource.TestStep{ { - Config: testAccComputeUrlMap_advanced1(), + Config: testAccComputeUrlMap_advanced1(randString(t, 10)), Check: resource.ComposeTestCheckFunc( testAccCheckComputeUrlMapExists( - "google_compute_url_map.foobar"), + t, "google_compute_url_map.foobar"), ), }, { - Config: testAccComputeUrlMap_advanced2(), + Config: testAccComputeUrlMap_advanced2(randString(t, 10)), Check: resource.ComposeTestCheckFunc( testAccCheckComputeUrlMapExists( - "google_compute_url_map.foobar"), + t, "google_compute_url_map.foobar"), ), }, }, }) } +func TestAccComputeUrlMap_defaultRouteActionPathUrlRewrite(t *testing.T) { + t.Parallel() + + vcrTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckComputeUrlMapDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccComputeUrlMap_defaultRouteActionPathUrlRewrite(randString(t, 10)), + Check: resource.ComposeTestCheckFunc( + testAccCheckComputeUrlMapExists( + t, "google_compute_url_map.foobar"), + ), + }, + { + Config: testAccComputeUrlMap_defaultRouteActionPathUrlRewrite_update(randString(t, 10)), + Check: resource.ComposeTestCheckFunc( + testAccCheckComputeUrlMapExists( + t, "google_compute_url_map.foobar"), + ), + }, + }, + }) +} + +func TestAccComputeUrlMap_defaultRouteActionUrlRewrite(t *testing.T) { + t.Parallel() + + vcrTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckComputeUrlMapDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccComputeUrlMap_defaultRouteActionUrlRewrite(randString(t, 10)), + Check: resource.ComposeTestCheckFunc( + testAccCheckComputeUrlMapExists( + t, "google_compute_url_map.foobar"), + ), + }, + + { + Config: testAccComputeUrlMap_defaultRouteActionUrlRewrite_update(randString(t, 10)), + Check: resource.ComposeTestCheckFunc( + testAccCheckComputeUrlMapExists( + t, "google_compute_url_map.foobar"), + ), + }, + }, + }) +} + func TestAccComputeUrlMap_noPathRulesWithUpdate(t *testing.T) { t.Parallel() - bsName := fmt.Sprintf("urlmap-test-%s", acctest.RandString(10)) - hcName := fmt.Sprintf("urlmap-test-%s", acctest.RandString(10)) - umName := fmt.Sprintf("urlmap-test-%s", 
acctest.RandString(10)) - resource.Test(t, resource.TestCase{ + bsName := fmt.Sprintf("urlmap-test-%s", randString(t, 10)) + hcName := fmt.Sprintf("urlmap-test-%s", randString(t, 10)) + umName := fmt.Sprintf("urlmap-test-%s", randString(t, 10)) + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeUrlMapDestroy, + CheckDestroy: testAccCheckComputeUrlMapDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeUrlMap_noPathRules(bsName, hcName, umName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeUrlMapExists( - "google_compute_url_map.foobar"), + t, "google_compute_url_map.foobar"), ), }, { Config: testAccComputeUrlMap_basic1(bsName, hcName, umName), Check: resource.ComposeTestCheckFunc( testAccCheckComputeUrlMapExists( - "google_compute_url_map.foobar"), + t, "google_compute_url_map.foobar"), ), }, }, }) } -func testAccCheckComputeUrlMapExists(n string) resource.TestCheckFunc { +func testAccCheckComputeUrlMapExists(t *testing.T, n string) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] if !ok { @@ -107,7 +159,7 @@ func testAccCheckComputeUrlMapExists(n string) resource.TestCheckFunc { return fmt.Errorf("No ID is set") } - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) name := rs.Primary.Attributes["name"] found, err := config.clientCompute.UrlMaps.Get( @@ -123,18 +175,84 @@ func testAccCheckComputeUrlMapExists(n string) resource.TestCheckFunc { } } +func TestAccComputeUrlMap_defaultRouteActionTrafficDirectorPathUpdate(t *testing.T) { + t.Parallel() + + randString := randString(t, 10) + + bsName := fmt.Sprintf("urlmap-test-%s", randString) + hcName := fmt.Sprintf("urlmap-test-%s", randString) + umName := fmt.Sprintf("urlmap-test-%s", randString) + vcrTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckComputeUrlMapDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccComputeUrlMap_defaultRouteActionTrafficDirectorPath(bsName, hcName, umName), + }, + { + ResourceName: "google_compute_url_map.foobar", + ImportState: true, + ImportStateVerify: true, + }, + { + Config: testAccComputeUrlMap_defaultRouteActionTrafficDirectorPathUpdate(bsName, hcName, umName), + }, + { + ResourceName: "google_compute_url_map.foobar", + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccComputeUrlMap_defaultRouteActionTrafficDirectorUpdate(t *testing.T) { + t.Parallel() + + randString := randString(t, 10) + + bsName := fmt.Sprintf("urlmap-test-%s", randString) + hcName := fmt.Sprintf("urlmap-test-%s", randString) + umName := fmt.Sprintf("urlmap-test-%s", randString) + vcrTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckComputeUrlMapDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccComputeUrlMap_defaultRouteActionTrafficDirector(bsName, hcName, umName), + }, + { + ResourceName: "google_compute_url_map.foobar", + ImportState: true, + ImportStateVerify: true, + }, + { + Config: testAccComputeUrlMap_defaultRouteActionTrafficDirectorUpdate(bsName, hcName, umName), + }, + { + ResourceName: "google_compute_url_map.foobar", + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + func TestAccComputeUrlMap_trafficDirectorUpdate(t *testing.T) { t.Parallel() - randString := acctest.RandString(10) + 
randString := randString(t, 10) bsName := fmt.Sprintf("urlmap-test-%s", randString) hcName := fmt.Sprintf("urlmap-test-%s", randString) umName := fmt.Sprintf("urlmap-test-%s", randString) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeUrlMapDestroy, + CheckDestroy: testAccCheckComputeUrlMapDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeUrlMap_trafficDirector(bsName, hcName, umName), @@ -159,15 +277,15 @@ func TestAccComputeUrlMap_trafficDirectorUpdate(t *testing.T) { func TestAccComputeUrlMap_trafficDirectorPathUpdate(t *testing.T) { t.Parallel() - randString := acctest.RandString(10) + randString := randString(t, 10) bsName := fmt.Sprintf("urlmap-test-%s", randString) hcName := fmt.Sprintf("urlmap-test-%s", randString) umName := fmt.Sprintf("urlmap-test-%s", randString) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeUrlMapDestroy, + CheckDestroy: testAccCheckComputeUrlMapDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeUrlMap_trafficDirectorPath(bsName, hcName, umName), @@ -192,15 +310,15 @@ func TestAccComputeUrlMap_trafficDirectorPathUpdate(t *testing.T) { func TestAccComputeUrlMap_trafficDirectorRemoveRouteRule(t *testing.T) { t.Parallel() - randString := acctest.RandString(10) + randString := randString(t, 10) bsName := fmt.Sprintf("urlmap-test-%s", randString) hcName := fmt.Sprintf("urlmap-test-%s", randString) umName := fmt.Sprintf("urlmap-test-%s", randString) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeUrlMapDestroy, + CheckDestroy: testAccCheckComputeUrlMapDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccComputeUrlMap_trafficDirector(bsName, hcName, umName), @@ -222,6 +340,28 @@ func TestAccComputeUrlMap_trafficDirectorRemoveRouteRule(t *testing.T) { }) } +func TestAccComputeUrlMap_defaultUrlRedirect(t *testing.T) { + t.Parallel() + + randomSuffix := randString(t, 10) + + vcrTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckComputeUrlMapDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccComputeUrlMap_defaultUrlRedirectConfig(randomSuffix), + }, + { + ResourceName: "google_compute_url_map.foobar", + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + func testAccComputeUrlMap_basic1(bsName, hcName, umName string) string { return fmt.Sprintf(` resource "google_compute_backend_service" "foobar" { @@ -306,7 +446,7 @@ resource "google_compute_url_map" "foobar" { `, bsName, hcName, umName) } -func testAccComputeUrlMap_advanced1() string { +func testAccComputeUrlMap_advanced1(suffix string) string { return fmt.Sprintf(` resource "google_compute_backend_service" "foobar" { name = "urlmap-test-%s" @@ -354,10 +494,10 @@ resource "google_compute_url_map" "foobar" { } } } -`, acctest.RandString(10), acctest.RandString(10), acctest.RandString(10)) +`, suffix, suffix, suffix) } -func testAccComputeUrlMap_advanced2() string { +func testAccComputeUrlMap_advanced2(suffix string) string { return fmt.Sprintf(` resource "google_compute_backend_service" "foobar" { name = "urlmap-test-%s" @@ -425,7 +565,159 @@ resource "google_compute_url_map" 
"foobar" { } } } -`, acctest.RandString(10), acctest.RandString(10), acctest.RandString(10)) +`, suffix, suffix, suffix) +} + +func testAccComputeUrlMap_defaultRouteActionPathUrlRewrite(suffix string) string { + return fmt.Sprintf(` +resource "google_compute_backend_service" "foobar" { + name = "urlmap-test-%s" + health_checks = [google_compute_http_health_check.zero.self_link] +} + +resource "google_compute_http_health_check" "zero" { + name = "urlmap-test-%s" + request_path = "/" + check_interval_sec = 1 + timeout_sec = 1 +} + +resource "google_compute_url_map" "foobar" { + name = "urlmap-test-%s" + default_service = google_compute_backend_service.foobar.self_link + + host_rule { + hosts = ["mysite.com", "myothersite.com"] + path_matcher = "blep" + } + + path_matcher { + default_service = google_compute_backend_service.foobar.self_link + name = "blep" + + path_rule { + paths = ["/home"] + service = google_compute_backend_service.foobar.self_link + } + + path_rule { + paths = ["/login"] + service = google_compute_backend_service.foobar.self_link + } + + default_route_action { + url_rewrite { + host_rewrite = "my-new-host" + path_prefix_rewrite = "my-new-path" + } + } + } +} +`, suffix, suffix, suffix) +} + +func testAccComputeUrlMap_defaultRouteActionPathUrlRewrite_update(suffix string) string { + return fmt.Sprintf(` +resource "google_compute_backend_service" "foobar" { + name = "urlmap-test-%s" + health_checks = [google_compute_http_health_check.zero.self_link] +} + +resource "google_compute_http_health_check" "zero" { + name = "urlmap-test-%s" + request_path = "/" + check_interval_sec = 1 + timeout_sec = 1 +} + +resource "google_compute_url_map" "foobar" { + name = "urlmap-test-%s" + default_service = google_compute_backend_service.foobar.self_link + + host_rule { + hosts = ["mysite.com", "myothersite.com"] + path_matcher = "blep" + } + + path_matcher { + default_service = google_compute_backend_service.foobar.self_link + name = "blep" + + path_rule { + paths = ["/home"] + service = google_compute_backend_service.foobar.self_link + } + + path_rule { + paths = ["/login"] + service = google_compute_backend_service.foobar.self_link + } + + default_route_action { + url_rewrite { + host_rewrite = "a-different-host" + path_prefix_rewrite = "a-different-path" + } + } + } +} +`, suffix, suffix, suffix) +} + +func testAccComputeUrlMap_defaultRouteActionUrlRewrite(suffix string) string { + return fmt.Sprintf(` +resource "google_compute_backend_service" "foobar" { + name = "urlmap-test-%s" + health_checks = [google_compute_http_health_check.zero.self_link] +} + +resource "google_compute_http_health_check" "zero" { + name = "urlmap-test-%s" + request_path = "/" + check_interval_sec = 1 + timeout_sec = 1 +} + +resource "google_compute_url_map" "foobar" { + name = "urlmap-test-%s" + default_service = google_compute_backend_service.foobar.self_link + + default_route_action { + url_rewrite { + host_rewrite = "my-new-host" + path_prefix_rewrite = "my-new-path" + } + } +} +`, suffix, suffix, suffix) +} + +func testAccComputeUrlMap_defaultRouteActionUrlRewrite_update(suffix string) string { + return fmt.Sprintf(` +resource "google_compute_backend_service" "foobar" { + name = "urlmap-test-%s" + health_checks = [google_compute_http_health_check.zero.self_link] +} + +resource "google_compute_http_health_check" "zero" { + name = "urlmap-test-%s" + request_path = "/" + check_interval_sec = 1 + timeout_sec = 1 +} + +resource "google_compute_url_map" "foobar" { + name = "urlmap-test-%s" + default_service = 
google_compute_backend_service.foobar.self_link + + default_route_action { + url_rewrite { + host_rewrite = "a-different-host" + path_prefix_rewrite = "a-different-path" + } + } +} +`, suffix, suffix, suffix) } func testAccComputeUrlMap_noPathRules(bsName, hcName, umName string) string { @@ -942,3 +1234,454 @@ resource "google_compute_health_check" "default" { } `, umName, bsName, bsName, hcName) } + +func testAccComputeUrlMap_defaultRouteActionTrafficDirectorPath(bsName, hcName, umName string) string { + return fmt.Sprintf(` +resource "google_compute_url_map" "foobar" { + name = "%s" + description = "a description" + default_service = google_compute_backend_service.home.self_link + + host_rule { + hosts = ["mysite.com"] + path_matcher = "allpaths" + } + + path_matcher { + name = "allpaths" + + default_route_action { + cors_policy { + allow_credentials = true + allow_headers = ["Allowed content"] + allow_methods = ["GET"] + allow_origin_regexes = ["abc.*"] + allow_origins = ["Allowed origin"] + expose_headers = ["Exposed header"] + max_age = 30 + disabled = true + } + fault_injection_policy { + abort { + http_status = 234 + percentage = 5.6 + } + delay { + fixed_delay { + seconds = 0 + nanos = 50000 + } + percentage = 7.8 + } + } + request_mirror_policy { + backend_service = google_compute_backend_service.home.self_link + } + retry_policy { + num_retries = 4 + per_try_timeout { + seconds = 30 + } + retry_conditions = ["5xx", "deadline-exceeded"] + } + timeout { + seconds = 20 + nanos = 750000000 + } + url_rewrite { + host_rewrite = "A replacement header" + path_prefix_rewrite = "A replacement path" + } + weighted_backend_services { + backend_service = google_compute_backend_service.home.self_link + weight = 400 + header_action { + request_headers_to_remove = ["RemoveMe"] + request_headers_to_add { + header_name = "AddMe" + header_value = "MyValue" + replace = true + } + response_headers_to_remove = ["RemoveMe"] + response_headers_to_add { + header_name = "AddMe" + header_value = "MyValue" + replace = false + } + } + } + } + } + + test { + service = google_compute_backend_service.home.self_link + host = "hi.com" + path = "/home" + } +} + +resource "google_compute_backend_service" "home" { + name = "%s" + port_name = "http" + protocol = "HTTP" + timeout_sec = 10 + + health_checks = [google_compute_health_check.default.self_link] + load_balancing_scheme = "INTERNAL_SELF_MANAGED" +} + +resource "google_compute_backend_service" "home2" { + name = "%s-2" + port_name = "http" + protocol = "HTTP" + timeout_sec = 10 + + health_checks = [google_compute_health_check.default.self_link] + load_balancing_scheme = "INTERNAL_SELF_MANAGED" +} + +resource "google_compute_health_check" "default" { + name = "%s" + http_health_check { + port = 80 + } +} + +`, umName, bsName, bsName, hcName) +} + +func testAccComputeUrlMap_defaultRouteActionTrafficDirectorPathUpdate(bsName, hcName, umName string) string { + return fmt.Sprintf(` +resource "google_compute_url_map" "foobar" { + name = "%s" + description = "a description" + default_service = google_compute_backend_service.home2.self_link + + host_rule { + hosts = ["mysite.com"] + path_matcher = "allpaths2" + } + + path_matcher { + name = "allpaths2" + + default_route_action { + cors_policy { + allow_credentials = false + allow_headers = ["Allowed content updated"] + allow_methods = ["PUT"] + allow_origin_regexes = ["abcdef.*"] + allow_origins = ["Allowed origin updated"] + expose_headers = ["Exposed header updated"] + max_age = 31 + disabled = false + } + 
fault_injection_policy { + abort { + http_status = 235 + percentage = 6.7 + } + delay { + fixed_delay { + seconds = 1 + nanos = 40000 + } + percentage = 8.9 + } + } + request_mirror_policy { + backend_service = google_compute_backend_service.home.self_link + } + retry_policy { + num_retries = 5 + per_try_timeout { + seconds = 31 + } + retry_conditions = ["5xx"] + } + timeout { + seconds = 21 + nanos = 760000000 + } + url_rewrite { + host_rewrite = "A replacement header updated" + path_prefix_rewrite = "A replacement path updated" + } + weighted_backend_services { + backend_service = google_compute_backend_service.home.self_link + weight = 400 + header_action { + request_headers_to_remove = ["RemoveMeUpdated"] + request_headers_to_add { + header_name = "AddMeUpdated" + header_value = "MyValueUpdated" + replace = false + } + response_headers_to_remove = ["RemoveMeUpdated"] + response_headers_to_add { + header_name = "AddMeUpdated" + header_value = "MyValueUpdated" + replace = true + } + } + } + } + } + + test { + service = google_compute_backend_service.home.self_link + host = "hi.com" + path = "/home" + } +} + +resource "google_compute_backend_service" "home" { + name = "%s" + port_name = "http" + protocol = "HTTP" + timeout_sec = 10 + + health_checks = [google_compute_health_check.default.self_link] + load_balancing_scheme = "INTERNAL_SELF_MANAGED" +} + +resource "google_compute_backend_service" "home2" { + name = "%s-2" + port_name = "http" + protocol = "HTTP" + timeout_sec = 10 + + health_checks = [google_compute_health_check.default.self_link] + load_balancing_scheme = "INTERNAL_SELF_MANAGED" +} + +resource "google_compute_health_check" "default" { + name = "%s" + http_health_check { + port = 80 + } +} +`, umName, bsName, bsName, hcName) +} + + +func testAccComputeUrlMap_defaultRouteActionTrafficDirector(bsName, hcName, umName string) string { + return fmt.Sprintf(` +resource "google_compute_url_map" "foobar" { + name = "%s" + description = "a description" + + default_route_action { + cors_policy { + allow_credentials = true + allow_headers = ["Allowed content"] + allow_methods = ["GET"] + allow_origin_regexes = ["abc.*"] + allow_origins = ["Allowed origin"] + expose_headers = ["Exposed header"] + max_age = 30 + disabled = true + } + fault_injection_policy { + abort { + http_status = 234 + percentage = 5.6 + } + delay { + fixed_delay { + seconds = 0 + nanos = 50000 + } + percentage = 7.8 + } + } + request_mirror_policy { + backend_service = google_compute_backend_service.home.self_link + } + retry_policy { + num_retries = 4 + per_try_timeout { + seconds = 30 + } + retry_conditions = ["5xx", "deadline-exceeded"] + } + timeout { + seconds = 20 + nanos = 750000000 + } + url_rewrite { + host_rewrite = "A replacement header" + path_prefix_rewrite = "A replacement path" + } + weighted_backend_services { + backend_service = google_compute_backend_service.home.self_link + weight = 400 + header_action { + request_headers_to_remove = ["RemoveMe"] + request_headers_to_add { + header_name = "AddMe" + header_value = "MyValue" + replace = true + } + response_headers_to_remove = ["RemoveMe"] + response_headers_to_add { + header_name = "AddMe" + header_value = "MyValue" + replace = false + } + } + } + } + + test { + service = google_compute_backend_service.home.self_link + host = "hi.com" + path = "/home" + } +} + +resource "google_compute_backend_service" "home" { + name = "%s" + port_name = "http" + protocol = "HTTP" + timeout_sec = 10 + + health_checks = 
[google_compute_health_check.default.self_link] + load_balancing_scheme = "INTERNAL_SELF_MANAGED" +} + +resource "google_compute_backend_service" "home2" { + name = "%s-2" + port_name = "http" + protocol = "HTTP" + timeout_sec = 10 + + health_checks = [google_compute_health_check.default.self_link] + load_balancing_scheme = "INTERNAL_SELF_MANAGED" +} + +resource "google_compute_health_check" "default" { + name = "%s" + http_health_check { + port = 80 + } +} + +`, umName, bsName, bsName, hcName) +} + +func testAccComputeUrlMap_defaultRouteActionTrafficDirectorUpdate(bsName, hcName, umName string) string { + return fmt.Sprintf(` +resource "google_compute_url_map" "foobar" { + name = "%s" + description = "a description" + + default_route_action { + cors_policy { + allow_credentials = false + allow_headers = ["Allowed content updated"] + allow_methods = ["PUT"] + allow_origin_regexes = ["abcdef.*"] + allow_origins = ["Allowed origin updated"] + expose_headers = ["Exposed header updated"] + max_age = 31 + disabled = false + } + fault_injection_policy { + abort { + http_status = 235 + percentage = 6.7 + } + delay { + fixed_delay { + seconds = 1 + nanos = 40000 + } + percentage = 8.9 + } + } + request_mirror_policy { + backend_service = google_compute_backend_service.home2.self_link + } + retry_policy { + num_retries = 5 + per_try_timeout { + seconds = 31 + } + retry_conditions = ["5xx"] + } + timeout { + seconds = 21 + nanos = 760000000 + } + url_rewrite { + host_rewrite = "A replacement header updated" + path_prefix_rewrite = "A replacement path updated" + } + weighted_backend_services { + backend_service = google_compute_backend_service.home2.self_link + weight = 400 + header_action { + request_headers_to_remove = ["RemoveMeUpdated"] + request_headers_to_add { + header_name = "AddMeUpdated" + header_value = "MyValueUpdated" + replace = false + } + response_headers_to_remove = ["RemoveMeUpdated"] + response_headers_to_add { + header_name = "AddMeUpdated" + header_value = "MyValueUpdated" + replace = true + } + } + } + } + + test { + service = google_compute_backend_service.home2.self_link + host = "hi.com" + path = "/home" + } +} + +resource "google_compute_backend_service" "home" { + name = "%s" + port_name = "http" + protocol = "HTTP" + timeout_sec = 10 + + health_checks = [google_compute_health_check.default.self_link] + load_balancing_scheme = "INTERNAL_SELF_MANAGED" +} + +resource "google_compute_backend_service" "home2" { + name = "%s-2" + port_name = "http" + protocol = "HTTP" + timeout_sec = 10 + + health_checks = [google_compute_health_check.default.self_link] + load_balancing_scheme = "INTERNAL_SELF_MANAGED" +} + +resource "google_compute_health_check" "default" { + name = "%s" + http_health_check { + port = 80 + } +} +`, umName, bsName, bsName, hcName) +} + +func testAccComputeUrlMap_defaultUrlRedirectConfig(randomSuffix string) string { + return fmt.Sprintf(` +resource "google_compute_url_map" "foobar" { + name = "urlmap-test-%s" + default_url_redirect { + https_redirect = true + strip_query = false + } +} +`, randomSuffix) +} diff --git a/third_party/terraform/tests/resource_compute_vpn_tunnel_test.go b/third_party/terraform/tests/resource_compute_vpn_tunnel_test.go index b8a4830712c4..c36c988ab96a 100644 --- a/third_party/terraform/tests/resource_compute_vpn_tunnel_test.go +++ b/third_party/terraform/tests/resource_compute_vpn_tunnel_test.go @@ -4,7 +4,6 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" 
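// Editorial note, not part of the diff: the acctest import is removed from
// every test file this patch touches, because acctest.RandString is replaced
// throughout by the package-local randString(t, n) helper. The diff itself
// does not say why, but presumably the helper lets random names be generated
// deterministically when a test is replayed against recorded VCR cassettes
// via vcrTest.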
"github.com/hashicorp/terraform-plugin-sdk/helper/resource" ) @@ -18,13 +17,13 @@ func TestAccComputeVpnTunnel_regionFromGateway(t *testing.T) { region = "us-west1" } - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeVpnTunnelDestroy, + CheckDestroy: testAccCheckComputeVpnTunnelDestroyProducer(t), Steps: []resource.TestStep{ { - Config: testAccComputeVpnTunnel_regionFromGateway(region), + Config: testAccComputeVpnTunnel_regionFromGateway(randString(t, 10), region), }, { ResourceName: "google_compute_vpn_tunnel.foobar", @@ -40,14 +39,14 @@ func TestAccComputeVpnTunnel_regionFromGateway(t *testing.T) { func TestAccComputeVpnTunnel_router(t *testing.T) { t.Parallel() - router := fmt.Sprintf("tf-test-tunnel-%s", acctest.RandString(10)) - resource.Test(t, resource.TestCase{ + router := fmt.Sprintf("tf-test-tunnel-%s", randString(t, 10)) + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeVpnTunnelDestroy, + CheckDestroy: testAccCheckComputeVpnTunnelDestroyProducer(t), Steps: []resource.TestStep{ { - Config: testAccComputeVpnTunnelRouter(router), + Config: testAccComputeVpnTunnelRouter(randString(t, 10), router), }, { ResourceName: "google_compute_vpn_tunnel.foobar", @@ -62,13 +61,13 @@ func TestAccComputeVpnTunnel_router(t *testing.T) { func TestAccComputeVpnTunnel_defaultTrafficSelectors(t *testing.T) { t.Parallel() - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeVpnTunnelDestroy, + CheckDestroy: testAccCheckComputeVpnTunnelDestroyProducer(t), Steps: []resource.TestStep{ { - Config: testAccComputeVpnTunnelDefaultTrafficSelectors(), + Config: testAccComputeVpnTunnelDefaultTrafficSelectors(randString(t, 10)), }, { ResourceName: "google_compute_vpn_tunnel.foobar", @@ -80,7 +79,7 @@ func TestAccComputeVpnTunnel_defaultTrafficSelectors(t *testing.T) { }) } -func testAccComputeVpnTunnel_regionFromGateway(region string) string { +func testAccComputeVpnTunnel_regionFromGateway(suffix, region string) string { return fmt.Sprintf(` resource "google_compute_network" "foobar" { name = "tf-test-%[1]s" @@ -141,10 +140,10 @@ resource "google_compute_vpn_tunnel" "foobar" { depends_on = [google_compute_forwarding_rule.foobar_udp4500] } -`, acctest.RandString(10), region) +`, suffix, region) } -func testAccComputeVpnTunnelRouter(router string) string { +func testAccComputeVpnTunnelRouter(suffix, router string) string { return fmt.Sprintf(` resource "google_compute_network" "foobar" { name = "tf-test-%[1]s" @@ -212,10 +211,10 @@ resource "google_compute_vpn_tunnel" "foobar" { peer_ip = "8.8.8.8" router = google_compute_router.foobar.self_link } -`, acctest.RandString(10), router) +`, suffix, router) } -func testAccComputeVpnTunnelDefaultTrafficSelectors() string { +func testAccComputeVpnTunnelDefaultTrafficSelectors(suffix string) string { return fmt.Sprintf(` resource "google_compute_network" "foobar" { name = "tf-test-%[1]s" @@ -266,5 +265,5 @@ resource "google_compute_vpn_tunnel" "foobar" { shared_secret = "unguessable" peer_ip = "8.8.8.8" } -`, acctest.RandString(10)) +`, suffix) } diff --git a/third_party/terraform/tests/resource_container_analysis_occurrence_test.go b/third_party/terraform/tests/resource_container_analysis_occurrence_test.go new file mode 100644 index 
000000000000..f095d0c24285 --- /dev/null +++ b/third_party/terraform/tests/resource_container_analysis_occurrence_test.go @@ -0,0 +1,277 @@ +package google + +import ( + "encoding/base64" + "fmt" + "io/ioutil" + "testing" + + "crypto/sha512" + "github.com/hashicorp/terraform-plugin-sdk/helper/resource" + "google.golang.org/api/cloudkms/v1" +) + +const testAttestationOccurrenceImageUrl = "gcr.io/cloud-marketplace/google/ubuntu1804" +const testAttestationOccurrenceImageDigest = "sha256:3593cd4ac7d782d460dc86ba9870a3beaf81c8f5cdbcc8880bf9a5ef6af10c5a" +const testAttestationOccurrencePayloadTemplate = "test-fixtures/binauthz/generated_payload.json.tmpl" + +var testAttestationOccurrenceFullImagePath = fmt.Sprintf("%s@%s", testAttestationOccurrenceImageUrl, testAttestationOccurrenceImageDigest) + +func getTestOccurrenceAttestationPayload(t *testing.T) string { + payloadTmpl, err := ioutil.ReadFile(testAttestationOccurrencePayloadTemplate) + if err != nil { + t.Fatal(err.Error()) + } + return fmt.Sprintf(string(payloadTmpl), + testAttestationOccurrenceImageUrl, + testAttestationOccurrenceImageDigest) +} + +func getSignedTestOccurrenceAttestationPayload( + t *testing.T, config *Config, + signingKey bootstrappedKMS, rawPayload string) string { + pbytes := []byte(rawPayload) + ssum := sha512.Sum512(pbytes) + hashed := base64.StdEncoding.EncodeToString(ssum[:]) + signed, err := config.clientKms.Projects.Locations.KeyRings.CryptoKeys. + CryptoKeyVersions.AsymmetricSign( + fmt.Sprintf("%s/cryptoKeyVersions/1", signingKey.CryptoKey.Name), + &cloudkms.AsymmetricSignRequest{ + Digest: &cloudkms.Digest{ + Sha512: hashed, + }, + }).Do() + if err != nil { + t.Fatalf("Unable to sign attestation payload with KMS key: %s", err) + } + + return signed.Signature +} + +func TestAccContainerAnalysisOccurrence_basic(t *testing.T) { + t.Parallel() + randSuffix := randString(t, 10) + + config := BootstrapConfig(t) + if config == nil { + return + } + + signKey := BootstrapKMSKeyWithPurpose(t, "ASYMMETRIC_SIGN") + payload := getTestOccurrenceAttestationPayload(t) + signed := getSignedTestOccurrenceAttestationPayload(t, config, signKey, payload) + params := map[string]interface{}{ + "random_suffix": randSuffix, + "image_url": testAttestationOccurrenceFullImagePath, + "key_ring": GetResourceNameFromSelfLink(signKey.KeyRing.Name), + "crypto_key": GetResourceNameFromSelfLink(signKey.CryptoKey.Name), + "payload": base64.StdEncoding.EncodeToString([]byte(payload)), + "signature": base64.StdEncoding.EncodeToString([]byte(signed)), + } + + vcrTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckContainerAnalysisNoteDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccContainerAnalysisOccurence_basic(params), + }, + { + ResourceName: "google_container_analysis_occurrence.occurrence", + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccContainerAnalysisOccurrence_multipleSignatures(t *testing.T) { + t.Parallel() + randSuffix := randString(t, 10) + + config := BootstrapConfig(t) + if config == nil { + return + } + + payload := getTestOccurrenceAttestationPayload(t) + key1 := BootstrapKMSKeyWithPurposeInLocationAndName(t, "ASYMMETRIC_SIGN", "global", "tf-bootstrap-binauthz-key1") + signature1 := getSignedTestOccurrenceAttestationPayload(t, config, key1, payload) + + key2 := BootstrapKMSKeyWithPurposeInLocationAndName(t, "ASYMMETRIC_SIGN", "global", "tf-bootstrap-binauthz-key2") + signature2 := 
getSignedTestOccurrenceAttestationPayload(t, config, key2, payload) + + paramsMultipleSignatures := map[string]interface{}{ + "random_suffix": randSuffix, + "image_url": testAttestationOccurrenceFullImagePath, + "key_ring": GetResourceNameFromSelfLink(key1.KeyRing.Name), + "payload": base64.StdEncoding.EncodeToString([]byte(payload)), + "key1": GetResourceNameFromSelfLink(key1.CryptoKey.Name), + "signature1": base64.StdEncoding.EncodeToString([]byte(signature1)), + "key2": GetResourceNameFromSelfLink(key2.CryptoKey.Name), + "signature2": base64.StdEncoding.EncodeToString([]byte(signature2)), + } + paramsSingle := map[string]interface{}{ + "random_suffix": randSuffix, + "image_url": testAttestationOccurrenceFullImagePath, + "key_ring": GetResourceNameFromSelfLink(key1.KeyRing.Name), + "crypto_key": GetResourceNameFromSelfLink(key1.CryptoKey.Name), + "payload": base64.StdEncoding.EncodeToString([]byte(payload)), + "signature": base64.StdEncoding.EncodeToString([]byte(signature1)), + } + + vcrTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckContainerAnalysisNoteDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccContainerAnalysisOccurence_multipleSignatures(paramsMultipleSignatures), + }, + { + ResourceName: "google_container_analysis_occurrence.occurrence", + ImportState: true, + ImportStateVerify: true, + }, + { + Config: testAccContainerAnalysisOccurence_basic(paramsSingle), + }, + { + ResourceName: "google_container_analysis_occurrence.occurrence", + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func testAccContainerAnalysisOccurence_basic(params map[string]interface{}) string { + return Nprintf(` +resource "google_binary_authorization_attestor" "attestor" { + name = "test-attestor%{random_suffix}" + attestation_authority_note { + note_reference = google_container_analysis_note.note.name + public_keys { + id = data.google_kms_crypto_key_version.version.id + pkix_public_key { + public_key_pem = data.google_kms_crypto_key_version.version.public_key[0].pem + signature_algorithm = data.google_kms_crypto_key_version.version.public_key[0].algorithm + } + } + } +} + +resource "google_container_analysis_note" "note" { + name = "test-attestor-note%{random_suffix}" + attestation_authority { + hint { + human_readable_name = "Attestor Note" + } + } +} + +data "google_kms_key_ring" "keyring" { + name = "%{key_ring}" + location = "global" +} + +data "google_kms_crypto_key" "crypto-key" { + name = "%{crypto_key}" + key_ring = data.google_kms_key_ring.keyring.self_link +} + +data "google_kms_crypto_key_version" "version" { + crypto_key = data.google_kms_crypto_key.crypto-key.self_link +} + +resource "google_container_analysis_occurrence" "occurrence" { + resource_uri = "%{image_url}" + note_name = google_container_analysis_note.note.id + + attestation { + serialized_payload = "%{payload}" + signatures { + public_key_id = data.google_kms_crypto_key_version.version.id + signature = "%{signature}" + } + } +} +`, params) +} + +func testAccContainerAnalysisOccurence_multipleSignatures(params map[string]interface{}) string { + return Nprintf(` +resource "google_binary_authorization_attestor" "attestor" { + name = "test-attestor%{random_suffix}" + attestation_authority_note { + note_reference = google_container_analysis_note.note.name + public_keys { + id = data.google_kms_crypto_key_version.version-key1.id + pkix_public_key { + public_key_pem = 
data.google_kms_crypto_key_version.version-key1.public_key[0].pem + signature_algorithm = data.google_kms_crypto_key_version.version-key1.public_key[0].algorithm + } + } + + public_keys { + id = data.google_kms_crypto_key_version.version-key2.id + pkix_public_key { + public_key_pem = data.google_kms_crypto_key_version.version-key2.public_key[0].pem + signature_algorithm = data.google_kms_crypto_key_version.version-key2.public_key[0].algorithm + } + } + } +} + +resource "google_container_analysis_note" "note" { + name = "test-attestor-note%{random_suffix}" + attestation_authority { + hint { + human_readable_name = "Attestor Note" + } + } +} + +data "google_kms_key_ring" "keyring" { + name = "%{key_ring}" + location = "global" +} + +data "google_kms_crypto_key" "crypto-key1" { + name = "%{key1}" + key_ring = data.google_kms_key_ring.keyring.self_link +} + +data "google_kms_crypto_key" "crypto-key2" { + name = "%{key2}" + key_ring = data.google_kms_key_ring.keyring.self_link +} + +data "google_kms_crypto_key_version" "version-key1" { + crypto_key = data.google_kms_crypto_key.crypto-key1.self_link +} + +data "google_kms_crypto_key_version" "version-key2" { + crypto_key = data.google_kms_crypto_key.crypto-key2.self_link +} + +resource "google_container_analysis_occurrence" "occurrence" { + resource_uri = "%{image_url}" + note_name = google_container_analysis_note.note.id + + attestation { + serialized_payload = "%{payload}" + signatures { + public_key_id = data.google_kms_crypto_key_version.version-key1.id + signature = "%{signature1}" + } + + signatures { + public_key_id = data.google_kms_crypto_key_version.version-key2.id + signature = "%{signature2}" + } + } +} +`, params) +} diff --git a/third_party/terraform/tests/resource_container_cluster_test.go.erb b/third_party/terraform/tests/resource_container_cluster_test.go.erb index 3612df8045fe..c1b72fae4eb6 100644 --- a/third_party/terraform/tests/resource_container_cluster_test.go.erb +++ b/third_party/terraform/tests/resource_container_cluster_test.go.erb @@ -66,7 +66,7 @@ func TestAccContainerCluster_basic(t *testing.T) { vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckContainerClusterDestroy, + CheckDestroy: testAccCheckContainerClusterDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccContainerCluster_basic(clusterName), @@ -92,7 +92,7 @@ func TestAccContainerCluster_basic(t *testing.T) { ImportStateVerify: true, }, }, - }, testAccCheckContainerClusterDestroyProducer) + }) } func TestAccContainerCluster_misc(t *testing.T) { @@ -102,7 +102,7 @@ func TestAccContainerCluster_misc(t *testing.T) { vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckContainerClusterDestroy, + CheckDestroy: testAccCheckContainerClusterDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccContainerCluster_misc(clusterName), @@ -127,39 +127,40 @@ func TestAccContainerCluster_misc(t *testing.T) { ImportStateVerifyIgnore: []string{"remove_default_node_pool"}, }, }, - }, testAccCheckContainerClusterDestroyProducer) + }) } func TestAccContainerCluster_withAddons(t *testing.T) { t.Parallel() clusterName := fmt.Sprintf("tf-test-cluster-%s", randString(t, 10)) + pid := getTestProjectFromEnv() vcrTest(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t) }, - Providers: testAccProviders, - CheckDestroy: testAccCheckContainerClusterDestroy, + PreCheck: func() { testAccPreCheck(t) }, + 
Providers: testAccProviders, + CheckDestroy: testAccCheckContainerClusterDestroyProducer(t), Steps: []resource.TestStep{ { - Config: testAccContainerCluster_withAddons(clusterName), + Config: testAccContainerCluster_withAddons(pid, clusterName), }, { - ResourceName: "google_container_cluster.primary", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.primary", + ImportState: true, + ImportStateVerify: true, ImportStateVerifyIgnore: []string{"min_master_version"}, }, { - Config: testAccContainerCluster_updateAddons(clusterName), + Config: testAccContainerCluster_updateAddons(pid, clusterName), }, { - ResourceName: "google_container_cluster.primary", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.primary", + ImportState: true, + ImportStateVerify: true, ImportStateVerifyIgnore: []string{"min_master_version"}, }, }, - }, testAccCheckContainerClusterDestroyProducer) + }) } func TestAccContainerCluster_withMasterAuthConfig(t *testing.T) { @@ -170,7 +171,7 @@ func TestAccContainerCluster_withMasterAuthConfig(t *testing.T) { vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckContainerClusterDestroy, + CheckDestroy: testAccCheckContainerClusterDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccContainerCluster_withMasterAuth(clusterName), @@ -217,7 +218,7 @@ func TestAccContainerCluster_withMasterAuthConfig(t *testing.T) { ImportStateVerify: true, }, }, - }, testAccCheckContainerClusterDestroyProducer) + }) } func TestAccContainerCluster_withMasterAuthConfig_NoCert(t *testing.T) { @@ -228,7 +229,7 @@ func TestAccContainerCluster_withMasterAuthConfig_NoCert(t *testing.T) { vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckContainerClusterDestroy, + CheckDestroy: testAccCheckContainerClusterDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccContainerCluster_withMasterAuthNoCert(clusterName), @@ -242,7 +243,7 @@ func TestAccContainerCluster_withMasterAuthConfig_NoCert(t *testing.T) { ImportStateVerify: true, }, }, - }, testAccCheckContainerClusterDestroyProducer) + }) } func TestAccContainerCluster_withAuthenticatorGroupsConfig(t *testing.T) { @@ -252,7 +253,7 @@ func TestAccContainerCluster_withAuthenticatorGroupsConfig(t *testing.T) { vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckContainerClusterDestroy, + CheckDestroy: testAccCheckContainerClusterDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccContainerCluster_withAuthenticatorGroupsConfig(containerNetName, clusterName), @@ -263,7 +264,7 @@ func TestAccContainerCluster_withAuthenticatorGroupsConfig(t *testing.T) { ImportStateVerify: true, }, }, - }, testAccCheckContainerClusterDestroyProducer) + }) } func TestAccContainerCluster_withNetworkPolicyEnabled(t *testing.T) { @@ -274,7 +275,7 @@ func TestAccContainerCluster_withNetworkPolicyEnabled(t *testing.T) { vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckContainerClusterDestroy, + CheckDestroy: testAccCheckContainerClusterDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccContainerCluster_withNetworkPolicyEnabled(clusterName), @@ -334,7 +335,7 @@ func TestAccContainerCluster_withNetworkPolicyEnabled(t *testing.T) { ExpectNonEmptyPlan: false, }, }, 
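// Editorial note, not part of the diff: the hunks below also change the shape
// of the vcrTest call. The destroy check used to be passed to vcrTest as a
// trailing producer argument; it is now invoked immediately and assigned to
// the TestCase's standard CheckDestroy field. The producer pattern mirrors
// the Exists helpers rewritten earlier in this diff: a function taking
// *testing.T and returning a resource.TestCheckFunc, so the check can read
// the per-test provider config. A minimal sketch of such a producer, assuming
// the helper names used throughout this diff (googleProviderConfig and the
// Config.clientCompute client); the function and resource names here are
// illustrative only:

func testAccCheckExampleDestroyProducer(t *testing.T) resource.TestCheckFunc {
	return func(s *terraform.State) error {
		config := googleProviderConfig(t)
		for _, rs := range s.RootModule().Resources {
			if rs.Type != "google_compute_url_map" {
				continue
			}
			// If Get still succeeds after teardown, the resource was not
			// actually destroyed.
			name := rs.Primary.Attributes["name"]
			if _, err := config.clientCompute.UrlMaps.Get(config.Project, name).Do(); err == nil {
				return fmt.Errorf("UrlMap %q still exists", name)
			}
		}
		return nil
	}
}

// Binding *testing.T at construction time is what lets the closure run under
// VCR: the config it reads is the one recording or replaying this test's
// cassette, rather than the shared testAccProvider.Meta() instance that the
// old checks relied on.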
- }, testAccCheckContainerClusterDestroyProducer) + }) } <% unless version == 'ga' -%> @@ -344,46 +345,72 @@ func TestAccContainerCluster_withReleaseChannelEnabled(t *testing.T) { vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckContainerClusterDestroy, + CheckDestroy: testAccCheckContainerClusterDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccContainerCluster_withReleaseChannelEnabled(clusterName, "STABLE"), }, { - ResourceName: "google_container_cluster.with_release_channel", - ImportStateIdPrefix: "us-central1-a/", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.with_release_channel", + ImportStateIdPrefix: "us-central1-a/", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"min_master_version"}, }, { - Config: testAccContainerCluster_withReleaseChannelEnabled(clusterName, "REGULAR"), + Config: testAccContainerCluster_withReleaseChannelEnabled(clusterName, "UNSPECIFIED"), }, { - ResourceName: "google_container_cluster.with_release_channel", - ImportStateIdPrefix: "us-central1-a/", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.with_release_channel", + ImportStateIdPrefix: "us-central1-a/", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"min_master_version"}, }, + }, + }) +} + +func TestAccContainerCluster_withReleaseChannelEnabledDefaultVersion(t *testing.T) { + t.Parallel() + clusterName := fmt.Sprintf("tf-test-cluster-%s", randString(t, 10)) + vcrTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckContainerClusterDestroyProducer(t), + Steps: []resource.TestStep{ { - Config: testAccContainerCluster_withReleaseChannelEnabled(clusterName, "RAPID"), + Config: testAccContainerCluster_withReleaseChannelEnabledDefaultVersion(clusterName, "REGULAR"), }, { - ResourceName: "google_container_cluster.with_release_channel", - ImportStateIdPrefix: "us-central1-a/", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.with_release_channel", + ImportStateIdPrefix: "us-central1-a/", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"min_master_version"}, + }, + { + Config: testAccContainerCluster_withReleaseChannelEnabled(clusterName, "REGULAR"), + }, + { + ResourceName: "google_container_cluster.with_release_channel", + ImportStateIdPrefix: "us-central1-a/", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"min_master_version"}, }, { Config: testAccContainerCluster_withReleaseChannelEnabled(clusterName, "UNSPECIFIED"), }, { - ResourceName: "google_container_cluster.with_release_channel", - ImportStateIdPrefix: "us-central1-a/", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.with_release_channel", + ImportStateIdPrefix: "us-central1-a/", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"min_master_version"}, }, }, - }, testAccCheckContainerClusterDestroyProducer) + }) } func TestAccContainerCluster_withInvalidReleaseChannel(t *testing.T) { @@ -392,14 +419,56 @@ func TestAccContainerCluster_withInvalidReleaseChannel(t *testing.T) { vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckContainerClusterDestroy, + CheckDestroy: 
testAccCheckContainerClusterDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccContainerCluster_withReleaseChannelEnabled(clusterName, "CANARY"), ExpectError: regexp.MustCompile(`config is invalid: expected release_channel\.0\.channel to be one of \[UNSPECIFIED RAPID REGULAR STABLE\], got CANARY`), }, }, - }, testAccCheckContainerClusterDestroyProducer) + }) +} + +func TestAccContainerCluster_withTelemetryEnabled(t *testing.T) { + t.Parallel() + clusterName := fmt.Sprintf("tf-test-cluster-%s", randString(t, 10)) + vcrTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckContainerClusterDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccContainerCluster_withTelemetryEnabled(clusterName, "ENABLED"), + }, + { + ResourceName: "google_container_cluster.with_cluster_telemetry", + ImportStateIdPrefix: "us-central1-a/", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"min_master_version"}, + }, + { + Config: testAccContainerCluster_withTelemetryEnabled(clusterName, "DISABLED"), + }, + { + ResourceName: "google_container_cluster.with_cluster_telemetry", + ImportStateIdPrefix: "us-central1-a/", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"min_master_version"}, + }, + { + Config: testAccContainerCluster_withTelemetryEnabled(clusterName, "SYSTEM_ONLY"), + }, + { + ResourceName: "google_container_cluster.with_cluster_telemetry", + ImportStateIdPrefix: "us-central1-a/", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"min_master_version"}, + }, + }, + }) } <% end -%> @@ -411,7 +480,7 @@ func TestAccContainerCluster_withMasterAuthorizedNetworksConfig(t *testing.T) { vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckContainerClusterDestroy, + CheckDestroy: testAccCheckContainerClusterDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccContainerCluster_withMasterAuthorizedNetworksConfig(clusterName, []string{}, ""), @@ -463,7 +532,7 @@ func TestAccContainerCluster_withMasterAuthorizedNetworksConfig(t *testing.T) { ImportStateVerify: true, }, }, - }, testAccCheckContainerClusterDestroyProducer) + }) } func TestAccContainerCluster_regional(t *testing.T) { @@ -474,7 +543,7 @@ func TestAccContainerCluster_regional(t *testing.T) { vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckContainerClusterDestroy, + CheckDestroy: testAccCheckContainerClusterDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccContainerCluster_regional(clusterName), @@ -485,7 +554,7 @@ func TestAccContainerCluster_regional(t *testing.T) { ImportStateVerify: true, }, }, - }, testAccCheckContainerClusterDestroyProducer) + }) } func TestAccContainerCluster_regionalWithNodePool(t *testing.T) { @@ -497,7 +566,7 @@ func TestAccContainerCluster_regionalWithNodePool(t *testing.T) { vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckContainerClusterDestroy, + CheckDestroy: testAccCheckContainerClusterDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccContainerCluster_regionalWithNodePool(clusterName, npName), @@ -508,7 +577,7 @@ func TestAccContainerCluster_regionalWithNodePool(t *testing.T) { ImportStateVerify: true, }, }, - }, 
testAccCheckContainerClusterDestroyProducer) + }) } func TestAccContainerCluster_regionalWithNodeLocations(t *testing.T) { @@ -519,7 +588,7 @@ func TestAccContainerCluster_regionalWithNodeLocations(t *testing.T) { vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckContainerClusterDestroy, + CheckDestroy: testAccCheckContainerClusterDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccContainerCluster_regionalNodeLocations(clusterName), @@ -538,7 +607,7 @@ func TestAccContainerCluster_regionalWithNodeLocations(t *testing.T) { ImportStateVerify: true, }, }, - }, testAccCheckContainerClusterDestroyProducer) + }) } <% unless version == 'ga' -%> @@ -551,7 +620,7 @@ func TestAccContainerCluster_withTpu(t *testing.T) { vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckContainerClusterDestroy, + CheckDestroy: testAccCheckContainerClusterDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccContainerCluster_withTpu(containerNetName, clusterName), @@ -565,7 +634,7 @@ func TestAccContainerCluster_withTpu(t *testing.T) { ImportStateVerify: true, }, }, - }, testAccCheckContainerClusterDestroyProducer) + }) } <% end -%> @@ -578,7 +647,7 @@ func TestAccContainerCluster_withPrivateClusterConfig(t *testing.T) { vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckContainerClusterDestroy, + CheckDestroy: testAccCheckContainerClusterDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccContainerCluster_withPrivateClusterConfig(containerNetName, clusterName), @@ -589,7 +658,7 @@ func TestAccContainerCluster_withPrivateClusterConfig(t *testing.T) { ImportStateVerify: true, }, }, - }, testAccCheckContainerClusterDestroyProducer) + }) } func TestAccContainerCluster_withPrivateClusterConfigMissingCidrBlock(t *testing.T) { @@ -601,14 +670,14 @@ func TestAccContainerCluster_withPrivateClusterConfigMissingCidrBlock(t *testing vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckContainerClusterDestroy, + CheckDestroy: testAccCheckContainerClusterDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccContainerCluster_withPrivateClusterConfigMissingCidrBlock(containerNetName, clusterName), ExpectError: regexp.MustCompile("master_ipv4_cidr_block must be set if enable_private_nodes == true"), }, }, - }, testAccCheckContainerClusterDestroyProducer) + }) } <% unless version == 'ga' -%> @@ -620,7 +689,7 @@ func TestAccContainerCluster_withIntraNodeVisibility(t *testing.T) { vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckContainerClusterDestroy, + CheckDestroy: testAccCheckContainerClusterDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccContainerCluster_withIntraNodeVisibility(clusterName), @@ -645,7 +714,7 @@ func TestAccContainerCluster_withIntraNodeVisibility(t *testing.T) { ImportStateVerify: true, }, }, - }, testAccCheckContainerClusterDestroyProducer) + }) } <% end -%> @@ -657,7 +726,7 @@ func TestAccContainerCluster_withVersion(t *testing.T) { vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckContainerClusterDestroy, + CheckDestroy: testAccCheckContainerClusterDestroyProducer(t), Steps: 
[]resource.TestStep{ { Config: testAccContainerCluster_withVersion(clusterName), @@ -669,10 +738,12 @@ func TestAccContainerCluster_withVersion(t *testing.T) { ImportStateVerifyIgnore: []string{"min_master_version"}, }, }, - }, testAccCheckContainerClusterDestroyProducer) + }) } func TestAccContainerCluster_updateVersion(t *testing.T) { + // TODO re-enable this test when GKE supports multiple versions concurrently + t.Skip("Only a single GKE version is supported currently by the API, this test cannot pass") t.Parallel() clusterName := fmt.Sprintf("tf-test-cluster-%s", randString(t, 10)) @@ -680,7 +751,7 @@ func TestAccContainerCluster_updateVersion(t *testing.T) { vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckContainerClusterDestroy, + CheckDestroy: testAccCheckContainerClusterDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccContainerCluster_withLowerVersion(clusterName), @@ -701,7 +772,7 @@ func TestAccContainerCluster_updateVersion(t *testing.T) { ImportStateVerifyIgnore: []string{"min_master_version"}, }, }, - }, testAccCheckContainerClusterDestroyProducer) + }) } func TestAccContainerCluster_withNodeConfig(t *testing.T) { @@ -712,7 +783,7 @@ func TestAccContainerCluster_withNodeConfig(t *testing.T) { vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckContainerClusterDestroy, + CheckDestroy: testAccCheckContainerClusterDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccContainerCluster_withNodeConfig(clusterName), @@ -731,7 +802,7 @@ func TestAccContainerCluster_withNodeConfig(t *testing.T) { ImportStateVerify: true, }, }, - }, testAccCheckContainerClusterDestroyProducer) + }) } func TestAccContainerCluster_withNodeConfigScopeAlias(t *testing.T) { @@ -742,7 +813,7 @@ func TestAccContainerCluster_withNodeConfigScopeAlias(t *testing.T) { vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckContainerClusterDestroy, + CheckDestroy: testAccCheckContainerClusterDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccContainerCluster_withNodeConfigScopeAlias(clusterName), @@ -753,7 +824,7 @@ func TestAccContainerCluster_withNodeConfigScopeAlias(t *testing.T) { ImportStateVerify: true, }, }, - }, testAccCheckContainerClusterDestroyProducer) + }) } func TestAccContainerCluster_withNodeConfigShieldedInstanceConfig(t *testing.T) { @@ -764,7 +835,7 @@ func TestAccContainerCluster_withNodeConfigShieldedInstanceConfig(t *testing.T) vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckContainerClusterDestroy, + CheckDestroy: testAccCheckContainerClusterDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccContainerCluster_withNodeConfigShieldedInstanceConfig(clusterName), @@ -775,7 +846,7 @@ func TestAccContainerCluster_withNodeConfigShieldedInstanceConfig(t *testing.T) ImportStateVerify: true, }, }, - }, testAccCheckContainerClusterDestroyProducer) + }) } <% unless version.nil? 
|| version == 'ga' -%> @@ -787,7 +858,7 @@ func TestAccContainerCluster_withWorkloadMetadataConfig(t *testing.T) { vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckContainerClusterDestroy, + CheckDestroy: testAccCheckContainerClusterDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccContainerCluster_withWorkloadMetadataConfig(clusterName), @@ -803,7 +874,7 @@ func TestAccContainerCluster_withWorkloadMetadataConfig(t *testing.T) { ImportStateVerifyIgnore: []string{"min_master_version"}, }, }, - }, testAccCheckContainerClusterDestroyProducer) + }) } func TestAccContainerCluster_withSandboxConfig(t *testing.T) { @@ -814,7 +885,7 @@ func TestAccContainerCluster_withSandboxConfig(t *testing.T) { vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckContainerClusterDestroy, + CheckDestroy: testAccCheckContainerClusterDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccContainerCluster_withSandboxConfig(clusterName), @@ -831,8 +902,34 @@ func TestAccContainerCluster_withSandboxConfig(t *testing.T) { ImportStateVerify: true, ImportStateVerifyIgnore: []string{"min_master_version"}, }, + { + // GKE sets automatic labels and taints on nodes. This makes + // sure we ignore the automatic ones and keep our own. + Config: testAccContainerCluster_withSandboxConfig(clusterName), + // When we use PlanOnly without ExpectNonEmptyPlan, we're + // guaranteeing that the computed fields of the resources don't + // force an unintentional change to the plan. That is, we + // expect this part of the test to pass only if the plan + // doesn't change. + PlanOnly: true, + }, + { + // Now we'll modify the labels, which should force a change to + // the plan. We make sure we don't over-suppress and end up + // eliminating the labels or taints we asked for. This will + // destroy and recreate the cluster as labels are immutable. 
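// Editorial note, not part of the diff: a PlanOnly step runs the plan phase
// but never applies it, and with ExpectNonEmptyPlan left at its default of
// false the SDK fails the step if that plan contains any change. That is what
// makes the step above an effective suppression test for the GKE-added labels
// and taints, while the label-changing step below guards against
// over-suppression by checking that deliberate changes still show up.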
+ Config: testAccContainerCluster_withSandboxConfig_changeLabels(clusterName), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr("google_container_cluster.with_sandbox_config", + "node_config.0.labels.test.terraform.io/gke-sandbox", "true"), + resource.TestCheckResourceAttr("google_container_cluster.with_sandbox_config", + "node_config.0.labels.test.terraform.io/gke-sandbox-amended", "also-true"), + resource.TestCheckResourceAttr("google_container_cluster.with_sandbox_config", + "node_config.0.taint.0.key", "test.terraform.io/gke-sandbox"), + ), + }, }, - }, testAccCheckContainerClusterDestroyProducer) + }) } func TestAccContainerCluster_withBootDiskKmsKey(t *testing.T) { @@ -843,7 +940,7 @@ func TestAccContainerCluster_withBootDiskKmsKey(t *testing.T) { vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckContainerClusterDestroy, + CheckDestroy: testAccCheckContainerClusterDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccContainerCluster_withBootDiskKmsKey(getTestProjectFromEnv(), clusterName), @@ -855,7 +952,7 @@ func TestAccContainerCluster_withBootDiskKmsKey(t *testing.T) { ImportStateVerifyIgnore: []string{"min_master_version"}, }, }, - }, testAccCheckContainerClusterDestroyProducer) + }) } <% end -%> @@ -868,7 +965,7 @@ func TestAccContainerCluster_network(t *testing.T) { vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckContainerClusterDestroy, + CheckDestroy: testAccCheckContainerClusterDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccContainerCluster_networkRef(clusterName, network), @@ -884,7 +981,7 @@ func TestAccContainerCluster_network(t *testing.T) { ImportStateVerify: true, }, }, - }, testAccCheckContainerClusterDestroyProducer) + }) } func TestAccContainerCluster_backend(t *testing.T) { @@ -895,7 +992,7 @@ func TestAccContainerCluster_backend(t *testing.T) { vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckContainerClusterDestroy, + CheckDestroy: testAccCheckContainerClusterDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccContainerCluster_backendRef(clusterName), @@ -906,7 +1003,7 @@ func TestAccContainerCluster_backend(t *testing.T) { ImportStateVerify: true, }, }, - }, testAccCheckContainerClusterDestroyProducer) + }) } func TestAccContainerCluster_withNodePoolBasic(t *testing.T) { @@ -918,7 +1015,7 @@ func TestAccContainerCluster_withNodePoolBasic(t *testing.T) { vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckContainerClusterDestroy, + CheckDestroy: testAccCheckContainerClusterDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccContainerCluster_withNodePoolBasic(clusterName, npName), @@ -929,10 +1026,12 @@ func TestAccContainerCluster_withNodePoolBasic(t *testing.T) { ImportStateVerify: true, }, }, - }, testAccCheckContainerClusterDestroyProducer) + }) } func TestAccContainerCluster_withNodePoolUpdateVersion(t *testing.T) { + // TODO re-enable this test when GKE supports multiple versions concurrently + t.Skip("Only a single GKE version is supported currently by the API, this test cannot pass") t.Parallel() clusterName := fmt.Sprintf("tf-test-cluster-nodepool-%s", randString(t, 10)) @@ -941,7 +1040,7 @@ func TestAccContainerCluster_withNodePoolUpdateVersion(t *testing.T) { 
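The sandbox-config steps added above are worth distilling, since the same guard reappears for the node pool test later in this diff. A condensed sketch of the pattern, using the helper and harness names this diff itself defines (only the test function name here is hypothetical):

```go
// Sketch of the permadiff guard added to the sandbox tests. Assumes the
// surrounding test harness from this file: vcrTest, testAccPreCheck,
// testAccProviders, randString, testAccCheckContainerClusterDestroyProducer.
func TestAccContainerCluster_sandboxPermadiffGuard(t *testing.T) { // hypothetical name
	clusterName := fmt.Sprintf("tf-test-cluster-%s", randString(t, 10))

	vcrTest(t, resource.TestCase{
		PreCheck:     func() { testAccPreCheck(t) },
		Providers:    testAccProviders,
		CheckDestroy: testAccCheckContainerClusterDestroyProducer(t),
		Steps: []resource.TestStep{
			// Create the cluster with user-set sandbox labels and taints.
			{Config: testAccContainerCluster_withSandboxConfig(clusterName)},
			// Re-plan the identical config. PlanOnly without
			// ExpectNonEmptyPlan fails this step if anything (such as
			// GKE's automatically injected labels and taints) shows up
			// as a diff; in other words, it asserts the plan is empty.
			{
				Config:   testAccContainerCluster_withSandboxConfig(clusterName),
				PlanOnly: true,
			},
			// Amend only the user-set labels. The apply must go through
			// and the amended label must survive, proving the diff
			// suppression doesn't swallow values the user asked for.
			{
				Config: testAccContainerCluster_withSandboxConfig_changeLabels(clusterName),
				Check: resource.TestCheckResourceAttr(
					"google_container_cluster.with_sandbox_config",
					"node_config.0.labels.test.terraform.io/gke-sandbox-amended",
					"also-true"),
			},
		},
	})
}
```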
vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckContainerClusterDestroy, + CheckDestroy: testAccCheckContainerClusterDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccContainerCluster_withNodePoolLowerVersion(clusterName, npName), @@ -962,7 +1061,7 @@ func TestAccContainerCluster_withNodePoolUpdateVersion(t *testing.T) { ImportStateVerifyIgnore: []string{"min_master_version"}, }, }, - }, testAccCheckContainerClusterDestroyProducer) + }) } func TestAccContainerCluster_withNodePoolResize(t *testing.T) { @@ -973,7 +1072,7 @@ func TestAccContainerCluster_withNodePoolResize(t *testing.T) { vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckContainerClusterDestroy, + CheckDestroy: testAccCheckContainerClusterDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccContainerCluster_withNodePoolNodeLocations(clusterName, npName), @@ -998,7 +1097,7 @@ func TestAccContainerCluster_withNodePoolResize(t *testing.T) { ImportStateVerify: true, }, }, - }, testAccCheckContainerClusterDestroyProducer) + }) } func TestAccContainerCluster_withNodePoolAutoscaling(t *testing.T) { @@ -1010,7 +1109,7 @@ func TestAccContainerCluster_withNodePoolAutoscaling(t *testing.T) { vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckContainerNodePoolDestroy, + CheckDestroy: testAccCheckContainerNodePoolDestroyProducer(t), Steps: []resource.TestStep{ resource.TestStep{ Config: testAccContainerCluster_withNodePoolAutoscaling(clusterName, npName), @@ -1049,10 +1148,12 @@ func TestAccContainerCluster_withNodePoolAutoscaling(t *testing.T) { ImportStateVerify: true, }, }, - }, testAccCheckContainerClusterDestroyProducer) + }) } func TestAccContainerCluster_withNodePoolNamePrefix(t *testing.T) { + // Randomness + skipIfVcr(t) t.Parallel() clusterName := fmt.Sprintf("tf-test-cluster-%s", randString(t, 10)) @@ -1061,7 +1162,7 @@ func TestAccContainerCluster_withNodePoolNamePrefix(t *testing.T) { vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckContainerClusterDestroy, + CheckDestroy: testAccCheckContainerClusterDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccContainerCluster_withNodePoolNamePrefix(clusterName, npNamePrefix), @@ -1073,7 +1174,7 @@ func TestAccContainerCluster_withNodePoolNamePrefix(t *testing.T) { ImportStateVerifyIgnore: []string{"node_pool.0.name_prefix"}, }, }, - }, testAccCheckContainerClusterDestroyProducer) + }) } func TestAccContainerCluster_withNodePoolMultiple(t *testing.T) { @@ -1085,7 +1186,7 @@ func TestAccContainerCluster_withNodePoolMultiple(t *testing.T) { vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckContainerClusterDestroy, + CheckDestroy: testAccCheckContainerClusterDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccContainerCluster_withNodePoolMultiple(clusterName, npNamePrefix), @@ -1096,7 +1197,7 @@ func TestAccContainerCluster_withNodePoolMultiple(t *testing.T) { ImportStateVerify: true, }, }, - }, testAccCheckContainerClusterDestroyProducer) + }) } func TestAccContainerCluster_withNodePoolConflictingNameFields(t *testing.T) { @@ -1108,14 +1209,14 @@ func TestAccContainerCluster_withNodePoolConflictingNameFields(t *testing.T) { vcrTest(t, 
resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckContainerClusterDestroy, + CheckDestroy: testAccCheckContainerClusterDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccContainerCluster_withNodePoolConflictingNameFields(clusterName, npPrefix), ExpectError: regexp.MustCompile("Cannot specify both name and name_prefix for a node_pool"), }, }, - }, testAccCheckContainerClusterDestroyProducer) + }) } func TestAccContainerCluster_withNodePoolNodeConfig(t *testing.T) { @@ -1127,7 +1228,7 @@ func TestAccContainerCluster_withNodePoolNodeConfig(t *testing.T) { vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckContainerClusterDestroy, + CheckDestroy: testAccCheckContainerClusterDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccContainerCluster_withNodePoolNodeConfig(cluster, np), @@ -1138,7 +1239,7 @@ func TestAccContainerCluster_withNodePoolNodeConfig(t *testing.T) { ImportStateVerify: true, }, }, - }, testAccCheckContainerClusterDestroyProducer) + }) } func TestAccContainerCluster_withMaintenanceWindow(t *testing.T) { @@ -1150,7 +1251,7 @@ func TestAccContainerCluster_withMaintenanceWindow(t *testing.T) { vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckContainerClusterDestroy, + CheckDestroy: testAccCheckContainerClusterDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccContainerCluster_withMaintenanceWindow(clusterName, "03:00"), @@ -1176,7 +1277,7 @@ func TestAccContainerCluster_withMaintenanceWindow(t *testing.T) { ImportStateVerifyIgnore: []string{"maintenance_policy.#"}, }, }, - }, testAccCheckContainerClusterDestroyProducer) + }) } func TestAccContainerCluster_withRecurringMaintenanceWindow(t *testing.T) { @@ -1187,7 +1288,7 @@ func TestAccContainerCluster_withRecurringMaintenanceWindow(t *testing.T) { vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckContainerClusterDestroy, + CheckDestroy: testAccCheckContainerClusterDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccContainerCluster_withRecurringMaintenanceWindow(cluster, "2019-01-01T00:00:00Z", "2019-01-02T00:00:00Z"), @@ -1221,7 +1322,7 @@ func TestAccContainerCluster_withRecurringMaintenanceWindow(t *testing.T) { ImportStateVerifyIgnore: []string{"maintenance_policy.#"}, }, }, - }, testAccCheckContainerClusterDestroyProducer) + }) } func TestAccContainerCluster_withIPAllocationPolicy_existingSecondaryRanges(t *testing.T) { @@ -1232,7 +1333,7 @@ func TestAccContainerCluster_withIPAllocationPolicy_existingSecondaryRanges(t *t vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckContainerClusterDestroy, + CheckDestroy: testAccCheckContainerClusterDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccContainerCluster_withIPAllocationPolicy_existingSecondaryRanges(containerNetName, clusterName), @@ -1243,7 +1344,7 @@ func TestAccContainerCluster_withIPAllocationPolicy_existingSecondaryRanges(t *t ImportStateVerify: true, }, }, - }, testAccCheckContainerClusterDestroyProducer) + }) } func TestAccContainerCluster_withIPAllocationPolicy_specificIPRanges(t *testing.T) { @@ -1254,7 +1355,7 @@ func TestAccContainerCluster_withIPAllocationPolicy_specificIPRanges(t *testing. 
vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckContainerClusterDestroy, + CheckDestroy: testAccCheckContainerClusterDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccContainerCluster_withIPAllocationPolicy_specificIPRanges(containerNetName, clusterName), @@ -1265,7 +1366,7 @@ func TestAccContainerCluster_withIPAllocationPolicy_specificIPRanges(t *testing. ImportStateVerify: true, }, }, - }, testAccCheckContainerClusterDestroyProducer) + }) } func TestAccContainerCluster_withIPAllocationPolicy_specificSizes(t *testing.T) { @@ -1276,7 +1377,7 @@ func TestAccContainerCluster_withIPAllocationPolicy_specificSizes(t *testing.T) vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckContainerClusterDestroy, + CheckDestroy: testAccCheckContainerClusterDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccContainerCluster_withIPAllocationPolicy_specificSizes(containerNetName, clusterName), @@ -1287,7 +1388,7 @@ func TestAccContainerCluster_withIPAllocationPolicy_specificSizes(t *testing.T) ImportStateVerify: true, }, }, - }, testAccCheckContainerClusterDestroyProducer) + }) } func TestAccContainerCluster_nodeAutoprovisioning(t *testing.T) { @@ -1298,7 +1399,7 @@ func TestAccContainerCluster_nodeAutoprovisioning(t *testing.T) { vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckContainerClusterDestroy, + CheckDestroy: testAccCheckContainerClusterDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccContainerCluster_autoprovisioning(clusterName, true), @@ -1327,7 +1428,7 @@ func TestAccContainerCluster_nodeAutoprovisioning(t *testing.T) { ImportStateVerifyIgnore: []string{"min_master_version"}, }, }, - }, testAccCheckContainerClusterDestroyProducer) + }) } func TestAccContainerCluster_nodeAutoprovisioningDefaults(t *testing.T) { @@ -1338,7 +1439,7 @@ func TestAccContainerCluster_nodeAutoprovisioningDefaults(t *testing.T) { vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckContainerClusterDestroy, + CheckDestroy: testAccCheckContainerClusterDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccContainerCluster_autoprovisioningDefaults(clusterName, false), @@ -1359,162 +1460,193 @@ func TestAccContainerCluster_nodeAutoprovisioningDefaults(t *testing.T) { ExpectNonEmptyPlan: false, }, }, - }, testAccCheckContainerClusterDestroyProducer) + }) } -<% unless version == 'ga' -%> -func TestAccContainerCluster_withAutoscalingProfile(t *testing.T) { +func TestAccContainerCluster_withShieldedNodes(t *testing.T) { t.Parallel() - clusterName := fmt.Sprintf("cluster-test-%s", randString(t, 10)) + + clusterName := fmt.Sprintf("tf-test-cluster-%s", randString(t, 10)) + vcrTest(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t) }, - Providers: testAccProviders, - CheckDestroy: testAccCheckContainerClusterDestroy, + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckContainerClusterDestroyProducer(t), Steps: []resource.TestStep{ { - Config: testAccContainerCluster_withAutoscalingProfile(clusterName, "BALANCED"), + Config: testAccContainerCluster_withShieldedNodes(clusterName, true), }, { - ResourceName: "google_container_cluster.autoscaling_with_profile", - ImportStateIdPrefix: "us-central1-a/", - ImportState: 
true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.with_shielded_nodes", + ImportState: true, + ImportStateVerify: true, }, { - Config: testAccContainerCluster_withAutoscalingProfile(clusterName, "OPTIMIZE_UTILIZATION"), + Config: testAccContainerCluster_withShieldedNodes(clusterName, false), }, { - ResourceName: "google_container_cluster.autoscaling_with_profile", - ImportStateIdPrefix: "us-central1-a/", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.with_shielded_nodes", + ImportState: true, + ImportStateVerify: true, }, }, - }, testAccCheckContainerClusterDestroyProducer) + }) } -func TestAccContainerCluster_withInvalidAutoscalingProfile(t *testing.T) { +func TestAccContainerCluster_withWorkloadIdentityConfig(t *testing.T) { t.Parallel() - clusterName := fmt.Sprintf("cluster-test-%s", randString(t, 10)) + + clusterName := fmt.Sprintf("tf-test-cluster-%s", randString(t, 10)) + pid := getTestProjectFromEnv() + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckContainerClusterDestroy, + CheckDestroy: testAccCheckContainerClusterDestroyProducer(t), Steps: []resource.TestStep{ { - Config: testAccContainerCluster_withAutoscalingProfile(clusterName, "AS_CHEAP_AS_POSSIBLE"), - ExpectError: regexp.MustCompile(`config is invalid: expected cluster_autoscaling\.0\.autoscaling_profile to be one of \[BALANCED OPTIMIZE_UTILIZATION\], got AS_CHEAP_AS_POSSIBLE`), + Config: testAccContainerCluster_withWorkloadIdentityConfigEnabled(pid, clusterName), + }, + { + ResourceName: "google_container_cluster.with_workload_identity_config", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"remove_default_node_pool"}, + }, + { + Config: testAccContainerCluster_updateWorkloadIdentityConfig(pid, clusterName, false), + }, + { + ResourceName: "google_container_cluster.with_workload_identity_config", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"remove_default_node_pool"}, + }, + { + Config: testAccContainerCluster_updateWorkloadIdentityConfig(pid, clusterName, true), + }, + { + ResourceName: "google_container_cluster.with_workload_identity_config", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"remove_default_node_pool"}, }, }, - }, testAccCheckContainerClusterDestroyProducer) + }) + } -func TestAccContainerCluster_sharedVpc(t *testing.T) { +<% unless version == 'ga' -%> +// consider merging this test with TestAccContainerCluster_nodeAutoprovisioningDefaults +// once the feature is GA +func TestAccContainerCluster_nodeAutoprovisioningDefaultsMinCpuPlatform(t *testing.T) { t.Parallel() clusterName := fmt.Sprintf("tf-test-cluster-%s", randString(t, 10)) - org := getTestOrgFromEnv(t) - billingId := getTestBillingAccountFromEnv(t) - projectName := fmt.Sprintf("tf-test-%s", randString(t, 10)) - suffix := randString(t, 10) + includeMinCpuPlatform := true vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckContainerClusterDestroy, + CheckDestroy: testAccCheckContainerClusterDestroyProducer(t), Steps: []resource.TestStep{ { - Config: testAccContainerCluster_sharedVpc(org, billingId, projectName, clusterName, suffix), + Config: testAccContainerCluster_autoprovisioningDefaultsMinCpuPlatform(clusterName, includeMinCpuPlatform), }, { - ResourceName: "google_container_cluster.shared_vpc_cluster", - 
ImportStateId: fmt.Sprintf("%s-service/us-central1-a/%s", projectName, clusterName), + ResourceName: "google_container_cluster.with_autoprovisioning", ImportState: true, ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"min_master_version"}, + }, + { + Config: testAccContainerCluster_autoprovisioningDefaultsMinCpuPlatform(clusterName, !includeMinCpuPlatform), + }, + { + ResourceName: "google_container_cluster.with_autoprovisioning", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"min_master_version"}, }, }, - }, testAccCheckContainerClusterDestroyProducer) + }) } -func TestAccContainerCluster_withWorkloadIdentityConfig(t *testing.T) { +func TestAccContainerCluster_withAutoscalingProfile(t *testing.T) { t.Parallel() - - clusterName := fmt.Sprintf("tf-test-cluster-%s", randString(t, 10)) - pid := getTestProjectFromEnv() - + clusterName := fmt.Sprintf("cluster-test-%s", randString(t, 10)) vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckContainerClusterDestroy, + CheckDestroy: testAccCheckContainerClusterDestroyProducer(t), Steps: []resource.TestStep{ { - Config: testAccContainerCluster_withWorkloadIdentityConfigEnabled(pid, clusterName), - }, - { - ResourceName: "google_container_cluster.with_workload_identity_config", - ImportState: true, - ImportStateVerify: true, - }, - { - Config: testAccContainerCluster_updateWorkloadMetadataConfig(pid, clusterName, "SECURE"), + Config: testAccContainerCluster_withAutoscalingProfile(clusterName, "BALANCED"), }, { - ResourceName: "google_container_cluster.with_workload_identity_config", + ResourceName: "google_container_cluster.autoscaling_with_profile", + ImportStateIdPrefix: "us-central1-a/", ImportState: true, ImportStateVerify: true, }, { - Config: testAccContainerCluster_updateWorkloadIdentityConfig(pid, clusterName, false), + Config: testAccContainerCluster_withAutoscalingProfile(clusterName, "OPTIMIZE_UTILIZATION"), }, { - ResourceName: "google_container_cluster.with_workload_identity_config", + ResourceName: "google_container_cluster.autoscaling_with_profile", + ImportStateIdPrefix: "us-central1-a/", ImportState: true, ImportStateVerify: true, }, + }, + }) +} + +func TestAccContainerCluster_withInvalidAutoscalingProfile(t *testing.T) { + t.Parallel() + clusterName := fmt.Sprintf("cluster-test-%s", randString(t, 10)) + vcrTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckContainerClusterDestroyProducer(t), + Steps: []resource.TestStep{ { - Config: testAccContainerCluster_updateWorkloadIdentityConfig(pid, clusterName, true), - }, - { - ResourceName: "google_container_cluster.with_workload_identity_config", - ImportState: true, - ImportStateVerify: true, + Config: testAccContainerCluster_withAutoscalingProfile(clusterName, "AS_CHEAP_AS_POSSIBLE"), + ExpectError: regexp.MustCompile(`config is invalid: expected cluster_autoscaling\.0\.autoscaling_profile to be one of \[BALANCED OPTIMIZE_UTILIZATION\], got AS_CHEAP_AS_POSSIBLE`), }, }, - }, testAccCheckContainerClusterDestroyProducer) - + }) } -func TestAccContainerCluster_withBinaryAuthorization(t *testing.T) { +func TestAccContainerCluster_sharedVpc(t *testing.T) { t.Parallel() clusterName := fmt.Sprintf("tf-test-cluster-%s", randString(t, 10)) + org := getTestOrgFromEnv(t) + billingId := getTestBillingAccountFromEnv(t) + projectName := fmt.Sprintf("tf-test-%s", randString(t, 10)) + suffix := 
randString(t, 10) vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckContainerClusterDestroy, + CheckDestroy: testAccCheckContainerClusterDestroyProducer(t), Steps: []resource.TestStep{ { - Config: testAccContainerCluster_withBinaryAuthorization(clusterName, true), - }, - { - ResourceName: "google_container_cluster.with_binary_authorization", - ImportState: true, - ImportStateVerify: true, - }, - { - Config: testAccContainerCluster_withBinaryAuthorization(clusterName, false), + Config: testAccContainerCluster_sharedVpc(org, billingId, projectName, clusterName, suffix), }, { - ResourceName: "google_container_cluster.with_binary_authorization", + ResourceName: "google_container_cluster.shared_vpc_cluster", + ImportStateId: fmt.Sprintf("%s-service/us-central1-a/%s", projectName, clusterName), ImportState: true, ImportStateVerify: true, }, }, - }, testAccCheckContainerClusterDestroyProducer) + }) } -func TestAccContainerCluster_withShieldedNodes(t *testing.T) { + +func TestAccContainerCluster_withBinaryAuthorization(t *testing.T) { t.Parallel() clusterName := fmt.Sprintf("tf-test-cluster-%s", randString(t, 10)) @@ -1522,26 +1654,26 @@ func TestAccContainerCluster_withShieldedNodes(t *testing.T) { vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckContainerClusterDestroy, + CheckDestroy: testAccCheckContainerClusterDestroyProducer(t), Steps: []resource.TestStep{ { - Config: testAccContainerCluster_withShieldedNodes(clusterName, true), + Config: testAccContainerCluster_withBinaryAuthorization(clusterName, true), }, { - ResourceName: "google_container_cluster.with_shielded_nodes", + ResourceName: "google_container_cluster.with_binary_authorization", ImportState: true, ImportStateVerify: true, }, { - Config: testAccContainerCluster_withShieldedNodes(clusterName, false), + Config: testAccContainerCluster_withBinaryAuthorization(clusterName, false), }, { - ResourceName: "google_container_cluster.with_shielded_nodes", + ResourceName: "google_container_cluster.with_binary_authorization", ImportState: true, ImportStateVerify: true, }, }, - }, testAccCheckContainerClusterDestroyProducer) + }) } func TestAccContainerCluster_withFlexiblePodCIDR(t *testing.T) { @@ -1553,7 +1685,7 @@ func TestAccContainerCluster_withFlexiblePodCIDR(t *testing.T) { vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckContainerClusterDestroy, + CheckDestroy: testAccCheckContainerClusterDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccContainerCluster_withFlexiblePodCIDR(containerNetName, clusterName), @@ -1564,7 +1696,7 @@ func TestAccContainerCluster_withFlexiblePodCIDR(t *testing.T) { ImportStateVerify: true, }, }, - }, testAccCheckContainerClusterDestroyProducer) + }) } <% end -%> @@ -1582,7 +1714,7 @@ func TestAccContainerCluster_errorCleanDanglingCluster(t *testing.T) { vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckContainerClusterDestroy, + CheckDestroy: testAccCheckContainerClusterDestroyProducer(t), Steps: []resource.TestStep{ { Config: initConfig, @@ -1603,7 +1735,7 @@ func TestAccContainerCluster_errorCleanDanglingCluster(t *testing.T) { ExpectNonEmptyPlan: true, }, }, - }, testAccCheckContainerClusterDestroyProducer) + }) } func TestAccContainerCluster_errorNoClusterCreated(t *testing.T) { @@ 
-1612,17 +1744,16 @@ func TestAccContainerCluster_errorNoClusterCreated(t *testing.T) { vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckContainerClusterDestroy, + CheckDestroy: testAccCheckContainerClusterDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccContainerCluster_withInvalidLocation("wonderland"), ExpectError: regexp.MustCompile(`Permission denied on 'locations/wonderland' \(or it may not exist\).`), }, }, - }, testAccCheckContainerClusterDestroyProducer) + }) } -<% unless version == 'ga' -%> func TestAccContainerCluster_withDatabaseEncryption(t *testing.T) { t.Parallel() @@ -1638,20 +1769,27 @@ func TestAccContainerCluster_withDatabaseEncryption(t *testing.T) { vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckContainerClusterDestroy, + CheckDestroy: testAccCheckContainerClusterDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccContainerCluster_withDatabaseEncryption(clusterName, kmsData), }, { - ResourceName: "google_container_cluster.with_database_encryption", + ResourceName: "google_container_cluster.primary", + ImportState: true, + ImportStateVerify: true, + }, + { + Config: testAccContainerCluster_basic(clusterName), + }, + { + ResourceName: "google_container_cluster.primary", ImportState: true, ImportStateVerify: true, }, }, - }, testAccCheckContainerClusterDestroyProducer) + }) } -<% end -%> func TestAccContainerCluster_withResourceUsageExportConfig(t *testing.T) { t.Parallel() @@ -1663,7 +1801,7 @@ func TestAccContainerCluster_withResourceUsageExportConfig(t *testing.T) { vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckContainerClusterDestroy, + CheckDestroy: testAccCheckContainerClusterDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccContainerCluster_withResourceUsageExportConfig(clusterName, datesetId, "true"), @@ -1690,7 +1828,7 @@ func TestAccContainerCluster_withResourceUsageExportConfig(t *testing.T) { ImportStateVerify: true, }, }, - }, testAccCheckContainerClusterDestroyProducer) + }) } func TestAccContainerCluster_withMasterAuthorizedNetworksDisabled(t *testing.T) { @@ -1702,7 +1840,7 @@ func TestAccContainerCluster_withMasterAuthorizedNetworksDisabled(t *testing.T) vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckContainerClusterDestroy, + CheckDestroy: testAccCheckContainerClusterDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccContainerCluster_withMasterAuthorizedNetworksDisabled(containerNetName, clusterName), @@ -1716,7 +1854,7 @@ func TestAccContainerCluster_withMasterAuthorizedNetworksDisabled(t *testing.T) ImportStateVerify: true, }, }, - }, testAccCheckContainerClusterDestroyProducer) + }) } func TestAccContainerCluster_withEnableKubernetesAlpha(t *testing.T) { @@ -1728,7 +1866,7 @@ func TestAccContainerCluster_withEnableKubernetesAlpha(t *testing.T) { vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckContainerClusterDestroy, + CheckDestroy: testAccCheckContainerClusterDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccContainerCluster_withEnableKubernetesAlpha(clusterName, npName), @@ -1739,7 +1877,7 @@ func TestAccContainerCluster_withEnableKubernetesAlpha(t *testing.T) { ImportStateVerify: 
true, }, }, - }, testAccCheckContainerClusterDestroyProducer) + }) } func testAccContainerCluster_masterAuthorizedNetworksDisabled(t *testing.T, resource_name string) resource.TestCheckFunc { @@ -1749,7 +1887,7 @@ func testAccContainerCluster_masterAuthorizedNetworksDisabled(t *testing.T, reso return fmt.Errorf("can't find %s in state", resource_name) } - config := getTestAccProviders(t.Name())["google"].(*schema.Provider).Meta().(*Config) + config := googleProviderConfig(t) attributes := rs.Primary.Attributes cluster, err := config.clientContainer.Projects.Zones.Clusters.Get( @@ -1766,28 +1904,9 @@ func testAccContainerCluster_masterAuthorizedNetworksDisabled(t *testing.T, reso } } -func testAccCheckContainerClusterDestroy(s *terraform.State) error { - config := testAccProvider.Meta().(*Config) - - for _, rs := range s.RootModule().Resources { - if rs.Type != "google_container_cluster" { - continue - } - - attributes := rs.Primary.Attributes - _, err := config.clientContainer.Projects.Zones.Clusters.Get( - config.Project, attributes["location"], attributes["name"]).Do() - if err == nil { - return fmt.Errorf("Cluster still exists") - } - } - - return nil -} - -func testAccCheckContainerClusterDestroyProducer(provider *schema.Provider) func(s *terraform.State) error { +func testAccCheckContainerClusterDestroyProducer(t *testing.T) func(s *terraform.State) error { return func(s *terraform.State) error { - config := provider.Meta().(*Config) + config := googleProviderConfig(t) for _, rs := range s.RootModule().Resources { if rs.Type != "google_container_cluster" { @@ -1978,8 +2097,12 @@ resource "google_container_cluster" "primary" { `, name) } -func testAccContainerCluster_withAddons(clusterName string) string { +func testAccContainerCluster_withAddons(projectID string, clusterName string) string { return fmt.Sprintf(` +data "google_project" "project" { + project_id = "%s" +} + resource "google_container_cluster" "primary" { name = "%s" location = "us-central1-a" @@ -1987,6 +2110,10 @@ resource "google_container_cluster" "primary" { min_master_version = "latest" + workload_identity_config { + identity_namespace = "${data.google_project.project.project_id}.svc.id.goog" + } + addons_config { http_load_balancing { disabled = true @@ -1997,25 +2124,38 @@ resource "google_container_cluster" "primary" { network_policy_config { disabled = true } + cloudrun_config { + disabled = true + } <% unless version == 'ga' -%> istio_config { disabled = true auth = "AUTH_MUTUAL_TLS" } - cloudrun_config { - disabled = true + dns_cache_config { + enabled = false } - dns_cache_config { + gce_persistent_disk_csi_driver_config { enabled = false } + kalm_config { + enabled = false + } + config_connector_config { + enabled = false + } <% end -%> } } -`, clusterName) +`, projectID, clusterName) } -func testAccContainerCluster_updateAddons(clusterName string) string { +func testAccContainerCluster_updateAddons(projectID string, clusterName string) string { return fmt.Sprintf(` +data "google_project" "project" { + project_id = "%s" +} + resource "google_container_cluster" "primary" { name = "%s" location = "us-central1-a" @@ -2023,6 +2163,10 @@ resource "google_container_cluster" "primary" { min_master_version = "latest" + workload_identity_config { + identity_namespace = "${data.google_project.project.project_id}.svc.id.goog" + } + addons_config { http_load_balancing { disabled = false @@ -2033,21 +2177,30 @@ resource "google_container_cluster" "primary" { network_policy_config { disabled = false } + cloudrun_config { 
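The hunk just above, which drops the package-level `testAccCheckContainerClusterDestroy` in favor of a producer taking `*testing.T`, is the change echoed at every `CheckDestroy:` site in this diff. A condensed sketch of the shape (the API calls are the ones visible in the hunk; `googleProviderConfig` is the per-test lookup the hunk switches to; the function name here is hypothetical):

```go
// Producer pattern: return the destroy check as a closure over *testing.T,
// so the check can resolve the per-test (VCR-aware) provider config
// instead of relying on a single package-global provider.
func exampleClusterDestroyProducer(t *testing.T) func(s *terraform.State) error { // hypothetical name
	return func(s *terraform.State) error {
		config := googleProviderConfig(t) // per-test provider config, as in the hunk above
		for _, rs := range s.RootModule().Resources {
			if rs.Type != "google_container_cluster" {
				continue
			}
			attrs := rs.Primary.Attributes
			// Destroy succeeded only if the GET now fails.
			if _, err := config.clientContainer.Projects.Zones.Clusters.Get(
				config.Project, attrs["location"], attrs["name"]).Do(); err == nil {
				return fmt.Errorf("cluster %q still exists", attrs["name"])
			}
		}
		return nil
	}
}
```

Call sites then read `CheckDestroy: testAccCheckContainerClusterDestroyProducer(t)`, invoking the producer at test setup time rather than passing a bare function value.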
+ disabled = false + } <% unless version == 'ga' -%> istio_config { disabled = false auth = "AUTH_NONE" } - cloudrun_config { - disabled = false - } dns_cache_config { enabled = true } + gce_persistent_disk_csi_driver_config { + enabled = true + } + kalm_config { + enabled = true + } + config_connector_config { + enabled = true + } <% end -%> } } -`, clusterName) +`, projectID, clusterName) } func testAccContainerCluster_withMasterAuth(clusterName string) string { @@ -2149,6 +2302,37 @@ resource "google_container_cluster" "with_release_channel" { } `, clusterName, channel) } + +func testAccContainerCluster_withReleaseChannelEnabledDefaultVersion(clusterName string, channel string) string { + return fmt.Sprintf(` + +data "google_container_engine_versions" "central1a" { + location = "us-central1-a" +} + +resource "google_container_cluster" "with_release_channel" { + name = "%s" + location = "us-central1-a" + initial_node_count = 1 + min_master_version = data.google_container_engine_versions.central1a.release_channel_default_version["%s"] +} +`, clusterName, channel) +} + +func testAccContainerCluster_withTelemetryEnabled(clusterName string, telemetryType string) string { + return fmt.Sprintf(` +resource "google_container_cluster" "with_cluster_telemetry" { + name = "%s" + location = "us-central1-a" + initial_node_count = 1 + min_master_version = "1.15" + + cluster_telemetry { + type = "%s" + } +} +`, clusterName, telemetryType) +} <% end -%> func testAccContainerCluster_removeNetworkPolicy(clusterName string) string { @@ -2234,6 +2418,9 @@ resource "google_container_cluster" "with_authenticator_groups" { security_group = "gke-security-groups@mydomain.tld" } +<% unless version == 'ga' -%> + networking_mode = "VPC_NATIVE" +<% end -%> ip_allocation_policy { cluster_secondary_range_name = google_compute_subnetwork.container_subnetwork.secondary_ip_range[0].range_name services_secondary_range_name = google_compute_subnetwork.container_subnetwork.secondary_ip_range[1].range_name @@ -2367,8 +2554,9 @@ resource "google_container_cluster" "with_tpu" { enable_tpu = true - network = google_compute_network.container_network.name - subnetwork = google_compute_subnetwork.container_subnetwork.name + network = google_compute_network.container_network.name + subnetwork = google_compute_subnetwork.container_subnetwork.name + networking_mode = "VPC_NATIVE" private_cluster_config { enable_private_endpoint = true @@ -2640,16 +2828,17 @@ resource "google_container_cluster" "with_workload_metadata_config" { `, clusterName) } -func testAccContainerCluster_updateWorkloadMetadataConfig(projectID string, clusterName string, workloadMetadataConfig string) string { +func testAccContainerCluster_withSandboxConfig(clusterName string) string { return fmt.Sprintf(` -data "google_project" "project" { - project_id = "%s" +data "google_container_engine_versions" "central1a" { + location = "us-central1-a" } -resource "google_container_cluster" "with_workload_identity_config" { +resource "google_container_cluster" "with_sandbox_config" { name = "%s" location = "us-central1-a" initial_node_count = 1 + min_master_version = data.google_container_engine_versions.central1a.latest_master_version node_config { oauth_scopes = [ @@ -2657,15 +2846,27 @@ resource "google_container_cluster" "with_workload_identity_config" { "https://www.googleapis.com/auth/monitoring", ] - workload_metadata_config { - node_metadata = "%s" + image_type = "COS_CONTAINERD" + + sandbox_config { + sandbox_type = "gvisor" + } + + labels = { + 
"test.terraform.io/gke-sandbox" = "true" + } + + taint { + key = "test.terraform.io/gke-sandbox" + value = "true" + effect = "NO_SCHEDULE" } } } -`, projectID, clusterName, workloadMetadataConfig) +`, clusterName) } -func testAccContainerCluster_withSandboxConfig(clusterName string) string { +func testAccContainerCluster_withSandboxConfig_changeLabels(clusterName string) string { return fmt.Sprintf(` data "google_container_engine_versions" "central1a" { location = "us-central1-a" @@ -2688,6 +2889,17 @@ resource "google_container_cluster" "with_sandbox_config" { sandbox_config { sandbox_type = "gvisor" } + + labels = { + "test.terraform.io/gke-sandbox" = "true" + "test.terraform.io/gke-sandbox-amended" = "also-true" + } + + taint { + key = "test.terraform.io/gke-sandbox" + value = "true" + effect = "NO_SCHEDULE" + } } } `, clusterName) @@ -3002,6 +3214,45 @@ if monitoringWrite { return config } +<% unless version == 'ga' -%> +func testAccContainerCluster_autoprovisioningDefaultsMinCpuPlatform(cluster string, includeMinCpuPlatform bool) string { + minCpuPlatformCfg := "" + if includeMinCpuPlatform { + minCpuPlatformCfg = `min_cpu_platform = "Intel Haswell"` + } + + return fmt.Sprintf(` +data "google_container_engine_versions" "central1a" { + location = "us-central1-a" +} + +resource "google_container_cluster" "with_autoprovisioning" { + name = "%s" + location = "us-central1-a" + initial_node_count = 1 + + min_master_version = data.google_container_engine_versions.central1a.latest_master_version + + cluster_autoscaling { + enabled = true + + resource_limits { + resource_type = "cpu" + maximum = 2 + } + resource_limits { + resource_type = "memory" + maximum = 2048 + } + + auto_provisioning_defaults { + %s + } + } +}`, cluster, minCpuPlatformCfg) +} +<% end -%> + func testAccContainerCluster_withNodePoolAutoscaling(cluster, np string) string { return fmt.Sprintf(` resource "google_container_cluster" "with_node_pool" { @@ -3197,6 +3448,9 @@ resource "google_container_cluster" "with_ip_allocation_policy" { network = google_compute_network.container_network.name subnetwork = google_compute_subnetwork.container_subnetwork.name +<% unless version == 'ga' -%> + networking_mode = "VPC_NATIVE" +<% end -%> initial_node_count = 1 ip_allocation_policy { cluster_secondary_range_name = "pods" @@ -3228,6 +3482,10 @@ resource "google_container_cluster" "with_ip_allocation_policy" { subnetwork = google_compute_subnetwork.container_subnetwork.name initial_node_count = 1 + +<% unless version == 'ga' -%> + networking_mode = "VPC_NATIVE" +<% end -%> ip_allocation_policy { cluster_ipv4_cidr_block = "10.0.0.0/16" services_ipv4_cidr_block = "10.1.0.0/16" @@ -3258,6 +3516,10 @@ resource "google_container_cluster" "with_ip_allocation_policy" { subnetwork = google_compute_subnetwork.container_subnetwork.name initial_node_count = 1 + +<% unless version == 'ga' -%> + networking_mode = "VPC_NATIVE" +<% end -%> ip_allocation_policy { cluster_ipv4_cidr_block = "/16" services_ipv4_cidr_block = "/22" @@ -3335,6 +3597,9 @@ resource "google_container_cluster" "with_private_cluster" { location = "us-central1-a" initial_node_count = 1 +<% unless version == 'ga' -%> + networking_mode = "VPC_NATIVE" +<% end -%> network = google_compute_network.container_network.name subnetwork = google_compute_subnetwork.container_subnetwork.name @@ -3342,8 +3607,9 @@ resource "google_container_cluster" "with_private_cluster" { enable_private_endpoint = true enable_private_nodes = true } - master_authorized_networks_config { - } + + 
master_authorized_networks_config {} + ip_allocation_policy { cluster_secondary_range_name = google_compute_subnetwork.container_subnetwork.secondary_ip_range[0].range_name services_secondary_range_name = google_compute_subnetwork.container_subnetwork.secondary_ip_range[1].range_name @@ -3382,6 +3648,12 @@ resource "google_container_cluster" "with_private_cluster" { location = "us-central1-a" initial_node_count = 1 +<% unless version == 'ga' -%> + networking_mode = "VPC_NATIVE" + default_snat_status { + disabled = true + } +<% end -%> network = google_compute_network.container_network.name subnetwork = google_compute_subnetwork.container_subnetwork.name @@ -3389,6 +3661,11 @@ resource "google_container_cluster" "with_private_cluster" { enable_private_endpoint = true enable_private_nodes = true master_ipv4_cidr_block = "10.42.0.0/28" +<% unless version == 'ga' -%> + master_global_access_config { + enabled = true + } +<% end -%> } master_authorized_networks_config { } @@ -3399,6 +3676,69 @@ resource "google_container_cluster" "with_private_cluster" { } `, containerNetName, clusterName) } + +func testAccContainerCluster_withShieldedNodes(clusterName string, enabled bool) string { + return fmt.Sprintf(` +resource "google_container_cluster" "with_shielded_nodes" { + name = "%s" + location = "us-central1-a" + initial_node_count = 1 + + enable_shielded_nodes = %v +} +`, clusterName, enabled) +} + +func testAccContainerCluster_withWorkloadIdentityConfigEnabled(projectID string, clusterName string) string { + return fmt.Sprintf(` +data "google_project" "project" { + project_id = "%s" +} + +resource "google_container_cluster" "with_workload_identity_config" { + name = "%s" + location = "us-central1-a" + initial_node_count = 1 + + workload_identity_config { + identity_namespace = "${data.google_project.project.project_id}.svc.id.goog" + } + remove_default_node_pool = true + +} +`, projectID, clusterName) +} + +func testAccContainerCluster_updateWorkloadIdentityConfig(projectID string, clusterName string, enable bool) string { + workloadIdentityConfig := "" + if enable { + workloadIdentityConfig = ` + workload_identity_config { + identity_namespace = "${data.google_project.project.project_id}.svc.id.goog" + }` + } else { + workloadIdentityConfig = ` + workload_identity_config { + identity_namespace = "" + }` + } + return fmt.Sprintf(` +data "google_project" "project" { + project_id = "%s" +} + +resource "google_container_cluster" "with_workload_identity_config" { + name = "%s" + location = "us-central1-a" + initial_node_count = 1 + remove_default_node_pool = true + %s +} +`, projectID, clusterName, workloadIdentityConfig) +} + + + <% unless version.nil? 
|| version == 'ga' -%> func testAccContainerCluster_sharedVpc(org, billingId, projectName, name string, suffix string) string { return fmt.Sprintf(` @@ -3488,8 +3828,9 @@ resource "google_container_cluster" "shared_vpc_cluster" { initial_node_count = 1 project = google_compute_shared_vpc_service_project.service_project.service_project - network = google_compute_network.shared_network.self_link - subnetwork = google_compute_subnetwork.shared_subnetwork.self_link + networking_mode = "VPC_NATIVE" + network = google_compute_network.shared_network.self_link + subnetwork = google_compute_subnetwork.shared_subnetwork.self_link ip_allocation_policy { cluster_secondary_range_name = google_compute_subnetwork.shared_subnetwork.secondary_ip_range[0].range_name @@ -3505,46 +3846,6 @@ resource "google_container_cluster" "shared_vpc_cluster" { `, projectName, org, billingId, projectName, org, billingId, suffix, suffix, name) } -func testAccContainerCluster_withWorkloadIdentityConfigEnabled(projectID string, clusterName string) string { - return fmt.Sprintf(` -data "google_project" "project" { - project_id = "%s" -} - -resource "google_container_cluster" "with_workload_identity_config" { - name = "%s" - location = "us-central1-a" - initial_node_count = 1 - - workload_identity_config { - identity_namespace = "${data.google_project.project.project_id}.svc.id.goog" - } -} -`, projectID, clusterName) -} - -func testAccContainerCluster_updateWorkloadIdentityConfig(projectID string, clusterName string, enable bool) string { - workloadIdentityConfig := "" - if enable { - workloadIdentityConfig = ` - workload_identity_config { - identity_namespace = "${data.google_project.project.project_id}.svc.id.goog" - }` - } - return fmt.Sprintf(` -data "google_project" "project" { - project_id = "%s" -} - -resource "google_container_cluster" "with_workload_identity_config" { - name = "%s" - location = "us-central1-a" - initial_node_count = 1 - %s -} -`, projectID, clusterName, workloadIdentityConfig) -} - func testAccContainerCluster_withBinaryAuthorization(clusterName string, enabled bool) string { return fmt.Sprintf(` resource "google_container_cluster" "with_binary_authorization" { @@ -3557,18 +3858,6 @@ resource "google_container_cluster" "with_binary_authorization" { `, clusterName, enabled) } -func testAccContainerCluster_withShieldedNodes(clusterName string, enabled bool) string { - return fmt.Sprintf(` -resource "google_container_cluster" "with_shielded_nodes" { - name = "%s" - location = "us-central1-a" - initial_node_count = 1 - - enable_shielded_nodes = %v -} -`, clusterName, enabled) -} - func testAccContainerCluster_withFlexiblePodCIDR(containerNetName string, clusterName string) string { return fmt.Sprintf(` resource "google_compute_network" "container_network" { @@ -3599,8 +3888,9 @@ resource "google_container_cluster" "with_flexible_cidr" { location = "us-central1-a" initial_node_count = 3 - network = google_compute_network.container_network.name - subnetwork = google_compute_subnetwork.container_subnetwork.name + networking_mode = "VPC_NATIVE" + network = google_compute_network.container_network.name + subnetwork = google_compute_subnetwork.container_subnetwork.name private_cluster_config { enable_private_endpoint = true @@ -3639,6 +3929,9 @@ resource "google_container_cluster" "cidr_error_preempt" { name = "%s" location = "us-central1-a" +<% unless version == 'ga' -%> + networking_mode = "VPC_NATIVE" +<% end -%> network = google_compute_network.container_network.name subnetwork = 
google_compute_subnetwork.container_subnetwork.name @@ -3665,6 +3958,9 @@ resource "google_container_cluster" "cidr_error_overlap" { initial_node_count = 1 +<% unless version == 'ga' -%> + networking_mode = "VPC_NATIVE" +<% end -%> ip_allocation_policy { cluster_ipv4_cidr_block = "10.0.0.0/16" services_ipv4_cidr_block = "10.1.0.0/16" @@ -3683,7 +3979,6 @@ resource "google_container_cluster" "with_resource_labels" { `, location) } -<% unless version == 'ga' -%> func testAccContainerCluster_withDatabaseEncryption(clusterName string, kmsData bootstrappedKMS) string { return fmt.Sprintf(` data "google_project" "project" { @@ -3704,7 +3999,7 @@ resource "google_kms_key_ring_iam_policy" "test_key_ring_iam_policy" { policy_data = data.google_iam_policy.test_kms_binding.policy_data } -resource "google_container_cluster" "with_database_encryption" { +resource "google_container_cluster" "primary" { name = "%[3]s" location = "us-central1-a" initial_node_count = 1 @@ -3716,7 +4011,6 @@ resource "google_container_cluster" "with_database_encryption" { } `, kmsData.KeyRing.Name, kmsData.CryptoKey.Name, clusterName) } -<% end -%> func testAccContainerCluster_withMasterAuthorizedNetworksDisabled(containerNetName string, clusterName string) string { return fmt.Sprintf(` @@ -3748,6 +4042,9 @@ resource "google_container_cluster" "with_private_cluster" { location = "us-central1-a" initial_node_count = 1 +<% unless version == 'ga' -%> + networking_mode = "VPC_NATIVE" +<% end -%> network = google_compute_network.container_network.name subnetwork = google_compute_subnetwork.container_subnetwork.name diff --git a/third_party/terraform/tests/resource_container_node_pool_test.go.erb b/third_party/terraform/tests/resource_container_node_pool_test.go.erb index 511b6f665565..acf568a3e44f 100644 --- a/third_party/terraform/tests/resource_container_node_pool_test.go.erb +++ b/third_party/terraform/tests/resource_container_node_pool_test.go.erb @@ -5,7 +5,6 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" ) @@ -13,13 +12,13 @@ import ( func TestAccContainerNodePool_basic(t *testing.T) { t.Parallel() - cluster := fmt.Sprintf("tf-test-cluster-%s", acctest.RandString(10)) - np := fmt.Sprintf("tf-test-nodepool-%s", acctest.RandString(10)) + cluster := fmt.Sprintf("tf-test-cluster-%s", randString(t, 10)) + np := fmt.Sprintf("tf-test-nodepool-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckContainerNodePoolDestroy, + CheckDestroy: testAccCheckContainerNodePoolDestroyProducer(t), Steps: []resource.TestStep{ resource.TestStep{ Config: testAccContainerNodePool_basic(cluster, np), @@ -33,18 +32,17 @@ func TestAccContainerNodePool_basic(t *testing.T) { }) } -<% unless version == 'ga' -%> func TestAccContainerNodePool_nodeLocations(t *testing.T) { t.Parallel() - cluster := fmt.Sprintf("tf-test-cluster-%s", acctest.RandString(10)) - np := fmt.Sprintf("tf-test-nodepool-%s", acctest.RandString(10)) - network := fmt.Sprintf("tf-test-net-%s", acctest.RandString(10)) + cluster := fmt.Sprintf("tf-test-cluster-%s", randString(t, 10)) + np := fmt.Sprintf("tf-test-nodepool-%s", randString(t, 10)) + network := fmt.Sprintf("tf-test-net-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { 
testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckContainerNodePoolDestroy, + CheckDestroy: testAccCheckContainerNodePoolDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccContainerNodePool_nodeLocations(cluster, np, network), @@ -57,19 +55,18 @@ func TestAccContainerNodePool_nodeLocations(t *testing.T) { }, }) } -<% end -%> func TestAccContainerNodePool_maxPodsPerNode(t *testing.T) { t.Parallel() - cluster := fmt.Sprintf("tf-test-cluster-%s", acctest.RandString(10)) - np := fmt.Sprintf("tf-test-nodepool-%s", acctest.RandString(10)) - network := fmt.Sprintf("tf-test-net-%s", acctest.RandString(10)) + cluster := fmt.Sprintf("tf-test-cluster-%s", randString(t, 10)) + np := fmt.Sprintf("tf-test-nodepool-%s", randString(t, 10)) + network := fmt.Sprintf("tf-test-net-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckContainerNodePoolDestroy, + CheckDestroy: testAccCheckContainerNodePoolDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccContainerNodePool_maxPodsPerNode(cluster, np, network), @@ -84,14 +81,16 @@ func TestAccContainerNodePool_maxPodsPerNode(t *testing.T) { } func TestAccContainerNodePool_namePrefix(t *testing.T) { + // Randomness + skipIfVcr(t) t.Parallel() - cluster := fmt.Sprintf("tf-test-cluster-%s", acctest.RandString(10)) + cluster := fmt.Sprintf("tf-test-cluster-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckContainerNodePoolDestroy, + CheckDestroy: testAccCheckContainerNodePoolDestroyProducer(t), Steps: []resource.TestStep{ resource.TestStep{ Config: testAccContainerNodePool_namePrefix(cluster, "tf-np-"), @@ -107,14 +106,16 @@ func TestAccContainerNodePool_namePrefix(t *testing.T) { } func TestAccContainerNodePool_noName(t *testing.T) { + // Randomness + skipIfVcr(t) t.Parallel() - cluster := fmt.Sprintf("tf-test-cluster-%s", acctest.RandString(10)) + cluster := fmt.Sprintf("tf-test-cluster-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckContainerNodePoolDestroy, + CheckDestroy: testAccCheckContainerNodePoolDestroyProducer(t), Steps: []resource.TestStep{ resource.TestStep{ Config: testAccContainerNodePool_noName(cluster), @@ -131,13 +132,13 @@ func TestAccContainerNodePool_noName(t *testing.T) { func TestAccContainerNodePool_withNodeConfig(t *testing.T) { t.Parallel() - cluster := fmt.Sprintf("tf-test-cluster-%s", acctest.RandString(10)) - nodePool := fmt.Sprintf("tf-test-nodepool-%s", acctest.RandString(10)) + cluster := fmt.Sprintf("tf-test-cluster-%s", randString(t, 10)) + nodePool := fmt.Sprintf("tf-test-nodepool-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckContainerNodePoolDestroy, + CheckDestroy: testAccCheckContainerNodePoolDestroyProducer(t), Steps: []resource.TestStep{ resource.TestStep{ Config: testAccContainerNodePool_withNodeConfig(cluster, nodePool), @@ -166,16 +167,17 @@ func TestAccContainerNodePool_withNodeConfig(t *testing.T) { } <% unless version.nil? 
|| version == 'ga' -%> -func TestAccContainerNodePool_withWorkloadMetadataConfig(t *testing.T) { +func TestAccContainerNodePool_withWorkloadIdentityConfig(t *testing.T) { t.Parallel() - cluster := fmt.Sprintf("tf-test-cluster-%s", acctest.RandString(10)) - np := fmt.Sprintf("tf-test-np-%s", acctest.RandString(10)) + pid := getTestProjectFromEnv() + cluster := fmt.Sprintf("tf-test-cluster-%s", randString(t, 10)) + np := fmt.Sprintf("tf-test-np-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckContainerClusterDestroy, + CheckDestroy: testAccCheckContainerClusterDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccContainerNodePool_withWorkloadMetadataConfig(cluster, np), @@ -194,22 +196,6 @@ func TestAccContainerNodePool_withWorkloadMetadataConfig(t *testing.T) { "node_config.0.workload_metadata_config.0.node_metadata", }, }, - }, - }) -} - -func TestAccContainerNodePool_withWorkloadIdentityConfig(t *testing.T) { - t.Parallel() - - pid := getTestProjectFromEnv() - cluster := fmt.Sprintf("tf-test-cluster-%s", acctest.RandString(10)) - np := fmt.Sprintf("tf-test-np-%s", acctest.RandString(10)) - - resource.Test(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t) }, - Providers: testAccProviders, - CheckDestroy: testAccCheckContainerClusterDestroy, - Steps: []resource.TestStep{ { Config: testAccContainerNodePool_withWorkloadMetadataConfig_gkeMetadataServer(pid, cluster, np), Check: resource.ComposeTestCheckFunc( @@ -229,13 +215,13 @@ func TestAccContainerNodePool_withWorkloadIdentityConfig(t *testing.T) { func TestAccContainerNodePool_withSandboxConfig(t *testing.T) { t.Parallel() - cluster := fmt.Sprintf("tf-test-cluster-%s", acctest.RandString(10)) - np := fmt.Sprintf("tf-test-np-%s", acctest.RandString(10)) + cluster := fmt.Sprintf("tf-test-cluster-%s", randString(t, 10)) + np := fmt.Sprintf("tf-test-np-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckContainerClusterDestroy, + CheckDestroy: testAccCheckContainerClusterDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccContainerNodePool_withSandboxConfig(cluster, np), @@ -249,6 +235,32 @@ func TestAccContainerNodePool_withSandboxConfig(t *testing.T) { ImportState: true, ImportStateVerify: true, }, + { + // GKE sets automatic labels and taints on nodes. This makes + // sure we ignore the automatic ones and keep our own. + Config: testAccContainerNodePool_withSandboxConfig(cluster, np), + // When we use PlanOnly without ExpectNonEmptyPlan, we're + // guaranteeing that the computed fields of the resources don't + // force an unintentional change to the plan. That is, we + // expect this part of the test to pass only if the plan + // doesn't change. + PlanOnly: true, + }, + { + // Now we'll modify the taints, which should force a change to + // the plan. We make sure we don't over-suppress and end up + // eliminating the labels or taints we asked for. This will + // destroy and recreate the node pool as taints are immutable. 
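Stepping back from the sandbox steps for a moment: two small helpers recur throughout this file's diff, `randString(t, n)` in place of `acctest.RandString(n)`, and `skipIfVcr(t)` on the name-prefix and no-name tests above. Both serve VCR determinism. Their implementations are not part of this diff, so the following is a hedged sketch of the rationale their call sites imply:

```go
// Hedged sketch; randString's real implementation is not shown in this
// diff. Its signature and call sites are, for example:
//
//   cluster := fmt.Sprintf("tf-test-cluster-%s", randString(t, 10))
//
// Taking *testing.T lets the harness derive names reproducibly per test,
// so a recorded cassette (which embeds the resource names from the
// recording run) can match again on replay.
//
// Where the randomness is server-side rather than ours, no seeding can
// help, so those tests opt out of VCR entirely:
func TestAccExample_namePrefix(t *testing.T) { // hypothetical test
	// Randomness: name_prefix makes the API pick the suffix, so the
	// recorded request/response pairs cannot match on replay.
	skipIfVcr(t)
	t.Parallel()
	// ... usual vcrTest(t, resource.TestCase{...}) body ...
}
```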
+ Config: testAccContainerNodePool_withSandboxConfig_changeTaints(cluster, np), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr("google_container_node_pool.with_sandbox_config", + "node_config.0.labels.test.terraform.io/gke-sandbox", "true"), + resource.TestCheckResourceAttr("google_container_node_pool.with_sandbox_config", + "node_config.0.taint.0.key", "test.terraform.io/gke-sandbox"), + resource.TestCheckResourceAttr("google_container_node_pool.with_sandbox_config", + "node_config.0.taint.1.key", "test.terraform.io/gke-sandbox-amended"), + ), + }, }, }) } @@ -256,13 +268,13 @@ func TestAccContainerNodePool_withSandboxConfig(t *testing.T) { func TestAccContainerNodePool_withBootDiskKmsKey(t *testing.T) { t.Parallel() - cluster := fmt.Sprintf("tf-test-cluster-%s", acctest.RandString(10)) - np := fmt.Sprintf("tf-test-np-%s", acctest.RandString(10)) + cluster := fmt.Sprintf("tf-test-cluster-%s", randString(t, 10)) + np := fmt.Sprintf("tf-test-np-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckContainerClusterDestroy, + CheckDestroy: testAccCheckContainerClusterDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccContainerNodePool_withBootDiskKmsKey(getTestProjectFromEnv(), cluster, np), @@ -280,13 +292,13 @@ func TestAccContainerNodePool_withBootDiskKmsKey(t *testing.T) { func TestAccContainerNodePool_withUpgradeSettings(t *testing.T) { t.Parallel() - cluster := fmt.Sprintf("tf-test-cluster-%s", acctest.RandString(10)) - np := fmt.Sprintf("tf-test-np-%s", acctest.RandString(10)) + cluster := fmt.Sprintf("tf-test-cluster-%s", randString(t, 10)) + np := fmt.Sprintf("tf-test-np-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckContainerClusterDestroy, + CheckDestroy: testAccCheckContainerClusterDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccContainerNodePool_withUpgradeSettings(cluster, np, 2, 3), @@ -311,13 +323,13 @@ func TestAccContainerNodePool_withUpgradeSettings(t *testing.T) { func TestAccContainerNodePool_withInvalidUpgradeSettings(t *testing.T) { t.Parallel() - cluster := fmt.Sprintf("tf-test-cluster-%s", acctest.RandString(10)) - np := fmt.Sprintf("tf-test-np-%s", acctest.RandString(10)) + cluster := fmt.Sprintf("tf-test-cluster-%s", randString(t, 10)) + np := fmt.Sprintf("tf-test-np-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckContainerClusterDestroy, + CheckDestroy: testAccCheckContainerClusterDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccContainerNodePool_withUpgradeSettings(cluster, np, 0, 0), @@ -330,13 +342,13 @@ func TestAccContainerNodePool_withInvalidUpgradeSettings(t *testing.T) { func TestAccContainerNodePool_withGPU(t *testing.T) { t.Parallel() - cluster := fmt.Sprintf("tf-test-cluster-%s", acctest.RandString(10)) - np := fmt.Sprintf("tf-test-np-%s", acctest.RandString(10)) + cluster := fmt.Sprintf("tf-test-cluster-%s", randString(t, 10)) + np := fmt.Sprintf("tf-test-np-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: 
testAccCheckContainerNodePoolDestroy, + CheckDestroy: testAccCheckContainerNodePoolDestroyProducer(t), Steps: []resource.TestStep{ resource.TestStep{ Config: testAccContainerNodePool_withGPU(cluster, np), @@ -353,18 +365,18 @@ func TestAccContainerNodePool_withGPU(t *testing.T) { func TestAccContainerNodePool_withManagement(t *testing.T) { t.Parallel() - cluster := fmt.Sprintf("tf-test-cluster-%s", acctest.RandString(10)) - nodePool := fmt.Sprintf("tf-test-nodepool-%s", acctest.RandString(10)) + cluster := fmt.Sprintf("tf-test-cluster-%s", randString(t, 10)) + nodePool := fmt.Sprintf("tf-test-nodepool-%s", randString(t, 10)) management := ` management { - auto_repair = "true" - auto_upgrade = "true" + auto_repair = "false" + auto_upgrade = "false" }` - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckContainerNodePoolDestroy, + CheckDestroy: testAccCheckContainerNodePoolDestroyProducer(t), Steps: []resource.TestStep{ resource.TestStep{ Config: testAccContainerNodePool_withManagement(cluster, nodePool, ""), @@ -372,9 +384,9 @@ func TestAccContainerNodePool_withManagement(t *testing.T) { resource.TestCheckResourceAttr( "google_container_node_pool.np_with_management", "management.#", "1"), resource.TestCheckResourceAttr( - "google_container_node_pool.np_with_management", "management.0.auto_repair", "false"), + "google_container_node_pool.np_with_management", "management.0.auto_repair", "true"), resource.TestCheckResourceAttr( - "google_container_node_pool.np_with_management", "management.0.auto_repair", "false"), + "google_container_node_pool.np_with_management", "management.0.auto_upgrade", "true"), ), }, resource.TestStep{ @@ -388,9 +400,9 @@ func TestAccContainerNodePool_withManagement(t *testing.T) { resource.TestCheckResourceAttr( "google_container_node_pool.np_with_management", "management.#", "1"), resource.TestCheckResourceAttr( - "google_container_node_pool.np_with_management", "management.0.auto_repair", "true"), + "google_container_node_pool.np_with_management", "management.0.auto_repair", "false"), resource.TestCheckResourceAttr( - "google_container_node_pool.np_with_management", "management.0.auto_repair", "true"), + "google_container_node_pool.np_with_management", "management.0.auto_upgrade", "false"), ), }, resource.TestStep{ @@ -405,13 +417,13 @@ func TestAccContainerNodePool_withManagement(t *testing.T) { func TestAccContainerNodePool_withNodeConfigScopeAlias(t *testing.T) { t.Parallel() - cluster := fmt.Sprintf("tf-test-cluster-%s", acctest.RandString(10)) - np := fmt.Sprintf("tf-test-np-%s", acctest.RandString(10)) + cluster := fmt.Sprintf("tf-test-cluster-%s", randString(t, 10)) + np := fmt.Sprintf("tf-test-np-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckContainerNodePoolDestroy, + CheckDestroy: testAccCheckContainerNodePoolDestroyProducer(t), Steps: []resource.TestStep{ resource.TestStep{ Config: testAccContainerNodePool_withNodeConfigScopeAlias(cluster, np), @@ -429,13 +441,13 @@ func TestAccContainerNodePool_withNodeConfigScopeAlias(t *testing.T) { func TestAccContainerNodePool_regionalAutoscaling(t *testing.T) { t.Parallel() - cluster := fmt.Sprintf("tf-test-cluster-%s", acctest.RandString(10)) - np := fmt.Sprintf("tf-test-nodepool-%s", acctest.RandString(10)) + cluster := fmt.Sprintf("tf-test-cluster-%s", 
randString(t, 10)) + np := fmt.Sprintf("tf-test-nodepool-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckContainerNodePoolDestroy, + CheckDestroy: testAccCheckContainerNodePoolDestroyProducer(t), Steps: []resource.TestStep{ resource.TestStep{ Config: testAccContainerNodePool_regionalAutoscaling(cluster, np), @@ -483,13 +495,13 @@ func TestAccContainerNodePool_regionalAutoscaling(t *testing.T) { func TestAccContainerNodePool_autoscaling(t *testing.T) { t.Parallel() - cluster := fmt.Sprintf("tf-test-cluster-%s", acctest.RandString(10)) - np := fmt.Sprintf("tf-test-nodepool-%s", acctest.RandString(10)) + cluster := fmt.Sprintf("tf-test-cluster-%s", randString(t, 10)) + np := fmt.Sprintf("tf-test-nodepool-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckContainerNodePoolDestroy, + CheckDestroy: testAccCheckContainerNodePoolDestroyProducer(t), Steps: []resource.TestStep{ resource.TestStep{ Config: testAccContainerNodePool_autoscaling(cluster, np), @@ -537,13 +549,13 @@ func TestAccContainerNodePool_autoscaling(t *testing.T) { func TestAccContainerNodePool_resize(t *testing.T) { t.Parallel() - cluster := fmt.Sprintf("tf-test-cluster-%s", acctest.RandString(10)) - np := fmt.Sprintf("tf-test-nodepool-%s", acctest.RandString(10)) + cluster := fmt.Sprintf("tf-test-cluster-%s", randString(t, 10)) + np := fmt.Sprintf("tf-test-nodepool-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckContainerClusterDestroy, + CheckDestroy: testAccCheckContainerClusterDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccContainerNodePool_additionalZones(cluster, np), @@ -571,16 +583,21 @@ func TestAccContainerNodePool_resize(t *testing.T) { }) } + func TestAccContainerNodePool_version(t *testing.T) { t.Parallel() - cluster := fmt.Sprintf("tf-test-cluster-%s", acctest.RandString(10)) - np := fmt.Sprintf("tf-test-nodepool-%s", acctest.RandString(10)) + // Re-enable this test when there is more than one acceptable node pool version + // for the current master version + t.Skip() - resource.Test(t, resource.TestCase{ + cluster := fmt.Sprintf("tf-test-cluster-%s", randString(t, 10)) + np := fmt.Sprintf("tf-test-nodepool-%s", randString(t, 10)) + + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckContainerClusterDestroy, + CheckDestroy: testAccCheckContainerClusterDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccContainerNodePool_version(cluster, np), @@ -613,13 +630,13 @@ func TestAccContainerNodePool_version(t *testing.T) { func TestAccContainerNodePool_regionalClusters(t *testing.T) { t.Parallel() - cluster := fmt.Sprintf("tf-test-cluster-%s", acctest.RandString(10)) - np := fmt.Sprintf("tf-test-nodepool-%s", acctest.RandString(10)) + cluster := fmt.Sprintf("tf-test-cluster-%s", randString(t, 10)) + np := fmt.Sprintf("tf-test-nodepool-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckContainerNodePoolDestroy, + CheckDestroy: 
testAccCheckContainerNodePoolDestroyProducer(t), Steps: []resource.TestStep{ resource.TestStep{ Config: testAccContainerNodePool_regionalClusters(cluster, np), @@ -636,13 +653,13 @@ func TestAccContainerNodePool_regionalClusters(t *testing.T) { func TestAccContainerNodePool_012_ConfigModeAttr(t *testing.T) { t.Parallel() - cluster := fmt.Sprintf("tf-test-cluster-%s", acctest.RandString(10)) - np := fmt.Sprintf("tf-test-nodepool-%s", acctest.RandString(10)) + cluster := fmt.Sprintf("tf-test-cluster-%s", randString(t, 10)) + np := fmt.Sprintf("tf-test-nodepool-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckContainerNodePoolDestroy, + CheckDestroy: testAccCheckContainerNodePoolDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccContainerNodePool_012_ConfigModeAttr1(cluster, np), @@ -667,13 +684,13 @@ func TestAccContainerNodePool_012_ConfigModeAttr(t *testing.T) { func TestAccContainerNodePool_EmptyGuestAccelerator(t *testing.T) { t.Parallel() - cluster := fmt.Sprintf("tf-test-cluster-%s", acctest.RandString(10)) - np := fmt.Sprintf("tf-test-nodepool-%s", acctest.RandString(10)) + cluster := fmt.Sprintf("tf-test-cluster-%s", randString(t, 10)) + np := fmt.Sprintf("tf-test-nodepool-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckContainerNodePoolDestroy, + CheckDestroy: testAccCheckContainerNodePoolDestroyProducer(t), Steps: []resource.TestStep{ { // Test alternative way to specify an empty node pool @@ -712,13 +729,13 @@ func TestAccContainerNodePool_EmptyGuestAccelerator(t *testing.T) { func TestAccContainerNodePool_shieldedInstanceConfig(t *testing.T) { t.Parallel() - cluster := fmt.Sprintf("tf-test-cluster-%s", acctest.RandString(10)) - np := fmt.Sprintf("tf-test-nodepool-%s", acctest.RandString(10)) + cluster := fmt.Sprintf("tf-test-cluster-%s", randString(t, 10)) + np := fmt.Sprintf("tf-test-nodepool-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckContainerNodePoolDestroy, + CheckDestroy: testAccCheckContainerNodePoolDestroyProducer(t), Steps: []resource.TestStep{ resource.TestStep{ Config: testAccContainerNodePool_shieldedInstanceConfig(cluster, np), @@ -733,38 +750,40 @@ func TestAccContainerNodePool_shieldedInstanceConfig(t *testing.T) { }) } -func testAccCheckContainerNodePoolDestroy(s *terraform.State) error { - config := testAccProvider.Meta().(*Config) - - for _, rs := range s.RootModule().Resources { - if rs.Type != "google_container_node_pool" { - continue +func testAccCheckContainerNodePoolDestroyProducer(t *testing.T) func(s *terraform.State) error { + return func(s *terraform.State) error { + config := googleProviderConfig(t) + + for _, rs := range s.RootModule().Resources { + if rs.Type != "google_container_node_pool" { + continue + } + + attributes := rs.Primary.Attributes + location := attributes["location"] + + var err error + if location != "" { + _, err = config.clientContainer.Projects.Zones.Clusters.NodePools.Get( + config.Project, attributes["location"], attributes["cluster"], attributes["name"]).Do() + } else { + name := fmt.Sprintf( + "projects/%s/locations/%s/clusters/%s/nodePools/%s", + config.Project, + 
attributes["location"], + attributes["cluster"], + attributes["name"], + ) + _, err = config.clientContainerBeta.Projects.Locations.Clusters.NodePools.Get(name).Do() + } + + if err == nil { + return fmt.Errorf("NodePool still exists") + } } - attributes := rs.Primary.Attributes - location := attributes["location"] - - var err error - if location != "" { - _, err = config.clientContainer.Projects.Zones.Clusters.NodePools.Get( - config.Project, attributes["location"], attributes["cluster"], attributes["name"]).Do() - } else { - name := fmt.Sprintf( - "projects/%s/locations/%s/clusters/%s/nodePools/%s", - config.Project, - attributes["location"], - attributes["cluster"], - attributes["name"], - ) - _, err = config.clientContainerBeta.Projects.Locations.Clusters.NodePools.Get(name).Do() - } - - if err == nil { - return fmt.Errorf("NodePool still exists") - } + return nil } - - return nil } func testAccContainerNodePool_basic(cluster, np string) string { @@ -784,7 +803,6 @@ resource "google_container_node_pool" "np" { `, cluster, np) } -<% unless version == 'ga' -%> func testAccContainerNodePool_nodeLocations(cluster, np, network string) string { return fmt.Sprintf(` resource "google_compute_network" "container_network" { @@ -842,7 +860,6 @@ resource "google_container_node_pool" "np" { } `, network, cluster, np) } -<% end -%> func testAccContainerNodePool_maxPodsPerNode(cluster, np, network string) string { return fmt.Sprintf(` @@ -1265,9 +1282,71 @@ resource "google_container_node_pool" "with_sandbox_config" { initial_node_count = 1 node_config { image_type = "COS_CONTAINERD" + + sandbox_config { + sandbox_type = "gvisor" + } + + labels = { + "test.terraform.io/gke-sandbox" = "true" + } + + taint { + key = "test.terraform.io/gke-sandbox" + value = "true" + effect = "NO_SCHEDULE" + } + + oauth_scopes = [ + "https://www.googleapis.com/auth/logging.write", + "https://www.googleapis.com/auth/monitoring", + ] + } +} +`, cluster, np) +} + +func testAccContainerNodePool_withSandboxConfig_changeTaints(cluster, np string) string { + return fmt.Sprintf(` +data "google_container_engine_versions" "central1a" { + location = "us-central1-a" +} + +resource "google_container_cluster" "cluster" { + name = "%s" + location = "us-central1-a" + initial_node_count = 1 + min_master_version = data.google_container_engine_versions.central1a.latest_master_version +} + +resource "google_container_node_pool" "with_sandbox_config" { + name = "%s" + location = "us-central1-a" + cluster = google_container_cluster.cluster.name + initial_node_count = 1 + node_config { + image_type = "COS_CONTAINERD" + sandbox_config { sandbox_type = "gvisor" } + + labels = { + "test.terraform.io/gke-sandbox" = "true" + } + + taint { + key = "test.terraform.io/gke-sandbox" + value = "true" + effect = "NO_SCHEDULE" + } + + taint { + key = "test.terraform.io/gke-sandbox-amended" + value = "also-true" + effect = "NO_SCHEDULE" + } + oauth_scopes = [ "https://www.googleapis.com/auth/logging.write", "https://www.googleapis.com/auth/monitoring", @@ -1366,7 +1445,6 @@ resource "google_container_cluster" "cluster" { name = "%s" location = "us-central1-c" initial_node_count = 1 - node_version = data.google_container_engine_versions.central1c.latest_node_version min_master_version = data.google_container_engine_versions.central1c.latest_master_version } diff --git a/third_party/terraform/tests/resource_container_registry_test.go b/third_party/terraform/tests/resource_container_registry_test.go index fdc5a89683c8..b4698705aae7 100644 --- 
a/third_party/terraform/tests/resource_container_registry_test.go +++ b/third_party/terraform/tests/resource_container_registry_test.go @@ -4,14 +4,13 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" ) func TestAccContainerRegistry_basic(t *testing.T) { t.Parallel() - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -24,9 +23,9 @@ func TestAccContainerRegistry_basic(t *testing.T) { func TestAccContainerRegistry_iam(t *testing.T) { t.Parallel() - account := acctest.RandString(10) + account := randString(t, 10) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ diff --git a/third_party/terraform/tests/resource_containeranalysis_note_test.go.erb b/third_party/terraform/tests/resource_containeranalysis_note_test.go.erb index ef5e8d306742..e119bb95ff10 100644 --- a/third_party/terraform/tests/resource_containeranalysis_note_test.go.erb +++ b/third_party/terraform/tests/resource_containeranalysis_note_test.go.erb @@ -6,7 +6,6 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" ) @@ -14,12 +13,12 @@ import ( func TestAccContainerAnalysisNote_basic(t *testing.T) { t.Parallel() - name := acctest.RandString(10) - readableName := acctest.RandString(10) - resource.Test(t, resource.TestCase{ + name := randString(t, 10) + readableName := randString(t, 10) + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckContainerAnalysisNoteDestroy, + CheckDestroy: testAccCheckContainerAnalysisNoteDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccContainerAnalysisNoteBasic(name, readableName), @@ -36,13 +35,13 @@ func TestAccContainerAnalysisNote_basic(t *testing.T) { func TestAccContainerAnalysisNote_update(t *testing.T) { t.Parallel() - name := acctest.RandString(10) - readableName := acctest.RandString(10) - readableName2 := acctest.RandString(10) - resource.Test(t, resource.TestCase{ + name := randString(t, 10) + readableName := randString(t, 10) + readableName2 := randString(t, 10) + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckContainerAnalysisNoteDestroy, + CheckDestroy: testAccCheckContainerAnalysisNoteDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccContainerAnalysisNoteBasic(name, readableName), diff --git a/third_party/terraform/tests/resource_data_catalog_entry_group_test.go b/third_party/terraform/tests/resource_data_catalog_entry_group_test.go new file mode 100644 index 000000000000..79b8c4e699a8 --- /dev/null +++ b/third_party/terraform/tests/resource_data_catalog_entry_group_test.go @@ -0,0 +1,47 @@ +package google + +import ( + "testing" + + "github.com/hashicorp/terraform-plugin-sdk/helper/resource" +) + +func TestAccDataCatalogEntryGroup_update(t *testing.T) { + t.Parallel() + + context := map[string]interface{}{ + "random_suffix": randString(t, 10), + } + + vcrTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckDataCatalogEntryGroupDestroyProducer(t), + 
Steps: []resource.TestStep{ + { + Config: testAccDataCatalogEntryGroup_dataCatalogEntryGroupBasicExample(context), + }, + { + ResourceName: "google_data_catalog_entry_group.basic_entry_group", + ImportState: true, + ImportStateVerify: true, + }, + { + Config: testAccDataCatalogEntryGroup_dataCatalogEntryGroupFullExample(context), + }, + { + ResourceName: "google_data_catalog_entry_group.basic_entry_group", + ImportState: true, + ImportStateVerify: true, + }, + { + Config: testAccDataCatalogEntryGroup_dataCatalogEntryGroupBasicExample(context), + }, + { + ResourceName: "google_data_catalog_entry_group.basic_entry_group", + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} diff --git a/third_party/terraform/tests/resource_data_catalog_entry_test.go b/third_party/terraform/tests/resource_data_catalog_entry_test.go new file mode 100644 index 000000000000..dff03dc831f3 --- /dev/null +++ b/third_party/terraform/tests/resource_data_catalog_entry_test.go @@ -0,0 +1,47 @@ +package google + +import ( + "testing" + + "github.com/hashicorp/terraform-plugin-sdk/helper/resource" +) + +func TestAccDataCatalogEntry_update(t *testing.T) { + t.Parallel() + + context := map[string]interface{}{ + "random_suffix": randString(t, 10), + } + + vcrTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckDataCatalogEntryDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccDataCatalogEntry_dataCatalogEntryBasicExample(context), + }, + { + ResourceName: "google_data_catalog_entry.basic_entry", + ImportState: true, + ImportStateVerify: true, + }, + { + Config: testAccDataCatalogEntry_dataCatalogEntryFullExample(context), + }, + { + ResourceName: "google_data_catalog_entry.basic_entry", + ImportState: true, + ImportStateVerify: true, + }, + { + Config: testAccDataCatalogEntry_dataCatalogEntryBasicExample(context), + }, + { + ResourceName: "google_data_catalog_entry.basic_entry", + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} diff --git a/third_party/terraform/tests/resource_data_catalog_tag_test.go b/third_party/terraform/tests/resource_data_catalog_tag_test.go new file mode 100644 index 000000000000..f09d92005c84 --- /dev/null +++ b/third_party/terraform/tests/resource_data_catalog_tag_test.go @@ -0,0 +1,122 @@ +package google + +import ( + "testing" + + "github.com/hashicorp/terraform-plugin-sdk/helper/resource" +) + +func TestAccDataCatalogTag_update(t *testing.T) { + t.Parallel() + + context := map[string]interface{}{ + "force_delete": true, + "random_suffix": randString(t, 10), + } + + vcrTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckDataCatalogEntryDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccDataCatalogTag_dataCatalogEntryTagBasicExample(context), + }, + { + ResourceName: "google_data_catalog_tag.basic_tag", + ImportState: true, + ImportStateVerify: true, + }, + { + Config: testAccDataCatalogTag_dataCatalogEntryTag_update(context), + }, + { + ResourceName: "google_data_catalog_tag.basic_tag", + ImportState: true, + ImportStateVerify: true, + }, + { + Config: testAccDataCatalogTag_dataCatalogEntryTagBasicExample(context), + }, + { + ResourceName: "google_data_catalog_tag.basic_tag", + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func testAccDataCatalogTag_dataCatalogEntryTag_update(context map[string]interface{}) string { + return Nprintf(` +resource 
"google_data_catalog_entry" "entry" { + entry_group = google_data_catalog_entry_group.entry_group.id + entry_id = "tf_test_my_entry%{random_suffix}" + + user_specified_type = "my_custom_type" + user_specified_system = "SomethingExternal" +} + +resource "google_data_catalog_entry_group" "entry_group" { + entry_group_id = "tf_test_my_entry_group%{random_suffix}" +} + +resource "google_data_catalog_tag_template" "tag_template" { + tag_template_id = "tf_test_my_template%{random_suffix}" + region = "us-central1" + display_name = "Demo Tag Template" + + fields { + field_id = "source" + display_name = "Source of data asset" + type { + primitive_type = "STRING" + } + is_required = true + } + + fields { + field_id = "num_rows" + display_name = "Number of rows in the data asset" + type { + primitive_type = "DOUBLE" + } + } + + fields { + field_id = "pii_type" + display_name = "PII type" + type { + enum_type { + allowed_values { + display_name = "EMAIL" + } + allowed_values { + display_name = "SOCIAL SECURITY NUMBER" + } + allowed_values { + display_name = "NONE" + } + } + } + } + + force_delete = "%{force_delete}" +} + +resource "google_data_catalog_tag" "basic_tag" { + parent = google_data_catalog_entry.entry.id + template = google_data_catalog_tag_template.tag_template.id + + fields { + field_name = "source" + string_value = "my-new-string" + } + + fields { + field_name = "num_rows" + double_value = 5 + } +} +`, context) +} diff --git a/third_party/terraform/tests/resource_data_fusion_instance_test.go.erb b/third_party/terraform/tests/resource_data_fusion_instance_test.go.erb index ddee647b716b..ccb4f262d7e1 100644 --- a/third_party/terraform/tests/resource_data_fusion_instance_test.go.erb +++ b/third_party/terraform/tests/resource_data_fusion_instance_test.go.erb @@ -6,7 +6,6 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" ) @@ -14,9 +13,9 @@ import ( func TestAccDataFusionInstance_update(t *testing.T) { t.Parallel() - instanceName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + instanceName := fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -46,6 +45,7 @@ resource "google_data_fusion_instance" "foobar" { name = "%s" region = "us-central1" type = "BASIC" + version = "6.1.1" } `, instanceName) } @@ -63,6 +63,7 @@ resource "google_data_fusion_instance" "foobar" { label1 = "value1" label2 = "value2" } + version = "6.1.1" } `, instanceName) } diff --git a/third_party/terraform/tests/resource_dataflow_flex_template_job_test.go.erb b/third_party/terraform/tests/resource_dataflow_flex_template_job_test.go.erb new file mode 100644 index 000000000000..a52ae29479e0 --- /dev/null +++ b/third_party/terraform/tests/resource_dataflow_flex_template_job_test.go.erb @@ -0,0 +1,90 @@ +<% autogen_exception -%> +package google +<% unless version == 'ga' -%> + +import ( + "fmt" + "testing" + + "github.com/hashicorp/terraform-plugin-sdk/helper/resource" +) + +func TestAccDataflowFlexTemplateJob_basic(t *testing.T) { + t.Parallel() + + randStr := randString(t, 10) + bucket := "tf-test-dataflow-gcs-" + randStr + job := "tf-test-dataflow-job-" + randStr + + vcrTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: 
testAccCheckDataflowJobDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccDataflowFlowFlexTemplateJob_basic(bucket, job), + Check: resource.ComposeTestCheckFunc( + testAccDataflowJobExists(t, "google_dataflow_flex_template_job.big_data"), + ), + }, + }, + }) +} + +// note: this config creates a job that doesn't actually do anything +func testAccDataflowFlowFlexTemplateJob_basic(bucket, job string) string { + return fmt.Sprintf(` +resource "google_storage_bucket" "temp" { + name = "%s" + force_destroy = true +} + +resource "google_storage_bucket_object" "flex_template" { + name = "flex_template.json" + bucket = google_storage_bucket.temp.name + content = < diff --git a/third_party/terraform/tests/resource_dataflow_job_test.go b/third_party/terraform/tests/resource_dataflow_job_test.go index cf34cbaf8e82..899b217f2eca 100644 --- a/third_party/terraform/tests/resource_dataflow_job_test.go +++ b/third_party/terraform/tests/resource_dataflow_job_test.go @@ -6,7 +6,6 @@ import ( "testing" "time" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" @@ -16,25 +15,29 @@ import ( const ( testDataflowJobTemplateWordCountUrl = "gs://dataflow-templates/latest/Word_Count" testDataflowJobSampleFileUrl = "gs://dataflow-samples/shakespeare/various.txt" + testDataflowJobTemplateTextToPubsub = "gs://dataflow-templates/latest/Stream_GCS_Text_to_Cloud_PubSub" ) func TestAccDataflowJob_basic(t *testing.T) { + // Dataflow responses include serialized java classes and bash commands + // This makes body comparison infeasible + skipIfVcr(t) t.Parallel() - randStr := acctest.RandString(10) + randStr := randString(t, 10) bucket := "tf-test-dataflow-gcs-" + randStr job := "tf-test-dataflow-job-" + randStr zone := "us-central1-f" - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckDataflowJobDestroy, + CheckDestroy: testAccCheckDataflowJobDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccDataflowJob_zone(bucket, job, zone), Check: resource.ComposeTestCheckFunc( - testAccDataflowJobExists("google_dataflow_job.big_data"), + testAccDataflowJobExists(t, "google_dataflow_job.big_data"), ), }, }, @@ -42,21 +45,24 @@ func TestAccDataflowJob_basic(t *testing.T) { } func TestAccDataflowJob_withRegion(t *testing.T) { + // Dataflow responses include serialized java classes and bash commands + // This makes body comparison infeasible + skipIfVcr(t) t.Parallel() - randStr := acctest.RandString(10) + randStr := randString(t, 10) bucket := "tf-test-dataflow-gcs-" + randStr job := "tf-test-dataflow-job-" + randStr - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckDataflowJobRegionDestroy, + CheckDestroy: testAccCheckDataflowJobRegionDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccDataflowJob_region(bucket, job), Check: resource.ComposeTestCheckFunc( - testAccRegionalDataflowJobExists("google_dataflow_job.big_data", "us-central1"), + testAccRegionalDataflowJobExists(t, "google_dataflow_job.big_data", "us-central1"), ), }, }, @@ -64,23 +70,26 @@ func TestAccDataflowJob_withRegion(t *testing.T) { } func TestAccDataflowJob_withServiceAccount(t *testing.T) { + // Dataflow responses include serialized java classes and bash commands + // 
This makes body comparison infeasible + skipIfVcr(t) t.Parallel() - randStr := acctest.RandString(10) + randStr := randString(t, 10) bucket := "tf-test-dataflow-gcs-" + randStr job := "tf-test-dataflow-job-" + randStr accountId := "tf-test-dataflow-sa" + randStr - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckDataflowJobDestroy, + CheckDestroy: testAccCheckDataflowJobDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccDataflowJob_serviceAccount(bucket, job, accountId), Check: resource.ComposeTestCheckFunc( - testAccDataflowJobExists("google_dataflow_job.big_data"), - testAccDataflowJobHasServiceAccount("google_dataflow_job.big_data", accountId), + testAccDataflowJobExists(t, "google_dataflow_job.big_data"), + testAccDataflowJobHasServiceAccount(t, "google_dataflow_job.big_data", accountId), ), }, }, @@ -88,23 +97,26 @@ func TestAccDataflowJob_withServiceAccount(t *testing.T) { } func TestAccDataflowJob_withNetwork(t *testing.T) { + // Dataflow responses include serialized java classes and bash commands + // This makes body comparison infeasible + skipIfVcr(t) t.Parallel() - randStr := acctest.RandString(10) + randStr := randString(t, 10) bucket := "tf-test-dataflow-gcs-" + randStr job := "tf-test-dataflow-job-" + randStr network := "tf-test-dataflow-net" + randStr - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckDataflowJobDestroy, + CheckDestroy: testAccCheckDataflowJobDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccDataflowJob_network(bucket, job, network), Check: resource.ComposeTestCheckFunc( - testAccDataflowJobExists("google_dataflow_job.big_data"), - testAccDataflowJobHasNetwork("google_dataflow_job.big_data", network), + testAccDataflowJobExists(t, "google_dataflow_job.big_data"), + testAccDataflowJobHasNetwork(t, "google_dataflow_job.big_data", network), ), }, }, @@ -112,24 +124,27 @@ func TestAccDataflowJob_withNetwork(t *testing.T) { } func TestAccDataflowJob_withSubnetwork(t *testing.T) { + // Dataflow responses include serialized java classes and bash commands + // This makes body comparison infeasible + skipIfVcr(t) t.Parallel() - randStr := acctest.RandString(10) + randStr := randString(t, 10) bucket := "tf-test-dataflow-gcs-" + randStr job := "tf-test-dataflow-job-" + randStr network := "tf-test-dataflow-net" + randStr subnetwork := "tf-test-dataflow-subnet" + randStr - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckDataflowJobDestroy, + CheckDestroy: testAccCheckDataflowJobDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccDataflowJob_subnetwork(bucket, job, network, subnetwork), Check: resource.ComposeTestCheckFunc( - testAccDataflowJobExists("google_dataflow_job.big_data"), - testAccDataflowJobHasSubnetwork("google_dataflow_job.big_data", subnetwork), + testAccDataflowJobExists(t, "google_dataflow_job.big_data"), + testAccDataflowJobHasSubnetwork(t, "google_dataflow_job.big_data", subnetwork), ), }, }, @@ -137,24 +152,27 @@ func TestAccDataflowJob_withSubnetwork(t *testing.T) { } func TestAccDataflowJob_withLabels(t *testing.T) { + // Dataflow responses include serialized java classes and bash commands + // This makes body comparison infeasible + skipIfVcr(t) t.Parallel() - 
randStr := acctest.RandString(10) + randStr := randString(t, 10) bucket := "tf-test-dataflow-gcs-" + randStr job := "tf-test-dataflow-job-" + randStr key := "my-label" value := "my-value" - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckDataflowJobDestroy, + CheckDestroy: testAccCheckDataflowJobDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccDataflowJob_labels(bucket, job, key, value), Check: resource.ComposeTestCheckFunc( - testAccDataflowJobExists("google_dataflow_job.with_labels"), - testAccDataflowJobHasLabels("google_dataflow_job.with_labels", key), + testAccDataflowJobExists(t, "google_dataflow_job.with_labels"), + testAccDataflowJobHasLabels(t, "google_dataflow_job.with_labels", key), ), }, }, @@ -162,68 +180,163 @@ func TestAccDataflowJob_withLabels(t *testing.T) { } func TestAccDataflowJob_withIpConfig(t *testing.T) { + // Dataflow responses include serialized java classes and bash commands + // This makes body comparison infeasible + skipIfVcr(t) t.Parallel() - randStr := acctest.RandString(10) + randStr := randString(t, 10) bucket := "tf-test-dataflow-gcs-" + randStr job := "tf-test-dataflow-job-" + randStr - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckDataflowJobDestroy, + CheckDestroy: testAccCheckDataflowJobDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccDataflowJob_ipConfig(bucket, job), Check: resource.ComposeTestCheckFunc( - testAccDataflowJobExists("google_dataflow_job.big_data"), + testAccDataflowJobExists(t, "google_dataflow_job.big_data"), ), }, }, }) } -func testAccCheckDataflowJobDestroy(s *terraform.State) error { - for _, rs := range s.RootModule().Resources { - if rs.Type != "google_dataflow_job" { - continue - } +func TestAccDataflowJobWithAdditionalExperiments(t *testing.T) { + // Dataflow responses include serialized java classes and bash commands + // This makes body comparison infeasible + skipIfVcr(t) + t.Parallel() - config := testAccProvider.Meta().(*Config) - job, err := config.clientDataflow.Projects.Jobs.Get(config.Project, rs.Primary.ID).Do() - if job != nil { - if _, ok := dataflowTerminalStatesMap[job.CurrentState]; !ok { - return fmt.Errorf("Job still present") + randStr := randString(t, 10) + bucket := "tf-test-dataflow-gcs-" + randStr + job := "tf-test-dataflow-job-" + randStr + additionalExperiments := []string{"enable_stackdriver_agent_metrics", "shuffle_mode=service"} + + vcrTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckDataflowJobDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccDataflowJob_additionalExperiments(bucket, job, additionalExperiments), + Check: resource.ComposeTestCheckFunc( + testAccDataflowJobExists(t, "google_dataflow_job.with_additional_experiments"), + testAccDataflowJobHasExperiments(t, "google_dataflow_job.with_additional_experiments", additionalExperiments), + ), + }, + }, + }) +} + +func TestAccDataflowJob_streamUpdate(t *testing.T) { + // Dataflow responses include serialized java classes and bash commands + // This makes body comparison infeasible + skipIfVcr(t) + t.Parallel() + + suffix := randString(t, 10) + vcrTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: 
testAccCheckDataflowJobDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccDataflowJob_updateStream(suffix, "google_storage_bucket.bucket1.url"), + Check: resource.ComposeTestCheckFunc( + testAccDataflowJobExists(t, "google_dataflow_job.pubsub_stream"), + ), + }, + { + Config: testAccDataflowJob_updateStream(suffix, "google_storage_bucket.bucket2.url"), + Check: resource.ComposeTestCheckFunc( + testAccDataflowJobHasTempLocation(t, "google_dataflow_job.pubsub_stream", "gs://tf-test-bucket2-"+suffix), + ), + }, + }, + }) +} + +func TestAccDataflowJob_virtualUpdate(t *testing.T) { + // Dataflow responses include serialized java classes and bash commands + // This makes body comparison infeasible + skipIfVcr(t) + t.Parallel() + + suffix := randString(t, 10) + + // If the update is virtual-only, the ID should remain the same after updating. + var id string + vcrTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckDataflowJobDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccDataflowJob_virtualUpdate(suffix, "drain"), + Check: resource.ComposeTestCheckFunc( + testAccDataflowJobExists(t, "google_dataflow_job.pubsub_stream"), + testAccDataflowSetId(t, "google_dataflow_job.pubsub_stream", &id), + ), + }, + { + Config: testAccDataflowJob_virtualUpdate(suffix, "cancel"), + Check: resource.ComposeTestCheckFunc( + testAccDataflowCheckId(t, "google_dataflow_job.pubsub_stream", &id), + resource.TestCheckResourceAttr("google_dataflow_job.pubsub_stream", "on_delete", "cancel"), + ), + }, + }, + }) +} + +func testAccCheckDataflowJobDestroyProducer(t *testing.T) func(s *terraform.State) error { + return func(s *terraform.State) error { + for _, rs := range s.RootModule().Resources { + if rs.Type != "google_dataflow_job" { + continue + } + + config := googleProviderConfig(t) + job, err := config.clientDataflow.Projects.Jobs.Get(config.Project, rs.Primary.ID).Do() + if job != nil { + if _, ok := dataflowTerminalStatesMap[job.CurrentState]; !ok { + return fmt.Errorf("Job still present") + } + } else if err != nil { + return err } - } else if err != nil { - return err } - } - return nil + return nil + } } -func testAccCheckDataflowJobRegionDestroy(s *terraform.State) error { - for _, rs := range s.RootModule().Resources { - if rs.Type != "google_dataflow_job" { - continue - } +func testAccCheckDataflowJobRegionDestroyProducer(t *testing.T) func(s *terraform.State) error { + return func(s *terraform.State) error { + for _, rs := range s.RootModule().Resources { + if rs.Type != "google_dataflow_job" { + continue + } - config := testAccProvider.Meta().(*Config) - job, err := config.clientDataflow.Projects.Locations.Jobs.Get(config.Project, "us-central1", rs.Primary.ID).Do() - if job != nil { - if _, ok := dataflowTerminalStatesMap[job.CurrentState]; !ok { - return fmt.Errorf("Job still present") + config := googleProviderConfig(t) + job, err := config.clientDataflow.Projects.Locations.Jobs.Get(config.Project, "us-central1", rs.Primary.ID).Do() + if job != nil { + if _, ok := dataflowTerminalStatesMap[job.CurrentState]; !ok { + return fmt.Errorf("Job still present") + } + } else if err != nil { + return err } - } else if err != nil { - return err } - } - return nil + return nil + } } -func testAccDataflowJobExists(resource string) resource.TestCheckFunc { +func testAccDataflowJobExists(t *testing.T, resource string) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := 
s.RootModule().Resources[resource] if !ok { @@ -233,7 +346,7 @@ func testAccDataflowJobExists(resource string) resource.TestCheckFunc { return fmt.Errorf("no ID is set") } - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) _, err := config.clientDataflow.Projects.Jobs.Get(config.Project, rs.Primary.ID).Do() if err != nil { return fmt.Errorf("could not confirm Dataflow Job %q exists: %v", rs.Primary.ID, err) @@ -243,9 +356,35 @@ func testAccDataflowJobExists(resource string) resource.TestCheckFunc { } } -func testAccDataflowJobHasNetwork(res, expected string) resource.TestCheckFunc { +func testAccDataflowSetId(t *testing.T, resource string, id *string) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[resource] + if !ok { + return fmt.Errorf("resource %q not in state", resource) + } + + *id = rs.Primary.ID + return nil + } +} + +func testAccDataflowCheckId(t *testing.T, resource string, id *string) resource.TestCheckFunc { return func(s *terraform.State) error { - instanceTmpl, err := testAccDataflowJobGetGeneratedInstanceTemplate(s, res) + rs, ok := s.RootModule().Resources[resource] + if !ok { + return fmt.Errorf("resource %q not in state", resource) + } + + if rs.Primary.ID != *id { + return fmt.Errorf("ID did not match. Expected %s, received %s", *id, rs.Primary.ID) + } + return nil + } +} + +func testAccDataflowJobHasNetwork(t *testing.T, res, expected string) resource.TestCheckFunc { + return func(s *terraform.State) error { + instanceTmpl, err := testAccDataflowJobGetGeneratedInstanceTemplate(t, s, res) if err != nil { return fmt.Errorf("Error getting dataflow job instance template: %s", err) } @@ -260,9 +399,9 @@ func testAccDataflowJobHasNetwork(res, expected string) resource.TestCheckFunc { } } -func testAccDataflowJobHasSubnetwork(res, expected string) resource.TestCheckFunc { +func testAccDataflowJobHasSubnetwork(t *testing.T, res, expected string) resource.TestCheckFunc { return func(s *terraform.State) error { - instanceTmpl, err := testAccDataflowJobGetGeneratedInstanceTemplate(s, res) + instanceTmpl, err := testAccDataflowJobGetGeneratedInstanceTemplate(t, s, res) if err != nil { return fmt.Errorf("Error getting dataflow job instance template: %s", err) } @@ -277,9 +416,9 @@ func testAccDataflowJobHasSubnetwork(res, expected string) resource.TestCheckFun } } -func testAccDataflowJobHasServiceAccount(res, expectedId string) resource.TestCheckFunc { +func testAccDataflowJobHasServiceAccount(t *testing.T, res, expectedId string) resource.TestCheckFunc { return func(s *terraform.State) error { - instanceTmpl, err := testAccDataflowJobGetGeneratedInstanceTemplate(s, res) + instanceTmpl, err := testAccDataflowJobGetGeneratedInstanceTemplate(t, s, res) if err != nil { return fmt.Errorf("Error getting dataflow job instance template: %s", err) } @@ -295,7 +434,7 @@ func testAccDataflowJobHasServiceAccount(res, expectedId string) resource.TestCh } } -func testAccDataflowJobGetGeneratedInstanceTemplate(s *terraform.State, res string) (*compute.InstanceTemplate, error) { +func testAccDataflowJobGetGeneratedInstanceTemplate(t *testing.T, s *terraform.State, res string) (*compute.InstanceTemplate, error) { rs, ok := s.RootModule().Resources[res] if !ok { return nil, fmt.Errorf("resource %q not in state", res) @@ -305,7 +444,7 @@ func testAccDataflowJobGetGeneratedInstanceTemplate(s *terraform.State, res stri } filter := fmt.Sprintf("properties.labels.dataflow_job_id = %s", rs.Primary.ID) - config := 
testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) var instanceTemplate *compute.InstanceTemplate @@ -336,7 +475,7 @@ func testAccDataflowJobGetGeneratedInstanceTemplate(s *terraform.State, res stri return instanceTemplate, nil } -func testAccRegionalDataflowJobExists(res, region string) resource.TestCheckFunc { +func testAccRegionalDataflowJobExists(t *testing.T, res, region string) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[res] if !ok { @@ -346,7 +485,7 @@ func testAccRegionalDataflowJobExists(res, region string) resource.TestCheckFunc if rs.Primary.ID == "" { return fmt.Errorf("No ID is set") } - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) _, err := config.clientDataflow.Projects.Locations.Jobs.Get(config.Project, region, rs.Primary.ID).Do() if err != nil { return fmt.Errorf("Job does not exist") @@ -356,7 +495,7 @@ func testAccRegionalDataflowJobExists(res, region string) resource.TestCheckFunc } } -func testAccDataflowJobHasLabels(res, key string) resource.TestCheckFunc { +func testAccDataflowJobHasLabels(t *testing.T, res, key string) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[res] if !ok { @@ -366,7 +505,7 @@ func testAccDataflowJobHasLabels(res, key string) resource.TestCheckFunc { if rs.Primary.ID == "" { return fmt.Errorf("No ID is set") } - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) job, err := config.clientDataflow.Projects.Jobs.Get(config.Project, rs.Primary.ID).Do() if err != nil { @@ -381,6 +520,69 @@ func testAccDataflowJobHasLabels(res, key string) resource.TestCheckFunc { } } +func testAccDataflowJobHasExperiments(t *testing.T, res string, experiments []string) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[res] + if !ok { + return fmt.Errorf("resource %q not found in state", res) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No ID is set") + } + config := googleProviderConfig(t) + + job, err := config.clientDataflow.Projects.Jobs.Get(config.Project, rs.Primary.ID).View("JOB_VIEW_ALL").Do() + if err != nil { + return fmt.Errorf("dataflow job does not exist") + } + + for _, expectedExperiment := range experiments { + var contains = false + for _, actualExperiment := range job.Environment.Experiments { + if actualExperiment == expectedExperiment { + contains = true + } + } + if contains != true { + return fmt.Errorf("Expected experiment '%s' not found in experiments", expectedExperiment) + } + } + + return nil + } +} + +func testAccDataflowJobHasTempLocation(t *testing.T, res, targetLocation string) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[res] + if !ok { + return fmt.Errorf("resource %q not found in state", res) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No ID is set") + } + config := googleProviderConfig(t) + + job, err := config.clientDataflow.Projects.Jobs.Get(config.Project, rs.Primary.ID).View("JOB_VIEW_ALL").Do() + if err != nil { + return fmt.Errorf("dataflow job does not exist") + } + sdkPipelineOptions, err := ConvertToMap(job.Environment.SdkPipelineOptions) + if err != nil { + return err + } + optionsMap := sdkPipelineOptions["options"].(map[string]interface{}) + + if optionsMap["tempLocation"] != targetLocation { + return fmt.Errorf("Temp locations do not match. 
Got %s while expecting %s", optionsMap["tempLocation"], targetLocation) + } + + return nil + } +} + func testAccDataflowJob_zone(bucket, job, zone string) string { return fmt.Sprintf(` resource "google_storage_bucket" "temp" { @@ -580,3 +782,74 @@ resource "google_dataflow_job" "with_labels" { `, bucket, job, labelKey, labelVal, testDataflowJobTemplateWordCountUrl, testDataflowJobSampleFileUrl) } + +func testAccDataflowJob_additionalExperiments(bucket string, job string, experiments []string) string { + return fmt.Sprintf(` +resource "google_storage_bucket" "temp" { + name = "%s" + force_destroy = true +} + +resource "google_dataflow_job" "with_additional_experiments" { + name = "%s" + + additional_experiments = ["%s"] + + template_gcs_path = "%s" + temp_gcs_location = google_storage_bucket.temp.url + parameters = { + inputFile = "%s" + output = "${google_storage_bucket.temp.url}/output" + } + on_delete = "cancel" +} +`, bucket, job, strings.Join(experiments, `", "`), testDataflowJobTemplateWordCountUrl, testDataflowJobSampleFileUrl) +} + +func testAccDataflowJob_updateStream(suffix, tempLocation string) string { + return fmt.Sprintf(` +resource "google_pubsub_topic" "topic" { + name = "tf-test-dataflow-job-%s" +} +resource "google_storage_bucket" "bucket1" { + name = "tf-test-bucket1-%s" + force_destroy = true +} +resource "google_storage_bucket" "bucket2" { + name = "tf-test-bucket2-%s" + force_destroy = true +} +resource "google_dataflow_job" "pubsub_stream" { + name = "tf-test-dataflow-job-%s" + template_gcs_path = "%s" + temp_gcs_location = %s + parameters = { + inputFilePattern = "${google_storage_bucket.bucket1.url}/*.json" + outputTopic = google_pubsub_topic.topic.id + } + on_delete = "cancel" +} + `, suffix, suffix, suffix, suffix, testDataflowJobTemplateTextToPubsub, tempLocation) +} + +func testAccDataflowJob_virtualUpdate(suffix, onDelete string) string { + return fmt.Sprintf(` +resource "google_pubsub_topic" "topic" { + name = "tf-test-dataflow-job-%s" +} +resource "google_storage_bucket" "bucket" { + name = "tf-test-bucket-%s" + force_destroy = true +} +resource "google_dataflow_job" "pubsub_stream" { + name = "tf-test-dataflow-job-%s" + template_gcs_path = "%s" + temp_gcs_location = google_storage_bucket.bucket.url + parameters = { + inputFilePattern = "${google_storage_bucket.bucket.url}/*.json" + outputTopic = google_pubsub_topic.topic.id + } + on_delete = "%s" +} + `, suffix, suffix, suffix, testDataflowJobTemplateTextToPubsub, onDelete) +} diff --git a/third_party/terraform/tests/resource_dataproc_cluster_iam_test.go b/third_party/terraform/tests/resource_dataproc_cluster_iam_test.go index 01f4b5505f9c..ecdd59346684 100644 --- a/third_party/terraform/tests/resource_dataproc_cluster_iam_test.go +++ b/third_party/terraform/tests/resource_dataproc_cluster_iam_test.go @@ -4,21 +4,20 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" ) func TestAccDataprocClusterIamBinding(t *testing.T) { t.Parallel() - cluster := "tf-dataproc-iam-" + acctest.RandString(10) - account := "tf-dataproc-iam-" + acctest.RandString(10) + cluster := "tf-dataproc-iam-" + randString(t, 10) + account := "tf-dataproc-iam-" + randString(t, 10) role := "roles/editor" importId := fmt.Sprintf("projects/%s/regions/%s/clusters/%s %s", getTestProjectFromEnv(), "us-central1", cluster, role) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: 
testAccProviders, Steps: []resource.TestStep{ @@ -53,8 +52,8 @@ func TestAccDataprocClusterIamBinding(t *testing.T) { func TestAccDataprocClusterIamMember(t *testing.T) { t.Parallel() - cluster := "tf-dataproc-iam-" + acctest.RandString(10) - account := "tf-dataproc-iam-" + acctest.RandString(10) + cluster := "tf-dataproc-iam-" + randString(t, 10) + account := "tf-dataproc-iam-" + randString(t, 10) role := "roles/editor" importId := fmt.Sprintf("projects/%s/regions/%s/clusters/%s %s serviceAccount:%s", @@ -64,7 +63,7 @@ func TestAccDataprocClusterIamMember(t *testing.T) { role, serviceAccountCanonicalEmail(account)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -91,14 +90,14 @@ func TestAccDataprocClusterIamMember(t *testing.T) { func TestAccDataprocClusterIamPolicy(t *testing.T) { t.Parallel() - cluster := "tf-dataproc-iam-" + acctest.RandString(10) - account := "tf-dataproc-iam-" + acctest.RandString(10) + cluster := "tf-dataproc-iam-" + randString(t, 10) + account := "tf-dataproc-iam-" + randString(t, 10) role := "roles/editor" importId := fmt.Sprintf("projects/%s/regions/%s/clusters/%s", getTestProjectFromEnv(), "us-central1", cluster) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ diff --git a/third_party/terraform/tests/resource_dataproc_cluster_test.go.erb b/third_party/terraform/tests/resource_dataproc_cluster_test.go.erb index c6e945b861f1..6a89584a9522 100644 --- a/third_party/terraform/tests/resource_dataproc_cluster_test.go.erb +++ b/third_party/terraform/tests/resource_dataproc_cluster_test.go.erb @@ -10,7 +10,6 @@ import ( "testing" "time" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" @@ -137,8 +136,8 @@ func TestDataprocDiffSuppress(t *testing.T) { func TestAccDataprocCluster_missingZoneGlobalRegion1(t *testing.T) { t.Parallel() - rnd := acctest.RandString(10) - resource.Test(t, resource.TestCase{ + rnd := randString(t, 10) + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -153,8 +152,8 @@ func TestAccDataprocCluster_missingZoneGlobalRegion1(t *testing.T) { func TestAccDataprocCluster_missingZoneGlobalRegion2(t *testing.T) { t.Parallel() - rnd := acctest.RandString(10) - resource.Test(t, resource.TestCase{ + rnd := randString(t, 10) + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -170,16 +169,16 @@ func TestAccDataprocCluster_basic(t *testing.T) { t.Parallel() var cluster dataproc.Cluster - rnd := acctest.RandString(10) - resource.Test(t, resource.TestCase{ + rnd := randString(t, 10) + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckDataprocClusterDestroy(), + CheckDestroy: testAccCheckDataprocClusterDestroy(t), Steps: []resource.TestStep{ { Config: testAccDataprocCluster_basic(rnd), Check: resource.ComposeTestCheckFunc( - testAccCheckDataprocClusterExists("google_dataproc_cluster.basic", &cluster), + testAccCheckDataprocClusterExists(t, "google_dataproc_cluster.basic", &cluster), // Default behaviour is for Dataproc to autogen or autodiscover a config bucket 
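					// rather than a fixed name, so the test only asserts that
					// the attribute is populated instead of pinning a value.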
resource.TestCheckResourceAttrSet("google_dataproc_cluster.basic", "cluster_config.0.bucket"), @@ -218,22 +217,22 @@ func TestAccDataprocCluster_basic(t *testing.T) { func TestAccDataprocCluster_withAccelerators(t *testing.T) { t.Parallel() - rnd := acctest.RandString(10) + rnd := randString(t, 10) var cluster dataproc.Cluster project := getTestProjectFromEnv() acceleratorType := "nvidia-tesla-k80" zone := "us-central1-c" - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckDataprocClusterDestroy(), + CheckDestroy: testAccCheckDataprocClusterDestroy(t), Steps: []resource.TestStep{ { Config: testAccDataprocCluster_withAccelerators(rnd, acceleratorType, zone), Check: resource.ComposeTestCheckFunc( - testAccCheckDataprocClusterExists("google_dataproc_cluster.accelerated_cluster", &cluster), + testAccCheckDataprocClusterExists(t, "google_dataproc_cluster.accelerated_cluster", &cluster), testAccCheckDataprocClusterAccelerator(&cluster, project, 1, 1), ), }, @@ -282,16 +281,16 @@ func TestAccDataprocCluster_withInternalIpOnlyTrue(t *testing.T) { t.Parallel() var cluster dataproc.Cluster - rnd := acctest.RandString(10) - resource.Test(t, resource.TestCase{ + rnd := randString(t, 10) + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckDataprocClusterDestroy(), + CheckDestroy: testAccCheckDataprocClusterDestroy(t), Steps: []resource.TestStep{ { Config: testAccDataprocCluster_withInternalIpOnlyTrue(rnd), Check: resource.ComposeTestCheckFunc( - testAccCheckDataprocClusterExists("google_dataproc_cluster.basic", &cluster), + testAccCheckDataprocClusterExists(t, "google_dataproc_cluster.basic", &cluster), // Testing behavior for Dataproc to use only internal IP addresses resource.TestCheckResourceAttr("google_dataproc_cluster.basic", "cluster_config.0.gce_cluster_config.0.internal_ip_only", "true"), @@ -305,16 +304,16 @@ func TestAccDataprocCluster_withMetadataAndTags(t *testing.T) { t.Parallel() var cluster dataproc.Cluster - rnd := acctest.RandString(10) - resource.Test(t, resource.TestCase{ + rnd := randString(t, 10) + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckDataprocClusterDestroy(), + CheckDestroy: testAccCheckDataprocClusterDestroy(t), Steps: []resource.TestStep{ { Config: testAccDataprocCluster_withMetadataAndTags(rnd), Check: resource.ComposeTestCheckFunc( - testAccCheckDataprocClusterExists("google_dataproc_cluster.basic", &cluster), + testAccCheckDataprocClusterExists(t, "google_dataproc_cluster.basic", &cluster), resource.TestCheckResourceAttr("google_dataproc_cluster.basic", "cluster_config.0.gce_cluster_config.0.metadata.foo", "bar"), resource.TestCheckResourceAttr("google_dataproc_cluster.basic", "cluster_config.0.gce_cluster_config.0.metadata.baz", "qux"), @@ -328,17 +327,17 @@ func TestAccDataprocCluster_withMetadataAndTags(t *testing.T) { func TestAccDataprocCluster_singleNodeCluster(t *testing.T) { t.Parallel() - rnd := acctest.RandString(10) + rnd := randString(t, 10) var cluster dataproc.Cluster - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckDataprocClusterDestroy(), + CheckDestroy: testAccCheckDataprocClusterDestroy(t), Steps: []resource.TestStep{ { Config: 
testAccDataprocCluster_singleNodeCluster(rnd), Check: resource.ComposeTestCheckFunc( - testAccCheckDataprocClusterExists("google_dataproc_cluster.single_node_cluster", &cluster), + testAccCheckDataprocClusterExists(t, "google_dataproc_cluster.single_node_cluster", &cluster), resource.TestCheckResourceAttr("google_dataproc_cluster.single_node_cluster", "cluster_config.0.master_config.0.num_instances", "1"), resource.TestCheckResourceAttr("google_dataproc_cluster.single_node_cluster", "cluster_config.0.worker_config.0.num_instances", "0"), @@ -354,22 +353,30 @@ func TestAccDataprocCluster_singleNodeCluster(t *testing.T) { func TestAccDataprocCluster_updatable(t *testing.T) { t.Parallel() - rnd := acctest.RandString(10) + rnd := randString(t, 10) var cluster dataproc.Cluster - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckDataprocClusterDestroy(), + CheckDestroy: testAccCheckDataprocClusterDestroy(t), Steps: []resource.TestStep{ { Config: testAccDataprocCluster_updatable(rnd, 2, 1), Check: resource.ComposeTestCheckFunc( - testAccCheckDataprocClusterExists("google_dataproc_cluster.updatable", &cluster), + testAccCheckDataprocClusterExists(t, "google_dataproc_cluster.updatable", &cluster), resource.TestCheckResourceAttr("google_dataproc_cluster.updatable", "cluster_config.0.master_config.0.num_instances", "1"), resource.TestCheckResourceAttr("google_dataproc_cluster.updatable", "cluster_config.0.worker_config.0.num_instances", "2"), resource.TestCheckResourceAttr("google_dataproc_cluster.updatable", "cluster_config.0.preemptible_worker_config.0.num_instances", "1")), }, + { + Config: testAccDataprocCluster_updatable(rnd, 2, 0), + Check: resource.ComposeTestCheckFunc( + testAccCheckDataprocClusterExists(t, "google_dataproc_cluster.updatable", &cluster), + resource.TestCheckResourceAttr("google_dataproc_cluster.updatable", "cluster_config.0.master_config.0.num_instances", "1"), + resource.TestCheckResourceAttr("google_dataproc_cluster.updatable", "cluster_config.0.worker_config.0.num_instances", "2"), + resource.TestCheckResourceAttr("google_dataproc_cluster.updatable", "cluster_config.0.preemptible_worker_config.0.num_instances", "0")), + }, { Config: testAccDataprocCluster_updatable(rnd, 3, 2), Check: resource.ComposeTestCheckFunc( @@ -384,20 +391,20 @@ func TestAccDataprocCluster_updatable(t *testing.T) { func TestAccDataprocCluster_withStagingBucket(t *testing.T) { t.Parallel() - rnd := acctest.RandString(10) + rnd := randString(t, 10) var cluster dataproc.Cluster - clusterName := fmt.Sprintf("dproc-cluster-test-%s", rnd) + clusterName := fmt.Sprintf("tf-test-dproc-%s", rnd) bucketName := fmt.Sprintf("%s-bucket", clusterName) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckDataprocClusterDestroy(), + CheckDestroy: testAccCheckDataprocClusterDestroy(t), Steps: []resource.TestStep{ { Config: testAccDataprocCluster_withStagingBucketAndCluster(clusterName, bucketName), Check: resource.ComposeTestCheckFunc( - testAccCheckDataprocClusterExists("google_dataproc_cluster.with_bucket", &cluster), + testAccCheckDataprocClusterExists(t, "google_dataproc_cluster.with_bucket", &cluster), resource.TestCheckResourceAttr("google_dataproc_cluster.with_bucket", "cluster_config.0.staging_bucket", bucketName), 
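					// The staging_bucket attribute above reflects the
					// user-supplied bucket; the test also expects Dataproc to
					// adopt it as the cluster's config bucket, hence the
					// matching check on that attribute below.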
resource.TestCheckResourceAttr("google_dataproc_cluster.with_bucket", "cluster_config.0.bucket", bucketName)), }, @@ -406,7 +413,7 @@ func TestAccDataprocCluster_withStagingBucket(t *testing.T) { // but leaving the storage bucket (should not be auto deleted) Config: testAccDataprocCluster_withStagingBucketOnly(bucketName), Check: resource.ComposeTestCheckFunc( - testAccCheckDataprocStagingBucketExists(bucketName), + testAccCheckDataprocStagingBucketExists(t, bucketName), ), }, }, @@ -416,22 +423,22 @@ func TestAccDataprocCluster_withStagingBucket(t *testing.T) { func TestAccDataprocCluster_withInitAction(t *testing.T) { t.Parallel() - rnd := acctest.RandString(10) + rnd := randString(t, 10) var cluster dataproc.Cluster - bucketName := fmt.Sprintf("dproc-cluster-test-%s-init-bucket", rnd) + bucketName := fmt.Sprintf("tf-test-dproc-%s-init-bucket", rnd) objectName := "msg.txt" - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckDataprocClusterDestroy(), + CheckDestroy: testAccCheckDataprocClusterDestroy(t), Steps: []resource.TestStep{ { Config: testAccDataprocCluster_withInitAction(rnd, bucketName, objectName), Check: resource.ComposeTestCheckFunc( - testAccCheckDataprocClusterExists("google_dataproc_cluster.with_init_action", &cluster), + testAccCheckDataprocClusterExists(t, "google_dataproc_cluster.with_init_action", &cluster), resource.TestCheckResourceAttr("google_dataproc_cluster.with_init_action", "cluster_config.0.initialization_action.#", "2"), resource.TestCheckResourceAttr("google_dataproc_cluster.with_init_action", "cluster_config.0.initialization_action.0.timeout_sec", "500"), - testAccCheckDataprocClusterInitActionSucceeded(bucketName, objectName), + testAccCheckDataprocClusterInitActionSucceeded(t, bucketName, objectName), ), }, }, @@ -441,17 +448,17 @@ func TestAccDataprocCluster_withInitAction(t *testing.T) { func TestAccDataprocCluster_withConfigOverrides(t *testing.T) { t.Parallel() - rnd := acctest.RandString(10) + rnd := randString(t, 10) var cluster dataproc.Cluster - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckDataprocClusterDestroy(), + CheckDestroy: testAccCheckDataprocClusterDestroy(t), Steps: []resource.TestStep{ { Config: testAccDataprocCluster_withConfigOverrides(rnd), Check: resource.ComposeTestCheckFunc( - testAccCheckDataprocClusterExists("google_dataproc_cluster.with_config_overrides", &cluster), + testAccCheckDataprocClusterExists(t, "google_dataproc_cluster.with_config_overrides", &cluster), validateDataprocCluster_withConfigOverrides("google_dataproc_cluster.with_config_overrides", &cluster), ), }, @@ -462,22 +469,22 @@ func TestAccDataprocCluster_withConfigOverrides(t *testing.T) { func TestAccDataprocCluster_withServiceAcc(t *testing.T) { t.Parallel() - sa := "a" + acctest.RandString(10) + sa := "a" + randString(t, 10) saEmail := fmt.Sprintf("%s@%s.iam.gserviceaccount.com", sa, getTestProjectFromEnv()) - rnd := acctest.RandString(10) + rnd := randString(t, 10) var cluster dataproc.Cluster - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckDataprocClusterDestroy(), + CheckDestroy: testAccCheckDataprocClusterDestroy(t), Steps: []resource.TestStep{ { Config: testAccDataprocCluster_withServiceAcc(sa, rnd), 
Check: resource.ComposeTestCheckFunc( testAccCheckDataprocClusterExists( - "google_dataproc_cluster.with_service_account", &cluster), + t, "google_dataproc_cluster.with_service_account", &cluster), testAccCheckDataprocClusterHasServiceScopes(t, &cluster, "https://www.googleapis.com/auth/cloud.useraccounts.readonly", "https://www.googleapis.com/auth/devstorage.read_write", @@ -494,17 +501,17 @@ func TestAccDataprocCluster_withServiceAcc(t *testing.T) { func TestAccDataprocCluster_withImageVersion(t *testing.T) { t.Parallel() - rnd := acctest.RandString(10) + rnd := randString(t, 10) var cluster dataproc.Cluster - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckDataprocClusterDestroy(), + CheckDestroy: testAccCheckDataprocClusterDestroy(t), Steps: []resource.TestStep{ { Config: testAccDataprocCluster_withImageVersion(rnd), Check: resource.ComposeTestCheckFunc( - testAccCheckDataprocClusterExists("google_dataproc_cluster.with_image_version", &cluster), + testAccCheckDataprocClusterExists(t, "google_dataproc_cluster.with_image_version", &cluster), resource.TestCheckResourceAttr("google_dataproc_cluster.with_image_version", "cluster_config.0.software_config.0.image_version", "1.3.7-deb9"), ), }, @@ -515,17 +522,17 @@ func TestAccDataprocCluster_withImageVersion(t *testing.T) { func TestAccDataprocCluster_withOptionalComponents(t *testing.T) { t.Parallel() - rnd := acctest.RandString(10) + rnd := randString(t, 10) var cluster dataproc.Cluster - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckDataprocClusterDestroy(), + CheckDestroy: testAccCheckDataprocClusterDestroy(t), Steps: []resource.TestStep{ { Config: testAccDataprocCluster_withOptionalComponents(rnd), Check: resource.ComposeTestCheckFunc( - testAccCheckDataprocClusterExists("google_dataproc_cluster.with_opt_components", &cluster), + testAccCheckDataprocClusterExists(t, "google_dataproc_cluster.with_opt_components", &cluster), testAccCheckDataprocClusterHasOptionalComponents(&cluster, "ANACONDA", "ZOOKEEPER"), ), }, @@ -537,23 +544,23 @@ func TestAccDataprocCluster_withOptionalComponents(t *testing.T) { func TestAccDataprocCluster_withLifecycleConfigIdleDeleteTtl(t *testing.T) { t.Parallel() - rnd := acctest.RandString(10) + rnd := randString(t, 10) var cluster dataproc.Cluster - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckDataprocClusterDestroy(), + CheckDestroy: testAccCheckDataprocClusterDestroy(t), Steps: []resource.TestStep{ { Config: testAccDataprocCluster_withLifecycleConfigIdleDeleteTtl(rnd, "600s"), Check: resource.ComposeTestCheckFunc( - testAccCheckDataprocClusterExists("google_dataproc_cluster.with_lifecycle_config", &cluster), + testAccCheckDataprocClusterExists(t, "google_dataproc_cluster.with_lifecycle_config", &cluster), ), }, { Config: testAccDataprocCluster_withLifecycleConfigIdleDeleteTtl(rnd, "610s"), Check: resource.ComposeTestCheckFunc( - testAccCheckDataprocClusterExists("google_dataproc_cluster.with_lifecycle_config", &cluster), + testAccCheckDataprocClusterExists(t, "google_dataproc_cluster.with_lifecycle_config", &cluster), ), }, }, @@ -563,26 +570,26 @@ func TestAccDataprocCluster_withLifecycleConfigIdleDeleteTtl(t *testing.T) { func 
TestAccDataprocCluster_withLifecycleConfigAutoDeletion(t *testing.T) { t.Parallel() - rnd := acctest.RandString(10) + rnd := randString(t, 10) now := time.Now() fmtString := "2006-01-02T15:04:05.072Z" var cluster dataproc.Cluster - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckDataprocClusterDestroy(), + CheckDestroy: testAccCheckDataprocClusterDestroy(t), Steps: []resource.TestStep{ { Config: testAccDataprocCluster_withLifecycleConfigAutoDeletionTime(rnd, now.Add(time.Hour * 10).Format(fmtString)), Check: resource.ComposeTestCheckFunc( - testAccCheckDataprocClusterExists("google_dataproc_cluster.with_lifecycle_config", &cluster), + testAccCheckDataprocClusterExists(t, "google_dataproc_cluster.with_lifecycle_config", &cluster), ), }, { Config: testAccDataprocCluster_withLifecycleConfigAutoDeletionTime(rnd, now.Add(time.Hour * 20).Format(fmtString)), Check: resource.ComposeTestCheckFunc( - testAccCheckDataprocClusterExists("google_dataproc_cluster.with_lifecycle_config", &cluster), + testAccCheckDataprocClusterExists(t, "google_dataproc_cluster.with_lifecycle_config", &cluster), ), }, }, @@ -593,17 +600,17 @@ func TestAccDataprocCluster_withLifecycleConfigAutoDeletion(t *testing.T) { func TestAccDataprocCluster_withLabels(t *testing.T) { t.Parallel() - rnd := acctest.RandString(10) + rnd := randString(t, 10) var cluster dataproc.Cluster - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckDataprocClusterDestroy(), + CheckDestroy: testAccCheckDataprocClusterDestroy(t), Steps: []resource.TestStep{ { Config: testAccDataprocCluster_withLabels(rnd), Check: resource.ComposeTestCheckFunc( - testAccCheckDataprocClusterExists("google_dataproc_cluster.with_labels", &cluster), + testAccCheckDataprocClusterExists(t, "google_dataproc_cluster.with_labels", &cluster), // We only provide one, but GCP adds three, so expect 4. This means unfortunately a // diff will exist unless the user adds these in. 
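The conversion pattern in these hunks is uniform: resource.Test becomes vcrTest (a wrapper that can record and replay the HTTP traffic behind a test run), acctest.RandString(10) becomes randString(t, 10) so random values are tied to the test for replay, and each check helper gains a *testing.T parameter so it can obtain the per-test provider configuration from googleProviderConfig(t) rather than testAccProvider.Meta().(*Config). A minimal sketch of the resulting shape; the helper names are taken from this diff, while the resource type and exact helper signatures are illustrative assumptions:

package google

import (
	"fmt"
	"testing"

	"github.com/hashicorp/terraform-plugin-sdk/helper/resource"
	"github.com/hashicorp/terraform-plugin-sdk/terraform"
)

// Sketch only. vcrTest, randString, googleProviderConfig, testAccPreCheck and
// testAccProviders are the repo helpers visible in this diff; "google_example"
// is a stand-in resource type.
func TestAccExample_basic(t *testing.T) {
	t.Parallel()

	rnd := randString(t, 10) // seeded through t so VCR replays stay deterministic
	vcrTest(t, resource.TestCase{ // drop-in replacement for resource.Test
		PreCheck:     func() { testAccPreCheck(t) },
		Providers:    testAccProviders,
		CheckDestroy: testAccCheckExampleDestroyProducer(t),
		Steps: []resource.TestStep{
			{Config: fmt.Sprintf(`resource "google_example" "e" { name = "tf-test-%s" }`, rnd)},
		},
	})
}

// "Producer" form: accept *testing.T up front, return the closure the SDK calls.
func testAccCheckExampleDestroyProducer(t *testing.T) func(s *terraform.State) error {
	return func(s *terraform.State) error {
		config := googleProviderConfig(t) // replaces testAccProvider.Meta().(*Config)
		for _, rs := range s.RootModule().Resources {
			if rs.Type != "google_example" {
				continue
			}
			_ = config // a real check would query the API through config's clients
		}
		return nil
	}
}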
An alternative approach would @@ -621,45 +628,70 @@ func TestAccDataprocCluster_withLabels(t *testing.T) { } func TestAccDataprocCluster_withNetworkRefs(t *testing.T) { + // Multiple fine-grained resources + skipIfVcr(t) t.Parallel() var c1, c2 dataproc.Cluster - rnd := acctest.RandString(10) + rnd := randString(t, 10) netName := fmt.Sprintf(`dproc-cluster-test-%s-net`, rnd) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckDataprocClusterDestroy(), + CheckDestroy: testAccCheckDataprocClusterDestroy(t), Steps: []resource.TestStep{ { Config: testAccDataprocCluster_withNetworkRefs(rnd, netName), Check: resource.ComposeTestCheckFunc( // successful creation of the clusters is good enough to assess it worked - testAccCheckDataprocClusterExists("google_dataproc_cluster.with_net_ref_by_url", &c1), - testAccCheckDataprocClusterExists("google_dataproc_cluster.with_net_ref_by_name", &c2), + testAccCheckDataprocClusterExists(t, "google_dataproc_cluster.with_net_ref_by_url", &c1), + testAccCheckDataprocClusterExists(t, "google_dataproc_cluster.with_net_ref_by_name", &c2), ), }, }, }) } +<% unless version == 'ga' -%> +func TestAccDataprocCluster_withEndpointConfig(t *testing.T) { + t.Parallel() + + var cluster dataproc.Cluster + rnd := randString(t, 10) + vcrTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckDataprocClusterDestroy(t), + Steps: []resource.TestStep{ + { + Config: testAccDataprocCluster_withEndpointConfig(rnd), + Check: resource.ComposeTestCheckFunc( + testAccCheckDataprocClusterExists(t, "google_dataproc_cluster.with_endpoint_config", &cluster), + resource.TestCheckResourceAttr("google_dataproc_cluster.with_endpoint_config", "cluster_config.0.endpoint_config.0.enable_http_port_access", "true"), + ), + }, + }, + }) +} +<% end -%> + func TestAccDataprocCluster_KMS(t *testing.T) { t.Parallel() - rnd := acctest.RandString(10) + rnd := randString(t, 10) kms := BootstrapKMSKey(t) pid := getTestProjectFromEnv() var cluster dataproc.Cluster - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckDataprocClusterDestroy(), + CheckDestroy: testAccCheckDataprocClusterDestroy(t), Steps: []resource.TestStep{ { Config: testAccDataprocCluster_KMS(pid, rnd, kms.CryptoKey.Name), Check: resource.ComposeTestCheckFunc( - testAccCheckDataprocClusterExists("google_dataproc_cluster.kms", &cluster), + testAccCheckDataprocClusterExists(t, "google_dataproc_cluster.kms", &cluster), ), }, }, @@ -669,28 +701,28 @@ func TestAccDataprocCluster_KMS(t *testing.T) { func TestAccDataprocCluster_withKerberos(t *testing.T) { t.Parallel() - rnd := acctest.RandString(10) + rnd := randString(t, 10) kms := BootstrapKMSKey(t) var cluster dataproc.Cluster - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckDataprocClusterDestroy(), + CheckDestroy: testAccCheckDataprocClusterDestroy(t), Steps: []resource.TestStep{ { Config: testAccDataprocCluster_withKerberos(rnd, kms.CryptoKey.Name), Check: resource.ComposeTestCheckFunc( - testAccCheckDataprocClusterExists("google_dataproc_cluster.kerb", &cluster), + testAccCheckDataprocClusterExists(t, "google_dataproc_cluster.kerb", &cluster), ), }, }, }) } -func 
testAccCheckDataprocClusterDestroy() resource.TestCheckFunc { +func testAccCheckDataprocClusterDestroy(t *testing.T) resource.TestCheckFunc { return func(s *terraform.State) error { - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) for _, rs := range s.RootModule().Resources { if rs.Type != "google_dataproc_cluster" { @@ -752,10 +784,10 @@ func validateBucketExists(bucket string, config *Config) (bool, error) { return true, nil } -func testAccCheckDataprocStagingBucketExists(bucketName string) resource.TestCheckFunc { +func testAccCheckDataprocStagingBucketExists(t *testing.T, bucketName string) resource.TestCheckFunc { return func(s *terraform.State) error { - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) exists, err := validateBucketExists(bucketName, config) if err != nil { @@ -779,12 +811,12 @@ func testAccCheckDataprocClusterHasOptionalComponents(cluster *dataproc.Cluster, } } -func testAccCheckDataprocClusterInitActionSucceeded(bucket, object string) resource.TestCheckFunc { +func testAccCheckDataprocClusterInitActionSucceeded(t *testing.T, bucket, object string) resource.TestCheckFunc { // The init script will have created an object in the specified bucket. // Ensure it exists return func(s *terraform.State) error { - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) _, err := config.clientStorage.Objects.Get(bucket, object).Do() if err != nil { return fmt.Errorf("Unable to verify init action success: Error reading object %s in bucket %s: %v", object, bucket, err) @@ -850,7 +882,7 @@ func validateDataprocCluster_withConfigOverrides(n string, cluster *dataproc.Clu } } -func testAccCheckDataprocClusterExists(n string, cluster *dataproc.Cluster) resource.TestCheckFunc { +func testAccCheckDataprocClusterExists(t *testing.T, n string, cluster *dataproc.Cluster) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] if !ok { @@ -861,7 +893,7 @@ func testAccCheckDataprocClusterExists(n string, cluster *dataproc.Cluster) reso return fmt.Errorf("No ID is set for Dataproc cluster") } - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) project, err := getTestProject(rs.Primary, config) if err != nil { return err @@ -888,7 +920,7 @@ func testAccCheckDataprocClusterExists(n string, cluster *dataproc.Cluster) reso func testAccCheckDataproc_missingZoneGlobalRegion1(rnd string) string { return fmt.Sprintf(` resource "google_dataproc_cluster" "basic" { - name = "dproc-cluster-test-%s" + name = "tf-test-dproc-%s" region = "global" } `, rnd) @@ -897,7 +929,7 @@ resource "google_dataproc_cluster" "basic" { func testAccCheckDataproc_missingZoneGlobalRegion2(rnd string) string { return fmt.Sprintf(` resource "google_dataproc_cluster" "basic" { - name = "dproc-cluster-test-%s" + name = "tf-test-dproc-%s" region = "global" cluster_config { @@ -912,7 +944,7 @@ resource "google_dataproc_cluster" "basic" { func testAccDataprocCluster_basic(rnd string) string { return fmt.Sprintf(` resource "google_dataproc_cluster" "basic" { - name = "dproc-cluster-test-%s" + name = "tf-test-dproc-%s" region = "us-central1" } `, rnd) @@ -921,7 +953,7 @@ resource "google_dataproc_cluster" "basic" { func testAccDataprocCluster_withAccelerators(rnd, acceleratorType, zone string) string { return fmt.Sprintf(` resource "google_dataproc_cluster" "accelerated_cluster" { - name = "dproc-cluster-test-%s" + name = "tf-test-dproc-%s" region = "us-central1" 
cluster_config { @@ -954,7 +986,7 @@ variable "subnetwork_cidr" { } resource "google_compute_network" "dataproc_network" { - name = "dataproc-internalip-network-%s" + name = "tf-test-dproc-net-%s" auto_create_subnetworks = false } @@ -963,7 +995,7 @@ resource "google_compute_network" "dataproc_network" { # deploying a Dataproc cluster with Internal IP Only enabled. # resource "google_compute_subnetwork" "dataproc_subnetwork" { - name = "dataproc-internalip-subnetwork-%s" + name = "tf-test-dproc-subnet-%s" ip_cidr_range = var.subnetwork_cidr network = google_compute_network.dataproc_network.self_link region = "us-central1" @@ -978,7 +1010,7 @@ resource "google_compute_subnetwork" "dataproc_subnetwork" { # internally as part of their configuration or this will just hang. # resource "google_compute_firewall" "dataproc_network_firewall" { - name = "dproc-cluster-test-allow-internal" + name = "tf-test-dproc-firewall-%s" description = "Firewall rules for dataproc Terraform acceptance testing" network = google_compute_network.dataproc_network.name @@ -1000,7 +1032,7 @@ resource "google_compute_firewall" "dataproc_network_firewall" { } resource "google_dataproc_cluster" "basic" { - name = "dproc-cluster-test-%s" + name = "tf-test-dproc-%s" region = "us-central1" depends_on = [google_compute_firewall.dataproc_network_firewall] @@ -1011,13 +1043,13 @@ resource "google_dataproc_cluster" "basic" { } } } -`, rnd, rnd, rnd) +`, rnd, rnd, rnd, rnd) } func testAccDataprocCluster_withMetadataAndTags(rnd string) string { return fmt.Sprintf(` resource "google_dataproc_cluster" "basic" { - name = "dproc-cluster-test-%s" + name = "tf-test-dproc-%s" region = "us-central1" cluster_config { @@ -1036,7 +1068,7 @@ resource "google_dataproc_cluster" "basic" { func testAccDataprocCluster_singleNodeCluster(rnd string) string { return fmt.Sprintf(` resource "google_dataproc_cluster" "single_node_cluster" { - name = "dproc-cluster-test-%s" + name = "tf-test-dproc-%s" region = "us-central1" cluster_config { @@ -1054,7 +1086,7 @@ resource "google_dataproc_cluster" "single_node_cluster" { func testAccDataprocCluster_withConfigOverrides(rnd string) string { return fmt.Sprintf(` resource "google_dataproc_cluster" "with_config_overrides" { - name = "dproc-cluster-test-%s" + name = "tf-test-dproc-%s" region = "us-central1" cluster_config { @@ -1112,7 +1144,7 @@ EOL } resource "google_dataproc_cluster" "with_init_action" { - name = "dproc-cluster-test-%s" + name = "tf-test-dproc-%s" region = "us-central1" cluster_config { @@ -1145,7 +1177,7 @@ resource "google_dataproc_cluster" "with_init_action" { func testAccDataprocCluster_updatable(rnd string, w, p int) string { return fmt.Sprintf(` resource "google_dataproc_cluster" "updatable" { - name = "dproc-cluster-test-%s" + name = "tf-test-dproc-%s" region = "us-central1" cluster_config { @@ -1217,7 +1249,7 @@ resource "google_dataproc_cluster" "with_bucket" { func testAccDataprocCluster_withLabels(rnd string) string { return fmt.Sprintf(` resource "google_dataproc_cluster" "with_labels" { - name = "dproc-cluster-test-%s" + name = "tf-test-dproc-%s" region = "us-central1" labels = { @@ -1233,10 +1265,27 @@ resource "google_dataproc_cluster" "with_labels" { `, rnd) } +<% unless version == 'ga' -%> +func testAccDataprocCluster_withEndpointConfig(rnd string) string { + return fmt.Sprintf(` +resource "google_dataproc_cluster" "with_endpoint_config" { + name = "tf-test-%s" + region = "us-central1" + + cluster_config { + endpoint_config { + enable_http_port_access = "true" + } + } +} +`, 
rnd) +} +<% end -%> + func testAccDataprocCluster_withImageVersion(rnd string) string { return fmt.Sprintf(` resource "google_dataproc_cluster" "with_image_version" { - name = "dproc-cluster-test-%s" + name = "tf-test-dproc-%s" region = "us-central1" cluster_config { @@ -1251,7 +1300,7 @@ resource "google_dataproc_cluster" "with_image_version" { func testAccDataprocCluster_withOptionalComponents(rnd string) string { return fmt.Sprintf(` resource "google_dataproc_cluster" "with_opt_components" { - name = "dproc-cluster-test-%s" + name = "tf-test-dproc-%s" region = "us-central1" cluster_config { @@ -1267,7 +1316,7 @@ resource "google_dataproc_cluster" "with_opt_components" { func testAccDataprocCluster_withLifecycleConfigIdleDeleteTtl(rnd, tm string) string { return fmt.Sprintf(` resource "google_dataproc_cluster" "with_lifecycle_config" { - name = "dproc-cluster-test-%s" + name = "tf-test-dproc-%s" region = "us-central1" cluster_config { @@ -1282,7 +1331,7 @@ resource "google_dataproc_cluster" "with_lifecycle_config" { func testAccDataprocCluster_withLifecycleConfigAutoDeletionTime(rnd, tm string) string { return fmt.Sprintf(` resource "google_dataproc_cluster" "with_lifecycle_config" { - name = "dproc-cluster-test-%s" + name = "tf-test-dproc-%s" region = "us-central1" cluster_config { @@ -1362,7 +1411,7 @@ resource "google_compute_network" "dataproc_network" { # internally as part of their configuration or this will just hang. # resource "google_compute_firewall" "dataproc_network_firewall" { - name = "dproc-cluster-test-%s-allow-internal" + name = "tf-test-dproc-%s" description = "Firewall rules for dataproc Terraform acceptance testing" network = google_compute_network.dataproc_network.name source_ranges = ["192.168.0.0/16"] @@ -1383,7 +1432,7 @@ resource "google_compute_firewall" "dataproc_network_firewall" { } resource "google_dataproc_cluster" "with_net_ref_by_name" { - name = "dproc-cluster-test-%s-name" + name = "tf-test-dproc-net-%s" region = "us-central1" depends_on = [google_compute_firewall.dataproc_network_firewall] @@ -1409,7 +1458,7 @@ resource "google_dataproc_cluster" "with_net_ref_by_name" { } resource "google_dataproc_cluster" "with_net_ref_by_url" { - name = "dproc-cluster-test-%s-url" + name = "tf-test-dproc-url-%s" region = "us-central1" depends_on = [google_compute_firewall.dataproc_network_firewall] @@ -1451,7 +1500,7 @@ resource "google_project_iam_member" "kms-project-binding" { resource "google_dataproc_cluster" "kms" { depends_on = [google_project_iam_member.kms-project-binding] - name = "dproc-cluster-test-%s" + name = "tf-test-dproc-%s" region = "us-central1" cluster_config { @@ -1466,7 +1515,7 @@ resource "google_dataproc_cluster" "kms" { func testAccDataprocCluster_withKerberos(rnd, kmsKey string) string { return fmt.Sprintf(` resource "google_storage_bucket" "bucket" { - name = "dproc-cluster-test-%s" + name = "tf-test-dproc-%s" } resource "google_storage_bucket_object" "password" { name = "dataproc-password-%s" @@ -1475,7 +1524,7 @@ resource "google_storage_bucket_object" "password" { } resource "google_dataproc_cluster" "kerb" { - name = "dproc-cluster-test-%s" + name = "tf-test-dproc-%s" region = "us-central1" cluster_config { diff --git a/third_party/terraform/tests/resource_dataproc_job_iam_test.go b/third_party/terraform/tests/resource_dataproc_job_iam_test.go index 98821d272ea7..2b085a8ccbf1 100644 --- a/third_party/terraform/tests/resource_dataproc_job_iam_test.go +++ b/third_party/terraform/tests/resource_dataproc_job_iam_test.go @@ -4,22 +4,21 @@ 
import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" ) func TestAccDataprocJobIamBinding(t *testing.T) { t.Parallel() - cluster := "tf-dataproc-iam-cluster" + acctest.RandString(10) - job := "tf-dataproc-iam-job-" + acctest.RandString(10) - account := "tf-dataproc-iam-" + acctest.RandString(10) + cluster := "tf-dataproc-iam-cluster" + randString(t, 10) + job := "tf-dataproc-iam-job-" + randString(t, 10) + account := "tf-dataproc-iam-" + randString(t, 10) role := "roles/editor" importId := fmt.Sprintf("projects/%s/regions/%s/jobs/%s %s", getTestProjectFromEnv(), "us-central1", job, role) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -50,9 +49,9 @@ func TestAccDataprocJobIamBinding(t *testing.T) { func TestAccDataprocJobIamMember(t *testing.T) { t.Parallel() - cluster := "tf-dataproc-iam-cluster" + acctest.RandString(10) - job := "tf-dataproc-iam-jobid-" + acctest.RandString(10) - account := "tf-dataproc-iam-" + acctest.RandString(10) + cluster := "tf-dataproc-iam-cluster" + randString(t, 10) + job := "tf-dataproc-iam-jobid-" + randString(t, 10) + account := "tf-dataproc-iam-" + randString(t, 10) role := "roles/editor" importId := fmt.Sprintf("projects/%s/regions/%s/jobs/%s %s serviceAccount:%s", @@ -62,7 +61,7 @@ func TestAccDataprocJobIamMember(t *testing.T) { role, serviceAccountCanonicalEmail(account)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -83,15 +82,15 @@ func TestAccDataprocJobIamMember(t *testing.T) { func TestAccDataprocJobIamPolicy(t *testing.T) { t.Parallel() - cluster := "tf-dataproc-iam-cluster" + acctest.RandString(10) - job := "tf-dataproc-iam-jobid-" + acctest.RandString(10) - account := "tf-dataproc-iam-" + acctest.RandString(10) + cluster := "tf-dataproc-iam-cluster" + randString(t, 10) + job := "tf-dataproc-iam-jobid-" + randString(t, 10) + account := "tf-dataproc-iam-" + randString(t, 10) role := "roles/editor" importId := fmt.Sprintf("projects/%s/regions/%s/jobs/%s", getTestProjectFromEnv(), "us-central1", job) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ diff --git a/third_party/terraform/tests/resource_dataproc_job_test.go b/third_party/terraform/tests/resource_dataproc_job_test.go index 4fdb85cf0811..94d134ffc4c3 100644 --- a/third_party/terraform/tests/resource_dataproc_job_test.go +++ b/third_party/terraform/tests/resource_dataproc_job_test.go @@ -6,11 +6,11 @@ import ( "log" "strings" "testing" + "time" // "regexp" "github.com/hashicorp/errwrap" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" "google.golang.org/api/dataproc/v1" @@ -26,10 +26,10 @@ type jobTestField struct { // func TestAccDataprocJob_failForMissingJobConfig(t *testing.T) { // t.Parallel() -// resource.Test(t, resource.TestCase{ +// vcrTest(t, resource.TestCase{ // PreCheck: func() { testAccPreCheck(t) }, // Providers: testAccProviders, -// CheckDestroy: testAccCheckDataprocJobDestroy, +// CheckDestroy: testAccCheckDataprocJobDestroyProducer(t), // Steps: []resource.TestStep{ // { // Config: 
testAccDataprocJob_missingJobConf(), @@ -43,24 +43,24 @@ func TestAccDataprocJob_updatable(t *testing.T) { t.Parallel() var job dataproc.Job - rnd := acctest.RandString(10) + rnd := randString(t, 10) jobId := fmt.Sprintf("dproc-update-job-id-%s", rnd) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckDataprocJobDestroy, + CheckDestroy: testAccCheckDataprocJobDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccDataprocJob_updatable(rnd, jobId, "false"), Check: resource.ComposeTestCheckFunc( - testAccCheckDataprocJobExists("google_dataproc_job.updatable", &job), + testAccCheckDataprocJobExists(t, "google_dataproc_job.updatable", &job), resource.TestCheckResourceAttr("google_dataproc_job.updatable", "force_delete", "false"), ), }, { Config: testAccDataprocJob_updatable(rnd, jobId, "true"), Check: resource.ComposeTestCheckFunc( - testAccCheckDataprocJobExists("google_dataproc_job.updatable", &job), + testAccCheckDataprocJobExists(t, "google_dataproc_job.updatable", &job), resource.TestCheckResourceAttr("google_dataproc_job.updatable", "force_delete", "true"), ), }, @@ -72,18 +72,18 @@ func TestAccDataprocJob_PySpark(t *testing.T) { t.Parallel() var job dataproc.Job - rnd := acctest.RandString(10) + rnd := randString(t, 10) jobId := fmt.Sprintf("dproc-custom-job-id-%s", rnd) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckDataprocJobDestroy, + CheckDestroy: testAccCheckDataprocJobDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccDataprocJob_pySpark(rnd), Check: resource.ComposeTestCheckFunc( - testAccCheckDataprocJobExists("google_dataproc_job.pyspark", &job), + testAccCheckDataprocJobExists(t, "google_dataproc_job.pyspark", &job), // Custom supplied job_id resource.TestCheckResourceAttr("google_dataproc_job.pyspark", "reference.0.job_id", jobId), @@ -99,7 +99,7 @@ func TestAccDataprocJob_PySpark(t *testing.T) { "google_dataproc_job.pyspark", "pyspark_config", &job), // Wait until job completes successfully - testAccCheckDataprocJobCompletesSuccessfully("google_dataproc_job.pyspark", &job), + testAccCheckDataprocJobCompletesSuccessfully(t, "google_dataproc_job.pyspark", &job), ), }, }, @@ -110,16 +110,16 @@ func TestAccDataprocJob_Spark(t *testing.T) { t.Parallel() var job dataproc.Job - rnd := acctest.RandString(10) - resource.Test(t, resource.TestCase{ + rnd := randString(t, 10) + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckDataprocJobDestroy, + CheckDestroy: testAccCheckDataprocJobDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccDataprocJob_spark(rnd), Check: resource.ComposeTestCheckFunc( - testAccCheckDataprocJobExists("google_dataproc_job.spark", &job), + testAccCheckDataprocJobExists(t, "google_dataproc_job.spark", &job), // Autogenerated / computed values resource.TestCheckResourceAttrSet("google_dataproc_job.spark", "reference.0.job_id"), @@ -131,7 +131,7 @@ func TestAccDataprocJob_Spark(t *testing.T) { "google_dataproc_job.spark", "spark_config", &job), // Wait until job completes successfully - testAccCheckDataprocJobCompletesSuccessfully("google_dataproc_job.spark", &job), + testAccCheckDataprocJobCompletesSuccessfully(t, "google_dataproc_job.spark", &job), ), }, }, @@ -142,16 +142,16 @@ func TestAccDataprocJob_Hadoop(t 
*testing.T) { t.Parallel() var job dataproc.Job - rnd := acctest.RandString(10) - resource.Test(t, resource.TestCase{ + rnd := randString(t, 10) + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckDataprocJobDestroy, + CheckDestroy: testAccCheckDataprocJobDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccDataprocJob_hadoop(rnd), Check: resource.ComposeTestCheckFunc( - testAccCheckDataprocJobExists("google_dataproc_job.hadoop", &job), + testAccCheckDataprocJobExists(t, "google_dataproc_job.hadoop", &job), // Autogenerated / computed values resource.TestCheckResourceAttrSet("google_dataproc_job.hadoop", "reference.0.job_id"), @@ -163,7 +163,7 @@ func TestAccDataprocJob_Hadoop(t *testing.T) { "google_dataproc_job.hadoop", "hadoop_config", &job), // Wait until job completes successfully - testAccCheckDataprocJobCompletesSuccessfully("google_dataproc_job.hadoop", &job), + testAccCheckDataprocJobCompletesSuccessfully(t, "google_dataproc_job.hadoop", &job), ), }, }, @@ -174,16 +174,16 @@ func TestAccDataprocJob_Hive(t *testing.T) { t.Parallel() var job dataproc.Job - rnd := acctest.RandString(10) - resource.Test(t, resource.TestCase{ + rnd := randString(t, 10) + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckDataprocJobDestroy, + CheckDestroy: testAccCheckDataprocJobDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccDataprocJob_hive(rnd), Check: resource.ComposeTestCheckFunc( - testAccCheckDataprocJobExists("google_dataproc_job.hive", &job), + testAccCheckDataprocJobExists(t, "google_dataproc_job.hive", &job), // Autogenerated / computed values resource.TestCheckResourceAttrSet("google_dataproc_job.hive", "reference.0.job_id"), @@ -195,7 +195,7 @@ func TestAccDataprocJob_Hive(t *testing.T) { "google_dataproc_job.hive", "hive_config", &job), // Wait until job completes successfully - testAccCheckDataprocJobCompletesSuccessfully("google_dataproc_job.hive", &job), + testAccCheckDataprocJobCompletesSuccessfully(t, "google_dataproc_job.hive", &job), ), }, }, @@ -206,16 +206,16 @@ func TestAccDataprocJob_Pig(t *testing.T) { t.Parallel() var job dataproc.Job - rnd := acctest.RandString(10) - resource.Test(t, resource.TestCase{ + rnd := randString(t, 10) + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckDataprocJobDestroy, + CheckDestroy: testAccCheckDataprocJobDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccDataprocJob_pig(rnd), Check: resource.ComposeTestCheckFunc( - testAccCheckDataprocJobExists("google_dataproc_job.pig", &job), + testAccCheckDataprocJobExists(t, "google_dataproc_job.pig", &job), // Autogenerated / computed values resource.TestCheckResourceAttrSet("google_dataproc_job.pig", "reference.0.job_id"), @@ -227,7 +227,7 @@ func TestAccDataprocJob_Pig(t *testing.T) { "google_dataproc_job.pig", "pig_config", &job), // Wait until job completes successfully - testAccCheckDataprocJobCompletesSuccessfully("google_dataproc_job.pig", &job), + testAccCheckDataprocJobCompletesSuccessfully(t, "google_dataproc_job.pig", &job), ), }, }, @@ -238,16 +238,16 @@ func TestAccDataprocJob_SparkSql(t *testing.T) { t.Parallel() var job dataproc.Job - rnd := acctest.RandString(10) - resource.Test(t, resource.TestCase{ + rnd := randString(t, 10) + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: 
testAccProviders, - CheckDestroy: testAccCheckDataprocJobDestroy, + CheckDestroy: testAccCheckDataprocJobDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccDataprocJob_sparksql(rnd), Check: resource.ComposeTestCheckFunc( - testAccCheckDataprocJobExists("google_dataproc_job.sparksql", &job), + testAccCheckDataprocJobExists(t, "google_dataproc_job.sparksql", &job), // Autogenerated / computed values resource.TestCheckResourceAttrSet("google_dataproc_job.sparksql", "reference.0.job_id"), @@ -259,52 +259,54 @@ func TestAccDataprocJob_SparkSql(t *testing.T) { "google_dataproc_job.sparksql", "sparksql_config", &job), // Wait until job completes successfully - testAccCheckDataprocJobCompletesSuccessfully("google_dataproc_job.sparksql", &job), + testAccCheckDataprocJobCompletesSuccessfully(t, "google_dataproc_job.sparksql", &job), ), }, }, }) } -func testAccCheckDataprocJobDestroy(s *terraform.State) error { - config := testAccProvider.Meta().(*Config) +func testAccCheckDataprocJobDestroyProducer(t *testing.T) func(s *terraform.State) error { + return func(s *terraform.State) error { + config := googleProviderConfig(t) - for _, rs := range s.RootModule().Resources { - if rs.Type != "google_dataproc_job" { - continue - } + for _, rs := range s.RootModule().Resources { + if rs.Type != "google_dataproc_job" { + continue + } - if rs.Primary.ID == "" { - return fmt.Errorf("Unable to verify delete of dataproc job ID is empty") - } - attributes := rs.Primary.Attributes + if rs.Primary.ID == "" { + return fmt.Errorf("Unable to verify delete of dataproc job ID is empty") + } + attributes := rs.Primary.Attributes - project, err := getTestProject(rs.Primary, config) - if err != nil { - return err - } + project, err := getTestProject(rs.Primary, config) + if err != nil { + return err + } - parts := strings.Split(rs.Primary.ID, "/") - job_id := parts[len(parts)-1] - _, err = config.clientDataproc.Projects.Regions.Jobs.Get( - project, attributes["region"], job_id).Do() - if err != nil { - if gerr, ok := err.(*googleapi.Error); ok && gerr.Code == 404 { - return nil - } else if ok { - return fmt.Errorf("Error making GCP platform call: http code error : %d, http message error: %s", gerr.Code, gerr.Message) + parts := strings.Split(rs.Primary.ID, "/") + job_id := parts[len(parts)-1] + _, err = config.clientDataproc.Projects.Regions.Jobs.Get( + project, attributes["region"], job_id).Do() + if err != nil { + if gerr, ok := err.(*googleapi.Error); ok && gerr.Code == 404 { + return nil + } else if ok { + return fmt.Errorf("Error making GCP platform call: http code error : %d, http message error: %s", gerr.Code, gerr.Message) + } + return fmt.Errorf("Error making GCP platform call: %s", err.Error()) } - return fmt.Errorf("Error making GCP platform call: %s", err.Error()) + return fmt.Errorf("Dataproc job still exists") } - return fmt.Errorf("Dataproc job still exists") - } - return nil + return nil + } } -func testAccCheckDataprocJobCompletesSuccessfully(n string, job *dataproc.Job) resource.TestCheckFunc { +func testAccCheckDataprocJobCompletesSuccessfully(t *testing.T, n string, job *dataproc.Job) resource.TestCheckFunc { return func(s *terraform.State) error { - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) attributes := s.RootModule().Resources[n].Primary.Attributes region := attributes["region"] @@ -313,9 +315,9 @@ func testAccCheckDataprocJobCompletesSuccessfully(n string, job *dataproc.Job) r return err } - jobCompleteTimeoutMins := 5 + jobCompleteTimeoutMins := 5 * 
time.Minute waitErr := dataprocJobOperationWait(config, region, project, job.Reference.JobId, - "Awaiting Dataproc job completion", jobCompleteTimeoutMins, 1) + "Awaiting Dataproc job completion", jobCompleteTimeoutMins) if waitErr != nil { return waitErr } @@ -358,7 +360,7 @@ func testAccCheckDataprocJobCompletesSuccessfully(n string, job *dataproc.Job) r } } -func testAccCheckDataprocJobExists(n string, job *dataproc.Job) resource.TestCheckFunc { +func testAccCheckDataprocJobExists(t *testing.T, n string, job *dataproc.Job) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] if !ok { @@ -369,7 +371,7 @@ func testAccCheckDataprocJobExists(n string, job *dataproc.Job) resource.TestChe return fmt.Errorf("No ID is set for Dataproc job") } - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) parts := strings.Split(s.RootModule().Resources[n].Primary.ID, "/") jobId := parts[len(parts)-1] project, err := getTestProject(s.RootModule().Resources[n].Primary, config) diff --git a/third_party/terraform/tests/resource_deployment_manager_deployment_test.go b/third_party/terraform/tests/resource_deployment_manager_deployment_test.go index a2a84cc558d3..225ae878ba99 100644 --- a/third_party/terraform/tests/resource_deployment_manager_deployment_test.go +++ b/third_party/terraform/tests/resource_deployment_manager_deployment_test.go @@ -3,7 +3,6 @@ package google import ( "bytes" "fmt" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" "io/ioutil" @@ -15,19 +14,19 @@ import ( func TestAccDeploymentManagerDeployment_basicFile(t *testing.T) { t.Parallel() - randSuffix := acctest.RandString(10) + randSuffix := randString(t, 10) deploymentId := "tf-dm-" + randSuffix accountId := "tf-dm-account-" + randSuffix yamlPath := createYamlConfigFileForTest(t, "test-fixtures/deploymentmanager/service_account.yml.tmpl", map[string]interface{}{ "account_id": accountId, }) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: resource.ComposeTestCheckFunc( - testAccCheckDeploymentManagerDeploymentDestroy, - testDeploymentManagerDeploymentVerifyServiceAccountMissing(accountId)), + testAccCheckDeploymentManagerDeploymentDestroyProducer(t), + testDeploymentManagerDeploymentVerifyServiceAccountMissing(t, accountId)), Steps: []resource.TestStep{ { Config: testAccDeploymentManagerDeployment_basicFile(deploymentId, yamlPath), @@ -45,14 +44,14 @@ func TestAccDeploymentManagerDeployment_basicFile(t *testing.T) { func TestAccDeploymentManagerDeployment_deleteInvalidOnCreate(t *testing.T) { t.Parallel() - randStr := acctest.RandString(10) + randStr := randString(t, 10) deploymentName := "tf-dm-" + randStr accountId := "tf-dm-" + randStr - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckDeploymentManagerDestroyInvalidDeployment(deploymentName), + CheckDestroy: testAccCheckDeploymentManagerDestroyInvalidDeployment(t, deploymentName), Steps: []resource.TestStep{ { Config: testAccDeploymentManagerDeployment_invalidCreatePolicy(deploymentName, accountId), @@ -65,14 +64,14 @@ func TestAccDeploymentManagerDeployment_deleteInvalidOnCreate(t *testing.T) { func TestAccDeploymentManagerDeployment_createDeletePolicy(t *testing.T) 
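A small behavioral change rides along in resource_dataproc_job_test.go above: the "time" import is added and dataprocJobOperationWait now takes a single time.Duration timeout (jobCompleteTimeoutMins becomes 5 * time.Minute) in place of an int count of minutes plus a trailing interval argument, making units explicit at the call site. A sketch of that style of API; the helper below is illustrative, not the provider's actual implementation:

package main

import (
	"errors"
	"fmt"
	"time"
)

// Illustrative only: a poll loop whose timeout is a time.Duration, in the
// style the diff moves to (5*time.Minute) rather than an int minute count
// plus a separate interval argument.
func waitForJob(done func() bool, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if done() {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out awaiting Dataproc job completion")
		}
		time.Sleep(10 * time.Second)
	}
}

func main() {
	// Completes on the first poll, so this returns nil immediately.
	fmt.Println(waitForJob(func() bool { return true }, 5*time.Minute))
}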
{ t.Parallel() - randStr := acctest.RandString(10) + randStr := randString(t, 10) deploymentName := "tf-dm-" + randStr accountId := "tf-dm-" + randStr - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckDeploymentManagerDeploymentDestroy, + CheckDestroy: testAccCheckDeploymentManagerDeploymentDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccDeploymentManagerDeployment_createDeletePolicy(deploymentName, accountId), @@ -90,23 +89,23 @@ func TestAccDeploymentManagerDeployment_createDeletePolicy(t *testing.T) { func TestAccDeploymentManagerDeployment_imports(t *testing.T) { t.Parallel() - randStr := acctest.RandString(10) + randStr := randString(t, 10) deploymentName := "tf-dm-" + randStr accountId := "tf-dm-" + randStr importFilepath := createYamlConfigFileForTest(t, "test-fixtures/deploymentmanager/service_account.yml.tmpl", map[string]interface{}{ "account_id": "{{ env['name'] }}", }) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: resource.ComposeTestCheckFunc( - testAccCheckDeploymentManagerDeploymentDestroy, - testDeploymentManagerDeploymentVerifyServiceAccountMissing(accountId)), + testAccCheckDeploymentManagerDeploymentDestroyProducer(t), + testDeploymentManagerDeploymentVerifyServiceAccountMissing(t, accountId)), Steps: []resource.TestStep{ { Config: testAccDeploymentManagerDeployment_imports(deploymentName, accountId, importFilepath), - Check: testDeploymentManagerDeploymentVerifyServiceAccountExists(accountId), + Check: testDeploymentManagerDeploymentVerifyServiceAccountExists(t, accountId), }, { ResourceName: "google_deployment_manager_deployment.deployment", @@ -121,21 +120,21 @@ func TestAccDeploymentManagerDeployment_imports(t *testing.T) { func TestAccDeploymentManagerDeployment_update(t *testing.T) { t.Parallel() - randStr := acctest.RandString(10) + randStr := randString(t, 10) deploymentName := "tf-dm-" + randStr accountId := "tf-dm-first" + randStr accountId2 := "tf-dm-second" + randStr - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: resource.ComposeTestCheckFunc( - testAccCheckDeploymentManagerDeploymentDestroy, - testDeploymentManagerDeploymentVerifyServiceAccountMissing(accountId)), + testAccCheckDeploymentManagerDeploymentDestroyProducer(t), + testDeploymentManagerDeploymentVerifyServiceAccountMissing(t, accountId)), Steps: []resource.TestStep{ { Config: testAccDeploymentManagerDeployment_preview(deploymentName, accountId), - Check: testDeploymentManagerDeploymentVerifyServiceAccountMissing(accountId), + Check: testDeploymentManagerDeploymentVerifyServiceAccountMissing(t, accountId), }, { ResourceName: "google_deployment_manager_deployment.deployment", @@ -145,7 +144,7 @@ func TestAccDeploymentManagerDeployment_update(t *testing.T) { }, { Config: testAccDeploymentManagerDeployment_previewUpdated(deploymentName, accountId2), - Check: testDeploymentManagerDeploymentVerifyServiceAccountMissing(accountId2), + Check: testDeploymentManagerDeploymentVerifyServiceAccountMissing(t, accountId2), }, { ResourceName: "google_deployment_manager_deployment.deployment", @@ -156,7 +155,7 @@ func TestAccDeploymentManagerDeployment_update(t *testing.T) { { // Turn preview to false Config: testAccDeploymentManagerDeployment_deployed(deploymentName, 
accountId), - Check: testDeploymentManagerDeploymentVerifyServiceAccountExists(accountId), + Check: testDeploymentManagerDeploymentVerifyServiceAccountExists(t, accountId), }, { ResourceName: "google_deployment_manager_deployment.deployment", @@ -351,9 +350,9 @@ EOF `, deployment, accountId, accountId) } -func testDeploymentManagerDeploymentVerifyServiceAccountMissing(accountId string) resource.TestCheckFunc { +func testDeploymentManagerDeploymentVerifyServiceAccountMissing(t *testing.T, accountId string) resource.TestCheckFunc { return func(s *terraform.State) error { - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) exists, err := testCheckDeploymentServiceAccountExists(accountId, config) if err != nil { return err @@ -365,9 +364,9 @@ func testDeploymentManagerDeploymentVerifyServiceAccountMissing(accountId string } } -func testDeploymentManagerDeploymentVerifyServiceAccountExists(accountId string) resource.TestCheckFunc { +func testDeploymentManagerDeploymentVerifyServiceAccountExists(t *testing.T, accountId string) resource.TestCheckFunc { return func(s *terraform.State) error { - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) exists, err := testCheckDeploymentServiceAccountExists(accountId, config) if err != nil { return err @@ -391,7 +390,7 @@ func testCheckDeploymentServiceAccountExists(accountId string, config *Config) ( return true, nil } -func testAccCheckDeploymentManagerDestroyInvalidDeployment(deploymentName string) resource.TestCheckFunc { +func testAccCheckDeploymentManagerDestroyInvalidDeployment(t *testing.T, deploymentName string) resource.TestCheckFunc { return func(s *terraform.State) error { for name, rs := range s.RootModule().Resources { if rs.Type == "google_deployment_manager_deployment" { @@ -399,7 +398,7 @@ func testAccCheckDeploymentManagerDestroyInvalidDeployment(deploymentName string } } - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) url := fmt.Sprintf("%sprojects/%s/global/deployments/%s", config.DeploymentManagerBasePath, getTestProjectFromEnv(), deploymentName) _, err := sendRequest(config, "GET", "", url, nil) if !isGoogleApiErrorWithCode(err, 404) { @@ -412,29 +411,31 @@ func testAccCheckDeploymentManagerDestroyInvalidDeployment(deploymentName string } } -func testAccCheckDeploymentManagerDeploymentDestroy(s *terraform.State) error { - for name, rs := range s.RootModule().Resources { - if rs.Type != "google_deployment_manager_deployment" { - continue - } - if strings.HasPrefix(name, "data.") { - continue - } +func testAccCheckDeploymentManagerDeploymentDestroyProducer(t *testing.T) func(s *terraform.State) error { + return func(s *terraform.State) error { + for name, rs := range s.RootModule().Resources { + if rs.Type != "google_deployment_manager_deployment" { + continue + } + if strings.HasPrefix(name, "data.") { + continue + } - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) - url, err := replaceVarsForTest(config, rs, "{{DeploymentManagerBasePath}}projects/{{project}}/global/deployments/{{name}}") - if err != nil { - return err - } + url, err := replaceVarsForTest(config, rs, "{{DeploymentManagerBasePath}}projects/{{project}}/global/deployments/{{name}}") + if err != nil { + return err + } - _, err = sendRequest(config, "GET", "", url, nil) - if err == nil { - return fmt.Errorf("DeploymentManagerDeployment still exists at %s", url) + _, err = sendRequest(config, "GET", "", url, nil) + if err == nil { + return 
fmt.Errorf("DeploymentManagerDeployment still exists at %s", url) + } } - } - return nil + return nil + } } func createYamlConfigFileForTest(t *testing.T, sourcePath string, context map[string]interface{}) string { diff --git a/third_party/terraform/tests/resource_dialogflow_agent_test.go.erb b/third_party/terraform/tests/resource_dialogflow_agent_test.go.erb index c17a668c7b9c..a075ee4085fd 100644 --- a/third_party/terraform/tests/resource_dialogflow_agent_test.go.erb +++ b/third_party/terraform/tests/resource_dialogflow_agent_test.go.erb @@ -4,7 +4,6 @@ package google import ( "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" ) @@ -14,10 +13,10 @@ func TestAccDialogflowAgent_update(t *testing.T) { context := map[string]interface{}{ "org_id": getTestOrgFromEnv(t), "billing_account": getTestBillingAccountFromEnv(t), - "random_suffix": acctest.RandString(10), + "random_suffix": randString(t, 10), } - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -116,7 +115,7 @@ func testAccDialogflowAgent_full2(context map[string]interface{}) string { display_name = "tf-test-%{random_suffix}update" default_language_code = "en" supported_language_codes = ["no"] - time_zone = "America/New_York" + time_zone = "Europe/London" description = "Description 2!" avatar_uri = "https://storage.cloud.google.com/dialogflow-test-host-image/cloud-logo-2.png" enable_logging = false diff --git a/third_party/terraform/tests/resource_dialogflow_entity_type_test.go.erb b/third_party/terraform/tests/resource_dialogflow_entity_type_test.go.erb new file mode 100644 index 000000000000..ba6b65060252 --- /dev/null +++ b/third_party/terraform/tests/resource_dialogflow_entity_type_test.go.erb @@ -0,0 +1,141 @@ +<% autogen_exception -%> +package google + +import ( + "fmt" + "testing" + + "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" + "github.com/hashicorp/terraform-plugin-sdk/helper/resource" + "github.com/hashicorp/terraform-plugin-sdk/terraform" +) +func TestAccDialogflowEntityType_update(t *testing.T) { + t.Parallel() + + context := map[string]interface{}{ + "org_id": getTestOrgFromEnv(t), + "billing_account": getTestBillingAccountFromEnv(t), + "random_suffix": randString(t, 10), + } + + vcrTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + { + Config: testAccDialogflowEntityType_full1(context), + }, + { + ResourceName: "google_dialogflow_entity_type.foobar", + ImportState: true, + ImportStateVerify: true, + }, + { + Config: testAccDialogflowEntityType_full2(context), + }, + { + ResourceName: "google_dialogflow_entity_type.foobar", + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func testAccDialogflowEntityType_full1(context map[string]interface{}) string { + return Nprintf(` + resource "google_project" "agent_project" { + name = "tf-test-dialogflow-%{random_suffix}" + project_id = "tf-test-dialogflow-%{random_suffix}" + org_id = "%{org_id}" + billing_account = "%{billing_account}" + } + + resource "google_project_service" "agent_project" { + project = google_project.agent_project.project_id + service = "dialogflow.googleapis.com" + disable_dependent_services = false + } + + resource "google_service_account" "dialogflow_service_account" { + account_id = "tf-test-dialogflow-%{random_suffix}" + } + + resource 
"google_project_iam_member" "agent_create" { + project = google_project_service.agent_project.project + role = "roles/dialogflow.admin" + member = "serviceAccount:${google_service_account.dialogflow_service_account.email}" + } + + resource "google_dialogflow_agent" "agent" { + project = google_project.agent_project.project_id + display_name = "tf-test-agent-%{random_suffix}" + default_language_code = "en" + time_zone = "America/New_York" + depends_on = [google_project_iam_member.agent_create] + } + + resource "google_dialogflow_entity_type" "foobar" { + depends_on = [google_dialogflow_agent.agent] + project = google_project.agent_project.project_id + display_name = "tf-test-entity-%{random_suffix}" + kind = "KIND_MAP" + enable_fuzzy_extraction = true + entities { + value = "value1" + synonyms = ["synonym1","synonym2"] + } + entities { + value = "value2" + synonyms = ["synonym3","synonym4"] + } + } + `, context) +} + +func testAccDialogflowEntityType_full2(context map[string]interface{}) string { + return Nprintf(` + resource "google_project" "agent_project" { + name = "tf-test-dialogflow-%{random_suffix}" + project_id = "tf-test-dialogflow-%{random_suffix}" + org_id = "%{org_id}" + billing_account = "%{billing_account}" + } + + resource "google_project_service" "agent_project" { + project = google_project.agent_project.project_id + service = "dialogflow.googleapis.com" + disable_dependent_services = false + } + + resource "google_service_account" "dialogflow_service_account" { + account_id = "tf-test-dialogflow-%{random_suffix}" + } + + resource "google_project_iam_member" "agent_create" { + project = google_project_service.agent_project.project + role = "roles/dialogflow.admin" + member = "serviceAccount:${google_service_account.dialogflow_service_account.email}" + } + + resource "google_dialogflow_agent" "agent" { + project = google_project.agent_project.project_id + display_name = "tf-test-agent-%{random_suffix}" + default_language_code = "en" + time_zone = "America/New_York" + depends_on = [google_project_iam_member.agent_create] + } + + resource "google_dialogflow_entity_type" "foobar" { + depends_on = [google_dialogflow_agent.agent] + project = google_project.agent_project.project_id + display_name = "tf-test-entity2-%{random_suffix}" + kind = "KIND_LIST" + enable_fuzzy_extraction = false + entities { + value = "value1" + synonyms = ["value1"] + } + } + `, context) +} \ No newline at end of file diff --git a/third_party/terraform/tests/resource_dialogflow_intent_test.go.erb b/third_party/terraform/tests/resource_dialogflow_intent_test.go.erb index ac9335051703..561854bff0ea 100644 --- a/third_party/terraform/tests/resource_dialogflow_intent_test.go.erb +++ b/third_party/terraform/tests/resource_dialogflow_intent_test.go.erb @@ -5,7 +5,6 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" ) @@ -15,10 +14,10 @@ func TestAccDialogflowIntent_basic(t *testing.T) { context := map[string]interface{}{ "org_id": getTestOrgFromEnv(t), "billing_account": getTestBillingAccountFromEnv(t), - "random_suffix": acctest.RandString(10), + "random_suffix": randString(t, 10), } - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -40,10 +39,10 @@ func TestAccDialogflowIntent_update(t *testing.T) { context := map[string]interface{}{ 
"org_id": getTestOrgFromEnv(t), "billing_account": getTestBillingAccountFromEnv(t), - "random_suffix": acctest.RandString(10), + "random_suffix": randString(t, 10), } - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ diff --git a/third_party/terraform/tests/resource_dns_managed_zone_test.go.erb b/third_party/terraform/tests/resource_dns_managed_zone_test.go.erb index 02f9e58a1db7..c2f88f4ddf64 100644 --- a/third_party/terraform/tests/resource_dns_managed_zone_test.go.erb +++ b/third_party/terraform/tests/resource_dns_managed_zone_test.go.erb @@ -5,19 +5,18 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" ) func TestAccDNSManagedZone_update(t *testing.T) { t.Parallel() - zoneSuffix := acctest.RandString(10) + zoneSuffix := randString(t, 10) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckDNSManagedZoneDestroy, + CheckDestroy: testAccCheckDNSManagedZoneDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccDnsManagedZone_basic(zoneSuffix, "description1"), @@ -42,12 +41,12 @@ func TestAccDNSManagedZone_update(t *testing.T) { func TestAccDNSManagedZone_privateUpdate(t *testing.T) { t.Parallel() - zoneSuffix := acctest.RandString(10) + zoneSuffix := randString(t, 10) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckDNSManagedZoneDestroy, + CheckDestroy: testAccCheckDNSManagedZoneDestroyProducer(t), Steps: []resource.TestStep{ resource.TestStep{ Config: testAccDnsManagedZone_privateUpdate(zoneSuffix, "network-1", "network-2"), @@ -72,12 +71,12 @@ func TestAccDNSManagedZone_privateUpdate(t *testing.T) { func TestAccDNSManagedZone_dnssec_update(t *testing.T) { t.Parallel() - zoneSuffix := acctest.RandString(10) + zoneSuffix := randString(t, 10) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckDNSManagedZoneDestroy, + CheckDestroy: testAccCheckDNSManagedZoneDestroyProducer(t), Steps: []resource.TestStep{ resource.TestStep{ Config: testAccDnsManagedZone_dnssec_on(zoneSuffix), @@ -102,12 +101,12 @@ func TestAccDNSManagedZone_dnssec_update(t *testing.T) { func TestAccDNSManagedZone_dnssec_empty(t *testing.T) { t.Parallel() - zoneSuffix := acctest.RandString(10) + zoneSuffix := randString(t, 10) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckDNSManagedZoneDestroy, + CheckDestroy: testAccCheckDNSManagedZoneDestroyProducer(t), Steps: []resource.TestStep{ resource.TestStep{ Config: testAccDnsManagedZone_dnssec_empty(zoneSuffix), @@ -121,16 +120,15 @@ func TestAccDNSManagedZone_dnssec_empty(t *testing.T) { }) } -<% unless version.nil? 
|| version == 'ga' -%> func TestAccDNSManagedZone_privateForwardingUpdate(t *testing.T) { t.Parallel() - zoneSuffix := acctest.RandString(10) + zoneSuffix := randString(t, 10) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckDNSManagedZoneDestroy, + CheckDestroy: testAccCheckDNSManagedZoneDestroyProducer(t), Steps: []resource.TestStep{ resource.TestStep{ Config: testAccDnsManagedZone_privateForwardingUpdate(zoneSuffix, "172.16.1.10", "172.16.1.20", "default", "private"), @@ -151,18 +149,17 @@ func TestAccDNSManagedZone_privateForwardingUpdate(t *testing.T) { }, }) } -<% end -%> <% unless version.nil? || version == 'ga' -%> func TestAccDNSManagedZone_reverseLookup(t *testing.T) { t.Parallel() - zoneSuffix := acctest.RandString(10) + zoneSuffix := randString(t, 10) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckDNSManagedZoneDestroy, + CheckDestroy: testAccCheckDNSManagedZoneDestroyProducer(t), Steps: []resource.TestStep{ resource.TestStep{ Config: testAccDnsManagedZone_reverseLookup(zoneSuffix), @@ -273,12 +270,12 @@ resource "google_dns_managed_zone" "private" { } resource "google_compute_network" "network-1" { - name = "network-1-%s" + name = "tf-test-net-1-%s" auto_create_subnetworks = false } resource "google_compute_network" "network-2" { - name = "network-2-%s" + name = "tf-test-net-2-%s" auto_create_subnetworks = false } @@ -289,7 +286,6 @@ resource "google_compute_network" "network-3" { `, suffix, first_network, second_network, suffix, suffix, suffix) } -<% unless version.nil? || version == 'ga' -%> func testAccDnsManagedZone_privateForwardingUpdate(suffix, first_nameserver, second_nameserver, first_forwarding_path, second_forwarding_path string) string { return fmt.Sprintf(` resource "google_dns_managed_zone" "private" { @@ -316,12 +312,11 @@ resource "google_dns_managed_zone" "private" { } resource "google_compute_network" "network-1" { - name = "network-1-%s" + name = "tf-test-net-1-%s" auto_create_subnetworks = false } `, suffix, first_nameserver, first_forwarding_path, second_nameserver, second_forwarding_path, suffix) } -<% end -%> <% unless version.nil? 
|| version == 'ga' -%> func testAccDnsManagedZone_reverseLookup(suffix string) string { @@ -336,7 +331,7 @@ resource "google_dns_managed_zone" "reverse" { } resource "google_compute_network" "network-1" { - name = "network-1-%s" + name = "tf-test-net-1-%s" auto_create_subnetworks = false } `, suffix, suffix) @@ -423,13 +418,13 @@ func TestDnsManagedZoneImport_parseImportId(t *testing.T) { func TestAccDNSManagedZone_importWithProject(t *testing.T) { t.Parallel() - zoneSuffix := acctest.RandString(10) + zoneSuffix := randString(t, 10) project := getTestProjectFromEnv() - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckDNSManagedZoneDestroy, + CheckDestroy: testAccCheckDNSManagedZoneDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccDnsManagedZone_basicWithProject(zoneSuffix, "description1", project), diff --git a/third_party/terraform/tests/resource_dns_policy_test.go.erb b/third_party/terraform/tests/resource_dns_policy_test.go.erb index 519788a416d5..55c2a9e81bd8 100644 --- a/third_party/terraform/tests/resource_dns_policy_test.go.erb +++ b/third_party/terraform/tests/resource_dns_policy_test.go.erb @@ -5,20 +5,18 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" ) -<% unless version.nil? || version == 'ga' -%> func TestAccDNSPolicy_update(t *testing.T) { t.Parallel() - policySuffix := acctest.RandString(10) + policySuffix := randString(t, 10) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckDNSPolicyDestroy, + CheckDestroy: testAccCheckDNSPolicyDestroyProducer(t), Steps: []resource.TestStep{ resource.TestStep{ Config: testAccDnsPolicy_privateUpdate(policySuffix, "true", "172.16.1.10", "network-1"), @@ -68,4 +66,3 @@ resource "google_compute_network" "network-2" { } `, suffix, forwarding, nameserver, network, suffix, suffix) } -<% end -%> diff --git a/third_party/terraform/tests/resource_dns_record_set_test.go b/third_party/terraform/tests/resource_dns_record_set_test.go index dab0d63916f2..c3cb10319ad7 100644 --- a/third_party/terraform/tests/resource_dns_record_set_test.go +++ b/third_party/terraform/tests/resource_dns_record_set_test.go @@ -4,7 +4,6 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" ) @@ -37,17 +36,17 @@ func TestIpv6AddressDiffSuppress(t *testing.T) { func TestAccDNSRecordSet_basic(t *testing.T) { t.Parallel() - zoneName := fmt.Sprintf("dnszone-test-%s", acctest.RandString(10)) - resource.Test(t, resource.TestCase{ + zoneName := fmt.Sprintf("dnszone-test-%s", randString(t, 10)) + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckDnsRecordSetDestroy, + CheckDestroy: testAccCheckDnsRecordSetDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccDnsRecordSet_basic(zoneName, "127.0.0.10", 300), Check: resource.ComposeTestCheckFunc( testAccCheckDnsRecordSetExists( - "google_dns_record_set.foobar", zoneName), + t, "google_dns_record_set.foobar", zoneName), ), }, { @@ -70,31 +69,31 @@ func TestAccDNSRecordSet_basic(t *testing.T) { func TestAccDNSRecordSet_modify(t *testing.T) { t.Parallel() - 
zoneName := fmt.Sprintf("dnszone-test-%s", acctest.RandString(10)) - resource.Test(t, resource.TestCase{ + zoneName := fmt.Sprintf("dnszone-test-%s", randString(t, 10)) + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckDnsRecordSetDestroy, + CheckDestroy: testAccCheckDnsRecordSetDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccDnsRecordSet_basic(zoneName, "127.0.0.10", 300), Check: resource.ComposeTestCheckFunc( testAccCheckDnsRecordSetExists( - "google_dns_record_set.foobar", zoneName), + t, "google_dns_record_set.foobar", zoneName), ), }, { Config: testAccDnsRecordSet_basic(zoneName, "127.0.0.11", 300), Check: resource.ComposeTestCheckFunc( testAccCheckDnsRecordSetExists( - "google_dns_record_set.foobar", zoneName), + t, "google_dns_record_set.foobar", zoneName), ), }, { Config: testAccDnsRecordSet_basic(zoneName, "127.0.0.11", 600), Check: resource.ComposeTestCheckFunc( testAccCheckDnsRecordSetExists( - "google_dns_record_set.foobar", zoneName), + t, "google_dns_record_set.foobar", zoneName), ), }, }, @@ -104,24 +103,24 @@ func TestAccDNSRecordSet_modify(t *testing.T) { func TestAccDNSRecordSet_changeType(t *testing.T) { t.Parallel() - zoneName := fmt.Sprintf("dnszone-test-%s", acctest.RandString(10)) - resource.Test(t, resource.TestCase{ + zoneName := fmt.Sprintf("dnszone-test-%s", randString(t, 10)) + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckDnsRecordSetDestroy, + CheckDestroy: testAccCheckDnsRecordSetDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccDnsRecordSet_basic(zoneName, "127.0.0.10", 300), Check: resource.ComposeTestCheckFunc( testAccCheckDnsRecordSetExists( - "google_dns_record_set.foobar", zoneName), + t, "google_dns_record_set.foobar", zoneName), ), }, { Config: testAccDnsRecordSet_bigChange(zoneName, 600), Check: resource.ComposeTestCheckFunc( testAccCheckDnsRecordSetExists( - "google_dns_record_set.foobar", zoneName), + t, "google_dns_record_set.foobar", zoneName), ), }, }, @@ -131,17 +130,17 @@ func TestAccDNSRecordSet_changeType(t *testing.T) { func TestAccDNSRecordSet_ns(t *testing.T) { t.Parallel() - zoneName := fmt.Sprintf("dnszone-test-ns-%s", acctest.RandString(10)) - resource.Test(t, resource.TestCase{ + zoneName := fmt.Sprintf("dnszone-test-ns-%s", randString(t, 10)) + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckDnsRecordSetDestroy, + CheckDestroy: testAccCheckDnsRecordSetDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccDnsRecordSet_ns(zoneName, 300), Check: resource.ComposeTestCheckFunc( testAccCheckDnsRecordSetExists( - "google_dns_record_set.foobar", zoneName), + t, "google_dns_record_set.foobar", zoneName), ), }, { @@ -157,17 +156,17 @@ func TestAccDNSRecordSet_ns(t *testing.T) { func TestAccDNSRecordSet_nestedNS(t *testing.T) { t.Parallel() - zoneName := fmt.Sprintf("dnszone-test-ns-%s", acctest.RandString(10)) - resource.Test(t, resource.TestCase{ + zoneName := fmt.Sprintf("dnszone-test-ns-%s", randString(t, 10)) + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckDnsRecordSetDestroy, + CheckDestroy: testAccCheckDnsRecordSetDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccDnsRecordSet_nestedNS(zoneName, 300), Check: resource.ComposeTestCheckFunc( 
testAccCheckDnsRecordSetExists( - "google_dns_record_set.foobar", zoneName), + t, "google_dns_record_set.foobar", zoneName), ), }, }, @@ -177,17 +176,17 @@ func TestAccDNSRecordSet_nestedNS(t *testing.T) { func TestAccDNSRecordSet_quotedTXT(t *testing.T) { t.Parallel() - zoneName := fmt.Sprintf("dnszone-test-txt-%s", acctest.RandString(10)) - resource.Test(t, resource.TestCase{ + zoneName := fmt.Sprintf("dnszone-test-txt-%s", randString(t, 10)) + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckDnsRecordSetDestroy, + CheckDestroy: testAccCheckDnsRecordSetDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccDnsRecordSet_quotedTXT(zoneName, 300), Check: resource.ComposeTestCheckFunc( testAccCheckDnsRecordSetExists( - "google_dns_record_set.foobar", zoneName), + t, "google_dns_record_set.foobar", zoneName), ), }, }, @@ -197,41 +196,43 @@ func TestAccDNSRecordSet_quotedTXT(t *testing.T) { func TestAccDNSRecordSet_uppercaseMX(t *testing.T) { t.Parallel() - zoneName := fmt.Sprintf("dnszone-test-txt-%s", acctest.RandString(10)) - resource.Test(t, resource.TestCase{ + zoneName := fmt.Sprintf("dnszone-test-txt-%s", randString(t, 10)) + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckDnsRecordSetDestroy, + CheckDestroy: testAccCheckDnsRecordSetDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccDnsRecordSet_uppercaseMX(zoneName, 300), Check: resource.ComposeTestCheckFunc( testAccCheckDnsRecordSetExists( - "google_dns_record_set.foobar", zoneName), + t, "google_dns_record_set.foobar", zoneName), ), }, }, }) } -func testAccCheckDnsRecordSetDestroy(s *terraform.State) error { - config := testAccProvider.Meta().(*Config) - - for _, rs := range s.RootModule().Resources { - // Deletion of the managed_zone implies everything is gone - if rs.Type == "google_dns_managed_zone" { - _, err := config.clientDns.ManagedZones.Get( - config.Project, rs.Primary.ID).Do() - if err == nil { - return fmt.Errorf("DNS ManagedZone still exists") +func testAccCheckDnsRecordSetDestroyProducer(t *testing.T) func(s *terraform.State) error { + return func(s *terraform.State) error { + config := googleProviderConfig(t) + + for _, rs := range s.RootModule().Resources { + // Deletion of the managed_zone implies everything is gone + if rs.Type == "google_dns_managed_zone" { + _, err := config.clientDns.ManagedZones.Get( + config.Project, rs.Primary.ID).Do() + if err == nil { + return fmt.Errorf("DNS ManagedZone still exists") + } } } - } - return nil + return nil + } } -func testAccCheckDnsRecordSetExists(resourceType, resourceName string) resource.TestCheckFunc { +func testAccCheckDnsRecordSetExists(t *testing.T, resourceType, resourceName string) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[resourceType] if !ok { @@ -245,7 +246,7 @@ func testAccCheckDnsRecordSetExists(resourceType, resourceName string) resource. 
return fmt.Errorf("No ID is set") } - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) resp, err := config.clientDns.ResourceRecordSets.List( config.Project, resourceName).Name(dnsName).Type(dnsType).Do() diff --git a/third_party/terraform/tests/resource_endpoints_service_test.go b/third_party/terraform/tests/resource_endpoints_service_test.go index 05c1ee3f916b..aa7d12413214 100644 --- a/third_party/terraform/tests/resource_endpoints_service_test.go +++ b/third_party/terraform/tests/resource_endpoints_service_test.go @@ -7,23 +7,30 @@ import ( "fmt" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" ) func TestAccEndpointsService_basic(t *testing.T) { t.Parallel() - serviceId := "tf-test" + acctest.RandString(10) + serviceId := "tf-test" + randString(t, 10) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, - CheckDestroy: testAccCheckEndpointServiceDestroy, + CheckDestroy: testAccCheckEndpointServiceDestroyProducer(t), Providers: testAccProviders, Steps: []resource.TestStep{ { - Config: testAccEndpointsService_basic(serviceId, getTestProjectFromEnv()), - Check: testAccCheckEndpointExistsByName(serviceId), + Config: testAccEndpointsService_basic(serviceId, getTestProjectFromEnv(), "1"), + Check: testAccCheckEndpointExistsByName(t, serviceId), + }, + { + Config: testAccEndpointsService_basic(serviceId, getTestProjectFromEnv(), "2"), + Check: testAccCheckEndpointExistsByName(t, serviceId), + }, + { + Config: testAccEndpointsService_basic(serviceId, getTestProjectFromEnv(), "3"), + Check: testAccCheckEndpointExistsByName(t, serviceId), }, }, }) @@ -31,16 +38,16 @@ func TestAccEndpointsService_basic(t *testing.T) { func TestAccEndpointsService_grpc(t *testing.T) { t.Parallel() - serviceId := "tf-test" + acctest.RandString(10) + serviceId := "tf-test" + randString(t, 10) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckEndpointServiceDestroy, + CheckDestroy: testAccCheckEndpointServiceDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccEndpointsService_grpc(serviceId, getTestProjectFromEnv()), - Check: testAccCheckEndpointExistsByName(serviceId), + Check: testAccCheckEndpointExistsByName(t, serviceId), }, }, }) @@ -98,7 +105,7 @@ func TestEndpointsService_grpcMigrateState(t *testing.T) { } } -func testAccEndpointsService_basic(serviceId, project string) string { +func testAccEndpointsService_basic(serviceId, project, rev string) string { return fmt.Sprintf(` resource "google_endpoints_service" "endpoints_service" { service_name = "%[1]s.endpoints.%[2]s.cloud.goog" @@ -107,7 +114,7 @@ resource "google_endpoints_service" "endpoints_service" { swagger: "2.0" info: description: "A simple Google Cloud Endpoints API example." - title: "Endpoints Example" + title: "Endpoints Example, rev. 
%[3]s" version: "1.0.0" host: "%[1]s.endpoints.%[2]s.cloud.goog" basePath: "/" @@ -146,7 +153,14 @@ definitions: EOF } -`, serviceId, project) + +resource "random_id" "foo" { + keepers = { + config_id = google_endpoints_service.endpoints_service.config_id + } + byte_length = 8 +} +`, serviceId, project, rev) } func testAccEndpointsService_grpc(serviceId, project string) string { @@ -169,9 +183,9 @@ EOF `, serviceId, project) } -func testAccCheckEndpointExistsByName(serviceId string) resource.TestCheckFunc { +func testAccCheckEndpointExistsByName(t *testing.T, serviceId string) resource.TestCheckFunc { return func(s *terraform.State) error { - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) service, err := config.clientServiceMan.Services.GetConfig( fmt.Sprintf("%s.endpoints.%s.cloud.goog", serviceId, config.Project)).Do() if err != nil { @@ -185,28 +199,30 @@ func testAccCheckEndpointExistsByName(serviceId string) resource.TestCheckFunc { } } -func testAccCheckEndpointServiceDestroy(s *terraform.State) error { - config := testAccProvider.Meta().(*Config) +func testAccCheckEndpointServiceDestroyProducer(t *testing.T) func(s *terraform.State) error { + return func(s *terraform.State) error { + config := googleProviderConfig(t) - for name, rs := range s.RootModule().Resources { - if strings.HasPrefix(name, "data.") { - continue - } - if rs.Type != "google_endpoints_service" { - continue - } + for name, rs := range s.RootModule().Resources { + if strings.HasPrefix(name, "data.") { + continue + } + if rs.Type != "google_endpoints_service" { + continue + } - serviceName := rs.Primary.Attributes["service_name"] - service, err := config.clientServiceMan.Services.GetConfig(serviceName).Do() - if err != nil { - // ServiceManagement returns 403 if service doesn't exist. - if !isGoogleApiErrorWithCode(err, 403) { - return err + serviceName := rs.Primary.Attributes["service_name"] + service, err := config.clientServiceMan.Services.GetConfig(serviceName).Do() + if err != nil { + // ServiceManagement returns 403 if service doesn't exist. 
+ if !isGoogleApiErrorWithCode(err, 403) { + return err + } + } + if service != nil { + return fmt.Errorf("expected service %q to have been destroyed, got %+v", service.Name, service) } } - if service != nil { - return fmt.Errorf("expected service %q to have been destroyed, got %+v", service.Name, service) - } + return nil } - return nil } diff --git a/third_party/terraform/tests/resource_filestore_instance_test.go.erb b/third_party/terraform/tests/resource_filestore_instance_test.go.erb index fd34c5e20dc5..1084558ebd07 100644 --- a/third_party/terraform/tests/resource_filestore_instance_test.go.erb +++ b/third_party/terraform/tests/resource_filestore_instance_test.go.erb @@ -7,7 +7,6 @@ import ( "strings" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" ) @@ -15,12 +14,12 @@ import ( func TestAccFilestoreInstance_update(t *testing.T) { t.Parallel() - name := acctest.RandomWithPrefix("tf-test") + name := fmt.Sprintf("tf-test-%d", randInt(t)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckFilestoreInstanceDestroy, + CheckDestroy: testAccCheckFilestoreInstanceDestroyProducer(t), Steps: []resource.TestStep{ resource.TestStep{ Config: testAccFilestoreInstance_update(name), diff --git a/third_party/terraform/tests/resource_firebase_web_app_test.go.erb b/third_party/terraform/tests/resource_firebase_web_app_test.go.erb new file mode 100644 index 000000000000..5b9f3b1dfb34 --- /dev/null +++ b/third_party/terraform/tests/resource_firebase_web_app_test.go.erb @@ -0,0 +1,71 @@ +<% autogen_exception -%> +package google +<% unless version == 'ga' -%> + +import ( + "testing" + + "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" + "github.com/hashicorp/terraform-plugin-sdk/helper/resource" +) + +func TestAccFirebaseWebApp_firebaseWebAppFull(t *testing.T) { + t.Parallel() + + context := map[string]interface{}{ + "org_id": getTestOrgFromEnv(t), + "random_suffix": randString(t, 10), + "display_name": "Display Name N", + } + + vcrTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProvidersOiCS, + Steps: []resource.TestStep{ + { + Config: testAccFirebaseWebApp_firebaseWebAppFull(context, ""), + }, + { + Config: testAccFirebaseWebApp_firebaseWebAppFull(context, "2"), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttrSet("data.google_firebase_web_app_config.default", "api_key"), + resource.TestCheckResourceAttrSet("data.google_firebase_web_app_config.default", "database_url"), + resource.TestCheckResourceAttrSet("data.google_firebase_web_app_config.default", "auth_domain"), + resource.TestCheckResourceAttrSet("data.google_firebase_web_app_config.default", "storage_bucket"), + ), + }, + }, + }) +} + +func testAccFirebaseWebApp_firebaseWebAppFull(context map[string]interface{}, update string) string { + context["display_name"] = context["display_name"].(string) + update + return Nprintf(` +resource "google_project" "default" { + provider = google-beta + + project_id = "tf-test%{random_suffix}" + name = "tf-test%{random_suffix}" + org_id = "%{org_id}" +} + +resource "google_firebase_project" "default" { + provider = google-beta + project = google_project.default.project_id +} + +resource "google_firebase_web_app" "default" { + provider = google-beta + project = 
google_project.default.project_id + display_name = "%{display_name} %{random_suffix}" + + depends_on = [google_firebase_project.default] +} + +data "google_firebase_web_app_config" "default" { + provider = google-beta + web_app_id = google_firebase_web_app.default.app_id +} +`, context) +} +<% end -%> diff --git a/third_party/terraform/tests/resource_google_billing_account_iam_test.go b/third_party/terraform/tests/resource_google_billing_account_iam_test.go index 2cfec392cd80..22710263b804 100644 --- a/third_party/terraform/tests/resource_google_billing_account_iam_test.go +++ b/third_party/terraform/tests/resource_google_billing_account_iam_test.go @@ -6,25 +6,26 @@ import ( "sort" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" ) func TestAccBillingAccountIam(t *testing.T) { + // Deletes two fine-grained resources in same step + skipIfVcr(t) t.Parallel() billing := getTestBillingAccountFromEnv(t) - account := acctest.RandomWithPrefix("tf-test") + account := fmt.Sprintf("tf-test-%d", randInt(t)) role := "roles/billing.viewer" - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ { // Test Iam Binding creation Config: testAccBillingAccountIamBinding_basic(account, billing, role), - Check: testAccCheckGoogleBillingAccountIamBindingExists("foo", role, []string{ + Check: testAccCheckGoogleBillingAccountIamBindingExists(t, "foo", role, []string{ fmt.Sprintf("serviceAccount:%s@%s.iam.gserviceaccount.com", account, getTestProjectFromEnv()), }), }, @@ -37,7 +38,7 @@ func TestAccBillingAccountIam(t *testing.T) { { // Test Iam Binding update Config: testAccBillingAccountIamBinding_update(account, billing, role), - Check: testAccCheckGoogleBillingAccountIamBindingExists("foo", role, []string{ + Check: testAccCheckGoogleBillingAccountIamBindingExists(t, "foo", role, []string{ fmt.Sprintf("serviceAccount:%s@%s.iam.gserviceaccount.com", account, getTestProjectFromEnv()), fmt.Sprintf("serviceAccount:%s-2@%s.iam.gserviceaccount.com", account, getTestProjectFromEnv()), }), @@ -51,7 +52,7 @@ func TestAccBillingAccountIam(t *testing.T) { { // Test Iam Member creation (no update for member, no need to test) Config: testAccBillingAccountIamMember_basic(account, billing, role), - Check: testAccCheckGoogleBillingAccountIamMemberExists("foo", "roles/billing.viewer", + Check: testAccCheckGoogleBillingAccountIamMemberExists(t, "foo", "roles/billing.viewer", fmt.Sprintf("serviceAccount:%s@%s.iam.gserviceaccount.com", account, getTestProjectFromEnv()), ), }, @@ -65,14 +66,14 @@ func TestAccBillingAccountIam(t *testing.T) { }) } -func testAccCheckGoogleBillingAccountIamBindingExists(bindingResourceName, role string, members []string) resource.TestCheckFunc { +func testAccCheckGoogleBillingAccountIamBindingExists(t *testing.T, bindingResourceName, role string, members []string) resource.TestCheckFunc { return func(s *terraform.State) error { bindingRs, ok := s.RootModule().Resources["google_billing_account_iam_binding."+bindingResourceName] if !ok { return fmt.Errorf("Not found: %s", bindingResourceName) } - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) p, err := config.clientBilling.BillingAccounts.GetIamPolicy("billingAccounts/" + bindingRs.Primary.Attributes["billing_account_id"]).Do() if err != nil { return err @@ -95,14 +96,14 @@ func 
testAccCheckGoogleBillingAccountIamBindingExists(bindingResourceName, role } } -func testAccCheckGoogleBillingAccountIamMemberExists(n, role, member string) resource.TestCheckFunc { +func testAccCheckGoogleBillingAccountIamMemberExists(t *testing.T, n, role, member string) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources["google_billing_account_iam_member."+n] if !ok { return fmt.Errorf("Not found: %s", n) } - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) p, err := config.clientBilling.BillingAccounts.GetIamPolicy("billingAccounts/" + rs.Primary.Attributes["billing_account_id"]).Do() if err != nil { return err diff --git a/third_party/terraform/tests/resource_google_folder_iam_audit_config_test.go b/third_party/terraform/tests/resource_google_folder_iam_audit_config_test.go new file mode 100644 index 000000000000..a1f142aff6c3 --- /dev/null +++ b/third_party/terraform/tests/resource_google_folder_iam_audit_config_test.go @@ -0,0 +1,400 @@ +package google + +import ( + "fmt" + "strings" + "testing" + + "github.com/hashicorp/terraform-plugin-sdk/helper/resource" +) + +// Test that an IAM audit config can be applied to a folder +func TestAccFolderIamAuditConfig_basic(t *testing.T) { + t.Parallel() + + org := getTestOrgFromEnv(t) + fname := "terraform-" + randString(t, 10) + service := "cloudkms.googleapis.com" + vcrTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + // Create a new folder + { + Config: testAccFolderIamBasic(org, fname), + Check: resource.ComposeTestCheckFunc( + testAccFolderExistingPolicy(t, org, fname), + ), + }, + // Apply an IAM audit config + { + Config: testAccFolderAssociateAuditConfigBasic(org, fname, service), + }, + }, + }) +} + +// Test that multiple IAM audit configs can be applied to a folder, one at a time +func TestAccFolderIamAuditConfig_multiple(t *testing.T) { + t.Parallel() + + org := getTestOrgFromEnv(t) + fname := "terraform-" + randString(t, 10) + service := "cloudkms.googleapis.com" + service2 := "cloudsql.googleapis.com" + + vcrTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + // Create a new folder + { + Config: testAccFolderIamBasic(org, fname), + Check: resource.ComposeTestCheckFunc( + testAccFolderExistingPolicy(t, org, fname), + ), + }, + // Apply an IAM audit config + { + Config: testAccFolderAssociateAuditConfigBasic(org, fname, service), + }, + // Apply another IAM audit config + { + Config: testAccFolderAssociateAuditConfigMultiple(org, fname, service, service2), + }, + }, + }) +} + +// Test that multiple IAM audit configs can be applied to a folder all at once +func TestAccFolderIamAuditConfig_multipleAtOnce(t *testing.T) { + // Multiple fine-grained resources + skipIfVcr(t) + t.Parallel() + + org := getTestOrgFromEnv(t) + fname := "terraform-" + randString(t, 10) + service := "cloudkms.googleapis.com" + service2 := "cloudsql.googleapis.com" + + vcrTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + // Create a new folder + { + Config: testAccFolderIamBasic(org, fname), + Check: resource.ComposeTestCheckFunc( + testAccFolderExistingPolicy(t, org, fname), + ), + }, + // Apply an IAM audit config + { + Config: testAccFolderAssociateAuditConfigMultiple(org, fname, service, service2), + }, + }, + }) +} + +// Test that an IAM 
audit config can be updated once applied to a folder +func TestAccFolderIamAuditConfig_update(t *testing.T) { + t.Parallel() + + org := getTestOrgFromEnv(t) + fname := "terraform-" + randString(t, 10) + service := "cloudkms.googleapis.com" + + vcrTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + // Create a new folder + { + Config: testAccFolderIamBasic(org, fname), + Check: resource.ComposeTestCheckFunc( + testAccFolderExistingPolicy(t, org, fname), + ), + }, + // Apply an IAM audit config + { + Config: testAccFolderAssociateAuditConfigBasic(org, fname, service), + }, + // Apply an updated IAM audit config + { + Config: testAccFolderAssociateAuditConfigUpdated(org, fname, service), + }, + // Drop the original member + { + Config: testAccFolderAssociateAuditConfigDropMemberFromBasic(org, fname, service), + }, + }, + }) +} + +// Test that an IAM audit config can be removed from a folder +func TestAccFolderIamAuditConfig_remove(t *testing.T) { + // Multiple fine-grained resources + skipIfVcr(t) + t.Parallel() + + org := getTestOrgFromEnv(t) + fname := "terraform-" + randString(t, 10) + service := "cloudkms.googleapis.com" + service2 := "cloudsql.googleapis.com" + + vcrTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + // Create a new folder + { + Config: testAccFolderIamBasic(org, fname), + Check: resource.ComposeTestCheckFunc( + testAccFolderExistingPolicy(t, org, fname), + ), + }, + // Apply multiple IAM audit configs + { + Config: testAccFolderAssociateAuditConfigMultiple(org, fname, service, service2), + }, + // Remove the audit configs + { + Config: testAccFolderIamBasic(org, fname), + Check: resource.ComposeTestCheckFunc( + testAccFolderExistingPolicy(t, org, fname), + ), + }, + }, + }) +} + +// Test adding exempt first exempt member +func TestAccFolderIamAuditConfig_addFirstExemptMember(t *testing.T) { + t.Parallel() + + org := getTestOrgFromEnv(t) + fname := "terraform-" + randString(t, 10) + service := "cloudkms.googleapis.com" + members := []string{} + members2 := []string{"user:paddy@hashicorp.com"} + + vcrTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + // Create a new folder + { + Config: testAccFolderIamBasic(org, fname), + Check: resource.ComposeTestCheckFunc( + testAccFolderExistingPolicy(t, org, fname), + ), + }, + // Apply IAM audit config with no members + { + Config: testAccFolderAssociateAuditConfigMembers(org, fname, service, members), + }, + // Apply IAM audit config with one member + { + Config: testAccFolderAssociateAuditConfigMembers(org, fname, service, members2), + }, + }, + }) +} + +// test removing last exempt member +func TestAccFolderIamAuditConfig_removeLastExemptMember(t *testing.T) { + t.Parallel() + + org := getTestOrgFromEnv(t) + fname := "terraform-" + randString(t, 10) + service := "cloudkms.googleapis.com" + members2 := []string{} + members := []string{"user:paddy@hashicorp.com"} + + vcrTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + // Create a new folder + { + Config: testAccFolderIamBasic(org, fname), + Check: resource.ComposeTestCheckFunc( + testAccFolderExistingPolicy(t, org, fname), + ), + }, + // Apply IAM audit config with member + { + Config: testAccFolderAssociateAuditConfigMembers(org, fname, service, members), + 
}, + // Apply IAM audit config with no members + { + Config: testAccFolderAssociateAuditConfigMembers(org, fname, service, members2), + }, + }, + }) +} + +// test changing log type with no exempt members +func TestAccFolderIamAuditConfig_updateNoExemptMembers(t *testing.T) { + t.Parallel() + + org := getTestOrgFromEnv(t) + fname := "terraform-" + randString(t, 10) + logType := "DATA_READ" + logType2 := "DATA_WRITE" + service := "cloudkms.googleapis.com" + + vcrTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + // Create a new folder + { + Config: testAccFolderIamBasic(org, fname), + Check: resource.ComposeTestCheckFunc( + testAccFolderExistingPolicy(t, org, fname), + ), + }, + // Apply IAM audit config with DATA_READ + { + Config: testAccFolderAssociateAuditConfigLogType(org, fname, service, logType), + }, + // Apply IAM audit config with DATA_WRITE + { + Config: testAccFolderAssociateAuditConfigLogType(org, fname, service, logType2), + }, + }, + }) +} + +func testAccFolderAssociateAuditConfigBasic(org, fname, service string) string { + return fmt.Sprintf(` +resource "google_folder" "acceptance" { + parent = "organizations/%s" + display_name = "%s" +} + +resource "google_folder_iam_audit_config" "acceptance" { + folder = google_folder.acceptance.name + service = "%s" + audit_log_config { + log_type = "DATA_READ" + exempted_members = [ + "user:paddy@hashicorp.com", + "user:paddy@carvers.co", + ] + } +} +`, org, fname, service) +} + +func testAccFolderAssociateAuditConfigMultiple(org, fname, service, service2 string) string { + return fmt.Sprintf(` +resource "google_folder" "acceptance" { + parent = "organizations/%s" + display_name = "%s" +} + +resource "google_folder_iam_audit_config" "acceptance" { + folder = google_folder.acceptance.name + service = "%s" + audit_log_config { + log_type = "DATA_READ" + exempted_members = [ + "user:paddy@hashicorp.com", + "user:paddy@carvers.co", + ] + } +} + +resource "google_folder_iam_audit_config" "multiple" { + folder = google_folder.acceptance.name + service = "%s" + audit_log_config { + log_type = "DATA_WRITE" + } +} +`, org, fname, service, service2) +} + +func testAccFolderAssociateAuditConfigUpdated(org, fname, service string) string { + return fmt.Sprintf(` +resource "google_folder" "acceptance" { + parent = "organizations/%s" + display_name = "%s" +} + +resource "google_folder_iam_audit_config" "acceptance" { + folder = google_folder.acceptance.name + service = "%s" + audit_log_config { + log_type = "DATA_WRITE" + exempted_members = [ + "user:admin@hashicorptest.com", + "user:paddy@carvers.co", + ] + } +} +`, org, fname, service) +} + +func testAccFolderAssociateAuditConfigDropMemberFromBasic(org, fname, service string) string { + return fmt.Sprintf(` +resource "google_folder" "acceptance" { + parent = "organizations/%s" + display_name = "%s" +} + +resource "google_folder_iam_audit_config" "acceptance" { + folder = google_folder.acceptance.name + service = "%s" + audit_log_config { + log_type = "DATA_READ" + exempted_members = [ + "user:paddy@hashicorp.com", + ] + } +} +`, org, fname, service) +} + +func testAccFolderAssociateAuditConfigMembers(org, fname, service string, members []string) string { + var memberStr string + if len(members) > 0 { + for pos, member := range members { + members[pos] = "\"" + member + "\"," + } + memberStr = "\n exempted_members = [" + strings.Join(members, "\n") + "\n ]" + } + return fmt.Sprintf(` +resource "google_folder" 
"acceptance" { + parent = "organizations/%s" + display_name = "%s" +} + +resource "google_folder_iam_audit_config" "acceptance" { + folder = google_folder.acceptance.name + service = "%s" + audit_log_config { + log_type = "DATA_READ"%s + } +} +`, org, fname, service, memberStr) +} + +func testAccFolderAssociateAuditConfigLogType(org, fname, service, logType string) string { + return fmt.Sprintf(` +resource "google_folder" "acceptance" { + parent = "organizations/%s" + display_name = "%s" +} + +resource "google_folder_iam_audit_config" "acceptance" { + folder = google_folder.acceptance.name + service = "%s" + audit_log_config { + log_type = "%s" + } +} +`, org, fname, service, logType) +} diff --git a/third_party/terraform/tests/resource_google_folder_iam_binding_test.go b/third_party/terraform/tests/resource_google_folder_iam_binding_test.go index 86741b87bff5..8cd41959faa5 100644 --- a/third_party/terraform/tests/resource_google_folder_iam_binding_test.go +++ b/third_party/terraform/tests/resource_google_folder_iam_binding_test.go @@ -6,7 +6,6 @@ import ( "testing" "github.com/hashicorp/errwrap" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" "google.golang.org/api/cloudresourcemanager/v1" @@ -18,8 +17,8 @@ func TestAccFolderIamBinding_basic(t *testing.T) { t.Parallel() org := getTestOrgFromEnv(t) - fname := "terraform-" + acctest.RandString(10) - resource.Test(t, resource.TestCase{ + fname := "terraform-" + randString(t, 10) + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -27,14 +26,14 @@ func TestAccFolderIamBinding_basic(t *testing.T) { { Config: testAccFolderIamBasic(org, fname), Check: resource.ComposeTestCheckFunc( - testAccFolderExistingPolicy(org, fname), + testAccFolderExistingPolicy(t, org, fname), ), }, // Apply an IAM binding { Config: testAccFolderAssociateBindingBasic(org, fname), Check: resource.ComposeTestCheckFunc( - testAccCheckGoogleFolderIamBindingExists(&cloudresourcemanager.Binding{ + testAccCheckGoogleFolderIamBindingExists(t, &cloudresourcemanager.Binding{ Role: "roles/compute.instanceAdmin", Members: []string{"user:admin@hashicorptest.com"}, }, org, fname), @@ -46,11 +45,13 @@ func TestAccFolderIamBinding_basic(t *testing.T) { // Test that multiple IAM bindings can be applied to a folder, one at a time func TestAccFolderIamBinding_multiple(t *testing.T) { + // Multiple fine-grained resources + skipIfVcr(t) t.Parallel() org := getTestOrgFromEnv(t) - fname := "terraform-" + acctest.RandString(10) - resource.Test(t, resource.TestCase{ + fname := "terraform-" + randString(t, 10) + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -58,14 +59,14 @@ func TestAccFolderIamBinding_multiple(t *testing.T) { { Config: testAccFolderIamBasic(org, fname), Check: resource.ComposeTestCheckFunc( - testAccFolderExistingPolicy(org, fname), + testAccFolderExistingPolicy(t, org, fname), ), }, // Apply an IAM binding { Config: testAccFolderAssociateBindingBasic(org, fname), Check: resource.ComposeTestCheckFunc( - testAccCheckGoogleFolderIamBindingExists(&cloudresourcemanager.Binding{ + testAccCheckGoogleFolderIamBindingExists(t, &cloudresourcemanager.Binding{ Role: "roles/compute.instanceAdmin", Members: []string{"user:admin@hashicorptest.com"}, }, org, fname), @@ -75,11 +76,11 @@ func 
TestAccFolderIamBinding_multiple(t *testing.T) { { Config: testAccFolderAssociateBindingMultiple(org, fname), Check: resource.ComposeTestCheckFunc( - testAccCheckGoogleFolderIamBindingExists(&cloudresourcemanager.Binding{ + testAccCheckGoogleFolderIamBindingExists(t, &cloudresourcemanager.Binding{ Role: "roles/viewer", Members: []string{"user:paddy@hashicorp.com"}, }, org, fname), - testAccCheckGoogleFolderIamBindingExists(&cloudresourcemanager.Binding{ + testAccCheckGoogleFolderIamBindingExists(t, &cloudresourcemanager.Binding{ Role: "roles/compute.instanceAdmin", Members: []string{"user:admin@hashicorptest.com"}, }, org, fname), @@ -91,11 +92,13 @@ func TestAccFolderIamBinding_multiple(t *testing.T) { // Test that multiple IAM bindings can be applied to a folder all at once func TestAccFolderIamBinding_multipleAtOnce(t *testing.T) { + // Multiple fine-grained resources + skipIfVcr(t) t.Parallel() org := getTestOrgFromEnv(t) - fname := "terraform-" + acctest.RandString(10) - resource.Test(t, resource.TestCase{ + fname := "terraform-" + randString(t, 10) + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -103,18 +106,18 @@ func TestAccFolderIamBinding_multipleAtOnce(t *testing.T) { { Config: testAccFolderIamBasic(org, fname), Check: resource.ComposeTestCheckFunc( - testAccFolderExistingPolicy(org, fname), + testAccFolderExistingPolicy(t, org, fname), ), }, // Apply an IAM binding { Config: testAccFolderAssociateBindingMultiple(org, fname), Check: resource.ComposeTestCheckFunc( - testAccCheckGoogleFolderIamBindingExists(&cloudresourcemanager.Binding{ + testAccCheckGoogleFolderIamBindingExists(t, &cloudresourcemanager.Binding{ Role: "roles/compute.instanceAdmin", Members: []string{"user:admin@hashicorptest.com"}, }, org, fname), - testAccCheckGoogleFolderIamBindingExists(&cloudresourcemanager.Binding{ + testAccCheckGoogleFolderIamBindingExists(t, &cloudresourcemanager.Binding{ Role: "roles/compute.instanceAdmin", Members: []string{"user:admin@hashicorptest.com"}, }, org, fname), @@ -129,8 +132,8 @@ func TestAccFolderIamBinding_update(t *testing.T) { t.Parallel() org := getTestOrgFromEnv(t) - fname := "terraform-" + acctest.RandString(10) - resource.Test(t, resource.TestCase{ + fname := "terraform-" + randString(t, 10) + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -138,14 +141,14 @@ func TestAccFolderIamBinding_update(t *testing.T) { { Config: testAccFolderIamBasic(org, fname), Check: resource.ComposeTestCheckFunc( - testAccFolderExistingPolicy(org, fname), + testAccFolderExistingPolicy(t, org, fname), ), }, // Apply an IAM binding { Config: testAccFolderAssociateBindingBasic(org, fname), Check: resource.ComposeTestCheckFunc( - testAccCheckGoogleFolderIamBindingExists(&cloudresourcemanager.Binding{ + testAccCheckGoogleFolderIamBindingExists(t, &cloudresourcemanager.Binding{ Role: "roles/compute.instanceAdmin", Members: []string{"user:admin@hashicorptest.com"}, }, org, fname), @@ -155,7 +158,7 @@ func TestAccFolderIamBinding_update(t *testing.T) { { Config: testAccFolderAssociateBindingUpdated(org, fname), Check: resource.ComposeTestCheckFunc( - testAccCheckGoogleFolderIamBindingExists(&cloudresourcemanager.Binding{ + testAccCheckGoogleFolderIamBindingExists(t, &cloudresourcemanager.Binding{ Role: "roles/compute.instanceAdmin", Members: []string{"user:admin@hashicorptest.com", "user:paddy@hashicorp.com"}, }, org, fname), @@ 
-165,7 +168,7 @@ func TestAccFolderIamBinding_update(t *testing.T) { { Config: testAccFolderAssociateBindingDropMemberFromBasic(org, fname), Check: resource.ComposeTestCheckFunc( - testAccCheckGoogleFolderIamBindingExists(&cloudresourcemanager.Binding{ + testAccCheckGoogleFolderIamBindingExists(t, &cloudresourcemanager.Binding{ Role: "roles/compute.instanceAdmin", Members: []string{"user:paddy@hashicorp.com"}, }, org, fname), @@ -177,11 +180,13 @@ func TestAccFolderIamBinding_update(t *testing.T) { // Test that an IAM binding can be removed from a folder func TestAccFolderIamBinding_remove(t *testing.T) { + // Multiple fine-grained resources + skipIfVcr(t) t.Parallel() org := getTestOrgFromEnv(t) - fname := "terraform-" + acctest.RandString(10) - resource.Test(t, resource.TestCase{ + fname := "terraform-" + randString(t, 10) + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -189,18 +194,18 @@ func TestAccFolderIamBinding_remove(t *testing.T) { { Config: testAccFolderIamBasic(org, fname), Check: resource.ComposeTestCheckFunc( - testAccFolderExistingPolicy(org, fname), + testAccFolderExistingPolicy(t, org, fname), ), }, // Apply multiple IAM bindings { Config: testAccFolderAssociateBindingMultiple(org, fname), Check: resource.ComposeTestCheckFunc( - testAccCheckGoogleFolderIamBindingExists(&cloudresourcemanager.Binding{ + testAccCheckGoogleFolderIamBindingExists(t, &cloudresourcemanager.Binding{ Role: "roles/viewer", Members: []string{"user:paddy@hashicorp.com"}, }, org, fname), - testAccCheckGoogleFolderIamBindingExists(&cloudresourcemanager.Binding{ + testAccCheckGoogleFolderIamBindingExists(t, &cloudresourcemanager.Binding{ Role: "roles/compute.instanceAdmin", Members: []string{"user:admin@hashicorptest.com"}, }, org, fname), @@ -210,16 +215,16 @@ func TestAccFolderIamBinding_remove(t *testing.T) { { Config: testAccFolderIamBasic(org, fname), Check: resource.ComposeTestCheckFunc( - testAccFolderExistingPolicy(org, fname), + testAccFolderExistingPolicy(t, org, fname), ), }, }, }) } -func testAccCheckGoogleFolderIamBindingExists(expected *cloudresourcemanager.Binding, org, fname string) resource.TestCheckFunc { +func testAccCheckGoogleFolderIamBindingExists(t *testing.T, expected *cloudresourcemanager.Binding, org, fname string) resource.TestCheckFunc { return func(s *terraform.State) error { - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) folderPolicy, err := getFolderIamPolicyByParentAndDisplayName("organizations/"+org, fname, config) if err != nil { return fmt.Errorf("Failed to retrieve IAM policy for folder %q: %s", fname, err) diff --git a/third_party/terraform/tests/resource_google_folder_iam_member_test.go b/third_party/terraform/tests/resource_google_folder_iam_member_test.go index e2f138dd3df8..5c3b1187f430 100644 --- a/third_party/terraform/tests/resource_google_folder_iam_member_test.go +++ b/third_party/terraform/tests/resource_google_folder_iam_member_test.go @@ -4,7 +4,6 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "google.golang.org/api/cloudresourcemanager/v1" ) @@ -14,8 +13,8 @@ func TestAccFolderIamMember_basic(t *testing.T) { t.Parallel() org := getTestOrgFromEnv(t) - fname := "terraform-" + acctest.RandString(10) - resource.Test(t, resource.TestCase{ + fname := "terraform-" + randString(t, 10) + vcrTest(t, resource.TestCase{ PreCheck: func() { 
testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -23,14 +22,14 @@ func TestAccFolderIamMember_basic(t *testing.T) { { Config: testAccFolderIamBasic(org, fname), Check: resource.ComposeTestCheckFunc( - testAccFolderExistingPolicy(org, fname), + testAccFolderExistingPolicy(t, org, fname), ), }, // Apply an IAM binding { Config: testAccFolderAssociateMemberBasic(org, fname), Check: resource.ComposeTestCheckFunc( - testAccCheckGoogleFolderIamBindingExists(&cloudresourcemanager.Binding{ + testAccCheckGoogleFolderIamBindingExists(t, &cloudresourcemanager.Binding{ Role: "roles/compute.instanceAdmin", Members: []string{"user:admin@hashicorptest.com"}, }, org, fname), @@ -42,11 +41,12 @@ func TestAccFolderIamMember_basic(t *testing.T) { // Test that multiple IAM bindings can be applied to a folder func TestAccFolderIamMember_multiple(t *testing.T) { + skipIfVcr(t) t.Parallel() org := getTestOrgFromEnv(t) - fname := "terraform-" + acctest.RandString(10) - resource.Test(t, resource.TestCase{ + fname := "terraform-" + randString(t, 10) + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -54,14 +54,14 @@ func TestAccFolderIamMember_multiple(t *testing.T) { { Config: testAccFolderIamBasic(org, fname), Check: resource.ComposeTestCheckFunc( - testAccFolderExistingPolicy(org, fname), + testAccFolderExistingPolicy(t, org, fname), ), }, // Apply an IAM binding { Config: testAccFolderAssociateMemberBasic(org, fname), Check: resource.ComposeTestCheckFunc( - testAccCheckGoogleFolderIamBindingExists(&cloudresourcemanager.Binding{ + testAccCheckGoogleFolderIamBindingExists(t, &cloudresourcemanager.Binding{ Role: "roles/compute.instanceAdmin", Members: []string{"user:admin@hashicorptest.com"}, }, org, fname), @@ -71,7 +71,7 @@ func TestAccFolderIamMember_multiple(t *testing.T) { { Config: testAccFolderAssociateMemberMultiple(org, fname), Check: resource.ComposeTestCheckFunc( - testAccCheckGoogleFolderIamBindingExists(&cloudresourcemanager.Binding{ + testAccCheckGoogleFolderIamBindingExists(t, &cloudresourcemanager.Binding{ Role: "roles/compute.instanceAdmin", Members: []string{"user:admin@hashicorptest.com", "user:paddy@hashicorp.com"}, }, org, fname), @@ -83,11 +83,12 @@ func TestAccFolderIamMember_multiple(t *testing.T) { // Test that an IAM binding can be removed from a folder func TestAccFolderIamMember_remove(t *testing.T) { + skipIfVcr(t) t.Parallel() org := getTestOrgFromEnv(t) - fname := "terraform-" + acctest.RandString(10) - resource.Test(t, resource.TestCase{ + fname := "terraform-" + randString(t, 10) + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -95,14 +96,14 @@ func TestAccFolderIamMember_remove(t *testing.T) { { Config: testAccFolderIamBasic(org, fname), Check: resource.ComposeTestCheckFunc( - testAccFolderExistingPolicy(org, fname), + testAccFolderExistingPolicy(t, org, fname), ), }, // Apply multiple IAM bindings { Config: testAccFolderAssociateMemberMultiple(org, fname), Check: resource.ComposeTestCheckFunc( - testAccCheckGoogleFolderIamBindingExists(&cloudresourcemanager.Binding{ + testAccCheckGoogleFolderIamBindingExists(t, &cloudresourcemanager.Binding{ Role: "roles/compute.instanceAdmin", Members: []string{"user:admin@hashicorptest.com", "user:paddy@hashicorp.com"}, }, org, fname), @@ -112,7 +113,7 @@ func TestAccFolderIamMember_remove(t *testing.T) { { Config: testAccFolderIamBasic(org, fname), 
Check: resource.ComposeTestCheckFunc( - testAccFolderExistingPolicy(org, fname), + testAccFolderExistingPolicy(t, org, fname), ), }, }, diff --git a/third_party/terraform/tests/resource_google_folder_iam_policy_test.go b/third_party/terraform/tests/resource_google_folder_iam_policy_test.go index d382111c75d3..534c48212639 100644 --- a/third_party/terraform/tests/resource_google_folder_iam_policy_test.go +++ b/third_party/terraform/tests/resource_google_folder_iam_policy_test.go @@ -4,7 +4,6 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" resourceManagerV2Beta1 "google.golang.org/api/cloudresourcemanager/v2beta1" @@ -13,14 +12,14 @@ import ( func TestAccFolderIamPolicy_basic(t *testing.T) { t.Parallel() - folderDisplayName := "tf-test-" + acctest.RandString(10) + folderDisplayName := "tf-test-" + randString(t, 10) org := getTestOrgFromEnv(t) parent := "organizations/" + org - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckGoogleFolderIamPolicyDestroy, + CheckDestroy: testAccCheckGoogleFolderIamPolicyDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccFolderIamPolicy_basic(folderDisplayName, parent, "roles/viewer", "user:admin@hashicorptest.com"), @@ -45,14 +44,14 @@ func TestAccFolderIamPolicy_basic(t *testing.T) { func TestAccFolderIamPolicy_auditConfigs(t *testing.T) { t.Parallel() - folderDisplayName := "tf-test-" + acctest.RandString(10) + folderDisplayName := "tf-test-" + randString(t, 10) org := getTestOrgFromEnv(t) parent := "organizations/" + org - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckGoogleFolderIamPolicyDestroy, + CheckDestroy: testAccCheckGoogleFolderIamPolicyDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccFolderIamPolicy_auditConfigs(folderDisplayName, parent, "roles/viewer", "user:admin@hashicorptest.com"), @@ -66,28 +65,30 @@ func TestAccFolderIamPolicy_auditConfigs(t *testing.T) { }) } -func testAccCheckGoogleFolderIamPolicyDestroy(s *terraform.State) error { - config := testAccProvider.Meta().(*Config) +func testAccCheckGoogleFolderIamPolicyDestroyProducer(t *testing.T) func(s *terraform.State) error { + return func(s *terraform.State) error { + config := googleProviderConfig(t) - for _, rs := range s.RootModule().Resources { - if rs.Type != "google_folder_iam_policy" { - continue - } + for _, rs := range s.RootModule().Resources { + if rs.Type != "google_folder_iam_policy" { + continue + } - folder := rs.Primary.Attributes["folder"] - policy, err := config.clientResourceManagerV2Beta1.Folders.GetIamPolicy(folder, &resourceManagerV2Beta1.GetIamPolicyRequest{}).Do() + folder := rs.Primary.Attributes["folder"] + policy, err := config.clientResourceManagerV2Beta1.Folders.GetIamPolicy(folder, &resourceManagerV2Beta1.GetIamPolicyRequest{}).Do() - if err != nil && len(policy.Bindings) > 0 { - return fmt.Errorf("Folder '%s' policy hasn't been deleted.", folder) + if err != nil && len(policy.Bindings) > 0 { + return fmt.Errorf("Folder '%s' policy hasn't been deleted.", folder) + } } + return nil } - return nil } // Confirm that a folder has an IAM policy with at least 1 binding -func testAccFolderExistingPolicy(org, fname string) resource.TestCheckFunc { 
+func testAccFolderExistingPolicy(t *testing.T, org, fname string) resource.TestCheckFunc { return func(s *terraform.State) error { - c := testAccProvider.Meta().(*Config) + c := googleProviderConfig(t) var err error originalPolicy, err = getFolderIamPolicyByParentAndDisplayName("organizations/"+org, fname, c) if err != nil { diff --git a/third_party/terraform/tests/resource_google_folder_organization_policy_test.go b/third_party/terraform/tests/resource_google_folder_organization_policy_test.go index 83acdc35d0e7..85d44d048b9e 100644 --- a/third_party/terraform/tests/resource_google_folder_organization_policy_test.go +++ b/third_party/terraform/tests/resource_google_folder_organization_policy_test.go @@ -6,7 +6,6 @@ import ( "sort" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" "google.golang.org/api/cloudresourcemanager/v1" @@ -15,23 +14,23 @@ import ( func TestAccFolderOrganizationPolicy_boolean(t *testing.T) { t.Parallel() - folder := acctest.RandomWithPrefix("tf-test") + folder := fmt.Sprintf("tf-test-%d", randInt(t)) org := getTestOrgFromEnv(t) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckGoogleFolderOrganizationPolicyDestroy, + CheckDestroy: testAccCheckGoogleFolderOrganizationPolicyDestroyProducer(t), Steps: []resource.TestStep{ { // Test creation of an enforced boolean policy Config: testAccFolderOrganizationPolicy_boolean(org, folder, true), - Check: testAccCheckGoogleFolderOrganizationBooleanPolicy("bool", true), + Check: testAccCheckGoogleFolderOrganizationBooleanPolicy(t, "bool", true), }, { // Test update from enforced to not Config: testAccFolderOrganizationPolicy_boolean(org, folder, false), - Check: testAccCheckGoogleFolderOrganizationBooleanPolicy("bool", false), + Check: testAccCheckGoogleFolderOrganizationBooleanPolicy(t, "bool", false), }, { Config: " ", @@ -40,12 +39,12 @@ func TestAccFolderOrganizationPolicy_boolean(t *testing.T) { { // Test creation of a not enforced boolean policy Config: testAccFolderOrganizationPolicy_boolean(org, folder, false), - Check: testAccCheckGoogleFolderOrganizationBooleanPolicy("bool", false), + Check: testAccCheckGoogleFolderOrganizationBooleanPolicy(t, "bool", false), }, { // Test update from not enforced to enforced Config: testAccFolderOrganizationPolicy_boolean(org, folder, true), - Check: testAccCheckGoogleFolderOrganizationBooleanPolicy("bool", true), + Check: testAccCheckGoogleFolderOrganizationBooleanPolicy(t, "bool", true), }, }, }) @@ -54,17 +53,17 @@ func TestAccFolderOrganizationPolicy_boolean(t *testing.T) { func TestAccFolderOrganizationPolicy_list_allowAll(t *testing.T) { t.Parallel() - folder := acctest.RandomWithPrefix("tf-test") + folder := fmt.Sprintf("tf-test-%d", randInt(t)) org := getTestOrgFromEnv(t) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckGoogleFolderOrganizationPolicyDestroy, + CheckDestroy: testAccCheckGoogleFolderOrganizationPolicyDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccFolderOrganizationPolicy_list_allowAll(org, folder), - Check: testAccCheckGoogleFolderOrganizationListPolicyAll("list", "ALLOW"), + Check: testAccCheckGoogleFolderOrganizationListPolicyAll(t, "list", "ALLOW"), }, { ResourceName: 
"google_folder_organization_policy.list", @@ -78,17 +77,17 @@ func TestAccFolderOrganizationPolicy_list_allowAll(t *testing.T) { func TestAccFolderOrganizationPolicy_list_allowSome(t *testing.T) { t.Parallel() - folder := acctest.RandomWithPrefix("tf-test") + folder := fmt.Sprintf("tf-test-%d", randInt(t)) org := getTestOrgFromEnv(t) project := getTestProjectFromEnv() - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckGoogleFolderOrganizationPolicyDestroy, + CheckDestroy: testAccCheckGoogleFolderOrganizationPolicyDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccFolderOrganizationPolicy_list_allowSome(org, folder, project), - Check: testAccCheckGoogleFolderOrganizationListPolicyAllowedValues("list", []string{"projects/" + project}), + Check: testAccCheckGoogleFolderOrganizationListPolicyAllowedValues(t, "list", []string{"projects/" + project}), }, { ResourceName: "google_folder_organization_policy.list", @@ -102,16 +101,16 @@ func TestAccFolderOrganizationPolicy_list_allowSome(t *testing.T) { func TestAccFolderOrganizationPolicy_list_denySome(t *testing.T) { t.Parallel() - folder := acctest.RandomWithPrefix("tf-test") + folder := fmt.Sprintf("tf-test-%d", randInt(t)) org := getTestOrgFromEnv(t) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckGoogleFolderOrganizationPolicyDestroy, + CheckDestroy: testAccCheckGoogleFolderOrganizationPolicyDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccFolderOrganizationPolicy_list_denySome(org, folder), - Check: testAccCheckGoogleFolderOrganizationListPolicyDeniedValues("list", DENIED_ORG_POLICIES), + Check: testAccCheckGoogleFolderOrganizationListPolicyDeniedValues(t, "list", DENIED_ORG_POLICIES), }, { ResourceName: "google_folder_organization_policy.list", @@ -125,20 +124,20 @@ func TestAccFolderOrganizationPolicy_list_denySome(t *testing.T) { func TestAccFolderOrganizationPolicy_list_update(t *testing.T) { t.Parallel() - folder := acctest.RandomWithPrefix("tf-test") + folder := fmt.Sprintf("tf-test-%d", randInt(t)) org := getTestOrgFromEnv(t) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckGoogleFolderOrganizationPolicyDestroy, + CheckDestroy: testAccCheckGoogleFolderOrganizationPolicyDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccFolderOrganizationPolicy_list_allowAll(org, folder), - Check: testAccCheckGoogleFolderOrganizationListPolicyAll("list", "ALLOW"), + Check: testAccCheckGoogleFolderOrganizationListPolicyAll(t, "list", "ALLOW"), }, { Config: testAccFolderOrganizationPolicy_list_denySome(org, folder), - Check: testAccCheckGoogleFolderOrganizationListPolicyDeniedValues("list", DENIED_ORG_POLICIES), + Check: testAccCheckGoogleFolderOrganizationListPolicyDeniedValues(t, "list", DENIED_ORG_POLICIES), }, { ResourceName: "google_folder_organization_policy.list", @@ -152,16 +151,16 @@ func TestAccFolderOrganizationPolicy_list_update(t *testing.T) { func TestAccFolderOrganizationPolicy_restore_defaultTrue(t *testing.T) { t.Parallel() - folder := acctest.RandomWithPrefix("tf-test") + folder := fmt.Sprintf("tf-test-%d", randInt(t)) org := getTestOrgFromEnv(t) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { 
testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckGoogleOrganizationPolicyDestroy, + CheckDestroy: testAccCheckGoogleOrganizationPolicyDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccFolderOrganizationPolicy_restore_defaultTrue(org, folder), - Check: getGoogleFolderOrganizationRestoreDefaultTrue("restore", &cloudresourcemanager.RestoreDefault{}), + Check: getGoogleFolderOrganizationRestoreDefaultTrue(t, "restore", &cloudresourcemanager.RestoreDefault{}), }, { ResourceName: "google_folder_organization_policy.restore", @@ -172,34 +171,36 @@ func TestAccFolderOrganizationPolicy_restore_defaultTrue(t *testing.T) { }) } -func testAccCheckGoogleFolderOrganizationPolicyDestroy(s *terraform.State) error { - config := testAccProvider.Meta().(*Config) - - for _, rs := range s.RootModule().Resources { - if rs.Type != "google_folder_organization_policy" { - continue - } - - folder := canonicalFolderId(rs.Primary.Attributes["folder"]) - constraint := canonicalOrgPolicyConstraint(rs.Primary.Attributes["constraint"]) - policy, err := config.clientResourceManager.Folders.GetOrgPolicy(folder, &cloudresourcemanager.GetOrgPolicyRequest{ - Constraint: constraint, - }).Do() - - if err != nil { - return err - } - - if policy.ListPolicy != nil || policy.BooleanPolicy != nil { - return fmt.Errorf("Org policy with constraint '%s' hasn't been cleared", constraint) +func testAccCheckGoogleFolderOrganizationPolicyDestroyProducer(t *testing.T) func(s *terraform.State) error { + return func(s *terraform.State) error { + config := googleProviderConfig(t) + + for _, rs := range s.RootModule().Resources { + if rs.Type != "google_folder_organization_policy" { + continue + } + + folder := canonicalFolderId(rs.Primary.Attributes["folder"]) + constraint := canonicalOrgPolicyConstraint(rs.Primary.Attributes["constraint"]) + policy, err := config.clientResourceManager.Folders.GetOrgPolicy(folder, &cloudresourcemanager.GetOrgPolicyRequest{ + Constraint: constraint, + }).Do() + + if err != nil { + return err + } + + if policy.ListPolicy != nil || policy.BooleanPolicy != nil { + return fmt.Errorf("Org policy with constraint '%s' hasn't been cleared", constraint) + } } + return nil } - return nil } -func testAccCheckGoogleFolderOrganizationBooleanPolicy(n string, enforced bool) resource.TestCheckFunc { +func testAccCheckGoogleFolderOrganizationBooleanPolicy(t *testing.T, n string, enforced bool) resource.TestCheckFunc { return func(s *terraform.State) error { - policy, err := getGoogleFolderOrganizationPolicyTestResource(s, n) + policy, err := getGoogleFolderOrganizationPolicyTestResource(t, s, n) if err != nil { return err } @@ -212,9 +213,9 @@ func testAccCheckGoogleFolderOrganizationBooleanPolicy(n string, enforced bool) } } -func testAccCheckGoogleFolderOrganizationListPolicyAll(n, policyType string) resource.TestCheckFunc { +func testAccCheckGoogleFolderOrganizationListPolicyAll(t *testing.T, n, policyType string) resource.TestCheckFunc { return func(s *terraform.State) error { - policy, err := getGoogleFolderOrganizationPolicyTestResource(s, n) + policy, err := getGoogleFolderOrganizationPolicyTestResource(t, s, n) if err != nil { return err } @@ -231,9 +232,9 @@ func testAccCheckGoogleFolderOrganizationListPolicyAll(n, policyType string) res } } -func testAccCheckGoogleFolderOrganizationListPolicyAllowedValues(n string, values []string) resource.TestCheckFunc { +func testAccCheckGoogleFolderOrganizationListPolicyAllowedValues(t *testing.T, n string, values []string) 
resource.TestCheckFunc { return func(s *terraform.State) error { - policy, err := getGoogleFolderOrganizationPolicyTestResource(s, n) + policy, err := getGoogleFolderOrganizationPolicyTestResource(t, s, n) if err != nil { return err } @@ -248,9 +249,9 @@ func testAccCheckGoogleFolderOrganizationListPolicyAllowedValues(n string, value } } -func testAccCheckGoogleFolderOrganizationListPolicyDeniedValues(n string, values []string) resource.TestCheckFunc { +func testAccCheckGoogleFolderOrganizationListPolicyDeniedValues(t *testing.T, n string, values []string) resource.TestCheckFunc { return func(s *terraform.State) error { - policy, err := getGoogleFolderOrganizationPolicyTestResource(s, n) + policy, err := getGoogleFolderOrganizationPolicyTestResource(t, s, n) if err != nil { return err } @@ -265,10 +266,10 @@ func testAccCheckGoogleFolderOrganizationListPolicyDeniedValues(n string, values } } -func getGoogleFolderOrganizationRestoreDefaultTrue(n string, policyDefault *cloudresourcemanager.RestoreDefault) resource.TestCheckFunc { +func getGoogleFolderOrganizationRestoreDefaultTrue(t *testing.T, n string, policyDefault *cloudresourcemanager.RestoreDefault) resource.TestCheckFunc { return func(s *terraform.State) error { - policy, err := getGoogleFolderOrganizationPolicyTestResource(s, n) + policy, err := getGoogleFolderOrganizationPolicyTestResource(t, s, n) if err != nil { return err } @@ -281,7 +282,7 @@ func getGoogleFolderOrganizationRestoreDefaultTrue(n string, policyDefault *clou } } -func getGoogleFolderOrganizationPolicyTestResource(s *terraform.State, n string) (*cloudresourcemanager.OrgPolicy, error) { +func getGoogleFolderOrganizationPolicyTestResource(t *testing.T, s *terraform.State, n string) (*cloudresourcemanager.OrgPolicy, error) { rn := "google_folder_organization_policy." 
+ n rs, ok := s.RootModule().Resources[rn] if !ok { @@ -292,7 +293,7 @@ func getGoogleFolderOrganizationPolicyTestResource(s *terraform.State, n string) return nil, fmt.Errorf("No ID is set") } - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) folder := canonicalFolderId(rs.Primary.Attributes["folder"]) return config.clientResourceManager.Folders.GetOrgPolicy(folder, &cloudresourcemanager.GetOrgPolicyRequest{ diff --git a/third_party/terraform/tests/resource_google_folder_test.go b/third_party/terraform/tests/resource_google_folder_test.go index 08fc42b91821..ea7c6b3fdc74 100644 --- a/third_party/terraform/tests/resource_google_folder_test.go +++ b/third_party/terraform/tests/resource_google_folder_test.go @@ -4,7 +4,6 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" @@ -14,21 +13,21 @@ import ( func TestAccFolder_rename(t *testing.T) { t.Parallel() - folderDisplayName := "tf-test-" + acctest.RandString(10) - newFolderDisplayName := "tf-test-renamed-" + acctest.RandString(10) + folderDisplayName := "tf-test-" + randString(t, 10) + newFolderDisplayName := "tf-test-renamed-" + randString(t, 10) org := getTestOrgFromEnv(t) parent := "organizations/" + org folder := resourceManagerV2Beta1.Folder{} - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckGoogleFolderDestroy, + CheckDestroy: testAccCheckGoogleFolderDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccFolder_basic(folderDisplayName, parent), Check: resource.ComposeTestCheckFunc( - testAccCheckGoogleFolderExists("google_folder.folder1", &folder), + testAccCheckGoogleFolderExists(t, "google_folder.folder1", &folder), testAccCheckGoogleFolderParent(&folder, parent), testAccCheckGoogleFolderDisplayName(&folder, folderDisplayName), ), @@ -36,7 +35,7 @@ func TestAccFolder_rename(t *testing.T) { { Config: testAccFolder_basic(newFolderDisplayName, parent), Check: resource.ComposeTestCheckFunc( - testAccCheckGoogleFolderExists("google_folder.folder1", &folder), + testAccCheckGoogleFolderExists(t, "google_folder.folder1", &folder), testAccCheckGoogleFolderParent(&folder, parent), testAccCheckGoogleFolderDisplayName(&folder, newFolderDisplayName), )}, @@ -52,22 +51,22 @@ func TestAccFolder_rename(t *testing.T) { func TestAccFolder_moveParent(t *testing.T) { t.Parallel() - folder1DisplayName := "tf-test-" + acctest.RandString(10) - folder2DisplayName := "tf-test-" + acctest.RandString(10) + folder1DisplayName := "tf-test-" + randString(t, 10) + folder2DisplayName := "tf-test-" + randString(t, 10) org := getTestOrgFromEnv(t) parent := "organizations/" + org folder1 := resourceManagerV2Beta1.Folder{} folder2 := resourceManagerV2Beta1.Folder{} - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckGoogleFolderDestroy, + CheckDestroy: testAccCheckGoogleFolderDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccFolder_basic(folder1DisplayName, parent), Check: resource.ComposeTestCheckFunc( - testAccCheckGoogleFolderExists("google_folder.folder1", &folder1), + testAccCheckGoogleFolderExists(t, "google_folder.folder1", &folder1), testAccCheckGoogleFolderParent(&folder1, parent), testAccCheckGoogleFolderDisplayName(&folder1, 
folder1DisplayName), ), @@ -75,9 +74,9 @@ func TestAccFolder_moveParent(t *testing.T) { { Config: testAccFolder_move(folder1DisplayName, folder2DisplayName, parent), Check: resource.ComposeTestCheckFunc( - testAccCheckGoogleFolderExists("google_folder.folder1", &folder1), + testAccCheckGoogleFolderExists(t, "google_folder.folder1", &folder1), testAccCheckGoogleFolderDisplayName(&folder1, folder1DisplayName), - testAccCheckGoogleFolderExists("google_folder.folder2", &folder2), + testAccCheckGoogleFolderExists(t, "google_folder.folder2", &folder2), testAccCheckGoogleFolderParent(&folder2, parent), testAccCheckGoogleFolderDisplayName(&folder2, folder2DisplayName), ), @@ -86,24 +85,26 @@ func TestAccFolder_moveParent(t *testing.T) { }) } -func testAccCheckGoogleFolderDestroy(s *terraform.State) error { - config := testAccProvider.Meta().(*Config) +func testAccCheckGoogleFolderDestroyProducer(t *testing.T) func(s *terraform.State) error { + return func(s *terraform.State) error { + config := googleProviderConfig(t) - for _, rs := range s.RootModule().Resources { - if rs.Type != "google_folder" { - continue - } + for _, rs := range s.RootModule().Resources { + if rs.Type != "google_folder" { + continue + } - folder, err := config.clientResourceManagerV2Beta1.Folders.Get(rs.Primary.ID).Do() - if err != nil || folder.LifecycleState != "DELETE_REQUESTED" { - return fmt.Errorf("Folder '%s' hasn't been marked for deletion", rs.Primary.Attributes["display_name"]) + folder, err := config.clientResourceManagerV2Beta1.Folders.Get(rs.Primary.ID).Do() + if err != nil || folder.LifecycleState != "DELETE_REQUESTED" { + return fmt.Errorf("Folder '%s' hasn't been marked for deletion", rs.Primary.Attributes["display_name"]) + } } - } - return nil + return nil + } } -func testAccCheckGoogleFolderExists(n string, folder *resourceManagerV2Beta1.Folder) resource.TestCheckFunc { +func testAccCheckGoogleFolderExists(t *testing.T, n string, folder *resourceManagerV2Beta1.Folder) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] if !ok { @@ -114,7 +115,7 @@ func testAccCheckGoogleFolderExists(n string, folder *resourceManagerV2Beta1.Fol return fmt.Errorf("No ID is set") } - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) found, err := config.clientResourceManagerV2Beta1.Folders.Get(rs.Primary.ID).Do() if err != nil { diff --git a/third_party/terraform/tests/resource_google_organization_iam_audit_config_test.go b/third_party/terraform/tests/resource_google_organization_iam_audit_config_test.go index e5416da6cd14..a518df948d98 100644 --- a/third_party/terraform/tests/resource_google_organization_iam_audit_config_test.go +++ b/third_party/terraform/tests/resource_google_organization_iam_audit_config_test.go @@ -27,7 +27,7 @@ func TestAccOrganizationIamAuditConfig_basic(t *testing.T) { } org := getTestOrgFromEnv(t) service := "cloudkms.googleapis.com" - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -49,7 +49,7 @@ func TestAccOrganizationIamAuditConfig_multiple(t *testing.T) { service := "cloudkms.googleapis.com" service2 := "cloudsql.googleapis.com" - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -69,6 +69,8 @@ func TestAccOrganizationIamAuditConfig_multiple(t *testing.T) { // Test that multiple IAM 
audit configs can be applied to an organization all at once func TestAccOrganizationIamAuditConfig_multipleAtOnce(t *testing.T) { + // Multiple fine-grained resources + skipIfVcr(t) if os.Getenv(runOrgIamAuditConfigTestEnvVar) != "true" { t.Skipf("Environment variable %s is not set, skipping.", runOrgIamAuditConfigTestEnvVar) } @@ -76,7 +78,7 @@ func TestAccOrganizationIamAuditConfig_multipleAtOnce(t *testing.T) { service := "cloudkms.googleapis.com" service2 := "cloudsql.googleapis.com" - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -98,7 +100,7 @@ func TestAccOrganizationIamAuditConfig_update(t *testing.T) { org := getTestOrgFromEnv(t) service := "cloudkms.googleapis.com" - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -125,6 +127,8 @@ func TestAccOrganizationIamAuditConfig_update(t *testing.T) { // Test that an IAM audit config can be removed from an organization func TestAccOrganizationIamAuditConfig_remove(t *testing.T) { + // Multiple fine-grained resources + skipIfVcr(t) if os.Getenv(runOrgIamAuditConfigTestEnvVar) != "true" { t.Skipf("Environment variable %s is not set, skipping.", runOrgIamAuditConfigTestEnvVar) } @@ -132,7 +136,7 @@ func TestAccOrganizationIamAuditConfig_remove(t *testing.T) { service := "cloudkms.googleapis.com" service2 := "cloudsql.googleapis.com" - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -162,7 +166,7 @@ func TestAccOrganizationIamAuditConfig_addFirstExemptMember(t *testing.T) { members := []string{} members2 := []string{"user:paddy@hashicorp.com"} - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -191,7 +195,7 @@ func TestAccOrganizationIamAuditConfig_removeLastExemptMember(t *testing.T) { members := []string{"user:paddy@hashicorp.com"} members2 := []string{} - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -220,7 +224,7 @@ func TestAccOrganizationIamAuditConfig_updateNoExemptMembers(t *testing.T) { logType2 := "DATA_WRITE" service := "cloudkms.googleapis.com" - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ diff --git a/third_party/terraform/tests/resource_google_organization_iam_custom_role_test.go b/third_party/terraform/tests/resource_google_organization_iam_custom_role_test.go index f40c98243815..74233f90f1e7 100644 --- a/third_party/terraform/tests/resource_google_organization_iam_custom_role_test.go +++ b/third_party/terraform/tests/resource_google_organization_iam_custom_role_test.go @@ -6,7 +6,6 @@ import ( "sort" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" ) @@ -15,16 +14,17 @@ func TestAccOrganizationIamCustomRole_basic(t *testing.T) { t.Parallel() org := getTestOrgFromEnv(t) - roleId := "tfIamCustomRole" + acctest.RandString(10) + roleId := "tfIamCustomRole" + randString(t, 10) - 
resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckGoogleOrganizationIamCustomRoleDestroy, + CheckDestroy: testAccCheckGoogleOrganizationIamCustomRoleDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccCheckGoogleOrganizationIamCustomRole_basic(org, roleId), Check: testAccCheckGoogleOrganizationIamCustomRole( + t, "google_organization_iam_custom_role.foo", "My Custom Role", "foo", @@ -34,6 +34,7 @@ func TestAccOrganizationIamCustomRole_basic(t *testing.T) { { Config: testAccCheckGoogleOrganizationIamCustomRole_update(org, roleId), Check: testAccCheckGoogleOrganizationIamCustomRole( + t, "google_organization_iam_custom_role.foo", "My Custom Role Updated", "bar", @@ -53,27 +54,27 @@ func TestAccOrganizationIamCustomRole_undelete(t *testing.T) { t.Parallel() org := getTestOrgFromEnv(t) - roleId := "tfIamCustomRole" + acctest.RandString(10) + roleId := "tfIamCustomRole" + randString(t, 10) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckGoogleOrganizationIamCustomRoleDestroy, + CheckDestroy: testAccCheckGoogleOrganizationIamCustomRoleDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccCheckGoogleOrganizationIamCustomRole_basic(org, roleId), - Check: testAccCheckGoogleOrganizationIamCustomRoleDeletionStatus("google_organization_iam_custom_role.foo", false), + Check: testAccCheckGoogleOrganizationIamCustomRoleDeletionStatus(t, "google_organization_iam_custom_role.foo", false), }, // Soft-delete { Config: testAccCheckGoogleOrganizationIamCustomRole_basic(org, roleId), - Check: testAccCheckGoogleOrganizationIamCustomRoleDeletionStatus("google_organization_iam_custom_role.foo", true), + Check: testAccCheckGoogleOrganizationIamCustomRoleDeletionStatus(t, "google_organization_iam_custom_role.foo", true), Destroy: true, }, // Undelete { Config: testAccCheckGoogleOrganizationIamCustomRole_basic(org, roleId), - Check: testAccCheckGoogleOrganizationIamCustomRoleDeletionStatus("google_organization_iam_custom_role.foo", false), + Check: testAccCheckGoogleOrganizationIamCustomRoleDeletionStatus(t, "google_organization_iam_custom_role.foo", false), }, }, }) @@ -83,16 +84,17 @@ func TestAccOrganizationIamCustomRole_createAfterDestroy(t *testing.T) { t.Parallel() org := getTestOrgFromEnv(t) - roleId := "tfIamCustomRole" + acctest.RandString(10) + roleId := "tfIamCustomRole" + randString(t, 10) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckGoogleOrganizationIamCustomRoleDestroy, + CheckDestroy: testAccCheckGoogleOrganizationIamCustomRoleDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccCheckGoogleOrganizationIamCustomRole_basic(org, roleId), Check: testAccCheckGoogleOrganizationIamCustomRole( + t, "google_organization_iam_custom_role.foo", "My Custom Role", "foo", @@ -108,6 +110,7 @@ func TestAccOrganizationIamCustomRole_createAfterDestroy(t *testing.T) { { Config: testAccCheckGoogleOrganizationIamCustomRole_basic(org, roleId), Check: testAccCheckGoogleOrganizationIamCustomRole( + t, "google_organization_iam_custom_role.foo", "My Custom Role", "foo", @@ -118,30 +121,32 @@ func TestAccOrganizationIamCustomRole_createAfterDestroy(t *testing.T) { }) } -func testAccCheckGoogleOrganizationIamCustomRoleDestroy(s 
*terraform.State) error { - config := testAccProvider.Meta().(*Config) +func testAccCheckGoogleOrganizationIamCustomRoleDestroyProducer(t *testing.T) func(s *terraform.State) error { + return func(s *terraform.State) error { + config := googleProviderConfig(t) - for _, rs := range s.RootModule().Resources { - if rs.Type != "google_organization_iam_custom_role" { - continue - } + for _, rs := range s.RootModule().Resources { + if rs.Type != "google_organization_iam_custom_role" { + continue + } - role, err := config.clientIAM.Organizations.Roles.Get(rs.Primary.ID).Do() + role, err := config.clientIAM.Organizations.Roles.Get(rs.Primary.ID).Do() - if err != nil { - return err - } + if err != nil { + return err + } + + if !role.Deleted { + return fmt.Errorf("Iam custom role still exists") + } - if !role.Deleted { - return fmt.Errorf("Iam custom role still exists") } + return nil } - - return nil } -func testAccCheckGoogleOrganizationIamCustomRole(n, title, description, stage string, permissions []string) resource.TestCheckFunc { +func testAccCheckGoogleOrganizationIamCustomRole(t *testing.T, n, title, description, stage string, permissions []string) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] if !ok { @@ -152,7 +157,7 @@ func testAccCheckGoogleOrganizationIamCustomRole(n, title, description, stage st return fmt.Errorf("No ID is set") } - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) role, err := config.clientIAM.Organizations.Roles.Get(rs.Primary.ID).Do() if err != nil { @@ -181,7 +186,7 @@ func testAccCheckGoogleOrganizationIamCustomRole(n, title, description, stage st } } -func testAccCheckGoogleOrganizationIamCustomRoleDeletionStatus(n string, deleted bool) resource.TestCheckFunc { +func testAccCheckGoogleOrganizationIamCustomRoleDeletionStatus(t *testing.T, n string, deleted bool) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] if !ok { @@ -192,7 +197,7 @@ func testAccCheckGoogleOrganizationIamCustomRoleDeletionStatus(n string, deleted return fmt.Errorf("No ID is set") } - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) role, err := config.clientIAM.Organizations.Roles.Get(rs.Primary.ID).Do() if err != nil { diff --git a/third_party/terraform/tests/resource_google_organization_iam_test.go b/third_party/terraform/tests/resource_google_organization_iam_test.go index 4a39bfa1fb1d..ba44d1f0ea5f 100644 --- a/third_party/terraform/tests/resource_google_organization_iam_test.go +++ b/third_party/terraform/tests/resource_google_organization_iam_test.go @@ -7,7 +7,6 @@ import ( "sort" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" "google.golang.org/api/cloudresourcemanager/v1" @@ -27,16 +26,16 @@ func TestAccOrganizationIam(t *testing.T) { t.Parallel() org := getTestOrgFromEnv(t) - account := acctest.RandomWithPrefix("tf-test") - roleId := "tfIamTest" + acctest.RandString(10) - resource.Test(t, resource.TestCase{ + account := fmt.Sprintf("tf-test-%d", randInt(t)) + roleId := "tfIamTest" + randString(t, 10) + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ { // Test Iam Binding creation Config: testAccOrganizationIamBinding_basic(account, roleId, org), - Check: 
testAccCheckGoogleOrganizationIamBindingExists("foo", "test-role", []string{ + Check: testAccCheckGoogleOrganizationIamBindingExists(t, "foo", "test-role", []string{ fmt.Sprintf("serviceAccount:%s@%s.iam.gserviceaccount.com", account, getTestProjectFromEnv()), }), }, @@ -49,7 +48,7 @@ func TestAccOrganizationIam(t *testing.T) { { // Test Iam Binding update Config: testAccOrganizationIamBinding_update(account, roleId, org), - Check: testAccCheckGoogleOrganizationIamBindingExists("foo", "test-role", []string{ + Check: testAccCheckGoogleOrganizationIamBindingExists(t, "foo", "test-role", []string{ fmt.Sprintf("serviceAccount:%s@%s.iam.gserviceaccount.com", account, getTestProjectFromEnv()), fmt.Sprintf("serviceAccount:%s-2@%s.iam.gserviceaccount.com", account, getTestProjectFromEnv()), }), @@ -63,7 +62,7 @@ func TestAccOrganizationIam(t *testing.T) { { // Test Iam Member creation (no update for member, no need to test) Config: testAccOrganizationIamMember_basic(account, org), - Check: testAccCheckGoogleOrganizationIamMemberExists("foo", "roles/browser", + Check: testAccCheckGoogleOrganizationIamMemberExists(t, "foo", "roles/browser", fmt.Sprintf("serviceAccount:%s@%s.iam.gserviceaccount.com", account, getTestProjectFromEnv()), ), }, @@ -77,7 +76,7 @@ func TestAccOrganizationIam(t *testing.T) { }) } -func testAccCheckGoogleOrganizationIamBindingExists(bindingResourceName, roleResourceName string, members []string) resource.TestCheckFunc { +func testAccCheckGoogleOrganizationIamBindingExists(t *testing.T, bindingResourceName, roleResourceName string, members []string) resource.TestCheckFunc { return func(s *terraform.State) error { bindingRs, ok := s.RootModule().Resources["google_organization_iam_binding."+bindingResourceName] if !ok { @@ -89,7 +88,7 @@ func testAccCheckGoogleOrganizationIamBindingExists(bindingResourceName, roleRes return fmt.Errorf("Not found: %s", roleResourceName) } - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) p, err := config.clientResourceManager.Organizations.GetIamPolicy("organizations/"+bindingRs.Primary.Attributes["org_id"], &cloudresourcemanager.GetIamPolicyRequest{}).Do() if err != nil { return err @@ -112,14 +111,14 @@ func testAccCheckGoogleOrganizationIamBindingExists(bindingResourceName, roleRes } } -func testAccCheckGoogleOrganizationIamMemberExists(n, role, member string) resource.TestCheckFunc { +func testAccCheckGoogleOrganizationIamMemberExists(t *testing.T, n, role, member string) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources["google_organization_iam_member."+n] if !ok { return fmt.Errorf("Not found: %s", n) } - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) p, err := config.clientResourceManager.Organizations.GetIamPolicy("organizations/"+rs.Primary.Attributes["org_id"], &cloudresourcemanager.GetIamPolicyRequest{}).Do() if err != nil { return err diff --git a/third_party/terraform/tests/resource_google_organization_policy_test.go b/third_party/terraform/tests/resource_google_organization_policy_test.go index 22a9cd8561da..6bd17e1ef5f3 100644 --- a/third_party/terraform/tests/resource_google_organization_policy_test.go +++ b/third_party/terraform/tests/resource_google_organization_policy_test.go @@ -43,20 +43,20 @@ func TestAccOrganizationPolicy(t *testing.T) { func testAccOrganizationPolicy_boolean(t *testing.T) { org := getTestOrgTargetFromEnv(t) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { 
testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckGoogleOrganizationPolicyDestroy, + CheckDestroy: testAccCheckGoogleOrganizationPolicyDestroyProducer(t), Steps: []resource.TestStep{ { // Test creation of an enforced boolean policy Config: testAccOrganizationPolicyConfig_boolean(org, true), - Check: testAccCheckGoogleOrganizationBooleanPolicy("bool", true), + Check: testAccCheckGoogleOrganizationBooleanPolicy(t, "bool", true), }, { // Test update from enforced to not Config: testAccOrganizationPolicyConfig_boolean(org, false), - Check: testAccCheckGoogleOrganizationBooleanPolicy("bool", false), + Check: testAccCheckGoogleOrganizationBooleanPolicy(t, "bool", false), }, { Config: " ", @@ -65,12 +65,12 @@ func testAccOrganizationPolicy_boolean(t *testing.T) { { // Test creation of a not enforced boolean policy Config: testAccOrganizationPolicyConfig_boolean(org, false), - Check: testAccCheckGoogleOrganizationBooleanPolicy("bool", false), + Check: testAccCheckGoogleOrganizationBooleanPolicy(t, "bool", false), }, { // Test update from not enforced to enforced Config: testAccOrganizationPolicyConfig_boolean(org, true), - Check: testAccCheckGoogleOrganizationBooleanPolicy("bool", true), + Check: testAccCheckGoogleOrganizationBooleanPolicy(t, "bool", true), }, { ResourceName: "google_organization_policy.bool", @@ -84,14 +84,14 @@ func testAccOrganizationPolicy_boolean(t *testing.T) { func testAccOrganizationPolicy_list_allowAll(t *testing.T) { org := getTestOrgTargetFromEnv(t) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckGoogleOrganizationPolicyDestroy, + CheckDestroy: testAccCheckGoogleOrganizationPolicyDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccOrganizationPolicyConfig_list_allowAll(org), - Check: testAccCheckGoogleOrganizationListPolicyAll("list", "ALLOW"), + Check: testAccCheckGoogleOrganizationListPolicyAll(t, "list", "ALLOW"), }, { ResourceName: "google_organization_policy.list", @@ -105,14 +105,14 @@ func testAccOrganizationPolicy_list_allowAll(t *testing.T) { func testAccOrganizationPolicy_list_allowSome(t *testing.T) { org := getTestOrgTargetFromEnv(t) project := getTestProjectFromEnv() - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckGoogleOrganizationPolicyDestroy, + CheckDestroy: testAccCheckGoogleOrganizationPolicyDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccOrganizationPolicyConfig_list_allowSome(org, project), - Check: testAccCheckGoogleOrganizationListPolicyAllowedValues("list", []string{"projects/" + project, "projects/debian-cloud"}), + Check: testAccCheckGoogleOrganizationListPolicyAllowedValues(t, "list", []string{"projects/" + project, "projects/debian-cloud"}), }, { ResourceName: "google_organization_policy.list", @@ -125,14 +125,14 @@ func testAccOrganizationPolicy_list_allowSome(t *testing.T) { func testAccOrganizationPolicy_list_denySome(t *testing.T) { org := getTestOrgTargetFromEnv(t) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckGoogleOrganizationPolicyDestroy, + CheckDestroy: testAccCheckGoogleOrganizationPolicyDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccOrganizationPolicyConfig_list_denySome(org), - Check: 
testAccCheckGoogleOrganizationListPolicyDeniedValues("list", DENIED_ORG_POLICIES), + Check: testAccCheckGoogleOrganizationListPolicyDeniedValues(t, "list", DENIED_ORG_POLICIES), }, { ResourceName: "google_organization_policy.list", @@ -145,18 +145,18 @@ func testAccOrganizationPolicy_list_denySome(t *testing.T) { func testAccOrganizationPolicy_list_update(t *testing.T) { org := getTestOrgTargetFromEnv(t) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckGoogleOrganizationPolicyDestroy, + CheckDestroy: testAccCheckGoogleOrganizationPolicyDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccOrganizationPolicyConfig_list_allowAll(org), - Check: testAccCheckGoogleOrganizationListPolicyAll("list", "ALLOW"), + Check: testAccCheckGoogleOrganizationListPolicyAll(t, "list", "ALLOW"), }, { Config: testAccOrganizationPolicyConfig_list_denySome(org), - Check: testAccCheckGoogleOrganizationListPolicyDeniedValues("list", DENIED_ORG_POLICIES), + Check: testAccCheckGoogleOrganizationListPolicyDeniedValues(t, "list", DENIED_ORG_POLICIES), }, { ResourceName: "google_organization_policy.list", @@ -169,10 +169,10 @@ func testAccOrganizationPolicy_list_update(t *testing.T) { func testAccOrganizationPolicy_list_inheritFromParent(t *testing.T) { org := getTestOrgTargetFromEnv(t) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckGoogleOrganizationPolicyDestroy, + CheckDestroy: testAccCheckGoogleOrganizationPolicyDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccOrganizationPolicyConfig_list_inheritFromParent(org), @@ -188,14 +188,14 @@ func testAccOrganizationPolicy_list_inheritFromParent(t *testing.T) { func testAccOrganizationPolicy_restore_defaultTrue(t *testing.T) { org := getTestOrgTargetFromEnv(t) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckGoogleOrganizationPolicyDestroy, + CheckDestroy: testAccCheckGoogleOrganizationPolicyDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccOrganizationPolicyConfig_restore_defaultTrue(org), - Check: testAccCheckGoogleOrganizationRestoreDefaultTrue("restore", &cloudresourcemanager.RestoreDefault{}), + Check: testAccCheckGoogleOrganizationRestoreDefaultTrue(t, "restore", &cloudresourcemanager.RestoreDefault{}), }, { ResourceName: "google_organization_policy.restore", @@ -206,34 +206,36 @@ func testAccOrganizationPolicy_restore_defaultTrue(t *testing.T) { }) } -func testAccCheckGoogleOrganizationPolicyDestroy(s *terraform.State) error { - config := testAccProvider.Meta().(*Config) - - for _, rs := range s.RootModule().Resources { - if rs.Type != "google_organization_policy" { - continue - } - - org := "organizations/" + rs.Primary.Attributes["org_id"] - constraint := canonicalOrgPolicyConstraint(rs.Primary.Attributes["constraint"]) - policy, err := config.clientResourceManager.Organizations.GetOrgPolicy(org, &cloudresourcemanager.GetOrgPolicyRequest{ - Constraint: constraint, - }).Do() - - if err != nil { - return err - } - - if policy.ListPolicy != nil || policy.BooleanPolicy != nil { - return fmt.Errorf("Org policy with constraint '%s' hasn't been cleared", constraint) +func testAccCheckGoogleOrganizationPolicyDestroyProducer(t *testing.T) func(s *terraform.State) error { + return 
func(s *terraform.State) error { + config := googleProviderConfig(t) + + for _, rs := range s.RootModule().Resources { + if rs.Type != "google_organization_policy" { + continue + } + + org := "organizations/" + rs.Primary.Attributes["org_id"] + constraint := canonicalOrgPolicyConstraint(rs.Primary.Attributes["constraint"]) + policy, err := config.clientResourceManager.Organizations.GetOrgPolicy(org, &cloudresourcemanager.GetOrgPolicyRequest{ + Constraint: constraint, + }).Do() + + if err != nil { + return err + } + + if policy.ListPolicy != nil || policy.BooleanPolicy != nil { + return fmt.Errorf("Org policy with constraint '%s' hasn't been cleared", constraint) + } } + return nil } - return nil } -func testAccCheckGoogleOrganizationBooleanPolicy(n string, enforced bool) resource.TestCheckFunc { +func testAccCheckGoogleOrganizationBooleanPolicy(t *testing.T, n string, enforced bool) resource.TestCheckFunc { return func(s *terraform.State) error { - policy, err := getGoogleOrganizationPolicyTestResource(s, n) + policy, err := getGoogleOrganizationPolicyTestResource(t, s, n) if err != nil { return err } @@ -246,9 +248,9 @@ func testAccCheckGoogleOrganizationBooleanPolicy(n string, enforced bool) resour } } -func testAccCheckGoogleOrganizationListPolicyAll(n, policyType string) resource.TestCheckFunc { +func testAccCheckGoogleOrganizationListPolicyAll(t *testing.T, n, policyType string) resource.TestCheckFunc { return func(s *terraform.State) error { - policy, err := getGoogleOrganizationPolicyTestResource(s, n) + policy, err := getGoogleOrganizationPolicyTestResource(t, s, n) if err != nil { return err } @@ -265,9 +267,9 @@ func testAccCheckGoogleOrganizationListPolicyAll(n, policyType string) resource. } } -func testAccCheckGoogleOrganizationListPolicyAllowedValues(n string, values []string) resource.TestCheckFunc { +func testAccCheckGoogleOrganizationListPolicyAllowedValues(t *testing.T, n string, values []string) resource.TestCheckFunc { return func(s *terraform.State) error { - policy, err := getGoogleOrganizationPolicyTestResource(s, n) + policy, err := getGoogleOrganizationPolicyTestResource(t, s, n) if err != nil { return err } @@ -282,9 +284,9 @@ func testAccCheckGoogleOrganizationListPolicyAllowedValues(n string, values []st } } -func testAccCheckGoogleOrganizationListPolicyDeniedValues(n string, values []string) resource.TestCheckFunc { +func testAccCheckGoogleOrganizationListPolicyDeniedValues(t *testing.T, n string, values []string) resource.TestCheckFunc { return func(s *terraform.State) error { - policy, err := getGoogleOrganizationPolicyTestResource(s, n) + policy, err := getGoogleOrganizationPolicyTestResource(t, s, n) if err != nil { return err } @@ -299,10 +301,10 @@ func testAccCheckGoogleOrganizationListPolicyDeniedValues(n string, values []str } } -func testAccCheckGoogleOrganizationRestoreDefaultTrue(n string, policyDefault *cloudresourcemanager.RestoreDefault) resource.TestCheckFunc { +func testAccCheckGoogleOrganizationRestoreDefaultTrue(t *testing.T, n string, policyDefault *cloudresourcemanager.RestoreDefault) resource.TestCheckFunc { return func(s *terraform.State) error { - policy, err := getGoogleOrganizationPolicyTestResource(s, n) + policy, err := getGoogleOrganizationPolicyTestResource(t, s, n) if err != nil { return err } @@ -315,7 +317,7 @@ func testAccCheckGoogleOrganizationRestoreDefaultTrue(n string, policyDefault *c } } -func getGoogleOrganizationPolicyTestResource(s *terraform.State, n string) (*cloudresourcemanager.OrgPolicy, error) { +func 
getGoogleOrganizationPolicyTestResource(t *testing.T, s *terraform.State, n string) (*cloudresourcemanager.OrgPolicy, error) { rn := "google_organization_policy." + n rs, ok := s.RootModule().Resources[rn] if !ok { @@ -326,7 +328,7 @@ func getGoogleOrganizationPolicyTestResource(s *terraform.State, n string) (*clo return nil, fmt.Errorf("No ID is set") } - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) return config.clientResourceManager.Organizations.GetOrgPolicy("organizations/"+rs.Primary.Attributes["org_id"], &cloudresourcemanager.GetOrgPolicyRequest{ Constraint: rs.Primary.Attributes["constraint"], diff --git a/third_party/terraform/tests/resource_google_project_iam_audit_config_test.go b/third_party/terraform/tests/resource_google_project_iam_audit_config_test.go index 62d338dd492a..b41d64e8bf89 100644 --- a/third_party/terraform/tests/resource_google_project_iam_audit_config_test.go +++ b/third_party/terraform/tests/resource_google_project_iam_audit_config_test.go @@ -5,7 +5,6 @@ import ( "strings" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" ) @@ -23,9 +22,9 @@ func TestAccProjectIamAuditConfig_basic(t *testing.T) { t.Parallel() org := getTestOrgFromEnv(t) - pid := acctest.RandomWithPrefix("tf-test") + pid := fmt.Sprintf("tf-test-%d", randInt(t)) service := "cloudkms.googleapis.com" - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -33,7 +32,7 @@ func TestAccProjectIamAuditConfig_basic(t *testing.T) { { Config: testAccProject_create(pid, pname, org), Check: resource.ComposeTestCheckFunc( - testAccProjectExistingPolicy(pid), + testAccProjectExistingPolicy(t, pid), ), }, // Apply an IAM audit config @@ -50,11 +49,11 @@ func TestAccProjectIamAuditConfig_multiple(t *testing.T) { t.Parallel() org := getTestOrgFromEnv(t) - pid := acctest.RandomWithPrefix("tf-test") + pid := fmt.Sprintf("tf-test-%d", randInt(t)) service := "cloudkms.googleapis.com" service2 := "cloudsql.googleapis.com" - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -62,7 +61,7 @@ func TestAccProjectIamAuditConfig_multiple(t *testing.T) { { Config: testAccProject_create(pid, pname, org), Check: resource.ComposeTestCheckFunc( - testAccProjectExistingPolicy(pid), + testAccProjectExistingPolicy(t, pid), ), }, // Apply an IAM audit config @@ -81,14 +80,16 @@ func TestAccProjectIamAuditConfig_multiple(t *testing.T) { // Test that multiple IAM audit configs can be applied to a project all at once func TestAccProjectIamAuditConfig_multipleAtOnce(t *testing.T) { + // Multiple fine-grained resources + skipIfVcr(t) t.Parallel() org := getTestOrgFromEnv(t) - pid := acctest.RandomWithPrefix("tf-test") + pid := fmt.Sprintf("tf-test-%d", randInt(t)) service := "cloudkms.googleapis.com" service2 := "cloudsql.googleapis.com" - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -96,7 +97,7 @@ func TestAccProjectIamAuditConfig_multipleAtOnce(t *testing.T) { { Config: testAccProject_create(pid, pname, org), Check: resource.ComposeTestCheckFunc( - testAccProjectExistingPolicy(pid), + testAccProjectExistingPolicy(t, pid), ), }, // Apply an IAM audit config @@ -114,10 
+115,10 @@ func TestAccProjectIamAuditConfig_update(t *testing.T) { t.Parallel() org := getTestOrgFromEnv(t) - pid := acctest.RandomWithPrefix("tf-test") + pid := fmt.Sprintf("tf-test-%d", randInt(t)) service := "cloudkms.googleapis.com" - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -125,7 +126,7 @@ func TestAccProjectIamAuditConfig_update(t *testing.T) { { Config: testAccProject_create(pid, pname, org), Check: resource.ComposeTestCheckFunc( - testAccProjectExistingPolicy(pid), + testAccProjectExistingPolicy(t, pid), ), }, // Apply an IAM audit config @@ -151,14 +152,16 @@ func TestAccProjectIamAuditConfig_update(t *testing.T) { // Test that an IAM audit config can be removed from a project func TestAccProjectIamAuditConfig_remove(t *testing.T) { + // Multiple fine-grained resources + skipIfVcr(t) t.Parallel() org := getTestOrgFromEnv(t) - pid := acctest.RandomWithPrefix("tf-test") + pid := fmt.Sprintf("tf-test-%d", randInt(t)) service := "cloudkms.googleapis.com" service2 := "cloudsql.googleapis.com" - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -166,7 +169,7 @@ func TestAccProjectIamAuditConfig_remove(t *testing.T) { { Config: testAccProject_create(pid, pname, org), Check: resource.ComposeTestCheckFunc( - testAccProjectExistingPolicy(pid), + testAccProjectExistingPolicy(t, pid), ), }, // Apply multiple IAM audit configs @@ -180,7 +183,7 @@ func TestAccProjectIamAuditConfig_remove(t *testing.T) { { Config: testAccProject_create(pid, pname, org), Check: resource.ComposeTestCheckFunc( - testAccProjectExistingPolicy(pid), + testAccProjectExistingPolicy(t, pid), ), }, }, @@ -192,12 +195,12 @@ func TestAccProjectIamAuditConfig_addFirstExemptMember(t *testing.T) { t.Parallel() org := getTestOrgFromEnv(t) - pid := acctest.RandomWithPrefix("tf-test") + pid := fmt.Sprintf("tf-test-%d", randInt(t)) service := "cloudkms.googleapis.com" members := []string{} members2 := []string{"user:paddy@hashicorp.com"} - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -205,7 +208,7 @@ func TestAccProjectIamAuditConfig_addFirstExemptMember(t *testing.T) { { Config: testAccProject_create(pid, pname, org), Check: resource.ComposeTestCheckFunc( - testAccProjectExistingPolicy(pid), + testAccProjectExistingPolicy(t, pid), ), }, // Apply IAM audit config with no members @@ -228,12 +231,12 @@ func TestAccProjectIamAuditConfig_removeLastExemptMember(t *testing.T) { t.Parallel() org := getTestOrgFromEnv(t) - pid := acctest.RandomWithPrefix("tf-test") + pid := fmt.Sprintf("tf-test-%d", randInt(t)) service := "cloudkms.googleapis.com" members2 := []string{} members := []string{"user:paddy@hashicorp.com"} - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -241,7 +244,7 @@ func TestAccProjectIamAuditConfig_removeLastExemptMember(t *testing.T) { { Config: testAccProject_create(pid, pname, org), Check: resource.ComposeTestCheckFunc( - testAccProjectExistingPolicy(pid), + testAccProjectExistingPolicy(t, pid), ), }, // Apply IAM audit config with member @@ -259,17 +262,17 @@ func TestAccProjectIamAuditConfig_removeLastExemptMember(t *testing.T) { }) 
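Every test in the hunks above and below migrates to the same harness shape. The following is a minimal sketch of that shape, assuming this test package's own helpers (`vcrTest`, `randInt`, `randString`, `skipIfVcr`, `testAccPreCheck`, `testAccProviders`, and the destroy-check producers), whose signatures are inferred from their call sites in this diff; `TestAccExample_basic` and its config builder are hypothetical names, not part of the patch:

```go
package google

import (
	"fmt"
	"testing"

	"github.com/hashicorp/terraform-plugin-sdk/helper/resource"
)

// Hypothetical resource test showing the shape every test in these hunks
// migrates to; the helpers are assumed from their call sites above.
func TestAccExample_basic(t *testing.T) {
	t.Parallel()

	// Seeded helpers (randInt/randString) replace acctest.RandString and
	// acctest.RandomWithPrefix, so a VCR replay regenerates the same
	// resource names as the original recording run.
	pid := fmt.Sprintf("tf-test-%d", randInt(t))

	// vcrTest wraps resource.Test: it records live HTTP traffic once and
	// replays the cassette on later runs.
	vcrTest(t, resource.TestCase{
		PreCheck:     func() { testAccPreCheck(t) },
		Providers:    testAccProviders,
		CheckDestroy: testAccCheckExampleDestroyProducer(t),
		Steps: []resource.TestStep{
			{Config: testAccExample_basicConfig(pid)},
		},
	})
}
```

The `skipIfVcr(t)` calls added in several tests opt them out of cassette replay entirely; the adjacent `// Multiple fine-grained resources` comments suggest such tests don't replay deterministically, though the patch itself doesn't spell out the reason.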
} -// test changing service with no exempt members +// test changing log type with no exempt members func TestAccProjectIamAuditConfig_updateNoExemptMembers(t *testing.T) { t.Parallel() org := getTestOrgFromEnv(t) - pid := acctest.RandomWithPrefix("tf-test") + pid := fmt.Sprintf("tf-test-%d", randInt(t)) logType := "DATA_READ" logType2 := "DATA_WRITE" service := "cloudkms.googleapis.com" - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -277,7 +280,7 @@ func TestAccProjectIamAuditConfig_updateNoExemptMembers(t *testing.T) { { Config: testAccProject_create(pid, pname, org), Check: resource.ComposeTestCheckFunc( - testAccProjectExistingPolicy(pid), + testAccProjectExistingPolicy(t, pid), ), }, // Apply IAM audit config with DATA_READ @@ -286,7 +289,7 @@ func TestAccProjectIamAuditConfig_updateNoExemptMembers(t *testing.T) { }, projectIamAuditConfigImportStep("google_project_iam_audit_config.acceptance", pid, service), - // Apply IAM audit config with DATA_WRITe + // Apply IAM audit config with DATA_WRITE { Config: testAccProjectAssociateAuditConfigLogType(pid, pname, org, service, logType2), }, diff --git a/third_party/terraform/tests/resource_google_project_iam_binding_test.go.erb b/third_party/terraform/tests/resource_google_project_iam_binding_test.go.erb index 6d03f3728af5..c4c9cb05cad5 100644 --- a/third_party/terraform/tests/resource_google_project_iam_binding_test.go.erb +++ b/third_party/terraform/tests/resource_google_project_iam_binding_test.go.erb @@ -5,7 +5,6 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" ) @@ -23,9 +22,9 @@ func TestAccProjectIamBinding_basic(t *testing.T) { t.Parallel() org := getTestOrgFromEnv(t) - pid := acctest.RandomWithPrefix("tf-test") + pid := fmt.Sprintf("tf-test-%d", randInt(t)) role := "roles/compute.instanceAdmin" - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -33,7 +32,7 @@ func TestAccProjectIamBinding_basic(t *testing.T) { { Config: testAccProject_create(pid, pname, org), Check: resource.ComposeTestCheckFunc( - testAccProjectExistingPolicy(pid), + testAccProjectExistingPolicy(t, pid), ), }, // Apply an IAM binding @@ -50,11 +49,11 @@ func TestAccProjectIamBinding_multiple(t *testing.T) { t.Parallel() org := getTestOrgFromEnv(t) - pid := acctest.RandomWithPrefix("tf-test") + pid := fmt.Sprintf("tf-test-%d", randInt(t)) role := "roles/compute.instanceAdmin" role2 := "roles/viewer" - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -62,7 +61,7 @@ func TestAccProjectIamBinding_multiple(t *testing.T) { { Config: testAccProject_create(pid, pname, org), Check: resource.ComposeTestCheckFunc( - testAccProjectExistingPolicy(pid), + testAccProjectExistingPolicy(t, pid), ), }, // Apply an IAM binding @@ -81,14 +80,16 @@ func TestAccProjectIamBinding_multiple(t *testing.T) { // Test that multiple IAM bindings can be applied to a project all at once func TestAccProjectIamBinding_multipleAtOnce(t *testing.T) { + // Multiple fine-grained resources + skipIfVcr(t) t.Parallel() org := getTestOrgFromEnv(t) - pid := acctest.RandomWithPrefix("tf-test") + pid := fmt.Sprintf("tf-test-%d", randInt(t)) role := 
"roles/compute.instanceAdmin" role2 := "roles/viewer" - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -96,7 +97,7 @@ func TestAccProjectIamBinding_multipleAtOnce(t *testing.T) { { Config: testAccProject_create(pid, pname, org), Check: resource.ComposeTestCheckFunc( - testAccProjectExistingPolicy(pid), + testAccProjectExistingPolicy(t, pid), ), }, // Apply an IAM binding @@ -114,10 +115,10 @@ func TestAccProjectIamBinding_update(t *testing.T) { t.Parallel() org := getTestOrgFromEnv(t) - pid := acctest.RandomWithPrefix("tf-test") + pid := fmt.Sprintf("tf-test-%d", randInt(t)) role := "roles/compute.instanceAdmin" - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -125,7 +126,7 @@ func TestAccProjectIamBinding_update(t *testing.T) { { Config: testAccProject_create(pid, pname, org), Check: resource.ComposeTestCheckFunc( - testAccProjectExistingPolicy(pid), + testAccProjectExistingPolicy(t, pid), ), }, // Apply an IAM binding @@ -151,14 +152,16 @@ func TestAccProjectIamBinding_update(t *testing.T) { // Test that an IAM binding can be removed from a project func TestAccProjectIamBinding_remove(t *testing.T) { + // Multiple fine-grained resources + skipIfVcr(t) t.Parallel() org := getTestOrgFromEnv(t) - pid := acctest.RandomWithPrefix("tf-test") + pid := fmt.Sprintf("tf-test-%d", randInt(t)) role := "roles/compute.instanceAdmin" role2 := "roles/viewer" - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -166,7 +169,7 @@ func TestAccProjectIamBinding_remove(t *testing.T) { { Config: testAccProject_create(pid, pname, org), Check: resource.ComposeTestCheckFunc( - testAccProjectExistingPolicy(pid), + testAccProjectExistingPolicy(t, pid), ), }, // Apply multiple IAM bindings @@ -180,7 +183,7 @@ func TestAccProjectIamBinding_remove(t *testing.T) { { Config: testAccProject_create(pid, pname, org), Check: resource.ComposeTestCheckFunc( - testAccProjectExistingPolicy(pid), + testAccProjectExistingPolicy(t, pid), ), }, }, @@ -192,9 +195,9 @@ func TestAccProjectIamBinding_noMembers(t *testing.T) { t.Parallel() org := getTestOrgFromEnv(t) - pid := acctest.RandomWithPrefix("tf-test") + pid := fmt.Sprintf("tf-test-%d", randInt(t)) role := "roles/compute.instanceAdmin" - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -202,7 +205,7 @@ func TestAccProjectIamBinding_noMembers(t *testing.T) { { Config: testAccProject_create(pid, pname, org), Check: resource.ComposeTestCheckFunc( - testAccProjectExistingPolicy(pid), + testAccProjectExistingPolicy(t, pid), ), }, // Apply an IAM binding @@ -214,15 +217,14 @@ func TestAccProjectIamBinding_noMembers(t *testing.T) { }) } -<% unless version == 'ga' -%> func TestAccProjectIamBinding_withCondition(t *testing.T) { t.Parallel() org := getTestOrgFromEnv(t) - pid := acctest.RandomWithPrefix("tf-test") + pid := fmt.Sprintf("tf-test-%d", randInt(t)) role := "roles/compute.instanceAdmin" conditionTitle := "expires_after_2019_12_31" - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -230,7 
+232,7 @@ func TestAccProjectIamBinding_withCondition(t *testing.T) { { Config: testAccProject_create(pid, pname, org), Check: resource.ComposeTestCheckFunc( - testAccProjectExistingPolicy(pid), + testAccProjectExistingPolicy(t, pid), ), }, // Apply an IAM binding @@ -246,7 +248,6 @@ func TestAccProjectIamBinding_withCondition(t *testing.T) { }, }) } -<% end -%> func testAccProjectAssociateBindingBasic(pid, name, org, role string) string { return fmt.Sprintf(` @@ -334,7 +335,6 @@ resource "google_project_iam_binding" "acceptance" { `, pid, name, org, role) } -<% unless version == 'ga' -%> func testAccProjectAssociateBinding_withCondition(pid, name, org, role, conditionTitle string) string { return fmt.Sprintf(` resource "google_project" "acceptance" { @@ -355,4 +355,3 @@ resource "google_project_iam_binding" "acceptance" { } `, pid, name, org, role, conditionTitle) } -<% end -%> diff --git a/third_party/terraform/tests/resource_google_project_iam_custom_role_test.go b/third_party/terraform/tests/resource_google_project_iam_custom_role_test.go index c08ac1ff7b30..085090399ff2 100644 --- a/third_party/terraform/tests/resource_google_project_iam_custom_role_test.go +++ b/third_party/terraform/tests/resource_google_project_iam_custom_role_test.go @@ -4,7 +4,6 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" ) @@ -12,12 +11,12 @@ import ( func TestAccProjectIamCustomRole_basic(t *testing.T) { t.Parallel() - roleId := "tfIamCustomRole" + acctest.RandString(10) + roleId := "tfIamCustomRole" + randString(t, 10) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckGoogleProjectIamCustomRoleDestroy, + CheckDestroy: testAccCheckGoogleProjectIamCustomRoleDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccCheckGoogleProjectIamCustomRole_basic(roleId), @@ -43,16 +42,16 @@ func TestAccProjectIamCustomRole_basic(t *testing.T) { func TestAccProjectIamCustomRole_undelete(t *testing.T) { t.Parallel() - roleId := "tfIamCustomRole" + acctest.RandString(10) + roleId := "tfIamCustomRole" + randString(t, 10) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckGoogleProjectIamCustomRoleDestroy, + CheckDestroy: testAccCheckGoogleProjectIamCustomRoleDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccCheckGoogleProjectIamCustomRole_basic(roleId), - Check: testAccCheckGoogleProjectIamCustomRoleDeletionStatus("google_project_iam_custom_role.foo", false), + Check: testAccCheckGoogleProjectIamCustomRoleDeletionStatus(t, "google_project_iam_custom_role.foo", false), }, { ResourceName: "google_project_iam_custom_role.foo", @@ -62,14 +61,14 @@ func TestAccProjectIamCustomRole_undelete(t *testing.T) { // Soft-delete { Config: testAccCheckGoogleProjectIamCustomRole_basic(roleId), - Check: testAccCheckGoogleProjectIamCustomRoleDeletionStatus("google_project_iam_custom_role.foo", true), + Check: testAccCheckGoogleProjectIamCustomRoleDeletionStatus(t, "google_project_iam_custom_role.foo", true), Destroy: true, }, // Terraform doesn't have a config because of Destroy: true, so an import step would fail // Undelete { Config: testAccCheckGoogleProjectIamCustomRole_basic(roleId), - Check: 
testAccCheckGoogleProjectIamCustomRoleDeletionStatus("google_project_iam_custom_role.foo", false), + Check: testAccCheckGoogleProjectIamCustomRoleDeletionStatus(t, "google_project_iam_custom_role.foo", false), }, { ResourceName: "google_project_iam_custom_role.foo", @@ -83,11 +82,11 @@ func TestAccProjectIamCustomRole_undelete(t *testing.T) { func TestAccProjectIamCustomRole_createAfterDestroy(t *testing.T) { t.Parallel() - roleId := "tfIamCustomRole" + acctest.RandString(10) - resource.Test(t, resource.TestCase{ + roleId := "tfIamCustomRole" + randString(t, 10) + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckGoogleProjectIamCustomRoleDestroy, + CheckDestroy: testAccCheckGoogleProjectIamCustomRoleDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccCheckGoogleProjectIamCustomRole_basic(roleId), @@ -115,30 +114,32 @@ func TestAccProjectIamCustomRole_createAfterDestroy(t *testing.T) { }) } -func testAccCheckGoogleProjectIamCustomRoleDestroy(s *terraform.State) error { - config := testAccProvider.Meta().(*Config) +func testAccCheckGoogleProjectIamCustomRoleDestroyProducer(t *testing.T) func(s *terraform.State) error { + return func(s *terraform.State) error { + config := googleProviderConfig(t) - for _, rs := range s.RootModule().Resources { - if rs.Type != "google_project_iam_custom_role" { - continue - } + for _, rs := range s.RootModule().Resources { + if rs.Type != "google_project_iam_custom_role" { + continue + } - role, err := config.clientIAM.Projects.Roles.Get(rs.Primary.ID).Do() + role, err := config.clientIAM.Projects.Roles.Get(rs.Primary.ID).Do() - if err != nil { - return err - } + if err != nil { + return err + } + + if !role.Deleted { + return fmt.Errorf("Iam custom role still exists") + } - if !role.Deleted { - return fmt.Errorf("Iam custom role still exists") } + return nil } - - return nil } -func testAccCheckGoogleProjectIamCustomRoleDeletionStatus(n string, deleted bool) resource.TestCheckFunc { +func testAccCheckGoogleProjectIamCustomRoleDeletionStatus(t *testing.T, n string, deleted bool) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] if !ok { @@ -149,7 +150,7 @@ func testAccCheckGoogleProjectIamCustomRoleDeletionStatus(n string, deleted bool return fmt.Errorf("No ID is set") } - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) role, err := config.clientIAM.Projects.Roles.Get(rs.Primary.ID).Do() if err != nil { diff --git a/third_party/terraform/tests/resource_google_project_iam_member_test.go.erb b/third_party/terraform/tests/resource_google_project_iam_member_test.go.erb index b9a3253652d1..79d73d79d7bf 100644 --- a/third_party/terraform/tests/resource_google_project_iam_member_test.go.erb +++ b/third_party/terraform/tests/resource_google_project_iam_member_test.go.erb @@ -5,7 +5,6 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" ) @@ -23,11 +22,11 @@ func TestAccProjectIamMember_basic(t *testing.T) { t.Parallel() org := getTestOrgFromEnv(t) - pid := acctest.RandomWithPrefix("tf-test") + pid := fmt.Sprintf("tf-test-%d", randInt(t)) resourceName := "google_project_iam_member.acceptance" role := "roles/compute.instanceAdmin" member := "user:admin@hashicorptest.com" - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: 
testAccProviders, Steps: []resource.TestStep{ @@ -35,7 +34,7 @@ func TestAccProjectIamMember_basic(t *testing.T) { { Config: testAccProject_create(pid, pname, org), Check: resource.ComposeTestCheckFunc( - testAccProjectExistingPolicy(pid), + testAccProjectExistingPolicy(t, pid), ), }, // Apply an IAM binding @@ -49,19 +48,21 @@ func TestAccProjectIamMember_basic(t *testing.T) { // Test that multiple IAM bindings can be applied to a project func TestAccProjectIamMember_multiple(t *testing.T) { + // Multiple fine-grained resources + skipIfVcr(t) t.Parallel() org := getTestOrgFromEnv(t) skipIfEnvNotSet(t, "GOOGLE_ORG") - pid := acctest.RandomWithPrefix("tf-test") + pid := fmt.Sprintf("tf-test-%d", randInt(t)) resourceName := "google_project_iam_member.acceptance" resourceName2 := "google_project_iam_member.multiple" role := "roles/compute.instanceAdmin" member := "user:admin@hashicorptest.com" member2 := "user:paddy@hashicorp.com" - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -69,7 +70,7 @@ func TestAccProjectIamMember_multiple(t *testing.T) { { Config: testAccProject_create(pid, pname, org), Check: resource.ComposeTestCheckFunc( - testAccProjectExistingPolicy(pid), + testAccProjectExistingPolicy(t, pid), ), }, // Apply an IAM binding @@ -90,18 +91,20 @@ func TestAccProjectIamMember_multiple(t *testing.T) { // Test that an IAM binding can be removed from a project func TestAccProjectIamMember_remove(t *testing.T) { + // Multiple fine-grained resources + skipIfVcr(t) t.Parallel() org := getTestOrgFromEnv(t) skipIfEnvNotSet(t, "GOOGLE_ORG") - pid := acctest.RandomWithPrefix("tf-test") + pid := fmt.Sprintf("tf-test-%d", randInt(t)) resourceName := "google_project_iam_member.acceptance" role := "roles/compute.instanceAdmin" member := "user:admin@hashicorptest.com" member2 := "user:paddy@hashicorp.com" - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -109,7 +112,7 @@ func TestAccProjectIamMember_remove(t *testing.T) { { Config: testAccProject_create(pid, pname, org), Check: resource.ComposeTestCheckFunc( - testAccProjectExistingPolicy(pid), + testAccProjectExistingPolicy(t, pid), ), }, @@ -124,24 +127,23 @@ func TestAccProjectIamMember_remove(t *testing.T) { { Config: testAccProject_create(pid, pname, org), Check: resource.ComposeTestCheckFunc( - testAccProjectExistingPolicy(pid), + testAccProjectExistingPolicy(t, pid), ), }, }, }) } -<% unless version == 'ga' -%> func TestAccProjectIamMember_withCondition(t *testing.T) { t.Parallel() org := getTestOrgFromEnv(t) - pid := acctest.RandomWithPrefix("tf-test") + pid := fmt.Sprintf("tf-test-%d", randInt(t)) resourceName := "google_project_iam_member.acceptance" role := "roles/compute.instanceAdmin" member := "user:admin@hashicorptest.com" conditionTitle := "expires_after_2019_12_31" - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -149,7 +151,7 @@ func TestAccProjectIamMember_withCondition(t *testing.T) { { Config: testAccProject_create(pid, pname, org), Check: resource.ComposeTestCheckFunc( - testAccProjectExistingPolicy(pid), + testAccProjectExistingPolicy(t, pid), ), }, // Apply an IAM binding @@ -165,7 +167,6 @@ func TestAccProjectIamMember_withCondition(t *testing.T) { }, }) } -<% end 
-%> func testAccProjectAssociateMemberBasic(pid, name, org, role, member string) string { return fmt.Sprintf(` @@ -205,7 +206,6 @@ resource "google_project_iam_member" "multiple" { `, pid, name, org, role, member, role2, member2) } -<% unless version == 'ga' -%> func testAccProjectAssociateMember_withCondition(pid, name, org, role, member, conditionTitle string) string { return fmt.Sprintf(` resource "google_project" "acceptance" { @@ -226,4 +226,3 @@ resource "google_project_iam_member" "acceptance" { } `, pid, name, org, role, member, conditionTitle) } -<% end -%> diff --git a/third_party/terraform/tests/resource_google_project_iam_policy_test.go.erb b/third_party/terraform/tests/resource_google_project_iam_policy_test.go.erb index cb7952f15198..7de89ee41280 100644 --- a/third_party/terraform/tests/resource_google_project_iam_policy_test.go.erb +++ b/third_party/terraform/tests/resource_google_project_iam_policy_test.go.erb @@ -6,7 +6,6 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" "google.golang.org/api/cloudresourcemanager/v1" @@ -17,8 +16,8 @@ func TestAccProjectIamPolicy_basic(t *testing.T) { t.Parallel() org := getTestOrgFromEnv(t) - pid := acctest.RandomWithPrefix("tf-test") - resource.Test(t, resource.TestCase{ + pid := fmt.Sprintf("tf-test-%d", randInt(t)) + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -26,7 +25,7 @@ func TestAccProjectIamPolicy_basic(t *testing.T) { { Config: testAccProject_create(pid, pname, org), Check: resource.ComposeTestCheckFunc( - testAccProjectExistingPolicy(pid), + testAccProjectExistingPolicy(t, pid), ), }, // Apply an IAM policy from a data source. The application @@ -47,8 +46,8 @@ func TestAccProjectIamPolicy_emptyMembers(t *testing.T) { t.Parallel() org := getTestOrgFromEnv(t) - pid := acctest.RandomWithPrefix("tf-test") - resource.Test(t, resource.TestCase{ + pid := fmt.Sprintf("tf-test-%d", randInt(t)) + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -64,8 +63,8 @@ func TestAccProjectIamPolicy_expanded(t *testing.T) { t.Parallel() org := getTestOrgFromEnv(t) - pid := acctest.RandomWithPrefix("tf-test") - resource.Test(t, resource.TestCase{ + pid := fmt.Sprintf("tf-test-%d", randInt(t)) + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -84,8 +83,8 @@ func TestAccProjectIamPolicy_basicAuditConfig(t *testing.T) { t.Parallel() org := getTestOrgFromEnv(t) - pid := acctest.RandomWithPrefix("tf-test") - resource.Test(t, resource.TestCase{ + pid := fmt.Sprintf("tf-test-%d", randInt(t)) + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -93,7 +92,7 @@ func TestAccProjectIamPolicy_basicAuditConfig(t *testing.T) { { Config: testAccProject_create(pid, pname, org), Check: resource.ComposeTestCheckFunc( - testAccProjectExistingPolicy(pid), + testAccProjectExistingPolicy(t, pid), ), }, // Apply an IAM policy from a data source. 
The application @@ -114,8 +113,8 @@ func TestAccProjectIamPolicy_expandedAuditConfig(t *testing.T) { t.Parallel() org := getTestOrgFromEnv(t) - pid := acctest.RandomWithPrefix("tf-test") - resource.Test(t, resource.TestCase{ + pid := fmt.Sprintf("tf-test-%d", randInt(t)) + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -129,13 +128,12 @@ func TestAccProjectIamPolicy_expandedAuditConfig(t *testing.T) { }) } -<% unless version == 'ga' -%> func TestAccProjectIamPolicy_withCondition(t *testing.T) { t.Parallel() org := getTestOrgFromEnv(t) - pid := acctest.RandomWithPrefix("tf-test") - resource.Test(t, resource.TestCase{ + pid := fmt.Sprintf("tf-test-%d", randInt(t)) + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -143,7 +141,7 @@ func TestAccProjectIamPolicy_withCondition(t *testing.T) { { Config: testAccProject_create(pid, pname, org), Check: resource.ComposeTestCheckFunc( - testAccProjectExistingPolicy(pid), + testAccProjectExistingPolicy(t, pid), ), }, // Apply an IAM policy from a data source. The application @@ -158,7 +156,6 @@ func TestAccProjectIamPolicy_withCondition(t *testing.T) { }, }) } -<% end -%> func getStatePrimaryResource(s *terraform.State, res, expectedID string) (*terraform.InstanceState, error) { // Get the project resource @@ -166,7 +163,7 @@ func getStatePrimaryResource(s *terraform.State, res, expectedID string) (*terra if !ok { return nil, fmt.Errorf("Not found: %s", res) } - if resource.Primary.Attributes["id"] != expectedID && expectedID != "" { + if expectedID != "" && !compareProjectName("", resource.Primary.Attributes["id"], expectedID, nil) { return nil, fmt.Errorf("Expected project %q to match ID %q in state", resource.Primary.ID, expectedID) } return resource.Primary, nil @@ -217,9 +214,9 @@ func testAccCheckGoogleProjectIamPolicyExists(projectRes, policyRes, pid string) } // Confirm that a project has an IAM policy with at least 1 binding -func testAccProjectExistingPolicy(pid string) resource.TestCheckFunc { +func testAccProjectExistingPolicy(t *testing.T, pid string) resource.TestCheckFunc { return func(s *terraform.State) error { - c := testAccProvider.Meta().(*Config) + c := googleProviderConfig(t) var err error originalPolicy, err = getProjectIamPolicy(pid, c) if err != nil { @@ -432,7 +429,6 @@ data "google_iam_policy" "expanded" { `, pid, name, org) } -<% unless version == 'ga' -%> func testAccProjectAssociatePolicy_withCondition(pid, name, org string) string { return fmt.Sprintf(` resource "google_project" "acceptance" { @@ -468,4 +464,3 @@ data "google_iam_policy" "admin" { } `, pid, name, org) } -<% end -%> diff --git a/third_party/terraform/tests/resource_google_project_organization_policy_test.go b/third_party/terraform/tests/resource_google_project_organization_policy_test.go index e6fbdbcf7a8a..641cecad69fd 100644 --- a/third_party/terraform/tests/resource_google_project_organization_policy_test.go +++ b/third_party/terraform/tests/resource_google_project_organization_policy_test.go @@ -40,20 +40,20 @@ func TestAccProjectOrganizationPolicy(t *testing.T) { func testAccProjectOrganizationPolicy_boolean(t *testing.T) { projectId := getTestProjectFromEnv() - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckGoogleProjectOrganizationPolicyDestroy, + CheckDestroy: 
testAccCheckGoogleProjectOrganizationPolicyDestroyProducer(t), Steps: []resource.TestStep{ { // Test creation of an enforced boolean policy Config: testAccProjectOrganizationPolicyConfig_boolean(projectId, true), - Check: testAccCheckGoogleProjectOrganizationBooleanPolicy("bool", true), + Check: testAccCheckGoogleProjectOrganizationBooleanPolicy(t, "bool", true), }, { // Test update from enforced to not Config: testAccProjectOrganizationPolicyConfig_boolean(projectId, false), - Check: testAccCheckGoogleProjectOrganizationBooleanPolicy("bool", false), + Check: testAccCheckGoogleProjectOrganizationBooleanPolicy(t, "bool", false), }, { Config: " ", @@ -62,12 +62,12 @@ func testAccProjectOrganizationPolicy_boolean(t *testing.T) { { // Test creation of a not enforced boolean policy Config: testAccProjectOrganizationPolicyConfig_boolean(projectId, false), - Check: testAccCheckGoogleProjectOrganizationBooleanPolicy("bool", false), + Check: testAccCheckGoogleProjectOrganizationBooleanPolicy(t, "bool", false), }, { // Test update from not enforced to enforced Config: testAccProjectOrganizationPolicyConfig_boolean(projectId, true), - Check: testAccCheckGoogleProjectOrganizationBooleanPolicy("bool", true), + Check: testAccCheckGoogleProjectOrganizationBooleanPolicy(t, "bool", true), }, }, }) @@ -76,14 +76,14 @@ func testAccProjectOrganizationPolicy_boolean(t *testing.T) { func testAccProjectOrganizationPolicy_list_allowAll(t *testing.T) { projectId := getTestProjectFromEnv() - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckGoogleProjectOrganizationPolicyDestroy, + CheckDestroy: testAccCheckGoogleProjectOrganizationPolicyDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccProjectOrganizationPolicyConfig_list_allowAll(projectId), - Check: testAccCheckGoogleProjectOrganizationListPolicyAll("list", "ALLOW"), + Check: testAccCheckGoogleProjectOrganizationListPolicyAll(t, "list", "ALLOW"), }, { ResourceName: "google_project_organization_policy.list", @@ -97,14 +97,14 @@ func testAccProjectOrganizationPolicy_list_allowAll(t *testing.T) { func testAccProjectOrganizationPolicy_list_allowSome(t *testing.T) { project := getTestProjectFromEnv() canonicalProject := canonicalProjectId(project) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckGoogleProjectOrganizationPolicyDestroy, + CheckDestroy: testAccCheckGoogleProjectOrganizationPolicyDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccProjectOrganizationPolicyConfig_list_allowSome(project), - Check: testAccCheckGoogleProjectOrganizationListPolicyAllowedValues("list", []string{canonicalProject}), + Check: testAccCheckGoogleProjectOrganizationListPolicyAllowedValues(t, "list", []string{canonicalProject}), }, { ResourceName: "google_project_organization_policy.list", @@ -117,14 +117,14 @@ func testAccProjectOrganizationPolicy_list_allowSome(t *testing.T) { func testAccProjectOrganizationPolicy_list_denySome(t *testing.T) { projectId := getTestProjectFromEnv() - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckGoogleProjectOrganizationPolicyDestroy, + CheckDestroy: testAccCheckGoogleProjectOrganizationPolicyDestroyProducer(t), Steps: []resource.TestStep{ { Config: 
testAccProjectOrganizationPolicyConfig_list_denySome(projectId), - Check: testAccCheckGoogleProjectOrganizationListPolicyDeniedValues("list", DENIED_ORG_POLICIES), + Check: testAccCheckGoogleProjectOrganizationListPolicyDeniedValues(t, "list", DENIED_ORG_POLICIES), }, { ResourceName: "google_project_organization_policy.list", @@ -137,18 +137,18 @@ func testAccProjectOrganizationPolicy_list_denySome(t *testing.T) { func testAccProjectOrganizationPolicy_list_update(t *testing.T) { projectId := getTestProjectFromEnv() - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckGoogleProjectOrganizationPolicyDestroy, + CheckDestroy: testAccCheckGoogleProjectOrganizationPolicyDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccProjectOrganizationPolicyConfig_list_allowAll(projectId), - Check: testAccCheckGoogleProjectOrganizationListPolicyAll("list", "ALLOW"), + Check: testAccCheckGoogleProjectOrganizationListPolicyAll(t, "list", "ALLOW"), }, { Config: testAccProjectOrganizationPolicyConfig_list_denySome(projectId), - Check: testAccCheckGoogleProjectOrganizationListPolicyDeniedValues("list", DENIED_ORG_POLICIES), + Check: testAccCheckGoogleProjectOrganizationListPolicyDeniedValues(t, "list", DENIED_ORG_POLICIES), }, { ResourceName: "google_project_organization_policy.list", @@ -162,14 +162,14 @@ func testAccProjectOrganizationPolicy_list_update(t *testing.T) { func testAccProjectOrganizationPolicy_restore_defaultTrue(t *testing.T) { projectId := getTestProjectFromEnv() - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckGoogleProjectOrganizationPolicyDestroy, + CheckDestroy: testAccCheckGoogleProjectOrganizationPolicyDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccProjectOrganizationPolicyConfig_restore_defaultTrue(projectId), - Check: getGoogleProjectOrganizationRestoreDefaultTrue("restore", &cloudresourcemanager.RestoreDefault{}), + Check: getGoogleProjectOrganizationRestoreDefaultTrue(t, "restore", &cloudresourcemanager.RestoreDefault{}), }, { ResourceName: "google_project_organization_policy.restore", @@ -183,14 +183,14 @@ func testAccProjectOrganizationPolicy_restore_defaultTrue(t *testing.T) { func testAccProjectOrganizationPolicy_none(t *testing.T) { projectId := getTestProjectFromEnv() - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckGoogleProjectOrganizationPolicyDestroy, + CheckDestroy: testAccCheckGoogleProjectOrganizationPolicyDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccProjectOrganizationPolicyConfig_none(projectId), - Check: testAccCheckGoogleProjectOrganizationPolicyDestroy, + Check: testAccCheckGoogleProjectOrganizationPolicyDestroyProducer(t), }, { ResourceName: "google_project_organization_policy.none", @@ -201,34 +201,36 @@ func testAccProjectOrganizationPolicy_none(t *testing.T) { }) } -func testAccCheckGoogleProjectOrganizationPolicyDestroy(s *terraform.State) error { - config := testAccProvider.Meta().(*Config) - - for _, rs := range s.RootModule().Resources { - if rs.Type != "google_project_organization_policy" { - continue - } - - projectId := canonicalProjectId(rs.Primary.Attributes["project"]) - constraint := canonicalOrgPolicyConstraint(rs.Primary.Attributes["constraint"]) - 
policy, err := config.clientResourceManager.Projects.GetOrgPolicy(projectId, &cloudresourcemanager.GetOrgPolicyRequest{ - Constraint: constraint, - }).Do() - - if err != nil { - return err - } - - if policy.ListPolicy != nil || policy.BooleanPolicy != nil { - return fmt.Errorf("Org policy with constraint '%s' hasn't been cleared", constraint) +func testAccCheckGoogleProjectOrganizationPolicyDestroyProducer(t *testing.T) func(s *terraform.State) error { + return func(s *terraform.State) error { + config := googleProviderConfig(t) + + for _, rs := range s.RootModule().Resources { + if rs.Type != "google_project_organization_policy" { + continue + } + + projectId := canonicalProjectId(rs.Primary.Attributes["project"]) + constraint := canonicalOrgPolicyConstraint(rs.Primary.Attributes["constraint"]) + policy, err := config.clientResourceManager.Projects.GetOrgPolicy(projectId, &cloudresourcemanager.GetOrgPolicyRequest{ + Constraint: constraint, + }).Do() + + if err != nil { + return err + } + + if policy.ListPolicy != nil || policy.BooleanPolicy != nil { + return fmt.Errorf("Org policy with constraint '%s' hasn't been cleared", constraint) + } } + return nil } - return nil } -func testAccCheckGoogleProjectOrganizationBooleanPolicy(n string, enforced bool) resource.TestCheckFunc { +func testAccCheckGoogleProjectOrganizationBooleanPolicy(t *testing.T, n string, enforced bool) resource.TestCheckFunc { return func(s *terraform.State) error { - policy, err := getGoogleProjectOrganizationPolicyTestResource(s, n) + policy, err := getGoogleProjectOrganizationPolicyTestResource(t, s, n) if err != nil { return err } @@ -241,9 +243,9 @@ func testAccCheckGoogleProjectOrganizationBooleanPolicy(n string, enforced bool) } } -func testAccCheckGoogleProjectOrganizationListPolicyAll(n, policyType string) resource.TestCheckFunc { +func testAccCheckGoogleProjectOrganizationListPolicyAll(t *testing.T, n, policyType string) resource.TestCheckFunc { return func(s *terraform.State) error { - policy, err := getGoogleProjectOrganizationPolicyTestResource(s, n) + policy, err := getGoogleProjectOrganizationPolicyTestResource(t, s, n) if err != nil { return err } @@ -264,9 +266,9 @@ func testAccCheckGoogleProjectOrganizationListPolicyAll(n, policyType string) re } } -func testAccCheckGoogleProjectOrganizationListPolicyAllowedValues(n string, values []string) resource.TestCheckFunc { +func testAccCheckGoogleProjectOrganizationListPolicyAllowedValues(t *testing.T, n string, values []string) resource.TestCheckFunc { return func(s *terraform.State) error { - policy, err := getGoogleProjectOrganizationPolicyTestResource(s, n) + policy, err := getGoogleProjectOrganizationPolicyTestResource(t, s, n) if err != nil { return err } @@ -281,9 +283,9 @@ func testAccCheckGoogleProjectOrganizationListPolicyAllowedValues(n string, valu } } -func testAccCheckGoogleProjectOrganizationListPolicyDeniedValues(n string, values []string) resource.TestCheckFunc { +func testAccCheckGoogleProjectOrganizationListPolicyDeniedValues(t *testing.T, n string, values []string) resource.TestCheckFunc { return func(s *terraform.State) error { - policy, err := getGoogleProjectOrganizationPolicyTestResource(s, n) + policy, err := getGoogleProjectOrganizationPolicyTestResource(t, s, n) if err != nil { return err } @@ -298,10 +300,10 @@ func testAccCheckGoogleProjectOrganizationListPolicyDeniedValues(n string, value } } -func getGoogleProjectOrganizationRestoreDefaultTrue(n string, policyDefault *cloudresourcemanager.RestoreDefault) resource.TestCheckFunc { 
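
The hunks above and below all make the same transformation: a package-level `CheckDestroy` function that reached into the shared `testAccProvider` becomes a "producer" that takes the test's `*testing.T` and returns the closure the SDK actually invokes, so each check can fetch its per-test (and, presumably, VCR-aware) provider configuration via `googleProviderConfig(t)`. For readability, here is the final shape of the custom-role destroy check from earlier in this diff, reproduced without the interleaved `-`/`+` noise:

```go
package google

import (
	"fmt"
	"testing"

	"github.com/hashicorp/terraform-plugin-sdk/terraform"
)

// The outer function is the "producer": it captures *testing.T and returns
// the func(s *terraform.State) error that resource.TestCase.CheckDestroy expects.
func testAccCheckGoogleProjectIamCustomRoleDestroyProducer(t *testing.T) func(s *terraform.State) error {
	return func(s *terraform.State) error {
		// Per-test config instead of the shared testAccProvider.Meta().(*Config).
		config := googleProviderConfig(t)

		for _, rs := range s.RootModule().Resources {
			if rs.Type != "google_project_iam_custom_role" {
				continue
			}
			role, err := config.clientIAM.Projects.Roles.Get(rs.Primary.ID).Do()
			if err != nil {
				return err
			}
			// Custom roles are soft-deleted, so "destroyed" means Deleted is set.
			if !role.Deleted {
				return fmt.Errorf("Iam custom role still exists")
			}
		}
		return nil
	}
}
```

The same producer shape recurs for the org-policy and healthcare checks; only the client call and the "still exists" condition differ.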
+func getGoogleProjectOrganizationRestoreDefaultTrue(t *testing.T, n string, policyDefault *cloudresourcemanager.RestoreDefault) resource.TestCheckFunc { return func(s *terraform.State) error { - policy, err := getGoogleProjectOrganizationPolicyTestResource(s, n) + policy, err := getGoogleProjectOrganizationPolicyTestResource(t, s, n) if err != nil { return err } @@ -314,7 +316,7 @@ func getGoogleProjectOrganizationRestoreDefaultTrue(n string, policyDefault *clo } } -func getGoogleProjectOrganizationPolicyTestResource(s *terraform.State, n string) (*cloudresourcemanager.OrgPolicy, error) { +func getGoogleProjectOrganizationPolicyTestResource(t *testing.T, s *terraform.State, n string) (*cloudresourcemanager.OrgPolicy, error) { rn := "google_project_organization_policy." + n rs, ok := s.RootModule().Resources[rn] if !ok { @@ -325,7 +327,7 @@ func getGoogleProjectOrganizationPolicyTestResource(s *terraform.State, n string return nil, fmt.Errorf("No ID is set") } - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) projectId := canonicalProjectId(rs.Primary.Attributes["project"]) return config.clientResourceManager.Projects.GetOrgPolicy(projectId, &cloudresourcemanager.GetOrgPolicyRequest{ diff --git a/third_party/terraform/tests/resource_google_project_service_test.go b/third_party/terraform/tests/resource_google_project_service_test.go index 73d32b9eb917..958502dcc949 100644 --- a/third_party/terraform/tests/resource_google_project_service_test.go +++ b/third_party/terraform/tests/resource_google_project_service_test.go @@ -6,7 +6,6 @@ import ( "testing" "time" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" ) @@ -14,18 +13,20 @@ import ( // Test that services can be enabled and disabled on a project func TestAccProjectService_basic(t *testing.T) { t.Parallel() + // Multiple fine-grained resources + skipIfVcr(t) org := getTestOrgFromEnv(t) - pid := acctest.RandomWithPrefix("tf-test") + pid := fmt.Sprintf("tf-test-%d", randInt(t)) services := []string{"iam.googleapis.com", "cloudresourcemanager.googleapis.com"} - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ { Config: testAccProjectService_basic(services, pid, pname, org), Check: resource.ComposeTestCheckFunc( - testAccCheckProjectService(services, pid, true), + testAccCheckProjectService(t, services, pid, true), ), }, { @@ -44,21 +45,21 @@ func TestAccProjectService_basic(t *testing.T) { { Config: testAccProject_create(pid, pname, org), Check: resource.ComposeTestCheckFunc( - testAccCheckProjectService(services, pid, false), + testAccCheckProjectService(t, services, pid, false), ), }, // Create services with disabling turned off. { Config: testAccProjectService_noDisable(services, pid, pname, org), Check: resource.ComposeTestCheckFunc( - testAccCheckProjectService(services, pid, true), + testAccCheckProjectService(t, services, pid, true), ), }, // Check that services are still enabled even after the resources are deleted. 
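
Alongside the producer conversions, every test body in these files follows the same set of swaps: `resource.Test` becomes `vcrTest`, `acctest.RandString`/`acctest.RandomWithPrefix` become `randString(t, n)`/`randInt(t)`, and tests flagged with comments like "Multiple fine-grained resources" or "Resource creation race condition" opt out of cassette replay with `skipIfVcr(t)`. A minimal sketch of the resulting convention, assuming the harness helpers named in the diff (`vcrTest`, `skipIfVcr`, `randInt`, `testAccPreCheck`, `testAccProviders`); `testAccFoo_basic` is a hypothetical config helper, and the comment about stable names is an inference from how the helpers are substituted throughout this change:

```go
package google

import (
	"fmt"
	"testing"

	"github.com/hashicorp/terraform-plugin-sdk/helper/resource"
)

func TestAccFoo_basic(t *testing.T) {
	// Tests that create several fine-grained resources can't be replayed
	// reliably from a recorded cassette, so they skip VCR entirely.
	skipIfVcr(t)
	t.Parallel()

	// randInt/randString replace acctest so generated names can line up
	// between a recording run and its replay (assumption, based on the
	// uniform substitution in this diff).
	pid := fmt.Sprintf("tf-test-%d", randInt(t))

	vcrTest(t, resource.TestCase{
		PreCheck:  func() { testAccPreCheck(t) },
		Providers: testAccProviders,
		Steps: []resource.TestStep{
			{Config: testAccFoo_basic(pid)}, // hypothetical config helper
		},
	})
}
```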
{ Config: testAccProject_create(pid, pname, org), Check: resource.ComposeTestCheckFunc( - testAccCheckProjectService(services, pid, true), + testAccCheckProjectService(t, services, pid, true), ), }, }, @@ -66,14 +67,16 @@ func TestAccProjectService_basic(t *testing.T) { } func TestAccProjectService_disableDependentServices(t *testing.T) { + // Multiple fine-grained resources + skipIfVcr(t) t.Parallel() org := getTestOrgFromEnv(t) billingId := getTestBillingAccountFromEnv(t) - pid := acctest.RandomWithPrefix("tf-test") + pid := fmt.Sprintf("tf-test-%d", randInt(t)) services := []string{"cloudbuild.googleapis.com", "containerregistry.googleapis.com"} - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -111,16 +114,16 @@ func TestAccProjectService_handleNotFound(t *testing.T) { t.Parallel() org := getTestOrgFromEnv(t) - pid := acctest.RandomWithPrefix("tf-test") + pid := fmt.Sprintf("tf-test-%d", randInt(t)) service := "iam.googleapis.com" - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ { Config: testAccProjectService_handleNotFound(service, pid, pname, org), Check: resource.ComposeTestCheckFunc( - testAccCheckProjectService([]string{service}, pid, true), + testAccCheckProjectService(t, []string{service}, pid, true), ), }, // Delete the project, implicitly deletes service, expect the plan to want to create the service again @@ -136,8 +139,8 @@ func TestAccProjectService_renamedService(t *testing.T) { t.Parallel() org := getTestOrgFromEnv(t) - pid := acctest.RandomWithPrefix("tf-test") - resource.Test(t, resource.TestCase{ + pid := fmt.Sprintf("tf-test-%d", randInt(t)) + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -154,9 +157,9 @@ func TestAccProjectService_renamedService(t *testing.T) { }) } -func testAccCheckProjectService(services []string, pid string, expectEnabled bool) resource.TestCheckFunc { +func testAccCheckProjectService(t *testing.T, services []string, pid string, expectEnabled bool) resource.TestCheckFunc { return func(s *terraform.State) error { - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) currentlyEnabled, err := listCurrentlyEnabledServices(pid, config, time.Minute*10) if err != nil { diff --git a/third_party/terraform/tests/resource_google_project_test.go b/third_party/terraform/tests/resource_google_project_test.go index 38a0f340ef3f..7605f2289fb4 100644 --- a/third_party/terraform/tests/resource_google_project_test.go +++ b/third_party/terraform/tests/resource_google_project_test.go @@ -11,7 +11,6 @@ import ( "testing" "github.com/davecgh/go-spew/spew" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" "google.golang.org/api/cloudresourcemanager/v1" @@ -81,8 +80,8 @@ func TestAccProject_createWithoutOrg(t *testing.T) { t.Skip("Service accounts cannot create projects without a parent. 
Requires user credentials.") } - pid := acctest.RandomWithPrefix(testPrefix) - resource.Test(t, resource.TestCase{ + pid := fmt.Sprintf("%s-%d", testPrefix, randInt(t)) + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -103,8 +102,8 @@ func TestAccProject_create(t *testing.T) { t.Parallel() org := getTestOrgFromEnv(t) - pid := acctest.RandomWithPrefix(testPrefix) - resource.Test(t, resource.TestCase{ + pid := fmt.Sprintf("%s-%d", testPrefix, randInt(t)) + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -127,8 +126,8 @@ func TestAccProject_billing(t *testing.T) { skipIfEnvNotSet(t, "GOOGLE_BILLING_ACCOUNT_2") billingId2 := os.Getenv("GOOGLE_BILLING_ACCOUNT_2") billingId := getTestBillingAccountFromEnv(t) - pid := acctest.RandomWithPrefix(testPrefix) - resource.Test(t, resource.TestCase{ + pid := fmt.Sprintf("%s-%d", testPrefix, randInt(t)) + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -136,7 +135,7 @@ func TestAccProject_billing(t *testing.T) { { Config: testAccProject_createBilling(pid, pname, org, billingId), Check: resource.ComposeTestCheckFunc( - testAccCheckGoogleProjectHasBillingAccount("google_project.acceptance", pid, billingId), + testAccCheckGoogleProjectHasBillingAccount(t, "google_project.acceptance", pid, billingId), ), }, // Make sure import supports billing account @@ -150,14 +149,14 @@ func TestAccProject_billing(t *testing.T) { { Config: testAccProject_createBilling(pid, pname, org, billingId2), Check: resource.ComposeTestCheckFunc( - testAccCheckGoogleProjectHasBillingAccount("google_project.acceptance", pid, billingId2), + testAccCheckGoogleProjectHasBillingAccount(t, "google_project.acceptance", pid, billingId2), ), }, // Unlink the billing account { Config: testAccProject_create(pid, pname, org), Check: resource.ComposeTestCheckFunc( - testAccCheckGoogleProjectHasBillingAccount("google_project.acceptance", pid, ""), + testAccCheckGoogleProjectHasBillingAccount(t, "google_project.acceptance", pid, ""), ), }, }, @@ -169,15 +168,15 @@ func TestAccProject_labels(t *testing.T) { t.Parallel() org := getTestOrgFromEnv(t) - pid := acctest.RandomWithPrefix(testPrefix) - resource.Test(t, resource.TestCase{ + pid := fmt.Sprintf("%s-%d", testPrefix, randInt(t)) + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ { Config: testAccProject_labels(pid, pname, org, map[string]string{"test": "that"}), Check: resource.ComposeTestCheckFunc( - testAccCheckGoogleProjectHasLabels("google_project.acceptance", pid, map[string]string{"test": "that"}), + testAccCheckGoogleProjectHasLabels(t, "google_project.acceptance", pid, map[string]string{"test": "that"}), ), }, // Make sure import supports labels @@ -192,7 +191,7 @@ func TestAccProject_labels(t *testing.T) { Config: testAccProject_labels(pid, pname, org, map[string]string{"label": "label-value"}), Check: resource.ComposeTestCheckFunc( testAccCheckGoogleProjectExists("google_project.acceptance", pid), - testAccCheckGoogleProjectHasLabels("google_project.acceptance", pid, map[string]string{"label": "label-value"}), + testAccCheckGoogleProjectHasLabels(t, "google_project.acceptance", pid, map[string]string{"label": "label-value"}), ), }, // update project delete labels @@ -200,7 +199,7 @@ func TestAccProject_labels(t 
*testing.T) { Config: testAccProject_create(pid, pname, org), Check: resource.ComposeTestCheckFunc( testAccCheckGoogleProjectExists("google_project.acceptance", pid), - testAccCheckGoogleProjectHasNoLabels("google_project.acceptance", pid), + testAccCheckGoogleProjectHasNoLabels(t, "google_project.acceptance", pid), ), }, }, @@ -211,9 +210,9 @@ func TestAccProject_deleteDefaultNetwork(t *testing.T) { t.Parallel() org := getTestOrgFromEnv(t) - pid := acctest.RandomWithPrefix(testPrefix) + pid := fmt.Sprintf("%s-%d", testPrefix, randInt(t)) billingId := getTestBillingAccountFromEnv(t) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -228,9 +227,9 @@ func TestAccProject_parentFolder(t *testing.T) { t.Parallel() org := getTestOrgFromEnv(t) - pid := acctest.RandomWithPrefix(testPrefix) - folderDisplayName := testPrefix + acctest.RandString(10) - resource.Test(t, resource.TestCase{ + pid := fmt.Sprintf("%s-%d", testPrefix, randInt(t)) + folderDisplayName := testPrefix + randString(t, 10) + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -261,7 +260,7 @@ func testAccCheckGoogleProjectExists(r, pid string) resource.TestCheckFunc { } } -func testAccCheckGoogleProjectHasBillingAccount(r, pid, billingId string) resource.TestCheckFunc { +func testAccCheckGoogleProjectHasBillingAccount(t *testing.T, r, pid, billingId string) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[r] if !ok { @@ -275,7 +274,7 @@ func testAccCheckGoogleProjectHasBillingAccount(r, pid, billingId string) resour // Actual value in API should match state and expected // Read the billing account - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) ba, err := config.clientBilling.Projects.GetBillingInfo(prefixedProject(pid)).Do() if err != nil { return fmt.Errorf("Error reading billing account for project %q: %v", prefixedProject(pid), err) @@ -287,7 +286,7 @@ func testAccCheckGoogleProjectHasBillingAccount(r, pid, billingId string) resour } } -func testAccCheckGoogleProjectHasLabels(r, pid string, expected map[string]string) resource.TestCheckFunc { +func testAccCheckGoogleProjectHasLabels(t *testing.T, r, pid string, expected map[string]string) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[r] if !ok { @@ -300,7 +299,7 @@ func testAccCheckGoogleProjectHasLabels(r, pid string, expected map[string]strin } // Actual value in API should match state and expected - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) found, err := config.clientResourceManager.Projects.Get(pid).Do() if err != nil { @@ -329,7 +328,7 @@ func testAccCheckGoogleProjectHasLabels(r, pid string, expected map[string]strin } } -func testAccCheckGoogleProjectHasNoLabels(r, pid string) resource.TestCheckFunc { +func testAccCheckGoogleProjectHasNoLabels(t *testing.T, r, pid string) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[r] if !ok { @@ -342,7 +341,7 @@ func testAccCheckGoogleProjectHasNoLabels(r, pid string) resource.TestCheckFunc } // Actual value in API should match state and expected - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) found, err := config.clientResourceManager.Projects.Get(pid).Do() if err != nil { diff 
--git a/third_party/terraform/tests/resource_google_security_scanner_scan_config_test.go.erb b/third_party/terraform/tests/resource_google_security_scanner_scan_config_test.go.erb index 39516c3d871b..7cc519ea5610 100644 --- a/third_party/terraform/tests/resource_google_security_scanner_scan_config_test.go.erb +++ b/third_party/terraform/tests/resource_google_security_scanner_scan_config_test.go.erb @@ -5,15 +5,14 @@ package google import ( "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" ) func TestAccSecurityScannerScanConfig_scanConfigUpdate(t *testing.T) { t.Parallel() - firstAddressSuffix := acctest.RandString(10) - secondAddressSuffix := acctest.RandString(10) + firstAddressSuffix := randString(t, 10) + secondAddressSuffix := randString(t, 10) context := map[string]interface{}{ "random_suffix": firstAddressSuffix, "random_suffix2": secondAddressSuffix, @@ -31,10 +30,10 @@ func TestAccSecurityScannerScanConfig_scanConfigUpdate(t *testing.T) { "max_qps": 20, } - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckSecurityScannerScanConfigDestroy, + CheckDestroy: testAccCheckSecurityScannerScanConfigDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccSecurityScannerScanConfig(context), diff --git a/third_party/terraform/tests/resource_google_service_account_iam_test.go.erb b/third_party/terraform/tests/resource_google_service_account_iam_test.go.erb index 35c7be01725e..50da43d41a9b 100644 --- a/third_party/terraform/tests/resource_google_service_account_iam_test.go.erb +++ b/third_party/terraform/tests/resource_google_service_account_iam_test.go.erb @@ -5,7 +5,6 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" ) @@ -13,15 +12,15 @@ import ( func TestAccServiceAccountIamBinding(t *testing.T) { t.Parallel() - account := acctest.RandomWithPrefix("tf-test") + account := fmt.Sprintf("tf-test-%d", randInt(t)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ { Config: testAccServiceAccountIamBinding_basic(account), - Check: testAccCheckGoogleServiceAccountIam(account, 1), + Check: testAccCheckGoogleServiceAccountIam(t, account, 1), }, { ResourceName: "google_service_account_iam_binding.foo", @@ -33,21 +32,20 @@ func TestAccServiceAccountIamBinding(t *testing.T) { }) } -<% unless version == 'ga' -%> func TestAccServiceAccountIamBinding_withCondition(t *testing.T) { t.Parallel() - account := acctest.RandomWithPrefix("tf-test") + account := fmt.Sprintf("tf-test-%d", randInt(t)) conditionExpr := `request.time < timestamp(\"2020-01-01T00:00:00Z\")` conditionTitle := "expires_after_2019_12_31" - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ { Config: testAccServiceAccountIamBinding_withCondition(account, "user:admin@hashicorptest.com", conditionTitle, conditionExpr), - Check: testAccCheckGoogleServiceAccountIam(account, 1), + Check: testAccCheckGoogleServiceAccountIam(t, account, 1), }, { ResourceName: "google_service_account_iam_binding.foo", @@ -60,19 +58,21 @@ func 
TestAccServiceAccountIamBinding_withCondition(t *testing.T) { } func TestAccServiceAccountIamBinding_withAndWithoutCondition(t *testing.T) { + // Resource creation race condition + skipIfVcr(t) t.Parallel() - account := acctest.RandomWithPrefix("tf-test") + account := fmt.Sprintf("tf-test-%d", randInt(t)) conditionExpr := `request.time < timestamp(\"2020-01-01T00:00:00Z\")` conditionTitle := "expires_after_2019_12_31" - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ { Config: testAccServiceAccountIamBinding_withAndWithoutCondition(account, "user:admin@hashicorptest.com", conditionTitle, conditionExpr), - Check: testAccCheckGoogleServiceAccountIam(account, 2), + Check: testAccCheckGoogleServiceAccountIam(t, account, 2), }, { ResourceName: "google_service_account_iam_binding.foo", @@ -89,21 +89,20 @@ func TestAccServiceAccountIamBinding_withAndWithoutCondition(t *testing.T) { }, }) } -<% end -%> func TestAccServiceAccountIamMember(t *testing.T) { t.Parallel() - account := acctest.RandomWithPrefix("tf-test") + account := fmt.Sprintf("tf-test-%d", randInt(t)) identity := fmt.Sprintf("serviceAccount:%s", serviceAccountCanonicalEmail(account)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ { Config: testAccServiceAccountIamMember_basic(account), - Check: testAccCheckGoogleServiceAccountIam(account, 1), + Check: testAccCheckGoogleServiceAccountIam(t, account, 1), }, { ResourceName: "google_service_account_iam_member.foo", @@ -115,21 +114,20 @@ func TestAccServiceAccountIamMember(t *testing.T) { }) } -<% unless version == 'ga' -%> func TestAccServiceAccountIamMember_withCondition(t *testing.T) { t.Parallel() - account := acctest.RandomWithPrefix("tf-test") + account := fmt.Sprintf("tf-test-%d", randInt(t)) identity := fmt.Sprintf("serviceAccount:%s", serviceAccountCanonicalEmail(account)) conditionTitle := "expires_after_2019_12_31" - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ { Config: testAccServiceAccountIamMember_withCondition(account, conditionTitle), - Check: testAccCheckGoogleServiceAccountIam(account, 1), + Check: testAccCheckGoogleServiceAccountIam(t, account, 1), }, { ResourceName: "google_service_account_iam_member.foo", @@ -142,19 +140,21 @@ func TestAccServiceAccountIamMember_withCondition(t *testing.T) { } func TestAccServiceAccountIamMember_withAndWithoutCondition(t *testing.T) { + // Resource creation race condition + skipIfVcr(t) t.Parallel() - account := acctest.RandomWithPrefix("tf-test") + account := fmt.Sprintf("tf-test-%d", randInt(t)) identity := fmt.Sprintf("serviceAccount:%s", serviceAccountCanonicalEmail(account)) conditionTitle := "expires_after_2019_12_31" - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ { Config: testAccServiceAccountIamMember_withAndWithoutCondition(account, conditionTitle), - Check: testAccCheckGoogleServiceAccountIam(account, 2), + Check: testAccCheckGoogleServiceAccountIam(t, account, 2), }, { ResourceName: "google_service_account_iam_member.foo", @@ -171,14 +171,13 @@ func TestAccServiceAccountIamMember_withAndWithoutCondition(t *testing.T) { }, }) } -<% end -%> func 
TestAccServiceAccountIamPolicy(t *testing.T) { t.Parallel() - account := acctest.RandomWithPrefix("tf-test") + account := fmt.Sprintf("tf-test-%d", randInt(t)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -195,13 +194,12 @@ func TestAccServiceAccountIamPolicy(t *testing.T) { }) } -<% unless version == 'ga' -%> func TestAccServiceAccountIamPolicy_withCondition(t *testing.T) { t.Parallel() - account := acctest.RandomWithPrefix("tf-test") + account := fmt.Sprintf("tf-test-%d", randInt(t)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -217,13 +215,12 @@ func TestAccServiceAccountIamPolicy_withCondition(t *testing.T) { }, }) } -<% end -%> // Ensure that our tests only create the expected number of bindings. // The content of the binding is tested in the import tests. -func testAccCheckGoogleServiceAccountIam(account string, numBindings int) resource.TestCheckFunc { +func testAccCheckGoogleServiceAccountIam(t *testing.T, account string, numBindings int) resource.TestCheckFunc { return func(s *terraform.State) error { - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) p, err := config.clientIAM.Projects.ServiceAccounts.GetIamPolicy(serviceAccountCanonicalId(account)).OptionsRequestedPolicyVersion(iamPolicyVersion).Do() if err != nil { return err @@ -260,7 +257,6 @@ resource "google_service_account_iam_binding" "foo" { `, account) } -<% unless version == 'ga' -%> func testAccServiceAccountIamBinding_withCondition(account, member, conditionTitle, conditionExpr string) string { return fmt.Sprintf(` resource "google_service_account" "test_account" { @@ -306,7 +302,6 @@ resource "google_service_account_iam_binding" "foo2" { } `, account, member, member, conditionTitle, conditionExpr) } -<% end -%> func testAccServiceAccountIamMember_basic(account string) string { return fmt.Sprintf(` @@ -323,7 +318,6 @@ resource "google_service_account_iam_member" "foo" { `, account) } -<% unless version == 'ga' -%> func testAccServiceAccountIamMember_withCondition(account, conditionTitle string) string { return fmt.Sprintf(` resource "google_service_account" "test_account" { @@ -369,7 +363,6 @@ resource "google_service_account_iam_member" "foo2" { } `, account, conditionTitle) } -<% end -%> func testAccServiceAccountIamPolicy_basic(account string) string { return fmt.Sprintf(` @@ -393,7 +386,6 @@ resource "google_service_account_iam_policy" "foo" { `, account) } -<% unless version == 'ga' -%> func testAccServiceAccountIamPolicy_withCondition(account string) string { return fmt.Sprintf(` resource "google_service_account" "test_account" { @@ -420,4 +412,3 @@ resource "google_service_account_iam_policy" "foo" { } `, account) } -<% end -%> diff --git a/third_party/terraform/tests/resource_google_service_account_key_test.go b/third_party/terraform/tests/resource_google_service_account_key_test.go index ec68338e106b..dbab8df45856 100644 --- a/third_party/terraform/tests/resource_google_service_account_key_test.go +++ b/third_party/terraform/tests/resource_google_service_account_key_test.go @@ -4,7 +4,6 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" ) @@ -14,16 +13,16 @@ func 
TestAccServiceAccountKey_basic(t *testing.T) { t.Parallel() resourceName := "google_service_account_key.acceptance" - accountID := "a" + acctest.RandString(10) + accountID := "a" + randString(t, 10) displayName := "Terraform Test" - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ { Config: testAccServiceAccountKey(accountID, displayName), Check: resource.ComposeTestCheckFunc( - testAccCheckGoogleServiceAccountKeyExists(resourceName), + testAccCheckGoogleServiceAccountKeyExists(t, resourceName), resource.TestCheckResourceAttrSet(resourceName, "public_key"), resource.TestCheckResourceAttrSet(resourceName, "valid_after"), resource.TestCheckResourceAttrSet(resourceName, "valid_before"), @@ -38,16 +37,16 @@ func TestAccServiceAccountKey_fromEmail(t *testing.T) { t.Parallel() resourceName := "google_service_account_key.acceptance" - accountID := "a" + acctest.RandString(10) + accountID := "a" + randString(t, 10) displayName := "Terraform Test" - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ { Config: testAccServiceAccountKey_fromEmail(accountID, displayName), Check: resource.ComposeTestCheckFunc( - testAccCheckGoogleServiceAccountKeyExists(resourceName), + testAccCheckGoogleServiceAccountKeyExists(t, resourceName), resource.TestCheckResourceAttrSet(resourceName, "public_key"), resource.TestCheckResourceAttrSet(resourceName, "valid_after"), resource.TestCheckResourceAttrSet(resourceName, "valid_before"), @@ -58,7 +57,7 @@ func TestAccServiceAccountKey_fromEmail(t *testing.T) { }) } -func testAccCheckGoogleServiceAccountKeyExists(r string) resource.TestCheckFunc { +func testAccCheckGoogleServiceAccountKeyExists(t *testing.T, r string) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[r] @@ -69,7 +68,7 @@ func testAccCheckGoogleServiceAccountKeyExists(r string) resource.TestCheckFunc if rs.Primary.ID == "" { return fmt.Errorf("No ID is set") } - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) _, err := config.clientIAM.Projects.ServiceAccounts.Keys.Get(rs.Primary.ID).Do() if err != nil { diff --git a/third_party/terraform/tests/resource_google_service_account_test.go b/third_party/terraform/tests/resource_google_service_account_test.go index f82b518e7c1a..0fe4e8ea2508 100644 --- a/third_party/terraform/tests/resource_google_service_account_test.go +++ b/third_party/terraform/tests/resource_google_service_account_test.go @@ -4,7 +4,6 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" ) @@ -13,7 +12,7 @@ import ( func TestAccServiceAccount_basic(t *testing.T) { t.Parallel() - accountId := "a" + acctest.RandString(10) + accountId := "a" + randString(t, 10) uniqueId := "" displayName := "Terraform Test" displayName2 := "Terraform Test Update" @@ -21,7 +20,7 @@ func TestAccServiceAccount_basic(t *testing.T) { desc2 := "" project := getTestProjectFromEnv() expectedEmail := fmt.Sprintf("%s@%s.iam.gserviceaccount.com", accountId, project) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ diff --git 
a/third_party/terraform/tests/resource_healthcare_dataset_iam_test.go.erb b/third_party/terraform/tests/resource_healthcare_dataset_iam_test.go similarity index 84% rename from third_party/terraform/tests/resource_healthcare_dataset_iam_test.go.erb rename to third_party/terraform/tests/resource_healthcare_dataset_iam_test.go index 1cd7d37c3a66..4cd1299f9385 100644 --- a/third_party/terraform/tests/resource_healthcare_dataset_iam_test.go.erb +++ b/third_party/terraform/tests/resource_healthcare_dataset_iam_test.go @@ -1,13 +1,11 @@ -<% autogen_exception -%> package google -<% unless version == 'ga' -%> + import ( "fmt" "reflect" "sort" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" ) @@ -18,9 +16,9 @@ func TestAccHealthcareDatasetIamBinding(t *testing.T) { t.Parallel() projectId := getTestProjectFromEnv() - account := acctest.RandomWithPrefix("tf-test") + account := fmt.Sprintf("tf-test-%d", randInt(t)) roleId := "roles/healthcare.datasetAdmin" - datasetName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + datasetName := fmt.Sprintf("tf-test-%s", randString(t, 10)) datasetId := &healthcareDatasetId{ Project: projectId, @@ -28,14 +26,14 @@ func TestAccHealthcareDatasetIamBinding(t *testing.T) { Name: datasetName, } - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ { // Test Iam Binding creation Config: testAccHealthcareDatasetIamBinding_basic(account, datasetName, roleId), - Check: testAccCheckGoogleHealthcareDatasetIam(datasetId.datasetId(), roleId, []string{ + Check: testAccCheckGoogleHealthcareDatasetIam(t, datasetId.datasetId(), roleId, []string{ fmt.Sprintf("serviceAccount:%s@%s.iam.gserviceaccount.com", account, projectId), }), }, @@ -48,7 +46,7 @@ func TestAccHealthcareDatasetIamBinding(t *testing.T) { { // Test Iam Binding update Config: testAccHealthcareDatasetIamBinding_update(account, datasetName, roleId), - Check: testAccCheckGoogleHealthcareDatasetIam(datasetId.datasetId(), roleId, []string{ + Check: testAccCheckGoogleHealthcareDatasetIam(t, datasetId.datasetId(), roleId, []string{ fmt.Sprintf("serviceAccount:%s@%s.iam.gserviceaccount.com", account, projectId), fmt.Sprintf("serviceAccount:%s-2@%s.iam.gserviceaccount.com", account, projectId), }), @@ -67,9 +65,9 @@ func TestAccHealthcareDatasetIamMember(t *testing.T) { t.Parallel() projectId := getTestProjectFromEnv() - account := acctest.RandomWithPrefix("tf-test") + account := fmt.Sprintf("tf-test-%d", randInt(t)) roleId := "roles/healthcare.datasetViewer" - datasetName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + datasetName := fmt.Sprintf("tf-test-%s", randString(t, 10)) datasetId := &healthcareDatasetId{ Project: projectId, @@ -77,14 +75,14 @@ func TestAccHealthcareDatasetIamMember(t *testing.T) { Name: datasetName, } - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ { // Test Iam Member creation (no update for member, no need to test) Config: testAccHealthcareDatasetIamMember_basic(account, datasetName, roleId), - Check: testAccCheckGoogleHealthcareDatasetIam(datasetId.datasetId(), roleId, []string{ + Check: testAccCheckGoogleHealthcareDatasetIam(t, datasetId.datasetId(), roleId, []string{ 
fmt.Sprintf("serviceAccount:%s@%s.iam.gserviceaccount.com", account, projectId), }), }, @@ -102,9 +100,9 @@ func TestAccHealthcareDatasetIamPolicy(t *testing.T) { t.Parallel() projectId := getTestProjectFromEnv() - account := acctest.RandomWithPrefix("tf-test") + account := fmt.Sprintf("tf-test-%d", randInt(t)) roleId := "roles/healthcare.datasetAdmin" - datasetName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + datasetName := fmt.Sprintf("tf-test-%s", randString(t, 10)) datasetId := &healthcareDatasetId{ Project: projectId, @@ -112,13 +110,13 @@ func TestAccHealthcareDatasetIamPolicy(t *testing.T) { Name: datasetName, } - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ { Config: testAccHealthcareDatasetIamPolicy_basic(account, datasetName, roleId), - Check: testAccCheckGoogleHealthcareDatasetIam(datasetId.datasetId(), roleId, []string{ + Check: testAccCheckGoogleHealthcareDatasetIam(t, datasetId.datasetId(), roleId, []string{ fmt.Sprintf("serviceAccount:%s@%s.iam.gserviceaccount.com", account, projectId), }), }, @@ -132,9 +130,9 @@ func TestAccHealthcareDatasetIamPolicy(t *testing.T) { }) } -func testAccCheckGoogleHealthcareDatasetIam(datasetId, role string, members []string) resource.TestCheckFunc { +func testAccCheckGoogleHealthcareDatasetIam(t *testing.T, datasetId, role string, members []string) resource.TestCheckFunc { return func(s *terraform.State) error { - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) p, err := config.clientHealthcare.Projects.Locations.Datasets.GetIamPolicy(datasetId).Do() if err != nil { return err @@ -253,6 +251,3 @@ resource "google_healthcare_dataset_iam_policy" "foo" { } `, account, DEFAULT_HEALTHCARE_TEST_LOCATION, datasetName, roleId) } -<% else %> -// Magic Modules doesn't let us remove files - blank out beta-only common-compile files for now. 
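
A note on the renames in this region: the healthcare test files move from `.go.erb` to plain `.go`, dropping the `<% autogen_exception -%>` header, the `<% unless version == 'ga' -%>` guards, and the trailing "Magic Modules doesn't let us remove files" blank-out stanza. Presumably these resources (like the IAM Conditions tests un-guarded earlier in this diff) are now compiled into the GA provider as well as beta, so there is no beta-only variant left for the template to produce and the files can be checked in as ordinary Go.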
-<% end -%> diff --git a/third_party/terraform/tests/resource_healthcare_dataset_test.go.erb b/third_party/terraform/tests/resource_healthcare_dataset_test.go similarity index 77% rename from third_party/terraform/tests/resource_healthcare_dataset_test.go.erb rename to third_party/terraform/tests/resource_healthcare_dataset_test.go index ccb2f2472f60..1d9d5b4594db 100644 --- a/third_party/terraform/tests/resource_healthcare_dataset_test.go.erb +++ b/third_party/terraform/tests/resource_healthcare_dataset_test.go @@ -1,12 +1,9 @@ -<% autogen_exception -%> package google -<% unless version == 'ga' -%> + import ( "fmt" - "strings" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" ) @@ -75,14 +72,14 @@ func TestAccHealthcareDataset_basic(t *testing.T) { t.Parallel() location := "us-central1" - datasetName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + datasetName := fmt.Sprintf("tf-test-%s", randString(t, 10)) timeZone := "America/New_York" resourceName := "google_healthcare_dataset.dataset" - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckHealthcareDatasetDestroy, + CheckDestroy: testAccCheckHealthcareDatasetDestroyProducer(t), Steps: []resource.TestStep{ { Config: testGoogleHealthcareDataset_basic(datasetName, location), @@ -95,7 +92,7 @@ func TestAccHealthcareDataset_basic(t *testing.T) { { Config: testGoogleHealthcareDataset_update(datasetName, location, timeZone), Check: resource.ComposeTestCheckFunc( - testAccCheckGoogleHealthcareDatasetUpdate(timeZone), + testAccCheckGoogleHealthcareDatasetUpdate(t, timeZone), ), }, { @@ -107,39 +104,14 @@ func TestAccHealthcareDataset_basic(t *testing.T) { }) } -func testAccCheckHealthcareDatasetDestroy(s *terraform.State) error { - for name, rs := range s.RootModule().Resources { - if rs.Type != "google_healthcare_dataset" { - continue - } - if strings.HasPrefix(name, "data.") { - continue - } - - config := testAccProvider.Meta().(*Config) - - url, err := replaceVarsForTest(config, rs, "{{HealthcareBasePath}}projects/{{project}}/locations/{{location}}/datasets/{{name}}") - if err != nil { - return err - } - - _, err = sendRequest(config, "GET", "", url, nil) - if err == nil { - return fmt.Errorf("HealthcareDataset still exists at %s", url) - } - } - - return nil -} - -func testAccCheckGoogleHealthcareDatasetUpdate(timeZone string) resource.TestCheckFunc { +func testAccCheckGoogleHealthcareDatasetUpdate(t *testing.T, timeZone string) resource.TestCheckFunc { return func(s *terraform.State) error { for _, rs := range s.RootModule().Resources { if rs.Type != "google_healthcare_dataset" { continue } - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) gcpResourceUri, err := replaceVarsForTest(config, rs, "projects/{{project}}/locations/{{location}}/datasets/{{name}}") if err != nil { @@ -178,6 +150,3 @@ resource "google_healthcare_dataset" "dataset" { } `, datasetName, location, timeZone) } -<% else %> -// Magic Modules doesn't let us remove files - blank out beta-only common-compile files for now. 
-<% end -%> diff --git a/third_party/terraform/tests/resource_healthcare_dicom_store_iam_test.go.erb b/third_party/terraform/tests/resource_healthcare_dicom_store_iam_test.go similarity index 86% rename from third_party/terraform/tests/resource_healthcare_dicom_store_iam_test.go.erb rename to third_party/terraform/tests/resource_healthcare_dicom_store_iam_test.go index fb53430d25cd..8b491c1c3433 100644 --- a/third_party/terraform/tests/resource_healthcare_dicom_store_iam_test.go.erb +++ b/third_party/terraform/tests/resource_healthcare_dicom_store_iam_test.go @@ -1,13 +1,11 @@ -<% autogen_exception -%> package google -<% unless version == 'ga' -%> + import ( "fmt" "reflect" "sort" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" ) @@ -16,24 +14,24 @@ func TestAccHealthcareDicomStoreIamBinding(t *testing.T) { t.Parallel() projectId := getTestProjectFromEnv() - account := acctest.RandomWithPrefix("tf-test") + account := fmt.Sprintf("tf-test-%d", randInt(t)) roleId := "roles/healthcare.dicomStoreAdmin" - datasetName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + datasetName := fmt.Sprintf("tf-test-%s", randString(t, 10)) datasetId := &healthcareDatasetId{ Project: projectId, Location: DEFAULT_HEALTHCARE_TEST_LOCATION, Name: datasetName, } - dicomStoreName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + dicomStoreName := fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ { // Test Iam Binding creation Config: testAccHealthcareDicomStoreIamBinding_basic(account, datasetName, dicomStoreName, roleId), - Check: testAccCheckGoogleHealthcareDicomStoreIamBindingExists("foo", roleId, []string{ + Check: testAccCheckGoogleHealthcareDicomStoreIamBindingExists(t, "foo", roleId, []string{ fmt.Sprintf("serviceAccount:%s@%s.iam.gserviceaccount.com", account, projectId), }), }, @@ -46,7 +44,7 @@ func TestAccHealthcareDicomStoreIamBinding(t *testing.T) { { // Test Iam Binding update Config: testAccHealthcareDicomStoreIamBinding_update(account, datasetName, dicomStoreName, roleId), - Check: testAccCheckGoogleHealthcareDicomStoreIamBindingExists("foo", roleId, []string{ + Check: testAccCheckGoogleHealthcareDicomStoreIamBindingExists(t, "foo", roleId, []string{ fmt.Sprintf("serviceAccount:%s@%s.iam.gserviceaccount.com", account, projectId), fmt.Sprintf("serviceAccount:%s-2@%s.iam.gserviceaccount.com", account, projectId), }), @@ -65,24 +63,24 @@ func TestAccHealthcareDicomStoreIamMember(t *testing.T) { t.Parallel() projectId := getTestProjectFromEnv() - account := acctest.RandomWithPrefix("tf-test") + account := fmt.Sprintf("tf-test-%d", randInt(t)) roleId := "roles/healthcare.dicomEditor" - datasetName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + datasetName := fmt.Sprintf("tf-test-%s", randString(t, 10)) datasetId := &healthcareDatasetId{ Project: projectId, Location: DEFAULT_HEALTHCARE_TEST_LOCATION, Name: datasetName, } - dicomStoreName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + dicomStoreName := fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ { // Test Iam Member creation (no update for member, no need to test) 
Config: testAccHealthcareDicomStoreIamMember_basic(account, datasetName, dicomStoreName, roleId), - Check: testAccCheckGoogleHealthcareDicomStoreIamMemberExists("foo", roleId, + Check: testAccCheckGoogleHealthcareDicomStoreIamMemberExists(t, "foo", roleId, fmt.Sprintf("serviceAccount:%s@%s.iam.gserviceaccount.com", account, projectId), ), }, @@ -100,24 +98,24 @@ func TestAccHealthcareDicomStoreIamPolicy(t *testing.T) { t.Parallel() projectId := getTestProjectFromEnv() - account := acctest.RandomWithPrefix("tf-test") + account := fmt.Sprintf("tf-test-%d", randInt(t)) roleId := "roles/healthcare.dicomViewer" - datasetName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + datasetName := fmt.Sprintf("tf-test-%s", randString(t, 10)) datasetId := &healthcareDatasetId{ Project: projectId, Location: DEFAULT_HEALTHCARE_TEST_LOCATION, Name: datasetName, } - dicomStoreName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + dicomStoreName := fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ { // Test Iam Policy creation (no update for policy, no need to test) Config: testAccHealthcareDicomStoreIamPolicy_basic(account, datasetName, dicomStoreName, roleId), - Check: testAccCheckGoogleHealthcareDicomStoreIamPolicyExists("foo", roleId, + Check: testAccCheckGoogleHealthcareDicomStoreIamPolicyExists(t, "foo", roleId, fmt.Sprintf("serviceAccount:%s@%s.iam.gserviceaccount.com", account, projectId), ), }, @@ -131,14 +129,14 @@ func TestAccHealthcareDicomStoreIamPolicy(t *testing.T) { }) } -func testAccCheckGoogleHealthcareDicomStoreIamBindingExists(bindingResourceName, roleId string, members []string) resource.TestCheckFunc { +func testAccCheckGoogleHealthcareDicomStoreIamBindingExists(t *testing.T, bindingResourceName, roleId string, members []string) resource.TestCheckFunc { return func(s *terraform.State) error { bindingRs, ok := s.RootModule().Resources[fmt.Sprintf("google_healthcare_dicom_store_iam_binding.%s", bindingResourceName)] if !ok { return fmt.Errorf("Not found: %s", bindingResourceName) } - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) dicomStoreId, err := parseHealthcareDicomStoreId(bindingRs.Primary.Attributes["dicom_store_id"], config) if err != nil { @@ -167,14 +165,14 @@ func testAccCheckGoogleHealthcareDicomStoreIamBindingExists(bindingResourceName, } } -func testAccCheckGoogleHealthcareDicomStoreIamMemberExists(n, role, member string) resource.TestCheckFunc { +func testAccCheckGoogleHealthcareDicomStoreIamMemberExists(t *testing.T, n, role, member string) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources["google_healthcare_dicom_store_iam_member."+n] if !ok { return fmt.Errorf("Not found: %s", n) } - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) dicomStoreId, err := parseHealthcareDicomStoreId(rs.Primary.Attributes["dicom_store_id"], config) if err != nil { @@ -202,14 +200,14 @@ func testAccCheckGoogleHealthcareDicomStoreIamMemberExists(n, role, member strin } } -func testAccCheckGoogleHealthcareDicomStoreIamPolicyExists(n, role, policy string) resource.TestCheckFunc { +func testAccCheckGoogleHealthcareDicomStoreIamPolicyExists(t *testing.T, n, role, policy string) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := 
s.RootModule().Resources["google_healthcare_dicom_store_iam_policy."+n] if !ok { return fmt.Errorf("Not found: %s", n) } - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) dicomStoreId, err := parseHealthcareDicomStoreId(rs.Primary.Attributes["dicom_store_id"], config) if err != nil { @@ -353,6 +351,3 @@ resource "google_healthcare_dicom_store_iam_policy" "foo" { } `, account, datasetName, dicomStoreName, roleId) } -<% else %> -// Magic Modules doesn't let us remove files - blank out beta-only common-compile files for now. -<% end -%> diff --git a/third_party/terraform/tests/resource_healthcare_dicom_store_test.go.erb b/third_party/terraform/tests/resource_healthcare_dicom_store_test.go similarity index 76% rename from third_party/terraform/tests/resource_healthcare_dicom_store_test.go.erb rename to third_party/terraform/tests/resource_healthcare_dicom_store_test.go index 4ae2b2dbbdc7..b6ac868e4e6b 100644 --- a/third_party/terraform/tests/resource_healthcare_dicom_store_test.go.erb +++ b/third_party/terraform/tests/resource_healthcare_dicom_store_test.go @@ -1,13 +1,10 @@ -<% autogen_exception -%> package google -<% unless version == 'ga' -%> + import ( "fmt" "path" - "strings" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" ) @@ -75,15 +72,15 @@ func TestAccHealthcareDicomStoreIdParsing(t *testing.T) { func TestAccHealthcareDicomStore_basic(t *testing.T) { t.Parallel() - datasetName := fmt.Sprintf("tf-test-dataset-%s", acctest.RandString(10)) - dicomStoreName := fmt.Sprintf("tf-test-dicom-store-%s", acctest.RandString(10)) - pubsubTopic := fmt.Sprintf("tf-test-topic-%s", acctest.RandString(10)) + datasetName := fmt.Sprintf("tf-test-dataset-%s", randString(t, 10)) + dicomStoreName := fmt.Sprintf("tf-test-dicom-store-%s", randString(t, 10)) + pubsubTopic := fmt.Sprintf("tf-test-topic-%s", randString(t, 10)) resourceName := "google_healthcare_dicom_store.default" - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckHealthcareDicomStoreDestroy, + CheckDestroy: testAccCheckHealthcareDicomStoreDestroyProducer(t), Steps: []resource.TestStep{ { Config: testGoogleHealthcareDicomStore_basic(dicomStoreName, datasetName), @@ -96,7 +93,7 @@ func TestAccHealthcareDicomStore_basic(t *testing.T) { { Config: testGoogleHealthcareDicomStore_update(dicomStoreName, datasetName, pubsubTopic), Check: resource.ComposeTestCheckFunc( - testAccCheckGoogleHealthcareDicomStoreUpdate(pubsubTopic), + testAccCheckGoogleHealthcareDicomStoreUpdate(t, pubsubTopic), ), }, { @@ -104,15 +101,14 @@ func TestAccHealthcareDicomStore_basic(t *testing.T) { ImportState: true, ImportStateVerify: true, }, - // TODO(b/148536607): Uncomment once b/148536607 is fixed. 
- // { - // Config: testGoogleHealthcareDicomStore_basic(dicomStoreName, datasetName), - // }, - // { - // ResourceName: resourceName, - // ImportState: true, - // ImportStateVerify: true, - // }, + { + Config: testGoogleHealthcareDicomStore_basic(dicomStoreName, datasetName), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, }, }) } @@ -157,32 +153,7 @@ resource "google_pubsub_topic" "topic" { `, dicomStoreName, datasetName, pubsubTopic) } -func testAccCheckHealthcareDicomStoreDestroy(s *terraform.State) error { - for name, rs := range s.RootModule().Resources { - if rs.Type != "google_healthcare_dicom_store" { - continue - } - if strings.HasPrefix(name, "data.") { - continue - } - - config := testAccProvider.Meta().(*Config) - - url, err := replaceVarsForTest(config, rs, "{{HealthcareBasePath}}{{dataset}}/dicomStores/{{name}}") - if err != nil { - return err - } - - _, err = sendRequest(config, "GET", "", url, nil) - if err == nil { - return fmt.Errorf("HealthcareDicomStore still exists at %s", url) - } - } - - return nil -} - -func testAccCheckGoogleHealthcareDicomStoreUpdate(pubsubTopic string) resource.TestCheckFunc { +func testAccCheckGoogleHealthcareDicomStoreUpdate(t *testing.T, pubsubTopic string) resource.TestCheckFunc { return func(s *terraform.State) error { var foundResource = false for _, rs := range s.RootModule().Resources { @@ -191,7 +162,7 @@ func testAccCheckGoogleHealthcareDicomStoreUpdate(pubsubTopic string) resource.T } foundResource = true - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) gcpResourceUri, err := replaceVarsForTest(config, rs, "{{dataset}}/dicomStores/{{name}}") if err != nil { @@ -219,6 +190,3 @@ func testAccCheckGoogleHealthcareDicomStoreUpdate(pubsubTopic string) resource.T return nil } } -<% else %> -// Magic Modules doesn't let us remove files - blank out beta-only common-compile files for now. 
-<% end -%> diff --git a/third_party/terraform/tests/resource_healthcare_fhir_store_iam_test.go.erb b/third_party/terraform/tests/resource_healthcare_fhir_store_iam_test.go similarity index 86% rename from third_party/terraform/tests/resource_healthcare_fhir_store_iam_test.go.erb rename to third_party/terraform/tests/resource_healthcare_fhir_store_iam_test.go index 9e8d3e4e8fea..1d1e8e5f914a 100644 --- a/third_party/terraform/tests/resource_healthcare_fhir_store_iam_test.go.erb +++ b/third_party/terraform/tests/resource_healthcare_fhir_store_iam_test.go @@ -1,13 +1,11 @@ -<% autogen_exception -%> package google -<% unless version == 'ga' -%> + import ( "fmt" "reflect" "sort" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" ) @@ -16,24 +14,24 @@ func TestAccHealthcareFhirStoreIamBinding(t *testing.T) { t.Parallel() projectId := getTestProjectFromEnv() - account := acctest.RandomWithPrefix("tf-test") + account := fmt.Sprintf("tf-test-%d", randInt(t)) roleId := "roles/healthcare.fhirStoreAdmin" - datasetName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + datasetName := fmt.Sprintf("tf-test-%s", randString(t, 10)) datasetId := &healthcareDatasetId{ Project: projectId, Location: DEFAULT_HEALTHCARE_TEST_LOCATION, Name: datasetName, } - fhirStoreName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + fhirStoreName := fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ { // Test Iam Binding creation Config: testAccHealthcareFhirStoreIamBinding_basic(account, datasetName, fhirStoreName, roleId), - Check: testAccCheckGoogleHealthcareFhirStoreIamBindingExists("foo", roleId, []string{ + Check: testAccCheckGoogleHealthcareFhirStoreIamBindingExists(t, "foo", roleId, []string{ fmt.Sprintf("serviceAccount:%s@%s.iam.gserviceaccount.com", account, projectId), }), }, @@ -46,7 +44,7 @@ func TestAccHealthcareFhirStoreIamBinding(t *testing.T) { { // Test Iam Binding update Config: testAccHealthcareFhirStoreIamBinding_update(account, datasetName, fhirStoreName, roleId), - Check: testAccCheckGoogleHealthcareFhirStoreIamBindingExists("foo", roleId, []string{ + Check: testAccCheckGoogleHealthcareFhirStoreIamBindingExists(t, "foo", roleId, []string{ fmt.Sprintf("serviceAccount:%s@%s.iam.gserviceaccount.com", account, projectId), fmt.Sprintf("serviceAccount:%s-2@%s.iam.gserviceaccount.com", account, projectId), }), @@ -65,24 +63,24 @@ func TestAccHealthcareFhirStoreIamMember(t *testing.T) { t.Parallel() projectId := getTestProjectFromEnv() - account := acctest.RandomWithPrefix("tf-test") + account := fmt.Sprintf("tf-test-%d", randInt(t)) roleId := "roles/healthcare.fhirResourceEditor" - datasetName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + datasetName := fmt.Sprintf("tf-test-%s", randString(t, 10)) datasetId := &healthcareDatasetId{ Project: projectId, Location: DEFAULT_HEALTHCARE_TEST_LOCATION, Name: datasetName, } - fhirStoreName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + fhirStoreName := fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ { // Test Iam Member creation (no update for member, no need to test) Config: 
testAccHealthcareFhirStoreIamMember_basic(account, datasetName, fhirStoreName, roleId), - Check: testAccCheckGoogleHealthcareFhirStoreIamMemberExists("foo", roleId, + Check: testAccCheckGoogleHealthcareFhirStoreIamMemberExists(t, "foo", roleId, fmt.Sprintf("serviceAccount:%s@%s.iam.gserviceaccount.com", account, projectId), ), }, @@ -100,24 +98,24 @@ func TestAccHealthcareFhirStoreIamPolicy(t *testing.T) { t.Parallel() projectId := getTestProjectFromEnv() - account := acctest.RandomWithPrefix("tf-test") + account := fmt.Sprintf("tf-test-%d", randInt(t)) roleId := "roles/healthcare.fhirResourceEditor" - datasetName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + datasetName := fmt.Sprintf("tf-test-%s", randString(t, 10)) datasetId := &healthcareDatasetId{ Project: projectId, Location: DEFAULT_HEALTHCARE_TEST_LOCATION, Name: datasetName, } - fhirStoreName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + fhirStoreName := fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ { // Test Iam Policy creation (no update for policy, no need to test) Config: testAccHealthcareFhirStoreIamPolicy_basic(account, datasetName, fhirStoreName, roleId), - Check: testAccCheckGoogleHealthcareFhirStoreIamPolicyExists("foo", roleId, + Check: testAccCheckGoogleHealthcareFhirStoreIamPolicyExists(t, "foo", roleId, fmt.Sprintf("serviceAccount:%s@%s.iam.gserviceaccount.com", account, projectId), ), }, @@ -131,14 +129,14 @@ func TestAccHealthcareFhirStoreIamPolicy(t *testing.T) { }) } -func testAccCheckGoogleHealthcareFhirStoreIamBindingExists(bindingResourceName, roleId string, members []string) resource.TestCheckFunc { +func testAccCheckGoogleHealthcareFhirStoreIamBindingExists(t *testing.T, bindingResourceName, roleId string, members []string) resource.TestCheckFunc { return func(s *terraform.State) error { bindingRs, ok := s.RootModule().Resources[fmt.Sprintf("google_healthcare_fhir_store_iam_binding.%s", bindingResourceName)] if !ok { return fmt.Errorf("Not found: %s", bindingResourceName) } - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) fhirStoreId, err := parseHealthcareFhirStoreId(bindingRs.Primary.Attributes["fhir_store_id"], config) if err != nil { @@ -167,14 +165,14 @@ func testAccCheckGoogleHealthcareFhirStoreIamBindingExists(bindingResourceName, } } -func testAccCheckGoogleHealthcareFhirStoreIamMemberExists(n, role, member string) resource.TestCheckFunc { +func testAccCheckGoogleHealthcareFhirStoreIamMemberExists(t *testing.T, n, role, member string) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources["google_healthcare_fhir_store_iam_member."+n] if !ok { return fmt.Errorf("Not found: %s", n) } - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) fhirStoreId, err := parseHealthcareFhirStoreId(rs.Primary.Attributes["fhir_store_id"], config) if err != nil { @@ -202,14 +200,14 @@ func testAccCheckGoogleHealthcareFhirStoreIamMemberExists(n, role, member string } } -func testAccCheckGoogleHealthcareFhirStoreIamPolicyExists(n, role, policy string) resource.TestCheckFunc { +func testAccCheckGoogleHealthcareFhirStoreIamPolicyExists(t *testing.T, n, role, policy string) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources["google_healthcare_fhir_store_iam_policy."+n] if !ok { return 
fmt.Errorf("Not found: %s", n) } - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) fhirStoreId, err := parseHealthcareFhirStoreId(rs.Primary.Attributes["fhir_store_id"], config) if err != nil { @@ -254,6 +252,7 @@ resource "google_healthcare_dataset" "dataset" { resource "google_healthcare_fhir_store" "fhir_store" { dataset = google_healthcare_dataset.dataset.id name = "%s" + version = "R4" } resource "google_healthcare_fhir_store_iam_binding" "foo" { @@ -284,6 +283,7 @@ resource "google_healthcare_dataset" "dataset" { resource "google_healthcare_fhir_store" "fhir_store" { dataset = google_healthcare_dataset.dataset.id name = "%s" + version = "R4" } resource "google_healthcare_fhir_store_iam_binding" "foo" { @@ -312,6 +312,7 @@ resource "google_healthcare_dataset" "dataset" { resource "google_healthcare_fhir_store" "fhir_store" { dataset = google_healthcare_dataset.dataset.id name = "%s" + version = "R4" } resource "google_healthcare_fhir_store_iam_member" "foo" { @@ -337,6 +338,7 @@ resource "google_healthcare_dataset" "dataset" { resource "google_healthcare_fhir_store" "fhir_store" { dataset = google_healthcare_dataset.dataset.id name = "%s" + version = "R4" } data "google_iam_policy" "foo" { @@ -353,6 +355,3 @@ resource "google_healthcare_fhir_store_iam_policy" "foo" { } `, account, datasetName, fhirStoreName, roleId) } -<% else %> -// Magic Modules doesn't let us remove files - blank out beta-only common-compile files for now. -<% end -%> diff --git a/third_party/terraform/tests/resource_healthcare_fhir_store_test.go.erb b/third_party/terraform/tests/resource_healthcare_fhir_store_test.go similarity index 81% rename from third_party/terraform/tests/resource_healthcare_fhir_store_test.go.erb rename to third_party/terraform/tests/resource_healthcare_fhir_store_test.go index 740cc575e7cc..2ba1258456a3 100644 --- a/third_party/terraform/tests/resource_healthcare_fhir_store_test.go.erb +++ b/third_party/terraform/tests/resource_healthcare_fhir_store_test.go @@ -1,13 +1,10 @@ -<% autogen_exception -%> package google -<% unless version == 'ga' -%> + import ( "fmt" "path" - "strings" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" ) @@ -75,15 +72,15 @@ func TestAccHealthcareFhirStoreIdParsing(t *testing.T) { func TestAccHealthcareFhirStore_basic(t *testing.T) { t.Parallel() - datasetName := fmt.Sprintf("tf-test-dataset-%s", acctest.RandString(10)) - fhirStoreName := fmt.Sprintf("tf-test-fhir-store-%s", acctest.RandString(10)) - pubsubTopic := fmt.Sprintf("tf-test-topic-%s", acctest.RandString(10)) + datasetName := fmt.Sprintf("tf-test-dataset-%s", randString(t, 10)) + fhirStoreName := fmt.Sprintf("tf-test-fhir-store-%s", randString(t, 10)) + pubsubTopic := fmt.Sprintf("tf-test-topic-%s", randString(t, 10)) resourceName := "google_healthcare_fhir_store.default" - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckHealthcareFhirStoreDestroy, + CheckDestroy: testAccCheckHealthcareFhirStoreDestroyProducer(t), Steps: []resource.TestStep{ { Config: testGoogleHealthcareFhirStore_basic(fhirStoreName, datasetName), @@ -96,7 +93,7 @@ func TestAccHealthcareFhirStore_basic(t *testing.T) { { Config: testGoogleHealthcareFhirStore_update(fhirStoreName, datasetName, pubsubTopic), Check: resource.ComposeTestCheckFunc( - 
testAccCheckGoogleHealthcareFhirStoreUpdate(pubsubTopic), + testAccCheckGoogleHealthcareFhirStoreUpdate(t, pubsubTopic), ), }, { @@ -126,6 +123,7 @@ resource "google_healthcare_fhir_store" "default" { disable_referential_integrity = false disable_resource_versioning = false enable_history_import = false + version = "R4" } resource "google_healthcare_dataset" "dataset" { @@ -142,6 +140,8 @@ resource "google_healthcare_fhir_store" "default" { dataset = google_healthcare_dataset.dataset.id enable_update_create = true + version = "R4" + notification_config { pubsub_topic = google_pubsub_topic.topic.id @@ -163,32 +163,7 @@ resource "google_pubsub_topic" "topic" { `, fhirStoreName, datasetName, pubsubTopic) } -func testAccCheckHealthcareFhirStoreDestroy(s *terraform.State) error { - for name, rs := range s.RootModule().Resources { - if rs.Type != "google_healthcare_fhir_store" { - continue - } - if strings.HasPrefix(name, "data.") { - continue - } - - config := testAccProvider.Meta().(*Config) - - url, err := replaceVarsForTest(config, rs, "{{HealthcareBasePath}}{{dataset}}/fhirStores/{{name}}") - if err != nil { - return err - } - - _, err = sendRequest(config, "GET", "", url, nil) - if err == nil { - return fmt.Errorf("HealthcareFhirStore still exists at %s", url) - } - } - - return nil -} - -func testAccCheckGoogleHealthcareFhirStoreUpdate(pubsubTopic string) resource.TestCheckFunc { +func testAccCheckGoogleHealthcareFhirStoreUpdate(t *testing.T, pubsubTopic string) resource.TestCheckFunc { return func(s *terraform.State) error { var foundResource = false for _, rs := range s.RootModule().Resources { @@ -197,7 +172,7 @@ func testAccCheckGoogleHealthcareFhirStoreUpdate(pubsubTopic string) resource.Te } foundResource = true - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) gcpResourceUri, err := replaceVarsForTest(config, rs, "{{dataset}}/fhirStores/{{name}}") if err != nil { @@ -236,6 +211,3 @@ func testAccCheckGoogleHealthcareFhirStoreUpdate(pubsubTopic string) resource.Te return nil } } -<% else %> -// Magic Modules doesn't let us remove files - blank out beta-only common-compile files for now. 
-<% end -%> diff --git a/third_party/terraform/tests/resource_healthcare_hl7_v2_store_iam_test.go.erb b/third_party/terraform/tests/resource_healthcare_hl7_v2_store_iam_test.go similarity index 86% rename from third_party/terraform/tests/resource_healthcare_hl7_v2_store_iam_test.go.erb rename to third_party/terraform/tests/resource_healthcare_hl7_v2_store_iam_test.go index 0072e1bcef79..05c8d17b44dc 100644 --- a/third_party/terraform/tests/resource_healthcare_hl7_v2_store_iam_test.go.erb +++ b/third_party/terraform/tests/resource_healthcare_hl7_v2_store_iam_test.go @@ -1,13 +1,11 @@ -<% autogen_exception -%> package google -<% unless version == 'ga' -%> + import ( "fmt" "reflect" "sort" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" ) @@ -16,24 +14,24 @@ func TestAccHealthcareHl7V2StoreIamBinding(t *testing.T) { t.Parallel() projectId := getTestProjectFromEnv() - account := acctest.RandomWithPrefix("tf-test") + account := fmt.Sprintf("tf-test-%d", randInt(t)) roleId := "roles/healthcare.hl7V2StoreAdmin" - datasetName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + datasetName := fmt.Sprintf("tf-test-%s", randString(t, 10)) datasetId := &healthcareDatasetId{ Project: projectId, Location: DEFAULT_HEALTHCARE_TEST_LOCATION, Name: datasetName, } - hl7V2StoreName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + hl7V2StoreName := fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ { // Test Iam Binding creation Config: testAccHealthcareHl7V2StoreIamBinding_basic(account, datasetName, hl7V2StoreName, roleId), - Check: testAccCheckGoogleHealthcareHl7V2StoreIamBindingExists("foo", roleId, []string{ + Check: testAccCheckGoogleHealthcareHl7V2StoreIamBindingExists(t, "foo", roleId, []string{ fmt.Sprintf("serviceAccount:%s@%s.iam.gserviceaccount.com", account, projectId), }), }, @@ -46,7 +44,7 @@ func TestAccHealthcareHl7V2StoreIamBinding(t *testing.T) { { // Test Iam Binding update Config: testAccHealthcareHl7V2StoreIamBinding_update(account, datasetName, hl7V2StoreName, roleId), - Check: testAccCheckGoogleHealthcareHl7V2StoreIamBindingExists("foo", roleId, []string{ + Check: testAccCheckGoogleHealthcareHl7V2StoreIamBindingExists(t, "foo", roleId, []string{ fmt.Sprintf("serviceAccount:%s@%s.iam.gserviceaccount.com", account, projectId), fmt.Sprintf("serviceAccount:%s-2@%s.iam.gserviceaccount.com", account, projectId), }), @@ -65,24 +63,24 @@ func TestAccHealthcareHl7V2StoreIamMember(t *testing.T) { t.Parallel() projectId := getTestProjectFromEnv() - account := acctest.RandomWithPrefix("tf-test") + account := fmt.Sprintf("tf-test-%d", randInt(t)) roleId := "roles/healthcare.hl7V2Editor" - datasetName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + datasetName := fmt.Sprintf("tf-test-%s", randString(t, 10)) datasetId := &healthcareDatasetId{ Project: projectId, Location: DEFAULT_HEALTHCARE_TEST_LOCATION, Name: datasetName, } - hl7V2StoreName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + hl7V2StoreName := fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ { // Test Iam Member creation (no update for member, no need to test) 
Config: testAccHealthcareHl7V2StoreIamMember_basic(account, datasetName, hl7V2StoreName, roleId), - Check: testAccCheckGoogleHealthcareHl7V2StoreIamMemberExists("foo", roleId, + Check: testAccCheckGoogleHealthcareHl7V2StoreIamMemberExists(t, "foo", roleId, fmt.Sprintf("serviceAccount:%s@%s.iam.gserviceaccount.com", account, projectId), ), }, @@ -100,24 +98,24 @@ func TestAccHealthcareHl7V2StoreIamPolicy(t *testing.T) { t.Parallel() projectId := getTestProjectFromEnv() - account := acctest.RandomWithPrefix("tf-test") + account := fmt.Sprintf("tf-test-%d", randInt(t)) roleId := "roles/healthcare.hl7V2Consumer" - datasetName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + datasetName := fmt.Sprintf("tf-test-%s", randString(t, 10)) datasetId := &healthcareDatasetId{ Project: projectId, Location: DEFAULT_HEALTHCARE_TEST_LOCATION, Name: datasetName, } - hl7V2StoreName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + hl7V2StoreName := fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ { // Test Iam Policy creation (no update for policy, no need to test) Config: testAccHealthcareHl7V2StoreIamPolicy_basic(account, datasetName, hl7V2StoreName, roleId), - Check: testAccCheckGoogleHealthcareHl7V2StoreIamPolicyExists("foo", roleId, + Check: testAccCheckGoogleHealthcareHl7V2StoreIamPolicyExists(t, "foo", roleId, fmt.Sprintf("serviceAccount:%s@%s.iam.gserviceaccount.com", account, projectId), ), }, @@ -131,14 +129,14 @@ func TestAccHealthcareHl7V2StoreIamPolicy(t *testing.T) { }) } -func testAccCheckGoogleHealthcareHl7V2StoreIamBindingExists(bindingResourceName, roleId string, members []string) resource.TestCheckFunc { +func testAccCheckGoogleHealthcareHl7V2StoreIamBindingExists(t *testing.T, bindingResourceName, roleId string, members []string) resource.TestCheckFunc { return func(s *terraform.State) error { bindingRs, ok := s.RootModule().Resources[fmt.Sprintf("google_healthcare_hl7_v2_store_iam_binding.%s", bindingResourceName)] if !ok { return fmt.Errorf("Not found: %s", bindingResourceName) } - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) hl7V2StoreId, err := parseHealthcareHl7V2StoreId(bindingRs.Primary.Attributes["hl7_v2_store_id"], config) if err != nil { @@ -167,14 +165,14 @@ func testAccCheckGoogleHealthcareHl7V2StoreIamBindingExists(bindingResourceName, } } -func testAccCheckGoogleHealthcareHl7V2StoreIamMemberExists(n, role, member string) resource.TestCheckFunc { +func testAccCheckGoogleHealthcareHl7V2StoreIamMemberExists(t *testing.T, n, role, member string) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources["google_healthcare_hl7_v2_store_iam_member."+n] if !ok { return fmt.Errorf("Not found: %s", n) } - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) hl7V2StoreId, err := parseHealthcareHl7V2StoreId(rs.Primary.Attributes["hl7_v2_store_id"], config) if err != nil { @@ -202,14 +200,14 @@ func testAccCheckGoogleHealthcareHl7V2StoreIamMemberExists(n, role, member strin } } -func testAccCheckGoogleHealthcareHl7V2StoreIamPolicyExists(n, role, policy string) resource.TestCheckFunc { +func testAccCheckGoogleHealthcareHl7V2StoreIamPolicyExists(t *testing.T, n, role, policy string) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := 
s.RootModule().Resources["google_healthcare_hl7_v2_store_iam_policy."+n] if !ok { return fmt.Errorf("Not found: %s", n) } - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) hl7V2StoreId, err := parseHealthcareHl7V2StoreId(rs.Primary.Attributes["hl7_v2_store_id"], config) if err != nil { @@ -353,6 +351,3 @@ resource "google_healthcare_hl7_v2_store_iam_policy" "foo" { } `, account, datasetName, hl7V2StoreName, roleId) } -<% else %> -// Magic Modules doesn't let us remove files - blank out beta-only common-compile files for now. -<% end -%> diff --git a/third_party/terraform/tests/resource_healthcare_hl7_v2_store_test.go.erb b/third_party/terraform/tests/resource_healthcare_hl7_v2_store_test.go similarity index 78% rename from third_party/terraform/tests/resource_healthcare_hl7_v2_store_test.go.erb rename to third_party/terraform/tests/resource_healthcare_hl7_v2_store_test.go index 19c0f31ad2f6..7184b4c1c092 100644 --- a/third_party/terraform/tests/resource_healthcare_hl7_v2_store_test.go.erb +++ b/third_party/terraform/tests/resource_healthcare_hl7_v2_store_test.go @@ -1,13 +1,10 @@ -<% autogen_exception -%> package google -<% unless version == 'ga' -%> + import ( "fmt" "path" - "strings" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" ) @@ -75,15 +72,15 @@ func TestAccHealthcareHl7V2StoreIdParsing(t *testing.T) { func TestAccHealthcareHl7V2Store_basic(t *testing.T) { t.Parallel() - datasetName := fmt.Sprintf("tf-test-dataset-%s", acctest.RandString(10)) - hl7_v2StoreName := fmt.Sprintf("tf-test-hl7_v2-store-%s", acctest.RandString(10)) - pubsubTopic := fmt.Sprintf("tf-test-topic-%s", acctest.RandString(10)) + datasetName := fmt.Sprintf("tf-test-dataset-%s", randString(t, 10)) + hl7_v2StoreName := fmt.Sprintf("tf-test-hl7_v2-store-%s", randString(t, 10)) + pubsubTopic := fmt.Sprintf("tf-test-topic-%s", randString(t, 10)) resourceName := "google_healthcare_hl7_v2_store.default" - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckHealthcareHl7V2StoreDestroy, + CheckDestroy: testAccCheckHealthcareHl7V2StoreDestroyProducer(t), Steps: []resource.TestStep{ { Config: testGoogleHealthcareHl7V2Store_basic(hl7_v2StoreName, datasetName), @@ -96,7 +93,7 @@ func TestAccHealthcareHl7V2Store_basic(t *testing.T) { { Config: testGoogleHealthcareHl7V2Store_update(hl7_v2StoreName, datasetName, pubsubTopic), Check: resource.ComposeTestCheckFunc( - testAccCheckGoogleHealthcareHl7V2StoreUpdate(pubsubTopic), + testAccCheckGoogleHealthcareHl7V2StoreUpdate(t, pubsubTopic), ), }, { @@ -141,7 +138,7 @@ resource "google_healthcare_hl7_v2_store" "default" { segment_terminator = "Jw==" } - notification_config { + notification_configs { pubsub_topic = google_pubsub_topic.topic.id } @@ -161,32 +158,7 @@ resource "google_pubsub_topic" "topic" { `, hl7_v2StoreName, datasetName, pubsubTopic) } -func testAccCheckHealthcareHl7V2StoreDestroy(s *terraform.State) error { - for name, rs := range s.RootModule().Resources { - if rs.Type != "google_healthcare_hl7_v2_store" { - continue - } - if strings.HasPrefix(name, "data.") { - continue - } - - config := testAccProvider.Meta().(*Config) - - url, err := replaceVarsForTest(config, rs, "{{HealthcareBasePath}}{{dataset}}/hl7V2Stores/{{name}}") - if err != nil { - return err - } - - _, err = 
sendRequest(config, "GET", "", url, nil) - if err == nil { - return fmt.Errorf("HealthcareHl7V2Store still exists at %s", url) - } - } - - return nil -} - -func testAccCheckGoogleHealthcareHl7V2StoreUpdate(pubsubTopic string) resource.TestCheckFunc { +func testAccCheckGoogleHealthcareHl7V2StoreUpdate(t *testing.T, pubsubTopic string) resource.TestCheckFunc { return func(s *terraform.State) error { var foundResource = false for _, rs := range s.RootModule().Resources { @@ -195,7 +167,7 @@ func testAccCheckGoogleHealthcareHl7V2StoreUpdate(pubsubTopic string) resource.T } foundResource = true - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) gcpResourceUri, err := replaceVarsForTest(config, rs, "{{dataset}}/hl7V2Stores/{{name}}") if err != nil { @@ -223,9 +195,12 @@ func testAccCheckGoogleHealthcareHl7V2StoreUpdate(pubsubTopic string) resource.T return fmt.Errorf("hl7_v2_store labels not updated: %s", gcpResourceUri) } - topicName := path.Base(response.NotificationConfig.PubsubTopic) - if topicName != pubsubTopic { - return fmt.Errorf("hl7_v2_store 'NotificationConfig' not updated ('%s' != '%s'): %s", topicName, pubsubTopic, gcpResourceUri) + notifications := response.NotificationConfigs + if len(notifications) > 0 { + topicName := path.Base(notifications[0].PubsubTopic) + if topicName != pubsubTopic { + return fmt.Errorf("hl7_v2_store 'NotificationConfig' not updated ('%s' != '%s'): %s", topicName, pubsubTopic, gcpResourceUri) + } } } @@ -235,6 +210,3 @@ func testAccCheckGoogleHealthcareHl7V2StoreUpdate(pubsubTopic string) resource.T return nil } } -<% else %> -// Magic Modules doesn't let us remove files - blank out beta-only common-compile files for now. -<% end -%> diff --git a/third_party/terraform/tests/resource_iap_brand_test.go b/third_party/terraform/tests/resource_iap_brand_test.go index 2a35fe4133ad..53878e109d69 100644 --- a/third_party/terraform/tests/resource_iap_brand_test.go +++ b/third_party/terraform/tests/resource_iap_brand_test.go @@ -3,7 +3,6 @@ package google import ( "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" ) @@ -13,10 +12,10 @@ func TestAccIapBrand_iapBrandExample(t *testing.T) { context := map[string]interface{}{ "org_id": getTestOrgFromEnv(t), "org_domain": getTestOrgDomainFromEnv(t), - "random_suffix": acctest.RandString(10), + "random_suffix": randString(t, 10), } - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ diff --git a/third_party/terraform/tests/resource_identity_platform_default_supported_idp_config_test.go b/third_party/terraform/tests/resource_identity_platform_default_supported_idp_config_test.go index 177205352c92..41e3b5f0ba68 100644 --- a/third_party/terraform/tests/resource_identity_platform_default_supported_idp_config_test.go +++ b/third_party/terraform/tests/resource_identity_platform_default_supported_idp_config_test.go @@ -5,7 +5,6 @@ import ( "strings" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" ) @@ -14,13 +13,13 @@ func TestAccIdentityPlatformDefaultSupportedIdpConfig_defaultSupportedIdpConfigU t.Parallel() context := map[string]interface{}{ - "random_suffix": acctest.RandString(10), + "random_suffix": randString(t, 10), } - resource.Test(t, resource.TestCase{ + 
vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckIdentityPlatformDefaultSupportedIdpConfigDestroy, + CheckDestroy: testAccCheckIdentityPlatformDefaultSupportedIdpConfigDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccIdentityPlatformDefaultSupportedIdpConfig_defaultSupportedIdpConfigBasic(context), @@ -42,29 +41,31 @@ func TestAccIdentityPlatformDefaultSupportedIdpConfig_defaultSupportedIdpConfigU }) } -func testAccCheckIdentityPlatformDefaultSupportedIdpConfigDestroy(s *terraform.State) error { - for name, rs := range s.RootModule().Resources { - if rs.Type != "google_identity_platform_default_supported_idp_config" { - continue - } - if strings.HasPrefix(name, "data.") { - continue - } +func testAccCheckIdentityPlatformDefaultSupportedIdpConfigDestroyProducer(t *testing.T) func(s *terraform.State) error { + return func(s *terraform.State) error { + for name, rs := range s.RootModule().Resources { + if rs.Type != "google_identity_platform_default_supported_idp_config" { + continue + } + if strings.HasPrefix(name, "data.") { + continue + } - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) - url, err := replaceVarsForTest(config, rs, "{{IdentityPlatformBasePath}}projects/{{project}}/defaultSupportedIdpConfigs/{{client_id}}") - if err != nil { - return err - } + url, err := replaceVarsForTest(config, rs, "{{IdentityPlatformBasePath}}projects/{{project}}/defaultSupportedIdpConfigs/{{client_id}}") + if err != nil { + return err + } - _, err = sendRequest(config, "GET", "", url, nil) - if err == nil { - return fmt.Errorf("IdentityPlatformDefaultSupportedIdpConfig still exists at %s", url) + _, err = sendRequest(config, "GET", "", url, nil) + if err == nil { + return fmt.Errorf("IdentityPlatformDefaultSupportedIdpConfig still exists at %s", url) + } } - } - return nil + return nil + } } func testAccIdentityPlatformDefaultSupportedIdpConfig_defaultSupportedIdpConfigBasic(context map[string]interface{}) string { diff --git a/third_party/terraform/tests/resource_identity_platform_inbound_saml_config_test.go b/third_party/terraform/tests/resource_identity_platform_inbound_saml_config_test.go index fa3a0e65741c..de52e1ec38be 100644 --- a/third_party/terraform/tests/resource_identity_platform_inbound_saml_config_test.go +++ b/third_party/terraform/tests/resource_identity_platform_inbound_saml_config_test.go @@ -3,7 +3,6 @@ package google import ( "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" ) @@ -11,13 +10,13 @@ func TestAccIdentityPlatformInboundSamlConfig_inboundSamlConfigUpdate(t *testing t.Parallel() context := map[string]interface{}{ - "random_suffix": acctest.RandString(10), + "random_suffix": randString(t, 10), } - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckIdentityPlatformInboundSamlConfigDestroy, + CheckDestroy: testAccCheckIdentityPlatformInboundSamlConfigDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccIdentityPlatformInboundSamlConfig_inboundSamlConfigBasic(context), diff --git a/third_party/terraform/tests/resource_identity_platform_oauth_idp_config_test.go b/third_party/terraform/tests/resource_identity_platform_oauth_idp_config_test.go index c163d3e431d5..489f4bb7737f 100644 --- 
a/third_party/terraform/tests/resource_identity_platform_oauth_idp_config_test.go +++ b/third_party/terraform/tests/resource_identity_platform_oauth_idp_config_test.go @@ -3,7 +3,6 @@ package google import ( "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" ) @@ -11,13 +10,13 @@ func TestAccIdentityPlatformOauthIdpConfig_identityPlatformOauthIdpConfigUpdate( t.Parallel() context := map[string]interface{}{ - "random_suffix": acctest.RandString(10), + "random_suffix": randString(t, 10), } - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckIdentityPlatformOauthIdpConfigDestroy, + CheckDestroy: testAccCheckIdentityPlatformOauthIdpConfigDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccIdentityPlatformOauthIdpConfig_identityPlatformOauthIdpConfigBasic(context), diff --git a/third_party/terraform/tests/resource_identity_platform_tenant_default_supported_idp_config_test.go b/third_party/terraform/tests/resource_identity_platform_tenant_default_supported_idp_config_test.go index d2eceaea22bf..071949bc6d04 100644 --- a/third_party/terraform/tests/resource_identity_platform_tenant_default_supported_idp_config_test.go +++ b/third_party/terraform/tests/resource_identity_platform_tenant_default_supported_idp_config_test.go @@ -3,7 +3,6 @@ package google import ( "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" ) @@ -11,13 +10,13 @@ func TestAccIdentityPlatformTenantDefaultSupportedIdpConfig_identityPlatformTena t.Parallel() context := map[string]interface{}{ - "random_suffix": acctest.RandString(10), + "random_suffix": randString(t, 10), } - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckIdentityPlatformTenantDefaultSupportedIdpConfigDestroy, + CheckDestroy: testAccCheckIdentityPlatformTenantDefaultSupportedIdpConfigDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccIdentityPlatformTenantDefaultSupportedIdpConfig_identityPlatformTenantDefaultSupportedIdpConfigBasic(context), diff --git a/third_party/terraform/tests/resource_identity_platform_tenant_indound_saml_config_test.go b/third_party/terraform/tests/resource_identity_platform_tenant_indound_saml_config_test.go index 06c04de6ca8f..1d9a71a0fe40 100644 --- a/third_party/terraform/tests/resource_identity_platform_tenant_indound_saml_config_test.go +++ b/third_party/terraform/tests/resource_identity_platform_tenant_indound_saml_config_test.go @@ -3,7 +3,6 @@ package google import ( "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" ) @@ -11,13 +10,13 @@ func TestAccIdentityPlatformTenantInboundSamlConfig_identityPlatformTenantInboun t.Parallel() context := map[string]interface{}{ - "random_suffix": acctest.RandString(10), + "random_suffix": randString(t, 10), } - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckIdentityPlatformTenantInboundSamlConfigDestroy, + CheckDestroy: testAccCheckIdentityPlatformTenantInboundSamlConfigDestroyProducer(t), Steps: []resource.TestStep{ { Config: 
testAccIdentityPlatformTenantInboundSamlConfig_identityPlatformTenantInboundSamlConfigBasic(context), diff --git a/third_party/terraform/tests/resource_identity_platform_tenant_oauth_idp_config_test.go b/third_party/terraform/tests/resource_identity_platform_tenant_oauth_idp_config_test.go index d745021d7d34..ca975e3cf55b 100644 --- a/third_party/terraform/tests/resource_identity_platform_tenant_oauth_idp_config_test.go +++ b/third_party/terraform/tests/resource_identity_platform_tenant_oauth_idp_config_test.go @@ -3,7 +3,6 @@ package google import ( "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" ) @@ -11,13 +10,13 @@ func TestAccIdentityPlatformTenantOauthIdpConfig_identityPlatformTenantOauthIdpC t.Parallel() context := map[string]interface{}{ - "random_suffix": acctest.RandString(10), + "random_suffix": randString(t, 10), } - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckIdentityPlatformTenantOauthIdpConfigDestroy, + CheckDestroy: testAccCheckIdentityPlatformTenantOauthIdpConfigDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccIdentityPlatformTenantOauthIdpConfig_identityPlatformTenantOauthIdpConfigBasic(context), diff --git a/third_party/terraform/tests/resource_identity_platform_tenant_test.go b/third_party/terraform/tests/resource_identity_platform_tenant_test.go index a272aea7b730..575e1807bd30 100644 --- a/third_party/terraform/tests/resource_identity_platform_tenant_test.go +++ b/third_party/terraform/tests/resource_identity_platform_tenant_test.go @@ -3,7 +3,6 @@ package google import ( "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" ) @@ -11,13 +10,13 @@ func TestAccIdentityPlatformTenant_identityPlatformTenantUpdate(t *testing.T) { t.Parallel() context := map[string]interface{}{ - "random_suffix": acctest.RandString(10), + "random_suffix": randString(t, 10), } - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckIdentityPlatformTenantDestroy, + CheckDestroy: testAccCheckIdentityPlatformTenantDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccIdentityPlatformTenant_identityPlatformTenantBasic(context), diff --git a/third_party/terraform/tests/resource_kms_crypto_key_iam_test.go.erb b/third_party/terraform/tests/resource_kms_crypto_key_iam_test.go.erb index e94d650a03a6..b5640eaf20d4 100644 --- a/third_party/terraform/tests/resource_kms_crypto_key_iam_test.go.erb +++ b/third_party/terraform/tests/resource_kms_crypto_key_iam_test.go.erb @@ -7,7 +7,6 @@ import ( "sort" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" ) @@ -16,26 +15,26 @@ func TestAccKmsCryptoKeyIamBinding(t *testing.T) { t.Parallel() orgId := getTestOrgFromEnv(t) - projectId := acctest.RandomWithPrefix("tf-test") + projectId := fmt.Sprintf("tf-test-%d", randInt(t)) billingAccount := getTestBillingAccountFromEnv(t) - account := acctest.RandomWithPrefix("tf-test") + account := fmt.Sprintf("tf-test-%d", randInt(t)) roleId := "roles/cloudkms.cryptoKeyDecrypter" - keyRingName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + keyRingName := 
fmt.Sprintf("tf-test-%s", randString(t, 10)) keyRingId := &kmsKeyRingId{ Project: projectId, Location: DEFAULT_KMS_TEST_LOCATION, Name: keyRingName, } - cryptoKeyName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + cryptoKeyName := fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ { // Test Iam Binding creation Config: testAccKmsCryptoKeyIamBinding_basic(projectId, orgId, billingAccount, account, keyRingName, cryptoKeyName, roleId), - Check: testAccCheckGoogleKmsCryptoKeyIamBindingExists("foo", roleId, []string{ + Check: testAccCheckGoogleKmsCryptoKeyIamBindingExists(t, "foo", roleId, []string{ fmt.Sprintf("serviceAccount:%s@%s.iam.gserviceaccount.com", account, projectId), }), }, @@ -48,7 +47,7 @@ func TestAccKmsCryptoKeyIamBinding(t *testing.T) { { // Test Iam Binding update Config: testAccKmsCryptoKeyIamBinding_update(projectId, orgId, billingAccount, account, keyRingName, cryptoKeyName, roleId), - Check: testAccCheckGoogleKmsCryptoKeyIamBindingExists("foo", roleId, []string{ + Check: testAccCheckGoogleKmsCryptoKeyIamBindingExists(t, "foo", roleId, []string{ fmt.Sprintf("serviceAccount:%s@%s.iam.gserviceaccount.com", account, projectId), fmt.Sprintf("serviceAccount:%s-2@%s.iam.gserviceaccount.com", account, projectId), }), @@ -68,20 +67,20 @@ func TestAccKmsCryptoKeyIamBinding_withCondition(t *testing.T) { t.Parallel() orgId := getTestOrgFromEnv(t) - projectId := acctest.RandomWithPrefix("tf-test") + projectId := fmt.Sprintf("tf-test-%d", randInt(t)) billingAccount := getTestBillingAccountFromEnv(t) - account := acctest.RandomWithPrefix("tf-test") + account := fmt.Sprintf("tf-test-%d", randInt(t)) roleId := "roles/cloudkms.cryptoKeyDecrypter" - keyRingName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + keyRingName := fmt.Sprintf("tf-test-%s", randString(t, 10)) keyRingId := &kmsKeyRingId{ Project: projectId, Location: DEFAULT_KMS_TEST_LOCATION, Name: keyRingName, } - cryptoKeyName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + cryptoKeyName := fmt.Sprintf("tf-test-%s", randString(t, 10)) conditionTitle := "expires_after_2019_12_31" - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -103,26 +102,26 @@ func TestAccKmsCryptoKeyIamMember(t *testing.T) { t.Parallel() orgId := getTestOrgFromEnv(t) - projectId := acctest.RandomWithPrefix("tf-test") + projectId := fmt.Sprintf("tf-test-%d", randInt(t)) billingAccount := getTestBillingAccountFromEnv(t) - account := acctest.RandomWithPrefix("tf-test") + account := fmt.Sprintf("tf-test-%d", randInt(t)) roleId := "roles/cloudkms.cryptoKeyEncrypter" - keyRingName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + keyRingName := fmt.Sprintf("tf-test-%s", randString(t, 10)) keyRingId := &kmsKeyRingId{ Project: projectId, Location: DEFAULT_KMS_TEST_LOCATION, Name: keyRingName, } - cryptoKeyName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + cryptoKeyName := fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ { // Test Iam Member creation (no update for member, no need to test) Config: testAccKmsCryptoKeyIamMember_basic(projectId, orgId, billingAccount, account, keyRingName, 
cryptoKeyName, roleId), - Check: testAccCheckGoogleKmsCryptoKeyIamMemberExists("foo", roleId, + Check: testAccCheckGoogleKmsCryptoKeyIamMemberExists(t, "foo", roleId, fmt.Sprintf("serviceAccount:%s@%s.iam.gserviceaccount.com", account, projectId), ), }, @@ -141,20 +140,20 @@ func TestAccKmsCryptoKeyIamMember_withCondition(t *testing.T) { t.Parallel() orgId := getTestOrgFromEnv(t) - projectId := acctest.RandomWithPrefix("tf-test") + projectId := fmt.Sprintf("tf-test-%d", randInt(t)) billingAccount := getTestBillingAccountFromEnv(t) - account := acctest.RandomWithPrefix("tf-test") + account := fmt.Sprintf("tf-test-%d", randInt(t)) roleId := "roles/cloudkms.cryptoKeyEncrypter" - keyRingName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + keyRingName := fmt.Sprintf("tf-test-%s", randString(t, 10)) keyRingId := &kmsKeyRingId{ Project: projectId, Location: DEFAULT_KMS_TEST_LOCATION, Name: keyRingName, } - cryptoKeyName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + cryptoKeyName := fmt.Sprintf("tf-test-%s", randString(t, 10)) conditionTitle := "expires_after_2019_12_31" - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -176,26 +175,26 @@ func TestAccKmsCryptoKeyIamPolicy(t *testing.T) { t.Parallel() orgId := getTestOrgFromEnv(t) - projectId := acctest.RandomWithPrefix("tf-test") + projectId := fmt.Sprintf("tf-test-%d", randInt(t)) billingAccount := getTestBillingAccountFromEnv(t) - account := acctest.RandomWithPrefix("tf-test") + account := fmt.Sprintf("tf-test-%d", randInt(t)) roleId := "roles/cloudkms.cryptoKeyEncrypter" - keyRingName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + keyRingName := fmt.Sprintf("tf-test-%s", randString(t, 10)) keyRingId := &kmsKeyRingId{ Project: projectId, Location: DEFAULT_KMS_TEST_LOCATION, Name: keyRingName, } - cryptoKeyName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + cryptoKeyName := fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ { Config: testAccKmsCryptoKeyIamPolicy_basic(projectId, orgId, billingAccount, account, keyRingName, cryptoKeyName, roleId), - Check: testAccCheckGoogleCryptoKmsKeyIam("foo", roleId, []string{ + Check: testAccCheckGoogleCryptoKmsKeyIam(t, "foo", roleId, []string{ fmt.Sprintf("serviceAccount:%s@%s.iam.gserviceaccount.com", account, projectId), }), }, @@ -214,21 +213,21 @@ func TestAccKmsCryptoKeyIamPolicy_withCondition(t *testing.T) { t.Parallel() orgId := getTestOrgFromEnv(t) - projectId := acctest.RandomWithPrefix("tf-test") + projectId := fmt.Sprintf("tf-test-%d", randInt(t)) billingAccount := getTestBillingAccountFromEnv(t) - account := acctest.RandomWithPrefix("tf-test") + account := fmt.Sprintf("tf-test-%d", randInt(t)) roleId := "roles/cloudkms.cryptoKeyEncrypter" - keyRingName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + keyRingName := fmt.Sprintf("tf-test-%s", randString(t, 10)) keyRingId := &kmsKeyRingId{ Project: projectId, Location: DEFAULT_KMS_TEST_LOCATION, Name: keyRingName, } - cryptoKeyName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + cryptoKeyName := fmt.Sprintf("tf-test-%s", randString(t, 10)) conditionTitle := "expires_after_2019_12_31" - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, 
Steps: []resource.TestStep{ @@ -246,14 +245,14 @@ func TestAccKmsCryptoKeyIamPolicy_withCondition(t *testing.T) { } <% end -%> -func testAccCheckGoogleKmsCryptoKeyIamBindingExists(bindingResourceName, roleId string, members []string) resource.TestCheckFunc { +func testAccCheckGoogleKmsCryptoKeyIamBindingExists(t *testing.T, bindingResourceName, roleId string, members []string) resource.TestCheckFunc { return func(s *terraform.State) error { bindingRs, ok := s.RootModule().Resources[fmt.Sprintf("google_kms_crypto_key_iam_binding.%s", bindingResourceName)] if !ok { return fmt.Errorf("Not found: %s", bindingResourceName) } - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) cryptoKeyId, err := parseKmsCryptoKeyId(bindingRs.Primary.Attributes["crypto_key_id"], config) if err != nil { @@ -282,14 +281,14 @@ func testAccCheckGoogleKmsCryptoKeyIamBindingExists(bindingResourceName, roleId } } -func testAccCheckGoogleKmsCryptoKeyIamMemberExists(n, role, member string) resource.TestCheckFunc { +func testAccCheckGoogleKmsCryptoKeyIamMemberExists(t *testing.T, n, role, member string) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources["google_kms_crypto_key_iam_member."+n] if !ok { return fmt.Errorf("Not found: %s", n) } - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) cryptoKeyId, err := parseKmsCryptoKeyId(rs.Primary.Attributes["crypto_key_id"], config) if err != nil { @@ -317,14 +316,14 @@ func testAccCheckGoogleKmsCryptoKeyIamMemberExists(n, role, member string) resou } } -func testAccCheckGoogleCryptoKmsKeyIam(n, role string, members []string) resource.TestCheckFunc { +func testAccCheckGoogleCryptoKmsKeyIam(t *testing.T, n, role string, members []string) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources["google_kms_crypto_key_iam_policy."+n] if !ok { return fmt.Errorf("IAM policy resource not found") } - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) cryptoKeyId, err := parseKmsCryptoKeyId(rs.Primary.Attributes["crypto_key_id"], config) if err != nil { diff --git a/third_party/terraform/tests/resource_kms_crypto_key_test.go b/third_party/terraform/tests/resource_kms_crypto_key_test.go index 6ce8f64192b6..e49a797f282f 100644 --- a/third_party/terraform/tests/resource_kms_crypto_key_test.go +++ b/third_party/terraform/tests/resource_kms_crypto_key_test.go @@ -5,7 +5,6 @@ import ( "testing" "time" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" ) @@ -168,14 +167,14 @@ func TestCryptoKeyStateUpgradeV0(t *testing.T) { func TestAccKmsCryptoKey_basic(t *testing.T) { t.Parallel() - projectId := acctest.RandomWithPrefix("tf-test") + projectId := fmt.Sprintf("tf-test-%d", randInt(t)) projectOrg := getTestOrgFromEnv(t) location := getTestRegionFromEnv() projectBillingAccount := getTestBillingAccountFromEnv(t) - keyRingName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) - cryptoKeyName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + keyRingName := fmt.Sprintf("tf-test-%s", randString(t, 10)) + cryptoKeyName := fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -192,8 +191,8 @@ func TestAccKmsCryptoKey_basic(t *testing.T) { 
Config: testGoogleKmsCryptoKey_removed(projectId, projectOrg, projectBillingAccount, keyRingName), Check: resource.ComposeTestCheckFunc( testAccCheckGoogleKmsCryptoKeyWasRemovedFromState("google_kms_crypto_key.crypto_key"), - testAccCheckGoogleKmsCryptoKeyVersionsDestroyed(projectId, location, keyRingName, cryptoKeyName), - testAccCheckGoogleKmsCryptoKeyRotationDisabled(projectId, location, keyRingName, cryptoKeyName), + testAccCheckGoogleKmsCryptoKeyVersionsDestroyed(t, projectId, location, keyRingName, cryptoKeyName), + testAccCheckGoogleKmsCryptoKeyRotationDisabled(t, projectId, location, keyRingName, cryptoKeyName), ), }, }, @@ -201,18 +200,20 @@ func TestAccKmsCryptoKey_basic(t *testing.T) { } func TestAccKmsCryptoKey_rotation(t *testing.T) { + // when rotation is set, next rotation time is set using time.Now + skipIfVcr(t) t.Parallel() - projectId := acctest.RandomWithPrefix("tf-test") + projectId := fmt.Sprintf("tf-test-%d", randInt(t)) projectOrg := getTestOrgFromEnv(t) location := getTestRegionFromEnv() projectBillingAccount := getTestBillingAccountFromEnv(t) - keyRingName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) - cryptoKeyName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + keyRingName := fmt.Sprintf("tf-test-%s", randString(t, 10)) + cryptoKeyName := fmt.Sprintf("tf-test-%s", randString(t, 10)) rotationPeriod := "100000s" updatedRotationPeriod := "7776000s" - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -245,8 +246,8 @@ func TestAccKmsCryptoKey_rotation(t *testing.T) { Config: testGoogleKmsCryptoKey_removed(projectId, projectOrg, projectBillingAccount, keyRingName), Check: resource.ComposeTestCheckFunc( testAccCheckGoogleKmsCryptoKeyWasRemovedFromState("google_kms_crypto_key.crypto_key"), - testAccCheckGoogleKmsCryptoKeyVersionsDestroyed(projectId, location, keyRingName, cryptoKeyName), - testAccCheckGoogleKmsCryptoKeyRotationDisabled(projectId, location, keyRingName, cryptoKeyName), + testAccCheckGoogleKmsCryptoKeyVersionsDestroyed(t, projectId, location, keyRingName, cryptoKeyName), + testAccCheckGoogleKmsCryptoKeyRotationDisabled(t, projectId, location, keyRingName, cryptoKeyName), ), }, }, @@ -256,16 +257,16 @@ func TestAccKmsCryptoKey_rotation(t *testing.T) { func TestAccKmsCryptoKey_template(t *testing.T) { t.Parallel() - projectId := acctest.RandomWithPrefix("tf-test") + projectId := fmt.Sprintf("tf-test-%d", randInt(t)) projectOrg := getTestOrgFromEnv(t) location := getTestRegionFromEnv() projectBillingAccount := getTestBillingAccountFromEnv(t) - keyRingName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) - cryptoKeyName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + keyRingName := fmt.Sprintf("tf-test-%s", randString(t, 10)) + cryptoKeyName := fmt.Sprintf("tf-test-%s", randString(t, 10)) algorithm := "EC_SIGN_P256_SHA256" updatedAlgorithm := "EC_SIGN_P384_SHA384" - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -290,8 +291,8 @@ func TestAccKmsCryptoKey_template(t *testing.T) { Config: testGoogleKmsCryptoKey_removed(projectId, projectOrg, projectBillingAccount, keyRingName), Check: resource.ComposeTestCheckFunc( testAccCheckGoogleKmsCryptoKeyWasRemovedFromState("google_kms_crypto_key.crypto_key"), - testAccCheckGoogleKmsCryptoKeyVersionsDestroyed(projectId, location, keyRingName, cryptoKeyName), - 
testAccCheckGoogleKmsCryptoKeyRotationDisabled(projectId, location, keyRingName, cryptoKeyName), + testAccCheckGoogleKmsCryptoKeyVersionsDestroyed(t, projectId, location, keyRingName, cryptoKeyName), + testAccCheckGoogleKmsCryptoKeyRotationDisabled(t, projectId, location, keyRingName, cryptoKeyName), ), }, }, @@ -314,9 +315,9 @@ func testAccCheckGoogleKmsCryptoKeyWasRemovedFromState(resourceName string) reso // KMS KeyRings cannot be deleted. This ensures that the CryptoKey resource's CryptoKeyVersion // sub-resources were scheduled to be destroyed, rendering the key itself inoperable. -func testAccCheckGoogleKmsCryptoKeyVersionsDestroyed(projectId, location, keyRingName, cryptoKeyName string) resource.TestCheckFunc { +func testAccCheckGoogleKmsCryptoKeyVersionsDestroyed(t *testing.T, projectId, location, keyRingName, cryptoKeyName string) resource.TestCheckFunc { return func(_ *terraform.State) error { - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) gcpResourceUri := fmt.Sprintf("projects/%s/locations/%s/keyRings/%s/cryptoKeys/%s", projectId, location, keyRingName, cryptoKeyName) response, err := config.clientKms.Projects.Locations.KeyRings.CryptoKeys.CryptoKeyVersions.List(gcpResourceUri).Do() @@ -339,9 +340,9 @@ func testAccCheckGoogleKmsCryptoKeyVersionsDestroyed(projectId, location, keyRin // KMS KeyRings cannot be deleted. This ensures that the CryptoKey autorotation // was disabled to prevent more versions of the key from being created. -func testAccCheckGoogleKmsCryptoKeyRotationDisabled(projectId, location, keyRingName, cryptoKeyName string) resource.TestCheckFunc { +func testAccCheckGoogleKmsCryptoKeyRotationDisabled(t *testing.T, projectId, location, keyRingName, cryptoKeyName string) resource.TestCheckFunc { return func(_ *terraform.State) error { - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) gcpResourceUri := fmt.Sprintf("projects/%s/locations/%s/keyRings/%s/cryptoKeys/%s", projectId, location, keyRingName, cryptoKeyName) response, err := config.clientKms.Projects.Locations.KeyRings.CryptoKeys.Get(gcpResourceUri).Do() diff --git a/third_party/terraform/tests/resource_kms_key_ring_iam_test.go.erb b/third_party/terraform/tests/resource_kms_key_ring_iam_test.go.erb index f07d4b8dc2ac..8826e2006df7 100644 --- a/third_party/terraform/tests/resource_kms_key_ring_iam_test.go.erb +++ b/third_party/terraform/tests/resource_kms_key_ring_iam_test.go.erb @@ -7,7 +7,6 @@ import ( "sort" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" ) @@ -18,11 +17,11 @@ func TestAccKmsKeyRingIamBinding(t *testing.T) { t.Parallel() orgId := getTestOrgFromEnv(t) - projectId := acctest.RandomWithPrefix("tf-test") + projectId := fmt.Sprintf("tf-test-%d", randInt(t)) billingAccount := getTestBillingAccountFromEnv(t) - account := acctest.RandomWithPrefix("tf-test") + account := fmt.Sprintf("tf-test-%d", randInt(t)) roleId := "roles/cloudkms.cryptoKeyDecrypter" - keyRingName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + keyRingName := fmt.Sprintf("tf-test-%s", randString(t, 10)) keyRingId := &kmsKeyRingId{ Project: projectId, @@ -30,14 +29,14 @@ func TestAccKmsKeyRingIamBinding(t *testing.T) { Name: keyRingName, } - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ { // Test Iam Binding 
creation Config: testAccKmsKeyRingIamBinding_basic(projectId, orgId, billingAccount, account, keyRingName, roleId), - Check: testAccCheckGoogleKmsKeyRingIam(keyRingId.keyRingId(), roleId, []string{ + Check: testAccCheckGoogleKmsKeyRingIam(t, keyRingId.keyRingId(), roleId, []string{ fmt.Sprintf("serviceAccount:%s@%s.iam.gserviceaccount.com", account, projectId), }), }, @@ -50,7 +49,7 @@ func TestAccKmsKeyRingIamBinding(t *testing.T) { { // Test Iam Binding update Config: testAccKmsKeyRingIamBinding_update(projectId, orgId, billingAccount, account, keyRingName, roleId), - Check: testAccCheckGoogleKmsKeyRingIam(keyRingId.keyRingId(), roleId, []string{ + Check: testAccCheckGoogleKmsKeyRingIam(t, keyRingId.keyRingId(), roleId, []string{ fmt.Sprintf("serviceAccount:%s@%s.iam.gserviceaccount.com", account, projectId), fmt.Sprintf("serviceAccount:%s-2@%s.iam.gserviceaccount.com", account, projectId), }), @@ -70,11 +69,11 @@ func TestAccKmsKeyRingIamBinding_withCondition(t *testing.T) { t.Parallel() orgId := getTestOrgFromEnv(t) - projectId := acctest.RandomWithPrefix("tf-test") + projectId := fmt.Sprintf("tf-test-%d", randInt(t)) billingAccount := getTestBillingAccountFromEnv(t) - account := acctest.RandomWithPrefix("tf-test") + account := fmt.Sprintf("tf-test-%d", randInt(t)) roleId := "roles/cloudkms.cryptoKeyDecrypter" - keyRingName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + keyRingName := fmt.Sprintf("tf-test-%s", randString(t, 10)) conditionTitle := "expires_after_2019_12_31" keyRingId := &kmsKeyRingId{ @@ -83,7 +82,7 @@ func TestAccKmsKeyRingIamBinding_withCondition(t *testing.T) { Name: keyRingName, } - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -105,11 +104,11 @@ func TestAccKmsKeyRingIamMember(t *testing.T) { t.Parallel() orgId := getTestOrgFromEnv(t) - projectId := acctest.RandomWithPrefix("tf-test") + projectId := fmt.Sprintf("tf-test-%d", randInt(t)) billingAccount := getTestBillingAccountFromEnv(t) - account := acctest.RandomWithPrefix("tf-test") + account := fmt.Sprintf("tf-test-%d", randInt(t)) roleId := "roles/cloudkms.cryptoKeyEncrypter" - keyRingName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + keyRingName := fmt.Sprintf("tf-test-%s", randString(t, 10)) keyRingId := &kmsKeyRingId{ Project: projectId, @@ -117,14 +116,14 @@ func TestAccKmsKeyRingIamMember(t *testing.T) { Name: keyRingName, } - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ { // Test Iam Member creation (no update for member, no need to test) Config: testAccKmsKeyRingIamMember_basic(projectId, orgId, billingAccount, account, keyRingName, roleId), - Check: testAccCheckGoogleKmsKeyRingIam(keyRingId.keyRingId(), roleId, []string{ + Check: testAccCheckGoogleKmsKeyRingIam(t, keyRingId.keyRingId(), roleId, []string{ fmt.Sprintf("serviceAccount:%s@%s.iam.gserviceaccount.com", account, projectId), }), }, @@ -143,11 +142,11 @@ func TestAccKmsKeyRingIamMember_withCondition(t *testing.T) { t.Parallel() orgId := getTestOrgFromEnv(t) - projectId := acctest.RandomWithPrefix("tf-test") + projectId := fmt.Sprintf("tf-test-%d", randInt(t)) billingAccount := getTestBillingAccountFromEnv(t) - account := acctest.RandomWithPrefix("tf-test") + account := fmt.Sprintf("tf-test-%d", randInt(t)) roleId := "roles/cloudkms.cryptoKeyEncrypter" - keyRingName := 
fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + keyRingName := fmt.Sprintf("tf-test-%s", randString(t, 10)) conditionTitle := "expires_after_2019_12_31" keyRingId := &kmsKeyRingId{ @@ -156,7 +155,7 @@ func TestAccKmsKeyRingIamMember_withCondition(t *testing.T) { Name: keyRingName, } - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -178,11 +177,11 @@ func TestAccKmsKeyRingIamPolicy(t *testing.T) { t.Parallel() orgId := getTestOrgFromEnv(t) - projectId := acctest.RandomWithPrefix("tf-test") + projectId := fmt.Sprintf("tf-test-%d", randInt(t)) billingAccount := getTestBillingAccountFromEnv(t) - account := acctest.RandomWithPrefix("tf-test") + account := fmt.Sprintf("tf-test-%d", randInt(t)) roleId := "roles/cloudkms.cryptoKeyEncrypter" - keyRingName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + keyRingName := fmt.Sprintf("tf-test-%s", randString(t, 10)) keyRingId := &kmsKeyRingId{ Project: projectId, @@ -190,13 +189,13 @@ func TestAccKmsKeyRingIamPolicy(t *testing.T) { Name: keyRingName, } - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ { Config: testAccKmsKeyRingIamPolicy_basic(projectId, orgId, billingAccount, account, keyRingName, roleId), - Check: testAccCheckGoogleKmsKeyRingIam(keyRingId.keyRingId(), roleId, []string{ + Check: testAccCheckGoogleKmsKeyRingIam(t, keyRingId.keyRingId(), roleId, []string{ fmt.Sprintf("serviceAccount:%s@%s.iam.gserviceaccount.com", account, projectId), }), }, @@ -215,11 +214,11 @@ func TestAccKmsKeyRingIamPolicy_withCondition(t *testing.T) { t.Parallel() orgId := getTestOrgFromEnv(t) - projectId := acctest.RandomWithPrefix("tf-test") + projectId := fmt.Sprintf("tf-test-%d", randInt(t)) billingAccount := getTestBillingAccountFromEnv(t) - account := acctest.RandomWithPrefix("tf-test") + account := fmt.Sprintf("tf-test-%d", randInt(t)) roleId := "roles/cloudkms.cryptoKeyEncrypter" - keyRingName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + keyRingName := fmt.Sprintf("tf-test-%s", randString(t, 10)) conditionTitle := "expires_after_2019_12_31" keyRingId := &kmsKeyRingId{ @@ -228,7 +227,7 @@ func TestAccKmsKeyRingIamPolicy_withCondition(t *testing.T) { Name: keyRingName, } - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -246,9 +245,9 @@ func TestAccKmsKeyRingIamPolicy_withCondition(t *testing.T) { } <% end -%> -func testAccCheckGoogleKmsKeyRingIam(keyRingId, role string, members []string) resource.TestCheckFunc { +func testAccCheckGoogleKmsKeyRingIam(t *testing.T, keyRingId, role string, members []string) resource.TestCheckFunc { return func(s *terraform.State) error { - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) p, err := config.clientKms.Projects.Locations.KeyRings.GetIamPolicy(keyRingId).Do() if err != nil { return err diff --git a/third_party/terraform/tests/resource_kms_key_ring_import_job_test.go b/third_party/terraform/tests/resource_kms_key_ring_import_job_test.go new file mode 100644 index 000000000000..9c81538976d7 --- /dev/null +++ b/third_party/terraform/tests/resource_kms_key_ring_import_job_test.go @@ -0,0 +1,48 @@ +package google + +import ( + "testing" + + "github.com/hashicorp/terraform-plugin-sdk/helper/resource" +) + +func 
TestAccKmsKeyRingImportJob_basic(t *testing.T) { + t.Parallel() + + context := map[string]interface{}{ + "random_suffix": randString(t, 10), + } + + vcrTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + { + Config: testGoogleKmsKeyRingImportJob_basic(context), + }, + { + ResourceName: "google_kms_key_ring_import_job.import-job", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"key_ring", "import_job_id", "state"}, + }, + }, + }) +} + +func testGoogleKmsKeyRingImportJob_basic(context map[string]interface{}) string { + return Nprintf(` +resource "google_kms_key_ring" "keyring" { + name = "tf-test-import-job-%{random_suffix}" + location = "global" +} + +resource "google_kms_key_ring_import_job" "import-job" { + key_ring = google_kms_key_ring.keyring.id + import_job_id = "my-import-job" + + import_method = "RSA_OAEP_3072_SHA1_AES_256" + protection_level = "SOFTWARE" +} +`, context) +} diff --git a/third_party/terraform/tests/resource_kms_key_ring_test.go b/third_party/terraform/tests/resource_kms_key_ring_test.go index b1d81f119b75..51e04042ab08 100644 --- a/third_party/terraform/tests/resource_kms_key_ring_test.go +++ b/third_party/terraform/tests/resource_kms_key_ring_test.go @@ -4,7 +4,6 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" ) @@ -72,12 +71,12 @@ func TestKeyRingIdParsing(t *testing.T) { } func TestAccKmsKeyRing_basic(t *testing.T) { - projectId := acctest.RandomWithPrefix("tf-test") + projectId := fmt.Sprintf("tf-test-%d", randInt(t)) projectOrg := getTestOrgFromEnv(t) projectBillingAccount := getTestBillingAccountFromEnv(t) - keyRingName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + keyRingName := fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckGoogleKmsKeyRingWasRemovedFromState("google_kms_key_ring.key_ring"), diff --git a/third_party/terraform/tests/resource_kms_secret_ciphertext_test.go b/third_party/terraform/tests/resource_kms_secret_ciphertext_test.go index 33e8e891ed11..1df5683f5680 100644 --- a/third_party/terraform/tests/resource_kms_secret_ciphertext_test.go +++ b/third_party/terraform/tests/resource_kms_secret_ciphertext_test.go @@ -6,7 +6,6 @@ import ( "fmt" "log" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" "google.golang.org/api/cloudkms/v1" ) @@ -17,17 +16,17 @@ func TestAccKmsSecretCiphertext_basic(t *testing.T) { kms := BootstrapKMSKey(t) - plaintext := fmt.Sprintf("secret-%s", acctest.RandString(10)) + plaintext := fmt.Sprintf("secret-%s", randString(t, 10)) aad := "plainaad" - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ { Config: testGoogleKmsSecretCiphertext(kms.CryptoKey.Name, plaintext), Check: func(s *terraform.State) error { - plaintext, err := testAccDecryptSecretDataWithCryptoKey(s, kms.CryptoKey.Name, "google_kms_secret_ciphertext.acceptance", "") + plaintext, err := testAccDecryptSecretDataWithCryptoKey(t, s, kms.CryptoKey.Name,
"google_kms_secret_ciphertext.acceptance", "") if err != nil { return err @@ -40,7 +39,7 @@ func TestAccKmsSecretCiphertext_basic(t *testing.T) { { Config: testGoogleKmsSecretCiphertext_withAAD(kms.CryptoKey.Name, plaintext, aad), Check: func(s *terraform.State) error { - plaintext, err := testAccDecryptSecretDataWithCryptoKey(s, kms.CryptoKey.Name, "google_kms_secret_ciphertext.acceptance", aad) + plaintext, err := testAccDecryptSecretDataWithCryptoKey(t, s, kms.CryptoKey.Name, "google_kms_secret_ciphertext.acceptance", aad) if err != nil { return err @@ -53,8 +52,8 @@ func TestAccKmsSecretCiphertext_basic(t *testing.T) { }) } -func testAccDecryptSecretDataWithCryptoKey(s *terraform.State, cryptoKeyId string, secretCiphertextResourceName, aad string) (string, error) { - config := testAccProvider.Meta().(*Config) +func testAccDecryptSecretDataWithCryptoKey(t *testing.T, s *terraform.State, cryptoKeyId string, secretCiphertextResourceName, aad string) (string, error) { + config := googleProviderConfig(t) rs, ok := s.RootModule().Resources[secretCiphertextResourceName] if !ok { return "", fmt.Errorf("Resource not found: %s", secretCiphertextResourceName) diff --git a/third_party/terraform/tests/resource_logging_billing_account_exclusion_test.go b/third_party/terraform/tests/resource_logging_billing_account_exclusion_test.go index ad87216cb4b3..14bf07e14b8c 100644 --- a/third_party/terraform/tests/resource_logging_billing_account_exclusion_test.go +++ b/third_party/terraform/tests/resource_logging_billing_account_exclusion_test.go @@ -4,7 +4,6 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" ) @@ -33,13 +32,13 @@ func TestAccLoggingBillingAccountExclusion(t *testing.T) { func testAccLoggingBillingAccountExclusion_basic(t *testing.T) { billingAccount := getTestBillingAccountFromEnv(t) - exclusionName := "tf-test-exclusion-" + acctest.RandString(10) - description := "Description " + acctest.RandString(10) + exclusionName := "tf-test-exclusion-" + randString(t, 10) + description := "Description " + randString(t, 10) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckLoggingBillingAccountExclusionDestroy, + CheckDestroy: testAccCheckLoggingBillingAccountExclusionDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccLoggingBillingAccountExclusion_basicCfg(exclusionName, description, billingAccount), @@ -55,14 +54,14 @@ func testAccLoggingBillingAccountExclusion_basic(t *testing.T) { func testAccLoggingBillingAccountExclusion_update(t *testing.T) { billingAccount := getTestBillingAccountFromEnv(t) - exclusionName := "tf-test-exclusion-" + acctest.RandString(10) - descriptionBefore := "Basic BillingAccount Logging Exclusion" + acctest.RandString(10) - descriptionAfter := "Updated Basic BillingAccount Logging Exclusion" + acctest.RandString(10) + exclusionName := "tf-test-exclusion-" + randString(t, 10) + descriptionBefore := "Basic BillingAccount Logging Exclusion" + randString(t, 10) + descriptionAfter := "Updated Basic BillingAccount Logging Exclusion" + randString(t, 10) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckLoggingBillingAccountExclusionDestroy, + CheckDestroy: 
testAccCheckLoggingBillingAccountExclusionDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccLoggingBillingAccountExclusion_basicCfg(exclusionName, descriptionBefore, billingAccount), @@ -87,13 +86,13 @@ func testAccLoggingBillingAccountExclusion_update(t *testing.T) { func testAccLoggingBillingAccountExclusion_multiple(t *testing.T) { billingAccount := getTestBillingAccountFromEnv(t) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckLoggingBillingAccountExclusionDestroy, + CheckDestroy: testAccCheckLoggingBillingAccountExclusionDestroyProducer(t), Steps: []resource.TestStep{ { - Config: testAccLoggingBillingAccountExclusion_multipleCfg(billingAccount), + Config: testAccLoggingBillingAccountExclusion_multipleCfg("tf-test-exclusion-"+randString(t, 10), billingAccount), }, { ResourceName: "google_logging_billing_account_exclusion.basic0", @@ -114,23 +113,25 @@ func testAccLoggingBillingAccountExclusion_multiple(t *testing.T) { }) } -func testAccCheckLoggingBillingAccountExclusionDestroy(s *terraform.State) error { - config := testAccProvider.Meta().(*Config) +func testAccCheckLoggingBillingAccountExclusionDestroyProducer(t *testing.T) func(s *terraform.State) error { + return func(s *terraform.State) error { + config := googleProviderConfig(t) - for _, rs := range s.RootModule().Resources { - if rs.Type != "google_logging_billing_account_exclusion" { - continue - } + for _, rs := range s.RootModule().Resources { + if rs.Type != "google_logging_billing_account_exclusion" { + continue + } - attributes := rs.Primary.Attributes + attributes := rs.Primary.Attributes - _, err := config.clientLogging.BillingAccounts.Exclusions.Get(attributes["id"]).Do() - if err == nil { - return fmt.Errorf("billingAccount exclusion still exists") + _, err := config.clientLogging.BillingAccounts.Exclusions.Get(attributes["id"]).Do() + if err == nil { + return fmt.Errorf("billingAccount exclusion still exists") + } } - } - return nil + return nil + } } func testAccLoggingBillingAccountExclusion_basicCfg(exclusionName, description, billingAccount string) string { @@ -144,17 +145,17 @@ resource "google_logging_billing_account_exclusion" "basic" { `, exclusionName, billingAccount, description, getTestProjectFromEnv()) } -func testAccLoggingBillingAccountExclusion_multipleCfg(billingAccount string) string { +func testAccLoggingBillingAccountExclusion_multipleCfg(exclusionName, billingAccount string) string { s := "" for i := 0; i < 3; i++ { s += fmt.Sprintf(` resource "google_logging_billing_account_exclusion" "basic%d" { - name = "%s" + name = "%s%d" billing_account = "%s" description = "Basic BillingAccount Logging Exclusion" filter = "logName=\"projects/%s/logs/compute.googleapis.com%%2Factivity_log\" AND severity>=ERROR" } -`, i, "tf-test-exclusion-"+acctest.RandString(10), billingAccount, getTestProjectFromEnv()) +`, i, exclusionName, i, billingAccount, getTestProjectFromEnv()) } return s } diff --git a/third_party/terraform/tests/resource_logging_billing_account_sink_test.go b/third_party/terraform/tests/resource_logging_billing_account_sink_test.go index 02f0073b05d7..5b9e64804b04 100644 --- a/third_party/terraform/tests/resource_logging_billing_account_sink_test.go +++ b/third_party/terraform/tests/resource_logging_billing_account_sink_test.go @@ -4,7 +4,6 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" 
"github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" "google.golang.org/api/logging/v2" @@ -13,21 +12,21 @@ import ( func TestAccLoggingBillingAccountSink_basic(t *testing.T) { t.Parallel() - sinkName := "tf-test-sink-" + acctest.RandString(10) - bucketName := "tf-test-sink-bucket-" + acctest.RandString(10) + sinkName := "tf-test-sink-" + randString(t, 10) + bucketName := "tf-test-sink-bucket-" + randString(t, 10) billingAccount := getTestBillingAccountFromEnv(t) var sink logging.LogSink - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckLoggingBillingAccountSinkDestroy, + CheckDestroy: testAccCheckLoggingBillingAccountSinkDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccLoggingBillingAccountSink_basic(sinkName, bucketName, billingAccount), Check: resource.ComposeTestCheckFunc( - testAccCheckLoggingBillingAccountSinkExists("google_logging_billing_account_sink.basic", &sink), + testAccCheckLoggingBillingAccountSinkExists(t, "google_logging_billing_account_sink.basic", &sink), testAccCheckLoggingBillingAccountSink(&sink, "google_logging_billing_account_sink.basic"), ), }, { @@ -42,28 +41,28 @@ func TestAccLoggingBillingAccountSink_basic(t *testing.T) { func TestAccLoggingBillingAccountSink_update(t *testing.T) { t.Parallel() - sinkName := "tf-test-sink-" + acctest.RandString(10) - bucketName := "tf-test-sink-bucket-" + acctest.RandString(10) - updatedBucketName := "tf-test-sink-bucket-" + acctest.RandString(10) + sinkName := "tf-test-sink-" + randString(t, 10) + bucketName := "tf-test-sink-bucket-" + randString(t, 10) + updatedBucketName := "tf-test-sink-bucket-" + randString(t, 10) billingAccount := getTestBillingAccountFromEnv(t) var sinkBefore, sinkAfter logging.LogSink - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckLoggingBillingAccountSinkDestroy, + CheckDestroy: testAccCheckLoggingBillingAccountSinkDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccLoggingBillingAccountSink_update(sinkName, bucketName, billingAccount), Check: resource.ComposeTestCheckFunc( - testAccCheckLoggingBillingAccountSinkExists("google_logging_billing_account_sink.update", &sinkBefore), + testAccCheckLoggingBillingAccountSinkExists(t, "google_logging_billing_account_sink.update", &sinkBefore), testAccCheckLoggingBillingAccountSink(&sinkBefore, "google_logging_billing_account_sink.update"), ), }, { Config: testAccLoggingBillingAccountSink_update(sinkName, updatedBucketName, billingAccount), Check: resource.ComposeTestCheckFunc( - testAccCheckLoggingBillingAccountSinkExists("google_logging_billing_account_sink.update", &sinkAfter), + testAccCheckLoggingBillingAccountSinkExists(t, "google_logging_billing_account_sink.update", &sinkAfter), testAccCheckLoggingBillingAccountSink(&sinkAfter, "google_logging_billing_account_sink.update"), ), }, { @@ -87,14 +86,14 @@ func TestAccLoggingBillingAccountSink_update(t *testing.T) { func TestAccLoggingBillingAccountSink_updateBigquerySink(t *testing.T) { t.Parallel() - sinkName := "tf-test-sink-" + acctest.RandString(10) - bqDatasetID := "tf_test_sink_" + acctest.RandString(10) + sinkName := "tf-test-sink-" + randString(t, 10) + bqDatasetID := "tf_test_sink_" + randString(t, 10) billingAccount := getTestBillingAccountFromEnv(t) - resource.Test(t, 
resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckLoggingBillingAccountSinkDestroy, + CheckDestroy: testAccCheckLoggingBillingAccountSinkDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccLoggingBillingAccountSink_bigquery_before(sinkName, bqDatasetID, billingAccount), @@ -119,21 +118,21 @@ func TestAccLoggingBillingAccountSink_updateBigquerySink(t *testing.T) { func TestAccLoggingBillingAccountSink_heredoc(t *testing.T) { t.Parallel() - sinkName := "tf-test-sink-" + acctest.RandString(10) - bucketName := "tf-test-sink-bucket-" + acctest.RandString(10) + sinkName := "tf-test-sink-" + randString(t, 10) + bucketName := "tf-test-sink-bucket-" + randString(t, 10) billingAccount := getTestBillingAccountFromEnv(t) var sink logging.LogSink - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckLoggingBillingAccountSinkDestroy, + CheckDestroy: testAccCheckLoggingBillingAccountSinkDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccLoggingBillingAccountSink_heredoc(sinkName, bucketName, billingAccount), Check: resource.ComposeTestCheckFunc( - testAccCheckLoggingBillingAccountSinkExists("google_logging_billing_account_sink.heredoc", &sink), + testAccCheckLoggingBillingAccountSinkExists(t, "google_logging_billing_account_sink.heredoc", &sink), testAccCheckLoggingBillingAccountSink(&sink, "google_logging_billing_account_sink.heredoc"), ), }, { @@ -145,32 +144,34 @@ func TestAccLoggingBillingAccountSink_heredoc(t *testing.T) { }) } -func testAccCheckLoggingBillingAccountSinkDestroy(s *terraform.State) error { - config := testAccProvider.Meta().(*Config) +func testAccCheckLoggingBillingAccountSinkDestroyProducer(t *testing.T) func(s *terraform.State) error { + return func(s *terraform.State) error { + config := googleProviderConfig(t) - for _, rs := range s.RootModule().Resources { - if rs.Type != "google_logging_billing_account_sink" { - continue - } + for _, rs := range s.RootModule().Resources { + if rs.Type != "google_logging_billing_account_sink" { + continue + } - attributes := rs.Primary.Attributes + attributes := rs.Primary.Attributes - _, err := config.clientLogging.BillingAccounts.Sinks.Get(attributes["id"]).Do() - if err == nil { - return fmt.Errorf("billing sink still exists") + _, err := config.clientLogging.BillingAccounts.Sinks.Get(attributes["id"]).Do() + if err == nil { + return fmt.Errorf("billing sink still exists") + } } - } - return nil + return nil + } } -func testAccCheckLoggingBillingAccountSinkExists(n string, sink *logging.LogSink) resource.TestCheckFunc { +func testAccCheckLoggingBillingAccountSinkExists(t *testing.T, n string, sink *logging.LogSink) resource.TestCheckFunc { return func(s *terraform.State) error { attributes, err := getResourceAttributes(n, s) if err != nil { return err } - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) si, err := config.clientLogging.BillingAccounts.Sinks.Get(attributes["id"]).Do() if err != nil { diff --git a/third_party/terraform/tests/resource_logging_bucket_config_test.go b/third_party/terraform/tests/resource_logging_bucket_config_test.go new file mode 100644 index 000000000000..5db04c75d0c0 --- /dev/null +++ b/third_party/terraform/tests/resource_logging_bucket_config_test.go @@ -0,0 +1,215 @@ +package google + +import ( + "fmt" + "testing" + + 
"github.com/hashicorp/terraform-plugin-sdk/helper/resource" +) + +func TestAccLoggingBucketConfigFolder_basic(t *testing.T) { + t.Parallel() + + context := map[string]interface{}{ + "random_suffix": randString(t, 10), + "folder_name": "tf-test-" + randString(t, 10), + "org_id": getTestOrgFromEnv(t), + } + + vcrTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + { + Config: testAccLoggingBucketConfigFolder_basic(context, 30), + }, + { + ResourceName: "google_logging_folder_bucket_config.basic", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"folder"}, + }, + { + Config: testAccLoggingBucketConfigFolder_basic(context, 40), + }, + { + ResourceName: "google_logging_folder_bucket_config.basic", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"folder"}, + }, + }, + }) +} + +func TestAccLoggingBucketConfigProject_basic(t *testing.T) { + t.Parallel() + + context := map[string]interface{}{ + "random_suffix": randString(t, 10), + "project_name": "tf-test-" + randString(t, 10), + "org_id": getTestOrgFromEnv(t), + } + + vcrTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + { + Config: testAccLoggingBucketConfigProject_basic(context, 30), + }, + { + ResourceName: "google_logging_project_bucket_config.basic", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"project"}, + }, + { + Config: testAccLoggingBucketConfigProject_basic(context, 40), + }, + { + ResourceName: "google_logging_project_bucket_config.basic", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"project"}, + }, + }, + }) +} + +func TestAccLoggingBucketConfigBillingAccount_basic(t *testing.T) { + t.Parallel() + + context := map[string]interface{}{ + "random_suffix": randString(t, 10), + "billing_account_name": "billingAccounts/" + getTestBillingAccountFromEnv(t), + "org_id": getTestOrgFromEnv(t), + } + + vcrTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + { + Config: testAccLoggingBucketConfigBillingAccount_basic(context, 30), + }, + { + ResourceName: "google_logging_billing_account_bucket_config.basic", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"billing_account"}, + }, + { + Config: testAccLoggingBucketConfigBillingAccount_basic(context, 40), + }, + { + ResourceName: "google_logging_billing_account_bucket_config.basic", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"billing_account"}, + }, + }, + }) +} + +func TestAccLoggingBucketConfigOrganization_basic(t *testing.T) { + t.Parallel() + + context := map[string]interface{}{ + "random_suffix": randString(t, 10), + "org_id": getTestOrgFromEnv(t), + } + + vcrTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + { + Config: testAccLoggingBucketConfigOrganization_basic(context, 30), + }, + { + ResourceName: "google_logging_organization_bucket_config.basic", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"organization"}, + }, + { + Config: testAccLoggingBucketConfigOrganization_basic(context, 40), + }, + { + ResourceName: "google_logging_organization_bucket_config.basic", + ImportState: true, + ImportStateVerify: true, + 
ImportStateVerifyIgnore: []string{"organization"}, + }, + }, + }) +} + +func testAccLoggingBucketConfigFolder_basic(context map[string]interface{}, retention int) string { + return fmt.Sprintf(Nprintf(` +resource "google_folder" "default" { + display_name = "%{folder_name}" + parent = "organizations/%{org_id}" +} + +resource "google_logging_folder_bucket_config" "basic" { + folder = google_folder.default.name + location = "global" + retention_days = %d + description = "retention test %d days" + bucket_id = "_Default" +} +`, context), retention, retention) +} + +func testAccLoggingBucketConfigProject_basic(context map[string]interface{}, retention int) string { + return fmt.Sprintf(Nprintf(` +resource "google_project" "default" { + project_id = "%{project_name}" + name = "%{project_name}" + org_id = "%{org_id}" +} + +resource "google_logging_project_bucket_config" "basic" { + project = google_project.default.name + location = "global" + retention_days = %d + description = "retention test %d days" + bucket_id = "_Default" +} +`, context), retention, retention) +} + +func testAccLoggingBucketConfigBillingAccount_basic(context map[string]interface{}, retention int) string { + return fmt.Sprintf(Nprintf(` + +data "google_billing_account" "default" { + billing_account = "%{billing_account_name}" +} + +resource "google_logging_billing_account_bucket_config" "basic" { + billing_account = data.google_billing_account.default.billing_account + location = "global" + retention_days = %d + description = "retention test %d days" + bucket_id = "_Default" +} +`, context), retention, retention) +} + +func testAccLoggingBucketConfigOrganization_basic(context map[string]interface{}, retention int) string { + return fmt.Sprintf(Nprintf(` +data "google_organization" "default" { + organization = "%{org_id}" +} + +resource "google_logging_organization_bucket_config" "basic" { + organization = data.google_organization.default.organization + location = "global" + retention_days = %d + description = "retention test %d days" + bucket_id = "_Default" +} +`, context), retention, retention) +} diff --git a/third_party/terraform/tests/resource_logging_folder_exclusion_test.go b/third_party/terraform/tests/resource_logging_folder_exclusion_test.go index e268d699da54..1a7ec08be9d8 100644 --- a/third_party/terraform/tests/resource_logging_folder_exclusion_test.go +++ b/third_party/terraform/tests/resource_logging_folder_exclusion_test.go @@ -4,7 +4,6 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" ) @@ -34,14 +33,14 @@ func TestAccLoggingFolderExclusion(t *testing.T) { func testAccLoggingFolderExclusion_basic(t *testing.T) { org := getTestOrgFromEnv(t) - exclusionName := "tf-test-exclusion-" + acctest.RandString(10) - folderName := "tf-test-folder-" + acctest.RandString(10) - description := "Description " + acctest.RandString(10) + exclusionName := "tf-test-exclusion-" + randString(t, 10) + folderName := "tf-test-folder-" + randString(t, 10) + description := "Description " + randString(t, 10) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckLoggingFolderExclusionDestroy, + CheckDestroy: testAccCheckLoggingFolderExclusionDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccLoggingFolderExclusion_basicCfg(exclusionName, description, folderName, 
"organizations/"+org), @@ -57,9 +56,9 @@ func testAccLoggingFolderExclusion_basic(t *testing.T) { func testAccLoggingFolderExclusion_folderAcceptsFullFolderPath(t *testing.T) { org := getTestOrgFromEnv(t) - exclusionName := "tf-test-exclusion-" + acctest.RandString(10) - folderName := "tf-test-folder-" + acctest.RandString(10) - description := "Description " + acctest.RandString(10) + exclusionName := "tf-test-exclusion-" + randString(t, 10) + folderName := "tf-test-folder-" + randString(t, 10) + description := "Description " + randString(t, 10) checkFn := func(s []*terraform.InstanceState) error { loggingExclusionId, err := parseLoggingExclusionId(s[0].ID) @@ -75,10 +74,10 @@ func testAccLoggingFolderExclusion_folderAcceptsFullFolderPath(t *testing.T) { return nil } - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckLoggingFolderExclusionDestroy, + CheckDestroy: testAccCheckLoggingFolderExclusionDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccLoggingFolderExclusion_withFullFolderPath(exclusionName, description, folderName, "organizations/"+org), @@ -100,16 +99,16 @@ func testAccLoggingFolderExclusion_folderAcceptsFullFolderPath(t *testing.T) { func testAccLoggingFolderExclusion_update(t *testing.T) { org := getTestOrgFromEnv(t) - exclusionName := "tf-test-exclusion-" + acctest.RandString(10) - folderName := "tf-test-folder-" + acctest.RandString(10) + exclusionName := "tf-test-exclusion-" + randString(t, 10) + folderName := "tf-test-folder-" + randString(t, 10) parent := "organizations/" + org - descriptionBefore := "Basic Folder Logging Exclusion" + acctest.RandString(10) - descriptionAfter := "Updated Basic Folder Logging Exclusion" + acctest.RandString(10) + descriptionBefore := "Basic Folder Logging Exclusion" + randString(t, 10) + descriptionAfter := "Updated Basic Folder Logging Exclusion" + randString(t, 10) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckLoggingFolderExclusionDestroy, + CheckDestroy: testAccCheckLoggingFolderExclusionDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccLoggingFolderExclusion_basicCfg(exclusionName, descriptionBefore, folderName, parent), @@ -133,16 +132,16 @@ func testAccLoggingFolderExclusion_update(t *testing.T) { func testAccLoggingFolderExclusion_multiple(t *testing.T) { org := getTestOrgFromEnv(t) - folderName := "tf-test-folder-" + acctest.RandString(10) + folderName := "tf-test-folder-" + randString(t, 10) parent := "organizations/" + org - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckLoggingFolderExclusionDestroy, + CheckDestroy: testAccCheckLoggingFolderExclusionDestroyProducer(t), Steps: []resource.TestStep{ { - Config: testAccLoggingFolderExclusion_multipleCfg(folderName, parent), + Config: testAccLoggingFolderExclusion_multipleCfg(folderName, parent, "tf-test-exclusion-"+randString(t, 10)), }, { ResourceName: "google_logging_folder_exclusion.basic0", @@ -163,23 +162,25 @@ func testAccLoggingFolderExclusion_multiple(t *testing.T) { }) } -func testAccCheckLoggingFolderExclusionDestroy(s *terraform.State) error { - config := testAccProvider.Meta().(*Config) +func testAccCheckLoggingFolderExclusionDestroyProducer(t *testing.T) func(s *terraform.State) error { 
+ return func(s *terraform.State) error { + config := googleProviderConfig(t) - for _, rs := range s.RootModule().Resources { - if rs.Type != "google_logging_folder_exclusion" { - continue - } + for _, rs := range s.RootModule().Resources { + if rs.Type != "google_logging_folder_exclusion" { + continue + } - attributes := rs.Primary.Attributes + attributes := rs.Primary.Attributes - _, err := config.clientLogging.Folders.Exclusions.Get(attributes["id"]).Do() - if err == nil { - return fmt.Errorf("folder exclusion still exists") + _, err := config.clientLogging.Folders.Exclusions.Get(attributes["id"]).Do() + if err == nil { + return fmt.Errorf("folder exclusion still exists") + } } - } - return nil + return nil + } } func testAccLoggingFolderExclusion_basicCfg(exclusionName, description, folderName, folderParent string) string { @@ -214,7 +215,7 @@ resource "google_folder" "my-folder" { `, exclusionName, description, getTestProjectFromEnv(), folderName, folderParent) } -func testAccLoggingFolderExclusion_multipleCfg(folderName, folderParent string) string { +func testAccLoggingFolderExclusion_multipleCfg(folderName, folderParent, exclusionName string) string { s := fmt.Sprintf(` resource "google_folder" "my-folder" { display_name = "%s" @@ -225,12 +226,12 @@ resource "google_folder" "my-folder" { for i := 0; i < 3; i++ { s += fmt.Sprintf(` resource "google_logging_folder_exclusion" "basic%d" { - name = "%s" + name = "%s%d" folder = element(split("/", google_folder.my-folder.name), 1) description = "Basic Folder Logging Exclusion" filter = "logName=\"projects/%s/logs/compute.googleapis.com%%2Factivity_log\" AND severity>=ERROR" } -`, i, "tf-test-exclusion-"+acctest.RandString(10), getTestProjectFromEnv()) +`, i, exclusionName, i, getTestProjectFromEnv()) } return s } diff --git a/third_party/terraform/tests/resource_logging_folder_sink_test.go b/third_party/terraform/tests/resource_logging_folder_sink_test.go index 800231292644..5a7b950d301f 100644 --- a/third_party/terraform/tests/resource_logging_folder_sink_test.go +++ b/third_party/terraform/tests/resource_logging_folder_sink_test.go @@ -6,7 +6,6 @@ import ( "strconv" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" "google.golang.org/api/logging/v2" @@ -16,21 +15,21 @@ func TestAccLoggingFolderSink_basic(t *testing.T) { t.Parallel() org := getTestOrgFromEnv(t) - sinkName := "tf-test-sink-" + acctest.RandString(10) - bucketName := "tf-test-sink-bucket-" + acctest.RandString(10) - folderName := "tf-test-folder-" + acctest.RandString(10) + sinkName := "tf-test-sink-" + randString(t, 10) + bucketName := "tf-test-sink-bucket-" + randString(t, 10) + folderName := "tf-test-folder-" + randString(t, 10) var sink logging.LogSink - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckLoggingFolderSinkDestroy, + CheckDestroy: testAccCheckLoggingFolderSinkDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccLoggingFolderSink_basic(sinkName, bucketName, folderName, "organizations/"+org), Check: resource.ComposeTestCheckFunc( - testAccCheckLoggingFolderSinkExists("google_logging_folder_sink.basic", &sink), + testAccCheckLoggingFolderSinkExists(t, "google_logging_folder_sink.basic", &sink), testAccCheckLoggingFolderSink(&sink, "google_logging_folder_sink.basic"), ), }, { @@ -46,14 +45,14 @@ func 
TestAccLoggingFolderSink_removeOptionals(t *testing.T) { t.Parallel() org := getTestOrgFromEnv(t) - sinkName := "tf-test-sink-" + acctest.RandString(10) - bucketName := "tf-test-sink-bucket-" + acctest.RandString(10) - folderName := "tf-test-folder-" + acctest.RandString(10) + sinkName := "tf-test-sink-" + randString(t, 10) + bucketName := "tf-test-sink-bucket-" + randString(t, 10) + folderName := "tf-test-folder-" + randString(t, 10) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckLoggingFolderSinkDestroy, + CheckDestroy: testAccCheckLoggingFolderSinkDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccLoggingFolderSink_basic(sinkName, bucketName, folderName, "organizations/"+org), @@ -79,21 +78,21 @@ func TestAccLoggingFolderSink_folderAcceptsFullFolderPath(t *testing.T) { t.Parallel() org := getTestOrgFromEnv(t) - sinkName := "tf-test-sink-" + acctest.RandString(10) - bucketName := "tf-test-sink-bucket-" + acctest.RandString(10) - folderName := "tf-test-folder-" + acctest.RandString(10) + sinkName := "tf-test-sink-" + randString(t, 10) + bucketName := "tf-test-sink-bucket-" + randString(t, 10) + folderName := "tf-test-folder-" + randString(t, 10) var sink logging.LogSink - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckLoggingFolderSinkDestroy, + CheckDestroy: testAccCheckLoggingFolderSinkDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccLoggingFolderSink_withFullFolderPath(sinkName, bucketName, folderName, "organizations/"+org), Check: resource.ComposeTestCheckFunc( - testAccCheckLoggingFolderSinkExists("google_logging_folder_sink.basic", &sink), + testAccCheckLoggingFolderSinkExists(t, "google_logging_folder_sink.basic", &sink), testAccCheckLoggingFolderSink(&sink, "google_logging_folder_sink.basic"), ), }, { @@ -109,29 +108,29 @@ func TestAccLoggingFolderSink_update(t *testing.T) { t.Parallel() org := getTestOrgFromEnv(t) - sinkName := "tf-test-sink-" + acctest.RandString(10) - bucketName := "tf-test-sink-bucket-" + acctest.RandString(10) - updatedBucketName := "tf-test-sink-bucket-" + acctest.RandString(10) - folderName := "tf-test-folder-" + acctest.RandString(10) + sinkName := "tf-test-sink-" + randString(t, 10) + bucketName := "tf-test-sink-bucket-" + randString(t, 10) + updatedBucketName := "tf-test-sink-bucket-" + randString(t, 10) + folderName := "tf-test-folder-" + randString(t, 10) parent := "organizations/" + org var sinkBefore, sinkAfter logging.LogSink - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckLoggingFolderSinkDestroy, + CheckDestroy: testAccCheckLoggingFolderSinkDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccLoggingFolderSink_basic(sinkName, bucketName, folderName, parent), Check: resource.ComposeTestCheckFunc( - testAccCheckLoggingFolderSinkExists("google_logging_folder_sink.basic", &sinkBefore), + testAccCheckLoggingFolderSinkExists(t, "google_logging_folder_sink.basic", &sinkBefore), testAccCheckLoggingFolderSink(&sinkBefore, "google_logging_folder_sink.basic"), ), }, { Config: testAccLoggingFolderSink_basic(sinkName, updatedBucketName, folderName, parent), Check: resource.ComposeTestCheckFunc( - 
testAccCheckLoggingFolderSinkExists("google_logging_folder_sink.basic", &sinkAfter), + testAccCheckLoggingFolderSinkExists(t, "google_logging_folder_sink.basic", &sinkAfter), testAccCheckLoggingFolderSink(&sinkAfter, "google_logging_folder_sink.basic"), ), }, { @@ -156,14 +155,14 @@ func TestAccLoggingFolderSink_updateBigquerySink(t *testing.T) { t.Parallel() org := getTestOrgFromEnv(t) - sinkName := "tf-test-sink-" + acctest.RandString(10) - bqDatasetID := "tf_test_sink_" + acctest.RandString(10) - folderName := "tf-test-folder-" + acctest.RandString(10) + sinkName := "tf-test-sink-" + randString(t, 10) + bqDatasetID := "tf_test_sink_" + randString(t, 10) + folderName := "tf-test-folder-" + randString(t, 10) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckLoggingFolderSinkDestroy, + CheckDestroy: testAccCheckLoggingFolderSinkDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccLoggingFolderSink_bigquery_before(sinkName, bqDatasetID, folderName, "organizations/"+org), @@ -189,21 +188,21 @@ func TestAccLoggingFolderSink_heredoc(t *testing.T) { t.Parallel() org := getTestOrgFromEnv(t) - sinkName := "tf-test-sink-" + acctest.RandString(10) - bucketName := "tf-test-sink-bucket-" + acctest.RandString(10) - folderName := "tf-test-folder-" + acctest.RandString(10) + sinkName := "tf-test-sink-" + randString(t, 10) + bucketName := "tf-test-sink-bucket-" + randString(t, 10) + folderName := "tf-test-folder-" + randString(t, 10) var sink logging.LogSink - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckLoggingFolderSinkDestroy, + CheckDestroy: testAccCheckLoggingFolderSinkDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccLoggingFolderSink_heredoc(sinkName, bucketName, folderName, "organizations/"+org), Check: resource.ComposeTestCheckFunc( - testAccCheckLoggingFolderSinkExists("google_logging_folder_sink.heredoc", &sink), + testAccCheckLoggingFolderSinkExists(t, "google_logging_folder_sink.heredoc", &sink), testAccCheckLoggingFolderSink(&sink, "google_logging_folder_sink.heredoc"), ), }, { @@ -215,32 +214,34 @@ func TestAccLoggingFolderSink_heredoc(t *testing.T) { }) } -func testAccCheckLoggingFolderSinkDestroy(s *terraform.State) error { - config := testAccProvider.Meta().(*Config) +func testAccCheckLoggingFolderSinkDestroyProducer(t *testing.T) func(s *terraform.State) error { + return func(s *terraform.State) error { + config := googleProviderConfig(t) - for _, rs := range s.RootModule().Resources { - if rs.Type != "google_logging_folder_sink" { - continue - } + for _, rs := range s.RootModule().Resources { + if rs.Type != "google_logging_folder_sink" { + continue + } - attributes := rs.Primary.Attributes + attributes := rs.Primary.Attributes - _, err := config.clientLogging.Folders.Sinks.Get(attributes["id"]).Do() - if err == nil { - return fmt.Errorf("folder sink still exists") + _, err := config.clientLogging.Folders.Sinks.Get(attributes["id"]).Do() + if err == nil { + return fmt.Errorf("folder sink still exists") + } } - } - return nil + return nil + } } -func testAccCheckLoggingFolderSinkExists(n string, sink *logging.LogSink) resource.TestCheckFunc { +func testAccCheckLoggingFolderSinkExists(t *testing.T, n string, sink *logging.LogSink) resource.TestCheckFunc { return func(s *terraform.State) error { attributes, err := 
getResourceAttributes(n, s) if err != nil { return err } - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) si, err := config.clientLogging.Folders.Sinks.Get(attributes["id"]).Do() if err != nil { diff --git a/third_party/terraform/tests/resource_logging_metric_test.go b/third_party/terraform/tests/resource_logging_metric_test.go index 3c4c57c9e40a..4b3b2ab147ee 100644 --- a/third_party/terraform/tests/resource_logging_metric_test.go +++ b/third_party/terraform/tests/resource_logging_metric_test.go @@ -4,21 +4,20 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" ) func TestAccLoggingMetric_update(t *testing.T) { t.Parallel() - suffix := acctest.RandString(10) + suffix := randString(t, 10) filter := "resource.type=gae_app AND severity>=ERROR" updatedFilter := "resource.type=gae_app AND severity=ERROR" - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckLoggingMetricDestroy, + CheckDestroy: testAccCheckLoggingMetricDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccLoggingMetric_update(suffix, filter), @@ -43,13 +42,13 @@ func TestAccLoggingMetric_update(t *testing.T) { func TestAccLoggingMetric_explicitBucket(t *testing.T) { t.Parallel() - suffix := acctest.RandString(10) + suffix := randString(t, 10) filter := "resource.type=gae_app AND severity>=ERROR" - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckLoggingMetricDestroy, + CheckDestroy: testAccCheckLoggingMetricDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccLoggingMetric_explicitBucket(suffix, filter), diff --git a/third_party/terraform/tests/resource_logging_organization_exclusion_test.go b/third_party/terraform/tests/resource_logging_organization_exclusion_test.go index 5d77fb2125b1..31b3e6044c3f 100644 --- a/third_party/terraform/tests/resource_logging_organization_exclusion_test.go +++ b/third_party/terraform/tests/resource_logging_organization_exclusion_test.go @@ -4,7 +4,6 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" ) @@ -33,13 +32,13 @@ func TestAccLoggingOrganizationExclusion(t *testing.T) { func testAccLoggingOrganizationExclusion_basic(t *testing.T) { org := getTestOrgFromEnv(t) - exclusionName := "tf-test-exclusion-" + acctest.RandString(10) - description := "Description " + acctest.RandString(10) + exclusionName := "tf-test-exclusion-" + randString(t, 10) + description := "Description " + randString(t, 10) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckLoggingOrganizationExclusionDestroy, + CheckDestroy: testAccCheckLoggingOrganizationExclusionDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccLoggingOrganizationExclusion_basicCfg(exclusionName, description, org), @@ -55,14 +54,14 @@ func testAccLoggingOrganizationExclusion_basic(t *testing.T) { func testAccLoggingOrganizationExclusion_update(t *testing.T) { org := getTestOrgFromEnv(t) - exclusionName := "tf-test-exclusion-" + acctest.RandString(10) - descriptionBefore := "Basic Organization Logging 
Exclusion" + acctest.RandString(10) - descriptionAfter := "Updated Basic Organization Logging Exclusion" + acctest.RandString(10) + exclusionName := "tf-test-exclusion-" + randString(t, 10) + descriptionBefore := "Basic Organization Logging Exclusion" + randString(t, 10) + descriptionAfter := "Updated Basic Organization Logging Exclusion" + randString(t, 10) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckLoggingOrganizationExclusionDestroy, + CheckDestroy: testAccCheckLoggingOrganizationExclusionDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccLoggingOrganizationExclusion_basicCfg(exclusionName, descriptionBefore, org), @@ -87,13 +86,13 @@ func testAccLoggingOrganizationExclusion_update(t *testing.T) { func testAccLoggingOrganizationExclusion_multiple(t *testing.T) { org := getTestOrgFromEnv(t) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckLoggingOrganizationExclusionDestroy, + CheckDestroy: testAccCheckLoggingOrganizationExclusionDestroyProducer(t), Steps: []resource.TestStep{ { - Config: testAccLoggingOrganizationExclusion_multipleCfg(org), + Config: testAccLoggingOrganizationExclusion_multipleCfg("tf-test-exclusion-"+randString(t, 10), org), }, { ResourceName: "google_logging_organization_exclusion.basic0", @@ -114,23 +113,25 @@ func testAccLoggingOrganizationExclusion_multiple(t *testing.T) { }) } -func testAccCheckLoggingOrganizationExclusionDestroy(s *terraform.State) error { - config := testAccProvider.Meta().(*Config) +func testAccCheckLoggingOrganizationExclusionDestroyProducer(t *testing.T) func(s *terraform.State) error { + return func(s *terraform.State) error { + config := googleProviderConfig(t) - for _, rs := range s.RootModule().Resources { - if rs.Type != "google_logging_organization_exclusion" { - continue - } + for _, rs := range s.RootModule().Resources { + if rs.Type != "google_logging_organization_exclusion" { + continue + } - attributes := rs.Primary.Attributes + attributes := rs.Primary.Attributes - _, err := config.clientLogging.Organizations.Exclusions.Get(attributes["id"]).Do() - if err == nil { - return fmt.Errorf("organization exclusion still exists") + _, err := config.clientLogging.Organizations.Exclusions.Get(attributes["id"]).Do() + if err == nil { + return fmt.Errorf("organization exclusion still exists") + } } - } - return nil + return nil + } } func testAccLoggingOrganizationExclusion_basicCfg(exclusionName, description, orgId string) string { @@ -144,17 +145,17 @@ resource "google_logging_organization_exclusion" "basic" { `, exclusionName, orgId, description, getTestProjectFromEnv()) } -func testAccLoggingOrganizationExclusion_multipleCfg(orgId string) string { +func testAccLoggingOrganizationExclusion_multipleCfg(exclusionName, orgId string) string { s := "" for i := 0; i < 3; i++ { s += fmt.Sprintf(` resource "google_logging_organization_exclusion" "basic%d" { - name = "%s" + name = "%s%d" org_id = "%s" description = "Basic Organization Logging Exclusion" filter = "logName=\"projects/%s/logs/compute.googleapis.com%%2Factivity_log\" AND severity>=ERROR" } -`, i, "tf-test-exclusion-"+acctest.RandString(10), orgId, getTestProjectFromEnv()) +`, i, exclusionName, i, orgId, getTestProjectFromEnv()) } return s } diff --git a/third_party/terraform/tests/resource_logging_organization_sink_test.go 
b/third_party/terraform/tests/resource_logging_organization_sink_test.go index 0bb8c54d45c4..bb21dfd8f2c1 100644 --- a/third_party/terraform/tests/resource_logging_organization_sink_test.go +++ b/third_party/terraform/tests/resource_logging_organization_sink_test.go @@ -4,7 +4,6 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" "google.golang.org/api/logging/v2" @@ -15,20 +14,20 @@ func TestAccLoggingOrganizationSink_basic(t *testing.T) { t.Parallel() org := getTestOrgFromEnv(t) - sinkName := "tf-test-sink-" + acctest.RandString(10) - bucketName := "tf-test-sink-bucket-" + acctest.RandString(10) + sinkName := "tf-test-sink-" + randString(t, 10) + bucketName := "tf-test-sink-bucket-" + randString(t, 10) var sink logging.LogSink - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckLoggingOrganizationSinkDestroy, + CheckDestroy: testAccCheckLoggingOrganizationSinkDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccLoggingOrganizationSink_basic(sinkName, bucketName, org), Check: resource.ComposeTestCheckFunc( - testAccCheckLoggingOrganizationSinkExists("google_logging_organization_sink.basic", &sink), + testAccCheckLoggingOrganizationSinkExists(t, "google_logging_organization_sink.basic", &sink), testAccCheckLoggingOrganizationSink(&sink, "google_logging_organization_sink.basic"), ), }, { @@ -44,27 +43,27 @@ func TestAccLoggingOrganizationSink_update(t *testing.T) { t.Parallel() org := getTestOrgFromEnv(t) - sinkName := "tf-test-sink-" + acctest.RandString(10) - bucketName := "tf-test-sink-bucket-" + acctest.RandString(10) - updatedBucketName := "tf-test-sink-bucket-" + acctest.RandString(10) + sinkName := "tf-test-sink-" + randString(t, 10) + bucketName := "tf-test-sink-bucket-" + randString(t, 10) + updatedBucketName := "tf-test-sink-bucket-" + randString(t, 10) var sinkBefore, sinkAfter logging.LogSink - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckLoggingOrganizationSinkDestroy, + CheckDestroy: testAccCheckLoggingOrganizationSinkDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccLoggingOrganizationSink_update(sinkName, bucketName, org), Check: resource.ComposeTestCheckFunc( - testAccCheckLoggingOrganizationSinkExists("google_logging_organization_sink.update", &sinkBefore), + testAccCheckLoggingOrganizationSinkExists(t, "google_logging_organization_sink.update", &sinkBefore), testAccCheckLoggingOrganizationSink(&sinkBefore, "google_logging_organization_sink.update"), ), }, { Config: testAccLoggingOrganizationSink_update(sinkName, updatedBucketName, org), Check: resource.ComposeTestCheckFunc( - testAccCheckLoggingOrganizationSinkExists("google_logging_organization_sink.update", &sinkAfter), + testAccCheckLoggingOrganizationSinkExists(t, "google_logging_organization_sink.update", &sinkAfter), testAccCheckLoggingOrganizationSink(&sinkAfter, "google_logging_organization_sink.update"), ), }, { @@ -89,13 +88,13 @@ func TestAccLoggingOrganizationSink_updateBigquerySink(t *testing.T) { t.Parallel() org := getTestOrgFromEnv(t) - sinkName := "tf-test-sink-" + acctest.RandString(10) - bqDatasetID := "tf_test_sink_" + acctest.RandString(10) + sinkName := "tf-test-sink-" + randString(t, 10) + 
bqDatasetID := "tf_test_sink_" + randString(t, 10) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckLoggingOrganizationSinkDestroy, + CheckDestroy: testAccCheckLoggingOrganizationSinkDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccLoggingOrganizationSink_bigquery_before(sinkName, bqDatasetID, org), @@ -121,20 +120,20 @@ func TestAccLoggingOrganizationSink_heredoc(t *testing.T) { t.Parallel() org := getTestOrgFromEnv(t) - sinkName := "tf-test-sink-" + acctest.RandString(10) - bucketName := "tf-test-sink-bucket-" + acctest.RandString(10) + sinkName := "tf-test-sink-" + randString(t, 10) + bucketName := "tf-test-sink-bucket-" + randString(t, 10) var sink logging.LogSink - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckLoggingOrganizationSinkDestroy, + CheckDestroy: testAccCheckLoggingOrganizationSinkDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccLoggingOrganizationSink_heredoc(sinkName, bucketName, org), Check: resource.ComposeTestCheckFunc( - testAccCheckLoggingOrganizationSinkExists("google_logging_organization_sink.heredoc", &sink), + testAccCheckLoggingOrganizationSinkExists(t, "google_logging_organization_sink.heredoc", &sink), testAccCheckLoggingOrganizationSink(&sink, "google_logging_organization_sink.heredoc"), ), }, { @@ -146,32 +145,34 @@ func TestAccLoggingOrganizationSink_heredoc(t *testing.T) { }) } -func testAccCheckLoggingOrganizationSinkDestroy(s *terraform.State) error { - config := testAccProvider.Meta().(*Config) +func testAccCheckLoggingOrganizationSinkDestroyProducer(t *testing.T) func(s *terraform.State) error { + return func(s *terraform.State) error { + config := googleProviderConfig(t) - for _, rs := range s.RootModule().Resources { - if rs.Type != "google_logging_organization_sink" { - continue - } + for _, rs := range s.RootModule().Resources { + if rs.Type != "google_logging_organization_sink" { + continue + } - attributes := rs.Primary.Attributes + attributes := rs.Primary.Attributes - _, err := config.clientLogging.Organizations.Sinks.Get(attributes["id"]).Do() - if err == nil { - return fmt.Errorf("organization sink still exists") + _, err := config.clientLogging.Organizations.Sinks.Get(attributes["id"]).Do() + if err == nil { + return fmt.Errorf("organization sink still exists") + } } - } - return nil + return nil + } } -func testAccCheckLoggingOrganizationSinkExists(n string, sink *logging.LogSink) resource.TestCheckFunc { +func testAccCheckLoggingOrganizationSinkExists(t *testing.T, n string, sink *logging.LogSink) resource.TestCheckFunc { return func(s *terraform.State) error { attributes, err := getResourceAttributes(n, s) if err != nil { return err } - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) si, err := config.clientLogging.Organizations.Sinks.Get(attributes["id"]).Do() if err != nil { diff --git a/third_party/terraform/tests/resource_logging_project_exclusion_test.go b/third_party/terraform/tests/resource_logging_project_exclusion_test.go index a83a7e414a02..7b169b866c6e 100644 --- a/third_party/terraform/tests/resource_logging_project_exclusion_test.go +++ b/third_party/terraform/tests/resource_logging_project_exclusion_test.go @@ -4,7 +4,6 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" 
"github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" ) @@ -33,12 +32,12 @@ func TestAccLoggingProjectExclusion(t *testing.T) { } func testAccLoggingProjectExclusion_basic(t *testing.T) { - exclusionName := "tf-test-exclusion-" + acctest.RandString(10) + exclusionName := "tf-test-exclusion-" + randString(t, 10) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckLoggingProjectExclusionDestroy, + CheckDestroy: testAccCheckLoggingProjectExclusionDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccLoggingProjectExclusion_basicCfg(exclusionName), @@ -53,12 +52,12 @@ func testAccLoggingProjectExclusion_basic(t *testing.T) { } func testAccLoggingProjectExclusion_disablePreservesFilter(t *testing.T) { - exclusionName := "tf-test-exclusion-" + acctest.RandString(10) + exclusionName := "tf-test-exclusion-" + randString(t, 10) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckLoggingProjectExclusionDestroy, + CheckDestroy: testAccCheckLoggingProjectExclusionDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccLoggingProjectExclusion_basicCfg(exclusionName), @@ -81,12 +80,12 @@ func testAccLoggingProjectExclusion_disablePreservesFilter(t *testing.T) { } func testAccLoggingProjectExclusion_update(t *testing.T) { - exclusionName := "tf-test-exclusion-" + acctest.RandString(10) + exclusionName := "tf-test-exclusion-" + randString(t, 10) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckLoggingProjectExclusionDestroy, + CheckDestroy: testAccCheckLoggingProjectExclusionDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccLoggingProjectExclusion_basicCfg(exclusionName), @@ -109,13 +108,13 @@ func testAccLoggingProjectExclusion_update(t *testing.T) { } func testAccLoggingProjectExclusion_multiple(t *testing.T) { - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckLoggingProjectExclusionDestroy, + CheckDestroy: testAccCheckLoggingProjectExclusionDestroyProducer(t), Steps: []resource.TestStep{ { - Config: testAccLoggingProjectExclusion_multipleCfg(), + Config: testAccLoggingProjectExclusion_multipleCfg("tf-test-exclusion-" + randString(t, 10)), }, { ResourceName: "google_logging_project_exclusion.basic0", @@ -136,23 +135,25 @@ func testAccLoggingProjectExclusion_multiple(t *testing.T) { }) } -func testAccCheckLoggingProjectExclusionDestroy(s *terraform.State) error { - config := testAccProvider.Meta().(*Config) +func testAccCheckLoggingProjectExclusionDestroyProducer(t *testing.T) func(s *terraform.State) error { + return func(s *terraform.State) error { + config := googleProviderConfig(t) - for _, rs := range s.RootModule().Resources { - if rs.Type != "google_logging_project_exclusion" { - continue - } + for _, rs := range s.RootModule().Resources { + if rs.Type != "google_logging_project_exclusion" { + continue + } - attributes := rs.Primary.Attributes + attributes := rs.Primary.Attributes - _, err := config.clientLogging.Projects.Exclusions.Get(attributes["id"]).Do() - if err == nil { - return fmt.Errorf("project exclusion %s still exists", 
attributes["id"]) + _, err := config.clientLogging.Projects.Exclusions.Get(attributes["id"]).Do() + if err == nil { + return fmt.Errorf("project exclusion %s still exists", attributes["id"]) + } } - } - return nil + return nil + } } func testAccLoggingProjectExclusion_basicCfg(name string) string { @@ -186,16 +187,16 @@ resource "google_logging_project_exclusion" "basic" { `, name, getTestProjectFromEnv()) } -func testAccLoggingProjectExclusion_multipleCfg() string { +func testAccLoggingProjectExclusion_multipleCfg(exclusionName string) string { s := "" for i := 0; i < 3; i++ { s += fmt.Sprintf(` resource "google_logging_project_exclusion" "basic%d" { - name = "%s" + name = "%s%d" description = "Basic Project Logging Exclusion" filter = "logName=\"projects/%s/logs/compute.googleapis.com%%2Factivity_log\" AND severity>=ERROR" } -`, i, "tf-test-exclusion-"+acctest.RandString(10), getTestProjectFromEnv()) +`, i, exclusionName, i, getTestProjectFromEnv()) } return s } diff --git a/third_party/terraform/tests/resource_logging_project_sink_test.go b/third_party/terraform/tests/resource_logging_project_sink_test.go index 276e7f3b5475..5a5d69d5d172 100644 --- a/third_party/terraform/tests/resource_logging_project_sink_test.go +++ b/third_party/terraform/tests/resource_logging_project_sink_test.go @@ -4,7 +4,6 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" ) @@ -12,13 +11,13 @@ import ( func TestAccLoggingProjectSink_basic(t *testing.T) { t.Parallel() - sinkName := "tf-test-sink-" + acctest.RandString(10) - bucketName := "tf-test-sink-bucket-" + acctest.RandString(10) + sinkName := "tf-test-sink-" + randString(t, 10) + bucketName := "tf-test-sink-bucket-" + randString(t, 10) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckLoggingProjectSinkDestroy, + CheckDestroy: testAccCheckLoggingProjectSinkDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccLoggingProjectSink_basic(sinkName, getTestProjectFromEnv(), bucketName), @@ -35,14 +34,14 @@ func TestAccLoggingProjectSink_basic(t *testing.T) { func TestAccLoggingProjectSink_updatePreservesUniqueWriter(t *testing.T) { t.Parallel() - sinkName := "tf-test-sink-" + acctest.RandString(10) - bucketName := "tf-test-sink-bucket-" + acctest.RandString(10) - updatedBucketName := "tf-test-sink-bucket-" + acctest.RandString(10) + sinkName := "tf-test-sink-" + randString(t, 10) + bucketName := "tf-test-sink-bucket-" + randString(t, 10) + updatedBucketName := "tf-test-sink-bucket-" + randString(t, 10) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckLoggingProjectSinkDestroy, + CheckDestroy: testAccCheckLoggingProjectSinkDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccLoggingProjectSink_uniqueWriter(sinkName, bucketName), @@ -67,13 +66,13 @@ func TestAccLoggingProjectSink_updatePreservesUniqueWriter(t *testing.T) { func TestAccLoggingProjectSink_updateBigquerySink(t *testing.T) { t.Parallel() - sinkName := "tf-test-sink-" + acctest.RandString(10) - bqDatasetID := "tf_test_sink_" + acctest.RandString(10) + sinkName := "tf-test-sink-" + randString(t, 10) + bqDatasetID := "tf_test_sink_" + randString(t, 10) - resource.Test(t, resource.TestCase{ + 
vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckLoggingProjectSinkDestroy, + CheckDestroy: testAccCheckLoggingProjectSinkDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccLoggingProjectSink_bigquery_before(sinkName, bqDatasetID), @@ -98,13 +97,13 @@ func TestAccLoggingProjectSink_updateBigquerySink(t *testing.T) { func TestAccLoggingProjectSink_heredoc(t *testing.T) { t.Parallel() - sinkName := "tf-test-sink-" + acctest.RandString(10) - bucketName := "tf-test-sink-bucket-" + acctest.RandString(10) + sinkName := "tf-test-sink-" + randString(t, 10) + bucketName := "tf-test-sink-bucket-" + randString(t, 10) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckLoggingProjectSinkDestroy, + CheckDestroy: testAccCheckLoggingProjectSinkDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccLoggingProjectSink_heredoc(sinkName, getTestProjectFromEnv(), bucketName), @@ -118,23 +117,25 @@ func TestAccLoggingProjectSink_heredoc(t *testing.T) { }) } -func testAccCheckLoggingProjectSinkDestroy(s *terraform.State) error { - config := testAccProvider.Meta().(*Config) +func testAccCheckLoggingProjectSinkDestroyProducer(t *testing.T) func(s *terraform.State) error { + return func(s *terraform.State) error { + config := googleProviderConfig(t) - for _, rs := range s.RootModule().Resources { - if rs.Type != "google_logging_project_sink" { - continue - } + for _, rs := range s.RootModule().Resources { + if rs.Type != "google_logging_project_sink" { + continue + } - attributes := rs.Primary.Attributes + attributes := rs.Primary.Attributes - _, err := config.clientLogging.Projects.Sinks.Get(attributes["id"]).Do() - if err == nil { - return fmt.Errorf("project sink still exists") + _, err := config.clientLogging.Projects.Sinks.Get(attributes["id"]).Do() + if err == nil { + return fmt.Errorf("project sink still exists") + } } - } - return nil + return nil + } } func testAccLoggingProjectSink_basic(name, project, bucketName string) string { diff --git a/third_party/terraform/tests/resource_memcache_instance_test.go.erb b/third_party/terraform/tests/resource_memcache_instance_test.go.erb new file mode 100644 index 000000000000..8c74e7b4a7f4 --- /dev/null +++ b/third_party/terraform/tests/resource_memcache_instance_test.go.erb @@ -0,0 +1,124 @@ +<% autogen_exception -%> +package google + +<% unless version == 'ga' %> +import ( + "fmt" + "testing" + + "github.com/hashicorp/terraform-plugin-sdk/helper/resource" +) + +func TestAccMemcacheInstance_update(t *testing.T) { + t.Parallel() + + prefix := fmt.Sprintf("%d", randInt(t)) + name := fmt.Sprintf("tf-test-%s", prefix) + + vcrTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckMemcacheInstanceDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccMemcacheInstance_update(prefix, name), + }, + { + ResourceName: "google_memcache_instance.test", + ImportState: true, + ImportStateVerify: true, + }, + { + Config: testAccMemcacheInstance_update2(prefix, name), + }, + { + ResourceName: "google_memcache_instance.test", + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func testAccMemcacheInstance_update(prefix, name string) string { + return fmt.Sprintf(` +resource "google_compute_network" "network" { + name = "tf-test%s" +} + +resource 
"google_compute_global_address" "service_range" { + name = "tf-test%s" + purpose = "VPC_PEERING" + address_type = "INTERNAL" + prefix_length = 16 + network = google_compute_network.network.id +} + +resource "google_service_networking_connection" "private_service_connection" { + network = google_compute_network.network.id + service = "servicenetworking.googleapis.com" + reserved_peering_ranges = [google_compute_global_address.service_range.name] +} + +resource "google_memcache_instance" "test" { + name = "%s" + region = "us-central1" + authorized_network = google_service_networking_connection.private_service_connection.network + + node_config { + cpu_count = 1 + memory_size_mb = 1024 + } + node_count = 1 + + memcache_parameters { + params = { + "listen-backlog" = "2048" + "max-item-size" = "8388608" + } + } +} +`, prefix, prefix, name) +} + +func testAccMemcacheInstance_update2(prefix, name string) string { + return fmt.Sprintf(` +resource "google_compute_network" "network" { + name = "tf-test%s" +} + +resource "google_compute_global_address" "service_range" { + name = "tf-test%s" + purpose = "VPC_PEERING" + address_type = "INTERNAL" + prefix_length = 16 + network = google_compute_network.network.id +} + +resource "google_service_networking_connection" "private_service_connection" { + network = google_compute_network.network.id + service = "servicenetworking.googleapis.com" + reserved_peering_ranges = [google_compute_global_address.service_range.name] +} + +resource "google_memcache_instance" "test" { + name = "%s" + region = "us-central1" + authorized_network = google_service_networking_connection.private_service_connection.network + + node_config { + cpu_count = 1 + memory_size_mb = 1024 + } + node_count = 2 + + memcache_parameters { + params = { + "listen-backlog" = "2048" + "max-item-size" = "8388608" + } + } +} +`, prefix, prefix, name) +} +<% end -%> diff --git a/third_party/terraform/tests/resource_monitoring_alert_policy_test.go b/third_party/terraform/tests/resource_monitoring_alert_policy_test.go index 6b97bae68a2b..e235eba5f179 100644 --- a/third_party/terraform/tests/resource_monitoring_alert_policy_test.go +++ b/third_party/terraform/tests/resource_monitoring_alert_policy_test.go @@ -4,7 +4,6 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" ) @@ -33,14 +32,14 @@ func TestAccMonitoringAlertPolicy(t *testing.T) { func testAccMonitoringAlertPolicy_basic(t *testing.T) { - alertName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) - conditionName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + alertName := fmt.Sprintf("tf-test-%s", randString(t, 10)) + conditionName := fmt.Sprintf("tf-test-%s", randString(t, 10)) filter := `metric.type=\"compute.googleapis.com/instance/disk/write_bytes_count\" AND resource.type=\"gce_instance\"` - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckAlertPolicyDestroy, + CheckDestroy: testAccCheckAlertPolicyDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccMonitoringAlertPolicy_basicCfg(alertName, conditionName, "ALIGN_RATE", filter), @@ -56,17 +55,17 @@ func testAccMonitoringAlertPolicy_basic(t *testing.T) { func testAccMonitoringAlertPolicy_update(t *testing.T) { - alertName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) - conditionName := 
fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + alertName := fmt.Sprintf("tf-test-%s", randString(t, 10)) + conditionName := fmt.Sprintf("tf-test-%s", randString(t, 10)) filter1 := `metric.type=\"compute.googleapis.com/instance/disk/write_bytes_count\" AND resource.type=\"gce_instance\"` aligner1 := "ALIGN_RATE" filter2 := `metric.type=\"compute.googleapis.com/instance/cpu/utilization\" AND resource.type=\"gce_instance\"` aligner2 := "ALIGN_MAX" - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckAlertPolicyDestroy, + CheckDestroy: testAccCheckAlertPolicyDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccMonitoringAlertPolicy_basicCfg(alertName, conditionName, aligner1, filter1), @@ -90,14 +89,14 @@ func testAccMonitoringAlertPolicy_update(t *testing.T) { func testAccMonitoringAlertPolicy_full(t *testing.T) { - alertName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) - conditionName1 := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) - conditionName2 := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + alertName := fmt.Sprintf("tf-test-%s", randString(t, 10)) + conditionName1 := fmt.Sprintf("tf-test-%s", randString(t, 10)) + conditionName2 := fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckAlertPolicyDestroy, + CheckDestroy: testAccCheckAlertPolicyDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccMonitoringAlertPolicy_fullCfg(alertName, conditionName1, conditionName2), @@ -111,25 +110,27 @@ func testAccMonitoringAlertPolicy_full(t *testing.T) { }) } -func testAccCheckAlertPolicyDestroy(s *terraform.State) error { - config := testAccProvider.Meta().(*Config) +func testAccCheckAlertPolicyDestroyProducer(t *testing.T) func(s *terraform.State) error { + return func(s *terraform.State) error { + config := googleProviderConfig(t) - for _, rs := range s.RootModule().Resources { - if rs.Type != "google_monitoring_alert_policy" { - continue - } + for _, rs := range s.RootModule().Resources { + if rs.Type != "google_monitoring_alert_policy" { + continue + } - name := rs.Primary.Attributes["name"] + name := rs.Primary.Attributes["name"] - url := fmt.Sprintf("https://monitoring.googleapis.com/v3/%s", name) - _, err := sendRequest(config, "GET", "", url, nil) + url := fmt.Sprintf("https://monitoring.googleapis.com/v3/%s", name) + _, err := sendRequest(config, "GET", "", url, nil) - if err == nil { - return fmt.Errorf("Error, alert policy %s still exists", name) + if err == nil { + return fmt.Errorf("Error, alert policy %s still exists", name) + } } - } - return nil + return nil + } } func testAccMonitoringAlertPolicy_basicCfg(alertName, conditionName, aligner, filter string) string { diff --git a/third_party/terraform/tests/resource_monitoring_dashboard_test.go b/third_party/terraform/tests/resource_monitoring_dashboard_test.go new file mode 100644 index 000000000000..8292deacb3d5 --- /dev/null +++ b/third_party/terraform/tests/resource_monitoring_dashboard_test.go @@ -0,0 +1,271 @@ +package google + +import ( + "fmt" + "strings" + "testing" + + "github.com/hashicorp/terraform-plugin-sdk/helper/resource" + "github.com/hashicorp/terraform-plugin-sdk/terraform" +) + +func TestAccMonitoringDashboard_basic(t *testing.T) { + t.Parallel() + + vcrTest(t, resource.TestCase{ + PreCheck: func() 
{ testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckMonitoringDashboardDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccMonitoringDashboard_basic(), + }, + { + ResourceName: "google_monitoring_dashboard.dashboard", + ImportState: true, + ImportStateVerify: true, + // Default import format uses the ID, which contains the project # + // Testing import formats with the project name doesn't work because we set + // the ID on import to what the user specified, which won't match the ID + // from the apply + ImportStateVerifyIgnore: []string{"project"}, + }, + }, + }) +} + +func TestAccMonitoringDashboard_gridLayout(t *testing.T) { + t.Parallel() + + vcrTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckMonitoringDashboardDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccMonitoringDashboard_gridLayout(), + }, + { + ResourceName: "google_monitoring_dashboard.dashboard", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"project"}, + }, + }, + }) +} + +func TestAccMonitoringDashboard_rowLayout(t *testing.T) { + t.Parallel() + + vcrTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckMonitoringDashboardDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccMonitoringDashboard_rowLayout(), + }, + { + ResourceName: "google_monitoring_dashboard.dashboard", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"project"}, + }, + }, + }) +} + +func TestAccMonitoringDashboard_update(t *testing.T) { + t.Parallel() + + vcrTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckMonitoringDashboardDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccMonitoringDashboard_rowLayout(), + }, + { + ResourceName: "google_monitoring_dashboard.dashboard", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"project"}, + }, + { + Config: testAccMonitoringDashboard_basic(), + }, + { + ResourceName: "google_monitoring_dashboard.dashboard", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"project"}, + }, + { + Config: testAccMonitoringDashboard_gridLayout(), + }, + { + ResourceName: "google_monitoring_dashboard.dashboard", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"project"}, + }, + }, + }) +} + +func testAccCheckMonitoringDashboardDestroyProducer(t *testing.T) func(s *terraform.State) error { + return func(s *terraform.State) error { + for name, rs := range s.RootModule().Resources { + if rs.Type != "google_monitoring_dashboard" { + continue + } + if strings.HasPrefix(name, "data.") { + continue + } + + config := googleProviderConfig(t) + + url, err := replaceVarsForTest(config, rs, "{{MonitoringBasePath}}v1/{{name}}") + if err != nil { + return err + } + + _, err = sendRequest(config, "GET", "", url, nil, isMonitoringConcurrentEditError) + if err == nil { + return fmt.Errorf("MonitoringDashboard still exists at %s", url) + } + } + + return nil + } +} + +func testAccMonitoringDashboard_basic() string { + return fmt.Sprintf(` +resource "google_monitoring_dashboard" "dashboard" { + dashboard_json = < +package google + +<% unless version == 'ga' %> +import ( + "fmt" + "testing" + +
"github.com/hashicorp/terraform-plugin-sdk/helper/resource" +) + +func TestAccNotebooksEnvironment_create(t *testing.T) { + t.Parallel() + + prefix := fmt.Sprintf("%d", randInt(t)) + name := fmt.Sprintf("tf-env-%s", prefix) + + vcrTest(t, resource.TestCase{ + Providers: testAccProviders, + Steps: []resource.TestStep{ + { + Config: testAccNotebooksEnvironment_create(name), + }, + { + ResourceName: "google_notebooks_environment.test", + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func testAccNotebooksEnvironment_create(name string) string { + return fmt.Sprintf(` + +resource "google_notebooks_environment" "test" { + name = "%s" + location = "us-west1-a" + container_image { + repository = "gcr.io/deeplearning-platform-release/base-cpu" + } +} +`, name) +} + +<% end -%> diff --git a/third_party/terraform/tests/resource_notebooks_instance_container_test.go.erb b/third_party/terraform/tests/resource_notebooks_instance_container_test.go.erb new file mode 100644 index 000000000000..818a761d2efb --- /dev/null +++ b/third_party/terraform/tests/resource_notebooks_instance_container_test.go.erb @@ -0,0 +1,53 @@ +<% autogen_exception -%> +package google + +<% unless version == 'ga' %> +import ( + "fmt" + "testing" + + "github.com/hashicorp/terraform-plugin-sdk/helper/resource" +) + +func TestAccNotebooksInstance_create_container(t *testing.T) { + t.Parallel() + + prefix := fmt.Sprintf("%d", randInt(t)) + name := fmt.Sprintf("tf-%s", prefix) + + vcrTest(t, resource.TestCase{ + Providers: testAccProviders, + Steps: []resource.TestStep{ + { + Config: testAccNotebooksInstance_create_container(name), + }, + { + ResourceName: "google_notebooks_instance.test", + ImportState: true, + ImportStateVerify: true, + ExpectNonEmptyPlan: true, + ImportStateVerifyIgnore: []string{"container_image", "metadata", "vm_image"}, + }, + }, + }) +} + +func testAccNotebooksInstance_create_container(name string) string { + return fmt.Sprintf(` + +resource "google_notebooks_instance" "test" { + name = "%s" + location = "us-west1-a" + machine_type = "n1-standard-1" + metadata = { + proxy-mode = "service_account" + terraform = "true" + } + container_image { + repository = "gcr.io/deeplearning-platform-release/base-cpu" + tag = "latest" + } +} +`, name) +} +<% end -%> diff --git a/third_party/terraform/tests/resource_notebooks_instance_gpu_test.go.erb b/third_party/terraform/tests/resource_notebooks_instance_gpu_test.go.erb new file mode 100644 index 000000000000..893aca9458b1 --- /dev/null +++ b/third_party/terraform/tests/resource_notebooks_instance_gpu_test.go.erb @@ -0,0 +1,58 @@ +<% autogen_exception -%> +package google + +<% unless version == 'ga' %> +import ( + "fmt" + "testing" + + "github.com/hashicorp/terraform-plugin-sdk/helper/resource" +) + +func TestAccNotebooksInstance_create_gpu(t *testing.T) { + t.Parallel() + + prefix := fmt.Sprintf("%d", randInt(t)) + name := fmt.Sprintf("tf-%s", prefix) + + vcrTest(t, resource.TestCase{ + Providers: testAccProviders, + Steps: []resource.TestStep{ + { + Config: testAccNotebooksInstance_create_gpu(name), + }, + { + ResourceName: "google_notebooks_instance.test", + ImportState: true, + ImportStateVerify: true, + ExpectNonEmptyPlan: true, + ImportStateVerifyIgnore: []string{"container_image", "metadata", "vm_image"}, + }, + }, + }) +} + +func testAccNotebooksInstance_create_gpu(name string) string { + return fmt.Sprintf(` + +resource "google_notebooks_instance" "test" { + name = "%s" + location = "us-west1-a" + machine_type = "n1-standard-1" + metadata = { + 
proxy-mode = "service_account" + terraform = "true" + } + vm_image { + project = "deeplearning-platform-release" + image_family = "tf-latest-gpu" + } + install_gpu_driver = true + accelerator_config { + type = "NVIDIA_TESLA_T4" + core_count = 1 + } +} +`, name) +} +<% end -%> diff --git a/third_party/terraform/tests/resource_notebooks_instance_test.go.erb b/third_party/terraform/tests/resource_notebooks_instance_test.go.erb new file mode 100644 index 000000000000..ced60526fbe7 --- /dev/null +++ b/third_party/terraform/tests/resource_notebooks_instance_test.go.erb @@ -0,0 +1,128 @@ +<% autogen_exception -%> +package google + +<% unless version == 'ga' %> + +import ( + "fmt" + "testing" + + "github.com/hashicorp/terraform-plugin-sdk/helper/resource" +) + +func TestAccNotebooksInstance_create_vm_image(t *testing.T) { + t.Parallel() + + prefix := fmt.Sprintf("%d", randInt(t)) + name := fmt.Sprintf("tf-%s", prefix) + + vcrTest(t, resource.TestCase{ + Providers: testAccProviders, + Steps: []resource.TestStep{ + { + Config: testAccNotebooksInstance_create_vm_image(name), + }, + { + ResourceName: "google_notebooks_instance.test", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"vm_image", "metadata"}, + }, + }, + }) +} + +func TestAccNotebooksInstance_update(t *testing.T) { + context := map[string]interface{}{ + "random_suffix": randString(t, 10), + } + + vcrTest(t, resource.TestCase{ + Providers: testAccProviders, + Steps: []resource.TestStep{ + { + Config: testAccNotebooksInstance_basic(context), + }, + { + ResourceName: "google_notebooks_instance.instance", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"vm_image"}, + }, + { + Config: testAccNotebooksInstance_update(context), + }, + { + ResourceName: "google_notebooks_instance.instance", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"vm_image"}, + }, + { + Config: testAccNotebooksInstance_basic(context), + }, + { + ResourceName: "google_notebooks_instance.instance", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"vm_image"}, + }, + }, + }) +} + +func testAccNotebooksInstance_create_vm_image(name string) string { + return fmt.Sprintf(` + +resource "google_notebooks_instance" "test" { + name = "%s" + location = "us-west1-a" + machine_type = "n1-standard-1" + metadata = { + proxy-mode = "service_account" + terraform = "true" + } + vm_image { + project = "deeplearning-platform-release" + image_family = "tf-latest-cpu" + } +} +`, name) +} + +func testAccNotebooksInstance_basic(context map[string]interface{}) string { + return Nprintf(` +resource "google_notebooks_instance" "instance" { + name = "tf-test-notebooks-instance%{random_suffix}" + location = "us-central1-a" + machine_type = "n1-standard-1" + + vm_image { + project = "deeplearning-platform-release" + image_family = "tf-latest-cpu" + } +} +`, context) +} + +func testAccNotebooksInstance_update(context map[string]interface{}) string { + return Nprintf(` +resource "google_notebooks_instance" "instance" { + name = "tf-test-notebooks-instance%{random_suffix}" + location = "us-central1-a" + machine_type = "n1-standard-1" + + vm_image { + project = "deeplearning-platform-release" + image_family = "tf-latest-cpu" + } + + labels = { + key = "value" + } +} +`, context) +} + + +<% end -%> diff --git a/third_party/terraform/tests/resource_pubsub_subscription_iam_test.go b/third_party/terraform/tests/resource_pubsub_subscription_iam_test.go index 
5f948297823e..f6c41f63655f 100644 --- a/third_party/terraform/tests/resource_pubsub_subscription_iam_test.go +++ b/third_party/terraform/tests/resource_pubsub_subscription_iam_test.go @@ -6,7 +6,6 @@ import ( "sort" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" ) @@ -14,25 +13,25 @@ import ( func TestAccPubsubSubscriptionIamBinding(t *testing.T) { t.Parallel() - topic := "tf-test-topic-iam-" + acctest.RandString(10) - subscription := "tf-test-sub-iam-" + acctest.RandString(10) - account := "tf-test-iam-" + acctest.RandString(10) + topic := "tf-test-topic-iam-" + randString(t, 10) + subscription := "tf-test-sub-iam-" + randString(t, 10) + account := "tf-test-iam-" + randString(t, 10) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ { // Test IAM Binding creation Config: testAccPubsubSubscriptionIamBinding_basic(subscription, topic, account), - Check: testAccCheckPubsubSubscriptionIam(subscription, "roles/pubsub.subscriber", []string{ + Check: testAccCheckPubsubSubscriptionIam(t, subscription, "roles/pubsub.subscriber", []string{ fmt.Sprintf("serviceAccount:%s-1@%s.iam.gserviceaccount.com", account, getTestProjectFromEnv()), }), }, { // Test IAM Binding update Config: testAccPubsubSubscriptionIamBinding_update(subscription, topic, account), - Check: testAccCheckPubsubSubscriptionIam(subscription, "roles/pubsub.subscriber", []string{ + Check: testAccCheckPubsubSubscriptionIam(t, subscription, "roles/pubsub.subscriber", []string{ fmt.Sprintf("serviceAccount:%s-1@%s.iam.gserviceaccount.com", account, getTestProjectFromEnv()), fmt.Sprintf("serviceAccount:%s-2@%s.iam.gserviceaccount.com", account, getTestProjectFromEnv()), }), @@ -50,19 +49,19 @@ func TestAccPubsubSubscriptionIamBinding(t *testing.T) { func TestAccPubsubSubscriptionIamMember(t *testing.T) { t.Parallel() - topic := "tf-test-topic-iam-" + acctest.RandString(10) - subscription := "tf-test-sub-iam-" + acctest.RandString(10) - account := "tf-test-iam-" + acctest.RandString(10) + topic := "tf-test-topic-iam-" + randString(t, 10) + subscription := "tf-test-sub-iam-" + randString(t, 10) + account := "tf-test-iam-" + randString(t, 10) accountEmail := fmt.Sprintf("%s@%s.iam.gserviceaccount.com", account, getTestProjectFromEnv()) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ { // Test Iam Member creation (no update for member, no need to test) Config: testAccPubsubSubscriptionIamMember_basic(subscription, topic, account), - Check: testAccCheckPubsubSubscriptionIam(subscription, "roles/pubsub.subscriber", []string{ + Check: testAccCheckPubsubSubscriptionIam(t, subscription, "roles/pubsub.subscriber", []string{ fmt.Sprintf("serviceAccount:%s", accountEmail), }), }, @@ -79,23 +78,23 @@ func TestAccPubsubSubscriptionIamMember(t *testing.T) { func TestAccPubsubSubscriptionIamPolicy(t *testing.T) { t.Parallel() - topic := "tf-test-topic-iam-" + acctest.RandString(10) - subscription := "tf-test-sub-iam-" + acctest.RandString(10) - account := "tf-test-iam-" + acctest.RandString(10) + topic := "tf-test-topic-iam-" + randString(t, 10) + subscription := "tf-test-sub-iam-" + randString(t, 10) + account := "tf-test-iam-" + randString(t, 10) - resource.Test(t, resource.TestCase{ + 
vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ { Config: testAccPubsubSubscriptionIamPolicy_basic(subscription, topic, account, "roles/pubsub.subscriber"), - Check: testAccCheckPubsubSubscriptionIam(subscription, "roles/pubsub.subscriber", []string{ + Check: testAccCheckPubsubSubscriptionIam(t, subscription, "roles/pubsub.subscriber", []string{ fmt.Sprintf("serviceAccount:%s@%s.iam.gserviceaccount.com", account, getTestProjectFromEnv()), }), }, { Config: testAccPubsubSubscriptionIamPolicy_basic(subscription, topic, account, "roles/pubsub.viewer"), - Check: testAccCheckPubsubSubscriptionIam(subscription, "roles/pubsub.viewer", []string{ + Check: testAccCheckPubsubSubscriptionIam(t, subscription, "roles/pubsub.viewer", []string{ fmt.Sprintf("serviceAccount:%s@%s.iam.gserviceaccount.com", account, getTestProjectFromEnv()), }), }, @@ -109,9 +108,9 @@ func TestAccPubsubSubscriptionIamPolicy(t *testing.T) { }) } -func testAccCheckPubsubSubscriptionIam(subscription, role string, members []string) resource.TestCheckFunc { +func testAccCheckPubsubSubscriptionIam(t *testing.T, subscription, role string, members []string) resource.TestCheckFunc { return func(s *terraform.State) error { - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) p, err := config.clientPubsub.Projects.Subscriptions.GetIamPolicy(getComputedSubscriptionName(getTestProjectFromEnv(), subscription)).Do() if err != nil { return err diff --git a/third_party/terraform/tests/resource_pubsub_subscription_test.go b/third_party/terraform/tests/resource_pubsub_subscription_test.go index 3d8c6ca9f8e0..91b92000cf8b 100644 --- a/third_party/terraform/tests/resource_pubsub_subscription_test.go +++ b/third_party/terraform/tests/resource_pubsub_subscription_test.go @@ -4,7 +4,6 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" ) @@ -12,13 +11,13 @@ import ( func TestAccPubsubSubscription_emptyTTL(t *testing.T) { t.Parallel() - topic := fmt.Sprintf("tf-test-topic-%s", acctest.RandString(10)) - subscription := fmt.Sprintf("tf-test-sub-%s", acctest.RandString(10)) + topic := fmt.Sprintf("tf-test-topic-%s", randString(t, 10)) + subscription := fmt.Sprintf("tf-test-sub-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckPubsubSubscriptionDestroy, + CheckDestroy: testAccCheckPubsubSubscriptionDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccPubsubSubscription_emptyTTL(topic, subscription), @@ -36,13 +35,13 @@ func TestAccPubsubSubscription_emptyTTL(t *testing.T) { func TestAccPubsubSubscription_basic(t *testing.T) { t.Parallel() - topic := fmt.Sprintf("tf-test-topic-%s", acctest.RandString(10)) - subscription := fmt.Sprintf("tf-test-sub-%s", acctest.RandString(10)) + topic := fmt.Sprintf("tf-test-topic-%s", randString(t, 10)) + subscription := fmt.Sprintf("tf-test-sub-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckPubsubSubscriptionDestroy, + CheckDestroy: testAccCheckPubsubSubscriptionDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccPubsubSubscription_basic(topic, subscription, 
"bar", 20), @@ -60,14 +59,14 @@ func TestAccPubsubSubscription_basic(t *testing.T) { func TestAccPubsubSubscription_update(t *testing.T) { t.Parallel() - topic := fmt.Sprintf("tf-test-topic-%s", acctest.RandString(10)) - subscriptionShort := fmt.Sprintf("tf-test-sub-%s", acctest.RandString(10)) + topic := fmt.Sprintf("tf-test-topic-%s", randString(t, 10)) + subscriptionShort := fmt.Sprintf("tf-test-sub-%s", randString(t, 10)) subscriptionLong := fmt.Sprintf("projects/%s/subscriptions/%s", getTestProjectFromEnv(), subscriptionShort) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckPubsubSubscriptionDestroy, + CheckDestroy: testAccCheckPubsubSubscriptionDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccPubsubSubscription_basic(topic, subscriptionShort, "bar", 20), @@ -97,14 +96,14 @@ func TestAccPubsubSubscription_update(t *testing.T) { func TestAccPubsubSubscription_push(t *testing.T) { t.Parallel() - topicFoo := fmt.Sprintf("tf-test-topic-foo-%s", acctest.RandString(10)) - subscription := fmt.Sprintf("tf-test-sub-foo-%s", acctest.RandString(10)) - saAccount := fmt.Sprintf("tf-test-pubsub-%s", acctest.RandString(10)) + topicFoo := fmt.Sprintf("tf-test-topic-foo-%s", randString(t, 10)) + subscription := fmt.Sprintf("tf-test-sub-foo-%s", randString(t, 10)) + saAccount := fmt.Sprintf("tf-test-pubsub-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckPubsubSubscriptionDestroy, + CheckDestroy: testAccCheckPubsubSubscriptionDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccPubsubSubscription_push(topicFoo, saAccount, subscription), @@ -126,20 +125,20 @@ func TestAccPubsubSubscription_push(t *testing.T) { func TestAccPubsubSubscription_pollOnCreate(t *testing.T) { t.Parallel() - topic := fmt.Sprintf("tf-test-topic-foo-%s", acctest.RandString(10)) - subscription := fmt.Sprintf("tf-test-topic-foo-%s", acctest.RandString(10)) + topic := fmt.Sprintf("tf-test-topic-foo-%s", randString(t, 10)) + subscription := fmt.Sprintf("tf-test-topic-foo-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckPubsubSubscriptionDestroy, + CheckDestroy: testAccCheckPubsubSubscriptionDestroyProducer(t), Steps: []resource.TestStep{ { // Create only the topic Config: testAccPubsubSubscription_topicOnly(topic), // Read from non-existent subscription created in next step // so API negative-caches result - Check: testAccCheckPubsubSubscriptionCache404(subscription), + Check: testAccCheckPubsubSubscriptionCache404(t, subscription), }, { // Create the subscription - if the polling fails, @@ -279,9 +278,9 @@ func TestGetComputedTopicName(t *testing.T) { } } -func testAccCheckPubsubSubscriptionCache404(subName string) resource.TestCheckFunc { +func testAccCheckPubsubSubscriptionCache404(t *testing.T, subName string) resource.TestCheckFunc { return func(s *terraform.State) error { - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) url := fmt.Sprintf("%sprojects/%s/subscriptions/%s", config.PubsubBasePath, getTestProjectFromEnv(), subName) resp, err := sendRequest(config, "GET", "", url, nil) if err == nil { diff --git a/third_party/terraform/tests/resource_pubsub_topic_iam_test.go 
b/third_party/terraform/tests/resource_pubsub_topic_iam_test.go index 22898cd19f6c..6192c95a11df 100644 --- a/third_party/terraform/tests/resource_pubsub_topic_iam_test.go +++ b/third_party/terraform/tests/resource_pubsub_topic_iam_test.go @@ -6,7 +6,6 @@ import ( "sort" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" ) @@ -14,17 +13,17 @@ import ( func TestAccPubsubTopicIamBinding(t *testing.T) { t.Parallel() - topic := "tf-test-topic-iam-" + acctest.RandString(10) - account := "tf-test-topic-iam-" + acctest.RandString(10) + topic := "tf-test-topic-iam-" + randString(t, 10) + account := "tf-test-topic-iam-" + randString(t, 10) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ { // Test IAM Binding creation Config: testAccPubsubTopicIamBinding_basic(topic, account), - Check: testAccCheckPubsubTopicIam(topic, "roles/pubsub.publisher", []string{ + Check: testAccCheckPubsubTopicIam(t, topic, "roles/pubsub.publisher", []string{ fmt.Sprintf("serviceAccount:%s-1@%s.iam.gserviceaccount.com", account, getTestProjectFromEnv()), }), }, @@ -37,7 +36,7 @@ func TestAccPubsubTopicIamBinding(t *testing.T) { { // Test IAM Binding update Config: testAccPubsubTopicIamBinding_update(topic, account), - Check: testAccCheckPubsubTopicIam(topic, "roles/pubsub.publisher", []string{ + Check: testAccCheckPubsubTopicIam(t, topic, "roles/pubsub.publisher", []string{ fmt.Sprintf("serviceAccount:%s-1@%s.iam.gserviceaccount.com", account, getTestProjectFromEnv()), fmt.Sprintf("serviceAccount:%s-2@%s.iam.gserviceaccount.com", account, getTestProjectFromEnv()), }), @@ -55,17 +54,17 @@ func TestAccPubsubTopicIamBinding(t *testing.T) { func TestAccPubsubTopicIamBinding_topicName(t *testing.T) { t.Parallel() - topic := "tf-test-topic-iam-" + acctest.RandString(10) - account := "tf-test-topic-iam-" + acctest.RandString(10) + topic := "tf-test-topic-iam-" + randString(t, 10) + account := "tf-test-topic-iam-" + randString(t, 10) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ { // Test IAM Binding creation Config: testAccPubsubTopicIamBinding_topicName(topic, account), - Check: testAccCheckPubsubTopicIam(topic, "roles/pubsub.publisher", []string{ + Check: testAccCheckPubsubTopicIam(t, topic, "roles/pubsub.publisher", []string{ fmt.Sprintf("serviceAccount:%s-1@%s.iam.gserviceaccount.com", account, getTestProjectFromEnv()), }), }, @@ -77,18 +76,18 @@ func TestAccPubsubTopicIamBinding_topicName(t *testing.T) { func TestAccPubsubTopicIamMember(t *testing.T) { t.Parallel() - topic := "tf-test-topic-iam-" + acctest.RandString(10) - account := "tf-test-topic-iam-" + acctest.RandString(10) + topic := "tf-test-topic-iam-" + randString(t, 10) + account := "tf-test-topic-iam-" + randString(t, 10) accountEmail := fmt.Sprintf("%s@%s.iam.gserviceaccount.com", account, getTestProjectFromEnv()) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ { // Test Iam Member creation (no update for member, no need to test) Config: testAccPubsubTopicIamMember_basic(topic, account), - Check: testAccCheckPubsubTopicIam(topic, "roles/pubsub.publisher", []string{ + Check: 
testAccCheckPubsubTopicIam(t, topic, "roles/pubsub.publisher", []string{ fmt.Sprintf("serviceAccount:%s", accountEmail), }), }, @@ -105,22 +104,22 @@ func TestAccPubsubTopicIamMember(t *testing.T) { func TestAccPubsubTopicIamPolicy(t *testing.T) { t.Parallel() - topic := "tf-test-topic-iam-" + acctest.RandString(10) - account := "tf-test-topic-iam-" + acctest.RandString(10) + topic := "tf-test-topic-iam-" + randString(t, 10) + account := "tf-test-topic-iam-" + randString(t, 10) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ { Config: testAccPubsubTopicIamPolicy_basic(topic, account, "roles/pubsub.publisher"), - Check: testAccCheckPubsubTopicIam(topic, "roles/pubsub.publisher", []string{ + Check: testAccCheckPubsubTopicIam(t, topic, "roles/pubsub.publisher", []string{ fmt.Sprintf("serviceAccount:%s@%s.iam.gserviceaccount.com", account, getTestProjectFromEnv()), }), }, { Config: testAccPubsubTopicIamPolicy_basic(topic, account, "roles/pubsub.subscriber"), - Check: testAccCheckPubsubTopicIam(topic, "roles/pubsub.subscriber", []string{ + Check: testAccCheckPubsubTopicIam(t, topic, "roles/pubsub.subscriber", []string{ fmt.Sprintf("serviceAccount:%s@%s.iam.gserviceaccount.com", account, getTestProjectFromEnv()), }), }, @@ -134,9 +133,9 @@ func TestAccPubsubTopicIamPolicy(t *testing.T) { }) } -func testAccCheckPubsubTopicIam(topic, role string, members []string) resource.TestCheckFunc { +func testAccCheckPubsubTopicIam(t *testing.T, topic, role string, members []string) resource.TestCheckFunc { return func(s *terraform.State) error { - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) p, err := config.clientPubsub.Projects.Topics.GetIamPolicy(getComputedTopicName(getTestProjectFromEnv(), topic)).Do() if err != nil { return err diff --git a/third_party/terraform/tests/resource_pubsub_topic_test.go b/third_party/terraform/tests/resource_pubsub_topic_test.go index 136285e9623c..c080e1f3bd2d 100644 --- a/third_party/terraform/tests/resource_pubsub_topic_test.go +++ b/third_party/terraform/tests/resource_pubsub_topic_test.go @@ -2,13 +2,9 @@ package google import ( "fmt" - "strings" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" - "github.com/hashicorp/terraform-plugin-sdk/helper/schema" - "github.com/hashicorp/terraform-plugin-sdk/terraform" ) func TestAccPubsubTopic_update(t *testing.T) { @@ -19,7 +15,7 @@ func TestAccPubsubTopic_update(t *testing.T) { vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckPubsubTopicDestroy, + CheckDestroy: testAccCheckPubsubTopicDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccPubsubTopic_update(topic, "foo", "bar"), @@ -40,7 +36,7 @@ func TestAccPubsubTopic_update(t *testing.T) { ImportStateVerify: true, }, }, - }, testAccCheckPubsubTopicDestroyProducer) + }) } func TestAccPubsubTopic_cmek(t *testing.T) { @@ -48,12 +44,12 @@ func TestAccPubsubTopic_cmek(t *testing.T) { kms := BootstrapKMSKey(t) pid := getTestProjectFromEnv() - topicName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + topicName := fmt.Sprintf("tf-test-%s", randString(t, 10)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckPubsubTopicDestroy, + 
CheckDestroy: testAccCheckPubsubTopicDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccPubsubTopic_cmek(pid, topicName, kms.CryptoKey.Name), @@ -114,31 +110,3 @@ resource "google_pubsub_topic" "topic" { } `, pid, topicName, kmsKey) } - -// Temporary until all destroy functions can be reworked to take a provider as an argument -func testAccCheckPubsubTopicDestroyProducer(provider *schema.Provider) func(s *terraform.State) error { - return func(s *terraform.State) error { - for name, rs := range s.RootModule().Resources { - if rs.Type != "google_pubsub_topic" { - continue - } - if strings.HasPrefix(name, "data.") { - continue - } - - config := provider.Meta().(*Config) - - url, err := replaceVarsForTest(config, rs, "{{PubsubBasePath}}projects/{{project}}/topics/{{name}}") - if err != nil { - return err - } - - _, err = sendRequest(config, "GET", "", url, nil, pubsubTopicProjectNotReady) - if err == nil { - return fmt.Errorf("PubsubTopic still exists at %s", url) - } - } - - return nil - } -} diff --git a/third_party/terraform/tests/resource_redis_instance_test.go b/third_party/terraform/tests/resource_redis_instance_test.go index cece2781afff..c0bd52ee6a00 100644 --- a/third_party/terraform/tests/resource_redis_instance_test.go +++ b/third_party/terraform/tests/resource_redis_instance_test.go @@ -4,19 +4,18 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" ) func TestAccRedisInstance_update(t *testing.T) { t.Parallel() - name := acctest.RandomWithPrefix("tf-test") + name := fmt.Sprintf("tf-test-%d", randInt(t)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckRedisInstanceDestroy, + CheckDestroy: testAccCheckRedisInstanceDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccRedisInstance_update(name), @@ -41,7 +40,7 @@ func TestAccRedisInstance_update(t *testing.T) { func TestAccRedisInstance_regionFromLocation(t *testing.T) { t.Parallel() - name := acctest.RandomWithPrefix("tf-test") + name := fmt.Sprintf("tf-test-%d", randInt(t)) // Pick a zone that isn't in the provider-specified region so we know we // didn't fall back to that one. 
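For reference, every test file in this patch follows the same conversion, shown one last time in the pubsub topic file above where the interim provider-taking producer is deleted. Below is a minimal sketch of the target pattern, assuming the package helpers used throughout this diff (vcrTest, randString, googleProviderConfig, replaceVarsForTest, sendRequest, testAccPreCheck, testAccProviders) keep the signatures seen here; the resource type google_example_resource and the {{ExampleBasePath}} path template are hypothetical placeholders, not a real provider resource.

package google

import (
	"fmt"
	"testing"

	"github.com/hashicorp/terraform-plugin-sdk/helper/resource"
	"github.com/hashicorp/terraform-plugin-sdk/terraform"
)

// resource.Test becomes vcrTest so HTTP interactions can be recorded and
// replayed, and acctest.RandString becomes randString(t, n) so random values
// are derived per test rather than from a package-global source.
func TestAccExample_basic(t *testing.T) {
	t.Parallel()

	name := fmt.Sprintf("tf-test-%s", randString(t, 10))

	vcrTest(t, resource.TestCase{
		PreCheck:  func() { testAccPreCheck(t) },
		Providers: testAccProviders,
		// The destroy check is now a "producer": a function of *testing.T
		// that returns the TestCheckFunc, so it can reach a test-scoped
		// provider config instead of the package-global testAccProvider.
		CheckDestroy: testAccCheckExampleDestroyProducer(t),
		Steps: []resource.TestStep{
			{Config: testAccExample_basic(name)},
		},
	})
}

func testAccCheckExampleDestroyProducer(t *testing.T) func(s *terraform.State) error {
	return func(s *terraform.State) error {
		// googleProviderConfig(t) replaces testAccProvider.Meta().(*Config).
		config := googleProviderConfig(t)

		for _, rs := range s.RootModule().Resources {
			if rs.Type != "google_example_resource" { // hypothetical type
				continue
			}

			// Placeholder path template, mirroring the dashboard destroy check.
			url, err := replaceVarsForTest(config, rs, "{{ExampleBasePath}}v1/{{name}}")
			if err != nil {
				return err
			}

			// A GET that still succeeds after destroy means the resource leaked.
			_, err = sendRequest(config, "GET", "", url, nil)
			if err == nil {
				return fmt.Errorf("example resource still exists at %s", url)
			}
		}

		return nil
	}
}

func testAccExample_basic(name string) string {
	return fmt.Sprintf(`
resource "google_example_resource" "test" {
  name = "%s"
}
`, name)
}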
@@ -52,10 +51,10 @@ func TestAccRedisInstance_regionFromLocation(t *testing.T) { zone = "us-central1-a" } - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckRedisInstanceDestroy, + CheckDestroy: testAccCheckRedisInstanceDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccRedisInstance_regionFromLocation(name, zone), diff --git a/third_party/terraform/tests/resource_resourcemanager_lien_test.go b/third_party/terraform/tests/resource_resourcemanager_lien_test.go index c2975b032951..d176e035e4d2 100644 --- a/third_party/terraform/tests/resource_resourcemanager_lien_test.go +++ b/third_party/terraform/tests/resource_resourcemanager_lien_test.go @@ -5,7 +5,6 @@ import ( "strings" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" resourceManager "google.golang.org/api/cloudresourcemanager/v1" @@ -14,20 +13,20 @@ import ( func TestAccResourceManagerLien_basic(t *testing.T) { t.Parallel() - projectName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + projectName := fmt.Sprintf("tf-test-%s", randString(t, 10)) org := getTestOrgFromEnv(t) var lien resourceManager.Lien - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckResourceManagerLienDestroy, + CheckDestroy: testAccCheckResourceManagerLienDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccResourceManagerLien_basic(projectName, org), Check: resource.ComposeTestCheckFunc( testAccCheckResourceManagerLienExists( - "google_resource_manager_lien.lien", projectName, &lien), + t, "google_resource_manager_lien.lien", projectName, &lien), ), }, { @@ -46,7 +45,7 @@ func TestAccResourceManagerLien_basic(t *testing.T) { }) } -func testAccCheckResourceManagerLienExists(n, projectName string, lien *resourceManager.Lien) resource.TestCheckFunc { +func testAccCheckResourceManagerLienExists(t *testing.T, n, projectName string, lien *resourceManager.Lien) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] if !ok { @@ -57,7 +56,7 @@ func testAccCheckResourceManagerLienExists(n, projectName string, lien *resource return fmt.Errorf("No ID is set") } - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) found, err := config.clientResourceManager.Liens.List().Parent(fmt.Sprintf("projects/%s", projectName)).Do() if err != nil { @@ -73,21 +72,23 @@ func testAccCheckResourceManagerLienExists(n, projectName string, lien *resource } } -func testAccCheckResourceManagerLienDestroy(s *terraform.State) error { - config := testAccProvider.Meta().(*Config) +func testAccCheckResourceManagerLienDestroyProducer(t *testing.T) func(s *terraform.State) error { + return func(s *terraform.State) error { + config := googleProviderConfig(t) - for _, rs := range s.RootModule().Resources { - if rs.Type != "google_resource_manager_lien" { - continue - } + for _, rs := range s.RootModule().Resources { + if rs.Type != "google_resource_manager_lien" { + continue + } - _, err := config.clientResourceManager.Liens.List().Parent(fmt.Sprintf("projects/%s", rs.Primary.Attributes["parent"])).Do() - if err == nil { - return fmt.Errorf("Lien %s still exists", rs.Primary.ID) + _, err := 
config.clientResourceManager.Liens.List().Parent(fmt.Sprintf("projects/%s", rs.Primary.Attributes["parent"])).Do() + if err == nil { + return fmt.Errorf("Lien %s still exists", rs.Primary.ID) + } } - } - return nil + return nil + } } func testAccResourceManagerLien_basic(projectName, org string) string { diff --git a/third_party/terraform/tests/resource_runtimeconfig_config_test.go b/third_party/terraform/tests/resource_runtimeconfig_config_test.go index ee0f2bfc5371..38d4dd35ccb2 100644 --- a/third_party/terraform/tests/resource_runtimeconfig_config_test.go +++ b/third_party/terraform/tests/resource_runtimeconfig_config_test.go @@ -4,7 +4,6 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" "google.golang.org/api/runtimeconfig/v1beta1" @@ -14,19 +13,19 @@ func TestAccRuntimeconfigConfig_basic(t *testing.T) { t.Parallel() var runtimeConfig runtimeconfig.RuntimeConfig - configName := fmt.Sprintf("runtimeconfig-test-%s", acctest.RandString(10)) + configName := fmt.Sprintf("runtimeconfig-test-%s", randString(t, 10)) description := "my test description" - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckRuntimeconfigConfigDestroy, + CheckDestroy: testAccCheckRuntimeconfigConfigDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccRuntimeconfigConfig_basicDescription(configName, description), Check: resource.ComposeTestCheckFunc( testAccCheckRuntimeConfigExists( - "google_runtimeconfig_config.foobar", &runtimeConfig), + t, "google_runtimeconfig_config.foobar", &runtimeConfig), testAccCheckRuntimeConfigDescription(&runtimeConfig, description), ), }, @@ -43,27 +42,27 @@ func TestAccRuntimeconfig_update(t *testing.T) { t.Parallel() var runtimeConfig runtimeconfig.RuntimeConfig - configName := fmt.Sprintf("runtimeconfig-test-%s", acctest.RandString(10)) + configName := fmt.Sprintf("runtimeconfig-test-%s", randString(t, 10)) firstDescription := "my test description" secondDescription := "my updated test description" - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckRuntimeconfigConfigDestroy, + CheckDestroy: testAccCheckRuntimeconfigConfigDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccRuntimeconfigConfig_basicDescription(configName, firstDescription), Check: resource.ComposeTestCheckFunc( testAccCheckRuntimeConfigExists( - "google_runtimeconfig_config.foobar", &runtimeConfig), + t, "google_runtimeconfig_config.foobar", &runtimeConfig), testAccCheckRuntimeConfigDescription(&runtimeConfig, firstDescription), ), }, { Config: testAccRuntimeconfigConfig_basicDescription(configName, secondDescription), Check: resource.ComposeTestCheckFunc( testAccCheckRuntimeConfigExists( - "google_runtimeconfig_config.foobar", &runtimeConfig), + t, "google_runtimeconfig_config.foobar", &runtimeConfig), testAccCheckRuntimeConfigDescription(&runtimeConfig, secondDescription), ), }, @@ -75,26 +74,26 @@ func TestAccRuntimeconfig_updateEmptyDescription(t *testing.T) { t.Parallel() var runtimeConfig runtimeconfig.RuntimeConfig - configName := fmt.Sprintf("runtimeconfig-test-%s", acctest.RandString(10)) + configName := fmt.Sprintf("runtimeconfig-test-%s", randString(t, 10)) description := "my test description" - 
resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckRuntimeconfigConfigDestroy, + CheckDestroy: testAccCheckRuntimeconfigConfigDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccRuntimeconfigConfig_basicDescription(configName, description), Check: resource.ComposeTestCheckFunc( testAccCheckRuntimeConfigExists( - "google_runtimeconfig_config.foobar", &runtimeConfig), + t, "google_runtimeconfig_config.foobar", &runtimeConfig), testAccCheckRuntimeConfigDescription(&runtimeConfig, description), ), }, { Config: testAccRuntimeconfigConfig_emptyDescription(configName), Check: resource.ComposeTestCheckFunc( testAccCheckRuntimeConfigExists( - "google_runtimeconfig_config.foobar", &runtimeConfig), + t, "google_runtimeconfig_config.foobar", &runtimeConfig), testAccCheckRuntimeConfigDescription(&runtimeConfig, ""), ), }, @@ -112,7 +111,7 @@ func testAccCheckRuntimeConfigDescription(runtimeConfig *runtimeconfig.RuntimeCo } } -func testAccCheckRuntimeConfigExists(resourceName string, runtimeConfig *runtimeconfig.RuntimeConfig) resource.TestCheckFunc { +func testAccCheckRuntimeConfigExists(t *testing.T, resourceName string, runtimeConfig *runtimeconfig.RuntimeConfig) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[resourceName] if !ok { @@ -123,7 +122,7 @@ func testAccCheckRuntimeConfigExists(resourceName string, runtimeConfig *runtime return fmt.Errorf("No ID is set") } - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) found, err := config.clientRuntimeconfig.Projects.Configs.Get(rs.Primary.ID).Do() if err != nil { @@ -136,22 +135,24 @@ func testAccCheckRuntimeConfigExists(resourceName string, runtimeConfig *runtime } } -func testAccCheckRuntimeconfigConfigDestroy(s *terraform.State) error { - config := testAccProvider.Meta().(*Config) +func testAccCheckRuntimeconfigConfigDestroyProducer(t *testing.T) func(s *terraform.State) error { + return func(s *terraform.State) error { + config := googleProviderConfig(t) - for _, rs := range s.RootModule().Resources { - if rs.Type != "google_runtimeconfig_config" { - continue - } + for _, rs := range s.RootModule().Resources { + if rs.Type != "google_runtimeconfig_config" { + continue + } - _, err := config.clientRuntimeconfig.Projects.Configs.Get(rs.Primary.ID).Do() + _, err := config.clientRuntimeconfig.Projects.Configs.Get(rs.Primary.ID).Do() - if err == nil { - return fmt.Errorf("Runtimeconfig still exists") + if err == nil { + return fmt.Errorf("Runtimeconfig still exists") + } } - } - return nil + return nil + } } func testAccRuntimeconfigConfig_basicDescription(name, description string) string { diff --git a/third_party/terraform/tests/resource_runtimeconfig_variable_test.go b/third_party/terraform/tests/resource_runtimeconfig_variable_test.go index e0cc4dfa47c1..d00b815f795f 100644 --- a/third_party/terraform/tests/resource_runtimeconfig_variable_test.go +++ b/third_party/terraform/tests/resource_runtimeconfig_variable_test.go @@ -6,7 +6,6 @@ import ( "testing" "time" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" "google.golang.org/api/runtimeconfig/v1beta1" @@ -17,19 +16,19 @@ func TestAccRuntimeconfigVariable_basic(t *testing.T) { var variable runtimeconfig.Variable - varName := fmt.Sprintf("variable-test-%s", 
acctest.RandString(10)) + varName := fmt.Sprintf("variable-test-%s", randString(t, 10)) varText := "this is my test value" - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckRuntimeconfigVariableDestroy, + CheckDestroy: testAccCheckRuntimeconfigVariableDestroyProducer(t), Steps: []resource.TestStep{ { - Config: testAccRuntimeconfigVariable_basicText(varName, varText), + Config: testAccRuntimeconfigVariable_basicText(randString(t, 10), varName, varText), Check: resource.ComposeTestCheckFunc( testAccCheckRuntimeconfigVariableExists( - "google_runtimeconfig_variable.foobar", &variable), + t, "google_runtimeconfig_variable.foobar", &variable), testAccCheckRuntimeconfigVariableText(&variable, varText), testAccCheckRuntimeconfigVariableUpdateTime("google_runtimeconfig_variable.foobar"), ), @@ -48,28 +47,28 @@ func TestAccRuntimeconfigVariable_basicUpdate(t *testing.T) { var variable runtimeconfig.Variable - configName := fmt.Sprintf("some-name-%s", acctest.RandString(10)) - varName := fmt.Sprintf("variable-test-%s", acctest.RandString(10)) + configName := fmt.Sprintf("some-name-%s", randString(t, 10)) + varName := fmt.Sprintf("variable-test-%s", randString(t, 10)) varText := "this is my test value" varText2 := "this is my updated value" - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckRuntimeconfigVariableDestroy, + CheckDestroy: testAccCheckRuntimeconfigVariableDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccRuntimeconfigVariable_basicTextUpdate(configName, varName, varText), Check: resource.ComposeTestCheckFunc( testAccCheckRuntimeconfigVariableExists( - "google_runtimeconfig_variable.foobar", &variable), + t, "google_runtimeconfig_variable.foobar", &variable), testAccCheckRuntimeconfigVariableText(&variable, varText), ), }, { Config: testAccRuntimeconfigVariable_basicTextUpdate(configName, varName, varText2), Check: resource.ComposeTestCheckFunc( testAccCheckRuntimeconfigVariableExists( - "google_runtimeconfig_variable.foobar", &variable), + t, "google_runtimeconfig_variable.foobar", &variable), testAccCheckRuntimeconfigVariableText(&variable, varText2), ), }, @@ -82,19 +81,19 @@ func TestAccRuntimeconfigVariable_basicValue(t *testing.T) { var variable runtimeconfig.Variable - varName := fmt.Sprintf("variable-test-%s", acctest.RandString(10)) + varName := fmt.Sprintf("variable-test-%s", randString(t, 10)) varValue := "Zm9vYmFyCg==" - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckRuntimeconfigVariableDestroy, + CheckDestroy: testAccCheckRuntimeconfigVariableDestroyProducer(t), Steps: []resource.TestStep{ { - Config: testAccRuntimeconfigVariable_basicValue(varName, varValue), + Config: testAccRuntimeconfigVariable_basicValue(randString(t, 10), varName, varValue), Check: resource.ComposeTestCheckFunc( testAccCheckRuntimeconfigVariableExists( - "google_runtimeconfig_variable.foobar", &variable), + t, "google_runtimeconfig_variable.foobar", &variable), testAccCheckRuntimeconfigVariableValue(&variable, varValue), testAccCheckRuntimeconfigVariableUpdateTime("google_runtimeconfig_variable.foobar"), ), @@ -109,14 +108,16 @@ func TestAccRuntimeconfigVariable_basicValue(t *testing.T) { } func 
TestAccRuntimeconfigVariable_errorsOnBothValueAndText(t *testing.T) { + // Unit test, no HTTP interactions + skipIfVcr(t) t.Parallel() - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ { - Config: testAccRuntimeconfigVariable_invalidBothTextValue(), + Config: testAccRuntimeconfigVariable_invalidBothTextValue(randString(t, 10)), ExpectError: regexp.MustCompile("conflicts with"), }, }, @@ -126,19 +127,19 @@ func TestAccRuntimeconfigVariable_errorsOnBothValueAndText(t *testing.T) { func TestAccRuntimeconfigVariable_errorsOnMissingValueAndText(t *testing.T) { t.Parallel() - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ { - Config: testAccRuntimeconfigVariable_invalidMissingTextValue(), + Config: testAccRuntimeconfigVariable_invalidMissingTextValue(randString(t, 10)), ExpectError: regexp.MustCompile("You must specify one of value or text"), }, }, }) } -func testAccCheckRuntimeconfigVariableExists(resourceName string, variable *runtimeconfig.Variable) resource.TestCheckFunc { +func testAccCheckRuntimeconfigVariableExists(t *testing.T, resourceName string, variable *runtimeconfig.Variable) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[resourceName] if !ok { @@ -149,7 +150,7 @@ func testAccCheckRuntimeconfigVariableExists(resourceName string, variable *runt return fmt.Errorf("No ID is set") } - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) found, err := config.clientRuntimeconfig.Projects.Configs.Variables.Get(rs.Primary.ID).Do() if err != nil { @@ -206,25 +207,27 @@ func testAccCheckRuntimeconfigVariableValue(variable *runtimeconfig.Variable, va } } -func testAccCheckRuntimeconfigVariableDestroy(s *terraform.State) error { - config := testAccProvider.Meta().(*Config) +func testAccCheckRuntimeconfigVariableDestroyProducer(t *testing.T) func(s *terraform.State) error { + return func(s *terraform.State) error { + config := googleProviderConfig(t) - for _, rs := range s.RootModule().Resources { - if rs.Type != "google_runtimeconfig_variable" { - continue - } + for _, rs := range s.RootModule().Resources { + if rs.Type != "google_runtimeconfig_variable" { + continue + } - _, err := config.clientRuntimeconfig.Projects.Configs.Variables.Get(rs.Primary.ID).Do() + _, err := config.clientRuntimeconfig.Projects.Configs.Variables.Get(rs.Primary.ID).Do() - if err == nil { - return fmt.Errorf("Runtimeconfig variable still exists") + if err == nil { + return fmt.Errorf("Runtimeconfig variable still exists") + } } - } - return nil + return nil + } } -func testAccRuntimeconfigVariable_basicText(name, text string) string { +func testAccRuntimeconfigVariable_basicText(suffix, name, text string) string { return fmt.Sprintf(` resource "google_runtimeconfig_config" "foobar" { name = "some-config-%s" @@ -235,7 +238,7 @@ resource "google_runtimeconfig_variable" "foobar" { name = "%s" text = "%s" } -`, acctest.RandString(10), name, text) +`, suffix, name, text) } func testAccRuntimeconfigVariable_basicTextUpdate(configName, name, text string) string { @@ -252,7 +255,7 @@ resource "google_runtimeconfig_variable" "foobar" { `, configName, name, text) } -func testAccRuntimeconfigVariable_basicValue(name, value string) string { +func testAccRuntimeconfigVariable_basicValue(suffix, name, value string) 
string { return fmt.Sprintf(` resource "google_runtimeconfig_config" "foobar" { name = "some-config-%s" @@ -263,10 +266,10 @@ resource "google_runtimeconfig_variable" "foobar" { name = "%s" value = "%s" } -`, acctest.RandString(10), name, value) +`, suffix, name, value) } -func testAccRuntimeconfigVariable_invalidBothTextValue() string { +func testAccRuntimeconfigVariable_invalidBothTextValue(suffix string) string { return fmt.Sprintf(` resource "google_runtimeconfig_config" "foobar" { name = "some-config-%s" @@ -278,10 +281,10 @@ resource "google_runtimeconfig_variable" "foobar" { text = "here's my value" value = "Zm9vYmFyCg==" } -`, acctest.RandString(10), acctest.RandString(10)) +`, suffix, suffix) } -func testAccRuntimeconfigVariable_invalidMissingTextValue() string { +func testAccRuntimeconfigVariable_invalidMissingTextValue(suffix string) string { return fmt.Sprintf(` resource "google_runtimeconfig_config" "foobar" { name = "some-config-%s" @@ -291,5 +294,5 @@ resource "google_runtimeconfig_variable" "foobar" { parent = google_runtimeconfig_config.foobar.name name = "my-variable-namespace/%s" } -`, acctest.RandString(10), acctest.RandString(10)) +`, suffix, suffix) } diff --git a/third_party/terraform/tests/resource_secret_manager_secret_test.go.erb b/third_party/terraform/tests/resource_secret_manager_secret_test.go.erb index 606d41a76341..198a98dcc5fc 100644 --- a/third_party/terraform/tests/resource_secret_manager_secret_test.go.erb +++ b/third_party/terraform/tests/resource_secret_manager_secret_test.go.erb @@ -1,13 +1,11 @@ <% autogen_exception -%> package google -<% unless version == 'ga' -%> import ( "fmt" "strings" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" ) @@ -16,13 +14,13 @@ func TestAccSecretManagerSecret_import(t *testing.T) { t.Parallel() context := map[string]interface{}{ - "random_suffix": acctest.RandString(10), + "random_suffix": randString(t, 10), } - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckSecretManagerSecretDestroy, + CheckDestroy: testAccCheckSecretManagerSecretDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccSecretManagerSecret_basic(context), @@ -58,4 +56,3 @@ resource "google_secret_manager_secret" "secret-basic" { } `, context) } -<% end -%> diff --git a/third_party/terraform/tests/resource_secret_manager_secret_version_test.go.erb b/third_party/terraform/tests/resource_secret_manager_secret_version_test.go.erb index 278cad02e6c9..6ab65a039c70 100644 --- a/third_party/terraform/tests/resource_secret_manager_secret_version_test.go.erb +++ b/third_party/terraform/tests/resource_secret_manager_secret_version_test.go.erb @@ -1,13 +1,11 @@ <% autogen_exception -%> package google -<% unless version == 'ga' -%> import ( "fmt" "strings" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" ) @@ -16,13 +14,13 @@ func TestAccSecretManagerSecretVersion_update(t *testing.T) { t.Parallel() context := map[string]interface{}{ - "random_suffix": acctest.RandString(10), + "random_suffix": randString(t, 10), } - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: 
+		CheckDestroy: testAccCheckSecretManagerSecretVersionDestroyProducer(t),
 		Steps: []resource.TestStep{
 			{
 				Config: testAccSecretManagerSecretVersion_basic(context),
@@ -100,5 +98,3 @@ resource "google_secret_manager_secret_version" "secret-version-basic" {
 }
 `, context)
 }
-
-<% end -%>
diff --git a/third_party/terraform/tests/resource_security_center_source_test.go b/third_party/terraform/tests/resource_security_center_source_test.go
index 995192462385..1c3fd2863519 100644
--- a/third_party/terraform/tests/resource_security_center_source_test.go
+++ b/third_party/terraform/tests/resource_security_center_source_test.go
@@ -4,7 +4,6 @@ import (
 	"fmt"
 	"testing"
 
-	"github.com/hashicorp/terraform-plugin-sdk/helper/acctest"
 	"github.com/hashicorp/terraform-plugin-sdk/helper/resource"
 )
 
@@ -12,9 +11,9 @@ func TestAccSecurityCenterSource_basic(t *testing.T) {
 	t.Parallel()
 
 	orgId := getTestOrgFromEnv(t)
-	suffix := acctest.RandString(10)
+	suffix := randString(t, 10)
 
-	resource.Test(t, resource.TestCase{
+	vcrTest(t, resource.TestCase{
 		PreCheck:  func() { testAccPreCheck(t) },
 		Providers: testAccProviders,
 		Steps: []resource.TestStep{
diff --git a/third_party/terraform/tests/resource_service_directory_endpoint_test.go.erb b/third_party/terraform/tests/resource_service_directory_endpoint_test.go.erb
new file mode 100644
index 000000000000..4366bde72eaa
--- /dev/null
+++ b/third_party/terraform/tests/resource_service_directory_endpoint_test.go.erb
@@ -0,0 +1,103 @@
+<% autogen_exception -%>
+package google
+<% unless version == 'ga' -%>
+
+import (
+	"fmt"
+	"testing"
+
+	"github.com/hashicorp/terraform-plugin-sdk/helper/resource"
+)
+
+func TestAccServiceDirectoryEndpoint_serviceDirectoryEndpointUpdateExample(t *testing.T) {
+	t.Parallel()
+
+	project := getTestProjectFromEnv()
+	location := "us-central1"
+	testId := fmt.Sprintf("tf-test-example-endpoint%s", randString(t, 10))
+
+	vcrTest(t, resource.TestCase{
+		PreCheck:     func() { testAccPreCheck(t) },
+		Providers:    testAccProviders,
+		CheckDestroy: testAccCheckServiceDirectoryEndpointDestroyProducer(t),
+		Steps: []resource.TestStep{
+			{
+				Config: testAccServiceDirectoryEndpoint_basic(location, testId),
+			},
+			{
+				ResourceName:      "google_service_directory_endpoint.example",
+				ImportState:       true,
+				ImportStateVerify: true,
+			},
+			{
+				ResourceName: "google_service_directory_endpoint.example",
+				// {{project}}/{{location}}/{{namespace_id}}/{{service_id}}/{{endpoint_id}}
+				ImportStateId:     fmt.Sprintf("%s/%s/%s/%s/%s", project, location, testId, testId, testId),
+				ImportState:       true,
+				ImportStateVerify: true,
+			},
+			{
+				ResourceName: "google_service_directory_endpoint.example",
+				// {{location}}/{{namespace_id}}/{{service_id}}/{{endpoint_id}}
+				ImportStateId:     fmt.Sprintf("%s/%s/%s/%s", location, testId, testId, testId),
+				ImportState:       true,
+				ImportStateVerify: true,
+			},
+			{
+				Config: testAccServiceDirectoryEndpoint_update(location, testId),
+			},
+			{
+				ResourceName:      "google_service_directory_endpoint.example",
+				ImportState:       true,
+				ImportStateVerify: true,
+			},
+		},
+	})
+}
+
+func testAccServiceDirectoryEndpoint_basic(location, testId string) string {
+	return fmt.Sprintf(`
+resource "google_service_directory_namespace" "example" {
+  namespace_id = "%s"
+  location     = "%s"
+}
+
+resource "google_service_directory_service" "example" {
+  service_id = "%s"
+  namespace  = google_service_directory_namespace.example.id
+}
+
+resource "google_service_directory_endpoint" "example" {
+  endpoint_id = "%s"
+  service     = google_service_directory_service.example.id
+}
+`, testId, location, testId, testId)
+}
+
+func testAccServiceDirectoryEndpoint_update(location, testId string) string {
+	return fmt.Sprintf(`
+resource "google_service_directory_namespace" "example" {
+  namespace_id = "%s"
+  location     = "%s"
+}
+
+resource "google_service_directory_service" "example" {
+  service_id = "%s"
+  namespace  = google_service_directory_namespace.example.id
+}
+
+resource "google_service_directory_endpoint" "example" {
+  endpoint_id = "%s"
+  service     = google_service_directory_service.example.id
+
+  metadata = {
+    stage  = "prod"
+    region = "us-central1"
+  }
+
+  address = "1.2.3.4"
+  port    = 5353
+}
+`, testId, location, testId, testId)
+}
+<% end -%>
diff --git a/third_party/terraform/tests/resource_service_directory_namespace_test.go.erb b/third_party/terraform/tests/resource_service_directory_namespace_test.go.erb
new file mode 100644
index 000000000000..42591bb88393
--- /dev/null
+++ b/third_party/terraform/tests/resource_service_directory_namespace_test.go.erb
@@ -0,0 +1,80 @@
+<% autogen_exception -%>
+package google
+<% unless version == 'ga' -%>
+
+import (
+	"fmt"
+	"testing"
+
+	"github.com/hashicorp/terraform-plugin-sdk/helper/resource"
+)
+
+func TestAccServiceDirectoryNamespace_serviceDirectoryNamespaceUpdateExample(t *testing.T) {
+	t.Parallel()
+
+	project := getTestProjectFromEnv()
+	location := "us-central1"
+	testId := fmt.Sprintf("tf-test-example-namespace%s", randString(t, 10))
+
+	vcrTest(t, resource.TestCase{
+		PreCheck:     func() { testAccPreCheck(t) },
+		Providers:    testAccProviders,
+		CheckDestroy: testAccCheckServiceDirectoryNamespaceDestroyProducer(t),
+		Steps: []resource.TestStep{
+			{
+				Config: testAccServiceDirectoryNamespace_basic(location, testId),
+			},
+			{
+				ResourceName:      "google_service_directory_namespace.example",
+				ImportState:       true,
+				ImportStateVerify: true,
+			},
+			{
+				ResourceName: "google_service_directory_namespace.example",
+				// {{project}}/{{location}}/{{namespace_id}}
+				ImportStateId:     fmt.Sprintf("%s/%s/%s", project, location, testId),
+				ImportState:       true,
+				ImportStateVerify: true,
+			},
+			{
+				ResourceName: "google_service_directory_namespace.example",
+				// {{location}}/{{namespace_id}}
+				ImportStateId:     fmt.Sprintf("%s/%s", location, testId),
+				ImportState:       true,
+				ImportStateVerify: true,
+			},
+			{
+				Config: testAccServiceDirectoryNamespace_update(location, testId),
+			},
+			{
+				ResourceName:      "google_service_directory_namespace.example",
+				ImportState:       true,
+				ImportStateVerify: true,
+			},
+		},
+	})
+}
+
+func testAccServiceDirectoryNamespace_basic(location, testId string) string {
+	return fmt.Sprintf(`
+resource "google_service_directory_namespace" "example" {
+  namespace_id = "%s"
+  location     = "%s"
+}
+`, testId, location)
+}
+
+func testAccServiceDirectoryNamespace_update(location, testId string) string {
+	return fmt.Sprintf(`
+resource "google_service_directory_namespace" "example" {
+  namespace_id = "%s"
+  location     = "%s"
+
+  labels = {
+    key = "value"
+    foo = "bar"
+  }
+}
+`, testId, location)
+}
+<% end -%>
diff --git a/third_party/terraform/tests/resource_service_directory_service_test.go.erb b/third_party/terraform/tests/resource_service_directory_service_test.go.erb
new file mode 100644
index 000000000000..bf526ca93ce3
--- /dev/null
+++ b/third_party/terraform/tests/resource_service_directory_service_test.go.erb
@@ -0,0 +1,90 @@
+<% autogen_exception -%>
+package google
+<% unless version == 'ga' -%>
+
+import (
+	"fmt"
+	"testing"
+
+	"github.com/hashicorp/terraform-plugin-sdk/helper/resource"
+)
+
+func TestAccServiceDirectoryService_serviceDirectoryServiceUpdateExample(t *testing.T) {
+	t.Parallel()
+
+	project := getTestProjectFromEnv()
+	location := "us-central1"
+	testId := fmt.Sprintf("tf-test-example-service%s", randString(t, 10))
+
+	vcrTest(t, resource.TestCase{
+		PreCheck:     func() { testAccPreCheck(t) },
+		Providers:    testAccProviders,
+		CheckDestroy: testAccCheckServiceDirectoryServiceDestroyProducer(t),
+		Steps: []resource.TestStep{
+			{
+				Config: testAccServiceDirectoryService_basic(location, testId),
+			},
+			{
+				ResourceName:      "google_service_directory_service.example",
+				ImportState:       true,
+				ImportStateVerify: true,
+			},
+			{
+				ResourceName: "google_service_directory_service.example",
+				// {{project}}/{{location}}/{{namespace_id}}/{{service_id}}
+				ImportStateId:     fmt.Sprintf("%s/%s/%s/%s", project, location, testId, testId),
+				ImportState:       true,
+				ImportStateVerify: true,
+			},
+			{
+				ResourceName: "google_service_directory_service.example",
+				// {{location}}/{{namespace_id}}/{{service_id}}
+				ImportStateId:     fmt.Sprintf("%s/%s/%s", location, testId, testId),
+				ImportState:       true,
+				ImportStateVerify: true,
+			},
+			{
+				Config: testAccServiceDirectoryService_update(location, testId),
+			},
+			{
+				ResourceName:      "google_service_directory_service.example",
+				ImportState:       true,
+				ImportStateVerify: true,
+			},
+		},
+	})
+}
+
+func testAccServiceDirectoryService_basic(location, testId string) string {
+	return fmt.Sprintf(`
+resource "google_service_directory_namespace" "example" {
+  namespace_id = "%s"
+  location     = "%s"
+}
+
+resource "google_service_directory_service" "example" {
+  service_id = "%s"
+  namespace  = google_service_directory_namespace.example.id
+}
+`, testId, location, testId)
+}
+
+func testAccServiceDirectoryService_update(location, testId string) string {
+	return fmt.Sprintf(`
+resource "google_service_directory_namespace" "example" {
+  namespace_id = "%s"
+  location     = "%s"
+}
+
+resource "google_service_directory_service" "example" {
+  service_id = "%s"
+  namespace  = google_service_directory_namespace.example.id
+
+  metadata = {
+    stage  = "prod"
+    region = "us-central1"
+  }
+}
+`, testId, location, testId)
+}
+<% end -%>
diff --git a/third_party/terraform/tests/resource_service_networking_connection_test.go b/third_party/terraform/tests/resource_service_networking_connection_test.go
index 55d91a5244bc..c04c40e1c468 100644
--- a/third_party/terraform/tests/resource_service_networking_connection_test.go
+++ b/third_party/terraform/tests/resource_service_networking_connection_test.go
@@ -4,7 +4,6 @@ import (
 	"fmt"
 	"testing"
 
-	"github.com/hashicorp/terraform-plugin-sdk/helper/acctest"
 	"github.com/hashicorp/terraform-plugin-sdk/helper/resource"
 	"github.com/hashicorp/terraform-plugin-sdk/terraform"
 )
@@ -13,13 +12,13 @@ func TestAccServiceNetworkingConnection_create(t *testing.T) {
 	t.Parallel()
 
 	network := BootstrapSharedTestNetwork(t, "service-networking-connection-create")
-	addr := fmt.Sprintf("tf-test-%s", acctest.RandString(10))
+	addr := fmt.Sprintf("tf-test-%s", randString(t, 10))
 	service := "servicenetworking.googleapis.com"
 
-	resource.Test(t, resource.TestCase{
+	vcrTest(t, resource.TestCase{
 		PreCheck:     func() { testAccPreCheck(t) },
 		Providers:    testAccProviders,
-		CheckDestroy: testServiceNetworkingConnectionDestroy(service, network),
+		CheckDestroy: testServiceNetworkingConnectionDestroy(t, service, network),
 		Steps: []resource.TestStep{
 			{
 				Config: testAccServiceNetworkingConnection(network, addr, "servicenetworking.googleapis.com"),
@@ -37,14 +36,14 @@ func TestAccServiceNetworkingConnection_update(t *testing.T) {
 	t.Parallel()
 
 	network := BootstrapSharedTestNetwork(t, "service-networking-connection-update")
-	addr1 := fmt.Sprintf("tf-test-%s", acctest.RandString(10))
-	addr2 := fmt.Sprintf("tf-test-%s", acctest.RandString(10))
+	addr1 := fmt.Sprintf("tf-test-%s", randString(t, 10))
+	addr2 := fmt.Sprintf("tf-test-%s", randString(t, 10))
 	service := "servicenetworking.googleapis.com"
 
-	resource.Test(t, resource.TestCase{
+	vcrTest(t, resource.TestCase{
 		PreCheck:     func() { testAccPreCheck(t) },
 		Providers:    testAccProviders,
-		CheckDestroy: testServiceNetworkingConnectionDestroy(service, network),
+		CheckDestroy: testServiceNetworkingConnectionDestroy(t, service, network),
 		Steps: []resource.TestStep{
 			{
 				Config: testAccServiceNetworkingConnection(network, addr1, "servicenetworking.googleapis.com"),
@@ -67,9 +66,9 @@ func TestAccServiceNetworkingConnection_update(t *testing.T) {
 
 }
 
-func testServiceNetworkingConnectionDestroy(parent, network string) resource.TestCheckFunc {
+func testServiceNetworkingConnectionDestroy(t *testing.T, parent, network string) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		config := testAccProvider.Meta().(*Config)
+		config := googleProviderConfig(t)
 
 		parentService := "services/" + parent
 		networkName := fmt.Sprintf("projects/%s/global/networks/%s", getTestProjectFromEnv(), network)
diff --git a/third_party/terraform/tests/resource_sourcerepo_repository_test.go b/third_party/terraform/tests/resource_sourcerepo_repository_test.go
index 77f0a4d327ee..6038f0597349 100644
--- a/third_party/terraform/tests/resource_sourcerepo_repository_test.go
+++ b/third_party/terraform/tests/resource_sourcerepo_repository_test.go
@@ -4,18 +4,17 @@ import (
 	"fmt"
 	"testing"
 
-	"github.com/hashicorp/terraform-plugin-sdk/helper/acctest"
 	"github.com/hashicorp/terraform-plugin-sdk/helper/resource"
 )
 
 func TestAccSourceRepoRepository_basic(t *testing.T) {
 	t.Parallel()
 
-	repositoryName := fmt.Sprintf("source-repo-repository-test-%s", acctest.RandString(10))
-	resource.Test(t, resource.TestCase{
+	repositoryName := fmt.Sprintf("source-repo-repository-test-%s", randString(t, 10))
+	vcrTest(t, resource.TestCase{
 		PreCheck:     func() { testAccPreCheck(t) },
 		Providers:    testAccProviders,
-		CheckDestroy: testAccCheckSourceRepoRepositoryDestroy,
+		CheckDestroy: testAccCheckSourceRepoRepositoryDestroyProducer(t),
 		Steps: []resource.TestStep{
 			{
 				Config: testAccSourceRepoRepository_basic(repositoryName),
@@ -32,13 +31,13 @@ func TestAccSourceRepoRepository_basic(t *testing.T) {
 func TestAccSourceRepoRepository_update(t *testing.T) {
 	t.Parallel()
 
-	repositoryName := fmt.Sprintf("source-repo-repository-test-%s", acctest.RandString(10))
-	accountId := fmt.Sprintf("account-id-%s", acctest.RandString(10))
-	topicName := fmt.Sprintf("topic-name-%s", acctest.RandString(10))
-	resource.Test(t, resource.TestCase{
+	repositoryName := fmt.Sprintf("source-repo-repository-test-%s", randString(t, 10))
+	accountId := fmt.Sprintf("account-id-%s", randString(t, 10))
+	topicName := fmt.Sprintf("topic-name-%s", randString(t, 10))
+	vcrTest(t, resource.TestCase{
 		PreCheck:     func() { testAccPreCheck(t) },
 		Providers:    testAccProviders,
-		CheckDestroy: testAccCheckSourceRepoRepositoryDestroy,
+		CheckDestroy: testAccCheckSourceRepoRepositoryDestroyProducer(t),
 		Steps: []resource.TestStep{
 			{
 				Config: testAccSourceRepoRepository_basic(repositoryName),
diff --git a/third_party/terraform/tests/resource_spanner_database_iam_test.go b/third_party/terraform/tests/resource_spanner_database_iam_test.go
index a509ea69b43e..920cf8cb9ee7 100644
--- a/third_party/terraform/tests/resource_spanner_database_iam_test.go
+++ b/third_party/terraform/tests/resource_spanner_database_iam_test.go
@@ -4,20 +4,19 @@ import (
 	"fmt"
 	"testing"
 
-	"github.com/hashicorp/terraform-plugin-sdk/helper/acctest"
 	"github.com/hashicorp/terraform-plugin-sdk/helper/resource"
 )
 
 func TestAccSpannerDatabaseIamBinding(t *testing.T) {
 	t.Parallel()
 
-	account := acctest.RandomWithPrefix("tf-test")
+	account := fmt.Sprintf("tf-test-%d", randInt(t))
 	role := "roles/spanner.databaseAdmin"
 	project := getTestProjectFromEnv()
-	database := fmt.Sprintf("tf-test-%s", acctest.RandString(10))
-	instance := fmt.Sprintf("tf-test-%s", acctest.RandString(10))
+	database := fmt.Sprintf("tf-test-%s", randString(t, 10))
+	instance := fmt.Sprintf("tf-test-%s", randString(t, 10))
 
-	resource.Test(t, resource.TestCase{
+	vcrTest(t, resource.TestCase{
 		PreCheck:  func() { testAccPreCheck(t) },
 		Providers: testAccProviders,
 		Steps: []resource.TestStep{
@@ -56,12 +55,12 @@ func TestAccSpannerDatabaseIamMember(t *testing.T) {
 	t.Parallel()
 
 	project := getTestProjectFromEnv()
-	account := acctest.RandomWithPrefix("tf-test")
+	account := fmt.Sprintf("tf-test-%d", randInt(t))
 	role := "roles/spanner.databaseAdmin"
-	database := fmt.Sprintf("tf-test-%s", acctest.RandString(10))
-	instance := fmt.Sprintf("tf-test-%s", acctest.RandString(10))
+	database := fmt.Sprintf("tf-test-%s", randString(t, 10))
+	instance := fmt.Sprintf("tf-test-%s", randString(t, 10))
 
-	resource.Test(t, resource.TestCase{
+	vcrTest(t, resource.TestCase{
 		PreCheck:  func() { testAccPreCheck(t) },
 		Providers: testAccProviders,
 		Steps: []resource.TestStep{
@@ -87,12 +86,12 @@ func TestAccSpannerDatabaseIamPolicy(t *testing.T) {
 	t.Parallel()
 
 	project := getTestProjectFromEnv()
-	account := acctest.RandomWithPrefix("tf-test")
+	account := fmt.Sprintf("tf-test-%d", randInt(t))
 	role := "roles/spanner.databaseAdmin"
-	database := fmt.Sprintf("tf-test-%s", acctest.RandString(10))
-	instance := fmt.Sprintf("tf-test-%s", acctest.RandString(10))
+	database := fmt.Sprintf("tf-test-%s", randString(t, 10))
+	instance := fmt.Sprintf("tf-test-%s", randString(t, 10))
 
-	resource.Test(t, resource.TestCase{
+	vcrTest(t, resource.TestCase{
 		PreCheck:  func() { testAccPreCheck(t) },
 		Providers: testAccProviders,
 		Steps: []resource.TestStep{
diff --git a/third_party/terraform/tests/resource_spanner_database_test.go b/third_party/terraform/tests/resource_spanner_database_test.go
index 530479a34130..c8d237c75f1a 100644
--- a/third_party/terraform/tests/resource_spanner_database_test.go
+++ b/third_party/terraform/tests/resource_spanner_database_test.go
@@ -4,7 +4,6 @@ import (
 	"fmt"
 	"testing"
 
-	"github.com/hashicorp/terraform-plugin-sdk/helper/acctest"
 	"github.com/hashicorp/terraform-plugin-sdk/helper/resource"
 )
 
@@ -12,14 +11,14 @@ func TestAccSpannerDatabase_basic(t *testing.T) {
 	t.Parallel()
 
 	project := getTestProjectFromEnv()
-	rnd := acctest.RandString(10)
+	rnd := randString(t, 10)
 	instanceName := fmt.Sprintf("my-instance-%s", rnd)
 	databaseName := fmt.Sprintf("mydb_%s", rnd)
 
-	resource.Test(t, resource.TestCase{
+	vcrTest(t, resource.TestCase{
 		PreCheck:     func() { testAccPreCheck(t) },
 		Providers:    testAccProviders,
-		CheckDestroy: testAccCheckSpannerDatabaseDestroy,
+		CheckDestroy: testAccCheckSpannerDatabaseDestroyProducer(t),
 		Steps: []resource.TestStep{
 			{
 				Config: testAccSpannerDatabase_basic(instanceName, databaseName),
diff --git a/third_party/terraform/tests/resource_spanner_instance_iam_test.go b/third_party/terraform/tests/resource_spanner_instance_iam_test.go
index 9b46c19a300a..0008c1d1b3f6 100644
--- a/third_party/terraform/tests/resource_spanner_instance_iam_test.go
+++ b/third_party/terraform/tests/resource_spanner_instance_iam_test.go
@@ -4,19 +4,18 @@ import (
 	"fmt"
 	"testing"
 
-	"github.com/hashicorp/terraform-plugin-sdk/helper/acctest"
 	"github.com/hashicorp/terraform-plugin-sdk/helper/resource"
 )
 
 func TestAccSpannerInstanceIamBinding(t *testing.T) {
 	t.Parallel()
 
-	account := acctest.RandomWithPrefix("tf-test")
+	account := fmt.Sprintf("tf-test-%d", randInt(t))
 	role := "roles/spanner.databaseAdmin"
 	project := getTestProjectFromEnv()
-	instance := fmt.Sprintf("tf-test-%s", acctest.RandString(10))
+	instance := fmt.Sprintf("tf-test-%s", randString(t, 10))
 
-	resource.Test(t, resource.TestCase{
+	vcrTest(t, resource.TestCase{
 		PreCheck:  func() { testAccPreCheck(t) },
 		Providers: testAccProviders,
 		Steps: []resource.TestStep{
@@ -53,11 +52,11 @@ func TestAccSpannerInstanceIamMember(t *testing.T) {
 	t.Parallel()
 
 	project := getTestProjectFromEnv()
-	account := acctest.RandomWithPrefix("tf-test")
+	account := fmt.Sprintf("tf-test-%d", randInt(t))
 	role := "roles/spanner.databaseAdmin"
-	instance := fmt.Sprintf("tf-test-%s", acctest.RandString(10))
+	instance := fmt.Sprintf("tf-test-%s", randString(t, 10))
 
-	resource.Test(t, resource.TestCase{
+	vcrTest(t, resource.TestCase{
 		PreCheck:  func() { testAccPreCheck(t) },
 		Providers: testAccProviders,
 		Steps: []resource.TestStep{
@@ -82,11 +81,11 @@ func TestAccSpannerInstanceIamPolicy(t *testing.T) {
 	t.Parallel()
 
 	project := getTestProjectFromEnv()
-	account := acctest.RandomWithPrefix("tf-test")
+	account := fmt.Sprintf("tf-test-%d", randInt(t))
 	role := "roles/spanner.databaseAdmin"
-	instance := fmt.Sprintf("tf-test-%s", acctest.RandString(10))
+	instance := fmt.Sprintf("tf-test-%s", randString(t, 10))
 
-	resource.Test(t, resource.TestCase{
+	vcrTest(t, resource.TestCase{
 		PreCheck:  func() { testAccPreCheck(t) },
 		Providers: testAccProviders,
 		Steps: []resource.TestStep{
diff --git a/third_party/terraform/tests/resource_spanner_instance_test.go b/third_party/terraform/tests/resource_spanner_instance_test.go
index e269036c351d..c0106a9b3555 100644
--- a/third_party/terraform/tests/resource_spanner_instance_test.go
+++ b/third_party/terraform/tests/resource_spanner_instance_test.go
@@ -4,7 +4,6 @@ import (
 	"fmt"
 	"testing"
 
-	"github.com/hashicorp/terraform-plugin-sdk/helper/acctest"
 	"github.com/hashicorp/terraform-plugin-sdk/helper/resource"
 )
 
@@ -51,11 +50,11 @@ func expectEquals(t *testing.T, expected, actual string) {
 func TestAccSpannerInstance_basic(t *testing.T) {
 	t.Parallel()
 
-	idName := fmt.Sprintf("spanner-test-%s", acctest.RandString(10))
-	resource.Test(t, resource.TestCase{
+	idName := fmt.Sprintf("spanner-test-%s", randString(t, 10))
+	vcrTest(t, resource.TestCase{
 		PreCheck:     func() { testAccPreCheck(t) },
 		Providers:    testAccProviders,
-		CheckDestroy: testAccCheckSpannerInstanceDestroy,
+		CheckDestroy: testAccCheckSpannerInstanceDestroyProducer(t),
 		Steps: []resource.TestStep{
 			{
 				Config: testAccSpannerInstance_basic(idName),
@@ -73,13 +72,15 @@ func TestAccSpannerInstance_basic(t *testing.T) {
 }
 
 func TestAccSpannerInstance_basicWithAutogenName(t *testing.T) {
+	// Randomness
+	skipIfVcr(t)
 	t.Parallel()
 
-	displayName := fmt.Sprintf("spanner-test-%s-dname", acctest.RandString(10))
-	resource.Test(t, resource.TestCase{
+	displayName := fmt.Sprintf("spanner-test-%s-dname", randString(t, 10))
+	vcrTest(t, resource.TestCase{
 		PreCheck:     func() { testAccPreCheck(t) },
 		Providers:    testAccProviders,
-		CheckDestroy: testAccCheckSpannerInstanceDestroy,
+		CheckDestroy: testAccCheckSpannerInstanceDestroyProducer(t),
 		Steps: []resource.TestStep{
 			{
 				Config: testAccSpannerInstance_basicWithAutogenName(displayName),
@@ -97,14 +98,16 @@ func TestAccSpannerInstance_basicWithAutogenName(t *testing.T) {
 }
 
 func TestAccSpannerInstance_update(t *testing.T) {
+	// Randomness
+	skipIfVcr(t)
 	t.Parallel()
 
-	dName1 := fmt.Sprintf("spanner-dname1-%s", acctest.RandString(10))
-	dName2 := fmt.Sprintf("spanner-dname2-%s", acctest.RandString(10))
-	resource.Test(t, resource.TestCase{
+	dName1 := fmt.Sprintf("spanner-dname1-%s", randString(t, 10))
+	dName2 := fmt.Sprintf("spanner-dname2-%s", randString(t, 10))
+	vcrTest(t, resource.TestCase{
 		PreCheck:     func() { testAccPreCheck(t) },
 		Providers:    testAccProviders,
-		CheckDestroy: testAccCheckSpannerInstanceDestroy,
+		CheckDestroy: testAccCheckSpannerInstanceDestroyProducer(t),
 		Steps: []resource.TestStep{
 			{
 				Config: testAccSpannerInstance_update(dName1, 1, false),
diff --git a/third_party/terraform/tests/resource_sql_database_instance_test.go.erb b/third_party/terraform/tests/resource_sql_database_instance_test.go.erb
index c22a671b7c56..feaeb2615272 100644
--- a/third_party/terraform/tests/resource_sql_database_instance_test.go.erb
+++ b/third_party/terraform/tests/resource_sql_database_instance_test.go.erb
@@ -8,7 +8,6 @@ import (
 	"strings"
 	"testing"
 
-	"github.com/hashicorp/terraform-plugin-sdk/helper/acctest"
 	"github.com/hashicorp/terraform-plugin-sdk/helper/resource"
 	"github.com/hashicorp/terraform-plugin-sdk/terraform"
 	sqladmin "google.golang.org/api/sqladmin/v1beta4"
@@ -105,7 +104,7 @@ func testSweepDatabases(region string) error {
 			return nil
 		}
 
-		err = sqlAdminOperationWait(config, op, config.Project, "Stop Replica")
+		err = sqlAdminOperationWaitTime(config, op, config.Project, "Stop Replica", 10*time.Minute)
 		if err != nil {
 			if strings.Contains(err.Error(), "does not exist") {
 				log.Printf("Replication operation not found")
@@ -137,7 +136,7 @@ func testSweepDatabases(region string) error {
 			return nil
 		}
 
-		err = sqlAdminOperationWait(config, op, config.Project, "Delete Instance")
+		err = sqlAdminOperationWaitTime(config, op, config.Project, "Delete Instance", 10*time.Minute)
 		if err != nil {
 			if strings.Contains(err.Error(), "does not exist") {
 				log.Printf("SQL instance not found")
@@ -153,12 +152,14 @@ func testSweepDatabases(region string) error {
 }
 
 func TestAccSqlDatabaseInstance_basicInferredName(t *testing.T) {
+	// Randomness
+	skipIfVcr(t)
 	t.Parallel()
 
-	resource.Test(t, resource.TestCase{
+	vcrTest(t, resource.TestCase{
 		PreCheck:     func() { testAccPreCheck(t) },
 		Providers:    testAccProviders,
-		CheckDestroy: testAccSqlDatabaseInstanceDestroy,
+		CheckDestroy: testAccSqlDatabaseInstanceDestroyProducer(t),
 		Steps: []resource.TestStep{
 			resource.TestStep{
 				Config: testGoogleSqlDatabaseInstance_basic2,
@@ -175,17 +176,17 @@ func TestAccSqlDatabaseInstance_basicInferredName(t *testing.T) {
 func TestAccSqlDatabaseInstance_basicSecondGen(t *testing.T) {
 	t.Parallel()
 
-	databaseName := "tf-test-" + acctest.RandString(10)
+	databaseName := "tf-test-" + randString(t, 10)
 
-	resource.Test(t, resource.TestCase{
+	vcrTest(t, resource.TestCase{
 		PreCheck:     func() { testAccPreCheck(t) },
 		Providers:    testAccProviders,
-		CheckDestroy: testAccSqlDatabaseInstanceDestroy,
+		CheckDestroy: testAccSqlDatabaseInstanceDestroyProducer(t),
 		Steps: []resource.TestStep{
 			resource.TestStep{
 				Config: fmt.Sprintf(
 					testGoogleSqlDatabaseInstance_basic3, databaseName),
-				Check: testAccCheckGoogleSqlDatabaseRootUserDoesNotExist(databaseName),
+				Check: testAccCheckGoogleSqlDatabaseRootUserDoesNotExist(t, databaseName),
 			},
 			resource.TestStep{
 				ResourceName:      "google_sql_database_instance.instance",
@@ -196,17 +197,16 @@ func TestAccSqlDatabaseInstance_basicSecondGen(t *testing.T) {
 	})
 }
 
-<% unless version == 'ga' -%>
 func TestAccSqlDatabaseInstance_basicMSSQL(t *testing.T) {
 	t.Parallel()
 
-	databaseName := "tf-test-" + acctest.RandString(10)
-	rootPassword := acctest.RandString(15)
+	databaseName := "tf-test-" + randString(t, 10)
+	rootPassword := randString(t, 15)
 
-	resource.Test(t, resource.TestCase{
+	vcrTest(t, resource.TestCase{
 		PreCheck:     func() { testAccPreCheck(t) },
 		Providers:    testAccProviders,
-		CheckDestroy: testAccSqlDatabaseInstanceDestroy,
+		CheckDestroy: testAccSqlDatabaseInstanceDestroyProducer(t),
 		Steps: []resource.TestStep{
 			{
 				Config: fmt.Sprintf(
@@ -221,21 +221,20 @@ func TestAccSqlDatabaseInstance_basicMSSQL(t *testing.T) {
 		},
 	})
 }
-<% end -%>
 
 func TestAccSqlDatabaseInstance_dontDeleteDefaultUserOnReplica(t *testing.T) {
 	t.Parallel()
 
-	databaseName := "sql-instance-test-" + acctest.RandString(10)
-	failoverName := "sql-instance-test-failover-" + acctest.RandString(10)
+	databaseName := "sql-instance-test-" + randString(t, 10)
+	failoverName := "sql-instance-test-failover-" + randString(t, 10)
 	// 1. Create an instance.
 	// 2. Add a root@'%' user.
 	// 3. Create a replica and assert it succeeds (it'll fail if we try to delete the root user thinking it's a
 	//    default user)
-	resource.Test(t, resource.TestCase{
+	vcrTest(t, resource.TestCase{
 		PreCheck:     func() { testAccPreCheck(t) },
 		Providers:    testAccProviders,
-		CheckDestroy: testAccSqlDatabaseInstanceDestroy,
+		CheckDestroy: testAccSqlDatabaseInstanceDestroyProducer(t),
 		Steps: []resource.TestStep{
 			resource.TestStep{
 				Config: testGoogleSqlDatabaseInstanceConfig_withoutReplica(databaseName),
@@ -248,18 +247,18 @@ func TestAccSqlDatabaseInstance_dontDeleteDefaultUserOnReplica(t *testing.T) {
 			resource.TestStep{
 				PreConfig: func() {
 					// Add a root user
-					config := testAccProvider.Meta().(*Config)
+					config := googleProviderConfig(t)
 					user := sqladmin.User{
 						Name:     "root",
 						Host:     "%",
-						Password: acctest.RandString(26),
+						Password: randString(t, 26),
 					}
 					op, err := config.clientSqlAdmin.Users.Insert(config.Project, databaseName, &user).Do()
 					if err != nil {
 						t.Errorf("Error while inserting root@%% user: %s", err)
 						return
 					}
-					err = sqlAdminOperationWait(config, op, config.Project, "Waiting for user to insert")
+					err = sqlAdminOperationWaitTime(config, op, config.Project, "Waiting for user to insert", 10*time.Minute)
 					if err != nil {
 						t.Errorf("Error while waiting for user insert operation to complete: %s", err.Error())
 					}
@@ -274,12 +273,12 @@ func TestAccSqlDatabaseInstance_dontDeleteDefaultUserOnReplica(t *testing.T) {
 func TestAccSqlDatabaseInstance_settings_basic(t *testing.T) {
 	t.Parallel()
 
-	databaseName := "tf-test-" + acctest.RandString(10)
+	databaseName := "tf-test-" + randString(t, 10)
 
-	resource.Test(t, resource.TestCase{
+	vcrTest(t, resource.TestCase{
 		PreCheck:     func() { testAccPreCheck(t) },
 		Providers:    testAccProviders,
-		CheckDestroy: testAccSqlDatabaseInstanceDestroy,
+		CheckDestroy: testAccSqlDatabaseInstanceDestroyProducer(t),
 		Steps: []resource.TestStep{
 			resource.TestStep{
 				Config: fmt.Sprintf(
@@ -297,12 +296,12 @@ func TestAccSqlDatabaseInstance_settings_basic(t *testing.T) {
 func TestAccSqlDatabaseInstance_replica(t *testing.T) {
 	t.Parallel()
 
-	databaseID := acctest.RandInt()
+	databaseID := randInt(t)
 
-	resource.Test(t, resource.TestCase{
+	vcrTest(t, resource.TestCase{
 		PreCheck:     func() { testAccPreCheck(t) },
 		Providers:    testAccProviders,
-		CheckDestroy: testAccSqlDatabaseInstanceDestroy,
+		CheckDestroy: testAccSqlDatabaseInstanceDestroyProducer(t),
 		Steps: []resource.TestStep{
 			resource.TestStep{
 				Config: fmt.Sprintf(
@@ -332,13 +331,13 @@ func TestAccSqlDatabaseInstance_replica(t *testing.T) {
 func TestAccSqlDatabaseInstance_slave(t *testing.T) {
 	t.Parallel()
 
-	masterID := acctest.RandInt()
-	slaveID := acctest.RandInt()
+	masterID := randInt(t)
+	slaveID := randInt(t)
 
-	resource.Test(t, resource.TestCase{
+	vcrTest(t, resource.TestCase{
 		PreCheck:     func() { testAccPreCheck(t) },
 		Providers:    testAccProviders,
-		CheckDestroy: testAccSqlDatabaseInstanceDestroy,
+		CheckDestroy: testAccSqlDatabaseInstanceDestroyProducer(t),
 		Steps: []resource.TestStep{
 			resource.TestStep{
 				Config: fmt.Sprintf(
@@ -361,12 +360,12 @@ func TestAccSqlDatabaseInstance_slave(t *testing.T) {
 func TestAccSqlDatabaseInstance_highAvailability(t *testing.T) {
 	t.Parallel()
 
-	instanceID := acctest.RandInt()
+	instanceID := randInt(t)
 
-	resource.Test(t, resource.TestCase{
+	vcrTest(t, resource.TestCase{
 		PreCheck:     func() { testAccPreCheck(t) },
 		Providers:    testAccProviders,
-		CheckDestroy: testAccSqlDatabaseInstanceDestroy,
+		CheckDestroy: testAccSqlDatabaseInstanceDestroyProducer(t),
 		Steps: []resource.TestStep{
 			resource.TestStep{
 				Config: fmt.Sprintf(
@@ -384,12 +383,12 @@ func TestAccSqlDatabaseInstance_highAvailability(t *testing.T) {
 func TestAccSqlDatabaseInstance_diskspecs(t *testing.T) {
 	t.Parallel()
 
-	masterID := acctest.RandInt()
+	masterID := randInt(t)
 
-	resource.Test(t, resource.TestCase{
+	vcrTest(t, resource.TestCase{
 		PreCheck:     func() { testAccPreCheck(t) },
 		Providers:    testAccProviders,
-		CheckDestroy: testAccSqlDatabaseInstanceDestroy,
+		CheckDestroy: testAccSqlDatabaseInstanceDestroyProducer(t),
 		Steps: []resource.TestStep{
 			resource.TestStep{
 				Config: fmt.Sprintf(
@@ -407,12 +406,12 @@ func TestAccSqlDatabaseInstance_diskspecs(t *testing.T) {
 func TestAccSqlDatabaseInstance_maintenance(t *testing.T) {
 	t.Parallel()
 
-	masterID := acctest.RandInt()
+	masterID := randInt(t)
 
-	resource.Test(t, resource.TestCase{
+	vcrTest(t, resource.TestCase{
 		PreCheck:     func() { testAccPreCheck(t) },
 		Providers:    testAccProviders,
-		CheckDestroy: testAccSqlDatabaseInstanceDestroy,
+		CheckDestroy: testAccSqlDatabaseInstanceDestroyProducer(t),
 		Steps: []resource.TestStep{
 			resource.TestStep{
 				Config: fmt.Sprintf(
@@ -430,12 +429,12 @@ func TestAccSqlDatabaseInstance_maintenance(t *testing.T) {
 func TestAccSqlDatabaseInstance_settings_upgrade(t *testing.T) {
 	t.Parallel()
 
-	databaseName := "tf-test-" + acctest.RandString(10)
+	databaseName := "tf-test-" + randString(t, 10)
 
-	resource.Test(t, resource.TestCase{
+	vcrTest(t, resource.TestCase{
 		PreCheck:     func() { testAccPreCheck(t) },
 		Providers:    testAccProviders,
-		CheckDestroy: testAccSqlDatabaseInstanceDestroy,
+		CheckDestroy: testAccSqlDatabaseInstanceDestroyProducer(t),
 		Steps: []resource.TestStep{
 			resource.TestStep{
 				Config: fmt.Sprintf(
@@ -462,12 +461,12 @@ func TestAccSqlDatabaseInstance_settings_upgrade(t *testing.T) {
 func TestAccSqlDatabaseInstance_settingsDowngrade(t *testing.T) {
 	t.Parallel()
 
-	databaseName := "tf-test-" + acctest.RandString(10)
+	databaseName := "tf-test-" + randString(t, 10)
 
-	resource.Test(t, resource.TestCase{
+	vcrTest(t, resource.TestCase{
 		PreCheck:     func() { testAccPreCheck(t) },
 		Providers:    testAccProviders,
-		CheckDestroy: testAccSqlDatabaseInstanceDestroy,
+		CheckDestroy: testAccSqlDatabaseInstanceDestroyProducer(t),
 		Steps: []resource.TestStep{
 			resource.TestStep{
 				Config: fmt.Sprintf(
@@ -495,12 +494,12 @@ func TestAccSqlDatabaseInstance_settingsDowngrade(t *testing.T) {
 func TestAccSqlDatabaseInstance_authNets(t *testing.T) {
 	t.Parallel()
 
-	databaseID := acctest.RandInt()
+	databaseID := randInt(t)
 
-	resource.Test(t, resource.TestCase{
+	vcrTest(t, resource.TestCase{
 		PreCheck:     func() { testAccPreCheck(t) },
 		Providers:    testAccProviders,
-		CheckDestroy: testAccSqlDatabaseInstanceDestroy,
+		CheckDestroy: testAccSqlDatabaseInstanceDestroyProducer(t),
 		Steps: []resource.TestStep{
 			resource.TestStep{
 				Config: fmt.Sprintf(
@@ -538,12 +537,12 @@ func TestAccSqlDatabaseInstance_authNets(t *testing.T) {
 func TestAccSqlDatabaseInstance_multipleOperations(t *testing.T) {
 	t.Parallel()
 
-	databaseID, instanceID, userID := acctest.RandString(8), acctest.RandString(8), acctest.RandString(8)
+	databaseID, instanceID, userID := randString(t, 8), randString(t, 8), randString(t, 8)
 
-	resource.Test(t, resource.TestCase{
+	vcrTest(t, resource.TestCase{
 		PreCheck:     func() { testAccPreCheck(t) },
 		Providers:    testAccProviders,
-		CheckDestroy: testAccSqlDatabaseInstanceDestroy,
+		CheckDestroy: testAccSqlDatabaseInstanceDestroyProducer(t),
 		Steps: []resource.TestStep{
 			resource.TestStep{
 				Config: fmt.Sprintf(
@@ -561,17 +560,17 @@ func TestAccSqlDatabaseInstance_multipleOperations(t *testing.T) {
 func TestAccSqlDatabaseInstance_basic_with_user_labels(t *testing.T) {
 	t.Parallel()
 
-	databaseName := "tf-test-" + acctest.RandString(10)
+	databaseName := "tf-test-" + randString(t, 10)
 
-	resource.Test(t, resource.TestCase{
+	vcrTest(t, resource.TestCase{
 		PreCheck:     func() { testAccPreCheck(t) },
 		Providers:    testAccProviders,
-		CheckDestroy: testAccSqlDatabaseInstanceDestroy,
+		CheckDestroy: testAccSqlDatabaseInstanceDestroyProducer(t),
 		Steps: []resource.TestStep{
 			resource.TestStep{
 				Config: fmt.Sprintf(
 					testGoogleSqlDatabaseInstance_basic_with_user_labels, databaseName),
-				Check: testAccCheckGoogleSqlDatabaseRootUserDoesNotExist(databaseName),
+				Check: testAccCheckGoogleSqlDatabaseRootUserDoesNotExist(t, databaseName),
 			},
 			resource.TestStep{
 				ResourceName:      "google_sql_database_instance.instance",
@@ -596,14 +595,14 @@ func TestAccSqlDatabaseInstance_basic_with_user_labels(t *testing.T) {
 func TestAccSqlDatabaseInstance_withPrivateNetwork(t *testing.T) {
 	t.Parallel()
 
-	databaseName := "tf-test-" + acctest.RandString(10)
-	addressName := "tf-test-" + acctest.RandString(10)
+	databaseName := "tf-test-" + randString(t, 10)
+	addressName := "tf-test-" + randString(t, 10)
 	networkName := BootstrapSharedTestNetwork(t, "sql-instance-private")
 
-	resource.Test(t, resource.TestCase{
+	vcrTest(t, resource.TestCase{
 		PreCheck:     func() { testAccPreCheck(t) },
 		Providers:    testAccProviders,
-		CheckDestroy: testAccSqlDatabaseInstanceDestroy,
+		CheckDestroy: testAccSqlDatabaseInstanceDestroyProducer(t),
 		Steps: []resource.TestStep{
 			{
 				Config: testAccSqlDatabaseInstance_withPrivateNetwork(databaseName, networkName, addressName),
@@ -618,26 +617,28 @@ func TestAccSqlDatabaseInstance_withPrivateNetwork(t *testing.T) {
 }
 <% end -%>
 
-func testAccSqlDatabaseInstanceDestroy(s *terraform.State) error {
-	for _, rs := range s.RootModule().Resources {
-		config := testAccProvider.Meta().(*Config)
-		if rs.Type != "google_sql_database_instance" {
-			continue
-		}
+func testAccSqlDatabaseInstanceDestroyProducer(t *testing.T) func(s *terraform.State) error {
+	return func(s *terraform.State) error {
+		for _, rs := range s.RootModule().Resources {
+			config := googleProviderConfig(t)
+			if rs.Type != "google_sql_database_instance" {
+				continue
+			}
 
-		_, err := config.clientSqlAdmin.Instances.Get(config.Project,
-			rs.Primary.Attributes["name"]).Do()
-		if err == nil {
-			return fmt.Errorf("Database Instance still exists")
+			_, err := config.clientSqlAdmin.Instances.Get(config.Project,
+				rs.Primary.Attributes["name"]).Do()
+			if err == nil {
+				return fmt.Errorf("Database Instance still exists")
+			}
 		}
-	}
 
-	return nil
+		return nil
+	}
 }
 
-func testAccCheckGoogleSqlDatabaseRootUserDoesNotExist(instance string) resource.TestCheckFunc {
+func testAccCheckGoogleSqlDatabaseRootUserDoesNotExist(t *testing.T, instance string) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		config := testAccProvider.Meta().(*Config)
+		config := googleProviderConfig(t)
 
 		users, err := config.clientSqlAdmin.Users.List(config.Project, instance).Do()
 
@@ -674,7 +675,6 @@ resource "google_sql_database_instance" "instance" {
 }
 `
 
-<% unless version == 'ga' -%>
 var testGoogleSqlDatabaseInstance_basic_mssql = `
 resource "google_sql_database_instance" "instance" {
 	name = "%s"
@@ -685,7 +685,6 @@ resource "google_sql_database_instance" "instance" {
 	}
 }
 `
-<% end -%>
 
 func testGoogleSqlDatabaseInstanceConfig_withoutReplica(instanceName string) string {
 	return fmt.Sprintf(`
diff --git a/third_party/terraform/tests/resource_sql_database_test.go b/third_party/terraform/tests/resource_sql_database_test.go
index 2d446aeb57cb..cb3599aaaa21 100644
--- a/third_party/terraform/tests/resource_sql_database_test.go
+++ b/third_party/terraform/tests/resource_sql_database_test.go
@@ -4,7 +4,6 @@ import (
 	"fmt"
 	"testing"
 
-	"github.com/hashicorp/terraform-plugin-sdk/helper/acctest"
 	"github.com/hashicorp/terraform-plugin-sdk/helper/resource"
 	"github.com/hashicorp/terraform-plugin-sdk/terraform"
 
@@ -17,18 +16,18 @@ func TestAccSqlDatabase_basic(t *testing.T) {
 	var database sqladmin.Database
 
 	resourceName := "google_sql_database.database"
-	instanceName := acctest.RandomWithPrefix("sqldatabasetest")
-	dbName := acctest.RandomWithPrefix("sqldatabasetest")
+	instanceName := fmt.Sprintf("sqldatabasetest-%d", randInt(t))
+	dbName := fmt.Sprintf("sqldatabasetest-%d", randInt(t))
 
-	resource.Test(t, resource.TestCase{
+	vcrTest(t, resource.TestCase{
 		PreCheck:     func() { testAccPreCheck(t) },
 		Providers:    testAccProviders,
-		CheckDestroy: testAccSqlDatabaseDestroy,
+		CheckDestroy: testAccSqlDatabaseDestroyProducer(t),
 		Steps: []resource.TestStep{
 			{
 				Config: fmt.Sprintf(testGoogleSqlDatabase_basic, instanceName, dbName),
 				Check: resource.ComposeTestCheckFunc(
-					testAccCheckGoogleSqlDatabaseExists(resourceName, &database),
+					testAccCheckGoogleSqlDatabaseExists(t, resourceName, &database),
 					testAccCheckGoogleSqlDatabaseEquals(resourceName, &database),
 				),
 			},
@@ -71,20 +70,20 @@ func TestAccSqlDatabase_update(t *testing.T) {
 
 	var database sqladmin.Database
 
-	instance_name := acctest.RandomWithPrefix("sqldatabasetest")
-	database_name := acctest.RandomWithPrefix("sqldatabasetest")
+	instance_name := fmt.Sprintf("sqldatabasetest-%d", randInt(t))
+	database_name := fmt.Sprintf("sqldatabasetest-%d", randInt(t))
 
-	resource.Test(t, resource.TestCase{
+	vcrTest(t, resource.TestCase{
 		PreCheck:     func() { testAccPreCheck(t) },
 		Providers:    testAccProviders,
-		CheckDestroy: testAccSqlDatabaseDestroy,
+		CheckDestroy: testAccSqlDatabaseDestroyProducer(t),
 		Steps: []resource.TestStep{
 			{
 				Config: fmt.Sprintf(
 					testGoogleSqlDatabase_basic, instance_name, database_name),
 				Check: resource.ComposeTestCheckFunc(
 					testAccCheckGoogleSqlDatabaseExists(
-						"google_sql_database.database", &database),
+						t, "google_sql_database.database", &database),
 					testAccCheckGoogleSqlDatabaseEquals(
 						"google_sql_database.database", &database),
 				),
@@ -94,7 +93,7 @@ func TestAccSqlDatabase_update(t *testing.T) {
 					testGoogleSqlDatabase_latin1, instance_name, database_name),
 				Check: resource.ComposeTestCheckFunc(
 					testAccCheckGoogleSqlDatabaseExists(
-						"google_sql_database.database", &database),
+						t, "google_sql_database.database", &database),
 					testAccCheckGoogleSqlDatabaseEquals(
 						"google_sql_database.database", &database),
 				),
@@ -103,8 +102,7 @@ func TestAccSqlDatabase_update(t *testing.T) {
 	})
 }
 
-func testAccCheckGoogleSqlDatabaseEquals(n string,
-	database *sqladmin.Database) resource.TestCheckFunc {
+func testAccCheckGoogleSqlDatabaseEquals(n string, database *sqladmin.Database) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
 		rs, ok := s.RootModule().Resources[n]
 		if !ok {
@@ -136,10 +134,9 @@ func testAccCheckGoogleSqlDatabaseEquals(n string,
 	}
 }
 
-func testAccCheckGoogleSqlDatabaseExists(n string,
-	database *sqladmin.Database) resource.TestCheckFunc {
+func testAccCheckGoogleSqlDatabaseExists(t *testing.T, n string, database *sqladmin.Database) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		config := testAccProvider.Meta().(*Config)
+		config := googleProviderConfig(t)
 		rs, ok := s.RootModule().Resources[n]
 		if !ok {
 			return fmt.Errorf("Resource not found: %s", n)
@@ -160,24 +157,26 @@ func testAccCheckGoogleSqlDatabaseExists(n string,
 	}
 }
 
-func testAccSqlDatabaseDestroy(s *terraform.State) error {
-	for _, rs := range s.RootModule().Resources {
-		config := testAccProvider.Meta().(*Config)
-		if rs.Type != "google_sql_database" {
-			continue
+func testAccSqlDatabaseDestroyProducer(t *testing.T) func(s *terraform.State) error {
+	return func(s *terraform.State) error {
+		for _, rs := range s.RootModule().Resources {
+			config := googleProviderConfig(t)
+			if rs.Type != "google_sql_database" {
+				continue
+			}
+
+			database_name := rs.Primary.Attributes["name"]
+			instance_name := rs.Primary.Attributes["instance"]
+			_, err := config.clientSqlAdmin.Databases.Get(config.Project,
+				instance_name, database_name).Do()
+
+			if err == nil {
+				return fmt.Errorf("Database resource still exists")
+			}
 		}
 
-		database_name := rs.Primary.Attributes["name"]
-		instance_name := rs.Primary.Attributes["instance"]
-		_, err := config.clientSqlAdmin.Databases.Get(config.Project,
-			instance_name, database_name).Do()
-
-		if err == nil {
-			return fmt.Errorf("Database resource still exists")
-		}
+		return nil
 	}
-
-	return nil
 }
 
 var testGoogleSqlDatabase_basic = `
diff --git a/third_party/terraform/tests/resource_sql_user_test.go b/third_party/terraform/tests/resource_sql_user_test.go
index 2c008d52f186..ed7ecdec9265 100644
--- a/third_party/terraform/tests/resource_sql_user_test.go
+++ b/third_party/terraform/tests/resource_sql_user_test.go
@@ -4,33 +4,34 @@ import (
 	"fmt"
 	"testing"
 
-	"github.com/hashicorp/terraform-plugin-sdk/helper/acctest"
 	"github.com/hashicorp/terraform-plugin-sdk/helper/resource"
 	"github.com/hashicorp/terraform-plugin-sdk/terraform"
 )
 
 func TestAccSqlUser_mysql(t *testing.T) {
+	// Multiple fine-grained resources
+	skipIfVcr(t)
 	t.Parallel()
 
-	instance := acctest.RandomWithPrefix("i")
-	resource.Test(t, resource.TestCase{
+	instance := fmt.Sprintf("i-%d", randInt(t))
+	vcrTest(t, resource.TestCase{
 		PreCheck:     func() { testAccPreCheck(t) },
 		Providers:    testAccProviders,
-		CheckDestroy: testAccSqlUserDestroy,
+		CheckDestroy: testAccSqlUserDestroyProducer(t),
 		Steps: []resource.TestStep{
 			{
 				Config: testGoogleSqlUser_mysql(instance, "password"),
 				Check: resource.ComposeTestCheckFunc(
-					testAccCheckGoogleSqlUserExists("google_sql_user.user1"),
-					testAccCheckGoogleSqlUserExists("google_sql_user.user2"),
+					testAccCheckGoogleSqlUserExists(t, "google_sql_user.user1"),
+					testAccCheckGoogleSqlUserExists(t, "google_sql_user.user2"),
 				),
 			},
 			{
 				// Update password
 				Config: testGoogleSqlUser_mysql(instance, "new_password"),
 				Check: resource.ComposeTestCheckFunc(
-					testAccCheckGoogleSqlUserExists("google_sql_user.user1"),
-					testAccCheckGoogleSqlUserExists("google_sql_user.user2"),
+					testAccCheckGoogleSqlUserExists(t, "google_sql_user.user1"),
+					testAccCheckGoogleSqlUserExists(t, "google_sql_user.user2"),
 				),
 			},
 			{
@@ -47,23 +48,23 @@ func TestAccSqlUser_mysql(t *testing.T) {
 func TestAccSqlUser_postgres(t *testing.T) {
 	t.Parallel()
 
-	instance := acctest.RandomWithPrefix("i")
-	resource.Test(t, resource.TestCase{
+	instance := fmt.Sprintf("i-%d", randInt(t))
+	vcrTest(t, resource.TestCase{
 		PreCheck:     func() { testAccPreCheck(t) },
 		Providers:    testAccProviders,
-		CheckDestroy: testAccSqlUserDestroy,
+		CheckDestroy: testAccSqlUserDestroyProducer(t),
 		Steps: []resource.TestStep{
 			{
 				Config: testGoogleSqlUser_postgres(instance, "password"),
 				Check: resource.ComposeTestCheckFunc(
testAccCheckGoogleSqlUserExists("google_sql_user.user"),
+					testAccCheckGoogleSqlUserExists(t, "google_sql_user.user"),
 				),
 			},
 			{
 				// Update password
 				Config: testGoogleSqlUser_postgres(instance, "new_password"),
 				Check: resource.ComposeTestCheckFunc(
-					testAccCheckGoogleSqlUserExists("google_sql_user.user"),
+					testAccCheckGoogleSqlUserExists(t, "google_sql_user.user"),
 				),
 			},
 			{
@@ -77,9 +78,9 @@ func TestAccSqlUser_postgres(t *testing.T) {
 	})
 }
 
-func testAccCheckGoogleSqlUserExists(n string) resource.TestCheckFunc {
+func testAccCheckGoogleSqlUserExists(t *testing.T, n string) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		config := testAccProvider.Meta().(*Config)
+		config := googleProviderConfig(t)
 		rs, ok := s.RootModule().Resources[n]
 		if !ok {
 			return fmt.Errorf("Resource not found: %s", n)
@@ -105,29 +106,34 @@ func testAccCheckGoogleSqlUserExists(n string) resource.TestCheckFunc {
 	}
 }
 
-func testAccSqlUserDestroy(s *terraform.State) error {
-	for _, rs := range s.RootModule().Resources {
-		config := testAccProvider.Meta().(*Config)
-		if rs.Type != "google_sql_database" {
-			continue
-		}
+func testAccSqlUserDestroyProducer(t *testing.T) func(s *terraform.State) error {
+	return func(s *terraform.State) error {
+		for _, rs := range s.RootModule().Resources {
+			config := googleProviderConfig(t)
+			// This destroy check is for SQL users, so filter on google_sql_user.
+			if rs.Type != "google_sql_user" {
+				continue
+			}
 
-		name := rs.Primary.Attributes["name"]
-		instance := rs.Primary.Attributes["instance"]
-		host := rs.Primary.Attributes["host"]
-		users, err := config.clientSqlAdmin.Users.List(config.Project,
-			instance).Do()
+			name := rs.Primary.Attributes["name"]
+			instance := rs.Primary.Attributes["instance"]
+			host := rs.Primary.Attributes["host"]
+			users, err := config.clientSqlAdmin.Users.List(config.Project,
+				instance).Do()
+			if err != nil {
+				// Listing fails once the instance is gone; treat the user as destroyed.
+				continue
+			}
 
-		for _, user := range users.Items {
-			if user.Name == name && user.Host == host {
-				return fmt.Errorf("User still %s exists %s", name, err)
+			for _, user := range users.Items {
+				if user.Name == name && user.Host == host {
+					return fmt.Errorf("User %s still exists", name)
+				}
 			}
 		}
 
 		return nil
 	}
-
-	return nil
 }
 
 func testGoogleSqlUser_mysql(instance, password string) string {
diff --git a/third_party/terraform/tests/resource_storage_bucket_access_control_test.go b/third_party/terraform/tests/resource_storage_bucket_access_control_test.go
index b7619cfc9631..3176c9c5c24c 100644
--- a/third_party/terraform/tests/resource_storage_bucket_access_control_test.go
+++ b/third_party/terraform/tests/resource_storage_bucket_access_control_test.go
@@ -10,8 +10,8 @@ import (
 func TestAccStorageBucketAccessControl_update(t *testing.T) {
 	t.Parallel()
 
-	bucketName := testBucketName()
-	resource.Test(t, resource.TestCase{
+	bucketName := testBucketName(t)
+	vcrTest(t, resource.TestCase{
 		PreCheck: func() {
 			if errObjectAcl != nil {
 				panic(errObjectAcl)
@@ -19,7 +19,7 @@ func TestAccStorageBucketAccessControl_update(t *testing.T) {
 			testAccPreCheck(t)
 		},
 		Providers:    testAccProviders,
-		CheckDestroy: testAccCheckStorageObjectAccessControlDestroy,
+		CheckDestroy: testAccCheckStorageObjectAccessControlDestroyProducer(t),
 		Steps: []resource.TestStep{
 			{
 				Config: testGoogleStorageBucketAccessControlBasic(bucketName, "READER", "allUsers"),
diff --git a/third_party/terraform/tests/resource_storage_bucket_acl_test.go b/third_party/terraform/tests/resource_storage_bucket_acl_test.go
index f007322e2f6b..8b5d412324e0 100644
--- a/third_party/terraform/tests/resource_storage_bucket_acl_test.go
+++ 
b/third_party/terraform/tests/resource_storage_bucket_acl_test.go @@ -23,18 +23,18 @@ var ( func TestAccStorageBucketAcl_basic(t *testing.T) { t.Parallel() - bucketName := testBucketName() + bucketName := testBucketName(t) skipIfEnvNotSet(t, "GOOGLE_PROJECT_NUMBER") - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccStorageBucketAclDestroy, + CheckDestroy: testAccStorageBucketAclDestroyProducer(t), Steps: []resource.TestStep{ { Config: testGoogleStorageBucketsAclBasic1(bucketName), Check: resource.ComposeTestCheckFunc( - testAccCheckGoogleStorageBucketAcl(bucketName, roleEntityBasic1), - testAccCheckGoogleStorageBucketAcl(bucketName, roleEntityBasic2), + testAccCheckGoogleStorageBucketAcl(t, bucketName, roleEntityBasic1), + testAccCheckGoogleStorageBucketAcl(t, bucketName, roleEntityBasic2), ), }, }, @@ -44,35 +44,35 @@ func TestAccStorageBucketAcl_basic(t *testing.T) { func TestAccStorageBucketAcl_upgrade(t *testing.T) { t.Parallel() - bucketName := testBucketName() + bucketName := testBucketName(t) skipIfEnvNotSet(t, "GOOGLE_PROJECT_NUMBER") - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccStorageBucketAclDestroy, + CheckDestroy: testAccStorageBucketAclDestroyProducer(t), Steps: []resource.TestStep{ { Config: testGoogleStorageBucketsAclBasic1(bucketName), Check: resource.ComposeTestCheckFunc( - testAccCheckGoogleStorageBucketAcl(bucketName, roleEntityBasic1), - testAccCheckGoogleStorageBucketAcl(bucketName, roleEntityBasic2), + testAccCheckGoogleStorageBucketAcl(t, bucketName, roleEntityBasic1), + testAccCheckGoogleStorageBucketAcl(t, bucketName, roleEntityBasic2), ), }, { Config: testGoogleStorageBucketsAclBasic2(bucketName), Check: resource.ComposeTestCheckFunc( - testAccCheckGoogleStorageBucketAcl(bucketName, roleEntityBasic2), - testAccCheckGoogleStorageBucketAcl(bucketName, roleEntityBasic3_owner), + testAccCheckGoogleStorageBucketAcl(t, bucketName, roleEntityBasic2), + testAccCheckGoogleStorageBucketAcl(t, bucketName, roleEntityBasic3_owner), ), }, { Config: testGoogleStorageBucketsAclBasicDelete(bucketName), Check: resource.ComposeTestCheckFunc( - testAccCheckGoogleStorageBucketAclDelete(bucketName, roleEntityBasic1), - testAccCheckGoogleStorageBucketAclDelete(bucketName, roleEntityBasic2), - testAccCheckGoogleStorageBucketAclDelete(bucketName, roleEntityBasic3_owner), + testAccCheckGoogleStorageBucketAclDelete(t, bucketName, roleEntityBasic1), + testAccCheckGoogleStorageBucketAclDelete(t, bucketName, roleEntityBasic2), + testAccCheckGoogleStorageBucketAclDelete(t, bucketName, roleEntityBasic3_owner), ), }, }, @@ -82,35 +82,35 @@ func TestAccStorageBucketAcl_upgrade(t *testing.T) { func TestAccStorageBucketAcl_downgrade(t *testing.T) { t.Parallel() - bucketName := testBucketName() + bucketName := testBucketName(t) skipIfEnvNotSet(t, "GOOGLE_PROJECT_NUMBER") - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccStorageBucketAclDestroy, + CheckDestroy: testAccStorageBucketAclDestroyProducer(t), Steps: []resource.TestStep{ { Config: testGoogleStorageBucketsAclBasic2(bucketName), Check: resource.ComposeTestCheckFunc( - testAccCheckGoogleStorageBucketAcl(bucketName, roleEntityBasic2), - testAccCheckGoogleStorageBucketAcl(bucketName, roleEntityBasic3_owner), 
+ testAccCheckGoogleStorageBucketAcl(t, bucketName, roleEntityBasic2), + testAccCheckGoogleStorageBucketAcl(t, bucketName, roleEntityBasic3_owner), ), }, { Config: testGoogleStorageBucketsAclBasic3(bucketName), Check: resource.ComposeTestCheckFunc( - testAccCheckGoogleStorageBucketAcl(bucketName, roleEntityBasic2), - testAccCheckGoogleStorageBucketAcl(bucketName, roleEntityBasic3_reader), + testAccCheckGoogleStorageBucketAcl(t, bucketName, roleEntityBasic2), + testAccCheckGoogleStorageBucketAcl(t, bucketName, roleEntityBasic3_reader), ), }, { Config: testGoogleStorageBucketsAclBasicDelete(bucketName), Check: resource.ComposeTestCheckFunc( - testAccCheckGoogleStorageBucketAclDelete(bucketName, roleEntityBasic1), - testAccCheckGoogleStorageBucketAclDelete(bucketName, roleEntityBasic2), - testAccCheckGoogleStorageBucketAclDelete(bucketName, roleEntityBasic3_owner), + testAccCheckGoogleStorageBucketAclDelete(t, bucketName, roleEntityBasic1), + testAccCheckGoogleStorageBucketAclDelete(t, bucketName, roleEntityBasic2), + testAccCheckGoogleStorageBucketAclDelete(t, bucketName, roleEntityBasic3_owner), ), }, }, @@ -120,11 +120,11 @@ func TestAccStorageBucketAcl_downgrade(t *testing.T) { func TestAccStorageBucketAcl_predefined(t *testing.T) { t.Parallel() - bucketName := testBucketName() - resource.Test(t, resource.TestCase{ + bucketName := testBucketName(t) + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccStorageBucketAclDestroy, + CheckDestroy: testAccStorageBucketAclDestroyProducer(t), Steps: []resource.TestStep{ { Config: testGoogleStorageBucketsAclPredefined(bucketName), @@ -137,12 +137,12 @@ func TestAccStorageBucketAcl_predefined(t *testing.T) { func TestAccStorageBucketAcl_unordered(t *testing.T) { t.Parallel() - bucketName := testBucketName() + bucketName := testBucketName(t) skipIfEnvNotSet(t, "GOOGLE_PROJECT_NUMBER") - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccStorageBucketAclDestroy, + CheckDestroy: testAccStorageBucketAclDestroyProducer(t), Steps: []resource.TestStep{ { Config: testGoogleStorageBucketsAclUnordered(bucketName), @@ -155,11 +155,11 @@ func TestAccStorageBucketAcl_unordered(t *testing.T) { func TestAccStorageBucketAcl_RemoveOwner(t *testing.T) { t.Parallel() - bucketName := testBucketName() - resource.Test(t, resource.TestCase{ + bucketName := testBucketName(t) + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccStorageBucketAclDestroy, + CheckDestroy: testAccStorageBucketAclDestroyProducer(t), Steps: []resource.TestStep{ { Config: testGoogleStorageBucketsAclRemoveOwner(bucketName), @@ -168,10 +168,10 @@ func TestAccStorageBucketAcl_RemoveOwner(t *testing.T) { }) } -func testAccCheckGoogleStorageBucketAclDelete(bucket, roleEntityS string) resource.TestCheckFunc { +func testAccCheckGoogleStorageBucketAclDelete(t *testing.T, bucket, roleEntityS string) resource.TestCheckFunc { return func(s *terraform.State) error { roleEntity, _ := getRoleEntityPair(roleEntityS) - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) _, err := config.clientStorage.BucketAccessControls.Get(bucket, roleEntity.Entity).Do() @@ -183,10 +183,10 @@ func testAccCheckGoogleStorageBucketAclDelete(bucket, roleEntityS string) resour } } -func testAccCheckGoogleStorageBucketAcl(bucket, roleEntityS string) 
resource.TestCheckFunc { +func testAccCheckGoogleStorageBucketAcl(t *testing.T, bucket, roleEntityS string) resource.TestCheckFunc { return func(s *terraform.State) error { roleEntity, _ := getRoleEntityPair(roleEntityS) - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) res, err := config.clientStorage.BucketAccessControls.Get(bucket, roleEntity.Entity).Do() @@ -202,24 +202,26 @@ func testAccCheckGoogleStorageBucketAcl(bucket, roleEntityS string) resource.Tes } } -func testAccStorageBucketAclDestroy(s *terraform.State) error { - config := testAccProvider.Meta().(*Config) +func testAccStorageBucketAclDestroyProducer(t *testing.T) func(s *terraform.State) error { + return func(s *terraform.State) error { + config := googleProviderConfig(t) - for _, rs := range s.RootModule().Resources { - if rs.Type != "google_storage_bucket_acl" { - continue - } + for _, rs := range s.RootModule().Resources { + if rs.Type != "google_storage_bucket_acl" { + continue + } - bucket := rs.Primary.Attributes["bucket"] + bucket := rs.Primary.Attributes["bucket"] - _, err := config.clientStorage.BucketAccessControls.List(bucket).Do() + _, err := config.clientStorage.BucketAccessControls.List(bucket).Do() - if err == nil { - return fmt.Errorf("Acl for bucket %s still exists", bucket) + if err == nil { + return fmt.Errorf("Acl for bucket %s still exists", bucket) + } } - } - return nil + return nil + } } func testGoogleStorageBucketsAclBasic1(bucketName string) string { diff --git a/third_party/terraform/tests/resource_storage_bucket_iam_test.go b/third_party/terraform/tests/resource_storage_bucket_iam_test.go index 42753c1bfded..fcce5cc0d7a8 100644 --- a/third_party/terraform/tests/resource_storage_bucket_iam_test.go +++ b/third_party/terraform/tests/resource_storage_bucket_iam_test.go @@ -4,18 +4,17 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" ) func TestAccStorageBucketIamPolicy(t *testing.T) { t.Parallel() - bucket := acctest.RandomWithPrefix("tf-test") - account := acctest.RandomWithPrefix("tf-test") serviceAcct := getTestServiceAccountFromEnv(t) + bucket := fmt.Sprintf("tf-test-%d", randInt(t)) + account := fmt.Sprintf("tf-test-%d", randInt(t)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ diff --git a/third_party/terraform/tests/resource_storage_bucket_object_test.go b/third_party/terraform/tests/resource_storage_bucket_object_test.go index 9d20c0385c7e..fe43788337d3 100644 --- a/third_party/terraform/tests/resource_storage_bucket_object_test.go +++ b/third_party/terraform/tests/resource_storage_bucket_object_test.go @@ -23,7 +23,7 @@ const ( func TestAccStorageObject_basic(t *testing.T) { t.Parallel() - bucketName := testBucketName() + bucketName := testBucketName(t) data := []byte("data data data") h := md5.New() if _, err := h.Write(data); err != nil { @@ -35,14 +35,14 @@ func TestAccStorageObject_basic(t *testing.T) { if err := ioutil.WriteFile(testFile.Name(), data, 0644); err != nil { t.Errorf("error writing file: %v", err) } - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccStorageObjectDestroy, + CheckDestroy: testAccStorageObjectDestroyProducer(t), Steps: []resource.TestStep{ { Config: testGoogleStorageBucketsObjectBasic(bucketName, 
testFile.Name()), - Check: testAccCheckGoogleStorageObject(bucketName, objectName, data_md5), + Check: testAccCheckGoogleStorageObject(t, bucketName, objectName, data_md5), }, }, }) @@ -51,7 +51,7 @@ func TestAccStorageObject_basic(t *testing.T) { func TestAccStorageObject_recreate(t *testing.T) { t.Parallel() - bucketName := testBucketName() + bucketName := testBucketName(t) writeFile := func(name string, data []byte) string { h := md5.New() @@ -70,14 +70,14 @@ func TestAccStorageObject_recreate(t *testing.T) { updatedName := testFile.Name() + ".update" updated_data_md5 := writeFile(updatedName, []byte("datum")) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccStorageObjectDestroy, + CheckDestroy: testAccStorageObjectDestroyProducer(t), Steps: []resource.TestStep{ { Config: testGoogleStorageBucketsObjectBasic(bucketName, testFile.Name()), - Check: testAccCheckGoogleStorageObject(bucketName, objectName, data_md5), + Check: testAccCheckGoogleStorageObject(t, bucketName, objectName, data_md5), }, { PreConfig: func() { @@ -87,7 +87,7 @@ func TestAccStorageObject_recreate(t *testing.T) { } }, Config: testGoogleStorageBucketsObjectBasic(bucketName, testFile.Name()), - Check: testAccCheckGoogleStorageObject(bucketName, objectName, updated_data_md5), + Check: testAccCheckGoogleStorageObject(t, bucketName, objectName, updated_data_md5), }, }, }) @@ -96,7 +96,7 @@ func TestAccStorageObject_recreate(t *testing.T) { func TestAccStorageObject_content(t *testing.T) { t.Parallel() - bucketName := testBucketName() + bucketName := testBucketName(t) data := []byte(content) h := md5.New() if _, err := h.Write(data); err != nil { @@ -108,15 +108,15 @@ func TestAccStorageObject_content(t *testing.T) { if err := ioutil.WriteFile(testFile.Name(), data, 0644); err != nil { t.Errorf("error writing file: %v", err) } - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccStorageObjectDestroy, + CheckDestroy: testAccStorageObjectDestroyProducer(t), Steps: []resource.TestStep{ { Config: testGoogleStorageBucketsObjectContent(bucketName), Check: resource.ComposeTestCheckFunc( - testAccCheckGoogleStorageObject(bucketName, objectName, data_md5), + testAccCheckGoogleStorageObject(t, bucketName, objectName, data_md5), resource.TestCheckResourceAttr( "google_storage_bucket_object.object", "content_type", "text/plain; charset=utf-8"), resource.TestCheckResourceAttr( @@ -130,7 +130,7 @@ func TestAccStorageObject_content(t *testing.T) { func TestAccStorageObject_withContentCharacteristics(t *testing.T) { t.Parallel() - bucketName := testBucketName() + bucketName := testBucketName(t) data := []byte(content) h := md5.New() if _, err := h.Write(data); err != nil { @@ -143,16 +143,16 @@ func TestAccStorageObject_withContentCharacteristics(t *testing.T) { } disposition, encoding, language, content_type := "inline", "compress", "en", "binary/octet-stream" - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccStorageObjectDestroy, + CheckDestroy: testAccStorageObjectDestroyProducer(t), Steps: []resource.TestStep{ { Config: testGoogleStorageBucketsObject_optionalContentFields( bucketName, disposition, encoding, language, content_type), Check: resource.ComposeTestCheckFunc( - 
testAccCheckGoogleStorageObject(bucketName, objectName, data_md5), + testAccCheckGoogleStorageObject(t, bucketName, objectName, data_md5), resource.TestCheckResourceAttr( "google_storage_bucket_object.object", "content_disposition", disposition), resource.TestCheckResourceAttr( @@ -170,13 +170,13 @@ func TestAccStorageObject_withContentCharacteristics(t *testing.T) { func TestAccStorageObject_dynamicContent(t *testing.T) { t.Parallel() - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccStorageObjectDestroy, + CheckDestroy: testAccStorageObjectDestroyProducer(t), Steps: []resource.TestStep{ { - Config: testGoogleStorageBucketsObjectDynamicContent(testBucketName()), + Config: testGoogleStorageBucketsObjectDynamicContent(testBucketName(t)), Check: resource.ComposeTestCheckFunc( resource.TestCheckResourceAttr( "google_storage_bucket_object.object", "content_type", "text/plain; charset=utf-8"), @@ -191,7 +191,7 @@ func TestAccStorageObject_dynamicContent(t *testing.T) { func TestAccStorageObject_cacheControl(t *testing.T) { t.Parallel() - bucketName := testBucketName() + bucketName := testBucketName(t) data := []byte(content) h := md5.New() if _, err := h.Write(data); err != nil { @@ -204,15 +204,15 @@ func TestAccStorageObject_cacheControl(t *testing.T) { } cacheControl := "private" - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccStorageObjectDestroy, + CheckDestroy: testAccStorageObjectDestroyProducer(t), Steps: []resource.TestStep{ { Config: testGoogleStorageBucketsObject_cacheControl(bucketName, testFile.Name(), cacheControl), Check: resource.ComposeTestCheckFunc( - testAccCheckGoogleStorageObject(bucketName, objectName, data_md5), + testAccCheckGoogleStorageObject(t, bucketName, objectName, data_md5), resource.TestCheckResourceAttr( "google_storage_bucket_object.object", "cache_control", cacheControl), ), @@ -224,7 +224,7 @@ func TestAccStorageObject_cacheControl(t *testing.T) { func TestAccStorageObject_storageClass(t *testing.T) { t.Parallel() - bucketName := testBucketName() + bucketName := testBucketName(t) data := []byte(content) h := md5.New() if _, err := h.Write(data); err != nil { @@ -237,15 +237,15 @@ func TestAccStorageObject_storageClass(t *testing.T) { } storageClass := "MULTI_REGIONAL" - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccStorageObjectDestroy, + CheckDestroy: testAccStorageObjectDestroyProducer(t), Steps: []resource.TestStep{ { Config: testGoogleStorageBucketsObject_storageClass(bucketName, storageClass), Check: resource.ComposeTestCheckFunc( - testAccCheckGoogleStorageObject(bucketName, objectName, data_md5), + testAccCheckGoogleStorageObject(t, bucketName, objectName, data_md5), resource.TestCheckResourceAttr( "google_storage_bucket_object.object", "storage_class", storageClass), ), @@ -257,7 +257,7 @@ func TestAccStorageObject_storageClass(t *testing.T) { func TestAccStorageObject_metadata(t *testing.T) { t.Parallel() - bucketName := testBucketName() + bucketName := testBucketName(t) data := []byte(content) h := md5.New() if _, err := h.Write(data); err != nil { @@ -269,15 +269,15 @@ func TestAccStorageObject_metadata(t *testing.T) { t.Errorf("error writing file: %v", err) } - resource.Test(t, resource.TestCase{ + vcrTest(t, 
resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccStorageObjectDestroy, + CheckDestroy: testAccStorageObjectDestroyProducer(t), Steps: []resource.TestStep{ { Config: testGoogleStorageBucketsObject_metadata(bucketName), Check: resource.ComposeTestCheckFunc( - testAccCheckGoogleStorageObject(bucketName, objectName, data_md5), + testAccCheckGoogleStorageObject(t, bucketName, objectName, data_md5), resource.TestCheckResourceAttr( "google_storage_bucket_object.object", "metadata.customKey", "custom_value"), ), @@ -286,9 +286,9 @@ func TestAccStorageObject_metadata(t *testing.T) { }) } -func testAccCheckGoogleStorageObject(bucket, object, md5 string) resource.TestCheckFunc { +func testAccCheckGoogleStorageObject(t *testing.T, bucket, object, md5 string) resource.TestCheckFunc { return func(s *terraform.State) error { - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) objectsService := storage.NewObjectsService(config.clientStorage) @@ -307,28 +307,30 @@ func testAccCheckGoogleStorageObject(bucket, object, md5 string) resource.TestCh } } -func testAccStorageObjectDestroy(s *terraform.State) error { - config := testAccProvider.Meta().(*Config) +func testAccStorageObjectDestroyProducer(t *testing.T) func(s *terraform.State) error { + return func(s *terraform.State) error { + config := googleProviderConfig(t) - for _, rs := range s.RootModule().Resources { - if rs.Type != "google_storage_bucket_object" { - continue - } + for _, rs := range s.RootModule().Resources { + if rs.Type != "google_storage_bucket_object" { + continue + } - bucket := rs.Primary.Attributes["bucket"] - name := rs.Primary.Attributes["name"] + bucket := rs.Primary.Attributes["bucket"] + name := rs.Primary.Attributes["name"] - objectsService := storage.NewObjectsService(config.clientStorage) + objectsService := storage.NewObjectsService(config.clientStorage) - getCall := objectsService.Get(bucket, name) - _, err := getCall.Do() + getCall := objectsService.Get(bucket, name) + _, err := getCall.Do() - if err == nil { - return fmt.Errorf("Object %s still exists", name) + if err == nil { + return fmt.Errorf("Object %s still exists", name) + } } - } - return nil + return nil + } } func testGoogleStorageBucketsObjectContent(bucketName string) string { diff --git a/third_party/terraform/tests/resource_storage_bucket_test.go b/third_party/terraform/tests/resource_storage_bucket_test.go index 21402d08a1c7..74ad514dfe34 100644 --- a/third_party/terraform/tests/resource_storage_bucket_test.go +++ b/third_party/terraform/tests/resource_storage_bucket_test.go @@ -8,7 +8,6 @@ import ( "testing" "time" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" @@ -16,19 +15,19 @@ import ( "google.golang.org/api/storage/v1" ) -func testBucketName() string { - return fmt.Sprintf("%s-%d", "tf-test-bucket", acctest.RandInt()) +func testBucketName(t *testing.T) string { + return fmt.Sprintf("%s-%d", "tf-test-bucket", randInt(t)) } func TestAccStorageBucket_basic(t *testing.T) { t.Parallel() - bucketName := testBucketName() + bucketName := testBucketName(t) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccStorageBucketDestroy, + CheckDestroy: testAccStorageBucketDestroyProducer(t), Steps: []resource.TestStep{ { Config: 
testAccStorageBucket_basic(bucketName), @@ -55,12 +54,12 @@ func TestAccStorageBucket_basic(t *testing.T) { func TestAccStorageBucket_requesterPays(t *testing.T) { t.Parallel() - bucketName := fmt.Sprintf("tf-test-requester-bucket-%d", acctest.RandInt()) + bucketName := fmt.Sprintf("tf-test-requester-bucket-%d", randInt(t)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccStorageBucketDestroy, + CheckDestroy: testAccStorageBucketDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccStorageBucket_requesterPays(bucketName, true), @@ -81,12 +80,12 @@ func TestAccStorageBucket_requesterPays(t *testing.T) { func TestAccStorageBucket_lowercaseLocation(t *testing.T) { t.Parallel() - bucketName := testBucketName() + bucketName := testBucketName(t) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccStorageBucketDestroy, + CheckDestroy: testAccStorageBucketDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccStorageBucket_lowercaseLocation(bucketName), @@ -103,12 +102,12 @@ func TestAccStorageBucket_lowercaseLocation(t *testing.T) { func TestAccStorageBucket_customAttributes(t *testing.T) { t.Parallel() - bucketName := testBucketName() + bucketName := testBucketName(t) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccStorageBucketDestroy, + CheckDestroy: testAccStorageBucketDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccStorageBucket_customAttributes(bucketName), @@ -130,11 +129,11 @@ func TestAccStorageBucket_customAttributes(t *testing.T) { func TestAccStorageBucket_lifecycleRulesMultiple(t *testing.T) { t.Parallel() - bucketName := fmt.Sprintf("tf-test-acc-bucket-%d", acctest.RandInt()) - resource.Test(t, resource.TestCase{ + bucketName := fmt.Sprintf("tf-test-acc-bucket-%d", randInt(t)) + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccStorageBucketDestroy, + CheckDestroy: testAccStorageBucketDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccStorageBucket_lifecycleRulesMultiple(bucketName), @@ -152,28 +151,19 @@ func TestAccStorageBucket_lifecycleRuleStateLive(t *testing.T) { t.Parallel() var bucket storage.Bucket - bucketName := fmt.Sprintf("tf-test-acc-bucket-%d", acctest.RandInt()) - hashK := resourceGCSBucketLifecycleRuleConditionHash(map[string]interface{}{ - "age": 10, - "with_state": "LIVE", - "num_newer_versions": 0, - "created_before": "", - }) - attrPrefix := fmt.Sprintf("lifecycle_rule.0.condition.%d.", hashK) + bucketName := fmt.Sprintf("tf-test-acc-bucket-%d", randInt(t)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccStorageBucketDestroy, + CheckDestroy: testAccStorageBucketDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccStorageBucket_lifecycleRule_withStateLive(bucketName), Check: resource.ComposeTestCheckFunc( testAccCheckStorageBucketExists( - "google_storage_bucket.bucket", bucketName, &bucket), + t, "google_storage_bucket.bucket", bucketName, &bucket), testAccCheckStorageBucketLifecycleConditionState(googleapi.Bool(true), &bucket), - resource.TestCheckResourceAttr( - 
"google_storage_bucket.bucket", attrPrefix+"with_state", "LIVE"), ), }, { @@ -189,25 +179,18 @@ func TestAccStorageBucket_lifecycleRuleStateArchived(t *testing.T) { t.Parallel() var bucket storage.Bucket - bucketName := fmt.Sprintf("tf-test-acc-bucket-%d", acctest.RandInt()) - hashK := resourceGCSBucketLifecycleRuleConditionHash(map[string]interface{}{ - "age": 10, - "with_state": "ARCHIVED", - "num_newer_versions": 0, - "created_before": "", - }) - attrPrefix := fmt.Sprintf("lifecycle_rule.0.condition.%d.", hashK) + bucketName := fmt.Sprintf("tf-test-acc-bucket-%d", randInt(t)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccStorageBucketDestroy, + CheckDestroy: testAccStorageBucketDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccStorageBucket_lifecycleRule_emptyArchived(bucketName), Check: resource.ComposeTestCheckFunc( testAccCheckStorageBucketExists( - "google_storage_bucket.bucket", bucketName, &bucket), + t, "google_storage_bucket.bucket", bucketName, &bucket), testAccCheckStorageBucketLifecycleConditionState(nil, &bucket), ), }, @@ -220,10 +203,8 @@ func TestAccStorageBucket_lifecycleRuleStateArchived(t *testing.T) { Config: testAccStorageBucket_lifecycleRule_withStateArchived(bucketName), Check: resource.ComposeTestCheckFunc( testAccCheckStorageBucketExists( - "google_storage_bucket.bucket", bucketName, &bucket), + t, "google_storage_bucket.bucket", bucketName, &bucket), testAccCheckStorageBucketLifecycleConditionState(googleapi.Bool(false), &bucket), - resource.TestCheckResourceAttr( - "google_storage_bucket.bucket", attrPrefix+"with_state", "ARCHIVED"), ), }, { @@ -239,40 +220,19 @@ func TestAccStorageBucket_lifecycleRuleStateAny(t *testing.T) { t.Parallel() var bucket storage.Bucket - bucketName := fmt.Sprintf("tf-test-acc-bucket-%d", acctest.RandInt()) - - hashKLive := resourceGCSBucketLifecycleRuleConditionHash(map[string]interface{}{ - "age": 10, - "with_state": "LIVE", - "num_newer_versions": 0, - "created_before": "", - }) - hashKArchived := resourceGCSBucketLifecycleRuleConditionHash(map[string]interface{}{ - "age": 10, - "with_state": "ARCHIVED", - "num_newer_versions": 0, - "created_before": "", - }) - hashKAny := resourceGCSBucketLifecycleRuleConditionHash(map[string]interface{}{ - "age": 10, - "with_state": "ANY", - "num_newer_versions": 0, - "created_before": "", - }) + bucketName := fmt.Sprintf("tf-test-acc-bucket-%d", randInt(t)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccStorageBucketDestroy, + CheckDestroy: testAccStorageBucketDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccStorageBucket_lifecycleRule_withStateArchived(bucketName), Check: resource.ComposeTestCheckFunc( testAccCheckStorageBucketExists( - "google_storage_bucket.bucket", bucketName, &bucket), + t, "google_storage_bucket.bucket", bucketName, &bucket), testAccCheckStorageBucketLifecycleConditionState(googleapi.Bool(false), &bucket), - resource.TestCheckResourceAttr( - "google_storage_bucket.bucket", fmt.Sprintf("lifecycle_rule.0.condition.%d.with_state", hashKArchived), "ARCHIVED"), ), }, { @@ -284,10 +244,8 @@ func TestAccStorageBucket_lifecycleRuleStateAny(t *testing.T) { Config: testAccStorageBucket_lifecycleRule_withStateLive(bucketName), Check: resource.ComposeTestCheckFunc( testAccCheckStorageBucketExists( - 
"google_storage_bucket.bucket", bucketName, &bucket), + t, "google_storage_bucket.bucket", bucketName, &bucket), testAccCheckStorageBucketLifecycleConditionState(googleapi.Bool(true), &bucket), - resource.TestCheckResourceAttr( - "google_storage_bucket.bucket", fmt.Sprintf("lifecycle_rule.0.condition.%d.with_state", hashKLive), "LIVE"), ), }, { @@ -299,10 +257,8 @@ func TestAccStorageBucket_lifecycleRuleStateAny(t *testing.T) { Config: testAccStorageBucket_lifecycleRule_withStateAny(bucketName), Check: resource.ComposeTestCheckFunc( testAccCheckStorageBucketExists( - "google_storage_bucket.bucket", bucketName, &bucket), + t, "google_storage_bucket.bucket", bucketName, &bucket), testAccCheckStorageBucketLifecycleConditionState(nil, &bucket), - resource.TestCheckResourceAttr( - "google_storage_bucket.bucket", fmt.Sprintf("lifecycle_rule.0.condition.%d.with_state", hashKAny), "ANY"), ), }, { @@ -314,10 +270,8 @@ func TestAccStorageBucket_lifecycleRuleStateAny(t *testing.T) { Config: testAccStorageBucket_lifecycleRule_withStateArchived(bucketName), Check: resource.ComposeTestCheckFunc( testAccCheckStorageBucketExists( - "google_storage_bucket.bucket", bucketName, &bucket), + t, "google_storage_bucket.bucket", bucketName, &bucket), testAccCheckStorageBucketLifecycleConditionState(googleapi.Bool(false), &bucket), - resource.TestCheckResourceAttr( - "google_storage_bucket.bucket", fmt.Sprintf("lifecycle_rule.0.condition.%d.with_state", hashKArchived), "ARCHIVED"), ), }, { @@ -334,18 +288,18 @@ func TestAccStorageBucket_storageClass(t *testing.T) { var bucket storage.Bucket var updated storage.Bucket - bucketName := fmt.Sprintf("tf-test-acc-bucket-%d", acctest.RandInt()) + bucketName := fmt.Sprintf("tf-test-acc-bucket-%d", randInt(t)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccStorageBucketDestroy, + CheckDestroy: testAccStorageBucketDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccStorageBucket_storageClass(bucketName, "MULTI_REGIONAL", ""), Check: resource.ComposeTestCheckFunc( testAccCheckStorageBucketExists( - "google_storage_bucket.bucket", bucketName, &bucket), + t, "google_storage_bucket.bucket", bucketName, &bucket), ), }, { @@ -357,7 +311,7 @@ func TestAccStorageBucket_storageClass(t *testing.T) { Config: testAccStorageBucket_storageClass(bucketName, "NEARLINE", ""), Check: resource.ComposeTestCheckFunc( testAccCheckStorageBucketExists( - "google_storage_bucket.bucket", bucketName, &updated), + t, "google_storage_bucket.bucket", bucketName, &updated), // storage_class-only change should not recreate testAccCheckStorageBucketWasUpdated(&updated, &bucket), ), @@ -371,7 +325,7 @@ func TestAccStorageBucket_storageClass(t *testing.T) { Config: testAccStorageBucket_storageClass(bucketName, "REGIONAL", "US-CENTRAL1"), Check: resource.ComposeTestCheckFunc( testAccCheckStorageBucketExists( - "google_storage_bucket.bucket", bucketName, &updated), + t, "google_storage_bucket.bucket", bucketName, &updated), // Location change causes recreate testAccCheckStorageBucketWasRecreated(&updated, &bucket), ), @@ -390,18 +344,18 @@ func TestAccStorageBucket_update_requesterPays(t *testing.T) { var bucket storage.Bucket var updated storage.Bucket - bucketName := fmt.Sprintf("tf-test-requester-bucket-%d", acctest.RandInt()) + bucketName := fmt.Sprintf("tf-test-requester-bucket-%d", randInt(t)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: 
func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccStorageBucketDestroy, + CheckDestroy: testAccStorageBucketDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccStorageBucket_requesterPays(bucketName, true), Check: resource.ComposeTestCheckFunc( testAccCheckStorageBucketExists( - "google_storage_bucket.bucket", bucketName, &bucket), + t, "google_storage_bucket.bucket", bucketName, &bucket), ), }, { @@ -413,7 +367,7 @@ func TestAccStorageBucket_update_requesterPays(t *testing.T) { Config: testAccStorageBucket_requesterPays(bucketName, false), Check: resource.ComposeTestCheckFunc( testAccCheckStorageBucketExists( - "google_storage_bucket.bucket", bucketName, &updated), + t, "google_storage_bucket.bucket", bucketName, &updated), testAccCheckStorageBucketWasUpdated(&updated, &bucket), ), }, @@ -432,18 +386,18 @@ func TestAccStorageBucket_update(t *testing.T) { var bucket storage.Bucket var recreated storage.Bucket var updated storage.Bucket - bucketName := fmt.Sprintf("tf-test-acl-bucket-%d", acctest.RandInt()) + bucketName := fmt.Sprintf("tf-test-acl-bucket-%d", randInt(t)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccStorageBucketDestroy, + CheckDestroy: testAccStorageBucketDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccStorageBucket_basic(bucketName), Check: resource.ComposeTestCheckFunc( testAccCheckStorageBucketExists( - "google_storage_bucket.bucket", bucketName, &bucket), + t, "google_storage_bucket.bucket", bucketName, &bucket), resource.TestCheckResourceAttr( "google_storage_bucket.bucket", "force_destroy", "false"), ), @@ -458,7 +412,7 @@ func TestAccStorageBucket_update(t *testing.T) { Config: testAccStorageBucket_customAttributes(bucketName), Check: resource.ComposeTestCheckFunc( testAccCheckStorageBucketExists( - "google_storage_bucket.bucket", bucketName, &recreated), + t, "google_storage_bucket.bucket", bucketName, &recreated), testAccCheckStorageBucketWasRecreated(&recreated, &bucket), resource.TestCheckResourceAttr( "google_storage_bucket.bucket", "force_destroy", "true"), @@ -474,7 +428,7 @@ func TestAccStorageBucket_update(t *testing.T) { Config: testAccStorageBucket_customAttributes_withLifecycle1(bucketName), Check: resource.ComposeTestCheckFunc( testAccCheckStorageBucketExists( - "google_storage_bucket.bucket", bucketName, &updated), + t, "google_storage_bucket.bucket", bucketName, &updated), testAccCheckStorageBucketWasUpdated(&updated, &recreated), resource.TestCheckResourceAttr( "google_storage_bucket.bucket", "force_destroy", "true"), @@ -490,7 +444,7 @@ func TestAccStorageBucket_update(t *testing.T) { Config: testAccStorageBucket_customAttributes_withLifecycle2(bucketName), Check: resource.ComposeTestCheckFunc( testAccCheckStorageBucketExists( - "google_storage_bucket.bucket", bucketName, &updated), + t, "google_storage_bucket.bucket", bucketName, &updated), testAccCheckStorageBucketWasUpdated(&updated, &recreated), resource.TestCheckResourceAttr( "google_storage_bucket.bucket", "force_destroy", "true"), @@ -506,7 +460,7 @@ func TestAccStorageBucket_update(t *testing.T) { Config: testAccStorageBucket_customAttributes(bucketName), Check: resource.ComposeTestCheckFunc( testAccCheckStorageBucketExists( - "google_storage_bucket.bucket", bucketName, &updated), + t, "google_storage_bucket.bucket", bucketName, &updated), testAccCheckStorageBucketWasUpdated(&updated, &recreated), 
resource.TestCheckResourceAttr( "google_storage_bucket.bucket", "force_destroy", "true"), @@ -526,30 +480,30 @@ func TestAccStorageBucket_forceDestroy(t *testing.T) { t.Parallel() var bucket storage.Bucket - bucketName := fmt.Sprintf("tf-test-acl-bucket-%d", acctest.RandInt()) + bucketName := fmt.Sprintf("tf-test-acl-bucket-%d", randInt(t)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccStorageBucketDestroy, + CheckDestroy: testAccStorageBucketDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccStorageBucket_customAttributes(bucketName), Check: resource.ComposeTestCheckFunc( testAccCheckStorageBucketExists( - "google_storage_bucket.bucket", bucketName, &bucket), + t, "google_storage_bucket.bucket", bucketName, &bucket), ), }, { Config: testAccStorageBucket_customAttributes(bucketName), Check: resource.ComposeTestCheckFunc( - testAccCheckStorageBucketPutItem(bucketName), + testAccCheckStorageBucketPutItem(t, bucketName), ), }, { - Config: testAccStorageBucket_customAttributes(acctest.RandomWithPrefix("tf-test-acl-bucket")), + Config: testAccStorageBucket_customAttributes(fmt.Sprintf("tf-test-acl-bucket-%d", randInt(t))), Check: resource.ComposeTestCheckFunc( - testAccCheckStorageBucketMissing(bucketName), + testAccCheckStorageBucketMissing(t, bucketName), ), }, }, @@ -560,30 +514,30 @@ func TestAccStorageBucket_forceDestroyWithVersioning(t *testing.T) { t.Parallel() var bucket storage.Bucket - bucketName := fmt.Sprintf("tf-test-acc-bucket-%d", acctest.RandInt()) + bucketName := fmt.Sprintf("tf-test-acc-bucket-%d", randInt(t)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccStorageBucketDestroy, + CheckDestroy: testAccStorageBucketDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccStorageBucket_forceDestroyWithVersioning(bucketName), Check: resource.ComposeTestCheckFunc( testAccCheckStorageBucketExists( - "google_storage_bucket.bucket", bucketName, &bucket), + t, "google_storage_bucket.bucket", bucketName, &bucket), ), }, { Config: testAccStorageBucket_forceDestroyWithVersioning(bucketName), Check: resource.ComposeTestCheckFunc( - testAccCheckStorageBucketPutItem(bucketName), + testAccCheckStorageBucketPutItem(t, bucketName), ), }, { Config: testAccStorageBucket_forceDestroyWithVersioning(bucketName), Check: resource.ComposeTestCheckFunc( - testAccCheckStorageBucketPutItem(bucketName), + testAccCheckStorageBucketPutItem(t, bucketName), ), }, }, @@ -593,17 +547,17 @@ func TestAccStorageBucket_forceDestroyWithVersioning(t *testing.T) { func TestAccStorageBucket_forceDestroyObjectDeleteError(t *testing.T) { t.Parallel() - bucketName := fmt.Sprintf("tf-test-acl-bucket-%d", acctest.RandInt()) + bucketName := fmt.Sprintf("tf-test-acl-bucket-%d", randInt(t)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccStorageBucketDestroy, + CheckDestroy: testAccStorageBucketDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccStorageBucket_forceDestroyWithRetentionPolicy(bucketName), Check: resource.ComposeTestCheckFunc( - testAccCheckStorageBucketPutItem(bucketName), + testAccCheckStorageBucketPutItem(t, bucketName), ), }, { @@ -621,18 +575,18 @@ func TestAccStorageBucket_versioning(t *testing.T) { t.Parallel() var bucket 
storage.Bucket - bucketName := fmt.Sprintf("tf-test-acl-bucket-%d", acctest.RandInt()) + bucketName := fmt.Sprintf("tf-test-acl-bucket-%d", randInt(t)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccStorageBucketDestroy, + CheckDestroy: testAccStorageBucketDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccStorageBucket_versioning(bucketName), Check: resource.ComposeTestCheckFunc( testAccCheckStorageBucketExists( - "google_storage_bucket.bucket", bucketName, &bucket), + t, "google_storage_bucket.bucket", bucketName, &bucket), resource.TestCheckResourceAttr( "google_storage_bucket.bucket", "versioning.#", "1"), resource.TestCheckResourceAttr( @@ -651,11 +605,11 @@ func TestAccStorageBucket_versioning(t *testing.T) { func TestAccStorageBucket_logging(t *testing.T) { t.Parallel() - bucketName := fmt.Sprintf("tf-test-acl-bucket-%d", acctest.RandInt()) - resource.Test(t, resource.TestCase{ + bucketName := fmt.Sprintf("tf-test-acl-bucket-%d", randInt(t)) + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccStorageBucketDestroy, + CheckDestroy: testAccStorageBucketDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccStorageBucket_logging(bucketName, "log-bucket"), @@ -708,12 +662,12 @@ func TestAccStorageBucket_logging(t *testing.T) { func TestAccStorageBucket_cors(t *testing.T) { t.Parallel() - bucketName := fmt.Sprintf("tf-test-acl-bucket-%d", acctest.RandInt()) + bucketName := fmt.Sprintf("tf-test-acl-bucket-%d", randInt(t)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccStorageBucketDestroy, + CheckDestroy: testAccStorageBucketDestroyProducer(t), Steps: []resource.TestStep{ { Config: testGoogleStorageBucketsCors(bucketName), @@ -730,12 +684,12 @@ func TestAccStorageBucket_cors(t *testing.T) { func TestAccStorageBucket_defaultEventBasedHold(t *testing.T) { t.Parallel() - bucketName := fmt.Sprintf("tf-test-acl-bucket-%d", acctest.RandInt()) + bucketName := fmt.Sprintf("tf-test-acl-bucket-%d", randInt(t)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccStorageBucketDestroy, + CheckDestroy: testAccStorageBucketDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccStorageBucket_defaultEventBasedHold(bucketName), @@ -750,16 +704,18 @@ func TestAccStorageBucket_defaultEventBasedHold(t *testing.T) { } func TestAccStorageBucket_encryption(t *testing.T) { + // when rotation is set, next rotation time is set using time.Now + skipIfVcr(t) t.Parallel() context := map[string]interface{}{ "organization": getTestOrgFromEnv(t), "billing_account": getTestBillingAccountFromEnv(t), - "random_suffix": acctest.RandString(10), - "random_int": acctest.RandInt(), + "random_suffix": randString(t, 10), + "random_int": randInt(t), } - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -778,9 +734,9 @@ func TestAccStorageBucket_encryption(t *testing.T) { func TestAccStorageBucket_bucketPolicyOnly(t *testing.T) { t.Parallel() - bucketName := fmt.Sprintf("tf-test-acl-bucket-%d", acctest.RandInt()) + bucketName := fmt.Sprintf("tf-test-acl-bucket-%d", 
randInt(t)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -807,12 +763,12 @@ func TestAccStorageBucket_bucketPolicyOnly(t *testing.T) { func TestAccStorageBucket_labels(t *testing.T) { t.Parallel() - bucketName := fmt.Sprintf("tf-test-acl-bucket-%d", acctest.RandInt()) + bucketName := fmt.Sprintf("tf-test-acl-bucket-%d", randInt(t)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccStorageBucketDestroy, + CheckDestroy: testAccStorageBucketDestroyProducer(t), Steps: []resource.TestStep{ // Going from two labels { @@ -849,19 +805,19 @@ func TestAccStorageBucket_retentionPolicy(t *testing.T) { t.Parallel() var bucket storage.Bucket - bucketName := fmt.Sprintf("tf-test-acc-bucket-%d", acctest.RandInt()) + bucketName := fmt.Sprintf("tf-test-acc-bucket-%d", randInt(t)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccStorageBucketDestroy, + CheckDestroy: testAccStorageBucketDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccStorageBucket_retentionPolicy(bucketName), Check: resource.ComposeTestCheckFunc( testAccCheckStorageBucketExists( - "google_storage_bucket.bucket", bucketName, &bucket), - testAccCheckStorageBucketRetentionPolicy(bucketName), + t, "google_storage_bucket.bucket", bucketName, &bucket), + testAccCheckStorageBucketRetentionPolicy(t, bucketName), ), }, { @@ -876,13 +832,13 @@ func TestAccStorageBucket_retentionPolicy(t *testing.T) { func TestAccStorageBucket_website(t *testing.T) { t.Parallel() - bucketSuffix := acctest.RandomWithPrefix("tf-website-test") + bucketSuffix := fmt.Sprintf("tf-website-test-%d", randInt(t)) errRe := regexp.MustCompile("one of `((website.0.main_page_suffix,website.0.not_found_page)|(website.0.not_found_page,website.0.main_page_suffix))` must be specified") - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccStorageBucketDestroy, + CheckDestroy: testAccStorageBucketDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccStorageBucket_websiteNoAttributes(bucketSuffix), @@ -913,19 +869,19 @@ func TestAccStorageBucket_retentionPolicyLocked(t *testing.T) { var bucket storage.Bucket var newBucket storage.Bucket - bucketName := fmt.Sprintf("tf-test-acc-bucket-%d", acctest.RandInt()) + bucketName := fmt.Sprintf("tf-test-acc-bucket-%d", randInt(t)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccStorageBucketDestroy, + CheckDestroy: testAccStorageBucketDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccStorageBucket_lockedRetentionPolicy(bucketName), Check: resource.ComposeTestCheckFunc( testAccCheckStorageBucketExists( - "google_storage_bucket.bucket", bucketName, &bucket), - testAccCheckStorageBucketRetentionPolicy(bucketName), + t, "google_storage_bucket.bucket", bucketName, &bucket), + testAccCheckStorageBucketRetentionPolicy(t, bucketName), ), }, { @@ -937,7 +893,7 @@ func TestAccStorageBucket_retentionPolicyLocked(t *testing.T) { Config: testAccStorageBucket_retentionPolicy(bucketName), Check: resource.ComposeTestCheckFunc( 
testAccCheckStorageBucketExists( - "google_storage_bucket.bucket", bucketName, &newBucket), + t, "google_storage_bucket.bucket", bucketName, &newBucket), testAccCheckStorageBucketWasRecreated(&newBucket, &bucket), ), }, @@ -945,7 +901,7 @@ func TestAccStorageBucket_retentionPolicyLocked(t *testing.T) { }) } -func testAccCheckStorageBucketExists(n string, bucketName string, bucket *storage.Bucket) resource.TestCheckFunc { +func testAccCheckStorageBucketExists(t *testing.T, n string, bucketName string, bucket *storage.Bucket) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] if !ok { @@ -956,7 +912,7 @@ func testAccCheckStorageBucketExists(n string, bucketName string, bucket *storag return fmt.Errorf("No Project_ID is set") } - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) found, err := config.clientStorage.Buckets.Get(rs.Primary.ID).Do() if err != nil { @@ -994,9 +950,9 @@ func testAccCheckStorageBucketWasRecreated(newBucket *storage.Bucket, b *storage } } -func testAccCheckStorageBucketPutItem(bucketName string) resource.TestCheckFunc { +func testAccCheckStorageBucketPutItem(t *testing.T, bucketName string) resource.TestCheckFunc { return func(s *terraform.State) error { - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) data := bytes.NewBufferString("test") dataReader := bytes.NewReader(data.Bytes()) @@ -1013,9 +969,9 @@ func testAccCheckStorageBucketPutItem(bucketName string) resource.TestCheckFunc } } -func testAccCheckStorageBucketRetentionPolicy(bucketName string) resource.TestCheckFunc { +func testAccCheckStorageBucketRetentionPolicy(t *testing.T, bucketName string) resource.TestCheckFunc { return func(s *terraform.State) error { - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) data := bytes.NewBufferString("test") dataReader := bytes.NewReader(data.Bytes()) @@ -1046,9 +1002,9 @@ func testAccCheckStorageBucketRetentionPolicy(bucketName string) resource.TestCh } } -func testAccCheckStorageBucketMissing(bucketName string) resource.TestCheckFunc { +func testAccCheckStorageBucketMissing(t *testing.T, bucketName string) resource.TestCheckFunc { return func(s *terraform.State) error { - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) _, err := config.clientStorage.Buckets.Get(bucketName).Do() if err == nil { @@ -1082,21 +1038,23 @@ func testAccCheckStorageBucketLifecycleConditionState(expected *bool, b *storage } } -func testAccStorageBucketDestroy(s *terraform.State) error { - config := testAccProvider.Meta().(*Config) +func testAccStorageBucketDestroyProducer(t *testing.T) func(s *terraform.State) error { + return func(s *terraform.State) error { + config := googleProviderConfig(t) - for _, rs := range s.RootModule().Resources { - if rs.Type != "google_storage_bucket" { - continue - } + for _, rs := range s.RootModule().Resources { + if rs.Type != "google_storage_bucket" { + continue + } - _, err := config.clientStorage.Buckets.Get(rs.Primary.ID).Do() - if err == nil { - return fmt.Errorf("Bucket still exists") + _, err := config.clientStorage.Buckets.Get(rs.Primary.ID).Do() + if err == nil { + return fmt.Errorf("Bucket still exists") + } } - } - return nil + return nil + } } func testAccStorageBucket_basic(bucketName string) string { @@ -1432,11 +1390,22 @@ resource "google_kms_crypto_key" "crypto_key" { rotation_period = "1000000s" } +data "google_storage_project_service_account" "gcs_account" { +} + +resource 
"google_kms_crypto_key_iam_member" "iam" { + crypto_key_id = google_kms_crypto_key.crypto_key.id + role = "roles/cloudkms.cryptoKeyEncrypterDecrypter" + member = "serviceAccount:${data.google_storage_project_service_account.gcs_account.email_address}" +} + resource "google_storage_bucket" "bucket" { name = "tf-test-crypto-bucket-%{random_int}" encryption { default_kms_key_name = google_kms_crypto_key.crypto_key.self_link } + + depends_on = [google_kms_crypto_key_iam_member.iam] } `, context) } diff --git a/third_party/terraform/tests/resource_storage_default_object_access_control_test.go b/third_party/terraform/tests/resource_storage_default_object_access_control_test.go index 71426b69cfd7..4583da3c03b7 100644 --- a/third_party/terraform/tests/resource_storage_default_object_access_control_test.go +++ b/third_party/terraform/tests/resource_storage_default_object_access_control_test.go @@ -10,8 +10,8 @@ import ( func TestAccStorageDefaultObjectAccessControl_update(t *testing.T) { t.Parallel() - bucketName := testBucketName() - resource.Test(t, resource.TestCase{ + bucketName := testBucketName(t) + vcrTest(t, resource.TestCase{ PreCheck: func() { if errObjectAcl != nil { panic(errObjectAcl) @@ -19,7 +19,7 @@ func TestAccStorageDefaultObjectAccessControl_update(t *testing.T) { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckStorageDefaultObjectAccessControlDestroy, + CheckDestroy: testAccCheckStorageDefaultObjectAccessControlDestroyProducer(t), Steps: []resource.TestStep{ { Config: testGoogleStorageDefaultObjectAccessControlBasic(bucketName, "READER", "allUsers"), diff --git a/third_party/terraform/tests/resource_storage_default_object_acl_test.go b/third_party/terraform/tests/resource_storage_default_object_acl_test.go index ac05160f1bf1..59428f71a196 100644 --- a/third_party/terraform/tests/resource_storage_default_object_acl_test.go +++ b/third_party/terraform/tests/resource_storage_default_object_acl_test.go @@ -11,17 +11,17 @@ import ( func TestAccStorageDefaultObjectAcl_basic(t *testing.T) { t.Parallel() - bucketName := testBucketName() - resource.Test(t, resource.TestCase{ + bucketName := testBucketName(t) + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccStorageDefaultObjectAclDestroy, + CheckDestroy: testAccStorageDefaultObjectAclDestroyProducer(t), Steps: []resource.TestStep{ { Config: testGoogleStorageDefaultObjectsAclBasic(bucketName, roleEntityBasic1, roleEntityBasic2), Check: resource.ComposeTestCheckFunc( - testAccCheckGoogleStorageDefaultObjectAcl(bucketName, roleEntityBasic1), - testAccCheckGoogleStorageDefaultObjectAcl(bucketName, roleEntityBasic2), + testAccCheckGoogleStorageDefaultObjectAcl(t, bucketName, roleEntityBasic1), + testAccCheckGoogleStorageDefaultObjectAcl(t, bucketName, roleEntityBasic2), ), }, }, @@ -31,11 +31,11 @@ func TestAccStorageDefaultObjectAcl_basic(t *testing.T) { func TestAccStorageDefaultObjectAcl_noRoleEntity(t *testing.T) { t.Parallel() - bucketName := testBucketName() - resource.Test(t, resource.TestCase{ + bucketName := testBucketName(t) + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccStorageDefaultObjectAclDestroy, + CheckDestroy: testAccStorageDefaultObjectAclDestroyProducer(t), Steps: []resource.TestStep{ { Config: testGoogleStorageDefaultObjectsAclNoRoleEntity(bucketName), @@ -47,35 +47,35 @@ func TestAccStorageDefaultObjectAcl_noRoleEntity(t *testing.T) { func 
TestAccStorageDefaultObjectAcl_upgrade(t *testing.T) { t.Parallel() - bucketName := testBucketName() + bucketName := testBucketName(t) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccStorageDefaultObjectAclDestroy, + CheckDestroy: testAccStorageDefaultObjectAclDestroyProducer(t), Steps: []resource.TestStep{ { Config: testGoogleStorageDefaultObjectsAclBasic(bucketName, roleEntityBasic1, roleEntityBasic2), Check: resource.ComposeTestCheckFunc( - testAccCheckGoogleStorageDefaultObjectAcl(bucketName, roleEntityBasic1), - testAccCheckGoogleStorageDefaultObjectAcl(bucketName, roleEntityBasic2), + testAccCheckGoogleStorageDefaultObjectAcl(t, bucketName, roleEntityBasic1), + testAccCheckGoogleStorageDefaultObjectAcl(t, bucketName, roleEntityBasic2), ), }, { Config: testGoogleStorageDefaultObjectsAclBasic(bucketName, roleEntityBasic2, roleEntityBasic3_owner), Check: resource.ComposeTestCheckFunc( - testAccCheckGoogleStorageDefaultObjectAcl(bucketName, roleEntityBasic2), - testAccCheckGoogleStorageDefaultObjectAcl(bucketName, roleEntityBasic3_owner), + testAccCheckGoogleStorageDefaultObjectAcl(t, bucketName, roleEntityBasic2), + testAccCheckGoogleStorageDefaultObjectAcl(t, bucketName, roleEntityBasic3_owner), ), }, { Config: testGoogleStorageDefaultObjectsAclBasicDelete(bucketName, roleEntityBasic1), Check: resource.ComposeTestCheckFunc( - testAccCheckGoogleStorageDefaultObjectAcl(bucketName, roleEntityBasic1), - testAccCheckGoogleStorageDefaultObjectAclDelete(bucketName, roleEntityBasic2), - testAccCheckGoogleStorageDefaultObjectAclDelete(bucketName, roleEntityBasic3_reader), + testAccCheckGoogleStorageDefaultObjectAcl(t, bucketName, roleEntityBasic1), + testAccCheckGoogleStorageDefaultObjectAclDelete(t, bucketName, roleEntityBasic2), + testAccCheckGoogleStorageDefaultObjectAclDelete(t, bucketName, roleEntityBasic3_reader), ), }, }, @@ -85,35 +85,35 @@ func TestAccStorageDefaultObjectAcl_upgrade(t *testing.T) { func TestAccStorageDefaultObjectAcl_downgrade(t *testing.T) { t.Parallel() - bucketName := testBucketName() + bucketName := testBucketName(t) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccStorageDefaultObjectAclDestroy, + CheckDestroy: testAccStorageDefaultObjectAclDestroyProducer(t), Steps: []resource.TestStep{ { Config: testGoogleStorageDefaultObjectsAclBasic(bucketName, roleEntityBasic2, roleEntityBasic3_owner), Check: resource.ComposeTestCheckFunc( - testAccCheckGoogleStorageDefaultObjectAcl(bucketName, roleEntityBasic2), - testAccCheckGoogleStorageDefaultObjectAcl(bucketName, roleEntityBasic3_owner), + testAccCheckGoogleStorageDefaultObjectAcl(t, bucketName, roleEntityBasic2), + testAccCheckGoogleStorageDefaultObjectAcl(t, bucketName, roleEntityBasic3_owner), ), }, { Config: testGoogleStorageDefaultObjectsAclBasic(bucketName, roleEntityBasic2, roleEntityBasic3_reader), Check: resource.ComposeTestCheckFunc( - testAccCheckGoogleStorageDefaultObjectAcl(bucketName, roleEntityBasic2), - testAccCheckGoogleStorageDefaultObjectAcl(bucketName, roleEntityBasic3_reader), + testAccCheckGoogleStorageDefaultObjectAcl(t, bucketName, roleEntityBasic2), + testAccCheckGoogleStorageDefaultObjectAcl(t, bucketName, roleEntityBasic3_reader), ), }, { Config: testGoogleStorageDefaultObjectsAclBasicDelete(bucketName, roleEntityBasic1), Check: resource.ComposeTestCheckFunc( - 
testAccCheckGoogleStorageDefaultObjectAcl(bucketName, roleEntityBasic1), - testAccCheckGoogleStorageDefaultObjectAclDelete(bucketName, roleEntityBasic2), - testAccCheckGoogleStorageDefaultObjectAclDelete(bucketName, roleEntityBasic3_reader), + testAccCheckGoogleStorageDefaultObjectAcl(t, bucketName, roleEntityBasic1), + testAccCheckGoogleStorageDefaultObjectAclDelete(t, bucketName, roleEntityBasic2), + testAccCheckGoogleStorageDefaultObjectAclDelete(t, bucketName, roleEntityBasic3_reader), ), }, }, @@ -124,12 +124,12 @@ func TestAccStorageDefaultObjectAcl_downgrade(t *testing.T) { func TestAccStorageDefaultObjectAcl_unordered(t *testing.T) { t.Parallel() - bucketName := testBucketName() + bucketName := testBucketName(t) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccStorageDefaultObjectAclDestroy, + CheckDestroy: testAccStorageDefaultObjectAclDestroyProducer(t), Steps: []resource.TestStep{ { Config: testGoogleStorageDefaultObjectAclUnordered(bucketName), @@ -138,10 +138,10 @@ func TestAccStorageDefaultObjectAcl_unordered(t *testing.T) { }) } -func testAccCheckGoogleStorageDefaultObjectAcl(bucket, roleEntityS string) resource.TestCheckFunc { +func testAccCheckGoogleStorageDefaultObjectAcl(t *testing.T, bucket, roleEntityS string) resource.TestCheckFunc { return func(s *terraform.State) error { roleEntity, _ := getRoleEntityPair(roleEntityS) - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) res, err := config.clientStorage.DefaultObjectAccessControls.Get(bucket, roleEntity.Entity).Do() @@ -158,29 +158,31 @@ func testAccCheckGoogleStorageDefaultObjectAcl(bucket, roleEntityS string) resou } } -func testAccStorageDefaultObjectAclDestroy(s *terraform.State) error { - config := testAccProvider.Meta().(*Config) +func testAccStorageDefaultObjectAclDestroyProducer(t *testing.T) func(s *terraform.State) error { + return func(s *terraform.State) error { + config := googleProviderConfig(t) - for _, rs := range s.RootModule().Resources { + for _, rs := range s.RootModule().Resources { - if rs.Type != "google_storage_default_object_acl" { - continue - } + if rs.Type != "google_storage_default_object_acl" { + continue + } - bucket := rs.Primary.Attributes["bucket"] + bucket := rs.Primary.Attributes["bucket"] - _, err := config.clientStorage.DefaultObjectAccessControls.List(bucket).Do() - if err == nil { - return fmt.Errorf("Default Storage Object Acl for bucket %s still exists", bucket) + _, err := config.clientStorage.DefaultObjectAccessControls.List(bucket).Do() + if err == nil { + return fmt.Errorf("Default Storage Object Acl for bucket %s still exists", bucket) + } } + return nil } - return nil } -func testAccCheckGoogleStorageDefaultObjectAclDelete(bucket, roleEntityS string) resource.TestCheckFunc { +func testAccCheckGoogleStorageDefaultObjectAclDelete(t *testing.T, bucket, roleEntityS string) resource.TestCheckFunc { return func(s *terraform.State) error { roleEntity, _ := getRoleEntityPair(roleEntityS) - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) _, err := config.clientStorage.DefaultObjectAccessControls.Get(bucket, roleEntity.Entity).Do() diff --git a/third_party/terraform/tests/resource_storage_hmac_key_test.go b/third_party/terraform/tests/resource_storage_hmac_key_test.go index 7e28c8965797..9a55fdb0a461 100644 --- a/third_party/terraform/tests/resource_storage_hmac_key_test.go +++ 
b/third_party/terraform/tests/resource_storage_hmac_key_test.go @@ -4,18 +4,17 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" ) func TestAccStorageHmacKey_update(t *testing.T) { t.Parallel() - saName := fmt.Sprintf("%v%v", "service-account", acctest.RandString(10)) - resource.Test(t, resource.TestCase{ + saName := fmt.Sprintf("%v%v", "service-account", randString(t, 10)) + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckStorageHmacKeyDestroy, + CheckDestroy: testAccCheckStorageHmacKeyDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccGoogleStorageHmacKeyBasic(saName, "ACTIVE"), diff --git a/third_party/terraform/tests/resource_storage_notification_test.go b/third_party/terraform/tests/resource_storage_notification_test.go index 0abb075ba69f..a3229516ab8f 100644 --- a/third_party/terraform/tests/resource_storage_notification_test.go +++ b/third_party/terraform/tests/resource_storage_notification_test.go @@ -6,7 +6,6 @@ import ( "reflect" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" "google.golang.org/api/storage/v1" @@ -22,20 +21,20 @@ func TestAccStorageNotification_basic(t *testing.T) { skipIfEnvNotSet(t, "GOOGLE_PROJECT") var notification storage.Notification - bucketName := testBucketName() - topicName := fmt.Sprintf("tf-pstopic-test-%d", acctest.RandInt()) + bucketName := testBucketName(t) + topicName := fmt.Sprintf("tf-pstopic-test-%d", randInt(t)) topic := fmt.Sprintf("//pubsub.googleapis.com/projects/%s/topics/%s", os.Getenv("GOOGLE_PROJECT"), topicName) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccStorageNotificationDestroy, + CheckDestroy: testAccStorageNotificationDestroyProducer(t), Steps: []resource.TestStep{ { Config: testGoogleStorageNotificationBasic(bucketName, topicName, topic), Check: resource.ComposeTestCheckFunc( testAccCheckStorageNotificationExists( - "google_storage_notification.notification", ¬ification), + t, "google_storage_notification.notification", ¬ification), resource.TestCheckResourceAttr( "google_storage_notification.notification", "bucket", bucketName), resource.TestCheckResourceAttr( @@ -66,22 +65,22 @@ func TestAccStorageNotification_withEventsAndAttributes(t *testing.T) { skipIfEnvNotSet(t, "GOOGLE_PROJECT") var notification storage.Notification - bucketName := testBucketName() - topicName := fmt.Sprintf("tf-pstopic-test-%d", acctest.RandInt()) + bucketName := testBucketName(t) + topicName := fmt.Sprintf("tf-pstopic-test-%d", randInt(t)) topic := fmt.Sprintf("//pubsub.googleapis.com/projects/%s/topics/%s", os.Getenv("GOOGLE_PROJECT"), topicName) eventType1 := "OBJECT_FINALIZE" eventType2 := "OBJECT_ARCHIVE" - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccStorageNotificationDestroy, + CheckDestroy: testAccStorageNotificationDestroyProducer(t), Steps: []resource.TestStep{ { Config: testGoogleStorageNotificationOptionalEventsAttributes(bucketName, topicName, topic, eventType1, eventType2), Check: resource.ComposeTestCheckFunc( testAccCheckStorageNotificationExists( - 
"google_storage_notification.notification", ¬ification), + t, "google_storage_notification.notification", ¬ification), resource.TestCheckResourceAttr( "google_storage_notification.notification", "bucket", bucketName), resource.TestCheckResourceAttr( @@ -103,26 +102,28 @@ func TestAccStorageNotification_withEventsAndAttributes(t *testing.T) { }) } -func testAccStorageNotificationDestroy(s *terraform.State) error { - config := testAccProvider.Meta().(*Config) +func testAccStorageNotificationDestroyProducer(t *testing.T) func(s *terraform.State) error { + return func(s *terraform.State) error { + config := googleProviderConfig(t) - for _, rs := range s.RootModule().Resources { - if rs.Type != "google_storage_notification" { - continue - } + for _, rs := range s.RootModule().Resources { + if rs.Type != "google_storage_notification" { + continue + } - bucket, notificationID := resourceStorageNotificationParseID(rs.Primary.ID) + bucket, notificationID := resourceStorageNotificationParseID(rs.Primary.ID) - _, err := config.clientStorage.Notifications.Get(bucket, notificationID).Do() - if err == nil { - return fmt.Errorf("Notification configuration still exists") + _, err := config.clientStorage.Notifications.Get(bucket, notificationID).Do() + if err == nil { + return fmt.Errorf("Notification configuration still exists") + } } - } - return nil + return nil + } } -func testAccCheckStorageNotificationExists(resource string, notification *storage.Notification) resource.TestCheckFunc { +func testAccCheckStorageNotificationExists(t *testing.T, resource string, notification *storage.Notification) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[resource] if !ok { @@ -133,7 +134,7 @@ func testAccCheckStorageNotificationExists(resource string, notification *storag return fmt.Errorf("No ID is set") } - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) bucket, notificationID := resourceStorageNotificationParseID(rs.Primary.ID) diff --git a/third_party/terraform/tests/resource_storage_object_access_control_test.go b/third_party/terraform/tests/resource_storage_object_access_control_test.go index 017d42a67b83..c0699d324c97 100644 --- a/third_party/terraform/tests/resource_storage_object_access_control_test.go +++ b/third_party/terraform/tests/resource_storage_object_access_control_test.go @@ -11,13 +11,13 @@ import ( func TestAccStorageObjectAccessControl_update(t *testing.T) { t.Parallel() - bucketName := testBucketName() - objectName := testAclObjectName() + bucketName := testBucketName(t) + objectName := testAclObjectName(t) objectData := []byte("data data data") if err := ioutil.WriteFile(tfObjectAcl.Name(), objectData, 0644); err != nil { t.Errorf("error writing file: %v", err) } - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { if errObjectAcl != nil { panic(errObjectAcl) @@ -25,7 +25,7 @@ func TestAccStorageObjectAccessControl_update(t *testing.T) { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckStorageObjectAccessControlDestroy, + CheckDestroy: testAccCheckStorageObjectAccessControlDestroyProducer(t), Steps: []resource.TestStep{ { Config: testGoogleStorageObjectAccessControlBasic(bucketName, objectName, "READER", "allUsers"), diff --git a/third_party/terraform/tests/resource_storage_object_acl_test.go b/third_party/terraform/tests/resource_storage_object_acl_test.go index 968d1ff5bfca..293a3fbbaec2 100644 --- 
a/third_party/terraform/tests/resource_storage_object_acl_test.go +++ b/third_party/terraform/tests/resource_storage_object_acl_test.go @@ -3,9 +3,7 @@ package google import ( "fmt" "io/ioutil" - "math/rand" "testing" - "time" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" @@ -13,21 +11,20 @@ import ( var tfObjectAcl, errObjectAcl = ioutil.TempFile("", "tf-gce-test") -func testAclObjectName() string { - return fmt.Sprintf("%s-%d", "tf-test-acl-object", - rand.New(rand.NewSource(time.Now().UnixNano())).Int()) +func testAclObjectName(t *testing.T) string { + return fmt.Sprintf("%s-%d", "tf-test-acl-object", randInt(t)) } func TestAccStorageObjectAcl_basic(t *testing.T) { t.Parallel() - bucketName := testBucketName() - objectName := testAclObjectName() + bucketName := testBucketName(t) + objectName := testAclObjectName(t) objectData := []byte("data data data") if err := ioutil.WriteFile(tfObjectAcl.Name(), objectData, 0644); err != nil { t.Errorf("error writing file: %v", err) } - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { if errObjectAcl != nil { panic(errObjectAcl) @@ -35,14 +32,14 @@ func TestAccStorageObjectAcl_basic(t *testing.T) { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccStorageObjectAclDestroy, + CheckDestroy: testAccStorageObjectAclDestroyProducer(t), Steps: []resource.TestStep{ { Config: testGoogleStorageObjectsAclBasic1(bucketName, objectName), Check: resource.ComposeTestCheckFunc( - testAccCheckGoogleStorageObjectAcl(bucketName, + testAccCheckGoogleStorageObjectAcl(t, bucketName, objectName, roleEntityBasic1), - testAccCheckGoogleStorageObjectAcl(bucketName, + testAccCheckGoogleStorageObjectAcl(t, bucketName, objectName, roleEntityBasic2), ), }, @@ -53,13 +50,13 @@ func TestAccStorageObjectAcl_basic(t *testing.T) { func TestAccStorageObjectAcl_upgrade(t *testing.T) { t.Parallel() - bucketName := testBucketName() - objectName := testAclObjectName() + bucketName := testBucketName(t) + objectName := testAclObjectName(t) objectData := []byte("data data data") if err := ioutil.WriteFile(tfObjectAcl.Name(), objectData, 0644); err != nil { t.Errorf("error writing file: %v", err) } - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { if errObjectAcl != nil { panic(errObjectAcl) @@ -67,14 +64,14 @@ func TestAccStorageObjectAcl_upgrade(t *testing.T) { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccStorageObjectAclDestroy, + CheckDestroy: testAccStorageObjectAclDestroyProducer(t), Steps: []resource.TestStep{ { Config: testGoogleStorageObjectsAclBasic1(bucketName, objectName), Check: resource.ComposeTestCheckFunc( - testAccCheckGoogleStorageObjectAcl(bucketName, + testAccCheckGoogleStorageObjectAcl(t, bucketName, objectName, roleEntityBasic1), - testAccCheckGoogleStorageObjectAcl(bucketName, + testAccCheckGoogleStorageObjectAcl(t, bucketName, objectName, roleEntityBasic2), ), }, @@ -82,9 +79,9 @@ func TestAccStorageObjectAcl_upgrade(t *testing.T) { { Config: testGoogleStorageObjectsAclBasic2(bucketName, objectName), Check: resource.ComposeTestCheckFunc( - testAccCheckGoogleStorageObjectAcl(bucketName, + testAccCheckGoogleStorageObjectAcl(t, bucketName, objectName, roleEntityBasic2), - testAccCheckGoogleStorageObjectAcl(bucketName, + testAccCheckGoogleStorageObjectAcl(t, bucketName, objectName, roleEntityBasic3_owner), ), }, @@ -92,11 +89,11 @@ func TestAccStorageObjectAcl_upgrade(t 
*testing.T) { { Config: testGoogleStorageObjectsAclBasicDelete(bucketName, objectName), Check: resource.ComposeTestCheckFunc( - testAccCheckGoogleStorageObjectAclDelete(bucketName, + testAccCheckGoogleStorageObjectAclDelete(t, bucketName, objectName, roleEntityBasic1), - testAccCheckGoogleStorageObjectAclDelete(bucketName, + testAccCheckGoogleStorageObjectAclDelete(t, bucketName, objectName, roleEntityBasic2), - testAccCheckGoogleStorageObjectAclDelete(bucketName, + testAccCheckGoogleStorageObjectAclDelete(t, bucketName, objectName, roleEntityBasic3_reader), ), }, @@ -107,13 +104,13 @@ func TestAccStorageObjectAcl_upgrade(t *testing.T) { func TestAccStorageObjectAcl_downgrade(t *testing.T) { t.Parallel() - bucketName := testBucketName() - objectName := testAclObjectName() + bucketName := testBucketName(t) + objectName := testAclObjectName(t) objectData := []byte("data data data") if err := ioutil.WriteFile(tfObjectAcl.Name(), objectData, 0644); err != nil { t.Errorf("error writing file: %v", err) } - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { if errObjectAcl != nil { panic(errObjectAcl) @@ -121,14 +118,14 @@ func TestAccStorageObjectAcl_downgrade(t *testing.T) { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccStorageObjectAclDestroy, + CheckDestroy: testAccStorageObjectAclDestroyProducer(t), Steps: []resource.TestStep{ { Config: testGoogleStorageObjectsAclBasic2(bucketName, objectName), Check: resource.ComposeTestCheckFunc( - testAccCheckGoogleStorageObjectAcl(bucketName, + testAccCheckGoogleStorageObjectAcl(t, bucketName, objectName, roleEntityBasic2), - testAccCheckGoogleStorageObjectAcl(bucketName, + testAccCheckGoogleStorageObjectAcl(t, bucketName, objectName, roleEntityBasic3_owner), ), }, @@ -136,9 +133,9 @@ func TestAccStorageObjectAcl_downgrade(t *testing.T) { { Config: testGoogleStorageObjectsAclBasic3(bucketName, objectName), Check: resource.ComposeTestCheckFunc( - testAccCheckGoogleStorageObjectAcl(bucketName, + testAccCheckGoogleStorageObjectAcl(t, bucketName, objectName, roleEntityBasic2), - testAccCheckGoogleStorageObjectAcl(bucketName, + testAccCheckGoogleStorageObjectAcl(t, bucketName, objectName, roleEntityBasic3_reader), ), }, @@ -146,11 +143,11 @@ func TestAccStorageObjectAcl_downgrade(t *testing.T) { { Config: testGoogleStorageObjectsAclBasicDelete(bucketName, objectName), Check: resource.ComposeTestCheckFunc( - testAccCheckGoogleStorageObjectAclDelete(bucketName, + testAccCheckGoogleStorageObjectAclDelete(t, bucketName, objectName, roleEntityBasic1), - testAccCheckGoogleStorageObjectAclDelete(bucketName, + testAccCheckGoogleStorageObjectAclDelete(t, bucketName, objectName, roleEntityBasic2), - testAccCheckGoogleStorageObjectAclDelete(bucketName, + testAccCheckGoogleStorageObjectAclDelete(t, bucketName, objectName, roleEntityBasic3_reader), ), }, @@ -161,13 +158,13 @@ func TestAccStorageObjectAcl_downgrade(t *testing.T) { func TestAccStorageObjectAcl_predefined(t *testing.T) { t.Parallel() - bucketName := testBucketName() - objectName := testAclObjectName() + bucketName := testBucketName(t) + objectName := testAclObjectName(t) objectData := []byte("data data data") if err := ioutil.WriteFile(tfObjectAcl.Name(), objectData, 0644); err != nil { t.Errorf("error writing file: %v", err) } - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { if errObjectAcl != nil { panic(errObjectAcl) @@ -175,7 +172,7 @@ func TestAccStorageObjectAcl_predefined(t *testing.T) { 
testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccStorageObjectAclDestroy, + CheckDestroy: testAccStorageObjectAclDestroyProducer(t), Steps: []resource.TestStep{ { Config: testGoogleStorageObjectsAclPredefined(bucketName, objectName), @@ -187,13 +184,13 @@ func TestAccStorageObjectAcl_predefined(t *testing.T) { func TestAccStorageObjectAcl_predefinedToExplicit(t *testing.T) { t.Parallel() - bucketName := testBucketName() - objectName := testAclObjectName() + bucketName := testBucketName(t) + objectName := testAclObjectName(t) objectData := []byte("data data data") if err := ioutil.WriteFile(tfObjectAcl.Name(), objectData, 0644); err != nil { t.Errorf("error writing file: %v", err) } - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { if errObjectAcl != nil { panic(errObjectAcl) @@ -201,7 +198,7 @@ func TestAccStorageObjectAcl_predefinedToExplicit(t *testing.T) { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccStorageObjectAclDestroy, + CheckDestroy: testAccStorageObjectAclDestroyProducer(t), Steps: []resource.TestStep{ { Config: testGoogleStorageObjectsAclPredefined(bucketName, objectName), @@ -209,9 +206,9 @@ func TestAccStorageObjectAcl_predefinedToExplicit(t *testing.T) { { Config: testGoogleStorageObjectsAclBasic1(bucketName, objectName), Check: resource.ComposeTestCheckFunc( - testAccCheckGoogleStorageObjectAcl(bucketName, + testAccCheckGoogleStorageObjectAcl(t, bucketName, objectName, roleEntityBasic1), - testAccCheckGoogleStorageObjectAcl(bucketName, + testAccCheckGoogleStorageObjectAcl(t, bucketName, objectName, roleEntityBasic2), ), }, @@ -222,13 +219,13 @@ func TestAccStorageObjectAcl_predefinedToExplicit(t *testing.T) { func TestAccStorageObjectAcl_explicitToPredefined(t *testing.T) { t.Parallel() - bucketName := testBucketName() - objectName := testAclObjectName() + bucketName := testBucketName(t) + objectName := testAclObjectName(t) objectData := []byte("data data data") if err := ioutil.WriteFile(tfObjectAcl.Name(), objectData, 0644); err != nil { t.Errorf("error writing file: %v", err) } - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { if errObjectAcl != nil { panic(errObjectAcl) @@ -236,14 +233,14 @@ func TestAccStorageObjectAcl_explicitToPredefined(t *testing.T) { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccStorageObjectAclDestroy, + CheckDestroy: testAccStorageObjectAclDestroyProducer(t), Steps: []resource.TestStep{ { Config: testGoogleStorageObjectsAclBasic1(bucketName, objectName), Check: resource.ComposeTestCheckFunc( - testAccCheckGoogleStorageObjectAcl(bucketName, + testAccCheckGoogleStorageObjectAcl(t, bucketName, objectName, roleEntityBasic1), - testAccCheckGoogleStorageObjectAcl(bucketName, + testAccCheckGoogleStorageObjectAcl(t, bucketName, objectName, roleEntityBasic2), ), }, @@ -258,13 +255,13 @@ func TestAccStorageObjectAcl_explicitToPredefined(t *testing.T) { func TestAccStorageObjectAcl_unordered(t *testing.T) { t.Parallel() - bucketName := testBucketName() - objectName := testAclObjectName() + bucketName := testBucketName(t) + objectName := testAclObjectName(t) objectData := []byte("data data data") if err := ioutil.WriteFile(tfObjectAcl.Name(), objectData, 0644); err != nil { t.Errorf("error writing file: %v", err) } - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { if errObjectAcl != nil { panic(errObjectAcl) @@ -272,7 +269,7 @@ func 
TestAccStorageObjectAcl_unordered(t *testing.T) { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccStorageObjectAclDestroy, + CheckDestroy: testAccStorageObjectAclDestroyProducer(t), Steps: []resource.TestStep{ { Config: testGoogleStorageObjectAclUnordered(bucketName, objectName), @@ -281,10 +278,10 @@ func TestAccStorageObjectAcl_unordered(t *testing.T) { }) } -func testAccCheckGoogleStorageObjectAcl(bucket, object, roleEntityS string) resource.TestCheckFunc { +func testAccCheckGoogleStorageObjectAcl(t *testing.T, bucket, object, roleEntityS string) resource.TestCheckFunc { return func(s *terraform.State) error { roleEntity, _ := getRoleEntityPair(roleEntityS) - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) res, err := config.clientStorage.ObjectAccessControls.Get(bucket, object, roleEntity.Entity).Do() @@ -301,10 +298,10 @@ func testAccCheckGoogleStorageObjectAcl(bucket, object, roleEntityS string) reso } } -func testAccCheckGoogleStorageObjectAclDelete(bucket, object, roleEntityS string) resource.TestCheckFunc { +func testAccCheckGoogleStorageObjectAclDelete(t *testing.T, bucket, object, roleEntityS string) resource.TestCheckFunc { return func(s *terraform.State) error { roleEntity, _ := getRoleEntityPair(roleEntityS) - config := testAccProvider.Meta().(*Config) + config := googleProviderConfig(t) _, err := config.clientStorage.ObjectAccessControls.Get(bucket, object, roleEntity.Entity).Do() @@ -317,25 +314,27 @@ func testAccCheckGoogleStorageObjectAclDelete(bucket, object, roleEntityS string } } -func testAccStorageObjectAclDestroy(s *terraform.State) error { - config := testAccProvider.Meta().(*Config) +func testAccStorageObjectAclDestroyProducer(t *testing.T) func(s *terraform.State) error { + return func(s *terraform.State) error { + config := googleProviderConfig(t) - for _, rs := range s.RootModule().Resources { - if rs.Type != "google_storage_bucket_acl" { - continue - } + for _, rs := range s.RootModule().Resources { + if rs.Type != "google_storage_bucket_acl" { + continue + } - bucket := rs.Primary.Attributes["bucket"] - object := rs.Primary.Attributes["object"] + bucket := rs.Primary.Attributes["bucket"] + object := rs.Primary.Attributes["object"] - _, err := config.clientStorage.ObjectAccessControls.List(bucket, object).Do() + _, err := config.clientStorage.ObjectAccessControls.List(bucket, object).Do() - if err == nil { - return fmt.Errorf("Acl for bucket %s still exists", bucket) + if err == nil { + return fmt.Errorf("Acl for bucket %s still exists", bucket) + } } - } - return nil + return nil + } } func testGoogleStorageObjectsAclBasicDelete(bucketName string, objectName string) string { diff --git a/third_party/terraform/tests/resource_storage_transfer_job_test.go b/third_party/terraform/tests/resource_storage_transfer_job_test.go index e057a4898600..f82327572fb6 100644 --- a/third_party/terraform/tests/resource_storage_transfer_job_test.go +++ b/third_party/terraform/tests/resource_storage_transfer_job_test.go @@ -4,7 +4,6 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/terraform" ) @@ -12,17 +11,17 @@ import ( func TestAccStorageTransferJob_basic(t *testing.T) { t.Parallel() - testDataSourceBucketName := acctest.RandString(10) - testDataSinkName := acctest.RandString(10) - testTransferJobDescription := acctest.RandString(10) - testUpdatedDataSourceBucketName := 
acctest.RandString(10) - testUpdatedDataSinkBucketName := acctest.RandString(10) - testUpdatedTransferJobDescription := acctest.RandString(10) + testDataSourceBucketName := randString(t, 10) + testDataSinkName := randString(t, 10) + testTransferJobDescription := randString(t, 10) + testUpdatedDataSourceBucketName := randString(t, 10) + testUpdatedDataSinkBucketName := randString(t, 10) + testUpdatedTransferJobDescription := randString(t, 10) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccStorageTransferJobDestroy, + CheckDestroy: testAccStorageTransferJobDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccStorageTransferJob_basic(getTestProjectFromEnv(), testDataSourceBucketName, testDataSinkName, testTransferJobDescription), @@ -63,14 +62,14 @@ func TestAccStorageTransferJob_basic(t *testing.T) { func TestAccStorageTransferJob_omitScheduleEndDate(t *testing.T) { t.Parallel() - testDataSourceBucketName := acctest.RandString(10) - testDataSinkName := acctest.RandString(10) - testTransferJobDescription := acctest.RandString(10) + testDataSourceBucketName := randString(t, 10) + testDataSinkName := randString(t, 10) + testTransferJobDescription := randString(t, 10) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccStorageTransferJobDestroy, + CheckDestroy: testAccStorageTransferJobDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccStorageTransferJob_omitScheduleEndDate(getTestProjectFromEnv(), testDataSourceBucketName, testDataSinkName, testTransferJobDescription), @@ -84,35 +83,37 @@ func TestAccStorageTransferJob_omitScheduleEndDate(t *testing.T) { }) } -func testAccStorageTransferJobDestroy(s *terraform.State) error { - config := testAccProvider.Meta().(*Config) - - for _, rs := range s.RootModule().Resources { - if rs.Type != "google_storage_transfer_job" { - continue - } - - rs_attr := rs.Primary.Attributes - name, ok := rs_attr["name"] - if !ok { - return fmt.Errorf("No name set") - } - - project, err := getTestProject(rs.Primary, config) - if err != nil { - return err +func testAccStorageTransferJobDestroyProducer(t *testing.T) func(s *terraform.State) error { + return func(s *terraform.State) error { + config := googleProviderConfig(t) + + for _, rs := range s.RootModule().Resources { + if rs.Type != "google_storage_transfer_job" { + continue + } + + rs_attr := rs.Primary.Attributes + name, ok := rs_attr["name"] + if !ok { + return fmt.Errorf("No name set") + } + + project, err := getTestProject(rs.Primary, config) + if err != nil { + return err + } + + res, err := config.clientStorageTransfer.TransferJobs.Get(name).ProjectId(project).Do() + if res.Status != "DELETED" { + return fmt.Errorf("Transfer Job not set to DELETED") + } + if err != nil { + return fmt.Errorf("Transfer Job does not exist, should exist and be DELETED") + } } - res, err := config.clientStorageTransfer.TransferJobs.Get(name).ProjectId(project).Do() - if res.Status != "DELETED" { - return fmt.Errorf("Transfer Job not set to DELETED") - } - if err != nil { - return fmt.Errorf("Transfer Job does not exist, should exist and be DELETED") - } + return nil } - - return nil } func testAccStorageTransferJob_basic(project string, dataSourceBucketName string, dataSinkBucketName string, transferJobDescription string) string { diff --git 
a/third_party/terraform/tests/resource_tpu_node_test.go b/third_party/terraform/tests/resource_tpu_node_test.go index 5c13da1e5edf..d376bb9ec4e6 100644 --- a/third_party/terraform/tests/resource_tpu_node_test.go +++ b/third_party/terraform/tests/resource_tpu_node_test.go @@ -4,19 +4,18 @@ import ( "testing" "fmt" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" ) func TestAccTPUNode_tpuNodeBUpdateTensorFlowVersion(t *testing.T) { t.Parallel() - nodeId := acctest.RandomWithPrefix("tf-test") + nodeId := fmt.Sprintf("tf-test-%d", randInt(t)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckTPUNodeDestroy, + CheckDestroy: testAccCheckTPUNodeDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccTpuNode_tpuNodeTensorFlow(nodeId, 0), diff --git a/third_party/terraform/tests/resource_usage_export_bucket_test.go b/third_party/terraform/tests/resource_usage_export_bucket_test.go index f41e01b44128..268c340d95c4 100644 --- a/third_party/terraform/tests/resource_usage_export_bucket_test.go +++ b/third_party/terraform/tests/resource_usage_export_bucket_test.go @@ -4,7 +4,6 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" ) @@ -12,9 +11,9 @@ func TestAccComputeResourceUsageExportBucket(t *testing.T) { org := getTestOrgFromEnv(t) billingId := getTestBillingAccountFromEnv(t) - baseProject := acctest.RandomWithPrefix("tf-test") + baseProject := fmt.Sprintf("tf-test-%d", randInt(t)) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ diff --git a/third_party/terraform/utils/appengine_operation.go b/third_party/terraform/utils/appengine_operation.go index ad0975aa8111..a8b936b245a3 100644 --- a/third_party/terraform/utils/appengine_operation.go +++ b/third_party/terraform/utils/appengine_operation.go @@ -4,6 +4,7 @@ import ( "encoding/json" "fmt" "regexp" + "time" "google.golang.org/api/appengine/v1" ) @@ -29,11 +30,7 @@ func (w *AppEngineOperationWaiter) QueryOp() (interface{}, error) { return w.Service.Apps.Operations.Get(w.AppId, matches[1]).Do() } -func appEngineOperationWait(config *Config, res interface{}, appId, activity string) error { - return appEngineOperationWaitTime(config, res, appId, activity, 4) -} - -func appEngineOperationWaitTimeWithResponse(config *Config, res interface{}, response *map[string]interface{}, appId, activity string, timeoutMinutes int) error { +func appEngineOperationWaitTimeWithResponse(config *Config, res interface{}, response *map[string]interface{}, appId, activity string, timeout time.Duration) error { op := &appengine.Operation{} err := Convert(res, op) if err != nil { @@ -48,13 +45,13 @@ func appEngineOperationWaitTimeWithResponse(config *Config, res interface{}, res if err := w.SetOp(op); err != nil { return err } - if err := OperationWait(w, activity, timeoutMinutes, config.PollInterval); err != nil { + if err := OperationWait(w, activity, timeout, config.PollInterval); err != nil { return err } return json.Unmarshal([]byte(w.CommonOperationWaiter.Op.Response), response) } -func appEngineOperationWaitTime(config *Config, res interface{}, appId, activity string, timeoutMinutes int) error { +func appEngineOperationWaitTime(config *Config, res interface{}, 
appId, activity string, timeout time.Duration) error { op := &appengine.Operation{} err := Convert(res, op) if err != nil { @@ -69,5 +66,5 @@ func appEngineOperationWaitTime(config *Config, res interface{}, appId, activity if err := w.SetOp(op); err != nil { return err } - return OperationWait(w, activity, timeoutMinutes, config.PollInterval) + return OperationWait(w, activity, timeout, config.PollInterval) } diff --git a/third_party/terraform/utils/bootstrap_utils_test.go b/third_party/terraform/utils/bootstrap_utils_test.go index 2f7d7a2ddc5e..267c8a1f3bce 100644 --- a/third_party/terraform/utils/bootstrap_utils_test.go +++ b/third_party/terraform/utils/bootstrap_utils_test.go @@ -8,7 +8,6 @@ import ( "testing" "time" - "github.com/hashicorp/terraform-plugin-sdk/helper/acctest" "google.golang.org/api/cloudkms/v1" cloudresourcemanager "google.golang.org/api/cloudresourcemanager/v1" "google.golang.org/api/iam/v1" @@ -52,10 +51,12 @@ func BootstrapKMSKeyWithPurpose(t *testing.T, purpose string) bootstrappedKMS { * a KMS key. **/ func BootstrapKMSKeyWithPurposeInLocation(t *testing.T, purpose, locationID string) bootstrappedKMS { - if v := os.Getenv("TF_ACC"); v == "" { - t.Skip("Acceptance tests and bootstrapping skipped unless env 'TF_ACC' set") + return BootstrapKMSKeyWithPurposeInLocationAndName(t, purpose, locationID, SharedCryptoKey[purpose]) +} - // If not running acceptance tests, return an empty object +func BootstrapKMSKeyWithPurposeInLocationAndName(t *testing.T, purpose, locationID, keyShortName string) bootstrappedKMS { + config := BootstrapConfig(t) + if config == nil { return bootstrappedKMS{ &cloudkms.KeyRing{}, &cloudkms.CryptoKey{}, @@ -66,20 +67,7 @@ func BootstrapKMSKeyWithPurposeInLocation(t *testing.T, purpose, locationID stri keyRingParent := fmt.Sprintf("projects/%s/locations/%s", projectID, locationID) keyRingName := fmt.Sprintf("%s/keyRings/%s", keyRingParent, SharedKeyRing) keyParent := fmt.Sprintf("projects/%s/locations/%s/keyRings/%s", projectID, locationID, SharedKeyRing) - keyName := fmt.Sprintf("%s/cryptoKeys/%s", keyParent, SharedCryptoKey[purpose]) - - config := &Config{ - Credentials: getTestCredsFromEnv(), - Project: getTestProjectFromEnv(), - Region: getTestRegionFromEnv(), - Zone: getTestZoneFromEnv(), - } - - ConfigureBasePaths(config) - - if err := config.LoadAndValidate(context.Background()); err != nil { - t.Errorf("Unable to bootstrap KMS key: %s", err) - } + keyName := fmt.Sprintf("%s/cryptoKeys/%s", keyParent, keyShortName) // Get or Create the hard coded shared keyring for testing kmsClient := config.clientKms @@ -119,7 +107,7 @@ func BootstrapKMSKeyWithPurposeInLocation(t *testing.T, purpose, locationID stri } cryptoKey, err = kmsClient.Projects.Locations.KeyRings.CryptoKeys.Create(keyParent, &newKey). - CryptoKeyId(SharedCryptoKey[purpose]).Do() + CryptoKeyId(keyShortName).Do() if err != nil { t.Errorf("Unable to bootstrap KMS key. 
Cannot create new CryptoKey: %s", err) } @@ -203,24 +191,11 @@ func impersonationServiceAccountPermissions(config *Config, sa *iam.ServiceAccou } func BootstrapServiceAccount(t *testing.T, project, testRunner string) string { - if v := os.Getenv("TF_ACC"); v == "" { - t.Skip("Acceptance tests and bootstrapping skipped unless env 'TF_ACC' set") + config := BootstrapConfig(t) + if config == nil { return "" } - config := &Config{ - Credentials: getTestCredsFromEnv(), - Project: getTestProjectFromEnv(), - Region: getTestRegionFromEnv(), - Zone: getTestZoneFromEnv(), - } - - ConfigureBasePaths(config) - - if err := config.LoadAndValidate(context.Background()); err != nil { - t.Fatalf("Bootstrapping failed. Unable to load test config: %s", err) - } - sa, err := getOrCreateServiceAccount(config, project) if err != nil { t.Fatalf("Bootstrapping failed. Cannot retrieve service account, %s", err) @@ -245,23 +220,12 @@ const SharedTestNetworkPrefix = "tf-bootstrap-net-" // testId specifies the test/suite for which a shared network is used/initialized. // Returns the name of an network, creating it if hasn't been created in the test projcet. func BootstrapSharedTestNetwork(t *testing.T, testId string) string { - if v := os.Getenv("TF_ACC"); v == "" { - t.Skip("Acceptance tests and bootstrapping skipped unless env 'TF_ACC' set") - // If not running acceptance tests, return an empty string - return "" - } - project := getTestProjectFromEnv() networkName := SharedTestNetworkPrefix + testId - config := &Config{ - Credentials: getTestCredsFromEnv(), - Project: project, - Region: getTestRegionFromEnv(), - Zone: getTestZoneFromEnv(), - } - ConfigureBasePaths(config) - if err := config.LoadAndValidate(context.Background()); err != nil { - t.Errorf("Unable to bootstrap network: %s", err) + + config := BootstrapConfig(t) + if config == nil { + return "" } log.Printf("[DEBUG] Getting shared test network %q", networkName) @@ -280,7 +244,7 @@ func BootstrapSharedTestNetwork(t *testing.T, testId string) string { } log.Printf("[DEBUG] Waiting for network creation to finish") - err = computeOperationWaitTime(config, res, project, "Error bootstrapping shared test network", 4) + err = computeOperationWaitTime(config, res, project, "Error bootstrapping shared test network", 4*time.Minute) if err != nil { t.Fatalf("Error bootstrapping shared test network %q: %s", networkName, err) } @@ -299,24 +263,12 @@ func BootstrapSharedTestNetwork(t *testing.T, testId string) string { var SharedServicePerimeterProjectPrefix = "tf-bootstrap-sp-" func BootstrapServicePerimeterProjects(t *testing.T, desiredProjects int) []*cloudresourcemanager.Project { - if v := os.Getenv("TF_ACC"); v == "" { - t.Skip("Acceptance tests and bootstrapping skipped unless env 'TF_ACC' set") + config := BootstrapConfig(t) + if config == nil { return nil } org := getTestOrgFromEnv(t) - config := &Config{ - Credentials: getTestCredsFromEnv(), - Project: getTestProjectFromEnv(), - Region: getTestRegionFromEnv(), - Zone: getTestZoneFromEnv(), - } - - ConfigureBasePaths(config) - - if err := config.LoadAndValidate(context.Background()); err != nil { - t.Fatalf("Bootstrapping failed. Unable to load test config: %s", err) - } // The filter endpoint works differently if you provide both the parent id and parent type, and // doesn't seem to allow for prefix matching. 
Don't change this to include the parent type unless @@ -329,7 +281,7 @@ func BootstrapServicePerimeterProjects(t *testing.T, desiredProjects int) []*clo projects := res.Projects for len(projects) < desiredProjects { - pid := SharedServicePerimeterProjectPrefix + acctest.RandString(10) + pid := SharedServicePerimeterProjectPrefix + randString(t, 10) project := &cloudresourcemanager.Project{ ProjectId: pid, Name: "TF Service Perimeter Test", @@ -362,3 +314,24 @@ func BootstrapServicePerimeterProjects(t *testing.T, desiredProjects int) []*clo return projects } + +func BootstrapConfig(t *testing.T) *Config { + if v := os.Getenv("TF_ACC"); v == "" { + t.Skip("Acceptance tests and bootstrapping skipped unless env 'TF_ACC' set") + return nil + } + + config := &Config{ + Credentials: getTestCredsFromEnv(), + Project: getTestProjectFromEnv(), + Region: getTestRegionFromEnv(), + Zone: getTestZoneFromEnv(), + } + + ConfigureBasePaths(config) + + if err := config.LoadAndValidate(context.Background()); err != nil { + t.Fatalf("Bootstrapping failed. Unable to load test config: %s", err) + } + return config +} diff --git a/third_party/terraform/utils/cloudfunctions_operation.go b/third_party/terraform/utils/cloudfunctions_operation.go index d89fa4ea7963..fc96c52047ca 100644 --- a/third_party/terraform/utils/cloudfunctions_operation.go +++ b/third_party/terraform/utils/cloudfunctions_operation.go @@ -2,6 +2,7 @@ package google import ( "fmt" + "time" "google.golang.org/api/cloudfunctions/v1" ) @@ -18,12 +19,12 @@ func (w *CloudFunctionsOperationWaiter) QueryOp() (interface{}, error) { return w.Service.Operations.Get(w.Op.Name).Do() } -func cloudFunctionsOperationWait(config *Config, op *cloudfunctions.Operation, activity string, timeoutMin int) error { +func cloudFunctionsOperationWait(config *Config, op *cloudfunctions.Operation, activity string, timeout time.Duration) error { w := &CloudFunctionsOperationWaiter{ Service: config.clientCloudFunctions, } if err := w.SetOp(op); err != nil { return err } - return OperationWait(w, activity, timeoutMin, config.PollInterval) + return OperationWait(w, activity, timeout, config.PollInterval) } diff --git a/third_party/terraform/utils/common_diff_suppress.go.erb b/third_party/terraform/utils/common_diff_suppress.go.erb index 75cc734ded6f..fe31b2940333 100644 --- a/third_party/terraform/utils/common_diff_suppress.go.erb +++ b/third_party/terraform/utils/common_diff_suppress.go.erb @@ -81,7 +81,7 @@ func rfc3339TimeDiffSuppress(k, old, new string, d *schema.ResourceData) bool { <% unless version == 'ga' -%> // For managed SSL certs, if new is an absolute FQDN (trailing '.') but old isn't, treat them as equals. 
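// With more than one managed domain, Terraform diffs each list element under
// its own key ("managed.0.domains.0", "managed.0.domains.1", ...), so the
// change below swaps the exact-key comparison for a prefix match, suppressing
// the trailing-dot difference on every element rather than only the first.
// A hedged sketch (the key and domain values are illustrative):
//
//	// suppressed for any element index, not just index 0:
//	absoluteDomainSuppress("managed.0.domains.1", "example.com", "example.com.", nil) // returns true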
func absoluteDomainSuppress(k, old, new string, _ *schema.ResourceData) bool { - if k == "managed.0.domains.0" { + if strings.HasPrefix(k, "managed.0.domains.") { return old == strings.TrimRight(new, ".") } return old == new diff --git a/third_party/terraform/utils/common_operation.go b/third_party/terraform/utils/common_operation.go index 205125183105..4cf66af84d4b 100644 --- a/third_party/terraform/utils/common_operation.go +++ b/third_party/terraform/utils/common_operation.go @@ -9,6 +9,15 @@ import ( cloudresourcemanager "google.golang.org/api/cloudresourcemanager/v1" ) +// Wraps Op.Error in an implementation of built-in Error +type CommonOpError struct { + *cloudresourcemanager.Status +} + +func (e *CommonOpError) Error() string { + return fmt.Sprintf("Error code %v, message: %s", e.Code, e.Message) +} + type Waiter interface { // State returns the current status of the operation. State() string @@ -56,7 +65,7 @@ func (w *CommonOperationWaiter) State() string { func (w *CommonOperationWaiter) Error() error { if w != nil && w.Op.Error != nil { - return fmt.Errorf("Error code %v, message: %s", w.Op.Error.Code, w.Op.Error.Message) + return &CommonOpError{w.Op.Error} } return nil } @@ -126,7 +135,7 @@ func CommonRefreshFunc(w Waiter) resource.StateRefreshFunc { } } -func OperationWait(w Waiter, activity string, timeoutMinutes int, pollInterval time.Duration) error { +func OperationWait(w Waiter, activity string, timeout time.Duration, pollInterval time.Duration) error { if OperationDone(w) { if w.Error() != nil { return w.Error() @@ -138,7 +147,7 @@ func OperationWait(w Waiter, activity string, timeoutMinutes int, pollInterval t Pending: w.PendingStates(), Target: w.TargetStates(), Refresh: CommonRefreshFunc(w), - Timeout: time.Duration(timeoutMinutes) * time.Minute, + Timeout: timeout, MinTimeout: 2 * time.Second, PollInterval: pollInterval, } diff --git a/third_party/terraform/utils/common_polling.go b/third_party/terraform/utils/common_polling.go index e6468cc4b6c9..181b4bb65504 100644 --- a/third_party/terraform/utils/common_polling.go +++ b/third_party/terraform/utils/common_polling.go @@ -3,6 +3,7 @@ package google import ( "fmt" "log" + "sync" "time" "github.com/hashicorp/terraform-plugin-sdk/helper/resource" @@ -31,19 +32,81 @@ func SuccessPollResult() PollResult { return nil } -func PollingWaitTime(pollF PollReadFunc, checkResponse PollCheckResponseFunc, activity string, timeout time.Duration) error { +func PollingWaitTime(pollF PollReadFunc, checkResponse PollCheckResponseFunc, activity string, + timeout time.Duration, targetOccurrences int) error { log.Printf("[DEBUG] %s: Polling until expected state is read", activity) - return resource.Retry(timeout, func() *resource.RetryError { + log.Printf("[DEBUG] Target occurrences: %d", targetOccurrences) + if targetOccurrences == 1 { + return resource.Retry(timeout, func() *resource.RetryError { + readResp, readErr := pollF() + return checkResponse(readResp, readErr) + }) + } + return RetryWithTargetOccurrences(timeout, targetOccurrences, func() *resource.RetryError { readResp, readErr := pollF() return checkResponse(readResp, readErr) }) } +// RetryWithTargetOccurrences is a basic wrapper around StateChangeConf that will retry +// a function until it returns the specified number of target occurrences continuously. +// Adapted from the Retry function in the Go SDK.
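// For example (a hedged sketch; pollF and checkResponse stand in for the
// closures that PollingWaitTime builds above), requiring three consecutive
// successful checks within a five-minute window:
//
//	err := RetryWithTargetOccurrences(5*time.Minute, 3, func() *resource.RetryError {
//		readResp, readErr := pollF()
//		return checkResponse(readResp, readErr)
//	})
//
// A single failed check resets the streak, because ContinuousTargetOccurence
// only counts uninterrupted occurrences of the target state.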
+func RetryWithTargetOccurrences(timeout time.Duration, targetOccurrences int, + f resource.RetryFunc) error { + // These are used to pull the error out of the function; need a mutex to + // avoid a data race. + var resultErr error + var resultErrMu sync.Mutex + + c := &resource.StateChangeConf{ + Pending: []string{"retryableerror"}, + Target: []string{"success"}, + Timeout: timeout, + MinTimeout: 500 * time.Millisecond, + ContinuousTargetOccurence: targetOccurrences, + Refresh: func() (interface{}, string, error) { + rerr := f() + + resultErrMu.Lock() + defer resultErrMu.Unlock() + + if rerr == nil { + resultErr = nil + return 42, "success", nil + } + + resultErr = rerr.Err + + if rerr.Retryable { + return 42, "retryableerror", nil + } + return nil, "quit", rerr.Err + }, + } + + _, waitErr := c.WaitForState() + + // Need to acquire the lock here to be able to avoid race using resultErr as + // the return value + resultErrMu.Lock() + defer resultErrMu.Unlock() + + // resultErr may be nil because the wait timed out and resultErr was never + // set; this is still an error + if resultErr == nil { + return waitErr + } + // resultErr takes precedence over waitErr if both are set because it is + // more likely to be useful + return resultErr +} + /** * Common PollCheckResponseFunc implementations */ -// PollCheckForExistence waits for a successful response, continues polling on 404, and returns any other error. +// PollCheckForExistence waits for a successful response, continues polling on 404, +// and returns any other error. func PollCheckForExistence(_ map[string]interface{}, respErr error) PollResult { if respErr != nil { if isGoogleApiErrorWithCode(respErr, 404) { @@ -53,3 +116,15 @@ func PollCheckForExistence(_ map[string]interface{}, respErr error) PollResult { } return SuccessPollResult() } + +// PollCheckForAbsence waits for a 404 response, continues polling on a successful +// response, and returns any other error. 
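// A hedged usage sketch (readDeletedResource is an illustrative PollReadFunc):
// deletion can be confirmed by polling until the read returns a 404:
//
//	err := PollingWaitTime(readDeletedResource, PollCheckForAbsence,
//		"waiting for resource deletion", 5*time.Minute, 1)
//
// A 404 ends the poll successfully, a successful read keeps it pending, and
// any other error aborts it.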
+func PollCheckForAbsence(_ map[string]interface{}, respErr error) PollResult { + if respErr != nil { + if isGoogleApiErrorWithCode(respErr, 404) { + return SuccessPollResult() + } + return ErrorPollResult(respErr) + } + return PendingStatusPollResult("found") +} diff --git a/third_party/terraform/utils/composer_operation.go b/third_party/terraform/utils/composer_operation.go index 8e5bc4caded7..ddca3b67a166 100644 --- a/third_party/terraform/utils/composer_operation.go +++ b/third_party/terraform/utils/composer_operation.go @@ -2,6 +2,7 @@ package google import ( "fmt" + "time" composer "google.golang.org/api/composer/v1beta1" ) @@ -18,12 +19,12 @@ func (w *ComposerOperationWaiter) QueryOp() (interface{}, error) { return w.Service.Operations.Get(w.Op.Name).Do() } -func composerOperationWaitTime(config *Config, op *composer.Operation, project, activity string, timeoutMinutes int) error { +func composerOperationWaitTime(config *Config, op *composer.Operation, project, activity string, timeout time.Duration) error { w := &ComposerOperationWaiter{ Service: config.clientComposer.Projects.Locations, } if err := w.SetOp(op); err != nil { return err } - return OperationWait(w, activity, timeoutMinutes, config.PollInterval) + return OperationWait(w, activity, timeout, config.PollInterval) } diff --git a/third_party/terraform/utils/compute_instance_helpers.go b/third_party/terraform/utils/compute_instance_helpers.go.erb similarity index 97% rename from third_party/terraform/utils/compute_instance_helpers.go rename to third_party/terraform/utils/compute_instance_helpers.go.erb index f1cfc9348076..73860f2c80be 100644 --- a/third_party/terraform/utils/compute_instance_helpers.go +++ b/third_party/terraform/utils/compute_instance_helpers.go.erb @@ -1,3 +1,5 @@ +// <% autogen_exception -%> + package google import ( @@ -91,7 +93,6 @@ func expandScheduling(v interface{}) (*computeBeta.Scheduling, error) { if v, ok := original["preemptible"]; ok { scheduling.Preemptible = v.(bool) scheduling.ForceSendFields = append(scheduling.ForceSendFields, "Preemptible") - } if v, ok := original["on_host_maintenance"]; ok { @@ -117,6 +118,12 @@ func expandScheduling(v interface{}) (*computeBeta.Scheduling, error) { } } +<% unless version == 'ga' -%> + if v, ok := original["min_node_cpus"]; ok { + scheduling.MinNodeCpus = int64(v.(int)) + } +<% end -%> + return scheduling, nil } @@ -124,6 +131,9 @@ func flattenScheduling(resp *computeBeta.Scheduling) []map[string]interface{} { schedulingMap := map[string]interface{}{ "on_host_maintenance": resp.OnHostMaintenance, "preemptible": resp.Preemptible, +<% unless version == 'ga' -%> + "min_node_cpus": resp.MinNodeCpus, +<% end -%> } if resp.AutomaticRestart != nil { @@ -374,5 +384,11 @@ func schedulingHasChange(d *schema.ResourceData) bool { return true } +<% unless version == 'ga' -%> + if oScheduling["min_node_cpus"] != newScheduling["min_node_cpus"] { + return true + } +<% end -%> + return reflect.DeepEqual(newNa, originalNa) } diff --git a/third_party/terraform/utils/compute_operation.go b/third_party/terraform/utils/compute_operation.go index dc65e4adcffe..78544c473c1b 100644 --- a/third_party/terraform/utils/compute_operation.go +++ b/third_party/terraform/utils/compute_operation.go @@ -3,6 +3,7 @@ package google import ( "bytes" "fmt" + "time" "google.golang.org/api/compute/v1" ) @@ -78,11 +79,7 @@ func (w *ComputeOperationWaiter) TargetStates() []string { return []string{"DONE"} } -func computeOperationWait(config *Config, res interface{}, project, activity string) 
error { - return computeOperationWaitTime(config, res, project, activity, 4) -} - -func computeOperationWaitTime(config *Config, res interface{}, project, activity string, timeoutMinutes int) error { +func computeOperationWaitTime(config *Config, res interface{}, project, activity string, timeout time.Duration) error { op := &compute.Operation{} err := Convert(res, op) if err != nil { @@ -98,7 +95,7 @@ func computeOperationWaitTime(config *Config, res interface{}, project, activity if err := w.SetOp(op); err != nil { return err } - return OperationWait(w, activity, timeoutMinutes, config.PollInterval) + return OperationWait(w, activity, timeout, config.PollInterval) } // ComputeOperationError wraps compute.OperationError and implements the diff --git a/third_party/terraform/utils/config.go.erb b/third_party/terraform/utils/config.go.erb index 427ea6a6ba5d..a1d3a2816173 100644 --- a/third_party/terraform/utils/config.go.erb +++ b/third_party/terraform/utils/config.go.erb @@ -26,6 +26,9 @@ import ( "google.golang.org/api/bigtableadmin/v2" "google.golang.org/api/cloudbilling/v1" "google.golang.org/api/cloudbuild/v1" +<% unless version == 'ga' -%> + cloudidentity "google.golang.org/api/cloudidentity/v1beta1" +<% end -%> "google.golang.org/api/cloudfunctions/v1" "google.golang.org/api/cloudiot/v1" "google.golang.org/api/cloudkms/v1" @@ -42,9 +45,7 @@ import ( "google.golang.org/api/dns/v1" dnsBeta "google.golang.org/api/dns/v1beta2" file "google.golang.org/api/file/v1beta1" -<% unless version == 'ga' -%> - healthcare "google.golang.org/api/healthcare/v1beta1" -<% end -%> + healthcare "google.golang.org/api/healthcare/v1" "google.golang.org/api/iam/v1" iamcredentials "google.golang.org/api/iamcredentials/v1" cloudlogging "google.golang.org/api/logging/v2" @@ -77,6 +78,8 @@ type Config struct { PollInterval time.Duration client *http.Client + wrappedBigQueryClient *http.Client + wrappedPubsubClient *http.Client context context.Context terraformVersion string userAgent string @@ -92,6 +95,10 @@ type Config struct { clientBuild *cloudbuild.Service +<% unless version == 'ga' -%> + clientCloudIdentity *cloudidentity.Service +<% end -%> + ComposerBasePath string clientComposer *composer.Service @@ -147,10 +154,8 @@ type Config struct { IAMBasePath string clientIAM *iam.Service - <% unless version == 'ga' -%> clientHealthcare *healthcare.Service - <% end -%> clientServiceMan *servicemanagement.APIService @@ -189,9 +194,10 @@ type Config struct { var <%= product[:definitions].name -%>DefaultBasePath = "<%= product[:definitions].base_url -%>" <% end -%> -var defaultClientScopes = []string{ +var DefaultClientScopes = []string{ "https://www.googleapis.com/auth/compute", "https://www.googleapis.com/auth/cloud-platform", + "https://www.googleapis.com/auth/cloud-identity", "https://www.googleapis.com/auth/ndev.clouddns.readwrite", "https://www.googleapis.com/auth/devstorage.full_control", "https://www.googleapis.com/auth/userinfo.email", @@ -199,9 +205,11 @@ var defaultClientScopes = []string{ func (c *Config) LoadAndValidate(ctx context.Context) error { if len(c.Scopes) == 0 { - c.Scopes = defaultClientScopes + c.Scopes = DefaultClientScopes } + c.context = ctx + tokenSource, err := c.getTokenSource(c.Scopes) if err != nil { return err @@ -237,7 +245,6 @@ func (c *Config) LoadAndValidate(ctx context.Context) error { userAgent := fmt.Sprintf("%s %s", tfUserAgent, providerVersion) c.client = client - c.context = ctx c.userAgent = userAgent // This base path and some others below need the version and possibly 
more of the path @@ -338,6 +345,7 @@ func (c *Config) LoadAndValidate(ctx context.Context) error { pubsubClientBasePath := removeBasePathVersion(c.PubsubBasePath) log.Printf("[INFO] Instantiating Google Pubsub client for path %s", pubsubClientBasePath) wrappedPubsubClient := ClientWithAdditionalRetries(client, retryTransport, pubsubTopicProjectNotReady) + c.wrappedPubsubClient = wrappedPubsubClient c.clientPubsub, err = pubsub.NewService(ctx, option.WithHTTPClient(wrappedPubsubClient)) if err != nil { return err @@ -438,6 +446,7 @@ func (c *Config) LoadAndValidate(ctx context.Context) error { bigQueryClientBasePath := c.BigQueryBasePath log.Printf("[INFO] Instantiating Google Cloud BigQuery client for path %s", bigQueryClientBasePath) wrappedBigQueryClient := ClientWithAdditionalRetries(client, retryTransport, iamMemberMissing) + c.wrappedBigQueryClient = wrappedBigQueryClient c.clientBigQuery, err = bigquery.NewService(ctx, option.WithHTTPClient(wrappedBigQueryClient)) if err != nil { return err @@ -558,9 +567,8 @@ func (c *Config) LoadAndValidate(ctx context.Context) error { return err } c.clientStorageTransfer.UserAgent = userAgent - c.clientStorageTransfer.BasePath = storageTransferClientBasePath + c.clientStorageTransfer.BasePath = storageTransferClientBasePath - <% unless version == 'ga' -%> healthcareClientBasePath := removeBasePathVersion(c.HealthcareBasePath) log.Printf("[INFO] Instantiating Google Cloud Healthcare client for path %s", healthcareClientBasePath) @@ -570,7 +578,18 @@ func (c *Config) LoadAndValidate(ctx context.Context) error { } c.clientHealthcare.UserAgent = userAgent c.clientHealthcare.BasePath = healthcareClientBasePath - <% end -%> + +<% unless version == 'ga' -%> + cloudidentityClientBasePath := removeBasePathVersion(c.CloudIdentityBasePath) + log.Printf("[INFO] Instantiating Google Cloud CloudIdentity client for path %s", cloudidentityClientBasePath) + + c.clientCloudIdentity, err = cloudidentity.NewService(ctx, option.WithHTTPClient(client)) + if err != nil { + return err + } + c.clientCloudIdentity.UserAgent = userAgent + c.clientCloudIdentity.BasePath = cloudidentityClientBasePath +<% end -%> c.Region = GetRegionFromRegionSelfLink(c.Region) @@ -620,37 +639,60 @@ func (c *Config) synchronousTimeout() time.Duration { } func (c *Config) getTokenSource(clientScopes []string) (oauth2.TokenSource, error) { + creds, err := c.GetCredentials(clientScopes) + if err != nil { + return nil, fmt.Errorf("%s", err) + } + return creds.TokenSource, nil +} + +// staticTokenSource is used to be able to identify static token sources without reflection. 
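// For example (sketch), code holding the googleoauth.Credentials returned by
// GetCredentials below can tell a configured static `access_token` apart from
// application default credentials with a plain type assertion:
//
//	if _, ok := creds.TokenSource.(staticTokenSource); ok {
//		// the token came from the provider's access_token setting
//	}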
+type staticTokenSource struct { + oauth2.TokenSource +} + +func (c *Config) GetCredentials(clientScopes []string) (googleoauth.Credentials, error) { if c.AccessToken != "" { contents, _, err := pathorcontents.Read(c.AccessToken) if err != nil { - return nil, fmt.Errorf("Error loading access token: %s", err) + return googleoauth.Credentials{}, fmt.Errorf("Error loading access token: %s", err) } log.Printf("[INFO] Authenticating using configured Google JSON 'access_token'...") log.Printf("[INFO] -- Scopes: %s", clientScopes) token := &oauth2.Token{AccessToken: contents} - return oauth2.StaticTokenSource(token), nil + + return googleoauth.Credentials{ + TokenSource: staticTokenSource{oauth2.StaticTokenSource(token)}, + }, nil } if c.Credentials != "" { contents, _, err := pathorcontents.Read(c.Credentials) if err != nil { - return nil, fmt.Errorf("Error loading credentials: %s", err) + return googleoauth.Credentials{}, fmt.Errorf("error loading credentials: %s", err) } - creds, err := googleoauth.CredentialsFromJSON(context.Background(), []byte(contents), clientScopes...) + creds, err := googleoauth.CredentialsFromJSON(c.context, []byte(contents), clientScopes...) if err != nil { - return nil, fmt.Errorf("Unable to parse credentials from '%s': %s", contents, err) + return googleoauth.Credentials{}, fmt.Errorf("unable to parse credentials from '%s': %s", contents, err) } log.Printf("[INFO] Authenticating using configured Google JSON 'credentials'...") log.Printf("[INFO] -- Scopes: %s", clientScopes) - return creds.TokenSource, nil + return *creds, nil } log.Printf("[INFO] Authenticating using DefaultClient...") log.Printf("[INFO] -- Scopes: %s", clientScopes) - return googleoauth.DefaultTokenSource(context.Background(), clientScopes...) + + defaultTS, err := googleoauth.DefaultTokenSource(context.Background(), clientScopes...) + if err != nil { + return googleoauth.Credentials{}, fmt.Errorf("Attempted to load application default credentials since neither `credentials` nor `access_token` was set in the provider block. No credentials loaded. To use your gcloud credentials, run 'gcloud auth application-default login'. Original error: %w", err) + } + return googleoauth.Credentials{ + TokenSource: defaultTS, + }, err } // Remove the `/{{version}}/` from a base path if present. 
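// Editorial aside -- an illustrative sketch, not part of the diff.
// GetCredentials above resolves credentials in precedence order: an inline
// access token, then a JSON credentials file or blob, then Application
// Default Credentials. Returning the full googleoauth.Credentials value
// (rather than only a TokenSource, as before) also exposes fields such as
// ProjectID. This standalone demo exercises only the ADC fallback branch.
package main

import (
	"context"
	"fmt"
	"log"

	"golang.org/x/oauth2/google"
)

func main() {
	ctx := context.Background()
	creds, err := google.FindDefaultCredentials(ctx, "https://www.googleapis.com/auth/cloud-platform")
	if err != nil {
		// Mirrors the spirit of the provider's new, friendlier error: point
		// the user at `gcloud auth application-default login`.
		log.Fatalf("no application default credentials: %v", err)
	}
	fmt.Println("detected project:", creds.ProjectID)

	tok, err := creds.TokenSource.Token()
	if err != nil {
		log.Fatalf("fetching token: %v", err)
	}
	fmt.Println("token type:", tok.TokenType)
}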
@@ -682,7 +724,6 @@ func ConfigureBasePaths(c *Config) { c.IAMBasePath = IAMDefaultBasePath c.ServiceNetworkingBasePath = ServiceNetworkingDefaultBasePath c.BigQueryBasePath = BigQueryDefaultBasePath - c.CloudIoTBasePath = CloudIoTDefaultBasePath c.StorageTransferBasePath = StorageTransferDefaultBasePath c.BigtableAdminBasePath = BigtableAdminDefaultBasePath } diff --git a/third_party/terraform/utils/container_operation.go b/third_party/terraform/utils/container_operation.go index 4c58c4a355a5..8cc0770afbea 100644 --- a/third_party/terraform/utils/container_operation.go +++ b/third_party/terraform/utils/container_operation.go @@ -5,6 +5,7 @@ import ( "errors" "fmt" "log" + "time" container "google.golang.org/api/container/v1beta1" ) @@ -96,7 +97,7 @@ func (w *ContainerOperationWaiter) TargetStates() []string { return []string{"DONE"} } -func containerOperationWait(config *Config, op *container.Operation, project, location, activity string, timeoutMinutes int) error { +func containerOperationWait(config *Config, op *container.Operation, project, location, activity string, timeout time.Duration) error { w := &ContainerOperationWaiter{ Service: config.clientContainerBeta, Context: config.context, @@ -109,5 +110,5 @@ func containerOperationWait(config *Config, op *container.Operation, project, lo return err } - return OperationWait(w, activity, timeoutMinutes, config.PollInterval) + return OperationWait(w, activity, timeout, config.PollInterval) } diff --git a/third_party/terraform/utils/dataproc_cluster_operation.go b/third_party/terraform/utils/dataproc_cluster_operation.go index ba8740a0a8d8..309f9582a725 100644 --- a/third_party/terraform/utils/dataproc_cluster_operation.go +++ b/third_party/terraform/utils/dataproc_cluster_operation.go @@ -2,8 +2,9 @@ package google import ( "fmt" + "time" - "google.golang.org/api/dataproc/v1beta2" + dataproc "google.golang.org/api/dataproc/v1beta2" ) type DataprocClusterOperationWaiter struct { @@ -18,12 +19,12 @@ func (w *DataprocClusterOperationWaiter) QueryOp() (interface{}, error) { return w.Service.Projects.Regions.Operations.Get(w.Op.Name).Do() } -func dataprocClusterOperationWait(config *Config, op *dataproc.Operation, activity string, timeoutMinutes int) error { +func dataprocClusterOperationWait(config *Config, op *dataproc.Operation, activity string, timeout time.Duration) error { w := &DataprocClusterOperationWaiter{ Service: config.clientDataprocBeta, } if err := w.SetOp(op); err != nil { return err } - return OperationWait(w, activity, timeoutMinutes, config.PollInterval) + return OperationWait(w, activity, timeout, config.PollInterval) } diff --git a/third_party/terraform/utils/dataproc_job_operation.go b/third_party/terraform/utils/dataproc_job_operation.go index 50a32197606c..a295edf0f6e4 100644 --- a/third_party/terraform/utils/dataproc_job_operation.go +++ b/third_party/terraform/utils/dataproc_job_operation.go @@ -3,6 +3,7 @@ package google import ( "fmt" "net/http" + "time" "google.golang.org/api/dataproc/v1" ) @@ -65,14 +66,14 @@ func (w *DataprocJobOperationWaiter) TargetStates() []string { return []string{"CANCELLED", "DONE", "ATTEMPT_FAILURE", "ERROR"} } -func dataprocJobOperationWait(config *Config, region, projectId, jobId string, activity string, timeoutMinutes, minTimeoutSeconds int) error { +func dataprocJobOperationWait(config *Config, region, projectId, jobId string, activity string, timeout time.Duration) error { w := &DataprocJobOperationWaiter{ Service: config.clientDataproc, Region: region, ProjectId: projectId, JobId: 
jobId,
 	}
-	return OperationWait(w, activity, timeoutMinutes, config.PollInterval)
+	return OperationWait(w, activity, timeout, config.PollInterval)
 }
 
 type DataprocDeleteJobOperationWaiter struct {
@@ -103,7 +104,7 @@ func (w *DataprocDeleteJobOperationWaiter) QueryOp() (interface{}, error) {
 	return job, err
 }
 
-func dataprocDeleteOperationWait(config *Config, region, projectId, jobId string, activity string, timeoutMinutes, minTimeoutSeconds int) error {
+func dataprocDeleteOperationWait(config *Config, region, projectId, jobId string, activity string, timeout time.Duration) error {
 	w := &DataprocDeleteJobOperationWaiter{
 		DataprocJobOperationWaiter{
 			Service: config.clientDataproc,
@@ -112,5 +113,5 @@ func dataprocDeleteOperationWait(config *Config, region, projectId, jobId string
 			JobId: jobId,
 		},
 	}
-	return OperationWait(w, activity, timeoutMinutes, config.PollInterval)
+	return OperationWait(w, activity, timeout, config.PollInterval)
 }
diff --git a/third_party/terraform/utils/deployment_manager_operation.go b/third_party/terraform/utils/deployment_manager_operation.go
index f605a4fd7219..721f0d0096cd 100644
--- a/third_party/terraform/utils/deployment_manager_operation.go
+++ b/third_party/terraform/utils/deployment_manager_operation.go
@@ -3,6 +3,8 @@ package google
 import (
 	"bytes"
 	"fmt"
+	"time"
+
 	"google.golang.org/api/compute/v1"
 )
 
@@ -32,7 +34,7 @@ func (w *DeploymentManagerOperationWaiter) QueryOp() (interface{}, error) {
 	return op, nil
 }
 
-func deploymentManagerOperationWaitTime(config *Config, resp interface{}, project, activity string, timeoutMinutes int) error {
+func deploymentManagerOperationWaitTime(config *Config, resp interface{}, project, activity string, timeout time.Duration) error {
 	op := &compute.Operation{}
 	err := Convert(resp, op)
 	if err != nil {
@@ -50,7 +52,7 @@ func deploymentManagerOperationWaitTime(config *Config, resp interface{}, projec
 		return err
 	}
 
-	return OperationWait(w, activity, timeoutMinutes, config.PollInterval)
+	return OperationWait(w, activity, timeout, config.PollInterval)
 }
 
 func (w *DeploymentManagerOperationWaiter) Error() error {
diff --git a/third_party/terraform/utils/error_retry_predicates.go b/third_party/terraform/utils/error_retry_predicates.go
index bc39c8fd4efe..a01c57c53aec 100644
--- a/third_party/terraform/utils/error_retry_predicates.go
+++ b/third_party/terraform/utils/error_retry_predicates.go
@@ -9,6 +9,7 @@ import (
 	"strings"
 
 	"google.golang.org/api/googleapi"
+	sqladmin "google.golang.org/api/sqladmin/v1beta4"
 )
 
 type RetryErrorPredicateFunc func(error) (bool, string)
@@ -160,6 +161,22 @@ func pubsubTopicProjectNotReady(err error) (bool, string) {
 	return false, ""
 }
 
+// Retry if a Cloud SQL operation reports an INTERNAL_ERROR, which is
+// sometimes transient for certain SQL resources.
+func isSqlInternalError(err error) (bool, string) {
+	if gerr, ok := err.(*SqlAdminOperationError); ok {
+		// SqlAdminOperationError is a non-interface type so we need to cast it through
+		// a layer of interface{}. :)
+		var ierr interface{}
+		ierr = gerr
+		if serr, ok := ierr.(*sqladmin.OperationErrors); ok && serr.Errors[0].Code == "INTERNAL_ERROR" {
+			return true, "Received an internal error, which is sometimes retryable for some SQL resources. Optimistically retrying."
+		}
+
+	}
+	return false, ""
+}
+
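// Editorial aside -- an illustrative sketch, not part of the diff. Every
// waiter in this change trades an integer minute count for a time.Duration,
// so call sites can forward the user-configurable resource timeout (in the
// provider, typically d.Timeout(schema.TimeoutCreate)) without unit
// juggling. exampleOperationWait below is a hypothetical stand-in for the
// OperationWait pattern above.
package main

import (
	"fmt"
	"time"
)

func exampleOperationWait(activity string, timeout, pollInterval time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// Query the operation here; return nil once it reaches a target state.
		time.Sleep(pollInterval)
		return nil // pretend it finished on the first poll
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, activity)
}

func main() {
	// Before: exampleOperationWait("Creating Instance", 4 /* minutes */, ...)
	// After: a plain duration, with no implied unit.
	if err := exampleOperationWait("Creating Instance", 4*time.Minute, 10*time.Second); err != nil {
		fmt.Println(err)
	}
}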
 // Retry if Cloud SQL operation returns a 429 with a specific message for
 // concurrent operations.
 func isSqlOperationInProgressError(err error) (bool, string) {
@@ -175,7 +192,7 @@ func isSqlOperationInProgressError(err error) (bool, string)
 
 // Retry if Monitoring operation returns a 429 with a specific message for
 // concurrent operations.
-func isMonitoringRetryableError(err error) (bool, string) {
+func isMonitoringConcurrentEditError(err error) (bool, string) {
 	if gerr, ok := err.(*googleapi.Error); ok {
 		if gerr.Code == 409 && strings.Contains(strings.ToLower(gerr.Body), "too many concurrent edits") {
 			return true, "Waiting for other Monitoring changes to finish"
@@ -215,3 +232,46 @@ func isNotFoundRetryableError(opType string) RetryErrorPredicateFunc {
 		return false, ""
 	}
 }
+
+func isStoragePreconditionError(err error) (bool, string) {
+	if gerr, ok := err.(*googleapi.Error); ok && gerr.Code == 412 {
+		return true, "Retry on storage precondition not met"
+	}
+	return false, ""
+}
+
+func isDataflowJobUpdateRetryableError(err error) (bool, string) {
+	if gerr, ok := err.(*googleapi.Error); ok {
+		if gerr.Code == 404 && strings.Contains(gerr.Body, "in RUNNING OR DRAINING state") {
+			return true, "Waiting for job to be in a valid state"
+		}
+	}
+	return false, ""
+}
+
+func isPeeringOperationInProgress(err error) (bool, string) {
+	if gerr, ok := err.(*googleapi.Error); ok {
+		if gerr.Code == 400 && strings.Contains(gerr.Body, "There is a peering operation in progress") {
+			return true, "Waiting for peering operation to complete"
+		}
+	}
+	return false, ""
+}
+
+func isCloudFunctionsSourceCodeError(err error) (bool, string) {
+	if operr, ok := err.(*CommonOpError); ok {
+		if operr.Code == 3 && operr.Message == "Failed to retrieve function source code" {
+			return true, "Retry on Function failing to pull code from GCS"
+		}
+	}
+	return false, ""
+}
+
+func datastoreIndex409Contention(err error) (bool, string) {
+	if gerr, ok := err.(*googleapi.Error); ok {
+		if gerr.Code == 409 && strings.Contains(gerr.Body, "too much contention") {
+			return true, "too much contention - waiting for less activity"
+		}
+	}
+	return false, ""
+}
diff --git a/third_party/terraform/utils/field_helpers.go b/third_party/terraform/utils/field_helpers.go
index 771fcc9a9218..cb81f103070b 100644
--- a/third_party/terraform/utils/field_helpers.go
+++ b/third_party/terraform/utils/field_helpers.go
@@ -346,6 +346,18 @@ func parseRegionalFieldValue(resourceType, fieldValue, projectSchemaField, regio
 // - provider-level region
 // - region extracted from the provider-level zone
 func getRegionFromSchema(regionSchemaField, zoneSchemaField string, d TerraformResourceData, config *Config) (string, error) {
+	// If the region and zone fields are identical (e.g. a GKE location), check
+	// whether the value is a zone first and derive its region; otherwise return it as a region.
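// Editorial aside -- an illustrative sketch, not part of the diff. Each
// predicate above follows the RetryErrorPredicateFunc shape: inspect an
// error, report whether to retry and why. That uniform shape is what lets
// ClientWithAdditionalRetries (seen earlier in this change) chain arbitrary
// predicates onto one HTTP client. shouldRetry is a hypothetical consumer.
package main

import (
	"fmt"
	"strings"

	"google.golang.org/api/googleapi"
)

type RetryErrorPredicateFunc func(error) (bool, string)

func datastoreIndex409Contention(err error) (bool, string) {
	if gerr, ok := err.(*googleapi.Error); ok {
		if gerr.Code == 409 && strings.Contains(gerr.Body, "too much contention") {
			return true, "too much contention - waiting for less activity"
		}
	}
	return false, ""
}

// shouldRetry runs err through each predicate and returns the first match.
func shouldRetry(err error, predicates ...RetryErrorPredicateFunc) (bool, string) {
	for _, p := range predicates {
		if retry, reason := p(err); retry {
			return true, reason
		}
	}
	return false, ""
}

func main() {
	err := &googleapi.Error{Code: 409, Body: "too much contention on these datastore entities"}
	fmt.Println(shouldRetry(err, datastoreIndex409Contention))
}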
+ if regionSchemaField == zoneSchemaField { + if v, ok := d.GetOk(regionSchemaField); ok { + if isZone(v.(string)) { + return getRegionFromZone(v.(string)), nil + } + + return v.(string), nil + } + } + if v, ok := d.GetOk(regionSchemaField); ok && regionSchemaField != "" { return GetResourceNameFromSelfLink(v.(string)), nil } diff --git a/third_party/terraform/utils/healthcare_utils.go.erb b/third_party/terraform/utils/healthcare_utils.go similarity index 97% rename from third_party/terraform/utils/healthcare_utils.go.erb rename to third_party/terraform/utils/healthcare_utils.go index 2726cd19250a..ada0f626fedf 100644 --- a/third_party/terraform/utils/healthcare_utils.go.erb +++ b/third_party/terraform/utils/healthcare_utils.go @@ -1,10 +1,9 @@ -<% autogen_exception -%> package google -<% unless version == 'ga' -%> + import ( -"fmt" -"regexp" -"strings" + "fmt" + "regexp" + "strings" ) type healthcareDatasetId struct { @@ -176,7 +175,6 @@ func parseHealthcareHl7V2StoreId(id string, config *Config) (*healthcareHl7V2Sto return nil, fmt.Errorf("Invalid Hl7V2Store id format, expecting `{projectId}/{locationId}/{datasetName}/{hl7V2StoreName}` or `{locationId}/{datasetName}/{hl7V2StoreName}.`") } - type healthcareDicomStoreId struct { DatasetId healthcareDatasetId Name string @@ -235,6 +233,3 @@ func parseHealthcareDicomStoreId(id string, config *Config) (*healthcareDicomSto } return nil, fmt.Errorf("Invalid DicomStore id format, expecting `{projectId}/{locationId}/{datasetName}/{dicomStoreName}` or `{locationId}/{datasetName}/{dicomStoreName}.`") } -<% else %> -// Magic Modules doesn't let us remove files - blank out beta-only common-compile files for now. -<% end -%> diff --git a/third_party/terraform/utils/iam.go.erb b/third_party/terraform/utils/iam.go.erb index b02f7fe273d0..52330479b062 100644 --- a/third_party/terraform/utils/iam.go.erb +++ b/third_party/terraform/utils/iam.go.erb @@ -266,7 +266,17 @@ func createIamBindingsMap(bindings []*cloudresourcemanager.Binding) map[iamBindi // Return list of Bindings for a map of role to member sets func listFromIamBindingMap(bm map[iamBindingKey]map[string]struct{}) []*cloudresourcemanager.Binding { rb := make([]*cloudresourcemanager.Binding, 0, len(bm)) - for key, members := range bm { + var keys []iamBindingKey + for k := range bm { + keys = append(keys, k) + } + sort.Slice(keys, func(i, j int) bool { + keyI := keys[i] + keyJ := keys[j] + return fmt.Sprintf("%s%s", keyI.Role, keyI.Condition.String()) < fmt.Sprintf("%s%s", keyJ.Role, keyJ.Condition.String()) + }) + for _, key := range keys { + members := bm[key] if len(members) == 0 { continue } diff --git a/third_party/terraform/utils/iam_bigquery_dataset.go b/third_party/terraform/utils/iam_bigquery_dataset.go new file mode 100644 index 000000000000..4041da43261a --- /dev/null +++ b/third_party/terraform/utils/iam_bigquery_dataset.go @@ -0,0 +1,236 @@ +package google + +import ( + "errors" + "fmt" + "strings" + + "github.com/hashicorp/errwrap" + "github.com/hashicorp/terraform-plugin-sdk/helper/schema" + "google.golang.org/api/cloudresourcemanager/v1" +) + +var IamBigqueryDatasetSchema = map[string]*schema.Schema{ + "dataset_id": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "project": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + }, +} + +var bigqueryAccessPrimitiveToRoleMap = map[string]string{ + "OWNER": "roles/bigquery.dataOwner", + "WRITER": "roles/bigquery.dataEditor", + "READER": "roles/bigquery.dataViewer", +} + +type 
BigqueryDatasetIamUpdater struct { + project string + datasetId string + Config *Config +} + +func NewBigqueryDatasetIamUpdater(d *schema.ResourceData, config *Config) (ResourceIamUpdater, error) { + project, err := getProject(d, config) + if err != nil { + return nil, err + } + + d.Set("project", project) + + return &BigqueryDatasetIamUpdater{ + project: project, + datasetId: d.Get("dataset_id").(string), + Config: config, + }, nil +} + +func BigqueryDatasetIdParseFunc(d *schema.ResourceData, config *Config) error { + fv, err := parseProjectFieldValue("datasets", d.Id(), "project", d, config, false) + if err != nil { + return err + } + + d.Set("project", fv.Project) + d.Set("dataset_id", fv.Name) + + // Explicitly set the id so imported resources have the same ID format as non-imported ones. + d.SetId(fv.RelativeLink()) + return nil +} + +func (u *BigqueryDatasetIamUpdater) GetResourceIamPolicy() (*cloudresourcemanager.Policy, error) { + url := fmt.Sprintf("%s%s", u.Config.BigQueryBasePath, u.GetResourceId()) + + res, err := sendRequest(u.Config, "GET", u.project, url, nil) + if err != nil { + return nil, errwrap.Wrapf(fmt.Sprintf("Error retrieving IAM policy for %s: {{err}}", u.DescribeResource()), err) + } + + policy, err := accessToPolicy(res["access"]) + if err != nil { + return nil, err + } + return policy, nil +} + +func (u *BigqueryDatasetIamUpdater) SetResourceIamPolicy(policy *cloudresourcemanager.Policy) error { + url := fmt.Sprintf("%s%s", u.Config.BigQueryBasePath, u.GetResourceId()) + + access, err := policyToAccess(policy) + if err != nil { + return err + } + obj := map[string]interface{}{ + "access": access, + } + + _, err = sendRequest(u.Config, "PATCH", u.project, url, obj) + if err != nil { + return fmt.Errorf("Error creating DatasetAccess: %s", err) + } + + return nil +} + +func accessToPolicy(access interface{}) (*cloudresourcemanager.Policy, error) { + if access == nil { + return nil, nil + } + roleToBinding := make(map[string]*cloudresourcemanager.Binding) + + accessArr := access.([]interface{}) + for _, v := range accessArr { + memberRole := v.(map[string]interface{}) + rawRole, ok := memberRole["role"] + if !ok { + // "view" allows role to not be defined. It is a special dataset access construct, so ignore + // If a user wants to manage "view" access they should use the `bigquery_dataset_access` resource + continue + } + role := rawRole.(string) + if iamRole, ok := bigqueryAccessPrimitiveToRoleMap[role]; ok { + // API changes certain IAM roles to legacy roles. 
Revert these changes + role = iamRole + } + member, err := accessToIamMember(memberRole) + if err != nil { + return nil, err + } + // We have to combine bindings manually + binding, ok := roleToBinding[role] + if !ok { + binding = &cloudresourcemanager.Binding{Role: role, Members: []string{}} + } + binding.Members = append(binding.Members, member) + + roleToBinding[role] = binding + } + bindings := make([]*cloudresourcemanager.Binding, 0) + for _, v := range roleToBinding { + bindings = append(bindings, v) + } + + return &cloudresourcemanager.Policy{Bindings: bindings}, nil +} + +func policyToAccess(policy *cloudresourcemanager.Policy) ([]map[string]interface{}, error) { + res := make([]map[string]interface{}, 0) + if len(policy.AuditConfigs) != 0 { + return nil, errors.New("Access policies not allowed on BigQuery Dataset IAM policies") + } + for _, binding := range policy.Bindings { + if binding.Condition != nil { + return nil, errors.New("IAM conditions not allowed on BigQuery Dataset IAM") + } + if fullRole, ok := bigqueryAccessPrimitiveToRoleMap[binding.Role]; ok { + return nil, fmt.Errorf("BigQuery Dataset legacy role %s is not allowed when using google_bigquery_dataset_iam resources. Please use the full form: %s", binding.Role, fullRole) + } + for _, member := range binding.Members { + access := map[string]interface{}{ + "role": binding.Role, + } + memberType, member, err := iamMemberToAccess(member) + if err != nil { + return nil, err + } + access[memberType] = member + res = append(res, access) + } + } + + return res, nil +} + +// Returns the member access type and member for an IAM member. +// Dataset access uses different member types to identify groups, domains, etc. +// these types are used as keys in the access JSON payload +func iamMemberToAccess(member string) (string, string, error) { + pieces := strings.SplitN(member, ":", 2) + if len(pieces) > 1 { + switch pieces[0] { + case "group": + return "groupByEmail", pieces[1], nil + case "domain": + return "domain", pieces[1], nil + case "user": + return "userByEmail", pieces[1], nil + case "serviceAccount": + return "userByEmail", pieces[1], nil + default: + return "", "", fmt.Errorf("Failed to parse BigQuery Dataset IAM member type: %s", member) + } + } + if member == "projectOwners" || member == "projectReaders" || member == "projectWriters" || member == "allAuthenticatedUsers" { + // These are special BigQuery Dataset permissions + return "specialGroup", member, nil + } + return "iamMember", member, nil +} + +func accessToIamMember(access map[string]interface{}) (string, error) { + // One of the fields must be set, we have to find which IAM member type this maps to + if member, ok := access["groupByEmail"]; ok { + return fmt.Sprintf("group:%s", member.(string)), nil + } + if member, ok := access["domain"]; ok { + return fmt.Sprintf("domain:%s", member.(string)), nil + } + if member, ok := access["specialGroup"]; ok { + return member.(string), nil + } + if member, ok := access["iamMember"]; ok { + return member.(string), nil + } + if _, ok := access["view"]; ok { + // view does not map to an IAM member, use access instead + return "", fmt.Errorf("Failed to convert BigQuery Dataset access to IAM member. To use views with a dataset, please use dataset_access") + } + if member, ok := access["userByEmail"]; ok { + // service accounts have "gservice" in their email. 
This is best guess due to lost information + if strings.Contains(member.(string), "gserviceaccount") { + return fmt.Sprintf("serviceAccount:%s", member.(string)), nil + } + return fmt.Sprintf("user:%s", member.(string)), nil + } + return "", fmt.Errorf("Failed to identify IAM member from BigQuery Dataset access: %v", access) +} + +func (u *BigqueryDatasetIamUpdater) GetResourceId() string { + return fmt.Sprintf("projects/%s/datasets/%s", u.project, u.datasetId) +} + +// Matches the mutex of google_big_query_dataset_access +func (u *BigqueryDatasetIamUpdater) GetMutexKey() string { + return fmt.Sprintf("%s", u.datasetId) +} + +func (u *BigqueryDatasetIamUpdater) DescribeResource() string { + return fmt.Sprintf("Bigquery Dataset %s/%s", u.project, u.datasetId) +} diff --git a/third_party/terraform/utils/iam_healthcare_dataset.go.erb b/third_party/terraform/utils/iam_healthcare_dataset.go similarity index 93% rename from third_party/terraform/utils/iam_healthcare_dataset.go.erb rename to third_party/terraform/utils/iam_healthcare_dataset.go index c6bbafb4dd20..5179fff850af 100644 --- a/third_party/terraform/utils/iam_healthcare_dataset.go.erb +++ b/third_party/terraform/utils/iam_healthcare_dataset.go @@ -1,9 +1,9 @@ -<% autogen_exception -%> package google -<% unless version == 'ga' -%> + import ( "fmt" - healthcare "google.golang.org/api/healthcare/v1beta1" + + healthcare "google.golang.org/api/healthcare/v1" "github.com/hashicorp/errwrap" "github.com/hashicorp/terraform-plugin-sdk/helper/schema" @@ -111,6 +111,3 @@ func healthcareToResourceManagerPolicy(p *healthcare.Policy) (*cloudresourcemana } return out, nil } -<% else %> -// Magic Modules doesn't let us remove files - blank out beta-only common-compile files for now. -<% end -%> diff --git a/third_party/terraform/utils/iam_healthcare_dicom_store.go.erb b/third_party/terraform/utils/iam_healthcare_dicom_store.go similarity index 92% rename from third_party/terraform/utils/iam_healthcare_dicom_store.go.erb rename to third_party/terraform/utils/iam_healthcare_dicom_store.go index b144c56b5549..ff7d946b1bc4 100644 --- a/third_party/terraform/utils/iam_healthcare_dicom_store.go.erb +++ b/third_party/terraform/utils/iam_healthcare_dicom_store.go @@ -1,10 +1,9 @@ -<% autogen_exception -%> package google -<% unless version == 'ga' -%> + import ( "fmt" - healthcare "google.golang.org/api/healthcare/v1beta1" + healthcare "google.golang.org/api/healthcare/v1" "github.com/hashicorp/errwrap" "github.com/hashicorp/terraform-plugin-sdk/helper/schema" @@ -93,6 +92,3 @@ func (u *HealthcareDicomStoreIamUpdater) GetMutexKey() string { func (u *HealthcareDicomStoreIamUpdater) DescribeResource() string { return fmt.Sprintf("Healthcare DicomStore %q", u.resourceId) } -<% else %> -// Magic Modules doesn't let us remove files - blank out beta-only common-compile files for now. 
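// Editorial aside -- an illustrative sketch, not part of the diff. BigQuery
// dataset "access" entries key members by type (groupByEmail, userByEmail,
// domain, specialGroup) while IAM members are single "type:value" strings;
// the helpers above translate between the two. This trimmed-down round trip
// mirrors that mapping, including the lossy gserviceaccount best-guess; it
// is a demo, not the provider code.
package main

import (
	"fmt"
	"strings"
)

func memberToAccessKey(member string) (key, value string) {
	parts := strings.SplitN(member, ":", 2)
	switch parts[0] {
	case "group":
		return "groupByEmail", parts[1]
	case "user", "serviceAccount":
		return "userByEmail", parts[1]
	case "domain":
		return "domain", parts[1]
	}
	return "specialGroup", member // e.g. projectOwners, allAuthenticatedUsers
}

func accessKeyToMember(key, value string) string {
	switch key {
	case "groupByEmail":
		return "group:" + value
	case "domain":
		return "domain:" + value
	case "userByEmail":
		// The API does not record whether this was a user or a service
		// account, hence the best-guess on the email's shape.
		if strings.Contains(value, "gserviceaccount") {
			return "serviceAccount:" + value
		}
		return "user:" + value
	}
	return value
}

func main() {
	k, v := memberToAccessKey("serviceAccount:sa@proj.iam.gserviceaccount.com")
	fmt.Println(k, v)                    // userByEmail sa@proj.iam.gserviceaccount.com
	fmt.Println(accessKeyToMember(k, v)) // serviceAccount:sa@proj.iam.gserviceaccount.com
}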
-<% end -%> diff --git a/third_party/terraform/utils/iam_healthcare_fhir_store.go.erb b/third_party/terraform/utils/iam_healthcare_fhir_store.go similarity index 92% rename from third_party/terraform/utils/iam_healthcare_fhir_store.go.erb rename to third_party/terraform/utils/iam_healthcare_fhir_store.go index 38a9d57bedfd..ebb513e5103d 100644 --- a/third_party/terraform/utils/iam_healthcare_fhir_store.go.erb +++ b/third_party/terraform/utils/iam_healthcare_fhir_store.go @@ -1,9 +1,9 @@ -<% autogen_exception -%> package google -<% unless version == 'ga' -%> + import ( "fmt" - healthcare "google.golang.org/api/healthcare/v1beta1" + + healthcare "google.golang.org/api/healthcare/v1" "github.com/hashicorp/errwrap" "github.com/hashicorp/terraform-plugin-sdk/helper/schema" @@ -92,6 +92,3 @@ func (u *HealthcareFhirStoreIamUpdater) GetMutexKey() string { func (u *HealthcareFhirStoreIamUpdater) DescribeResource() string { return fmt.Sprintf("Healthcare FhirStore %q", u.resourceId) } -<% else %> -// Magic Modules doesn't let us remove files - blank out beta-only common-compile files for now. -<% end -%> diff --git a/third_party/terraform/utils/iam_healthcare_hl7_v2_store.go.erb b/third_party/terraform/utils/iam_healthcare_hl7_v2_store.go similarity index 92% rename from third_party/terraform/utils/iam_healthcare_hl7_v2_store.go.erb rename to third_party/terraform/utils/iam_healthcare_hl7_v2_store.go index 0d41f7c1a62a..77f7561df5cc 100644 --- a/third_party/terraform/utils/iam_healthcare_hl7_v2_store.go.erb +++ b/third_party/terraform/utils/iam_healthcare_hl7_v2_store.go @@ -1,10 +1,9 @@ -<% autogen_exception -%> package google -<% unless version == 'ga' -%> + import ( "fmt" - healthcare "google.golang.org/api/healthcare/v1beta1" + healthcare "google.golang.org/api/healthcare/v1" "github.com/hashicorp/errwrap" "github.com/hashicorp/terraform-plugin-sdk/helper/schema" @@ -93,6 +92,3 @@ func (u *HealthcareHl7V2StoreIamUpdater) GetMutexKey() string { func (u *HealthcareHl7V2StoreIamUpdater) DescribeResource() string { return fmt.Sprintf("Healthcare Hl7V2Store %q", u.resourceId) } -<% else %> -// Magic Modules doesn't let us remove files - blank out beta-only common-compile files for now. -<% end -%> diff --git a/third_party/terraform/utils/iam_organization.go b/third_party/terraform/utils/iam_organization.go index 99ed48f7f669..99b742b8d18a 100644 --- a/third_party/terraform/utils/iam_organization.go +++ b/third_party/terraform/utils/iam_organization.go @@ -9,9 +9,10 @@ import ( var IamOrganizationSchema = map[string]*schema.Schema{ "org_id": { - Type: schema.TypeString, - Required: true, - ForceNew: true, + Type: schema.TypeString, + Required: true, + ForceNew: true, + Description: `The numeric ID of the organization in which you want to manage the audit logging config.`, }, } diff --git a/third_party/terraform/utils/iam_project.go b/third_party/terraform/utils/iam_project.go index aa2841d0cb93..fb61769df91d 100644 --- a/third_party/terraform/utils/iam_project.go +++ b/third_party/terraform/utils/iam_project.go @@ -18,6 +18,17 @@ var IamProjectSchema = map[string]*schema.Schema{ }, } +// In google_project_iam_policy, project is required and not inferred by +// getProject. 
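// Editorial aside -- an illustrative sketch, not part of the diff. The
// DiffSuppressFunc wired into the schema below makes "projects/my-proj" and
// "my-proj" compare equal by looking only at the trailing path segment,
// which is what GetResourceNameFromSelfLink extracts. A standalone
// equivalent, with lastSegment standing in for that helper:
package main

import (
	"fmt"
	"strings"
)

func lastSegment(s string) string {
	parts := strings.Split(s, "/")
	return parts[len(parts)-1]
}

func projectNamesEqual(old, new string) bool {
	return lastSegment(old) == lastSegment(new)
}

func main() {
	fmt.Println(projectNamesEqual("projects/my-proj", "my-proj")) // true
	fmt.Println(projectNamesEqual("projects/a", "b"))             // false
}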
+var IamPolicyProjectSchema = map[string]*schema.Schema{ + "project": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + DiffSuppressFunc: compareProjectName, + }, +} + type ProjectIamUpdater struct { resourceId string Config *Config @@ -37,6 +48,15 @@ func NewProjectIamUpdater(d *schema.ResourceData, config *Config) (ResourceIamUp }, nil } +// NewProjectIamPolicyUpdater is similar to NewProjectIamUpdater, except that it +// doesn't call getProject and only uses an explicitly set project. +func NewProjectIamPolicyUpdater(d *schema.ResourceData, config *Config) (ResourceIamUpdater, error) { + return &ProjectIamUpdater{ + resourceId: d.Get("project").(string), + Config: config, + }, nil +} + func ProjectIdParseFunc(d *schema.ResourceData, _ *Config) error { d.Set("project", d.Id()) return nil @@ -84,3 +104,12 @@ func (u *ProjectIamUpdater) GetMutexKey() string { func (u *ProjectIamUpdater) DescribeResource() string { return fmt.Sprintf("project %q", u.resourceId) } + +func compareProjectName(_, old, new string, _ *schema.ResourceData) bool { + // We can either get "projects/project-id" or "project-id", so strip any prefixes + return GetResourceNameFromSelfLink(old) == GetResourceNameFromSelfLink(new) +} + +func getProjectIamPolicyMutexKey(pid string) string { + return fmt.Sprintf("iam-project-%s", pid) +} diff --git a/third_party/terraform/utils/import.go b/third_party/terraform/utils/import.go index 7ec578c8c340..c01ec187758e 100644 --- a/third_party/terraform/utils/import.go +++ b/third_party/terraform/utils/import.go @@ -145,7 +145,7 @@ func getImportIdQualifiers(idRegexes []string, d TerraformResourceData, config * return result, nil } } - return nil, fmt.Errorf("Import id %q doesn't match any of the accepted formats: %v", d.Id(), idRegexes) + return nil, fmt.Errorf("Import id %q doesn't match any of the accepted formats: %v", id, idRegexes) } // Returns a set of default values that are contained in a regular expression diff --git a/third_party/terraform/utils/metadata.go b/third_party/terraform/utils/metadata.go index a643d68bdfb5..068366764484 100644 --- a/third_party/terraform/utils/metadata.go +++ b/third_party/terraform/utils/metadata.go @@ -4,6 +4,7 @@ import ( "errors" "fmt" "log" + "sort" computeBeta "google.golang.org/api/compute/v0.beta" "google.golang.org/api/compute/v1" @@ -101,9 +102,14 @@ func BetaMetadataUpdate(oldMDMap map[string]interface{}, newMDMap map[string]int func expandComputeMetadata(m map[string]interface{}) []*compute.MetadataItems { metadata := make([]*compute.MetadataItems, len(m)) + var keys []string + for key := range m { + keys = append(keys, key) + } + sort.Strings(keys) // Append new metadata to existing metadata - for key, val := range m { - v := val.(string) + for _, key := range keys { + v := m[key].(string) metadata = append(metadata, &compute.MetadataItems{ Key: key, Value: &v, @@ -144,10 +150,15 @@ func resourceInstanceMetadata(d TerraformResourceData) (*computeBeta.Metadata, e } if len(mdMap) > 0 { m.Items = make([]*computeBeta.MetadataItems, 0, len(mdMap)) - for key, val := range mdMap { - v := val.(string) + var keys []string + for k := range mdMap { + keys = append(keys, k) + } + sort.Strings(keys) + for _, k := range keys { + v := mdMap[k].(string) m.Items = append(m.Items, &computeBeta.MetadataItems{ - Key: key, + Key: k, Value: &v, }) } diff --git a/third_party/terraform/utils/node_config.go.erb b/third_party/terraform/utils/node_config.go.erb index 852fb8a808b0..0c139786d94e 100644 --- 
a/third_party/terraform/utils/node_config.go.erb +++ b/third_party/terraform/utils/node_config.go.erb @@ -2,7 +2,6 @@ package google import ( - "strconv" "strings" "github.com/hashicorp/terraform-plugin-sdk/helper/schema" @@ -20,240 +19,246 @@ var defaultOauthScopes = []string{ "https://www.googleapis.com/auth/trace.append", } -var schemaNodeConfig = &schema.Schema{ - Type: schema.TypeList, - Optional: true, - Computed: true, - ForceNew: true, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "disk_size_gb": { - Type: schema.TypeInt, - Optional: true, - Computed: true, - ForceNew: true, - ValidateFunc: validation.IntAtLeast(10), - }, +func schemaNodeConfig() *schema.Schema { + return &schema.Schema{ + Type: schema.TypeList, + Optional: true, + Computed: true, + ForceNew: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "disk_size_gb": { + Type: schema.TypeInt, + Optional: true, + Computed: true, + ForceNew: true, + ValidateFunc: validation.IntAtLeast(10), + }, - "disk_type": { - Type: schema.TypeString, - Optional: true, - Computed: true, - ForceNew: true, - ValidateFunc: validation.StringInSlice([]string{"pd-standard", "pd-ssd"}, false), - }, + "disk_type": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + ValidateFunc: validation.StringInSlice([]string{"pd-standard", "pd-ssd"}, false), + }, - "guest_accelerator": &schema.Schema{ - Type: schema.TypeList, - Optional: true, - Computed: true, - ForceNew: true, - // Legacy config mode allows removing GPU's from an existing resource - // See https://www.terraform.io/docs/configuration/attr-as-blocks.html - ConfigMode: schema.SchemaConfigModeAttr, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "count": &schema.Schema{ - Type: schema.TypeInt, - Required: true, - ForceNew: true, - }, - "type": &schema.Schema{ - Type: schema.TypeString, - Required: true, - ForceNew: true, - DiffSuppressFunc: compareSelfLinkOrResourceName, + "guest_accelerator": &schema.Schema{ + Type: schema.TypeList, + Optional: true, + Computed: true, + ForceNew: true, + // Legacy config mode allows removing GPU's from an existing resource + // See https://www.terraform.io/docs/configuration/attr-as-blocks.html + ConfigMode: schema.SchemaConfigModeAttr, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "count": &schema.Schema{ + Type: schema.TypeInt, + Required: true, + ForceNew: true, + }, + "type": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + DiffSuppressFunc: compareSelfLinkOrResourceName, + }, }, }, }, - }, - "image_type": { - Type: schema.TypeString, - Optional: true, - Computed: true, - }, + "image_type": { + Type: schema.TypeString, + Optional: true, + Computed: true, + }, - "labels": { - Type: schema.TypeMap, - Optional: true, - // Computed=true because GKE Sandbox will automatically add labels to nodes that can/cannot run sandboxed pods. - Computed: true, - ForceNew: true, - Elem: &schema.Schema{Type: schema.TypeString}, - }, + "labels": { + Type: schema.TypeMap, + Optional: true, + // Computed=true because GKE Sandbox will automatically add labels to nodes that can/cannot run sandboxed pods. + Computed: true, + ForceNew: true, + Elem: &schema.Schema{Type: schema.TypeString}, + <% unless version.nil? 
|| version == 'ga' -%> + DiffSuppressFunc: containerNodePoolLabelsSuppress, + <% end -%> + }, - "local_ssd_count": { - Type: schema.TypeInt, - Optional: true, - Computed: true, - ForceNew: true, - ValidateFunc: validation.IntAtLeast(0), - }, + "local_ssd_count": { + Type: schema.TypeInt, + Optional: true, + Computed: true, + ForceNew: true, + ValidateFunc: validation.IntAtLeast(0), + }, - "machine_type": { - Type: schema.TypeString, - Optional: true, - Computed: true, - ForceNew: true, - }, + "machine_type": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + }, - "metadata": { - Type: schema.TypeMap, - Optional: true, - Computed: true, - ForceNew: true, - Elem: &schema.Schema{Type: schema.TypeString}, - }, + "metadata": { + Type: schema.TypeMap, + Optional: true, + Computed: true, + ForceNew: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, - "min_cpu_platform": { - Type: schema.TypeString, - Optional: true, - ForceNew: true, - }, + "min_cpu_platform": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, - "oauth_scopes": { - Type: schema.TypeSet, - Optional: true, - Computed: true, - ForceNew: true, - Elem: &schema.Schema{ - Type: schema.TypeString, - StateFunc: func(v interface{}) string { - return canonicalizeServiceScope(v.(string)) + "oauth_scopes": { + Type: schema.TypeSet, + Optional: true, + Computed: true, + ForceNew: true, + Elem: &schema.Schema{ + Type: schema.TypeString, + StateFunc: func(v interface{}) string { + return canonicalizeServiceScope(v.(string)) + }, }, + DiffSuppressFunc: containerClusterAddedScopesSuppress, + Set: stringScopeHashcode, }, - DiffSuppressFunc: containerClusterAddedScopesSuppress, - Set: stringScopeHashcode, - }, - "preemptible": { - Type: schema.TypeBool, - Optional: true, - ForceNew: true, - Default: false, - }, + "preemptible": { + Type: schema.TypeBool, + Optional: true, + ForceNew: true, + Default: false, + }, - "service_account": { - Type: schema.TypeString, - Optional: true, - Computed: true, - ForceNew: true, - }, + "service_account": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + }, - "tags": { - Type: schema.TypeList, - Optional: true, - ForceNew: true, - Elem: &schema.Schema{Type: schema.TypeString}, - }, + "tags": { + Type: schema.TypeList, + Optional: true, + ForceNew: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, - "shielded_instance_config": &schema.Schema{ - Type: schema.TypeList, - Optional: true, - Computed: true, - ForceNew: true, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "enable_secure_boot": { - Type: schema.TypeBool, - Optional: true, - ForceNew: true, - Default: false, - }, - "enable_integrity_monitoring": { - Type: schema.TypeBool, - Optional: true, - ForceNew: true, - Default: true, + "shielded_instance_config": &schema.Schema{ + Type: schema.TypeList, + Optional: true, + Computed: true, + ForceNew: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "enable_secure_boot": { + Type: schema.TypeBool, + Optional: true, + ForceNew: true, + Default: false, + }, + "enable_integrity_monitoring": { + Type: schema.TypeBool, + Optional: true, + ForceNew: true, + Default: true, + }, }, }, }, - }, - "taint": { - Type: schema.TypeList, - Optional: true, - // Computed=true because GKE Sandbox will automatically add taints to nodes that can/cannot run sandboxed pods. 
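// Editorial aside -- an illustrative sketch, not part of the diff. The
// conversion above of schemaNodeConfig from a package-level var into a
// constructor function presumably ensures that each resource embedding the
// node config gets its own *schema.Schema tree: with a shared var, attaching
// a DiffSuppressFunc for one resource would silently leak into every other
// resource holding the same pointer. A hypothetical miniature of that
// failure mode:
package main

import "fmt"

type Schema struct{ Description string }

var sharedSchema = &Schema{Description: "node config"} // one instance for all callers

func newSchema() *Schema { // fresh instance per caller
	return &Schema{Description: "node config"}
}

func main() {
	a, b := sharedSchema, sharedSchema
	a.Description = "mutated"
	fmt.Println(b.Description) // "mutated" -- the change leaked

	c, d := newSchema(), newSchema()
	c.Description = "mutated"
	fmt.Println(d.Description) // "node config" -- isolated
}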
- Computed: true, - ForceNew: true, - // Legacy config mode allows explicitly defining an empty taint. - // See https://www.terraform.io/docs/configuration/attr-as-blocks.html - ConfigMode: schema.SchemaConfigModeAttr, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "key": { - Type: schema.TypeString, - Required: true, - ForceNew: true, - }, - "value": { - Type: schema.TypeString, - Required: true, - ForceNew: true, - }, - "effect": { - Type: schema.TypeString, - Required: true, - ForceNew: true, - ValidateFunc: validation.StringInSlice([]string{"NO_SCHEDULE", "PREFER_NO_SCHEDULE", "NO_EXECUTE"}, false), + "taint": { + Type: schema.TypeList, + Optional: true, + // Computed=true because GKE Sandbox will automatically add taints to nodes that can/cannot run sandboxed pods. + Computed: true, + ForceNew: true, + // Legacy config mode allows explicitly defining an empty taint. + // See https://www.terraform.io/docs/configuration/attr-as-blocks.html + ConfigMode: schema.SchemaConfigModeAttr, + <% unless version.nil? || version == 'ga' -%> + DiffSuppressFunc: containerNodePoolTaintSuppress, + <% end -%> + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "key": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "value": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "effect": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: validation.StringInSlice([]string{"NO_SCHEDULE", "PREFER_NO_SCHEDULE", "NO_EXECUTE"}, false), + }, }, }, }, - }, - "workload_metadata_config": { -<% if version.nil? || version == 'ga' -%> - Removed: "This field is in beta. Use it in the the google-beta provider instead. See https://terraform.io/docs/providers/google/guides/provider_versions.html for more details.", - Computed: true, -<% end -%> - Type: schema.TypeList, - Optional: true, - ForceNew: true, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "node_metadata": { - Type: schema.TypeString, - Required: true, - ForceNew: true, - ValidateFunc: validation.StringInSlice([]string{"UNSPECIFIED", "SECURE", "EXPOSE", "GKE_METADATA_SERVER"}, false), + "workload_metadata_config": { + <% if version.nil? || version == 'ga' -%> + Removed: "This field is in beta. Use it in the the google-beta provider instead. See https://terraform.io/docs/providers/google/guides/provider_versions.html for more details.", + <% end -%> + Computed: true, + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "node_metadata": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringInSlice([]string{"UNSPECIFIED", "SECURE", "EXPOSE", "GKE_METADATA_SERVER"}, false), + }, }, }, }, - }, - "sandbox_config": { -<% if version.nil? || version == 'ga' -%> - Removed: "This field is in beta. Use it in the the google-beta provider instead. See https://terraform.io/docs/providers/google/guides/provider_versions.html for more details.", - Computed: true, -<% end -%> - Type: schema.TypeList, - Optional: true, - ForceNew: true, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "sandbox_type": { - Type: schema.TypeString, - Required: true, - ValidateFunc: validation.StringInSlice([]string{"gvisor"}, false), + "sandbox_config": { + <% if version.nil? || version == 'ga' -%> + Removed: "This field is in beta. Use it in the the google-beta provider instead. 
See https://terraform.io/docs/providers/google/guides/provider_versions.html for more details.", + Computed: true, + <% end -%> + Type: schema.TypeList, + Optional: true, + ForceNew: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "sandbox_type": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringInSlice([]string{"gvisor"}, false), + }, }, }, }, - }, -<% unless version == 'ga' -%> - "boot_disk_kms_key": { - Type: schema.TypeString, - Optional: true, - ForceNew: true, + <% unless version == 'ga' -%> + "boot_disk_kms_key": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + <% end -%> }, -<% end -%> }, - }, + } } func expandNodeConfig(v interface{}) *containerBeta.NodeConfig { @@ -376,11 +381,8 @@ func expandNodeConfig(v interface{}) *containerBeta.NodeConfig { } <% unless version == 'ga' -%> - if v, ok := nodeConfig["workload_metadata_config"]; ok && len(v.([]interface{})) > 0 { - conf := v.([]interface{})[0].(map[string]interface{}) - nc.WorkloadMetadataConfig = &containerBeta.WorkloadMetadataConfig{ - NodeMetadata: conf["node_metadata"].(string), - } + if v, ok := nodeConfig["workload_metadata_config"]; ok { + nc.WorkloadMetadataConfig = expandWorkloadMetadataConfig(v) } if v, ok := nodeConfig["sandbox_config"]; ok && len(v.([]interface{})) > 0 { @@ -398,6 +400,23 @@ func expandNodeConfig(v interface{}) *containerBeta.NodeConfig { return nc } +<% unless version == 'ga' -%> +func expandWorkloadMetadataConfig(v interface{}) *containerBeta.WorkloadMetadataConfig { + if v == nil { + return nil + } + ls := v.([]interface{}) + if len(ls) == 0 { + return nil + } + + cfg := ls[0].(map[string]interface{}) + return &containerBeta.WorkloadMetadataConfig{ + NodeMetadata: cfg["node_metadata"].(string), + } +} + +<% end -%> func flattenNodeConfig(c *containerBeta.NodeConfig) []map[string]interface{} { config := make([]map[string]interface{}, 0, 1) @@ -488,4 +507,119 @@ func flattenSandboxConfig(c *containerBeta.SandboxConfig) []map[string]interface } return result } + +func containerNodePoolLabelsSuppress(k, old, new string, d *schema.ResourceData) bool { + // Node configs are embedded into multiple resources (container cluster and + // container node pool) so we determine the node config key dynamically. + idx := strings.Index(k, ".labels.") + if idx < 0 { + return false + } + + root := k[:idx] + + // Right now, GKE only applies its own out-of-band labels when you enable + // Sandbox. We only need to perform diff suppression in this case; + // otherwise, the default Terraform behavior is fine. + o, n := d.GetChange(root + ".sandbox_config.0.sandbox_type") + if o == nil || n == nil { + return false + } + + // Pull the entire changeset as a list rather than trying to deal with each + // element individually. + o, n = d.GetChange(root + ".labels") + if o == nil || n == nil { + return false + } + + labels := n.(map[string]interface{}) + + // Remove all current labels, skipping GKE-managed ones if not present in + // the new configuration. + for key, value := range o.(map[string]interface{}) { + if nv, ok := labels[key]; ok && nv == value { + delete(labels, key) + } else if !strings.HasPrefix(key, "sandbox.gke.io/") { + // User-provided label removed in new configuration. + return false + } + } + + // If, at this point, the map still has elements, the new configuration + // added an additional taint. 
+ if len(labels) > 0 { + return false + } + + return true +} + +func containerNodePoolTaintSuppress(k, old, new string, d *schema.ResourceData) bool { + // Node configs are embedded into multiple resources (container cluster and + // container node pool) so we determine the node config key dynamically. + idx := strings.Index(k, ".taint.") + if idx < 0 { + return false + } + + root := k[:idx] + + // Right now, GKE only applies its own out-of-band labels when you enable + // Sandbox. We only need to perform diff suppression in this case; + // otherwise, the default Terraform behavior is fine. + o, n := d.GetChange(root + ".sandbox_config.0.sandbox_type") + if o == nil || n == nil { + return false + } + + // Pull the entire changeset as a list rather than trying to deal with each + // element individually. + o, n = d.GetChange(root + ".taint") + if o == nil || n == nil { + return false + } + + type taintType struct { + Key, Value, Effect string + } + + taintSet := make(map[taintType]struct{}) + + // Add all new taints to set. + for _, raw := range n.([]interface{}) { + data := raw.(map[string]interface{}) + taint := taintType{ + Key: data["key"].(string), + Value: data["value"].(string), + Effect: data["effect"].(string), + } + taintSet[taint] = struct{}{} + } + + // Remove all current taints, skipping GKE-managed keys if not present in + // the new configuration. + for _, raw := range o.([]interface{}) { + data := raw.(map[string]interface{}) + taint := taintType{ + Key: data["key"].(string), + Value: data["value"].(string), + Effect: data["effect"].(string), + } + if _, ok := taintSet[taint]; ok { + delete(taintSet, taint) + } else if !strings.HasPrefix(taint.Key, "sandbox.gke.io/") { + // User-provided taint removed in new configuration. + return false + } + } + + // If, at this point, the set still has elements, the new configuration + // added an additional taint. 
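// Editorial aside -- an illustrative sketch, not part of the diff. The label
// and taint suppressors in this change compute a set difference: start from
// the configured entries, cancel out everything the API reports, and only
// suppress when the leftover differences are GKE-managed
// (sandbox.gke.io/...) entries. A standalone version of the taint check:
package main

import (
	"fmt"
	"strings"
)

type taint struct{ Key, Value, Effect string }

func onlySandboxManagedDiff(configured, live []taint) bool {
	set := make(map[taint]struct{}, len(configured))
	for _, t := range configured {
		set[t] = struct{}{}
	}
	for _, t := range live {
		if _, ok := set[t]; ok {
			delete(set, t) // present in both: not a diff
		} else if !strings.HasPrefix(t.Key, "sandbox.gke.io/") {
			return false // a user-provided taint was removed: real diff
		}
	}
	return len(set) == 0 // anything left over was newly added: real diff
}

func main() {
	configured := []taint{{"app", "web", "NO_SCHEDULE"}}
	live := []taint{
		{"app", "web", "NO_SCHEDULE"},
		{"sandbox.gke.io/runtime", "gvisor", "NO_SCHEDULE"},
	}
	fmt.Println(onlySandboxManagedDiff(configured, live)) // true: suppress
}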
+ if len(taintSet) > 0 { + return false + } + + return true +} <% end -%> diff --git a/third_party/terraform/utils/provider.go.erb b/third_party/terraform/utils/provider.go.erb index 8b639138e564..a03d32982e0b 100644 --- a/third_party/terraform/utils/provider.go.erb +++ b/third_party/terraform/utils/provider.go.erb @@ -135,7 +135,6 @@ func Provider() terraform.ResourceProvider { IAMCustomEndpointEntryKey: IAMCustomEndpointEntry, ServiceNetworkingCustomEndpointEntryKey: ServiceNetworkingCustomEndpointEntry, ServiceUsageCustomEndpointEntryKey: ServiceUsageCustomEndpointEntry, - CloudIoTCustomEndpointEntryKey: CloudIoTCustomEndpointEntry, StorageTransferCustomEndpointEntryKey: StorageTransferCustomEndpointEntry, BigtableAdminCustomEndpointEntryKey: BigtableAdminCustomEndpointEntry, }, @@ -147,10 +146,14 @@ func Provider() terraform.ResourceProvider { "google_client_config": dataSourceGoogleClientConfig(), "google_client_openid_userinfo": dataSourceGoogleClientOpenIDUserinfo(), "google_cloudfunctions_function": dataSourceGoogleCloudFunctionsFunction(), + <% unless version == 'ga' -%> + "google_cloud_identity_groups": dataSourceGoogleCloudIdentityGroups(), + "google_cloud_identity_group_memberships": dataSourceGoogleCloudIdentityGroupMemberships(), + <% end -%> "google_composer_image_versions": dataSourceGoogleComposerImageVersions(), "google_compute_address": dataSourceGoogleComputeAddress(), "google_compute_backend_service": dataSourceGoogleComputeBackendService(), - "google_compute_backend_bucket": dataSourceGoogleComputeBackendBucket(), + "google_compute_backend_bucket": dataSourceGoogleComputeBackendBucket(), "google_compute_default_service_account": dataSourceGoogleComputeDefaultServiceAccount(), "google_compute_forwarding_rule": dataSourceGoogleComputeForwardingRule(), "google_compute_global_address": dataSourceGoogleComputeGlobalAddress(), @@ -179,13 +182,21 @@ func Provider() terraform.ResourceProvider { "google_container_registry_repository": dataSourceGoogleContainerRepo(), "google_dns_keys": dataSourceDNSKeys(), "google_dns_managed_zone": dataSourceDnsManagedZone(), + <% unless version == 'ga' -%> + "google_game_services_game_server_deployment_rollout": dataSourceGameServicesGameServerDeploymentRollout(), + <% end -%> "google_iam_policy": dataSourceGoogleIamPolicy(), "google_iam_role": dataSourceGoogleIamRole(), + "google_iam_testable_permissions": dataSourceGoogleIamTestablePermissions(), "google_kms_crypto_key": dataSourceGoogleKmsCryptoKey(), "google_kms_crypto_key_version": dataSourceGoogleKmsCryptoKeyVersion(), "google_kms_key_ring": dataSourceGoogleKmsKeyRing(), "google_kms_secret": dataSourceGoogleKmsSecret(), "google_kms_secret_ciphertext": dataSourceGoogleKmsSecretCiphertext(), + <% unless version == 'ga' -%> + "google_firebase_web_app": dataSourceGoogleFirebaseWebApp(), + "google_firebase_web_app_config": dataSourceGoogleFirebaseWebappConfig(), + <% end -%> "google_folder": dataSourceGoogleFolder(), "google_folder_organization_policy": dataSourceGoogleFolderOrganizationPolicy(), "google_monitoring_notification_channel": dataSourceMonitoringNotificationChannel(), @@ -196,18 +207,19 @@ func Provider() terraform.ResourceProvider { "google_project": dataSourceGoogleProject(), "google_projects": dataSourceGoogleProjects(), "google_project_organization_policy": dataSourceGoogleProjectOrganizationPolicy(), - <% unless version == 'ga' -%> "google_secret_manager_secret_version": dataSourceSecretManagerSecretVersion(), - <% end -%> "google_service_account": 
dataSourceGoogleServiceAccount(), "google_service_account_access_token": dataSourceGoogleServiceAccountAccessToken(), + "google_service_account_id_token": dataSourceGoogleServiceAccountIdToken(), "google_service_account_key": dataSourceGoogleServiceAccountKey(), "google_sql_ca_certs": dataSourceGoogleSQLCaCerts(), + "google_sql_database_instance": dataSourceSqlDatabaseInstance(), "google_storage_bucket_object": dataSourceGoogleStorageBucketObject(), "google_storage_object_signed_url": dataSourceGoogleSignedUrl(), "google_storage_project_service_account": dataSourceGoogleStorageProjectServiceAccount(), "google_storage_transfer_project_service_account": dataSourceGoogleStorageTransferProjectServiceAccount(), "google_tpu_tensorflow_versions": dataSourceTpuTensorflowVersions(), + "google_redis_instance": dataSourceGoogleRedisInstance(), }, ResourcesMap: ResourceMap(), @@ -286,11 +298,13 @@ end # products.each do "google_bigtable_instance_iam_member": ResourceIamMember(IamBigtableInstanceSchema, NewBigtableInstanceUpdater, BigtableInstanceIdParseFunc), "google_bigtable_instance_iam_policy": ResourceIamPolicy(IamBigtableInstanceSchema, NewBigtableInstanceUpdater, BigtableInstanceIdParseFunc), "google_bigtable_table": resourceBigtableTable(), + "google_bigquery_dataset_iam_binding": ResourceIamBinding(IamBigqueryDatasetSchema, NewBigqueryDatasetIamUpdater, BigqueryDatasetIdParseFunc), + "google_bigquery_dataset_iam_member": ResourceIamMember(IamBigqueryDatasetSchema, NewBigqueryDatasetIamUpdater, BigqueryDatasetIdParseFunc), + "google_bigquery_dataset_iam_policy": ResourceIamPolicy(IamBigqueryDatasetSchema, NewBigqueryDatasetIamUpdater, BigqueryDatasetIdParseFunc), "google_billing_account_iam_binding": ResourceIamBinding(IamBillingAccountSchema, NewBillingAccountIamUpdater, BillingAccountIdParseFunc), "google_billing_account_iam_member": ResourceIamMember(IamBillingAccountSchema, NewBillingAccountIamUpdater, BillingAccountIdParseFunc), "google_billing_account_iam_policy": ResourceIamPolicy(IamBillingAccountSchema, NewBillingAccountIamUpdater, BillingAccountIdParseFunc), "google_cloudfunctions_function": resourceCloudFunctionsFunction(), - "google_cloudiot_registry": resourceCloudIoTRegistry(), "google_composer_environment": resourceComposerEnvironment(), "google_compute_attached_disk": resourceComputeAttachedDisk(), "google_compute_instance": resourceComputeInstance(), @@ -312,6 +326,9 @@ end # products.each do "google_container_node_pool": resourceContainerNodePool(), "google_container_registry": resourceContainerRegistry(), "google_dataflow_job": resourceDataflowJob(), + <% unless version == 'ga' -%> + "google_dataflow_flex_template_job": resourceDataflowFlexTemplateJob(), + <% end -%> "google_dataproc_cluster": resourceDataprocCluster(), "google_dataproc_cluster_iam_binding": ResourceIamBinding(IamDataprocClusterSchema, NewDataprocClusterUpdater, DataprocClusterIdParseFunc), "google_dataproc_cluster_iam_member": ResourceIamMember(IamDataprocClusterSchema, NewDataprocClusterUpdater, DataprocClusterIdParseFunc), @@ -326,8 +343,8 @@ end # products.each do "google_folder_iam_binding": ResourceIamBinding(IamFolderSchema, NewFolderIamUpdater, FolderIdParseFunc), "google_folder_iam_member": ResourceIamMember(IamFolderSchema, NewFolderIamUpdater, FolderIdParseFunc), "google_folder_iam_policy": ResourceIamPolicy(IamFolderSchema, NewFolderIamUpdater, FolderIdParseFunc), + "google_folder_iam_audit_config": ResourceIamAuditConfig(IamFolderSchema, NewFolderIamUpdater, FolderIdParseFunc), 
"google_folder_organization_policy": resourceGoogleFolderOrganizationPolicy(), -<% unless version == 'ga' -%> "google_healthcare_dataset_iam_binding": ResourceIamBindingWithBatching(IamHealthcareDatasetSchema, NewHealthcareDatasetIamUpdater, DatasetIdParseFunc, IamBatchingEnabled), "google_healthcare_dataset_iam_member": ResourceIamMemberWithBatching(IamHealthcareDatasetSchema, NewHealthcareDatasetIamUpdater, DatasetIdParseFunc, IamBatchingEnabled), "google_healthcare_dataset_iam_policy": ResourceIamPolicy(IamHealthcareDatasetSchema, NewHealthcareDatasetIamUpdater, DatasetIdParseFunc), @@ -340,21 +357,25 @@ end # products.each do "google_healthcare_hl7_v2_store_iam_binding": ResourceIamBindingWithBatching(IamHealthcareHl7V2StoreSchema, NewHealthcareHl7V2StoreIamUpdater, Hl7V2StoreIdParseFunc, IamBatchingEnabled), "google_healthcare_hl7_v2_store_iam_member": ResourceIamMemberWithBatching(IamHealthcareHl7V2StoreSchema, NewHealthcareHl7V2StoreIamUpdater, Hl7V2StoreIdParseFunc, IamBatchingEnabled), "google_healthcare_hl7_v2_store_iam_policy": ResourceIamPolicy(IamHealthcareHl7V2StoreSchema, NewHealthcareHl7V2StoreIamUpdater, Hl7V2StoreIdParseFunc), -<% end -%> "google_logging_billing_account_sink": resourceLoggingBillingAccountSink(), "google_logging_billing_account_exclusion": ResourceLoggingExclusion(BillingAccountLoggingExclusionSchema, NewBillingAccountLoggingExclusionUpdater, billingAccountLoggingExclusionIdParseFunc), + "google_logging_billing_account_bucket_config": ResourceLoggingBillingAccountBucketConfig(), "google_logging_organization_sink": resourceLoggingOrganizationSink(), "google_logging_organization_exclusion": ResourceLoggingExclusion(OrganizationLoggingExclusionSchema, NewOrganizationLoggingExclusionUpdater, organizationLoggingExclusionIdParseFunc), + "google_logging_organization_bucket_config": ResourceLoggingOrganizationBucketConfig(), "google_logging_folder_sink": resourceLoggingFolderSink(), "google_logging_folder_exclusion": ResourceLoggingExclusion(FolderLoggingExclusionSchema, NewFolderLoggingExclusionUpdater, folderLoggingExclusionIdParseFunc), + "google_logging_folder_bucket_config": ResourceLoggingFolderBucketConfig(), "google_logging_project_sink": resourceLoggingProjectSink(), "google_logging_project_exclusion": ResourceLoggingExclusion(ProjectLoggingExclusionSchema, NewProjectLoggingExclusionUpdater, projectLoggingExclusionIdParseFunc), + "google_logging_project_bucket_config": ResourceLoggingProjectBucketConfig(), "google_kms_key_ring_iam_binding": ResourceIamBinding(IamKmsKeyRingSchema, NewKmsKeyRingIamUpdater, KeyRingIdParseFunc), "google_kms_key_ring_iam_member": ResourceIamMember(IamKmsKeyRingSchema, NewKmsKeyRingIamUpdater, KeyRingIdParseFunc), "google_kms_key_ring_iam_policy": ResourceIamPolicy(IamKmsKeyRingSchema, NewKmsKeyRingIamUpdater, KeyRingIdParseFunc), "google_kms_crypto_key_iam_binding": ResourceIamBinding(IamKmsCryptoKeySchema, NewKmsCryptoKeyIamUpdater, CryptoIdParseFunc), "google_kms_crypto_key_iam_member": ResourceIamMember(IamKmsCryptoKeySchema, NewKmsCryptoKeyIamUpdater, CryptoIdParseFunc), "google_kms_crypto_key_iam_policy": ResourceIamPolicy(IamKmsCryptoKeySchema, NewKmsCryptoKeyIamUpdater, CryptoIdParseFunc), + "google_monitoring_dashboard": resourceMonitoringDashboard(), "google_service_networking_connection": resourceServiceNetworkingConnection(), "google_spanner_instance_iam_binding": ResourceIamBinding(IamSpannerInstanceSchema, NewSpannerInstanceIamUpdater, SpannerInstanceIdParseFunc), "google_spanner_instance_iam_member": 
ResourceIamMember(IamSpannerInstanceSchema, NewSpannerInstanceIamUpdater, SpannerInstanceIdParseFunc), @@ -368,11 +389,11 @@ end # products.each do "google_organization_iam_binding": ResourceIamBinding(IamOrganizationSchema, NewOrganizationIamUpdater, OrgIdParseFunc), "google_organization_iam_custom_role": resourceGoogleOrganizationIamCustomRole(), "google_organization_iam_member": ResourceIamMember(IamOrganizationSchema, NewOrganizationIamUpdater, OrgIdParseFunc), - "google_organization_iam_policy": ResourceIamPolicy(IamOrganizationSchema, NewOrganizationIamUpdater, OrgIdParseFunc), - "google_organization_iam_audit_config": ResourceIamAuditConfig(IamOrganizationSchema, NewOrganizationIamUpdater, OrgIdParseFunc), + "google_organization_iam_policy": ResourceIamPolicy(IamOrganizationSchema, NewOrganizationIamUpdater, OrgIdParseFunc), + "google_organization_iam_audit_config": ResourceIamAuditConfig(IamOrganizationSchema, NewOrganizationIamUpdater, OrgIdParseFunc), "google_organization_policy": resourceGoogleOrganizationPolicy(), "google_project": resourceGoogleProject(), - "google_project_iam_policy": resourceGoogleProjectIamPolicy(), + "google_project_iam_policy": ResourceIamPolicy(IamPolicyProjectSchema, NewProjectIamPolicyUpdater, ProjectIdParseFunc), "google_project_iam_binding": ResourceIamBindingWithBatching(IamProjectSchema, NewProjectIamUpdater, ProjectIdParseFunc, IamBatchingEnabled), "google_project_iam_member": ResourceIamMemberWithBatching(IamProjectSchema, NewProjectIamUpdater, ProjectIdParseFunc, IamBatchingEnabled), "google_project_iam_audit_config": ResourceIamAuditConfigWithBatching(IamProjectSchema, NewProjectIamUpdater, ProjectIdParseFunc, IamBatchingEnabled), @@ -459,7 +480,6 @@ func providerConfigure(d *schema.ResourceData, p *schema.Provider, terraformVers config.IAMBasePath = d.Get(IAMCustomEndpointEntryKey).(string) config.ServiceNetworkingBasePath = d.Get(ServiceNetworkingCustomEndpointEntryKey).(string) config.ServiceUsageBasePath = d.Get(ServiceUsageCustomEndpointEntryKey).(string) - config.CloudIoTBasePath = d.Get(CloudIoTCustomEndpointEntryKey).(string) config.StorageTransferBasePath = d.Get(StorageTransferCustomEndpointEntryKey).(string) config.BigtableAdminBasePath = d.Get(BigtableAdminCustomEndpointEntryKey).(string) diff --git a/third_party/terraform/utils/provider_handwritten_endpoint.go.erb b/third_party/terraform/utils/provider_handwritten_endpoint.go.erb index c8a72dbc681d..10c8d306e6b9 100644 --- a/third_party/terraform/utils/provider_handwritten_endpoint.go.erb +++ b/third_party/terraform/utils/provider_handwritten_endpoint.go.erb @@ -29,18 +29,6 @@ var CloudBillingCustomEndpointEntry = &schema.Schema{ }, CloudBillingDefaultBasePath), } -var CloudIoTDefaultBasePath = "https://cloudiot.googleapis.com/v1/" -var CloudIoTCustomEndpointEntryKey = "cloud_iot_custom_endpoint" -var CloudIoTCustomEndpointEntry = &schema.Schema{ - Type: schema.TypeString, - Optional: true, - ValidateFunc: validateCustomEndpoint, - DefaultFunc: schema.MultiEnvDefaultFunc([]string{ - "GOOGLE_CLOUD_IOT_CUSTOM_ENDPOINT", - }, CloudIoTDefaultBasePath), -} - - var ComposerDefaultBasePath = "https://composer.googleapis.com/v1beta1/" var ComposerCustomEndpointEntryKey = "composer_custom_endpoint" var ComposerCustomEndpointEntry = &schema.Schema{ diff --git a/third_party/terraform/utils/provider_test.go.erb b/third_party/terraform/utils/provider_test.go.erb index 4e52b61db81f..d135be884108 100644 --- a/third_party/terraform/utils/provider_test.go.erb +++ 
b/third_party/terraform/utils/provider_test.go.erb @@ -63,6 +63,16 @@ var orgEnvVars = []string{ "GOOGLE_ORG", } +<% unless version == 'ga' -%> +var custIdEnvVars = []string{ + "GOOGLE_CUST_ID", +} + +var identityUserEnvVars = []string{ + "GOOGLE_IDENTITY_USER", +} +<% end -%> + var orgEnvDomainVars = []string{ "GOOGLE_ORG_DOMAIN", } @@ -80,11 +90,18 @@ var billingAccountEnvVars = []string{ } var configs map[string]*Config -var sources map[string]rand.Source + +// A source for a given VCR test with the value that seeded it +type VcrSource struct { + seed int64 + source rand.Source +} + +var sources map[string]VcrSource func init() { configs = make(map[string]*Config) - sources = make(map[string]rand.Source) + sources = make(map[string]VcrSource) testAccProvider = Provider().(*schema.Provider) testAccRandomProvider = random.Provider().(*schema.Provider) <% if version == 'ga' -%> @@ -144,7 +161,7 @@ func getCachedConfig(d *schema.ResourceData, configureFunc func(d *schema.Resour log.Print("[DEBUG] No environment var set for VCR_PATH, skipping VCR") return config, nil } - path := filepath.Join(envPath, testName) + path := filepath.Join(envPath, vcrFileName(testName)) rec, err := recorder.NewAsMode(path, vcrMode, config.client.Transport) if err != nil { @@ -153,20 +170,48 @@ func getCachedConfig(d *schema.ResourceData, configureFunc func(d *schema.Resour // Defines how VCR will match requests to responses. rec.SetMatcher(func(r *http.Request, i cassette.Request) bool { // Default matcher compares method and URL only - defaultMatch := cassette.DefaultMatcher(r, i) + if !cassette.DefaultMatcher(r, i) { + return false + } if r.Body == nil { - return defaultMatch + return true } + contentType := r.Header.Get("Content-Type") + // If body contains media, don't try to compare + if strings.Contains(contentType, "multipart/related") { + return true + } + var b bytes.Buffer if _, err := b.ReadFrom(r.Body); err != nil { log.Printf("[DEBUG] Failed to read request body from cassette: %v", err) return false } r.Body = ioutil.NopCloser(&b) - // body must match recorded body - return defaultMatch && b.String() == i.Body + reqBody := b.String() + // If body matches identically, we are done + if reqBody == i.Body { + return true + } + + // JSON might be the same, but reordered. 
Try parsing json and comparing + if strings.Contains(contentType, "application/json") { + var reqJson, cassetteJson interface{} + if err := json.Unmarshal([]byte(reqBody), &reqJson); err != nil { + log.Printf("[DEBUG] Failed to unmarshal request json: %v", err) + return false + } + if err := json.Unmarshal([]byte(i.Body), &cassetteJson); err != nil { + log.Printf("[DEBUG] Failed to unmarshal cassette json: %v", err) + return false + } + return reflect.DeepEqual(reqJson, cassetteJson) + } + return false }) config.client.Transport = rec + config.wrappedPubsubClient.Transport = rec + config.wrappedBigQueryClient.Transport = rec configs[testName] = config return config, err } @@ -175,9 +220,19 @@ func getCachedConfig(d *schema.ResourceData, configureFunc func(d *schema.Resour func closeRecorder(t *testing.T) { if config, ok := configs[t.Name()]; ok { // We did not cache the config if it does not use VCR - err := config.client.Transport.(*recorder.Recorder).Stop() - if err != nil { - t.Error(err) + if !t.Failed() && isVcrEnabled() { + // If a test succeeds, write the new seed/yaml to files + err := config.client.Transport.(*recorder.Recorder).Stop() + if err != nil { + t.Error(err) + } + envPath := os.Getenv("VCR_PATH") + if vcrSource, ok := sources[t.Name()]; ok { + err = writeSeedToFile(vcrSource.seed, vcrSeedFile(envPath, t.Name())) + if err != nil { + t.Error(err) + } + } } // Clean up test config delete(configs, t.Name()) @@ -185,12 +240,18 @@ func closeRecorder(t *testing.T) { } } +func googleProviderConfig(t *testing.T) *Config { + config, ok := configs[t.Name()] + if ok { + return config + } + return testAccProvider.Meta().(*Config) +} + func getTestAccProviders(testName string) map[string]terraform.ResourceProvider { - prov := testAccProvider + prov := Provider().(*schema.Provider) provRand := random.Provider().(*schema.Provider) - envPath := os.Getenv("VCR_PATH") - recordingMode := os.Getenv("VCR_MODE") - if envPath != "" && recordingMode != "" { + if isVcrEnabled() { old := prov.ConfigureFunc prov.ConfigureFunc = func(d *schema.ResourceData) (interface{}, error) { return getCachedConfig(d, old, testName) @@ -198,49 +259,64 @@ func getTestAccProviders(testName string) map[string]terraform.ResourceProvider } else { log.Print("[DEBUG] VCR_PATH or VCR_MODE not set, skipping VCR") } - // TODO(slevenick): Add OICS provider return map[string]terraform.ResourceProvider{ "google": prov, + "google-beta": prov, "random": provRand, } } +func isVcrEnabled() bool { + envPath := os.Getenv("VCR_PATH") + vcrMode := os.Getenv("VCR_MODE") + return envPath != "" && vcrMode != "" +} + // Wrapper for resource.Test to swap out providers for VCR providers and handle VCR specific things // Can be called when VCR is not enabled, and it will behave as normal -func vcrTest(t *testing.T, c resource.TestCase, destroyFuncProducer func(provider *schema.Provider) func(s *terraform.State) error) { - providers := getTestAccProviders(t.Name()) - c.Providers = providers - defer closeRecorder(t) - c.CheckDestroy = destroyFuncProducer(providers["google"].(*schema.Provider)) +func vcrTest(t *testing.T, c resource.TestCase) { + if isVcrEnabled() { + providers := getTestAccProviders(t.Name()) + c.Providers = providers + defer closeRecorder(t) + } resource.Test(t, c) } +// Builds a unique file name for a test's seed file, +// replacing all `/` characters that would cause filepath issues. +// This matters for tests that dispatch multiple subtests, for example TestAccLoggingFolderExclusion +func vcrSeedFile(path, name string)
string { + return filepath.Join(path, fmt.Sprintf("%s.seed", vcrFileName(name))) +} + +func vcrFileName(name string) string { + return strings.ReplaceAll(name, "/", "_") +} + // Produces a rand.Source for VCR testing based on the given mode. // In RECORDING mode, generates a new seed and saves it to a file, using the seed for the source // In REPLAYING mode, reads a seed from a file and creates a source from it -func vcrSource(t *testing.T, path, mode string) (rand.Source, error) { +func vcrSource(t *testing.T, path, mode string) (*VcrSource, error) { if s, ok := sources[t.Name()]; ok { - return s, nil + return &s, nil } - fileName := filepath.Join(path, fmt.Sprintf("%s.seed", t.Name())) switch mode { case "RECORDING": seed := rand.Int63() s := rand.NewSource(seed) - err := writeSeedToFile(seed, fileName) - if err != nil { - return nil, err - } - sources[t.Name()] = s - return s, nil + vcrSource := VcrSource{seed: seed, source: s} + sources[t.Name()] = vcrSource + return &vcrSource, nil case "REPLAYING": - seed, err := readSeedFromFile(fileName) + seed, err := readSeedFromFile(vcrSeedFile(path, t.Name())) if err != nil { return nil, err } s := rand.NewSource(seed) - sources[t.Name()] = s - return s, nil + vcrSource := VcrSource{seed: seed, source: s} + sources[t.Name()] = vcrSource + return &vcrSource, nil default: log.Printf("[DEBUG] No valid environment var set for VCR_MODE, expected RECORDING or REPLAYING, skipping VCR. VCR_MODE: %s", mode) return nil, errors.New("No valid VCR_MODE set") @@ -279,18 +355,18 @@ func writeSeedToFile(seed int64, fileName string) error { } func randString(t *testing.T, length int) string { + if !isVcrEnabled() { + return acctest.RandString(length) + } envPath := os.Getenv("VCR_PATH") vcrMode := os.Getenv("VCR_MODE") - if envPath == "" || vcrMode == "" { - return acctest.RandString(10) - } s, err := vcrSource(t, envPath, vcrMode) if err != nil { // At this point we haven't created any resources, so fail fast t.Fatal(err) } - r := rand.New(s) + r := rand.New(s.source) result := make([]byte, length) set := "abcdefghijklmnopqrstuvwxyz012346789" for i := 0; i < length; i++ { @@ -299,6 +375,21 @@ func randString(t *testing.T, length int) string { return string(result) } +func randInt(t *testing.T) int { + if !isVcrEnabled() { + return acctest.RandInt() + } + envPath := os.Getenv("VCR_PATH") + vcrMode := os.Getenv("VCR_MODE") + s, err := vcrSource(t, envPath, vcrMode) + if err != nil { + // At this point we haven't created any resources, so fail fast + t.Fatal(err) + } + + return rand.New(s.source).Int() +} + func TestProvider(t *testing.T) { if err := Provider().(*schema.Provider).InternalValidate(); err != nil { t.Fatalf("err: %s", err) @@ -377,13 +468,13 @@ func TestProvider_loadCredentialsFromJSON(t *testing.T) { func TestAccProviderBasePath_setBasePath(t *testing.T) { t.Parallel() - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeAddressDestroy, + CheckDestroy: testAccCheckComputeAddressDestroyProducer(t), Steps: []resource.TestStep{ { - Config: testAccProviderBasePath_setBasePath("https://www.googleapis.com/compute/beta/", acctest.RandString(10)), + Config: testAccProviderBasePath_setBasePath("https://www.googleapis.com/compute/beta/", randString(t, 10)), }, { ResourceName: "google_compute_address.default", @@ -397,13 +488,13 @@ func TestAccProviderBasePath_setBasePath(t *testing.T) { func TestAccProviderBasePath_setInvalidBasePath(t 
*testing.T) { t.Parallel() - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckComputeAddressDestroy, + CheckDestroy: testAccCheckComputeAddressDestroyProducer(t), Steps: []resource.TestStep{ { - Config: testAccProviderBasePath_setBasePath("https://www.example.com/compute/beta/", acctest.RandString(10)), + Config: testAccProviderBasePath_setBasePath("https://www.example.com/compute/beta/", randString(t, 10)), ExpectError: regexp.MustCompile("got HTTP response code 404 with body"), }, }, @@ -411,15 +502,17 @@ func TestAccProviderBasePath_setInvalidBasePath(t *testing.T) { } func TestAccProviderUserProjectOverride(t *testing.T) { + // Parallel fine-grained resource creation + skipIfVcr(t) t.Parallel() org := getTestOrgFromEnv(t) billing := getTestBillingAccountFromEnv(t) - pid := acctest.RandomWithPrefix("tf-test") - sa := acctest.RandomWithPrefix("tf-test") - topicName := "tf-test-topic-" + acctest.RandString(10) + pid := "tf-test-" + randString(t, 10) + sa := "tf-test-" + randString(t, 10) + topicName := "tf-test-topic-" + randString(t, 10) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, // No TestDestroy since that's not really the point of this test @@ -455,14 +548,16 @@ func TestAccProviderUserProjectOverride(t *testing.T) { // Do the same thing as TestAccProviderUserProjectOverride, but using a resource that gets its project via // a reference to a different resource instead of a project field. func TestAccProviderIndirectUserProjectOverride(t *testing.T) { + // Parallel fine-grained resource creation + skipIfVcr(t) t.Parallel() org := getTestOrgFromEnv(t) billing := getTestBillingAccountFromEnv(t) - pid := acctest.RandomWithPrefix("tf-test") - sa := acctest.RandomWithPrefix("tf-test") + pid := "tf-test-" + randString(t, 10) + sa := "tf-test-" + randString(t, 10) - resource.Test(t, resource.TestCase{ + vcrTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, // No TestDestroy since that's not really the point of this test @@ -760,6 +855,16 @@ func getTestZoneFromEnv() string { return multiEnvSearch(zoneEnvVars) } +<% unless version == 'ga' -%> +func getTestCustIdFromEnv(t *testing.T) string { + return multiEnvSearch(custIdEnvVars) +} + +func getTestIdentityUserFromEnv(t *testing.T) string { + return multiEnvSearch(identityUserEnvVars) +} +<% end -%> + // Firestore can't be enabled at the same time as Datastore, so we need a new // project to manage it until we can enable Firestore programmatically. func getTestFirestoreProjectFromEnv(t *testing.T) string { @@ -800,3 +905,12 @@ func multiEnvSearch(ks []string) string { } return "" } + +// Some tests fail during VCR. One common case is race conditions when creating resources. 
+// If a test config adds two fine-grained resources with the same parent it is undefined +// which will be created first, causing VCR to fail ~50% of the time +func skipIfVcr(t *testing.T) { + if isVcrEnabled() { + t.Skipf("VCR enabled, skipping test: %s", t.Name()) + } +} diff --git a/third_party/terraform/utils/service_account_waiter.go b/third_party/terraform/utils/service_account_waiter.go index 9a36ad0e14d4..079acb78347b 100644 --- a/third_party/terraform/utils/service_account_waiter.go +++ b/third_party/terraform/utils/service_account_waiter.go @@ -33,7 +33,7 @@ func (w *ServiceAccountKeyWaiter) RefreshFunc() resource.StateRefreshFunc { } } -func serviceAccountKeyWaitTime(client *iam.ProjectsServiceAccountsKeysService, keyName, publicKeyType, activity string, timeoutMinutes int) error { +func serviceAccountKeyWaitTime(client *iam.ProjectsServiceAccountsKeysService, keyName, publicKeyType, activity string, timeout time.Duration) error { w := &ServiceAccountKeyWaiter{ Service: client, PublicKeyType: publicKeyType, @@ -44,7 +44,7 @@ func serviceAccountKeyWaitTime(client *iam.ProjectsServiceAccountsKeysService, k Pending: []string{"PENDING"}, Target: []string{"DONE"}, Refresh: w.RefreshFunc(), - Timeout: time.Duration(timeoutMinutes) * time.Minute, + Timeout: timeout, MinTimeout: 2 * time.Second, } _, err := c.WaitForState() diff --git a/third_party/terraform/utils/service_networking_operation.go b/third_party/terraform/utils/service_networking_operation.go index 6faf50def6b8..7a689e4d0c92 100644 --- a/third_party/terraform/utils/service_networking_operation.go +++ b/third_party/terraform/utils/service_networking_operation.go @@ -1,6 +1,8 @@ package google import ( + "time" + "google.golang.org/api/servicenetworking/v1" ) @@ -13,11 +15,7 @@ func (w *ServiceNetworkingOperationWaiter) QueryOp() (interface{}, error) { return w.Service.Operations.Get(w.Op.Name).Do() } -func serviceNetworkingOperationWait(config *Config, op *servicenetworking.Operation, activity string) error { - return serviceNetworkingOperationWaitTime(config, op, activity, 10) -} - -func serviceNetworkingOperationWaitTime(config *Config, op *servicenetworking.Operation, activity string, timeoutMinutes int) error { +func serviceNetworkingOperationWaitTime(config *Config, op *servicenetworking.Operation, activity string, timeout time.Duration) error { w := &ServiceNetworkingOperationWaiter{ Service: config.clientServiceNetworking, } @@ -25,5 +23,5 @@ func serviceNetworkingOperationWaitTime(config *Config, op *servicenetworking.Op if err := w.SetOp(op); err != nil { return err } - return OperationWait(w, activity, timeoutMinutes, config.PollInterval) + return OperationWait(w, activity, timeout, config.PollInterval) } diff --git a/third_party/terraform/utils/serviceman_operation.go b/third_party/terraform/utils/serviceman_operation.go index d31f3b2638e2..f0ff980a8c91 100644 --- a/third_party/terraform/utils/serviceman_operation.go +++ b/third_party/terraform/utils/serviceman_operation.go @@ -2,6 +2,7 @@ package google import ( "fmt" + "time" "google.golang.org/api/googleapi" "google.golang.org/api/servicemanagement/v1" @@ -19,11 +20,7 @@ func (w *ServiceManagementOperationWaiter) QueryOp() (interface{}, error) { return w.Service.Operations.Get(w.Op.Name).Do() } -func serviceManagementOperationWait(config *Config, op *servicemanagement.Operation, activity string) (googleapi.RawMessage, error) { - return serviceManagementOperationWaitTime(config, op, activity, 10) -} - -func serviceManagementOperationWaitTime(config *Config, op 
*servicemanagement.Operation, activity string, timeoutMinutes int) (googleapi.RawMessage, error) { +func serviceManagementOperationWaitTime(config *Config, op *servicemanagement.Operation, activity string, timeout time.Duration) (googleapi.RawMessage, error) { w := &ServiceManagementOperationWaiter{ Service: config.clientServiceMan, } @@ -32,7 +29,7 @@ func serviceManagementOperationWaitTime(config *Config, op *servicemanagement.Op return nil, err } - if err := OperationWait(w, activity, timeoutMinutes, config.PollInterval); err != nil { + if err := OperationWait(w, activity, timeout, config.PollInterval); err != nil { return nil, err } return w.Op.Response, nil diff --git a/third_party/terraform/utils/serviceusage_operation.go b/third_party/terraform/utils/serviceusage_operation.go index 3f2bdf35f342..7d054b658ea7 100644 --- a/third_party/terraform/utils/serviceusage_operation.go +++ b/third_party/terraform/utils/serviceusage_operation.go @@ -2,12 +2,13 @@ package google import ( "encoding/json" + "time" "google.golang.org/api/googleapi" "google.golang.org/api/serviceusage/v1" ) -func serviceUsageOperationWait(config *Config, op *serviceusage.Operation, project, activity string) error { +func serviceUsageOperationWait(config *Config, op *serviceusage.Operation, project, activity string, timeout time.Duration) error { // maintained for compatibility with old code that was written before the // autogenerated waiters. b, err := op.MarshalJSON() @@ -18,7 +19,7 @@ func serviceUsageOperationWait(config *Config, op *serviceusage.Operation, proje if err := json.Unmarshal(b, &m); err != nil { return err } - return serviceUsageOperationWaitTime(config, m, project, activity, 10) + return serviceUsageOperationWaitTime(config, m, project, activity, timeout) } func handleServiceUsageRetryableError(err error) error { diff --git a/third_party/terraform/utils/sql_utils.go b/third_party/terraform/utils/sql_utils.go new file mode 100644 index 000000000000..1924b4952fdd --- /dev/null +++ b/third_party/terraform/utils/sql_utils.go @@ -0,0 +1,26 @@ +package google + +import ( + "log" + "strings" + + "github.com/hashicorp/errwrap" + "google.golang.org/api/googleapi" +) + +func transformSQLDatabaseReadError(err error) error { + if gErr, ok := errwrap.GetType(err, &googleapi.Error{}).(*googleapi.Error); ok { + if gErr.Code == 400 && strings.Contains(gErr.Message, "Invalid request since instance is not running") { + // This error occurs when attempting a GET after deleting the sql database and sql instance. It leads to + // inconsistent behavior as handleNotFoundError(...) expects an error code of 404 when a resource does not + // exist. To get the desired behavior from handleNotFoundError, modify the return code to 404 so that + // handleNotFoundError(...)
will treat this as a NotFound error + gErr.Code = 404 + } + + log.Printf("[DEBUG] Transformed SQLDatabase error") + return gErr + } + + return err +} diff --git a/third_party/terraform/utils/sqladmin_operation.go b/third_party/terraform/utils/sqladmin_operation.go index 5fd2984f8e51..c6faddf992f0 100644 --- a/third_party/terraform/utils/sqladmin_operation.go +++ b/third_party/terraform/utils/sqladmin_operation.go @@ -4,6 +4,7 @@ import ( "bytes" "fmt" "log" + "time" sqladmin "google.golang.org/api/sqladmin/v1beta4" ) @@ -99,11 +100,7 @@ func (w *SqlAdminOperationWaiter) TargetStates() []string { return []string{"DONE"} } -func sqlAdminOperationWait(config *Config, res interface{}, project, activity string) error { - return sqlAdminOperationWaitTime(config, res, project, activity, 10) -} - -func sqlAdminOperationWaitTime(config *Config, res interface{}, project, activity string, timeoutMinutes int) error { +func sqlAdminOperationWaitTime(config *Config, res interface{}, project, activity string, timeout time.Duration) error { op := &sqladmin.Operation{} err := Convert(res, op) if err != nil { @@ -118,7 +115,7 @@ func sqlAdminOperationWaitTime(config *Config, res interface{}, project, activit if err := w.SetOp(op); err != nil { return err } - return OperationWait(w, activity, timeoutMinutes, config.PollInterval) + return OperationWait(w, activity, timeout, config.PollInterval) } // SqlAdminOperationError wraps sqladmin.OperationError and implements the diff --git a/third_party/terraform/utils/stateful_mig_polling.go.erb b/third_party/terraform/utils/stateful_mig_polling.go.erb new file mode 100644 index 000000000000..ec4c8f5ba076 --- /dev/null +++ b/third_party/terraform/utils/stateful_mig_polling.go.erb @@ -0,0 +1,140 @@ +<% autogen_exception -%> +package google + +<% unless version == 'ga' -%> +import ( + "fmt" + "log" + "strings" + + "github.com/hashicorp/terraform-plugin-sdk/helper/schema" +) + +// PerInstanceConfig needs both regular operation polling AND custom polling for deletion which is why this is not generated +func resourceComputePerInstanceConfigPollRead(d *schema.ResourceData, meta interface{}) PollReadFunc { + return func() (map[string]interface{}, error) { + config := meta.(*Config) + + url, err := replaceVars(d, config, "{{ComputeBasePath}}projects/{{project}}/zones/{{zone}}/instanceGroupManagers/{{instance_group_manager}}/listPerInstanceConfigs") + if err != nil { + return nil, err + } + + project, err := getProject(d, config) + if err != nil { + return nil, err + } + res, err := sendRequest(config, "POST", project, url, nil) + if err != nil { + return res, err + } + res, err = flattenNestedComputePerInstanceConfig(d, meta, res) + if err != nil { + return nil, err + } + + // Returns nil res if nested object is not found + return res, nil + } +} + +// RegionPerInstanceConfig needs both regular operation polling AND custom polling for deletion which is why this is not generated +func resourceComputeRegionPerInstanceConfigPollRead(d *schema.ResourceData, meta interface{}) PollReadFunc { + return func() (map[string]interface{}, error) { + config := meta.(*Config) + + url, err := replaceVars(d, config, "{{ComputeBasePath}}projects/{{project}}/regions/{{region}}/instanceGroupManagers/{{region_instance_group_manager}}/listPerInstanceConfigs") + if err != nil { + return nil, err + } + + project, err := getProject(d, config) + if err != nil { + return nil, err + } + res, err := sendRequest(config, "POST", project, url, nil) + if err != nil { + return res, err + } + res, err =
flattenNestedComputeRegionPerInstanceConfig(d, meta, res) + if err != nil { + return nil, err + } + + // Returns nil res if nested object is not found + return res, nil + } +} + +// Returns an instance name in the form zones/{zone}/instances/{instance} for the managed +// instance matching the name of a PerInstanceConfig +func findInstanceName(d *schema.ResourceData, config *Config) (string, error) { + url, err := replaceVars(d, config, "{{ComputeBasePath}}projects/{{project}}/regions/{{region}}/instanceGroupManagers/{{region_instance_group_manager}}/listManagedInstances") + + if err != nil { + return "", err + } + + project, err := getProject(d, config) + if err != nil { + return "", err + } + instanceNameToFind := fmt.Sprintf("/%s", d.Get("name").(string)) + + token := "" + for paginate := true; paginate; { + urlWithToken := "" + if token != "" { + urlWithToken = fmt.Sprintf("%s?maxResults=1&pageToken=%s", url, token) + } else { + urlWithToken = fmt.Sprintf("%s?maxResults=1", url) + } + res, err := sendRequest(config, "POST", project, urlWithToken, nil) + if err != nil { + return "", err + } + + managedInstances, ok := res["managedInstances"] + if !ok { + return "", fmt.Errorf("Failed to parse response for listManagedInstances for %s", d.Id()) + } + + managedInstancesArr := managedInstances.([]interface{}) + for _, managedInstanceRaw := range managedInstancesArr { + instance := managedInstanceRaw.(map[string]interface{}) + name, ok := instance["instance"] + if !ok { + return "", fmt.Errorf("Failed to read instance name for managed instance: %#v", instance) + } + if strings.HasSuffix(name.(string), instanceNameToFind) { + return name.(string), nil + } + } + + // Assign to the loop variable rather than shadowing it with `:=`, so pagination can terminate + tokenRaw, hasToken := res["nextPageToken"] + paginate = hasToken + if paginate { + token = tokenRaw.(string) + } + } + + return "", fmt.Errorf("Failed to find managed instance with name: %s", instanceNameToFind) +} + +func PollCheckInstanceConfigDeleted(resp map[string]interface{}, respErr error) PollResult { + if respErr != nil { + return ErrorPollResult(respErr) + } + + // Nested object 404 appears as nil response + if resp == nil { + // Config no longer exists + return SuccessPollResult() + } + + // Read status + status := resp["status"].(string) + if status == "DELETING" { + return PendingStatusPollResult("Still deleting") + } + return ErrorPollResult(fmt.Errorf("Expected PerInstanceConfig to be deleting but status is: %s", status)) +} +<% end -%> diff --git a/third_party/terraform/utils/test-fixtures/binauthz/generated_payload.json.tmpl b/third_party/terraform/utils/test-fixtures/binauthz/generated_payload.json.tmpl new file mode 100644 index 000000000000..3db3c90fe980 --- /dev/null +++ b/third_party/terraform/utils/test-fixtures/binauthz/generated_payload.json.tmpl @@ -0,0 +1,12 @@ +{ + "critical": { + "identity": { + "docker-reference": "%s" + }, + "image": { + "docker-manifest-digest": "%s" + }, + "type": "Google cloud binauthz container signature" + } +} + diff --git a/third_party/terraform/utils/test-fixtures/rsa_private_4096.pem b/third_party/terraform/utils/test-fixtures/rsa_private_4096.pem new file mode 100644 index 000000000000..14be2aca8257 --- /dev/null +++ b/third_party/terraform/utils/test-fixtures/rsa_private_4096.pem @@ -0,0 +1,52 @@ +-----BEGIN PRIVATE KEY----- +MIIJRAIBADANBgkqhkiG9w0BAQEFAASCCS4wggkqAgEAAoICAQC/rB4LVpPXqXap +Lqp1hzLsE6PM/tPBP3NQCIWFakvbnoZGoLzJBF2oyyLrxD//vrYhTK7+podsSBDx +ZGZB1VUYOXGQF+sD+JcHZ2C6OryOHQdYhXZ6/dN6tXdpTC9hSWujCGrTbtJf1TWF +u590YIy4qweIXTkSuA28S5HxcCR4n9hXYNXb6xGr5aD3LhHBRht1yF0W3LV3lJf+
+zRiqzCDI4b1WgJ1HttoXba6CFVHjhpGb7zeaC2oBngz5Gl7skZGxQZj+lHf3VOiL +SzwSfXkkaICGpfZbRSj8VlmvLfK6pF90dgQ0H4AO7JMxOtRopm/t65rSVMHPmiib +eNIXeC8jtJCgt0r+OERU46rjUiMHwPrRKB3r1sPvJMjnXrK9n+zcDIrNbp/oqeN1 +sJxkopPQYB480s2ENh9KUD1vAnwdBLYV6dKlJMCsq8aJWuyKcm42NtB0+5AdylZ/ +3HWbcZFMy8X5M1gz+WnsgvZUxslF9Y/LWJbGb0XVivp2QdWgs/uhxiQ1Eeg8u63k +TogUm0qVUjY2d5RgkadhI77XE9X1OqUl4w3Cve4UwA3w3+gHd/hZbC5FiBHQvmDY +D4u/H6Btt2X5Nv/QI1+yV9K4OsVXxnifqBXBv1k8aU0cK+epF8tSlEUnCZHtRi/O +L8TutckQOQr61yc+TPYm2N0spHupVwIDAQABAoICAQC/iO2VAvVGI3ASbDGmtG3s +f0vGRDey+wbuSTW0Np6LXoRr+5/reFNno2bIFxqlJBy4dfrBOgRF7lYQAvY0f0xD +otOa3GvbUgUKPwHn114o3VVD3kqhaRh1nPUw4hLOsyG+j2DA3BOZ9GNBulYDY5/7 +wd0LJa0syYPgT9wNWrT3XTRBTOEonGTSU+tgVkcjzj0OnCR5/h/Q2UpyMt2df9Kb +KwmbcXa5/T0/ADnMgCWOqiDDpG75nsJVz2zDWNjWqjje2uBaNl8TZ2PiHlJvX7c4 +7LzS0PG4Dwp/7oI8jjvqyusgY/abZ4b+YuZL4a/0y606IaBa4puyKyi9BCVdkpn2 +5YCQ/iOqzUfsllfqgdbpgwtDyjpwQ7wkghQAlJn6sb0YnU0g5wBP4UkAYyLahB+Z +AYO9rTs9qvx6jlqDVAcCbIHLXnf41aKGYOPWvxDkBe30LCMWbaLpmpsD7ImUlqX2 +VsRuk8RuQpiLmsBXoD9zBPdkZCzvp6HF9B7weLxEaj1ciArEHAHyTJFNqrOwJz1Y +ndSitxIioes64kAVfe4YcQCmGLVTg4YOF+6Nf1COnHEVkHQb8dXiUSDisq+6iW9n +fIqLp7cqp9aNjjKMCmMngNPs5856c91Er7Q+FtuVII3uwWaCZ8sxhXZg2M8nahUQ +JfiU6MYiXK/hkSlbA3RssQKCAQEA82bP5KK9ZjHWUVstnVbnCV2IXBI+jW3PF/B6 +x334atmmtKl0cFuP3jHjYDW7ZX2YiQALxG2ARRAgm5QpimKwIQtiHVNCfO0lhRBZ +iBumcIsxHyQ+YlDZ6T3dRQIaFpgFVgQhSzmxRi9XFnT597qw0WkyWEqI+9cL05rx +NUrzmR1aCq4B8XykT+kU6hcZLw2pX7XIY5O4NfNneU55/WQnw7HSGSBh/07YjfZl +q82SKY+bsC2+B3XJEGTL70RxW0KIuxvmZF3cbmTx4o7jR4f+/E0F7NSf+iOq8cLl +blmrX1qwmGS7NTzYQv+Vv1rb0yoqt2ocysndRUuvpKErKGLzbQKCAQEAyZfeGlZ5 +L0CC+NWg606Ayuv6GmmHqGbMzaqPsOjURBAcbwHCnR2s9ziC433ZwNPGxbBQHvwV +Femja7MXjU7mC+u3musitrrYs/NZLVeFHAO83pqy4tsmplN0oaPU6cJim+ftZ8hx +NQ8baUjURkrut9S47ne1bmx1tmJy+57lloK4glKilayby9x5aPxxoIzScr44orFy +a/UIjaI4TdIqaI2ma2r5FyWSt6bliHeQHDoHVbB4uAIn4MH33RDMwAUsHyBNHtjT +MiIIH2RMi8uQdDC46yBZnub2ysY9u7X87Cjdx0m08SqqE1E4DyiY5XbuGCMulXr5 +NjCVDI30CN6RUwKCAQEAojUDGLBnniJaXG9yD6fpYjFl/U3fR+tFFwQZHrdRhQu1 +cDJ5uaMbVo1SpTxJvZIcxDg2n1oGIIBl6qiromCwVeU7JqXk6lI0LeA+ellK6zen +rcQ+mtCc2DZ1Llb/Qc1fyPoJohM5k7dax0l/iFtvGK+NcI+DiKnAZO2eD9D6VDDe +X72k5+UTr3l6iaKJEvV8yZ7gg5PfMH0cmRf2bip/4YewpzQQes91u+3Xxc3CuVXO +AHQLbvdM3lL+IV8wWAwYCPHH8V0n2J4HIN/ukS4NfOBrsW/liRKaCnHC6m5xqaNL +itOeexUoXkXr6tFHLAuu2fqqY25xuot86y7JDyoaZQKCAQAxXGU+z7OmlUY8hZp8 +Y2F3zmYT94kG+/zj0wKSD8CB1ewQZj7v16dVdVnfOB6Mb455M1266ICFOAsSwOxg +ZlQ/0PzJCxAZ7BBJp6lLR+XI4UVqwDhTAdQp379sIMBuaHFauWRRCCxoEIXmtrV7 +bLec/ZI2mcsr+ZStDtgWsmaG/wUMrA0xVu1i8l4sDbwI0tJo1BjsBUT+GCZ6/6CH +tZE6voxkOvI477NIEq6bRqNbtf27xpUYgTagev4k/AsWbW3LRU584hx8ZwbUIOv5 +QuPg/1kYAXjQRr9hET9maf4/GRaMyIhskMTKLBXs6ETf0soj0OGEKnNBCI5GX6/E +SDMPAoIBAQC767WNJ31WeaO/7ut0SI5vabFff7f9DkdVlVkeJxooFNrDLgb66aaC +4JTfeqSCv6w/7ABcQyyaHxSZhZjMiuKHroDVDznq0xPjADqxXnbd05ih5aPnuedm +Hm+oQN0o/ozs7FwMRbvzx45kHHMZvShISR1IHGCuw915OkwA4JHzY5CljRK51Lhq +VR3LpLxD3pmSirARGGGVJ0IdYcwlmTo+4ACeX6wOH6FH8gtccTw2HNTSskCq0cQp +zi0EpoUWQNjFsuZ2W5dwgWr5AoEnsRkKMEucaYiP+kTGC6R4sZaSTDZpg5ikxZFX +4xmS3YG5FDvzS9GmJK0YvstxZHMb28ii +-----END PRIVATE KEY----- diff --git a/third_party/terraform/utils/test-fixtures/rsa_public.pem b/third_party/terraform/utils/test-fixtures/rsa_public.pem new file mode 100644 index 000000000000..2b2acadf6760 --- /dev/null +++ b/third_party/terraform/utils/test-fixtures/rsa_public.pem @@ -0,0 +1,14 @@ +-----BEGIN PUBLIC KEY----- +MIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAv6weC1aT16l2qS6qdYcy +7BOjzP7TwT9zUAiFhWpL256GRqC8yQRdqMsi68Q//762IUyu/qaHbEgQ8WRmQdVV +GDlxkBfrA/iXB2dgujq8jh0HWIV2ev3TerV3aUwvYUlrowhq027SX9U1hbufdGCM 
+uKsHiF05ErgNvEuR8XAkeJ/YV2DV2+sRq+Wg9y4RwUYbdchdFty1d5SX/s0Yqswg +yOG9VoCdR7baF22ughVR44aRm+83mgtqAZ4M+Rpe7JGRsUGY/pR391Toi0s8En15 +JGiAhqX2W0Uo/FZZry3yuqRfdHYENB+ADuyTMTrUaKZv7eua0lTBz5oom3jSF3gv +I7SQoLdK/jhEVOOq41IjB8D60Sgd69bD7yTI516yvZ/s3AyKzW6f6KnjdbCcZKKT +0GAePNLNhDYfSlA9bwJ8HQS2FenSpSTArKvGiVrsinJuNjbQdPuQHcpWf9x1m3GR +TMvF+TNYM/lp7IL2VMbJRfWPy1iWxm9F1Yr6dkHVoLP7ocYkNRHoPLut5E6IFJtK +lVI2NneUYJGnYSO+1xPV9TqlJeMNwr3uFMAN8N/oB3f4WWwuRYgR0L5g2A+Lvx+g +bbdl+Tb/0CNfslfSuDrFV8Z4n6gVwb9ZPGlNHCvnqRfLUpRFJwmR7UYvzi/E7rXJ +EDkK+tcnPkz2JtjdLKR7qVcCAwEAAQ== +-----END PUBLIC KEY----- diff --git a/third_party/terraform/utils/utils.go.erb b/third_party/terraform/utils/utils.go.erb index 64a8358eaa68..13c9b5e519b7 100644 --- a/third_party/terraform/utils/utils.go.erb +++ b/third_party/terraform/utils/utils.go.erb @@ -9,6 +9,7 @@ import ( "fmt" "log" "net/url" + "sort" "strings" "time" @@ -86,7 +87,8 @@ func handleNotFoundError(err error, d *schema.ResourceData, resource string) err return nil } - return fmt.Errorf("Error reading %s: %s", resource, err) + return errwrap.Wrapf( + fmt.Sprintf("Error when reading or editing %s: {{err}}", resource), err) } func isGoogleApiErrorWithCode(err error, errCode int) bool { @@ -209,6 +211,8 @@ func convertStringSet(set *schema.Set) []string { for _, v := range set.List() { s = append(s, v.(string)) } + sort.Strings(s) + return s } @@ -226,6 +230,7 @@ func stringSliceFromGolangSet(sset map[string]struct{}) []string { for s := range sset { ls = append(ls, s) } + sort.Strings(ls) return ls } @@ -411,7 +416,7 @@ func calcAddRemove(from []string, to []string) (add, remove []string) { } func stringInSlice(arr []string, str string) bool { - for _,i := range arr { + for _, i := range arr { if i == str { return true } @@ -421,9 +426,22 @@ func stringInSlice(arr []string, str string) bool { } func migrateStateNoop(v int, is *terraform.InstanceState, meta interface{}) (*terraform.InstanceState, error) { - return is, nil + return is, nil } func expandString(v interface{}, d TerraformResourceData, config *Config) (string, error) { return v.(string), nil } + +func changeFieldSchemaToForceNew(sch *schema.Schema) { + sch.ForceNew = true + switch sch.Type { + case schema.TypeList, schema.TypeSet: + if nestedR, ok := sch.Elem.(*schema.Resource); ok { + for _, nestedSch := range nestedR.Schema { + changeFieldSchemaToForceNew(nestedSch) + } + } + } +} diff --git a/third_party/terraform/utils/validation.go b/third_party/terraform/utils/validation.go index 0fa957c91207..2b145de95c41 100644 --- a/third_party/terraform/utils/validation.go +++ b/third_party/terraform/utils/validation.go @@ -30,6 +30,9 @@ const ( // https://cloud.google.com/iam/docs/understanding-custom-roles#naming_the_role IAMCustomRoleIDRegex = "^[a-zA-Z0-9_\\.]{3,64}$" + + // https://cloud.google.com/managed-microsoft-ad/reference/rest/v1/projects.locations.global.domains/create#query-parameters + ADDomainNameRegex = "^[a-z][a-z0-9-]{0,14}\\.[a-z0-9-\\.]*[a-z]+[a-z0-9]*$" ) var ( @@ -311,3 +314,15 @@ func validateRFC3339Date(v interface{}, k string) (warnings []string, errors []e } return } + +func validateADDomainName() schema.SchemaValidateFunc { + return func(v interface{}, k string) (ws []string, errors []error) { + value := v.(string) + + if len(value) > 64 || !regexp.MustCompile(ADDomainNameRegex).MatchString(value) { + errors = append(errors, fmt.Errorf( + "%q (%q) doesn't match regexp %q, domain_name must be 2 to 64 characters long, use only lowercase letters, digits, hyphens and dots, and start with a letter", k, value,
ADDomainNameRegex)) + } + return + } +} diff --git a/third_party/terraform/website-compiled/google.erb b/third_party/terraform/website-compiled/google.erb deleted file mode 100644 index 8a1c09327606..000000000000 --- a/third_party/terraform/website-compiled/google.erb +++ /dev/null @@ -1,1656 +0,0 @@ -<% autogen_exception -%> -<%# - Hashicorp uses erb's to generate their website files. In order to run through the MM - generator we need to double escape their code with '<%%' - -%> -<%% wrap_layout :inner do %> - <%% content_for :sidebar do %> - - <%% end %> - -<%%= yield %> - <%% end %> diff --git a/third_party/terraform/website/docs/d/google_active_folder.html.markdown b/third_party/terraform/website/docs/d/active_folder.html.markdown similarity index 100% rename from third_party/terraform/website/docs/d/google_active_folder.html.markdown rename to third_party/terraform/website/docs/d/active_folder.html.markdown diff --git a/third_party/terraform/website/docs/d/google_bigquery_default_service_account.html.markdown b/third_party/terraform/website/docs/d/bigquery_default_service_account.html.markdown similarity index 100% rename from third_party/terraform/website/docs/d/google_bigquery_default_service_account.html.markdown rename to third_party/terraform/website/docs/d/bigquery_default_service_account.html.markdown diff --git a/third_party/terraform/website/docs/d/google_billing_account.html.markdown b/third_party/terraform/website/docs/d/billing_account.html.markdown similarity index 100% rename from third_party/terraform/website/docs/d/google_billing_account.html.markdown rename to third_party/terraform/website/docs/d/billing_account.html.markdown diff --git a/third_party/terraform/website/docs/d/datasource_client_config.html.markdown b/third_party/terraform/website/docs/d/client_config.html.markdown similarity index 100% rename from third_party/terraform/website/docs/d/datasource_client_config.html.markdown rename to third_party/terraform/website/docs/d/client_config.html.markdown diff --git a/third_party/terraform/website/docs/d/datasource_google_client_openid_userinfo.html.markdown b/third_party/terraform/website/docs/d/client_openid_userinfo.html.markdown similarity index 100% rename from third_party/terraform/website/docs/d/datasource_google_client_openid_userinfo.html.markdown rename to third_party/terraform/website/docs/d/client_openid_userinfo.html.markdown diff --git a/third_party/terraform/website/docs/d/cloud_identity_group_membership.html.markdown b/third_party/terraform/website/docs/d/cloud_identity_group_membership.html.markdown new file mode 100644 index 000000000000..197f4eec1770 --- /dev/null +++ b/third_party/terraform/website/docs/d/cloud_identity_group_membership.html.markdown @@ -0,0 +1,74 @@ +--- +subcategory: "Cloud Identity" +layout: "google" +page_title: "Google: google_cloud_identity_group_memberships" +sidebar_current: "docs-google-datasource-cloud-identity-group-memberships" +description: |- + Get a list of the Cloud Identity Group Memberships within a Group. +--- + +# google_cloud_identity_group_memberships + +Use this data source to get a list of the Cloud Identity Group Memberships within a given Group. + +https://cloud.google.com/identity/docs/concepts/overview#memberships + +## Example Usage + +```tf +data "google_cloud_identity_group_memberships" "members" { + group = "groups/123eab45c6defghi" +} +``` + +## Argument Reference + +* `group` - The parent Group resource under which to look up the Membership names. Must be of the form groups/{group_id}.
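As a minimal sketch of consuming the result (reusing the illustrative group ID from the example above), each membership's resource name can be surfaced as an output:

```tf
data "google_cloud_identity_group_memberships" "members" {
  group = "groups/123eab45c6defghi"
}

# Each element of `memberships` exposes `name`, `roles`, and the member keys
output "membership_names" {
  value = data.google_cloud_identity_group_memberships.members.memberships[*].name
}
```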
+ +## Attributes Reference + +In addition to the arguments listed above, the following attributes are exported: + +* `memberships` - The list of memberships under the given group. Structure is documented below. + +The `memberships` block contains: + +* `name` - + The resource name of the Membership, of the form groups/{group_id}/memberships/{membership_id}. + +* `roles` - The MembershipRoles that apply to the Membership. Structure is documented below. + +* `member_key` - + (Optional) + EntityKey of the member. Structure is documented below. + +* `preferred_member_key` - + (Optional) + EntityKey of the member. Structure is documented below. + +The `roles` block supports: + +* `name` - The name of the MembershipRole. One of OWNER, MANAGER, MEMBER. + + +The `member_key` block supports: + +* `id` - The ID of the entity. For Google-managed entities, the id is the email address of an existing + group or user. For external-identity-mapped entities, the id is a string conforming + to the Identity Source's requirements. + +* `namespace` - The namespace in which the entity exists. + If not populated, the EntityKey represents a Google-managed entity + such as a Google user or a Google Group. + If populated, the EntityKey represents an external-identity-mapped group. + +The `preferred_member_key` block supports: + +* `id` - The ID of the entity. For Google-managed entities, the id is the email address of an existing + group or user. For external-identity-mapped entities, the id is a string conforming + to the Identity Source's requirements. + +* `namespace` - The namespace in which the entity exists. + If not populated, the EntityKey represents a Google-managed entity + such as a Google user or a Google Group. + If populated, the EntityKey represents an external-identity-mapped group. diff --git a/third_party/terraform/website/docs/d/cloud_identity_groups.html.markdown b/third_party/terraform/website/docs/d/cloud_identity_groups.html.markdown new file mode 100644 index 000000000000..0521f496bac2 --- /dev/null +++ b/third_party/terraform/website/docs/d/cloud_identity_groups.html.markdown @@ -0,0 +1,66 @@ +--- +subcategory: "Cloud Identity" +layout: "google" +page_title: "Google: google_cloud_identity_groups" +sidebar_current: "docs-google-datasource-cloud-identity-groups" +description: |- + Get a list of the Cloud Identity Groups under a customer or namespace. +--- + +# google_cloud_identity_groups + +Use this data source to get a list of the Cloud Identity Groups under a customer or namespace. + +https://cloud.google.com/identity/docs/concepts/overview#groups + +## Example Usage + +```tf
data "google_cloud_identity_groups" "groups" { + parent = "customers/A01b123xz" +} +``` + +## Argument Reference + +* `parent` - The parent resource under which to list all Groups. Must be of the form identitysources/{identity_source_id} for external-identity-mapped groups or customers/{customer_id} for Google Groups. + +## Attributes Reference + +In addition to the arguments listed above, the following attributes are exported: + +* `groups` - The list of groups under the provided customer or namespace. Structure is documented below. + +The `groups` block contains: + +* `name` - + Resource name of the Group in the format: groups/{group_id}, where `group_id` is the unique ID assigned to the Group. + +* `group_key` - + EntityKey of the Group. Structure is documented below. + +* `display_name` - + The display name of the Group. + +* `description` - + An extended description to help users determine the purpose of a Group.
+ +* `labels` - The labels that apply to the Group. + Contains 'cloudidentity.googleapis.com/groups.discussion_forum': '' if the Group is a Google Group or + 'system/groups/external': '' if the Group is an external-identity-mapped group. + +The `group_key` block supports: + +* `id` - + The ID of the entity. + For Google-managed entities, the id is the email address of an existing group or user. + For external-identity-mapped entities, the id is a string conforming + to the Identity Source's requirements. + +* `namespace` - + The namespace in which the entity exists. + If not populated, the EntityKey represents a Google-managed entity + such as a Google user or a Google Group. + If populated, the EntityKey represents an external-identity-mapped group. + The namespace must correspond to an identity source created in Admin Console + and must be in the form of `identitysources/{identity_source_id}`. \ No newline at end of file diff --git a/third_party/terraform/website/docs/d/datasource_cloudfunctions_function.html.markdown b/third_party/terraform/website/docs/d/cloudfunctions_function.html.markdown similarity index 100% rename from third_party/terraform/website/docs/d/datasource_cloudfunctions_function.html.markdown rename to third_party/terraform/website/docs/d/cloudfunctions_function.html.markdown diff --git a/third_party/terraform/website/docs/d/datasource_google_composer_image_versions.html.markdown b/third_party/terraform/website/docs/d/composer_image_versions.html.markdown similarity index 100% rename from third_party/terraform/website/docs/d/datasource_google_composer_image_versions.html.markdown rename to third_party/terraform/website/docs/d/composer_image_versions.html.markdown diff --git a/third_party/terraform/website/docs/d/datasource_compute_address.html.markdown b/third_party/terraform/website/docs/d/compute_address.html.markdown similarity index 100% rename from third_party/terraform/website/docs/d/datasource_compute_address.html.markdown rename to third_party/terraform/website/docs/d/compute_address.html.markdown diff --git a/third_party/terraform/website/docs/d/datasource_google_compute_backend_bucket.html.markdown b/third_party/terraform/website/docs/d/compute_backend_bucket.html.markdown similarity index 100% rename from third_party/terraform/website/docs/d/datasource_google_compute_backend_bucket.html.markdown rename to third_party/terraform/website/docs/d/compute_backend_bucket.html.markdown diff --git a/third_party/terraform/website/docs/d/datasource_google_compute_backend_service.html.markdown b/third_party/terraform/website/docs/d/compute_backend_service.html.markdown similarity index 100% rename from third_party/terraform/website/docs/d/datasource_google_compute_backend_service.html.markdown rename to third_party/terraform/website/docs/d/compute_backend_service.html.markdown diff --git a/third_party/terraform/website/docs/d/google_compute_default_service_account.html.markdown b/third_party/terraform/website/docs/d/compute_default_service_account.html.markdown similarity index 97% rename from third_party/terraform/website/docs/d/google_compute_default_service_account.html.markdown rename to third_party/terraform/website/docs/d/compute_default_service_account.html.markdown index 8bc330a41b83..55960f08768a 100644 --- a/third_party/terraform/website/docs/d/google_compute_default_service_account.html.markdown +++ b/third_party/terraform/website/docs/d/compute_default_service_account.html.markdown @@ -1,5 +1,5 @@ --- -subcategory: "Cloud Platform" +subcategory: "Compute
Engine" layout: "google" page_title: "Google: google_compute_default_service_account" sidebar_current: "docs-google-datasource-compute-default-service-account" diff --git a/third_party/terraform/website/docs/d/datasource_compute_forwarding_rule.html.markdown b/third_party/terraform/website/docs/d/compute_forwarding_rule.html.markdown similarity index 100% rename from third_party/terraform/website/docs/d/datasource_compute_forwarding_rule.html.markdown rename to third_party/terraform/website/docs/d/compute_forwarding_rule.html.markdown diff --git a/third_party/terraform/website/docs/d/datasource_compute_global_address.html.markdown b/third_party/terraform/website/docs/d/compute_global_address.html.markdown similarity index 100% rename from third_party/terraform/website/docs/d/datasource_compute_global_address.html.markdown rename to third_party/terraform/website/docs/d/compute_global_address.html.markdown diff --git a/third_party/terraform/website/docs/d/datasource_compute_image.html.markdown b/third_party/terraform/website/docs/d/compute_image.html.markdown similarity index 97% rename from third_party/terraform/website/docs/d/datasource_compute_image.html.markdown rename to third_party/terraform/website/docs/d/compute_image.html.markdown index f30e03f6ecce..127cd632016b 100644 --- a/third_party/terraform/website/docs/d/datasource_compute_image.html.markdown +++ b/third_party/terraform/website/docs/d/compute_image.html.markdown @@ -51,6 +51,7 @@ that is part of an image family and is not deprecated. In addition to the arguments listed above, the following computed attributes are exported: +* `id` - an identifier for the data source with format `projects/{{project}}/global/images/{{name}}` * `self_link` - The URI of the image. * `name` - The name of the image. * `family` - The family name of the image. 
diff --git a/third_party/terraform/website/docs/d/datasource_compute_instance.html.markdown b/third_party/terraform/website/docs/d/compute_instance.html.markdown similarity index 100% rename from third_party/terraform/website/docs/d/datasource_compute_instance.html.markdown rename to third_party/terraform/website/docs/d/compute_instance.html.markdown diff --git a/third_party/terraform/website/docs/d/google_compute_instance_group.html.markdown b/third_party/terraform/website/docs/d/compute_instance_group.html.markdown similarity index 100% rename from third_party/terraform/website/docs/d/google_compute_instance_group.html.markdown rename to third_party/terraform/website/docs/d/compute_instance_group.html.markdown diff --git a/third_party/terraform/website/docs/d/datasource_compute_instance_serial_port.html.markdown b/third_party/terraform/website/docs/d/compute_instance_serial_port.html.markdown similarity index 100% rename from third_party/terraform/website/docs/d/datasource_compute_instance_serial_port.html.markdown rename to third_party/terraform/website/docs/d/compute_instance_serial_port.html.markdown diff --git a/third_party/terraform/website/docs/d/datasource_compute_lb_ip_ranges.html.markdown b/third_party/terraform/website/docs/d/compute_lb_ip_ranges.html.markdown similarity index 97% rename from third_party/terraform/website/docs/d/datasource_compute_lb_ip_ranges.html.markdown rename to third_party/terraform/website/docs/d/compute_lb_ip_ranges.html.markdown index a2c52ccc4e65..d0226e0ec4b4 100644 --- a/third_party/terraform/website/docs/d/datasource_compute_lb_ip_ranges.html.markdown +++ b/third_party/terraform/website/docs/d/compute_lb_ip_ranges.html.markdown @@ -1,5 +1,5 @@ --- -subcategory: "Cloud Platform" +subcategory: "Compute Engine" layout: "google" page_title: "Google: google_compute_lb_ip_ranges" sidebar_current: "docs-google-datasource-compute-lb-ip-ranges" diff --git a/third_party/terraform/website/docs/d/datasource_compute_network.html.markdown b/third_party/terraform/website/docs/d/compute_network.html.markdown similarity index 100% rename from third_party/terraform/website/docs/d/datasource_compute_network.html.markdown rename to third_party/terraform/website/docs/d/compute_network.html.markdown diff --git a/third_party/terraform/website/docs/d/datasource_google_compute_network_endpoint_group.html.markdown b/third_party/terraform/website/docs/d/compute_network_endpoint_group.html.markdown similarity index 100% rename from third_party/terraform/website/docs/d/datasource_google_compute_network_endpoint_group.html.markdown rename to third_party/terraform/website/docs/d/compute_network_endpoint_group.html.markdown diff --git a/third_party/terraform/website/docs/d/google_compute_node_types.html.markdown b/third_party/terraform/website/docs/d/compute_node_types.html.markdown similarity index 100% rename from third_party/terraform/website/docs/d/google_compute_node_types.html.markdown rename to third_party/terraform/website/docs/d/compute_node_types.html.markdown diff --git a/third_party/terraform/website/docs/d/datasource_compute_region_instance_group.html.markdown b/third_party/terraform/website/docs/d/compute_region_instance_group.html.markdown similarity index 100% rename from third_party/terraform/website/docs/d/datasource_compute_region_instance_group.html.markdown rename to third_party/terraform/website/docs/d/compute_region_instance_group.html.markdown diff --git a/third_party/terraform/website/docs/d/google_compute_regions.html.markdown 
b/third_party/terraform/website/docs/d/compute_regions.html.markdown similarity index 100% rename from third_party/terraform/website/docs/d/google_compute_regions.html.markdown rename to third_party/terraform/website/docs/d/compute_regions.html.markdown diff --git a/third_party/terraform/website/docs/d/google_compute_resource_policy.html.markdown b/third_party/terraform/website/docs/d/compute_resource_policy.html.markdown similarity index 97% rename from third_party/terraform/website/docs/d/google_compute_resource_policy.html.markdown rename to third_party/terraform/website/docs/d/compute_resource_policy.html.markdown index d91f671bfa75..212135092d9d 100644 --- a/third_party/terraform/website/docs/d/google_compute_resource_policy.html.markdown +++ b/third_party/terraform/website/docs/d/compute_resource_policy.html.markdown @@ -1,6 +1,6 @@ --- layout: "google" -subcategory: "Cloud Platform" +subcategory: "Compute Engine" page_title: "Google: google_compute_resource_policy" sidebar_current: "docs-google-datasource-compute-resource-policy" description: |- diff --git a/third_party/terraform/website/docs/d/datasource_compute_router.html.markdown b/third_party/terraform/website/docs/d/compute_router.html.markdown similarity index 100% rename from third_party/terraform/website/docs/d/datasource_compute_router.html.markdown rename to third_party/terraform/website/docs/d/compute_router.html.markdown diff --git a/third_party/terraform/website/docs/d/datasource_compute_ssl_certificate.html.markdown b/third_party/terraform/website/docs/d/compute_ssl_certificate.html.markdown similarity index 100% rename from third_party/terraform/website/docs/d/datasource_compute_ssl_certificate.html.markdown rename to third_party/terraform/website/docs/d/compute_ssl_certificate.html.markdown diff --git a/third_party/terraform/website/docs/d/datasource_compute_ssl_policy.html.markdown b/third_party/terraform/website/docs/d/compute_ssl_policy.html.markdown similarity index 100% rename from third_party/terraform/website/docs/d/datasource_compute_ssl_policy.html.markdown rename to third_party/terraform/website/docs/d/compute_ssl_policy.html.markdown diff --git a/third_party/terraform/website/docs/d/datasource_compute_subnetwork.html.markdown b/third_party/terraform/website/docs/d/compute_subnetwork.html.markdown similarity index 100% rename from third_party/terraform/website/docs/d/datasource_compute_subnetwork.html.markdown rename to third_party/terraform/website/docs/d/compute_subnetwork.html.markdown diff --git a/third_party/terraform/website/docs/d/datasource_compute_vpn_gateway.html.markdown b/third_party/terraform/website/docs/d/compute_vpn_gateway.html.markdown similarity index 100% rename from third_party/terraform/website/docs/d/datasource_compute_vpn_gateway.html.markdown rename to third_party/terraform/website/docs/d/compute_vpn_gateway.html.markdown diff --git a/third_party/terraform/website/docs/d/google_compute_zones.html.markdown b/third_party/terraform/website/docs/d/compute_zones.html.markdown similarity index 100% rename from third_party/terraform/website/docs/d/google_compute_zones.html.markdown rename to third_party/terraform/website/docs/d/compute_zones.html.markdown diff --git a/third_party/terraform/website/docs/d/google_container_cluster.html.markdown b/third_party/terraform/website/docs/d/container_cluster.html.markdown similarity index 100% rename from third_party/terraform/website/docs/d/google_container_cluster.html.markdown rename to 
third_party/terraform/website/docs/d/container_cluster.html.markdown diff --git a/third_party/terraform/website/docs/d/google_container_engine_versions.html.markdown b/third_party/terraform/website/docs/d/container_engine_versions.html.markdown similarity index 88% rename from third_party/terraform/website/docs/d/google_container_engine_versions.html.markdown rename to third_party/terraform/website/docs/d/container_engine_versions.html.markdown index d6b6ce23a780..31cfddf323f0 100644 --- a/third_party/terraform/website/docs/d/google_container_engine_versions.html.markdown +++ b/third_party/terraform/website/docs/d/container_engine_versions.html.markdown @@ -21,6 +21,7 @@ support the same version. ```hcl data "google_container_engine_versions" "central1b" { + provider = "google-beta" location = "us-central1-b" version_prefix = "1.12." } @@ -36,6 +37,10 @@ resource "google_container_cluster" "foo" { password = "adoy.rm" } } + +output "stable_channel_version" { + value = data.google_container_engine_versions.central1b.release_channel_default_version["STABLE"] +} ``` ## Argument Reference @@ -66,3 +71,4 @@ The following attributes are exported: * `latest_master_version` - The latest version available in the given zone for use with master instances. * `latest_node_version` - The latest version available in the given zone for use with node instances. * `default_cluster_version` - Version of Kubernetes the service deploys by default. +* `release_channel_default_version` ([Beta](https://terraform.io/docs/providers/google/guides/provider_versions.html)) - A map from a release channel name to the channel's default version. diff --git a/third_party/terraform/website/docs/d/google_container_registry_image.html.markdown b/third_party/terraform/website/docs/d/container_registry_image.html.markdown similarity index 100% rename from third_party/terraform/website/docs/d/google_container_registry_image.html.markdown rename to third_party/terraform/website/docs/d/container_registry_image.html.markdown diff --git a/third_party/terraform/website/docs/d/google_container_registry_repository.html.markdown b/third_party/terraform/website/docs/d/container_registry_repository.html.markdown similarity index 100% rename from third_party/terraform/website/docs/d/google_container_registry_repository.html.markdown rename to third_party/terraform/website/docs/d/container_registry_repository.html.markdown diff --git a/third_party/terraform/website/docs/d/datasource_google_service_account_id_token.html.markdown b/third_party/terraform/website/docs/d/datasource_google_service_account_id_token.html.markdown new file mode 100644 index 000000000000..81b91ccb970f --- /dev/null +++ b/third_party/terraform/website/docs/d/datasource_google_service_account_id_token.html.markdown @@ -0,0 +1,99 @@ +--- +subcategory: "Cloud Platform" +layout: "google" +page_title: "Google: google_service_account_id_token" +sidebar_current: "docs-google-service-account-id-token" +description: |- + Produces an OpenID Connect token for service accounts +--- + +# google\_service\_account\_id\_token + +This data source provides a Google OpenID Connect (`oidc`) `id_token`. Tokens issued from this data source are typically used to call external services that accept OIDC tokens for authentication (e.g. [Google Cloud Run](https://cloud.google.com/run/docs/authenticating/service-to-service)). + +For more information see +[OpenID Connect](https://openid.net/specs/openid-connect-core-1_0.html#IDToken). + +## Example Usage - ServiceAccount JSON credential file.
+ `google_service_account_id_token` will use the configured [provider credentials](https://www.terraform.io/docs/providers/google/guides/provider_reference.html#credentials-1) + + ```hcl + data "google_service_account_id_token" "oidc" { + target_audience = "https://foo.bar/" + } + + output "oidc_token" { + value = data.google_service_account_id_token.oidc.id_token + } + ``` + +## Example Usage - Service Account Impersonation. + `google_service_account_id_token` will use impersonated credentials provided in the background by [google_service_account_access_token](https://www.terraform.io/docs/providers/google/d/datasource_google_service_account_access_token.html). + + Note: to use the following, you must grant `target_service_account` the + `roles/iam.serviceAccountTokenCreator` role on itself. + + ```hcl + data "google_service_account_access_token" "impersonated" { + provider = google + target_service_account = "impersonated-account@project.iam.gserviceaccount.com" + delegates = [] + scopes = ["userinfo-email", "cloud-platform"] + lifetime = "300s" + } + + provider "google" { + alias = "impersonated" + access_token = data.google_service_account_access_token.impersonated.access_token + } + + data "google_service_account_id_token" "oidc" { + provider = google.impersonated + target_service_account = "impersonated-account@project.iam.gserviceaccount.com" + delegates = [] + include_email = true + target_audience = "https://foo.bar/" + } + + output "oidc_token" { + value = data.google_service_account_id_token.oidc.id_token + } + ``` + +## Example Usage - Invoking Cloud Run Endpoint + + The following configuration will invoke a [Cloud Run](https://cloud.google.com/run/docs/authenticating/service-to-service) endpoint where the service account Terraform runs as has previously been granted the `roles/run.invoker` role. + +```hcl + +data "google_service_account_id_token" "oidc" { + target_audience = "https://your.cloud.run.app/" +} + +data "http" "cloudrun" { + url = "https://your.cloud.run.app/" + request_headers = { + Authorization = "Bearer ${data.google_service_account_id_token.oidc.id_token}" + } +} + + +output "cloud_run_response" { + value = data.http.cloudrun.body +} +``` + +## Argument Reference + +The following arguments are supported: + +* `target_audience` (Required) - The audience claim for the `id_token`. +* `target_service_account` (Optional) - The email of the service account being impersonated. Used only when using impersonation mode. +* `delegates` (Optional) - Delegate chain of approvals needed to perform full impersonation. Specify the fully qualified service account name. Used only when using impersonation mode. +* `include_email` (Optional) - Include the verified email in the claim. Used only when using impersonation mode. + +## Attributes Reference + +The following attribute is exported: + +* `id_token` - The `id_token` representing the newly generated identity.
diff --git a/third_party/terraform/website/docs/d/datasource_dns_keys.html.markdown b/third_party/terraform/website/docs/d/dns_keys.html.markdown similarity index 100% rename from third_party/terraform/website/docs/d/datasource_dns_keys.html.markdown rename to third_party/terraform/website/docs/d/dns_keys.html.markdown diff --git a/third_party/terraform/website/docs/d/firebase_web_app.html.markdown b/third_party/terraform/website/docs/d/firebase_web_app.html.markdown new file mode 100644 index 000000000000..6fa83cbe00c3 --- /dev/null +++ b/third_party/terraform/website/docs/d/firebase_web_app.html.markdown @@ -0,0 +1,48 @@ +--- +subcategory: "Firebase" +layout: "google" +page_title: "Google: google_firebase_web_app" +sidebar_current: "docs-google-firebase-web-app" +description: |- + A Google Cloud Firebase web application instance +--- + +# google\_firebase\_web\_app + +A Google Cloud Firebase web application instance + +~> **Warning:** This resource is in beta, and should be used with the terraform-provider-google-beta provider. +See [Provider Versions](https://terraform.io/docs/providers/google/guides/provider_versions.html) for more details on beta resources. + + +## Argument Reference + +The following arguments are supported: + + +* `app_id` - + (Required) + The app_id of the Firebase Web App. + + +- - - + + +* `project` - (Optional) The ID of the project in which the resource belongs. + If it is not provided, the provider project is used. + + +## Attributes Reference + +In addition to the arguments listed above, the following computed attributes are exported: + +* `id` - an identifier for the resource with format `{{name}}` + +* `name` - + The fully qualified resource name of the App, for example: + projects/projectId/webApps/appId + +* `app_id` - + Immutable. The globally unique, Firebase-assigned identifier of the App. + This identifier should be treated as an opaque token, as the data format is not specified. + diff --git a/third_party/terraform/website/docs/d/firebase_web_app_config.html.markdown b/third_party/terraform/website/docs/d/firebase_web_app_config.html.markdown new file mode 100644 index 000000000000..64e34c8f072a --- /dev/null +++ b/third_party/terraform/website/docs/d/firebase_web_app_config.html.markdown @@ -0,0 +1,66 @@ +--- +subcategory: "Firebase" +layout: "google" +page_title: "Google: google_firebase_web_app_config" +sidebar_current: "docs-google-firebase-web-app-config" +description: |- + A Google Cloud Firebase web application configuration +--- + +# google\_firebase\_web\_app\_config + +A Google Cloud Firebase web application configuration + +~> **Warning:** This resource is in beta, and should be used with the terraform-provider-google-beta provider. +See [Provider Versions](https://terraform.io/docs/providers/google/guides/provider_versions.html) for more details on beta resources. + +To get more information about WebApp, see: + +* [API documentation](https://firebase.google.com/docs/projects/api/reference/rest/v1beta1/projects.webApps) +* How-to Guides + * [Official Documentation](https://firebase.google.com/) + + +## Argument Reference +The following arguments are supported: + +* `web_app_id` - (Required) The ID of the Firebase Web App. + +- - - + +* `project` - (Optional) The ID of the project in which the resource belongs. If it + is not provided, the provider project is used. + +## Attributes Reference + +In addition to the arguments listed above, the following attributes are exported: + +* `api_key` - + The API key associated with the web App.
+ +* `auth_domain` - + The domain Firebase Auth configures for OAuth redirects, in the format: + projectId.firebaseapp.com + +* `database_url` - + The default Firebase Realtime Database URL. + +* `storage_bucket` - + The default Cloud Storage for Firebase storage bucket name. + +* `location_id` - + The ID of the project's default GCP resource location. The location is one of the available GCP resource + locations. + This field is omitted if the default GCP resource location has not been finalized yet. To set your project's + default GCP resource location, call defaultLocation.finalize after you add Firebase services to your project. + +* `messaging_sender_id` - + The sender ID for use with Firebase Cloud Messaging. + +* `measurement_id` - + The unique Google-assigned identifier of the Google Analytics web stream associated with the Firebase Web App. + Firebase SDKs use this ID to interact with Google Analytics APIs. + This field is only present if the App is linked to a web stream in a Google Analytics App + Web property. + Learn more about this ID and Google Analytics web streams in the Analytics documentation. + To generate a measurementId and link the Web App with a Google Analytics web stream, + call projects.addGoogleAnalytics. \ No newline at end of file diff --git a/third_party/terraform/website/docs/d/google_folder.html.markdown b/third_party/terraform/website/docs/d/folder.html.markdown similarity index 100% rename from third_party/terraform/website/docs/d/google_folder.html.markdown rename to third_party/terraform/website/docs/d/folder.html.markdown diff --git a/third_party/terraform/website/docs/d/datasource_google_folder_organization_policy.html.markdown b/third_party/terraform/website/docs/d/folder_organization_policy.html.markdown similarity index 100% rename from third_party/terraform/website/docs/d/datasource_google_folder_organization_policy.html.markdown rename to third_party/terraform/website/docs/d/folder_organization_policy.html.markdown diff --git a/third_party/terraform/website/docs/d/game_services_game_server_deployment_rollout.html.markdown b/third_party/terraform/website/docs/d/game_services_game_server_deployment_rollout.html.markdown new file mode 100644 index 000000000000..6823fac8d3f7 --- /dev/null +++ b/third_party/terraform/website/docs/d/game_services_game_server_deployment_rollout.html.markdown @@ -0,0 +1,72 @@ +--- +subcategory: "Game Servers" +layout: "google" +page_title: "Google: google_game_services_game_server_deployment_rollout" +sidebar_current: "docs-google-datasource-game-services-game-server-deployment-rollout" +description: |- + Get the rollout state. +--- + +# google\_game\_services\_game\_server\_deployment\_rollout + +Use this data source to get the rollout state. + +https://cloud.google.com/game-servers/docs/reference/rest/v1beta/GameServerDeploymentRollout + +## Example Usage + + +```hcl +data "google_game_services_game_server_deployment_rollout" "qa" { + provider = google-beta + deployment_id = "tf-test-deployment-s8sn12jt2c" +} +``` + +## Argument Reference + +The following arguments are supported: + + +* `deployment_id` - (Required) + The deployment to get the rollout state from. Only one rollout is associated with each deployment. + + +## Attributes Reference + +In addition to the arguments listed above, the following attributes are exported: + +* `default_game_server_config` - + This field points to the game server config that is + applied by default to all realms and clusters.
For example, + `projects/my-project/locations/global/gameServerDeployments/my-game/configs/my-config`. + + +* `game_server_config_overrides` - + The `game_server_config_overrides` field contains the per-game-server config + overrides. The overrides are processed in the order they are listed. As + soon as a match is found for a cluster, the rest of the list is not + processed. Structure is documented below. + +* `project` - The ID of the project in which the resource belongs. + If it is not provided, the provider project is used. + + +The `game_server_config_overrides` block contains: + +* `realms_selector` - + Selection by realms. Structure is documented below. + +* `config_version` - + Version of the configuration. + +The `realms_selector` block contains: + +* `realms` - + List of realms to match against. + +* `id` - an identifier for the resource with format `projects/{{project}}/locations/global/gameServerDeployments/{{deployment_id}}/rollout` + +* `name` - + The resource id of the game server deployment, + e.g. `projects/my-project/locations/global/gameServerDeployments/my-deployment/rollout`. diff --git a/third_party/terraform/website/docs/d/google_iam_policy.html.markdown b/third_party/terraform/website/docs/d/iam_policy.html.markdown similarity index 88% rename from third_party/terraform/website/docs/d/google_iam_policy.html.markdown rename to third_party/terraform/website/docs/d/iam_policy.html.markdown index d2a0a4db705c..5b2ea7d18810 100644 --- a/third_party/terraform/website/docs/d/google_iam_policy.html.markdown +++ b/third_party/terraform/website/docs/d/iam_policy.html.markdown @@ -87,6 +87,15 @@ each accept the following arguments: * `log_type` (Required) Defines the logging level. `DATA_READ`, `DATA_WRITE` and `ADMIN_READ` capture different types of events. See [the audit configuration documentation](https://cloud.google.com/resource-manager/reference/rest/Shared.Types/AuditConfig) for more details. * `exempted_members` (Optional) Specifies the identities that are exempt from these types of logging operations. Follows the same format of the `members` array for `binding`. +* `condition` - (Optional) An [IAM Condition](https://cloud.google.com/iam/docs/conditions-overview) for a given binding. Structure is documented below. + +The `condition` block supports: + +* `expression` - (Required) Textual representation of an expression in Common Expression Language syntax. + +* `title` - (Required) A title for the expression, i.e. a short string describing its purpose. + +* `description` - (Optional) An optional description of the expression. This is a longer text which describes the expression, e.g. when hovered over it in a UI.
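+
+As a sketch of how the `condition` block fits into a `binding` (the role, member, and expression values here are illustrative, not taken from this page):
+
+```hcl
+data "google_iam_policy" "admin" {
+  binding {
+    role    = "roles/compute.admin"
+    members = ["user:jane@example.com"]
+
+    # Grant the binding only while the condition expression evaluates to true.
+    condition {
+      title       = "expires_after_2020_12_31"
+      description = "Expiring at midnight of 2020-12-31"
+      expression  = "request.time < timestamp(\"2021-01-01T00:00:00Z\")"
+    }
+  }
+}
+```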
## Attributes Reference diff --git a/third_party/terraform/website/docs/d/datasource_google_iam_role.html.markdown b/third_party/terraform/website/docs/d/iam_role.html.markdown similarity index 100% rename from third_party/terraform/website/docs/d/datasource_google_iam_role.html.markdown rename to third_party/terraform/website/docs/d/iam_role.html.markdown diff --git a/third_party/terraform/website/docs/d/iam_testable_permissions.html.markdown b/third_party/terraform/website/docs/d/iam_testable_permissions.html.markdown new file mode 100644 index 000000000000..6c37065b44f9 --- /dev/null +++ b/third_party/terraform/website/docs/d/iam_testable_permissions.html.markdown @@ -0,0 +1,46 @@ +--- +subcategory: "Cloud Platform" +layout: "google" +page_title: "Google: google_iam_testable_permissions" +sidebar_current: "docs-google-datasource-iam-testable-permissions" +description: |- + Retrieve a list of testable permissions for a resource. Testable permissions are the permissions that a user can add or remove in a role at a given resource. The resource can be referenced either via the full resource name or via a URI. +--- + +# google\_iam\_testable\_permissions + +Retrieve a list of testable permissions for a resource. Testable permissions are the permissions that a user can add or remove in a role at a given resource. The resource can be referenced either via the full resource name or via a URI. + +## Example Usage + +Retrieve all the supported permissions able to be set on `my-project` that are in either GA or BETA. This is useful for dynamically constructing custom roles. + +```hcl +data "google_iam_testable_permissions" "perms" { + full_resource_name = "//cloudresourcemanager.googleapis.com/projects/my-project" + stages = ["GA", "BETA"] +} +``` + +## Argument Reference + +The following arguments are supported: + +* `full_resource_name` - (Required) See [full resource name documentation](https://cloud.google.com/apis/design/resource_names#full_resource_name) for more detail. +* `stages` - (Optional) The acceptable release stages of the permission in the output. Note that `BETA` does not include permissions in `GA`, but you can specify both with `["GA", "BETA"]` for example. Can be a list of `"ALPHA"`, `"BETA"`, `"GA"`, `"DEPRECATED"`. Default is `["GA"]`. +* `custom_support_level` - (Optional) The level of support for custom roles. Can be one of `"NOT_SUPPORTED"`, `"SUPPORTED"`, `"TESTING"`. Default is `"SUPPORTED"`. + +## Attributes Reference + +The following attributes are exported: + +* `permissions` - A list of permissions matching the provided input. Structure is defined below. + +The `permissions` block supports: + +* `name` - Name of the permission. +* `title` - Human readable title of the permission. +* `stage` - Release stage of the permission. +* `custom_support_level` - The support level of this permission for custom roles. +* `api_disabled` - Whether the corresponding API has been enabled for the resource.
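+
+One plausible follow-on, sketched under the assumption that you only want the `SUPPORTED` permissions (the role name and filter are illustrative, and `google_project_iam_custom_role` is a separate resource not documented on this page):
+
+```hcl
+# Build a custom role from the matching permissions returned above.
+resource "google_project_iam_custom_role" "my_custom_role" {
+  project = "my-project"
+  role_id = "myCustomRole"
+  title   = "My Custom Role"
+
+  # Keep only permissions that custom roles fully support.
+  permissions = [
+    for p in data.google_iam_testable_permissions.perms.permissions :
+    p.name if p.custom_support_level == "SUPPORTED"
+  ]
+}
+```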
+ diff --git a/third_party/terraform/website/docs/d/google_kms_crypto_key.html.markdown b/third_party/terraform/website/docs/d/kms_crypto_key.html.markdown similarity index 97% rename from third_party/terraform/website/docs/d/google_kms_crypto_key.html.markdown rename to third_party/terraform/website/docs/d/kms_crypto_key.html.markdown index cc91bd9fc5f8..7513de8167b3 100644 --- a/third_party/terraform/website/docs/d/google_kms_crypto_key.html.markdown +++ b/third_party/terraform/website/docs/d/kms_crypto_key.html.markdown @@ -1,5 +1,5 @@ --- -subcategory: "Cloud KMS" +subcategory: "Cloud Key Management Service" layout: "google" page_title: "Google: google_kms_crypto_key" sidebar_current: "docs-google-datasource-kms-crypto-key" diff --git a/third_party/terraform/website/docs/d/google_kms_crypto_key_version.html.markdown b/third_party/terraform/website/docs/d/kms_crypto_key_version.html.markdown similarity index 93% rename from third_party/terraform/website/docs/d/google_kms_crypto_key_version.html.markdown rename to third_party/terraform/website/docs/d/kms_crypto_key_version.html.markdown index 029879306a89..7c37e240aad3 100644 --- a/third_party/terraform/website/docs/d/google_kms_crypto_key_version.html.markdown +++ b/third_party/terraform/website/docs/d/kms_crypto_key_version.html.markdown @@ -1,5 +1,5 @@ --- -subcategory: "Cloud KMS" +subcategory: "Cloud Key Management Service" layout: "google" page_title: "Google: google_kms_crypto_key_version" sidebar_current: "docs-google-datasource-kms-crypto-key-version" @@ -47,6 +47,8 @@ The following arguments are supported: In addition to the arguments listed above, the following computed attributes are exported: +* `id` - an identifier for the resource with format `//cloudkms.googleapis.com/v1/{{crypto_key}}/cryptoKeyVersions/{{version}}` + * `state` - The current state of the CryptoKeyVersion. See the [state reference](https://cloud.google.com/kms/docs/reference/rest/v1/projects.locations.keyRings.cryptoKeys.cryptoKeyVersions#CryptoKeyVersion.CryptoKeyVersionState) for possible outputs. * `protection_level` - The ProtectionLevel describing how crypto operations are performed with this CryptoKeyVersion. See the [protection_level reference](https://cloud.google.com/kms/docs/reference/rest/v1/ProtectionLevel) for possible outputs. 
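+
+As a brief sketch of how the new `id` attribute might be consumed (the data source names and key names here are assumptions for illustration, not from this page):
+
+```hcl
+data "google_kms_key_ring" "my_key_ring" {
+  name     = "my-key-ring"
+  location = "us-central1"
+}
+
+data "google_kms_crypto_key" "my_crypto_key" {
+  name     = "my-crypto-key"
+  key_ring = data.google_kms_key_ring.my_key_ring.id
+}
+
+# Defaults to version 1 when `version` is not set.
+data "google_kms_crypto_key_version" "my_version" {
+  crypto_key = data.google_kms_crypto_key.my_crypto_key.id
+}
+
+output "key_version_id" {
+  value = data.google_kms_crypto_key_version.my_version.id
+}
+```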
diff --git a/third_party/terraform/website/docs/d/google_kms_key_ring.html.markdown b/third_party/terraform/website/docs/d/kms_key_ring.html.markdown similarity index 97% rename from third_party/terraform/website/docs/d/google_kms_key_ring.html.markdown rename to third_party/terraform/website/docs/d/kms_key_ring.html.markdown index 3f788a0fc007..8b48d6f94284 100644 --- a/third_party/terraform/website/docs/d/google_kms_key_ring.html.markdown +++ b/third_party/terraform/website/docs/d/kms_key_ring.html.markdown @@ -1,5 +1,5 @@ --- -subcategory: "Cloud KMS" +subcategory: "Cloud Key Management Service" layout: "google" page_title: "Google: google_kms_key_ring" sidebar_current: "docs-google-datasource-kms-key-ring" diff --git a/third_party/terraform/website/docs/d/google_kms_secret.html.markdown b/third_party/terraform/website/docs/d/kms_secret.html.markdown similarity index 96% rename from third_party/terraform/website/docs/d/google_kms_secret.html.markdown rename to third_party/terraform/website/docs/d/kms_secret.html.markdown index 28a59ddd9b60..84ab16bb19f7 100644 --- a/third_party/terraform/website/docs/d/google_kms_secret.html.markdown +++ b/third_party/terraform/website/docs/d/kms_secret.html.markdown @@ -1,5 +1,5 @@ --- -subcategory: "Cloud KMS" +subcategory: "Cloud Key Management Service" layout: "google" page_title: "Google: google_kms_secret" sidebar_current: "docs-google-kms-secret" @@ -15,7 +15,7 @@ within your resource definitions. For more information see [the official documentation](https://cloud.google.com/kms/docs/encrypt-decrypt). -~> **NOTE**: Using this data provider will allow you to conceal secret data within your +~> **NOTE:** Using this data provider will allow you to conceal secret data within your resource definitions, but it does not take care of protecting that data in the logging output, plan output, or state output. Please take care to secure your secret data outside of resource definitions. diff --git a/third_party/terraform/website/docs/d/google_kms_secret_ciphertext.html.markdown b/third_party/terraform/website/docs/d/kms_secret_ciphertext.html.markdown similarity index 96% rename from third_party/terraform/website/docs/d/google_kms_secret_ciphertext.html.markdown rename to third_party/terraform/website/docs/d/kms_secret_ciphertext.html.markdown index c868fdc536d5..a3133fdc77c5 100644 --- a/third_party/terraform/website/docs/d/google_kms_secret_ciphertext.html.markdown +++ b/third_party/terraform/website/docs/d/kms_secret_ciphertext.html.markdown @@ -1,5 +1,5 @@ --- -subcategory: "Cloud KMS" +subcategory: "Cloud Key Management Service" layout: "google" page_title: "Google: google_kms_secret_ciphertext" sidebar_current: "docs-google-kms-secret-ciphertext" @@ -17,7 +17,7 @@ ciphertext within your resource definitions. For more information see [the official documentation](https://cloud.google.com/kms/docs/encrypt-decrypt). -~> **NOTE**: Using this data source will allow you to conceal secret data within your +~> **NOTE:** Using this data source will allow you to conceal secret data within your resource definitions, but it does not take care of protecting that data in the logging output, plan output, or state output. Please take care to secure your secret data outside of resource definitions. 
diff --git a/third_party/terraform/website/docs/d/datasource_monitoring_app_engine_service.html.markdown b/third_party/terraform/website/docs/d/monitoring_app_engine_service.html.markdown similarity index 98% rename from third_party/terraform/website/docs/d/datasource_monitoring_app_engine_service.html.markdown rename to third_party/terraform/website/docs/d/monitoring_app_engine_service.html.markdown index a2b1aa3ee3f6..d904eb7d53f8 100644 --- a/third_party/terraform/website/docs/d/datasource_monitoring_app_engine_service.html.markdown +++ b/third_party/terraform/website/docs/d/monitoring_app_engine_service.html.markdown @@ -1,5 +1,5 @@ --- -subcategory: "Stackdriver Monitoring" +subcategory: "Cloud (Stackdriver) Monitoring" layout: "google" page_title: "Google: google_monitoring_app_engine_service" sidebar_current: "docs-google-datasource-monitoring-app-engine-service" diff --git a/third_party/terraform/website/docs/d/datasource_monitoring_notification_channel.html.markdown b/third_party/terraform/website/docs/d/monitoring_notification_channel.html.markdown similarity index 98% rename from third_party/terraform/website/docs/d/datasource_monitoring_notification_channel.html.markdown rename to third_party/terraform/website/docs/d/monitoring_notification_channel.html.markdown index fa8abbf52af1..09fcb2f89407 100644 --- a/third_party/terraform/website/docs/d/datasource_monitoring_notification_channel.html.markdown +++ b/third_party/terraform/website/docs/d/monitoring_notification_channel.html.markdown @@ -1,5 +1,5 @@ --- -subcategory: "Stackdriver Monitoring" +subcategory: "Cloud (Stackdriver) Monitoring" layout: "google" page_title: "Google: google_monitoring_notification_channel" sidebar_current: "docs-google-datasource-monitoring-notification-channel" diff --git a/third_party/terraform/website/docs/d/datasource_google_monitoring_uptime_check_ips.html.markdown b/third_party/terraform/website/docs/d/monitoring_uptime_check_ips.html.markdown similarity index 96% rename from third_party/terraform/website/docs/d/datasource_google_monitoring_uptime_check_ips.html.markdown rename to third_party/terraform/website/docs/d/monitoring_uptime_check_ips.html.markdown index 753aaeed9e5d..821a3e08399e 100644 --- a/third_party/terraform/website/docs/d/datasource_google_monitoring_uptime_check_ips.html.markdown +++ b/third_party/terraform/website/docs/d/monitoring_uptime_check_ips.html.markdown @@ -1,5 +1,5 @@ --- -subcategory: "Stackdriver Monitoring" +subcategory: "Cloud (Stackdriver) Monitoring" layout: "google" page_title: "Google: google_monitoring_uptime_check_ips" sidebar_current: "docs-google-datasource-google-monitoring-uptime-check-ips" diff --git a/third_party/terraform/website/docs/d/datasource_google_netblock_ip_ranges.html.markdown b/third_party/terraform/website/docs/d/netblock_ip_ranges.html.markdown similarity index 96% rename from third_party/terraform/website/docs/d/datasource_google_netblock_ip_ranges.html.markdown rename to third_party/terraform/website/docs/d/netblock_ip_ranges.html.markdown index 219e2b38a8de..85af3b063e00 100644 --- a/third_party/terraform/website/docs/d/datasource_google_netblock_ip_ranges.html.markdown +++ b/third_party/terraform/website/docs/d/netblock_ip_ranges.html.markdown @@ -64,7 +64,7 @@ The following arguments are supported: * `cloud-netblocks` - Corresponds to the IP addresses used for resources on Google Cloud Platform. 
[More details.](https://cloud.google.com/compute/docs/faq#where_can_i_find_product_name_short_ip_ranges) - * `google-netblocks` - Corresponds to IP addresses used for Google services. [More details.](https://support.google.com/a/answer/33786?hl=en) + * `google-netblocks` - Corresponds to IP addresses used for Google services. [More details.](https://cloud.google.com/compute/docs/faq#where_can_i_find_product_name_short_ip_ranges) * `restricted-googleapis` - Corresponds to the IP addresses used for Private Google Access only for services that support VPC Service Controls API access. [More details.](https://cloud.google.com/vpc/docs/private-access-options#domain-vips) diff --git a/third_party/terraform/website/docs/d/google_organization.html.markdown b/third_party/terraform/website/docs/d/organization.html.markdown similarity index 100% rename from third_party/terraform/website/docs/d/google_organization.html.markdown rename to third_party/terraform/website/docs/d/organization.html.markdown diff --git a/third_party/terraform/website/docs/d/google_project.html.markdown b/third_party/terraform/website/docs/d/project.html.markdown similarity index 100% rename from third_party/terraform/website/docs/d/google_project.html.markdown rename to third_party/terraform/website/docs/d/project.html.markdown diff --git a/third_party/terraform/website/docs/d/datasource_google_project_organization_policy.html.markdown b/third_party/terraform/website/docs/d/project_organization_policy.html.markdown similarity index 100% rename from third_party/terraform/website/docs/d/datasource_google_project_organization_policy.html.markdown rename to third_party/terraform/website/docs/d/project_organization_policy.html.markdown diff --git a/third_party/terraform/website/docs/d/google_projects.html.markdown b/third_party/terraform/website/docs/d/projects.html.markdown similarity index 100% rename from third_party/terraform/website/docs/d/google_projects.html.markdown rename to third_party/terraform/website/docs/d/projects.html.markdown diff --git a/third_party/terraform/website/docs/d/redis_instance.html.markdown b/third_party/terraform/website/docs/d/redis_instance.html.markdown new file mode 100644 index 000000000000..bbbf501f67b2 --- /dev/null +++ b/third_party/terraform/website/docs/d/redis_instance.html.markdown @@ -0,0 +1,114 @@ +--- +subcategory: "Memorystore (Redis)" +layout: "google" +page_title: "Google: google_redis_instance" +sidebar_current: "docs-google-datasource-redis-instance" +description: |- + Get information about a Google Cloud Redis instance. +--- + +# google\_redis\_instance + +Get information about a Google Cloud Redis instance. For more information see +the [official documentation](https://cloud.google.com/memorystore/docs/redis) +and [API](https://cloud.google.com/memorystore/docs/redis/apis). + +## Example Usage + +```hcl +data "google_redis_instance" "default" { + name = "my-redis-instance" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `name` - (Required) The name of a Redis instance. + +- - - + +* `project` - (Optional) The project in which the resource belongs. If it + is not provided, the provider project is used. + +* `region` - (Optional) The region in which the resource belongs. If it + is not provided, the provider region is used. + +## Attributes Reference + +In addition to the arguments listed above, the following computed attributes are exported: + +* `memory_size_gb` - + Redis memory size in GiB. 
+ +* `alternative_location_id` - + Only applicable to the STANDARD_HA tier, which protects the instance + against zonal failures by provisioning it across two zones. + If provided, it must be a different zone from the one provided in + [locationId]. + +* `authorized_network` - + The full name of the Google Compute Engine network to which the + instance is connected. If left unspecified, the default network + will be used. + +* `connect_mode` - + The connection mode of the Redis instance. + +* `display_name` - + An arbitrary and optional user-provided name for the instance. + +* `labels` - + Resource labels to represent user provided metadata. + +* `redis_configs` - + Redis configuration parameters, according to http://redis.io/topics/config. + Please check Memorystore documentation for the list of supported parameters: + https://cloud.google.com/memorystore/docs/redis/reference/rest/v1/projects.locations.instances#Instance.FIELDS.redis_configs + +* `location_id` - + The zone where the instance will be provisioned. If not provided, + the service will choose a zone for the instance. For STANDARD_HA tier, + instances will be created across two zones for protection against + zonal failures. If [alternativeLocationId] is also provided, it must + be different from [locationId]. + +* `redis_version` - + The version of Redis software. If not provided, the latest supported + version will be used. Currently, the supported values are: + - REDIS_4_0 for Redis 4.0 compatibility + - REDIS_3_2 for Redis 3.2 compatibility + +* `reserved_ip_range` - + The CIDR range of internal addresses that are reserved for this + instance. If not provided, the service will choose an unused /29 + block, for example, 10.0.0.0/29 or 192.168.0.0/29. Ranges must be + unique and non-overlapping with existing subnets in an authorized + network. + +* `tier` - + The service tier of the instance. Must be one of these values: + - BASIC: standalone instance + - STANDARD_HA: highly available primary/replica instances + + Default value: `BASIC` + Possible values are: + * `BASIC` + * `STANDARD_HA` + +* `host` - Hostname or IP address of the exposed Redis endpoint used by clients + to connect to the service. + +* `port` - The port number of the exposed Redis endpoint. + +* `create_time` - + The time the instance was created in RFC3339 UTC "Zulu" format, + accurate to nanoseconds. + +* `current_location_id` - + The current zone where the Redis endpoint is placed. + For Basic Tier instances, this will always be the same as the + [locationId] provided by the user at creation time. For Standard Tier + instances, this can be either [locationId] or [alternativeLocationId] + and can change after a failover event.
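+
+For instance, a minimal sketch wiring the exported endpoint attributes to an output (building on the `default` data source from the example above):
+
+```hcl
+output "redis_endpoint" {
+  value = "${data.google_redis_instance.default.host}:${data.google_redis_instance.default.port}"
+}
+```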
diff --git a/third_party/terraform/website/docs/d/datasource_google_secret_manager_secret_version.html.markdown b/third_party/terraform/website/docs/d/secret_manager_secret_version.html.markdown similarity index 92% rename from third_party/terraform/website/docs/d/datasource_google_secret_manager_secret_version.html.markdown rename to third_party/terraform/website/docs/d/secret_manager_secret_version.html.markdown index 69d5373afc09..a316e77739f4 100644 --- a/third_party/terraform/website/docs/d/datasource_google_secret_manager_secret_version.html.markdown +++ b/third_party/terraform/website/docs/d/secret_manager_secret_version.html.markdown @@ -1,5 +1,5 @@ --- -subcategory: "Cloud Platform" +subcategory: "Secret Manager" layout: "google" page_title: "Google: google_secret_manager_secret_version" sidebar_current: "docs-google-datasource-secret-manager-secret-version" @@ -9,13 +9,12 @@ description: |- # google\_secret\_manager\_secret\_version -Get a Secret Manager secret's version. For more information see the [official documentation](https://cloud.google.com/secret-manager/docs/) and [API](https://cloud.google.com/secret-manager/docs/reference/rest/v1beta1/projects.secrets.versions). +Get a Secret Manager secret's version. For more information see the [official documentation](https://cloud.google.com/secret-manager/docs/) and [API](https://cloud.google.com/secret-manager/docs/reference/rest/v1/projects.secrets.versions). ## Example Usage ```hcl data "google_secret_manager_secret_version" "basic" { - provider = google-beta secret = "my-secret" } ``` diff --git a/third_party/terraform/website/docs/d/datasource_google_service_account.html.markdown b/third_party/terraform/website/docs/d/service_account.html.markdown similarity index 100% rename from third_party/terraform/website/docs/d/datasource_google_service_account.html.markdown rename to third_party/terraform/website/docs/d/service_account.html.markdown diff --git a/third_party/terraform/website/docs/d/datasource_google_service_account_access_token.html.markdown b/third_party/terraform/website/docs/d/service_account_access_token.html.markdown similarity index 100% rename from third_party/terraform/website/docs/d/datasource_google_service_account_access_token.html.markdown rename to third_party/terraform/website/docs/d/service_account_access_token.html.markdown diff --git a/third_party/terraform/website/docs/d/datasource_google_service_account_key.html.markdown b/third_party/terraform/website/docs/d/service_account_key.html.markdown similarity index 82% rename from third_party/terraform/website/docs/d/datasource_google_service_account_key.html.markdown rename to third_party/terraform/website/docs/d/service_account_key.html.markdown index 69a2201b9a04..fdacf3e1674a 100644 --- a/third_party/terraform/website/docs/d/datasource_google_service_account_key.html.markdown +++ b/third_party/terraform/website/docs/d/service_account_key.html.markdown @@ -7,11 +7,10 @@ description: |- Get a Google Cloud Platform service account Public Key --- -# google\_service\_account\_key +# google_service_account_key Get service account public key. For more information, see [the official documentation](https://cloud.google.com/iam/docs/creating-managing-service-account-keys) and [API](https://cloud.google.com/iam/reference/rest/v1/projects.serviceAccounts.keys/get). - ## Example Usage ```hcl @@ -34,13 +33,13 @@ data "google_service_account_key" "mykey" { The following arguments are supported: * `name` - (Required) The name of the service account key. 
This must have format - `projects/{PROJECT_ID}/serviceAccounts/{ACCOUNT}/keys/{KEYID}`, where `{ACCOUNT}` - is the email address or unique id of the service account. + `projects/{PROJECT_ID}/serviceAccounts/{ACCOUNT}/keys/{KEYID}`, where `{ACCOUNT}` + is the email address or unique id of the service account. * `project` - (Optional) The ID of the project that the service account will be created in. - Defaults to the provider project configuration. + Defaults to the provider project configuration. -* `public_key_type` (Optional) The output format of the public key requested. X509_PEM is the default output format. +* `public_key_type` (Optional) The output format of the public key requested. TYPE_X509_PEM_FILE is the default output format. ## Attributes Reference diff --git a/third_party/terraform/website/docs/d/datasource_google_sql_ca_certs.html.markdown b/third_party/terraform/website/docs/d/sql_ca_certs.html.markdown similarity index 100% rename from third_party/terraform/website/docs/d/datasource_google_sql_ca_certs.html.markdown rename to third_party/terraform/website/docs/d/sql_ca_certs.html.markdown diff --git a/third_party/terraform/website/docs/d/sql_database_instance.html.markdown b/third_party/terraform/website/docs/d/sql_database_instance.html.markdown new file mode 100644 index 000000000000..e65cd9134e8a --- /dev/null +++ b/third_party/terraform/website/docs/d/sql_database_instance.html.markdown @@ -0,0 +1,183 @@ +--- +subcategory: "Cloud SQL" +layout: "google" +page_title: "Google: google_sql_database_instance" +sidebar_current: "docs-google-datasource-sql-database-instance" +description: |- + Get a SQL database instance in Google Cloud SQL. +--- + +# google\_sql\_database\_instance + +Use this data source to get information about a Cloud SQL instance. + +## Example Usage + + +```hcl +data "google_sql_database_instance" "qa" { + name = google_sql_database_instance.master.name +} +``` + +## Argument Reference + +The following arguments are supported: + +* `name` - (Required) The name of the instance. + +* `project` - (Optional) The ID of the project in which the resource belongs. + +* `region` - (Optional) The region the instance exists in. + +## Attributes Reference + +In addition to the arguments listed above, the following attributes are exported: + +* `settings` - The settings to use for the database. The + configuration is detailed below. + +* `database_version` - The MySQL, PostgreSQL or SQL Server (beta) version to use. + +* `master_instance_name` - The name of the instance that will act as + the master in the replication setup. + +* `replica_configuration` - The configuration for replication. The + configuration is detailed below. + +* `root_password` - Initial root password. Required for MS SQL Server, ignored by MySQL and PostgreSQL. + +* `encryption_key_name` - ([Beta](https://terraform.io/docs/providers/google/guides/provider_versions.html)) + The full path to the encryption key used for the CMEK disk encryption. + +The `settings` block contains: + +* `tier` - The machine type to use. + +* `activation_policy` - This specifies when the instance should be + active. Can be either `ALWAYS`, `NEVER` or `ON_DEMAND`. + +* `authorized_gae_applications` - (Deprecated) This property is only applicable to First Generation instances. + First Generation instances are now deprecated, see [here](https://cloud.google.com/sql/docs/mysql/upgrade-2nd-gen) + for information on how to upgrade to Second Generation instances.
+ A list of Google App Engine (GAE) project names that are allowed to access this instance. + +* `availability_type` - The availability type of the Cloud SQL +instance, high availability (`REGIONAL`) or single zone (`ZONAL`). + +* `crash_safe_replication` - (Deprecated) This property is only applicable to First Generation instances. + First Generation instances are now deprecated, see [here](https://cloud.google.com/sql/docs/mysql/upgrade-2nd-gen) + for information on how to upgrade to Second Generation instances. + +* `disk_autoresize` - Configuration to increase storage size automatically. + +* `disk_size` - The size of data disk, in GB. + +* `disk_type` - The type of data disk. + +* `pricing_plan` - Pricing plan for this instance. + +* `replication_type` - This property is only applicable to First Generation instances. + First Generation instances are now deprecated, see [here](https://cloud.google.com/sql/docs/mysql/upgrade-2nd-gen) + for information on how to upgrade to Second Generation instances. + +* `user_labels` - A set of key/value user label pairs to assign to the instance. + +The `settings.database_flags` sublist contains: + +* `name` - Name of the flag. + +* `value` - Value of the flag. + +The `settings.backup_configuration` subblock contains: + +* `binary_log_enabled` - True if binary logging is enabled. + +* `enabled` - True if backup configuration is enabled. + +* `start_time` - `HH:MM` format time indicating when backup configuration starts. + +The `settings.ip_configuration` subblock contains: + +* `ipv4_enabled` - Whether this Cloud SQL instance should be assigned a public IPV4 address. + +* `private_network` - The VPC network from which the Cloud SQL instance is accessible for private IP. + +* `require_ssl` - True if mysqld defaults to `REQUIRE X509` for users connecting over IP. + +The `settings.ip_configuration.authorized_networks[]` sublist contains: + +* `expiration_time` - The [RFC 3339](https://tools.ietf.org/html/rfc3339) + formatted date time string indicating when this whitelist expires. + +* `name` - A name for this whitelist entry. + +* `value` - A CIDR notation IPv4 or IPv6 address that is allowed to access this instance. + +The `settings.location_preference` subblock contains: + +* `follow_gae_application` - A GAE application whose zone to remain in. + +* `zone` - The preferred compute engine zone. + +The `settings.maintenance_window` subblock for instances declares a one-hour +[maintenance window](https://cloud.google.com/sql/docs/instance-settings?hl=en#maintenance-window-2ndgen) +when an Instance can automatically restart to apply updates. The maintenance window is specified in UTC time. It contains: + +* `day` - Day of week (`1-7`), starting on Monday. + +* `hour` - Hour of day (`0-23`), ignored if `day` not set. + +* `update_track` - Receive updates earlier (`canary`) or later (`stable`). + +The `replica_configuration` block contains: + +* `ca_certificate` - PEM representation of the trusted CA's x509 certificate. + +* `client_certificate` - PEM representation of the slave's x509 certificate. + +* `client_key` - PEM representation of the slave's private key. + +* `connect_retry_interval` - The number of seconds between connect retries. + +* `dump_file_path` - Path to a SQL file in GCS from which slave instances are created. + +* `failover_target` - Specifies if the replica is the failover target. + +* `master_heartbeat_period` - Time in ms between replication heartbeats. + +* `password` - Password for the replication connection. + +* `sslCipher` - Permissible ciphers for use in SSL encryption. + +* `username` - Username for replication connection.
+ +* `verify_server_certificate` - True if the master's common name value is checked during the SSL handshake. + +* `self_link` - The URI of the created resource. + +* `connection_name` - The connection name of the instance to be used in connection strings. + +* `service_account_email_address` - The service account email address assigned to the instance. + +* `ip_address.0.ip_address` - The IPv4 address assigned. + +* `ip_address.0.time_to_retire` - The time this IP address will be retired, in RFC 3339 format. + +* `ip_address.0.type` - The type of this IP address. + +* `first_ip_address` - The first IPv4 address of any type assigned. + +* `public_ip_address` - The first public (`PRIMARY`) IPv4 address assigned. + +* `private_ip_address` - The first private (`PRIVATE`) IPv4 address assigned. + +* `settings.version` - Used to make sure changes to the `settings` block are atomic. + +* `server_ca_cert.0.cert` - The CA Certificate used to connect to the SQL Instance via SSL. + +* `server_ca_cert.0.common_name` - The CN valid for the CA Cert. + +* `server_ca_cert.0.create_time` - Creation time of the CA Cert. + +* `server_ca_cert.0.expiration_time` - Expiration time of the CA Cert. + +* `server_ca_cert.0.sha1_fingerprint` - SHA Fingerprint of the CA Cert. diff --git a/third_party/terraform/website/docs/d/google_storage_project_service_account.html.markdown b/third_party/terraform/website/docs/d/storage_project_service_account.html.markdown similarity index 98% rename from third_party/terraform/website/docs/d/google_storage_project_service_account.html.markdown rename to third_party/terraform/website/docs/d/storage_project_service_account.html.markdown index 017e712944b6..e1344dceec8e 100644 --- a/third_party/terraform/website/docs/d/google_storage_project_service_account.html.markdown +++ b/third_party/terraform/website/docs/d/storage_project_service_account.html.markdown @@ -1,5 +1,5 @@ --- -subcategory: "Cloud Platform" +subcategory: "Cloud Storage" layout: "google" page_title: "Google: google_storage_project_service_account" sidebar_current: "docs-google-datasource-storage-project-service-account" diff --git a/third_party/terraform/website/docs/d/google_storage_transfer_project_service_account.html.markdown b/third_party/terraform/website/docs/d/storage_transfer_project_service_account.html.markdown similarity index 96% rename from third_party/terraform/website/docs/d/google_storage_transfer_project_service_account.html.markdown rename to third_party/terraform/website/docs/d/storage_transfer_project_service_account.html.markdown index b3ad3bdaeb98..1d355e453225 100644 --- a/third_party/terraform/website/docs/d/google_storage_transfer_project_service_account.html.markdown +++ b/third_party/terraform/website/docs/d/storage_transfer_project_service_account.html.markdown @@ -1,5 +1,5 @@ --- -subcategory: "Cloud Platform" +subcategory: "Storage Transfer Service" layout: "google" page_title: "Google: google_storage_transfer_project_service_account" sidebar_current: "docs-google-datasource-storage-transfer-project-service-account" diff --git a/third_party/terraform/website/docs/d/datasource_tpu_tensorflow_versions.html.markdown b/third_party/terraform/website/docs/d/tpu_tensorflow_versions.html.markdown similarity index 97% rename from third_party/terraform/website/docs/d/datasource_tpu_tensorflow_versions.html.markdown rename to third_party/terraform/website/docs/d/tpu_tensorflow_versions.html.markdown index 3f24fc6e358e..8ec1d888f7f2 100644 --- 
a/third_party/terraform/website/docs/d/datasource_tpu_tensorflow_versions.html.markdown +++ b/third_party/terraform/website/docs/d/tpu_tensorflow_versions.html.markdown @@ -1,5 +1,5 @@ --- -subcategory: "Cloud Platform" +subcategory: "Cloud TPU" layout: "google" page_title: "Google: google_tpu_tensorflow_versions" sidebar_current: "docs-google-datasource-tpu-tensorflow-versions" diff --git a/third_party/terraform/website/docs/guides/getting_started.html.markdown b/third_party/terraform/website/docs/guides/getting_started.html.markdown index 6547896ff256..daf474dc21c8 100644 --- a/third_party/terraform/website/docs/guides/getting_started.html.markdown +++ b/third_party/terraform/website/docs/guides/getting_started.html.markdown @@ -1,7 +1,7 @@ --- layout: "google" page_title: "Getting Started with the Google provider" -sidebar_current: "docs-google-provider-getting-started" +sidebar_current: "docs-google-provider-guides-getting-started" description: |- Getting started with the Google Cloud Platform provider --- @@ -20,7 +20,15 @@ provider. ## Configuring the Provider -First create a Terraform config file named `"main.tf"`. Inside, you'll +First, authenticate with GCP. The easiest way to do this is to run +`gcloud auth application-default login`, if you already have gcloud +installed. If you don't already have it, gcloud can be installed with +`apt-get install google-cloud-sdk` on Debian-based machines. For a +production use-case, you will want to use service account authentication, +which you can learn about further down in this doc, but for experimenting, +gcloud authentication will work fine. + +Next, create a Terraform config file named `"main.tf"`. Inside, you'll want to include the following configuration: ```hcl @@ -45,7 +53,7 @@ Not all resources require a location. Some GCP resources are global and are automatically spread across all of GCP. -> Want to try out another location? Check out the [list of available regions and zones](https://cloud.google.com/compute/docs/regions-zones/#available). -Instances created in zones outside the US are not part of the always free tier +Instances created in zones outside the US are not necessarily part of the always free tier and could incur charges. ## Creating a VM instance @@ -80,7 +88,7 @@ resource "google_compute_instance" "vm_instance" { network_interface { # A default network is created for all GCP projects - network = "default" + network = "default" access_config { } } @@ -157,13 +165,22 @@ choose an existing account, or create a new one. Next, download the JSON key file. Name it something you can remember, and store it somewhere secure on your machine. +> *Note*: Currently the only supported service account credentials are credentials +downloaded from Cloud Console or generated by `gcloud`. + + You supply the key to Terraform using the environment variable -`GOOGLE_CLOUD_KEYFILE_JSON`, setting the value to the location of the file. +`GOOGLE_APPLICATION_CREDENTIALS`, setting the value to the location of the file. ```bash -export GOOGLE_CLOUD_KEYFILE_JSON={{path}} +export GOOGLE_APPLICATION_CREDENTIALS={{path}} ``` +If you choose to use `gcloud`-generated credentials, and you encounter +quota or billing issues which don't seem to apply to you, you may want to set +`user_project_override` to `true` in the provider block - see the +[provider reference](/docs/providers/google/guides/provider_reference.html) for more information. 
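+
+A minimal sketch of what that looks like (the project ID here is a placeholder; `user_project_override` is the only setting this tip actually requires):
+
+```hcl
+provider "google" {
+  project               = "my-project-id"
+  user_project_override = true
+}
+```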
+ -> Remember to add this line to a startup file such as `bash_profile` or `bashrc` to store your credentials across sessions! @@ -189,7 +206,7 @@ resource "google_compute_instance" "vm_instance" { network_interface { # A default network is created for all GCP projects - network = google_compute_network.vpc_network.self_link + network = google_compute_network.vpc_network.self_link access_config { } } diff --git a/third_party/terraform/website/docs/guides/provider_reference.html.markdown b/third_party/terraform/website/docs/guides/provider_reference.html.markdown index 1e8fd05ab400..5ae71b387b33 100644 --- a/third_party/terraform/website/docs/guides/provider_reference.html.markdown +++ b/third_party/terraform/website/docs/guides/provider_reference.html.markdown @@ -18,7 +18,7 @@ location (`zone` and/or `region`) for your resources. ```hcl provider "google" { - credentials = "${file("account.json")}" + credentials = file("account.json") project = "my-project-id" region = "us-central1" zone = "us-central1-c" } ``` @@ -27,7 +27,7 @@ provider "google" { ```hcl provider "google-beta" { - credentials = "${file("account.json")}" + credentials = file("account.json") project = "my-project-id" region = "us-central1" zone = "us-central1-c" } ``` @@ -74,7 +74,12 @@ same configuration. * `credentials` - (Optional) Either the path to or the contents of a [service account key file] in JSON format. You can -[manage key files using the Cloud Console]. +[manage key files using the Cloud Console]. If not provided, the +application default credentials will be used. You can configure +Application Default Credentials on your personal machine by +running `gcloud auth application-default login`. If +Terraform is running on a GCP machine, and this value is unset, +it will automatically use that machine's configured service account. * `project` - (Optional) The default project to manage resources in. If another project is specified on a resource, it will take precedence. @@ -110,7 +115,7 @@ Values are expected to include the version of the service, such as `https://www.googleapis.com/compute/v1/`. * `batching` - (Optional) This block controls batching GCP calls for groups of specific resource types. Structure is documented below. -~>**NOTE**: Batching is not implemented for the majority or resources/request types and is bounded by two values. If you are running into issues with slow batches +~>**NOTE:** Batching is not implemented for the majority of resources/request types and is bounded by two values. If you are running into issues with slow batching of resources, you may need to adjust one or both of 1) the core [`-parallelism`](https://www.terraform.io/docs/commands/apply.html#parallelism-n) flag, which controls how many concurrent resources are being operated on and 2) `send_after`, the time interval after which a batch is sent.
* `request_timeout` - (Optional) A duration string controlling the amount of time @@ -265,7 +270,7 @@ be used for configuration are below: * `iam_credentials_custom_endpoint` (`GOOGLE_IAM_CREDENTIALS_CUSTOM_ENDPOINT`) - `https://iamcredentials.googleapis.com/v1/` * `kms_custom_endpoint` (`GOOGLE_KMS_CUSTOM_ENDPOINT`) - `https://cloudkms.googleapis.com/v1/` * `logging_custom_endpoint` (`GOOGLE_LOGGING_CUSTOM_ENDPOINT`) - `https://logging.googleapis.com/v2/` -* `monitoring_custom_endpoint` (`GOOGLE_MONITORING_CUSTOM_ENDPOINT`) - `https://monitoring.googleapis.com/v3/` +* `monitoring_custom_endpoint` (`GOOGLE_MONITORING_CUSTOM_ENDPOINT`) - `https://monitoring.googleapis.com/` * `pubsub_custom_endpoint` (`GOOGLE_PUBSUB_CUSTOM_ENDPOINT`) - `https://pubsub.googleapis.com/v1/` * `redis_custom_endpoint` (`GOOGLE_REDIS_CUSTOM_ENDPOINT`) - `https://redis.googleapis.com/v1/` | `https://redis.googleapis.com/v1beta1/` * `resource_manager_custom_endpoint` (`GOOGLE_RESOURCE_MANAGER_CUSTOM_ENDPOINT`) - `https://cloudresourcemanager.googleapis.com/v1/` diff --git a/third_party/terraform/website/docs/guides/using_gke_with_terraform.html.markdown b/third_party/terraform/website/docs/guides/using_gke_with_terraform.html.markdown new file mode 100644 index 000000000000..71c0a1416c80 --- /dev/null +++ b/third_party/terraform/website/docs/guides/using_gke_with_terraform.html.markdown @@ -0,0 +1,200 @@ +--- +layout: "google" +page_title: "Using GKE with Terraform" +sidebar_current: "docs-google-provider-guides-using-gke" +description: |- + Recommendations and best practices for using GKE with Terraform. +--- + +# Using GKE with Terraform + +This page is a brief overview of GKE usage with Terraform, based on the content +available in the [How-to guides for GKE](https://cloud.google.com/kubernetes-engine/docs/how-to). +It's intended as a supplement for intermediate users, covering cases that are +unintuitive or confusing when using Terraform instead of `gcloud`/the Cloud +Console. + +Additionally, you may consider using Google's [`kubernetes-engine`](https://registry.terraform.io/modules/terraform-google-modules/kubernetes-engine/google) +module, which implements many of these practices for you. + +If the information on this page conflicts with recommendations available on +`cloud.google.com`, `cloud.google.com` should be considered the correct source. + +## Interacting with Kubernetes + +After creating a `google_container_cluster` with Terraform, authentication to +the cluster is often a challenge. In most cases, you can use `gcloud` to +configure cluster access, [generating a `kubeconfig` entry](https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-access-for-kubectl#generate_kubeconfig_entry): + +```bash +gcloud container clusters get-credentials cluster-name +``` + +Using this command, `gcloud` will generate a `kubeconfig` entry that uses +`gcloud` as an authentication mechanism. However, sometimes performing +authentication inline with Terraform or a static config without `gcloud` is more +desirable. + +### Using the Kubernetes and Helm Providers + +When using the `kubernetes` and `helm` providers, +[statically defined credentials](https://www.terraform.io/docs/providers/kubernetes/index.html#statically-defined-credentials) +can allow you to connect to clusters defined in the same config or in a remote +state.
You can configure either provider using configuration such as the following: + +```hcl +# Retrieve an access token as the Terraform runner +data "google_client_config" "provider" {} + +data "google_container_cluster" "my_cluster" { + name = "my-cluster" + location = "us-central1" +} + +provider "kubernetes" { + load_config_file = false + + host = "https://${data.google_container_cluster.my_cluster.endpoint}" + token = data.google_client_config.provider.access_token + cluster_ca_certificate = base64decode( + data.google_container_cluster.my_cluster.master_auth[0].cluster_ca_certificate, + ) +} +``` + +Alternatively, you can authenticate as another service account on which your +Terraform runner has been granted the `roles/iam.serviceAccountTokenCreator` +role: + +```hcl +data "google_service_account_access_token" "my_kubernetes_sa" { + target_service_account = "{{service_account}}" + scopes = ["userinfo-email", "cloud-platform"] + lifetime = "3600s" +} + +data "google_container_cluster" "my_cluster" { + name = "my-cluster" + location = "us-central1" +} + +provider "kubernetes" { + load_config_file = false + + host = "https://${data.google_container_cluster.my_cluster.endpoint}" + token = data.google_service_account_access_token.my_kubernetes_sa.access_token + cluster_ca_certificate = base64decode( + data.google_container_cluster.my_cluster.master_auth[0].cluster_ca_certificate, + ) +} +``` + +### Using kubectl / kubeconfig + +It's possible to interface with `kubectl` or other `.kubeconfig`-based tools by +providing them a `.kubeconfig` directly. For situations where `gcloud` can't be +used as an authentication mechanism, you can generate a static `.kubeconfig` +file instead. + +An authentication submodule, `auth`, is provided as part of Google's +[`kubernetes-engine`](https://registry.terraform.io/modules/terraform-google-modules/kubernetes-engine/google) +module. You can use it through the module registry, or [in the module source](https://github.com/terraform-google-modules/terraform-google-kubernetes-engine/tree/master/modules/auth). + +Authenticating using this method will use a Terraform-generated access token +which persists for 1 hour. For longer-lasting sessions, or cases where a single +persistent config is required, using `gcloud` is advised. + +## VPC-native Clusters + +[VPC-native clusters](https://cloud.google.com/kubernetes-engine/docs/how-to/alias-ips) +are GKE clusters that use [alias IP ranges](https://cloud.google.com/vpc/docs/alias-ip). +VPC-native clusters route traffic between pods using a VPC network, and are able +to route to other VPCs across network peerings along with [several other benefits](https://cloud.google.com/kubernetes-engine/docs/how-to/alias-ips). + +This is in contrast to [routes-based clusters](https://cloud.google.com/kubernetes-engine/docs/how-to/routes-based-cluster), +which route pod traffic using GCP routes. + +In both `gcloud` and the Cloud Console, VPC-native is the default for new +clusters, and increasingly GKE features such as [Standalone Network Endpoint Groups (NEGs)](https://cloud.google.com/kubernetes-engine/docs/how-to/standalone-neg#pod_readiness) +have relied on clusters being VPC-native. In Terraform, however, the default +behaviour is to create a routes-based cluster for backwards compatibility. + +It's recommended that you create a VPC-native cluster, which you can do by specifying the +`ip_allocation_policy` block.
Configuration will look like the following: + +```hcl +resource "google_container_cluster" "my_vpc_native_cluster" { + name = "my-vpc-native-cluster" + location = "us-central1" + initial_node_count = 1 + + network = "default" + subnetwork = "default" + + ip_allocation_policy { + cluster_ipv4_cidr_block = "/16" + services_ipv4_cidr_block = "/22" + } + + # other settings... +} +``` + +## Node Pool Management + +In Terraform, we recommend managing your node pools using the +`google_container_node_pool` resource, separate from the +`google_container_cluster` resource. This separates cluster-level configuration +like networking and Kubernetes features from the configuration of your nodes. +Additionally, it helps ensure your cluster isn't inadvertently deleted. +Terraform struggles to handle complex changes to subresources, and may attempt +to delete a cluster based on changes to inline node pools. + +However, the GKE API doesn't allow creating a cluster without nodes. It's common +for Terraform users to define a block such as the following: + +```hcl +resource "google_container_cluster" "my-gke-cluster" { + name = "my-gke-cluster" + location = "us-central1" + + # We can't create a cluster with no node pool defined, but we want to only use + # separately managed node pools. So we create the smallest possible default + # node pool and immediately delete it. + remove_default_node_pool = true + initial_node_count = 1 + + # other settings... +} +``` + +This creates `initial_node_count` nodes per zone the cluster has nodes in, +typically 1 zone if the cluster `location` is a zone, and 3 if it's a `region`. +Your cluster's initial GKE masters will be sized based on the +`initial_node_count` provided. If subsequent node pools add a large number of +nodes to your cluster, GKE may cause a resizing event immediately after adding a +node pool. + +The initial node pool will be created using the +[Compute Engine default service account](https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) +as the [`service_account`](https://cloud.google.com/compute/docs/access/service-accounts#default_service_account). +If you've disabled that service account, or want to use a +[least privilege Google service account](https://cloud.google.com/kubernetes-engine/docs/how-to/hardening-your-cluster#use_least_privilege_sa) +for the temporary node pool, you can add the following configuration to your +`google_container_cluster` block: + +```hcl +resource "google_container_cluster" "my-gke-cluster" { + # other settings... + + node_config { + service_account = "{{service_account}}" + } + + lifecycle { + ignore_changes = ["node_config"] + } + + # other settings... 
+} +``` diff --git a/third_party/terraform/website/docs/guides/version_2_upgrade.html.markdown b/third_party/terraform/website/docs/guides/version_2_upgrade.html.markdown index fea9c72539b2..a9dc3806614e 100644 --- a/third_party/terraform/website/docs/guides/version_2_upgrade.html.markdown +++ b/third_party/terraform/website/docs/guides/version_2_upgrade.html.markdown @@ -1,7 +1,7 @@ --- layout: "google" page_title: "Terraform Google Provider 2.0.0 Upgrade Guide" -sidebar_current: "docs-google-provider-version-2-upgrade" +sidebar_current: "docs-google-provider-guides-version-2-upgrade" description: |- Terraform Google Provider 2.0.0 Upgrade Guide --- @@ -298,21 +298,21 @@ resource "google_cloudbuild_trigger" "build_trigger" { branch_name = "master-updated" repo_name = "some-repo-updated" } - + build { images = ["gcr.io/$PROJECT_ID/$REPO_NAME:$SHORT_SHA"] - tags = ["team-a", "service-b", "updated"] - + tags = ["team-a", "service-b", "updated"] + step { name = "gcr.io/cloud-builders/gsutil" args = ["cp", "gs://mybucket/remotefile.zip", "localfile-updated.zip"] } - + step { name = "gcr.io/cloud-builders/go" args = ["build", "my_package_updated"] } - + step { name = "gcr.io/cloud-builders/docker" args = ["build", "-t", "gcr.io/$PROJECT_ID/$REPO_NAME:$SHORT_SHA", "-f", "Dockerfile", "."] @@ -393,11 +393,11 @@ data "google_compute_image" "my_image" { } resource "google_compute_disk" "foobar" { - name = "example-disk" + name = "example-disk" image = "${data.google_compute_image.my_image.self_link}" - size = 50 - type = "pd-ssd" - zone = "us-central1-a" + size = 50 + type = "pd-ssd" + zone = "us-central1-a" disk_encryption_key { raw_key = "SGVsbG8gZnJvbSBHb29nbGUgQ2xvdWQgUGxhdGZvcm0=" } @@ -488,25 +488,25 @@ Use the `snapshot_encryption_key` block instead: ```hcl data "google_compute_image" "my_image" { - family = "debian-9" - project = "debian-cloud" + family = "debian-9" + project = "debian-cloud" } resource "google_compute_disk" "my_disk" { - name = "my-disk" - image = "${data.google_compute_image.my_image.self_link}" - size = 10 - type = "pd-ssd" - zone = "us-central1-a" + name = "my-disk" + image = "${data.google_compute_image.my_image.self_link}" + size = 10 + type = "pd-ssd" + zone = "us-central1-a" } resource "google_compute_snapshot" "my_snapshot" { - name = "my-snapshot" - source_disk = "${google_compute_disk.my_disk.name}" - zone = "us-central1-a" - snapshot_encryption_key { - raw_key = "SGVsbG8gZnJvbSBHb29nbGUgQ2xvdWQgUGxhdGZvcm0=" - } + name = "my-snapshot" + source_disk = "${google_compute_disk.my_disk.name}" + zone = "us-central1-a" + snapshot_encryption_key { + raw_key = "SGVsbG8gZnJvbSBHb29nbGUgQ2xvdWQgUGxhdGZvcm0=" + } } ``` @@ -516,26 +516,26 @@ Use the `source_disk_encryption_key` block instead: ```hcl data "google_compute_image" "my_image" { - family = "debian-9" - project = "debian-cloud" + family = "debian-9" + project = "debian-cloud" } resource "google_compute_disk" "my_disk" { - name = "my-disk" - image = "${data.google_compute_image.my_image.self_link}" - size = 10 - type = "pd-ssd" - zone = "us-central1-a" - disk_encryption_key { - raw_key = "SGVsbG8gZnJvbSBHb29nbGUgQ2xvdWQgUGxhdGZvcm0=" - } + name = "my-disk" + image = "${data.google_compute_image.my_image.self_link}" + size = 10 + type = "pd-ssd" + zone = "us-central1-a" + disk_encryption_key { + raw_key = "SGVsbG8gZnJvbSBHb29nbGUgQ2xvdWQgUGxhdGZvcm0=" + } } resource "google_compute_snapshot" "my_snapshot" { - name = "my-snapshot" - source_disk = "${google_compute_disk.my_disk.name}" - zone = "us-central1-a" - 
source_disk_encryption_key { - raw_key = "SGVsbG8gZnJvbSBHb29nbGUgQ2xvdWQgUGxhdGZvcm0=" - } + name = "my-snapshot" + source_disk = "${google_compute_disk.my_disk.name}" + zone = "us-central1-a" + source_disk_encryption_key { + raw_key = "SGVsbG8gZnJvbSBHb29nbGUgQ2xvdWQgUGxhdGZvcm0=" + } } ``` @@ -609,10 +609,10 @@ resource "random_id" "np" { } resource "google_container_node_pool" "example" { - name = "${random_id.np.dec}" - zone = "us-central1-a" - cluster = "${google_container_cluster.example.name}" - node_count = 1 + name = "${random_id.np.dec}" + zone = "us-central1-a" + cluster = "${google_container_cluster.example.name}" + node_count = 1 node_config { machine_type = "${var.machine_type}" diff --git a/third_party/terraform/website/docs/guides/version_3_upgrade.html.markdown b/third_party/terraform/website/docs/guides/version_3_upgrade.html.markdown index a30748754a1a..7c77f211af76 100644 --- a/third_party/terraform/website/docs/guides/version_3_upgrade.html.markdown +++ b/third_party/terraform/website/docs/guides/version_3_upgrade.html.markdown @@ -1,7 +1,7 @@ --- layout: "google" page_title: "Terraform Google Provider 3.0.0 Upgrade Guide" -sidebar_current: "docs-google-provider-version-3-upgrade" +sidebar_current: "docs-google-provider-guides-version-3-upgrade" description: |- Terraform Google Provider 3.0.0 Upgrade Guide --- @@ -426,8 +426,8 @@ resource "google_cloud_run_service" "default" { metadata { annotations = { - "autoscaling.knative.dev/maxScale" = "1000" - "run.googleapis.com/client-name" = "cloud-console" + "autoscaling.knative.dev/maxScale" = "1000" + "run.googleapis.com/client-name" = "terraform" } name = "revision-name" } @@ -660,11 +660,11 @@ directed to that version. ```hcl resource "google_compute_instance_group_manager" "my_igm" { - name = "my-igm" - zone = "us-central1-c" - base_instance_name = "igm" + name = "my-igm" + zone = "us-central1-c" + base_instance_name = "igm" - instance_template = google_compute_instance_template.my_tmpl.self_link + instance_template = google_compute_instance_template.my_tmpl.self_link } ``` @@ -672,14 +672,14 @@ resource "google_compute_instance_group_manager" "my_igm" { ```hcl resource "google_compute_instance_group_manager" "my_igm" { - name = "my-igm" - zone = "us-central1-c" - base_instance_name = "igm" + name = "my-igm" + zone = "us-central1-c" + base_instance_name = "igm" - version { - name = "prod" - instance_template = google_compute_instance_template.my_tmpl.self_link - } + version { + name = "prod" + instance_template = google_compute_instance_template.my_tmpl.self_link + } } ``` @@ -695,13 +695,13 @@ For more details see the ```hcl resource "google_compute_instance_group_manager" "my_igm" { - name = "my-igm" - zone = "us-central1-c" - base_instance_name = "igm" + name = "my-igm" + zone = "us-central1-c" + base_instance_name = "igm" - instance_template = "${google_compute_instance_template.my_tmpl.self_link}" + instance_template = "${google_compute_instance_template.my_tmpl.self_link}" - update_strategy = "NONE" + update_strategy = "NONE" } ``` @@ -709,19 +709,19 @@ resource "google_compute_instance_group_manager" "my_igm" { ```hcl resource "google_compute_instance_group_manager" "my_igm" { - name = "my-igm" - zone = "us-central1-c" - base_instance_name = "igm" + name = "my-igm" + zone = "us-central1-c" + base_instance_name = "igm" - version { - name = "prod" - instance_template = "${google_compute_instance_template.my_tmpl.self_link}" - } + version { + name = "prod" + instance_template = 
"${google_compute_instance_template.my_tmpl.self_link}" + } - update_policy { - minimal_action = "RESTART" - type = "OPPORTUNISTIC" - } + update_policy { + minimal_action = "RESTART" + type = "OPPORTUNISTIC" + } } ``` @@ -749,10 +749,10 @@ the following is valid: ```hcl disk { - auto_delete = true - type = "SCRATCH" - disk_type = "local-ssd" - disk_size_gb = 375 + auto_delete = true + type = "SCRATCH" + disk_type = "local-ssd" + disk_size_gb = 375 } ``` @@ -761,26 +761,26 @@ fail: ```hcl disk { - source_image = "https://www.googleapis.com/compute/v1/projects/gce-uefi-images/global/images/centos-7-v20190729" - auto_delete = true - type = "SCRATCH" + source_image = "https://www.googleapis.com/compute/v1/projects/gce-uefi-images/global/images/centos-7-v20190729" + auto_delete = true + type = "SCRATCH" } ``` ```hcl disk { - source_image = "https://www.googleapis.com/compute/v1/projects/gce-uefi-images/global/images/centos-7-v20190729" - auto_delete = true - disk_type = "local-ssd" + source_image = "https://www.googleapis.com/compute/v1/projects/gce-uefi-images/global/images/centos-7-v20190729" + auto_delete = true + disk_type = "local-ssd" } ``` ```hcl disk { - auto_delete = true - type = "SCRATCH" - disk_type = "local-ssd" - disk_size_gb = 300 + auto_delete = true + type = "SCRATCH" + disk_type = "local-ssd" + disk_size_gb = 300 } ``` @@ -983,8 +983,8 @@ to `false`. ```hcl resource "google_container_cluster" "primary" { - name = "my-cluster" - location = "us-central1" + name = "my-cluster" + location = "us-central1" initial_node_count = 1 @@ -998,8 +998,8 @@ resource "google_container_cluster" "primary" { ```hcl resource "google_container_cluster" "primary" { - name = "my-cluster" - location = "us-central1" + name = "my-cluster" + location = "us-central1" initial_node_count = 1 } @@ -1057,9 +1057,9 @@ resource "google_compute_network" "container_network" { } resource "google_container_cluster" "primary" { - name = "my-cluster" - location = "us-central1" - network = google_compute_network.container_network.name + name = "my-cluster" + location = "us-central1" + network = google_compute_network.container_network.name initial_node_count = 1 @@ -1528,8 +1528,8 @@ module "project_services" { source = "terraform-google-modules/project-factory/google//modules/project_services" version = "3.3.0" - project_id = "your-project-id" - activate_apis = [ + project_id = "your-project-id" + activate_apis = [ "iam.googleapis.com", "cloudresourcemanager.googleapis.com", ] @@ -1550,7 +1550,7 @@ resource "google_project_service" "service" { service = each.key - project = "your-project-id" + project = "your-project-id" disable_on_destroy = false } ``` diff --git a/third_party/terraform/website/docs/r/app_engine_application.html.markdown b/third_party/terraform/website/docs/r/app_engine_application.html.markdown index 739516a8d028..7a0579d2ce8e 100644 --- a/third_party/terraform/website/docs/r/app_engine_application.html.markdown +++ b/third_party/terraform/website/docs/r/app_engine_application.html.markdown @@ -16,6 +16,9 @@ Allows creation and management of an App Engine application. successfully deleted; this is a limitation of Terraform, and will go away in the future. Terraform is not able to delete App Engine applications. +~> **Warning:** All arguments including `iap.oauth2_client_secret` will be stored in the raw +state as plain-text. [Read more about sensitive data in state](/docs/state/sensitive-data.html). 
+ ## Example Usage ```hcl @@ -36,7 +39,7 @@ resource "google_app_engine_application" "app" { The following arguments are supported: * `project` - (Required) The project ID to create the application under. - ~>**NOTE**: GCP only accepts project ID, not project number. If you are using number, + ~>**NOTE:** GCP only accepts project ID, not project number. If you are using number, you may get a "Permission denied" error. * `location_id` - (Required) The [location](https://cloud.google.com/appengine/docs/locations) @@ -44,6 +47,8 @@ The following arguments are supported: * `auth_domain` - (Optional) The domain to authenticate users with when using App Engine's User API. +* `database_type` - (Optional) The type of the Cloud Firestore or Cloud Datastore database associated with this application. + * `serving_status` - (Optional) The serving status of the app. * `feature_settings` - (Optional) A block of optional settings to configure specific App Engine features: @@ -51,6 +56,13 @@ The following arguments are supported: * `split_health_checks` - (Required) Set to false to use the legacy health check instead of the readiness and liveness checks. +* `iap` - (Optional) Settings for enabling Cloud Identity Aware Proxy + + * `oauth2_client_id` - (Required) OAuth2 client ID to use for the authentication flow. + + * `oauth2_client_secret` - (Required) OAuth2 client secret to use for the authentication flow. + The SHA-256 hash of the value is returned in the oauth2ClientSecretSha256 field. + ## Attributes Reference In addition to the arguments listed above, the following computed attributes are @@ -72,6 +84,18 @@ exported: * `gcr_domain` - The GCR domain used for storing managed Docker images for this app. +* `iap` - Settings for enabling Cloud Identity Aware Proxy + + * `oauth2_client_secret_sha256` - Hex-encoded SHA-256 hash of the client secret. + +## Timeouts + +This resource provides the following +[Timeouts](/docs/configuration/resources.html#timeouts) configuration options: + +- `create` - Default is 4 minutes. +- `update` - Default is 4 minutes. + ## Import Applications can be imported using the ID of the project the application belongs to, e.g. diff --git a/third_party/terraform/website/docs/r/bigquery_dataset_iam.html.markdown b/third_party/terraform/website/docs/r/bigquery_dataset_iam.html.markdown new file mode 100644 index 000000000000..74fc656a885c --- /dev/null +++ b/third_party/terraform/website/docs/r/bigquery_dataset_iam.html.markdown @@ -0,0 +1,122 @@ +--- +layout: "google" +subcategory: "BigQuery" +page_title: "Google: google_bigquery_dataset_iam" +sidebar_current: "docs-google-bigquery-dataset-iam" +description: |- + Collection of resources to manage IAM policy for a BigQuery dataset. +--- + +# IAM policy for BigQuery dataset + +Three different resources help you manage your IAM policy for BigQuery dataset. Each of these resources serves a different use case: + +* `google_bigquery_dataset_iam_policy`: Authoritative. Sets the IAM policy for the dataset and replaces any existing policy already attached. +* `google_bigquery_dataset_iam_binding`: Authoritative for a given role. Updates the IAM policy to grant a role to a list of members. Other roles within the IAM policy for the dataset are preserved. +* `google_bigquery_dataset_iam_member`: Non-authoritative. Updates the IAM policy to grant a role to a new member. Other members for the role for the dataset are preserved. 
+
+These resources are intended to convert the permissions system for BigQuery datasets to the standard IAM interface. For advanced usage, including [creating authorized views](https://cloud.google.com/bigquery/docs/share-access-views), please use either `google_bigquery_dataset_access` or the `access` field on `google_bigquery_dataset`.
+
+~> **Note:** These resources **cannot** be used with `google_bigquery_dataset_access` resources or the `access` field on `google_bigquery_dataset`, or they will fight over what the policy should be.
+
+~> **Note:** Using any of these resources will remove any authorized view permissions from the dataset. To assign and preserve authorized view permissions, use `google_bigquery_dataset_access` instead.
+
+~> **Note:** Legacy BigQuery roles `OWNER`, `WRITER`, and `READER` **cannot** be used with any of these IAM resources. Instead, use the full role form: `roles/bigquery.dataOwner`, `roles/bigquery.dataEditor`, and `roles/bigquery.dataViewer`.
+
+~> **Note:** `google_bigquery_dataset_iam_policy` **cannot** be used in conjunction with `google_bigquery_dataset_iam_binding` and `google_bigquery_dataset_iam_member`, or they will fight over what your policy should be.
+
+~> **Note:** `google_bigquery_dataset_iam_binding` resources **can be** used in conjunction with `google_bigquery_dataset_iam_member` resources **only if** they do not grant privilege to the same role.
+
+## google\_bigquery\_dataset\_iam\_policy
+
+```hcl
+data "google_iam_policy" "owner" {
+  binding {
+    role = "roles/bigquery.dataOwner"
+
+    members = [
+      "user:jane@example.com",
+    ]
+  }
+}
+
+resource "google_bigquery_dataset_iam_policy" "dataset" {
+  dataset_id  = "your-dataset-id"
+  policy_data = data.google_iam_policy.owner.policy_data
+}
+```
+
+## google\_bigquery\_dataset\_iam\_binding
+
+```hcl
+resource "google_bigquery_dataset_iam_binding" "reader" {
+  dataset_id = "your-dataset-id"
+  role       = "roles/bigquery.dataViewer"
+
+  members = [
+    "user:jane@example.com",
+  ]
+}
+```
+
+## google\_bigquery\_dataset\_iam\_member
+
+```hcl
+resource "google_bigquery_dataset_iam_member" "editor" {
+  dataset_id = "your-dataset-id"
+  role       = "roles/bigquery.dataEditor"
+  member     = "user:jane@example.com"
+}
+```
+
+## Argument Reference
+
+The following arguments are supported:
+
+* `dataset_id` - (Required) The dataset ID.
+
+* `member/members` - (Required) Identities that will be granted the privilege in `role`.
+  Each entry can have one of the following values:
+  * **allUsers**: A special identifier that represents anyone who is on the internet; with or without a Google account.
+  * **allAuthenticatedUsers**: A special identifier that represents anyone who is authenticated with a Google account or a service account.
+  * **user:{emailid}**: An email address that represents a specific Google account. For example, alice@gmail.com or joe@example.com.
+  * **serviceAccount:{emailid}**: An email address that represents a service account. For example, my-other-app@appspot.gserviceaccount.com.
+  * **group:{emailid}**: An email address that represents a Google group. For example, admins@example.com.
+  * **domain:{domain}**: A G Suite domain (primary, instead of alias) name that represents all the users of that domain. For example, google.com or example.com.
+
+* `role` - (Required) The role that should be applied. Only one
+  `google_bigquery_dataset_iam_binding` can be used per role. Note that custom roles must be of the format
+  `[projects|organizations]/{parent-name}/roles/{role-name}`.
+
+* `policy_data` - (Required only by `google_bigquery_dataset_iam_policy`) The policy data generated by
+  a `google_iam_policy` data source.
+
+## Attributes Reference
+
+In addition to the arguments listed above, the following computed attributes are
+exported:
+
+* `etag` - (Computed) The etag of the dataset's IAM policy.
+
+## Import
+
+IAM member imports use space-delimited identifiers: the resource in question, the role, and the account. This member resource can be imported using the `dataset_id`, role, and account, e.g.
+
+```
+$ terraform import google_bigquery_dataset_iam_member.dataset_iam "projects/your-project-id/datasets/dataset-id roles/viewer user:foo@example.com"
+```
+
+IAM binding imports use space-delimited identifiers: the resource in question and the role. This binding resource can be imported using the `dataset_id` and role, e.g.
+
+```
+$ terraform import google_bigquery_dataset_iam_binding.dataset_iam "projects/your-project-id/datasets/dataset-id roles/viewer"
+```
+
+IAM policy imports use the identifier of the resource in question. This policy resource can be imported using the `dataset_id`, e.g.
+
+```
+$ terraform import google_bigquery_dataset_iam_policy.dataset_iam projects/your-project-id/datasets/dataset-id
+```
+
+-> **Custom Roles**: If you're importing an IAM resource with a custom role, make sure to use the
+full name of the custom role, e.g. `[projects/my-project|organizations/my-org]/roles/my-custom-role`.
diff --git a/third_party/terraform/website/docs/r/bigquery_table.html.markdown b/third_party/terraform/website/docs/r/bigquery_table.html.markdown
index ec2c9289e4d5..0f91b6ef8a9b 100644
--- a/third_party/terraform/website/docs/r/bigquery_table.html.markdown
+++ b/third_party/terraform/website/docs/r/bigquery_table.html.markdown
@@ -112,12 +112,8 @@ The following arguments are supported:
* `labels` - (Optional) A mapping of labels to assign to the resource.
-* `schema` - (Optional) A JSON schema for the table. Schema is required
-  for CSV and JSON formats and is disallowed for Google Cloud
-  Bigtable, Cloud Datastore backups, and Avro formats when using
-  external tables. For more information see the
-  [BigQuery API documentation](https://cloud.google.com/bigquery/docs/reference/rest/v2/tables#resource).
-  ~>**NOTE**: Because this field expects a JSON string, any changes to the
+* `schema` - (Optional) A JSON schema for the table.
+  ~>**NOTE:** Because this field expects a JSON string, any changes to the
  string will create a diff, even if the JSON itself hasn't changed.
  If the API returns a different value for the same schema, e.g. it
  switched the order of values or replaced `STRUCT` field type with `RECORD`
@@ -127,7 +123,7 @@ The following arguments are supported:
* `time_partitioning` - (Optional) If specified, configures time-based
  partitioning for this table. Structure is documented below.
-* `range_partitioning` - (Optional, Beta) If specified, configures range-based
+* `range_partitioning` - (Optional) If specified, configures range-based
  partitioning for this table. Structure is documented below.
* `clustering` - (Optional) Specifies column names to use for data clustering.
@@ -152,6 +148,11 @@ The `external_data_configuration` block supports:
  `source_format` is set to "GOOGLE_SHEETS". Structure is
  documented below.
+* `hive_partitioning_options` (Optional) - When set, configures hive partitioning
+  support. Not all storage formats support hive partitioning -- requesting hive
+  partitioning on an unsupported format will lead to an error, as will providing
+  an invalid specification.
+
* `ignore_unknown_values` (Optional) - Indicates if BigQuery should
  allow extra values that are not represented in the table schema.
  If true, the extra values are ignored. If false, records with
@@ -162,6 +163,18 @@ The `external_data_configuration` block supports:
* `max_bad_records` (Optional) - The maximum number of bad records that
  BigQuery can ignore when reading data.
+* `schema` - (Optional) A JSON schema for the external table. Schema is required
+  for CSV and JSON formats if autodetect is not on. Schema is disallowed
+  for Google Cloud Bigtable, Cloud Datastore backups, Avro, ORC and Parquet formats.
+  ~>**NOTE:** Because this field expects a JSON string, any changes to the
+  string will create a diff, even if the JSON itself hasn't changed.
+  Furthermore, drift for this field cannot be detected because BigQuery
+  only uses this schema to compute the effective schema for the table; therefore
+  any changes on the configured value will force the table to be recreated.
+  This schema is effectively only applied when creating a table from an external
+  datasource; after creation, the computed schema will be stored in
+  `google_bigquery_table.schema`.
+
* `source_format` (Required) - The data format. Supported values are:
  "CSV", "GOOGLE_SHEETS", "NEWLINE_DELIMITED_JSON", "AVRO", "PARQUET",
  and "DATSTORE_BACKUP". To use "GOOGLE_SHEETS"
@@ -207,6 +220,26 @@ The `google_sheets_options` block supports:
  that BigQuery will skip when reading the data. At least one of `range` or
  `skip_leading_rows` must be set.
+The `hive_partitioning_options` block supports:
+
+* `mode` (Optional) - When set, what mode of hive partitioning to use when
+  reading data. The following modes are supported.
+  * AUTO: automatically infer partition key name(s) and type(s).
+  * STRINGS: automatically infer partition key name(s). All types are strings.
+  * CUSTOM: when set to `CUSTOM`, you must encode the partition key schema within the `source_uri_prefix` by setting `source_uri_prefix` to `gs://bucket/path_to_table/{key1:TYPE1}/{key2:TYPE2}/{key3:TYPE3}`.
+
+  Not all storage formats support hive partitioning. Requesting hive
+  partitioning on an unsupported format will lead to an error.
+  Currently supported formats are: JSON, CSV, ORC, Avro and Parquet.
+
+* `source_uri_prefix` (Optional) - When hive partition detection is requested,
+  a common prefix for all source URIs is required. The prefix must end immediately
+  before the partition key encoding begins. For example, consider files following
+  this data layout: `gs://bucket/path_to_table/dt=2019-06-01/country=USA/id=7/file.avro`
+  `gs://bucket/path_to_table/dt=2019-05-31/country=CA/id=3/file.avro`. When hive
+  partitioning is requested with either AUTO or STRINGS detection, the common prefix
+  can be either of `gs://bucket/path_to_table` or `gs://bucket/path_to_table/`.
+  Note that when `mode` is set to `CUSTOM`, you must encode the partition key schema within the `source_uri_prefix` by setting `source_uri_prefix` to `gs://bucket/path_to_table/{key1:TYPE1}/{key2:TYPE2}/{key3:TYPE3}`.
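+
+For example (this is only an illustrative sketch -- the dataset, table, and
+`gs://my-bucket/...` names below are placeholders, not values from this
+repository), a hive-partitioned external table might be configured like so:
+
+```hcl
+resource "google_bigquery_table" "hive_partitioned" {
+  # Assumes a dataset declared elsewhere in the config.
+  dataset_id = google_bigquery_dataset.default.dataset_id
+  table_id   = "hive-partitioned-table"
+
+  external_data_configuration {
+    source_format = "PARQUET"
+    autodetect    = true
+
+    # Files live under gs://my-bucket/path_to_table/dt=.../country=.../...
+    source_uris = ["gs://my-bucket/path_to_table/*"]
+
+    hive_partitioning_options {
+      mode              = "AUTO"
+      source_uri_prefix = "gs://my-bucket/path_to_table/"
+    }
+  }
+}
+```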
+ The `time_partitioning` block supports: * `expiration_ms` - (Optional) Number of milliseconds for which to keep the @@ -259,6 +292,8 @@ The `encryption_configuration` block supports the following arguments: In addition to the arguments listed above, the following computed attributes are exported: +* `id` - an identifier for the resource with format `projects/{{project}}/datasets/{{dataset}}/tables/{{name}}` + * `creation_time` - The time when this table was created, in milliseconds since the epoch. * `etag` - A hash of the resource. diff --git a/third_party/terraform/website/docs/r/bigtable_gc_policy.html.markdown b/third_party/terraform/website/docs/r/bigtable_gc_policy.html.markdown index e47aa7cfc3f8..988830a15407 100644 --- a/third_party/terraform/website/docs/r/bigtable_gc_policy.html.markdown +++ b/third_party/terraform/website/docs/r/bigtable_gc_policy.html.markdown @@ -1,5 +1,5 @@ --- -subcategory: "Bigtable" +subcategory: "Cloud Bigtable" layout: "google" page_title: "Google: google_bigtable_gc_policy" sidebar_current: "docs-google-bigtable-gc-policy" diff --git a/third_party/terraform/website/docs/r/bigtable_instance.html.markdown b/third_party/terraform/website/docs/r/bigtable_instance.html.markdown index 47205456ba7f..640c5b6b3727 100644 --- a/third_party/terraform/website/docs/r/bigtable_instance.html.markdown +++ b/third_party/terraform/website/docs/r/bigtable_instance.html.markdown @@ -1,5 +1,5 @@ --- -subcategory: "Bigtable" +subcategory: "Cloud Bigtable" layout: "google" page_title: "Google: google_bigtable_instance" sidebar_current: "docs-google-bigtable-instance" @@ -13,6 +13,15 @@ Creates a Google Bigtable instance. For more information see [the official documentation](https://cloud.google.com/bigtable/) and [API](https://cloud.google.com/bigtable/docs/go/reference). +-> **Note**: It is strongly recommended to set `lifecycle { prevent_destroy = true }` +on instances in order to prevent accidental data loss. See +[Terraform docs](https://www.terraform.io/docs/configuration/resources.html#prevent_destroy) +for more information on lifecycle parameters. + +-> **Note**: On newer versions of the provider, you must explicitly set `deletion_protection=false` +(and run `terraform apply` to write the field to state) in order to destroy an instance. +It is recommended to not set this field (or set it to true) until you're ready to destroy. + ## Example Usage - Production Instance @@ -23,9 +32,13 @@ resource "google_bigtable_instance" "production-instance" { cluster { cluster_id = "tf-instance-cluster" zone = "us-central1-b" - num_nodes = 3 + num_nodes = 1 storage_type = "HDD" } + + lifecycle { + prevent_destroy = true + } } ``` @@ -50,7 +63,8 @@ The following arguments are supported: * `name` - (Required) The name (also called Instance Id in the Cloud Console) of the Cloud Bigtable instance. -* `cluster` - (Required) A block of cluster configuration options. This can be specified 1 or 2 times. See structure below. +* `cluster` - (Required) A block of cluster configuration options. This can be specified at least once, and up to 4 times. +See structure below. ----- @@ -61,6 +75,9 @@ The following arguments are supported: * `display_name` - (Optional) The human-readable display name of the Bigtable instance. Defaults to the instance `name`. +* `deletion_protection` - (Optional) Whether or not to allow Terraform to destroy the instance. Unless this field is set to false +in Terraform state, a `terraform destroy` or `terraform apply` that would delete the instance will fail. 
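+
+For example, before destroying a protected instance, you might first apply a
+change like the following sketch (names here are placeholders) and then run
+`terraform destroy`:
+
+```hcl
+resource "google_bigtable_instance" "production-instance" {
+  name = "tf-instance"
+
+  cluster {
+    cluster_id = "tf-instance-cluster"
+    zone       = "us-central1-b"
+    num_nodes  = 1
+  }
+
+  # Written to state by `terraform apply`, allowing a later destroy to proceed.
+  deletion_protection = false
+}
+```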
+ ----- @@ -73,7 +90,7 @@ cluster must have a different zone in the same region. Zones that support Bigtable instances are noted on the [Cloud Bigtable locations page](https://cloud.google.com/bigtable/docs/locations). * `num_nodes` - (Optional) The number of nodes in your Cloud Bigtable cluster. -Required, with a minimum of `3` for a `PRODUCTION` instance. Must be left unset +Required, with a minimum of `1` for a `PRODUCTION` instance. Must be left unset for a `DEVELOPMENT` instance. * `storage_type` - (Optional) The storage type to use. One of `"SSD"` or @@ -86,7 +103,9 @@ for a `DEVELOPMENT` instance. ## Attributes Reference -Only the arguments listed above are exposed as attributes. +In addition to the arguments listed above, the following computed attributes are exported: + +* `id` - an identifier for the resource with format `projects/{{project}}/instances/{{name}}` ## Import diff --git a/third_party/terraform/website/docs/r/bigtable_instance_iam.html.markdown b/third_party/terraform/website/docs/r/bigtable_instance_iam.html.markdown index 73e89a320eb3..691076f9a51e 100644 --- a/third_party/terraform/website/docs/r/bigtable_instance_iam.html.markdown +++ b/third_party/terraform/website/docs/r/bigtable_instance_iam.html.markdown @@ -1,5 +1,5 @@ --- -subcategory: "Bigtable" +subcategory: "Cloud Bigtable" layout: "google" page_title: "Google: google_bigtable_instance_iam" sidebar_current: "docs-google-bigtable-instance-iam" diff --git a/third_party/terraform/website/docs/r/bigtable_table.html.markdown b/third_party/terraform/website/docs/r/bigtable_table.html.markdown index 8c035c6d35f4..37d27bdf3b42 100644 --- a/third_party/terraform/website/docs/r/bigtable_table.html.markdown +++ b/third_party/terraform/website/docs/r/bigtable_table.html.markdown @@ -1,5 +1,5 @@ --- -subcategory: "Bigtable" +subcategory: "Cloud Bigtable" layout: "google" page_title: "Google: google_bigtable_table" sidebar_current: "docs-google-bigtable-table" @@ -13,6 +13,11 @@ Creates a Google Cloud Bigtable table inside an instance. For more information s [the official documentation](https://cloud.google.com/bigtable/) and [API](https://cloud.google.com/bigtable/docs/go/reference). +-> **Note:** It is strongly recommended to set `lifecycle { prevent_destroy = true }` +on tables in order to prevent accidental data loss. See +[Terraform docs](https://www.terraform.io/docs/configuration/resources.html#prevent_destroy) +for more information on lifecycle parameters. + ## Example Usage @@ -26,12 +31,20 @@ resource "google_bigtable_instance" "instance" { num_nodes = 3 storage_type = "HDD" } + + lifecycle { + prevent_destroy = true + } } resource "google_bigtable_table" "table" { name = "tf-table" instance_name = google_bigtable_instance.instance.name split_keys = ["a", "b", "c"] + + lifecycle { + prevent_destroy = true + } } ``` @@ -44,6 +57,8 @@ The following arguments are supported: * `instance_name` - (Required) The name of the Bigtable instance. * `split_keys` - (Optional) A list of predefined keys to split the table on. +!> **Warning:** Modifying the `split_keys` of an existing table will cause Terraform +to delete/recreate the entire `google_bigtable_table` resource. * `column_family` - (Optional) A group of columns within a table which share a common configuration. This can be specified multiple times. Structure is documented below. @@ -58,7 +73,11 @@ The following arguments are supported: ## Attributes Reference -Only the arguments listed above are exposed as attributes. 
+In addition to the arguments listed above, the following computed attributes are +exported: + +* `id` - an identifier for the resource with format `projects/{{project}}/instances/{{instance_name}}/tables/{{name}}` + ## Import diff --git a/third_party/terraform/website/docs/r/cloudfunctions_function.html.markdown b/third_party/terraform/website/docs/r/cloudfunctions_function.html.markdown index 13aa62af6a50..b164535a7169 100644 --- a/third_party/terraform/website/docs/r/cloudfunctions_function.html.markdown +++ b/third_party/terraform/website/docs/r/cloudfunctions_function.html.markdown @@ -107,7 +107,7 @@ The following arguments are supported: * `name` - (Required) A user-defined name of the function. Function names must be unique globally. * `runtime` - (Required) The runtime in which the function is going to run. -Eg. `"nodejs8"`, `"nodejs10"`, `"python37"`, `"go111"`. +Eg. `"nodejs8"`, `"nodejs10"`, `"python37"`, `"go111"`, `"go113"`. - - - @@ -125,7 +125,7 @@ Eg. `"nodejs8"`, `"nodejs10"`, `"python37"`, `"go111"`. * `ingress_settings` - (Optional) String value that controls what traffic can reach the function. Allowed values are ALLOW_ALL and ALLOW_INTERNAL_ONLY. Changes to this field will recreate the cloud function. -* `labels` - (Optional) A set of key/value label pairs to assign to the function. +* `labels` - (Optional) A set of key/value label pairs to assign to the function. Label keys must follow the requirements at https://cloud.google.com/resource-manager/docs/creating-managing-labels#requirements. * `service_account_email` - (Optional) If provided, the self-provided service account to run the function with. @@ -147,7 +147,7 @@ Eg. `"nodejs8"`, `"nodejs10"`, `"python37"`, `"go111"`. The `event_trigger` block supports: * `event_type` - (Required) The type of event to observe. For example: `"google.storage.object.finalize"`. -See the documentation on [calling Cloud Functions](https://cloud.google.com/functions/docs/calling/) for a +See the documentation on [calling Cloud Functions](https://cloud.google.com/functions/docs/calling/) for a full reference of accepted triggers. * `resource` - (Required) Required. The name or partial URI of the resource from @@ -193,8 +193,9 @@ This resource provides the following ## Import -Functions can be imported using the `name`, e.g. +Functions can be imported using the `name` or `{{project}}/{{region}}/name`, e.g. ``` $ terraform import google_cloudfunctions_function.default function-test +$ terraform import google_cloudfunctions_function.default {{project}}/{{region}}/function-test ``` diff --git a/third_party/terraform/website/docs/r/cloudiot_registry.html.markdown b/third_party/terraform/website/docs/r/cloudiot_registry.html.markdown deleted file mode 100644 index fc23cb15bee5..000000000000 --- a/third_party/terraform/website/docs/r/cloudiot_registry.html.markdown +++ /dev/null @@ -1,123 +0,0 @@ ---- -subcategory: "Cloud IoT Core" -layout: "google" -page_title: "Google: google_cloudiot_registry" -sidebar_current: "docs-google-cloudiot-registry-x" -description: |- - Creates a device registry in Google's Cloud IoT Core platform ---- - -# google\_cloudiot\_registry - - Creates a device registry in Google's Cloud IoT Core platform. For more information see -[the official documentation](https://cloud.google.com/iot/docs/) and -[API](https://cloud.google.com/iot/docs/reference/cloudiot/rest/v1/projects.locations.registries). 
- - -## Example Usage - -```hcl -resource "google_pubsub_topic" "default-devicestatus" { - name = "default-devicestatus" -} - -resource "google_pubsub_topic" "default-telemetry" { - name = "default-telemetry" -} - -resource "google_cloudiot_registry" "default-registry" { - name = "default-registry" - - event_notification_configs { - pubsub_topic_name = google_pubsub_topic.default-telemetry.id - } - - state_notification_config = { - pubsub_topic_name = google_pubsub_topic.default-devicestatus.id - } - - http_config = { - http_enabled_state = "HTTP_ENABLED" - } - - mqtt_config = { - mqtt_enabled_state = "MQTT_ENABLED" - } - - credentials { - public_key_certificate = { - format = "X509_CERTIFICATE_PEM" - certificate = file("rsa_cert.pem") - } - } -} -``` - -## Argument Reference - -The following arguments are supported: - -* `name` - (Required) A unique name for the resource, required by device registry. - Changing this forces a new resource to be created. - -- - - - -* `project` - (Optional) The project in which the resource belongs. If it is not provided, the provider project is used. - -* `region` - (Optional) The Region in which the created address should reside. If it is not provided, the provider region is used. - -* `event_notification_configs` - (Optional) List of configurations for event notification, such as -PubSub topics to publish device events to. Structure is documented below. - -* `state_notification_config` - (Optional) A PubSub topic to publish device state updates. Structure is documented below. - -* `mqtt_config` - (Optional) Activate or deactivate MQTT. Structure is documented below. -* `http_config` - (Optional) Activate or deactivate HTTP. Structure is documented below. - -* `credentials` - (Optional) List of public key certificates to authenticate devices. Structure is documented below. - - -The `event_notification_configs` block supports: - -* `pubsub_topic_name` - (Required) PubSub topic name to publish device events. - -* `subfolder_matches` - (Optional) If the subfolder name matches this string - exactly, this configuration will be used. The string must not include the - leading '/' character. If empty, all strings are matched. Empty value can - only be used for the last `event_notification_configs` item. - -The `state_notification_config` block supports: - -* `pubsub_topic_name` - (Required) PubSub topic name to publish device state updates. - -The `mqtt_config` block supports: - -* `mqtt_enabled_state` - (Required) The field allows `MQTT_ENABLED` or `MQTT_DISABLED`. - -The `http_config` block supports: - -* `http_enabled_state` - (Required) The field allows `HTTP_ENABLED` or `HTTP_DISABLED`. - -The `credentials` block supports: - -* `public_key_certificate` - (Required) The certificate format and data. - -The `public_key_certificate` block supports: - -* `format` - (Required) The field allows only `X509_CERTIFICATE_PEM`. -* `certificate` - (Required) The certificate data. - - -## Attributes Reference - -In addition to the arguments listed above, the following computed attributes are exported: - -* `id` - an identifier for the resource with format `projects/{{project}}/locations/{{region}}/registries/{{name}}` - -## Import - -A device registry can be imported using the `name`, e.g. 
- -``` -$ terraform import google_cloudiot_registry.default-registry projects/{project}/locations/{region}/registries/{name} -``` diff --git a/third_party/terraform/website/docs/r/composer_environment.html.markdown b/third_party/terraform/website/docs/r/composer_environment.html.markdown index 5f5534e37be3..87f11f12b86e 100644 --- a/third_party/terraform/website/docs/r/composer_environment.html.markdown +++ b/third_party/terraform/website/docs/r/composer_environment.html.markdown @@ -169,6 +169,10 @@ The `config` block supports: (Optional) The configuration used for the Private IP Cloud Composer environment. Structure is documented below. +* `web_server_network_access_control` - + (Optional, [Beta](https://terraform.io/docs/providers/google/guides/provider_versions.html)) + The network-level access control policy for the Airflow web server. If unspecified, no network-level access restrictions will be applied. + The `node_config` block supports: @@ -289,7 +293,7 @@ The `software_config` block supports: The major version of Python used to run the Apache Airflow scheduler, worker, and webserver processes. Can be set to '2' or '3'. If not specified, the default is '2'. Cannot be updated. -The `private_environment_config` block supports: +See [documentation](https://cloud.google.com/composer/docs/how-to/managing/configuring-private-ip) for setting up private environments. The `private_environment_config` block supports: * `enable_private_endpoint` - If true, access to the public endpoint of the GKE cluster is denied. @@ -302,6 +306,32 @@ The `private_environment_config` block supports: in use within the cluster's network. If left blank, the default value of '172.16.0.0/28' is used. +* `cloud_sql_ipv4_cidr_block` - + (Optional) + The CIDR block from which IP range in tenant project will be reserved for Cloud SQL. Needs to be disjoint from `web_server_ipv4_cidr_block` + +* `web_server_ipv4_cidr_block` - + (Optional) + The CIDR block from which IP range for web server will be reserved. Needs to be disjoint from `master_ipv4_cidr_block` and `cloud_sql_ipv4_cidr_block`. + +The `web_server_network_access_control` supports: + +* `allowed_ip_range` - + A collection of allowed IP ranges with descriptions. Structure is documented below. + +The `allowed_ip_range` supports: + +* `value` - + (Required) + IP address or range, defined using CIDR notation, of requests that this rule applies to. + Examples: `192.168.1.1` or `192.168.0.0/16` or `2001:db8::/32` or `2001:0db8:0000:0042:0000:8a2e:0370:7334`. + IP range prefixes should be properly truncated. For example, + `1.2.3.4/24` should be truncated to `1.2.3.0/24`. Similarly, for IPv6, `2001:db8::1/32` should be truncated to `2001:db8::/32`. + +* `description` - + (Optional) + A description of this ip range. + The `ip_allocation_policy` block supports: * `use_ip_aliases` - diff --git a/third_party/terraform/website/docs/r/compute_instance.html.markdown b/third_party/terraform/website/docs/r/compute_instance.html.markdown index 66a16b725c83..198627708b3e 100644 --- a/third_party/terraform/website/docs/r/compute_instance.html.markdown +++ b/third_party/terraform/website/docs/r/compute_instance.html.markdown @@ -156,7 +156,7 @@ The following arguments are supported: Structure is documented below. **Note**: [`allow_stopping_for_update`](#allow_stopping_for_update) must be set to true or your instance must have a `desired_status` of `TERMINATED` in order to update this field. -* `tags` - (Optional) A list of tags to attach to the instance. 
+* `tags` - (Optional) A list of network tags to attach to the instance. * `shielded_instance_config` - (Optional) Enable [Shielded VM](https://cloud.google.com/security/shielded-cloud/shielded-vm) on this instance. Shielded VM provides verifiable integrity to prevent against malware and rootkits. Defaults to disabled. Structure is documented below. **Note**: [`shielded_instance_config`](#shielded_instance_config) can only be used with boot images with shielded vm support. See the complete list [here](https://cloud.google.com/compute/docs/images#shielded-images). @@ -164,6 +164,8 @@ The following arguments are supported: * `enable_display` - (Optional) Enable [Virtual Displays](https://cloud.google.com/compute/docs/instances/enable-instance-virtual-display#verify_display_driver) on this instance. **Note**: [`allow_stopping_for_update`](#allow_stopping_for_update) must be set to true or your instance must have a `desired_status` of `TERMINATED` in order to update this field. +* `resource_policies` (Optional) -- A list of short names or self_links of resource policies to attach to the instance. Modifying this list will cause the instance to recreate. Currently a max of 1 resource policy is supported. + --- @@ -209,7 +211,7 @@ The `initialize_params` block supports: `global/images/family/{family}`, `family/{family}`, `{project}/{family}`, `{project}/{image}`, `{family}`, or `{image}`. If referred by family, the images names must include the family name. If they don't, use the - [google_compute_image data source](/docs/providers/google/d/datasource_compute_image.html). + [google_compute_image data source](/docs/providers/google/d/compute_image.html). For instance, the image `centos-6-v20180104` includes its family name `centos-6`. These images can be referred by family name here. @@ -350,6 +352,8 @@ The `shielded_instance_config` block supports: In addition to the arguments listed above, the following computed attributes are exported: +* `id` - an identifier for the resource with format `projects/{{project}}/zones/{{zone}}/instances/{{name}}` + * `instance_id` - The server-assigned unique identifier of this instance. * `metadata_fingerprint` - The unique fingerprint of the metadata. 
diff --git a/third_party/terraform/website/docs/r/compute_instance_from_template.html.markdown b/third_party/terraform/website/docs/r/compute_instance_from_template.html.markdown
index 459e2e18e507..8e487842fab7 100644
--- a/third_party/terraform/website/docs/r/compute_instance_from_template.html.markdown
+++ b/third_party/terraform/website/docs/r/compute_instance_from_template.html.markdown
@@ -48,7 +48,7 @@ resource "google_compute_instance_from_template" "tpl" {
  name = "instance-from-template"
  zone = "us-central1-a"
-  source_instance_template = google_compute_instance_template.tpl.self_link
+  source_instance_template = google_compute_instance_template.tpl.id
  // Override fields from instance template
  can_ip_forward = false
diff --git a/third_party/terraform/website/docs/r/compute_instance_group.html.markdown b/third_party/terraform/website/docs/r/compute_instance_group.html.markdown
index 4407f62caeae..9b29a452dab1 100644
--- a/third_party/terraform/website/docs/r/compute_instance_group.html.markdown
+++ b/third_party/terraform/website/docs/r/compute_instance_group.html.markdown
@@ -24,7 +24,7 @@ resource "google_compute_instance_group" "test" {
  name        = "terraform-test"
  description = "Terraform test instance group"
  zone        = "us-central1-a"
-  network     = google_compute_network.default.self_link
+  network     = google_compute_network.default.id
}
```
@@ -36,8 +36,8 @@ resource "google_compute_instance_group" "webservers" {
  description = "Terraform test instance group"
  instances = [
-    google_compute_instance.test.self_link,
-    google_compute_instance.test2.self_link,
+    google_compute_instance.test.id,
+    google_compute_instance.test2.id,
  ]
  named_port {
@@ -63,7 +63,7 @@ as shown in this example to avoid this type of error.
resource "google_compute_instance_group" "staging_group" {
  name      = "staging-instance-group"
  zone      = "us-central1-c"
-  instances = [google_compute_instance.staging_vm.self_link]
+  instances = [google_compute_instance.staging_vm.id]
  named_port {
    name = "http"
    port = "8080"
@@ -105,11 +105,11 @@ resource "google_compute_backend_service" "staging_service" {
  protocol  = "HTTPS"
  backend {
-    group = google_compute_instance_group.staging_group.self_link
+    group = google_compute_instance_group.staging_group.id
  }
  health_checks = [
-    google_compute_https_health_check.staging_health.self_link,
+    google_compute_https_health_check.staging_health.id,
  ]
}
@@ -136,7 +136,7 @@ The following arguments are supported:
group.
* `instances` - (Optional) List of instances in the group. They should be given
-  as self_link URLs. When adding instances they must all be in the same
+  as either self_link or id. When adding instances they must all be in the same
  network and zone as the instance group.
* `named_port` - (Optional) The named port configuration. See the section below
@@ -161,6 +161,8 @@ The `named_port` block supports:
In addition to the arguments listed above, the following computed attributes are
exported:
+* `id` - an identifier for the resource with format `projects/{{project}}/zones/{{zone}}/instanceGroups/{{name}}`
+
* `self_link` - The URI of the created resource.
* `size` - The number of instances in the group.
diff --git a/third_party/terraform/website/docs/r/compute_instance_group_manager.html.markdown b/third_party/terraform/website/docs/r/compute_instance_group_manager.html.markdown index a3c6b3b51226..8a24bb7a6a86 100644 --- a/third_party/terraform/website/docs/r/compute_instance_group_manager.html.markdown +++ b/third_party/terraform/website/docs/r/compute_instance_group_manager.html.markdown @@ -39,10 +39,10 @@ resource "google_compute_instance_group_manager" "appserver" { zone = "us-central1-a" version { - instance_template = google_compute_instance_template.appserver.self_link + instance_template = google_compute_instance_template.appserver.id } - target_pools = [google_compute_target_pool.appserver.self_link] + target_pools = [google_compute_target_pool.appserver.id] target_size = 2 named_port { @@ -51,7 +51,7 @@ resource "google_compute_instance_group_manager" "appserver" { } auto_healing_policies { - health_check = google_compute_health_check.autohealing.self_link + health_check = google_compute_health_check.autohealing.id initial_delay_sec = 300 } } @@ -70,12 +70,12 @@ resource "google_compute_instance_group_manager" "appserver" { version { name = "appserver" - instance_template = google_compute_instance_template.appserver.self_link + instance_template = google_compute_instance_template.appserver.id } version { name = "appserver-canary" - instance_template = google_compute_instance_template.appserver-canary.self_link + instance_template = google_compute_instance_template.appserver-canary.id target_size { fixed = 1 } @@ -134,7 +134,10 @@ The following arguments are supported: * `auto_healing_policies` - (Optional) The autohealing policies for this managed instance group. You can specify only one value. Structure is documented below. For more information, see the [official documentation](https://cloud.google.com/compute/docs/instance-groups/creating-groups-of-managed-instances#monitoring_groups). +* `stateful_disk` - (Optional, [Beta](https://terraform.io/docs/providers/google/guides/provider_versions.html)) Disks created on the instances that will be preserved on instance delete, update, etc. Structure is documented below. For more information see the [official documentation](https://cloud.google.com/compute/docs/instance-groups/configuring-stateful-disks-in-migs). + * `update_policy` - (Optional) The update policy for this managed instance group. Structure is documented below. For more information, see the [official documentation](https://cloud.google.com/compute/docs/instance-groups/updating-managed-instance-groups) and [API](https://cloud.google.com/compute/docs/reference/rest/beta/instanceGroupManagers/patch) + - - - The `update_policy` block supports: @@ -183,7 +186,7 @@ The `version` block supports: ```hcl version { name = "appserver-canary" - instance_template = google_compute_instance_template.appserver-canary.self_link + instance_template = google_compute_instance_template.appserver-canary.id target_size { fixed = 1 @@ -194,7 +197,7 @@ version { ```hcl version { name = "appserver-canary" - instance_template = google_compute_instance_template.appserver-canary.self_link + instance_template = google_compute_instance_template.appserver-canary.id target_size { percent = 20 @@ -220,11 +223,20 @@ The `target_size` block supports: Note that when using `percent`, rounding will be in favor of explicitly set `target_size` values; a managed instance group with 2 instances and 2 `version`s, one of which has a `target_size.percent` of `60` will create 2 instances of that `version`. 
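+For instance, the scenario described above corresponds to a sketch like the
+following (the templates are assumed to be defined elsewhere in the config):
+
+```hcl
+resource "google_compute_instance_group_manager" "rounding_example" {
+  name               = "rounding-example"
+  zone               = "us-central1-a"
+  base_instance_name = "app"
+  target_size        = 2
+
+  version {
+    name              = "appserver"
+    instance_template = google_compute_instance_template.appserver.id
+  }
+
+  version {
+    name              = "appserver-canary"
+    instance_template = google_compute_instance_template.appserver-canary.id
+
+    # 60% of 2 instances rounds in favor of this explicitly set value,
+    # so both instances run this version.
+    target_size {
+      percent = 60
+    }
+  }
+}
+```
+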
+The `stateful_disk` block supports: (Include a `stateful_disk` block for each stateful disk required).
+
+* `device_name` - (Required) The device name of the disk to be attached.
+
+* `delete_rule` - (Optional) A value that prescribes what should happen to the stateful disk when the VM instance is deleted. The available options are `NEVER` and `ON_PERMANENT_INSTANCE_DELETION`. `NEVER` will detach the disk when the VM is deleted, but will not delete the disk. `ON_PERMANENT_INSTANCE_DELETION` will delete the stateful disk when the VM is permanently deleted from the instance group. The default is `NEVER`.
+
+
## Attributes Reference

In addition to the arguments listed above, the following computed
attributes are exported:

+* `id` - an identifier for the resource with format `projects/{{project}}/zones/{{zone}}/instanceGroupManagers/{{name}}`
+
* `fingerprint` - The fingerprint of the instance group manager.

* `instance_group` - The full URL of the instance group created by the manager.
diff --git a/third_party/terraform/website/docs/r/compute_instance_template.html.markdown b/third_party/terraform/website/docs/r/compute_instance_template.html.markdown
index 17da40487c7c..4b2f154802f0 100644
--- a/third_party/terraform/website/docs/r/compute_instance_template.html.markdown
+++ b/third_party/terraform/website/docs/r/compute_instance_template.html.markdown
@@ -112,7 +112,7 @@ resource "google_compute_instance_template" "instance_template" {

resource "google_compute_instance_group_manager" "instance_group_manager" {
  name               = "instance-group-manager"
-  instance_template  = google_compute_instance_template.instance_template.self_link
+  instance_template  = google_compute_instance_template.instance_template.id
  base_instance_name = "instance-group-manager"
  zone               = "us-central1-f"
  target_size        = "1"
@@ -136,7 +136,7 @@ group manager.

If you're not sure, we recommend deploying the latest image
available when Terraform runs, because this means all the instances in your
group will be based on the same image, always, and means that no upgrades or
changes to your instances happen outside of a `terraform apply`.
-You can achieve this by using the [`google_compute_image`](../d/datasource_compute_image.html)
+You can achieve this by using the [`google_compute_image`](../d/compute_image.html)
data source, which will retrieve the latest image on every `terraform apply`,
and will update the template to use that specific image:

@@ -413,6 +413,8 @@ The `shielded_instance_config` block supports:

In addition to the arguments listed above, the following computed attributes are
exported:

+* `id` - an identifier for the resource with format `projects/{{project}}/global/instanceTemplates/{{name}}`
+
* `metadata_fingerprint` - The unique fingerprint of the metadata.

* `self_link` - The URI of the created resource.

@@ -422,6 +424,14 @@ exported:

[1]: /docs/providers/google/r/compute_instance_group_manager.html
[2]: /docs/configuration/resources.html#lifecycle

+## Timeouts
+
+This resource provides the following
+[Timeouts](/docs/configuration/resources.html#timeouts) configuration options:
+
+- `create` - Default is 4 minutes.
+- `delete` - Default is 4 minutes.
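+
+For example, the defaults can be raised with a `timeouts` block -- a minimal
+sketch, with placeholder names:
+
+```hcl
+resource "google_compute_instance_template" "with_timeouts" {
+  name_prefix  = "template-"
+  machine_type = "n1-standard-1"
+
+  disk {
+    source_image = "debian-cloud/debian-9"
+  }
+
+  network_interface {
+    network = "default"
+  }
+
+  # Extend the default 4 minute create/delete timeouts.
+  timeouts {
+    create = "10m"
+    delete = "10m"
+  }
+}
+```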
+ ## Import Instance templates can be imported using any of these accepted formats: diff --git a/third_party/terraform/website/docs/r/compute_network_peering.html.markdown b/third_party/terraform/website/docs/r/compute_network_peering.html.markdown index 36395fb3c07e..1f015838ac52 100644 --- a/third_party/terraform/website/docs/r/compute_network_peering.html.markdown +++ b/third_party/terraform/website/docs/r/compute_network_peering.html.markdown @@ -24,14 +24,14 @@ to be functional. ```hcl resource "google_compute_network_peering" "peering1" { name = "peering1" - network = google_compute_network.default.self_link - peer_network = google_compute_network.other.self_link + network = google_compute_network.default.id + peer_network = google_compute_network.other.id } resource "google_compute_network_peering" "peering2" { name = "peering2" - network = google_compute_network.other.self_link - peer_network = google_compute_network.default.self_link + network = google_compute_network.other.id + peer_network = google_compute_network.default.id } resource "google_compute_network" "default" { @@ -60,18 +60,34 @@ may belong to a different project. Whether to export the custom routes to the peer network. Defaults to `false`. * `import_custom_routes` - (Optional) -Whether to export the custom routes from the peer network. Defaults to `false`. +Whether to import the custom routes from the peer network. Defaults to `false`. + +* `export_subnet_routes_with_public_ip` - (Optional) +Whether subnet routes with public IP range are exported. The default value is true, all subnet routes are exported. The IPv4 special-use ranges (https://en.wikipedia.org/wiki/IPv4#Special_addresses) are always exported to peers and are not controlled by this field. + +* `import_subnet_routes_with_public_ip` - (Optional) +Whether subnet routes with public IP range are imported. The default value is false. The IPv4 special-use ranges (https://en.wikipedia.org/wiki/IPv4#Special_addresses) are always imported from peers and are not controlled by this field. ## Attributes Reference In addition to the arguments listed above, the following computed attributes are exported: +* `id` - an identifier for the resource with format `{{network}}/{{name}}` + * `state` - State for the peering, either `ACTIVE` or `INACTIVE`. The peering is `ACTIVE` when there's a matching configuration in the peer network. * `state_details` - Details about the current state of the peering. +## Timeouts + +This resource provides the following +[Timeouts](/docs/configuration/resources.html#timeouts) configuration options: + +- `create` - Default is 4 minutes. +- `delete` - Default is 4 minutes. + ## Import VPC network peerings can be imported using the name and project of the primary network the peering exists in and the name of the network peering diff --git a/third_party/terraform/website/docs/r/compute_project_default_network_tier.html.markdown b/third_party/terraform/website/docs/r/compute_project_default_network_tier.html.markdown index ce67afa6ea90..8cd53fca302e 100644 --- a/third_party/terraform/website/docs/r/compute_project_default_network_tier.html.markdown +++ b/third_party/terraform/website/docs/r/compute_project_default_network_tier.html.markdown @@ -38,7 +38,16 @@ The following arguments are supported: ## Attributes Reference -Only the arguments listed above are exposed as attributes. 
+In addition to the arguments listed above, the following computed attributes are exported: + +* `id` - an identifier for the resource with format `{{project}}` + +## Timeouts + +This resource provides the following +[Timeouts](/docs/configuration/resources.html#timeouts) configuration options: + +- `create` - Default is 4 minutes (also used for update). ## Import diff --git a/third_party/terraform/website/docs/r/compute_project_metadata.html.markdown b/third_party/terraform/website/docs/r/compute_project_metadata.html.markdown index e975ec2a5ef3..b0f3adbb31ee 100644 --- a/third_party/terraform/website/docs/r/compute_project_metadata.html.markdown +++ b/third_party/terraform/website/docs/r/compute_project_metadata.html.markdown @@ -44,7 +44,17 @@ The following arguments are supported: ## Attributes Reference -Only the arguments listed above are exposed as attributes. +In addition to the arguments listed above, the following computed attributes are exported: + +* `id` - an identifier for the resource with format `{{project}}` + +## Timeouts + +This resource provides the following +[Timeouts](/docs/configuration/resources.html#timeouts) configuration options: + +- `create` - Default is 4 minutes (also used for update). +- `delete` - Default is 4 minutes. ## Import diff --git a/third_party/terraform/website/docs/r/compute_project_metadata_item.html.markdown b/third_party/terraform/website/docs/r/compute_project_metadata_item.html.markdown index b82d909e5521..37135fdd2d60 100644 --- a/third_party/terraform/website/docs/r/compute_project_metadata_item.html.markdown +++ b/third_party/terraform/website/docs/r/compute_project_metadata_item.html.markdown @@ -38,7 +38,9 @@ The following arguments are supported: ## Attributes Reference -Only the arguments listed above are exposed as attributes. 
+In addition to the arguments listed above, the following computed attributes are exported: + +* `id` - an identifier for the resource with format `{{key}}` ## Import diff --git a/third_party/terraform/website/docs/r/compute_region_instance_group_manager.html.markdown b/third_party/terraform/website/docs/r/compute_region_instance_group_manager.html.markdown index bdf7a4634f1c..faa3f2f01362 100644 --- a/third_party/terraform/website/docs/r/compute_region_instance_group_manager.html.markdown +++ b/third_party/terraform/website/docs/r/compute_region_instance_group_manager.html.markdown @@ -40,10 +40,10 @@ resource "google_compute_region_instance_group_manager" "appserver" { distribution_policy_zones = ["us-central1-a", "us-central1-f"] version { - instance_template = google_compute_instance_template.appserver.self_link + instance_template = google_compute_instance_template.appserver.id } - target_pools = [google_compute_target_pool.appserver.self_link] + target_pools = [google_compute_target_pool.appserver.id] target_size = 2 named_port { @@ -52,7 +52,7 @@ resource "google_compute_region_instance_group_manager" "appserver" { } auto_healing_policies { - health_check = google_compute_health_check.autohealing.self_link + health_check = google_compute_health_check.autohealing.id initial_delay_sec = 300 } } @@ -69,11 +69,11 @@ resource "google_compute_region_instance_group_manager" "appserver" { target_size = 5 version { - instance_template = google_compute_instance_template.appserver.self_link + instance_template = google_compute_instance_template.appserver.id } version { - instance_template = google_compute_instance_template.appserver-canary.self_link + instance_template = google_compute_instance_template.appserver-canary.id target_size { fixed = 1 } @@ -137,6 +137,9 @@ group. You can specify only one value. Structure is documented below. For more i * `distribution_policy_zones` - (Optional) The distribution policy for this managed instance group. You can specify one or more values. For more information, see the [official documentation](https://cloud.google.com/compute/docs/instance-groups/distributing-instances-with-regional-instance-groups#selectingzones). + +* `stateful_disk` - (Optional, [Beta](https://terraform.io/docs/providers/google/guides/provider_versions.html)) Disks created on the instances that will be preserved on instance delete, update, etc. Structure is documented below. For more information see the [official documentation](https://cloud.google.com/compute/docs/instance-groups/configuring-stateful-disks-in-migs). Proactive cross zone instance redistribution must be disabled before you can update stateful disks on existing instance group managers. This can be controlled via the `update_policy`. A configuration sketch is shown below.
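+
+A minimal sketch of a `stateful_disk` block on this resource (all names and the instance template reference are illustrative; the `device_name` must match a disk attached by the instance template):
+
+```hcl
+resource "google_compute_region_instance_group_manager" "stateful" {
+  provider           = google-beta
+  name               = "stateful-igm"
+  base_instance_name = "app"
+  region             = "us-central1"
+  target_size        = 2
+
+  version {
+    instance_template = google_compute_instance_template.appserver.id
+  }
+
+  # Preserve this disk across instance delete and update events.
+  stateful_disk {
+    device_name = "data-disk"
+    delete_rule = "NEVER"
+  }
+}
+```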
+ - - - The `update_policy` block supports: @@ -188,7 +191,7 @@ The `version` block supports: ```hcl version { name = "appserver-canary" - instance_template = google_compute_instance_template.appserver-canary.self_link + instance_template = google_compute_instance_template.appserver-canary.id target_size { fixed = 1 @@ -199,7 +202,7 @@ version { ```hcl version { name = "appserver-canary" - instance_template = google_compute_instance_template.appserver-canary.self_link + instance_template = google_compute_instance_template.appserver-canary.id target_size { percent = 20 @@ -224,11 +227,19 @@ The `target_size` block supports: Note that when using `percent`, rounding will be in favor of explicitly set `target_size` values; a managed instance group with 2 instances and 2 `version`s, one of which has a `target_size.percent` of `60`, will create 2 instances of that `version`. +The `stateful_disk` block supports (include a `stateful_disk` block for each stateful disk required): + +* `device_name` - (Required) The device name of the disk to be attached. + +* `delete_rule` - (Optional) A value that prescribes what should happen to the stateful disk when the VM instance is deleted. The available options are `NEVER` and `ON_PERMANENT_INSTANCE_DELETION`. `NEVER` will detach the disk when the VM is deleted, but will not delete the disk. `ON_PERMANENT_INSTANCE_DELETION` will delete the stateful disk when the VM is permanently deleted from the instance group. The default is `NEVER`. + ## Attributes Reference In addition to the arguments listed above, the following computed attributes are exported: +* `id` - an identifier for the resource with format `projects/{{project}}/regions/{{region}}/instanceGroupManagers/{{name}}` + * `fingerprint` - The fingerprint of the instance group manager. * `instance_group` - The full URL of the instance group created by the manager. diff --git a/third_party/terraform/website/docs/r/compute_router_interface.html.markdown b/third_party/terraform/website/docs/r/compute_router_interface.html.markdown index 8fe20838d4b3..d9925792e7ae 100644 --- a/third_party/terraform/website/docs/r/compute_router_interface.html.markdown +++ b/third_party/terraform/website/docs/r/compute_router_interface.html.markdown @@ -63,7 +63,17 @@ or both. ## Attributes Reference -Only the arguments listed above are exposed as attributes. +In addition to the arguments listed above, the following computed attributes are exported: + +* `id` - an identifier for the resource with format `{{region}}/{{router}}/{{name}}` + +## Timeouts + +This resource provides the following +[Timeouts](/docs/configuration/resources.html#timeouts) configuration options: + +- `create` - Default is 4 minutes. +- `delete` - Default is 4 minutes. ## Import diff --git a/third_party/terraform/website/docs/r/compute_security_policy.html.markdown b/third_party/terraform/website/docs/r/compute_security_policy.html.markdown index 0e3f1e411e5e..fb47cd2dd9b3 100644 --- a/third_party/terraform/website/docs/r/compute_security_policy.html.markdown +++ b/third_party/terraform/website/docs/r/compute_security_policy.html.markdown @@ -111,6 +111,8 @@ The `expr` block supports: In addition to the arguments listed above, the following computed attributes are exported: +* `id` - an identifier for the resource with format `projects/{{project}}/global/securityPolicies/{{name}}` + * `fingerprint` - Fingerprint of this resource. * `self_link` - The URI of the created resource.
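+
+As a usage sketch for the attributes above, a policy's exported `self_link` is commonly referenced from a backend service via its `security_policy` argument (the backend service wiring and all names here are illustrative assumptions, not part of this resource's documentation):
+
+```hcl
+resource "google_compute_security_policy" "policy" {
+  name = "my-policy"
+}
+
+resource "google_compute_http_health_check" "default" {
+  name         = "health-check"
+  request_path = "/"
+}
+
+resource "google_compute_backend_service" "backend" {
+  name          = "backend-service"
+  health_checks = [google_compute_http_health_check.default.self_link]
+
+  # Attach the security policy by its exported self_link.
+  security_policy = google_compute_security_policy.policy.self_link
+}
+```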
diff --git a/third_party/terraform/website/docs/r/compute_shared_vpc_host_project.html.markdown b/third_party/terraform/website/docs/r/compute_shared_vpc_host_project.html.markdown index 6a5a56f5cfa6..6b4441a0604e 100644 --- a/third_party/terraform/website/docs/r/compute_shared_vpc_host_project.html.markdown +++ b/third_party/terraform/website/docs/r/compute_shared_vpc_host_project.html.markdown @@ -44,6 +44,20 @@ The following arguments are expected: * `project` - (Required) The ID of the project that will serve as a Shared VPC host project +## Attributes Reference + +In addition to the arguments listed above, the following computed attributes are exported: + +* `id` - an identifier for the resource with format `{{project}}` + +## Timeouts + +This resource provides the following +[Timeouts](/docs/configuration/resources.html#timeouts) configuration options: + +- `create` - Default is 4 minutes. +- `delete` - Default is 4 minutes. + ## Import Google Compute Engine Shared VPC host project feature can be imported using the `project`, e.g. diff --git a/third_party/terraform/website/docs/r/compute_shared_vpc_service_project.html.markdown b/third_party/terraform/website/docs/r/compute_shared_vpc_service_project.html.markdown index 7db964db4031..2bbe46b2f30e 100644 --- a/third_party/terraform/website/docs/r/compute_shared_vpc_service_project.html.markdown +++ b/third_party/terraform/website/docs/r/compute_shared_vpc_service_project.html.markdown @@ -38,6 +38,20 @@ The following arguments are expected: * `service_project` - (Required) The ID of the project that will serve as a Shared VPC service project. +## Attributes Reference + +In addition to the arguments listed above, the following computed attributes are exported: + +* `id` - an identifier for the resource with format `{{host_project}}/{{service_project}}` + +## Timeouts + +This resource provides the following +[Timeouts](/docs/configuration/resources.html#timeouts) configuration options: + +- `create` - Default is 4 minutes. +- `delete` - Default is 4 minutes. + ## Import Google Compute Engine Shared VPC service project feature can be imported using the `host_project` and `service_project`, e.g. diff --git a/third_party/terraform/website/docs/r/compute_target_pool.html.markdown b/third_party/terraform/website/docs/r/compute_target_pool.html.markdown index 47119d23d410..d5bb492169ae 100644 --- a/third_party/terraform/website/docs/r/compute_target_pool.html.markdown +++ b/third_party/terraform/website/docs/r/compute_target_pool.html.markdown @@ -81,8 +81,19 @@ The following arguments are supported: In addition to the arguments listed above, the following computed attributes are exported: +* `id` - an identifier for the resource with format `projects/{{project}}/regions/{{region}}/targetPools/{{name}}` + * `self_link` - The URI of the created resource. +## Timeouts + +This resource provides the following +[Timeouts](/docs/configuration/resources.html#timeouts) configuration options: + +- `create` - Default is 4 minutes. +- `update` - Default is 4 minutes. +- `delete` - Default is 4 minutes. 
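+
+As a usage sketch (names are illustrative), the exported `self_link` is typically consumed by a forwarding rule's `target` argument:
+
+```hcl
+resource "google_compute_target_pool" "default" {
+  name   = "instance-pool"
+  region = "us-central1"
+}
+
+resource "google_compute_forwarding_rule" "default" {
+  name       = "website-forwarding-rule"
+  region     = "us-central1"
+  port_range = "80"
+
+  # Point the forwarding rule at the target pool created above.
+  target = google_compute_target_pool.default.self_link
+}
+```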
+ ## Import Target pools can be imported using any of the following formats: diff --git a/third_party/terraform/website/docs/r/container_cluster.html.markdown b/third_party/terraform/website/docs/r/container_cluster.html.markdown index bbf40a393d54..f6243acf6f91 100644 --- a/third_party/terraform/website/docs/r/container_cluster.html.markdown +++ b/third_party/terraform/website/docs/r/container_cluster.html.markdown @@ -9,6 +9,9 @@ description: |- # google\_container\_cluster +-> See the [Using GKE with Terraform](/docs/providers/google/guides/using_gke_with_terraform.html) +guide for more information about using GKE with Terraform. + Manages a Google Kubernetes Engine (GKE) cluster. For more information see [the official documentation](https://cloud.google.com/container-engine/docs/clusters) and [the API reference](https://cloud.google.com/kubernetes-engine/docs/reference/rest/v1beta1/projects.locations.clusters). @@ -48,7 +51,7 @@ resource "google_container_node_pool" "primary_preemptible_nodes" { node_config { preemptible = true - machine_type = "n1-standard-1" + machine_type = "e2-medium" metadata = { disable-legacy-endpoints = "true" @@ -137,14 +140,14 @@ in this cluster in CIDR notation (e.g. `10.96.0.0/14`). Leave blank to have one automatically chosen or specify a `/14` block in `10.0.0.0/8`. This field will only work for routes-based clusters, where `ip_allocation_policy` is not defined. -* `cluster_autoscaling` - (Optional, [Beta](https://terraform.io/docs/providers/google/guides/provider_versions.html)) +* `cluster_autoscaling` - (Optional) Per-cluster configuration of Node Auto-Provisioning with Cluster Autoscaler to automatically adjust the size of the cluster and create/delete node pools based on the current needs of the cluster's workload. See the [guide to using Node Auto-Provisioning](https://cloud.google.com/kubernetes-engine/docs/how-to/node-auto-provisioning) for more details. Structure is documented below. -* `database_encryption` - (Optional, [Beta](https://terraform.io/docs/providers/google/guides/provider_versions.html)). +* `database_encryption` - (Optional) Structure is documented below. * `description` - (Optional) Description of the cluster. @@ -169,7 +172,7 @@ for more information. will have statically granted permissions beyond those provided by the RBAC configuration or IAM. Defaults to `false` -* `enable_shielded_nodes` - (Optional, [Beta](https://terraform.io/docs/providers/google/guides/provider_versions.html)) Enable Shielded Nodes features on all nodes in this cluster. Defaults to `false`. +* `enable_shielded_nodes` - (Optional) Enable Shielded Nodes features on all nodes in this cluster. Defaults to `false`. * `initial_node_count` - (Optional) The number of nodes to create in this cluster's default node pool. In regional or multi-zonal clusters, this is the @@ -183,6 +186,10 @@ VPC-native clusters. Adding this block enables [IP aliasing](https://cloud.googl making the cluster VPC-native instead of routes-based. Structure is documented below. +* `networking_mode` - (Optional, [Beta](https://terraform.io/docs/providers/google/guides/provider_versions.html)) Determines whether alias IPs or routes will be used for pod IPs in the cluster. +Options are `VPC_NATIVE` or `ROUTES`. `VPC_NATIVE` enables [IP aliasing](https://cloud.google.com/kubernetes-engine/docs/how-to/ip-aliases), +and requires the `ip_allocation_policy` block to be defined. By default when this field is unspecified, GKE will create a `ROUTES`-based cluster. + * `logging_service` - (Optional) The logging service that the cluster should write logs to.
Available options include `logging.googleapis.com`(Legacy Stackdriver), `logging.googleapis.com/kubernetes`(Stackdriver Kubernetes Engine Logging), and `none`. Defaults to `logging.googleapis.com/kubernetes` @@ -265,12 +272,23 @@ region are guaranteed to support the same version. * `private_cluster_config` - (Optional) Configuration for [private clusters](https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters), clusters with private nodes. Structure is documented below. +* `cluster_telemetry` - (Optional, [Beta](https://terraform.io/docs/providers/google/guides/provider_versions.html)) Configuration for + [ClusterTelemetry](https://cloud.google.com/monitoring/kubernetes-engine/installing#controlling_the_collection_of_application_logs) feature. + Structure is documented below. + * `project` - (Optional) The ID of the project in which the resource belongs. If it is not provided, the provider project is used. -* `release_channel` - (Optional, [Beta](https://terraform.io/docs/providers/google/guides/provider_versions.html)) Configuration options for the - [Release channel](https://cloud.google.com/kubernetes-engine/docs/concepts/release-channels) - feature, which provide more control over automatic upgrades of your GKE clusters. Structure is documented below. +* `release_channel` - (Optional, [Beta](https://terraform.io/docs/providers/google/guides/provider_versions.html)) +Configuration options for the [Release channel](https://cloud.google.com/kubernetes-engine/docs/concepts/release-channels) +feature, which provide more control over automatic upgrades of your GKE clusters. +When updating this field, GKE imposes specific version requirements. See +[Migrating between release channels](https://cloud.google.com/kubernetes-engine/docs/concepts/release-channels#migrating_between_release_channels) +for more details; the `google_container_engine_versions` datasource can provide +the default version for a channel. Note that removing the `release_channel` +field from your config will cause Terraform to stop managing your cluster's +release channel, but will not unenroll it. Instead, use the `"UNSPECIFIED"` +channel. Structure is documented below. * `remove_default_node_pool` - (Optional) If `true`, deletes the default node pool upon cluster creation. If you're using `google_container_node_pool` @@ -298,6 +316,17 @@ subnetwork in which the cluster's instances are launched. * `enable_intranode_visibility` - (Optional, [Beta](https://terraform.io/docs/providers/google/guides/provider_versions.html)) Whether Intra-node visibility is enabled for this cluster. This makes same node pod to pod traffic visible for VPC network. +* `default_snat_status` - (Optional, [Beta](https://terraform.io/docs/providers/google/guides/provider_versions.html)) + [GKE SNAT](https://cloud.google.com/kubernetes-engine/docs/how-to/ip-masquerade-agent#how_ipmasq_works) DefaultSnatStatus contains the desired state of whether default sNAT should be disabled on the cluster; see the [API doc](https://cloud.google.com/kubernetes-engine/docs/reference/rest/v1beta1/projects.locations.clusters#networkconfig). + +The `default_snat_status` block supports: + +* `disabled` - (Required) Whether the cluster disables default in-node sNAT rules. In-node sNAT rules will be disabled when `defaultSnatStatus` is disabled. When `disabled` is set to `false`, default IP masquerade rules will be applied to the nodes to prevent sNAT on cluster-internal traffic. + +The `cluster_telemetry` block supports: +* `type` - Telemetry integration for the cluster.
Supported values are `ENABLE`, `DISABLE`, and `SYSTEM_ONLY`; + `SYSTEM_ONLY` (only system components are monitored and logged) is only available in GKE versions 1.15 and later. + The `addons_config` block supports: * `horizontal_pod_autoscaling` - (Optional) The status of the Horizontal Pod Autoscaling @@ -318,20 +347,29 @@ The `addons_config` block supports: It can only be disabled if the nodes already do not have network policies enabled. Defaults to disabled; set `disabled = false` to enable. +* `cloudrun_config` - (Optional). + The status of the CloudRun addon. It is disabled by default. + Set `disabled = false` to enable. + * `istio_config` - (Optional, [Beta](https://terraform.io/docs/providers/google/guides/provider_versions.html)). Structure is documented below. -* `cloudrun_config` - (Optional, [Beta](https://terraform.io/docs/providers/google/guides/provider_versions.html)). - The status of the CloudRun addon. It requires `istio_config` enabled. It is disabled by default. - Set `disabled = false` to enable. This addon can only be enabled at cluster creation time. - * `dns_cache_config` - (Optional, [Beta](https://terraform.io/docs/providers/google/guides/provider_versions.html)). The status of the NodeLocal DNSCache addon. It is disabled by default. - Set `enabled = true` to enable. - + Set `enabled = true` to enable. + **Enabling/Disabling NodeLocal DNSCache in an existing cluster is a disruptive operation. All cluster nodes running GKE 1.15 and higher are recreated.** +* `gce_persistent_disk_csi_driver_config` - (Optional, [Beta](https://terraform.io/docs/providers/google/guides/provider_versions.html)). + Whether this cluster should enable the Google Compute Engine Persistent Disk Container Storage Interface (CSI) Driver. Defaults to disabled; set `enabled = true` to enable. + +* `kalm_config` - (Optional, [Beta](https://terraform.io/docs/providers/google/guides/provider_versions.html)). + Configuration for the KALM addon, which manages the lifecycle of k8s applications. It is disabled by default; set `enabled = true` to enable. + +* `config_connector_config` - (Optional, [Beta](https://terraform.io/docs/providers/google/guides/provider_versions.html)). + The status of the ConfigConnector addon. It is disabled by default; set `enabled = true` to enable. + This example `addons_config` disables two addons: ```hcl @@ -389,6 +427,11 @@ for a list of types. The `auto_provisioning_defaults` block supports: +* `min_cpu_platform` - (Optional, [Beta](https://terraform.io/docs/providers/google/guides/provider_versions.html)) +Minimum CPU platform to be used for NAP created node pools. The instance may be scheduled on the +specified or newer CPU platform. Applicable values are the friendly names of CPU platforms, such +as "Intel Haswell" or "Intel Sandy Bridge". + * `oauth_scopes` - (Optional) Scopes that are used by NAP when creating node pools. -> `monitoring.write` is always enabled regardless of user input. `monitoring` and `logging.write` may also be enabled depending on the values for `monitoring_service` and `logging_service`. @@ -527,7 +570,7 @@ The `node_config` block supports: attached to each cluster node. Defaults to 0. * `machine_type` - (Optional) The name of a Google Compute Engine machine type. - Defaults to `n1-standard-1`. To create a custom machine type, value should be set as specified + Defaults to `e2-medium`. To create a custom machine type, value should be set as specified [here](https://cloud.google.com/compute/docs/reference/latest/instances#machineType).
* `metadata` - (Optional) The metadata key/value pairs assigned to instances in @@ -631,6 +674,10 @@ subnet. See [Private Cluster Limitations](https://cloud.google.com/kubernetes-en for more details. This field only applies to private clusters, when `enable_private_nodes` is `true`. +* `master_global_access_config` (Optional, [Beta](https://terraform.io/docs/providers/google/guides/provider_versions.html)) - Controls cluster master global +access settings. If unset, Terraform will no longer manage this field and will +not modify the previously-set value. Structure is documented below. + In addition, the `private_cluster_config` allows access to the following read-only fields: * `peering_name` - The name of the peering between this cluster and the Google owned VPC. @@ -643,6 +690,11 @@ In addition, the `private_cluster_config` allows access to the following read-on `private_cluster_config` when `enable_private_nodes` is `false`. It's recommended that you omit the block entirely if the field is not set to `true`. +The `private_cluster_config.master_global_access_config` block supports: + +* `enabled` (Optional) - Whether the cluster master is accessible globally or +not. + The `sandbox_config` block supports: * `sandbox_type` (Required) Which sandbox to use for pods in the node pool. @@ -721,6 +773,8 @@ The `vertical_pod_autoscaling` block supports: In addition to the arguments listed above, the following computed attributes are exported: +* `id` - an identifier for the resource with format `projects/{{project}}/locations/{{zone}}/clusters/{{name}}` + * `endpoint` - The IP address of this cluster's Kubernetes master. * `instance_group_urls` - List of instance group URLs which have been assigned @@ -760,6 +814,7 @@ This resource provides the following [Timeouts](/docs/configuration/resources.html#timeouts) configuration options: - `create` - Default is 40 minutes. +- `read` - Default is 40 minutes. - `update` - Default is 60 minutes. - `delete` - Default is 40 minutes. diff --git a/third_party/terraform/website/docs/r/container_node_pool.html.markdown b/third_party/terraform/website/docs/r/container_node_pool.html.markdown index cc7d7c628e9c..b711ce0c1ef4 100644 --- a/third_party/terraform/website/docs/r/container_node_pool.html.markdown +++ b/third_party/terraform/website/docs/r/container_node_pool.html.markdown @@ -9,6 +9,9 @@ description: |- # google\_container\_node\_pool +-> See the [Using GKE with Terraform](/docs/providers/google/guides/using_gke_with_terraform.html) +guide for more information about using GKE with Terraform. + Manages a node pool in a Google Kubernetes Engine (GKE) cluster separately from the cluster control plane. For more information see [the official documentation](https://cloud.google.com/container-engine/docs/node-pools) and [the API reference](https://cloud.google.com/kubernetes-engine/docs/reference/rest/v1beta1/projects.locations.clusters.nodePools). @@ -35,7 +38,7 @@ resource "google_container_node_pool" "primary_preemptible_nodes" { node_config { preemptible = true - machine_type = "n1-standard-1" + machine_type = "e2-medium" oauth_scopes = [ "https://www.googleapis.com/auth/logging.write", @@ -122,7 +125,7 @@ this will force recreation of the resource. See the [official documentation](https://cloud.google.com/kubernetes-engine/docs/how-to/flexible-pod-cidr) for more information. 
-* `node_locations` - (Optional, [Beta](https://terraform.io/docs/providers/google/guides/provider_versions.html)) +* `node_locations` - (Optional) The list of zones in which the node pool's nodes should be located. Nodes must be in the region of their regional cluster or in the same region as their cluster's zone for zonal clusters. If unspecified, the cluster-level @@ -187,6 +190,8 @@ The `upgrade_settings` block supports: In addition to the arguments listed above, the following computed attributes are exported: +* `id` - an identifier for the resource with format `{{project}}/{{zone}}/{{cluster}}/{{name}}` + * `instance_group_urls` - The resource URLs of the managed instance groups associated with this node pool. diff --git a/third_party/terraform/website/docs/r/dataflow_flex_template_job.html.markdown b/third_party/terraform/website/docs/r/dataflow_flex_template_job.html.markdown new file mode 100644 index 000000000000..307871d081c8 --- /dev/null +++ b/third_party/terraform/website/docs/r/dataflow_flex_template_job.html.markdown @@ -0,0 +1,82 @@ +--- +subcategory: "Dataflow" +layout: "google" +page_title: "Google: google_dataflow_flex_template_job" +sidebar_current: "docs-google-dataflow-flex-template-job" +description: |- + Creates a job in Dataflow based on a Flex Template. +--- + +# google\_dataflow\_flex\_template\_job + +Creates a [Flex Template](https://cloud.google.com/dataflow/docs/guides/templates/using-flex-templates) +job on Dataflow, which is an implementation of Apache Beam running on Google +Compute Engine. For more information see the official documentation for [Beam](https://beam.apache.org) +and [Dataflow](https://cloud.google.com/dataflow/). + +## Example Usage + +```hcl +resource "google_dataflow_flex_template_job" "big_data_job" { + provider = google-beta + name = "dataflow-flextemplates-job" + container_spec_gcs_path = "gs://my-bucket/templates/template.json" + parameters = { + inputSubscription = "messages" + } +} +``` + +## Note on "destroy" / "apply" +There are many types of Dataflow jobs. Some Dataflow jobs run constantly, +getting new data from (e.g.) a GCS bucket, and outputting data continuously. +Some jobs process a set amount of data then terminate. All jobs can fail while +running due to programming errors or other issues. In this way, Dataflow jobs +are different from most other Terraform / Google resources. + +The Dataflow resource is considered 'existing' while it is in a nonterminal +state. If it reaches a terminal state (e.g. 'FAILED', 'COMPLETE', +'CANCELLED'), it will be recreated on the next 'apply'. This is as expected for +jobs which run continuously, but may surprise users who use this resource for +other kinds of Dataflow jobs. + +A Dataflow job which is 'destroyed' may be "cancelled" or "drained". If +"cancelled", the job terminates - any data written remains where it is, but no +new data will be processed. If "drained", no new data will enter the pipeline, +but any data currently in the pipeline will finish being processed. The default +is "cancelled", but if you set `on_delete` to `"drain"` in the +configuration, you may experience a long wait for your `terraform destroy` to +complete. + +## Argument Reference + +The following arguments are supported: + +* `name` - (Required) A unique name for the resource, required by Dataflow. + +* `container_spec_gcs_path` - (Required) The GCS path to the Dataflow job Flex +Template. + +- - - + +* `parameters` - (Optional) Key/Value pairs to be passed to the Dataflow job (as +used in the template).
+ +* `labels` - (Optional) User labels to be specified for the job. Keys and values +should follow the restrictions specified in the [labeling restrictions](https://cloud.google.com/compute/docs/labeling-resources#restrictions) +page. **NOTE**: Google-provided Dataflow templates often provide default labels +that begin with `goog-dataflow-provided`. Unless explicitly set in config, these +labels will be ignored to prevent diffs on re-apply. + +* `on_delete` - (Optional) One of "drain" or "cancel". Specifies behavior of +deletion during `terraform destroy`. See above note. + +* `project` - (Optional) The project in which the resource belongs. If it is not +provided, the provider project is used. + +## Attributes Reference +In addition to the arguments listed above, the following computed attributes are exported: + +* `job_id` - The unique ID of this job. + +* `state` - The current state of the resource, selected from the [JobState enum](https://cloud.google.com/dataflow/docs/reference/rest/v1b3/projects.jobs#Job.JobState) diff --git a/third_party/terraform/website/docs/r/dataflow_job.html.markdown b/third_party/terraform/website/docs/r/dataflow_job.html.markdown index 21f9c77b98b4..1c3aba2fa0ba 100644 --- a/third_party/terraform/website/docs/r/dataflow_job.html.markdown +++ b/third_party/terraform/website/docs/r/dataflow_job.html.markdown @@ -58,7 +58,7 @@ The following arguments are supported: * `subnetwork` - (Optional) The subnetwork to which VMs will be assigned. Should be of the form "regions/REGION/subnetworks/SUBNETWORK". * `machine_type` - (Optional) The machine type to use for the job. * `ip_configuration` - (Optional) The configuration for VM IPs. Options are `"WORKER_IP_PUBLIC"` or `"WORKER_IP_PRIVATE"`. - +* `additional_experiments` - (Optional) List of experiments that should be used by the job. An example value is `["enable_stackdriver_agent_metrics"]`. ## Attributes Reference diff --git a/third_party/terraform/website/docs/r/dataproc_cluster.html.markdown b/third_party/terraform/website/docs/r/dataproc_cluster.html.markdown index 4afdff672169..86febff987a3 100644 --- a/third_party/terraform/website/docs/r/dataproc_cluster.html.markdown +++ b/third_party/terraform/website/docs/r/dataproc_cluster.html.markdown @@ -1,5 +1,5 @@ --- -subcategory: "Cloud Dataproc" +subcategory: "Dataproc" layout: "google" page_title: "Google: google_dataproc_cluster" sidebar_current: "docs-google-dataproc-cluster" @@ -146,6 +146,7 @@ The `cluster_config` block supports: # You can define multiple initialization_action blocks initialization_action { ... } encryption_config { ... } + endpoint_config { ... } } ``` @@ -186,6 +187,8 @@ The `cluster_config` block supports: * `lifecycle_config` (Optional, Beta) The settings for auto deletion cluster schedule. Structure defined below. +* `endpoint_config` (Optional, Beta) The config settings for port access on the cluster. + Structure defined below. - - - The `cluster_config.gce_cluster_config` block supports: @@ -435,10 +438,13 @@ cluster_config { Accepted values are: * ANACONDA * DRUID + * HBASE * HIVE_WEBHCAT * JUPYTER * KERBEROS * PRESTO + * RANGER + * SOLR * ZEPPELIN * ZOOKEEPER @@ -583,6 +589,21 @@ cluster_config { A timestamp in RFC3339 UTC "Zulu" format, accurate to nanoseconds. Example: "2014-10-02T15:01:23.045123456Z". 
+- - - + +The `endpoint_config` block (Optional, Computed, Beta) supports: + +```hcl +cluster_config { + endpoint_config { + enable_http_port_access = "true" + } +} +``` + +* `enable_http_port_access` - (Optional) The flag to enable http access to specific ports + on the cluster from external sources (aka Component Gateway). Defaults to false. + ## Attributes Reference In addition to the arguments listed above, the following computed attributes are @@ -607,6 +628,9 @@ exported: * `cluster_config.0.lifecycle_config.0.idle_start_time` - Time when the cluster became idle (most recent job finished) and became eligible for deletion due to idleness. +* `cluster_config.0.endpoint_config.0.http_ports` - The map of port descriptions to URLs. Will only be populated if + `enable_http_port_access` is true. + ## Timeouts This resource provides the following diff --git a/third_party/terraform/website/docs/r/dataproc_cluster_iam.html.markdown b/third_party/terraform/website/docs/r/dataproc_cluster_iam.html.markdown index 707f9d532f2f..288da1ecd0f0 100644 --- a/third_party/terraform/website/docs/r/dataproc_cluster_iam.html.markdown +++ b/third_party/terraform/website/docs/r/dataproc_cluster_iam.html.markdown @@ -1,5 +1,5 @@ --- -subcategory: "Cloud Dataproc" +subcategory: "Dataproc" layout: "google" page_title: "Google: google_dataproc_cluster_iam" sidebar_current: "docs-google-dataproc-cluster-iam" diff --git a/third_party/terraform/website/docs/r/dataproc_job.html.markdown b/third_party/terraform/website/docs/r/dataproc_job.html.markdown index c630e1e4ef65..04e34ec64b09 100644 --- a/third_party/terraform/website/docs/r/dataproc_job.html.markdown +++ b/third_party/terraform/website/docs/r/dataproc_job.html.markdown @@ -1,5 +1,5 @@ --- -subcategory: "Cloud Dataproc" +subcategory: "Dataproc" layout: "google" page_title: "Google: google_dataproc_job" sidebar_current: "docs-google-dataproc-job" diff --git a/third_party/terraform/website/docs/r/dataproc_job_iam.html.markdown b/third_party/terraform/website/docs/r/dataproc_job_iam.html.markdown index 8fc24c1cd232..e8811a7b3f19 100644 --- a/third_party/terraform/website/docs/r/dataproc_job_iam.html.markdown +++ b/third_party/terraform/website/docs/r/dataproc_job_iam.html.markdown @@ -1,5 +1,5 @@ --- -subcategory: "Cloud Dataproc" +subcategory: "Dataproc" layout: "google" page_title: "Google: google_dataproc_job_iam" sidebar_current: "docs-google-dataproc-job-iam" diff --git a/third_party/terraform/website/docs/r/dns_record_set.html.markdown b/third_party/terraform/website/docs/r/dns_record_set.html.markdown index 9bf54bf116e9..3b2e1fa50bb7 100644 --- a/third_party/terraform/website/docs/r/dns_record_set.html.markdown +++ b/third_party/terraform/website/docs/r/dns_record_set.html.markdown @@ -157,7 +157,10 @@ The following arguments are supported: ## Attributes Reference -Only the arguments listed above are exposed as attributes. 
+In addition to the arguments listed above, the following computed attributes are +exported: + +* `id` - an identifier for the resource with format `{{project}}/{{zone}}/{{name}}/{{type}}` ## Import diff --git a/third_party/terraform/website/docs/r/endpoints_service.html.markdown b/third_party/terraform/website/docs/r/endpoints_service.html.markdown index 232068c8e5cc..22a5e1ac893d 100644 --- a/third_party/terraform/website/docs/r/endpoints_service.html.markdown +++ b/third_party/terraform/website/docs/r/endpoints_service.html.markdown @@ -76,3 +76,12 @@ In addition to the arguments, the following attributes are available: ### Endpoint Object Structure * `name`: The simple name of the endpoint as described in the config. * `address`: The FQDN of the endpoint as described in the config. + +## Timeouts + +This resource provides the following +[Timeouts](/docs/configuration/resources.html#timeouts) configuration options: + +- `create` - Default is 10 minutes. +- `update` - Default is 10 minutes. +- `delete` - Default is 10 minutes. diff --git a/third_party/terraform/website/docs/r/google_folder_iam_audit_config.html.markdown b/third_party/terraform/website/docs/r/google_folder_iam_audit_config.html.markdown new file mode 100644 index 000000000000..f82ae3a862e9 --- /dev/null +++ b/third_party/terraform/website/docs/r/google_folder_iam_audit_config.html.markdown @@ -0,0 +1,55 @@ +--- +subcategory: "Cloud Platform" +layout: "google" +page_title: "Google: google_folder_iam_audit_config" +sidebar_current: "docs-google-folder-iam-audit-config" +description: |- + Allows management of audit logging config for a given service for a Google Cloud Platform folder. +--- + +# google\_folder\_iam\_audit\_config + +Allows management of audit logging config for a given service for a Google Cloud Platform folder. + +```hcl +resource "google_folder_iam_audit_config" "config" { + folder = "folders/{folder_id}" + service = "allServices" + audit_log_config { + log_type = "DATA_READ" + exempted_members = [ + "user:joebloggs@hashicorp.com", + ] + } +} +``` + +## Argument Reference + +The following arguments are supported: + +* `folder` - (Required) The resource name of the folder in which you want to manage the audit logging config. Its format is folders/{folder_id}. + +* `service` - (Required) Service which will be enabled for audit logging. The special value `allServices` covers all services. Note that if there are google\_folder\_iam\_audit\_config resources covering both `allServices` and a specific service then the union of the two AuditConfigs is used for that service: the `log_types` specified in each `audit_log_config` are enabled, and the `exempted_members` in each `audit_log_config` are exempted. + +* `audit_log_config` - (Required) The configuration for logging of each type of permission. This can be specified multiple times. Structure is documented below. + +--- + +The `audit_log_config` block supports: + +* `log_type` - (Required) Permission type for which logging is to be configured. Must be one of `DATA_READ`, `DATA_WRITE`, or `ADMIN_READ`. + +* `exempted_members` - (Optional) Identities that do not cause logging for this type of permission. + Each entry can have one of the following values: + * **user:{emailid}**: An email address that represents a specific Google account. For example, alice@gmail.com or joe@example.com. + * **serviceAccount:{emailid}**: An email address that represents a service account. For example, my-other-app@appspot.gserviceaccount.com.
+ * **group:{emailid}**: An email address that represents a Google group. For example, admins@example.com. + * **domain:{domain}**: A G Suite domain (primary, instead of alias) name that represents all the users of that domain. For example, google.com or example.com. + +## Import +IAM audit config imports use the identifier of the resource in question and the service, e.g. + +``` +terraform import google_folder_iam_audit_config.config "{{folder_id}} foo.googleapis.com" +``` diff --git a/third_party/terraform/website/docs/r/google_kms_crypto_key_iam.html.markdown b/third_party/terraform/website/docs/r/google_kms_crypto_key_iam.html.markdown index df41321ea3cf..6761e0720fbe 100644 --- a/third_party/terraform/website/docs/r/google_kms_crypto_key_iam.html.markdown +++ b/third_party/terraform/website/docs/r/google_kms_crypto_key_iam.html.markdown @@ -1,5 +1,5 @@ --- -subcategory: "Cloud KMS" +subcategory: "Cloud Key Management Service" layout: "google" page_title: "Google: google_kms_crypto_key_iam" sidebar_current: "docs-google-kms-crypto-key-iam" @@ -198,4 +198,4 @@ $ terraform import google_kms_crypto_key_iam_policy.crypto_key your-project-id/l ``` -> If you're importing a resource with beta features, make sure to include `-provider=google-beta` -as an argument so that Terraform uses the correct provider to import your resource. \ No newline at end of file +as an argument so that Terraform uses the correct provider to import your resource. diff --git a/third_party/terraform/website/docs/r/google_kms_key_ring_iam.html.markdown b/third_party/terraform/website/docs/r/google_kms_key_ring_iam.html.markdown index bed407fc272e..0aa98b6ba449 100644 --- a/third_party/terraform/website/docs/r/google_kms_key_ring_iam.html.markdown +++ b/third_party/terraform/website/docs/r/google_kms_key_ring_iam.html.markdown @@ -1,5 +1,5 @@ --- -subcategory: "Cloud KMS" +subcategory: "Cloud Key Management Service" layout: "google" page_title: "Google: google_kms_key_ring_iam" sidebar_current: "docs-google-kms-key-ring-iam" diff --git a/third_party/terraform/website/docs/r/google_organization_iam_custom_role.html.markdown b/third_party/terraform/website/docs/r/google_organization_iam_custom_role.html.markdown index 69a007ad5f6a..59b0591ca325 100644 --- a/third_party/terraform/website/docs/r/google_organization_iam_custom_role.html.markdown +++ b/third_party/terraform/website/docs/r/google_organization_iam_custom_role.html.markdown @@ -60,6 +60,10 @@ exported: * `deleted` - (Optional) The current deleted state of the role. +* `id` - an identifier for the resource with the format `organizations/{{org_id}}/roles/{{role_id}}` + +* `name` - The name of the role in the format `organizations/{{org_id}}/roles/{{role_id}}`. Like `id`, this field can be used as a reference in other resources such as IAM role bindings. + ## Import Customized IAM organization role can be imported using their URI, e.g. diff --git a/third_party/terraform/website/docs/r/google_project.html.markdown b/third_party/terraform/website/docs/r/google_project.html.markdown index 2fc93707cd54..4a88c0d8b67e 100644 --- a/third_party/terraform/website/docs/r/google_project.html.markdown +++ b/third_party/terraform/website/docs/r/google_project.html.markdown @@ -104,8 +104,19 @@ The following arguments are supported: In addition to the arguments listed above, the following computed attributes are exported: +* `id` - an identifier for the resource with format `projects/{{project}}` + * `number` - The numeric identifier of the project. 
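+
+As a small usage sketch (the resource and output names here are illustrative), these computed attributes can be referenced elsewhere in a configuration:
+
+```hcl
+resource "google_project" "my_project" {
+  name       = "My Project"
+  project_id = "your-project-id"
+  org_id     = "1234567"
+}
+
+# Expose the computed project number for use outside this configuration.
+output "project_number" {
+  value = google_project.my_project.number
+}
+```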
+## Timeouts + +This resource provides the following +[Timeouts](/docs/configuration/resources.html#timeouts) configuration options: + +- `create` - Default is 10 minutes. +- `update` - Default is 10 minutes. +- `delete` - Default is 10 minutes. + ## Import Projects can be imported using the `project_id`, e.g. diff --git a/third_party/terraform/website/docs/r/google_project_iam.html.markdown b/third_party/terraform/website/docs/r/google_project_iam.html.markdown index 14a427b82d7b..c8ea70862740 100644 --- a/third_party/terraform/website/docs/r/google_project_iam.html.markdown +++ b/third_party/terraform/website/docs/r/google_project_iam.html.markdown @@ -28,7 +28,8 @@ Four different resources help you manage your IAM policy for a project. Each of from anyone without organization-level access to the project. Proceed with caution. It's not recommended to use `google_project_iam_policy` with your provider project to avoid locking yourself out, and it should generally only be used with projects - fully managed by Terraform. + fully managed by Terraform. If you do use this resource, it is recommended to **import** the policy before + applying the change. ```hcl resource "google_project_iam_policy" "project" { @@ -47,7 +48,7 @@ data "google_iam_policy" "admin" { } ``` -With IAM Conditions ([beta](https://terraform.io/docs/providers/google/provider_versions.html)): +With IAM Conditions: ```hcl resource "google_project_iam_policy" "project" { @@ -87,7 +88,7 @@ resource "google_project_iam_binding" "project" { } ``` -With IAM Conditions ([beta](https://terraform.io/docs/providers/google/provider_versions.html)): +With IAM Conditions: ```hcl resource "google_project_iam_binding" "project" { @@ -116,7 +117,7 @@ resource "google_project_iam_member" "project" { } ``` -With IAM Conditions ([beta](https://terraform.io/docs/providers/google/provider_versions.html)): +With IAM Conditions: ```hcl resource "google_project_iam_member" "project" { @@ -182,7 +183,7 @@ will not be inferred from the provider. * `audit_log_config` - (Required only by google\_project\_iam\_audit\_config) The configuration for logging of each type of permission. This can be specified multiple times. Structure is documented below. -* `condition` - (Optional, [Beta](https://terraform.io/docs/providers/google/provider_versions.html)) An [IAM Condition](https://cloud.google.com/iam/docs/conditions-overview) for a given binding. +* `condition` - (Optional) An [IAM Condition](https://cloud.google.com/iam/docs/conditions-overview) for a given binding. Structure is documented below. --- @@ -241,4 +242,3 @@ terraform import google_project_iam_audit_config.my_project "your-project-id foo -> **Custom Roles**: If you're importing a IAM resource with a custom role, make sure to use the full name of the custom role, e.g. `[projects/my-project|organizations/my-org]/roles/my-custom-role`. - diff --git a/third_party/terraform/website/docs/r/google_project_iam_custom_role.html.markdown b/third_party/terraform/website/docs/r/google_project_iam_custom_role.html.markdown index d4dd6c8ed557..d4081d15ec23 100644 --- a/third_party/terraform/website/docs/r/google_project_iam_custom_role.html.markdown +++ b/third_party/terraform/website/docs/r/google_project_iam_custom_role.html.markdown @@ -60,6 +60,10 @@ exported: * `deleted` - (Optional) The current deleted state of the role. 
+ * `id` - an identifier for the resource with the format `projects/{{project}}/roles/{{role_id}}` + + * `name` - The name of the role in the format `projects/{{project}}/roles/{{role_id}}`. Like `id`, this field can be used as a reference in other resources such as IAM role bindings. + ## Import Customized IAM project role can be imported using their URI, e.g. diff --git a/third_party/terraform/website/docs/r/google_project_service.html.markdown b/third_party/terraform/website/docs/r/google_project_service.html.markdown index d39516d156ca..cdb0247c1dcf 100644 --- a/third_party/terraform/website/docs/r/google_project_service.html.markdown +++ b/third_party/terraform/website/docs/r/google_project_service.html.markdown @@ -38,6 +38,12 @@ If `false` or unset, an error will be generated if any enabled services depend o * `disable_on_destroy` - (Optional) If true, disable the service when the terraform resource is destroyed. Defaults to true. May be useful in the event that a project is long-lived but the infrastructure running in that project changes frequently. +## Attributes Reference + +In addition to the arguments listed above, the following computed attributes are exported: + +* `id` - an identifier for the resource with format `{{project}}/{{service}}` + ## Import Project services can be imported using the `project_id` and `service`, e.g. diff --git a/third_party/terraform/website/docs/r/google_service_account.html.markdown b/third_party/terraform/website/docs/r/google_service_account.html.markdown index 75efb5f5901a..a00cee70bdb9 100644 --- a/third_party/terraform/website/docs/r/google_service_account.html.markdown +++ b/third_party/terraform/website/docs/r/google_service_account.html.markdown @@ -50,6 +50,8 @@ The following arguments are supported: In addition to the arguments listed above, the following computed attributes are exported: +* `id` - an identifier for the resource with format `projects/{{project}}/serviceAccounts/{{email}}` + * `email` - The e-mail address of the service account. This value should be referenced from any `google_iam_policy` data sources that would grant the service account privileges. @@ -58,6 +60,13 @@ exported: * `unique_id` - The unique id of the service account. +## Timeouts + +This resource provides the following +[Timeouts](/docs/configuration/resources.html#timeouts) configuration options: + +- `create` - Default is 5 minutes. + ## Import Service accounts can be imported using their URI, e.g. 
diff --git a/third_party/terraform/website/docs/r/google_service_account_iam.html.markdown b/third_party/terraform/website/docs/r/google_service_account_iam.html.markdown index 8f87bd776ca7..5bd5551aef2e 100644 --- a/third_party/terraform/website/docs/r/google_service_account_iam.html.markdown +++ b/third_party/terraform/website/docs/r/google_service_account_iam.html.markdown @@ -63,7 +63,7 @@ resource "google_service_account_iam_binding" "admin-account-iam" { } ``` -With IAM Conditions ([beta](https://terraform.io/docs/providers/google/provider_versions.html)): +With IAM Conditions: ```hcl resource "google_service_account" "sa" { @@ -112,7 +112,7 @@ resource "google_service_account_iam_member" "gce-default-account-iam" { } ``` -With IAM Conditions ([beta](https://terraform.io/docs/providers/google/provider_versions.html)): +With IAM Conditions: ```hcl resource "google_service_account" "sa" { @@ -155,7 +155,7 @@ The following arguments are supported: * `policy_data` - (Required only by `google_service_account_iam_policy`) The policy data generated by a `google_iam_policy` data source. -* `condition` - (Optional, [Beta](https://terraform.io/docs/providers/google/provider_versions.html)) An [IAM Condition](https://cloud.google.com/iam/docs/conditions-overview) for a given binding. +* `condition` - (Optional) An [IAM Condition](https://cloud.google.com/iam/docs/conditions-overview) for a given binding. Structure is documented below. The `condition` block supports: diff --git a/third_party/terraform/website/docs/r/google_service_account_key.html.markdown b/third_party/terraform/website/docs/r/google_service_account_key.html.markdown index 16faf7e572c7..974608205668 100644 --- a/third_party/terraform/website/docs/r/google_service_account_key.html.markdown +++ b/third_party/terraform/website/docs/r/google_service_account_key.html.markdown @@ -7,11 +7,10 @@ description: |- Allows management of a Google Cloud Platform service account Key Pair --- -# google\_service\_account\_key +# google_service_account_key Creates and manages service account key-pairs, which allow the user to establish identity of a service account outside of GCP. For more information, see [the official documentation](https://cloud.google.com/iam/docs/creating-managing-service-account-keys) and [API](https://cloud.google.com/iam/reference/rest/v1/projects.serviceAccounts.keys). - ## Example Usage, creating a new Key Pair ```hcl @@ -61,7 +60,7 @@ Valid values are listed at [ServiceAccountPrivateKeyType](https://cloud.google.com/iam/reference/rest/v1/projects.serviceAccounts.keys#ServiceAccountKeyAlgorithm) (only used on create) -* `public_key_type` (Optional) The output format of the public key requested. X509_PEM is the default output format. +* `public_key_type` (Optional) The output format of the public key requested. TYPE_X509_PEM_FILE is the default output format. * `private_key_type` (Optional) The output format of the private key. TYPE_GOOGLE_CREDENTIALS_FILE is the default output format. @@ -69,6 +68,8 @@ Valid values are listed at The following attributes are exported in addition to the arguments listed above: +* `id` - an identifier for the resource with format `projects/{{project}}/serviceAccounts/{{account}}/keys/{{key}}` + * `name` - The name used for this key pair * `public_key` - The public key, base64 encoded @@ -80,4 +81,3 @@ service account keys through the CLI or web console. This is only populated when * `valid_before` - The key can be used before this timestamp. 
A timestamp in RFC3339 UTC "Zulu" format, accurate to nanoseconds. Example: "2014-10-02T15:01:23.045123456Z". - diff --git a/third_party/terraform/website/docs/r/logging_billing_account_bucket_config.html.markdown b/third_party/terraform/website/docs/r/logging_billing_account_bucket_config.html.markdown new file mode 100644 index 000000000000..f92395d589c1 --- /dev/null +++ b/third_party/terraform/website/docs/r/logging_billing_account_bucket_config.html.markdown @@ -0,0 +1,64 @@ +--- +subcategory: "Cloud (Stackdriver) Logging" +layout: "google" +page_title: "Google: google_logging_billing_account_bucket_config" +sidebar_current: "docs-google-logging-billing-account-bucket-config" +description: |- + Manages a billing account level logging bucket config. +--- + +# google\_logging\_billing\_account\_bucket\_config + +Manages a billing account level logging bucket config. For more information see +[the official logging documentation](https://cloud.google.com/logging/docs/) and +[Storing Logs](https://cloud.google.com/logging/docs/storage). + +~> **Note:** Logging buckets are automatically created for a given folder, project, organization, or billingAccount, and cannot be deleted. Creating a resource of this type will acquire and update the resource that already exists at the desired location. These buckets cannot be removed, so deleting this resource will remove the bucket config from your Terraform state but will leave the logging bucket unchanged. The buckets that are currently automatically created are "_Default" and "_Required". + +## Example Usage + +```hcl +data "google_billing_account" "default" { + billing_account = "00AA00-000AAA-00AA0A" +} + +resource "google_logging_billing_account_bucket_config" "basic" { + billing_account = data.google_billing_account.default.billing_account + location = "global" + retention_days = 30 + bucket_id = "_Default" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `billing_account` - (Required) The parent resource that contains the logging bucket. + +* `location` - (Required) The location of the bucket. The supported locations are: "global" and "us-central1" + +* `bucket_id` - (Required) The name of the logging bucket. Logging automatically creates two log buckets: `_Required` and `_Default`. + +* `description` - (Optional) Describes this bucket. + +* `retention_days` - (Optional) Logs will be retained by default for this amount of time, after which they will automatically be deleted. The minimum retention period is 1 day. If this value is set to zero at bucket creation time, the default time of 30 days will be used. + +## Attributes Reference + +In addition to the arguments listed above, the following computed attributes are +exported: + +* `id` - an identifier for the resource with format `billingAccounts/{{billingAccount}}/locations/{{location}}/buckets/{{bucket_id}}` + +* `name` - The resource name of the bucket. For example: "billingAccounts/my-billing-account-id/locations/my-location/buckets/my-bucket-id" + +* `lifecycle_state` - The bucket's lifecycle such as active or deleted. See [LifecycleState](https://cloud.google.com/logging/docs/reference/v2/rest/v2/billingAccounts.buckets#LogBucket.LifecycleState).
+ +## Import + +This resource can be imported using the following format: + +``` +$ terraform import google_logging_billing_account_bucket_config.default billingAccounts/{{billingAccount}}/locations/{{location}}/buckets/{{bucket_id}} +``` diff --git a/third_party/terraform/website/docs/r/logging_billing_account_exclusion.html.markdown b/third_party/terraform/website/docs/r/logging_billing_account_exclusion.html.markdown index b54161a8d10b..e1ae396442d2 100644 --- a/third_party/terraform/website/docs/r/logging_billing_account_exclusion.html.markdown +++ b/third_party/terraform/website/docs/r/logging_billing_account_exclusion.html.markdown @@ -1,5 +1,5 @@ --- -subcategory: "Stackdriver Logging" +subcategory: "Cloud (Stackdriver) Logging" layout: "google" page_title: "Google: google_logging_billing_account_exclusion" sidebar_current: "docs-google-logging-billing_account-exclusion" @@ -47,6 +47,12 @@ The following arguments are supported: See [Advanced Log Filters](https://cloud.google.com/logging/docs/view/advanced-filters) for information on how to write a filter. +## Attributes Reference + +In addition to the arguments listed above, the following computed attributes are exported: + +* `id` - an identifier for the resource with format `billingAccounts/{{billing_account}}/exclusions/{{name}}` + ## Import Billing account logging exclusions can be imported using their URI, e.g. diff --git a/third_party/terraform/website/docs/r/logging_billing_account_sink.html.markdown b/third_party/terraform/website/docs/r/logging_billing_account_sink.html.markdown index 75fc816eee32..d654997e6f95 100644 --- a/third_party/terraform/website/docs/r/logging_billing_account_sink.html.markdown +++ b/third_party/terraform/website/docs/r/logging_billing_account_sink.html.markdown @@ -1,5 +1,5 @@ --- -subcategory: "Stackdriver Logging" +subcategory: "Cloud (Stackdriver) Logging" layout: "google" page_title: "Google: google_logging_billing_account_sink" sidebar_current: "docs-google-logging-billing-account-sink" @@ -77,6 +77,8 @@ The `bigquery_options` block supports: In addition to the arguments listed above, the following computed attributes are exported: +* `id` - an identifier for the resource with format `billingAccounts/{{billing_account_id}}/sinks/{{sink_id}}` + * `writer_identity` - The identity associated with this sink. This identity must be granted write access to the configured `destination`. diff --git a/third_party/terraform/website/docs/r/logging_folder_bucket_config.html.markdown b/third_party/terraform/website/docs/r/logging_folder_bucket_config.html.markdown new file mode 100644 index 000000000000..eb42bcfdc53a --- /dev/null +++ b/third_party/terraform/website/docs/r/logging_folder_bucket_config.html.markdown @@ -0,0 +1,65 @@ +--- +subcategory: "Cloud (Stackdriver) Logging" +layout: "google" +page_title: "Google: google_logging_folder_bucket_config" +sidebar_current: "docs-google-logging-folder-bucket-config" +description: |- + Manages a folder-level logging bucket config. +--- + +# google\_logging\_folder\_bucket\_config + +Manages a folder-level logging bucket config. For more information see +[the official logging documentation](https://cloud.google.com/logging/docs/) and +[Storing Logs](https://cloud.google.com/logging/docs/storage). + +~> **Note:** Logging buckets are automatically created for a given folder, project, organization, or billingAccount, and cannot be deleted. Creating a resource of this type will acquire and update the resource that already exists at the desired location.
These buckets cannot be removed, so deleting this resource will remove the bucket config from your Terraform state but will leave the logging bucket unchanged. The buckets that are currently automatically created are "_Default" and "_Required". + +## Example Usage + +```hcl +resource "google_folder" "default" { + display_name = "some-folder-name" + parent = "organizations/123456789" +} + +resource "google_logging_folder_bucket_config" "basic" { + folder = google_folder.default.name + location = "global" + retention_days = 30 + bucket_id = "_Default" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `folder` - (Required) The parent resource that contains the logging bucket. + +* `location` - (Required) The location of the bucket. The supported locations are "global" and "us-central1". + +* `bucket_id` - (Required) The name of the logging bucket. Logging automatically creates two log buckets: `_Required` and `_Default`. + +* `description` - (Optional) Describes this bucket. + +* `retention_days` - (Optional) Logs will be retained by default for this amount of time, after which they will automatically be deleted. The minimum retention period is 1 day. If this value is set to zero at bucket creation time, the default time of 30 days will be used. + +## Attributes Reference + +In addition to the arguments listed above, the following computed attributes are +exported: + +* `id` - an identifier for the resource with format `folders/{{folder}}/locations/{{location}}/buckets/{{bucket_id}}` + +* `name` - The resource name of the bucket. For example: "folders/my-folder-id/locations/my-location/buckets/my-bucket-id" + +* `lifecycle_state` - The bucket's lifecycle such as active or deleted. See [LifecycleState](https://cloud.google.com/logging/docs/reference/v2/rest/v2/billingAccounts.buckets#LogBucket.LifecycleState). + +## Import + +This resource can be imported using the following format: + +``` +$ terraform import google_logging_folder_bucket_config.default folders/{{folder}}/locations/{{location}}/buckets/{{bucket_id}} +``` diff --git a/third_party/terraform/website/docs/r/logging_folder_exclusion.html.markdown b/third_party/terraform/website/docs/r/logging_folder_exclusion.html.markdown index 615e53d85e7e..8d6ceee78314 100644 --- a/third_party/terraform/website/docs/r/logging_folder_exclusion.html.markdown +++ b/third_party/terraform/website/docs/r/logging_folder_exclusion.html.markdown @@ -1,5 +1,5 @@ --- -subcategory: "Stackdriver Logging" +subcategory: "Cloud (Stackdriver) Logging" layout: "google" page_title: "Google: google_logging_folder_exclusion" sidebar_current: "docs-google-logging-folder-exclusion" @@ -53,6 +53,12 @@ The following arguments are supported: See [Advanced Log Filters](https://cloud.google.com/logging/docs/view/advanced-filters) for information on how to write a filter. +## Attributes Reference + +In addition to the arguments listed above, the following computed attributes are exported: + +* `id` - an identifier for the resource with format `folders/{{folder}}/exclusions/{{name}}` + ## Import Folder-level logging exclusions can be imported using their URI, e.g.
diff --git a/third_party/terraform/website/docs/r/logging_folder_sink.html.markdown b/third_party/terraform/website/docs/r/logging_folder_sink.html.markdown index f07f743e4d59..788940b8a5c9 100644 --- a/third_party/terraform/website/docs/r/logging_folder_sink.html.markdown +++ b/third_party/terraform/website/docs/r/logging_folder_sink.html.markdown @@ -1,5 +1,5 @@ --- -subcategory: "Stackdriver Logging" +subcategory: "Cloud (Stackdriver) Logging" layout: "google" page_title: "Google: google_logging_folder_sink" sidebar_current: "docs-google-logging-folder-sink" @@ -87,6 +87,8 @@ The `bigquery_options` block supports: In addition to the arguments listed above, the following computed attributes are exported: +* `id` - an identifier for the resource with format `folders/{{folder_id}}/sinks/{{name}}` + * `writer_identity` - The identity associated with this sink. This identity must be granted write access to the configured `destination`. @@ -95,5 +97,5 @@ exported: Folder-level logging sinks can be imported using this format: ``` -$ terraform import google_logging_folder_sink.my_sink folders/{{folder_id}}/sinks/{{sink_id}} +$ terraform import google_logging_folder_sink.my_sink folders/{{folder_id}}/sinks/{{name}} ``` diff --git a/third_party/terraform/website/docs/r/logging_organization_bucket_config.html.markdown b/third_party/terraform/website/docs/r/logging_organization_bucket_config.html.markdown new file mode 100644 index 000000000000..ad93586746de --- /dev/null +++ b/third_party/terraform/website/docs/r/logging_organization_bucket_config.html.markdown @@ -0,0 +1,65 @@ +--- +subcategory: "Cloud (Stackdriver) Logging" +layout: "google" +page_title: "Google: google_logging_organization_bucket_config" +sidebar_current: "docs-google-logging-organization-bucket-config" +description: |- + Manages an organization-level logging bucket config. +--- + +# google\_logging\_organization\_bucket\_config + +Manages an organization-level logging bucket config. For more information see +[the official logging documentation](https://cloud.google.com/logging/docs/) and +[Storing Logs](https://cloud.google.com/logging/docs/storage). + +~> **Note:** Logging buckets are automatically created for a given folder, project, organization, or billing account, and cannot be deleted. Creating a resource of this type will acquire and update the resource that already exists at the desired location. These buckets cannot be removed, so deleting this resource will remove the bucket config from your Terraform state but will leave the logging bucket unchanged. The buckets that are currently automatically created are "_Default" and "_Required". + +## Example Usage + +```hcl +data "google_organization" "default" { + organization = "123456789" +} + +resource "google_logging_organization_bucket_config" "basic" { + organization = data.google_organization.default.organization + location = "global" + retention_days = 30 + bucket_id = "_Default" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `organization` - (Required) The parent resource that contains the logging bucket. + +* `location` - (Required) The location of the bucket. The supported locations are "global" and "us-central1". + +* `bucket_id` - (Required) The name of the logging bucket. Logging automatically creates two log buckets: `_Required` and `_Default`. + +* `description` - (Optional) Describes this bucket. + +* `retention_days` - (Optional) Logs will be retained by default for this amount of time, after which they will automatically be deleted.
The minimum retention period is 1 day. If this value is set to zero at bucket creation time, the default time of 30 days will be used. + +## Attributes Reference + +In addition to the arguments listed above, the following computed attributes are +exported: + +* `id` - an identifier for the resource with format `organizations/{{organization}}/locations/{{location}}/buckets/{{bucket_id}}` + +* `name` - The resource name of the bucket. For example: "organizations/my-organization-id/locations/my-location/buckets/my-bucket-id" + +* `lifecycle_state` - The bucket's lifecycle such as active or deleted. See [LifecycleState](https://cloud.google.com/logging/docs/reference/v2/rest/v2/billingAccounts.buckets#LogBucket.LifecycleState). + +## Import + +This resource can be imported using the following format: + +``` +$ terraform import google_logging_organization_bucket_config.default organizations/{{organization}}/locations/{{location}}/buckets/{{bucket_id}} +``` diff --git a/third_party/terraform/website/docs/r/logging_organization_exclusion.html.markdown b/third_party/terraform/website/docs/r/logging_organization_exclusion.html.markdown index c580ba520dc0..bb628b504f6a 100644 --- a/third_party/terraform/website/docs/r/logging_organization_exclusion.html.markdown +++ b/third_party/terraform/website/docs/r/logging_organization_exclusion.html.markdown @@ -1,5 +1,5 @@ --- -subcategory: "Stackdriver Logging" +subcategory: "Cloud (Stackdriver) Logging" layout: "google" page_title: "Google: google_logging_organization_exclusion" sidebar_current: "docs-google-logging-organization-exclusion" @@ -47,10 +47,16 @@ The following arguments are supported: See [Advanced Log Filters](https://cloud.google.com/logging/docs/view/advanced-filters) for information on how to write a filter. +## Attributes Reference + +In addition to the arguments listed above, the following computed attributes are exported: + +* `id` - an identifier for the resource with format `organizations/{{organization}}/exclusions/{{name}}` + ## Import Organization-level logging exclusions can be imported using their URI, e.g. ``` -$ terraform import google_logging_organization_exclusion.my_exclusion organizations/my-organization/exclusions/my-exclusion +$ terraform import google_logging_organization_exclusion.my_exclusion organizations/{{organization}}/exclusions/{{name}} ``` diff --git a/third_party/terraform/website/docs/r/logging_organization_sink.html.markdown b/third_party/terraform/website/docs/r/logging_organization_sink.html.markdown index 0cbb5508c5ce..75dab680c7e8 100644 --- a/third_party/terraform/website/docs/r/logging_organization_sink.html.markdown +++ b/third_party/terraform/website/docs/r/logging_organization_sink.html.markdown @@ -1,5 +1,5 @@ --- -subcategory: "Stackdriver Logging" +subcategory: "Cloud (Stackdriver) Logging" layout: "google" page_title: "Google: google_logging_organization_sink" sidebar_current: "docs-google-logging-organization-sink" @@ -79,6 +79,8 @@ The `bigquery_options` block supports: In addition to the arguments listed above, the following computed attributes are exported: +* `id` - an identifier for the resource with format `organizations/{{organization}}/sinks/{{name}}` + * `writer_identity` - The identity associated with this sink. This identity must be granted write access to the configured `destination`.
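To make the `writer_identity` requirement concrete, here is a minimal sketch of granting that identity write access to a Cloud Storage destination. The sink reference, resource name, and role choice are illustrative assumptions, not part of this changeset.

```hcl
# Hypothetical: grant the organization sink's writer identity permission to
# create objects in the destination bucket's project.
resource "google_project_iam_member" "log_writer" {
  role   = "roles/storage.objectCreator"
  member = google_logging_organization_sink.my_sink.writer_identity
}
```

Scoping the grant to `roles/storage.objectCreator` (rather than a broader storage role) keeps the sink identity limited to writing log objects.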
diff --git a/third_party/terraform/website/docs/r/logging_project_bucket_config.html.markdown b/third_party/terraform/website/docs/r/logging_project_bucket_config.html.markdown new file mode 100644 index 000000000000..10bce5197ab3 --- /dev/null +++ b/third_party/terraform/website/docs/r/logging_project_bucket_config.html.markdown @@ -0,0 +1,66 @@ +--- +subcategory: "Cloud (Stackdriver) Logging" +layout: "google" +page_title: "Google: google_logging_project_bucket_config" +sidebar_current: "docs-google-logging-project-bucket-config" +description: |- + Manages a project-level logging bucket config. +--- + +# google\_logging\_project\_bucket\_config + +Manages a project-level logging bucket config. For more information see +[the official logging documentation](https://cloud.google.com/logging/docs/) and
[Storing Logs](https://cloud.google.com/logging/docs/storage). + +~> **Note:** Logging buckets are automatically created for a given folder, project, organization, or billing account, and cannot be deleted. Creating a resource of this type will acquire and update the resource that already exists at the desired location. These buckets cannot be removed, so deleting this resource will remove the bucket config from your Terraform state but will leave the logging bucket unchanged. The buckets that are currently automatically created are "_Default" and "_Required". + +## Example Usage + +```hcl +resource "google_project" "default" { + project_id = "your-project-id" + name = "your-project-id" + org_id = "123456789" +} + +resource "google_logging_project_bucket_config" "basic" { + project = google_project.default.name + location = "global" + retention_days = 30 + bucket_id = "_Default" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `project` - (Required) The parent resource that contains the logging bucket. + +* `location` - (Required) The location of the bucket. The supported locations are "global" and "us-central1". + +* `bucket_id` - (Required) The name of the logging bucket. Logging automatically creates two log buckets: `_Required` and `_Default`. + +* `description` - (Optional) Describes this bucket. + +* `retention_days` - (Optional) Logs will be retained by default for this amount of time, after which they will automatically be deleted. The minimum retention period is 1 day. If this value is set to zero at bucket creation time, the default time of 30 days will be used. + +## Attributes Reference + +In addition to the arguments listed above, the following computed attributes are +exported: + +* `id` - an identifier for the resource with format `projects/{{project}}/locations/{{location}}/buckets/{{bucket_id}}` + +* `name` - The resource name of the bucket. For example: "projects/my-project-id/locations/my-location/buckets/my-bucket-id" + +* `lifecycle_state` - The bucket's lifecycle such as active or deleted. See [LifecycleState](https://cloud.google.com/logging/docs/reference/v2/rest/v2/billingAccounts.buckets#LogBucket.LifecycleState).
+ +## Import + +This resource can be imported using the following format: + +``` +$ terraform import google_logging_project_bucket_config.default projects/{{project}}/locations/{{location}}/buckets/{{bucket_id}} +``` diff --git a/third_party/terraform/website/docs/r/logging_project_exclusion.html.markdown b/third_party/terraform/website/docs/r/logging_project_exclusion.html.markdown index 854d968db428..5fcbb9a66804 100644 --- a/third_party/terraform/website/docs/r/logging_project_exclusion.html.markdown +++ b/third_party/terraform/website/docs/r/logging_project_exclusion.html.markdown @@ -1,5 +1,5 @@ --- -subcategory: "Stackdriver Logging" +subcategory: "Cloud (Stackdriver) Logging" layout: "google" page_title: "Google: google_logging_project_exclusion" sidebar_current: "docs-google-logging-project-exclusion" @@ -47,6 +47,12 @@ The following arguments are supported: * `project` - (Optional) The project to create the exclusion in. If omitted, the project associated with the provider is used. +## Attributes Reference + +In addition to the arguments listed above, the following computed attributes are exported: + +* `id` - an identifier for the resource with format `projects/{{project}}/exclusions/{{name}}` + ## Import Project-level logging exclusions can be imported using their URI, e.g. diff --git a/third_party/terraform/website/docs/r/logging_project_sink.html.markdown b/third_party/terraform/website/docs/r/logging_project_sink.html.markdown index 8beb419b22ff..2e40a8092b42 100644 --- a/third_party/terraform/website/docs/r/logging_project_sink.html.markdown +++ b/third_party/terraform/website/docs/r/logging_project_sink.html.markdown @@ -1,5 +1,5 @@ --- -subcategory: "Stackdriver Logging" +subcategory: "Cloud (Stackdriver) Logging" layout: "google" page_title: "Google: google_logging_project_sink" sidebar_current: "docs-google-logging-project-sink" @@ -127,6 +127,8 @@ The `bigquery_options` block supports: In addition to the arguments listed above, the following computed attributes are exported: +* `id` - an identifier for the resource with format `projects/{{project}}/sinks/{{name}}` + * `writer_identity` - The identity associated with this sink. This identity must be granted write access to the configured `destination`. diff --git a/third_party/terraform/website/docs/r/monitoring_dashboard.html.markdown b/third_party/terraform/website/docs/r/monitoring_dashboard.html.markdown new file mode 100644 index 000000000000..54633a2b2b84 --- /dev/null +++ b/third_party/terraform/website/docs/r/monitoring_dashboard.html.markdown @@ -0,0 +1,156 @@ +--- +subcategory: "Cloud (Stackdriver) Monitoring" +layout: "google" +page_title: "Google: google_monitoring_dashboard" +sidebar_current: "docs-google-monitoring-dashboard" +description: |- + A Google Stackdriver dashboard. +--- + +# google\_monitoring\_dashboard + +A Google Stackdriver dashboard. Dashboards define the content and layout of pages in the Stackdriver web application. + +To get more information about Dashboards, see: + +* [API documentation](https://cloud.google.com/monitoring/api/ref_v3/rest/v1/projects.dashboards) +* How-to Guides + * [Official Documentation](https://cloud.google.com/monitoring/dashboards) + +## Example Usage - Monitoring Dashboard Basic + + +```hcl +resource "google_monitoring_dashboard" "dashboard" { + dashboard_json = < If you're importing a resource with beta features, make sure to include `-provider=google-beta` +as an argument so that Terraform uses the correct provider to import your resource. 
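Since `dashboard_json` accepts any valid Dashboards API JSON document, a minimal configuration can be sketched as follows; the display name and widget content are illustrative assumptions rather than part of this changeset.

```hcl
resource "google_monitoring_dashboard" "example" {
  # Hypothetical minimal dashboard: a grid layout containing one text widget.
  dashboard_json = <<EOF
{
  "displayName": "Example Dashboard",
  "gridLayout": {
    "widgets": [
      {
        "text": {
          "content": "Placeholder widget"
        }
      }
    ]
  }
}
EOF
}
```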
+ +## User Project Overrides + +This resource supports [User Project Overrides](https://www.terraform.io/docs/providers/google/guides/provider_reference.html#user_project_override). diff --git a/third_party/terraform/website/docs/r/runtimeconfig_config.html.markdown b/third_party/terraform/website/docs/r/runtimeconfig_config.html.markdown index 0af0429327a5..06297dd82996 100644 --- a/third_party/terraform/website/docs/r/runtimeconfig_config.html.markdown +++ b/third_party/terraform/website/docs/r/runtimeconfig_config.html.markdown @@ -1,5 +1,5 @@ --- -subcategory: "Cloud Runtime Configuration" +subcategory: "Runtime Configurator" layout: "google" page_title: "Google: google_runtimeconfig_config" sidebar_current: "docs-google-runtimeconfig-config" @@ -39,6 +39,12 @@ is not provided, the provider project is used. * `description` - (Optional) The description to associate with the runtime config. +## Attributes Reference + +In addition to the arguments listed above, the following computed attributes are exported: + +* `id` - an identifier for the resource with format `projects/{{project}}/configs/{{name}}` + ## Import Runtime Configs can be imported using the `name` or full config name, e.g. diff --git a/third_party/terraform/website/docs/r/runtimeconfig_variable.html.markdown b/third_party/terraform/website/docs/r/runtimeconfig_variable.html.markdown index d5b5e80ba3d8..647897837c75 100644 --- a/third_party/terraform/website/docs/r/runtimeconfig_variable.html.markdown +++ b/third_party/terraform/website/docs/r/runtimeconfig_variable.html.markdown @@ -1,5 +1,5 @@ --- -subcategory: "Cloud Runtime Configuration" +subcategory: "Runtime Configurator" layout: "google" page_title: "Google: google_runtimeconfig_variable" sidebar_current: "docs-google-runtimeconfig-variable" @@ -74,6 +74,8 @@ is specified, it must be base64 encoded and less than 4096 bytes in length. In addition to the arguments listed above, the following computed attributes are exported: +* `id` - an identifier for the resource with format `projects/{{project}}/configs/{{config}}/variables/{{name}}` + * `update_time` - (Computed) The timestamp in RFC3339 UTC "Zulu" format, accurate to nanoseconds, representing when the variable was last updated. Example: "2016-10-09T12:33:37.578138407Z". 
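Assuming the `id` format above also serves as the import address (a sketch; the project, config, and variable names are hypothetical):

```
$ terraform import google_runtimeconfig_variable.my_variable projects/my-project/configs/my-config/variables/my-variable
```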
diff --git a/third_party/terraform/website/docs/r/service_networking_connection.html.markdown b/third_party/terraform/website/docs/r/service_networking_connection.html.markdown index e201834c9315..bee2d2f3cfbf 100644 --- a/third_party/terraform/website/docs/r/service_networking_connection.html.markdown +++ b/third_party/terraform/website/docs/r/service_networking_connection.html.markdown @@ -26,11 +26,11 @@ resource "google_compute_global_address" "private_ip_alloc" { purpose = "VPC_PEERING" address_type = "INTERNAL" prefix_length = 16 - network = google_compute_network.peering_network.self_link + network = google_compute_network.peering_network.id } resource "google_service_networking_connection" "foobar" { - network = google_compute_network.peering_network.self_link + network = google_compute_network.peering_network.id service = "servicenetworking.googleapis.com" reserved_peering_ranges = [google_compute_global_address.private_ip_alloc.name] } diff --git a/third_party/terraform/website/docs/r/spanner_database_iam.html.markdown b/third_party/terraform/website/docs/r/spanner_database_iam.html.markdown index a910b23838e6..efc97254806a 100644 --- a/third_party/terraform/website/docs/r/spanner_database_iam.html.markdown +++ b/third_party/terraform/website/docs/r/spanner_database_iam.html.markdown @@ -126,6 +126,6 @@ IAM policy imports use the identifier of the resource in question, e.g. $ terraform import google_spanner_database_iam_policy.database project-name/instance-name/database-name ``` --> **Custom Roles**: If you're importing a IAM resource with a custom role, make sure to use the +-> **Custom Roles:** If you're importing an IAM resource with a custom role, make sure to use the full name of the custom role, e.g. `[projects/my-project|organizations/my-org]/roles/my-custom-role`. diff --git a/third_party/terraform/website/docs/r/sql_database_instance.html.markdown b/third_party/terraform/website/docs/r/sql_database_instance.html.markdown index 726c9cd620fe..d887e5108477 100644 --- a/third_party/terraform/website/docs/r/sql_database_instance.html.markdown +++ b/third_party/terraform/website/docs/r/sql_database_instance.html.markdown @@ -122,7 +122,7 @@ resource "google_sql_database_instance" "postgres" { ``` ### Private IP Instance -~> **NOTE**: For private IP instance setup, note that the `google_sql_database_instance` does not actually interpolate values from `google_service_networking_connection`. You must explicitly add a `depends_on`reference as shown below. +~> **NOTE:** For private IP instance setup, note that the `google_sql_database_instance` does not actually interpolate values from `google_service_networking_connection`. You must explicitly add a `depends_on` reference as shown below.
```hcl resource "google_compute_network" "private_network" { @@ -138,13 +138,13 @@ resource "google_compute_global_address" "private_ip_address" { purpose = "VPC_PEERING" address_type = "INTERNAL" prefix_length = 16 - network = google_compute_network.private_network.self_link + network = google_compute_network.private_network.id } resource "google_service_networking_connection" "private_vpc_connection" { provider = google-beta - network = google_compute_network.private_network.self_link + network = google_compute_network.private_network.id service = "servicenetworking.googleapis.com" reserved_peering_ranges = [google_compute_global_address.private_ip_address.name] } @@ -165,7 +165,7 @@ resource "google_sql_database_instance" "instance" { tier = "db-f1-micro" ip_configuration { ipv4_enabled = false - private_network = google_compute_network.private_network.self_link + private_network = google_compute_network.private_network.id } } } @@ -194,7 +194,7 @@ The following arguments are supported: * `database_version` - (Optional, Default: `MYSQL_5_6`) The MySQL, PostgreSQL or SQL Server (beta) version to use. Supported values include `MYSQL_5_6`, -`MYSQL_5_7`, `POSTGRES_9_6`,`POSTGRES_11`, `SQLSERVER_2017_STANDARD`, +`MYSQL_5_7`, `POSTGRES_9_6`, `POSTGRES_10`, `POSTGRES_11`, `POSTGRES_12`, `SQLSERVER_2017_STANDARD`, `SQLSERVER_2017_ENTERPRISE`, `SQLSERVER_2017_EXPRESS`, `SQLSERVER_2017_WEB`. [Database Version Policies](https://cloud.google.com/sql/docs/sqlserver/db-versions) includes an up-to-date reference of supported versions. @@ -214,7 +214,7 @@ includes an up-to-date reference of supported versions. * `replica_configuration` - (Optional) The configuration for replication. The configuration is detailed below. -* `root_password` - (Optional, [Beta](https://terraform.io/docs/providers/google/guides/provider_versions.html)) Initial root password. Required for MS SQL Server, ignored by MySQL and PostgreSQL. +* `root_password` - (Optional) Initial root password. Required for MS SQL Server, ignored by MySQL and PostgreSQL. * `encryption_key_name` - (Optional, [Beta](https://terraform.io/docs/providers/google/guides/provider_versions.html)) The full path to the encryption key used for the CMEK disk encryption. Setting @@ -240,8 +240,10 @@ The required `settings` block supports: for information on how to upgrade to Second Generation instances. A list of Google App Engine (GAE) project names that are allowed to access this instance. -* `availability_type` - (Optional) This specifies whether a PostgreSQL instance - should be set up for high availability (`REGIONAL`) or single zone (`ZONAL`). +* `availability_type` - (Optional) The availability type of the Cloud SQL +instance, high availability (`REGIONAL`) or single zone (`ZONAL`). For MySQL +instances, ensure that `settings.backup_configuration.enabled` and +`settings.backup_configuration.binary_log_enabled` are both set to `true`. * `crash_safe_replication` - (Optional, Deprecated) This property is only applicable to First Generation instances.
First Generation instances are now deprecated; see [here](https://cloud.google.com/sql/docs/mysql/upgrade-2nd-gen) diff --git a/third_party/terraform/website/docs/r/sql_ssl_cert.html.markdown b/third_party/terraform/website/docs/r/sql_ssl_cert.html.markdown index cef243cc41d5..45277388f1fa 100644 --- a/third_party/terraform/website/docs/r/sql_ssl_cert.html.markdown +++ b/third_party/terraform/website/docs/r/sql_ssl_cert.html.markdown @@ -66,6 +66,14 @@ exported: * `expiration_time` - The time when the certificate expires in RFC 3339 format, for example 2012-11-15T16:19:00.094Z. +## Timeouts + +This resource provides the following +[Timeouts](/docs/configuration/resources.html#timeouts) configuration options: + +- `create` - Default is 10 minutes. +- `delete` - Default is 10 minutes. + ## Import Since the contents of the certificate cannot be accessed after its creation, this resource cannot be imported. diff --git a/third_party/terraform/website/docs/r/sql_user.html.markdown b/third_party/terraform/website/docs/r/sql_user.html.markdown index 1ce5715ba28b..2aa161fada61 100644 --- a/third_party/terraform/website/docs/r/sql_user.html.markdown +++ b/third_party/terraform/website/docs/r/sql_user.html.markdown @@ -66,6 +66,15 @@ The following arguments are supported: Only the arguments listed above are exposed as attributes. +## Timeouts + +This resource provides the following +[Timeouts](/docs/configuration/resources.html#timeouts) configuration options: + +- `create` - Default is 10 minutes. +- `update` - Default is 10 minutes. +- `delete` - Default is 10 minutes. + ## Import SQL users for MySQL databases can be imported using the `project`, `instance`, `host` and `name`, e.g. diff --git a/third_party/terraform/website/docs/r/storage_bucket.html.markdown b/third_party/terraform/website/docs/r/storage_bucket.html.markdown index 3be26ea38970..e06763162ce9 100644 --- a/third_party/terraform/website/docs/r/storage_bucket.html.markdown +++ b/third_party/terraform/website/docs/r/storage_bucket.html.markdown @@ -153,7 +153,7 @@ The `retention_policy` block supports: * `is_locked` - (Optional) If set to `true`, the bucket will be [locked](https://cloud.google.com/storage/docs/using-bucket-lock#lock-bucket) and permanently restrict edits to the bucket's retention policy. Caution: Locking a bucket is an irreversible action. -* `retention_period` - (Optional) The period of time, in seconds, that objects in the bucket must be retained and cannot be deleted, overwritten, or archived. The value must be less than 3,155,760,000 seconds. +* `retention_period` - (Optional) The period of time, in seconds, that objects in the bucket must be retained and cannot be deleted, overwritten, or archived. The value must be less than 2,147,483,647 seconds.
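For scale, a 90-day policy sits far below that cap; a sketch, with an illustrative bucket name and location:

```hcl
resource "google_storage_bucket" "retained" {
  name     = "example-retained-logs"
  location = "US"

  retention_policy {
    # 90 days expressed in seconds; any value must stay below 2,147,483,647.
    retention_period = 7776000
  }
}
```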
The `logging` block supports: diff --git a/third_party/terraform/website/docs/r/storage_transfer_job.html.markdown b/third_party/terraform/website/docs/r/storage_transfer_job.html.markdown index 8e8c69bcafc8..b1015fe1f5b8 100644 --- a/third_party/terraform/website/docs/r/storage_transfer_job.html.markdown +++ b/third_party/terraform/website/docs/r/storage_transfer_job.html.markdown @@ -1,5 +1,5 @@ --- -subcategory: "Cloud Storage" +subcategory: "Storage Transfer Service" layout: "google" page_title: "Google: google_storage_transfer_job" sidebar_current: "docs-google-storage-transfer-job-x" diff --git a/third_party/terraform/website/docs/r/usage_export_bucket.html.markdown b/third_party/terraform/website/docs/r/usage_export_bucket.html.markdown index 6a7935c8fc14..3ea6d2ca2d6d 100644 --- a/third_party/terraform/website/docs/r/usage_export_bucket.html.markdown +++ b/third_party/terraform/website/docs/r/usage_export_bucket.html.markdown @@ -39,6 +39,14 @@ resource "google_project_usage_export_bucket" "usage_export" { * `project`: (Optional) The project to set the export bucket on. If it is not provided, the provider project is used. +## Timeouts + +This resource provides the following +[Timeouts](/docs/configuration/resources.html#timeouts) configuration options: + +- `create` - Default is 4 minutes. +- `delete` - Default is 4 minutes. + ## Import A project's Usage Export Bucket can be imported using this format: diff --git a/third_party/terraform/website/google.erb.tmpl b/third_party/terraform/website/google.erb.tmpl new file mode 100644 index 000000000000..78885f4771c0 --- /dev/null +++ b/third_party/terraform/website/google.erb.tmpl @@ -0,0 +1,79 @@ +<% wrap_layout :inner do %> + <% content_for :sidebar do %> + + <% end %> + +<%= yield %> + <% end %>