setup separate gcs buckets for different sets of terraform resources #1952
k8s-ci-robot merged 1 commit into kubernetes:main
Conversation
hack/migrate-tf-buckets.sh
# one-off script to migrate terraform state files to the appropriate buckets
set -x

function tfstate_cp() {
This function duplicates logic Terraform already has built in. Once you change the bucket in the backend configuration, TF will copy the state to the new destination. An example: https://www.terraform.io/docs/cloud/migrate/index.html#step-6-run-terraform-init-to-migrate-the-workspace.
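The backend-based migration described here can be sketched roughly as follows; the bucket and prefix names are illustrative assumptions, not the actual values used in this repo:

```hcl
# In the module's backend block, point at the new bucket
# (names here are hypothetical):
terraform {
  backend "gcs" {
    bucket = "k8s-infra-tf-example" # was: the old shared state bucket
    prefix = "example-module"       # illustrative state prefix
  }
}

# Running `terraform init` after this change makes Terraform detect the
# backend change and offer to copy the existing state to the new bucket.
```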
I'll remove this script from the PR and update terraform instead, once a few in-flight PRs have landed.
Still using this as a test run to ensure we org admins can view the different buckets. Still having problems copying aws resources over at the moment due to the use of HMAC keys.
Bumped into #2000 while trying this. Went ahead and ran it anyway.
@spiffxp It's possible we may no longer need this resource. GKE Monitoring has changed a lot since then.
Allow groups less privileged than k8s-infra-gcp-org-admins to use
terraform to manage resources over which they have ownership.
Terraform state can potentially include sensitive values.
Since we have terraform set up to store state in GCS, we need to ensure
that visibility and access to state match ownership of (privileges to
modify) the resources it describes.
We're using uniform bucket-level access on our GCS buckets to avoid the
complexity introduced by per-object ACLs. This means if we want different
groups with different privilege levels using terraform to manage different
sets of resources, we need to provision a GCS bucket for each group.
The new bucket schema is "k8s-infra-tf-{folder}[-{suffix}]" where:
- {folder} is the intended GCP folder for GCP projects managed by this
  group, access level should be ~owners of the folder
- {suffix} is a subset of resources contained somewhere underneath the
  folder, access level should be ~editors of those resources
The GCP folders don't actually exist yet, but the plan is:
- public: kubernetes-public (potentially release related projects too)
- prow: prow-build clusters and e2e projects
- aws: if there are gcp projects being used to manage aws resources
- sandbox: temporary projects
The buckets being added are:
- k8s-infra-tf-aws: to manage aws resources
- k8s-infra-tf-prow-clusters: to manage prow-build, prow-build-trusted
- k8s-infra-tf-public-clusters: to manage aaa
- k8s-infra-tf-sandbox-ii: for the ii team to manage things in sandbox
Organization admins are given storage.admin privileges to all buckets
for break-glass purposes.
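The break-glass grant above could be expressed in Terraform roughly like this; the bucket name and the group's email address are illustrative assumptions:

```hcl
# Hypothetical sketch: give the org-admins group storage.admin on a
# state bucket for break-glass access (names are illustrative).
resource "google_storage_bucket_iam_member" "org_admins_break_glass" {
  bucket = "k8s-infra-tf-example"
  role   = "roles/storage.admin"
  member = "group:k8s-infra-gcp-org-admins@kubernetes.io"
}
```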
Terraform modules were migrated by running `terraform init` and
`terraform refresh` for each module.
/approve
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: dims, spiffxp.
Related to refactoring infra/gcp, ref: #516
Allow groups less privileged than k8s-infra-gcp-org-admins to use terraform to manage resources over which they have ownership.
Terraform state can potentially include sensitive values. Since we have terraform set up to store state in GCS, we need to ensure that visibility and access to state match ownership of (privileges to modify) the resources it describes.
We're using uniform bucket-level access on our GCS buckets to avoid the complexity introduced by per-object ACLs. This means if we want different groups with different privilege levels using terraform to manage different sets of resources, we need to provision a GCS bucket for each group.
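As a sketch, a per-group state bucket with uniform bucket-level access might be declared like this in Terraform; the bucket name, project, and location are illustrative assumptions:

```hcl
# Hypothetical example: uniform bucket-level access disables per-object
# ACLs, so access is controlled entirely by bucket-level IAM.
resource "google_storage_bucket" "tfstate" {
  name     = "k8s-infra-tf-example" # illustrative name
  project  = "kubernetes-public"
  location = "US"

  uniform_bucket_level_access = true

  versioning {
    enabled = true # keep state history for recovery
  }
}
```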
The new bucket schema is `k8s-infra-tf-{folder}[-{suffix}]`, where:

- `{folder}` is the intended GCP folder for GCP projects managed by this group; access level should be ~owners of the folder
- `{suffix}` is a subset of resources contained somewhere underneath the folder; access level should be ~editors of those resources
- `k8s-infra-tf-` is 13 chars, leaving 50 before the 63-char max length for a bucket name

So for example I was tempted to create `gs://k8s-infra-tf-public` and move `aaa` cluster state there. But we may want to use terraform to manage a whole bunch of other resources besides clusters in `kubernetes-public`, and we might not want to grant `cluster-admins` the privileges to accidentally delete everything in `kubernetes-public` (e.g. DNS).

The GCP folders don't actually exist yet, but the plan is:

- public: kubernetes-public (potentially release-related projects too)
- prow: prow-build clusters and e2e projects
- aws: if there are gcp projects being used to manage aws resources
- sandbox: temporary projects

The buckets being added are:

- k8s-infra-tf-aws: to manage aws resources
- k8s-infra-tf-prow-clusters: to manage prow-build, prow-build-trusted
- k8s-infra-tf-public-clusters: to manage aaa
- k8s-infra-tf-sandbox-ii: for the ii team to manage things in sandbox

Organization admins are given `storage.admin` privileges to all `k8s-infra-tf-` buckets for break-glass purposes.

Terraform modules were migrated by running `terraform init` and `terraform refresh` for each module.

/cc @ameukam @thockin