

setting node_config on autopilot clusters should not be allowed #8863

Closed

@thecodeassassin

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request.
  • Please do not leave +1 or me too comments, they generate extra noise for issue followers and do not help prioritize the request.
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment.
  • If an issue is assigned to the modular-magician user, it is either in the process of being autogenerated, or is planned to be autogenerated soon. If an issue is assigned to a user, that user is claiming responsibility for the issue. If an issue is assigned to hashibot, a community member has claimed the issue already.

### Terraform Version

v0.14.7

### Affected Resource(s)

  • google_container_cluster

### Terraform Configuration Files

```hcl
resource "google_container_cluster" "autopilot_cluster" {
  name               = "streaming-${var.cluster.name}"
  location           = var.cluster.region
  initial_node_count = 1
  min_master_version = var.gke_version
  node_version       = var.gke_version

  enable_autopilot = true

  # Network to which the cluster is connected
  network    = var.vpc_self_link
  subnetwork = google_compute_subnetwork.subnetwork-ip-alias.name

  node_config {
    service_account = var.service_account_email
    oauth_scopes = [
      "https://www.googleapis.com/auth/cloud-platform"
    ]
    labels = {
      cost_center = "mcls"
      region      = var.cluster.region
    }
    tags = ["streaming", "transcoding", "mcls"]
  }

  ip_allocation_policy {
    cluster_secondary_range_name  = local.subnetwork_pods_name
    services_secondary_range_name = local.subnetwork_services_name
  }

  timeouts {
    create = "30m"
    update = "40m"
  }
}
```
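For comparison, dropping the `node_config` block entirely avoids the perma-diff, since Autopilot manages the node pools itself and ignores per-node settings. A minimal sketch, reusing the same variables as above (untested, for illustration only):

```hcl
resource "google_container_cluster" "autopilot_cluster" {
  name             = "streaming-${var.cluster.name}"
  location         = var.cluster.region
  enable_autopilot = true

  network    = var.vpc_self_link
  subnetwork = google_compute_subnetwork.subnetwork-ip-alias.name

  ip_allocation_policy {
    cluster_secondary_range_name  = local.subnetwork_pods_name
    services_secondary_range_name = local.subnetwork_services_name
  }
}
```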

### Debug Output

```
  ~ node_config {
      ~ disk_size_gb      = 100 -> (known after apply)
      ~ disk_type         = "pd-standard" -> (known after apply)
      ~ guest_accelerator = [] -> (known after apply)
      ~ image_type        = "COS_CONTAINERD" -> (known after apply)
      ~ labels            = {} -> (known after apply)
      ~ local_ssd_count   = 0 -> (known after apply)
      ~ machine_type      = "e2-medium" -> (known after apply)
      ~ metadata          = {
          - "disable-legacy-endpoints" = "true"
        } -> (known after apply)
      + min_cpu_platform  = (known after apply)
      ~ oauth_scopes      = [
          - "https://www.googleapis.com/auth/devstorage.read_only",
          - "https://www.googleapis.com/auth/logging.write",
          - "https://www.googleapis.com/auth/monitoring",
          - "https://www.googleapis.com/auth/service.management.readonly",
          - "https://www.googleapis.com/auth/servicecontrol",
          - "https://www.googleapis.com/auth/trace.append",
        ] -> (known after apply)
      ~ preemptible       = false -> (known after apply)
      ~ service_account   = "default" -> (known after apply)
      ~ tags              = [] -> (known after apply)
      ~ taint             = [] -> (known after apply)

      ~ shielded_instance_config {
          ~ enable_integrity_monitoring = true -> (known after apply)
          ~ enable_secure_boot          = true -> (known after apply)
        }

      ~ workload_metadata_config {
          ~ node_metadata = "GKE_METADATA_SERVER" -> (known after apply)
        }
    }
```

### Expected Behavior

I'm not sure what `node_config` is supposed to do for Autopilot clusters.

It should probably not be possible to set this block at all, since as far as I can see it has no purpose other than forcing your clusters to be recreated on every apply.

### Actual Behavior

Terraform keeps recreating the clusters on every apply.

### Steps to Reproduce

1. `terraform apply`

@ghost ghost added the bug label Apr 7, 2021
@venkykuberan venkykuberan self-assigned this Apr 7, 2021
@venkykuberan
Contributor

As far as I can see, node config is pre-configured in Autopilot mode per the docs: https://cloud.google.com/kubernetes-engine/docs/concepts/autopilot-overview. It looks to me like the provider is in line with the spec. Please let us know if you see otherwise.

@thecodeassassin
Author

@venkykuberan the problem is that when you do specify it, all of your settings are overridden and your clusters are recreated.

@codergolem

codergolem commented Apr 8, 2021

I could successfully create a cluster with Autopilot and `node_config` included, so that doesn't seem to be the problem. What I am struggling with is how to pass a service account different from the default one, plus OAuth scopes for the nodes. I used the `node_config` block for that, but it is ignored. `cluster_autoscaling` is another way to pass this information, but that setting conflicts with Autopilot.
And I know it is possible to pass the service account, because it is a parameter in:

```
    gcloud container clusters create-auto - create an Autopilot cluster for
        running containers

SYNOPSIS
    gcloud container clusters create-auto NAME [--async]
        [--cluster-ipv4-cidr=CLUSTER_IPV4_CIDR]
        [--cluster-secondary-range-name=NAME]
        [--cluster-version=CLUSTER_VERSION]
        [--create-subnetwork=[KEY=VALUE,...]] [--network=NETWORK]
        [--release-channel=CHANNEL] [--services-ipv4-cidr=CIDR]
        [--services-secondary-range-name=NAME] [--subnetwork=SUBNETWORK]
        [--enable-master-authorized-networks
          --master-authorized-networks=NETWORK,[NETWORK,...]]
        [--enable-private-endpoint
          --enable-private-nodes --master-ipv4-cidr=MASTER_IPV4_CIDR]
        [--region=REGION | --zone=ZONE, -z ZONE]
        [--scopes=[SCOPE,...];
          default="gke-default" --service-account=SERVICE_ACCOUNT]
        [GCLOUD_WIDE_FLAG ...]
```
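For what it's worth, the `--service-account` and `--scopes` flags above look like they would map onto the `cluster_autoscaling.auto_provisioning_defaults` block in Terraform. Whether the provider accepts that block alongside `enable_autopilot` is exactly the conflict mentioned earlier in this thread, so treat this as an untested sketch with assumed field names rather than a confirmed workaround:

```hcl
resource "google_container_cluster" "autopilot_cluster" {
  name             = "example-autopilot"
  location         = "europe-west1"
  enable_autopilot = true

  cluster_autoscaling {
    # Defaults applied to nodes that GKE provisions automatically.
    auto_provisioning_defaults {
      service_account = var.service_account_email
      oauth_scopes    = ["https://www.googleapis.com/auth/cloud-platform"]
    }
  }
}
```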

@slevenick
Collaborator

Huh, I'm not too familiar with the autopilot mode or GKE itself. From what I'm looking at on the docs page: https://cloud.google.com/kubernetes-engine/docs/concepts/autopilot-overview it looks like authentication is handled through workload identity. I'm not sure how we would set the scopes like in that gcloud command, but I'm pretty sure that setting it on the autoscaling block won't be what you want.

Terraform is using the REST API: https://cloud.google.com/kubernetes-engine/docs/reference/rest/v1/projects.locations.clusters so there would need to be a way to set that field through the API for us to be able to enable it. Can you track down how that service account is being used in the API calls through gcloud?

@ghost

ghost commented May 15, 2021

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!

@ghost ghost locked as resolved and limited conversation to collaborators May 15, 2021