
[private_cluster] Kubernetes provider resolving to localhost as GKE endpoint #1675

Closed · navilg opened this issue Jun 16, 2023 · 2 comments
Labels: bug (Something isn't working), Stale
navilg commented Jun 16, 2023

TL;DR

We have a GKE private cluster created with the private-cluster module and ip_masq_agent enabled. We have now added another node pool block in Terraform to create an additional node pool for the cluster.

But when we run terraform plan/apply, it fails with the error below:

Error: Get "http://localhost/api/v1/namespaces/kube-system/configmaps/ip-masq-agent": dial tcp 127.0.0.1:80: connect: connection refused

This started occurring two days ago; everything was working fine until then.

We have the kubernetes provider block configured as per the documentation:

data "google_client_config" "default" {}

provider "kubernetes" {
  host                   = "https://${module.kubernetes-engine_private-cluster.endpoint}"
  token                  = data.google_client_config.default.access_token
  cluster_ca_certificate = base64decode(module.kubernetes-engine_private-cluster.ca_certificate)
}

Expected behavior

Terraform should resolve ${module.kubernetes-engine_private-cluster.endpoint} in the kubernetes provider block to the cluster's actual endpoint IP, not localhost, during refresh when we run terraform plan/apply.

Observed behavior

It resolves ${module.kubernetes-engine_private-cluster.endpoint} to localhost, which is the kubernetes provider's default when the host value is unknown at plan time.
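One pattern that sidesteps this class of failure is to feed the provider from a data source instead of module outputs, so the connection details remain known even while the module itself plans changes. A sketch, not from this thread: data.google_container_cluster is the standard google provider data source, but wiring the provider this way is an assumption, not the reporter's setup.

data "google_container_cluster" "gke" {
  # Assumes the cluster already exists when the data source is read.
  name     = var.cluster_name
  location = var.region
  project  = var.project
}

provider "kubernetes" {
  # These values come from the data source, so they do not become
  # "unknown" (and fall back to localhost) while module.gke plans changes.
  host                   = "https://${data.google_container_cluster.gke.endpoint}"
  token                  = data.google_client_config.default.access_token
  cluster_ca_certificate = base64decode(
    data.google_container_cluster.gke.master_auth[0].cluster_ca_certificate
  )
}

The more common immediate workaround is a two-phase run: terraform apply -target=module.gke first, then a full apply, so the cluster exists before the kubernetes provider has to contact it.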

Terraform Configuration

data "google_client_config" "default" {}

provider "kubernetes" {
  host                   = "https://${module.gke.endpoint}"
  token                  = data.google_client_config.default.access_token
  cluster_ca_certificate = base64decode(module.gke.ca_certificate)
}

module "gke" {
  source = "terraform-google-modules/kubernetes-engine/google//modules/private-cluster"

  # Start editing below this line
  project_id                    = var.project
  name                          = var.cluster_name
  region                        = var.region
  zones                         = var.zones
  # ...And other cluster details

  node_pools = [
    {
      name               = var.np_1_name
      version            = var.node_version
      machine_type       = var.node_machine_type
      initial_node_count = 1
      node_count         = 1
      # ...Other node pool details
    },
    {
      name               = var.np_2_name
      version            = var.node_version
      machine_type       = var.node_machine_type
      initial_node_count = 1
      node_count         = 1
      # ...Other node pool details
    },
  ]

  # Opening line reconstructed: per the module's README example, this map
  # is most likely node_pools_tags.
  node_pools_tags = {
    default-node-pool = [
      "default-node-pool",
    ]
  }
}

Terraform Version

Tested with Terraform 1.1.6 and 1.3.6.

Additional information

No response

navilg added the bug label on Jun 16, 2023
github-actions bot commented Aug 15, 2023

This issue is stale because it has been open 60 days with no activity. Remove the stale label or comment, or this will be closed in 7 days.

github-actions bot added the Stale label on Aug 15, 2023
navilg (author) commented Aug 19, 2023

It's resolved. It was happening because some Terraform module changes were re-creating (destroying and creating) the cluster, so at the plan stage Terraform could not resolve the cluster's endpoint. I had originally created the cluster with my own managed module, but later switched the configuration to use terraform-google-modules/kubernetes-engine/google//modules/private-cluster.
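For anyone hitting this after a similar module swap: since Terraform 1.1, a moved block can sometimes record the rename so the cluster is not destroyed and recreated. A sketch, where module.my_old_gke is a hypothetical name for the previous module call; this only helps when the resource addresses inside both modules line up, otherwise terraform state mv on the individual resources is needed.

# Sketch (Terraform >= 1.1). "module.my_old_gke" is hypothetical; use the
# address of your previous module call.
moved {
  from = module.my_old_gke
  to   = module.gke
}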
