TL;DR
We have a GKE private cluster created using the private-cluster module with ip_masq_agent enabled.
We have now added another node pool definition in Terraform to create an additional node pool for the cluster.
But when we run terraform plan/apply, it fails with the error below:
Error: Get "http://localhost/api/v1/namespaces/kube-system/configmaps/ip-masq-agent": dial tcp 127.0.0.1:80: connect: connection refused
This started occurring two days ago; everything was working fine before that.
We have the kubernetes provider block configured as per the documentation, as shown below.
Expected behavior
Terraform should resolve ${module.kubernetes-engine_private-cluster.endpoint} in the kubernetes provider block to the correct endpoint IP instead of localhost during refresh when we run terraform plan/apply.
Observed behavior
It resolves ${module.kubernetes-engine_private-cluster.endpoint} to localhost.
Terraform Configuration
data"google_client_config""default" {}
provider"kubernetes" {
host="https://${module.gke.endpoint}"token=data.google_client_config.default.access_tokencluster_ca_certificate=base64decode(module.gke.ca_certificate)
}
module"gke" {
source="terraform-google-modules/kubernetes-engine/google//modules/private-cluster"# Start editing below this lineproject_id=var.projectname=var.cluster_nameregion=var.regionzones=var.zones# ...And other cluster detailsnode_pools=[
{
name = var.np_1_name
version = var.node_version
machine_type = var.node_machine_type
initial_node_count =1
node_count =1# ...Other node pool details
},
{
name = var.np_2_name
version = var.node_version
machine_type = var.node_machine_type
initial_node_count =1
node_count =1# ...Other node pool details
},
]
  node_pools_tags = {
    default-node-pool = [
      "default-node-pool",
    ]
  }
}
Terraform Version
Tested with terraform 1.1.6 and 1.3.6
Additional information
No response
It's resolved. It was happening because some Terraform module changes were causing the cluster to be re-created (destroy and create), so during the plan stage Terraform was not able to resolve the endpoint of the cluster. I had originally created the cluster with my own module, but later switched the configuration to use "terraform-google-modules/kubernetes-engine/google//modules/private-cluster", which triggered the re-creation.
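For reference, one way to make the kubernetes provider less sensitive to module changes is to read the connection details from a google_container_cluster data source instead of the module outputs. The sketch below is only illustrative: it assumes the cluster already exists and that var.cluster_name, var.region and var.project match it (variable names taken from the configuration above, the data source name is arbitrary), and it will not help while the cluster itself is being destroyed and re-created.

# Same client config data source as in the configuration above
data "google_client_config" "default" {}

# Look up the existing cluster directly rather than via module outputs
data "google_container_cluster" "existing" {
  name     = var.cluster_name
  location = var.region
  project  = var.project
}

provider "kubernetes" {
  host                   = "https://${data.google_container_cluster.existing.endpoint}"
  token                  = data.google_client_config.default.access_token
  cluster_ca_certificate = base64decode(data.google_container_cluster.existing.master_auth[0].cluster_ca_certificate)
}

If the underlying goal is to switch modules without destroying the cluster, terraform state mv can move the existing resources to the new module's addresses so the plan no longer shows a destroy/create; the exact resource addresses depend on both modules, so inspect them with terraform state list first.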