Description
When creating an EKS cluster using Terraform, we get the following error:
Error: Post "http://localhost/api/v1/namespaces/kube-system/configmaps": dial tcp 127.0.0.1:80: connect: connection refused
on .terraform/modules/deployment.eks/aws_auth.tf line 65, in resource "kubernetes_config_map" "aws_auth":
65: resource "kubernetes_config_map" "aws_auth" {
To fix this, we have to manually run:
aws eks update-kubeconfig --name ${var.context.app_name} --region ${var.context.region}
and then:
terraform apply -auto-approve
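The root cause seems to be that the module manages the aws-auth config map through the kubernetes provider, and when that provider is not configured it falls back to localhost:80, hence the connection refused. The module documentation suggests wiring the kubernetes provider to the cluster the module creates; a minimal sketch of that wiring (the data source names "cluster" are our own, and load_config_file only applies to the 1.x kubernetes provider):

# Look up the endpoint and an auth token for the cluster created by the eks module.
data "aws_eks_cluster" "cluster" {
  name = module.eks.cluster_id
}

data "aws_eks_cluster_auth" "cluster" {
  name = module.eks.cluster_id
}

# Point the kubernetes provider at the new cluster instead of the default localhost.
provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.cluster.token
  load_config_file       = false # only valid for the 1.x provider; drop for 2.x
}

With this in place, the manual update-kubeconfig step should not be needed.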
Versions
- Terraform: 0.12.21
- Providers:
  - aws: 3.22.0
  - terraform-aws-modules/eks/aws: 14.0.0
  - terraform-aws-modules/vpc/aws: 2.61.0
- AWS CLI: 2.0.30
- Helm: 3.3.4
- Kubectl: 1.19.0
Reproduction
- Use the Terraform code in the Code Snippet to Reproduce section
- Run:
terraform init && terraform apply -auto-approve
- You will get the error shown in the description.
To fix, manually run:
aws eks update-kubeconfig --name ${var.context.app_name} --region ${var.context.region}
terraform apply -auto-approve
Code Snippet to Reproduce
terraform {
  required_version = ">= 0.12.21"
}

provider "aws" {
  version = "~> 3.22.0"
  region  = "${var.context.region}"
}

data "aws_availability_zones" "available" {}

resource "random_string" "suffix" {
  length  = 8
  special = false
}

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "2.61.0"

  name = "${var.context.app_name}"
  cidr = "10.0.0.0/16"

  azs             = data.aws_availability_zones.available.names
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
  public_subnets  = ["10.0.4.0/24", "10.0.5.0/24", "10.0.6.0/24"]

  enable_nat_gateway   = true
  single_nat_gateway   = true
  enable_dns_hostnames = true

  tags = {
    "kubernetes.io/cluster/${var.context.app_name}" = "shared"
  }

  public_subnet_tags = {
    "kubernetes.io/cluster/${var.context.app_name}" = "shared"
    "kubernetes.io/role/elb"                        = "1"
  }

  private_subnet_tags = {
    "kubernetes.io/cluster/${var.context.app_name}" = "shared"
    "kubernetes.io/role/internal-elb"               = "1"
  }
}

resource "aws_security_group" "all_worker_mgmt" {
  name_prefix = "${var.context.app_name}-all_worker_management"
  vpc_id      = "${module.vpc.vpc_id}"

  ingress {
    from_port = 22
    to_port   = 22
    protocol  = "tcp"

    cidr_blocks = [
      "10.0.0.0/8",
      "172.16.0.0/12",
      "192.168.0.0/16",
    ]
  }
}

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "14.0.0"

  cluster_name           = "${var.context.app_name}"
  cluster_version        = "1.19"
  subnets                = "${module.vpc.private_subnets}"
  vpc_id                 = "${module.vpc.vpc_id}"
  cluster_create_timeout = "30m"

  worker_groups = [
    {
      instance_type        = "${var.context.kubernetes.aws.machine_type}"
      asg_desired_capacity = "${var.context.replica_count}"
      asg_min_size         = "${var.context.replica_count}"
      asg_max_size         = "${var.context.replica_count}"
      root_volume_type     = "gp2"
    }
  ]

  worker_additional_security_group_ids = ["${aws_security_group.all_worker_mgmt.id}"]

  map_users = var.context.iam.aws.map_users
  map_roles = var.context.iam.aws.map_roles
}
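Note that the snippet above does not configure a kubernetes provider at all, which is what leaves the module's kubernetes_config_map.aws_auth resource talking to localhost. Besides the token-based wiring sketched in the description, an exec-based configuration is a possible alternative. This is only a sketch: it reuses the aws_eks_cluster data source from the earlier sketch and assumes a kubernetes provider version that supports the exec block plus the AWS CLI on the PATH.

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)

  # Fetch a short-lived token via the AWS CLI at plan/apply time instead of a static token.
  exec {
    api_version = "client.authentication.k8s.io/v1alpha1" # newer providers/clusters may require v1beta1
    command     = "aws"
    args        = ["eks", "get-token", "--cluster-name", module.eks.cluster_id]
  }
}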
Expected behavior
The cluster gets created successfully
Actual behavior
We get this output:
Error: Post "http://localhost/api/v1/namespaces/kube-system/configmaps": dial tcp 127.0.0.1:80: connect: connection refused
on .terraform/modules/deployment.eks/aws_auth.tf line 65, in resource "kubernetes_config_map" "aws_auth":
65: resource "kubernetes_config_map" "aws_auth" {