I have issues, described below.
I'm submitting a...
- [x] bug report
- [ ] feature request
- [ ] support request - read the FAQ first!
- [ ] kudos, thank you, warm fuzzy
What is the current behavior?
Every time I run terraform apply, it tries to make the following changes:
  # module.eks-cluster.kubernetes_config_map.aws_auth[0] will be updated in-place
  ~ resource "kubernetes_config_map" "aws_auth" {
        binary_data = {}
      ~ data        = {
            "mapAccounts" = jsonencode([])
          ~ "mapRoles"    = <<~EOT
              - - groups:
              -   - system:bootstrappers
              -   - system:nodes
              -   rolearn: arn:aws:iam::1234567890:role/mytests-managed-node-groups
              -   username: system:node:{{EC2PrivateDNSName}}
              +
              +
            EOT
            "mapUsers"    = jsonencode([])
        }
        id          = "kube-system/aws-auth"

        metadata {
            annotations      = {}
            generation       = 0
            labels           = {}
            name             = "aws-auth"
            namespace        = "kube-system"
            resource_version = "35037"
            self_link        = "/api/v1/namespaces/kube-system/configmaps/aws-auth"
            uid              = "781f4175-1b42-11ea-b157-02d81c51517c"
        }
    }
After that, the node becomes NotReady (kubectl get nodes):
NAME                                         STATUS     ROLES    AGE   VERSION
ip-10-10-10-180.eu-west-1.compute.internal   NotReady   <none>   55m   v1.14.7-eks-1861c5
Looking into the managed node's logs at /var/log/messages, I see:
Dec 10 17:32:49 ip-10-10-10-180 kubelet: E1210 17:32:49.651699 3821 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: Unauthorized
Dec 10 17:32:50 ip-10-10-10-180 kubelet: E1210 17:32:50.130065 3821 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: Unauthorized
Dec 10 17:32:50 ip-10-10-10-180 kubelet: E1210 17:32:50.506744 3821 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Unauthorized
Dec 10 17:32:50 ip-10-10-10-180 kubelet: E1210 17:32:50.780769 3821 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.CSIDriver: Unauthorized
I fix it by re-applying the aws-auth config map with the following:
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::1234567890:role/mytests-managed-node-groups
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
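For reference, pushing that manifest back is just a plain kubectl apply; the file name below is only an assumption about where the manifest is saved locally:

# assumes the manifest above is saved locally as aws-auth.yaml
kubectl apply -f aws-auth.yaml
# the node then goes back to Ready shortly afterwards
kubectl get nodes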
If this is a bug, how to reproduce? Please include a code sample if relevant.
module "eks-cluster" {
source = "github.com/terraform-aws-modules/terraform-aws-eks"
cluster_name = "mytests"
subnets = data.terraform_remote_state.networking.outputs.private_subnets
vpc_id = data.terraform_remote_state.networking.outputs.vpc_id
node_groups = [
{
name = "mytest_node_groups"
instance_type = "t3.large"
asg_max_size = 2
autoscaling_enabled = true
public_ip = false
key_name = "XXXXXX"
tags = [{
key = "foo"
value = "bar"
propagate_at_launch = true
}]
worker_additional_security_group_ids = [
aws_security_group.EFS_client
]
}
]
tags = {
environment = "test"
}
}
resource "aws_security_group" "EFS_client" {
name = "EFS_client"
description = "Allow EFS outbound traffic"
vpc_id = data.terraform_remote_state.networking.outputs.vpc_id
}
What's the expected behavior?
Applying Terraform should push a working aws-auth configuration so that the workers can keep reaching the master (control plane) layer.
Are you able to fix this problem and submit a PR? Link here if you have already.
No
Environment details
- Affected module version: current master branch
- OS: macOS, latest version
- Terraform version: Terraform v0.12.16
Any other relevant info
It may have been caused by using the Terraform Helm 3 provider to deploy the efs-provider module. I'm not sure; still, I keep falling into a loop where applying this module breaks the connection between the master and the workers.
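In case it helps with reproducing, here is a minimal sketch of how the kubernetes provider can be wired to the cluster with exec-based authentication; the data source name, the reference to module.eks-cluster.cluster_id and the aws eks get-token approach are illustrative assumptions, not necessarily the exact configuration in play here:

# Sketch only: the names below are assumptions, not something prescribed by the module.
data "aws_eks_cluster" "cluster" {
  name = module.eks-cluster.cluster_id
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)
  load_config_file       = false

  # Fetch a fresh token on every run instead of relying on anything cached.
  exec {
    api_version = "client.authentication.k8s.io/v1alpha1"
    command     = "aws"
    args        = ["eks", "get-token", "--cluster-name", data.aws_eks_cluster.cluster.name]
  }
}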