Nodes with custom domain-name fail to join EKS cluster on Kubernetes 1.26 #3028
Comments
Can you provide the
Hi @FernandoMiguel, it seems like Bottlerocket is getting the instance hostname from IMDS. On the host, can you query IMDS for me so we can check what it returns for the hostname?
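For example, something along these lines from the host should show what IMDS reports (a rough sketch assuming IMDSv2 token-based access; both the `hostname` and `local-hostname` fields are worth checking):

```bash
# Grab an IMDSv2 session token, then ask IMDS for the hostname fields.
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 300")
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  http://169.254.169.254/latest/meta-data/hostname; echo
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  http://169.254.169.254/latest/meta-data/local-hostname; echo
```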
Hi @FernandoMiguel, we're still looking into the issue. There is a discrepancy specifically in us-east-1. As a temporary workaround, you can try setting the kubelet hostname override yourself from a bootstrap container. The bash script in the bootstrap container could do something like this:
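A rough sketch of what that script could look like, assuming the setting in question is `settings.kubernetes.hostname-override` and that the desired hostname is the private DNS name reported by IMDS:

```bash
#!/usr/bin/env bash
# Sketch of a bootstrap-container script: read the private DNS name from IMDS
# and use it as the kubelet hostname override. The setting name and metadata
# path are assumptions; adjust to whatever hostname your cluster expects.
set -euo pipefail

# Get an IMDSv2 session token.
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 300")

# Read the instance's private DNS name from instance metadata.
PRIVATE_DNS=$(curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  http://169.254.169.254/latest/meta-data/local-hostname)

# Tell kubelet to register with that name via the Bottlerocket API.
apiclient set settings.kubernetes.hostname-override="${PRIVATE_DNS}"
```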
Please let us know if you have any questions about setting this up.
We aren't in a super hurry to upgrade. If it helps, I can spin up a cluster in other regions besides us-east-1.
Yeah, try that out if you'd like. If our assessment is correct, the nodes should be able to join the cluster as long as it's not in us-east-1.
Hmm, I just created a new EKS cluster in us-east-1 and the nodes were able to join. So there must be something different that's causing IMDS to return the incorrect hostname in your case. In any case, #3033 should fix the issue you're seeing @FernandoMiguel.
@etungsten what else can I provide to help you understand what is causing the issue so you can reproduce it?
Hi @FernandoMiguel, it would be good to verify your cluster VPC's DNS and DHCP options.
It seems like the domain-name in your VPC's DHCP option set has been customized. Do you know if terraform had somehow overridden that through https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/vpc_dhcp_options#domain_name? Another workaround you can use for now (before #3033 gets released in a new version) is to change the domain-name in your DHCP options back to the EC2 default for your region (ec2.internal in us-east-1).
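For example, a quick way to check the domain-name (a sketch using the AWS CLI; the VPC ID below is a placeholder):

```bash
# Look up the DHCP option set attached to the cluster VPC and inspect it.
VPC_ID="vpc-0123456789abcdef0"   # placeholder: your cluster VPC
DHCP_ID=$(aws ec2 describe-vpcs --vpc-ids "$VPC_ID" \
  --query 'Vpcs[0].DhcpOptionsId' --output text)
aws ec2 describe-dhcp-options --dhcp-options-ids "$DHCP_ID" \
  --query 'DhcpOptions[0].DhcpConfigurations'
```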
I'm not familiar with any particular changes done on our side, and this is a relatively new account.
I started to experience the same issue after upgrading the Bottlerocket OS version. I am in a different region.
Update: rolling back to the previous Bottlerocket OS version worked around the issue.
I have this issue as well. I have noticed that on 1.13.2, the node only has a single private IPv4 address, whereas before it always had two. That might be due to the failed cluster registration and no CNI interaction up to that point.
OK, kubelet logs reveal the issue in my region. I just had a very old DHCP option set with a custom domain-name.
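For anyone else digging into this, a rough way to get at kubelet logs on a Bottlerocket node (assuming the admin container is enabled):

```bash
# From a session in the control container (e.g. via SSM):
apiclient exec admin bash   # drop into the admin container
sudo sheltie                # get a root shell on the host
journalctl -u kubelet --no-pager | tail -n 200   # recent kubelet logs
```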
@etungsten once #3033 is merged and a new AMI is available, what steps, if any, will we have to take to make this work without changes to the VPC config?
@FernandoMiguel Once the AMIs are available with #3033, the behavior should be similar to before 1.26, where the hostname provided to kubelet is the one expected by the cluster. This shouldn't require any changes on the VPC side.
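Once nodes come up on an AMI with the fix, a quick sanity check (assuming kubectl access to the cluster) is just:

```bash
# Nodes should appear Ready, registered under the hostname the cluster expects
# (typically the EC2 private DNS name).
kubectl get nodes -o wide
```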
Great news, eagerly waiting for its release.
Just spun up a new test cluster.
Image I'm using:
1.13.3-752a994d
What I expected to happen:
A brand new EKS 1.26 cluster, with the MNG nodes joining the cluster.
What actually happened:
I just created a new test cluster with EKS 1.26.
The ASG for the MNG picked the following AMI:
bottlerocket-aws-k8s-1.26-x86_64-v1.13.3-752a994d
The two nodes failed to join the cluster.
How to reproduce the problem:
These are our subnet settings; the cluster VPC uses a DHCP option set with a custom domain-name (see the sketch below).
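The relevant detail for reproducing is a cluster VPC whose DHCP option set carries a custom domain-name. A hypothetical way to set that up with the AWS CLI (IDs and domain are placeholders):

```bash
# Create a DHCP option set with a custom domain-name...
aws ec2 create-dhcp-options --dhcp-configurations \
  "Key=domain-name,Values=corp.example.internal" \
  "Key=domain-name-servers,Values=AmazonProvidedDNS"

# ...and associate it with the VPC used by the EKS 1.26 cluster.
aws ec2 associate-dhcp-options \
  --dhcp-options-id dopt-0123456789abcdef0 \
  --vpc-id vpc-0123456789abcdef0
```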