What happened?
Following up on #2465, I spent a significant amount of time trying to figure out why DNS was not working inside a vcluster deployed on EKS, and eventually discovered that a custom port (1053) is used for DNS traffic.
The official AWS deployment guide doesn't document this and uses eksctl, which creates allow-all AWS security groups, so everything works out of the box.
Anyone who uses other tools (such as Terraform) to create EKS clusters, however, will have a hard time figuring out why vcluster DNS doesn't work by default unless the workloads are scheduled on the same node as the CoreDNS pod.
Users need to know that a custom DNS port is required so they can configure their AWS security groups accordingly (see the sketch below).
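For example, a minimal sketch of the missing rules using the AWS CLI. The security group ID is a placeholder, and allowing both TCP and UDP on 1053 is my assumption; adjust to your setup:
# Allow DNS traffic on vcluster's custom port 1053 between nodes sharing a security group (placeholder ID)
$ aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 1053 --source-group sg-0123456789abcdef0
$ aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol udp --port 1053 --source-group sg-0123456789abcdef0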
What did you expect to happen?
Usage of the custom DNS port 1053 should be documented.
How can we reproduce it (as minimally and precisely as possible)?
Deploy an EKS cluster with Terraform, schedule a workload on a node different from the one running the CoreDNS pod, and try to resolve DNS names (a sketch follows).
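Roughly like this, assuming the usual k8s-app=kube-dns label for CoreDNS and substituting a real node name for the placeholder other-node:
# Find the node running the CoreDNS pod
$ kubectl get pod -n kube-system -l k8s-app=kube-dns -o wide
# Pin a test pod to a different node and attempt a lookup
$ kubectl run dns-test --image=busybox:1.36 --restart=Never --overrides='{"spec":{"nodeName":"other-node"}}' -- nslookup kubernetes.default
# The lookup times out while port 1053 is blocked between nodes
$ kubectl logs dns-test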
Anything else we need to know?
No response
Host cluster Kubernetes version
$ kubectl version
# paste output here
vcluster version
$ vcluster --version
# paste output here
VCluster Config
# My vcluster.yaml / values.yaml here