Listen tcp :53: bind: permission denied ERROR!! #125226
There are no SIG labels on this issue. Please add an appropriate label by using one of the SIG-labeling commands. Please see the group list for a listing of the SIGs, working groups, and committees available.
This issue is currently awaiting triage. If a SIG or subproject determines this is a relevant issue, they will accept it by applying the appropriate triage label.
A better place to ask would be on the support channels. Please see: /kind support
@neolit123: Closing this issue.
[root@node1 ~]# kubectl logs -n kube-system coredns-5f98f8d567-5h7b4
(K8s 1.29.5 with kube-flannel)
It's an issue; that's why I raised it here to be addressed.
Hi @ascarl2010, what's your OS version? Can you share it here? Regards,
I have a similar issue with an RKE 1.29 install (1.6.0-rc6, v1.29.5-rancher1-1) and the CoreDNS deployment in the kube-system namespace. The pods are in Error and show the same message: "listen tcp :53: bind: permission denied". I'm wondering if something has changed in the new Kubernetes or container version (I also now have a new seccompProfile set in the coredns deployment securityContext definition...)?
securityContext:
  allowPrivilegeEscalation: true
  capabilities:
    add:
    - NET_BIND_SERVICE
    drop:
    - all
  readOnlyRootFilesystem: true
  seccompProfile:
    type: RuntimeDefault
It has been seen in a few other setups as well, with Kubernetes 1.29.4 and 1.30.1, CoreDNS v1.11.1 (non-root image and run) and containerd 1.6. Workaround:
However, I am still not able to completely figure out the exact reason. The capability bit appears not to be set for a privileged bind by a non-root user in our context:
It could be a combination of kernel + capabilities, but I have yet to figure out the exact reason.
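(For reference, a minimal sketch of how the capability sets of the running coredns process could be inspected from the node; the pgrep pattern and the example CapEff value are assumptions, adjust for your node:)
$ pid=$(pgrep -f coredns | head -n1)
$ grep -E 'Cap(Prm|Eff|Amb)' /proc/$pid/status
CapPrm: 0000000000000400     # illustrative values only
CapEff: 0000000000000400
CapAmb: 0000000000000000
$ capsh --decode=0000000000000400
0x0000000000000400=cap_net_bind_service
# if cap_net_bind_service is missing from the effective set (or, for a non-root
# process relying on ambient capabilities, from CapAmb), the bind on :53 is denied.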
See below for some additional info after more research and troubleshooting... I have found a fix, and the problem might be related to the fact that I'm running on CentOS 7 nodes (with the Docker container runtime)... There was a change in the securityContext with CoreDNS 1.11.1 (coming with Rancher 1.29).
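(The exact fix isn't quoted above. For context, one commonly cited workaround for this class of problem, not necessarily this commenter's fix, is to lower the unprivileged-port threshold inside the pod via the namespaced sysctl net.ipv4.ip_unprivileged_port_start; the patch below is only a sketch and assumes the stock coredns Deployment name:)
$ kubectl -n kube-system patch deployment coredns --type=strategic -p \
  '{"spec":{"template":{"spec":{"securityContext":{"sysctls":[{"name":"net.ipv4.ip_unprivileged_port_start","value":"0"}]}}}}}'
# setting it to "0" lets a non-root process bind :53 inside the pod's network
# namespace; whether the kubelet accepts this sysctl as "safe" depends on the
# Kubernetes version and container runtime in use.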
Hello dear coders, I was facing the same incident, and the fix for the issue was the following (I added a comment on the original issue). This was done on:
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 22.04.4 LTS
Release:        22.04
Codename:       jammy
$ k version
Client Version: v1.29.1
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.29.2
Hopefully this works for you all.
Indeed, I'm still not sure why, with the default Kubespray configuration, some of my nodes were allowing privileged ports to be used by non-root users while others were not.
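(A rough way to compare the relevant settings across nodes; the node names, SSH access, and the containerd config path are assumptions, and enable_unprivileged_ports is the CRI option as I understand containerd's config:)
for n in node1 node2; do
  ssh "$n" 'hostname; uname -r; containerd --version; grep -n enable_unprivileged_ports /etc/containerd/config.toml || echo "enable_unprivileged_ports: not set"'
done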
We were seeing the same issue on a cluster that was upgraded from 1.29.7 to 1.29.11; before the upgrade, the coredns pods worked and started normally. By applying the changes to the coredns deployment manifest as discussed in #105309, the pods came up immediately. So for now I'm using this coredns deployment manifest.
But anyway, it's super weird, because I have other clusters with the same OS (Alma 9.5, kernel 5.14.0-503.15.1.el9_5.x86_64, same kernel parameters, containerd.io-1.7.24-3.1.el9.x86_64, etc.) where I do not have those issues 🤷 - still investigating.
Again we had this issue on one of our clusters. As far as I understood, the relevant parameters can be set in containerd's config to be future proof; with containerd v2 those two parameters seem to be set by default, which is what I would have preferred.
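(For reference, a sketch of where such settings live in containerd 1.x. The two options shown, enable_unprivileged_ports and enable_unprivileged_icmp, are my assumption about which "two parameters" are meant; verify the section path against your containerd version:)
# /etc/containerd/config.toml (containerd 1.x CRI plugin section)
#   [plugins."io.containerd.grpc.v1.cri"]
#     enable_unprivileged_ports = true
#     enable_unprivileged_icmp  = true
$ sudo systemctl restart containerd   # restart so the CRI plugin picks up the change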
What happened?
I was setting up a single-node k8s cluster (1 control plane and 1 worker node). After going through the whole installation process from the official k8s site, at the last step, after deploying the network plugin on the cluster, the CoreDNS pods went into a CrashLoopBackOff state. I checked the container logs and found the following error message:
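(A sketch of the commands that show this state; the pod name and ages are illustrative, the error line is the one reported in this issue:)
$ kubectl -n kube-system get pods -l k8s-app=kube-dns
NAME                       READY   STATUS             RESTARTS   AGE
coredns-xxxxxxxxxx-xxxxx   0/1     CrashLoopBackOff   5          10m
$ kubectl -n kube-system logs deploy/coredns
listen tcp :53: bind: permission denied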
Please look into this and provide some insights. I have faced the same issue while upgrading the cluster from v1.29 to v1.30.
Thanks & Regards,
Tej Singh Rana
What did you expect to happen?
Both CoreDNS pods should be in the Running state after deploying the network plugin.
How can we reproduce it (as minimally and precisely as possible)?
Simply follow the steps from the official k8s site.
Anything else we need to know?
I did some tests: when I used port 1024 instead of port 53, it started to work. (AFAIK, any port below ~1024 was not working.)
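(A sketch of how such a port test can be done by editing the CoreDNS Corefile; the ConfigMap name and server block are the stock kubeadm defaults and may differ in other setups:)
$ kubectl -n kube-system edit configmap coredns
# in the Corefile, change the server block port, e.g. ".:53 {" -> ".:1024 {"
$ kubectl -n kube-system rollout restart deployment coredns
# note: this only confirms the bind behaviour; for DNS to actually resolve, the
# kube-dns Service targetPort and the container port would also need to match.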
Environment:
coredns configMap: listen tcp :53: bind: permission denied
Kubernetes version
Cloud provider
N/A
OS version
Install tools
Container runtime (CRI) and version (if applicable)
containerd containerd.io 1.6.26 3dd1e886e55dd695541fdcd67420c2888645a495
Related plugins (CNI, CSI, ...) and versions (if applicable)