kubedns container cannot connect to apiserver #193
Comments
iptables
|
I suspect you're hitting issue #196. You can verify that this is the root cause by manually editing
|
@phagunbaya If you do try the above, I would also kill/restart kubelet for it to take effect faster. When I hit this problem myself, kubelet's exponential backoff was making it take forever to try to restart the kube-apiserver pod. |
Did you try flushing your iptables rules and restarting the kubelet service? |
@msavlani Flushing iptables rules did not help. |
Also killing DNS pod seems to resolve this for me... |
I am not entirely sure this has to do with #196; I think there is a race condition elsewhere. I've just hit this in something I'm working on at the moment. I will update if I figure out what causes it, as I seem to have a way of reproducing it reliably. |
I set up a single-machine Kubernetes cluster for development and faced the same problem. But modifying the port does not solve the problem.
Hi @TracyBin, how do you solve this problem at last? |
@jeffchanjunwei It is an iptables problem. Please try the following command.
If the command solves your problem, please tell me. |
@TracyBin It doesn't work. The kubedns-amd64:1.9 image still cannot start. Errors as follows: kubectl describe pod kubedns docker logs kubedns-amd |
@jeffchanjunwei do you solve this problem? |
@pineking Yes. The problem was caused by the network configuration. |
I got the same issue. My kubedns log: [root@k8s ~]# kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name) -c kubedns I've tried a lot of things, but none of them worked. |
I have found the solution to my problem. Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.2", GitCommit:"a55267932d501b9fbd6d73e5ded47d79b5763ce5", GitTreeState:"clean", BuildDate:"2017-04-14T13:36:25Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
1. First, make sure IP forwarding is enabled in the Linux kernel on every node. Just execute the command:
2. Second, if your Docker version is >= 1.13, the default FORWARD chain policy is DROP; you should set the default policy of the FORWARD chain to ACCEPT: $ sudo iptables -P FORWARD ACCEPT.
3. Then the kube-proxy configuration must be passed in:
ps: --cluster-cidr string The CIDR range of pods in the cluster. It is used to bridge traffic coming from outside of the cluster. If not provided, no off-cluster bridging will be performed. |
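The three steps above can be sketched as shell commands. This is a minimal sketch, not the commenter's exact commands: the sysctl key and the 10.244.0.0/16 CIDR are commonly used values, and you should substitute your own cluster's pod CIDR.

```shell
# 1. Enable IPv4 forwarding in the kernel on every node
sudo sysctl -w net.ipv4.ip_forward=1
# Persist it across reboots
echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/99-kubernetes.conf

# 2. Docker >= 1.13 sets the FORWARD chain policy to DROP; allow forwarding again
sudo iptables -P FORWARD ACCEPT

# 3. Run kube-proxy with the pod CIDR so off-cluster traffic is bridged
#    (10.244.0.0/16 is an example; use the CIDR your cluster was created with)
kube-proxy --cluster-cidr=10.244.0.0/16
```

Note that the `iptables -P FORWARD ACCEPT` setting is not persistent; Docker restarts can reset it unless you persist it with your distribution's iptables service.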
Closing this as fixed with v1.6 |
This is still here on 1.7.3 with Ubuntu 16.04. Same exact problem. I have been trying all the possible solutions, from disabling AppArmor to changing the ports and making sure nothing blocks it. It still doesn't work. I tried it on a completely fresh droplet from DigitalOcean and it's still the same. It doesn't look like a configuration problem on my side. I just ran the commands as they are in https://medium.com/@SystemMining/setup-kubenetes-cluster-on-ubuntu-16-04-with-kubeadm-336f4061d929 |
@mhsabbagh, I have the exact same version as yours: 1 master, 3 nodes. The dashboard was set up on node 2 automatically when applying dashboard.yaml, and the dashboard error looks the same as the others.
I have been searching but still cannot find a solution. I can telnet to 10.96.0.1 on port 443 from any of the master and nodes. Are we sure it has been fixed in v1.6? |
I also had this problem in Kubernetes v1.7.4, and after I restarted Docker, it was fixed. |
Also hitting this on a fairly frequent basis with Kubernetes 1.7 on top of Docker 1.12.6. Running |
@BenHall please open a new issue with relevant details. |
systemctl stop kubelet
The route problem can be solved by flushing iptables. |
Thanks @frankruizhi for the info. Worked for me!! (Used docker version >1.13) |
I got the same problem when I use kubeadm to init a k8s v1.8 cluster with one master and one node. Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.0", GitCommit:"6e937839ac04a38cac63e6a7a306c5d035fe7b0a", GitTreeState:"clean", BuildDate:"2017-09-28T22:57:57Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"} |
kubectl get pod -n kube-system NAME READY STATUS RESTARTS AGE |
kubectl logs kube-dns-545bc4bfd4-zqv6j -n kube-system -c kubedns --previous=true |
kubectl logs kube-dns-545bc4bfd4-zqv6j -n kube-system -c dnsmasq --previous=true |
Which pod network is preferred/works out of the box? I'm running into these same issues, but I have no clue how to fix them. I picked kube-router btw, but running into these same issues. |
Robert, I don't know how mature kube-router is, have you tried Weave Net?
|
I set up a k8s cluster using VirtualBox: 1 kube-master, 2 kube-workers. When googling, there are lots of similar issues, and although many tickets show as closed, I tried a lot but had no luck. The root cause should be in kube-dns, flannel, or kube-proxy. Can anyone tell exactly what is wrong with them? :-)
kube-dns has 3 components/containers: kubedns, dnsmasq, sidecar. Trying kubectl exec to check each container:
(1) kubedns = always down, with this error in the log: Waiting for services and endpoints to be initialized from apiserver...
(2) dnsmasq = ok, but it seems the default /etc/resolv.conf might have an issue. Why does it use my HOST machine's DNS settings? Should it use "nameserver 10.96.0.10"?
(3) sidecar = ok, with a failure on dnsProbe; this seems NOT a big issue.
10.96.0.1:443 is the cluster IP of the kubernetes service. This service is in the "default" namespace; is kube-dns, from the "kube-system" namespace, able to access it in namespace "default"? I suspect there might be a problem here. |
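On the cross-namespace question: by default, Kubernetes namespaces do not isolate network traffic, so a pod in kube-system can reach a ClusterIP in default. A minimal sketch to verify the path from the node and from inside the pod (the pod name and the `-c dnsmasq` container are placeholders; adjust to your cluster):

```shell
# From a node: check the raw TCP/TLS path to the apiserver's cluster IP
curl -k https://10.96.0.1:443/version

# From inside the kube-dns pod (substitute your actual pod name):
kubectl exec -n kube-system <kube-dns-pod-name> -c dnsmasq -- \
  nslookup kubernetes.default.svc.cluster.local 10.96.0.10
```

If the node can reach 10.96.0.1:443 but the pod cannot, the problem is usually in the pod network (CNI plugin) or in the iptables rules kube-proxy installs, not in namespace permissions.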
@xiangpengzhao We had an issue where it was a timing related bug with IPTables. Our solution was to upgrade to the latest CNI plugin (in our case Weave). |
Same problem here with K8S 1.10.5 and weave 2.3.0. The problem is solved temporarily thanks to lastboy1228 (#193 (comment)) |
Hi, How did you solve the problem? I encounter the same issue too. |
kubectl delete svc kubernetes |
For flannel network add-on to work correctly, you must pass --pod-network-cidr=10.244.0.0/16 to kubeadm init |
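For reference, the flannel requirement above looks like this at cluster creation time. The manifest URL shown is the commonly used upstream one; verify it against the flannel release matching your Kubernetes version:

```shell
# Initialize the control plane with the pod CIDR flannel expects by default
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# Then apply the flannel manifest
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
```

If the cluster was initialized without this flag, flannel's pod CIDR and the node CIDRs will disagree, which produces exactly the kind of pod-to-apiserver timeouts reported in this thread.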
You may need to execute the command below to ensure that the default policy is ACCEPT, to avoid being kicked out of your machine when using SSH.
And then you can safely flush your rules:
|
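A minimal sketch of the sequence described above: set ACCEPT policies first so an SSH session is not cut off, then flush. The kubelet restart at the end follows the earlier comments in this thread, so that kubelet and kube-proxy recreate their chains:

```shell
# Set default policies to ACCEPT first, so flushing cannot lock you out over SSH
sudo iptables -P INPUT ACCEPT
sudo iptables -P FORWARD ACCEPT
sudo iptables -P OUTPUT ACCEPT

# Now flush all rules, including the NAT table that kube-proxy populates
sudo iptables -F
sudo iptables -t nat -F

# Restart kubelet so the Kubernetes chains are recreated
sudo systemctl restart kubelet
```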
If you are using Rancher, you can go to kubernetes > infrastructure stacks. |
Has this problem been solved? I am also running into it. Thanks! go/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.9.0.1:443/api/v1/services?resourceVersion=0: dial tcp 10.9.0.1:443: i/o timeout |
I had this issue with Kubernetes 1.18 and Docker 19. |
Fresh installation of 1.18.2, same problem. Network: Cilium, OS: Debian 10.
|
I reached some conclusions:
(2) Under resource
|
Just the same here; I want to know what happened before encountering this problem, and how to prevent it. |
well, it works |
thank you very much, it works! |
I ran into the same problem too. It was solved by flushing iptables, but I don't know the exact cause. |
systemctl stop kubelet |
This worked on a Kubernetes installation run on a single node and installed using KubeKey. |
kubedns logs:
kube-apiserver logs