Weave unable to establish connection #3411
Comments
What made you think that?
@bboreham Look at the logs coming from the weave pods, where xx:xx:xx:xx:xx:xx is the IPv6 address of node1:
That looks more like a Weave Net "Peer Name", which is an opaque internal identifier. The log indicates you are getting TCP traffic but not UDP traffic. I can't think of anything that would cause that after 1-2 days and then revert on a restart.
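A quick way to check whether raw UDP gets across between the nodes, assuming netcat is available on both hosts (flags differ slightly between netcat variants), would be something like:
$ nc -u -l 6784                        # on node1: listen on one of Weave's UDP data ports (6783/6784)
$ echo test | nc -u xx.xx.xx.xx 6784   # on node2: send a test datagram to node1
If the text never shows up on node1, something between the hosts is dropping UDP.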
These logs don't appear when the weave pods are running normally. When I run …
Today it happened again. I forgot to mention that when I try to access the K8s dashboard (using kubectl proxy), it gives me this error:
However, after a restart of all weave pods everything seems normal again.
Yes, it won't work if it can't get UDP packets across. That is the thing we need to diagnose.
I suspect you are looking at the MAC address, which is set to the peer name.
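One way to confirm that is to compare the value from the logs against the MAC of the weave bridge on node1 and against the peer name reported by the weave script (the pod name below is a placeholder, and the script path inside the container may vary by version):
$ ip -o link show weave
$ kubectl exec -n kube-system weave-net-xxxxx -c weave -- /home/weave/weave --local status
If xx:xx:xx:xx:xx:xx matches those, it is the peer name rather than an IPv6 address.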
I've renamed this because IPv6 support is the second-oldest feature request (#19) in Weave Net: there is no code to handle IPv6, and your logs don't show an IPv6 problem.
I've found what caused the "no route to host" errors. I had configured the firewall on the nodes/master to only allow the ports listed in the Kubernetes docs, which only mention TCP ports and no UDP. Opening those same ports for all protocols for incoming connections to/from the nodes and master fixed the issue. However, I still find it weird that the cluster was able to function normally for a limited amount of time...
Thanks for the update @danielgelling. Sadly that docs page only covers ports used by Kubernetes' internal features. It does note "The pod network plugin you use (see below) may also require certain ports to be open", and we publish https://www.weave.works/docs/net/latest/faq/#ports. If you can say how else you searched for that information, we can attempt to make it easier to find. It is mysterious to me how a firewall would be reset by deleting the pods.
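For anyone else running into this: with plain iptables, opening the ports from that FAQ between the two peers would look roughly like this (yy.yy.yy.yy standing in for the other node's address):
$ sudo iptables -A INPUT -p tcp -s yy.yy.yy.yy --dport 6783 -j ACCEPT
$ sudo iptables -A INPUT -p udp -s yy.yy.yy.yy --dport 6783:6784 -j ACCEPT
The equivalent rules will of course depend on whichever firewall front end the nodes actually use.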
Your comment about the UDP packets set me thinking last week about what could block those packets. A firewall came to mind, because when setting up the master and the nodes we blocked all incoming traffic except for our office and VPN IPs and the ports mentioned in the K8s docs.
What you expected to happen?
Weave pods running stably, using IPv4 to communicate.
What happened?
A while (1 to 2 days) after initialising the weave pods, they are unable to communicate with each other because they appear to want to use IPv6, which they are unable to do.
Restarting (deleting) the pods gets the cluster up and running again, but after a few days it reverts to IPv6 again.
How to reproduce it?
Install Kubernetes on bare metal and use Weave Net as the CNI plugin. (I also run Istio 1.0.1, which might be a factor, though I don't think so.)
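For reference, the Weave Net addon was applied in the usual documented way, roughly (the exact command may differ depending on the Weave Net version):
$ kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"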
Anything else we need to know?
Versions:
Logs:
(I have replaced all IPv4 addresses with xx.xx.xx.xx for node1 and yy.yy.yy.yy for node2, and the corresponding IPv6 addresses with xx:xx:xx:xx:xx:xx and yy:yy:yy:yy:yy:yy.)
Network:
$ ip route (on node1)
$ ip -4 -o addr (on node1)
$ sudo iptables-save (on node1) (replaced master IP with aaa.aaa.aaa.aaa)