This repository has been archived by the owner on Jun 20, 2024. It is now read-only.

race between weave-kube and kube-proxy can allow all traffic through Service VIP #3230

Open
brb opened this issue Jan 29, 2018 · 6 comments

@brb
Contributor

brb commented Jan 29, 2018

kube-proxy prepends the iptables rule "-j KUBE-FORWARD" to the FORWARD chain, which ACCEPTs all traffic and prevents it from entering the "WEAVE-NPC" chain.

In #3210 we introduced a fix which prepends "-j WEAVE-NPC" after kube-proxy has inserted "-j KUBE-FORWARD". The fix relies on the premise that weave-kube starts after kube-proxy, which follows from the fact that weave-kube depends on the api-server (to get a peer list), and the api-server is reachable from weave-kube only after kube-proxy has installed all its nat rules.

However, if the nat rules for the api-server are already present (e.g. left over from a previous k8s installation which failed to flush them), then weave-kube can start before kube-proxy, so the WEAVE-NPC rule will be preceded by KUBE-FORWARD => all traffic to Pods through a Service Virtual IP will be accepted.
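The race can be made concrete with a small sketch (the helper function and the sample rule lines are illustrative, not taken from weave's code) that checks whether the WEAVE-NPC jump still precedes KUBE-FORWARD in the FORWARD chain:

```shell
# Hypothetical helper: given the output of `iptables -S FORWARD`,
# succeed only if the WEAVE-NPC jump precedes the KUBE-FORWARD jump.
npc_precedes_kube_forward() {
  rules=$1
  npc=$(printf '%s\n' "$rules" | grep -n -- '-j WEAVE-NPC' | head -1 | cut -d: -f1)
  kube=$(printf '%s\n' "$rules" | grep -n -- '-j KUBE-FORWARD' | head -1 | cut -d: -f1)
  [ -n "$npc" ] && [ -n "$kube" ] && [ "$npc" -lt "$kube" ]
}

# Racy ordering described in the issue: kube-proxy's ACCEPT rule comes first.
bad_order='-A FORWARD -j KUBE-FORWARD
-A FORWARD -o weave -j WEAVE-NPC'

# Desired ordering after the #3210 fix: the WEAVE-NPC jump comes first.
good_order='-A FORWARD -o weave -j WEAVE-NPC
-A FORWARD -j KUBE-FORWARD'

npc_precedes_kube_forward "$good_order" && echo "order OK"
npc_precedes_kube_forward "$bad_order" || echo "race: KUBE-FORWARD wins"
```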

Possible fixes to the problem:

@brb
Contributor Author

brb commented Feb 1, 2018

Monitoring and maintaining iptables rules would also solve such issues as #3155, #3106.

@rade rade added the bug label Mar 20, 2018
@rade rade changed the title from "Fix race between weave-kube and kube-proxy which can allow all traffic through Service VIP" to "race between weave-kube and kube-proxy can allow all traffic through Service VIP" Mar 20, 2018
@murali-reddy
Contributor

@brb is this still a problem? I would like to take a shot at this.

@brb
Contributor Author

brb commented Jul 30, 2018

@murali-reddy I think this is still a problem. I'd be interested in the "Monitor (and maintain) iptables rules and ensure the required order" solution, as it could be used to maintain other iptables rules installed by Net.

Before implementing it, perhaps you could present possible ideas.

@murali-reddy
Contributor

One pattern I have seen with Kubernetes controllers is the idea of a periodic reconciliation loop which ensures the actual state is in sync with the desired state. Such reconciliation can even fix out-of-band changes like an iptables restart and issues such as #3155 and #3106. That is more significant work, but a holistic solution IMO. I still don't have good clarity on how much of the code (both net and npc) is idempotent.
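A minimal sketch of that reconciliation idea (the IPTABLES override, the exact rule arguments, and the 60s period are illustrative assumptions, not weave's implementation):

```shell
# Sketch of a periodic reconciliation loop for the FORWARD chain.
IPTABLES=${IPTABLES:-iptables}   # overridable so the logic can be exercised without root

reconcile_forward_chain() {
  # Re-assert the desired ordering idempotently: drop any existing
  # WEAVE-NPC jump (ignoring failure if it is absent), then re-insert
  # it at position 1 of FORWARD so it runs before kube-proxy's
  # "-j KUBE-FORWARD" ACCEPT rule.
  "$IPTABLES" -D FORWARD -j WEAVE-NPC 2>/dev/null || true
  "$IPTABLES" -I FORWARD 1 -j WEAVE-NPC
}

# Periodic resync, in the spirit of kube-proxy's own sync loop:
#   while true; do reconcile_forward_chain; sleep 60; done
```

A real implementation would likely check the current order first and only rewrite when it is wrong, to avoid churning the chain every period.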

I will look for other alternatives as well.

I'd be interested in the "Monitor (and maintain) iptables rules and ensure the required order" solution

By "monitor" do you mean reading the iptables rules once in a while and checking their order, or is there callback-style functionality for when a chain changes?

@brb
Contributor Author

brb commented Jul 31, 2018

By Monitor you mean, read the iptables once in a while and check for the order, or is there a callback kind of functionality when there is a change to chain?

I meant what you just described above, i.e. the way k8s does it.

@bzillins

bzillins commented Aug 20, 2018

Still a problem; for now we work around it by modifying the kube-proxy service to remove the weave iptables rules in ExecStartPre and remove the weave containers in ExecStartPost.
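A drop-in unit like the following illustrates the shape of that workaround (a sketch only; the file path, rule arguments, and container cleanup command are assumptions, not taken from the commenter's setup):

```ini
# /etc/systemd/system/kube-proxy.service.d/weave-workaround.conf
[Service]
# Remove weave's FORWARD jump before kube-proxy starts writing rules.
# The leading "-" tells systemd to ignore failure if the rule is absent.
ExecStartPre=-/usr/sbin/iptables -D FORWARD -j WEAVE-NPC
# Remove the weave containers afterwards so kubelet restarts them and
# weave-kube re-inserts its rule after kube-proxy's KUBE-FORWARD rule.
ExecStartPost=-/usr/bin/docker rm -f weave
```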
