whitelist traffic to cluster IP and node ports in INPUT chain to bypass network policy enforcement #914
Conversation
Force-pushed from 97b022f to e5a9900: whitelist traffic to cluster IP and node ports in INPUT chain to bypass network policy enforcement. Fixes #905
This PR has been tested through the following test cases. Four services are used: service A, service B, service C, and service D. Services B and C are of type NodePort, and service D runs in the host network. Network policies are applied such that pods in service A can only access service B.
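For reference, a minimal sketch (not from this PR) of the kind of policy the tests describe, expressed with client-go types; the label selectors `app: service-a` and `app: service-b` are assumptions made for illustration:

```go
package main

import (
	"fmt"

	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// testPolicy mirrors the test setup described above: pods backing
// service A may egress only to pods backing service B. The label keys
// and values are hypothetical.
var testPolicy = &networkingv1.NetworkPolicy{
	ObjectMeta: metav1.ObjectMeta{Name: "service-a-egress-to-b"},
	Spec: networkingv1.NetworkPolicySpec{
		// The policy selects service A's pods.
		PodSelector: metav1.LabelSelector{
			MatchLabels: map[string]string{"app": "service-a"},
		},
		PolicyTypes: []networkingv1.PolicyType{networkingv1.PolicyTypeEgress},
		Egress: []networkingv1.NetworkPolicyEgressRule{{
			// The only allowed destination: service B's pods.
			To: []networkingv1.NetworkPolicyPeer{{
				PodSelector: &metav1.LabelSelector{
					MatchLabels: map[string]string{"app": "service-b"},
				},
			}},
		}},
	},
}

func main() {
	fmt.Println("policy:", testPolicy.Name)
}
```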
Merging the PR based on the testing done.
args := []string{"-m", "comment", "--comment", "kube-router netpol", "-j", customChain}
exists, err := iptablesCmdHandler.Exists("filter", builtinChain, args...)

exists, err := iptablesCmdHandler.Exists("filter", chain, ruleSpec...)
Really minor nitpick, but we should probably do this existence check before the List call above. If the iptables rule doesn't exist, then there is no reason to do the List, and skipping it would save us an iptables call.
Agree. Will fix it.
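A minimal sketch of the reordering being suggested, using github.com/coreos/go-iptables (which the handler above wraps); the chain name and rule spec are placeholders:

```go
package netpol

import "github.com/coreos/go-iptables/iptables"

// ensureJumpRule checks Exists first so the (more expensive) List call
// is skipped entirely when the jump rule is absent.
func ensureJumpRule(ipt *iptables.IPTables, builtinChain string, args []string) error {
	exists, err := ipt.Exists("filter", builtinChain, args...)
	if err != nil {
		return err
	}
	if !exists {
		// Rule missing: insert it at the top; no List call needed.
		return ipt.Insert("filter", builtinChain, 1, args...)
	}
	// Rule exists: only now pay for the List to verify its position.
	rules, err := ipt.List("filter", builtinChain)
	if err != nil {
		return err
	}
	_ = rules // position check elided in this sketch
	return nil
}
```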
if err != nil {
	glog.Fatalf("Failed to delete wrong rule to jump to chain %s in %s chain due to %s", customChain, builtinChain, err.Error())
}
err = iptablesCmdHandler.Delete("filter", chain, strconv.Itoa(ruleNo+1))
I know that you've talked about this in other places in the code, but at some point in the future we should really stop relying on deleting iptables rules by number (I know we do it all over the place). There is a possibility that, since we've listed the rules, some other process has come in and manipulated iptables, causing us to delete the wrong rule.
Yes, there is a possibility. Perhaps holding the iptables lock, followed by iptables-save and iptables-restore, would be a safe solution. But if there are non-cooperating processes that do not use locks, we simply have no way to control it with iptables. Such races have been a problem.
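One sketch of the race-safer alternative discussed here: delete by the full rule spec instead of by index, assuming ruleSpec holds the exact specification of the rule to remove:

```go
package netpol

import "github.com/coreos/go-iptables/iptables"

// deleteBySpec asks iptables to match the full rule specification, so
// this cannot remove the wrong rule even if another process reordered
// the chain after we listed it. Deleting by index, as in
// Delete("filter", chain, strconv.Itoa(ruleNo+1)), has exactly that race.
func deleteBySpec(ipt *iptables.IPTables, chain string, ruleSpec []string) error {
	return ipt.Delete("filter", chain, ruleSpec...)
}
```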
@@ -74,6 +76,8 @@ func NewKubeRouterConfig() *KubeRouterConfig {
	BGPGracefulRestartDeferralTime: 360 * time.Second,
	EnableOverlay: true,
	OverlayType: "subnet",
	ClusterIPCIDR: "10.96.0.0/12",
Is this a safe guess for all of the users that use the daemonsets we publish?
It's the default value Kubernetes uses.
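For context, a small illustration of what that default covers; 10.96.0.0/12 is kubeadm's default service CIDR (kube-apiserver's --service-cluster-ip-range), and the helper below is hypothetical, not kube-router code:

```go
package main

import (
	"fmt"
	"net"
)

// inClusterIPRange reports whether ip falls inside the configured
// cluster IP CIDR. Hypothetical helper for illustration only.
func inClusterIPRange(cidr, ip string) (bool, error) {
	_, ipnet, err := net.ParseCIDR(cidr)
	if err != nil {
		return false, err
	}
	return ipnet.Contains(net.ParseIP(ip)), nil
}

func main() {
	// 10.96.0.0/12 spans 10.96.0.0 through 10.111.255.255.
	in, _ := inClusterIPRange("10.96.0.0/12", "10.96.0.1")   // true: the default kubernetes service VIP
	out, _ := inClusterIPRange("10.96.0.0/12", "10.112.0.1") // false: just past the /12
	fmt.Println(in, out)
}
```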
I know my review comes late. Overall, LGTM. For posterity, as we discussed offline earlier, this will affect the ability to route to ExternalIPs or LoadBalancer-type services from within the cluster. Granted, I don't think this was Kubernetes' intention for intra-cluster traffic (i.e. it should be using a ClusterIP), but it does have the potential to affect users (I know that our current users use ExternalIPs for intra-cluster traffic routing because that's what they get back from Consul). Other than that, I had one nit and one question, but neither is pressing.
@aauren thanks for the review. Please open an issue for the use case; we should discuss and see how to evolve it.
Added here: #934
whitelist traffic to cluster IP and node ports in INPUT chain to bypass network policy enforcement (cloudnativelabs#914)
* whitelist traffic to cluster IP and node ports in INPUT chain to bypass network policy enforcement. Fixes cloudnativelabs#905
* fix unit test failure
* ensure netpol firewall rules are configured after service proxy firewall rules
Fixes #905
The following rules will be prepended to the KUBE-ROUTER-INPUT chain, so traffic to cluster IPs and node-port services is targeted to RETURN.

Note: there are no dependencies on or assumptions about the service proxy. A service proxy could add its own filtering rules to restrict traffic to services. Below is an example sketch for the case where kube-router is acting as the service proxy and has set up filter rules to restrict traffic to service VIPs.
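A minimal sketch of the whitelisting described above, assuming the default cluster IP CIDR (10.96.0.0/12) and node-port range (30000:32767); it uses github.com/coreos/go-iptables and is illustrative, not the PR's exact code:

```go
package netpol

import "github.com/coreos/go-iptables/iptables"

// whitelistServiceTraffic sketches the rules prepended to the
// KUBE-ROUTER-INPUT chain: traffic destined to cluster IPs or node
// ports hits RETURN before any per-pod network policy chains apply.
// Real code would take the CIDR and port range from configuration.
func whitelistServiceTraffic(ipt *iptables.IPTables) error {
	rules := [][]string{
		// Cluster IP traffic returns to the built-in INPUT chain.
		{"-d", "10.96.0.0/12", "-j", "RETURN"},
		// Node-port traffic (default range) likewise, for TCP and UDP.
		{"-p", "tcp", "--dport", "30000:32767", "-j", "RETURN"},
		{"-p", "udp", "--dport", "30000:32767", "-j", "RETURN"},
	}
	// Insert in reverse so the final chain order matches the slice order.
	for i := len(rules) - 1; i >= 0; i-- {
		if err := ipt.Insert("filter", "KUBE-ROUTER-INPUT", 1, rules[i]...); err != nil {
			return err
		}
	}
	return nil
}
```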