ExternalIPs and LoadBalancer IPs No Longer Work with NetworkPolicy without Manual Specification #934
@aauren Seems like a reasonable use case. If I understood correctly, Consul acts as service discovery for both in-cluster and out-of-cluster services, and you would want services with an external IP to be usable by both in a consistent manner?
@murali-reddy Yes, that is correct.
Is it connected to #938?
@murali-reddy I think we're ready to move forward with a contribution to hopefully fix this. While we could fix it based on a range, like we did for ClusterIPs with a subnet mask or NodePorts with a port range, it's possible we'd miss use cases here if we try to do a range for this one. So I would propose that we create an additional ipset to track externalIPs and then handle them the way we handle ClusterIPs and NodePorts on the INPUT chain (e.g.
The only downside is that it will introduce an additional point of churn for the NPC. Right now the NPC only watches pod, namespace, and networkpolicy events. To do this in the NPC we would have to add an informer for services as well (to be notified whenever someone changes a service's ExternalIP so that we can add it to / remove it from our ipset). However, I don't think we'd have to run the entire NPC controller when that happens; instead that handler could update only the affected ExternalIP ipset that we're maintaining. Alternatively, we could have the NSC manage this new ipset by adding another one here:
Let me know your thoughts on this and I'll begin drafting a PR.
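To make the informer idea above a bit more concrete, here is a minimal sketch (an editor illustration, not from the thread and not kube-router's actual code) using client-go; the `externalIPSet` interface is a hypothetical stand-in for an ipset wrapper:

```go
// Sketch: watch Service objects and keep a dedicated ipset of externalIPs in
// sync, without re-running the full network policy controller on each event.
package externalipwatch

import (
	"time"

	v1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
)

// externalIPSet is a hypothetical wrapper around the ipset that holds the
// external IPs the policy controller should bypass in the INPUT chain.
type externalIPSet interface {
	Add(ip string) error
	Del(ip string) error
}

func watchServiceExternalIPs(clientset kubernetes.Interface, set externalIPSet, stopCh <-chan struct{}) {
	factory := informers.NewSharedInformerFactory(clientset, 5*time.Minute)
	informer := factory.Core().V1().Services().Informer()

	informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			if svc, ok := obj.(*v1.Service); ok {
				for _, ip := range svc.Spec.ExternalIPs {
					_ = set.Add(ip)
				}
			}
		},
		UpdateFunc: func(oldObj, newObj interface{}) {
			oldSvc, okOld := oldObj.(*v1.Service)
			newSvc, okNew := newObj.(*v1.Service)
			if !okOld || !okNew {
				return
			}
			// Add the current external IPs, then drop any that disappeared.
			current := make(map[string]bool)
			for _, ip := range newSvc.Spec.ExternalIPs {
				current[ip] = true
				_ = set.Add(ip)
			}
			for _, ip := range oldSvc.Spec.ExternalIPs {
				if !current[ip] {
					_ = set.Del(ip)
				}
			}
		},
		DeleteFunc: func(obj interface{}) {
			if svc, ok := obj.(*v1.Service); ok {
				for _, ip := range svc.Spec.ExternalIPs {
					_ = set.Del(ip)
				}
			}
		},
	})

	factory.Start(stopCh)
	factory.WaitForCacheSync(stopCh)
}
```

The point of the handler-only update is that a Service change touches just this one ipset rather than triggering a full policy resync.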
Unfortunately I did not foresee that there would be cases where pods access services by external IP from within the cluster. Given we already have
External IPs are not managed by Kubernetes, so we cannot make general assumptions about the nature of the external IP pool. I feel it's reasonable to accept multiple ranges of external IPs. Alternatively, we can spare users from this configuration by bookkeeping the set of external IPs ourselves. We can do either of the above approaches, but there is going to be overhead.
This sounds reasonable. If there is any objection to accepting the external IP ranges as configuration, then I would prefer this.
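As a rough illustration of the ranges-as-configuration approach (the flag format and helper names here are assumptions, not existing kube-router options), parsing a comma-separated list of CIDRs and matching destination IPs against them might look like:

```go
// Sketch: parse a comma-separated list of external IP CIDRs from configuration
// and test whether a given destination IP falls inside any of them.
package main

import (
	"fmt"
	"net"
	"strings"
)

func parseExternalIPRanges(raw string) ([]*net.IPNet, error) {
	var ranges []*net.IPNet
	for _, s := range strings.Split(raw, ",") {
		_, ipnet, err := net.ParseCIDR(strings.TrimSpace(s))
		if err != nil {
			return nil, fmt.Errorf("invalid external IP range %q: %v", s, err)
		}
		ranges = append(ranges, ipnet)
	}
	return ranges, nil
}

func inExternalIPRange(ranges []*net.IPNet, ip net.IP) bool {
	for _, r := range ranges {
		if r.Contains(ip) {
			return true
		}
	}
	return false
}

func main() {
	// Example values only; the actual flag name and ranges are hypothetical.
	ranges, err := parseExternalIPRanges("192.0.2.0/24, 198.51.100.0/24")
	if err != nil {
		panic(err)
	}
	fmt.Println(inExternalIPRange(ranges, net.ParseIP("192.0.2.10"))) // true
}
```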
Yes, I would prefer to keep the individual functionalities properly decoupled. I understand that for users like you a cohesive solution is better, but it would be better to keep the decoupling intact (for users who don't run kube-router's service proxy, and to preserve the possibility of running the individual functionalities as separate containers in the kube-router pod).
I mean, ranges won't be a huge problem for us, but I do think it would be difficult to make ranges work for users that potentially have a bunch of external IPs. To me, this feels like a pretty big usability concession. Maybe we make the range flag optional, so that people can either add ranges themselves or have kube-router maintain the list of external IPs for them? This would essentially allow users to choose between manual configuration with better performance, or automatic configuration with some performance trade-off. What would you think about an option similar to the following?
I feel this may not be so bad, at least from what I see of the non-cloud implementations of the LoadBalancer service. Since no one has run into this problem, at least at the moment, how about just imposing the restriction that one has to configure
That's fine. We can start with ranges, and then if someone else has a strong use case for allowing kube-router to do the bookkeeping itself, we can revisit it. So for posterity, what we've decided to do is add the following:
"Off" is essentially the same functionality that we provide with the current 1.0 release.
A bit unrelated to $title, but the following statement ...
... made me think about this project we recently discovered, which is yet another way to handle LoadBalancer service IP assignment without any k8s cloud-controller-manager: https://github.com/Nordix/assign-lb-ip
Since PR #914 was merged, only ClusterIPs and NodePorts are specifically bypassed in the INPUT chain for in-cluster traffic. Essentially, for ClusterIPs and NodePorts the policy enforcement is bypassed in INPUT and then enforced in OUTPUT, after all of the VIPs have been disambiguated by IPVS into pod IP addresses. This allows users to reach them with in-cluster traffic while still using the standard pod and namespace selectors.
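A rough sketch of that bypass-in-INPUT / enforce-in-OUTPUT layout (an editor illustration; the chain and ipset names are placeholders and not what kube-router actually creates, and the referenced ipset must already exist):

```go
// Sketch: skip policy evaluation while a packet is still addressed to a
// service VIP, and enforce it once IPVS has rewritten the VIP to a pod IP.
package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) error {
	out, err := exec.Command("iptables", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("iptables %v: %v: %s", args, err, out)
	}
	return nil
}

func main() {
	// Illustrative placeholder names only.
	const svcVIPSet = "example-service-vips" // ipset holding ClusterIPs/NodePort addresses
	const policyChain = "EXAMPLE-NWPLCY"     // chain holding the policy enforcement rules

	// Create the policy chain (ignore the error if it already exists) and hook
	// it from both INPUT and OUTPUT.
	_ = exec.Command("iptables", "-t", "filter", "-N", policyChain).Run()
	_ = run("-t", "filter", "-I", "INPUT", "1", "-j", policyChain)
	_ = run("-t", "filter", "-I", "OUTPUT", "1", "-j", policyChain)

	// At the top of the policy chain: if the destination is still a service VIP
	// (the INPUT pass, before IPVS rewrites it), return without enforcing.
	_ = run("-t", "filter", "-I", policyChain, "1",
		"-m", "set", "--match-set", svcVIPSet, "dst", "-j", "RETURN")

	// When the same traffic traverses OUTPUT after IPVS has DNAT'd the VIP to a
	// pod IP, it no longer matches the ipset and falls through to the
	// enforcement rules that would follow in this chain.
}
```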
However, since ExternalIPs and LoadBalancer IPs were not added to the functionality of #914, they still get processed in INPUT while the VIP is still the destination, and the traffic will be denied unless it is manually allowed by an ipBlock statement AND a namespace / pod selector (since it will be enforced again when it traverses the OUTPUT chain).

While I think the intention of the Kubernetes Network SIG is that all in-cluster traffic be routed through ClusterIPs (which is why the ClusterIP is what built-in Kubernetes DNS returns when you look up a service record), we have many users that look up records from Consul. As it concerns Consul, we want the services registered in Consul to be routable both from within the cluster and from outside the cluster, so we only add NodePorts and ExternalIPs from Kubernetes to the service definitions in Consul. As a result, all of our containerized applications that use Consul attempt to reach in-cluster services via ExternalIPs or NodePorts.
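For concreteness, a hedged example (placeholder names, labels, and CIDR; not taken from the issue) of the kind of policy a user currently has to write, combining an ipBlock rule with the usual pod selector:

```go
// Sketch: build and print a NetworkPolicy that allows traffic both via an
// ipBlock (covering the externally-addressed range) and via a pod selector
// (covering the post-DNAT pod-to-pod traffic).
package main

import (
	"fmt"

	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// The CIDR and labels are placeholders; substitute your own values.
	policy := networkingv1.NetworkPolicy{
		TypeMeta:   metav1.TypeMeta{APIVersion: "networking.k8s.io/v1", Kind: "NetworkPolicy"},
		ObjectMeta: metav1.ObjectMeta{Name: "allow-via-external-ip"},
		Spec: networkingv1.NetworkPolicySpec{
			PodSelector: metav1.LabelSelector{MatchLabels: map[string]string{"app": "example"}},
			Ingress: []networkingv1.NetworkPolicyIngressRule{{
				From: []networkingv1.NetworkPolicyPeer{
					// ipBlock rule: per the issue, manually allowing the relevant
					// IP range is currently required for the ExternalIP path.
					{IPBlock: &networkingv1.IPBlock{CIDR: "192.0.2.0/24"}},
					// Standard pod selector: still needed because policy is
					// enforced again on OUTPUT once IPVS has resolved the VIP
					// to a pod IP.
					{PodSelector: &metav1.LabelSelector{MatchLabels: map[string]string{"app": "client"}}},
				},
			}},
		},
	}

	out, err := yaml.Marshal(policy)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```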