networkpolicy is not working when service proxy SNAT's traffic #744
Was your nginx pod set up with the DSR annotation by any chance?
Closing as stale.
@murali-reddy We recently had a message in Slack that brought this issue back up. I'm able to reproduce it myself by applying a network policy to a pod and then bouncing traffic to it through another node. Here is the setup, context, and tcpdump.

Environment context:
Traffic is sent from:

Deployment setup:
tcpdump from:

tcpdump from:
I would imagine that this is a common problem for all k8s network frameworks. Do you happen to have any knowledge of how Calico or others address this?
@aauren Whether it's kube-proxy or kube-router acting as the service proxy, when an external client accesses the service, traffic is SNAT'ed to ensure symmetric routing (i.e. the return traffic goes through the same node). Please see https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-type-nodeport. It's an inherent problem. One can use services with `externalTrafficPolicy=Local` to preserve the client source IP.
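For reference, a minimal sketch of a service using that setting (the service name, selector, and ports here are hypothetical):

```yaml
# Hypothetical sketch: a NodePort service with externalTrafficPolicy: Local.
# Traffic is only accepted on nodes that run a ready endpoint and is
# delivered without SNAT, so network policies see the real client IP.
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  externalTrafficPolicy: Local
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
```

The trade-off is that nodes without a local endpoint drop the traffic, so whatever steers clients to nodes (an external load balancer, or BGP advertisement as in this thread) has to direct them only to nodes that actually host a pod.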
@murali-reddy That makes sense. Given that the k8s documentation describes this as a pitfall of proxied service traffic, it seems to me that this is just an accepted problem upstream. Two things it would be worth getting your opinion on:
Agree. That should be documented.
I am afraid that would give nodes (e.g. a compromised node) unrestricted access to the pod, which is not desirable. In general, the problem of preserving the source IP is not specific to Kubernetes. AFAIK there is no one-size-fits-all solution. In the case of Kubernetes, setting `externalTrafficPolicy=Local` is the option.
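For concreteness, the rejected workaround would amount to something like this hypothetical policy, with the node subnet (10.10.0.0/24 is a placeholder) whitelisted so the SNAT'ed traffic matches:

```yaml
# Hypothetical sketch of the undesirable workaround: whitelisting the
# node subnet so SNAT'ed traffic from other nodes matches the policy.
# Anything running on any node, including a compromised one, would
# then be able to reach the pod.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-node-snat
spec:
  podSelector:
    matchLabels:
      app: nginx
  policyTypes:
  - Ingress
  ingress:
  - from:
    - ipBlock:
        cidr: 10.10.0.0/24  # placeholder: the cluster's node subnet
```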
I've just tried with Calico and I can confirm Calico has the same issue @aauren
Hi,
we use kube-router to advertise the service and pod CIDRs with BGP.
Now we want to limit access to the pod via a network policy.
Example deployment:
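A minimal sketch of such a deployment (the names and labels are hypothetical), pinned to k8s-worker-3 as described below:

```yaml
# Hypothetical sketch: a single-replica nginx deployment that lands
# on k8s-worker-3 (the node that later receives the direct traffic).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      nodeSelector:
        kubernetes.io/hostname: k8s-worker-3
      containers:
      - name: nginx
        image: nginx:1.25
        ports:
        - containerPort: 80
```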
Example policy:
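A minimal sketch of such a policy (the policy name and pod labels are hypothetical), allowing the 172.17.88.0/24 client subnet mentioned below:

```yaml
# Sketch of an ingress policy admitting only the 172.17.88.0/24
# client subnet to the nginx pod.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-clients
spec:
  podSelector:
    matchLabels:
      app: nginx
  policyTypes:
  - Ingress
  ingress:
  - from:
    - ipBlock:
        cidr: 172.17.88.0/24
    ports:
    - protocol: TCP
      port: 80
```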
Default deny:
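A standard default-deny ingress policy for the namespace (only the name is hypothetical):

```yaml
# Selects all pods in the namespace and allows no ingress traffic.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}
  policyTypes:
  - Ingress
```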
The service VIP is announced via anycast from all nodes.
But it only works when a client from 172.17.88.0/24 randomly hits the k8s-worker-3 node, which has the nginx pod on it.
All other nodes SNAT the incoming traffic for the nginx service IP to their own node IP before forwarding it to the nginx pod IP, so it never matches the network policy rule, because the source IP is completely different.
Can someone give me a hint on how to resolve this issue?