NetworkPolicy only enforced in pod-to-pod traffic, not when using services #3452
Comments
Could you show the output of this command too?
Interesting - you don't have any iptables rules redirecting Kubernetes services. I suspect the source address is getting masqueraded, so the connection no longer looks like it is coming from the pod.
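A quick way to check for that on the node (a sketch; it assumes kube-proxy's standard KUBE-MARK-MASQ chain) is to grep the NAT table for masquerade rules:

# MASQUERADE targets rewrite the source IP; KUBE-MARK-MASQ marks packets for it
sudo iptables-save -t nat | grep -E 'MASQUERADE|KUBE-MARK-MASQ'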
Yes, it's running in IPVS mode:
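To confirm the proxy mode and see the IPVS virtual servers (a sketch; kube-proxy serves its active mode on the metrics port, 10249 by default):

curl -s http://127.0.0.1:10249/proxyMode   # should print "ipvs"
sudo ipvsadm -Ln                           # lists service VIPs and their backends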
Observing this, it is interesting how access to the service IP goes through. The network policy is applied in the namespace, so the default rule is to drop the packet, since we don't have any pods matching. Could you please share the iptables and ipset details?
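For reference, those details can be collected on the node with something like the following (assuming weave-npc's standard chains and its weave-* ipsets):

sudo iptables-save   # full rule set, including the WEAVE-NPC chains
sudo ipset list      # pod-selector membership sets maintained by weave-npc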
Nodes & Pods:
node/thesis-test-master:
node/thesis-test-node-0:
@maxbischoff thanks for sharing the iptables and ipset details. As pointed out by @bboreham, it does look like the result of masquerading. Since both pods are running on the same node, the masqueraded traffic ends up with the node's IP as its source IP. Network policies are generally implemented to allow any traffic from node-local IPs, so such traffic is not run through the network policies. Looking at the possible cases where kube-proxy in IPVS mode masquerades traffic, I don't expect traffic to get masqueraded in your deployment. Would it be possible to add an additional node to the cluster (to ensure the pods run on different nodes) and see whether this scenario works? Alternatively, if you can confirm (by tcpdump'ing the traffic) that the traffic is getting masqueraded, that would explain why the network policies are not imposed.
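A minimal tcpdump sketch for that check (the interface name and port are assumptions; Weave's bridge is usually called weave):

# Run on the node hosting both pods. If the SYN towards the nginx pod carries
# the node's IP rather than the busybox pod's IP as source, it was masqueraded.
sudo tcpdump -ni weave 'tcp port 80' -c 20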
It seems like traffic is getting masqueraded.
@maxbischoff thanks for confirming. You need to check what is causing kube-proxy to masquerade the traffic. Please note that, in general, the semantics of network policies only deal with pod IPs or ipBlocks and do not really cover how the service proxy fits in.
I wonder if it is hitting this line in our iptables rules:
if the DNAT done by IPVS happens after iptables runs, then I think it would.
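One way to see which rule is actually being hit (a sketch, assuming weave-npc's standard chain names) is to list the chains with per-rule packet counters:

sudo iptables -L WEAVE-NPC -n -v           # per-rule packet/byte counters
sudo iptables -L WEAVE-NPC-DEFAULT -n -v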
I just provisioned a cluster with kube-proxy in IPVS mode. Things are in order as expected by Weave: I don't see traffic getting masqueraded when a service IP is accessed from the pods.
@maxbischoff if you still have the cluster and it's not too much trouble, can you please share the below command output when you run the test on the node where busybox is running,
and please see which of the rules has its packet counters increasing.
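For example (a sketch; zeroing the counters first makes the matching rule obvious):

sudo iptables -Z                        # reset all packet/byte counters
# ...run the wget test from the busybox pod, then:
sudo iptables -L -n -v --line-numbers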
I tried it, but it seems not to hit either:
After running wget in busybox:
I believe I am affected by this as well - Kubernetes 1.14.1, weave-net 2.5.1.
The attempts listed above are as follows:
More log files and settings are also captured.
@aleks-mariusz the context of your problem is not clear. Have you applied any network policies? Perhaps opening a new issue with all the details would be helpful. As far as this issue is concerned, the network policy did not work when the service was accessed because traffic is getting masqueraded, as noted in this comment; in that case it is known (nothing specific to weave-npc: network policies in general do not work well with services) that network policies will not work as expected.
Sorry for not being clear; I piggybacked off this issue because I'm using the exact same tests that the OP did, and am also using IPVS. So I am trying to give more data points to help get this particular issue worked on/resolved, rather than opening what seems like a duplicate issue (unless that's preferred?). BTW/FYI, traffic to services is blocked properly by weave-npc, but only when kube-proxy uses legacy iptables mode; it's just that when kube-proxy uses IPVS, services are not properly blocked. Unfortunately this means I will have to revert my Weave-powered Kubernetes cluster back to iptables mode until weave-npc supports IPVS mode as well as it supports iptables.
Thanks @aleks-mariusz. I did try kube-proxy in IPVS mode for this issue and did not run into any problem. Can you please check whether, in IPVS mode, masquerade-all is set or a cluster-cidr is specified for kube-proxy? Can you also confirm whether the counters are increasing when you run the test.
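On a kubeadm-provisioned cluster those settings can be checked with something like this (the ConfigMap name is an assumption; adjust for your setup):

kubectl -n kube-system get cm kube-proxy -o yaml | grep -E 'mode|masqueradeAll|clusterCIDR'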
Re: masquerade-all, it is set to false, and I am not specifying cluster-cidr in the kube-proxy config. I will still need to check the counters, but I'm not sure what that will tell us.
There are two cases in which network policies do not work as expected. The first is when traffic gets MASQUERADEd, either by kube-proxy or by Weave, which changes the source IP of the packet and causes ingress network policies not to be applied as expected. The other case is when all traffic from the node is allowed to the pods running on that node (#3285). So if you are running on a single node, or the source and destination pods are on the same node, you will see network policies not working as expected.
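A sketch of how to rule out the same-node case (the node name is taken from the outputs above; cordoning is just one way to force scheduling onto another node):

kubectl get pods -o wide                  # see which node nginx landed on
kubectl cordon thesis-test-node-0         # assume nginx runs here
kubectl run busybox --rm -ti --image=busybox -- /bin/sh   # lands on another node
kubectl uncordon thesis-test-node-0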
What you expected to happen?
When following the Kubernetes tutorial on declaring network policies, I expect wget from an unlabelled pod to the nginx service to time out.
What happened?
When using wget on the service hostname, the nginx pod can be reached. When accessing it directly via the pod IP, access is blocked.
How to reproduce it?
As in the tutorial:
kubectl run nginx --image=nginx --expose --port 80
kubectl apply -f nginx-policy.yaml
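For reference, the policy file from that tutorial looks like this (reproduced from the Kubernetes docs, so treat it as illustrative):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: access-nginx
spec:
  podSelector:
    matchLabels:
      app: nginx
  ingress:
  - from:
    - podSelector:
        matchLabels:
          access: "true"

With this applied, only pods labelled access=true should reach nginx; the report here is that the service path bypasses it.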
Anything else we need to know?
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
Versions:
Logs:
Network:
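These sections are typically filled with output along the following lines, per the weave issue template (a sketch; adjust the pod name):

weave version
kubectl version
uname -a
kubectl logs -n kube-system <weave-net-pod> weave
ip route
ip -4 -o addr
sudo iptables-save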