Metrics-server with Weave Net can't collect monitoring data from the node where its pod is placed #166
Comments
I'd take this up with the weave folks. Unfortunately, there's not much we can do to help, unless perhaps a different node address type might help (in that case, look at the …)
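(The flag being referred to is presumably metrics-server's --kubelet-preferred-address-types; a sketch with assumed values, not a confirmed fix:)

$ kubectl -n kube-system edit deployment metrics-server
# then, in the metrics-server container args, add something like
# (the value list here is an assumption):
#   - --kubelet-preferred-address-types=InternalIP,Hostname,ExternalIP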
Having the same issue, did you get it to work?
Not yet; I don't know whether it is related to weave or something else.
I'm running metrics-server on a 1.13 cluster with Weave 2.5.0, and I had to add the commands below to the container on …
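(A commonly suggested pair of metrics-server flags for this class of problem; a sketch only, not necessarily the commands meant above, and it assumes the container already has an args list:)

$ kubectl -n kube-system patch deployment metrics-server --type=json -p='[
  {"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--kubelet-insecure-tls"},
  {"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--kubelet-preferred-address-types=InternalIP"}]'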
@hshtel
Here is a detailed log:
**Is this a REQUEST FOR HELP?**
**Is this a BUG REPORT?** yes
What did you expect to happen?
Connections from inside the pod to the host where it is placed should be allowed (so metrics-server can collect monitoring data from the node where its own pod is placed).
What happened?
I can't get metrics from r2s14, which is the node where the metrics-server pod is placed.
"kubectl top node" output:
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
r2s12 320m 4% 1869Mi 7%
r2s13 65m 0% 705Mi 5%
r2s14 <unknown> <unknown> <unknown> <unknown>
In the metrics-server log, the line below is repeated:
unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:r2s14: unable to fetch metrics from Kubelet r2s14 (10.199.183.219): Get https://10.199.183.219:10250/stats/summary/: dial tcp 10.199.183.219:10250: i/o timeout
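A quick way to confirm that the failure is specific to the pod-to-its-own-node path is to repeat the request by hand from inside the pod (a sketch; the pod name is taken from the netstat output further down, and it assumes the image ships a busybox shell and wget):

$ kubectl -n kube-system exec -it metrics-server-7fbd9b8589-whstv -- sh
/ # wget -T 3 -O- http://10.199.183.219:10250/
# an i/o timeout here reproduces the bug; an immediate HTTP/TLS error would
# mean the TCP path is fine and only the scrape itself is failing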
Anything else we need to know?
kubeadm was used to create the cluster.
Below are my nodes:
NAME STATUS ROLES AGE VERSION INTERNAL-IP
node/r2s12 Ready master 7d19h v1.12.1 10.199.183.217
node/r2s13 Ready <none> 7d19h v1.12.1 10.199.183.218
node/r2s14 Ready <none> 7d19h v1.12.1 10.199.183.219
Versions:
$ weave version
root@r2s12:~# weave version
This is WEAVE, Version 4.4 (TeX Live 2015/Debian)
weave: fatal: web file `version.web' not found.
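(Note: the output above comes from the TeX literate-programming tool "weave", not from Weave Net. The actual Weave Net version can be read from the daemonset image or from the router's local status endpoint; a sketch, assuming the standard weave-net daemonset name:)

root@r2s12:~# kubectl -n kube-system get ds weave-net -o jsonpath='{.spec.template.spec.containers[0].image}'
root@r2s12:~# curl -s http://127.0.0.1:6784/status | head -n 5
# 6784 is the weave router's local status port; "weave-net" is the usual
# daemonset name from the standard manifest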
$ docker version
Client:
Version: 18.06.0-ce
API version: 1.38
Go version: go1.10.3
Git commit: 0ffa825
Built: Wed Jul 18 19:11:02 2018
OS/Arch: linux/amd64
Experimental: false
Server:
Engine:
Version: 18.06.0-ce
API version: 1.38 (minimum version 1.12)
Go version: go1.10.3
Git commit: 0ffa825
Built: Wed Jul 18 19:09:05 2018
OS/Arch: linux/amd64
Experimental: false
$ kubectl version
v1.12.1
Logs:
In the metrics-server log, the line below is repeated:
E1025 08:32:29.117138 1 manager.go:102] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:r2s14: unable to fetch metrics from Kubelet r2s14 (10.199.183.219): Get https://10.199.183.219:10250/stats/summary/: dial tcp 10.199.183.219:10250: i/o timeout
$ kubectl logs -n kube-system weave
Nothing special in this log; it is all like the below:
WARN: 2018/10/24 09:49:14.479057 Vetoed installation of hairpin flow FlowSpec{keys: [EthernetFlowKey{src: f6:b0:8c:c6:13:40, dst: 56:41:54:56:ea:d8} InPortFlowKey{vport: 1}], actions: [OutputAction{vport: 1}]}
INFO: 2018/10/24 09:49:41.618990 Discovered remote MAC 8a:6d:19:0e:95:26 at ae:9a:41:9a:13:07(r2s13)
INFO: 2018/10/24 09:49:41.637759 Discovered remote MAC 66:eb:77:3e:83:18 at 0a:3a:30:bc:0c:e2(r2s12)
WARN: 2018/10/24 09:52:06.717484 [allocator]: Delete: no addresses for 571ccb5552b16cc8b36780d277323bf979c9604c9a82db508b1df9c2b91e9111
$ kubectl logs -n kube-system weave-npc
Nothing special in this log; it is all like the below:
DEBU: 2018/10/24 11:57:17.698101 EVENT DeletePod {"metadata":{"creationTimestamp":"2018-10-24T11:03:18Z","deletionGracePeriodSeconds":0,"deletionTimestamp":"2018-10-24T11:56:59Z","generateName":"coredns-745bf547c4-","labels":{"k8s-app":"kube-dns","pod-template-hash":"745bf547c4"},"name":"coredns-745bf547c4-g5v7r","namespace":"kube-system","resourceVersion":"866220","selfLink":"/api/v1/namespaces/kube-system/pods/coredns-745bf547c4-g5v7r","uid":"69b12a72-d77c-11e8-a2ac-00259e1e2b8c"},"spec":{"containers":[{"image":"k8s.gcr.io/coredns:1.2.2","imagePullPolicy":"IfNotPresent","name":"coredns","ports":[{"containerPort":53,"name":"dns","protocol":"UDP"},{"containerPort":53,"name":"dns-tcp","protocol":"TCP"},{"containerPort":9153,"name":"metrics","protocol":"TCP"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"Default","nodeName":"r2s12","priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"serviceAccount":"coredns","serviceAccountName":"coredns","terminationGracePeriodSeconds":30},"status":{"conditions":[{"lastProbeTime":null,"lastTransitionTime":"2018-10-24T11:03:18Z","status":"True","type":"Initialized"},{"lastProbeTime":null,"lastTransitionTime":"2018-10-24T11:56:00Z","message":"containers with unready status: [coredns]","reason":"ContainersNotReady","status":"False","type":"Ready"},{"lastProbeTime":null,"lastTransitionTime":"2018-10-24T11:56:00Z","message":"containers with unready status: [coredns]","reason":"ContainersNotReady","status":"False","type":"ContainersReady"},{"lastProbeTime":null,"lastTransitionTime":"2018-10-24T11:03:18Z","status":"True","type":"PodScheduled"}],"hostIP":"10.199.183.217","phase":"Running","podIP":"10.38.128.4","qosClass":"Burstable","startTime":"2018-10-24T11:03:18Z"}}
INFO: 2018/10/24 11:57:17.698152 deleting entry 10.38.128.4 from weave-P.B|!ZhkAr5q=XZ?3}tMBA+0 of 69b12a72-d77c-11e8-a2ac-00259e1e2b8c
Network:
ip route from inside the pod:
/ # ip route
default via 10.40.0.2 dev eth0
10.32.0.0/12 dev eth0 scope link src 10.40.0.26
netstat from inside the pod:
/ # netstat -a | grep 10250
tcp 0 1 metrics-server-7fbd9b8589-whstv:48430 10.199.183.219:10250 SYN_SENT
tcp 0 0 metrics-server-7fbd9b8589-whstv:56664 10.199.183.218:10250 ESTABLISHED
tcp 0 0 metrics-server-7fbd9b8589-whstv:41814 10-199-183-217.kubernetes.default.svc.cluster.local:10250 ESTABLISHED
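The contrast above is the whole symptom: connections to the other two nodes' kubelets are ESTABLISHED, while the one to the pod's own node (10.199.183.219) is stuck in SYN_SENT, i.e. the SYN is never answered. A capture on the node can show whether the SYNs arrive at all and where the reply is lost (a sketch; tcpdump assumed installed on the node):

root@r2s14:~# tcpdump -ni weave 'tcp port 10250'
# SYNs visible here with no SYN-ACK coming back would suggest the packet
# (or its reply) is dropped on the host side rather than inside the pod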
ip route from the node where the pod is deployed:
root@r2s14:~# ip route
default via 172.1.1.1 dev enp5s4f1 onlink
10.32.0.0/12 dev weave proto kernel scope link src 10.40.0.2
10.199.183.192/26 dev enp5s4f0 proto kernel scope link src 10.199.183.219
172.1.1.0/24 dev enp5s4f1 proto kernel scope link src 172.1.1.3
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
netstat from the node where the pod is deployed:
root@r2s14:~# netstat -a | grep 10250
tcp6 0 0 [::]:10250 [::]:* LISTEN
tcp6 0 0 r2s14.r2s14:10250 r2s12.r2s12:42440 ESTABLISHED
$ ip -4 -o addr
root@r2s14:~# ip -4 -o addr
1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
4: enp5s4f0 inet 10.199.183.219/26 brd 10.199.183.255 scope global enp5s4f0\ valid_lft forever preferred_lft forever
5: enp5s4f1 inet 172.1.1.3/24 brd 172.1.1.255 scope global enp5s4f1\ valid_lft forever preferred_lft forever
7: docker0 inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0\ valid_lft forever preferred_lft forever
10: weave inet 10.40.0.2/12 brd 10.47.255.255 scope global weave\ valid_lft forever preferred_lft forever
root@r2s14:~# iptables-save
# Generated by iptables-save v1.6.0 on Thu Oct 25 03:07:19 2018
*nat
:PREROUTING ACCEPT [13:1528]
:INPUT ACCEPT [3:385]
:OUTPUT ACCEPT [6:460]
:POSTROUTING ACCEPT [6:460]
:DOCKER - [0:0]
:KUBE-MARK-DROP - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-SEP-4V43NLPFJ2DWVK47 - [0:0]
:KUBE-SEP-ASVMHCGRO4KKGD3M - [0:0]
:KUBE-SEP-BO2XIYXA55BODR2G - [0:0]
:KUBE-SEP-DOMGESOFEAMUQ6UA - [0:0]
:KUBE-SEP-JJGE5N6MW2ULLL3L - [0:0]
:KUBE-SEP-MZZMRRDBWXXO2L4O - [0:0]
:KUBE-SEP-OTJ7N2WJ7UGHPRZJ - [0:0]
:KUBE-SEP-PGKBHR7HMQAUHTBJ - [0:0]
:KUBE-SEP-XKMU5ICJFX5QWQ5D - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-SVC-BT6UIQGWTIPVJ3V7 - [0:0]
:KUBE-SVC-ERIFXISQEP7F7OF4 - [0:0]
:KUBE-SVC-LC5QY66VUV2HJ6WZ - [0:0]
:KUBE-SVC-MOX62VNPUJXMPGLS - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
:KUBE-SVC-TCOU7JCQXEZGVUNU - [0:0]
:KUBE-SVC-XGLOHA7QRQ3V22RZ - [0:0]
:WEAVE - [0:0]
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A POSTROUTING -j WEAVE
-A DOCKER -i docker0 -j RETURN
-A KUBE-MARK-DROP -j MARK --set-xmark 0x8000/0x8000
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/kube-ops-view:" -m tcp --dport 32157 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/kube-ops-view:" -m tcp --dport 32157 -j KUBE-SVC-MOX62VNPUJXMPGLS
-A KUBE-NODEPORTS -p tcp -m comment --comment "kube-system/kubernetes-dashboard:" -m tcp --dport 32684 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "kube-system/kubernetes-dashboard:" -m tcp --dport 32684 -j KUBE-SVC-XGLOHA7QRQ3V22RZ
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4000/0x4000 -j MASQUERADE
-A KUBE-SEP-4V43NLPFJ2DWVK47 -s 10.38.0.6/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-4V43NLPFJ2DWVK47 -p tcp -m tcp -j DNAT --to-destination 10.38.0.6:8443
-A KUBE-SEP-ASVMHCGRO4KKGD3M -s 10.40.0.26/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-ASVMHCGRO4KKGD3M -p tcp -m tcp -j DNAT --to-destination 10.40.0.26:443
-A KUBE-SEP-BO2XIYXA55BODR2G -s 10.38.0.8/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-BO2XIYXA55BODR2G -p tcp -m tcp -j DNAT --to-destination 10.38.0.8:8080
-A KUBE-SEP-DOMGESOFEAMUQ6UA -s 10.38.0.5/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-DOMGESOFEAMUQ6UA -p udp -m udp -j DNAT --to-destination 10.38.0.5:53
-A KUBE-SEP-JJGE5N6MW2ULLL3L -s 10.38.0.7/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-JJGE5N6MW2ULLL3L -p tcp -m tcp -j DNAT --to-destination 10.38.0.7:6379
-A KUBE-SEP-MZZMRRDBWXXO2L4O -s 10.38.0.5/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-MZZMRRDBWXXO2L4O -p tcp -m tcp -j DNAT --to-destination 10.38.0.5:53
-A KUBE-SEP-OTJ7N2WJ7UGHPRZJ -s 10.40.0.6/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-OTJ7N2WJ7UGHPRZJ -p tcp -m tcp -j DNAT --to-destination 10.40.0.6:53
-A KUBE-SEP-PGKBHR7HMQAUHTBJ -s 10.199.183.217/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-PGKBHR7HMQAUHTBJ -p tcp -m tcp -j DNAT --to-destination 10.199.183.217:6443
-A KUBE-SEP-XKMU5ICJFX5QWQ5D -s 10.40.0.6/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-XKMU5ICJFX5QWQ5D -p udp -m udp -j DNAT --to-destination 10.40.0.6:53
-A KUBE-SERVICES ! -s 192.168.0.0/16 -d 10.106.80.75/32 -p tcp -m comment --comment "kube-system/metrics-server: cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.106.80.75/32 -p tcp -m comment --comment "kube-system/metrics-server: cluster IP" -m tcp --dport 443 -j KUBE-SVC-LC5QY66VUV2HJ6WZ
-A KUBE-SERVICES ! -s 192.168.0.0/16 -d 10.109.232.252/32 -p tcp -m comment --comment "default/kube-ops-view: cluster IP" -m tcp --dport 80 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.109.232.252/32 -p tcp -m comment --comment "default/kube-ops-view: cluster IP" -m tcp --dport 80 -j KUBE-SVC-MOX62VNPUJXMPGLS
-A KUBE-SERVICES ! -s 192.168.0.0/16 -d 10.105.172.238/32 -p tcp -m comment --comment "default/kube-ops-view-redis: cluster IP" -m tcp --dport 6379 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.105.172.238/32 -p tcp -m comment --comment "default/kube-ops-view-redis: cluster IP" -m tcp --dport 6379 -j KUBE-SVC-BT6UIQGWTIPVJ3V7
-A KUBE-SERVICES ! -s 192.168.0.0/16 -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES ! -s 192.168.0.0/16 -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SERVICES ! -s 192.168.0.0/16 -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SERVICES ! -s 192.168.0.0/16 -d 10.98.126.0/32 -p tcp -m comment --comment "kube-system/kubernetes-dashboard: cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.98.126.0/32 -p tcp -m comment --comment "kube-system/kubernetes-dashboard: cluster IP" -m tcp --dport 443 -j KUBE-SVC-XGLOHA7QRQ3V22RZ
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-SVC-BT6UIQGWTIPVJ3V7 -j KUBE-SEP-JJGE5N6MW2ULLL3L
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-MZZMRRDBWXXO2L4O
-A KUBE-SVC-ERIFXISQEP7F7OF4 -j KUBE-SEP-OTJ7N2WJ7UGHPRZJ
-A KUBE-SVC-LC5QY66VUV2HJ6WZ -j KUBE-SEP-ASVMHCGRO4KKGD3M
-A KUBE-SVC-MOX62VNPUJXMPGLS -j KUBE-SEP-BO2XIYXA55BODR2G
-A KUBE-SVC-NPX46M4PTMTKRN6Y -j KUBE-SEP-PGKBHR7HMQAUHTBJ
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-DOMGESOFEAMUQ6UA
-A KUBE-SVC-TCOU7JCQXEZGVUNU -j KUBE-SEP-XKMU5ICJFX5QWQ5D
-A KUBE-SVC-XGLOHA7QRQ3V22RZ -j KUBE-SEP-4V43NLPFJ2DWVK47
-A WEAVE -s 10.32.0.0/12 -d 224.0.0.0/4 -j RETURN
-A WEAVE ! -s 10.32.0.0/12 -d 10.32.0.0/12 -j MASQUERADE
-A WEAVE -s 10.32.0.0/12 ! -d 10.32.0.0/12 -j MASQUERADE
COMMIT
# Completed on Thu Oct 25 03:07:19 2018
# Generated by iptables-save v1.6.0 on Thu Oct 25 03:07:19 2018
*filter
:INPUT ACCEPT [147:55346]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [154:27496]
:DOCKER - [0:0]
:DOCKER-ISOLATION-STAGE-1 - [0:0]
:DOCKER-ISOLATION-STAGE-2 - [0:0]
:DOCKER-USER - [0:0]
:KUBE-EXTERNAL-SERVICES - [0:0]
:KUBE-FIREWALL - [0:0]
:KUBE-FORWARD - [0:0]
:KUBE-SERVICES - [0:0]
:WEAVE-NPC - [0:0]
:WEAVE-NPC-DEFAULT - [0:0]
:WEAVE-NPC-EGRESS - [0:0]
:WEAVE-NPC-EGRESS-ACCEPT - [0:0]
:WEAVE-NPC-EGRESS-CUSTOM - [0:0]
:WEAVE-NPC-EGRESS-DEFAULT - [0:0]
:WEAVE-NPC-INGRESS - [0:0]
-A INPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes externally-visible service portals" -j KUBE-EXTERNAL-SERVICES
-A INPUT -j KUBE-FIREWALL
-A INPUT -i weave -j WEAVE-NPC-EGRESS
-A FORWARD -m comment --comment "kubernetes forwarding rules" -j KUBE-FORWARD
-A FORWARD -i weave -m comment --comment "NOTE: this must go before '-j KUBE-FORWARD'" -j WEAVE-NPC-EGRESS
-A FORWARD -o weave -m comment --comment "NOTE: this must go before '-j KUBE-FORWARD'" -j WEAVE-NPC
-A FORWARD -o weave -m state --state NEW -j NFLOG --nflog-group 86
-A FORWARD -o weave -j DROP
-A FORWARD -i weave ! -o weave -j ACCEPT
-A FORWARD -o weave -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A OUTPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -j KUBE-FIREWALL
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -j RETURN
-A DOCKER-USER -j RETURN
-A KUBE-FIREWALL -m comment --comment "kubernetes firewall for dropping marked packets" -m mark --mark 0x8000/0x8000 -j DROP
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding rules" -m mark --mark 0x4000/0x4000 -j ACCEPT
-A KUBE-FORWARD -s 192.168.0.0/16 -m comment --comment "kubernetes forwarding conntrack pod source rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A KUBE-FORWARD -d 192.168.0.0/16 -m comment --comment "kubernetes forwarding conntrack pod destination rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A WEAVE-NPC -m state --state RELATED,ESTABLISHED -j ACCEPT
-A WEAVE-NPC -d 224.0.0.0/4 -j ACCEPT
-A WEAVE-NPC -m state --state NEW -j WEAVE-NPC-DEFAULT
-A WEAVE-NPC -m state --state NEW -j WEAVE-NPC-INGRESS
-A WEAVE-NPC -m set ! --match-set weave-local-pods dst -j ACCEPT
-A WEAVE-NPC-DEFAULT -m set --match-set weave-P.B|!ZhkAr5q=XZ?3}tMBA+0 dst -m comment --comment "DefaultAllow ingress isolation for namespace: kube-system" -j ACCEPT
-A WEAVE-NPC-DEFAULT -m set --match-set weave-;rGqyMIl1HN^cfDki~Z$3]6!N dst -m comment --comment "DefaultAllow ingress isolation for namespace: default" -j ACCEPT
-A WEAVE-NPC-DEFAULT -m set --match-set weave-Rzff}h:=]JaaJl/G;(XJpGjZ[ dst -m comment --comment "DefaultAllow ingress isolation for namespace: kube-public" -j ACCEPT
-A WEAVE-NPC-EGRESS -m state --state RELATED,ESTABLISHED -j ACCEPT
-A WEAVE-NPC-EGRESS -m state --state NEW -m set ! --match-set weave-local-pods src -j RETURN
-A WEAVE-NPC-EGRESS -d 224.0.0.0/4 -j RETURN
-A WEAVE-NPC-EGRESS -m state --state NEW -j WEAVE-NPC-EGRESS-DEFAULT
-A WEAVE-NPC-EGRESS -m state --state NEW -m mark ! --mark 0x40000/0x40000 -j WEAVE-NPC-EGRESS-CUSTOM
-A WEAVE-NPC-EGRESS -m state --state NEW -m mark ! --mark 0x40000/0x40000 -j NFLOG --nflog-group 86
-A WEAVE-NPC-EGRESS -m mark ! --mark 0x40000/0x40000 -j DROP
-A WEAVE-NPC-EGRESS-ACCEPT -j MARK --set-xmark 0x40000/0x40000
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-E1ney4o[ojNrLk.6rOHi;7MPE src -m comment --comment "DefaultAllow egress isolation for namespace: kube-system" -j WEAVE-NPC-EGRESS-ACCEPT
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-E1ney4o[ojNrLk.6rOHi;7MPE src -m comment --comment "DefaultAllow egress isolation for namespace: kube-system" -j RETURN
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-s_+ChJId4Uy_$}G;WdH|TK)I src -m comment --comment "DefaultAllow egress isolation for namespace: default" -j WEAVE-NPC-EGRESS-ACCEPT
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-s_+ChJId4Uy_$}G;WdH|TK)I src -m comment --comment "DefaultAllow egress isolation for namespace: default" -j RETURN
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-41s)5vQ^o/xWGz6a20N:?#|E src -m comment --comment "DefaultAllow egress isolation for namespace: kube-public" -j WEAVE-NPC-EGRESS-ACCEPT
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-41s)5vQ^o/xWGz6a20N:?#|E src -m comment --comment "DefaultAllow egress isolation for namespace: kube-public" -j RETURN
COMMIT
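Since the destination address is local to the node, this traffic is delivered via the INPUT path, where the rule -A INPUT -i weave -j WEAVE-NPC-EGRESS above applies. Watching the per-rule packet counters while metrics-server retries is a quick way to see whether one of these chains is eating the SYNs (a sketch):

root@r2s14:~# iptables -nvL INPUT
root@r2s14:~# watch -n1 'iptables -nvL WEAVE-NPC-EGRESS'
# a DROP counter climbing in step with the metrics-server retries points at
# the rule responsible; all-zero counters would point back at the weave
# datapath itself rather than at iptables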