Missing kernel modules for kube-proxy's IPVS mode #3087

Closed
residentsummer opened this issue Aug 21, 2018 · 7 comments

@residentsummer

Is this a BUG REPORT or FEATURE REQUEST? (choose one):
FEATURE REQUEST

Please provide the following details:

Environment:

minikube version: v0.28.2
OS: MacOS 10.13
VM driver: xhyve
ISO version: ~/.minikube/cache/iso/minikube-v0.28.1.iso


What happened:
kube-proxy won't start in ipvs mode

What you expected to happen:
kube-proxy should start in ipvs mode

How to reproduce it (as minimally and precisely as possible):

After minikube boots, set mode: "ipvs" in config.conf inside the kube-proxy ConfigMap:

kubectl edit -n kube-system configmap/kube-proxy
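
For reference, the fragment of config.conf that changes looks roughly like this (a sketch of the kubeadm-generated KubeProxyConfiguration; only the mode field is edited, everything else stays as generated):

apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
...
mode: "ipvs"    # was "" (an empty mode defaults to iptables)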

To apply the new configuration, delete the old pod; Kubernetes will create a replacement from the corresponding DaemonSet (kc below is shorthand for kubectl):

$ kc get -n kube-system pods
NAME                                    READY     STATUS    RESTARTS   AGE
...
kube-proxy-49psk                        1/1       Running   0          11h
...
$ kc delete -n kube-system po/kube-proxy-49psk
pod "kube-proxy-49psk" deleted
$ kc get -n kube-system pods
NAME                                    READY     STATUS    RESTARTS   AGE
...
kube-proxy-x7qgq                        1/1       Running   0          7m
...

Output of minikube logs (if applicable):

$ kc logs -n kube-system po/kube-proxy-x7qgq
E0805 09:46:12.625751       1 ipset.go:156] Failed to make sure ip set: &{{KUBE-CLUSTER-IP hash:ip,port inet 1024 65536 0-65535 Kubernetes service cluster ip + port for masquerade purpose} map[] 0xc420562080} exist, error: error creating ipset KUBE-CLUSTER-IP, error: exit status 1
E0805 09:46:42.645604       1 ipset.go:156] Failed to make sure ip set: &{{KUBE-LOAD-BALANCER-FW hash:ip,port inet 1024 65536 0-65535 Kubernetes service load balancer ip + port for load balancer with sourceRange} map[] 0xc420562080} exist, error: error creating ipset KUBE-LOAD-BALANCER-FW, error: exit status 1
E0805 09:47:12.677159       1 ipset.go:156] Failed to make sure ip set: &{{KUBE-NODE-PORT-UDP bitmap:port inet 1024 65536 0-65535 Kubernetes nodeport UDP port for masquerade purpose} map[] 0xc420562080} exist, error: error creating ipset KUBE-NODE-PORT-UDP, error: exit status 1
E0805 09:47:42.748946       1 ipset.go:156] Failed to make sure ip set: &{{KUBE-NODE-PORT-LOCAL-TCP bitmap:port inet 1024 65536 0-65535 Kubernetes nodeport TCP port with externalTrafficPolicy=local} map[] 0xc420562080} exist, error: error creating ipset KUBE-NODE-PORT-LOCAL-TCP, error: exit status 1
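
One way to confirm that these errors come from missing kernel support rather than from kube-proxy itself is to look at the modules inside the VM (a sketch, assuming the standard ip_set_* module names and that the set types were built as loadable modules):

$ minikube ssh
$ lsmod | grep ip_set                  # which set-type modules are currently loaded
$ sudo modprobe ip_set_hash_ipport     # fails if the module was never built for this kernel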

Anything else we need to know:

With all ipset-related kernel modules available, kube-proxy starts as expected:

I0811 21:26:07.996804       1 feature_gate.go:230] feature gates: &{map[]}
I0811 21:26:08.064640       1 server_others.go:183] Using ipvs Proxier.
W0811 21:26:08.086817       1 proxier.go:349] clusterCIDR not specified, unable to distinguish between internal and external traffic
W0811 21:26:08.086847       1 proxier.go:355] IPVS scheduler not specified, use rr by default
I0811 21:26:08.087178       1 server_others.go:210] Tearing down inactive rules.
I0811 21:26:08.142232       1 server.go:448] Version: v1.11.0
I0811 21:26:08.154958       1 conntrack.go:98] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
I0811 21:26:08.155260       1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0811 21:26:08.155338       1 conntrack.go:98] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0811 21:26:08.155394       1 conntrack.go:98] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0811 21:26:08.155634       1 config.go:102] Starting endpoints config controller
I0811 21:26:08.155661       1 controller_utils.go:1025] Waiting for caches to sync for endpoints config controller
I0811 21:26:08.155703       1 config.go:202] Starting service config controller
I0811 21:26:08.155709       1 controller_utils.go:1025] Waiting for caches to sync for service config controller
I0811 21:26:08.256254       1 controller_utils.go:1032] Caches are synced for service config controller
I0811 21:26:08.256369       1 controller_utils.go:1032] Caches are synced for endpoints config controller

ipsets created by kube-proxy:

$ kc exec -it kube-proxy-lxj2d ipset list | grep Type | sort -u
Type: bitmap:port
Type: hash:ip,port
Type: hash:ip,port,ip
Type: hash:ip,port,net
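
Each of those set types corresponds to a kernel Kconfig symbol, which is where the proposed defconfig entries below come from:

bitmap:port       -> CONFIG_IP_SET_BITMAP_PORT
hash:ip,port      -> CONFIG_IP_SET_HASH_IPPORT
hash:ip,port,ip   -> CONFIG_IP_SET_HASH_IPPORTIP
hash:ip,port,net  -> CONFIG_IP_SET_HASH_IPPORTNET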

So, the proposed changes are:

diff --git a/deploy/iso/minikube-iso/board/coreos/minikube/linux_defconfig b/deploy/iso/minikube-iso/board/coreos/minikube/linux_defconfig
index e5de73c4d..bb860f22e 100644
--- a/deploy/iso/minikube-iso/board/coreos/minikube/linux_defconfig
+++ b/deploy/iso/minikube-iso/board/coreos/minikube/linux_defconfig
@@ -187,7 +187,11 @@ CONFIG_NETFILTER_XT_MATCH_TCPMSS=m
 CONFIG_NETFILTER_XT_MATCH_TIME=m
 CONFIG_NETFILTER_XT_MATCH_U32=m
 CONFIG_IP_SET=y
+CONFIG_IP_SET_BITMAP_PORT=m
 CONFIG_IP_SET_HASH_IP=m
+CONFIG_IP_SET_HASH_IPPORT=m
+CONFIG_IP_SET_HASH_IPPORTIP=m
+CONFIG_IP_SET_HASH_IPPORTNET=m
 CONFIG_IP_SET_HASH_NET=m
 CONFIG_IP_SET_LIST_SET=m
 CONFIG_IP_VS=m
@tstromberg added the kind/feature and area/guest-vm labels on Sep 19, 2018
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Dec 18, 2018
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Jan 17, 2019
@tstromberg
Contributor

I can confirm this still happens in minikube v0.33.1, and I would be happy to review any PRs that fix this issue. Following the repro instructions:

I0123 23:48:53.812354       1 server_others.go:189] Using ipvs Proxier.
W0123 23:48:53.813081       1 proxier.go:375] clusterCIDR not specified, unable to distinguish between internal and external traffic
W0123 23:48:53.813194       1 proxier.go:381] IPVS scheduler not specified, use rr by default
I0123 23:48:53.813356       1 server_others.go:216] Tearing down inactive rules.
I0123 23:48:53.837048       1 server.go:464] Version: v1.13.2
I0123 23:48:53.845902       1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0123 23:48:53.846263       1 config.go:102] Starting endpoints config controller
I0123 23:48:53.846275       1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
I0123 23:48:53.846301       1 config.go:202] Starting service config controller
I0123 23:48:53.846305       1 controller_utils.go:1027] Waiting for caches to sync for service config controller
I0123 23:48:53.946580       1 controller_utils.go:1034] Caches are synced for endpoints config controller
I0123 23:48:53.946594       1 controller_utils.go:1034] Caches are synced for service config controller
E0123 23:48:53.971896       1 ipset.go:162] Failed to make sure ip set: &{{KUBE-LOAD-BALANCER-LOCAL hash:ip,port inet 1024 65536 0-65535 Kubernetes service load balancer ip + port with externalTrafficPolicy=local} map[] 0xc0001b8bb0} exist, error: error creating ipset KUBE-LOAD-BALANCER-LOCAL, error: exit status 1
E0123 23:48:53.986797       1 ipset.go:162] Failed to make sure ip set: &{{KUBE-NODE-PORT-TCP bitmap:port inet 1024 65536 0-65535 Kubernetes nodeport TCP port for masquerade purpose} map[] 0xc0001b8bb0} exist, error: error creating ipset KUBE-NODE-PORT-TCP, error: exit status 1

@tstromberg added the priority/awaiting-more-evidence and help wanted labels and removed the lifecycle/rotten label on Jan 23, 2019
@mdonkers
Contributor

mdonkers commented Feb 8, 2019

I can confirm the same. Configuring the kernel turns up a few more changes than proposed above, but after re-building the ISO with them IPVS works, also on v0.33.1. The following diff was generated after running make linux-menuconfig:

diff --git i/deploy/iso/minikube-iso/board/coreos/minikube/linux_defconfig w/deploy/iso/minikube-iso/board/coreos/minikube/linux_defconfig
index e1e4f905a..e766c4131 100644
--- i/deploy/iso/minikube-iso/board/coreos/minikube/linux_defconfig
+++ w/deploy/iso/minikube-iso/board/coreos/minikube/linux_defconfig
@@ -188,8 +188,21 @@ CONFIG_NETFILTER_XT_MATCH_TCPMSS=m
 CONFIG_NETFILTER_XT_MATCH_TIME=m
 CONFIG_NETFILTER_XT_MATCH_U32=m
 CONFIG_IP_SET=y
+CONFIG_IP_SET_BITMAP_IP=m
+CONFIG_IP_SET_BITMAP_IPMAC=m
+CONFIG_IP_SET_BITMAP_PORT=m
 CONFIG_IP_SET_HASH_IP=m
+CONFIG_IP_SET_HASH_IPMARK=m
+CONFIG_IP_SET_HASH_IPPORT=m
+CONFIG_IP_SET_HASH_IPPORTIP=m
+CONFIG_IP_SET_HASH_IPPORTNET=m
+CONFIG_IP_SET_HASH_IPMAC=m
+CONFIG_IP_SET_HASH_MAC=m
+CONFIG_IP_SET_HASH_NETPORTNET=m
 CONFIG_IP_SET_HASH_NET=m
+CONFIG_IP_SET_HASH_NETNET=m
+CONFIG_IP_SET_HASH_NETPORT=m
+CONFIG_IP_SET_HASH_NETIFACE=m
 CONFIG_IP_SET_LIST_SET=m
 CONFIG_IP_VS=m
 CONFIG_IP_VS_IPV6=y
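
To try this out before it lands in a release, the ISO can be rebuilt and passed to minikube directly (a sketch; the make target name and output path follow the contributor docs of that era and may differ between versions):

$ make linux-menuconfig                # adjust the IP set options; regenerates linux_defconfig
$ make minikube_iso                    # builds the ISO, typically under out/minikube.iso
$ minikube start --iso-url=file://$(pwd)/out/minikube.iso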

@tstromberg
Contributor

Thanks! I'd be happy to accept any PRs that update the ISO appropriately.

@mdonkers
Contributor

mdonkers commented Mar 2, 2019

Hi @tstromberg , I've created a PR to resolve this here: #3783

@tstromberg
Contributor

Closing, as v1.0.0 includes #3783 to address this.
