docker: reusing running container -> bind: address already in use #7102

Closed
tstromberg opened this issue Mar 18, 2020 · 3 comments · Fixed by #7125
Labels
co/docker-driver: Issues related to kubernetes in container
kind/bug: Categorizes issue or PR as related to a bug.
priority/important-soon: Must be staffed and worked on either currently, or very soon, ideally in time for the next release.

Comments

@tstromberg (Contributor)

startup looks OK, though it did take 2.4 minutes:

πŸ˜„  minikube v1.9.0-beta.1 on Darwin 10.15.3
✨  Using the docker driver based on existing profile
βŒ›  Reconfiguring existing host ...
πŸƒ  Using the running docker "minikube" container ...
🐳  Preparing Kubernetes v1.18.0-rc.1 on Docker 19.03.2 ...
    β–ͺ apiserver.authorization-mode=AlwaysAllow
πŸš€  Launching Kubernetes ... 
🌟  Enabling addons: default-storageclass, storage-provisioner
πŸ„  Done! kubectl is now configured to use "minikube"

❗  /usr/local/bin/kubectl is v1.16.3, which may be incompatible with Kubernetes v1.18.0-rc.1.
πŸ’‘  You can also use 'minikube kubectl -- get pods' to invoke a matching version

Then I looked at the pods:

NAMESPACE     NAME                          READY   STATUS             RESTARTS   AGE
kube-system   coredns-66bff467f8-bjtjp      1/1     Running            2          39m
kube-system   coredns-66bff467f8-tnc86      1/1     Running            2          39m
kube-system   etcd-m01                      0/1     CrashLoopBackOff   6          39m
kube-system   kindnet-m45jt                 1/1     Running            3          39m
kube-system   kube-apiserver-m01            1/1     Running            0          21s
kube-system   kube-controller-manager-m01   1/1     Running            0          21s
kube-system   kube-proxy-k7cln              1/1     Running            2          39m
kube-system   kube-scheduler-m01            0/1     CrashLoopBackOff   6          39m
kube-system   storage-provisioner           1/1     Running            3          39m

kubectl logs etcd-m01 -n kube-system

[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2020-03-18 20:46:50.184783 I | etcdmain: etcd Version: 3.4.3
2020-03-18 20:46:50.184816 I | etcdmain: Git SHA: 3cf2f69b5
2020-03-18 20:46:50.184818 I | etcdmain: Go Version: go1.12.12
2020-03-18 20:46:50.184820 I | etcdmain: Go OS/Arch: linux/amd64
2020-03-18 20:46:50.184823 I | etcdmain: setting maximum number of CPUs to 6, total number of available CPUs is 6
2020-03-18 20:46:50.184860 N | etcdmain: the server is already initialized as member before, starting as etcd member...
[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2020-03-18 20:46:50.184883 I | embed: peerTLS: cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
2020-03-18 20:46:50.185005 C | etcdmain: listen tcp 172.17.0.2:2380: bind: address already in use
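
To confirm which process is still holding the etcd peer port inside the reused container, something along these lines should show it (a diagnostic suggestion, not output from this run; it assumes the container is named "minikube" and that ss and ps are available in the kicbase image):

docker exec minikube sh -c 'ss -ltnp | grep :2380'    # show the PID that already owns the etcd peer port
docker exec minikube sh -c 'ps -ef | grep [e]tcd'     # look for an etcd process left over from the previous start
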
@tstromberg (Contributor Author)

same with the scheduler:

I0318 20:48:52.228877       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
I0318 20:48:52.228939       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
I0318 20:48:52.716641       1 serving.go:313] Generated self-signed cert in-memory
failed to create listener: failed to listen on 0.0.0.0:10251: listen tcp 0.0.0.0:10251: bind: address already in use

@tstromberg added the co/docker-driver and priority/important-soon labels on Mar 18, 2020
@tstromberg added this to the v1.9.0 (March 24th) milestone on Mar 18, 2020
@tstromberg (Contributor Author)

It's worth noting that there was a flag change:

./out/minikube start --extra-config=apiserver.authorization-mode=AlwaysAllow

@tstromberg added the kind/bug label on Mar 19, 2020
@tstromberg (Contributor Author)

It's not always reproducible, but this seems to hit the condition often enough:

./out/minikube delete; ./out/minikube start --driver=docker; sleep 60; ./out/minikube start --driver=docker --alsologtostderr -v=1
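
Spelled out as an annotated script, the repro is just the same commands run in sequence (written here as a sketch):

#!/bin/sh
# Reproduce the container-reuse race: start once, wait for the control plane to settle,
# then run a second start that reuses the still-running docker container.
./out/minikube delete
./out/minikube start --driver=docker
sleep 60
./out/minikube start --driver=docker --alsologtostderr -v=1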

There are two issues here: it takes longer to launch, and we dump a lot of extra error output to the console.

πŸš€  Launching Kubernetes ... 
I0319 14:13:35.435290    7444 kubeadm.go:299] restartCluster start
I0319 14:13:35.546492    7444 kubeadm.go:142] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: exit status 1
stdout:

stderr:
I0319 14:13:37.126481    7444 kverify.go:45] waiting for apiserver process to appear ...
I0319 14:13:37.253085    7444 kverify.go:66] duration metric: took 126.604417ms to wait for apiserver process to appear ...
I0319 14:13:37.295804    7444 kapi.go:58] client config for minikube: &rest.Config{Host:"https://127.0.0.1:32774", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/tstromberg/.minikube/client.crt", KeyFile:"/Users/tstromberg/.minikube/client.key", CAFile:"/Users/tstromberg/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x51c21d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil)}
I0319 14:13:37.300115    7444 kverify.go:82] waiting for kube-system pods to appear ...
W0319 14:13:44.231927    7444 kverify.go:98] pod list returned error: Get "https://127.0.0.1:32774/api/v1/namespaces/kube-system/pods": EOF
I0319 14:13:44.890692    7444 logs.go:203] 1 containers: [c84203e8819d]
I0319 14:13:45.041448    7444 logs.go:203] 2 containers: [b018b63147bd ab8e14965ab9]
I0319 14:13:45.210424    7444 logs.go:203] 2 containers: [c0f8e07b8193 17889d2ae3ff]
I0319 14:13:45.387518    7444 logs.go:203] 2 containers: [8ae651f578b7 367928ad3bee]
I0319 14:13:45.555704    7444 logs.go:203] 1 containers: [1bfca5f1ff5e]
I0319 14:13:45.727261    7444 logs.go:203] 0 containers: []
W0319 14:13:45.727290    7444 logs.go:205] No container was found matching "kubernetes-dashboard"
I0319 14:13:45.894872    7444 logs.go:203] 1 containers: [6d76c377fdbf]
I0319 14:13:46.056880    7444 logs.go:203] 2 containers: [7284df776371 7b70f6c305bc]
I0319 14:13:46.056935    7444 logs.go:117] Gathering logs for dmesg ...
I0319 14:13:46.184916    7444 logs.go:117] Gathering logs for coredns [17889d2ae3ff] ...
I0319 14:13:46.344010    7444 logs.go:117] Gathering logs for Docker ...
I0319 14:13:46.524686    7444 logs.go:117] Gathering logs for kube-scheduler [8ae651f578b7] ...
I0319 14:13:46.693938    7444 logs.go:117] Gathering logs for kube-controller-manager [7284df776371] ...
I0319 14:13:46.865743    7444 logs.go:117] Gathering logs for container status ...
I0319 14:13:47.004590    7444 logs.go:117] Gathering logs for kube-controller-manager [7b70f6c305bc] ...
I0319 14:13:47.199792    7444 logs.go:117] Gathering logs for kubelet ...
W0319 14:13:47.334322    7444 logs.go:132] Found kubelet problem: Mar 19 21:13:28 minikube kubelet[2217]: E0319 21:13:28.532272    2217 pod_workers.go:191] Error syncing pod 3c853d9152c593aabdb79b7b49733896 ("kube-scheduler-m01_kube-system(3c853d9152c593aabdb79b7b49733896)"), skipping: failed to "StartContainer" for "kube-scheduler" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-scheduler pod=kube-scheduler-m01_kube-system(3c853d9152c593aabdb79b7b49733896)"
W0319 14:13:47.334489    7444 logs.go:132] Found kubelet problem: Mar 19 21:13:28 minikube kubelet[2217]: E0319 21:13:28.615980    2217 pod_workers.go:191] Error syncing pod 106c465f-704e-414c-b97b-df723f68931f ("kindnet-j6gmw_kube-system(106c465f-704e-414c-b97b-df723f68931f)"), skipping: failed to "StartContainer" for "kindnet-cni" with CrashLoopBackOff: "back-off 10s restarting failed container=kindnet-cni pod=kindnet-j6gmw_kube-system(106c465f-704e-414c-b97b-df723f68931f)"
W0319 14:13:47.334631    7444 logs.go:132] Found kubelet problem: Mar 19 21:13:28 minikube kubelet[2217]: E0319 21:13:28.629148    2217 pod_workers.go:191] Error syncing pod 617508941319f2647e776f5e7675282b ("kube-apiserver-m01_kube-system(617508941319f2647e776f5e7675282b)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-m01_kube-system(617508941319f2647e776f5e7675282b)"
W0319 14:13:47.335786    7444 logs.go:132] Found kubelet problem: Mar 19 21:13:29 minikube kubelet[2217]: E0319 21:13:29.305300    2217 pod_workers.go:191] Error syncing pod 44759d44-5487-4e7c-b307-19cadbeb39a6 ("coredns-66bff467f8-dnknz_kube-system(44759d44-5487-4e7c-b307-19cadbeb39a6)"), skipping: failed to "StartContainer" for "coredns" with RunContainerError: "failed to start container \"17889d2ae3ff70ecc9db60d415e1734c3db8ef70adf71a6be0688c36cf7128b8\": Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused \"process_linux.go:319: getting the final child's pid from pipe caused \\\"EOF\\\"\": unknown"
W0319 14:13:47.336641    7444 logs.go:132] Found kubelet problem: Mar 19 21:13:29 minikube kubelet[2217]: E0319 21:13:29.405088    2217 pod_workers.go:191] Error syncing pod 58eab7f3-6e3b-4042-8e5b-5e2b8d0df2d2 ("coredns-66bff467f8-n7xsm_kube-system(58eab7f3-6e3b-4042-8e5b-5e2b8d0df2d2)"), skipping: failed to "StartContainer" for "coredns" with RunContainerError: "failed to start container \"c0f8e07b8193f104b60ea2052f4032711d3efa2cf272ff0f10e1c19272584bf4\": Error response from daemon: OCI runtime create failed: container_linux.go:338: creating new parent process caused \"container_linux.go:1920: running lstat on namespace path \\\"/proc/4726/ns/ipc\\\" caused \\\"lstat /proc/4726/ns/ipc: no such file or directory\\\"\": unknown"
W0319 14:13:47.342414    7444 logs.go:132] Found kubelet problem: Mar 19 21:13:31 minikube kubelet[2217]: E0319 21:13:31.328557    2217 pod_workers.go:191] Error syncing pod 617508941319f2647e776f5e7675282b ("kube-apiserver-m01_kube-system(617508941319f2647e776f5e7675282b)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-m01_kube-system(617508941319f2647e776f5e7675282b)"
W0319 14:13:47.342550    7444 logs.go:132] Found kubelet problem: Mar 19 21:13:31 minikube kubelet[2217]: E0319 21:13:31.421937    2217 pod_workers.go:191] Error syncing pod 3c853d9152c593aabdb79b7b49733896 ("kube-scheduler-m01_kube-system(3c853d9152c593aabdb79b7b49733896)"), skipping: failed to "StartContainer" for "kube-scheduler" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-scheduler pod=kube-scheduler-m01_kube-system(3c853d9152c593aabdb79b7b49733896)"
W0319 14:13:47.342669    7444 logs.go:132] Found kubelet problem: Mar 19 21:13:31 minikube kubelet[2217]: E0319 21:13:31.426099    2217 pod_workers.go:191] Error syncing pod 86128be13eaf1aa4234cfb2fcf4bbebb ("etcd-m01_kube-system(86128be13eaf1aa4234cfb2fcf4bbebb)"), skipping: failed to "StartContainer" for "etcd" with CrashLoopBackOff: "back-off 10s restarting failed container=etcd pod=etcd-m01_kube-system(86128be13eaf1aa4234cfb2fcf4bbebb)"
W0319 14:13:47.342791    7444 logs.go:132] Found kubelet problem: Mar 19 21:13:31 minikube kubelet[2217]: E0319 21:13:31.549057    2217 pod_workers.go:191] Error syncing pod 106c465f-704e-414c-b97b-df723f68931f ("kindnet-j6gmw_kube-system(106c465f-704e-414c-b97b-df723f68931f)"), skipping: failed to "StartContainer" for "kindnet-cni" with CrashLoopBackOff: "back-off 10s restarting failed container=kindnet-cni pod=kindnet-j6gmw_kube-system(106c465f-704e-414c-b97b-df723f68931f)"
W0319 14:13:47.343060    7444 logs.go:132] Found kubelet problem: Mar 19 21:13:31 minikube kubelet[2217]: E0319 21:13:31.819144    2217 pod_workers.go:191] Error syncing pod 44759d44-5487-4e7c-b307-19cadbeb39a6 ("coredns-66bff467f8-dnknz_kube-system(44759d44-5487-4e7c-b307-19cadbeb39a6)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "back-off 10s restarting failed container=coredns pod=coredns-66bff467f8-dnknz_kube-system(44759d44-5487-4e7c-b307-19cadbeb39a6)"
W0319 14:13:47.343323    7444 logs.go:132] Found kubelet problem: Mar 19 21:13:31 minikube kubelet[2217]: E0319 21:13:31.832747    2217 pod_workers.go:191] Error syncing pod 58eab7f3-6e3b-4042-8e5b-5e2b8d0df2d2 ("coredns-66bff467f8-n7xsm_kube-system(58eab7f3-6e3b-4042-8e5b-5e2b8d0df2d2)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "back-off 10s restarting failed container=coredns pod=coredns-66bff467f8-n7xsm_kube-system(58eab7f3-6e3b-4042-8e5b-5e2b8d0df2d2)"
W0319 14:13:47.362071    7444 logs.go:132] Found kubelet problem: Mar 19 21:13:39 minikube kubelet[5658]: E0319 21:13:39.525004    5658 pod_workers.go:191] Error syncing pod 617508941319f2647e776f5e7675282b ("kube-apiserver-m01_kube-system(617508941319f2647e776f5e7675282b)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-m01_kube-system(617508941319f2647e776f5e7675282b)"
W0319 14:13:47.364355    7444 logs.go:132] Found kubelet problem: Mar 19 21:13:42 minikube kubelet[5658]: E0319 21:13:42.591314    5658 pod_workers.go:191] Error syncing pod 617508941319f2647e776f5e7675282b ("kube-apiserver-m01_kube-system(617508941319f2647e776f5e7675282b)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-m01_kube-system(617508941319f2647e776f5e7675282b)"
I0319 14:13:47.372901    7444 logs.go:117] Gathering logs for kube-apiserver [c84203e8819d] ...
W0319 14:13:47.576632    7444 logs.go:132] Found kube-apiserver [c84203e8819d] problem: Error: failed to create listener: failed to listen on 0.0.0.0:8443: listen tcp 0.0.0.0:8443: bind: address already in use
I0319 14:13:47.594694    7444 logs.go:117] Gathering logs for etcd [b018b63147bd] ...
I0319 14:13:47.785850    7444 logs.go:117] Gathering logs for etcd [ab8e14965ab9] ...
I0319 14:13:48.139229    7444 logs.go:117] Gathering logs for storage-provisioner [6d76c377fdbf] ...
I0319 14:13:48.407146    7444 logs.go:117] Gathering logs for coredns [c0f8e07b8193] ...
I0319 14:13:48.568689    7444 logs.go:117] Gathering logs for kube-scheduler [367928ad3bee] ...
I0319 14:13:48.739302    7444 logs.go:117] Gathering logs for kube-proxy [1bfca5f1ff5e] ...
❌  Problems detected in kube-apiserver [c84203e8819d]:
    Error: failed to create listener: failed to listen on 0.0.0.0:8443: listen tcp 0.0.0.0:8443: bind: address already in use
❌  Problems detected in kubelet:
    Mar 19 21:13:31 minikube kubelet[2217]: E0319 21:13:31.549057    2217 pod_workers.go:191] Error syncing pod 106c465f-704e-414c-b97b-df723f68931f ("kindnet-j6gmw_kube-system(106c465f-704e-414c-b97b-df723f68931f)"), skipping: failed to "StartContainer" for "kindnet-cni" with CrashLoopBackOff: "back-off 10s restarting failed container=kindnet-cni pod=kindnet-j6gmw_kube-system(106c465f-704e-414c-b97b-df723f68931f)"
    Mar 19 21:13:31 minikube kubelet[2217]: E0319 21:13:31.819144    2217 pod_workers.go:191] Error syncing pod 44759d44-5487-4e7c-b307-19cadbeb39a6 ("coredns-66bff467f8-dnknz_kube-system(44759d44-5487-4e7c-b307-19cadbeb39a6)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "back-off 10s restarting failed container=coredns pod=coredns-66bff467f8-dnknz_kube-system(44759d44-5487-4e7c-b307-19cadbeb39a6)"
    Mar 19 21:13:31 minikube kubelet[2217]: E0319 21:13:31.832747    2217 pod_workers.go:191] Error syncing pod 58eab7f3-6e3b-4042-8e5b-5e2b8d0df2d2 ("coredns-66bff467f8-n7xsm_kube-system(58eab7f3-6e3b-4042-8e5b-5e2b8d0df2d2)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "back-off 10s restarting failed container=coredns pod=coredns-66bff467f8-n7xsm_kube-system(58eab7f3-6e3b-4042-8e5b-5e2b8d0df2d2)"
    Mar 19 21:13:39 minikube kubelet[5658]: E0319 21:13:39.525004    5658 pod_workers.go:191] Error syncing pod 617508941319f2647e776f5e7675282b ("kube-apiserver-m01_kube-system(617508941319f2647e776f5e7675282b)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-m01_kube-system(617508941319f2647e776f5e7675282b)"
    Mar 19 21:13:42 minikube kubelet[5658]: E0319 21:13:42.591314    5658 pod_workers.go:191] Error syncing pod 617508941319f2647e776f5e7675282b ("kube-apiserver-m01_kube-system(617508941319f2647e776f5e7675282b)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-m01_kube-system(617508941319f2647e776f5e7675282b)"
I0319 14:13:58.934890    7444 kverify.go:101] 9 kube-system pods found
I0319 14:13:58.934918    7444 kverify.go:110] duration metric: took 21.634743481s to wait for pod list to return data ...
I0319 14:13:59.979578    7444 ops.go:35] apiserver oom_adj: -16
I0319 14:13:59.979624    7444 kubeadm.go:303] restartCluster took 24.544232971s
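
If the theory is that the second start races against control-plane processes still bound to their ports, a pre-flight check along these lines would make the conflict visible before kubeadm runs (purely a hypothetical sketch, not the change that went into #7125; it assumes the container is named "minikube" and that ss is present in the image, and only checks the ports seen in the logs above):

for port in 2380 8443 10251; do
  # Ask the running container whether anything is still listening on this port.
  if docker exec minikube sh -c "ss -ltn | grep -q ':$port '"; then
    echo "port $port is still bound inside the container; the previous control plane is likely still running"
  fi
done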
