none upgrade: kubeadm exit status 1 (bind: address already in use) #5332

Closed

tstromberg opened this issue Sep 13, 2019 · 6 comments
Labels
kind/flake: Categorizes issue or PR as related to a flaky test.
lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.
priority/important-longterm: Important over the long term, but may not be staffed and/or may need multiple releases to complete.

Comments

@tstromberg
Contributor

Probable test flake: https://storage.googleapis.com/minikube-builds/logs/5324/none_Linux.txt

                W0913 03:11:55.412882   24049 exit.go:99] Error restarting cluster: addon phase: running command: sudo env PATH=/var/lib/minikube/binaries/v1.10.13:$PATH kubeadm alpha phase addon all --config /var/tmp/minikube/kubeadm.yaml: exit status 1
                * 
                X Error restarting cluster: addon phase: running command: sudo env PATH=/var/lib/minikube/binaries/v1.10.13:$PATH kubeadm alpha phase addon all --config /var/tmp/minikube/kubeadm.yaml: exit status 1
                * 
                * Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
                  - https://github.com/kubernetes/minikube/issues/new/choose
                
            start_stop_delete_test.go:123: [out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=8 --cache-images=false --kubernetes-version=v1.10.13 --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --container-runtime=docker --vm-driver=none ] failed: exit status 70
            panic.go:406: TestStartStop/group/docker failed, collecting logs ...

Based on the output, I suspect we are not cleaning up the minikube deployment set up by a previous test:

                * failed to create listener: failed to listen on 0.0.0.0:8443: listen tcp 0.0.0.0:8443: bind: address already in use
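
As a rough illustration of what that error means in practice, a pre-flight port probe like the minimal sketch below (this is not minikube's actual code; the checkPortFree helper and the hard-coded port list are assumptions) would surface a stale control-plane process left over from a previous run before kubeadm ever hits the bind error:

    // Minimal sketch, not minikube's implementation: probe the ports the
    // control-plane components need and report anything already bound.
    package main

    import (
        "fmt"
        "net"
    )

    // checkPortFree returns an error if something is already listening on the port.
    func checkPortFree(port int) error {
        ln, err := net.Listen("tcp", fmt.Sprintf("0.0.0.0:%d", port))
        if err != nil {
            return fmt.Errorf("port %d is already in use (stale process from a previous run?): %v", port, err)
        }
        return ln.Close()
    }

    func main() {
        // 8443 is the apiserver port in the failure above; 10251 and 10252 are
        // the kube-scheduler and kube-controller-manager ports seen later in this issue.
        for _, port := range []int{8443, 10251, 10252} {
            if err := checkPortFree(port); err != nil {
                fmt.Println(err)
            }
        }
    }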
@tstromberg added the kind/flake and needs-solution-message labels Sep 13, 2019
@tstromberg added this to the v1.5.0-candidate milestone Sep 13, 2019
@tstromberg changed the title from "none: TestStartStop/group/docker:" to "none: TestStartStop/group/docker: bkubeadm alpha phase addon all: exit status 1" Sep 13, 2019
@tstromberg changed the title from "none: TestStartStop/group/docker: bkubeadm alpha phase addon all: exit status 1" to "none: TestStartStop/group/docker: kubeadm alpha phase addon all: exit status 1" Sep 13, 2019
@tstromberg added the priority/important-soon label and removed the needs-solution-message label Sep 16, 2019
@tstromberg changed the title from "none: TestStartStop/group/docker: kubeadm alpha phase addon all: exit status 1" to "none: TestStartStop/group/docker & TestVersionUpgrade: kubeadm alpha phase addon all: exit status 1" Sep 16, 2019
@tstromberg
Contributor Author

v1.4.0 updates this to mention the following:

        * Relaunching Kubernetes using kubeadm ... 
        * Problems detected in kube-controller-manager [69da8169e08c]:
          - failed to create listener: failed to listen on 0.0.0.0:10252: listen tcp 0.0.0.0:10252: bind: address already in use
        * Problems detected in kube-scheduler [96f00f4575ce]:
          - failed to create listener: failed to listen on 0.0.0.0:10251: listen tcp 0.0.0.0:10251: bind: address already in use   
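
One way to act on the cleanup hypothesis from the first comment is sketched below. This is not the project's test harness; the binary path is taken from the test command quoted earlier, and the teardown-then-probe flow is only an illustration. With --vm-driver=none the control-plane components run directly on the host, so a skipped or failed teardown would leave kube-scheduler and kube-controller-manager bound to 10251 and 10252, matching the output above:

    // Rough sketch of a cleanup step a test could run between clusters.
    package main

    import (
        "fmt"
        "net"
        "os/exec"
    )

    func main() {
        // Tear down whatever a previous test run left behind.
        if out, err := exec.Command("out/minikube-linux-amd64", "delete").CombinedOutput(); err != nil {
            fmt.Printf("minikube delete failed: %v\n%s\n", err, out)
        }

        // Verify the scheduler and controller-manager ports actually freed up;
        // if not, a stale host process is still holding them.
        for _, port := range []int{10251, 10252} {
            ln, err := net.Listen("tcp", fmt.Sprintf("0.0.0.0:%d", port))
            if err != nil {
                fmt.Printf("port %d is still in use after delete: %v\n", port, err)
                continue
            }
            ln.Close()
        }
    }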

@tstromberg changed the title from "none: TestStartStop/group/docker & TestVersionUpgrade: kubeadm alpha phase addon all: exit status 1" to "none upgrade: kubeadm exit status 1 (bind: address already in use)" Sep 23, 2019
@tstromberg modified the milestones: v1.5.0, v1.6.0-candidate Oct 14, 2019
@tstromberg modified the milestones: v1.6.0, v1.7.0-candidate Oct 30, 2019
@tstromberg removed this from the v1.7.0-candidate milestone Dec 9, 2019
@tstromberg added the priority/important-longterm label and removed the priority/important-soon label Dec 9, 2019
@medyagh
Member

medyagh commented Dec 16, 2019

There is a small possibility that this has been fixed by v1.6.1; if it is not seen again, we could close it.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Mar 8, 2020
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label Apr 7, 2020
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
