Docker_Linux: TestStartStop/group/containerd: listen tcp 0.0.0.0:8444: bind: address already in use #7521
Labels
kind/flake — Categorizes issue or PR as related to a flaky test.
priority/important-soon — Must be staffed and worked on either currently, or very soon, ideally in time for the next release.
tstromberg added the kind/flake and priority/important-soon labels on Apr 8, 2020
tstromberg added and then removed the kind/failing-test label on Apr 8, 2020
this is a dupe of #7505
here is the root cause: unable to stop kubelet
so, one step closer!
OK, I found the full root cause! You cannot `crictl rm` a container before stopping it; sent a PR!
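The fix described in that comment is an ordering constraint: a running container has to be stopped before it can be removed. A minimal sketch of the corrected sequence ("abc123" is a placeholder container ID, not one from this issue):

```shell
# A running container must be stopped before 'crictl rm' succeeds.
# CID is a hypothetical placeholder, not a real container ID.
CID=abc123
if command -v crictl >/dev/null 2>&1; then
  crictl stop "$CID"   # stop the container first
  crictl rm "$CID"     # removal only succeeds once it is stopped
else
  echo "crictl not installed; skipping"
fi
```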
https://storage.googleapis.com/minikube-builds/logs/7516/5110da9/Docker_Linux.html#fail_TestStartStop%2fgroup%2fcontainerd
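When reproducing this locally, a quick way to see whether something already holds the port is to attempt a TCP connection with bash's `/dev/tcp` redirection (8444 is the port from the error in the title, but otherwise arbitrary):

```shell
# Check whether anything is listening on the port from the bind failure.
PORT=8444
if (exec 3<>"/dev/tcp/127.0.0.1/$PORT") 2>/dev/null; then
  echo "port $PORT is in use (something is listening)"
else
  echo "port $PORT is free"
fi
```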
stdout shows some initial trouble with restarting:
Pods show that the apiserver restarted, but the dashboard didn't. It appears we lost state when restarting the apiserver?
This sounds like a race-condition bug: we assert health against the previous apiserver PID, but the apiserver restarts while we are enabling the dashboard. We guard against this for Kubernetes upgrades by asserting that the apiserver is running the right version, but we don't yet guard against it in other cases.
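One way to guard against this kind of race could be to record the apiserver PID before the operation and verify it afterwards. A hypothetical sketch (the function names are illustrative, not minikube's actual code):

```shell
# Hypothetical restart-detection guard, not minikube's real implementation:
# capture the apiserver PID before the addon step and compare afterwards.
apiserver_pid() { pgrep -o kube-apiserver || echo none; }

before=$(apiserver_pid)
# ... enable the dashboard addon here (placeholder step) ...
after=$(apiserver_pid)

if [ "$before" != "$after" ]; then
  echo "apiserver restarted mid-operation; retry the addon step"
fi
```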