
TestStartStop/group/embed-certs on Docker: "k8s-app=kubernetes-dashboard" failed to start within 9m0s #7921

Closed · priyawadhwa opened this issue Apr 27, 2020 · 2 comments · Fixed by #8035

Labels: kind/failing-test, priority/important-soon
Milestone: v1.10.0

@priyawadhwa

start_stop_delete_test.go:141: (dbg) TestStartStop/group/embed-certs: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:141: ***** TestStartStop/group/embed-certs: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: timed out waiting for the condition ****

As seen here
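
For reference, a roughly equivalent manual check from the command line (a sketch only; the Go test harness uses its own pod-wait helper, not kubectl):

$ kubectl wait --namespace kubernetes-dashboard \
    --for=condition=Ready pod \
    --selector k8s-app=kubernetes-dashboard \
    --timeout=9m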

@priyawadhwa priyawadhwa added the kind/failing-test Categorizes issue or PR as related to a consistently or frequently failing test. label Apr 27, 2020
@priyawadhwa priyawadhwa added this to the v1.10.0 milestone Apr 27, 2020
@tstromberg tstromberg changed the title failing test: TestStartStop/group/embed-certs TestStartStop/group/embed-certs: "k8s-app=kubernetes-dashboard" failed to start within 9m0s Apr 28, 2020
@tstromberg tstromberg changed the title TestStartStop/group/embed-certs: "k8s-app=kubernetes-dashboard" failed to start within 9m0s TestStartStop/group/embed-certs on Docker: "k8s-app=kubernetes-dashboard" failed to start within 9m0s Apr 28, 2020
@tstromberg tstromberg added the priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. label Apr 28, 2020
@medyagh (Member) commented May 6, 2020

@tstromberg @priyawadhwa @sharifelgamal

I believe this failing test is telling us that minikube does NOT preserve the busybox workload that was deployed before the stop. In other words, if you deploy an app (busybox) and then stop minikube, the app is gone after the next start.

I verified this flake at HEAD with the docker driver on macOS:

$ ./out/minikube start --driver=docker
$ kc apply -f test/integration/testdata/busybox.yaml
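
(kc here is presumably an alias for kubectl.) For readers without the repo checked out, a minimal manifest along these lines would exercise the same path; the actual contents of test/integration/testdata/busybox.yaml may differ:

$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - name: busybox
    image: busybox:1.28
    command: ["sleep", "3600"]
EOF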

$ kc get pods -A
NAMESPACE     NAME                               READY   STATUS    RESTARTS   AGE
default       busybox                            1/1     Running   0          34s
kube-system   coredns-66bff467f8-7pmmc           1/1     Running   0          42s
kube-system   coredns-66bff467f8-g8r9l           1/1     Running   0          42s
kube-system   etcd-minikube                      1/1     Running   0          57s
kube-system   kube-apiserver-minikube            1/1     Running   0          57s
kube-system   kube-controller-manager-minikube   1/1     Running   0          57s
kube-system   kube-proxy-6rmx5                   1/1     Running   0          42s
kube-system   kube-scheduler-minikube            1/1     Running   0          57s
kube-system   storage-provisioner                1/1     Running   0          57s


$ ./out/minikube stop


$ ./out/minikube start
😄  minikube v1.10.0-beta.2 on Darwin 10.13.6
✨  Using the docker driver based on existing profile
👍  Starting control plane node minikube in cluster minikube
🔄  Restarting existing docker container for "minikube" ...
🐳  Preparing Kubernetes v1.18.1 on Docker 19.03.2 ...
    ▪ kubeadm.pod-network-cidr=10.244.0.0/16
🤦  Unable to restart cluster, will reset it: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.1:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
stdout:
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file

stderr:
W0506 00:17:46.655985     852 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
error execution phase kubeconfig/controller-manager: failed to find CurrentContext in Contexts of the kubeconfig file /etc/kubernetes/controller-manager.conf
To see the stack trace of this error execute with --v=5 or higher

🌟  Enabled addons: default-storageclass, storage-provisioner
🏄  Done! kubectl is now configured to use "minikube"


$ kc get pods -A
NAMESPACE     NAME                               READY   STATUS    RESTARTS   AGE
kube-system   coredns-66bff467f8-5qff2           1/1     Running   0          65s
kube-system   coredns-66bff467f8-zc982           1/1     Running   0          65s
kube-system   etcd-minikube                      1/1     Running   0          80s
kube-system   kube-apiserver-minikube            1/1     Running   0          80s
kube-system   kube-controller-manager-minikube   1/1     Running   0          80s
kube-system   kube-proxy-sqtxh                   1/1     Running   0          65s
kube-system   kube-scheduler-minikube            1/1     Running   0          80s
kube-system   storage-provisioner                1/1     Running   1          80s

I believe this could be caused by Docker changing the container IP on each restart.
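
One way to check that hypothesis would be to compare the container IP across a restart (a sketch, assuming the cluster container is named "minikube"):

$ docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' minikube
# note the address, then restart and compare
$ ./out/minikube stop && ./out/minikube start
$ docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' minikube
# a different address here would confirm the IP changed across the restart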

@medyagh (Member) commented May 6, 2020

This busybox integration test might fail every time Docker changes the container IP after a stop, until we create our own network subnets. See #7756.
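
For illustration, the general shape of that fix with plain Docker might look like the following; the network name, subnet, and address are hypothetical, and #7756 tracks the actual implementation:

$ docker network create --subnet=192.168.49.0/24 minikube-net
$ docker network connect --ip 192.168.49.2 minikube-net minikube
# a container attached with a static --ip on a user-defined network
# keeps the same address across restarts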
