
Docker_Linux: TestOffline/group/crio: missing components: kube-dns #7519

Closed · opened Apr 8, 2020 by tstromberg (Contributor) · 3 comments

Labels: kind/failing-test (a consistently or frequently failing test), priority/important-soon (must be staffed and worked on currently or very soon, ideally in time for the next release)
Milestone: v1.10.0

tstromberg commented Apr 8, 2020
https://storage.googleapis.com/minikube-builds/logs/7516/5110da9/Docker_Linux.html

--- FAIL: TestOffline/group/crio (580.65s)

aab_offline_test.go:53: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-20200408T092706.125460795-5291 --alsologtostderr -v=1 --memory=2000 --wait=true --container-runtime crio --driver=docker 

aab_offline_test.go:53: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p offline-crio-20200408T092706.125460795-5291 --alsologtostderr -v=1 --memory=2000 --wait=true --container-runtime crio --driver=docker : exit status 70 (9m31.319046215s)

-- stdout --
	* [offline-crio-20200408T092706.125460795-5291] minikube v1.9.2 on Debian 9.12
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-7516-2780-5110da968d365cf5ec9e61c4e4d398707592c848/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-7516-2780-5110da968d365cf5ec9e61c4e4d398707592c848/.minikube
	  - MINIKUBE_LOCATION=7516
	* Using the docker driver based on user configuration
	* Starting control plane node offline-crio-20200408T092706.125460795-5291 in cluster offline-crio-20200408T092706.125460795-5291
	* Pulling base image ...
	* Creating Kubernetes in docker container with (CPUs=2) (8 available), Memory=2000MB (30163MB available) ...
	* Found network options:
	  - HTTP_PROXY=172.16.1.1:1
	* Preparing Kubernetes v1.18.0 on CRI-O 1.17.0 ...
	  - kubeadm.pod-network-cidr=10.244.0.0/16
	* Enabling addons: default-storageclass, storage-provisioner

Looks like we hit the 6-minute timeout waiting for CoreDNS to roll out:

	I0408 09:30:05.914167    6997 system_pods.go:64] "coredns-66bff467f8-j64jj" [71015865-ca75-4741-827e-2a1eac304f38] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
...
	I0408 09:36:37.441668    6997 system_pods.go:89] found pod: "coredns-66bff467f8-t4cp6" [3711c317-6dae-40e5-8eba-ea022e0eb5ff] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
...
	W0408 09:36:37.442042    6997 exit.go:101] startup failed: Wait failed: waiting for apps_running: waitings for k8s app running: missing components: kube-dns

The affected pods show up in the pod listing, stuck in ContainerCreating:

	NAMESPACE     NAME                                                                  READY   STATUS              RESTARTS   AGE     LABELS

	kube-system   coredns-66bff467f8-j64jj                                              0/1     ContainerCreating   0          6m54s   k8s-app=kube-dns,pod-template-hash=66bff467f8

	kube-system   coredns-66bff467f8-t4cp6                                              0/1     ContainerCreating   0          6m54s   k8s-app=kube-dns,pod-template-hash=66bff467f8

The logs don't make it clear why the containers are not ready.
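
For the next occurrence, a few diagnostics along these lines should surface why the CoreDNS pods never become ready. This is only a sketch: substitute the generated test profile name, and it assumes the CRI-O service unit inside the node is named crio.

	# Substitute the profile the test generated, e.g. offline-crio-20200408T092706.125460795-5291
	PROFILE=offline-crio-20200408T092706.125460795-5291

	# Is the CoreDNS deployment rolling out at all, and what do the pod events say?
	kubectl --context "$PROFILE" -n kube-system rollout status deployment/coredns --timeout=6m
	kubectl --context "$PROFILE" -n kube-system describe pods -l k8s-app=kube-dns

	# Ask the container runtime inside the node what it is doing
	minikube -p "$PROFILE" ssh -- sudo crictl ps -a
	minikube -p "$PROFILE" ssh -- sudo journalctl -u crio --no-pager

	# Grab the full log bundle for the report
	minikube -p "$PROFILE" logs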

tstromberg added the kind/failing-test label and removed the kind/flake label Apr 8, 2020
tstromberg added this to the v1.10.0 milestone Apr 8, 2020

tstromberg (Author) commented

NOTE: I was not able to reproduce this locally on macOS.
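
One way to approximate the offline scenario locally is to mirror the flags and the unreachable HTTP_PROXY from the failing run above. This is a sketch: the profile name is made up, and a full repro may also require the images to already be cached locally, which the sketch does not handle.

	# Hypothetical local repro: same flags as the failing run, with HTTP_PROXY pointed
	# at the unreachable address the test uses to simulate being offline.
	export HTTP_PROXY=172.16.1.1:1
	out/minikube-linux-amd64 start -p offline-crio-repro --alsologtostderr -v=1 \
	  --memory=2000 --wait=true --container-runtime crio --driver=docker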

medyagh (Member) commented Apr 8, 2020

@tstromberg did you try with --wait=true?

priyawadhwa added the priority/important-soon label Apr 15, 2020
medyagh (Member) commented Apr 15, 2020

Fixed by my wait PR.

medyagh closed this as completed Apr 15, 2020