
TestAddons/parallel/Registry failing on none: wget: bad address 'registry.kube-system.svc.cluster.local' #5926

Closed
tstromberg opened this issue Nov 15, 2019 · 2 comments
Labels: area/testing, kind/failing-test, lifecycle/stale, priority/important-longterm

Comments

tstromberg (Contributor) commented Nov 15, 2019

The registry addon hasn't changed in months, but start timings have:

--- FAIL: TestAddons (83.61s)
    addons_test.go:46: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --wait=false --memory=2600 --alsologtostderr -v=1 --addons=ingress --addons=registry --vm-driver=none 
    addons_test.go:46: (dbg) Done: out/minikube-linux-amd64 start -p minikube --wait=false --memory=2600 --alsologtostderr -v=1 --addons=ingress --addons=registry --vm-driver=none : (22.104702021s)
    --- FAIL: TestAddons/parallel (45.08s)
        --- FAIL: TestAddons/parallel/Registry (45.08s)
            addons_test.go:152: registry stabilized in 6.901098124s
            addons_test.go:154: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
            helpers.go:268: "registry-4r8zx" [4c39e360-fb60-48bd-a7c6-a6a164a0d6de] Pending
            helpers.go:268: "registry-4r8zx" [4c39e360-fb60-48bd-a7c6-a6a164a0d6de] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
            helpers.go:268: "registry-4r8zx" [4c39e360-fb60-48bd-a7c6-a6a164a0d6de] Running
            addons_test.go:154: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 10.012248147s
            addons_test.go:157: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
            helpers.go:268: "registry-proxy-vpgmp" [75c22d11-d7a1-4360-97f8-2167673f24c7] Running
            addons_test.go:157: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.005989006s
            addons_test.go:162: (dbg) Run:  kubectl --context minikube delete po -l run=registry-test --now
            addons_test.go:167: (dbg) Run:  kubectl --context minikube run --rm registry-test --restart=Never --image=busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
            addons_test.go:167: (dbg) Non-zero exit: kubectl --context minikube run --rm registry-test --restart=Never --image=busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (3.613896419s)
                -- stdout --
                wget: bad address 'registry.kube-system.svc.cluster.local'
                pod "registry-test" deleted
                
                -- /stdout --
                ** stderr ** 
                pod default/registry-test terminated (Error)
                
                ** /stderr **
            addons_test.go:169: [kubectl --context minikube run --rm registry-test --restart=Never --image=busybox -it -- sh -c wget --spider -S http://registry.kube-system.svc.cluster.local] failed: exit status 1
            addons_test.go:173: curl = "wget: bad address 'registry.kube-system.svc.cluster.local'\r\npod \"registry-test\" deleted\n", want *HTTP/1.1 200*
            addons_test.go:177: (dbg) Run:  out/minikube-linux-amd64 -p minikube ip
            addons_test.go:206: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable registry --alsologtostderr -v=1
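
For anyone trying to reproduce this outside the test harness, a minimal sketch of the same check (service name, image, and flags taken from the command above; the added nslookup step is just to separate a DNS failure from an HTTP failure):

    # busybox wget prints "bad address" when name resolution fails, so run
    # nslookup first to confirm whether this is a DNS problem rather than
    # an unreachable registry.
    kubectl --context minikube run --rm registry-test --restart=Never --image=busybox -it -- \
        sh -c "nslookup registry.kube-system.svc.cluster.local; \
               wget --spider -S http://registry.kube-system.svc.cluster.local"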

Based on this output:

   addons_test.go:82: (dbg) kubectl --context minikube get po -A --show-labels:
        NAMESPACE     NAME                                        READY   STATUS        RESTARTS   AGE   LABELS
        kube-system   coredns-5644d7b6d9-jsczk                    1/1     Running       0          45s   k8s-app=kube-dns,pod-template-hash=5644d7b6d9
        kube-system   coredns-5644d7b6d9-z2zgg                    1/1     Running       0          45s   k8s-app=kube-dns,pod-template-hash=5644d7b6d9
        kube-system   kube-proxy-5dbf7                            1/1     Running       0          45s   controller-revision-hash=56ffd4ff47,k8s-app=kube-proxy,pod-template-generation=1
        kube-system   nginx-ingress-controller-6fc5bcc8c9-w9dbw   1/1     Running       0          43s   addonmanager.kubernetes.io/mode=Reconcile,app.kubernetes.io/name=nginx-ingress-controller,app.kubernetes.io/part-of=kube-system,pod-template-hash=6fc5bcc8c9
        kube-system   registry-4r8zx                              1/1     Terminating   0          44s   actual-registry=true,addonmanager.kubernetes.io/mode=Reconcile,kubernetes.io/minikube-addons=registry
        kube-system   registry-proxy-vpgmp                        1/1     Terminating   0          44s   addonmanager.kubernetes.io/mode=Reconcile,controller-revision-hash=675799b8c9,kubernetes.io/minikube-addons=registry,pod-template-generation=1,registry-proxy=true
        kube-system   storage-provisioner                         1/1     Running       0          44s   addonmanager.kubernetes.io/mode=Reconcile,integration-test=storage-provisioner
    addons_test.go:82: (dbg) Run:  kubectl --context minikube describe node

and this output:

     Non-terminated Pods:         (7 in total)
          Namespace                  Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
          ---------                  ----                                         ------------  ----------  ---------------  -------------  ---
          kube-system                coredns-5644d7b6d9-jsczk                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     45s
          kube-system                coredns-5644d7b6d9-z2zgg                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     45s
          kube-system                kube-proxy-5dbf7                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
          kube-system                nginx-ingress-controller-6fc5bcc8c9-w9dbw    0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s
          kube-system                registry-4r8zx                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
          kube-system                registry-proxy-vpgmp                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
          kube-system                storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s

My theory is that the registry pod was left over from a previous test and was being restarted. Are we not deleting Kubernetes pods when we shut down the none driver?
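
One way to check that theory (a sketch; the label selector and context name come from the pod listing above): if pods leaked from a previous run, their creation timestamps would predate the new cluster's start.

    # List registry pods with their creation times; timestamps older than
    # the current `minikube start` would indicate leftovers from a prior run.
    kubectl --context minikube get pods -n kube-system \
        -l kubernetes.io/minikube-addons=registry \
        -o custom-columns=NAME:.metadata.name,CREATED:.metadata.creationTimestamp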

@tstromberg changed the title from "TestAddons/parallel/Registry failing: wget: bad address 'registry.kube-system.svc.cluster.local'" to "TestAddons/parallel/Registry failing on none: wget: bad address 'registry.kube-system.svc.cluster.local'" Nov 15, 2019
@tstromberg added area/testing, kind/bug, kind/failing-test, and priority/important-soon and removed kind/bug labels Nov 20, 2019
@tstromberg added priority/important-longterm and removed priority/important-soon labels Dec 9, 2019
fejta-bot commented:

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Mar 8, 2020
tstromberg (Contributor, Author) commented:

Closing as this hasn't been confirmed in some time.
