
TestFunctional Failure PersistentVolumeClaim #6363

Closed · medyagh opened this issue on Jan 22, 2020 · 0 comments · Fixed by #6440
Assignee: tstromberg
Labels: kind/bug, kind/failing-test, priority/important-soon
Milestone: v1.7.0

medyagh (Member) commented on Jan 22, 2020:

I have seen consistent failures of this test:

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (509.31s)
fn_pvc.go:40: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers.go:260: "storage-provisioner" [3012cc7f-e9af-4fbc-b1de-99bf97cc3bf3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
helpers.go:260: "storage-provisioner" [3012cc7f-e9af-4fbc-b1de-99bf97cc3bf3] Running
fn_pvc.go:40: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 1m47.020411317s
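
For context, the wait logged above is essentially a poll for any Running pod matching the label selector. A minimal sketch with a recent client-go (the package name, helper name, interval, and error handling are assumptions, not the actual fn_pvc.go code):

```go
// Hypothetical reconstruction of the label-selector wait; a sketch, not minikube's code.
package fnpvc

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPods blocks until at least one pod matching selector in ns is Running,
// or the timeout expires.
func waitForPods(c kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pods, err := c.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return false, nil // treat transient API errors as "not ready yet"
		}
		for _, p := range pods.Items {
			if p.Status.Phase == corev1.PodRunning {
				return true, nil
			}
		}
		return false, nil
	})
}
```
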
fn_pvc.go:45: (dbg) Run:  kubectl --context functional-20200121T180545.468672125-11591 get storageclass -o=json  (repeated 11 times)
fn_pvc.go:61: no default storage class after retry: no storageclass yet
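
The retry that fails with "no storageclass yet" boils down to polling until some StorageClass carries the standard default-class annotation. Continuing the sketch above (same imports; the helper itself is hypothetical):

```go
// defaultStorageClass polls for a StorageClass annotated as the default,
// mirroring the "no storageclass yet" retry in the log above.
func defaultStorageClass(c kubernetes.Interface, timeout time.Duration) (string, error) {
	var name string
	err := wait.PollImmediate(5*time.Second, timeout, func() (bool, error) {
		scs, err := c.StorageV1().StorageClasses().List(context.TODO(), metav1.ListOptions{})
		if err != nil || len(scs.Items) == 0 {
			return false, nil // no storageclass yet -- keep retrying
		}
		for _, sc := range scs.Items {
			if sc.Annotations["storageclass.kubernetes.io/is-default-class"] == "true" {
				name = sc.Name
				return true, nil
			}
		}
		return false, nil
	})
	return name, err
}
```

An empty list is treated as "not yet" rather than an error, which is why the test retried eleven times before giving up.
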
fn_pvc.go:65: (dbg) Run:  kubectl --context functional-20200121T180545.468672125-11591 apply -f testdata/pvc.yaml
fn_pvc.go:71: (dbg) Run:  kubectl --context functional-20200121T180545.468672125-11591 get pvc testpvc -o=json  (repeated 11 times)
fn_pvc.go:87: PV Creation failed with error: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:testpvc GenerateName: Namespace:default SelfLink:/api/v1/namespaces/default/persistentvolumeclaims/testpvc UID:4667b335-bb01-44fd-8478-5a2767b3564f ResourceVersion:1056 Generation:0 CreationTimestamp:2020-01-21 18:14:26 -0800 PST DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"testpvc","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"2Gi"}}}}
] OwnerReferences:[] Initializers:nil Finalizers:[kubernetes.io/pvc-protection] ClusterName: ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:2147483648 scale:0} d:{Dec:<nil>} s:2Gi Format:BinarySI}]} VolumeName: StorageClassName:<nil> VolumeMode:0xc0007e0eb0 DataSource:nil} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[]}})
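
The final check that produced the `phase = "Pending", want "Bound"` failure polls the claim's status; again a sketch under the same assumptions:

```go
// waitForPVCBound polls the claim until it reports phase Bound -- the check
// that failed here with phase "Pending"; a sketch, not minikube's code.
func waitForPVCBound(c kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pvc, err := c.CoreV1().PersistentVolumeClaims(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // retry on transient errors
		}
		return pvc.Status.Phase == corev1.ClaimBound, nil
	})
}
```

With no default StorageClass present, the dynamic provisioner never picks the claim up, so it stays Pending until the test times out.
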
functional_test.go:116: *** TestFunctional FAILED at 2020-01-21 18:18:51.075112966 -0800 PST m=+2054.962614777
functional_test.go:116: >>> TestFunctional FAILED: start of post-mortem logs >>>
functional_test.go:116: (dbg) Run:  kubectl --context functional-20200121T180545.468672125-11591 get po -A --show-labels
functional_test.go:116: (dbg) kubectl --context functional-20200121T180545.468672125-11591 get po -A --show-labels:
NAMESPACE     NAME                               READY   STATUS      RESTARTS   AGE     LABELS
default       busybox                            1/1     Running     0          7m4s    integration-test=busybox
default       busybox-mount                      0/1     Completed   0          7m18s   integration-test=busybox-mount
default       hello-node-7676b5fb8d-z78lr        1/1     Running     0          8m16s   app=hello-node,pod-template-hash=7676b5fb8d
default       mysql-5787d7b5fc-zcd7d             1/1     Running     0          8m59s   app=mysql,pod-template-hash=5787d7b5fc
default       nginx-svc                          1/1     Running     0          8m49s   run=nginx-svc
kube-system   coredns-6955765f44-64bkh           1/1     Running     0          9m5s    k8s-app=kube-dns,pod-template-hash=6955765f44
kube-system   coredns-6955765f44-9mgxn           1/1     Running     0          9m5s    k8s-app=kube-dns,pod-template-hash=6955765f44
kube-system   etcd-minikube                      1/1     Running     0          9m10s   component=etcd,tier=control-plane
kube-system   kube-apiserver-minikube            1/1     Running     0          9m10s   component=kube-apiserver,tier=control-plane
kube-system   kube-controller-manager-minikube   1/1     Running     0          9m10s   component=kube-controller-manager,tier=control-plane
kube-system   kube-proxy-kh65x                   1/1     Running     0          9m4s    controller-revision-hash=68bd87b66,k8s-app=kube-proxy,pod-template-generation=1
kube-system   kube-scheduler-minikube            1/1     Running     0          9m10s   component=kube-scheduler,tier=control-plane
kube-system   storage-provisioner                1/1     Running     0          6m51s   addonmanager.kubernetes.io/mode=Reconcile,integration-test=storage-provisioner
functional_test.go:116: (dbg) Run:  kubectl --context functional-20200121T180545.468672125-11591 describe node
functional_test.go:116: (dbg) kubectl --context functional-20200121T180545.468672125-11591 describe node:
Name:               minikube
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=minikube
kubernetes.io/os=linux
node-role.kubernetes.io/master=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Tue, 21 Jan 2020 18:09:36 -0800
Taints:             <none>
Unschedulable:      false
Lease:
HolderIdentity:  minikube
AcquireTime:     <unset>
RenewTime:       Tue, 21 Jan 2020 18:18:42 -0800
Conditions:
Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
----             ------  -----------------                 ------------------                ------                       -------
MemoryPressure   False   Tue, 21 Jan 2020 18:16:45 -0800   Tue, 21 Jan 2020 18:09:30 -0800   KubeletHasSufficientMemory   kubelet has sufficient memory available
DiskPressure     False   Tue, 21 Jan 2020 18:16:45 -0800   Tue, 21 Jan 2020 18:09:30 -0800   KubeletHasNoDiskPressure     kubelet has no disk pressure
PIDPressure      False   Tue, 21 Jan 2020 18:16:45 -0800   Tue, 21 Jan 2020 18:09:30 -0800   KubeletHasSufficientPID      kubelet has sufficient PID available
Ready            True    Tue, 21 Jan 2020 18:16:45 -0800   Tue, 21 Jan 2020 18:09:51 -0800   KubeletReady                 kubelet is posting ready status
Addresses:
InternalIP:  192.168.39.240
Hostname:    minikube
Capacity:
cpu:                2
ephemeral-storage:  16954240Ki
hugepages-2Mi:      0
memory:             2375656Ki
pods:               110
Allocatable:
cpu:                2
ephemeral-storage:  16954240Ki
hugepages-2Mi:      0
memory:             2375656Ki
pods:               110
System Info:
Machine ID:                 9e1ecd1c6beb4d4d9b187101d3e1a258
System UUID:                9e1ecd1c-6beb-4d4d-9b18-7101d3e1a258
Boot ID:                    a26ff1b0-9282-44d8-929b-9d77c602e931
Kernel Version:             4.19.88
OS Image:                   Buildroot 2019.02.8
Operating System:           linux
Architecture:               amd64
Container Runtime Version:  docker://19.3.5
Kubelet Version:            v1.17.0
Kube-Proxy Version:         v1.17.0
Non-terminated Pods:          (12 in total)
Namespace                   Name                                CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
---------                   ----                                ------------  ----------  ---------------  -------------  ---
default                     busybox                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m4s
default                     hello-node-7676b5fb8d-z78lr         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m16s
default                     mysql-5787d7b5fc-zcd7d              0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m59s
default                     nginx-svc                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m49s
kube-system                 coredns-6955765f44-64bkh            100m (5%)     0 (0%)      70Mi (3%)        170Mi (7%)     9m5s
kube-system                 coredns-6955765f44-9mgxn            100m (5%)     0 (0%)      70Mi (3%)        170Mi (7%)     9m5s
kube-system                 etcd-minikube                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m10s
kube-system                 kube-apiserver-minikube             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m10s
kube-system                 kube-controller-manager-minikube    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m10s
kube-system                 kube-proxy-kh65x                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m4s
kube-system                 kube-scheduler-minikube             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m10s
kube-system                 storage-provisioner                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m51s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource           Requests    Limits
--------           --------    ------
cpu                750m (37%)  0 (0%)
memory             140Mi (6%)  340Mi (14%)
ephemeral-storage  0 (0%)      0 (0%)
Events:
Type    Reason                   Age                    From                  Message
----    ------                   ----                   ----                  -------
Normal  NodeHasNoDiskPressure    9m27s (x4 over 9m27s)  kubelet, minikube     Node minikube status is now: NodeHasNoDiskPressure
Normal  NodeHasSufficientPID     9m27s (x4 over 9m27s)  kubelet, minikube     Node minikube status is now: NodeHasSufficientPID
Normal  NodeHasSufficientMemory  9m26s (x5 over 9m27s)  kubelet, minikube     Node minikube status is now: NodeHasSufficientMemory
Normal  Starting                 9m10s                  kubelet, minikube     Starting kubelet.
Normal  NodeHasSufficientMemory  9m10s                  kubelet, minikube     Node minikube status is now: NodeHasSufficientMemory
Normal  NodeHasNoDiskPressure    9m10s                  kubelet, minikube     Node minikube status is now: NodeHasNoDiskPressure
Normal  NodeHasSufficientPID     9m10s                  kubelet, minikube     Node minikube status is now: NodeHasSufficientPID
Normal  NodeAllocatableEnforced  9m10s                  kubelet, minikube     Updated Node Allocatable limit across pods
Normal  Starting                 9m2s                   kube-proxy, minikube  Starting kube-proxy.
Normal  NodeReady                9m                     kubelet, minikube     Node minikube status is now: NodeReady
functional_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p functional-20200121T180545.468672125-11591 logs --problems
functional_test.go:116: (dbg) Done: out/minikube-linux-amd64 -p functional-20200121T180545.468672125-11591 logs --problems: (1.701280586s)
functional_test.go:116: TestFunctional logs: 
functional_test.go:116: <<< TestFunctional FAILED: end of post-mortem logs <<<
helpers.go:159: (dbg) Run:  out/minikube-linux-amd64 delete -p functional-20200121T180545.468672125-11591
helpers.go:159: (dbg) Done: out/minikube-linux-amd64 delete -p functional-20200121T180545.468672125-11591: (1.197494842s)
@medyagh added the kind/failing-test label on Jan 22, 2020
@priyawadhwa added the priority/important-soon and kind/support labels on Jan 22, 2020
@tstromberg added this to the v1.7.0 milestone on Jan 22, 2020
@tstromberg added the kind/bug label and removed the kind/support label on Jan 22, 2020
@tstromberg self-assigned this on Jan 30, 2020