
Flaky TestFunctional/parallel/ComponentHealth: pending #10130

Closed
lingsamuel opened this issue Jan 12, 2021 · 11 comments · Fixed by #10424
Assignees
Labels
kind/failing-test: Categorizes issue or PR as related to a consistently or frequently failing test.
kind/flake: Categorizes issue or PR as related to a flaky test.
priority/important-soon: Must be staffed and worked on either currently, or very soon, ideally in time for the next release.

Comments

@lingsamuel (Contributor) commented Jan 12, 2021

/kind failing-test
/kind flake


2021-01-12T08:54:30.1684190Z === CONT  TestFunctional/parallel/ComponentHealth
2021-01-12T08:54:30.1685461Z     functional_test.go:390: etcd phase: Running
2021-01-12T08:54:30.1687029Z     functional_test.go:390: control-plane phase: Running
2021-01-12T08:54:30.1687925Z     functional_test.go:390: kube-apiserver phase: Pending
2021-01-12T08:54:30.1689403Z     functional_test.go:395: kube-apiserver is not Running: {Phase:Pending Conditions:[] Message: Reason: HostIP: PodIP: StartTime:<nil> ContainerStatuses:[]}
2021-01-12T08:54:30.1690567Z     functional_test.go:390: control-plane phase: Pending
2021-01-12T08:54:30.1691459Z     functional_test.go:390: kube-controller-manager phase: Running

2021-01-12T08:54:32.4341001Z         	* ==> kubelet <==
2021-01-12T08:54:32.4341664Z         	* -- Logs begin at Tue 2021-01-12 08:53:06 UTC, end at Tue 2021-01-12 08:54:32 UTC. --
2021-01-12T08:54:32.4344563Z         	* Jan 12 08:54:23 functional-20210112085224-2630 kubelet[5095]: I0112 08:54:23.848127    5095 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/dbabc6fcaa35495e1aa10964d680d780-k8s-certs") pod "kube-apiserver-functional-20210112085224-2630" (UID: "dbabc6fcaa35495e1aa10964d680d780")
2021-01-12T08:54:32.4356393Z         	* Jan 12 08:54:23 functional-20210112085224-2630 kubelet[5095]: I0112 08:54:23.848141    5095 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-ca-certificates" (UniqueName: "kubernetes.io/host-path/a3e7be694ef7cf952503c5d331abc0ac-etc-ca-certificates") pod "kube-controller-manager-functional-20210112085224-2630" (UID: "a3e7be694ef7cf952503c5d331abc0ac")
2021-01-12T08:54:32.4361958Z         	* Jan 12 08:54:23 functional-20210112085224-2630 kubelet[5095]: I0112 08:54:23.848155    5095 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/a3e7be694ef7cf952503c5d331abc0ac-kubeconfig") pod "kube-controller-manager-functional-20210112085224-2630" (UID: "a3e7be694ef7cf952503c5d331abc0ac")
2021-01-12T08:54:32.4365072Z         	* Jan 12 08:54:23 functional-20210112085224-2630 kubelet[5095]: I0112 08:54:23.848162    5095 reconciler.go:157] Reconciler: start to sync state
2021-01-12T08:54:32.4366873Z         	* Jan 12 08:54:24 functional-20210112085224-2630 kubelet[5095]: W0112 08:54:24.582214    5095 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-74ff55c5b-c4vsl through plugin: invalid network status for
2021-01-12T08:54:32.4370250Z         	* Jan 12 08:54:24 functional-20210112085224-2630 kubelet[5095]: I0112 08:54:24.837706    5095 request.go:655] Throttling request took 1.051521351s, request: GET:https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/secrets?fieldSelector=metadata.name%3Dstorage-provisioner-token-59dsj&limit=500&resourceVersion=0
2021-01-12T08:54:32.4374214Z         	* Jan 12 08:54:25 functional-20210112085224-2630 kubelet[5095]: E0112 08:54:25.042472    5095 kubelet.go:1635] Failed creating a mirror pod for "kube-apiserver-functional-20210112085224-2630_kube-system(dbabc6fcaa35495e1aa10964d680d780)": pods "kube-apiserver-functional-20210112085224-2630" already exists
2021-01-12T08:54:32.4377128Z         	* Jan 12 08:54:25 functional-20210112085224-2630 kubelet[5095]: W0112 08:54:25.329124    5095 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-74ff55c5b-c4vsl through plugin: invalid network status for
2021-01-12T08:54:32.4380519Z         	* Jan 12 08:54:25 functional-20210112085224-2630 kubelet[5095]: W0112 08:54:25.529583    5095 reflector.go:436] object-"kube-system"/"storage-provisioner-token-59dsj": watch of *v1.Secret ended with: very short watch: object-"kube-system"/"storage-provisioner-token-59dsj": Unexpected watch close - watch lasted less than a second and no items received
2021-01-12T08:54:32.4384120Z         	* Jan 12 08:54:25 functional-20210112085224-2630 kubelet[5095]: W0112 08:54:25.638905    5095 status_manager.go:550] Failed to get status for pod "storage-provisioner_kube-system(f196b45d-0544-4224-95f4-8497a4d177de)": Get "https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/storage-provisioner": dial tcp 192.168.49.71:8441: connect: connection refused
2021-01-12T08:54:32.4393542Z         	* Jan 12 08:54:25 functional-20210112085224-2630 kubelet[5095]: E0112 08:54:25.679815    5095 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"coredns-74ff55c5b-c4vsl.165970299642fda7", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"coredns-74ff55c5b-c4vsl", UID:"1d8a6d3f-75df-4ad6-8b2c-7966464cc021", APIVersion:"v1", ResourceVersion:"468", FieldPath:"spec.containers{coredns}"}, Reason:"Started", Message:"Started container coredns", Source:v1.EventSource{Component:"kubelet", Host:"functional-20210112085224-2630"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbff776d06872b3a7, ext:2889350957, loc:(*time.Location)(0x70c7020)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbff776d06872b3a7, ext:2889350957, loc:(*time.Location)(0x70c7020)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/events": dial tcp 192.168.49.71:8441: connect: connection refused'(may retry after sleeping)
2021-01-12T08:54:32.4401043Z         	* Jan 12 08:54:25 functional-20210112085224-2630 kubelet[5095]: W0112 08:54:25.770254    5095 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-74ff55c5b-c4vsl through plugin: invalid network status for
2021-01-12T08:54:32.4403242Z         	* Jan 12 08:54:26 functional-20210112085224-2630 kubelet[5095]: W0112 08:54:26.784426    5095 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-74ff55c5b-c4vsl through plugin: invalid network status for
2021-01-12T08:54:32.4405586Z         	* Jan 12 08:54:26 functional-20210112085224-2630 kubelet[5095]: I0112 08:54:26.798518    5095 scope.go:95] [topologymanager] RemoveContainer - Container ID: c018e6634eb41b462164ee3d0ba9af3992cc35de45430a04965d889bc69ba1c1
2021-01-12T08:54:32.4407960Z         	* Jan 12 08:54:26 functional-20210112085224-2630 kubelet[5095]: I0112 08:54:26.798929    5095 scope.go:95] [topologymanager] RemoveContainer - Container ID: 72d7777e07f2ec1f3747899e2c580e1c461e1697bc1d723368c67dea7f2b3073
2021-01-12T08:54:32.4410362Z         	* Jan 12 08:54:26 functional-20210112085224-2630 kubelet[5095]: W0112 08:54:26.806320    5095 pod_container_deletor.go:79] Container "0348ea5511188ee0f096e6ee25a84b2734ba5d4b9be5f74a2034b5605b84af8f" not found in pod's containers
2021-01-12T08:54:32.4412737Z         	* Jan 12 08:54:26 functional-20210112085224-2630 kubelet[5095]: I0112 08:54:26.810801    5095 kubelet.go:1618] Trying to delete pod kube-apiserver-functional-20210112085224-2630_kube-system 9c225a9e-7f5d-41d7-a428-576c84a6d113
2021-01-12T08:54:32.4415304Z         	* Jan 12 08:54:26 functional-20210112085224-2630 kubelet[5095]: I0112 08:54:26.830297    5095 scope.go:95] [topologymanager] RemoveContainer - Container ID: 9cf8276de8f2ac902e0a6eae9c825cadddba98914a857b898cf2f13f8815dea4
2021-01-12T08:54:32.4417556Z         	* Jan 12 08:54:29 functional-20210112085224-2630 kubelet[5095]: E0112 08:54:29.567440    5095 reflector.go:138] object-"kube-system"/"coredns-token-6grjt": Failed to watch *v1.Secret: unknown (get secrets)
2021-01-12T08:54:32.4419313Z         	* Jan 12 08:54:29 functional-20210112085224-2630 kubelet[5095]: E0112 08:54:29.569578    5095 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: unknown (get configmaps)
2021-01-12T08:54:32.4421075Z         	* Jan 12 08:54:29 functional-20210112085224-2630 kubelet[5095]: E0112 08:54:29.569777    5095 reflector.go:138] object-"kube-system"/"kube-proxy-token-wn5ml": Failed to watch *v1.Secret: unknown (get secrets)
2021-01-12T08:54:32.4422866Z         	* Jan 12 08:54:29 functional-20210112085224-2630 kubelet[5095]: E0112 08:54:29.569863    5095 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: unknown (get configmaps)
2021-01-12T08:54:32.4426084Z         	* Jan 12 08:54:29 functional-20210112085224-2630 kubelet[5095]: E0112 08:54:29.570085    5095 reflector.go:138] object-"kube-system"/"storage-provisioner-token-59dsj": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-59dsj" is forbidden: User "system:node:functional-20210112085224-2630" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'functional-20210112085224-2630' and this object
2021-01-12T08:54:32.4429519Z         	* Jan 12 08:54:29 functional-20210112085224-2630 kubelet[5095]: W0112 08:54:29.928459    5095 kubelet.go:1622] Deleted mirror pod "kube-apiserver-functional-20210112085224-2630_kube-system(9c225a9e-7f5d-41d7-a428-576c84a6d113)" because it is outdated
2021-01-12T08:54:32.4432063Z         	* Jan 12 08:54:30 functional-20210112085224-2630 kubelet[5095]: I0112 08:54:30.869366    5095 kubelet.go:1618] Trying to delete pod kube-apiserver-functional-20210112085224-2630_kube-system 9c225a9e-7f5d-41d7-a428-576c84a6d113
2021-01-12T08:54:32.4433287Z         	* 
2021-01-12T08:54:32.4451978Z         -- /stdout --
2021-01-12T08:54:32.4453165Z     helpers_test.go:248: (dbg) Run:  ./minikube-linux-amd64 status --format={{.APIServer}} -p functional-20210112085224-2630 -n functional-20210112085224-2630
2021-01-12T08:54:32.7239561Z     helpers_test.go:255: (dbg) Run:  kubectl --context functional-20210112085224-2630 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
2021-01-12T08:54:32.7919921Z     helpers_test.go:261: non-running pods: kube-apiserver-functional-20210112085224-2630
2021-01-12T08:54:32.7921512Z     helpers_test.go:263: ======> post-mortem[TestFunctional/parallel/ComponentHealth]: describe non-running pods <======
2021-01-12T08:54:32.7923181Z     helpers_test.go:266: (dbg) Run:  kubectl --context functional-20210112085224-2630 describe pod kube-apiserver-functional-20210112085224-2630
2021-01-12T08:54:32.8665906Z     helpers_test.go:266: (dbg) Non-zero exit: kubectl --context functional-20210112085224-2630 describe pod kube-apiserver-functional-20210112085224-2630: exit status 1 (74.240964ms)
2021-01-12T08:54:32.8667175Z         
2021-01-12T08:54:32.8667493Z         ** stderr ** 
2021-01-12T08:54:32.8668639Z         	Error from server (NotFound): pods "kube-apiserver-functional-20210112085224-2630" not found
2021-01-12T08:54:32.8669474Z         
2021-01-12T08:54:32.8669775Z         ** /stderr **
2021-01-12T08:54:32.8671197Z     helpers_test.go:268: kubectl --context functional-20210112085224-2630 describe pod kube-apiserver-functional-20210112085224-2630: exit status 1
@k8s-ci-robot added the kind/failing-test and kind/flake labels Jan 12, 2021
@lingsamuel changed the title from "Flaky TestFunctional/parallel/ComponentHealth: pod not found" to "Flaky TestFunctional/parallel/ComponentHealth: pending" Jan 13, 2021
@medyagh (Member) commented Jan 13, 2021

So when we say "minikube start --wait=all", minikube should wait for everything to be up. I have noticed that when we do a "second start" on minikube, it might NOT respect the --wait flag; maybe that is the cause.

We need to figure out why the API server goes Pending after it is started.

@medyagh (Member) commented Jan 19, 2021

In the logs we see:

	* ==> dmesg <==
	* [  +0.006294]  [<ffffffff8db8a29f>] ? pagefault_out_of_memory+0x2f/0x80
	* [  +0.006561]  [<ffffffff8da636cd>] ? __do_page_fault+0x4bd/0x4f0
	* [  +0.006032]  [<ffffffff8e020b68>] ? page_fault+0x28/0x30

That could point to a memory issue.

@medyagh (Member) commented Jan 19, 2021

How about trying to replicate this using a low-memory minikube and running only the functional tests against it? You can start a minikube cluster and make the e2e binary use an existing cluster:

  -cleanup=false  -profile=existing_profile
Usage of ./out/e2e-darwin-amd64:
  -add_dir_header
    	If true, adds the file directory to the header of the log messages
  -alsologtostderr
    	log to standard error as well as files
  -binary string
    	path to minikube binary (default "../../out/minikube")
  -cleanup
    	cleanup failed test run (default true)
  -gvisor
    	run gvisor integration test (slow)
  -log_backtrace_at value
    	when logging hits line file:N, emit a stack trace
  -log_dir string
    	If non-empty, write log files in this directory
  -log_file string
    	If non-empty, use this log file
  -log_file_max_size uint
    	Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
  -logtostderr
    	log to standard error instead of files (default true)
  -minikube-start-args string
    	Arguments to pass to minikube start
  -one_output
    	If true, only write logs to their native severity level (vs also writing to each lower severity level
  -postmortem-logs
    	show logs after a failed test run (default true)
  -profile string
    	force tests to run against a particular profile

@azhao155 (Contributor)

/assign @azhao155

@azhao155 (Contributor) commented Jan 25, 2021

This is the memory usage:

    functional_test.go:974: Yanshu docker stats CONTAINER ID        NAME                                                                                                                               CPU %               MEM USAGE / LIMIT     MEM %               NET I/O             BLOCK I/O           PIDS
        4b519ed30dd0        k8s_mount-munger_busybox-mount_default_281ff499-14ab-4a50-8281-e93a58671dc6_0                                                         0.00%               0B / 0B               0.00%               0B / 0B             0B / 0B             0
        652a1b13d087        k8s_echoserver_hello-node-7567d9fdc9-fcg2p_default_680af5ce-73c1-485c-a732-630309f3e50b_0                                             0.00%               3.191MiB / 102.2GiB   0.00%               42B / 0B            0B / 0B             2
        2d61d2c9263e        k8s_mysql_mysql-65c76b9ccb-4d7bm_default_47940a1c-caac-4040-a5c9-3e697bd9ed84_0                                                       104.91%             468.4MiB / 102.2GiB   0.45%               42B / 0B            0B / 250MB          25
        434d3d45ffb1        k8s_kubernetes-dashboard_kubernetes-dashboard-6cff4c7c4f-7wch5_kubernetes-dashboard_765bc310-5c70-4109-ad48-590e4c3ed0c2_0            0.00%               18.09MiB / 102.2GiB   0.02%               11.4kB / 9.62kB     0B / 0B             10
        8b220135831b        k8s_dashboard-metrics-scraper_dashboard-metrics-scraper-c95fcf479-xb6p2_kubernetes-dashboard_be153bc8-80a3-4b1a-a2db-918bf76da5b6_0   0.00%               10.05MiB / 102.2GiB   0.01%               42B / 0B            0B / 57.3kB         7
        18e4e1530734        k8s_POD_kubernetes-dashboard-6cff4c7c4f-7wch5_kubernetes-dashboard_765bc310-5c70-4109-ad48-590e4c3ed0c2_0                             0.00%               1.105MiB / 102.2GiB   0.00%               11.4kB / 9.62kB     0B / 0B             1
        e8a6c5a6c2de        k8s_POD_dashboard-metrics-scraper-c95fcf479-xb6p2_kubernetes-dashboard_be153bc8-80a3-4b1a-a2db-918bf76da5b6_0                         0.00%               1.258MiB / 102.2GiB   0.00%               42B / 0B            0B / 0B             1
        06a476302d37        k8s_POD_hello-node-7567d9fdc9-fcg2p_default_680af5ce-73c1-485c-a732-630309f3e50b_0                                                    0.00%               1.27MiB / 102.2GiB    0.00%               42B / 0B            0B / 0B             1
        b41b3dc2b4a2        k8s_POD_mysql-65c76b9ccb-4d7bm_default_47940a1c-caac-4040-a5c9-3e697bd9ed84_0                                                         0.00%               1.367MiB / 102.2GiB   0.00%               42B / 0B            0B / 0B             1
        55f66866f786        k8s_storage-provisioner_storage-provisioner_kube-system_3b22d887-327e-438a-841e-fae8f26d87b2_3                                        0.28%               15.51MiB / 102.2GiB   0.01%               0B / 0B             0B / 0B             10
        663e8d410cd5        k8s_coredns_coredns-74ff55c5b-m5b6l_kube-system_565d2bc4-b15f-4d61-853b-98232e669291_1                                                0.34%               15.14MiB / 170MiB     8.91%               33.7kB / 8.74kB     0B / 0B             12
        911898f56c42        k8s_kube-apiserver_kube-apiserver-minikube_kube-system_2fd2cd9f507e9291456c4aa2931c296e_0                                             11.65%              302.5MiB / 102.2GiB   0.29%               0B / 0B             0B / 0B             15
        5c49f0e19dc7        k8s_POD_coredns-74ff55c5b-m5b6l_kube-system_565d2bc4-b15f-4d61-853b-98232e669291_1                                                    0.00%               1.156MiB / 102.2GiB   0.00%               33.7kB / 8.74kB     0B / 0B             1
        6511fe0e59e2        k8s_POD_kube-apiserver-minikube_kube-system_2fd2cd9f507e9291456c4aa2931c296e_0                                                        0.00%               1.297MiB / 102.2GiB   0.00%               0B / 0B             0B / 0B             1
        542cead0ba0e        k8s_kube-controller-manager_kube-controller-manager-minikube_kube-system_a3e7be694ef7cf952503c5d331abc0ac_1                           2.13%               45.39MiB / 102.2GiB   0.04%               0B / 0B             0B / 0B             11
        643f40bf0ec5        k8s_kube-proxy_kube-proxy-drhnb_kube-system_40015aaf-3cda-45ec-95b5-36e8e1efcee0_1                                                    0.00%               26.25MiB / 102.2GiB   0.03%               0B / 0B             0B / 24.6kB         12
        8d8889e397b1        k8s_etcd_etcd-minikube_kube-system_c31fe6a5afdd142cf3450ac972274b36_1                                                                 3.20%               35.07MiB / 102.2GiB   0.03%               0B / 0B             0B / 12.2MB         23
        6fee3cf0592b        k8s_kube-scheduler_kube-scheduler-minikube_kube-system_3478da2c440ba32fb6c087b3f3b99813_1                                             0.18%               21.88MiB / 102.2GiB   0.02%               0B / 0B             0B / 0B             13
        dd8a7b6a8b0e        k8s_POD_kube-controller-manager-minikube_kube-system_a3e7be694ef7cf952503c5d331abc0ac_1                                               0.00%               1.164MiB / 102.2GiB   0.00%               0B / 0B             0B / 0B             1
        10d6d3b574cc        k8s_POD_storage-provisioner_kube-system_3b22d887-327e-438a-841e-fae8f26d87b2_1                                                        0.00%               1.203MiB / 102.2GiB   0.00%               0B / 0B             0B / 0B             1
        3f04a68f9427        k8s_POD_kube-proxy-drhnb_kube-system_40015aaf-3cda-45ec-95b5-36e8e1efcee0_1                                                           0.00%               1.422MiB / 102.2GiB   0.00%               0B / 0B             0B / 0B             1
        61eb11093bf1        k8s_POD_kube-scheduler-minikube_kube-system_3478da2c440ba32fb6c087b3f3b99813_1                                                        0.00%               1.031MiB / 102.2GiB   0.00%               0B / 0B             0B / 0B             1
        a4536f4c65c6        k8s_POD_etcd-minikube_kube-system_c31fe6a5afdd142cf3450ac972274b36_1                                                                  0.00%               1.137MiB / 102.2GiB   0.00%               0B / 0B             0B / 0B             1
        eb4f4fd5c6d7        k8s_POD_sp-pod_default_fd5e8c1b-edb1-48ca-aedf-c2246b8b7910_0                                                                         0.00%               1.227MiB / 102.2GiB   0.00%               0B / 0B             0B / 0B             1

@lingsamuel (Contributor, Author)

468.4MiB / 102.2GiB doesn't seem very large. Will ~0.5GB of memory usage cause an OOM?

@priyawadhwa added the priority/important-soon label Jan 25, 2021
@priyawadhwa

Hey @azhao155, was the 104.91% in your output the mysql memory consumption?

@medyagh (Member) commented Jan 25, 2021

How about we delete the mysql deployment after we are done testing with it?

@medyagh (Member) commented Jan 25, 2021

468.4MiB / 102.2GiB doesn't seem to be very large? will ~0.5GB memory usage cause OOM?

I think @azhao155 is running it on a personal machine with a lot of memory; the numbers should be read as relative... maybe.

@azhao155 (Contributor)

Hey @azhao155 was the 104.91% in your output for the mysql memory consumption?

That's the CPU; this is the memory: 468.4MiB / 102.2GiB.
