Flake Fail: TestFunctional/parallel/DockerEnv #9691

Closed
medyagh opened this issue Nov 13, 2020 · 2 comments · Fixed by #9775
Labels
kind/failing-test: Categorizes issue or PR as related to a consistently or frequently failing test.
kind/flake: Categorizes issue or PR as related to a flaky test.
priority/important-soon: Must be staffed and worked on either currently, or very soon, ideally in time for the next release.

Comments


medyagh commented Nov 13, 2020

This is a flake where the API server goes to Stopped after it had started successfully.

TestFunctional/parallel/DockerEnv
Example: https://storage.googleapis.com/minikube-builds/logs/9747/3b5c265/Docker_Linux.html#fail_TestFunctional%2fparallel%2fDockerEnv
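
To reproduce the check that flakes, you can run the same command the test runs (shown in the log below; the profile name is copied from this log, any running docker-driver profile works):

    eval $(out/minikube-linux-amd64 -p functional-20201113002616-6290 docker-env)
    out/minikube-linux-amd64 status -p functional-20201113002616-6290
    # flake: the status output intermittently reports "apiserver: Stopped" here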


=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:175: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-20201113002616-6290 docker-env) && out/minikube-linux-amd64 status -p functional-20201113002616-6290"
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:175: (dbg) Non-zero exit: /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-20201113002616-6290 docker-env) && out/minikube-linux-amd64 status -p functional-20201113002616-6290": exit status 2 (10.381188135s)
-- stdout --
	functional-20201113002616-6290
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	
-- /stdout --
functional_test.go:181: failed to do status after eval-ing docker-env. error: exit status 2
helpers_test.go:216: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctional/parallel/DockerEnv]: docker inspect <======
helpers_test.go:225: (dbg) Run:  docker inspect functional-20201113002616-6290
helpers_test.go:229: (dbg) docker inspect functional-20201113002616-6290:
-- stdout --
	[
	    {
	        "Id": "d4134ddbdd8c8ea995b08ffe372cf0b92fe2ac529b5f5796e4895717941bea11",
	        "Created": "2020-11-13T00:26:17.836545463Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 38252,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2020-11-13T00:26:18.372729387Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:7ed8827b36a5092c654640afc56410f6f25f6dca005a46d458ed9949dce0ab88",
	        "ResolvConfPath": "/var/lib/docker/containers/d4134ddbdd8c8ea995b08ffe372cf0b92fe2ac529b5f5796e4895717941bea11/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d4134ddbdd8c8ea995b08ffe372cf0b92fe2ac529b5f5796e4895717941bea11/hostname",
	        "HostsPath": "/var/lib/docker/containers/d4134ddbdd8c8ea995b08ffe372cf0b92fe2ac529b5f5796e4895717941bea11/hosts",
	        "LogPath": "/var/lib/docker/containers/d4134ddbdd8c8ea995b08ffe372cf0b92fe2ac529b5f5796e4895717941bea11/d4134ddbdd8c8ea995b08ffe372cf0b92fe2ac529b5f5796e4895717941bea11-json.log",
	        "Name": "/functional-20201113002616-6290",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-20201113002616-6290:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-20201113002616-6290",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": -1,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/44a9d20156ca5009ba2e1bba78842f8b80a04750afd25dbc0da39c482cffee0c-init/diff:/var/lib/docker/overlay2/37785018e21808420b88c311407db90bf61cd6d9ff3509c376820926379c03d2/diff:/var/lib/docker/overlay2/72da5352cf986b3c0336029d0ae370c61414ee3eec54a7508b74364da29f041d/diff:/var/lib/docker/overlay2/18b4f482feb9865f79d18f7c0a885686b60777629e7f1acc9aaa76cc7ac692cc/diff:/var/lib/docker/overlay2/181e2753675e6f9b2668fd9977126e08e31251993e67ea4330dcfef36e785d2a/diff:/var/lib/docker/overlay2/0522861244bb82d4e6300a471f7afb3b44e72e14e6676db21b07036da8b5924b/diff:/var/lib/docker/overlay2/87d98baabdce4ed3fd7e9e84ba60454c0613bcf1967e9a76ade44c9871a466f4/diff:/var/lib/docker/overlay2/df56d8041dcd52251ba6e42bf74fa23917b4c5bb067b39231adc07108888fbb1/diff:/var/lib/docker/overlay2/138bb89559a6b4c3a59cfda23a774f682635c448e0b9dfbbe5fde5fae8140d8f/diff:/var/lib/docker/overlay2/6875ae84bc9ed28797340e399459aa48ade1d801d61f2e61dd244e47ce5dd94e/diff:/var/lib/docker/overlay2/3c93ee
faf35c3970bbca71dee1585cef2f83d8ba2e6bab8014282870785b7930/diff:/var/lib/docker/overlay2/8b722d194415562c3fc77392fbd1c832b34e420551aa90010a1d5212a0ecfe77/diff:/var/lib/docker/overlay2/553c101fd6ba7c3eb457f9a1a148ed7c75a98c48f7d113cca76f00da1d8d6e25/diff:/var/lib/docker/overlay2/3ba91f6602dc33fcc648055ae402d2c652a079e8be4b9c708895c24f1bec85c3/diff:/var/lib/docker/overlay2/0c2ae730d446f90c85fa55045dcd664ac9d4a0d3b0e425249df1e765c88a09cd/diff:/var/lib/docker/overlay2/b407d7a55b995a1eee1bba471e5c612d8a337170f8db53f6037923e57944dd63/diff:/var/lib/docker/overlay2/9321be401bab35637ed8c56f30c176b053a053c1360cc369cb1283d49389bf14/diff:/var/lib/docker/overlay2/0b792420956c3ee13ea4d34d2880566d7ddfcd02a978bbc26e3ad36576ac2d0a/diff:/var/lib/docker/overlay2/b4e2a639f3a447ce9bc99c15c91f8bcea8bf6bf94df74bb81a32281bdc173944/diff:/var/lib/docker/overlay2/a595c7c2a2c246aa1b1fadc2df2bc837eb1bf1d77d60767e4aa41422cb555c6a/diff:/var/lib/docker/overlay2/cd4dc165f0b87e060676355fc1711ca56b35873eadf82d9d6462ced297c1e24f/diff:/var/lib/d
ocker/overlay2/f973a434e181a47b345ccc431df06f7a60014481dbe87259ac4979482b782845/diff:/var/lib/docker/overlay2/5241324519ab64f7cdf69ca40158362a9a00d9b0ee82b7e7c35c97e23c398c05/diff:/var/lib/docker/overlay2/8cff34e5b3d152245258620460e3e32dc28f9efba9a63681c31b024c88922853/diff:/var/lib/docker/overlay2/4690e36707758dffba3c9146d6603b83c2661a5124e4473d2232620c390ec140/diff:/var/lib/docker/overlay2/6ea6c2e49e8b7579b4f4dcebba9a0b36b41380c62c43a2e43222db980981c815/diff:/var/lib/docker/overlay2/6592a70f33c7abc3ec6610a878f9344887958109ab5bb1175698d09588beab6f/diff:/var/lib/docker/overlay2/20f1a7fe6f9ab5d8cf4f9b3e035f281097dd4c43c807862deb9638823b477ccb/diff:/var/lib/docker/overlay2/1b711feabab61769cdc062727895f63240c32d834c88cc048bac51625b8c047b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/44a9d20156ca5009ba2e1bba78842f8b80a04750afd25dbc0da39c482cffee0c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/44a9d20156ca5009ba2e1bba78842f8b80a04750afd25dbc0da39c482cffee0c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/44a9d20156ca5009ba2e1bba78842f8b80a04750afd25dbc0da39c482cffee0c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-20201113002616-6290",
	                "Source": "/var/lib/docker/volumes/functional-20201113002616-6290/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-20201113002616-6290",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.14@sha256:2bd97b482faf5b6a403ac39dd5e7c6fe2006425c6663a12f94f64f5f81a7787e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-20201113002616-6290",
	                "name.minikube.sigs.k8s.io": "functional-20201113002616-6290",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d8375c8eb2701c1f27ee9165c24141b8eb6d21bb974fd59898a1752e696cfd45",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/d8375c8eb270",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-20201113002616-6290": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.147"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "d4134ddbdd8c"
	                    ],
	                    "NetworkID": "f1ff20fbb6c797ba7ee59a7b30f7e8d042267200eff8d9e61faa932277b9e2a5",
	                    "EndpointID": "c04907383325dac8ac2ec818d11d070747ebf6016fb9f924257d691985f4426f",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.147",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:93",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]
-- /stdout --
helpers_test.go:233: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-20201113002616-6290 -n functional-20201113002616-6290
helpers_test.go:233: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-20201113002616-6290 -n functional-20201113002616-6290: exit status 2 (474.245541ms)
-- stdout --
	Running
-- /stdout --
helpers_test.go:233: status error: exit status 2 (may be ok)
helpers_test.go:238: <<< TestFunctional/parallel/DockerEnv FAILED: start of post-mortem logs <<<
helpers_test.go:239: ======>  post-mortem[TestFunctional/parallel/DockerEnv]: minikube logs <======
helpers_test.go:241: (dbg) Run:  out/minikube-linux-amd64 -p functional-20201113002616-6290 logs -n 25
=== CONT  TestFunctional/parallel/DockerEnv
helpers_test.go:241: (dbg) Done: out/minikube-linux-amd64 -p functional-20201113002616-6290 logs -n 25: (20.51698227s)
helpers_test.go:246: TestFunctional/parallel/DockerEnv logs: 
-- stdout --
	* ==> Docker <==
	* -- Logs begin at Fri 2020-11-13 00:26:18 UTC, end at Fri 2020-11-13 00:43:53 UTC. --
	* Nov 13 00:43:49 functional-20201113002616-6290 dockerd[9121]: time="2020-11-13T00:43:49.593700234Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Nov 13 00:43:49 functional-20201113002616-6290 dockerd[9121]: time="2020-11-13T00:43:49.596646936Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Nov 13 00:43:49 functional-20201113002616-6290 dockerd[9121]: time="2020-11-13T00:43:49.597510891Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Nov 13 00:43:49 functional-20201113002616-6290 dockerd[9121]: time="2020-11-13T00:43:49.598921081Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Nov 13 00:43:49 functional-20201113002616-6290 dockerd[9121]: time="2020-11-13T00:43:49.800167353Z" level=info msg="Removing stale sandbox 55f4dda6f0f672a3aec53536e224faac723d742c5d0002a67048499e8e966141 (64f6947c99171035a9136440c66efdcad18165239f0e5fb89a3b56786dfa7530)"
	* Nov 13 00:43:49 functional-20201113002616-6290 dockerd[9121]: time="2020-11-13T00:43:49.802863246Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 4abe933e0234550fe2701b656826ef6f840a1916aa781eb369f2a0efe65416cf 3a5b977c508fef5e37e0a20a140764b1f73022589b9499f39c8c307157f6385e], retrying...."
	* Nov 13 00:43:50 functional-20201113002616-6290 dockerd[9121]: time="2020-11-13T00:43:50.218034639Z" level=info msg="Removing stale sandbox 59e84b0fb3024f94bc5bea1cfa11f076bcc0010919078ddc61e1943503440c17 (4b2e2859af929fdf04f22c2eeb7b5e8c724b798cc8d7f5edbc904ac107f5aea7)"
	* Nov 13 00:43:50 functional-20201113002616-6290 dockerd[9121]: time="2020-11-13T00:43:50.224583223Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 37dccb7ab448b2b65581e89f8e00e892104887f936746b66025946a64dbc5ba5 f69168c3b8cef471ab1caf988ecc41b6f20bf75980c47940a5e1710265135929], retrying...."
	* Nov 13 00:43:50 functional-20201113002616-6290 dockerd[9121]: time="2020-11-13T00:43:50.323458491Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Nov 13 00:43:50 functional-20201113002616-6290 dockerd[9121]: time="2020-11-13T00:43:50.494452136Z" level=info msg="Removing stale sandbox 90e5bb75c2604a21f7c09cf26dc2a843a8cfa5f60ba67562b11f6654b4dbbc2d (dafac67996dc044fe522c5ade00276808ea893ec477a51c006f346cf932523ae)"
	* Nov 13 00:43:50 functional-20201113002616-6290 dockerd[9121]: time="2020-11-13T00:43:50.496944568Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 4abe933e0234550fe2701b656826ef6f840a1916aa781eb369f2a0efe65416cf a9bc92d0992a8b6efde65edd032173e330e031f1f841138c0dc07f2daeb56571], retrying...."
	* Nov 13 00:43:50 functional-20201113002616-6290 dockerd[9121]: time="2020-11-13T00:43:50.723606309Z" level=info msg="Removing stale sandbox 1af63ef138fb6617967430ce766704e42b5560c024c93f21bc1eef7350806d34 (5b8da46171e6d488cced96851ea44c122699c4b06fd8830e29eb2ac49ac5d172)"
	* Nov 13 00:43:50 functional-20201113002616-6290 dockerd[9121]: time="2020-11-13T00:43:50.726086423Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 4abe933e0234550fe2701b656826ef6f840a1916aa781eb369f2a0efe65416cf 9c41fbd487c56b6113e8c1760358ea7cc91156b54ce558ed7669ee9ad5d36a7d], retrying...."
	* Nov 13 00:43:51 functional-20201113002616-6290 dockerd[9121]: time="2020-11-13T00:43:51.058057405Z" level=info msg="Removing stale sandbox 30da017cd6965f7bbe125bb4a9705924938b66ba11301935b48c4c9d9ac28330 (a424946f72b2de94d5fac25a405571497056cf0efbdac272edb8d53134fc75fa)"
	* Nov 13 00:43:51 functional-20201113002616-6290 dockerd[9121]: time="2020-11-13T00:43:51.060712804Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 4abe933e0234550fe2701b656826ef6f840a1916aa781eb369f2a0efe65416cf 51456b9bc524c977e6f20cf5207a2e2ebe15c50f1f61ec86ee2d3bf712feffe5], retrying...."
	* Nov 13 00:43:51 functional-20201113002616-6290 dockerd[9121]: time="2020-11-13T00:43:51.265905525Z" level=info msg="Removing stale sandbox 52bf54418649eb791ce70587c8952f9fa83e94cd9b380bb3ad271043a9ce833a (de3704320c24e1690d95caf0256e3b050f6aad131cc80276cb71254cda1d82fa)"
	* Nov 13 00:43:51 functional-20201113002616-6290 dockerd[9121]: time="2020-11-13T00:43:51.268477706Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 4abe933e0234550fe2701b656826ef6f840a1916aa781eb369f2a0efe65416cf b8720fea45ee8c49ee62f302bda5e083ff1e2c71d1c3868b6092bf7113da40d0], retrying...."
	* Nov 13 00:43:51 functional-20201113002616-6290 dockerd[9121]: time="2020-11-13T00:43:51.296613464Z" level=info msg="There are old running containers, the network config will not take affect"
	* Nov 13 00:43:51 functional-20201113002616-6290 dockerd[9121]: time="2020-11-13T00:43:51.303666677Z" level=info msg="Loading containers: done."
	* Nov 13 00:43:51 functional-20201113002616-6290 dockerd[9121]: time="2020-11-13T00:43:51.335362169Z" level=info msg="Docker daemon" commit=4484c46d9d graphdriver(s)=overlay2 version=19.03.13
	* Nov 13 00:43:51 functional-20201113002616-6290 dockerd[9121]: time="2020-11-13T00:43:51.335464617Z" level=info msg="Daemon has completed initialization"
	* Nov 13 00:43:51 functional-20201113002616-6290 dockerd[9121]: time="2020-11-13T00:43:51.350144116Z" level=info msg="API listen on /var/run/docker.sock"
	* Nov 13 00:43:51 functional-20201113002616-6290 dockerd[9121]: time="2020-11-13T00:43:51.350168086Z" level=info msg="API listen on [::]:2376"
	* Nov 13 00:43:51 functional-20201113002616-6290 systemd[1]: Started Docker Application Container Engine.
	* Nov 13 00:43:52 functional-20201113002616-6290 dockerd[9121]: time="2020-11-13T00:43:52.581678888Z" level=warning msg="Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap."
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	* 915441283c962       bfe3a36ebd252       1 second ago        Running             coredns                   1                   010edba05ceac
	* 3bcdc58dec0fc       607331163122e       1 second ago        Running             kube-apiserver            1                   1bc1a3960b1bc
	* e555ac980ff97       0369cf4303ffd       7 seconds ago       Exited              etcd                      1                   64f6947c99171
	* 39d0d9e4fc721       d373dd5a8593a       8 seconds ago       Exited              kube-proxy                1                   5b8da46171e6d
	* 9f8eb5f68867a       bad58561c4be7       8 seconds ago       Exited              storage-provisioner       1                   de3704320c24e
	* 84af752cef4c2       2f32d66b884f8       9 seconds ago       Exited              kube-scheduler            1                   dafac67996dc0
	* 6843db544929f       8603821e1a7a5       9 seconds ago       Exited              kube-controller-manager   1                   a424946f72b2d
	* 74f867e1d81ca       bfe3a36ebd252       16 minutes ago      Exited              coredns                   0                   7d13512e53f38
	* 6a904ced8e5c2       607331163122e       17 minutes ago      Exited              kube-apiserver            0                   ed91cb868b26b
	* 
	* ==> coredns [74f867e1d81c] <==
	* .:53
	* [INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	* CoreDNS-1.7.0
	* linux/amd64, go1.14.4, f59c03d
	* [INFO] SIGTERM: Shutting down servers then terminating
	* [INFO] plugin/health: Going into lameduck mode for 5s
	* 
	* ==> coredns [915441283c96] <==
	* 
	* ==> describe nodes <==
	* Name:               functional-20201113002616-6290
	* Roles:              master
	* Labels:             beta.kubernetes.io/arch=amd64
	*                     beta.kubernetes.io/os=linux
	*                     kubernetes.io/arch=amd64
	*                     kubernetes.io/hostname=functional-20201113002616-6290
	*                     kubernetes.io/os=linux
	*                     minikube.k8s.io/commit=3f749609aa5a0250e46a58c8fcbf1974ff219f39
	*                     minikube.k8s.io/name=functional-20201113002616-6290
	*                     minikube.k8s.io/updated_at=2020_11_13T00_26_47_0700
	*                     minikube.k8s.io/version=v1.14.2
	*                     node-role.kubernetes.io/master=
	* Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	*                     node.alpha.kubernetes.io/ttl: 0
	*                     volumes.kubernetes.io/controller-managed-attach-detach: true
	* CreationTimestamp:  Fri, 13 Nov 2020 00:26:44 +0000
	* Taints:             <none>
	* Unschedulable:      false
	* Lease:
	*   HolderIdentity:  functional-20201113002616-6290
	*   AcquireTime:     <unset>
	*   RenewTime:       Fri, 13 Nov 2020 00:44:11 +0000
	* Conditions:
	*   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	*   ----             ------  -----------------                 ------------------                ------                       -------
	*   MemoryPressure   False   Fri, 13 Nov 2020 00:42:22 +0000   Fri, 13 Nov 2020 00:26:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	*   DiskPressure     False   Fri, 13 Nov 2020 00:42:22 +0000   Fri, 13 Nov 2020 00:26:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	*   PIDPressure      False   Fri, 13 Nov 2020 00:42:22 +0000   Fri, 13 Nov 2020 00:26:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	*   Ready            True    Fri, 13 Nov 2020 00:42:22 +0000   Fri, 13 Nov 2020 00:26:58 +0000   KubeletReady                 kubelet is posting ready status
	* Addresses:
	*   InternalIP:  192.168.49.147
	*   Hostname:    functional-20201113002616-6290
	* Capacity:
	*   cpu:                8
	*   ephemeral-storage:  309568300Ki
	*   hugepages-1Gi:      0
	*   hugepages-2Mi:      0
	*   memory:             30887000Ki
	*   pods:               110
	* Allocatable:
	*   cpu:                8
	*   ephemeral-storage:  309568300Ki
	*   hugepages-1Gi:      0
	*   hugepages-2Mi:      0
	*   memory:             30887000Ki
	*   pods:               110
	* System Info:
	*   Machine ID:                 44dbb0141e58462d8e8aabe2f445165f
	*   System UUID:                99631c12-6f97-4d7f-a51f-5bd86b42f1ce
	*   Boot ID:                    615b2a3d-e929-407b-b2d1-cfd6824a1694
	*   Kernel Version:             4.9.0-14-amd64
	*   OS Image:                   Ubuntu 20.04.1 LTS
	*   Operating System:           linux
	*   Architecture:               amd64
	*   Container Runtime Version:  docker://19.3.13
	*   Kubelet Version:            v1.19.2
	*   Kube-Proxy Version:         v1.19.2
	* Non-terminated Pods:          (7 in total)
	*   Namespace                   Name                                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	*   ---------                   ----                                                      ------------  ----------  ---------------  -------------  ---
	*   kube-system                 coredns-f9fd979d6-gjhfk                                   100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     17m
	*   kube-system                 etcd-functional-20201113002616-6290                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	*   kube-system                 kube-apiserver-functional-20201113002616-6290             250m (3%)     0 (0%)      0 (0%)           0 (0%)         17m
	*   kube-system                 kube-controller-manager-functional-20201113002616-6290    200m (2%)     0 (0%)      0 (0%)           0 (0%)         17m
	*   kube-system                 kube-proxy-mkfv6                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	*   kube-system                 kube-scheduler-functional-20201113002616-6290             100m (1%)     0 (0%)      0 (0%)           0 (0%)         17m
	*   kube-system                 storage-provisioner                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	* Allocated resources:
	*   (Total limits may be over 100 percent, i.e., overcommitted.)
	*   Resource           Requests   Limits
	*   --------           --------   ------
	*   cpu                650m (8%)  0 (0%)
	*   memory             70Mi (0%)  170Mi (0%)
	*   ephemeral-storage  0 (0%)     0 (0%)
	*   hugepages-1Gi      0 (0%)     0 (0%)
	*   hugepages-2Mi      0 (0%)     0 (0%)
	* Events:
	*   Type    Reason                   Age                From        Message
	*   ----    ------                   ----               ----        -------
	*   Normal  NodeHasSufficientPID     17m (x4 over 17m)  kubelet     Node functional-20201113002616-6290 status is now: NodeHasSufficientPID
	*   Normal  NodeHasSufficientMemory  17m (x5 over 17m)  kubelet     Node functional-20201113002616-6290 status is now: NodeHasSufficientMemory
	*   Normal  NodeHasNoDiskPressure    17m (x5 over 17m)  kubelet     Node functional-20201113002616-6290 status is now: NodeHasNoDiskPressure
	*   Normal  Starting                 17m                kubelet     Starting kubelet.
	*   Normal  NodeHasSufficientMemory  17m                kubelet     Node functional-20201113002616-6290 status is now: NodeHasSufficientMemory
	*   Normal  NodeHasNoDiskPressure    17m                kubelet     Node functional-20201113002616-6290 status is now: NodeHasNoDiskPressure
	*   Normal  NodeHasSufficientPID     17m                kubelet     Node functional-20201113002616-6290 status is now: NodeHasSufficientPID
	*   Normal  NodeNotReady             17m                kubelet     Node functional-20201113002616-6290 status is now: NodeNotReady
	*   Normal  NodeAllocatableEnforced  17m                kubelet     Updated Node Allocatable limit across pods
	*   Normal  Starting                 17m                kube-proxy  Starting kube-proxy.
	*   Normal  NodeReady                17m                kubelet     Node functional-20201113002616-6290 status is now: NodeReady
	* 
	* ==> dmesg <==
	* [  +0.291384] IPv4: martian source 10.32.0.1 from 10.32.0.2, on dev eth0
	* [  +0.000004] ll header: 00000000: ff ff ff ff ff ff 8a d0 43 0b 8d 5f 08 06        ........C.._..
	* [Nov13 00:42] net_ratelimit: 1 callbacks suppressed
	* [  +0.000002] IPv4: martian source 10.32.0.3 from 10.32.0.3, on dev eth0
	* [  +0.000003] ll header: 00000000: ff ff ff ff ff ff a2 e4 46 16 57 51 08 06        ........F.WQ..
	* [  +0.000153] IPv4: martian source 10.32.0.3 from 10.32.0.3, on dev datapath
	* [  +0.000003] ll header: 00000000: ff ff ff ff ff ff a2 e4 46 16 57 51 08 06        ........F.WQ..
	* [ +25.508337] cgroup: cgroup2: unknown option "nsdelegate"
	* [Nov13 00:43] cgroup: cgroup2: unknown option "nsdelegate"
	* [ +19.118754] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev bridge
	* [  +0.000003] ll header: 00000000: ff ff ff ff ff ff aa 79 96 90 25 66 08 06        .......y..%f..
	* [  +0.000006] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev eth0
	* [  +0.000001] ll header: 00000000: ff ff ff ff ff ff aa 79 96 90 25 66 08 06        .......y..%f..
	* [  +0.091560] IPv4: martian source 10.244.0.3 from 10.244.0.3, on dev eth0
	* [  +0.000004] ll header: 00000000: ff ff ff ff ff ff 92 0f e3 17 f2 19 08 06        ..............
	* [  +0.516735] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	* [  +0.000005] ll header: 00000000: ff ff ff ff ff ff aa 79 96 90 25 66 08 06        .......y..%f..
	* [  +8.357531] IPv4: martian source 10.244.0.4 from 10.244.0.4, on dev eth0
	* [  +0.000003] ll header: 00000000: ff ff ff ff ff ff fe a3 20 32 6a ce 08 06        ........ 2j...
	* [  +8.467911] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	* [  +0.000002] ll header: 00000000: ff ff ff ff ff ff fe a3 20 32 6a ce 08 06        ........ 2j...
	* [  +0.000371] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	* [  +0.000003] ll header: 00000000: ff ff ff ff ff ff aa 79 96 90 25 66 08 06        .......y..%f..
	* [  +7.348487] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev cbr0
	* [  +0.000002] ll header: 00000000: ff ff ff ff ff ff 22 29 6e 98 2b 3d 08 06        ......")n.+=..
	* 
	* ==> etcd [e555ac980ff9] <==
	* 2020-11-13 00:43:46.671707 W | auth: simple token is not cryptographically signed
	* 2020-11-13 00:43:46.673565 I | mvcc: restore compact to 828
	* 2020-11-13 00:43:46.678291 I | etcdserver: starting server... [version: 3.4.13, cluster version: to_be_decided]
	* raft2020/11/13 00:43:46 INFO: 53e45a4945349d6c switched to configuration voters=(6045055870578236780)
	* 2020-11-13 00:43:46.679079 I | etcdserver/membership: added member 53e45a4945349d6c [https://192.168.49.147:2380] to cluster 45fd6b83d136843b
	* 2020-11-13 00:43:46.679222 N | etcdserver/membership: set the initial cluster version to 3.4
	* 2020-11-13 00:43:46.679296 I | etcdserver/api: enabled capabilities for version 3.4
	* 2020-11-13 00:43:46.682140 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	* 2020-11-13 00:43:46.682308 I | embed: listening for peers on 192.168.49.147:2380
	* 2020-11-13 00:43:46.682499 I | embed: listening for metrics on http://127.0.0.1:2381
	* raft2020/11/13 00:43:47 INFO: 53e45a4945349d6c is starting a new election at term 2
	* raft2020/11/13 00:43:47 INFO: 53e45a4945349d6c became candidate at term 3
	* raft2020/11/13 00:43:47 INFO: 53e45a4945349d6c received MsgVoteResp from 53e45a4945349d6c at term 3
	* raft2020/11/13 00:43:47 INFO: 53e45a4945349d6c became leader at term 3
	* raft2020/11/13 00:43:47 INFO: raft.node: 53e45a4945349d6c elected leader 53e45a4945349d6c at term 3
	* 2020-11-13 00:43:47.957982 I | etcdserver: published {Name:functional-20201113002616-6290 ClientURLs:[https://192.168.49.147:2379]} to cluster 45fd6b83d136843b
	* 2020-11-13 00:43:47.958013 I | embed: ready to serve client requests
	* 2020-11-13 00:43:47.958124 I | embed: ready to serve client requests
	* 2020-11-13 00:43:47.959821 I | embed: serving client requests on 127.0.0.1:2379
	* 2020-11-13 00:43:47.964215 I | embed: serving client requests on 192.168.49.147:2379
	* 2020-11-13 00:43:48.571470 I | embed: rejected connection from "127.0.0.1:51514" (error "EOF", ServerName "")
	* 2020-11-13 00:43:49.192473 N | pkg/osutil: received terminated signal, shutting down...
	* WARNING: 2020/11/13 00:43:49 grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* WARNING: 2020/11/13 00:43:49 grpc: addrConn.createTransport failed to connect to {192.168.49.147:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 192.168.49.147:2379: connect: connection refused". Reconnecting...
	* 2020-11-13 00:43:50.193159 I | etcdserver: skipped leadership transfer for single voting member cluster
	* 
	* ==> kernel <==
	*  00:44:12 up 26 min,  0 users,  load average: 5.40, 7.54, 4.96
	* Linux functional-20201113002616-6290 4.9.0-14-amd64 #1 SMP Debian 4.9.240-2 (2020-10-30) x86_64 x86_64 x86_64 GNU/Linux
	* PRETTY_NAME="Ubuntu 20.04.1 LTS"
	* 
	* ==> kube-apiserver [3bcdc58dec0f] <==
	* I1113 00:44:11.274037       1 controller.go:83] Starting OpenAPI AggregationController
	* I1113 00:44:11.274542       1 controller.go:86] Starting OpenAPI controller
	* I1113 00:44:11.274767       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	* I1113 00:44:11.274794       1 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller
	* I1113 00:44:11.274832       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	* I1113 00:44:11.274840       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	* I1113 00:44:11.274880       1 autoregister_controller.go:141] Starting autoregister controller
	* I1113 00:44:11.274886       1 cache.go:32] Waiting for caches to sync for autoregister controller
	* I1113 00:44:11.274919       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	* I1113 00:44:11.274947       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
	* I1113 00:44:11.275180       1 available_controller.go:404] Starting AvailableConditionController
	* I1113 00:44:11.275202       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	* E1113 00:44:11.275795       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.49.147, ResourceVersion: 0, AdditionalErrorMsg: 
	* I1113 00:44:11.276807       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	* I1113 00:44:11.276831       1 shared_informer.go:240] Waiting for caches to sync for crd-autoregister
	* I1113 00:44:11.480361       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	* I1113 00:44:11.486003       1 cache.go:39] Caches are synced for AvailableConditionController controller
	* I1113 00:44:11.574869       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	* I1113 00:44:11.574935       1 cache.go:39] Caches are synced for autoregister controller
	* I1113 00:44:11.575905       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	* I1113 00:44:11.586292       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	* I1113 00:44:11.935006       1 controller.go:606] quota admission added evaluator for: deployments.apps
	* I1113 00:44:12.272785       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	* I1113 00:44:12.272837       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	* I1113 00:44:12.278913       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	* 
	* ==> kube-apiserver [6a904ced8e5c] <==
	* W1113 00:43:48.288005       1 watcher.go:207] watch chan error: etcdserver: mvcc: required revision has been compacted
	* W1113 00:43:48.294387       1 watcher.go:207] watch chan error: etcdserver: mvcc: required revision has been compacted
	* W1113 00:43:48.294703       1 watcher.go:207] watch chan error: etcdserver: mvcc: required revision has been compacted
	* W1113 00:43:48.295963       1 watcher.go:207] watch chan error: etcdserver: mvcc: required revision has been compacted
	* W1113 00:43:48.296044       1 watcher.go:207] watch chan error: etcdserver: mvcc: required revision has been compacted
	* W1113 00:43:48.296075       1 watcher.go:207] watch chan error: etcdserver: mvcc: required revision has been compacted
	* W1113 00:43:48.296150       1 watcher.go:207] watch chan error: etcdserver: mvcc: required revision has been compacted
	* W1113 00:43:48.296601       1 watcher.go:207] watch chan error: etcdserver: mvcc: required revision has been compacted
	* W1113 00:43:48.296699       1 watcher.go:207] watch chan error: etcdserver: mvcc: required revision has been compacted
	* W1113 00:43:48.298271       1 watcher.go:207] watch chan error: etcdserver: mvcc: required revision has been compacted
	* W1113 00:43:48.299190       1 watcher.go:207] watch chan error: etcdserver: mvcc: required revision has been compacted
	* W1113 00:43:48.386043       1 watcher.go:207] watch chan error: etcdserver: mvcc: required revision has been compacted
	* W1113 00:43:48.386394       1 watcher.go:207] watch chan error: etcdserver: mvcc: required revision has been compacted
	* W1113 00:43:48.386537       1 watcher.go:207] watch chan error: etcdserver: mvcc: required revision has been compacted
	* W1113 00:43:48.386402       1 watcher.go:207] watch chan error: etcdserver: mvcc: required revision has been compacted
	* W1113 00:43:48.386509       1 watcher.go:207] watch chan error: etcdserver: mvcc: required revision has been compacted
	* W1113 00:43:48.386513       1 watcher.go:207] watch chan error: etcdserver: mvcc: required revision has been compacted
	* W1113 00:43:48.386513       1 watcher.go:207] watch chan error: etcdserver: mvcc: required revision has been compacted
	* W1113 00:43:48.387449       1 watcher.go:207] watch chan error: etcdserver: mvcc: required revision has been compacted
	* W1113 00:43:48.387499       1 watcher.go:207] watch chan error: etcdserver: mvcc: required revision has been compacted
	* W1113 00:43:48.387692       1 watcher.go:207] watch chan error: etcdserver: mvcc: required revision has been compacted
	* W1113 00:43:48.396426       1 watcher.go:207] watch chan error: etcdserver: mvcc: required revision has been compacted
	* W1113 00:43:48.396884       1 watcher.go:207] watch chan error: etcdserver: mvcc: required revision has been compacted
	* W1113 00:43:48.396884       1 watcher.go:207] watch chan error: etcdserver: mvcc: required revision has been compacted
	* E1113 00:43:48.477605       1 controller.go:184] Get "https://localhost:8441/api/v1/namespaces/default/endpoints/kubernetes": dial tcp 127.0.0.1:8441: connect: connection refused
	* 
	* ==> kube-controller-manager [6843db544929] <==
	* Flag --port has been deprecated, see --secure-port instead.
	* I1113 00:43:46.717271       1 serving.go:331] Generated self-signed cert in-memory
	* I1113 00:43:47.993099       1 controllermanager.go:175] Version: v1.19.2
	* I1113 00:43:47.995143       1 secure_serving.go:197] Serving securely on 127.0.0.1:10257
	* I1113 00:43:47.995473       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
	* I1113 00:43:47.995522       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	* I1113 00:43:47.995613       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	* W1113 00:43:47.996802       1 controllermanager.go:628] fetch api resource lists failed, use legacy client builder: Get "https://192.168.49.147:8441/api/v1?timeout=32s": dial tcp 192.168.49.147:8441: connect: connection refused
	* 
	* ==> kube-proxy [39d0d9e4fc72] <==
	* E1113 00:43:45.979974       1 node.go:125] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20201113002616-6290": dial tcp 192.168.49.147:8441: connect: connection refused
	* E1113 00:43:47.084118       1 node.go:125] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20201113002616-6290": dial tcp 192.168.49.147:8441: connect: connection refused
	* 
	* ==> kube-scheduler [84af752cef4c] <==
	* E1113 00:43:46.094200       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.49.147:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.49.147:8441: connect: connection refused
	* E1113 00:43:46.094210       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.49.147:8441/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.49.147:8441: connect: connection refused
	* E1113 00:43:46.094216       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.49.147:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.49.147:8441: connect: connection refused
	* E1113 00:43:46.094634       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.49.147:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.49.147:8441: connect: connection refused
	* E1113 00:43:46.094842       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: Get "https://192.168.49.147:8441/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.49.147:8441: connect: connection refused
	* E1113 00:43:46.094956       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.49.147:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.49.147:8441: connect: connection refused
	* E1113 00:43:46.095067       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.49.147:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.49.147:8441: connect: connection refused
	* E1113 00:43:46.095185       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.49.147:8441/api/v1/pods?limit=500&resourceVersion=0": dial tcp 192.168.49.147:8441: connect: connection refused
	* E1113 00:43:46.095257       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.49.147:8441/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.49.147:8441: connect: connection refused
	* E1113 00:43:46.095348       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.49.147:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.49.147:8441: connect: connection refused
	* E1113 00:43:46.923089       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.49.147:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.49.147:8441: connect: connection refused
	* E1113 00:43:47.073028       1 reflector.go:127] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.49.147:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.49.147:8441: connect: connection refused
	* E1113 00:43:47.090856       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.49.147:8441/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.49.147:8441: connect: connection refused
	* E1113 00:43:47.114678       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.49.147:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.49.147:8441: connect: connection refused
	* E1113 00:43:47.206101       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.49.147:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.49.147:8441: connect: connection refused
	* E1113 00:43:47.207095       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.49.147:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.49.147:8441: connect: connection refused
	* E1113 00:43:47.248588       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.49.147:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.49.147:8441: connect: connection refused
	* E1113 00:43:47.305183       1 reflector.go:127] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:188: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.49.147:8441/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0": dial tcp 192.168.49.147:8441: connect: connection refused
	* E1113 00:43:47.431500       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.49.147:8441/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.49.147:8441: connect: connection refused
	* E1113 00:43:47.540366       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.49.147:8441/api/v1/pods?limit=500&resourceVersion=0": dial tcp 192.168.49.147:8441: connect: connection refused
	* E1113 00:43:47.565812       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: Get "https://192.168.49.147:8441/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.49.147:8441: connect: connection refused
	* E1113 00:43:47.585692       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.49.147:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.49.147:8441: connect: connection refused
	* E1113 00:43:47.694085       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.49.147:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.49.147:8441: connect: connection refused
	* E1113 00:43:49.002858       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.49.147:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.49.147:8441: connect: connection refused
	* E1113 00:43:49.155240       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.49.147:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.49.147:8441: connect: connection refused
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2020-11-13 00:26:18 UTC, end at Fri 2020-11-13 00:44:12 UTC. --
	* Nov 13 00:43:53 functional-20201113002616-6290 kubelet[2252]: W1113 00:43:53.649325    2252 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-f9fd979d6-gjhfk through plugin: invalid network status for
	* Nov 13 00:43:53 functional-20201113002616-6290 kubelet[2252]: I1113 00:43:53.674239    2252 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 6843db544929f823c0cfdefd27f845e70216f66947242f5c6019ce21e296cabc
	* Nov 13 00:43:53 functional-20201113002616-6290 kubelet[2252]: I1113 00:43:53.674393    2252 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 84af752cef4c2a456ae7c1b211c0393f3b6df85a4930fb81ed7078d6c7dcefb2
	* Nov 13 00:43:53 functional-20201113002616-6290 kubelet[2252]: I1113 00:43:53.674666    2252 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: e555ac980ff9770f9e43eb3be425c5fe6ba28f4afe5e832ea27881f62392a0f7
	* Nov 13 00:43:53 functional-20201113002616-6290 kubelet[2252]: E1113 00:43:53.674910    2252 pod_workers.go:191] Error syncing pod ff7d12f9e4f14e202a85a7c5534a3129 ("kube-scheduler-functional-20201113002616-6290_kube-system(ff7d12f9e4f14e202a85a7c5534a3129)"), skipping: failed to "StartContainer" for "kube-scheduler" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-scheduler pod=kube-scheduler-functional-20201113002616-6290_kube-system(ff7d12f9e4f14e202a85a7c5534a3129)"
	* Nov 13 00:43:53 functional-20201113002616-6290 kubelet[2252]: I1113 00:43:53.674964    2252 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 39d0d9e4fc7216969e8ecf7e0bca763c10c95ce457e81369608011f3bbadf1ce
	* Nov 13 00:43:53 functional-20201113002616-6290 kubelet[2252]: E1113 00:43:53.675183    2252 pod_workers.go:191] Error syncing pod dcc127c185c80a61d90d8e659e768641 ("kube-controller-manager-functional-20201113002616-6290_kube-system(dcc127c185c80a61d90d8e659e768641)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-functional-20201113002616-6290_kube-system(dcc127c185c80a61d90d8e659e768641)"
	* Nov 13 00:43:53 functional-20201113002616-6290 kubelet[2252]: E1113 00:43:53.675287    2252 pod_workers.go:191] Error syncing pod 1346e59e8516413f92bbc1eb9fcfd90d ("etcd-functional-20201113002616-6290_kube-system(1346e59e8516413f92bbc1eb9fcfd90d)"), skipping: failed to "StartContainer" for "etcd" with CrashLoopBackOff: "back-off 10s restarting failed container=etcd pod=etcd-functional-20201113002616-6290_kube-system(1346e59e8516413f92bbc1eb9fcfd90d)"
	* Nov 13 00:43:53 functional-20201113002616-6290 kubelet[2252]: E1113 00:43:53.675372    2252 pod_workers.go:191] Error syncing pod e37f81e3-3251-43d6-96fa-41fd7d197275 ("kube-proxy-mkfv6_kube-system(e37f81e3-3251-43d6-96fa-41fd7d197275)"), skipping: failed to "StartContainer" for "kube-proxy" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-proxy pod=kube-proxy-mkfv6_kube-system(e37f81e3-3251-43d6-96fa-41fd7d197275)"
	* Nov 13 00:43:54 functional-20201113002616-6290 kubelet[2252]: I1113 00:43:54.776495    2252 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: e555ac980ff9770f9e43eb3be425c5fe6ba28f4afe5e832ea27881f62392a0f7
	* Nov 13 00:43:54 functional-20201113002616-6290 kubelet[2252]: E1113 00:43:54.778176    2252 pod_workers.go:191] Error syncing pod 1346e59e8516413f92bbc1eb9fcfd90d ("etcd-functional-20201113002616-6290_kube-system(1346e59e8516413f92bbc1eb9fcfd90d)"), skipping: failed to "StartContainer" for "etcd" with CrashLoopBackOff: "back-off 10s restarting failed container=etcd pod=etcd-functional-20201113002616-6290_kube-system(1346e59e8516413f92bbc1eb9fcfd90d)"
	* Nov 13 00:44:00 functional-20201113002616-6290 kubelet[2252]: I1113 00:44:00.463442    2252 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 6843db544929f823c0cfdefd27f845e70216f66947242f5c6019ce21e296cabc
	* Nov 13 00:44:01 functional-20201113002616-6290 kubelet[2252]: I1113 00:44:01.191672    2252 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 84af752cef4c2a456ae7c1b211c0393f3b6df85a4930fb81ed7078d6c7dcefb2
	* Nov 13 00:44:02 functional-20201113002616-6290 kubelet[2252]: E1113 00:44:02.994924    2252 kubelet_node_status.go:442] Error updating node status, will retry: error getting node "functional-20201113002616-6290": Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20201113002616-6290?resourceVersion=0&timeout=10s": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
	* Nov 13 00:44:03 functional-20201113002616-6290 kubelet[2252]: E1113 00:44:03.143859    2252 controller.go:136] failed to ensure node lease exists, will retry in 6.4s, error: Get "https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-20201113002616-6290?timeout=10s": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
	* Nov 13 00:44:03 functional-20201113002616-6290 kubelet[2252]: W1113 00:44:03.647564    2252 status_manager.go:550] Failed to get status for pod "kube-apiserver-functional-20201113002616-6290_kube-system(3efc31628e017e2ea7bcfb48d2e48ac0)": Get "https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-20201113002616-6290": net/http: TLS handshake timeout
	* Nov 13 00:44:05 functional-20201113002616-6290 kubelet[2252]: I1113 00:44:05.291517    2252 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: e555ac980ff9770f9e43eb3be425c5fe6ba28f4afe5e832ea27881f62392a0f7
	* Nov 13 00:44:05 functional-20201113002616-6290 kubelet[2252]: I1113 00:44:05.291591    2252 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 9f8eb5f68867a8db64bf1270c7dc0306943908f95e17b21e1c09ae3895ad72a3
	* Nov 13 00:44:07 functional-20201113002616-6290 kubelet[2252]: I1113 00:44:07.291508    2252 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 39d0d9e4fc7216969e8ecf7e0bca763c10c95ce457e81369608011f3bbadf1ce
	* Nov 13 00:44:10 functional-20201113002616-6290 kubelet[2252]: E1113 00:44:10.206489    2252 event.go:273] Unable to write event: 'Post "https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/events": net/http: TLS handshake timeout' (may retry after sleeping)
	* Nov 13 00:44:11 functional-20201113002616-6290 kubelet[2252]: E1113 00:44:11.472676    2252 reflector.go:127] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: unknown (get configmaps)
	* Nov 13 00:44:11 functional-20201113002616-6290 kubelet[2252]: E1113 00:44:11.473813    2252 reflector.go:127] object-"kube-system"/"coredns-token-fl9fg": Failed to watch *v1.Secret: unknown (get secrets)
	* Nov 13 00:44:11 functional-20201113002616-6290 kubelet[2252]: E1113 00:44:11.474177    2252 reflector.go:127] object-"kube-system"/"storage-provisioner-token-4cvwb": Failed to watch *v1.Secret: unknown (get secrets)
	* Nov 13 00:44:11 functional-20201113002616-6290 kubelet[2252]: E1113 00:44:11.482487    2252 reflector.go:127] object-"kube-system"/"kube-proxy-token-j75z6": Failed to watch *v1.Secret: unknown (get secrets)
	* Nov 13 00:44:11 functional-20201113002616-6290 kubelet[2252]: E1113 00:44:11.482803    2252 reflector.go:127] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: unknown (get configmaps)
	* 
	* ==> storage-provisioner [9f8eb5f68867] <==
	* F1113 00:43:45.871251       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-20201113002616-6290 -n functional-20201113002616-6290
helpers_test.go:255: (dbg) Run:  kubectl --context functional-20201113002616-6290 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: non-running pods: 
helpers_test.go:263: ======> post-mortem[TestFunctional/parallel/DockerEnv]: describe non-running pods <======
helpers_test.go:266: (dbg) Run:  kubectl --context functional-20201113002616-6290 describe pod 
helpers_test.go:266: (dbg) Non-zero exit: kubectl --context functional-20201113002616-6290 describe pod : exit status 1 (67.84739ms)
** stderr ** 
	error: resource name may not be empty
** /stderr **
helpers_test.go:268: kubectl --context functional-20201113002616-6290 describe pod : exit status 1
--- FAIL: TestFunctional/parallel/DockerEnv (31.95s)
medyagh added the kind/failing-test, kind/flake, and priority/important-soon labels Nov 13, 2020

medyagh commented Nov 20, 2020

This is the same root cause as the "TestFunctional/parallel/ServiceCmd" failure as well.
Since the API server goes down, the test fails to deploy the hello-node deployment, and therefore there is no service, as seen here: https://storage.googleapis.com/minikube-builds/logs/9747/3b5c265/Docker_Linux.html#fail_TestFunctional%2fparallel%2fServiceCmd
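
Since the underlying problem is the apiserver briefly going down, a rough sketch of the kind of readiness wait that would avoid the cascading ServiceCmd failure could look like this (not the test's actual code; the kubectl context name is taken from this log):

    # poll the apiserver health endpoint for up to ~60s before deploying hello-node
    for i in $(seq 1 30); do
      kubectl --context functional-20201113002616-6290 get --raw /healthz >/dev/null 2>&1 && break
      sleep 2
    done
    kubectl --context functional-20201113002616-6290 get --raw /healthz   # should print "ok" once the apiserver is back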


medyagh commented Nov 20, 2020

What could be the possible causes? (some quick checks are sketched below)

  • The MySQL deployment screwing up the resource limits?
  • Too little memory
  • Parallel testing
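
To narrow these down on a failing run, a few quick checks against the node container could help (the container name filter and the docker-in-docker invocation here are assumptions based on this log, not commands from the test suite):

    # was the kube-apiserver container OOM-killed, and with what exit code?
    docker exec functional-20201113002616-6290 docker ps -a --filter name=k8s_kube-apiserver --format '{{.ID}} {{.Status}}'
    docker exec functional-20201113002616-6290 docker inspect -f '{{.State.OOMKilled}} {{.State.ExitCode}}' <container-id-from-above>
    # how much memory is left inside the node while the parallel subtests run?
    docker exec functional-20201113002616-6290 free -m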
