
Allow alternative storage-driver to be selected #7933

Open
steghio opened this issue Apr 29, 2020 · 11 comments
Labels
co/docker-driver: Issues related to kubernetes in container
co/preload
help wanted: Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines.
kind/feature: Categorizes issue or PR as related to a new feature.
lifecycle/frozen: Indicates that an issue or PR should not be auto-closed due to staleness.
priority/backlog: Higher priority than priority/awaiting-more-evidence.

Comments

steghio commented Apr 29, 2020

Steps to reproduce the issue:

Host is Ubuntu 20.04 on an ext4 filesystem, with an additional NTFS partition mounted.

Docker info:

Client: Docker Engine - Community
Version: 19.03.8
API version: 1.40
Go version: go1.12.17
Git commit: afacb8b7f0
Built: Wed Mar 11 01:25:46 2020
OS/Arch: linux/amd64
Experimental: false

Server: Docker Engine - Community
Engine:
Version: 19.03.8
API version: 1.40 (minimum version 1.12)
Go version: go1.12.17
Git commit: afacb8b7f0
Built: Wed Mar 11 01:24:19 2020
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.2.13
GitCommit: 7ad184331fa3e55e52b890ea95e65ba581ae3429
runc:
Version: 1.0.0-rc10
GitCommit: dc9208a3303feef5b3839f4323d9beb36df0a9dd
docker-init:
Version: 0.18.0
GitCommit: fec3683

  1. Modify the Docker configuration to use a different storage folder and storage driver for the NTFS filesystem (see the note after these steps):
{
   "graph": "/path/to/new/docker_folder_on_NTFS_filesystem",   
   "storage-driver": "vfs"
}
  2. Start a fresh minikube cluster (no .minikube or .kube folders exist in the user home) using the docker driver:
    minikube start --driver=docker
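Note on the config above: newer Docker releases deprecate the "graph" key in favour of "data-root", so an equivalent /etc/docker/daemon.json (the path is a placeholder) would be:

{
   "data-root": "/path/to/new/docker_folder_on_NTFS_filesystem",
   "storage-driver": "vfs"
}

followed by sudo systemctl restart docker to apply it.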

Full output of failed command:

The command itself does not fail, but it pulls the wrong tarball:

preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4

which is incompatible with the target NTFS filesystem. There is no way to specify a different storage driver via additional parameters, and it appears that no preload built with a different storage driver exists at all under the pull location: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/
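One quick way to confirm what is published (a sketch, assuming the bucket allows anonymous listing via the GCS JSON API) is to list objects for the relevant prefix and check for any non-overlay2 variant:

curl -s "https://storage.googleapis.com/storage/v1/b/minikube-preloaded-volume-tarballs/o?prefix=preloaded-images-k8s-v2-v1.18.0" | grep '"name"'

If only overlay2 names come back, no alternative preload exists.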

The interesting part is that no error is thrown during startup, but the process hangs towards the end, repeatedly printing lines such as:

Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51590->127.0.0.1:39697: read: connection reset by peer

It then leaves the container running, but inaccessible.

Reverting the Docker configuration to use the local ext4 filesystem (for example /var/lib/docker) makes the process complete successfully and lets the cluster start correctly.

Would it be possible to provide preloaded tarballs that do NOT rely on the overlay storage driver? Alternatively, would it be possible to provide a configuration option to specify the desired storage driver and let the local process create the correct folders at startup?


afbjorklund (Collaborator) commented:

Similar to #7626

afbjorklund added the co/docker-driver label Apr 29, 2020
afbjorklund (Collaborator) commented:

It is supposed to fall back to not using a preload for such drivers, and to use the regular cache instead.
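(A way to force that fallback explicitly, assuming a minikube build that exposes the --preload flag, is:

minikube start --driver=docker --preload=false

which skips the overlay2-specific tarball and lets minikube cache and load the images individually.)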

medyagh (Member) commented Apr 30, 2020

I don't think this is wrong; the Docker inside minikube uses the overlay storage driver.

@steghio, could you please do this:
$ minikube ssh
and then:

docker info

I believe that, since we install the Docker inside minikube ourselves, we install it with overlay, so it is not pulling the wrong one.
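(For reference, the two steps can be combined into a single command; the --format query is just a convenience and assumes the docker CLI is available inside the node:

minikube ssh -- docker info --format '{{.Driver}}'

This should print the storage driver the inner Docker is actually using.)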

medyagh (Member) commented Apr 30, 2020

@steghio when you modify the configuration, do you modify the Docker on your host or the Docker inside minikube?

PS: not related to this issue, but I am curious: why do you use VFS?
The Docker docs say (https://docs.docker.com/storage/storagedriver/select-storage-driver/):

The vfs storage driver is intended for testing purposes, and for situations where no copy-on-write filesystem can be used. Performance of this storage driver is poor, and is not generally recommended for production use.

steghio (Author) commented Apr 30, 2020

Hello @medyagh

@steghio when you modify the configuration, do you modify the Docker on your host or the Docker inside minikube?

I have a fresh Docker installation, which I modified via the daemon.json config file under /etc/docker.

Specifically, I modified the Docker folder location and the storage driver being used, as mentioned earlier.

I installed minikube as a DEB package from https://minikube.sigs.k8s.io/docs/start/ (minikube_1.9.2-0_amd64.deb).

I do not modify anything else; in my home folder there are no .minikube or .kube folders.

When I run the minikube start --driver=docker command, it appears minikube uses my own Docker folder for some purposes at least, since I can see its volume being created there, but I cannot see the tarball content (e.g. the overlay folder) being copied there.
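(One way to see exactly where that volume lives on the host is to query its mountpoint with the standard docker CLI:

docker volume inspect minikube --format '{{ .Mountpoint }}'

On this setup it should resolve to a path under the configured NTFS data root.)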

Then, with this setup, the startup process gets stuck at the SSH/TCP connection; here is the trimmed output (repeating entries replaced with [...]):

minikube start --driver=docker --alsologtostderr -v=1
W0430 09:52:50.978183    9749 root.go:248] Error reading config file at /home/sghio/.minikube/config/config.json: open /home/sghio/.minikube/config/config.json: no such file or directory
I0430 09:52:50.978450    9749 notify.go:125] Checking for updates...
I0430 09:52:51.117389    9749 start.go:262] hostinfo: {"hostname":"sghio","uptime":2313,"bootTime":1588230858,"procs":303,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.4.0-28-generic","virtualizationSystem":"kvm","virtualizationRole":"host","hostid":"a40f2d69-0300-4a52-b302-3c7708b95e81"}
I0430 09:52:51.118460    9749 start.go:272] virtualization: kvm host
😄  minikube v1.9.2 on Ubuntu 20.04
I0430 09:52:51.124401    9749 driver.go:245] Setting default libvirt URI to qemu:///system
✨  Using the docker driver based on user configuration
I0430 09:52:51.204698    9749 start.go:310] selected driver: docker
I0430 09:52:51.204709    9749 start.go:656] validating driver "docker" against <nil>
I0430 09:52:51.204719    9749 start.go:662] status for docker: {Installed:true Healthy:true Error:<nil> Fix: Doc:}
I0430 09:52:51.204736    9749 start.go:1100] auto setting extra-config to "kubeadm.pod-network-cidr=10.244.0.0/16".
I0430 09:52:51.268241    9749 start.go:1004] Using suggested 3900MB memory alloc based on sys=15895MB, container=15895MB
I0430 09:52:51.268343    9749 start.go:1210] Wait components to verify : map[apiserver:true system_pods:true]
👍  Starting control plane node m01 in cluster minikube
🚜  Pulling base image ...
I0430 09:52:51.269748    9749 cache.go:104] Beginning downloading kic artifacts
I0430 09:52:51.269764    9749 preload.go:81] Checking if preload exists for k8s version v1.18.0 and runtime docker
I0430 09:52:51.269795    9749 cache.go:106] Downloading gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81 to local daemon
I0430 09:52:51.269805    9749 image.go:84] Writing gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81 to local daemon
I0430 09:52:51.399548    9749 preload.go:114] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4
I0430 09:52:51.399611    9749 cache.go:46] Caching tarball of preloaded images
I0430 09:52:51.399700    9749 preload.go:81] Checking if preload exists for k8s version v1.18.0 and runtime docker
I0430 09:52:51.618222    9749 preload.go:114] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4
💾  Downloading Kubernetes v1.18.0 preload ...
I0430 09:52:51.620736    9749 preload.go:144] Downloading: &{Ctx:<nil> Src:https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4 Dst:/home/sghio/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4.download Pwd: Mode:2 Detectors:[] Decompressors:map[] Getters:map[] Dir:false ProgressListener:<nil> Options:[0xbf9750]}
    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB
I0430 09:53:40.013825    9749 preload.go:160] saving checksum for preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4 ...
I0430 09:53:40.190956    9749 preload.go:177] verifying checksumm of /home/sghio/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4.download ...
I0430 09:53:41.161543    9749 cache.go:49] Finished downloading the preloaded tar for v1.18.0 on docker
I0430 09:53:41.161781    9749 profile.go:138] Saving config to /home/sghio/.minikube/profiles/minikube/config.json ...
I0430 09:53:41.161848    9749 lock.go:35] WriteFile acquiring /home/sghio/.minikube/profiles/minikube/config.json: {Name:mkdfd2599793d139652beeb9b7c6edc17d15a9d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0430 09:56:11.976573    9749 cache.go:117] Successfully downloaded all kic artifacts
I0430 09:56:11.976702    9749 start.go:260] acquiring machines lock for minikube: {Name:mke0aaec204cf09e2ebc7f8434f0940ebff18499 Clock:{} Delay:500ms Timeout:15m0s Cancel:<nil>}
I0430 09:56:11.976918    9749 start.go:264] acquired machines lock for "minikube" in 156.491µs
I0430 09:56:11.976980    9749 start.go:86] Provisioning new machine with config: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: Memory:3900 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false NodeIP: NodePort:0 NodeName:} Nodes:[{Name:m01 IP: Port:8443 KubernetesVersion:v1.18.0 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true]} {Name:m01 IP: Port:8443 KubernetesVersion:v1.18.0 ControlPlane:true Worker:true}
I0430 09:56:11.977105    9749 start.go:107] createHost starting for "m01" (driver="docker")
🔥  Creating Kubernetes in docker container with (CPUs=2) (4 available), Memory=3900MB (15895MB available) ...
I0430 09:56:12.141004    9749 start.go:143] libmachine.API.Create for "minikube" (driver="docker")
I0430 09:56:12.141034    9749 client.go:169] LocalClient.Create starting
I0430 09:56:12.141078    9749 main.go:110] libmachine: Creating CA: /home/sghio/.minikube/certs/ca.pem
I0430 09:56:12.470616    9749 main.go:110] libmachine: Creating client certificate: /home/sghio/.minikube/certs/cert.pem
I0430 09:56:12.574464    9749 oci.go:250] executing with [docker ps -a --format {{.Names}}] timeout: 30s
I0430 09:56:12.606654    9749 volumes.go:97] executing: [docker volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true]
I0430 09:56:12.646324    9749 oci.go:128] Successfully created a docker volume minikube
I0430 09:56:28.299156    9749 oci.go:250] executing with [docker inspect minikube --format={{.State.Status}}] timeout: 19s
I0430 09:56:28.392073    9749 oci.go:160] the created container "minikube" has a running status.
I0430 09:56:28.392719    9749 kic.go:142] Creating ssh key for kic: /home/sghio/.minikube/machines/minikube/id_rsa...
I0430 09:56:28.797915    9749 kic_runner.go:91] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0430 09:56:29.092824    9749 preload.go:81] Checking if preload exists for k8s version v1.18.0 and runtime docker
I0430 09:56:29.093174    9749 preload.go:97] Found local preload: /home/sghio/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4
I0430 09:56:29.093239    9749 kic.go:128] Starting extracting preloaded images to volume
I0430 09:56:29.093325    9749 volumes.go:85] executing: [docker run --rm --entrypoint /usr/bin/tar -v /home/sghio/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81 -I lz4 -xvf /preloaded.tar -C /extractDir]
I0430 09:57:00.005757    9749 kic.go:133] Took 30.912513 seconds to extract preloaded images to volume
I0430 09:57:00.005799    9749 oci.go:250] executing with [docker inspect -f {{.State.Status}} minikube] timeout: 19s
I0430 09:57:00.043150    9749 machine.go:86] provisioning docker machine ...
I0430 09:57:00.043608    9749 ubuntu.go:166] provisioning hostname "minikube"
I0430 09:57:00.083249    9749 main.go:110] libmachine: Using SSH client type: native
I0430 09:57:00.085761    9749 main.go:110] libmachine: &{{{<nil> 0 [] [] []} docker [0x7bf5d0] 0x7bf5a0 <nil>  [] 0s} 127.0.0.1 32770 <nil> <nil>}
I0430 09:57:00.085779    9749 main.go:110] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I0430 09:57:00.086206    9749 main.go:110] libmachine: Error dialing TCP: ssh: handshake failed: EOF
I0430 09:57:03.087534    9749 main.go:110] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41994->127.0.0.1:32770: read: connection reset by peer
[...]
I0430 09:58:12.113034    9749 main.go:110] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42102->127.0.0.1:32770: read: connection reset by peer
I0430 09:58:12.142710    9749 start.go:110] createHost completed in 2m0.16524174s
I0430 09:58:12.142762    9749 start.go:77] releasing machines lock for "minikube", held for 2m0.165813041s
🤦  StartHost failed, but will try again: creating host: create host timed out in 120.000000 seconds
I0430 09:58:12.144347    9749 oci.go:250] executing with [docker inspect -f {{.State.Status}} minikube] timeout: 19s
🔥  Deleting "minikube" in docker ...
I0430 09:58:15.113706    9749 main.go:110] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32770: connect: connection refused
I0430 09:58:18.045572    9749 start.go:260] acquiring machines lock for minikube: {Name:mke0aaec204cf09e2ebc7f8434f0940ebff18499 Clock:{} Delay:500ms Timeout:15m0s Cancel:<nil>}
I0430 09:58:18.045841    9749 start.go:264] acquired machines lock for "minikube" in 194.211µs
I0430 09:58:18.045904    9749 start.go:86] Provisioning new machine with config: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: Memory:3900 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false NodeIP: NodePort:0 NodeName:} Nodes:[{Name:m01 IP: Port:8443 KubernetesVersion:v1.18.0 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true]} {Name:m01 IP: Port:8443 KubernetesVersion:v1.18.0 ControlPlane:true Worker:true}
I0430 09:58:18.046093    9749 start.go:107] createHost starting for "m01" (driver="docker")
I0430 09:58:18.114011    9749 main.go:110] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32770: connect: connection refused
🔥  Creating Kubernetes in docker container with (CPUs=2) (4 available), Memory=3900MB (15895MB available) ...
I0430 09:58:18.131913    9749 start.go:143] libmachine.API.Create for "minikube" (driver="docker")
I0430 09:58:18.131940    9749 client.go:169] LocalClient.Create starting
I0430 09:58:18.131973    9749 main.go:110] libmachine: Reading certificate data from /home/sghio/.minikube/certs/ca.pem
I0430 09:58:18.132120    9749 main.go:110] libmachine: Decoding PEM data...
I0430 09:58:18.132140    9749 main.go:110] libmachine: Parsing certificate...
I0430 09:58:18.132418    9749 main.go:110] libmachine: Reading certificate data from /home/sghio/.minikube/certs/cert.pem
I0430 09:58:18.132553    9749 main.go:110] libmachine: Decoding PEM data...
I0430 09:58:18.132569    9749 main.go:110] libmachine: Parsing certificate...
I0430 09:58:18.132710    9749 oci.go:250] executing with [docker ps -a --format {{.Names}}] timeout: 30s
I0430 09:58:18.163132    9749 volumes.go:97] executing: [docker volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true]
I0430 09:58:18.193490    9749 oci.go:128] Successfully created a docker volume minikube
I0430 09:58:21.114248    9749 main.go:110] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32770: connect: connection refused
[...]
I0430 09:58:30.115326    9749 main.go:110] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32770: connect: connection refused
I0430 09:58:32.174631    9749 oci.go:250] executing with [docker inspect minikube --format={{.State.Status}}] timeout: 19s
I0430 09:58:32.213533    9749 oci.go:160] the created container "minikube" has a running status.
I0430 09:58:32.213562    9749 kic.go:142] Creating ssh key for kic: /home/sghio/.minikube/machines/minikube/id_rsa...
I0430 09:58:32.689793    9749 kic_runner.go:91] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0430 09:58:32.964504    9749 preload.go:81] Checking if preload exists for k8s version v1.18.0 and runtime docker
I0430 09:58:32.964582    9749 preload.go:97] Found local preload: /home/sghio/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4
I0430 09:58:32.964656    9749 kic.go:128] Starting extracting preloaded images to volume
I0430 09:58:32.964746    9749 volumes.go:85] executing: [docker run --rm --entrypoint /usr/bin/tar -v /home/sghio/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81 -I lz4 -xvf /preloaded.tar -C /extractDir]
I0430 09:58:33.115779    9749 main.go:110] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32770: connect: connection refused
[...]
I0430 09:59:00.118616    9749 main.go:110] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32770: connect: connection refused
I0430 09:59:00.443675    9749 kic.go:133] Took 27.479015 seconds to extract preloaded images to volume
I0430 09:59:00.443713    9749 oci.go:250] executing with [docker inspect -f {{.State.Status}} minikube] timeout: 19s
I0430 09:59:00.474589    9749 machine.go:86] provisioning docker machine ...
I0430 09:59:00.474634    9749 ubuntu.go:166] provisioning hostname "minikube"
I0430 09:59:00.505260    9749 main.go:110] libmachine: Using SSH client type: native
I0430 09:59:00.505402    9749 main.go:110] libmachine: &{{{<nil> 0 [] [] []} docker [0x7bf5d0] 0x7bf5a0 <nil>  [] 0s} 127.0.0.1 32773 <nil> <nil>}
I0430 09:59:00.505413    9749 main.go:110] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I0430 09:59:00.506577    9749 main.go:110] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:36274->127.0.0.1:32773: read: connection reset by peer
I0430 09:59:03.119191    9749 main.go:110] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32770: connect: connection refused
[...]
I0430 09:59:36.519275    9749 main.go:110] libmachine: Error dialing TCP: ssh: handshake failed: EOF
[...]
I0430 10:00:00.130152    9749 main.go:110] libmachine: SSH cmd err, output: <nil>: 
I0430 10:00:00.195318    9749 main.go:110] libmachine: Using SSH client type: native
I0430 10:00:00.195471    9749 main.go:110] libmachine: &{{{<nil> 0 [] [] []} docker [0x7bf5d0] 0x7bf5a0 <nil>  [] 0s} 127.0.0.1 32773 <nil> <nil>}
I0430 10:00:00.195501    9749 main.go:110] libmachine: About to run SSH command:

		if ! grep -xq '.*\sminikube' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts;
			else 
				echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts; 
			fi
		fi
I0430 10:00:00.195805    9749 main.go:110] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:36410->127.0.0.1:32773: read: connection reset by peer
[...]
I0430 10:00:15.532224    9749 main.go:110] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:36454->127.0.0.1:32773: read: connection reset by peer
I0430 10:00:18.132268    9749 start.go:110] createHost completed in 2m0.086133275s
I0430 10:00:18.132337    9749 start.go:77] releasing machines lock for "minikube", held for 2m0.086459312s

❌  [CREATE_TIMEOUT] Failed to start docker container. "minikube start" may fix it. creating host: create host timed out in 120.000000 seconds
💡  Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
⁉️   Related issue: https://github.com/kubernetes/minikube/issues/7072

Also:

minikube ssh
ssh: exit status 255

and:

minikube status
E0430 09:43:40.151916    8957 status.go:233] kubeconfig endpoint: empty IP
m01
host: Running
kubelet: Error
apiserver: Stopped
kubeconfig: Misconfigured


WARNING: Your kubectl is pointing to stale minikube-vm.
To fix the kubectl context, run `minikube update-context`
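(As an aside, one way to dig into why SSH to the node fails, assuming the kicbase container is still running, is to inspect it directly from the host:

docker logs minikube
docker exec -it minikube systemctl status docker

Neither command is minikube-specific; they simply target the node container by name.)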

PS: not related to this issue, but I am curious: why do you use VFS?
The Docker docs say (https://docs.docker.com/storage/storagedriver/select-storage-driver/):

The vfs storage driver is intended for testing purposes, and for situations where no copy-on-write filesystem can be used. Performance of this storage driver is poor, and is not generally recommended for production use.

Yes, I saw that too, but if the Docker folder is on an NTFS filesystem, it appears vfs is the only usable storage driver, as the others do not list NTFS as supported (overlay for sure does not work; I get a clear error if I try it). I was testing on NTFS in order to share data with my dual-boot Windows setup.
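(A quick way to check which filesystem actually backs the Docker data root, using only standard tooling:

df -T "$(docker info --format '{{ .DockerRootDir }}')"

The Type column shows the filesystem the storage driver has to work with; NTFS mounted via ntfs-3g typically shows up as fuseblk.)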

Note the very important thing here: if I leave my Docker folder in the standard location /var/lib/docker and follow the exact same steps (no .minikube or .kube folders in my home, using the docker driver), then everything works perfectly:

minikube start --driver=docker --alsologtostderr -v=1
W0430 10:06:00.682491   12472 root.go:248] Error reading config file at /home/sghio/.minikube/config/config.json: open /home/sghio/.minikube/config/config.json: no such file or directory
I0430 10:06:00.683303   12472 notify.go:125] Checking for updates...
I0430 10:06:01.040347   12472 start.go:262] hostinfo: {"hostname":"sghio","uptime":3103,"bootTime":1588230858,"procs":301,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.4.0-28-generic","virtualizationSystem":"kvm","virtualizationRole":"host","hostid":"a40f2d69-0300-4a52-b302-3c7708b95e81"}
I0430 10:06:01.040826   12472 start.go:272] virtualization: kvm host
😄  minikube v1.9.2 on Ubuntu 20.04
I0430 10:06:01.043458   12472 driver.go:245] Setting default libvirt URI to qemu:///system
✨  Using the docker driver based on user configuration
I0430 10:06:01.119136   12472 start.go:310] selected driver: docker
I0430 10:06:01.119147   12472 start.go:656] validating driver "docker" against <nil>
I0430 10:06:01.119165   12472 start.go:662] status for docker: {Installed:true Healthy:true Error:<nil> Fix: Doc:}
I0430 10:06:01.119186   12472 start.go:1100] auto setting extra-config to "kubeadm.pod-network-cidr=10.244.0.0/16".
I0430 10:06:01.199117   12472 start.go:1004] Using suggested 3900MB memory alloc based on sys=15895MB, container=15895MB
I0430 10:06:01.199244   12472 start.go:1210] Wait components to verify : map[apiserver:true system_pods:true]
👍  Starting control plane node m01 in cluster minikube
🚜  Pulling base image ...
I0430 10:06:01.203111   12472 cache.go:104] Beginning downloading kic artifacts
I0430 10:06:01.203137   12472 preload.go:81] Checking if preload exists for k8s version v1.18.0 and runtime docker
I0430 10:06:01.203213   12472 cache.go:106] Downloading gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81 to local daemon
I0430 10:06:01.203227   12472 image.go:84] Writing gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81 to local daemon
I0430 10:06:01.336956   12472 preload.go:114] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4
I0430 10:06:01.336973   12472 cache.go:46] Caching tarball of preloaded images
I0430 10:06:01.336999   12472 preload.go:81] Checking if preload exists for k8s version v1.18.0 and runtime docker
I0430 10:06:01.464132   12472 preload.go:114] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4
💾  Downloading Kubernetes v1.18.0 preload ...
I0430 10:06:01.466062   12472 preload.go:144] Downloading: &{Ctx:<nil> Src:https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4 Dst:/home/sghio/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4.download Pwd: Mode:2 Detectors:[] Decompressors:map[] Getters:map[] Dir:false ProgressListener:<nil> Options:[0xbf9750]}
    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB
I0430 10:06:53.567954   12472 preload.go:160] saving checksum for preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4 ...
I0430 10:06:53.729000   12472 preload.go:177] verifying checksumm of /home/sghio/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4.download ...
I0430 10:06:55.073166   12472 cache.go:49] Finished downloading the preloaded tar for v1.18.0 on docker
I0430 10:06:55.073446   12472 profile.go:138] Saving config to /home/sghio/.minikube/profiles/minikube/config.json ...
I0430 10:06:55.073546   12472 lock.go:35] WriteFile acquiring /home/sghio/.minikube/profiles/minikube/config.json: {Name:mkdfd2599793d139652beeb9b7c6edc17d15a9d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0430 10:07:22.534048   12472 cache.go:117] Successfully downloaded all kic artifacts
I0430 10:07:22.534089   12472 start.go:260] acquiring machines lock for minikube: {Name:mke0aaec204cf09e2ebc7f8434f0940ebff18499 Clock:{} Delay:500ms Timeout:15m0s Cancel:<nil>}
I0430 10:07:22.534157   12472 start.go:264] acquired machines lock for "minikube" in 50.431µs
I0430 10:07:22.534181   12472 start.go:86] Provisioning new machine with config: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: Memory:3900 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false NodeIP: NodePort:0 NodeName:} Nodes:[{Name:m01 IP: Port:8443 KubernetesVersion:v1.18.0 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true]} {Name:m01 IP: Port:8443 KubernetesVersion:v1.18.0 ControlPlane:true Worker:true}
I0430 10:07:22.534233   12472 start.go:107] createHost starting for "m01" (driver="docker")
🔥  Creating Kubernetes in docker container with (CPUs=2) (4 available), Memory=3900MB (15895MB available) ...
I0430 10:07:22.629079   12472 start.go:143] libmachine.API.Create for "minikube" (driver="docker")
I0430 10:07:22.629106   12472 client.go:169] LocalClient.Create starting
I0430 10:07:22.629138   12472 main.go:110] libmachine: Creating CA: /home/sghio/.minikube/certs/ca.pem
I0430 10:07:22.798692   12472 main.go:110] libmachine: Creating client certificate: /home/sghio/.minikube/certs/cert.pem
I0430 10:07:22.885282   12472 oci.go:250] executing with [docker ps -a --format {{.Names}}] timeout: 30s
I0430 10:07:22.925378   12472 volumes.go:97] executing: [docker volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true]
I0430 10:07:22.967416   12472 oci.go:128] Successfully created a docker volume minikube
I0430 10:07:27.165149   12472 oci.go:250] executing with [docker inspect minikube --format={{.State.Status}}] timeout: 19s
I0430 10:07:27.210291   12472 oci.go:160] the created container "minikube" has a running status.
I0430 10:07:27.210313   12472 kic.go:142] Creating ssh key for kic: /home/sghio/.minikube/machines/minikube/id_rsa...
I0430 10:07:27.650290   12472 kic_runner.go:91] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0430 10:07:27.865326   12472 preload.go:81] Checking if preload exists for k8s version v1.18.0 and runtime docker
I0430 10:07:27.865371   12472 preload.go:97] Found local preload: /home/sghio/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4
I0430 10:07:27.865380   12472 kic.go:128] Starting extracting preloaded images to volume
I0430 10:07:27.865409   12472 volumes.go:85] executing: [docker run --rm --entrypoint /usr/bin/tar -v /home/sghio/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81 -I lz4 -xvf /preloaded.tar -C /extractDir]
I0430 10:07:34.468037   12472 kic.go:133] Took 6.602647 seconds to extract preloaded images to volume
I0430 10:07:34.468092   12472 oci.go:250] executing with [docker inspect -f {{.State.Status}} minikube] timeout: 19s
I0430 10:07:34.513660   12472 machine.go:86] provisioning docker machine ...
I0430 10:07:34.513688   12472 ubuntu.go:166] provisioning hostname "minikube"
I0430 10:07:34.546959   12472 main.go:110] libmachine: Using SSH client type: native
I0430 10:07:34.547130   12472 main.go:110] libmachine: &{{{<nil> 0 [] [] []} docker [0x7bf5d0] 0x7bf5a0 <nil>  [] 0s} 127.0.0.1 32770 <nil> <nil>}
I0430 10:07:34.547146   12472 main.go:110] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I0430 10:07:34.703799   12472 main.go:110] libmachine: SSH cmd err, output: <nil>: minikube

I0430 10:07:34.749673   12472 main.go:110] libmachine: Using SSH client type: native
I0430 10:07:34.749801   12472 main.go:110] libmachine: &{{{<nil> 0 [] [] []} docker [0x7bf5d0] 0x7bf5a0 <nil>  [] 0s} 127.0.0.1 32770 <nil> <nil>}
I0430 10:07:34.749824   12472 main.go:110] libmachine: About to run SSH command:

		if ! grep -xq '.*\sminikube' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts;
			else 
				echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts; 
			fi
		fi
I0430 10:07:34.855436   12472 main.go:110] libmachine: SSH cmd err, output: <nil>: 
I0430 10:07:34.855513   12472 ubuntu.go:172] set auth options {CertDir:/home/sghio/.minikube CaCertPath:/home/sghio/.minikube/certs/ca.pem CaPrivateKeyPath:/home/sghio/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/sghio/.minikube/machines/server.pem ServerKeyPath:/home/sghio/.minikube/machines/server-key.pem ClientKeyPath:/home/sghio/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/sghio/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/sghio/.minikube}
I0430 10:07:34.855618   12472 ubuntu.go:174] setting up certificates
I0430 10:07:34.855662   12472 provision.go:83] configureAuth start
I0430 10:07:34.913209   12472 provision.go:132] copyHostCerts
I0430 10:07:34.913386   12472 provision.go:106] generating server cert: /home/sghio/.minikube/machines/server.pem ca-key=/home/sghio/.minikube/certs/ca.pem private-key=/home/sghio/.minikube/certs/ca-key.pem org=sghio.minikube san=[172.17.0.2 localhost 127.0.0.1]
I0430 10:07:34.999503   12472 provision.go:160] copyRemoteCerts
I0430 10:07:35.057223   12472 ssh_runner.go:101] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0430 10:07:35.136430   12472 ssh_runner.go:155] Checked if /etc/docker/server-key.pem exists, but got error: Process exited with status 1
I0430 10:07:35.137044   12472 ssh_runner.go:174] Transferring 1679 bytes to /etc/docker/server-key.pem
I0430 10:07:35.138467   12472 ssh_runner.go:193] server-key.pem: copied 1679 bytes
I0430 10:07:35.181308   12472 ssh_runner.go:155] Checked if /etc/docker/ca.pem exists, but got error: Process exited with status 1
I0430 10:07:35.181550   12472 ssh_runner.go:174] Transferring 1034 bytes to /etc/docker/ca.pem
I0430 10:07:35.182233   12472 ssh_runner.go:193] ca.pem: copied 1034 bytes
I0430 10:07:35.200628   12472 ssh_runner.go:155] Checked if /etc/docker/server.pem exists, but got error: Process exited with status 1
I0430 10:07:35.200754   12472 ssh_runner.go:174] Transferring 1115 bytes to /etc/docker/server.pem
I0430 10:07:35.201151   12472 ssh_runner.go:193] server.pem: copied 1115 bytes
I0430 10:07:35.213807   12472 provision.go:86] configureAuth took 358.120334ms
I0430 10:07:35.213829   12472 ubuntu.go:190] setting minikube options for container-runtime
I0430 10:07:35.247456   12472 main.go:110] libmachine: Using SSH client type: native
I0430 10:07:35.247583   12472 main.go:110] libmachine: &{{{<nil> 0 [] [] []} docker [0x7bf5d0] 0x7bf5a0 <nil>  [] 0s} 127.0.0.1 32770 <nil> <nil>}
I0430 10:07:35.247596   12472 main.go:110] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0430 10:07:35.356767   12472 main.go:110] libmachine: SSH cmd err, output: <nil>: overlay

I0430 10:07:35.356849   12472 ubuntu.go:71] root file system type: overlay
I0430 10:07:35.357325   12472 provision.go:295] Updating docker unit: /lib/systemd/system/docker.service ...
I0430 10:07:35.413837   12472 main.go:110] libmachine: Using SSH client type: native
I0430 10:07:35.413977   12472 main.go:110] libmachine: &{{{<nil> 0 [] [] []} docker [0x7bf5d0] 0x7bf5a0 <nil>  [] 0s} 127.0.0.1 32770 <nil> <nil>}
I0430 10:07:35.414054   12472 main.go:110] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket

[Service]
Type=notify



# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0430 10:07:35.565446   12472 main.go:110] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket

[Service]
Type=notify



# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP 

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

I0430 10:07:35.606333   12472 main.go:110] libmachine: Using SSH client type: native
I0430 10:07:35.606515   12472 main.go:110] libmachine: &{{{<nil> 0 [] [] []} docker [0x7bf5d0] 0x7bf5a0 <nil>  [] 0s} 127.0.0.1 32770 <nil> <nil>}
I0430 10:07:35.606548   12472 main.go:110] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
I0430 10:07:36.142712   12472 main.go:110] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
+++ /lib/systemd/system/docker.service.new	2020-04-30 08:07:35.556123031 +0000
@@ -8,24 +8,22 @@
 
 [Service]
 Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutSec=0
-RestartSec=2
-Restart=always
-
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
-
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
+ExecReload=/bin/kill -s HUP 
 
 # Having non-zero Limit*s causes performance problems due to accounting overhead
 # in the kernel. We recommend using cgroups to do container-local accounting.
@@ -33,9 +31,10 @@
 LimitNPROC=infinity
 LimitCORE=infinity
 
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
 TasksMax=infinity
+TimeoutStartSec=0
 
 # set delegate yes so that systemd does not reset the cgroups of docker containers
 Delegate=yes

I0430 10:07:36.142856   12472 machine.go:89] provisioned docker machine in 1.629176963s
I0430 10:07:36.142877   12472 client.go:172] LocalClient.Create took 13.513760515s
I0430 10:07:36.142914   12472 start.go:148] libmachine.API.Create for "minikube" took 13.513831037s
I0430 10:07:36.142943   12472 start.go:189] post-start starting for "minikube" (driver="docker")
I0430 10:07:36.142984   12472 start.go:199] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0430 10:07:36.143053   12472 start.go:223] determining appropriate runner for "docker"
I0430 10:07:36.143087   12472 start.go:234] Returning KICRunner for "docker" driver
I0430 10:07:36.143212   12472 kic_runner.go:91] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0430 10:07:36.247015   12472 filesync.go:118] Scanning /home/sghio/.minikube/addons for local assets ...
I0430 10:07:36.247082   12472 filesync.go:118] Scanning /home/sghio/.minikube/files for local assets ...
I0430 10:07:36.247116   12472 start.go:192] post-start completed in 104.133957ms
I0430 10:07:36.247292   12472 start.go:110] createHost completed in 13.71304846s
I0430 10:07:36.247301   12472 start.go:77] releasing machines lock for "minikube", held for 13.713133454s
I0430 10:07:36.278294   12472 profile.go:138] Saving config to /home/sghio/.minikube/profiles/minikube/config.json ...
I0430 10:07:36.278425   12472 kic_runner.go:91] Run: curl -sS -m 2 https://k8s.gcr.io/
I0430 10:07:36.278748   12472 kic_runner.go:91] Run: sudo systemctl is-active --quiet service containerd
I0430 10:07:36.407677   12472 kic_runner.go:91] Run: sudo systemctl stop -f containerd
I0430 10:07:36.612875   12472 kic_runner.go:91] Run: sudo systemctl is-active --quiet service containerd
I0430 10:07:36.733265   12472 kic_runner.go:91] Run: sudo systemctl is-active --quiet service crio
I0430 10:07:36.847373   12472 kic_runner.go:91] Run: sudo systemctl start docker
I0430 10:07:37.202364   12472 kic_runner.go:91] Run: docker version --format {{.Server.Version}}
🐳  Preparing Kubernetes v1.18.0 on Docker 19.03.2 ...
    ▪ kubeadm.pod-network-cidr=10.244.0.0/16
I0430 10:07:37.333589   12472 preload.go:81] Checking if preload exists for k8s version v1.18.0 and runtime docker
I0430 10:07:37.333613   12472 preload.go:97] Found local preload: /home/sghio/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4
I0430 10:07:37.333660   12472 kic_runner.go:91] Run: docker images --format {{.Repository}}:{{.Tag}}
I0430 10:07:37.333737   12472 certs.go:51] Setting up /home/sghio/.minikube/profiles/minikube for IP: 172.17.0.2
I0430 10:07:37.333942   12472 certs.go:173] generating minikubeCA CA: /home/sghio/.minikube/ca.key
I0430 10:07:37.475773   12472 docker.go:367] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.18.0
k8s.gcr.io/kube-controller-manager:v1.18.0
k8s.gcr.io/kube-apiserver:v1.18.0
k8s.gcr.io/kube-scheduler:v1.18.0
kubernetesui/dashboard:v2.0.0-rc6
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
kindest/kindnetd:0.5.3
k8s.gcr.io/etcd:3.4.3-0
kubernetesui/metrics-scraper:v1.0.2
gcr.io/k8s-minikube/storage-provisioner:v1.8.1

-- /stdout --
I0430 10:07:37.475804   12472 docker.go:305] Images already preloaded, skipping extraction
I0430 10:07:37.475854   12472 kic_runner.go:91] Run: docker images --format {{.Repository}}:{{.Tag}}
I0430 10:07:37.486027   12472 crypto.go:157] Writing cert to /home/sghio/.minikube/ca.crt ...
I0430 10:07:37.486070   12472 lock.go:35] WriteFile acquiring /home/sghio/.minikube/ca.crt: {Name:mk6d5a59d5bb83689cb70ba1147215f91d76ffe3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0430 10:07:37.486301   12472 crypto.go:165] Writing key to /home/sghio/.minikube/ca.key ...
I0430 10:07:37.486313   12472 lock.go:35] WriteFile acquiring /home/sghio/.minikube/ca.key: {Name:mka85abcfcbdd4bcf489fe4cfd81b6c9b3cd0c95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0430 10:07:37.486387   12472 certs.go:173] generating proxyClientCA CA: /home/sghio/.minikube/proxy-client-ca.key
I0430 10:07:37.609290   12472 crypto.go:157] Writing cert to /home/sghio/.minikube/proxy-client-ca.crt ...
I0430 10:07:37.609334   12472 lock.go:35] WriteFile acquiring /home/sghio/.minikube/proxy-client-ca.crt: {Name:mkccd8dd8117dcaf507b1991e3460bf1e38f609c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0430 10:07:37.609504   12472 crypto.go:165] Writing key to /home/sghio/.minikube/proxy-client-ca.key ...
I0430 10:07:37.609518   12472 lock.go:35] WriteFile acquiring /home/sghio/.minikube/proxy-client-ca.key: {Name:mk920a3adf0028c24764a9fde7110746c5fb1d66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0430 10:07:37.609627   12472 certs.go:267] generating minikube-user signed cert: /home/sghio/.minikube/profiles/minikube/client.key
I0430 10:07:37.609634   12472 crypto.go:69] Generating cert /home/sghio/.minikube/profiles/minikube/client.crt with IP's: []
I0430 10:07:37.611467   12472 docker.go:367] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.18.0
k8s.gcr.io/kube-scheduler:v1.18.0
k8s.gcr.io/kube-controller-manager:v1.18.0
k8s.gcr.io/kube-apiserver:v1.18.0
kubernetesui/dashboard:v2.0.0-rc6
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
kindest/kindnetd:0.5.3
k8s.gcr.io/etcd:3.4.3-0
kubernetesui/metrics-scraper:v1.0.2
gcr.io/k8s-minikube/storage-provisioner:v1.8.1

-- /stdout --
I0430 10:07:37.611492   12472 cache_images.go:69] Images are preloaded, skipping loading
I0430 10:07:37.611527   12472 kubeadm.go:125] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.0.2 APIServerPort:8443 KubernetesVersion:v1.18.0 EtcdDataDir:/var/lib/minikube/etcd ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket: ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.0.2"]]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:172.17.0.2 ControlPlaneAddress:172.17.0.2}
I0430 10:07:37.611607   12472 kubeadm.go:129] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.17.0.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: "minikube"
  kubeletExtraArgs:
    node-ip: 172.17.0.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "172.17.0.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: 172.17.0.2:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.18.0
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
metricsBindAddress: 172.17.0.2:10249

I0430 10:07:37.611713   12472 kic_runner.go:91] Run: docker info --format {{.CgroupDriver}}
I0430 10:07:37.737565   12472 crypto.go:157] Writing cert to /home/sghio/.minikube/profiles/minikube/client.crt ...
I0430 10:07:37.737599   12472 lock.go:35] WriteFile acquiring /home/sghio/.minikube/profiles/minikube/client.crt: {Name:mke8ce2b16c2e3316ac1b7f943d7f7656ae9421e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0430 10:07:37.737758   12472 crypto.go:165] Writing key to /home/sghio/.minikube/profiles/minikube/client.key ...
I0430 10:07:37.737774   12472 lock.go:35] WriteFile acquiring /home/sghio/.minikube/profiles/minikube/client.key: {Name:mk9683033689897f89636497911eb129996e943a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0430 10:07:37.737866   12472 certs.go:267] generating minikube signed cert: /home/sghio/.minikube/profiles/minikube/apiserver.key.eaa33411
I0430 10:07:37.737878   12472 crypto.go:69] Generating cert /home/sghio/.minikube/profiles/minikube/apiserver.crt.eaa33411 with IP's: [172.17.0.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0430 10:07:37.751754   12472 kubeadm.go:671] kubelet [Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.18.0/kubelet --authorization-mode=Webhook --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroup-driver=cgroupfs --client-ca-file=/var/lib/minikube/certs/ca.crt --cluster-domain=cluster.local --config=/var/lib/kubelet/config.yaml --container-runtime=docker --fail-swap-on=false --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.0.2 --pod-manifest-path=/etc/kubernetes/manifests

[Install]
 config:
{KubernetesVersion:v1.18.0 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false NodeIP: NodePort:0 NodeName:}
I0430 10:07:37.751882   12472 kic_runner.go:91] Run: sudo ls /var/lib/minikube/binaries/v1.18.0
I0430 10:07:37.856126   12472 binaries.go:42] Found k8s binaries, skipping transfer
I0430 10:07:37.856212   12472 kic_runner.go:91] Run: sudo mkdir -p /var/tmp/minikube /etc/systemd/system/kubelet.service.d /lib/systemd/system
I0430 10:07:37.982832   12472 crypto.go:157] Writing cert to /home/sghio/.minikube/profiles/minikube/apiserver.crt.eaa33411 ...
I0430 10:07:37.982872   12472 lock.go:35] WriteFile acquiring /home/sghio/.minikube/profiles/minikube/apiserver.crt.eaa33411: {Name:mkb6122e31e6c32acd26acf8804ab8f53866b37d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0430 10:07:37.983947   12472 crypto.go:165] Writing key to /home/sghio/.minikube/profiles/minikube/apiserver.key.eaa33411 ...
I0430 10:07:37.983965   12472 lock.go:35] WriteFile acquiring /home/sghio/.minikube/profiles/minikube/apiserver.key.eaa33411: {Name:mkec474db54192792e785f0629735cc01eee2656 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0430 10:07:37.984057   12472 certs.go:278] copying /home/sghio/.minikube/profiles/minikube/apiserver.crt.eaa33411 -> /home/sghio/.minikube/profiles/minikube/apiserver.crt
I0430 10:07:37.984114   12472 certs.go:282] copying /home/sghio/.minikube/profiles/minikube/apiserver.key.eaa33411 -> /home/sghio/.minikube/profiles/minikube/apiserver.key
I0430 10:07:37.984167   12472 certs.go:267] generating aggregator signed cert: /home/sghio/.minikube/profiles/minikube/proxy-client.key
I0430 10:07:37.984175   12472 crypto.go:69] Generating cert /home/sghio/.minikube/profiles/minikube/proxy-client.crt with IP's: []
I0430 10:07:38.268445   12472 crypto.go:157] Writing cert to /home/sghio/.minikube/profiles/minikube/proxy-client.crt ...
I0430 10:07:38.268732   12472 lock.go:35] WriteFile acquiring /home/sghio/.minikube/profiles/minikube/proxy-client.crt: {Name:mkfd57c4901be88a3eae522121abebcab93205ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0430 10:07:38.268908   12472 crypto.go:165] Writing key to /home/sghio/.minikube/profiles/minikube/proxy-client.key ...
I0430 10:07:38.268943   12472 lock.go:35] WriteFile acquiring /home/sghio/.minikube/profiles/minikube/proxy-client.key: {Name:mk0d46fb387d8097d6c4c8b8fbdabfaae910351d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0430 10:07:38.269141   12472 certs.go:330] found cert: ca-key.pem (1675 bytes)
I0430 10:07:38.269220   12472 certs.go:330] found cert: ca.pem (1034 bytes)
I0430 10:07:38.269281   12472 certs.go:330] found cert: cert.pem (1074 bytes)
I0430 10:07:38.269331   12472 certs.go:330] found cert: key.pem (1675 bytes)
I0430 10:07:38.270955   12472 certs.go:120] copying: /var/lib/minikube/certs/apiserver.crt
I0430 10:07:38.343713   12472 kic_runner.go:91] Run: /bin/bash -c "pgrep kubelet && diff -u /lib/systemd/system/kubelet.service /lib/systemd/system/kubelet.service.new && diff -u /etc/systemd/system/kubelet.service.d/10-kubeadm.conf /etc/systemd/system/kubelet.service.d/10-kubeadm.conf.new"
I0430 10:07:38.429050   12472 certs.go:120] copying: /var/lib/minikube/certs/apiserver.key
I0430 10:07:38.517727   12472 kic_runner.go:91] Run: /bin/bash -c "sudo cp /lib/systemd/system/kubelet.service.new /lib/systemd/system/kubelet.service && sudo cp /etc/systemd/system/kubelet.service.d/10-kubeadm.conf.new /etc/systemd/system/kubelet.service.d/10-kubeadm.conf && sudo systemctl daemon-reload && sudo systemctl restart kubelet"
I0430 10:07:38.591908   12472 certs.go:120] copying: /var/lib/minikube/certs/proxy-client.crt
I0430 10:07:38.712795   12472 certs.go:120] copying: /var/lib/minikube/certs/proxy-client.key
I0430 10:07:38.863445   12472 certs.go:120] copying: /var/lib/minikube/certs/ca.crt
I0430 10:07:38.992988   12472 certs.go:120] copying: /var/lib/minikube/certs/ca.key
I0430 10:07:39.130061   12472 certs.go:120] copying: /var/lib/minikube/certs/proxy-client-ca.crt
I0430 10:07:39.233162   12472 certs.go:120] copying: /var/lib/minikube/certs/proxy-client-ca.key
I0430 10:07:39.336661   12472 certs.go:120] copying: /usr/share/ca-certificates/minikubeCA.pem
I0430 10:07:39.433851   12472 certs.go:120] copying: /var/lib/minikube/kubeconfig
I0430 10:07:39.555350   12472 kic_runner.go:91] Run: openssl version
I0430 10:07:39.676653   12472 kic_runner.go:91] Run: sudo /bin/bash -c "test -f /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0430 10:07:39.821609   12472 kic_runner.go:91] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0430 10:07:39.936146   12472 certs.go:370] hashing: -rw-r--r-- 1 root root 1066 Apr 30 08:07 /usr/share/ca-certificates/minikubeCA.pem
I0430 10:07:39.936208   12472 kic_runner.go:91] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0430 10:07:40.034626   12472 kic_runner.go:91] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0430 10:07:40.151290   12472 kubeadm.go:278] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: Memory:3900 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false NodeIP: NodePort:0 NodeName:} Nodes:[{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true]}
I0430 10:07:40.151374   12472 kic_runner.go:91] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0430 10:07:40.292635   12472 kic_runner.go:91] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0430 10:07:40.432833   12472 kic_runner.go:91] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0430 10:07:40.563680   12472 kubeadm.go:214] ignoring SystemVerification for kubeadm because of either driver or kubernetes version
I0430 10:07:40.563739   12472 kic_runner.go:91] Run: sudo /bin/bash -c "grep https://172.17.0.2:8443 /etc/kubernetes/admin.conf || sudo rm -f /etc/kubernetes/admin.conf"
I0430 10:07:40.671124   12472 kic_runner.go:91] Run: sudo /bin/bash -c "grep https://172.17.0.2:8443 /etc/kubernetes/kubelet.conf || sudo rm -f /etc/kubernetes/kubelet.conf"
I0430 10:07:40.774686   12472 kic_runner.go:91] Run: sudo /bin/bash -c "grep https://172.17.0.2:8443 /etc/kubernetes/controller-manager.conf || sudo rm -f /etc/kubernetes/controller-manager.conf"
I0430 10:07:40.926180   12472 kic_runner.go:91] Run: sudo /bin/bash -c "grep https://172.17.0.2:8443 /etc/kubernetes/scheduler.conf || sudo rm -f /etc/kubernetes/scheduler.conf"
I0430 10:07:41.064150   12472 kic_runner.go:91] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0430 10:07:59.995145   12472 kic_runner.go:118] Done: [docker exec --privileged minikube /bin/bash -c sudo env PATH=/var/lib/minikube/binaries/v1.18.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: (18.930966325s)
I0430 10:07:59.995297   12472 kic_runner.go:91] Run: sudo /var/lib/minikube/binaries/v1.18.0/kubectl create --kubeconfig=/var/lib/minikube/kubeconfig -f -
I0430 10:08:00.552101   12472 kic_runner.go:91] Run: sudo /var/lib/minikube/binaries/v1.18.0/kubectl label nodes minikube.k8s.io/version=v1.9.2 minikube.k8s.io/commit=93af9c1e43cab9618e301bc9fa720c63d5efa393 minikube.k8s.io/name=minikube minikube.k8s.io/updated_at=2020_04_30T10_08_00_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I0430 10:08:00.705985   12472 kic_runner.go:91] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0430 10:08:00.817455   12472 ops.go:35] apiserver oom_adj: -16
I0430 10:08:00.817545   12472 kic_runner.go:91] Run: sudo /var/lib/minikube/binaries/v1.18.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0430 10:08:00.975747   12472 kubeadm.go:794] duration metric: took 158.208569ms to wait for elevateKubeSystemPrivileges.
I0430 10:08:00.975775   12472 kubeadm.go:280] StartCluster complete in 20.824487319s
I0430 10:08:00.975800   12472 settings.go:123] acquiring lock: {Name:mk013ff0e8a25c3de2f43903b5c51ec0f247ad66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0430 10:08:00.975907   12472 settings.go:131] Updating kubeconfig:  /home/sghio/.kube/config
I0430 10:08:00.977180   12472 lock.go:35] WriteFile acquiring /home/sghio/.kube/config: {Name:mk2c06f5911a380e38e8cba80ba830ac08c75596 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0430 10:08:00.977347   12472 addons.go:292] enableAddons start: toEnable=map[], additional=[]
I0430 10:08:00.977440   12472 addons.go:60] IsEnabled "ingress" = false (listed in config=false)
I0430 10:08:00.977502   12472 addons.go:60] IsEnabled "istio" = false (listed in config=false)
I0430 10:08:00.977556   12472 addons.go:60] IsEnabled "nvidia-driver-installer" = false (listed in config=false)
I0430 10:08:00.977605   12472 addons.go:60] IsEnabled "helm-tiller" = false (listed in config=false)
I0430 10:08:00.977656   12472 addons.go:60] IsEnabled "efk" = false (listed in config=false)
I0430 10:08:00.977709   12472 addons.go:60] IsEnabled "registry-creds" = false (listed in config=false)
I0430 10:08:00.977757   12472 addons.go:60] IsEnabled "logviewer" = false (listed in config=false)
I0430 10:08:00.977806   12472 addons.go:60] IsEnabled "freshpod" = false (listed in config=false)
I0430 10:08:00.977860   12472 addons.go:60] IsEnabled "dashboard" = false (listed in config=false)
I0430 10:08:00.977910   12472 addons.go:60] IsEnabled "storage-provisioner" = false (listed in config=false)
I0430 10:08:00.977959   12472 addons.go:60] IsEnabled "storage-provisioner-gluster" = false (listed in config=false)
I0430 10:08:00.978017   12472 addons.go:60] IsEnabled "registry" = false (listed in config=false)
I0430 10:08:00.978117   12472 addons.go:60] IsEnabled "registry-aliases" = false (listed in config=false)
I0430 10:08:00.978177   12472 addons.go:60] IsEnabled "nvidia-gpu-device-plugin" = false (listed in config=false)
I0430 10:08:00.978241   12472 addons.go:60] IsEnabled "gvisor" = false (listed in config=false)
I0430 10:08:00.978296   12472 addons.go:60] IsEnabled "ingress-dns" = false (listed in config=false)
I0430 10:08:00.978351   12472 addons.go:60] IsEnabled "default-storageclass" = false (listed in config=false)
I0430 10:08:00.978405   12472 addons.go:60] IsEnabled "istio-provisioner" = false (listed in config=false)
I0430 10:08:00.978463   12472 addons.go:60] IsEnabled "metrics-server" = false (listed in config=false)
🌟  Enabling addons: default-storageclass, storage-provisioner
I0430 10:08:00.980021   12472 addons.go:46] Setting default-storageclass=true in profile "minikube"
I0430 10:08:00.980286   12472 addons.go:242] enableOrDisableStorageClasses default-storageclass=true on "minikube"
I0430 10:08:00.983576   12472 oci.go:250] executing with [docker inspect -f {{.State.Status}} minikube] timeout: 19s
I0430 10:08:01.025164   12472 addons.go:105] Setting addon default-storageclass=true in "minikube"
I0430 10:08:01.025272   12472 addons.go:60] IsEnabled "default-storageclass" = false (listed in config=false)
W0430 10:08:01.025285   12472 addons.go:120] addon default-storageclass should already be in state true
I0430 10:08:01.025301   12472 host.go:65] Checking if "minikube" exists ...
I0430 10:08:01.025538   12472 oci.go:250] executing with [docker inspect -f {{.State.Status}} minikube] timeout: 19s
I0430 10:08:01.060978   12472 addons.go:209] installing /etc/kubernetes/addons/storageclass.yaml
I0430 10:08:01.158845   12472 kic_runner.go:91] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0430 10:08:01.386091   12472 addons.go:71] Writing out "minikube" config to set default-storageclass=true...
I0430 10:08:01.386240   12472 addons.go:46] Setting storage-provisioner=true in profile "minikube"
I0430 10:08:01.386333   12472 addons.go:105] Setting addon storage-provisioner=true in "minikube"
I0430 10:08:01.386395   12472 addons.go:60] IsEnabled "storage-provisioner" = false (listed in config=false)
W0430 10:08:01.386407   12472 addons.go:120] addon storage-provisioner should already be in state true
I0430 10:08:01.386423   12472 host.go:65] Checking if "minikube" exists ...
I0430 10:08:01.386648   12472 oci.go:250] executing with [docker inspect -f {{.State.Status}} minikube] timeout: 19s
I0430 10:08:01.419079   12472 addons.go:209] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0430 10:08:01.526086   12472 kic_runner.go:91] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0430 10:08:01.751616   12472 addons.go:71] Writing out "minikube" config to set storage-provisioner=true...
I0430 10:08:01.751768   12472 addons.go:294] enableAddons completed in 774.420475ms
I0430 10:08:01.752086   12472 kapi.go:58] client config for minikube: &rest.Config{Host:"https://172.17.0.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/sghio/.minikube/profiles/minikube/client.crt", KeyFile:"/home/sghio/.minikube/profiles/minikube/client.key", CAFile:"/home/sghio/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x15f1fb0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil)}
I0430 10:08:01.753191   12472 api_server.go:46] waiting for apiserver process to appear ...
I0430 10:08:01.753236   12472 kic_runner.go:91] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0430 10:08:01.870240   12472 api_server.go:66] duration metric: took 118.445073ms to wait for apiserver process to appear ...
I0430 10:08:01.870260   12472 api_server.go:82] waiting for apiserver healthz status ...
I0430 10:08:01.870270   12472 api_server.go:184] Checking apiserver healthz at https://172.17.0.2:8443/healthz ...
I0430 10:08:01.874068   12472 api_server.go:135] control plane version: v1.18.0
I0430 10:08:01.874086   12472 api_server.go:125] duration metric: took 3.819016ms to wait for apiserver health ...
I0430 10:08:01.874095   12472 system_pods.go:37] waiting for kube-system pods to appear ...
I0430 10:08:01.879188   12472 system_pods.go:55] 1 kube-system pods found
I0430 10:08:01.879214   12472 system_pods.go:57] "storage-provisioner" [622b11e2-91d2-41a8-98dc-3eec9b185fd4] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0430 10:08:02.387287   12472 system_pods.go:55] 1 kube-system pods found
I0430 10:08:02.387387   12472 system_pods.go:57] "storage-provisioner" [622b11e2-91d2-41a8-98dc-3eec9b185fd4] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0430 10:08:02.885049   12472 system_pods.go:55] 1 kube-system pods found
I0430 10:08:02.885132   12472 system_pods.go:57] "storage-provisioner" [622b11e2-91d2-41a8-98dc-3eec9b185fd4] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0430 10:08:03.385587   12472 system_pods.go:55] 1 kube-system pods found
I0430 10:08:03.385662   12472 system_pods.go:57] "storage-provisioner" [622b11e2-91d2-41a8-98dc-3eec9b185fd4] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0430 10:08:03.885571   12472 system_pods.go:55] 1 kube-system pods found
I0430 10:08:03.885652   12472 system_pods.go:57] "storage-provisioner" [622b11e2-91d2-41a8-98dc-3eec9b185fd4] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0430 10:08:04.384807   12472 system_pods.go:55] 1 kube-system pods found
I0430 10:08:04.384896   12472 system_pods.go:57] "storage-provisioner" [622b11e2-91d2-41a8-98dc-3eec9b185fd4] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0430 10:08:04.884869   12472 system_pods.go:55] 1 kube-system pods found
I0430 10:08:04.884945   12472 system_pods.go:57] "storage-provisioner" [622b11e2-91d2-41a8-98dc-3eec9b185fd4] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0430 10:08:05.384425   12472 system_pods.go:55] 1 kube-system pods found
I0430 10:08:05.384496   12472 system_pods.go:57] "storage-provisioner" [622b11e2-91d2-41a8-98dc-3eec9b185fd4] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0430 10:08:05.884586   12472 system_pods.go:55] 1 kube-system pods found
I0430 10:08:05.884691   12472 system_pods.go:57] "storage-provisioner" [622b11e2-91d2-41a8-98dc-3eec9b185fd4] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0430 10:08:06.384610   12472 system_pods.go:55] 1 kube-system pods found
I0430 10:08:06.384687   12472 system_pods.go:57] "storage-provisioner" [622b11e2-91d2-41a8-98dc-3eec9b185fd4] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0430 10:08:06.886409   12472 system_pods.go:55] 1 kube-system pods found
I0430 10:08:06.886502   12472 system_pods.go:57] "storage-provisioner" [622b11e2-91d2-41a8-98dc-3eec9b185fd4] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0430 10:08:07.384279   12472 system_pods.go:55] 1 kube-system pods found
I0430 10:08:07.384367   12472 system_pods.go:57] "storage-provisioner" [622b11e2-91d2-41a8-98dc-3eec9b185fd4] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0430 10:08:07.883591   12472 system_pods.go:55] 1 kube-system pods found
I0430 10:08:07.883653   12472 system_pods.go:57] "storage-provisioner" [622b11e2-91d2-41a8-98dc-3eec9b185fd4] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0430 10:08:08.389645   12472 system_pods.go:55] 5 kube-system pods found
I0430 10:08:08.389715   12472 system_pods.go:57] "etcd-minikube" [59f914b1-82fd-4f52-b73d-48dc6e2bcd2d] Running
I0430 10:08:08.389761   12472 system_pods.go:57] "kube-apiserver-minikube" [5cb80706-d71c-42af-9ad5-f16fa7f2920e] Running
I0430 10:08:08.389794   12472 system_pods.go:57] "kube-controller-manager-minikube" [6b1f7113-b697-4709-b8ef-445e4c3bcab5] Running
I0430 10:08:08.389824   12472 system_pods.go:57] "kube-scheduler-minikube" [dc31c024-a7f6-4f86-b322-1595ef5458ad] Running
I0430 10:08:08.389859   12472 system_pods.go:57] "storage-provisioner" [622b11e2-91d2-41a8-98dc-3eec9b185fd4] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0430 10:08:08.389896   12472 system_pods.go:68] duration metric: took 6.515787075s to wait for pod list to return data ...
I0430 10:08:08.389932   12472 kubeadm.go:397] duration metric: took 6.638135471s to wait for : map[apiserver:true system_pods:true] ...
🏄  Done! kubectl is now configured to use "minikube"
I0430 10:08:08.531949   12472 start.go:454] kubectl: 1.18.2, cluster: 1.18.0 (minor skew: 0)

And after I successfully run minikube ssh:

docker info   
Client:
 Debug Mode: false

Server:
 Containers: 18
  Running: 18
  Paused: 0
  Stopped: 0
 Images: 11
 Server Version: 19.03.2
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: ff48f57fc83a8c44cf4ad5d672424a98ba37ded6
 runc version: 
 init version: 
 Security Options:
  seccomp
   Profile: default
 Kernel Version: 5.4.0-28-generic
 Operating System: Ubuntu 19.10 (containerized)
 OSType: linux
 Architecture: x86_64
 CPUs: 4
 Total Memory: 15.52GiB
 Name: minikube
 ID: WZUN:24RZ:FIYG:KMBH:IGWN:H567:3T5O:RAUY:2JZU:MECF:E56P:O5BJ
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
  provider=docker
 Experimental: false
 Insecure Registries:
  10.96.0.0/12
  127.0.0.0/8
 Live Restore Enabled: false

WARNING: No swap limit support

And:

minikube status
m01
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

As mentioned, the only difference in my setup is the daemon.json file specifying a different storage location and storage driver for Docker.

Lastly, my other containers run successfully in both configurations (local ext4 filesystem and NTFS filesystem).

@tstromberg tstromberg added help wanted, priority/awaiting-more-evidence, priority/backlog, kind/feature and removed priority/awaiting-more-evidence labels Apr 30, 2020
@tstromberg tstromberg changed the title from "Cannot specify storage driver option for docker preloaded tarball image" to "Allow alternative storage-driver to be selected" Apr 30, 2020
@tstromberg (Contributor)

PRs welcome for this. Is it possible that setting --docker-opt may work, with some experimentation?

In the meantime, using minikube start --preload=false may work for some.
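
A rough sketch of both suggestions (untested; assumes --docker-opt flags are forwarded to the dockerd inside the node container, as they are for the VM drivers):

# experiment: forward an alternative storage driver to the node's dockerd
minikube start --driver=docker --docker-opt storage-driver=vfs

# skip the overlay2-specific preload tarball; images are pulled individually instead
minikube start --driver=docker --preload=false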

@afbjorklund (Collaborator) commented Apr 30, 2020

I think something goes wrong in the KIC boot when running on the vfs storage driver.

The main issue here is NTFS; I don't think overlayfs accepts NTFS as the underlying layer...

Tested booting minikube on vfs, and that goes OK; it is just slow and eats a lot of disk.

After booting, the docker storage used is 23G (for the kicbase and for the docker images).
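
To confirm the overlayfs-on-NTFS limitation directly, a minimal check (hypothetical paths; run as root on the host) is to attempt an overlay mount with all layers on the NTFS partition:

mkdir -p /mnt/ntfs/{lower,upper,work,merged}
mount -t overlay overlay \
  -o lowerdir=/mnt/ntfs/lower,upperdir=/mnt/ntfs/upper,workdir=/mnt/ntfs/work \
  /mnt/ntfs/merged
# expected to fail on NTFS, e.g. with "invalid argument" or "wrong fs type"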

@afbjorklund (Collaborator)

Same issue with docker run --privileged docker:dind:

level=error msg="failed to mount overlay: invalid argument" storage-driver=overlay2

For some reason, minikube is hardcoding the driver, so instead of falling back the daemon just fails:

failed to start daemon: error initializing graphdriver: driver not supported

Without that setting, dockerd will default to overlay2 but fall back to aufs (or vfs) where overlay2 is unavailable.
So maybe the hardcoded driver should be removed from daemon.json? (A sketch of that is further down.)


I think it comes from that ancient kubernetes example file for docker.

https://kubernetes.io/docs/setup/production-environment/container-runtimes/#docker

{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}

It was created before overlay2 was the default, i.e. docker < 18.09:

Docker supports the following storage drivers:

  • overlay2 is the preferred storage driver, for all currently supported Linux distributions, and requires no extra configuration.
  • aufs is the preferred storage driver for Docker 18.06 and older, when running on Ubuntu 14.04 on kernel 3.13 which has no support for overlay2.
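
A hedged sketch of the same file with the hardcoded key dropped (assuming this really is the daemon.json that the kicbase image ships), letting dockerd pick overlay2 where possible and fall back elsewhere:

{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  }
}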

We can argue about json-file vs journald logs some other day. :-)

https://docs.docker.com/config/containers/logging/configure/

As usual (as with the cgroup driver), docker defaults to not using systemd.

https://www.projectatomic.io/blog/2015/04/logging-docker-container-output-to-journald/

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale label Jul 29, 2020
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label Aug 28, 2020
@sharifelgamal sharifelgamal added the lifecycle/frozen label and removed the lifecycle/rotten label Sep 16, 2020
@ho-229 commented Jun 18, 2024

Hi everyone, I have the same issue.

overlay2 cannot work on my f2fs partition, so I need to set the storage-driver to fuse-overlayfs.
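
For reference, a sketch of the host-side daemon.json for that setup (hypothetical path; fuse-overlayfs must be installed separately, and minikube would still need to honor the driver inside the node, which is what this issue asks for):

{
  "data-root": "/path/to/f2fs/docker",
  "storage-driver": "fuse-overlayfs"
}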
