docker: /etc/hosts entries from host machine are ignored #7802
This seems to be highly dependent on the system. For instance, with dnsmasq instead of systemd:
It is still resolvable, e.g. by:

```
$ ping foo.bar
PING foo.bar (1.2.3.4) 56(84) bytes of data.
From 10.190.2.50 icmp_seq=1 Destination Net Unreachable
^C
--- foo.bar ping statistics ---
1 packets transmitted, 0 received, +1 errors, 100% packet loss, time 0ms
```
It seems like the VirtualBox DNS server implements the same feature (as systemd-resolved does), but the Docker driver just goes to the regular DNS (in my case 8.8.8.8, no dnsmasq support...).

Same thing for the Podman driver. So it sounds more like a "missing feature" than an actual bug?
Here are more details about my use case: I have an ingress which exposes the address
The feature is called "NAT DNS proxy": https://www.virtualbox.org/manual/ch09.html#nat-adv-dns

EDIT: Or rather, the actual feature is "Host DNS resolver", which is an add-on to that feature... It is enabled by the machine virtualbox driver, when creating the VirtualBox virtual machine:

```go
hostDNSResolver := "off"
if d.HostDNSResolver {
	hostDNSResolver = "on"
}
dnsProxy := "off"
if d.DNSProxy {
	dnsProxy = "on"
}
```

```go
"--natdnshostresolver1", hostDNSResolver,
"--natdnsproxy1", dnsProxy,
```

There is no built-in functionality to do the same for Docker, so the workaround is to edit
Theoretically one could deploy a similar DNS proxy on the host, and configure Docker to talk to it. It might even be an existing project somewhere? Sort of depends on how many hosts are involved... There were some other nice use cases for this, like resolving various minikube profiles on the host. I seem to recall that someone was dabbling with it, but can only find
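A host-side proxy like that would mostly need to answer queries from hosts(5)-format data before forwarding the rest upstream. A minimal, hypothetical sketch of just the lookup-table part (not a full DNS server):

```python
def parse_hosts(text):
    """Parse /etc/hosts-format text into a {hostname: ip} lookup table."""
    table = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and blank lines
        if not line:
            continue
        ip, *names = line.split()             # one IP, then one or more names
        for name in names:
            table.setdefault(name, ip)        # first entry wins, as in glibc
    return table

print(parse_hosts("127.0.0.1 localhost\n1.2.3.4 foo.bar  # example entry\n"))
# {'localhost': '127.0.0.1', 'foo.bar': '1.2.3.4'}
```

The proxy would consult this table for A queries and forward everything else to the regular resolver.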
This one looked promising, if someone wants to try it: https://github.com/janeczku/go-dnsmasq
@afbjorklund Thank you for the workaround. It appears that it fixes the DNS resolution from

From `docker@minikube`:

```
docker@minikube:~$ tail -1 /etc/hosts
1.2.3.4 foo.bar
docker@minikube:~$ ping foo.bar
PING foo.bar (1.2.3.4) 56(84) bytes of data.
```

From a pod running in minikube:

```
root@mypod:/usr/src/app# ping foo.bar
ping: foo.bar: No address associated with hostname
```

Restarting the
Well, the pod containers also have their own

This is why we have DNS, so that we don't have to copy these things around.
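For the record, Kubernetes does let you set per-pod `/etc/hosts` entries declaratively via `hostAliases`; a sketch using the example values from this thread (the pod name and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod            # placeholder name
spec:
  hostAliases:           # rendered into the pod's /etc/hosts by the kubelet
  - ip: "1.2.3.4"
    hostnames:
    - "foo.bar"
  containers:
  - name: app
    image: busybox       # placeholder image
    command: ["sleep", "3600"]
```

This avoids copying entries by hand into each pod, though per the comment above, DNS remains the better long-term answer.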
FWIW the following hack does the trick (note: use with care):

When the pod is started,
I would accept a PR that makes the docker driver act like our VM drivers! This seems doable by mimicking our machine driver for virtualbox: https://github.com/machine-drivers/machine
How about adding a new parameter like
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with

Send feedback to sig-contributor-experience at kubernetes/community.
I would still accept a PR that would implement this feature
Go to /etc/docker/daemon.json (create it if it does not exist) and add the following:

```json
{
  "dns": ["my.dns.server.ip:port"]
}
```

Then you can run your own DNS server (e.g. CoreDNS) and mount into it an /etc/hosts-format DB.
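With CoreDNS, that could look like the following `Corefile` using the `hosts` plugin (the file path and upstream resolver here are assumptions, not from the thread):

```
.:53 {
    # answer from a mounted /etc/hosts-format DB first
    hosts /etc/coredns/hosts.db {
        fallthrough
    }
    # everything else goes to an upstream resolver
    forward . 8.8.8.8
}
```

`fallthrough` makes names missing from the DB continue on to `forward`, so regular resolution still works.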
When using the `docker` driver, the `/etc/hosts` file from the host machine is ignored. Things work as expected when I use the `virtualbox` driver.

Steps to reproduce the issue:

- `minikube start --driver=virtualbox --alsologtostderr`: `foo.bar` is successfully resolved to `1.2.3.4`
- `minikube start --driver=docker --alsologtostderr`: the `/etc/hosts` entry for `foo.bar` is ignored
I0420 13:22:24.795486 9717 main.go:110] libmachine: Using SSH client type: native
I0420 13:22:24.795594 9717 main.go:110] libmachine: &{{{ 0 [] [] []} docker [0x7bf5d0] 0x7bf5a0 [] 0s} 127.0.0.1 32785 }
I0420 13:22:24.795607 9717 main.go:110] libmachine: About to run SSH command:
I0420 13:22:24.918101 9717 main.go:110] libmachine: SSH cmd err, output: :
I0420 13:22:24.918159 9717 ubuntu.go:172] set auth options {CertDir:/home/aurelien/.minikube CaCertPath:/home/aurelien/.minikube/certs/ca.pem CaPrivateKeyPath:/home/aurelien/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/aurelien/.minikube/machines/server.pem ServerKeyPath:/home/aurelien/.minikube/machines/server-key.pem ClientKeyPath:/home/aurelien/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/aurelien/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/aurelien/.minikube}
I0420 13:22:24.918204 9717 ubuntu.go:174] setting up certificates
I0420 13:22:24.918234 9717 provision.go:83] configureAuth start
I0420 13:22:24.988668 9717 provision.go:132] copyHostCerts
I0420 13:22:24.988839 9717 provision.go:106] generating server cert: /home/aurelien/.minikube/machines/server.pem ca-key=/home/aurelien/.minikube/certs/ca.pem private-key=/home/aurelien/.minikube/certs/ca-key.pem org=aurelien.minikube san=[172.17.0.2 localhost 127.0.0.1]
I0420 13:22:25.107222 9717 provision.go:160] copyRemoteCerts
I0420 13:22:25.171394 9717 ssh_runner.go:101] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0420 13:22:25.238579 9717 ssh_runner.go:155] Checked if /etc/docker/ca.pem exists, but got error: Process exited with status 1
I0420 13:22:25.239145 9717 ssh_runner.go:174] Transferring 1042 bytes to /etc/docker/ca.pem
I0420 13:22:25.240384 9717 ssh_runner.go:193] ca.pem: copied 1042 bytes
I0420 13:22:25.285147 9717 ssh_runner.go:155] Checked if /etc/docker/server.pem exists, but got error: Process exited with status 1
I0420 13:22:25.285684 9717 ssh_runner.go:174] Transferring 1123 bytes to /etc/docker/server.pem
I0420 13:22:25.286986 9717 ssh_runner.go:193] server.pem: copied 1123 bytes
I0420 13:22:25.334227 9717 ssh_runner.go:155] Checked if /etc/docker/server-key.pem exists, but got error: Process exited with status 1
I0420 13:22:25.334826 9717 ssh_runner.go:174] Transferring 1675 bytes to /etc/docker/server-key.pem
I0420 13:22:25.336282 9717 ssh_runner.go:193] server-key.pem: copied 1675 bytes
I0420 13:22:25.372539 9717 provision.go:86] configureAuth took 454.269329ms
I0420 13:22:25.372595 9717 ubuntu.go:190] setting minikube options for container-runtime
I0420 13:22:25.446769 9717 main.go:110] libmachine: Using SSH client type: native
I0420 13:22:25.446873 9717 main.go:110] libmachine: &{{{ 0 [] [] []} docker [0x7bf5d0] 0x7bf5a0 [] 0s} 127.0.0.1 32785 }
I0420 13:22:25.446886 9717 main.go:110] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0420 13:22:25.579692 9717 main.go:110] libmachine: SSH cmd err, output: : overlay
I0420 13:22:25.579748 9717 ubuntu.go:71] root file system type: overlay
I0420 13:22:25.580077 9717 provision.go:295] Updating docker unit: /lib/systemd/system/docker.service ...
I0420 13:22:25.650742 9717 main.go:110] libmachine: Using SSH client type: native
I0420 13:22:25.650840 9717 main.go:110] libmachine: &{{{ 0 [] [] []} docker [0x7bf5d0] 0x7bf5a0 [] 0s} 127.0.0.1 32785 }
I0420 13:22:25.650895 9717 main.go:110] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
[Service]
Type=notify
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0420 13:22:25.798776 9717 main.go:110] libmachine: SSH cmd err, output: : [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
[Service]
Type=notify
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0420 13:22:25.865210 9717 main.go:110] libmachine: Using SSH client type: native
I0420 13:22:25.865305 9717 main.go:110] libmachine: &{{{ 0 [] [] []} docker [0x7bf5d0] 0x7bf5a0 [] 0s} 127.0.0.1 32785 }
I0420 13:22:25.865320 9717 main.go:110] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
I0420 13:22:26.357565 9717 main.go:110] libmachine: SSH cmd err, output: : --- /lib/systemd/system/docker.service 2019-08-29 04:42:14.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2020-04-20 17:22:25.794628850 +0000
@@ -8,24 +8,22 @@
[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutSec=0
-RestartSec=2
-Restart=always
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
@@ -33,9 +31,10 @@
LimitNPROC=infinity
LimitCORE=infinity
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
TasksMax=infinity
+TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
I0420 13:22:26.357690 9717 machine.go:89] provisioned docker machine in 1.755773272s
I0420 13:22:26.357718 9717 client.go:172] LocalClient.Create took 5.534963784s
I0420 13:22:26.357767 9717 start.go:148] libmachine.API.Create for "minikube" took 5.535031853s
I0420 13:22:26.357787 9717 start.go:189] post-start starting for "minikube" (driver="docker")
I0420 13:22:26.357800 9717 start.go:199] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0420 13:22:26.357843 9717 start.go:234] Returning KICRunner for "docker" driver
I0420 13:22:26.357974 9717 kic_runner.go:91] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0420 13:22:26.454130 9717 filesync.go:118] Scanning /home/aurelien/.minikube/addons for local assets ...
I0420 13:22:26.454173 9717 filesync.go:118] Scanning /home/aurelien/.minikube/files for local assets ...
I0420 13:22:26.454190 9717 start.go:192] post-start completed in 96.393005ms
I0420 13:22:26.454334 9717 start.go:110] createHost completed in 5.674532637s
I0420 13:22:26.454342 9717 start.go:77] releasing machines lock for "minikube", held for 5.674591273s
I0420 13:22:26.492136 9717 profile.go:138] Saving config to /home/aurelien/.minikube/profiles/minikube/config.json ...
I0420 13:22:26.492224 9717 kic_runner.go:91] Run: curl -sS -m 2 https://k8s.gcr.io/
I0420 13:22:26.492302 9717 kic_runner.go:91] Run: sudo systemctl is-active --quiet service containerd
I0420 13:22:26.571247 9717 kic_runner.go:91] Run: sudo systemctl stop -f containerd
I0420 13:22:26.652075 9717 kic_runner.go:91] Run: sudo systemctl is-active --quiet service containerd
I0420 13:22:26.733092 9717 kic_runner.go:91] Run: sudo systemctl is-active --quiet service crio
I0420 13:22:26.807229 9717 kic_runner.go:91] Run: sudo systemctl start docker
I0420 13:22:27.242875 9717 kic_runner.go:91] Run: docker version --format {{.Server.Version}}
🐳 Preparing Kubernetes v1.18.0 on Docker 19.03.2 ...
▪ kubeadm.pod-network-cidr=10.244.0.0/16
I0420 13:22:27.343722 9717 certs.go:51] Setting up /home/aurelien/.minikube/profiles/minikube for IP: 172.17.0.2
I0420 13:22:27.343740 9717 certs.go:169] skipping minikubeCA CA generation: /home/aurelien/.minikube/ca.key
I0420 13:22:27.343750 9717 certs.go:169] skipping proxyClientCA CA generation: /home/aurelien/.minikube/proxy-client-ca.key
I0420 13:22:27.343760 9717 preload.go:81] Checking if preload exists for k8s version v1.18.0 and runtime docker
I0420 13:22:27.343776 9717 certs.go:267] generating minikube-user signed cert: /home/aurelien/.minikube/profiles/minikube/client.key
I0420 13:22:27.343781 9717 preload.go:97] Found local preload: /home/aurelien/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4
I0420 13:22:27.343785 9717 crypto.go:69] Generating cert /home/aurelien/.minikube/profiles/minikube/client.crt with IP's: []
I0420 13:22:27.343820 9717 kic_runner.go:91] Run: docker images --format {{.Repository}}:{{.Tag}}
I0420 13:22:27.442356 9717 docker.go:367] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.18.0
k8s.gcr.io/kube-controller-manager:v1.18.0
k8s.gcr.io/kube-apiserver:v1.18.0
k8s.gcr.io/kube-scheduler:v1.18.0
kubernetesui/dashboard:v2.0.0-rc6
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
kindest/kindnetd:0.5.3
k8s.gcr.io/etcd:3.4.3-0
kubernetesui/metrics-scraper:v1.0.2
gcr.io/k8s-minikube/storage-provisioner:v1.8.1
-- /stdout --
I0420 13:22:27.442415 9717 docker.go:305] Images already preloaded, skipping extraction
I0420 13:22:27.442476 9717 kic_runner.go:91] Run: docker images --format {{.Repository}}:{{.Tag}}
I0420 13:22:27.462790 9717 crypto.go:157] Writing cert to /home/aurelien/.minikube/profiles/minikube/client.crt ...
I0420 13:22:27.462836 9717 lock.go:35] WriteFile acquiring /home/aurelien/.minikube/profiles/minikube/client.crt: {Name:mkf9533d637068707a83d7570062997171d4356e Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0420 13:22:27.463019 9717 crypto.go:165] Writing key to /home/aurelien/.minikube/profiles/minikube/client.key ...
I0420 13:22:27.463051 9717 lock.go:35] WriteFile acquiring /home/aurelien/.minikube/profiles/minikube/client.key: {Name:mk63becf07a08e2aa7beaff2e2efc2d3028a5a67 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0420 13:22:27.463117 9717 certs.go:267] generating minikube signed cert: /home/aurelien/.minikube/profiles/minikube/apiserver.key.eaa33411
I0420 13:22:27.463125 9717 crypto.go:69] Generating cert /home/aurelien/.minikube/profiles/minikube/apiserver.crt.eaa33411 with IP's: [172.17.0.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0420 13:22:27.547852 9717 docker.go:367] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.18.0
k8s.gcr.io/kube-scheduler:v1.18.0
k8s.gcr.io/kube-controller-manager:v1.18.0
k8s.gcr.io/kube-apiserver:v1.18.0
kubernetesui/dashboard:v2.0.0-rc6
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
kindest/kindnetd:0.5.3
k8s.gcr.io/etcd:3.4.3-0
kubernetesui/metrics-scraper:v1.0.2
gcr.io/k8s-minikube/storage-provisioner:v1.8.1
-- /stdout --
I0420 13:22:27.547911 9717 cache_images.go:69] Images are preloaded, skipping loading
I0420 13:22:27.547960 9717 kubeadm.go:125] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.0.2 APIServerPort:8443 KubernetesVersion:v1.18.0 EtcdDataDir:/var/lib/minikube/etcd ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket: ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.0.2"]]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:172.17.0.2 ControlPlaneAddress:172.17.0.2}
I0420 13:22:27.548047 9717 kubeadm.go:129] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 172.17.0.2
bindPort: 8443
bootstrapTokens:
ttl: 24h0m0s
usages:
nodeRegistration:
criSocket: /var/run/dockershim.sock
name: "minikube"
kubeletExtraArgs:
node-ip: 172.17.0.2
taints: []
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "172.17.0.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: 172.17.0.2:8443
dns:
type: CoreDNS
etcd:
local:
dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.18.0
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
metricsBindAddress: 172.17.0.2:10249
I0420 13:22:27.548126 9717 kic_runner.go:91] Run: docker info --format {{.CgroupDriver}}
I0420 13:22:27.661090 9717 kubeadm.go:671] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.18.0/kubelet --authorization-mode=Webhook --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroup-driver=cgroupfs --client-ca-file=/var/lib/minikube/certs/ca.crt --cluster-domain=cluster.local --config=/var/lib/kubelet/config.yaml --container-runtime=docker --fail-swap-on=false --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.0.2 --pod-manifest-path=/etc/kubernetes/manifests
[Install]
config:
{KubernetesVersion:v1.18.0 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false NodeIP: NodePort:0 NodeName:}
I0420 13:22:27.661148 9717 kic_runner.go:91] Run: sudo ls /var/lib/minikube/binaries/v1.18.0
I0420 13:22:27.682710 9717 crypto.go:157] Writing cert to /home/aurelien/.minikube/profiles/minikube/apiserver.crt.eaa33411 ...
I0420 13:22:27.682730 9717 lock.go:35] WriteFile acquiring /home/aurelien/.minikube/profiles/minikube/apiserver.crt.eaa33411: {Name:mka9e31ad634c8e704ec0e8f2a48174144de8c5a Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0420 13:22:27.682867 9717 crypto.go:165] Writing key to /home/aurelien/.minikube/profiles/minikube/apiserver.key.eaa33411 ...
I0420 13:22:27.682874 9717 lock.go:35] WriteFile acquiring /home/aurelien/.minikube/profiles/minikube/apiserver.key.eaa33411: {Name:mkc4a6c6456e48003da88d7b808164492a01af16 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0420 13:22:27.682960 9717 certs.go:278] copying /home/aurelien/.minikube/profiles/minikube/apiserver.crt.eaa33411 -> /home/aurelien/.minikube/profiles/minikube/apiserver.crt
I0420 13:22:27.683018 9717 certs.go:282] copying /home/aurelien/.minikube/profiles/minikube/apiserver.key.eaa33411 -> /home/aurelien/.minikube/profiles/minikube/apiserver.key
I0420 13:22:27.683087 9717 certs.go:267] generating aggregator signed cert: /home/aurelien/.minikube/profiles/minikube/proxy-client.key
I0420 13:22:27.683094 9717 crypto.go:69] Generating cert /home/aurelien/.minikube/profiles/minikube/proxy-client.crt with IP's: []
I0420 13:22:27.735779 9717 binaries.go:42] Found k8s binaries, skipping transfer
I0420 13:22:27.735851 9717 kic_runner.go:91] Run: sudo mkdir -p /var/tmp/minikube /etc/systemd/system/kubelet.service.d /lib/systemd/system
I0420 13:22:27.911946 9717 crypto.go:157] Writing cert to /home/aurelien/.minikube/profiles/minikube/proxy-client.crt ...
I0420 13:22:27.911965 9717 lock.go:35] WriteFile acquiring /home/aurelien/.minikube/profiles/minikube/proxy-client.crt: {Name:mkd702074be9bc1a9c0e9f347e58877d21d15cdd Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0420 13:22:27.912053 9717 crypto.go:165] Writing key to /home/aurelien/.minikube/profiles/minikube/proxy-client.key ...
I0420 13:22:27.912061 9717 lock.go:35] WriteFile acquiring /home/aurelien/.minikube/profiles/minikube/proxy-client.key: {Name:mk33582d0cea0dfe3bb8901a4e0f5679bd5afafd Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0420 13:22:27.912142 9717 certs.go:330] found cert: ca-key.pem (1679 bytes)
I0420 13:22:27.912162 9717 certs.go:330] found cert: ca.pem (1042 bytes)
I0420 13:22:27.912175 9717 certs.go:330] found cert: cert.pem (1082 bytes)
I0420 13:22:27.912191 9717 certs.go:330] found cert: key.pem (1679 bytes)
I0420 13:22:27.912646 9717 certs.go:120] copying: /var/lib/minikube/certs/apiserver.crt
I0420 13:22:28.008067 9717 certs.go:120] copying: /var/lib/minikube/certs/apiserver.key
I0420 13:22:28.054214 9717 kic_runner.go:91] Run: /bin/bash -c "pgrep kubelet && diff -u /lib/systemd/system/kubelet.service /lib/systemd/system/kubelet.service.new && diff -u /etc/systemd/system/kubelet.service.d/10-kubeadm.conf /etc/systemd/system/kubelet.service.d/10-kubeadm.conf.new"
I0420 13:22:28.100651 9717 certs.go:120] copying: /var/lib/minikube/certs/proxy-client.crt
I0420 13:22:28.151716 9717 kic_runner.go:91] Run: /bin/bash -c "sudo cp /lib/systemd/system/kubelet.service.new /lib/systemd/system/kubelet.service && sudo cp /etc/systemd/system/kubelet.service.d/10-kubeadm.conf.new /etc/systemd/system/kubelet.service.d/10-kubeadm.conf && sudo systemctl daemon-reload && sudo systemctl restart kubelet"
I0420 13:22:28.189233 9717 certs.go:120] copying: /var/lib/minikube/certs/proxy-client.key
I0420 13:22:28.270618 9717 certs.go:120] copying: /var/lib/minikube/certs/ca.crt
I0420 13:22:28.357089 9717 certs.go:120] copying: /var/lib/minikube/certs/ca.key
I0420 13:22:28.435287 9717 certs.go:120] copying: /var/lib/minikube/certs/proxy-client-ca.crt
I0420 13:22:28.509545 9717 certs.go:120] copying: /var/lib/minikube/certs/proxy-client-ca.key
I0420 13:22:28.581822 9717 certs.go:120] copying: /usr/share/ca-certificates/minikubeCA.pem
I0420 13:22:28.655821 9717 certs.go:120] copying: /var/lib/minikube/kubeconfig
I0420 13:22:28.731027 9717 kic_runner.go:91] Run: openssl version
I0420 13:22:28.809633 9717 kic_runner.go:91] Run: sudo /bin/bash -c "test -f /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0420 13:22:28.881514 9717 kic_runner.go:91] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0420 13:22:28.950929 9717 certs.go:370] hashing: -rw-r--r-- 1 root root 1066 Oct 1 2019 /usr/share/ca-certificates/minikubeCA.pem
I0420 13:22:28.950979 9717 kic_runner.go:91] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0420 13:22:29.025498 9717 kic_runner.go:91] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0420 13:22:29.099408 9717 kubeadm.go:278] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: Memory:7900 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false NodeIP: NodePort:0 NodeName:} Nodes:[{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true]}
I0420 13:22:29.099511 9717 kic_runner.go:91] Run: docker ps --filter status=paused --filter=name=k8s_.*(kube-system) --format={{.ID}}
I0420 13:22:29.198364 9717 kic_runner.go:91] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0420 13:22:29.270792 9717 kic_runner.go:91] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0420 13:22:29.342979 9717 kubeadm.go:214] ignoring SystemVerification for kubeadm because of either driver or kubernetes version
I0420 13:22:29.343034 9717 kic_runner.go:91] Run: sudo /bin/bash -c "grep https://172.17.0.2:8443 /etc/kubernetes/admin.conf || sudo rm -f /etc/kubernetes/admin.conf"
I0420 13:22:29.417388 9717 kic_runner.go:91] Run: sudo /bin/bash -c "grep https://172.17.0.2:8443 /etc/kubernetes/kubelet.conf || sudo rm -f /etc/kubernetes/kubelet.conf"
I0420 13:22:29.496541 9717 kic_runner.go:91] Run: sudo /bin/bash -c "grep https://172.17.0.2:8443 /etc/kubernetes/controller-manager.conf || sudo rm -f /etc/kubernetes/controller-manager.conf"
I0420 13:22:29.569888 9717 kic_runner.go:91] Run: sudo /bin/bash -c "grep https://172.17.0.2:8443 /etc/kubernetes/scheduler.conf || sudo rm -f /etc/kubernetes/scheduler.conf"
I0420 13:22:29.647385 9717 kic_runner.go:91] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0420 13:22:47.133896 9717 kic_runner.go:118] Done: [docker exec --privileged minikube /bin/bash -c sudo env PATH=/var/lib/minikube/binaries/v1.18.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: (17.486485208s)
I0420 13:22:47.134029 9717 kic_runner.go:91] Run: sudo /var/lib/minikube/binaries/v1.18.0/kubectl create --kubeconfig=/var/lib/minikube/kubeconfig -f -
I0420 13:22:47.573561 9717 kic_runner.go:91] Run: sudo /var/lib/minikube/binaries/v1.18.0/kubectl label nodes minikube.k8s.io/version=v1.9.2 minikube.k8s.io/commit=93af9c1e43cab9618e301bc9fa720c63d5efa393 minikube.k8s.io/name=minikube minikube.k8s.io/updated_at=2020_04_20T13_22_47_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I0420 13:22:47.690320 9717 kic_runner.go:91] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0420 13:22:47.766552 9717 ops.go:35] apiserver oom_adj: -16
I0420 13:22:47.766612 9717 kic_runner.go:91] Run: sudo /var/lib/minikube/binaries/v1.18.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0420 13:22:47.875542 9717 kubeadm.go:794] duration metric: took 108.967171ms to wait for elevateKubeSystemPrivileges.
I0420 13:22:47.875561 9717 kubeadm.go:280] StartCluster complete in 18.776156547s
I0420 13:22:47.875574 9717 settings.go:123] acquiring lock: {Name:mka38e6d98ba4b3ca12911b7730c655289c87d20 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0420 13:22:47.875625 9717 settings.go:131] Updating kubeconfig: /home/aurelien/.kube/config
I0420 13:22:47.887212 9717 lock.go:35] WriteFile acquiring /home/aurelien/.kube/config: {Name:mk6624bc5cc76fcb30beb1c7182a4f459370260e Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0420 13:22:47.887377 9717 addons.go:292] enableAddons start: toEnable=map[], additional=[]
🌟 Enabling addons: default-storageclass, storage-provisioner
I0420 13:22:47.889721 9717 addons.go:46] Setting default-storageclass=true in profile "minikube"
I0420 13:22:47.889943 9717 addons.go:242] enableOrDisableStorageClasses default-storageclass=true on "minikube"
I0420 13:22:47.912952 9717 oci.go:250] executing with [docker inspect -f {{.State.Status}} minikube] timeout: 19s
I0420 13:22:47.989758 9717 addons.go:105] Setting addon default-storageclass=true in "minikube"
W0420 13:22:47.989835 9717 addons.go:120] addon default-storageclass should already be in state true
I0420 13:22:47.989951 9717 host.go:65] Checking if "minikube" exists ...
I0420 13:22:47.990130 9717 oci.go:250] executing with [docker inspect -f {{.State.Status}} minikube] timeout: 19s
I0420 13:22:48.028433 9717 addons.go:209] installing /etc/kubernetes/addons/storageclass.yaml
I0420 13:22:48.105015 9717 kic_runner.go:91] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0420 13:22:48.259251 9717 addons.go:71] Writing out "minikube" config to set default-storageclass=true...
I0420 13:22:48.259372 9717 addons.go:46] Setting storage-provisioner=true in profile "minikube"
I0420 13:22:48.259444 9717 addons.go:105] Setting addon storage-provisioner=true in "minikube"
W0420 13:22:48.259502 9717 addons.go:120] addon storage-provisioner should already be in state true
I0420 13:22:48.259513 9717 host.go:65] Checking if "minikube" exists ...
I0420 13:22:48.259700 9717 oci.go:250] executing with [docker inspect -f {{.State.Status}} minikube] timeout: 19s
I0420 13:22:48.301736 9717 addons.go:209] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0420 13:22:48.384341 9717 kic_runner.go:91] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0420 13:22:48.552775 9717 addons.go:71] Writing out "minikube" config to set storage-provisioner=true...
I0420 13:22:48.552906 9717 addons.go:294] enableAddons completed in 665.52866ms
I0420 13:22:48.558656 9717 api_server.go:46] waiting for apiserver process to appear ...
I0420 13:22:48.558691 9717 kic_runner.go:91] Run: sudo pgrep -xnf kube-apiserver.minikube.
I0420 13:22:48.633188 9717 api_server.go:66] duration metric: took 80.258791ms to wait for apiserver process to appear ...
I0420 13:22:48.633207 9717 api_server.go:82] waiting for apiserver healthz status ...
I0420 13:22:48.633217 9717 api_server.go:184] Checking apiserver healthz at https://172.17.0.2:8443/healthz ...
I0420 13:22:48.636372 9717 api_server.go:135] control plane version: v1.18.0
I0420 13:22:48.636385 9717 api_server.go:125] duration metric: took 3.171152ms to wait for apiserver health ...
I0420 13:22:48.636394 9717 system_pods.go:37] waiting for kube-system pods to appear ...
I0420 13:22:48.640415 9717 system_pods.go:55] 1 kube-system pods found
I0420 13:22:48.640435 9717 system_pods.go:57] "storage-provisioner" [445874d9-e289-46d8-9924-c7ff30382ba9] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0420 13:22:49.145107 9717 system_pods.go:55] 1 kube-system pods found
I0420 13:22:49.145195 9717 system_pods.go:57] "storage-provisioner" [445874d9-e289-46d8-9924-c7ff30382ba9] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0420 13:22:49.644909 9717 system_pods.go:55] 1 kube-system pods found
I0420 13:22:49.644977 9717 system_pods.go:57] "storage-provisioner" [445874d9-e289-46d8-9924-c7ff30382ba9] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0420 13:22:50.145037 9717 system_pods.go:55] 1 kube-system pods found
I0420 13:22:50.145109 9717 system_pods.go:57] "storage-provisioner" [445874d9-e289-46d8-9924-c7ff30382ba9] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0420 13:22:50.642143 9717 system_pods.go:55] 1 kube-system pods found
I0420 13:22:50.642170 9717 system_pods.go:57] "storage-provisioner" [445874d9-e289-46d8-9924-c7ff30382ba9] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0420 13:22:51.142091 9717 system_pods.go:55] 1 kube-system pods found
I0420 13:22:51.142116 9717 system_pods.go:57] "storage-provisioner" [445874d9-e289-46d8-9924-c7ff30382ba9] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0420 13:22:51.642472 9717 system_pods.go:55] 1 kube-system pods found
I0420 13:22:51.642508 9717 system_pods.go:57] "storage-provisioner" [445874d9-e289-46d8-9924-c7ff30382ba9] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0420 13:22:52.144141 9717 system_pods.go:55] 1 kube-system pods found
I0420 13:22:52.144233 9717 system_pods.go:57] "storage-provisioner" [445874d9-e289-46d8-9924-c7ff30382ba9] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0420 13:22:52.645281 9717 system_pods.go:55] 1 kube-system pods found
I0420 13:22:52.645348 9717 system_pods.go:57] "storage-provisioner" [445874d9-e289-46d8-9924-c7ff30382ba9] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0420 13:22:53.142335 9717 system_pods.go:55] 1 kube-system pods found
I0420 13:22:53.142355 9717 system_pods.go:57] "storage-provisioner" [445874d9-e289-46d8-9924-c7ff30382ba9] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0420 13:22:53.648765 9717 system_pods.go:55] 4 kube-system pods found
I0420 13:22:53.648836 9717 system_pods.go:57] "etcd-minikube" [c72d1c5b-79ce-438e-868c-d2e589b6694c] Pending
I0420 13:22:53.648853 9717 system_pods.go:57] "kube-apiserver-minikube" [f9d824ec-d73b-4385-9d7f-5e4bec46dc73] Pending
I0420 13:22:53.648888 9717 system_pods.go:57] "kube-controller-manager-minikube" [eb6010d1-abb2-4a04-93dc-a2e521e827ac] Pending
I0420 13:22:53.648929 9717 system_pods.go:57] "storage-provisioner" [445874d9-e289-46d8-9924-c7ff30382ba9] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0420 13:22:53.648972 9717 system_pods.go:68] duration metric: took 5.012566358s to wait for pod list to return data ...
I0420 13:22:53.648994 9717 kubeadm.go:397] duration metric: took 5.096065277s to wait for : map[apiserver:true system_pods:true] ...
🏄 Done! kubectl is now configured to use "minikube"
I0420 13:22:53.717188 9717 start.go:454] kubectl: 1.18.1, cluster: 1.18.0 (minor skew: 0)