none: writing kubeconfig: unable to open /tmp/juju-x: permission denied (sysctl fs.protected_regular=0) #6391

Closed
badeball opened this issue Jan 24, 2020 · 9 comments
Labels: co/none-driver · kind/bug · priority/important-longterm

Comments

@badeball

The exact command to reproduce the issue:

$ sudo sh -c "minikube start && minikube stop"

The full output of the command that failed:

😄  minikube v1.6.2 on Arch 
✨  Selecting 'none' driver from user configuration (alternates: [virtualbox])
🤹  Running on localhost (CPUs=12, Memory=31758MB, Disk=959886MB) ...
ℹ️   OS release is Arch Linux
⚠️  VM may be unable to resolve external DNS records
🐳  Preparing Kubernetes v1.17.0 on Docker '19.03.5-ce' ...
🚜  Pulling images ...
🚀  Launching Kubernetes ... 
🤹  Configuring local host environment ...

⚠️  The 'none' driver provides limited isolation and may reduce system security and reliability.
⚠️  For more information, see:
👉  https://minikube.sigs.k8s.io/docs/reference/drivers/none/

⚠️  kubectl and minikube configuration will be stored in /root
⚠️  To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:

    ▪ sudo mv /root/.kube /root/.minikube $HOME
    ▪ sudo chown -R $USER $HOME/.kube $HOME/.minikube

💡  This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
⌛  Waiting for cluster to come online ...
🏄  Done! kubectl is now configured to use "minikube"
⚠️  /usr/bin/kubectl is version 0.0.0-master+70132b0f13, and is incompatible with Kubernetes 1.17.0. You will need to update /usr/bin/kubectl or use 'minikube kubectl' to connect with this cluster
✋  Stopping "minikube" in none ...
✋  Stopping "minikube" in none ...
🛑  "minikube" stopped.

💣  update config: writing kubeconfig: Error writing file /root/.kube/config: failed to acquire lock for /root/.kube/config: {Name:mk72a1487fd2da23da9e8181e16f352a6105bd56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}: unable to open /tmp/juju-mk72a1487fd2da23da9e8181e16f352a6105bd56: permission denied

😿  minikube is exiting due to an error. If the above message is not useful, open an issue:
👉  https://github.com/kubernetes/minikube/issues/new/choose

(Sorry, my terminal isn't so fancy.)

The file mentioned in the error has the following permissions and ownership:

$ ls -l /tmp/juju-mk72a1487fd2da23da9e8181e16f352a6105bd56
-rw------- 1 jonas jonas 0 Jan 24 10:50 /tmp/juju-mk72a1487fd2da23da9e8181e16f352a6105bd56

For reference, «jonas» is the user from whom I am invoking sudo.
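The issue title points at the fs.protected_regular sysctl, which seems like a plausible explanation rather than a confirmed one: when fs.protected_regular is 1 or 2, the kernel denies an O_CREAT open of an existing regular file in a world-writable sticky directory such as /tmp if the file is owned by neither the opener nor the directory owner. That matches this case: root is reusing a lock file in /tmp owned by jonas. A minimal way to check and, if so, work around it (assuming the sysctl really is the cause; the lock-file name is the one from the error above):

$ sysctl fs.protected_regular                                   # a value of 1 or 2 means the protection is active
$ sudo rm /tmp/juju-mk72a1487fd2da23da9e8181e16f352a6105bd56    # remove the stale lock file owned by the other user
$ sudo sysctl fs.protected_regular=0                            # or disable the protection, as the issue title hints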

Interestingly, I can log in as root (i.e. SUDO_UID and SUDO_GID won't be present) and run the same commands successfully.

$ sudo su - root -c "minikube start && minikube stop"

However, nothing actually seems to be stopped; all the containers are still running, as shown below (a manual cleanup sketch follows the listing).

$ docker ps
CONTAINER ID        IMAGE                  COMMAND                  CREATED             STATUS              PORTS               NAMES
60c7f8c08b02        303ce5db0e90           "etcd --advertise-cl…"   37 seconds ago      Up 36 seconds                           k8s_etcd_etcd-minikube_kube-system_e9b1d32e1378e58c792bf62e4ea8595d_0
8e830ac570f5        0cae8d5cc64c           "kube-apiserver --ad…"   37 seconds ago      Up 36 seconds                           k8s_kube-apiserver_kube-apiserver-minikube_kube-system_d98da31751326ddd3ae5920ddbbbbd41_0
708f65d0b382        5eb3b7486872           "kube-controller-man…"   37 seconds ago      Up 36 seconds                           k8s_kube-controller-manager_kube-controller-manager-minikube_kube-system_d3bae15a5312b2276dba62f500e9e65c_0
567cb60d36af        78c190f736b1           "kube-scheduler --au…"   37 seconds ago      Up 36 seconds                           k8s_kube-scheduler_kube-scheduler-minikube_kube-system_ff67867321338ffd885039e188f6b424_0
bbf30c4da87e        bd12a212f9dc           "/opt/kube-addons.sh"    37 seconds ago      Up 36 seconds                           k8s_kube-addon-manager_kube-addon-manager-minikube_kube-system_c3e29047da86ce6690916750ab69c40b_0
65075b30d674        k8s.gcr.io/pause:3.1   "/pause"                 37 seconds ago      Up 37 seconds                           k8s_POD_etcd-minikube_kube-system_e9b1d32e1378e58c792bf62e4ea8595d_0
0aff9030b456        k8s.gcr.io/pause:3.1   "/pause"                 37 seconds ago      Up 37 seconds                           k8s_POD_kube-scheduler-minikube_kube-system_ff67867321338ffd885039e188f6b424_0
ad7fe794723e        k8s.gcr.io/pause:3.1   "/pause"                 37 seconds ago      Up 37 seconds                           k8s_POD_kube-controller-manager-minikube_kube-system_d3bae15a5312b2276dba62f500e9e65c_0
881f3e6ef50d        k8s.gcr.io/pause:3.1   "/pause"                 37 seconds ago      Up 37 seconds                           k8s_POD_kube-apiserver-minikube_kube-system_d98da31751326ddd3ae5920ddbbbbd41_0
2a95025af931        k8s.gcr.io/pause:3.1   "/pause"                 37 seconds ago      Up 37 seconds                           k8s_POD_kube-addon-manager-minikube_kube-system_c3e29047da86ce6690916750ab69c40b_0
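For completeness, a minimal sketch of a manual cleanup, assuming the leftover containers are exactly the ones carrying the k8s_ name prefix shown above; if the kubelet service is still running it will recreate the static pods, so it may need to be stopped first:

$ sudo systemctl stop kubelet                                      # prevent static pods from being recreated
$ sudo docker ps -q --filter name=k8s_ | xargs -r sudo docker stop # stop the leftover k8s_ containers
$ sudo docker ps -aq --filter name=k8s_ | xargs -r sudo docker rm  # and remove them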

The output of the minikube logs command:

$ sudo minikube logs
==> Docker <==
-- Logs begin at Sun 2019-10-27 18:14:15 CET, end at Fri 2020-01-24 11:12:35 CET. --
Jan 24 11:10:12 bactop dockerd[841]: time="2020-01-24T11:10:12.433878410+01:00" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 24 11:10:12 bactop dockerd[841]: time="2020-01-24T11:10:12.433902314+01:00" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 24 11:10:12 bactop dockerd[841]: time="2020-01-24T11:10:12.433927887+01:00" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 24 11:10:12 bactop dockerd[841]: time="2020-01-24T11:10:12.433937213+01:00" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 24 11:10:12 bactop dockerd[841]: time="2020-01-24T11:10:12.433945097+01:00" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Jan 24 11:10:12 bactop dockerd[841]: time="2020-01-24T11:10:12.433957134+01:00" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 24 11:10:12 bactop dockerd[841]: time="2020-01-24T11:10:12.433984544+01:00" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 24 11:10:12 bactop dockerd[841]: time="2020-01-24T11:10:12.434012780+01:00" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 24 11:10:12 bactop dockerd[841]: time="2020-01-24T11:10:12.434149776+01:00" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 24 11:10:12 bactop dockerd[841]: time="2020-01-24T11:10:12.434213056+01:00" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 24 11:10:12 bactop dockerd[841]: time="2020-01-24T11:10:12.434578743+01:00" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 24 11:10:12 bactop dockerd[841]: time="2020-01-24T11:10:12.434600107+01:00" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 24 11:10:12 bactop dockerd[841]: time="2020-01-24T11:10:12.434625929+01:00" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 24 11:10:12 bactop dockerd[841]: time="2020-01-24T11:10:12.434635079+01:00" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 24 11:10:12 bactop dockerd[841]: time="2020-01-24T11:10:12.434642684+01:00" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 24 11:10:12 bactop dockerd[841]: time="2020-01-24T11:10:12.434649706+01:00" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 24 11:10:12 bactop dockerd[841]: time="2020-01-24T11:10:12.434656810+01:00" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 24 11:10:12 bactop dockerd[841]: time="2020-01-24T11:10:12.434665120+01:00" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 24 11:10:12 bactop dockerd[841]: time="2020-01-24T11:10:12.434672424+01:00" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 24 11:10:12 bactop dockerd[841]: time="2020-01-24T11:10:12.434679504+01:00" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 24 11:10:12 bactop dockerd[841]: time="2020-01-24T11:10:12.434686382+01:00" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 24 11:10:12 bactop dockerd[841]: time="2020-01-24T11:10:12.435474453+01:00" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 24 11:10:12 bactop dockerd[841]: time="2020-01-24T11:10:12.435493904+01:00" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 24 11:10:12 bactop dockerd[841]: time="2020-01-24T11:10:12.435505395+01:00" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 24 11:10:12 bactop dockerd[841]: time="2020-01-24T11:10:12.435513536+01:00" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 24 11:10:12 bactop dockerd[841]: time="2020-01-24T11:10:12.435890409+01:00" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
Jan 24 11:10:12 bactop dockerd[841]: time="2020-01-24T11:10:12.435918928+01:00" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
Jan 24 11:10:12 bactop dockerd[841]: time="2020-01-24T11:10:12.435937404+01:00" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
Jan 24 11:10:12 bactop dockerd[841]: time="2020-01-24T11:10:12.435943912+01:00" level=info msg="containerd successfully booted in 0.021885s"
Jan 24 11:10:12 bactop dockerd[795]: time="2020-01-24T11:10:12.442924628+01:00" level=info msg="parsed scheme: \"unix\"" module=grpc
Jan 24 11:10:12 bactop dockerd[795]: time="2020-01-24T11:10:12.442943153+01:00" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Jan 24 11:10:12 bactop dockerd[795]: time="2020-01-24T11:10:12.442958195+01:00" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
Jan 24 11:10:12 bactop dockerd[795]: time="2020-01-24T11:10:12.442967739+01:00" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Jan 24 11:10:12 bactop dockerd[795]: time="2020-01-24T11:10:12.444068053+01:00" level=info msg="parsed scheme: \"unix\"" module=grpc
Jan 24 11:10:12 bactop dockerd[795]: time="2020-01-24T11:10:12.444082264+01:00" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Jan 24 11:10:12 bactop dockerd[795]: time="2020-01-24T11:10:12.444094267+01:00" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
Jan 24 11:10:12 bactop dockerd[795]: time="2020-01-24T11:10:12.444103534+01:00" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Jan 24 11:10:12 bactop dockerd[795]: time="2020-01-24T11:10:12.696962630+01:00" level=info msg="[graphdriver] using prior storage driver: overlay2"
Jan 24 11:10:13 bactop dockerd[795]: time="2020-01-24T11:10:13.707039528+01:00" level=warning msg="Your kernel does not support cgroup rt period"
Jan 24 11:10:13 bactop dockerd[795]: time="2020-01-24T11:10:13.707055845+01:00" level=warning msg="Your kernel does not support cgroup rt runtime"
Jan 24 11:10:13 bactop dockerd[795]: time="2020-01-24T11:10:13.707086087+01:00" level=warning msg="Your kernel does not support cgroup blkio weight"
Jan 24 11:10:13 bactop dockerd[795]: time="2020-01-24T11:10:13.707089700+01:00" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Jan 24 11:10:13 bactop dockerd[795]: time="2020-01-24T11:10:13.707217826+01:00" level=info msg="Loading containers: start."
Jan 24 11:10:13 bactop dockerd[795]: time="2020-01-24T11:10:13.868839065+01:00" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Jan 24 11:10:13 bactop dockerd[795]: time="2020-01-24T11:10:13.900969724+01:00" level=info msg="Loading containers: done."
Jan 24 11:10:13 bactop dockerd[795]: time="2020-01-24T11:10:13.957816264+01:00" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 24 11:10:13 bactop dockerd[795]: time="2020-01-24T11:10:13.957965289+01:00" level=info msg="Docker daemon" commit=633a0ea838 graphdriver(s)=overlay2 version=19.03.5-ce
Jan 24 11:10:13 bactop dockerd[795]: time="2020-01-24T11:10:13.958628408+01:00" level=info msg="Daemon has completed initialization"
Jan 24 11:10:13 bactop dockerd[795]: time="2020-01-24T11:10:13.982104358+01:00" level=info msg="API listen on /run/docker.sock"
Jan 24 11:10:13 bactop systemd[1]: Started Docker Application Container Engine.
Jan 24 11:11:35 bactop dockerd[841]: time="2020-01-24T11:11:35.383253733+01:00" level=info msg="shim containerd-shim started" address=/containerd-shim/45c91d46fa40cb7a527914837fcb0ed94c9eda8e4a9e86081e897de3a8971f60.sock debug=false pid=3536
Jan 24 11:11:35 bactop dockerd[841]: time="2020-01-24T11:11:35.407188479+01:00" level=info msg="shim containerd-shim started" address=/containerd-shim/8bd4e94cebe3983b85f62efc86e9fb11e6293273106acfc7ef195d866c2af066.sock debug=false pid=3540
Jan 24 11:11:35 bactop dockerd[841]: time="2020-01-24T11:11:35.450690446+01:00" level=info msg="shim containerd-shim started" address=/containerd-shim/4ed8839d35702e76b1a38e13d4cdfa58b4a1ea1458dea86dbaf50f24e89600e1.sock debug=false pid=3571
Jan 24 11:11:35 bactop dockerd[841]: time="2020-01-24T11:11:35.488551712+01:00" level=info msg="shim containerd-shim started" address=/containerd-shim/c8e72318de27a1db40d3eed86ae7f1bb227ea96b8f84a4ce6f3ba4e6747ab32a.sock debug=false pid=3589
Jan 24 11:11:35 bactop dockerd[841]: time="2020-01-24T11:11:35.504018751+01:00" level=info msg="shim containerd-shim started" address=/containerd-shim/517c21ee9ed6a7d324142859294d4917ec6d3a747f7a0065900da559eaacd3ba.sock debug=false pid=3606
Jan 24 11:11:35 bactop dockerd[841]: time="2020-01-24T11:11:35.703707682+01:00" level=info msg="shim containerd-shim started" address=/containerd-shim/c89eeb77b327b3cc2e1f82dc113f9e79d86080e6f11d480961399ad1af2f08e7.sock debug=false pid=3732
Jan 24 11:11:35 bactop dockerd[841]: time="2020-01-24T11:11:35.715241678+01:00" level=info msg="shim containerd-shim started" address=/containerd-shim/d10c0664e526799eca4580310a5ad38d5a51db03f46da37bea28e57bcbce73d9.sock debug=false pid=3742
Jan 24 11:11:35 bactop dockerd[841]: time="2020-01-24T11:11:35.721144152+01:00" level=info msg="shim containerd-shim started" address=/containerd-shim/189f7b024070046cefcfe5fdc232ef82f4cf8db67e62c26f6c618a064e51d985.sock debug=false pid=3753
Jan 24 11:11:35 bactop dockerd[841]: time="2020-01-24T11:11:35.731283158+01:00" level=info msg="shim containerd-shim started" address=/containerd-shim/4d8c09b5a92aa5667aa7f2e660261a863f394294339eea3f77078811dd8a9232.sock debug=false pid=3760
Jan 24 11:11:35 bactop dockerd[841]: time="2020-01-24T11:11:35.735828219+01:00" level=info msg="shim containerd-shim started" address=/containerd-shim/8279d19ce84026b516a946fa079d285caf6788fc9f2d45d8acfcfc2218ebb8bd.sock debug=false pid=3773

==> container status <==
sudo: crictl: command not found
CONTAINER ID        IMAGE                  COMMAND                  CREATED              STATUS              PORTS               NAMES
e188cabc735b        bd12a212f9dc           "/opt/kube-addons.sh"    About a minute ago   Up 59 seconds                           k8s_kube-addon-manager_kube-addon-manager-minikube_kube-system_c3e29047da86ce6690916750ab69c40b_0
37de9478457e        0cae8d5cc64c           "kube-apiserver --ad…"   About a minute ago   Up 59 seconds                           k8s_kube-apiserver_kube-apiserver-minikube_kube-system_d98da31751326ddd3ae5920ddbbbbd41_0
3e5a8fc37803        78c190f736b1           "kube-scheduler --au…"   About a minute ago   Up 59 seconds                           k8s_kube-scheduler_kube-scheduler-minikube_kube-system_ff67867321338ffd885039e188f6b424_0
85a5502cf834        303ce5db0e90           "etcd --advertise-cl…"   About a minute ago   Up 59 seconds                           k8s_etcd_etcd-minikube_kube-system_e9b1d32e1378e58c792bf62e4ea8595d_0
ddf2c8bbea95        5eb3b7486872           "kube-controller-man…"   About a minute ago   Up 59 seconds                           k8s_kube-controller-manager_kube-controller-manager-minikube_kube-system_d3bae15a5312b2276dba62f500e9e65c_0
440a39b8aae6        k8s.gcr.io/pause:3.1   "/pause"                 About a minute ago   Up About a minute                       k8s_POD_kube-scheduler-minikube_kube-system_ff67867321338ffd885039e188f6b424_0
fbc945443b2d        k8s.gcr.io/pause:3.1   "/pause"                 About a minute ago   Up About a minute                       k8s_POD_kube-controller-manager-minikube_kube-system_d3bae15a5312b2276dba62f500e9e65c_0
6ad5fd13be1d        k8s.gcr.io/pause:3.1   "/pause"                 About a minute ago   Up About a minute                       k8s_POD_kube-apiserver-minikube_kube-system_d98da31751326ddd3ae5920ddbbbbd41_0
8c9305f9986d        k8s.gcr.io/pause:3.1   "/pause"                 About a minute ago   Up About a minute                       k8s_POD_etcd-minikube_kube-system_e9b1d32e1378e58c792bf62e4ea8595d_0
de7a397fe48a        k8s.gcr.io/pause:3.1   "/pause"                 About a minute ago   Up About a minute                       k8s_POD_kube-addon-manager-minikube_kube-system_c3e29047da86ce6690916750ab69c40b_0

==> dmesg <==
[  +0.000000] Hardware name: Dell Inc. XPS 15 9570/0HWTMH, BIOS 1.10.1 04/26/2019
[  +0.000002] Workqueue: pm pm_runtime_work
[  +0.000019] RIP: 0010:gf100_vmm_invalidate+0x215/0x230 [nouveau]
[  +0.000001] Code: 8b 40 10 48 8b 78 10 4c 8b 6f 50 4d 85 ed 75 03 4c 8b 2f e8 9d 23 b7 c0 4c 89 ea 48 c7 c7 1c 78 03 c2 48 89 c6 e8 89 62 5c c0 <0f> 0b e9 5b ff ff ff e8 9f 5f 5c c0 66 66 2e 0f 1f 84 00 00 00 00
[  +0.000000] RSP: 0018:ffffbc37805a7640 EFLAGS: 00010286
[  +0.000001] RAX: 0000000000000000 RBX: ffff9af298898000 RCX: 0000000000000000
[  +0.000001] RDX: 0000000000000001 RSI: 0000000000000096 RDI: 00000000ffffffff
[  +0.000000] RBP: 0000000000000001 R08: 0000000000000834 R09: 0000000000000001
[  +0.000000] R10: 0000000000000000 R11: 0000000000000001 R12: ffff9af2d394e820
[  +0.000001] R13: ffff9af2d7532000 R14: ffff9af29ca26500 R15: ffff9af2b8d8d280
[  +0.000000] FS:  0000000000000000(0000) GS:ffff9af2dc4c0000(0000) knlGS:0000000000000000
[  +0.000001] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[  +0.000000] CR2: 000000c000a07b40 CR3: 000000020e00a004 CR4: 00000000003606e0
[  +0.000000] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[  +0.000001] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[  +0.000000] Call Trace:
[  +0.000020]  nvkm_vmm_iter.constprop.0+0x352/0x810 [nouveau]
[  +0.000003]  ? __switch_to_asm+0x40/0x70
[  +0.000001]  ? __switch_to_asm+0x34/0x70
[  +0.000000]  ? __switch_to_asm+0x40/0x70
[  +0.000017]  ? gp100_vmm_pgt_dma+0x200/0x200 [nouveau]
[  +0.000001]  ? __switch_to_asm+0x40/0x70
[  +0.000001]  ? __switch_to_asm+0x34/0x70
[  +0.000017]  nvkm_vmm_map+0x136/0x360 [nouveau]
[  +0.000016]  ? gp100_vmm_pgt_dma+0x200/0x200 [nouveau]
[  +0.000014]  nvkm_vram_map+0x56/0x80 [nouveau]
[  +0.000016]  nvkm_uvmm_mthd+0x676/0x790 [nouveau]
[  +0.000011]  nvkm_ioctl+0xde/0x180 [nouveau]
[  +0.000002]  ? schedule_timeout+0x25f/0x310
[  +0.000009]  nvif_object_mthd+0x112/0x140 [nouveau]
[  +0.000010]  nvif_vmm_map+0x11e/0x130 [nouveau]
[  +0.000002]  ? dma_resv_wait_timeout_rcu+0x146/0x310
[  +0.000020]  nouveau_mem_map+0x93/0xf0 [nouveau]
[  +0.000020]  nouveau_vma_map+0x44/0x70 [nouveau]
[  +0.000020]  nouveau_bo_move_ntfy+0xcd/0xe0 [nouveau]
[  +0.000003]  ttm_bo_handle_move_mem+0x41d/0x5a0 [ttm]
[  +0.000002]  ttm_bo_evict+0x192/0x210 [ttm]
[  +0.000002]  ttm_mem_evict_first+0x267/0x360 [ttm]
[  +0.000002]  ttm_bo_force_list_clean+0xa2/0x170 [ttm]
[  +0.000020]  nouveau_do_suspend+0x93/0x190 [nouveau]
[  +0.000019]  nouveau_pmops_runtime_suspend+0x40/0xa0 [nouveau]
[  +0.000003]  pci_pm_runtime_suspend+0x58/0x140
[  +0.000001]  ? __switch_to_asm+0x34/0x70
[  +0.000001]  ? pci_pm_thaw_noirq+0xa0/0xa0
[  +0.000001]  __rpm_callback+0x7b/0x130
[  +0.000001]  ? pci_pm_thaw_noirq+0xa0/0xa0
[  +0.000001]  rpm_callback+0x1f/0x70
[  +0.000000]  rpm_suspend+0x136/0x610
[  +0.000001]  pm_runtime_work+0x94/0xa0
[  +0.000002]  process_one_work+0x1e2/0x3b0
[  +0.000001]  worker_thread+0x4a/0x3d0
[  +0.000002]  kthread+0xfb/0x130
[  +0.000001]  ? process_one_work+0x3b0/0x3b0
[  +0.000000]  ? kthread_park+0x90/0x90
[  +0.000001]  ret_from_fork+0x35/0x40
[  +0.000001] ---[ end trace a515d8949d00bef3 ]---
[  +0.000331] [TTM] Buffer eviction failed
[ +14.132313] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to [email protected] if you depend on this functionality.
[  +0.864987] nouveau 0000:01:00.0: DRM: failed to idle channel 1 [DRM]
[  +6.412760] kauditd_printk_skb: 5 callbacks suppressed

==> kernel <==
 11:12:35 up 2 min,  1 user,  load average: 1.58, 1.01, 0.41
Linux bactop 5.4.11-arch1-1 #1 SMP PREEMPT Sun, 12 Jan 2020 12:15:27 +0000 x86_64 GNU/Linux
PRETTY_NAME="Arch Linux"

==> kube-addon-manager ["e188cabc735b"] <==
deployment.apps/dashboard-metrics-scraper unchanged
deployment.apps/kubernetes-dashboard unchanged
namespace/kubernetes-dashboard unchanged
role.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
serviceaccount/kubernetes-dashboard unchanged
secret/kubernetes-dashboard-certs unchanged
secret/kubernetes-dashboard-csrf unchanged
secret/kubernetes-dashboard-key-holder unchanged
service/kubernetes-dashboard unchanged
service/dashboard-metrics-scraper unchanged
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2020-01-24T10:12:19+00:00 ==
INFO: Leader election disabled.
INFO: == Kubernetes addon ensure completed at 2020-01-24T10:12:19+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
error: no objects passed to apply
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
configmap/kubernetes-dashboard-settings unchanged
deployment.apps/dashboard-metrics-scraper unchanged
deployment.apps/kubernetes-dashboard unchanged
namespace/kubernetes-dashboard unchanged
role.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
serviceaccount/kubernetes-dashboard unchanged
secret/kubernetes-dashboard-certs unchanged
secret/kubernetes-dashboard-csrf unchanged
secret/kubernetes-dashboard-key-holder unchanged
service/kubernetes-dashboard unchanged
service/dashboard-metrics-scraper unchanged
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2020-01-24T10:12:24+00:00 ==
INFO: Leader election disabled.
INFO: == Kubernetes addon ensure completed at 2020-01-24T10:12:24+00:00 ==
INFO: == Reconciling with deprecated label ==
error: no objects passed to apply
INFO: == Reconciling with addon-manager label ==
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
configmap/kubernetes-dashboard-settings unchanged
deployment.apps/dashboard-metrics-scraper unchanged
deployment.apps/kubernetes-dashboard unchanged
namespace/kubernetes-dashboard unchanged
role.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
serviceaccount/kubernetes-dashboard unchanged
secret/kubernetes-dashboard-certs unchanged
secret/kubernetes-dashboard-csrf unchanged
secret/kubernetes-dashboard-key-holder unchanged
service/kubernetes-dashboard unchanged
service/dashboard-metrics-scraper unchanged
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2020-01-24T10:12:30+00:00 ==
INFO: Leader election disabled.
INFO: == Kubernetes addon ensure completed at 2020-01-24T10:12:31+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
error: no objects passed to apply

==> kube-apiserver ["37de9478457e"] <==
W0124 10:11:37.639423       1 genericapiserver.go:404] Skipping API discovery.k8s.io/v1alpha1 because it has no resources.
W0124 10:11:37.645611       1 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources.
W0124 10:11:37.656864       1 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0124 10:11:37.660435       1 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0124 10:11:37.670236       1 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0124 10:11:37.679865       1 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.
W0124 10:11:37.679875       1 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.
I0124 10:11:37.684537       1 plugins.go:158] Loaded 11 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook,RuntimeClass.
I0124 10:11:37.684545       1 plugins.go:161] Loaded 7 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,RuntimeClass,ResourceQuota.
I0124 10:11:37.685530       1 client.go:361] parsed scheme: "endpoint"
I0124 10:11:37.685545       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]
I0124 10:11:37.690186       1 client.go:361] parsed scheme: "endpoint"
I0124 10:11:37.690198       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]
I0124 10:11:38.748163       1 dynamic_cafile_content.go:166] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
I0124 10:11:38.748164       1 dynamic_serving_content.go:129] Starting serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key
I0124 10:11:38.748203       1 secure_serving.go:178] Serving securely on [::]:8443
I0124 10:11:38.748164       1 dynamic_cafile_content.go:166] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
I0124 10:11:38.748313       1 controller.go:85] Starting OpenAPI controller
I0124 10:11:38.748337       1 controller.go:81] Starting OpenAPI AggregationController
I0124 10:11:38.748342       1 tlsconfig.go:219] Starting DynamicServingCertificateController
I0124 10:11:38.748346       1 apiservice_controller.go:94] Starting APIServiceRegistrationController
I0124 10:11:38.748362       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0124 10:11:38.748383       1 available_controller.go:386] Starting AvailableConditionController
I0124 10:11:38.748390       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0124 10:11:38.748393       1 customresource_discovery_controller.go:208] Starting DiscoveryController
I0124 10:11:38.748386       1 establishing_controller.go:73] Starting EstablishingController
I0124 10:11:38.748405       1 naming_controller.go:288] Starting NamingConditionController
I0124 10:11:38.748423       1 nonstructuralschema_controller.go:191] Starting NonStructuralSchemaConditionController
I0124 10:11:38.748442       1 apiapproval_controller.go:185] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0124 10:11:38.748510       1 crdregistration_controller.go:111] Starting crd-autoregister controller
I0124 10:11:38.748518       1 shared_informer.go:197] Waiting for caches to sync for crd-autoregister
I0124 10:11:38.748618       1 autoregister_controller.go:140] Starting autoregister controller
I0124 10:11:38.748626       1 crd_finalizer.go:263] Starting CRDFinalizer
I0124 10:11:38.748626       1 cache.go:32] Waiting for caches to sync for autoregister controller
E0124 10:11:38.749182       1 controller.go:151] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/10.6.129.192, ResourceVersion: 0, AdditionalErrorMsg: 
I0124 10:11:38.749675       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I0124 10:11:38.749683       1 shared_informer.go:197] Waiting for caches to sync for cluster_authentication_trust_controller
I0124 10:11:38.749707       1 dynamic_cafile_content.go:166] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
I0124 10:11:38.749724       1 dynamic_cafile_content.go:166] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
I0124 10:11:38.848549       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0124 10:11:38.848580       1 cache.go:39] Caches are synced for AvailableConditionController controller
I0124 10:11:38.848668       1 shared_informer.go:204] Caches are synced for crd-autoregister 
I0124 10:11:38.848788       1 cache.go:39] Caches are synced for autoregister controller
I0124 10:11:38.849872       1 shared_informer.go:204] Caches are synced for cluster_authentication_trust_controller 
I0124 10:11:39.747976       1 controller.go:107] OpenAPI AggregationController: Processing item 
I0124 10:11:39.748015       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I0124 10:11:39.748038       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0124 10:11:39.754564       1 storage_scheduling.go:133] created PriorityClass system-node-critical with value 2000001000
I0124 10:11:39.760194       1 storage_scheduling.go:133] created PriorityClass system-cluster-critical with value 2000000000
I0124 10:11:39.760234       1 storage_scheduling.go:142] all system priority classes are created successfully or already exist.
I0124 10:11:40.295606       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0124 10:11:40.366806       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
W0124 10:11:40.511598       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [10.6.129.192]
I0124 10:11:40.513666       1 controller.go:606] quota admission added evaluator for: endpoints
I0124 10:11:40.981428       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
I0124 10:11:42.076258       1 controller.go:606] quota admission added evaluator for: serviceaccounts
I0124 10:11:42.091600       1 controller.go:606] quota admission added evaluator for: deployments.apps
I0124 10:11:42.196345       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
I0124 10:11:49.015837       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
I0124 10:11:49.212983       1 controller.go:606] quota admission added evaluator for: replicasets.apps

==> kube-controller-manager ["ddf2c8bbea95"] <==
I0124 10:11:48.189592       1 controllermanager.go:533] Started "clusterrole-aggregation"
W0124 10:11:48.189634       1 controllermanager.go:525] Skipping "ttl-after-finished"
I0124 10:11:48.189731       1 clusterroleaggregation_controller.go:148] Starting ClusterRoleAggregator
I0124 10:11:48.189758       1 shared_informer.go:197] Waiting for caches to sync for ClusterRoleAggregator
I0124 10:11:48.438506       1 controllermanager.go:533] Started "replicaset"
I0124 10:11:48.438582       1 replica_set.go:180] Starting replicaset controller
I0124 10:11:48.438596       1 shared_informer.go:197] Waiting for caches to sync for ReplicaSet
I0124 10:11:48.689537       1 controllermanager.go:533] Started "bootstrapsigner"
W0124 10:11:48.689581       1 controllermanager.go:525] Skipping "nodeipam"
I0124 10:11:48.689621       1 shared_informer.go:197] Waiting for caches to sync for bootstrap_signer
I0124 10:11:48.939521       1 controllermanager.go:533] Started "persistentvolume-expander"
I0124 10:11:48.939562       1 expand_controller.go:319] Starting expand controller
I0124 10:11:48.939589       1 shared_informer.go:197] Waiting for caches to sync for expand
I0124 10:11:48.946658       1 shared_informer.go:197] Waiting for caches to sync for garbage collector
I0124 10:11:48.951816       1 shared_informer.go:197] Waiting for caches to sync for resource quota
W0124 10:11:48.957780       1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist
I0124 10:11:48.989475       1 shared_informer.go:204] Caches are synced for TTL 
I0124 10:11:48.989564       1 shared_informer.go:204] Caches are synced for GC 
I0124 10:11:48.989564       1 shared_informer.go:204] Caches are synced for PVC protection 
I0124 10:11:48.989765       1 shared_informer.go:204] Caches are synced for service account 
I0124 10:11:48.989814       1 shared_informer.go:204] Caches are synced for bootstrap_signer 
I0124 10:11:48.989974       1 shared_informer.go:204] Caches are synced for ClusterRoleAggregator 
I0124 10:11:49.008453       1 shared_informer.go:204] Caches are synced for daemon sets 
E0124 10:11:49.025526       1 clusterroleaggregation_controller.go:180] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
I0124 10:11:49.033008       1 event.go:281] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"bfcf3468-db4f-4bf8-b93f-5ea724678b98", APIVersion:"apps/v1", ResourceVersion:"179", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-9xqsk
I0124 10:11:49.039776       1 shared_informer.go:204] Caches are synced for job 
I0124 10:11:49.039787       1 shared_informer.go:204] Caches are synced for PV protection 
I0124 10:11:49.039808       1 shared_informer.go:204] Caches are synced for ReplicationController 
I0124 10:11:49.041146       1 shared_informer.go:204] Caches are synced for taint 
I0124 10:11:49.041217       1 node_lifecycle_controller.go:1443] Initializing eviction metric for zone: 
I0124 10:11:49.041255       1 taint_manager.go:186] Starting NoExecuteTaintManager
W0124 10:11:49.041316       1 node_lifecycle_controller.go:1058] Missing timestamp for Node minikube. Assuming now as a timestamp.
I0124 10:11:49.041370       1 node_lifecycle_controller.go:1209] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
I0124 10:11:49.041438       1 event.go:281] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"d3e085ab-f72f-4a57-b4b5-0ca8e7439bbf", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node minikube event: Registered Node minikube in Controller
I0124 10:11:49.044917       1 shared_informer.go:204] Caches are synced for namespace 
E0124 10:11:49.047041       1 daemon_controller.go:290] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"bfcf3468-db4f-4bf8-b93f-5ea724678b98", ResourceVersion:"179", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63715457502, loc:(*time.Location)(0x6b951c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc0006ddd00), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc000208d40), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0006ddd20), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), 
DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0006ddd40), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.17.0", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc0006ddd80)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc000b4e730), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000a66e38), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", 
NodeSelector:map[string]string{"beta.kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001885bc0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc0009900a8)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc000a66e78)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
I0124 10:11:49.165101       1 shared_informer.go:204] Caches are synced for endpoint 
I0124 10:11:49.210195       1 shared_informer.go:204] Caches are synced for deployment 
I0124 10:11:49.216214       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"9b359218-b9bb-4d13-8d65-ccc6a5a7f313", APIVersion:"apps/v1", ResourceVersion:"174", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-6955765f44 to 2
I0124 10:11:49.238830       1 shared_informer.go:204] Caches are synced for ReplicaSet 
I0124 10:11:49.244631       1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-6955765f44", UID:"ac0d4541-17f0-4225-b136-b23255ea05e3", APIVersion:"apps/v1", ResourceVersion:"316", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-6955765f44-8fgbx
I0124 10:11:49.255958       1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-6955765f44", UID:"ac0d4541-17f0-4225-b136-b23255ea05e3", APIVersion:"apps/v1", ResourceVersion:"316", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-6955765f44-4rhxc
I0124 10:11:49.330524       1 shared_informer.go:204] Caches are synced for disruption 
I0124 10:11:49.330550       1 disruption.go:338] Sending events to api server.
I0124 10:11:49.338735       1 shared_informer.go:204] Caches are synced for HPA 
I0124 10:11:49.389103       1 shared_informer.go:204] Caches are synced for certificate-csrapproving 
I0124 10:11:49.389749       1 shared_informer.go:204] Caches are synced for stateful set 
I0124 10:11:49.440460       1 shared_informer.go:204] Caches are synced for certificate-csrsigning 
I0124 10:11:49.461315       1 shared_informer.go:204] Caches are synced for attach detach 
I0124 10:11:49.492254       1 shared_informer.go:204] Caches are synced for resource quota 
I0124 10:11:49.539885       1 shared_informer.go:204] Caches are synced for expand 
I0124 10:11:49.540648       1 shared_informer.go:204] Caches are synced for persistent volume 
I0124 10:11:49.547480       1 shared_informer.go:204] Caches are synced for garbage collector 
I0124 10:11:49.552771       1 shared_informer.go:204] Caches are synced for resource quota 
I0124 10:11:49.579108       1 shared_informer.go:204] Caches are synced for garbage collector 
I0124 10:11:49.579146       1 garbagecollector.go:138] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0124 10:11:55.109512       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper", UID:"33b31d9a-84da-472f-9d74-8e564af57465", APIVersion:"apps/v1", ResourceVersion:"375", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set dashboard-metrics-scraper-7b64584c5c to 1
I0124 10:11:55.122890       1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-7b64584c5c", UID:"b90f9037-962b-4c2e-bcbf-8373af34bdc3", APIVersion:"apps/v1", ResourceVersion:"376", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: dashboard-metrics-scraper-7b64584c5c-72tsw
I0124 10:11:55.122922       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard", UID:"cd571ba2-efe4-4083-af95-22b4fd9645d5", APIVersion:"apps/v1", ResourceVersion:"379", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set kubernetes-dashboard-79d9cd965 to 1
I0124 10:11:55.141752       1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-79d9cd965", UID:"d10e8347-c2cb-474e-8164-4b65e531a43d", APIVersion:"apps/v1", ResourceVersion:"381", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kubernetes-dashboard-79d9cd965-4l49t

==> kube-scheduler ["3e5a8fc37803"] <==
I0124 10:11:36.276809       1 serving.go:312] Generated self-signed cert in-memory
W0124 10:11:36.504007       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found
W0124 10:11:36.504041       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found
W0124 10:11:38.763950       1 authentication.go:348] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0124 10:11:38.763964       1 authentication.go:296] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0124 10:11:38.763969       1 authentication.go:297] Continuing without authentication configuration. This may treat all requests as anonymous.
W0124 10:11:38.763972       1 authentication.go:298] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
W0124 10:11:38.857369       1 authorization.go:47] Authorization is disabled
W0124 10:11:38.857405       1 authentication.go:92] Authentication is disabled
I0124 10:11:38.857433       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
I0124 10:11:38.867315       1 configmap_cafile_content.go:205] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0124 10:11:38.868150       1 shared_informer.go:197] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0124 10:11:38.868948       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
I0124 10:11:38.869055       1 tlsconfig.go:219] Starting DynamicServingCertificateController
E0124 10:11:38.872294       1 reflector.go:156] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0124 10:11:38.873433       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0124 10:11:38.873486       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0124 10:11:38.873643       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0124 10:11:38.874253       1 reflector.go:156] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:246: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0124 10:11:38.874429       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0124 10:11:38.874649       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0124 10:11:38.874826       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0124 10:11:38.874956       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0124 10:11:38.875049       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0124 10:11:38.876089       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0124 10:11:38.876092       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0124 10:11:39.874790       1 reflector.go:156] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0124 10:11:39.876380       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0124 10:11:39.877550       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0124 10:11:39.878698       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0124 10:11:39.880300       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0124 10:11:39.881575       1 reflector.go:156] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:246: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0124 10:11:39.882573       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0124 10:11:39.883602       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0124 10:11:39.884770       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0124 10:11:39.885785       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0124 10:11:39.887291       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0124 10:11:39.888235       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
I0124 10:11:40.968493       1 shared_informer.go:204] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
I0124 10:11:40.969337       1 leaderelection.go:242] attempting to acquire leader lease  kube-system/kube-scheduler...
I0124 10:11:40.983816       1 leaderelection.go:252] successfully acquired lease kube-system/kube-scheduler
E0124 10:11:55.139721       1 factory.go:494] pod is already present in the activeQ
E0124 10:11:55.159306       1 factory.go:494] pod is already present in the activeQ

==> kubelet <==
-- Logs begin at Sun 2019-10-27 18:14:15 CET, end at Fri 2020-01-24 11:12:35 CET. --
Jan 24 11:11:42 bactop systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
Jan 24 11:11:42 bactop systemd[1]: kubelet.service: Succeeded.
Jan 24 11:11:42 bactop systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
Jan 24 11:11:42 bactop systemd[1]: Started kubelet: The Kubernetes Node Agent.
Jan 24 11:11:42 bactop kubelet[4351]: Flag --authorization-mode has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 24 11:11:42 bactop kubelet[4351]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 24 11:11:42 bactop kubelet[4351]: Flag --client-ca-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 24 11:11:42 bactop kubelet[4351]: Flag --cluster-dns has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 24 11:11:42 bactop kubelet[4351]: Flag --cluster-domain has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 24 11:11:42 bactop kubelet[4351]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 24 11:11:42 bactop kubelet[4351]: Flag --pod-manifest-path has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 24 11:11:42 bactop kubelet[4351]: Flag --authorization-mode has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 24 11:11:42 bactop kubelet[4351]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 24 11:11:42 bactop kubelet[4351]: Flag --client-ca-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 24 11:11:42 bactop kubelet[4351]: Flag --cluster-dns has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 24 11:11:42 bactop kubelet[4351]: Flag --cluster-domain has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 24 11:11:42 bactop kubelet[4351]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 24 11:11:42 bactop kubelet[4351]: Flag --pod-manifest-path has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 24 11:11:42 bactop kubelet[4351]: I0124 11:11:42.160726    4351 server.go:416] Version: v1.17.0
Jan 24 11:11:42 bactop kubelet[4351]: I0124 11:11:42.161136    4351 plugins.go:100] No cloud provider specified.
Jan 24 11:11:42 bactop kubelet[4351]: I0124 11:11:42.161170    4351 server.go:821] Client rotation is on, will bootstrap in background
Jan 24 11:11:42 bactop kubelet[4351]: I0124 11:11:42.165157    4351 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 24 11:11:42 bactop kubelet[4351]: I0124 11:11:42.220210    4351 server.go:641] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
Jan 24 11:11:42 bactop kubelet[4351]: I0124 11:11:42.220751    4351 container_manager_linux.go:265] container manager verified user specified cgroup-root exists: []
Jan 24 11:11:42 bactop kubelet[4351]: I0124 11:11:42.220766    4351 container_manager_linux.go:270] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
Jan 24 11:11:42 bactop kubelet[4351]: I0124 11:11:42.220818    4351 fake_topology_manager.go:29] [fake topologymanager] NewFakeManager
Jan 24 11:11:42 bactop kubelet[4351]: I0124 11:11:42.220823    4351 container_manager_linux.go:305] Creating device plugin manager: true
Jan 24 11:11:42 bactop kubelet[4351]: I0124 11:11:42.220835    4351 fake_topology_manager.go:39] [fake topologymanager] AddHintProvider HintProvider:  &{kubelet.sock /var/lib/kubelet/device-plugins/ map[] {0 0} <nil> {{} [0 0 0]} 0x1b1c0d0 0x6e95c50 0x1b1c9a0 map[] map[] map[] map[] map[] 0xc0007f0060 [0] 0x6e95c50}
Jan 24 11:11:42 bactop kubelet[4351]: I0124 11:11:42.220860    4351 state_mem.go:36] [cpumanager] initializing new in-memory state store
Jan 24 11:11:42 bactop kubelet[4351]: I0124 11:11:42.220926    4351 state_mem.go:84] [cpumanager] updated default cpuset: ""
Jan 24 11:11:42 bactop kubelet[4351]: I0124 11:11:42.220931    4351 state_mem.go:92] [cpumanager] updated cpuset assignments: "map[]"
Jan 24 11:11:42 bactop kubelet[4351]: I0124 11:11:42.220936    4351 fake_topology_manager.go:39] [fake topologymanager] AddHintProvider HintProvider:  &{{0 0} 0x6e95c50 10000000000 0xc0008b8fc0 <nil> <nil> <nil> <nil> map[] 0x6e95c50}
Jan 24 11:11:42 bactop kubelet[4351]: I0124 11:11:42.221003    4351 kubelet.go:286] Adding pod path: /etc/kubernetes/manifests
Jan 24 11:11:42 bactop kubelet[4351]: I0124 11:11:42.221017    4351 kubelet.go:311] Watching apiserver
Jan 24 11:11:42 bactop kubelet[4351]: I0124 11:11:42.221858    4351 client.go:75] Connecting to docker on unix:///var/run/docker.sock
Jan 24 11:11:42 bactop kubelet[4351]: I0124 11:11:42.221875    4351 client.go:104] Start docker client with request timeout=2m0s
Jan 24 11:11:42 bactop kubelet[4351]: W0124 11:11:42.229786    4351 docker_service.go:563] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
Jan 24 11:11:42 bactop kubelet[4351]: I0124 11:11:42.229937    4351 docker_service.go:240] Hairpin mode set to "hairpin-veth"
Jan 24 11:11:42 bactop kubelet[4351]: W0124 11:11:42.230014    4351 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
Jan 24 11:11:42 bactop kubelet[4351]: W0124 11:11:42.231671    4351 hostport_manager.go:69] The binary conntrack is not installed, this can cause failures in network connection cleanup.
Jan 24 11:11:42 bactop kubelet[4351]: W0124 11:11:42.231687    4351 hostport_manager.go:69] The binary conntrack is not installed, this can cause failures in network connection cleanup.
Jan 24 11:11:42 bactop kubelet[4351]: I0124 11:11:42.232629    4351 docker_service.go:255] Docker cri networking managed by kubernetes.io/no-op
Jan 24 11:11:42 bactop kubelet[4351]: I0124 11:11:42.238296    4351 docker_service.go:260] Docker Info: &{ID:5AY5:YQCD:LXDC:W36B:6Q4O:LNJ6:SFUU:NDOW:TUMM:RVX7:4BEM:XM5H Containers:10 ContainersRunning:10 ContainersPaused:0 ContainersStopped:0 Images:440 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff false]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:85 OomKillDisable:true NGoroutines:104 SystemTime:2020-01-24T11:11:42.23332379+01:00 LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:1 KernelVersion:5.4.11-arch1-1 OperatingSystem:Arch Linux OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc0008a72d0 NCPU:12 MemTotal:33300697088 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:bactop Labels:[] ExperimentalBuild:false ServerVersion:19.03.5-ce ClusterStore: ClusterAdvertise: Runtimes:map[runc:{Path:runc Args:[]}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:<nil> Warnings:[]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d50db0a42053864a270f648048f9a8b4f24eced3.m Expected:d50db0a42053864a270f648048f9a8b4f24eced3.m} RuncCommit:{ID:d736ef14f0288d6993a1845745d6756cfc9ddd5a Expected:d736ef14f0288d6993a1845745d6756cfc9ddd5a} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[]}
Jan 24 11:11:42 bactop kubelet[4351]: I0124 11:11:42.238358    4351 docker_service.go:273] Setting cgroupDriver to cgroupfs
Jan 24 11:11:42 bactop kubelet[4351]: I0124 11:11:42.244091    4351 remote_runtime.go:59] parsed scheme: ""
Jan 24 11:11:42 bactop kubelet[4351]: I0124 11:11:42.244102    4351 remote_runtime.go:59] scheme "" not registered, fallback to default scheme
Jan 24 11:11:42 bactop kubelet[4351]: I0124 11:11:42.244117    4351 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock 0  <nil>}] <nil>}
Jan 24 11:11:42 bactop kubelet[4351]: I0124 11:11:42.244122    4351 clientconn.go:577] ClientConn switching balancer to "pick_first"
Jan 24 11:11:42 bactop kubelet[4351]: I0124 11:11:42.244139    4351 remote_image.go:50] parsed scheme: ""
Jan 24 11:11:42 bactop kubelet[4351]: I0124 11:11:42.244143    4351 remote_image.go:50] scheme "" not registered, fallback to default scheme
Jan 24 11:11:42 bactop kubelet[4351]: I0124 11:11:42.244148    4351 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock 0  <nil>}] <nil>}
Jan 24 11:11:42 bactop kubelet[4351]: I0124 11:11:42.244151    4351 clientconn.go:577] ClientConn switching balancer to "pick_first"
Jan 24 11:11:50 bactop systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
Jan 24 11:12:02 bactop kubelet[4351]: E0124 11:12:02.500191    4351 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chain. Deprecated.
Jan 24 11:12:02 bactop kubelet[4351]:         For verbose messaging see aws.Config.CredentialsChainVerboseErrors
Jan 24 11:12:02 bactop kubelet[4351]: I0124 11:12:02.527738    4351 kuberuntime_manager.go:211] Container runtime docker initialized, version: 19.03.5-ce, apiVersion: 1.40.0
Jan 24 11:12:02 bactop kubelet[4351]: I0124 11:12:02.534155    4351 server.go:1113] Started kubelet
Jan 24 11:12:02 bactop kubelet[4351]: I0124 11:12:02.534189    4351 server.go:143] Starting to listen on 0.0.0.0:10250
Jan 24 11:12:02 bactop systemd[1]: kubelet.service: Succeeded.
Jan 24 11:12:02 bactop systemd[1]: Stopped kubelet: The Kubernetes Node Agent.

The operating system version:

I use Arch Linux btw.

@afbjorklund
Collaborator

Looks like this is a new feature in systemd version 241 (minikube uses version 240):

https://github.com/systemd/systemd/blob/ecebd1ecf815648cf91749301a648169d07c0046/NEWS#L53

While this will hopefully improve the security of most installations, it is technically a backwards incompatible change

Basically, root is no longer allowed to open the user's files in /tmp, which breaks github.com/juju/mutex
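
A quick way to check whether this protection is active on a given machine (a minimal sketch; the default depends on your distribution and systemd version):

$ sysctl fs.protected_regular
fs.protected_regular = 1

A value of 1 (or 2) means opening an existing file in a world-writable sticky directory such as /tmp is refused when the file belongs to another user, which is exactly what the juju lock file runs into under sudo.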


To reproduce:

sudo sysctl fs.protected_regular=1

$ touch /tmp/foo
$ chmod 600 /tmp/foo
$ sudo tee /tmp/foo
tee: /tmp/foo: Permission denied
^C

To disable:

sudo sysctl fs.protected_regular=0
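
Note that a plain sysctl call only lasts until reboot; if you want the relaxed setting to persist, one option (a sketch, the snippet name is arbitrary) is to drop it into a sysctl.d file:

$ echo 'fs.protected_regular = 0' | sudo tee /etc/sysctl.d/99-minikube-none.conf
$ sudo sysctl --system    # reload all sysctl configuration files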

@afbjorklund afbjorklund added co/none-driver kind/bug Categorizes issue or PR as related to a bug. priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. labels Jan 25, 2020
@afbjorklund
Collaborator

The best long-term fix here would be to stop having to run minikube with sudo...
That would also fix all the file permission issues and other things: see #3760

⚠️  kubectl and minikube configuration will be stored in /root
⚠️  To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:

    ▪ sudo mv /root/.kube /root/.minikube $HOME
    ▪ sudo chown -R $USER $HOME/.kube $HOME/.minikube
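
After relocating the files as suggested, a quick sanity check (a sketch; assumes kubectl is on your PATH) that the configuration is usable without sudo:

    $ ls -ld $HOME/.kube $HOME/.minikube   # both should now be owned by your user
    $ kubectl config current-context       # should print "minikube"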

@badeball
Author

That's some catch, very good job! It does indeed seem to be the issue.

I'm very much looking forward to the day when sudo won't be necessary. In the meantime I think I will resort to using the virtualbox driver.

@tstromberg tstromberg changed the title none: Failure to start and stop cluster none: unable to open /tmp/juju-mk72a1487fd2da23da9e8181e16f352a6105bd56: permission denied Mar 19, 2020
@tstromberg tstromberg changed the title none: unable to open /tmp/juju-mk72a1487fd2da23da9e8181e16f352a6105bd56: permission denied none: writing kubeconfig: unable to open /tmp/juju-x: permission denied Mar 19, 2020
@tstromberg tstromberg changed the title none: writing kubeconfig: unable to open /tmp/juju-x: permission denied none: writing kubeconfig: unable to open /tmp/juju-x: permission denied (sysctl fs.protected_regular=0) Mar 19, 2020
@tstromberg tstromberg added the needs-solution-message Issues where offering a solution for an error would be helpful label Mar 19, 2020
@tstromberg tstromberg removed the needs-solution-message Issues where offering a solution for an error would be helpful label Apr 2, 2020
@priyawadhwa

I'm going to go ahead and close this issue as it seems resolved for @badeball -- if you need to reopen at any time, please comment /reopen on this issue.

We can track the broader issue of using none without sudo here: #3760

@FlorianLudwig

I stumbled upon this; the CLI helpfully brought me here:

💡  Suggestion: Run 'sudo sysctl fs.protected_regular=1', or try a driver which does not require root, such as '--driver=docker'
⁉️   Related issue: https://github.com/kubernetes/minikube/issues/6391

Interestingly, the suggestion sudo sysctl fs.protected_regular=1 should be sudo sysctl fs.protected_regular=0, right? Should I open a separate issue for this?

@afbjorklund
Collaborator

@FlorianLudwig: It was fixed in 644b419

@Rahul295-tech

Failed to start none bare metal machine. Running "minikube delete" may fix it: boot lock: unable to open /tmp/juju-mkc8ab01ad3ea83211c505c81a7ee49a8e3ecb89: permission denied
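
If a stale lock left behind by an earlier run as a different user is what blocks you, one workaround that has been suggested (a sketch; check the owner first, and note the juju-... name differs per machine) is to remove the stale lock so minikube can recreate it:

$ ls -l /tmp/juju-*    # see which user owns the stale lock file
$ sudo rm /tmp/juju-*  # minikube recreates the lock on the next start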

@anmolsharma40

WHAT HAPPENED

root@ip-172-31-42-227:/home/ubuntu# sudo minikube start --force

  • minikube v1.27.0 on Ubuntu 22.04 (xen/amd64)
    ! minikube skips various validations when --force is supplied; this may lead to unexpected behavior
    ! Kubernetes 1.25.0 has a known issue with resolv.conf. minikube is using a workaround that should work for most use cases.
    ! For more information, see: musl-based DNS resolution will break on v1.25.0 in certain configurations kubernetes#112135
  • Using the none driver based on existing profile
  • Starting control plane node minikube in cluster minikube
    ! StartHost failed, but will try again: boot lock: unable to open /tmp/juju-mkc8ab01ad3ea83211c505c81a7ee49a8e3ecb89: permission denied
  • Failed to start none bare metal machine. Running "minikube delete" may fix it: boot lock: unable to open /tmp/juju-mkc8ab01ad3ea83211c505c81a7ee49a8e3ecb89: permission denied

X Exiting due to HOST_JUJU_LOCK_PERMISSION: Failed to start host: boot lock: unable to open /tmp/juju-mkc8ab01ad3ea83211c505c81a7ee49a8e3ecb89: permission denied

WHEN STARTING MINIKUBE

OPERATING SYSTEM
--ubuntu

DRIVER
--docker

@Dilip-cloud

Change the ownership of the /tmp folder recursively to root: chown -R root:root /tmp. It worked for me on Kubernetes version 1.26.
