hyperkit has 50% resting CPU usage when Idle #9104

Closed
sanarena opened this issue Aug 27, 2020 · 3 comments

Comments

sanarena commented Aug 27, 2020

Is this a BUG REPORT or FEATURE REQUEST?: BUG REPORT

minikube start --driver=hyperkit --kubernetes-version v1.18.3 --disk-size 60g --memory 3500

Steps to reproduce the issue:

  1. Start minikube v1.12.3 with the hyperkit driver, using the command above.
  2. Let the cluster sit idle, then watch the hyperkit process in Activity Monitor: it rests at roughly 50% CPU.
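Activity Monitor aside, the idle CPU figure can also be sampled from a terminal. A minimal sketch (the `ps`/`awk` pipeline is illustrative, not part of minikube's tooling; it assumes the VM process is named hyperkit):

```shell
# Sum the CPU% of every process whose command name contains "hyperkit".
# Prints 0.0 when no such process is running.
pcpu=$(ps -Ao pcpu=,comm= | awk '/hyperkit/ {s+=$1} END {printf "%.1f", s+0}')
echo "hyperkit CPU: ${pcpu}%"
```

On a truly idle VM this should sit near zero; a steady reading around 50% reproduces the behavior reported here.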

Full output of failed command:
No error

Full output of minikube start command used, if not already included:
No error

sanarena (Author) commented:

Here is the output of minikube logs:

==> Docker <==
-- Logs begin at Thu 2020-08-27 11:32:50 UTC, end at Thu 2020-08-27 12:27:05 UTC. --
Aug 27 11:34:05 minikube dockerd[2160]: time="2020-08-27T11:34:05.234648413Z" level=info msg="Daemon has completed initialization"
Aug 27 11:34:05 minikube systemd[1]: Started Docker Application Container Engine.
Aug 27 11:34:05 minikube dockerd[2160]: time="2020-08-27T11:34:05.266946108Z" level=info msg="API listen on /var/run/docker.sock"
Aug 27 11:34:05 minikube dockerd[2160]: time="2020-08-27T11:34:05.267323614Z" level=info msg="API listen on [::]:2376"
Aug 27 11:34:20 minikube dockerd[2160]: time="2020-08-27T11:34:20.681708812Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/94112386c223a35c8b59f06efaf854a26c78553dedcccd8e30a3b688dc473763/shim.sock" debug=false pid=3070
Aug 27 11:34:20 minikube dockerd[2160]: time="2020-08-27T11:34:20.685581753Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/114b016aa2831ec4edf8950008c932d536546f0b28b5611a327bd73b8969624d/shim.sock" debug=false pid=3073
Aug 27 11:34:20 minikube dockerd[2160]: time="2020-08-27T11:34:20.881483245Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/020968b433521c3dfed9fd3140349f497c8ba82f600f725b001653f05bbfe6e3/shim.sock" debug=false pid=3129
Aug 27 11:34:20 minikube dockerd[2160]: time="2020-08-27T11:34:20.979798688Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/17b217550d2f90b4ac0226ff44c80064b3ef6c0c192ade2209b9e78f09645fe8/shim.sock" debug=false pid=3154
Aug 27 11:34:21 minikube dockerd[2160]: time="2020-08-27T11:34:21.582632861Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/a55123539ca6f25c07a7b8dee6897ccb31cda5a4b267142daf1c99520d4537e6/shim.sock" debug=false pid=3266
Aug 27 11:34:21 minikube dockerd[2160]: time="2020-08-27T11:34:21.709717288Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/650fec4f7fec3b5fbc8d8b36d6d1f5c2c0fac7dc40cbdd02f96ef75e52e45d3e/shim.sock" debug=false pid=3274
Aug 27 11:34:21 minikube dockerd[2160]: time="2020-08-27T11:34:21.852500796Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/65911a94591b558e8caf158b104f4a2596419c7c47df4afc9599dfb667c2337f/shim.sock" debug=false pid=3300
Aug 27 11:34:21 minikube dockerd[2160]: time="2020-08-27T11:34:21.921494310Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/6dc0a0bc5f36b6dbaebea81b237dfedddc1d6450597f5a44839c7cad78a802f3/shim.sock" debug=false pid=3315
Aug 27 11:34:47 minikube dockerd[2160]: time="2020-08-27T11:34:47.992042422Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/b6c869581f47c4ce669db15b01241d6b08957d50f6d017bd4f831b62b8d8260b/shim.sock" debug=false pid=3952
Aug 27 11:34:48 minikube dockerd[2160]: time="2020-08-27T11:34:48.452608863Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/1e064a2113b27664ece8b5cdeac3691d87ccb7ae51b56756e6440b6468d4cbc6/shim.sock" debug=false pid=3993
Aug 27 11:35:00 minikube dockerd[2160]: time="2020-08-27T11:35:00.545450175Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/2d674af93cb20138955e5152863a0d35ec179e6d494e8f0db99385fa1556a797/shim.sock" debug=false pid=4217
Aug 27 11:35:01 minikube dockerd[2160]: time="2020-08-27T11:35:01.687936237Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/00cac345b179916de5f44aa4b091740a0f722186ad2f3a589d11fc963f501872/shim.sock" debug=false pid=4297
Aug 27 11:35:01 minikube dockerd[2160]: time="2020-08-27T11:35:01.728908371Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/0ebc5308e5b6ab430a951b9cc5e16f5c2a6edcc36f0ddfd37fac912465962eee/shim.sock" debug=false pid=4316
Aug 27 11:35:01 minikube dockerd[2160]: time="2020-08-27T11:35:01.822547803Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/c4b64d65f1b8a3a342fcaf8eb729e0cec5d36f1b0ba7db6261e8de1159b3138f/shim.sock" debug=false pid=4338
Aug 27 11:35:02 minikube dockerd[2160]: time="2020-08-27T11:35:02.198093302Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/974ea58ab5f0bd2ed91956014b8559acd54aac63a37c1a67efd049361112aa97/shim.sock" debug=false pid=4419
Aug 27 11:35:02 minikube dockerd[2160]: time="2020-08-27T11:35:02.891245879Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/06a31a620a83e36fcf9434a9b7933e9eb9f2809c443208f2a0b6fee5733f8770/shim.sock" debug=false pid=4505
Aug 27 11:35:21 minikube dockerd[2160]: time="2020-08-27T11:35:21.535863489Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/7e389816b21a40fa8fbe051a92392038e3071e4f09f109106d55a187e9307ba3/shim.sock" debug=false pid=4661
Aug 27 11:35:21 minikube dockerd[2160]: time="2020-08-27T11:35:21.885745475Z" level=info msg="shim reaped" id=7e389816b21a40fa8fbe051a92392038e3071e4f09f109106d55a187e9307ba3
Aug 27 11:35:21 minikube dockerd[2160]: time="2020-08-27T11:35:21.904885901Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 27 11:35:23 minikube dockerd[2160]: time="2020-08-27T11:35:23.062721090Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/080130917eacc523c7ed41f2b79ae00bf0700038c4c52674a14455118f48585a/shim.sock" debug=false pid=4721
Aug 27 11:35:23 minikube dockerd[2160]: time="2020-08-27T11:35:23.450174436Z" level=info msg="shim reaped" id=080130917eacc523c7ed41f2b79ae00bf0700038c4c52674a14455118f48585a
Aug 27 11:35:23 minikube dockerd[2160]: time="2020-08-27T11:35:23.463020275Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 27 11:35:25 minikube dockerd[2160]: time="2020-08-27T11:35:25.203950049Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/467c0bbd868a4fd7170074a361118d5ee9e714cf91831ad7219411614d75806a/shim.sock" debug=false pid=4773
Aug 27 11:35:25 minikube dockerd[2160]: time="2020-08-27T11:35:25.653264391Z" level=info msg="shim reaped" id=467c0bbd868a4fd7170074a361118d5ee9e714cf91831ad7219411614d75806a
Aug 27 11:35:25 minikube dockerd[2160]: time="2020-08-27T11:35:25.663498044Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 27 11:35:26 minikube dockerd[2160]: time="2020-08-27T11:35:26.026308556Z" level=info msg="shim reaped" id=0ebc5308e5b6ab430a951b9cc5e16f5c2a6edcc36f0ddfd37fac912465962eee
Aug 27 11:35:26 minikube dockerd[2160]: time="2020-08-27T11:35:26.036293598Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 27 11:35:27 minikube dockerd[2160]: time="2020-08-27T11:35:27.189450972Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/cb29980d8439d6cc38a1f5eb02abfdee9d7547601b2c3337bad7bec79a8f5085/shim.sock" debug=false pid=4873
Aug 27 11:35:28 minikube dockerd[2160]: time="2020-08-27T11:35:28.456834720Z" level=info msg="shim reaped" id=cb29980d8439d6cc38a1f5eb02abfdee9d7547601b2c3337bad7bec79a8f5085
Aug 27 11:35:28 minikube dockerd[2160]: time="2020-08-27T11:35:28.468475488Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 27 11:35:31 minikube dockerd[2160]: time="2020-08-27T11:35:31.445989297Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/f0d101a706ede1c4f5d1b05dcbcea715a77d8798e820a0928eb56446aee2981b/shim.sock" debug=false pid=5015
Aug 27 11:35:36 minikube dockerd[2160]: time="2020-08-27T11:35:36.727315134Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/8bed26bbd80f050ce1bf6df2f4bb837257f18eb5ad4c0caa3ec65ee1a052f4b3/shim.sock" debug=false pid=5072
Aug 27 11:35:37 minikube dockerd[2160]: time="2020-08-27T11:35:37.798902215Z" level=info msg="shim reaped" id=8bed26bbd80f050ce1bf6df2f4bb837257f18eb5ad4c0caa3ec65ee1a052f4b3
Aug 27 11:35:37 minikube dockerd[2160]: time="2020-08-27T11:35:37.810407414Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 27 11:35:38 minikube dockerd[2160]: time="2020-08-27T11:35:38.907898685Z" level=info msg="shim reaped" id=2d674af93cb20138955e5152863a0d35ec179e6d494e8f0db99385fa1556a797
Aug 27 11:35:38 minikube dockerd[2160]: time="2020-08-27T11:35:38.917784805Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 27 11:36:25 minikube dockerd[2160]: time="2020-08-27T11:36:25.358254665Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/c4febdf02da7512d87ba50072ba708eb32458880f24ca202277300f945844fe0/shim.sock" debug=false pid=5463
Aug 27 11:36:30 minikube dockerd[2160]: time="2020-08-27T11:36:30.084754926Z" level=warning msg="Published ports are discarded when using host network mode"
Aug 27 11:36:30 minikube dockerd[2160]: time="2020-08-27T11:36:30.279177022Z" level=warning msg="Published ports are discarded when using host network mode"
Aug 27 11:36:30 minikube dockerd[2160]: time="2020-08-27T11:36:30.420661056Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/a4a83a79afba13f2b3f32c70f57eb17f72e52f057b8f63c201479e7950385670/shim.sock" debug=false pid=5645
Aug 27 11:36:34 minikube dockerd[2160]: time="2020-08-27T11:36:34.337420404Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/464020776a36ad20068c3b47c7f8de7040d87aedda6313e7e85b4cfde2a37dd3/shim.sock" debug=false pid=5697
Aug 27 11:36:57 minikube dockerd[2160]: time="2020-08-27T11:36:57.819532648Z" level=info msg="shim reaped" id=974ea58ab5f0bd2ed91956014b8559acd54aac63a37c1a67efd049361112aa97
Aug 27 11:37:19 minikube dockerd[2160]: time="2020-08-27T11:37:19.301589874Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 27 11:37:26 minikube dockerd[2160]: time="2020-08-27T11:37:26.703423793Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/d571a86fccfe570141f66c3b9656f2aaab379077f7e05afe9ee292f31280ced1/shim.sock" debug=false pid=5930
Aug 27 11:37:44 minikube dockerd[2160]: time="2020-08-27T11:37:44.359789087Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/8e3c1cce2d6d6b23d395549093ddb39cd9de3ff0e3d22a1ee1a8706ef703e6bd/shim.sock" debug=false pid=6041
Aug 27 11:37:59 minikube dockerd[2160]: time="2020-08-27T11:37:59.467438208Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/dcd942ec61ca774b96d2296d378acc660d904806f38dab50ded61e29044c07bd/shim.sock" debug=false pid=6188
Aug 27 11:38:17 minikube dockerd[2160]: time="2020-08-27T11:38:17.036244343Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/f2b5e4bb86d6d9d18808677564c70b6bd0658baf616916d7ed267c3382aaf4a1/shim.sock" debug=false pid=6330
Aug 27 11:38:17 minikube dockerd[2160]: time="2020-08-27T11:38:17.813615023Z" level=info msg="Attempting next endpoint for pull after error: Get https://############.dkr.ecr.ap-southeast-1.amazonaws.com/v2/mysql55/manifests/0.0.20: no basic auth credentials"
Aug 27 11:38:17 minikube dockerd[2160]: time="2020-08-27T11:38:17.813989991Z" level=error msg="Handler for POST /images/create returned error: Get https://############.dkr.ecr.ap-southeast-1.amazonaws.com/v2/mysql55/manifests/0.0.20: no basic auth credentials"
Aug 27 11:38:32 minikube dockerd[2160]: time="2020-08-27T11:38:32.530276082Z" level=info msg="Attempting next endpoint for pull after error: Get https://############.dkr.ecr.ap-southeast-1.amazonaws.com/v2/mysql55/manifests/0.0.20: no basic auth credentials"
Aug 27 11:38:32 minikube dockerd[2160]: time="2020-08-27T11:38:32.531985301Z" level=error msg="Handler for POST /images/create returned error: Get https://############.dkr.ecr.ap-southeast-1.amazonaws.com/v2/mysql55/manifests/0.0.20: no basic auth credentials"
Aug 27 11:39:01 minikube dockerd[2160]: time="2020-08-27T11:39:01.551708909Z" level=info msg="Attempting next endpoint for pull after error: Get https://############.dkr.ecr.ap-southeast-1.amazonaws.com/v2/mysql55/manifests/0.0.20: no basic auth credentials"
Aug 27 11:39:01 minikube dockerd[2160]: time="2020-08-27T11:39:01.551908982Z" level=error msg="Handler for POST /images/create returned error: Get https://############.dkr.ecr.ap-southeast-1.amazonaws.com/v2/mysql55/manifests/0.0.20: no basic auth credentials"
Aug 27 11:41:27 minikube dockerd[2160]: time="2020-08-27T11:41:27.019717211Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/1484ee17fb2bd3f89893c1d54394583744d8ea510fb036de22372ed7a406a5f9/shim.sock" debug=false pid=7301
Aug 27 11:41:48 minikube dockerd[2160]: time="2020-08-27T11:41:48.986971725Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/c39d585d188e6b5e2a73db71afd6c8670afaa380ea78f91926267731a3022a3e/shim.sock" debug=false pid=7592
Aug 27 11:47:51 minikube dockerd[2160]: time="2020-08-27T11:47:51.381864877Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/d35ddf3893537dc8ccc943f43fbdbe5deaef8ad82b989a978eaf40147dcf32e5/shim.sock" debug=false pid=9206

==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
d35ddf3893537 ############.dkr.ecr.ap-southeast-1.amazonaws.com/mongo40@sha256:ab9c6c8b433c04e3e9b15130e3136c3d863bf9b41123ebccee765d75c16df9bd 39 minutes ago Running mongo40 0 c39d585d188e6
1484ee17fb2bd ############.dkr.ecr.ap-southeast-1.amazonaws.com/mysql55@sha256:54fc332ded87886fe7ecfa806ea583dd3c29c2e302b7e1f312366ca71f02119c 45 minutes ago Running mysql55 0 f2b5e4bb86d6d
dcd942ec61ca7 upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605 49 minutes ago Running registry-creds 0 464020776a36a
8e3c1cce2d6d6 cryptexlabs/minikube-ingress-dns@sha256:d07dfd1b882d8ee70d71514434c10fdd8c54d347b5a883323154d6096f1e8c67 49 minutes ago Running minikube-ingress-dns 0 a4a83a79afba1
d571a86fccfe5 9c3ca9f065bb1 49 minutes ago Running storage-provisioner 1 00cac345b1799
c4febdf02da75 quay.io/kubernetes-ingress-controller/nginx-ingress-controller@sha256:fc650620719e460df04043512ec4af146b7d9da163616960e58aceeaf4ea5ba1 50 minutes ago Running controller 0 f0d101a706ede
8bed26bbd80f0 5693ebf5622ae 51 minutes ago Exited patch 2 2d674af93cb20
467c0bbd868a4 jettech/kube-webhook-certgen@sha256:da8122a78d7387909cf34a0f34db0cce672da1379ee4fd57c626a4afe9ac12b7 51 minutes ago Exited create 0 0ebc5308e5b6a
06a31a620a83e 67da37a9a360e 52 minutes ago Running coredns 0 c4b64d65f1b8a
974ea58ab5f0b 9c3ca9f065bb1 52 minutes ago Exited storage-provisioner 0 00cac345b1799
1e064a2113b27 3439b7546f29b 52 minutes ago Running kube-proxy 0 b6c869581f47c
6dc0a0bc5f36b 303ce5db0e90d 52 minutes ago Running etcd 0 17b217550d2f9
65911a94591b5 76216c34ed0c7 52 minutes ago Running kube-scheduler 0 020968b433521
650fec4f7fec3 da26705ccb4b5 52 minutes ago Running kube-controller-manager 0 94112386c223a
a55123539ca6f 7e28efa976bd1 52 minutes ago Running kube-apiserver 0 114b016aa2831

==> coredns [06a31a620a83] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
CoreDNS-1.6.7
linux/amd64, go1.13.6, da7f65b

==> describe nodes <==
Name: minikube
Roles: master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=minikube
kubernetes.io/os=linux
minikube.k8s.io/commit=2243b4b97c131e3244c5f014faedca0d846599f5
minikube.k8s.io/name=minikube
minikube.k8s.io/updated_at=2020_08_27T18_34_39_0700
minikube.k8s.io/version=v1.12.3
node-role.kubernetes.io/master=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Thu, 27 Aug 2020 11:34:36 +0000
Taints:
Unschedulable: false
Lease:
HolderIdentity: minikube
AcquireTime:
RenewTime: Thu, 27 Aug 2020 12:27:03 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message


MemoryPressure False Thu, 27 Aug 2020 12:23:55 +0000 Thu, 27 Aug 2020 11:34:36 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Thu, 27 Aug 2020 12:23:55 +0000 Thu, 27 Aug 2020 11:34:36 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Thu, 27 Aug 2020 12:23:55 +0000 Thu, 27 Aug 2020 11:34:36 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Thu, 27 Aug 2020 12:23:55 +0000 Thu, 27 Aug 2020 11:34:56 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.64.22
Hostname: minikube
Capacity:
cpu: 2
ephemeral-storage: 52228316Ki
hugepages-2Mi: 0
memory: 3431072Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 52228316Ki
hugepages-2Mi: 0
memory: 3431072Ki
pods: 110
System Info:
Machine ID: 1ae5d7d9377b4b45b9d8ff3b18148652
System UUID: 035511ea-0000-0000-89af-9801a78b00f1
Boot ID: e00d93fa-01ef-49e5-8595-205a9b4f30fe
Kernel Version: 4.19.114
OS Image: Buildroot 2019.02.11
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://19.3.12
Kubelet Version: v1.18.3
Kube-Proxy Version: v1.18.3
Non-terminated Pods: (12 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE


default mongo40-deployment-76d6587bc-q44nw 0 (0%) 0 (0%) 150Mi (4%) 1Gi (30%) 45m
default mysql55-deployment-dfbffbb6c-hdqwj 0 (0%) 0 (0%) 0 (0%) 0 (0%) 48m
kube-system coredns-66bff467f8-rhq45 100m (5%) 0 (0%) 70Mi (2%) 170Mi (5%) 52m
kube-system etcd-minikube 0 (0%) 0 (0%) 0 (0%) 0 (0%) 52m
kube-system ingress-nginx-controller-69ccf5d9d8-67t5c 100m (5%) 0 (0%) 90Mi (2%) 0 (0%) 52m
kube-system kube-apiserver-minikube 250m (12%) 0 (0%) 0 (0%) 0 (0%) 52m
kube-system kube-controller-manager-minikube 200m (10%) 0 (0%) 0 (0%) 0 (0%) 52m
kube-system kube-ingress-dns-minikube 0 (0%) 0 (0%) 0 (0%) 0 (0%) 50m
kube-system kube-proxy-96zpv 0 (0%) 0 (0%) 0 (0%) 0 (0%) 52m
kube-system kube-scheduler-minikube 100m (5%) 0 (0%) 0 (0%) 0 (0%) 52m
kube-system registry-creds-85f59c657-7h96n 0 (0%) 0 (0%) 0 (0%) 0 (0%) 50m
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 52m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits


cpu 750m (37%) 0 (0%)
memory 310Mi (9%) 1194Mi (35%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message


Normal NodeHasSufficientMemory 52m (x7 over 52m) kubelet, minikube Node minikube status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 52m (x7 over 52m) kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 52m (x6 over 52m) kubelet, minikube Node minikube status is now: NodeHasSufficientPID
Normal Starting 52m kubelet, minikube Starting kubelet.
Normal NodeHasSufficientMemory 52m kubelet, minikube Node minikube status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 52m kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 52m kubelet, minikube Node minikube status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 52m kubelet, minikube Updated Node Allocatable limit across pods
Normal Starting 52m kube-proxy, minikube Starting kube-proxy.
Normal NodeReady 52m kubelet, minikube Node minikube status is now: NodeReady

==> dmesg <==
[Aug27 11:32] ERROR: earlyprintk= earlyser already used
[ +0.000000] You have booted with nomodeset. This means your GPU drivers are DISABLED
[ +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
[ +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
[ +0.000000] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xBE, should be 0x1B (20180810/tbprint-177)
[ +0.000000] ACPI Error: Could not enable RealTimeClock event (20180810/evxfevnt-184)
[ +0.000002] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20180810/evxface-620)
[ +0.009504] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[ +3.227735] systemd[1]: Failed to bump fs.file-max, ignoring: Invalid argument
[ +0.008821] systemd-fstab-generator[1106]: Ignoring "noauto" for root device
[ +0.017637] systemd[1]: File /usr/lib/systemd/system/systemd-journald.service:12 configures an IP firewall (IPAddressDeny=any), but the local system does not support BPF/cgroup based firewalling.
[ +0.000005] systemd[1]: Proceeding WITHOUT firewalling in effect! (This warning is only shown for the first loaded unit using IP firewalling.)
[ +1.648461] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
[ +1.983967] vboxguest: loading out-of-tree module taints kernel.
[ +0.006847] vboxguest: PCI device not found, probably running on physical hardware.
[ +4.023276] systemd-fstab-generator[1905]: Ignoring "noauto" for root device
[ +0.151812] systemd-fstab-generator[1915]: Ignoring "noauto" for root device
[Aug27 11:34] systemd-fstab-generator[2149]: Ignoring "noauto" for root device
[ +2.392315] kauditd_printk_skb: 65 callbacks suppressed
[ +0.587059] systemd-fstab-generator[2307]: Ignoring "noauto" for root device
[ +0.509200] systemd-fstab-generator[2378]: Ignoring "noauto" for root device
[ +3.419952] systemd-fstab-generator[2594]: Ignoring "noauto" for root device
[ +10.122097] kauditd_printk_skb: 107 callbacks suppressed
[ +19.751226] systemd-fstab-generator[3676]: Ignoring "noauto" for root device
[ +9.998261] kauditd_printk_skb: 32 callbacks suppressed
[ +7.563491] NFSD: Unable to end grace period: -110
[Aug27 11:35] kauditd_printk_skb: 50 callbacks suppressed
[ +11.073396] kauditd_printk_skb: 5 callbacks suppressed
[ +27.534237] kauditd_printk_skb: 11 callbacks suppressed
[Aug27 11:36] hrtimer: interrupt took 16583762 ns
[Aug27 11:41] kauditd_printk_skb: 5 callbacks suppressed
[Aug27 11:48] kauditd_printk_skb: 5 callbacks suppressed

==> etcd [6dc0a0bc5f36] <==
2020-08-27 11:41:47.335571 W | etcdserver: read-only range request "key:"/registry/health" " with result "range_response_count:0 size:5" took too long (130.600549ms) to execute
2020-08-27 11:41:47.344814 W | etcdserver: read-only range request "key:"/registry/events/default/mysql55-deployment-dfbffbb6c-hdqwj.162f1d32e4bf0edf" " with result "range_response_count:1 size:853" took too long (139.798622ms) to execute
2020-08-27 11:41:47.787815 W | etcdserver: request "header:<ID:2379998676296431062 username:"kube-apiserver-etcd-client" auth_revision:1 > txn:<compare:<target:MOD key:"/registry/endpointslices/default/mongo-service-bbkv4" mod_revision:0 > success:<request_put:<key:"/registry/endpointslices/default/mongo-service-bbkv4" value_size:946 >> failure:<>>" with result "size:16" took too long (123.350663ms) to execute
2020-08-27 11:41:47.817316 W | etcdserver: read-only range request "key:"/registry/persistentvolumes/pvc-e21c19cb-5863-458d-94a1-ebb77dccbc34" " with result "range_response_count:1 size:1201" took too long (193.390541ms) to execute
2020-08-27 11:41:52.481920 W | etcdserver: read-only range request "key:"/registry/masterleases/" range_end:"/registry/masterleases0" " with result "range_response_count:1 size:135" took too long (309.522241ms) to execute
2020-08-27 11:42:42.343421 W | etcdserver: read-only range request "key:"/registry/secrets" range_end:"/registry/secrett" count_only:true " with result "range_response_count:0 size:7" took too long (245.106341ms) to execute
2020-08-27 11:42:47.635637 W | etcdserver: read-only range request "key:"/registry/health" " with result "range_response_count:0 size:5" took too long (491.965489ms) to execute
2020-08-27 11:43:20.152512 W | etcdserver: read-only range request "key:"/registry/ingress" range_end:"/registry/ingrest" count_only:true " with result "range_response_count:0 size:7" took too long (341.011389ms) to execute
2020-08-27 11:43:20.153797 W | etcdserver: read-only range request "key:"/registry/csidrivers" range_end:"/registry/csidrivert" count_only:true " with result "range_response_count:0 size:5" took too long (136.356672ms) to execute
2020-08-27 11:44:26.150788 I | mvcc: store.index: compact 834
2020-08-27 11:44:26.259982 I | mvcc: finished scheduled compaction at 834 (took 107.054716ms)
2020-08-27 11:45:06.391361 W | wal: sync duration of 1.471190697s, expected less than 1s
2020-08-27 11:45:06.394301 W | etcdserver: read-only range request "key:"/registry/events" range_end:"/registry/eventt" count_only:true " with result "range_response_count:0 size:7" took too long (283.124613ms) to execute
2020-08-27 11:45:13.614997 W | etcdserver: read-only range request "key:"/registry/persistentvolumeclaims/default/mysql-volume-claim10" " with result "range_response_count:1 size:1422" took too long (124.236053ms) to execute
2020-08-27 11:45:19.564589 W | etcdserver: read-only range request "key:"/registry/csidrivers" range_end:"/registry/csidrivert" count_only:true " with result "range_response_count:0 size:5" took too long (252.850848ms) to execute
2020-08-27 11:45:22.155330 W | etcdserver: read-only range request "key:"/registry/roles" range_end:"/registry/rolet" count_only:true " with result "range_response_count:0 size:7" took too long (428.143562ms) to execute
2020-08-27 11:45:22.157345 W | etcdserver: read-only range request "key:"/registry/namespaces/default" " with result "range_response_count:1 size:257" took too long (246.40512ms) to execute
2020-08-27 11:45:27.534937 W | etcdserver: read-only range request "key:"/registry/horizontalpodautoscalers" range_end:"/registry/horizontalpodautoscalert" count_only:true " with result "range_response_count:0 size:5" took too long (183.727465ms) to execute
2020-08-27 11:45:28.454842 W | etcdserver: read-only range request "key:"/registry/leases" range_end:"/registry/leaset" count_only:true " with result "range_response_count:0 size:7" took too long (164.79997ms) to execute
2020-08-27 11:45:34.839602 W | etcdserver: read-only range request "key:"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath" " with result "range_response_count:1 size:601" took too long (743.544843ms) to execute
2020-08-27 11:46:02.346158 W | etcdserver: read-only range request "key:"/registry/configmaps/kube-system/ingress-controller-leader-nginx" " with result "range_response_count:1 size:607" took too long (128.005718ms) to execute
2020-08-27 11:46:02.347105 W | etcdserver: read-only range request "key:"/registry/endpointslices/default/kubernetes" " with result "range_response_count:1 size:485" took too long (132.56942ms) to execute
2020-08-27 11:46:14.340346 W | etcdserver: read-only range request "key:"/registry/jobs/" range_end:"/registry/jobs0" limit:500 " with result "range_response_count:2 size:6682" took too long (215.409849ms) to execute
2020-08-27 11:46:33.126153 W | etcdserver: read-only range request "key:"/registry/services/endpoints/default/kubernetes" " with result "range_response_count:1 size:289" took too long (186.329881ms) to execute
2020-08-27 11:46:36.709442 W | etcdserver: read-only range request "key:"/registry/ranges/serviceips" " with result "range_response_count:1 size:86619" took too long (238.694133ms) to execute
2020-08-27 11:47:18.086540 W | etcdserver: request "header:<ID:2379998676296432253 > lease_revoke:id:2107742fb28c7a46" with result "size:28" took too long (147.880995ms) to execute
2020-08-27 11:47:23.208891 W | etcdserver: request "header:<ID:2379998676296432272 username:"kube-apiserver-etcd-client" auth_revision:1 > txn:<compare:<target:MOD key:"/registry/masterleases/192.168.64.22" mod_revision:1261 > success:<request_put:<key:"/registry/masterleases/192.168.64.22" value_size:68 lease:2379998676296432270 >> failure:<request_range:<key:"/registry/masterleases/192.168.64.22" > >>" with result "size:16" took too long (152.739544ms) to execute
2020-08-27 11:47:46.298631 W | etcdserver: read-only range request "key:"/registry/statefulsets" range_end:"/registry/statefulsett" count_only:true " with result "range_response_count:0 size:5" took too long (498.911938ms) to execute
2020-08-27 11:47:46.299218 W | etcdserver: read-only range request "key:"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath" " with result "range_response_count:1 size:601" took too long (469.772295ms) to execute
2020-08-27 11:47:46.300213 W | etcdserver: read-only range request "key:"/registry/jobs/" range_end:"/registry/jobs0" limit:500 " with result "range_response_count:2 size:6682" took too long (615.986171ms) to execute
2020-08-27 11:47:46.303146 W | etcdserver: read-only range request "key:"/registry/networkpolicies" range_end:"/registry/networkpoliciet" count_only:true " with result "range_response_count:0 size:5" took too long (658.204556ms) to execute
2020-08-27 11:47:51.151283 W | etcdserver: read-only range request "key:"/registry/csinodes" range_end:"/registry/csinodet" count_only:true " with result "range_response_count:0 size:7" took too long (257.927481ms) to execute
2020-08-27 11:48:02.985956 W | etcdserver: read-only range request "key:"/registry/namespaces/default" " with result "range_response_count:1 size:257" took too long (181.534925ms) to execute
2020-08-27 11:48:03.491914 W | etcdserver: read-only range request "key:"/registry/services/endpoints/default/kubernetes" " with result "range_response_count:1 size:289" took too long (151.167247ms) to execute
2020-08-27 11:48:05.653655 W | etcdserver: read-only range request "key:"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath" " with result "range_response_count:1 size:601" took too long (150.006917ms) to execute
2020-08-27 11:49:26.183633 I | mvcc: store.index: compact 1127
2020-08-27 11:49:26.316117 I | mvcc: finished scheduled compaction at 1127 (took 122.540936ms)
2020-08-27 11:51:05.609141 W | etcdserver: read-only range request "key:"/registry/configmaps/kube-system/ingress-controller-leader-nginx" " with result "range_response_count:1 size:607" took too long (179.258509ms) to execute
2020-08-27 11:51:05.622966 W | etcdserver: read-only range request "key:"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath" " with result "range_response_count:1 size:601" took too long (235.725718ms) to execute
2020-08-27 11:54:26.209605 I | mvcc: store.index: compact 1386
2020-08-27 11:54:26.255309 I | mvcc: finished scheduled compaction at 1386 (took 43.437395ms)
2020-08-27 11:57:00.527296 W | etcdserver: read-only range request "key:"/registry/configmaps/kube-system/ingress-controller-leader-nginx" " with result "range_response_count:1 size:608" took too long (132.656867ms) to execute
2020-08-27 11:59:26.233002 I | mvcc: store.index: compact 1640
2020-08-27 11:59:26.243873 I | mvcc: finished scheduled compaction at 1640 (took 9.952603ms)
2020-08-27 12:03:46.344285 W | etcdserver: request "header:<ID:2379998676296435815 username:"kube-apiserver-etcd-client" auth_revision:1 > txn:<compare:<target:MOD key:"/registry/masterleases/192.168.64.22" mod_revision:2093 > success:<request_put:<key:"/registry/masterleases/192.168.64.22" value_size:69 lease:2379998676296435812 >> failure:<request_range:<key:"/registry/masterleases/192.168.64.22" > >>" with result "size:16" took too long (204.190196ms) to execute
2020-08-27 12:03:59.403561 W | etcdserver: read-only range request "key:"/registry/jobs/" range_end:"/registry/jobs0" limit:500 " with result "range_response_count:2 size:6682" took too long (166.846459ms) to execute
2020-08-27 12:04:26.250153 I | mvcc: store.index: compact 1889
2020-08-27 12:04:26.257101 I | mvcc: finished scheduled compaction at 1889 (took 4.089457ms)
2020-08-27 12:05:20.632825 W | etcdserver: read-only range request "key:"/registry/cronjobs/" range_end:"/registry/cronjobs0" limit:500 " with result "range_response_count:0 size:5" took too long (140.202056ms) to execute
2020-08-27 12:07:04.221535 W | etcdserver: request "header:<ID:2379998676296436522 username:"kube-apiserver-etcd-client" auth_revision:1 > txn:<compare:<target:MOD key:"/registry/configmaps/kube-system/ingress-controller-leader-nginx" mod_revision:2258 > success:<request_put:<key:"/registry/configmaps/kube-system/ingress-controller-leader-nginx" value_size:520 >> failure:<request_range:<key:"/registry/configmaps/kube-system/ingress-controller-leader-nginx" > >>" with result "size:16" took too long (133.040155ms) to execute
2020-08-27 12:07:04.222211 W | etcdserver: read-only range request "key:"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath" " with result "range_response_count:1 size:601" took too long (137.298569ms) to execute
2020-08-27 12:09:26.269204 I | mvcc: store.index: compact 2136
2020-08-27 12:09:26.272922 I | mvcc: finished scheduled compaction at 2136 (took 1.893708ms)
2020-08-27 12:14:26.279536 I | mvcc: store.index: compact 2383
2020-08-27 12:14:26.282185 I | mvcc: finished scheduled compaction at 2383 (took 1.329493ms)
2020-08-27 12:19:26.314850 I | mvcc: store.index: compact 2632
2020-08-27 12:19:26.319438 I | mvcc: finished scheduled compaction at 2632 (took 3.919448ms)
2020-08-27 12:23:01.551872 W | etcdserver: request "header:<ID:2379998676296439956 username:"kube-apiserver-etcd-client" auth_revision:1 > txn:<compare:<target:MOD key:"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath" mod_revision:3059 > success:<request_put:<key:"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath" value_size:512 >> failure:<request_range:<key:"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath" > >>" with result "size:16" took too long (120.797453ms) to execute
2020-08-27 12:24:26.341570 I | mvcc: store.index: compact 2882
2020-08-27 12:24:26.346295 I | mvcc: finished scheduled compaction at 2882 (took 3.380924ms)

==> kernel <==
12:27:11 up 54 min, 1 user, load average: 1.15, 1.15, 1.34
Linux minikube 4.19.114 #1 SMP Mon Aug 3 12:35:22 PDT 2020 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2019.02.11"

==> kube-apiserver [a55123539ca6] <==
Trace[1373192999]: [11.615263837s] [11.615249413s] About to write a response
I0827 11:37:20.255848 1 trace.go:116] Trace[204724013]: "Get" url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/minikube,user-agent:kubelet/v1.18.3 (linux/amd64) kubernetes/2e7996e,client:192.168.64.22 (started: 2020-08-27 11:37:17.943949957 +0000 UTC m=+175.860217618) (total time: 2.311838881s):
Trace[204724013]: [2.311406707s] [2.311386662s] About to write a response
I0827 11:37:20.262019 1 trace.go:116] Trace[194582957]: "Get" url:/api/v1/nodes/minikube,user-agent:kubelet/v1.18.3 (linux/amd64) kubernetes/2e7996e,client:192.168.64.22 (started: 2020-08-27 11:37:17.953518439 +0000 UTC m=+175.869786101) (total time: 2.308433117s):
Trace[194582957]: [2.306712395s] [2.306682644s] About to write a response
I0827 11:37:21.073681 1 trace.go:116] Trace[1325857045]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (started: 2020-08-27 11:37:20.279128281 +0000 UTC m=+178.195396014) (total time: 794.507588ms):
Trace[1325857045]: [171.546522ms] [171.546522ms] initial value restored
Trace[1325857045]: [431.224511ms] [259.677989ms] Transaction prepared
Trace[1325857045]: [794.449087ms] [363.224576ms] Transaction committed
I0827 11:37:22.586219 1 trace.go:116] Trace[1453318406]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (started: 2020-08-27 11:37:21.388447026 +0000 UTC m=+179.304714812) (total time: 1.197730186s):
Trace[1453318406]: [374.312745ms] [374.312745ms] initial value restored
Trace[1453318406]: [756.912843ms] [382.600098ms] Transaction prepared
Trace[1453318406]: [1.197673328s] [440.760485ms] Transaction committed
I0827 11:37:25.053583 1 trace.go:116] Trace[1581804083]: "Get" url:/api/v1/nodes/minikube,user-agent:nginx-ingress-controller/v0.0.0 (linux/amd64) kubernetes/$Format,client:172.17.0.3 (started: 2020-08-27 11:37:24.468750553 +0000 UTC m=+182.385018210) (total time: 584.775173ms):
Trace[1581804083]: [530.363866ms] [530.345627ms] About to write a response
I0827 11:37:25.057656 1 trace.go:116] Trace[1718158147]: "GuaranteedUpdate etcd3" type:*core.ConfigMap (started: 2020-08-27 11:37:24.307692696 +0000 UTC m=+182.223960364) (total time: 749.889806ms):
Trace[1718158147]: [749.838556ms] [749.022448ms] Transaction committed
I0827 11:37:25.057834 1 trace.go:116] Trace[2137418671]: "Update" url:/api/v1/namespaces/kube-system/configmaps/ingress-controller-leader-nginx,user-agent:nginx-ingress-controller/v0.0.0 (linux/amd64) kubernetes/$Format,client:172.17.0.3 (started: 2020-08-27 11:37:24.30741639 +0000 UTC m=+182.223684057) (total time: 750.386428ms):
Trace[2137418671]: [750.274085ms] [750.063093ms] Object stored in database
I0827 11:38:16.512367 1 controller.go:606] quota admission added evaluator for: ingresses.networking.k8s.io
I0827 11:38:52.099059 1 trace.go:116] Trace[88586375]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (started: 2020-08-27 11:38:51.365755779 +0000 UTC m=+269.282023448) (total time: 733.216578ms):
Trace[88586375]: [485.144991ms] [485.144991ms] initial value restored
Trace[88586375]: [708.24622ms] [223.101229ms] Transaction prepared
I0827 11:39:30.999141 1 trace.go:116] Trace[1843908960]: "Patch" url:/api/v1/namespaces/default/pods/mysql55-deployment-dfbffbb6c-hdqwj/status,user-agent:kubelet/v1.18.3 (linux/amd64) kubernetes/2e7996e,client:192.168.64.22 (started: 2020-08-27 11:39:30.332508303 +0000 UTC m=+308.248775998) (total time: 532.729872ms):
Trace[1843908960]: [532.213182ms] [442.632713ms] Object stored in database
I0827 11:39:31.142263 1 trace.go:116] Trace[143186968]: "GuaranteedUpdate etcd3" type:*core.Event (started: 2020-08-27 11:39:30.447377711 +0000 UTC m=+308.363645373) (total time: 694.837563ms):
Trace[143186968]: [411.876856ms] [411.876856ms] initial value restored
Trace[143186968]: [694.760838ms] [277.809375ms] Transaction committed
I0827 11:39:31.142615 1 trace.go:116] Trace[395845127]: "Patch" url:/api/v1/namespaces/default/events/mysql55-deployment-dfbffbb6c-hdqwj.162f1d04ad28e285,user-agent:kubelet/v1.18.3 (linux/amd64) kubernetes/2e7996e,client:192.168.64.22 (started: 2020-08-27 11:39:30.446678918 +0000 UTC m=+308.362946634) (total time: 695.835443ms):
Trace[395845127]: [412.58012ms] [412.517901ms] About to apply patch
Trace[395845127]: [695.644435ms] [278.199056ms] Object stored in database
I0827 11:41:32.363240 1 trace.go:116] Trace[1046849248]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (started: 2020-08-27 11:41:31.778693766 +0000 UTC m=+429.694961449) (total time: 584.375147ms):
Trace[1046849248]: [584.340152ms] [572.260299ms] Transaction committed
I0827 11:41:36.494320 1 trace.go:116] Trace[470218239]: "Get" url:/api/v1/namespaces/kube-system/configmaps/ingress-controller-leader-nginx,user-agent:nginx-ingress-controller/v0.0.0 (linux/amd64) kubernetes/$Format,client:172.17.0.3 (started: 2020-08-27 11:41:35.900451499 +0000 UTC m=+433.816719152) (total time: 593.791222ms):
Trace[470218239]: [592.467542ms] [592.4558ms] About to write a response
I0827 11:45:35.133791 1 trace.go:116] Trace[1089855878]: "Get" url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,client:192.168.64.22 (started: 2020-08-27 11:45:34.090629664 +0000 UTC m=+672.006897325) (total time: 1.043039323s):
Trace[1089855878]: [1.04292331s] [1.042907951s] About to write a response
I0827 11:47:46.312408 1 trace.go:116] Trace[1259271562]: "List etcd3" key:/jobs,resourceVersion:,limit:500,continue: (started: 2020-08-27 11:47:45.682786103 +0000 UTC m=+803.599053744) (total time: 629.548671ms):
Trace[1259271562]: [629.548671ms] [629.548671ms] END
I0827 11:47:46.313287 1 trace.go:116] Trace[418173131]: "List" url:/apis/batch/v1/jobs,user-agent:kube-controller-manager/v1.18.3 (linux/amd64) kubernetes/2e7996e/system:serviceaccount:kube-system:cronjob-controller,client:192.168.64.22 (started: 2020-08-27 11:47:45.682730645 +0000 UTC m=+803.598998306) (total time: 630.514335ms):
Trace[418173131]: [629.721774ms] [629.673713ms] Listing from storage done
I0827 11:48:03.207657 1 trace.go:116] Trace[101022174]: "Get" url:/api/v1/namespaces/default,user-agent:kube-apiserver/v1.18.3 (linux/amd64) kubernetes/2e7996e,client:127.0.0.1 (started: 2020-08-27 11:48:02.555885679 +0000 UTC m=+820.472153327) (total time: 651.680403ms):
Trace[101022174]: [651.618294ms] [651.609165ms] About to write a response
I0827 11:51:05.804278 1 trace.go:116] Trace[457388145]: "Get" url:/api/v1/namespaces/kube-system/configmaps/ingress-controller-leader-nginx,user-agent:nginx-ingress-controller/v0.0.0 (linux/amd64) kubernetes/$Format,client:172.17.0.3 (started: 2020-08-27 11:51:04.877769574 +0000 UTC m=+1002.794037230) (total time: 771.711884ms):
Trace[457388145]: [771.515946ms] [770.931634ms] About to write a response
I0827 11:51:05.805096 1 trace.go:116] Trace[1623623664]: "Get" url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,client:192.168.64.22 (started: 2020-08-27 11:51:04.73675316 +0000 UTC m=+1002.653020843) (total time: 915.722557ms):
Trace[1623623664]: [915.63303ms] [915.620387ms] About to write a response
W0827 11:59:45.036742 1 watcher.go:199] watch chan error: etcdserver: mvcc: required revision has been compacted
I0827 12:03:45.741437 1 trace.go:116] Trace[151364689]: "Get" url:/api/v1/namespaces/default/services/kubernetes,user-agent:kube-apiserver/v1.18.3 (linux/amd64) kubernetes/2e7996e,client:127.0.0.1 (started: 2020-08-27 12:03:43.435432842 +0000 UTC m=+1761.351700559) (total time: 2.305812451s):
Trace[151364689]: [2.305718746s] [2.30570561s] About to write a response
I0827 12:03:45.744062 1 trace.go:116] Trace[971445307]: "Get" url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,client:192.168.64.22 (started: 2020-08-27 12:03:43.724478211 +0000 UTC m=+1761.640745908) (total time: 2.019514339s):
Trace[971445307]: [2.019423832s] [2.019412872s] About to write a response
I0827 12:03:46.345842 1 trace.go:116] Trace[54012332]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (started: 2020-08-27 12:03:45.742337446 +0000 UTC m=+1763.658605156) (total time: 603.427886ms):
Trace[54012332]: [364.411909ms] [353.697635ms] Transaction prepared
Trace[54012332]: [603.395093ms] [238.983184ms] Transaction committed
I0827 12:05:20.698739 1 trace.go:116] Trace[538117653]: "GuaranteedUpdate etcd3" type:*coordination.Lease (started: 2020-08-27 12:05:20.18302539 +0000 UTC m=+1858.099293055) (total time: 515.544524ms):
Trace[538117653]: [515.478911ms] [404.303363ms] Transaction committed
I0827 12:05:20.699367 1 trace.go:116] Trace[315100938]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/minikube,user-agent:kubelet/v1.18.3 (linux/amd64) kubernetes/2e7996e,client:192.168.64.22 (started: 2020-08-27 12:05:20.182771852 +0000 UTC m=+1858.099039526) (total time: 516.193853ms):
Trace[315100938]: [516.055491ms] [515.867275ms] Object stored in database
W0827 12:11:39.620077 1 watcher.go:199] watch chan error: etcdserver: mvcc: required revision has been compacted

==> kube-controller-manager [650fec4f7fec] <==
W0827 11:34:44.886260 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist
I0827 11:34:44.892009 1 shared_informer.go:230] Caches are synced for bootstrap_signer
I0827 11:34:44.916268 1 shared_informer.go:230] Caches are synced for namespace
I0827 11:34:44.935988 1 shared_informer.go:230] Caches are synced for PV protection
I0827 11:34:44.939345 1 shared_informer.go:230] Caches are synced for certificate-csrapproving
I0827 11:34:44.954316 1 shared_informer.go:230] Caches are synced for certificate-csrsigning
I0827 11:34:44.960492 1 shared_informer.go:230] Caches are synced for TTL
I0827 11:34:44.963390 1 shared_informer.go:230] Caches are synced for service account
I0827 11:34:44.972607 1 shared_informer.go:230] Caches are synced for expand
I0827 11:34:45.094386 1 shared_informer.go:230] Caches are synced for GC
I0827 11:34:45.098755 1 shared_informer.go:230] Caches are synced for HPA
I0827 11:34:45.110161 1 shared_informer.go:230] Caches are synced for ReplicaSet
I0827 11:34:45.137789 1 shared_informer.go:230] Caches are synced for attach detach
I0827 11:34:45.137789 1 shared_informer.go:230] Caches are synced for daemon sets
I0827 11:34:45.146937 1 shared_informer.go:230] Caches are synced for endpoint
I0827 11:34:45.147045 1 shared_informer.go:230] Caches are synced for deployment
I0827 11:34:45.172427 1 shared_informer.go:230] Caches are synced for job
I0827 11:34:45.174040 1 shared_informer.go:230] Caches are synced for disruption
I0827 11:34:45.174093 1 disruption.go:339] Sending events to api server.
I0827 11:34:45.174769 1 shared_informer.go:230] Caches are synced for ReplicationController
I0827 11:34:45.193086 1 shared_informer.go:230] Caches are synced for PVC protection
I0827 11:34:45.195522 1 shared_informer.go:230] Caches are synced for endpoint_slice
I0827 11:34:45.195601 1 shared_informer.go:230] Caches are synced for persistent volume
I0827 11:34:45.257637 1 shared_informer.go:230] Caches are synced for stateful set
I0827 11:34:45.331347 1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"7e87812a-c84b-4a6c-a975-15f046bc0292", APIVersion:"apps/v1", ResourceVersion:"255", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-66bff467f8 to 1
I0827 11:34:45.360773 1 shared_informer.go:230] Caches are synced for ClusterRoleAggregator
I0827 11:34:45.436021 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"fc53d43c-96de-429b-8110-7a3bdd596b70", APIVersion:"apps/v1", ResourceVersion:"312", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-rhq45
I0827 11:34:45.436117 1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"4d140db1-5751-40d0-867c-cb2ee58abd2a", APIVersion:"apps/v1", ResourceVersion:"225", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-96zpv
I0827 11:34:45.445121 1 shared_informer.go:230] Caches are synced for resource quota
I0827 11:34:45.463381 1 shared_informer.go:230] Caches are synced for resource quota
I0827 11:34:45.477320 1 shared_informer.go:230] Caches are synced for garbage collector
I0827 11:34:45.477387 1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0827 11:34:45.490334 1 shared_informer.go:230] Caches are synced for taint
I0827 11:34:45.490467 1 node_lifecycle_controller.go:1433] Initializing eviction metric for zone:
W0827 11:34:45.490595 1 node_lifecycle_controller.go:1048] Missing timestamp for Node minikube. Assuming now as a timestamp.
I0827 11:34:45.490645 1 node_lifecycle_controller.go:1199] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
I0827 11:34:45.491149 1 taint_manager.go:187] Starting NoExecuteTaintManager
I0827 11:34:45.491336 1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"8f8199b6-c9ba-49e8-90d6-2b63f4fa7363", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node minikube event: Registered Node minikube in Controller
E0827 11:34:45.506472 1 clusterroleaggregation_controller.go:181] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
E0827 11:34:45.522793 1 daemon_controller.go:292] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"4d140db1-5751-40d0-867c-cb2ee58abd2a", ResourceVersion:"225", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63734124879, loc:(*time.Location)(0x6d09200)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc0013fe6e0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0013fe700)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc0013fe720), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), 
Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc000fac700), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0013fe740), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), 
AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0013fe760), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.18.3", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), 
Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc0013fe7a0)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc000b87ea0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000eea698), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0008ba1c0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, 
HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc0000b3a68)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc000eea6e8)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
I0827 11:34:45.535365 1 shared_informer.go:230] Caches are synced for garbage collector
E0827 11:34:45.574169 1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
I0827 11:34:51.318684 1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"ingress-nginx-controller", UID:"15ad3aef-705e-4bf4-8362-d8c666e7f1d0", APIVersion:"apps/v1", ResourceVersion:"383", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-69ccf5d9d8 to 1
I0827 11:34:51.336013 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"ingress-nginx-controller-69ccf5d9d8", UID:"9fa5e8bf-ed42-4c81-b4f8-f6f4c2ecd9d9", APIVersion:"apps/v1", ResourceVersion:"384", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-69ccf5d9d8-67t5c
I0827 11:34:51.536495 1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"kube-system", Name:"ingress-nginx-admission-create", UID:"c52d4f8e-fa35-4a7a-ab02-3b8a917e489c", APIVersion:"batch/v1", ResourceVersion:"396", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-w9drk
I0827 11:34:51.824813 1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"kube-system", Name:"ingress-nginx-admission-patch", UID:"39c29d16-4cec-4280-9fae-0d9538a27da6", APIVersion:"batch/v1", ResourceVersion:"402", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-lrd4b
I0827 11:35:00.492949 1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode.
I0827 11:35:25.913845 1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"kube-system", Name:"ingress-nginx-admission-create", UID:"c52d4f8e-fa35-4a7a-ab02-3b8a917e489c", APIVersion:"batch/v1", ResourceVersion:"401", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
I0827 11:35:38.750151 1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"kube-system", Name:"ingress-nginx-admission-patch", UID:"39c29d16-4cec-4280-9fae-0d9538a27da6", APIVersion:"batch/v1", ResourceVersion:"408", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
I0827 11:36:33.196076 1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"registry-creds", UID:"148fd32a-8b81-4d22-90df-45224f03cb0b", APIVersion:"apps/v1", ResourceVersion:"586", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set registry-creds-85f59c657 to 1
I0827 11:36:33.245567 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"registry-creds-85f59c657", UID:"a7cda87a-4f42-450f-be7c-f41f7d225681", APIVersion:"apps/v1", ResourceVersion:"587", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: registry-creds-85f59c657-7h96n
I0827 11:36:33.542689 1 event.go:278] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"live-repo-volume-claim", UID:"e7b9908d-c465-4239-81be-8871384980a9", APIVersion:"v1", ResourceVersion:"600", FieldPath:""}): type: 'Normal' reason: 'ExternalProvisioning' waiting for a volume to be created, either by external provisioner "k8s.io/minikube-hostpath" or manually created by system administrator
I0827 11:36:33.873608 1 event.go:278] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"persist-storage-claim", UID:"ecdada70-86b7-448b-9412-709f04ccdf4e", APIVersion:"v1", ResourceVersion:"609", FieldPath:""}): type: 'Normal' reason: 'ExternalProvisioning' waiting for a volume to be created, either by external provisioner "k8s.io/minikube-hostpath" or manually created by system administrator
E0827 11:37:00.798025 1 cronjob_controller.go:125] Failed to extract job list: etcdserver: request timed out
I0827 11:38:15.633735 1 event.go:278] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"mysql-volume-claim10", UID:"614ee6e6-d8e7-4cc1-b7f8-f7952d6b82f3", APIVersion:"v1", ResourceVersion:"696", FieldPath:""}): type: 'Normal' reason: 'ExternalProvisioning' waiting for a volume to be created, either by external provisioner "k8s.io/minikube-hostpath" or manually created by system administrator
I0827 11:38:15.812622 1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"mysql55-deployment", UID:"f87b6a9a-0204-426c-b581-efd74ccfa10f", APIVersion:"apps/v1", ResourceVersion:"700", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set mysql55-deployment-dfbffbb6c to 1
I0827 11:38:15.942617 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"mysql55-deployment-dfbffbb6c", UID:"f2621423-5f3d-4201-a356-b585a52c866c", APIVersion:"apps/v1", ResourceVersion:"702", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: mysql55-deployment-dfbffbb6c-hdqwj
I0827 11:41:46.923865 1 event.go:278] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"mongo-volume-claim", UID:"e21c19cb-5863-458d-94a1-ebb77dccbc34", APIVersion:"v1", ResourceVersion:"961", FieldPath:""}): type: 'Normal' reason: 'ExternalProvisioning' waiting for a volume to be created, either by external provisioner "k8s.io/minikube-hostpath" or manually created by system administrator
I0827 11:41:47.370106 1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"mongo40-deployment", UID:"ec69f573-7683-416e-b155-9d4a978a1210", APIVersion:"apps/v1", ResourceVersion:"966", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set mongo40-deployment-76d6587bc to 1
I0827 11:41:47.427270 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"mongo40-deployment-76d6587bc", UID:"25a2db7b-f954-490d-b428-9d65d6fb353e", APIVersion:"apps/v1", ResourceVersion:"970", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: mongo40-deployment-76d6587bc-q44nw

==> kube-proxy [1e064a2113b2] <==
W0827 11:34:49.098413 1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
I0827 11:34:49.122897 1 node.go:136] Successfully retrieved node IP: 192.168.64.22
I0827 11:34:49.123076 1 server_others.go:186] Using iptables Proxier.
W0827 11:34:49.123115 1 server_others.go:436] detect-local-mode set to ClusterCIDR, but no cluster CIDR defined
I0827 11:34:49.123138 1 server_others.go:447] detect-local-mode: ClusterCIDR , defaulting to no-op detect-local
I0827 11:34:49.128692 1 server.go:583] Version: v1.18.3
I0827 11:34:49.132146 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
I0827 11:34:49.132216 1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0827 11:34:49.134849 1 conntrack.go:83] Setting conntrack hashsize to 32768
I0827 11:34:49.143428 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0827 11:34:49.143543 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0827 11:34:49.144823 1 config.go:315] Starting service config controller
I0827 11:34:49.144933 1 shared_informer.go:223] Waiting for caches to sync for service config
I0827 11:34:49.145091 1 config.go:133] Starting endpoints config controller
I0827 11:34:49.145186 1 shared_informer.go:223] Waiting for caches to sync for endpoints config
I0827 11:34:49.245547 1 shared_informer.go:230] Caches are synced for endpoints config
I0827 11:34:49.245805 1 shared_informer.go:230] Caches are synced for service config

==> kube-scheduler [65911a94591b] <==
I0827 11:34:22.492520 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
I0827 11:34:22.492994 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
I0827 11:34:23.996160 1 serving.go:313] Generated self-signed cert in-memory
W0827 11:34:35.142438 1 authentication.go:349] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0827 11:34:35.142511 1 authentication.go:297] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0827 11:34:35.142535 1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous.
W0827 11:34:35.142547 1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0827 11:34:35.344164 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
I0827 11:34:35.344264 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
W0827 11:34:35.346866 1 authorization.go:47] Authorization is disabled
W0827 11:34:35.346989 1 authentication.go:40] Authentication is disabled
I0827 11:34:35.347031 1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
I0827 11:34:35.350485 1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
I0827 11:34:35.351748 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0827 11:34:35.353911 1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0827 11:34:35.351777 1 tlsconfig.go:240] Starting DynamicServingCertificateController
E0827 11:34:35.399546 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0827 11:34:35.401141 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0827 11:34:35.401848 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0827 11:34:35.402232 1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0827 11:34:35.402537 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0827 11:34:35.403131 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0827 11:34:35.406282 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0827 11:34:35.411984 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0827 11:34:35.412723 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0827 11:34:36.239120 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0827 11:34:36.284497 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0827 11:34:36.348390 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0827 11:34:36.954436 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
I0827 11:34:39.657385 1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
E0827 11:34:43.209898 1 factory.go:503] pod: kube-system/storage-provisioner is already present in the active queue
E0827 11:34:45.563305 1 factory.go:503] pod: kube-system/coredns-66bff467f8-rhq45 is already present in the active queue
E0827 11:34:51.840019 1 factory.go:503] pod: kube-system/ingress-nginx-admission-create-w9drk is already present in unschedulable queue
E0827 11:38:15.924219 1 scheduler.go:599] error selecting node for pod: running "VolumeBinding" filter plugin for pod "mysql55-deployment-dfbffbb6c-hdqwj": pod has unbound immediate PersistentVolumeClaims
E0827 11:38:15.924780 1 factory.go:478] Error scheduling default/mysql55-deployment-dfbffbb6c-hdqwj: running "VolumeBinding" filter plugin for pod "mysql55-deployment-dfbffbb6c-hdqwj": pod has unbound immediate PersistentVolumeClaims; retrying

==> kubelet <==
-- Logs begin at Thu 2020-08-27 11:32:50 UTC, end at Thu 2020-08-27 12:27:14 UTC. --
Aug 27 11:36:34 minikube kubelet[3685]: W0827 11:36:34.812918 3685 pod_container_deletor.go:77] Container "464020776a36ad20068c3b47c7f8de7040d87aedda6313e7e85b4cfde2a37dd3" not found in pod's containers
Aug 27 11:36:34 minikube kubelet[3685]: W0827 11:36:34.813657 3685 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/registry-creds-85f59c657-7h96n through plugin: invalid network status for
Aug 27 11:36:35 minikube kubelet[3685]: W0827 11:36:35.849218 3685 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/registry-creds-85f59c657-7h96n through plugin: invalid network status for
Aug 27 11:36:54 minikube kubelet[3685]: E0827 11:36:54.702776 3685 controller.go:178] failed to update node lease, error: etcdserver: request timed out
Aug 27 11:36:56 minikube kubelet[3685]: E0827 11:36:56.272344 3685 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-apiserver-minikube.162f1cefdb05e658", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-minikube", UID:"888781d47be13edcb8a2dd656eb13b2f", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-apiserver}"}, Reason:"Unhealthy", Message:"Liveness probe failed: HTTP probe failed with statuscode: 500", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbfca04944e22fc58, ext:129591202696, loc:(*time.Location)(0x701d4a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbfca04944e22fc58, ext:129591202696, loc:(*time.Location)(0x701d4a0)}}, Count:1, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'etcdserver: request timed out' (will not retry!)
Aug 27 11:36:57 minikube kubelet[3685]: E0827 11:36:57.322442 3685 kubelet_node_status.go:402] Error updating node status, will retry: failed to patch status "{"status":{"$setElementOrder/conditions":[{"type":"MemoryPressure"},{"type":"DiskPressure"},{"type":"PIDPressure"},{"type":"Ready"}],"conditions":[{"lastHeartbeatTime":"2020-08-27T11:36:49Z","type":"MemoryPressure"},{"lastHeartbeatTime":"2020-08-27T11:36:49Z","type":"DiskPressure"},{"lastHeartbeatTime":"2020-08-27T11:36:49Z","type":"PIDPressure"},{"lastHeartbeatTime":"2020-08-27T11:36:49Z","type":"Ready"}],"images":[{"names":["quay.io/kubernetes-ingress-controller/nginx-ingress-controller@sha256:fc650620719e460df04043512ec4af146b7d9da163616960e58aceeaf4ea5ba1","quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.33.0"],"sizeBytes":327377834},{"names":["k8s.gcr.io/etcd@sha256:4afb99b4690b418ffc2ceb67e1a17376457e441c1f09ab55447f0aaf992fa646","k8s.gcr.io/etcd:3.4.3-0"],"sizeBytes":288426917},{"names":["kubernetesui/dashboard@sha256:a705c04e83badb4fdb2b95eb6b126f3c2759677b2f953742f3b08a1fada07d9d","kubernetesui/dashboard:v2.0.1"],"sizeBytes":222771101},{"names":["k8s.gcr.io/kube-apiserver@sha256:e1c8ce568634f79f76b6e8168c929511ad841ea7692271caf6fd3779c3545c2d","k8s.gcr.io/kube-apiserver:v1.18.3"],"sizeBytes":172997403},{"names":["k8s.gcr.io/kube-controller-manager@sha256:d62a4f41625e1631a2683cbdf1c9c9bd27f0b9c5d8d8202990236fc0d5ef1703","k8s.gcr.io/kube-controller-manager:v1.18.3"],"sizeBytes":162388763},{"names":["k8s.gcr.io/kube-proxy@sha256:6a093c22e305039b7bd6c3f8eab8f202ad8238066ed210857b25524443aa8aff","k8s.gcr.io/kube-proxy:v1.18.3"],"sizeBytes":117090625},{"names":["k8s.gcr.io/kube-scheduler@sha256:5381cd9680bf5fb16a5c8ac60141eaab242c1c4960f1c32a21807efcca3e765b","k8s.gcr.io/kube-scheduler:v1.18.3"],"sizeBytes":95279899},{"names":["jettech/kube-webhook-certgen@sha256:da8122a78d7387909cf34a0f34db0cce672da1379ee4fd57c626a4afe9ac12b7","jettech/kube-webhook-certgen:v1.2.2"],"sizeBytes":49003629},{"
names":["k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800","k8s.gcr.io/coredns:1.6.7"],"sizeBytes":43794147},{"names":["kubernetesui/metrics-scraper@sha256:555981a24f184420f3be0c79d4efb6c948a85cfce84034f85a563f4151a81cbf","kubernetesui/metrics-scraper:v1.0.4"],"sizeBytes":36937728},{"names":["gcr.io/k8s-minikube/storage-provisioner@sha256:a7b2848b673e6a0927a16f30445d8f1b66f1504a35def0efa35e3dcac56b713e","gcr.io/k8s-minikube/storage-provisioner:v2"],"sizeBytes":32219136},{"names":["k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f","k8s.gcr.io/pause:3.2"],"sizeBytes":682696}]}}" for node "minikube": etcdserver: request timed out
Aug 27 11:37:01 minikube kubelet[3685]: E0827 11:37:01.866110 3685 controller.go:178] failed to update node lease, error: etcdserver: request timed out
Aug 27 11:37:01 minikube kubelet[3685]: E0827 11:37:01.883669 3685 event.go:269] Unable to write event: 'Post https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events: read tcp 192.168.64.22:43268->192.168.64.22:8443: use of closed network connection' (may retry after sleeping)
Aug 27 11:37:07 minikube kubelet[3685]: E0827 11:37:07.350296 3685 kubelet_node_status.go:402] Error updating node status, will retry: error getting node "minikube": Get https://control-plane.minikube.internal:8443/api/v1/nodes/minikube?timeout=10s: context deadline exceeded
Aug 27 11:37:07 minikube kubelet[3685]: E0827 11:37:07.368173 3685 controller.go:178] failed to update node lease, error: Put https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/minikube?timeout=10s: read tcp 192.168.64.22:43926->192.168.64.22:8443: use of closed network connection
Aug 27 11:37:07 minikube kubelet[3685]: E0827 11:37:07.370464 3685 event.go:269] Unable to write event: 'Post https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events: read tcp 192.168.64.22:43926->192.168.64.22:8443: use of closed network connection' (may retry after sleeping)
Aug 27 11:37:15 minikube kubelet[3685]: E0827 11:37:15.201405 3685 controller.go:178] failed to update node lease, error: etcdserver: request timed out
Aug 27 11:37:17 minikube kubelet[3685]: E0827 11:37:17.407254 3685 kubelet_node_status.go:402] Error updating node status, will retry: error getting node "minikube": Get https://control-plane.minikube.internal:8443/api/v1/nodes/minikube?timeout=10s: context deadline exceeded
Aug 27 11:37:17 minikube kubelet[3685]: E0827 11:37:17.470898 3685 controller.go:178] failed to update node lease, error: Put https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/minikube?timeout=10s: read tcp 192.168.64.22:44046->192.168.64.22:8443: use of closed network connection
Aug 27 11:37:17 minikube kubelet[3685]: W0827 11:37:17.476044 3685 reflector.go:404] k8s.io/kubernetes/pkg/kubelet/kubelet.go:526: watch of *v1.Node ended with: very short watch: k8s.io/kubernetes/pkg/kubelet/kubelet.go:526: Unexpected watch close - watch lasted less than a second and no items received
Aug 27 11:37:17 minikube kubelet[3685]: W0827 11:37:17.476271 3685 reflector.go:404] object-"kube-system"/"registry-creds-dpr": watch of *v1.Secret ended with: very short watch: object-"kube-system"/"registry-creds-dpr": Unexpected watch close - watch lasted less than a second and no items received
Aug 27 11:37:17 minikube kubelet[3685]: W0827 11:37:17.476423 3685 reflector.go:404] k8s.io/kubernetes/pkg/kubelet/kubelet.go:517: watch of *v1.Service ended with: very short watch: k8s.io/kubernetes/pkg/kubelet/kubelet.go:517: Unexpected watch close - watch lasted less than a second and no items received
Aug 27 11:37:17 minikube kubelet[3685]: E0827 11:37:17.476552 3685 event.go:269] Unable to write event: 'Post https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events: read tcp 192.168.64.22:44098->192.168.64.22:8443: use of closed network connection' (may retry after sleeping)
Aug 27 11:37:17 minikube kubelet[3685]: I0827 11:37:17.476589 3685 controller.go:106] failed to update lease using latest lease, fallback to ensure lease, err: failed 5 attempts to update node lease
Aug 27 11:37:25 minikube kubelet[3685]: I0827 11:37:25.645829 3685 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 974ea58ab5f0bd2ed91956014b8559acd54aac63a37c1a67efd049361112aa97
Aug 27 11:38:00 minikube kubelet[3685]: W0827 11:38:00.141441 3685 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/registry-creds-85f59c657-7h96n through plugin: invalid network status for
Aug 27 11:38:16 minikube kubelet[3685]: I0827 11:38:16.003137 3685 topology_manager.go:233] [topologymanager] Topology Admit Handler
Aug 27 11:38:16 minikube kubelet[3685]: I0827 11:38:16.317016 3685 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "pvc-614ee6e6-d8e7-4cc1-b7f8-f7952d6b82f3" (UniqueName: "kubernetes.io/host-path/3e7c1ebb-904b-4669-8ea7-4ee36b4fdde7-pvc-614ee6e6-d8e7-4cc1-b7f8-f7952d6b82f3") pod "mysql55-deployment-dfbffbb6c-hdqwj" (UID: "3e7c1ebb-904b-4669-8ea7-4ee36b4fdde7")
Aug 27 11:38:16 minikube kubelet[3685]: I0827 11:38:16.317364 3685 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-hs99t" (UniqueName: "kubernetes.io/secret/3e7c1ebb-904b-4669-8ea7-4ee36b4fdde7-default-token-hs99t") pod "mysql55-deployment-dfbffbb6c-hdqwj" (UID: "3e7c1ebb-904b-4669-8ea7-4ee36b4fdde7")
Aug 27 11:38:16 minikube kubelet[3685]: W0827 11:38:16.673134 3685 kubelet_pods.go:858] Unable to retrieve pull secret default/awsecr-cred for default/mysql55-deployment-dfbffbb6c-hdqwj due to secret "awsecr-cred" not found. The image pull may not succeed.
Aug 27 11:38:17 minikube kubelet[3685]: W0827 11:38:17.602893 3685 pod_container_deletor.go:77] Container "f2b5e4bb86d6d9d18808677564c70b6bd0658baf616916d7ed267c3382aaf4a1" not found in pod's containers
Aug 27 11:38:17 minikube kubelet[3685]: W0827 11:38:17.605671 3685 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/mysql55-deployment-dfbffbb6c-hdqwj through plugin: invalid network status for
Aug 27 11:38:17 minikube kubelet[3685]: E0827 11:38:17.814696 3685 remote_image.go:113] PullImage "############.dkr.ecr.ap-southeast-1.amazonaws.com/mysql55:0.0.20" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get https://############.dkr.ecr.ap-southeast-1.amazonaws.com/v2/mysql55/manifests/0.0.20: no basic auth credentials
Aug 27 11:38:17 minikube kubelet[3685]: E0827 11:38:17.814787 3685 kuberuntime_image.go:50] Pull image "############.dkr.ecr.ap-southeast-1.amazonaws.com/mysql55:0.0.20" failed: rpc error: code = Unknown desc = Error response from daemon: Get https://############.dkr.ecr.ap-southeast-1.amazonaws.com/v2/mysql55/manifests/0.0.20: no basic auth credentials
Aug 27 11:38:17 minikube kubelet[3685]: E0827 11:38:17.814869 3685 kuberuntime_manager.go:801] container start failed: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get https://############.dkr.ecr.ap-southeast-1.amazonaws.com/v2/mysql55/manifests/0.0.20: no basic auth credentials
Aug 27 11:38:17 minikube kubelet[3685]: E0827 11:38:17.814921 3685 pod_workers.go:191] Error syncing pod 3e7c1ebb-904b-4669-8ea7-4ee36b4fdde7 ("mysql55-deployment-dfbffbb6c-hdqwj_default(3e7c1ebb-904b-4669-8ea7-4ee36b4fdde7)"), skipping: failed to "StartContainer" for "mysql55" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get https://############.dkr.ecr.ap-southeast-1.amazonaws.com/v2/mysql55/manifests/0.0.20: no basic auth credentials"
Aug 27 11:38:18 minikube kubelet[3685]: W0827 11:38:18.623591 3685 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/mysql55-deployment-dfbffbb6c-hdqwj through plugin: invalid network status for
Aug 27 11:38:18 minikube kubelet[3685]: W0827 11:38:18.635574 3685 kubelet_pods.go:858] Unable to retrieve pull secret default/awsecr-cred for default/mysql55-deployment-dfbffbb6c-hdqwj due to secret "awsecr-cred" not found. The image pull may not succeed.
Aug 27 11:38:18 minikube kubelet[3685]: E0827 11:38:18.662089 3685 pod_workers.go:191] Error syncing pod 3e7c1ebb-904b-4669-8ea7-4ee36b4fdde7 ("mysql55-deployment-dfbffbb6c-hdqwj_default(3e7c1ebb-904b-4669-8ea7-4ee36b4fdde7)"), skipping: failed to "StartContainer" for "mysql55" with ImagePullBackOff: "Back-off pulling image "############.dkr.ecr.ap-southeast-1.amazonaws.com/mysql55:0.0.20""
Aug 27 11:38:19 minikube kubelet[3685]: W0827 11:38:19.667916 3685 kubelet_pods.go:858] Unable to retrieve pull secret default/awsecr-cred for default/mysql55-deployment-dfbffbb6c-hdqwj due to secret "awsecr-cred" not found. The image pull may not succeed.
Aug 27 11:38:19 minikube kubelet[3685]: E0827 11:38:19.691999 3685 pod_workers.go:191] Error syncing pod 3e7c1ebb-904b-4669-8ea7-4ee36b4fdde7 ("mysql55-deployment-dfbffbb6c-hdqwj_default(3e7c1ebb-904b-4669-8ea7-4ee36b4fdde7)"), skipping: failed to "StartContainer" for "mysql55" with ImagePullBackOff: "Back-off pulling image "############.dkr.ecr.ap-southeast-1.amazonaws.com/mysql55:0.0.20""
Aug 27 11:38:32 minikube kubelet[3685]: W0827 11:38:32.284314 3685 kubelet_pods.go:858] Unable to retrieve pull secret default/awsecr-cred for default/mysql55-deployment-dfbffbb6c-hdqwj due to secret "awsecr-cred" not found. The image pull may not succeed.
Aug 27 11:38:32 minikube kubelet[3685]: E0827 11:38:32.533364 3685 remote_image.go:113] PullImage "############.dkr.ecr.ap-southeast-1.amazonaws.com/mysql55:0.0.20" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get https://############.dkr.ecr.ap-southeast-1.amazonaws.com/v2/mysql55/manifests/0.0.20: no basic auth credentials
Aug 27 11:38:32 minikube kubelet[3685]: E0827 11:38:32.533421 3685 kuberuntime_image.go:50] Pull image "############.dkr.ecr.ap-southeast-1.amazonaws.com/mysql55:0.0.20" failed: rpc error: code = Unknown desc = Error response from daemon: Get https://############.dkr.ecr.ap-southeast-1.amazonaws.com/v2/mysql55/manifests/0.0.20: no basic auth credentials
Aug 27 11:38:32 minikube kubelet[3685]: E0827 11:38:32.533501 3685 kuberuntime_manager.go:801] container start failed: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get https://############.dkr.ecr.ap-southeast-1.amazonaws.com/v2/mysql55/manifests/0.0.20: no basic auth credentials
Aug 27 11:38:32 minikube kubelet[3685]: E0827 11:38:32.534380 3685 pod_workers.go:191] Error syncing pod 3e7c1ebb-904b-4669-8ea7-4ee36b4fdde7 ("mysql55-deployment-dfbffbb6c-hdqwj_default(3e7c1ebb-904b-4669-8ea7-4ee36b4fdde7)"), skipping: failed to "StartContainer" for "mysql55" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get https://############.dkr.ecr.ap-southeast-1.amazonaws.com/v2/mysql55/manifests/0.0.20: no basic auth credentials"
Aug 27 11:38:47 minikube kubelet[3685]: W0827 11:38:47.284539 3685 kubelet_pods.go:858] Unable to retrieve pull secret default/awsecr-cred for default/mysql55-deployment-dfbffbb6c-hdqwj due to secret "awsecr-cred" not found. The image pull may not succeed.
Aug 27 11:38:47 minikube kubelet[3685]: E0827 11:38:47.287677 3685 pod_workers.go:191] Error syncing pod 3e7c1ebb-904b-4669-8ea7-4ee36b4fdde7 ("mysql55-deployment-dfbffbb6c-hdqwj_default(3e7c1ebb-904b-4669-8ea7-4ee36b4fdde7)"), skipping: failed to "StartContainer" for "mysql55" with ImagePullBackOff: "Back-off pulling image "############.dkr.ecr.ap-southeast-1.amazonaws.com/mysql55:0.0.20""
Aug 27 11:39:01 minikube kubelet[3685]: W0827 11:39:01.291521 3685 kubelet_pods.go:858] Unable to retrieve pull secret default/awsecr-cred for default/mysql55-deployment-dfbffbb6c-hdqwj due to secret "awsecr-cred" not found. The image pull may not succeed.
Aug 27 11:39:01 minikube kubelet[3685]: E0827 11:39:01.554821 3685 remote_image.go:113] PullImage "############.dkr.ecr.ap-southeast-1.amazonaws.com/mysql55:0.0.20" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get https://############.dkr.ecr.ap-southeast-1.amazonaws.com/v2/mysql55/manifests/0.0.20: no basic auth credentials
Aug 27 11:39:01 minikube kubelet[3685]: E0827 11:39:01.555144 3685 kuberuntime_image.go:50] Pull image "############.dkr.ecr.ap-southeast-1.amazonaws.com/mysql55:0.0.20" failed: rpc error: code = Unknown desc = Error response from daemon: Get https://############.dkr.ecr.ap-southeast-1.amazonaws.com/v2/mysql55/manifests/0.0.20: no basic auth credentials
Aug 27 11:39:01 minikube kubelet[3685]: E0827 11:39:01.555339 3685 kuberuntime_manager.go:801] container start failed: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get https://############.dkr.ecr.ap-southeast-1.amazonaws.com/v2/mysql55/manifests/0.0.20: no basic auth credentials
Aug 27 11:39:01 minikube kubelet[3685]: E0827 11:39:01.555455 3685 pod_workers.go:191] Error syncing pod 3e7c1ebb-904b-4669-8ea7-4ee36b4fdde7 ("mysql55-deployment-dfbffbb6c-hdqwj_default(3e7c1ebb-904b-4669-8ea7-4ee36b4fdde7)"), skipping: failed to "StartContainer" for "mysql55" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get https://############.dkr.ecr.ap-southeast-1.amazonaws.com/v2/mysql55/manifests/0.0.20: no basic auth credentials"
Aug 27 11:39:16 minikube kubelet[3685]: E0827 11:39:16.364867 3685 pod_workers.go:191] Error syncing pod 3e7c1ebb-904b-4669-8ea7-4ee36b4fdde7 ("mysql55-deployment-dfbffbb6c-hdqwj_default(3e7c1ebb-904b-4669-8ea7-4ee36b4fdde7)"), skipping: failed to "StartContainer" for "mysql55" with ImagePullBackOff: "Back-off pulling image "############.dkr.ecr.ap-southeast-1.amazonaws.com/mysql55:0.0.20""
Aug 27 11:39:30 minikube kubelet[3685]: E0827 11:39:30.333137 3685 pod_workers.go:191] Error syncing pod 3e7c1ebb-904b-4669-8ea7-4ee36b4fdde7 ("mysql55-deployment-dfbffbb6c-hdqwj_default(3e7c1ebb-904b-4669-8ea7-4ee36b4fdde7)"), skipping: failed to "StartContainer" for "mysql55" with ImagePullBackOff: "Back-off pulling image "############.dkr.ecr.ap-southeast-1.amazonaws.com/mysql55:0.0.20""
Aug 27 11:41:27 minikube kubelet[3685]: W0827 11:41:27.228843 3685 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/mysql55-deployment-dfbffbb6c-hdqwj through plugin: invalid network status for
Aug 27 11:41:47 minikube kubelet[3685]: I0827 11:41:47.470194 3685 topology_manager.go:233] [topologymanager] Topology Admit Handler
Aug 27 11:41:47 minikube kubelet[3685]: E0827 11:41:47.483590 3685 reflector.go:178] object-"default"/"mongo-secret": Failed to list *v1.Secret: secrets "mongo-secret" is forbidden: User "system:node:minikube" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node "minikube" and this object
Aug 27 11:41:47 minikube kubelet[3685]: I0827 11:41:47.924907 3685 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "pvc-e21c19cb-5863-458d-94a1-ebb77dccbc34" (UniqueName: "kubernetes.io/host-path/6344ae67-3da4-4acc-9628-cbe295277929-pvc-e21c19cb-5863-458d-94a1-ebb77dccbc34") pod "mongo40-deployment-76d6587bc-q44nw" (UID: "6344ae67-3da4-4acc-9628-cbe295277929")
Aug 27 11:41:47 minikube kubelet[3685]: I0827 11:41:47.925076 3685 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-hs99t" (UniqueName: "kubernetes.io/secret/6344ae67-3da4-4acc-9628-cbe295277929-default-token-hs99t") pod "mongo40-deployment-76d6587bc-q44nw" (UID: "6344ae67-3da4-4acc-9628-cbe295277929")
Aug 27 11:41:50 minikube kubelet[3685]: W0827 11:41:50.048825 3685 pod_container_deletor.go:77] Container "c39d585d188e6b5e2a73db71afd6c8670afaa380ea78f91926267731a3022a3e" not found in pod's containers
Aug 27 11:41:50 minikube kubelet[3685]: W0827 11:41:50.057428 3685 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/mongo40-deployment-76d6587bc-q44nw through plugin: invalid network status for
Aug 27 11:41:51 minikube kubelet[3685]: W0827 11:41:51.453304 3685 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/mongo40-deployment-76d6587bc-q44nw through plugin: invalid network status for
Aug 27 11:47:51 minikube kubelet[3685]: W0827 11:47:51.652877 3685 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/mongo40-deployment-76d6587bc-q44nw through plugin: invalid network status for
Aug 27 11:47:52 minikube kubelet[3685]: W0827 11:47:52.828774 3685 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/mongo40-deployment-76d6587bc-q44nw through plugin: invalid network status for

==> storage-provisioner [974ea58ab5f0] <==
I0827 11:35:02.617250 1 leaderelection.go:242] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0827 11:35:02.663406 1 leaderelection.go:252] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0827 11:35:02.666073 1 controller.go:799] Starting provisioner controller k8s.io/minikube-hostpath_minikube_91b214af-75c1-42a7-b988-3d350fb64c3a!
I0827 11:35:02.666180 1 event.go:281] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"acffddd5-8fdf-4d62-9a84-38be874642a5", APIVersion:"v1", ResourceVersion:"456", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' minikube_91b214af-75c1-42a7-b988-3d350fb64c3a became leader
I0827 11:35:02.776658 1 controller.go:848] Started provisioner controller k8s.io/minikube-hostpath_minikube_91b214af-75c1-42a7-b988-3d350fb64c3a!
I0827 11:36:33.543985 1 controller.go:1284] provision "default/live-repo-volume-claim" class "standard": started
I0827 11:36:33.560446 1 controller.go:1392] provision "default/live-repo-volume-claim" class "standard": volume "pvc-e7b9908d-c465-4239-81be-8871384980a9" provisioned
I0827 11:36:33.560481 1 controller.go:1409] provision "default/live-repo-volume-claim" class "standard": succeeded
I0827 11:36:33.560489 1 volume_store.go:212] Trying to save persistentvolume "pvc-e7b9908d-c465-4239-81be-8871384980a9"
I0827 11:36:33.564026 1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"live-repo-volume-claim", UID:"e7b9908d-c465-4239-81be-8871384980a9", APIVersion:"v1", ResourceVersion:"600", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/live-repo-volume-claim"
I0827 11:36:33.788574 1 volume_store.go:219] persistentvolume "pvc-e7b9908d-c465-4239-81be-8871384980a9" saved
I0827 11:36:33.795603 1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"live-repo-volume-claim", UID:"e7b9908d-c465-4239-81be-8871384980a9", APIVersion:"v1", ResourceVersion:"600", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-e7b9908d-c465-4239-81be-8871384980a9
I0827 11:36:33.877288 1 controller.go:1284] provision "default/persist-storage-claim" class "standard": started
I0827 11:36:33.887979 1 controller.go:1392] provision "default/persist-storage-claim" class "standard": volume "pvc-ecdada70-86b7-448b-9412-709f04ccdf4e" provisioned
I0827 11:36:33.888045 1 controller.go:1409] provision "default/persist-storage-claim" class "standard": succeeded
I0827 11:36:33.888055 1 volume_store.go:212] Trying to save persistentvolume "pvc-ecdada70-86b7-448b-9412-709f04ccdf4e"
I0827 11:36:33.888610 1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"persist-storage-claim", UID:"ecdada70-86b7-448b-9412-709f04ccdf4e", APIVersion:"v1", ResourceVersion:"609", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/persist-storage-claim"
I0827 11:36:33.907985 1 volume_store.go:219] persistentvolume "pvc-ecdada70-86b7-448b-9412-709f04ccdf4e" saved
I0827 11:36:33.909372 1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"persist-storage-claim", UID:"ecdada70-86b7-448b-9412-709f04ccdf4e", APIVersion:"v1", ResourceVersion:"609", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-ecdada70-86b7-448b-9412-709f04ccdf4e
E0827 11:36:53.850993 1 leaderelection.go:331] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: etcdserver: request timed out
I0827 11:36:56.649801 1 leaderelection.go:288] failed to renew lease kube-system/k8s.io-minikube-hostpath: failed to tryAcquireOrRenew context deadline exceeded
F0827 11:36:56.663761 1 controller.go:877] leaderelection lost
I0827 11:36:56.654906 1 event.go:281] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"", Name:"", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' minikube_91b214af-75c1-42a7-b988-3d350fb64c3a stopped leading

==> storage-provisioner [d571a86fccfe] <==
I0827 11:37:28.229016 1 leaderelection.go:242] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0827 11:37:45.684701 1 leaderelection.go:252] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0827 11:37:45.691363 1 controller.go:799] Starting provisioner controller k8s.io/minikube-hostpath_minikube_08bdd504-4a0a-4915-9606-703ef57b468c!
I0827 11:37:45.692039 1 event.go:281] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"acffddd5-8fdf-4d62-9a84-38be874642a5", APIVersion:"v1", ResourceVersion:"662", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' minikube_08bdd504-4a0a-4915-9606-703ef57b468c became leader
I0827 11:37:46.030071 1 controller.go:848] Started provisioner controller k8s.io/minikube-hostpath_minikube_08bdd504-4a0a-4915-9606-703ef57b468c!
I0827 11:38:15.622601 1 controller.go:1284] provision "default/mysql-volume-claim10" class "standard": started
I0827 11:38:15.660295 1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"mysql-volume-claim10", UID:"614ee6e6-d8e7-4cc1-b7f8-f7952d6b82f3", APIVersion:"v1", ResourceVersion:"696", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/mysql-volume-claim10"
I0827 11:38:15.682665 1 controller.go:1392] provision "default/mysql-volume-claim10" class "standard": volume "pvc-614ee6e6-d8e7-4cc1-b7f8-f7952d6b82f3" provisioned
I0827 11:38:15.682754 1 controller.go:1409] provision "default/mysql-volume-claim10" class "standard": succeeded
I0827 11:38:15.682770 1 volume_store.go:212] Trying to save persistentvolume "pvc-614ee6e6-d8e7-4cc1-b7f8-f7952d6b82f3"
I0827 11:38:15.788787 1 volume_store.go:219] persistentvolume "pvc-614ee6e6-d8e7-4cc1-b7f8-f7952d6b82f3" saved
I0827 11:38:15.791501 1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"mysql-volume-claim10", UID:"614ee6e6-d8e7-4cc1-b7f8-f7952d6b82f3", APIVersion:"v1", ResourceVersion:"696", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-614ee6e6-d8e7-4cc1-b7f8-f7952d6b82f3
I0827 11:41:46.913080 1 controller.go:1284] provision "default/mongo-volume-claim" class "standard": started
I0827 11:41:46.972450 1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"mongo-volume-claim", UID:"e21c19cb-5863-458d-94a1-ebb77dccbc34", APIVersion:"v1", ResourceVersion:"961", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/mongo-volume-claim"
I0827 11:41:46.977622 1 controller.go:1392] provision "default/mongo-volume-claim" class "standard": volume "pvc-e21c19cb-5863-458d-94a1-ebb77dccbc34" provisioned
I0827 11:41:46.977710 1 controller.go:1409] provision "default/mongo-volume-claim" class "standard": succeeded
I0827 11:41:46.977728 1 volume_store.go:212] Trying to save persistentvolume "pvc-e21c19cb-5863-458d-94a1-ebb77dccbc34"
I0827 11:41:47.069186 1 volume_store.go:219] persistentvolume "pvc-e21c19cb-5863-458d-94a1-ebb77dccbc34" saved
I0827 11:41:47.074643 1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"mongo-volume-claim", UID:"e21c19cb-5863-458d-94a1-ebb77dccbc34", APIVersion:"v1", ResourceVersion:"961", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-e21c19cb-5863-458d-94a1-ebb77dccbc34

(Screenshot: macOS Activity Monitor, 2020-08-27 7:20:53 PM, showing hyperkit CPU usage)

@tstromberg (Contributor) commented Aug 27, 2020

NOTE: Activity Monitor measures CPU usage per core, so Kubernetes is taking up half a core.

This is expected: Kubernetes has high idle CPU usage. Use `minikube pause` to pause Kubernetes, which should bring hyperkit's usage back down to the ~2% range. We do actively track CPU usage across drivers:

https://docs.google.com/spreadsheets/d/1PYmR4lcEOtV1HOKrDezhRkBGXQMxMtHGu1WBLJvexDA/edit#gid=1614668143

Clearly, we'd like to improve this - but in many ways, it's out of our hands.
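The pause workaround above can be sketched as a short shell session. This is only an illustration of the workflow: `minikube pause` and `minikube unpause` are real minikube subcommands, while the `ps`/`grep` check is just one hypothetical way to confirm the drop in CPU usage on a macOS host.

```shell
# Pause the Kubernetes control plane and workloads inside the minikube VM.
# Cluster state is preserved; nothing is deleted.
minikube pause

# Optionally check hyperkit's CPU usage from the macOS host
# (illustrative only; column formatting varies by ps implementation):
ps -Ao %cpu,comm | grep -i hyperkit

# Resume the cluster when you need it again.
minikube unpause
```

Pausing is cheaper than `minikube stop` when you expect to come back soon, since unpausing does not require a full VM boot.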

@supersexy

Why was this closed? When K8s is used in every data center in the world, it will be extremely important to make it as energy efficient as possible. I hope we do not need to discuss that this is an absolute basic requirement for the survival of humans in general, not only for data center software. Do you really understand that climate change is a very serious issue?

It is absurd to roll out software at scale that makes data centers less energy efficient; it should be a crime.

Please make it a top priority, your children will thank you for that.

Right now you have a chance to really do something useful.
