Unable to build image with compile steps #6182

Closed
robbdimitrov opened this issue Dec 30, 2019 · 2 comments
robbdimitrov commented Dec 30, 2019

The issue occurs when building an Angular app image within the minikube Docker environment.
The same build succeeds in a normal local Docker shell, but inside minikube the ng build step, which produces the final artifacts
and places them in a new dist folder (created by the Angular CLI), does nothing: it hangs for a bit and then finishes with neither errors nor logs.

The exact command to reproduce the issue:

This is the Dockerfile for the image:

FROM node:13.5-alpine as builder
RUN mkdir -p /app
WORKDIR /app
COPY package*.json ./
RUN npm install --no-optional
COPY . .
RUN npm run build

FROM nginx:1.17-alpine
COPY nginx/default.conf /etc/nginx/conf.d/
RUN rm -rf /usr/share/nginx/html/*
COPY --from=builder /app/dist /usr/share/nginx/html
CMD ["nginx", "-g", "daemon off;"]

Using the command docker build -t frontend src/frontend
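For context, building "within the minikube docker env" presumably means the Docker CLI was first pointed at the daemon running inside the minikube VM. The exact shell setup was not shown in the report; a minimal sketch of the usual way to do it:

```shell
# Point the local docker CLI at the Docker daemon inside the minikube VM;
# subsequent docker builds then run against minikube's daemon, not the host's.
eval $(minikube docker-env)

# Same build command as above, now executed in the minikube docker env:
docker build -t frontend src/frontend

# Undo, pointing the CLI back at the local daemon:
eval $(minikube docker-env --unset)
```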

The full output of the command that failed:

Sending build context to Docker daemon  638.5kB
Step 1/12 : FROM node:13.5-alpine as builder
 ---> e1495e4ac50d
Step 2/12 : RUN mkdir -p /app
 ---> Running in 178e3524113f
Removing intermediate container 178e3524113f
 ---> f7b4045cea28
Step 3/12 : WORKDIR /app
 ---> Running in ea8a2cafa611
Removing intermediate container ea8a2cafa611
 ---> 860049ec4dea
Step 4/12 : COPY package*.json ./
 ---> e27004123598
Step 5/12 : RUN npm install --no-optional
 ---> Running in 7dd900bf1039

> [email protected] postinstall /app/node_modules/babel-runtime/node_modules/core-js
> node -e "try{require('./postinstall')}catch(e){}"

Thank you for using core-js ( https://github.com/zloirock/core-js ) for polyfilling JavaScript standard library!

The project needs your help! Please consider supporting of core-js on Open Collective or Patreon: 
> https://opencollective.com/core-js 
> https://www.patreon.com/zloirock 

Also, the author of core-js ( https://github.com/zloirock ) is looking for a good job -)


> [email protected] postinstall /app/node_modules/core-js
> node scripts/postinstall || echo "ignore"

Thank you for using core-js ( https://github.com/zloirock/core-js ) for polyfilling JavaScript standard library!

The project needs your help! Please consider supporting of core-js on Open Collective or Patreon: 
> https://opencollective.com/core-js 
> https://www.patreon.com/zloirock 

Also, the author of core-js ( https://github.com/zloirock ) is looking for a good job -)


> @angular/[email protected] postinstall /app/node_modules/@angular/cli
> node ./bin/postinstall/script.js

npm WARN optional SKIPPING OPTIONAL DEPENDENCY: [email protected] (node_modules/webpack-dev-server/node_modules/fsevents):
npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for [email protected]: wanted {"os":"darwin","arch":"any"} (current: {"os":"linux","arch":"x64"})
npm WARN optional SKIPPING OPTIONAL DEPENDENCY: [email protected] (node_modules/watchpack/node_modules/fsevents):
npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for [email protected]: wanted {"os":"darwin","arch":"any"} (current: {"os":"linux","arch":"x64"})
npm WARN optional SKIPPING OPTIONAL DEPENDENCY: [email protected] (node_modules/@angular/compiler-cli/node_modules/fsevents):
npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for [email protected]: wanted {"os":"darwin","arch":"any"} (current: {"os":"linux","arch":"x64"})
npm WARN optional SKIPPING OPTIONAL DEPENDENCY: [email protected] (node_modules/fsevents):
npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for [email protected]: wanted {"os":"darwin","arch":"any"} (current: {"os":"linux","arch":"x64"})

added 1043 packages from 566 contributors and audited 15798 packages in 105.512s

20 packages are looking for funding
  run `npm fund` for details

found 0 vulnerabilities

Removing intermediate container 7dd900bf1039
 ---> 63e54f767c97
Step 6/12 : COPY . .
 ---> fddaa9fb1880
Step 7/12 : RUN npm run build
 ---> Running in 317f4d271153

> pixelgram@ build /app
> ng build --prod

Removing intermediate container 317f4d271153
 ---> 2118b0fde55f
Step 8/12 : FROM nginx:1.17-alpine
 ---> a624d888d69f
Step 9/12 : COPY nginx/default.conf /etc/nginx/conf.d/
 ---> Using cache
 ---> 95ae92c3f9a8
Step 10/12 : RUN rm -rf /usr/share/nginx/html/*
 ---> Using cache
 ---> b04ef7eaf780
Step 11/12 : COPY --from=builder /app/dist /usr/share/nginx/html
COPY failed: stat /var/lib/docker/overlay2/e5975e513aa34fbbc0bb42a551b564dde38ec0e11b9c0e6497b2761ebfa79fc8/merged/app/dist: no such file or directory
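The COPY failed: stat ... /app/dist: no such file or directory error only says that the builder stage finished without ever producing /app/dist; the stat check itself can be illustrated in plain shell, independent of Docker (a minimal sketch):

```shell
# Simulate the check COPY --from=builder performs: stat the expected path.
workdir=$(mktemp -d)        # stand-in for the builder stage's filesystem
mkdir -p "$workdir/app"     # /app exists after the npm install and COPY steps
# ...but since ng build wrote nothing, there is no dist folder to copy:
if [ -d "$workdir/app/dist" ]; then
    echo "dist present - COPY would succeed"
else
    echo "stat $workdir/app/dist: no such file or directory"
fi
```

So the COPY error is a downstream symptom; the real question is why the RUN npm run build layer committed without producing output.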

The output of the minikube logs command:

==> Docker <==
-- Logs begin at Mon 2019-12-30 15:43:49 UTC, end at Mon 2019-12-30 16:47:28 UTC. --
Dec 30 16:28:37 minikube dockerd[1878]: time="2019-12-30T16:28:37.659811060Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/905596cba44adabe0ec71fb06d2ed4fdb5b1765165c12169538666e466f71b2a/shim.sock" debug=false pid=17402
Dec 30 16:28:37 minikube dockerd[1878]: time="2019-12-30T16:28:37.664634690Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/f575bea287ef19cdab8203e812057b3eb8d0ea685832c049ddcd6161cb8de9ce/shim.sock" debug=false pid=17403
Dec 30 16:28:37 minikube dockerd[1878]: time="2019-12-30T16:28:37.790030810Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/50203625f4e6891b4cdd734d6628947843fe837e97f6f1d758bdb020186a7d51/shim.sock" debug=false pid=17442
Dec 30 16:30:39 minikube dockerd[1878]: time="2019-12-30T16:30:39.697769778Z" level=info msg="shim reaped" id=a73038522be62fb71e55643111dc63fefef3da88e020753ab7e6206ce9c5c809
Dec 30 16:30:39 minikube dockerd[1878]: time="2019-12-30T16:30:39.732964202Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Dec 30 16:30:40 minikube dockerd[1878]: time="2019-12-30T16:30:40.320631869Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/0e08a0c0919355d2a79dac6306f4bc5cf2d6da9a2d8c77b9ff04e98c8eac8b00/shim.sock" debug=false pid=17621
Dec 30 16:30:40 minikube dockerd[1878]: time="2019-12-30T16:30:40.515811813Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/9ff1aa9885ddef323e005a9292b59bcd15a0d59673a08b8ff2199651cee2b7c3/shim.sock" debug=false pid=17637
Dec 30 16:32:04 minikube dockerd[1878]: time="2019-12-30T16:32:04.812922276Z" level=info msg="shim reaped" id=50203625f4e6891b4cdd734d6628947843fe837e97f6f1d758bdb020186a7d51
Dec 30 16:32:04 minikube dockerd[1878]: time="2019-12-30T16:32:04.826560214Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Dec 30 16:32:04 minikube dockerd[1878]: time="2019-12-30T16:32:04.827167760Z" level=warning msg="50203625f4e6891b4cdd734d6628947843fe837e97f6f1d758bdb020186a7d51 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/50203625f4e6891b4cdd734d6628947843fe837e97f6f1d758bdb020186a7d51/mounts/shm, flags: 0x2: no such file or directory"
Dec 30 16:32:05 minikube dockerd[1878]: time="2019-12-30T16:32:05.709075653Z" level=info msg="shim reaped" id=9ff1aa9885ddef323e005a9292b59bcd15a0d59673a08b8ff2199651cee2b7c3
Dec 30 16:32:05 minikube dockerd[1878]: time="2019-12-30T16:32:05.716991976Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Dec 30 16:32:05 minikube dockerd[1878]: time="2019-12-30T16:32:05.717172062Z" level=warning msg="9ff1aa9885ddef323e005a9292b59bcd15a0d59673a08b8ff2199651cee2b7c3 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/9ff1aa9885ddef323e005a9292b59bcd15a0d59673a08b8ff2199651cee2b7c3/mounts/shm, flags: 0x2: no such file or directory"
Dec 30 16:32:05 minikube dockerd[1878]: time="2019-12-30T16:32:05.745239966Z" level=info msg="shim reaped" id=f575bea287ef19cdab8203e812057b3eb8d0ea685832c049ddcd6161cb8de9ce
Dec 30 16:32:05 minikube dockerd[1878]: time="2019-12-30T16:32:05.756161780Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Dec 30 16:32:05 minikube dockerd[1878]: time="2019-12-30T16:32:05.756251611Z" level=warning msg="f575bea287ef19cdab8203e812057b3eb8d0ea685832c049ddcd6161cb8de9ce cleanup: failed to unmount IPC: umount /var/lib/docker/containers/f575bea287ef19cdab8203e812057b3eb8d0ea685832c049ddcd6161cb8de9ce/mounts/shm, flags: 0x2: no such file or directory"
Dec 30 16:32:19 minikube dockerd[1878]: time="2019-12-30T16:32:19.883851462Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/d192e5aed4efc3d730290760dd0a75525c8d189530a9582c363ed157b96c6eaa/shim.sock" debug=false pid=18051
Dec 30 16:33:11 minikube dockerd[1878]: time="2019-12-30T16:33:11.920738290Z" level=info msg="shim reaped" id=0e08a0c0919355d2a79dac6306f4bc5cf2d6da9a2d8c77b9ff04e98c8eac8b00
Dec 30 16:33:11 minikube dockerd[1878]: time="2019-12-30T16:33:11.930679293Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Dec 30 16:33:12 minikube dockerd[1878]: time="2019-12-30T16:33:12.389752824Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/edd0833a88f60e5dc61ac3eb1bc7d4833a00605da38fdca5cfe490f603be99d6/shim.sock" debug=false pid=18143
Dec 30 16:33:12 minikube dockerd[1878]: time="2019-12-30T16:33:12.502043300Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/b7aa43a7532333d1f171b027cf46b5289b035b8b72b2ce1e2ac391f87173bfec/shim.sock" debug=false pid=18166
Dec 30 16:33:16 minikube dockerd[1878]: time="2019-12-30T16:33:16.673507667Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/cf47ad4ceb2de15abf5389e5e7839c848da0d43c16036951f7aa6a41c0381821/shim.sock" debug=false pid=18305
Dec 30 16:34:30 minikube dockerd[1878]: time="2019-12-30T16:34:30.200760439Z" level=info msg="shim reaped" id=cf47ad4ceb2de15abf5389e5e7839c848da0d43c16036951f7aa6a41c0381821
Dec 30 16:34:30 minikube dockerd[1878]: time="2019-12-30T16:34:30.211136044Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Dec 30 16:35:07 minikube dockerd[1878]: time="2019-12-30T16:35:07.793108193Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/a4ab7faf0327a64be916becff2fd4dfb92d1b4df50b2b283d7820c22b0aac2ba/shim.sock" debug=false pid=19113
Dec 30 16:36:44 minikube dockerd[1878]: time="2019-12-30T16:36:44.215073821Z" level=info msg="shim reaped" id=b7aa43a7532333d1f171b027cf46b5289b035b8b72b2ce1e2ac391f87173bfec
Dec 30 16:36:44 minikube dockerd[1878]: time="2019-12-30T16:36:44.229952522Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Dec 30 16:36:44 minikube dockerd[1878]: time="2019-12-30T16:36:44.235744713Z" level=warning msg="b7aa43a7532333d1f171b027cf46b5289b035b8b72b2ce1e2ac391f87173bfec cleanup: failed to unmount IPC: umount /var/lib/docker/containers/b7aa43a7532333d1f171b027cf46b5289b035b8b72b2ce1e2ac391f87173bfec/mounts/shm, flags: 0x2: no such file or directory"
Dec 30 16:36:44 minikube dockerd[1878]: time="2019-12-30T16:36:44.995252674Z" level=info msg="shim reaped" id=a6854655f8ae9769a52cca1a7a9648a876bd4b9e4c61abea5d546817b904edad
Dec 30 16:36:45 minikube dockerd[1878]: time="2019-12-30T16:36:45.017224159Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Dec 30 16:36:45 minikube dockerd[1878]: time="2019-12-30T16:36:45.019861400Z" level=warning msg="a6854655f8ae9769a52cca1a7a9648a876bd4b9e4c61abea5d546817b904edad cleanup: failed to unmount IPC: umount /var/lib/docker/containers/a6854655f8ae9769a52cca1a7a9648a876bd4b9e4c61abea5d546817b904edad/mounts/shm, flags: 0x2: no such file or directory"
Dec 30 16:36:45 minikube dockerd[1878]: time="2019-12-30T16:36:45.140472132Z" level=info msg="shim reaped" id=905596cba44adabe0ec71fb06d2ed4fdb5b1765165c12169538666e466f71b2a
Dec 30 16:36:45 minikube dockerd[1878]: time="2019-12-30T16:36:45.157505227Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Dec 30 16:36:45 minikube dockerd[1878]: time="2019-12-30T16:36:45.164047381Z" level=warning msg="905596cba44adabe0ec71fb06d2ed4fdb5b1765165c12169538666e466f71b2a cleanup: failed to unmount IPC: umount /var/lib/docker/containers/905596cba44adabe0ec71fb06d2ed4fdb5b1765165c12169538666e466f71b2a/mounts/shm, flags: 0x2: no such file or directory"
Dec 30 16:36:45 minikube dockerd[1878]: time="2019-12-30T16:36:45.360806272Z" level=info msg="shim reaped" id=d192e5aed4efc3d730290760dd0a75525c8d189530a9582c363ed157b96c6eaa
Dec 30 16:36:45 minikube dockerd[1878]: time="2019-12-30T16:36:45.387113896Z" level=info msg="shim reaped" id=a4ab7faf0327a64be916becff2fd4dfb92d1b4df50b2b283d7820c22b0aac2ba
Dec 30 16:36:45 minikube dockerd[1878]: time="2019-12-30T16:36:45.410585775Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Dec 30 16:36:45 minikube dockerd[1878]: time="2019-12-30T16:36:45.411161104Z" level=warning msg="d192e5aed4efc3d730290760dd0a75525c8d189530a9582c363ed157b96c6eaa cleanup: failed to unmount IPC: umount /var/lib/docker/containers/d192e5aed4efc3d730290760dd0a75525c8d189530a9582c363ed157b96c6eaa/mounts/shm, flags: 0x2: no such file or directory"
Dec 30 16:36:45 minikube dockerd[1878]: time="2019-12-30T16:36:45.420274510Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Dec 30 16:36:45 minikube dockerd[1878]: time="2019-12-30T16:36:45.530004665Z" level=info msg="shim reaped" id=edd0833a88f60e5dc61ac3eb1bc7d4833a00605da38fdca5cfe490f603be99d6
Dec 30 16:36:45 minikube dockerd[1878]: time="2019-12-30T16:36:45.533091025Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Dec 30 16:36:45 minikube dockerd[1878]: time="2019-12-30T16:36:45.533366534Z" level=warning msg="edd0833a88f60e5dc61ac3eb1bc7d4833a00605da38fdca5cfe490f603be99d6 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/edd0833a88f60e5dc61ac3eb1bc7d4833a00605da38fdca5cfe490f603be99d6/mounts/shm, flags: 0x2: no such file or directory"
Dec 30 16:36:45 minikube dockerd[1878]: time="2019-12-30T16:36:45.700755524Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/9851b9984047a2fdba1e0728b50569da62ee83746ee638d387b5e0d83d15cd2c/shim.sock" debug=false pid=19676
Dec 30 16:36:49 minikube dockerd[1878]: http: TLS handshake error from 192.168.64.1:58312: tls: first record does not look like a TLS handshake
Dec 30 16:36:49 minikube dockerd[1878]: http: TLS handshake error from 192.168.64.1:58313: tls: first record does not look like a TLS handshake
Dec 30 16:36:49 minikube dockerd[1878]: http: TLS handshake error from 192.168.64.1:58314: tls: oversized record received with length 21536
Dec 30 16:37:04 minikube dockerd[1878]: time="2019-12-30T16:37:04.668250220Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/56a3d5a84b1f3d34af7d79bfd10508b00ad773a9e1fc4600a3ad7a8031df22c8/shim.sock" debug=false pid=19838
Dec 30 16:37:12 minikube dockerd[1878]: time="2019-12-30T16:37:12.665314873Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/161c606c504ca4343b9d8cf3d0dd672ad650ecdc6a9f93842f05ff408cba0be1/shim.sock" debug=false pid=19951
Dec 30 16:37:18 minikube dockerd[1878]: time="2019-12-30T16:37:18.664009023Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/f67f81d405aa9b71ede0d4a64f3a54af75c1e3e6f0c86f527c8e3c9d46854a71/shim.sock" debug=false pid=20036
Dec 30 16:37:29 minikube dockerd[1878]: time="2019-12-30T16:37:29.658150872Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/c24b64d93a7ef960c7f71735d05837e448746fcf3392a73ff1d856d6f92f222c/shim.sock" debug=false pid=20152
Dec 30 16:37:45 minikube dockerd[1878]: http: TLS handshake error from 192.168.64.1:58317: tls: first record does not look like a TLS handshake
Dec 30 16:37:45 minikube dockerd[1878]: http: TLS handshake error from 192.168.64.1:58318: tls: first record does not look like a TLS handshake
Dec 30 16:37:45 minikube dockerd[1878]: http: TLS handshake error from 192.168.64.1:58319: tls: oversized record received with length 21536
Dec 30 16:37:45 minikube dockerd[1878]: http: TLS handshake error from 192.168.64.1:58320: tls: oversized record received with length 21536
Dec 30 16:37:59 minikube dockerd[1878]: http: TLS handshake error from 192.168.64.1:58323: tls: first record does not look like a TLS handshake
Dec 30 16:37:59 minikube dockerd[1878]: http: TLS handshake error from 192.168.64.1:58324: tls: first record does not look like a TLS handshake
Dec 30 16:37:59 minikube dockerd[1878]: http: TLS handshake error from 192.168.64.1:58325: tls: oversized record received with length 21536
Dec 30 16:39:19 minikube dockerd[1878]: time="2019-12-30T16:39:19.489749469Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/e1724ef34b09d790d9bf6bc2a51bda3225beac7fc2e6517c149bbbf94277dccf/shim.sock" debug=false pid=20907
Dec 30 16:47:02 minikube dockerd[1878]: time="2019-12-30T16:47:02.946428962Z" level=info msg="shim reaped" id=e1724ef34b09d790d9bf6bc2a51bda3225beac7fc2e6517c149bbbf94277dccf
Dec 30 16:47:02 minikube dockerd[1878]: time="2019-12-30T16:47:02.957267788Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"

==> container status <==
CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID
c24b64d93a7ef       4689081edb103       9 minutes ago       Running             storage-provisioner         16                  b7ef300efc121
f67f81d405aa9       5eb3b74868724       10 minutes ago      Running             kube-controller-manager     7                   eda29d9fdd20f
161c606c504ca       78c190f736b11       10 minutes ago      Running             kube-scheduler              7                   e9ce7729e0a68
56a3d5a84b1f3       eb51a35975256       10 minutes ago      Running             kubernetes-dashboard        11                  06c3aa32255dd
9851b9984047a       3b08661dc379d       10 minutes ago      Running             dashboard-metrics-scraper   10                  c04ce9fe0790c
b7aa43a753233       4689081edb103       14 minutes ago      Exited              storage-provisioner         15                  b7ef300efc121
edd0833a88f60       5eb3b74868724       14 minutes ago      Exited              kube-controller-manager     6                   eda29d9fdd20f
d192e5aed4efc       78c190f736b11       15 minutes ago      Exited              kube-scheduler              6                   e9ce7729e0a68
905596cba44ad       3b08661dc379d       18 minutes ago      Exited              dashboard-metrics-scraper   9                   c04ce9fe0790c
a6854655f8ae9       eb51a35975256       19 minutes ago      Exited              kubernetes-dashboard        10                  06c3aa32255dd
ea84a75273043       70f311871ae12       23 minutes ago      Running             coredns                     2                   c0c68bf6d068b
ce7b594be55bf       0cae8d5cc64c7       23 minutes ago      Running             kube-apiserver              2                   40b180e4d1a41
e5c02bec108d4       70f311871ae12       23 minutes ago      Running             coredns                     2                   559974a0030f7
916a332e1e977       70f311871ae12       About an hour ago   Exited              coredns                     1                   559974a0030f7
14120f6e36d5b       70f311871ae12       About an hour ago   Exited              coredns                     1                   c0c68bf6d068b
e28b45ac3cac3       7d54289267dc5       About an hour ago   Running             kube-proxy                  1                   a21b1ed7dd371
dabe00f3a0951       0cae8d5cc64c7       About an hour ago   Exited              kube-apiserver              1                   40b180e4d1a41
6179d9eef9071       303ce5db0e90d       About an hour ago   Running             etcd                        1                   d9b9f9458cb85
93997ba105b4b       bd12a212f9dcb       About an hour ago   Running             kube-addon-manager          4                   e1bec3876d957
24214263aecc7       7d54289267dc5       19 hours ago        Exited              kube-proxy                  0                   27cfb15a5eafe
050e2d46acbb9       303ce5db0e90d       19 hours ago        Exited              etcd                        0                   d4a599713a4cc
f648bba41021e       bd12a212f9dcb       19 hours ago        Exited              kube-addon-manager          3                   1b9788de15b17

==> coredns ["14120f6e36d5"] <==
E1230 15:45:21.399238       1 reflector.go:125] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
CoreDNS-1.6.5
linux/amd64, go1.13.4, c2fd1b2
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
Trace[1485269096]: [30.002086691s] [30.002086691s] END
Trace[1680105181]: [30.008544454s] [30.008544454s] END
Trace[1248088961]: [30.009705408s] [30.009705408s] END
E1230 15:45:21.402025       1 reflector.go:125] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E1230 15:45:21.403228       1 reflector.go:125] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E1230 16:24:01.314658       1 reflector.go:283] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98: Failed to watch *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?resourceVersion=153466&timeout=9m43s&timeoutSeconds=583&watch=true: dial tcp 10.96.0.1:443: connect: connection refused
E1230 16:24:01.321893       1 reflector.go:283] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98: Failed to watch *v1.Service: Get https://10.96.0.1:443/api/v1/services?resourceVersion=149278&timeout=5m35s&timeoutSeconds=335&watch=true: dial tcp 10.96.0.1:443: connect: connection refused
E1230 16:24:01.327957       1 reflector.go:283] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98: Failed to watch *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?resourceVersion=149278&timeout=8m32s&timeoutSeconds=512&watch=true: dial tcp 10.96.0.1:443: connect: connection refused
E1230 16:24:02.391979       1 reflector.go:125] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: connection refused
E1230 16:24:02.392675       1 reflector.go:125] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: connection refused
E1230 16:24:02.456614       1 reflector.go:125] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: connection refused
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/health: Going into lameduck mode for 5s

==> coredns ["916a332e1e97"] <==
E1230 15:45:21.401001       1 reflector.go:125] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
CoreDNS-1.6.5
linux/amd64, go1.13.4, c2fd1b2
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
I1230 15:45:21.400553       1 trace.go:82] Trace[365195219]: "Reflector pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98 ListAndWatch" (started: 2019-12-30 15:44:51.393191558 +0000 UTC m=+0.190045834) (total time: 30.005180355s):
Trace[365195219]: [30.005180355s] [30.005180355s] END
I1230 15:45:21.400355       1 trace.go:82] Trace[1344046382]: "Reflector pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98 ListAndWatch" (started: 2019-12-30 15:44:51.399203822 +0000 UTC m=+0.196058093) (total time: 30.001120157s):
Trace[1344046382]: [30.001120157s] [30.001120157s] END
E1230 15:45:21.401382       1 reflector.go:125] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
I1230 15:45:21.401980       1 trace.go:82] Trace[654510664]: "Reflector pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98 ListAndWatch" (started: 2019-12-30 15:44:51.393115468 +0000 UTC m=+0.189969714) (total time: 30.008701457s):
Trace[654510664]: [30.008701457s] [30.008701457s] END
E1230 15:45:21.402821       1 reflector.go:125] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/health: Going into lameduck mode for 5s

==> coredns ["e5c02bec108d"] <==
E1230 16:24:08.942741       1 reflector.go:125] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: connection refused
E1230 16:24:08.942872       1 reflector.go:125] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: connection refused
E1230 16:24:08.942973       1 reflector.go:125] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: connection refused
E1230 16:24:09.944842       1 reflector.go:125] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: connection refused
E1230 16:24:09.945178       1 reflector.go:125] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: connection refused
E1230 16:24:09.946144       1 reflector.go:125] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: connection refused
.:53
[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
CoreDNS-1.6.5
linux/amd64, go1.13.4, c2fd1b2
[INFO] plugin/ready: Still waiting on: "kubernetes"
I1230 16:28:00.375289       1 trace.go:82] Trace[722209099]: "Reflector pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98 ListAndWatch" (started: 2019-12-30 16:24:10.948525197 +0000 UTC m=+2.305603481) (total time: 3m49.422645346s):
Trace[722209099]: [3m49.422645346s] [3m49.422645346s] END
E1230 16:28:00.375336       1 reflector.go:125] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98: Failed to list *v1.Namespace: the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces)
I1230 16:28:00.375500       1 trace.go:82] Trace[2024012484]: "Reflector pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98 ListAndWatch" (started: 2019-12-30 16:24:10.947591273 +0000 UTC m=+2.304669567) (total time: 3m49.427891502s):
Trace[2024012484]: [3m49.427891502s] [3m49.427891502s] END
E1230 16:28:00.375509       1 reflector.go:125] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98: Failed to list *v1.Service: the server was unable to return a response in the time allotted, but may still be processing the request (get services)
I1230 16:28:00.375586       1 trace.go:82] Trace[1446843071]: "Reflector pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98 ListAndWatch" (started: 2019-12-30 16:24:10.94564222 +0000 UTC m=+2.302720523) (total time: 3m49.425687735s):
Trace[1446843071]: [3m49.425687735s] [3m49.425687735s] END
E1230 16:28:00.375600       1 reflector.go:125] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98: Failed to list *v1.Endpoints: the server was unable to return a response in the time allotted, but may still be processing the request (get endpoints)
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"

==> coredns ["ea84a7527304"] <==
E1230 16:24:09.711408       1 reflector.go:125] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: connection refused
E1230 16:24:09.711611       1 reflector.go:125] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: connection refused
E1230 16:24:09.712112       1 reflector.go:125] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: connection refused
[INFO] plugin/ready: Still waiting on: "kubernetes"
.:53
[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
CoreDNS-1.6.5
linux/amd64, go1.13.4, c2fd1b2
I1230 16:28:00.375051       1 trace.go:82] Trace[141140740]: "Reflector pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98 ListAndWatch" (started: 2019-12-30 16:24:10.717903729 +0000 UTC m=+1.078322701) (total time: 3m49.656639154s):
Trace[141140740]: [3m49.656639154s] [3m49.656639154s] END
E1230 16:28:00.375082       1 reflector.go:125] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98: Failed to list *v1.Endpoints: the server was unable to return a response in the time allotted, but may still be processing the request (get endpoints)
I1230 16:28:00.375302       1 trace.go:82] Trace[828707524]: "Reflector pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98 ListAndWatch" (started: 2019-12-30 16:24:10.711561954 +0000 UTC m=+1.071980888) (total time: 3m49.663718718s):
Trace[828707524]: [3m49.663718718s] [3m49.663718718s] END
E1230 16:28:00.375343       1 reflector.go:125] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98: Failed to list *v1.Service: the server was unable to return a response in the time allotted, but may still be processing the request (get services)
I1230 16:28:00.375455       1 trace.go:82] Trace[1616396329]: "Reflector pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98 ListAndWatch" (started: 2019-12-30 16:24:10.712996049 +0000 UTC m=+1.073415036) (total time: 3m49.661984589s):
Trace[1616396329]: [3m49.661984589s] [3m49.661984589s] END
E1230 16:28:00.375469       1 reflector.go:125] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98: Failed to list *v1.Namespace: the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces)
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"

==> dmesg <==
[  +0.000003] lowmem_reserve[]: 0 0 0 0
[  +0.000002] Node 0 DMA: 14*4kB (UME) 15*8kB (UE) 15*16kB (UE) 19*32kB (UE) 7*64kB (UE) 7*128kB (UME) 7*256kB (UME) 2*512kB (UE) 2*1024kB (UE) 0*2048kB 0*4096kB = 7232kB
[  +0.000010] Node 0 DMA32: 55*4kB (UMEH) 64*8kB (UMEH) 194*16kB (UMEH) 21*32kB (UEH) 1*64kB (H) 0*128kB 1*256kB (H) 1*512kB (H) 0*1024kB 0*2048kB 0*4096kB = 5340kB
[  +0.000013] 112638 total pagecache pages
[  +0.000001] 0 pages in swap cache
[  +0.000001] Swap cache stats: add 0, delete 0, find 0/0
[  +0.000001] Free swap  = 0kB
[  +0.000000] Total swap = 0kB
[  +0.000001] 511902 pages RAM
[  +0.000000] 0 pages HighMem/MovableOnly
[  +0.000001] 14515 pages reserved
[  +0.000316] Out of memory: Kill process 17500 (metrics-sidecar) score 1000 or sacrifice child
[  +0.000033] Killed process 17500 (metrics-sidecar) total-vm:245976kB, anon-rss:2756kB, file-rss:0kB, shmem-rss:0kB
[  +0.727879] coredns invoked oom-killer: gfp_mask=0x14200ca(GFP_HIGHUSER_MOVABLE), nodemask=(null), order=0, oom_score_adj=-998
[  +0.000007] CPU: 0 PID: 17074 Comm: coredns Tainted: G           O     4.15.0 #1
[  +0.000000] Hardware name:   BHYVE, BIOS 1.00 03/14/2014
[  +0.000001] Call Trace:
[  +0.000007]  dump_stack+0x5c/0x82
[  +0.000004]  dump_header+0x66/0x281
[  +0.000003]  oom_kill_process+0x223/0x430
[  +0.000001]  out_of_memory+0x28d/0x490
[  +0.000003]  __alloc_pages_slowpath+0x9db/0xd60
[  +0.000003]  __alloc_pages_nodemask+0x21e/0x240
[  +0.000001]  filemap_fault+0x1e7/0x5d0
[  +0.000002]  ? filemap_map_pages+0x10c/0x290
[  +0.000002]  ext4_filemap_fault+0x27/0x36
[  +0.000003]  __do_fault+0x18/0x60
[  +0.000002]  __handle_mm_fault+0x668/0xa70
[  +0.000003]  ? hrtimer_try_to_cancel+0x10/0xe0
[  +0.000002]  handle_mm_fault+0xa5/0x1f0
[  +0.000003]  __do_page_fault+0x235/0x4b0
[  +0.000002]  ? syscall_slow_exit_work+0xba/0xc0
[  +0.000003]  ? page_fault+0x36/0x60
[  +0.000002]  page_fault+0x4c/0x60
[  +0.000002] RIP: 0033:0x42c840
[  +0.000001] RSP: 002b:000000c00006bf48 EFLAGS: 00010246
[  +0.000001] Mem-Info:
[  +0.000004] active_anon:395013 inactive_anon:70582 isolated_anon:0
               active_file:30 inactive_file:56 isolated_file:0
               unevictable:0 dirty:0 writeback:0 unstable:0
               slab_reclaimable:6680 slab_unreclaimable:9819
               mapped:20405 shmem:112306 pagetables:3316 bounce:0
               free:3108 free_pcp:13 free_cma:0
[  +0.000002] Node 0 active_anon:1580052kB inactive_anon:282328kB active_file:120kB inactive_file:224kB unevictable:0kB isolated(anon):0kB isolated(file):0kB mapped:81620kB dirty:0kB writeback:0kB shmem:449224kB writeback_tmp:0kB unstable:0kB all_unreclaimable? no
[  +0.000001] Node 0 DMA free:7228kB min:44kB low:56kB high:68kB active_anon:7560kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15992kB managed:15908kB mlocked:0kB kernel_stack:16kB pagetables:68kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
[  +0.000003] lowmem_reserve[]: 0 1797 1797 1797
[  +0.000003] Node 0 DMA32 free:5204kB min:5400kB low:7240kB high:9080kB active_anon:1572492kB inactive_anon:282328kB active_file:116kB inactive_file:656kB unevictable:0kB writepending:0kB present:2031616kB managed:1973640kB mlocked:0kB kernel_stack:8464kB pagetables:13196kB bounce:0kB free_pcp:52kB local_pcp:40kB free_cma:0kB
[  +0.000003] lowmem_reserve[]: 0 0 0 0
[  +0.000002] Node 0 DMA: 13*4kB (UE) 15*8kB (UE) 15*16kB (UE) 19*32kB (UE) 7*64kB (UE) 7*128kB (UME) 7*256kB (UME) 2*512kB (UE) 2*1024kB (UE) 0*2048kB 0*4096kB = 7228kB
[  +0.000009] Node 0 DMA32: 252*4kB (UMEH) 70*8kB (UMEH) 193*16kB (UMEH) 21*32kB (UEH) 1*64kB (H) 0*128kB 1*256kB (H) 1*512kB (H) 0*1024kB 0*2048kB 0*4096kB = 6160kB
[  +0.000010] 112365 total pagecache pages
[  +0.000000] 0 pages in swap cache
[  +0.000001] Swap cache stats: add 0, delete 0, find 0/0
[  +0.000001] Free swap  = 0kB
[  +0.000000] Total swap = 0kB
[  +0.000000] 511902 pages RAM
[  +0.000001] 0 pages HighMem/MovableOnly
[  +0.000000] 14515 pages reserved
[  +0.000132] Out of memory: Kill process 19208 (ng build --prod) score 410 or sacrifice child
[  +0.000148] Killed process 19208 (ng build --prod) total-vm:1063716kB, anon-rss:803388kB, file-rss:0kB, shmem-rss:0kB

==> kernel <==
 16:47:28 up  1:04,  0 users,  load average: 0.39, 2.74, 8.84
Linux minikube 4.15.0 #1 SMP Wed Sep 18 07:44:58 PDT 2019 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2018.05.3"

==> kube-addon-manager ["93997ba105b4"] <==
deployment.apps/dashboard-metrics-scraper unchanged
deployment.apps/kubernetes-dashboard unchanged
namespace/kubernetes-dashboard unchanged
role.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
serviceaccount/kubernetes-dashboard unchanged
secret/kubernetes-dashboard-certs unchanged
secret/kubernetes-dashboard-csrf unchanged
secret/kubernetes-dashboard-key-holder unchanged
service/kubernetes-dashboard unchanged
service/dashboard-metrics-scraper unchanged
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-12-30T16:47:12+00:00 ==
INFO: Leader election disabled.
INFO: == Kubernetes addon ensure completed at 2019-12-30T16:47:14+00:00 ==
INFO: == Reconciling with deprecated label ==
error: no objects passed to apply
INFO: == Reconciling with addon-manager label ==
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
error: no objects passed to apply
configmap/kubernetes-dashboard-settings unchanged
deployment.apps/dashboard-metrics-scraper unchanged
deployment.apps/kubernetes-dashboard unchanged
namespace/kubernetes-dashboard unchanged
role.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
serviceaccount/kubernetes-dashboard unchanged
secret/kubernetes-dashboard-certs unchanged
secret/kubernetes-dashboard-csrf unchanged
secret/kubernetes-dashboard-key-holder unchanged
service/kubernetes-dashboard unchanged
service/dashboard-metrics-scraper unchanged
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-12-30T16:47:18+00:00 ==
INFO: Leader election disabled.
INFO: == Kubernetes addon ensure completed at 2019-12-30T16:47:18+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
configmap/kubernetes-dashboard-settings unchanged
deployment.apps/dashboard-metrics-scraper unchanged
deployment.apps/kubernetes-dashboard unchanged
namespace/kubernetes-dashboard unchanged
role.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
serviceaccount/kubernetes-dashboard unchanged
secret/kubernetes-dashboard-certs unchanged
secret/kubernetes-dashboard-csrf unchanged
secret/kubernetes-dashboard-key-holder unchanged
service/kubernetes-dashboard unchanged
service/dashboard-metrics-scraper unchanged
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-12-30T16:47:23+00:00 ==
INFO: Leader election disabled.
INFO: == Kubernetes addon ensure completed at 2019-12-30T16:47:23+00:00 ==
error: no objects passed to apply
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==

==> kube-addon-manager ["f648bba41021"] <==
error: no objects passed to apply
error: no objects passed to apply
error: no objects passed to apply
deployment.apps/dashboard-metrics-scraper unchanged
deployment.apps/kubernetes-dashboard unchanged
namespace/kubernetes-dashboard unchanged
role.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
serviceaccount/kubernetes-dashboard unchanged
secret/kubernetes-dashboard-certs unchanged
secret/kubernetes-dashboard-csrf unchanged
secret/kubernetes-dashboard-key-holder unchanged
service/kubernetes-dashboard unchanged
service/dashboard-metrics-scraper unchanged
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-12-30T15:42:03+00:00 ==
INFO: Leader election disabled.
INFO: == Kubernetes addon ensure completed at 2019-12-30T15:42:04+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
configmap/kubernetes-dashboard-settings unchanged
deployment.apps/dashboard-metrics-scraper unchanged
deployment.apps/kubernetes-dashboard unchanged
namespace/kubernetes-dashboard unchanged
role.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
serviceaccount/kubernetes-dashboard unchanged
secret/kubernetes-dashboard-certs unchanged
secret/kubernetes-dashboard-csrf unchanged
secret/kubernetes-dashboard-key-holder unchanged
service/kubernetes-dashboard unchanged
service/dashboard-metrics-scraper unchanged
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-12-30T15:42:08+00:00 ==
INFO: Leader election disabled.
INFO: == Kubernetes addon ensure completed at 2019-12-30T15:42:09+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
configmap/kubernetes-dashboard-settings unchanged
deployment.apps/dashboard-metrics-scraper unchanged
deployment.apps/kubernetes-dashboard unchanged
namespace/kubernetes-dashboard unchanged
role.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
serviceaccount/kubernetes-dashboard unchanged
secret/kubernetes-dashboard-certs unchanged
secret/kubernetes-dashboard-csrf unchanged
secret/kubernetes-dashboard-key-holder unchanged
service/kubernetes-dashboard unchanged
service/dashboard-metrics-scraper unchanged
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-12-30T15:42:14+00:00 ==
INFO: Leader election disabled.
INFO: == Kubernetes addon ensure completed at 2019-12-30T15:42:14+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==

==> kube-apiserver ["ce7b594be55b"] <==
Trace[226991171]: [264.576622ms] [260.909602ms] Transaction prepared
Trace[226991171]: [4.68547042s] [4.420893798s] Transaction committed
I1230 16:32:31.859937       1 trace.go:116] Trace[1615787604]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/minikube,user-agent:kubelet/v1.17.0 (linux/amd64) kubernetes/70132b0,client:127.0.0.1 (started: 2019-12-30 16:32:26.479967749 +0000 UTC m=+496.821157641) (total time: 5.351492495s):
Trace[1615787604]: [232.28539ms] [157.549893ms] Conversion done
Trace[1615787604]: [5.336346707s] [5.102478151s] Object stored in database
I1230 16:32:35.711633       1 trace.go:116] Trace[1445639381]: "Get" url:/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication,user-agent:kube-scheduler/v1.17.0 (linux/amd64) kubernetes/70132b0,client:127.0.0.1 (started: 2019-12-30 16:32:28.085861661 +0000 UTC m=+498.427051647) (total time: 7.508028146s):
Trace[1445639381]: [7.424729736s] [7.424062992s] About to write a response
I1230 16:33:11.368698       1 clientconn.go:825] blockingPicker: the picked transport is not ready, loop back to repick
I1230 16:33:11.387197       1 log.go:172] http: TLS handshake error from 192.168.64.3:54394: EOF
I1230 16:33:11.402830       1 clientconn.go:825] blockingPicker: the picked transport is not ready, loop back to repick
I1230 16:33:11.404688       1 clientconn.go:825] blockingPicker: the picked transport is not ready, loop back to repick
I1230 16:33:11.404991       1 clientconn.go:825] blockingPicker: the picked transport is not ready, loop back to repick
I1230 16:33:11.409790       1 clientconn.go:825] blockingPicker: the picked transport is not ready, loop back to repick
I1230 16:33:11.409834       1 clientconn.go:825] blockingPicker: the picked transport is not ready, loop back to repick
I1230 16:33:11.432967       1 trace.go:116] Trace[1835363076]: "Create" url:/apis/rbac.authorization.k8s.io/v1/clusterrolebindings,user-agent:kubectl/v1.13.2 (linux/amd64) kubernetes/cff46ab,client:127.0.0.1 (started: 2019-12-30 16:32:23.37473592 +0000 UTC m=+493.715926004) (total time: 48.057975737s):
Trace[1835363076]: [48.057975737s] [47.891084273s] END
E1230 16:33:11.626295       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
E1230 16:33:11.627038       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
I1230 16:33:11.977152       1 trace.go:116] Trace[2021721392]: "Get" url:/api/v1/namespaces/default,user-agent:kube-apiserver/v1.17.0 (linux/amd64) kubernetes/70132b0,client:::1 (started: 2019-12-30 16:32:25.646292847 +0000 UTC m=+495.987482843) (total time: 46.330795184s):
Trace[2021721392]: [46.330696691s] [46.323430739s] About to write a response
I1230 16:33:12.336654       1 trace.go:116] Trace[914799300]: "Get" url:/api/v1/namespaces/kube-system/endpoints/kube-scheduler,user-agent:kube-scheduler/v1.17.0 (linux/amd64) kubernetes/70132b0/leader-election,client:127.0.0.1 (started: 2019-12-30 16:33:11.560363689 +0000 UTC m=+541.901553556) (total time: 776.253184ms):
Trace[914799300]: [776.198627ms] [776.172991ms] About to write a response
I1230 16:33:12.343880       1 trace.go:116] Trace[391790295]: "Get" url:/api/v1/namespaces/kube-system,user-agent:kube-apiserver/v1.17.0 (linux/amd64) kubernetes/70132b0,client:::1 (started: 2019-12-30 16:33:11.464255737 +0000 UTC m=+541.805445608) (total time: 879.582667ms):
Trace[391790295]: [879.551021ms] [879.537534ms] About to write a response
I1230 16:33:12.395877       1 trace.go:116] Trace[475623687]: "Create" url:/apis/storage.k8s.io/v1/storageclasses,user-agent:kubectl/v1.13.2 (linux/amd64) kubernetes/cff46ab,client:127.0.0.1 (started: 2019-12-30 16:33:11.654545712 +0000 UTC m=+541.995735583) (total time: 741.289254ms):
Trace[475623687]: [741.289254ms] [739.918642ms] END
I1230 16:33:43.093058       1 trace.go:116] Trace[1352193844]: "GuaranteedUpdate etcd3" type:*coordination.Lease (started: 2019-12-30 16:33:42.47626595 +0000 UTC m=+572.817455857) (total time: 616.7426ms):
Trace[1352193844]: [616.689805ms] [616.450212ms] Transaction committed
I1230 16:33:43.093304       1 trace.go:116] Trace[1612792918]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/minikube,user-agent:kubelet/v1.17.0 (linux/amd64) kubernetes/70132b0,client:127.0.0.1 (started: 2019-12-30 16:33:42.475842061 +0000 UTC m=+572.817031983) (total time: 617.414148ms):
Trace[1612792918]: [617.278182ms] [616.949974ms] Object stored in database
I1230 16:33:43.094053       1 trace.go:116] Trace[261073899]: "Get" url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-scheduler,user-agent:kube-scheduler/v1.17.0 (linux/amd64) kubernetes/70132b0/leader-election,client:127.0.0.1 (started: 2019-12-30 16:33:42.564268088 +0000 UTC m=+572.905457968) (total time: 529.742086ms):
Trace[261073899]: [529.683442ms] [529.653107ms] About to write a response
I1230 16:33:43.211168       1 trace.go:116] Trace[1427172516]: "Get" url:/api/v1/namespaces/default/services/kubernetes,user-agent:kube-apiserver/v1.17.0 (linux/amd64) kubernetes/70132b0,client:::1 (started: 2019-12-30 16:33:42.56439743 +0000 UTC m=+572.905587321) (total time: 646.729169ms):
Trace[1427172516]: [646.666291ms] [646.656825ms] About to write a response
I1230 16:33:43.259852       1 trace.go:116] Trace[733441265]: "GuaranteedUpdate etcd3" type:*coordination.Lease (started: 2019-12-30 16:33:42.589978891 +0000 UTC m=+572.931168855) (total time: 669.825154ms):
Trace[733441265]: [669.775442ms] [666.05659ms] Transaction committed
I1230 16:33:43.259995       1 trace.go:116] Trace[1278042704]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager,user-agent:kube-controller-manager/v1.17.0 (linux/amd64) kubernetes/70132b0/leader-election,client:127.0.0.1 (started: 2019-12-30 16:33:42.568368567 +0000 UTC m=+572.909558465) (total time: 691.596367ms):
Trace[1278042704]: [691.516348ms] [691.387086ms] Object stored in database
E1230 16:34:02.475994       1 repair.go:247] the cluster IP 10.107.123.242 for service kubernetes-dashboard/kubernetes-dashboard is not within the service CIDR 10.96.0.0/12; please recreate
I1230 16:36:44.190119       1 clientconn.go:825] blockingPicker: the picked transport is not ready, loop back to repick
I1230 16:36:44.191037       1 clientconn.go:825] blockingPicker: the picked transport is not ready, loop back to repick
I1230 16:36:44.194768       1 clientconn.go:825] blockingPicker: the picked transport is not ready, loop back to repick
I1230 16:36:44.315373       1 clientconn.go:825] blockingPicker: the picked transport is not ready, loop back to repick
I1230 16:36:44.350161       1 clientconn.go:825] blockingPicker: the picked transport is not ready, loop back to repick
I1230 16:36:44.380950       1 clientconn.go:825] blockingPicker: the picked transport is not ready, loop back to repick
E1230 16:36:44.492606       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
E1230 16:36:44.599884       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
E1230 16:36:44.602008       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
E1230 16:37:02.536132       1 repair.go:247] the cluster IP 10.107.123.242 for service kubernetes-dashboard/kubernetes-dashboard is not within the service CIDR 10.96.0.0/12; please recreate
I1230 16:37:43.353967       1 trace.go:116] Trace[1341279519]: "List etcd3" key:/ingress/kubernetes-dashboard,resourceVersion:,limit:0,continue: (started: 2019-12-30 16:37:42.439861815 +0000 UTC m=+812.781051727) (total time: 914.059061ms):
Trace[1341279519]: [914.059061ms] [914.059061ms] END
I1230 16:37:43.354146       1 trace.go:116] Trace[1279577911]: "List" url:/apis/extensions/v1beta1/namespaces/kubernetes-dashboard/ingresses,user-agent:kubectl/v1.13.2 (linux/amd64) kubernetes/cff46ab,client:127.0.0.1 (started: 2019-12-30 16:37:42.439794997 +0000 UTC m=+812.780984890) (total time: 914.327301ms):
Trace[1279577911]: [914.254762ms] [914.198336ms] Listing from storage done
E1230 16:40:02.559750       1 repair.go:247] the cluster IP 10.107.123.242 for service kubernetes-dashboard/kubernetes-dashboard is not within the service CIDR 10.96.0.0/12; please recreate
I1230 16:42:13.588691       1 trace.go:116] Trace[44769168]: "Get" url:/api/v1/namespaces/kubernetes-dashboard/services/kubernetes-dashboard,user-agent:kubectl/v1.13.2 (linux/amd64) kubernetes/cff46ab,client:127.0.0.1 (started: 2019-12-30 16:42:12.577119503 +0000 UTC m=+1082.918309371) (total time: 1.011539477s):
Trace[44769168]: [1.011448086s] [1.011437345s] About to write a response
E1230 16:43:02.609271       1 repair.go:247] the cluster IP 10.107.123.242 for service kubernetes-dashboard/kubernetes-dashboard is not within the service CIDR 10.96.0.0/12; please recreate
I1230 16:45:13.621735       1 trace.go:116] Trace[253766870]: "Get" url:/api/v1/namespaces/kube-system/endpoints/kube-scheduler,user-agent:kube-scheduler/v1.17.0 (linux/amd64) kubernetes/70132b0/leader-election,client:127.0.0.1 (started: 2019-12-30 16:45:12.827400129 +0000 UTC m=+1263.168590047) (total time: 794.301697ms):
Trace[253766870]: [794.254331ms] [793.939325ms] About to write a response
E1230 16:46:02.627979       1 repair.go:247] the cluster IP 10.107.123.242 for service kubernetes-dashboard/kubernetes-dashboard is not within the service CIDR 10.96.0.0/12; please recreate

==> kube-apiserver ["dabe00f3a095"] <==
E1230 16:24:01.218668       1 controller.go:222] unable to sync kubernetes service: Post https://[::1]:8443/api/v1/namespaces: unexpected EOF
I1230 16:24:01.297750       1 trace.go:116] Trace[585131366]: "Get" url:/api/v1/namespaces/kube-system/endpoints/kube-scheduler,user-agent:kube-scheduler/v1.17.0 (linux/amd64) kubernetes/70132b0/leader-election,client:127.0.0.1 (started: 2019-12-30 16:23:55.892241176 +0000 UTC m=+2355.616886668) (total time: 5.405448133s):
Trace[585131366]: [5.405448133s] [5.39997083s] END
I1230 16:24:01.307639       1 trace.go:116] Trace[632099366]: "Create" url:/apis/storage.k8s.io/v1/storageclasses,user-agent:kubectl/v1.13.2 (linux/amd64) kubernetes/cff46ab,client:127.0.0.1 (started: 2019-12-30 16:23:52.905615795 +0000 UTC m=+2352.630261437) (total time: 8.401984022s):
Trace[632099366]: [8.401984022s] [8.346769626s] END
I1230 16:24:01.321146       1 trace.go:116] Trace[1171978129]: "List" url:/apis/node.k8s.io/v1beta1/runtimeclasses,user-agent:kubelet/v1.17.0 (linux/amd64) kubernetes/70132b0,client:127.0.0.1 (started: 2019-12-30 16:23:56.97448748 +0000 UTC m=+2356.699132927) (total time: 4.346617092s):
Trace[1171978129]: [4.346617092s] [4.346596806s] END
I1230 16:24:01.328051       1 trace.go:116] Trace[545973668]: "List etcd3" key:/csidrivers,resourceVersion:149278,limit:500,continue: (started: 2019-12-30 16:23:56.960696185 +0000 UTC m=+2356.685341641) (total time: 4.36732476s):
Trace[545973668]: [4.36732476s] [4.36732476s] END
E1230 16:24:01.328127       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
I1230 16:24:01.328179       1 trace.go:116] Trace[1135101139]: "GetToList etcd3" key:/secrets/kubernetes-dashboard/kubernetes-dashboard-token-pkxxs,resourceVersion:150796,limit:500,continue: (started: 2019-12-30 16:23:56.976174197 +0000 UTC m=+2356.700819645) (total time: 4.351981098s):
Trace[1135101139]: [4.351981098s] [4.351981098s] END
E1230 16:24:01.328190       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
I1230 16:24:01.328228       1 trace.go:116] Trace[1652778102]: "GetToList etcd3" key:/configmaps/kube-system/kube-proxy,resourceVersion:149316,limit:500,continue: (started: 2019-12-30 16:23:56.955611028 +0000 UTC m=+2356.680256524) (total time: 4.372597393s):
Trace[1652778102]: [4.372597393s] [4.372597393s] END
E1230 16:24:01.328238       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
I1230 16:24:01.328313       1 trace.go:116] Trace[472171726]: "GetToList etcd3" key:/secrets/kube-system/coredns-token-lzzw5,resourceVersion:150796,limit:500,continue: (started: 2019-12-30 16:23:56.976199019 +0000 UTC m=+2356.700844473) (total time: 4.352051708s):
Trace[472171726]: [4.352051708s] [4.352051708s] END
E1230 16:24:01.328322       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
I1230 16:24:01.328676       1 trace.go:116] Trace[429685124]: "List etcd3" key:/pods,resourceVersion:153491,limit:500,continue: (started: 2019-12-30 16:23:56.975682141 +0000 UTC m=+2356.700327670) (total time: 4.352971552s):
Trace[429685124]: [4.352971552s] [4.352971552s] END
E1230 16:24:01.328695       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
I1230 16:24:01.373219       1 trace.go:116] Trace[1180183549]: "GuaranteedUpdate etcd3" type:*core.Event (started: 2019-12-30 16:23:56.972027665 +0000 UTC m=+2356.696673235) (total time: 4.401114274s):
Trace[1180183549]: [4.401114274s] [4.401114274s] END
E1230 16:24:01.373513       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
I1230 16:24:01.376186       1 trace.go:116] Trace[1970889012]: "Patch" url:/api/v1/namespaces/kube-system/events/coredns-6955765f44-kqf8v.15e5325d0a294b8a,user-agent:kubelet/v1.17.0 (linux/amd64) kubernetes/70132b0,client:127.0.0.1 (started: 2019-12-30 16:23:56.96080718 +0000 UTC m=+2356.685452637) (total time: 4.415339413s):
Trace[1970889012]: [4.415339413s] [4.414821316s] END
E1230 16:24:01.385829       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
I1230 16:24:01.386814       1 trace.go:116] Trace[845164284]: "Get" url:/api/v1/namespaces/kube-public,user-agent:kube-apiserver/v1.17.0 (linux/amd64) kubernetes/70132b0,client:::1 (started: 2019-12-30 16:23:51.46910421 +0000 UTC m=+2351.193749655) (total time: 9.917675958s):
Trace[845164284]: [9.917675958s] [9.917665789s] END
E1230 16:24:01.386927       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
I1230 16:24:01.387240       1 trace.go:116] Trace[1598852885]: "Create" url:/api/v1/namespaces,user-agent:kube-apiserver/v1.17.0 (linux/amd64) kubernetes/70132b0,client:::1 (started: 2019-12-30 16:23:51.752222851 +0000 UTC m=+2351.476868360) (total time: 9.634993605s):
Trace[1598852885]: [141.54748ms] [133.946112ms] Conversion done
Trace[1598852885]: [9.634993605s] [9.434027056s] END
E1230 16:24:01.393643       1 controller.go:202] unable to create required kubernetes system namespace kube-public: Post https://[::1]:8443/api/v1/namespaces: dial tcp [::1]:8443: connect: connection refused
E1230 16:24:01.436438       1 controller.go:202] unable to create required kubernetes system namespace kube-node-lease: Post https://[::1]:8443/api/v1/namespaces: dial tcp [::1]:8443: connect: connection refused
I1230 16:24:01.439295       1 trace.go:116] Trace[38037644]: "List" url:/api/v1/namespaces/kube-system/secrets,user-agent:kubelet/v1.17.0 (linux/amd64) kubernetes/70132b0,client:127.0.0.1 (started: 2019-12-30 16:23:56.973846447 +0000 UTC m=+2356.698491891) (total time: 4.465404826s):
Trace[38037644]: [4.465404826s] [4.465357478s] END
I1230 16:24:01.440055       1 trace.go:116] Trace[681928500]: "Get" url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/minikube,user-agent:kubelet/v1.17.0 (linux/amd64) kubernetes/70132b0,client:127.0.0.1 (started: 2019-12-30 16:23:56.94488935 +0000 UTC m=+2356.669534790) (total time: 4.4951273s):
Trace[681928500]: [4.4951273s] [4.492477089s] END
I1230 16:24:01.443959       1 trace.go:116] Trace[1702299901]: "List" url:/api/v1/nodes,user-agent:kubelet/v1.17.0 (linux/amd64) kubernetes/70132b0,client:127.0.0.1 (started: 2019-12-30 16:23:56.879207788 +0000 UTC m=+2356.603853231) (total time: 4.564712124s):
Trace[1702299901]: [4.564712124s] [4.564076292s] END
I1230 16:24:01.444327       1 trace.go:116] Trace[782877358]: "List" url:/api/v1/namespaces/kube-system/configmaps,user-agent:kubelet/v1.17.0 (linux/amd64) kubernetes/70132b0,client:127.0.0.1 (started: 2019-12-30 16:23:56.654695781 +0000 UTC m=+2356.379341347) (total time: 4.789603685s):
Trace[782877358]: [4.789603685s] [4.688485963s] END
I1230 16:24:01.459641       1 trace.go:116] Trace[1369357906]: "List" url:/apis/storage.k8s.io/v1beta1/csidrivers,user-agent:kubelet/v1.17.0 (linux/amd64) kubernetes/70132b0,client:127.0.0.1 (started: 2019-12-30 16:23:56.960498674 +0000 UTC m=+2356.685144117) (total time: 4.499063243s):
Trace[1369357906]: [4.499063243s] [4.499031905s] END
I1230 16:24:01.465983       1 trace.go:116] Trace[1159335229]: "List" url:/api/v1/namespaces/kubernetes-dashboard/secrets,user-agent:kubelet/v1.17.0 (linux/amd64) kubernetes/70132b0,client:127.0.0.1 (started: 2019-12-30 16:23:56.97611542 +0000 UTC m=+2356.700760861) (total time: 4.48982162s):
Trace[1159335229]: [4.48982162s] [4.489769899s] END
I1230 16:24:01.472823       1 trace.go:116] Trace[29733788]: "List" url:/api/v1/namespaces/kube-system/configmaps,user-agent:kubelet/v1.17.0 (linux/amd64) kubernetes/70132b0,client:127.0.0.1 (started: 2019-12-30 16:23:56.955332968 +0000 UTC m=+2356.679978411) (total time: 4.517447909s):
Trace[29733788]: [4.517447909s] [4.517193335s] END
I1230 16:24:01.474923       1 trace.go:116] Trace[930909435]: "List" url:/api/v1/namespaces/kube-system/secrets,user-agent:kubelet/v1.17.0 (linux/amd64) kubernetes/70132b0,client:127.0.0.1 (started: 2019-12-30 16:23:56.960608954 +0000 UTC m=+2356.685254412) (total time: 4.514255963s):
Trace[930909435]: [4.514255963s] [4.507828097s] END
I1230 16:24:01.475173       1 trace.go:116] Trace[485187356]: "List" url:/api/v1/pods,user-agent:kubelet/v1.17.0 (linux/amd64) kubernetes/70132b0,client:127.0.0.1 (started: 2019-12-30 16:23:56.975114846 +0000 UTC m=+2356.699760290) (total time: 4.500041167s):
Trace[485187356]: [4.500041167s] [4.500021757s] END
E1230 16:24:02.068821       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
E1230 16:24:02.081178       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
I1230 16:24:02.108679       1 trace.go:116] Trace[469147088]: "Get" url:/api/v1/namespaces/kubernetes-dashboard/secrets/kubernetes-dashboard-key-holder,user-agent:dashboard/v2.0.0-beta8,client:172.17.0.5 (started: 2019-12-30 16:24:01.026889741 +0000 UTC m=+2360.751535255) (total time: 1.081309226s):
Trace[469147088]: [1.081309226s] [1.077712744s] END
W1230 16:24:06.314703       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
E1230 16:24:07.390534       1 controller.go:183] StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.64.3, ResourceVersion: 0, AdditionalErrorMsg: 

==> kube-controller-manager ["edd0833a88f6"] <==
I1230 16:33:34.566664       1 shared_informer.go:197] Waiting for caches to sync for job
I1230 16:33:34.607540       1 controllermanager.go:533] Started "replicaset"
W1230 16:33:34.607633       1 controllermanager.go:525] Skipping "ttl-after-finished"
I1230 16:33:34.607746       1 replica_set.go:180] Starting replicaset controller
I1230 16:33:34.607756       1 shared_informer.go:197] Waiting for caches to sync for ReplicaSet
I1230 16:33:34.792253       1 controllermanager.go:533] Started "persistentvolume-binder"
I1230 16:33:34.792494       1 pv_controller_base.go:294] Starting persistent volume controller
I1230 16:33:34.792800       1 shared_informer.go:197] Waiting for caches to sync for persistent volume
I1230 16:33:34.805788       1 controllermanager.go:533] Started "attachdetach"
I1230 16:33:34.806157       1 attach_detach_controller.go:342] Starting attach detach controller
I1230 16:33:34.806273       1 shared_informer.go:197] Waiting for caches to sync for attach detach
I1230 16:33:34.820008       1 controllermanager.go:533] Started "pv-protection"
I1230 16:33:34.820365       1 pv_protection_controller.go:81] Starting PV protection controller
I1230 16:33:34.820634       1 shared_informer.go:197] Waiting for caches to sync for PV protection
I1230 16:33:34.907038       1 controllermanager.go:533] Started "podgc"
I1230 16:33:34.907211       1 gc_controller.go:88] Starting GC controller
I1230 16:33:34.907269       1 shared_informer.go:197] Waiting for caches to sync for GC
I1230 16:33:35.091914       1 controllermanager.go:533] Started "ttl"
W1230 16:33:35.092183       1 controllermanager.go:525] Skipping "nodeipam"
I1230 16:33:35.092893       1 shared_informer.go:197] Waiting for caches to sync for resource quota
I1230 16:33:35.093160       1 ttl_controller.go:116] Starting TTL controller
I1230 16:33:35.093210       1 shared_informer.go:197] Waiting for caches to sync for TTL
I1230 16:33:35.229705       1 shared_informer.go:204] Caches are synced for TTL 
W1230 16:33:35.231135       1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist
I1230 16:33:35.232020       1 shared_informer.go:204] Caches are synced for certificate-csrsigning 
I1230 16:33:35.238701       1 shared_informer.go:204] Caches are synced for endpoint 
I1230 16:33:35.238891       1 shared_informer.go:204] Caches are synced for GC 
I1230 16:33:35.239856       1 shared_informer.go:204] Caches are synced for stateful set 
I1230 16:33:35.240188       1 shared_informer.go:204] Caches are synced for service account 
I1230 16:33:35.240445       1 shared_informer.go:204] Caches are synced for PV protection 
I1230 16:33:35.245001       1 shared_informer.go:204] Caches are synced for ReplicationController 
I1230 16:33:35.254838       1 shared_informer.go:204] Caches are synced for HPA 
I1230 16:33:35.255847       1 shared_informer.go:204] Caches are synced for certificate-csrapproving 
I1230 16:33:35.262589       1 shared_informer.go:204] Caches are synced for bootstrap_signer 
I1230 16:33:35.290259       1 shared_informer.go:204] Caches are synced for namespace 
I1230 16:33:35.291210       1 shared_informer.go:204] Caches are synced for PVC protection 
I1230 16:33:35.293651       1 shared_informer.go:204] Caches are synced for persistent volume 
I1230 16:33:35.296534       1 shared_informer.go:204] Caches are synced for expand 
I1230 16:33:35.298959       1 shared_informer.go:204] Caches are synced for daemon sets 
I1230 16:33:35.307917       1 shared_informer.go:204] Caches are synced for ReplicaSet 
I1230 16:33:35.311364       1 shared_informer.go:204] Caches are synced for taint 
I1230 16:33:35.311796       1 node_lifecycle_controller.go:1443] Initializing eviction metric for zone: 
W1230 16:33:35.312064       1 node_lifecycle_controller.go:1058] Missing timestamp for Node minikube. Assuming now as a timestamp.
I1230 16:33:35.312345       1 node_lifecycle_controller.go:1259] Controller detected that zone  is now in state Normal.
I1230 16:33:35.313104       1 taint_manager.go:186] Starting NoExecuteTaintManager
I1230 16:33:35.313997       1 event.go:281] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"7996c0cf-e602-4eff-871c-ad8fa11f3c67", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node minikube event: Registered Node minikube in Controller
I1230 16:33:35.356243       1 shared_informer.go:204] Caches are synced for disruption 
I1230 16:33:35.356270       1 disruption.go:338] Sending events to api server.
I1230 16:33:35.356801       1 shared_informer.go:204] Caches are synced for deployment 
I1230 16:33:35.556236       1 shared_informer.go:204] Caches are synced for resource quota 
I1230 16:33:35.573659       1 shared_informer.go:204] Caches are synced for job 
I1230 16:33:35.593617       1 shared_informer.go:204] Caches are synced for resource quota 
I1230 16:33:35.679549       1 shared_informer.go:204] Caches are synced for ClusterRoleAggregator 
I1230 16:33:35.738983       1 shared_informer.go:204] Caches are synced for attach detach 
I1230 16:33:35.743995       1 shared_informer.go:204] Caches are synced for garbage collector 
I1230 16:33:35.744435       1 garbagecollector.go:138] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I1230 16:33:36.021444       1 shared_informer.go:197] Waiting for caches to sync for garbage collector
I1230 16:33:36.021549       1 shared_informer.go:204] Caches are synced for garbage collector 
I1230 16:36:44.185757       1 leaderelection.go:288] failed to renew lease kube-system/kube-controller-manager: failed to tryAcquireOrRenew context deadline exceeded
F1230 16:36:44.187914       1 controllermanager.go:279] leaderelection lost

==> kube-controller-manager ["f67f81d405aa"] <==
I1230 16:37:39.113544       1 controllermanager.go:533] Started "job"
I1230 16:37:39.113606       1 job_controller.go:143] Starting job controller
I1230 16:37:39.113987       1 shared_informer.go:197] Waiting for caches to sync for job
I1230 16:37:39.252319       1 controllermanager.go:533] Started "csrcleaner"
I1230 16:37:39.252445       1 cleaner.go:81] Starting CSR cleaner controller
I1230 16:37:39.401715       1 node_lifecycle_controller.go:77] Sending events to api server
E1230 16:37:39.401787       1 core.go:232] failed to start cloud node lifecycle controller: no cloud provider provided
W1230 16:37:39.401800       1 controllermanager.go:525] Skipping "cloud-node-lifecycle"
I1230 16:37:39.552336       1 controllermanager.go:533] Started "persistentvolume-binder"
W1230 16:37:39.552527       1 controllermanager.go:525] Skipping "ttl-after-finished"
W1230 16:37:39.552555       1 controllermanager.go:525] Skipping "endpointslice"
I1230 16:37:39.552625       1 pv_controller_base.go:294] Starting persistent volume controller
I1230 16:37:39.552696       1 shared_informer.go:197] Waiting for caches to sync for persistent volume
I1230 16:37:39.702650       1 controllermanager.go:533] Started "serviceaccount"
I1230 16:37:39.702872       1 serviceaccounts_controller.go:116] Starting service account controller
I1230 16:37:39.703065       1 shared_informer.go:197] Waiting for caches to sync for service account
I1230 16:37:40.530844       1 controllermanager.go:533] Started "garbagecollector"
I1230 16:37:40.537492       1 garbagecollector.go:129] Starting garbage collector controller
I1230 16:37:40.542304       1 shared_informer.go:197] Waiting for caches to sync for garbage collector
I1230 16:37:40.544899       1 graph_builder.go:282] GraphBuilder running
I1230 16:37:40.557320       1 controllermanager.go:533] Started "daemonset"
I1230 16:37:40.558048       1 daemon_controller.go:255] Starting daemon sets controller
I1230 16:37:40.558087       1 shared_informer.go:197] Waiting for caches to sync for daemon sets
I1230 16:37:40.592994       1 shared_informer.go:197] Waiting for caches to sync for resource quota
I1230 16:37:40.622326       1 shared_informer.go:204] Caches are synced for bootstrap_signer 
I1230 16:37:40.622731       1 shared_informer.go:204] Caches are synced for ClusterRoleAggregator 
I1230 16:37:40.624098       1 shared_informer.go:204] Caches are synced for certificate-csrapproving 
I1230 16:37:40.632488       1 shared_informer.go:204] Caches are synced for PV protection 
I1230 16:37:40.654520       1 shared_informer.go:204] Caches are synced for certificate-csrsigning 
I1230 16:37:40.803758       1 shared_informer.go:204] Caches are synced for service account 
I1230 16:37:40.814822       1 shared_informer.go:204] Caches are synced for job 
I1230 16:37:40.818645       1 shared_informer.go:204] Caches are synced for namespace 
I1230 16:37:40.845114       1 shared_informer.go:204] Caches are synced for ReplicationController 
I1230 16:37:40.852767       1 shared_informer.go:204] Caches are synced for endpoint 
I1230 16:37:40.856596       1 shared_informer.go:204] Caches are synced for ReplicaSet 
W1230 16:37:40.864809       1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist
I1230 16:37:40.865329       1 shared_informer.go:204] Caches are synced for TTL 
I1230 16:37:40.865448       1 shared_informer.go:204] Caches are synced for daemon sets 
I1230 16:37:40.865858       1 shared_informer.go:204] Caches are synced for disruption 
I1230 16:37:40.865899       1 disruption.go:338] Sending events to api server.
I1230 16:37:40.902540       1 shared_informer.go:204] Caches are synced for deployment 
I1230 16:37:40.904952       1 shared_informer.go:204] Caches are synced for HPA 
I1230 16:37:40.953720       1 shared_informer.go:204] Caches are synced for taint 
I1230 16:37:40.954099       1 node_lifecycle_controller.go:1443] Initializing eviction metric for zone: 
W1230 16:37:40.954320       1 node_lifecycle_controller.go:1058] Missing timestamp for Node minikube. Assuming now as a timestamp.
I1230 16:37:40.954582       1 node_lifecycle_controller.go:1259] Controller detected that zone  is now in state Normal.
I1230 16:37:40.954996       1 shared_informer.go:204] Caches are synced for GC 
I1230 16:37:40.955320       1 taint_manager.go:186] Starting NoExecuteTaintManager
I1230 16:37:40.955896       1 event.go:281] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"7996c0cf-e602-4eff-871c-ad8fa11f3c67", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node minikube event: Registered Node minikube in Controller
I1230 16:37:41.093914       1 shared_informer.go:204] Caches are synced for expand 
I1230 16:37:41.121071       1 shared_informer.go:204] Caches are synced for attach detach 
I1230 16:37:41.140200       1 shared_informer.go:204] Caches are synced for stateful set 
I1230 16:37:41.145661       1 shared_informer.go:204] Caches are synced for garbage collector 
I1230 16:37:41.145710       1 garbagecollector.go:138] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I1230 16:37:41.152425       1 shared_informer.go:204] Caches are synced for PVC protection 
I1230 16:37:41.152951       1 shared_informer.go:204] Caches are synced for persistent volume 
I1230 16:37:41.193548       1 shared_informer.go:204] Caches are synced for resource quota 
I1230 16:37:41.230508       1 shared_informer.go:204] Caches are synced for resource quota 
I1230 16:37:42.018202       1 shared_informer.go:197] Waiting for caches to sync for garbage collector
I1230 16:37:42.018282       1 shared_informer.go:204] Caches are synced for garbage collector 

==> kube-proxy ["24214263aecc"] <==
W1229 21:19:27.369301       1 server_others.go:323] Unknown proxy mode "", assuming iptables proxy
I1229 21:19:27.396954       1 node.go:135] Successfully retrieved node IP: 192.168.64.3
I1229 21:19:27.397029       1 server_others.go:145] Using iptables Proxier.
W1229 21:19:27.397612       1 proxier.go:286] clusterCIDR not specified, unable to distinguish between internal and external traffic
I1229 21:19:27.398240       1 server.go:571] Version: v1.17.0
I1229 21:19:27.404322       1 conntrack.go:52] Setting nf_conntrack_max to 131072
I1229 21:19:27.407722       1 config.go:313] Starting service config controller
I1229 21:19:27.407895       1 shared_informer.go:197] Waiting for caches to sync for service config
I1229 21:19:27.416575       1 config.go:131] Starting endpoints config controller
I1229 21:19:27.417491       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
I1229 21:19:27.508818       1 shared_informer.go:204] Caches are synced for service config 
I1229 21:19:27.527597       1 shared_informer.go:204] Caches are synced for endpoints config 
E1230 15:42:16.999942       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Endpoints: Get https://localhost:8443/api/v1/endpoints?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=149277&timeout=7m11s&timeoutSeconds=431&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E1230 15:42:17.016625       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Service: Get https://localhost:8443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=88336&timeout=7m26s&timeoutSeconds=446&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused

==> kube-proxy ["e28b45ac3cac"] <==
W1230 15:44:51.667463       1 server_others.go:323] Unknown proxy mode "", assuming iptables proxy
I1230 15:44:51.725888       1 node.go:135] Successfully retrieved node IP: 192.168.64.3
I1230 15:44:51.725937       1 server_others.go:145] Using iptables Proxier.
W1230 15:44:51.726068       1 proxier.go:286] clusterCIDR not specified, unable to distinguish between internal and external traffic
I1230 15:44:51.726297       1 server.go:571] Version: v1.17.0
I1230 15:44:51.739511       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
I1230 15:44:51.740883       1 conntrack.go:52] Setting nf_conntrack_max to 131072
I1230 15:44:51.745460       1 conntrack.go:83] Setting conntrack hashsize to 32768
I1230 15:44:51.755784       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I1230 15:44:51.756062       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I1230 15:44:51.758942       1 config.go:313] Starting service config controller
I1230 15:44:51.758964       1 shared_informer.go:197] Waiting for caches to sync for service config
I1230 15:44:51.766314       1 config.go:131] Starting endpoints config controller
I1230 15:44:51.766370       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
I1230 15:44:51.866663       1 shared_informer.go:204] Caches are synced for service config 
I1230 15:44:51.866792       1 shared_informer.go:204] Caches are synced for endpoints config 
W1230 16:24:02.221309       1 reflector.go:340] k8s.io/client-go/informers/factory.go:135: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:135: Unexpected watch close - watch lasted less than a second and no items received
W1230 16:24:02.221205       1 reflector.go:340] k8s.io/client-go/informers/factory.go:135: watch of *v1.Endpoints ended with: very short watch: k8s.io/client-go/informers/factory.go:135: Unexpected watch close - watch lasted less than a second and no items received
E1230 16:24:03.903006       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Endpoints: Get https://localhost:8443/api/v1/endpoints?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=153466: dial tcp 127.0.0.1:8443: connect: connection refused
E1230 16:24:03.929720       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=149278: dial tcp 127.0.0.1:8443: connect: connection refused
E1230 16:24:05.480278       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Endpoints: Get https://localhost:8443/api/v1/endpoints?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=153466: dial tcp 127.0.0.1:8443: connect: connection refused
E1230 16:24:05.775478       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=149278: dial tcp 127.0.0.1:8443: connect: connection refused
E1230 16:24:07.192989       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=149278: dial tcp 127.0.0.1:8443: connect: connection refused
E1230 16:24:07.193030       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Endpoints: Get https://localhost:8443/api/v1/endpoints?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=153466: dial tcp 127.0.0.1:8443: connect: connection refused
E1230 16:24:08.332420       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=149278: dial tcp 127.0.0.1:8443: connect: connection refused
E1230 16:24:08.334004       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Endpoints: Get https://localhost:8443/api/v1/endpoints?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=153466: dial tcp 127.0.0.1:8443: connect: connection refused
E1230 16:24:09.372482       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Endpoints: Get https://localhost:8443/api/v1/endpoints?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=153466: dial tcp 127.0.0.1:8443: connect: connection refused
E1230 16:24:09.372714       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=149278: dial tcp 127.0.0.1:8443: connect: connection refused
I1230 16:28:00.816497       1 trace.go:116] Trace[1364190157]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:135 (started: 2019-12-30 16:24:10.399135884 +0000 UTC m=+2359.245815241) (total time: 3m50.417157027s):
Trace[1364190157]: [3m50.417157027s] [3m50.417157027s] END
E1230 16:28:00.816614       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Endpoints: Get https://localhost:8443/api/v1/endpoints?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=153466: net/http: TLS handshake timeout
I1230 16:28:00.816756       1 trace.go:116] Trace[513464183]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:135 (started: 2019-12-30 16:24:10.431010191 +0000 UTC m=+2359.277689572) (total time: 3m50.385700468s):
Trace[513464183]: [3m50.385700468s] [3m50.385700468s] END
E1230 16:28:00.816772       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=149278: net/http: TLS handshake timeout

==> kube-scheduler ["161c606c504c"] <==
I1230 16:37:13.435272       1 serving.go:312] Generated self-signed cert in-memory
W1230 16:37:14.108105       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found
W1230 16:37:14.108881       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found
W1230 16:37:14.143880       1 authorization.go:47] Authorization is disabled
W1230 16:37:14.143929       1 authentication.go:92] Authentication is disabled
I1230 16:37:14.143942       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
I1230 16:37:14.145630       1 configmap_cafile_content.go:205] Starting client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I1230 16:37:14.145760       1 shared_informer.go:197] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I1230 16:37:14.145889       1 configmap_cafile_content.go:205] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1230 16:37:14.146255       1 shared_informer.go:197] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1230 16:37:14.146861       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
I1230 16:37:14.147152       1 tlsconfig.go:219] Starting DynamicServingCertificateController
I1230 16:37:14.247849       1 leaderelection.go:242] attempting to acquire leader lease  kube-system/kube-scheduler...
I1230 16:37:14.248337       1 shared_informer.go:204] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
I1230 16:37:14.249069       1 shared_informer.go:204] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 
I1230 16:37:30.383057       1 leaderelection.go:252] successfully acquired lease kube-system/kube-scheduler

==> kube-scheduler ["d192e5aed4ef"] <==
I1230 16:32:23.220187       1 serving.go:312] Generated self-signed cert in-memory
W1230 16:32:27.342814       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found
W1230 16:32:27.346687       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found
W1230 16:33:11.403657       1 authorization.go:47] Authorization is disabled
W1230 16:33:11.403817       1 authentication.go:92] Authentication is disabled
I1230 16:33:11.407675       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
I1230 16:33:11.422253       1 configmap_cafile_content.go:205] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1230 16:33:11.423145       1 shared_informer.go:197] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1230 16:33:11.442382       1 tlsconfig.go:219] Starting DynamicServingCertificateController
I1230 16:33:11.442662       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
I1230 16:33:11.447312       1 configmap_cafile_content.go:205] Starting client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I1230 16:33:11.447371       1 shared_informer.go:197] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I1230 16:33:11.478140       1 log.go:172] http: TLS handshake error from 127.0.0.1:56554: EOF
I1230 16:33:11.526642       1 shared_informer.go:204] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
I1230 16:33:11.548126       1 shared_informer.go:204] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 
I1230 16:33:11.553701       1 leaderelection.go:242] attempting to acquire leader lease  kube-system/kube-scheduler...
I1230 16:33:27.402411       1 leaderelection.go:252] successfully acquired lease kube-system/kube-scheduler
I1230 16:36:44.042685       1 leaderelection.go:288] failed to renew lease kube-system/kube-scheduler: failed to tryAcquireOrRenew context deadline exceeded
F1230 16:36:44.046619       1 server.go:257] leaderelection lost

==> kubelet <==
-- Logs begin at Mon 2019-12-30 15:43:49 UTC, end at Mon 2019-12-30 16:47:28 UTC. --
Dec 30 16:28:01 minikube kubelet[2261]: W1230 16:28:01.324477    2261 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-6955765f44-v6r9n through plugin: invalid network status for
Dec 30 16:28:01 minikube kubelet[2261]: W1230 16:28:01.359305    2261 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-7b64584c5c-zqpk2 through plugin: invalid network status for
Dec 30 16:28:01 minikube kubelet[2261]: W1230 16:28:01.511684    2261 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-6955765f44-kqf8v through plugin: invalid network status for
Dec 30 16:28:02 minikube kubelet[2261]: E1230 16:28:02.001323    2261 kubelet.go:1844] skipping pod synchronization - container runtime is down
Dec 30 16:28:02 minikube kubelet[2261]: I1230 16:28:02.126693    2261 trace.go:116] Trace[9680736]: "Reflector ListAndWatch" name:object-"kube-system"/"coredns" (started: 2019-12-30 16:24:24.889832186 +0000 UTC m=+2394.920441395) (total time: 3m37.236826161s):
Dec 30 16:28:02 minikube kubelet[2261]: Trace[9680736]: [3m37.236810351s] [3m37.236810351s] Objects listed
Dec 30 16:28:03 minikube kubelet[2261]: E1230 16:28:03.601734    2261 kubelet.go:1844] skipping pod synchronization - container runtime is down
Dec 30 16:28:06 minikube kubelet[2261]: E1230 16:28:06.854844    2261 pod_workers.go:191] Error syncing pod 243bf53e-3c0b-49d6-957a-13d5b722e71d ("storage-provisioner_kube-system(243bf53e-3c0b-49d6-957a-13d5b722e71d)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(243bf53e-3c0b-49d6-957a-13d5b722e71d)"
Dec 30 16:28:06 minikube kubelet[2261]: E1230 16:28:06.860830    2261 pod_workers.go:191] Error syncing pod ec308db8-ce0f-460b-980a-2f60f8337d20 ("dashboard-metrics-scraper-7b64584c5c-zqpk2_kubernetes-dashboard(ec308db8-ce0f-460b-980a-2f60f8337d20)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-7b64584c5c-zqpk2_kubernetes-dashboard(ec308db8-ce0f-460b-980a-2f60f8337d20)"
Dec 30 16:28:07 minikube kubelet[2261]: W1230 16:28:07.614367    2261 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-79d9cd965-h2ktp through plugin: invalid network status for
Dec 30 16:28:07 minikube kubelet[2261]: W1230 16:28:07.645212    2261 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-7b64584c5c-zqpk2 through plugin: invalid network status for
Dec 30 16:28:07 minikube kubelet[2261]: E1230 16:28:07.664302    2261 pod_workers.go:191] Error syncing pod ec308db8-ce0f-460b-980a-2f60f8337d20 ("dashboard-metrics-scraper-7b64584c5c-zqpk2_kubernetes-dashboard(ec308db8-ce0f-460b-980a-2f60f8337d20)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-7b64584c5c-zqpk2_kubernetes-dashboard(ec308db8-ce0f-460b-980a-2f60f8337d20)"
Dec 30 16:28:07 minikube kubelet[2261]: E1230 16:28:07.690626    2261 pod_workers.go:191] Error syncing pod 243bf53e-3c0b-49d6-957a-13d5b722e71d ("storage-provisioner_kube-system(243bf53e-3c0b-49d6-957a-13d5b722e71d)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(243bf53e-3c0b-49d6-957a-13d5b722e71d)"
Dec 30 16:28:29 minikube kubelet[2261]: E1230 16:28:29.363066    2261 controller.go:177] failed to update node lease, error: Put https://localhost:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/minikube?timeout=10s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Dec 30 16:28:35 minikube kubelet[2261]: E1230 16:28:35.513901    2261 kubelet_node_status.go:402] Error updating node status, will retry: error getting node "minikube": Get https://localhost:8443/api/v1/nodes/minikube?resourceVersion=0&timeout=10s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Dec 30 16:28:36 minikube kubelet[2261]: E1230 16:28:36.489031    2261 controller.go:177] failed to update node lease, error: Operation cannot be fulfilled on leases.coordination.k8s.io "minikube": the object has been modified; please apply your changes to the latest version and try again
Dec 30 16:28:44 minikube kubelet[2261]: W1230 16:28:44.033866    2261 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-7b64584c5c-zqpk2 through plugin: invalid network status for
Dec 30 16:29:17 minikube kubelet[2261]: E1230 16:29:17.077601    2261 controller.go:177] failed to update node lease, error: Put https://localhost:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/minikube?timeout=10s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Dec 30 16:30:38 minikube kubelet[2261]: W1230 16:30:38.839040    2261 status_manager.go:530] Failed to get status for pod "kube-proxy-g74zl_kube-system(754b570e-ef1e-4793-a78b-4e1fc4c8fafb)": the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-proxy-g74zl)
Dec 30 16:30:38 minikube kubelet[2261]: E1230 16:30:38.849117    2261 remote_runtime.go:222] StartContainer "f575bea287ef19cdab8203e812057b3eb8d0ea685832c049ddcd6161cb8de9ce" from runtime service failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Dec 30 16:30:38 minikube kubelet[2261]: E1230 16:30:38.850071    2261 kuberuntime_manager.go:803] container start failed: RunContainerError: context deadline exceeded
Dec 30 16:30:38 minikube kubelet[2261]: E1230 16:30:38.852875    2261 pod_workers.go:191] Error syncing pod 243bf53e-3c0b-49d6-957a-13d5b722e71d ("storage-provisioner_kube-system(243bf53e-3c0b-49d6-957a-13d5b722e71d)"), skipping: failed to "StartContainer" for "storage-provisioner" with RunContainerError: "context deadline exceeded"
Dec 30 16:30:38 minikube kubelet[2261]: E1230 16:30:38.909172    2261 event.go:272] Unable to write event: 'Patch https://localhost:8443/api/v1/namespaces/kube-system/events/coredns-6955765f44-kqf8v.15e5325d29e410d1: stream error: stream ID 323; INTERNAL_ERROR' (may retry after sleeping)
Dec 30 16:30:39 minikube kubelet[2261]: E1230 16:30:39.223540    2261 remote_runtime.go:222] StartContainer "905596cba44adabe0ec71fb06d2ed4fdb5b1765165c12169538666e466f71b2a" from runtime service failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Dec 30 16:30:39 minikube kubelet[2261]: E1230 16:30:39.229680    2261 kuberuntime_manager.go:803] container start failed: RunContainerError: context deadline exceeded
Dec 30 16:30:39 minikube kubelet[2261]: E1230 16:30:39.230140    2261 pod_workers.go:191] Error syncing pod ec308db8-ce0f-460b-980a-2f60f8337d20 ("dashboard-metrics-scraper-7b64584c5c-zqpk2_kubernetes-dashboard(ec308db8-ce0f-460b-980a-2f60f8337d20)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with RunContainerError: "context deadline exceeded"
Dec 30 16:30:40 minikube kubelet[2261]: E1230 16:30:40.026711    2261 remote_runtime.go:295] ContainerStatus "2bf028c27bb99e5b0494fd2f9b87a2b2c3cba78593ee870f9567e80584bbca2c" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: 2bf028c27bb99e5b0494fd2f9b87a2b2c3cba78593ee870f9567e80584bbca2c
Dec 30 16:30:40 minikube kubelet[2261]: E1230 16:30:40.026793    2261 kuberuntime_manager.go:955] getPodContainerStatuses for pod "dashboard-metrics-scraper-7b64584c5c-zqpk2_kubernetes-dashboard(ec308db8-ce0f-460b-980a-2f60f8337d20)" failed: rpc error: code = Unknown desc = Error: No such container: 2bf028c27bb99e5b0494fd2f9b87a2b2c3cba78593ee870f9567e80584bbca2c
Dec 30 16:30:41 minikube kubelet[2261]: W1230 16:30:41.433073    2261 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-7b64584c5c-zqpk2 through plugin: invalid network status for
Dec 30 16:32:05 minikube kubelet[2261]: W1230 16:32:05.186208    2261 container.go:412] Failed to create summary reader for "/kubepods/burstable/podff67867321338ffd885039e188f6b424/50203625f4e6891b4cdd734d6628947843fe837e97f6f1d758bdb020186a7d51": none of the resources are being tracked.
Dec 30 16:32:05 minikube kubelet[2261]: E1230 16:32:05.561627    2261 remote_runtime.go:261] RemoveContainer "873db80de76a9a794128f3e0849a8ba5fd6dc7fd9aed2fd604473d32b2fa7daf" from runtime service failed: rpc error: code = Unknown desc = failed to remove container "873db80de76a9a794128f3e0849a8ba5fd6dc7fd9aed2fd604473d32b2fa7daf": Error response from daemon: removal of container 873db80de76a9a794128f3e0849a8ba5fd6dc7fd9aed2fd604473d32b2fa7daf is already in progress
Dec 30 16:32:06 minikube kubelet[2261]: E1230 16:32:06.406898    2261 pod_workers.go:191] Error syncing pod 243bf53e-3c0b-49d6-957a-13d5b722e71d ("storage-provisioner_kube-system(243bf53e-3c0b-49d6-957a-13d5b722e71d)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(243bf53e-3c0b-49d6-957a-13d5b722e71d)"
Dec 30 16:32:06 minikube kubelet[2261]: E1230 16:32:06.464318    2261 pod_workers.go:191] Error syncing pod e7ce3a6ee9fa0ec547ac7b4b17af0dcb ("kube-controller-manager-minikube_kube-system(e7ce3a6ee9fa0ec547ac7b4b17af0dcb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-minikube_kube-system(e7ce3a6ee9fa0ec547ac7b4b17af0dcb)"
Dec 30 16:32:06 minikube kubelet[2261]: E1230 16:32:06.487204    2261 pod_workers.go:191] Error syncing pod ff67867321338ffd885039e188f6b424 ("kube-scheduler-minikube_kube-system(ff67867321338ffd885039e188f6b424)"), skipping: failed to "StartContainer" for "kube-scheduler" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-scheduler pod=kube-scheduler-minikube_kube-system(ff67867321338ffd885039e188f6b424)"
Dec 30 16:32:07 minikube kubelet[2261]: E1230 16:32:07.514144    2261 pod_workers.go:191] Error syncing pod ff67867321338ffd885039e188f6b424 ("kube-scheduler-minikube_kube-system(ff67867321338ffd885039e188f6b424)"), skipping: failed to "StartContainer" for "kube-scheduler" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-scheduler pod=kube-scheduler-minikube_kube-system(ff67867321338ffd885039e188f6b424)"
Dec 30 16:32:13 minikube kubelet[2261]: E1230 16:32:13.242996    2261 pod_workers.go:191] Error syncing pod e7ce3a6ee9fa0ec547ac7b4b17af0dcb ("kube-controller-manager-minikube_kube-system(e7ce3a6ee9fa0ec547ac7b4b17af0dcb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-minikube_kube-system(e7ce3a6ee9fa0ec547ac7b4b17af0dcb)"
Dec 30 16:32:18 minikube kubelet[2261]: E1230 16:32:18.710571    2261 pod_workers.go:191] Error syncing pod 243bf53e-3c0b-49d6-957a-13d5b722e71d ("storage-provisioner_kube-system(243bf53e-3c0b-49d6-957a-13d5b722e71d)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(243bf53e-3c0b-49d6-957a-13d5b722e71d)"
Dec 30 16:32:36 minikube kubelet[2261]: E1230 16:32:36.528720    2261 controller.go:177] failed to update node lease, error: Put https://localhost:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/minikube?timeout=10s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Dec 30 16:32:37 minikube kubelet[2261]: E1230 16:32:31.594829    2261 event.go:263] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-scheduler-minikube.15e5309958cc651b", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"153755", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-scheduler-minikube", UID:"ff67867321338ffd885039e188f6b424", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-scheduler}"}, Reason:"Pulled", Message:"Container image \"k8s.gcr.io/kube-scheduler:v1.17.0\" already present on machine", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63713317479, loc:(*time.Location)(0x6e77d80)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf7aa784e8117800, ext:2869702842614, loc:(*time.Location)(0x6e77d80)}}, Count:5, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'etcdserver: request timed out' (will not retry!)
Dec 30 16:33:00 minikube kubelet[2261]: W1230 16:32:49.170678    2261 status_manager.go:546] Failed to update status for pod "storage-provisioner_kube-system(243bf53e-3c0b-49d6-957a-13d5b722e71d)": failed to patch status "{\"status\":{\"containerStatuses\":[{\"containerID\":\"docker://f575bea287ef19cdab8203e812057b3eb8d0ea685832c049ddcd6161cb8de9ce\",\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v1.8.1\",\"imageID\":\"docker://sha256:4689081edb103a9e8174bf23a255bfbe0b2d9ed82edc907abab6989d1c60f02c\",\"lastState\":{\"terminated\":{\"containerID\":\"docker://f575bea287ef19cdab8203e812057b3eb8d0ea685832c049ddcd6161cb8de9ce\",\"exitCode\":137,\"finishedAt\":\"2019-12-30T16:32:04Z\",\"reason\":\"Error\",\"startedAt\":\"2019-12-30T16:30:39Z\"}},\"name\":\"storage-provisioner\",\"ready\":false,\"restartCount\":14,\"started\":false,\"state\":{\"waiting\":{\"message\":\"back-off 20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(243bf53e-3c0b-49d6-957a-13d5b722e71d)\",\"reason\":\"CrashLoopBackOff\"}}}]}}" for pod "kube-system"/"storage-provisioner": etcdserver: request timed out
Dec 30 16:33:11 minikube kubelet[2261]: E1230 16:33:11.614015    2261 controller.go:177] failed to update node lease, error: Put https://localhost:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/minikube?timeout=10s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Dec 30 16:33:11 minikube kubelet[2261]: E1230 16:33:11.640654    2261 event.go:272] Unable to write event: 'Patch https://localhost:8443/api/v1/namespaces/kube-system/events/kube-scheduler-minikube.15e530995ee65d2a: read tcp 127.0.0.1:51604->127.0.0.1:8443: use of closed network connection' (may retry after sleeping)
Dec 30 16:33:11 minikube kubelet[2261]: I1230 16:33:11.911243    2261 setters.go:535] Node became not ready: {Type:Ready Status:False LastHeartbeatTime:2019-12-30 16:33:11.910521298 +0000 UTC m=+2921.941130452 LastTransitionTime:2019-12-30 16:33:11.910521298 +0000 UTC m=+2921.941130452 Reason:KubeletNotReady Message:container runtime is down}
Dec 30 16:33:12 minikube kubelet[2261]: E1230 16:33:12.271933    2261 controller.go:177] failed to update node lease, error: Operation cannot be fulfilled on leases.coordination.k8s.io "minikube": the object has been modified; please apply your changes to the latest version and try again
Dec 30 16:36:45 minikube kubelet[2261]: E1230 16:36:45.247765    2261 pod_workers.go:191] Error syncing pod 243bf53e-3c0b-49d6-957a-13d5b722e71d ("storage-provisioner_kube-system(243bf53e-3c0b-49d6-957a-13d5b722e71d)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(243bf53e-3c0b-49d6-957a-13d5b722e71d)"
Dec 30 16:36:45 minikube kubelet[2261]: E1230 16:36:45.283673    2261 remote_runtime.go:261] RemoveContainer "f575bea287ef19cdab8203e812057b3eb8d0ea685832c049ddcd6161cb8de9ce" from runtime service failed: rpc error: code = Unknown desc = failed to remove container "f575bea287ef19cdab8203e812057b3eb8d0ea685832c049ddcd6161cb8de9ce": Error response from daemon: removal of container f575bea287ef19cdab8203e812057b3eb8d0ea685832c049ddcd6161cb8de9ce is already in progress
Dec 30 16:36:46 minikube kubelet[2261]: E1230 16:36:46.301188    2261 pod_workers.go:191] Error syncing pod e7ce3a6ee9fa0ec547ac7b4b17af0dcb ("kube-controller-manager-minikube_kube-system(e7ce3a6ee9fa0ec547ac7b4b17af0dcb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-minikube_kube-system(e7ce3a6ee9fa0ec547ac7b4b17af0dcb)"
Dec 30 16:36:46 minikube kubelet[2261]: E1230 16:36:46.361140    2261 pod_workers.go:191] Error syncing pod ff67867321338ffd885039e188f6b424 ("kube-scheduler-minikube_kube-system(ff67867321338ffd885039e188f6b424)"), skipping: failed to "StartContainer" for "kube-scheduler" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-scheduler pod=kube-scheduler-minikube_kube-system(ff67867321338ffd885039e188f6b424)"
Dec 30 16:36:46 minikube kubelet[2261]: W1230 16:36:46.376439    2261 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-79d9cd965-h2ktp through plugin: invalid network status for
Dec 30 16:36:46 minikube kubelet[2261]: E1230 16:36:46.412113    2261 pod_workers.go:191] Error syncing pod 6387bda2-7e7d-4107-9ce3-34db3322ab07 ("kubernetes-dashboard-79d9cd965-h2ktp_kubernetes-dashboard(6387bda2-7e7d-4107-9ce3-34db3322ab07)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "back-off 20s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-79d9cd965-h2ktp_kubernetes-dashboard(6387bda2-7e7d-4107-9ce3-34db3322ab07)"
Dec 30 16:36:46 minikube kubelet[2261]: W1230 16:36:46.417778    2261 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-7b64584c5c-zqpk2 through plugin: invalid network status for
Dec 30 16:36:47 minikube kubelet[2261]: E1230 16:36:47.563969    2261 pod_workers.go:191] Error syncing pod ff67867321338ffd885039e188f6b424 ("kube-scheduler-minikube_kube-system(ff67867321338ffd885039e188f6b424)"), skipping: failed to "StartContainer" for "kube-scheduler" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-scheduler pod=kube-scheduler-minikube_kube-system(ff67867321338ffd885039e188f6b424)"
Dec 30 16:36:47 minikube kubelet[2261]: W1230 16:36:47.573015    2261 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-79d9cd965-h2ktp through plugin: invalid network status for
Dec 30 16:36:51 minikube kubelet[2261]: E1230 16:36:51.267095    2261 pod_workers.go:191] Error syncing pod 6387bda2-7e7d-4107-9ce3-34db3322ab07 ("kubernetes-dashboard-79d9cd965-h2ktp_kubernetes-dashboard(6387bda2-7e7d-4107-9ce3-34db3322ab07)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "back-off 20s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-79d9cd965-h2ktp_kubernetes-dashboard(6387bda2-7e7d-4107-9ce3-34db3322ab07)"
Dec 30 16:36:53 minikube kubelet[2261]: E1230 16:36:53.147151    2261 pod_workers.go:191] Error syncing pod e7ce3a6ee9fa0ec547ac7b4b17af0dcb ("kube-controller-manager-minikube_kube-system(e7ce3a6ee9fa0ec547ac7b4b17af0dcb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-minikube_kube-system(e7ce3a6ee9fa0ec547ac7b4b17af0dcb)"
Dec 30 16:36:59 minikube kubelet[2261]: E1230 16:36:59.570118    2261 pod_workers.go:191] Error syncing pod 243bf53e-3c0b-49d6-957a-13d5b722e71d ("storage-provisioner_kube-system(243bf53e-3c0b-49d6-957a-13d5b722e71d)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(243bf53e-3c0b-49d6-957a-13d5b722e71d)"
Dec 30 16:37:00 minikube kubelet[2261]: E1230 16:37:00.570429    2261 pod_workers.go:191] Error syncing pod ff67867321338ffd885039e188f6b424 ("kube-scheduler-minikube_kube-system(ff67867321338ffd885039e188f6b424)"), skipping: failed to "StartContainer" for "kube-scheduler" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-scheduler pod=kube-scheduler-minikube_kube-system(ff67867321338ffd885039e188f6b424)"
Dec 30 16:37:04 minikube kubelet[2261]: E1230 16:37:04.569373    2261 pod_workers.go:191] Error syncing pod e7ce3a6ee9fa0ec547ac7b4b17af0dcb ("kube-controller-manager-minikube_kube-system(e7ce3a6ee9fa0ec547ac7b4b17af0dcb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-minikube_kube-system(e7ce3a6ee9fa0ec547ac7b4b17af0dcb)"
Dec 30 16:37:04 minikube kubelet[2261]: W1230 16:37:04.887231    2261 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-79d9cd965-h2ktp through plugin: invalid network status for
Dec 30 16:37:14 minikube kubelet[2261]: E1230 16:37:14.569495    2261 pod_workers.go:191] Error syncing pod 243bf53e-3c0b-49d6-957a-13d5b722e71d ("storage-provisioner_kube-system(243bf53e-3c0b-49d6-957a-13d5b722e71d)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(243bf53e-3c0b-49d6-957a-13d5b722e71d)"

==> kubernetes-dashboard ["56a3d5a84b1f"] <==
2019/12/30 16:37:04 Starting overwatch
2019/12/30 16:37:04 Using namespace: kubernetes-dashboard
2019/12/30 16:37:04 Using in-cluster config to connect to apiserver
2019/12/30 16:37:04 Using secret token for csrf signing
2019/12/30 16:37:04 Initializing csrf token from kubernetes-dashboard-csrf secret
2019/12/30 16:37:04 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
2019/12/30 16:37:04 Successful initial request to the apiserver, version: v1.17.0
2019/12/30 16:37:04 Generating JWE encryption key
2019/12/30 16:37:04 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
2019/12/30 16:37:04 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
2019/12/30 16:37:05 Initializing JWE encryption key from synchronized object
2019/12/30 16:37:05 Creating in-cluster Sidecar client
2019/12/30 16:37:05 Successful request to sidecar
2019/12/30 16:37:05 Serving insecurely on HTTP port: 9090

==> kubernetes-dashboard ["a6854655f8ae"] <==
2019/12/30 16:28:07 Starting overwatch
2019/12/30 16:28:07 Using namespace: kubernetes-dashboard
2019/12/30 16:28:07 Using in-cluster config to connect to apiserver
2019/12/30 16:28:07 Using secret token for csrf signing
2019/12/30 16:28:07 Initializing csrf token from kubernetes-dashboard-csrf secret
2019/12/30 16:28:07 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
2019/12/30 16:28:07 Successful initial request to the apiserver, version: v1.17.0
2019/12/30 16:28:07 Generating JWE encryption key
2019/12/30 16:28:07 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
2019/12/30 16:28:07 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
2019/12/30 16:28:09 Initializing JWE encryption key from synchronized object
2019/12/30 16:28:09 Creating in-cluster Sidecar client
2019/12/30 16:28:09 Serving insecurely on HTTP port: 9090
2019/12/30 16:28:09 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2019/12/30 16:30:39 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2019/12/30 16:31:09 Successful request to sidecar

==> storage-provisioner ["b7aa43a75323"] <==

==> storage-provisioner ["c24b64d93a7e"] <==

The operating system version:
macOS 10.15.2
Docker version 19.03.5, build 633a0ea
minikube version: v1.6.2

@robbdimitrov (Author)

Setting the memory of the minikube VM to 3 GB allowed the build process to finish without memory errors:

$ minikube config set memory 3000
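Note that `minikube config set memory` only affects newly created clusters, so the existing VM has to be recreated before rebuilding inside the minikube docker env. A minimal sketch of the full sequence (the image name `frontend` and the path `src/frontend` are taken from the original report):

```shell
# Raise the VM memory limit; takes effect only on the next cluster creation
minikube config set memory 3000

# Recreate the VM so the new memory setting applies
minikube delete
minikube start

# Point the local docker client at the minikube docker daemon
eval $(minikube docker-env)

# Rebuild the image inside the minikube docker env
docker build -t frontend src/frontend
```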

@afbjorklund (Collaborator) commented Dec 31, 2019

minikube should probably have some better resource monitoring "built in", like #3574
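Until such monitoring exists, memory pressure during a build can be checked by hand from inside the VM. A sketch using standard minikube/docker commands (run while the build is in progress):

```shell
# Show free/used memory inside the minikube VM, in megabytes
minikube ssh -- free -m

# Show live per-container CPU and memory usage from the minikube docker daemon
eval $(minikube docker-env)
docker stats --no-stream
```

A build step that hangs silently while `free -m` shows the VM near its memory limit is a strong hint that the builder process (here, `ng build`) was killed by the OOM killer.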
