
Do something more useful when the weave bridge is DOWN #3133

Closed
bboreham opened this issue Oct 2, 2017 · 33 comments · Fixed by #3381

Comments

@bboreham
Contributor

bboreham commented Oct 2, 2017

Encountered during an instance of #2998 - nothing was working on one node because its bridge was DOWN.

Not sure whether the best thing to do is to make this clearer to the administrator, or to try to mend the bridge so it comes back UP.

Is it always a bug? Are there reasons for the machine owner to deliberately set it DOWN?
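For reference, mending it by hand is just link administration (a sketch, assuming the iproute2 tools and the default bridge name weave):

# show the bridge; the state column reads DOWN in the broken case
ip -br link show weave
# bring the bridge back up manually
sudo ip link set weave up

The question is whether weave should do the second step itself, or just surface the state loudly.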

@Bregor
Contributor

Bregor commented Oct 3, 2017

Yeah, same here.

  • kube-proxy --proxy-mode=iptables ...
  • ufw
  • weave-2.0.4

The weave interface changes state to DOWN randomly from host to host, affecting roughly one or two hosts per day in a cluster of 15 nodes (Kubernetes 1.6).

We have been affected since installing ufw.
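To separate a firewall effect from the link itself being taken down, something like this can be checked on an affected host (a sketch):

ip -br link show weave             # DOWN here points at the link state, not at firewalling
sudo iptables -S | grep -i weave   # rules that ufw/kube-proxy/weave have installed for weave traffic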

@bboreham
Contributor Author

bboreham commented Oct 3, 2017

@Bregor do you think it goes DOWN mid-session or could it be just on reboot?

@Bregor
Contributor

Bregor commented Oct 3, 2017

I'm pretty sure it's mid-session. Moreover, rebooting the node does not cure the condition. The only way guaranteed to fix it for a while is to download the weave shell script onto the host, run weave reset, and then kubectl delete pod -n kube-system weave-net-xxxx.
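Spelled out, that recovery sequence is roughly (a sketch; weave-net-xxxx is a placeholder pod name, and git.io/weave was the script location documented at the time):

sudo curl -L git.io/weave -o /usr/local/bin/weave
sudo chmod +x /usr/local/bin/weave
sudo weave reset                                  # removes the bridge, datapath and persisted state
kubectl delete pod -n kube-system weave-net-xxxx  # the DaemonSet recreates the pod, which rebuilds the bridge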

@Bregor
Contributor

Bregor commented Oct 3, 2017

INFO: 2017/10/03 16:31:06.868714 weave  2.0.4
INFO: 2017/10/03 16:31:06.869439 Bridge type is bridged_fastdp
INFO: 2017/10/03 16:31:06.869479 Communication between peers is unencrypted.
INFO: 2017/10/03 16:31:06.877052 Our name is 32:94:18:9e:76:2d(10.129.30.46)
INFO: 2017/10/03 16:31:06.877152 Launch detected - using supplied peer list: [10.129.1.246 10.129.15.172 10.129.24.57 10.129.25.110 10.129.26.94 10.129.28.101 10.129.28.179 10.129.30.46 10.129.30.61 10.129.33.140 10.129.38.142 10.129.38.143 10.129.38.17]
INFO: 2017/10/03 16:31:06.877193 Checking for pre-existing addresses on weave bridge
INFO: 2017/10/03 16:31:06.911194 [allocator 32:94:18:9e:76:2d] Initialising with persisted data
INFO: 2017/10/03 16:31:06.911314 Sniffing traffic on datapath (via ODP)
INFO: 2017/10/03 16:31:06.912625 ->[10.129.30.46:6783] attempting connection
INFO: 2017/10/03 16:31:06.912875 ->[10.129.28.179:6783] attempting connection
...
INFO: 2017/10/03 16:31:07.070193 ->[10.129.38.142:6783|e6:ac:50:73:78:a4(10.129.38.142)]: connection fully established
INFO: 2017/10/03 16:31:07.071315 ->[10.129.28.179:6783|da:45:8f:e0:d0:8b(10.129.28.179)]: connection fully established
INFO: 2017/10/03 16:31:07.071623 ->[10.129.38.17:6783|26:b1:50:0b:a4:28(10.129.38.17)]: connection fully established
INFO: 2017/10/03 16:31:07.072003 ->[10.129.1.246:6783|0e:be:a1:c5:17:2e(10.129.1.246)]: connection fully established
INFO: 2017/10/03 16:31:07.095631 Discovered remote MAC 7a:58:0d:c9:4f:dc at 2e:10:a2:f4:cd:df(10.129.33.140)
arping: interface weave is down
exit status 1: iptables: No chain/target/match by that name.

INFO: 2017/10/03 16:31:16.569980 Discovered remote MAC 2a:2b:ff:34:cf:b2 at 2a:2b:ff:34:cf:b2(10.129.30.61)
INFO: 2017/10/03 16:31:22.526726 Discovered local MAC ee:8b:e3:c2:fc:5a
INFO: 2017/10/03 16:31:24.574531 Discovered local MAC d6:7a:74:74:0e:51

@Bregor
Contributor

Bregor commented Oct 3, 2017

This time it happened right after a host restart.

@alok87
Contributor

alok87 commented Dec 4, 2017

We are also facing the same issue; it mostly happens on newly scaled-up nodes.

Non-working node:

admin@ip-172-31-103-223:~$ ip route
default via 172.31.103.1 dev eth0
172.17.0.0/16 dev docker0  proto kernel  scope link  src 172.17.0.1
172.31.103.0/24 dev eth0  proto kernel  scope link  src 172.31.103.223

Working node:

admin@ip-172-31-103-88:~$ ip route
default via 172.31.103.1 dev eth0
100.96.0.0/11 dev weave  proto kernel  scope link  src 100.102.0.0
172.17.0.0/16 dev docker0  proto kernel  scope link  src 172.17.0.1
172.31.103.0/24 dev eth0  proto kernel  scope link  src 172.31.103.88

Output of ip link show from the non-working node: https://weave-community.slack.com/archives/C2ND76PAA/p1512378485000014

  • The issue sometimes resolves itself after rebooting the node.

  • This is observed on new nodes that come up during the day, and only sometimes.

@bboreham what should we do to fix it?
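A quick way to spot this broken state from the route tables above (a sketch; 100.96.0.0/11 is the pod CIDR visible on the working node):

# a healthy node carries the pod-CIDR route via the weave bridge;
# the kernel withdraws that route when the interface is DOWN
ip route | grep -q 'dev weave' \
  || echo 'no pod-CIDR route via weave: bridge is DOWN or missing'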

@deitch
Contributor

deitch commented Dec 11, 2017

We just stumbled across this twice in less than a week. Unclear whether it happened before or after a restart.

@brb
Contributor

brb commented Dec 11, 2017

@deitch do you have the dmesg output from such a node?

@deitch
Contributor

deitch commented Dec 11, 2017

I can get it. Any particular journalctl output you want? It is pretty big, so it would be helpful if you gave me something to look for to narrow it down.

@brb
Contributor

brb commented Dec 11, 2017

To start with, it'd be good to see the message about the weave bridge being brought down in dmesg, plus a few messages above/below it. Later on, we can check journalctl events with similar timestamps.
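Something like this should pull out the relevant lines (a sketch):

dmesg -T | grep -i -C 3 weave                               # kernel link events with surrounding context
journalctl -u systemd-networkd --no-pager | grep -i weave   # networkd's view of the same window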

@mmowatt

mmowatt commented Dec 11, 2017

@brb I am working with @deitch on this. It looks like after a system reboot the weave interface was never brought back up, until I did it manually today.

-- Reboot --
Dec 08 02:05:13 ip-10-201-20-232.ec2.internal systemd[1]: Starting Network Service...
Dec 08 02:05:13 ip-10-201-20-232.ec2.internal systemd-networkd[680]: Enumeration completed
Dec 08 02:05:13 ip-10-201-20-232.ec2.internal systemd-networkd[680]: eth0: IPv6 successfully enabled
Dec 08 02:05:13 ip-10-201-20-232.ec2.internal systemd[1]: Started Network Service.
Dec 08 02:05:13 ip-10-201-20-232.ec2.internal systemd-networkd[680]: eth0: Gained carrier
Dec 08 02:05:13 ip-10-201-20-232.ec2.internal systemd-networkd[680]: eth0: DHCPv4 address 10.201.20.232/24 via 10.201.20.1
Dec 08 02:05:15 ip-10-201-20-232.ec2.internal systemd-networkd[680]: eth0: Gained IPv6LL
Dec 08 02:05:27 ip-10-201-20-232.ec2.internal systemd-networkd[680]: eth0: Configured
Dec 08 02:05:29 ip-10-201-20-232.ec2.internal systemd-networkd[680]: veth02b5391: Gained carrier
Dec 08 02:05:29 ip-10-201-20-232.ec2.internal systemd-networkd[680]: docker0: Gained carrier
Dec 08 02:05:30 ip-10-201-20-232.ec2.internal systemd-networkd[680]: docker0: Gained IPv6LL
Dec 08 02:05:30 ip-10-201-20-232.ec2.internal systemd-networkd[680]: veth02b5391: Lost carrier
Dec 08 02:05:30 ip-10-201-20-232.ec2.internal systemd-networkd[680]: veth514b150: Gained carrier
Dec 08 02:05:31 ip-10-201-20-232.ec2.internal systemd-networkd[680]: veth514b150: Lost carrier
Dec 08 02:05:32 ip-10-201-20-232.ec2.internal systemd-networkd[680]: veth3f3ffd8: Gained carrier
Dec 08 02:05:32 ip-10-201-20-232.ec2.internal systemd-networkd[680]: veth3f3ffd8: Lost carrier
Dec 08 02:05:32 ip-10-201-20-232.ec2.internal systemd-networkd[680]: veth9767f25: Gained carrier
Dec 08 02:05:32 ip-10-201-20-232.ec2.internal systemd-networkd[680]: veth9767f25: Lost carrier
Dec 08 02:05:32 ip-10-201-20-232.ec2.internal systemd-networkd[680]: veth6f9707d: Gained carrier
Dec 08 02:05:32 ip-10-201-20-232.ec2.internal systemd-networkd[680]: veth6f9707d: Lost carrier
Dec 08 02:05:33 ip-10-201-20-232.ec2.internal systemd-networkd[680]: veth86782b9: Gained carrier
Dec 08 02:05:33 ip-10-201-20-232.ec2.internal systemd-networkd[680]: veth86782b9: Lost carrier
Dec 08 02:05:33 ip-10-201-20-232.ec2.internal systemd-networkd[680]: docker0: Lost carrier
Dec 08 02:05:40 ip-10-201-20-232.ec2.internal systemd-networkd[680]: vxlan-33006: Could not find udev device: No such device
Dec 08 02:05:40 ip-10-201-20-232.ec2.internal systemd-networkd[680]: vxlan-33006: Failed
Dec 08 02:05:40 ip-10-201-20-232.ec2.internal systemd-networkd[680]: Could not add new link: No such device
Dec 08 02:05:40 ip-10-201-20-232.ec2.internal systemd-networkd[680]: vethwe-bridge: Gained carrier
Dec 08 02:05:40 ip-10-201-20-232.ec2.internal systemd-networkd[680]: vethwe-datapath: Gained carrier
Dec 08 02:05:40 ip-10-201-20-232.ec2.internal systemd-networkd[680]: datapath: Gained carrier
Dec 08 02:05:41 ip-10-201-20-232.ec2.internal systemd-networkd[680]: vethwe-datapath: Gained IPv6LL
Dec 08 02:05:41 ip-10-201-20-232.ec2.internal systemd-networkd[680]: vethwe-bridge: Gained IPv6LL
Dec 08 02:05:41 ip-10-201-20-232.ec2.internal systemd-networkd[680]: datapath: Gained IPv6LL
Dec 08 02:06:07 ip-10-201-20-232.ec2.internal systemd[1]: Stopping Network Service...
Dec 08 02:06:07 ip-10-201-20-232.ec2.internal systemd[1]: Stopped Network Service.
Dec 08 02:06:07 ip-10-201-20-232.ec2.internal systemd[1]: Starting Network Service...
Dec 08 02:06:07 ip-10-201-20-232.ec2.internal systemd-networkd[9080]: vethwe-bridge: Gained IPv6LL
Dec 08 02:06:07 ip-10-201-20-232.ec2.internal systemd-networkd[9080]: vethwe-datapath: Gained IPv6LL
Dec 08 02:06:07 ip-10-201-20-232.ec2.internal systemd-networkd[9080]: datapath: Gained IPv6LL
Dec 08 02:06:07 ip-10-201-20-232.ec2.internal systemd-networkd[9080]: docker0: Gained IPv6LL
Dec 08 02:06:07 ip-10-201-20-232.ec2.internal systemd-networkd[9080]: eth0: Gained IPv6LL
Dec 08 02:06:07 ip-10-201-20-232.ec2.internal systemd-networkd[9080]: Enumeration completed
Dec 08 02:06:07 ip-10-201-20-232.ec2.internal systemd[1]: Started Network Service.
Dec 08 02:06:07 ip-10-201-20-232.ec2.internal systemd-networkd[9080]: eth0: DHCPv4 address 10.201.20.232/24 via 10.201.20.1
Dec 08 02:06:19 ip-10-201-20-232.ec2.internal systemd-networkd[9080]: vxlan-6784: Gained carrier
Dec 08 02:06:19 ip-10-201-20-232.ec2.internal systemd-networkd[9080]: eth0: Configured
Dec 08 02:06:20 ip-10-201-20-232.ec2.internal systemd-networkd[9080]: vethwepl04e779b: Gained carrier
Dec 08 02:06:20 ip-10-201-20-232.ec2.internal systemd-networkd[9080]: vethwepl7880324: Gained carrier
Dec 08 02:06:21 ip-10-201-20-232.ec2.internal systemd-networkd[9080]: vxlan-6784: Gained IPv6LL
Dec 08 02:06:21 ip-10-201-20-232.ec2.internal systemd-networkd[9080]: vethwepl04e779b: Gained IPv6LL
Dec 08 02:06:21 ip-10-201-20-232.ec2.internal systemd-networkd[9080]: vethwepga0f77d3: Could not find udev device: No such device
Dec 08 02:06:21 ip-10-201-20-232.ec2.internal systemd-networkd[9080]: vethwepga0f77d3: Failed
Dec 08 02:06:21 ip-10-201-20-232.ec2.internal systemd-networkd[9080]: Could not add new link: No such device
Dec 08 02:06:21 ip-10-201-20-232.ec2.internal systemd-networkd[9080]: vethwepl7880324: Gained IPv6LL
Dec 08 02:06:21 ip-10-201-20-232.ec2.internal systemd-networkd[9080]: vethwepla0f77d3: Gained carrier
Dec 08 02:06:22 ip-10-201-20-232.ec2.internal systemd-networkd[9080]: vethwepl4bbfa5d: Gained carrier
Dec 08 02:06:23 ip-10-201-20-232.ec2.internal systemd-networkd[9080]: vethwepla0f77d3: Gained IPv6LL
Dec 08 02:06:23 ip-10-201-20-232.ec2.internal systemd-networkd[9080]: vethwepla08263a: Gained carrier
Dec 08 02:06:23 ip-10-201-20-232.ec2.internal systemd-networkd[9080]: vethwepl882a7ef: Gained carrier
Dec 08 02:06:24 ip-10-201-20-232.ec2.internal systemd-networkd[9080]: vethwepl4bbfa5d: Gained IPv6LL
Dec 08 02:06:25 ip-10-201-20-232.ec2.internal systemd-networkd[9080]: vethwepla08263a: Gained IPv6LL
Dec 08 02:06:25 ip-10-201-20-232.ec2.internal systemd-networkd[9080]: vethwepl882a7ef: Gained IPv6LL
Dec 08 02:06:26 ip-10-201-20-232.ec2.internal systemd-networkd[9080]: vethwepld4135f0: Gained carrier
Dec 08 02:06:26 ip-10-201-20-232.ec2.internal systemd-networkd[9080]: vethwepl02d3e16: Gained carrier
Dec 08 02:06:27 ip-10-201-20-232.ec2.internal systemd-networkd[9080]: vethwepld4135f0: Gained IPv6LL
Dec 08 02:06:27 ip-10-201-20-232.ec2.internal systemd-networkd[9080]: vethwepl02d3e16: Gained IPv6LL
Dec 08 02:06:28 ip-10-201-20-232.ec2.internal systemd-networkd[9080]: vethwepl863bde9: Gained carrier
Dec 08 02:06:29 ip-10-201-20-232.ec2.internal systemd-networkd[9080]: vethwepl863bde9: Gained IPv6LL
Dec 08 02:06:34 ip-10-201-20-232.ec2.internal systemd-networkd[9080]: vethwepl1c2a04c: Gained carrier
Dec 08 02:06:34 ip-10-201-20-232.ec2.internal systemd-networkd[9080]: vethweplecfd571: Gained carrier
Dec 08 02:06:35 ip-10-201-20-232.ec2.internal systemd-networkd[9080]: vethwepl3adc761: Gained carrier
Dec 08 02:06:35 ip-10-201-20-232.ec2.internal systemd-networkd[9080]: vethweplf0acd7c: Gained carrier
Dec 08 02:06:35 ip-10-201-20-232.ec2.internal systemd-networkd[9080]: vethwepl4a15bc7: Gained carrier
Dec 08 02:06:36 ip-10-201-20-232.ec2.internal systemd-networkd[9080]: vethwepl3adc761: Gained IPv6LL
Dec 08 02:06:36 ip-10-201-20-232.ec2.internal systemd-networkd[9080]: vethweplf0acd7c: Gained IPv6LL
Dec 08 02:06:36 ip-10-201-20-232.ec2.internal systemd-networkd[9080]: vethweplecfd571: Gained IPv6LL
Dec 08 02:06:36 ip-10-201-20-232.ec2.internal systemd-networkd[9080]: vethwepl1c2a04c: Gained IPv6LL
Dec 08 02:06:37 ip-10-201-20-232.ec2.internal systemd-networkd[9080]: vethwepl4a15bc7: Gained IPv6LL
Dec 08 02:06:37 ip-10-201-20-232.ec2.internal systemd-networkd[9080]: vethwepl9b220cb: Gained carrier
Dec 08 02:06:38 ip-10-201-20-232.ec2.internal systemd-networkd[9080]: vethweplf0b5e8d: Gained carrier
Dec 08 02:06:38 ip-10-201-20-232.ec2.internal systemd-networkd[9080]: vethwepl9b220cb: Gained IPv6LL
Dec 08 02:06:40 ip-10-201-20-232.ec2.internal systemd-networkd[9080]: vethweplf0b5e8d: Gained IPv6LL
Dec 11 15:49:27 ip-10-201-20-232.ec2.internal systemd-networkd[9080]: weave: Gained carrier
Dec 11 15:49:28 ip-10-201-20-232.ec2.internal systemd-networkd[9080]: weave: Gained IPv6LL
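The two weave: Gained carrier lines on Dec 11 match the manual bring-up mentioned above; the bridge sat DOWN for the three days in between. Since systemd-networkd is clearly handling the weave interfaces here, one mitigation worth trying (a sketch, not confirmed in this thread; the drop-in path and match patterns are illustrative, and Unmanaged= needs a reasonably recent systemd) is to tell networkd to leave them alone:

sudo tee /etc/systemd/network/10-weave.network <<'EOF'
[Match]
Name=weave datapath vxlan-* vethwe*

[Link]
Unmanaged=yes
EOF
sudo systemctl restart systemd-networkd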

Here is also the dmesg:

[    0.000000] random: get_random_bytes called from start_kernel+0x42/0x4cf with crng_init=0
[    0.000000] Linux version 4.13.16-coreos-r2 (jenkins@jenkins-worker-5) (gcc version 4.9.4 (Gentoo Hardened 4.9.4 p1.0, pie-0.6.4)) #1 SMP Wed Dec 6 04:27:34 UTC 2017
[    0.000000] Command line: BOOT_IMAGE=/coreos/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 coreos.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 verity.usrhash=071bf5f9e6e8622a733f2e8ac999ac40fa64641180dbfbe2e92f0aaf1bfcb78f
[    0.000000] x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
[    0.000000] x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
[    0.000000] x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
[    0.000000] x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
[    0.000000] x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
[    0.000000] e820: BIOS-provided physical RAM map:
[    0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009dfff] usable
[    0.000000] BIOS-e820: [mem 0x000000000009e000-0x000000000009ffff] reserved
[    0.000000] BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
[    0.000000] BIOS-e820: [mem 0x0000000000100000-0x00000000efffffff] usable
[    0.000000] BIOS-e820: [mem 0x00000000fc000000-0x00000000ffffffff] reserved
[    0.000000] BIOS-e820: [mem 0x0000000100000000-0x000000040fffffff] usable
[    0.000000] NX (Execute Disable) protection: active
[    0.000000] random: fast init done
[    0.000000] SMBIOS 2.7 present.
[    0.000000] DMI: Xen HVM domU, BIOS 4.2.amazon 08/24/2006
[    0.000000] Hypervisor detected: Xen HVM
[    0.000000] Xen version 4.2.
[    0.000000] Xen Platform PCI: I/O protocol version 1
[    0.000000] Netfront and the Xen platform PCI driver have been compiled for this kernel: unplug emulated NICs.
[    0.000000] Blkfront and the Xen platform PCI driver have been compiled for this kernel: unplug emulated disks.
               You might have to change the root device
               from /dev/hd[a-d] to /dev/xvd[a-d]
               in your root= kernel command line option
[    0.000000] HVMOP_pagetable_dying not supported
[    0.000000] tsc: Fast TSC calibration using PIT
[    0.000000] e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
[    0.000000] e820: remove [mem 0x000a0000-0x000fffff] usable
[    0.000000] e820: last_pfn = 0x410000 max_arch_pfn = 0x400000000
[    0.000000] MTRR default type: write-back
[    0.000000] MTRR fixed ranges enabled:
[    0.000000]   00000-9FFFF write-back
[    0.000000]   A0000-BFFFF write-combining
[    0.000000]   C0000-FFFFF write-back
[    0.000000] MTRR variable ranges enabled:
[    0.000000]   0 base 0000F0000000 mask 3FFFF8000000 uncachable
[    0.000000]   1 base 0000F8000000 mask 3FFFFC000000 uncachable
[    0.000000]   2 disabled
[    0.000000]   3 disabled
[    0.000000]   4 disabled
[    0.000000]   5 disabled
[    0.000000]   6 disabled
[    0.000000]   7 disabled
[    0.000000] x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WC  UC- WT
[    0.000000] e820: last_pfn = 0xf0000 max_arch_pfn = 0x400000000
[    0.000000] Base memory trampoline at [ffff98eb80098000] 98000 size 24576
[    0.000000] Using GB pages for direct mapping
[    0.000000] BRK [0x82f2f000, 0x82f2ffff] PGTABLE
[    0.000000] BRK [0x82f30000, 0x82f30fff] PGTABLE
[    0.000000] BRK [0x82f31000, 0x82f31fff] PGTABLE
[    0.000000] BRK [0x82f32000, 0x82f32fff] PGTABLE
[    0.000000] BRK [0x82f33000, 0x82f33fff] PGTABLE
[    0.000000] ACPI: Early table checksum verification disabled
[    0.000000] ACPI: RSDP 0x00000000000EA020 000024 (v02 Xen   )
[    0.000000] ACPI: XSDT 0x00000000FC00DDC0 000054 (v01 Xen    HVM      00000000 HVML 00000000)
[    0.000000] ACPI: FACP 0x00000000FC00DA80 0000F4 (v04 Xen    HVM      00000000 HVML 00000000)
[    0.000000] ACPI: DSDT 0x00000000FC001CE0 00BD19 (v02 Xen    HVM      00000000 INTL 20090123)
[    0.000000] ACPI: FACS 0x00000000FC001CA0 000040
[    0.000000] ACPI: FACS 0x00000000FC001CA0 000040
[    0.000000] ACPI: APIC 0x00000000FC00DB80 0000D8 (v02 Xen    HVM      00000000 HVML 00000000)
[    0.000000] ACPI: HPET 0x00000000FC00DCD0 000038 (v01 Xen    HVM      00000000 HVML 00000000)
[    0.000000] ACPI: WAET 0x00000000FC00DD10 000028 (v01 Xen    HVM      00000000 HVML 00000000)
[    0.000000] ACPI: SSDT 0x00000000FC00DD40 000031 (v02 Xen    HVM      00000000 INTL 20090123)
[    0.000000] ACPI: SSDT 0x00000000FC00DD80 000031 (v02 Xen    HVM      00000000 INTL 20090123)
[    0.000000] ACPI: Local APIC address 0xfee00000
[    0.000000] No NUMA configuration found
[    0.000000] Faking a node at [mem 0x0000000000000000-0x000000040fffffff]
[    0.000000] NODE_DATA(0) allocated [mem 0x40fff9000-0x40fffefff]
[    0.000000] Zone ranges:
[    0.000000]   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
[    0.000000]   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
[    0.000000]   Normal   [mem 0x0000000100000000-0x000000040fffffff]
[    0.000000] Movable zone start for each node
[    0.000000] Early memory node ranges
[    0.000000]   node   0: [mem 0x0000000000001000-0x000000000009dfff]
[    0.000000]   node   0: [mem 0x0000000000100000-0x00000000efffffff]
[    0.000000]   node   0: [mem 0x0000000100000000-0x000000040fffffff]
[    0.000000] Initmem setup node 0 [mem 0x0000000000001000-0x000000040fffffff]
[    0.000000] On node 0 totalpages: 4194205
[    0.000000]   DMA zone: 64 pages used for memmap
[    0.000000]   DMA zone: 21 pages reserved
[    0.000000]   DMA zone: 3997 pages, LIFO batch:0
[    0.000000]   DMA32 zone: 15296 pages used for memmap
[    0.000000]   DMA32 zone: 978944 pages, LIFO batch:31
[    0.000000]   Normal zone: 50176 pages used for memmap
[    0.000000]   Normal zone: 3211264 pages, LIFO batch:31
[    0.000000] ACPI: PM-Timer IO Port: 0xb008
[    0.000000] ACPI: Local APIC address 0xfee00000
[    0.000000] IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-47
[    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
[    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 low level)
[    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 low level)
[    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 low level)
[    0.000000] ACPI: IRQ0 used by override.
[    0.000000] ACPI: IRQ5 used by override.
[    0.000000] ACPI: IRQ9 used by override.
[    0.000000] ACPI: IRQ10 used by override.
[    0.000000] ACPI: IRQ11 used by override.
[    0.000000] Using ACPI (MADT) for SMP configuration information
[    0.000000] ACPI: HPET id: 0x8086a201 base: 0xfed00000
[    0.000000] smpboot: Allowing 15 CPUs, 11 hotplug CPUs
[    0.000000] e820: [mem 0xf0000000-0xfbffffff] available for PCI devices
[    0.000000] Booting paravirtualized kernel on Xen HVM
[    0.000000] clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
[    0.000000] setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:15 nr_node_ids:1
[    0.000000] percpu: Embedded 38 pages/cpu @ffff98ef7f200000 s115032 r8192 d32424 u262144
[    0.000000] pcpu-alloc: s115032 r8192 d32424 u262144 alloc=1*2097152
[    0.000000] pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 --
[    0.000000] xen: PV spinlocks enabled
[    0.000000] PV qspinlock hash table entries: 256 (order: 0, 4096 bytes)
[    0.000000] Built 1 zonelists in Node order, mobility grouping on.  Total pages: 4128648
[    0.000000] Policy zone: Normal
[    0.000000] Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/coreos/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 coreos.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 verity.usrhash=071bf5f9e6e8622a733f2e8ac999ac40fa64641180dbfbe2e92f0aaf1bfcb78f
[    0.000000] PID hash table entries: 4096 (order: 3, 32768 bytes)
[    0.000000] Memory: 16397904K/16776820K available (6320K kernel code, 1218K rwdata, 2704K rodata, 33984K init, 740K bss, 378916K reserved, 0K cma-reserved)
[    0.000000] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=15, Nodes=1
[    0.000000] ftrace: allocating 26629 entries in 105 pages
[    0.001000] Hierarchical RCU implementation.
[    0.001000] 	RCU event tracing is enabled.
[    0.001000] 	RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=15.
[    0.001000] RCU: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=15
[    0.001000] NR_IRQS: 33024, nr_irqs: 952, preallocated irqs: 16
[    0.001000] xen:events: Using 2-level ABI
[    0.001000] xen:events: Xen HVM callback vector for event delivery is enabled
[    0.001000] Console: colour VGA+ 80x25
[    0.001000] Cannot get hvm parameter CONSOLE_EVTCHN (18): -22!
[    0.001000] console [ttyS0] enabled
[    0.001000] clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
[    0.001000] hpet clockevent registered
[    0.002000] tsc: Fast TSC calibration using PIT
[    0.016002] tsc: Detected 2300.187 MHz processor
[    0.018006] Calibrating delay loop (skipped), value calculated using timer frequency.. 4600.11 BogoMIPS (lpj=2300058)
[    0.023002] pid_max: default: 32768 minimum: 301
[    0.026011] ACPI: Core revision 20170531
[    0.032322] ACPI: 3 ACPI AML tables successfully acquired and loaded
[    0.036031] Security Framework initialized
[    0.038002] SELinux:  Initializing.
[    0.039006] SELinux:  Starting in permissive mode
[    0.042751] Dentry cache hash table entries: 2097152 (order: 12, 16777216 bytes)
[    0.048331] Inode-cache hash table entries: 1048576 (order: 11, 8388608 bytes)
[    0.051053] Mount-cache hash table entries: 32768 (order: 6, 262144 bytes)
[    0.055050] Mountpoint-cache hash table entries: 32768 (order: 6, 262144 bytes)
[    0.058236] CPU: Physical Processor ID: 0
[    0.060002] CPU: Processor Core ID: 0
[    0.062015] mce: CPU supports 2 MCE banks
[    0.064023] Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
[    0.066002] Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
[    0.070110] Freeing SMP alternatives memory: 24K
[    0.073590] smpboot: Max logical packages: 8
[    0.076584] x2apic: IRQ remapping doesn't support X2APIC mode
[    0.080002] Switched APIC routing to physical flat.
[    0.084000] ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=0 pin2=0
[    0.097843] clocksource: xen: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
[    0.102007] Xen: using vcpuop timer interface
[    0.102012] installing Xen timer for CPU 0
[    0.104054] smpboot: CPU0: Intel(R) Xeon(R) CPU E5-2686 v4 @ 2.30GHz (family: 0x6, model: 0x4f, stepping: 0x1)
[    0.105031] cpu 0 spinlock event irq 53
[    0.106072] Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
[    0.107036] Hierarchical SRCU implementation.
[    0.108431] NMI watchdog: disabled (cpu0): hardware events not enabled
[    0.109003] NMI watchdog: Shutting down hard lockup detector on all cpus
[    0.110013] smp: Bringing up secondary CPUs ...
[    0.111080] installing Xen timer for CPU 1
[    0.112051] x86: Booting SMP configuration:
[    0.113003] .... node  #0, CPUs:        #1
[    0.114050] cpu 1 spinlock event irq 59
[    0.118017] installing Xen timer for CPU 2
[    0.119053]   #2
[    0.120046] cpu 2 spinlock event irq 65
[    0.123077] installing Xen timer for CPU 3
[    0.124048]   #3
[    0.125055] cpu 3 spinlock event irq 71
[    0.128006] smp: Brought up 1 node, 4 CPUs
[    0.129004] smpboot: Total of 4 processors activated (18400.46 BogoMIPS)
[    0.130857] devtmpfs: initialized
[    0.131051] x86/mm: Memory block size: 128MB
[    0.133122] clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
[    0.134022] futex hash table entries: 4096 (order: 6, 262144 bytes)
[    0.135126] pinctrl core: initialized pinctrl subsystem
[    0.136719] NET: Registered protocol family 16
[    0.137219] cpuidle: using governor menu
[    0.139004] PCCT header not found.
[    0.141073] ACPI: bus type PCI registered
[    0.143004] acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
[    0.146058] dca service started, version 1.12.1
[    0.149235] PCI: Using configuration type 1 for base access
[    0.153050] HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
[    0.154004] HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
[    0.155057] ACPI: Added _OSI(Module Device)
[    0.156004] ACPI: Added _OSI(Processor Device)
[    0.157003] ACPI: Added _OSI(3.0 _SCP Extensions)
[    0.158003] ACPI: Added _OSI(Processor Aggregator Device)
[    0.159187] xen: --> pirq=16 -> irq=9 (gsi=9)
[    0.161877] ACPI: Interpreter enabled
[    0.164014] ACPI: (supports S0 S3 S5)
[    0.166003] ACPI: Using IOAPIC for interrupt routing
[    0.168020] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
[    0.173464] ACPI: Enabled 2 GPEs in block 00 to 0F
[    0.216818] ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
[    0.220011] acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI]
[    0.224009] acpi PNP0A03:00: _OSC failed (AE_NOT_FOUND); disabling ASPM
[    0.227012] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
[    0.233049] acpiphp: Slot [0] registered
[    0.236895] acpiphp: Slot [3] registered
[    0.239281] acpiphp: Slot [4] registered
[    0.241283] acpiphp: Slot [5] registered
[    0.243479] acpiphp: Slot [6] registered
[    0.246277] acpiphp: Slot [7] registered
[    0.248282] acpiphp: Slot [8] registered
[    0.251263] acpiphp: Slot [9] registered
[    0.253282] acpiphp: Slot [10] registered
[    0.256273] acpiphp: Slot [11] registered
[    0.258420] acpiphp: Slot [12] registered
[    0.260283] acpiphp: Slot [13] registered
[    0.262272] acpiphp: Slot [14] registered
[    0.265284] acpiphp: Slot [15] registered
[    0.267280] acpiphp: Slot [16] registered
[    0.269338] acpiphp: Slot [17] registered
[    0.272279] acpiphp: Slot [18] registered
[    0.274267] acpiphp: Slot [19] registered
[    0.277297] acpiphp: Slot [20] registered
[    0.279267] acpiphp: Slot [21] registered
[    0.281312] acpiphp: Slot [22] registered
[    0.284265] acpiphp: Slot [23] registered
[    0.286364] acpiphp: Slot [24] registered
[    0.288427] acpiphp: Slot [25] registered
[    0.290451] acpiphp: Slot [26] registered
[    0.293272] acpiphp: Slot [27] registered
[    0.295282] acpiphp: Slot [28] registered
[    0.297271] acpiphp: Slot [29] registered
[    0.300291] acpiphp: Slot [30] registered
[    0.302280] acpiphp: Slot [31] registered
[    0.305739] PCI host bridge to bus 0000:00
[    0.308004] pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
[    0.311004] pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
[    0.314004] pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
[    0.318004] pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfbffffff window]
[    0.322004] pci_bus 0000:00: root bus resource [bus 00-ff]
[    0.324280] pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
[    0.326064] pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
[    0.328302] pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
[    0.329560] pci 0000:00:01.1: reg 0x20: [io  0xc100-0xc10f]
[    0.330000] pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io  0x01f0-0x01f7]
[    0.334004] pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io  0x03f6]
[    0.337003] pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io  0x0170-0x0177]
[    0.341004] pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io  0x0376]
[    0.344678] pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
[    0.344708] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
               * this clock source is slow. Consider trying other clock sources
[    0.352140] pci 0000:00:01.3: quirk: [io  0xb000-0xb03f] claimed by PIIX4 ACPI
[    0.356915] pci 0000:00:02.0: [1013:00b8] type 00 class 0x030000
[    0.357188] pci 0000:00:02.0: reg 0x10: [mem 0xf0000000-0xf1ffffff pref]
[    0.357371] pci 0000:00:02.0: reg 0x14: [mem 0xf3008000-0xf3008fff]
[    0.359160] pci 0000:00:03.0: [8086:10ed] type 00 class 0x020000
[    0.359734] pci 0000:00:03.0: reg 0x10: [mem 0xf3000000-0xf3003fff 64bit pref]
[    0.360222] pci 0000:00:03.0: reg 0x1c: [mem 0xf3004000-0xf3007fff 64bit pref]
[    0.362480] pci 0000:00:1f.0: [5853:0001] type 00 class 0xff8000
[    0.362864] pci 0000:00:1f.0: reg 0x10: [io  0xc000-0xc0ff]
[    0.363000] pci 0000:00:1f.0: reg 0x14: [mem 0xf2000000-0xf2ffffff pref]
[    0.364721] ACPI: PCI Interrupt Link [LNKA] (IRQs *5 10 11)
[    0.368191] ACPI: PCI Interrupt Link [LNKB] (IRQs 5 *10 11)
[    0.371175] ACPI: PCI Interrupt Link [LNKC] (IRQs 5 10 *11)
[    0.373159] ACPI: PCI Interrupt Link [LNKD] (IRQs *5 10 11)
[    0.390434] xen:balloon: Initialising balloon driver
[    0.394042] pci 0000:00:02.0: vgaarb: setting as boot VGA device
[    0.395000] pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
[    0.400006] pci 0000:00:02.0: vgaarb: bridge control possible
[    0.402003] vgaarb: loaded
[    0.403025] PCI: Using ACPI for IRQ routing
[    0.405004] PCI: pci_cache_line_size set to 64 bytes
[    0.405390] e820: reserve RAM buffer [mem 0x0009e000-0x0009ffff]
[    0.405515] HPET: 3 timers in total, 0 timers will be used for per-cpu timer
[    0.408015] hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
[    0.410003] hpet0: 3 comparators, 64-bit 62.500000 MHz counter
[    0.415029] clocksource: Switched to clocksource xen
[    0.427581] VFS: Disk quotas dquot_6.6.0
[    0.429311] VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
[    0.432444] pnp: PnP ACPI init
[    0.434129] system 00:00: [mem 0x00000000-0x0009ffff] could not be reserved
[    0.437352] system 00:00: Plug and Play ACPI device, IDs PNP0c02 (active)
[    0.437418] system 00:01: [io  0x08a0-0x08a3] has been reserved
[    0.440260] system 00:01: [io  0x0cc0-0x0ccf] has been reserved
[    0.443015] system 00:01: [io  0x04d0-0x04d1] has been reserved
[    0.445523] system 00:01: Plug and Play ACPI device, IDs PNP0c02 (active)
[    0.445549] xen: --> pirq=17 -> irq=8 (gsi=8)
[    0.445564] pnp 00:02: Plug and Play ACPI device, IDs PNP0b00 (active)
[    0.445582] xen: --> pirq=18 -> irq=12 (gsi=12)
[    0.445591] pnp 00:03: Plug and Play ACPI device, IDs PNP0f13 (active)
[    0.445606] xen: --> pirq=19 -> irq=1 (gsi=1)
[    0.445619] pnp 00:04: Plug and Play ACPI device, IDs PNP0303 PNP030b (active)
[    0.445633] xen: --> pirq=20 -> irq=6 (gsi=6)
[    0.445638] pnp 00:05: [dma 2]
[    0.445647] pnp 00:05: Plug and Play ACPI device, IDs PNP0700 (active)
[    0.445666] xen: --> pirq=21 -> irq=4 (gsi=4)
[    0.445678] pnp 00:06: Plug and Play ACPI device, IDs PNP0501 (active)
[    0.445710] system 00:07: [io  0x10c0-0x1141] has been reserved
[    0.448424] system 00:07: [io  0xb044-0xb047] has been reserved
[    0.451246] system 00:07: Plug and Play ACPI device, IDs PNP0c02 (active)
[    0.467210] pnp: PnP ACPI: found 8 devices
[    0.476063] clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
[    0.480062] pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
[    0.480064] pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
[    0.480065] pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
[    0.480067] pci_bus 0000:00: resource 7 [mem 0xf0000000-0xfbffffff window]
[    0.480228] NET: Registered protocol family 2
[    0.482247] TCP established hash table entries: 131072 (order: 8, 1048576 bytes)
[    0.485629] TCP bind hash table entries: 65536 (order: 8, 1048576 bytes)
[    0.488579] TCP: Hash tables configured (established 131072 bind 65536)
[    0.491578] UDP hash table entries: 8192 (order: 6, 262144 bytes)
[    0.494236] UDP-Lite hash table entries: 8192 (order: 6, 262144 bytes)
[    0.497174] NET: Registered protocol family 1
[    0.499242] pci 0000:00:00.0: Limiting direct PCI/PCI transfers
[    0.501861] pci 0000:00:01.0: PIIX3: Enabling Passive Release
[    0.504360] pci 0000:00:01.0: Activating ISA DMA hang workarounds
[    0.507111] pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
[    0.510880] PCI: CLS 0 bytes, default 64
[    0.952947] PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
[    0.955669] software IO TLB [mem 0xec000000-0xf0000000] (64MB) mapped at [ffff98ec6c000000-ffff98ec6fffffff]
[    0.959898] RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer
[    0.963075] RAPL PMU: hw unit of domain pp0-core 2^-14 Joules
[    0.965244] RAPL PMU: hw unit of domain package 2^-14 Joules
[    0.967468] RAPL PMU: hw unit of domain dram 2^-16 Joules
[    0.969995] audit: initializing netlink subsys (disabled)
[    0.972114] audit: type=2000 audit(1512698705.754:1): state=initialized audit_enabled=0 res=1
[    0.972353] Initialise system trusted keyrings
[    0.972407] workingset: timestamp_bits=39 max_order=22 bucket_order=0
[    0.973647] SELinux:  Registering netfilter hooks
[    1.260850] Key type asymmetric registered
[    1.262651] Asymmetric key parser 'x509' registered
[    1.264623] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
[    1.267698] io scheduler noop registered
[    1.269341] io scheduler deadline registered
[    1.271030] io scheduler cfq registered (default)
[    1.273070] io scheduler mq-deadline registered
[    1.274831] io scheduler kyber registered
[    1.276613] intel_idle: does not run on family 6 model 79
[    1.276863] GHES: HEST is not enabled!
[    1.278540] ioatdma: Intel(R) QuickData Technology Driver 4.00
[    1.281308] xen: --> pirq=22 -> irq=47 (gsi=47)
[    1.281378] xen:grant_table: Grant tables using version 1 layout
[    1.283861] Grant table initialized
[    1.285328] Cannot get hvm parameter CONSOLE_EVTCHN (18): -22!
[    1.287836] Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
[    1.316397] 00:06: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
[    1.320398] i8042: PNP: PS/2 Controller [PNP0303:PS2K,PNP0f13:PS2M] at 0x60,0x64 irq 1,12
[    1.325606] serio: i8042 KBD port at 0x60,0x64 irq 1
[    1.327634] serio: i8042 AUX port at 0x60,0x64 irq 12
[    1.330460] input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
[    1.334322] rtc_cmos 00:02: rtc core: registered rtc_cmos as rtc0
[    1.337016] rtc_cmos 00:02: alarms up to one day, 114 bytes nvram, hpet irqs
[    1.340121] ip_tables: (C) 2000-2006 Netfilter Core Team
[    1.342464] NET: Registered protocol family 10
[    1.344855] Segment Routing with IPv6
[    1.346479] NET: Registered protocol family 17
[    1.348410] Key type dns_resolver registered
[    1.350581] sched_clock: Marking stable (1350526110, 0)->(8602563817, -7252037707)
[    1.354228] registered taskstats version 1
[    1.355819] Loading compiled-in X.509 certificates
[    1.395301] Loaded X.509 cert 'CoreOS, Inc: Module signing key for 4.13.16-coreos-r2: 1554f8e30ca18dd9dba0e055bd5af48ac518dbed'
[    1.400207] ima: No TPM chip found, activating TPM-bypass! (rc=-19)
[    1.403074] xenbus_probe_frontend: Device with no driver: device/vbd/51712
[    1.405697] xenbus_probe_frontend: Device with no driver: device/vbd/268453888
[    1.408525] xenbus_probe_frontend: Device with no driver: device/pci/0
[    1.411068] rtc_cmos 00:02: setting system clock to 2017-12-08 02:05:06 UTC (1512698706)
[    1.423103] Freeing unused kernel memory: 33984K
[    1.424919] Write protecting the kernel read-only data: 12288k
[    1.427920] Freeing unused kernel memory: 1860K
[    1.432673] Freeing unused kernel memory: 1392K
[    1.441998] systemd[1]: systemd 234 running in system mode. (+PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK -SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT -GNUTLS -ACL +XZ +LZ4 +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN default-hierarchy=legacy)
[    1.450845] systemd[1]: Detected virtualization xen.
[    1.452907] systemd[1]: Detected architecture x86-64.
[    1.455087] systemd[1]: Running in initial RAM disk.
[    1.458907] systemd[1]: No hostname configured.
[    1.460713] systemd[1]: Set hostname to <localhost>.
[    1.462958] systemd[1]: Initializing machine ID from random generator.
[    1.508468] systemd[1]: Listening on udev Control Socket.
[    1.512620] systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
[    1.518471] systemd[1]: Reached target Paths.
[    1.557610] audit: type=1130 audit(1512698706.646:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib64/systemd/systemd" hostname=? addr=? terminal=? res=success'
[    1.576323] audit: type=1130 audit(1512698706.665:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib64/systemd/systemd" hostname=? addr=? terminal=? res=success'
[    1.588592] audit: type=1130 audit(1512698706.677:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib64/systemd/systemd" hostname=? addr=? terminal=? res=success'
[    1.641088] audit: type=1130 audit(1512698706.730:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib64/systemd/systemd" hostname=? addr=? terminal=? res=success'
[    1.658419] device-mapper: uevent: version 1.0.3
[    1.660878] device-mapper: ioctl: 4.37.0-ioctl (2017-09-20) initialised: [email protected]
[    1.684019] audit: type=1130 audit(1512698706.772:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib64/systemd/systemd" hostname=? addr=? terminal=? res=success'
[    1.708869] audit: type=1130 audit(1512698706.794:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib64/systemd/systemd" hostname=? addr=? terminal=? res=success'
[    1.758581] audit: type=1130 audit(1512698706.847:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib64/systemd/systemd" hostname=? addr=? terminal=? res=success'
[    1.819650] audit: type=1130 audit(1512698706.908:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib64/systemd/systemd" hostname=? addr=? terminal=? res=success'
[    1.862973] SCSI subsystem initialized
[    1.862994] ixgbevf: Intel(R) 10 Gigabit PCI Express Virtual Function Network Driver - version 4.1.0-k
[    1.862995] ixgbevf: Copyright (c) 2009 - 2015 Intel Corporation.
[    1.877599] ixgbevf 0000:00:03.0: 0e:4d:19:c0:f3:a0
[    1.880191] ixgbevf 0000:00:03.0: MAC: 1
[    1.882023] ixgbevf 0000:00:03.0: Intel(R) 82599 Virtual Function
[    1.886452] libata version 3.00 loaded.
[    1.888318] ata_piix 0000:00:01.1: version 2.13
[    1.889912] AVX2 version of gcm_enc/dec engaged.
[    1.892373] AES CTR mode by8 optimization enabled
[    1.894721] scsi host0: ata_piix
[    1.896444] scsi host1: ata_piix
[    1.898355] ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc100 irq 14
[    1.902211] ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc108 irq 15
[    1.918883] blkfront: xvda: barrier or flush: disabled; persistent grants: disabled; indirect descriptors: enabled;
[    1.937732]  xvda: xvda1 xvda2 xvda3 xvda4 xvda6 xvda7 xvda9
[    1.939310] blkfront: xvdbu: barrier or flush: disabled; persistent grants: disabled; indirect descriptors: enabled;
[    2.015083] tsc: Refined TSC clocksource calibration: 2299.941 MHz
[    2.018832] clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2126fd51e2f, max_idle_ns: 440795225370 ns
[    2.150726] audit: type=1130 audit(1512698707.239:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib64/systemd/systemd" hostname=? addr=? terminal=? res=success'
[    2.253658] EXT4-fs (xvda9): mounted filesystem with ordered data mode. Opts: (null)
[    2.291887] EXT4-fs (dm-0): mounted filesystem without journal. Opts: (null)
[    2.613416] systemd-journald[155]: Received SIGTERM from PID 1 (systemd).
[    2.617094] random: crng init done
[    2.629302] systemd: 20 output lines suppressed due to ratelimiting
[    2.721499] SELinux: 4096 avtab hash slots, 13428 rules.
[    2.722898] SELinux: 4096 avtab hash slots, 13428 rules.
[    2.723365] SELinux:  6 users, 6 roles, 1327 types, 55 bools, 1 sens, 1024 cats
[    2.723367] SELinux:  92 classes, 13428 rules
[    2.723968] SELinux:  Permission validate_trans in class security not defined in policy.
[    2.729599] SELinux:  Permission getrlimit in class process not defined in policy.
[    2.741564] SELinux:  Permission module_load in class system not defined in policy.
[    2.748319] SELinux:  Permission map in class file not defined in policy.
[    2.751726] SELinux:  Permission map in class dir not defined in policy.
[    2.755322] SELinux:  Permission map in class lnk_file not defined in policy.
[    2.759099] SELinux:  Permission map in class chr_file not defined in policy.
[    2.762730] SELinux:  Permission map in class blk_file not defined in policy.
[    2.766107] SELinux:  Permission map in class sock_file not defined in policy.
[    2.769736] SELinux:  Permission map in class fifo_file not defined in policy.
[    2.773410] SELinux:  Permission map in class socket not defined in policy.
[    2.776885] SELinux:  Permission map in class tcp_socket not defined in policy.
[    2.780427] SELinux:  Permission map in class udp_socket not defined in policy.
[    2.783944] SELinux:  Permission map in class rawip_socket not defined in policy.
[    2.787515] SELinux:  Permission map in class netlink_socket not defined in policy.
[    2.791435] SELinux:  Permission map in class packet_socket not defined in policy.
[    2.795155] SELinux:  Permission map in class key_socket not defined in policy.
[    2.798680] SELinux:  Permission map in class unix_stream_socket not defined in policy.
[    2.802535] SELinux:  Permission map in class unix_dgram_socket not defined in policy.
[    2.806371] SELinux:  Permission map in class netlink_route_socket not defined in policy.
[    2.810591] SELinux:  Permission map in class netlink_tcpdiag_socket not defined in policy.
[    2.814627] SELinux:  Permission map in class netlink_nflog_socket not defined in policy.
[    2.818560] SELinux:  Permission map in class netlink_xfrm_socket not defined in policy.
[    2.822466] SELinux:  Permission map in class netlink_selinux_socket not defined in policy.
[    2.826736] SELinux:  Permission map in class netlink_iscsi_socket not defined in policy.
[    2.830825] SELinux:  Permission map in class netlink_audit_socket not defined in policy.
[    2.834781] SELinux:  Permission map in class netlink_fib_lookup_socket not defined in policy.
[    2.839278] SELinux:  Permission map in class netlink_connector_socket not defined in policy.
[    2.843371] SELinux:  Permission map in class netlink_netfilter_socket not defined in policy.
[    2.847432] SELinux:  Permission map in class netlink_dnrt_socket not defined in policy.
[    2.851410] SELinux:  Permission map in class netlink_kobject_uevent_socket not defined in policy.
[    2.855845] SELinux:  Permission map in class netlink_generic_socket not defined in policy.
[    2.859934] SELinux:  Permission map in class netlink_scsitransport_socket not defined in policy.
[    2.864198] SELinux:  Permission map in class netlink_rdma_socket not defined in policy.
[    2.868044] SELinux:  Permission map in class netlink_crypto_socket not defined in policy.
[    2.872366] SELinux:  Permission map in class appletalk_socket not defined in policy.
[    2.876073] SELinux:  Permission map in class dccp_socket not defined in policy.
[    2.879708] SELinux:  Permission map in class tun_socket not defined in policy.
[    2.883183] SELinux:  Class cap_userns not defined in policy.
[    2.886043] SELinux:  Class cap2_userns not defined in policy.
[    2.889022] SELinux:  Class sctp_socket not defined in policy.
[    2.891718] SELinux:  Class icmp_socket not defined in policy.
[    2.894523] SELinux:  Class ax25_socket not defined in policy.
[    2.897322] SELinux:  Class ipx_socket not defined in policy.
[    2.900306] SELinux:  Class netrom_socket not defined in policy.
[    2.903239] SELinux:  Class atmpvc_socket not defined in policy.
[    2.905972] SELinux:  Class x25_socket not defined in policy.
[    2.908935] SELinux:  Class rose_socket not defined in policy.
[    2.911729] SELinux:  Class decnet_socket not defined in policy.
[    2.914521] SELinux:  Class atmsvc_socket not defined in policy.
[    2.917339] SELinux:  Class rds_socket not defined in policy.
[    2.920069] SELinux:  Class irda_socket not defined in policy.
[    2.922728] SELinux:  Class pppox_socket not defined in policy.
[    2.925754] SELinux:  Class llc_socket not defined in policy.
[    2.928419] SELinux:  Class can_socket not defined in policy.
[    2.931254] SELinux:  Class tipc_socket not defined in policy.
[    2.933936] SELinux:  Class bluetooth_socket not defined in policy.
[    2.936876] SELinux:  Class iucv_socket not defined in policy.
[    2.939805] SELinux:  Class rxrpc_socket not defined in policy.
[    2.942734] SELinux:  Class isdn_socket not defined in policy.
[    2.945578] SELinux:  Class phonet_socket not defined in policy.
[    2.948506] SELinux:  Class ieee802154_socket not defined in policy.
[    2.951625] SELinux:  Class caif_socket not defined in policy.
[    2.954332] SELinux:  Class alg_socket not defined in policy.
[    2.957001] SELinux:  Class nfc_socket not defined in policy.
[    2.959904] SELinux:  Class vsock_socket not defined in policy.
[    2.962907] SELinux:  Class kcm_socket not defined in policy.
[    2.965541] SELinux:  Class qipcrtr_socket not defined in policy.
[    2.968523] SELinux:  Class smc_socket not defined in policy.
[    2.971482] SELinux:  Class infiniband_pkey not defined in policy.
[    2.974458] SELinux:  Class infiniband_endport not defined in policy.
[    2.977485] SELinux: the above unknown classes and permissions will be allowed
[    2.980799] SELinux:  policy capability network_peer_controls=1
[    2.983666] SELinux:  policy capability open_perms=1
[    2.986048] SELinux:  policy capability extended_socket_class=0
[    2.988749] SELinux:  policy capability always_check_network=0
[    2.991544] SELinux:  policy capability cgroup_seclabel=0
[    2.994030] SELinux:  Completing initialization.
[    2.994030] SELinux:  Setting up existing superblocks.
[    3.021584] systemd[1]: Successfully loaded SELinux policy in 324.177ms.
[    3.036546] systemd[1]: Relabelled /dev and /run in 4.950ms.
[    4.450122] systemd-journald[544]: Received request to flush runtime journal from PID 1
[    4.714954] input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
[    4.719599] ACPI: Power Button [PWRF]
[    4.721982] input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3
[    4.726639] ACPI: Sleep Button [SLPF]
[    4.744018] piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
[    4.802522] EDAC MC: Ver: 3.0.0
[    4.809282] EDAC sbridge: Seeking for: PCI ID 8086:6fa0
[    4.809284] EDAC sbridge:  Ver: 1.1.2
[    4.811341] input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
[    4.842899] mousedev: PS/2 mouse device common for all mice
[    7.892736] kauditd_printk_skb: 58 callbacks suppressed
[    7.892737] audit: type=1130 audit(1512698712.981:69): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib64/systemd/systemd" hostname=? addr=? terminal=? res=success'
[    7.915245] EXT4-fs (xvda6): mounted filesystem with ordered data mode. Opts: commit=600
[    7.943371] audit: type=1130 audit(1512698713.032:70): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib64/systemd/systemd" hostname=? addr=? terminal=? res=success'
[    7.955541] audit: type=1131 audit(1512698713.032:71): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib64/systemd/systemd" hostname=? addr=? terminal=? res=success'
[    7.997401] audit: type=1130 audit(1512698713.086:72): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib64/systemd/systemd" hostname=? addr=? terminal=? res=success'
[    8.011050] audit: type=1131 audit(1512698713.086:73): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib64/systemd/systemd" hostname=? addr=? terminal=? res=success'
[    8.047956] audit: type=1130 audit(1512698713.136:74): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib64/systemd/systemd" hostname=? addr=? terminal=? res=success'
[    8.146114] audit: type=1130 audit(1512698713.235:75): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib64/systemd/systemd" hostname=? addr=? terminal=? res=success'
[    8.159024] audit: type=1127 audit(1512698713.239:76): pid=652 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib64/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
[    8.193598] audit: type=1130 audit(1512698713.282:77): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib64/systemd/systemd" hostname=? addr=? terminal=? res=success'
[    8.209153] audit: type=1130 audit(1512698713.298:78): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib64/systemd/systemd" hostname=? addr=? terminal=? res=success'
[    8.658023] ixgbevf 0000:00:03.0: NIC Link is Up 10 Gbps
[    8.659569] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[    8.668249] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[    9.223744] nf_conntrack version 0.5.0 (65536 buckets, 262144 max)
[   23.504945] bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
[   23.512750] Bridge firewalling registered
[   23.553268] Initializing XFRM netlink socket
[   23.560209] Netfilter messages via NETLINK v0.30.
[   23.565326] ctnetlink v0.93: registering with nfnetlink.
[   23.616111] IPv6: ADDRCONF(NETDEV_UP): docker0: link is not ready
[   23.948976] docker0: port 1(veth02b5391) entered blocking state
[   23.952695] docker0: port 1(veth02b5391) entered disabled state
[   23.956381] device veth02b5391 entered promiscuous mode
[   23.959822] IPv6: ADDRCONF(NETDEV_UP): veth02b5391: link is not ready
[   23.963295] docker0: port 1(veth02b5391) entered blocking state
[   23.966187] docker0: port 1(veth02b5391) entered forwarding state
[   23.969333] docker0: port 1(veth02b5391) entered disabled state
[   24.055373] eth0: renamed from veth5a1d04b
[   24.064260] IPv6: ADDRCONF(NETDEV_CHANGE): veth02b5391: link becomes ready
[   24.067875] docker0: port 1(veth02b5391) entered blocking state
[   24.070817] docker0: port 1(veth02b5391) entered forwarding state
[   24.073936] IPv6: ADDRCONF(NETDEV_CHANGE): docker0: link becomes ready
[   25.572568] ip6_tables: (C) 2000-2006 Netfilter Core Team
[   25.602113] docker0: port 1(veth02b5391) entered disabled state
[   25.604459] veth5a1d04b: renamed from eth0
[   25.620751] docker0: port 1(veth02b5391) entered disabled state
[   25.623412] device veth02b5391 left promiscuous mode
[   25.625386] docker0: port 1(veth02b5391) entered disabled state
[   25.761432] docker0: port 1(veth514b150) entered blocking state
[   25.764105] docker0: port 1(veth514b150) entered disabled state
[   25.766798] device veth514b150 entered promiscuous mode
[   25.769256] IPv6: ADDRCONF(NETDEV_UP): veth514b150: link is not ready
[   25.772191] docker0: port 1(veth514b150) entered blocking state
[   25.774928] docker0: port 1(veth514b150) entered forwarding state
[   25.857766] eth0: renamed from veth6ab86f1
[   25.866240] IPv6: ADDRCONF(NETDEV_CHANGE): veth514b150: link becomes ready
[   26.689446] docker0: port 1(veth514b150) entered disabled state
[   26.692898] veth6ab86f1: renamed from eth0
[   26.715888] docker0: port 1(veth514b150) entered disabled state
[   26.719879] device veth514b150 left promiscuous mode
[   26.723728] docker0: port 1(veth514b150) entered disabled state
[   26.842116] docker0: port 1(veth3f3ffd8) entered blocking state
[   26.845543] docker0: port 1(veth3f3ffd8) entered disabled state
[   26.848806] device veth3f3ffd8 entered promiscuous mode
[   26.851951] IPv6: ADDRCONF(NETDEV_UP): veth3f3ffd8: link is not ready
[   26.855452] docker0: port 1(veth3f3ffd8) entered blocking state
[   26.858599] docker0: port 1(veth3f3ffd8) entered forwarding state
[   26.939402] eth0: renamed from vethefb7ba3
[   26.948251] IPv6: ADDRCONF(NETDEV_CHANGE): veth3f3ffd8: link becomes ready
[   27.087842] docker0: port 1(veth3f3ffd8) entered disabled state
[   27.090662] vethefb7ba3: renamed from eth0
[   27.109071] docker0: port 1(veth3f3ffd8) entered disabled state
[   27.112327] device veth3f3ffd8 left promiscuous mode
[   27.114477] docker0: port 1(veth3f3ffd8) entered disabled state
[   27.223859] docker0: port 1(veth9767f25) entered blocking state
[   27.226585] docker0: port 1(veth9767f25) entered disabled state
[   27.229431] device veth9767f25 entered promiscuous mode
[   27.233274] IPv6: ADDRCONF(NETDEV_UP): veth9767f25: link is not ready
[   27.235741] docker0: port 1(veth9767f25) entered blocking state
[   27.238363] docker0: port 1(veth9767f25) entered forwarding state
[   27.318439] eth0: renamed from veth34279d5
[   27.326334] IPv6: ADDRCONF(NETDEV_CHANGE): veth9767f25: link becomes ready
[   27.416757] docker0: port 1(veth9767f25) entered disabled state
[   27.422018] veth34279d5: renamed from eth0
[   27.446330] docker0: port 1(veth9767f25) entered disabled state
[   27.449828] device veth9767f25 left promiscuous mode
[   27.452634] docker0: port 1(veth9767f25) entered disabled state
[   27.604316] docker0: port 1(veth6f9707d) entered blocking state
[   27.606919] docker0: port 1(veth6f9707d) entered disabled state
[   27.609521] device veth6f9707d entered promiscuous mode
[   27.612939] IPv6: ADDRCONF(NETDEV_UP): veth6f9707d: link is not ready
[   27.615850] docker0: port 1(veth6f9707d) entered blocking state
[   27.618296] docker0: port 1(veth6f9707d) entered forwarding state
[   27.663164] docker0: port 1(veth6f9707d) entered disabled state
[   27.705149] eth0: renamed from veth234515d
[   27.712266] IPv6: ADDRCONF(NETDEV_CHANGE): veth6f9707d: link becomes ready
[   27.715075] docker0: port 1(veth6f9707d) entered blocking state
[   27.717707] docker0: port 1(veth6f9707d) entered forwarding state
[   27.876190] docker0: port 1(veth6f9707d) entered disabled state
[   27.879810] veth234515d: renamed from eth0
[   27.907538] docker0: port 1(veth6f9707d) entered disabled state
[   27.910099] device veth6f9707d left promiscuous mode
[   27.912380] docker0: port 1(veth6f9707d) entered disabled state
[   28.076089] docker0: port 1(veth86782b9) entered blocking state
[   28.078841] docker0: port 1(veth86782b9) entered disabled state
[   28.081449] device veth86782b9 entered promiscuous mode
[   28.083557] IPv6: ADDRCONF(NETDEV_UP): veth86782b9: link is not ready
[   28.086097] docker0: port 1(veth86782b9) entered blocking state
[   28.088363] docker0: port 1(veth86782b9) entered forwarding state
[   28.170493] eth0: renamed from vethcc8bddf
[   28.181247] IPv6: ADDRCONF(NETDEV_CHANGE): veth86782b9: link becomes ready
[   28.268898] docker0: port 1(veth86782b9) entered disabled state
[   28.271350] vethcc8bddf: renamed from eth0
[   28.294445] docker0: port 1(veth86782b9) entered disabled state
[   28.297006] device veth86782b9 left promiscuous mode
[   28.299032] docker0: port 1(veth86782b9) entered disabled state
[   34.268128] ip_set: protocol 6
[   35.132376] openvswitch: Open vSwitch switching datapath
[   35.172459] device datapath entered promiscuous mode
[   35.212361] weave: port 1(vethwedu) entered blocking state
[   35.215219] weave: port 1(vethwedu) entered disabled state
[   35.219787] device vethwedu entered promiscuous mode
[   35.222937] device vethwedu left promiscuous mode
[   35.225771] weave: port 1(vethwedu) entered disabled state
[   35.239886] weave: port 1(vethwe-bridge) entered blocking state
[   35.243560] weave: port 1(vethwe-bridge) entered disabled state
[   35.247638] device vethwe-bridge entered promiscuous mode
[   35.250772] IPv6: ADDRCONF(NETDEV_UP): vethwe-datapath: link is not ready
[   35.254818] device vethwe-datapath entered promiscuous mode
[   35.259635] IPv6: ADDRCONF(NETDEV_CHANGE): vethwe-datapath: link becomes ready
[   42.408025] sysdigcloud_probe: loading out-of-tree module taints kernel.
[   42.411473] sysdigcloud_probe: module verification failed: signature and/or required key missing - tainting kernel
[   42.416251] sysdigcloud_probe: driver loading, sysdigcloud-probe 0.73.2
[   42.469727] sysdigcloud_probe: adding new consumer ffff98ef77085ac0
[   42.472979] sysdigcloud_probe: initializing ring buffer for CPU 0
[   42.481547] sysdigcloud_probe: CPU buffer initialized, size=8388608
[   42.484425] sysdigcloud_probe: initializing ring buffer for CPU 1
[   42.493725] sysdigcloud_probe: CPU buffer initialized, size=8388608
[   42.496627] sysdigcloud_probe: initializing ring buffer for CPU 2
[   42.504439] sysdigcloud_probe: CPU buffer initialized, size=8388608
[   42.507381] sysdigcloud_probe: initializing ring buffer for CPU 3
[   42.514948] sysdigcloud_probe: CPU buffer initialized, size=8388608
[   42.517838] sysdigcloud_probe: starting capture
[   48.695038] EXT4-fs (xvdbu): mounted filesystem with ordered data mode. Opts: (null)
[   48.704321] EXT4-fs (xvdbu): re-mounted. Opts: (null)
[   63.802449] sysdigcloud_probe: deallocating consumer ffff98ef77085ac0
[   63.813702] sysdigcloud_probe: no more consumers, stopping capture
[   74.721726] device vxlan-6784 entered promiscuous mode
[   75.029930] weave: port 2(vethwepl04e779b) entered blocking state
[   75.029931] weave: port 2(vethwepl04e779b) entered disabled state
[   75.029982] device vethwepl04e779b entered promiscuous mode
[   75.101901] eth0: renamed from vethwepg04e779b
[   75.114856] IPv6: ADDRCONF(NETDEV_UP): vethwepl04e779b: link is not ready
[   75.127031] IPv6: ADDRCONF(NETDEV_CHANGE): vethwepl04e779b: link becomes ready
[   75.278545] sysdigcloud_probe: driver unloading
[   75.280841] weave: port 3(vethwepl7880324) entered blocking state
[   75.280842] weave: port 3(vethwepl7880324) entered disabled state
[   75.280889] device vethwepl7880324 entered promiscuous mode
[   75.317015] eth0: renamed from vethwepg7880324
[   75.350474] IPv6: ADDRCONF(NETDEV_UP): vethwepl7880324: link is not ready
[   75.372049] IPv6: ADDRCONF(NETDEV_CHANGE): vethwepl7880324: link becomes ready
[   76.434373] weave: port 4(vethwepla0f77d3) entered blocking state
[   76.434374] weave: port 4(vethwepla0f77d3) entered disabled state
[   76.434420] device vethwepla0f77d3 entered promiscuous mode
[   76.491025] eth0: renamed from vethwepga0f77d3
[   76.604978] IPv6: ADDRCONF(NETDEV_UP): vethwepla0f77d3: link is not ready
[   76.619840] IPv6: ADDRCONF(NETDEV_CHANGE): vethwepla0f77d3: link becomes ready
[   77.105997] weave: port 5(vethwepl4bbfa5d) entered blocking state
[   77.109624] weave: port 5(vethwepl4bbfa5d) entered disabled state
[   77.113445] device vethwepl4bbfa5d entered promiscuous mode
[   77.135702] eth0: renamed from vethwepg4bbfa5d
[   77.149606] IPv6: ADDRCONF(NETDEV_UP): vethwepl4bbfa5d: link is not ready
[   77.168366] IPv6: ADDRCONF(NETDEV_CHANGE): vethwepl4bbfa5d: link becomes ready
[   78.595411] weave: port 6(vethwepla08263a) entered blocking state
[   78.599372] weave: port 6(vethwepla08263a) entered disabled state
[   78.608152] device vethwepla08263a entered promiscuous mode
[   78.640966] eth0: renamed from vethwepga08263a
[   78.658090] IPv6: ADDRCONF(NETDEV_UP): vethwepla08263a: link is not ready
[   78.678579] IPv6: ADDRCONF(NETDEV_CHANGE): vethwepla08263a: link becomes ready
[   78.821719] weave: port 7(vethwepl882a7ef) entered blocking state
[   78.825156] weave: port 7(vethwepl882a7ef) entered disabled state
[   78.829189] device vethwepl882a7ef entered promiscuous mode
[   78.864096] eth0: renamed from vethwepg882a7ef
[   78.874350] IPv6: ADDRCONF(NETDEV_UP): vethwepl882a7ef: link is not ready
[   78.893336] IPv6: ADDRCONF(NETDEV_CHANGE): vethwepl882a7ef: link becomes ready
[   80.814895] weave: port 8(vethwepld4135f0) entered blocking state
[   80.818552] weave: port 8(vethwepld4135f0) entered disabled state
[   80.822195] device vethwepld4135f0 entered promiscuous mode
[   80.868113] eth0: renamed from vethwepgd4135f0
[   80.885505] IPv6: ADDRCONF(NETDEV_UP): vethwepld4135f0: link is not ready
[   80.930120] weave: port 9(vethwepl02d3e16) entered blocking state
[   80.934050] weave: port 9(vethwepl02d3e16) entered disabled state
[   80.937264] device vethwepl02d3e16 entered promiscuous mode
[   80.969967] IPv6: ADDRCONF(NETDEV_CHANGE): vethwepld4135f0: link becomes ready
[   80.981228] eth0: renamed from vethwepg02d3e16
[   80.999345] IPv6: ADDRCONF(NETDEV_UP): vethwepl02d3e16: link is not ready
[   81.019170] IPv6: ADDRCONF(NETDEV_CHANGE): vethwepl02d3e16: link becomes ready
[   81.189300] sysdigcloud_probe: driver loading, sysdigcloud-probe 0.73.2
[   81.285702] sysdigcloud_probe: adding new consumer ffff98ef00808000
[   81.317225] sysdigcloud_probe: initializing ring buffer for CPU 0
[   81.371079] sysdigcloud_probe: CPU buffer initialized, size=8388608
[   81.374777] sysdigcloud_probe: initializing ring buffer for CPU 1
[   81.385382] sysdigcloud_probe: CPU buffer initialized, size=8388608
[   81.389645] sysdigcloud_probe: initializing ring buffer for CPU 2
[   81.398824] sysdigcloud_probe: CPU buffer initialized, size=8388608
[   81.402478] sysdigcloud_probe: initializing ring buffer for CPU 3
[   81.412935] sysdigcloud_probe: CPU buffer initialized, size=8388608
[   81.416564] sysdigcloud_probe: starting capture
[   83.176025] weave: port 10(vethwepl863bde9) entered blocking state
[   83.179288] weave: port 10(vethwepl863bde9) entered disabled state
[   83.182432] device vethwepl863bde9 entered promiscuous mode
[   83.227304] eth0: renamed from vethwepg863bde9
[   83.241265] IPv6: ADDRCONF(NETDEV_UP): vethwepl863bde9: link is not ready
[   83.267269] IPv6: ADDRCONF(NETDEV_CHANGE): vethwepl863bde9: link becomes ready
[   89.668930] weave: port 11(vethweplecfd571) entered blocking state
[   89.671515] weave: port 11(vethweplecfd571) entered disabled state
[   89.674039] device vethweplecfd571 entered promiscuous mode
[   89.699568] weave: port 12(vethwepl3adc761) entered blocking state
[   89.702601] weave: port 12(vethwepl3adc761) entered disabled state
[   89.705453] device vethwepl3adc761 entered promiscuous mode
[   89.710116] weave: port 13(vethwepl1c2a04c) entered blocking state
[   89.713004] weave: port 13(vethwepl1c2a04c) entered disabled state
[   89.715845] device vethwepl1c2a04c entered promiscuous mode
[   89.760415] eth0: renamed from vethwepg3adc761
[   89.780561] eth0: renamed from vethwepgecfd571
[   89.789635] eth0: renamed from vethwepg1c2a04c
[   89.809178] IPv6: ADDRCONF(NETDEV_UP): vethwepl3adc761: link is not ready
[   89.817168] IPv6: ADDRCONF(NETDEV_UP): vethwepl1c2a04c: link is not ready
[   89.825083] IPv6: ADDRCONF(NETDEV_UP): vethweplecfd571: link is not ready
[   89.837250] IPv6: ADDRCONF(NETDEV_CHANGE): vethwepl1c2a04c: link becomes ready
[   89.891691] IPv6: ADDRCONF(NETDEV_CHANGE): vethweplecfd571: link becomes ready
[   89.926229] IPv6: ADDRCONF(NETDEV_CHANGE): vethwepl3adc761: link becomes ready
[   90.030079] weave: port 14(vethweplf0acd7c) entered blocking state
[   90.033715] weave: port 14(vethweplf0acd7c) entered disabled state
[   90.055739] device vethweplf0acd7c entered promiscuous mode
[   90.087881] eth0: renamed from vethwepgf0acd7c
[   90.107633] IPv6: ADDRCONF(NETDEV_UP): vethweplf0acd7c: link is not ready
[   90.143903] IPv6: ADDRCONF(NETDEV_CHANGE): vethweplf0acd7c: link becomes ready
[   90.680292] weave: port 15(vethwepl4a15bc7) entered blocking state
[   90.683543] weave: port 15(vethwepl4a15bc7) entered disabled state
[   90.686343] device vethwepl4a15bc7 entered promiscuous mode
[   90.709193] eth0: renamed from vethwepg4a15bc7
[   90.718012] IPv6: ADDRCONF(NETDEV_UP): vethwepl4a15bc7: link is not ready
[   90.752889] IPv6: ADDRCONF(NETDEV_CHANGE): vethwepl4a15bc7: link becomes ready
[   92.389837] weave: port 16(vethwepl9b220cb) entered blocking state
[   92.394997] weave: port 16(vethwepl9b220cb) entered disabled state
[   92.399565] device vethwepl9b220cb entered promiscuous mode
[   92.428454] eth0: renamed from vethwepg9b220cb
[   92.443727] IPv6: ADDRCONF(NETDEV_UP): vethwepl9b220cb: link is not ready
[   92.576018] IPv6: ADDRCONF(NETDEV_CHANGE): vethwepl9b220cb: link becomes ready
[   93.112536] weave: port 17(vethweplf0b5e8d) entered blocking state
[   93.115904] weave: port 17(vethweplf0b5e8d) entered disabled state
[   93.119477] device vethweplf0b5e8d entered promiscuous mode
[   93.182720] eth0: renamed from vethwepgf0b5e8d
[   93.196650] IPv6: ADDRCONF(NETDEV_UP): vethweplf0b5e8d: link is not ready
[   93.219450] IPv6: ADDRCONF(NETDEV_CHANGE): vethweplf0b5e8d: link becomes ready
[308661.871387] weave: port 17(vethweplf0b5e8d) entered blocking state
[308661.873973] weave: port 17(vethweplf0b5e8d) entered forwarding state
[308661.876393] weave: port 16(vethwepl9b220cb) entered blocking state
[308661.878824] weave: port 16(vethwepl9b220cb) entered forwarding state
[308661.881384] weave: port 15(vethwepl4a15bc7) entered blocking state
[308661.883848] weave: port 15(vethwepl4a15bc7) entered forwarding state
[308661.886413] weave: port 14(vethweplf0acd7c) entered blocking state
[308661.888802] weave: port 14(vethweplf0acd7c) entered forwarding state
[308661.891640] weave: port 13(vethwepl1c2a04c) entered blocking state
[308661.894029] weave: port 13(vethwepl1c2a04c) entered forwarding state
[308661.896616] weave: port 12(vethwepl3adc761) entered blocking state
[308661.898953] weave: port 12(vethwepl3adc761) entered forwarding state
[308661.901433] weave: port 11(vethweplecfd571) entered blocking state
[308661.903754] weave: port 11(vethweplecfd571) entered forwarding state
[308661.906259] weave: port 10(vethwepl863bde9) entered blocking state
[308661.908732] weave: port 10(vethwepl863bde9) entered forwarding state
[308661.911291] weave: port 9(vethwepl02d3e16) entered blocking state
[308661.913904] weave: port 9(vethwepl02d3e16) entered forwarding state
[308661.916368] weave: port 8(vethwepld4135f0) entered blocking state
[308661.918827] weave: port 8(vethwepld4135f0) entered forwarding state
[308661.921332] weave: port 7(vethwepl882a7ef) entered blocking state
[308661.923794] weave: port 7(vethwepl882a7ef) entered forwarding state
[308661.926231] weave: port 6(vethwepla08263a) entered blocking state
[308661.928718] weave: port 6(vethwepla08263a) entered forwarding state
[308661.931540] weave: port 5(vethwepl4bbfa5d) entered blocking state
[308661.934161] weave: port 5(vethwepl4bbfa5d) entered forwarding state
[308661.936650] weave: port 4(vethwepla0f77d3) entered blocking state
[308661.939067] weave: port 4(vethwepla0f77d3) entered forwarding state
[308661.941570] weave: port 3(vethwepl7880324) entered blocking state
[308661.944012] weave: port 3(vethwepl7880324) entered forwarding state
[308661.946860] weave: port 2(vethwepl04e779b) entered blocking state
[308661.949493] weave: port 2(vethwepl04e779b) entered forwarding state
[308661.952448] weave: port 1(vethwe-bridge) entered blocking state
[308661.955258] weave: port 1(vethwe-bridge) entered forwarding state

@deitch
Contributor

deitch commented Dec 11, 2017

Adding to it: when we do bring the weave interface back up (ip link set weave up), we cannot access some services in the cluster, notably the kubernetes service (which in our case is 10.52.0.1).

@deitch
Contributor

deitch commented Dec 11, 2017

FWIW:

  1. I can access the kube API servers - both via the kubernetes service IP/port and directly - from the underlying host
  2. I cannot access the kube API servers - neither via the kubernetes service IP/port nor directly - from a container running on that host (it works from other hosts where weave never went down).

On underlying host:

$ # using service IP
$ curl -k https://10.52.0.1:443
User "system:anonymous" cannot get path "/".: "No policy matched."
$ # direct IP of an API server
$ curl -k https://10.50.21.5:6443
User "system:anonymous" cannot get path "/".: "No policy matched."

In a container:

$ # using service IP
$ nsenter -t 29730 -n curl -k https://10.52.0.1:443
<hangs>
$ # directly
$ nsenter -t 29730 -n curl -k https://10.50.21.5:6443
<hangs>

What would be really helpful:

  1. How does the weave interface get configured to survive reboots? Or is it not?
  2. How do I ensure it comes back up?
  3. Once it comes back up, how do I ensure proxying is working correctly?

@deitch
Contributor

deitch commented Dec 11, 2017

Bingo.

The following iptables entries were missing on the rebooted node. When they were added, everything came back to life. They were obtained by running iptables-save | grep WEAVE on a good host and a bad one, and then diffing the output.

:WEAVE - [0:0]
-A POSTROUTING -j WEAVE
-A WEAVE -s 10.51.0.0/16 -d 224.0.0.0/4 -j RETURN
-A WEAVE ! -s 10.51.0.0/16 -d 10.51.0.0/16 -j MASQUERADE
-A WEAVE -s 10.51.0.0/16 ! -d 10.51.0.0/16 -j MASQUERADE
-A WEAVE ! -s 10.51.0.0/16 -d 10.51.0.0/16 -j MASQUERADE
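
For anyone needing to re-assert these from code, here is a minimal sketch in Go using the github.com/coreos/go-iptables library (my choice for the example; weave itself shells out to the iptables binary). It assumes the WEAVE chain lives in the nat table, since MASQUERADE is a nat target, and that 10.51.0.0/16 is this cluster's pod range:

package main

import (
    "log"

    "github.com/coreos/go-iptables/iptables"
)

func main() {
    ipt, err := iptables.New()
    if err != nil {
        log.Fatal(err)
    }
    // ClearChain creates nat/WEAVE if it is missing, or flushes it if it
    // already exists, so re-running this is safe.
    if err := ipt.ClearChain("nat", "WEAVE"); err != nil {
        log.Fatal(err)
    }
    rules := [][]string{
        {"POSTROUTING", "-j", "WEAVE"},
        {"WEAVE", "-s", "10.51.0.0/16", "-d", "224.0.0.0/4", "-j", "RETURN"},
        {"WEAVE", "!", "-s", "10.51.0.0/16", "-d", "10.51.0.0/16", "-j", "MASQUERADE"},
        {"WEAVE", "-s", "10.51.0.0/16", "!", "-d", "10.51.0.0/16", "-j", "MASQUERADE"},
    }
    for _, r := range rules {
        // AppendUnique skips rules that are already present, which keeps
        // the POSTROUTING jump from being duplicated.
        if err := ipt.AppendUnique("nat", r[0], r[1:]...); err != nil {
            log.Fatal(err)
        }
    }
}

The same four rules can of course be re-applied with plain iptables -t nat commands; the point is that the WEAVE chain and the POSTROUTING jump have to come back too, not just the MASQUERADE rules.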

@brb
Contributor

brb commented Dec 12, 2017

Thanks for the logs and the investigation!

The weave bridge should not survive reboots, and it should be created by weave-kube (when it starts).

Does systemd-networkd manage the weave bridge? Could you paste networkctl status?

In any case, we should perhaps monitor via a netlink subscription when the bridge goes down and do something more useful.
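
Something along these lines, using the vishvananda/netlink package (a rough sketch of the idea only, not the eventual implementation), would let us log the moment the bridge is taken down:

package main

import (
    "log"

    "github.com/vishvananda/netlink"
    "golang.org/x/sys/unix"
)

func main() {
    updates := make(chan netlink.LinkUpdate)
    done := make(chan struct{})
    defer close(done)
    // Subscribe to RTM_NEWLINK notifications for all interfaces.
    if err := netlink.LinkSubscribe(updates, done); err != nil {
        log.Fatal(err)
    }
    for u := range updates {
        if u.Link.Attrs().Name != "weave" {
            continue
        }
        // A cleared IFF_UP flag means the bridge was set administratively DOWN.
        if u.IfInfomsg.Flags&unix.IFF_UP == 0 {
            log.Print("weave bridge is DOWN; pod traffic on this node will fail")
        }
    }
}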

@deitch
Contributor

deitch commented Dec 12, 2017

Thanks for the logs and the investigation!

Quite welcome. I figure the better the detail I gather, the faster we both get this resolved.

The weave bridge should not survive reboots, and it should be created by weave-kube (when it starts)

So when the node starts, the weave pod should get scheduled because it is part of a DaemonSet, which it does. And it looks like the weave bridge came back, but was left DOWN. So:

  1. Why did it not bring it back correctly?
  2. Why were the WEAVE-NPC entries in iptables, but not the WEAVE entries? Would the weave pod starting up not have reset those?

Output of networkctl status:

$ networkctl status
*      State: routable
       Address: 10.50.21.151 on eth0
                172.17.0.1 on docker0
                10.51.32.0 on weave
                fe80::101d:8ff:fee6:10f4 on eth0
                fe80::42:7dff:fe7c:3be3 on docker0
                fe80::20c1:f6ff:fea2:633e on datapath
                fe80::58cc:7fff:fece:6ae0 on weave
                fe80::9caf:bfff:fe34:b2e0 on vethwe-datapath
                fe80::d449:58ff:feef:5360 on vethwe-bridge
                fe80::8466:fbff:fee7:3d4e on vethwepld512fe5
                fe80::3055:aaff:fe7a:7d02 on vethwepl10e4d55
                fe80::50d8:6cff:fe03:f8c3 on vethwepl5fdf578
                fe80::3cca:fbff:fef9:cbee on vethwepl242dcae
                fe80::bc88:f3ff:fe82:4b1 on vxlan-6784
                fe80::d4b1:9dff:fef4:3970 on vethwepl2c8c02a
                fe80::4462:53ff:fe88:da4f on vethwepl646563e
       Gateway: 10.50.21.1 on eth0
           DNS: 10.50.0.2
Search Domains: ec2.internal

$ networkctl
IDX LINK             TYPE               OPERATIONAL SETUP
  1 lo               loopback           carrier     unmanaged
  2 eth0             ether              routable    configured
  3 docker0          ether              no-carrier  unmanaged
 16 datapath         ether              degraded    unmanaged
 18 weave            ether              routable    unmanaged
 19 dummy0           ether              off         unmanaged
 21 vethwe-datapath  ether              degraded    unmanaged
 22 vethwe-bridge    ether              degraded    unmanaged
 25 vethwepld512fe5  ether              degraded    unmanaged
 33 vethwepl10e4d55  ether              degraded    unmanaged
 44 vethwepl5fdf578  ether              degraded    unmanaged
 46 vethwepl242dcae  ether              degraded    unmanaged
 51 vxlan-6784       ether              degraded    unmanaged
 69 vethwepl2c8c02a  ether              degraded    unmanaged
 71 vethwepl646563e  ether              degraded    unmanaged

@deitch
Contributor

deitch commented Dec 12, 2017

FWIW, here are the first ~100 lines of the weave-kube logs: https://gist.github.com/deitch/0c665e0f7f04c2b2d4c8e0eb5706f295

@brb
Contributor

brb commented Dec 13, 2017

  1. Why did it not bring it back correctly?

The current code skips the initialization steps (incl. setting up the required iptables chains and bringing the bridge interface up) if the bridge already exists (see https://github.com/weaveworks/weave/blob/v2.1.3/net/bridge.go#L231), which is a bug. So, if the weave-kube container is restarted during the initialization of the bridge, subsequent weave-kube runs will end up in an invalid state.

  1. Why were the WEAVE-NPC entries in iptables, but not the WEAVE entries? Would the weave pod starting up not have reset those?

These are handled independently by the weave-npc container, which is part of the same DaemonSet as weave-kube.

Just noting that you are running quite an ancient version of weave, v2.0.1, while v2.1.3 exists. Mind upgrading?


I suggest the following actions:

  • Fix EnsureBridge so that it checks the current state of the system and performs any missing steps (see the sketch after this list).
  • Make any iptables failure log enough context, as exit status 1: iptables: No chain/target/match by that name. on its own is not very helpful.
  • Monitor the weave interfaces (weave, datapath, vethwe-bridge, etc.) and, to begin with, log an error if any of them goes down.
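
For the first and third points, a rough sketch of the shape the fix could take (illustrative only; ensureBridgeUp is a hypothetical name, not weave's actual code, and it uses the vishvananda/netlink package):

package main

import (
    "log"

    "github.com/vishvananda/netlink"
)

// ensureBridgeUp looks the bridge up by name, creates it only if it is
// missing, and always makes sure it is administratively UP - instead of
// skipping every remaining step as soon as the bridge exists.
func ensureBridgeUp(name string) error {
    link, err := netlink.LinkByName(name)
    if err != nil {
        // Bridge missing: create it, then fall through to bring it up.
        br := &netlink.Bridge{LinkAttrs: netlink.LinkAttrs{Name: name}}
        if err := netlink.LinkAdd(br); err != nil {
            return err
        }
        link = br
    }
    // LinkSetUp is idempotent, so calling it on an already-UP bridge is safe;
    // the existing-bridge path no longer skips this step.
    return netlink.LinkSetUp(link)
}

func main() {
    if err := ensureBridgeUp("weave"); err != nil {
        log.Fatal(err)
    }
}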

@brb brb added the bug label Dec 13, 2017
@brb brb self-assigned this Dec 13, 2017
@deitch
Contributor

deitch commented Dec 13, 2017

Mind upgrading?

Already planned to upgrade in next few days. Been backed up with other requirements.

The current code skips the initialization steps (incl. setting the required iptables chains and setting the bridge interface up) if the bridge exists

Aha. That explains, well, everything. :-)

Fix EnsureBridge so that it checks the current state of the system and performs any missing steps
Monitor the weave interfaces and log an error if any of them goes down

And maybe have weave try and repair it?

Essentially, when the weave pod starts, there are several possible scenarios:

  • new node: weave does everything correctly
  • existing node, bridge exists: weave does not install iptables rules or ensure they exist, and does not ensure bridge is up and functioning
  • existing node, bridge does not exist: weave does everything correctly

From a practical operational standpoint, what steps can I take on host startup so that a reboot is non-disabling? What can I do so that a reboot means weave behaves correctly? On a reboot, I already have kubelet and kube-proxy systemd units; they may start up while the weave bridge probably already exists and the iptables rules may or may not be in place.

@brb
Contributor

brb commented Dec 14, 2017

And maybe have weave try and repair it?

Yep! That's what I meant by "do missing steps".

From a practical operational standpoint, what steps can I take on host startup so that a reboot is non-disabling?

Ideally, none, as the bridge interface should not survive system reboots. In your case, I suspect that weave-kube failed on its first start while creating the bridge, leaving it in an invalid state. This assumption is based on the timing in dmesg:

[   35.254818] device vethwe-datapath entered promiscuous mode
...
[   74.721726] device vxlan-6784 entered promiscuous mode

Both interfaces are created during the initialization, and there is a ~40sec gap in-between.

You should be able to access the previous weave-kube container's logs by running kubectl logs -p $WEAVE_POD weave -n=kube-system, where $WEAVE_POD is the name of the weave-net pod on the affected host (kubectl get pods -n kube-system -l name=weave-net -o wide lists them for all hosts), and see why it failed. But I strongly suggest upgrading weave, as there have been quite a few fixes to weave-kube since the version you run (notably #3134).

@deitch
Contributor

deitch commented Dec 14, 2017

I strongly suggest upgrading weave as there have been quite a few fixes to weave-kube since the version you run

We did last night. Ironically, it caused a minor problem, as the (now properly working) NPC blocked some services from working. We knew it wasn't working 100% with the defaults, but put it in place anyway so it would work when it did. But we misconfigured it. Oops. :-)

Ideally - none, as the bridge interface should not survive system reboots

But we aren't in an ideal world. Can I put an init container or similar in place to ensure that everything starts up correctly? I am tempted to have an init container that checks for the weave bridge and removes it if it is there, but if, for some reason, the weave pod itself restarts while app pods are running (i.e. not during a reboot), that would kill any running apps. Oops.

I suspect that weave-kube fails to start for the first time

That seems strange to me. In our case, we had the host up and running for quite some time, weave running, pods running and getting IPs and communicating, meaning that the weave bridge must have come up correctly. Only when it rebooted did the problems begin.

Unfortunately, getting the previous pod's logs isn't going to help anymore. We upgraded to 2.1.3 last night, so you would need to go back several pods, which won't help. Well, we have fluentd sending to loggly, but when the bridge was down, the fluentd pod could not communicate out, so those logs are gone.

So:

  1. we have 2.1.3
  2. We know the underlying bug with weave. Is there a PR in place? ETA?
  3. Prior to any of that, in our real-world scenario, what can I do to handle this error so that a reboot doesn't equal "node down until a human gets involved"?

@brb
Contributor

brb commented Dec 15, 2017

I suspect that weave-kube fails to start for the first time

I meant for the first time after a reboot.

  1. we have 2.1.3

Good. Have you experienced the problem with 2.1.3?

  1. We know the underlying bug with weave. Is there a PR in place? ETA?

I'm hoping to have a PR ready next week.

  1. Prior to any of that, in our real-world scenario, what can I do to handle this error so that a reboot doesn't equal "node down until a human gets involved"?

I don't see any easy way. Before starting the weave pod, you'd need to call destroy_bridge, which is not exported, and afterwards force k8s to CNI-ADD all existing containers on the host.

@deitch
Contributor

deitch commented Dec 16, 2017

I meant for the first time after a reboot.

Ah, right. Got it.

Have you experienced the problem with 2.1.3?

Haven't had a reboot since we upgraded a few days ago. We can try it, but since we know the cause, and it isn't consistent, I am not sure the test will be scientific. :-)

I'm hoping to have a PR ready next week.
I don't see any easy way.

Well, if it is that soon, we will live with the risk. Anything we do on our own infra is going to take enough time to validate that we will be better off waiting for 2.1.4 (or whichever).

I probably should do a PR myself - I have contributed one or two to weave net - but I'm out on holiday Monday-Tuesday and totally overloaded with end-of-year stuff.

May I ask that you flag the PR on this issue so interested parties can track it?

Thanks!

@brb
Contributor

brb commented Dec 18, 2017

May I ask that you flag the PR on this issue so interested parties can track it?

Sure, #3204

@deitch
Contributor

deitch commented Dec 18, 2017

Thanks!

@alok87
Contributor

alok87 commented Dec 27, 2017

@deitch our issue does not get fixed by a reboot every time, but only when we delete all the masters and nodes. What should I do to fix this issue on my own? I'm not sure when the release with the fix is happening, and our production could be impacted by this. Currently the problem is in staging, where nodes come and go in large numbers.

@deitch
Contributor

deitch commented Dec 27, 2017

@alok87 I don't understand. Do you mean a new worker node comes up, yet it fails to join the existing weave network?

@alok87
Contributor

alok87 commented Dec 27, 2017

@deitch Yes, a worker node comes up with the weave interface missing.

@deitch
Contributor

deitch commented Dec 27, 2017

@alok87

  1. Is the weave DaemonSet configured correctly?
  2. Does the weave pod get started on the node? If yes, what do the logs show (and kubectl describe for that pod)? If not, what events show up in the kubelet logs?

@alok87
Contributor

alok87 commented Dec 28, 2017

@deitch This is our weave DS - https://gist.github.com/alok87/07a2ea274a8962726cb7e875c5ad5887
It works in production, where there is mostly scaling up and very little scaling down. But in staging there is heavy scaling in both directions: 20 nodes come up and 20 go down every day. And we observed the issue happening on nodes whose IPs had been used a few days before.

@brb
Contributor

brb commented Dec 28, 2017

@alok87 you are running 2.0.1. I suggest you update to the latest version, 2.1.3, which has fixed this issue for a few users (#2998).

@alok87
Contributor

alok87 commented Dec 28, 2017

@brb our staging cluster is at 1.7.6. We upgraded weave today to 2.1.3 (https://github.com/weaveworks/weave/releases/download/v2.1.3/weave-daemonset-k8s-1.7.yaml).
But weave started restarting continuously, and we had to roll back to the same version of weave as our production (2.0.1).

This is exactly what was happening after upgrading the 1.7.6 cluster to weave-2.1.3:

root@voice-5834b78-857cd6bd59-w6v6h:/www/app# curl http://mystaging.central:8080/status
curl: (6) Could not resolve host: mystaging.central

Restarting kube-dns fixes it. Why is a kube-dns restart required after a weave upgrade?

@brb
Contributor

brb commented Dec 28, 2017

@alok87 I suggest you open a separate issue, and please don't forget to include as much info as possible (e.g. the logs of the restarting weave pod).

bboreham added a commit that referenced this issue Jan 9, 2018
Do not skip bridge creation if bridge exists
@brb brb removed their assignment Jun 12, 2018
murali-reddy added a commit that referenced this issue Aug 20, 2018
is found to be in DOWN state

Fixes #3133 Do something more useful when the weave bridge is DOWN

On Weave restart there is already logic in place to create the bridge interface
and bring it up; this fix only monitors the weave bridge interface and logs an error if
it is in DOWN state
murali-reddy added a commit that referenced this issue Aug 23, 2018
is found to be in DOWN state

Fixes #3133 Do something more useful when the weave bridge is DOWN

On Weave restart there is already logic in place to create the bridge interface
and bring it up; this fix only monitors the weave bridge interface and logs an error if
it is in DOWN state
murali-reddy added a commit that referenced this issue Aug 23, 2018
is found to be in DOWN state

Fixes #3133 Do something more useful when the weave bridge is DOWN

On Weave restart there is already logic in place to create the bridge interface
and bring it up; this fix only monitors the weave bridge interface and logs an error if
it is in DOWN state
@brb brb added this to the 2.5 milestone Sep 3, 2018
@brb brb closed this as completed in #3381 Sep 3, 2018
brb added a commit that referenced this issue Sep 3, 2018
Monitor and throw error message in the logs if Weave bridge interface is down

Fix #3133