
error_class=Fluent::Plugin::ConcatFilter::TimeoutError error="Timeout flush: kernel:default" location=nil tag="kernel" #112

Closed
wajika opened this issue Jan 4, 2022 · 1 comment

wajika commented Jan 4, 2022

2022-01-04 06:54:27 +0000 [warn]: dump an error event: error_class=Fluent::Plugin::ConcatFilter::TimeoutError error="Timeout flush: kernel:default" location=nil tag="kernel" time=2022-01-04 06:54:27.659593600 +0000 record={"priority"=>"6", "boot_id"=>"1eb5082bf87e45f4a1cf55f506fc7bfe", "machine_id"=>"20191225111607875619293640639763", "hostname"=>"ack-lonsid20236prd", "source_monotonic_timestamp"=>"26486496831163", "transport"=>"kernel", "syslog_facility"=>"0", "syslog_identifier"=>"kernel", "message"=>"IPVS: Creating netns size=2048 id=563IPv6: ADDRCONF(NETDEV_UP): eth0: link is not readyIPv6: ADDRCONF(NETDEV_UP): veth3fcf31bb: link is not readyIPv6: ADDRCONF(NETDEV_CHANGE): veth3fcf31bb: link becomes readyIPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes readycni0: port 13(veth3fcf31bb) entered blocking statecni0: port 13(veth3fcf31bb) entered disabled statedevice veth3fcf31bb entered promiscuous modecni0: port 13(veth3fcf31bb) entered blocking statecni0: port 13(veth3fcf31bb) entered forwarding stateIPVS: Creating netns size=2048 id=564IPv6: ADDRCONF(NETDEV_UP): eth0: link is not readyIPv6: ADDRCONF(NETDEV_UP): veth49535c1e: link is not readyIPv6: ADDRCONF(NETDEV_CHANGE): veth49535c1e: link becomes readyIPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes readycni0: port 23(veth49535c1e) entered blocking statecni0: port 23(veth49535c1e) entered disabled statedevice veth49535c1e entered promiscuous modecni0: port 23(veth49535c1e) entered blocking statecni0: port 23(veth49535c1e) entered forwarding statecni0: port 23(veth49535c1e) entered disabled statedevice veth49535c1e left promiscuous modecni0: port 23(veth49535c1e) entered disabled stateIPVS: Creating netns size=2048 id=565IPv6: ADDRCONF(NETDEV_UP): eth0: link is not readyIPv6: ADDRCONF(NETDEV_UP): veth56eb1eea: link is not readyIPv6: ADDRCONF(NETDEV_CHANGE): veth56eb1eea: link becomes readyIPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes readycni0: port 23(veth56eb1eea) entered blocking statecni0: port 23(veth56eb1eea) entered disabled statedevice veth56eb1eea entered promiscuous modecni0: port 23(veth56eb1eea) entered blocking statecni0: port 23(veth56eb1eea) entered forwarding statecni0: port 13(veth3fcf31bb) entered disabled statedevice veth3fcf31bb left promiscuous modecni0: port 13(veth3fcf31bb) entered disabled statecni0: port 7(veth8a5c2fb6) entered disabled statedevice veth8a5c2fb6 left promiscuous modecni0: port 7(veth8a5c2fb6) entered disabled statecni0: port 23(veth56eb1eea) entered disabled statedevice veth56eb1eea left promiscuous modecni0: port 23(veth56eb1eea) entered disabled stateIPVS: Creating netns size=2048 id=566IPv6: ADDRCONF(NETDEV_UP): eth0: link is not readyIPv6: ADDRCONF(NETDEV_UP): vethb7972af5: link is not readyIPv6: ADDRCONF(NETDEV_CHANGE): vethb7972af5: link becomes readyIPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes readycni0: port 7(vethb7972af5) entered blocking statecni0: port 7(vethb7972af5) entered disabled statedevice vethb7972af5 entered promiscuous modecni0: port 7(vethb7972af5) entered blocking statecni0: port 7(vethb7972af5) entered forwarding state"}
2022-01-04 06:54:27.659756759 +0000 fluent.warn: {"error":"#<Fluent::Plugin::ConcatFilter::TimeoutError: Timeout flush: kernel:default>","location":null,"tag":"kernel","time":1641279267,"record":{"priority":"6","boot_id":"1eb5082bf87e45f4a1cf55f506fc7bfe","machine_id":"20191225111607875619293640639763","hostname":"ack-lonsid20236prd","source_monotonic_timestamp":"26486496831163","transport":"kernel","syslog_facility":"0","syslog_identifier":"kernel","message":"IPVS: Creating netns size=2048 id=563IPv6: ADDRCONF(NETDEV_UP): eth0: link is not readyIPv6: ADDRCONF(NETDEV_UP): veth3fcf31bb: link is not readyIPv6: ADDRCONF(NETDEV_CHANGE): veth3fcf31bb: link becomes readyIPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes readycni0: port 13(veth3fcf31bb) entered blocking statecni0: port 13(veth3fcf31bb) entered disabled statedevice veth3fcf31bb entered promiscuous modecni0: port 13(veth3fcf31bb) entered blocking statecni0: port 13(veth3fcf31bb) entered forwarding stateIPVS: Creating netns size=2048 id=564IPv6: ADDRCONF(NETDEV_UP): eth0: link is not readyIPv6: ADDRCONF(NETDEV_UP): veth49535c1e: link is not readyIPv6: ADDRCONF(NETDEV_CHANGE): veth49535c1e: link becomes readyIPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes readycni0: port 23(veth49535c1e) entered blocking statecni0: port 23(veth49535c1e) entered disabled statedevice veth49535c1e entered promiscuous modecni0: port 23(veth49535c1e) entered blocking statecni0: port 23(veth49535c1e) entered forwarding statecni0: port 23(veth49535c1e) entered disabled statedevice veth49535c1e left promiscuous modecni0: port 23(veth49535c1e) entered disabled stateIPVS: Creating netns size=2048 id=565IPv6: ADDRCONF(NETDEV_UP): eth0: link is not readyIPv6: ADDRCONF(NETDEV_UP): veth56eb1eea: link is not readyIPv6: ADDRCONF(NETDEV_CHANGE): veth56eb1eea: link becomes readyIPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes readycni0: port 23(veth56eb1eea) entered blocking statecni0: port 23(veth56eb1eea) entered disabled statedevice veth56eb1eea entered promiscuous modecni0: port 23(veth56eb1eea) entered blocking statecni0: port 23(veth56eb1eea) entered forwarding statecni0: port 13(veth3fcf31bb) entered disabled statedevice veth3fcf31bb left promiscuous modecni0: port 13(veth3fcf31bb) entered disabled statecni0: port 7(veth8a5c2fb6) entered disabled statedevice veth8a5c2fb6 left promiscuous modecni0: port 7(veth8a5c2fb6) entered disabled statecni0: port 23(veth56eb1eea) entered disabled statedevice veth56eb1eea left promiscuous modecni0: port 23(veth56eb1eea) entered disabled stateIPVS: Creating netns size=2048 id=566IPv6: ADDRCONF(NETDEV_UP): eth0: link is not readyIPv6: ADDRCONF(NETDEV_UP): vethb7972af5: link is not readyIPv6: ADDRCONF(NETDEV_CHANGE): vethb7972af5: link becomes readyIPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes readycni0: port 7(vethb7972af5) entered blocking statecni0: port 7(vethb7972af5) entered disabled statedevice vethb7972af5 entered promiscuous modecni0: port 7(vethb7972af5) entered blocking statecni0: port 7(vethb7972af5) entered forwarding state"},"message":"dump an error event: error_class=Fluent::Plugin::ConcatFilter::TimeoutError error=\"Timeout flush: kernel:default\" location=nil tag=\"kernel\" time=2022-01-04 06:54:27.659593600 +0000 record={\"priority\"=>\"6\", \"boot_id\"=>\"1eb5082bf87e45f4a1cf55f506fc7bfe\", \"machine_id\"=>\"20191225111607875619293640639763\", \"hostname\"=>\"ack-lonsid20236prd\", \"source_monotonic_timestamp\"=>\"26486496831163\", \"transport\"=>\"kernel\", \"syslog_facility\"=>\"0\", \"syslog_identifier\"=>\"kernel\", \"message\"=>\"IPVS: Creating netns size=2048 id=563IPv6: ADDRCONF(NETDEV_UP): eth0: link is not readyIPv6: ADDRCONF(NETDEV_UP): veth3fcf31bb: link is not readyIPv6: ADDRCONF(NETDEV_CHANGE): veth3fcf31bb: link becomes readyIPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes readycni0: port 13(veth3fcf31bb) entered blocking statecni0: port 13(veth3fcf31bb) entered disabled statedevice veth3fcf31bb entered promiscuous modecni0: port 13(veth3fcf31bb) entered blocking statecni0: port 13(veth3fcf31bb) entered forwarding stateIPVS: Creating netns size=2048 id=564IPv6: ADDRCONF(NETDEV_UP): eth0: link is not readyIPv6: ADDRCONF(NETDEV_UP): veth49535c1e: link is not readyIPv6: ADDRCONF(NETDEV_CHANGE): veth49535c1e: link becomes readyIPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes readycni0: port 23(veth49535c1e) entered blocking statecni0: port 23(veth49535c1e) entered disabled statedevice veth49535c1e entered promiscuous modecni0: port 23(veth49535c1e) entered blocking statecni0: port 23(veth49535c1e) entered forwarding statecni0: port 23(veth49535c1e) entered disabled statedevice veth49535c1e left promiscuous modecni0: port 23(veth49535c1e) entered disabled stateIPVS: Creating netns size=2048 id=565IPv6: ADDRCONF(NETDEV_UP): eth0: link is not readyIPv6: ADDRCONF(NETDEV_UP): veth56eb1eea: link is not readyIPv6: ADDRCONF(NETDEV_CHANGE): veth56eb1eea: link becomes readyIPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes readycni0: port 23(veth56eb1eea) entered blocking statecni0: port 23(veth56eb1eea) entered disabled statedevice veth56eb1eea entered promiscuous modecni0: port 23(veth56eb1eea) entered blocking statecni0: port 23(veth56eb1eea) entered forwarding statecni0: port 13(veth3fcf31bb) entered disabled statedevice veth3fcf31bb left promiscuous modecni0: port 13(veth3fcf31bb) entered disabled statecni0: port 7(veth8a5c2fb6) entered disabled statedevice veth8a5c2fb6 left promiscuous modecni0: port 7(veth8a5c2fb6) entered disabled statecni0: port 23(veth56eb1eea) entered disabled statedevice veth56eb1eea left promiscuous modecni0: port 23(veth56eb1eea) entered disabled stateIPVS: Creating netns size=2048 id=566IPv6: ADDRCONF(NETDEV_UP): eth0: link is not readyIPv6: ADDRCONF(NETDEV_UP): vethb7972af5: link is not readyIPv6: ADDRCONF(NETDEV_CHANGE): vethb7972af5: link becomes readyIPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes readycni0: port 7(vethb7972af5) entered blocking statecni0: port 7(vethb7972af5) entered disabled statedevice vethb7972af5 entered promiscuous modecni0: port 7(vethb7972af5) entered blocking statecni0: port 7(vethb7972af5) entered forwarding state\"}"}

Sorry, I did not post this using the default issue template.
I just deployed fluentd in Kubernetes, and I don't understand what is causing this problem.
kubernetes 1.14
fluentd 1.12.0

I saw a similar issue, #83, but it is different from mine.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd-es
  namespace: kube-system
  labels:
    k8s-app: fluentd-es
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: fluentd-es
  labels:
    k8s-app: fluentd-es
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
rules:
- apiGroups:
  - ""
  resources:
  - "namespaces"
  - "pods"
  verbs:
  - "get"
  - "watch"
  - "list"
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: fluentd-es
  labels:
    k8s-app: fluentd-es
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
subjects:
- kind: ServiceAccount
  name: fluentd-es
  namespace: kube-system
  apiGroup: ""
roleRef:
  kind: ClusterRole
  name: fluentd-es
  apiGroup: ""
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-es-v2.4.0
  namespace: kube-system
  labels:
    k8s-app: fluentd-es
    version: v2.4.0
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    matchLabels:
      k8s-app: fluentd-es
      version: v2.4.0
  template:
    metadata:
      labels:
        k8s-app: fluentd-es
        kubernetes.io/cluster-service: "true"
        version: v2.4.0
      # This annotation ensures that fluentd does not get evicted if the node
      # supports critical pod annotation based priority scheme.
      # Note that this does not guarantee admission on the nodes (#40573).
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
    spec:
      priorityClassName: system-node-critical
      serviceAccountName: fluentd-es
      containers:
      - name: fluentd-es
        image: registry.cn-hangzhou.aliyuncs.com/google-containerss/fluentd-custom:1.12.0
        env:
        - name: FLUENTD_ARGS
          value: --no-supervisor -q
        resources:
          limits:
            memory: 500Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: config-volume
          mountPath: /etc/fluent/config.d
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: config-volume
        configMap:
          name: fluentd-es-config-v0.2.0
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: fluentd-es-config-v0.2.0
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
data:
  system.conf: |-
    <system>
      root_dir /tmp/fluentd-buffers/
    </system>

  containers.input.conf: |-
    <source>
      @id fluentd-containers.log
      @type tail
      path /var/log/containers/*.log
      pos_file /var/log/es-containers.log.pos
      tag raw.kubernetes.*
      read_from_head true
      <parse>
        @type multi_format
        <pattern>
          format json
          time_key time
          time_format %Y-%m-%dT%H:%M:%S.%NZ
        </pattern>
        <pattern>
          format /^(?<time>.+) (?<stream>stdout|stderr) [^ ]* (?<log>.*)$/
          time_format %Y-%m-%dT%H:%M:%S.%N%:z
        </pattern>
      </parse>
    </source>

    # Detect exceptions in the log output and forward them as one log entry.
    <match raw.kubernetes.**>
      @id raw.kubernetes
      @type detect_exceptions
      remove_tag_prefix raw
      message log
      stream stream
      multiline_flush_interval 5
      max_bytes 500000
      max_lines 1000
    </match>

    # Concatenate multi-line logs
    <filter **>
      @id filter_concat
      @type concat
      key message
      multiline_end_regexp /\n$/
      separator ""
    </filter>
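    # Note: because this <filter **> also matches tag "kernel", kernel
    # messages that never match multiline_end_regexp sit in the concat
    # buffer until the plugin's flush timeout fires, which produces the
    # TimeoutError above. A hedged sketch of one alternative, using the
    # flush_interval and timeout_label parameters documented by
    # fluent-plugin-concat (the @NORMAL label name is illustrative, not
    # part of this config):
    #
    #   <filter **>
    #     @type concat
    #     key message
    #     multiline_end_regexp /\n$/
    #     separator ""
    #     flush_interval 5
    #     timeout_label @NORMAL
    #   </filter>
    #   <label @NORMAL>
    #     <match **>
    #       # route timed-out chunks to the same output as below
    #     </match>
    #   </label>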

    # Enriches records with Kubernetes metadata
    <filter kubernetes.**>
      @id filter_kubernetes_metadata
      @type kubernetes_metadata
    </filter>

    # Fixes json fields in Elasticsearch
    <filter kubernetes.**>
      @id filter_parser
      @type parser
      key_name log
      reserve_data true
      remove_key_name_field true
      <parse>
        @type multi_format
        <pattern>
          format json
        </pattern>
        <pattern>
          format none
        </pattern>
      </parse>
    </filter>

    # Logs from systemd-journal for interesting services.
    # TODO(random-liu): Remove this after cri container runtime rolls out.
    <source>
      @id journald-docker
      @type systemd
      matches [{ "_SYSTEMD_UNIT": "docker.service" }]
      <storage>
        @type local
        persistent true
        path /var/log/journald-docker.pos
      </storage>
      read_from_head true
      tag docker
    </source>

    <source>
      @id journald-container-runtime
      @type systemd
      matches [{ "_SYSTEMD_UNIT": "{{ fluentd_container_runtime_service }}.service" }]
      <storage>
        @type local
        persistent true
        path /var/log/journald-container-runtime.pos
      </storage>
      read_from_head true
      tag container-runtime
    </source>

    <source>
      @id journald-kubelet
      @type systemd
      matches [{ "_SYSTEMD_UNIT": "kubelet.service" }]
      <storage>
        @type local
        persistent true
        path /var/log/journald-kubelet.pos
      </storage>
      read_from_head true
      tag kubelet
    </source>

    <source>
      @id journald-node-problem-detector
      @type systemd
      matches [{ "_SYSTEMD_UNIT": "node-problem-detector.service" }]
      <storage>
        @type local
        persistent true
        path /var/log/journald-node-problem-detector.pos
      </storage>
      read_from_head true
      tag node-problem-detector
    </source>

    <source>
      @id kernel
      @type systemd
      matches [{ "_TRANSPORT": "kernel" }]
      <storage>
        @type local
        persistent true
        path /var/log/kernel.pos
      </storage>
      <entry>
        fields_strip_underscores true
        fields_lowercase true
      </entry>
      read_from_head true
      tag kernel
    </source>
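    # The tag "kernel" set here is what appears in the TimeoutError above:
    # these journald kernel records flow through the catch-all concat
    # <filter **> and only leave its buffer on timeout. A hedged sketch of
    # a workaround (the match pattern is illustrative): scope the concat
    # filter to container logs so journald/kernel sources bypass it:
    #
    #   <filter kubernetes.**>
    #     @id filter_concat
    #     @type concat
    #     key message
    #     multiline_end_regexp /\n$/
    #     separator ""
    #   </filter>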

  forward.input.conf: |-
    # Takes the messages sent over TCP
    <source>
      @id forward
      @type forward
    </source>

  monitoring.conf: |-
    # Prometheus Exporter Plugin
    # input plugin that exports metrics
    <source>
      @id prometheus
      @type prometheus
    </source>

    <source>
      @id monitor_agent
      @type monitor_agent
    </source>

    # input plugin that collects metrics from MonitorAgent
    <source>
      @id prometheus_monitor
      @type prometheus_monitor
      <labels>
        host ${hostname}
      </labels>
    </source>

    # input plugin that collects metrics for output plugin
    <source>
      @id prometheus_output_monitor
      @type prometheus_output_monitor
      <labels>
        host ${hostname}
      </labels>
    </source>

    # input plugin that collects metrics for in_tail plugin
    <source>
      @id prometheus_tail_monitor
      @type prometheus_tail_monitor
      <labels>
        host ${hostname}
      </labels>
    </source>

  output.conf: |-
    <match **>
      @id kafka2
      @type kafka2
      # list of seed brokers
      brokers 192.168.10.145:9092,192.168.10.146:9092,192.168.10.147:9092
      use_event_time true
      topic_key aliyun-k8s-prod-cluster
      default_topic messages
      required_acks -1
      compression_codec gzip
      # buffer settings
      <buffer topic>
        @type file
        path /var/log/td-agent/buffer/td
        flush_interval 3s
      </buffer>
      <format>
        @type json
      </format>
    </match>

Please give me some help, thanks.


wajika commented Jan 4, 2022

I noticed tag="kernel"; after removing it, everything works normally.

@wajika wajika closed this as completed Jan 4, 2022