Describe the bug
I'm getting the strange error below in the fluentd pods, and logs are not being collected:

```
[warn]: dump an error event: error_class=Fluent::Plugin::ConcatFilter::TimeoutError error="Timeout flush: kernel:default" location=nil tag="kernel"
```
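For context on what this warning means: the `<filter **>` concat filter in the config below holds events whose `message` has not yet matched `multiline_end_regexp`, and events tagged `kernel` apparently never match it, so after the plugin's flush timeout (60s by default) they are dumped as error events. A minimal sketch of the pattern the fluent-plugin-concat README documents for handling this, using its `flush_interval` and `timeout_label` parameters (the `@KERNEL_TIMEOUT` label name and the stdout output are placeholders, not part of this config):

```
<filter **>
  @type concat
  key message
  multiline_end_regexp /\n$/
  # Flush incomplete buffers after 5s instead of the 60s default.
  flush_interval 5
  # On timeout, emit the buffered event into this label instead of
  # dumping it as an error event ("dump an error event: ... TimeoutError").
  timeout_label @KERNEL_TIMEOUT
</filter>

<label @KERNEL_TIMEOUT>
  <match **>
    # Placeholder output: in a real pipeline this would forward timed-out
    # events to the same elasticsearch output as everything else.
    @type stdout
  </match>
</label>
```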
To Reproduce
Deploy fluentd with the configuration file used below.
Expected behavior
Logs are collected from all configured sources and shipped to Elasticsearch without errors.
Your Environment
fluentd 1.5.1 on ruby 2.3.3 (see the startup log below), running as pods in a Kubernetes cluster.
If you hit the problem with an older fluentd version, try the latest version first.
Your Configuration
```
2019-08-27 00:56:58 +0000 [info]: parsing config file is succeeded path="/etc/fluent/fluent.conf"
2019-08-27 00:56:59 +0000 [info]: using configuration file: <ROOT>
  <match fluent.**>
    @type null
  </match>
  <source>
    @id fluentd-containers.log
    @type tail
    path "/var/log/containers/*.log"
    pos_file "/var/log/containers.log.pos"
    tag "raw.kubernetes.*"
    read_from_head true
    <parse>
      @type "multi_format"
      <pattern>
        format json
        time_key "time"
        time_format "%Y-%m-%dT%H:%M:%S.%NZ"
        time_type string
      </pattern>
      <pattern>
        format /^(?<time>.+) (?<stream>stdout|stderr) [^ ]* (?<log>.*)$/
        time_format "%Y-%m-%dT%H:%M:%S.%N%:z"
        expression ^(?<time>.+) (?<stream>stdout|stderr) [^ ]* (?<log>.*)$
        ignorecase false
        multiline false
      </pattern>
    </parse>
  </source>
  <match raw.kubernetes.**>
    @id raw.kubernetes
    @type detect_exceptions
    remove_tag_prefix "raw"
    message "log"
    stream "stream"
    multiline_flush_interval 5
    max_bytes 500000
    max_lines 1000
  </match>
  <filter **>
    @id filter_concat
    @type concat
    key "message"
    multiline_end_regexp "/\\n$/"
    separator ""
  </filter>
  <filter kubernetes.**>
    @id filter_kubernetes_metadata
    @type kubernetes_metadata
  </filter>
  <filter kubernetes.**>
    @id filter_parser
    @type parser
    key_name "log"
    reserve_data true
    remove_key_name_field true
    <parse>
      @type "multi_format"
      <pattern>
        format json
      </pattern>
      <pattern>
        format none
      </pattern>
    </parse>
  </filter>
  <match **>
    @id elasticsearch
    @type elasticsearch
    @log_level "info"
    include_tag_key true
    type_name "_doc"
    host "elastic.ew.oc.001.private.theagilehub.net"
    port 30998
    scheme http
    ssl_version TLSv1_2
    ssl_verify true
    user ""
    password xxxxxx
    logstash_format true
    logstash_prefix "logstash"
    reconnect_on_error true
    <buffer>
      @type "file"
      path "/var/log/fluentd-buffers/kubernetes.system.buffer"
      flush_mode interval
      retry_type exponential_backoff
      flush_thread_count 2
      flush_interval 5s
      retry_forever
      retry_max_interval 30
      chunk_limit_size 2M
      queue_limit_length 8
      overflow_action block
    </buffer>
  </match>
  <system>
    root_dir "/tmp/fluentd-buffers/"
  </system>
  <source>
    @id minion
    @type tail
    format /^(?<time>[^ ]* [^ ,]*)[^\[]*\[[^\]]*\]\[(?<severity>[^ \]]*) *\] (?<message>.*)$/
    time_format %Y-%m-%d %H:%M:%S
    path "/var/log/salt/minion"
    pos_file "/var/log/salt.pos"
    tag "salt"
    <parse>
      time_format %Y-%m-%d %H:%M:%S
      @type regexp
      expression ^(?<time>[^ ]* [^ ,]*)[^\[]*\[[^\]]*\]\[(?<severity>[^ \]]*) *\] (?<message>.*)$
    </parse>
  </source>
  <source>
    @id startupscript.log
    @type tail
    format syslog
    path "/var/log/startupscript.log"
    pos_file "/var/log/startupscript.log.pos"
    tag "startupscript"
    <parse>
      @type syslog
    </parse>
  </source>
  <source>
    @id docker.log
    @type tail
    format /^time="(?<time>[^)]*)" level=(?<severity>[^ ]*) msg="(?<message>[^"]*)"( err="(?<error>[^"]*)")?( statusCode=($<status_code>\d+))?/
    path "/var/log/docker.log"
    pos_file "/var/log/docker.log.pos"
    tag "docker"
    <parse>
      @type regexp
      expression ^time="(?<time>[^)]*)" level=(?<severity>[^ ]*) msg="(?<message>[^"]*)"( err="(?<error>[^"]*)")?( statusCode=($<status_code>\d+))?
    </parse>
  </source>
  <source>
    @id etcd.log
    @type tail
    format none
    path "/var/log/etcd.log"
    pos_file "/var/log/etcd.log.pos"
    tag "etcd"
    <parse>
      @type none
    </parse>
  </source>
  <source>
    @id kubelet.log
    @type tail
    format multiline
    multiline_flush_interval 5s
    format_firstline /^\w\d{4}/
    format1 /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/
    time_format %m%d %H:%M:%S.%N
    path "/var/log/kubelet.log"
    pos_file "/var/log/kubelet.log.pos"
    tag "kubelet"
    <parse>
      time_format %m%d %H:%M:%S.%N
      format_firstline /^\w\d{4}/
      @type multiline
      format1 /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/
    </parse>
  </source>
  <source>
    @id kube-proxy.log
    @type tail
    format multiline
    multiline_flush_interval 5s
    format_firstline /^\w\d{4}/
    format1 /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/
    time_format %m%d %H:%M:%S.%N
    path "/var/log/kube-proxy.log"
    pos_file "/var/log/kube-proxy.log.pos"
    tag "kube-proxy"
    <parse>
      time_format %m%d %H:%M:%S.%N
      format_firstline /^\w\d{4}/
      @type multiline
      format1 /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/
    </parse>
  </source>
  <source>
    @id kube-apiserver.log
    @type tail
    format multiline
    multiline_flush_interval 5s
    format_firstline /^\w\d{4}/
    format1 /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/
    time_format %m%d %H:%M:%S.%N
    path "/var/log/kube-apiserver.log"
    pos_file "/var/log/kube-apiserver.log.pos"
    tag "kube-apiserver"
    <parse>
      time_format %m%d %H:%M:%S.%N
      format_firstline /^\w\d{4}/
      @type multiline
      format1 /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/
    </parse>
  </source>
  <source>
    @id kube-controller-manager.log
    @type tail
    format multiline
    multiline_flush_interval 5s
    format_firstline /^\w\d{4}/
    format1 /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/
    time_format %m%d %H:%M:%S.%N
    path "/var/log/kube-controller-manager.log"
    pos_file "/var/log/kube-controller-manager.log.pos"
    tag "kube-controller-manager"
    <parse>
      time_format %m%d %H:%M:%S.%N
      format_firstline /^\w\d{4}/
      @type multiline
      format1 /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/
    </parse>
  </source>
  <source>
    @id kube-scheduler.log
    @type tail
    format multiline
    multiline_flush_interval 5s
    format_firstline /^\w\d{4}/
    format1 /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/
    time_format %m%d %H:%M:%S.%N
    path "/var/log/kube-scheduler.log"
    pos_file "/var/log/kube-scheduler.log.pos"
    tag "kube-scheduler"
    <parse>
      time_format %m%d %H:%M:%S.%N
      format_firstline /^\w\d{4}/
      @type multiline
      format1 /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/
    </parse>
  </source>
  <source>
    @id glbc.log
    @type tail
    format multiline
    multiline_flush_interval 5s
    format_firstline /^\w\d{4}/
    format1 /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/
    time_format %m%d %H:%M:%S.%N
    path "/var/log/glbc.log"
    pos_file "/var/log/glbc.log.pos"
    tag "glbc"
    <parse>
      time_format %m%d %H:%M:%S.%N
      format_firstline /^\w\d{4}/
      @type multiline
      format1 /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/
    </parse>
  </source>
  <source>
    @id cluster-autoscaler.log
    @type tail
    format multiline
    multiline_flush_interval 5s
    format_firstline /^\w\d{4}/
    format1 /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/
    time_format %m%d %H:%M:%S.%N
    path "/var/log/cluster-autoscaler.log"
    pos_file "/var/log/cluster-autoscaler.log.pos"
    tag "cluster-autoscaler"
    <parse>
      time_format %m%d %H:%M:%S.%N
      format_firstline /^\w\d{4}/
      @type multiline
      format1 /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/
    </parse>
  </source>
  <source>
    @id journald-docker
    @type systemd
    matches [{"_SYSTEMD_UNIT":"docker.service"}]
    read_from_head true
    tag "docker"
    <storage>
      @type "local"
      persistent true
      path "/var/log/journald-docker.pos"
    </storage>
  </source>
  <source>
    @id journald-container-runtime
    @type systemd
    matches [{"_SYSTEMD_UNIT":"{{ fluentd_container_runtime_service }}.service"}]
    read_from_head true
    tag "container-runtime"
    <storage>
      @type "local"
      persistent true
      path "/var/log/journald-container-runtime.pos"
    </storage>
  </source>
  <source>
    @id journald-kubelet
    @type systemd
    matches [{"_SYSTEMD_UNIT":"kubelet.service"}]
    read_from_head true
    tag "kubelet"
    <storage>
      @type "local"
      persistent true
      path "/var/log/journald-kubelet.pos"
    </storage>
  </source>
  <source>
    @id journald-node-problem-detector
    @type systemd
    matches [{"_SYSTEMD_UNIT":"node-problem-detector.service"}]
    read_from_head true
    tag "node-problem-detector"
    <storage>
      @type "local"
      persistent true
      path "/var/log/journald-node-problem-detector.pos"
    </storage>
  </source>
  <source>
    @id kernel
    @type systemd
    matches [{"_TRANSPORT":"kernel"}]
    read_from_head true
    tag "kernel"
    <storage>
      @type "local"
      persistent true
      path "/var/log/kernel.pos"
    </storage>
    <entry>
      fields_strip_underscores true
      fields_lowercase true
    </entry>
  </source>
</ROOT>
2019-08-27 00:56:59 +0000 [info]: starting fluentd-1.5.1 pid=50 ruby="2.3.3"
2019-08-27 00:56:59 +0000 [info]: spawn command to main: cmdline=["/usr/bin/ruby2.3", "-Eascii-8bit:ascii-8bit", "/usr/local/bin/fluentd", "--under-supervisor"]
2019-08-27 00:56:59 +0000 [info]: gem 'fluent-plugin-concat' version '2.3.0'
2019-08-27 00:56:59 +0000 [info]: gem 'fluent-plugin-detect-exceptions' version '0.0.12'
2019-08-27 00:56:59 +0000 [info]: gem 'fluent-plugin-elasticsearch' version '3.5.2'
2019-08-27 00:56:59 +0000 [info]: gem 'fluent-plugin-kubernetes_metadata_filter' version '2.2.0'
2019-08-27 00:56:59 +0000 [info]: gem 'fluent-plugin-multi-format-parser' version '1.0.0'
2019-08-27 00:56:59 +0000 [info]: gem 'fluent-plugin-prometheus' version '1.4.0'
2019-08-27 00:56:59 +0000 [info]: gem 'fluent-plugin-systemd' version '1.0.2'
2019-08-27 00:56:59 +0000 [info]: gem 'fluentd' version '1.5.1'
2019-08-27 00:56:59 +0000 [info]: adding match pattern="fluent.**" type="null"
2019-08-27 00:56:59 +0000 [info]: adding match pattern="raw.kubernetes.**" type="detect_exceptions"
2019-08-27 00:56:59 +0000 [info]: adding filter pattern="**" type="concat"
2019-08-27 00:56:59 +0000 [info]: adding filter pattern="kubernetes.**" type="kubernetes_metadata"
2019-08-27 00:57:00 +0000 [info]: adding filter pattern="kubernetes.**" type="parser"
2019-08-27 00:57:00 +0000 [info]: adding match pattern="**" type="elasticsearch"
2019-08-27 00:57:00 +0000 [info]: adding source type="tail"
2019-08-27 00:57:00 +0000 [info]: adding source type="tail"
2019-08-27 00:57:00 +0000 [info]: adding source type="tail"
2019-08-27 00:57:00 +0000 [info]: adding source type="tail"
2019-08-27 00:57:00 +0000 [info]: adding source type="tail"
2019-08-27 00:57:00 +0000 [info]: adding source type="tail"
2019-08-27 00:57:00 +0000 [info]: adding source type="tail"
2019-08-27 00:57:00 +0000 [info]: adding source type="tail"
2019-08-27 00:57:00 +0000 [info]: adding source type="tail"
2019-08-27 00:57:00 +0000 [info]: adding source type="tail"
2019-08-27 00:57:00 +0000 [info]: adding source type="tail"
2019-08-27 00:57:00 +0000 [info]: adding source type="tail"
2019-08-27 00:57:00 +0000 [info]: adding source type="systemd"
2019-08-27 00:57:00 +0000 [info]: adding source type="systemd"
2019-08-27 00:57:00 +0000 [info]: adding source type="systemd"
2019-08-27 00:57:00 +0000 [info]: adding source type="systemd"
2019-08-27 00:57:00 +0000 [info]: adding source type="systemd"
2019-08-27 00:57:00 +0000 [info]: #0 starting fluentd worker pid=59 ppid=50 worker=0
2019-08-27 00:57:00 +0000 [info]: #0 [fluentd-containers.log] following tail of /var/log/containers/nginx-proxy-westeurope-prod-platform-k8s-nodes00-03_kube-system_nginx-proxy-c9c79501db54beddee9f99bcae5900ae2add426bd1e134c2af67c80419bc15ae.log
2019-08-27 00:57:00 +0000 [info]: #0 [fluentd-containers.log] following tail of /var/log/containers/kube-proxy-7zntr_kube-system_kube-proxy-e48561c80e183c90c3ceccfa903a23eb725a9ce23d2ca42a47f2a83e3ac521c8.log
2019-08-27 00:57:00 +0000 [info]: #0 [fluentd-containers.log] following tail of /var/log/containers/nodelocaldns-p2js4_kube-system_node-cache-98b97f4f9a39336127f0792cf81db0616c032062f950920a0531be38a88e78d5.log
2019-08-27 00:57:00 +0000 [info]: #0 [fluentd-containers.log] following tail of /var/log/containers/nginx-proxy-westeurope-prod-platform-k8s-nodes00-03_kube-system_nginx-proxy-b2c309321867c47ddea5c2d8a74c1747d6aa7521bf2114e9afddfa25658db3be.log
2019-08-27 00:57:00 +0000 [info]: #0 [fluentd-containers.log] following tail of /var/log/containers/kube-proxy-7zntr_kube-system_kube-proxy-5832c8caee50e7972a98733f49c1175d6d2a3d43da79bbb56b34fb9c4085d7e2.log
2019-08-27 00:57:00 +0000 [info]: #0 [fluentd-containers.log] following tail of /var/log/containers/nodelocaldns-p2js4_kube-system_node-cache-7f94e73d194e7641365e489eb77082f7772c5bb93257a13cadd8c99a78d8ddc7.log
2019-08-27 00:57:00 +0000 [info]: #0 [fluentd-containers.log] following tail of /var/log/containers/nginx-ingress-nginx-public-5874845f7b-8xz4h_platform_nginx-ingress-nginx-public-0f2b7ef5a2df460e4fdc1d24762106df1b5da480f27cbfe085bd2f7947ac37a2.log
2019-08-27 00:57:00 +0000 [info]: #0 [fluentd-containers.log] following tail of /var/log/containers/prometheus-node-exporter-lthj2_platform_prometheus-node-exporter-89df1d96c771fb5b3967516b89e5b631a931f6b0b7c36d0423ebc8c1576a71bd.log
2019-08-27 00:57:00 +0000 [info]: #0 [fluentd-containers.log] following tail of /var/log/containers/cert-manager-6f59bd9578-zb4pz_platform_cert-manager-d5804f93394a6581d78595a6646ff70d31d3907628e1252ba6ae195b4384a125.log
2019-08-27 00:57:00 +0000 [info]: #0 [fluentd-containers.log] following tail of /var/log/containers/prometheus-alertmanager-77bf5669b4-mqt8m_platform_prometheus-alertmanager-226ccb5ddd9fb025fde05ca9f0c8ac97871b21a286732a8370f2d36403d78454.log
2019-08-27 00:57:00 +0000 [info]: #0 [fluentd-containers.log] following tail of /var/log/containers/prometheus-alertmanager-77bf5669b4-mqt8m_platform_prometheus-alertmanager-configmap-reload-17e0153771d385ed5b4d1426b6d8d593df3e48abee694b0eab15d2b9b16a8bb6.log
2019-08-27 00:57:00 +0000 [info]: #0 [fluentd-containers.log] following tail of /var/log/containers/weave-net-4vv9c_kube-system_weave-34b1603baa3f5dfb4c2b26415b4757ba5b718a0e9112e5a106df2d3e1d556c39.log
2019-08-27 00:57:00 +0000 [info]: #0 [fluentd-containers.log] following tail of /var/log/containers/weave-net-4vv9c_kube-system_weave-npc-c49ee718a07165917e702f18fb0d6a6c2ce2535899fc9b750eeb8a2f2309840e.log
2019-08-27 00:57:00 +0000 [info]: #0 [fluentd-containers.log] following tail of /var/log/containers/fluentd-elasticsearch-brcjq_platform_fluentd-elasticsearch-6adf43440c44aceebe96a5e6eae59736c22d252d92676d35b8cd30df3b74171a.log
2019-08-27 00:57:00 +0000 [info]: #0 fluentd worker is now running worker=0
2019-08-27 00:57:05 +0000 [error]: #0 [elasticsearch] unexpected error while checking flushed chunks. ignored. error_class=RuntimeError error="can't enqueue buffer file: path = /var/log/fluentd-buffers/kubernetes.system.buffer/buffer.b5910ebfffa9bea3d7174de2a7d93b6a5.log, error = 'No such file or directory @ rb_file_s_rename - (/var/log/fluentd-buffers/kubernetes.system.buffer/buffer.b5910ebfffa9bea3d7174de2a7d93b6a5.log, /var/log/fluentd-buffers/kubernetes.system.buffer/buffer.q5910ebfffa9bea3d7174de2a7d93b6a5.log)'"
2019-08-27 00:57:05 +0000 [error]: #0 /var/lib/gems/2.3.0/gems/fluentd-1.5.1/lib/fluent/plugin/buffer/file_chunk.rb:123:in `rescue in enqueued!'
2019-08-27 00:57:05 +0000 [error]: #0 /var/lib/gems/2.3.0/gems/fluentd-1.5.1/lib/fluent/plugin/buffer/file_chunk.rb:111:in `enqueued!'
2019-08-27 00:57:05 +0000 [error]: #0 /var/lib/gems/2.3.0/gems/fluentd-1.5.1/lib/fluent/plugin/buffer.rb:437:in `block (2 levels) in enqueue_chunk'
2019-08-27 00:57:05 +0000 [error]: #0 /usr/lib/ruby/2.3.0/monitor.rb:214:in `mon_synchronize'
2019-08-27 00:57:05 +0000 [error]: #0 /var/lib/gems/2.3.0/gems/fluentd-1.5.1/lib/fluent/plugin/buffer.rb:431:in `block in enqueue_chunk'
2019-08-27 00:57:05 +0000 [error]: #0 /usr/lib/ruby/2.3.0/monitor.rb:214:in `mon_synchronize'
2019-08-27 00:57:05 +0000 [error]: #0 /var/lib/gems/2.3.0/gems/fluentd-1.5.1/lib/fluent/plugin/buffer.rb:430:in `enqueue_chunk'
2019-08-27 00:57:05 +0000 [error]: #0 /var/lib/gems/2.3.0/gems/fluentd-1.5.1/lib/fluent/plugin/buffer.rb:473:in `block in enqueue_all'
2019-08-27 00:57:05 +0000 [error]: #0 /var/lib/gems/2.3.0/gems/fluentd-1.5.1/lib/fluent/plugin/buffer.rb:466:in `each'
2019-08-27 00:57:05 +0000 [error]: #0 /var/lib/gems/2.3.0/gems/fluentd-1.5.1/lib/fluent/plugin/buffer.rb:466:in `enqueue_all'
2019-08-27 00:57:05 +0000 [error]: #0 /var/lib/gems/2.3.0/gems/fluentd-1.5.1/lib/fluent/plugin/output.rb:1370:in `enqueue_thread_run'
2019-08-27 00:57:05 +0000 [error]: #0 /var/lib/gems/2.3.0/gems/fluentd-1.5.1/lib/fluent/plugin_helper/thread.rb:78:in `block in thread_create'
```
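A note on the `rb_file_s_rename` error at the end of this log: the buffer plugin is renaming a staged chunk (`buffer.b…`) to its enqueued name (`buffer.q…`), and `No such file or directory` at that point suggests something else already moved or deleted the chunk file, most commonly a second fluentd process using the same buffer `path`. Since `/var/log/fluentd-buffers` here looks like a hostPath shared across pod restarts, one hedged mitigation is to keep the buffer path unique per instance. A sketch, where the exact path is an assumption and `"#{ENV['HOSTNAME']}"` relies on fluentd's embedded-Ruby interpolation in double-quoted strings:

```
<buffer>
  @type "file"
  # Assumption: make the buffer directory unique per pod so two fluentd
  # processes can never race on the same chunk files.
  path "/var/log/fluentd-buffers/kubernetes.system.#{ENV['HOSTNAME']}.buffer"
  flush_mode interval
  retry_type exponential_backoff
  flush_thread_count 2
  flush_interval 5s
  retry_forever
  retry_max_interval 30
  chunk_limit_size 2M
  queue_limit_length 8
  overflow_action block
</buffer>
```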
Your Error Log
```
2019-08-27 00:57:06 +0000 [warn]: dump an error event: error_class=Fluent::Plugin::ConcatFilter::TimeoutError error="Timeout flush: kernel:default" location=nil tag="kernel" time=2019-08-27 00:57:06.319511351 +0000 record={"priority"=>"6", "boot_id"=>"522ebfe87c784a67a0e756d20ffbf9b7", "machine_id"=>"320fc5cd279b42febd7a45857a46a325", "transport"=>"kernel", "syslog_facility"=>"0", "syslog_identifier"=>"kernel", "hostname"=>"westeurope-prod-platform-k8s-nodes00-03", "message"=>"weave: port 5(vethwepl0a05e9b) entered disabled statedevice vethwepl0a05e9b left promiscuous modeweave: port 5(vethwepl0a05e9b) entered disabled stateIPv4: martian source 10.233.112.5 from 10.233.112.5, on dev datapathll header: 00000000: ff ff ff ff ff ff 96 13 c3 c9 bb 11 08 06 ..............IPv4: martian source 10.233.72.6 from 10.233.72.6, on dev datapathll header: 00000000: ff ff ff ff ff ff 92 23 42 81 98 ba 08 06 .......#B.....IPv4: martian source 10.233.112.0 from 10.233.112.5, on dev datapathll header: 00000000: ff ff ff ff ff ff 96 13 c3 c9 bb 11 08 06 ..............IPv4: martian source 10.233.72.0 from 10.233.72.6, on dev datapathll header: 00000000: ff ff ff ff ff ff 92 23 42 81 98 ba 08 06 .......#B.....IPVS: Creating netns size=2048 id=23weave: port 5(vethwepld2d4c46) entered blocking stateweave: port 5(vethwepld2d4c46) entered disabled statedevice vethwepld2d4c46 entered promiscuous modeIPv6: ADDRCONF(NETDEV_UP): vethwepld2d4c46: link is not readyIPv6: ADDRCONF(NETDEV_UP): eth0: link is not readyIPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes readyIPv6: ADDRCONF(NETDEV_CHANGE): vethwepld2d4c46: link becomes readyweave: port 5(vethwepld2d4c46) entered blocking stateweave: port 5(vethwepld2d4c46) entered forwarding stateIPv4: martian source 10.233.96.4 from 10.233.96.4, on dev eth0ll header: 00000000: ff ff ff ff ff ff 66 11 57 0c 9e 4c 08 06 ......f.W..L..IPv4: martian source 10.233.96.4 from 10.233.96.4, on dev datapathll header: 00000000: ff ff ff ff ff ff 66 11 57 0c 9e 4c 08 06 ......f.W..L..IPv4: martian source 10.233.96.0 from 10.233.96.4, on dev eth0ll header: 00000000: ff ff ff ff ff ff 66 11 57 0c 9e 4c 08 06 ......f.W..L..IPv4: martian source 10.233.96.0 from 10.233.96.4, on dev datapathll header: 00000000: ff ff ff ff ff ff 66 11 57 0c 9e 4c 08 06 ......f.W..L..", "source_monotonic_timestamp"=>"273954083382"}
```
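Incidentally, the record above also shows why the kernel lines come out mashed together ("…entered disabled statedevice vethwepl0a05e9b left promiscuous mode…"): the concat filter matches `**` with `separator ""`, so systemd and kernel events, whose `message` never ends in `\n`, are held and then joined with no delimiter when the timeout flush fires. If the concatenation is only meant for multiline container logs, a sketch that scopes the filter accordingly (an assumption about intent, not a confirmed fix):

```
<filter kubernetes.**>
  @id filter_concat
  @type concat
  key "message"
  multiline_end_regexp "/\\n$/"
  # A newline separator keeps joined lines readable if a buffer is
  # ever flushed with multiple lines in it.
  separator "\n"
</filter>
```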
Additional context
This is not a fluentd core issue, so this is not the right place. Please report it on the plugin's repository instead. Closed.