service: |
  [SERVICE]
      Daemon        Off
      Flush         x
      Log_Level     x
      Parsers_File  /fluent-bit/etc/parsers.conf
      Parsers_File  /fluent-bit/etc/conf/custom_parsers.conf
      HTTP_Server   On
      HTTP_Listen   x
      HTTP_Port     x
      Health_Check  On
inputs: |
  [INPUT]
      Name              tail
      Path              /logs/y*
      Tag               logs
      Mem_Buf_Limit     100MB
      Skip_Long_Lines   Off
      Refresh_Interval  30
      Path_Key          filename
      Skip_Empty_Lines  On

  [INPUT]
      Name              tail
      Path              /logs/x/*
      Tag               logs
      Mem_Buf_Limit     100MB
      Skip_Long_Lines   Off
      Refresh_Interval  30
      Path_Key          filename
      Skip_Empty_Lines  On
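As a side note on the tail inputs above (a hedged sketch, not part of the original report): the tail plugin's `DB` option records file offsets in a local SQLite file, so a restarted pod does not re-read files from the beginning. It does not prevent several pods from tailing the same shared files, since each pod keeps its own state; the `DB` path below is hypothetical.

```
[INPUT]
    Name  tail
    Path  /logs/x/*
    Tag   logs
    # Hypothetical offset database; keep it on pod-local storage,
    # not the shared volume, since SQLite files must not be shared
    # between pods.
    DB    /var/log/flb_x.db
```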
filters: |
  [FILTER]
      Name                  multiline
      Match                 *
      multiline.key_content log
      multiline.parser      java

  [FILTER]
      Name   modify
      Match  *
      Add    cluster_name $var
      Add    cluster_id   $var
outputs: |
  [OUTPUT]
      Name                 es
      Match                logs
      Host                 ip
      http_user            x
      http_passwd          y
      tls                  On
      tls.verify           Off
      Logstash_Format      Off
      Index                logs
      Logstash_DateFormat  %Y.%m
      Suppress_Type_Name   On
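One mitigation on the output side (a sketch, not part of the original report): the es output plugin's `Generate_ID` option derives the Elasticsearch `_id` from a hash of the record, so a record shipped more than once overwrites the existing document instead of creating a second one. This reduces visible duplicates in Kibana, but it does not stop the pods from reading the same files twice.

```
[OUTPUT]
    Name         es
    Match        logs
    Host         ip
    # Hash-based _id per record: a re-sent record overwrites
    # the existing document rather than duplicating it.
    Generate_ID  On
```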
customParsers: |
  [PARSER]
      Name        docker_no_time
      Format      json
      Time_Keep   Off
      Time_Key    time
      Time_Format %Y-%m-%dT%H:%M:%S.%L

  [PARSER]
      Name   log_parser
      Format regex
      Regex  \[(?<timestamp>.*)\]\[(?<correlationID>.*)\]\[(?<severity>.*)\]\[(?<class>.*)\]\[(?<thread>.*)\] (?<text>.*)

  [PARSER]
      Name   equipment_parser
      Format regex
      Regex  ^(?<log>.*)$

  [PARSER]
      Name   kafka_parser
      Format regex
      Regex  \[(?<timestamp>.*)\] (?<severity>FATAL|ERROR|WARN|INFO|DEBUG|TRACE) (?<text>.*)

  [PARSER]
      Name   postgres_parser
      Format regex
      Regex  ^(?<timestamp>.{23}).*(?<severity>FATAL|ERROR|WARN|INFO|DEBUG|TRACE|LOG|DETAIL): (?<text>.*)
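For reference, hypothetical log lines that the regex parsers above would accept (the timestamps and messages are invented examples, not taken from the report):

```
[2024-05-01 12:00:00,123][abc-123][INFO][com.example.Service][main] request handled   <- log_parser
[2024-05-01 12:00:00,123] INFO Kafka broker started                                   <- kafka_parser
2024-05-01 12:00:00.123 UTC [77] LOG: statement: SELECT 1                             <- postgres_parser
```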
If you're mounting the same files into each pod and telling every pod to read those files, then yes, each file will be read once per pod; that's what you've told it to do. You need to restrict which files each pod reads.
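One way to restrict this with the fluent-bit Helm chart (a sketch, assuming the chart exposes `kind` and `replicaCount` values as the upstream fluent/helm-charts chart does): run the instance that reads the shared PVC as a single-replica Deployment instead of a DaemonSet, so exactly one pod tails those files.

```yaml
# values.yaml (hypothetical override for the shared-volume reader)
kind: Deployment   # instead of the default DaemonSet
replicaCount: 1    # a single pod, so each shared file is read once
```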
Bug Report
Describe the bug
Fluent Bit pods read the same log files from the same path with the tail plugin, producing duplicate records.
To Reproduce
Kubernetes
Fluent Bit Helm chart (DaemonSet)
Elasticsearch
Steps to reproduce the problem:
My Kubernetes cluster has 3 nodes, so 3 Fluent Bit pods are running.
In the values.yaml of the Fluent Bit chart, I configured a volume and volumeMount.
That path contains log files.
When I read the logs with the tail plugin and ship them with the es output,
each log line is replicated once per Fluent Bit pod:
with 2 pods every line appears twice. With only 1 Fluent Bit pod, the logs were not duplicated.
Expected behavior
The logs should not be duplicated in Elasticsearch/Kibana.
Code
extraVolumes:
  - persistentVolumeClaim:
      claimName: x-pvc
extraVolumeMounts:
  - subPath: /logs/x/
    mountPath: /logs/x/
  - subPath: /logs/y/
    mountPath: /logs/y/
config:
  service: |
  inputs: |
  filters: |
  outputs: |
  customParsers: |
Additional context
I am trying to ship the logs from a specific path on a volume!