
Logs duplicated when Fluent Bit reads from a mounted volume #9460

Open
daVidy31 opened this issue Oct 2, 2024 · 2 comments
daVidy31 commented Oct 2, 2024

Bug Report

Describe the bug

Multiple Fluent Bit pods each ship the same log lines from the same path when using the tail plugin.

To Reproduce

  • kubernetes

  • fluentbit helm chart (daemonset)

  • elastic search

  • Steps to reproduce the problem:
    My Kubernetes cluster has 3 nodes, so the DaemonSet runs 3 Fluent Bit pods.

In the values.yaml of the Fluent Bit chart, I configured a volume and volume mount. The mounted path contains log files.

When Fluent Bit reads those logs with the tail plugin and outputs them with es, each log line appears in Elasticsearch once per Fluent Bit pod: with 2 pods every line is duplicated, and with 3 pods it appears three times. When I tried with only 1 Fluent Bit pod, the lines were not duplicated.

Expected behavior
The logs are not duplicated in Elasticsearch/Kibana.

Code

extraVolumes:
  - name: volume
    persistentVolumeClaim:
      claimName: x-pvc

extraVolumeMounts:
  - name: volume
    subPath: /logs/x/
    mountPath: /logs/x/
  - name: volume
    subPath: /logs/y/
    mountPath: /logs/y/

config:
service: |

[SERVICE]
    Daemon Off
    Flush x
    Log_Level x
    Parsers_File /fluent-bit/etc/parsers.conf
    Parsers_File /fluent-bit/etc/conf/custom_parsers.conf
    HTTP_Server On
    HTTP_Listen x
    HTTP_Port x
    Health_Check On

inputs: |

[INPUT]
    Name tail
    Path /logs/y*
    Tag logs
    Mem_Buf_Limit 100MB
    Skip_Long_Lines Off
    Refresh_Interval 30
    Path_Key filename
    Skip_Empty_Lines On

[INPUT]
    Name tail
    Path /logs/x/*
    Tag logs
    Mem_Buf_Limit 100MB
    Skip_Long_Lines Off
    Refresh_Interval 30
    Path_Key filename
    Skip_Empty_Lines On

filters: |

[FILTER]
    name multiline
    match *
    multiline.key_content log
    multiline.parser java

[FILTER]
    Name modify
    Match *
    Add cluster_name $var
    Add cluster_id $var

outputs: |

[OUTPUT]
    Name es
    Match logs
    Host ip
    http_user     x
    http_passwd   y
    tls On
    tls.verify off
    Logstash_Format off
    Index logs
    Logstash_DateFormat %Y.%m
    Suppress_Type_Name On

customParsers: |

[PARSER]
    Name docker_no_time
    Format json
    Time_Keep Off
    Time_Key time
    Time_Format %Y-%m-%dT%H:%M:%S.%L

[PARSER]
    Name log_parser
    Format regex
    Regex \[(?<timestamp>.*)\]\[(?<correlationID>.*)\]\[(?<severity>.*)\]\[(?<class>.*)\]\[(?<thread>.*)\] (?<text>.*)

[PARSER]
    Name equipment_parser
    Format regex
    Regex ^(?<log>.*)$

[PARSER]
    Name kafka_parser
    Format regex
    Regex \[(?<timestamp>.*)\] (?<severity>FATAL|ERROR|WARN|INFO|DEBUG|TRACE) (?<text>.*)

[PARSER]
    Name postgres_parser
    Format regex
    Regex ^(?<timestamp>.{23}).*(?<severity>FATAL|ERROR|WARN|INFO|DEBUG|TRACE|LOG|DETAIL): (?<text>.*)

Additional context
I am trying to ship logs from a specific path on a mounted volume.

@patrick-stephens
Contributor

If you're mounting the same files into every pod and telling every pod to read them, then yes, each file will be read once per pod: that's what you've configured. You need to restrict which files each pod reads.
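When restricting the readers is not practical, a complementary workaround is to make the Elasticsearch writes idempotent so a record sent by two pods overwrites itself instead of accumulating. A minimal sketch of the reporter's es output with the plugin's Generate_ID option enabled (host, credentials, and index here are placeholders from the original report, and deduplicating by _id does cost some indexing throughput):

```
[OUTPUT]
    Name es
    Match logs
    Host ip
    tls On
    tls.verify off
    Index logs
    Suppress_Type_Name On
    # Generate_ID hashes each record into the document _id, so the same
    # record delivered by several pods updates one document instead of
    # creating duplicates.
    Generate_ID On
```

This only masks the duplication on the Elasticsearch side; the shared files are still tailed once per pod.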

@daVidy31
Author

daVidy31 commented Oct 4, 2024

Hello, how do I restrict that in the values.yaml?
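Since the PVC is shared across nodes, one way to ensure the files are tailed exactly once is to run a single Fluent Bit instance for that volume rather than one pod per node. A minimal sketch, assuming the upstream fluent/helm-charts fluent-bit values schema, where kind and replicaCount are standard chart values:

```
# values.yaml (fluent-bit Helm chart)
# Run one Fluent Bit instance for the shared PVC instead of a
# DaemonSet pod on every node, so each file has a single reader.
kind: Deployment
replicaCount: 1
```

If node-local logs also need collecting, a second chart release can keep the DaemonSet for those paths while this Deployment handles the shared volume.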
