Kubernetes container logs - duplicate logs found when using read_bytes_limit_per_second parameter #3464
Comments
It looks like all the logs in the 0.log file are duplicated.
I tested with both read_bytes_limit_per_second=100000 and 500000 and found the same duplicated-logs issue.
There are two causes of this issue:
* The wrong inode is set on the TailWatcher when follow_inode is true
* A key (TargetInfo) in @tails isn't updated for the same path even if the new one has a different inode

Fix #3464
Signed-off-by: Takuro Ashie <[email protected]>
Thanks for your report.
Hey @kenhys, thanks a lot for the quick fix! I have tested it out and am no longer seeing this issue.
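The second cause named in the fix commit (a stale TargetInfo key in @tails) can be illustrated with a minimal sketch. This is not fluentd's actual implementation; it just shows why a hash key that compares path and inode keeps the stale watcher alive after rotation:

```ruby
# Sketch only: if the hash key compares path AND inode, a rotated file
# (same path, new inode) inserts a second entry instead of replacing
# the stale one, so the old watcher keeps tailing -> duplicate logs.
TargetInfo = Struct.new(:path, :ino)

tails = {}
tails[TargetInfo.new("/var/log/pods/app/0.log", 100)] = :old_watcher

# Log rotation: same path, new inode.
tails[TargetInfo.new("/var/log/pods/app/0.log", 200)] = :new_watcher

# Both entries now coexist for a single path.
puts tails.size  # prints 2
```

Because Ruby Structs compare by value, the two keys differ in inode and therefore never collide, which matches the "key isn't updated for the same path" description above.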
Describe the bug
Continuing from #3434: I followed @ashie's suggestion and ran the stress test again on our EFK stack with the read_bytes_limit_per_second parameter and Fluentd v1.13.2. However, I found that logs are duplicated in Elasticsearch.

To Reproduce
Container Runtime Version: containerd://1.4
Kubelet logging configuration: --container-log-max-files=50 --container-log-max-size=100Mi
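For context, the throttling parameter under test is set on fluentd's in_tail source. A hedged sketch of such a source block (paths and values here are illustrative, not the reporter's exact values.yaml):

```
<source>
  @type tail
  path /var/log/containers/*.log
  pos_file /var/log/fluentd-containers.log.pos
  tag kubernetes.*
  read_from_head true
  follow_inodes true
  read_bytes_limit_per_second 100000
  <parse>
    @type json
  </parse>
</source>
```

With follow_inodes true, in_tail tracks files by inode across rotation, which is the code path the fix commit above touches.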
Expected behavior
Expected Kibana to receive all 1,200,000 logs. However, it received 1,622,486 entries.
Your Environment
Your Configuration
Fluentd Chart values.yaml:

Your Error Log
Additional context
When the log is rotated, two entries are added to the pos file.
Pos file:
One example of the duplicated log:
The above log is found duplicated in Kibana. However, it appears only once in the 0.log file.
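The double pos-file entries mentioned above can be spotted mechanically. Assuming the usual in_tail pos-file layout of one tab-separated line per watched file (path, offset in hex, inode in hex), a small sketch that flags paths with more than one entry (the sample lines are made up, not the reporter's actual pos file):

```ruby
# Hedged sketch: flag paths that appear more than once in an in_tail
# pos file, a symptom of the stale-watcher bug discussed in this issue.
def duplicate_paths(pos_file_text)
  seen = Hash.new(0)
  pos_file_text.each_line do |line|
    path, _offset, _inode = line.chomp.split("\t")
    seen[path] += 1
  end
  seen.select { |_path, n| n > 1 }.keys
end

sample = <<~POS
  /var/log/pods/app/0.log\t0000000000000100\t0000000000001234
  /var/log/pods/app/0.log\t0000000000000000\t0000000000005678
  /var/log/pods/app/1.log\t0000000000000200\t0000000000009abc
POS

puts duplicate_paths(sample)  # prints /var/log/pods/app/0.log
```

A healthy pos file should have at most one live entry per path; two entries with different inodes for the same path mean two watchers may be emitting the same lines.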