Add a feature to clean up position informations on in_tail #1126
in_tail removes untracked file positions at the start phase.
I'm not sure. I haven't received any report of a pos_file-related performance issue. (See fluentd/lib/fluent/plugin/in_tail.rb, line 746 at 9d0a8dc.)
You can see the pos_file implementation there.
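To make the pos_file discussion concrete, here is a hedged sketch of what a pruning pass over such a file could look like. It assumes the commonly described format of one tab-separated entry per line (`path`, hex offset, hex inode), with deleted files marked by the sentinel offset `ffffffffffffffff`; the function and example paths are illustrative, not fluentd's actual implementation.

```python
# Illustrative sketch (not fluentd code): prune pos_file entries whose
# offset is the "unwatched" sentinel, which marks files that were deleted.
UNWATCHED = "ffffffffffffffff"

def compact_pos_lines(lines):
    """Keep only entries that still track a live file."""
    kept = []
    for line in lines:
        parts = line.rstrip("\n").split("\t")
        if len(parts) != 3:
            continue  # skip malformed entries
        path, offset_hex, inode_hex = parts
        if offset_hex.lower() == UNWATCHED:
            continue  # file was deleted; drop the stale entry
        kept.append(line)
    return kept

example = [
    "/var/log/containers/app-1.log\t0000000000000abc\t0000000000302a1f\n",
    "/var/log/containers/gone.log\tffffffffffffffff\t0000000000000000\n",
]
print(len(compact_pos_lines(example)))  # → 1: the deleted file's entry is dropped
```

Without such a pass, every deleted file leaves a permanent stale line behind, which is exactly why the file grows without bound.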
A file that grows without bound sounds terrible in a production environment (even if it grows slowly, and even if there are no reports of a performance regression). However, we currently plan to switch from the pos file to a storage plugin for this purpose.
I added a note about this issue to http://docs.fluentd.org/articles/in_tail.
How do we get this feature (defect) fixed? This is creating a problem for us: the pos file is growing rather quickly, and the process has to keep combing through it. We see constant CPU usage even after no additional data has been written to the tailed log.
@mrkstate Run a cron job that kills fluentd every 30 minutes: https://github.com/roffe/kube-gelf/blob/master/cron.yaml
Hi @roffe, if I kill fluentd, won't it create problems for logging? E.g. if fluentd is down for 2 minutes, won't I lose the log data for those 2 minutes?
@Krishna1408 No, because the position file lets it know where to pick up from last time.
Is restarting fluentd still the only solution to this? I've just confirmed it's still happening, and it would be nice to know whether a fix is underway or whether I can try to tackle the issue.
This feature will be released in the next version; see #2805.
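For readers landing here later: the compaction work referenced above shipped as the `pos_file_compaction_interval` parameter of in_tail. A minimal sketch of enabling it, with hypothetical paths, might look like:

```
<source>
  @type tail
  path /var/log/app.log               # hypothetical log path
  pos_file /var/log/td-agent/app.pos  # hypothetical pos file path
  tag app.logs
  pos_file_compaction_interval 72h    # periodically drop unwatched entries
  <parse>
    @type none
  </parse>
</source>
```

With this set, fluentd periodically rewrites the pos file and discards stale entries instead of letting the file grow indefinitely.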
We see position file corruption with compaction enabled; see #2918.
Google Container Engine uses fluentd pods to collect container log files via the in_tail plugin and forward the logs to Stackdriver logging.
When a container is deleted, Kubernetes also deletes the container's log file, and there will never be a log file at that filesystem path again.
However, the obsolete line is never removed from the position file, even though its position value is ffffffffffffffff.
We see production clusters with position files of over 10000 lines.
Can this cause performance problems with fluentd? Should this be fixed?
The config stanza for the container log files is:
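The reporter's actual stanza was not captured in this transcript. Purely for illustration, a typical in_tail stanza for Kubernetes container logs (paths and tag are assumptions, not the reporter's configuration) looks roughly like:

```
<source>
  @type tail
  path /var/log/containers/*.log            # assumed container log path
  pos_file /var/log/fluentd-containers.pos  # assumed pos file path
  tag kubernetes.*
  read_from_head true
  <parse>
    @type json
  </parse>
</source>
```

With a glob like this, every short-lived container contributes one pos_file entry that outlives its log file, which is how the 10000-line position files described above accumulate.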