Fluentd not picking up all container logs #2348
Hi, we are experiencing a similar situation: logs are randomly missing.
Our container input configuration is like the one below:
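(The original snippet did not survive in this archive; below is a hedged reconstruction of what an in_tail source for DC/OS container sandbox logs typically looks like. The Mesos sandbox path, tag, and parse settings are placeholders, not the poster's actual values.)

```
# Hedged reconstruction — the poster's real config was not preserved.
<source>
  @type tail
  path /var/lib/mesos/slave/slaves/*/frameworks/*/executors/*/runs/latest/stdout
  pos_file /var/log/fluentd/containers.pos
  tag containers.*
  <parse>
    @type none
  </parse>
</source>
```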
We are using DC/OS, which tails log files with a setup similar to Kubernetes. All of our nodes are affected: some logs are missing. We print incoming logs to stdout on the aggregator side, but it seems these logs are never received by the aggregator. Agent 1 files:
Agent 1 pos file status:
Agent 2 files:
Agent 2 pos file status:
For these files we only got logs for some of them, like:
There are no warnings or errors in the forwarder or aggregator logs for this cron01 job, but we get a lot of "[warn]: #0 got incomplete line before first line from ...." messages for other services and crons. We have disabled multiline parsing; now fewer logs are missing, but some cron logs still go missing (the cron runs every 3 minutes). Do you have any idea?
Hi @repeatedly, thanks for this great log collector. Our problem is resolved. The problem was the cron's total run time: the cron finishes its job in under 1 minute, while the forwarder's refresh_interval defaults to 60 seconds. The cron runs and creates a new log file; the forwarder then refreshes its file list and starts tailing the cron's log file, but by that point there are no new lines because the cron has already finished. Since our forwarder configs also did not set "read_from_head true", we never got any logs from that cron. We only received some logs when the refresh_interval and the cron's run times happened to intersect. The documentation suggests setting "enable_stat_watcher false" to prevent a possible stuck issue with inotify, but we have set "enable_watch_timer false" and "enable_stat_watcher true". Does this cause any problem, @repeatedly? Thanks. We have updated our configs as below.
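For reference, a minimal sketch of the kind of change described above — the path is a placeholder and the parameter values are illustrative, not the poster's actual ones:

```
<source>
  @type tail
  path /var/log/containers/*.log           # placeholder path
  pos_file /var/log/fluentd/containers.pos
  tag containers.*
  read_from_head true        # read lines written before fluentd discovered the file
  refresh_interval 5         # rescan the glob more often than short-lived crons finish
  enable_stat_watcher false  # the docs' suggestion for inotify-related stalls
  <parse>
    @type none
  </parse>
</source>
```

With read_from_head true, a file that is discovered only after its writer has exited is still read from the beginning, so short-lived cron output is no longer lost.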
I am trying to push my application container logs, which reside in a directory inside my application container (/logs/*.log), via fluentd. I have created fluentd as a service inside my ECS cluster while my application services run in parallel. I made the handshake by passing the fluentd-address to the application, and I can even curl into my fluentd container running on the ecs-ec2 host, but somehow it cannot send the logs to my CloudWatch stream. Please find the fluent.conf in the attachment.
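(The attached fluent.conf is not preserved here. As a hedged sketch, a setup of this shape usually pairs a forward source with the fluent-plugin-cloudwatch-logs output; the region, group, and stream names below are placeholders, and the ECS task role must allow logs:CreateLogStream and logs:PutLogEvents.)

```
# Hedged sketch — assumes the fluent-plugin-cloudwatch-logs gem is installed.
<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>

<match app.**>
  @type cloudwatch_logs
  region us-east-1            # placeholder
  log_group_name my-app-logs  # placeholder
  log_stream_name my-app      # placeholder
  auto_create_stream true
</match>
```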
Hi @yusufgungor, I ran into a similar issue collecting logs from our Kubernetes cluster. We kept missing some log messages from certain pods and CronJobs, until we realized that only pods which start and stop very quickly are affected. I managed to pinpoint the issue to the scenario you described: log files are created with a few lines, and because the source dies almost immediately, fluentd only starts tailing the log file a few moments after that. I tried using …
Nevermind... I was using …
I too have experienced Fluentd not reading log files generated by a cron job every one minute. Is there any solution for that? I read the comments above and tried them myself. Instead of content from my log file, I get the following message: "unexpected error error_class=Errno::EACCES error="Permission denied @ rb_sysopen - /var/adump/adump.pos"
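(That EACCES is about the pos_file path rather than the log files themselves: the fluentd process user cannot open /var/adump/adump.pos. A hedged fix is to keep the position file in a directory the daemon user owns; the paths below are illustrative.)

```
<source>
  @type tail
  path /var/adump/*.log   # assumed path, based on the error message
  # Keep the position file in a directory writable by the fluentd user
  # (often td-agent); the EACCES above means it cannot open the old path.
  pos_file /var/log/fluentd/adump.pos
  tag adump
  read_from_head true
  <parse>
    @type none
  </parse>
</source>
```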
This seems to be an issue for which no direct solution is available. We have a similar configuration: only stdout and stderr are picked up, and other logs are ignored! Is there any solution? Here is our configuration:
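(The configuration itself was lost in this archive. One common cause of this exact symptom, sketched below with placeholder paths, is a path glob that only matches the stdout/stderr files; in_tail accepts a comma-separated list, so the other log locations can simply be added.)

```
<source>
  @type tail
  # Comma-separated globs: stdout/stderr files plus the ignored application logs.
  path /var/log/containers/*.log,/var/log/app/*.log   # placeholders
  pos_file /var/log/fluentd/app.pos
  tag app.*
  read_from_head true
  <parse>
    @type none
  </parse>
</source>
```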
This issue has been automatically marked as stale because it has been open for 90 days with no activity. Remove the stale label or comment, or this issue will be closed in 30 days.
This issue was automatically closed because it remained stale for 30 days.
Original issue description:
Fluentd version: 1.4.1
Operating system: Debian 9
Kernel version: 4.14.65+
Logging shows:
However, I can see there are more logs that fluentd should have discovered:
Am I missing something here?