timekey in buffer config is not used for s3 output frequency, is there any way to change it? #348
So, I finally found a bit of consistency in controlling the file output frequency. After testing a bunch of different timekey values, I noticed that it always writes out 11 files per timekey period. With a timekey of 60s it creates 11 files per minute, at 120s about 5-6 files per minute, and at 240s 2-3 files per minute. I changed the timekey to 660, and as expected it now creates one file per minute. The server is continuously receiving logs every second, and the number and size of incoming logs varies throughout the day, yet it consistently creates 1 file per minute (11 per timekey).

One file per minute is my desired setting, and this technically achieves it, but it's pretty gross. Also, having the timekey larger than 60 causes the %M variable to represent the minute within the timekey period (0, 11, 22, etc.) instead of the actual current minute from the system clock, so I can't really use it in path or s3_object_key_format. Am I the only one seeing this?
I have tried various settings, and none of them changed the observed behavior.
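For reference, a minimal fluent-plugin-s3 match block with a time-keyed buffer, roughly the shape being discussed here, looks like the sketch below. The match pattern, bucket, region, and buffer path are placeholders, not values taken from this report.

    <match app.**>
      @type s3
      s3_bucket example-bucket              # placeholder
      s3_region us-east-1                   # placeholder
      path logs/
      # %{time_slice} is time_slice_format rendered from the chunk's time key,
      # which is consistent with %M tracking the timekey boundary rather than
      # the wall clock once timekey is larger than 60, as described above.
      time_slice_format %Y%m%d%H%M
      s3_object_key_format %{path}%{time_slice}_%{index}.%{file_extension}
      <buffer time>
        @type file
        path /var/fluentd/s3-buffer         # placeholder
        timekey 60                          # intended: one time slice per minute
        timekey_wait 10s
        timekey_use_utc true
      </buffer>
    </match>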
We are also witnessing this weirdness. Why 11, where does it come from, and is it possible to adjust or configure it? These other issues here might be because of the same reason: #339, #315, #237
It appears that the issue is not with this S3 plugin, but rather with the way the Fluentd output buffer is configured.
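One documented piece of fluentd v1 buffering that produces exactly this symptom: timekey only drives the write schedule when time is one of the chunk keys and flush_mode resolves to lazy; otherwise chunks are flushed on the flush_interval schedule (60 seconds by default) and timekey has no effect on the output frequency. The buffer paths below are placeholders, and this is a sketch of the two shapes rather than the configuration from this thread.

    # Shape 1: no "time" chunk key. flush_mode resolves to "interval",
    # so chunks are written every flush_interval regardless of timekey.
    <buffer>
      @type file
      path /var/fluentd/s3-buffer           # placeholder
      flush_interval 60s
    </buffer>

    # Shape 2: "time" is a chunk key. flush_mode resolves to "lazy",
    # so each timekey period is written once, timekey_wait after it closes.
    <buffer time>
      @type file
      path /var/fluentd/s3-buffer           # placeholder
      timekey 60
      timekey_wait 10s
    </buffer>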
This issue has been automatically marked as stale because it has been open for 90 days with no activity. Remove the stale label or comment, or this issue will be closed in 30 days.
This issue was automatically closed because it remained stale for 30 days.
It seems that the buffer timekey is not used at all to determine how frequently files are written to S3, even though the fluentd documentation describes timekey as controlling exactly that.
The README in this repo does seem to indicate that it writes logs as it receives them, but there isn't any mention of how to control how frequently logs are written to S3.
I have tried every combination of timekey, flush_interval, and chunk limits on both file and memory buffers, changed time_slice_format, and tried every other setting I could think of, and there seems to be no way to change how often files are written out to S3. I even updated from 4.0.0 to 4.0.1, since the release notes mentioned a "timekey optimization", but that didn't help.
Is there a recommended method to receive a constant stream of logs and write them out to S3 at specific intervals, such as creating only one file per minute?
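Assuming the fluentd v1 flush behavior sketched above, one file per minute comes down to keying the buffer on time with timekey 60 rather than relying on flush_interval. The values below are placeholders; output lags by timekey_wait, and a single minute can still be split into several objects (distinguished by %{index} in s3_object_key_format) if chunk_limit_size is exceeded.

    # Inside the <match> block that declares @type s3:
    s3_object_key_format %{path}%{time_slice}_%{index}.%{file_extension}
    time_slice_format %Y%m%d%H%M     # minute-level object names
    <buffer time>
      @type file
      path /var/fluentd/s3-buffer    # placeholder
      timekey 60                     # one chunk, and normally one object, per minute
      timekey_wait 10s               # grace period for late events before upload
      timekey_use_utc true
      chunk_limit_size 256m          # keep a full minute in one chunk so %{index} stays 0
    </buffer>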