Use flush_thread_count value for queued_chunks_limit_size when queued_chunks_limit_size is not specified #2173

Merged

Conversation

@repeatedly (Member) commented Nov 9, 2018

An unlimited queued_chunks_limit_size causes unexpected resource consumption, e.g. lots of small files with the file buffer, when
the destination has a problem and flush_interval is small.
The default behaviour should be safe, so use flush_thread_count by default.
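For context, the two parameters interact in the buffer section of an output plugin's configuration. The following is a hypothetical example (plugin and values are illustrative only, not from this PR) showing the case the patch addresses, where queued_chunks_limit_size is omitted and now falls back to flush_thread_count:

```
<match app.**>
  @type file
  path /var/log/fluent/app
  <buffer>
    @type file
    flush_interval 1s
    flush_thread_count 4
    # queued_chunks_limit_size is not specified, so after this patch it
    # defaults to flush_thread_count (4) instead of being unlimited
  </buffer>
</match>
```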

Signed-off-by: Masahiro Nakagawa [email protected]

…_chunks_limit_size is not specified

An unlimited queued_chunks_limit_size causes unexpected resource consumption when
the destination has a problem and flush_interval is small.
The default behaviour should be safe, so use flush_thread_count by default.

Signed-off-by: Masahiro Nakagawa <[email protected]>
@repeatedly repeatedly added v1 enhancement Feature request or improve operations labels Nov 9, 2018
@repeatedly repeatedly self-assigned this Nov 9, 2018
@repeatedly repeatedly requested a review from cosmo0920 November 9, 2018 04:29
@cosmo0920 (Contributor) left a comment


The patch's intention seems good,
but some test cases conflict between the old and new behaviour.
Could you check?

```diff
@@ -372,7 +372,7 @@ def write(metadata_and_data, format: nil, size: nil, enqueue: false)
   end

   def queue_full?
-    @queued_chunks_limit_size && (synchronize { @queue.size } >= @queued_chunks_limit_size)
+    synchronize { @queue.size } >= @queued_chunks_limit_size
```
@cosmo0920 (Contributor) commented


The buffer test complains that @queued_chunks_limit_size is nil and cannot be compared.
Could you check?
https://travis-ci.org/fluent/fluentd/jobs/452714881#L547
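The failure can be reproduced in isolation. This sketch (variable names are illustrative, not fluentd's actual test code) shows why comparing the Integer queue size against a nil limit raises once the nil guard is removed from queue_full?:

```ruby
# With the guard removed, a nil limit is compared directly against the
# Integer queue size; Integer#>= raises ArgumentError when given nil.
queued_chunks_limit_size = nil  # user did not configure the limit
queue_size = 3

begin
  queue_size >= queued_chunks_limit_size
rescue ArgumentError => e
  puts e.message
end
```

This is exactly why the patch also needs to guarantee that the limit is never nil, which the configure-time defaulting below provides.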

```diff
@@ -354,6 +354,10 @@ def configure(conf)
         log.warn "'flush_interval' is ignored because default 'flush_mode' is not 'interval': '#{@flush_mode}'"
       end
     end

+    if @buffer.queued_chunks_limit_size.nil?
+      @buffer.queued_chunks_limit_size = @buffer_config.flush_thread_count
+    end
```
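The defaulting logic can be sketched standalone; `Buffer` and `BufferConfig` below are simplified stand-ins for fluentd's real classes, used only to illustrate the nil check:

```ruby
# Simplified stand-ins for the buffer and its configuration.
Buffer = Struct.new(:queued_chunks_limit_size)
BufferConfig = Struct.new(:flush_thread_count)

buffer = Buffer.new(nil)      # queued_chunks_limit_size not specified
config = BufferConfig.new(4)  # illustrative flush_thread_count value

# Same shape as the check in configure: fall back to flush_thread_count
# when the user did not set a limit, so queue_full? never compares nil.
if buffer.queued_chunks_limit_size.nil?
  buffer.queued_chunks_limit_size = config.flush_thread_count
end

puts buffer.queued_chunks_limit_size  # => 4
```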
@cosmo0920 (Contributor) commented


@i.buffer.queue.size always seems to be assumed to be 1 in the test cases.
Could you check?
https://travis-ci.org/fluent/fluentd/jobs/452714881#L654

We have tests that use small queued_chunks_limit_size values, so
setting a larger value (100 in this patch) is enough for the existing tests.

Signed-off-by: Masahiro Nakagawa <[email protected]>
@repeatedly (Member, Author) commented

I forgot to change the tests. Just pushed the fix.

@cosmo0920 (Contributor) left a comment


LGTM!
I've run the test cases locally and confirmed that all tests pass.

@repeatedly repeatedly merged commit a8d2bbb into master Nov 9, 2018
@repeatedly repeatedly deleted the change-queued_chunks_limit_size-default-behaviour branch November 9, 2018 16:29
christianberg added a commit to zalando-incubator/kubernetes-on-aws that referenced this pull request Mar 5, 2019
This increases the number of chunks that can be queued to be sent to
S3. The [documentation][1] claims that this number is unlimited when
not set, but the default was in fact [recently set][2] to `1`, which
causes a backlog of chunks to build up when there is a larger number
of log files.

[1]: https://docs.fluentd.org/v1.0/articles/buffer-section#buffering-parameters
[2]: fluent/fluentd#2173
christianberg added a commit to zalando-incubator/kubernetes-on-aws that referenced this pull request Mar 5, 2019
njuettner pushed a commit to zalando-incubator/kubernetes-on-aws that referenced this pull request Mar 7, 2019