
Cherry-pick to 5.3: Fix close_timeout for new started harvesters #3718

Merged · 1 commit · Mar 3, 2017

Conversation

@ruflin (Member) commented Mar 3, 2017

Cherry-pick of PR #3715 to the 5.3 branch. Original message:

If `close_timeout` is set, the file handler inside a harvester is closed once `close_timeout` is reached, even if the output is blocked. For new files that were found for harvesting while the output was blocked, it was possible that the setup of the file happened but the initial state could not be sent. As a result the harvester was never started, the file handler stayed open, and `close_timeout` did not apply. Setup and state update are now swapped. This has the effect that, in case of an error during the setup phase, the state must be reverted again.

See #3091 (comment)

(cherry picked from commit e13a7f2)
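For context, a minimal Go sketch of the reordered start logic described above. The `harvester`, `prospector`, `setup`, and `updateState` names are hypothetical stand-ins, not the actual Beats code: the point is that the state is now sent before the file setup, and reverted if setup fails, so a blocked output can no longer leave an open file handler without a running harvester.

```go
package main

import (
	"errors"
	"fmt"
)

// Hypothetical, simplified stand-ins for the Filebeat harvester and
// prospector types; names and signatures are illustrative only.
type state struct{ offset int64 }

type harvester struct {
	state state
	fail  bool
}

// setup stands in for opening the file and preparing the harvester.
func (h *harvester) setup() error {
	if h.fail {
		return errors.New("setup failed")
	}
	return nil
}

type prospector struct{ states map[string]state }

func (p *prospector) updateState(id string, s state) { p.states[id] = s }

// startHarvester sketches the reordered logic from the PR description:
// the state update happens first, then the setup; on a setup error the
// state update is reverted so no stale entry remains.
func startHarvester(p *prospector, h *harvester, id string) error {
	prev, hadPrev := p.states[id]
	p.updateState(id, h.state) // state update now happens before setup

	if err := h.setup(); err != nil {
		// Revert the state, as the PR description requires on setup errors.
		if hadPrev {
			p.updateState(id, prev)
		} else {
			delete(p.states, id)
		}
		return err
	}
	return nil
}

func main() {
	p := &prospector{states: map[string]state{}}
	if err := startHarvester(p, &harvester{fail: true}, "log-1"); err != nil {
		fmt.Println("harvester not started, state reverted:", err)
	}
}
```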
@ruflin (Member, Author) commented Mar 3, 2017

jenkins, retest it

@urso urso merged commit 16ccdae into elastic:5.3 Mar 3, 2017