Refactor the shutdown sequence of this plugin to follow the new semantics #55
Conversation
Force-pushed 8e4fdb0 to 7c64775
delete_file_from_bucket(object)
FileUtils.remove_entry_secure(filename, true)

if download_remote_file(object, filename)
I wonder why this refactoring is needed. Do you think this is properly covered by tests?
Yes, this was needed.
You can download pretty large files from an S3 bucket, so how do we know if the download is complete?
If it isn't, we have to delete the partial file, because when Logstash starts again it will read the incomplete file, the sincedb will be updated, and it will never redownload the file, possibly losing events.
If you look at download_remote_file
or process_local_log
the changes aren't that big.
It's just a way to know that something went wrong and to get out of the processing loop as quickly as possible without throwing any errors.
There are a few unit tests covering this part and minimal integration tests.
Force-pushed 7c64775 to ebe47cc
@talevy updated with your comments
LGTM
The changes LGTM
I'll rebase this and merge it, thanks for the reviews!
This plugin is now interruptible. It requires a bit more care to shut down for a few reasons:
1. It can be stuck streaming a file to disk from an S3 bucket.
2. It can be stuck reading a large file.
For case 1, we cancel the streaming, delete the incomplete temp file, and do not update the sincedb.
For case 2, we cancel reading the file and do not update the sincedb; this will result in duplicates in the logging stream, which is fine for now since the S3 plugin doesn't support saving offsets yet. See #54.
This PR also adds a fix for non-deterministic test runs in a local environment by specifying a new sincedb for each run.
Fixes: #53 #42
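The two shutdown cases above boil down to checking a stop flag at each step of the processing loop and only recording sincedb progress for files that were fully read. A minimal sketch, assuming a hypothetical `@stop` flag and class name (Logstash's real shutdown API differs):

```ruby
# Sketch of an interruptible processing loop. S3ReaderSketch, @stop,
# and process_objects are illustrative names, not the plugin's API.
class S3ReaderSketch
  attr_reader :sincedb

  def initialize
    @stop = false
    @sincedb = {}
  end

  # Called from the shutdown path to request an orderly exit.
  def stop!
    @stop = true
  end

  # objects maps an S3 key to its lines (stand-in for streamed content).
  def process_objects(objects)
    objects.each do |key, lines|
      break if @stop                # case 1: stop before streaming the next file
      lines.each do |line|
        break if @stop              # case 2: stop mid-read of a large file
        handle(line)
      end
      # Only record progress for fully read files, so an interrupted
      # file is picked up again (with possible duplicates) on restart.
      @sincedb[key] = Time.now unless @stop
    end
  end

  def handle(line); end             # the real plugin would emit an event here
end
```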
Force-pushed ebe47cc to dbf7745
Merged successfully into master!