Fix Elasticsearch output retry backoff when receiving 429s #45073

faec merged 6 commits into elastic:main
Conversation
This pull request does not have a backport label. To fixup this pull request, you need to add the backport labels for the needed branches.
Pinging @elastic/elastic-agent-data-plane (Team:Elastic-Agent-Data-Plane)
leehinman left a comment:

Code changes LGTM. I'm wondering about the cost/benefit of a test to make sure we keep this behavior in the future. What do you think?

Reply:

Yeah, I waffled about this, since it's one simple check in the middle of a function that expects a live connection. But OK, I split the return value off into a helper function based on the computed stats, and unit tested that. It's pretty simple, but it will keep someone from accidentally skipping the check if they reorganize that section of the code.
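A rough sketch of what that helper might look like; the names (`bulkStats`, `publishResult`) are hypothetical stand-ins, not the actual code in this PR:

```go
// Hypothetical helper in the spirit of the comment above: derive
// Publish's return value purely from accumulated bulk-response stats,
// so the 429 check can be unit tested without a live connection.
package esout

import "errors"

// bulkStats is an illustrative stand-in for the counters gathered while
// walking an Elasticsearch bulk response.
type bulkStats struct {
	acked   int // events indexed successfully
	fails   int // events that failed and will be retried
	tooMany int // events rejected with 429 Too Many Requests
}

var errTooManyRequests = errors.New("bulk request throttled with 429 Too Many Requests")

// publishResult computes the error Publish should return. Any throttled
// events must surface an error so the pipeline applies its retry backoff.
func publishResult(st bulkStats) error {
	switch {
	case st.tooMany > 0:
		return errTooManyRequests
	case st.fails > 0:
		return errors.New("some events failed and will be retried")
	default:
		return nil
	}
}
```

A table-driven unit test over a handful of `bulkStats` values can then pin the 429 behavior, so reorganizing the surrounding function can't silently drop the check.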
@Mergifyio backport 8.17 8.18 8.19 9.0 9.1
✅ Backports have been created
The fix was backported in #45095, #45096, #45097, #45098, and #45099, each cherry-picked from commit 8b25d5b with the same description as this PR.

Co-authored-by: Fae Charlton <fae.charlton@elastic.co>
Co-authored-by: Pierre HILBERT <pierre.hilbert@elastic.co>
See #36926. This fix has two components:

- Return an error from `Publish` when the Elasticsearch output gets a 429 (too many requests) from Elasticsearch. This triggers a retry delay and reconnection attempt in the pipeline.
- Break the backoff counters for `Publish` and `Connect` into separate values, so a successful `Connect` call (which for Elasticsearch just means that an empty HTTP GET gave an OK response) doesn't reset the exponential backoff for bulk ingest requests when they are being throttled. (A sketch of this separation follows below.)
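A minimal sketch of the separated-backoff idea, assuming hypothetical names (`backoff`, `publishBackoff`, `connectBackoff`, `errThrottled`) rather than the actual fields in `libbeat/outputs/elasticsearch/client.go`:

```go
// Sketch only: separate exponential-backoff state for connection checks
// and bulk publish requests, so a healthy Connect cannot reset the
// publish backoff while Elasticsearch is still returning 429s.
// All names here are illustrative, not the real fields in client.go.
package main

import (
	"errors"
	"fmt"
	"time"
)

type backoff struct {
	current, max time.Duration
}

// next returns the current delay and doubles it for the following retry.
func (b *backoff) next() time.Duration {
	d := b.current
	b.current *= 2
	if b.current > b.max {
		b.current = b.max
	}
	return d
}

func (b *backoff) reset() { b.current = time.Second }

type client struct {
	connectBackoff backoff // reset only by a successful Connect
	publishBackoff backoff // reset only by a successful Publish
}

var errThrottled = errors.New("bulk request rejected with 429 Too Many Requests")

// Publish reports throttling as an error so the caller waits out the
// publish backoff instead of retrying immediately.
func (c *client) Publish(statusCode int) error {
	if statusCode == 429 {
		return errThrottled
	}
	c.publishBackoff.reset() // success: start the next failure sequence fresh
	return nil
}

// Connect succeeding resets only its own counter; the publish backoff
// keeps growing while ingest is throttled.
func (c *client) Connect() { c.connectBackoff.reset() }

func main() {
	c := &client{
		connectBackoff: backoff{current: time.Second, max: time.Minute},
		publishBackoff: backoff{current: time.Second, max: time.Minute},
	}
	for i := 0; i < 3; i++ {
		if err := c.Publish(429); err != nil {
			c.Connect() // health check passes, but publish backoff is untouched
			fmt.Printf("retrying in %s: %v\n", c.publishBackoff.next(), err)
		}
	}
}
```

With a shared counter, the equivalent of `c.Connect()` succeeding would reset the delay, so throttled bulk requests would retry at the minimum interval indefinitely; with the split counters above, the retry delay grows (1s, 2s, 4s, ...) until ingest recovers.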
Checklist

- I have made corresponding changes to the documentation
- I have made corresponding changes to the default configuration files
- I have added tests that prove my fix is effective or that my feature works
- I have added an entry in `CHANGELOG.next.asciidoc` or `CHANGELOG-developer.next.asciidoc`.

How to test this PR locally
This comment on the issue has local testing instructions.
Related issues

- #36926