GH-2496: Reuse retry topic for maxInterval delay #2497
@@ -146,7 +146,8 @@ public RetryTopicConfiguration myOtherRetryTopic(KafkaTemplate<String, MyOtherPo
----
====

-NOTE: The retry topics' and dlt's consumers will be assigned to a consumer group with a group id that is the combination of the one you provide in the `groupId` parameter of the `@KafkaListener` annotation with the topic's suffix. If you don't provide any, they'll all belong to the same group, and a rebalance on a retry topic will cause an unnecessary rebalance on the main topic.
+NOTE: The retry topics' and dlt's consumers will be assigned to a consumer group with a group id that is the combination of the one you provide in the `groupId` parameter of the `@KafkaListener` annotation with the topic's suffix.
+If you don't provide any, they'll all belong to the same group, and a rebalance on a retry topic will cause an unnecessary rebalance on the main topic.
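For illustration, a minimal sketch (the topic, group id, and `MyPojo` type are placeholders, not part of this change) of a listener that sets `groupId` explicitly, so the retry-topic and DLT consumers get groups derived from it instead of all sharing one group:

```java
// Placeholder names; the point is only that groupId is provided explicitly.
@RetryableTopic(attempts = "4", backoff = @Backoff(delay = 1000))
@KafkaListener(topics = "my-annotated-topic", groupId = "my-group")
public void processMessage(MyPojo message) {
    // each retry topic's consumer joins a group combining "my-group" with that topic's suffix
}
```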

IMPORTANT: If the consumer is configured with an <<error-handling-deserializer,`ErrorHandlingDeserializer`>> to handle deserialization exceptions, it is important to configure the `KafkaTemplate` and its producer with a serializer that can handle normal objects as well as raw `byte[]` values, which result from deserialization exceptions.
The generic value type of the template should be `Object`.
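A hedged sketch of such a producer setup, assuming spring-kafka's `DelegatingByTypeSerializer` (not part of this diff); the `producerConfigs()` helper and `MyPojo` type are placeholders:

```java
@Bean
public ProducerFactory<String, Object> producerFactory() {
    // producerConfigs() stands in for your usual producer configuration map (hypothetical helper).
    // Raw byte[] values (from deserialization failures) go to a ByteArraySerializer,
    // normal payloads go to a JsonSerializer, so one template can publish both.
    return new DefaultKafkaProducerFactory<>(producerConfigs(), new StringSerializer(),
            new DelegatingByTypeSerializer(Map.of(
                    byte[].class, new ByteArraySerializer(),
                    MyPojo.class, new JsonSerializer<>())));
}

@Bean
public KafkaTemplate<String, Object> kafkaTemplate(ProducerFactory<String, Object> producerFactory) {
    // The template's generic value type is Object, as recommended above.
    return new KafkaTemplate<>(producerFactory);
}
```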
@@ -401,39 +402,6 @@ If your back off policy requires delays with values bigger than that, adjust the

IMPORTANT: The first attempt counts against `maxAttempts`, so if you provide a `maxAttempts` value of 4 there'll be the original attempt plus 3 retries.

-===== Single Topic Fixed Delay Retries
-
-If you're using fixed delay policies such as `FixedBackOffPolicy` or `NoBackOffPolicy` you can use a single topic to accomplish the non-blocking retries.
-This topic will be suffixed with the provided or default suffix, and will not have either the index or the delay values appended.
-
-====
-[source, java]
-----
-@RetryableTopic(backoff = @Backoff(2000), fixedDelayTopicStrategy = FixedDelayStrategy.SINGLE_TOPIC)
-@KafkaListener(topics = "my-annotated-topic")
-public void processMessage(MyPojo message) {
-    // ... message processing
-}
-----
-====
-
-====
-[source, java]
-----
-@Bean
-public RetryTopicConfiguration myRetryTopic(KafkaTemplate<String, MyPojo> template) {
-    return RetryTopicConfigurationBuilder
-            .newInstance()
-            .fixedBackoff(3000)
-            .maxAttempts(5)
-            .useSingleTopicForFixedDelays()
-            .create(template);
-}
-----
-====
-
-NOTE: The default behavior is creating separate retry topics for each attempt, appended with their index value: retry-0, retry-1, ...
-
===== Global timeout

You can set the global timeout for the retrying process.
@@ -683,7 +651,7 @@ IMPORTANT: Note that the blocking retries behavior is allowlist - you add the ex

IMPORTANT: The non-blocking exception classification behavior also depends on the specific topic's configuration.

==== Topic Naming
==== Topic Amount

Retry topics and DLT are named by suffixing the main topic with a provided or default value, appended by either the delay or index for that topic.

@@ -693,6 +661,12 @@ Examples:

"my-other-topic" -> "my-topic-myRetrySuffix-1000", "my-topic-myRetrySuffix-2000", ..., "my-topic-myDltSuffix".

+NOTE: The default behavior is to create separate retry topics for each attempt, appended with an index value: retry-0, retry-1, ..., retry-n.
+Therefore, by default the amount of retry topics is the configured `maxAttempts` minus 1.
Suggested change:
-Therefore, by default the amount of retry topics is the configured `maxAttempts` minus 1.
+Therefore, by default the number of retry topics is the configured `maxAttempts` minus 1.
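To make the count concrete, a hedged illustration (names are placeholders): `attempts = "4"` means the original delivery plus three retries, so by default three retry topics and one DLT are created.

```java
// Original attempt + 3 retries => 3 retry topics plus the DLT by default.
@RetryableTopic(attempts = "4", backoff = @Backoff(4000))
@KafkaListener(topics = "my-annotated-topic")
public void processMessage(MyPojo message) {
    // ... message processing
}
```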
I think you know this, but there's a one sentence per line rule for docs, this line has 2.
I did not know that (first time with asciidoc). Maybe due to the file having been changed since then, the highlighted section does not show a multi-sentence line, but I will search here.
> I did not know that (first time with asciidoc).

I thought you did because everything else was ok :)

> Maybe due to the file having been changed since then, the highlighted section does not show a multi-sentence line, but I will search here.

I probably highlighted the wrong line, sorry. It's the line right next to that one ("If you're using exponential backoff policy").
We use one sentence per line to isolate future changes to a single sentence.
Suggested change:
-The default behavior is to work with an amount of retry topics equal to the configured `maxAttempts` minus 1, and when using exponential backoff, the retry topics are suffixed with the delay values, with the last retry topics (corresponding to the `maxInterval` delay) being suffixed with an additional index.
+The default behavior is to work with the number of retry topics equal to the configured `maxAttempts` minus 1 and, when using exponential backoff, the retry topics are suffixed with the delay values, with the last retry topic (corresponding to the `maxInterval` delay) being suffixed with an additional index.
Suggested change: "This will be the default in a future release."
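To tie the suggestions together, a hedged sketch of the behavior under discussion (it assumes the new enum also defines a `SINGLE_TOPIC` constant, which this diff does not show; delay values are arbitrary):

```java
// With delay 1000, multiplier 2 and maxDelay 4000, the retries wait roughly
// 1000, 2000, 4000, 4000 and 4000 ms; the trailing same-interval (maxInterval)
// attempts can then reuse a single retry topic instead of one topic per attempt.
@RetryableTopic(attempts = "6",
        backoff = @Backoff(delay = 1000, multiplier = 2.0, maxDelay = 4000),
        sameIntervalTopicReuseStrategy = SameIntervalTopicReuseStrategy.SINGLE_TOPIC) // SINGLE_TOPIC assumed
@KafkaListener(topics = "my-annotated-topic")
public void processMessage(MyPojo message) {
    // ... message processing
}
```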
@@ -1,5 +1,5 @@
/*
- * Copyright 2018-2022 the original author or authors.
+ * Copyright 2018-2023 the original author or authors.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.

@@ -25,6 +25,7 @@
import org.springframework.kafka.retrytopic.DltStrategy;
import org.springframework.kafka.retrytopic.FixedDelayStrategy;
import org.springframework.kafka.retrytopic.RetryTopicConstants;
+import org.springframework.kafka.retrytopic.SameIntervalTopicReuseStrategy;
import org.springframework.kafka.retrytopic.TopicSuffixingStrategy;
import org.springframework.retry.annotation.Backoff;

@@ -177,6 +178,13 @@
	 */
	TopicSuffixingStrategy topicSuffixingStrategy() default TopicSuffixingStrategy.SUFFIX_WITH_DELAY_VALUE;

+	/**
+	 * Topic reuse strategy for sequential attempts made with a same backoff interval.
+	 * @return the strategy.
+	 */
+	SameIntervalTopicReuseStrategy sameIntervalTopicReuseStrategy() default SameIntervalTopicReuseStrategy.MULTIPLE_TOPICS;
+
	/**
	 * Whether or not create a DLT, and redeliver to the DLT if delivery fails or just give up.
	 * @return the dlt strategy.

@@ -186,7 +194,9 @@
	/**
	 * Whether to use a single or multiple topics when using a fixed delay.
	 * @return the fixed delay strategy.
+	 * @deprecated in a future release, will be replaced by {@link #sameIntervalTopicReuseStrategy()}.
	 */
+	@Deprecated(forRemoval = true) // in 3.1
	FixedDelayStrategy fixedDelayTopicStrategy() default FixedDelayStrategy.MULTIPLE_TOPICS;

	/**
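Given the deprecation above, a hedged migration sketch (again assuming a `SINGLE_TOPIC` constant on the new enum, which is not shown in this diff): code that used the deprecated attribute for single-topic fixed-delay retries would switch to the new attribute.

```java
// Before (deprecated by this change):
@RetryableTopic(backoff = @Backoff(2000), fixedDelayTopicStrategy = FixedDelayStrategy.SINGLE_TOPIC)
@KafkaListener(topics = "my-annotated-topic")
public void processMessage(MyPojo message) {
    // ... message processing
}

// After (assumed equivalent with the new strategy):
@RetryableTopic(backoff = @Backoff(2000), sameIntervalTopicReuseStrategy = SameIntervalTopicReuseStrategy.SINGLE_TOPIC)
@KafkaListener(topics = "my-annotated-topic")
public void processMessage(MyPojo message) {
    // ... message processing
}
```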
@@ -35,6 +35,7 @@
import org.springframework.kafka.listener.ExceptionClassifier;
import org.springframework.kafka.listener.ListenerExecutionFailedException;
import org.springframework.kafka.listener.TimestampedException;
+import org.springframework.kafka.retrytopic.DestinationTopic.Type;
import org.springframework.lang.Nullable;
import org.springframework.util.Assert;
@@ -133,7 +134,7 @@ && isNotFatalException(e)
	}

	private DestinationTopic resolveRetryDestination(DestinationTopicHolder destinationTopicHolder) {
-		return destinationTopicHolder.getSourceDestination().isSingleTopicRetry()
+		return destinationTopicHolder.getSourceDestination().isReusableRetryTopic()
				? destinationTopicHolder.getSourceDestination()
				: destinationTopicHolder.getNextDestination();
	}
@@ -192,13 +193,26 @@ public void addDestinationTopics(String mainListenerId, List<DestinationTopic> d
			throw new IllegalStateException("Cannot add new destinations, "
					+ DefaultDestinationTopicResolver.class.getSimpleName() + " is already refreshed.");
		}
+		validateDestinations(destinationsToAdd);
		synchronized (this.sourceDestinationsHolderMap) {
			Map<String, DestinationTopicHolder> map = this.sourceDestinationsHolderMap.computeIfAbsent(mainListenerId,
					id -> new HashMap<>());
			map.putAll(correlatePairSourceAndDestinationValues(destinationsToAdd));
		}
	}

+	private void validateDestinations(List<DestinationTopic> destinationsToAdd) {
+		for (int i = 0; i < destinationsToAdd.size(); i++) {
+			DestinationTopic destination = destinationsToAdd.get(i);
+			if (destination.isReusableRetryTopic()) {
+				Assert.isTrue((i == (destinationsToAdd.size() - 1) ||
+						((i == (destinationsToAdd.size() - 2)) && (destinationsToAdd.get(i + 1).isDltTopic()))),
+						String.format("In the destination topic chain, the type %s can only be "
+								+ "specified as the last retry topic.", Type.REUSABLE_RETRY_TOPIC));

Review comment (author): This invariant was implemented because right now, the idea is to have topic reuse only for exponential backoff, therefore for the `maxInterval` delay. But anyway, no existing backoff policy should have attempts sequences with a same interval in the middle of the chain.

+			}
+		}
+	}
+
	private Map<String, DestinationTopicHolder> correlatePairSourceAndDestinationValues(
			List<DestinationTopic> destinationList) {
		return IntStream