storeMessageOffset: ignore state error #129

Conversation

felixschlegel
Contributor

Motivation:

Previously, we failed the entire `KafkaConsumer` when storing
a message offset through `RDKafkaClient.storeMessageOffset`
failed because the partition that the offset should be committed to
was unassigned (which can happen during a rebalance).

We should not fail the consumer when committing during
a rebalance.

The worst that can happen here is that storing the offset
fails and we re-read a message, which is acceptable since
`KafkaConsumer`s with automatic commits are designed for
at-least-once processing:

https://docs.confluent.io/platform/current/clients/consumer.html#offset-management

Modifications:

* `RDKafkaClient.storeMessageOffset`: don't throw when receiving the
  error `RD_KAFKA_RESP_ERR__STATE` (see the sketch below)
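
The gist of the change can be sketched as follows. This is a minimal, hypothetical illustration rather than the actual `RDKafkaClient` implementation: it assumes the librdkafka C bindings are imported as `Crdkafka`, uses a placeholder `OffsetStoreError` in place of the package's own error type, and stores the offset through librdkafka's `rd_kafka_offsets_store`.

```swift
import Crdkafka // librdkafka C bindings (module name assumed)

/// Placeholder error type for this sketch; the package has its own error handling.
struct OffsetStoreError: Error {
    let code: rd_kafka_resp_err_t
}

/// Simplified stand-in for `RDKafkaClient.storeMessageOffset`:
/// store `offset` for `topic`/`partition` on the consumer handle
/// (a valid `rd_kafka_t *`), tolerating `RD_KAFKA_RESP_ERR__STATE`.
func storeOffsetIgnoringStateError(
    kafkaHandle: OpaquePointer,
    topic: String,
    partition: Int32,
    offset: Int64
) throws {
    // One-element topic partition list carrying the offset to store.
    let list = rd_kafka_topic_partition_list_new(1)
    defer { rd_kafka_topic_partition_list_destroy(list) }

    let entry = rd_kafka_topic_partition_list_add(list, topic, partition)
    entry?.pointee.offset = offset

    let result = rd_kafka_offsets_store(kafkaHandle, list)

    // RD_KAFKA_RESP_ERR__STATE signals that the partition is not currently
    // assigned (e.g. a rebalance revoked it between reading the message and
    // storing its offset). Ignore it: at worst the message is re-read, which
    // matches the at-least-once guarantee of automatically committing
    // consumers. Any other error still fails the call.
    guard result == RD_KAFKA_RESP_ERR_NO_ERROR
        || result == RD_KAFKA_RESP_ERR__STATE else {
        throw OffsetStoreError(code: result)
    }
}
```

The only behavioural difference from the previous logic is the extra `RD_KAFKA_RESP_ERR__STATE` case in the guard; every other error still fails the consumer as before.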