Releases: confluentinc/confluent-kafka-go
v1.9.0
v1.9.0 is a feature release:
- OAUTHBEARER OIDC support
- KIP-140 Admin API ACL support
- Added MockCluster for functional testing of applications without the need
for a real Kafka cluster (by @SourceFellows and @kkoehler, #729).
See examples/mock_cluster.
Fixes
- Fix Rebalance events behavior for static membership (@jliunyu, #757, #798).
- Fix consumer close taking 10 seconds when there's no rebalance needed (@jliunyu, #757).
confluent-kafka-go is based on librdkafka v1.9.0, see the
librdkafka release notes
for a complete list of changes, enhancements, fixes and upgrade considerations.
v1.8.2
confluent-kafka-go v1.8.2
This is a maintenance release:
- Bundles librdkafka v1.8.2
- Check termination channel while reading delivery reports (by @zjj)
- Added convenience method Consumer.StoreMessage() (@finncolman, #676)
confluent-kafka-go is based on librdkafka v1.8.2, see the librdkafka release notes
for a complete list of changes, enhancements, fixes and upgrade considerations.
Note: There were no confluent-kafka-go v1.8.0 and v1.8.1 releases.
v1.7.0
confluent-kafka-go is based on librdkafka v1.7.0, see the librdkafka release notes
for a complete list of changes, enhancements, fixes and upgrade considerations.
Enhancements
- Experimental Windows support (by @neptoess).
- The produced message headers are now available in the delivery report
Message.Headers if the Producer's go.delivery.report.fields
configuration property is set to include headers, e.g.:
"go.delivery.report.fields": "key,value,headers"
This comes at a performance cost and is thus disabled by default.
Fixes
- AdminClient.CreateTopics() previously did not accept the default value (-1) of
ReplicationFactor without an explicit ReplicaAssignment; this is now fixed.
v1.6.1
v1.6.1 is a feature release:
- KIP-429: Incremental consumer rebalancing - see cooperative_consumer_example.go
for an example of how to use the new incremental rebalancing consumer.
- KIP-480: Sticky producer partitioner - increases throughput and decreases
latency by sticking to a single random partition for some time.
- KIP-447: Scalable transactional producer - a single transactional producer can
now be used for multiple input partitions.
- Added support for go.delivery.report.fields (by @kevinconaway)
Fixes
- For dynamically linked builds (-tags dynamic) there was previously a possible
conflict between the bundled librdkafka headers and the system-installed ones.
This is now fixed. (@KJTsanaktsidis)
confluent-kafka-go is based on and bundles librdkafka v1.6.1, see the
librdkafka release notes
for a complete list of changes, enhancements, fixes and upgrade considerations.
v1.5.2
confluent-kafka-go v1.5.2
v1.5.2 is a maintenance release with the following fixes and enhancements:
- Bundles librdkafka v1.5.2 - see release notes for all enhancements and fixes.
- Documentation fixes
confluent-kafka-go is based on librdkafka v1.5.2, see the
librdkafka release notes
for a complete list of changes, enhancements, fixes and upgrade considerations.
v1.4.2
confluent-kafka-go v1.4.2
v1.4.2 is a maintenance release:
- The bundled librdkafka directory (kafka/librdkafka) is no longer pruned by Go
mod vendor import.
- Bundled librdkafka upgraded to v1.4.2, highlights:
- System root CA certificates should now be picked up automatically on most platforms
- Fix produce/consume hang after partition goes away and comes back,
such as when a topic is deleted and re-created (regression in v1.3.0).
librdkafka v1.4.2 changes
See the librdkafka v1.4.2 release notes for changes to the bundled librdkafka included with the Go client.
v1.4.0
confluent-kafka-go v1.4.0
- Added Transactional Producer API and full Exactly-Once-Semantics (EOS) support.
- A prebuilt version of the latest version of librdkafka is now bundled with the confluent-kafka-go client. A separate installation of librdkafka is NO LONGER REQUIRED or used.
- Added support for sending client (librdkafka) logs to the Logs() channel.
- Added Consumer.Position() to retrieve the current consumer offsets.
- The Error type now has additional attributes, such as IsRetriable(), to
determine whether the errored operation can be retried. This is currently only
exposed for the Transactional API.
- Removed support for Go < 1.9
Transactional API
librdkafka and confluent-kafka-go now have complete Exactly-Once-Semantics (EOS) functionality, supporting the idempotent producer (since v1.0.0), a transaction-aware consumer (since v1.2.0), and full producer transaction support (in this release).
This enables developers to create Exactly-Once applications with Apache Kafka.
See the Transactions in Apache Kafka page for an introduction and check the transactions example for a complete transactional application example.
Bundled librdkafka
The confluent-kafka-go client now comes with batteries included, namely prebuilt versions of librdkafka for the most popular platforms; you will thus no longer need to install or manage librdkafka separately.
Supported platforms are:
- Mac OSX
- glibc-based Linux x64 (e.g., RedHat, Debian, etc) - lacks Kerberos/GSSAPI support
- musl-based Linux x64 (Alpine) - lacks Kerberos/GSSAPI support
These prebuilt librdkafka builds include all features (e.g., SSL, compression), except that the Linux builds lack Kerberos/GSSAPI support due to libsasl2 dependencies.
If you need Kerberos support, or you are running on a platform where the prebuilt librdkafka builds are not available (see above), you will need to install librdkafka separately (preferably through the Confluent APT and RPM repositories) and build your application with -tags dynamic
to disable the builtin librdkafka and instead link your application dynamically to librdkafka.
librdkafka v1.4.0 changes
Full librdkafka v1.4.0 release notes.
Highlights:
- KIP-98: Transactional Producer API
- KIP-345: Static consumer group membership (by @rnpridgeon)
- KIP-511: Report client software name and version to broker
- SASL SCRAM security fixes.
v1.3.0
confluent-kafka-go v1.3.0
- Purge messages API (by @khorshuheng at GoJek).
- ClusterID and ControllerID APIs.
- Go Modules support.
- Fixed memory leak on calls to NewAdminClient(). (discovered by @gabeysunda)
- Requires librdkafka v1.3.0 or later
librdkafka v1.3.0 changes
Full librdkafka v1.3.0 release notes.
- KIP-392: Fetch messages from closest replica/follower (by @mhowlett).
- Experimental mock broker to make application and librdkafka development testing easier.
- Fixed consumer_lag in stats when consuming from broker versions <0.11.0.0 (regression in librdkafka v1.2.0).
v1.1.0
confluent-kafka-go v1.1.0
- OAUTHBEARER SASL authentication (KIP-255) by Ron Dagostini (@rondagostino) at StateStreet.
- Offset commit metadata (@damour, #353)
- Requires librdkafka v1.1.0 or later
Noteworthy librdkafka v1.1.0 changes
Full librdkafka v1.1.0 release notes.
- SASL OAUTHBEARER support (by @rondagostino at StateStreet)
- In-memory SSL certificates (PEM, DER, PKCS#12) support (by @noahdav at Microsoft)
- Pluggable broker SSL certificate verification callback (by @noahdav at Microsoft)
- Use Windows Root/CA SSL Certificate Store (by @noahdav at Microsoft)
- ssl.endpoint.identification.algorithm=https (off by default) to validate that
the broker hostname matches the certificate. Requires OpenSSL >= 1.0.2.
- Improved GSSAPI/Kerberos ticket refresh
Upgrade considerations
- Windows SSL users will no longer need to specify a CA certificate
file/directory (ssl.ca.location); librdkafka will load the CA certs by default
from the Windows Root Certificate Store.
- SSL peer (broker) certificate verification is now enabled by default (disable
with enable.ssl.certificate.verification=false).
- %{broker.name} is no longer supported in sasl.kerberos.kinit.cmd since kinit
refresh is no longer executed per broker, but per client instance.
SSL
New configuration properties:
- ssl.key.pem - client's private key as a string in PEM format
- ssl.certificate.pem - client's public key as a string in PEM format
- enable.ssl.certificate.verification - enable (default) / disable OpenSSL's
builtin broker certificate verification.
- ssl.endpoint.identification.algorithm - to verify the broker's hostname
against its certificate (disabled by default).
- The private key data is now securely cleared from memory after last use.
Enhancements
- Bump message.timeout.ms max value from 15 minutes to 24 days (@sarkanyi)
Fixes
- SASL GSSAPI/Kerberos: Don't run kinit refresh for each broker, just per client instance.
- SASL GSSAPI/Kerberos: Changed sasl.kerberos.kinit.cmd to first attempt ticket
refresh, then acquire.
- SASL: Proper locking on broker name acquisition.
- Consumer: max.poll.interval.ms now correctly handles blocking poll calls,
allowing a longer poll timeout than the max poll interval.
v1.0.0
confluent-kafka-go v1.0.0
This release adds support for librdkafka v1.0.0, featuring the EOS Idempotent
Producer, sparse connections, KIP-62 (max.poll.interval.ms) support, zstd, and more.
See the librdkafka v1.0.0 release notes for more information and upgrade considerations.
Go client enhancements
- Now requires librdkafka v1.0.0.
- A new IsFatal() function has been added to KafkaError to help the application
differentiate between temporary and fatal errors. Fatal errors are currently
only triggered by the idempotent producer.
- Added kafka.NewError() to make it possible to create error objects from user
code / unit tests (Artem Yarulin)
Go client fixes
- Deprecated the use of default.topic.config. Topic configuration should now be
set on the standard ConfigMap.
- Reject delivery.report.only.error=true on producer creation (#306)
- Avoid use of "Deprecated: " prefix (#268)
- PartitionEOF must now be explicitly enabled through enable.partition.eof
Make sure to check out the Idempotent Producer example.