Releases: questdb/kafka-questdb-connector

v0.14

27 Nov 10:36

This release improves error handling consistency by expanding Dead Letter Queue (DLQ) functionality. Previously, messages were only sent to the configured DLQ when the Kafka Connect framework threw errors (e.g., during deserialization failures). Now, the DLQ captures all error scenarios, regardless of their origin point in the system.
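
The DLQ itself is enabled through Kafka Connect's standard error-handling properties rather than anything connector-specific. A minimal sketch, with an illustrative DLQ topic name:

    # Route failed records to a DLQ instead of failing the task
    errors.tolerance=all
    errors.deadletterqueue.topic.name=dlq-questdb-sink
    errors.deadletterqueue.topic.replication.factor=1
    # Attach headers describing the failure to each DLQ record
    errors.deadletterqueue.context.headers.enable=true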

What's Fixed

  • Invalid entries are sent to a Dead Letter Queue (when configured)

Breaking change

This release upgrades the internal QuestDB ILP client to version 8.2, which introduces a new dependency on Linux systems. The updated client uses native code that requires GNU glibc 2.28 or higher on Linux distributions. This requirement may impact compatibility with older Linux systems. If this limitation affects your deployment, please open an issue to discuss alternatives.

Full Changelog: v0.13...v0.14

v0.13

18 Jun 14:07

🚀 What’s New?

This release introduces templating for the target table name 🎯 and includes no other changes.

🔧 Features

Templating enables dynamic generation of the QuestDB target table name based on the message key and the originating topic.
For example: table=${topic}_${key} or table=from_kafka_${topic}

Supported placeholders: ${key} and ${topic}. Placeholders are case-sensitive, and an unsupported placeholder causes an error on connector startup. When a message has no key, ${key} resolves to the literal string null.
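
A minimal connector configuration sketch using the template (connector name, topics, and address are illustrative):

    name=questdb-sink
    connector.class=io.questdb.kafka.QuestDBSinkConnector
    topics=orders,trades
    client.conf.string=http::addr=localhost:9000;
    # One QuestDB table per source topic, e.g. from_kafka_orders
    table=from_kafka_${topic}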

🔗 More Info

For more details, see the original PR: #24.

Full Changelog: v0.12...v0.13

v0.12

24 May 13:12

This release improves the flushing behavior. Flushing is now managed by the connector, rather than depending on the embedded ILP client. With more contextual awareness, the connector can make better decisions, leading to reduced latency and higher throughput. The flushing parameters can still be configured via client.conf.string, as with any other client settings.
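
For example, auto-flush thresholds can be set alongside the connection settings in the same configuration string (values are illustrative; parameter names follow the QuestDB client configuration-string format):

    client.conf.string=http::addr=localhost:9000;auto_flush_rows=10000;auto_flush_interval=500;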

What's Changed

  • Use auto flush configuration from client config string by @jerrinot in #22

What's New

  • Log connector version and git revision by @jerrinot in #21

What's Fixed

  • Fix NPE when Kafka Connect requests a commit right after an error by @jerrinot in #23

Full Changelog: v0.11...v0.12

v0.11

07 Apr 17:35

This release focuses on improving the usability of the HTTP transport. It addresses the case where the connector has buffered some rows locally but cannot flush them because no new messages arrive and Kafka Connect therefore does not invoke the connector. Previously, such rows remained buffered until the next OFFSET_FLUSH_INTERVAL_MS, which defaults to 60 seconds, causing excessive latency. This is resolved by a new property, allowed.lag, which sets an upper bound on how long rows may stay buffered locally when no new messages arrive in the Kafka topics.

Additionally, this release disables interval-based flushing in ILP clients, unless explicitly configured. This change is made to leverage Kafka Connect's native mechanism for controlling flushes.
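
A sketch of the new property, assuming the value is interpreted in milliseconds (250 is illustrative):

    # Flush locally buffered rows after at most 250 ms without new messages
    allowed.lag=250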

What's New

  • feat: decrease latency when some rows are locally buffered by @jerrinot in #17
  • feat: disable interval-based auto-flushes by default by @jerrinot in #18
  • feat: support unix epoch timestamps in seconds by @jerrinot in #15

Full Changelog: v0.10...v0.11

v0.10

06 Apr 11:54

This release introduces support for HTTP Transport.

The HTTP transport is recommended for most users. See the QuestDB documentation for considerations when choosing a transport. The pull request includes a basic configuration guide, which can be used until the official documentation and code samples are updated.
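
A minimal sketch of enabling the HTTP transport via the client configuration string (host and port are illustrative):

    # http:: selects the HTTP transport; TCP remains available via tcp::addr=...
    client.conf.string=http::addr=localhost:9000;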

What's New

  • Support for HTTP transport

What's Fixed

  • Corrected the default timestamp pattern, a fix contributed by @jerrinot in PR #11.

Known Issues

  • There is a known bug with the HTTP transport concerning interval-based flushes. This issue originates from the QuestDB client, which this connector uses internally. It is scheduled to be addressed in the next QuestDB release. Until then, it is recommended to disable interval-based flushes.

v0.9

08 Aug 11:44

What's Fixed

  • Regression in auth token configuration by @jerrinot in #9

Full Changelog: v0.8...v0.9

v0.8

07 Aug 09:47

What's New

  • Built-in support for parsing string timestamps without relying on Kafka Connect transforms by @jerrinot in #7
  • Optionally use Kafka timestamps as designated timestamps by @jerrinot in #8 (configuration sketch below)
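
Both features are configured on the connector. A sketch, assuming the property names timestamp.string.fields, timestamp.string.format, and timestamp.kafka.native (field name and format are illustrative):

    # Parse this message field as a string timestamp using the given pattern (assumed property names)
    timestamp.string.fields=created_at
    timestamp.string.format=yyyy-MM-dd HH:mm:ss z
    # Alternatively, use the Kafka message timestamp as the designated timestamp
    timestamp.kafka.native=true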

What's Fixed

  • Fixed a bug that occurred when a Kafka Connect SMT timestamp transform is configured to produce Timestamp and the resulting message has no schema. See b229a92

Full Changelog: v0.7...v0.8

v0.7

01 Aug 13:15

What's Fixed

  • Gracefully handle empty input. This can happen, for example, when using filters. See the test in the commit.

What's Changed

  • QuestDB version used by the internal client upgraded to 7.2.1

Full Changelog: v0.6...v0.7

v0.6

03 Mar 13:00

What's New

  • Implemented a reconnection mechanism for when QuestDB is temporarily unavailable
  • The build now creates a package with metadata for Confluent Hub

What's Changed

  • Code-style improvements
  • Table name configuration validation

Full Changelog: v0.5...v0.6

v0.5

12 Dec 14:09

What's Changed

  • chore: added a configuration option to always send selected numeric columns as double
  • chore: update QuestDB version by @amyshwang in #6

New Contributors

  • @amyshwang made their first contribution in #6

Full Changelog: v0.4...v0.5