Commit

merge upstream 2022 08 01 (#19)
Co-authored-by: Bill Rose <[email protected]>
Co-authored-by: Magnus Edenhill <[email protected]>
Co-authored-by: Nikhil Benesch <[email protected]>
Co-authored-by: Emanuele Sabellico <[email protected]>
Co-authored-by: Jing Liu <[email protected]>
Co-authored-by: Matt Clarke <[email protected]>
Co-authored-by: Leo Singer <[email protected]>
Co-authored-by: Ladislav <[email protected]>
Co-authored-by: Ladislav Snizek <[email protected]>
Co-authored-by: Lance Shelton <[email protected]>
Co-authored-by: Robin Moffatt <[email protected]>
Co-authored-by: Sergio Arroutbi <[email protected]>
Co-authored-by: Khem Raj <[email protected]>
Co-authored-by: Bill Rose <[email protected]>
Co-authored-by: Dmytro Milinevskyi <[email protected]>
Co-authored-by: Mikhail Avdienko <[email protected]>
Co-authored-by: wding <[email protected]>
Co-authored-by: Shawn <[email protected]>
Co-authored-by: ihsinme <[email protected]>
Co-authored-by: Emanuele Sabellico <[email protected]>
Co-authored-by: Roman Schmitz <[email protected]>
Co-authored-by: Miklos Espak <[email protected]>
Co-authored-by: Alice Rum <[email protected]>
Co-authored-by: Eli Smaga <[email protected]>
1 parent db35d56 commit 2c4adfb
Showing 74 changed files with 1,685 additions and 320 deletions.
2 changes: 1 addition & 1 deletion .appveyor.yml
@@ -1,4 +1,4 @@
version: 1.9.0-R-post{build}
version: 1.9.1-R-post{build}
pull_requests:
do_not_increment_build_number: true
image: Visual Studio 2019
2 changes: 2 additions & 0 deletions .github/workflows/base.yml
@@ -6,6 +6,7 @@ jobs:
steps:
- uses: actions/checkout@v2
- run: |
sudo apt update
sudo apt install -y python3 python3-pip python3-setuptools libcurl4-openssl-dev libssl-dev libsasl2-dev
python3 -m pip install -r tests/requirements.txt
- run: |
@@ -24,6 +25,7 @@ jobs:
steps:
- uses: actions/checkout@v2
- run: |
sudo apt update
sudo apt install -y python3 python3-pip python3-setuptools clang-format
python3 -m pip install -r packaging/tools/requirements.txt
- name: Style checker
29 changes: 29 additions & 0 deletions .semaphore/semaphore.yml
@@ -0,0 +1,29 @@
version: v1.0
name: M1 Pipeline
agent:
machine:
type: s1-prod-mac-m1
blocks:
- name: 'Build, Test, Package'
task:
jobs:
- name: 'Build'
env_vars:
- name: CC
value: gcc
commands:
- cd $SEM_WORKSPACE
- checkout
- export WORKSPACE=$SEM_WORKSPACE/librdkafka
- cd $WORKSPACE
- mkdir dest artifacts
- ./configure --install-deps --source-deps-only --enable-static --disable-lz4-ext --prefix="$WORKSPACE/dest" --enable-strip
- make -j2 all examples check
- make -j2 -C tests build
- make -C tests run_local_quick
- make install
- cd $WORKSPACE/dest
- tar cvzf ${WORKSPACE}/artifacts/librdkafka-${CC}.tar.gz .
- artifact push job ${WORKSPACE}/artifacts/librdkafka-${CC}.tar.gz
- cd $WORKSPACE
- sha256sum artifacts/*
83 changes: 82 additions & 1 deletion CHANGELOG.md
@@ -1,3 +1,23 @@
# librdkafka v1.9.2

librdkafka v1.9.2 is a maintenance release:

* The SASL OAUTHBEARER OIDC POST field was sometimes truncated by one byte (#3192).
* The bundled version of OpenSSL has been upgraded to version 1.1.1q for non-Windows builds. Windows builds remain on OpenSSL 1.1.1n for the time being.
* The bundled version of Curl has been upgraded to version 7.84.0.
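
As background only (not part of the upstream changelog text): the OIDC method referenced in the first fix above is enabled purely through configuration properties. A minimal sketch, assuming placeholder endpoint and client credentials:

```c
#include <stdio.h>
#include <librdkafka/rdkafka.h>

/* Sketch: enable the built-in OAUTHBEARER OIDC token retrieval.
 * All endpoint/credential values below are placeholders. */
static void enable_oidc(rd_kafka_conf_t *conf) {
        char errstr[512];
        const char *props[][2] = {
            {"security.protocol", "SASL_SSL"},
            {"sasl.mechanism", "OAUTHBEARER"},
            {"sasl.oauthbearer.method", "oidc"},
            {"sasl.oauthbearer.token.endpoint.url",
             "https://example.invalid/oauth2/token"},
            {"sasl.oauthbearer.client.id", "my-client-id"},
            {"sasl.oauthbearer.client.secret", "my-client-secret"},
        };
        size_t i;

        for (i = 0; i < sizeof(props) / sizeof(props[0]); i++)
                if (rd_kafka_conf_set(conf, props[i][0], props[i][1],
                                      errstr, sizeof(errstr)) !=
                    RD_KAFKA_CONF_OK)
                        fprintf(stderr, "%s: %s\n", props[i][0], errstr);
}
```
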



# librdkafka v1.9.1

librdkafka v1.9.1 is a maintenance release:

* The librdkafka.redist NuGet package now contains OSX M1/arm64 builds.
* Self-contained static libraries can now be built on OSX M1 too, thanks to
disabling curl's configure runtime check.



# librdkafka v1.9.0

librdkafka v1.9.0 is a feature release:
@@ -33,6 +53,17 @@ librdkafka v1.9.0 is a feature release:
* Windows: Added native Win32 IO/Queue scheduling. This removes the
internal TCP loopback connections that were previously used for timely
queue wakeups.
* Added `socket.connection.setup.timeout.ms` (default 30s).
The maximum time allowed for broker connection setups (TCP connection as
well as SSL and SASL handshakes) is now limited to this value.
This fixes the issue with stalled broker connections in the case of network
or load balancer problems.
The Java clients have an exponential backoff for this timeout, limited by
`socket.connection.setup.timeout.max.ms`; this was not implemented in
librdkafka due to differences in connection handling and
`ERR__ALL_BROKERS_DOWN` error reporting. Starting with a lower initial
connection setup timeout and then increasing it for each subsequent attempt
could yield false-positive `ERR__ALL_BROKERS_DOWN` errors too early.
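
For illustration only, a minimal sketch of setting the new property from application code; the 60-second value is an arbitrary example, not a recommendation from this release:

```c
#include <stdio.h>
#include <librdkafka/rdkafka.h>

/* Sketch: allow up to 60s for connection setup (TCP + SSL + SASL).
 * The property name is from this release; the value is illustrative. */
static rd_kafka_conf_t *conf_with_setup_timeout(void) {
        char errstr[512];
        rd_kafka_conf_t *conf = rd_kafka_conf_new();

        if (rd_kafka_conf_set(conf, "socket.connection.setup.timeout.ms",
                              "60000", errstr, sizeof(errstr)) !=
            RD_KAFKA_CONF_OK)
                fprintf(stderr, "%s\n", errstr);

        return conf;
}
```
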
* SASL OAUTHBEARER refresh callbacks can now be scheduled for execution
on librdkafka's background thread. This solves the problem where an
application has a custom SASL OAUTHBEARER refresh callback and thus needs to
@@ -43,6 +74,9 @@ librdkafka v1.9.0 is a feature release:
can now be triggered automatically on the librdkafka background thread.
* `rd_kafka_queue_get_background()` now creates the background thread
if not already created.
* Added `rd_kafka_consumer_close_queue()` and `rd_kafka_consumer_closed()`.
This allows applications and language bindings to implement an asynchronous
consumer close.
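
A rough sketch of the asynchronous close pattern these calls enable; the choice of queue, the 100 ms poll interval and the event handling below are illustrative assumptions rather than prescribed usage:

```c
#include <stdio.h>
#include <librdkafka/rdkafka.h>

/* Sketch: close the consumer without blocking on rd_kafka_consumer_close().
 * Events resulting from the close (e.g. rebalance, if configured) are
 * forwarded to the supplied queue, which must be served until
 * rd_kafka_consumer_closed() returns true. */
static void close_consumer_async(rd_kafka_t *rk) {
        rd_kafka_queue_t *rkqu = rd_kafka_queue_get_consumer(rk);
        rd_kafka_error_t *error = rd_kafka_consumer_close_queue(rk, rkqu);

        if (error) {
                fprintf(stderr, "close: %s\n", rd_kafka_error_string(error));
                rd_kafka_error_destroy(error);
                rd_kafka_queue_destroy(rkqu);
                return;
        }

        while (!rd_kafka_consumer_closed(rk)) {
                rd_kafka_event_t *rkev = rd_kafka_queue_poll(rkqu, 100);
                if (rkev)
                        rd_kafka_event_destroy(rkev); /* serve/dispose event */
        }

        rd_kafka_queue_destroy(rkqu);
}
```

The design point is that the blocking wait moves out of `rd_kafka_consumer_close()` and into a loop the application (or binding) controls.
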
* Bundled zlib upgraded to version 1.2.12.
* Bundled OpenSSL upgraded to 1.1.1n.
* Added `test.mock.broker.rtt` to simulate RTT/latency for mock brokers.
@@ -61,11 +95,32 @@ librdkafka v1.9.0 is a feature release:
was configured.
This regression was introduced in v1.8.0 due to the use of vcpkg and how the
keystore file was read. #3554.
* Windows 32-bit only: 64-bit atomic reads were in fact not atomic and could
in rare circumstances yield incorrect values.
One manifestation of this issue was the `max.poll.interval.ms` consumer
timer expiring even though the application was polling according to profile.
Fixed by @WhiteWind (#3815).
* `rd_kafka_clusterid()` would previously fail with timeout if
called on a cluster with no visible topics (#3620).
The clusterid is now returned as soon as metadata has been retrieved.
* Fix hang in `rd_kafka_list_groups()` if there are no available brokers
to connect to (#3705).
* Millisecond timeouts (`timeout_ms`) in various APIs, such as `rd_kafka_poll()`,
were limited to roughly 36 hours before wrapping. (#3034)
* If a metadata request triggered by `rd_kafka_metadata()` or consumer group rebalancing
encountered a non-retriable error, it would not be propagated to the caller and
could thus cause a stall or timeout; this has now been fixed. (@aiquestion, #3625)
* AdminAPI `DeleteGroups()` and `DeleteConsumerGroupOffsets()`:
if the given coordinator connection was not up by the time these calls were
initiated, and the first connection attempt failed, then no further connection
attempts were performed, ultimately leading to the calls timing out.
This is now fixed by retrying the connection to the group coordinator
until the connection succeeds or the call times out.
Additionally, the coordinator is now re-queried once per second until
it comes up or the call times out, to detect coordinator changes.
* Mock cluster `rd_kafka_mock_broker_set_down()` would previously
accept and then disconnect new connections; it now refuses new connections.


### Consumer fixes
@@ -75,6 +130,11 @@ librdkafka v1.9.0 is a feature release:
See **Upgrade considerations** above for more information.
* `rd_kafka_*assign()` will now reset/clear the stored offset.
See **Upgrade considerations** above for more information.
* `seek()` followed by `pause()` would overwrite the seeked offset when
later calling `resume()`. This is now fixed. (#3471).
**Note**: Avoid storing offsets (`offsets_store()`) after calling
`seek()` as this may later interfere with resuming a paused partition;
instead, store offsets prior to calling `seek()`.
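
A hedged sketch of the recommended ordering (store first, then seek); the topic, partition and offset values are placeholders and error handling is omitted:

```c
#include <librdkafka/rdkafka.h>

/* Sketch: store the processed offset *before* seeking, then pause/resume.
 * "mytopic", partition 0 and the offsets are placeholders. */
static void store_then_seek(rd_kafka_t *rk) {
        rd_kafka_topic_partition_list_t *parts =
            rd_kafka_topic_partition_list_new(1);
        rd_kafka_topic_partition_t *p =
            rd_kafka_topic_partition_list_add(parts, "mytopic", 0);

        p->offset = 1001;                    /* offset to store (processed+1) */
        rd_kafka_offsets_store(rk, parts);   /* store first ...               */

        p->offset = 5000;                    /* ... then seek elsewhere       */
        rd_kafka_seek_partitions(rk, parts, 5000 /* timeout (ms) */);

        rd_kafka_pause_partitions(rk, parts);
        /* ... later ... */
        rd_kafka_resume_partitions(rk, parts); /* resumes at the seeked offset */

        rd_kafka_topic_partition_list_destroy(parts);
}
```
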
* An `ERR_MSG_SIZE_TOO_LARGE` consumer error would previously be raised
if the consumer received a maximum sized FetchResponse only containing
(transaction) aborted messages with no control messages. The fetching did
@@ -91,9 +151,16 @@ librdkafka v1.9.0 is a feature release:
* Fix crash (`cant handle op type`) when using `consume_batch_queue()` (et al.)
and an OAUTHBEARER refresh callback was set.
The callback is now triggered by the consume call. (#3263)
* Fix `partition.assignment.strategy` ordering when multiple strategies are configured.
If there is more than one eligible strategy, preference is determined by the
configured order of strategies. Partitions are now assigned to group members
according to that preference order. (#3818)
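
Illustrative only: with the configuration below, "roundrobin" takes precedence over "range" because it is listed first. Only the property name and strategy names are standard; the surrounding helper is an assumption:

```c
#include <stdio.h>
#include <librdkafka/rdkafka.h>

/* Sketch: the first listed eligible strategy now wins. */
static void set_assignment_strategy(rd_kafka_conf_t *conf) {
        char errstr[512];

        if (rd_kafka_conf_set(conf, "partition.assignment.strategy",
                              "roundrobin,range", errstr,
                              sizeof(errstr)) != RD_KAFKA_CONF_OK)
                fprintf(stderr, "%s\n", errstr);
}
```
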
* All forms of unassign*() (absolute or incremental) are now allowed during
consumer close rebalancing and are treated as absolute unassigns.
(@kevinconaway)


### Producer fixes
### Transactional producer fixes

* Fix message loss in idempotent/transactional producer.
A corner case has been identified that may cause idempotent/transactional
@@ -127,6 +194,20 @@ librdkafka v1.9.0 is a feature release:
broker (added in Apache Kafka 2.8), which could cause the producer to
seemingly hang.
This error code is now correctly handled by raising a fatal error.
* If the given group coordinator connection was not up by the time
`send_offsets_to_transactions()` was called, and the first connection
attempt failed, then no further connection attempts were performed, ultimately
leading to `send_offsets_to_transactions()` timing out, and possibly
also the transaction timing out on the transaction coordinator.
This is now fixed by retrying the connection to the group coordinator
until the connection succeeds or the call times out.
Additionally, the coordinator is now re-queried once per second until
it comes up or the call times out, to detect coordinator changes.
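
For context, the underlying C API is `rd_kafka_send_offsets_to_transaction()`; a rough sketch of where the call sits in a consume-transform-produce flow, assuming transaction begin/commit and most error handling happen elsewhere:

```c
#include <stdio.h>
#include <librdkafka/rdkafka.h>

/* Sketch: commit the consumer's current positions as part of the producer's
 * open transaction. `producer` is assumed to have had init_transactions()
 * and begin_transaction() called already. */
static void commit_offsets_in_txn(rd_kafka_t *producer, rd_kafka_t *consumer) {
        rd_kafka_topic_partition_list_t *offsets = NULL;
        rd_kafka_consumer_group_metadata_t *cgmd;
        rd_kafka_error_t *error;

        rd_kafka_assignment(consumer, &offsets);  /* current assignment */
        rd_kafka_position(consumer, offsets);     /* fill in positions  */

        cgmd  = rd_kafka_consumer_group_metadata(consumer);
        error = rd_kafka_send_offsets_to_transaction(producer, offsets, cgmd,
                                                     -1 /* default timeout */);
        if (error) {
                fprintf(stderr, "send_offsets_to_transaction: %s\n",
                        rd_kafka_error_string(error));
                rd_kafka_error_destroy(error);
        }

        rd_kafka_consumer_group_metadata_destroy(cgmd);
        rd_kafka_topic_partition_list_destroy(offsets);
}
```
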


### Producer fixes

* Improved producer queue wakeup scheduling. This should significantly
decrease the number of wakeups and thus syscalls for high message rate
producers. (#3538, #2912)
17 changes: 17 additions & 0 deletions CMakeLists.txt
@@ -53,6 +53,19 @@ if(WITH_ZLIB)
endif()
# }

# CURL {
find_package(CURL QUIET)
if(CURL_FOUND)
set(with_curl_default ON)
else()
set(with_curl_default OFF)
endif()
option(WITH_CURL "With CURL" ${with_curl_default})
if(WITH_CURL)
list(APPEND BUILT_WITH "CURL")
endif()
# }

# ZSTD {
find_package(ZSTD QUIET)
if(ZSTD_FOUND)
@@ -148,6 +161,10 @@ if(WITH_SASL)
endif()
# }

if(WITH_SSL AND WITH_CURL)
set(WITH_OAUTHBEARER_OIDC ON)
endif()

# LZ4 {
option(ENABLE_LZ4_EXT "Enable external LZ4 library support" ON)
set(WITH_LZ4_EXT OFF)
3 changes: 2 additions & 1 deletion CONFIGURATION.md
@@ -29,6 +29,7 @@ socket.nagle.disable | * | true, false | false
socket.max.fails | * | 0 .. 1000000 | 1 | low | Disconnect from broker when this number of send failures (e.g., timed out requests) is reached. Disable with 0. WARNING: It is highly recommended to leave this setting at its default value of 1 to avoid the client and broker to become desynchronized in case of request timeouts. NOTE: The connection is automatically re-established. <br>*Type: integer*
broker.address.ttl | * | 0 .. 86400000 | 1000 | low | How long to cache the broker address resolving results (milliseconds). <br>*Type: integer*
broker.address.family | * | any, v4, v6 | any | low | Allowed broker IP address families: any, v4, v6 <br>*Type: enum value*
socket.connection.setup.timeout.ms | * | 1000 .. 2147483647 | 30000 | medium | Maximum time allowed for broker connection setup (TCP connection setup as well as SSL and SASL handshake). If the connection to the broker is not fully functional after this, the connection will be closed and retried. <br>*Type: integer*
connections.max.idle.ms | * | 0 .. 2147483647 | 0 | medium | Close broker connections after the specified time of inactivity. Disable with 0. If this property is left at its default value some heuristics are performed to determine a suitable default value, this is currently limited to identifying brokers on Azure (see librdkafka issue #3109 for more info). <br>*Type: integer*
reconnect.backoff.jitter.ms | * | 0 .. 3600000 | 0 | low | **DEPRECATED** No longer used. See `reconnect.backoff.ms` and `reconnect.backoff.max.ms`. <br>*Type: integer*
reconnect.backoff.ms | * | 0 .. 3600000 | 100 | medium | The initial time to wait before reconnecting to a broker after the connection has been closed. The time is increased exponentially until `reconnect.backoff.max.ms` is reached. -25% to +50% jitter is applied to each reconnect backoff. A value of 0 disables the backoff and reconnects immediately. <br>*Type: integer*
@@ -43,7 +44,7 @@ log_level | * | 0 .. 7 | 6
log.queue | * | true, false | false | low | Disable spontaneous log_cb from internal librdkafka threads, instead enqueue log messages on queue set with `rd_kafka_set_log_queue()` and serve log callbacks or events through the standard poll APIs. **NOTE**: Log messages will linger in a temporary queue until the log queue has been set. <br>*Type: boolean*
log.thread.name | * | true, false | true | low | Print internal thread name in log messages (useful for debugging librdkafka internals) <br>*Type: boolean*
enable.random.seed | * | true, false | true | low | If enabled librdkafka will initialize the PRNG with srand(current_time.milliseconds) on the first invocation of rd_kafka_new() (required only if rand_r() is not available on your platform). If disabled the application must call srand() prior to calling rd_kafka_new(). <br>*Type: boolean*
log.connection.close | * | true, false | true | low | Log broker disconnects. It might be useful to turn this off when interacting with 0.9 brokers with an aggressive `connection.max.idle.ms` value. <br>*Type: boolean*
log.connection.close | * | true, false | true | low | Log broker disconnects. It might be useful to turn this off when interacting with 0.9 brokers with an aggressive `connections.max.idle.ms` value. <br>*Type: boolean*
background_event_cb | * | | | low | Background queue event callback (set with rd_kafka_conf_set_background_event_cb()) <br>*Type: see dedicated API*
socket_cb | * | | | low | Socket creation callback to provide race-free CLOEXEC <br>*Type: see dedicated API*
connect_cb | * | | | low | Socket connect callback <br>*Type: see dedicated API*
5 changes: 3 additions & 2 deletions INTRODUCTION.md
@@ -166,7 +166,7 @@ Example using `linger.ms=1000`:
```


The default setting of `linger.ms=0.1` is not suitable for
The default setting of `linger.ms=5` is not suitable for
high throughput; it is recommended to set this value to >50ms, with
throughput leveling out somewhere around 100-1000ms depending on
message produce pattern and sizes.
@@ -1026,7 +1026,7 @@ from any thread at any time:

* `log_cb` - Logging callback - allows the application to output log messages
generated by librdkafka.
* `partitioner` - Partitioner callback - application provided message partitioner.
* `partitioner_cb` - Partitioner callback - application provided message partitioner.
The partitioner may be called in any thread at any time, and it may be
called multiple times for the same key.
Partitioner function constraints:
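
For reference, a hedged sketch of a custom partitioner registered through the topic configuration; the hash function is purely illustrative:

```c
#include <stddef.h>
#include <stdint.h>
#include <librdkafka/rdkafka.h>

/* Illustrative partitioner: simple byte hash modulo partition count.
 * A real partitioner must be deterministic and may be called from any
 * thread, multiple times for the same key. */
static int32_t example_partitioner(const rd_kafka_topic_t *rkt,
                                   const void *key, size_t keylen,
                                   int32_t partition_cnt,
                                   void *rkt_opaque, void *msg_opaque) {
        const unsigned char *k = key;
        uint32_t hash = 5381;
        size_t i;
        int32_t partition;

        for (i = 0; i < keylen; i++)
                hash = ((hash << 5) + hash) + k[i]; /* djb2-style (example) */

        partition = (int32_t)(hash % (uint32_t)partition_cnt);

        if (!rd_kafka_topic_partition_available(rkt, partition))
                return RD_KAFKA_PARTITION_UA; /* let librdkafka handle it */

        return partition;
}

/* Registration (topic_conf assumed created with rd_kafka_topic_conf_new()): */
static void register_partitioner(rd_kafka_topic_conf_t *topic_conf) {
        rd_kafka_topic_conf_set_partitioner_cb(topic_conf,
                                               example_partitioner);
}
```
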
@@ -1930,6 +1930,7 @@ The [Apache Kafka Implementation Proposals (KIPs)](https://cwiki.apache.org/conf
| KIP-580 - Exponential backoff for Kafka clients | WIP | Partially supported |
| KIP-584 - Versioning scheme for features | WIP | Not supported |
| KIP-588 - Allow producers to recover gracefully from txn timeouts | 2.8.0 (WIP) | Not supported |
| KIP-601 - Configurable socket connection timeout | 2.7.0 | Supported |
| KIP-602 - Use all resolved addresses by default | 2.6.0 | Supported |
| KIP-651 - Support PEM format for SSL certs and keys | 2.7.0 | Supported |
| KIP-654 - Aborted txns with non-flushed msgs should not be fatal | 2.7.0 | Supported |
5 changes: 5 additions & 0 deletions configure.self
@@ -146,6 +146,11 @@ void foo (void) {
# SASL OAUTHBEARER's default unsecured JWS implementation
# requires base64 encoding from OpenSSL
mkl_allvar_set WITH_SASL_OAUTHBEARER WITH_SASL_OAUTHBEARER y

if [[ $WITH_CURL == y ]]; then
mkl_allvar_set WITH_OAUTHBEARER_OIDC WITH_OAUTHBEARER_OIDC y
fi

# SASL AWS MSK IAM requires base64 encoding from OpenSSL
if mkl_lib_check "curl" "" disable CC "-lcurl" \
"#include <curl/curl.h>"; then
14 changes: 10 additions & 4 deletions mklove/Makefile.base
@@ -108,16 +108,16 @@ $(LIBNAME_LDS):
$(LIBFILENAME): $(OBJS) $(LIBNAME_LDS)
@printf "$(MKL_YELLOW)Creating shared library $@$(MKL_CLR_RESET)\n"
$(CC_LD) $(LDFLAGS) $(LIB_LDFLAGS) $(OBJS) -o $@ $(LIBS)
ifeq ($(WITH_STRIP),y)
cp $@ $(LIBFILENAMEDBG)
ifeq ($(WITH_STRIP),y)
$(STRIP) -S $@
endif

$(LIBNAME).a: $(OBJS)
@printf "$(MKL_YELLOW)Creating static library $@$(MKL_CLR_RESET)\n"
$(AR) rcs$(ARFLAGS) $@ $(OBJS)
ifeq ($(WITH_STRIP),y)
cp $@ $(LIBNAME)-dbg.a
ifeq ($(WITH_STRIP),y)
$(STRIP) -S $@
$(RANLIB) $@
endif
@@ -162,8 +162,14 @@ endif # MKL_DYNAMIC_LIBS

else # MKL_STATIC_LIBS is empty
_STATIC_FILENAME=$(LIBNAME).a
$(LIBNAME)-static.a:
@printf "$(MKL_RED)WARNING:$(MKL_YELLOW) $@: Not creating self-contained static library $@: no static libraries available/enabled$(MKL_CLR_RESET)\n"
$(LIBNAME)-static.a: $(LIBNAME).a
@printf "$(MKL_RED)WARNING:$(MKL_YELLOW) $@: No static libraries available/enabled for inclusion in self-contained static library $@: this library will be identical to $(LIBNAME).a$(MKL_CLR_RESET)\n"
ifneq ($(MKL_DYNAMIC_LIBS),)
@printf "$(MKL_RED)WARNING:$(MKL_YELLOW) $@: The following libraries were not available as static libraries and need to be linked dynamically: $(MKL_DYNAMIC_LIBS)$(MKL_CLR_RESET)\n"
cp $(LIBNAME).a $@
cp $(LIBNAME)-dbg.a $(LIBNAME)-static-dbg.a
cp $@ $(LIBNAME)-static-dbg.a
endif # MKL_DYNAMIC_LIBS
endif # MKL_STATIC_LIBS

endif # MKL_NO_SELFCONTAINED_STATIC_LIB
31 changes: 21 additions & 10 deletions mklove/modules/configure.base
@@ -846,21 +846,32 @@ function mkl_generate_late_vars {
done
}


# Generate MKL_DYNAMIC_LIBS and MKL_STATIC_LIBS for Makefile.config
#
# Params: $LIBS
function mkl_generate_libs {
while [[ $# -gt 0 ]]; do
if [[ $1 == -l* ]]; then
mkl_mkvar_append "" MKL_DYNAMIC_LIBS $1
elif [[ $1 == *.a ]]; then
mkl_mkvar_append "" MKL_STATIC_LIBS $1
elif [[ $1 == -framework ]]; then
mkl_mkvar_append "" MKL_DYNAMIC_LIBS "$1 $2"
shift # two args
else
mkl_dbg "Ignoring arg $1 from LIBS while building STATIC and DYNAMIC lists"
fi
shift # remove arg
done
}

# Generate output files.
# Must be called following a successful configure run.
function mkl_generate {

# Generate MKL_STATIC_LIBS and MKL_DYNAMIC_LIBS from LIBS
local arg=
for arg in $LIBS ; do
if [[ $arg == -l* ]]; then
mkl_mkvar_append "" MKL_DYNAMIC_LIBS $arg
elif [[ $arg == *.a ]]; then
mkl_mkvar_append "" MKL_STATIC_LIBS $arg
else
mkl_dbg "Ignoring arg $arg from LIBS while building STATIC and DYNAMIC lists"
fi
done
mkl_generate_libs $LIBS

local mf=
for mf in $MKL_GENERATORS ; do