Bump librdkafka to 2.2.0 #5

Open · wants to merge 1 commit into base: master
12 changes: 6 additions & 6 deletions README.md
@@ -16,7 +16,7 @@ I am looking for *your* help to make this project even better! If you're interes

The `node-rdkafka` library is a high-performance NodeJS client for [Apache Kafka](http://kafka.apache.org/) that wraps the native [librdkafka](https://github.com/edenhill/librdkafka) library. All the complexity of balancing writes across partitions and managing (possibly ever-changing) brokers should be encapsulated in the library.

__This library currently uses `librdkafka` version `1.9.2`.__
__This library currently uses `librdkafka` version `2.2.0`.__

## Reference Docs

@@ -59,7 +59,7 @@ Using Alpine Linux? Check out the [docs](https://github.com/Blizzard/node-rdkafk

### Windows

Windows build **is not** compiled from `librdkafka` source but is instead linked against the appropriate version of the [NuGet librdkafka.redist](https://www.nuget.org/packages/librdkafka.redist/) static binary that gets downloaded from `https://globalcdn.nuget.org/packages/librdkafka.redist.1.9.2.nupkg` during installation. This download link can be changed using the environment variable `NODE_RDKAFKA_NUGET_BASE_URL`, which defaults to `https://globalcdn.nuget.org/packages/` when it's not set.
Windows build **is not** compiled from `librdkafka` source but is instead linked against the appropriate version of the [NuGet librdkafka.redist](https://www.nuget.org/packages/librdkafka.redist/) static binary that gets downloaded from `https://globalcdn.nuget.org/packages/librdkafka.redist.2.2.0.nupkg` during installation. This download link can be changed using the environment variable `NODE_RDKAFKA_NUGET_BASE_URL`, which defaults to `https://globalcdn.nuget.org/packages/` when it's not set.

Requirements:
* [node-gyp for Windows](https://github.com/nodejs/node-gyp#on-windows) (the easiest way to get it: `npm install --global --production windows-build-tools`; if your node version is 6.x or below, please use `npm install --global --production [email protected]`)
@@ -96,7 +96,7 @@ var Kafka = require('node-rdkafka');

## Configuration

You can pass many configuration options to `librdkafka`. A full list can be found in `librdkafka`'s [Configuration.md](https://github.com/edenhill/librdkafka/blob/v1.9.2/CONFIGURATION.md)
You can pass many configuration options to `librdkafka`. A full list can be found in `librdkafka`'s [Configuration.md](https://github.com/edenhill/librdkafka/blob/v2.2.0/CONFIGURATION.md)

Configuration keys that have the suffix `_cb` are designated as callbacks. Some
of these keys are informational and you can choose to opt-in (for example, `dr_cb`). Others are callbacks designed to
@@ -131,7 +131,7 @@ You can also get the version of `librdkafka`
const Kafka = require('node-rdkafka');
console.log(Kafka.librdkafkaVersion);

// #=> 1.9.2
// #=> 2.2.0
```

## Sending Messages
Expand All @@ -144,7 +144,7 @@ var producer = new Kafka.Producer({
});
```

A `Producer` requires only `metadata.broker.list` (the Kafka brokers) to be created. The values in this list are separated by commas. For other configuration options, see the [Configuration.md](https://github.com/edenhill/librdkafka/blob/v1.9.2/CONFIGURATION.md) file described previously.
A `Producer` requires only `metadata.broker.list` (the Kafka brokers) to be created. The values in this list are separated by commas. For other configuration options, see the [Configuration.md](https://github.com/edenhill/librdkafka/blob/v2.2.0/CONFIGURATION.md) file described previously.

The following example illustrates a list with several `librdkafka` options set.
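
For illustration, a sketch of such a configuration might look like the following; the broker addresses and option values are placeholder assumptions, but each key is a standard `librdkafka` setting:

```js
var Kafka = require('node-rdkafka');

var producer = new Kafka.Producer({
  // The only required option: comma-separated list of brokers
  'metadata.broker.list': 'kafka1:9092,kafka2:9092',
  // Compress message batches before sending them to the broker
  'compression.codec': 'gzip',
  // Wait this many milliseconds before retrying a failed request
  'retry.backoff.ms': 200,
  // Maximum number of messages batched into one request
  'batch.num.messages': 1000,
  // Opt in to delivery reports (emitted as 'delivery-report' events)
  'dr_cb': true
});
```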

@@ -266,7 +266,7 @@ To see the configuration options available to you, see the [Configuration](#conf
|`producer.beginTransaction(callback)`| Starts a new transaction. |
|`producer.sendOffsetsToTransaction(offsets, consumer, timeout, callback)`| Sends consumed topic-partition-offsets to the broker, which will get committed along with the transaction. |
|`producer.abortTransaction(timeout, callback)`| Aborts the ongoing transaction. |
|`producer.commitTransaction(timeout, callback)`| Commits the ongoing transaction. |
|`producer.commitTransaction(timeout, callback)`| Commits the ongoing transaction. |
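
Put together, a transactional produce cycle might look like the sketch below. It assumes a producer created with a `transactional.id`, already connected, and exposing `initTransactions` (present in recent releases alongside the methods above); error handling is abbreviated:

```js
producer.initTransactions(5000, function (err) {
  if (err) throw err;
  producer.beginTransaction(function (err) {
    if (err) throw err;
    // Produce one or more messages inside the transaction
    producer.produce('example-topic', null, Buffer.from('payload'));
    producer.commitTransaction(5000, function (err) {
      if (err) {
        // Roll back everything produced since beginTransaction()
        producer.abortTransaction(5000, function () {});
      }
    });
  });
});
```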

##### Events

46 changes: 35 additions & 11 deletions config.d.ts
@@ -1,4 +1,4 @@
// ====== Generated from librdkafka 1.9.2 file CONFIGURATION.md ======
// ====== Generated from librdkafka 2.2.0 file CONFIGURATION.md ======
// Code that generated this is a derivative work of the code from Nam Nguyen
// https://gist.github.com/ntgn81/066c2c8ec5b4238f85d1e9168a04e3fb

@@ -306,6 +306,11 @@ export interface GlobalConfig {
*/
"open_cb"?: any;

/**
* Address resolution callback (set with rd_kafka_conf_set_resolve_cb()).
*/
"resolve_cb"?: any;

/**
* Application opaque (set with rd_kafka_conf_set_opaque())
*/
@@ -351,6 +356,13 @@ export interface GlobalConfig {
*/
"broker.version.fallback"?: string;

/**
* Allow automatic topic creation on the broker when subscribing to or assigning non-existent topics. The broker must also be configured with `auto.create.topics.enable=true` for this configuration to take effect. Note: the default value (true) for the producer is different from the default value (false) for the consumer. Further, the consumer default value is different from the Java consumer (true), and this property is not supported by the Java producer. Requires broker version >= 0.11.0.0; for older broker versions, only the broker configuration applies.
*
* @default false
*/
"allow.auto.create.topics"?: boolean;

/**
* Protocol used to communicate with brokers.
*
@@ -446,7 +458,12 @@ export interface GlobalConfig {
"ssl.keystore.password"?: string;

/**
* Path to OpenSSL engine library. OpenSSL >= 1.1.0 required.
* Comma-separated list of OpenSSL 3.0.x implementation providers. E.g., "default,legacy".
*/
"ssl.providers"?: string;

/**
* **DEPRECATED** Path to OpenSSL engine library. OpenSSL >= 1.1.x required. DEPRECATED: OpenSSL engine support is deprecated and should be replaced by OpenSSL 3 providers.
*/
"ssl.engine.location"?: string;

@@ -472,7 +489,7 @@
/**
* Endpoint identification algorithm to validate broker hostname using broker certificate. https - Server (broker) hostname verification as specified in RFC2818. none - No endpoint verification. OpenSSL >= 1.0.2 required.
*
* @default none
* @default https
*/
"ssl.endpoint.identification.algorithm"?: 'none' | 'https';

@@ -602,6 +619,13 @@ export interface GlobalConfig {
*/
"client.rack"?: string;

/**
* Controls how the client uses DNS lookups. By default, when the lookup returns multiple IP addresses for a hostname, they will all be attempted for connection before the connection is considered failed. This applies to both bootstrap and advertised servers. If the value is set to `resolve_canonical_bootstrap_servers_only`, each entry will be resolved and expanded into a list of canonical names. NOTE: Default here is different from the Java client's default behavior, which connects only to the first IP address returned for a hostname.
*
* @default use_all_dns_ips
*/
"client.dns.lookup"?: 'use_all_dns_ips' | 'resolve_canonical_bootstrap_servers_only';

/**
* Enables or disables `event.*` emitting.
*
@@ -638,7 +662,7 @@ export interface ProducerGlobalConfig extends GlobalConfig {
"enable.gapless.guarantee"?: boolean;

/**
* Maximum number of messages allowed on the producer queue. This queue is shared by all topics and partitions.
* Maximum number of messages allowed on the producer queue. This queue is shared by all topics and partitions. A value of 0 disables this limit.
*
* @default 100000
*/
@@ -841,6 +865,13 @@ export interface ConsumerGlobalConfig extends GlobalConfig {
*/
"fetch.wait.max.ms"?: number;

/**
* How long to postpone the next fetch request for a topic+partition in case the current fetch queue thresholds (queued.min.messages or queued.max.messages.kbytes) have been exceeded. This property may need to be decreased if the queue thresholds are set low and the application is experiencing long (~1s) delays between messages. Low values may increase CPU utilization.
*
* @default 1000
*/
"fetch.queue.backoff.ms"?: number;

/**
* Initial maximum number of bytes per topic+partition to request when fetching messages from the broker. If the client encounters a message larger than this value it will gradually try to increase it until the entire message can be fetched.
*
@@ -918,13 +949,6 @@ export interface ConsumerGlobalConfig extends GlobalConfig {
* @default false
*/
"check.crcs"?: boolean;

/**
* Allow automatic topic creation on the broker when subscribing to or assigning non-existent topics. The broker must also be configured with `auto.create.topics.enable=true` for this configuration to take effect. Note: The default value (false) is different from the Java consumer (true). Requires broker version >= 0.11.0.0; for older broker versions, only the broker configuration applies.
*
* @default false
*/
"allow.auto.create.topics"?: boolean;
}

export interface TopicConfig {
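As a usage sketch, the keys added above flow straight through the client's configuration object; the broker address and group id below are placeholders:

```js
var Kafka = require('node-rdkafka');

var consumer = new Kafka.KafkaConsumer({
  'metadata.broker.list': 'localhost:9092',
  'group.id': 'example-group',
  // Keys added with the librdkafka 2.x configuration surface
  'allow.auto.create.topics': true,
  'client.dns.lookup': 'use_all_dns_ips',
  'fetch.queue.backoff.ms': 500
}, {});
```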
2 changes: 1 addition & 1 deletion deps/librdkafka
14 changes: 5 additions & 9 deletions deps/windows-install.py
@@ -2,6 +2,8 @@
# read librdkafka version from package.json
import json
import os
import glob

with open('../package.json') as f:
librdkafkaVersion = json.load(f)['librdkafka']
librdkafkaWinSufix = '7' if librdkafkaVersion == '0.11.5' else '';
@@ -62,15 +64,9 @@ def createdir(dir):
shutil.copy2(includePath + '/rdkafka.h', depsIncludeDir)
shutil.copy2(includePath + '/rdkafkacpp.h', depsIncludeDir)

shutil.copy2(dllPath + '/libcrypto-1_1-x64.dll', buildReleaseDir)
shutil.copy2(dllPath + '/libcurl.dll', buildReleaseDir)
shutil.copy2(dllPath + '/librdkafka.dll', buildReleaseDir)
shutil.copy2(dllPath + '/librdkafkacpp.dll', buildReleaseDir)
shutil.copy2(dllPath + '/libssl-1_1-x64.dll', buildReleaseDir)
shutil.copy2(dllPath + '/msvcp140.dll', buildReleaseDir)
shutil.copy2(dllPath + '/vcruntime140.dll', buildReleaseDir)
shutil.copy2(dllPath + '/zlib1.dll', buildReleaseDir)
shutil.copy2(dllPath + '/zstd.dll', buildReleaseDir)
# Copy every DLL shipped in the librdkafka.redist package instead of naming each one
for filename in glob.glob(os.path.join(dllPath, '*.dll')):
shutil.copy2(filename, buildReleaseDir)

# clean up
os.remove(outputFile)
4 changes: 3 additions & 1 deletion errors.d.ts
@@ -1,4 +1,4 @@
// ====== Generated from librdkafka 1.9.2 file src-cpp/rdkafkacpp.h ======
// ====== Generated from librdkafka 2.2.0 file src-cpp/rdkafkacpp.h ======
export const CODES: { ERRORS: {
/* Internal errors to rdkafka: */
/** Begin internal error codes (**-200**) */
@@ -126,6 +126,8 @@ export const CODES: { ERRORS: {
ERR__NOOP: number,
/** No offset to automatically reset to (**-140**) */
ERR__AUTO_OFFSET_RESET: number,
/** Partition log truncation detected (**-139**) */
ERR__LOG_TRUNCATION: number,
/** End internal error codes (**-100**) */
ERR__END: number,
/* Kafka broker errors: */
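The new constant is exposed alongside the existing error codes, so application code can match on it; a sketch, assuming a connected consumer wired to the library's standard `event.error` emitter:

```js
var Kafka = require('node-rdkafka');

consumer.on('event.error', function (err) {
  // ERR__LOG_TRUNCATION (-139): partition log truncation was detected
  // for this consumer's current position.
  if (err.code === Kafka.CODES.ERRORS.ERR__LOG_TRUNCATION) {
    console.error('Partition log truncation detected:', err.message);
  }
});
```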
2 changes: 1 addition & 1 deletion lib/error.js
@@ -27,7 +27,7 @@ LibrdKafkaError.wrap = errorWrap;
* @enum {number}
* @constant
*/
// ====== Generated from librdkafka 1.9.2 file src-cpp/rdkafkacpp.h ======
// ====== Generated from librdkafka 2.2.0 file src-cpp/rdkafkacpp.h ======
LibrdKafkaError.codes = {

/* Internal errors to rdkafka: */
6 changes: 3 additions & 3 deletions package.json
@@ -2,7 +2,7 @@
"name": "node-rdkafka-acosom",
"version": "v2.16.1",
"description": "Node.js bindings for librdkafka",
"librdkafka": "1.9.2",
"librdkafka": "2.2.0",
"main": "lib/index.js",
"scripts": {
"configure": "node-gyp configure",
@@ -42,12 +42,12 @@
"jsdoc": "^3.4.0",
"jshint": "^2.10.1",
"mocha": "^5.2.0",
"node-gyp": "^8.4.1",
"node-gyp": "^9.4.0",
"toolkit-jsdoc": "^1.0.0"
},
"dependencies": {
"bindings": "^1.3.1",
"nan": "^2.14.0",
"nan": "^2.18.0",
"tap": "^16.3.4"
},
"engines": {
20 changes: 0 additions & 20 deletions src/binding.cc
@@ -15,21 +15,8 @@ using NodeKafka::KafkaConsumer;
using NodeKafka::AdminClient;
using NodeKafka::Topic;

using node::AtExit;
using RdKafka::ErrorCode;

static void RdKafkaCleanup(void*) { // NOLINT
/*
* Wait for RdKafka to decommission.
* This is not strictly needed but
* allows RdKafka to clean up all its resources before the application
* exits so that memory profilers such as valgrind wont complain about
* memory leaks.
*/

RdKafka::wait_destroyed(5000);
}

NAN_METHOD(NodeRdKafkaErr2Str) {
int points = Nan::To<int>(info[0]).FromJust();
// Cast to error code
@@ -74,13 +61,6 @@ void ConstantsInit(v8::Local<v8::Object> exports) {
}

void Init(v8::Local<v8::Object> exports, v8::Local<v8::Value> m_, void* v_) {
#if NODE_MAJOR_VERSION <= 9 || (NODE_MAJOR_VERSION == 10 && NODE_MINOR_VERSION <= 15)
AtExit(RdKafkaCleanup);
#else
v8::Local<v8::Context> context = Nan::GetCurrentContext();
node::Environment* env = node::GetCurrentEnvironment(context);
AtExit(env, RdKafkaCleanup, NULL);
#endif
KafkaConsumer::Init(exports);
Producer::Init(exports);
AdminClient::Init(exports);