
Commit 9d71be4

upgrade to cp 7.6.1
1 parent 60f53fb

File tree

9 files changed: +50 -43 lines changed

.env

+1 -1

@@ -1,5 +1,5 @@
 KAFKA_VERSION=3.7.0
-CONFLUENT_VERSION=7.6.0
+CONFLUENT_VERSION=7.6.1
 POSTGRES_VERSION=10.5
 POSTGRES_ALPINE_VERSION=14.1-alpine
 KEYCLOAK_VERSION=legacy

README.adoc

+27 -22

@@ -1,7 +1,7 @@
 = Practical examples with Apache Kafka®
 :author: Giovanni Marigi
-:revdate: Febraury 28, 2024
+:revdate: April 16, 2024
 :revnumber: 1.2.1
 :version-label!:
 :toc: left
@@ -259,6 +259,7 @@ Topic create:
 ----
 kubectl exec --stdin --tty kafka-0 -- /bin/bash
 kafka-topics --bootstrap-server localhost:9092 --create --topic test-1
+exit
 ----

 Topic list:
@@ -267,6 +268,7 @@ Topic list:
 ----
 kubectl exec --stdin --tty kafka-0 -- /bin/bash
 kafka-topics --bootstrap-server localhost:9092 --list
+exit
 ----

 Topic describe:
@@ -275,6 +277,7 @@ Topic describe:
 ----
 kubectl exec --stdin --tty kafka-0 -- /bin/bash
 kafka-topics --bootstrap-server localhost:9092 --topic test-1 --describe
+exit
 ----

 Produce messages to Topic:
@@ -283,6 +286,7 @@ Produce messages to Topic:
 ----
 kubectl exec --stdin --tty kafka-0 -- /bin/bash
 kafka-producer-perf-test --num-records 1000000 --record-size 1000 --throughput -1 --topic test-1 --producer-props bootstrap.servers=localhost:9092
+exit
 ----

 Consume messages from Topic:
@@ -291,6 +295,7 @@ Consume messages from Topic:
 ----
 kubectl exec --stdin --tty kafka-0 -- /bin/bash
 kafka-console-consumer --bootstrap-server localhost:9092 --topic test-1 --from-beginning
+exit
 ----

 ==== Tear Down
@@ -470,7 +475,7 @@ LogAppendTime:1697359857981 BIQAWWOIFIAKNYFEPTPMIXPQAXFEIKUFFXIDHILBPCBTHWDRMALH

 Folder link:interceptors/[interceptors/]

-This example shows how to create a custom producer interceptor. Java class link:interceptors/src/main/java/org/hifly/kafka/interceptor/producer/CreditCardProducerInterceptor.java[_CreditCardProducerInterceptor_] will mask a sensitive info on producer record (credit card).
+This example shows how to create a custom producer interceptor. Java class link:interceptors/src/main/java/org/hifly/kafka/interceptor/producer/CreditCardProducerInterceptor.java[_CreditCardProducerInterceptor_] will mask a sensitive info on producer record (credit card number).

 Compile and package:
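
The repository's interceptor is the Java class linked above. Purely as an illustration of the mechanism (class name, regex and masking format below are assumptions, not the repository code), a producer interceptor that masks card-number-like digits in a String value could look like this minimal sketch:

[source,java]
----
import java.util.Map;
import org.apache.kafka.clients.producer.ProducerInterceptor;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

// Hypothetical sketch: masks anything that looks like a 16-digit card number
// in a String record value before it is serialized and sent.
public class MaskingProducerInterceptor implements ProducerInterceptor<String, String> {

    @Override
    public ProducerRecord<String, String> onSend(ProducerRecord<String, String> record) {
        if (record.value() == null) {
            return record;
        }
        String masked = record.value().replaceAll("\\b\\d{16}\\b", "****-****-****-****");
        return new ProducerRecord<>(record.topic(), record.partition(),
                record.timestamp(), record.key(), masked, record.headers());
    }

    @Override
    public void onAcknowledgement(RecordMetadata metadata, Exception exception) {
        // no-op: nothing to do on acknowledgement in this sketch
    }

    @Override
    public void close() { }

    @Override
    public void configure(Map<String, ?> configs) { }
}
----

An interceptor of this kind is enabled on the producer through the _interceptor.classes_ property.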

@@ -506,7 +511,7 @@ Topic: test_custom_data - Partition: 0 - Offset: 1

 Folder link:kafka-python-producer/[kafka-python-producer/]

-Install python lib link:https://docs.confluent.io/kafka-clients/python/current/overview.html[_confluent-kafka_]:
+Install confluent-kafka-python lib link:https://docs.confluent.io/kafka-clients/python/current/overview.html[_confluent-kafka_]:

 [source,bash]
 ----
@@ -520,7 +525,7 @@ or:
 python3 -m pip install confluent-kafka
 ----

-Create "kafka-topic" topic:
+Create _kafka-topic_ topic:

 [source,bash]
 ----
@@ -618,7 +623,7 @@ docker exec -it broker /opt/kafka/bin/kafka-topics.sh --bootstrap-server broker:
 docker exec -it broker /opt/kafka/bin/kafka-topics.sh --bootstrap-server broker:9092 --create --topic users_clicks --replication-factor 1 --partitions 3
 ----

-Run 2 consumer instances (2 different shells/terminals) belonging to the same group and subscribed to _user_ and _user_clicks_ topics. Consumers uses
+Run 2 consumer instances (2 different shells/terminals) belonging to the same consumer group and subscribed to _user_ and _user_clicks_ topics. Consumers uses
 link:https://kafka.apache.org/37/javadoc/org/apache/kafka/clients/consumer/RangeAssignor.html[_org.apache.kafka.clients.consumer.RangeAssignor_] to distribute partition ownership.

 [source,bash]
@@ -656,7 +661,7 @@ docker exec -it broker /opt/kafka/bin/kafka-topics.sh --bootstrap-server broker:
 docker exec -it broker /opt/kafka/bin/kafka-topics.sh --bootstrap-server broker:9092 --create --topic users_clicks --replication-factor 1 --partitions 3
 ----

-Run 2 consumer instances (2 different shells/terminals) belonging to the same group and subscribed to _user_ and _user_clicks_ topics; consumers uses
+Run 2 consumer instances (2 different shells/terminals) belonging to the same consumer group and subscribed to _user_ and _user_clicks_ topics; consumers uses
 link:https://kafka.apache.org/37/javadoc/org/apache/kafka/clients/consumer/RoundRobinAssignor.html[_org.apache.kafka.clients.consumer.RoundRobinAssignor_] to distribute partition ownership.

 [source,bash]
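
The assignor used in the two hunks above is plain consumer configuration, _partition.assignment.strategy_. A minimal consumer sketch (group id, endpoint and topic names assumed) that subscribes to both topics with the RangeAssignor, and can be switched to the RoundRobinAssignor for the second run, could be:

[source,java]
----
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class AssignorDemoConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed endpoint
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "demo-group");              // assumed group id
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // RangeAssignor for the first run; swap in
        // org.apache.kafka.clients.consumer.RoundRobinAssignor for the second one.
        props.put(ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG,
                "org.apache.kafka.clients.consumer.RangeAssignor");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("users", "users_clicks")); // topic names assumed
            consumer.poll(java.time.Duration.ofSeconds(5));
        }
    }
}
----
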
@@ -730,7 +735,7 @@ Try to shut down consumer instances (CTRL+C) and then re-start them again; verif

 === Read from the closest replica

-This example shows how to use the feature (since Apache Kafka® 2.4+) for consumers to read messages from the closest replica.
+This example shows how to use the feature (since Apache Kafka® 2.4+) for consumers to read messages from the closest replica, even if it is not a leader of the partition.

 Start a cluster with 3 brokers on 3 different racks, _dc1_, _dc2_ and _dc3_:
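
Under the hood this is rack-aware fetching: each broker declares a _broker.rack_ and uses _org.apache.kafka.common.replica.RackAwareReplicaSelector_ as its _replica.selector.class_, while the consumer states where it runs via _client.rack_. A minimal consumer-side sketch (endpoint, group id and topic are assumed values, not the example's scripts):

[source,java]
----
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ClosestReplicaConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed endpoint
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "rack-aware-group");        // assumed group id
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // The consumer declares the rack it runs in; with the broker-side
        // RackAwareReplicaSelector, fetches can then be served by the
        // in-sync replica living on the same rack (dc1/dc2/dc3).
        props.put(ConsumerConfig.CLIENT_RACK_CONFIG, "dc1");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("test-topic")); // topic name assumed
            consumer.poll(java.time.Duration.ofSeconds(5));
        }
    }
}
----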

@@ -798,7 +803,7 @@ Folder link:kafka-consumer-retry-topics/[kafka-consumer-retry-topics/]
 This solution could be implemented on consumer side to handle errors in processing records without blocking the input topic.

 . Consumer processes records and commit the offset (_auto-commit_).
-. If a record can't be processed _(simple condition here is the existence of a specific HEADER)_, it is sent to a retry topic, if the number of retries is not yet exhausted.
+. If a record can't be processed _(simple condition here to raise an error, is the existence of a specific message HEADER named ERROR)_, it is sent to a retry topic, if the number of retries is not yet exhausted.
 . When the number of retries is exhausted, record is sent to a DLQ topic.
 . Number of retries is set at Consumer instance level.
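
As a rough illustration of that flow (not the repository classes: topic names, header handling and retry counting below are assumptions), a consumer loop that re-publishes failed records to a retry topic and finally to a DLQ might look like:

[source,java]
----
import java.nio.charset.StandardCharsets;
import java.time.Duration;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.header.Header;

// Hypothetical sketch of the retry/DLQ pattern described above.
public class RetryingConsumerLoop {
    static final int MAX_RETRIES = 3;

    static void consume(KafkaConsumer<String, String> consumer,
                        KafkaProducer<String, String> producer) {
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
            for (ConsumerRecord<String, String> record : records) {
                // The example's failure condition: a header named ERROR is present.
                Header error = record.headers().lastHeader("ERROR");
                if (error == null) {
                    process(record);                     // normal processing
                    continue;
                }
                int retries = currentRetryCount(record); // read from a RETRIES header (assumed)
                String target = retries < MAX_RETRIES ? "test-topic-retry" : "test-topic-dlq";
                ProducerRecord<String, String> out =
                        new ProducerRecord<>(target, record.key(), record.value());
                out.headers().add("RETRIES",
                        String.valueOf(retries + 1).getBytes(StandardCharsets.UTF_8));
                producer.send(out);
            }
            // offsets are committed by auto-commit, as in the example
        }
    }

    static void process(ConsumerRecord<String, String> record) { /* business logic */ }

    static int currentRetryCount(ConsumerRecord<String, String> record) {
        Header h = record.headers().lastHeader("RETRIES");
        return h == null ? 0 : Integer.parseInt(new String(h.value(), StandardCharsets.UTF_8));
    }
}
----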

@@ -884,7 +889,7 @@ Consumer 23d06b51-5780-4efc-9c33-a93b3caa3b48 - partition 0 - lastOffset 1

 Folder link:kafka-python-consumer/[kafka-python-consumer/]

-Install python lib link:https://docs.confluent.io/kafka-clients/python/current/overview.html[_confluent-kafka_]:
+Install confluent kafka python lib link:https://docs.confluent.io/kafka-clients/python/current/overview.html[_confluent-kafka_]:

 [source,bash]
 ----
@@ -984,7 +989,7 @@ Folder link:compression/[compression/]
 This example will show that messages sent to the same topic with different _compression.type_.
 Messages with different compression can be read by the same consumer instance.

-Compressions supported on producer side are:
+Compressions supported on producer are:

 - _none_ (no compression)
 - _gzip_
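
_compression.type_ is set per producer, so a single topic can hold batches written with different codecs while one consumer reads them all transparently. A minimal producer sketch (endpoint and topic assumed, not the example's own classes):

[source,java]
----
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class CompressedProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed endpoint
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // valid values: none, gzip, snappy, lz4, zstd
        props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "gzip");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("compressed-topic", "key", "value")); // topic assumed
        }
    }
}
----
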
@@ -1672,9 +1677,9 @@ Folder: link:kafka-unixcommand-connector/[kafka-unixcommand-connector]

 Implementation of a sample Kafka Connect Source Connector; it executes _unix commands_ (e.g. _fortune_, _ls -ltr, netstat_) and sends its output to a topic.

-IMPORTANT: commands are executed on connect worker node.
+IMPORTANT: unix commands are executed on connect worker node.

-This connector relies on Confluent Schema Registry to convert Avro messages using converter:
+This connector relies on Confluent Schema Registry to convert messages using an Avro converter:
 link:https://github.com/confluentinc/schema-registry/blob/master/avro-converter/src/main/java/io/confluent/connect/avro/AvroConverter.java[_io.confluent.connect.avro.AvroConverter_].

 Connector link:kafka-unixcommand-connector/config/source.quickstart.json[source.quickstart.json]:
@@ -1695,7 +1700,7 @@ Connector link:kafka-unixcommand-connector/config/source.quickstart.json[source.

 Parameters for source connector:

-- _command_ – unix command to execute (e.g. ls -ltr)
+- _command_ – unix command to execute (e.g. ls -ltr, fortune)
 - _topic_ – output topic
 - _poll.ms_ – poll interval in milliseconds between every execution
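
For background on what a connector like this does with those parameters, a stripped-down source task might run the configured command on each poll cycle and wrap each output line in a _SourceRecord_. The sketch below is illustrative only (class name and config handling are assumed, and it emits plain strings rather than the Avro records the real connector produces through the converter):

[source,java]
----
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.source.SourceRecord;
import org.apache.kafka.connect.source.SourceTask;

// Hypothetical sketch of a "run a unix command, publish its output" source task.
public class UnixCommandSourceTaskSketch extends SourceTask {
    private String command;
    private String topic;
    private long pollMs;

    @Override
    public void start(Map<String, String> props) {
        command = props.get("command"); // e.g. "fortune" or "ls -ltr"
        topic = props.get("topic");
        pollMs = Long.parseLong(props.getOrDefault("poll.ms", "5000"));
    }

    @Override
    public List<SourceRecord> poll() throws InterruptedException {
        Thread.sleep(pollMs);
        List<SourceRecord> records = new ArrayList<>();
        try {
            Process p = new ProcessBuilder("sh", "-c", command).start();
            try (BufferedReader out = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
                String line;
                while ((line = out.readLine()) != null) {
                    records.add(new SourceRecord(null, null, topic, Schema.STRING_SCHEMA, line));
                }
            }
        } catch (Exception e) {
            throw new RuntimeException("command execution failed", e);
        }
        return records;
    }

    @Override
    public void stop() { }

    @Override
    public String version() { return "0.0.1"; }
}
----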

@@ -1716,7 +1721,7 @@ scripts/bootstrap-unixcommand-connector.sh

 This will create an image based on link:https://hub.docker.com/r/confluentinc/cp-kafka-connect-base/tags[_confluentinc/cp-kafka-connect-base_] using a custom link:kafka-unixcommand-connector/Dockerfile[_Dockerfile_].

-It will use the Confluent utility link:https://docs.confluent.io/kafka-connectors/confluent-hub/client.html[_confluent-hub install_] to install the plugin in connect.
+It will use the confluent-hub utility link:https://docs.confluent.io/kafka-connectors/confluent-hub/client.html[_confluent-hub install_] to install the plugin in connect.


 Deploy the connector:
@@ -1794,7 +1799,7 @@ A MongoDB sink connector will be created with this link:kafka-smt-custom/config/

 Original json messages will be sent to _test_ topic.

-Sink connector will apply the SMT and store the records in MongoDB _pets_ collection from _Tutorial2_ database.
+Sink connector will apply the SMT and store the records in MongoDB _pets_ collection from _Tutorial2_ database, using a key generated by the SMT.

 Teardown:
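
For background, a key-generating SMT implements Kafka Connect's _Transformation_ interface. The sketch below (class name, config property and schemaless-map handling are assumptions, not the repository's SMT) derives the new record key from a field of the value:

[source,java]
----
import java.util.Map;
import org.apache.kafka.common.config.ConfigDef;
import org.apache.kafka.connect.connector.ConnectRecord;
import org.apache.kafka.connect.transforms.Transformation;

// Hypothetical sketch: derive the record key from a field of a schemaless (Map) value.
public class KeyFromValueField<R extends ConnectRecord<R>> implements Transformation<R> {
    private String fieldName;

    @Override
    public void configure(Map<String, ?> configs) {
        Object configured = configs.get("key.field"); // assumed config property
        fieldName = configured != null ? configured.toString() : "id";
    }

    @Override
    public R apply(R record) {
        Object value = record.value();
        if (!(value instanceof Map)) {
            return record; // this sketch only handles schemaless map values
        }
        Object key = ((Map<?, ?>) value).get(fieldName);
        return record.newRecord(record.topic(), record.kafkaPartition(),
                null, key,
                record.valueSchema(), record.value(), record.timestamp());
    }

    @Override
    public ConfigDef config() { return new ConfigDef(); }

    @Override
    public void close() { }
}
----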

@@ -1944,7 +1949,7 @@ A MongoDB sink connector will be created with this link:kafka-connect-sink-dlq/c
 }
 ----

-Send json messages to _test_ topic (second message is a bad json message):
+Send json messages to _test_ topic (second message is a malformed json message):

 [source,bash]
 ----
@@ -1990,7 +1995,7 @@ Run the example:
 scripts/bootstrap-connect-sink-http.sh
 ----

-A web application listening on port _8010_ will start up.
+A web application, exposing REST APIs, listening on port _8010_ will start up.

 A HTTP sink connector will be created with this link:kafka-connect-sink-http/config/http_sink.json[config]:

@@ -2074,7 +2079,7 @@ A S3 sink connector will be created with this link:kafka-connect-sink-s3/config/
 }
 ----

-Sink connector will read messages from topic _gaming-player-activity_ and store in S3 bucket _gaming-player-activity-bucket_ using _io.confluent.connect.s3.format.avro.AvroFormat_ as format class.
+Sink connector will read messages from topic _gaming-player-activity_ and store them in a S3 bucket _gaming-player-activity-bucket_ using _io.confluent.connect.s3.format.avro.AvroFormat_ as format class.

 Sink connector will generate a new object storage entry every 100 messages (_flush_size_).

@@ -2100,7 +2105,7 @@ scripts/tear-down-connect-sink-s3.sh

 ==== Parquet format

-Same example but Sink connector will read Avro messages from topic _gaming-player-activity_ and store them in S3 bucket _gaming-player-activity-bucket_ using _io.confluent.connect.s3.format.parquet.ParquetFormat_ as format class.
+Same example but Sink connector will read Avro messages from topic _gaming-player-activity_ and store them in a S3 bucket _gaming-player-activity-bucket_ using _io.confluent.connect.s3.format.parquet.ParquetFormat_ as format class.

 The format of data stored in MinIO will be Parquet.

@@ -2218,15 +2223,15 @@ scripts/tear-down-connect-source-sap-hana.sh

 Folder: link:kafka-connect-source-event-router/[kafka-connect-source-event-router]

-In this example, some SMT transformations (in chain) are used to create an Event Router starting from an input _outbox table_.
+In this example, some SMT transformations (chained) are used to create an Event Router starting from an input _outbox table_.

-The outbox table contains different operations for the same aggregate (_Consumer Loan_); the different operations are sent on specific topics following this routing:
+The outbox table contains different operations for the same aggregate (_Consumer Loan_); the different operations are sent on specific topics following these routing rules:

 - operation: CREATE --> topic: _loan_
 - operation: INSTALLMENT_PAYMENT --> topic: _loan_payment_
 - operation: EARLY_LOAN_CLOSURE --> topic: _loan_

-Records from the outbox table are fetched using a jdbc source connector.
+Records from the outbox table are fetched using a JDBC Source Connector.

 Run the example:

cdc-debezium-informix/config/debezium-source-informix.json

+3 -1

@@ -10,6 +10,8 @@
     "topic.prefix": "test",
     "table.include.list": "iot.informix.cust_db",
     "schema.history.internal.kafka.bootstrap.servers": "broker:9092",
-    "schema.history.internal.kafka.topic": "schemahistory.test"
+    "schema.history.internal.kafka.topic": "schemahistory.test",
+    "schema.history.internal.store.only.captured.tables.ddl": "true",
+    "snapshot.mode": "always"
   }
 }

confluent-for-kubernetes/k8s/confluent-platform-reducted.yaml

+4 -4

@@ -7,8 +7,8 @@ metadata:
 spec:
   replicas: 1
   image:
-    application: confluentinc/cp-zookeeper:7.6.0
-    init: confluentinc/confluent-init-container:2.8.0
+    application: confluentinc/cp-zookeeper:7.6.1
+    init: confluentinc/confluent-init-container:2.8.2
   dataVolumeCapacity: 1Gi
   logVolumeCapacity: 1Gi
 ---
@@ -20,8 +20,8 @@ metadata:
 spec:
   replicas: 3
   image:
-    application: confluentinc/cp-kafka:7.6.0
-    init: confluentinc/confluent-init-container:2.8.0
+    application: confluentinc/cp-kafka:7.6.1
+    init: confluentinc/confluent-init-container:2.8.2
   dataVolumeCapacity: 1Gi
   metricReporter:
     enabled: false

confluent-for-kubernetes/k8s/confluent-platform.yaml

+11 -11

@@ -7,8 +7,8 @@ metadata:
 spec:
   replicas: 1
   image:
-    application: confluentinc/cp-zookeeper:7.6.0
-    init: confluentinc/confluent-init-container:2.8.0
+    application: confluentinc/cp-zookeeper:7.6.1
+    init: confluentinc/confluent-init-container:2.8.2
   dataVolumeCapacity: 1Gi
   logVolumeCapacity: 1Gi
 ---
@@ -20,8 +20,8 @@ metadata:
 spec:
   replicas: 3
   image:
-    application: confluentinc/cp-kafka:7.6.0
-    init: confluentinc/confluent-init-container:2.8.0
+    application: confluentinc/cp-kafka:7.6.1
+    init: confluentinc/confluent-init-container:2.8.2
   dataVolumeCapacity: 1Gi
   metricReporter:
     enabled: false
@@ -34,8 +34,8 @@ metadata:
 spec:
   replicas: 1
   image:
-    application: confluentinc/cp-kafka-connect-base:7.6.0
-    init: confluentinc/confluent-init-container:2.8.0
+    application: confluentinc/cp-kafka-connect-base:7.6.1
+    init: confluentinc/confluent-init-container:2.8.2
   dependencies:
     kafka:
       bootstrapEndpoint: kafka:9071
@@ -49,7 +49,7 @@ spec:
   replicas: 1
   image:
     application: confluentinc/ksqldb-server:0.28.2
-    init: confluentinc/confluent-init-container:2.8.0
+    init: confluentinc/confluent-init-container:2.8.2
   dataVolumeCapacity: 1Gi
 ---
 apiVersion: platform.confluent.io/v1beta1
@@ -60,8 +60,8 @@ metadata:
 spec:
   replicas: 1
   image:
-    application: confluentinc/cp-schema-registry:7.6.0
-    init: confluentinc/confluent-init-container:2.8.0
+    application: confluentinc/cp-schema-registry:7.6.1
+    init: confluentinc/confluent-init-container:2.8.2
 ---
 apiVersion: platform.confluent.io/v1beta1
 kind: KafkaRestProxy
@@ -73,6 +73,6 @@ spec:
   schemaRegistry:
     url: http://schemaregistry.confluent.svc.cluster.local:8081
   image:
-    application: confluentinc/cp-kafka-rest:7.6.0
-    init: confluentinc/confluent-init-container:2.8.0
+    application: confluentinc/cp-kafka-rest:7.6.1
+    init: confluentinc/confluent-init-container:2.8.2
   replicas: 1

kafka-smt-aspectj/Dockerfile

+1 -1

@@ -1,4 +1,4 @@
-FROM confluentinc/cp-kafka-connect-base:7.6.0
+FROM confluentinc/cp-kafka-connect-base:7.6.1

 COPY agent/aspectjweaver-1.9.19.jar /usr/share/java/aspectjweaver-1.9.19.jar

kafka-smt-custom/Dockerfile

+1 -1

@@ -1,3 +1,3 @@
-FROM confluentinc/cp-kafka-connect-base:7.6.0
+FROM confluentinc/cp-kafka-connect-base:7.6.1

 COPY target/kafka-smt-custom-1.2.1.jar /usr/share/java/kafkaconnect_smt-1.2.1.jar

kafka-unixcommand-connector/Dockerfile

+1 -1

@@ -1,4 +1,4 @@
-FROM confluentinc/cp-kafka-connect-base:7.6.0
+FROM confluentinc/cp-kafka-connect-base:7.6.1

 COPY target/kafka-unixcommand-connector-1.2.1-package.zip /tmp/kafka-unixcommand-connector-1.2.1-package.zip

pom.xml

+1 -1

@@ -36,7 +36,7 @@
     <maven.compiler.target>17</maven.compiler.target>
     <kafka.version>3.7.0</kafka.version>
     <avro.version>1.11.3</avro.version>
-    <confluent.version>7.6.0</confluent.version>
+    <confluent.version>7.6.1</confluent.version>
     <apicurio.registry.version>2.4.1.Final</apicurio.registry.version>
     <hortonworks.registry.version>0.3.0</hortonworks.registry.version>
     <slf4j.version>1.7.15</slf4j.version>
