This repository was archived by the owner on Nov 15, 2024. It is now read-only.

Commit 6121e91

brandonJY authored and srowen committed
[DOCS] change to dataset for java code in structured-streaming-kafka-integration document
## What changes were proposed in this pull request?

In the latest structured-streaming-kafka-integration document, the Java code examples for Kafka integration use `DataFrame<Row>`; they should use `Dataset<Row>` instead.

## How was this patch tested?

A manual test was performed with the updated example Java code on Spark 2.2.1 with Kafka 1.0.

Author: brandonJY <[email protected]>

Closes apache#20312 from brandonJY/patch-2.
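For reference, here is a minimal, self-contained sketch of the corrected streaming usage; it is not part of this commit. It assumes the `spark-sql-kafka-0-10` connector is on the classpath, and the broker addresses and topic name are placeholders. In the Java API the result of `load()` is typed as `Dataset<Row>`; `DataFrame` exists only as a Scala type alias.

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class KafkaStreamExample {
  public static void main(String[] args) throws Exception {
    SparkSession spark = SparkSession.builder()
        .appName("KafkaStreamExample")
        .getOrCreate();

    // The Kafka source returns a Dataset<Row>, not a DataFrame, in Java.
    Dataset<Row> df = spark
        .readStream()
        .format("kafka")
        .option("kafka.bootstrap.servers", "host1:port1,host2:port2")  // placeholder brokers
        .option("subscribe", "topic1")                                 // placeholder topic
        .load();

    // Cast the binary key/value columns to strings, as in the docs example.
    Dataset<Row> kv = df.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)");

    // Write to the console sink purely for illustration.
    kv.writeStream()
        .format("console")
        .outputMode("append")
        .start()
        .awaitTermination();
  }
}
```

The same `Dataset<Row>` type applies to every snippet touched by this diff, whether it subscribes to a single topic, multiple topics, or a pattern.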
1 parent 4cd2ecc commit 6121e91

File tree

1 file changed: +6, -6 lines changed

docs/structured-streaming-kafka-integration.md

Lines changed: 6 additions & 6 deletions
@@ -61,7 +61,7 @@ df.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
 {% highlight java %}
 
 // Subscribe to 1 topic
-DataFrame<Row> df = spark
+Dataset<Row> df = spark
   .readStream()
   .format("kafka")
   .option("kafka.bootstrap.servers", "host1:port1,host2:port2")
@@ -70,7 +70,7 @@ DataFrame<Row> df = spark
 df.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
 
 // Subscribe to multiple topics
-DataFrame<Row> df = spark
+Dataset<Row> df = spark
   .readStream()
   .format("kafka")
   .option("kafka.bootstrap.servers", "host1:port1,host2:port2")
@@ -79,7 +79,7 @@ DataFrame<Row> df = spark
 df.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
 
 // Subscribe to a pattern
-DataFrame<Row> df = spark
+Dataset<Row> df = spark
   .readStream()
   .format("kafka")
   .option("kafka.bootstrap.servers", "host1:port1,host2:port2")
@@ -171,7 +171,7 @@ df.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
 {% highlight java %}
 
 // Subscribe to 1 topic defaults to the earliest and latest offsets
-DataFrame<Row> df = spark
+Dataset<Row> df = spark
   .read()
   .format("kafka")
   .option("kafka.bootstrap.servers", "host1:port1,host2:port2")
@@ -180,7 +180,7 @@ DataFrame<Row> df = spark
 df.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)");
 
 // Subscribe to multiple topics, specifying explicit Kafka offsets
-DataFrame<Row> df = spark
+Dataset<Row> df = spark
   .read()
   .format("kafka")
   .option("kafka.bootstrap.servers", "host1:port1,host2:port2")
@@ -191,7 +191,7 @@ DataFrame<Row> df = spark
 df.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)");
 
 // Subscribe to a pattern, at the earliest and latest offsets
-DataFrame<Row> df = spark
+Dataset<Row> df = spark
   .read()
   .format("kafka")
   .option("kafka.bootstrap.servers", "host1:port1,host2:port2")

0 commit comments
