6 changes: 3 additions & 3 deletions docs/structured-streaming-programming-guide.md
@@ -511,7 +511,7 @@ returned by `SparkSession.readStream()`. In [R](api/R/read.stream.html), with th
There are a few built-in sources.

- **File source** - Reads files written in a directory as a stream of data. Files are processed in order of file modification time; if `latestFirst` is set, the order is reversed. Supported file formats are text, CSV, JSON, ORC, and Parquet. See the docs of the DataStreamReader interface for a more up-to-date list, and the supported options for each file format. Note that the files must be atomically placed in the given directory, which, in most file systems, can be achieved by file move operations.
- - **Kafka source** - Reads data from Kafka. It's compatible with Kafka broker versions 0.10.0 or higher. See the [Kafka Integration Guide](structured-streaming-kafka-0-10-integration.html) for more details.
+ - **Kafka source** - Reads data from Kafka. It's compatible with Kafka broker versions 0.10.0 or higher. See the [Kafka Integration Guide](structured-streaming-kafka-integration.html) for more details.

- **Socket source (for testing)** - Reads UTF8 text data from a socket connection. The listening server socket is at the driver. Note that this should be used only for testing as this does not provide end-to-end fault-tolerance guarantees.

@@ -582,7 +582,7 @@ Here are the details of all the sources in Spark.
<tr>
<td><b>Kafka Source</b></td>
<td>
-    See the <a href="structured-streaming-kafka-0-10-integration.html">Kafka Integration Guide</a>.
+    See the <a href="structured-streaming-kafka-integration.html">Kafka Integration Guide</a>.
</td>
<td>Yes</td>
<td></td>
@@ -1835,7 +1835,7 @@ Here are the details of all the sinks in Spark.
<tr>
<td><b>Kafka Sink</b></td>
<td>Append, Update, Complete</td>
-    <td>See the <a href="structured-streaming-kafka-0-10-integration.html">Kafka Integration Guide</a></td>
+    <td>See the <a href="structured-streaming-kafka-integration.html">Kafka Integration Guide</a></td>
<td>Yes (at-least-once)</td>
<td>More details in the <a href="structured-streaming-kafka-integration.html">Kafka Integration Guide</a></td>
</tr>