
Commit e3f83fe (parent: 2b4371e)

fix sql and streaming doc warnings

2 files changed: 3 additions, 1 deletion

python/pyspark/sql/dataframe.py (1 addition, 0 deletions)
@@ -943,6 +943,7 @@ def replace(self, to_replace, value, subset=None):
         Columns specified in subset that do not have matching data type are ignored.
         For example, if `value` is a string, and subset contains a non-string column,
         then the non-string column is simply ignored.
+
         >>> df4.replace(10, 20).show()
         +----+------+-----+
         | age|height| name|
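The one added line is a blank line between the docstring prose and the doctest; without it, Sphinx/docutils does not see the `>>>` block as separate from the preceding paragraph and emits a warning when building the docs. For context, a minimal sketch of the behavior the docstring describes, using the SparkSession API; the df4 fixture and its values here are illustrative, not taken from the commit:

# Minimal sketch of DataFrame.replace(); assumes a local PySpark install.
# The data below is illustrative only -- the real doctest uses a prebuilt df4.
from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local[1]").appName("replace-demo").getOrCreate()

df4 = spark.createDataFrame(
    [(10, 80, "Alice"), (5, None, "Bob")],
    ["age", "height", "name"],
)

# replace(10, 20) rewrites 10 -> 20 in columns whose type matches the value
# (numeric here); non-matching columns such as `name` are simply ignored.
df4.replace(10, 20).show()

spark.stop()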

python/pyspark/streaming/kafka.py (2 additions, 1 deletion)
@@ -132,11 +132,12 @@ def createRDD(sc, kafkaParams, offsetRanges, leaders={},
         .. note:: Experimental

         Create a RDD from Kafka using offset ranges for each topic and partition.
+
         :param sc: SparkContext object
         :param kafkaParams: Additional params for Kafka
         :param offsetRanges: list of offsetRange to specify topic:partition:[start, end) to consume
         :param leaders: Kafka brokers for each TopicAndPartition in offsetRanges. May be an empty
-        map, in which case leaders will be looked up on the driver.
+            map, in which case leaders will be looked up on the driver.
         :param keyDecoder: A function used to decode key (default is utf8_decoder)
         :param valueDecoder: A function used to decode value (default is utf8_decoder)
         :return: A RDD object
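Same pattern here: a blank line after the summary sentence, plus a deeper indent on the continuation of the :param leaders: description so docutils attaches it to the field body instead of warning about it (the exact indentation widths above are reconstructed, since the page extraction stripped leading whitespace). A hedged sketch of how createRDD is called on the Spark line that carries this module (pyspark.streaming.kafka was removed in Spark 3.0); the broker address, topic name, and offsets are illustrative, not from the commit:

# Hedged sketch of KafkaUtils.createRDD usage; assumes a broker at
# localhost:9092, a topic "test" with at least 100 messages in partition 0,
# and the spark-streaming-kafka assembly jar on the classpath.
from pyspark import SparkContext
from pyspark.streaming.kafka import KafkaUtils, OffsetRange

sc = SparkContext("local[2]", "kafka-rdd-demo")

# topic:partition:[start, end) -- consume offsets 0..99 of partition 0
offset_ranges = [OffsetRange("test", 0, 0, 100)]

# leaders is left at its default empty map, so broker leaders are
# looked up on the driver, as the fixed docstring describes.
rdd = KafkaUtils.createRDD(
    sc,
    {"metadata.broker.list": "localhost:9092"},
    offset_ranges,
)
print(rdd.count())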
