
Commit 2300eb5

cocoatomo authored and JoshRosen committed
[SPARK-3773][PySpark][Doc] Sphinx build warning
When building the Sphinx documents for PySpark, we get 12 warnings. Most of them are caused by docstrings written in broken reST format. To reproduce the issue, run the following commands on commit 6e27cb6:

```bash
$ cd ./python/docs
$ make clean html
...
/Users/<user>/MyRepos/Scala/spark/python/pyspark/__init__.py:docstring of pyspark.SparkContext.sequenceFile:4: ERROR: Unexpected indentation.
/Users/<user>/MyRepos/Scala/spark/python/pyspark/__init__.py:docstring of pyspark.RDD.saveAsSequenceFile:4: ERROR: Unexpected indentation.
/Users/<user>/MyRepos/Scala/spark/python/pyspark/mllib/classification.py:docstring of pyspark.mllib.classification.LogisticRegressionWithSGD.train:14: ERROR: Unexpected indentation.
/Users/<user>/MyRepos/Scala/spark/python/pyspark/mllib/classification.py:docstring of pyspark.mllib.classification.LogisticRegressionWithSGD.train:16: WARNING: Definition list ends without a blank line; unexpected unindent.
/Users/<user>/MyRepos/Scala/spark/python/pyspark/mllib/classification.py:docstring of pyspark.mllib.classification.LogisticRegressionWithSGD.train:17: WARNING: Block quote ends without a blank line; unexpected unindent.
/Users/<user>/MyRepos/Scala/spark/python/pyspark/mllib/classification.py:docstring of pyspark.mllib.classification.SVMWithSGD.train:14: ERROR: Unexpected indentation.
/Users/<user>/MyRepos/Scala/spark/python/pyspark/mllib/classification.py:docstring of pyspark.mllib.classification.SVMWithSGD.train:16: WARNING: Definition list ends without a blank line; unexpected unindent.
/Users/<user>/MyRepos/Scala/spark/python/pyspark/mllib/classification.py:docstring of pyspark.mllib.classification.SVMWithSGD.train:17: WARNING: Block quote ends without a blank line; unexpected unindent.
/Users/<user>/MyRepos/Scala/spark/python/docs/pyspark.mllib.rst:50: WARNING: missing attribute mentioned in :members: or __all__: module pyspark.mllib.regression, attribute RidgeRegressionModelLinearRegressionWithSGD
/Users/<user>/MyRepos/Scala/spark/python/pyspark/mllib/tree.py:docstring of pyspark.mllib.tree.DecisionTreeModel.predict:3: ERROR: Unexpected indentation.
...
checking consistency...
/Users/<user>/MyRepos/Scala/spark/python/docs/modules.rst:: WARNING: document isn't included in any toctree
...
copying static files... WARNING: html_static_path entry u'/Users/<user>/MyRepos/Scala/spark/python/docs/_static' does not exist
...
build succeeded, 12 warnings.
```

Author: cocoatomo <[email protected]>

Closes apache#2653 from cocoatomo/issues/3773-sphinx-build-warnings and squashes the following commits:

6f65661 [cocoatomo] [SPARK-3773][PySpark][Doc] Sphinx build warning
1 parent 4f01265 commit 2300eb5

File tree

6 files changed: +28 -23 lines changed


python/docs/modules.rst

Lines changed: 0 additions & 7 deletions
This file was deleted; it was not referenced from any toctree, which caused the "document isn't included in any toctree" warning quoted above.

python/pyspark/context.py

Lines changed: 1 addition & 0 deletions
```diff
@@ -410,6 +410,7 @@ def sequenceFile(self, path, keyClass=None, valueClass=None, keyConverter=None,
         Read a Hadoop SequenceFile with arbitrary key and value Writable class from HDFS,
         a local file system (available on all nodes), or any Hadoop-supported file system URI.
         The mechanism is as follows:
+
         1. A Java RDD is created from the SequenceFile or other InputFormat, and the key
            and value Writable classes
         2. Serialization is attempted via Pyrolite pickling
```
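Both this fix and the matching one in `saveAsSequenceFile` below come down to the same reST rule: an enumerated list must be separated from the preceding paragraph by a blank line, otherwise docutils reports the "Unexpected indentation." errors quoted in the commit message. A minimal sketch of the failure mode (illustrative docstrings, not the Spark source):

```python
def broken():
    """The mechanism is as follows:
    1. A list item whose continuation is
       indented deeper than the item text.
    2. Second step.
    """
    # Without a blank line after the paragraph, docutils keeps reading
    # "1. ..." as paragraph text and then hits the deeper-indented
    # continuation line: "ERROR: Unexpected indentation."


def fixed():
    """The mechanism is as follows:

    1. A list item whose continuation is
       indented deeper than the item text.
    2. Second step.
    """
    # The blank line lets docutils open an enumerated list, so the
    # indented continuation parses as part of item 1.
```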

python/pyspark/mllib/classification.py

Lines changed: 16 additions & 10 deletions
```diff
@@ -89,11 +89,14 @@ def train(cls, data, iterations=100, step=1.0, miniBatchFraction=1.0,
         @param regParam: The regularizer parameter (default: 1.0).
         @param regType: The type of regularizer used for training
                         our model.
-                        Allowed values: "l1" for using L1Updater,
-                                        "l2" for using
-                                             SquaredL2Updater,
-                                        "none" for no regularizer.
-                        (default: "none")
+
+                        :Allowed values:
+                            - "l1" for using L1Updater
+                            - "l2" for using SquaredL2Updater
+                            - "none" for no regularizer
+
+                        (default: "none")
+
         @param intercept: Boolean parameter which indicates the use
                           or not of the augmented representation for
                           training data (i.e. whether bias features
@@ -158,11 +161,14 @@ def train(cls, data, iterations=100, step=1.0, regParam=1.0,
         @param initialWeights: The initial weights (default: None).
         @param regType: The type of regularizer used for training
                         our model.
-                        Allowed values: "l1" for using L1Updater,
-                                        "l2" for using
-                                             SquaredL2Updater,
-                                        "none" for no regularizer.
-                        (default: "none")
+
+                        :Allowed values:
+                            - "l1" for using L1Updater
+                            - "l2" for using SquaredL2Updater,
+                            - "none" for no regularizer.
+
+                        (default: "none")
+
         @param intercept: Boolean parameter which indicates the use
                           or not of the augmented representation for
                           training data (i.e. whether bias features
```
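For context on the warnings this hunk clears: in reST, the deeper-indented continuation lines of the old free-form text open a definition list or block quote, and the later unindent without a blank line produces the "Definition list ends without a blank line" and "Block quote ends without a blank line" warnings quoted above. A minimal sketch contrasting the two layouts (hypothetical strings, not the Spark source):

```python
# Old layout: "SquaredL2Updater," sits deeper than the line above it,
# so docutils opens a block quote that is then unindented without a
# blank line -- the source of the "ends without a blank line" warnings.
broken = (
    'Allowed values: "l1" for using L1Updater,\n'
    '                "l2" for using\n'
    '                     SquaredL2Updater,\n'
    '                "none" for no regularizer.\n'
    '(default: "none")\n'
)

# New layout: an explicit field list plus a bullet list, with blank
# lines separating the blocks, parses cleanly.
fixed = (
    ':Allowed values:\n'
    '    - "l1" for using L1Updater\n'
    '    - "l2" for using SquaredL2Updater\n'
    '    - "none" for no regularizer\n'
    '\n'
    '(default: "none")\n'
)
```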

python/pyspark/mllib/regression.py

Lines changed: 9 additions & 6 deletions
```diff
@@ -22,7 +22,7 @@
 from pyspark.mllib.linalg import SparseVector, _convert_to_vector
 from pyspark.serializers import PickleSerializer, AutoBatchedSerializer
 
-__all__ = ['LabeledPoint', 'LinearModel', 'LinearRegressionModel', 'RidgeRegressionModel'
+__all__ = ['LabeledPoint', 'LinearModel', 'LinearRegressionModel', 'RidgeRegressionModel',
            'LinearRegressionWithSGD', 'LassoWithSGD', 'RidgeRegressionWithSGD']
 
 
@@ -155,11 +155,14 @@ def train(cls, data, iterations=100, step=1.0, miniBatchFraction=1.0,
         @param regParam: The regularizer parameter (default: 1.0).
         @param regType: The type of regularizer used for training
                         our model.
-                        Allowed values: "l1" for using L1Updater,
-                                        "l2" for using
-                                             SquaredL2Updater,
-                                        "none" for no regularizer.
-                        (default: "none")
+
+                        :Allowed values:
+                            - "l1" for using L1Updater,
+                            - "l2" for using SquaredL2Updater,
+                            - "none" for no regularizer.
+
+                        (default: "none")
+
         @param intercept: Boolean parameter which indicates the use
                           or not of the augmented representation for
                           training data (i.e. whether bias features
```
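The one-character `__all__` fix in the first hunk addresses the "missing attribute ... RidgeRegressionModelLinearRegressionWithSGD" warning quoted above: without the trailing comma, Python's implicit string-literal concatenation fuses the two adjacent names into one nonexistent attribute. A quick illustration:

```python
# Adjacent string literals are concatenated at compile time, so a
# missing comma silently produces one fused (and nonexistent) name.
broken = ['RidgeRegressionModel'
          'LinearRegressionWithSGD']
print(broken)   # ['RidgeRegressionModelLinearRegressionWithSGD']

fixed = ['RidgeRegressionModel',
         'LinearRegressionWithSGD']
print(fixed)    # ['RidgeRegressionModel', 'LinearRegressionWithSGD']
```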

python/pyspark/mllib/tree.py

Lines changed: 1 addition & 0 deletions
```diff
@@ -48,6 +48,7 @@ def __del__(self):
     def predict(self, x):
         """
         Predict the label of one or more examples.
+
         :param x: Data point (feature vector),
             or an RDD of data points (feature vectors).
         """
```

python/pyspark/rdd.py

Lines changed: 1 addition & 0 deletions
```diff
@@ -1208,6 +1208,7 @@ def saveAsSequenceFile(self, path, compressionCodecClass=None):
         Output a Python RDD of key-value pairs (of form C{RDD[(K, V)]}) to any Hadoop file
         system, using the L{org.apache.hadoop.io.Writable} types that we convert from the
         RDD's key and value types. The mechanism is as follows:
+
         1. Pyrolite is used to convert pickled Python RDD into RDD of Java objects.
         2. Keys and values of this Java RDD are converted to Writables and written out.
 
```