2 files changed, +8 −8 lines changed

core/src/main/scala/spark

@@ -368,13 +368,13 @@ abstract class RDD[T: ClassManifest](
   * @param printPipeContext Before piping elements, this function is called as an opportunity
   *                         to pipe context data. Print line function (like out.println) will be
   *                         passed as printPipeContext's parameter.
-  * @param printPipeContext Use this function to customize how to pipe elements. This function
-  *                         will be called with each RDD element as the 1st parameter, and the
-  *                         print line function (like out.println()) as the 2nd parameter.
-  *                         An example of pipe the RDD data of groupBy() in a streaming way,
-  *                         instead of constructing a huge String to concat all the elements:
-  *                         def printRDDElement(record:(String, Seq[String]), f:String=>Unit) =
-  *                           for (e <- record._2){f(e)}
+  * @param printRDDElement Use this function to customize how to pipe elements. This function
+  *                        will be called with each RDD element as the 1st parameter, and the
+  *                        print line function (like out.println()) as the 2nd parameter.
+  *                        An example of pipe the RDD data of groupBy() in a streaming way,
+  *                        instead of constructing a huge String to concat all the elements:
+  *                        def printRDDElement(record:(String, Seq[String]), f:String=>Unit) =
+  *                          for (e <- record._2){f(e)}
   * @return the result RDD
   */
  def pipe(
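The streaming example from the corrected doc comment can be sketched as a self-contained snippet (no Spark required). Only the shape of `printRDDElement` comes from the diff; the object name and the sample record are hypothetical stand-ins for one element of a `groupBy()` result:

```scala
// Standalone sketch of the printRDDElement callback documented above.
// `record` mimics one element of an RDD produced by groupBy(): a key
// paired with all of its grouped values.
object PrintRDDElementDemo {
  // Same shape as the doc-comment example: stream each grouped value to
  // the print-line function instead of building one large concatenated
  // String for the whole group.
  def printRDDElement(record: (String, Seq[String]), f: String => Unit): Unit =
    for (e <- record._2) f(e)

  def main(args: Array[String]): Unit = {
    val record = ("fruit", Seq("apple", "banana", "cherry"))
    // In pipe(), f would be out.println writing to the child process;
    // here we just print each value on its own line.
    printRDDElement(record, println)
  }
}
```

Passed as the `printRDDElement` argument to `pipe()`, this writes each grouped value to the external command one line at a time, keeping memory use flat even for very large groups.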
@@ -62,7 +62,7 @@ class PipedRDD[T: ClassManifest](
   val out = new PrintWriter(proc.getOutputStream)

   // input the pipe context firstly
-  if ( printPipeContext != null ) {
+  if (printPipeContext != null ) {
     printPipeContext(out.println(_))
   }
   for (elem <- firstParent[T].iterator(split, context)) {