[MINOR] Typo fixes #17434
@@ -82,13 +82,13 @@ private[spark] class SortShuffleManager(conf: SparkConf) extends ShuffleManager

   override val shuffleBlockResolver = new IndexShuffleBlockResolver(conf)

   /**
-   * Register a shuffle with the manager and obtain a handle for it to pass to tasks.
+   * Obtains a [[ShuffleHandle]] to pass to tasks.
Member: Why remove some of the docs in instances like this? It's not obvious it was superfluous.

Contributor (Author): It's a copy from the main method this one overrides. Since the method does not do what the scaladoc said, I thought I'd make it current. It might have "Register(ed) a shuffle with the manager" in the past, but not today. It was misleading.
    */
   override def registerShuffle[K, V, C](
       shuffleId: Int,
       numMaps: Int,
       dependency: ShuffleDependency[K, V, C]): ShuffleHandle = {
-    if (SortShuffleWriter.shouldBypassMergeSort(SparkEnv.get.conf, dependency)) {
+    if (SortShuffleWriter.shouldBypassMergeSort(conf, dependency)) {
       // If there are fewer than spark.shuffle.sort.bypassMergeThreshold partitions and we don't
       // need map-side aggregation, then write numPartitions files directly and just concatenate
       // them at the end. This avoids doing serialization and deserialization twice to merge
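The hunk's inline comment describes the whole bypass decision, so here is a small, hedged sketch of that check in Scala. It is not the actual SortShuffleWriter.shouldBypassMergeSort source; the helper name and the default threshold of 200 are assumptions for illustration.

import org.apache.spark.{ShuffleDependency, SparkConf}

object BypassDecisionSketch {
  // Bypass the sort-based merge only when there is no map-side aggregation and
  // the partition count is at or below spark.shuffle.sort.bypassMergeThreshold.
  def shouldBypassMergeSortSketch(conf: SparkConf, dep: ShuffleDependency[_, _, _]): Boolean = {
    if (dep.mapSideCombine) {
      false // map-side aggregation requires the sort path
    } else {
      val threshold = conf.getInt("spark.shuffle.sort.bypassMergeThreshold", 200) // assumed default
      dep.partitioner.numPartitions <= threshold
    }
  }
}

The change in the hunk only swaps SparkEnv.get.conf for the conf constructor parameter that SortShuffleManager already holds; the decision itself is unchanged.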
@@ -492,7 +492,7 @@ class AstBuilder extends SqlBaseBaseVisitor[AnyRef] with Logging {
   }

   /**
-   * Add an [[Aggregate]] to a logical plan.
+   * Add an [[Aggregate]] or [[GroupingSets]] to a logical plan.
Member: Probably OK, but it's harder to know whether this is correct. I'm also aware that many

Contributor (Author): Didn't check the javadoc and the change follows

Contributor (Author): The package
    */
   private def withAggregation(
       ctx: AggregationContext,
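As a hedged illustration of why the scaladoc now mentions [[GroupingSets]]: a GROUP BY ... GROUPING SETS query is the kind of input that makes withAggregation build a GroupingSets node instead of a plain Aggregate. The view and column names below are made up for the example.

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder.master("local").appName("grouping-sets-demo").getOrCreate()

// Hypothetical data: two grouping columns and a value column.
spark.range(12).selectExpr("id % 2 AS a", "id % 3 AS b", "id AS v").createOrReplaceTempView("t")

// Aggregates over the grouping sets (a) and (b) separately.
spark.sql("SELECT a, b, sum(v) FROM t GROUP BY a, b GROUPING SETS ((a), (b))").show()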
@@ -519,7 +519,7 @@ class AstBuilder extends SqlBaseBaseVisitor[AnyRef] with Logging {
   }

   /**
-   * Add a Hint to a logical plan.
+   * Add a [[Hint]] to a logical plan.
    */
   private def withHints(
       ctx: HintContext,
@@ -545,7 +545,7 @@ class AstBuilder extends SqlBaseBaseVisitor[AnyRef] with Logging {
   }

   /**
-   * Create a single relation referenced in a FROM claused. This method is used when a part of the
+   * Create a single relation referenced in a FROM clause. This method is used when a part of the
    * join condition is nested, for example:
    * {{{
    *   select * from t1 join (t2 cross join t3) on col1 = col2
@@ -60,7 +60,7 @@ import org.apache.spark.util.Utils
  * The builder can also be used to create a new session:
  *
  * {{{
- *   SparkSession.builder()
+ *   SparkSession.builder
  *     .master("local")
  *     .appName("Word Count")
  *     .config("spark.some.config.option", "some-value")
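A side note, not taken from the PR discussion: both spellings compile in Scala 2, because builder is declared as an empty-paren method on the SparkSession companion object, so dropping the parentheses in the scaladoc is purely a style change. A quick sketch:

import org.apache.spark.sql.SparkSession

// Both expressions resolve to the same Builder factory method.
val withParens = SparkSession.builder()
val withoutParens = SparkSession.builder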
@@ -323,7 +323,7 @@ class SparkSession private(
  *   // |-- age: integer (nullable = true)
  *
  *   dataFrame.createOrReplaceTempView("people")
- *   sparkSession.sql("select name from people").collect.foreach(println)
+ *   sparkSession.sql("select name from people").show
Member: I imagine this is an OK modification, but it's not really a typo fix. I'd avoid changes that aren't fixing problems.

Contributor (Author): I agree. Just for the record,
  * }}}
  *
  * @since 2.0.0
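Since the exchange above is about collect.foreach(println) versus show, here is a hedged, self-contained comparison; the people view is mocked from a literal row rather than the JSON file used in the real scaladoc example.

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder.master("local").appName("show-vs-collect").getOrCreate()
spark.sql("SELECT 'Alice' AS name").createOrReplaceTempView("people")

// Old doc snippet: collects rows to the driver and prints raw Row objects, e.g. [Alice]
spark.sql("select name from people").collect.foreach(println)

// New doc snippet: prints a formatted table, truncated to 20 rows by default
spark.sql("select name from people").show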
Why remove this?

It's not "for testing only". I'd even say that it's more often used in non-test code than test code. That made the comment no longer correct. See ShuffleExternalSorter and UnsafeShuffleWriter, which both are ShuffleWriter instances.