```diff
@@ -323,10 +323,11 @@ package object config {
 
   private[spark] val REDUCER_MAX_REQ_SIZE_SHUFFLE_TO_MEM =
     ConfigBuilder("spark.reducer.maxReqSizeShuffleToMem")
+      .internal()
       .doc("The blocks of a shuffle request will be fetched to disk when size of the request is " +
         "above this threshold. This is to avoid a giant request takes too much memory.")
       .bytesConf(ByteUnit.BYTE)
-      .createWithDefaultString("200m")
+      .createWithDefault(Long.MaxValue)
 
   private[spark] val TASK_METRICS_TRACK_UPDATED_BLOCK_STATUSES =
     ConfigBuilder("spark.taskMetrics.trackUpdatedBlockStatuses")
```

**Contributor** commented on this hunk:

> Might want to put a `.internal()` here.
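With the default raised to `Long.MaxValue`, the fetch-to-disk path is effectively off unless the config is set explicitly. A minimal sketch of opting back in to the old threshold (the `200m` value mirrors the previous default; the app name is illustrative):

```scala
import org.apache.spark.SparkConf

// Sketch: restore the pre-PR behavior by setting the (now internal)
// config explicitly. With the new Long.MaxValue default, no request
// ever crosses the threshold, so shuffle blocks are never fetched to disk.
val conf = new SparkConf()
  .setAppName("shuffle-to-mem-example") // illustrative name
  .set("spark.reducer.maxReqSizeShuffleToMem", "200m")
```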
**docs/configuration.md** (8 changes: 0 additions & 8 deletions)

```diff
@@ -528,14 +528,6 @@ Apart from these, the following properties are also available, and may be useful
   By allowing it to limit the number of fetch requests, this scenario can be mitigated.
   </td>
 </tr>
-<tr>
-  <td><code>spark.reducer.maxReqSizeShuffleToMem</code></td>
-  <td>200m</td>
-  <td>
-    The blocks of a shuffle request will be fetched to disk when size of the request is above
-    this threshold. This is to avoid a giant request takes too much memory.
-  </td>
-</tr>
 <tr>
   <td><code>spark.shuffle.compress</code></td>
   <td>true</td>
```

**Member Author** commented on the removed rows:

> don't document it as it's not a safe feature now.
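For reference, the removed table documented the value as a byte string. A small sketch of how such a value round-trips through Spark's public `SparkConf` API (the key is still accepted even though it is no longer documented):

```scala
import org.apache.spark.SparkConf

val conf = new SparkConf()
  .set("spark.reducer.maxReqSizeShuffleToMem", "200m")

// Spark byte strings use binary units, so "200m" parses to 200 * 1024 * 1024.
val bytes = conf.getSizeAsBytes("spark.reducer.maxReqSizeShuffleToMem")
println(bytes) // 209715200
```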