Commit bb981cc (parent ee29ef3)

[SPARK-4183] Enable NettyBlockTransferService by default
2 files changed: 11 additions, 1 deletion

core/src/main/scala/org/apache/spark/SparkEnv.scala (1 addition, 1 deletion)

@@ -274,7 +274,7 @@ object SparkEnv extends Logging {
     val shuffleMemoryManager = new ShuffleMemoryManager(conf)

     val blockTransferService =
-      conf.get("spark.shuffle.blockTransferService", "nio").toLowerCase match {
+      conf.get("spark.shuffle.blockTransferService", "netty").toLowerCase match {
        case "netty" =>
          new NettyBlockTransferService(conf)
        case "nio" =>
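The changed line follows Spark's `conf.get(key, default)` fallback pattern: the hardcoded default only applies when the user has not set the property. A minimal dependency-free sketch of that selection logic (a plain `Map` stands in for `SparkConf`, and the returned strings are illustrative stand-ins for the real transfer-service classes) might look like:

```scala
// Sketch of the default-fallback selection used in SparkEnv above.
// Assumption: Map models SparkConf.get(key, default) via getOrElse.
def resolveTransferService(settings: Map[String, String]): String =
  settings.getOrElse("spark.shuffle.blockTransferService", "netty").toLowerCase match {
    case "netty" => "NettyBlockTransferService"
    case "nio"   => "NioBlockTransferService"
    case other   => sys.error(s"Unknown block transfer service: $other")
  }

// When the property is unset, the commit's new "netty" default wins;
// the toLowerCase call makes the match case-insensitive.
println(resolveTransferService(Map.empty))
println(resolveTransferService(Map("spark.shuffle.blockTransferService" -> "NIO")))
```

Note that because the match is on the lowercased value, `"NIO"` and `"nio"` select the same implementation, mirroring the `.toLowerCase` call in the real code.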

docs/configuration.md (10 additions, 0 deletions)

@@ -359,6 +359,16 @@ Apart from these, the following properties are also available, and may be useful
         map-side aggregation and there are at most this many reduce partitions.
       </td>
     </tr>
+    <tr>
+      <td><code>spark.shuffle.blockTransferService</code></td>
+      <td>netty</td>
+      <td>
+        Implementation to use for transferring shuffle and cached blocks between executors. There
+        are two implementations available: <code>netty</code> and <code>nio</code>. Netty-based
+        block transfer is intended to be simpler but equally efficient and is the default option
+        starting in 1.2.
+      </td>
+    </tr>
     </table>

     #### Spark UI
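Since the documented property remains configurable, an application that hits problems with the new default can switch back to the previous `nio` implementation. A hedged example of setting the property at submit time (`--conf` is the standard spark-submit flag; the jar name and main class below are hypothetical placeholders):

```shell
# Config sketch: revert to the pre-1.2 nio transfer service for one app.
# com.example.MyApp and my-app.jar are placeholders, not from the commit.
spark-submit \
  --conf spark.shuffle.blockTransferService=nio \
  --class com.example.MyApp \
  my-app.jar
```

The same effect can be had programmatically by setting the key on the application's SparkConf before the SparkContext is created.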
