[SPARK-4183] Enable NettyBlockTransferService by default
Note that we're turning this on for at least the first part of the QA period as a trial. We want to enable this (and deprecate the NioBlockTransferService) as soon as possible in the hopes that NettyBlockTransferService will be more stable and easier to maintain. We will turn it off if we run into major issues.

Author: Aaron Davidson <aaron@databricks.com>

Closes #3049 from aarondav/enable-netty and squashes the following commits:

bb981cc [Aaron Davidson] [SPARK-4183] Enable NettyBlockTransferService by default
This commit is contained in:

parent ebd6480587
commit 1ae51f6dc7
@@ -371,6 +371,16 @@ Apart from these, the following properties are also available, and may be useful
     map-side aggregation and there are at most this many reduce partitions.
   </td>
 </tr>
+<tr>
+  <td><code>spark.shuffle.blockTransferService</code></td>
+  <td>netty</td>
+  <td>
+    Implementation to use for transferring shuffle and cached blocks between executors. There
+    are two implementations available: <code>netty</code> and <code>nio</code>. Netty-based
+    block transfer is intended to be simpler but equally efficient and is the default option
+    starting in 1.2.
+  </td>
+</tr>
 </table>

 #### Spark UI
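Since this change makes `netty` the default, a user who hits issues during the QA period can revert to the previous behavior by setting the property explicitly. A minimal sketch of how that might look in `spark-defaults.conf` (assuming the standard Spark configuration mechanism; the property name is taken from the diff above):

```
# spark-defaults.conf
# Fall back to the legacy NIO-based block transfer service
# instead of the new Netty-based default (Spark 1.2+).
spark.shuffle.blockTransferService  nio
```

The same property could equally be passed per-application, e.g. via `--conf spark.shuffle.blockTransferService=nio` on `spark-submit`, or through `SparkConf.set` in application code.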