[SPARK-3890][Docs]remove redundant spark.executor.memory in doc

Introduced in f7e79bc42c, I'm not sure why we need two spark.executor.memory here.

Author: WangTaoTheTonic <barneystinson@aliyun.com>
Author: WangTao <barneystinson@aliyun.com>

Closes #2745 from WangTaoTheTonic/redundantconfig and squashes the following commits:

e7564dc [WangTao] too long line
fdbdb1f [WangTaoTheTonic] trivial workaround
d06b6e5 [WangTaoTheTonic] remove redundant spark.executor.memory in doc
Authored by WangTaoTheTonic on 2014-10-16 19:12:39 -07:00; committed by Andrew Or
parent 642b246beb
commit e7f4ea8a52


@@ -161,14 +161,6 @@ Apart from these, the following properties are also available, and may be useful
 #### Runtime Environment
 <table class="table">
 <tr><th>Property Name</th><th>Default</th><th>Meaning</th></tr>
-<tr>
-  <td><code>spark.executor.memory</code></td>
-  <td>512m</td>
-  <td>
-    Amount of memory to use per executor process, in the same format as JVM memory strings
-    (e.g. <code>512m</code>, <code>2g</code>).
-  </td>
-</tr>
 <tr>
   <td><code>spark.executor.extraJavaOptions</code></td>
   <td>(none)</td>
@@ -365,7 +357,7 @@ Apart from these, the following properties are also available, and may be useful
   <td><code>spark.ui.port</code></td>
   <td>4040</td>
   <td>
-    Port for your application's dashboard, which shows memory and workload data
+    Port for your application's dashboard, which shows memory and workload data.
   </td>
 </tr>
 <tr>
@@ -880,8 +872,8 @@ Apart from these, the following properties are also available, and may be useful
   <td><code>spark.scheduler.revive.interval</code></td>
   <td>1000</td>
   <td>
-    The interval length for the scheduler to revive the worker resource offers to run tasks.
-    (in milliseconds)
+    The interval length for the scheduler to revive the worker resource offers to run tasks
+    (in milliseconds).
   </td>
 </tr>
 </tr>
@@ -893,7 +885,7 @@ Apart from these, the following properties are also available, and may be useful
     to wait for before scheduling begins. Specified as a double between 0 and 1.
     Regardless of whether the minimum ratio of resources has been reached,
     the maximum amount of time it will wait before scheduling begins is controlled by config
-    <code>spark.scheduler.maxRegisteredResourcesWaitingTime</code>
+    <code>spark.scheduler.maxRegisteredResourcesWaitingTime</code>.
   </td>
 </tr>
 <tr>
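For context on the properties this patch touches: each of them is normally set once per application, typically in `conf/spark-defaults.conf`. The fragment below is a hedged sketch, not part of the patch; the values shown are illustrative, with the documented defaults noted in comments.

```properties
# conf/spark-defaults.conf -- illustrative values, not authoritative defaults.

# JVM memory string per executor (e.g. 512m, 2g); documented default is 512m.
# This is the single property whose duplicate doc entry the patch removes.
spark.executor.memory            2g

# Port for the application's dashboard; documented default is 4040.
spark.ui.port                    4040

# Interval (in milliseconds) for the scheduler to revive worker resource
# offers; documented default is 1000.
spark.scheduler.revive.interval  1000
```

Note that this file is parsed as Java-style properties, so comments must sit on their own lines rather than trailing a key-value pair.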