[SPARK-22133][DOCS] Documentation for Mesos Reject Offer Configurations

## What changes were proposed in this pull request?
Adds documentation for the Mesos reject-offer configurations (`spark.mesos.rejectOfferDuration`, `spark.mesos.rejectOfferDurationForUnmetConstraints`, `spark.mesos.rejectOfferDurationForReachedMaxCores`) to the Mesos deployment guide.

## Related PR
https://github.com/apache/spark/pull/19510 for `spark.mem.max`

Author: Li, YanKit | Wilson | RIT <yankit.li@rakuten.com>

Closes #19555 from windkit/spark_22133.

@@ -203,7 +203,7 @@ details and default values.
 Executors are brought up eagerly when the application starts, until
 `spark.cores.max` is reached. If you don't set `spark.cores.max`, the
-Spark application will reserve all resources offered to it by Mesos,
+Spark application will consume all resources offered to it by Mesos,
 so we of course urge you to set this variable in any sort of
 multi-tenant cluster, including one which runs multiple concurrent
 Spark applications.
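
An aside, not part of the diff: the paragraph above is the motivation for capping an application's share of the cluster. A minimal sketch of setting that cap through `SparkConf`, where the app name, master URL, and core count are hypothetical:

```scala
import org.apache.spark.SparkConf

// Hypothetical Mesos master URL and core cap; adjust for your cluster.
val conf = new SparkConf()
  .setAppName("capped-app")
  .setMaster("mesos://zk://zk1:2181,zk2:2181/mesos")
  // Without this cap, the application consumes every resource Mesos offers it.
  .set("spark.cores.max", "24")
```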
@@ -680,6 +680,30 @@ See the [configuration page](configuration.html) for information on Spark config
   driver disconnects, the master immediately tears down the framework.
   </td>
 </tr>
+<tr>
+  <td><code>spark.mesos.rejectOfferDuration</code></td>
+  <td><code>120s</code></td>
+  <td>
+  Time to consider unused resources refused; serves as a fallback of
+  <code>spark.mesos.rejectOfferDurationForUnmetConstraints</code> and
+  <code>spark.mesos.rejectOfferDurationForReachedMaxCores</code>.
+  </td>
+</tr>
+<tr>
+  <td><code>spark.mesos.rejectOfferDurationForUnmetConstraints</code></td>
+  <td><code>spark.mesos.rejectOfferDuration</code></td>
+  <td>
+  Time to consider unused resources refused when their placement constraints are not met.
+  </td>
+</tr>
+<tr>
+  <td><code>spark.mesos.rejectOfferDurationForReachedMaxCores</code></td>
+  <td><code>spark.mesos.rejectOfferDuration</code></td>
+  <td>
+  Time to consider unused resources refused when the maximum number of cores,
+  <code>spark.cores.max</code>, has been reached.
+  </td>
+</tr>
 </table>

 # Troubleshooting and Debugging
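
An illustration, not part of the diff: how the three settings documented in the table above fit together in `SparkConf`. The `120s` value is the documented default; the other two values are hypothetical overrides (left unset, they fall back to `spark.mesos.rejectOfferDuration`, per the table):

```scala
import org.apache.spark.SparkConf

val conf = new SparkConf()
  // Fallback refusal window applied to unused offers (documented default: 120s).
  .set("spark.mesos.rejectOfferDuration", "120s")
  // Overrides the fallback for offers whose placement constraints are unmet.
  .set("spark.mesos.rejectOfferDurationForUnmetConstraints", "240s")
  // Overrides the fallback once spark.cores.max has been reached.
  .set("spark.mesos.rejectOfferDurationForReachedMaxCores", "60s")
```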