[SPARK-595][DOCS] Add local-cluster mode option in Documentation
### What changes were proposed in this pull request?
Add local-cluster mode option to submitting-applications.md
### Why are the changes needed?
Helps users find and use this option for unit tests.
### Does this PR introduce _any_ user-facing change?
Yes, docs changed.
### How was this patch tested?
`SKIP_API=1 bundle exec jekyll build`
<img width="460" alt="docchange" src="https://user-images.githubusercontent.com/87687356/127125380-6beb4601-7cf4-4876-b2c6-459454ce2a02.png">
Closes #33537 from yutoacts/SPARK-595.
Lead-authored-by: Yuto Akutsu <yuto.akutsu@jp.nttdata.com>
Co-authored-by: Yuto Akutsu <yuto.akutsu@nttdata.com>
Co-authored-by: Yuto Akutsu <87687356+yutoacts@users.noreply.github.com>
Signed-off-by: Thomas Graves <tgraves@apache.org>
(cherry picked from commit 41b011e416)
Signed-off-by: Thomas Graves <tgraves@apache.org>
This commit is contained in: parent 586eb5d4c6, commit a5d0eafa32
```diff
@@ -162,9 +162,10 @@ The master URL passed to Spark can be in one of the following formats:
 <tr><th>Master URL</th><th>Meaning</th></tr>
 <tr><td> <code>local</code> </td><td> Run Spark locally with one worker thread (i.e. no parallelism at all). </td></tr>
 <tr><td> <code>local[K]</code> </td><td> Run Spark locally with K worker threads (ideally, set this to the number of cores on your machine). </td></tr>
-<tr><td> <code>local[K,F]</code> </td><td> Run Spark locally with K worker threads and F maxFailures (see <a href="configuration.html#scheduling">spark.task.maxFailures</a> for an explanation of this variable) </td></tr>
+<tr><td> <code>local[K,F]</code> </td><td> Run Spark locally with K worker threads and F maxFailures (see <a href="configuration.html#scheduling">spark.task.maxFailures</a> for an explanation of this variable). </td></tr>
 <tr><td> <code>local[*]</code> </td><td> Run Spark locally with as many worker threads as logical cores on your machine.</td></tr>
 <tr><td> <code>local[*,F]</code> </td><td> Run Spark locally with as many worker threads as logical cores on your machine and F maxFailures.</td></tr>
+<tr><td> <code>local-cluster[N,C,M]</code> </td><td> Local-cluster mode is only for unit tests. It emulates a distributed cluster in a single JVM with N number of workers, C cores per worker and M MiB of memory per worker.</td></tr>
 <tr><td> <code>spark://HOST:PORT</code> </td><td> Connect to the given <a href="spark-standalone.html">Spark standalone
 cluster</a> master. The port must be whichever one your master is configured to use, which is 7077 by default.
 </td></tr>
```
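The new `local-cluster[N,C,M]` master URL documented above can be exercised from the command line. A minimal sketch, assuming `SPARK_HOME` points at an existing Spark distribution and using the bundled `SparkPi` example (both assumptions, not part of this PR):

```shell
# Sketch only: local-cluster[2,1,1024] emulates a cluster inside a single
# JVM with 2 workers, 1 core per worker, and 1024 MiB of memory per worker.
# Intended for unit tests, not production use.
# Assumes SPARK_HOME points at a built Spark distribution.
"$SPARK_HOME"/bin/spark-submit \
  --master "local-cluster[2,1,1024]" \
  --class org.apache.spark.examples.SparkPi \
  "$SPARK_HOME"/examples/jars/spark-examples_*.jar 10
```

Quoting the master URL keeps the shell from treating the square brackets as a glob pattern.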
|