2ffd1eafd2
YARN
- SparkPi was updated to not take in master as an argument; we should update the docs to reflect that.
- The default YARN build guide should be in maven, not sbt.
- This PR also adds a paragraph on steps to debug a YARN application.

Standalone
- Emphasize spark-submit more. Right now it's one small paragraph preceding the legacy way of launching through `org.apache.spark.deploy.Client`.
- The way we set configurations / environment variables according to the old docs is outdated. This needs to reflect changes introduced by the Spark configuration changes we made.

In general, this PR also adds a little more documentation on the new spark-shell, spark-submit, spark-defaults.conf etc. here and there.

Author: Andrew Or <andrewor14@gmail.com>

Closes #701 from andrewor14/yarn-docs and squashes the following commits:

e2c2312 [Andrew Or] Merge in changes in #752 (SPARK-1814)
25cfe7b [Andrew Or] Merge in the warning from SPARK-1753
a8c39c5 [Andrew Or] Minor changes
336bbd9 [Andrew Or] Tabs -> spaces
4d9d8f7 [Andrew Or] Merge branch 'master' of github.com:apache/spark into yarn-docs
041017a [Andrew Or] Abstract Spark submit documentation to cluster-overview.html
3cc0649 [Andrew Or] Detail how to set configurations + remove legacy instructions
5b7140a [Andrew Or] Merge branch 'master' of github.com:apache/spark into yarn-docs
85a51fc [Andrew Or] Update run-example, spark-shell, configuration etc.
c10e8c7 [Andrew Or] Merge branch 'master' of github.com:apache/spark into yarn-docs
381fe32 [Andrew Or] Update docs for standalone mode
757c184 [Andrew Or] Add a note about the requirements for the debugging trick
f8ca990 [Andrew Or] Merge branch 'master' of github.com:apache/spark into yarn-docs
924f04c [Andrew Or] Revert addition of --deploy-mode
d5fe17b [Andrew Or] Update the YARN docs
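For reference, a sketch of the submission style these docs now emphasize: the master URL goes to `spark-submit` rather than to the application, so SparkPi takes only its optional slices argument. The examples jar path below is illustrative and varies by build.

```bash
# Previously SparkPi took the master URL as its first argument; now the
# master is given to spark-submit instead (jar path is illustrative):
./bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master yarn-client \
  examples/target/scala-2.10/spark-examples-*.jar \
  10   # optional number of slices, the only remaining argument
```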
44 lines · 2.7 KiB · Bash · Executable file
#!/usr/bin/env bash

# This file is sourced when running various Spark programs.
# Copy it as spark-env.sh and edit that to configure Spark for your site.

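# For instance, a typical first step (assuming you run from the Spark root
# directory, where this template lives under conf/):
#
#   cp conf/spark-env.sh.template conf/spark-env.sh
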
# Options read when launching programs locally with
# ./bin/run-example or ./bin/spark-submit
# - HADOOP_CONF_DIR, to point Spark towards Hadoop configuration files
# - SPARK_LOCAL_IP, to set the IP address Spark binds to on this node
# - SPARK_PUBLIC_DNS, to set the public DNS name of the driver program
# - SPARK_CLASSPATH, default classpath entries to append
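#
# A commented-out sketch of the local-launch options above; the paths and
# address are illustrative placeholders, not defaults:
#
# export HADOOP_CONF_DIR=/etc/hadoop/conf
# export SPARK_LOCAL_IP=192.168.1.10
# export SPARK_CLASSPATH=/opt/extras/myLib.jar
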
# Options read by executors and drivers running inside the cluster
# - SPARK_LOCAL_IP, to set the IP address Spark binds to on this node
# - SPARK_PUBLIC_DNS, to set the public DNS name of the driver program
# - SPARK_CLASSPATH, default classpath entries to append
# - SPARK_LOCAL_DIRS, storage directories to use on this node for shuffle and RDD data
# - MESOS_NATIVE_LIBRARY, to point to your libmesos.so if you use Mesos
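#
# A commented-out sketch of the in-cluster options above; both values are
# illustrative placeholders:
#
# export SPARK_LOCAL_DIRS=/mnt/spark,/mnt2/spark
# export MESOS_NATIVE_LIBRARY=/usr/local/lib/libmesos.so
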
# Options read in YARN client mode
# - HADOOP_CONF_DIR, to point Spark towards Hadoop configuration files
# - SPARK_EXECUTOR_INSTANCES, Number of executors to start (Default: 2)
# - SPARK_EXECUTOR_CORES, Number of cores per executor (Default: 1)
# - SPARK_EXECUTOR_MEMORY, Memory per executor (e.g. 1000M, 2G) (Default: 1G)
# - SPARK_DRIVER_MEMORY, Memory for the driver (e.g. 1000M, 2G) (Default: 512 MB)
# - SPARK_YARN_APP_NAME, The name of your application (Default: Spark)
# - SPARK_YARN_QUEUE, The Hadoop queue to use for allocation requests (Default: 'default')
# - SPARK_YARN_DIST_FILES, Comma-separated list of files to be distributed with the job
# - SPARK_YARN_DIST_ARCHIVES, Comma-separated list of archives to be distributed with the job
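#
# A commented-out sketch of the YARN client mode options above; the sizes and
# queue name are illustrative, not recommendations:
#
# export SPARK_EXECUTOR_INSTANCES=4
# export SPARK_EXECUTOR_CORES=2
# export SPARK_EXECUTOR_MEMORY=2G
# export SPARK_YARN_QUEUE=default
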
# Options for the daemons used in the standalone deploy mode:
# - SPARK_MASTER_IP, to bind the master to a different IP address or hostname
# - SPARK_MASTER_PORT / SPARK_MASTER_WEBUI_PORT, to use non-default ports for the master
# - SPARK_MASTER_OPTS, to set config properties only for the master (e.g. "-Dx=y")
# - SPARK_WORKER_CORES, to set the number of cores to use on this machine
# - SPARK_WORKER_MEMORY, to set how much total memory workers have to give executors (e.g. 1000m, 2g)
# - SPARK_WORKER_PORT / SPARK_WORKER_WEBUI_PORT, to use non-default ports for the worker
# - SPARK_WORKER_INSTANCES, to set the number of worker processes per node
# - SPARK_WORKER_DIR, to set the working directory of worker processes
# - SPARK_WORKER_OPTS, to set config properties only for the worker (e.g. "-Dx=y")
# - SPARK_HISTORY_OPTS, to set config properties only for the history server (e.g. "-Dx=y")
# - SPARK_DAEMON_JAVA_OPTS, to set config properties for all daemons (e.g. "-Dx=y")
# - SPARK_PUBLIC_DNS, to set the public DNS name of the master or workers
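#
# A commented-out sketch of the standalone daemon options above; all values
# are illustrative placeholders:
#
# export SPARK_MASTER_IP=192.168.1.100
# export SPARK_WORKER_CORES=4
# export SPARK_WORKER_MEMORY=8g
# export SPARK_WORKER_DIR=/var/run/spark/work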