---
layout: global
title: Submitting Applications
---
The `spark-submit` script in Spark's `bin` directory is used to launch applications on a cluster.
It can use all of Spark's supported cluster managers through a uniform interface so you don't have
to configure your application specially for each one.
# Bundling Your Application's Dependencies
If your code depends on other projects, you will need to package them alongside
your application in order to distribute the code to a Spark cluster. To do this,
create an assembly jar (or "uber" jar) containing your code and its dependencies. Both
sbt and Maven have assembly plugins. When creating assembly jars, list Spark and Hadoop
as `provided` dependencies; these need not be bundled since they are provided by
the cluster manager at runtime. Once you have an assembled jar you can call the `bin/spark-submit`
script as shown here while passing your jar.
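
As a rough sketch, assuming the sbt-assembly plugin is already configured in your build, producing
and submitting an uber jar might look like the following; the jar path, output name, and main class
below are hypothetical examples, not part of Spark itself:

{% highlight bash %}
# Build an assembly jar containing your code and its non-"provided" dependencies
# (assumes the sbt-assembly plugin is set up in your project).
sbt assembly

# Submit the resulting jar; the path and main class here are examples.
./bin/spark-submit \
  --class com.example.MyApp \
  --master spark://207.184.161.138:7077 \
  target/scala-2.10/my-app-assembly-1.0.jar
{% endhighlight %}
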
For Python, you can use the `--py-files` argument of `spark-submit` to add `.py`, `.zip` or `.egg`
files to be distributed with your application. If you depend on multiple Python files we recommend
packaging them into a `.zip` or `.egg`.
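
For instance, a submission that ships helper modules along with the main script might look like
this; the archive and script names are hypothetical:

{% highlight bash %}
# Package helper modules so they can be shipped with the application (names are examples).
zip -r deps.zip mylib/

# Submit the main script, adding deps.zip to the Python search path on the cluster.
./bin/spark-submit \
  --master spark://207.184.161.138:7077 \
  --py-files deps.zip \
  my_script.py
{% endhighlight %}
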
# Launching Applications with spark-submit
Once a user application is bundled, it can be launched using the `bin/spark-submit` script.
This script takes care of setting up the classpath with Spark and its
dependencies, and can support different cluster managers and deploy modes that Spark supports:
{% highlight bash %}
./bin/spark-submit \
  --class <main-class> \
  --master <master-url> \
  --deploy-mode <deploy-mode> \
  ... # other options
  <application-jar> \
  [application-arguments]
{% endhighlight %}
Some of the commonly used options are:

- `--class`: The entry point for your application (e.g. `org.apache.spark.examples.SparkPi`)
- `--master`: The master URL for the cluster (e.g. `spark://23.195.26.187:7077`)
- `--deploy-mode`: Whether to deploy your driver program within the cluster or run it locally as an external client (either `cluster` or `client`)
- `application-jar`: Path to a bundled jar including your application and all dependencies. The URL must be globally visible inside of your cluster, for instance, an `hdfs://` path or a `file://` path that is present on all nodes.
- `application-arguments`: Arguments passed to the main method of your main class, if any
For Python applications, simply pass a `.py` file in the place of `<application-jar>` instead of a JAR,
and add Python `.zip`, `.egg` or `.py` files to the search path with `--py-files`.
To enumerate all options available to `spark-submit`, run it with `--help`. Here are a few
examples of common options:
{% highlight bash %}
# Run application locally on 8 cores
./bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master local[8] \
  /path/to/examples.jar \
  100

# Run on a Spark standalone cluster
./bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master spark://207.184.161.138:7077 \
  --executor-memory 20G \
  --total-executor-cores 100 \
  /path/to/examples.jar \
  1000

# Run on a YARN cluster
export HADOOP_CONF_DIR=XXX
./bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master yarn-cluster \  # can also be `yarn-client` for client mode
  --executor-memory 20G \
  --num-executors 50 \
  /path/to/examples.jar \
  1000

# Run a Python application on a cluster
./bin/spark-submit \
  --master spark://207.184.161.138:7077 \
  examples/src/main/python/pi.py \
  1000
{% endhighlight %}

# Master URLs
The master URL passed to Spark can be in one of the following formats:
| Master URL | Meaning |
|---|---|
| `local` | Run Spark locally with one worker thread (i.e. no parallelism at all). |
| `local[K]` | Run Spark locally with K worker threads (ideally, set this to the number of cores on your machine). |
| `local[*]` | Run Spark locally with as many worker threads as logical cores on your machine. |
| `spark://HOST:PORT` | Connect to the given Spark standalone cluster master. The port must be whichever one your master is configured to use, which is 7077 by default. |
| `mesos://HOST:PORT` | Connect to the given Mesos cluster. The port must be whichever one your master is configured to use, which is 5050 by default. Or, for a Mesos cluster using ZooKeeper, use `mesos://zk://...`. |
| `yarn-client` | Connect to a YARN cluster in client mode. The cluster location will be found based on the `HADOOP_CONF_DIR` variable. |
| `yarn-cluster` | Connect to a YARN cluster in cluster mode. The cluster location will be found based on `HADOOP_CONF_DIR`. |
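
As a small illustration of the less common forms above, the host names and application below are
placeholders:

{% highlight bash %}
# Use every logical core on the local machine.
./bin/spark-submit --master local[*] \
  --class org.apache.spark.examples.SparkPi /path/to/examples.jar 100

# Connect to a Mesos cluster whose masters are coordinated through ZooKeeper
# (the ZooKeeper hosts are placeholders).
./bin/spark-submit --master mesos://zk://zk1:2181,zk2:2181,zk3:2181/mesos \
  --class org.apache.spark.examples.SparkPi /path/to/examples.jar 100
{% endhighlight %}
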

# Loading Configuration from a File
The `spark-submit` script can load default Spark configuration values from a
properties file and pass them on to your application. By default it will read options
from `conf/spark-defaults.conf` in the Spark directory. For more detail, see the section on
loading default configurations.

Loading default Spark configurations this way can obviate the need for certain flags to
`spark-submit`. For instance, if the `spark.master` property is set, you can safely omit the
`--master` flag from `spark-submit`. In general, configuration values explicitly set on a
`SparkConf` take the highest precedence, then flags passed to `spark-submit`, then values in the
defaults file.
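
As a minimal sketch, a defaults file and the resulting shorter invocation might look like this;
the values are examples rather than recommendations:

{% highlight bash %}
# Example conf/spark-defaults.conf: whitespace-separated property/value pairs.
cat > conf/spark-defaults.conf <<'EOF'
spark.master            spark://207.184.161.138:7077
spark.executor.memory   4g
EOF

# With spark.master set in the defaults file, --master can be omitted.
./bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  /path/to/examples.jar \
  100
{% endhighlight %}
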
If you are ever unclear where configuration options are coming from, you can print out fine-grained
debugging information by running `spark-submit` with the `--verbose` option.
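
For example, with illustrative paths:

{% highlight bash %}
# Print extra debugging output about the arguments and properties spark-submit resolves.
./bin/spark-submit \
  --verbose \
  --class org.apache.spark.examples.SparkPi \
  --master local[4] \
  /path/to/examples.jar \
  100
{% endhighlight %}
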

# Advanced Dependency Management
When using `spark-submit`, the application jar along with any jars included with the `--jars` option
will be automatically transferred to the cluster. Spark uses the following URL schemes to allow
different strategies for disseminating jars:

- **file:** - Absolute paths and `file:/` URIs are served by the driver's HTTP file server, and every executor pulls the file from the driver HTTP server.
- **hdfs:**, **http:**, **https:**, **ftp:** - these pull down files and JARs from the URI as expected.
- **local:** - a URI starting with `local:/` is expected to exist as a local file on each worker node. This means that no network IO will be incurred, and works well for large files/JARs that are pushed to each worker, or shared via NFS, GlusterFS, etc.
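
A sketch of how these schemes might appear on the command line; the jar names, paths, and main
class are hypothetical:

{% highlight bash %}
# Ship one jar from the driver's filesystem and reference another that is
# already present at the same path on every worker node (names are examples).
./bin/spark-submit \
  --class com.example.MyApp \
  --master spark://207.184.161.138:7077 \
  --jars file:/opt/libs/helper.jar,local:/opt/libs/big-native-wrapper.jar \
  /path/to/my-app-assembly.jar
{% endhighlight %}
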
Note that JARs and files are copied to the working directory for each SparkContext on the executor nodes.
This can use up a significant amount of space over time and will need to be cleaned up. With YARN, cleanup
is handled automatically, and with Spark standalone, automatic cleanup can be configured with the
`spark.worker.cleanup.appDataTtl` property.
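
On a standalone worker this is typically set through the worker's JVM options, for example in
`conf/spark-env.sh`; the sketch below assumes the companion `spark.worker.cleanup.enabled` switch
and uses an illustrative TTL:

{% highlight bash %}
# Illustrative conf/spark-env.sh entry on each standalone worker: enable periodic
# cleanup and keep application data for 7 days (604800 seconds).
export SPARK_WORKER_OPTS="-Dspark.worker.cleanup.enabled=true -Dspark.worker.cleanup.appDataTtl=604800"
{% endhighlight %}
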
For Python, the equivalent `--py-files` option can be used to distribute `.egg`, `.zip` and `.py`
libraries to executors.

# More Information
Once you have deployed your application, the cluster mode overview describes the components involved in distributed execution, and how to monitor and debug applications.