---
layout: global
title: Running Spark on Mesos
---

* This will become a table of contents (this text will be scraped).
{:toc}

Spark can run on hardware clusters managed by [Apache Mesos](http://mesos.apache.org/).

The advantages of deploying Spark with Mesos include:

- dynamic partitioning between Spark and other [frameworks](https://mesos.apache.org/documentation/latest/mesos-frameworks/)
- scalable partitioning between multiple instances of Spark

# How it Works

In a standalone cluster deployment, the cluster manager in the below diagram is a Spark master
instance. When using Mesos, the Mesos master replaces the Spark master as the cluster manager.
Now when a driver creates a job and starts issuing tasks for scheduling, Mesos determines what
machines handle what tasks. Because it takes into account other frameworks when scheduling these
many short-lived tasks, multiple frameworks can coexist on the same cluster without resorting to a
static partitioning of resources.

To get started, follow the steps below to install Mesos and deploy Spark jobs via Mesos.

# Installing Mesos

Spark {{site.SPARK_VERSION}} is designed for use with Mesos {{site.MESOS_VERSION}} and does not
require any special patches of Mesos.

If you already have a Mesos cluster running, you can skip this Mesos installation step.

Otherwise, installing Mesos for Spark is no different than installing Mesos for use by other
frameworks. You can install Mesos either from source or using prebuilt packages.

## From Source

To install Apache Mesos from source, follow these steps:

1. Download a Mesos release from a [mirror](http://www.apache.org/dyn/closer.cgi/mesos/{{site.MESOS_VERSION}}/)
2. Follow the Mesos [Getting Started](http://mesos.apache.org/gettingstarted) page for compiling and installing Mesos

**Note:** If you want to run Mesos without installing it into the default paths on your system
(e.g., if you lack administrative privileges to install it), pass the `--prefix` option to
`configure` to tell it where to install. For example, pass `--prefix=/home/me/mesos`. By default
the prefix is `/usr/local`.

## Third-Party Packages

The Apache Mesos project only publishes source releases, not binary packages. But other
third-party projects publish binary releases that may be helpful in setting Mesos up.

One of those is Mesosphere. To install Mesos using the binary releases provided by Mesosphere:

1. Download the Mesos installation package from the Mesosphere [downloads page](http://mesosphere.io/downloads/)
2. Follow their instructions for installation and configuration

The Mesosphere installation documents suggest setting up ZooKeeper to handle Mesos master
failover, but Mesos can also be run without ZooKeeper using a single master.

## Verification

To verify that the Mesos cluster is ready for Spark, navigate to the Mesos master web UI at port
`:5050` and confirm that all expected machines are present in the slaves tab.

# Connecting Spark to Mesos

To use Mesos from Spark, you need a Spark binary package available in a place accessible by Mesos,
and a Spark driver program configured to connect to Mesos.

Alternatively, you can also install Spark in the same location on all the Mesos slaves, and
configure `spark.mesos.executor.home` (defaults to `SPARK_HOME`) to point to that location.

## Uploading Spark Package

When Mesos runs a task on a Mesos slave for the first time, that slave must have a Spark binary
package for running the Spark Mesos executor backend. The Spark package can be hosted at any
Hadoop-accessible URI, including HTTP via `http://`,
[Amazon Simple Storage Service](http://aws.amazon.com/s3) via `s3n://`, or HDFS via `hdfs://`.

To use a precompiled package:

1. Download a Spark binary package from the Spark [download page](https://spark.apache.org/downloads.html)
2. Upload it to HDFS, HTTP, or S3. To host on HDFS, use the Hadoop fs put command:
   `hadoop fs -put spark-{{site.SPARK_VERSION}}.tar.gz /path/to/spark-{{site.SPARK_VERSION}}.tar.gz`

Or, if you are using a custom-compiled version of Spark, create a package using the
`make-distribution.sh` script included in a Spark source tarball/checkout:

1. Download and build Spark using the instructions [here](index.html)
2. Create a binary package using `make-distribution.sh --tgz`
3. Upload the archive to HDFS, HTTP, or S3
## Using a Mesos Master URL

The Master URLs for Mesos are in the form `mesos://host:5050` for a single-master Mesos cluster,
or `mesos://zk://host:2181` for a multi-master Mesos cluster using ZooKeeper.

## Client Mode

In client mode, a Spark Mesos framework is launched directly on the client machine and waits for
the driver output. The driver needs some configuration in `spark-env.sh` to interact properly with
Mesos:

1. In `spark-env.sh` set some environment variables:
 * `export MESOS_NATIVE_JAVA_LIBRARY=<path to libmesos.so>`

Then pass one of the Mesos master URLs above as the master when creating a `SparkContext`, as in
the sketch below.
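A minimal Scala sketch of a client-mode driver, assuming the Spark package was uploaded to HDFS as
described above (the master host and the executor URI are placeholders, not fixed values):

{% highlight scala %}
import org.apache.spark.{SparkConf, SparkContext}

// Placeholder master URL: a single-master Mesos cluster on port 5050.
val conf = new SparkConf()
  .setMaster("mesos://host:5050")
  .setAppName("My app")
  // Where Mesos slaves fetch the Spark binary package uploaded earlier;
  // the HDFS path here is a placeholder for your own upload location.
  .set("spark.executor.uri", "hdfs:///path/to/spark-{{site.SPARK_VERSION}}.tar.gz")
val sc = new SparkContext(conf)
{% endhighlight %}

The following Spark properties can be set in the same way to fine-tune how Spark runs on Mesos: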
| Property Name | Default | Meaning |
| --- | --- | --- |
| `spark.mesos.coarse` | false | If set to "true", runs over Mesos clusters in "coarse-grained" sharing mode, where Spark acquires one long-lived Mesos task on each machine instead of one Mesos task per Spark task. This gives lower-latency scheduling for short queries, but leaves resources in use for the whole duration of the Spark job. |
| `spark.mesos.extra.cores` | 0 | Set the extra number of cores to request per task. This setting is only used in Mesos coarse-grained mode. The total number of cores requested per task is the number of cores in the offer plus the extra cores configured. Note that the total number of cores the executor requests will not exceed the `spark.cores.max` setting. |
| `spark.mesos.mesosExecutor.cores` | 1.0 | (Fine-grained mode only) Number of cores to give each Mesos executor. This does not include the cores used to run the Spark tasks. In other words, even if no Spark task is being run, each Mesos executor will occupy the number of cores configured here. The value can be a floating point number. |
| `spark.mesos.executor.docker.image` | (none) | Set the name of the Docker image that the Spark executors will run in. The selected image must have Spark installed, as well as a compatible version of the Mesos library. The installed path of Spark in the image can be specified with `spark.mesos.executor.home`; the installed path of the Mesos library can be specified with `spark.executorEnv.MESOS_NATIVE_LIBRARY`. |
| `spark.mesos.executor.docker.volumes` | (none) | Set the list of volumes which will be mounted into the Docker image, which was set using `spark.mesos.executor.docker.image`. The format of this property is a comma-separated list of mappings following the form passed to `docker run -v`, that is: `[host_path:]container_path[:ro\|:rw]` |
| `spark.mesos.executor.docker.portmaps` | (none) | Set the list of incoming ports exposed by the Docker image, which was set using `spark.mesos.executor.docker.image`. The format of this property is a comma-separated list of mappings which take the form: `host_port:container_port[:tcp\|:udp]` |
| `spark.mesos.executor.home` | driver side `SPARK_HOME` | Set the directory in which Spark is installed on the executors in Mesos. By default, the executors will simply use the driver's Spark home directory, which may not be visible to them. Note that this is only relevant if a Spark binary package is not specified through `spark.executor.uri`. |
| `spark.mesos.executor.memoryOverhead` | executor memory * 0.10, with minimum of 384 | The amount of additional memory, specified in MB, to be allocated per executor. By default, the overhead will be the larger of either 384 or 10% of `spark.executor.memory`. If it is set explicitly, the final overhead will be this value. |
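For example, a sketch of enabling coarse-grained mode with some of the properties above (the
master URL, app name, and core counts are illustrative placeholders, not recommended values):

{% highlight scala %}
import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setMaster("mesos://host:5050")   // placeholder master URL
  .setAppName("CoarseGrainedApp")   // placeholder app name
  // Hold one long-lived Mesos task per machine rather than one per Spark task.
  .set("spark.mesos.coarse", "true")
  // Request one extra core per task on top of the cores in each offer.
  .set("spark.mesos.extra.cores", "1")
  // Upper bound on the total cores the executors will request.
  .set("spark.cores.max", "8")
val sc = new SparkContext(conf)
{% endhighlight %}

As with any other Spark configuration, the same properties can also be supplied through
`--conf key=value` arguments to `spark-submit` or through a properties file.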