---
layout: global
title: Cluster Mode Overview
---

This document gives a short overview of how Spark runs on clusters, to make it easier to understand
the components involved. Read through the [application submission guide](submitting-applications.html)
to learn how to submit applications to a cluster.

# Components

Spark applications run as independent sets of processes on a cluster, coordinated by the SparkContext
object in your main program (called the _driver program_). Specifically, to run on a cluster, the
SparkContext can connect to several types of _cluster managers_ (either Spark's own standalone cluster
manager or Mesos/YARN), which allocate resources across applications. Once connected, Spark acquires
*executors* on nodes in the cluster, which are processes that run computations and store data for your
application. Next, it sends your application code (defined by JAR or Python files passed to
SparkContext) to the executors. Finally, SparkContext sends *tasks* for the executors to run.
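
For example, a minimal driver program in Scala might look like the sketch below. The master URL,
application name, and jar path are placeholder assumptions; in practice these are usually supplied
through `spark-submit` rather than hard-coded.

```scala
import org.apache.spark.{SparkConf, SparkContext}

object SimpleApp {
  def main(args: Array[String]) {
    // The standalone master URL and jar path below are placeholders.
    val conf = new SparkConf()
      .setAppName("Cluster Overview Example")
      .setMaster("spark://master-host:7077")
      .setJars(Seq("target/simple-app_2.10-1.0.jar")) // shipped to the executors

    val sc = new SparkContext(conf)

    // The action below is broken into tasks that run on the executors.
    val evens = sc.parallelize(1 to 1000000).filter(_ % 2 == 0).count()
    println(s"Even numbers: $evens")

    sc.stop()
  }
}
```
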
<p style="text-align: center;">
  <img src="img/cluster-overview.png" title="Spark cluster components" alt="Spark cluster components" />
</p>

There are several useful things to note about this architecture:

1. Each application gets its own executor processes, which stay up for the duration of the whole
   application and run tasks in multiple threads. This has the benefit of isolating applications
   from each other, on both the scheduling side (each driver schedules its own tasks) and the
   executor side (tasks from different applications run in different JVMs). However, it also means
   that data cannot be shared across different Spark applications (instances of SparkContext)
   without writing it to an external storage system; a sketch of this pattern follows this list.
2. Spark is agnostic to the underlying cluster manager. As long as it can acquire executor
   processes, and these communicate with each other, it is relatively easy to run Spark even on a
   cluster manager that also supports other applications (e.g. Mesos/YARN).
3. The driver program must listen for and accept incoming connections from its executors throughout
   its lifetime (e.g., see [spark.driver.port and spark.fileserver.port in the network config
   section](configuration.html#networking)). As such, the driver program must be network
   addressable from the worker nodes.
4. Because the driver schedules tasks on the cluster, it should be run close to the worker
   nodes, preferably on the same local area network. If you'd like to send requests to the
   cluster remotely, it's better to open an RPC connection to the driver and have it submit
   operations from nearby than to run the driver far away from the worker nodes.
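
As a sketch of sharing data through external storage (the HDFS paths and application names below are
hypothetical), one application writes its results out, and a second, entirely separate application
reads them back:

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Application A: computes a result and persists it to shared storage.
object WriterApp {
  def main(args: Array[String]) {
    val sc = new SparkContext(new SparkConf().setAppName("WriterApp"))
    // The HDFS path is a placeholder; any shared storage system would do.
    sc.parallelize(1 to 100).map(n => n * n)
      .saveAsTextFile("hdfs://namenode:8020/tmp/squares")
    sc.stop()
  }
}

// Application B: a different SparkContext cannot see A's in-memory RDDs,
// but it can read the data A wrote to external storage.
object ReaderApp {
  def main(args: Array[String]) {
    val sc = new SparkContext(new SparkConf().setAppName("ReaderApp"))
    val squares = sc.textFile("hdfs://namenode:8020/tmp/squares").map(_.toInt)
    println("Sum of squares: " + squares.reduce(_ + _))
    sc.stop()
  }
}
```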

# Cluster Manager Types

The system currently supports three cluster managers:

* [Standalone](spark-standalone.html) -- a simple cluster manager included with Spark that makes it
  easy to set up a cluster.
* [Apache Mesos](running-on-mesos.html) -- a general cluster manager that can also run Hadoop MapReduce
  and service applications.
* [Hadoop YARN](running-on-yarn.html) -- the resource manager in Hadoop 2.

In addition, Spark's [EC2 launch scripts](ec2-scripts.html) make it easy to launch a standalone
cluster on Amazon EC2.
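
From the application's point of view, the choice of manager shows up mainly in the master URL given
to SparkContext. A brief sketch (host names and ports are placeholders; on YARN the cluster location
is read from the Hadoop configuration rather than from a host:port pair):

```scala
import org.apache.spark.SparkConf

// One configuration per manager type, differing only in the master URL.
val standalone = new SparkConf().setMaster("spark://master-host:7077")
val mesos      = new SparkConf().setMaster("mesos://mesos-host:5050")
val yarn       = new SparkConf().setMaster("yarn-client") // or "yarn-cluster"
```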
# Submitting Applications

Applications can be submitted to a cluster of any type using the `spark-submit` script.
The [application submission guide](submitting-applications.html) describes how to do this.

# Monitoring

Each driver program has a web UI, typically on port 4040, that displays information about running
tasks, executors, and storage usage. Simply go to `http://<driver-node>:4040` in a web browser to
access this UI. The [monitoring guide](monitoring.html) also describes other monitoring options.

# Job Scheduling

Spark gives control over resource allocation both _across_ applications (at the level of the cluster
manager) and _within_ applications (if multiple computations are happening on the same SparkContext).
The [job scheduling overview](job-scheduling.html) describes this in more detail.
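
As a small illustration of within-application scheduling, the sketch below enables the fair scheduler
and assigns jobs from one thread to a named pool. The pool name is a hypothetical example, not a
default:

```scala
import org.apache.spark.{SparkConf, SparkContext}

// By default, jobs inside one application run in FIFO order; FAIR mode
// lets concurrent jobs share executor resources.
val conf = new SparkConf()
  .setAppName("SchedulingSketch")
  .set("spark.scheduler.mode", "FAIR")
val sc = new SparkContext(conf)

// Jobs submitted from this thread go to the (hypothetical) "reports" pool.
sc.setLocalProperty("spark.scheduler.pool", "reports")
sc.parallelize(1 to 1000).count()

// Clear the assignment so later jobs from this thread use the default pool.
sc.setLocalProperty("spark.scheduler.pool", null)
```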

# Glossary

The following table summarizes terms you'll see used to refer to cluster concepts:

<table class="table">
  <thead>
    <tr><th style="width: 130px;">Term</th><th>Meaning</th></tr>
  </thead>
  <tbody>
    <tr>
      <td>Application</td>
      <td>User program built on Spark. Consists of a <em>driver program</em> and <em>executors</em> on the cluster.</td>
    </tr>
    <tr>
      <td>Application jar</td>
      <td>
        A jar containing the user's Spark application. In some cases users will want to create
        an "uber jar" containing their application along with its dependencies. The user's jar
        should never include Hadoop or Spark libraries; however, these will be added at runtime.
      </td>
    </tr>
    <tr>
      <td>Driver program</td>
      <td>The process running the main() function of the application and creating the SparkContext</td>
    </tr>
    <tr>
      <td>Cluster manager</td>
      <td>An external service for acquiring resources on the cluster (e.g. standalone manager, Mesos, YARN)</td>
    </tr>
    <tr>
      <td>Deploy mode</td>
      <td>Distinguishes where the driver process runs. In "cluster" mode, the framework launches
        the driver inside the cluster. In "client" mode, the submitter launches the driver
        outside the cluster.</td>
    </tr>
    <tr>
      <td>Worker node</td>
      <td>Any node that can run application code in the cluster</td>
    </tr>
    <tr>
      <td>Executor</td>
      <td>A process launched for an application on a worker node, which runs tasks and keeps data in memory
        or disk storage across them. Each application has its own executors.</td>
    </tr>
    <tr>
      <td>Task</td>
      <td>A unit of work that will be sent to one executor</td>
    </tr>
    <tr>
      <td>Job</td>
      <td>A parallel computation consisting of multiple tasks that gets spawned in response to a Spark action
        (e.g. <code>save</code>, <code>collect</code>); you'll see this term used in the driver's logs.</td>
    </tr>
    <tr>
      <td>Stage</td>
      <td>Each job gets divided into smaller sets of tasks called <em>stages</em> that depend on each other
        (similar to the map and reduce stages in MapReduce); you'll see this term used in the driver's logs.</td>
    </tr>
  </tbody>
</table>
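
To make the job and stage terms concrete, here is a hedged sketch (it assumes an existing
SparkContext `sc` and a placeholder input path): a single action spawns one job, which the shuffle
splits into two stages.

```scala
// Transformations alone launch nothing; they only define the lineage.
val lines  = sc.textFile("hdfs://namenode:8020/logs/events.txt") // placeholder path
val counts = lines
  .flatMap(_.split(" "))
  .map(word => (word, 1))
  .reduceByKey(_ + _) // the shuffle here marks a stage boundary

// The action spawns one *job*: stage 1 runs the flatMap/map tasks and
// stage 2 runs the reduce-side tasks, with one task per partition.
counts.collect().foreach(println)
```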