Apache Spark - A unified analytics engine for large-scale data processing
Kay Ousterhout fac6085cd7 [SPARK-1397] Notify SparkListeners when stages fail or are cancelled.
[I wanted to post this for folks to comment on, but it depends on (and thus includes the changes in) a currently outstanding PR, #305.  You can look at just the second commit, 93f08baf73, to see the changes relevant to this PR.]

Previously, when stages failed or were cancelled, SparkListeners were only notified
indirectly through SparkListenerJobEnd, where we sometimes passed in a single
stage that failed.  This worked before job cancellation, because jobs could only fail
due to a single stage failure.  However, with job cancellation, multiple running stages
can fail when a job gets cancelled.  Right now this is not handled correctly, which
results in stages that get stuck in the “Running Stages” window in the UI even
though they're dead.

This PR changes the SparkListenerStageCompleted event to a SparkListenerStageEnded
event, and uses this event to tell SparkListeners when stages fail in addition to when
they complete successfully.  This change is NOT publicly backward compatible for two
reasons.  First, it changes the SparkListener interface.  We could alternatively add a new event,
SparkListenerStageFailed, and keep the existing SparkListenerStageCompleted.  However,
this is less consistent with the listener events for tasks / jobs ending, and will result in some
code duplication for listeners (because failed and completed stages are handled in similar
ways).  Note that I haven’t finished updating the JSON code to correctly handle the new event
because I’m waiting for feedback on whether this is a good or bad idea (hence the “WIP”).

It is also not backwards compatible because it changes the publicly visible JobWaiter.jobFailed()
method to no longer include a stage that caused the failure.  I think this change should definitely
stay, because with cancellation (as described above), a failure isn’t necessarily caused by a
single stage.
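
For illustration only, here is a rough Scala sketch of a listener that reacts to stage endings. It is written against the public SparkListener API rather than the exact code in this PR: it keeps the existing SparkListenerStageCompleted event name and assumes a failureReason field (an Option[String]) on StageInfo, which is one plausible way for a listener to distinguish failed or cancelled stages from successful ones.

import org.apache.spark.scheduler.{SparkListener, SparkListenerStageCompleted}

class StageEndLogger extends SparkListener {
  // Called whenever a stage ends; with this change that includes failed and cancelled stages.
  override def onStageCompleted(stageCompleted: SparkListenerStageCompleted) {
    val info = stageCompleted.stageInfo
    info.failureReason match {                      // assumed Option[String] field
      case Some(reason) => println("Stage " + info.stageId + " failed: " + reason)
      case None         => println("Stage " + info.stageId + " completed successfully")
    }
  }
}

The listener would be registered on the driver, e.g. with sc.addSparkListener(new StageEndLogger).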

Author: Kay Ousterhout <kayousterhout@gmail.com>

Closes #309 from kayousterhout/stage_cancellation and squashes the following commits:

5533ecd [Kay Ousterhout] Fixes in response to Mark's review
320c7c7 [Kay Ousterhout] Notify SparkListeners when stages fail or are cancelled.
2014-04-08 14:42:02 -07:00
assembly SPARK-1314: Use SPARK_HIVE to determine if we include Hive in packaging 2014-04-06 17:48:41 -07:00
bagel SPARK-1193. Fix indentation in pom.xmls 2014-03-07 23:10:35 -08:00
bin SPARK-1445: compute-classpath should not print error if lib_managed not found 2014-04-08 14:40:20 -07:00
conf Revert "[SPARK-1150] fix repo location in create script" 2014-03-01 17:15:38 -08:00
core [SPARK-1397] Notify SparkListeners when stages fail or are cancelled. 2014-04-08 14:42:02 -07:00
data moved user scripts to bin folder 2013-09-23 12:46:48 +08:00
dev SPARK-1431: Allow merging conflicting pull requests 2014-04-06 21:04:45 -07:00
docker [SPARK-1342] Scala 2.10.4 2014-04-01 18:35:50 -07:00
docs SPARK-1099: Introduce local[*] mode to infer number of cores 2014-04-07 13:06:30 -07:00
ec2 SPARK-1156: allow user to login into a cluster without slaves 2014-03-05 21:47:34 -08:00
examples SPARK-1387. Update build plugins, avoid plugin version warning, centralize versions 2014-04-06 17:41:01 -07:00
external SPARK-1352 - Comment style single space before ending */ check. 2014-03-30 10:06:56 -07:00
extras Spark 1095 : Adding explicit return types to all public methods 2014-03-26 18:24:55 -07:00
graphx SPARK-1387. Update build plugins, avoid plugin version warning, centralize versions 2014-04-06 17:41:01 -07:00
mllib SPARK-1387. Update build plugins, avoid plugin version warning, centralize versions 2014-04-06 17:41:01 -07:00
project [SPARK-1331] Added graceful shutdown to Spark Streaming 2014-04-08 00:00:17 -07:00
python SPARK-1099: Introduce local[*] mode to infer number of cores 2014-04-07 13:06:30 -07:00
repl SPARK-1099: Introduce local[*] mode to infer number of cores 2014-04-07 13:06:30 -07:00
sbin SPARK-1286: Make usage of spark-env.sh idempotent 2014-03-24 22:24:21 -07:00
sbt [SQL] Un-ignore a test that is now passing. 2014-03-26 18:19:15 -07:00
sql [SPARK-1402] Added 3 more compression schemes 2014-04-07 22:24:12 -07:00
streaming [SPARK-1331] Added graceful shutdown to Spark Streaming 2014-04-08 00:00:17 -07:00
tools SPARK-1325. The maven build error for Spark Tools 2014-03-26 18:32:14 -07:00
yarn Remove extra semicolon in import statement and unused import in ApplicationMaster 2014-04-08 14:23:16 -07:00
.gitignore SPARK-1336 Reducing the output of run-tests script. 2014-03-29 23:03:03 -07:00
.rat-excludes HOTFIX for broken CI, by SPARK-1336 2014-04-04 22:49:19 -07:00
.travis.yml Cut down the granularity of travis tests. 2014-03-27 08:53:42 -07:00
LICENSE Merge the old sbt-launch-lib.bash with the new sbt-launcher jar downloading logic. 2014-03-02 00:35:23 -08:00
make-distribution.sh fix path for jar, make sed actually work on OSX 2014-03-28 13:33:35 -07:00
NOTICE [SPARK-1212] Adding sparse data support and update KMeans 2014-03-23 17:34:02 -07:00
pom.xml SPARK-1314: Use SPARK_HIVE to determine if we include Hive in packaging 2014-04-06 17:48:41 -07:00
README.md Removed reference to incubation in README.md. 2014-02-26 16:52:26 -08:00
scalastyle-config.xml SPARK-1096, a space after comment start style checker. 2014-03-28 00:21:49 -07:00

Apache Spark

Lightning-Fast Cluster Computing - http://spark.apache.org/

Online Documentation

You can find the latest Spark documentation, including a programming guide, on the project webpage at http://spark.apache.org/documentation.html. This README file only contains basic setup instructions.

Building

Spark requires Scala 2.10. The project is built using Simple Build Tool (SBT). If SBT is already installed, the build will use your system version of sbt; otherwise, it will attempt to download sbt automatically. To build Spark and its example programs, run:

./sbt/sbt assembly

Once you've built Spark, the easiest way to start using it is the shell:

./bin/spark-shell

Or, for the Python API, the Python shell (./bin/pyspark).
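
Once the shell is up, you can try a small job interactively. The session below is an illustrative example (not from this README); sc is the SparkContext that the shell creates for you:

scala> val lines = sc.textFile("README.md")
scala> lines.filter(_.contains("Spark")).count()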

Spark also comes with several sample programs in the examples directory. To run one of them, use ./bin/run-example <class> <params>. For example:

./bin/run-example org.apache.spark.examples.SparkLR local[2]

will run the Logistic Regression example locally on 2 CPUs.

Each of the example programs prints usage help if no params are given.

All of the Spark samples take a <master> parameter that is the cluster URL to connect to. This can be a mesos:// or spark:// URL, or "local" to run locally with one thread, or "local[N]" to run locally with N threads.
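
For example, the following invocations are illustrative (replace host with the address of your standalone master):

./bin/run-example org.apache.spark.examples.SparkPi local            # one local thread
./bin/run-example org.apache.spark.examples.SparkPi local[4]         # four local threads
./bin/run-example org.apache.spark.examples.SparkPi spark://host:7077    # standalone cluster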

Running tests

Testing first requires building Spark (see Building above). Once Spark is built, tests can be run using:

./sbt/sbt test
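
To run a single test suite instead of the whole tree, you can pass sbt's test-only task a fully qualified suite name; the suite shown here is just an example:

./sbt/sbt "test-only org.apache.spark.rdd.RDDSuite"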

A Note About Hadoop Versions

Spark uses the Hadoop core library to talk to HDFS and other Hadoop-supported storage systems. Because the protocols have changed in different versions of Hadoop, you must build Spark against the same version that your cluster runs. You can change the version by setting the SPARK_HADOOP_VERSION environment variable when building Spark.

For Apache Hadoop versions 1.x, Cloudera CDH MRv1, and other Hadoop versions without YARN, use:

# Apache Hadoop 1.2.1
$ SPARK_HADOOP_VERSION=1.2.1 sbt/sbt assembly

# Cloudera CDH 4.2.0 with MapReduce v1
$ SPARK_HADOOP_VERSION=2.0.0-mr1-cdh4.2.0 sbt/sbt assembly

For Apache Hadoop 2.2.x, 2.1.x, 2.0.x, 0.23.x, Cloudera CDH MRv2, and other Hadoop versions with YARN, also set SPARK_YARN=true:

# Apache Hadoop 2.0.5-alpha
$ SPARK_HADOOP_VERSION=2.0.5-alpha SPARK_YARN=true sbt/sbt assembly

# Cloudera CDH 4.2.0 with MapReduce v2
$ SPARK_HADOOP_VERSION=2.0.0-cdh4.2.0 SPARK_YARN=true sbt/sbt assembly

# Apache Hadoop 2.2.x and newer
$ SPARK_HADOOP_VERSION=2.2.0 SPARK_YARN=true sbt/sbt assembly

When developing a Spark application, specify the Hadoop version by adding the "hadoop-client" artifact to your project's dependencies. For example, if you're using Hadoop 1.2.1 and build your application using SBT, add this entry to libraryDependencies:

"org.apache.hadoop" % "hadoop-client" % "1.2.1"

If your project is built with Maven, add this to your POM file's <dependencies> section:

<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-client</artifactId>
  <version>1.2.1</version>
</dependency>
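
Putting the pieces together, a minimal build.sbt for an application that uses Spark with a matching hadoop-client might look like the sketch below; the project name and Spark version are illustrative, so substitute the release you are actually targeting:

// build.sbt (illustrative sketch)
name := "my-spark-app"

scalaVersion := "2.10.4"

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "0.9.1",
  "org.apache.hadoop" % "hadoop-client" % "1.2.1"
)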

Configuration

Please refer to the Configuration guide in the online documentation for an overview of how to configure Spark.
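
Configuration can also be set programmatically through SparkConf when constructing a SparkContext. The snippet below is a minimal sketch; the property names are standard Spark settings, but the specific values are only examples:

import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setMaster("local[2]")                  // run locally with two threads
  .setAppName("ConfigExample")            // name shown in the UI
  .set("spark.executor.memory", "1g")     // per-executor memory
val sc = new SparkContext(conf)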

Contributing to Spark

Contributions via GitHub pull requests are gladly accepted from their original author. Along with any pull requests, please state that the contribution is your original work and that you license the work to the project under the project's open source license. Whether or not you state this explicitly, by submitting any copyrighted material via pull request, email, or other means you agree to license the material under the project's open source license and warrant that you have the legal authority to do so.