Apache Spark

Lightning-Fast Cluster Computing - http://spark.apache.org/

Online Documentation

You can find the latest Spark documentation, including a programming guide, on the project webpage at http://spark.apache.org/documentation.html. This README file only contains basic setup instructions.

Building Spark

Spark is built on Scala 2.10. To build Spark and its example programs, run:

./sbt/sbt assembly

Interactive Scala Shell

The easiest way to start using Spark is through the Scala shell:

./bin/spark-shell

Try the following command, which should return 1000:

scala> sc.parallelize(1 to 1000).count()
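
RDD operations can be chained in the same session. As a small illustrative follow-up (not part of the original instructions), counting only the even numbers in that range should return 500:

scala> sc.parallelize(1 to 1000).filter(_ % 2 == 0).count()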

Interactive Python Shell

Alternatively, if you prefer Python, you can use the Python shell:

./bin/pyspark

And run the following command, which should also return 1000:

>>> sc.parallelize(range(1000)).count()

Example Programs

Spark also comes with several sample programs in the examples directory. To run one of them, use ./bin/run-example <class> <params>. For example:

./bin/run-example org.apache.spark.examples.SparkLR local[2]

will run the Logistic Regression example locally on 2 CPUs.

Each of the example programs prints usage help if no params are given.

All of the Spark samples take a <master> parameter that is the cluster URL to connect to. This can be a mesos:// or spark:// URL, or "local" to run locally with one thread, or "local[N]" to run locally with N threads.
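
For instance, to submit the same Logistic Regression example to a standalone cluster whose master runs at the hypothetical address my-master:7077, you could use:

./bin/run-example org.apache.spark.examples.SparkLR spark://my-master:7077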

Running Tests

Testing first requires building Spark. Once Spark is built, tests can be run using:

./sbt/sbt test
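
To run a single suite rather than the whole set, one common sbt pattern is to pass the test-only task a fully qualified suite name; the suite named below is only an illustration:

./sbt/sbt "test-only org.apache.spark.rdd.RDDSuite"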

A Note About Hadoop Versions

Spark uses the Hadoop core library to talk to HDFS and other Hadoop-supported storage systems. Because the protocols have changed in different versions of Hadoop, you must build Spark against the same version that your cluster runs. You can change the version by setting the SPARK_HADOOP_VERSION environment variable when building Spark.

For Apache Hadoop versions 1.x, Cloudera CDH MRv1, and other Hadoop versions without YARN, use:

# Apache Hadoop 1.2.1
$ SPARK_HADOOP_VERSION=1.2.1 sbt/sbt assembly

# Cloudera CDH 4.2.0 with MapReduce v1
$ SPARK_HADOOP_VERSION=2.0.0-mr1-cdh4.2.0 sbt/sbt assembly

For Apache Hadoop 2.2.x, 2.1.x, 2.0.x, 0.23.x, Cloudera CDH MRv2, and other Hadoop versions with YARN, also set SPARK_YARN=true:

# Apache Hadoop 2.0.5-alpha
$ SPARK_HADOOP_VERSION=2.0.5-alpha SPARK_YARN=true sbt/sbt assembly

# Cloudera CDH 4.2.0 with MapReduce v2
$ SPARK_HADOOP_VERSION=2.0.0-cdh4.2.0 SPARK_YARN=true sbt/sbt assembly

# Apache Hadoop 2.2.x and newer
$ SPARK_HADOOP_VERSION=2.2.0 SPARK_YARN=true sbt/sbt assembly

When developing a Spark application, specify the Hadoop version by adding the "hadoop-client" artifact to your project's dependencies. For example, if you're using Hadoop 1.2.1 and build your application using SBT, add this entry to libraryDependencies:

"org.apache.hadoop" % "hadoop-client" % "1.2.1"

If your project is built with Maven, add this to your POM file's <dependencies> section:

<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-client</artifactId>
  <version>1.2.1</version>
</dependency>

Configuration

Please refer to the Configuration guide in the online documentation for an overview of how to configure Spark.

Contributing to Spark

Contributions via GitHub pull requests are gladly accepted from their original author. Along with any pull requests, please state that the contribution is your original work and that you license the work to the project under the project's open source license. Whether or not you state this explicitly, by submitting any copyrighted material via pull request, email, or other means you agree to license the material under the project's open source license and warrant that you have the legal authority to do so.