
Apache Spark

Lightning-Fast Cluster Computing - http://spark.apache.org/

Online Documentation

You can find the latest Spark documentation, including a programming guide, on the project webpage at http://spark.apache.org/documentation.html. This README file only contains basic setup instructions.

Building Spark

Spark is built on Scala 2.10. To build Spark and its example programs, run:

./sbt/sbt assembly

(You do not need to do this if you downloaded a pre-built package.)

Interactive Scala Shell

The easiest way to start using Spark is through the Scala shell:

./bin/spark-shell

Try the following command, which should return 1000:

scala> sc.parallelize(1 to 1000).count()
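For a slightly longer exercise, you can load a text file into an RDD and count the lines matching a pattern. This sketch assumes you started the shell from the top of the Spark source tree, so README.md is in the working directory:

scala> val lines = sc.textFile("README.md")
scala> lines.filter(line => line.contains("Spark")).count()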

Interactive Python Shell

Alternatively, if you prefer Python, you can use the Python shell:

./bin/pyspark

And run the following command, which should also return 1000:

>>> sc.parallelize(range(1000)).count()

Example Programs

Spark also comes with several sample programs in the examples directory. To run one of them, use ./bin/run-example <class> [params]. For example:

./bin/run-example SparkPi

will run the Pi example locally.

You can set the MASTER environment variable when running examples to submit them to a cluster. This can be a mesos:// or spark:// URL, "yarn-cluster" or "yarn-client" to run on YARN, "local" to run locally with one thread, or "local[N]" to run locally with N threads. You can also use an abbreviated class name if the class is in the examples package. For instance:

MASTER=spark://host:7077 ./bin/run-example SparkPi

Many of the example programs print usage help if no params are given.
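For instance, to run the Pi example locally with four threads, following the same pattern:

MASTER=local[4] ./bin/run-example SparkPi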

Running Tests

Testing first requires building Spark. Once Spark is built, tests can be run using:

./sbt/sbt test
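If you want to iterate on one suite instead of running everything, sbt's test-only task should work here as well; the suite name below is only an illustration:

./sbt/sbt "test-only org.apache.spark.rdd.RDDSuite"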

A Note About Hadoop Versions

Spark uses the Hadoop core library to talk to HDFS and other Hadoop-supported storage systems. Because the protocols have changed in different versions of Hadoop, you must build Spark against the same version that your cluster runs. You can change the version by setting the SPARK_HADOOP_VERSION environment variable when building Spark.

For Apache Hadoop versions 1.x, Cloudera CDH MRv1, and other Hadoop versions without YARN, use:

# Apache Hadoop 1.2.1
$ SPARK_HADOOP_VERSION=1.2.1 sbt/sbt assembly

# Cloudera CDH 4.2.0 with MapReduce v1
$ SPARK_HADOOP_VERSION=2.0.0-mr1-cdh4.2.0 sbt/sbt assembly

For Apache Hadoop 2.2.x, 2.1.x, 2.0.x, 0.23.x, Cloudera CDH MRv2, and other Hadoop versions with YARN, also set SPARK_YARN=true:

# Apache Hadoop 2.0.5-alpha
$ SPARK_HADOOP_VERSION=2.0.5-alpha SPARK_YARN=true sbt/sbt assembly

# Cloudera CDH 4.2.0 with MapReduce v2
$ SPARK_HADOOP_VERSION=2.0.0-cdh4.2.0 SPARK_YARN=true sbt/sbt assembly

# Apache Hadoop 2.2.x and newer
$ SPARK_HADOOP_VERSION=2.2.0 SPARK_YARN=true sbt/sbt assembly

When developing a Spark application, specify the Hadoop version by adding the "hadoop-client" artifact to your project's dependencies. For example, if you're using Hadoop 1.2.1 and build your application using SBT, add this entry to libraryDependencies:

"org.apache.hadoop" % "hadoop-client" % "1.2.1"

If your project is built with Maven, add this to your POM file's <dependencies> section:

<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-client</artifactId>
  <version>1.2.1</version>
</dependency>

Configuration

Please refer to the Configuration guide in the online documentation for an overview of how to configure Spark.
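As a minimal illustration, many settings can also be supplied programmatically through SparkConf when creating a SparkContext; the application name, master URL, and memory value below are placeholders:

import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setAppName("MyApp")
  .setMaster("local[2]")
  .set("spark.executor.memory", "1g")
val sc = new SparkContext(conf)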

Contributing to Spark

Contributions via GitHub pull requests are gladly accepted from their original author. Along with any pull requests, please state that the contribution is your original work and that you license the work to the project under the project's open source license. Whether or not you state this explicitly, by submitting any copyrighted material via pull request, email, or other means you agree to license the material under the project's open source license and warrant that you have the legal authority to do so.