Apache Spark

Lightning-Fast Cluster Computing - http://spark.apache.org/

Online Documentation

You can find the latest Spark documentation, including a programming guide, on the project webpage at http://spark.apache.org/documentation.html. This README file only contains basic setup instructions.

Building Spark

Spark is built on Scala 2.10. To build Spark and its example programs, run:

./sbt/sbt assembly

(You do not need to do this if you downloaded a pre-built package.)

Interactive Scala Shell

The easiest way to start using Spark is through the Scala shell:

./bin/spark-shell

Try the following command, which should return 1000:

scala> sc.parallelize(1 to 1000).count()
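
Because the shell gives you a full SparkContext (sc), you can chain transformations before an action. As a minimal sketch that is not taken from the official guide, the following filters the same range down to its even numbers; the final count() should return 500:

scala> sc.parallelize(1 to 1000).filter(_ % 2 == 0).count()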

Interactive Python Shell

Alternatively, if you prefer Python, you can use the Python shell:

./bin/pyspark

And run the following command, which should also return 1000:

>>> sc.parallelize(range(1000)).count()

Example Programs

Spark also comes with several sample programs in the examples directory. To run one of them, use ./bin/run-example <class> [params]. For example:

./bin/run-example SparkPi

will run the Pi example locally.

You can set the MASTER environment variable when running examples to submit examples to a cluster. This can be a mesos:// or spark:// URL, "yarn-cluster" or "yarn-client" to run on YARN, "local" to run locally with one thread, or "local[N]" to run locally with N threads. You can also use an abbreviated class name if the class is in the examples package. For instance:

MASTER=spark://host:7077 ./bin/run-example SparkPi

Many of the example programs print usage help if no params are given.

Running Tests

Testing first requires building Spark. Once Spark is built, tests can be run using:

./sbt/sbt test

A Note About Hadoop Versions

Spark uses the Hadoop core library to talk to HDFS and other Hadoop-supported storage systems. Because the protocols have changed in different versions of Hadoop, you must build Spark against the same version that your cluster runs. You can change the version by setting the SPARK_HADOOP_VERSION environment variable when building Spark.

For Apache Hadoop versions 1.x, Cloudera CDH MRv1, and other Hadoop versions without YARN, use:

# Apache Hadoop 1.2.1
$ SPARK_HADOOP_VERSION=1.2.1 sbt/sbt assembly

# Cloudera CDH 4.2.0 with MapReduce v1
$ SPARK_HADOOP_VERSION=2.0.0-mr1-cdh4.2.0 sbt/sbt assembly

For Apache Hadoop 2.2.X, 2.1.X, 2.0.X, 0.23.X, Cloudera CDH MRv2, and other Hadoop versions with YARN, also set SPARK_YARN=true:

# Apache Hadoop 2.0.5-alpha
$ SPARK_HADOOP_VERSION=2.0.5-alpha SPARK_YARN=true sbt/sbt assembly

# Cloudera CDH 4.2.0 with MapReduce v2
$ SPARK_HADOOP_VERSION=2.0.0-cdh4.2.0 SPARK_YARN=true sbt/sbt assembly

# Apache Hadoop 2.2.X and newer
$ SPARK_HADOOP_VERSION=2.2.0 SPARK_YARN=true sbt/sbt assembly

When developing a Spark application, specify the Hadoop version by adding the "hadoop-client" artifact to your project's dependencies. For example, if you're using Hadoop 1.2.1 and build your application using SBT, add this entry to libraryDependencies:

"org.apache.hadoop" % "hadoop-client" % "1.2.1"

If your project is built with Maven, add this to your POM file's <dependencies> section:

<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-client</artifactId>
  <version>1.2.1</version>
</dependency>

Configuration

Please refer to the Configuration guide in the online documentation for an overview of how to configure Spark.
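
Spark can also be configured programmatically through SparkConf before creating a SparkContext. The snippet below is only a sketch: the application name, master URL, and the 2g memory value are placeholder assumptions, although spark.executor.memory itself is a standard Spark property.

import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setAppName("MyApp")                  // hypothetical application name
  .setMaster("local[4]")                // run locally with 4 threads; use a cluster URL in production
  .set("spark.executor.memory", "2g")   // example value only
val sc = new SparkContext(conf)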

Contributing to Spark

Contributions via GitHub pull requests are gladly accepted from their original author. Along with any pull requests, please state that the contribution is your original work and that you license the work to the project under the project's open source license. Whether or not you state this explicitly, by submitting any copyrighted material via pull request, email, or other means you agree to license the material under the project's open source license and warrant that you have the legal authority to do so.