Apache Spark

Lightning-Fast Cluster Computing - http://spark.apache.org/

Online Documentation

You can find the latest Spark documentation, including a programming guide, on the project webpage at http://spark.apache.org/documentation.html. This README file only contains basic setup instructions.

Building Spark

Spark is built with Scala 2.10. To build Spark and its example programs, run:

./sbt/sbt assembly

Interactive Scala Shell

The easiest way to start using Spark is through the Scala shell:

./bin/spark-shell

Try the following command, which should return 1000:

scala> sc.parallelize(1 to 1000).count()
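
For a slightly larger, purely illustrative example, a word count over a local text file can be run the same way (the file path below is just a placeholder; any text file works):

```scala
scala> val words = sc.textFile("README.md").flatMap(_.split("\\s+"))
scala> val counts = words.map(word => (word, 1)).reduceByKey(_ + _)
scala> counts.take(5)
```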

Interactive Python Shell

Alternatively, if you prefer Python, you can use the Python shell:

./bin/pyspark

And run the following command, which should also return 1000:

>>> sc.parallelize(range(1000)).count()

Example Programs

Spark also comes with several sample programs in the examples directory. To run one of them, use ./bin/run-example <class> [params]. For example:

./bin/run-example org.apache.spark.examples.SparkLR

will run the Logistic Regression example locally.

You can set the MASTER environment variable when running examples to submit them to a cluster. MASTER can be a mesos:// or spark:// URL, "yarn-cluster" or "yarn-client" to run on YARN, "local" to run locally with one thread, or "local[N]" to run locally with N threads. You can also use an abbreviated class name if the class is in the examples package. For instance:

MASTER=spark://host:7077 ./bin/run-example SparkPi

Many of the example programs print usage help if no params are given.
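
Inside your own applications, the master is set on the SparkContext rather than through the MASTER variable. A minimal sketch, assuming the standard Scala API (the application name here is arbitrary):

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Run locally with two threads; replace "local[2]" with a spark:// or
// mesos:// URL (or "yarn-client") to target a cluster instead.
val conf = new SparkConf().setAppName("ExampleApp").setMaster("local[2]")
val sc = new SparkContext(conf)
```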

Running Tests

Testing first requires building Spark. Once Spark is built, tests can be run using:

./sbt/sbt test

A Note About Hadoop Versions

Spark uses the Hadoop core library to talk to HDFS and other Hadoop-supported storage systems. Because the protocols have changed in different versions of Hadoop, you must build Spark against the same version that your cluster runs. You can change the version by setting the SPARK_HADOOP_VERSION environment variable when building Spark.

For Apache Hadoop versions 1.x, Cloudera CDH MRv1, and other Hadoop versions without YARN, use:

# Apache Hadoop 1.2.1
$ SPARK_HADOOP_VERSION=1.2.1 sbt/sbt assembly

# Cloudera CDH 4.2.0 with MapReduce v1
$ SPARK_HADOOP_VERSION=2.0.0-mr1-cdh4.2.0 sbt/sbt assembly

For Apache Hadoop 2.2.x, 2.1.x, 2.0.x, 0.23.x, Cloudera CDH MRv2, and other Hadoop versions with YARN, also set SPARK_YARN=true:

# Apache Hadoop 2.0.5-alpha
$ SPARK_HADOOP_VERSION=2.0.5-alpha SPARK_YARN=true sbt/sbt assembly

# Cloudera CDH 4.2.0 with MapReduce v2
$ SPARK_HADOOP_VERSION=2.0.0-cdh4.2.0 SPARK_YARN=true sbt/sbt assembly

# Apache Hadoop 2.2.x and newer
$ SPARK_HADOOP_VERSION=2.2.0 SPARK_YARN=true sbt/sbt assembly

When developing a Spark application, specify the Hadoop version by adding the "hadoop-client" artifact to your project's dependencies. For example, if you're using Hadoop 1.2.1 and build your application using SBT, add this entry to libraryDependencies:

"org.apache.hadoop" % "hadoop-client" % "1.2.1"

If your project is built with Maven, add this to your POM file's <dependencies> section:

<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-client</artifactId>
  <version>1.2.1</version>
</dependency>

Configuration

Please refer to the Configuration guide in the online documentation for an overview of how to configure Spark.
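
As a quick orientation, many settings can also be supplied programmatically through SparkConf before the context is created. A brief sketch; the property values shown are only examples:

```scala
import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setAppName("ExampleApp")
  .set("spark.executor.memory", "2g") // heap size per executor
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
val sc = new SparkContext(conf)
```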

Contributing to Spark

Contributions via GitHub pull requests are gladly accepted from their original author. Along with any pull requests, please state that the contribution is your original work and that you license the work to the project under the project's open source license. Whether or not you state this explicitly, by submitting any copyrighted material via pull request, email, or other means you agree to license the material under the project's open source license and warrant that you have the legal authority to do so.