Apache Spark

Lightning-Fast Cluster Computing - http://spark.apache.org/

Online Documentation

You can find the latest Spark documentation, including a programming guide, on the project webpage at http://spark.apache.org/documentation.html. This README file only contains basic setup instructions.

Building Spark

Spark is built on Scala 2.10. To build Spark and its example programs, run:

./sbt/sbt assembly

Interactive Scala Shell

The easiest way to start using Spark is through the Scala shell:

./bin/spark-shell

Try the following command, which should return 1000:

scala> sc.parallelize(1 to 1000).count()
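
For a slightly larger example, the following counts how many lines of this README mention Spark (a minimal sketch; it assumes the shell was started from the top-level Spark directory, where README.md lives):

scala> val readme = sc.textFile("README.md")                    // load the file as an RDD of lines
scala> readme.filter(line => line.contains("Spark")).count()    // count the matching lines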

Interactive Python Shell

Alternatively, if you prefer Python, you can use the Python shell:

./bin/pyspark

And run the following command, which should also return 1000:

>>> sc.parallelize(range(1000)).count()

Example Programs

Spark also comes with several sample programs in the examples directory. To run one of them, use ./bin/run-example <class> <params>. For example:

./bin/run-example org.apache.spark.examples.SparkLR local[2]

will run the Logistic Regression example locally on 2 CPUs.

Each of the example programs prints usage help if no params are given.

All of the Spark samples take a <master> parameter that is the cluster URL to connect to. This can be a mesos:// or spark:// URL, "local" to run locally with one thread, or "local[N]" to run locally with N threads.
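
In a standalone application, the same master URL is passed to the SparkContext rather than on the command line. Here is a minimal sketch (the object name MyApp is illustrative, not part of Spark):

import org.apache.spark.{SparkConf, SparkContext}

object MyApp {
  def main(args: Array[String]) {
    // "local[2]" could equally be a spark:// or mesos:// cluster URL
    val conf = new SparkConf().setMaster("local[2]").setAppName("MyApp")
    val sc = new SparkContext(conf)
    println(sc.parallelize(1 to 1000).count())  // should print 1000
    sc.stop()
  }
}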

Running Tests

Testing first requires building Spark. Once Spark is built, tests can be run using:

./sbt/sbt test
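
To run a single suite instead of the full set, you can pass sbt's test-only task a suite name, for example (the suite shown here is illustrative):

./sbt/sbt "test-only org.apache.spark.rdd.RDDSuite"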

A Note About Hadoop Versions

Spark uses the Hadoop core library to talk to HDFS and other Hadoop-supported storage systems. Because the protocols have changed in different versions of Hadoop, you must build Spark against the same version that your cluster runs. You can change the version by setting the SPARK_HADOOP_VERSION environment variable when building Spark.

For Apache Hadoop versions 1.x, Cloudera CDH MRv1, and other Hadoop versions without YARN, use:

# Apache Hadoop 1.2.1
$ SPARK_HADOOP_VERSION=1.2.1 sbt/sbt assembly

# Cloudera CDH 4.2.0 with MapReduce v1
$ SPARK_HADOOP_VERSION=2.0.0-mr1-cdh4.2.0 sbt/sbt assembly

For Apache Hadoop 2.2.x, 2.1.x, 2.0.x, 0.23.x, Cloudera CDH MRv2, and other Hadoop versions with YARN, also set SPARK_YARN=true:

# Apache Hadoop 2.0.5-alpha
$ SPARK_HADOOP_VERSION=2.0.5-alpha SPARK_YARN=true sbt/sbt assembly

# Cloudera CDH 4.2.0 with MapReduce v2
$ SPARK_HADOOP_VERSION=2.0.0-cdh4.2.0 SPARK_YARN=true sbt/sbt assembly

# Apache Hadoop 2.2.x and newer
$ SPARK_HADOOP_VERSION=2.2.0 SPARK_YARN=true sbt/sbt assembly

When developing a Spark application, specify the Hadoop version by adding the "hadoop-client" artifact to your project's dependencies. For example, if you're using Hadoop 1.2.1 and build your application using SBT, add this entry to libraryDependencies:

"org.apache.hadoop" % "hadoop-client" % "1.2.1"

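Putting it together, a minimal build.sbt for such an application might look like the following sketch (the project name and the Spark version shown are illustrative; use the release you are targeting):

name := "my-spark-app"

scalaVersion := "2.10.4"

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "0.9.1",      // illustrative Spark release
  "org.apache.hadoop" % "hadoop-client" % "1.2.1"    // match your cluster's Hadoop version
)
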
If your project is built with Maven, add this to your POM file's <dependencies> section:

<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-client</artifactId>
  <version>1.2.1</version>
</dependency>

Configuration

Please refer to the Configuration guide in the online documentation for an overview on how to configure Spark.

Contributing to Spark

Contributions via GitHub pull requests are gladly accepted from their original author. Along with any pull requests, please state that the contribution is your original work and that you license the work to the project under the project's open source license. Whether or not you state this explicitly, by submitting any copyrighted material via pull request, email, or other means you agree to license the material under the project's open source license and warrant that you have the legal authority to do so.