Apache Spark

Lightning-Fast Cluster Computing - http://spark.apache.org/

Online Documentation

You can find the latest Spark documentation, including a programming guide, on the project webpage at http://spark.apache.org/documentation.html. This README file only contains basic setup instructions.

Building

Spark requires Scala 2.10. The project is built using Simple Build Tool (SBT), which is packaged with it. If SBT is already installed on your system, the build uses that version; otherwise it attempts to download sbt automatically. To build Spark and its example programs, run:

./sbt/sbt assembly

Once you've built Spark, the easiest way to start using it is the shell:

./bin/spark-shell

Or, for the Python API, the Python shell (./bin/pyspark).
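As a quick sanity check, you can run a small job directly from the shell. This is just an illustrative session; the sc variable is the SparkContext that spark-shell creates for you:

scala> sc.parallelize(1 to 1000).filter(_ % 2 == 0).count()
res0: Long = 500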

Spark also comes with several sample programs in the examples directory. To run one of them, use ./bin/run-example <class> <params>. For example:

./bin/run-example org.apache.spark.examples.SparkLR local[2]

will run the Logistic Regression example locally on 2 CPUs.

Each of the example programs prints usage help if no params are given.

All of the Spark samples take a <master> parameter that is the cluster URL to connect to. This can be a mesos:// or spark:// URL, or "local" to run locally with one thread, or "local[N]" to run locally with N threads.
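In your own applications, the same master URL is passed to the SparkContext. A minimal sketch (the application name and input file are made up for illustration):

import org.apache.spark.SparkContext

// "local[4]" runs locally with 4 threads; substitute a spark:// or
// mesos:// URL to connect to a real cluster.
val sc = new SparkContext("local[4]", "MySparkApp")

// Count the lines of a text file as a trivial job.
println(sc.textFile("README.md").count())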

Running tests

Testing first requires building Spark. Once Spark is built, tests can be run using:

./sbt/sbt test
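If you only want to run the tests for a single module, you can scope the test task to that project, for example (the exact syntax depends on the sbt version, so treat this as a sketch):

./sbt/sbt core/test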

A Note About Hadoop Versions

Spark uses the Hadoop core library to talk to HDFS and other Hadoop-supported storage systems. Because the protocols have changed in different versions of Hadoop, you must build Spark against the same version that your cluster runs. You can change the version by setting the SPARK_HADOOP_VERSION environment variable when building Spark.

For Apache Hadoop versions 1.x, Cloudera CDH MRv1, and other Hadoop versions without YARN, use:

# Apache Hadoop 1.2.1
$ SPARK_HADOOP_VERSION=1.2.1 sbt/sbt assembly

# Cloudera CDH 4.2.0 with MapReduce v1
$ SPARK_HADOOP_VERSION=2.0.0-mr1-cdh4.2.0 sbt/sbt assembly

For Apache Hadoop 2.2.x, 2.1.x, 2.0.x, 0.23.x, Cloudera CDH MRv2, and other Hadoop versions with YARN, also set SPARK_YARN=true:

# Apache Hadoop 2.0.5-alpha
$ SPARK_HADOOP_VERSION=2.0.5-alpha SPARK_YARN=true sbt/sbt assembly

# Cloudera CDH 4.2.0 with MapReduce v2
$ SPARK_HADOOP_VERSION=2.0.0-cdh4.2.0 SPARK_YARN=true sbt/sbt assembly

# Apache Hadoop 2.2.x and newer
$ SPARK_HADOOP_VERSION=2.2.0 SPARK_YARN=true sbt/sbt assembly

When developing a Spark application, specify the Hadoop version by adding the "hadoop-client" artifact to your project's dependencies. For example, if you're using Hadoop 1.2.1 and build your application using SBT, add this entry to libraryDependencies:

"org.apache.hadoop" % "hadoop-client" % "1.2.1"

If your project is built with Maven, add this to your POM file's <dependencies> section:

<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-client</artifactId>
  <version>1.2.1</version>
</dependency>

Configuration

Please refer to the Configuration guide in the online documentation for an overview on how to configure Spark.

Contributing to Spark

Contributions via GitHub pull requests are gladly accepted from their original author. Along with any pull requests, please state that the contribution is your original work and that you license the work to the project under the project's open source license. Whether or not you state this explicitly, by submitting any copyrighted material via pull request, email, or other means you agree to license the material under the project's open source license and warrant that you have the legal authority to do so.