Apache Spark - A unified analytics engine for large-scale data processing
Matei Zaharia 11891e68c3 Merge pull request #327 from lucarosellini/master
Added ‘-i’ command line option to Spark REPL

We had to create new implementations of both scala.tools.nsc.CompilerCommand and scala.tools.nsc.Settings, because using scala.tools.nsc.GenericRunnerSettings would bring in other options (-howtorun, -save and -execute) that don't make sense in Spark.
Any new Spark-specific command-line option can now be added to the org.apache.spark.repl.SparkRunnerSettings class.

Since loading a script from the command line should behave the same as loading it with the ":load" command inside the shell, the script must be loaded once the SparkContext is available. That is why we had to move the call to 'loadfiles(settings)' _after_ the call to postInitialization(). This still doesn't work if 'isAsync = true'.
2014-01-08 00:32:18 -05:00
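
For example, the new option can be used to preload a script when the shell starts (a usage sketch based on the commit description; the file name is hypothetical):

./bin/spark-shell -i my-init.scala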
| Path | Last commit | Date |
| --- | --- | --- |
| assembly | Add log4j exclusion rule to maven. | 2014-01-07 12:56:24 -08:00 |
| bagel | Use scala.binary.version in POMs | 2013-12-15 12:39:58 -08:00 |
| bin | CR feedback (sbt -> sbt/sbt and correct JAR path in script) :) | 2014-01-05 23:29:26 -08:00 |
| conf | add the comments about SPARK_WORKER_DIR | 2014-01-07 12:53:04 -05:00 |
| core | Merge pull request #350 from mateiz/standalone-limit | 2014-01-08 00:30:03 -05:00 |
| data | moved user scripts to bin folder | 2013-09-23 12:46:48 +08:00 |
| docker | A little revise for the document | 2013-10-29 00:28:56 +08:00 |
| docs | Address review comments | 2014-01-07 19:30:23 -05:00 |
| ec2 | a few left over document change | 2014-01-02 21:48:44 +05:30 |
| examples | Add log4j exclusion rule to maven. | 2014-01-07 12:56:24 -08:00 |
| mllib | Merge branch 'master' into MatrixFactorizationModel-fix | 2014-01-07 15:22:42 -08:00 |
| project | Merge pull request #331 from holdenk/master | 2014-01-07 00:18:20 -08:00 |
| python | Merge branch 'master' into MatrixFactorizationModel-fix | 2014-01-07 15:22:42 -08:00 |
| repl | Merge pull request #327 from lucarosellini/master | 2014-01-08 00:32:18 -05:00 |
| repl-bin | Merge branch 'scripts-reorg' of github.com:shane-huang/incubator-spark into spark-915-segregate-scripts | 2014-01-02 17:55:21 +05:30 |
| sbin | Update stop-slaves.sh | 2014-01-07 11:11:59 +08:00 |
| sbt | Add ASF header to the new sbt script. | 2014-01-07 21:07:27 -08:00 |
| streaming | Removing SPARK_EXAMPLES_JAR in the code | 2014-01-05 11:49:42 -08:00 |
| tools | Use scala.binary.version in POMs | 2013-12-15 12:39:58 -08:00 |
| yarn | merge upstream/master | 2014-01-03 16:06:34 +08:00 |
| .gitignore | And update docs to match | 2014-01-04 21:45:22 -08:00 |
| LICENSE | Updated LICENSE with third-party licenses | 2013-09-02 16:43:06 -07:00 |
| make-distribution.sh | Finish documentation changes | 2014-01-05 22:12:47 -08:00 |
| NOTICE | Add Apache license headers and LICENSE and NOTICE files | 2013-07-16 17:21:33 -07:00 |
| pom.xml | Merge pull request #338 from ScrapCodes/ning-upgrade | 2014-01-06 11:40:32 -08:00 |
| README.md | Code review feedback | 2014-01-05 22:05:30 -08:00 |

Apache Spark

Lightning-Fast Cluster Computing - http://spark.incubator.apache.org/

Online Documentation

You can find the latest Spark documentation, including a programming guide, on the project webpage at http://spark.incubator.apache.org/documentation.html. This README file only contains basic setup instructions.

Building

Spark requires Scala 2.10. The project is built using Simple Build Tool (SBT). If sbt is already installed on your system, the build will use that version; otherwise the sbt/sbt script will attempt to download it automatically. To build Spark and its example programs, run:

./sbt/sbt assembly

Once you've built Spark, the easiest way to start using it is the shell:

./bin/spark-shell

Or, for the Python API, the Python shell (./bin/pyspark).
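
Inside the shell, a SparkContext is already created for you as the variable sc. As a quick smoke test (a minimal sketch; the file path is just an example), you can count the lines of a local text file:

val lines = sc.textFile("README.md")        // load a local text file as an RDD
lines.count()                               // total number of lines
lines.filter(_.contains("Spark")).count()   // lines that mention "Spark"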

Spark also comes with several sample programs in the examples directory. To run one of them, use ./bin/run-example <class> <params>. For example:

./bin/run-example org.apache.spark.examples.SparkLR local[2]

will run the Logistic Regression example locally on 2 CPUs.

Each of the example programs prints usage help if no params are given.

All of the Spark samples take a <master> parameter that is the cluster URL to connect to. This can be a mesos:// or spark:// URL, or "local" to run locally with one thread, or "local[N]" to run locally with N threads.
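
The same master URL is what your own applications pass to SparkContext. A minimal sketch (the object name and the data are illustrative):

import org.apache.spark.SparkContext

object MyApp {
  def main(args: Array[String]) {
    // "local[2]" could instead be a spark://host:port or mesos://host:port URL
    val sc = new SparkContext("local[2]", "MyApp")
    val data = sc.parallelize(1 to 1000)    // distribute a local collection
    println("Sum: " + data.reduce(_ + _))   // aggregate across partitions
    sc.stop()
  }
}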

Running tests

Testing first requires building Spark, as described above. Once Spark is built, tests can be run using:

./sbt/sbt test
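
To run a single test suite instead of the full set, pass sbt's test-only task a suite name (the suite shown is just an example):

./sbt/sbt "test-only org.apache.spark.rdd.RDDSuite"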

A Note About Hadoop Versions

Spark uses the Hadoop core library to talk to HDFS and other Hadoop-supported storage systems. Because the protocols have changed across Hadoop versions, you must build Spark against the same version that your cluster runs. You can change the version by setting the SPARK_HADOOP_VERSION environment variable when building Spark.

For Apache Hadoop versions 1.x, Cloudera CDH MRv1, and other Hadoop versions without YARN, use:

# Apache Hadoop 1.2.1
$ SPARK_HADOOP_VERSION=1.2.1 sbt/sbt assembly

# Cloudera CDH 4.2.0 with MapReduce v1
$ SPARK_HADOOP_VERSION=2.0.0-mr1-cdh4.2.0 sbt/sbt assembly

For Apache Hadoop 2.2.x, 2.1.x, 2.0.x, 0.23.x, Cloudera CDH MRv2, and other Hadoop versions with YARN, also set SPARK_YARN=true:

# Apache Hadoop 2.0.5-alpha
$ SPARK_HADOOP_VERSION=2.0.5-alpha SPARK_YARN=true sbt/sbt assembly

# Cloudera CDH 4.2.0 with MapReduce v2
$ SPARK_HADOOP_VERSION=2.0.0-cdh4.2.0 SPARK_YARN=true sbt/sbt assembly

# Apache Hadoop 2.2.x and newer
$ SPARK_HADOOP_VERSION=2.2.0 SPARK_YARN=true sbt/sbt assembly

When developing a Spark application, specify the Hadoop version by adding the "hadoop-client" artifact to your project's dependencies. For example, if you're using Hadoop 1.2.1 and build your application using SBT, add this entry to libraryDependencies:

"org.apache.hadoop" % "hadoop-client" % "1.2.1"

If your project is built with Maven, add this to your POM file's <dependencies> section:

<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-client</artifactId>
  <version>1.2.1</version>
</dependency>

Configuration

Please refer to the Configuration guide in the online documentation for an overview of how to configure Spark.

Apache Incubator Notice

Apache Spark is an effort undergoing incubation at The Apache Software Foundation (ASF), sponsored by the Apache Incubator. Incubation is required of all newly accepted projects until a further review indicates that the infrastructure, communications, and decision making process have stabilized in a manner consistent with other successful ASF projects. While incubation status is not necessarily a reflection of the completeness or stability of the code, it does indicate that the project has yet to be fully endorsed by the ASF.

Contributing to Spark

Contributions via GitHub pull requests are gladly accepted from their original author. Along with any pull requests, please state that the contribution is your original work and that you license the work to the project under the project's open source license. Whether or not you state this explicitly, by submitting any copyrighted material via pull request, email, or other means you agree to license the material under the project's open source license and warrant that you have the legal authority to do so.