
Spark

Lightning-Fast Cluster Computing - http://www.spark-project.org/

Online Documentation

You can find the latest Spark documentation, including a programming guide, on the project webpage at http://spark-project.org/documentation.html. This README file only contains basic setup instructions.

Building

Spark requires Scala 2.9.2. The project is built using Simple Build Tool (SBT), which is packaged with it. To build Spark and its example programs, run:

sbt/sbt package

To run Spark, you will need to have Scala's bin directory in your PATH, or you will need to set the SCALA_HOME environment variable to point to where you've installed Scala. Scala must be accessible through one of these methods on your cluster's worker nodes as well as its master.
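
For example, on Linux you might add something like the following to your shell profile (the install path below is a hypothetical example; point it at wherever you installed Scala 2.9.2):

export SCALA_HOME=/usr/local/scala-2.9.2
export PATH=$SCALA_HOME/bin:$PATH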

To run one of the examples, use ./run <class> <params>. For example:

./run spark.examples.SparkLR local[2]

will run the Logistic Regression example locally on 2 CPUs.

Each of the example programs prints usage help if no params are given.

All of the Spark samples take a <host> parameter that is the cluster URL to connect to. This can be a mesos:// or spark:// URL, or "local" to run locally with one thread, or "local[N]" to run locally with N threads.
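
For instance, the SparkLR example above can be pointed at different clusters just by changing this parameter (the master hostnames and ports below are placeholders; substitute your own):

./run spark.examples.SparkLR local
./run spark.examples.SparkLR local[4]
./run spark.examples.SparkLR spark://master.example.com:7077
./run spark.examples.SparkLR mesos://master.example.com:5050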

A Note About Hadoop Versions

Spark uses the Hadoop core library to talk to HDFS and other Hadoop-supported storage systems. Because the HDFS API has changed in different versions of Hadoop, you must build Spark against the same version that your cluster runs. You can change the version by setting the HADOOP_VERSION variable at the top of project/SparkBuild.scala, then rebuilding Spark.
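
As a sketch, if your cluster runs Hadoop 1.0.4, the change and rebuild might look like this (the version string is only an example, and the exact declaration in your copy of project/SparkBuild.scala may differ slightly):

# In project/SparkBuild.scala, near the top, set e.g.:
#   val HADOOP_VERSION = "1.0.4"
# Then rebuild Spark:
sbt/sbt clean package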

Configuration

Please refer to the "Configuration" guide in the online documentation for a full overview on how to configure Spark. At a minimum, you will need to create a conf/spark-env.sh script (copy conf/spark-env.sh.template) and set the following two variables (see the sketch after this list):

  • SCALA_HOME: Location where Scala is installed.

  • MESOS_NATIVE_LIBRARY: Your Mesos library (only needed if you want to run on Mesos). For example, this might be /usr/local/lib/libmesos.so on Linux.
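
As a minimal sketch, assuming Scala is installed under /usr/local and Mesos was built on Linux (both paths below are hypothetical):

cp conf/spark-env.sh.template conf/spark-env.sh
# Then edit conf/spark-env.sh to add, for example:
#   export SCALA_HOME=/usr/local/scala-2.9.2
#   export MESOS_NATIVE_LIBRARY=/usr/local/lib/libmesos.so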

Contributing to Spark

Contributions via GitHub pull requests are gladly accepted from their original author. Along with any pull requests, please state that the contribution is your original work and that you license the work to the project under the project's open source license. Whether or not you state this explicitly, by submitting any copyrighted material via pull request, email, or other means you agree to license the material under the project's open source license and warrant that you have the legal authority to do so.