Apache Spark

Lightning-Fast Cluster Computing - http://spark.apache.org/

Online Documentation

You can find the latest Spark documentation, including a programming guide, on the project webpage at http://spark.apache.org/documentation.html. This README file only contains basic setup instructions.

Building

Spark requires Scala 2.10. The project is built using Simple Build Tool (SBT). If SBT is already installed, the build will use the system version of sbt; otherwise it will attempt to download it automatically. To build Spark and its example programs, run:

./sbt/sbt assembly

Once you've built Spark, the easiest way to start using it is the shell:

./bin/spark-shell

Or, for the Python API, the Python shell (./bin/pyspark).
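For instance, a quick way to sanity-check the Scala shell is to run a small job against the pre-created SparkContext, available as sc. The snippet below is only an illustrative sketch:

// Inside ./bin/spark-shell, sc is the pre-created SparkContext.
val data = sc.parallelize(1 to 1000)    // distribute a local range
val evens = data.filter(_ % 2 == 0)     // transformation (evaluated lazily)
println(evens.count())                  // action: runs the job and prints 500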

Spark also comes with several sample programs in the examples directory. To run one of them, use ./bin/run-example <class> <params>. For example:

./bin/run-example org.apache.spark.examples.SparkLR local[2]

will run the Logistic Regression example locally on 2 CPUs.

Each of the example programs prints usage help if no params are given.

All of the Spark samples take a <master> parameter that is the cluster URL to connect to. This can be a mesos:// or spark:// URL, or "local" to run locally with one thread, or "local[N]" to run locally with N threads.
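A standalone program typically receives the master URL the same way. The sketch below is a hypothetical minimal application (the object name, app name, and input file are placeholders), shown only to illustrate how the &lt;master&gt; argument is used:

import org.apache.spark.SparkContext

// Hypothetical example: pass the master URL (e.g. "local[2]" or
// "spark://host:7077") as the first command-line argument.
object SimpleCount {
  def main(args: Array[String]) {
    val master = args(0)
    val sc = new SparkContext(master, "SimpleCount")
    val lines = sc.textFile("README.md")          // any file reachable by the cluster
    println("Lines in file: " + lines.count())
    sc.stop()
  }
}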

Running tests

Testing first requires building Spark, as described above. Once Spark is built, tests can be run using:

./sbt/sbt test

A Note About Hadoop Versions

Spark uses the Hadoop core library to talk to HDFS and other Hadoop-supported storage systems. Because the protocols have changed across Hadoop versions, you must build Spark against the same version that your cluster runs. You can change the version by setting the SPARK_HADOOP_VERSION environment variable when building Spark.

For Apache Hadoop versions 1.x, Cloudera CDH MRv1, and other Hadoop versions without YARN, use:

# Apache Hadoop 1.2.1
$ SPARK_HADOOP_VERSION=1.2.1 sbt/sbt assembly

# Cloudera CDH 4.2.0 with MapReduce v1
$ SPARK_HADOOP_VERSION=2.0.0-mr1-cdh4.2.0 sbt/sbt assembly

For Apache Hadoop 2.2.x, 2.1.x, 2.0.x, 0.23.x, Cloudera CDH MRv2, and other Hadoop versions with YARN, also set SPARK_YARN=true:

# Apache Hadoop 2.0.5-alpha
$ SPARK_HADOOP_VERSION=2.0.5-alpha SPARK_YARN=true sbt/sbt assembly

# Cloudera CDH 4.2.0 with MapReduce v2
$ SPARK_HADOOP_VERSION=2.0.0-cdh4.2.0 SPARK_YARN=true sbt/sbt assembly

# Apache Hadoop 2.2.X and newer
$ SPARK_HADOOP_VERSION=2.2.0 SPARK_YARN=true sbt/sbt assembly

When developing a Spark application, specify the Hadoop version by adding the "hadoop-client" artifact to your project's dependencies. For example, if you're using Hadoop 1.2.1 and build your application using SBT, add this entry to libraryDependencies:

"org.apache.hadoop" % "hadoop-client" % "1.2.1"

If your project is built with Maven, add this to your POM file's <dependencies> section:

<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-client</artifactId>
  <version>1.2.1</version>
</dependency>
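Putting this together for an SBT project, a minimal build.sbt might look like the sketch below. The project name, Scala version, and dependency versions are placeholders; use the Spark release and Hadoop version that match your cluster:

name := "my-spark-app"

version := "0.1"

scalaVersion := "2.10.3"

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "x.y.z",     // placeholder: your Spark release
  "org.apache.hadoop" % "hadoop-client" % "1.2.1"   // match your Hadoop cluster
)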

Configuration

Please refer to the Configuration guide in the online documentation for an overview of how to configure Spark.

Contributing to Spark

Contributions via GitHub pull requests are gladly accepted from their original author. Along with any pull requests, please state that the contribution is your original work and that you license the work to the project under the project's open source license. Whether or not you state this explicitly, by submitting any copyrighted material via pull request, email, or other means you agree to license the material under the project's open source license and warrant that you have the legal authority to do so.