Apache Spark - A unified analytics engine for large-scale data processing
Xiangrui Meng 20424dad30 [SPARK-2174][MLLIB] treeReduce and treeAggregate
In `reduce` and `aggregate`, the driver node spends time linear in the number of partitions. This becomes a bottleneck when there are many partitions and the data from each partition is large.

SPARK-1485 (#506) tracks the progress of implementing AllReduce on Spark. I did several implementations, including butterfly, reduce + broadcast, and treeReduce + broadcast. treeReduce + BT broadcast seems to be the right way to go for Spark. Using a binary tree may introduce some communication overhead, because the driver still needs to coordinate the data shuffling. In my experiments, n -> sqrt(n) -> 1 gives the best performance in general, which is why I set `depth = 2` in the MLlib algorithms. But it certainly needs more testing.
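
For intuition, here is a minimal sketch of the idea, built only on public RDD operations. It is illustrative, not the exact MLlib implementation, and `treeAggregateSketch` is a made-up name for this example. Each partition is first folded down to one partial result, then the number of partitions is shrunk level by level so the driver only merges the last few results instead of all of them:

~~~
import scala.reflect.ClassTag

import org.apache.spark.HashPartitioner
import org.apache.spark.SparkContext._
import org.apache.spark.rdd.RDD

// Sketch only: shrink the number of partial results by a constant factor
// per level, so the driver merges ~scale results rather than n.
def treeAggregateSketch[T, U: ClassTag](rdd: RDD[T], zero: U)(
    seqOp: (U, T) => U, combOp: (U, U) => U, depth: Int = 2): U = {
  require(depth >= 1, s"depth must be at least 1 but got $depth")
  // One pass over the data: fold each partition down to a single U.
  var partials: RDD[U] =
    rdd.mapPartitions(it => Iterator(it.foldLeft(zero)(seqOp)))
  var numPartitions = partials.partitions.length
  // Shrink by roughly numPartitions^(1/depth) per level, which gives
  // n -> sqrt(n) -> 1 when depth = 2.
  val scale =
    math.max(math.ceil(math.pow(numPartitions, 1.0 / depth)).toInt, 2)
  while (numPartitions > scale) {
    numPartitions /= scale
    partials = partials
      .mapPartitionsWithIndex { (i, iter) =>
        iter.map(u => (i % numPartitions, u))
      }
      .reduceByKey(new HashPartitioner(numPartitions), combOp)
      .values
  }
  // The driver merges only the remaining partial results.
  partials.reduce(combOp)
}
~~~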

I left `treeReduce` and `treeAggregate` public for easy testing. Some numbers from a test on a 32-node m3.2xlarge cluster.

code:

~~~
import breeze.linalg._
import org.apache.log4j._
// After this PR, treeReduce lives in MLlib's RDDFunctions; this implicit
// conversion makes it available on ordinary RDDs.
import org.apache.spark.mllib.rdd.RDDFunctions._

Logger.getRootLogger.setLevel(Level.OFF)

for (n <- Seq(1, 10, 100, 1000, 10000, 100000, 1000000)) {
  val vv = sc.parallelize(0 until 1024, 1024).map(i => DenseVector.zeros[Double](n))
  var start = System.nanoTime(); vv.treeReduce(_ + _, 2); println((System.nanoTime() - start) / 1e9)
  start = System.nanoTime(); vv.reduce(_ + _); println((System.nanoTime() - start) / 1e9)
}
~~~

out:

| n | `treeReduce(_ + _, 2)` time (s) | `reduce` time (s) |
|---|---------------------|-----------|
| 10 | 0.215538731 | 0.204206899 |
| 100 | 0.278405907 | 0.205732582 |
| 1000 | 0.208972182 | 0.214298272 |
| 10000 | 0.194792071 | 0.349353687 |
| 100000 | 0.347683285 | 6.086671892 |
| 1000000 | 2.589350682 | 66.572906702 |
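
`treeAggregate` follows the same pattern but takes an explicit zero value and separate per-partition and merge operators. A hedged usage sketch, reusing `vv` and `n` from the snippet above (the `depth` parameter defaults to 2 after this PR):

~~~
import breeze.linalg._
import org.apache.spark.mllib.rdd.RDDFunctions._

// Sum the vectors: seqOp and combOp are both vector addition here, and
// the default depth = 2 means n partitions -> sqrt(n) partials -> driver.
val total = vv.treeAggregate(DenseVector.zeros[Double](n))(_ + _, _ + _)
~~~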

CC: @pwendell

This is clearly more scalable than the default implementation. My question is whether we should use this implementation in `reduce` and `aggregate` or keep them as separate methods. The concern is that users may use `reduce` and `aggregate` as a way to collect data to the driver, in which case having multiple stages doesn't reduce the data size. However, `collect` is more appropriate in that case.

Author: Xiangrui Meng <meng@databricks.com>

Closes #1110 from mengxr/tree and squashes the following commits:

c6cd267 [Xiangrui Meng] make depth default to 2
b04b96a [Xiangrui Meng] address comments
9bcc5d3 [Xiangrui Meng] add depth for readability
7495681 [Xiangrui Meng] fix compile error
142a857 [Xiangrui Meng] merge master
d58a087 [Xiangrui Meng] move treeReduce and treeAggregate to mllib
8a2a59c [Xiangrui Meng] Merge branch 'master' into tree
be6a88a [Xiangrui Meng] use treeAggregate in mllib
0f94490 [Xiangrui Meng] add docs
eb71c33 [Xiangrui Meng] add treeReduce
fe42a5e [Xiangrui Meng] add treeAggregate
2014-07-29 01:16:41 -07:00

Apache Spark

Spark is a fast and general cluster computing system for Big Data. It provides high-level APIs in Scala, Java, and Python, and an optimized engine that supports general computation graphs for data analysis. It also supports a rich set of higher-level tools including Spark SQL for SQL and structured data processing, MLlib for machine learning, GraphX for graph processing, and Spark Streaming for stream processing.

http://spark.apache.org/

Online Documentation

You can find the latest Spark documentation, including a programming guide, on the project webpage at http://spark.apache.org/documentation.html. This README file only contains basic setup instructions.

Building Spark

Spark is built on Scala 2.10. To build Spark and its example programs, run:

./sbt/sbt assembly

(You do not need to do this if you downloaded a pre-built package.)

Interactive Scala Shell

The easiest way to start using Spark is through the Scala shell:

./bin/spark-shell

Try the following command, which should return 1000:

scala> sc.parallelize(1 to 1000).count()

Interactive Python Shell

Alternatively, if you prefer Python, you can use the Python shell:

./bin/pyspark

And run the following command, which should also return 1000:

>>> sc.parallelize(range(1000)).count()

Example Programs

Spark also comes with several sample programs in the examples directory. To run one of them, use ./bin/run-example <class> [params]. For example:

./bin/run-example SparkPi

will run the Pi example locally.

You can set the MASTER environment variable when running examples to submit examples to a cluster. This can be a mesos:// or spark:// URL, "yarn-cluster" or "yarn-client" to run on YARN, "local" to run locally with one thread, or "local[N]" to run locally with N threads. You can also use an abbreviated class name if the class is in the examples package. For instance:

MASTER=spark://host:7077 ./bin/run-example SparkPi

Many of the example programs print usage help if no params are given.

Running Tests

Testing first requires building Spark. Once Spark is built, tests can be run using:

./sbt/sbt test

A Note About Hadoop Versions

Spark uses the Hadoop core library to talk to HDFS and other Hadoop-supported storage systems. Because the protocols have changed in different versions of Hadoop, you must build Spark against the same version that your cluster runs. You can change the version by setting -Dhadoop.version when building Spark.

For Apache Hadoop versions 1.x, Cloudera CDH MRv1, and other Hadoop versions without YARN, use:

# Apache Hadoop 1.2.1
$ sbt/sbt -Dhadoop.version=1.2.1 assembly

# Cloudera CDH 4.2.0 with MapReduce v1
$ sbt/sbt -Dhadoop.version=2.0.0-mr1-cdh4.2.0 assembly

For Apache Hadoop 2.2.x, 2.1.x, 2.0.x, 0.23.x, Cloudera CDH MRv2, and other Hadoop versions with YARN, also set -Pyarn:

# Apache Hadoop 2.0.5-alpha
$ sbt/sbt -Dhadoop.version=2.0.5-alpha -Pyarn assembly

# Cloudera CDH 4.2.0 with MapReduce v2
$ sbt/sbt -Dhadoop.version=2.0.0-cdh4.2.0 -Pyarn assembly

# Apache Hadoop 2.2.x and newer
$ sbt/sbt -Dhadoop.version=2.2.0 -Pyarn assembly

When developing a Spark application, specify the Hadoop version by adding the "hadoop-client" artifact to your project's dependencies. For example, if you're using Hadoop 1.2.1 and build your application using SBT, add this entry to libraryDependencies:

"org.apache.hadoop" % "hadoop-client" % "1.2.1"

If your project is built with Maven, add this to your POM file's <dependencies> section:

<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-client</artifactId>
  <version>1.2.1</version>
</dependency>

Configuration

Please refer to the Configuration guide in the online documentation for an overview on how to configure Spark.

Contributing to Spark

Contributions via GitHub pull requests are gladly accepted from their original author. Along with any pull requests, please state that the contribution is your original work and that you license the work to the project under the project's open source license. Whether or not you state this explicitly, by submitting any copyrighted material via pull request, email, or other means you agree to license the material under the project's open source license and warrant that you have the legal authority to do so.