Apache Spark - A unified analytics engine for large-scale data processing
Michael Armbrust 41bbd23004 [SPARK-11654][SQL] add reduce to GroupedDataset
This PR adds a new method, `reduce`, to `GroupedDataset`, which allows operations similar to `reduceByKey` on a traditional `PairRDD`.

```scala
val ds = Seq("abc", "xyz", "hello").toDS()
ds.groupBy(_.length).reduce(_ + _).collect()  // not actually commutative :P

// res0: Array[(Int, String)] = Array((3,abcxyz), (5,hello))
```
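
For comparison, here is the analogous computation on a traditional `PairRDD` — a sketch assuming a spark-shell session, where the `SparkContext` is bound to `sc` (result order may vary):

```scala
sc.parallelize(Seq("abc", "xyz", "hello"))
  .map(s => (s.length, s))   // key each string by its length
  .reduceByKey(_ + _)        // concatenate values per key
  .collect()
// Array[(Int, String)] = Array((3,abcxyz), (5,hello))
```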

While implementing this method and its test cases, several more deficiencies were found in our encoder handling. Specifically, in order to support positional resolution, named resolution, and tuple composition, it is important to keep the unresolved encoder around and to use it when constructing new `Datasets` with the same object type but different output attributes. We now divide the encoder lifecycle into three phases (mirroring the lifecycle of standard expressions) and check invariants at various boundaries:

 - Unresolved Encoders: all user-facing encoders (those constructed by implicits, static methods, or tuple composition) are unresolved, meaning they have only `UnresolvedAttributes` for named fields and `BoundReferences` for fields accessed by ordinal.
 - Resolved Encoders: internal to a `[Grouped]Dataset`, the encoder is resolved, meaning all input has been resolved to a specific `AttributeReference`. Any encoders that are placed into a logical plan for use in object construction should be resolved.
 - Bound Encoders: constructed by physical plans, right before the actual row -> object conversion is performed.

It is left to future work to add explicit checks for resolution and provide good error messages when it fails.  We might also consider enforcing the above constraints in the type system (i.e. `fromRow` only exists on a `ResolvedEncoder`), but we should probably wait before spending too much time on this.
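
To make that type-system idea concrete, here is a hypothetical sketch — the types and names below are invented for illustration and are not Spark's actual encoder API — in which `fromRow` only exists once an encoder has been resolved and bound:

```scala
// Hypothetical sketch only: model the three phases as distinct types so that
// an unresolved or merely resolved encoder cannot read rows at all.
case class UnresolvedEncoder(fieldNames: Seq[String]) {
  // Named resolution: map each field name to a concrete attribute id.
  def resolve(attributeIds: Map[String, Long]): ResolvedEncoder =
    ResolvedEncoder(fieldNames.map(n => (n, attributeIds(n))))
}

case class ResolvedEncoder(attributes: Seq[(String, Long)]) {
  // Binding fixes physical ordinals, right before execution.
  def bind(ordinals: Map[Long, Int]): BoundEncoder =
    BoundEncoder(attributes.map { case (n, id) => (n, ordinals(id)) })
}

case class BoundEncoder(fields: Seq[(String, Int)]) {
  // Only a bound encoder can convert a row into field values.
  def fromRow(row: Seq[Any]): Seq[(String, Any)] =
    fields.map { case (n, i) => (n, row(i)) }
}
```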

Author: Michael Armbrust <michael@databricks.com>
Author: Wenchen Fan <wenchen@databricks.com>

Closes #9673 from marmbrus/pr/9628.
2015-11-12 17:20:30 -08:00
| Path | Last commit | Date |
|------|-------------|------|
| assembly | Update version to 1.6.0-SNAPSHOT. | 2015-09-15 00:54:20 -07:00 |
| bagel | [SPARK-10300] [BUILD] [TESTS] Add support for test tags in run-tests.py. | 2015-10-07 14:11:21 -07:00 |
| bin | [SPARK-2960][DEPLOY] Support executing Spark from symlinks (reopen) | 2015-11-04 10:49:34 +00:00 |
| build | [SPARK-11052] Spaces in the build dir causes failures in the build/mv… | 2015-10-13 22:11:08 +01:00 |
| conf | [SPARK-11242][SQL] In conf/spark-env.sh.template SPARK_DRIVER_MEMORY is documented incorrectly | 2015-10-22 13:56:18 -07:00 |
| core | [SPARK-11709] include creation site info in SparkContext.assertNotStopped error message | 2015-11-12 16:43:04 -08:00 |
| data/mllib | [MLLIB] [DOC] Seed fix in mllib naive bayes example | 2015-07-18 10:12:48 -07:00 |
| dev | [SPARK-7841][BUILD] Stop using retrieveManaged to retrieve dependencies in SBT | 2015-11-10 10:14:19 -08:00 |
| docker | [SPARK-11491] Update build to use Scala 2.10.5 | 2015-11-04 16:58:38 -08:00 |
| docker-integration-tests | [SPARK-9818] Re-enable Docker tests for JDBC data source | 2015-11-10 15:58:30 -08:00 |
| docs | [SPARK-11667] Update dynamic allocation docs to reflect supported cluster managers | 2015-11-12 15:48:42 -08:00 |
| ec2 | [SPARK-10532][EC2] Added --profile option to specify the name of profile | 2015-10-29 13:08:55 -07:00 |
| examples | [SPARK-11290][STREAMING] Basic implementation of trackStateByKey | 2015-11-10 23:16:18 -08:00 |
| external | [SPARK-11361][STREAMING] Show scopes of RDD operations inside DStream.foreachRDD and DStream.transform in DAG viz | 2015-11-10 16:54:06 -08:00 |
| extras | [SPARK-6152] Use shaded ASM5 to support closure cleaning of Java 8 compiled classes | 2015-11-11 11:16:39 -08:00 |
| graphx | Fixed error in scaladoc of convertToCanonicalEdges | 2015-11-12 12:14:00 -08:00 |
| launcher | [SPARK-11655][CORE] Fix deadlock in handling of launcher stop(). | 2015-11-12 14:29:16 -08:00 |
| licenses | [SPARK-10833] [BUILD] Inline, organize BSD/MIT licenses in LICENSE | 2015-09-28 22:56:43 -04:00 |
| mllib | [SPARK-11712][ML] Make spark.ml LDAModel be abstract | 2015-11-12 17:03:19 -08:00 |
| network | [SPARK-11252][NETWORK] ShuffleClient should release connection after fetching blocks had been completed for external shuffle | 2015-11-10 10:40:08 -08:00 |
| project | [BUILD][MINOR] Remove non-exist yarnStable module in Sbt project | 2015-11-12 17:23:24 +01:00 |
| python | [SPARK-11658] simplify documentation for PySpark combineByKey | 2015-11-12 15:50:47 -08:00 |
| R | [SPARK-11420] Updating Stddev support via Imperative Aggregate | 2015-11-12 13:47:34 -08:00 |
| repl | [SPARK-6152] Use shaded ASM5 to support closure cleaning of Java 8 compiled classes | 2015-11-11 11:16:39 -08:00 |
| sbin | [SPARK-11218][CORE] show help messages for start-slave and start-master | 2015-11-09 13:22:05 +01:00 |
| sbt | Adde LICENSE Header to build/mvn, build/sbt and sbt/sbt | 2014-12-29 10:48:53 -08:00 |
| sql | [SPARK-11654][SQL] add reduce to GroupedDataset | 2015-11-12 17:20:30 -08:00 |
| streaming | [SPARK-11290][STREAMING][TEST-MAVEN] Fix the test for maven build | 2015-11-12 14:52:03 -08:00 |
| tags | [SPARK-9818] Re-enable Docker tests for JDBC data source | 2015-11-10 15:58:30 -08:00 |
| tools | Update version to 1.6.0-SNAPSHOT. | 2015-09-15 00:54:20 -07:00 |
| unsafe | [SPARK-7542][SQL] Support off-heap index/sort buffer | 2015-11-05 19:02:18 -08:00 |
| yarn | [SPARK-11615] Drop @VisibleForTesting annotation | 2015-11-10 16:52:59 -08:00 |
| .gitattributes | [SPARK-3870] EOL character enforcement | 2014-10-31 12:39:52 -07:00 |
| .gitignore | [SPARK-8495] [SPARKR] Add a .lintr file to validate the SparkR files and the lint-r script | 2015-06-20 16:10:14 -07:00 |
| .rat-excludes | [SPARK-10718] [BUILD] Update License on conf files and corresponding excludes file update | 2015-09-22 11:03:21 +01:00 |
| CONTRIBUTING.md | [SPARK-6889] [DOCS] CONTRIBUTING.md updates to accompany contribution doc updates | 2015-04-21 22:34:31 -07:00 |
| LICENSE | [SPARK-11491] Update build to use Scala 2.10.5 | 2015-11-04 16:58:38 -08:00 |
| make-distribution.sh | [SPARK-11236] [TEST-MAVEN] [TEST-HADOOP1.0] [CORE] Update Tachyon dependency 0.7.1 -> 0.8.1 | 2015-11-02 17:02:31 -08:00 |
| NOTICE | [SPARK-10833] [BUILD] Inline, organize BSD/MIT licenses in LICENSE | 2015-09-28 22:56:43 -04:00 |
| pom.xml | [SPARK-6152] Use shaded ASM5 to support closure cleaning of Java 8 compiled classes | 2015-11-11 11:16:39 -08:00 |
| pylintrc | [SPARK-9116] [SQL] [PYSPARK] support Python only UDT in `__main__` | 2015-07-29 22:30:49 -07:00 |
| README.md | [SPARK-11305][DOCS] Remove Third-Party Hadoop Distributions Doc Page | 2015-11-01 12:25:49 +00:00 |
| scalastyle-config.xml | [SPARK-11615] Drop @VisibleForTesting annotation | 2015-11-10 16:52:59 -08:00 |
| tox.ini | [SPARK-7427] [PYSPARK] Make sharedParams match in Scala, Python | 2015-05-10 19:18:32 -07:00 |

# Apache Spark

Spark is a fast and general cluster computing system for Big Data. It provides high-level APIs in Scala, Java, Python, and R, and an optimized engine that supports general computation graphs for data analysis. It also supports a rich set of higher-level tools including Spark SQL for SQL and DataFrames, MLlib for machine learning, GraphX for graph processing, and Spark Streaming for stream processing.

http://spark.apache.org/

## Online Documentation

You can find the latest Spark documentation, including a programming guide, on the project web page and project wiki. This README file only contains basic setup instructions.

## Building Spark

Spark is built using Apache Maven. To build Spark and its example programs, run:

```
build/mvn -DskipTests clean package
```

(You do not need to do this if you downloaded a pre-built package.) More detailed documentation is available from the project site, at "Building Spark".

## Interactive Scala Shell

The easiest way to start using Spark is through the Scala shell:

```
./bin/spark-shell
```

Try the following command, which should return 1000:

```
scala> sc.parallelize(1 to 1000).count()
```
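
Transformations chain in the same session; for example, the following counts only the even numbers and should return 500:

```
scala> sc.parallelize(1 to 1000).filter(_ % 2 == 0).count()
res1: Long = 500
```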

## Interactive Python Shell

Alternatively, if you prefer Python, you can use the Python shell:

```
./bin/pyspark
```

And run the following command, which should also return 1000:

```
>>> sc.parallelize(range(1000)).count()
```

## Example Programs

Spark also comes with several sample programs in the `examples` directory. To run one of them, use `./bin/run-example <class> [params]`. For example:

```
./bin/run-example SparkPi
```

will run the Pi example locally.

You can set the `MASTER` environment variable when running examples to submit examples to a cluster. This can be a `mesos://` or `spark://` URL, "yarn" to run on YARN, "local" to run locally with one thread, or "local[N]" to run locally with N threads. You can also use an abbreviated class name if the class is in the `examples` package. For instance:

```
MASTER=spark://host:7077 ./bin/run-example SparkPi
```
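
Or, to run the same example locally with four threads (the count of 4 is just an illustration):

```
MASTER="local[4]" ./bin/run-example SparkPi
```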

Many of the example programs print usage help if no params are given.

## Running Tests

Testing first requires building Spark. Once Spark is built, tests can be run using:

```
./dev/run-tests
```

Please see the guidance on how to run tests for a module, or individual tests.
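
As a sketch of what that looks like with sbt (the module and suite names here are illustrative; the guidance above documents the supported options):

```
# All tests in the core module:
./build/sbt "core/test"
# A single suite, by (illustrative) fully qualified name:
./build/sbt "core/test-only org.apache.spark.rdd.RDDSuite"
```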

## A Note About Hadoop Versions

Spark uses the Hadoop core library to talk to HDFS and other Hadoop-supported storage systems. Because the protocols have changed in different versions of Hadoop, you must build Spark against the same version that your cluster runs.

Please refer to the build documentation at "Specifying the Hadoop Version" for detailed guidance on building for a particular distribution of Hadoop, including building for particular Hive and Hive Thriftserver distributions.
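
As a rough sketch (the profile and Hadoop version below are only examples; the build documentation lists the supported combinations), a Hadoop-version-specific build looks like:

```
# Build against a specific Hadoop version, with YARN support enabled:
build/mvn -Pyarn -Phadoop-2.4 -Dhadoop.version=2.4.0 -DskipTests clean package
```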

## Configuration

Please refer to the Configuration Guide in the online documentation for an overview of how to configure Spark.