---
layout: global
title: Spark Overview
---
Apache Spark is a fast and general-purpose cluster computing system. It provides high-level APIs in Scala, Java, and Python that make parallel jobs easy to write, and an optimized engine that supports general computation graphs. It also supports a rich set of higher-level tools including Shark (Hive on Spark), MLlib for machine learning, GraphX for graph processing, and Spark Streaming.
# Downloading
Get Spark by visiting the downloads page of the Apache Spark site. This documentation is for Spark version {{site.SPARK_VERSION}}.
Spark runs on both Windows and UNIX-like systems (e.g. Linux, Mac OS). All you need to run it is to have `java` installed on your system `PATH`, or the `JAVA_HOME` environment variable pointing to a Java installation.
# Building
Spark uses Simple Build Tool, which is bundled with it. To compile the code, go into the top-level Spark directory and run

    sbt/sbt assembly
For its Scala API, Spark {{site.SPARK_VERSION}} depends on Scala {{site.SCALA_BINARY_VERSION}}. If you write applications in Scala, you will need to use a compatible Scala version (e.g. {{site.SCALA_BINARY_VERSION}}.X) -- newer major versions may not work. You can get the right version of Scala from scala-lang.org.
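If you build your own application with sbt, the Scala version constraint is easiest to satisfy in the build definition. Here is a minimal sketch of a `build.sbt` (the project name and version numbers are illustrative; substitute the Scala and Spark versions that match your release):

    // build.sbt -- illustrative sketch; align versions with your Spark release

    name := "my-spark-app"   // hypothetical project name

    scalaVersion := "2.10.4"   // must be compatible with the Scala version Spark was built for

    libraryDependencies += "org.apache.spark" %% "spark-core" % "1.0.0"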
# Running the Examples and Shell
Spark comes with several sample programs. Scala and Java examples are in the `examples` directory, and Python examples are in `python/examples`.
To run one of the Java or Scala sample programs, use `./bin/run-example <class> <params>` in the top-level Spark directory (the `bin/run-example` script sets up the appropriate paths and launches that program). For example, try `./bin/run-example org.apache.spark.examples.SparkPi local`.
To run a Python sample program, use `./bin/pyspark <sample-program> <params>`. For example, try `./bin/pyspark ./python/examples/pi.py local`.
Each example prints usage help when run with no parameters.
Note that all of the sample programs take a `<master>` parameter specifying the cluster URL to connect to. This can be a URL for a distributed cluster, or `local` to run locally with one thread, or `local[N]` to run locally with N threads. You should start by using `local` for testing.
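To make the role of the `<master>` parameter concrete, here is a minimal sketch of a standalone Scala driver (the object name is hypothetical) that passes `local[2]` as its master, running Spark in-process with two threads:

    import org.apache.spark.SparkContext

    // Hypothetical minimal driver: "local[2]" plays the role of the
    // <master> argument, running Spark locally with two worker threads.
    object LocalCount {
      def main(args: Array[String]) {
        val sc = new SparkContext("local[2]", "LocalCount")
        println(sc.parallelize(1 to 1000).count())  // prints 1000
        sc.stop()
      }
    }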
Finally, you can run Spark interactively through modified versions of the Scala shell (`./bin/spark-shell`) or Python interpreter (`./bin/pyspark`). These are a great way to learn the framework.
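For instance, a short, illustrative session in the Scala shell might look like this (assuming a text file such as Spark's own `README.md` in the working directory; the shell pre-creates `sc`, the SparkContext):

    scala> val lines = sc.textFile("README.md")
    scala> lines.filter(_.contains("Spark")).count()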
# Launching on a Cluster
The Spark cluster mode overview explains the key concepts in running on a cluster. Spark can run by itself or on top of several existing cluster managers. It currently provides several options for deployment:
- Amazon EC2: our EC2 scripts let you launch a cluster in about 5 minutes
- Standalone Deploy Mode: simplest way to deploy Spark on a private cluster
- Apache Mesos
- Hadoop YARN
# A Note About Hadoop Versions
Spark uses the Hadoop-client library to talk to HDFS and other Hadoop-supported
storage systems. Because the HDFS protocol has changed in different versions of
Hadoop, you must build Spark against the same version that your cluster uses.
By default, Spark links to Hadoop 1.0.4. You can change this by setting the
`SPARK_HADOOP_VERSION` variable when compiling:

    SPARK_HADOOP_VERSION=2.2.0 sbt/sbt assembly
In addition, if you wish to run Spark on YARN, set `SPARK_YARN` to `true`:

    SPARK_HADOOP_VERSION=2.0.5-alpha SPARK_YARN=true sbt/sbt assembly
Note that on Windows, you need to set the environment variables on separate lines, e.g., `set SPARK_HADOOP_VERSION=1.2.1`.
# Where to Go from Here
Programming guides:
- Quick Start: a quick introduction to the Spark API; start here!
- Spark Programming Guide: an overview of Spark concepts, and details on the Scala API
- Java Programming Guide: using Spark from Java
- Python Programming Guide: using Spark from Python
- Spark Streaming: Spark's API for processing data streams
- Spark SQL: Support for running relational queries on Spark
- MLlib (Machine Learning): Spark's built-in machine learning library
- Bagel (Pregel on Spark): simple graph processing model
- GraphX (Graphs on Spark): Spark's new API for graphs
API Docs:
Deployment guides:
- Cluster Overview: overview of concepts and components when running on a cluster
- Amazon EC2: scripts that let you launch a cluster on EC2 in about 5 minutes
- Standalone Deploy Mode: launch a standalone cluster quickly without a third-party cluster manager
- Mesos: deploy a private cluster using Apache Mesos
- YARN: deploy Spark on top of Hadoop NextGen (YARN)
Other documents:
- Configuration: customize Spark via its configuration system
- Tuning Guide: best practices to optimize performance and memory use
- Security: Spark security support
- Hardware Provisioning: recommendations for cluster hardware
- Job Scheduling: scheduling resources across and within Spark applications
- Building Spark with Maven: build Spark using the Maven system
- Contributing to Spark
External resources:
- Spark Homepage
- Shark: Apache Hive over Spark
- Mailing Lists: ask questions about Spark here
- AMP Camps: a series of training camps at UC Berkeley that featured talks and exercises about Spark, Shark, Mesos, and more. Videos, slides and exercises are available online for free.
- Code Examples: more are also available in the `examples` subfolder of Spark
- Paper Describing Spark
- Paper Describing Spark Streaming
# Community
To get help using Spark or keep up with Spark development, sign up for the user mailing list.
If you're in the San Francisco Bay Area, there's a regular Spark meetup every few weeks. Come by to meet the developers and other users.
Finally, if you'd like to contribute code to Spark, read how to contribute.