---
layout: global
title: Spark Overview
---

Apache Spark is a fast and general-purpose cluster computing system. It provides high-level APIs in Scala, Java, and Python that make parallel jobs easy to write, and an optimized engine that supports general computation graphs. It also supports a rich set of higher-level tools including Shark (Hive on Spark), MLlib for machine learning, GraphX for graph processing, and Spark Streaming for stream processing.

# Downloading

Get Spark by visiting the downloads page of the Apache Spark site. This documentation is for Spark version {{site.SPARK_VERSION}}.

Spark runs on both Windows and UNIX-like systems (e.g. Linux, Mac OS). All you need to run it is to have Java installed on your system PATH, or the JAVA_HOME environment variable pointing to a Java installation.
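
For example, on a UNIX-like system you can point JAVA_HOME at an existing Java installation (the path below is purely illustrative):

    # Illustrative path; use the location of your own Java installation
    export JAVA_HOME=/usr/lib/jvm/java-7-openjdk
    $JAVA_HOME/bin/java -version    # verify that the installation is found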

# Building

Spark uses the Simple Build Tool (sbt), which is bundled with it. To compile the code, go into the top-level Spark directory and run:

    sbt/sbt assembly

For its Scala API, Spark {{site.SPARK_VERSION}} depends on Scala {{site.SCALA_BINARY_VERSION}}. If you write applications in Scala, you will need to use a compatible Scala version (e.g. {{site.SCALA_BINARY_VERSION}}.X) -- newer major versions may not work. You can get the right version of Scala from scala-lang.org.

# Running the Examples and Shell

Spark comes with several sample programs. Scala and Java examples are in the examples directory, and Python examples are in python/examples. To run one of the Java or Scala sample programs, use ./bin/run-example <class> <params> in the top-level Spark directory (the bin/run-example script sets up the appropriate paths and launches that program). For example, try ./bin/run-example org.apache.spark.examples.SparkPi local. To run a Python sample program, use ./bin/pyspark <sample-program> <params>. For example, try ./bin/pyspark ./python/examples/pi.py local.

Each example prints usage help when run with no parameters.
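
For instance, running the SparkPi example with no parameters prints its usage help:

    # Omitting the parameters prints the usage message for SparkPi
    ./bin/run-example org.apache.spark.examples.SparkPi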

Note that all of the sample programs take a <master> parameter specifying the cluster URL to connect to. This can be a URL for a distributed cluster, or local to run locally with one thread, or local[N] to run locally with N threads. You should start by using local for testing.
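
For example, to run SparkPi locally with two worker threads instead of one:

    # local[2] runs the example locally using 2 threads
    ./bin/run-example org.apache.spark.examples.SparkPi local[2]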

Finally, you can run Spark interactively through modified versions of the Scala shell (./bin/spark-shell) or Python interpreter (./bin/pyspark). These are a great way to learn the framework.

# Launching on a Cluster

The Spark cluster mode overview explains the key concepts in running on a cluster. Spark can run by itself or on top of several existing cluster managers. It currently provides several options for deployment:

* Amazon EC2
* Standalone Deploy Mode
* Apache Mesos
* Hadoop YARN

# A Note About Hadoop Versions

Spark uses the hadoop-client library to talk to HDFS and other Hadoop-supported storage systems. Because the HDFS protocol has changed across Hadoop versions, you must build Spark against the same version that your cluster uses. By default, Spark links to Hadoop 1.0.4. You can change this by setting the SPARK_HADOOP_VERSION variable when compiling:

    SPARK_HADOOP_VERSION=2.2.0 sbt/sbt assembly

In addition, if you wish to run Spark on YARN, set SPARK_YARN to true:

    SPARK_HADOOP_VERSION=2.0.5-alpha SPARK_YARN=true sbt/sbt assembly

Note that on Windows, you need to set the environment variables on separate lines, e.g., set SPARK_HADOOP_VERSION=1.2.1.
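
For example, on Windows (the versions shown are illustrative):

    REM Set each variable on its own line, then run the sbt assembly command as above
    set SPARK_HADOOP_VERSION=2.2.0
    set SPARK_YARN=true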

# Where to Go from Here

Programming guides:

API Docs:

Deployment guides:

* Cluster Overview: overview of concepts and components when running on a cluster
* Amazon EC2: scripts that let you launch a cluster on EC2 in about 5 minutes
* Standalone Deploy Mode: launch a standalone cluster quickly without a third-party cluster manager
* Mesos: deploy a private cluster using Apache Mesos
* YARN: deploy Spark on top of Hadoop NextGen (YARN)

Other documents:

External resources:

# Community

To get help using Spark or keep up with Spark development, sign up for the user mailing list.

If you're in the San Francisco Bay Area, there's a regular Spark meetup every few weeks. Come by to meet the developers and other users.

Finally, if you'd like to contribute code to Spark, read how to contribute.