
---
layout: global
title: Spark Overview
---

Apache Spark is a fast and general-purpose cluster computing system. It provides high-level APIs in Scala, Java, and Python that make parallel jobs easy to write, and an optimized engine that supports general computation graphs. It also supports a rich set of higher-level tools including Shark (Hive on Spark), MLlib for machine learning, GraphX for graph processing, and Spark Streaming.

# Downloading

Get Spark by visiting the downloads page of the Apache Spark site. This documentation is for Spark version {{site.SPARK_VERSION}}. The downloads page contains Spark packages for many popular HDFS versions. If you'd like to build Spark from scratch, visit the building with Maven page.

Spark runs on both Windows and UNIX-like systems (e.g. Linux, Mac OS). All you need to run it is to have `java` installed on your system `PATH`, or the `JAVA_HOME` environment variable pointing to a Java installation.

For its Scala API, Spark {{site.SPARK_VERSION}} depends on Scala {{site.SCALA_BINARY_VERSION}}. If you write applications in Scala, you will need to use a compatible Scala version (e.g. {{site.SCALA_BINARY_VERSION}}.X) -- newer major versions may not work. You can get the right version of Scala from scala-lang.org.
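In a Scala application, this version pin typically shows up in the build definition. A minimal `build.sbt` sketch (the version numbers below are illustrative placeholders, not taken from this page; substitute the versions shown above):

```scala
// Hypothetical build.sbt sketch: pin Scala to the binary version this Spark
// release was built against. "2.10.4" and "1.0.0" are placeholders.
scalaVersion := "2.10.4"

// The %% operator appends the Scala binary version to the artifact name,
// which is why matching scalaVersion matters.
libraryDependencies += "org.apache.spark" %% "spark-core" % "1.0.0"
```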

# Running the Examples and Shell

Spark comes with several sample programs. Scala, Java and Python examples are in the `examples/src/main` directory. To run one of the Java or Scala sample programs, use `./bin/run-example <class> <params>` in the top-level Spark directory (the `bin/run-example` script sets up the appropriate paths and launches that program). For example, try `./bin/run-example org.apache.spark.examples.SparkPi local`. To run a Python sample program, use `./bin/pyspark <sample-program> <params>`. For example, try `./bin/pyspark ./examples/src/main/python/pi.py local`.

Each example prints usage help when run with no parameters.

Note that all of the sample programs take a `<master>` parameter specifying the cluster URL to connect to. This can be a URL for a distributed cluster, or `local` to run locally with one thread, or `local[N]` to run locally with N threads. You should start by using `local` for testing.
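Concretely, the same example can be launched with different `<master>` values. A sketch (these commands assume a Spark checkout and are not run here; `host:7077` is a placeholder address for an existing standalone master):

```shell
./bin/run-example org.apache.spark.examples.SparkPi local             # one local thread
./bin/run-example org.apache.spark.examples.SparkPi local[4]          # four local threads
./bin/run-example org.apache.spark.examples.SparkPi spark://host:7077 # standalone cluster (placeholder URL)
```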

Finally, you can run Spark interactively through modified versions of the Scala shell (`./bin/spark-shell`) or Python interpreter (`./bin/pyspark`). These are a great way to learn the framework.
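To give a flavor of the Scala shell, a hypothetical session might look like the following (illustrative only; `sc` is the SparkContext that `./bin/spark-shell` creates for you):

```
scala> // count the even numbers in 1..1000, in parallel
scala> sc.parallelize(1 to 1000).filter(_ % 2 == 0).count()
```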

# Launching on a Cluster

The Spark cluster mode overview explains the key concepts in running on a cluster. Spark can run by itself or on top of several existing cluster managers. It currently provides several options for deployment:

# Where to Go from Here

Programming guides:

API Docs:

Deployment guides:

  • Cluster Overview: overview of concepts and components when running on a cluster
  • Amazon EC2: scripts that let you launch a cluster on EC2 in about 5 minutes
  • Standalone Deploy Mode: launch a standalone cluster quickly without a third-party cluster manager
  • Mesos: deploy a private cluster using Apache Mesos
  • YARN: deploy Spark on top of Hadoop NextGen (YARN)

Other documents:

External resources:

# Community

To get help using Spark or keep up with Spark development, sign up for the user mailing list.

If you're in the San Francisco Bay Area, there's a regular Spark meetup every few weeks. Come by to meet the developers and other users.

Finally, if you'd like to contribute code to Spark, read how to contribute.