---
layout: global
title: Quick Start
---

* This will become a table of contents (this text will be scraped).
{:toc}

This tutorial provides a quick introduction to using Spark. We will first introduce the API through Spark's interactive Scala shell (don't worry if you don't know Scala -- you will not need much for this), then show how to write standalone applications in Scala, Java, and Python.
See the [programming guide](scala-programming-guide.html) for a more complete reference.

To follow along with this guide, first download a packaged release of Spark from the
[Spark website](http://spark.apache.org/downloads.html). Since we won't be using HDFS,
you can download a package for any version of Hadoop.

# Interactive Analysis with the Spark Shell
2012-09-30 20:43:20 -04:00
2012-10-09 17:30:23 -04:00
## Basics
2012-09-30 20:43:20 -04:00
2012-10-09 17:30:23 -04:00
Spark's interactive shell provides a simple way to learn the API, as well as a powerful tool to analyze datasets interactively.
Start the shell by running the following in the Spark directory:

{% highlight bash %}
./bin/spark-shell
{% endhighlight %}

Spark's primary abstraction is a distributed collection of items called a Resilient Distributed Dataset (RDD). RDDs can be created from Hadoop InputFormats (such as HDFS files) or by transforming other RDDs. Let's make a new RDD from the text of the README file in the Spark source directory:

{% highlight scala %}
scala> val textFile = sc.textFile("README.md")
textFile: spark.RDD[String] = spark.MappedRDD@2ee9b6e3
{% endhighlight %}

RDDs have _[actions](scala-programming-guide.html#actions)_, which return values, and _[transformations](scala-programming-guide.html#transformations)_, which return pointers to new RDDs. Let's start with a few actions:

{% highlight scala %}
scala> textFile.count() // Number of items in this RDD
res0: Long = 74

scala> textFile.first() // First item in this RDD
res1: String = # Apache Spark
{% endhighlight %}

Now let's use a transformation. We will use the [`filter`](scala-programming-guide.html#transformations) transformation to return a new RDD with a subset of the items in the file.

{% highlight scala %}
scala> val linesWithSpark = textFile.filter(line => line.contains("Spark"))
linesWithSpark: spark.RDD[String] = spark.FilteredRDD@7dd4af09
{% endhighlight %}

We can chain together transformations and actions:

{% highlight scala %}
scala> textFile.filter(line => line.contains("Spark")).count() // How many lines contain "Spark"?
res3: Long = 15
{% endhighlight %}

## More on RDD Operations

RDD actions and transformations can be used for more complex computations. Let's say we want to find the line with the most words:

{% highlight scala %}
scala> textFile.map(line => line.split(" ").size).reduce((a, b) => if (a > b) a else b)
res4: Int = 16
{% endhighlight %}

This first maps a line to an integer value, creating a new RDD. `reduce` is called on that RDD to find the largest line count. The arguments to `map` and `reduce` are Scala function literals (closures), and can use any language feature or Scala/Java library. For example, we can easily call functions declared elsewhere. We'll use the `Math.max()` function to make this code easier to understand:

{% highlight scala %}
scala> import java.lang.Math
import java.lang.Math

scala> textFile.map(line => line.split(" ").size).reduce((a, b) => Math.max(a, b))
res5: Int = 16
{% endhighlight %}

One common data flow pattern is MapReduce, as popularized by Hadoop. Spark can implement MapReduce flows easily:

{% highlight scala %}
scala> val wordCounts = textFile.flatMap(line => line.split(" ")).map(word => (word, 1)).reduceByKey((a, b) => a + b)
wordCounts: spark.RDD[(java.lang.String, Int)] = spark.ShuffledAggregatedRDD@71f027b8
{% endhighlight %}

Here, we combined the [`flatMap`](scala-programming-guide.html#transformations), [`map`](scala-programming-guide.html#transformations) and [`reduceByKey`](scala-programming-guide.html#transformations) transformations to compute the per-word counts in the file as an RDD of (String, Int) pairs. To collect the word counts in our shell, we can use the [`collect`](scala-programming-guide.html#actions) action:

{% highlight scala %}
scala> wordCounts.collect()
res6: Array[(java.lang.String, Int)] = Array((need,2), ("",43), (Extra,3), (using,1), (passed,1), (etc.,1), (its,1), (`/usr/local/lib/libmesos.so`,1), (`SCALA_HOME`,1), (option,1), (these,1), (#,1), (`PATH`,,2), (200,1), (To,3),...
{% endhighlight %}
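
Since `wordCounts` is itself an RDD, we can keep transforming it. As a quick sketch, one way to view the most frequent words is to swap each (word, count) pair and sort by count in descending order:

{% highlight scala %}
scala> wordCounts.map(pair => (pair._2, pair._1)).sortByKey(false).take(5) // five most frequent (count, word) pairs
{% endhighlight %}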

## Caching

Spark also supports pulling data sets into a cluster-wide in-memory cache. This is very useful when data is accessed repeatedly, such as when querying a small "hot" dataset or when running an iterative algorithm like PageRank. As a simple example, let's mark our `linesWithSpark` dataset to be cached:

{% highlight scala %}
scala> linesWithSpark.cache()
res7: spark.RDD[String] = spark.FilteredRDD@17e51082

scala> linesWithSpark.count()
res8: Long = 15

scala> linesWithSpark.count()
res9: Long = 15
{% endhighlight %}
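
If we later want to free up that memory, a one-line sketch is to drop the dataset from the cache with `unpersist`:

{% highlight scala %}
scala> linesWithSpark.unpersist() // remove linesWithSpark from the in-memory cache
{% endhighlight %}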

It may seem silly to use Spark to explore and cache a 30-line text file. The interesting part is
that these same functions can be used on very large data sets, even when they are striped across
tens or hundreds of nodes. You can also do this interactively by connecting `bin/spark-shell` to
a cluster, as described in the [programming guide](scala-programming-guide.html#initializing-spark).
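For example, a minimal sketch of connecting the shell to a standalone cluster (the master URL below is a placeholder for your own cluster's):

{% highlight bash %}
# spark://host:7077 is a placeholder master URL; substitute your cluster's own
./bin/spark-shell --master spark://host:7077
{% endhighlight %}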

# A Standalone Application
Now say we wanted to write a standalone application using the Spark API. We will walk through a
simple application in Scala (with SBT), Java (with Maven), and Python.

<div class="codetabs">
<div data-lang="scala" markdown="1">
We'll create a very simple Spark application in Scala. So simple, in fact, that it's
named `SimpleApp.scala`:

{% highlight scala %}
/*** SimpleApp.scala ***/
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import org.apache.spark.SparkConf

object SimpleApp {
  def main(args: Array[String]) {
    val logFile = "YOUR_SPARK_HOME/README.md" // Should be some file on your system
    val conf = new SparkConf().setAppName("Simple Application")
    val sc = new SparkContext(conf)
    val logData = sc.textFile(logFile, 2).cache()
    val numAs = logData.filter(line => line.contains("a")).count()
    val numBs = logData.filter(line => line.contains("b")).count()
    println("Lines with a: %s, Lines with b: %s".format(numAs, numBs))
  }
}
{% endhighlight %}

This program just counts the number of lines containing 'a' and the number containing 'b' in the
Spark README. Note that you'll need to replace YOUR_SPARK_HOME with the location where Spark is
installed. Unlike the earlier examples with the Spark shell, which initializes its own SparkContext,
we initialize a SparkContext as part of the program.
We pass the SparkContext constructor a
[SparkConf](api/scala/index.html#org.apache.spark.SparkConf)
object which contains information about our application. When the application is launched in cluster
mode with `spark-submit`, the jar file containing it is shipped automatically to worker nodes.

This file depends on the Spark API, so we'll also include an sbt configuration file, `simple.sbt`,
which declares Spark as a dependency. This file also adds a repository that Spark depends on:

{% highlight scala %}
name := "Simple Project"

version := "1.0"

scalaVersion := "{{site.SCALA_VERSION}}"

libraryDependencies += "org.apache.spark" %% "spark-core" % "{{site.SPARK_VERSION}}"

resolvers += "Akka Repository" at "http://repo.akka.io/releases/"
{% endhighlight %}

For sbt to work correctly, we'll need to lay out `SimpleApp.scala` and `simple.sbt`
according to the typical directory structure. Once that is in place, we can create a JAR package
containing the application's code, then use the `spark-submit` script to run our program.

{% highlight bash %}
# Your directory layout should look like this
$ find .
.
./simple.sbt
./src
./src/main
./src/main/scala
./src/main/scala/SimpleApp.scala

# Package a jar containing your application
$ sbt package
...
[info] Packaging {..}/{..}/target/scala-2.10/simple-project_2.10-1.0.jar

# Use spark-submit to run your application
$ YOUR_SPARK_HOME/bin/spark-submit \
  --class "SimpleApp" \
  --master local[4] \
  target/scala-2.10/simple-project_2.10-1.0.jar
...
Lines with a: 46, Lines with b: 23
{% endhighlight %}

</div>
<div data-lang="java" markdown="1">
This example will use Maven to compile an application jar, but any similar build system will work.

We'll create a very simple Spark application, `SimpleApp.java`:

{% highlight java %}
/*** SimpleApp.java ***/
import org.apache.spark.api.java.*;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.function.Function;

public class SimpleApp {
  public static void main(String[] args) {
    String logFile = "YOUR_SPARK_HOME/README.md"; // Should be some file on your system
    SparkConf conf = new SparkConf().setAppName("Simple Application");
    JavaSparkContext sc = new JavaSparkContext(conf);
    JavaRDD<String> logData = sc.textFile(logFile).cache();

    long numAs = logData.filter(new Function<String, Boolean>() {
      public Boolean call(String s) { return s.contains("a"); }
    }).count();

    long numBs = logData.filter(new Function<String, Boolean>() {
      public Boolean call(String s) { return s.contains("b"); }
    }).count();

    System.out.println("Lines with a: " + numAs + ", lines with b: " + numBs);
  }
}
{% endhighlight %}

This program just counts the number of lines containing 'a' and the number containing 'b' in a text
file. Note that you'll need to replace YOUR_SPARK_HOME with the location where Spark is installed.
As with the Scala example, we initialize a SparkContext, though we use the special
`JavaSparkContext` class to get a Java-friendly one. We also create RDDs (represented by
`JavaRDD`) and run transformations on them. Finally, we pass functions to Spark by creating classes
that extend `org.apache.spark.api.java.function.Function`. The
[Java programming guide](java-programming-guide.html) describes these differences in more detail.

To build the program, we also write a Maven `pom.xml` file that lists Spark as a dependency.
Note that Spark artifacts are tagged with a Scala version.

{% highlight xml %}
<project>
  <groupId>edu.berkeley</groupId>
  <artifactId>simple-project</artifactId>
  <modelVersion>4.0.0</modelVersion>
  <name>Simple Project</name>
  <packaging>jar</packaging>
  <version>1.0</version>
  <repositories>
    <repository>
      <id>Akka repository</id>
      <url>http://repo.akka.io/releases</url>
    </repository>
  </repositories>
  <dependencies>
    <dependency> <!-- Spark dependency -->
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-core_{{site.SCALA_BINARY_VERSION}}</artifactId>
      <version>{{site.SPARK_VERSION}}</version>
    </dependency>
  </dependencies>
</project>
{% endhighlight %}

We lay out these files according to the canonical Maven directory structure:

{% highlight bash %}
$ find .
./pom.xml
./src
./src/main
./src/main/java
./src/main/java/SimpleApp.java
{% endhighlight %}

Now, we can package the application using Maven and execute it with `./bin/spark-submit`.

{% highlight bash %}
# Package a jar containing your application
$ mvn package
...
[INFO] Building jar: {..}/{..}/target/simple-project-1.0.jar

# Use spark-submit to run your application
$ YOUR_SPARK_HOME/bin/spark-submit \
  --class "SimpleApp" \
  --master local[4] \
  target/simple-project-1.0.jar
...
Lines with a: 46, Lines with b: 23
{% endhighlight %}

</div>
<div data-lang="python" markdown="1">
Now we will show how to write a standalone application using the Python API (PySpark).

As an example, we'll create a simple Spark application, `SimpleApp.py`:

{% highlight python %}
"""SimpleApp.py"""
from pyspark import SparkContext

logFile = "YOUR_SPARK_HOME/README.md"  # Should be some file on your system
sc = SparkContext("local", "Simple App")
logData = sc.textFile(logFile).cache()
numAs = logData.filter(lambda s: 'a' in s).count()
numBs = logData.filter(lambda s: 'b' in s).count()
print "Lines with a: %i, lines with b: %i" % (numAs, numBs)
{% endhighlight %}

This program just counts the number of lines containing 'a' and the number containing 'b' in a
text file.
Note that you'll need to replace YOUR_SPARK_HOME with the location where Spark is installed.
As with the Scala and Java examples, we use a SparkContext to create RDDs.
We can pass Python functions to Spark, which are automatically serialized along with any variables
that they reference.
For applications that use custom classes or third-party libraries, we can add those code
dependencies to SparkContext to ensure that they will be available on remote machines; this is
described in more detail in the [Python programming guide](python-programming-guide.html).
`SimpleApp` is simple enough that we do not need to specify any code dependencies.
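If it did, a minimal sketch (assuming a hypothetical helper module `my_helpers.py`) would pass those files when creating the SparkContext so they are shipped to worker machines:

{% highlight python %}
# "my_helpers.py" is a hypothetical dependency; pyFiles ships it to the workers
sc = SparkContext("local", "Simple App", pyFiles=["my_helpers.py"])
{% endhighlight %}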

We can run this application using the `bin/pyspark` script:

{% highlight bash %}
$ cd $SPARK_HOME
$ ./bin/pyspark SimpleApp.py
...
Lines with a: 46, Lines with b: 23
{% endhighlight %}

</div>
</div>

# Where to go from here
Congratulations on running your first Spark application!

* For an in-depth overview of the API, see the "Programming Guides" menu section.
* For running applications on a cluster, head to the [deployment overview](cluster-overview.html).
* For configuration options available to Spark applications, see the [configuration page](configuration.html).