---
layout: global
title: Quick Start
description: Quick start tutorial for Spark SPARK_VERSION_SHORT
---
* This will become a table of contents (this text will be scraped).
{:toc}

This tutorial provides a quick introduction to using Spark. We will first introduce the API through Spark's
interactive shell (in Python or Scala),
then show how to write applications in Java, Scala, and Python.

To follow along with this guide, first download a packaged release of Spark from the
[Spark website](http://spark.apache.org/downloads.html). Since we won't be using HDFS,
you can download a package for any version of Hadoop.

Note that, before Spark 2.0, the main programming interface of Spark was the Resilient Distributed Dataset (RDD). After Spark 2.0, RDDs are replaced by Dataset, which is strongly-typed like an RDD, but with richer optimizations under the hood. The RDD interface is still supported, and you can get a more complete reference at the [RDD programming guide](rdd-programming-guide.html). However, we highly recommend you switch to using Dataset, which has better performance than RDD. See the [SQL programming guide](sql-programming-guide.html) to get more information about Dataset.

# Interactive Analysis with the Spark Shell

## Basics

Spark's shell provides a simple way to learn the API, as well as a powerful tool to analyze data interactively.
It is available in either Scala (which runs on the Java VM and is thus a good way to use existing Java libraries)
or Python. Start it by running the following in the Spark directory:

<div class="codetabs">
<div data-lang="scala" markdown="1">

    ./bin/spark-shell

Spark's primary abstraction is a distributed collection of items called a Dataset. Datasets can be created from Hadoop InputFormats (such as HDFS files) or by transforming other Datasets. Let's make a new Dataset from the text of the README file in the Spark source directory:

{% highlight scala %}
scala> val textFile = spark.read.textFile("README.md")
textFile: org.apache.spark.sql.Dataset[String] = [value: string]
{% endhighlight %}

You can get values from the Dataset directly, by calling some actions, or transform the Dataset to get a new one. For more details, please read the _[API doc](api/scala/index.html#org.apache.spark.sql.Dataset)_.

{% highlight scala %}
scala> textFile.count() // Number of items in this Dataset
res0: Long = 126 // May be different from yours as README.md will change over time, similar to other outputs

scala> textFile.first() // First item in this Dataset
res1: String = # Apache Spark
{% endhighlight %}

Now let's transform this Dataset into a new one. We call `filter` to return a new Dataset with a subset of the items in the file.

{% highlight scala %}
scala> val linesWithSpark = textFile.filter(line => line.contains("Spark"))
linesWithSpark: org.apache.spark.sql.Dataset[String] = [value: string]
{% endhighlight %}

We can chain together transformations and actions:

{% highlight scala %}
scala> textFile.filter(line => line.contains("Spark")).count() // How many lines contain "Spark"?
res3: Long = 15
{% endhighlight %}

</div>
<div data-lang="python" markdown="1">

    ./bin/pyspark

Spark's primary abstraction is a distributed collection of items called a Dataset. Datasets can be created from Hadoop InputFormats (such as HDFS files) or by transforming other Datasets. Due to Python's dynamic nature, we don't need the Dataset to be strongly-typed in Python. As a result, all Datasets in Python are Dataset[Row], and we call it `DataFrame` to be consistent with the data frame concept in Pandas and R. Let's make a new DataFrame from the text of the README file in the Spark source directory:

{% highlight python %}
>>> textFile = spark.read.text("README.md")
{% endhighlight %}

You can get values from the DataFrame directly, by calling some actions, or transform the DataFrame to get a new one. For more details, please read the _[API doc](api/python/index.html#pyspark.sql.DataFrame)_.

{% highlight python %}
>>> textFile.count() # Number of rows in this DataFrame
126

>>> textFile.first() # First row in this DataFrame
Row(value=u'# Apache Spark')
{% endhighlight %}

Now let's transform this DataFrame into a new one. We call `filter` to return a new DataFrame with a subset of the lines in the file.

{% highlight python %}
>>> linesWithSpark = textFile.filter(textFile.value.contains("Spark"))
{% endhighlight %}

We can chain together transformations and actions:

{% highlight python %}
>>> textFile.filter(textFile.value.contains("Spark")).count() # How many lines contain "Spark"?
15
{% endhighlight %}
</div>
</div>

## More on Dataset Operations

Dataset actions and transformations can be used for more complex computations. Let's say we want to find the line with the most words:

<div class="codetabs">
<div data-lang="scala" markdown="1">

{% highlight scala %}
scala> textFile.map(line => line.split(" ").size).reduce((a, b) => if (a > b) a else b)
res4: Int = 15
{% endhighlight %}

This first maps a line to an integer value, creating a new Dataset. `reduce` is called on that Dataset to find the largest word count. The arguments to `map` and `reduce` are Scala function literals (closures), and can use any language feature or Scala/Java library. For example, we can easily call functions declared elsewhere. We'll use the `Math.max()` function to make this code easier to understand:

{% highlight scala %}
scala> import java.lang.Math
import java.lang.Math

scala> textFile.map(line => line.split(" ").size).reduce((a, b) => Math.max(a, b))
res5: Int = 15
{% endhighlight %}

One common data flow pattern is MapReduce, as popularized by Hadoop. Spark can implement MapReduce flows easily:

{% highlight scala %}
scala> val wordCounts = textFile.flatMap(line => line.split(" ")).groupByKey(identity).count()
wordCounts: org.apache.spark.sql.Dataset[(String, Long)] = [value: string, count(1): bigint]
{% endhighlight %}

Here, we call `flatMap` to transform a Dataset of lines to a Dataset of words, and then combine `groupByKey` and `count` to compute the per-word counts in the file as a Dataset of (String, Long) pairs. To collect the word counts in our shell, we can call `collect`:

{% highlight scala %}
scala> wordCounts.collect()
res6: Array[(String, Long)] = Array((means,1), (under,2), (this,3), (Because,1), (Python,2), (agree,1), (cluster.,1), ...)
{% endhighlight %}
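
For comparison with the pre-2.0 interface mentioned at the top of this guide, here is a minimal, illustrative sketch of the same word count using the RDD API; it assumes the shell's built-in `sc` SparkContext and is not part of the original walkthrough:

{% highlight scala %}
scala> val rddCounts = sc.textFile("README.md").flatMap(_.split(" ")).map(word => (word, 1)).reduceByKey(_ + _)
{% endhighlight %}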

</div>
<div data-lang="python" markdown="1">

{% highlight python %}
>>> from pyspark.sql.functions import *
>>> textFile.select(size(split(textFile.value, "\s+")).name("numWords")).agg(max(col("numWords"))).collect()
[Row(max(numWords)=15)]
{% endhighlight %}

This first maps a line to an integer value and aliases it as "numWords", creating a new DataFrame. `agg` is called on that DataFrame to find the largest word count. The arguments to `select` and `agg` are both _[Column](api/python/index.html#pyspark.sql.Column)_; we can use `df.colName` to get a column from a DataFrame. We can also import `pyspark.sql.functions`, which provides a lot of convenient functions to build a new Column from an old one.

One common data flow pattern is MapReduce, as popularized by Hadoop. Spark can implement MapReduce flows easily:

{% highlight python %}
>>> wordCounts = textFile.select(explode(split(textFile.value, "\s+")).alias("word")).groupBy("word").count()
{% endhighlight %}

Here, we use the `explode` function in `select` to transform a DataFrame of lines to a DataFrame of words, and then combine `groupBy` and `count` to compute the per-word counts in the file as a DataFrame of 2 columns: "word" and "count". To collect the word counts in our shell, we can call `collect`:

{% highlight python %}
>>> wordCounts.collect()
[Row(word=u'online', count=1), Row(word=u'graphs', count=1), ...]
{% endhighlight %}

</div>
</div>

## Caching

Spark also supports pulling data sets into a cluster-wide in-memory cache. This is very useful when data is accessed repeatedly, such as when querying a small "hot" dataset or when running an iterative algorithm like PageRank. As a simple example, let's mark our `linesWithSpark` dataset to be cached:

<div class="codetabs">
<div data-lang="scala" markdown="1">

{% highlight scala %}
scala> linesWithSpark.cache()
res7: linesWithSpark.type = [value: string]

scala> linesWithSpark.count()
res8: Long = 15

scala> linesWithSpark.count()
res9: Long = 15
{% endhighlight %}

It may seem silly to use Spark to explore and cache a 100-line text file. The interesting part is
that these same functions can be used on very large data sets, even when they are striped across
tens or hundreds of nodes. You can also do this interactively by connecting `bin/spark-shell` to
a cluster, as described in the [RDD programming guide](rdd-programming-guide.html#using-the-shell).
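
For example, a minimal sketch of attaching the shell to an existing cluster (the master URL below is a placeholder; use your own cluster's URL):

{% highlight bash %}
./bin/spark-shell --master spark://<master-host>:7077
{% endhighlight %}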

</div>
<div data-lang="python" markdown="1">
{% highlight python %}
>>> linesWithSpark.cache()
>>> linesWithSpark.count()
15

>>> linesWithSpark.count()
15
{% endhighlight %}

It may seem silly to use Spark to explore and cache a 100-line text file. The interesting part is
that these same functions can be used on very large data sets, even when they are striped across
tens or hundreds of nodes. You can also do this interactively by connecting `bin/pyspark` to
a cluster, as described in the [RDD programming guide](rdd-programming-guide.html#using-the-shell).
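
Similarly, a minimal sketch for the Python shell (again, the master URL is a placeholder for your own cluster's URL):

{% highlight bash %}
./bin/pyspark --master spark://<master-host>:7077
{% endhighlight %}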

</div>
</div>

# Self-Contained Applications

Suppose we wish to write a self-contained application using the Spark API. We will walk through a
simple application in Scala (with sbt), Java (with Maven), and Python.

<div class="codetabs">
<div data-lang="scala" markdown="1">

We'll create a very simple Spark application in Scala--so simple, in fact, that it's
named `SimpleApp.scala`:

{% highlight scala %}
/* SimpleApp.scala */
import org.apache.spark.sql.SparkSession

object SimpleApp {
  def main(args: Array[String]) {
    val logFile = "YOUR_SPARK_HOME/README.md" // Should be some file on your system
    val spark = SparkSession.builder.appName("Simple Application").getOrCreate()
    val logData = spark.read.textFile(logFile).cache()
    val numAs = logData.filter(line => line.contains("a")).count()
    val numBs = logData.filter(line => line.contains("b")).count()
    println(s"Lines with a: $numAs, Lines with b: $numBs")
    spark.stop()
  }
}
{% endhighlight %}

Note that applications should define a `main()` method instead of extending `scala.App`.
Subclasses of `scala.App` may not work correctly.
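
For illustration only (not part of the original example), a minimal sketch of the two patterns; `scala.App` relies on delayed initialization of the object body, which is why the second form is discouraged:

{% highlight scala %}
// Recommended: an explicit main() method
object GoodApp {
  def main(args: Array[String]): Unit = {
    // create the SparkSession and run the job here
  }
}

// Discouraged: fields defined in the body of a scala.App subclass are
// initialized lazily via DelayedInit and may not be set when Spark
// serializes closures that reference them
// object BadApp extends App {
//   val greeting = "hello"
// }
{% endhighlight %}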

This program just counts the number of lines containing 'a' and the number containing 'b' in the
Spark README. Note that you'll need to replace YOUR_SPARK_HOME with the location where Spark is
installed. Unlike the earlier examples with the Spark shell, which initializes its own SparkSession,
we initialize a SparkSession as part of the program.

We call `SparkSession.builder` to construct a `SparkSession`, then set the application name, and finally call `getOrCreate` to get the `SparkSession` instance.

Our application depends on the Spark API, so we'll also include an sbt configuration file,
`build.sbt`, which explains that Spark is a dependency:

{% highlight scala %}
name := "Simple Project"

version := "1.0"

scalaVersion := "{{site.SCALA_VERSION}}"

libraryDependencies += "org.apache.spark" %% "spark-sql" % "{{site.SPARK_VERSION}}"
{% endhighlight %}

For sbt to work correctly, we'll need to lay out `SimpleApp.scala` and `build.sbt`
according to the typical directory structure. Once that is in place, we can create a JAR package
containing the application's code, then use the `spark-submit` script to run our program.

{% highlight bash %}
# Your directory layout should look like this
$ find .
.
./build.sbt
./src
./src/main
./src/main/scala
./src/main/scala/SimpleApp.scala

# Package a jar containing your application
$ sbt package
...
[info] Packaging {..}/{..}/target/scala-{{site.SCALA_BINARY_VERSION}}/simple-project_{{site.SCALA_BINARY_VERSION}}-1.0.jar

# Use spark-submit to run your application
$ YOUR_SPARK_HOME/bin/spark-submit \
  --class "SimpleApp" \
  --master local[4] \
  target/scala-{{site.SCALA_BINARY_VERSION}}/simple-project_{{site.SCALA_BINARY_VERSION}}-1.0.jar
...
Lines with a: 46, Lines with b: 23
{% endhighlight %}

</div>
<div data-lang="java" markdown="1">

This example will use Maven to compile an application JAR, but any similar build system will work.

We'll create a very simple Spark application, `SimpleApp.java`:

{% highlight java %}
/* SimpleApp.java */
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.SparkSession;

public class SimpleApp {
  public static void main(String[] args) {
    String logFile = "YOUR_SPARK_HOME/README.md"; // Should be some file on your system
    SparkSession spark = SparkSession.builder().appName("Simple Application").getOrCreate();
    Dataset<String> logData = spark.read().textFile(logFile).cache();

    long numAs = logData.filter(s -> s.contains("a")).count();
    long numBs = logData.filter(s -> s.contains("b")).count();

    System.out.println("Lines with a: " + numAs + ", lines with b: " + numBs);

    spark.stop();
  }
}
{% endhighlight %}

This program just counts the number of lines containing 'a' and the number containing 'b' in the
Spark README. Note that you'll need to replace YOUR_SPARK_HOME with the location where Spark is
installed. Unlike the earlier examples with the Spark shell, which initializes its own SparkSession,
we initialize a SparkSession as part of the program.

To build the program, we also write a Maven `pom.xml` file that lists Spark as a dependency.
Note that Spark artifacts are tagged with a Scala version.

{% highlight xml %}
<project>
  <groupId>edu.berkeley</groupId>
  <artifactId>simple-project</artifactId>
  <modelVersion>4.0.0</modelVersion>
  <name>Simple Project</name>
  <packaging>jar</packaging>
  <version>1.0</version>
  <dependencies>
    <dependency> <!-- Spark dependency -->
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-sql_{{site.SCALA_BINARY_VERSION}}</artifactId>
      <version>{{site.SPARK_VERSION}}</version>
    </dependency>
  </dependencies>
</project>
{% endhighlight %}

We lay out these files according to the canonical Maven directory structure:
{% highlight bash %}
$ find .
./pom.xml
./src
./src/main
./src/main/java
./src/main/java/SimpleApp.java
{% endhighlight %}

Now, we can package the application using Maven and execute it with `./bin/spark-submit`.

{% highlight bash %}
# Package a JAR containing your application
$ mvn package
...
[INFO] Building jar: {..}/{..}/target/simple-project-1.0.jar

# Use spark-submit to run your application
$ YOUR_SPARK_HOME/bin/spark-submit \
  --class "SimpleApp" \
  --master local[4] \
  target/simple-project-1.0.jar
...
Lines with a: 46, Lines with b: 23
{% endhighlight %}

</div>
<div data-lang="python" markdown="1">

Now we will show how to write an application using the Python API (PySpark).

As an example, we'll create a simple Spark application, `SimpleApp.py`:

{% highlight python %}
"""SimpleApp.py"""
from pyspark.sql import SparkSession

logFile = "YOUR_SPARK_HOME/README.md" # Should be some file on your system
spark = SparkSession.builder.appName("SimpleApp").getOrCreate()
logData = spark.read.text(logFile).cache()

numAs = logData.filter(logData.value.contains('a')).count()
numBs = logData.filter(logData.value.contains('b')).count()

print("Lines with a: %i, lines with b: %i" % (numAs, numBs))

spark.stop()
{% endhighlight %}

This program just counts the number of lines containing 'a' and the number containing 'b' in a
text file.
Note that you'll need to replace YOUR_SPARK_HOME with the location where Spark is installed.
As with the Scala and Java examples, we use a SparkSession to create Datasets.
For applications that use custom classes or third-party libraries, we can also add code
dependencies to `spark-submit` through its `--py-files` argument by packaging them into a
.zip file (see `spark-submit --help` for details).
`SimpleApp` is simple enough that we do not need to specify any code dependencies.
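
For illustration only, a dependency archive could be passed like this (`deps.zip` is a hypothetical file, not part of this example):

{% highlight bash %}
$ YOUR_SPARK_HOME/bin/spark-submit --py-files deps.zip SimpleApp.py
{% endhighlight %}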

We can run this application using the `bin/spark-submit` script:

{% highlight bash %}
# Use spark-submit to run your application
$ YOUR_SPARK_HOME/bin/spark-submit \
  --master local[4] \
  SimpleApp.py
...
Lines with a: 46, Lines with b: 23
{% endhighlight %}

</div>
</div>

# Where to Go from Here
Congratulations on running your first Spark application!

* For an in-depth overview of the API, start with the [RDD programming guide](rdd-programming-guide.html) and the [SQL programming guide](sql-programming-guide.html), or see the "Programming Guides" menu for other components.
* For running applications on a cluster, head to the [deployment overview](cluster-overview.html).
* Finally, Spark includes several samples in the `examples` directory
([Scala]({{site.SPARK_GITHUB_URL}}/tree/master/examples/src/main/scala/org/apache/spark/examples),
 [Java]({{site.SPARK_GITHUB_URL}}/tree/master/examples/src/main/java/org/apache/spark/examples),
 [Python]({{site.SPARK_GITHUB_URL}}/tree/master/examples/src/main/python),
 [R]({{site.SPARK_GITHUB_URL}}/tree/master/examples/src/main/r)).
You can run them as follows:

{% highlight bash %}
# For Scala and Java, use run-example:
./bin/run-example SparkPi

# For Python examples, use spark-submit directly:
./bin/spark-submit examples/src/main/python/pi.py

# For R examples, use spark-submit directly:
./bin/spark-submit examples/src/main/r/dataframe.R
{% endhighlight %}