---
layout: global
title: Python Programming Guide
---
The Spark Python API (PySpark) exposes the Spark programming model to Python.
To learn the basics of Spark, we recommend reading through the
[Scala programming guide](scala-programming-guide.html) first; it should be
easy to follow even if you don't know Scala.
This guide will show how to use the Spark features described there in Python.
# Key Differences in the Python API
There are a few key differences between the Python and Scala APIs:
* Python is dynamically typed, so RDDs can hold objects of multiple types (see the sketch after this list).
* PySpark does not yet support a few API calls, such as `lookup` and non-text input files, though these will be added in future releases.
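As a quick illustration of the first point, here is a minimal sketch (assuming a SparkContext `sc`, as created automatically in the PySpark shell) of an RDD holding values of several types:

{% highlight python %}
# A single RDD may mix integers, strings and tuples
mixed = sc.parallelize([1, "two", (3, 4.0)])
mixed.collect()   # [1, 'two', (3, 4.0)]
{% endhighlight %}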
In PySpark, RDDs support the same methods as their Scala counterparts but take Python functions and return Python collection types.
Short functions can be passed to RDD methods using Python's [`lambda`](http://www.diveintopython.net/power_of_introspection/lambda_functions.html) syntax:
{% highlight python %}
logData = sc.textFile(logFile).cache()
errors = logData.filter(lambda line: "ERROR" in line)
{% endhighlight %}
You can also pass functions that are defined with the `def` keyword; this is useful for longer functions that can't be expressed using `lambda`:
{% highlight python %}
def is_error(line):
    return "ERROR" in line
errors = logData.filter(is_error)
{% endhighlight %}
Functions can access objects in enclosing scopes, although modifications to those objects within RDD methods will not be propagated back:
{% highlight python %}
error_keywords = ["Exception", "Error"]
def is_error(line):
    return any(keyword in line for keyword in error_keywords)
errors = logData.filter(is_error)
{% endhighlight %}
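As a sketch of the second half of that statement (assuming `logData` from the earlier example), mutating an enclosing-scope object inside an RDD method changes only the copy shipped to each executor:

{% highlight python %}
seen = []                      # list defined on the driver
def remember(line):
    seen.append(line)          # appends to the executor's copy only
    return line
logData.map(remember).count()
len(seen)                      # still 0 on the driver
{% endhighlight %}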
PySpark will automatically ship these functions to executors, along with any objects that they reference.
Instances of classes will be serialized and shipped to executors by PySpark, but classes themselves cannot be automatically distributed to executors.
The [Standalone Programs](#standalone-programs) section describes how to ship code dependencies to executors.
In addition, PySpark fully supports interactive use---simply run `./bin/pyspark` to launch an interactive shell.
# Installing and Configuring PySpark
PySpark requires Python 2.6 or higher.
PySpark applications are executed using a standard CPython interpreter in order to support Python modules that use C extensions.
We have not tested PySpark with Python 3 or with alternative Python interpreters, such as [PyPy](http://pypy.org/) or [Jython](http://www.jython.org/).
By default, PySpark requires `python` to be available on the system `PATH` and uses it to run programs; an alternate Python executable may be specified by setting the `PYSPARK_PYTHON` environment variable in `conf/spark-env.sh` (or `.cmd` on Windows).
All of PySpark's library dependencies, including [Py4J](http://py4j.sourceforge.net/), are bundled with PySpark and automatically imported.
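Because PySpark runs on a standard CPython interpreter, functions shipped to executors can freely use C-extension modules. As a minimal sketch (assuming NumPy is installed on every worker node and `sc` is an existing SparkContext):

{% highlight python %}
import numpy as np                                   # a C-extension module
vectors = sc.parallelize([[1.0, 2.0], [3.0, 4.0]])
vectors.map(lambda v: float(np.linalg.norm(v))).collect()   # [2.236..., 5.0]
{% endhighlight %}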
# Interactive Use
The `bin/pyspark` script launches a Python interpreter that is configured to run PySpark applications. To use `pyspark` interactively, first build Spark, then launch it directly from the command line:
{% highlight bash %}
$ sbt/sbt assembly
$ ./bin/pyspark
{% endhighlight %}
The Python shell can be used to explore data interactively and is a simple way to learn the API:
{% highlight python %}
>>> words = sc.textFile("/usr/share/dict/words")
>>> words.filter(lambda w: w.startswith("spar")).take(5)
[u'spar', u'sparable', u'sparada', u'sparadrap', u'sparagrass']
>>> help(pyspark) # Show all pyspark functions
{% endhighlight %}
By default, the `bin/pyspark` shell creates a SparkContext that runs applications locally on all of
your machine's logical cores. To connect to a non-local cluster, or to specify a number of cores,
set the `--master` flag. For example, to use the `bin/pyspark` shell with a
[standalone Spark cluster](spark-standalone.html):
{% highlight bash %}
$ ./bin/pyspark --master spark://1.2.3.4:7077
{% endhighlight %}
Or, to use exactly four cores on the local machine:
{% highlight bash %}
$ ./bin/pyspark --master local[4]
{% endhighlight %}
Under the hood, `bin/pyspark` is a wrapper around the
[Spark submit script](cluster-overview.html#launching-applications-with-spark-submit), so these
two scripts share the same list of options. For a complete list of options, run `bin/pyspark` with
the `--help` option.
## IPython
It is also possible to launch the PySpark shell in [IPython](http://ipython.org), the
enhanced Python interpreter. PySpark works with IPython 1.0.0 and later. To
use IPython, set the `IPYTHON` variable to `1` when running `bin/pyspark`:
{% highlight bash %}
$ IPYTHON=1 ./bin/pyspark
{% endhighlight %}
Alternatively, you can customize the `ipython` command by setting `IPYTHON_OPTS`. For example, to launch
the [IPython Notebook](http://ipython.org/notebook.html) with PyLab graphing support:
{% highlight bash %}
$ IPYTHON_OPTS="notebook --pylab inline" ./bin/pyspark
{% endhighlight %}
IPython also works on a cluster or on multiple cores if you set the `--master` flag.
# Standalone Programs
PySpark can also be used from standalone Python scripts by creating a SparkContext in your script
and running the script using `bin/spark-submit` . The Quick Start guide includes a
[complete example](quick-start.html#standalone-applications) of a standalone Python application.
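As a minimal sketch of such a script (the application name below is illustrative):

{% highlight python %}
"""my_script.py: a tiny standalone PySpark application."""
from pyspark import SparkContext

sc = SparkContext(appName="MyStandaloneApp")
data = sc.parallelize(range(1000))
print(data.filter(lambda x: x % 7 == 0).count())   # 143
sc.stop()
{% endhighlight %}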
Code dependencies can be deployed by passing .zip or .egg files in the `--py-files` option of `spark-submit`:
{% highlight bash %}
./bin/spark-submit --py-files lib1.zip,lib2.zip my_script.py
{% endhighlight %}
Files listed here will be added to the `PYTHONPATH` and shipped to remote worker machines.
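Modules packaged in those archives can then be imported from the application as usual; in this hypothetical sketch, `helpers` is assumed to be a module inside `lib1.zip`:

{% highlight python %}
import helpers                                   # resolved from lib1.zip on the driver and executors
parsed = sc.textFile("data.txt").map(helpers.parse_record)
{% endhighlight %}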
Code dependencies can also be added to an existing SparkContext at runtime using its `addPyFile()` method.
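For example (the archive path is illustrative, and `sc` is an existing SparkContext):

{% highlight python %}
sc.addPyFile("deps/extra_helpers.zip")   # shipped to executors and added to their PYTHONPATH
{% endhighlight %}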
You can set [configuration properties](configuration.html#spark-properties) by passing a
[SparkConf](api/python/pyspark.conf.SparkConf-class.html) object to SparkContext:
{% highlight python %}
from pyspark import SparkConf, SparkContext
conf = (SparkConf()
         .setMaster("local")
         .setAppName("My app")
         .set("spark.executor.memory", "1g"))
sc = SparkContext(conf = conf)
{% endhighlight %}
`spark-submit` supports launching Python applications on standalone, Mesos or YARN clusters, through
its `--master` argument. However, it currently requires the Python driver program to run on the local
machine, not the cluster (i.e. the `--deploy-mode` parameter cannot be `cluster`).
# API Docs
[API documentation](api/python/index.html) for PySpark is available as Epydoc.
Many of the methods also contain [doctests](http://docs.python.org/2/library/doctest.html) that provide additional usage examples.
# Libraries
[MLlib](mllib-guide.html) is also available in PySpark. To use it, you'll need
[NumPy](http://www.numpy.org) version 1.4 or newer. The [MLlib guide](mllib-guide.html) contains
some example applications.
# Where to Go from Here
PySpark also includes several sample programs in the [`examples/src/main/python` folder](https://github.com/apache/spark/tree/master/examples/src/main/python).
You can run them by passing the files to `spark-submit`; e.g.:
{% highlight bash %}
./bin/spark-submit examples/src/main/python/wordcount.py README.md
{% endhighlight %}
Each program prints usage help when run without sufficient arguments.