# Apache Spark
Spark is a fast and general cluster computing system for Big Data. It provides
high-level APIs in Scala, Java, and Python, and an optimized engine that
supports general computation graphs for data analysis. It also supports a
rich set of higher-level tools including Spark SQL for SQL and structured
data processing, MLlib for machine learning, GraphX for graph processing,
and Spark Streaming for stream processing.
<http://spark.apache.org/>
## Online Documentation
You can find the latest Spark documentation, including a programming
guide, on the [project web page](http://spark.apache.org/documentation.html)
and [project wiki](https://cwiki.apache.org/confluence/display/SPARK).
This README file only contains basic setup instructions.
## Building Spark
Spark is built using [Apache Maven](http://maven.apache.org/).
To build Spark and its example programs, run:
mvn -DskipTests clean package
(You do not need to do this if you downloaded a pre-built package.)
More detailed documentation is available from the project site, at
["Building Spark"](http://spark.apache.org/docs/latest/building-spark.html).
## Interactive Scala Shell
The easiest way to start using Spark is through the Scala shell:
./bin/spark-shell
Try the following command, which should return 1000:
scala> sc.parallelize(1 to 1000).count()
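
As a slightly larger sketch, assuming the shell was started from the Spark
source directory so that `README.md` resolves, you can count the lines of this
file that mention Spark:

    scala> val lines = sc.textFile("README.md")
    scala> lines.filter(line => line.contains("Spark")).count()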
## Interactive Python Shell
Alternatively, if you prefer Python, you can use the Python shell:
./bin/pyspark
And run the following command, which should also return 1000:
>>> sc.parallelize(range(1000)).count()
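
The same sketch in Python, again assuming the shell was started from the Spark
source directory:

    >>> lines = sc.textFile("README.md")
    >>> lines.filter(lambda line: "Spark" in line).count()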
## Example Programs
Spark also comes with several sample programs in the `examples` directory.
To run one of them, use `./bin/run-example <class> [params]`. For example:
./bin/run-example SparkPi
will run the Pi example locally.
You can set the MASTER environment variable when running examples to submit
examples to a cluster. This can be a mesos:// or spark:// URL,
"yarn-cluster" or "yarn-client" to run on YARN, and "local" to run
locally with one thread, or "local[N]" to run locally with N threads. You
can also use an abbreviated class name if the class is in the `examples`
package. For instance:
MASTER=spark://host:7077 ./bin/run-example SparkPi
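
Likewise, to run the same example locally with four threads (the trailing
argument, illustrative here, sets the number of partitions SparkPi uses):

    MASTER=local[4] ./bin/run-example SparkPi 100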
Many of the example programs print usage help if no params are given.
## Running Tests
Testing first requires [building Spark](#building-spark). Once Spark is built, tests
can be run using:
./dev/run-tests
Please see the guidance on how to
[run all automated tests](https://cwiki.apache.org/confluence/display/SPARK/Contributing+to+Spark#ContributingtoSpark-AutomatedTesting).
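
When iterating on a single module, a full `./dev/run-tests` pass can be slow.
As a sketch (the module name is illustrative, and this assumes the usual Maven
setup), standard Maven project selection can scope the run:

    mvn -pl core test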
## A Note About Hadoop Versions
Spark uses the Hadoop core library to talk to HDFS and other Hadoop-supported
storage systems. Because the protocols have changed in different versions of
Hadoop, you must build Spark against the same version that your cluster runs.
Please refer to the build documentation at
["Specifying the Hadoop Version"](http://spark.apache.org/docs/latest/building-spark.html#specifying-the-hadoop-version)
for detailed guidance on building for a particular distribution of Hadoop, including
building for particular Hive and Hive Thriftserver distributions. See also
["Third Party Hadoop Distributions"](http://spark.apache.org/docs/latest/hadoop-third-party-distributions.html)
for guidance on building a Spark application that works with a particular
distribution.
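
As a sketch of what such a build looks like (the profile and version number are
illustrative; consult the linked documentation for the profiles your Hadoop
release needs):

    mvn -Phadoop-2.4 -Dhadoop.version=2.4.0 -DskipTests clean package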
## Configuration
Please refer to the [Configuration guide](http://spark.apache.org/docs/latest/configuration.html)
in the online documentation for an overview of how to configure Spark.
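
As a quick illustration (the values here are placeholders), per-application
defaults can be set in `conf/spark-defaults.conf`:

    spark.master            spark://master:7077
    spark.executor.memory   2g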