Apache Spark

Spark is a fast and general cluster computing system for Big Data. It provides high-level APIs in Scala, Java, and Python, and an optimized engine that supports general computation graphs for data analysis. It also supports a rich set of higher-level tools, including Spark SQL for SQL and structured data processing, MLlib for machine learning, GraphX for graph processing, and Spark Streaming for stream processing.

http://spark.apache.org/
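
As a quick taste of the Scala API (an illustrative sketch, not part of this README; the input path and app name are placeholders), the classic word count looks like this:

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.SparkContext._  // brings in pair-RDD operations such as reduceByKey

// Count word occurrences in a text file. "input.txt" is a placeholder path.
val sc = new SparkContext(new SparkConf().setAppName("WordCount").setMaster("local[2]"))
val counts = sc.textFile("input.txt")
  .flatMap(_.split("\\s+"))
  .map(word => (word, 1))
  .reduceByKey(_ + _)
counts.take(10).foreach(println)
sc.stop()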

Online Documentation

You can find the latest Spark documentation, including a programming guide, on the project webpage at http://spark.apache.org/documentation.html. This README file only contains basic setup instructions.

Building Spark

Spark is built on Scala 2.10. To build Spark and its example programs, run:

./sbt/sbt assembly

(You do not need to do this if you downloaded a pre-built package.)

Interactive Scala Shell

The easiest way to start using Spark is through the Scala shell:

./bin/spark-shell

Try the following command, which should return 1000:

scala> sc.parallelize(1 to 1000).count()
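
As a slightly longer check (an illustrative snippet, not part of the original instructions), you can chain a transformation before the action; counting the even numbers in the same range should return 500:

scala> sc.parallelize(1 to 1000).filter(_ % 2 == 0).count()
res1: Long = 500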

Interactive Python Shell

Alternatively, if you prefer Python, you can use the Python shell:

./bin/pyspark

And run the following command, which should also return 1000:

>>> sc.parallelize(range(1000)).count()

Example Programs

Spark also comes with several sample programs in the examples directory. To run one of them, use ./bin/run-example <class> [params]. For example:

./bin/run-example SparkPi

will run the Pi example locally.

You can set the MASTER environment variable when running examples to submit examples to a cluster. This can be a mesos:// or spark:// URL, "yarn-cluster" or "yarn-client" to run on YARN, "local" to run locally with one thread, or "local[N]" to run locally with N threads. You can also use an abbreviated class name if the class is in the examples package. For instance:

MASTER=spark://host:7077 ./bin/run-example SparkPi

Many of the example programs print usage help if no params are given.

Running Tests

Testing first requires building Spark. Once Spark is built, tests can be run using:

./sbt/sbt test

A Note About Hadoop Versions

Spark uses the Hadoop core library to talk to HDFS and other Hadoop-supported storage systems. Because the protocols have changed in different versions of Hadoop, you must build Spark against the same version that your cluster runs. You can change the version by setting -Dhadoop.version when building Spark.

For Apache Hadoop versions 1.x, Cloudera CDH MRv1, and other Hadoop versions without YARN, use:

# Apache Hadoop 1.2.1
$ sbt/sbt -Dhadoop.version=1.2.1 assembly

# Cloudera CDH 4.2.0 with MapReduce v1
$ sbt/sbt -Dhadoop.version=2.0.0-mr1-cdh4.2.0 assembly

For Apache Hadoop 2.2.x, 2.1.x, 2.0.x, 0.23.x, Cloudera CDH MRv2, and other Hadoop versions with YARN, also set -Pyarn:

# Apache Hadoop 2.0.5-alpha
$ sbt/sbt -Dhadoop.version=2.0.5-alpha -Pyarn assembly

# Cloudera CDH 4.2.0 with MapReduce v2
$ sbt/sbt -Dhadoop.version=2.0.0-cdh4.2.0 -Pyarn assembly

# Apache Hadoop 2.2.x and newer
$ sbt/sbt -Dhadoop.version=2.2.0 -Pyarn assembly

When developing a Spark application, specify the Hadoop version by adding the "hadoop-client" artifact to your project's dependencies. For example, if you're using Hadoop 1.2.1 and build your application using SBT, add this entry to libraryDependencies:

"org.apache.hadoop" % "hadoop-client" % "1.2.1"

If your project is built with Maven, add this to your POM file's <dependencies> section:

<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-client</artifactId>
  <version>1.2.1</version>
</dependency>

A Note About the Thrift JDBC Server and CLI for Spark SQL

Spark SQL includes a Thrift JDBC server and a CLI. See sql-programming-guide.md for more information about these features. To enable them, set -Phive-thriftserver when building Spark, as follows:

$ sbt/sbt -Phive-thriftserver assembly

Configuration

Please refer to the Configuration guide in the online documentation for an overview of how to configure Spark.
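
As a brief illustration (a sketch, not drawn from this README; the property values are placeholders), most properties can also be set programmatically on a SparkConf before the SparkContext is created:

import org.apache.spark.{SparkConf, SparkContext}

// Property names follow the Configuration guide; the values here are illustrative.
val conf = new SparkConf()
  .setAppName("MyApp")                // appears in the web UI
  .setMaster("local[2]")              // or a spark://, mesos://, or YARN master
  .set("spark.executor.memory", "1g")
val sc = new SparkContext(conf)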

Contributing to Spark

Contributions via GitHub pull requests are gladly accepted from their original author. Along with any pull requests, please state that the contribution is your original work and that you license the work to the project under the project's open source license. Whether or not you state this explicitly, by submitting any copyrighted material via pull request, email, or other means, you agree to license the material under the project's open source license and warrant that you have the legal authority to do so.