spark-instrumented-optimizer/docs
Patrick Wendell fb98488fc8 Clean up and simplify Spark configuration
Over time, as we've added more deployment modes, user-facing configuration in Spark has gotten a bit unwieldy. Going forward we'll advise all users to run `spark-submit` to launch applications. This is a WIP patch, but it makes the following improvements:

1. Improves `spark-env.sh.template`, which was missing many of the settings users now put in that file.
2. Stops shipping `SPARK_CLASSPATH`, `SPARK_JAVA_OPTS`, and `SPARK_LIBRARY_PATH` to the executors on the cluster. This was an ugly hack. Instead, it introduces the config variables `spark.executor.extraJavaOpts`, `spark.executor.extraLibraryPath`, and `spark.executor.extraClassPath`.
3. Adds the ability to set these same variables for the driver using `spark-submit`.
4. Lets you load system properties from a `spark-defaults.conf` file when running `spark-submit` (see the sketch after this list), which makes it possible to set both SparkConf options and other system properties used by `spark-submit`.
5. Makes `SPARK_LOCAL_IP` an environment variable rather than a SparkConf property, which is more consistent with it being set on each node.
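
To make the new flow concrete, here is a minimal sketch. The property names follow the descriptions above; the paths, values, application class, and jar name are placeholders, not part of this patch:

```
# Sketch only: a spark-defaults.conf using the new executor options.
# Property names as described in this commit; paths/values are placeholders.
cat > conf/spark-defaults.conf <<'EOF'
spark.executor.extraJavaOpts      -XX:+PrintGCDetails
spark.executor.extraClassPath     /opt/deps/extra.jar
spark.executor.extraLibraryPath   /opt/deps/native
EOF

# spark-submit loads conf/spark-defaults.conf and forwards the executor
# options to the cluster; the flags below are standard spark-submit flags.
./bin/spark-submit --class org.example.MyApp --master local[4] myapp.jar
```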

Author: Patrick Wendell <pwendell@gmail.com>

Closes #299 from pwendell/config-cleanup and squashes the following commits:

127f301 [Patrick Wendell] Improvements to testing
a006464 [Patrick Wendell] Moving properties file template.
b4b496c [Patrick Wendell] spark-defaults.properties -> spark-defaults.conf
0086939 [Patrick Wendell] Minor style fixes
af09e3e [Patrick Wendell] Mention config file in docs and clean-up docs
b16e6a2 [Patrick Wendell] Cleanup of spark-submit script and Scala quick start guide
af0adf7 [Patrick Wendell] Automatically add user jar
a56b125 [Patrick Wendell] Responses to Tom's review
d50c388 [Patrick Wendell] Merge remote-tracking branch 'apache/master' into config-cleanup
a762901 [Patrick Wendell] Fixing test failures
ffa00fe [Patrick Wendell] Review feedback
fda0301 [Patrick Wendell] Note
308f1f6 [Patrick Wendell] Properly escape quotes and other clean-up for YARN
e83cd8f [Patrick Wendell] Changes to allow re-use of test applications
be42f35 [Patrick Wendell] Handle case where SPARK_HOME is not set
c2a2909 [Patrick Wendell] Test compile fixes
4ee6f9d [Patrick Wendell] Making YARN doc changes consistent
afc9ed8 [Patrick Wendell] Cleaning up line limits and two compile errors.
b08893b [Patrick Wendell] Additional improvements.
ace4ead [Patrick Wendell] Responses to review feedback.
b72d183 [Patrick Wendell] Review feedback for spark env file
46555c1 [Patrick Wendell] Review feedback and import clean-ups
437aed1 [Patrick Wendell] Small fix
761ebcd [Patrick Wendell] Library path and classpath for drivers
7cc70e4 [Patrick Wendell] Clean up terminology inside of spark-env script
5b0ba8e [Patrick Wendell] Don't ship executor envs
84cc5e5 [Patrick Wendell] Small clean-up
1f75238 [Patrick Wendell] SPARK_JAVA_OPTS --> SPARK_MASTER_OPTS for master settings
4982331 [Patrick Wendell] Remove SPARK_LIBRARY_PATH
6eaf7d0 [Patrick Wendell] executorJavaOpts
0faa3b6 [Patrick Wendell] Stash of adding config options in submit script and YARN
ac2d65e [Patrick Wendell] Change spark.local.dir -> SPARK_LOCAL_DIRS
2014-04-21 10:26:33 -07:00
| Name | Last commit | Date |
| --- | --- | --- |
| _layouts | SPARK-1251 Support for optimizing and executing structured queries | 2014-03-20 18:03:20 -07:00 |
| _plugins | SPARK-1374: PySpark API for SparkSQL | 2014-04-15 00:07:55 -07:00 |
| css | SPARK-1093: Annotate developer and experimental API's | 2014-04-09 01:14:46 -07:00 |
| img | Merge pull request #497 from tdas/docs-update | 2014-01-28 21:51:05 -08:00 |
| js | SPARK-1093: Annotate developer and experimental API's | 2014-04-09 01:14:46 -07:00 |
| _config.yml | Revert "SPARK-1433: Upgrade Mesos dependency to 0.17.0" | 2014-04-10 14:43:29 -07:00 |
| api.md | Soften wording about GraphX superseding Bagel | 2014-01-10 23:48:32 -08:00 |
| bagel-programming-guide.md | Removed reference to incubation in Spark user docs. | 2014-02-27 21:13:22 -08:00 |
| building-with-maven.md | SPARK-1387. Update build plugins, avoid plugin version warning, centralize versions | 2014-04-06 17:41:01 -07:00 |
| cluster-overview.md | Clean up and simplify Spark configuration | 2014-04-21 10:26:33 -07:00 |
| configuration.md | Clean up and simplify Spark configuration | 2014-04-21 10:26:33 -07:00 |
| contributing-to-spark.md | Work in progress: | 2013-09-08 00:29:11 -07:00 |
| ec2-scripts.md | fix persistent-hdfs | 2013-11-01 17:47:37 -07:00 |
| graphx-programming-guide.md | SPARK-1183. Don't use "worker" to mean executor | 2014-03-13 12:11:33 -07:00 |
| hadoop-third-party-distributions.md | Code review feedback | 2014-01-05 22:05:30 -08:00 |
| hardware-provisioning.md | Change port from 3030 to 4040 | 2013-09-11 10:01:38 -07:00 |
| index.md | Some clean up in build/docs | 2014-04-11 10:45:27 -07:00 |
| java-programming-guide.md | [java8API] SPARK-964 Investigate the potential for using JDK 8 lambda expressions for the Java/Scala APIs | 2014-03-03 22:31:30 -08:00 |
| job-scheduling.md | SPARK-1183. Don't use "worker" to mean executor | 2014-03-13 12:11:33 -07:00 |
| mllib-classification-regression.md | [WIP] SPARK-1430: Support sparse data in Python MLlib | 2014-04-15 20:33:24 -07:00 |
| mllib-clustering.md | [WIP] SPARK-1430: Support sparse data in Python MLlib | 2014-04-15 20:33:24 -07:00 |
| mllib-collaborative-filtering.md | Merge pull request #552 from martinjaggi/master. Closes #552. | 2014-02-08 11:39:13 -08:00 |
| mllib-guide.md | [WIP] SPARK-1430: Support sparse data in Python MLlib | 2014-04-15 20:33:24 -07:00 |
| mllib-linear-algebra.md | Principal Component Analysis | 2014-03-20 10:39:20 -07:00 |
| mllib-optimization.md | Merge pull request #566 from martinjaggi/copy-MLlib-d. | 2014-02-09 15:19:50 -08:00 |
| monitoring.md | [SPARK-1276] Add a HistoryServer to render persisted UI | 2014-04-10 10:39:34 -07:00 |
| python-programming-guide.md | SPARK-1426: Make MLlib work with NumPy versions older than 1.7 | 2014-04-15 00:19:43 -07:00 |
| quick-start.md | Clean up and simplify Spark configuration | 2014-04-21 10:26:33 -07:00 |
| README.md | SPARK-1374: PySpark API for SparkSQL | 2014-04-15 00:07:55 -07:00 |
| running-on-mesos.md | Updated docs for SparkConf and handled review comments | 2013-12-30 22:17:28 -05:00 |
| running-on-yarn.md | SPARK-1408 Modify Spark on Yarn to point to the history server when app ... | 2014-04-17 16:36:37 -05:00 |
| scala-programming-guide.md | Clean up and simplify Spark configuration | 2014-04-21 10:26:33 -07:00 |
| security.md | SPARK-1189: Add Security to Spark - Akka, Http, ConnectionManager, UI use servlets | 2014-03-06 18:27:50 -06:00 |
| spark-debugger.md | Removed reference to incubation in Spark user docs. | 2014-02-27 21:13:22 -08:00 |
| spark-standalone.md | SPARK-1126. spark-app preliminary | 2014-03-29 14:41:36 -07:00 |
| sql-programming-guide.md | Clean up and simplify Spark configuration | 2014-04-21 10:26:33 -07:00 |
| streaming-custom-receivers.md | Merge pull request #577 from hsaputra/fix_simple_streaming_doc. | 2014-02-11 14:46:22 -08:00 |
| streaming-programming-guide.md | maintain arbitrary state data for each key | 2014-03-09 22:42:12 -07:00 |
| tuning.md | Update tuning.md | 2014-04-10 14:59:58 -07:00 |

Welcome to the Spark documentation!

This readme will walk you through navigating and building the Spark documentation, which is included here with the Spark source code. You can also find documentation specific to release versions of Spark at http://spark.apache.org/documentation.html.

Read on to learn more about viewing documentation in plain text (i.e., markdown) or building the documentation yourself. Why build it yourself? So that you have the docs that correspond to whichever version of Spark you currently have checked out of revision control.

## Generating the Documentation HTML

We include the Spark documentation as part of the source (as opposed to using a hosted wiki, such as the github wiki, as the definitive documentation) to enable the documentation to evolve along with the source code and be captured by revision control (currently git). This way the code automatically includes the version of the documentation that is relevant regardless of which version or release you have checked out or downloaded.

In this directory you will find text files formatted using Markdown, with an `.md` suffix. You can read those text files directly if you want; start with `index.md`.

The markdown code can be compiled to HTML using the Jekyll tool. To use the `jekyll` command, you will need to have Jekyll installed; the easiest way to do this is via a Ruby gem (see the Jekyll installation instructions). Compiling the site with Jekyll will create a directory called `_site` containing `index.html` as well as the rest of the compiled files.
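
For example, a typical setup and build might look like this, assuming a working Ruby installation (the gem command below is one common way to install Jekyll, not the only one):

```
# Install Jekyll as a Ruby gem (may need sudo depending on your Ruby setup)
$ gem install jekyll

# From the docs/ directory, compile the site into _site/
$ jekyll build
```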

You can modify the default Jekyll build as follows:

```
# Skip generating API docs (which takes a while)
$ SKIP_SCALADOC=1 jekyll build

# Serve content locally on port 4000
$ jekyll serve --watch

# Build the site with extra features used on the live page
$ PRODUCTION=1 jekyll build
```

## Pygments

We also use Pygments (http://pygments.org) for syntax highlighting in documentation markdown pages, so you will also need to install it (it requires Python) by running `sudo easy_install Pygments`.
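
For example (easy_install is what this README suggests; `pip install Pygments` is a common alternative if you have pip available):

```
# Install Pygments for syntax highlighting (requires Python)
$ sudo easy_install Pygments
```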

To mark a block of code in your markdown to be syntax highlighted by Jekyll during the compile phase, use the following syntax:

{% highlight scala %}
// Your scala code goes here, you can replace scala with many other
// supported languages too.
{% endhighlight %}

## API Docs (Scaladoc and Epydoc)

You can build just the Spark scaladoc by running `sbt/sbt doc` from the `SPARK_PROJECT_ROOT` directory.

Similarly, you can build just the PySpark epydoc by running `epydoc --config epydoc.conf` from the `SPARK_PROJECT_ROOT/pyspark` directory. Documentation is only generated for classes that are listed as public in `__init__.py`.
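
Put together, the two API-doc builds look like this (commands and paths as described above; run each from the stated directory):

```
# Scaladoc for the Spark subprojects, run from SPARK_PROJECT_ROOT
$ sbt/sbt doc

# PySpark epydoc, run from SPARK_PROJECT_ROOT/pyspark
$ epydoc --config epydoc.conf
```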

When you run Jekyll in the `docs` directory, it will also copy over the scaladoc for the various Spark subprojects into the `docs` directory (and then also into the `_site` directory). We use a Jekyll plugin to run `sbt/sbt doc` before building the site, so if you haven't run it (recently) it may take some time as it generates all of the scaladoc. The Jekyll plugin also generates the PySpark docs using epydoc.

NOTE: To skip the step of building and copying over the Scala and Python API docs, run `SKIP_API=1 jekyll`.
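
For example, to rebuild only the site content when the API docs were already generated on a previous run (pairing the flag with the `build` subcommand, mirroring the earlier examples, is an assumption):

```
# Skip regenerating and copying Scaladoc/Epydoc output
$ SKIP_API=1 jekyll build
```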