Welcome to the Spark documentation!

This README will walk you through navigating and building the Spark documentation, which is included here with the Spark source code. You can also find documentation specific to release versions of Spark at http://spark.apache.org/documentation.html.

Read on to learn more about viewing documentation in plain text (i.e., markdown) or building the documentation yourself. Why build it yourself? So that you have the docs that correspond to whichever version of Spark you currently have checked out of revision control.

Generating the Documentation HTML

We include the Spark documentation as part of the source (as opposed to using a hosted wiki, such as the GitHub wiki, as the definitive documentation) to enable the documentation to evolve along with the source code and be captured by revision control (currently git). This way, whichever version or release you check out or download automatically includes the matching documentation.

In this directory you will find text files formatted using Markdown, with an ".md" suffix. You can read those files directly if you want; start with index.md.

The Markdown code can be compiled to HTML using the Jekyll tool. To use the jekyll command, you will need to have Jekyll installed; the easiest way to do this is via a Ruby gem (see the Jekyll installation instructions). Compiling the site with Jekyll will create a directory called _site containing index.html as well as the rest of the compiled files.
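
For example, assuming RubyGems is already set up on your machine, installing Jekyll and compiling the site looks roughly like this (a sketch, not the only way to install it):

# Install Jekyll via RubyGems (may require sudo, depending on your Ruby setup)
$ gem install jekyll
# Compile the site; the output lands in _site/
$ jekyll build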

You can modify the default Jekyll build as follows:

# Skip generating API docs (which takes a while)
$ SKIP_SCALADOC=1 jekyll build
# Serve content locally on port 4000
$ jekyll serve --watch
# Build the site with extra features used on the live page
$ PRODUCTION=1 jekyll build
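
These flags are plain environment variables read when the site is built, so combining them should work as well. For example, while iterating on the docs you might skip the slow scaladoc step and preview the result locally:

# Skip API doc generation and preview at http://localhost:4000
$ SKIP_SCALADOC=1 jekyll serve --watch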

Pygments

We also use Pygments (http://pygments.org) for syntax highlighting in documentation markdown pages, so you will need to install that as well (it requires Python) by running sudo easy_install Pygments.
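
If easy_install is not available in your Python environment, installing via pip should work just as well; both installers fetch the same Pygments package:

$ sudo pip install Pygments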

To mark a block of code in your markdown to be syntax highlighted by Jekyll during the compile phase, use the following syntax:

{% highlight scala %}
// Your scala code goes here, you can replace scala with many other
// supported languages too.
{% endhighlight %}
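
For instance, a shell snippet in one of these markdown pages could be marked up as follows (the command shown is just the build command from above, used for illustration):

{% highlight bash %}
$ SKIP_SCALADOC=1 jekyll build
{% endhighlight %}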

API Docs (Scaladoc and Epydoc)

You can build just the Spark scaladoc by running sbt/sbt doc from the SPARK_PROJECT_ROOT directory.

Similarly, you can build just the PySpark epydoc by running epydoc --config epydoc.conf from the SPARK_PROJECT_ROOT/pyspark directory.
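
Put together, a full API-docs build might look like the following, using the paths described above (substitute SPARK_PROJECT_ROOT with the location of your checkout):

# Build the Scala API docs
$ cd SPARK_PROJECT_ROOT
$ sbt/sbt doc
# Build the PySpark API docs
$ cd pyspark
$ epydoc --config epydoc.conf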

When you run Jekyll in the docs directory, it will also copy over the scaladoc for the various Spark subprojects into the docs directory (and then also into the _site directory). We use a Jekyll plugin to run sbt/sbt doc before building the site, so if you haven't run it recently this step may take some time as it generates all of the scaladoc. The Jekyll plugin also generates the PySpark docs using epydoc.

NOTE: To skip the step of building and copying over the Scala and Python API docs, run SKIP_API=1 jekyll build.