spark-instrumented-optimizer/docs
Latest commit 63ca581d9c by Matei Zaharia: [WIP] SPARK-1430: Support sparse data in Python MLlib
This PR adds a SparseVector class in PySpark and updates all the regression, classification and clustering algorithms and models to support sparse data, similar to what MLlib already supports in Scala. I chose to add this class because SciPy is quite difficult to install in many environments (more so than NumPy), but I plan to add support for SciPy sparse vectors later too, and to make the methods work transparently on objects of either type.
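
For illustration, a minimal sketch of constructing such a vector (assuming the class lives in pyspark.mllib.linalg, as the commit log below suggests; the exact constructor signature is an assumption, not a quote from the patch):

from pyspark.mllib.linalg import SparseVector

# A length-4 vector with nonzeros at indices 1 and 3.
sv = SparseVector(4, [1, 3], [3.0, 4.0])

# The same vector built from a dict of index -> value
# (dict initialization is added in commit ab244d1 below).
sv2 = SparseVector(4, {1: 3.0, 3: 4.0})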

On the Scala side, we keep Python sparse vectors sparse and pass them to MLlib. We always return dense vectors from our models.
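
As a hedged end-to-end sketch of the intended usage (class and method names such as LabeledPoint and LogisticRegressionWithSGD follow the commit log below, but the exact signatures are assumptions, and sc is an existing SparkContext):

from pyspark.mllib.classification import LogisticRegressionWithSGD
from pyspark.mllib.linalg import SparseVector
from pyspark.mllib.regression import LabeledPoint

# Two labeled examples whose features are sparse vectors of size 3.
points = sc.parallelize([
    LabeledPoint(0.0, SparseVector(3, [0], [1.0])),
    LabeledPoint(1.0, SparseVector(3, [2], [2.0])),
])

# The vectors stay sparse on their way into Scala MLlib; the trained
# model's weights come back as a dense vector.
model = LogisticRegressionWithSGD.train(points)
print(model.weights)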

Some to-do items left:
- [x] Support SciPy's scipy.sparse matrix objects when SciPy is available. We can easily add a function to convert these to our own SparseVector (see the sketch after this list).
- [x] MLlib currently uses a vector with one extra column on the left to represent what we call LabeledPoint in Scala. Do we really want this? It may get annoying once you deal with sparse data since you must add/subtract 1 to each feature index when training. We can remove this API in 1.0 and use tuples for labeling.
- [x] Explain how to use these in the Python MLlib docs.
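
For the first item, such a conversion helper might look roughly like the sketch below (the helper name, and the assumption that SparseVector wants sorted integer indices, are illustrative rather than part of the merged patch):

import scipy.sparse

from pyspark.mllib.linalg import SparseVector

def scipy_to_sparse_vector(mat):
    # Convert a single-column scipy.sparse matrix into a SparseVector.
    csc = scipy.sparse.csc_matrix(mat)  # normalize to CSC layout
    assert csc.shape[1] == 1, "expected a column vector"
    csc.sort_indices()                  # keep row indices in increasing order
    return SparseVector(csc.shape[0], csc.indices.tolist(), csc.data.tolist())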

CC @mengxr, @joshrosen

Author: Matei Zaharia <matei@databricks.com>

Closes #341 from mateiz/py-ml-update and squashes the following commits:

d52e763 [Matei Zaharia] Remove no-longer-needed slice code and handle review comments
ea5a25a [Matei Zaharia] Fix remaining uses of copyto() after merge
b9f97a3 [Matei Zaharia] Fix test
1e1bd0f [Matei Zaharia] Add MLlib logistic regression example in Python
88bc01f [Matei Zaharia] Clean up inheritance of LinearModel in Python, and expose its parameters
37ab747 [Matei Zaharia] Fix some examples and docs due to changes in MLlib API
da0f27e [Matei Zaharia] Added a MLlib K-means example and updated docs to discuss sparse data
c48e85a [Matei Zaharia] Added some tests for passing lists as input, and added mllib/tests.py to run-tests script.
a07ba10 [Matei Zaharia] Fix some typos and calculation of initial weights
74eefe7 [Matei Zaharia] Added LabeledPoint class in Python
889dde8 [Matei Zaharia] Support scipy.sparse matrices in all our algorithms and models
ab244d1 [Matei Zaharia] Allow SparseVectors to be initialized using a dict
a5d6426 [Matei Zaharia] Add linalg.py to run-tests script
0e7a3d8 [Matei Zaharia] Keep vectors sparse in Java when reading LabeledPoints
eaee759 [Matei Zaharia] Update regression, classification and clustering models for sparse data
2abbb44 [Matei Zaharia] Further work to get linear models working with sparse data
154f45d [Matei Zaharia] Update docs, name some magic values
881fef7 [Matei Zaharia] Added a sparse vector in Python and made Java-Python format more compact
Committed 2014-04-15 20:33:24 -07:00
Directory contents (file name, last commit, last commit date):
_layouts SPARK-1251 Support for optimizing and executing structured queries 2014-03-20 18:03:20 -07:00
_plugins SPARK-1374: PySpark API for SparkSQL 2014-04-15 00:07:55 -07:00
css SPARK-1093: Annotate developer and experimental API's 2014-04-09 01:14:46 -07:00
img Merge pull request #497 from tdas/docs-update 2014-01-28 21:51:05 -08:00
js SPARK-1093: Annotate developer and experimental API's 2014-04-09 01:14:46 -07:00
_config.yml Revert "SPARK-1433: Upgrade Mesos dependency to 0.17.0" 2014-04-10 14:43:29 -07:00
api.md Soften wording about GraphX superseding Bagel 2014-01-10 23:48:32 -08:00
bagel-programming-guide.md Removed reference to incubation in Spark user docs. 2014-02-27 21:13:22 -08:00
building-with-maven.md SPARK-1387. Update build plugins, avoid plugin version warning, centralize versions 2014-04-06 17:41:01 -07:00
cluster-overview.md SPARK-1375. Additional spark-submit cleanup 2014-04-04 13:28:42 -07:00
configuration.md SPARK-1202 - Add a "cancel" button in the UI for stages 2014-04-10 17:10:11 -07:00
contributing-to-spark.md Work in progress: 2013-09-08 00:29:11 -07:00
ec2-scripts.md fix persistent-hdfs 2013-11-01 17:47:37 -07:00
graphx-programming-guide.md SPARK-1183. Don't use "worker" to mean executor 2014-03-13 12:11:33 -07:00
hadoop-third-party-distributions.md Code review feedback 2014-01-05 22:05:30 -08:00
hardware-provisioning.md Change port from 3030 to 4040 2013-09-11 10:01:38 -07:00
index.md Some clean up in build/docs 2014-04-11 10:45:27 -07:00
java-programming-guide.md [java8API] SPARK-964 Investigate the potential for using JDK 8 lambda expressions for the Java/Scala APIs 2014-03-03 22:31:30 -08:00
job-scheduling.md SPARK-1183. Don't use "worker" to mean executor 2014-03-13 12:11:33 -07:00
mllib-classification-regression.md [WIP] SPARK-1430: Support sparse data in Python MLlib 2014-04-15 20:33:24 -07:00
mllib-clustering.md [WIP] SPARK-1430: Support sparse data in Python MLlib 2014-04-15 20:33:24 -07:00
mllib-collaborative-filtering.md Merge pull request #552 from martinjaggi/master. Closes #552. 2014-02-08 11:39:13 -08:00
mllib-guide.md [WIP] SPARK-1430: Support sparse data in Python MLlib 2014-04-15 20:33:24 -07:00
mllib-linear-algebra.md Principal Component Analysis 2014-03-20 10:39:20 -07:00
mllib-optimization.md Merge pull request #566 from martinjaggi/copy-MLlib-d. 2014-02-09 15:19:50 -08:00
monitoring.md [SPARK-1276] Add a HistoryServer to render persisted UI 2014-04-10 10:39:34 -07:00
python-programming-guide.md SPARK-1426: Make MLlib work with NumPy versions older than 1.7 2014-04-15 00:19:43 -07:00
quick-start.md small fix ( proogram -> program ) 2014-04-04 21:32:00 -07:00
README.md SPARK-1374: PySpark API for SparkSQL 2014-04-15 00:07:55 -07:00
running-on-mesos.md Updated docs for SparkConf and handled review comments 2013-12-30 22:17:28 -05:00
running-on-yarn.md SPARK-1376. In the yarn-cluster submitter, rename "args" option to "arg" 2014-04-01 08:26:31 +05:30
scala-programming-guide.md SPARK-1099: Introduce local[*] mode to infer number of cores 2014-04-07 13:06:30 -07:00
security.md SPARK-1189: Add Security to Spark - Akka, Http, ConnectionManager, UI use servlets 2014-03-06 18:27:50 -06:00
spark-debugger.md Removed reference to incubation in Spark user docs. 2014-02-27 21:13:22 -08:00
spark-standalone.md SPARK-1126. spark-app preliminary 2014-03-29 14:41:36 -07:00
sql-programming-guide.md SPARK-1374: PySpark API for SparkSQL 2014-04-15 00:07:55 -07:00
streaming-custom-receivers.md Merge pull request #577 from hsaputra/fix_simple_streaming_doc. 2014-02-11 14:46:22 -08:00
streaming-programming-guide.md maintain arbitrary state data for each key 2014-03-09 22:42:12 -07:00
tuning.md Update tuning.md 2014-04-10 14:59:58 -07:00

Welcome to the Spark documentation!

This readme will walk you through navigating and building the Spark documentation, which is included here with the Spark source code. You can also find documentation specific to release versions of Spark at http://spark.apache.org/documentation.html.

Read on to learn more about viewing documentation in plain text (i.e., markdown) or building the documentation yourself. Why build it yourself? So that you have the docs that correspond to whichever version of Spark you currently have checked out of revision control.

Generating the Documentation HTML

We include the Spark documentation as part of the source (as opposed to using a hosted wiki, such as the github wiki, as the definitive documentation) to enable the documentation to evolve along with the source code and be captured by revision control (currently git). This way the code automatically includes the version of the documentation that is relevant regardless of which version or release you have checked out or downloaded.

In this directory you will find text files formatted using Markdown, with an ".md" suffix. You can read those text files directly if you want. Start with index.md.

The Markdown code can be compiled to HTML using the Jekyll tool. To use the jekyll command, you will need to have Jekyll installed. The easiest way to do this is via a Ruby Gem; see the Jekyll installation instructions. Compiling the site with Jekyll will create a directory called _site containing index.html as well as the rest of the compiled files.

You can modify the default Jekyll build as follows:

# Skip generating API docs (which takes a while)
$ SKIP_SCALADOC=1 jekyll build
# Serve content locally on port 4000
$ jekyll serve --watch
# Build the site with extra features used on the live page
$ PRODUCTION=1 jekyll build

Pygments

We also use Pygments (http://pygments.org) for syntax highlighting in documentation Markdown pages, so you will also need to install it (it requires Python) by running sudo easy_install Pygments.

To mark a block of code in your Markdown to be syntax highlighted by Jekyll during the compile phase, use the following syntax:

{% highlight scala %}
// Your scala code goes here, you can replace scala with many other
// supported languages too.
{% endhighlight %}

API Docs (Scaladoc and Epydoc)

You can build just the Spark scaladoc by running sbt/sbt doc from the SPARK_PROJECT_ROOT directory.

Similarly, you can build just the PySpark epydoc by running epydoc --config epydoc.conf from the SPARK_PROJECT_ROOT/pyspark directory. Documentation is only generated for classes that are listed as public in __init__.py.

When you run jekyll in the docs directory, it will also copy over the scaladoc for the various Spark subprojects into the docs directory (and then also into the _site directory). We use a jekyll plugin to run sbt/sbt doc before building the site, so if you haven't run it recently, it may take some time as it generates all of the scaladoc. The jekyll plugin also generates the PySpark docs using epydoc.

NOTE: To skip the step of building and copying over the Scala and Python API docs, run SKIP_API=1 jekyll.