spark-instrumented-optimizer/python/docs/index.rst
Nicholas Chammas 2dd0388617 [SPARK-16772][PYTHON][DOCS] Fix API doc references to UDFRegistration + Update "important classes"
## Proposed Changes

* Update the list of "important classes" in `pyspark.sql` to match 2.0.
* Fix references to `UDFRegistration` so that the class shows up in the docs. It currently [doesn't](http://spark.apache.org/docs/latest/api/python/pyspark.sql.html).
* Remove some unnecessary whitespace in the Python RST doc files.
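For context, a Sphinx `:class:` cross-reference only resolves when the target is referenced by a path Sphinx can import and has documented. A minimal sketch of the kind of fix involved (the exact references changed are an assumption, not the actual diff):

```rst
.. Broken: a bare class name used outside its own module does not
   resolve, so no link is generated in the rendered docs.
:class:`UDFRegistration`

.. Working: the fully qualified path lets Sphinx resolve and link
   the reference from anywhere in the project.
:class:`pyspark.sql.UDFRegistration`
```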

I reused the [existing JIRA](https://issues.apache.org/jira/browse/SPARK-16772) I created last week for similar API doc fixes.

## How was this patch tested?

* I ran `lint-python` successfully.
* I ran `make clean build` on the Python docs and confirmed the results are as expected locally in my browser.

Author: Nicholas Chammas <nicholas.chammas@gmail.com>

Closes #14496 from nchammas/SPARK-16772-UDFRegistration.
2016-08-06 05:02:59 +01:00


.. pyspark documentation master file, created by
   sphinx-quickstart on Thu Aug 28 15:17:47 2014.
   You can adapt this file completely to your liking, but it should at least
   contain the root `toctree` directive.

Welcome to Spark Python API Docs!
===================================

Contents:

.. toctree::
   :maxdepth: 2

   pyspark
   pyspark.sql
   pyspark.streaming
   pyspark.ml
   pyspark.mllib


Core classes:
---------------

    :class:`pyspark.SparkContext`

    Main entry point for Spark functionality.

    :class:`pyspark.RDD`

    A Resilient Distributed Dataset (RDD), the basic abstraction in Spark.

    :class:`pyspark.streaming.StreamingContext`

    Main entry point for Spark Streaming functionality.

    :class:`pyspark.streaming.DStream`

    A Discretized Stream (DStream), the basic abstraction in Spark Streaming.

    :class:`pyspark.sql.SQLContext`

    Main entry point for DataFrame and SQL functionality.

    :class:`pyspark.sql.DataFrame`

    A distributed collection of data grouped into named columns.


Indices and tables
==================

* :ref:`search`