[SPARK-10440] [STREAMING] [DOCS] Update python API stuff in the programming guides and python docs

- Fixed information around the Python API tags in the streaming programming guides
- Added the missing classes (DStream, SQLContext) and external modules (Kinesis, Flume, MQTT) to the Python API docs

Author: Tathagata Das <tathagata.das1565@gmail.com>

Closes #8595 from tdas/SPARK-10440.
Authored by Tathagata Das on 2015-09-04 23:16:39 -10:00; committed by Reynold Xin
parent 6c751940ea
commit 7a4f326c00
4 changed files with 33 additions and 12 deletions

@@ -5,8 +5,6 @@ title: Spark Streaming + Flume Integration Guide
 [Apache Flume](https://flume.apache.org/) is a distributed, reliable, and available service for efficiently collecting, aggregating, and moving large amounts of log data. Here we explain how to configure Flume and Spark Streaming to receive data from Flume. There are two approaches to this.
-<span class="badge" style="background-color: grey">Python API</span> Flume is not yet available in the Python API.
 ## Approach 1: Flume-style Push-based Approach
 Flume is designed to push data between Flume agents. In this approach, Spark Streaming essentially sets up a receiver that acts as an Avro agent for Flume, to which Flume can push the data. Here are the configuration steps.
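
To make the push-based approach concrete, here is a minimal PySpark sketch of the receiver side, assuming Spark 1.5+ where `FlumeUtils` is available in Python; the app name, host, and port are placeholder values:

```python
from pyspark import SparkContext
from pyspark.streaming import StreamingContext
from pyspark.streaming.flume import FlumeUtils

sc = SparkContext(appName="FlumePushExample")  # placeholder app name
ssc = StreamingContext(sc, 2)  # 2-second batch interval

# Bind an Avro-compatible receiver on this host/port; Flume's Avro sink
# pushes events here. Each element is a (headers, body) pair.
events = FlumeUtils.createStream(ssc, "localhost", 9999)
events.map(lambda event: event[1]).pprint()  # print event bodies

ssc.start()
ssc.awaitTermination()
```

As with the other external sources, such a job is submitted with the corresponding spark-streaming-flume assembly JAR on the classpath, per the deployment notes in the integration guide.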

@@ -50,13 +50,7 @@ all of which are presented in this guide.
 You will find tabs throughout this guide that let you choose between code snippets of
 different languages.
-**Note:** Python API for Spark Streaming has been introduced in Spark 1.2. It has all the DStream
-transformations and almost all the output operations available in Scala and Java interfaces.
-However, it only has support for basic sources like text files and text data over sockets.
-APIs for additional sources, like Kafka and Flume, will be available in the future.
-Further information about available features in the Python API are mentioned throughout this
-document; look out for the tag
-<span class="badge" style="background-color: grey">Python API</span>.
+**Note:** There are a few APIs that are either different or not available in Python. Throughout this guide, you will find the tag <span class="badge" style="background-color: grey">Python API</span> highlighting these differences.
 ***************************************************************************************************
@@ -683,7 +677,7 @@ for Java, and [StreamingContext](api/python/pyspark.streaming.html#pyspark.streaming.StreamingContext) for Python.
 {:.no_toc}
 <span class="badge" style="background-color: grey">Python API</span> As of Spark {{site.SPARK_VERSION_SHORT}},
-out of these sources, *only* Kafka, Flume and MQTT are available in the Python API. We will add more advanced sources in the Python API in future.
+out of these sources, Kafka, Kinesis, Flume and MQTT are available in the Python API.
 This category of sources require interfacing with external non-Spark libraries, some of them with
 complex dependencies (e.g., Kafka and Flume). Hence, to minimize issues related to version conflicts
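
As an illustration of one of these Python-supported advanced sources, here is a minimal receiver-based Kafka sketch, assuming Spark 1.5+; the ZooKeeper quorum, consumer group, and topic map are placeholders:

```python
from pyspark import SparkContext
from pyspark.streaming import StreamingContext
from pyspark.streaming.kafka import KafkaUtils

sc = SparkContext(appName="KafkaStreamExample")  # placeholder app name
ssc = StreamingContext(sc, 2)

# Receiver-based stream: connects through ZooKeeper; the dict maps each
# topic to the number of receiver threads. All values are placeholders.
kvs = KafkaUtils.createStream(ssc, "zk-host:2181", "example-group", {"example-topic": 1})
kvs.map(lambda kv: kv[1]).pprint()  # messages arrive as (key, value) pairs

ssc.start()
ssc.awaitTermination()
```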
@@ -725,9 +719,9 @@ Some of these advanced sources are as follows.
 - **Kafka:** Spark Streaming {{site.SPARK_VERSION_SHORT}} is compatible with Kafka 0.8.2.1. See the [Kafka Integration Guide](streaming-kafka-integration.html) for more details.
-- **Flume:** Spark Streaming {{site.SPARK_VERSION_SHORT}} is compatible with Flume 1.4.0. See the [Flume Integration Guide](streaming-flume-integration.html) for more details.
+- **Flume:** Spark Streaming {{site.SPARK_VERSION_SHORT}} is compatible with Flume 1.6.0. See the [Flume Integration Guide](streaming-flume-integration.html) for more details.
-- **Kinesis:** See the [Kinesis Integration Guide](streaming-kinesis-integration.html) for more details.
+- **Kinesis:** Spark Streaming {{site.SPARK_VERSION_SHORT}} is compatible with Kinesis Client Library 1.2.1. See the [Kinesis Integration Guide](streaming-kinesis-integration.html) for more details.
 - **Twitter:** Spark Streaming's TwitterUtils uses Twitter4j 3.0.3 to get the public stream of tweets using
 [Twitter's Streaming API](https://dev.twitter.com/docs/streaming-apis). Authentication information
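
Since Kinesis is one of the sources this commit newly documents for Python, a minimal sketch may help; it assumes the Spark 1.5 `pyspark.streaming.kinesis` API, and the app name, stream name, endpoint, and region are placeholders:

```python
from pyspark import SparkContext
from pyspark.streaming import StreamingContext
from pyspark.streaming.kinesis import KinesisUtils, InitialPositionInStream

sc = SparkContext(appName="KinesisStreamExample")  # placeholder app name
ssc = StreamingContext(sc, 2)

# KCL-based receiver. Positional args: Kinesis app name (used by the KCL as
# its checkpoint table name), stream name, endpoint URL, region, initial
# position, and checkpoint interval in seconds. All values are placeholders.
records = KinesisUtils.createStream(
    ssc, "exampleKinesisApp", "exampleStream",
    "https://kinesis.us-east-1.amazonaws.com", "us-east-1",
    InitialPositionInStream.LATEST, 2)
records.pprint()

ssc.start()
ssc.awaitTermination()
```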

@@ -29,6 +29,14 @@ Core classes:
 A Resilient Distributed Dataset (RDD), the basic abstraction in Spark.
 :class:`pyspark.streaming.StreamingContext`
 Main entry point for Spark Streaming functionality.
+:class:`pyspark.streaming.DStream`
+A Discretized Stream (DStream), the basic abstraction in Spark Streaming.
+:class:`pyspark.sql.SQLContext`
+Main entry point for DataFrame and SQL functionality.
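
The relationship between the newly listed classes is easiest to see in the canonical network word count: a `StreamingContext` produces a `DStream`, whose transformations mirror RDD operations. A minimal sketch, with host and port as placeholders:

```python
from pyspark import SparkContext
from pyspark.streaming import StreamingContext

sc = SparkContext(appName="NetworkWordCount")  # placeholder app name
ssc = StreamingContext(sc, 1)  # 1-second batches

# socketTextStream returns a DStream of lines read from the socket.
lines = ssc.socketTextStream("localhost", 9999)
counts = (lines.flatMap(lambda line: line.split(" "))
               .map(lambda word: (word, 1))
               .reduceByKey(lambda a, b: a + b))
counts.pprint()  # print a few counts from each batch

ssc.start()
ssc.awaitTermination()
```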

@@ -15,3 +15,24 @@ pyspark.streaming.kafka module
     :members:
     :undoc-members:
     :show-inheritance:
+
+pyspark.streaming.kinesis module
+--------------------------------
+.. automodule:: pyspark.streaming.kinesis
+    :members:
+    :undoc-members:
+    :show-inheritance:
+
+pyspark.streaming.flume module
+------------------------------
+.. automodule:: pyspark.streaming.flume
+    :members:
+    :undoc-members:
+    :show-inheritance:
+
+pyspark.streaming.mqtt module
+-----------------------------
+.. automodule:: pyspark.streaming.mqtt
+    :members:
+    :undoc-members:
+    :show-inheritance:
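
For completeness, a minimal sketch against the newly documented `pyspark.streaming.mqtt` module, assuming Spark 1.5 where `MQTTUtils` ships as an external module; the broker URL and topic are placeholders:

```python
from pyspark import SparkContext
from pyspark.streaming import StreamingContext
from pyspark.streaming.mqtt import MQTTUtils

sc = SparkContext(appName="MQTTStreamExample")  # placeholder app name
ssc = StreamingContext(sc, 2)

# Subscribe to one MQTT topic; each DStream element is a message body.
lines = MQTTUtils.createStream(ssc, "tcp://broker.example.com:1883", "example/topic")
lines.pprint()

ssc.start()
ssc.awaitTermination()
```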