spark-instrumented-optimizer/docs
Gabor Somogyi 98a8725e66 [SPARK-27022][DSTREAMS] Add kafka delegation token support.
## What changes were proposed in this pull request?

It adds Kafka delegation token support for DStreams. Please be aware that, since a native Kafka sink is not available for DStreams, this PR covers delegation token usage only on the consumer side.

What this PR contains:
* Usage of the token through dynamic JAAS configuration (see the sketch after this list)
* `KafkaConfigUpdater` moved to `kafka-0-10-token-provider`
* `KafkaSecurityHelper` functionality moved into `KafkaTokenUtil`
* Documentation
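
As a rough illustration only (not taken from this PR's diff), a dynamically generated JAAS configuration for a SCRAM delegation token is passed to the Kafka consumer through the `sasl.jaas.config` property and looks approximately like this, with the token ID and HMAC filled in from the token Spark obtained:

```
# Illustration only -- the actual entry is generated by Spark at runtime
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
    tokenauth="true" \
    username="<delegation token ID>" \
    password="<delegation token HMAC>";
```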

## How was this patch tested?

Existing unit tests, plus testing on a cluster.

Long-running Kafka-to-file tests on a 4-node cluster with randomly thrown artificial exceptions.

Test scenario:

* 4-node cluster
* YARN
* Kafka broker version 2.1.0
* security.protocol = SASL_SSL
* sasl.mechanism = SCRAM-SHA-512

Kafka broker settings:

* delegation.token.expiry.time.ms=600000 (10 min)
* delegation.token.max.lifetime.ms=1200000 (20 min)
* delegation.token.expiry.check.interval.ms=300000 (5 min)

A new delegation token was obtained from the Kafka broker every 7.5 minutes (10 min * 0.75).
When a token expired after 10 minutes (Spark obtains a new one and does not renew the old one), the broker's expiry thread invalidated expired tokens every 5 minutes, and an artificial exception was thrown inside the Spark application (in which case Spark closes the connection); the latest delegation token was then picked up correctly.
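
For reference, a minimal sketch of how the broker settings above would appear in Kafka's `server.properties` for such a test (the SASL_SSL listener and delegation token master key configuration a real setup also needs are omitted here):

```
# Token expiry time if not renewed (10 min); Spark obtained a fresh token every 7.5 min (0.75 * expiry)
delegation.token.expiry.time.ms=600000
# Hard upper bound on a token's total lifetime (20 min)
delegation.token.max.lifetime.ms=1200000
# How often the broker's expiry thread invalidates already expired tokens (5 min)
delegation.token.expiry.check.interval.ms=300000
```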

Documentation changes were verified by building the docs locally:

cd docs/
SKIP_API=1 jekyll build

followed by a manual check of the generated web pages.

Closes #23929 from gaborgsomogyi/SPARK-27022.

Authored-by: Gabor Somogyi <gabor.g.somogyi@gmail.com>
Signed-off-by: Marcelo Vanzin <vanzin@cloudera.com>
2019-03-07 11:36:37 -08:00

| Name | Last commit | Date |
| --- | --- | --- |
| _data | [SPARK-26215][SQL] Define reserved/non-reserved keywords based on the ANSI SQL standard | 2019-02-23 08:38:47 +09:00 |
| _includes | [SPARK-24499][SQL][DOC] Split the page of sql-programming-guide.html to multiple separate pages | 2018-10-18 11:59:06 -07:00 |
| _layouts | [SPARK-24499][SQL][DOC] Split the page of sql-programming-guide.html to multiple separate pages | 2018-10-18 11:59:06 -07:00 |
| _plugins | [SPARK-25956] Make Scala 2.12 as default Scala version in Spark 3.0 | 2018-11-14 16:22:23 -08:00 |
| css | [MINOR][DOC] Fix some typos and grammar issues | 2018-04-06 13:37:08 +08:00 |
| img | [SPARK-22648][K8S] Spark on Kubernetes - Documentation | 2017-12-21 17:21:11 -08:00 |
| js | [SPARK-25754][DOC] Change CDN for MathJax | 2018-10-17 06:52:00 -05:00 |
| _config.yml | [SPARK-26266][BUILD] Update to Scala 2.12.8 | 2018-12-08 05:59:53 -06:00 |
| api.md | [SPARK-21485][SQL][DOCS] Spark SQL documentation generation for built-in functions | 2017-07-26 09:38:51 -07:00 |
| building-spark.md | [MINOR][DOC] Documentation on JVM options for SBT | 2019-01-22 18:27:24 -06:00 |
| cloud-integration.md | [MINOR][DOCS] Clarify that Spark apps should mark Spark as a 'provided' dependency, not package it | 2019-03-05 08:26:30 -06:00 |
| cluster-overview.md | [SPARK-25909] fix documentation on cluster managers | 2018-11-02 11:05:10 -05:00 |
| configuration.md | [SPARK-26792][CORE] Apply custom log URL to Spark UI | 2019-03-04 10:36:04 -08:00 |
| contributing-to-spark.md | [DOCS][MINOR] Fix a few broken links and typos, and, nit, use HTTPS more consistently | 2018-08-22 01:02:17 +08:00 |
| graphx-programming-guide.md | [SPARK-26771][CORE][GRAPHX] Make .unpersist(), .destroy() consistently non-blocking by default | 2019-02-01 18:29:55 -06:00 |
| hadoop-provided.md | [SPARK-6511] [docs] Fix example command in hadoop-provided docs. | 2015-06-11 15:29:03 -07:00 |
| hardware-provisioning.md | [SPARK-25696] The storage memory displayed on spark Application UI is… | 2018-12-10 18:27:01 -06:00 |
| index.md | [SPARK-26807][DOCS] Clarify that Pyspark is on PyPi now | 2019-03-02 14:23:53 +09:00 |
| job-scheduling.md | [SPARK-20220][DOCS] Documentation Add thrift scheduling pool config to scheduling docs | 2018-07-17 09:22:16 +08:00 |
| ml-advanced.md | [MINOR][DOC] Fix some typos and grammar issues | 2018-04-06 13:37:08 +08:00 |
| ml-ann.md | [SPARK-14817][ML][MLLIB][DOC] Made DataFrame-based API primary in MLlib guide | 2016-07-15 13:38:23 -07:00 |
| ml-classification-regression.md | [MINOR] Update all DOI links to preferred resolver | 2018-11-25 17:43:55 -06:00 |
| ml-clustering.md | [SPARK-25997][ML] add Python example code for Power Iteration Clustering in spark.ml | 2019-01-31 19:33:44 -06:00 |
| ml-collaborative-filtering.md | [MINOR] Update all DOI links to preferred resolver | 2018-11-25 17:43:55 -06:00 |
| ml-datasource.md | [MINOR][DOCS] Fix typos | 2018-11-30 09:03:46 -06:00 |
| ml-decision-tree.md | [SPARK-14817][ML][MLLIB][DOC] Made DataFrame-based API primary in MLlib guide | 2016-07-15 13:38:23 -07:00 |
| ml-ensembles.md | [SPARK-14817][ML][MLLIB][DOC] Made DataFrame-based API primary in MLlib guide | 2016-07-15 13:38:23 -07:00 |
| ml-features.md | [SPARK-11215][ML] Add multiple columns support to StringIndexer | 2019-01-29 09:21:25 -06:00 |
| ml-frequent-pattern-mining.md | [MINOR] Update all DOI links to preferred resolver | 2018-11-25 17:43:55 -06:00 |
| ml-guide.md | [SPARK-11215][ML] Add multiple columns support to StringIndexer | 2019-01-29 09:21:25 -06:00 |
| ml-linear-methods.md | [SPARK-14817][ML][MLLIB][DOC] Made DataFrame-based API primary in MLlib guide | 2016-07-15 13:38:23 -07:00 |
| ml-migration-guides.md | [DOCS][MINOR] Fix a few broken links and typos, and, nit, use HTTPS more consistently | 2018-08-22 01:02:17 +08:00 |
| ml-pipeline.md | [MINOR][DOCS] Fix typos | 2018-11-30 09:03:46 -06:00 |
| ml-statistics.md | [SPARK-23254][ML] Add user guide entry and example for DataFrame multivariate summary | 2018-07-11 13:56:09 -05:00 |
| ml-survival-regression.md | [SPARK-14817][ML][MLLIB][DOC] Made DataFrame-based API primary in MLlib guide | 2016-07-15 13:38:23 -07:00 |
| ml-tuning.md | [MINOR][DOC] Fix some typos and grammar issues | 2018-04-06 13:37:08 +08:00 |
| mllib-classification-regression.md | [SPARK-14817][ML][MLLIB][DOC] Made DataFrame-based API primary in MLlib guide | 2016-07-15 13:38:23 -07:00 |
| mllib-clustering.md | [MINOR][DOC] Fix some typos and grammar issues | 2018-04-06 13:37:08 +08:00 |
| mllib-collaborative-filtering.md | [MINOR] Update all DOI links to preferred resolver | 2018-11-25 17:43:55 -06:00 |
| mllib-data-types.md | [SPARK-24628][DOC] Typos of the example code in docs/mllib-data-types.md | 2018-07-18 09:45:56 -05:00 |
| mllib-decision-tree.md | [SPARK-25696] The storage memory displayed on spark Application UI is… | 2018-12-10 18:27:01 -06:00 |
| mllib-dimensionality-reduction.md | [MINOR][DOC] Fix some typos and grammar issues | 2018-04-06 13:37:08 +08:00 |
| mllib-ensembles.md | [SPARK-14817][ML][MLLIB][DOC] Made DataFrame-based API primary in MLlib guide | 2016-07-15 13:38:23 -07:00 |
| mllib-evaluation-metrics.md | [SPARK-26351][MLLIB] Update doc and minor correction in the mllib evaluation metrics | 2019-01-20 18:11:14 -06:00 |
| mllib-feature-extraction.md | [MINOR][DOC] Fix some typos and grammar issues | 2018-04-06 13:37:08 +08:00 |
| mllib-frequent-pattern-mining.md | [MINOR] Update all DOI links to preferred resolver | 2018-11-25 17:43:55 -06:00 |
| mllib-guide.md | [SPARK-14817][ML][MLLIB][DOC] Made DataFrame-based API primary in MLlib guide | 2016-07-15 13:38:23 -07:00 |
| mllib-isotonic-regression.md | [MINOR] Update all DOI links to preferred resolver | 2018-11-25 17:43:55 -06:00 |
| mllib-linear-methods.md | [MINOR][DOCS] Fix typos | 2018-11-30 09:03:46 -06:00 |
| mllib-migration-guides.md | [SPARK-14817][ML][MLLIB][DOC] Made DataFrame-based API primary in MLlib guide | 2016-07-15 13:38:23 -07:00 |
| mllib-naive-bayes.md | [SPARK-14817][ML][MLLIB][DOC] Made DataFrame-based API primary in MLlib guide | 2016-07-15 13:38:23 -07:00 |
| mllib-optimization.md | [MINOR][DOC] Fix some typos and grammar issues | 2018-04-06 13:37:08 +08:00 |
| mllib-pmml-model-export.md | [MINOR][DOC] Fix a few markdown typos | 2018-04-03 09:36:44 +08:00 |
| mllib-statistics.md | [SPARK-19550][BUILD][CORE][WIP] Remove Java 7 support | 2017-02-16 12:32:45 +00:00 |
| monitoring.md | [SPARK-26928][CORE] Add driver CPU Time to the metrics system | 2019-03-05 10:47:39 -08:00 |
| programming-guide.md | [SPARK-21267][SS][DOCS] Update Structured Streaming Documentation | 2017-07-06 17:28:20 -07:00 |
| quick-start.md | [MINOR][DOCS] Clarify that Spark apps should mark Spark as a 'provided' dependency, not package it | 2019-03-05 08:26:30 -06:00 |
| rdd-programming-guide.md | [SPARK-26771][CORE][GRAPHX] Make .unpersist(), .destroy() consistently non-blocking by default | 2019-02-01 18:29:55 -06:00 |
| README.md | [SPARK-25273][DOC] How to install testthat 1.0.2 | 2018-08-30 20:25:26 +08:00 |
| running-on-kubernetes.md | [SPARK-27023][K8S] Make k8s client timeouts configurable | 2019-03-06 11:14:39 -08:00 |
| running-on-mesos.md | [SPARK-26324][DOCS] Add Spark docs for Running in Mesos with SSL | 2018-12-20 08:29:51 -06:00 |
| running-on-yarn.md | [SPARK-26688][YARN] Provide configuration of initially blacklisted YARN nodes | 2019-03-04 14:14:20 -06:00 |
| security.md | [SPARK-26772][YARN] Delete ServiceCredentialProvider and make HadoopDelegationTokenProvider a developer API | 2019-02-15 14:43:13 -08:00 |
| spark-standalone.md | [SPARK-27047] Document stop-slave.sh in spark-standalone | 2019-03-06 09:12:24 -06:00 |
| sparkr.md | [SPARK-19827][R] spark.ml R API for PIC | 2018-12-10 18:28:13 -06:00 |
| sql-data-sources-avro.md | [SPARK-26870][SQL] Move to_avro/from_avro into functions object due to Java compatibility | 2019-02-15 10:24:35 +08:00 |
| sql-data-sources-hive-tables.md | [SPARK-24360][SQL] Support Hive 3.1 metastore | 2019-01-30 20:33:21 -08:00 |
| sql-data-sources-jdbc.md | [SPARK-24423][FOLLOW-UP][SQL] Fix error example | 2018-12-04 07:57:58 -06:00 |
| sql-data-sources-json.md | [SPARK-24499][SQL][DOC] Split the page of sql-programming-guide.html to multiple separate pages | 2018-10-18 11:59:06 -07:00 |
| sql-data-sources-load-save-functions.md | [SPARK-26835][DOCS] Notes API documentation for available options of Data sources in SparkSQL guide | 2019-02-13 08:02:51 -06:00 |
| sql-data-sources-orc.md | [SPARK-24499][SQL][DOC] Split the page of sql-programming-guide.html to multiple separate pages | 2018-10-18 11:59:06 -07:00 |
| sql-data-sources-parquet.md | [MINOR][DOC] Writing to partitioned Hive metastore Parquet tables is not supported for Spark SQL | 2019-02-01 18:34:13 -06:00 |
| sql-data-sources-troubleshooting.md | [SPARK-24499][SQL][DOC] Split the page of sql-programming-guide.html to multiple separate pages | 2018-10-18 11:59:06 -07:00 |
| sql-data-sources.md | [SPARK-24499][SQL][DOC][FOLLOWUP] Fix some broken links | 2018-10-19 23:55:19 -07:00 |
| sql-distributed-sql-engine.md | [SPARK-24499][SQL][DOC] Split the page of sql-programming-guide.html to multiple separate pages | 2018-10-18 11:59:06 -07:00 |
| sql-getting-started.md | [MINOR][DOCS][WIP] Fix Typos | 2018-11-29 10:39:00 -06:00 |
| sql-migration-guide-hive-compatibility.md | [SPARK-26091][SQL] Upgrade to 2.3.4 for Hive Metastore Client 2.3 | 2018-11-17 03:28:43 -08:00 |
| sql-migration-guide-upgrade.md | [SPARK-27035][SQL] Get more precise current time | 2019-03-06 08:32:16 -06:00 |
| sql-migration-guide.md | [SPARK-24499][SQL][DOC] Split the page of sql-programming-guide.html to multiple separate pages | 2018-10-18 11:59:06 -07:00 |
| sql-performance-tuning.md | [SPARK-24499][SQL][DOC][FOLLOW-UP] Fix spelling in doc | 2018-10-23 12:19:31 +08:00 |
| sql-programming-guide.md | [MINOR][DOCS][WIP] Fix Typos | 2018-11-29 10:39:00 -06:00 |
| sql-pyspark-pandas-with-arrow.md | [MINOR][DOCS][WIP] Fix Typos | 2018-11-29 10:39:00 -06:00 |
| sql-reference.md | [MINOR][DOCS][WIP] Fix Typos | 2018-11-29 10:39:00 -06:00 |
| sql-reserved-and-non-reserved-keywords.md | [SPARK-26982][SQL] Enhance describe framework to describe the output of a query. | 2019-03-02 11:21:23 +08:00 |
| storage-openstack-swift.md | [MINOR][DOC] Fix some typos and grammar issues | 2018-04-06 13:37:08 +08:00 |
| streaming-custom-receivers.md | [SPARK-25598][STREAMING][BUILD][TEST-MAVEN] Remove flume connector in Spark 3 | 2018-10-11 14:28:06 -07:00 |
| streaming-kafka-0-10-integration.md | [SPARK-27022][DSTREAMS] Add kafka delegation token support. | 2019-03-07 11:36:37 -08:00 |
| streaming-kafka-integration.md | [SPARK-25705][BUILD][STREAMING][TEST-MAVEN] Remove Kafka 0.8 integration | 2018-10-16 09:10:24 -05:00 |
| streaming-kinesis-integration.md | [SPARK-25696] The storage memory displayed on spark Application UI is… | 2018-12-10 18:27:01 -06:00 |
| streaming-programming-guide.md | [MINOR][DOCS] Clarify that Spark apps should mark Spark as a 'provided' dependency, not package it | 2019-03-05 08:26:30 -06:00 |
| structured-streaming-kafka-integration.md | [SPARK-26592][SS][DOC] Add Kafka proxy user caveat to documentation | 2019-03-05 09:58:51 -08:00 |
| structured-streaming-programming-guide.md | [MINOR][DOCS] Fix for contradiction in condition formula of keeping intermediate state of window in structured streaming docs | 2019-02-13 08:01:20 -06:00 |
| submitting-applications.md | [MINOR][DOC] Fix some typos and grammar issues | 2018-04-06 13:37:08 +08:00 |
| tuning.md | [SPARK-25696] The storage memory displayed on spark Application UI is… | 2018-12-10 18:27:01 -06:00 |

Welcome to the Spark documentation!

This readme will walk you through navigating and building the Spark documentation, which is included here with the Spark source code. You can also find documentation specific to release versions of Spark at https://spark.apache.org/documentation.html.

Read on to learn more about viewing documentation in plain text (i.e., markdown) or building the documentation yourself. Why build it yourself? So that you have the docs that correspond to whichever version of Spark you currently have checked out of revision control.

## Prerequisites

The Spark documentation build uses a number of tools to build HTML docs and API docs in Scala, Java, Python, R and SQL.

You need to have Ruby and Python installed. Also install the following libraries:

$ sudo gem install jekyll jekyll-redirect-from pygments.rb
$ sudo pip install Pygments
# Following is needed only for generating API docs
$ sudo pip install sphinx pypandoc mkdocs
$ sudo Rscript -e 'install.packages(c("knitr", "devtools", "rmarkdown"), repos="http://cran.stat.ucla.edu/")'
$ sudo Rscript -e 'devtools::install_version("roxygen2", version = "5.0.1", repos="http://cran.stat.ucla.edu/")'
$ sudo Rscript -e 'devtools::install_version("testthat", version = "1.0.2", repos="http://cran.stat.ucla.edu/")'
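
Optionally, a quick sanity check (not part of the official instructions, just a hypothetical convenience) that the installed toolchain is visible on your PATH:

```sh
# Hypothetical sanity check of the doc build toolchain
jekyll --version
python -c "import sphinx, pygments; print('python doc dependencies OK')"
Rscript -e 'packageVersion("roxygen2"); packageVersion("testthat")'
```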

Note: If you are on a system with both Ruby 1.9 and Ruby 2.0 you may need to replace gem with gem2.0.

Note: Other versions of roxygen2 might work for SparkR documentation generation, but the RoxygenNote field in $SPARK_HOME/R/pkg/DESCRIPTION is 5.0.1, which gets updated if the installed version is mismatched.

## Generating the Documentation HTML

We include the Spark documentation as part of the source (as opposed to using a hosted wiki, such as the github wiki, as the definitive documentation) to enable the documentation to evolve along with the source code and be captured by revision control (currently git). This way the code automatically includes the version of the documentation that is relevant regardless of which version or release you have checked out or downloaded.

In this directory you will find text files formatted using Markdown, with an ".md" suffix. You can read those text files directly if you want. Start with index.md.

Execute jekyll build from the docs/ directory to compile the site. Compiling the site with Jekyll will create a directory called _site containing index.html as well as the rest of the compiled files.

$ cd docs
$ jekyll build

You can modify the default Jekyll build as follows:

# Skip generating API docs (which takes a while)
$ SKIP_API=1 jekyll build

# Serve content locally on port 4000
$ jekyll serve --watch

# Build the site with extra features used on the live page
$ PRODUCTION=1 jekyll build

## API Docs (Scaladoc, Javadoc, Sphinx, roxygen2, MkDocs)

You can build just the Spark scaladoc and javadoc by running build/sbt unidoc from the $SPARK_HOME directory.

Similarly, you can build just the PySpark docs by running make html from the $SPARK_HOME/python/docs directory. Documentation is only generated for classes that are listed as public in __init__.py. The SparkR docs can be built by running $SPARK_HOME/R/create-docs.sh, and the SQL docs can be built by running $SPARK_HOME/sql/create-docs.sh after Spark itself has been built.
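
Putting these together, a sketch of running the individual API doc builds (paths relative to $SPARK_HOME, exactly as described above):

```sh
cd $SPARK_HOME && build/sbt unidoc       # Scaladoc and Javadoc
cd $SPARK_HOME/python/docs && make html  # PySpark docs (Sphinx)
$SPARK_HOME/R/create-docs.sh             # SparkR docs (roxygen2)
$SPARK_HOME/sql/create-docs.sh           # SQL docs (MkDocs); requires Spark to be built first
```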

When you run jekyll build in the docs directory, it will also copy over the scaladoc and javadoc for the various Spark subprojects into the docs directory (and then also into the _site directory). We use a jekyll plugin to run build/sbt unidoc before building the site, so if you haven't run it (recently) it may take some time as it generates all of the scaladoc and javadoc using Unidoc. The jekyll plugin also generates the PySpark docs using Sphinx, the SparkR docs using roxygen2, and the SQL docs using MkDocs.

NOTE: To skip the step of building and copying over the Scala, Java, Python, R and SQL API docs, run SKIP_API=1 jekyll build. In addition, SKIP_SCALADOC=1, SKIP_PYTHONDOC=1, SKIP_RDOC=1 and SKIP_SQLDOC=1 can be used to skip the step for the corresponding language only; SKIP_SCALADOC skips both the Scala and Java docs.
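
For example, to build the site while skipping only the Python and R API docs (using the flags described above):

```sh
SKIP_PYTHONDOC=1 SKIP_RDOC=1 jekyll build
```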