
Welcome to the Spark documentation!

This readme will walk you through navigating and building the Spark documentation, which is included here with the Spark source code. You can also find documentation specific to release versions of Spark at https://spark.apache.org/documentation.html.

Read on to learn more about viewing documentation in plain text (i.e., markdown) or building the documentation yourself. Why build it yourself? So that you have the docs that correspond to whichever version of Spark you currently have checked out of revision control.

Prerequisites

The Spark documentation build uses a number of tools to build HTML docs and API docs in Scala, Java, Python, R and SQL.

You need to have Ruby and Python installed. Also install the following libraries:

$ sudo gem install jekyll jekyll-redirect-from pygments.rb
$ sudo pip install Pygments
# Following is needed only for generating API docs
$ sudo pip install sphinx pypandoc mkdocs
$ sudo Rscript -e 'install.packages(c("knitr", "devtools", "rmarkdown"), repos="http://cran.stat.ucla.edu/")'
$ sudo Rscript -e 'devtools::install_version("roxygen2", version = "5.0.1", repos="http://cran.stat.ucla.edu/")'
$ sudo Rscript -e 'devtools::install_version("testthat", version = "1.0.2", repos="http://cran.stat.ucla.edu/")'

Note: If you are on a system with both Ruby 1.9 and Ruby 2.0, you may need to replace gem with gem2.0.
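
On such a system, for example, the first install command above becomes:

$ sudo gem2.0 install jekyll jekyll-redirect-from pygments.rb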

Note: Other versions of roxygen2 might work for generating the SparkR documentation, but the RoxygenNote field in $SPARK_HOME/R/pkg/DESCRIPTION is set to 5.0.1 and is rewritten whenever the installed version does not match it.
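
To confirm which roxygen2 version is installed, you can query it from R (a quick sanity check, not an official build step):

$ Rscript -e 'packageVersion("roxygen2")'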

Generating the Documentation HTML

We include the Spark documentation as part of the source (as opposed to using a hosted wiki, such as the GitHub wiki, as the definitive documentation) so that the documentation evolves along with the source code and is captured by revision control (currently git). This way the code automatically includes the version of the documentation that is relevant to whichever version or release you have checked out or downloaded.

In this directory you will find text files formatted using Markdown, with an ".md" suffix. You can read those text files directly if you want. Start with index.md.

Execute jekyll build from the docs/ directory to compile the site. Compiling the site with Jekyll will create a directory called _site containing index.html as well as the rest of the compiled files.

$ cd docs
$ jekyll build

You can modify the default Jekyll build as follows:

# Skip generating API docs (which takes a while)
$ SKIP_API=1 jekyll build

# Serve content locally on port 4000
$ jekyll serve --watch

# Build the site with extra features used on the live page
$ PRODUCTION=1 jekyll build

API Docs (Scaladoc, Javadoc, Sphinx, roxygen2, MkDocs)

You can build just the Spark scaladoc and javadoc by running build/sbt unidoc from the $SPARK_HOME directory.

Similarly, you can build just the PySpark docs by running make html from the $SPARK_HOME/python/docs directory. Documentation is only generated for classes that are listed as public in __init__.py. The SparkR docs can be built by running $SPARK_HOME/R/create-docs.sh, and the SQL docs by running $SPARK_HOME/sql/create-docs.sh, in both cases after first building Spark itself.
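
For example, assuming Spark itself has already been built, each set of API docs can be generated on its own with the commands described above:

$ cd $SPARK_HOME && build/sbt unidoc         # Scala and Java API docs
$ cd $SPARK_HOME/python/docs && make html    # PySpark API docs
$ $SPARK_HOME/R/create-docs.sh               # SparkR API docs
$ $SPARK_HOME/sql/create-docs.sh             # SQL API docs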

When you run jekyll build in the docs directory, it will also copy over the scaladoc and javadoc for the various Spark subprojects into the docs directory (and then also into the _site directory). We use a Jekyll plugin to run build/sbt unidoc before building the site, so if you haven't run it recently, this may take some time as it generates all of the scaladoc and javadoc using Unidoc. The Jekyll plugin also generates the PySpark docs using Sphinx, the SparkR docs using roxygen2 and the SQL docs using MkDocs.

NOTE: To skip the step of building and copying over the Scala, Java, Python, R and SQL API docs, run SKIP_API=1 jekyll build. In addition, SKIP_SCALADOC=1, SKIP_PYTHONDOC=1, SKIP_RDOC=1 and SKIP_SQLDOC=1 can be used to skip the step for a single corresponding language; SKIP_SCALADOC skips both the Scala and Java docs.
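
For example, to build the site with only the Scala and Java API docs, skipping the Python, R and SQL steps:

$ SKIP_PYTHONDOC=1 SKIP_RDOC=1 SKIP_SQLDOC=1 jekyll build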