# [SPARK-16630][YARN] Blacklist a node if executors won't launch on it (commit b56e9c613f by attilapiros)
## What changes were proposed in this pull request?

This change extends YARN resource allocation handling with blacklisting functionality.
It handles cases where a node is broken or misconfigured in such a way that containers won't launch on it. Before this change, blacklisting focused only on task execution; this change introduces `YarnAllocatorBlacklistTracker`, which tracks allocation failures per host (when enabled via `spark.yarn.blacklist.executor.launch.blacklisting.enabled`).
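The per-host tracking idea can be illustrated with a small sketch. This is not the actual Spark code (the real tracker is the Scala class `YarnAllocatorBlacklistTracker`, wired into the YARN allocator); the class name, threshold parameter, and method names below are hypothetical:

```python
# Illustrative sketch only: count failed container launches per host and
# blacklist a host once it keeps failing. All names here are hypothetical;
# the real logic lives in Scala and talks to the YARN AMRM client.
from collections import defaultdict


class AllocatorBlacklistTracker:
    def __init__(self, max_failures_per_host=2, enabled=True):
        # 'enabled' mirrors the idea behind
        # "spark.yarn.blacklist.executor.launch.blacklisting.enabled"
        self.enabled = enabled
        self.max_failures_per_host = max_failures_per_host
        self.failures = defaultdict(int)
        self.blacklisted = set()

    def handle_allocation_failure(self, host):
        """Record a failed container launch; blacklist the host past the threshold."""
        if not self.enabled:
            return
        self.failures[host] += 1
        if self.failures[host] >= self.max_failures_per_host:
            self.blacklisted.add(host)

    def is_blacklisted(self, host):
        return host in self.blacklisted


tracker = AllocatorBlacklistTracker(max_failures_per_host=2)
tracker.handle_allocation_failure("apiros-4.gce.test.com")
tracker.handle_allocation_failure("apiros-4.gce.test.com")
print(tracker.is_blacklisted("apiros-4.gce.test.com"))  # True
```

The key point the sketch captures is that an allocation failure is attributed to the *host*, independently of any task-level blacklisting, so a node that can never launch a container stops receiving container requests.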

## How was this patch tested?

### With unit tests

Including a new suite: `YarnAllocatorBlacklistTrackerSuite`.

### Manually

It was tested on a cluster by deleting the Spark jars on one of the nodes.

#### Behaviour before these changes

Starting Spark as:
```
spark2-shell --master yarn --deploy-mode client --num-executors 4  --conf spark.executor.memory=4g --conf "spark.yarn.max.executor.failures=6"
```

Log is:
```
18/04/12 06:49:36 INFO yarn.ApplicationMaster: Final app status: FAILED, exitCode: 11, (reason: Max number of executor failures (6) reached)
18/04/12 06:49:39 INFO yarn.ApplicationMaster: Unregistering ApplicationMaster with FAILED (diag message: Max number of executor failures (6) reached)
18/04/12 06:49:39 INFO impl.AMRMClientImpl: Waiting for application to be successfully unregistered.
18/04/12 06:49:39 INFO yarn.ApplicationMaster: Deleting staging directory hdfs://apiros-1.gce.test.com:8020/user/systest/.sparkStaging/application_1523459048274_0016
18/04/12 06:49:39 INFO util.ShutdownHookManager: Shutdown hook called
```

#### Behaviour after these changes

Starting Spark as:
```
spark2-shell --master yarn --deploy-mode client --num-executors 4  --conf spark.executor.memory=4g --conf "spark.yarn.max.executor.failures=6" --conf "spark.yarn.blacklist.executor.launch.blacklisting.enabled=true"
```

And the log is:
```
18/04/13 05:37:43 INFO yarn.YarnAllocator: Will request 1 executor container(s), each with 1 core(s) and 4505 MB memory (including 409 MB of overhead)
18/04/13 05:37:43 INFO yarn.YarnAllocator: Submitted 1 unlocalized container requests.
18/04/13 05:37:43 INFO yarn.YarnAllocator: Launching container container_1523459048274_0025_01_000008 on host apiros-4.gce.test.com for executor with ID 6
18/04/13 05:37:43 INFO yarn.YarnAllocator: Received 1 containers from YARN, launching executors on 1 of them.
18/04/13 05:37:43 INFO yarn.YarnAllocator: Completed container container_1523459048274_0025_01_000007 on host: apiros-4.gce.test.com (state: COMPLETE, exit status: 1)
18/04/13 05:37:43 INFO yarn.YarnAllocatorBlacklistTracker: blacklisting host as YARN allocation failed: apiros-4.gce.test.com
18/04/13 05:37:43 INFO yarn.YarnAllocatorBlacklistTracker: adding nodes to YARN application master's blacklist: List(apiros-4.gce.test.com)
18/04/13 05:37:43 WARN yarn.YarnAllocator: Container marked as failed: container_1523459048274_0025_01_000007 on host: apiros-4.gce.test.com. Exit status: 1. Diagnostics: Exception from container-launch.
Container id: container_1523459048274_0025_01_000007
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
        at org.apache.hadoop.util.Shell.runCommand(Shell.java:604)
        at org.apache.hadoop.util.Shell.run(Shell.java:507)
        at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:789)
        at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:213)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
```

Where the most important part is:

```
18/04/13 05:37:43 INFO yarn.YarnAllocatorBlacklistTracker: blacklisting host as YARN allocation failed: apiros-4.gce.test.com
18/04/13 05:37:43 INFO yarn.YarnAllocatorBlacklistTracker: adding nodes to YARN application master's blacklist: List(apiros-4.gce.test.com)
```

And execution continued (no shutdown was called).

### Testing the blacklisting of the whole cluster

Spark was started with YARN blacklisting enabled, then the Spark core jar was removed from the cluster nodes one by one. A simple Spark job was then executed and failed as expected, and the YARN log contained the expected exit status:

```
18/06/15 01:07:10 INFO yarn.ApplicationMaster: Final app status: FAILED, exitCode: 11, (reason: Due to executor failures all available nodes are blacklisted)
18/06/15 01:07:13 INFO util.ShutdownHookManager: Shutdown hook called
```
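The abort condition exercised here can be sketched as a simple check (the function and message wiring below are hypothetical; in Spark this decision is made inside the YARN application master):

```python
# Illustrative sketch: once every schedulable node is blacklisted, the
# application master has nowhere left to launch executors and must fail
# the application instead of retrying forever.
def should_fail_application(cluster_nodes, blacklisted_nodes):
    """Return a failure reason when no node remains for executor launches,
    otherwise None."""
    usable = set(cluster_nodes) - set(blacklisted_nodes)
    if not usable:
        return "Due to executor failures all available nodes are blacklisted"
    return None


nodes = {"apiros-1", "apiros-2", "apiros-3"}
print(should_fail_application(nodes, nodes))
```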

Author: “attilapiros” <piros.attila.zsolt@gmail.com>

Closes #21068 from attilapiros/SPARK-16630.
2018-06-21 09:17:18 -05:00

Welcome to the Spark documentation!

This readme will walk you through navigating and building the Spark documentation, which is included here with the Spark source code. You can also find documentation specific to release versions of Spark at http://spark.apache.org/documentation.html.

Read on to learn more about viewing documentation in plain text (i.e., markdown) or building the documentation yourself. Why build it yourself? So that you have the docs that correspond to whichever version of Spark you currently have checked out of revision control.

## Prerequisites

The Spark documentation build uses a number of tools to build HTML docs and API docs in Scala, Java, Python, R and SQL.

You need to have Ruby and Python installed. Also install the following libraries:

```
$ sudo gem install jekyll jekyll-redirect-from pygments.rb
$ sudo pip install Pygments
# Following is needed only for generating API docs
$ sudo pip install sphinx pypandoc mkdocs
$ sudo Rscript -e 'install.packages(c("knitr", "devtools", "testthat", "rmarkdown"), repos="http://cran.stat.ucla.edu/")'
$ sudo Rscript -e 'devtools::install_version("roxygen2", version = "5.0.1", repos="http://cran.stat.ucla.edu/")'
```

Note: If you are on a system with both Ruby 1.9 and Ruby 2.0 you may need to replace `gem` with `gem2.0`.

Note: Other versions of roxygen2 might work for generating the SparkR documentation, but the `RoxygenNote` field in `$SPARK_HOME/R/pkg/DESCRIPTION` is set to 5.0.1 and is rewritten if the installed version does not match.

## Generating the Documentation HTML

We include the Spark documentation as part of the source (as opposed to using a hosted wiki, such as the github wiki, as the definitive documentation) to enable the documentation to evolve along with the source code and be captured by revision control (currently git). This way the code automatically includes the version of the documentation that is relevant regardless of which version or release you have checked out or downloaded.

In this directory you will find text files formatted using Markdown, with an `.md` suffix. You can read those text files directly if you want. Start with `index.md`.

Execute `jekyll build` from the `docs/` directory to compile the site. Compiling the site with Jekyll will create a directory called `_site` containing `index.html` as well as the rest of the compiled files.

```
$ cd docs
$ jekyll build
```

You can modify the default Jekyll build as follows:

```
# Skip generating API docs (which takes a while)
$ SKIP_API=1 jekyll build

# Serve content locally on port 4000
$ jekyll serve --watch

# Build the site with extra features used on the live page
$ PRODUCTION=1 jekyll build
```

## API Docs (Scaladoc, Javadoc, Sphinx, roxygen2, MkDocs)

You can build just the Spark scaladoc and javadoc by running `build/sbt unidoc` from the `$SPARK_HOME` directory.

Similarly, you can build just the PySpark docs by running `make html` from the `$SPARK_HOME/python/docs` directory. Documentation is only generated for classes that are listed as public in `__init__.py`. The SparkR docs can be built by running `$SPARK_HOME/R/create-docs.sh`, and the SQL docs can be built by running `$SPARK_HOME/sql/create-docs.sh` after building Spark first.

When you run `jekyll build` in the `docs` directory, it will also copy over the scaladoc and javadoc for the various Spark subprojects into the `docs` directory (and then also into the `_site` directory). We use a Jekyll plugin to run `build/sbt unidoc` before building the site, so if you haven't run it recently it may take some time, as it generates all of the scaladoc and javadoc using Unidoc. The Jekyll plugin also generates the PySpark docs using Sphinx, the SparkR docs using roxygen2, and the SQL docs using MkDocs.

NOTE: To skip the step of building and copying over the Scala, Java, Python, R and SQL API docs, run `SKIP_API=1 jekyll build`. In addition, `SKIP_SCALADOC=1`, `SKIP_PYTHONDOC=1`, `SKIP_RDOC=1` and `SKIP_SQLDOC=1` can be used to skip a single step for the corresponding language. `SKIP_SCALADOC` skips both the Scala and Java docs.