Welcome to the Spark documentation!
This readme will walk you through navigating and building the Spark documentation, which is included here with the Spark source code. You can also find documentation specific to release versions of Spark at http://spark.apache.org/documentation.html.
Read on to learn more about viewing documentation in plain text (i.e., markdown) or building the documentation yourself. Why build it yourself? So that you have the docs that correspond to whichever version of Spark you currently have checked out of revision control.
## Prerequisites
The Spark documentation build uses a number of tools to build HTML docs and API docs in Scala, Python and R.
You need to have Ruby and Python installed. Also install the following libraries:
```sh
$ sudo gem install jekyll jekyll-redirect-from pygments.rb
$ sudo pip install Pygments
# Following is needed only for generating API docs
$ sudo pip install sphinx pypandoc
$ sudo Rscript -e 'install.packages(c("knitr", "devtools", "roxygen2", "testthat", "rmarkdown"), repos="http://cran.stat.ucla.edu/")'
```
(Note: If you are on a system with both Ruby 1.9 and Ruby 2.0 you may need to replace `gem` with `gem2.0`.)
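If it is unclear whether the tool chain above is already available, a quick sanity check such as the following can help; this is only a minimal sketch of checking that each tool is on the PATH, not a statement of the exact versions required:

```sh
# Confirm the documentation build tools are installed and on the PATH
$ ruby --version
$ python --version
$ jekyll --version
$ R --version
```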
## Generating the Documentation HTML
We include the Spark documentation as part of the source (as opposed to using a hosted wiki, such as the github wiki, as the definitive documentation) to enable the documentation to evolve along with the source code and be captured by revision control (currently git). This way the code automatically includes the version of the documentation that is relevant regardless of which version or release you have checked out or downloaded.
In this directory you will find text files formatted using Markdown, with an `.md` suffix. You can read those text files directly if you want. Start with `index.md`.
Execute `jekyll build` from the `docs/` directory to compile the site. Compiling the site with Jekyll will create a directory called `_site` containing `index.html` as well as the rest of the compiled files.

```sh
$ cd docs
$ jekyll build
```
You can modify the default Jekyll build as follows:
```sh
# Skip generating API docs (which takes a while)
$ SKIP_API=1 jekyll build

# Serve content locally on port 4000
$ jekyll serve --watch

# Build the site with extra features used on the live page
$ PRODUCTION=1 jekyll build
```
## API Docs (Scaladoc, Sphinx, roxygen2)
You can build just the Spark scaladoc by running `build/sbt unidoc` from the `SPARK_PROJECT_ROOT` directory.

Similarly, you can build just the PySpark docs by running `make html` from the `SPARK_PROJECT_ROOT/python/docs` directory. Documentation is only generated for classes that are listed as public in `__init__.py`. The SparkR docs can be built by running `SPARK_PROJECT_ROOT/R/create-docs.sh`.
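Putting those commands together, a shell session that builds each set of API docs on its own might look like the sketch below; it assumes you start in the Spark project root and that the tools from the Prerequisites section are installed:

```sh
# Scala/Java API docs, generated by the sbt unidoc task
$ build/sbt unidoc

# Python API docs, generated with Sphinx from python/docs
$ cd python/docs
$ make html
$ cd ../..

# R API docs, generated by the create-docs.sh script
$ R/create-docs.sh
```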
When you run `jekyll` in the `docs` directory, it will also copy over the scaladoc for the various Spark subprojects into the `docs` directory (and then also into the `_site` directory). We use a jekyll plugin to run `build/sbt unidoc` before building the site so if you haven't run it (recently) it may take some time as it generates all of the scaladoc. The jekyll plugin also generates the PySpark docs using Sphinx.
NOTE: To skip the step of building and copying over the Scala, Python, R API docs, run `SKIP_API=1 jekyll`. In addition, `SKIP_SCALADOC=1`, `SKIP_PYTHONDOC=1`, and `SKIP_RDOC=1` can be used to skip a single step of the corresponding language.
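These flags can also be combined. For example, to rebuild the site and the Scaladoc while skipping the Python and R API doc steps, something like the following should work; the combination shown is just one illustration, not the only valid one:

```sh
# Rebuild the site and the Scaladoc, but skip the Python and R API docs
$ SKIP_PYTHONDOC=1 SKIP_RDOC=1 jekyll build
```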