2a331177ba
### What changes were proposed in this pull request?

This patch introduces a new option, `minOffsetsPerTrigger`, to specify the minimum number of offsets to read per trigger, together with `maxTriggerDelay` to avoid waiting indefinitely for a trigger. The new option allows a trigger/batch to be skipped when the number of records available in Kafka is low. This is very useful in cases where there is a sudden burst of data at certain intervals in a day and data volume is low for the rest of the day.

The `maxTriggerDelay` option helps avoid an indefinite delay in scheduling a trigger: the trigger fires regardless of the number of available records once the `maxTriggerDelay` time has elapsed since the last trigger. It is an optional parameter with a default value of 15 minutes and is only applicable when `minOffsetsPerTrigger` is set. `minOffsetsPerTrigger` is itself optional, but once specified it takes precedence over `maxOffsetsPerTrigger`, which is honored only after `minOffsetsPerTrigger` is satisfied.

### Why are the changes needed?

There are many scenarios where there is a sudden burst of data at certain intervals in a day and data volume is low for the rest of the day. Tuning such jobs is difficult: decreasing the trigger processing time increases the number of batches, and hence cluster resource usage, and adds to small-file issues, while increasing the trigger processing time adds consumer lag. This patch addresses that issue.

### How was this patch tested?

This patch was tested by adding test cases as well as manually on a cluster where the job ran for a full day with a data burst happening once a day.

Here is a picture of the data burst and the resulting consumer lag:

<img width="1198" alt="Screenshot 2021-04-29 at 11 39 35 PM" src="https://user-images.githubusercontent.com/1044003/116997587-9b2ab180-acfa-11eb-91fd-524802ce3316.png">

This is how the job behaved at burst time, running every 4.5 minutes (the specified trigger time):

<img width="1154" alt="Burst Time" src="https://user-images.githubusercontent.com/1044003/116997919-12f8dc00-acfb-11eb-9b0a-98387fc67560.png">

This is the job behavior during non-burst time, where it skips 2 to 3 triggers and runs once every 9 to 13.5 minutes:

<img width="1154" alt="Non Burst Time" src="https://user-images.githubusercontent.com/1044003/116998244-8b5f9d00-acfb-11eb-8340-33d47149ef81.png">

Here are some more stats from two runs, i.e. one normal run and one with `minOffsetsPerTrigger` set:

| Run | Data Size | Number of Batch Runs | Number of Files |
| ------------- | ------------- | ------------- | ------------- |
| Normal Run | 54.2 GB | 320 | 21968 |
| Run with minOffsetsPerTrigger | 54.2 GB | 120 | 12104 |

Closes #32653 from satishgopalani/SPARK-35312.

Authored-by: Satish Gopalani <satish.gopalani@pubmatic.com>
Signed-off-by: Jungtaek Lim <kabhwan.opensource@gmail.com>
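For illustration, here is a minimal sketch of how the options described above could be combined on a Kafka streaming source. The broker address, topic name, and threshold values are placeholders, not part of this patch:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("min-offsets-sketch").getOrCreate()

val df = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "broker1:9092")  // placeholder broker
  .option("subscribe", "events")                      // placeholder topic
  // Wait until at least this many new offsets are available before triggering a batch...
  .option("minOffsetsPerTrigger", "100000")
  // ...but never wait longer than this since the last trigger (default: 15 minutes).
  .option("maxTriggerDelay", "15m")
  // Upper bound per batch, honored only after minOffsetsPerTrigger is satisfied.
  .option("maxOffsetsPerTrigger", "500000")
  .load()
```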
Welcome to the Spark documentation!
This readme will walk you through navigating and building the Spark documentation, which is included here with the Spark source code. You can also find documentation specific to release versions of Spark at https://spark.apache.org/documentation.html.
Read on to learn more about viewing documentation in plain text (i.e., markdown) or building the documentation yourself. Why build it yourself? So that you have the docs that correspond to whichever version of Spark you currently have checked out of revision control.
## Prerequisites
The Spark documentation build uses a number of tools to build HTML docs and API docs in Scala, Java, Python, R and SQL.
You need to have Ruby and Python installed. Make sure the `bundle` command is available; if not, install the Gem containing it:
$ sudo gem install bundler
After this, all the required Ruby dependencies can be installed from the `docs/` directory via Bundler:
$ cd docs
$ bundle install
Note: If you are on a system with both Ruby 1.9 and Ruby 2.0, you may need to replace `gem` with `gem2.0`.
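That is, on such a system the install command above becomes:

$ sudo gem2.0 install bundler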
### R Documentation
If you'd like to generate R documentation, you'll need to install Pandoc and install these libraries:
$ sudo Rscript -e 'install.packages(c("knitr", "devtools", "testthat", "rmarkdown"), repos="https://cloud.r-project.org/")'
$ sudo Rscript -e 'devtools::install_version("roxygen2", version = "7.1.1", repos="https://cloud.r-project.org/")'
Note: Other versions of roxygen2 might work for SparkR documentation generation, but the `RoxygenNote` field in `$SPARK_HOME/R/pkg/DESCRIPTION` is 7.1.1, and it is updated if the installed version does not match.
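For reference, the pinned version appears in that `DESCRIPTION` file as a single field:

RoxygenNote: 7.1.1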
### API Documentation
To generate API docs for any language, you'll need to install these libraries:
$ sudo pip install 'sphinx<3.1.0' mkdocs numpy pydata_sphinx_theme ipython nbsphinx numpydoc 'jinja2<3.0.0'
## Generating the Documentation HTML
We include the Spark documentation as part of the source (as opposed to using a hosted wiki, such as the GitHub wiki, as the definitive documentation) to enable the documentation to evolve along with the source code and be captured by revision control (currently git). This way the code automatically includes the version of the documentation that is relevant regardless of which version or release you have checked out or downloaded.
In this directory you will find text files formatted using Markdown, with an ".md" suffix. You can read those text files directly if you want. Start with `index.md`.
Execute `bundle exec jekyll build` from the `docs/` directory to compile the site. Compiling the site with Jekyll will create a directory called `_site` containing `index.html` as well as the rest of the compiled files.
$ cd docs
$ bundle exec jekyll build
You can modify the default Jekyll build as follows:
# Skip generating API docs (which takes a while)
$ SKIP_API=1 bundle exec jekyll build
# Serve content locally on port 4000
$ bundle exec jekyll serve --watch
# Build the site with extra features used on the live page
$ PRODUCTION=1 bundle exec jekyll build
### API Docs (Scaladoc, Javadoc, Sphinx, roxygen2, MkDocs)
You can build just the Spark scaladoc and javadoc by running `./build/sbt unidoc` from the `$SPARK_HOME` directory. Similarly, you can build just the PySpark docs by running `make html` from the `$SPARK_HOME/python/docs` directory. Documentation is only generated for classes that are listed as public in `__init__.py`. The SparkR docs can be built by running `$SPARK_HOME/R/create-docs.sh`, and the SQL docs can be built by running `$SPARK_HOME/sql/create-docs.sh` after building Spark first.
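Collected in one place (assuming `$SPARK_HOME` points at your Spark checkout), those commands are:

# Scala and Java API docs
$ cd "$SPARK_HOME" && ./build/sbt unidoc
# PySpark docs (Sphinx)
$ cd "$SPARK_HOME/python/docs" && make html
# SparkR docs (roxygen2)
$ "$SPARK_HOME/R/create-docs.sh"
# SQL docs (MkDocs); build Spark first
$ "$SPARK_HOME/sql/create-docs.sh"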
When you run `bundle exec jekyll build` in the `docs` directory, it will also copy over the scaladoc and javadoc for the various Spark subprojects into the `docs` directory (and then also into the `_site` directory). We use a Jekyll plugin to run `./build/sbt unidoc` before building the site, so if you haven't run it (recently) it may take some time as it generates all of the scaladoc and javadoc using Unidoc. The Jekyll plugin also generates the PySpark docs using Sphinx, the SparkR docs using roxygen2, and the SQL docs using MkDocs.
NOTE: To skip the step of building and copying over the Scala, Java, Python, R and SQL API docs, run `SKIP_API=1 bundle exec jekyll build`. In addition, `SKIP_SCALADOC=1`, `SKIP_PYTHONDOC=1`, `SKIP_RDOC=1` and `SKIP_SQLDOC=1` can be used to skip the single step for the corresponding language; `SKIP_SCALADOC` skips both the Scala and Java docs.
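For example, to build the site with only the Scala/Java and Python API docs, skipping the R and SQL steps:

# Skip only the SparkR and SQL doc generation
$ SKIP_RDOC=1 SKIP_SQLDOC=1 bundle exec jekyll build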
### Automatically Rebuilding API Docs
`bundle exec jekyll serve --watch` will only watch what's in `docs/`, and it won't follow symlinks. That means it won't monitor your API docs under `python/docs` or elsewhere.
To work around this limitation for Python, install `entr` and run the following in a separate shell:
cd "$SPARK_HOME/python/docs"
find .. -type f -name '*.py' \
| entr -s 'make html && cp -r _build/html/. ../../docs/api/python'
Whenever there is a change to your Python code, `entr` will automatically rebuild the Python API docs and copy them to `docs/`, thus triggering a Jekyll update.