3f4060c340
This PR contains an implementation of the basic submission client for the cluster mode of Spark on Kubernetes. It's step 2 of the step-wise plan documented [here](https://github.com/apache-spark-on-k8s/spark/issues/441#issuecomment-330802935). This addition is covered by the [SPIP](http://apache-spark-developers-list.1001551.n3.nabble.com/SPIP-Spark-on-Kubernetes-td22147.html) vote, which passed on Aug 31. This PR and #19468 together form an MVP of Spark on Kubernetes that allows users to run Spark applications that use resources locally within the driver and executor containers on Kubernetes 1.6 and up. Some changes to the POM and build/test setup are copied over from #19468 to make this PR self-contained and testable.

The submission client is mainly responsible for creating the Kubernetes pod that runs the Spark driver. It follows a step-based approach to construct the driver pod, as the code under the `submit.steps` package shows. The steps are orchestrated by `DriverConfigurationStepsOrchestrator`. `Client` creates the driver pod and waits for the application to complete if it's configured to do so, which is the case by default.

This PR also contains Dockerfiles for the driver and executor images. They are included because some of the environment variables set in the code would not make sense without referring to the Dockerfiles.

* The patch contains unit tests, which are passing.
* Manual testing: `./build/mvn -Pkubernetes clean package` succeeded.
* It is a subset of the entire changelist hosted at http://github.com/apache-spark-on-k8s/spark, which is in active use in several organizations.
* Integration testing is enabled in the fork, currently hosted by PepperData and being moved over to RiseLAB CI.
* Detailed documentation on trying out the patch in its entirety is at https://apache-spark-on-k8s.github.io/userdocs/running-on-kubernetes.html

cc rxin felixcheung mateiz (shepherd)

k8s-big-data SIG members & contributors: mccheah foxish ash211 ssuchter varunkatta kimoonkim erikerlandson tnachen ifilonenko liyinan926

Author: Yinan Li <liyinan926@gmail.com>

Closes #19717 from liyinan926/spark-kubernetes-4.
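For context, a cluster-mode submission against Kubernetes looks roughly like the sketch below. The flag and property names are taken from the Spark 2.3 documentation that grew out of this work and may not match this PR exactly, so treat the details (in particular `spark.kubernetes.container.image` and the example jar path) as assumptions rather than this PR's exact interface:

    # Illustrative sketch; property names are assumed from the later Spark 2.3 docs.
    # local:// refers to a jar already present inside the driver/executor images.
    $ bin/spark-submit \
        --master k8s://https://<k8s-apiserver-host>:<k8s-apiserver-port> \
        --deploy-mode cluster \
        --name spark-pi \
        --class org.apache.spark.examples.SparkPi \
        --conf spark.executor.instances=2 \
        --conf spark.kubernetes.container.image=<spark-image> \
        local:///opt/spark/examples/jars/spark-examples.jar

The `local://` scheme is what "use resources locally within the driver and executor containers" refers to: the application jar is expected to already be inside the Docker images built from the Dockerfiles in this PR, rather than uploaded at submission time.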
Welcome to the Spark documentation!
This readme will walk you through navigating and building the Spark documentation, which is included here with the Spark source code. You can also find documentation specific to release versions of Spark at http://spark.apache.org/documentation.html.
Read on to learn more about viewing documentation in plain text (i.e., markdown) or building the documentation yourself. Why build it yourself? So that you have the docs that correspond to whichever version of Spark you currently have checked out of revision control.
## Prerequisites
The Spark documentation build uses a number of tools to build HTML docs and API docs in Scala, Java, Python, R and SQL.
You need to have Ruby and Python installed. Also install the following libraries:
    $ sudo gem install jekyll jekyll-redirect-from pygments.rb
    $ sudo pip install Pygments
    # Following is needed only for generating API docs
    $ sudo pip install sphinx pypandoc mkdocs
    $ sudo Rscript -e 'install.packages(c("knitr", "devtools", "roxygen2", "testthat", "rmarkdown"), repos="http://cran.stat.ucla.edu/")'
(Note: If you are on a system with both Ruby 1.9 and Ruby 2.0 you may need to replace `gem` with `gem2.0`.)
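For example, on such a system the Jekyll install above becomes:

    $ sudo gem2.0 install jekyll jekyll-redirect-from pygments.rb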
## Generating the Documentation HTML
We include the Spark documentation as part of the source (as opposed to using a hosted wiki, such as the GitHub wiki, as the definitive documentation) to enable the documentation to evolve along with the source code and be captured by revision control (currently git). This way the code automatically includes the version of the documentation that is relevant regardless of which version or release you have checked out or downloaded.
In this directory you will find text files formatted using Markdown, with an ".md" suffix. You can read those text files directly if you want. Start with `index.md`.

Execute `jekyll build` from the `docs/` directory to compile the site. Compiling the site with Jekyll will create a directory called `_site` containing `index.html` as well as the rest of the compiled files.
    $ cd docs
    $ jekyll build
You can modify the default Jekyll build as follows:
    # Skip generating API docs (which takes a while)
    $ SKIP_API=1 jekyll build

    # Serve content locally on port 4000
    $ jekyll serve --watch

    # Build the site with extra features used on the live page
    $ PRODUCTION=1 jekyll build
## API Docs (Scaladoc, Javadoc, Sphinx, roxygen2, MkDocs)
You can build just the Spark scaladoc and javadoc by running `build/sbt unidoc` from the `SPARK_HOME` directory.

Similarly, you can build just the PySpark docs by running `make html` from the `SPARK_HOME/python/docs` directory. Documentation is only generated for classes that are listed as public in `__init__.py`. The SparkR docs can be built by running `SPARK_HOME/R/create-docs.sh`, and the SQL docs can be built by running `SPARK_HOME/sql/create-docs.sh` after building Spark first.
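Putting those together, and assuming your working directory is `SPARK_HOME` (the root of your Spark checkout), the per-language API doc builds look like this:

    # Scaladoc and Javadoc
    $ build/sbt unidoc

    # PySpark docs
    $ cd python/docs && make html

    # SparkR docs
    $ R/create-docs.sh

    # SQL docs (build Spark itself first)
    $ sql/create-docs.sh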
When you run `jekyll build` in the `docs` directory, it will also copy over the scaladoc and javadoc for the various Spark subprojects into the `docs` directory (and then also into the `_site` directory). We use a Jekyll plugin to run `build/sbt unidoc` before building the site, so if you haven't run it (recently) it may take some time as it generates all of the scaladoc and javadoc using Unidoc. The Jekyll plugin also generates the PySpark docs using Sphinx, the SparkR docs using roxygen2 and the SQL docs using MkDocs.
NOTE: To skip the step of building and copying over the Scala, Java, Python, R and SQL API docs, run `SKIP_API=1 jekyll build`. In addition, `SKIP_SCALADOC=1`, `SKIP_PYTHONDOC=1`, `SKIP_RDOC=1` and `SKIP_SQLDOC=1` can be used to skip a single step of the corresponding language. `SKIP_SCALADOC` indicates skipping both the Scala and Java docs.
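For example, the skip flags can be combined to build only the docs you need:

    # Build everything except the R and SQL API docs
    $ SKIP_RDOC=1 SKIP_SQLDOC=1 jekyll build

    # Skip the Scala and Java API docs only
    $ SKIP_SCALADOC=1 jekyll build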