Welcome to the Spark documentation!
This readme will walk you through navigating and building the Spark documentation, which is included here with the Spark source code. You can also find documentation specific to release versions of Spark at http://spark.apache.org/documentation.html.
Read on to learn more about viewing documentation in plain text (i.e., Markdown) or building the documentation yourself. Why build it yourself? So that you have the docs that correspond to whichever version of Spark you currently have checked out of revision control.
## Prerequisites
The Spark documentation build uses a number of tools to build HTML docs and API docs in Scala, Python and R.
You need to have Ruby and Python installed. Also install the following libraries:
```bash
$ sudo gem install jekyll jekyll-redirect-from pygments.rb
$ sudo pip install Pygments

# Following is needed only for generating API docs
$ sudo pip install sphinx pypandoc
$ sudo Rscript -e 'install.packages(c("knitr", "devtools", "roxygen2", "testthat", "rmarkdown"), repos="http://cran.stat.ucla.edu/")'
```
(Note: If you are on a system with both Ruby 1.9 and Ruby 2.0, you may need to replace `gem` with `gem2.0`.)
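Before building, it can be worth confirming that the toolchain is actually visible on your `PATH`. This is just a suggested sanity check, not part of the official instructions:

```bash
# Hedged sanity checks; tool and package names as installed above.
$ jekyll --version                           # Jekyll is installed
$ python -c "import pygments, sphinx"        # Pygments and Sphinx are importable
$ Rscript -e 'packageVersion("roxygen2")'    # R packages are installed
```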
## Generating the Documentation HTML
We include the Spark documentation as part of the source (as opposed to using a hosted wiki, such as the GitHub wiki, as the definitive documentation) to enable the documentation to evolve along with the source code and be captured by revision control (currently Git). This way the code automatically includes the version of the documentation that is relevant regardless of which version or release you have checked out or downloaded.
In this directory you will find text files formatted using Markdown, with an `.md` suffix. You can read those text files directly if you want. Start with `index.md`.
Execute `jekyll build` from the `docs/` directory to compile the site. Compiling the site with Jekyll will create a directory called `_site` containing `index.html` as well as the rest of the compiled files.

```bash
$ cd docs
$ jekyll build
```
You can modify the default Jekyll build as follows:
```bash
# Skip generating API docs (which takes a while)
$ SKIP_API=1 jekyll build

# Serve content locally on port 4000
$ jekyll serve --watch

# Build the site with extra features used on the live page
$ PRODUCTION=1 jekyll build
```
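These options can be combined. For example, a quick edit-preview loop for prose-only changes (a suggested workflow, assuming you do not need fresh API docs) might be:

```bash
# Serve locally without regenerating the API docs
$ SKIP_API=1 jekyll serve --watch
```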
## API Docs (Scaladoc, Sphinx, roxygen2)
You can build just the Spark scaladoc by running `build/sbt unidoc` from the `SPARK_PROJECT_ROOT` directory.

Similarly, you can build just the PySpark docs by running `make html` from the `SPARK_PROJECT_ROOT/python/docs` directory. Documentation is only generated for classes that are listed as public in `__init__.py`. The SparkR docs can be built by running `SPARK_PROJECT_ROOT/R/create-docs.sh`.
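Put together, running each API-doc step by hand looks roughly like the following sketch, with `SPARK_PROJECT_ROOT` standing in for the path to your checkout:

```bash
$ cd SPARK_PROJECT_ROOT
$ build/sbt unidoc        # Scaladoc for the Spark subprojects
$ cd python/docs
$ make html               # PySpark docs, generated by Sphinx
$ cd ../..
$ R/create-docs.sh        # SparkR docs
```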
When you run `jekyll` in the `docs` directory, it will also copy over the scaladoc for the various Spark subprojects into the `docs` directory (and then also into the `_site` directory). We use a jekyll plugin to run `build/sbt unidoc` before building the site, so if you haven't run it (recently) it may take some time as it generates all of the scaladoc. The jekyll plugin also generates the PySpark docs using Sphinx.
NOTE: To skip the step of building and copying over the Scala, Python, and R API docs, run `SKIP_API=1 jekyll`. In addition, `SKIP_SCALADOC=1`, `SKIP_PYTHONDOC=1`, and `SKIP_RDOC=1` can be used to skip the step for a single language.
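For example, to rebuild the site while regenerating only the PySpark API docs (an illustrative combination of the flags above):

```bash
$ SKIP_SCALADOC=1 SKIP_RDOC=1 jekyll build
```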