[SPARK-26807][DOCS] Clarify that Pyspark is on PyPi now
## What changes were proposed in this pull request?

Docs still say that Spark will be available on PyPI "in the future"; just needs to be updated.

## How was this patch tested?

Doc build.

Closes #23933 from srowen/SPARK-26807.

Authored-by: Sean Owen <sean.owen@databricks.com>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
parent: 4a486d6716
commit: a97a19dd93
```diff
@@ -20,7 +20,7 @@ Please see [Spark Security](security.html) before downloading and running Spark.
 
 Get Spark from the [downloads page](https://spark.apache.org/downloads.html) of the project website. This documentation is for Spark version {{site.SPARK_VERSION}}. Spark uses Hadoop's client libraries for HDFS and YARN. Downloads are pre-packaged for a handful of popular Hadoop versions.
 Users can also download a "Hadoop free" binary and run Spark with any Hadoop version
 [by augmenting Spark's classpath](hadoop-provided.html).
-Scala and Java users can include Spark in their projects using its Maven coordinates and in the future Python users can also install Spark from PyPI.
+Scala and Java users can include Spark in their projects using its Maven coordinates and Python users can install Spark from PyPI.
 
 If you'd like to build Spark from
```
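Since the updated sentence states that Python users can install Spark from PyPI, a minimal sketch of what that looks like in practice (the package is published on PyPI under the name `pyspark`; the local smoke test assumes a working Python 3 with Java available, as PySpark requires a JVM):

```shell
# Install PySpark from PyPI (the capability this doc update describes)
pip install pyspark

# Smoke test: start a local SparkSession and print the Spark version
python -c "from pyspark.sql import SparkSession; \
spark = SparkSession.builder.master('local[1]').getOrCreate(); \
print(spark.version); \
spark.stop()"
```

This installs only the Python distribution of Spark; a full cluster deployment still uses the pre-packaged downloads described above.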
|