[SPARK-35050][DOCS][MESOS] Document deprecation of Apache Mesos in 3.2.0

### What changes were proposed in this pull request?

Deprecate Apache Mesos support for Spark 3.2.0 by adding documentation to this effect.

### Why are the changes needed?

Apache Mesos is ceasing development (https://lists.apache.org/thread.html/rab2a820507f7c846e54a847398ab20f47698ec5bce0c8e182bfe51ba%40%3Cdev.mesos.apache.org%3E); at some point we'll want to drop support, so deprecate it now.

This doesn't mean it'll go away in 3.3.0.

### Does this PR introduce _any_ user-facing change?

No, docs only.

### How was this patch tested?

N/A

Closes #32150 from srowen/SPARK-35050.

Authored-by: Sean Owen <srowen@gmail.com>
Signed-off-by: HyukjinKwon <gurwls223@apache.org>
commit 700aa1769c (parent faa928cefc), committed 2021-04-14 13:17:58 +09:00 by HyukjinKwon
4 changed files with 6 additions and 2 deletions

@@ -65,7 +65,7 @@ The system currently supports several cluster managers:
 * [Standalone](spark-standalone.html) -- a simple cluster manager included with Spark that makes it
   easy to set up a cluster.
 * [Apache Mesos](running-on-mesos.html) -- a general cluster manager that can also run Hadoop MapReduce
-  and service applications.
+  and service applications. (Deprecated)
 * [Hadoop YARN](running-on-yarn.html) -- the resource manager in Hadoop 2.
 * [Kubernetes](running-on-kubernetes.html) -- an open-source system for automating deployment, scaling,
   and management of containerized applications.

@@ -32,6 +32,8 @@ license: |
 - In Spark 3.2, `spark.launcher.childConectionTimeout` is deprecated (typo) though still works. Use `spark.launcher.childConnectionTimeout` instead.
+- In Spark 3.2, support for Apache Mesos as a resource manager is deprecated and will be removed in a future version.
+
 ## Upgrading from Core 3.0 to 3.1
 - In Spark 3.0 and below, `SparkContext` can be created in executors. Since Spark 3.1, an exception will be thrown when creating `SparkContext` in executors. You can allow it by setting the configuration `spark.executor.allowSparkContext` when creating `SparkContext` in executors.
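
For reference, the renamed key from the migration note above is set like any other Spark configuration, e.g. in `conf/spark-defaults.conf`. A sketch with an illustrative value (not a default):

```
# conf/spark-defaults.conf -- illustrative value, not a default
# Use the corrected spelling; the misspelled key
# spark.launcher.childConectionTimeout still works in 3.2 but is deprecated.
spark.launcher.childConnectionTimeout   10s
```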

@@ -99,7 +99,7 @@ Spark can run both by itself, or over several existing cluster managers. It curr
 options for deployment:
 * [Standalone Deploy Mode](spark-standalone.html): simplest way to deploy Spark on a private cluster
-* [Apache Mesos](running-on-mesos.html)
+* [Apache Mesos](running-on-mesos.html) (deprecated)
 * [Hadoop YARN](running-on-yarn.html)
 * [Kubernetes](running-on-kubernetes.html)

@@ -19,6 +19,8 @@ license: |
 ---
 * This will become a table of contents (this text will be scraped).
 {:toc}
+
+*Note*: Apache Mesos support is deprecated as of Apache Spark 3.2.0. It will be removed in a future version.
 Spark can run on hardware clusters managed by [Apache Mesos](http://mesos.apache.org/).
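
For context on what is being deprecated: a Mesos cluster is addressed through a `mesos://` master URL at submit time. A sketch (the host, port, and jar path below are placeholders):

```
# Sketch only: submitting to a Mesos master, a mode deprecated as of Spark 3.2.0.
# Replace the host, port, and jar path with real values for your cluster.
./bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master mesos://mesos-master.example.com:5050 \
  examples/jars/spark-examples_2.12-3.2.0.jar 100
```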