[SPARK-31960][YARN][DOCS][FOLLOW-UP] Document the behaviour change of Hadoop's classpath propagation in migration guide

### What changes were proposed in this pull request?

This PR is a follow-up of https://github.com/apache/spark/pull/28788 and proposes to update the migration guide.

### Why are the changes needed?

To document the behaviour change for users upgrading to Spark 3.1.

### Does this PR introduce _any_ user-facing change?

Yes, it updates the migration guide for users.

### How was this patch tested?

GitHub Actions' documentation build should test it.

Closes #30903 from HyukjinKwon/SPARK-31960-followup.

Authored-by: HyukjinKwon <gurwls223@apache.org>
Signed-off-by: HyukjinKwon <gurwls223@apache.org>
HyukjinKwon 2020-12-23 18:04:28 +09:00
parent 2287f56a3e
commit d98c216e19


@@ -30,6 +30,8 @@ license: |
 
 - In Spark 3.0 and below, `SparkContext` can be created in executors. Since Spark 3.1, an exception will be thrown when creating `SparkContext` in executors. You can allow it by setting the configuration `spark.executor.allowSparkContext` when creating `SparkContext` in executors.
 
+- In Spark 3.0 and below, Spark propagated the Hadoop classpath from `yarn.application.classpath` and `mapreduce.application.classpath` into the Spark application submitted to YARN when the Spark distribution is with the built-in Hadoop. Since Spark 3.1, it no longer propagates the classpath when the Spark distribution is with the built-in Hadoop, in order to prevent failures caused by different transitive dependencies picked up from the Hadoop cluster, such as Guava and Jackson. To restore the behavior before Spark 3.1, you can set `spark.yarn.populateHadoopClasspath` to `true`.
+
 ## Upgrading from Core 2.4 to 3.0
 
 - The `org.apache.spark.ExecutorPlugin` interface and related configuration has been replaced with
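
The new guide entry only names the config. As a minimal sketch of how a user might apply it, either per application at submit time or cluster-wide in `spark-defaults.conf`; everything except the `spark.yarn.populateHadoopClasspath` key and its value (the master, JAR name, and file layout) is a placeholder, not part of this PR:

```
# Per application: re-enable Hadoop classpath propagation at submit time.
# (spark.yarn.populateHadoopClasspath is the config from the guide;
#  the master URL and application JAR below are placeholders)
spark-submit \
  --master yarn \
  --conf spark.yarn.populateHadoopClasspath=true \
  my-app.jar

# Or cluster-wide, in conf/spark-defaults.conf:
spark.yarn.populateHadoopClasspath  true
```

Note that re-enabling the propagation also restores the original risk the new default guards against: transitive dependencies such as Guava and Jackson picked up from the cluster's Hadoop can again conflict with the versions bundled in the Spark distribution.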