From b36b1c7e8a69f0aa02d7471fc7dadd32ed57ade1 Mon Sep 17 00:00:00 2001
From: Gengliang Wang
Date: Thu, 19 Aug 2021 21:30:00 +0800
Subject: [PATCH] =?UTF-8?q?Revert=20"[SPARK-35083][FOLLOW-UP][CORE]=20Add?=
 =?UTF-8?q?=20migration=20guide=20for=20the=20re=E2=80=A6?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

…mote scheduler pool files support"

This reverts commit e3902d1975ee6a6a6f672eb6b4f318bcdd237e3f.

The feature is an improvement rather than a behavior change.

Closes #33789 from gengliangwang/revertDoc.

Authored-by: Gengliang Wang
Signed-off-by: Gengliang Wang
---
 docs/core-migration-guide.md | 2 --
 1 file changed, 2 deletions(-)

diff --git a/docs/core-migration-guide.md b/docs/core-migration-guide.md
index 02ed43034d..1dee5029d8 100644
--- a/docs/core-migration-guide.md
+++ b/docs/core-migration-guide.md
@@ -24,8 +24,6 @@ license: |
 
 ## Upgrading from Core 3.1 to 3.2
 
-- Since Spark 3.2, the fair scheduler also supports reading a configuration file from a remote node. `spark.scheduler.allocation.file` can either be a local file path or HDFS file path.
-
 - Since Spark 3.2, `spark.hadoopRDD.ignoreEmptySplits` is set to `true` by default which means Spark will not create empty partitions for empty input splits. To restore the behavior before Spark 3.2, you can set `spark.hadoopRDD.ignoreEmptySplits` to `false`.
 
 - Since Spark 3.2, `spark.eventLog.compression.codec` is set to `zstd` by default which means Spark will not fallback to use `spark.io.compression.codec` anymore.
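
For reference, a minimal sketch of how the configurations mentioned in the diff above could be set programmatically. This is illustrative only and not part of the patch: the application name, master, and HDFS path are hypothetical placeholders, and the choice of `lz4` as an explicit event log codec is an assumption for demonstration.

```scala
import org.apache.spark.{SparkConf, SparkContext}

object CoreMigrationConfigSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("core-3.2-migration-sketch") // hypothetical app name
      .setMaster("local[*]")                   // assumption: local run for illustration
      // Since Spark 3.2 the fair scheduler pool file may also be a remote (e.g. HDFS)
      // path; the path below is a placeholder, a local path still works as before.
      .set("spark.scheduler.mode", "FAIR")
      .set("spark.scheduler.allocation.file", "hdfs:///tmp/fairscheduler.xml")
      // Restore the pre-3.2 behavior of creating partitions for empty input splits.
      .set("spark.hadoopRDD.ignoreEmptySplits", "false")
      // Opt out of the new zstd default by naming an event log codec explicitly.
      .set("spark.eventLog.compression.codec", "lz4")

    val sc = new SparkContext(conf)
    // ... application logic ...
    sc.stop()
  }
}
```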