[SPARK-30812][SQL] update the skew join configs by adding the prefix "skewedJoinOptimization"

### What changes were proposed in this pull request?
This is a follow-up to [PR#27563](https://github.com/apache/spark/pull/27563).
This PR adds the prefix "skewedJoinOptimization" to the skew-join-related configs.

### Why are the changes needed?
Address the remaining review comments from [PR#27563](https://github.com/apache/spark/pull/27563).

### Does this PR introduce any user-facing change?
No

### How was this patch tested?
This PR only renames configs, so no new unit tests are needed.

Closes #27630 from JkSelf/renameskewjoinconfig.

Authored-by: jiake <ke.a.jia@intel.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
jiake 2020-02-19 15:55:29 +08:00 committed by Wenchen Fan
parent c0715221b2
commit 10a4eafcfe


```diff
@@ -432,14 +432,14 @@ object SQLConf {
       .createWithDefault(true)
 
   val ADAPTIVE_EXECUTION_SKEWED_PARTITION_SIZE_THRESHOLD =
-    buildConf("spark.sql.adaptive.optimizeSkewedJoin.skewedPartitionSizeThreshold")
+    buildConf("spark.sql.adaptive.skewedJoinOptimization.skewedPartitionSizeThreshold")
       .doc("Configures the minimum size in bytes for a partition that is considered as a skewed " +
         "partition in adaptive skewed join.")
       .bytesConf(ByteUnit.BYTE)
       .createWithDefaultString("64MB")
 
   val ADAPTIVE_EXECUTION_SKEWED_PARTITION_FACTOR =
-    buildConf("spark.sql.adaptive.optimizeSkewedJoin.skewedPartitionFactor")
+    buildConf("spark.sql.adaptive.skewedJoinOptimization.skewedPartitionFactor")
       .doc("A partition is considered as a skewed partition if its size is larger than" +
         " this factor multiple the median partition size and also larger than " +
         s" ${ADAPTIVE_EXECUTION_SKEWED_PARTITION_SIZE_THRESHOLD.key}")
@@ -447,7 +447,7 @@ object SQLConf {
       .createWithDefault(10)
 
   val ADAPTIVE_EXECUTION_SKEWED_PARTITION_MAX_SPLITS =
-    buildConf("spark.sql.adaptive.optimizeSkewedJoin.skewedPartitionMaxSplits")
+    buildConf("spark.sql.adaptive.skewedJoinOptimization.skewedPartitionMaxSplits")
       .doc("Configures the maximum number of task to handle a skewed partition in adaptive skewed" +
         "join.")
       .intConf
```
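
For illustration, a minimal `spark-defaults.conf` sketch using the renamed keys. This assumes adaptive execution is turned on via `spark.sql.adaptive.enabled`; the threshold and factor values are the defaults from the diff, while the `skewedPartitionMaxSplits` value is a hypothetical example (its default is not visible in this diff):

```properties
# Hypothetical tuning fragment — assumes a Spark build containing this rename.
spark.sql.adaptive.enabled                                              true
spark.sql.adaptive.skewedJoinOptimization.skewedPartitionSizeThreshold  64MB
spark.sql.adaptive.skewedJoinOptimization.skewedPartitionFactor         10
# Example value only; the default is not shown in the diff above.
spark.sql.adaptive.skewedJoinOptimization.skewedPartitionMaxSplits      5
```

Users who set the old `spark.sql.adaptive.optimizeSkewedJoin.*` keys would need to switch to the new prefix after this change.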