[SPARK-31037][SQL][FOLLOW-UP] Replace legacy ReduceNumShufflePartitions with CoalesceShufflePartitions in comment

### What changes were proposed in this pull request?

Replace legacy `ReduceNumShufflePartitions` with `CoalesceShufflePartitions` in comment.

### Why are the changes needed?

The rule `ReduceNumShufflePartitions` has been renamed to `CoalesceShufflePartitions`, so the related comments should be updated as well.

### Does this PR introduce any user-facing change?

No.

### How was this patch tested?

N/A.

Closes #27865 from Ngone51/spark_31037_followup.

Authored-by: yi.wu <yi.wu@databricks.com>
Signed-off-by: Dongjoon Hyun <dongjoon@apache.org>
Commit: 34be83e08b (parent: 3bd6ebff81)
Author: yi.wu, 2020-03-10 11:09:36 -07:00
Committed by: Dongjoon Hyun
2 changed files with 5 additions and 5 deletions

```diff
@@ -97,12 +97,12 @@ case class AdaptiveSparkPlanExec(
   @transient private val queryStageOptimizerRules: Seq[Rule[SparkPlan]] = Seq(
     ReuseAdaptiveSubquery(conf, context.subqueryCache),
     // Here the 'OptimizeSkewedJoin' rule should be executed
-    // before 'ReduceNumShufflePartitions', as the skewed partition handled
-    // in 'OptimizeSkewedJoin' rule, should be omitted in 'ReduceNumShufflePartitions'.
+    // before 'CoalesceShufflePartitions', as the skewed partition handled
+    // in 'OptimizeSkewedJoin' rule, should be omitted in 'CoalesceShufflePartitions'.
     OptimizeSkewedJoin(conf),
     CoalesceShufflePartitions(conf),
     // The rule of 'OptimizeLocalShuffleReader' need to make use of the 'partitionStartIndices'
-    // in 'ReduceNumShufflePartitions' rule. So it must be after 'ReduceNumShufflePartitions' rule.
+    // in 'CoalesceShufflePartitions' rule. So it must be after 'CoalesceShufflePartitions' rule.
     OptimizeLocalShuffleReader(conf),
     ApplyColumnarRulesAndInsertTransitions(conf, context.session.sessionState.columnarRules),
     CollapseCodegenStages(conf)
```
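The ordering constraint described in the comments above, `OptimizeSkewedJoin` before `CoalesceShufflePartitions`, follows from the fact that rules in the `Seq` are applied in order. A minimal sketch of this idea in plain Scala (this is not Spark's actual `Rule[SparkPlan]` API; the trait, object names, and string tags here are illustrative stand-ins):

```scala
// Simplified sketch of why rule ordering matters in a rule Seq.
// NOT Spark's real API: Rule, SkewRule, CoalesceRule, and the string
// "plan" tags are hypothetical stand-ins for illustration only.
trait Rule {
  def apply(plan: String): String
}

// Stand-in for OptimizeSkewedJoin: runs first and marks the plan,
// so the coalescing rule can skip already-handled skewed partitions.
object SkewRule extends Rule {
  def apply(plan: String): String = plan + " ->skewHandled"
}

// Stand-in for CoalesceShufflePartitions: coalesces the remaining
// (non-skewed) partitions after skew handling.
object CoalesceRule extends Rule {
  def apply(plan: String): String = plan + " ->coalesced"
}

object RuleOrdering {
  // Rules are applied in Seq order, like queryStageOptimizerRules above.
  val rules: Seq[Rule] = Seq(SkewRule, CoalesceRule)

  def run(plan: String): String =
    rules.foldLeft(plan)((p, r) => r(p))

  def main(args: Array[String]): Unit =
    println(run("plan")) // prints "plan ->skewHandled ->coalesced"
}
```

Swapping the two entries in `rules` would apply coalescing before skew handling, which is exactly the ordering the comments in the diff warn against.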

```diff
@@ -52,7 +52,7 @@ import org.apache.spark.sql.internal.SQLConf
  * (L4-1, R4-1), (L4-2, R4-1), (L4-1, R4-2), (L4-2, R4-2)
  *
  * Note that, when this rule is enabled, it also coalesces non-skewed partitions like
- * `ReduceNumShufflePartitions` does.
+ * `CoalesceShufflePartitions` does.
  */
 case class OptimizeSkewedJoin(conf: SQLConf) extends Rule[SparkPlan] {
@@ -191,7 +191,7 @@ case class OptimizeSkewedJoin(conf: SQLConf) extends Rule[SparkPlan] {
     val leftSidePartitions = mutable.ArrayBuffer.empty[ShufflePartitionSpec]
     val rightSidePartitions = mutable.ArrayBuffer.empty[ShufflePartitionSpec]
     // This is used to delay the creation of non-skew partitions so that we can potentially
-    // coalesce them like `ReduceNumShufflePartitions` does.
+    // coalesce them like `CoalesceShufflePartitions` does.
     val nonSkewPartitionIndices = mutable.ArrayBuffer.empty[Int]
     val leftSkewDesc = new SkewDesc
     val rightSkewDesc = new SkewDesc
```