[SPARK-33650][SQL] Fix the error from ALTER TABLE .. ADD/DROP PARTITION for non-supported partition management table

### What changes were proposed in this pull request?
In the PR, I propose to change the order of post-analysis checks for the `ALTER TABLE .. ADD/DROP PARTITION` command, and perform the general check (does the table support partition management at all) before specific checks.
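
For orientation, here is the reordered match condensed from the diff further below (a sketch only, not a standalone compilable unit; all names come from the changed code in `CheckAnalysis`):

```scala
// Condensed view of checkAlterTablePartition after this change: the general
// capability check runs before the check for unresolved partition specs, so a
// table that cannot manage partitions fails with a message naming that cause.
private def checkAlterTablePartition(
    table: Table, parts: Seq[PartitionSpec]): Unit = {
  (table, parts) match {
    // General check first: does the table support partition management at all?
    case (table, _) if !table.isInstanceOf[SupportsPartitionManagement] =>
      failAnalysis(s"Table ${table.name()} can not alter partitions.")
    // Specific check second: are all partition specs resolved?
    case (_, parts) if parts.exists(_.isInstanceOf[UnresolvedPartitionSpec]) =>
      failAnalysis("PartitionSpecs are not resolved")
    case _ => // remaining cases (atomic/non-atomic partition tables) unchanged
  }
}
```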

### Why are the changes needed?
The error message for a table that doesn't support partition management can mislead users:
```java
PartitionSpecs are not resolved;;
'AlterTableAddPartition [UnresolvedPartitionSpec(Map(id -> 1),None)], false
+- ResolvedTable org.apache.spark.sql.connector.InMemoryTableCatalog2fd64b11, ns1.ns2.tbl, org.apache.spark.sql.connector.InMemoryTable5d3ff859
```
because it says nothing about the root cause of the issue.

### Does this PR introduce _any_ user-facing change?
Yes. After the change, the error message will be:
```
Table ns1.ns2.tbl can not alter partitions
```
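
For illustration, a reproduction along the lines of the new test added below, assuming a session where `testcat` is registered (via `spark.sql.catalog.testcat`) as the test-only `org.apache.spark.sql.connector.InMemoryTableCatalog`, whose tables do not implement `SupportsPartitionManagement`:

```scala
// Assumes spark.sql.catalog.testcat is set to
// org.apache.spark.sql.connector.InMemoryTableCatalog, so tables in
// `testcat` cannot manage partitions.
spark.sql("CREATE TABLE testcat.ns1.ns2.tbl (id bigint, data string) USING _")

// Before this change: fails with the unhelpful "PartitionSpecs are not resolved".
// After this change: fails with "Table ... can not alter partitions".
spark.sql("ALTER TABLE testcat.ns1.ns2.tbl ADD PARTITION (id=1)")
```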

### How was this patch tested?
By running the affected test suite `AlterTablePartitionV2SQLSuite`.
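
For reference, assuming the standard Spark sbt build, the suite can be run with something like:

```
$ build/sbt "sql/testOnly *AlterTablePartitionV2SQLSuite"
```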

Closes #30594 from MaxGekk/check-order-AlterTablePartition.

Authored-by: Max Gekk <max.gekk@gmail.com>
Signed-off-by: Dongjoon Hyun <dongjoon@apache.org>
2 changed files with 19 additions and 3 deletions

In `CheckAnalysis.scala`:

```diff
@@ -996,12 +996,12 @@ trait CheckAnalysis extends PredicateHelper {
   private def checkAlterTablePartition(
       table: Table, parts: Seq[PartitionSpec]): Unit = {
     (table, parts) match {
-      case (_, parts) if parts.exists(_.isInstanceOf[UnresolvedPartitionSpec]) =>
-        failAnalysis("PartitionSpecs are not resolved")
-
       case (table, _) if !table.isInstanceOf[SupportsPartitionManagement] =>
         failAnalysis(s"Table ${table.name()} can not alter partitions.")
 
+      case (_, parts) if parts.exists(_.isInstanceOf[UnresolvedPartitionSpec]) =>
+        failAnalysis("PartitionSpecs are not resolved")
+
       // Skip atomic partition tables
       case (_: SupportsAtomicPartitionManagement, _) =>
       case (_: SupportsPartitionManagement, parts) if parts.size > 1 =>
```

In `AlterTablePartitionV2SQLSuite.scala`:

```diff
@@ -245,4 +245,20 @@ class AlterTablePartitionV2SQLSuite extends DatasourceV2SQLBase {
       assert(!partTable.partitionExists(expectedPartition))
     }
   }
+
+  test("SPARK-33650: add/drop partition into a table which doesn't support partition management") {
+    val t = "testcat.ns1.ns2.tbl"
+    withTable(t) {
+      spark.sql(s"CREATE TABLE $t (id bigint, data string) USING _")
+      Seq(
+        s"ALTER TABLE $t ADD PARTITION (id=1)",
+        s"ALTER TABLE $t DROP PARTITION (id=1)"
+      ).foreach { alterTable =>
+        val errMsg = intercept[AnalysisException] {
+          spark.sql(alterTable)
+        }.getMessage
+        assert(errMsg.contains(s"Table $t can not alter partitions"))
+      }
+    }
+  }
 }
```