[SPARK-16926] [SQL] Remove partition columns from partition metadata.
## What changes were proposed in this pull request?

This removes partition columns from the column metadata of partitions, to match tables. A change introduced in SPARK-14388 removed partition columns from the column metadata of tables, but not from that of partitions. As a result, TableReader believes the schema differs between a table and its partitions, and creates an unnecessary conversion ObjectInspector.

## How was this patch tested?

Existing unit tests.

Author: Brian Cho <bcho@fb.com>

Closes #14515 from dafrista/partition-columns-metadata.
parent edb45734f4
commit 473d78649d
```diff
@@ -161,7 +161,13 @@ private[hive] case class MetastoreRelation(
     val sd = new org.apache.hadoop.hive.metastore.api.StorageDescriptor()
     tPartition.setSd(sd)
-    sd.setCols(catalogTable.schema.map(toHiveColumn).asJava)
+
+    // Note: In Hive the schema and partition columns must be disjoint sets
+    val schema = catalogTable.schema.map(toHiveColumn).filter { c =>
+      !catalogTable.partitionColumnNames.contains(c.getName)
+    }
+    sd.setCols(schema.asJava)
+
     p.storage.locationUri.foreach(sd.setLocation)
     p.storage.inputFormat.foreach(sd.setInputFormat)
     p.storage.outputFormat.foreach(sd.setOutputFormat)
```
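The "disjoint sets" rule in the diff's comment can be illustrated with a minimal, self-contained sketch. The `Column` case class, `tableSchema`, and `partitionColumnNames` below are illustrative stand-ins for Spark's catalog types, not the actual API:

```scala
// Illustrative stand-in for a catalog column (not Spark's actual type).
case class Column(name: String, dataType: String)

// A hypothetical table schema that includes the partition column "ds".
val tableSchema = Seq(
  Column("id", "bigint"),
  Column("value", "string"),
  Column("ds", "string") // partition column
)
val partitionColumnNames = Set("ds")

// In Hive, the storage descriptor's column list and the partition columns
// must be disjoint, so partition columns are filtered out before setCols.
val sdCols = tableSchema.filterNot(c => partitionColumnNames.contains(c.name))

println(sdCols.map(_.name)) // List(id, value)
```

With the partition columns filtered out, the partition's column metadata matches the table's, so TableReader no longer sees a spurious schema mismatch.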