[MINOR][DOC] Writing to partitioned Hive metastore Parquet tables is not supported for Spark SQL
## What changes were proposed in this pull request?
Even if `spark.sql.hive.convertMetastoreParquet` is set to `true`, Spark SQL still cannot use its own Parquet support instead of Hive SerDe when writing to partitioned Hive metastore Parquet tables. This change documents that limitation: only reads, and writes to non-partitioned tables, are converted.
Related code:
d53e11ffce/sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveStrategies.scala (L198)
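As background, a minimal Scala sketch of the behavior this change documents (the `sales` table, its columns, and the session setup are illustrative, not part of this patch): with `spark.sql.hive.convertMetastoreParquet` enabled, reads of Hive metastore Parquet tables and writes to non-partitioned ones use Spark's native Parquet support, but an `INSERT` into a partitioned table still goes through Hive SerDe.

```scala
import org.apache.spark.sql.SparkSession

// Hypothetical session and table names, for illustration only.
val spark = SparkSession.builder()
  .appName("PartitionedHiveParquetWrite")
  .enableHiveSupport()
  .getOrCreate()

// Conversion to Spark's native Parquet support; it is on by default.
spark.conf.set("spark.sql.hive.convertMetastoreParquet", "true")

// A partitioned Hive metastore Parquet table.
spark.sql(
  """CREATE TABLE IF NOT EXISTS sales (amount DOUBLE)
    |PARTITIONED BY (dt STRING)
    |STORED AS PARQUET""".stripMargin)

// Writing to the partitioned table: still handled by Hive SerDe,
// even though convertMetastoreParquet is true.
spark.sql("INSERT INTO sales PARTITION (dt = '2019-01-01') VALUES (42.0)")

// Reading the table back: eligible for Spark's own Parquet reader.
spark.sql("SELECT * FROM sales WHERE dt = '2019-01-01'").show()
```

Running `EXPLAIN` on the `INSERT` should show the Hive write path for the partitioned case, whereas an insert into a non-partitioned Parquet table is planned with Spark's native file-based write.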
## How was this patch tested?
N/A
Closes #23671 from 10110346/parquetdoc.
Authored-by: liuxian <liu.xian3@zte.com.cn>
Signed-off-by: Sean Owen <sean.owen@databricks.com>
@@ -157,9 +157,10 @@ turned it off by default starting from 1.5.0. You may enable it by
 
 ### Hive metastore Parquet table conversion
 
-When reading from and writing to Hive metastore Parquet tables, Spark SQL will try to use its own
-Parquet support instead of Hive SerDe for better performance. This behavior is controlled by the
-`spark.sql.hive.convertMetastoreParquet` configuration, and is turned on by default.
+When reading from Hive metastore Parquet tables and writing to non-partitioned Hive metastore
+Parquet tables, Spark SQL will try to use its own Parquet support instead of Hive SerDe for
+better performance. This behavior is controlled by the `spark.sql.hive.convertMetastoreParquet`
+configuration, and is turned on by default.
 
 #### Hive/Parquet Schema Reconciliation
 