Cheng Lian 10b671447b [SPARK-16033][SQL] insertInto() can't be used together with partitionBy()
## What changes were proposed in this pull request?

When inserting into an existing partitioned table, the partitioning columns should always be determined by the catalog metadata of the table being inserted into. Extra `partitionBy()` calls don't make sense, and can corrupt existing data because newly inserted data may end up with the wrong partition directory layout.
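
For illustration, a minimal sketch of the two call patterns, assuming a partitioned Hive table `t` with data column `a` and partition column `b` (the table name and schema here are hypothetical, not taken from the patch):

```scala
// Wrong: partitionBy() conflicts with the partitioning already recorded in
// the catalog metadata of table `t`. After this patch it fails fast instead
// of silently writing a mismatched directory layout.
spark.range(10)
  .selectExpr("id AS a", "id % 2 AS b")
  .write
  .partitionBy("b")   // redundant at best, harmful at worst
  .insertInto("t")    // now rejected

// Right: let the existing table's metadata determine the partition columns.
spark.range(10)
  .selectExpr("id AS a", "id % 2 AS b")
  .write
  .insertInto("t")
```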

## How was this patch tested?

New test case added in `InsertIntoHiveTableSuite`.
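
A hypothetical shape for such a test, assuming the `sql`, `spark`, and `intercept` helpers available in Spark's Hive test harness (the actual test in `InsertIntoHiveTableSuite` may differ):

```scala
test("insertInto() can't be used together with partitionBy()") {
  sql("CREATE TABLE t (a INT) PARTITIONED BY (b INT) STORED AS PARQUET")
  val df = spark.range(10).selectExpr("id AS a", "id % 2 AS b")
  // After this patch, the redundant partitionBy() fails fast with an
  // AnalysisException rather than writing a wrong directory layout.
  intercept[AnalysisException] {
    df.write.partitionBy("b").insertInto("t")
  }
}
```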

Author: Cheng Lian <lian@databricks.com>

Closes #13747 from liancheng/spark-16033-insert-into-without-partition-by.
2016-06-17 20:13:04 -07:00
main [SPARK-15991] SparkContext.hadoopConfiguration should be always the base of hadoop conf created by SessionState 2016-06-16 17:06:24 -07:00
test [SPARK-16033][SQL] insertInto() can't be used together with partitionBy() 2016-06-17 20:13:04 -07:00