spark-instrumented-optimizer/project
Wenchen Fan 8b6232b119 [SPARK-27521][SQL] Move data source v2 to catalyst module
## What changes were proposed in this pull request?

Currently we are in the odd situation where some of the data source v2 interfaces (the catalog-related ones) live in sql/catalyst, while others (Table, ScanBuilder, DataReader, etc.) live in sql/core.

I don't see a reason to keep the data source v2 API spread across two modules. If we have to pick one module, sql/catalyst is the one to go with.

The catalyst module already contains user-facing APIs such as DataType and Row. We also have to update `Analyzer` and `SessionCatalog` to support the new catalog plugin, and both of those classes live in the catalyst module.
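
For context, here is a heavily simplified sketch of the kind of read-path interfaces being moved (names follow the PR description; the signatures are illustrative, not the exact Spark API):

```scala
// Illustrative only: simplified stand-ins for the data source v2 read path
// mentioned above (Table -> ScanBuilder -> Scan). The real Spark interfaces
// have more methods and live under Spark's own packages.
import org.apache.spark.sql.types.StructType

trait Table {
  def name(): String
  def schema(): StructType
  // A readable table hands out a ScanBuilder; the builder pattern lets
  // sources add pushdown capabilities without bloating Table itself.
  def newScanBuilder(options: Map[String, String]): ScanBuilder
}

trait ScanBuilder {
  def build(): Scan
}

trait Scan {
  def readSchema(): StructType
}
```

Since these interfaces only depend on types that are already in sql/catalyst (e.g. StructType), nothing forces them to stay in sql/core.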

This PR solves the problem we ran into in https://github.com/apache/spark/pull/24246.

## How was this patch tested?

Existing tests.

Closes #24416 from cloud-fan/move.

Authored-by: Wenchen Fan <wenchen@databricks.com>
Signed-off-by: gatorsmile <gatorsmile@gmail.com>
2019-06-05 09:55:55 -07:00
| File | Last commit | Date |
| --- | --- | --- |
| build.properties | [SPARK-26317][BUILD] Upgrade SBT to 0.13.18 | 2018-12-10 12:04:44 -08:00 |
| MimaBuild.scala | [SPARK-25956] Make Scala 2.12 as default Scala version in Spark 3.0 | 2018-11-14 16:22:23 -08:00 |
| MimaExcludes.scala | [SPARK-27521][SQL] Move data source v2 to catalyst module | 2019-06-05 09:55:55 -07:00 |
| plugins.sbt | [SPARK-27493][BUILD][FOLLOWUP] Upgrade ASM to 7.1 in plugins.sbt | 2019-04-23 18:18:02 +00:00 |
| SparkBuild.scala | [MINOR][BUILD] Update genjavadoc to 0.13 | 2019-04-24 13:44:48 +09:00 |
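
Moving classes between modules is why MimaExcludes.scala (listed above) was touched by this change: MiMa flags the relocated classes as missing from their old module. Below is a minimal sketch of how such exclusions are typically registered, assuming the pre-move package was org.apache.spark.sql.sources.v2; the object name is hypothetical and the exact filter list in the real file may differ.

```scala
// Sketch of MiMa binary-compatibility exclusions for relocated classes.
// Assumption: the v2 interfaces previously lived under
// org.apache.spark.sql.sources.v2 in sql/core; the real MimaExcludes.scala
// may list different problem types and class names.
import com.typesafe.tools.mima.core._

object DataSourceV2MoveExcludes {
  lazy val excludes = Seq(
    // [SPARK-27521][SQL] Move data source v2 to catalyst module
    ProblemFilters.exclude[MissingClassProblem]("org.apache.spark.sql.sources.v2.Table"),
    ProblemFilters.exclude[MissingClassProblem]("org.apache.spark.sql.sources.v2.reader.ScanBuilder")
  )
}
```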