c1e5688d1a
## What changes were proposed in this pull request?

Since SPARK-20682, we have two `OrcFileFormat`s. This PR refactors the ORC tests according to three principles (with a few exceptions):

1. Move the test suites into `sql/core`.
2. Create a `HiveXXX` test suite in `sql/hive` by reusing the corresponding `sql/core` test suite.
3. `OrcTest` provides common helper functions and `val orcImp: String`.

**Test Suites**

*Native OrcFileFormat*
- org.apache.spark.sql.execution.datasources.orc
  - OrcFilterSuite
  - OrcPartitionDiscoverySuite
  - OrcQuerySuite
  - OrcSourceSuite
- o.a.s.sql.hive.orc
  - OrcHadoopFsRelationSuite

*Hive built-in OrcFileFormat*
- o.a.s.sql.hive.orc
  - HiveOrcFilterSuite
  - HiveOrcPartitionDiscoverySuite
  - HiveOrcQuerySuite
  - HiveOrcSourceSuite
  - HiveOrcHadoopFsRelationSuite

**Hierarchy**

```
OrcTest
  -> OrcSuite
       -> OrcSourceSuite
  -> OrcQueryTest
       -> OrcQuerySuite
  -> OrcPartitionDiscoveryTest
       -> OrcPartitionDiscoverySuite
  -> OrcFilterSuite

HadoopFsRelationTest
  -> OrcHadoopFsRelationSuite
       -> HiveOrcHadoopFsRelationSuite
```

Please note the following:
- Unlike the other test suites, `OrcHadoopFsRelationSuite` doesn't inherit `OrcTest`. It lives in `sql/hive`, like `ParquetHadoopFsRelationSuite`, due to its dependencies, and follows the existing convention of using `val dataSourceName: String`.
- The `OrcFilterSuite`s cannot share test cases because their function signatures differ: one uses Hive 1.2.1 ORC classes and the other uses Apache ORC 1.4.1 classes.

## How was this patch tested?

Pass the Jenkins tests with the reorganized test suites.

Author: Dongjoon Hyun <dongjoon@apache.org>

Closes #19882 from dongjoon-hyun/SPARK-22672.
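The reuse pattern behind principles 2 and 3 can be sketched as follows. Only `OrcTest` and `orcImp` come from the PR description; the helper method and suite bodies below are illustrative stand-ins, not actual Spark code (the real suites extend Spark's test infrastructure such as `QueryTest`):

```scala
// A minimal sketch of the suite-reuse pattern, assuming hypothetical
// suite contents: a Hive suite inherits every test of the core suite
// and only switches the ORC implementation it runs against.
trait OrcTest {
  // Which ORC implementation the concrete suite exercises.
  val orcImp: String

  // Example of a shared helper that all suites inherit.
  def formatClue: String = s"using ORC implementation '$orcImp'"
}

// Lives in sql/core and tests the native OrcFileFormat.
class OrcQuerySuite extends OrcTest {
  override val orcImp: String = "native"
}

// Lives in sql/hive; reuses all of OrcQuerySuite's tests
// against Hive's built-in OrcFileFormat.
class HiveOrcQuerySuite extends OrcQuerySuite {
  override val orcImp: String = "hive"
}
```

Because `formatClue` is a `def`, each concrete suite picks up its own `orcImp` at call time, so the Hive variant needs nothing beyond the one-line override.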
# Spark SQL
This module provides support for executing relational queries expressed in either SQL or the DataFrame/Dataset API.
Spark SQL is broken up into four subprojects:
- Catalyst (`sql/catalyst`) - An implementation-agnostic framework for manipulating trees of relational operators and expressions.
- Execution (`sql/core`) - A query planner / execution engine for translating Catalyst's logical query plans into Spark RDDs. This component also includes a new public interface, `SQLContext`, that allows users to execute SQL or LINQ statements against existing RDDs and Parquet files.
- Hive Support (`sql/hive`) - Includes an extension of `SQLContext` called `HiveContext` that allows users to write queries using a subset of HiveQL and access data from a Hive Metastore using Hive SerDes. There are also wrappers that allow users to run queries that include Hive UDFs, UDAFs, and UDTFs.
- HiveServer and CLI support (`sql/hive-thriftserver`) - Includes support for the SQL CLI (`bin/spark-sql`) and a HiveServer2-compatible server (for JDBC/ODBC).
Running `sql/create-docs.sh` generates SQL documentation for built-in functions under `sql/site`.