### What changes were proposed in this pull request?

Allow the SQL test files to specify different dimensions of config sets during testing. For example,

```
--CONFIG_DIM1 a=1
--CONFIG_DIM1 b=2,c=3

--CONFIG_DIM2 x=1
--CONFIG_DIM2 y=1,z=2
```

This example defines 2 config dimensions, and each dimension defines 2 config sets. We will run the queries 4 times:
1. a=1, x=1
2. a=1, y=1, z=2
3. b=2, c=3, x=1
4. b=2, c=3, y=1, z=2

### Why are the changes needed?

Currently `SQLQueryTestSuite` takes a long time, because we run each test at least 3 times to check different codegen modes. This is not necessary for most tests, e.g. DESC TABLE; we should only check codegen modes for certain tests. With the `--CONFIG_DIM` directive, we can do things like: test different join operators (broadcast or shuffle join) x different codegen modes.

After reducing testing time, we should be able to run the Thrift server SQL tests with config settings.

### Does this PR introduce any user-facing change?

No.

### How was this patch tested?

Test only.

Closes #26612 from cloud-fan/test.

Authored-by: Wenchen Fan <wenchen@databricks.com>
Signed-off-by: HyukjinKwon <gurwls223@apache.org>
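To make the cross-product semantics concrete, here is a rough, self-contained Scala sketch of how two dimensions of config sets expand into the four runs listed above. This is not the actual `SQLQueryTestSuite` code; the object name and the in-memory representation of the dimensions are assumptions for illustration.

```scala
// Hypothetical model of --CONFIG_DIM expansion (not the real parser).
object ConfigDimExample {
  type ConfigSet = Seq[(String, String)]

  // Each dimension is a list of alternative config sets.
  // Dimension 1: {a=1} or {b=2, c=3}; dimension 2: {x=1} or {y=1, z=2}.
  val dimensions: Seq[Seq[ConfigSet]] = Seq(
    Seq(Seq("a" -> "1"), Seq("b" -> "2", "c" -> "3")),
    Seq(Seq("x" -> "1"), Seq("y" -> "1", "z" -> "2"))
  )

  // Cross product: pick one config set from each dimension and concatenate.
  val runs: Seq[ConfigSet] =
    dimensions.foldLeft(Seq(Seq.empty[(String, String)])) { (acc, dim) =>
      for (combo <- acc; set <- dim) yield combo ++ set
    }

  def main(args: Array[String]): Unit =
    // Prints the 4 combinations:
    // a=1, x=1 / a=1, y=1, z=2 / b=2, c=3, x=1 / b=2, c=3, y=1, z=2
    runs.foreach(r => println(r.map { case (k, v) => s"$k=$v" }.mkString(", ")))
}
```

Note that the number of runs is the product of the config-set counts across dimensions, which is why directives should be added sparingly.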
# Spark SQL
This module provides support for executing relational queries expressed in either SQL or the DataFrame/Dataset API.
Spark SQL is broken up into four subprojects:
- Catalyst (sql/catalyst) - An implementation-agnostic framework for manipulating trees of relational operators and expressions.
- Execution (sql/core) - A query planner / execution engine for translating Catalyst's logical query plans into Spark RDDs. This component also includes a new public interface, SQLContext, that allows users to execute SQL or LINQ statements against existing RDDs and Parquet files (see the sketch after this list).
- Hive Support (sql/hive) - Includes extensions that allow users to write queries using a subset of HiveQL and access data from a Hive Metastore using Hive SerDes. There are also wrappers that allow users to run queries that include Hive UDFs, UDAFs, and UDTFs.
- HiveServer and CLI support (sql/hive-thriftserver) - Includes support for the SQL CLI (bin/spark-sql) and a HiveServer2-compatible server for JDBC/ODBC.
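As a concrete illustration of the execution layer described above, here is a minimal Scala sketch. It assumes a local `SparkSession` (whose `sqlContext` field exposes the `SQLContext` mentioned in the list) and a hypothetical `people.parquet` file with `name` and `age` columns; neither the file nor the schema comes from this README.

```scala
import org.apache.spark.sql.SparkSession

// Minimal sketch: run a SQL query against a Parquet file.
// The path "people.parquet" and its columns are hypothetical.
object SqlOnParquetExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("SqlOnParquetExample")
      .master("local[*]")
      .getOrCreate()

    // Register the Parquet data as a temporary view so SQL can reference it.
    spark.read.parquet("people.parquet").createOrReplaceTempView("people")

    // The query is parsed and optimized by Catalyst (sql/catalyst);
    // sql/core then translates the physical plan into RDD operations.
    spark.sql("SELECT name, age FROM people WHERE age > 21").show()

    spark.stop()
  }
}
```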
Running `./sql/create-docs.sh` generates SQL documentation for built-in functions under `sql/site`.