a98dc60408
### What changes were proposed in this pull request?

As discussed in https://github.com/apache/spark/pull/30145#discussion_r514728642 and https://github.com/apache/spark/pull/30145#discussion_r514734648, we need to rewrite the current Grouping Analytics grammar to be as flexible as PostgreSQL's, so that it can support subsequent development.

PostgreSQL supports queries such as:

```
select a, b, c, count(1) from t group by cube (a, b, c);
select a, b, c, count(1) from t group by cube(a, b, c);
select a, b, c, count(1) from t group by cube (a, b, c, (a, b), (a, b, c));
select a, b, c, count(1) from t group by rollup(a, b, c);
select a, b, c, count(1) from t group by rollup (a, b, c);
select a, b, c, count(1) from t group by rollup (a, b, c, (a, b), (a, b, c));
```

This PR does three things, which may later be split into separate PRs:

- Refactor CUBE/ROLLUP (regarding them as ANTLR tokens in the parser)
- Refactor GROUPING SETS (replacing the logical node with a new expression)
- Support new syntax for CUBE/ROLLUP (e.g., GROUP BY CUBE ((a, b), (a, c))); see the sketch after this message

### Why are the changes needed?

To make the current Grouping Analytics grammar as flexible as PostgreSQL's and to support subsequent development.

### Does this PR introduce _any_ user-facing change?

Yes. Users can now write Grouping Analytics queries with the same flexibility as in PostgreSQL.

### How was this patch tested?

Added unit tests.

Closes #30212 from AngersZhuuuu/refact-grouping-analytics.

Lead-authored-by: angerszhu <angers.zhu@gmail.com>
Co-authored-by: Angerszhuuuu <angers.zhu@gmail.com>
Co-authored-by: AngersZhuuuu <angers.zhu@gmail.com>
Co-authored-by: Takeshi Yamamuro <yamamuro@apache.org>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
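For illustration, a minimal sketch of the kind of query the new grammar accepts, run from a `spark-shell` session (where `spark` is predefined); the table `t` and its columns `a`, `b`, `c` are hypothetical:

```scala
// Hypothetical table t with columns a, b, and c.
// With this change, grouping sets such as (a, b) may appear directly
// inside CUBE/ROLLUP, matching the PostgreSQL examples above.
spark.sql("""
  SELECT a, b, c, count(1)
  FROM t
  GROUP BY CUBE (a, b, c, (a, b), (a, b, c))
""").show()
```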
# Spark SQL
This module provides support for executing relational queries expressed in either SQL or the DataFrame/Dataset API.
Spark SQL is broken up into four subprojects:
- Catalyst (`sql/catalyst`) - An implementation-agnostic framework for manipulating trees of relational operators and expressions.
- Execution (`sql/core`) - A query planner / execution engine for translating Catalyst's logical query plans into Spark RDDs. This component also includes a new public interface, `SQLContext`, that allows users to execute SQL or language-integrated queries against existing RDDs and Parquet files (see the sketch after this list).
- Hive Support (`sql/hive`) - Includes extensions that allow users to write queries using a subset of HiveQL and access data from a Hive Metastore using Hive SerDes. There are also wrappers that allow users to run queries that include Hive UDFs, UDAFs, and UDTFs.
- HiveServer and CLI support (`sql/hive-thriftserver`) - Includes support for the SQL CLI (`bin/spark-sql`) and a HiveServer2-compatible server (for JDBC/ODBC).
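Below is a minimal, self-contained sketch of that workflow; the Parquet path and schema are hypothetical, and `SparkSession` is used as the modern entry point that wraps `SQLContext`:

```scala
import org.apache.spark.sql.SparkSession

object SqlOnParquetExample {
  def main(args: Array[String]): Unit = {
    // Local session for demonstration purposes only.
    val spark = SparkSession.builder()
      .appName("sql-on-parquet")
      .master("local[*]")
      .getOrCreate()

    // Expose an existing Parquet file (hypothetical path) to SQL as a view.
    spark.read.parquet("people.parquet").createOrReplaceTempView("people")

    // Catalyst plans the query; sql/core executes it as Spark operations.
    spark.sql("SELECT name, count(*) AS n FROM people GROUP BY name").show()

    spark.stop()
  }
}
```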
Running `./sql/create-docs.sh` generates SQL documentation for built-in functions under `sql/site`, and SQL configuration documentation that gets included as part of `configuration.md` in the main `docs` directory.