### What changes were proposed in this pull request?

This PR adds support for an unlimited number of MATCHED and NOT MATCHED clauses in the MERGE INTO statement.

### Why are the changes needed?

The current MERGE INTO syntax is:

```
MERGE INTO [db_name.]target_table [AS target_alias]
USING [db_name.]source_table [<time_travel_version>] [AS source_alias]
ON <merge_condition>
[ WHEN MATCHED [ AND <condition> ] THEN <matched_action> ]
[ WHEN MATCHED [ AND <condition> ] THEN <matched_action> ]
[ WHEN NOT MATCHED [ AND <condition> ] THEN <not_matched_action> ]
```

It would be nice to support an unlimited number of MATCHED and NOT MATCHED clauses in the MERGE INTO statement, because users may want to handle several different `AND <condition>`s, with the clauses evaluated in order like a series of CASE WHEN branches. The expected syntax looks like:

```
MERGE INTO [db_name.]target_table [AS target_alias]
USING [db_name.]source_table [<time_travel_version>] [AS source_alias]
ON <merge_condition>
[when_matched_clause [, ...]]
[when_not_matched_clause [, ...]]
```

where `when_matched_clause` is

```
WHEN MATCHED [ AND <condition> ] THEN <matched_action>
```

and `when_not_matched_clause` is

```
WHEN NOT MATCHED [ AND <condition> ] THEN <not_matched_action>
```

`matched_action` can be one of

```
DELETE
UPDATE SET *
UPDATE SET col1 = value1 [, col2 = value2, ...]
```

and `not_matched_action` can be one of

```
INSERT *
INSERT (col1 [, col2, ...]) VALUES (value1 [, value2, ...])
```

### Does this PR introduce _any_ user-facing change?

Yes. The SQL syntax changes, but the change is backward compatible.

### How was this patch tested?

New tests added.

Closes #28875 from xianyinxin/SPARK-32030.

Authored-by: xy_xin <xianyin.xxy@alibaba-inc.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
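For illustration, here is a hypothetical statement that exercises the relaxed grammar with three MATCHED and two NOT MATCHED clauses. The `target` and `source` tables, their columns, and the conditions are all made up, and actually running the statement requires a data source that implements MERGE:

```sql
-- Hypothetical tables with the same schema:
-- target(id, price, status) and source(id, price, status).
MERGE INTO target t
USING source s
ON t.id = s.id
WHEN MATCHED AND s.status = 'deleted' THEN DELETE
WHEN MATCHED AND s.price > t.price THEN UPDATE SET price = s.price
WHEN MATCHED THEN UPDATE SET status = s.status
WHEN NOT MATCHED AND s.status = 'new' THEN
  INSERT (id, price, status) VALUES (s.id, s.price, s.status)
WHEN NOT MATCHED THEN INSERT *
```

As with a CASE WHEN chain, the clauses are evaluated in order and the first clause whose condition holds is applied for a given row; only the last MATCHED clause and the last NOT MATCHED clause may omit the `AND <condition>`.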
# Spark SQL
This module provides support for executing relational queries expressed in either SQL or the DataFrame/Dataset API.
Spark SQL is broken up into four subprojects:
- Catalyst (`sql/catalyst`) - An implementation-agnostic framework for manipulating trees of relational operators and expressions.
- Execution (`sql/core`) - A query planner / execution engine for translating Catalyst's logical query plans into Spark RDDs. This component also includes a public interface, `SQLContext`, that allows users to execute SQL or DataFrame queries against existing RDDs and Parquet files (a small example follows this list).
- Hive Support (`sql/hive`) - Includes extensions that allow users to write queries using a subset of HiveQL and access data from a Hive Metastore using Hive SerDes. There are also wrappers that allow users to run queries that include Hive UDFs, UDAFs, and UDTFs.
- HiveServer and CLI support (`sql/hive-thriftserver`) - Includes support for the SQL CLI (`bin/spark-sql`) and a HiveServer2-compatible server (for JDBC/ODBC).
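As a small sketch of how these pieces fit together, the following could be run through the SQL CLI; the table name and its rows are made up for illustration:

```sql
-- Hypothetical session in bin/spark-sql; the table and data are illustrative.
CREATE TABLE people (name STRING, age INT) USING parquet;
INSERT INTO people VALUES ('alice', 34), ('bob', 28);
-- The query text is parsed and optimized by Catalyst (sql/catalyst),
-- then planned and executed as RDD operations by sql/core.
SELECT name FROM people WHERE age > 30;
```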
Running `./sql/create-docs.sh` generates SQL documentation for built-in functions under `sql/site`, and SQL configuration documentation that gets included as part of `configuration.md` in the main `docs` directory.