commit 0aa2c284e4
### What changes were proposed in this pull request?

This PR extends the current function registry and catalog to support table-valued functions by adding a table function registry. It also refactors `range` to be a built-in function in the table function registry.

### Why are the changes needed?

Currently, Spark resolves table-valued functions very differently from other functions. This change makes the behavior of table and non-table functions consistent. It also allows Spark to display information about built-in table-valued functions.

Before:
```scala
scala> sql("describe function range").show(false)
+--------------------------+
|function_desc             |
+--------------------------+
|Function: range not found.|
+--------------------------+
```

After:
```scala
Function: range
Class: org.apache.spark.sql.catalyst.plans.logical.Range
Usage:
  range(start: Long, end: Long, step: Long, numPartitions: Int)
  range(start: Long, end: Long, step: Long)
  range(start: Long, end: Long)
  range(end: Long)

// Extended
Function: range
Class: org.apache.spark.sql.catalyst.plans.logical.Range
Usage:
  range(start: Long, end: Long, step: Long, numPartitions: Int)
  range(start: Long, end: Long, step: Long)
  range(start: Long, end: Long)
  range(end: Long)
Extended Usage:
  Examples:
    > SELECT * FROM range(1);
      +---+
      | id|
      +---+
      |  0|
      +---+
    > SELECT * FROM range(0, 2);
      +---+
      |id |
      +---+
      |0  |
      |1  |
      +---+
    > SELECT * FROM range(0, 4, 2);
      +---+
      |id |
      +---+
      |0  |
      |2  |
      +---+

  Since: 2.0.0
```

### Does this PR introduce _any_ user-facing change?

Yes. Users will no longer be able to create a function named `range` in the default database.

Before:
```scala
scala> sql("create function range as 'range'")
res3: org.apache.spark.sql.DataFrame = []
```

After:
```
scala> sql("create function range as 'range'")
org.apache.spark.sql.catalyst.analysis.FunctionAlreadyExistsException: Function 'default.range' already exists in database 'default'
```

### How was this patch tested?

Unit tests.

Closes #31791 from allisonwang-db/spark-34678-table-func-registry.

Authored-by: allisonwang-db <66282705+allisonwang-db@users.noreply.github.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
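To make the core idea concrete, here is a minimal, self-contained sketch of what a table function registry does: it maps a function name to a builder that turns the call's arguments into a logical plan. The names `SimpleTableFunctionRegistry` and `TableFunctionBuilder`, and the stringly-typed `LogicalPlan` stand-in, are illustrative simplifications; Spark's actual `TableFunctionRegistry` API is richer and may differ in signatures and behavior.

```scala
import scala.collection.mutable

// A toy table function registry: name -> builder that produces a logical plan.
object SimpleTableFunctionRegistry {
  // Stand-in for a real logical plan; here just a description string.
  type LogicalPlan = String
  type TableFunctionBuilder = Seq[Long] => LogicalPlan

  private val builders = mutable.Map.empty[String, TableFunctionBuilder]

  // Register a builder under a case-insensitive function name.
  def registerFunction(name: String, builder: TableFunctionBuilder): Unit =
    builders(name.toLowerCase) = builder

  // Look up a builder; analysis would fail if this returns None.
  def lookupFunction(name: String): Option[TableFunctionBuilder] =
    builders.get(name.toLowerCase)
}

object RegistryDemo extends App {
  import SimpleTableFunctionRegistry._

  // Register `range` as a built-in, supporting three argument arities.
  registerFunction("range", {
    case Seq(end)              => s"Range(start=0, end=$end, step=1)"
    case Seq(start, end)       => s"Range(start=$start, end=$end, step=1)"
    case Seq(start, end, step) => s"Range(start=$start, end=$end, step=$step)"
    case args                  => sys.error(s"invalid argument count: ${args.length}")
  })

  // Resolution now goes through the registry rather than a special-cased path.
  println(lookupFunction("range").map(_(Seq(0L, 2L))))
  // prints: Some(Range(start=0, end=2, step=1))
}
```

Routing table-valued functions through a registry like this is what lets `DESCRIBE FUNCTION` find them and lets the catalog reject a conflicting `CREATE FUNCTION range`, since built-in and user-defined functions now share one lookup path.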
# Spark SQL
This module provides support for executing relational queries expressed in either SQL or the DataFrame/Dataset API; a short example of both styles follows the subproject list below.
Spark SQL is broken up into four subprojects:
- Catalyst (sql/catalyst) - An implementation-agnostic framework for manipulating trees of relational operators and expressions.
- Execution (sql/core) - A query planner / execution engine for translating Catalyst's logical query plans into Spark RDDs. This component also includes a new public interface, SQLContext, that allows users to execute SQL or LINQ statements against existing RDDs and Parquet files.
- Hive Support (sql/hive) - Includes extensions that allow users to write queries using a subset of HiveQL and access data from a Hive Metastore using Hive SerDes. There are also wrappers that allow users to run queries that include Hive UDFs, UDAFs, and UDTFs.
- HiveServer and CLI support (sql/hive-thriftserver) - Includes support for the SQL CLI (bin/spark-sql) and a HiveServer2-compatible server for JDBC/ODBC clients.
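As a rough illustration of the two query styles mentioned above, the sketch below runs the same query through SQL and through the Dataset API using `SparkSession`, the modern entry point that succeeded `SQLContext`. The object name `SparkSqlExample` and the `local[*]` master setting are illustrative choices, not part of this module.

```scala
import org.apache.spark.sql.SparkSession

object SparkSqlExample {
  def main(args: Array[String]): Unit = {
    // SparkSession wraps SQLContext and is the usual entry point today.
    val spark = SparkSession.builder()
      .appName("sql-readme-example")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    // SQL: query the built-in table-valued function `range`.
    spark.sql("SELECT id FROM range(0, 4, 2)").show()

    // Dataset API: the same query expressed programmatically.
    spark.range(0, 4, 2).select($"id").show()

    spark.stop()
  }
}
```

Both calls produce the same plan after analysis, which is the point of having one Catalyst framework underneath the SQL and Dataset front ends.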
Running `./sql/create-docs.sh` generates SQL documentation for built-in functions under `sql/site`, and SQL configuration documentation that gets included as part of `configuration.md` in the main `docs` directory.