2656c9d304
## What changes were proposed in this pull request?

This PR ports strings.sql from the PostgreSQL regression tests: https://github.com/postgres/postgres/blob/REL_12_BETA2/src/test/regress/sql/strings.sql

The expected results can be found here: https://github.com/postgres/postgres/blob/REL_12_BETA2/src/test/regress/expected/strings.out

While porting the test cases, nine PostgreSQL-specific features were found that do not exist in Spark SQL:

- [SPARK-28076](https://issues.apache.org/jira/browse/SPARK-28076): Support regular expression substring
- [SPARK-28078](https://issues.apache.org/jira/browse/SPARK-28078): Add support for the other 4 REGEXP functions
- [SPARK-28412](https://issues.apache.org/jira/browse/SPARK-28412): OVERLAY function support for byte arrays
- [SPARK-28083](https://issues.apache.org/jira/browse/SPARK-28083): ANSI SQL: LIKE predicate: ESCAPE clause
- [SPARK-28087](https://issues.apache.org/jira/browse/SPARK-28087): Add support for split_part
- [SPARK-28122](https://issues.apache.org/jira/browse/SPARK-28122): Missing `sha224`/`sha256`/`sha384`/`sha512` functions
- [SPARK-28123](https://issues.apache.org/jira/browse/SPARK-28123): Add support for the string function btrim
- [SPARK-28448](https://issues.apache.org/jira/browse/SPARK-28448): Implement the ILIKE operator
- [SPARK-28449](https://issues.apache.org/jira/browse/SPARK-28449): Missing escape_string_warning and standard_conforming_strings configs

Five inconsistent behaviors were also found:

- [SPARK-27952](https://issues.apache.org/jira/browse/SPARK-27952): String functions: regexp_replace is not compatible
- [SPARK-28121](https://issues.apache.org/jira/browse/SPARK-28121): decode cannot accept 'escape' as a charset
- [SPARK-27930](https://issues.apache.org/jira/browse/SPARK-27930): Replace `strpos` with `locate` or `position` in Spark SQL
- [SPARK-27930](https://issues.apache.org/jira/browse/SPARK-27930): Replace `to_hex` with `hex` in Spark SQL
- [SPARK-28451](https://issues.apache.org/jira/browse/SPARK-28451): `substr` returns different values
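For reference, the PostgreSQL semantics behind two of the missing functions, `split_part` (SPARK-28087) and `btrim` (SPARK-28123), can be sketched in plain Python. This is a rough approximation of the documented PostgreSQL behavior, not the Spark implementation:

```python
def split_part(s: str, delimiter: str, n: int) -> str:
    """Approximate PostgreSQL split_part: split s on delimiter and
    return the n-th field (1-based); return '' when n is out of range."""
    parts = s.split(delimiter)
    return parts[n - 1] if 1 <= n <= len(parts) else ""

def btrim(s: str, chars: str = None) -> str:
    """Approximate PostgreSQL btrim: remove the longest run of the
    given characters (default: whitespace) from both ends of s."""
    return s.strip(chars)

print(split_part("abc~@~def~@~ghi", "~@~", 2))  # -> def
print(btrim("xyxtrimyyx", "xy"))                # -> trim
```

The two example calls mirror the ones in the PostgreSQL string-function documentation, which makes them convenient fixtures when porting the regression tests.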
## How was this patch tested?

N/A

Closes #24923 from wangyum/SPARK-28071.

Authored-by: Yuming Wang <yumwang@ebay.com>
Signed-off-by: Takeshi Yamamuro <yamamuro@apache.org>
# Spark SQL
This module provides support for executing relational queries expressed in either SQL or the DataFrame/Dataset API.
Spark SQL is broken up into four subprojects:
- Catalyst (`sql/catalyst`) - An implementation-agnostic framework for manipulating trees of relational operators and expressions.
- Execution (`sql/core`) - A query planner / execution engine for translating Catalyst's logical query plans into Spark RDDs. This component also includes a new public interface, SQLContext, that allows users to execute SQL or LINQ statements against existing RDDs and Parquet files.
- Hive Support (`sql/hive`) - Includes an extension of SQLContext called HiveContext that allows users to write queries using a subset of HiveQL and access data from a Hive Metastore using Hive SerDes. There are also wrappers that allow users to run queries that include Hive UDFs, UDAFs, and UDTFs.
- HiveServer and CLI support (`sql/hive-thriftserver`) - Includes support for the SQL CLI (`bin/spark-sql`) and a HiveServer2 (for JDBC/ODBC) compatible server.
Running `./sql/create-docs.sh` generates SQL documentation for built-in functions under `sql/site`.