### What changes were proposed in this pull request?

In this PR, I propose a new expression `MakeInterval`, registered as the function `make_interval`. The function accepts the following parameters:
- `years` - the number of years in the interval, positive or negative. The parameter is multiplied by 12 and added to the interval's `months`.
- `months` - the number of months in the interval, positive or negative.
- `weeks` - the number of weeks in the interval, positive or negative. The parameter is multiplied by 7 and added to the interval's `days`.
- `days` - the number of days in the interval, positive or negative.
- `hours`, `mins` - the number of hours and minutes, positive or negative. They are converted to microseconds and added to the interval's `microseconds`.
- `seconds` - the number of seconds, with a fractional part of up to microsecond precision. It is converted to microseconds and, together with `hours` and `mins`, added to the interval's total `microseconds`.

For example:
```sql
spark-sql> select make_interval(2019, 11, 1, 1, 12, 30, 01.001001);
2019 years 11 months 8 days 12 hours 30 minutes 1.001001 seconds
```

### Why are the changes needed?

- To improve the user experience with Spark SQL by letting users build `INTERVAL` columns from other columns containing `years`, `months` ... `seconds`. Currently, users can make an `INTERVAL` column from other columns only by constructing a `STRING` column and casting it to `INTERVAL`; see `IntervalBenchmark` as an example. A usage sketch follows this description.
- To maintain feature parity with PostgreSQL, which provides such a function:
```sql
# SELECT make_interval(2019, 11);
   make_interval
--------------------
 2019 years 11 mons
```

### Does this PR introduce any user-facing change?

No

### How was this patch tested?

- By new tests for the `MakeInterval` expression in `IntervalExpressionsSuite`
- By tests in `interval.sql`

Closes #26446 from MaxGekk/make_interval.

Authored-by: Maxim Gekk <max.gekk@gmail.com>
Signed-off-by: Dongjoon Hyun <dhyun@apple.com>
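As a usage sketch (the view and column names below are illustrative, not part of the PR), `make_interval` builds an `INTERVAL` column directly from numeric columns, avoiding the `STRING` round trip:

```sql
-- Hypothetical view holding per-row interval components.
CREATE TEMPORARY VIEW interval_parts AS
SELECT * FROM VALUES
  (1, 2, 0, 3, 4, 30, 1.001001),
  (0, 0, 1, 1, 12, 30, 1.001001)
  AS t(y, mon, w, d, h, mi, sec);

-- Argument order: years, months, weeks, days, hours, mins, secs.
SELECT make_interval(y, mon, w, d, h, mi, sec) AS i FROM interval_parts;
-- 1 years 2 months 3 days 4 hours 30 minutes 1.001001 seconds
-- 8 days 12 hours 30 minutes 1.001001 seconds   (1 week -> 7 days, plus 1 day)
```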
# Spark SQL
This module provides support for executing relational queries expressed in either SQL or the DataFrame/Dataset API.
Spark SQL is broken up into four subprojects:
- Catalyst (`sql/catalyst`) - An implementation-agnostic framework for manipulating trees of relational operators and expressions.
- Execution (`sql/core`) - A query planner / execution engine for translating Catalyst's logical query plans into Spark RDDs. This component also includes a new public interface, `SQLContext`, that allows users to execute SQL or LINQ statements against existing RDDs and Parquet files (see the example after this list).
- Hive Support (`sql/hive`) - Includes extensions that allow users to write queries using a subset of HiveQL and access data from a Hive Metastore using Hive SerDes. There are also wrappers that allow users to run queries that include Hive UDFs, UDAFs, and UDTFs.
- HiveServer and CLI support (`sql/hive-thriftserver`) - Includes support for the SQL CLI (`bin/spark-sql`) and a HiveServer2-compatible server (for JDBC/ODBC).
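One way to see the `sql/core` layer in action (the file path below is a placeholder, not shipped with Spark): Spark SQL can query Parquet files directly by path, for example from the `bin/spark-sql` CLI:

```sql
-- The path is a placeholder; point it at an existing Parquet file.
SELECT count(*) FROM parquet.`/path/to/data.parquet`;
```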
Running `./sql/create-docs.sh` generates SQL documentation for built-in functions under `sql/site`.