### What changes were proposed in this pull request?

At the moment we do not have any function to compute the length of a JSON array directly. This PR proposes a `json_array_length` function that returns the length of the outermost JSON array.

- If the JSON array is valid, the function returns the length of the outermost array:

```
scala> spark.sql("select json_array_length('[1,2,3,[33,44],{\"key\":[2,3,4]}]')").show
+--------------------------------------------------+
|json_array_length([1,2,3,[33,44],{"key":[2,3,4]}])|
+--------------------------------------------------+
|                                                 5|
+--------------------------------------------------+

scala> spark.sql("select json_array_length('[[1],[2,3]]')").show
+------------------------------+
|json_array_length([[1],[2,3]])|
+------------------------------+
|                             2|
+------------------------------+
```

- For any other valid JSON string, an invalid JSON string, a null array, or `NULL` input, `NULL` is returned:

```
scala> spark.sql("select json_array_length('')").show
+-------------------+
|json_array_length()|
+-------------------+
|               null|
+-------------------+
```

### Why are the changes needed?

- As mentioned in the JIRA ticket, this function is supported by Presto, PostgreSQL, Redshift, SQLite, MySQL, MariaDB, and IBM DB2.
- It improves user experience and ease of use.

Performance result for the JSON array `[1, 2, 3, 4]`:

```
Intel(R) Core(TM) i7-9750H CPU 2.60GHz
JSON functions:                     Best Time(ms)   Avg Time(ms)   Stdev(ms)   Rate(M/s)   Per Row(ns)   Relative
------------------------------------------------------------------------------------------------------------------
json_array_length                            7728           7762          53         1.3         772.8       1.0X
size+from_json                              12739          12895         199         0.8        1273.9       0.6X
```

### Does this PR introduce any user-facing change?

Yes, users can now get the length of a JSON array with `json_array_length`.

### How was this patch tested?

Added UT.

Closes #27759 from iRakson/jsonArrayLength.

Authored-by: iRakson <raksonrakesh@gmail.com>
Signed-off-by: Dongjoon Hyun <dongjoon@apache.org>
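For readers who want to try the function, here is a minimal, self-contained sketch of the two approaches compared in the benchmark above. It is not code from this PR; it assumes a Spark version in which `json_array_length` is available as a SQL function and uses a local `SparkSession`.

```scala
import org.apache.spark.sql.SparkSession

object JsonArrayLengthSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("json_array_length sketch")
      .master("local[*]")
      .getOrCreate()

    // New built-in function: length of the outermost JSON array.
    spark.sql("SELECT json_array_length('[1, 2, 3, 4]') AS len").show()

    // Pre-existing workaround: parse the array with from_json, then take size().
    spark.sql("SELECT size(from_json('[1, 2, 3, 4]', 'array<int>')) AS len").show()

    spark.stop()
  }
}
```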
# Spark SQL
This module provides support for executing relational queries expressed in either SQL or the DataFrame/Dataset API.
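As a rough illustration (a sketch, not code from this module; the `people` data and column names are made up), the same query can be expressed either through SQL or through the DataFrame/Dataset API:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.col

object RelationalQuerySketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("Spark SQL sketch")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    // Illustrative data registered as a temporary view.
    val people = Seq(("Alice", 34), ("Bob", 19)).toDF("name", "age")
    people.createOrReplaceTempView("people")

    // SQL
    spark.sql("SELECT name FROM people WHERE age > 21").show()

    // Equivalent DataFrame/Dataset API
    people.filter(col("age") > 21).select("name").show()

    spark.stop()
  }
}
```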
Spark SQL is broken up into four subprojects:
- Catalyst (sql/catalyst) - An implementation-agnostic framework for manipulating trees of relational operators and expressions.
- Execution (sql/core) - A query planner / execution engine for translating Catalyst's logical query plans into Spark RDDs. This component also includes a new public interface, SQLContext, that allows users to execute SQL or LINQ statements against existing RDDs and Parquet files.
- Hive Support (sql/hive) - Includes extensions that allow users to write queries using a subset of HiveQL and access data from a Hive Metastore using Hive SerDes. There are also wrappers that allow users to run queries that include Hive UDFs, UDAFs, and UDTFs (a short sketch of enabling Hive support follows this list).
- HiveServer and CLI support (sql/hive-thriftserver) - Includes support for the SQL CLI (bin/spark-sql) and a HiveServer2 (for JDBC/ODBC) compatible server.
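The sketch below illustrates the Hive Support bullet above. It is an assumption-laden example, not code from this module: it presumes a Spark build with Hive support, a reachable metastore/warehouse, and a hypothetical Hive table named `src`.

```scala
import org.apache.spark.sql.SparkSession

object HiveQuerySketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("Hive support sketch")
      .enableHiveSupport() // wires in the Hive metastore, SerDes, and HiveQL extensions
      .getOrCreate()

    // Plain HiveQL, resolved against tables registered in the Hive metastore.
    // `src` is a hypothetical table used only for illustration.
    spark.sql("SELECT key, value FROM src LIMIT 10").show()

    spark.stop()
  }
}
```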
Running `./sql/create-docs.sh` generates SQL documentation for built-in functions under `sql/site`, and SQL configuration documentation that gets included as part of `configuration.md` in the main `docs` directory.