748f05fca9
### What changes were proposed in this pull request?

This PR aims to add the `zstd` codec name to Spark-generated ORC file names for consistency.

### Why are the changes needed?

Like the other supported ORC codecs, we had better have `zstd` in the Spark-generated ORC file names. Note that there is no problem reading or writing ORC zstd files currently; this PR only revises the file name format for consistency.

**SNAPPY**
```
scala> spark.range(10).repartition(1).write.option("compression", "snappy").orc("/tmp/snappy")

$ ls -al /tmp/snappy
total 24
drwxr-xr-x   6 dongjoon  wheel  192 Apr  4 12:17 .
drwxrwxrwt  14 root      wheel  448 Apr  4 12:17 ..
-rw-r--r--   1 dongjoon  wheel    8 Apr  4 12:17 ._SUCCESS.crc
-rw-r--r--   1 dongjoon  wheel   12 Apr  4 12:17 .part-00000-833bb7ad-d1e1-48cc-9719-07b2d594aa4c-c000.snappy.orc.crc
-rw-r--r--   1 dongjoon  wheel    0 Apr  4 12:17 _SUCCESS
-rw-r--r--   1 dongjoon  wheel  231 Apr  4 12:17 part-00000-833bb7ad-d1e1-48cc-9719-07b2d594aa4c-c000.snappy.orc
```

**ZSTD (AS-IS)**
```
scala> spark.range(10).repartition(1).write.option("compression", "zstd").orc("/tmp/zstd")

$ ls -al /tmp/zstd
total 24
drwxr-xr-x   6 dongjoon  wheel  192 Apr  4 12:17 .
drwxrwxrwt  14 root      wheel  448 Apr  4 12:17 ..
-rw-r--r--   1 dongjoon  wheel    8 Apr  4 12:17 ._SUCCESS.crc
-rw-r--r--   1 dongjoon  wheel   12 Apr  4 12:17 .part-00000-2f403ce9-7314-4db5-bca3-b1c1dd83335f-c000.orc.crc
-rw-r--r--   1 dongjoon  wheel    0 Apr  4 12:17 _SUCCESS
-rw-r--r--   1 dongjoon  wheel  231 Apr  4 12:17 part-00000-2f403ce9-7314-4db5-bca3-b1c1dd83335f-c000.orc
```

**ZSTD (After this PR)**
```
scala> spark.range(10).repartition(1).write.option("compression", "zstd").orc("/tmp/zstd_new")

$ ls -al /tmp/zstd_new
total 24
drwxr-xr-x   6 dongjoon  wheel  192 Apr  4 12:28 .
drwxrwxrwt  15 root      wheel  480 Apr  4 12:28 ..
-rw-r--r--   1 dongjoon  wheel    8 Apr  4 12:28 ._SUCCESS.crc
-rw-r--r--   1 dongjoon  wheel   12 Apr  4 12:28 .part-00000-49d57329-7196-4caf-839c-4251c876e26b-c000.zstd.orc.crc
-rw-r--r--   1 dongjoon  wheel    0 Apr  4 12:28 _SUCCESS
-rw-r--r--   1 dongjoon  wheel  231 Apr  4 12:28 part-00000-49d57329-7196-4caf-839c-4251c876e26b-c000.zstd.orc
```

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

Pass the CIs with the updated UT.

Closes #32051 from dongjoon-hyun/SPARK-34954.

Authored-by: Dongjoon Hyun <dhyun@apple.com>
Signed-off-by: Dongjoon Hyun <dhyun@apple.com>
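In effect, the change extends the codec-to-file-extension mapping that the ORC writer already applies for the other codecs so that `ZSTD` also contributes an extension. A minimal Python sketch of that idea follows; the names and table below are illustrative assumptions, not Spark's actual internals:

```python
# Illustrative sketch: map a compression codec name to an ORC output
# file extension. The dictionary and function names are hypothetical;
# Spark's real mapping lives inside its ORC data source.
ORC_EXTENSIONS = {
    "NONE": ".orc",
    "SNAPPY": ".snappy.orc",
    "ZLIB": ".zlib.orc",
    "LZO": ".lzo.orc",
    "ZSTD": ".zstd.orc",  # the entry this PR adds for consistency
}

def orc_file_name(stem: str, codec: str) -> str:
    """Build an output file name from a partition stem and a codec name."""
    return stem + ORC_EXTENSIONS.get(codec.upper(), ".orc")

print(orc_file_name("part-00000-c000", "zstd"))
# part-00000-c000.zstd.orc
```

Before the fix, `ZSTD` simply fell through to the plain `.orc` default, which is why the AS-IS listing above shows no codec name in the file name.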
# Spark SQL
This module provides support for executing relational queries expressed in either SQL or the DataFrame/Dataset API.
Spark SQL is broken up into four subprojects:
- Catalyst (sql/catalyst) - An implementation-agnostic framework for manipulating trees of relational operators and expressions.
- Execution (sql/core) - A query planner / execution engine for translating Catalyst's logical query plans into Spark RDDs. This component also includes a public interface, SQLContext, that allows users to execute SQL queries against existing RDDs and Parquet files.
- Hive Support (sql/hive) - Includes extensions that allow users to write queries using a subset of HiveQL and access data from a Hive Metastore using Hive SerDes. There are also wrappers that allow users to run queries that include Hive UDFs, UDAFs, and UDTFs.
- HiveServer and CLI support (sql/hive-thriftserver) - Includes support for the SQL CLI (bin/spark-sql) and a HiveServer2 (for JDBC/ODBC) compatible server.
Running `./sql/create-docs.sh` generates SQL documentation for built-in functions under `sql/site`, and SQL configuration documentation that gets included as part of `configuration.md` in the main `docs` directory.