Commit `b26e7b510b`:
### What changes were proposed in this pull request?

Infer the partition schema by:
1. inferring the common type over all partition part values, and
2. casting those values to the common type.

Before the changes:
1. Spark creates a literal with the most appropriate type for each concrete partition value, i.e. `part0=-0` -> `Literal(0, IntegerType)`, `part0=abc` -> `Literal(UTF8String.fromString("abc"), StringType)`.
2. It finds the common type for all literals of a partition column. For the example above, it is `StringType`.
3. It casts those literals to the desired type:
   - `Cast(Literal(0, IntegerType), StringType)` -> `UTF8String.fromString("0")`
   - `Cast(Literal(UTF8String.fromString("abc"), StringType), StringType)` -> `UTF8String.fromString("abc")`

In the example, we get the partition part value "0", which differs from the original one, "-0". Spark shouldn't modify partition part values of the string type because that can influence query results.

Closes #31423

### Why are the changes needed?

The changes fix the bug demonstrated by the following example:
1. There are partitioned parquet files (the file format doesn't matter):
   ```
   /private/var/folders/p3/dfs6mf655d7fnjrsjvldh0tc0000gn/T/spark-e09eae99-7ecf-4ab2-b99b-f63f8dea658d
   ├── _SUCCESS
   ├── part=-0
   │   └── part-00001-02144398-2896-4d21-9628-a8743d098cb4.c000.snappy.parquet
   └── part=AA
       └── part-00000-02144398-2896-4d21-9628-a8743d098cb4.c000.snappy.parquet
   ```
   placed into two partitions "AA" and **"-0"**.
2. When reading them without a specified schema:
   ```
   val df = spark.read.parquet(path)
   df.printSchema()
   root
    |-- id: integer (nullable = true)
    |-- part: string (nullable = true)
   ```
   the inferred type of the partition column `part` is the **string** type.
3. The expected values in the column `part` are "AA" and "-0", but we get:
   ```
   df.show(false)
   +---+----+
   |id |part|
   +---+----+
   |0  |AA  |
   |1  |0   |
   +---+----+
   ```
   So, Spark returns **"0"** instead of **"-0"**.

### Does this PR introduce _any_ user-facing change?

Yes, this PR can change query results.

### How was this patch tested?

By running the new test and existing test suites:
```
$ build/sbt "test:testOnly *FileIndexSuite"
$ build/sbt "test:testOnly *ParquetV1PartitionDiscoverySuite"
$ build/sbt "test:testOnly *ParquetV2PartitionDiscoverySuite"
```

Closes #31549 from MaxGekk/fix-partition-file-index-2.

Authored-by: Max Gekk <max.gekk@gmail.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
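For reference, here is a minimal, self-contained reproduction sketch of the scenario above. The output path and column names are illustrative, and a local SparkSession is assumed (spark-shell already provides one as `spark`); this is a sketch, not part of the PR itself.

```scala
import org.apache.spark.sql.SparkSession

// Local session for the sketch; not needed inside spark-shell, which already defines `spark`.
val spark = SparkSession.builder().master("local[*]").appName("partition-value-demo").getOrCreate()
import spark.implicits._

// Hypothetical output directory.
val path = "/tmp/partition-value-demo"

// Write two partitions whose directory names are the strings "AA" and "-0".
Seq((0, "AA"), (1, "-0")).toDF("id", "part")
  .write.mode("overwrite").partitionBy("part").parquet(path)

// Read the files back without a user-specified schema, so the partition column type is inferred.
// Before this fix, "-0" was parsed as an integer literal and then cast back to string,
// yielding "0"; after the fix the original string "-0" is preserved.
val df = spark.read.parquet(path)
df.printSchema()
df.show(false)
```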
# Spark SQL
This module provides support for executing relational queries expressed in either SQL or the DataFrame/Dataset API.
Spark SQL is broken up into four subprojects:
- Catalyst (sql/catalyst) - An implementation-agnostic framework for manipulating trees of relational operators and expressions.
- Execution (sql/core) - A query planner / execution engine for translating Catalyst's logical query plans into Spark RDDs. This component also includes a new public interface, SQLContext, that allows users to execute SQL or LINQ statements against existing RDDs and Parquet files (a brief usage sketch follows this list).
- Hive Support (sql/hive) - Includes extensions that allow users to write queries using a subset of HiveQL and access data from a Hive Metastore using Hive SerDes. There are also wrappers that allow users to run queries that include Hive UDFs, UDAFs, and UDTFs.
- HiveServer and CLI support (sql/hive-thriftserver) - Includes support for the SQL CLI (bin/spark-sql) and a HiveServer2 (for JDBC/ODBC) compatible server.
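As a brief illustration of the query interface described above, here is a minimal, hypothetical sketch of running SQL against data read from Parquet. It uses SparkSession, the newer entry point that supersedes SQLContext; the file path, view name, and column names are assumptions made for the example, not part of this module's documentation.

```scala
import org.apache.spark.sql.SparkSession

// Local session for illustration only; in spark-shell an equivalent `spark` already exists.
val spark = SparkSession.builder().master("local[*]").appName("sql-usage-sketch").getOrCreate()

// Hypothetical Parquet input with `name` and `age` columns.
val people = spark.read.parquet("/tmp/people.parquet")

// Register the DataFrame as a temporary view and query it with plain SQL.
people.createOrReplaceTempView("people")
spark.sql("SELECT name, age FROM people WHERE age >= 18").show()
```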
Running `./sql/create-docs.sh` generates SQL documentation for built-in functions under `sql/site`, and SQL configuration documentation that gets included as part of `configuration.md` in the main `docs` directory.
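For example, from the repository root (a minimal invocation sketch; it assumes whatever Python tooling the documentation build relies on is already installed):

```
$ ./sql/create-docs.sh
```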