Max Gekk 7d496fb361 [SPARK-36854][SQL] Handle ANSI intervals by the off-heap column vector
### What changes were proposed in this pull request?
Modify `OffHeapColumnVector.reserveInternal` to handle the ANSI interval types `DayTimeIntervalType` and `YearMonthIntervalType`.
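The actual change is in the Java `reserveInternal` method of `OffHeapColumnVector`; the Scala sketch below only illustrates the idea (the helper name `bytesToReserve` is hypothetical, not Spark API): both ANSI interval types reuse the physical layout of existing primitive types, so reserving capacity can follow the same int/long paths.
```scala
import org.apache.spark.sql.types._

// Hypothetical helper, for illustration only.
// YearMonthIntervalType is physically a 4-byte int (number of months);
// DayTimeIntervalType is physically an 8-byte long (microseconds).
def bytesToReserve(dt: DataType, capacity: Int): Long = dt match {
  case _: YearMonthIntervalType => capacity.toLong * 4L // same path as int columns
  case _: DayTimeIntervalType   => capacity.toLong * 8L // same path as long columns
  case IntegerType | DateType   => capacity.toLong * 4L // existing int path
  case LongType | TimestampType => capacity.toLong * 8L // existing long path
  case other => throw new RuntimeException(s"Unhandled $other")
}
```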

### Why are the changes needed?
The changes fix the issue demonstrated by the example below:
```scala
scala> spark.conf.set("spark.sql.columnVector.offheap.enabled", true)
scala> spark.read.parquet("/Users/maximgekk/tmp/parquet_offheap").show()
21/09/25 22:09:03 ERROR Executor: Exception in task 0.0 in stage 3.0 (TID 3)
java.lang.RuntimeException: Unhandled YearMonthIntervalType(0,1)
	at org.apache.spark.sql.execution.vectorized.OffHeapColumnVector.reserveInternal(OffHeapColumnVector.java:562)
```
SPARK-36854 describes how the Parquet files in `/Users/maximgekk/tmp/parquet_offheap` were prepared.
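The JIRA ticket has the exact preparation steps; one plausible way to produce such a file in `spark-shell` (the path and column name here are made up) is to write `java.time.Period` values, which Spark 3.2 encodes as `YearMonthIntervalType`:
```scala
scala> Seq(java.time.Period.ofMonths(13)).toDF("ym").write.parquet("/tmp/parquet_offheap")
```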

### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?
By running the modified test suite:
```
$ build/sbt "sql/test:testOnly *ParquetIOSuite"
```

Closes #34106 from MaxGekk/ansi-interval-OffHeapColumnVector.

Authored-by: Max Gekk <max.gekk@gmail.com>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
2021-09-26 12:44:19 +09:00
| Path | Last commit | Date |
| --- | --- | --- |
| catalyst | [SPARK-36721][SQL] Simplify boolean equalities if one side is literal | 2021-09-24 10:53:24 -07:00 |
| core | [SPARK-36854][SQL] Handle ANSI intervals by the off-heap column vector | 2021-09-26 12:44:19 +09:00 |
| hive | [SPARK-32709][SQL] Support writing Hive bucketed table (Parquet/ORC format with Hive hash) | 2021-09-17 14:28:51 +08:00 |
| hive-thriftserver | [SPARK-36774][CORE][TESTS] Move SparkSubmitTestUtils to core module and use it in SparkSubmitSuite | 2021-09-16 14:28:47 -07:00 |
| create-docs.sh | [SPARK-34010][SQL][DODCS] Use python3 instead of python in SQL documentation build | 2021-01-05 19:48:10 +09:00 |
| gen-sql-api-docs.py | [SPARK-34747][SQL][DOCS] Add virtual operators to the built-in function document | 2021-03-19 10:19:26 +09:00 |
| gen-sql-config-docs.py | [SPARK-36657][SQL] Update comment in 'gen-sql-config-docs.py' | 2021-09-02 18:50:59 -07:00 |
| gen-sql-functions-docs.py | | |
| mkdocs.yml | | |
| README.md | | |

# Spark SQL

This module provides support for executing relational queries expressed in either SQL or the DataFrame/Dataset API.
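
For instance, the same query can be expressed either way; a minimal self-contained sketch (the data and table name are made up):
```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("sql-readme-example").master("local[*]").getOrCreate()
import spark.implicits._

val people = Seq(("Alice", 34), ("Bob", 19)).toDF("name", "age")
people.createOrReplaceTempView("people")

// The SQL form ...
spark.sql("SELECT name FROM people WHERE age > 21").show()
// ... and the equivalent DataFrame/Dataset API form.
people.filter($"age" > 21).select("name").show()
```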

Spark SQL is broken up into four subprojects:

  • Catalyst (`sql/catalyst`) - An implementation-agnostic framework for manipulating trees of relational operators and expressions (see the rule sketch after this list).
  • Execution (`sql/core`) - A query planner / execution engine for translating Catalyst's logical query plans into Spark RDDs. This component also includes a new public interface, `SQLContext`, that allows users to execute SQL or LINQ statements against existing RDDs and Parquet files.
  • Hive Support (`sql/hive`) - Includes extensions that allow users to write queries using a subset of HiveQL and access data from a Hive Metastore using Hive SerDes. There are also wrappers that allow users to run queries that include Hive UDFs, UDAFs, and UDTFs.
  • HiveServer and CLI support (`sql/hive-thriftserver`) - Includes support for the SQL CLI (`bin/spark-sql`) and a HiveServer2-compatible server (for JDBC/ODBC).
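
To make the Catalyst bullet concrete, here is a minimal sketch of a custom rewrite rule in the spirit of the SPARK-36721 change listed above; the rule name is made up, and a production rule would also have to handle null semantics, which this sketch ignores:
```scala
import org.apache.spark.sql.catalyst.expressions.{EqualTo, Literal}
import org.apache.spark.sql.catalyst.plans.logical.LogicalPlan
import org.apache.spark.sql.catalyst.rules.Rule

// Illustrative rule: rewrite `expr = true` to `expr` wherever it appears in a plan.
object SimplifyTrueEquality extends Rule[LogicalPlan] {
  override def apply(plan: LogicalPlan): LogicalPlan = plan.transformAllExpressions {
    case EqualTo(left, Literal(true, _))  => left
    case EqualTo(Literal(true, _), right) => right
  }
}
```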

Running `./sql/create-docs.sh` generates SQL documentation for built-in functions under `sql/site`, and SQL configuration documentation that gets included as part of `configuration.md` in the main docs directory.
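
For example, from the repository root (assuming the script's prerequisites, such as `mkdocs`, are installed):
```
$ ./sql/create-docs.sh
```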