Wenchen Fan 651904a2ef [SPARK-36718][SQL] Only collapse projects if we don't duplicate expensive expressions
### What changes were proposed in this pull request?

The `CollapseProject` rule can combine adjacent projects and merge their project lists. The key idea behind this rule is that evaluating a project operator is relatively expensive, while expression evaluation is cheap, so the expression duplication caused by merging project lists is not a problem. This last assumption is, unfortunately, not always true:
- A user can invoke an expensive UDF, which now gets invoked more often than originally intended (a sketch of this case follows the list).
- A projection is very cheap in whole-stage code generation; the duplication caused by `CollapseProject` does more harm than good here.
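
A minimal sketch of the first case (the sleeping UDF, column names, and data are illustrative, not taken from the PR): two adjacent projects where the outer one consumes the UDF result twice. If the projects are collapsed, `expensive(a)` is substituted into both output columns and the UDF runs twice per row.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, udf}

val spark = SparkSession.builder().master("local[*]").appName("collapse-demo").getOrCreate()
import spark.implicits._

// Stand-in for a genuinely expensive user-defined function.
val expensive = udf((i: Int) => { Thread.sleep(10); i * 2 })

val df = Seq(1, 2, 3).toDF("a")

// The outer project references `x` twice. Collapsing the two projects
// inlines `expensive(a)` into both `y1` and `y2`, doubling the UDF calls.
val result = df
  .select(expensive(col("a")).as("x"))
  .select((col("x") + 1).as("y1"), (col("x") + 2).as("y2"))

result.explain(true)
```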

This PR addresses the problem by only collapsing projects when doing so does not duplicate expensive expressions. In practice this means an input reference is either consumed only once, or its evaluation does not incur significant overhead (currently attributes, nested column accesses, aliases and literals fall into this category).
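
A rough sketch of the "cheap to duplicate" check described above (the name `isCheap` and the exact cases are illustrative, not necessarily the code merged by this PR): an expression may be duplicated if it is an attribute or outer reference, is foldable (e.g. a literal), or is an alias / nested-field access built only from such expressions.

```scala
import org.apache.spark.sql.catalyst.expressions.{Alias, Attribute, Expression, ExtractValue, OuterReference}

// Illustrative check: expressions in these categories are cheap enough that
// duplicating them when collapsing projects does no measurable harm.
def isCheap(e: Expression): Boolean = e match {
  case _: Attribute | _: OuterReference => true
  case _ if e.foldable                  => true  // literals and other constant-foldable expressions
  case _: Alias | _: ExtractValue       => e.children.forall(isCheap)
  case _                                => false
}
```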

### Why are the changes needed?

We have seen multiple complaints about `CollapseProject` in the past because it may duplicate expensive expressions. The most recent one is https://github.com/apache/spark/pull/33903 .

### Does this PR introduce _any_ user-facing change?

no

### How was this patch tested?

a new unit test and existing tests

Closes #33958 from cloud-fan/collapse.

Authored-by: Wenchen Fan <wenchen@databricks.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
2021-09-17 21:34:21 +08:00
catalyst [SPARK-36718][SQL] Only collapse projects if we don't duplicate expensive expressions 2021-09-17 21:34:21 +08:00
core [SPARK-36778][SQL] Support ILIKE API on Scala(dataframe) 2021-09-17 14:37:10 +03:00
hive [SPARK-32709][SQL] Support writing Hive bucketed table (Parquet/ORC format with Hive hash) 2021-09-17 14:28:51 +08:00
hive-thriftserver [SPARK-36774][CORE][TESTS] Move SparkSubmitTestUtils to core module and use it in SparkSubmitSuite 2021-09-16 14:28:47 -07:00
create-docs.sh [SPARK-34010][SQL][DOCS] Use python3 instead of python in SQL documentation build 2021-01-05 19:48:10 +09:00
gen-sql-api-docs.py [SPARK-34747][SQL][DOCS] Add virtual operators to the built-in function document 2021-03-19 10:19:26 +09:00
gen-sql-config-docs.py [SPARK-36657][SQL] Update comment in 'gen-sql-config-docs.py' 2021-09-02 18:50:59 -07:00
gen-sql-functions-docs.py
mkdocs.yml [SPARK-30731] Update deprecated Mkdocs option 2020-02-19 17:28:58 +09:00
README.md

Spark SQL

This module provides support for executing relational queries expressed in either SQL or the DataFrame/Dataset API.
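
A minimal, hypothetical example of the two query styles (the data and view name are made up): the same query expressed through the DataFrame/Dataset API and as SQL against a temporary view.

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").appName("spark-sql-example").getOrCreate()
import spark.implicits._

// A tiny in-memory dataset; in practice this could come from Parquet, JSON, JDBC, etc.
val people = Seq(("Alice", 29), ("Bob", 17)).toDF("name", "age")

// DataFrame/Dataset API.
people.filter($"age" > 21).select("name").show()

// The same query as SQL against a temporary view.
people.createOrReplaceTempView("people")
spark.sql("SELECT name FROM people WHERE age > 21").show()
```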

Spark SQL is broken up into four subprojects:

  • Catalyst (sql/catalyst) - An implementation-agnostic framework for manipulating trees of relational operators and expressions (a minimal rule sketch follows this list).
  • Execution (sql/core) - A query planner / execution engine for translating Catalyst's logical query plans into Spark RDDs. This component also includes a new public interface, SQLContext, that allows users to execute SQL or LINQ statements against existing RDDs and Parquet files.
  • Hive Support (sql/hive) - Includes extensions that allow users to write queries using a subset of HiveQL and access data from a Hive Metastore using Hive SerDes. There are also wrappers that allow users to run queries that include Hive UDFs, UDAFs, and UDTFs.
  • HiveServer and CLI support (sql/hive-thriftserver) - Includes support for the SQL CLI (bin/spark-sql) and a HiveServer2 (for JDBC/ODBC) compatible server.
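
To make the Catalyst item above concrete, here is a minimal sketch of an optimizer rule in the spirit of Catalyst's existing CombineFilters (simplified; the real rule also checks determinism before merging): it rewrites two adjacent Filter operators into one by AND-ing their conditions.

```scala
import org.apache.spark.sql.catalyst.expressions.And
import org.apache.spark.sql.catalyst.plans.logical.{Filter, LogicalPlan}
import org.apache.spark.sql.catalyst.rules.Rule

// Catalyst rules are partial functions over the operator tree; `transform`
// applies the rewrite wherever the pattern matches.
object MergeAdjacentFilters extends Rule[LogicalPlan] {
  override def apply(plan: LogicalPlan): LogicalPlan = plan.transform {
    case Filter(outerCond, Filter(innerCond, child)) =>
      Filter(And(innerCond, outerCond), child)
  }
}
```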

Running ./sql/create-docs.sh generates SQL documentation for built-in functions under sql/site, and SQL configuration documentation that gets included as part of configuration.md in the main docs directory.