spark-instrumented-optimizer/sql
Enrico Minack 4e9c1b8ba0 [SPARK-34806][SQL] Add Observation helper for Dataset.observe
### What changes were proposed in this pull request?
This pull request introduces a helper class that simplifies usage of `Dataset.observe()` for batch datasets:

    // create a named Observation to collect the metrics
    val observation = Observation("name")
    // attach the observation and the metric expressions to compute
    val observed = ds.observe(observation, max($"id").as("max_id"))
    // any action materializes the dataset and fills the observation
    observed.count()
    // blocks until the metrics are available, then returns them as a Row
    val metrics = observation.get

### Why are the changes needed?
Currently, users must implement the `QueryExecutionListener` interface to retrieve the metrics, and they need some knowledge of threading and locking to hand the metrics back to the main thread (a sketch of that approach follows). With the helper class, metrics can be retrieved from batch dataset processing in three lines of code (the action on the observed dataset does not count toward that).
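For reference, a minimal sketch of the listener-based approach the helper replaces, assuming a `SparkSession` named `spark` and a batch Dataset `ds`; the listener class, observation name, and timeout are illustrative:

    import scala.concurrent.{Await, Promise}
    import scala.concurrent.duration._

    import org.apache.spark.sql.Row
    import org.apache.spark.sql.execution.QueryExecution
    import org.apache.spark.sql.functions.max
    import org.apache.spark.sql.util.QueryExecutionListener

    // Hypothetical listener: completes a Promise with the observed metrics Row,
    // so the main thread can block on the future instead of hand-rolling locks.
    class MetricsListener(observationName: String) extends QueryExecutionListener {
      private val promise = Promise[Row]()

      override def onSuccess(funcName: String, qe: QueryExecution, durationNs: Long): Unit =
        // observedMetrics maps each observation name to a Row of metric values
        qe.observedMetrics.get(observationName).foreach(promise.trySuccess)

      override def onFailure(funcName: String, qe: QueryExecution, exception: Exception): Unit = ()

      // Blocks the calling thread until the listener thread delivers the metrics.
      def metrics: Row = Await.result(promise.future, 30.seconds)
    }

    val listener = new MetricsListener("my_metrics")
    spark.listenerManager.register(listener)
    try {
      import spark.implicits._
      ds.observe("my_metrics", max($"id").as("max_id")).count()
      println(listener.metrics)
    } finally {
      spark.listenerManager.unregister(listener)
    }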

### Does this PR introduce _any_ user-facing change?
Yes, this adds one new class (`Observation`) and one new `Dataset` method.

### How was this patch tested?
Adds a unit test to `DataFrameSuite`, similar to `"get observable metrics by callback"` in `DataFrameCallbackSuite`.

Closes #33422 from EnricoMi/branch-observation.

Authored-by: Enrico Minack <github@enrico.minack.dev>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
2021-07-22 08:57:04 +00:00
catalyst [SPARK-36063][SQL] Optimize OneRowRelation subqueries 2021-07-22 10:48:32 +08:00
core [SPARK-34806][SQL] Add Observation helper for Dataset.observe 2021-07-22 08:57:04 +00:00
hive [SPARK-28266][SQL] convertToLogicalRelation should not interpret path property when reading Hive tables 2021-07-21 22:40:39 +08:00
hive-thriftserver [SPARK-36179][SQL] Support TimestampNTZType in SparkGetColumnsOperation 2021-07-20 09:48:58 +09:00
create-docs.sh [SPARK-34010][SQL][DOCS] Use python3 instead of python in SQL documentation build 2021-01-05 19:48:10 +09:00
gen-sql-api-docs.py [SPARK-34747][SQL][DOCS] Add virtual operators to the built-in function document 2021-03-19 10:19:26 +09:00
gen-sql-config-docs.py [SPARK-32194][PYTHON] Use proper exception classes instead of plain Exception 2021-05-26 11:54:40 +09:00
gen-sql-functions-docs.py [SPARK-31562][SQL] Update ExpressionDescription for substring, current_date, and current_timestamp 2020-04-26 11:46:52 -07:00
mkdocs.yml [SPARK-30731] Update deprecated Mkdocs option 2020-02-19 17:28:58 +09:00
README.md [SPARK-30510][SQL][DOCS] Publicly document Spark SQL configuration options 2020-02-09 19:20:47 +09:00

Spark SQL

This module provides support for executing relational queries expressed in either SQL or the DataFrame/Dataset API.

Spark SQL is broken up into four subprojects:

  • Catalyst (sql/catalyst) - An implementation-agnostic framework for manipulating trees of relational operators and expressions.
  • Execution (sql/core) - A query planner / execution engine for translating Catalyst's logical query plans into Spark RDDs. This component also includes a public interface, SQLContext, that allows users to execute SQL or language-integrated queries against existing RDDs and Parquet files (see the sketch after this list).
  • Hive Support (sql/hive) - Includes extensions that allow users to write queries using a subset of HiveQL and access data from a Hive Metastore using Hive SerDes. There are also wrappers that allow users to run queries that include Hive UDFs, UDAFs, and UDTFs.
  • HiveServer and CLI support (sql/hive-thriftserver) - Includes support for the SQL CLI (bin/spark-sql) and a HiveServer2 (for JDBC/ODBC) compatible server.
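
For example, a minimal sketch of running SQL against a Parquet file through the modern SparkSession entry point (which supersedes SQLContext); the file path and view name are hypothetical:

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().appName("sql-example").getOrCreate()

    // Register a Parquet file as a temporary view, then query it with SQL.
    spark.read.parquet("/data/people.parquet").createOrReplaceTempView("people")
    spark.sql("SELECT name, age FROM people WHERE age >= 18").show()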

Running ./sql/create-docs.sh generates SQL documentation for built-in functions under sql/site, and SQL configuration documentation that gets included as part of configuration.md in the main docs directory.