# Spark SQL
This module provides support for executing relational queries expressed in either SQL or the DataFrame/Dataset API.
Spark SQL is broken up into four subprojects:
- Catalyst (`sql/catalyst`) - An implementation-agnostic framework for manipulating trees of relational operators and expressions.
- Execution (`sql/core`) - A query planner / execution engine for translating Catalyst's logical query plans into Spark RDDs. This component also includes a new public interface, `SQLContext`, that allows users to execute SQL or LINQ statements against existing RDDs and Parquet files.
- Hive Support (`sql/hive`) - Includes extensions that allow users to write queries using a subset of HiveQL and access data from a Hive Metastore using Hive SerDes. There are also wrappers that allow users to run queries that include Hive UDFs, UDAFs, and UDTFs.
- HiveServer and CLI support (`sql/hive-thriftserver`) - Includes support for the SQL CLI (`bin/spark-sql`) and a HiveServer2 (for JDBC/ODBC) compatible server.
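As a quick illustration of the two query styles described above, here is a minimal sketch (not part of this module; the object name `QuickExample` and the local-mode session are assumptions for demonstration) showing the same query expressed in SQL and with the DataFrame/Dataset API. Both forms are planned by Catalyst and executed by `sql/core`:

```scala
import org.apache.spark.sql.SparkSession

// Hypothetical example object, for illustration only.
object QuickExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("sql-readme-example")
      .master("local[*]") // local mode, assumed for a self-contained demo
      .getOrCreate()
    import spark.implicits._

    // A small Dataset registered as a temporary view.
    val people = Seq(("alice", 34), ("bob", 45)).toDF("name", "age")
    people.createOrReplaceTempView("people")

    // The query expressed in SQL ...
    spark.sql("SELECT name FROM people WHERE age > 40").show()

    // ... and the equivalent DataFrame API form.
    people.filter($"age" > 40).select("name").show()

    spark.stop()
  }
}
```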
Running `./sql/create-docs.sh` generates SQL documentation for built-in functions under `sql/site`, and SQL configuration documentation that gets included as part of `configuration.md` in the main `docs` directory.