Nong Li a180286b79 [SPARK-14210] [SQL] Add a metric for time spent in scans.
## What changes were proposed in this pull request?

This adds a metric to Parquet scans that measures the time spent in just the scan phase. This is
only possible when the scan returns ColumnarBatches; otherwise the timing overhead is too high.

Combined with the pipeline metric, this lets us easily see what percentage of the time was
spent in the scan.
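
To make the trade-off concrete, here is a minimal, self-contained sketch of the idea: charge scan time to a metric once per ColumnarBatch instead of once per row, so the timer fires only once per several thousand rows. All names here (`ColumnarBatch`, `ScanTimeMetric`, `TimedBatchIterator`) are hypothetical stand-ins for illustration, not Spark's actual SQLMetric plumbing.

```scala
// Hypothetical sketch, not Spark's actual code: why batch-level timing is
// cheap. One System.nanoTime() pair per batch (thousands of rows) keeps the
// measurement overhead negligible; per-row timing would dominate the scan.

import java.util.concurrent.atomic.AtomicLong

// Stand-in for Spark's ColumnarBatch: a chunk of rows produced in one call.
final class ColumnarBatch(val numRows: Int)

// Stand-in for a SQLMetric accumulator collecting elapsed nanoseconds.
final class ScanTimeMetric {
  private val totalNanos = new AtomicLong(0L)
  def add(nanos: Long): Unit = totalNanos.addAndGet(nanos)
  def totalMs: Long = totalNanos.get() / 1000000L
}

// Wraps a batch iterator and charges the time spent producing each batch
// (I/O, decoding) to the metric.
final class TimedBatchIterator(
    underlying: Iterator[ColumnarBatch],
    metric: ScanTimeMetric) extends Iterator[ColumnarBatch] {

  override def hasNext: Boolean = {
    val start = System.nanoTime()
    val result = underlying.hasNext // may trigger reading the next batch
    metric.add(System.nanoTime() - start)
    result
  }

  override def next(): ColumnarBatch = {
    val start = System.nanoTime()
    val batch = underlying.next()   // materializes the next batch
    metric.add(System.nanoTime() - start)
    batch
  }
}

object ScanTimeDemo {
  def main(args: Array[String]): Unit = {
    val metric  = new ScanTimeMetric
    val batches = Iterator.fill(4)(new ColumnarBatch(numRows = 4096))
    val timed   = new TimedBatchIterator(batches, metric)
    val rows    = timed.map(_.numRows).sum
    println(s"scanned $rows rows, scan time = ${metric.totalMs} ms")
  }
}
```

Dividing the metric's total by the pipeline's wall-clock time then gives the fraction of execution spent in the scan, which is the comparison this change enables.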

Author: Nong Li <nong@databricks.com>

Closes #12007 from nongli/spark-14210.
2016-03-28 21:37:46 -07:00
Contents of `spark-instrumented-optimizer/sql/core/src/main`:

| Directory | Latest commit | Date |
|---|---|---|
| java/org/apache/spark/sql | [SPARK-14052] [SQL] build a BytesToBytesMap directly in HashedRelation | 2016-03-28 13:07:32 -07:00 |
| resources | [SPARK-12902] [SQL] visualization for generated operators | 2016-01-25 12:44:20 -08:00 |
| scala/org/apache/spark/sql | [SPARK-14210] [SQL] Add a metric for time spent in scans. | 2016-03-28 21:37:46 -07:00 |