spark-instrumented-optimizer/sql/hive
Sean Owen 8171b156eb [SPARK-26771][CORE][GRAPHX] Make .unpersist(), .destroy() consistently non-blocking by default
## What changes were proposed in this pull request?

Make .unpersist() and .destroy() non-blocking by default, and adjust callers to request blocking only where it is important.

This also adds an optional blocking argument to PySpark's RDD.unpersist(), which previously had none.
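
As a minimal sketch (not taken from this PR's diff) of how the change is exercised from PySpark: after this change, unpersist() returns immediately unless blocking=True is passed. The local master and app name below are placeholders.

```python
from pyspark import SparkContext

sc = SparkContext("local[2]", "unpersist-example")  # hypothetical master/app name

rdd = sc.parallelize(range(100)).cache()
rdd.count()                   # materialize the cached partitions

rdd.unpersist()               # non-blocking by default after this change
rdd.unpersist(blocking=True)  # opt back into waiting for the blocks to be removed

sc.stop()
```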

## How was this patch tested?

Existing tests.

Closes #23685 from srowen/SPARK-26771.

Authored-by: Sean Owen <sean.owen@databricks.com>
Signed-off-by: Sean Owen <sean.owen@databricks.com>
2019-02-01 18:29:55 -06:00
| Name | Latest commit | Date |
|---|---|---|
| benchmarks | [SPARK-26584][SQL] Remove spark.sql.orc.copyBatchToSpark internal conf | 2019-01-10 08:42:23 -08:00 |
| compatibility/src/test/scala/org/apache/spark/sql/hive/execution | Revert [SPARK-19355][SPARK-25352] | 2018-09-20 20:18:31 +08:00 |
| src | [SPARK-26771][CORE][GRAPHX] Make .unpersist(), .destroy() consistently non-blocking by default | 2019-02-01 18:29:55 -06:00 |
| pom.xml | [SPARK-26306][TEST][BUILD] More memory to de-flake SorterSuite | 2019-01-04 15:35:23 -06:00 |