spark-instrumented-optimizer/sql/core
Davies Liu 5aca6ad00c [SPARK-11767] [SQL] limit the size of cached batch
Currently the size of a cached batch is only controlled by `batchSize` (default value is 10000), which does not work well with the size of the serialized columns (for example, complex types). The memory used to build the batch is not accounted for, so it's easy to OOM (especially after unified memory management).

This PR introduces a hard limit of 4 MB on the total size of the columns (enough for up to 50 uncompressed primitive columns).
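A minimal sketch of the idea, not the actual `ColumnarIterator`/`ColumnBuilder` internals: cut a batch when either the existing row-count limit or an assumed byte-size limit is reached. Names such as `MaxBatchBytes` and the 8-bytes-per-value estimate are illustrative assumptions.

```scala
object BatchLimitSketch {
  val DefaultBatchSize = 10000          // existing row-count limit (`batchSize`)
  val MaxBatchBytes = 4 * 1024 * 1024   // assumed hard limit on total column bytes

  // Consume rows until either the row count or the accumulated byte size hits its limit.
  def buildBatch(rows: Iterator[Array[Long]]): Seq[Array[Long]] = {
    val batch = scala.collection.mutable.ArrayBuffer.empty[Array[Long]]
    var totalBytes = 0L
    while (rows.hasNext && batch.size < DefaultBatchSize && totalBytes < MaxBatchBytes) {
      val row = rows.next()
      batch += row
      totalBytes += row.length * 8L  // rough estimate: 8 bytes per primitive value
    }
    batch
  }
}
```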

This also changes the way the buffer grows: double it each time, then trim it once the batch is finished.
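A minimal sketch of the grow-by-doubling-then-trim strategy using a plain byte array; the real column builders work on `ByteBuffer`s, so treat the class and method names here as assumptions.

```scala
import java.util.Arrays

final class GrowableBuffer(initialCapacity: Int = 1024) {
  private var buf = new Array[Byte](initialCapacity)
  private var pos = 0

  def append(bytes: Array[Byte]): Unit = {
    if (pos + bytes.length > buf.length) {
      // Double the capacity (rather than growing by a fixed chunk) until the data fits.
      var newCapacity = buf.length
      while (pos + bytes.length > newCapacity) newCapacity *= 2
      buf = Arrays.copyOf(buf, newCapacity)
    }
    System.arraycopy(bytes, 0, buf, pos, bytes.length)
    pos += bytes.length
  }

  // Trim the over-allocated capacity once the batch is finished.
  def build(): Array[Byte] = Arrays.copyOf(buf, pos)
}
```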

cc liancheng

Author: Davies Liu <davies@databricks.com>

Closes #9760 from davies/cache_limit.
2015-11-17 12:50:01 -08:00
..
src [SPARK-11767] [SQL] limit the size of cached batch 2015-11-17 12:50:01 -08:00
pom.xml [SPARK-6152] Use shaded ASM5 to support closure cleaning of Java 8 compiled classes 2015-11-11 11:16:39 -08:00