spark-instrumented-optimizer/sql/core/src/main
Cheng Lian 64b1d00e1a [SPARK-11007] [SQL] Adds dictionary-aware Parquet decimal converters
For Parquet decimal columns that are encoded with plain-dictionary encoding, we can make the upper-level converter aware of the dictionary and pre-instantiate all the decimals it contains, avoiding duplicated instantiation of the same value on every row.

Note that plain-dictionary encoding isn't available for `FIXED_LEN_BYTE_ARRAY` under Parquet writer version `PARQUET_1_0`, so currently only decimals written as `INT32` and `INT64` benefit from this optimization.
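A minimal sketch of the idea for the `INT32` case, built on Parquet's `PrimitiveConverter` dictionary hooks (`hasDictionarySupport`, `setDictionary`, `addValueFromDictionary`); the class name, `scale` parameter, and `update` callback are hypothetical stand-ins for Spark's actual converter plumbing, not the real implementation in this patch:

```scala
import java.math.BigDecimal

import org.apache.parquet.column.Dictionary
import org.apache.parquet.io.api.PrimitiveConverter

// Hypothetical dictionary-aware converter for decimals stored as INT32.
// Only the Parquet hooks are the library's API; everything else is a sketch.
class DictAwareIntDecimalConverter(scale: Int, update: BigDecimal => Unit)
  extends PrimitiveConverter {

  // Decimals pre-instantiated from the dictionary, indexed by dictionary id.
  private var expandedDictionary: Array[BigDecimal] = _

  override def hasDictionarySupport: Boolean = true

  // Decode every distinct unscaled value exactly once, up front.
  override def setDictionary(dictionary: Dictionary): Unit = {
    expandedDictionary = Array.tabulate(dictionary.getMaxId + 1) { id =>
      BigDecimal.valueOf(dictionary.decodeToInt(id).toLong, scale)
    }
  }

  // Dictionary-encoded pages hand over an id: reuse the pre-built decimal.
  override def addValueFromDictionary(dictionaryId: Int): Unit =
    update(expandedDictionary(dictionaryId))

  // Plain-encoded pages still instantiate a decimal per value.
  override def addInt(value: Int): Unit =
    update(BigDecimal.valueOf(value.toLong, scale))
}
```

An `INT64`-backed variant would look the same with `decodeToLong`/`addLong`; `FIXED_LEN_BYTE_ARRAY` is left out for the writer-version reason noted above.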

Author: Cheng Lian <lian@databricks.com>

Closes #9040 from liancheng/spark-11007.decimal-converter-dict-support.
2015-10-12 10:17:19 -07:00
java/org/apache/spark/sql [SPARK-10474] [SQL] Aggregation fails to allocate memory for pointer array (round 2) 2015-09-23 19:34:31 -07:00
resources [SPARK-9763][SQL] Minimize exposure of internal SQL classes. 2015-08-10 13:49:23 -07:00
scala/org/apache/spark/sql [SPARK-11007] [SQL] Adds dictionary-aware Parquet decimal converters 2015-10-12 10:17:19 -07:00