hyukjinkwon 55d6dad6f2 [SPARK-16847][SQL] Prevent potentially reading corrupt statistics on binary in Parquet vectorized reader
## What changes were proposed in this pull request?

This problem was found in [PARQUET-251](https://issues.apache.org/jira/browse/PARQUET-251), and because of it we previously disabled filter pushdown on binary columns in Spark. We re-enabled it after upgrading Parquet, but there is still a potential incompatibility with Parquet files written by older Spark versions, whose binary statistics may be corrupt.

Currently, this does not happen in the normal Parquet reader, where parquet-mr already handles corrupt statistics. However, Spark also implements its own vectorized reader, separate from Parquet's standard API, and that check is not applied there.
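For context, a minimal sketch of the check parquet-mr performs, assuming the `CorruptStatistics` helper from parquet-mr 1.8.x (shown for illustration; this is not code from this PR):

```java
import org.apache.parquet.CorruptStatistics;
import org.apache.parquet.schema.PrimitiveType.PrimitiveTypeName;

public final class StatsCheckSketch {
  // Returns true when statistics written by `createdBy` (the writer
  // version string stored in the file's FileMetaData) are untrustworthy
  // for the given column type; binary columns are the case affected by
  // PARQUET-251.
  static boolean ignoreBinaryStats(String createdBy) {
    return CorruptStatistics.shouldIgnoreStatistics(
        createdBy, PrimitiveTypeName.BINARY);
  }
}
```

Without access to the `created_by` string, the reader has no way to run this check, which is why the vectorized path was exposed.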

It is okay to just pass the `FileMetaData` to the reader's constructor; this is how it is handled in parquet-mr (see e3b95020f7). Doing so prevents loading corrupt statistics for each page in Parquet.

This PR replaces the usage of the deprecated constructor with the variant that accepts `FileMetaData`.
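As an illustration of the kind of change involved, a sketch assuming the two `ParquetFileReader` constructors from parquet-mr 1.8.x (the exact diff in Spark's reader-initialization code may differ):

```java
import java.io.IOException;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.parquet.column.ColumnDescriptor;
import org.apache.parquet.hadoop.ParquetFileReader;
import org.apache.parquet.hadoop.metadata.BlockMetaData;
import org.apache.parquet.hadoop.metadata.ParquetMetadata;

public final class ReaderInitSketch {
  static ParquetFileReader open(
      Configuration conf,
      Path file,
      ParquetMetadata footer,  // from ParquetFileReader.readFooter(...)
      List<BlockMetaData> blocks,
      List<ColumnDescriptor> columns) throws IOException {
    // Deprecated form: new ParquetFileReader(conf, file, blocks, columns).
    // It omits the writer's created_by string, so parquet-mr cannot tell
    // whether the file's statistics come from a version affected by
    // PARQUET-251 and may read corrupt binary statistics.
    //
    // Passing FileMetaData instead lets parquet-mr drop untrustworthy
    // statistics (via CorruptStatistics.shouldIgnoreStatistics).
    return new ParquetFileReader(
        conf, footer.getFileMetaData(), file, blocks, columns);
  }
}
```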

## How was this patch tested?

N/A

Author: hyukjinkwon <gurwls223@gmail.com>

Closes #14450 from HyukjinKwon/SPARK-16847.
2016-08-06 04:40:24 +01:00
main [SPARK-16847][SQL] Prevent potentially reading corrupt statistics on binary in Parquet vectorized reader 2016-08-06 04:40:24 +01:00
test [SPARK-16826][SQL] Switch to java.net.URI for parse_url() 2016-08-05 20:55:58 +01:00