spark-instrumented-optimizer/sql/core
Zhenhua Wang 791d2ba346 [SPARK-31261][SQL] Avoid npe when reading bad csv input with columnNameCorruptRecord specified
### What changes were proposed in this pull request?

SPARK-25387 avoids an NPE for bad CSV input, but when reading bad CSV input with `columnNameCorruptRecord` specified, `getCurrentInput` is still called and throws an NPE.
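For context, a minimal sketch of the scenario described above. The option name `columnNameOfCorruptRecord`, the schema, and the malformed input row are illustrative assumptions, not taken from this PR's test:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.types.{IntegerType, StringType, StructType}

object CorruptCsvRepro {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[*]")
      .appName("corrupt-csv-repro")
      .getOrCreate()
    import spark.implicits._

    // Schema with an extra string column that receives malformed rows in PERMISSIVE mode.
    val schema = new StructType()
      .add("a", IntegerType)
      .add("_corrupt_record", StringType)

    // "abc" cannot be parsed as an int, so the whole row is treated as malformed.
    val badInput = Seq("abc").toDS()

    val df = spark.read
      .option("mode", "PERMISSIVE")
      .option("columnNameOfCorruptRecord", "_corrupt_record")
      .schema(schema)
      .csv(badInput)

    // Materializing the result exercises the corrupt-record path; before the fix
    // this could hit the NPE in getCurrentInput described above.
    df.show(truncate = false)
  }
}
```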

### Why are the changes needed?

Bug fix.

### Does this PR introduce any user-facing change?

No.

### How was this patch tested?

Add a test.
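A hedged sketch of what such a regression test might look like, assuming it lives in a suite like CSVSuite that extends Spark's shared-session test traits (test name, input, and assertion are assumptions; the actual test added by this PR may differ):

```scala
test("SPARK-31261: bad csv input with columnNameOfCorruptRecord should not cause NPE") {
  import testImplicits._

  val schema = new StructType()
    .add("a", IntegerType)
    .add("_corrupt_record", StringType)

  val df = spark.read
    .option("columnNameOfCorruptRecord", "_corrupt_record")
    .schema(schema)
    .csv(Seq("abc").toDS())  // "abc" is not a valid int, so the row is malformed

  // Collecting should succeed instead of throwing a NullPointerException.
  assert(df.collect().length === 1)
}
```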

Closes #28029 from wzhfy/corrupt_column_npe.

Authored-by: Zhenhua Wang <wzh_zju@163.com>
Signed-off-by: HyukjinKwon <gurwls223@apache.org>
2020-03-29 13:30:14 +09:00
| Path | Last commit | Date |
|---|---|---|
| benchmarks | [SPARK-31119][SQL] Add interval value support for extract expression as extract source | 2020-03-18 12:29:39 +08:00 |
| src | [SPARK-31261][SQL] Avoid npe when reading bad csv input with columnNameCorruptRecord specified | 2020-03-29 13:30:14 +09:00 |
| v1.2/src | [SPARK-25556][SPARK-17636][SPARK-31026][SPARK-31060][SQL][TEST-HIVE1.2] Nested Column Predicate Pushdown for Parquet | 2020-03-27 14:28:57 +08:00 |
| v2.3/src | [SPARK-25556][SPARK-17636][SPARK-31026][SPARK-31060][SQL][TEST-HIVE1.2] Nested Column Predicate Pushdown for Parquet | 2020-03-27 14:28:57 +08:00 |
| pom.xml | [SPARK-30984][SS] Add UI test for Structured Streaming UI | 2020-03-04 13:55:34 +08:00 |