d4c6ec6ba7
### What changes were proposed in this pull request?

In this PR, I propose to fix the bug reported in SPARK-30530. The CSV datasource returns invalid records when `parsedSchema` is shorter than the number of tokens returned by the UniVocity parser. In that case, `UnivocityParser.convert()` always throws `BadRecordException`, regardless of the result of applying filters. For the described case, I propose to save the exception in `badRecordException` and continue value conversion according to `parsedSchema`. If the bad record does not pass the filters, `convert()` returns an empty Seq; otherwise it throws `badRecordException`.

### Why are the changes needed?

It fixes the bug reported in the JIRA ticket.

### Does this PR introduce any user-facing change?

No.

### How was this patch tested?

Added a new test from the JIRA ticket.

Closes #27239 from MaxGekk/spark-30530-csv-filter-is-null.

Authored-by: Maxim Gekk <max.gekk@gmail.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
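The control flow described above can be sketched in Python. This is a simplified, hypothetical model of the deferred-error logic in `UnivocityParser.convert()`, not the actual Spark implementation; the function and field names are illustrative only:

```python
class BadRecordException(Exception):
    """Models Spark's BadRecordException for malformed CSV records."""


def convert(tokens, parsed_schema, filters):
    """Convert CSV tokens to a row, deferring the bad-record error
    until after pushed-down filters are applied (models the fix).

    tokens: list of string values from the CSV parser.
    parsed_schema: list of column names expected by the reader.
    filters: predicates over the converted row (pushed-down filters).
    """
    bad_record_exception = None
    if len(tokens) != len(parsed_schema):
        # Before the fix: the exception was thrown immediately, even
        # when the filters would have skipped this row anyway.
        # After the fix: remember the error and keep converting
        # values according to parsed_schema.
        bad_record_exception = BadRecordException(
            f"Expected {len(parsed_schema)} tokens, got {len(tokens)}")
        tokens = tokens[:len(parsed_schema)]
        tokens += [None] * (len(parsed_schema) - len(tokens))

    row = {field: tok for field, tok in zip(parsed_schema, tokens)}

    if not all(f(row) for f in filters):
        # The bad record is filtered out: return an empty Seq,
        # so no spurious invalid record reaches the result.
        return []
    if bad_record_exception is not None:
        # The record passes the filters: report it as before.
        raise bad_record_exception
    return [row]
```

With a filter that rejects the malformed row, the row is silently dropped; with a filter that accepts it, the saved exception is raised as before.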