[SPARK-19809][SQL][TEST] NullPointerException on zero-size ORC file

## What changes were proposed in this pull request?

Up to Spark 2.2.1, Spark raises a `NullPointerException` on zero-size ORC files. These zero-size ORC files are usually generated by third-party applications such as Flume.

```scala
scala> sql("create table empty_orc(a int) stored as orc location '/tmp/empty_orc'")

$ touch /tmp/empty_orc/zero.orc

scala> sql("select * from empty_orc").show
java.lang.RuntimeException: serious problem
  at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.generateSplitsInfo(OrcInputFormat.java:1021)
  ...
Caused by: java.lang.NullPointerException
  at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$BISplitStrategy.getSplits(OrcInputFormat.java:560)
```

After [SPARK-22279](https://github.com/apache/spark/pull/19499), Apache Spark with the default configuration no longer hits this bug. The Hive 1.2.1 library code path still has the problem, however, so we had better add test coverage for the current behavior in order to prevent future regressions. The two code paths differ by a single runtime setting, as sketched below.
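For reference, a minimal `spark-shell` sketch of toggling between the two paths (the config keys `spark.sql.orc.impl` and `spark.sql.hive.convertMetastoreOrc` are the string forms of `SQLConf.ORC_IMPLEMENTATION` and `HiveUtils.CONVERT_METASTORE_ORC` used by the test below):

```scala
// Select the ORC reader implementation: "native" (the default since 2.3.0)
// avoids the NPE, while "hive" still goes through the affected Hive 1.2.1
// split-generation code.
spark.conf.set("spark.sql.orc.impl", "native")

// Convert metastore ORC tables to Spark's native data source read path;
// also defaults to true since 2.3.0, and the new test pins it explicitly.
spark.conf.set("spark.sql.hive.convertMetastoreOrc", "true")
```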

## How was this patch tested?

Passes a newly added test case.
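One way to run just this test locally (a sketch; it assumes the suite is the Hive module's `SQLQuerySuite`, as the diff header below indicates, and uses ScalaTest's `-z` name filter):

```
build/sbt "hive/testOnly *SQLQuerySuite -- -z SPARK-19809"
```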

Author: Dongjoon Hyun <dongjoon@apache.org>

Closes #19948 from dongjoon-hyun/SPARK-19809-EMPTY-FILE.
Commit 17cdabb887 (parent 704af4bd67), authored by Dongjoon Hyun on 2017-12-13 07:42:24 +09:00 and committed by hyukjinkwon.

```diff
@@ -2172,4 +2172,21 @@ class SQLQuerySuite extends QueryTest with SQLTestUtils with TestHiveSingleton {
       }
     }
   }
+
+  test("SPARK-19809 NullPointerException on zero-size ORC file") {
+    Seq("native", "hive").foreach { orcImpl =>
+      withSQLConf(SQLConf.ORC_IMPLEMENTATION.key -> orcImpl) {
+        withTempPath { dir =>
+          withTable("spark_19809") {
+            sql(s"CREATE TABLE spark_19809(a int) STORED AS ORC LOCATION '$dir'")
+            Files.touch(new File(s"${dir.getCanonicalPath}", "zero.orc"))
+
+            withSQLConf(HiveUtils.CONVERT_METASTORE_ORC.key -> "true") { // default since 2.3.0
+              checkAnswer(sql("SELECT * FROM spark_19809"), Seq.empty)
+            }
+          }
+        }
+      }
+    }
+  }
 }
```
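The hunk does not show the imports the new lines rely on. A minimal sketch of what the suite presumably needs (the exact list in the commit may differ; `Files.touch` is assumed to be Guava's, which creates a zero-byte file much like the shell `touch` above):

```scala
import java.io.File

import com.google.common.io.Files // Files.touch(file) creates an empty file

import org.apache.spark.sql.hive.HiveUtils   // CONVERT_METASTORE_ORC
import org.apache.spark.sql.internal.SQLConf // ORC_IMPLEMENTATION
```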