e71acd9a23
## What changes were proposed in this pull request?

When we read a Hive table and create RDDs in `TableReader`, it throws `java.lang.ClassCastException: org.apache.hadoop.mapreduce.lib.input.TextInputFormat cannot be cast to org.apache.hadoop.mapred.InputFormat` if the table's input format class comes from the `mapreduce` package. Now we use `NewHadoopRDD` to handle the new input format API and keep `HadoopRDD` for the old one. This PR is from #23506.

We can reproduce this issue by running the new test against the old code: when a table is created with an `org.apache.hadoop.mapreduce.....` input format, the exception is thrown at `org.apache.spark.rdd.HadoopRDD.getInputFormat(HadoopRDD.scala:190)`.

## How was this patch tested?

Added a new test.

Closes #23559 from Deegue/fix-hadoopRDD.

Lead-authored-by: heguozi <zyzzxycj@gmail.com>
Co-authored-by: Yizhong Zhang <zyzzxycj@163.com>
Signed-off-by: gatorsmile <gatorsmile@gmail.com>
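The core of the fix is dispatching on which Hadoop API the configured input format class implements: the old `org.apache.hadoop.mapred.InputFormat` interface maps to `HadoopRDD`, while the new `org.apache.hadoop.mapreduce.InputFormat` class maps to `NewHadoopRDD`. The sketch below illustrates that dispatch pattern with stand-in interfaces (the real Hadoop types and Spark's `TableReader` internals are not reproduced here; all names in this snippet are illustrative, not Spark's actual code):

```java
// Hypothetical stand-ins for the two Hadoop InputFormat hierarchies.
// In Hadoop itself these are org.apache.hadoop.mapred.InputFormat (old API)
// and org.apache.hadoop.mapreduce.InputFormat (new API).
interface OldApiInputFormat {}
interface NewApiInputFormat {}

class OldTextInputFormat implements OldApiInputFormat {}
class NewTextInputFormat implements NewApiInputFormat {}

public class RddDispatch {
    // Decide which RDD flavor to build, mirroring the PR's fix:
    // old-API formats go to HadoopRDD, new-API formats to NewHadoopRDD,
    // instead of unconditionally casting to the old interface (which
    // raised the ClassCastException described above).
    static String rddKindFor(Class<?> inputFormatClass) {
        if (OldApiInputFormat.class.isAssignableFrom(inputFormatClass)) {
            return "HadoopRDD";
        } else if (NewApiInputFormat.class.isAssignableFrom(inputFormatClass)) {
            return "NewHadoopRDD";
        }
        throw new IllegalArgumentException(
            "Unsupported input format: " + inputFormatClass.getName());
    }

    public static void main(String[] args) {
        System.out.println(rddKindFor(OldTextInputFormat.class)); // HadoopRDD
        System.out.println(rddKindFor(NewTextInputFormat.class)); // NewHadoopRDD
    }
}
```

The key point is that `Class.isAssignableFrom` is checked before any cast, so a table configured with a new-API format is routed to the RDD implementation that actually understands it.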