spark-instrumented-optimizer/sql/hive
xin Wu 0e79604aed [SPARK-11522][SQL] input_file_name() returns "" for external tables
When computing a partition for a non-Parquet relation, `HadoopRDD.compute` is used, but unlike `NewSqlHadoopRDD.compute` it does not set the thread-local variable `inputFileName` in `NewSqlHadoopRDD`. Yet when the `inputFileName` is read, the value of `NewSqlHadoopRDD.inputFileName` is expected, and at that point it is empty.
Setting `inputFileName` in `HadoopRDD.compute` resolves this issue.
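The fix boils down to populating the same thread-local that `input_file_name()` reads before a partition's records are consumed, and clearing it afterwards. Below is a minimal, self-contained sketch of that thread-local pattern; the names `InputFileNameHolder` and `computePartition` are illustrative assumptions, not Spark's exact identifiers.

```scala
// Sketch of the thread-local "current input file" pattern the fix relies on.
// InputFileNameHolder and computePartition are hypothetical names for illustration.
object InputFileNameHolder {
  private val inputFileName = new ThreadLocal[String] {
    override def initialValue(): String = "" // empty when no file is being read
  }

  def getInputFileName(): String = inputFileName.get()
  def setInputFileName(file: String): Unit = inputFileName.set(file)
  def unsetInputFileName(): Unit = inputFileName.remove()
}

object Example {
  // Roughly what the fix does inside HadoopRDD.compute: record the split's path
  // before records are read, and clear it once the iterator is exhausted.
  def computePartition(splitPath: String, records: Iterator[String]): Iterator[String] = {
    InputFileNameHolder.setInputFileName(splitPath)
    new Iterator[String] {
      def hasNext: Boolean = {
        val more = records.hasNext
        if (!more) InputFileNameHolder.unsetInputFileName() // back to "" afterwards
        more
      }
      def next(): String = records.next()
    }
  }

  def main(args: Array[String]): Unit = {
    val rows = computePartition("hdfs://ns/warehouse/t/part-00000", Iterator("a", "b"))
    rows.foreach(_ => println(InputFileNameHolder.getInputFileName())) // prints the split path
    println(InputFileNameHolder.getInputFileName()) // prints "" once the split is consumed
  }
}
```

Without the equivalent of `setInputFileName` in `HadoopRDD.compute`, the thread-local keeps its initial empty value, which is why `input_file_name()` returned "" for external (non-Parquet) tables.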

Author: xin Wu <xinwu@us.ibm.com>

Closes #9542 from xwu0226/SPARK-11522.
2015-11-16 08:10:48 -08:00
| Path | Latest commit | Date |
| --- | --- | --- |
| compatibility/src/test/scala/org/apache/spark/sql/hive/execution | [SPARK-9034][SQL] Reflect field names defined in GenericUDTF | 2015-11-02 23:52:36 -08:00 |
| src | [SPARK-11522][SQL] input_file_name() returns "" for external tables | 2015-11-16 08:10:48 -08:00 |
| pom.xml | [SPARK-10300] [BUILD] [TESTS] Add support for test tags in run-tests.py. | 2015-10-07 14:11:21 -07:00 |