spark-instrumented-optimizer/sql/core
Gengliang Wang fa09d91925 [SPARK-24919][BUILD] New linter rule for sparkContext.hadoopConfiguration
## What changes were proposed in this pull request?

In most cases, we should use `spark.sessionState.newHadoopConf()` instead of `sparkContext.hadoopConfiguration`, so that Hadoop configurations specified in the Spark session configuration take effect.
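
For context, a minimal sketch, not taken from the patch, of the behaviour this guideline is about: a Hadoop option set on the session is visible through `spark.sessionState.newHadoopConf()` but not through the shared `sparkContext.hadoopConfiguration`. The object name and configuration key below are made up for the illustration, and since `sessionState` is package-private to `org.apache.spark.sql`, the sketch declares itself in that package, as code inside sql/core already is.

```scala
package org.apache.spark.sql

// Sketch only: demonstrates why the per-session Hadoop configuration copy is
// preferred over the shared SparkContext one.
object HadoopConfPreferenceSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[*]")
      .appName("hadoop-conf-preference-sketch")
      .getOrCreate()

    // A Hadoop option supplied through the Spark session configuration
    // (hypothetical key, used only for this demonstration).
    spark.conf.set("hypothetical.hadoop.option", "enabled")

    // Preferred: a fresh Configuration that layers session-level settings on
    // top of the SparkContext's Hadoop configuration.
    val sessionHadoopConf = spark.sessionState.newHadoopConf()
    println(sessionHadoopConf.get("hypothetical.hadoop.option")) // enabled

    // Discouraged: the shared SparkContext configuration, which never sees
    // the session-level setting above.
    val sharedHadoopConf = spark.sparkContext.hadoopConfiguration
    println(sharedHadoopConf.get("hypothetical.hadoop.option")) // null

    spark.stop()
  }
}
```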

Add a rule matching `spark.sparkContext.hadoopConfiguration` or `spark.sqlContext.sparkContext.hadoopConfiguration` to prevent such usage.
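
As a rough illustration of what such a check amounts to, the sketch below flags source lines with a regular expression matching the two discouraged forms. The regex, object name, and helper are assumptions for the illustration, not the rule configuration added by this patch; in the real build the pattern would live in the linter configuration rather than in Scala code.

```scala
// Sketch only: a regex-based check in the spirit of the rule described above.
object HadoopConfLintSketch {
  // Assumed pattern covering both discouraged access paths.
  private val discouraged =
    """spark(\.sqlContext)?\.sparkContext\.hadoopConfiguration""".r

  /** Returns the (1-based line number, line) pairs the sketched check would flag. */
  def flaggedLines(source: String): Seq[(Int, String)] =
    source.split("\n").zipWithIndex.collect {
      case (line, idx) if discouraged.findFirstIn(line).isDefined => (idx + 1, line)
    }.toSeq

  def main(args: Array[String]): Unit = {
    val sample =
      """val a = spark.sparkContext.hadoopConfiguration
        |val b = spark.sqlContext.sparkContext.hadoopConfiguration
        |val c = spark.sessionState.newHadoopConf()""".stripMargin
    // Expect the first two lines to be flagged and the third to pass.
    flaggedLines(sample).foreach { case (n, line) => println(s"flagged line $n: $line") }
  }
}
```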
## How was this patch tested?

Unit test

Author: Gengliang Wang <gengliang.wang@databricks.com>

Closes #21873 from gengliangwang/linterRule.
2018-07-26 16:50:59 -07:00
| Name | Last commit | Date |
| --- | --- | --- |
| benchmarks | [SPARK-24549][SQL] Support Decimal type push down to the parquet data sources | 2018-07-16 15:44:51 +08:00 |
| src | [SPARK-24919][BUILD] New linter rule for sparkContext.hadoopConfiguration | 2018-07-26 16:50:59 -07:00 |
| pom.xml | [SPARK-24576][BUILD] Upgrade Apache ORC to 1.5.2 | 2018-07-17 23:52:17 -07:00 |