[SPARK-19464][BUILD][HOTFIX] run-tests should use hadoop2.6

## What changes were proposed in this pull request?

After SPARK-19464, **SparkPullRequestBuilder** fails because the test scripts still default to the removed `hadoop2.3` profile.

**BEFORE**
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/72595/console
```
========================================================================
Building Spark
========================================================================
[error] Could not find hadoop2.3 in the list. Valid options  are ['hadoop2.6', 'hadoop2.7']
Attempting to post to Github...
 > Post successful.
```
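For context, the Hadoop profile tag is validated against a hard-coded map in `dev/run-tests.py`; below is a minimal sketch of that lookup (names follow the script but the body is abbreviated, so treat it as illustrative rather than the committed code). Once SPARK-19464 dropped the older profiles from the map, the stale `hadoop2.3` default fell through to the error branch:

```python
import os
import sys

def get_hadoop_profiles(hadoop_version):
    # Map a Hadoop version tag to the Maven/SBT profile flags used for the build.
    # After SPARK-19464 only 2.6 and 2.7 remain in the map.
    sbt_maven_hadoop_profiles = {
        "hadoop2.6": ["-Phadoop-2.6"],
        "hadoop2.7": ["-Phadoop-2.7"],
    }
    if hadoop_version in sbt_maven_hadoop_profiles:
        return sbt_maven_hadoop_profiles[hadoop_version]
    else:
        # An unknown tag (e.g. a stale "hadoop2.3" default) aborts the build.
        print("[error] Could not find", hadoop_version, "in the list. Valid options",
              " are", sbt_maven_hadoop_profiles.keys())
        sys.exit(int(os.environ.get("CURRENT_BLOCK", 255)))
```

Under Python 2, `dict.keys()` prints as a list, which matches the `['hadoop2.6', 'hadoop2.7']` seen in the log above.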

**AFTER**
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/72595/console
```
========================================================================
Building Spark
========================================================================
[info] Building Spark (w/Hive 1.2.1) using SBT with these arguments:  -Phadoop-2.6 -Pmesos -Pkinesis-asl -Pyarn -Phive-thriftserver -Phive test:package streaming-kafka-0-8-assembly/assembly streaming-flume-assembly/assembly streaming-kinesis-asl-assembly/assembly
Using /usr/java/jdk1.8.0_60 as default JAVA_HOME.
Note, this will be overridden by -java-home if it is set.
```

## How was this patch tested?

Pass the existing tests.

Author: Dongjoon Hyun <dongjoon@apache.org>

Closes #16858 from dongjoon-hyun/hotfix_run-tests.
Commit c618ccdbe9 (parent 1aeb9f6cba). Authored by Dongjoon Hyun; committed by Sean Owen on 2017-02-08 21:28:04 +00:00.
3 changed files with 2 additions and 11 deletions

**R/pkg/inst/tests/testthat/test_utils.R**

```diff
@@ -231,10 +231,7 @@ test_that("varargsToStrEnv", {
 test_that("basenameSansExtFromUrl", {
   x <- paste0("http://people.apache.org/~pwendell/spark-nightly/spark-branch-2.1-bin/spark-2.1.1-",
               "SNAPSHOT-2016_12_09_11_08-eb2d9bf-bin/spark-2.1.1-SNAPSHOT-bin-hadoop2.7.tgz")
-  y <- paste0("http://people.apache.org/~pwendell/spark-releases/spark-2.1.0-rc2-bin/spark-2.1.0-",
-              "bin-hadoop2.4-without-hive.tgz")
   expect_equal(basenameSansExtFromUrl(x), "spark-2.1.1-SNAPSHOT-bin-hadoop2.7")
-  expect_equal(basenameSansExtFromUrl(y), "spark-2.1.0-bin-hadoop2.4-without-hive")
   z <- "http://people.apache.org/~pwendell/spark-releases/spark-2.1.0--hive.tar.gz"
   expect_equal(basenameSansExtFromUrl(z), "spark-2.1.0--hive")
 })
```

**dev/run-tests-jenkins.py**

```diff
@@ -165,12 +165,6 @@ def main():
     if "test-maven" in ghprb_pull_title:
         os.environ["AMPLAB_JENKINS_BUILD_TOOL"] = "maven"
     # Switch the Hadoop profile based on the PR title:
-    if "test-hadoop2.2" in ghprb_pull_title:
-        os.environ["AMPLAB_JENKINS_BUILD_PROFILE"] = "hadoop2.2"
-    if "test-hadoop2.3" in ghprb_pull_title:
-        os.environ["AMPLAB_JENKINS_BUILD_PROFILE"] = "hadoop2.3"
-    if "test-hadoop2.4" in ghprb_pull_title:
-        os.environ["AMPLAB_JENKINS_BUILD_PROFILE"] = "hadoop2.4"
     if "test-hadoop2.6" in ghprb_pull_title:
         os.environ["AMPLAB_JENKINS_BUILD_PROFILE"] = "hadoop2.6"
     if "test-hadoop2.7" in ghprb_pull_title:
```

**dev/run-tests.py**

```diff
@@ -505,14 +505,14 @@ def main():
         # if we're on the Amplab Jenkins build servers setup variables
         # to reflect the environment settings
         build_tool = os.environ.get("AMPLAB_JENKINS_BUILD_TOOL", "sbt")
-        hadoop_version = os.environ.get("AMPLAB_JENKINS_BUILD_PROFILE", "hadoop2.3")
+        hadoop_version = os.environ.get("AMPLAB_JENKINS_BUILD_PROFILE", "hadoop2.6")
         test_env = "amplab_jenkins"
         # add path for Python3 in Jenkins if we're calling from a Jenkins machine
         os.environ["PATH"] = "/home/anaconda/envs/py3k/bin:" + os.environ.get("PATH")
     else:
         # else we're running locally and can use local settings
         build_tool = "sbt"
-        hadoop_version = os.environ.get("HADOOP_PROFILE", "hadoop2.3")
+        hadoop_version = os.environ.get("HADOOP_PROFILE", "hadoop2.6")
         test_env = "local"

     print("[info] Using build tool", build_tool, "with Hadoop profile", hadoop_version,
```