[SPARK-14738][BUILD] Separate docker integration tests from main build

## What changes were proposed in this pull request?

- Create a Maven profile for executing the Docker integration tests with Maven
- Remove the Docker integration tests from the main sbt build
- Update the documentation on how to run the Docker integration tests from sbt

## How was this patch tested?

Manual test of the Docker integration tests as in:

    mvn -Pdocker-integration-tests -pl :spark-docker-integration-tests_2.11 compile test

## Other comments

Note that the DB2 Docker tests are still disabled, as there is a kernel version issue on the AMPLab Jenkins slaves and we would need to get them to the right level before enabling those tests. They do run OK locally with the updates from PR #12348.

Author: Luciano Resende <lresende@apache.org>

Closes #12508 from lresende/docker.
Luciano Resende 2016-05-06 12:25:45 +01:00 committed by Sean Owen
parent 157a49aa41
commit a03c5e68ab
6 changed files with 22 additions and 12 deletions

docs/building-spark.md

@@ -190,6 +190,18 @@ or
 Java 8 tests are automatically enabled when a Java 8 JDK is detected.
 If you have JDK 8 installed but it is not the system default, you can set JAVA_HOME to point to JDK 8 before running the tests.
 
+# Running Docker based Integration Test Suites
+
+Running only docker based integration tests and nothing else.
+
+    mvn install -DskipTests
+    mvn -Pdocker-integration-tests -pl :spark-docker-integration-tests_2.11
+
+or
+
+    sbt docker-integration-tests/test
+
+
 # Packaging without Hadoop Dependencies for YARN
 
 The assembly directory produced by `mvn package` will, by default, include all of Spark's dependencies, including Hadoop and some of its ecosystem projects. On YARN deployments, this causes multiple versions of these to appear on executor classpaths: the version packaged in the Spark assembly and the version on each node, included with `yarn.application.classpath`. The `hadoop-provided` profile builds the assembly without including Hadoop-ecosystem projects, like ZooKeeper and Hadoop itself.
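As a usage note on the `hadoop-provided` profile mentioned in that last paragraph, a minimal packaging invocation might look like the following (standard Maven flags; the exact set of profiles for a real deployment will vary):

    mvn -Phadoop-provided -DskipTests package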

external/docker-integration-tests/src/test/scala/org/apache/spark/sql/jdbc/MySQLIntegrationSuite.scala

@@ -21,12 +21,9 @@ import java.math.BigDecimal
 import java.sql.{Connection, Date, Timestamp}
 import java.util.Properties
 
-import org.scalatest.Ignore
-
 import org.apache.spark.tags.DockerTest
 
 @DockerTest
-@Ignore
 class MySQLIntegrationSuite extends DockerJDBCIntegrationSuite {
   override val db = new DatabaseOnDocker {
     override val imageName = "mysql:5.7.9"
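For context, each of these suites describes its container through the `DatabaseOnDocker` abstraction seen above. Below is a minimal sketch of that shape, modeled on the MySQL suite: only `imageName` and `env` appear in the diffs here, so the `jdbcPort` and `getJdbcUrl` members and all concrete values are assumptions, not the actual Spark definitions.

    // Hypothetical sketch; the real DatabaseOnDocker trait in Spark may differ.
    object DockerDbSketch {
      abstract class DatabaseOnDocker {
        val imageName: String                         // Docker image to launch
        val env: Map[String, String]                  // container environment variables
        val jdbcPort: Int                             // port the database listens on (assumed member)
        def getJdbcUrl(ip: String, port: Int): String // JDBC URL for the mapped host/port (assumed member)
      }

      // Example configuration modeled on the MySQL suite above (values assumed)
      val mysql = new DatabaseOnDocker {
        override val imageName = "mysql:5.7.9"
        override val env = Map("MYSQL_ROOT_PASSWORD" -> "rootpass")
        override val jdbcPort = 3306
        override def getJdbcUrl(ip: String, port: Int): String =
          s"jdbc:mysql://$ip:$port/mysql?user=root&password=rootpass"
      }
    }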

external/docker-integration-tests/src/test/scala/org/apache/spark/sql/jdbc/OracleIntegrationSuite.scala

@@ -20,8 +20,6 @@ package org.apache.spark.sql.jdbc
 import java.sql.Connection
 import java.util.Properties
 
-import org.scalatest.Ignore
-
 import org.apache.spark.sql.test.SharedSQLContext
 import org.apache.spark.tags.DockerTest
@@ -46,12 +44,11 @@ import org.apache.spark.tags.DockerTest
  * repository.
  */
 @DockerTest
-@Ignore
 class OracleIntegrationSuite extends DockerJDBCIntegrationSuite with SharedSQLContext {
   import testImplicits._
 
   override val db = new DatabaseOnDocker {
-    override val imageName = "wnameless/oracle-xe-11g:latest"
+    override val imageName = "wnameless/oracle-xe-11g:14.04.4"
     override val env = Map(
       "ORACLE_ROOT_PASSWORD" -> "oracle"
     )

external/docker-integration-tests/src/test/scala/org/apache/spark/sql/jdbc/PostgresIntegrationSuite.scala

@@ -20,15 +20,12 @@ package org.apache.spark.sql.jdbc
 import java.sql.Connection
 import java.util.Properties
 
-import org.scalatest.Ignore
-
 import org.apache.spark.sql.Column
 import org.apache.spark.sql.catalyst.expressions.Literal
 import org.apache.spark.sql.types.{ArrayType, DecimalType}
 import org.apache.spark.tags.DockerTest
 
 @DockerTest
-@Ignore
 class PostgresIntegrationSuite extends DockerJDBCIntegrationSuite {
   override val db = new DatabaseOnDocker {
     override val imageName = "postgres:9.4.5"

pom.xml

@@ -101,7 +101,6 @@
     <module>sql/core</module>
     <module>sql/hive</module>
     <module>sql/hivecontext-compatibility</module>
-    <module>external/docker-integration-tests</module>
     <module>assembly</module>
     <module>external/flume</module>
     <module>external/flume-sink</module>
@@ -2469,6 +2468,13 @@
       </build>
     </profile>
+    <profile>
+      <id>docker-integration-tests</id>
+      <modules>
+        <module>external/docker-integration-tests</module>
+      </modules>
+    </profile>
+
     <!-- A series of build profiles where customizations for particular Hadoop releases can be made -->
     <!-- Hadoop-a.b.c dependencies can be found at
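With the module moved behind this profile, the Docker suites are skipped by default and only build when the profile is activated explicitly; the same invocation shown in the "How was this patch tested" section above applies:

    mvn install -DskipTests
    mvn -Pdocker-integration-tests -pl :spark-docker-integration-tests_2.11 compile test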

project/SparkBuild.scala

@@ -382,7 +382,8 @@ object SparkBuild extends PomBuild {
     enable(Java8TestSettings.settings)(java8Tests)
 
-    enable(DockerIntegrationTests.settings)(dockerIntegrationTests)
+    // SPARK-14738 - Remove docker tests from main Spark build
+    // enable(DockerIntegrationTests.settings)(dockerIntegrationTests)
 
   /**
    * Adds the ability to run the spark shell directly from SBT without building an assembly
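Note that even with these settings commented out of the default sbt build, the documentation change above indicates the suite remains runnable on demand from sbt:

    sbt docker-integration-tests/test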