Josh Rosen, 3ab0138b0f: [SPARK-12734][BUILD] Fix Netty exclusion and use Maven Enforcer to prevent future bugs
Netty classes are published under multiple artifacts with different names, so our build needs to exclude the `io.netty:netty` and `org.jboss.netty:netty` versions of the Netty artifact. However, our existing exclusions were incomplete, leading to situations where duplicate Netty classes would wind up on the classpath and cause compile errors (or worse).
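
For illustration, a dependency-level exclusion of this kind looks like the following POM fragment. This is a minimal sketch, not a line from the actual patch: the `zookeeper` dependency is used purely as a plausible host for the excludes, and the version property is assumed.

```xml
<!-- Illustrative host dependency only; the patch applies similar exclusion
     blocks to each dependency that transitively pulls in legacy Netty. -->
<dependency>
  <groupId>org.apache.zookeeper</groupId>
  <artifactId>zookeeper</artifactId>
  <version>${zookeeper.version}</version> <!-- assumed to be defined elsewhere -->
  <exclusions>
    <!-- Netty 3.x under its current coordinates; Spark ships io.netty:netty-all instead. -->
    <exclusion>
      <groupId>io.netty</groupId>
      <artifactId>netty</artifactId>
    </exclusion>
    <!-- The same classes under Netty's older group ID. -->
    <exclusion>
      <groupId>org.jboss.netty</groupId>
      <artifactId>netty</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```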

This patch fixes the exclusion issue by adding more exclusions and uses Maven Enforcer's [banned dependencies](https://maven.apache.org/enforcer/enforcer-rules/bannedDependencies.html) rule to prevent these classes from accidentally being reintroduced. I also updated `dev/test-dependencies.sh` to run `mvn validate` so that the enforcer rules can run as part of pull request builds.
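
As a sketch of how such a rule is typically wired into the parent POM (the coordinates follow the description above; the execution ID and the choice to ban the whole `org.jboss.netty` group are illustrative, not necessarily what the patch does):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-enforcer-plugin</artifactId>
  <!-- Version omitted; assume it is pinned in pluginManagement. -->
  <executions>
    <execution>
      <id>enforce-banned-dependencies</id>
      <goals>
        <goal>enforce</goal>
      </goals>
      <configuration>
        <rules>
          <bannedDependencies>
            <excludes>
              <!-- Netty 3.x under its io.netty coordinates... -->
              <exclude>io.netty:netty</exclude>
              <!-- ...and everything under the legacy group ID. -->
              <exclude>org.jboss.netty</exclude>
            </excludes>
          </bannedDependencies>
        </rules>
        <fail>true</fail>
      </configuration>
    </execution>
  </executions>
</plugin>
```

Because `enforcer:enforce` binds to the `validate` lifecycle phase by default, a plain `mvn validate` is enough to trigger the rule, which is why running it from `dev/test-dependencies.sh` makes the check part of pull request builds.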

/cc rxin srowen pwendell. I'd like to backport at least the exclusion portion of this fix to `branch-1.5` in order to fix the documentation publishing job, which fails nondeterministically due to incompatible versions of Netty classes taking precedence on the compile-time classpath.

Author: Josh Rosen <rosenville@gmail.com>
Author: Josh Rosen <joshrosen@databricks.com>

Closes #10672 from JoshRosen/enforce-netty-exclusions.
2016-01-10 19:59:01 -08:00
| Name | Last commit | Last updated |
|------|-------------|--------------|
| audit-release | [SPARK-11808] Remove Bagel. | 2015-12-19 22:40:35 -08:00 |
| create-release | [SPARK-12735] Consolidate & move spark-ec2 to AMPLab managed repository. | 2016-01-09 20:28:20 -08:00 |
| deps | [SPARK-12734][BUILD] Fix Netty exclusion and use Maven Enforcer to prevent future bugs | 2016-01-10 19:59:01 -08:00 |
| sparktestsupport | [SPARK-12735] Consolidate & move spark-ec2 to AMPLab managed repository. | 2016-01-09 20:28:20 -08:00 |
| tests | [SPARK-10359] Enumerate dependencies in a file and diff against it for new pull requests | 2015-12-30 12:47:42 -08:00 |
| .gitignore | [SPARK-6219] Reuse pep8.py | 2015-04-18 16:46:28 -07:00 |
| change-scala-version.sh | [SPARK-9250] Make change-scala-version more helpful w.r.t. valid Scala versions | 2015-07-24 17:09:33 +01:00 |
| change-version-to-2.10.sh | [SPARK-9304] [BUILD] Improve backwards compatibility of SPARK-8401 | 2015-07-25 11:05:08 +01:00 |
| change-version-to-2.11.sh | [SPARK-9304] [BUILD] Improve backwards compatibility of SPARK-8401 | 2015-07-25 11:05:08 +01:00 |
| check-license | [SPARK-6773][Tests]Fix RAT checks still passed issue when download rat jar failed | 2015-04-10 20:02:35 +01:00 |
| github_jira_sync.py | Fix install jira-python | 2015-05-23 09:14:07 -07:00 |
| lint-java | [SPARK-6990][BUILD] Add Java linting script; fix minor warnings | 2015-12-04 12:03:45 -08:00 |
| lint-python | [SPARK-12735] Consolidate & move spark-ec2 to AMPLab managed repository. | 2016-01-09 20:28:20 -08:00 |
| lint-r | [SPARK-10328] [SPARKR] Fix generic for na.omit | 2015-08-28 00:37:50 -07:00 |
| lint-r.R | [SPARK-8505] [SPARKR] Add settings to kick lint-r from ./dev/run-test.py | 2015-08-27 19:38:53 -07:00 |
| lint-scala | [SPARK-2627] [PySpark] have the build enforce PEP 8 automatically | 2014-08-06 12:58:24 -07:00 |
| merge_spark_pr.py | [SPARK-11169] Remove the extra spaces in merge script | 2015-10-18 09:54:38 -07:00 |
| mima | [SPARK-7841][BUILD] Stop using retrieveManaged to retrieve dependencies in SBT | 2015-11-10 10:14:19 -08:00 |
| README.md | Merge pull request #565 from pwendell/dev-scripts. Closes #565. | 2014-02-08 23:13:34 -08:00 |
| run-tests | [SPARK-5161] Parallelize Python test execution | 2015-06-29 21:32:40 -07:00 |
| run-tests-jenkins | [SPARK-7018][BUILD] Refactor dev/run-tests-jenkins into Python | 2015-10-18 22:45:27 -07:00 |
| run-tests-jenkins.py | [SPARK-12612][PROJECT-INFRA] Add missing Hadoop profiles to dev/run-tests-*.py scripts and dev/deps | 2016-01-03 22:05:02 -08:00 |
| run-tests.py | [SPARK-12625][SPARKR][SQL] replace R usage of Spark SQL deprecated API | 2016-01-04 22:32:07 -08:00 |
| scalastyle | [SPARK-12152][PROJECT-INFRA] Speed up Scalastyle checks by only invoking SBT once | 2015-12-06 17:35:01 -08:00 |
| test-dependencies.sh | [SPARK-12734][BUILD] Fix Netty exclusion and use Maven Enforcer to prevent future bugs | 2016-01-10 19:59:01 -08:00 |

Spark Developer Scripts

This directory contains scripts useful to developers when packaging, testing, or committing to Spark.

Many of these scripts require Apache credentials to work correctly.