### What changes were proposed in this pull request?

This PR is the third attempt to upgrade the Scala 2.12.x line, in order to assess its feasibility.

- https://github.com/apache/spark/pull/27929 (Upgrade Scala to 2.12.11, wangyum)
- https://github.com/apache/spark/pull/30940 (Upgrade Scala to 2.12.12, viirya)

The `silencer` library is updated accordingly. In addition, a Kafka version upgrade is required because the tests fail as follows:

```
[info] KafkaDataConsumerSuite:
[info] org.apache.spark.streaming.kafka010.KafkaDataConsumerSuite *** ABORTED *** (1 second, 580 milliseconds)
[info] java.lang.NoClassDefFoundError: scala/math/Ordering$$anon$7
[info] at kafka.api.ApiVersion$.orderingByVersion(ApiVersion.scala:45)
```

### Why are the changes needed?

Apache Spark was stuck on Scala 2.12.10 due to regressions in Scala 2.12.11 and 2.12.12. This upgrade brings in all the bug fixes from the releases since then.

- https://github.com/scala/scala/releases/tag/v2.12.13
- https://github.com/scala/scala/releases/tag/v2.12.12
- https://github.com/scala/scala/releases/tag/v2.12.11

### Does this PR introduce _any_ user-facing change?

Yes, but this is a bug-fix version upgrade.

### How was this patch tested?

Pass the CIs.

Closes #31223 from dongjoon-hyun/SPARK-31168.

Authored-by: Dongjoon Hyun <dhyun@apple.com>
Signed-off-by: Dongjoon Hyun <dhyun@apple.com>
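For context (our reading of the failure, not stated in the PR): `scala/math/Ordering$$anon$7` is a synthetic anonymous class inside `scala-library`. A library built with the Scala inliner enabled for the standard library (e.g. `-opt:l:inline -opt-inline-from:scala/**`) can end up with direct bytecode references to such synthetic class names, which are not stable across 2.12.x patch releases — hence the `NoClassDefFoundError` when the old Kafka artifact runs against a newer `scala-library`. Below is a minimal sketch of the kind of code involved; the `Version` case class and its fields are hypothetical stand-ins for Kafka's `ApiVersion`:

```scala
// Ordering.by returns an instance of an anonymous Ordering subclass defined
// inside scala-library (compiled to a name like scala/math/Ordering$$anon$N).
// If the calling library is compiled with inlining from scala/**, that
// synthetic name is baked into its bytecode and must still exist in the
// scala-library present at runtime.
case class Version(major: Int, minor: Int)

val orderingByVersion: Ordering[Version] =
  Ordering.by(v => (v.major, v.minor))
```

Code compiled without stdlib inlining only calls `Ordering.by` through its public signature, which is why applications built normally against 2.12.10 keep working on 2.12.13 while the inlined Kafka artifact does not.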
### Spark Developer Scripts
This directory contains scripts useful to developers when packaging, testing, or committing to Spark.
Many of these scripts require Apache credentials to work correctly.