15747cfd32
## What changes were proposed in this pull request?

https://issues.apache.org/jira/browse/SPARK-24547

TL;DR from the JIRA issue:

- The first time I generated images for 2.4.0, Docker was using its cache, so when actually running jobs, old jars were still in the Docker image. This produces errors like this in the executors: `java.io.InvalidClassException: org.apache.spark.storage.BlockManagerId; local class incompatible: stream classdesc serialVersionUID = 6155820641931972169, local class serialVersionUID = -3720498261147521051`
- The second problem was that the spark container is pushed, but the spark-py container wasn't yet. This was simply forgotten in the initial PR.
- A third problem I also ran into, because I had an older Docker version, was https://github.com/apache/spark/pull/21551, so I have not included a fix for that in this ticket.

## How was this patch tested?

I've tested it on my own Spark-on-k8s deployment.

Author: Ray Burgemeestre <ray.burgemeestre@brightcomputing.com>

Closes #21555 from rayburgemeestre/SPARK-24547.
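To illustrate the workflow this PR affects, here is a minimal sketch of building and pushing the Spark container images with `bin/docker-image-tool.sh`. The registry and tag values are placeholders, not part of this PR; the point is that `push` should upload the spark-py image alongside the base spark image, which is the forgotten step this change fixes.

```shell
# Hypothetical registry and tag; substitute your own values.
REPO=my-registry.example.com/myrepo
TAG=2.4.0

# Build the container images from an unpacked Spark distribution.
# If an earlier build left stale jars in Docker's layer cache (the first
# problem described above), rebuilding without the cache avoids reusing them.
./bin/docker-image-tool.sh -r "$REPO" -t "$TAG" build

# With the fix in this PR, push uploads the spark-py image as well,
# not only the base spark image.
./bin/docker-image-tool.sh -r "$REPO" -t "$TAG" push
```

Running `spark-submit` against a cluster that pulls `$REPO/spark-py:$TAG` then gets images built from the current jars, avoiding the `InvalidClassException` serialVersionUID mismatch between driver and executors.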