17586f9ed2
### What changes were proposed in this pull request?

This PR aims to support Hadoop 3.2 K8s integration tests.

### Why are the changes needed?

Currently, the K8s integration suite assumes Hadoop 2.7 and has hard-coded parts.

### Does this PR introduce _any_ user-facing change?

No. This is a dev-only change.

### How was this patch tested?

Passed the Jenkins K8s IT (with Hadoop 2.7) and performed the manual testing for Hadoop 3.2 as described in `README.md`.

```
./dev/dev-run-integration-tests.sh --hadoop-profile hadoop-3.2
```

I verified this manually as follows.

```
$ resource-managers/kubernetes/integration-tests/dev/dev-run-integration-tests.sh \
    --spark-tgz .../spark-3.1.0-SNAPSHOT-bin-3.2.0.tgz \
    --exclude-tags r \
    --hadoop-profile hadoop-3.2
...
KubernetesSuite:
- Run SparkPi with no resources
- Run SparkPi with a very long application name.
- Use SparkLauncher.NO_RESOURCE
- Run SparkPi with a master URL without a scheme.
- Run SparkPi with an argument.
- Run SparkPi with custom labels, annotations, and environment variables.
- All pods have the same service account by default
- Run extraJVMOptions check on driver
- Run SparkRemoteFileTest using a remote data file
- Run SparkPi with env and mount secrets.
- Run PySpark on simple pi.py example
- Run PySpark with Python2 to test a pyfiles example
- Run PySpark with Python3 to test a pyfiles example
- Run PySpark with memory customization
- Run in client mode.
- Start pod creation from template
- PVs with local storage
- Launcher client dependencies
- Test basic decommissioning
Run completed in 8 minutes, 49 seconds.
Total number of tests run: 19
Suites: completed 2, aborted 0
Tests: succeeded 19, failed 0, canceled 0, ignored 0, pending 0
All tests passed.
```

Closes #28689 from dongjoon-hyun/SPARK-31881.

Authored-by: Dongjoon Hyun <dongjoon@apache.org>
Signed-off-by: Dongjoon Hyun <dongjoon@apache.org>
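As a side note, the `--hadoop-profile` option used above is a flag consumed by the test launcher script. The following is a hypothetical minimal sketch (not the actual script source) of how such a flag could be parsed in a shell launcher, with `hadoop-2.7` assumed as the default profile:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of a launcher accepting a --hadoop-profile flag.
# The default value "hadoop-2.7" and the function name are assumptions
# for illustration, not the actual script's implementation.
parse_hadoop_profile() {
  local profile="hadoop-2.7"   # assumed default profile
  while (( "$#" )); do
    case "$1" in
      --hadoop-profile)
        profile="$2"           # take the value following the flag
        shift
        ;;
    esac
    shift
  done
  echo "$profile"
}

# Prints the selected profile for the given arguments.
parse_hadoop_profile --hadoop-profile hadoop-3.2
```

In this sketch, omitting the flag falls back to the assumed default, so existing Hadoop 2.7 invocations would keep working unchanged.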
Modified files:

- `dev-run-integration-tests.sh`
- `spark-rbac.yaml`