5f91245cc2
### What changes were proposed in this pull request?

This PR introduces a new protected method in `SparkFunSuite` that is called only when a test fails and can be used to collect logs for the failed test. In this PR it is implemented for the Kubernetes integration tests in the `KubernetesSuite` class, where it collects the logs of all the PODs and writes them to the test output. This cannot be done with a simple "after" method, because the test outcome is not available inside "after" (a minimal sketch of how such a failure-only hook can be wired into ScalaTest is included at the end of this description).

This PR also removes `appLocator` as a method argument, since `appLocator` is already available as a member variable.

### Why are the changes needed?

Currently both the driver and executor logs are lost when a test fails. The [developer-tools](https://spark.apache.org/developer-tools.html) page only hints: "Getting logs from the pods and containers directly is an exercise left to the reader." But when a test is executed by Jenkins and fails, the POD logs are needed to analyze the problem.

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

By integration testing. I checked what happens when a test fails; the output is:

```
21/02/14 11:05:34.261 ScalaTest-main-running-KubernetesSuite INFO KubernetesSuite: ===== EXTRA LOGS FOR THE FAILED TEST
21/02/14 11:05:34.278 ScalaTest-main-running-KubernetesSuite INFO KubernetesSuite: BEGIN driver POD log
++ id -u
+ myuid=185
++ id -g
+ mygid=0
+ set +e
++ getent passwd 185
+ uidentry=
+ set -e
+ '[' -z '' ']'
+ '[' -w /etc/passwd ']'
+ echo '185:x:185:0:anonymous uid:/opt/spark:/bin/false'
+ SPARK_CLASSPATH=':/opt/spark/jars/*'
+ env
+ grep SPARK_JAVA_OPT_
+ sort -t_ -k4 -n
+ sed 's/[^=]*=\(.*\)/\1/g'
+ readarray -t SPARK_EXECUTOR_JAVA_OPTS
+ '[' -n '' ']'
+ '[' -z ']'
+ '[' -z ']'
+ '[' -n '' ']'
+ '[' -z ']'
+ '[' -z x ']'
+ SPARK_CLASSPATH='/opt/spark/conf::/opt/spark/jars/*'
+ case "$1" in
+ shift 1
+ CMD=("$SPARK_HOME/bin/spark-submit" --conf "spark.driver.bindAddress=$SPARK_DRIVER_BIND_ADDRESS" --deploy-mode client "$@")
+ exec /usr/bin/tini -s -- /opt/spark/bin/spark-submit --conf spark.driver.bindAddress=172.17.0.3 --deploy-mode client --properties-file /opt/spark/conf/spark.properties --class org.apache.spark.deploy.PythonRunner local:///opt/spark/tests/decommissioning.py
21/02/14 10:02:28 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting decom test
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
21/02/14 10:02:29 INFO SparkContext: Running Spark version 3.2.0-SNAPSHOT
21/02/14 10:02:29 INFO ResourceUtils: ==============================================================
21/02/14 10:02:29 INFO ResourceUtils: No custom resources configured for spark.driver.
21/02/14 10:02:29 INFO ResourceUtils: ==============================================================
...
21/02/14 10:03:17 INFO ShutdownHookManager: Deleting directory /var/data/spark-fa6961ed-a2c1-444c-bfeb-20e63ba0b5cf/spark-ab4b0287-6e24-4b39-837e-9b0b62c1f26f
21/02/14 10:03:17 INFO ShutdownHookManager: Deleting directory /tmp/spark-d6b11e7d-6a03-4a1d-8559-37cb853319bf
21/02/14 11:05:34.279 ScalaTest-main-running-KubernetesSuite INFO KubernetesSuite: END driver POD log
```

Closes #31561 from attilapiros/SPARK-34426.

Authored-by: “attilapiros” <piros.attila.zsolt@gmail.com>
Signed-off-by: Jungtaek Lim <kabhwan.opensource@gmail.com>
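For readers unfamiliar with how a failure-only hook can be wired into ScalaTest, here is a minimal sketch. The names `FailureAwareSuite`, `logForFailedTest`, and `PodLogCollectingSuite` are illustrative assumptions, not the exact identifiers added by this PR; the point is only the pattern of inspecting the test `Outcome` in `withFixture`.

```scala
import org.scalatest.funsuite.AnyFunSuite
import org.scalatest.{Failed, Outcome}

// Sketch of a base suite that runs extra code only when a test fails.
// Hook and class names are hypothetical, for illustration only.
abstract class FailureAwareSuite extends AnyFunSuite {

  // Subclasses override this to collect diagnostics for a failed test.
  protected def logForFailedTest(): Unit = {}

  override protected def withFixture(test: NoArgTest): Outcome = {
    val outcome = super.withFixture(test)
    outcome match {
      case _: Failed =>
        // Runs only for a failed test; a plain `after` block cannot do this
        // because it never sees the test outcome.
        logForFailedTest()
      case _ => // passed, canceled or pending: nothing extra to do
    }
    outcome
  }
}

class PodLogCollectingSuite extends FailureAwareSuite {
  override protected def logForFailedTest(): Unit = {
    // A Kubernetes suite would iterate over the driver and executor PODs here
    // and log each POD's output, as the real KubernetesSuite does.
  }
}
```

The design point is that `withFixture` observes the test `Outcome`, so the hook fires only for failed tests, which an `after` block cannot decide on its own.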