7e2ed40d58
### What changes were proposed in this pull request?

In `spark-daemon.sh`, `spark_rotate_log()` accepts `$2` as a custom setting for the maximum number of rotated log files, but this part of the code is never actually used. This PR adds a `SPARK_LOG_MAX_FILES` environment variable to configure the maximum number of rotated log files that Spark daemons keep.

### Why are the changes needed?

The number of rotated log files for all Spark daemons is hardcoded to 5, but it is supposed to be configurable.

### Does this PR introduce _any_ user-facing change?

Yes, `SPARK_LOG_MAX_FILES` is added to configure the maximum number of rotated log files that Spark daemons keep.

### How was this patch tested?

Verified locally with the added shell logic:

```shell
kentyaohulk ~ SPARK_LOG_MAX_FILES=1 sh test.sh
1
kentyaohulk ~ SPARK_LOG_MAX_FILES=a sh test.sh
Error: SPARK_LOG_MAX_FILES must be a positive number
✘ kentyaohulk ~ SPARK_LOG_MAX_FILES=b sh test.sh
Error: SPARK_LOG_MAX_FILES must be a positive number
✘ kentyaohulk ~ SPARK_LOG_MAX_FILES=-1 sh test.sh
Error: SPARK_LOG_MAX_FILES must be a positive number
✘ kentyaohulk ~ sh test.sh
5
✘ kentyaohulk ~ cat test.sh
#!/bin/bash
if [[ -z ${SPARK_LOG_MAX_FILES} ]]; then
  num=5
elif [[ ${SPARK_LOG_MAX_FILES} -gt 0 ]]; then
  num=${SPARK_LOG_MAX_FILES}
else
  echo "Error: SPARK_LOG_MAX_FILES must be a positive number"
  exit 1
fi
```

Closes #28580 from yaooqinn/SPARK-31759.

Authored-by: Kent Yao <yaooqinn@hotmail.com>
Signed-off-by: HyukjinKwon <gurwls223@apache.org>
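To illustrate how the new variable ties into log rotation, the sketch below combines the validation logic from the PR with a simplified rotation loop in the style of `spark_rotate_log()`. The rotation loop and the demonstration at the end are illustrative assumptions, not the exact upstream code:

```shell
#!/bin/bash
# Sketch of a spark_rotate_log()-style function that honors SPARK_LOG_MAX_FILES.
# The validation mirrors the PR; the rotation loop is a simplified assumption.
spark_rotate_log() {
  local log=$1
  local num
  if [[ -z ${SPARK_LOG_MAX_FILES} ]]; then
    num=5                             # default: keep 5 rotated files
  elif [[ ${SPARK_LOG_MAX_FILES} -gt 0 ]]; then
    num=${SPARK_LOG_MAX_FILES}        # user override
  else
    echo "Error: SPARK_LOG_MAX_FILES must be a positive number"
    exit 1
  fi
  # Shift existing rotated logs: log.(n-1) -> log.n, ..., log -> log.1
  if [ -f "$log" ]; then
    while [ "$num" -gt 1 ]; do
      prev=$((num - 1))
      [ -f "$log.$prev" ] && mv "$log.$prev" "$log.$num"
      num=$prev
    done
    mv "$log" "$log.$num"
  fi
}

# Demonstration in a throwaway directory (hypothetical file names):
dir=$(mktemp -d)
export SPARK_LOG_MAX_FILES=2
echo "first" > "$dir/daemon.out"
spark_rotate_log "$dir/daemon.out"    # daemon.out -> daemon.out.1
echo "second" > "$dir/daemon.out"
spark_rotate_log "$dir/daemon.out"    # daemon.out.1 -> daemon.out.2, daemon.out -> daemon.out.1
ls "$dir"
```

With `SPARK_LOG_MAX_FILES=2`, only `daemon.out.1` and `daemon.out.2` survive after two rotations, the oldest content having shifted to the highest suffix.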