[SPARK-7930] [CORE] [STREAMING] Fixed shutdown hook priorities
The shutdown hook for temp directories had priority 100 while SparkContext's was 50, and higher-priority hooks run first. So the local root directories were deleted before the SparkContext was shut down, which led to scary errors from still-running jobs at shutdown time. This is especially a problem when running streaming examples, where Ctrl-C is the only way to shut down.
The fix in this PR is to make the temp directory shutdown priority lower than SparkContext's, so that the temp dirs are the last thing to be deleted, after the SparkContext has been shut down. Also, the DiskBlockManager shutdown priority is changed from the default 100 to TEMP_DIR_SHUTDOWN_PRIORITY + 1, so that it is invoked just before all temp dirs are cleared.
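For context, Utils.addShutdownHook(priority) runs registered hooks in descending priority order at JVM exit. A minimal, self-contained sketch of that mechanism (the PriorityShutdownHooks object below is hypothetical, not Spark's actual implementation):

    // Sketch only: higher-priority hooks run first, so a hook at priority 50
    // (SparkContext) fires before one at priority 25 (temp dirs).
    import scala.collection.mutable.ArrayBuffer

    object PriorityShutdownHooks {
      private case class Hook(priority: Int, body: () => Unit)
      private val hooks = new ArrayBuffer[Hook]

      // Single JVM-level hook that drains the registered hooks in
      // descending priority order.
      Runtime.getRuntime.addShutdownHook(new Thread("priority-shutdown") {
        override def run(): Unit = PriorityShutdownHooks.synchronized {
          hooks.sortBy(-_.priority).foreach(_.body())
        }
      })

      def add(priority: Int)(body: () => Unit): Unit = synchronized {
        hooks += Hook(priority, body)
      }
    }

With this ordering, registering add(50)(stopContext) and add(25)(deleteTempDirs) guarantees the context is stopped before its temp dirs disappear, which is exactly the invariant this PR restores.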
Author: Tathagata Das <tathagata.das1565@gmail.com>
Closes #6482 from tdas/SPARK-7930 and squashes the following commits:
d7cbeb5 [Tathagata Das] Removed unnecessary line
1514d0b [Tathagata Das] Fixed shutdown hook priorities
(cherry picked from commit cd3d9a5c0c)
Signed-off-by: Patrick Wendell <patrick@databricks.com>
parent aee046dfa1
commit f7cb272b7c
@@ -139,8 +139,8 @@ private[spark] class DiskBlockManager(blockManager: BlockManager, conf: SparkConf
   }
 
   private def addShutdownHook(): AnyRef = {
-    Utils.addShutdownHook { () =>
-      logDebug("Shutdown hook called")
+    Utils.addShutdownHook(Utils.TEMP_DIR_SHUTDOWN_PRIORITY + 1) { () =>
+      logInfo("Shutdown hook called")
       DiskBlockManager.this.doStop()
     }
   }
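Not shown in this hunk, but relevant to the change: addShutdownHook returns an AnyRef handle, and this version of Utils also provides removeShutdownHook as its counterpart. A hedged sketch of how the surrounding class can use the handle so the hook does not fire again after an orderly stop (assuming Utils.removeShutdownHook(ref: AnyRef)):

    // Keep the returned handle so an explicit stop() can deregister the hook.
    private val shutdownHook = addShutdownHook()

    /** Cleanup local dirs and stop the disk block manager exactly once. */
    private[spark] def stop(): Unit = {
      Utils.removeShutdownHook(shutdownHook)
      doStop()
    }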
@@ -73,6 +73,13 @@ private[spark] object Utils extends Logging {
    */
   val SPARK_CONTEXT_SHUTDOWN_PRIORITY = 50
 
+  /**
+   * The shutdown priority of temp directory must be lower than the SparkContext shutdown
+   * priority. Otherwise cleaning the temp directories while Spark jobs are running can
+   * throw undesirable errors at the time of shutdown.
+   */
+  val TEMP_DIR_SHUTDOWN_PRIORITY = 25
+
   private val MAX_DIR_CREATION_ATTEMPTS: Int = 10
   @volatile private var localRootDirs: Array[String] = null
 
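Taken together, the two constants pin down the shutdown sequence. An illustrative usage sketch against the Utils.addShutdownHook(priority) API shown in this diff (the logInfo messages are placeholders, not real Spark log lines):

    // Hooks registered at these priorities fire in descending order at JVM exit.
    Utils.addShutdownHook(Utils.SPARK_CONTEXT_SHUTDOWN_PRIORITY) { () =>
      logInfo("1. SparkContext stops first (priority 50)")
    }
    Utils.addShutdownHook(Utils.TEMP_DIR_SHUTDOWN_PRIORITY + 1) { () =>
      logInfo("2. DiskBlockManager stops next (priority 26)")
    }
    Utils.addShutdownHook(Utils.TEMP_DIR_SHUTDOWN_PRIORITY) { () =>
      logInfo("3. Temp dirs are deleted last (priority 25)")
    }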
@@ -189,10 +196,11 @@ private[spark] object Utils extends Logging {
   private val shutdownDeleteTachyonPaths = new scala.collection.mutable.HashSet[String]()
 
   // Add a shutdown hook to delete the temp dirs when the JVM exits
-  addShutdownHook { () =>
-    logDebug("Shutdown hook called")
+  addShutdownHook(TEMP_DIR_SHUTDOWN_PRIORITY) { () =>
+    logInfo("Shutdown hook called")
     shutdownDeletePaths.foreach { dirPath =>
       try {
+        logInfo("Deleting directory " + dirPath)
         Utils.deleteRecursively(new File(dirPath))
       } catch {
         case e: Exception => logError(s"Exception while deleting Spark temp dir: $dirPath", e)
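The deletion loop above catches per-directory failures so one bad path cannot abort the remaining cleanup. A self-contained sketch of the recursive-delete pattern it relies on (plain java.io; Spark's actual Utils.deleteRecursively is more involved, e.g. it guards against following symlinks):

    import java.io.{File, IOException}

    // Delete children first, then the entry itself; surface a failure only
    // if the path still exists afterwards.
    def deleteRecursively(file: File): Unit = {
      if (file.isDirectory) {
        Option(file.listFiles()).getOrElse(Array.empty[File]).foreach(deleteRecursively)
      }
      if (!file.delete() && file.exists()) {
        throw new IOException(s"Failed to delete: ${file.getAbsolutePath}")
      }
    }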