[SPARK-22466][SPARK SUBMIT] export SPARK_CONF_DIR while conf is default

## What changes were proposed in this pull request?

We use SPARK_CONF_DIR to switch to an alternate Spark conf directory, and the variable is visible to applications if we explicitly export it in spark-env.sh, but with the default settings it is not set at all. This PR exports SPARK_CONF_DIR even when it falls back to the default `${SPARK_HOME}/conf`.
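For context, a minimal sketch of the pre-existing workaround (the custom path is illustrative): SPARK_CONF_DIR only becomes visible to the application if the user exports it manually, e.g. in `conf/spark-env.sh`:

```sh
# Workaround before this change (illustrative path): export the variable
# yourself in spark-env.sh so that Spark processes can see it.
export SPARK_CONF_DIR="${SPARK_CONF_DIR:-/opt/spark/custom-conf}"
```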

### Before

```
KentKentsMacBookPro  ~/Documents/spark-packages/spark-2.3.0-SNAPSHOT-bin-master  bin/spark-shell --master local
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
17/11/08 10:28:44 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/11/08 10:28:45 WARN Utils: Service 'SparkUI' could not bind on port 4040. Attempting port 4041.
Spark context Web UI available at http://169.254.168.63:4041
Spark context available as 'sc' (master = local, app id = local-1510108125770).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.3.0-SNAPSHOT
      /_/

Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_65)
Type in expressions to have them evaluated.
Type :help for more information.

scala> sys.env.get("SPARK_CONF_DIR")
res0: Option[String] = None
```

### After

```
scala> sys.env.get("SPARK_CONF_DIR")
res0: Option[String] = Some(/Users/Kent/Documents/spark/conf)
```
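With the variable exported by default, code launched through the Spark scripts can locate the active configuration directory without extra setup; a small illustrative check from a shell:

```sh
# Illustrative only: SPARK_CONF_DIR now points at the active conf directory
# (by default ${SPARK_HOME}/conf), so config files can be located directly.
ls "${SPARK_CONF_DIR:?SPARK_CONF_DIR is not set}"
```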
## How was this patch tested?

vanzin

Author: Kent Yao <yaooqinn@hotmail.com>

Closes #19688 from yaooqinn/SPARK-22466.
Kent Yao authored on 2017-11-09 14:33:08 +09:00; committed by hyukjinkwon
commit ee571d79e5 (parent 6447d7bc1d)
3 changed files with 10 additions and 14 deletions

**bin/load-spark-env.cmd**

```diff
@@ -19,15 +19,13 @@ rem
 rem This script loads spark-env.cmd if it exists, and ensures it is only loaded once.
 rem spark-env.cmd is loaded from SPARK_CONF_DIR if set, or within the current directory's
-rem conf/ subdirectory.
+rem conf\ subdirectory.

 if [%SPARK_ENV_LOADED%] == [] (
   set SPARK_ENV_LOADED=1

-  if not [%SPARK_CONF_DIR%] == [] (
-    set user_conf_dir=%SPARK_CONF_DIR%
-  ) else (
-    set user_conf_dir=..\conf
+  if [%SPARK_CONF_DIR%] == [] (
+    set SPARK_CONF_DIR=%~dp0..\conf
   )

   call :LoadSparkEnv
@@ -54,6 +52,6 @@ if [%SPARK_SCALA_VERSION%] == [] (
 exit /b 0

 :LoadSparkEnv
-if exist "%user_conf_dir%\spark-env.cmd" (
-  call "%user_conf_dir%\spark-env.cmd"
+if exist "%SPARK_CONF_DIR%\spark-env.cmd" (
+  call "%SPARK_CONF_DIR%\spark-env.cmd"
 )
```
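Note on the batch idiom: `%~dp0` expands to the drive and path of the running script, so `%~dp0..\conf` resolves the default conf directory relative to the `bin` directory rather than the caller's current directory.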

**bin/load-spark-env.sh**

```diff
@@ -29,15 +29,12 @@ fi
 if [ -z "$SPARK_ENV_LOADED" ]; then
   export SPARK_ENV_LOADED=1

-  # Returns the parent of the directory this script lives in.
-  parent_dir="${SPARK_HOME}"
-
-  user_conf_dir="${SPARK_CONF_DIR:-"$parent_dir"/conf}"
+  export SPARK_CONF_DIR="${SPARK_CONF_DIR:-"${SPARK_HOME}"/conf}"

-  if [ -f "${user_conf_dir}/spark-env.sh" ]; then
+  if [ -f "${SPARK_CONF_DIR}/spark-env.sh" ]; then
     # Promote all variable declarations to environment (exported) variables
     set -a
-    . "${user_conf_dir}/spark-env.sh"
+    . "${SPARK_CONF_DIR}/spark-env.sh"
     set +a
   fi
 fi
```
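The POSIX `${VAR:-default}` expansion used above keeps a value the user has already set and only falls back otherwise; a quick sketch of both cases (the `/etc/spark/conf` path is illustrative):

```sh
# Case 1: variable unset -> falls back to ${SPARK_HOME}/conf
unset SPARK_CONF_DIR
export SPARK_CONF_DIR="${SPARK_CONF_DIR:-"${SPARK_HOME}"/conf}"

# Case 2: variable already set -> the user's value wins
export SPARK_CONF_DIR=/etc/spark/conf
export SPARK_CONF_DIR="${SPARK_CONF_DIR:-"${SPARK_HOME}"/conf}"
echo "$SPARK_CONF_DIR"   # prints /etc/spark/conf
```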

**conf/spark-env.sh.template**

```diff
@@ -32,7 +32,8 @@
 # - SPARK_LOCAL_DIRS, storage directories to use on this node for shuffle and RDD data
 # - MESOS_NATIVE_JAVA_LIBRARY, to point to your libmesos.so if you use Mesos

-# Options read in YARN client mode
+# Options read in YARN client/cluster mode
+# - SPARK_CONF_DIR, Alternate conf dir. (Default: ${SPARK_HOME}/conf)
 # - HADOOP_CONF_DIR, to point Spark towards Hadoop configuration files
 # - YARN_CONF_DIR, to point Spark towards YARN configuration files when you use YARN
 # - SPARK_EXECUTOR_CORES, Number of cores for the executors (Default: 1).
```
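As the template now documents, SPARK_CONF_DIR can point Spark at an alternate conf directory; a hedged example invocation (the conf path and cluster details are illustrative, the SparkPi example ships with the binary distribution):

```sh
# Illustrative: run the bundled SparkPi example on YARN using an
# alternate configuration directory.
export SPARK_CONF_DIR=/etc/spark/conf-yarn
"${SPARK_HOME}/bin/spark-submit" \
  --master yarn \
  --class org.apache.spark.examples.SparkPi \
  "${SPARK_HOME}"/examples/jars/spark-examples_*.jar 100
```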