Merge remote-tracking branch 'apache/master' into project-refactor
Conflicts:
	examples/src/main/java/org/apache/spark/streaming/examples/JavaFlumeEventCount.java
	streaming/src/main/scala/org/apache/spark/streaming/StreamingContext.scala
	streaming/src/main/scala/org/apache/spark/streaming/api/java/JavaStreamingContext.scala
	streaming/src/test/java/org/apache/spark/streaming/JavaAPISuite.java
	streaming/src/test/scala/org/apache/spark/streaming/InputStreamsSuite.scala
	streaming/src/test/scala/org/apache/spark/streaming/TestSuiteBase.scala
commit 3b4c4c7f4d

.gitignore (vendored, 2 changes)
@@ -1,6 +1,8 @@
 *~
 *.swp
 *.ipr
 *.iml
 *.iws
 .idea/
+.settings
+.cache
README.md (28 changes)
@@ -13,20 +13,20 @@ This README file only contains basic setup instructions.
 ## Building
 
 Spark requires Scala 2.10. The project is built using Simple Build Tool (SBT),
-which is packaged with it. To build Spark and its example programs, run:
+which can be obtained [here](http://www.scala-sbt.org). To build Spark and its example programs, run:
 
-    sbt/sbt assembly
+    sbt assembly
 
 Once you've built Spark, the easiest way to start using it is the shell:
 
-    ./spark-shell
+    ./bin/spark-shell
 
-Or, for the Python API, the Python shell (`./pyspark`).
+Or, for the Python API, the Python shell (`./bin/pyspark`).
 
 Spark also comes with several sample programs in the `examples` directory.
-To run one of them, use `./run-example <class> <params>`. For example:
+To run one of them, use `./bin/run-example <class> <params>`. For example:
 
-    ./run-example org.apache.spark.examples.SparkLR local[2]
+    ./bin/run-example org.apache.spark.examples.SparkLR local[2]
 
 will run the Logistic Regression example locally on 2 CPUs.
@@ -36,7 +36,13 @@ All of the Spark samples take a `<master>` parameter that is the cluster URL
 to connect to. This can be a mesos:// or spark:// URL, or "local" to run
 locally with one thread, or "local[N]" to run locally with N threads.
 
+## Running tests
+
+Testing first requires [Building](#Building) Spark. Once Spark is built, tests
+can be run using:
+
+`sbt test`
+
 ## A Note About Hadoop Versions
 
 Spark uses the Hadoop core library to talk to HDFS and other Hadoop-supported
@@ -49,22 +55,22 @@ For Apache Hadoop versions 1.x, Cloudera CDH MRv1, and other Hadoop
 versions without YARN, use:
 
     # Apache Hadoop 1.2.1
-    $ SPARK_HADOOP_VERSION=1.2.1 sbt/sbt assembly
+    $ SPARK_HADOOP_VERSION=1.2.1 sbt assembly
 
     # Cloudera CDH 4.2.0 with MapReduce v1
-    $ SPARK_HADOOP_VERSION=2.0.0-mr1-cdh4.2.0 sbt/sbt assembly
+    $ SPARK_HADOOP_VERSION=2.0.0-mr1-cdh4.2.0 sbt assembly
 
 For Apache Hadoop 2.2.X, 2.1.X, 2.0.X, 0.23.x, Cloudera CDH MRv2, and other Hadoop versions
 with YARN, also set `SPARK_YARN=true`:
 
     # Apache Hadoop 2.0.5-alpha
-    $ SPARK_HADOOP_VERSION=2.0.5-alpha SPARK_YARN=true sbt/sbt assembly
+    $ SPARK_HADOOP_VERSION=2.0.5-alpha SPARK_YARN=true sbt assembly
 
     # Cloudera CDH 4.2.0 with MapReduce v2
-    $ SPARK_HADOOP_VERSION=2.0.0-cdh4.2.0 SPARK_YARN=true sbt/sbt assembly
+    $ SPARK_HADOOP_VERSION=2.0.0-cdh4.2.0 SPARK_YARN=true sbt assembly
 
     # Apache Hadoop 2.2.X and newer
-    $ SPARK_HADOOP_VERSION=2.2.0 SPARK_YARN=true sbt/sbt assembly
+    $ SPARK_HADOOP_VERSION=2.2.0 SPARK_YARN=true sbt assembly
 
 When developing a Spark application, specify the Hadoop version by adding the
 "hadoop-client" artifact to your project's dependencies. For example, if you're
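As a rough illustration of how a sample program can hand the `<master>` URL described in the README to Spark, here is a minimal Scala sketch; the object name and app name are invented, while `SparkConf.setMaster`, `setAppName` and the `SparkContext(conf)` constructor all appear later in this diff:

// Hypothetical driver program; only the SparkConf/SparkContext calls are taken from this diff.
import org.apache.spark.{SparkConf, SparkContext}

object SparkLRDriver {
  def main(args: Array[String]) {
    val conf = new SparkConf()
      .setMaster(args(0))              // e.g. "local[2]" or "spark://host:port"
      .setAppName("SparkLR example")   // shown in the cluster web UI
    val sc = new SparkContext(conf)
    // ... application logic would go here ...
    sc.stop()
  }
}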
(deleted file)
@@ -1,27 +0,0 @@
-
-Copyright (c) 2009-2011, Barthelemy Dagenais All rights reserved.
-
-Redistribution and use in source and binary forms, with or without
-modification, are permitted provided that the following conditions are met:
-
-- Redistributions of source code must retain the above copyright notice, this
-list of conditions and the following disclaimer.
-
-- Redistributions in binary form must reproduce the above copyright notice,
-this list of conditions and the following disclaimer in the documentation
-and/or other materials provided with the distribution.
-
-- The name of the author may not be used to endorse or promote products
-derived from this software without specific prior written permission.
-
-THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
-AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
-IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
-ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
-LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
-CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
-SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
-INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
-CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
-ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
-POSSIBILITY OF SUCH DAMAGE.
(deleted file)
@@ -1 +0,0 @@
-b7924aabe9c5e63f0a4d8bbd17019534c7ec014e

Binary file not shown.
(deleted file)
@@ -1,9 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<project xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd" xmlns="http://maven.apache.org/POM/4.0.0"
-    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
-  <modelVersion>4.0.0</modelVersion>
-  <groupId>net.sf.py4j</groupId>
-  <artifactId>py4j</artifactId>
-  <version>0.7</version>
-  <description>POM was created from install:install-file</description>
-</project>
(deleted file)
@@ -1,12 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<metadata>
-  <groupId>net.sf.py4j</groupId>
-  <artifactId>py4j</artifactId>
-  <versioning>
-    <release>0.7</release>
-    <versions>
-      <version>0.7</version>
-    </versions>
-    <lastUpdated>20130828020333</lastUpdated>
-  </versioning>
-</metadata>
@@ -67,7 +67,7 @@
     <dependency>
       <groupId>net.sf.py4j</groupId>
       <artifactId>py4j</artifactId>
-      <version>0.7</version>
+      <version>0.8.1</version>
     </dependency>
   </dependencies>
 
@@ -124,7 +124,17 @@
 
   <profiles>
     <profile>
-      <id>hadoop2-yarn</id>
+      <id>yarn-alpha</id>
+      <dependencies>
+        <dependency>
+          <groupId>org.apache.spark</groupId>
+          <artifactId>spark-yarn-alpha_${scala.binary.version}</artifactId>
+          <version>${project.version}</version>
+        </dependency>
+      </dependencies>
+    </profile>
+    <profile>
+      <id>yarn</id>
       <dependencies>
         <dependency>
           <groupId>org.apache.spark</groupId>
@@ -39,23 +39,20 @@
     </fileSet>
     <fileSet>
      <directory>
-        ${project.parent.basedir}/bin/
+        ${project.parent.basedir}/sbin/
      </directory>
-      <outputDirectory>/bin</outputDirectory>
+      <outputDirectory>/sbin</outputDirectory>
      <includes>
        <include>**/*</include>
      </includes>
    </fileSet>
    <fileSet>
      <directory>
-        ${project.parent.basedir}
+        ${project.parent.basedir}/bin/
      </directory>
      <outputDirectory>/bin</outputDirectory>
      <includes>
-        <include>run-example*</include>
-        <include>spark-class*</include>
-        <include>spark-shell*</include>
-        <include>spark-executor*</include>
+        <include>**/*</include>
      </includes>
    </fileSet>
  </fileSets>
@@ -29,7 +29,7 @@ rem Load environment variables from conf\spark-env.cmd, if it exists
 if exist "%FWDIR%conf\spark-env.cmd" call "%FWDIR%conf\spark-env.cmd"
 
 rem Build up classpath
-set CLASSPATH=%SPARK_CLASSPATH%;%FWDIR%conf
+set CLASSPATH=%FWDIR%conf
 if exist "%FWDIR%RELEASE" (
   for %%d in ("%FWDIR%jars\spark-assembly*.jar") do (
     set ASSEMBLY_JAR=%%d
@@ -26,7 +26,7 @@ SCALA_VERSION=2.10
 FWDIR="$(cd `dirname $0`/..; pwd)"
 
 # Load environment variables from conf/spark-env.sh, if it exists
-if [ -e $FWDIR/conf/spark-env.sh ] ; then
+if [ -e "$FWDIR/conf/spark-env.sh" ] ; then
   . $FWDIR/conf/spark-env.sh
 fi
 
@@ -18,7 +18,7 @@
 #
 
 # Figure out where the Scala framework is installed
-FWDIR="$(cd `dirname $0`; pwd)"
+FWDIR="$(cd `dirname $0`/..; pwd)"
 
 # Export this as SPARK_HOME
 export SPARK_HOME="$FWDIR"
@@ -31,13 +31,13 @@ if [ ! -f "$FWDIR/RELEASE" ]; then
   ls "$FWDIR"/assembly/target/scala-$SCALA_VERSION/spark-assembly*hadoop*.jar >& /dev/null
   if [[ $? != 0 ]]; then
     echo "Failed to find Spark assembly in $FWDIR/assembly/target" >&2
-    echo "You need to build Spark with sbt/sbt assembly before running this program" >&2
+    echo "You need to build Spark with sbt assembly before running this program" >&2
     exit 1
   fi
 fi
 
 # Load environment variables from conf/spark-env.sh, if it exists
-if [ -e $FWDIR/conf/spark-env.sh ] ; then
+if [ -e "$FWDIR/conf/spark-env.sh" ] ; then
   . $FWDIR/conf/spark-env.sh
 fi
 
@@ -20,7 +20,7 @@ rem
 set SCALA_VERSION=2.10
 
 rem Figure out where the Spark framework is installed
-set FWDIR=%~dp0
+set FWDIR=%~dp0..\
 
 rem Export this as SPARK_HOME
 set SPARK_HOME=%FWDIR%
@@ -25,13 +25,13 @@ esac
 SCALA_VERSION=2.10
 
 # Figure out where the Scala framework is installed
-FWDIR="$(cd `dirname $0`; pwd)"
+FWDIR="$(cd `dirname $0`/..; pwd)"
 
 # Export this as SPARK_HOME
 export SPARK_HOME="$FWDIR"
 
 # Load environment variables from conf/spark-env.sh, if it exists
-if [ -e $FWDIR/conf/spark-env.sh ] ; then
+if [ -e "$FWDIR/conf/spark-env.sh" ] ; then
   . $FWDIR/conf/spark-env.sh
 fi
 
@@ -55,7 +55,7 @@ if [ -e "$EXAMPLES_DIR"/target/spark-examples*[0-9Tg].jar ]; then
 fi
 if [[ -z $SPARK_EXAMPLES_JAR ]]; then
   echo "Failed to find Spark examples assembly in $FWDIR/examples/target" >&2
-  echo "You need to build Spark with sbt/sbt assembly before running this program" >&2
+  echo "You need to build Spark with sbt assembly before running this program" >&2
   exit 1
 fi
 
@@ -20,7 +20,7 @@ rem
 set SCALA_VERSION=2.10
 
 rem Figure out where the Spark framework is installed
-set FWDIR=%~dp0
+set FWDIR=%~dp0..\
 
 rem Export this as SPARK_HOME
 set SPARK_HOME=%FWDIR%
@@ -49,7 +49,7 @@ if "x%SPARK_EXAMPLES_JAR%"=="x" (
 
 rem Compute Spark classpath using external script
 set DONT_PRINT_CLASSPATH=1
-call "%FWDIR%bin\compute-classpath.cmd"
+call "%FWDIR%sbin\compute-classpath.cmd"
 set DONT_PRINT_CLASSPATH=0
 set CLASSPATH=%SPARK_EXAMPLES_JAR%;%CLASSPATH%
 
@@ -25,13 +25,13 @@ esac
 SCALA_VERSION=2.10
 
 # Figure out where the Scala framework is installed
-FWDIR="$(cd `dirname $0`; pwd)"
+FWDIR="$(cd `dirname $0`/..; pwd)"
 
 # Export this as SPARK_HOME
 export SPARK_HOME="$FWDIR"
 
 # Load environment variables from conf/spark-env.sh, if it exists
-if [ -e $FWDIR/conf/spark-env.sh ] ; then
+if [ -e "$FWDIR/conf/spark-env.sh" ] ; then
   . $FWDIR/conf/spark-env.sh
 fi
 
@@ -92,7 +92,7 @@ JAVA_OPTS="$OUR_JAVA_OPTS"
 JAVA_OPTS="$JAVA_OPTS -Djava.library.path=$SPARK_LIBRARY_PATH"
 JAVA_OPTS="$JAVA_OPTS -Xms$SPARK_MEM -Xmx$SPARK_MEM"
 # Load extra JAVA_OPTS from conf/java-opts, if it exists
-if [ -e $FWDIR/conf/java-opts ] ; then
+if [ -e "$FWDIR/conf/java-opts" ] ; then
   JAVA_OPTS="$JAVA_OPTS `cat $FWDIR/conf/java-opts`"
 fi
 export JAVA_OPTS
@@ -104,7 +104,7 @@ if [ ! -f "$FWDIR/RELEASE" ]; then
   jars_list=$(ls "$FWDIR"/assembly/target/scala-$SCALA_VERSION/ | grep "spark-assembly.*hadoop.*.jar")
   if [ "$num_jars" -eq "0" ]; then
     echo "Failed to find Spark assembly in $FWDIR/assembly/target/scala-$SCALA_VERSION/" >&2
-    echo "You need to build Spark with 'sbt/sbt assembly' before running this program." >&2
+    echo "You need to build Spark with 'sbt assembly' before running this program." >&2
     exit 1
   fi
   if [ "$num_jars" -gt "1" ]; then
@@ -129,11 +129,16 @@ fi
 
 # Compute classpath using external script
 CLASSPATH=`$FWDIR/bin/compute-classpath.sh`
-CLASSPATH="$CLASSPATH:$SPARK_TOOLS_JAR"
 
+if [ "$1" == "org.apache.spark.tools.JavaAPICompletenessChecker" ]; then
+  CLASSPATH="$CLASSPATH:$SPARK_TOOLS_JAR"
+fi
+
 if $cygwin; then
   CLASSPATH=`cygpath -wp $CLASSPATH`
-  export SPARK_TOOLS_JAR=`cygpath -w $SPARK_TOOLS_JAR`
+  if [ "$1" == "org.apache.spark.tools.JavaAPICompletenessChecker" ]; then
+    export SPARK_TOOLS_JAR=`cygpath -w $SPARK_TOOLS_JAR`
+  fi
 fi
 export CLASSPATH
 
@@ -20,7 +20,7 @@ rem
 set SCALA_VERSION=2.10
 
 rem Figure out where the Spark framework is installed
-set FWDIR=%~dp0
+set FWDIR=%~dp0..\
 
 rem Export this as SPARK_HOME
 set SPARK_HOME=%FWDIR%
 
@@ -73,7 +73,7 @@ for %%d in ("%TOOLS_DIR%\target\scala-%SCALA_VERSION%\spark-tools*assembly*.jar"
 
 rem Compute classpath using external script
 set DONT_PRINT_CLASSPATH=1
-call "%FWDIR%bin\compute-classpath.cmd"
+call "%FWDIR%sbin\compute-classpath.cmd"
 set DONT_PRINT_CLASSPATH=0
 set CLASSPATH=%CLASSPATH%;%SPARK_TOOLS_JAR%
 
@@ -32,7 +32,7 @@ esac
 # Enter posix mode for bash
 set -o posix
 
-FWDIR="`dirname $0`"
+FWDIR="$(cd `dirname $0`/..; pwd)"
 
 for o in "$@"; do
   if [ "$1" = "-c" -o "$1" = "--cores" ]; then
@@ -90,10 +90,10 @@ if $cygwin; then
     # "Backspace sends ^H" setting in "Keys" section of the Mintty options
     # (see https://github.com/sbt/sbt/issues/562).
     stty -icanon min 1 -echo > /dev/null 2>&1
-    $FWDIR/spark-class -Djline.terminal=unix $OPTIONS org.apache.spark.repl.Main "$@"
+    $FWDIR/bin/spark-class -Djline.terminal=unix $OPTIONS org.apache.spark.repl.Main "$@"
     stty icanon echo > /dev/null 2>&1
 else
-    $FWDIR/spark-class $OPTIONS org.apache.spark.repl.Main "$@"
+    $FWDIR/bin/spark-class $OPTIONS org.apache.spark.repl.Main "$@"
 fi
 
 # record the exit status lest it be overwritten:
@@ -17,6 +17,7 @@ rem See the License for the specific language governing permissions and
 rem limitations under the License.
 rem
 
-set FWDIR=%~dp0
+rem Find the path of sbin
+set SBIN=%~dp0..\sbin\
 
-cmd /V /E /C %FWDIR%spark-class2.cmd org.apache.spark.repl.Main %*
+cmd /V /E /C %SBIN%spark-class2.cmd org.apache.spark.repl.Main %*
core/pom.xml (422 changes)
|
@ -17,215 +17,219 @@
|
|||
-->
|
||||
|
||||
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
|
||||
<modelVersion>4.0.0</modelVersion>
|
||||
<parent>
|
||||
<modelVersion>4.0.0</modelVersion>
|
||||
<parent>
|
||||
<groupId>org.apache.spark</groupId>
|
||||
<artifactId>spark-parent</artifactId>
|
||||
<version>0.9.0-incubating-SNAPSHOT</version>
|
||||
<relativePath>../pom.xml</relativePath>
|
||||
</parent>
|
||||
|
||||
<groupId>org.apache.spark</groupId>
|
||||
<artifactId>spark-parent</artifactId>
|
||||
<version>0.9.0-incubating-SNAPSHOT</version>
|
||||
<relativePath>../pom.xml</relativePath>
|
||||
</parent>
|
||||
<artifactId>spark-core_2.10</artifactId>
|
||||
<packaging>jar</packaging>
|
||||
<name>Spark Project Core</name>
|
||||
<url>http://spark.incubator.apache.org/</url>
|
||||
|
||||
<groupId>org.apache.spark</groupId>
|
||||
<artifactId>spark-core_2.10</artifactId>
|
||||
<packaging>jar</packaging>
|
||||
<name>Spark Project Core</name>
|
||||
<url>http://spark.incubator.apache.org/</url>
|
||||
|
||||
<dependencies>
|
||||
<dependency>
|
||||
<groupId>org.apache.hadoop</groupId>
|
||||
<artifactId>hadoop-client</artifactId>
|
||||
</dependency>
|
||||
<dependency>
|
||||
<groupId>net.java.dev.jets3t</groupId>
|
||||
<artifactId>jets3t</artifactId>
|
||||
</dependency>
|
||||
<dependency>
|
||||
<groupId>org.apache.avro</groupId>
|
||||
<artifactId>avro</artifactId>
|
||||
</dependency>
|
||||
<dependency>
|
||||
<groupId>org.apache.avro</groupId>
|
||||
<artifactId>avro-ipc</artifactId>
|
||||
</dependency>
|
||||
<dependency>
|
||||
<groupId>org.apache.zookeeper</groupId>
|
||||
<artifactId>zookeeper</artifactId>
|
||||
</dependency>
|
||||
<dependency>
|
||||
<groupId>org.eclipse.jetty</groupId>
|
||||
<artifactId>jetty-server</artifactId>
|
||||
</dependency>
|
||||
<dependency>
|
||||
<groupId>com.google.guava</groupId>
|
||||
<artifactId>guava</artifactId>
|
||||
</dependency>
|
||||
<dependency>
|
||||
<groupId>com.google.code.findbugs</groupId>
|
||||
<artifactId>jsr305</artifactId>
|
||||
</dependency>
|
||||
<dependency>
|
||||
<groupId>org.slf4j</groupId>
|
||||
<artifactId>slf4j-api</artifactId>
|
||||
</dependency>
|
||||
<dependency>
|
||||
<groupId>com.ning</groupId>
|
||||
<artifactId>compress-lzf</artifactId>
|
||||
</dependency>
|
||||
<dependency>
|
||||
<groupId>org.xerial.snappy</groupId>
|
||||
<artifactId>snappy-java</artifactId>
|
||||
</dependency>
|
||||
<dependency>
|
||||
<groupId>org.ow2.asm</groupId>
|
||||
<artifactId>asm</artifactId>
|
||||
</dependency>
|
||||
<dependency>
|
||||
<groupId>com.twitter</groupId>
|
||||
<artifactId>chill_${scala.binary.version}</artifactId>
|
||||
<version>0.3.1</version>
|
||||
</dependency>
|
||||
<dependency>
|
||||
<groupId>com.twitter</groupId>
|
||||
<artifactId>chill-java</artifactId>
|
||||
<version>0.3.1</version>
|
||||
</dependency>
|
||||
<dependency>
|
||||
<groupId>${akka.group}</groupId>
|
||||
<artifactId>akka-remote_${scala.binary.version}</artifactId>
|
||||
</dependency>
|
||||
<dependency>
|
||||
<groupId>${akka.group}</groupId>
|
||||
<artifactId>akka-slf4j_${scala.binary.version}</artifactId>
|
||||
</dependency>
|
||||
<dependency>
|
||||
<groupId>org.scala-lang</groupId>
|
||||
<artifactId>scala-library</artifactId>
|
||||
</dependency>
|
||||
<dependency>
|
||||
<groupId>net.liftweb</groupId>
|
||||
<artifactId>lift-json_${scala.binary.version}</artifactId>
|
||||
</dependency>
|
||||
<dependency>
|
||||
<groupId>it.unimi.dsi</groupId>
|
||||
<artifactId>fastutil</artifactId>
|
||||
</dependency>
|
||||
<dependency>
|
||||
<groupId>colt</groupId>
|
||||
<artifactId>colt</artifactId>
|
||||
</dependency>
|
||||
<dependency>
|
||||
<groupId>org.apache.mesos</groupId>
|
||||
<artifactId>mesos</artifactId>
|
||||
</dependency>
|
||||
<dependency>
|
||||
<groupId>io.netty</groupId>
|
||||
<artifactId>netty-all</artifactId>
|
||||
</dependency>
|
||||
<dependency>
|
||||
<groupId>log4j</groupId>
|
||||
<artifactId>log4j</artifactId>
|
||||
</dependency>
|
||||
<dependency>
|
||||
<groupId>com.codahale.metrics</groupId>
|
||||
<artifactId>metrics-core</artifactId>
|
||||
</dependency>
|
||||
<dependency>
|
||||
<groupId>com.codahale.metrics</groupId>
|
||||
<artifactId>metrics-jvm</artifactId>
|
||||
</dependency>
|
||||
<dependency>
|
||||
<groupId>com.codahale.metrics</groupId>
|
||||
<artifactId>metrics-json</artifactId>
|
||||
</dependency>
|
||||
<dependency>
|
||||
<groupId>com.codahale.metrics</groupId>
|
||||
<artifactId>metrics-ganglia</artifactId>
|
||||
</dependency>
|
||||
<dependency>
|
||||
<groupId>com.codahale.metrics</groupId>
|
||||
<artifactId>metrics-graphite</artifactId>
|
||||
</dependency>
|
||||
<dependency>
|
||||
<groupId>org.apache.derby</groupId>
|
||||
<artifactId>derby</artifactId>
|
||||
<scope>test</scope>
|
||||
</dependency>
|
||||
<dependency>
|
||||
<groupId>commons-io</groupId>
|
||||
<artifactId>commons-io</artifactId>
|
||||
<scope>test</scope>
|
||||
</dependency>
|
||||
<dependency>
|
||||
<groupId>org.scalatest</groupId>
|
||||
<artifactId>scalatest_${scala.binary.version}</artifactId>
|
||||
<scope>test</scope>
|
||||
</dependency>
|
||||
<dependency>
|
||||
<groupId>org.scalacheck</groupId>
|
||||
<artifactId>scalacheck_${scala.binary.version}</artifactId>
|
||||
<scope>test</scope>
|
||||
</dependency>
|
||||
<dependency>
|
||||
<groupId>org.easymock</groupId>
|
||||
<artifactId>easymock</artifactId>
|
||||
<scope>test</scope>
|
||||
</dependency>
|
||||
<dependency>
|
||||
<groupId>com.novocode</groupId>
|
||||
<artifactId>junit-interface</artifactId>
|
||||
<scope>test</scope>
|
||||
</dependency>
|
||||
<dependency>
|
||||
<groupId>org.slf4j</groupId>
|
||||
<artifactId>slf4j-log4j12</artifactId>
|
||||
<scope>test</scope>
|
||||
</dependency>
|
||||
</dependencies>
|
||||
<build>
|
||||
<outputDirectory>target/scala-${scala.binary.version}/classes</outputDirectory>
|
||||
<testOutputDirectory>target/scala-${scala.binary.version}/test-classes</testOutputDirectory>
|
||||
<plugins>
|
||||
<plugin>
|
||||
<groupId>org.apache.maven.plugins</groupId>
|
||||
<artifactId>maven-antrun-plugin</artifactId>
|
||||
<executions>
|
||||
<execution>
|
||||
<phase>test</phase>
|
||||
<goals>
|
||||
<goal>run</goal>
|
||||
</goals>
|
||||
<configuration>
|
||||
<exportAntProperties>true</exportAntProperties>
|
||||
<tasks>
|
||||
<property name="spark.classpath" refid="maven.test.classpath" />
|
||||
<property environment="env" />
|
||||
<fail message="Please set the SCALA_HOME (or SCALA_LIBRARY_PATH if scala is on the path) environment variables and retry.">
|
||||
<condition>
|
||||
<not>
|
||||
<or>
|
||||
<isset property="env.SCALA_HOME" />
|
||||
<isset property="env.SCALA_LIBRARY_PATH" />
|
||||
</or>
|
||||
</not>
|
||||
</condition>
|
||||
</fail>
|
||||
</tasks>
|
||||
</configuration>
|
||||
</execution>
|
||||
</executions>
|
||||
</plugin>
|
||||
<plugin>
|
||||
<groupId>org.scalatest</groupId>
|
||||
<artifactId>scalatest-maven-plugin</artifactId>
|
||||
<configuration>
|
||||
<environmentVariables>
|
||||
<SPARK_HOME>${basedir}/..</SPARK_HOME>
|
||||
<SPARK_TESTING>1</SPARK_TESTING>
|
||||
<SPARK_CLASSPATH>${spark.classpath}</SPARK_CLASSPATH>
|
||||
</environmentVariables>
|
||||
</configuration>
|
||||
</plugin>
|
||||
</plugins>
|
||||
</build>
|
||||
<dependencies>
|
||||
<dependency>
|
||||
<groupId>org.apache.hadoop</groupId>
|
||||
<artifactId>hadoop-client</artifactId>
|
||||
</dependency>
|
||||
<dependency>
|
||||
<groupId>net.java.dev.jets3t</groupId>
|
||||
<artifactId>jets3t</artifactId>
|
||||
</dependency>
|
||||
<dependency>
|
||||
<groupId>org.apache.avro</groupId>
|
||||
<artifactId>avro</artifactId>
|
||||
</dependency>
|
||||
<dependency>
|
||||
<groupId>org.apache.avro</groupId>
|
||||
<artifactId>avro-ipc</artifactId>
|
||||
</dependency>
|
||||
<dependency>
|
||||
<groupId>org.apache.zookeeper</groupId>
|
||||
<artifactId>zookeeper</artifactId>
|
||||
</dependency>
|
||||
<dependency>
|
||||
<groupId>org.eclipse.jetty</groupId>
|
||||
<artifactId>jetty-server</artifactId>
|
||||
</dependency>
|
||||
<dependency>
|
||||
<groupId>com.google.guava</groupId>
|
||||
<artifactId>guava</artifactId>
|
||||
</dependency>
|
||||
<dependency>
|
||||
<groupId>com.google.code.findbugs</groupId>
|
||||
<artifactId>jsr305</artifactId>
|
||||
</dependency>
|
||||
<dependency>
|
||||
<groupId>org.slf4j</groupId>
|
||||
<artifactId>slf4j-api</artifactId>
|
||||
</dependency>
|
||||
<dependency>
|
||||
<groupId>com.ning</groupId>
|
||||
<artifactId>compress-lzf</artifactId>
|
||||
</dependency>
|
||||
<dependency>
|
||||
<groupId>org.xerial.snappy</groupId>
|
||||
<artifactId>snappy-java</artifactId>
|
||||
</dependency>
|
||||
<dependency>
|
||||
<groupId>org.ow2.asm</groupId>
|
||||
<artifactId>asm</artifactId>
|
||||
</dependency>
|
||||
<dependency>
|
||||
<groupId>com.twitter</groupId>
|
||||
<artifactId>chill_${scala.binary.version}</artifactId>
|
||||
<version>0.3.1</version>
|
||||
</dependency>
|
||||
<dependency>
|
||||
<groupId>com.twitter</groupId>
|
||||
<artifactId>chill-java</artifactId>
|
||||
<version>0.3.1</version>
|
||||
</dependency>
|
||||
<dependency>
|
||||
<groupId>${akka.group}</groupId>
|
||||
<artifactId>akka-remote_${scala.binary.version}</artifactId>
|
||||
</dependency>
|
||||
<dependency>
|
||||
<groupId>${akka.group}</groupId>
|
||||
<artifactId>akka-slf4j_${scala.binary.version}</artifactId>
|
||||
</dependency>
|
||||
<dependency>
|
||||
<groupId>org.scala-lang</groupId>
|
||||
<artifactId>scala-library</artifactId>
|
||||
</dependency>
|
||||
<dependency>
|
||||
<groupId>net.liftweb</groupId>
|
||||
<artifactId>lift-json_${scala.binary.version}</artifactId>
|
||||
</dependency>
|
||||
<dependency>
|
||||
<groupId>it.unimi.dsi</groupId>
|
||||
<artifactId>fastutil</artifactId>
|
||||
</dependency>
|
||||
<dependency>
|
||||
<groupId>colt</groupId>
|
||||
<artifactId>colt</artifactId>
|
||||
</dependency>
|
||||
<dependency>
|
||||
<groupId>org.apache.mesos</groupId>
|
||||
<artifactId>mesos</artifactId>
|
||||
</dependency>
|
||||
<dependency>
|
||||
<groupId>io.netty</groupId>
|
||||
<artifactId>netty-all</artifactId>
|
||||
</dependency>
|
||||
<dependency>
|
||||
<groupId>log4j</groupId>
|
||||
<artifactId>log4j</artifactId>
|
||||
</dependency>
|
||||
<dependency>
|
||||
<groupId>com.clearspring.analytics</groupId>
|
||||
<artifactId>stream</artifactId>
|
||||
</dependency>
|
||||
<dependency>
|
||||
<groupId>com.codahale.metrics</groupId>
|
||||
<artifactId>metrics-core</artifactId>
|
||||
</dependency>
|
||||
<dependency>
|
||||
<groupId>com.codahale.metrics</groupId>
|
||||
<artifactId>metrics-jvm</artifactId>
|
||||
</dependency>
|
||||
<dependency>
|
||||
<groupId>com.codahale.metrics</groupId>
|
||||
<artifactId>metrics-json</artifactId>
|
||||
</dependency>
|
||||
<dependency>
|
||||
<groupId>com.codahale.metrics</groupId>
|
||||
<artifactId>metrics-ganglia</artifactId>
|
||||
</dependency>
|
||||
<dependency>
|
||||
<groupId>com.codahale.metrics</groupId>
|
||||
<artifactId>metrics-graphite</artifactId>
|
||||
</dependency>
|
||||
<dependency>
|
||||
<groupId>org.apache.derby</groupId>
|
||||
<artifactId>derby</artifactId>
|
||||
<scope>test</scope>
|
||||
</dependency>
|
||||
<dependency>
|
||||
<groupId>commons-io</groupId>
|
||||
<artifactId>commons-io</artifactId>
|
||||
<scope>test</scope>
|
||||
</dependency>
|
||||
<dependency>
|
||||
<groupId>org.scalatest</groupId>
|
||||
<artifactId>scalatest_${scala.binary.version}</artifactId>
|
||||
<scope>test</scope>
|
||||
</dependency>
|
||||
<dependency>
|
||||
<groupId>org.scalacheck</groupId>
|
||||
<artifactId>scalacheck_${scala.binary.version}</artifactId>
|
||||
<scope>test</scope>
|
||||
</dependency>
|
||||
<dependency>
|
||||
<groupId>org.easymock</groupId>
|
||||
<artifactId>easymock</artifactId>
|
||||
<scope>test</scope>
|
||||
</dependency>
|
||||
<dependency>
|
||||
<groupId>com.novocode</groupId>
|
||||
<artifactId>junit-interface</artifactId>
|
||||
<scope>test</scope>
|
||||
</dependency>
|
||||
<dependency>
|
||||
<groupId>org.slf4j</groupId>
|
||||
<artifactId>slf4j-log4j12</artifactId>
|
||||
<scope>test</scope>
|
||||
</dependency>
|
||||
</dependencies>
|
||||
<build>
|
||||
<outputDirectory>target/scala-${scala.binary.version}/classes</outputDirectory>
|
||||
<testOutputDirectory>target/scala-${scala.binary.version}/test-classes</testOutputDirectory>
|
||||
<plugins>
|
||||
<plugin>
|
||||
<groupId>org.apache.maven.plugins</groupId>
|
||||
<artifactId>maven-antrun-plugin</artifactId>
|
||||
<executions>
|
||||
<execution>
|
||||
<phase>test</phase>
|
||||
<goals>
|
||||
<goal>run</goal>
|
||||
</goals>
|
||||
<configuration>
|
||||
<exportAntProperties>true</exportAntProperties>
|
||||
<tasks>
|
||||
<property name="spark.classpath" refid="maven.test.classpath" />
|
||||
<property environment="env" />
|
||||
<fail message="Please set the SCALA_HOME (or SCALA_LIBRARY_PATH if scala is on the path) environment variables and retry.">
|
||||
<condition>
|
||||
<not>
|
||||
<or>
|
||||
<isset property="env.SCALA_HOME" />
|
||||
<isset property="env.SCALA_LIBRARY_PATH" />
|
||||
</or>
|
||||
</not>
|
||||
</condition>
|
||||
</fail>
|
||||
</tasks>
|
||||
</configuration>
|
||||
</execution>
|
||||
</executions>
|
||||
</plugin>
|
||||
<plugin>
|
||||
<groupId>org.scalatest</groupId>
|
||||
<artifactId>scalatest-maven-plugin</artifactId>
|
||||
<configuration>
|
||||
<environmentVariables>
|
||||
<SPARK_HOME>${basedir}/..</SPARK_HOME>
|
||||
<SPARK_TESTING>1</SPARK_TESTING>
|
||||
<SPARK_CLASSPATH>${spark.classpath}</SPARK_CLASSPATH>
|
||||
</environmentVariables>
|
||||
</configuration>
|
||||
</plugin>
|
||||
</plugins>
|
||||
</build>
|
||||
</project>
|
||||
|
|
|
@ -20,19 +20,24 @@ package org.apache.spark.network.netty;
|
|||
import io.netty.bootstrap.Bootstrap;
|
||||
import io.netty.channel.Channel;
|
||||
import io.netty.channel.ChannelOption;
|
||||
import io.netty.channel.EventLoopGroup;
|
||||
import io.netty.channel.oio.OioEventLoopGroup;
|
||||
import io.netty.channel.socket.oio.OioSocketChannel;
|
||||
|
||||
import org.slf4j.Logger;
|
||||
import org.slf4j.LoggerFactory;
|
||||
|
||||
import java.util.concurrent.TimeUnit;
|
||||
|
||||
class FileClient {
|
||||
|
||||
private Logger LOG = LoggerFactory.getLogger(this.getClass().getName());
|
||||
private FileClientHandler handler = null;
|
||||
private final FileClientHandler handler;
|
||||
private Channel channel = null;
|
||||
private Bootstrap bootstrap = null;
|
||||
private int connectTimeout = 60*1000; // 1 min
|
||||
private EventLoopGroup group = null;
|
||||
private final int connectTimeout;
|
||||
private final int sendTimeout = 60; // 1 min
|
||||
|
||||
public FileClient(FileClientHandler handler, int connectTimeout) {
|
||||
this.handler = handler;
|
||||
|
@ -40,8 +45,9 @@ class FileClient {
|
|||
}
|
||||
|
||||
public void init() {
|
||||
group = new OioEventLoopGroup();
|
||||
bootstrap = new Bootstrap();
|
||||
bootstrap.group(new OioEventLoopGroup())
|
||||
bootstrap.group(group)
|
||||
.channel(OioSocketChannel.class)
|
||||
.option(ChannelOption.SO_KEEPALIVE, true)
|
||||
.option(ChannelOption.TCP_NODELAY, true)
|
||||
|
@ -56,6 +62,7 @@ class FileClient {
|
|||
// ChannelFuture cf = channel.closeFuture();
|
||||
//cf.addListener(new ChannelCloseListener(this));
|
||||
} catch (InterruptedException e) {
|
||||
LOG.warn("FileClient interrupted while trying to connect", e);
|
||||
close();
|
||||
}
|
||||
}
|
||||
|
@ -71,16 +78,21 @@ class FileClient {
|
|||
public void sendRequest(String file) {
|
||||
//assert(file == null);
|
||||
//assert(channel == null);
|
||||
channel.write(file + "\r\n");
|
||||
try {
|
||||
// Should be able to send the message to network link channel.
|
||||
boolean bSent = channel.writeAndFlush(file + "\r\n").await(sendTimeout, TimeUnit.SECONDS);
|
||||
if (!bSent) {
|
||||
throw new RuntimeException("Failed to send");
|
||||
}
|
||||
} catch (InterruptedException e) {
|
||||
LOG.error("Error", e);
|
||||
}
|
||||
}
|
||||
|
||||
public void close() {
|
||||
if(channel != null) {
|
||||
channel.close();
|
||||
channel = null;
|
||||
}
|
||||
if ( bootstrap!=null) {
|
||||
bootstrap.shutdown();
|
||||
if (group != null) {
|
||||
group.shutdownGracefully();
|
||||
group = null;
|
||||
bootstrap = null;
|
||||
}
|
||||
}
|
||||
|
|
|
@ -17,15 +17,13 @@
|
|||
|
||||
package org.apache.spark.network.netty;
|
||||
|
||||
import io.netty.buffer.BufType;
|
||||
import io.netty.channel.ChannelInitializer;
|
||||
import io.netty.channel.socket.SocketChannel;
|
||||
import io.netty.handler.codec.string.StringEncoder;
|
||||
|
||||
|
||||
class FileClientChannelInitializer extends ChannelInitializer<SocketChannel> {
|
||||
|
||||
private FileClientHandler fhandler;
|
||||
private final FileClientHandler fhandler;
|
||||
|
||||
public FileClientChannelInitializer(FileClientHandler handler) {
|
||||
fhandler = handler;
|
||||
|
@ -35,7 +33,7 @@ class FileClientChannelInitializer extends ChannelInitializer<SocketChannel> {
|
|||
public void initChannel(SocketChannel channel) {
|
||||
// file no more than 2G
|
||||
channel.pipeline()
|
||||
.addLast("encoder", new StringEncoder(BufType.BYTE))
|
||||
.addLast("encoder", new StringEncoder())
|
||||
.addLast("handler", fhandler);
|
||||
}
|
||||
}
|
||||
|
|
|
@ -19,11 +19,11 @@ package org.apache.spark.network.netty;
|
|||
|
||||
import io.netty.buffer.ByteBuf;
|
||||
import io.netty.channel.ChannelHandlerContext;
|
||||
import io.netty.channel.ChannelInboundByteHandlerAdapter;
|
||||
import io.netty.channel.SimpleChannelInboundHandler;
|
||||
|
||||
import org.apache.spark.storage.BlockId;
|
||||
|
||||
abstract class FileClientHandler extends ChannelInboundByteHandlerAdapter {
|
||||
abstract class FileClientHandler extends SimpleChannelInboundHandler<ByteBuf> {
|
||||
|
||||
private FileHeader currentHeader = null;
|
||||
|
||||
|
@ -37,13 +37,7 @@ abstract class FileClientHandler extends ChannelInboundByteHandlerAdapter {
|
|||
public abstract void handleError(BlockId blockId);
|
||||
|
||||
@Override
|
||||
public ByteBuf newInboundBuffer(ChannelHandlerContext ctx) {
|
||||
// Use direct buffer if possible.
|
||||
return ctx.alloc().ioBuffer();
|
||||
}
|
||||
|
||||
@Override
|
||||
public void inboundBufferUpdated(ChannelHandlerContext ctx, ByteBuf in) {
|
||||
public void channelRead0(ChannelHandlerContext ctx, ByteBuf in) {
|
||||
// get header
|
||||
if (currentHeader == null && in.readableBytes() >= FileHeader.HEADER_SIZE()) {
|
||||
currentHeader = FileHeader.create(in.readBytes(FileHeader.HEADER_SIZE()));
|
||||
|
|
|
@ -22,13 +22,12 @@ import java.net.InetSocketAddress;
|
|||
import io.netty.bootstrap.ServerBootstrap;
|
||||
import io.netty.channel.ChannelFuture;
|
||||
import io.netty.channel.ChannelOption;
|
||||
import io.netty.channel.EventLoopGroup;
|
||||
import io.netty.channel.oio.OioEventLoopGroup;
|
||||
import io.netty.channel.socket.oio.OioServerSocketChannel;
|
||||
|
||||
import org.slf4j.Logger;
|
||||
import org.slf4j.LoggerFactory;
|
||||
|
||||
|
||||
/**
|
||||
* Server that accept the path of a file an echo back its content.
|
||||
*/
|
||||
|
@ -36,7 +35,8 @@ class FileServer {
|
|||
|
||||
private Logger LOG = LoggerFactory.getLogger(this.getClass().getName());
|
||||
|
||||
private ServerBootstrap bootstrap = null;
|
||||
private EventLoopGroup bossGroup = null;
|
||||
private EventLoopGroup workerGroup = null;
|
||||
private ChannelFuture channelFuture = null;
|
||||
private int port = 0;
|
||||
private Thread blockingThread = null;
|
||||
|
@ -45,8 +45,11 @@ class FileServer {
|
|||
InetSocketAddress addr = new InetSocketAddress(port);
|
||||
|
||||
// Configure the server.
|
||||
bootstrap = new ServerBootstrap();
|
||||
bootstrap.group(new OioEventLoopGroup(), new OioEventLoopGroup())
|
||||
bossGroup = new OioEventLoopGroup();
|
||||
workerGroup = new OioEventLoopGroup();
|
||||
|
||||
ServerBootstrap bootstrap = new ServerBootstrap();
|
||||
bootstrap.group(bossGroup, workerGroup)
|
||||
.channel(OioServerSocketChannel.class)
|
||||
.option(ChannelOption.SO_BACKLOG, 100)
|
||||
.option(ChannelOption.SO_RCVBUF, 1500)
|
||||
|
@ -89,13 +92,19 @@ class FileServer {
|
|||
public void stop() {
|
||||
// Close the bound channel.
|
||||
if (channelFuture != null) {
|
||||
channelFuture.channel().close();
|
||||
channelFuture.channel().close().awaitUninterruptibly();
|
||||
channelFuture = null;
|
||||
}
|
||||
// Shutdown bootstrap.
|
||||
if (bootstrap != null) {
|
||||
bootstrap.shutdown();
|
||||
bootstrap = null;
|
||||
|
||||
// Shutdown event groups
|
||||
if (bossGroup != null) {
|
||||
bossGroup.shutdownGracefully();
|
||||
bossGroup = null;
|
||||
}
|
||||
|
||||
if (workerGroup != null) {
|
||||
workerGroup.shutdownGracefully();
|
||||
workerGroup = null;
|
||||
}
|
||||
// TODO: Shutdown all accepted channels as well ?
|
||||
}
|
||||
|
|
|
@ -23,7 +23,6 @@ import io.netty.handler.codec.DelimiterBasedFrameDecoder;
|
|||
import io.netty.handler.codec.Delimiters;
|
||||
import io.netty.handler.codec.string.StringDecoder;
|
||||
|
||||
|
||||
class FileServerChannelInitializer extends ChannelInitializer<SocketChannel> {
|
||||
|
||||
PathResolver pResolver;
|
||||
|
@ -36,7 +35,7 @@ class FileServerChannelInitializer extends ChannelInitializer<SocketChannel> {
|
|||
public void initChannel(SocketChannel channel) {
|
||||
channel.pipeline()
|
||||
.addLast("framer", new DelimiterBasedFrameDecoder(8192, Delimiters.lineDelimiter()))
|
||||
.addLast("strDecoder", new StringDecoder())
|
||||
.addLast("stringDecoder", new StringDecoder())
|
||||
.addLast("handler", new FileServerHandler(pResolver));
|
||||
}
|
||||
}
|
||||
|
|
|
@ -21,22 +21,26 @@ import java.io.File;
|
|||
import java.io.FileInputStream;
|
||||
|
||||
import io.netty.channel.ChannelHandlerContext;
|
||||
import io.netty.channel.ChannelInboundMessageHandlerAdapter;
|
||||
import io.netty.channel.SimpleChannelInboundHandler;
|
||||
import io.netty.channel.DefaultFileRegion;
|
||||
|
||||
import org.apache.spark.storage.BlockId;
|
||||
import org.apache.spark.storage.FileSegment;
|
||||
import org.slf4j.Logger;
|
||||
import org.slf4j.LoggerFactory;
|
||||
|
||||
class FileServerHandler extends ChannelInboundMessageHandlerAdapter<String> {
|
||||
class FileServerHandler extends SimpleChannelInboundHandler<String> {
|
||||
|
||||
PathResolver pResolver;
|
||||
private Logger LOG = LoggerFactory.getLogger(this.getClass().getName());
|
||||
|
||||
private final PathResolver pResolver;
|
||||
|
||||
public FileServerHandler(PathResolver pResolver){
|
||||
this.pResolver = pResolver;
|
||||
}
|
||||
|
||||
@Override
|
||||
public void messageReceived(ChannelHandlerContext ctx, String blockIdString) {
|
||||
public void channelRead0(ChannelHandlerContext ctx, String blockIdString) {
|
||||
BlockId blockId = BlockId.apply(blockIdString);
|
||||
FileSegment fileSegment = pResolver.getBlockLocation(blockId);
|
||||
// if getBlockLocation returns null, close the channel
|
||||
|
@ -60,10 +64,10 @@ class FileServerHandler extends ChannelInboundMessageHandlerAdapter<String> {
|
|||
int len = new Long(length).intValue();
|
||||
ctx.write((new FileHeader(len, blockId)).buffer());
|
||||
try {
|
||||
ctx.sendFile(new DefaultFileRegion(new FileInputStream(file)
|
||||
ctx.write(new DefaultFileRegion(new FileInputStream(file)
|
||||
.getChannel(), fileSegment.offset(), fileSegment.length()));
|
||||
} catch (Exception e) {
|
||||
e.printStackTrace();
|
||||
LOG.error("Exception: ", e);
|
||||
}
|
||||
} else {
|
||||
ctx.write(new FileHeader(0, blockId).buffer());
|
||||
|
@ -73,7 +77,7 @@ class FileServerHandler extends ChannelInboundMessageHandlerAdapter<String> {
|
|||
|
||||
@Override
|
||||
public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
|
||||
cause.printStackTrace();
|
||||
LOG.error("Exception: ", cause);
|
||||
ctx.close();
|
||||
}
|
||||
}
|
||||
|
|
|
(new file)
@@ -0,0 +1,8 @@
+# Set everything to be logged to the console
+log4j.rootCategory=INFO, console
+log4j.appender.console=org.apache.log4j.ConsoleAppender
+log4j.appender.console.layout=org.apache.log4j.PatternLayout
+log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n
+
+# Ignore messages below warning level from Jetty, because it's a bit verbose
+log4j.logger.org.eclipse.jetty=WARN
@ -41,7 +41,7 @@ class Accumulable[R, T] (
|
|||
@transient initialValue: R,
|
||||
param: AccumulableParam[R, T])
|
||||
extends Serializable {
|
||||
|
||||
|
||||
val id = Accumulators.newId
|
||||
@transient private var value_ = initialValue // Current value on master
|
||||
val zero = param.zero(initialValue) // Zero value to be passed to workers
|
||||
|
@ -113,7 +113,7 @@ class Accumulable[R, T] (
|
|||
def setValue(newValue: R) {
|
||||
this.value = newValue
|
||||
}
|
||||
|
||||
|
||||
// Called by Java when deserializing an object
|
||||
private def readObject(in: ObjectInputStream) {
|
||||
in.defaultReadObject()
|
||||
|
@ -177,7 +177,7 @@ class GrowableAccumulableParam[R <% Growable[T] with TraversableOnce[T] with Ser
|
|||
def zero(initialValue: R): R = {
|
||||
// We need to clone initialValue, but it's hard to specify that R should also be Cloneable.
|
||||
// Instead we'll serialize it to a buffer and load it back.
|
||||
val ser = new JavaSerializer().newInstance()
|
||||
val ser = new JavaSerializer(new SparkConf(false)).newInstance()
|
||||
val copy = ser.deserialize[R](ser.serialize(initialValue))
|
||||
copy.clear() // In case it contained stuff
|
||||
copy
|
||||
|
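The zero() change above keeps the clone-by-serialization trick while passing a SparkConf to the serializer. A standalone sketch of the same idea, using plain Java serialization rather than Spark's JavaSerializer and assuming the value is java.io.Serializable:

// Illustrative only: produce an independent copy of a value by round-tripping it
// through serialization, mirroring what zero() does with Spark's serializer.
import java.io._

def cloneBySerialization[T <: Serializable](value: T): T = {
  val buffer = new ByteArrayOutputStream()
  val out = new ObjectOutputStream(buffer)
  out.writeObject(value)                      // serialize the initial value
  out.close()
  val in = new ObjectInputStream(new ByteArrayInputStream(buffer.toByteArray))
  in.readObject().asInstanceOf[T]             // deserialize an independent copy
}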
@ -215,7 +215,7 @@ private object Accumulators {
|
|||
val originals = Map[Long, Accumulable[_, _]]()
|
||||
val localAccums = Map[Thread, Map[Long, Accumulable[_, _]]]()
|
||||
var lastId: Long = 0
|
||||
|
||||
|
||||
def newId: Long = synchronized {
|
||||
lastId += 1
|
||||
return lastId
|
||||
|
|
|
@ -46,6 +46,7 @@ private[spark] class HttpServer(resourceBase: File) extends Logging {
|
|||
if (server != null) {
|
||||
throw new ServerStateException("Server is already started")
|
||||
} else {
|
||||
logInfo("Starting HTTP Server")
|
||||
server = new Server()
|
||||
val connector = new SocketConnector
|
||||
connector.setMaxIdleTime(60*1000)
|
||||
|
|
|
@ -17,8 +17,8 @@
|
|||
|
||||
package org.apache.spark
|
||||
|
||||
import org.slf4j.Logger
|
||||
import org.slf4j.LoggerFactory
|
||||
import org.apache.log4j.{LogManager, PropertyConfigurator}
|
||||
import org.slf4j.{Logger, LoggerFactory}
|
||||
|
||||
/**
|
||||
* Utility trait for classes that want to log data. Creates a SLF4J logger for the class and allows
|
||||
|
@ -33,6 +33,7 @@ trait Logging {
|
|||
// Method to get or create the logger for this object
|
||||
protected def log: Logger = {
|
||||
if (log_ == null) {
|
||||
initializeIfNecessary()
|
||||
var className = this.getClass.getName
|
||||
// Ignore trailing $'s in the class names for Scala objects
|
||||
if (className.endsWith("$")) {
|
||||
|
@ -89,7 +90,37 @@ trait Logging {
|
|||
log.isTraceEnabled
|
||||
}
|
||||
|
||||
// Method for ensuring that logging is initialized, to avoid having multiple
|
||||
// threads do it concurrently (as SLF4J initialization is not thread safe).
|
||||
protected def initLogging() { log }
|
||||
private def initializeIfNecessary() {
|
||||
if (!Logging.initialized) {
|
||||
Logging.initLock.synchronized {
|
||||
if (!Logging.initialized) {
|
||||
initializeLogging()
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
private def initializeLogging() {
|
||||
// If Log4j doesn't seem initialized, load a default properties file
|
||||
val log4jInitialized = LogManager.getRootLogger.getAllAppenders.hasMoreElements
|
||||
if (!log4jInitialized) {
|
||||
val defaultLogProps = "org/apache/spark/default-log4j.properties"
|
||||
val classLoader = this.getClass.getClassLoader
|
||||
Option(classLoader.getResource(defaultLogProps)) match {
|
||||
case Some(url) => PropertyConfigurator.configure(url)
|
||||
case None => System.err.println(s"Spark was unable to load $defaultLogProps")
|
||||
}
|
||||
log.info(s"Using Spark's default log4j profile: $defaultLogProps")
|
||||
}
|
||||
Logging.initialized = true
|
||||
|
||||
// Force a call into slf4j to initialize it. Avoids this happening from mutliple threads
|
||||
// and triggering this: http://mailman.qos.ch/pipermail/slf4j-dev/2010-April/002956.html
|
||||
log
|
||||
}
|
||||
}
|
||||
|
||||
object Logging {
|
||||
@volatile private var initialized = false
|
||||
val initLock = new Object()
|
||||
}
|
||||
|
|
|
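A minimal sketch of how the reworked Logging trait above gets used; the class and message here are invented, and logInfo is assumed to be one of the trait's level-specific helpers (it is used that way in the MapOutputTracker change below). The first logging call goes through initializeIfNecessary() and installs the bundled default log4j properties if log4j has no configuration yet:

import org.apache.spark.Logging

class BlockFetcher extends Logging {   // hypothetical user of the trait
  def fetch(id: String) {
    logInfo("Fetching block " + id)    // first call triggers the lazy log4j setup
  }
}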
@ -50,9 +50,9 @@ private[spark] class MapOutputTrackerMasterActor(tracker: MapOutputTrackerMaster
|
|||
}
|
||||
}
|
||||
|
||||
private[spark] class MapOutputTracker extends Logging {
|
||||
private[spark] class MapOutputTracker(conf: SparkConf) extends Logging {
|
||||
|
||||
private val timeout = AkkaUtils.askTimeout
|
||||
private val timeout = AkkaUtils.askTimeout(conf)
|
||||
|
||||
// Set to the MapOutputTrackerActor living on the driver
|
||||
var trackerActor: Either[ActorRef, ActorSelection] = _
|
||||
|
@ -65,7 +65,7 @@ private[spark] class MapOutputTracker extends Logging {
|
|||
protected val epochLock = new java.lang.Object
|
||||
|
||||
private val metadataCleaner =
|
||||
new MetadataCleaner(MetadataCleanerType.MAP_OUTPUT_TRACKER, this.cleanup)
|
||||
new MetadataCleaner(MetadataCleanerType.MAP_OUTPUT_TRACKER, this.cleanup, conf)
|
||||
|
||||
// Send a message to the trackerActor and get its result within a default timeout, or
|
||||
// throw a SparkException if this fails.
|
||||
|
@ -129,7 +129,7 @@ private[spark] class MapOutputTracker extends Logging {
|
|||
if (fetchedStatuses == null) {
|
||||
// We won the race to fetch the output locs; do so
|
||||
logInfo("Doing the fetch; tracker actor = " + trackerActor)
|
||||
val hostPort = Utils.localHostPort()
|
||||
val hostPort = Utils.localHostPort(conf)
|
||||
// This try-finally prevents hangs due to timeouts:
|
||||
try {
|
||||
val fetchedBytes =
|
||||
|
@ -192,7 +192,8 @@ private[spark] class MapOutputTracker extends Logging {
|
|||
}
|
||||
}
|
||||
|
||||
private[spark] class MapOutputTrackerMaster extends MapOutputTracker {
|
||||
private[spark] class MapOutputTrackerMaster(conf: SparkConf)
|
||||
extends MapOutputTracker(conf) {
|
||||
|
||||
// Cache a serialized version of the output statuses for each shuffle to send them out faster
|
||||
private var cacheEpoch = epoch
|
||||
|
|
|
@ -52,7 +52,7 @@ object Partitioner {
|
|||
for (r <- bySize if r.partitioner != None) {
|
||||
return r.partitioner.get
|
||||
}
|
||||
if (System.getProperty("spark.default.parallelism") != null) {
|
||||
if (rdd.context.conf.contains("spark.default.parallelism")) {
|
||||
return new HashPartitioner(rdd.context.defaultParallelism)
|
||||
} else {
|
||||
return new HashPartitioner(bySize.head.partitions.size)
|
||||
|
@ -90,7 +90,7 @@ class HashPartitioner(partitions: Int) extends Partitioner {
|
|||
class RangePartitioner[K <% Ordered[K]: ClassTag, V](
|
||||
partitions: Int,
|
||||
@transient rdd: RDD[_ <: Product2[K,V]],
|
||||
private val ascending: Boolean = true)
|
||||
private val ascending: Boolean = true)
|
||||
extends Partitioner {
|
||||
|
||||
// An array of upper bounds for the first (partitions - 1) partitions
|
||||
|
|
core/src/main/scala/org/apache/spark/SparkConf.scala (new file, 189 lines)
|
@ -0,0 +1,189 @@
|
|||
package org.apache.spark
|
||||
|
||||
import scala.collection.JavaConverters._
|
||||
import scala.collection.mutable.HashMap
|
||||
|
||||
import com.typesafe.config.ConfigFactory
|
||||
|
||||
/**
|
||||
* Configuration for a Spark application. Used to set various Spark parameters as key-value pairs.
|
||||
*
|
||||
* Most of the time, you would create a SparkConf object with `new SparkConf()`, which will load
|
||||
* values from both the `spark.*` Java system properties and any `spark.conf` on your application's
|
||||
* classpath (if it has one). In this case, system properties take priority over `spark.conf`, and
|
||||
* any parameters you set directly on the `SparkConf` object take priority over both of those.
|
||||
*
|
||||
* For unit tests, you can also call `new SparkConf(false)` to skip loading external settings and
|
||||
* get the same configuration no matter what is on the classpath.
|
||||
*
|
||||
* All setter methods in this class support chaining. For example, you can write
|
||||
* `new SparkConf().setMaster("local").setAppName("My app")`.
|
||||
*
|
||||
* Note that once a SparkConf object is passed to Spark, it is cloned and can no longer be modified
|
||||
* by the user. Spark does not support modifying the configuration at runtime.
|
||||
*
|
||||
* @param loadDefaults whether to load values from the system properties and classpath
|
||||
*/
|
||||
class SparkConf(loadDefaults: Boolean) extends Serializable with Cloneable {
|
||||
|
||||
/** Create a SparkConf that loads defaults from system properties and the classpath */
|
||||
def this() = this(true)
|
||||
|
||||
private val settings = new HashMap[String, String]()
|
||||
|
||||
if (loadDefaults) {
|
||||
ConfigFactory.invalidateCaches()
|
||||
val typesafeConfig = ConfigFactory.systemProperties()
|
||||
.withFallback(ConfigFactory.parseResources("spark.conf"))
|
||||
for (e <- typesafeConfig.entrySet().asScala if e.getKey.startsWith("spark.")) {
|
||||
settings(e.getKey) = e.getValue.unwrapped.toString
|
||||
}
|
||||
}
|
||||
|
||||
/** Set a configuration variable. */
|
||||
def set(key: String, value: String): SparkConf = {
|
||||
if (key == null) {
|
||||
throw new NullPointerException("null key")
|
||||
}
|
||||
if (value == null) {
|
||||
throw new NullPointerException("null value")
|
||||
}
|
||||
settings(key) = value
|
||||
this
|
||||
}
|
||||
|
||||
/**
|
||||
* The master URL to connect to, such as "local" to run locally with one thread, "local[4]" to
|
||||
* run locally with 4 cores, or "spark://master:7077" to run on a Spark standalone cluster.
|
||||
*/
|
||||
def setMaster(master: String): SparkConf = {
|
||||
set("spark.master", master)
|
||||
}
|
||||
|
||||
/** Set a name for your application. Shown in the Spark web UI. */
|
||||
def setAppName(name: String): SparkConf = {
|
||||
set("spark.app.name", name)
|
||||
}
|
||||
|
||||
/** Set JAR files to distribute to the cluster. */
|
||||
def setJars(jars: Seq[String]): SparkConf = {
|
||||
set("spark.jars", jars.mkString(","))
|
||||
}
|
||||
|
||||
/** Set JAR files to distribute to the cluster. (Java-friendly version.) */
|
||||
def setJars(jars: Array[String]): SparkConf = {
|
||||
setJars(jars.toSeq)
|
||||
}
|
||||
|
||||
/**
|
||||
* Set an environment variable to be used when launching executors for this application.
|
||||
* These variables are stored as properties of the form spark.executorEnv.VAR_NAME
|
||||
* (for example spark.executorEnv.PATH) but this method makes them easier to set.
|
||||
*/
|
||||
def setExecutorEnv(variable: String, value: String): SparkConf = {
|
||||
set("spark.executorEnv." + variable, value)
|
||||
}
|
||||
|
||||
/**
|
||||
* Set multiple environment variables to be used when launching executors.
|
||||
* These variables are stored as properties of the form spark.executorEnv.VAR_NAME
|
||||
* (for example spark.executorEnv.PATH) but this method makes them easier to set.
|
||||
*/
|
||||
def setExecutorEnv(variables: Seq[(String, String)]): SparkConf = {
|
||||
for ((k, v) <- variables) {
|
||||
setExecutorEnv(k, v)
|
||||
}
|
||||
this
|
||||
}
|
||||
|
||||
/**
|
||||
* Set multiple environment variables to be used when launching executors.
|
||||
* (Java-friendly version.)
|
||||
*/
|
||||
def setExecutorEnv(variables: Array[(String, String)]): SparkConf = {
|
||||
setExecutorEnv(variables.toSeq)
|
||||
}
|
||||
|
||||
/**
|
||||
* Set the location where Spark is installed on worker nodes.
|
||||
*/
|
||||
def setSparkHome(home: String): SparkConf = {
|
||||
set("spark.home", home)
|
||||
}
|
||||
|
||||
/** Set multiple parameters together */
|
||||
def setAll(settings: Traversable[(String, String)]) = {
|
||||
this.settings ++= settings
|
||||
this
|
||||
}
|
||||
|
||||
/** Set a parameter if it isn't already configured */
|
||||
def setIfMissing(key: String, value: String): SparkConf = {
|
||||
if (!settings.contains(key)) {
|
||||
settings(key) = value
|
||||
}
|
||||
this
|
||||
}
|
||||
|
||||
/** Remove a parameter from the configuration */
|
||||
def remove(key: String): SparkConf = {
|
||||
settings.remove(key)
|
||||
this
|
||||
}
|
||||
|
||||
/** Get a parameter; throws a NoSuchElementException if it's not set */
|
||||
def get(key: String): String = {
|
||||
settings.getOrElse(key, throw new NoSuchElementException(key))
|
||||
}
|
||||
|
||||
/** Get a parameter, falling back to a default if not set */
|
||||
def get(key: String, defaultValue: String): String = {
|
||||
settings.getOrElse(key, defaultValue)
|
||||
}
|
||||
|
||||
/** Get a parameter as an Option */
|
||||
def getOption(key: String): Option[String] = {
|
||||
settings.get(key)
|
||||
}
|
||||
|
||||
/** Get all parameters as a list of pairs */
|
||||
def getAll: Array[(String, String)] = settings.clone().toArray
|
||||
|
||||
/** Get a parameter as an integer, falling back to a default if not set */
|
||||
def getInt(key: String, defaultValue: Int): Int = {
|
||||
getOption(key).map(_.toInt).getOrElse(defaultValue)
|
||||
}
|
||||
|
||||
/** Get a parameter as a long, falling back to a default if not set */
|
||||
def getLong(key: String, defaultValue: Long): Long = {
|
||||
getOption(key).map(_.toLong).getOrElse(defaultValue)
|
||||
}
|
||||
|
||||
/** Get a parameter as a double, falling back to a default if not set */
|
||||
def getDouble(key: String, defaultValue: Double): Double = {
|
||||
getOption(key).map(_.toDouble).getOrElse(defaultValue)
|
||||
}
|
||||
|
||||
/** Get all executor environment variables set on this SparkConf */
|
||||
def getExecutorEnv: Seq[(String, String)] = {
|
||||
val prefix = "spark.executorEnv."
|
||||
getAll.filter{case (k, v) => k.startsWith(prefix)}
|
||||
.map{case (k, v) => (k.substring(prefix.length), v)}
|
||||
}
|
||||
|
||||
/** Does the configuration contain a given parameter? */
|
||||
def contains(key: String): Boolean = settings.contains(key)
|
||||
|
||||
/** Copy this object */
|
||||
override def clone: SparkConf = {
|
||||
new SparkConf(false).setAll(settings)
|
||||
}
|
||||
|
||||
/**
|
||||
* Return a string listing all keys and values, one per line. This is useful to print the
|
||||
* configuration out for debugging.
|
||||
*/
|
||||
def toDebugString: String = {
|
||||
settings.toArray.sorted.map{case (k, v) => k + "=" + v}.mkString("\n")
|
||||
}
|
||||
}
|
|
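A short usage sketch for the new SparkConf class above; the keys and values are only examples. Explicit set calls take priority over spark.* system properties, which in turn take priority over a spark.conf file on the classpath:

// Illustrative use of the SparkConf API defined above.
import org.apache.spark.SparkConf

val conf = new SparkConf()                       // loads spark.* system properties and spark.conf
  .setMaster("local[4]")
  .setAppName("ConfDemo")
  .set("spark.default.parallelism", "8")

conf.get("spark.app.name")                       // "ConfDemo"
conf.getInt("spark.default.parallelism", 2)      // 8
conf.contains("spark.nonexistent")               // false
println(conf.toDebugString)                      // one key=value per line, sorted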
@ -19,36 +19,23 @@ package org.apache.spark
|
|||
|
||||
import java.io._
|
||||
import java.net.URI
|
||||
import java.util.Properties
|
||||
import java.util.{UUID, Properties}
|
||||
import java.util.concurrent.atomic.AtomicInteger
|
||||
|
||||
import scala.collection.Map
|
||||
import scala.collection.{Map, Set}
|
||||
import scala.collection.generic.Growable
|
||||
import scala.collection.mutable.ArrayBuffer
|
||||
import scala.collection.mutable.HashMap
|
||||
|
||||
import scala.collection.mutable.{ArrayBuffer, HashMap}
|
||||
import scala.reflect.{ClassTag, classTag}
|
||||
|
||||
import org.apache.hadoop.conf.Configuration
|
||||
import org.apache.hadoop.fs.Path
|
||||
import org.apache.hadoop.io.ArrayWritable
|
||||
import org.apache.hadoop.io.BooleanWritable
|
||||
import org.apache.hadoop.io.BytesWritable
|
||||
import org.apache.hadoop.io.DoubleWritable
|
||||
import org.apache.hadoop.io.FloatWritable
|
||||
import org.apache.hadoop.io.IntWritable
|
||||
import org.apache.hadoop.io.LongWritable
|
||||
import org.apache.hadoop.io.NullWritable
|
||||
import org.apache.hadoop.io.Text
|
||||
import org.apache.hadoop.io.Writable
|
||||
import org.apache.hadoop.mapred.FileInputFormat
|
||||
import org.apache.hadoop.mapred.InputFormat
|
||||
import org.apache.hadoop.mapred.JobConf
|
||||
import org.apache.hadoop.mapred.SequenceFileInputFormat
|
||||
import org.apache.hadoop.mapred.TextInputFormat
|
||||
import org.apache.hadoop.mapreduce.{InputFormat => NewInputFormat}
|
||||
import org.apache.hadoop.mapreduce.{Job => NewHadoopJob}
|
||||
import org.apache.hadoop.io.{ArrayWritable, BooleanWritable, BytesWritable, DoubleWritable,
|
||||
FloatWritable, IntWritable, LongWritable, NullWritable, Text, Writable}
|
||||
import org.apache.hadoop.mapred.{FileInputFormat, InputFormat, JobConf, SequenceFileInputFormat,
|
||||
TextInputFormat}
|
||||
import org.apache.hadoop.mapreduce.{InputFormat => NewInputFormat, Job => NewHadoopJob}
|
||||
import org.apache.hadoop.mapreduce.lib.input.{FileInputFormat => NewFileInputFormat}
|
||||
|
||||
import org.apache.mesos.MesosNativeLibrary
|
||||
|
||||
import org.apache.spark.deploy.{LocalSparkCluster, SparkHadoopUtil}
|
||||
|

@@ -61,53 +48,97 @@ import org.apache.spark.scheduler.cluster.mesos.{CoarseMesosSchedulerBackend, Me
import org.apache.spark.scheduler.local.LocalBackend
import org.apache.spark.storage.{BlockManagerSource, RDDInfo, StorageStatus, StorageUtils}
import org.apache.spark.ui.SparkUI
import org.apache.spark.util.{ClosureCleaner, MetadataCleaner, MetadataCleanerType,
  TimeStampedHashMap, Utils}
import org.apache.spark.util.{Utils, TimeStampedHashMap, MetadataCleaner, MetadataCleanerType,
  ClosureCleaner}

/**
 * Main entry point for Spark functionality. A SparkContext represents the connection to a Spark
 * cluster, and can be used to create RDDs, accumulators and broadcast variables on that cluster.
 *
 * @param master Cluster URL to connect to (e.g. mesos://host:port, spark://host:port, local[4]).
 * @param appName A name for your application, to display on the cluster web UI.
 * @param sparkHome Location where Spark is installed on cluster nodes.
 * @param jars Collection of JARs to send to the cluster. These can be paths on the local file
 *   system or HDFS, HTTP, HTTPS, or FTP URLs.
 * @param environment Environment variables to set on worker nodes.
 * @param config a Spark Config object describing the application configuration. Any settings in
 *   this config override the default configs as well as system properties.
 * @param preferredNodeLocationData used in YARN mode to select nodes to launch containers on. Can
 *   be generated using [[org.apache.spark.scheduler.InputFormatInfo.computePreferredLocations]]
 *   from a list of input files or InputFormats for the application.
 */
class SparkContext(
    val master: String,
    val appName: String,
    val sparkHome: String = null,
    val jars: Seq[String] = Nil,
    val environment: Map[String, String] = Map(),
    config: SparkConf,
    // This is used only by YARN for now, but should be relevant to other cluster types (Mesos, etc)
    // too. This is typically generated from InputFormatInfo.computePreferredLocations .. host, set
    // of data-local splits on host
    val preferredNodeLocationData: scala.collection.Map[String, scala.collection.Set[SplitInfo]] =
      scala.collection.immutable.Map())
    // too. This is typically generated from InputFormatInfo.computePreferredLocations. It contains
    // a map from hostname to a list of input format splits on the host.
    val preferredNodeLocationData: Map[String, Set[SplitInfo]] = Map())
  extends Logging {

  // Ensure logging is initialized before we spawn any threads
  initLogging()

  /**
   * Alternative constructor that allows setting common Spark properties directly
   *
   * @param master Cluster URL to connect to (e.g. mesos://host:port, spark://host:port, local[4]).
   * @param appName A name for your application, to display on the cluster web UI
   * @param conf a [[org.apache.spark.SparkConf]] object specifying other Spark parameters
   */
  def this(master: String, appName: String, conf: SparkConf) =
    this(SparkContext.updatedConf(conf, master, appName))

  /**
   * Alternative constructor that allows setting common Spark properties directly
   *
   * @param master Cluster URL to connect to (e.g. mesos://host:port, spark://host:port, local[4]).
   * @param appName A name for your application, to display on the cluster web UI.
   * @param sparkHome Location where Spark is installed on cluster nodes.
   * @param jars Collection of JARs to send to the cluster. These can be paths on the local file
   *   system or HDFS, HTTP, HTTPS, or FTP URLs.
   * @param environment Environment variables to set on worker nodes.
   */
  def this(
      master: String,
      appName: String,
      sparkHome: String = null,
      jars: Seq[String] = Nil,
      environment: Map[String, String] = Map(),
      preferredNodeLocationData: Map[String, Set[SplitInfo]] = Map()) =
  {
    this(SparkContext.updatedConf(new SparkConf(), master, appName, sparkHome, jars, environment),
      preferredNodeLocationData)
  }

  private[spark] val conf = config.clone()

  /**
   * Return a copy of this SparkContext's configuration. The configuration ''cannot'' be
   * changed at runtime.
   */
  def getConf: SparkConf = conf.clone()

  if (!conf.contains("spark.master")) {
    throw new SparkException("A master URL must be set in your configuration")
  }
  if (!conf.contains("spark.app.name")) {
    throw new SparkException("An application name must be set in your configuration")
  }
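
For orientation only (the master URL and application name below are illustrative, not part of the patch), the configuration-first construction path sketched by these constructors reads roughly as:

    val conf = new SparkConf().setMaster("local[2]").setAppName("ConfDrivenApp")
    val sc = new SparkContext(conf)
    // or, equivalently, via the alternative constructor defined above:
    val sc2 = new SparkContext("local[2]", "ConfDrivenApp", new SparkConf())
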

  // Set Spark driver host and port system properties
  if (System.getProperty("spark.driver.host") == null) {
    System.setProperty("spark.driver.host", Utils.localHostName())
  }
  if (System.getProperty("spark.driver.port") == null) {
    System.setProperty("spark.driver.port", "0")
  conf.setIfMissing("spark.driver.host", Utils.localHostName())
  conf.setIfMissing("spark.driver.port", "0")

  val jars: Seq[String] = if (conf.contains("spark.jars")) {
    conf.get("spark.jars").split(",").filter(_.size != 0)
  } else {
    null
  }

  val master = conf.get("spark.master")
  val appName = conf.get("spark.app.name")

  val isLocal = (master == "local" || master.startsWith("local["))

  // Create the Spark execution environment (cache, map output tracker, etc)
  private[spark] val env = SparkEnv.createFromSystemProperties(
  private[spark] val env = SparkEnv.create(
    conf,
    "<driver>",
    System.getProperty("spark.driver.host"),
    System.getProperty("spark.driver.port").toInt,
    true,
    isLocal)
    conf.get("spark.driver.host"),
    conf.get("spark.driver.port").toInt,
    isDriver = true,
    isLocal = isLocal)
  SparkEnv.set(env)

  // Used to store a URL for each static file/jar together with the file's local timestamp

@@ -116,7 +147,8 @@ class SparkContext(

  // Keeps track of all persisted RDDs
  private[spark] val persistentRdds = new TimeStampedHashMap[Int, RDD[_]]
  private[spark] val metadataCleaner = new MetadataCleaner(MetadataCleanerType.SPARK_CONTEXT, this.cleanup)
  private[spark] val metadataCleaner =
    new MetadataCleaner(MetadataCleanerType.SPARK_CONTEXT, this.cleanup, conf)

  // Initialize the Spark UI
  private[spark] val ui = new SparkUI(this)

@@ -126,23 +158,30 @@ class SparkContext(

  // Add each JAR given through the constructor
  if (jars != null) {
    jars.foreach { addJar(_) }
    jars.foreach(addJar)
  }

  private[spark] val executorMemory = conf.getOption("spark.executor.memory")
    .orElse(Option(System.getenv("SPARK_MEM")))
    .map(Utils.memoryStringToMb)
    .getOrElse(512)

  // Environment variables to pass to our executors
  private[spark] val executorEnvs = HashMap[String, String]()
  // Note: SPARK_MEM is included for Mesos, but overwritten for standalone mode in ExecutorRunner
  for (key <- Seq("SPARK_CLASSPATH", "SPARK_LIBRARY_PATH", "SPARK_JAVA_OPTS", "SPARK_TESTING")) {
    val value = System.getenv(key)
    if (value != null) {
      executorEnvs(key) = value
    }
  for (key <- Seq("SPARK_CLASSPATH", "SPARK_LIBRARY_PATH", "SPARK_JAVA_OPTS");
      value <- Option(System.getenv(key))) {
    executorEnvs(key) = value
  }
  // Convert java options to env vars as a work around
  // since we can't set env vars directly in sbt.
  for { (envKey, propKey) <- Seq(("SPARK_HOME", "spark.home"), ("SPARK_TESTING", "spark.testing"))
      value <- Option(System.getenv(envKey)).orElse(Option(System.getProperty(propKey)))} {
    executorEnvs(envKey) = value
  }
  // Since memory can be set with a system property too, use that
  executorEnvs("SPARK_MEM") = SparkContext.executorMemoryRequested + "m"
  if (environment != null) {
    executorEnvs ++= environment
  }
  executorEnvs("SPARK_MEM") = executorMemory + "m"
  executorEnvs ++= conf.getExecutorEnv
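
A small, hypothetical illustration of how executor environment variables now flow through the SparkConf (the variable name and value are made up); either form below ends up in `executorEnvs` via `conf.getExecutorEnv`:

    conf.set("spark.executorEnv.MY_LIB_PATH", "/opt/native/lib")
    conf.setExecutorEnv(Seq(("MY_LIB_PATH", "/opt/native/lib")))   // Seq overload, as used by updatedConf below
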

  // Set SPARK_USER for user who is running SparkContext.
  val sparkUser = Option {

@@ -164,24 +203,24 @@ class SparkContext(
  /** A default Hadoop Configuration for the Hadoop code (e.g. file systems) that we reuse. */
  val hadoopConfiguration = {
    val env = SparkEnv.get
    val conf = SparkHadoopUtil.get.newConfiguration()
    val hadoopConf = SparkHadoopUtil.get.newConfiguration()
    // Explicitly check for S3 environment variables
    if (System.getenv("AWS_ACCESS_KEY_ID") != null &&
        System.getenv("AWS_SECRET_ACCESS_KEY") != null) {
      conf.set("fs.s3.awsAccessKeyId", System.getenv("AWS_ACCESS_KEY_ID"))
      conf.set("fs.s3n.awsAccessKeyId", System.getenv("AWS_ACCESS_KEY_ID"))
      conf.set("fs.s3.awsSecretAccessKey", System.getenv("AWS_SECRET_ACCESS_KEY"))
      conf.set("fs.s3n.awsSecretAccessKey", System.getenv("AWS_SECRET_ACCESS_KEY"))
      hadoopConf.set("fs.s3.awsAccessKeyId", System.getenv("AWS_ACCESS_KEY_ID"))
      hadoopConf.set("fs.s3n.awsAccessKeyId", System.getenv("AWS_ACCESS_KEY_ID"))
      hadoopConf.set("fs.s3.awsSecretAccessKey", System.getenv("AWS_SECRET_ACCESS_KEY"))
      hadoopConf.set("fs.s3n.awsSecretAccessKey", System.getenv("AWS_SECRET_ACCESS_KEY"))
    }
    // Copy any "spark.hadoop.foo=bar" system properties into conf as "foo=bar"
    Utils.getSystemProperties.foreach { case (key, value) =>
    conf.getAll.foreach { case (key, value) =>
      if (key.startsWith("spark.hadoop.")) {
        conf.set(key.substring("spark.hadoop.".length), value)
        hadoopConf.set(key.substring("spark.hadoop.".length), value)
      }
    }
    val bufferSize = System.getProperty("spark.buffer.size", "65536")
    conf.set("io.file.buffer.size", bufferSize)
    conf
    val bufferSize = conf.get("spark.buffer.size", "65536")
    hadoopConf.set("io.file.buffer.size", bufferSize)
    hadoopConf
  }
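
A sketch of the "spark.hadoop.*" pass-through above (the Hadoop property name is illustrative, not taken from the patch): any such key on the SparkConf is copied into the Hadoop Configuration with the prefix stripped.

    conf.set("spark.hadoop.fs.s3.maxRetries", "8")
    // after the SparkContext is created:
    //   sc.hadoopConfiguration.get("fs.s3.maxRetries") returns "8"
    //   sc.hadoopConfiguration.get("io.file.buffer.size") reflects spark.buffer.size
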

  private[spark] var checkpointDir: Option[String] = None

@@ -191,7 +230,7 @@ class SparkContext(
    override protected def childValue(parent: Properties): Properties = new Properties(parent)
  }

  private[spark] def getLocalProperties(): Properties = localProperties.get()
  private[spark] def getLocalProperties: Properties = localProperties.get()

  private[spark] def setLocalProperties(props: Properties) {
    localProperties.set(props)

@@ -522,7 +561,7 @@ class SparkContext(
    addedFiles(key) = System.currentTimeMillis

    // Fetch the file locally in case a job is executed using DAGScheduler.runLocally().
    Utils.fetchFile(path, new File(SparkFiles.getRootDirectory))
    Utils.fetchFile(path, new File(SparkFiles.getRootDirectory), conf)

    logInfo("Added file " + path + " at " + key + " with timestamp " + addedFiles(key))
  }
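
An illustrative use of addFile (the path is made up); files registered this way are fetched to each node and, assuming the usual SparkFiles helper, resolved from tasks like so:

    sc.addFile("hdfs://namenode:8020/data/lookup.txt")
    sc.parallelize(1 to 4).map { _ =>
      val local = org.apache.spark.SparkFiles.get("lookup.txt")   // local path on the executor
      new java.io.File(local).length
    }.collect()
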

@@ -692,15 +731,27 @@ class SparkContext(
   * (in that order of preference). If neither of these is set, return None.
   */
  private[spark] def getSparkHome(): Option[String] = {
    if (sparkHome != null) {
      Some(sparkHome)
    } else if (System.getProperty("spark.home") != null) {
      Some(System.getProperty("spark.home"))
    } else if (System.getenv("SPARK_HOME") != null) {
      Some(System.getenv("SPARK_HOME"))
    } else {
      None
    }
    conf.getOption("spark.home").orElse(Option(System.getenv("SPARK_HOME")))
  }

  /**
   * Support function for API backtraces.
   */
  def setCallSite(site: String) {
    setLocalProperty("externalCallSite", site)
  }

  /**
   * Support function for API backtraces.
   */
  def clearCallSite() {
    setLocalProperty("externalCallSite", null)
  }

  private[spark] def getCallSite(): String = {
    val callSite = getLocalProperty("externalCallSite")
    if (callSite == null) return Utils.formatSparkCallSite
    callSite
  }
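
A hypothetical wrapper illustrating the call-site helpers above (the names and the site string are made up, not part of the patch):

    def countWithNiceStackTrace(rdd: org.apache.spark.rdd.RDD[_]): Long = {
      sc.setCallSite("MyWrapper.countWithNiceStackTrace")
      try rdd.count() finally sc.clearCallSite()
    }
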
  /**

@@ -715,7 +766,7 @@ class SparkContext(
      partitions: Seq[Int],
      allowLocal: Boolean,
      resultHandler: (Int, U) => Unit) {
    val callSite = Utils.formatSparkCallSite
    val callSite = getCallSite
    val cleanedFunc = clean(func)
    logInfo("Starting job: " + callSite)
    val start = System.nanoTime

@@ -799,7 +850,7 @@ class SparkContext(
      func: (TaskContext, Iterator[T]) => U,
      evaluator: ApproximateEvaluator[U, R],
      timeout: Long): PartialResult[R] = {
    val callSite = Utils.formatSparkCallSite
    val callSite = getCallSite
    logInfo("Starting job: " + callSite)
    val start = System.nanoTime
    val result = dagScheduler.runApproximateJob(rdd, func, evaluator, callSite, timeout,

@@ -819,7 +870,7 @@ class SparkContext(
      resultFunc: => R): SimpleFutureAction[R] =
  {
    val cleanF = clean(processPartition)
    val callSite = Utils.formatSparkCallSite
    val callSite = getCallSite
    val waiter = dagScheduler.submitJob(
      rdd,
      (context: TaskContext, iter: Iterator[T]) => cleanF(iter),

@@ -855,22 +906,15 @@ class SparkContext(

  /**
   * Set the directory under which RDDs are going to be checkpointed. The directory must
   * be a HDFS path if running on a cluster. If the directory does not exist, it will
   * be created. If the directory exists and useExisting is set to true, then the
   * existing directory will be used. Otherwise an exception will be thrown to
   * prevent accidental overriding of checkpoint files in the existing directory.
   * be a HDFS path if running on a cluster.
   */
  def setCheckpointDir(dir: String, useExisting: Boolean = false) {
    val path = new Path(dir)
    val fs = path.getFileSystem(SparkHadoopUtil.get.newConfiguration())
    if (!useExisting) {
      if (fs.exists(path)) {
        throw new Exception("Checkpoint directory '" + path + "' already exists.")
      } else {
        fs.mkdirs(path)
      }
  def setCheckpointDir(directory: String) {
    checkpointDir = Option(directory).map { dir =>
      val path = new Path(dir, UUID.randomUUID().toString)
      val fs = path.getFileSystem(hadoopConfiguration)
      fs.mkdirs(path)
      fs.getFileStatus(path).getPath().toString
    }
    checkpointDir = Some(dir)
  }
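
A usage sketch of the simplified API (the directory is illustrative); note that a unique, UUID-named subdirectory is now created per context:

    sc.setCheckpointDir("hdfs://namenode:8020/tmp/spark-checkpoints")
    val data = sc.parallelize(1 to 100000).map(_ * 2)
    data.checkpoint()   // written out under the checkpoint dir on the next action
    data.count()
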
/** Default level of parallelism to use when not given by user (e.g. parallelize and makeRDD). */
|
||||
|
@ -994,7 +1038,7 @@ object SparkContext {
|
|||
|
||||
/**
|
||||
* Find the JAR from which a given class was loaded, to make it easy for users to pass
|
||||
* their JARs to SparkContext
|
||||
* their JARs to SparkContext.
|
||||
*/
|
||||
def jarOfClass(cls: Class[_]): Seq[String] = {
|
||||
val uri = cls.getResource("/" + cls.getName.replace('.', '/') + ".class")
|
||||
|
@ -1011,21 +1055,44 @@ object SparkContext {
|
|||
}
|
||||
}
|
||||
|
||||
/** Find the JAR that contains the class of a particular object */
|
||||
/**
|
||||
* Find the JAR that contains the class of a particular object, to make it easy for users
|
||||
* to pass their JARs to SparkContext. In most cases you can call jarOfObject(this) in
|
||||
* your driver program.
|
||||
*/
|
||||
def jarOfObject(obj: AnyRef): Seq[String] = jarOfClass(obj.getClass)
|
||||
|
||||
/** Get the amount of memory per executor requested through system properties or SPARK_MEM */
|
||||
private[spark] val executorMemoryRequested = {
|
||||
// TODO: Might need to add some extra memory for the non-heap parts of the JVM
|
||||
Option(System.getProperty("spark.executor.memory"))
|
||||
.orElse(Option(System.getenv("SPARK_MEM")))
|
||||
.map(Utils.memoryStringToMb)
|
||||
.getOrElse(512)
|
||||
/**
|
||||
* Creates a modified version of a SparkConf with the parameters that can be passed separately
|
||||
* to SparkContext, to make it easier to write SparkContext's constructors. This ignores
|
||||
* parameters that are passed as the default value of null, instead of throwing an exception
|
||||
* like SparkConf would.
|
||||
*/
|
||||
private def updatedConf(
|
||||
conf: SparkConf,
|
||||
master: String,
|
||||
appName: String,
|
||||
sparkHome: String = null,
|
||||
jars: Seq[String] = Nil,
|
||||
environment: Map[String, String] = Map()): SparkConf =
|
||||
{
|
||||
val res = conf.clone()
|
||||
res.setMaster(master)
|
||||
res.setAppName(appName)
|
||||
if (sparkHome != null) {
|
||||
res.setSparkHome(sparkHome)
|
||||
}
|
||||
if (!jars.isEmpty) {
|
||||
res.setJars(jars)
|
||||
}
|
||||
res.setExecutorEnv(environment.toSeq)
|
||||
res
|
||||
}
|
||||
|
||||
// Creates a task scheduler based on a given master URL. Extracted for testing.
|
||||
private
|
||||
def createTaskScheduler(sc: SparkContext, master: String, appName: String): TaskScheduler = {
|
||||
/** Creates a task scheduler based on a given master URL. Extracted for testing. */
|
||||
private def createTaskScheduler(sc: SparkContext, master: String, appName: String)
|
||||
: TaskScheduler =
|
||||
{
|
||||
// Regular expression used for local[N] master format
|
||||
val LOCAL_N_REGEX = """local\[([0-9]+)\]""".r
|
||||
// Regular expression for local[N, maxRetries], used in tests with failing tasks
|
||||
|
@ -1071,10 +1138,10 @@ object SparkContext {
|
|||
case LOCAL_CLUSTER_REGEX(numSlaves, coresPerSlave, memoryPerSlave) =>
|
||||
// Check to make sure memory requested <= memoryPerSlave. Otherwise Spark will just hang.
|
||||
val memoryPerSlaveInt = memoryPerSlave.toInt
|
||||
if (SparkContext.executorMemoryRequested > memoryPerSlaveInt) {
|
||||
if (sc.executorMemory > memoryPerSlaveInt) {
|
||||
throw new SparkException(
|
||||
"Asked to launch cluster with %d MB RAM / worker but requested %d MB/worker".format(
|
||||
memoryPerSlaveInt, SparkContext.executorMemoryRequested))
|
||||
memoryPerSlaveInt, sc.executorMemory))
|
||||
}
|
||||
|
||||
val scheduler = new TaskSchedulerImpl(sc)
|
||||
|
@ -1132,7 +1199,7 @@ object SparkContext {
|
|||
case mesosUrl @ MESOS_REGEX(_) =>
|
||||
MesosNativeLibrary.load()
|
||||
val scheduler = new TaskSchedulerImpl(sc)
|
||||
val coarseGrained = System.getProperty("spark.mesos.coarse", "false").toBoolean
|
||||
val coarseGrained = sc.conf.get("spark.mesos.coarse", "false").toBoolean
|
||||
val url = mesosUrl.stripPrefix("mesos://") // strip scheme from raw Mesos URLs
|
||||
val backend = if (coarseGrained) {
|
||||
new CoarseMesosSchedulerBackend(scheduler, sc, url, appName)
|
||||
|
|
|
@ -40,7 +40,7 @@ import com.google.common.collect.MapMaker
|
|||
* objects needs to have the right SparkEnv set. You can get the current environment with
|
||||
* SparkEnv.get (e.g. after creating a SparkContext) and set it with SparkEnv.set.
|
||||
*/
|
||||
class SparkEnv (
|
||||
class SparkEnv private[spark] (
|
||||
val executorId: String,
|
||||
val actorSystem: ActorSystem,
|
||||
val serializerManager: SerializerManager,
|
||||
|
@ -54,7 +54,8 @@ class SparkEnv (
|
|||
val connectionManager: ConnectionManager,
|
||||
val httpFileServer: HttpFileServer,
|
||||
val sparkFilesDir: String,
|
||||
val metricsSystem: MetricsSystem) {
|
||||
val metricsSystem: MetricsSystem,
|
||||
val conf: SparkConf) {
|
||||
|
||||
private val pythonWorkers = mutable.HashMap[(String, Map[String, String]), PythonWorkerFactory]()
|
||||
|
||||
|
@ -62,7 +63,7 @@ class SparkEnv (
|
|||
// (e.g., HadoopFileRDD uses this to cache JobConfs and InputFormats).
|
||||
private[spark] val hadoopJobMetadata = new MapMaker().softValues().makeMap[String, Any]()
|
||||
|
||||
def stop() {
|
||||
private[spark] def stop() {
|
||||
pythonWorkers.foreach { case(key, worker) => worker.stop() }
|
||||
httpFileServer.stop()
|
||||
mapOutputTracker.stop()
|
||||
|
@ -78,6 +79,7 @@ class SparkEnv (
|
|||
//actorSystem.awaitTermination()
|
||||
}
|
||||
|
||||
private[spark]
|
||||
def createPythonWorker(pythonExec: String, envVars: Map[String, String]): java.net.Socket = {
|
||||
synchronized {
|
||||
val key = (pythonExec, envVars)
|
||||
|
@ -106,33 +108,35 @@ object SparkEnv extends Logging {
|
|||
/**
|
||||
* Returns the ThreadLocal SparkEnv.
|
||||
*/
|
||||
def getThreadLocal : SparkEnv = {
|
||||
def getThreadLocal: SparkEnv = {
|
||||
env.get()
|
||||
}
|
||||
|
||||
def createFromSystemProperties(
|
||||
private[spark] def create(
|
||||
conf: SparkConf,
|
||||
executorId: String,
|
||||
hostname: String,
|
||||
port: Int,
|
||||
isDriver: Boolean,
|
||||
isLocal: Boolean): SparkEnv = {
|
||||
|
||||
val (actorSystem, boundPort) = AkkaUtils.createActorSystem("spark", hostname, port)
|
||||
val (actorSystem, boundPort) = AkkaUtils.createActorSystem("spark", hostname, port,
|
||||
conf = conf)
|
||||
|
||||
// Bit of a hack: If this is the driver and our port was 0 (meaning bind to any free port),
|
||||
// figure out which port number Akka actually bound to and set spark.driver.port to it.
|
||||
if (isDriver && port == 0) {
|
||||
System.setProperty("spark.driver.port", boundPort.toString)
|
||||
conf.set("spark.driver.port", boundPort.toString)
|
||||
}
|
||||
|
||||
// set only if unset until now.
|
||||
if (System.getProperty("spark.hostPort", null) == null) {
|
||||
if (!conf.contains("spark.hostPort")) {
|
||||
if (!isDriver){
|
||||
// unexpected
|
||||
Utils.logErrorWithStack("Unexpected NOT to have spark.hostPort set")
|
||||
}
|
||||
Utils.checkHost(hostname)
|
||||
System.setProperty("spark.hostPort", hostname + ":" + boundPort)
|
||||
conf.set("spark.hostPort", hostname + ":" + boundPort)
|
||||
}
|
||||
|
||||
val classLoader = Thread.currentThread.getContextClassLoader
|
||||
|
@ -140,25 +144,26 @@ object SparkEnv extends Logging {
|
|||
// Create an instance of the class named by the given Java system property, or by
|
||||
// defaultClassName if the property is not set, and return it as a T
|
||||
def instantiateClass[T](propertyName: String, defaultClassName: String): T = {
|
||||
val name = System.getProperty(propertyName, defaultClassName)
|
||||
val name = conf.get(propertyName, defaultClassName)
|
||||
Class.forName(name, true, classLoader).newInstance().asInstanceOf[T]
|
||||
}
|
||||
|
||||
val serializerManager = new SerializerManager
|
||||
|
||||
val serializer = serializerManager.setDefault(
|
||||
System.getProperty("spark.serializer", "org.apache.spark.serializer.JavaSerializer"))
|
||||
conf.get("spark.serializer", "org.apache.spark.serializer.JavaSerializer"), conf)
|
||||
|
||||
val closureSerializer = serializerManager.get(
|
||||
System.getProperty("spark.closure.serializer", "org.apache.spark.serializer.JavaSerializer"))
|
||||
conf.get("spark.closure.serializer", "org.apache.spark.serializer.JavaSerializer"),
|
||||
conf)
|
||||
|
||||
def registerOrLookup(name: String, newActor: => Actor): Either[ActorRef, ActorSelection] = {
|
||||
if (isDriver) {
|
||||
logInfo("Registering " + name)
|
||||
Left(actorSystem.actorOf(Props(newActor), name = name))
|
||||
} else {
|
||||
val driverHost: String = System.getProperty("spark.driver.host", "localhost")
|
||||
val driverPort: Int = System.getProperty("spark.driver.port", "7077").toInt
|
||||
val driverHost: String = conf.get("spark.driver.host", "localhost")
|
||||
val driverPort: Int = conf.get("spark.driver.port", "7077").toInt
|
||||
Utils.checkHost(driverHost, "Expected hostname")
|
||||
val url = "akka.tcp://spark@%s:%s/user/%s".format(driverHost, driverPort, name)
|
||||
logInfo("Connecting to " + name + ": " + url)
|
||||
|
@ -168,21 +173,21 @@ object SparkEnv extends Logging {
|
|||
|
||||
val blockManagerMaster = new BlockManagerMaster(registerOrLookup(
|
||||
"BlockManagerMaster",
|
||||
new BlockManagerMasterActor(isLocal)))
|
||||
val blockManager = new BlockManager(executorId, actorSystem, blockManagerMaster, serializer)
|
||||
new BlockManagerMasterActor(isLocal, conf)), conf)
|
||||
val blockManager = new BlockManager(executorId, actorSystem, blockManagerMaster, serializer, conf)
|
||||
|
||||
val connectionManager = blockManager.connectionManager
|
||||
|
||||
val broadcastManager = new BroadcastManager(isDriver)
|
||||
val broadcastManager = new BroadcastManager(isDriver, conf)
|
||||
|
||||
val cacheManager = new CacheManager(blockManager)
|
||||
|
||||
// Have to assign trackerActor after initialization as MapOutputTrackerActor
|
||||
// requires the MapOutputTracker itself
|
||||
val mapOutputTracker = if (isDriver) {
|
||||
new MapOutputTrackerMaster()
|
||||
new MapOutputTrackerMaster(conf)
|
||||
} else {
|
||||
new MapOutputTracker()
|
||||
new MapOutputTracker(conf)
|
||||
}
|
||||
mapOutputTracker.trackerActor = registerOrLookup(
|
||||
"MapOutputTracker",
|
||||
|
@ -193,12 +198,12 @@ object SparkEnv extends Logging {
|
|||
|
||||
val httpFileServer = new HttpFileServer()
|
||||
httpFileServer.initialize()
|
||||
System.setProperty("spark.fileserver.uri", httpFileServer.serverUri)
|
||||
conf.set("spark.fileserver.uri", httpFileServer.serverUri)
|
||||
|
||||
val metricsSystem = if (isDriver) {
|
||||
MetricsSystem.createMetricsSystem("driver")
|
||||
MetricsSystem.createMetricsSystem("driver", conf)
|
||||
} else {
|
||||
MetricsSystem.createMetricsSystem("executor")
|
||||
MetricsSystem.createMetricsSystem("executor", conf)
|
||||
}
|
||||
metricsSystem.start()
|
||||
|
||||
|
@ -212,7 +217,7 @@ object SparkEnv extends Logging {
|
|||
}
|
||||
|
||||
// Warn about deprecated spark.cache.class property
|
||||
if (System.getProperty("spark.cache.class") != null) {
|
||||
if (conf.contains("spark.cache.class")) {
|
||||
logWarning("The spark.cache.class property is no longer being used! Specify storage " +
|
||||
"levels using the RDD.persist() method instead.")
|
||||
}
|
||||
|
@ -231,6 +236,7 @@ object SparkEnv extends Logging {
|
|||
connectionManager,
|
||||
httpFileServer,
|
||||
sparkFilesDir,
|
||||
metricsSystem)
|
||||
metricsSystem,
|
||||
conf)
|
||||
}
|
||||
}
|
||||
|
|
|
@@ -611,6 +611,42 @@ class JavaPairRDD[K, V](val rdd: RDD[(K, V)])(implicit val kClassTag: ClassTag[K
   * Return an RDD with the values of each tuple.
   */
  def values(): JavaRDD[V] = JavaRDD.fromRDD[V](rdd.map(_._2))

  /**
   * Return approximate number of distinct values for each key in this RDD.
   * The accuracy of approximation can be controlled through the relative standard deviation
   * (relativeSD) parameter, which also controls the amount of memory used. Lower values result in
   * more accurate counts but increase the memory footprint and vice versa. Uses the provided
   * Partitioner to partition the output RDD.
   */
  def countApproxDistinctByKey(relativeSD: Double, partitioner: Partitioner): JavaRDD[(K, Long)] = {
    rdd.countApproxDistinctByKey(relativeSD, partitioner)
  }

  /**
   * Return approximate number of distinct values for each key in this RDD.
   * The accuracy of approximation can be controlled through the relative standard deviation
   * (relativeSD) parameter, which also controls the amount of memory used. Lower values result in
   * more accurate counts but increase the memory footprint and vice versa. The default value of
   * relativeSD is 0.05. Hash-partitions the output RDD using the existing partitioner/parallelism
   * level.
   */
  def countApproxDistinctByKey(relativeSD: Double = 0.05): JavaRDD[(K, Long)] = {
    rdd.countApproxDistinctByKey(relativeSD)
  }

  /**
   * Return approximate number of distinct values for each key in this RDD.
   * The accuracy of approximation can be controlled through the relative standard deviation
   * (relativeSD) parameter, which also controls the amount of memory used. Lower values result in
   * more accurate counts but increase the memory footprint and vice versa. HashPartitions the
   * output RDD into numPartitions.
   */
  def countApproxDistinctByKey(relativeSD: Double, numPartitions: Int): JavaRDD[(K, Long)] = {
    rdd.countApproxDistinctByKey(relativeSD, numPartitions)
  }
}
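
A rough sketch of the underlying Scala operation these Java wrappers delegate to (the data are illustrative, not from the patch):

    val visits = sc.parallelize(Seq(("page1", "userA"), ("page1", "userB"), ("page2", "userA")))
    val approxUsersPerPage = visits.countApproxDistinctByKey(relativeSD = 0.05)
    approxUsersPerPage.collect()   // approximately Array(("page1", 2), ("page2", 1))
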

object JavaPairRDD {
@@ -444,4 +444,15 @@ trait JavaRDDLike[T, This <: JavaRDDLike[T, This]] extends Serializable {
    val comp = com.google.common.collect.Ordering.natural().asInstanceOf[Comparator[T]]
    takeOrdered(num, comp)
  }

  /**
   * Return approximate number of distinct elements in the RDD.
   *
   * The accuracy of approximation can be controlled through the relative standard deviation
   * (relativeSD) parameter, which also controls the amount of memory used. Lower values result in
   * more accurate counts but increase the memory footprint and vice versa. The default value of
   * relativeSD is 0.05.
   */
  def countApproxDistinct(relativeSD: Double = 0.05): Long = rdd.countApproxDistinct(relativeSD)

}
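
And the whole-RDD variant, as a sketch (the numbers are illustrative); a lower relativeSD costs more memory but gives a tighter estimate:

    val ids = sc.parallelize(1 to 100000).map(_ % 1000)
    ids.countApproxDistinct(0.01)   // returns a value close to 1000
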
@ -29,17 +29,22 @@ import org.apache.hadoop.mapred.JobConf
|
|||
import org.apache.hadoop.mapreduce.{InputFormat => NewInputFormat}
|
||||
import com.google.common.base.Optional
|
||||
|
||||
import org.apache.spark.{Accumulable, AccumulableParam, Accumulator, AccumulatorParam, SparkContext}
|
||||
import org.apache.spark._
|
||||
import org.apache.spark.SparkContext.IntAccumulatorParam
|
||||
import org.apache.spark.SparkContext.DoubleAccumulatorParam
|
||||
import org.apache.spark.broadcast.Broadcast
|
||||
import org.apache.spark.rdd.RDD
|
||||
import scala.Tuple2
|
||||
|
||||
/**
|
||||
* A Java-friendly version of [[org.apache.spark.SparkContext]] that returns [[org.apache.spark.api.java.JavaRDD]]s and
|
||||
* works with Java collections instead of Scala ones.
|
||||
*/
|
||||
class JavaSparkContext(val sc: SparkContext) extends JavaSparkContextVarargsWorkaround {
|
||||
/**
|
||||
* @param conf a [[org.apache.spark.SparkConf]] object specifying Spark parameters
|
||||
*/
|
||||
def this(conf: SparkConf) = this(new SparkContext(conf))
|
||||
|
||||
/**
|
||||
* @param master Cluster URL to connect to (e.g. mesos://host:port, spark://host:port, local[4]).
|
||||
|
@ -47,6 +52,14 @@ class JavaSparkContext(val sc: SparkContext) extends JavaSparkContextVarargsWork
|
|||
*/
|
||||
def this(master: String, appName: String) = this(new SparkContext(master, appName))
|
||||
|
||||
/**
|
||||
* @param master Cluster URL to connect to (e.g. mesos://host:port, spark://host:port, local[4]).
|
||||
* @param appName A name for your application, to display on the cluster web UI
|
||||
* @param conf a [[org.apache.spark.SparkConf]] object specifying other Spark parameters
|
||||
*/
|
||||
def this(master: String, appName: String, conf: SparkConf) =
|
||||
this(conf.setMaster(master).setAppName(appName))
|
||||
|
||||
/**
|
||||
* @param master Cluster URL to connect to (e.g. mesos://host:port, spark://host:port, local[4]).
|
||||
* @param appName A name for your application, to display on the cluster web UI
|
||||
|
@ -381,20 +394,7 @@ class JavaSparkContext(val sc: SparkContext) extends JavaSparkContextVarargsWork
|
|||
|
||||
/**
|
||||
* Set the directory under which RDDs are going to be checkpointed. The directory must
|
||||
* be a HDFS path if running on a cluster. If the directory does not exist, it will
|
||||
* be created. If the directory exists and useExisting is set to true, then the
|
||||
* exisiting directory will be used. Otherwise an exception will be thrown to
|
||||
* prevent accidental overriding of checkpoint files in the existing directory.
|
||||
*/
|
||||
def setCheckpointDir(dir: String, useExisting: Boolean) {
|
||||
sc.setCheckpointDir(dir, useExisting)
|
||||
}
|
||||
|
||||
/**
|
||||
* Set the directory under which RDDs are going to be checkpointed. The directory must
|
||||
* be a HDFS path if running on a cluster. If the directory does not exist, it will
|
||||
* be created. If the directory exists, an exception will be thrown to prevent accidental
|
||||
* overriding of checkpoint files.
|
||||
* be a HDFS path if running on a cluster.
|
||||
*/
|
||||
def setCheckpointDir(dir: String) {
|
||||
sc.setCheckpointDir(dir)
|
||||
|
@ -405,10 +405,36 @@ class JavaSparkContext(val sc: SparkContext) extends JavaSparkContextVarargsWork
|
|||
implicitly[ClassTag[AnyRef]].asInstanceOf[ClassTag[T]]
|
||||
new JavaRDD(sc.checkpointFile(path))
|
||||
}
|
||||
|
||||
/**
|
||||
* Return a copy of this JavaSparkContext's configuration. The configuration ''cannot'' be
|
||||
* changed at runtime.
|
||||
*/
|
||||
def getConf: SparkConf = sc.getConf
|
||||
|
||||
/**
|
||||
* Pass-through to SparkContext.setCallSite. For API support only.
|
||||
*/
|
||||
def setCallSite(site: String) {
|
||||
sc.setCallSite(site)
|
||||
}
|
||||
|
||||
/**
|
||||
* Pass-through to SparkContext.setCallSite. For API support only.
|
||||
*/
|
||||
def clearCallSite() {
|
||||
sc.clearCallSite()
|
||||
}
|
||||
}
|
||||
|
||||
object JavaSparkContext {
|
||||
implicit def fromSparkContext(sc: SparkContext): JavaSparkContext = new JavaSparkContext(sc)
|
||||
|
||||
implicit def toSparkContext(jsc: JavaSparkContext): SparkContext = jsc.sc
|
||||
|
||||
/**
|
||||
* Find the JAR from which a given class was loaded, to make it easy for users to pass
|
||||
* their JARs to SparkContext.
|
||||
*/
|
||||
def jarOfClass(cls: Class[_]) = SparkContext.jarOfClass(cls).toArray
|
||||
}
|
||||
|
|
|
@ -41,7 +41,7 @@ private[spark] class PythonRDD[T: ClassTag](
|
|||
accumulator: Accumulator[JList[Array[Byte]]])
|
||||
extends RDD[Array[Byte]](parent) {
|
||||
|
||||
val bufferSize = System.getProperty("spark.buffer.size", "65536").toInt
|
||||
val bufferSize = conf.get("spark.buffer.size", "65536").toInt
|
||||
|
||||
override def getPartitions = parent.partitions
|
||||
|
||||
|
@ -250,7 +250,7 @@ private class PythonAccumulatorParam(@transient serverHost: String, serverPort:
|
|||
|
||||
Utils.checkHost(serverHost, "Expected hostname")
|
||||
|
||||
val bufferSize = System.getProperty("spark.buffer.size", "65536").toInt
|
||||
val bufferSize = SparkEnv.get.conf.get("spark.buffer.size", "65536").toInt
|
||||
|
||||
override def zero(value: JList[Array[Byte]]): JList[Array[Byte]] = new JArrayList
|
||||
|
||||
|
|
|
@ -31,8 +31,8 @@ abstract class Broadcast[T](private[spark] val id: Long) extends Serializable {
|
|||
override def toString = "Broadcast(" + id + ")"
|
||||
}
|
||||
|
||||
private[spark]
|
||||
class BroadcastManager(val _isDriver: Boolean) extends Logging with Serializable {
|
||||
private[spark]
|
||||
class BroadcastManager(val _isDriver: Boolean, conf: SparkConf) extends Logging with Serializable {
|
||||
|
||||
private var initialized = false
|
||||
private var broadcastFactory: BroadcastFactory = null
|
||||
|
@ -43,14 +43,14 @@ class BroadcastManager(val _isDriver: Boolean) extends Logging with Serializable
|
|||
private def initialize() {
|
||||
synchronized {
|
||||
if (!initialized) {
|
||||
val broadcastFactoryClass = System.getProperty(
|
||||
val broadcastFactoryClass = conf.get(
|
||||
"spark.broadcast.factory", "org.apache.spark.broadcast.HttpBroadcastFactory")
|
||||
|
||||
broadcastFactory =
|
||||
Class.forName(broadcastFactoryClass).newInstance.asInstanceOf[BroadcastFactory]
|
||||
|
||||
// Initialize appropriate BroadcastFactory and BroadcastObject
|
||||
broadcastFactory.initialize(isDriver)
|
||||
broadcastFactory.initialize(isDriver, conf)
|
||||
|
||||
initialized = true
|
||||
}
|
||||
|

@@ -17,6 +17,8 @@

package org.apache.spark.broadcast

import org.apache.spark.SparkConf

/**
 * An interface for all the broadcast implementations in Spark (to allow
 * multiple broadcast implementations). SparkContext uses a user-specified

@@ -24,7 +26,7 @@ package org.apache.spark.broadcast
 * entire Spark job.
 */
private[spark] trait BroadcastFactory {
  def initialize(isDriver: Boolean): Unit
  def initialize(isDriver: Boolean, conf: SparkConf): Unit
  def newBroadcast[T](value: T, isLocal: Boolean, id: Long): Broadcast[T]
  def stop(): Unit
}
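
As a hypothetical illustration of the widened interface (this class is not part of the patch), a factory now receives the SparkConf at initialization; this sketch simply delegates to HttpBroadcast:

    private[spark] class LoggingHttpBroadcastFactory extends BroadcastFactory {
      def initialize(isDriver: Boolean, conf: SparkConf) {
        // conf is now available for reading broadcast-related settings
        HttpBroadcast.initialize(isDriver, conf)
      }
      def newBroadcast[T](value_ : T, isLocal: Boolean, id: Long) =
        new HttpBroadcast[T](value_, isLocal, id)
      def stop() { HttpBroadcast.stop() }
    }
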
@ -24,14 +24,14 @@ import java.util.concurrent.TimeUnit
|
|||
import it.unimi.dsi.fastutil.io.FastBufferedInputStream
|
||||
import it.unimi.dsi.fastutil.io.FastBufferedOutputStream
|
||||
|
||||
import org.apache.spark.{HttpServer, Logging, SparkEnv}
|
||||
import org.apache.spark.{SparkConf, HttpServer, Logging, SparkEnv}
|
||||
import org.apache.spark.io.CompressionCodec
|
||||
import org.apache.spark.storage.{BroadcastBlockId, StorageLevel}
|
||||
import org.apache.spark.util.{MetadataCleaner, MetadataCleanerType, TimeStampedHashSet, Utils}
|
||||
|
||||
private[spark] class HttpBroadcast[T](@transient var value_ : T, isLocal: Boolean, id: Long)
|
||||
extends Broadcast[T](id) with Logging with Serializable {
|
||||
|
||||
|
||||
def value = value_
|
||||
|
||||
def blockId = BroadcastBlockId(id)
|
||||
|
@ -40,7 +40,7 @@ private[spark] class HttpBroadcast[T](@transient var value_ : T, isLocal: Boolea
|
|||
SparkEnv.get.blockManager.putSingle(blockId, value_, StorageLevel.MEMORY_AND_DISK, false)
|
||||
}
|
||||
|
||||
if (!isLocal) {
|
||||
if (!isLocal) {
|
||||
HttpBroadcast.write(id, value_)
|
||||
}
|
||||
|
||||
|
@ -64,7 +64,7 @@ private[spark] class HttpBroadcast[T](@transient var value_ : T, isLocal: Boolea
|
|||
}
|
||||
|
||||
private[spark] class HttpBroadcastFactory extends BroadcastFactory {
|
||||
def initialize(isDriver: Boolean) { HttpBroadcast.initialize(isDriver) }
|
||||
def initialize(isDriver: Boolean, conf: SparkConf) { HttpBroadcast.initialize(isDriver, conf) }
|
||||
|
||||
def newBroadcast[T](value_ : T, isLocal: Boolean, id: Long) =
|
||||
new HttpBroadcast[T](value_, isLocal, id)
|
||||
|
@ -81,44 +81,51 @@ private object HttpBroadcast extends Logging {
|
|||
private var serverUri: String = null
|
||||
private var server: HttpServer = null
|
||||
|
||||
// TODO: This shouldn't be a global variable so that multiple SparkContexts can coexist
|
||||
private val files = new TimeStampedHashSet[String]
|
||||
private val cleaner = new MetadataCleaner(MetadataCleanerType.HTTP_BROADCAST, cleanup)
|
||||
private var cleaner: MetadataCleaner = null
|
||||
|
||||
private val httpReadTimeout = TimeUnit.MILLISECONDS.convert(5,TimeUnit.MINUTES).toInt
|
||||
private val httpReadTimeout = TimeUnit.MILLISECONDS.convert(5, TimeUnit.MINUTES).toInt
|
||||
|
||||
private lazy val compressionCodec = CompressionCodec.createCodec()
|
||||
private var compressionCodec: CompressionCodec = null
|
||||
|
||||
def initialize(isDriver: Boolean) {
|
||||
def initialize(isDriver: Boolean, conf: SparkConf) {
|
||||
synchronized {
|
||||
if (!initialized) {
|
||||
bufferSize = System.getProperty("spark.buffer.size", "65536").toInt
|
||||
compress = System.getProperty("spark.broadcast.compress", "true").toBoolean
|
||||
bufferSize = conf.get("spark.buffer.size", "65536").toInt
|
||||
compress = conf.get("spark.broadcast.compress", "true").toBoolean
|
||||
if (isDriver) {
|
||||
createServer()
|
||||
createServer(conf)
|
||||
conf.set("spark.httpBroadcast.uri", serverUri)
|
||||
}
|
||||
serverUri = System.getProperty("spark.httpBroadcast.uri")
|
||||
serverUri = conf.get("spark.httpBroadcast.uri")
|
||||
cleaner = new MetadataCleaner(MetadataCleanerType.HTTP_BROADCAST, cleanup, conf)
|
||||
compressionCodec = CompressionCodec.createCodec(conf)
|
||||
initialized = true
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
def stop() {
|
||||
synchronized {
|
||||
if (server != null) {
|
||||
server.stop()
|
||||
server = null
|
||||
}
|
||||
if (cleaner != null) {
|
||||
cleaner.cancel()
|
||||
cleaner = null
|
||||
}
|
||||
compressionCodec = null
|
||||
initialized = false
|
||||
cleaner.cancel()
|
||||
}
|
||||
}
|
||||
|
||||
private def createServer() {
|
||||
broadcastDir = Utils.createTempDir(Utils.getLocalDir)
|
||||
private def createServer(conf: SparkConf) {
|
||||
broadcastDir = Utils.createTempDir(Utils.getLocalDir(conf))
|
||||
server = new HttpServer(broadcastDir)
|
||||
server.start()
|
||||
serverUri = server.uri
|
||||
System.setProperty("spark.httpBroadcast.uri", serverUri)
|
||||
logInfo("Broadcast server started at " + serverUri)
|
||||
}
|
||||
|
||||
|
@ -143,7 +150,7 @@ private object HttpBroadcast extends Logging {
|
|||
val in = {
|
||||
val httpConnection = new URL(url).openConnection()
|
||||
httpConnection.setReadTimeout(httpReadTimeout)
|
||||
val inputStream = httpConnection.getInputStream()
|
||||
val inputStream = httpConnection.getInputStream
|
||||
if (compress) {
|
||||
compressionCodec.compressedInputStream(inputStream)
|
||||
} else {
|
||||
|
|
|
@ -83,13 +83,13 @@ extends Broadcast[T](id) with Logging with Serializable {
|
|||
case None =>
|
||||
val start = System.nanoTime
|
||||
logInfo("Started reading broadcast variable " + id)
|
||||
|
||||
|
||||
// Initialize @transient variables that will receive garbage values from the master.
|
||||
resetWorkerVariables()
|
||||
|
||||
if (receiveBroadcast(id)) {
|
||||
value_ = TorrentBroadcast.unBlockifyObject[T](arrayOfBlocks, totalBytes, totalBlocks)
|
||||
|
||||
|
||||
// Store the merged copy in cache so that the next worker doesn't need to rebuild it.
|
||||
// This creates a tradeoff between memory usage and latency.
|
||||
// Storing copy doubles the memory footprint; not storing doubles deserialization cost.
|
||||
|
@ -122,14 +122,14 @@ extends Broadcast[T](id) with Logging with Serializable {
|
|||
while (attemptId > 0 && totalBlocks == -1) {
|
||||
TorrentBroadcast.synchronized {
|
||||
SparkEnv.get.blockManager.getSingle(metaId) match {
|
||||
case Some(x) =>
|
||||
case Some(x) =>
|
||||
val tInfo = x.asInstanceOf[TorrentInfo]
|
||||
totalBlocks = tInfo.totalBlocks
|
||||
totalBytes = tInfo.totalBytes
|
||||
arrayOfBlocks = new Array[TorrentBlock](totalBlocks)
|
||||
hasBlocks = 0
|
||||
|
||||
case None =>
|
||||
|
||||
case None =>
|
||||
Thread.sleep(500)
|
||||
}
|
||||
}
|
||||
|
@ -145,13 +145,13 @@ extends Broadcast[T](id) with Logging with Serializable {
|
|||
val pieceId = BroadcastHelperBlockId(broadcastId, "piece" + pid)
|
||||
TorrentBroadcast.synchronized {
|
||||
SparkEnv.get.blockManager.getSingle(pieceId) match {
|
||||
case Some(x) =>
|
||||
case Some(x) =>
|
||||
arrayOfBlocks(pid) = x.asInstanceOf[TorrentBlock]
|
||||
hasBlocks += 1
|
||||
SparkEnv.get.blockManager.putSingle(
|
||||
pieceId, arrayOfBlocks(pid), StorageLevel.MEMORY_AND_DISK, true)
|
||||
|
||||
case None =>
|
||||
|
||||
case None =>
|
||||
throw new SparkException("Failed to get " + pieceId + " of " + broadcastId)
|
||||
}
|
||||
}
|
||||
|
@ -166,21 +166,22 @@ private object TorrentBroadcast
|
|||
extends Logging {
|
||||
|
||||
private var initialized = false
|
||||
|
||||
def initialize(_isDriver: Boolean) {
|
||||
private var conf: SparkConf = null
|
||||
def initialize(_isDriver: Boolean, conf: SparkConf) {
|
||||
TorrentBroadcast.conf = conf //TODO: we might have to fix it in tests
|
||||
synchronized {
|
||||
if (!initialized) {
|
||||
initialized = true
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
def stop() {
|
||||
initialized = false
|
||||
}
|
||||
|
||||
val BLOCK_SIZE = System.getProperty("spark.broadcast.blockSize", "4096").toInt * 1024
|
||||
|
||||
lazy val BLOCK_SIZE = conf.get("spark.broadcast.blockSize", "4096").toInt * 1024
|
||||
|
||||
def blockifyObject[T](obj: T): TorrentInfo = {
|
||||
val byteArray = Utils.serialize[T](obj)
|
||||
val bais = new ByteArrayInputStream(byteArray)
|
||||
|
@ -209,7 +210,7 @@ extends Logging {
|
|||
}
|
||||
|
||||
def unBlockifyObject[T](arrayOfBlocks: Array[TorrentBlock],
|
||||
totalBytes: Int,
|
||||
totalBytes: Int,
|
||||
totalBlocks: Int): T = {
|
||||
var retByteArray = new Array[Byte](totalBytes)
|
||||
for (i <- 0 until totalBlocks) {
|
||||
|
@ -222,23 +223,23 @@ extends Logging {
|
|||
}
|
||||
|
||||
private[spark] case class TorrentBlock(
|
||||
blockID: Int,
|
||||
byteArray: Array[Byte])
|
||||
blockID: Int,
|
||||
byteArray: Array[Byte])
|
||||
extends Serializable
|
||||
|
||||
private[spark] case class TorrentInfo(
|
||||
@transient arrayOfBlocks : Array[TorrentBlock],
|
||||
totalBlocks: Int,
|
||||
totalBytes: Int)
|
||||
totalBlocks: Int,
|
||||
totalBytes: Int)
|
||||
extends Serializable {
|
||||
|
||||
@transient var hasBlocks = 0
|
||||
|
||||
@transient var hasBlocks = 0
|
||||
}
|
||||
|
||||
private[spark] class TorrentBroadcastFactory
|
||||
extends BroadcastFactory {
|
||||
|
||||
def initialize(isDriver: Boolean) { TorrentBroadcast.initialize(isDriver) }
|
||||
|
||||
def initialize(isDriver: Boolean, conf: SparkConf) { TorrentBroadcast.initialize(isDriver, conf) }
|
||||
|
||||
def newBroadcast[T](value_ : T, isLocal: Boolean, id: Long) =
|
||||
new TorrentBroadcast[T](value_, isLocal, id)
|
||||
|
|
|
@ -190,7 +190,7 @@ private[spark] object FaultToleranceTest extends App with Logging {
|
|||
/** Creates a SparkContext, which constructs a Client to interact with our cluster. */
|
||||
def createClient() = {
|
||||
if (sc != null) { sc.stop() }
|
||||
// Counter-hack: Because of a hack in SparkEnv#createFromSystemProperties() that changes this
|
||||
// Counter-hack: Because of a hack in SparkEnv#create() that changes this
|
||||
// property, we need to reset it.
|
||||
System.setProperty("spark.driver.port", "0")
|
||||
sc = new SparkContext(getMasterUrls(masters), "fault-tolerance", containerSparkHome)
|
||||
|
@ -417,4 +417,4 @@ private[spark] object Docker extends Logging {
|
|||
"docker ps -l -q".!(ProcessLogger(line => id = line))
|
||||
new DockerId(id)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
|
|
@ -22,7 +22,7 @@ import akka.actor.ActorSystem
|
|||
import org.apache.spark.deploy.worker.Worker
|
||||
import org.apache.spark.deploy.master.Master
|
||||
import org.apache.spark.util.Utils
|
||||
import org.apache.spark.Logging
|
||||
import org.apache.spark.{SparkConf, Logging}
|
||||
|
||||
import scala.collection.mutable.ArrayBuffer
|
||||
|
||||
|
@ -43,7 +43,8 @@ class LocalSparkCluster(numWorkers: Int, coresPerWorker: Int, memoryPerWorker: I
|
|||
logInfo("Starting a local Spark cluster with " + numWorkers + " workers.")
|
||||
|
||||
/* Start the Master */
|
||||
val (masterSystem, masterPort, _) = Master.startSystemAndActor(localHostname, 0, 0)
|
||||
val conf = new SparkConf(false)
|
||||
val (masterSystem, masterPort, _) = Master.startSystemAndActor(localHostname, 0, 0, conf)
|
||||
masterActorSystems += masterSystem
|
||||
val masterUrl = "spark://" + localHostname + ":" + masterPort
|
||||
val masters = Array(masterUrl)
|
||||
|
@ -55,7 +56,7 @@ class LocalSparkCluster(numWorkers: Int, coresPerWorker: Int, memoryPerWorker: I
|
|||
workerActorSystems += workerSystem
|
||||
}
|
||||
|
||||
return masters
|
||||
masters
|
||||
}
|
||||
|
||||
def stop() {
|
||||
|
|
|
@ -34,10 +34,10 @@ class SparkHadoopUtil {
|
|||
UserGroupInformation.setConfiguration(conf)
|
||||
|
||||
def runAsUser(user: String)(func: () => Unit) {
|
||||
// if we are already running as the user intended there is no reason to do the doAs. It
|
||||
// if we are already running as the user intended there is no reason to do the doAs. It
|
||||
// will actually break secure HDFS access as it doesn't fill in the credentials. Also if
|
||||
// the user is UNKNOWN then we shouldn't be creating a remote unknown user
|
||||
// (this is actually the path spark on yarn takes) since SPARK_USER is initialized only
|
||||
// the user is UNKNOWN then we shouldn't be creating a remote unknown user
|
||||
// (this is actually the path spark on yarn takes) since SPARK_USER is initialized only
|
||||
// in SparkContext.
|
||||
val currentUser = Option(System.getProperty("user.name")).
|
||||
getOrElse(SparkContext.SPARK_UNKNOWN_USER)
|
||||
|
@ -67,11 +67,15 @@ class SparkHadoopUtil {
|
|||
}
|
||||
|
||||
object SparkHadoopUtil {
|
||||
|
||||
private val hadoop = {
|
||||
val yarnMode = java.lang.Boolean.valueOf(System.getProperty("SPARK_YARN_MODE", System.getenv("SPARK_YARN_MODE")))
|
||||
val yarnMode = java.lang.Boolean.valueOf(
|
||||
System.getProperty("SPARK_YARN_MODE", System.getenv("SPARK_YARN_MODE")))
|
||||
if (yarnMode) {
|
||||
try {
|
||||
Class.forName("org.apache.spark.deploy.yarn.YarnSparkHadoopUtil").newInstance.asInstanceOf[SparkHadoopUtil]
|
||||
Class.forName("org.apache.spark.deploy.yarn.YarnSparkHadoopUtil")
|
||||
.newInstance()
|
||||
.asInstanceOf[SparkHadoopUtil]
|
||||
} catch {
|
||||
case th: Throwable => throw new SparkException("Unable to load YARN support", th)
|
||||
}
|
||||
|
|
|
@ -19,20 +19,19 @@ package org.apache.spark.deploy.client
|
|||
|
||||
import java.util.concurrent.TimeoutException
|
||||
|
||||
import scala.concurrent.duration._
|
||||
import scala.concurrent.Await
|
||||
import scala.concurrent.duration._
|
||||
|
||||
import akka.actor._
|
||||
import akka.pattern.ask
|
||||
import akka.remote.{RemotingLifecycleEvent, DisassociatedEvent}
|
||||
import akka.remote.{AssociationErrorEvent, DisassociatedEvent, RemotingLifecycleEvent}
|
||||
|
||||
import org.apache.spark.{SparkException, Logging}
|
||||
import org.apache.spark.{Logging, SparkConf, SparkException}
|
||||
import org.apache.spark.deploy.{ApplicationDescription, ExecutorState}
|
||||
import org.apache.spark.deploy.DeployMessages._
|
||||
import org.apache.spark.deploy.master.Master
|
||||
import org.apache.spark.util.AkkaUtils
|
||||
|
||||
|
||||
/**
|
||||
* The main class used to talk to a Spark deploy cluster. Takes a master URL, an app description,
|
||||
* and a listener for cluster events, and calls back the listener when various events occur.
|
||||
|
@ -43,7 +42,8 @@ private[spark] class Client(
|
|||
actorSystem: ActorSystem,
|
||||
masterUrls: Array[String],
|
||||
appDescription: ApplicationDescription,
|
||||
listener: ClientListener)
|
||||
listener: ClientListener,
|
||||
conf: SparkConf)
|
||||
extends Logging {
|
||||
|
||||
val REGISTRATION_TIMEOUT = 20.seconds
|
||||
|
@ -111,6 +111,12 @@ private[spark] class Client(
|
|||
}
|
||||
}
|
||||
|
||||
private def isPossibleMaster(remoteUrl: Address) = {
|
||||
masterUrls.map(s => Master.toAkkaUrl(s))
|
||||
.map(u => AddressFromURIString(u).hostPort)
|
||||
.contains(remoteUrl.hostPort)
|
||||
}
|
||||
|
||||
override def receive = {
|
||||
case RegisteredApplication(appId_, masterUrl) =>
|
||||
appId = appId_
|
||||
|
@ -146,6 +152,9 @@ private[spark] class Client(
|
|||
logWarning(s"Connection to $address failed; waiting for master to reconnect...")
|
||||
markDisconnected()
|
||||
|
||||
case AssociationErrorEvent(cause, _, address, _) if isPossibleMaster(address) =>
|
||||
logWarning(s"Could not connect to $address: $cause")
|
||||
|
||||
case StopClient =>
|
||||
markDead()
|
||||
sender ! true
|
||||
|
@ -178,7 +187,7 @@ private[spark] class Client(
|
|||
def stop() {
|
||||
if (actor != null) {
|
||||
try {
|
||||
val timeout = AkkaUtils.askTimeout
|
||||
val timeout = AkkaUtils.askTimeout(conf)
|
||||
val future = actor.ask(StopClient)(timeout)
|
||||
Await.result(future, timeout)
|
||||
} catch {
|
||||
|
|
|
@ -18,7 +18,7 @@
|
|||
package org.apache.spark.deploy.client
|
||||
|
||||
import org.apache.spark.util.{Utils, AkkaUtils}
|
||||
import org.apache.spark.{Logging}
|
||||
import org.apache.spark.{SparkConf, SparkContext, Logging}
|
||||
import org.apache.spark.deploy.{Command, ApplicationDescription}
|
||||
|
||||
private[spark] object TestClient {
|
||||
|
@ -45,11 +45,13 @@ private[spark] object TestClient {
|
|||
|
||||
def main(args: Array[String]) {
|
||||
val url = args(0)
|
||||
val (actorSystem, port) = AkkaUtils.createActorSystem("spark", Utils.localIpAddress, 0)
|
||||
val (actorSystem, port) = AkkaUtils.createActorSystem("spark", Utils.localIpAddress, 0,
|
||||
conf = new SparkConf)
|
||||
val desc = new ApplicationDescription(
|
||||
"TestClient", 1, 512, Command("spark.deploy.client.TestExecutor", Seq(), Map()), "dummy-spark-home", "ignored")
|
||||
"TestClient", 1, 512, Command("spark.deploy.client.TestExecutor", Seq(), Map()),
|
||||
"dummy-spark-home", "ignored")
|
||||
val listener = new TestListener
|
||||
val client = new Client(actorSystem, Array(url), desc, listener)
|
||||
val client = new Client(actorSystem, Array(url), desc, listener, new SparkConf)
|
||||
client.start()
|
||||
actorSystem.awaitTermination()
|
||||
}
|
||||
|
|
|
@ -29,7 +29,7 @@ import akka.pattern.ask
|
|||
import akka.remote.{DisassociatedEvent, RemotingLifecycleEvent}
|
||||
import akka.serialization.SerializationExtension
|
||||
|
||||
import org.apache.spark.{Logging, SparkException}
|
||||
import org.apache.spark.{SparkConf, SparkContext, Logging, SparkException}
|
||||
import org.apache.spark.deploy.{ApplicationDescription, ExecutorState}
|
||||
import org.apache.spark.deploy.DeployMessages._
|
||||
import org.apache.spark.deploy.master.MasterMessages._
|
||||
|
@ -38,14 +38,16 @@ import org.apache.spark.metrics.MetricsSystem
|
|||
import org.apache.spark.util.{AkkaUtils, Utils}
|
||||
|
||||
private[spark] class Master(host: String, port: Int, webUiPort: Int) extends Actor with Logging {
|
||||
import context.dispatcher
|
||||
import context.dispatcher // to use Akka's scheduler.schedule()
|
||||
|
||||
val conf = new SparkConf
|
||||
|
||||
val DATE_FORMAT = new SimpleDateFormat("yyyyMMddHHmmss") // For application IDs
|
||||
val WORKER_TIMEOUT = System.getProperty("spark.worker.timeout", "60").toLong * 1000
|
||||
val RETAINED_APPLICATIONS = System.getProperty("spark.deploy.retainedApplications", "200").toInt
|
||||
val REAPER_ITERATIONS = System.getProperty("spark.dead.worker.persistence", "15").toInt
|
||||
val RECOVERY_DIR = System.getProperty("spark.deploy.recoveryDirectory", "")
|
||||
val RECOVERY_MODE = System.getProperty("spark.deploy.recoveryMode", "NONE")
|
||||
val WORKER_TIMEOUT = conf.get("spark.worker.timeout", "60").toLong * 1000
|
||||
val RETAINED_APPLICATIONS = conf.get("spark.deploy.retainedApplications", "200").toInt
|
||||
val REAPER_ITERATIONS = conf.get("spark.dead.worker.persistence", "15").toInt
|
||||
val RECOVERY_DIR = conf.get("spark.deploy.recoveryDirectory", "")
|
||||
val RECOVERY_MODE = conf.get("spark.deploy.recoveryMode", "NONE")
|
||||
|
||||
var nextAppNumber = 0
|
||||
val workers = new HashSet[WorkerInfo]
|
||||
|
@ -63,8 +65,8 @@ private[spark] class Master(host: String, port: Int, webUiPort: Int) extends Act
|
|||
|
||||
Utils.checkHost(host, "Expected hostname")
|
||||
|
||||
val masterMetricsSystem = MetricsSystem.createMetricsSystem("master")
|
||||
val applicationMetricsSystem = MetricsSystem.createMetricsSystem("applications")
|
||||
val masterMetricsSystem = MetricsSystem.createMetricsSystem("master", conf)
|
||||
val applicationMetricsSystem = MetricsSystem.createMetricsSystem("applications", conf)
|
||||
val masterSource = new MasterSource(this)
|
||||
|
||||
val webUi = new MasterWebUI(this, webUiPort)
|
||||
|
@ -86,7 +88,7 @@ private[spark] class Master(host: String, port: Int, webUiPort: Int) extends Act
|
|||
// As a temporary workaround before better ways of configuring memory, we allow users to set
|
||||
// a flag that will perform round-robin scheduling across the nodes (spreading out each app
|
||||
// among all the nodes) instead of trying to consolidate each app onto a small # of nodes.
|
||||
val spreadOutApps = System.getProperty("spark.deploy.spreadOut", "true").toBoolean
|
||||
val spreadOutApps = conf.get("spark.deploy.spreadOut", "true").toBoolean
|
||||
|
||||
override def preStart() {
|
||||
logInfo("Starting Spark master at " + masterUrl)
|
||||
|
@ -103,7 +105,7 @@ private[spark] class Master(host: String, port: Int, webUiPort: Int) extends Act
|
|||
persistenceEngine = RECOVERY_MODE match {
|
||||
case "ZOOKEEPER" =>
|
||||
logInfo("Persisting recovery state to ZooKeeper")
|
||||
new ZooKeeperPersistenceEngine(SerializationExtension(context.system))
|
||||
new ZooKeeperPersistenceEngine(SerializationExtension(context.system), conf)
|
||||
case "FILESYSTEM" =>
|
||||
logInfo("Persisting recovery state to directory: " + RECOVERY_DIR)
|
||||
new FileSystemPersistenceEngine(RECOVERY_DIR, SerializationExtension(context.system))
|
||||
|
@ -113,7 +115,7 @@ private[spark] class Master(host: String, port: Int, webUiPort: Int) extends Act
|
|||
|
||||
leaderElectionAgent = RECOVERY_MODE match {
|
||||
case "ZOOKEEPER" =>
|
||||
context.actorOf(Props(classOf[ZooKeeperLeaderElectionAgent], self, masterUrl))
|
||||
context.actorOf(Props(classOf[ZooKeeperLeaderElectionAgent], self, masterUrl, conf))
|
||||
case _ =>
|
||||
context.actorOf(Props(classOf[MonarchyLeaderAgent], self))
|
||||
}
|
||||
|
@ -495,7 +497,7 @@ private[spark] class Master(host: String, port: Int, webUiPort: Int) extends Act
|
|||
removeWorker(worker)
|
||||
} else {
|
||||
if (worker.lastHeartbeat < currentTime - ((REAPER_ITERATIONS + 1) * WORKER_TIMEOUT))
|
||||
workers -= worker // we've seen this DEAD worker in the UI, etc. for long enough; cull it
|
||||
workers -= worker // we've seen this DEAD worker in the UI, etc. for long enough; cull it
|
||||
}
|
||||
}
|
||||
}
|
||||
|
@ -507,8 +509,9 @@ private[spark] object Master {
|
|||
val sparkUrlRegex = "spark://([^:]+):([0-9]+)".r
|
||||
|
||||
def main(argStrings: Array[String]) {
|
||||
val args = new MasterArguments(argStrings)
|
||||
val (actorSystem, _, _) = startSystemAndActor(args.host, args.port, args.webUiPort)
|
||||
val conf = new SparkConf
|
||||
val args = new MasterArguments(argStrings, conf)
|
||||
val (actorSystem, _, _) = startSystemAndActor(args.host, args.port, args.webUiPort, conf)
|
||||
actorSystem.awaitTermination()
|
||||
}
|
||||
|
||||
|
@ -522,10 +525,12 @@ private[spark] object Master {
|
|||
}
|
||||
}
|
||||
|
||||
def startSystemAndActor(host: String, port: Int, webUiPort: Int): (ActorSystem, Int, Int) = {
|
||||
val (actorSystem, boundPort) = AkkaUtils.createActorSystem(systemName, host, port)
|
||||
def startSystemAndActor(host: String, port: Int, webUiPort: Int, conf: SparkConf)
|
||||
: (ActorSystem, Int, Int) =
|
||||
{
|
||||
val (actorSystem, boundPort) = AkkaUtils.createActorSystem(systemName, host, port, conf = conf)
|
||||
val actor = actorSystem.actorOf(Props(classOf[Master], host, boundPort, webUiPort), actorName)
|
||||
val timeout = AkkaUtils.askTimeout
|
||||
val timeout = AkkaUtils.askTimeout(conf)
|
||||
val respFuture = actor.ask(RequestWebUIPort)(timeout)
|
||||
val resp = Await.result(respFuture, timeout).asInstanceOf[WebUIPortResponse]
|
||||
(actorSystem, boundPort, resp.webUIBoundPort)
|
||||
|
|
|
@ -18,16 +18,17 @@
|
|||
package org.apache.spark.deploy.master
|
||||
|
||||
import org.apache.spark.util.{Utils, IntParam}
|
||||
import org.apache.spark.SparkConf
|
||||
|
||||
/**
|
||||
* Command-line parser for the master.
|
||||
*/
|
||||
private[spark] class MasterArguments(args: Array[String]) {
|
||||
private[spark] class MasterArguments(args: Array[String], conf: SparkConf) {
|
||||
var host = Utils.localHostName()
|
||||
var port = 7077
|
||||
var webUiPort = 8080
|
||||
|
||||
// Check for settings in environment variables
|
||||
|
||||
// Check for settings in environment variables
|
||||
if (System.getenv("SPARK_MASTER_HOST") != null) {
|
||||
host = System.getenv("SPARK_MASTER_HOST")
|
||||
}
|
||||
|
@ -37,8 +38,8 @@ private[spark] class MasterArguments(args: Array[String]) {
|
|||
if (System.getenv("SPARK_MASTER_WEBUI_PORT") != null) {
|
||||
webUiPort = System.getenv("SPARK_MASTER_WEBUI_PORT").toInt
|
||||
}
|
||||
if (System.getProperty("master.ui.port") != null) {
|
||||
webUiPort = System.getProperty("master.ui.port").toInt
|
||||
if (conf.contains("master.ui.port")) {
|
||||
webUiPort = conf.get("master.ui.port").toInt
|
||||
}
|
||||
|
||||
parse(args.toList)
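The web UI port lookup above moves from a raw system property to SparkConf. A minimal sketch of the same decision, reusing the 8080 default from this file (runnable in the Spark shell; the value names are illustrative only):

    import org.apache.spark.SparkConf

    val conf = new SparkConf
    val webUiPort =
      if (conf.contains("master.ui.port")) conf.get("master.ui.port").toInt
      else 8080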
@ -23,7 +23,7 @@ import org.apache.zookeeper._
|
|||
import org.apache.zookeeper.Watcher.Event.KeeperState
|
||||
import org.apache.zookeeper.data.Stat
|
||||
|
||||
import org.apache.spark.Logging
|
||||
import org.apache.spark.{SparkConf, Logging}
|
||||
|
||||
/**
|
||||
* Provides a Scala-side interface to the standard ZooKeeper client, with the addition of retry
|
||||
|
@ -35,8 +35,9 @@ import org.apache.spark.Logging
|
|||
* Additionally, all commands sent to ZooKeeper will be retried until they either fail too many
|
||||
* times or a semantic exception is thrown (e.g., "node already exists").
|
||||
*/
|
||||
private[spark] class SparkZooKeeperSession(zkWatcher: SparkZooKeeperWatcher) extends Logging {
|
||||
val ZK_URL = System.getProperty("spark.deploy.zookeeper.url", "")
|
||||
private[spark] class SparkZooKeeperSession(zkWatcher: SparkZooKeeperWatcher,
|
||||
conf: SparkConf) extends Logging {
|
||||
val ZK_URL = conf.get("spark.deploy.zookeeper.url", "")
|
||||
|
||||
val ZK_ACL = ZooDefs.Ids.OPEN_ACL_UNSAFE
|
||||
val ZK_TIMEOUT_MILLIS = 30000
|
||||
|
|
|
@ -21,16 +21,17 @@ import akka.actor.ActorRef
|
|||
import org.apache.zookeeper._
|
||||
import org.apache.zookeeper.Watcher.Event.EventType
|
||||
|
||||
import org.apache.spark.Logging
|
||||
import org.apache.spark.{SparkConf, Logging}
|
||||
import org.apache.spark.deploy.master.MasterMessages._
|
||||
|
||||
private[spark] class ZooKeeperLeaderElectionAgent(val masterActor: ActorRef, masterUrl: String)
|
||||
private[spark] class ZooKeeperLeaderElectionAgent(val masterActor: ActorRef,
|
||||
masterUrl: String, conf: SparkConf)
|
||||
extends LeaderElectionAgent with SparkZooKeeperWatcher with Logging {
|
||||
|
||||
val WORKING_DIR = System.getProperty("spark.deploy.zookeeper.dir", "/spark") + "/leader_election"
|
||||
val WORKING_DIR = conf.get("spark.deploy.zookeeper.dir", "/spark") + "/leader_election"
|
||||
|
||||
private val watcher = new ZooKeeperWatcher()
|
||||
private val zk = new SparkZooKeeperSession(this)
|
||||
private val zk = new SparkZooKeeperSession(this, conf)
|
||||
private var status = LeadershipStatus.NOT_LEADER
|
||||
private var myLeaderFile: String = _
|
||||
private var leaderUrl: String = _
|
||||
|
|
|
@ -17,19 +17,19 @@
|
|||
|
||||
package org.apache.spark.deploy.master
|
||||
|
||||
import org.apache.spark.Logging
|
||||
import org.apache.spark.{SparkConf, Logging}
|
||||
import org.apache.zookeeper._
|
||||
|
||||
import akka.serialization.Serialization
|
||||
|
||||
class ZooKeeperPersistenceEngine(serialization: Serialization)
|
||||
class ZooKeeperPersistenceEngine(serialization: Serialization, conf: SparkConf)
|
||||
extends PersistenceEngine
|
||||
with SparkZooKeeperWatcher
|
||||
with Logging
|
||||
{
|
||||
val WORKING_DIR = System.getProperty("spark.deploy.zookeeper.dir", "/spark") + "/master_status"
|
||||
val WORKING_DIR = conf.get("spark.deploy.zookeeper.dir", "/spark") + "/master_status"
|
||||
|
||||
val zk = new SparkZooKeeperSession(this)
|
||||
val zk = new SparkZooKeeperSession(this, conf)
|
||||
|
||||
zk.connect()
|
||||
|
||||
|
|
|
@ -31,7 +31,7 @@ import org.apache.spark.util.{AkkaUtils, Utils}
|
|||
*/
|
||||
private[spark]
|
||||
class MasterWebUI(val master: Master, requestedPort: Int) extends Logging {
|
||||
val timeout = AkkaUtils.askTimeout
|
||||
val timeout = AkkaUtils.askTimeout(master.conf)
|
||||
val host = Utils.localHostName()
|
||||
val port = requestedPort
|
||||
|
||||
|
|
|
@ -25,23 +25,14 @@ import scala.collection.mutable.HashMap
|
|||
import scala.concurrent.duration._
|
||||
|
||||
import akka.actor._
|
||||
import akka.remote.{ DisassociatedEvent, RemotingLifecycleEvent}
|
||||
|
||||
import org.apache.spark.{SparkException, Logging}
|
||||
import akka.remote.{DisassociatedEvent, RemotingLifecycleEvent}
|
||||
import org.apache.spark.{Logging, SparkConf, SparkException}
|
||||
import org.apache.spark.deploy.{ExecutorDescription, ExecutorState}
|
||||
import org.apache.spark.deploy.DeployMessages._
|
||||
import org.apache.spark.deploy.master.Master
|
||||
import org.apache.spark.deploy.worker.ui.WorkerWebUI
|
||||
import org.apache.spark.metrics.MetricsSystem
|
||||
import org.apache.spark.util.{Utils, AkkaUtils}
|
||||
import org.apache.spark.deploy.DeployMessages.WorkerStateResponse
|
||||
import org.apache.spark.deploy.DeployMessages.RegisterWorkerFailed
|
||||
import org.apache.spark.deploy.DeployMessages.KillExecutor
|
||||
import org.apache.spark.deploy.DeployMessages.ExecutorStateChanged
|
||||
import org.apache.spark.deploy.DeployMessages.Heartbeat
|
||||
import org.apache.spark.deploy.DeployMessages.RegisteredWorker
|
||||
import org.apache.spark.deploy.DeployMessages.LaunchExecutor
|
||||
import org.apache.spark.deploy.DeployMessages.RegisterWorker
|
||||
import org.apache.spark.util.{AkkaUtils, Utils}
|
||||
|
||||
/**
|
||||
* @param masterUrls Each url should look like spark://host:port.
|
||||
|
@ -53,7 +44,8 @@ private[spark] class Worker(
|
|||
cores: Int,
|
||||
memory: Int,
|
||||
masterUrls: Array[String],
|
||||
workDirPath: String = null)
|
||||
workDirPath: String = null,
|
||||
val conf: SparkConf)
|
||||
extends Actor with Logging {
|
||||
import context.dispatcher
|
||||
|
||||
|
@ -63,7 +55,7 @@ private[spark] class Worker(
|
|||
val DATE_FORMAT = new SimpleDateFormat("yyyyMMddHHmmss") // For worker and executor IDs
|
||||
|
||||
// Send a heartbeat every (heartbeat timeout) / 4 milliseconds
|
||||
val HEARTBEAT_MILLIS = System.getProperty("spark.worker.timeout", "60").toLong * 1000 / 4
|
||||
val HEARTBEAT_MILLIS = conf.get("spark.worker.timeout", "60").toLong * 1000 / 4
|
||||
|
||||
val REGISTRATION_TIMEOUT = 20.seconds
|
||||
val REGISTRATION_RETRIES = 3
|
||||
|
@ -92,7 +84,7 @@ private[spark] class Worker(
|
|||
var coresUsed = 0
|
||||
var memoryUsed = 0
|
||||
|
||||
val metricsSystem = MetricsSystem.createMetricsSystem("worker")
|
||||
val metricsSystem = MetricsSystem.createMetricsSystem("worker", conf)
|
||||
val workerSource = new WorkerSource(this)
|
||||
|
||||
def coresFree: Int = cores - coresUsed
|
||||
|
@ -275,6 +267,7 @@ private[spark] class Worker(
|
|||
}
|
||||
|
||||
private[spark] object Worker {
|
||||
|
||||
def main(argStrings: Array[String]) {
|
||||
val args = new WorkerArguments(argStrings)
|
||||
val (actorSystem, _) = startSystemAndActor(args.host, args.port, args.webUiPort, args.cores,
|
||||
|
@ -283,13 +276,16 @@ private[spark] object Worker {
|
|||
}
|
||||
|
||||
def startSystemAndActor(host: String, port: Int, webUiPort: Int, cores: Int, memory: Int,
|
||||
masterUrls: Array[String], workDir: String, workerNumber: Option[Int] = None)
|
||||
: (ActorSystem, Int) = {
|
||||
masterUrls: Array[String], workDir: String, workerNumber: Option[Int] = None)
|
||||
: (ActorSystem, Int) =
|
||||
{
|
||||
// The LocalSparkCluster runs multiple local sparkWorkerX actor systems
|
||||
val conf = new SparkConf
|
||||
val systemName = "sparkWorker" + workerNumber.map(_.toString).getOrElse("")
|
||||
val (actorSystem, boundPort) = AkkaUtils.createActorSystem(systemName, host, port)
|
||||
val (actorSystem, boundPort) = AkkaUtils.createActorSystem(systemName, host, port,
|
||||
conf = conf)
|
||||
actorSystem.actorOf(Props(classOf[Worker], host, boundPort, webUiPort, cores, memory,
|
||||
masterUrls, workDir), name = "Worker")
|
||||
masterUrls, workDir, conf), name = "Worker")
|
||||
(actorSystem, boundPort)
|
||||
}
|
||||
|
||||
|
|
|
@ -22,7 +22,7 @@ import java.io.File
|
|||
import javax.servlet.http.HttpServletRequest
|
||||
import org.eclipse.jetty.server.{Handler, Server}
|
||||
|
||||
import org.apache.spark.Logging
|
||||
import org.apache.spark.{Logging, SparkConf}
|
||||
import org.apache.spark.deploy.worker.Worker
|
||||
import org.apache.spark.ui.{JettyUtils, UIUtils}
|
||||
import org.apache.spark.ui.JettyUtils._
|
||||
|
@ -34,10 +34,10 @@ import org.apache.spark.util.{AkkaUtils, Utils}
|
|||
private[spark]
|
||||
class WorkerWebUI(val worker: Worker, val workDir: File, requestedPort: Option[Int] = None)
|
||||
extends Logging {
|
||||
val timeout = AkkaUtils.askTimeout
|
||||
val timeout = AkkaUtils.askTimeout(worker.conf)
|
||||
val host = Utils.localHostName()
|
||||
val port = requestedPort.getOrElse(
|
||||
System.getProperty("worker.ui.port", WorkerWebUI.DEFAULT_PORT).toInt)
|
||||
worker.conf.get("worker.ui.port", WorkerWebUI.DEFAULT_PORT).toInt)
|
||||
|
||||
var server: Option[Server] = None
|
||||
var boundPort: Option[Int] = None
|
||||
|
|
|
@ -22,7 +22,7 @@ import java.nio.ByteBuffer
|
|||
import akka.actor._
|
||||
import akka.remote._
|
||||
|
||||
import org.apache.spark.Logging
|
||||
import org.apache.spark.{SparkConf, SparkContext, Logging}
|
||||
import org.apache.spark.TaskState.TaskState
|
||||
import org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages._
|
||||
import org.apache.spark.util.{Utils, AkkaUtils}
|
||||
|
@ -98,10 +98,10 @@ private[spark] object CoarseGrainedExecutorBackend {
|
|||
// Create a new ActorSystem to run the backend, because we can't create a SparkEnv / Executor
|
||||
// before getting started with all our system properties, etc
|
||||
val (actorSystem, boundPort) = AkkaUtils.createActorSystem("sparkExecutor", hostname, 0,
|
||||
indestructible = true)
|
||||
indestructible = true, conf = new SparkConf)
|
||||
// set it
|
||||
val sparkHostPort = hostname + ":" + boundPort
|
||||
System.setProperty("spark.hostPort", sparkHostPort)
|
||||
// conf.set("spark.hostPort", sparkHostPort)
|
||||
actorSystem.actorOf(
|
||||
Props(classOf[CoarseGrainedExecutorBackend], driverUrl, executorId, sparkHostPort, cores),
|
||||
name = "Executor")
|
||||
|
|
|
@ -48,8 +48,6 @@ private[spark] class Executor(
|
|||
|
||||
private val EMPTY_BYTE_BUFFER = ByteBuffer.wrap(new Array[Byte](0))
|
||||
|
||||
initLogging()
|
||||
|
||||
// No ip or host:port - just hostname
|
||||
Utils.checkHost(slaveHostname, "Expected executed slave to be a hostname")
|
||||
// must not have port specified.
|
||||
|
@ -58,16 +56,17 @@ private[spark] class Executor(
|
|||
   // Make sure the local hostname we report matches the cluster scheduler's name for this host
   Utils.setCustomHostname(slaveHostname)

-  // Set spark.* system properties from executor arg
-  for ((key, value) <- properties) {
-    System.setProperty(key, value)
-  }
+  // Set spark.* properties from executor arg
+  val conf = new SparkConf(false)
+  conf.setAll(properties)

   // If we are in yarn mode, systems can have different disk layouts so we must set it
   // to what Yarn on this system said was available. This will be used later when SparkEnv
   // created.
-  if (java.lang.Boolean.valueOf(System.getenv("SPARK_YARN_MODE"))) {
-    System.setProperty("spark.local.dir", getYarnLocalDirs())
+  if (java.lang.Boolean.valueOf(
+      System.getProperty("SPARK_YARN_MODE", System.getenv("SPARK_YARN_MODE"))))
+  {
+    conf.set("spark.local.dir", getYarnLocalDirs())
   }

   // Create our ClassLoader and set it on this thread
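The hunk above has the executor build its own SparkConf from the key/value pairs shipped by the driver instead of mutating global system properties. A sketch of that construction (the property values are hypothetical, and it assumes setAll accepts a sequence of key/value pairs, as the executor code uses it):

    import org.apache.spark.SparkConf

    val properties = Seq(("spark.app.name", "demo"), ("spark.local.dir", "/tmp/spark-local"))
    val conf = new SparkConf(false)  // loadDefaults = false: start from an empty conf
    conf.setAll(properties)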
@ -108,7 +107,7 @@ private[spark] class Executor(
|
|||
// Initialize Spark environment (using system properties read above)
|
||||
private val env = {
|
||||
if (!isLocal) {
|
||||
val _env = SparkEnv.createFromSystemProperties(executorId, slaveHostname, 0,
|
||||
val _env = SparkEnv.create(conf, executorId, slaveHostname, 0,
|
||||
isDriver = false, isLocal = false)
|
||||
SparkEnv.set(_env)
|
||||
_env.metricsSystem.registerSource(executorSource)
|
||||
|
@ -142,11 +141,6 @@ private[spark] class Executor(
|
|||
val tr = runningTasks.get(taskId)
|
||||
if (tr != null) {
|
||||
tr.kill()
|
||||
// We remove the task also in the finally block in TaskRunner.run.
|
||||
// The reason we need to remove it here is because killTask might be called before the task
|
||||
// is even launched, and never reaching that finally block. ConcurrentHashMap's remove is
|
||||
// idempotent.
|
||||
runningTasks.remove(taskId)
|
||||
}
|
||||
}
|
||||
|
||||
|
@ -168,6 +162,8 @@ private[spark] class Executor(
|
|||
class TaskRunner(execBackend: ExecutorBackend, taskId: Long, serializedTask: ByteBuffer)
|
||||
extends Runnable {
|
||||
|
||||
object TaskKilledException extends Exception
|
||||
|
||||
@volatile private var killed = false
|
||||
@volatile private var task: Task[Any] = _
|
||||
|
||||
|
@ -201,9 +197,11 @@ private[spark] class Executor(
|
|||
// If this task has been killed before we deserialized it, let's quit now. Otherwise,
|
||||
// continue executing the task.
|
||||
if (killed) {
|
||||
logInfo("Executor killed task " + taskId)
|
||||
execBackend.statusUpdate(taskId, TaskState.KILLED, ser.serialize(TaskKilled))
|
||||
return
|
||||
// Throw an exception rather than returning, because returning within a try{} block
|
||||
// causes a NonLocalReturnControl exception to be thrown. The NonLocalReturnControl
|
||||
// exception will be caught by the catch block, leading to an incorrect ExceptionFailure
|
||||
// for the task.
|
||||
throw TaskKilledException
|
||||
}
|
||||
|
||||
attemptedTask = Some(task)
|
||||
|
@ -217,9 +215,7 @@ private[spark] class Executor(
|
|||
|
||||
// If the task has been killed, let's fail it.
|
||||
if (task.killed) {
|
||||
logInfo("Executor killed task " + taskId)
|
||||
execBackend.statusUpdate(taskId, TaskState.KILLED, ser.serialize(TaskKilled))
|
||||
return
|
||||
throw TaskKilledException
|
||||
}
|
||||
|
||||
val resultSer = SparkEnv.get.serializer.newInstance()
|
||||
|
@ -261,6 +257,11 @@ private[spark] class Executor(
|
|||
execBackend.statusUpdate(taskId, TaskState.FAILED, ser.serialize(reason))
|
||||
}
|
||||
|
||||
case TaskKilledException => {
|
||||
logInfo("Executor killed task " + taskId)
|
||||
execBackend.statusUpdate(taskId, TaskState.KILLED, ser.serialize(TaskKilled))
|
||||
}
|
||||
|
||||
case t: Throwable => {
|
||||
val serviceTime = (System.currentTimeMillis() - taskStart).toInt
|
||||
val metrics = attemptedTask.flatMap(t => t.metrics)
|
||||
|
@ -303,7 +304,7 @@ private[spark] class Executor(
|
|||
* new classes defined by the REPL as the user types code
|
||||
*/
|
||||
private def addReplClassLoaderIfNeeded(parent: ClassLoader): ClassLoader = {
|
||||
val classUri = System.getProperty("spark.repl.class.uri")
|
||||
val classUri = conf.get("spark.repl.class.uri", null)
|
||||
if (classUri != null) {
|
||||
logInfo("Using REPL class URI: " + classUri)
|
||||
try {
|
||||
|
@ -331,12 +332,12 @@ private[spark] class Executor(
|
|||
// Fetch missing dependencies
|
||||
for ((name, timestamp) <- newFiles if currentFiles.getOrElse(name, -1L) < timestamp) {
|
||||
logInfo("Fetching " + name + " with timestamp " + timestamp)
|
||||
Utils.fetchFile(name, new File(SparkFiles.getRootDirectory))
|
||||
Utils.fetchFile(name, new File(SparkFiles.getRootDirectory), conf)
|
||||
currentFiles(name) = timestamp
|
||||
}
|
||||
for ((name, timestamp) <- newJars if currentJars.getOrElse(name, -1L) < timestamp) {
|
||||
logInfo("Fetching " + name + " with timestamp " + timestamp)
|
||||
Utils.fetchFile(name, new File(SparkFiles.getRootDirectory))
|
||||
Utils.fetchFile(name, new File(SparkFiles.getRootDirectory), conf)
|
||||
currentJars(name) = timestamp
|
||||
// Add it to our class loader
|
||||
val localName = name.split("/").last
|
||||
|
|
|
@ -22,6 +22,7 @@ import java.io.{InputStream, OutputStream}
|
|||
import com.ning.compress.lzf.{LZFInputStream, LZFOutputStream}
|
||||
|
||||
import org.xerial.snappy.{SnappyInputStream, SnappyOutputStream}
|
||||
import org.apache.spark.{SparkEnv, SparkConf}
|
||||
|
||||
|
||||
/**
|
||||
|
@ -37,15 +38,15 @@ trait CompressionCodec {
|
|||
|
||||
|
||||
private[spark] object CompressionCodec {

-  def createCodec(): CompressionCodec = {
-    createCodec(System.getProperty(
+  def createCodec(conf: SparkConf): CompressionCodec = {
+    createCodec(conf, conf.get(
       "spark.io.compression.codec", classOf[LZFCompressionCodec].getName))
   }

-  def createCodec(codecName: String): CompressionCodec = {
-    Class.forName(codecName, true, Thread.currentThread.getContextClassLoader)
-      .newInstance().asInstanceOf[CompressionCodec]
+  def createCodec(conf: SparkConf, codecName: String): CompressionCodec = {
+    val ctor = Class.forName(codecName, true, Thread.currentThread.getContextClassLoader)
+      .getConstructor(classOf[SparkConf])
+    ctor.newInstance(conf).asInstanceOf[CompressionCodec]
   }
}
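Since codecs are now constructed with a SparkConf argument (reflectively above, or directly), a small usage sketch, assuming the codec classes live in org.apache.spark.io as the scaladoc links suggest:

    import java.io.ByteArrayOutputStream
    import org.apache.spark.SparkConf
    import org.apache.spark.io.LZFCompressionCodec

    val codec = new LZFCompressionCodec(new SparkConf)
    val out = codec.compressedOutputStream(new ByteArrayOutputStream())
    out.write("hello".getBytes)
    out.close()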
@ -53,7 +54,7 @@ private[spark] object CompressionCodec {
|
|||
/**
|
||||
* LZF implementation of [[org.apache.spark.io.CompressionCodec]].
|
||||
*/
|
||||
class LZFCompressionCodec extends CompressionCodec {
|
||||
class LZFCompressionCodec(conf: SparkConf) extends CompressionCodec {
|
||||
|
||||
override def compressedOutputStream(s: OutputStream): OutputStream = {
|
||||
new LZFOutputStream(s).setFinishBlockOnFlush(true)
|
||||
|
@ -67,10 +68,10 @@ class LZFCompressionCodec extends CompressionCodec {
|
|||
* Snappy implementation of [[org.apache.spark.io.CompressionCodec]].
|
||||
* Block size can be configured by spark.io.compression.snappy.block.size.
|
||||
*/
|
||||
class SnappyCompressionCodec extends CompressionCodec {
|
||||
class SnappyCompressionCodec(conf: SparkConf) extends CompressionCodec {
|
||||
|
||||
override def compressedOutputStream(s: OutputStream): OutputStream = {
|
||||
val blockSize = System.getProperty("spark.io.compression.snappy.block.size", "32768").toInt
|
||||
val blockSize = conf.get("spark.io.compression.snappy.block.size", "32768").toInt
|
||||
new SnappyOutputStream(s, blockSize)
|
||||
}
|
||||
|
||||
|
|
|
@ -26,7 +26,6 @@ import scala.util.matching.Regex
|
|||
import org.apache.spark.Logging
|
||||
|
||||
private[spark] class MetricsConfig(val configFile: Option[String]) extends Logging {
|
||||
initLogging()
|
||||
|
||||
val DEFAULT_PREFIX = "*"
|
||||
val INSTANCE_REGEX = "^(\\*|[a-zA-Z]+)\\.(.+)".r
|
||||
|
|
|
@ -24,7 +24,7 @@ import java.util.concurrent.TimeUnit
|
|||
|
||||
import scala.collection.mutable
|
||||
|
||||
import org.apache.spark.Logging
|
||||
import org.apache.spark.{SparkConf, Logging}
|
||||
import org.apache.spark.metrics.sink.{MetricsServlet, Sink}
|
||||
import org.apache.spark.metrics.source.Source
|
||||
|
||||
|
@ -62,10 +62,10 @@ import org.apache.spark.metrics.source.Source
|
|||
*
|
||||
* [options] is the specific property of this source or sink.
|
||||
*/
|
||||
private[spark] class MetricsSystem private (val instance: String) extends Logging {
|
||||
initLogging()
|
||||
private[spark] class MetricsSystem private (val instance: String,
|
||||
conf: SparkConf) extends Logging {
|
||||
|
||||
val confFile = System.getProperty("spark.metrics.conf")
|
||||
val confFile = conf.get("spark.metrics.conf", null)
|
||||
val metricsConfig = new MetricsConfig(Option(confFile))
|
||||
|
||||
val sinks = new mutable.ArrayBuffer[Sink]
|
||||
|
@ -159,5 +159,6 @@ private[spark] object MetricsSystem {
|
|||
}
|
||||
}
|
||||
|
||||
def createMetricsSystem(instance: String): MetricsSystem = new MetricsSystem(instance)
|
||||
def createMetricsSystem(instance: String, conf: SparkConf): MetricsSystem =
|
||||
new MetricsSystem(instance, conf)
|
||||
}
|
||||
|
|
|
@ -37,7 +37,7 @@ import scala.concurrent.duration._
|
|||
|
||||
import org.apache.spark.util.Utils
|
||||
|
||||
private[spark] class ConnectionManager(port: Int) extends Logging {
|
||||
private[spark] class ConnectionManager(port: Int, conf: SparkConf) extends Logging {
|
||||
|
||||
class MessageStatus(
|
||||
val message: Message,
|
||||
|
@ -54,22 +54,22 @@ private[spark] class ConnectionManager(port: Int) extends Logging {
|
|||
private val selector = SelectorProvider.provider.openSelector()
|
||||
|
||||
private val handleMessageExecutor = new ThreadPoolExecutor(
|
||||
System.getProperty("spark.core.connection.handler.threads.min","20").toInt,
|
||||
System.getProperty("spark.core.connection.handler.threads.max","60").toInt,
|
||||
System.getProperty("spark.core.connection.handler.threads.keepalive","60").toInt, TimeUnit.SECONDS,
|
||||
conf.get("spark.core.connection.handler.threads.min", "20").toInt,
|
||||
conf.get("spark.core.connection.handler.threads.max", "60").toInt,
|
||||
conf.get("spark.core.connection.handler.threads.keepalive", "60").toInt, TimeUnit.SECONDS,
|
||||
new LinkedBlockingDeque[Runnable]())
|
||||
|
||||
private val handleReadWriteExecutor = new ThreadPoolExecutor(
|
||||
System.getProperty("spark.core.connection.io.threads.min","4").toInt,
|
||||
System.getProperty("spark.core.connection.io.threads.max","32").toInt,
|
||||
System.getProperty("spark.core.connection.io.threads.keepalive","60").toInt, TimeUnit.SECONDS,
|
||||
conf.get("spark.core.connection.io.threads.min", "4").toInt,
|
||||
conf.get("spark.core.connection.io.threads.max", "32").toInt,
|
||||
conf.get("spark.core.connection.io.threads.keepalive", "60").toInt, TimeUnit.SECONDS,
|
||||
new LinkedBlockingDeque[Runnable]())
|
||||
|
||||
// Use a different, yet smaller, thread pool - infrequently used with very short lived tasks : which should be executed asap
|
||||
private val handleConnectExecutor = new ThreadPoolExecutor(
|
||||
System.getProperty("spark.core.connection.connect.threads.min","1").toInt,
|
||||
System.getProperty("spark.core.connection.connect.threads.max","8").toInt,
|
||||
System.getProperty("spark.core.connection.connect.threads.keepalive","60").toInt, TimeUnit.SECONDS,
|
||||
conf.get("spark.core.connection.connect.threads.min", "1").toInt,
|
||||
conf.get("spark.core.connection.connect.threads.max", "8").toInt,
|
||||
conf.get("spark.core.connection.connect.threads.keepalive", "60").toInt, TimeUnit.SECONDS,
|
||||
new LinkedBlockingDeque[Runnable]())
|
||||
|
||||
private val serverChannel = ServerSocketChannel.open()
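Each pool above is sized from three conf keys with string defaults. The same construction stands alone as follows (only the handler pool is shown, with the defaults documented in the code above):

    import java.util.concurrent.{LinkedBlockingDeque, ThreadPoolExecutor, TimeUnit}
    import org.apache.spark.SparkConf

    val conf = new SparkConf
    val handleMessageExecutor = new ThreadPoolExecutor(
      conf.get("spark.core.connection.handler.threads.min", "20").toInt,
      conf.get("spark.core.connection.handler.threads.max", "60").toInt,
      conf.get("spark.core.connection.handler.threads.keepalive", "60").toInt, TimeUnit.SECONDS,
      new LinkedBlockingDeque[Runnable]())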
@ -594,7 +594,7 @@ private[spark] class ConnectionManager(port: Int) extends Logging {
|
|||
private[spark] object ConnectionManager {
|
||||
|
||||
def main(args: Array[String]) {
|
||||
val manager = new ConnectionManager(9999)
|
||||
val manager = new ConnectionManager(9999, new SparkConf)
|
||||
manager.onReceiveMessage((msg: Message, id: ConnectionManagerId) => {
|
||||
println("Received [" + msg + "] from [" + id + "]")
|
||||
None
|
||||
|
|
|
@ -19,19 +19,19 @@ package org.apache.spark.network
|
|||
|
||||
import java.nio.ByteBuffer
|
||||
import java.net.InetAddress
|
||||
import org.apache.spark.SparkConf
|
||||
|
||||
private[spark] object ReceiverTest {
|
||||
|
||||
def main(args: Array[String]) {
|
||||
val manager = new ConnectionManager(9999)
|
||||
val manager = new ConnectionManager(9999, new SparkConf)
|
||||
println("Started connection manager with id = " + manager.id)
|
||||
|
||||
manager.onReceiveMessage((msg: Message, id: ConnectionManagerId) => {
|
||||
|
||||
manager.onReceiveMessage((msg: Message, id: ConnectionManagerId) => {
|
||||
/*println("Received [" + msg + "] from [" + id + "] at " + System.currentTimeMillis)*/
|
||||
val buffer = ByteBuffer.wrap("response".getBytes())
|
||||
val buffer = ByteBuffer.wrap("response".getBytes)
|
||||
Some(Message.createBufferMessage(buffer, msg.id))
|
||||
})
|
||||
Thread.currentThread.join()
|
||||
Thread.currentThread.join()
|
||||
}
|
||||
}
|
||||
|
||||
|
|
|
@ -19,29 +19,29 @@ package org.apache.spark.network
|
|||
|
||||
import java.nio.ByteBuffer
|
||||
import java.net.InetAddress
|
||||
import org.apache.spark.SparkConf
|
||||
|
||||
private[spark] object SenderTest {
|
||||
|
||||
def main(args: Array[String]) {
|
||||
|
||||
|
||||
if (args.length < 2) {
|
||||
println("Usage: SenderTest <target host> <target port>")
|
||||
System.exit(1)
|
||||
}
|
||||
|
||||
|
||||
val targetHost = args(0)
|
||||
val targetPort = args(1).toInt
|
||||
val targetConnectionManagerId = new ConnectionManagerId(targetHost, targetPort)
|
||||
|
||||
val manager = new ConnectionManager(0)
|
||||
val manager = new ConnectionManager(0, new SparkConf)
|
||||
println("Started connection manager with id = " + manager.id)
|
||||
|
||||
manager.onReceiveMessage((msg: Message, id: ConnectionManagerId) => {
|
||||
manager.onReceiveMessage((msg: Message, id: ConnectionManagerId) => {
|
||||
println("Received [" + msg + "] from [" + id + "]")
|
||||
None
|
||||
})
|
||||
|
||||
val size = 100 * 1024 * 1024
|
||||
|
||||
val size = 100 * 1024 * 1024
|
||||
val buffer = ByteBuffer.allocate(size).put(Array.tabulate[Byte](size)(x => x.toByte))
|
||||
buffer.flip
|
||||
|
||||
|
@ -50,7 +50,7 @@ private[spark] object SenderTest {
|
|||
val count = 100
|
||||
(0 until count).foreach(i => {
|
||||
val dataMessage = Message.createBufferMessage(buffer.duplicate)
|
||||
val startTime = System.currentTimeMillis
|
||||
val startTime = System.currentTimeMillis
|
||||
/*println("Started timer at " + startTime)*/
|
||||
val responseStr = manager.sendMessageReliablySync(targetConnectionManagerId, dataMessage) match {
|
||||
case Some(response) =>
|
||||
|
|
|
@ -23,20 +23,20 @@ import io.netty.buffer.ByteBuf
|
|||
import io.netty.channel.ChannelHandlerContext
|
||||
import io.netty.util.CharsetUtil
|
||||
|
||||
import org.apache.spark.Logging
|
||||
import org.apache.spark.{SparkContext, SparkConf, Logging}
|
||||
import org.apache.spark.network.ConnectionManagerId
|
||||
|
||||
import scala.collection.JavaConverters._
|
||||
import org.apache.spark.storage.BlockId
|
||||
|
||||
|
||||
private[spark] class ShuffleCopier extends Logging {
|
||||
private[spark] class ShuffleCopier(conf: SparkConf) extends Logging {
|
||||
|
||||
def getBlock(host: String, port: Int, blockId: BlockId,
|
||||
resultCollectCallback: (BlockId, Long, ByteBuf) => Unit) {
|
||||
|
||||
val handler = new ShuffleCopier.ShuffleClientHandler(resultCollectCallback)
|
||||
val connectTimeout = System.getProperty("spark.shuffle.netty.connect.timeout", "60000").toInt
|
||||
val connectTimeout = conf.get("spark.shuffle.netty.connect.timeout", "60000").toInt
|
||||
val fc = new FileClient(handler, connectTimeout)
|
||||
|
||||
try {
|
||||
|
@ -104,10 +104,10 @@ private[spark] object ShuffleCopier extends Logging {
|
|||
val threads = if (args.length > 3) args(3).toInt else 10
|
||||
|
||||
val copiers = Executors.newFixedThreadPool(80)
|
||||
val tasks = (for (i <- Range(0, threads)) yield {
|
||||
val tasks = (for (i <- Range(0, threads)) yield {
|
||||
Executors.callable(new Runnable() {
|
||||
def run() {
|
||||
val copier = new ShuffleCopier()
|
||||
val copier = new ShuffleCopier(new SparkConf)
|
||||
copier.getBlock(host, port, blockId, echoResultCollectCallBack)
|
||||
}
|
||||
})
|
||||
|
|
|
@ -18,12 +18,12 @@
|
|||
package org.apache.spark.rdd
|
||||
|
||||
import java.io.IOException
|
||||
|
||||
import scala.reflect.ClassTag
|
||||
|
||||
import org.apache.hadoop.fs.Path
|
||||
import org.apache.spark._
|
||||
import org.apache.spark.broadcast.Broadcast
|
||||
import org.apache.spark.deploy.SparkHadoopUtil
|
||||
import org.apache.hadoop.conf.Configuration
|
||||
import org.apache.hadoop.fs.Path
|
||||
|
||||
private[spark] class CheckpointRDDPartition(val index: Int) extends Partition {}
|
||||
|
||||
|
@ -34,6 +34,8 @@ private[spark]
|
|||
class CheckpointRDD[T: ClassTag](sc: SparkContext, val checkpointPath: String)
|
||||
extends RDD[T](sc, Nil) {
|
||||
|
||||
val broadcastedConf = sc.broadcast(new SerializableWritable(sc.hadoopConfiguration))
|
||||
|
||||
@transient val fs = new Path(checkpointPath).getFileSystem(sc.hadoopConfiguration)
|
||||
|
||||
override def getPartitions: Array[Partition] = {
|
||||
|
@ -65,7 +67,7 @@ class CheckpointRDD[T: ClassTag](sc: SparkContext, val checkpointPath: String)
|
|||
|
||||
override def compute(split: Partition, context: TaskContext): Iterator[T] = {
|
||||
val file = new Path(checkpointPath, CheckpointRDD.splitIdToFile(split.index))
|
||||
CheckpointRDD.readFromFile(file, context)
|
||||
CheckpointRDD.readFromFile(file, broadcastedConf, context)
|
||||
}
|
||||
|
||||
override def checkpoint() {
|
||||
|
@ -74,15 +76,18 @@ class CheckpointRDD[T: ClassTag](sc: SparkContext, val checkpointPath: String)
|
|||
}
|
||||
|
||||
private[spark] object CheckpointRDD extends Logging {
|
||||
|
||||
def splitIdToFile(splitId: Int): String = {
|
||||
"part-%05d".format(splitId)
|
||||
}
|
||||
|
||||
def writeToFile[T](path: String, blockSize: Int = -1)(ctx: TaskContext, iterator: Iterator[T]) {
|
||||
def writeToFile[T](
|
||||
path: String,
|
||||
broadcastedConf: Broadcast[SerializableWritable[Configuration]],
|
||||
blockSize: Int = -1
|
||||
)(ctx: TaskContext, iterator: Iterator[T]) {
|
||||
val env = SparkEnv.get
|
||||
val outputDir = new Path(path)
|
||||
val fs = outputDir.getFileSystem(SparkHadoopUtil.get.newConfiguration())
|
||||
val fs = outputDir.getFileSystem(broadcastedConf.value.value)
|
||||
|
||||
val finalOutputName = splitIdToFile(ctx.partitionId)
|
||||
val finalOutputPath = new Path(outputDir, finalOutputName)
|
||||
|
@ -92,7 +97,7 @@ private[spark] object CheckpointRDD extends Logging {
|
|||
throw new IOException("Checkpoint failed: temporary path " +
|
||||
tempOutputPath + " already exists")
|
||||
}
|
||||
val bufferSize = System.getProperty("spark.buffer.size", "65536").toInt
|
||||
val bufferSize = env.conf.get("spark.buffer.size", "65536").toInt
|
||||
|
||||
val fileOutputStream = if (blockSize < 0) {
|
||||
fs.create(tempOutputPath, false, bufferSize)
|
||||
|
@ -119,10 +124,14 @@ private[spark] object CheckpointRDD extends Logging {
|
|||
}
|
||||
}
|
||||
|
||||
def readFromFile[T](path: Path, context: TaskContext): Iterator[T] = {
|
||||
def readFromFile[T](
|
||||
path: Path,
|
||||
broadcastedConf: Broadcast[SerializableWritable[Configuration]],
|
||||
context: TaskContext
|
||||
): Iterator[T] = {
|
||||
val env = SparkEnv.get
|
||||
val fs = path.getFileSystem(SparkHadoopUtil.get.newConfiguration())
|
||||
val bufferSize = System.getProperty("spark.buffer.size", "65536").toInt
|
||||
val fs = path.getFileSystem(broadcastedConf.value.value)
|
||||
val bufferSize = env.conf.get("spark.buffer.size", "65536").toInt
|
||||
val fileInputStream = fs.open(path, bufferSize)
|
||||
val serializer = env.serializer.newInstance()
|
||||
val deserializeStream = serializer.deserializeStream(fileInputStream)
|
||||
|
@ -144,8 +153,10 @@ private[spark] object CheckpointRDD extends Logging {
|
|||
val sc = new SparkContext(cluster, "CheckpointRDD Test")
|
||||
val rdd = sc.makeRDD(1 to 10, 10).flatMap(x => 1 to 10000)
|
||||
val path = new Path(hdfsPath, "temp")
|
||||
val fs = path.getFileSystem(SparkHadoopUtil.get.newConfiguration())
|
||||
sc.runJob(rdd, CheckpointRDD.writeToFile(path.toString, 1024) _)
|
||||
val conf = SparkHadoopUtil.get.newConfiguration()
|
||||
val fs = path.getFileSystem(conf)
|
||||
val broadcastedConf = sc.broadcast(new SerializableWritable(conf))
|
||||
sc.runJob(rdd, CheckpointRDD.writeToFile(path.toString, broadcastedConf, 1024) _)
|
||||
val cpRDD = new CheckpointRDD[Int](sc, path.toString)
|
||||
assert(cpRDD.partitions.length == rdd.partitions.length, "Number of partitions is not the same")
|
||||
assert(cpRDD.collect.toList == rdd.collect.toList, "Data of partitions not the same")
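The checkpoint code now ships the Hadoop configuration to tasks via a broadcast variable rather than rebuilding it on each executor. A minimal sketch of that idiom (local master, illustrative app name):

    import org.apache.spark.{SerializableWritable, SparkContext}

    val sc = new SparkContext("local", "broadcast-conf-demo")
    val broadcastedConf = sc.broadcast(new SerializableWritable(sc.hadoopConfiguration))
    // Inside a task, broadcastedConf.value.value yields the Hadoop Configuration.
    sc.stop()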
@ -114,7 +114,7 @@ class CoGroupedRDD[K](@transient var rdds: Seq[RDD[_ <: Product2[K, _]]], part:
|
|||
map.changeValue(k, update)
|
||||
}
|
||||
|
||||
val ser = SparkEnv.get.serializerManager.get(serializerClass)
|
||||
val ser = SparkEnv.get.serializerManager.get(serializerClass, SparkEnv.get.conf)
|
||||
for ((dep, depNum) <- split.deps.zipWithIndex) dep match {
|
||||
case NarrowCoGroupSplitDep(rdd, _, itsSplit) => {
|
||||
// Read them from the parent
|
||||
|
|
|
@ -40,12 +40,15 @@ import org.apache.hadoop.mapreduce.SparkHadoopMapReduceUtil
|
|||
import org.apache.hadoop.mapreduce.{Job => NewAPIHadoopJob}
|
||||
import org.apache.hadoop.mapreduce.{RecordWriter => NewRecordWriter}
|
||||
|
||||
import com.clearspring.analytics.stream.cardinality.HyperLogLog
|
||||
|
||||
import org.apache.spark._
|
||||
import org.apache.spark.SparkContext._
|
||||
import org.apache.spark.partial.{BoundedDouble, PartialResult}
|
||||
import org.apache.spark.Aggregator
|
||||
import org.apache.spark.Partitioner
|
||||
import org.apache.spark.Partitioner.defaultPartitioner
|
||||
import org.apache.spark.util.SerializableHyperLogLog
|
||||
|
||||
/**
|
||||
* Extra functions available on RDDs of (key, value) pairs through an implicit conversion.
|
||||
|
@ -207,6 +210,45 @@ class PairRDDFunctions[K: ClassTag, V: ClassTag](self: RDD[(K, V)])
|
|||
self.map(_._1).countByValueApprox(timeout, confidence)
|
||||
}
|
||||
|
||||
  /**
   * Return approximate number of distinct values for each key in this RDD.
   * The accuracy of approximation can be controlled through the relative standard deviation
   * (relativeSD) parameter, which also controls the amount of memory used. Lower values result in
   * more accurate counts but increase the memory footprint and vice versa. Uses the provided
   * Partitioner to partition the output RDD.
   */
  def countApproxDistinctByKey(relativeSD: Double, partitioner: Partitioner): RDD[(K, Long)] = {
    val createHLL = (v: V) => new SerializableHyperLogLog(new HyperLogLog(relativeSD)).add(v)
    val mergeValueHLL = (hll: SerializableHyperLogLog, v: V) => hll.add(v)
    val mergeHLL = (h1: SerializableHyperLogLog, h2: SerializableHyperLogLog) => h1.merge(h2)

    combineByKey(createHLL, mergeValueHLL, mergeHLL, partitioner).mapValues(_.value.cardinality())
  }
|
||||
|
||||
/**
|
||||
* Return approximate number of distinct values for each key in this RDD.
|
||||
* The accuracy of approximation can be controlled through the relative standard deviation
|
||||
* (relativeSD) parameter, which also controls the amount of memory used. Lower values result in
|
||||
* more accurate counts but increase the memory footprint and vise versa. HashPartitions the
|
||||
* output RDD into numPartitions.
|
||||
*
|
||||
*/
|
||||
def countApproxDistinctByKey(relativeSD: Double, numPartitions: Int): RDD[(K, Long)] = {
|
||||
countApproxDistinctByKey(relativeSD, new HashPartitioner(numPartitions))
|
||||
}
|
||||
|
||||
/**
|
||||
* Return approximate number of distinct values for each key this RDD.
|
||||
* The accuracy of approximation can be controlled through the relative standard deviation
|
||||
* (relativeSD) parameter, which also controls the amount of memory used. Lower values result in
|
||||
* more accurate counts but increase the memory footprint and vise versa. The default value of
|
||||
* relativeSD is 0.05. Hash-partitions the output RDD using the existing partitioner/parallelism
|
||||
* level.
|
||||
*/
|
||||
def countApproxDistinctByKey(relativeSD: Double = 0.05): RDD[(K, Long)] = {
|
||||
countApproxDistinctByKey(relativeSD, defaultPartitioner(self))
|
||||
}
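A quick usage sketch for the new per-key estimator (local master, synthetic data; the exact per-key answer here is about 100 distinct values, and the estimate should land within roughly the requested relative standard deviation):

    import org.apache.spark.SparkContext
    import org.apache.spark.SparkContext._

    val sc = new SparkContext("local", "approx-distinct-by-key-demo")
    val pairs = sc.parallelize(1 to 10000).map(i => (i % 10, i % 100))
    pairs.countApproxDistinctByKey(0.05).collect().foreach(println)
    sc.stop()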
/**
|
||||
* Merge the values for each key using an associative reduce function. This will also perform
|
||||
* the merging locally on each mapper before sending results to a reducer, similarly to a
|
||||
|
|
|
@ -0,0 +1,110 @@
|
|||
/*
|
||||
* Licensed to the Apache Software Foundation (ASF) under one or more
|
||||
* contributor license agreements. See the NOTICE file distributed with
|
||||
* this work for additional information regarding copyright ownership.
|
||||
* The ASF licenses this file to You under the Apache License, Version 2.0
|
||||
* (the "License"); you may not use this file except in compliance with
|
||||
* the License. You may obtain a copy of the License at
|
||||
*
|
||||
* http://www.apache.org/licenses/LICENSE-2.0
|
||||
*
|
||||
* Unless required by applicable law or agreed to in writing, software
|
||||
* distributed under the License is distributed on an "AS IS" BASIS,
|
||||
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
* See the License for the specific language governing permissions and
|
||||
* limitations under the License.
|
||||
*/
|
||||
|
||||
package org.apache.spark.rdd
|
||||
|
||||
import scala.reflect.ClassTag
|
||||
import java.io.{ObjectOutputStream, IOException}
|
||||
import org.apache.spark.{TaskContext, OneToOneDependency, SparkContext, Partition}
|
||||
|
||||
|
||||
/**
 * Class representing partitions of PartitionerAwareUnionRDD, which maintains the list of
 * corresponding partitions of parent RDDs.
 */
|
||||
private[spark]
|
||||
class PartitionerAwareUnionRDDPartition(
|
||||
@transient val rdds: Seq[RDD[_]],
|
||||
val idx: Int
|
||||
) extends Partition {
|
||||
var parents = rdds.map(_.partitions(idx)).toArray
|
||||
|
||||
override val index = idx
|
||||
override def hashCode(): Int = idx
|
||||
|
||||
@throws(classOf[IOException])
|
||||
private def writeObject(oos: ObjectOutputStream) {
|
||||
// Update the reference to parent partition at the time of task serialization
|
||||
parents = rdds.map(_.partitions(index)).toArray
|
||||
oos.defaultWriteObject()
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Class representing an RDD that can take multiple RDDs partitioned by the same partitioner and
|
||||
* unify them into a single RDD while preserving the partitioner. So m RDDs with p partitions each
|
||||
* will be unified to a single RDD with p partitions and the same partitioner. The preferred
|
||||
* location for each partition of the unified RDD will be the most common preferred location
|
||||
* of the corresponding partitions of the parent RDDs. For example, location of partition 0
|
||||
* of the unified RDD will be where most of partition 0 of the parent RDDs are located.
|
||||
*/
|
||||
private[spark]
|
||||
class PartitionerAwareUnionRDD[T: ClassTag](
|
||||
sc: SparkContext,
|
||||
var rdds: Seq[RDD[T]]
|
||||
) extends RDD[T](sc, rdds.map(x => new OneToOneDependency(x))) {
|
||||
require(rdds.length > 0)
|
||||
require(rdds.flatMap(_.partitioner).toSet.size == 1,
|
||||
"Parent RDDs have different partitioners: " + rdds.flatMap(_.partitioner))
|
||||
|
||||
override val partitioner = rdds.head.partitioner
|
||||
|
||||
override def getPartitions: Array[Partition] = {
|
||||
val numPartitions = partitioner.get.numPartitions
|
||||
(0 until numPartitions).map(index => {
|
||||
new PartitionerAwareUnionRDDPartition(rdds, index)
|
||||
}).toArray
|
||||
}
|
||||
|
||||
// Get the location where most of the partitions of parent RDDs are located
|
||||
override def getPreferredLocations(s: Partition): Seq[String] = {
|
||||
logDebug("Finding preferred location for " + this + ", partition " + s.index)
|
||||
val parentPartitions = s.asInstanceOf[PartitionerAwareUnionRDDPartition].parents
|
||||
val locations = rdds.zip(parentPartitions).flatMap {
|
||||
case (rdd, part) => {
|
||||
val parentLocations = currPrefLocs(rdd, part)
|
||||
logDebug("Location of " + rdd + " partition " + part.index + " = " + parentLocations)
|
||||
parentLocations
|
||||
}
|
||||
}
|
||||
val location = if (locations.isEmpty) {
|
||||
None
|
||||
} else {
|
||||
// Find the location that maximum number of parent partitions prefer
|
||||
Some(locations.groupBy(x => x).maxBy(_._2.length)._1)
|
||||
}
|
||||
logDebug("Selected location for " + this + ", partition " + s.index + " = " + location)
|
||||
location.toSeq
|
||||
}
|
||||
|
||||
override def compute(s: Partition, context: TaskContext): Iterator[T] = {
|
||||
val parentPartitions = s.asInstanceOf[PartitionerAwareUnionRDDPartition].parents
|
||||
rdds.zip(parentPartitions).iterator.flatMap {
|
||||
case (rdd, p) => rdd.iterator(p, context)
|
||||
}
|
||||
}
|
||||
|
||||
override def clearDependencies() {
|
||||
super.clearDependencies()
|
||||
rdds = null
|
||||
}
|
||||
|
||||
// Get the *current* preferred locations from the DAGScheduler (as opposed to the static ones)
|
||||
private def currPrefLocs(rdd: RDD[_], part: Partition): Seq[String] = {
|
||||
rdd.context.getPreferredLocs(rdd, part.index).map(tl => tl.host)
|
||||
}
|
||||
}
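The preferred-location rule in getPreferredLocations reduces to "pick the host that the largest number of parent partitions prefer". The core of it runs standalone (hypothetical host names, just to exercise the rule):

    val locations = Seq("hostA", "hostB", "hostA", "hostC", "hostA")
    val location = if (locations.isEmpty) {
      None
    } else {
      Some(locations.groupBy(x => x).maxBy(_._2.length)._1)
    }
    println(location)  // Some(hostA)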
@ -33,6 +33,7 @@ import org.apache.hadoop.io.Text
|
|||
import org.apache.hadoop.mapred.TextOutputFormat
|
||||
|
||||
import it.unimi.dsi.fastutil.objects.{Object2LongOpenHashMap => OLMap}
|
||||
import com.clearspring.analytics.stream.cardinality.HyperLogLog
|
||||
|
||||
import org.apache.spark.Partitioner._
|
||||
import org.apache.spark.api.java.JavaRDD
|
||||
|
@ -41,7 +42,7 @@ import org.apache.spark.partial.CountEvaluator
|
|||
import org.apache.spark.partial.GroupedCountEvaluator
|
||||
import org.apache.spark.partial.PartialResult
|
||||
import org.apache.spark.storage.StorageLevel
|
||||
import org.apache.spark.util.{Utils, BoundedPriorityQueue}
|
||||
import org.apache.spark.util.{Utils, BoundedPriorityQueue, SerializableHyperLogLog}
|
||||
|
||||
import org.apache.spark.SparkContext._
|
||||
import org.apache.spark._
|
||||
|
@ -81,6 +82,7 @@ abstract class RDD[T: ClassTag](
|
|||
def this(@transient oneParent: RDD[_]) =
|
||||
this(oneParent.context , List(new OneToOneDependency(oneParent)))
|
||||
|
||||
private[spark] def conf = sc.conf
|
||||
// =======================================================================
|
||||
// Methods that should be implemented by subclasses of RDD
|
||||
// =======================================================================
|
||||
|
@ -788,6 +790,19 @@ abstract class RDD[T: ClassTag](
|
|||
sc.runApproximateJob(this, countPartition, evaluator, timeout)
|
||||
}
|
||||
|
||||
  /**
   * Return approximate number of distinct elements in the RDD.
   *
   * The accuracy of approximation can be controlled through the relative standard deviation
   * (relativeSD) parameter, which also controls the amount of memory used. Lower values result in
   * more accurate counts but increase the memory footprint and vice versa. The default value of
   * relativeSD is 0.05.
   */
  def countApproxDistinct(relativeSD: Double = 0.05): Long = {
    val zeroCounter = new SerializableHyperLogLog(new HyperLogLog(relativeSD))
    aggregate(zeroCounter)(_.add(_), _.merge(_)).value.cardinality()
  }
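And the RDD-level counterpart in use (local master; the exact answer for this data is 1000 distinct values, and the HyperLogLog estimate is typically within a few percent at relativeSD = 0.05):

    import org.apache.spark.SparkContext

    val sc = new SparkContext("local", "approx-distinct-demo")
    val rdd = sc.parallelize(1 to 100000).map(_ % 1000)
    println(rdd.countApproxDistinct(0.05))
    sc.stop()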
/**
|
||||
* Take the first num elements of the RDD. It works by first scanning one partition, and use the
|
||||
* results from that partition to estimate the number of additional partitions needed to satisfy
|
||||
|
@ -938,7 +953,7 @@ abstract class RDD[T: ClassTag](
|
|||
private var storageLevel: StorageLevel = StorageLevel.NONE
|
||||
|
||||
/** Record user function generating this RDD. */
|
||||
@transient private[spark] val origin = Utils.formatSparkCallSite
|
||||
@transient private[spark] val origin = sc.getCallSite
|
||||
|
||||
private[spark] def elementClassTag: ClassTag[T] = classTag[T]
|
||||
|
||||
|
|
|
@ -22,7 +22,7 @@ import scala.reflect.ClassTag
|
|||
import org.apache.hadoop.fs.Path
|
||||
import org.apache.hadoop.conf.Configuration
|
||||
|
||||
import org.apache.spark.{Partition, SparkException, Logging}
|
||||
import org.apache.spark.{SerializableWritable, Partition, SparkException, Logging}
|
||||
import org.apache.spark.scheduler.{ResultTask, ShuffleMapTask}
|
||||
|
||||
/**
|
||||
|
@ -40,7 +40,7 @@ private[spark] object CheckpointState extends Enumeration {
|
|||
* manages the post-checkpoint state by providing the updated partitions, iterator and preferred locations
|
||||
* of the checkpointed RDD.
|
||||
*/
|
||||
private[spark] class RDDCheckpointData[T: ClassTag](rdd: RDD[T])
|
||||
private[spark] class RDDCheckpointData[T: ClassTag](@transient rdd: RDD[T])
|
||||
extends Logging with Serializable {
|
||||
|
||||
import CheckpointState._
|
||||
|
@ -85,14 +85,21 @@ private[spark] class RDDCheckpointData[T: ClassTag](rdd: RDD[T])
|
|||
|
||||
// Create the output path for the checkpoint
|
||||
val path = new Path(rdd.context.checkpointDir.get, "rdd-" + rdd.id)
|
||||
val fs = path.getFileSystem(new Configuration())
|
||||
val fs = path.getFileSystem(rdd.context.hadoopConfiguration)
|
||||
if (!fs.mkdirs(path)) {
|
||||
throw new SparkException("Failed to create checkpoint path " + path)
|
||||
}
|
||||
|
||||
// Save to file, and reload it as an RDD
|
||||
rdd.context.runJob(rdd, CheckpointRDD.writeToFile(path.toString) _)
|
||||
val broadcastedConf = rdd.context.broadcast(
|
||||
new SerializableWritable(rdd.context.hadoopConfiguration))
|
||||
rdd.context.runJob(rdd, CheckpointRDD.writeToFile(path.toString, broadcastedConf) _)
|
||||
val newRDD = new CheckpointRDD[T](rdd.context, path.toString)
|
||||
if (newRDD.partitions.size != rdd.partitions.size) {
|
||||
throw new SparkException(
|
||||
"Checkpoint RDD " + newRDD + "("+ newRDD.partitions.size + ") has different " +
|
||||
"number of partitions than original RDD " + rdd + "(" + rdd.partitions.size + ")")
|
||||
}
|
||||
|
||||
// Change the dependencies and partitions of the RDD
|
||||
RDDCheckpointData.synchronized {
|
||||
|
@ -101,8 +108,8 @@ private[spark] class RDDCheckpointData[T: ClassTag](rdd: RDD[T])
|
|||
rdd.markCheckpointed(newRDD) // Update the RDD's dependencies and partitions
|
||||
cpState = Checkpointed
|
||||
RDDCheckpointData.clearTaskCaches()
|
||||
logInfo("Done checkpointing RDD " + rdd.id + ", new parent is RDD " + newRDD.id)
|
||||
}
|
||||
logInfo("Done checkpointing RDD " + rdd.id + " to " + path + ", new parent is RDD " + newRDD.id)
|
||||
}
|
||||
|
||||
// Get preferred location of a split after checkpointing
|
||||
|
|
|
@ -59,7 +59,7 @@ class ShuffledRDD[K, V, P <: Product2[K, V] : ClassTag](
|
|||
override def compute(split: Partition, context: TaskContext): Iterator[P] = {
|
||||
val shuffledId = dependencies.head.asInstanceOf[ShuffleDependency[K, V]].shuffleId
|
||||
SparkEnv.get.shuffleFetcher.fetch[P](shuffledId, split.index, context,
|
||||
SparkEnv.get.serializerManager.get(serializerClass))
|
||||
SparkEnv.get.serializerManager.get(serializerClass, SparkEnv.get.conf))
|
||||
}
|
||||
|
||||
override def clearDependencies() {
|
||||
|
|
|
@ -93,7 +93,7 @@ private[spark] class SubtractedRDD[K: ClassTag, V: ClassTag, W: ClassTag](
|
|||
|
||||
override def compute(p: Partition, context: TaskContext): Iterator[(K, V)] = {
|
||||
val partition = p.asInstanceOf[CoGroupPartition]
|
||||
val serializer = SparkEnv.get.serializerManager.get(serializerClass)
|
||||
val serializer = SparkEnv.get.serializerManager.get(serializerClass, SparkEnv.get.conf)
|
||||
val map = new JHashMap[K, ArrayBuffer[V]]
|
||||
def getSeq(k: K): ArrayBuffer[V] = {
|
||||
val seq = map.get(k)
|
||||
|
|
|
@ -159,7 +159,8 @@ class DAGScheduler(
|
|||
val activeJobs = new HashSet[ActiveJob]
|
||||
val resultStageToJob = new HashMap[Stage, ActiveJob]
|
||||
|
||||
val metadataCleaner = new MetadataCleaner(MetadataCleanerType.DAG_SCHEDULER, this.cleanup)
|
||||
val metadataCleaner = new MetadataCleaner(
|
||||
MetadataCleanerType.DAG_SCHEDULER, this.cleanup, env.conf)
|
||||
|
||||
/**
|
||||
* Starts the event processing actor. The actor has two responsibilities:
|
||||
|
|
|
@ -32,7 +32,7 @@ import scala.collection.JavaConversions._
|
|||
/**
|
||||
* Parses and holds information about inputFormat (and files) specified as a parameter.
|
||||
*/
|
||||
class InputFormatInfo(val configuration: Configuration, val inputFormatClazz: Class[_],
|
||||
class InputFormatInfo(val configuration: Configuration, val inputFormatClazz: Class[_],
|
||||
val path: String) extends Logging {
|
||||
|
||||
var mapreduceInputFormat: Boolean = false
|
||||
|
@ -40,7 +40,7 @@ class InputFormatInfo(val configuration: Configuration, val inputFormatClazz: Cl
|
|||
|
||||
validate()
|
||||
|
||||
override def toString(): String = {
|
||||
override def toString: String = {
|
||||
"InputFormatInfo " + super.toString + " .. inputFormatClazz " + inputFormatClazz + ", path : " + path
|
||||
}
|
||||
|
||||
|
@ -125,7 +125,7 @@ class InputFormatInfo(val configuration: Configuration, val inputFormatClazz: Cl
|
|||
}
|
||||
|
||||
private def findPreferredLocations(): Set[SplitInfo] = {
|
||||
logDebug("mapreduceInputFormat : " + mapreduceInputFormat + ", mapredInputFormat : " + mapredInputFormat +
|
||||
logDebug("mapreduceInputFormat : " + mapreduceInputFormat + ", mapredInputFormat : " + mapredInputFormat +
|
||||
", inputFormatClazz : " + inputFormatClazz)
|
||||
if (mapreduceInputFormat) {
|
||||
return prefLocsFromMapreduceInputFormat()
|
||||
|
@ -143,14 +143,14 @@ class InputFormatInfo(val configuration: Configuration, val inputFormatClazz: Cl
|
|||
object InputFormatInfo {
|
||||
/**
|
||||
Computes the preferred locations based on input(s) and returned a location to block map.
|
||||
Typical use of this method for allocation would follow some algo like this
|
||||
(which is what we currently do in YARN branch) :
|
||||
Typical use of this method for allocation would follow some algo like this:
|
||||
|
||||
a) For each host, count number of splits hosted on that host.
|
||||
b) Decrement the currently allocated containers on that host.
|
||||
c) Compute rack info for each host and update rack -> count map based on (b).
|
||||
d) Allocate nodes based on (c)
|
||||
e) On the allocation result, ensure that we dont allocate "too many" jobs on a single node
|
||||
(even if data locality on that is very high) : this is to prevent fragility of job if a single
|
||||
e) On the allocation result, ensure that we dont allocate "too many" jobs on a single node
|
||||
(even if data locality on that is very high) : this is to prevent fragility of job if a single
|
||||
(or small set of) hosts go down.
|
||||
|
||||
go to (a) until required nodes are allocated.
|
||||
|
|
|
@ -32,7 +32,9 @@ private[spark] object ResultTask {
|
|||
// expensive on the master node if it needs to launch thousands of tasks.
|
||||
val serializedInfoCache = new TimeStampedHashMap[Int, Array[Byte]]
|
||||
|
||||
val metadataCleaner = new MetadataCleaner(MetadataCleanerType.RESULT_TASK, serializedInfoCache.clearOldValues)
|
||||
// TODO: This object shouldn't have global variables
|
||||
val metadataCleaner = new MetadataCleaner(
|
||||
MetadataCleanerType.RESULT_TASK, serializedInfoCache.clearOldValues, new SparkConf)
|
||||
|
||||
def serializeInfo(stageId: Int, rdd: RDD[_], func: (TaskContext, Iterator[_]) => _): Array[Byte] = {
|
||||
synchronized {
|
||||
|
|
|
@ -20,7 +20,7 @@ package org.apache.spark.scheduler
|
|||
import java.io.{FileInputStream, InputStream}
|
||||
import java.util.{NoSuchElementException, Properties}
|
||||
|
||||
import org.apache.spark.Logging
|
||||
import org.apache.spark.{SparkConf, Logging}
|
||||
|
||||
import scala.xml.XML
|
||||
|
||||
|
@ -49,10 +49,10 @@ private[spark] class FIFOSchedulableBuilder(val rootPool: Pool)
|
|||
}
|
||||
}
|
||||
|
||||
private[spark] class FairSchedulableBuilder(val rootPool: Pool)
|
||||
private[spark] class FairSchedulableBuilder(val rootPool: Pool, conf: SparkConf)
|
||||
extends SchedulableBuilder with Logging {
|
||||
|
||||
val schedulerAllocFile = Option(System.getProperty("spark.scheduler.allocation.file"))
|
||||
val schedulerAllocFile = conf.getOption("spark.scheduler.allocation.file")
|
||||
val DEFAULT_SCHEDULER_FILE = "fairscheduler.xml"
|
||||
val FAIR_SCHEDULER_PROPERTIES = "spark.scheduler.pool"
|
||||
val DEFAULT_POOL_NAME = "default"
|
||||
|
|
|
@ -31,7 +31,4 @@ private[spark] trait SchedulerBackend {
|
|||
def defaultParallelism(): Int
|
||||
|
||||
def killTask(taskId: Long, executorId: String): Unit = throw new UnsupportedOperationException
|
||||
|
||||
// Memory used by each executor (in megabytes)
|
||||
protected val executorMemory: Int = SparkContext.executorMemoryRequested
|
||||
}
|
||||
|
|
|
@ -37,7 +37,9 @@ private[spark] object ShuffleMapTask {
|
|||
// expensive on the master node if it needs to launch thousands of tasks.
|
||||
val serializedInfoCache = new TimeStampedHashMap[Int, Array[Byte]]
|
||||
|
||||
val metadataCleaner = new MetadataCleaner(MetadataCleanerType.SHUFFLE_MAP_TASK, serializedInfoCache.clearOldValues)
|
||||
// TODO: This object shouldn't have global variables
|
||||
val metadataCleaner = new MetadataCleaner(
|
||||
MetadataCleanerType.SHUFFLE_MAP_TASK, serializedInfoCache.clearOldValues, new SparkConf)
|
||||
|
||||
def serializeInfo(stageId: Int, rdd: RDD[_], dep: ShuffleDependency[_,_]): Array[Byte] = {
|
||||
synchronized {
|
||||
|
@ -152,7 +154,7 @@ private[spark] class ShuffleMapTask(
|
|||
|
||||
try {
|
||||
// Obtain all the block writers for shuffle blocks.
|
||||
val ser = SparkEnv.get.serializerManager.get(dep.serializerClass)
|
||||
val ser = SparkEnv.get.serializerManager.get(dep.serializerClass, SparkEnv.get.conf)
|
||||
shuffle = shuffleBlockManager.forMapTask(dep.shuffleId, partitionId, numOutputSplits, ser)
|
||||
|
||||
// Write the map output to its associated buckets.
|
||||
|
|
|
@@ -30,7 +30,8 @@ import org.apache.spark.util.Utils
  */
 private[spark] class TaskResultGetter(sparkEnv: SparkEnv, scheduler: TaskSchedulerImpl)
   extends Logging {
-  private val THREADS = System.getProperty("spark.resultGetter.threads", "4").toInt
+
+  private val THREADS = sparkEnv.conf.get("spark.resultGetter.threads", "4").toInt
   private val getTaskResultExecutor = Utils.newDaemonFixedThreadPool(
     THREADS, "Result resolver thread")
@@ -35,7 +35,7 @@ import org.apache.spark.scheduler.SchedulingMode.SchedulingMode
  * It can also work with a local setup by using a LocalBackend and setting isLocal to true.
  * It handles common logic, like determining a scheduling order across jobs, waking up to launch
  * speculative tasks, etc.
- *
+ *
  * Clients should first call initialize() and start(), then submit task sets through the
  * runTasks method.
  *

@@ -47,15 +47,19 @@ import org.apache.spark.scheduler.SchedulingMode.SchedulingMode
  */
 private[spark] class TaskSchedulerImpl(
   val sc: SparkContext,
-  val maxTaskFailures : Int = System.getProperty("spark.task.maxFailures", "4").toInt,
+  val maxTaskFailures: Int,
   isLocal: Boolean = false)
-  extends TaskScheduler with Logging {
+  extends TaskScheduler with Logging
+{
+  def this(sc: SparkContext) = this(sc, sc.conf.get("spark.task.maxFailures", "4").toInt)
+
+  val conf = sc.conf

   // How often to check for speculative tasks
-  val SPECULATION_INTERVAL = System.getProperty("spark.speculation.interval", "100").toLong
+  val SPECULATION_INTERVAL = conf.get("spark.speculation.interval", "100").toLong

   // Threshold above which we warn user initial TaskSet may be starved
-  val STARVATION_TIMEOUT = System.getProperty("spark.starvation.timeout", "15000").toLong
+  val STARVATION_TIMEOUT = conf.get("spark.starvation.timeout", "15000").toLong

   // TaskSetManagers are not thread safe, so any access to one should be synchronized
   // on this class.
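
The hunks above move scheduler settings such as `spark.task.maxFailures` and the speculation knobs from JVM system properties onto the `SparkConf` carried by the `SparkContext`. A minimal sketch (assumed application-side usage, not part of this diff) of setting those keys the new way:

    import org.apache.spark.{SparkConf, SparkContext}

    object ConfMigrationExample {
      def main(args: Array[String]) {
        // Previously passed as -Dspark.task.maxFailures=8 etc.; now carried by SparkConf.
        val conf = new SparkConf()
          .setMaster("local[4]")
          .setAppName("ConfMigrationExample")
          .set("spark.task.maxFailures", "8")
          .set("spark.speculation", "true")
          .set("spark.speculation.interval", "200")

        // TaskSchedulerImpl reads these values through sc.conf instead of System.getProperty.
        val sc = new SparkContext(conf)
        println(conf.get("spark.task.maxFailures", "4"))
        sc.stop()
      }
    }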
@@ -92,7 +96,7 @@ private[spark] class TaskSchedulerImpl(
   var rootPool: Pool = null
   // default scheduler is FIFO
   val schedulingMode: SchedulingMode = SchedulingMode.withName(
-    System.getProperty("spark.scheduler.mode", "FIFO"))
+    conf.get("spark.scheduler.mode", "FIFO"))

   // This is a var so that we can reset it for testing purposes.
   private[spark] var taskResultGetter = new TaskResultGetter(sc.env, this)

@@ -110,7 +114,7 @@ private[spark] class TaskSchedulerImpl(
       case SchedulingMode.FIFO =>
         new FIFOSchedulableBuilder(rootPool)
       case SchedulingMode.FAIR =>
-        new FairSchedulableBuilder(rootPool)
+        new FairSchedulableBuilder(rootPool, conf)
     }
   }
   schedulableBuilder.buildPools()

@@ -121,7 +125,7 @@ private[spark] class TaskSchedulerImpl(
   override def start() {
     backend.start()

-    if (!isLocal && System.getProperty("spark.speculation", "false").toBoolean) {
+    if (!isLocal && conf.get("spark.speculation", "false").toBoolean) {
       logInfo("Starting speculative execution thread")
       import sc.env.actorSystem.dispatcher
       sc.env.actorSystem.scheduler.schedule(SPECULATION_INTERVAL milliseconds,

@@ -281,7 +285,8 @@ private[spark] class TaskSchedulerImpl(
         }
       }
       case None =>
-        logInfo("Ignoring update from TID " + tid + " because its task set is gone")
+        logInfo("Ignoring update with state %s from TID %s because its task set is gone"
+          .format(state, tid))
     }
   } catch {
     case e: Exception => logError("Exception in statusUpdate", e)

@@ -324,7 +329,7 @@ private[spark] class TaskSchedulerImpl(
     // Have each task set throw a SparkException with the error
     for ((taskSetId, manager) <- activeTaskSets) {
       try {
-        manager.error(message)
+        manager.abort(message)
       } catch {
         case e: Exception => logError("Exception in error callback", e)
       }
@@ -54,12 +54,14 @@ private[spark] class TaskSetManager(
     clock: Clock = SystemClock)
   extends Schedulable with Logging
 {
+  val conf = sched.sc.conf
+
   // CPUs to request per task
-  val CPUS_PER_TASK = System.getProperty("spark.task.cpus", "1").toInt
+  val CPUS_PER_TASK = conf.get("spark.task.cpus", "1").toInt

   // Quantile of tasks at which to start speculation
-  val SPECULATION_QUANTILE = System.getProperty("spark.speculation.quantile", "0.75").toDouble
-  val SPECULATION_MULTIPLIER = System.getProperty("spark.speculation.multiplier", "1.5").toDouble
+  val SPECULATION_QUANTILE = conf.get("spark.speculation.quantile", "0.75").toDouble
+  val SPECULATION_MULTIPLIER = conf.get("spark.speculation.multiplier", "1.5").toDouble

   // Serializer for closures and tasks.
   val env = SparkEnv.get

@@ -114,7 +116,7 @@ private[spark] class TaskSetManager(

   // How frequently to reprint duplicate exceptions in full, in milliseconds
   val EXCEPTION_PRINT_INTERVAL =
-    System.getProperty("spark.logging.exceptionPrintInterval", "10000").toLong
+    conf.get("spark.logging.exceptionPrintInterval", "10000").toLong

   // Map of recent exceptions (identified by string representation and top stack frame) to
   // duplicate count (how many times the same exception has appeared) and time the full exception

@@ -546,11 +548,6 @@ private[spark] class TaskSetManager(
     }
   }

-  def error(message: String) {
-    // Save the error message
-    abort("Error: " + message)
-  }
-
   def abort(message: String) {
     // TODO: Kill running tasks if we were not terminated due to a Mesos error
     sched.dagScheduler.taskSetFailed(taskSet, message)

@@ -676,14 +673,14 @@ private[spark] class TaskSetManager(
   }

   private def getLocalityWait(level: TaskLocality.TaskLocality): Long = {
-    val defaultWait = System.getProperty("spark.locality.wait", "3000")
+    val defaultWait = conf.get("spark.locality.wait", "3000")
     level match {
       case TaskLocality.PROCESS_LOCAL =>
-        System.getProperty("spark.locality.wait.process", defaultWait).toLong
+        conf.get("spark.locality.wait.process", defaultWait).toLong
       case TaskLocality.NODE_LOCAL =>
-        System.getProperty("spark.locality.wait.node", defaultWait).toLong
+        conf.get("spark.locality.wait.node", defaultWait).toLong
       case TaskLocality.RACK_LOCAL =>
-        System.getProperty("spark.locality.wait.rack", defaultWait).toLong
+        conf.get("spark.locality.wait.rack", defaultWait).toLong
       case TaskLocality.ANY =>
         0L
     }
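
The `getLocalityWait` change keeps the same fallback scheme, only sourced from `SparkConf`: a per-level key falls back to the generic `spark.locality.wait` value. A small standalone sketch (with assumed settings) of that lookup:

    import org.apache.spark.SparkConf

    object LocalityWaitExample {
      def main(args: Array[String]) {
        val conf = new SparkConf()
          .set("spark.locality.wait", "3000")       // shared default, in ms
          .set("spark.locality.wait.node", "6000")  // override for NODE_LOCAL only

        val defaultWait = conf.get("spark.locality.wait", "3000")
        val processWait = conf.get("spark.locality.wait.process", defaultWait).toLong // 3000
        val nodeWait = conf.get("spark.locality.wait.node", defaultWait).toLong       // 6000
        println("PROCESS_LOCAL=" + processWait + "ms, NODE_LOCAL=" + nodeWait + "ms")
      }
    }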
@@ -48,8 +48,8 @@ class CoarseGrainedSchedulerBackend(scheduler: TaskSchedulerImpl, actorSystem: A
 {
   // Use an atomic variable to track total number of cores in the cluster for simplicity and speed
   var totalCoreCount = new AtomicInteger(0)

-  private val timeout = AkkaUtils.askTimeout
+  val conf = scheduler.sc.conf
+  private val timeout = AkkaUtils.askTimeout(conf)

   class DriverActor(sparkProperties: Seq[(String, String)]) extends Actor {
     private val executorActor = new HashMap[String, ActorRef]

@@ -63,7 +63,7 @@ class CoarseGrainedSchedulerBackend(scheduler: TaskSchedulerImpl, actorSystem: A
       context.system.eventStream.subscribe(self, classOf[RemotingLifecycleEvent])

       // Periodically revive offers to allow delay scheduling to work
-      val reviveInterval = System.getProperty("spark.scheduler.revive.interval", "1000").toLong
+      val reviveInterval = conf.get("spark.scheduler.revive.interval", "1000").toLong
       import context.dispatcher
       context.system.scheduler.schedule(0.millis, reviveInterval.millis, self, ReviveOffers)
     }
@@ -119,7 +119,7 @@ class CoarseGrainedSchedulerBackend(scheduler: TaskSchedulerImpl, actorSystem: A
         removeExecutor(executorId, reason)
         sender ! true

-      case DisassociatedEvent(_, address, _) =>
+      case DisassociatedEvent(_, address, _) =>
         addressToExecutorId.get(address).foreach(removeExecutor(_, "remote Akka client disassociated"))

     }
@@ -164,14 +164,12 @@ class CoarseGrainedSchedulerBackend(scheduler: TaskSchedulerImpl, actorSystem: A

   override def start() {
     val properties = new ArrayBuffer[(String, String)]
-    val iterator = System.getProperties.entrySet.iterator
-    while (iterator.hasNext) {
-      val entry = iterator.next
-      val (key, value) = (entry.getKey.toString, entry.getValue.toString)
+    for ((key, value) <- scheduler.sc.conf.getAll) {
       if (key.startsWith("spark.") && !key.equals("spark.hostPort")) {
         properties += ((key, value))
       }
     }
+    //TODO (prashant) send conf instead of properties
     driverActor = actorSystem.actorOf(
       Props(new DriverActor(properties)), name = CoarseGrainedSchedulerBackend.ACTOR_NAME)
   }
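
The rewritten `start()` replaces the scan over `System.getProperties` with `SparkConf.getAll`, keeping the same filter. A standalone sketch (with made-up settings) of that selection:

    import org.apache.spark.SparkConf

    object SparkPropsExample {
      def main(args: Array[String]) {
        val conf = new SparkConf()
          .set("spark.executor.memory", "2g")
          .set("spark.hostPort", "host.example.com:7077") // excluded by the filter below

        // Same selection as the rewritten start(): spark.* keys except spark.hostPort.
        val forwarded = conf.getAll.filter { case (key, _) =>
          key.startsWith("spark.") && key != "spark.hostPort"
        }
        forwarded.foreach { case (k, v) => println(k + " -> " + v) }
      }
    }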
@@ -210,8 +208,10 @@ class CoarseGrainedSchedulerBackend(scheduler: TaskSchedulerImpl, actorSystem: A
     driverActor ! KillTask(taskId, executorId)
   }

-  override def defaultParallelism() = Option(System.getProperty("spark.default.parallelism"))
-    .map(_.toInt).getOrElse(math.max(totalCoreCount.get(), 2))
+  override def defaultParallelism(): Int = {
+    conf.getOption("spark.default.parallelism").map(_.toInt).getOrElse(
+      math.max(totalCoreCount.get(), 2))
+  }

   // Called by subclasses when notified of a lost worker
   def removeExecutor(executorId: String, reason: String) {
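
The new `defaultParallelism()` uses `SparkConf.getOption`, so an explicit `spark.default.parallelism` wins and the core count is only a fallback. A tiny sketch (with a stand-in core count) of that logic:

    import org.apache.spark.SparkConf

    object DefaultParallelismExample {
      def main(args: Array[String]) {
        val conf = new SparkConf()   // spark.default.parallelism left unset
        val totalCores = 6           // stand-in for totalCoreCount.get()

        val parallelism = conf.getOption("spark.default.parallelism")
          .map(_.toInt).getOrElse(math.max(totalCores, 2))
        println(parallelism)         // prints 6
      }
    }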
@@ -33,13 +33,13 @@ private[spark] class SimrSchedulerBackend(
   val tmpPath = new Path(driverFilePath + "_tmp")
   val filePath = new Path(driverFilePath)

-  val maxCores = System.getProperty("spark.simr.executor.cores", "1").toInt
+  val maxCores = conf.get("spark.simr.executor.cores", "1").toInt

   override def start() {
     super.start()

     val driverUrl = "akka.tcp://spark@%s:%s/user/%s".format(
-      System.getProperty("spark.driver.host"), System.getProperty("spark.driver.port"),
+      sc.conf.get("spark.driver.host"), sc.conf.get("spark.driver.port"),
       CoarseGrainedSchedulerBackend.ACTOR_NAME)

     val conf = new Configuration()
@@ -38,23 +38,23 @@ private[spark] class SparkDeploySchedulerBackend(
   var stopping = false
   var shutdownCallback : (SparkDeploySchedulerBackend) => Unit = _

-  val maxCores = System.getProperty("spark.cores.max", Int.MaxValue.toString).toInt
+  val maxCores = conf.get("spark.cores.max", Int.MaxValue.toString).toInt

   override def start() {
     super.start()

     // The endpoint for executors to talk to us
     val driverUrl = "akka.tcp://spark@%s:%s/user/%s".format(
-      System.getProperty("spark.driver.host"), System.getProperty("spark.driver.port"),
+      conf.get("spark.driver.host"), conf.get("spark.driver.port"),
       CoarseGrainedSchedulerBackend.ACTOR_NAME)
     val args = Seq(driverUrl, "{{EXECUTOR_ID}}", "{{HOSTNAME}}", "{{CORES}}")
     val command = Command(
       "org.apache.spark.executor.CoarseGrainedExecutorBackend", args, sc.executorEnvs)
     val sparkHome = sc.getSparkHome().getOrElse(null)
-    val appDesc = new ApplicationDescription(appName, maxCores, executorMemory, command, sparkHome,
+    val appDesc = new ApplicationDescription(appName, maxCores, sc.executorMemory, command, sparkHome,
       "http://" + sc.ui.appUIAddress)

-    client = new Client(sc.env.actorSystem, masters, appDesc, this)
+    client = new Client(sc.env.actorSystem, masters, appDesc, this, conf)
     client.start()
   }
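
Both deploy backends build the executor-facing driver URL from `spark.driver.host` and `spark.driver.port`, now read from the conf. A small sketch of the URL shape; host, port, and the actor-name literal are stand-ins (the real code uses `CoarseGrainedSchedulerBackend.ACTOR_NAME`):

    import org.apache.spark.SparkConf

    object DriverUrlExample {
      def main(args: Array[String]) {
        // Normally set by the driver at startup; hard-coded here only for illustration.
        val conf = new SparkConf()
          .set("spark.driver.host", "192.168.1.10")
          .set("spark.driver.port", "45678")

        val driverUrl = "akka.tcp://spark@%s:%s/user/%s".format(
          conf.get("spark.driver.host"), conf.get("spark.driver.port"),
          "CoarseGrainedScheduler")
        println(driverUrl) // akka.tcp://spark@192.168.1.10:45678/user/CoarseGrainedScheduler
      }
    }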
@@ -62,7 +62,7 @@ private[spark] class CoarseMesosSchedulerBackend(
   var driver: SchedulerDriver = null

   // Maximum number of cores to acquire (TODO: we'll need more flexible controls here)
-  val maxCores = System.getProperty("spark.cores.max", Int.MaxValue.toString).toInt
+  val maxCores = conf.get("spark.cores.max", Int.MaxValue.toString).toInt

   // Cores we have acquired with each Mesos task ID
   val coresByTaskId = new HashMap[Int, Int]

@@ -77,7 +77,7 @@ private[spark] class CoarseMesosSchedulerBackend(
       "Spark home is not set; set it through the spark.home system " +
       "property, the SPARK_HOME environment variable or the SparkContext constructor"))

-  val extraCoresPerSlave = System.getProperty("spark.mesos.extra.cores", "0").toInt
+  val extraCoresPerSlave = conf.get("spark.mesos.extra.cores", "0").toInt

   var nextMesosTaskId = 0

@@ -122,12 +122,12 @@ private[spark] class CoarseMesosSchedulerBackend(
     val command = CommandInfo.newBuilder()
       .setEnvironment(environment)
     val driverUrl = "akka.tcp://spark@%s:%s/user/%s".format(
-      System.getProperty("spark.driver.host"),
-      System.getProperty("spark.driver.port"),
+      conf.get("spark.driver.host"),
+      conf.get("spark.driver.port"),
       CoarseGrainedSchedulerBackend.ACTOR_NAME)
-    val uri = System.getProperty("spark.executor.uri")
+    val uri = conf.get("spark.executor.uri", null)
     if (uri == null) {
-      val runScript = new File(sparkHome, "spark-class").getCanonicalPath
+      val runScript = new File(sparkHome, "./bin/spark-class").getCanonicalPath
       command.setValue(
         "\"%s\" org.apache.spark.executor.CoarseGrainedExecutorBackend %s %s %s %d".format(
           runScript, driverUrl, offer.getSlaveId.getValue, offer.getHostname, numCores))

@@ -136,7 +136,7 @@ private[spark] class CoarseMesosSchedulerBackend(
       // glob the directory "correctly".
       val basename = uri.split('/').last.split('.').head
       command.setValue(
-        "cd %s*; ./spark-class org.apache.spark.executor.CoarseGrainedExecutorBackend %s %s %s %d"
+        "cd %s*; ./bin/spark-class org.apache.spark.executor.CoarseGrainedExecutorBackend %s %s %s %d"
           .format(basename, driverUrl, offer.getSlaveId.getValue, offer.getHostname, numCores))
       command.addUris(CommandInfo.URI.newBuilder().setValue(uri))
     }

@@ -177,7 +177,7 @@ private[spark] class CoarseMesosSchedulerBackend(
       val slaveId = offer.getSlaveId.toString
       val mem = getResource(offer.getResourcesList, "mem")
       val cpus = getResource(offer.getResourcesList, "cpus").toInt
-      if (totalCoresAcquired < maxCores && mem >= executorMemory && cpus >= 1 &&
+      if (totalCoresAcquired < maxCores && mem >= sc.executorMemory && cpus >= 1 &&
           failuresBySlaveId.getOrElse(slaveId, 0) < MAX_SLAVE_FAILURES &&
           !slaveIdsWithExecutors.contains(slaveId)) {
         // Launch an executor on the slave

@@ -193,7 +193,7 @@ private[spark] class CoarseMesosSchedulerBackend(
           .setCommand(createCommand(offer, cpusToUse + extraCoresPerSlave))
           .setName("Task " + taskId)
           .addResources(createResource("cpus", cpusToUse))
-          .addResources(createResource("mem", executorMemory))
+          .addResources(createResource("mem", sc.executorMemory))
           .build()
         d.launchTasks(offer.getId, Collections.singletonList(task), filters)
       } else {
@@ -100,20 +100,20 @@ private[spark] class MesosSchedulerBackend(
     }
     val command = CommandInfo.newBuilder()
       .setEnvironment(environment)
-    val uri = System.getProperty("spark.executor.uri")
+    val uri = sc.conf.get("spark.executor.uri", null)
     if (uri == null) {
-      command.setValue(new File(sparkHome, "spark-executor").getCanonicalPath)
+      command.setValue(new File(sparkHome, "/sbin/spark-executor").getCanonicalPath)
     } else {
       // Grab everything to the first '.'. We'll use that and '*' to
       // glob the directory "correctly".
       val basename = uri.split('/').last.split('.').head
-      command.setValue("cd %s*; ./spark-executor".format(basename))
+      command.setValue("cd %s*; ./sbin/spark-executor".format(basename))
       command.addUris(CommandInfo.URI.newBuilder().setValue(uri))
     }
     val memory = Resource.newBuilder()
       .setName("mem")
       .setType(Value.Type.SCALAR)
-      .setScalar(Value.Scalar.newBuilder().setValue(executorMemory).build())
+      .setScalar(Value.Scalar.newBuilder().setValue(sc.executorMemory).build())
       .build()
     ExecutorInfo.newBuilder()
       .setExecutorId(ExecutorID.newBuilder().setValue(execId).build())

@@ -198,7 +198,7 @@ private[spark] class MesosSchedulerBackend(
     def enoughMemory(o: Offer) = {
       val mem = getResource(o.getResourcesList, "mem")
       val slaveId = o.getSlaveId.getValue
-      mem >= executorMemory || slaveIdsWithExecutors.contains(slaveId)
+      mem >= sc.executorMemory || slaveIdsWithExecutors.contains(slaveId)
     }

     for ((offer, index) <- offers.zipWithIndex if enoughMemory(offer)) {

@@ -340,5 +340,5 @@ private[spark] class MesosSchedulerBackend(
   }

   // TODO: query Mesos for number of cores
-  override def defaultParallelism() = System.getProperty("spark.default.parallelism", "8").toInt
+  override def defaultParallelism() = sc.conf.get("spark.default.parallelism", "8").toInt
 }
@@ -47,7 +47,8 @@ private[spark] class LocalActor(
   private val localExecutorId = "localhost"
   private val localExecutorHostname = "localhost"

-  val executor = new Executor(localExecutorId, localExecutorHostname, Seq.empty, isLocal = true)
+  val executor = new Executor(
+    localExecutorId, localExecutorHostname, scheduler.conf.getAll, isLocal = true)

   def receive = {
     case ReviveOffers =>
@@ -21,6 +21,7 @@ import java.io._
 import java.nio.ByteBuffer

 import org.apache.spark.util.ByteBufferInputStream
+import org.apache.spark.SparkConf

 private[spark] class JavaSerializationStream(out: OutputStream) extends SerializationStream {
   val objOut = new ObjectOutputStream(out)

@@ -77,6 +78,6 @@ private[spark] class JavaSerializerInstance extends SerializerInstance {
 /**
  * A Spark serializer that uses Java's built-in serialization.
  */
-class JavaSerializer extends Serializer {
+class JavaSerializer(conf: SparkConf) extends Serializer {
   def newInstance(): SerializerInstance = new JavaSerializerInstance
 }
@@ -25,18 +25,18 @@ import com.esotericsoftware.kryo.{KryoException, Kryo}
 import com.esotericsoftware.kryo.io.{Input => KryoInput, Output => KryoOutput}
 import com.twitter.chill.{EmptyScalaKryoInstantiator, AllScalaRegistrar}

-import org.apache.spark.{SerializableWritable, Logging}
+import org.apache.spark._
 import org.apache.spark.broadcast.HttpBroadcast
 import org.apache.spark.scheduler.MapStatus
-import org.apache.spark.storage._
+import org.apache.spark.storage.{GetBlock, GotBlock, PutBlock}

 /**
  * A Spark serializer that uses the [[https://code.google.com/p/kryo/ Kryo serialization library]].
  */
-class KryoSerializer extends org.apache.spark.serializer.Serializer with Logging {
+class KryoSerializer(conf: SparkConf) extends org.apache.spark.serializer.Serializer with Logging {
   private val bufferSize = {
-    System.getProperty("spark.kryoserializer.buffer.mb", "2").toInt * 1024 * 1024
+    conf.get("spark.kryoserializer.buffer.mb", "2").toInt * 1024 * 1024
   }

   def newKryoOutput() = new KryoOutput(bufferSize)

@@ -48,7 +48,7 @@ class KryoSerializer extends org.apache.spark.serializer.Serializer with Logging

     // Allow disabling Kryo reference tracking if user knows their object graphs don't have loops.
     // Do this before we invoke the user registrator so the user registrator can override this.
-    kryo.setReferences(System.getProperty("spark.kryo.referenceTracking", "true").toBoolean)
+    kryo.setReferences(conf.get("spark.kryo.referenceTracking", "true").toBoolean)

     for (cls <- KryoSerializer.toRegister) kryo.register(cls)

@@ -58,13 +58,13 @@ class KryoSerializer extends org.apache.spark.serializer.Serializer with Logging

     // Allow the user to register their own classes by setting spark.kryo.registrator
     try {
-      Option(System.getProperty("spark.kryo.registrator")).foreach { regCls =>
+      for (regCls <- conf.getOption("spark.kryo.registrator")) {
         logDebug("Running user registrator: " + regCls)
         val reg = Class.forName(regCls, true, classLoader).newInstance().asInstanceOf[KryoRegistrator]
         reg.registerClasses(kryo)
       }
     } catch {
-      case _: Exception => println("Failed to register spark.kryo.registrator")
+      case e: Exception => logError("Failed to run spark.kryo.registrator", e)
     }

     // Register Chill's classes; we do this after our ranges and the user's own classes to let
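
The registrator lookup above still resolves the class named by `spark.kryo.registrator` and calls it with the `Kryo` instance. A sketch (with a made-up record class and registrator) of how an application would plug in, now that the keys live on `SparkConf`:

    import com.esotericsoftware.kryo.Kryo
    import org.apache.spark.SparkConf
    import org.apache.spark.serializer.KryoRegistrator

    // Made-up user class and registrator, picked up via spark.kryo.registrator.
    case class MyRecord(id: Long, name: String)

    class MyRegistrator extends KryoRegistrator {
      override def registerClasses(kryo: Kryo) {
        kryo.register(classOf[MyRecord])
      }
    }

    object KryoConfExample {
      def main(args: Array[String]) {
        val conf = new SparkConf()
          .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
          .set("spark.kryo.registrator", classOf[MyRegistrator].getName)
          .set("spark.kryoserializer.buffer.mb", "8")
        println(conf.get("spark.kryo.registrator"))
      }
    }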
@@ -29,6 +29,9 @@ import org.apache.spark.util.{NextIterator, ByteBufferInputStream}
  * A serializer. Because some serialization libraries are not thread safe, this class is used to
  * create [[org.apache.spark.serializer.SerializerInstance]] objects that do the actual serialization and are
  * guaranteed to only be called from one thread at a time.
+ *
+ * Implementations of this trait should have a zero-arg constructor or a constructor that accepts a
+ * [[org.apache.spark.SparkConf]] as parameter. If both constructors are defined, the latter takes precedence.
  */
 trait Serializer {
   def newInstance(): SerializerInstance
@@ -18,6 +18,7 @@
 package org.apache.spark.serializer

 import java.util.concurrent.ConcurrentHashMap
+import org.apache.spark.SparkConf


 /**

@@ -26,18 +27,19 @@ import java.util.concurrent.ConcurrentHashMap
  * creating a new one.
  */
 private[spark] class SerializerManager {
+  // TODO: Consider moving this into SparkConf itself to remove the global singleton.

   private val serializers = new ConcurrentHashMap[String, Serializer]
   private var _default: Serializer = _

   def default = _default

-  def setDefault(clsName: String): Serializer = {
-    _default = get(clsName)
+  def setDefault(clsName: String, conf: SparkConf): Serializer = {
+    _default = get(clsName, conf)
     _default
   }

-  def get(clsName: String): Serializer = {
+  def get(clsName: String, conf: SparkConf): Serializer = {
     if (clsName == null) {
       default
     } else {

@@ -51,8 +53,19 @@ private[spark] class SerializerManager {
         serializer = serializers.get(clsName)
         if (serializer == null) {
           val clsLoader = Thread.currentThread.getContextClassLoader
-          serializer =
-            Class.forName(clsName, true, clsLoader).newInstance().asInstanceOf[Serializer]
+          val cls = Class.forName(clsName, true, clsLoader)
+
+          // First try with the constructor that takes SparkConf. If we can't find one,
+          // use a no-arg constructor instead.
+          try {
+            val constructor = cls.getConstructor(classOf[SparkConf])
+            serializer = constructor.newInstance(conf).asInstanceOf[Serializer]
+          } catch {
+            case _: NoSuchMethodException =>
+              val constructor = cls.getConstructor()
+              serializer = constructor.newInstance().asInstanceOf[Serializer]
+          }
+
           serializers.put(clsName, serializer)
         }
         serializer
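
The constructor-selection logic added to `get` is plain Java reflection: try the `SparkConf` constructor first, fall back to the no-arg one. A self-contained sketch of the same pattern with stand-in classes (no Spark types; assumes the classes live in the default package so the bare class names resolve):

    // Stand-ins for a config object and two pluggable implementations.
    class Config(val settings: Map[String, String])
    class NoArgThing { override def toString = "NoArgThing" }
    class ConfiguredThing(conf: Config) {
      override def toString = "ConfiguredThing(" + conf.settings.size + " settings)"
    }

    object ReflectiveConstruction {
      def instantiate(clsName: String, conf: Config): AnyRef = {
        val cls = Class.forName(clsName)
        try {
          // Prefer the constructor that takes the config object...
          cls.getConstructor(classOf[Config]).newInstance(conf).asInstanceOf[AnyRef]
        } catch {
          case _: NoSuchMethodException =>
            // ...and fall back to the zero-arg constructor.
            cls.getConstructor().newInstance().asInstanceOf[AnyRef]
        }
      }

      def main(args: Array[String]) {
        val conf = new Config(Map("a" -> "1"))
        println(instantiate("ConfiguredThing", conf)) // ConfiguredThing(1 settings)
        println(instantiate("NoArgThing", conf))      // NoArgThing
      }
    }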
@@ -312,7 +312,7 @@ object BlockFetcherIterator {
       logDebug("Sending request for %d blocks (%s) from %s".format(
         req.blocks.size, Utils.bytesToString(req.size), req.address.host))
       val cmId = new ConnectionManagerId(req.address.host, req.address.nettyPort)
-      val cpier = new ShuffleCopier
+      val cpier = new ShuffleCopier(blockManager.conf)
       cpier.getBlocks(cmId, req.blocks, putResult)
       logDebug("Sent request for remote blocks " + req.blocks + " from " + req.address.host )
     }

@@ -327,7 +327,7 @@ object BlockFetcherIterator {
         fetchRequestsSync.put(request)
       }

-      copiers = startCopiers(System.getProperty("spark.shuffle.copier.threads", "6").toInt)
+      copiers = startCopiers(conf.get("spark.shuffle.copier.threads", "6").toInt)
       logInfo("Started " + fetchRequestsSync.size + " remote gets in " +
         Utils.getUsedTimeMs(startTime))
@@ -30,7 +30,7 @@ import scala.concurrent.duration._

 import it.unimi.dsi.fastutil.io.{FastBufferedOutputStream, FastByteArrayOutputStream}

-import org.apache.spark.{Logging, SparkEnv, SparkException}
+import org.apache.spark.{SparkConf, Logging, SparkEnv, SparkException}
 import org.apache.spark.io.CompressionCodec
 import org.apache.spark.network._
 import org.apache.spark.serializer.Serializer

@@ -43,12 +43,13 @@ private[spark] class BlockManager(
     actorSystem: ActorSystem,
     val master: BlockManagerMaster,
     val defaultSerializer: Serializer,
-    maxMemory: Long)
+    maxMemory: Long,
+    val conf: SparkConf)
   extends Logging {

   val shuffleBlockManager = new ShuffleBlockManager(this)
   val diskBlockManager = new DiskBlockManager(shuffleBlockManager,
-    System.getProperty("spark.local.dir", System.getProperty("java.io.tmpdir")))
+    conf.get("spark.local.dir", System.getProperty("java.io.tmpdir")))

   private val blockInfo = new TimeStampedHashMap[BlockId, BlockInfo]

@@ -57,12 +58,12 @@ private[spark] class BlockManager(

   // If we use Netty for shuffle, start a new Netty-based shuffle sender service.
   private val nettyPort: Int = {
-    val useNetty = System.getProperty("spark.shuffle.use.netty", "false").toBoolean
-    val nettyPortConfig = System.getProperty("spark.shuffle.sender.port", "0").toInt
+    val useNetty = conf.get("spark.shuffle.use.netty", "false").toBoolean
+    val nettyPortConfig = conf.get("spark.shuffle.sender.port", "0").toInt
     if (useNetty) diskBlockManager.startShuffleBlockSender(nettyPortConfig) else 0
   }

-  val connectionManager = new ConnectionManager(0)
+  val connectionManager = new ConnectionManager(0, conf)
   implicit val futureExecContext = connectionManager.futureExecContext

   val blockManagerId = BlockManagerId(

@@ -71,18 +72,18 @@ private[spark] class BlockManager(
   // Max megabytes of data to keep in flight per reducer (to avoid over-allocating memory
   // for receiving shuffle outputs)
   val maxBytesInFlight =
-    System.getProperty("spark.reducer.maxMbInFlight", "48").toLong * 1024 * 1024
+    conf.get("spark.reducer.maxMbInFlight", "48").toLong * 1024 * 1024

   // Whether to compress broadcast variables that are stored
-  val compressBroadcast = System.getProperty("spark.broadcast.compress", "true").toBoolean
+  val compressBroadcast = conf.get("spark.broadcast.compress", "true").toBoolean
   // Whether to compress shuffle output that are stored
-  val compressShuffle = System.getProperty("spark.shuffle.compress", "true").toBoolean
+  val compressShuffle = conf.get("spark.shuffle.compress", "true").toBoolean
   // Whether to compress RDD partitions that are stored serialized
-  val compressRdds = System.getProperty("spark.rdd.compress", "false").toBoolean
+  val compressRdds = conf.get("spark.rdd.compress", "false").toBoolean

-  val heartBeatFrequency = BlockManager.getHeartBeatFrequencyFromSystemProperties
+  val heartBeatFrequency = BlockManager.getHeartBeatFrequency(conf)

-  val hostPort = Utils.localHostPort()
+  val hostPort = Utils.localHostPort(conf)

   val slaveActor = actorSystem.actorOf(Props(new BlockManagerSlaveActor(this)),
     name = "BlockManagerActor" + BlockManager.ID_GENERATOR.next)

@@ -100,8 +101,11 @@ private[spark] class BlockManager(

   var heartBeatTask: Cancellable = null

-  private val metadataCleaner = new MetadataCleaner(MetadataCleanerType.BLOCK_MANAGER, this.dropOldNonBroadcastBlocks)
-  private val broadcastCleaner = new MetadataCleaner(MetadataCleanerType.BROADCAST_VARS, this.dropOldBroadcastBlocks)
+  private val metadataCleaner = new MetadataCleaner(
+    MetadataCleanerType.BLOCK_MANAGER, this.dropOldNonBroadcastBlocks, conf)
+  private val broadcastCleaner = new MetadataCleaner(
+    MetadataCleanerType.BROADCAST_VARS, this.dropOldBroadcastBlocks, conf)

   initialize()

   // The compression codec to use. Note that the "lazy" val is necessary because we want to delay

@@ -109,14 +113,14 @@ private[spark] class BlockManager(
   // program could be using a user-defined codec in a third party jar, which is loaded in
   // Executor.updateDependencies. When the BlockManager is initialized, user level jars hasn't been
   // loaded yet.
-  private lazy val compressionCodec: CompressionCodec = CompressionCodec.createCodec()
+  private lazy val compressionCodec: CompressionCodec = CompressionCodec.createCodec(conf)

   /**
    * Construct a BlockManager with a memory limit set based on system properties.
    */
   def this(execId: String, actorSystem: ActorSystem, master: BlockManagerMaster,
-           serializer: Serializer) = {
-    this(execId, actorSystem, master, serializer, BlockManager.getMaxMemoryFromSystemProperties)
+           serializer: Serializer, conf: SparkConf) = {
+    this(execId, actorSystem, master, serializer, BlockManager.getMaxMemory(conf), conf)
   }

   /**

@@ -126,7 +130,7 @@ private[spark] class BlockManager(
   private def initialize() {
     master.registerBlockManager(blockManagerId, maxMemory, slaveActor)
     BlockManagerWorker.startBlockManagerWorker(this)
-    if (!BlockManager.getDisableHeartBeatsForTesting) {
+    if (!BlockManager.getDisableHeartBeatsForTesting(conf)) {
       heartBeatTask = actorSystem.scheduler.schedule(0.seconds, heartBeatFrequency.milliseconds) {
         heartBeat()
       }

@@ -439,7 +443,7 @@ private[spark] class BlockManager(
     : BlockFetcherIterator = {

     val iter =
-      if (System.getProperty("spark.shuffle.use.netty", "false").toBoolean) {
+      if (conf.get("spark.shuffle.use.netty", "false").toBoolean) {
         new BlockFetcherIterator.NettyBlockFetcherIterator(this, blocksByAddress, serializer)
       } else {
         new BlockFetcherIterator.BasicBlockFetcherIterator(this, blocksByAddress, serializer)

@@ -465,7 +469,8 @@ private[spark] class BlockManager(
   def getDiskWriter(blockId: BlockId, file: File, serializer: Serializer, bufferSize: Int)
     : BlockObjectWriter = {
     val compressStream: OutputStream => OutputStream = wrapForCompression(blockId, _)
-    new DiskBlockObjectWriter(blockId, file, serializer, bufferSize, compressStream)
+    val syncWrites = conf.get("spark.shuffle.sync", "false").toBoolean
+    new DiskBlockObjectWriter(blockId, file, serializer, bufferSize, compressStream, syncWrites)
   }

   /**
@@ -856,19 +861,18 @@

 private[spark] object BlockManager extends Logging {

   val ID_GENERATOR = new IdGenerator

-  def getMaxMemoryFromSystemProperties: Long = {
-    val memoryFraction = System.getProperty("spark.storage.memoryFraction", "0.66").toDouble
+  def getMaxMemory(conf: SparkConf): Long = {
+    val memoryFraction = conf.get("spark.storage.memoryFraction", "0.66").toDouble
     (Runtime.getRuntime.maxMemory * memoryFraction).toLong
   }

-  def getHeartBeatFrequencyFromSystemProperties: Long =
-    System.getProperty("spark.storage.blockManagerTimeoutIntervalMs", "60000").toLong / 4
+  def getHeartBeatFrequency(conf: SparkConf): Long =
+    conf.get("spark.storage.blockManagerTimeoutIntervalMs", "60000").toLong / 4

-  def getDisableHeartBeatsForTesting: Boolean =
-    System.getProperty("spark.test.disableBlockManagerHeartBeat", "false").toBoolean
+  def getDisableHeartBeatsForTesting(conf: SparkConf): Boolean =
+    conf.get("spark.test.disableBlockManagerHeartBeat", "false").toBoolean

   /**
    * Attempt to clean up a ByteBuffer if it is memory-mapped. This uses an *unsafe* Sun API that
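
The arithmetic in `getMaxMemory` is unchanged; only its source of `spark.storage.memoryFraction` moved to `SparkConf`. With the default 0.66 fraction and a 1 GiB heap, roughly 675 MiB goes to storage. A small sketch of the same computation:

    import org.apache.spark.SparkConf

    object MaxMemoryExample {
      def main(args: Array[String]) {
        val conf = new SparkConf()   // spark.storage.memoryFraction left at its default

        // Same computation as getMaxMemory: a fraction of the JVM's max heap.
        val memoryFraction = conf.get("spark.storage.memoryFraction", "0.66").toDouble
        val maxMemory = (Runtime.getRuntime.maxMemory * memoryFraction).toLong
        println("Block manager budget: " + (maxMemory >> 20) + " MB")
      }
    }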
@@ -23,19 +23,20 @@ import scala.concurrent.ExecutionContext.Implicits.global
 import akka.actor._
 import akka.pattern.ask

-import org.apache.spark.{Logging, SparkException}
+import org.apache.spark.{SparkConf, Logging, SparkException}
 import org.apache.spark.storage.BlockManagerMessages._
 import org.apache.spark.util.AkkaUtils

 private[spark]
-class BlockManagerMaster(var driverActor : Either[ActorRef, ActorSelection]) extends Logging {
+class BlockManagerMaster(var driverActor : Either[ActorRef, ActorSelection],
+    conf: SparkConf) extends Logging {

-  val AKKA_RETRY_ATTEMPTS: Int = System.getProperty("spark.akka.num.retries", "3").toInt
-  val AKKA_RETRY_INTERVAL_MS: Int = System.getProperty("spark.akka.retry.wait", "3000").toInt
+  val AKKA_RETRY_ATTEMPTS: Int = conf.get("spark.akka.num.retries", "3").toInt
+  val AKKA_RETRY_INTERVAL_MS: Int = conf.get("spark.akka.retry.wait", "3000").toInt

   val DRIVER_AKKA_ACTOR_NAME = "BlockManagerMaster"

-  val timeout = AkkaUtils.askTimeout
+  val timeout = AkkaUtils.askTimeout(conf)

   /** Remove a dead executor from the driver actor. This is only called on the driver side. */
   def removeExecutor(execId: String) {
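
`BlockManagerMaster` keeps its ask-with-retry behaviour, but the attempt count and wait now come from the conf. A simplified stand-in for that retry loop (not the actual driver-ask code), using the same two keys:

    import org.apache.spark.SparkConf

    object AkkaRetryExample {
      def main(args: Array[String]) {
        val conf = new SparkConf()
          .set("spark.akka.num.retries", "3")
          .set("spark.akka.retry.wait", "3000")

        val attempts = conf.get("spark.akka.num.retries", "3").toInt
        val intervalMs = conf.get("spark.akka.retry.wait", "3000").toInt

        // Try an operation up to `attempts` times, pausing `intervalMs` between tries.
        def withRetries[T](op: => T): T = {
          var attempt = 0
          while (true) {
            attempt += 1
            try {
              return op
            } catch {
              case e: Exception =>
                if (attempt >= attempts) throw e
                Thread.sleep(intervalMs)
            }
          }
          throw new IllegalStateException("unreachable")
        }

        println(withRetries { 21 * 2 })
      }
    }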
@@ -27,7 +27,7 @@ import scala.concurrent.duration._
 import akka.actor.{Actor, ActorRef, Cancellable}
 import akka.pattern.ask

-import org.apache.spark.{Logging, SparkException}
+import org.apache.spark.{SparkConf, Logging, SparkException}
 import org.apache.spark.storage.BlockManagerMessages._
 import org.apache.spark.util.{AkkaUtils, Utils}

@@ -36,7 +36,7 @@ import org.apache.spark.util.{AkkaUtils, Utils}
  * all slaves' block managers.
  */
 private[spark]
-class BlockManagerMasterActor(val isLocal: Boolean) extends Actor with Logging {
+class BlockManagerMasterActor(val isLocal: Boolean, conf: SparkConf) extends Actor with Logging {

   // Mapping from block manager id to the block manager's information.
   private val blockManagerInfo =
@@ -48,20 +48,18 @@ class BlockManagerMasterActor(val isLocal: Boolean) extends Actor with Logging {
   // Mapping from block id to the set of block managers that have the block.
   private val blockLocations = new JHashMap[BlockId, mutable.HashSet[BlockManagerId]]

-  private val akkaTimeout = AkkaUtils.askTimeout
+  private val akkaTimeout = AkkaUtils.askTimeout(conf)

-  initLogging()
+  val slaveTimeout = conf.get("spark.storage.blockManagerSlaveTimeoutMs",
+    "" + (BlockManager.getHeartBeatFrequency(conf) * 3)).toLong

-  val slaveTimeout = System.getProperty("spark.storage.blockManagerSlaveTimeoutMs",
-    "" + (BlockManager.getHeartBeatFrequencyFromSystemProperties * 3)).toLong
-
-  val checkTimeoutInterval = System.getProperty("spark.storage.blockManagerTimeoutIntervalMs",
+  val checkTimeoutInterval = conf.get("spark.storage.blockManagerTimeoutIntervalMs",
     "60000").toLong

   var timeoutCheckingTask: Cancellable = null

   override def preStart() {
-    if (!BlockManager.getDisableHeartBeatsForTesting) {
+    if (!BlockManager.getDisableHeartBeatsForTesting(conf)) {
       import context.dispatcher
       timeoutCheckingTask = context.system.scheduler.schedule(
         0.seconds, checkTimeoutInterval.milliseconds, self, ExpireDeadHosts)
@@ -30,7 +30,6 @@ import org.apache.spark.util.Utils
  * TODO: Use event model.
  */
 private[spark] class BlockManagerWorker(val blockManager: BlockManager) extends Logging {
-  initLogging()

   blockManager.connectionManager.onReceiveMessage(onBlockMessageReceive)

@@ -101,8 +100,6 @@ private[spark] class BlockManagerWorker(val blockManager: BlockManager) extends
 private[spark] object BlockManagerWorker extends Logging {
   private var blockManagerWorker: BlockManagerWorker = null

-  initLogging()
-
   def startBlockManagerWorker(manager: BlockManager) {
     blockManagerWorker = new BlockManagerWorker(manager)
   }