This commit makes Spark invocation saner by using an assembly JAR to
find all of Spark's dependencies instead of adding all the JARs in
lib_managed. It also packages the examples into an assembly and uses
that as SPARK_EXAMPLES_JAR. Finally, it replaces the old "run" script
with two better-named scripts: "run-examples" for examples, and
"spark-class" for internal Spark classes (e.g. the REPL, master, etc.).
This should also reduce the confusion people have had when trying to use
"run" to launch their own classes: it was never meant for that, but now
at least anyone who looks at run-examples can adapt it to do a decent
job of launching their own code.
As part of this change, Bagel's examples have also been moved into the
examples package instead of the bagel package.
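
For reference, the sbt side of this could look roughly like the
following in project/SparkBuild.scala (a minimal sketch using the
sbt-assembly plugin; the project layout, jar name, and merge rules
shown are illustrative assumptions, not the exact build definition):

    import sbt._
    import Keys._
    import sbtassembly.Plugin._
    import AssemblyKeys._

    object SparkBuild extends Build {
      // Package the examples into one assembly JAR so the launch scripts
      // can point SPARK_EXAMPLES_JAR at a single file instead of listing
      // every JAR under lib_managed.
      lazy val examples = Project("examples", file("examples"))
        .settings(assemblySettings: _*)
        .settings(
          jarName in assembly := "spark-examples-assembly.jar",
          // Discard the duplicate META-INF entries that collide when many
          // dependency JARs are merged into one.
          mergeStrategy in assembly := {
            case PathList("META-INF", xs @ _*) => MergeStrategy.discard
            case _                             => MergeStrategy.first
          }
        )
    }

With the assemblies in place, launching looks like this (class names
and arguments are illustrative):

    ./spark-class spark.deploy.master.Master
    ./run-examples spark.examples.SparkPi local
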
For debugging and analysis purposes, it's nice to have the exact command
used to launch Spark contained within the logs. This adds the necessary
hooks to make that possible.
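
A minimal sketch of the kind of hook this adds (the names below are
illustrative stand-ins, not the actual Spark code): quote and log the
full command line right before spawning the child process.

    object LaunchLogger {
      // Quote every argument so the logged line can be pasted back into
      // a shell verbatim when reproducing a failure.
      def formatCommand(command: Seq[String]): String =
        command.map("\"" + _ + "\"").mkString(" ")

      def launch(command: Seq[String]): Process = {
        // Stand-in for Spark's logInfo: record the exact command in the
        // logs before the child process starts.
        println("Launch command: " + formatCommand(command))
        new ProcessBuilder(command: _*).start()
      }
    }
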
This adds a JavaAPICompletenessChecker tool that finds methods in the
Scala API that still need to be ported to the Java API. To use it:

    ./run spark.tools.JavaAPICompletenessChecker
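
The idea behind the checker is simple to sketch (this is a rough
illustration, not the tool's actual implementation, and it ignores
renames and overloads): reflect over both APIs and diff the public
method names.

    object ApiDiffSketch {
      def main(args: Array[String]) {
        // Public method names on the Scala RDD and its Java wrapper.
        val scalaMethods = classOf[spark.RDD[_]].getMethods.map(_.getName).toSet
        val javaMethods = classOf[spark.api.java.JavaRDD[_]].getMethods.map(_.getName).toSet
        // Anything on the Scala side with no Java counterpart is a
        // candidate for porting.
        (scalaMethods -- javaMethods).toSeq.sorted.foreach(println)
      }
    }
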
Conflicts:
	project/SparkBuild.scala
	run
	run2.cmd

The previous version assumed that a CLASSPATH environment variable was
set by the "run" script when launching the process that starts the
ExecutorRunner, but unfortunately this is not true in tests. Instead, we
factor the classpath calculation into an external script and call that.
NOTE: This includes a Windows version but hasn't yet been tested there.
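
A sketch of what "call that" amounts to on the Scala side (the script
name and path here are assumptions for illustration):

    import scala.sys.process._

    object ClasspathSketch {
      // Build the classpath by invoking the external helper script rather
      // than reading a CLASSPATH variable that tests never set.
      def computeClasspath(sparkHome: String): String =
        Seq("bash", sparkHome + "/compute-classpath.sh").!!.trim
    }
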
PR #419 applied the same change, but only to start-master.sh, so some
workers were still starting their web UIs using internal addresses.
This should finally fix SPARK-613.