Apache Spark - A unified analytics engine for large-scale data processing
Kousuke Saruta 4d4b249274 [SPARK-6769][YARN][TEST] Usage of the ListenerBus in YarnClusterSuite is wrong
In YarnClusterSuite, a test case uses `SaveExecutorInfo` to handle ExecutorAdded events as follows.

```scala
private class SaveExecutorInfo extends SparkListener {
  val addedExecutorInfos = mutable.Map[String, ExecutorInfo]()

  override def onExecutorAdded(executor: SparkListenerExecutorAdded) {
    addedExecutorInfos(executor.executorId) = executor.executorInfo
  }
}

...

    listener = new SaveExecutorInfo
    val sc = new SparkContext(new SparkConf()
      .setAppName("yarn \"test app\" 'with quotes' and \\back\\slashes and $dollarSigns"))
    sc.addSparkListener(listener)
    val status = new File(args(0))
    var result = "failure"
    try {
      val data = sc.parallelize(1 to 4, 4).collect().toSet
      assert(sc.listenerBus.waitUntilEmpty(WAIT_TIMEOUT_MILLIS))
      data should be (Set(1, 2, 3, 4))
      result = "success"
    } finally {
      sc.stop()
      Files.write(result, status, UTF_8)
    }
```

But this usage is wrong: executors spawn while the SparkContext is still initializing, and `SparkContext#addSparkListener` can only be invoked after that initialization completes, i.e. after the executors have already spawned. As a result, `SaveExecutorInfo` never observes the ExecutorAdded events.

The following code then checks the result of handling those events. Because of the problem above, `addedExecutorInfos` stays empty, so the assertion inside the loop is never even exercised.

```scala
    // verify log urls are present
    listener.addedExecutorInfos.values.foreach { info =>
      assert(info.logUrlMap.nonEmpty)
    }
```
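
For reference, a minimal sketch of a corrected usage follows. It is hedged: it assumes the `spark.extraListeners` configuration key (available since Spark 1.3), which instantiates listeners while the SparkContext is initializing, and the `findListenersByClass` helper that the commit log below says was added to ListenerBus; the exact signature of that helper is an assumption here.

```scala
import scala.collection.mutable

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.scheduler.{SparkListener, SparkListenerExecutorAdded}
import org.apache.spark.scheduler.cluster.ExecutorInfo

// The listener must be instantiable by name, so it needs a no-arg constructor
// and cannot be a private inner class.
class SaveExecutorInfo extends SparkListener {
  val addedExecutorInfos = mutable.Map[String, ExecutorInfo]()

  override def onExecutorAdded(executor: SparkListenerExecutorAdded): Unit = {
    addedExecutorInfos(executor.executorId) = executor.executorInfo
  }
}

val sc = new SparkContext(new SparkConf()
  .setAppName("listener registration sketch")
  // Registering by class name means the listener is attached during
  // SparkContext initialization, i.e. before executors spawn.
  .set("spark.extraListeners", classOf[SaveExecutorInfo].getName))

// Retrieve the instance that SparkContext created. findListenersByClass is the
// helper the commit log mentions adding; treat this call as a sketch of its
// intended use, not the exact API. (listenerBus is private[spark], which is
// fine for test code living in an org.apache.spark package.)
val listener = sc.listenerBus.findListenersByClass[SaveExecutorInfo]().head
```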

Author: Kousuke Saruta <sarutak@oss.nttdata.co.jp>

Closes #5417 from sarutak/SPARK-6769 and squashes the following commits:

8adc8ba [Kousuke Saruta] Fixed compile error
e258530 [Kousuke Saruta] Fixed style
591cf3e [Kousuke Saruta] Fixed style
48ec89a [Kousuke Saruta] Merge branch 'master' of git://git.apache.org/spark into SPARK-6769
860c965 [Kousuke Saruta] Simplified code
207d325 [Kousuke Saruta] Added findListenersByClass method to ListenerBus
2408c84 [Kousuke Saruta] Merge branch 'master' of git://git.apache.org/spark into SPARK-6769
2d7e409 [Kousuke Saruta] Merge branch 'master' of git://git.apache.org/spark into SPARK-6769
3874adf [Kousuke Saruta] Fixed the usage of listener bus in LogUrlsStandaloneSuite
153a91b [Kousuke Saruta] Fixed the usage of listener bus in YarnClusterSuite

Apache Spark

Spark is a fast and general cluster computing system for Big Data. It provides high-level APIs in Scala, Java, and Python, and an optimized engine that supports general computation graphs for data analysis. It also supports a rich set of higher-level tools including Spark SQL for SQL and structured data processing, MLlib for machine learning, GraphX for graph processing, and Spark Streaming for stream processing.

http://spark.apache.org/

Online Documentation

You can find the latest Spark documentation, including a programming guide, on the project web page and project wiki. This README file only contains basic setup instructions.

Building Spark

Spark is built using Apache Maven. To build Spark and its example programs, run:

mvn -DskipTests clean package

(You do not need to do this if you downloaded a pre-built package.) More detailed documentation is available from the project site, at "Building Spark".

Interactive Scala Shell

The easiest way to start using Spark is through the Scala shell:

./bin/spark-shell

Try the following command, which should return 1000:

scala> sc.parallelize(1 to 1000).count()
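
For a slightly larger exercise, and assuming the shell was started from the repository root so that README.md resolves on the local filesystem, a classic word count can be pasted into the shell:

```scala
// Split each line on whitespace, pair every word with 1, and sum per word.
val counts = sc.textFile("README.md")
  .flatMap(line => line.split("\\s+"))
  .map(word => (word, 1))
  .reduceByKey(_ + _)

// Peek at a few (word, count) pairs.
counts.take(5).foreach(println)
```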

Interactive Python Shell

Alternatively, if you prefer Python, you can use the Python shell:

./bin/pyspark

And run the following command, which should also return 1000:

>>> sc.parallelize(range(1000)).count()

Example Programs

Spark also comes with several sample programs in the examples directory. To run one of them, use ./bin/run-example <class> [params]. For example:

./bin/run-example SparkPi

will run the Pi example locally.

You can set the MASTER environment variable when running examples to submit them to a cluster. This can be a mesos:// or spark:// URL, "yarn-cluster" or "yarn-client" to run on YARN, "local" to run locally with one thread, or "local[N]" to run locally with N threads. You can also use an abbreviated class name if the class is in the examples package. For instance:

MASTER=spark://host:7077 ./bin/run-example SparkPi
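
Likewise, assuming a YARN environment is configured (e.g. HADOOP_CONF_DIR points at your cluster configuration) and the assembly was built with YARN support, the same example can be submitted in yarn-client mode:

MASTER=yarn-client ./bin/run-example SparkPi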

Many of the example programs print usage help if no params are given.

Running Tests

Testing first requires building Spark. Once Spark is built, tests can be run using:

./dev/run-tests
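
To iterate on a single suite instead, such as the YarnClusterSuite discussed above, the sbt build can be used; the invocation below is an assumption based on sbt's standard test-only task and Spark's build profile flags, so consult the testing guidance for the canonical form:

build/sbt -Pyarn "yarn/test-only *YarnClusterSuite"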

Please see the guidance in the online documentation on how to run all automated tests.

A Note About Hadoop Versions

Spark uses the Hadoop core library to talk to HDFS and other Hadoop-supported storage systems. Because the protocols have changed in different versions of Hadoop, you must build Spark against the same version that your cluster runs.

Please refer to the build documentation at "Specifying the Hadoop Version" for detailed guidance on building for a particular distribution of Hadoop, including building for particular Hive and Hive Thriftserver distributions. See also "Third Party Hadoop Distributions" for guidance on building a Spark application that works with a particular distribution.
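
As a concrete (hedged) example, the following assumes the hadoop-2.4 profile and the hadoop.version property documented for this release line, and would produce a build against Hadoop 2.4.0 with YARN support:

mvn -Pyarn -Phadoop-2.4 -Dhadoop.version=2.4.0 -DskipTests clean package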

Configuration

Please refer to the Configuration guide in the online documentation for an overview on how to configure Spark.