Commit graph

4536 commits

Author SHA1 Message Date
Aaron Davidson 38b8048f29 Fix compiler errors
Whoops. Last-second changes require testing too, it seems.
2013-10-20 11:03:36 -07:00
Reynold Xin fabd05dabc Updated setJobGroup documentation and marked dagSchedulerSource and blockManagerSource as private in SparkContext. 2013-10-20 10:54:30 -07:00
Matei Zaharia e4abb75d70 Merge pull request #85 from rxin/clean
Moved the top level spark package object from spark to org.apache.spark

This is a pretty annoying documentation bug ...
2013-10-20 09:38:37 -07:00
Aaron Davidson 136b9b3a3e Basic shuffle file consolidation
The Spark shuffle phase can produce a large number of files, as one file is created
per mapper per reducer. For large or repeated jobs, this often produces millions of
shuffle files, which causes extremely degraded performance in the OS file system.
This patch seeks to reduce that burden by combining multiple shuffle files into one.

This PR draws upon the work of Jason Dai in https://github.com/mesos/spark/pull/669.
However, it simplifies the design in order to get the majority of the gain with less
overall intellectual and code burden. The vast majority of code in this pull request
is a refactor to allow the insertion of a clean layer of indirection between logical
block ids and physical files. This, I feel, provides some design clarity in addition
to enabling shuffle file consolidation.

The main goal is to produce one shuffle file per reducer per active mapper thread.
This allows us to isolate the mappers (simplifying the failure modes), while still
allowing us to reduce the number of files tremendously for large jobs. In order
to accomplish this, we simply create a new set of shuffle files for every parallel
task, and return the files to a pool from which they will be given out to the next
task that runs.
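
A minimal sketch of that pooling idea; ShuffleFileGroup, ShuffleFilePool, and their
methods are hypothetical names for illustration, not the classes added by this patch:

```scala
import java.io.File
import java.util.concurrent.ConcurrentLinkedQueue

// Illustrative only: one "file group" holds one shuffle file per reducer and
// is used by at most one running map task at a time.
class ShuffleFileGroup(val files: Array[File])

class ShuffleFilePool(numReducers: Int, newFile: Int => File) {
  // Groups not currently checked out by a running task.
  private val idle = new ConcurrentLinkedQueue[ShuffleFileGroup]()

  // A task reuses a free group if one exists; otherwise a new group of
  // numReducers files is created. The number of groups therefore tracks the
  // number of concurrently running tasks, not the total number of map tasks.
  def checkout(): ShuffleFileGroup = {
    val existing = idle.poll()
    if (existing != null) existing
    else new ShuffleFileGroup(Array.tabulate(numReducers)(newFile))
  }

  // When the task finishes, its files become available to the next task.
  def release(group: ShuffleFileGroup): Unit = idle.add(group)
}
```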
2013-10-20 02:58:26 -07:00
Aaron Davidson 861dc409d7 Refactor of DiskStore for shuffle file consolidation
The main goal of this refactor was to allow the interposition of a new layer which
maps logical BlockIds to physical locations other than a file with the same name
as the BlockId. In particular, BlockIds will need to be mappable to chunks of files,
as multiple blocks will be stored in the same file.

In order to accomplish this, the following changes have been made:
- Creation of DiskBlockManager, which manages the association of logical BlockIds
  to physical disk locations (called FileSegments). By default, Blocks are simply
  mapped to physical files of the same name, as before.
- The DiskStore now indirects all requests for a given BlockId through the DiskBlockManager
  in order to resolve the actual File location.
- DiskBlockObjectWriter has been merged into BlockObjectWriter.
- The Netty PathResolver has been changed to map BlockIds into FileSegments, as this
  codepath is the only one that uses Netty, and that is likely to remain the case.

Overall, I think this refactor produces a clearer division between logical Blocks
and their physical on-disk locations. There is now an explicit (and documented)
mapping from one to the other.
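
A rough sketch of that indirection, under the assumption that the mapping interface
looks roughly like this (names and signatures are illustrative, not the actual
DiskBlockManager API):

```scala
import java.io.File
import scala.collection.mutable

// Illustrative stand-in: a contiguous region of a physical file that backs
// one logical block.
case class FileSegment(file: File, offset: Long, length: Long)

class SimpleDiskBlockManager(rootDir: File) {
  // Explicit mappings for blocks stored inside a shared (consolidated) file.
  private val mapped = mutable.Map[String, FileSegment]()

  def mapBlockToFileSegment(blockId: String, segment: FileSegment): Unit =
    mapped(blockId) = segment

  // Default behaviour, as before: a block resolves to a whole file of the same
  // name; consolidated blocks resolve to a segment of a shared file instead.
  def getBlockLocation(blockId: String): FileSegment =
    mapped.getOrElse(blockId, {
      val f = new File(rootDir, blockId)
      FileSegment(f, 0L, f.length)
    })
}
```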
2013-10-20 02:48:41 -07:00
Matei Zaharia 747f538925 Merge pull request #83 from ewencp/pyspark-accumulator-add-method
Add an add() method to pyspark accumulators.

Add a regular method for adding a term to accumulators in
pyspark. Currently if you have a non-global accumulator, adding to it
is awkward. The += operator can't be used for non-global accumulators
captured via closure because it involves an assignment. The only way
to do it is using __iadd__ directly.

Adding this method lets you write code like this:

def main():
    sc = SparkContext()
    accum = sc.accumulator(0)

    rdd = sc.parallelize([1,2,3])
    def f(x):
        accum.add(x)
    rdd.foreach(f)
    print accum.value

where using accum += x instead would have caused UnboundLocalError
exceptions in workers. Currently it would have to be written as
accum.__iadd__(x).
2013-10-19 23:40:40 -07:00
Reynold Xin 8396a6649e Moved the top level spark package object from spark to org.apache.spark 2013-10-19 23:26:15 -07:00
Reynold Xin eb9bf69462 Added documentation for setJobGroup. Also some minor cleanup in SparkContext. 2013-10-19 23:16:44 -07:00
Josh Rosen 9159d2d09d Split MapOutputTracker into Master/Worker classes.
Previously, MapOutputTracker contained fields and methods that
were only applicable to the master or worker instances.  This
commit introduces a MasterMapOutputTracker class to prevent
the master-specific methods from being accessed on workers.

I also renamed a few methods and made others protected/private.
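
A schematic sketch of that split; the class and method names below are illustrative,
not the actual MapOutputTracker API:

```scala
import scala.collection.mutable

// The shared read path lives in the base class, while registration of map
// outputs is exposed only on the master-side subclass, so worker code cannot
// accidentally call master-only methods.
class MapOutputTrackerSketch {
  protected val mapStatuses = mutable.Map[Int, Seq[String]]()

  def getMapOutputLocations(shuffleId: Int): Seq[String] =
    mapStatuses.getOrElse(shuffleId, Nil)
}

class MasterMapOutputTrackerSketch extends MapOutputTrackerSketch {
  def registerMapOutput(shuffleId: Int, location: String): Unit =
    mapStatuses(shuffleId) = mapStatuses.getOrElse(shuffleId, Nil) :+ location
}
```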
2013-10-19 20:01:22 -07:00
Ewen Cheslack-Postava 7eaa56de7f Add an add() method to pyspark accumulators.
Add a regular method for adding a term to accumulators in
pyspark. Currently if you have a non-global accumulator, adding to it
is awkward. The += operator can't be used for non-global accumulators
captured via closure because it involves an assignment. The only way
to do it is using __iadd__ directly.

Adding this method lets you write code like this:

def main():
    sc = SparkContext()
    accum = sc.accumulator(0)

    rdd = sc.parallelize([1,2,3])
    def f(x):
        accum.add(x)
    rdd.foreach(f)
    print accum.value

where using accum += x instead would have caused UnboundLocalError
exceptions in workers. Currently it would have to be written as
accum.__iadd__(x).
2013-10-19 19:55:39 -07:00
Josh Rosen 867d8fdf2a De-duplicate code in dropOld[Non]BroadcastBlocks. 2013-10-19 19:53:12 -07:00
Josh Rosen 6925a1322b Code de-duplication in put() and putBytes(). 2013-10-19 19:53:12 -07:00
Josh Rosen 8279185651 De-duplication in getRemote() and getRemoteBytes(). 2013-10-19 19:53:12 -07:00
Josh Rosen babccb695e De-duplication in getLocal() and getLocalBytes(). 2013-10-19 19:52:10 -07:00
Reynold Xin 4e44d65b5e Exclusion rules for Maven build files. 2013-10-19 12:35:55 -07:00
Reynold Xin 6511bbe2ad Merge pull request #78 from mosharaf/master
Removed BitTorrentBroadcast and TreeBroadcast.

TorrentBroadcast replaces both.
2013-10-19 11:34:56 -07:00
Mosharaf Chowdhury 29617c27a1 Removed BitTorrentBroadcast and TreeBroadcast. TorrentBroadcast is replacing both. 2013-10-18 23:54:11 -07:00
Reynold Xin f628804c02 Merge pull request #76 from pwendell/master
Clarify compression property.

Clarifies that this governs compression of internal data, not input
data or output data.
2013-10-18 23:19:42 -07:00
Patrick Wendell 6b62836285 Clarify compression property.
Clarifies that this governs compression of internal data, not input
data or output data.
2013-10-18 23:08:44 -07:00
Matei Zaharia 599dcb0ddf Merge pull request #74 from rxin/kill
Job cancellation via job group id.

This PR adds a simple API to group together a set of jobs belonging to a thread and threads spawned from it. It also allows the cancellation of all jobs in this group.

An example:

    sc.setJobGroup("this_is_the_group_id", "some job description")
    sc.parallelize(1 to 10000, 2).map { i => Thread.sleep(10); i }.count()

In a separate thread:

    sc.cancelJobGroup("this_is_the_group_id")
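
For illustration, a hedged sketch of driving that cancellation from a second thread
(it assumes the same already-created SparkContext `sc`; the delay is arbitrary):

    new Thread(new Runnable {
      def run(): Unit = {
        Thread.sleep(1000) // let the group's jobs start first
        sc.cancelJobGroup("this_is_the_group_id")
      }
    }).start()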
2013-10-18 22:49:00 -07:00
Reynold Xin 806f3a3adb Job cancellation via job group id. 2013-10-18 21:46:08 -07:00
Matei Zaharia 8de9706b86 Merge pull request #66 from shivaram/sbt-assembly-deps
Add SBT target to assemble dependencies

This pull request is an attempt to address the long assembly build times during development. Instead of rebuilding the assembly jar for every Spark change, this pull request adds a new SBT target `spark` that packages all the Spark modules and builds an assembly of the dependencies.

So the workflow now would be something like:

```
./sbt/sbt spark # Doing this once should suffice
## Make changes
./sbt/sbt compile
./sbt/sbt test or ./spark-shell
```
2013-10-18 20:32:39 -07:00
Matei Zaharia e5316d0685 Merge pull request #68 from mosharaf/master
Faster and stable/reliable broadcast

HttpBroadcast is noticeably slow, but the alternatives (TreeBroadcast or BitTorrentBroadcast) are notoriously unreliable. The main problem with them is that they try to manage the memory for the pieces of a broadcast themselves. Right now, the BroadcastManager does not know on which machines the tasks reading from a broadcast variable are running or when they have finished. Consequently, we try to guess and often guess wrong, which blows up the memory usage and kills/hangs jobs.

This very simple implementation solves the problem by not trying to manage the intermediate pieces; instead, it offloads that duty to the BlockManager, which is quite good at juggling blocks. Otherwise, it is very similar to the BitTorrentBroadcast implementation (without the fancy optimizations), and it runs much faster than the HttpBroadcast we have right now.

I've been using this for another project for the last couple of weeks, and just today did some benchmarking against the Http one. The following shows the improvements with increasing broadcast size for cold runs. Each line represents a different number of receivers.
![fix-bc-first](https://f.cloud.github.com/assets/232966/1349342/ffa149e4-36e7-11e3-9fa6-c74555829356.png)

After the first broadcast is over, i.e., after the JVM is warmed up and, for HttpBroadcast, the server is already running (I think), the following are the improvements for warm runs.
![fix-bc-succ](https://f.cloud.github.com/assets/232966/1349352/5a948bae-36e8-11e3-98ce-34f19ebd33e0.jpg)
The curves are not as nice as the cold runs, but the improvements are obvious, especially for larger broadcasts and more receivers.

Depending on how it goes, we should deprecate and/or remove the old TreeBroadcast and BitTorrentBroadcast implementations, and hopefully SPARK-889 will not be necessary any more.
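
For context, a hedged sketch of opting into the new implementation through the
spark.broadcast.factory property; the factory class name is an assumption based on
the usual naming convention, and at this point configuration goes through Java
system properties:

```scala
import org.apache.spark.SparkContext

object TorrentBroadcastExample {
  def main(args: Array[String]): Unit = {
    // Assumed factory class name; must be set before the SparkContext is created.
    System.setProperty("spark.broadcast.factory",
      "org.apache.spark.broadcast.TorrentBroadcastFactory")

    val sc = new SparkContext("local[2]", "torrent-broadcast-example")
    val data = sc.broadcast(Array.fill(1000000)(1))

    // Each task reads the broadcast value; the pieces are served via the BlockManager.
    val total = sc.parallelize(1 to 4, 4).map(_ => data.value.sum).reduce(_ + _)
    println(total)
    sc.stop()
  }
}
```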
2013-10-18 20:30:56 -07:00
Matei Zaharia 8d528af829 Merge pull request #71 from aarondav/scdefaults
Spark shell exits if it cannot create SparkContext

Mainly, this occurs if you provide a messed-up MASTER URL (one that doesn't match one
of our regexes). Previously, we would default to Mesos, fail, and then start the shell
anyway, except that any Spark command would fail. Simply exiting seems clearer.
2013-10-18 20:24:10 -07:00
Prabeesh K 6ec39829e9 Update MQTTWordCount.scala 2013-10-18 17:00:28 +05:30
Mosharaf Chowdhury 08391dbcb8 Should compile now. 2013-10-17 23:06:17 -07:00
Mosharaf Chowdhury 8612641362 Added an after block to reset spark.broadcast.factory 2013-10-17 22:44:04 -07:00
Prabeesh K d223d38933 Update MQTTInputDStream.scala 2013-10-18 09:09:49 +05:30
Aaron Davidson 74737264c4 Spark shell exits if it cannot create SparkContext
Mainly, this occurs if you provide a messed-up MASTER URL (one that doesn't match one
of our regexes). Previously, we would default to Mesos, fail, and then start the shell
anyway, except that any Spark command would fail.
2013-10-17 18:51:19 -07:00
Mosharaf Chowdhury 90ab55fd37 Merge remote-tracking branch 'upstream/master' 2013-10-17 18:12:28 -07:00
Mosharaf Chowdhury e178ae4e9b BroadcastSuite updated to test both HttpBroadcast and TorrentBroadcast in local, local[N], local-cluster settings. 2013-10-17 16:38:43 -07:00
Matei Zaharia fc26e5b832 Merge pull request #69 from KarthikTunga/master
Fix for issue SPARK-627. Implementing --config argument in the scripts.

This code fix is for issue SPARK-627. I added code to consider --config arguments in the scripts. In case the <conf-dir> is not a directory, the scripts exit. I removed the --hosts argument; the same effect can be achieved by giving a different config directory. Let me know if an explicit --hosts argument is required.
2013-10-17 13:21:07 -07:00
Mosharaf Chowdhury 6a84e40efe Merge remote-tracking branch 'upstream/master' 2013-10-17 13:14:33 -07:00
Mosharaf Chowdhury 35b2415fb3 Code styling. Updated doc. 2013-10-17 13:14:12 -07:00
Matei Zaharia cf64f63f8a Merge pull request #67 from kayousterhout/remove_tsl
Removed TaskSchedulerListener interface.

The interface was used only by the DAG scheduler (so it wasn't necessary
to define the additional interface), and the naming makes it very
confusing when reading the code (because "listener" was used
to describe the DAG scheduler, rather than SparkListeners, which
implement a nearly-identical interface but serve a different
function).

@mateiz - is there a reason for this interface that I'm missing?
2013-10-17 11:12:28 -07:00
Mosharaf Chowdhury e663750488 Removed unused code.
Changes to match Spark coding style.
2013-10-17 00:19:50 -07:00
Kay Ousterhout 809f547633 Fixed unit tests 2013-10-16 23:16:12 -07:00
KarthikTunga 8537f19268 SPARK-627 , Implementing --config arguments in the scripts 2013-10-16 23:00:33 -07:00
Reynold Xin 3e7df8f6c6 Added a number of very fast, memory-efficient data structures: BitSet, OpenHashSet, OpenHashMap, PrimitiveKeyOpenHashMap. 2013-10-16 22:58:52 -07:00
KarthikTunga ff4fb1f7ee SPARK-627 , Implementing --config arguments in the scripts 2013-10-16 22:55:15 -07:00
KarthikTunga a32aa6b351 Implementing --config argument in the scripts 2013-10-16 22:51:09 -07:00
Mosharaf Chowdhury e96bd0068f BroadcastTest2 --> BroadcastTest 2013-10-16 21:33:33 -07:00
Mosharaf Chowdhury a8d0981832 Fixes for the new BlockId naming convention. 2013-10-16 21:33:33 -07:00
Mosharaf Chowdhury feb45d391f Default blockSize is 4MB.
BroadcastTest2 example added for testing broadcasts.
2013-10-16 21:33:33 -07:00
Mosharaf Chowdhury 6e5a60fab4 Removed unnecessary code, and added comment of memory-latency tradeoff. 2013-10-16 21:33:33 -07:00
Mosharaf Chowdhury 4602e2bf6e Torrent-ish broadcast based on BlockManager. 2013-10-16 21:33:33 -07:00
prabeesh 890f8fe439 Modify code to use the Spark Logging class 2013-10-17 10:00:40 +05:30
prabeesh ee4178f144 remove unused dependency 2013-10-17 09:57:48 +05:30
prabeesh 29245605bf remove unused dependency 2013-10-17 09:57:30 +05:30
Shivaram Venkataraman 0a4b76fcc2 Rename SBT target to assemble-deps. 2013-10-16 17:05:46 -07:00