Commit graph

4573 commits

Author SHA1 Message Date
Reynold Xin e9ff13ec72 Consolidated both mapPartitions-related RDDs into a single MapPartitionsRDD.
Also changed the semantics of the index parameter in mapPartitionsWithIndex from the partition index of the output partition to the partition index in the current RDD.
2013-11-24 17:56:43 +08:00
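For reference, a minimal sketch (assuming a local `SparkContext`; not part of the commit itself) of how the index parameter is seen from user code under the new semantics, i.e. as the partition's index in the current RDD:

```scala
import org.apache.spark.SparkContext

object MapPartitionsWithIndexDemo {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext("local", "mapPartitionsWithIndex demo")
    // Four partitions; the index passed to the closure is the partition's
    // index in this RDD, per the semantics change described above.
    val tagged = sc.parallelize(1 to 8, 4).mapPartitionsWithIndex { (index, iter) =>
      iter.map(x => (index, x))
    }
    tagged.collect().foreach(println)
    sc.stop()
  }
}
```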
Reynold Xin 718cc803f7 Merge pull request #200 from mateiz/hash-fix
AppendOnlyMap fixes

- Chose a more random reshuffling step for values returned by Object.hashCode to avoid some long chaining that was happening for consecutive integers (e.g. `sc.makeRDD(1 to 100000000, 100).map(t => (t, t)).reduceByKey(_ + _).count`)
- Some other small optimizations throughout (see commit comments)
2013-11-24 11:02:02 +08:00
Matei Zaharia 9837a60234 Some other optimizations to AppendOnlyMap:
- Don't check keys for equality when re-inserting due to growing the
  table; the keys will already be unique
- Remember the grow threshold instead of recomputing it on each insert
2013-11-23 17:38:29 -08:00
Matei Zaharia 7535d7fbcb Fixes to AppendOnlyMap:
- Use the Murmur Hash 3 finalization step to scramble the bits of hashCode
  instead of the simpler version in java.util.HashMap; the latter had
  trouble with ranges of consecutive integers. Murmur Hash 3 is used by
  fastutil.
- Use Object.equals() instead of Scala's == to compare keys, because the
  latter does extra casts for numeric types (see the equals method in
  https://github.com/scala/scala/blob/master/src/library/scala/runtime/BoxesRunTime.java)
2013-11-23 17:21:37 -08:00
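A minimal sketch of the finalization step described above. The constants are the standard Murmur Hash 3 fmix32 constants; the exact code in `AppendOnlyMap` may differ in detail.

```scala
object Rehash {
  /** Murmur Hash 3 fmix32 finalization step: scrambles the bits of a hash code
    * so that consecutive integers do not form long chains in a
    * power-of-two-sized open-addressed table. */
  def rehash(h: Int): Int = {
    var k = h
    k ^= k >>> 16
    k *= 0x85ebca6b
    k ^= k >>> 13
    k *= 0xc2b2ae35
    k ^ (k >>> 16)
  }

  def main(args: Array[String]): Unit = {
    // Consecutive integers map to well-spread hash codes.
    (1 to 5).foreach(i => println(s"$i -> ${rehash(i)}"))
  }
}
```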
Reynold Xin 51aa9d6e99 Merge pull request #198 from ankurdave/zipPartitions-preservesPartitioning
Support preservesPartitioning in RDD.zipPartitions

In `RDD.zipPartitions`, add support for a `preservesPartitioning` option (similar to `RDD.mapPartitions`) that reuses the first RDD's partitioner.
2013-11-23 19:46:46 +08:00
Ankur Dave c1507afc6c Support preservesPartitioning in RDD.zipPartitions 2013-11-23 03:03:31 -08:00
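A usage sketch of the new option, assuming a local `SparkContext`; `preservesPartitioning = true` declares that the zipped result keeps the first RDD's partitioner, so downstream key-based operations can avoid a shuffle.

```scala
import org.apache.spark.{HashPartitioner, SparkContext}
import org.apache.spark.SparkContext._ // implicit conversions for pair RDDs

object ZipPartitionsDemo {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext("local", "zipPartitions demo")
    val a = sc.parallelize(1 to 100).map(x => (x, x)).partitionBy(new HashPartitioner(4))
    val b = a.mapValues(_ * 2)
    // Reuse the first RDD's partitioner for the zipped result.
    val zipped = a.zipPartitions(b, preservesPartitioning = true) { (it1, it2) =>
      it1.zip(it2).map { case ((k, v1), (_, v2)) => (k, v1 + v2) }
    }
    println(zipped.partitioner) // expected: Some(<HashPartitioner>)
    sc.stop()
  }
}
```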
Reynold Xin 086b097e33 Merge pull request #193 from aoiwelle/patch-1
Fix Kryo Serializer buffer documentation inconsistency

The documentation here is inconsistent with the coded default and other documentation.
2013-11-22 10:26:39 +08:00
Reynold Xin f20093c3af Merge pull request #196 from pwendell/master
TimeTrackingOutputStream should pass on calls to close() and flush().

Without this fix you get a huge number of open files when running shuffles.
2013-11-22 10:12:13 +08:00
Patrick Wendell 53b94ef2f5 TimeTrackingOutputStream should pass on calls to close() and flush().
Without this fix you get a huge number of open files after running
shuffles.
2013-11-21 17:20:15 -08:00
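A hedged sketch of the pattern being fixed; the class and field names here (`TimedOutputStream`, `timeSpentNanos`) are illustrative, not Spark's actual internals. The point is that `flush()` and `close()` must be forwarded to the wrapped stream rather than inherited as no-ops, which is what leaked file handles.

```scala
import java.io.OutputStream

/** Times writes to an underlying stream and, crucially, forwards flush() and
  * close() instead of relying on OutputStream's no-op defaults. */
class TimedOutputStream(out: OutputStream) extends OutputStream {
  var timeSpentNanos: Long = 0L

  private def timed[A](body: => A): A = {
    val start = System.nanoTime()
    try body finally { timeSpentNanos += System.nanoTime() - start }
  }

  override def write(b: Int): Unit = timed(out.write(b))
  override def write(b: Array[Byte], off: Int, len: Int): Unit = timed(out.write(b, off, len))
  override def flush(): Unit = timed(out.flush()) // previously not forwarded
  override def close(): Unit = timed(out.close()) // previously not forwarded
}
```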
Neal Wiggins 21b5478ed6 Fix Kryo Serializer buffer inconsistency
The documentation here is inconsistent with the coded default and other documentation.
2013-11-20 16:19:25 -08:00
Reynold Xin 2fead510f7 Merge branch 'master' of github.com:tbfenet/incubator-spark
PartitionPruningRDD is using index from parent

I was getting an ArrayIndexOutOfBoundsException after doing a union on a pruned RDD. The index it was using for the partition was the index in the original RDD, not the index in the new pruned RDD.
2013-11-21 07:15:55 +08:00
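A small reproduction sketch, assuming the `PartitionPruningRDD.create` helper and a local `SparkContext`:

```scala
import org.apache.spark.SparkContext
import org.apache.spark.rdd.PartitionPruningRDD

object PruningUnionDemo {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext("local", "partition pruning demo")
    val parent = sc.parallelize(1 to 100, 10)
    // Keep only the even-numbered partitions of the parent RDD.
    val pruned = PartitionPruningRDD.create(parent, idx => idx % 2 == 0)
    // Before the fix, operations such as union over a pruned RDD could throw
    // ArrayIndexOutOfBoundsException because each pruned partition carried the
    // parent's partition index rather than its index in the pruned RDD.
    println(pruned.union(parent).count())
    sc.stop()
  }
}
```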
Matei Zaharia 4b895013cc Merge pull request #191 from hsaputra/removesemicolonscala
Cleanup to remove semicolons (;) from Scala code

-) The main reason for this PR is to remove semicolons at the end of single-statement lines of Scala code.
-) Remove unused imports as I see them
-) Fix ASF comment headers in some files (bad copy-paste, I suppose)
2013-11-20 10:36:10 -08:00
Henry Saputra 43dfac5132 Merge branch 'master' into removesemicolonscala 2013-11-19 16:57:57 -08:00
Henry Saputra 10be58f251 Another set of changes to remove unnecessary semicolons (;) from Scala code.
Passes sbt/sbt compile and test.
2013-11-19 16:56:23 -08:00
Matei Zaharia f568912f85 Merge pull request #181 from BlackNiuza/fix_tasks_number
correct number of tasks in ExecutorsUI

Index `a` is not `execId` here
2013-11-19 16:11:31 -08:00
Matei Zaharia aa638ed9c1 Merge pull request #189 from tgravescs/sparkYarnErrorHandling
Improve Spark on Yarn error handling

Improve CLI error handling and only allow a certain number of worker failures before failing the application.  This helps prevent users from doing foolish things that leave their jobs running forever, for instance using 32-bit Java while trying to allocate 8G containers: that used to loop forever; now it errors out after a certain number of retries, and the number of tries is configurable.  Also increase the frequency at which we ping the RM, so we get containers faster when they die.  The Yarn MR app defaults to pinging the RM every second, so the default of 5 seconds here is fine, but it is configurable as well in case people want to change it.

I do want to make sure there aren't any cases where calling stopExecutors in CoarseGrainedSchedulerBackend would cause problems.  I couldn't think of any, and I tested on a standalone cluster as well as on Yarn.
2013-11-19 16:05:44 -08:00
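A purely illustrative sketch of the bounded-failure idea; the names here are hypothetical and are not Spark's actual YARN classes or configuration keys.

```scala
/** Hypothetical tracker: fail the application after a bounded number of
  * worker failures instead of re-requesting containers forever. */
class WorkerFailureTracker(maxWorkerFailures: Int) {
  private var failedWorkers = 0

  def onWorkerFailed(reason: String): Unit = {
    failedWorkers += 1
    if (failedWorkers > maxWorkerFailures) {
      // Fail fast instead of looping forever (e.g. a 32-bit JVM asked to
      // allocate 8G containers, which can never succeed).
      sys.error(s"Max number of worker failures ($maxWorkerFailures) reached: $reason")
    }
    // Otherwise fall through and request a replacement container from the RM.
  }
}
```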
Matei Zaharia 55925805fc Merge pull request #187 from aarondav/example-bcast-test
Enable the Broadcast examples to work in a cluster setting

Since they rely on println to display results, we need to first collect those results to the driver to have them actually display locally.

This issue came up on the mailing lists [here](http://mail-archives.apache.org/mod_mbox/incubator-spark-user/201311.mbox/%3C2013111909591557147628%40ict.ac.cn%3E).
2013-11-19 16:04:01 -08:00
tgravescs 4093e9393a Improve Spark on Yarn error handling 2013-11-19 12:44:00 -06:00
Henry Saputra 9c934b640f Remove semicolons at the end of Scala statements to make the code more idiomatic Scala.
Also remove unused imports as I found them along the way.
Remove explicit return statements when returning a value in Scala code.

Passes compile and tests.
2013-11-19 10:19:03 -08:00
Matthew Taylor f639b65eab PartitionPruningRDD is using index from parent (review changes) 2013-11-19 10:48:48 +00:00
Aaron Davidson 50fd8d98c0 Enable the Broadcast examples to work in a cluster setting
Since they rely on println to display results, we need to first collect
those results to the driver to have them actually display locally.
2013-11-18 22:51:35 -08:00
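A minimal sketch of the pattern (not the examples' actual code), assuming a local `SparkContext`:

```scala
import org.apache.spark.SparkContext

object BroadcastPrintDemo {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext("local", "broadcast print demo")
    val lookup = sc.broadcast(Map(1 -> "one", 2 -> "two", 3 -> "three"))
    val rdd = sc.parallelize(1 to 3)

    // On a cluster, this would print on the executors, not on the driver's console:
    //   rdd.foreach(x => println(lookup.value(x)))

    // Collecting first brings the results to the driver so they print locally:
    rdd.map(x => lookup.value(x)).collect().foreach(println)
    sc.stop()
  }
}
```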
Matthew Taylor 13b9bf494b PartitionPruningRDD is using index from parent 2013-11-19 06:27:33 +00:00
shiyun.wxm eda05fa439 use HashSet.empty[Long] instead of Seq[Long] 2013-11-18 13:31:14 +08:00
Reynold Xin e2ebc3a9d8 Merge pull request #182 from rxin/vector
Slightly enhanced PrimitiveVector:

1. Added trim() method.
2. Added size method.
3. Renamed getUnderlyingArray to array.
4. Minor documentation update.
2013-11-17 18:42:18 -08:00
Reynold Xin 26f616d73a Merge pull request #3 from aarondav/pv-test
Add PrimitiveVectorSuite and fix bug in resize()
2013-11-17 18:18:16 -08:00
Aaron Davidson 85763f4942 Add PrimitiveVectorSuite and fix bug in resize() 2013-11-17 18:16:51 -08:00
Reynold Xin 16a2286d6d Return the vector itself for trim and resize method in PrimitiveVector. 2013-11-17 17:52:02 -08:00
BlackNiuza ecfbaf2442 rename "a" to "statusId" 2013-11-18 09:51:40 +08:00
Reynold Xin c30979c7d6 Slightly enhanced PrimitiveVector:
1. Added trim() method.
2. Added size method.
3. Renamed getUnderlyingArray to array.
4. Minor documentation update.
2013-11-17 17:09:40 -08:00
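A simplified sketch reflecting the API listed above (`size`, `trim()`, `array`, with `trim()` returning the vector itself for chaining); the real `PrimitiveVector` in `org.apache.spark.util.collection` is `@specialized` and differs in detail.

```scala
import scala.reflect.ClassTag

/** A PrimitiveVector-style growable array (simplified sketch). */
class SimpleVector[V: ClassTag](initialSize: Int = 64) {
  private var numElements = 0
  private var backing = new Array[V](initialSize)

  def apply(index: Int): V = backing(index)

  def +=(value: V): Unit = {
    if (numElements == backing.length) resize(backing.length * 2)
    backing(numElements) = value
    numElements += 1
  }

  /** Number of elements stored (the added size method). */
  def size: Int = numElements

  /** Backing array, possibly longer than size (renamed from getUnderlyingArray). */
  def array: Array[V] = backing

  /** Shrink the backing array to exactly size elements; returns this for chaining. */
  def trim(): SimpleVector[V] = resize(numElements)

  private def resize(newLength: Int): SimpleVector[V] = {
    val newArray = new Array[V](newLength)
    // Copy only the elements actually stored.
    Array.copy(backing, 0, newArray, 0, numElements)
    backing = newArray
    this
  }
}
```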
BlackNiuza b60839e56a correct number of tasks in ExecutorsUI 2013-11-17 21:38:57 +08:00
Matei Zaharia 1b5b358309 Merge pull request #178 from hsaputra/simplecleanupcode
Simple cleanup on Spark's Scala code

Simple cleanup on Spark's Scala code while testing some modules:
-) Remove some unused imports as I found them
-) Remove ";" from import statements
-) Remove () at the end of method calls like size that do not have side effects.
2013-11-16 11:44:10 -08:00
Henry Saputra c33f802044 Simple cleanup on Spark's Scala code while testing core and yarn modules:
-) Remove some unused imports as I found them
-) Remove ";" from import statements
-) Remove () at the end of method calls like size that do not have side effects.
2013-11-15 10:32:20 -08:00
Matei Zaharia 96e0fb4630 Merge pull request #173 from kayousterhout/scheduler_hang
Fix bug where scheduler could hang after task failure.

When a task fails, we need to call reviveOffers() so that the
task can be rescheduled on a different machine. In the current code,
the state in ClusterTaskSetManager indicating which tasks are
pending may be updated after revive offers is called (there's a
race condition here), so when revive offers is called, the task set
manager does not yet realize that there are failed tasks that need
to be relaunched.

This isn't currently unit tested but will be once my pull request for
merging the cluster and local schedulers goes in -- at which point
many more of the unit tests will exercise the code paths through
the cluster scheduler (currently the failure test suite uses the local
scheduler, which is why we didn't see this bug before).
2013-11-14 22:29:28 -08:00
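An illustrative sketch of the ordering the fix enforces; the names are hypothetical, not the real scheduler internals.

```scala
import scala.collection.mutable

/** Record the failed task in the pending state *before* asking the backend to
  * revive offers, so that when fresh offers arrive the task set manager already
  * knows it has a task to relaunch. */
class FailureHandlingSketch(reviveOffers: () => Unit) {
  private val pendingTasks = mutable.Queue[Long]()

  def onTaskFailed(taskId: Long): Unit = {
    // 1. Update the pending-task bookkeeping first (closes the race described above).
    pendingTasks.enqueue(taskId)
    // 2. Then revive offers so the task can be rescheduled on another machine.
    reviveOffers()
  }

  def hasPendingTasks: Boolean = pendingTasks.nonEmpty
}
```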
Matei Zaharia dfd40e9f6f Merge pull request #175 from kayousterhout/no_retry_not_serializable
Don't retry tasks when they fail due to a NotSerializableException

As with my previous pull request, this will be unit tested once the Cluster and Local schedulers get merged.
2013-11-14 19:44:50 -08:00
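An illustrative sketch of why retrying is pointless here (not Spark's `TaskSetManager` code): serialization failure is deterministic, so it should be surfaced immediately rather than retried.

```scala
import java.io.{ByteArrayOutputStream, NotSerializableException, ObjectOutputStream}

object SerializationCheck {
  /** Try to serialize a task; a NotSerializableException will recur on every
    * retry, so report it instead of rescheduling the task. */
  def trySerialize(task: AnyRef): Either[NotSerializableException, Array[Byte]] = {
    val buffer = new ByteArrayOutputStream()
    val out = new ObjectOutputStream(buffer)
    try {
      out.writeObject(task)
      out.flush()
      Right(buffer.toByteArray)
    } catch {
      case e: NotSerializableException =>
        // Deterministic failure: abort instead of retrying.
        Left(e)
    } finally {
      out.close()
    }
  }
}
```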
Matei Zaharia ed25105fd9 Merge pull request #174 from ahirreddy/master
Write Spark UI url to driver file on HDFS

This makes the SIMR code path simpler
2013-11-14 19:43:55 -08:00
Kay Ousterhout 29c88e408e Don't retry tasks when they fail due to a NotSerializableException 2013-11-14 15:15:19 -08:00
Kay Ousterhout b4546ba9e6 Fix bug where scheduler could hang after task failure.
When a task fails, we need to call reviveOffers() so that the
task can be rescheduled on a different machine. In the current code,
the state in ClusterTaskSetManager indicating which tasks are
pending may be updated after revive offers is called (there's a
race condition here), so when revive offers is called, the task set
manager does not yet realize that there are failed tasks that need
to be relaunched.
2013-11-14 13:55:03 -08:00
Reynold Xin 1a4cfbea33 Merge pull request #169 from kayousterhout/mesos_fix
Don't ignore spark.cores.max when using Mesos Coarse mode

totalCoresAcquired is decremented but never incremented, causing Spark to effectively ignore spark.cores.max in coarse grained Mesos mode.
2013-11-14 10:32:11 -08:00
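An illustrative sketch of the accounting being fixed; this is not the actual `CoarseMesosSchedulerBackend` code.

```scala
/** The bug described above: the acquired-cores counter was decremented on
  * executor loss but never incremented when offers were accepted, so the
  * spark.cores.max cap never took effect. */
class CoreAccounting(maxCores: Int) {
  private var totalCoresAcquired = 0

  /** Accept an offer only while we are still below the configured cap. */
  def tryAccept(offeredCores: Int): Boolean = {
    if (totalCoresAcquired + offeredCores <= maxCores) {
      totalCoresAcquired += offeredCores // the missing increment
      true
    } else {
      false
    }
  }

  /** Give the cores back when an executor is lost. */
  def onExecutorLost(cores: Int): Unit = {
    totalCoresAcquired -= cores
  }
}
```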
Reynold Xin 5a4f483652 Merge pull request #170 from liancheng/hadooprdd-doc-typo
Fixed a scaladoc typo in HadoopRDD.scala
2013-11-14 10:30:36 -08:00
Reynold Xin d76f5203af Merge pull request #171 from RIA-pierre-borckmans/master
Fixed typos in the CDH4 distributions version codes.

Nothing important, but annoying when doing a copy/paste...
2013-11-14 10:25:48 -08:00
RIA-pierre-borckmans bef398e572 Fixed typos in the CDH4 distributions version codes. 2013-11-14 11:33:48 +01:00
Lian, Cheng cc8995c8f4 Fixed a scaladoc typo in HadoopRDD.scala 2013-11-14 18:17:05 +08:00
Kay Ousterhout 5125cd3466 Don't ignore spark.cores.max when using Mesos Coarse mode 2013-11-13 23:06:17 -08:00
Matei Zaharia 2054c61a18 Merge pull request #159 from liancheng/dagscheduler-actor-refine
Migrate the daemon thread started by DAGScheduler to an Akka actor

`DAGScheduler` adopts an event queue and a daemon thread that polls it to process events sent to a `DAGScheduler`.  This is a classical actor use case.  By migrating this thread to an Akka actor, we may benefit from both cleaner code and better performance (the context-switching cost of an Akka actor is much lower than that of a native thread).

But things become a little complicated when taking existing test code into consideration.

Code in `DAGSchedulerSuite` is somewhat tightly coupled with `DAGScheduler`, and directly calls `DAGScheduler.processEvent` instead of posting event messages to `DAGScheduler`.  To minimize code change, I chose to let the actor delegate messages to `processEvent`.  Maybe this doesn't follow conventional actor usage, but I tried to keep it obviously correct.

Another tricky part is that, since `DAGScheduler` depends on the `ActorSystem` provided by its field `env`, `env` cannot be null.  But the `dagScheduler` field created in `DAGSchedulerSuite.before` was given a null `env`.  What's more, `BlockManager.blockIdsToBlockManagers` checks whether `env` is null to determine whether to run the production code or the test code (bad smell here, huh?).  I went through all callers of `BlockManager.blockIdsToBlockManagers`, and made sure that if `env != null` holds, then `blockManagerMaster == null` must also hold.  That's the logic behind `BlockManager.scala` [line 896](https://github.com/liancheng/incubator-spark/compare/dagscheduler-actor-refine?expand=1#diff-2b643ea78c1add0381754b1f47eec132L896).

Lastly, since `DAGScheduler` instances are always `start()`ed after creation, I removed the `start()` method and start the `eventProcessActor` within the constructor.
2013-11-13 16:49:55 -08:00
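A minimal sketch of the migration described above, assuming the Akka 2.2-era API Spark used at the time (`system.shutdown()`); the names are illustrative, and the real actor simply delegates each incoming event to the existing `processEvent` method.

```scala
import akka.actor.{Actor, ActorSystem, Props}

sealed trait DAGEvent
case class JobSubmitted(jobId: Int) extends DAGEvent

/** Replaces the polling daemon thread: each received event is delegated to
  * the existing processEvent logic. */
class EventProcessActor(processEvent: DAGEvent => Unit) extends Actor {
  def receive = {
    case event: DAGEvent => processEvent(event)
  }
}

object EventActorDemo {
  def main(args: Array[String]): Unit = {
    val system = ActorSystem("demo")
    // In the real change the actor is started in DAGScheduler's constructor,
    // replacing the removed start() method.
    val actor = system.actorOf(Props(new EventProcessActor(e => println(s"processing $e"))))
    actor ! JobSubmitted(1)
    Thread.sleep(500)
    system.shutdown()
  }
}
```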
Matei Zaharia 9290e5bcd2 Merge pull request #165 from NathanHowell/kerberos-master
spark-assembly.jar fails to authenticate with YARN ResourceManager

The sbt MergeStrategy for META-INF/services/ was discarding support for Kerberos, among others. This pull request changes to a merge strategy similar to sbt-assembly's default. I've also included an update to sbt-assembly 0.9.2, which has a minor fix to its zip file handling.
2013-11-13 16:48:44 -08:00
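A sketch of the kind of merge strategy described above, assuming sbt-assembly 0.9.x keys (`mergeStrategy in assembly`); the actual `SparkBuild.scala` settings may differ. The key point is that files under `META-INF/services/` are merged line by line instead of discarded, so service registrations survive assembly.

```scala
import sbtassembly.Plugin._
import AssemblyKeys._

// Merge service registration files line by line; discard manifests; keep the
// first copy of anything else.
mergeStrategy in assembly := {
  case m if m.toLowerCase.startsWith("meta-inf/services/") => MergeStrategy.filterDistinctLines
  case m if m.toLowerCase.endsWith("manifest.mf")          => MergeStrategy.discard
  case "reference.conf"                                    => MergeStrategy.concat
  case _                                                   => MergeStrategy.first
}
```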
Ahir Reddy 0ea1f8b225 Write Spark UI url to driver file on HDFS 2013-11-13 15:23:36 -08:00
Matei Zaharia 39af914b27 Merge pull request #166 from ahirreddy/simr-spark-ui
SIMR Backend Scheduler will now write Spark UI URL to HDFS, which is to be retrieved by SIMR clients
2013-11-13 08:39:05 -08:00
Matei Zaharia f49ea28d25 Merge pull request #137 from tgravescs/sparkYarnJarsHdfsRebase
Allow spark on yarn to be run from HDFS.

Allows the spark.jar, app.jar, and log4j.properties to be put into HDFS.  You can also specify the files on a different HDFS cluster and they will be copied over.  It makes sure permissions are correct and puts things into the public distributed cache so they can be reused across users when their permissions allow.  Also adds a bit of error handling for missing arguments.
2013-11-12 19:13:39 -08:00
Matei Zaharia 87f2f4e5c2 Merge pull request #153 from ankurdave/stop-spot-cluster
Enable stopping and starting a spot cluster

Clusters launched using `--spot-price` contain an on-demand master and spot slaves. Because EC2 does not support stopping spot instances, the spark-ec2 script previously could only destroy such clusters.

This pull request makes it possible to stop and restart a spot cluster.
* The `stop` command works as expected for a spot cluster: the master is stopped and the slaves are terminated.
* To start a stopped spot cluster, the user must invoke `launch --use-existing-master`. This launches fresh spot slaves but resumes the existing master.
2013-11-12 16:26:09 -08:00
Matei Zaharia b8bf04a085 Merge pull request #160 from xiajunluan/JIRA-923
Fix bug JIRA-923

Fix column sort issue in UI for JIRA-923.
https://spark-project.atlassian.net/browse/SPARK-923

Conflicts:
	core/src/main/scala/org/apache/spark/ui/jobs/StagePage.scala
	core/src/main/scala/org/apache/spark/ui/jobs/StageTable.scala
2013-11-12 16:19:50 -08:00