Matei Zaharia
af3c9d5042
Add Apache license headers and LICENSE and NOTICE files
2013-07-16 17:21:33 -07:00
Matei Zaharia
94871e4703
Merge pull request #655 from tgravescs/master
...
Add support for running Spark on Yarn on a secure Hadoop Cluster
2013-07-06 15:26:19 -07:00
Y.CORP.YAHOO.COM\tgraves
923cf92900
Rework from pull request. Removed --user option from Spark on Yarn Client, made the use of the JAVA_HOME environment
...
variable conditional on whether it's set, and created addCredentials in each of the SparkHadoopUtil classes
to only add the credentials when the profile is hadoop2-yarn.
2013-07-02 21:18:59 -05:00
Matei Zaharia
78ffe164b3
Clone the zero value for each key in foldByKey
...
The old version reused the object within each task, leading to
overwriting of the object when a mutable type is used, which is expected
to be common in fold.
Conflicts:
core/src/test/scala/spark/ShuffleSuite.scala
2013-06-23 10:26:53 -07:00
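The zero-value fix above is easiest to see with a mutable accumulator. A minimal plain-Python sketch of the semantics (not Spark code; `fold_by_key` is a hypothetical stand-in):

```python
import copy

def fold_by_key(pairs, zero, op):
    """Fold values per key, cloning the zero value for each key so a
    mutable zero (e.g. a list) is never shared between keys."""
    acc = {}
    for k, v in pairs:
        if k not in acc:
            acc[k] = copy.deepcopy(zero)  # clone, do not reuse the object
        acc[k] = op(acc[k], v)
    return acc

pairs = [("a", 1), ("b", 2), ("a", 3)]
result = fold_by_key(pairs, [], lambda xs, v: xs + [v])
# With a single shared zero list mutated in place, "a" and "b" would
# see each other's values; cloning per key keeps them separate.
```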
Thomas Graves
bad51c7cb4
upmerge with latest mesos/spark master and fix hbase compile with hadoop2-yarn profile
2013-06-19 14:39:13 -05:00
Thomas Graves
75d78c7ac9
Add support for Spark on Yarn on a secure Hadoop cluster
2013-06-19 11:18:42 -05:00
Reynold Xin
6738178d0d
SPARK-772: groupByKey should disable map side combine.
2013-06-13 23:59:42 -07:00
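Map-side combining only pays off when it shrinks the data before the shuffle; for groupByKey every value survives grouping, so pre-grouping per mapper ships the same number of elements and just adds hashing overhead. A hypothetical counting sketch of that argument:

```python
from collections import defaultdict

def shuffled_records(pairs, map_side_combine):
    """Count individual values that must cross the shuffle boundary."""
    if not map_side_combine:
        return len(pairs)
    grouped = defaultdict(list)
    for k, v in pairs:
        grouped[k].append(v)
    # Grouping first still ships every value; nothing was reduced.
    return sum(len(vs) for vs in grouped.values())

pairs = [("a", i) for i in range(5)] + [("b", i) for i in range(3)]
naive = shuffled_records(pairs, map_side_combine=False)
combined = shuffled_records(pairs, map_side_combine=True)
# Both paths ship all 8 values; combining bought nothing for groupByKey.
```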
Patrick Wendell
df592192e7
Monads FTW
2013-06-09 18:09:24 -07:00
Patrick Wendell
d1bbcebae5
Adding compression to Hadoop save functions
2013-06-09 11:39:35 -07:00
Reynold Xin
d3586ef438
Merge branch 'blockmanager' of github.com:rxin/spark into blockmanager
...
Conflicts:
core/src/main/scala/spark/storage/DiskStore.scala
2013-04-29 15:44:18 -07:00
Reynold Xin
aa618ed2a2
Allow changing the serializer on a per shuffle basis.
2013-04-24 14:52:49 -07:00
Mridul Muralidharan
eb7e95e833
Commit job to persist files
2013-04-16 02:56:36 +05:30
Mridul Muralidharan
19652a44be
Fix issue with FileSuite failing
2013-04-15 19:16:36 +05:30
Mridul Muralidharan
6798a09df8
Add support for building against hadoop2-yarn : adding new maven profile for it
2013-04-07 17:47:38 +05:30
Mark Hamstra
32979b5e7d
whitespace
2013-03-16 13:36:46 -07:00
Mark Hamstra
ca9f81e8fc
refactor foldByKey to use combineByKey
2013-03-16 13:31:01 -07:00
Mark Hamstra
1fb192ef40
Merge branch 'master' of https://github.com/mesos/spark into foldByKey
2013-03-16 12:17:13 -07:00
Mark Hamstra
857010392b
Fuller implementation of foldByKey
2013-03-15 10:56:05 -07:00
Mark Hamstra
16a4ca4537
restrict V type of foldByKey in order to retain ClassManifest; added foldByKey to Java API and test
2013-03-14 13:58:37 -07:00
Mark Hamstra
b1422cbdd5
added foldByKey
2013-03-14 12:59:58 -07:00
Stephen Haberman
7d8bb4df3a
Allow subtractByKey's other argument to have a different value type.
2013-03-14 14:44:15 -05:00
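The change above only relaxes a type bound: only the other dataset's keys matter, so its value type can differ freely. A plain-Python sketch of subtractByKey's semantics (hypothetical helper, not Spark code):

```python
def subtract_by_key(pairs, other):
    """Keep (k, v) pairs whose key does not appear in `other`.
    Only the keys of `other` are consulted, so its values may be
    of any type, unrelated to the values in `pairs`."""
    other_keys = {k for k, _ in other}
    return [(k, v) for k, v in pairs if k not in other_keys]

left = [("a", 1), ("b", 2), ("c", 3)]
right = [("b", "ignored"), ("d", None)]  # values of a different type
remaining = subtract_by_key(left, right)
# → [("a", 1), ("c", 3)]
```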
Stephen Haberman
4632c45af1
Finished subtractByKey.
2013-03-14 10:35:34 -05:00
Stephen Haberman
63fe225587
Simplify SubtractedRDD in preparation for subtractByKey.
2013-03-13 17:17:34 -05:00
Stephen Haberman
44032bc476
Merge branch 'master' into bettersplits
...
Conflicts:
core/src/main/scala/spark/RDD.scala
core/src/main/scala/spark/scheduler/cluster/StandaloneSchedulerBackend.scala
core/src/test/scala/spark/ShuffleSuite.scala
2013-02-24 22:08:14 -06:00
Matei Zaharia
06e5e6627f
Renamed "splits" to "partitions"
2013-02-17 22:13:26 -08:00
Matei Zaharia
3260b6120e
Merge pull request #470 from stephenh/morek
...
Make CoGroupedRDDs explicitly have the same key type.
2013-02-16 16:38:38 -08:00
Stephen Haberman
ae2234687d
Make CoGroupedRDDs explicitly have the same key type.
2013-02-16 13:10:31 -06:00
Stephen Haberman
4328873294
Add assertion about dependencies.
2013-02-16 01:16:40 -06:00
Stephen Haberman
c34b8ad2c5
Avoid a shuffle if combineByKey is passed the same partitioner.
2013-02-16 00:54:03 -06:00
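The optimization above boils down to one check: if the input is already laid out by the requested partitioner, every key's values are co-located and the combine can run locally with no shuffle. A schematic sketch under assumed names (the shuffle path is elided):

```python
def combine_by_key(partitions, partitioner, requested_partitioner, merge):
    """`partitions` is a list of lists of (k, v) pairs, already laid
    out by `partitioner` (or None if the layout is unknown)."""
    if partitioner is not None and partitioner == requested_partitioner:
        # Same partitioner: each key lives in exactly one partition,
        # so combine locally and skip the shuffle entirely.
        return [merge(part) for part in partitions]
    # Otherwise a shuffle would be needed to co-locate keys (elided).
    raise NotImplementedError("shuffle path elided in this sketch")

def merge(part):
    out = {}
    for k, v in part:
        out[k] = out.get(k, 0) + v
    return out

parts = [[("a", 1), ("a", 2)], [("b", 5)]]  # already hash-partitioned
combined = combine_by_key(parts, "hash-2", "hash-2", merge)
# → [{"a": 3}, {"b": 5}]
```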
Stephen Haberman
4281e579c2
Update more javadocs.
2013-02-16 00:45:03 -06:00
Stephen Haberman
6cd68c31cb
Update default.parallelism docs, have StandaloneSchedulerBackend use it.
...
Only brand-new RDDs (e.g. parallelize and makeRDD) now use default
parallelism; everything else uses its largest parent's partitioner
or partition size.
2013-02-16 00:29:11 -06:00
Stephen Haberman
680f42e6cd
Change defaultPartitioner to use upstream split size.
...
Previously it used the SparkContext.defaultParallelism, which occasionally
ended up being a very bad guess. Looking at upstream RDDs seems to make
better use of the context.
Also sorted the upstream RDDs by partition size first, since if we have
a hugely-partitioned RDD and a tiny-partitioned RDD, it is unlikely
we want the resulting RDD to be tiny-partitioned.
2013-02-10 02:27:03 -06:00
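The heuristic described above amounts to: order the parent RDDs by partition count, descending, and take the largest parent's layout rather than a global default. A hypothetical stand-alone version of the count selection:

```python
def default_partition_count(parents, fallback):
    """Pick a partition count from upstream RDDs rather than a global
    default; prefer the most-partitioned parent so a huge RDD combined
    with a tiny one does not collapse to the tiny one's layout."""
    counts = sorted((p["num_partitions"] for p in parents), reverse=True)
    return counts[0] if counts else fallback

parents = [{"num_partitions": 2}, {"num_partitions": 1000}]
chosen = default_partition_count(parents, fallback=8)
# → 1000, not the context-wide default of 8
```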
Matei Zaharia
8b3041c723
Reduced the memory usage of reduce and similar operations
...
These operations used to wait for all the results to be available in an
array on the driver program before merging them. They now merge values
incrementally as they arrive.
2013-02-01 15:38:42 -08:00
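The memory saving comes from folding each partial result into a single accumulator as it arrives, instead of first materializing all of them in an array. A minimal sketch, with a generator standing in for results arriving from executors (hypothetical names):

```python
def reduce_incremental(results, op):
    """Merge partial results one at a time; peak memory is one
    accumulator plus one in-flight result, not the whole array."""
    it = iter(results)
    acc = next(it)
    for r in it:           # each result is merged as it "arrives"
        acc = op(acc, r)
    return acc

partials = (x * x for x in range(1, 5))  # 1, 4, 9, 16, arriving lazily
total = reduce_incremental(partials, lambda a, b: a + b)
# → 30
```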
Matei Zaharia
64ba6a8c2c
Simplify checkpointing code and RDD class a little:
...
- RDD's getDependencies and getSplits methods are now guaranteed to be
called only once, so subclasses can safely do computation in there
without worrying about caching the results.
- The management of a "splits_" variable that is cleared out when we
checkpoint an RDD is now done in the RDD class.
- A few of the RDD subclasses are simpler.
- CheckpointRDD's compute() method no longer assumes that it is given a
CheckpointRDDSplit -- it can work just as well on a split from the
original RDD, because it only looks at its index. This is important
because things like UnionRDD and ZippedRDD remember the parent's
splits as part of their own and wouldn't work on checkpointed parents.
- RDD.iterator can now reuse cached data if an RDD is computed before it
is checkpointed. It seems like it wouldn't do this before (it always
called iterator() on the CheckpointRDD, which read from HDFS).
2013-01-28 22:30:12 -08:00
Stephen Haberman
2d8218b871
Remove unneeded/now-broken saveAsNewAPIHadoopFile overload.
2013-01-21 20:00:27 -06:00
Stephen Haberman
6ded481999
Merge branch 'master' into hadoopconf
...
Conflicts:
core/src/main/scala/spark/SparkContext.scala
core/src/main/scala/spark/api/java/JavaSparkContext.scala
2013-01-21 12:56:48 -06:00
Stephen Haberman
69a417858b
Also use hadoopConfiguration in newAPI methods.
2013-01-21 12:42:11 -06:00
Tathagata Das
cd1521cfdb
Merge branch 'master' into streaming
...
Conflicts:
core/src/main/scala/spark/rdd/CoGroupedRDD.scala
core/src/main/scala/spark/rdd/FilteredRDD.scala
docs/_layouts/global.html
docs/index.md
run
2013-01-15 12:08:51 -08:00
Tathagata Das
0dbd411a56
Added documentation for PairDStreamFunctions.
2013-01-13 21:08:35 -08:00
Stephen Haberman
3e6519a36e
Use hadoopConfiguration for default JobConf in PairRDDFunctions.
2013-01-11 11:24:20 -06:00
Stephen Haberman
8d57c78c83
Add PairRDDFunctions.keys and values.
2013-01-05 12:04:01 -06:00
Tathagata Das
d34dba25c2
Merge branch 'mesos' into dev-merge
2013-01-01 15:48:39 -08:00
Josh Rosen
f803953998
Raise exception when hashing Java arrays (SPARK-597)
2012-12-31 20:20:11 -08:00
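Java arrays hash by identity, so two equal arrays used as shuffle keys silently land in different groups; the fix referenced above is to fail fast instead. A Python analogue of such a fail-fast check (lists stand in for Java arrays here; the helper is hypothetical):

```python
def check_key(key):
    """Fail fast instead of silently mis-grouping: reject key types
    whose hashing does not respect value equality (here, lists stand
    in for Java arrays, which hash by identity)."""
    if isinstance(key, (list, bytearray)):
        raise TypeError(type(key).__name__ +
                        " cannot be used as a key: its hash does not "
                        "match value equality")
    return key

check_key("ok")             # ordinary values pass through unchanged
rejected = False
try:
    check_key([1, 2, 3])
except TypeError:
    rejected = True          # array-like keys are refused up front
```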
Tathagata Das
7c33f76291
Merge branch 'mesos' into dev-merge
2012-12-26 19:19:07 -08:00
Tathagata Das
836042bb9f
Merge branch 'dev-checkpoint' of github.com:radlab/spark into dev-merge
...
Conflicts:
core/src/main/scala/spark/ParallelCollection.scala
core/src/main/scala/spark/RDD.scala
core/src/main/scala/spark/rdd/BlockRDD.scala
core/src/main/scala/spark/rdd/CartesianRDD.scala
core/src/main/scala/spark/rdd/CoGroupedRDD.scala
core/src/main/scala/spark/rdd/CoalescedRDD.scala
core/src/main/scala/spark/rdd/FilteredRDD.scala
core/src/main/scala/spark/rdd/FlatMappedRDD.scala
core/src/main/scala/spark/rdd/GlommedRDD.scala
core/src/main/scala/spark/rdd/HadoopRDD.scala
core/src/main/scala/spark/rdd/MapPartitionsRDD.scala
core/src/main/scala/spark/rdd/MapPartitionsWithSplitRDD.scala
core/src/main/scala/spark/rdd/MappedRDD.scala
core/src/main/scala/spark/rdd/PipedRDD.scala
core/src/main/scala/spark/rdd/SampledRDD.scala
core/src/main/scala/spark/rdd/ShuffledRDD.scala
core/src/main/scala/spark/rdd/UnionRDD.scala
core/src/main/scala/spark/scheduler/ResultTask.scala
core/src/test/scala/spark/CheckpointSuite.scala
2012-12-26 19:09:01 -08:00
Mark Hamstra
903f3518df
fall back to filter-map-collect when calling lookup() on an RDD without a partitioner
2012-12-24 13:18:45 -08:00
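The fallback above is the difference between computing one partition and scanning all of them. A schematic plain-Python sketch (hypothetical names; the partitioner is a deterministic toy function for the example):

```python
def lookup(partitions, key, partitioner=None):
    """With a partitioner, only the partition that can hold `key` is
    searched; without one, fall back to filtering every partition
    (the filter-map-collect path)."""
    if partitioner is not None:
        idx = partitioner(key, len(partitions))
        candidates = [partitions[idx]]   # exactly one partition touched
    else:
        candidates = partitions          # full scan of all partitions
    return [v for part in candidates for k, v in part if k == key]

part_fn = lambda k, n: ord(k[0]) % n     # toy deterministic partitioner
parts = [[("b", 2), ("b", 3)], [("a", 1)]]  # laid out by part_fn
fast = lookup(parts, "b", part_fn)
slow = lookup(parts, "b")
# Both return [2, 3]; only the first avoids scanning every partition.
```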
Reynold Xin
eac566a7f4
Merge branch 'master' of github.com:mesos/spark into dev
...
Conflicts:
core/src/main/scala/spark/MapOutputTracker.scala
core/src/main/scala/spark/PairRDDFunctions.scala
core/src/main/scala/spark/ParallelCollection.scala
core/src/main/scala/spark/RDD.scala
core/src/main/scala/spark/rdd/BlockRDD.scala
core/src/main/scala/spark/rdd/CartesianRDD.scala
core/src/main/scala/spark/rdd/CoGroupedRDD.scala
core/src/main/scala/spark/rdd/CoalescedRDD.scala
core/src/main/scala/spark/rdd/FilteredRDD.scala
core/src/main/scala/spark/rdd/FlatMappedRDD.scala
core/src/main/scala/spark/rdd/GlommedRDD.scala
core/src/main/scala/spark/rdd/HadoopRDD.scala
core/src/main/scala/spark/rdd/MapPartitionsRDD.scala
core/src/main/scala/spark/rdd/MapPartitionsWithSplitRDD.scala
core/src/main/scala/spark/rdd/MappedRDD.scala
core/src/main/scala/spark/rdd/PipedRDD.scala
core/src/main/scala/spark/rdd/SampledRDD.scala
core/src/main/scala/spark/rdd/ShuffledRDD.scala
core/src/main/scala/spark/rdd/UnionRDD.scala
core/src/main/scala/spark/storage/BlockManager.scala
core/src/main/scala/spark/storage/BlockManagerId.scala
core/src/main/scala/spark/storage/BlockManagerMaster.scala
core/src/main/scala/spark/storage/StorageLevel.scala
core/src/main/scala/spark/util/MetadataCleaner.scala
core/src/main/scala/spark/util/TimeStampedHashMap.scala
core/src/test/scala/spark/storage/BlockManagerSuite.scala
run
2012-12-20 14:53:40 -08:00
Tathagata Das
5184141936
Introduced getSplits, getDependencies, and getPreferredLocations in RDD and RDDCheckpointData.
2012-12-18 13:30:53 -08:00
Reynold Xin
eacb98e900
SPARK-635: Pass a TaskContext object to compute() interface and use that
...
to close Hadoop input stream.
2012-12-13 15:41:53 -08:00
Tathagata Das
0dcd770fdc
Added checkpointing support to all RDDs, along with CheckpointSuite to test checkpointing in them.
2012-10-30 16:09:37 -07:00