spark-instrumented-optimizer/core
Stephen Haberman 680f42e6cd Change defaultPartitioner to use upstream split size.
Previously it used SparkContext.defaultParallelism, which occasionally
ended up being a very bad guess. Looking at the upstream RDDs makes
better use of the available context.

Also sorted the upstream RDDs by partition count first: if we have
a hugely-partitioned RDD and a tiny-partitioned RDD, it is unlikely
we want the resulting RDD to be tiny-partitioned.
2013-02-10 02:27:03 -06:00
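The idea in this commit can be sketched as follows. This is a simplified, self-contained illustration, not Spark's actual `Partitioner.defaultPartitioner` code: upstream RDDs are modeled as bare partition counts, and `choosePartitions` is a hypothetical helper name. The point is that the result's partition count comes from the most-partitioned upstream RDD rather than from `SparkContext.defaultParallelism`.

```scala
// Hedged sketch of the commit's approach, under simplified assumptions:
// each upstream RDD is represented only by its number of partitions.
object DefaultPartitionerSketch {
  // Sort upstream partition counts descending and take the largest,
  // so joining a hugely-partitioned RDD with a tiny one does not
  // produce a tiny-partitioned result.
  def choosePartitions(upstreamPartitionCounts: Seq[Int]): Int = {
    require(upstreamPartitionCounts.nonEmpty, "need at least one upstream RDD")
    upstreamPartitionCounts.sorted(Ordering[Int].reverse).head
  }
}
```

For example, joining an RDD with 1000 partitions against one with 2 partitions would yield a 1000-partition result under this rule, whereas a fixed `defaultParallelism` on a small cluster could collapse it to a handful of partitions.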
src      Change defaultPartitioner to use upstream split size.  2013-02-10 02:27:03 -06:00
pom.xml  Merge remote-tracking branch 'base/master' into dag-sched-tests  2013-02-02 00:33:30 -08:00