spark-instrumented-optimizer/python/pyspark
Aaron Davidson 9909efc10a SPARK-1839: PySpark RDD#take() shouldn't always read from driver
This patch simply ports over the Scala implementation of RDD#take(), which reads the first partition at the driver, then estimates how many more partitions it needs to read and may launch a real job if more than one is required. (Note that SparkContext#runJob(allowLocal=true) only runs the job locally if there's 1 partition selected and no parent stages.)

Author: Aaron Davidson <aaron@databricks.com>

Closes #922 from aarondav/take and squashes the following commits:

fa06df9 [Aaron Davidson] SPARK-1839: PySpark RDD#take() shouldn't always read from driver
2014-05-31 13:04:57 -07:00
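The scan-ahead strategy described in the commit message above can be sketched in plain Python. This is an illustrative approximation, not the actual PySpark code: partitions are stand-in lists, the `take` function and the 4x/1.5x growth constants mirror the Scala heuristic only loosely, and reading a batch of partitions simulates what `SparkContext#runJob` would do on a cluster.

```python
import math


def take(partitions, num):
    """Return up to `num` items, reading as few partitions as possible.

    Hedged sketch of the scan-ahead heuristic: read one partition first,
    then extrapolate how many more partitions are needed from the number
    of items seen so far.
    """
    items = []
    total_parts = len(partitions)
    parts_scanned = 0
    while len(items) < num and parts_scanned < total_parts:
        if parts_scanned == 0:
            # First pass: read only the first partition.
            num_parts_to_try = 1
        elif not items:
            # Nothing found yet: scan ahead aggressively.
            num_parts_to_try = parts_scanned * 4
        else:
            # Extrapolate from the observed items-per-partition rate,
            # with a 1.5x safety margin (illustrative constant).
            estimate = math.ceil(1.5 * num * parts_scanned / len(items))
            num_parts_to_try = max(estimate - parts_scanned, 1)
        # Reading this batch of partitions simulates a runJob() call.
        batch = partitions[parts_scanned:parts_scanned + num_parts_to_try]
        for part in batch:
            items.extend(part[:num - len(items)])
        parts_scanned += num_parts_to_try
    return items[:num]
```

With `take([[1, 2], [3, 4], [5, 6]], 3)`, only the first two partitions are read; a leading empty partition (e.g. `take([[], [], [7]], 1)`) triggers the aggressive scan-ahead branch instead of failing.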
mllib Fix PEP8 violations in Python mllib. 2014-05-25 17:15:01 -07:00
__init__.py SPARK-1004. PySpark on YARN 2014-04-29 23:24:34 -07:00
accumulators.py Add custom serializer support to PySpark. 2013-11-10 16:45:38 -08:00
broadcast.py Fix some Python docs and make sure to unset SPARK_TESTING in Python 2013-12-29 20:15:07 -05:00
cloudpickle.py Rename top-level 'pyspark' directory to 'python' 2013-01-01 15:05:00 -08:00
conf.py [FIX] do not load defaults when testing SparkConf in pyspark 2014-05-14 14:57:17 -07:00
context.py SPARK-1839: PySpark RDD#take() shouldn't always read from driver 2014-05-31 13:04:57 -07:00
daemon.py SPARK-1579: Clean up PythonRDD and avoid swallowing IOExceptions 2014-05-07 09:48:31 -07:00
files.py Initial work to rename package to org.apache.spark 2013-09-01 14:13:13 -07:00
java_gateway.py [SPARK-1808] Route bin/pyspark through Spark submit 2014-05-16 22:34:38 -07:00
join.py Spark 1271: Co-Group and Group-By should pass Iterable[X] 2014-04-08 18:15:59 -07:00
rdd.py SPARK-1839: PySpark RDD#take() shouldn't always read from driver 2014-05-31 13:04:57 -07:00
rddsampler.py SPARK-1438 RDD.sample() make seed param optional 2014-04-24 17:27:16 -07:00
resultiterable.py Spark 1271: Co-Group and Group-By should pass Iterable[X] 2014-04-08 18:15:59 -07:00
serializers.py SPARK-1421. Make MLlib work on Python 2.6 2014-04-05 20:52:05 -07:00
shell.py [SPARK-1808] Route bin/pyspark through Spark submit 2014-05-16 22:34:38 -07:00
sql.py Python docstring update for sql.py. 2014-05-25 16:04:17 -07:00
statcounter.py Spark 1246 add min max to stat counter 2014-03-18 00:45:47 -07:00
storagelevel.py SPARK-1305: Support persisting RDD's directly to Tachyon 2014-04-04 20:38:20 -07:00
tests.py [SPARK-1549] Add Python support to spark-submit 2014-05-06 15:12:35 -07:00
worker.py Add Python includes to path before depickling broadcast values 2014-05-10 13:02:13 -07:00