spark-instrumented-optimizer/python/pyspark
Davies Liu fb0db77242 [SPARK-2871] [PySpark] add zipWithIndex() and zipWithUniqueId()
RDD.zipWithIndex()

        Zips this RDD with its element indices.

        The ordering is first based on the partition index and then the
        ordering of items within each partition. So the first item in
        the first partition gets index 0, and the last item in the last
        partition receives the largest index.

        This method needs to trigger a Spark job when this RDD contains
        more than one partition.

        >>> sc.parallelize(range(4), 2).zipWithIndex().collect()
        [(0, 0), (1, 1), (2, 2), (3, 3)]
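
The extra job exists because an element's global index depends on the sizes of all earlier partitions, which must be counted first. A minimal sketch of that idea (an illustrative helper, not necessarily the actual implementation):

        def zip_with_index(rdd):
            starts = [0]
            if rdd.getNumPartitions() > 1:
                # Counting the elements per partition is the extra Spark job.
                sizes = rdd.mapPartitions(lambda it: [sum(1 for _ in it)]).collect()
                # Each partition starts where the previous ones left off.
                for size in sizes[:-1]:
                    starts.append(starts[-1] + size)

            def attach(split, iterator):
                # Number this partition's items from its precomputed offset.
                for i, item in enumerate(iterator, starts[split]):
                    yield item, i

            return rdd.mapPartitionsWithIndex(attach)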

RDD.zipWithUniqueId()

        Zips this RDD with generated unique Long ids.

        Items in the kth partition will get ids k, n+k, 2*n+k, ..., where
        n is the number of partitions. So the ids may have gaps, but this
        method won't trigger a Spark job, which is different from
        L{zipWithIndex}.

        >>> sc.parallelize(range(4), 2).zipWithUniqueId().collect()
        [(0, 0), (2, 1), (1, 2), (3, 3)]
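
No job is needed here because each partition can assign its ids independently: partition k only needs its own index and the partition count n, both known up front. A minimal sketch of that scheme (again an illustrative helper, not necessarily the actual implementation):

        def zip_with_unique_id(rdd):
            # n is known without running a job, so the kth partition can
            # hand out ids k, n + k, 2*n + k, ... on its own.
            n = rdd.getNumPartitions()

            def attach(split, iterator):
                for i, item in enumerate(iterator):
                    yield item, i * n + split

            return rdd.mapPartitionsWithIndex(attach)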

Author: Davies Liu <davies.liu@gmail.com>

Closes #2092 from davies/zipWith and squashes the following commits:

cebe5bf [Davies Liu] improve test cases, reverse the order of index
0d2a128 [Davies Liu] add zipWithIndex() and zipWithUniqueId()
2014-08-24 21:16:05 -07:00
mllib [SPARK-3136][MLLIB] Create Java-friendly methods in RandomRDDs 2014-08-19 16:06:48 -07:00
__init__.py [SPARK-2724] Python version of RandomRDDGenerators 2014-07-31 20:32:57 -07:00
accumulators.py [SPARK-2627] [PySpark] have the build enforce PEP 8 automatically 2014-08-06 12:58:24 -07:00
broadcast.py [SPARK-1065] [PySpark] improve supporting for large broadcast 2014-08-16 16:59:34 -07:00
cloudpickle.py [SPARK-791] [PySpark] fix pickle itemgetter with cloudpickle 2014-07-29 01:02:18 -07:00
conf.py [SPARK-2627] [PySpark] have the build enforce PEP 8 automatically 2014-08-06 12:58:24 -07:00
context.py [SPARK-1065] [PySpark] improve supporting for large broadcast 2014-08-16 16:59:34 -07:00
daemon.py [SPARK-2898] [PySpark] fix bugs in daemon.py 2014-08-10 13:00:38 -07:00
files.py [SPARK-2627] [PySpark] have the build enforce PEP 8 automatically 2014-08-06 12:58:24 -07:00
java_gateway.py [SPARK-3140] Clarify confusing PySpark exception message 2014-08-20 17:07:39 -07:00
join.py [SPARK-2470] PEP8 fixes to PySpark 2014-07-21 22:30:53 -07:00
rdd.py [SPARK-2871] [PySpark] add zipWithIndex() and zipWithUniqueId() 2014-08-24 21:16:05 -07:00
rddsampler.py [SPARK-2627] [PySpark] have the build enforce PEP 8 automatically 2014-08-06 12:58:24 -07:00
resultiterable.py [SPARK-2627] [PySpark] have the build enforce PEP 8 automatically 2014-08-06 12:58:24 -07:00
serializers.py [SPARK-2790] [PySpark] fix zip with serializers which have different batch sizes. 2014-08-19 14:46:32 -07:00
shell.py [SPARK-2470] PEP8 fixes to PySpark 2014-07-21 22:30:53 -07:00
shuffle.py [SPARK-2974] [SPARK-2975] Fix two bugs related to spark.local.dirs 2014-08-19 22:42:50 -07:00
sql.py [SQL] Using safe floating-point numbers in doctest 2014-08-16 11:26:51 -07:00
statcounter.py StatCounter on NumPy arrays [PYSPARK][SPARK-2012] 2014-08-01 22:33:25 -07:00
storagelevel.py [SPARK-2627] [PySpark] have the build enforce PEP 8 automatically 2014-08-06 12:58:24 -07:00
tests.py [SPARK-2790] [PySpark] fix zip with serializers which have different batch sizes. 2014-08-19 14:46:32 -07:00
worker.py [SPARK-3114] [PySpark] Fix Python UDFs in Spark SQL. 2014-08-18 20:42:19 -07:00