spark-instrumented-optimizer/python
Davies Liu f11288d527 [SPARK-6886] [PySpark] fix big closure with shuffle
Currently, the broadcast object created for a big closure has the same life cycle as the RDD object in Python. For multi-stage jobs, a PythonRDD is created in the JVM while the RDD object in Python may be garbage collected, so the broadcast can be destroyed in the JVM before the PythonRDD is done with it.
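
A minimal sketch of the failure mode, assuming a local SparkContext; the 200000-element list is an illustrative size meant to push the pickled closure past PySpark's internal broadcast threshold, not an exact cutoff:

```python
from pyspark import SparkContext

sc = SparkContext("local[2]", "big-closure-shuffle")

# Large captured data: PySpark ships closures above an internal size
# threshold to executors as a Broadcast instead of inlining them.
data = list(range(200000))

# groupByKey() introduces a shuffle, making this a multi-stage job.
# Before this fix, once the Python-side RDD object was garbage
# collected, the broadcast backing its closure could be destroyed in
# the JVM while the PythonRDD still needed it for a later stage.
rdd = sc.parallelize(range(100), 4) \
        .map(lambda x: (x % 3, data[x])) \
        .groupByKey() \
        .mapValues(sum)
print(rdd.collect())
```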

This PR changes the code to use the PythonRDD to track the lifecycle of the broadcast object. It also refactors getNumPartitions() to avoid unnecessary creation of a PythonRDD, which can be expensive.
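
The getNumPartitions() part of the change can be sketched as an override on the pipelined RDD wrapper; `_prev_jrdd` here refers to the parent JavaRDD attribute in PySpark's rdd.py, and this is a sketch of the approach rather than the exact diff:

```python
def getNumPartitions(self):
    # A pipelined map() does not change partitioning, so the parent
    # JavaRDD already knows the partition count. Asking it directly
    # avoids building the PythonRDD (command pickling and, for big
    # closures, broadcast creation) just to answer a metadata query.
    return self._prev_jrdd.partitions().size()
```

Before this refactor, the query went through the fully constructed PythonRDD, so even counting partitions could trigger closure serialization.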

cc JoshRosen

Author: Davies Liu <davies@databricks.com>

Closes #5496 from davies/big_closure and squashes the following commits:

9a0ea4c [Davies Liu] fix big closure with shuffle
2015-04-15 12:58:02 -07:00
docs [SPARK-6264] [MLLIB] Support FPGrowth algorithm in Python API 2015-04-09 15:10:10 -07:00
lib [SPARK-2305] [PySpark] Update Py4J to version 0.8.2.1 2014-07-29 19:02:06 -07:00
pyspark [SPARK-6886] [PySpark] fix big closure with shuffle 2015-04-15 12:58:02 -07:00
test_support [SPARK-3634] [PySpark] User's module should take precedence over system modules 2014-09-24 12:10:09 -07:00
.gitignore [SPARK-3946] gitignore in /python includes wrong directory 2014-10-14 14:09:39 -07:00
run-tests [SPARK-6211][Streaming] Add Python Kafka API unit test 2015-04-09 23:14:24 -07:00