### What changes were proposed in this pull request?
This PR proposes to move the distributed-sequence index implementation into the SQL plan, so that it can leverage optimizations such as column pruning.
```python
import pyspark.pandas as ps
ps.set_option('compute.default_index_type', 'distributed-sequence')
ps.range(10).id.value_counts().to_frame().spark.explain()
```
**Before:**
```bash
== Physical Plan ==
AdaptiveSparkPlan isFinalPlan=false
+- Sort [count#51L DESC NULLS LAST], true, 0
   +- Exchange rangepartitioning(count#51L DESC NULLS LAST, 200), ENSURE_REQUIREMENTS, [id=#70]
      +- HashAggregate(keys=[id#37L], functions=[count(1)], output=[__index_level_0__#48L, count#51L])
         +- Exchange hashpartitioning(id#37L, 200), ENSURE_REQUIREMENTS, [id=#67]
            +- HashAggregate(keys=[id#37L], functions=[partial_count(1)], output=[id#37L, count#63L])
               +- Project [id#37L]
                  +- Filter atleastnnonnulls(1, id#37L)
                     +- Scan ExistingRDD[__index_level_0__#36L,id#37L]
                        # ^^^ Base DataFrame created by the output RDD from zipWithIndex (and checkpointed)
```
**After:**
```bash
== Physical Plan ==
AdaptiveSparkPlan isFinalPlan=false
+- Sort [count#275L DESC NULLS LAST], true, 0
   +- Exchange rangepartitioning(count#275L DESC NULLS LAST, 200), ENSURE_REQUIREMENTS, [id=#174]
      +- HashAggregate(keys=[id#258L], functions=[count(1)])
         +- HashAggregate(keys=[id#258L], functions=[partial_count(1)])
            +- Filter atleastnnonnulls(1, id#258L)
               +- Range (0, 10, step=1, splits=16)
                  # ^^^ Removed the Spark job execution for `zipWithIndex`
```
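For context, the pre-existing RDD-based default index works conceptually like the sketch below. This is a simplification for illustration, not the removed code itself: `zipWithIndex` launches an extra Spark job just to compute per-partition offsets, and its output becomes a pre-materialized base DataFrame that the SQL optimizer cannot see through.
```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Conceptual sketch of the old RDD-based default index: zipWithIndex
# runs an extra Spark job to compute partition offsets, then prepends
# a global sequential index to every row.
sdf = spark.range(10)
indexed = (
    sdf.rdd.zipWithIndex()
    .map(lambda pair: (pair[1],) + tuple(pair[0]))
    .toDF(["__index_level_0__"] + sdf.columns)
)
indexed.show()
```
With this change, the same sequence is produced inside the SQL plan itself, so optimizations such as column pruning apply and the plan bottoms out at `Range` instead of a pre-materialized `Scan ExistingRDD`.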
### Why are the changes needed?
To leverage the optimizations of the SQL engine and to avoid the unnecessary shuffle incurred when creating the default index.
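As a quick illustration (the option name is taken from the example above), the resulting index is still a gap-free global sequence:
```python
import pyspark.pandas as ps

# Opt in to the distributed-sequence default index for this session.
ps.set_option('compute.default_index_type', 'distributed-sequence')

psdf = ps.range(5)
# The sequence is now computed as part of the SQL plan when the query
# runs, rather than by an up-front zipWithIndex job.
print(psdf.index.to_numpy())  # [0 1 2 3 4]
```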
### Does this PR introduce _any_ user-facing change?
No.
### How was this patch tested?
Unit tests were added. This PR also runs all existing pandas-API-on-Spark unit tests with the default index implementation switched to `distributed-sequence`.
Closes #33807 from HyukjinKwon/SPARK-36559.
Authored-by: Hyukjin Kwon <gurwls223@apache.org>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
(cherry picked from commit