spark-instrumented-optimizer/python/pyspark
Hyukjin Kwon 8d8b4aabc3 [SPARK-36710][PYTHON] Support new typing syntax in function apply APIs in pandas API on Spark
### What changes were proposed in this pull request?

This PR adds support for the new typing syntax introduced in https://github.com/apache/spark/pull/33954 to the function apply APIs in the pandas API on Spark. Namely, users can now specify the index type and name as below:

```python
import pandas as pd
import pyspark.pandas as ps
# The annotation declares an int index and two int columns; since the
# fields are unnamed, the result columns default to c0, c1, ...
def transform(pdf) -> pd.DataFrame[int, [int, int]]:
    pdf['A'] = pdf.id + 1
    return pdf

ps.range(5).pandas_on_spark.apply_batch(transform)
```

```
   c0  c1
0   0   1
1   1   2
2   2   3
3   3   4
4   4   5
```

```python
import pandas as pd
import pyspark.pandas as ps
# The annotation names the index ("index") and the two int columns
# ("a" and "b"), overriding the column names produced in the function.
def transform(pdf) -> pd.DataFrame[("index", int), [("a", int), ("b", int)]]:
    pdf['A'] = pdf.id * pdf.id
    return pdf

ps.range(5).pandas_on_spark.apply_batch(transform)
```

```
       a   b
index
0      0   0
1      1   1
2      2   4
3      3   9
4      4  16
```

Again, this syntax remains experimental and is non-standard with respect to Python typing. We should migrate to proper type hints once pandas supports them, as `numpy.typing` does.

### Why are the changes needed?

The rationale is described in https://github.com/apache/spark/pull/33954: the annotation avoids unnecessary computation for the default index and for schema inference.
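
For context, a minimal sketch of what the annotation saves (the `plus_one` function and the frame here are illustrative, not from the PR):

```python
import pandas as pd
import pyspark.pandas as ps

psdf = ps.range(5)

# Without a return type annotation, pandas-on-Spark runs the function
# against a sampled pandas DataFrame to infer the output schema and
# falls back to a default index; both add extra computation.
psdf.pandas_on_spark.apply_batch(lambda pdf: pdf + 1)

# With the annotation, the index and column types are declared up
# front, so neither the inference pass nor the default index is needed.
def plus_one(pdf) -> pd.DataFrame[("index", int), [("id", int)]]:
    return pdf + 1

psdf.pandas_on_spark.apply_batch(plus_one)
```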

### Does this PR introduce _any_ user-facing change?

Yes, this PR affects the following APIs:

- `DataFrame.apply(..., axis=1)`
- `DataFrame.groupby.apply(...)`
- `DataFrame.pandas_on_spark.transform_batch(...)`
- `DataFrame.pandas_on_spark.apply_batch(...)`

Now they can specify the index type and name with the new syntax below:

```
DataFrame[index_type, [type, ...]]
DataFrame[(index_name, index_type), [(name, type), ...]]
DataFrame[dtype instance, dtypes instance]
DataFrame[(index_name, index_type), zip(names, types)]
```
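
As a minimal sketch of the last two forms (the frame and the `identity` function are illustrative), the annotation can reuse the input frame's own column names and dtype instances:

```python
import pandas as pd
import pyspark.pandas as ps

psdf = ps.DataFrame({"a": [1, 2, 3], "b": [4.0, 5.0, 6.0]})

# Reuse the input frame's column names and dtype instances in the
# annotation, paired with an explicitly named and typed index.
names = list(psdf.columns)
types = list(psdf.dtypes)

def identity(pdf) -> pd.DataFrame[("index", int), zip(names, types)]:
    return pdf

psdf.pandas_on_spark.apply_batch(identity)
```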

### How was this patch tested?

Manually tested, and unit tests were added.

Closes #34007 from HyukjinKwon/SPARK-36710.

Authored-by: Hyukjin Kwon <gurwls223@apache.org>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
2021-09-20 10:37:06 +09:00
cloudpickle [SPARK-33983][PYTHON] Update cloudpickle to v1.6.0 2021-01-04 10:36:31 -08:00
ml [SPARK-36578][ML] UnivariateFeatureSelector API doc improvement 2021-08-26 21:16:49 -07:00
mllib [SPARK-36560][PYTHON][INFRA] Deflake PySpark coverage job 2021-08-24 11:08:43 +09:00
pandas [SPARK-36710][PYTHON] Support new typing syntax in function apply APIs in pandas API on Spark 2021-09-20 10:37:06 +09:00
resource [SPARK-32320][PYSPARK] Remove mutable default arguments 2020-12-08 09:35:36 +08:00
sql [SPARK-36751][SQL][PYTHON][R] Add bit/octet_length APIs to Scala, Python and R 2021-09-15 16:27:13 +09:00
streaming [SPARK-36092][INFRA][BUILD][PYTHON] Migrate to GitHub Actions with Codecov from Jenkins 2021-08-01 21:37:19 +09:00
testing [SPARK-35599][PYTHON] Adjust check_exact parameter for older pd.testing 2021-06-07 11:12:49 +09:00
tests [SPARK-36173][CORE] Support getting CPU number in TaskContext 2021-08-04 21:14:01 -05:00
__init__.py [SPARK-35303][PYTHON] Enable pinned thread mode by default 2021-06-18 12:02:29 +09:00
__init__.pyi [SPARK-35303][PYTHON] Enable pinned thread mode by default 2021-06-18 12:02:29 +09:00
_globals.py [SPARK-23328][PYTHON] Disallow default value None in na.replace/replace when 'to_replace' is not a dictionary 2018-02-09 14:21:10 +08:00
_typing.pyi [SPARK-32714][PYTHON] Initial pyspark-stubs port 2020-09-24 14:15:36 +09:00
accumulators.py [SPARK-32194][PYTHON] Use proper exception classes instead of plain Exception 2021-05-26 11:54:40 +09:00
accumulators.pyi [SPARK-33002][PYTHON] Remove non-API annotations 2020-10-07 19:53:59 +09:00
broadcast.py [SPARK-32194][PYTHON] Use proper exception classes instead of plain Exception 2021-05-26 11:54:40 +09:00
broadcast.pyi [SPARK-33457][PYTHON] Adjust mypy configuration 2020-11-25 09:27:04 +09:00
conf.py [SPARK-32194][PYTHON] Use proper exception classes instead of plain Exception 2021-05-26 11:54:40 +09:00
conf.pyi [SPARK-32714][PYTHON] Initial pyspark-stubs port 2020-09-24 14:15:36 +09:00
context.py [SPARK-35938][PYTHON] Add deprecation warning for Python 3.6 2021-07-01 09:32:25 +09:00
context.pyi [SPARK-33457][PYTHON] Adjust mypy configuration 2020-11-25 09:27:04 +09:00
daemon.py [SPARK-26175][PYTHON] Redirect the standard input of the forked child to devnull in daemon 2019-07-31 09:10:24 +09:00
files.py [SPARK-28206][PYTHON] Remove the legacy Epydoc in PySpark API documentation 2019-07-05 10:08:22 -07:00
files.pyi [SPARK-32714][PYTHON] Initial pyspark-stubs port 2020-09-24 14:15:36 +09:00
find_spark_home.py [SPARK-32017][PYTHON][FOLLOW-UP] Rename HADOOP_VERSION to PYSPARK_HADOOP_VERSION in pip installation option 2021-01-05 17:21:32 +09:00
install.py [SPARK-33254][PYTHON][DOCS] Migration to NumPy documentation style in Core (pyspark.*, pyspark.resource.*, etc.) 2020-11-16 10:21:50 +09:00
java_gateway.py [SPARK-35303][PYTHON] Enable pinned thread mode by default 2021-06-18 12:02:29 +09:00
join.py [SPARK-14202] [PYTHON] Use generator expression instead of list comp in python_full_outer_jo… 2016-03-28 14:51:36 -07:00
profiler.py [SPARK-33254][PYTHON][DOCS] Migration to NumPy documentation style in Core (pyspark.*, pyspark.resource.*, etc.) 2020-11-16 10:21:50 +09:00
profiler.pyi [SPARK-32714][PYTHON] Initial pyspark-stubs port 2020-09-24 14:15:36 +09:00
py.typed [SPARK-32714][PYTHON] Initial pyspark-stubs port 2020-09-24 14:15:36 +09:00
rdd.py [SPARK-35512][PYTHON] Fix OverflowError(cannot convert float infinity to integer) in partitionBy function 2021-06-09 10:57:27 +09:00
rdd.pyi [SPARK-35986][PYSPARK] Fix type hint for RDD.histogram's buckets 2021-07-04 10:22:57 +09:00
rddsampler.py [SPARK-4897] [PySpark] Python 3 support 2015-04-16 16:20:57 -07:00
resultiterable.py [SPARK-32138] Drop Python 2.7, 3.4 and 3.5 2020-07-14 11:22:44 +09:00
resultiterable.pyi [SPARK-32714][PYTHON] Initial pyspark-stubs port 2020-09-24 14:15:36 +09:00
serializers.py [SPARK-35303][PYTHON] Enable pinned thread mode by default 2021-06-18 12:02:29 +09:00
shell.py [SPARK-33363] Add prompt information related to the current task when pyspark/sparkR starts 2020-11-10 11:12:19 +09:00
shuffle.py [SPARK-35303][PYTHON] Enable pinned thread mode by default 2021-06-18 12:02:29 +09:00
statcounter.py [SPARK-32194][PYTHON] Use proper exception classes instead of plain Exception 2021-05-26 11:54:40 +09:00
statcounter.pyi [SPARK-32714][PYTHON] Initial pyspark-stubs port 2020-09-24 14:15:36 +09:00
status.py
status.pyi [SPARK-32714][PYTHON] Initial pyspark-stubs port 2020-09-24 14:15:36 +09:00
storagelevel.py [SPARK-31448][PYTHON] Fix storage level used in persist() in dataframe.py 2020-09-15 08:41:22 -05:00
storagelevel.pyi [SPARK-32714][PYTHON] Initial pyspark-stubs port 2020-09-24 14:15:36 +09:00
taskcontext.py [SPARK-36173][CORE] Support getting CPU number in TaskContext 2021-08-04 21:14:01 -05:00
taskcontext.pyi [SPARK-36173][CORE][PYTHON][FOLLOWUP] Add type hint for TaskContext.cpus 2021-08-06 10:56:10 +09:00
traceback_utils.py
util.py [SPARK-35946][PYTHON] Respect Py4J server in InheritableThread API 2021-06-29 22:18:54 -07:00
util.pyi [SPARK-35303][PYTHON] Enable pinned thread mode by default 2021-06-18 12:02:29 +09:00
version.py [SPARK-35996][BUILD] Setting version to 3.3.0-SNAPSHOT 2021-07-02 13:47:36 -07:00
version.pyi [SPARK-32714][PYTHON] Initial pyspark-stubs port 2020-09-24 14:15:36 +09:00
worker.py [SPARK-36173][CORE] Support getting CPU number in TaskContext 2021-08-04 21:14:01 -05:00