HyukjinKwon ee8d661058 [SPARK-30434][PYTHON][SQL] Move pandas related functionalities into 'pandas' sub-package
### What changes were proposed in this pull request?

This PR proposes to move pandas-related functionalities into a `pandas` sub-package. Namely:

```bash
pyspark/sql/pandas
├── __init__.py
├── conversion.py  # Conversion between pandas <> PySpark DataFrames
├── functions.py   # pandas_udf
├── group_ops.py   # Grouped UDF / Cogrouped UDF + groupby.apply, groupby.cogroup.apply
├── map_ops.py     # Map Iter UDF + mapInPandas
├── serializers.py # pandas <> PyArrow serializers
├── types.py       # Type utils between pandas <> PyArrow
└── utils.py       # Version requirement checks
```
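
As a concrete example of what `conversion.py` now houses, both directions of the pandas interchange go through it; a minimal usage sketch (assuming a local PySpark with pandas and PyArrow installed):

```python
import pandas as pd
from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local[1]").getOrCreate()

pdf = pd.DataFrame({"a": [1, 2, 3]})
df = spark.createDataFrame(pdf)   # pandas -> PySpark, now backed by conversion.py
roundtrip = df.toPandas()         # PySpark -> pandas, also in conversion.py
print(roundtrip)
```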

To locate `groupby.apply`, `groupby.cogroup.apply`, `mapInPandas`, `toPandas`, and `createDataFrame(pdf)` separately under the `pandas` sub-package, I had to use a mix-in approach, which the Scala side often uses via `trait`s, and which pandas itself uses to group related functionalities (see `IndexOpsMixin` as an example). Currently, you can think of it as Scala's self-typed trait. See the structure below:

```python
class PandasMapOpsMixin(object):
    def mapInPandas(self, ...):
        ...
        return ...

    # other Pandas <> PySpark APIs
```

```python
class DataFrame(PandasMapOpsMixin):

    # other DataFrame APIs equivalent to the Scala side.
```
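
For a self-contained illustration of why this resembles a self-typed trait, here is a minimal runnable sketch (the names are simplified stand-ins, not the actual PySpark classes): the mix-in uses an attribute it does not define and relies on the host class to provide it.

```python
class PandasMapOpsMixin:
    # Like a self-typed trait, this mix-in assumes the host class
    # provides `self._rows`; it is not usable standalone.
    def map_in_pandas_like(self, func):
        return [func(row) for row in self._rows]


class DataFrame(PandasMapOpsMixin):
    def __init__(self, rows):
        self._rows = rows  # the attribute the mix-in depends on


df = DataFrame([1, 2, 3])
print(df.map_in_pandas_like(lambda x: x * 10))  # [10, 20, 30]
```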

Yes, this is a big PR, but it is mostly just moving code around, except for one case, `createDataFrame`, whose methods I had to split.
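
The split roughly means that the generic `createDataFrame` keeps only the dispatch, while the pandas-specific branch is contributed by a mix-in under the `pandas` sub-package. A hypothetical, simplified sketch of that shape (not the literal PySpark code):

```python
import pandas as pd


class ConversionMixinSketch:
    # Hypothetical helper standing in for the pandas-specific half.
    def _create_from_pandas(self, pdf):
        return list(pdf.itertuples(index=False, name=None))


class SessionSketch(ConversionMixinSketch):
    def createDataFrame(self, data):
        if isinstance(data, pd.DataFrame):
            return self._create_from_pandas(data)  # pandas path, from the mix-in
        return list(data)                          # plain local-data path


print(SessionSketch().createDataFrame(pd.DataFrame({"a": [1, 2]})))  # [(1,), (2,)]
```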

### Why are the changes needed?

pandas functionalities are scattered here and there, and even I get lost tracking down where each one lives. Also, when you have to make a change common to all pandas-related features, it is almost impossible now.

Also, after this change, `DataFrame` and `SparkSession` become more consistent with the Scala side, since pandas is specific to Python; this change separates the pandas-specific APIs away from `DataFrame` and `SparkSession`.

### Does this PR introduce any user-facing change?

No.
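
For instance, the public import paths are unchanged, so user code like the following keeps working; a minimal sketch (assuming PySpark with pandas and PyArrow installed):

```python
import pandas as pd
from pyspark.sql.functions import pandas_udf, PandasUDFType  # unchanged public imports


# A scalar pandas UDF defined the usual way; only the internal
# module layout moved, so code like this is unaffected.
@pandas_udf("long", PandasUDFType.SCALAR)
def plus_one(s):
    return s + 1
```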

### How was this patch tested?

Existing tests should cover this. I also manually built the PySpark API documentation and checked it.

Closes #27109 from HyukjinKwon/pandas-refactoring.

Authored-by: HyukjinKwon <gurwls223@apache.org>
Signed-off-by: HyukjinKwon <gurwls223@apache.org>
2020-01-09 10:22:50 +09:00
| Name | Last commit | Date |
| --- | --- | --- |
| `_static` | [SPARK-27557][DOC] Add copy button to Python API docs for easier copying of code-blocks | 2019-05-01 11:26:18 -05:00 |
| `_templates` | [SPARK-10415] [PYSPARK] [MLLIB] [DOCS] Enhance Navigation Sidebar in PySpark API | 2015-09-29 13:25:38 -07:00 |
| `conf.py` | [SPARK-28206][PYTHON] Remove the legacy Epydoc in PySpark API documentation | 2019-07-05 10:08:22 -07:00 |
| `index.rst` | [MINOR][PYTHON] Fix SQLContext to SparkSession in Python API main page | 2019-01-16 23:23:36 +08:00 |
| `make.bat` | [SPARK-3870] EOL character enforcement | 2014-10-31 12:39:52 -07:00 |
| `make2.bat` | [SPARK-3870] EOL character enforcement | 2014-10-31 12:39:52 -07:00 |
| `Makefile` | [SPARK-30216][INFRA] Use python3 in Docker release image | 2019-12-13 11:31:31 -08:00 |
| `pyspark.ml.rst` | [SPARK-30154][ML] PySpark UDF to convert MLlib vectors to dense arrays | 2020-01-06 16:18:51 -08:00 |
| `pyspark.mllib.rst` | [SPARK-6485] [MLLIB] [PYTHON] Add CoordinateMatrix/RowMatrix/IndexedRowMatrix to PySpark. | 2015-08-04 16:30:03 -07:00 |
| `pyspark.rst` | [SPARK-4586][MLLIB] Python API for ML pipeline and parameters | 2015-01-28 17:14:23 -08:00 |
| `pyspark.sql.rst` | [SPARK-30434][PYTHON][SQL] Move pandas related functionalities into 'pandas' sub-package | 2020-01-09 10:22:50 +09:00 |
| `pyspark.streaming.rst` | [SPARK-25705][BUILD][STREAMING][TEST-MAVEN] Remove Kafka 0.8 integration | 2018-10-16 09:10:24 -05:00 |