#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

"""
Important classes of Spark SQL and DataFrames:

    - :class:`pyspark.sql.SparkSession`
      Main entry point for :class:`DataFrame` and SQL functionality.
    - :class:`pyspark.sql.DataFrame`
      A distributed collection of data grouped into named columns.
    - :class:`pyspark.sql.Column`
      A column expression in a :class:`DataFrame`.
    - :class:`pyspark.sql.Row`
      A row of data in a :class:`DataFrame`.
    - :class:`pyspark.sql.GroupedData`
      Aggregation methods, returned by :func:`DataFrame.groupBy`.
    - :class:`pyspark.sql.DataFrameNaFunctions`
      Methods for handling missing data (null values).
    - :class:`pyspark.sql.DataFrameStatFunctions`
      Methods for statistics functionality.
    - :class:`pyspark.sql.functions`
      List of built-in functions available for :class:`DataFrame`.
    - :class:`pyspark.sql.types`
      List of data types available.
    - :class:`pyspark.sql.Window`
      For working with window functions.
"""
from pyspark.sql.types import Row
from pyspark.sql.context import SQLContext, HiveContext, UDFRegistration
from pyspark.sql.session import SparkSession
from pyspark.sql.column import Column
from pyspark.sql.catalog import Catalog
from pyspark.sql.dataframe import DataFrame, DataFrameNaFunctions, DataFrameStatFunctions
from pyspark.sql.group import GroupedData
from pyspark.sql.readwriter import DataFrameReader, DataFrameWriter
from pyspark.sql.window import Window, WindowSpec
from pyspark.sql.pandas.group_ops import PandasCogroupedOps


__all__ = [
    'SparkSession', 'SQLContext', 'HiveContext', 'UDFRegistration',
    'DataFrame', 'GroupedData', 'Column', 'Catalog', 'Row',
    'DataFrameNaFunctions', 'DataFrameStatFunctions', 'Window', 'WindowSpec',
|
[SPARK-30434][PYTHON][SQL] Move pandas related functionalities into 'pandas' sub-package
### What changes were proposed in this pull request?
This PR proposes to move pandas related functionalities into pandas package. Namely:
```bash
pyspark/sql/pandas
├── __init__.py
├── conversion.py # Conversion between pandas <> PySpark DataFrames
├── functions.py # pandas_udf
├── group_ops.py # Grouped UDF / Cogrouped UDF + groupby.apply, groupby.cogroup.apply
├── map_ops.py # Map Iter UDF + mapInPandas
├── serializers.py # pandas <> PyArrow serializers
├── types.py # Type utils between pandas <> PyArrow
└── utils.py # Version requirement checks
```
In order to separately locate `groupby.apply`, `groupby.cogroup.apply`, `mapInPandas`, `toPandas`, and `createDataFrame(pdf)` under `pandas` sub-package, I had to use a mix-in approach which Scala side uses often by `trait`, and also pandas itself uses this approach (see `IndexOpsMixin` as an example) to group related functionalities. Currently, you can think it's like Scala's self typed trait. See the structure below:
```python
class PandasMapOpsMixin(object):
def mapInPandas(self, ...):
...
return ...
# other Pandas <> PySpark APIs
```
```python
class DataFrame(PandasMapOpsMixin):
# other DataFrame APIs equivalent to Scala side.
```
Yes, This is a big PR but they are mostly just moving around except one case `createDataFrame` which I had to split the methods.
### Why are the changes needed?
There are pandas functionalities here and there and I myself gets lost where it was. Also, when you have to make a change commonly for all of pandas related features, it's almost impossible now.
Also, after this change, `DataFrame` and `SparkSession` become more consistent with Scala side since pandas is specific to Python, and this change separates pandas-specific APIs away from `DataFrame` or `SparkSession`.
### Does this PR introduce any user-facing change?
No.
### How was this patch tested?
Existing tests should cover. Also, I manually built the PySpark API documentation and checked.
Closes #27109 from HyukjinKwon/pandas-refactoring.
Authored-by: HyukjinKwon <gurwls223@apache.org>
Signed-off-by: HyukjinKwon <gurwls223@apache.org>
2020-01-08 20:22:50 -05:00
|
|
|
'DataFrameReader', 'DataFrameWriter', 'PandasCogroupedOps'
|
2015-02-09 23:49:22 -05:00
|
|
|
]