[SPARK-30434][PYTHON][SQL] Move pandas related functionalities into 'pandas' sub-package
### What changes were proposed in this pull request?
This PR proposes to move pandas-related functionality into a `pandas` sub-package. Namely:
```bash
pyspark/sql/pandas
├── __init__.py
├── conversion.py # Conversion between pandas <> PySpark DataFrames
├── functions.py # pandas_udf
├── group_ops.py # Grouped UDF / Cogrouped UDF + groupby.apply, groupby.cogroup.apply
├── map_ops.py # Map Iter UDF + mapInPandas
├── serializers.py # pandas <> PyArrow serializers
├── types.py # Type utils between pandas <> PyArrow
└── utils.py # Version requirement checks
```
In order to locate `groupby.apply`, `groupby.cogroup.apply`, `mapInPandas`, `toPandas`, and `createDataFrame(pdf)` separately under the `pandas` sub-package, I had to use a mix-in approach, which the Scala side often expresses with `trait`s and which pandas itself also uses to group related functionality (see `IndexOpsMixin` as an example). You can think of it as Scala's self-typed trait. See the structure below:
```python
class PandasMapOpsMixin(object):
    def mapInPandas(self, ...):
        ...
        return ...
    # other pandas <> PySpark APIs
```
```python
class DataFrame(PandasMapOpsMixin):
    # other DataFrame APIs equivalent to Scala side.
```
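To make the pattern concrete, here is a minimal, runnable sketch of the self-typed mix-in idea, using toy names and data rather than the actual PySpark implementation:
```python
class PandasMapOpsMixin(object):
    def mapInPandas(self, func):
        # Relies on attributes the final class provides,
        # like a self-typed trait in Scala.
        return [func(x) for x in self._rows]


class DataFrame(PandasMapOpsMixin):
    def __init__(self, rows):
        self._rows = rows


DataFrame([1, 2, 3]).mapInPandas(lambda x: x * 10)  # [10, 20, 30]
```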
Yes, this is a big PR, but it is mostly just moving code around, except for one case, `createDataFrame`, where I had to split the method.
### Why are the changes needed?
pandas functionality is currently scattered here and there, and I myself get lost tracking where each piece lives. It is also almost impossible today to make a change that applies commonly to all pandas-related features.
In addition, after this change `DataFrame` and `SparkSession` become more consistent with the Scala side, since pandas is specific to Python: this change separates the pandas-specific APIs away from `DataFrame` and `SparkSession`.
### Does this PR introduce any user-facing change?
No.
### How was this patch tested?
Existing tests should cover this. I also manually built the PySpark API documentation and checked it.
Closes #27109 from HyukjinKwon/pandas-refactoring.
Authored-by: HyukjinKwon <gurwls223@apache.org>
Signed-off-by: HyukjinKwon <gurwls223@apache.org>
2020-01-08 20:22:50 -05:00
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

"""
Serializers for PyArrow and pandas conversions. See `pyspark.serializers` for more details.
"""

from pyspark.serializers import Serializer, read_int, write_int, UTF8Deserializer


class SpecialLengths(object):
    END_OF_DATA_SECTION = -1
    PYTHON_EXCEPTION_THROWN = -2
    TIMING_DATA = -3
    END_OF_STREAM = -4
    NULL = -5
    START_ARROW_STREAM = -6
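

# Illustrative sketch (hypothetical helper, not part of the original module):
# these negative values travel over the same framed-int protocol as real
# payload lengths, so a reader can tell control markers apart from data.
def _example_read_marker_or_length(stream):
    n = read_int(stream)
    if n == SpecialLengths.START_ARROW_STREAM:
        return "an Arrow stream follows"
    return n  # otherwise a real payload length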


class ArrowCollectSerializer(Serializer):
    """
    Deserialize a stream of batches followed by batch order information. Used in
    PandasConversionMixin._collect_as_arrow() after invoking Dataset.collectAsArrowToPython()
    in the JVM.
    """

    def __init__(self):
        self.serializer = ArrowStreamSerializer()

    def dump_stream(self, iterator, stream):
        return self.serializer.dump_stream(iterator, stream)

    def load_stream(self, stream):
        """
        Load a stream of un-ordered Arrow RecordBatches, where the last iteration yields
        a list of indices that can be used to put the RecordBatches in the correct order.
        """
        # load the batches
        for batch in self.serializer.load_stream(stream):
            yield batch

        # load the batch order indices or propagate any error that occurred in the JVM
        num = read_int(stream)
        if num == -1:
            error_msg = UTF8Deserializer().loads(stream)
            raise RuntimeError("An error occurred while calling "
                               "ArrowCollectSerializer.load_stream: {}".format(error_msg))
        batch_order = []
        for i in range(num):
            index = read_int(stream)
            batch_order.append(index)
        yield batch_order

    def __repr__(self):
        return "ArrowCollectSerializer(%s)" % self.serializer
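

# Illustrative sketch (hypothetical helper, not part of the original module):
# a consumer of ArrowCollectSerializer.load_stream receives the record batches
# first and a list of indices last, and uses the indices to restore the
# original batch order.
def _example_collect_in_order(stream):
    results = list(ArrowCollectSerializer().load_stream(stream))
    batches, batch_order = results[:-1], results[-1]
    return [batches[i] for i in batch_order]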


class ArrowStreamSerializer(Serializer):
    """
    Serializes Arrow record batches as a stream.
    """

    def dump_stream(self, iterator, stream):
        import pyarrow as pa
        writer = None
        try:
            for batch in iterator:
                if writer is None:
                    writer = pa.RecordBatchStreamWriter(stream, batch.schema)
                writer.write_batch(batch)
        finally:
            if writer is not None:
                writer.close()

    def load_stream(self, stream):
        import pyarrow as pa
        reader = pa.ipc.open_stream(stream)
        for batch in reader:
            yield batch

    def __repr__(self):
        return "ArrowStreamSerializer"
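

# Illustrative round-trip sketch (hypothetical helper, not part of the
# original module): write record batches to an in-memory stream with
# dump_stream and read them back with load_stream.
def _example_arrow_round_trip():
    import io
    import pyarrow as pa
    batch = pa.RecordBatch.from_arrays([pa.array([1, 2, 3])], ["x"])
    buf = io.BytesIO()
    ser = ArrowStreamSerializer()
    ser.dump_stream(iter([batch]), buf)
    buf.seek(0)
    return list(ser.load_stream(buf))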


class ArrowStreamPandasSerializer(ArrowStreamSerializer):
    """
    Serializes Pandas.Series as Arrow data with Arrow streaming format.

    Parameters
    ----------
    timezone : str
        A timezone to respect when handling timestamp values
    safecheck : bool
        If True, conversion from Arrow to Pandas checks for overflow/truncation
    assign_cols_by_name : bool
        If True, then Pandas DataFrames will get columns by name
    """

    def __init__(self, timezone, safecheck, assign_cols_by_name):
        super(ArrowStreamPandasSerializer, self).__init__()
        self._timezone = timezone
        self._safecheck = safecheck
        self._assign_cols_by_name = assign_cols_by_name

    def arrow_to_pandas(self, arrow_column):
        from pyspark.sql.pandas.types import _check_series_localize_timestamps, \
            _convert_map_items_to_dict
        import pyarrow

        # If the given column is a date type column, creates a series of datetime.date directly
        # instead of creating datetime64[ns] as intermediate data to avoid overflow caused by
        # datetime64[ns] type handling.
        s = arrow_column.to_pandas(date_as_object=True)

        if pyarrow.types.is_timestamp(arrow_column.type):
            return _check_series_localize_timestamps(s, self._timezone)
        elif pyarrow.types.is_map(arrow_column.type):
            return _convert_map_items_to_dict(s)
        else:
            return s
    def _create_batch(self, series):
        """
        Create an Arrow record batch from the given pandas.Series or list of Series,
        with optional type.

        Parameters
        ----------
        series : pandas.Series or list
            A single series, list of series, or list of (series, arrow_type)

        Returns
        -------
        pyarrow.RecordBatch
            Arrow RecordBatch
        """
        import pandas as pd
        import pyarrow as pa
        from pyspark.sql.pandas.types import _check_series_convert_timestamps_internal, \
            _convert_dict_to_map_items
        from pandas.api.types import is_categorical_dtype

        # Make input conform to [(series1, type1), (series2, type2), ...]
        if not isinstance(series, (list, tuple)) or \
                (len(series) == 2 and isinstance(series[1], pa.DataType)):
            series = [series]
        series = ((s, None) if not isinstance(s, (list, tuple)) else s for s in series)

        def create_array(s, t):
            mask = s.isnull()
            # Ensure timestamp series are in expected form for Spark internal representation
            if t is not None and pa.types.is_timestamp(t):
                s = _check_series_convert_timestamps_internal(s, self._timezone)
            elif t is not None and pa.types.is_map(t):
                s = _convert_dict_to_map_items(s)
            elif is_categorical_dtype(s.dtype):
                # Note: This can be removed once minimum pyarrow version is >= 0.16.1
                s = s.astype(s.dtypes.categories.dtype)
            try:
                array = pa.Array.from_pandas(s, mask=mask, type=t, safe=self._safecheck)
            except ValueError as e:
                # SPARK-33073: only add the hint about disabling the safe-conversion
                # config when it is actually enabled, and chain the original error.
                if self._safecheck:
                    error_msg = "Exception thrown when converting pandas.Series (%s) to " + \
                                "Arrow Array (%s). It can be caused by overflows or other " + \
                                "unsafe conversions warned by Arrow. Arrow safe type check " + \
                                "can be disabled by using SQL config " + \
                                "`spark.sql.execution.pandas.convertToArrowArraySafely`."
                    raise ValueError(error_msg % (s.dtype, t)) from e
                else:
                    raise e
            return array

        arrs = []
        for s, t in series:
            if t is not None and pa.types.is_struct(t):
                if not isinstance(s, pd.DataFrame):
                    raise ValueError("A field of type StructType expects a pandas.DataFrame, "
                                     "but got: %s" % str(type(s)))

                # Input partition and result pandas.DataFrame empty, make empty Arrays with struct
                if len(s) == 0 and len(s.columns) == 0:
                    arrs_names = [(pa.array([], type=field.type), field.name) for field in t]
                # Assign result columns by schema name if user labeled with strings
                elif self._assign_cols_by_name and any(isinstance(name, str)
                                                       for name in s.columns):
                    arrs_names = [(create_array(s[field.name], field.type), field.name)
                                  for field in t]
                # Assign result columns by position
                else:
                    arrs_names = [(create_array(s[s.columns[i]], field.type), field.name)
                                  for i, field in enumerate(t)]

                struct_arrs, struct_names = zip(*arrs_names)
                arrs.append(pa.StructArray.from_arrays(struct_arrs, struct_names))
            else:
                arrs.append(create_array(s, t))

        return pa.RecordBatch.from_arrays(arrs, ["_%d" % i for i in range(len(arrs))])

    def dump_stream(self, iterator, stream):
        """
        Make ArrowRecordBatches from Pandas Series and serialize. Input is a single series or
        a list of series accompanied by an optional pyarrow type to coerce the data to.
        """
        batches = (self._create_batch(series) for series in iterator)
        super(ArrowStreamPandasSerializer, self).dump_stream(batches, stream)

    def load_stream(self, stream):
        """
        Deserialize ArrowRecordBatches to an Arrow table and return as a list of pandas.Series.
        """
        batches = super(ArrowStreamPandasSerializer, self).load_stream(stream)
        import pyarrow as pa
        for batch in batches:
            yield [self.arrow_to_pandas(c) for c in pa.Table.from_batches([batch]).itercolumns()]

    def __repr__(self):
        return "ArrowStreamPandasSerializer"
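

# Illustrative round-trip sketch (hypothetical helper, not part of the
# original module): each list of pandas.Series becomes one Arrow record batch
# on the way out, and comes back as a list of Series per batch.
def _example_pandas_round_trip():
    import io
    import pandas as pd
    ser = ArrowStreamPandasSerializer("UTC", safecheck=True, assign_cols_by_name=True)
    buf = io.BytesIO()
    ser.dump_stream(iter([[pd.Series([1, 2, 3])]]), buf)
    buf.seek(0)
    return list(ser.load_stream(buf))  # one list containing one pandas.Series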


class ArrowStreamPandasUDFSerializer(ArrowStreamPandasSerializer):
    """
    Serializer used by Python worker to evaluate Pandas UDFs
    """

    def __init__(self, timezone, safecheck, assign_cols_by_name, df_for_struct=False):
        super(ArrowStreamPandasUDFSerializer, self) \
            .__init__(timezone, safecheck, assign_cols_by_name)
        self._df_for_struct = df_for_struct

    def arrow_to_pandas(self, arrow_column):
        import pyarrow.types as types

        if self._df_for_struct and types.is_struct(arrow_column.type):
            import pandas as pd
            series = [super(ArrowStreamPandasUDFSerializer, self).arrow_to_pandas(column)
                      .rename(field.name)
                      for column, field in zip(arrow_column.flatten(), arrow_column.type)]
            s = pd.concat(series, axis=1)
        else:
            s = super(ArrowStreamPandasUDFSerializer, self).arrow_to_pandas(arrow_column)
        return s

    def dump_stream(self, iterator, stream):
        """
        Override because Pandas UDFs require a START_ARROW_STREAM before the Arrow stream is sent.
        This should be sent after creating the first record batch so in case of an error, it can
        be sent back to the JVM before the Arrow stream starts.
        """

        def init_stream_yield_batches():
            should_write_start_length = True
            for series in iterator:
                batch = self._create_batch(series)
                if should_write_start_length:
                    write_int(SpecialLengths.START_ARROW_STREAM, stream)
                    should_write_start_length = False
                yield batch

        return ArrowStreamSerializer.dump_stream(self, init_stream_yield_batches(), stream)

    def __repr__(self):
        return "ArrowStreamPandasUDFSerializer"
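

# Illustrative sketch (hypothetical helper, not part of the original module):
# with df_for_struct=True, a struct column comes back as a pandas.DataFrame
# with one column per struct field instead of a single Series of dicts.
def _example_struct_column_to_pandas():
    import pyarrow as pa
    ser = ArrowStreamPandasUDFSerializer("UTC", True, True, df_for_struct=True)
    col = pa.chunked_array([pa.array([{"a": 1, "b": "x"}, {"a": 2, "b": "y"}])])
    return ser.arrow_to_pandas(col)  # pandas.DataFrame with columns "a" and "b"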


class CogroupUDFSerializer(ArrowStreamPandasUDFSerializer):

    def load_stream(self, stream):
        """
        Deserialize Cogrouped ArrowRecordBatches to a tuple of Arrow tables and yield as two
        lists of pandas.Series.
        """
        import pyarrow as pa
        dataframes_in_group = None

        while dataframes_in_group is None or dataframes_in_group > 0:
            dataframes_in_group = read_int(stream)

            if dataframes_in_group == 2:
                batch1 = [batch for batch in ArrowStreamSerializer.load_stream(self, stream)]
                batch2 = [batch for batch in ArrowStreamSerializer.load_stream(self, stream)]
                yield (
                    [self.arrow_to_pandas(c) for c in pa.Table.from_batches(batch1).itercolumns()],
                    [self.arrow_to_pandas(c) for c in pa.Table.from_batches(batch2).itercolumns()]
                )

            elif dataframes_in_group != 0:
                raise ValueError(
                    'Invalid number of pandas.DataFrames in group {0}'.format(dataframes_in_group))