### What changes were proposed in this pull request?
Refactor `_select_rows_by_iterable` in `iLocIndexer` to use `Column.isin`.
### Why are the changes needed?
For better performance.
A rough benchmark shows that a long projection performs worse than `Column.isin`, even when the length of the filtering conditions exceeds `compute.isin_limit`.
So we use `Column.isin` instead.
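A hedged sketch of the idea (the column name and setup below are illustrative, not the actual private helper in `iLocIndexer`): the positions passed to `iloc` are matched against a sequential row-number column with a single `isin` predicate instead of a long chained projection of equality conditions.
```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Illustrative only: a sequential row-number column stands in for iloc positions.
sdf = spark.range(6).withColumnRenamed("id", "__row_id__")
positions = [0, 2, 5]

# One isin predicate instead of a long OR chain such as:
#   (F.col("__row_id__") == 0) | (F.col("__row_id__") == 2) | ...
sdf.where(F.col("__row_id__").isin(positions)).show()
```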
### Does this PR introduce _any_ user-facing change?
No.
### How was this patch tested?
Existing tests.
Closes#33964 from xinrong-databricks/iloc_select.
Authored-by: Xinrong Meng <xinrong.meng@databricks.com>
Signed-off-by: Takuya UESHIN <ueshin@databricks.com>
### What changes were proposed in this pull request?
Support dropping rows of a single-indexed DataFrame.
Dropping rows and columns at the same time is supported in this PR as well.
### Why are the changes needed?
To increase pandas API coverage.
### Does this PR introduce _any_ user-facing change?
Yes, dropping rows of a single-indexed DataFrame is supported now.
```py
>>> df = ps.DataFrame(np.arange(12).reshape(3, 4), columns=['A', 'B', 'C', 'D'])
>>> df
A B C D
0 0 1 2 3
1 4 5 6 7
2 8 9 10 11
```
#### From
```py
>>> df.drop([0, 1])
Traceback (most recent call last):
...
KeyError: [(0,), (1,)]
>>> df.drop([0, 1], axis=0)
Traceback (most recent call last):
...
NotImplementedError: Drop currently only works for axis=1
>>> df.drop(1)
Traceback (most recent call last):
...
KeyError: [(1,)]
>>> df.drop(index=1)
Traceback (most recent call last):
...
TypeError: drop() got an unexpected keyword argument 'index'
>>> df.drop(index=[0, 1], columns='A')
Traceback (most recent call last):
...
TypeError: drop() got an unexpected keyword argument 'index'
```
#### To
```py
>>> df.drop([0, 1])
A B C D
2 8 9 10 11
>>> df.drop([0, 1], axis=0)
A B C D
2 8 9 10 11
>>> df.drop(1)
A B C D
0 0 1 2 3
2 8 9 10 11
>>> df.drop(index=1)
A B C D
0 0 1 2 3
2 8 9 10 11
>>> df.drop(index=[0, 1], columns='A')
B C D
2 9 10 11
```
### How was this patch tested?
Unit tests.
Closes#33929 from xinrong-databricks/frame_drop.
Authored-by: Xinrong Meng <xinrong.meng@databricks.com>
Signed-off-by: Takuya UESHIN <ueshin@databricks.com>
### What changes were proposed in this pull request?
Add new built-in SQL functions: secant and cosecant, and add them as Scala and Python functions.
### Why are the changes needed?
Cotangent is already supported in Spark SQL, but secant and cosecant are missing, even though I believe they would be used as much as `cot`.
Related Links: [SPARK-20751](https://github.com/apache/spark/pull/17999) [SPARK-36660](https://github.com/apache/spark/pull/33906)
### Does this PR introduce _any_ user-facing change?
Yes, users can now use these functions.
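For instance, a minimal sketch on the Python side (assuming Spark 3.3+, where the functions are exposed in `pyspark.sql.functions` as `sec` and `csc`):
```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# secant = 1 / cos, cosecant = 1 / sin
spark.range(1, 4).select(
    F.sec(F.col("id")).alias("sec_id"),
    F.csc(F.col("id")).alias("csc_id"),
).show()
```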
### How was this patch tested?
Unit tests
Closes#33988 from yutoacts/SPARK-36683.
Authored-by: Yuto Akutsu <yuto.akutsu@oss.nttdata.com>
Signed-off-by: Kousuke Saruta <sarutak@oss.nttdata.com>
### What changes were proposed in this pull request?
This PR applies the new syntax introduced in https://github.com/apache/spark/pull/33954 to more APIs. Namely, users can now specify the index type and name as below:
```python
import pandas as pd
import pyspark.pandas as ps
def transform(pdf) -> pd.DataFrame[int, [int, int]]:
    pdf['A'] = pdf.id + 1
    return pdf
ps.range(5).koalas.apply_batch(transform)
```
```
c0 c1
0 0 1
1 1 2
2 2 3
3 3 4
4 4 5
```
```python
import pandas as pd
import pyspark.pandas as ps
def transform(pdf) -> pd.DataFrame[("index", int), [("a", int), ("b", int)]]:
    pdf['A'] = pdf.id * pdf.id
    return pdf
ps.range(5).koalas.apply_batch(transform)
```
```
a b
index
0 0 0
1 1 1
2 2 4
3 3 9
4 4 16
```
Again, this syntax remains experimental and is non-standard from the perspective of standard Python typing. We should migrate to proper typing once pandas supports it, as NumPy does with `numpy.typing`.
### Why are the changes needed?
The rationale is described in https://github.com/apache/spark/pull/33954: to avoid unnecessary computation for the default index or schema inference.
### Does this PR introduce _any_ user-facing change?
Yes, this PR affects the following APIs:
- `DataFrame.apply(..., axis=1)`
- `DataFrame.groupby.apply(...)`
- `DataFrame.pandas_on_spark.transform_batch(...)`
- `DataFrame.pandas_on_spark.apply_batch(...)`
These APIs can now specify the index type and name with the new syntax below:
```
DataFrame[index_type, [type, ...]]
DataFrame[(index_name, index_type), [(name, type), ...]]
DataFrame[dtype instance, dtypes instance]
DataFrame[(index_name, index_type), zip(names, types)]
```
### How was this patch tested?
Manually tested, and unittests were added.
Closes#34007 from HyukjinKwon/SPARK-36710.
Authored-by: Hyukjin Kwon <gurwls223@apache.org>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
### What changes were proposed in this pull request?
Fix DataFrame type-hint parsing when the list of data-type tuples has length 1.
### Why are the changes needed?
When the list of data-type tuples has length 1, the type hint fails to parse:
``` python
>>> ps.DataFrame[("a", int), [int]]
typing.Tuple[pyspark.pandas.typedef.typehints.IndexNameType, int]
>>> ps.DataFrame[("a", int), [("b", int)]]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/dgd/spark/python/pyspark/pandas/frame.py", line 11998, in __class_getitem__
return create_tuple_for_frame_type(params)
File "/Users/dgd/spark/python/pyspark/pandas/typedef/typehints.py", line 685, in create_tuple_for_frame_type
return Tuple[extract_types(params)]
File "/Users/dgd/spark/python/pyspark/pandas/typedef/typehints.py", line 755, in extract_types
return (index_type,) + extract_types(data_types)
File "/Users/dgd/spark/python/pyspark/pandas/typedef/typehints.py", line 770, in extract_types
raise TypeError(
TypeError: Type hints should be specified as one of:
- DataFrame[type, type, ...]
- DataFrame[name: type, name: type, ...]
- DataFrame[dtypes instance]
- DataFrame[zip(names, types)]
- DataFrame[index_type, [type, ...]]
- DataFrame[(index_name, index_type), [(name, type), ...]]
- DataFrame[dtype instance, dtypes instance]
- DataFrame[(index_name, index_type), zip(names, types)]
However, got [('b', <class 'int'>)].
```
### Does this PR introduce _any_ user-facing change?
After:
``` python
>>> ps.DataFrame[("a", int), [("b", int)]]
typing.Tuple[pyspark.pandas.typedef.typehints.IndexNameType, pyspark.pandas.typedef.typehints.NameType]
```
### How was this patch tested?
Existing tests.
Closes#34019 from dgd-contributor/fix_when_list_of_tuple_data_type_have_len=1.
Authored-by: dgd-contributor <dgd_contributor@viettel.com.vn>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
### What changes were proposed in this pull request?
Fix `Series.update` when the other Series is anchored to the same frame.
Also add a test for updating a Series from a different frame.
### Why are the changes needed?
`Series.update` with another Series in the same frame currently fails, whereas pandas supports it.
Pandas behavior:
``` python
>>> pdf = pd.DataFrame(
... {"a": [None, 2, 3, 4, 5, 6, 7, 8, None], "b": [None, 5, None, 3, 2, 1, None, 0, 0]},
... )
>>> pdf
a b
0 NaN NaN
1 2.0 5.0
2 3.0 NaN
3 4.0 3.0
4 5.0 2.0
5 6.0 1.0
6 7.0 NaN
7 8.0 0.0
8 NaN 0.0
>>> pdf.a.update(pdf.b)
>>> pdf
a b
0 NaN NaN
1 5.0 5.0
2 3.0 NaN
3 3.0 3.0
4 2.0 2.0
5 1.0 1.0
6 7.0 NaN
7 0.0 0.0
8 0.0 0.0
```
### Does this PR introduce _any_ user-facing change?
Before
```python
>>> psdf = ps.DataFrame(
... {"a": [None, 2, 3, 4, 5, 6, 7, 8, None], "b": [None, 5, None, 3, 2, 1, None, 0, 0]},
... )
>>> psdf.a.update(psdf.b)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/dgd/spark/python/pyspark/pandas/series.py", line 4551, in update
combined = combine_frames(self._psdf, other._psdf, how="leftouter")
File "/Users/dgd/spark/python/pyspark/pandas/utils.py", line 141, in combine_frames
assert not same_anchor(
AssertionError: We don't need to combine. `this` and `that` are same.
>>>
```
After
```python
>>> psdf = ps.DataFrame(
... {"a": [None, 2, 3, 4, 5, 6, 7, 8, None], "b": [None, 5, None, 3, 2, 1, None, 0, 0]},
... )
>>> psdf.a.update(psdf.b)
>>> psdf
a b
0 NaN NaN
1 5.0 5.0
2 3.0 NaN
3 3.0 3.0
4 2.0 2.0
5 1.0 1.0
6 7.0 NaN
7 0.0 0.0
8 0.0 0.0
>>>
```
### How was this patch tested?
unit tests
Closes#33968 from dgd-contributor/SPARK-36722_fix_update_same_anchor.
Authored-by: dgd-contributor <dgd_contributor@viettel.com.vn>
Signed-off-by: Takuya UESHIN <ueshin@databricks.com>
### What changes were proposed in this pull request?
octet_length: calculate the byte length of strings
bit_length: calculate the bit length of strings
These two string-related functions are currently only implemented in Spark SQL, not in the Scala, Python, and R APIs.
### Why are the changes needed?
These functions would be useful for users handling multi-byte characters who mainly work with Scala, Python, or R.
### Does this PR introduce _any_ user-facing change?
Yes. Users can call the octet_length/bit_length APIs from Scala (DataFrame), Python, and R.
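A minimal sketch from the Python side (assuming Spark 3.3+, where both functions are exposed in `pyspark.sql.functions`):
```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

df = spark.createDataFrame([("abc",), ("あいう",)], ["s"])
df.select(
    F.octet_length("s").alias("bytes"),  # byte length, multi-byte aware
    F.bit_length("s").alias("bits"),     # bit length = byte length * 8
).show()
```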
### How was this patch tested?
unit tests
Closes#33992 from yoda-mon/add-bit-octet-length.
Authored-by: Leona Yoda <yodal@oss.nttdata.com>
Signed-off-by: Kousuke Saruta <sarutak@oss.nttdata.com>
### What changes were proposed in this pull request?
Added `cot` to `pyspark.sql.rst` (follow-up).
### Why are the changes needed?
[My previous PR](https://github.com/apache/spark/pull/33906) was missing it.
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?
manual check
Closes#34002 from yutoacts/SPARK-36660.
Authored-by: Yuto Akutsu <yuto.akutsu@oss.nttdata.com>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
### What changes were proposed in this pull request?
This PR proposes new syntax to specify the index type and name in pandas API on Spark. This is a base work for SPARK-36707.
More specifically, users can now use the type hints as below:
```
pd.DataFrame[int, [int, int]]
pd.DataFrame[pdf.index.dtype, pdf.dtypes]
pd.DataFrame[("index", int), [("id", int), ("A", int)]]
pd.DataFrame[(pdf.index.name, pdf.index.dtype), zip(pdf.columns, pdf.dtypes)]
```
Note that the types of `[("id", int), ("A", int)]` or `("index", int)` are matched to how you provide a compound NumPy type (see also https://numpy.org/doc/stable/user/basics.rec.html#introduction).
Therefore, the syntax will be:
**Without index:**
```
pd.DataFrame[type, type, ...]
pd.DataFrame[name: type, name: type, ...]
pd.DataFrame[dtypes instance]
pd.DataFrame[zip(names, types)]
```
(New) **With index:**
```
pd.DataFrame[index_type, [type, ...]]
pd.DataFrame[(index_name, index_type), [(name, type), ...]]
pd.DataFrame[dtype instance, dtypes instance]
pd.DataFrame[(index_name, index_type), zip(names, types)]
```
### Why are the changes needed?
Currently, there is no way to specify the type hint for the index type - the type hints are internally converted to the return type of pandas UDFs. Therefore, we always attach a default index, which degrades performance:
```python
>>> def transform(pdf) -> pd.DataFrame[int, int]:
...     pdf['A'] = pdf.id + 1
...     return pdf
...
>>> ks.range(5).koalas.apply_batch(transform)
```
```
c0 c1
0 0 1
1 1 2
2 2 3
3 3 4
4 4 5
```
The [default index](https://koalas.readthedocs.io/en/latest/user_guide/options.html#default-index-type) (for the first column that looks unnamed) is attached when the type hint is specified. For better performance, we should have a way to work around this; see also https://github.com/apache/spark/pull/33954#issuecomment-917742920 and [Specify the index column in conversion from Spark DataFrame to Koalas DataFrame](https://koalas.readthedocs.io/en/latest/user_guide/best_practices.html#specify-the-index-column-in-conversion-from-spark-dataframe-to-koalas-dataframe).
Note that this still remains experimental because Python itself doesn't yet support this kind of typing out of the box. Once pandas completes typing support as NumPy did with `numpy.typing`, we should implement a Koalas typing package and migrate to it, leveraging pandas' typing approach.
### Does this PR introduce _any_ user-facing change?
No, this PR does not yet affect any user-facing behavior in theory.
### How was this patch tested?
Unittests were added.
Closes#33954 from HyukjinKwon/SPARK-36709.
Authored-by: Hyukjin Kwon <gurwls223@apache.org>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
### What changes were proposed in this pull request?
Introduce the 'compute.isin_limit' option, with the default value of 80.
### Why are the changes needed?
`Column.isin(list)` doesn't perform well when the given `list` is large, as described in https://issues.apache.org/jira/browse/SPARK-33383.
Thus, `compute.isin_limit` is introduced to constrain the usage of `Column.isin(list)` in the code base.
If the length of the `list` exceeds `compute.isin_limit`, a broadcast join is used instead for better performance.
#### Why is the default value 80?
After reproducing the benchmark mentioned in https://issues.apache.org/jira/browse/SPARK-33383,
| length of filtering list | `isin` time / ms | broadcast DF time / ms |
| :---: | :-: | :-: |
| 200 | 69411 | 39296 |
| 100 | 43074 | 40087 |
| 80 | 35592 | 40350 |
| 50 | 28134 | 37847 |
We can see that when the length of the filtering list is <= 80, the `isin` approach performs better than broadcasting a DataFrame.
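A hedged sketch of how the limit gates the strategy (the helper name and the exact join shape are illustrative, not the internal implementation):
```python
from pyspark.sql import SparkSession, functions as F
import pyspark.pandas as ps

spark = SparkSession.builder.getOrCreate()

def filter_by_values(sdf, column, values):
    # Short lists: a plain isin predicate is cheapest.
    if len(values) <= ps.get_option("compute.isin_limit"):
        return sdf.where(F.col(column).isin(values))
    # Long lists: broadcast a one-column DataFrame and join instead.
    values_df = spark.createDataFrame([(v,) for v in values], [column])
    return sdf.join(F.broadcast(values_df), on=column, how="inner")
```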
### Does this PR introduce _any_ user-facing change?
Users may read/write the value of `compute.isin_limit` as follows:
```py
>>> ps.get_option('compute.isin_limit')
80
>>> ps.set_option('compute.isin_limit', 10)
>>> ps.get_option('compute.isin_limit')
10
>>> ps.set_option('compute.isin_limit', -1)
...
ValueError: 'compute.isin_limit' should be greater than or equal to 0.
>>> ps.reset_option('compute.isin_limit')
>>> ps.get_option('compute.isin_limit')
80
```
### How was this patch tested?
Manual test.
Closes#33982 from xinrong-databricks/new_option.
Authored-by: Xinrong Meng <xinrong.meng@databricks.com>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
### What changes were proposed in this pull request?
Add apache license headers to makefiles of PySpark documents.
### Why are the changes needed?
Makefiles of the PySpark documentation do not have Apache license headers, while the other files do.
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?
`make html`
Closes#33979 from yoda-mon/add-license-header-makefiles.
Authored-by: Leona Yoda <yodal@oss.nttdata.com>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
### What changes were proposed in this pull request?
Implement `Series.__xor__` and `Series.__rxor__`.
### Why are the changes needed?
Follow pandas
### Does this PR introduce _any_ user-facing change?
Yes, users can now do the following:
``` python
import pyspark.pandas as ps

psdf = ps.DataFrame([[11, 11], [1, 2]])
psdf[0] ^ psdf[1]  # element-wise XOR: 11 ^ 11 == 0, 1 ^ 2 == 3
```
### How was this patch tested?
unit tests
Closes#33911 from dgd-contributor/SPARK-36653_Implement_Series._xor_.
Authored-by: dgd-contributor <dgd_contributor@viettel.com.vn>
Signed-off-by: Takuya UESHIN <ueshin@databricks.com>
### What changes were proposed in this pull request?
This PR proposes cleaning up the deprecated APIs in `missing/*.py` and raising proper warning messages for the deprecated APIs, as pandas does.
Also remove the checks for pandas < 1.0, since we now only follow the behavior of the latest pandas.
### Why are the changes needed?
We should follow the API deprecations of the latest pandas.
### Does this PR introduce _any_ user-facing change?
Now these APIs raise proper messages suggesting alternatives for deprecated functions, as pandas does.
### How was this patch tested?
Ran `dev/lint-python` and manually checked the pandas API documents one by one.
Closes#33931 from itholic/SPARK-36689.
Authored-by: itholic <haejoon.lee@databricks.com>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
### What changes were proposed in this pull request?
Fix dropping all columns of a DataFrame
### Why are the changes needed?
When dropping all columns of a pandas-on-Spark DataFrame, a ValueError is raised,
whereas pandas returns an empty DataFrame that preserves the index.
We should follow pandas.
### Does this PR introduce _any_ user-facing change?
Yes.
From
```py
>>> psdf = ps.DataFrame({"x": [1, 2], "y": [3, 4], "z": [5, 6]})
>>> psdf
x y z
0 1 3 5
1 2 4 6
>>> psdf.drop(['x', 'y', 'z'])
Traceback (most recent call last):
...
ValueError: not enough values to unpack (expected 2, got 0)
```
To
```py
>>> psdf = ps.DataFrame({"x": [1, 2], "y": [3, 4], "z": [5, 6]})
>>> psdf
x y z
0 1 3 5
1 2 4 6
>>> psdf.drop(['x', 'y', 'z'])
Empty DataFrame
Columns: []
Index: [0, 1]
```
### How was this patch tested?
Unit tests.
Closes#33938 from xinrong-databricks/frame_drop_col.
Authored-by: Xinrong Meng <xinrong.meng@databricks.com>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
### What changes were proposed in this pull request?
This PR adds:
- the support of `TimestampNTZType` in pandas API on Spark.
- the support of Py4J handling of `spark.sql.timestampType` configuration
### Why are the changes needed?
To complete `TimestampNTZ` support.
In more details:
- ([#33876](https://github.com/apache/spark/pull/33876)) For `TimestampNTZType` in Spark SQL at PySpark, we can successfully ser/de `TimestampNTZType` instances to naive `datetime` (see also https://docs.python.org/3/library/datetime.html#aware-and-naive-objects). How to interpret this naive `datetime` is up to the program, e.g., whether to treat it as local time or UTC. Although some Python built-in APIs assume it is local time in general (see also https://docs.python.org/3/library/datetime.html#datetime.datetime.utcfromtimestamp):
> Because naive datetime objects are treated by many datetime methods as local times ...
semantically it is legitimate to assume:
- that a naive `datetime` maps to `TimestampNTZType` (unknown timezone).
- that, if you want to handle them as local times, this interpretation matches `TimestampType` (local time).
- ([#33875](https://github.com/apache/spark/pull/33875)) For `TimestampNTZType` in Arrow, they provide the same semantic (see also https://github.com/apache/arrow/blob/master/format/Schema.fbs#L240-L278):
- `Timestamp(..., timezone=sparkLocalTimezone)` -> `TimestampType`
- `Timestamp(..., timezone=null)` -> `TimestampNTZType`
- (this PR) For `TimestampNTZType` in pandas API on Spark, it follows the Python side in general - pandas implements APIs based on an assumption about the time (e.g., that a naive `datetime` is a local time or a UTC time).
One example is that pandas allows converting these naive `datetime` as if they are in UTC by default:
```python
>>> pd.Series(datetime.datetime(1970, 1, 1)).astype("int")
0 0
```
whereas in Spark:
```python
>>> spark.createDataFrame([[datetime.datetime(1970, 1, 1, 0, 0, 0)]]).selectExpr("CAST(_1 as BIGINT)").show()
+------+
| _1|
+------+
|-32400|
+------+
>>> spark.createDataFrame([[datetime.datetime(1970, 1, 1, 0, 0, 0, tzinfo=datetime.timezone.utc)]]).selectExpr("CAST(_1 as BIGINT)").show()
+---+
| _1|
+---+
| 0|
+---+
```
In contrast, some APIs like `Timestamp.fromtimestamp` assume they are local times:
```python
>>> pd.Timestamp.fromtimestamp(pd.Series(datetime(1970, 1, 1, 0, 0, 0)).astype("int").iloc[0])
Timestamp('1970-01-01 09:00:00')
```
For native Python, users can decide how to interpret a naive `datetime`, so it's fine. The problem is that pandas API on Spark would require two implementations of the same pandas behavior for `TimestampType` and `TimestampNTZType` respectively, which might be a non-trivial amount of overhead and work.
As far as I know, pandas API on Spark has not yet implemented such ambiguous APIs, so they are left as future work.
### Does this PR introduce _any_ user-facing change?
Yes, now pandas API on Spark can handle `TimestampNTZType`.
```python
import datetime
spark.createDataFrame([(datetime.datetime.now(),)], schema="dt timestamp_ntz").to_pandas_on_spark()
```
```
dt
0 2021-08-31 19:58:55.024410
```
This PR also adds the support of Py4J handling with `spark.sql.timestampType` configuration:
```python
>>> lit(datetime.datetime.now())
Column<'TIMESTAMP '2021-09-03 19:34:03.949998''>
```
```python
>>> spark.conf.set("spark.sql.timestampType", "TIMESTAMP_NTZ")
>>> lit(datetime.datetime.now())
Column<'TIMESTAMP_NTZ '2021-09-03 19:34:24.864632''>
```
### How was this patch tested?
Unittests were added.
Closes#33877 from HyukjinKwon/SPARK-36625.
Authored-by: Hyukjin Kwon <gurwls223@apache.org>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
### What changes were proposed in this pull request?
This PR proposes improving test coverage for the pandas-on-Spark data types & GroupBy code base, which lives in `data_type_ops/*.py` and `groupby.py`, respectively.
This PR did the following to improve coverage:
- Add unittest for untested code
- Fix unittest which is not tested properly
- Remove unused code
**NOTE**: This PR does not only include test-only updates; for example, it also fixes `astype` for binary ops.
Given the following pandas-on-Spark Series:
```python
>>> psser
0 [49]
1 [50]
2 [51]
dtype: object
```
before:
```python
>>> psser.astype(bool)
Traceback (most recent call last):
...
pyspark.sql.utils.AnalysisException: cannot resolve 'CAST(`0` AS BOOLEAN)' due to data type mismatch: cannot cast binary to boolean;
...
```
after:
```python
>>> psser.astype(bool)
0 True
1 True
2 True
dtype: bool
```
### Why are the changes needed?
To make the project healthier by improving coverage.
### Does this PR introduce _any_ user-facing change?
No.
### How was this patch tested?
Unittest.
Closes#33850 from itholic/SPARK-36531.
Authored-by: itholic <haejoon.lee@databricks.com>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
### What changes were proposed in this pull request?
This PR is a small followup of https://github.com/apache/spark/pull/33876 which proposes to use `datetime.tzinfo` instead of `datetime.tzname` to check whether timezone information is provided.
This way is consistent with other places such as:
9c5bcac61e/python/pyspark/sql/types.py (L182)
9c5bcac61e/python/pyspark/sql/types.py (L1662)
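A minimal standard-library sketch of the difference:
```python
import datetime

class NoNameTz(datetime.tzinfo):
    # A tzinfo subclass that does not override tzname(), which the stdlib allows.
    def utcoffset(self, dt):
        return datetime.timedelta(0)

dt = datetime.datetime(2021, 9, 3, tzinfo=NoNameTz())
print(dt.tzinfo is not None)  # True: the check used after this change never raises
# dt.tzname() would raise NotImplementedError here, which is what the old check hit.
```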
### Why are the changes needed?
In some cases, `datetime.tzname` can raise an exception (https://docs.python.org/3/library/datetime.html#datetime.datetime.tzname):
> ... raises an exception if the latter doesn’t return None or a string object,
I was able to reproduce this in Jenkins by setting `spark.sql.timestampType` to `TIMESTAMP_NTZ` by default:
```
======================================================================
ERROR: test_time_with_timezone (pyspark.sql.tests.test_serde.SerdeTests)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/jenkins/workspace/SparkPullRequestBuilder/python/pyspark/sql/tests/test_serde.py", line 92, in test_time_with_timezone
...
File "/usr/lib/pypy3/lib-python/3/datetime.py", line 979, in tzname
raise NotImplementedError("tzinfo subclass must override tzname()")
NotImplementedError: tzinfo subclass must override tzname()
```
### Does this PR introduce _any_ user-facing change?
No to end users because it has not been released yet.
This is rather a safeguard to prevent potential breakage.
### How was this patch tested?
Manually tested.
Closes#33918 from HyukjinKwon/SPARK-36626-followup.
Authored-by: Hyukjin Kwon <gurwls223@apache.org>
Signed-off-by: Max Gekk <max.gekk@gmail.com>
### What changes were proposed in this pull request?
Add cotangent support to DataFrame operations (e.g., `df.select(cot($"col"))`).
### Why are the changes needed?
Cotangent has been supported by Spark SQL since 2.3.0, but it cannot be called via DataFrame operations.
### Does this PR introduce _any_ user-facing change?
Yes, users can now call the cotangent function via DataFrame operations.
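A minimal sketch of the Python-side equivalent (assuming Spark 3.3+, where `cot` is also exposed in `pyspark.sql.functions`):
```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

spark.range(1, 4).select(F.cot(F.col("id")).alias("cot_id")).show()
```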
### How was this patch tested?
unit tests.
Closes#33906 from yutoacts/SPARK-36660.
Lead-authored-by: Yuto Akutsu <yuto.akutsu@nttdata.com>
Co-authored-by: Yuto Akutsu <87687356+yutoacts@users.noreply.github.com>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
### What changes were proposed in this pull request?
Add `versionadded` for an API added in Spark 3.3.0: `DataFrame.combine_first`.
### Why are the changes needed?
That documents the version of Spark which added the described API.
### Does this PR introduce _any_ user-facing change?
No user-facing behavior change. Only the documentation of the affected API now shows when it was introduced.
### How was this patch tested?
Manual test.
Closes#33901 from xinrong-databricks/version.
Authored-by: Xinrong Meng <xinrong.meng@databricks.com>
Signed-off-by: Takuya UESHIN <ueshin@databricks.com>
### What changes were proposed in this pull request?
Implement Series.cov
### Why are the changes needed?
That is supported in pandas. We should support that as well.
### Does this PR introduce _any_ user-facing change?
Yes. Series.cov can be used.
```python
>>> from pyspark.pandas.config import set_option, reset_option
>>> set_option("compute.ops_on_diff_frames", True)
>>> s1 = ps.Series([0.90010907, 0.13484424, 0.62036035])
>>> s2 = ps.Series([0.12528585, 0.26962463, 0.51111198])
>>> s1.cov(s2)
-0.016857626527158744
>>> reset_option("compute.ops_on_diff_frames")
```
### How was this patch tested?
Unit tests
Closes#33752 from dgd-contributor/SPARK-36401_Implement_Series.cov.
Authored-by: dgd-contributor <dgd_contributor@viettel.com.vn>
Signed-off-by: Takuya UESHIN <ueshin@databricks.com>
### What changes were proposed in this pull request?
This PR proposes to support `errors` argument for `ps.to_numeric` such as pandas does.
Note that we don't support `errors='ignore'` when `arg` is a pandas-on-Spark Series for now.
### Why are the changes needed?
We should match the behavior to pandas' as much as possible.
Also, in a [recent blog post](https://medium.com/chuck.connell.3/pandas-on-databricks-via-koalas-a-review-9876b0a92541), the author pointed out that we're missing this feature.
It seems like the kind of feature that is commonly used in data science.
### Does this PR introduce _any_ user-facing change?
Now the `errors` argument is available for `ps.to_numeric`.
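A minimal sketch (the output shown is indicative, following pandas' `errors='coerce'` semantics):
```python
>>> import pyspark.pandas as ps
>>> psser = ps.Series(["1", "2", "not_a_number"])
>>> ps.to_numeric(psser, errors="coerce")  # invalid parsing becomes NaN
0    1.0
1    2.0
2    NaN
dtype: float64
```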
### How was this patch tested?
Unittests.
Closes#33882 from itholic/SPARK-36609.
Lead-authored-by: itholic <haejoon.lee@databricks.com>
Co-authored-by: Hyukjin Kwon <gurwls223@gmail.com>
Co-authored-by: Haejoon Lee <44108233+itholic@users.noreply.github.com>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
### What changes were proposed in this pull request?
Update both `DataFrame.approxQuantile` and `DataFrameStatFunctions.approxQuantile` to support overloaded definitions when multiple columns are supplied.
### Why are the changes needed?
The current type hints don't support the multi-column signature, a form that was added in Spark 2.2 (see [the approxQuantile docs](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.sql.DataFrame.approxQuantile.html)). This change was also introduced to pyspark-stubs (https://github.com/zero323/pyspark-stubs/pull/552). zero323 asked me to open a PR for the upstream change.
### Does this PR introduce _any_ user-facing change?
This change only affects type hints - it brings the `approxQuantile` type hints up to date with the actual code.
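A hedged illustration of the multi-column form the updated hints now cover (data and column names are made up):
```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([(20, 1000.0), (30, 2000.0), (40, 3000.0)], ["age", "income"])

# Passing a list of columns returns one list of quantiles per input column.
quantiles = df.approxQuantile(["age", "income"], [0.25, 0.5, 0.75], 0.05)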
### How was this patch tested?
Ran `./dev/lint-python`.
Closes#33880 from carylee/master.
Authored-by: Cary Lee <cary@amperity.com>
Signed-off-by: zero323 <mszymkiewicz@gmail.com>
### What changes were proposed in this pull request?
This PR proposes to implement `TimestampNTZType` support in PySpark's `SparkSession.createDataFrame`, `DataFrame.toPandas`, Python UDFs, and pandas UDFs with and without Arrow.
### Why are the changes needed?
To complete `TimestampNTZType` support.
### Does this PR introduce _any_ user-facing change?
Yes.
- Users can now use the `TimestampNTZType` type in `SparkSession.createDataFrame`, `DataFrame.toPandas`, Python UDFs, and pandas UDFs with and without Arrow.
- If `spark.sql.timestampType` is configured to `TIMESTAMP_NTZ`, PySpark will infer a `datetime` without timezone as `TimestampNTZType`. If it has a timezone, it will be inferred as `TimestampType` in `SparkSession.createDataFrame` (see the sketch after this list).
- If `TimestampType` and `TimestampNTZType` conflict when merging an inferred schema, `TimestampType` takes precedence.
- If the type is `TimestampNTZType`, it is treated internally as an unknown timezone, computed with UTC (same as the JVM side), and not localized externally.
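A minimal sketch of the inference rules above (a hedged example; the printed schemas are what the description implies):
```python
import datetime
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
spark.conf.set("spark.sql.timestampType", "TIMESTAMP_NTZ")

# Naive datetime -> inferred as TimestampNTZType under TIMESTAMP_NTZ.
spark.createDataFrame([(datetime.datetime(2021, 9, 3, 12, 0),)], ["dt"]).printSchema()

# Timezone-aware datetime -> still inferred as TimestampType.
aware = datetime.datetime(2021, 9, 3, 12, 0, tzinfo=datetime.timezone.utc)
spark.createDataFrame([(aware,)], ["dt"]).printSchema()
```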
### How was this patch tested?
Manually tested and unittests were added.
Closes#33876 from HyukjinKwon/SPARK-36626.
Lead-authored-by: Hyukjin Kwon <gurwls223@apache.org>
Co-authored-by: Dominik Gehl <dog@open.ch>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
### What changes were proposed in this pull request?
Add Apache license headers to pandas API on Spark documents.
### Why are the changes needed?
Pandas API on Spark document sources do not have license headers, while the other docs do.
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?
`make html`
Closes#33871 from yoda-mon/add-license-header.
Authored-by: Leona Yoda <yodal@oss.nttdata.com>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
### What changes were proposed in this pull request?
Implement `DataFrame.combine_first`.
The PR is based on https://github.com/databricks/koalas/pull/1950. Thanks AishwaryaKalloli for the prototype.
### Why are the changes needed?
Updating null elements with the value in the same location in another DataFrame is a common use case.
That is supported in pandas. We should support that as well.
### Does this PR introduce _any_ user-facing change?
Yes. `DataFrame.combine_first` can be used.
```py
>>> ps.set_option("compute.ops_on_diff_frames", True)
>>> df1 = ps.DataFrame({'A': [None, 0], 'B': [None, 4]})
>>> df2 = ps.DataFrame({'A': [1, 1], 'B': [3, 3]})
>>> df1.combine_first(df2).sort_index()
A B
0 1.0 3.0
1 0.0 4.0
# Null values still persist if the location of that null value does not exist in other
>>> df1 = ps.DataFrame({'A': [None, 0], 'B': [4, None]})
>>> df2 = ps.DataFrame({'B': [3, 3], 'C': [1, 1]}, index=[1, 2])
>>> df1.combine_first(df2).sort_index()
A B C
0 NaN 4.0 NaN
1 0.0 3.0 1.0
2 NaN 3.0 1.0
>>> ps.reset_option("compute.ops_on_diff_frames")
```
### How was this patch tested?
Unit tests.
Closes#33714 from xinrong-databricks/df_combine_first.
Authored-by: Xinrong Meng <xinrong.meng@databricks.com>
Signed-off-by: Takuya UESHIN <ueshin@databricks.com>
### What changes were proposed in this pull request?
Change API doc for `UnivariateFeatureSelector`
### Why are the changes needed?
make the doc look better
### Does this PR introduce _any_ user-facing change?
yes, API doc change
### How was this patch tested?
Manually checked
Closes#33855 from huaxingao/ml_doc.
Authored-by: Huaxin Gao <huaxin_gao@apple.com>
Signed-off-by: Dongjoon Hyun <dhyun@apple.com>
### What changes were proposed in this pull request?
This PR proposes to re-enable the tests that were disabled because of behavior differences in pandas 1.3.
- The `inplace` argument for `CategoricalDtype` functions is deprecated as of pandas 1.3, and it seems to have a bug, so we manually created the expected results and test against them.
- Fixed `GroupBy.transform`, since it doesn't work properly for `CategoricalDtype`.
### Why are the changes needed?
We should enable the tests as much as possible even if pandas has a bug.
And we should follow the behavior of latest pandas.
### Does this PR introduce _any_ user-facing change?
Yes, `GroupBy.transform` now follows the behavior of the latest pandas.
### How was this patch tested?
Unittests.
Closes#33817 from itholic/SPARK-36537.
Authored-by: itholic <haejoon.lee@databricks.com>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
### What changes were proposed in this pull request?
This PR proposes improving test coverage for pandas-on-Spark DataFrame code base, which is written in `frame.py`.
This PR did the following to improve coverage:
- Add unittest for untested code
- Remove unused code
- Add arguments to some functions for testing
### Why are the changes needed?
To make the project healthier by improving coverage.
### Does this PR introduce _any_ user-facing change?
No.
### How was this patch tested?
Unittest.
Closes#33833 from itholic/SPARK-36505.
Authored-by: itholic <haejoon.lee@databricks.com>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
### What changes were proposed in this pull request?
This PR proposes to move distributed-sequence index implementation to SQL plan to leverage optimizations such as column pruning.
```python
import pyspark.pandas as ps
ps.set_option('compute.default_index_type', 'distributed-sequence')
ps.range(10).id.value_counts().to_frame().spark.explain()
```
**Before:**
```bash
== Physical Plan ==
AdaptiveSparkPlan isFinalPlan=false
+- Sort [count#51L DESC NULLS LAST], true, 0
+- Exchange rangepartitioning(count#51L DESC NULLS LAST, 200), ENSURE_REQUIREMENTS, [id=#70]
+- HashAggregate(keys=[id#37L], functions=[count(1)], output=[__index_level_0__#48L, count#51L])
+- Exchange hashpartitioning(id#37L, 200), ENSURE_REQUIREMENTS, [id=#67]
+- HashAggregate(keys=[id#37L], functions=[partial_count(1)], output=[id#37L, count#63L])
+- Project [id#37L]
+- Filter atleastnnonnulls(1, id#37L)
+- Scan ExistingRDD[__index_level_0__#36L,id#37L]
# ^^^ Base DataFrame created by the output RDD from zipWithIndex (and checkpointed)
```
**After:**
```bash
== Physical Plan ==
AdaptiveSparkPlan isFinalPlan=false
+- Sort [count#275L DESC NULLS LAST], true, 0
+- Exchange rangepartitioning(count#275L DESC NULLS LAST, 200), ENSURE_REQUIREMENTS, [id=#174]
+- HashAggregate(keys=[id#258L], functions=[count(1)])
+- HashAggregate(keys=[id#258L], functions=[partial_count(1)])
+- Filter atleastnnonnulls(1, id#258L)
+- Range (0, 10, step=1, splits=16)
# ^^^ Removed the Spark job execution for `zipWithIndex`
```
### Why are the changes needed?
To leverage optimization of SQL engine and avoid unnecessary shuffle to create default index.
### Does this PR introduce _any_ user-facing change?
No.
### How was this patch tested?
Unittests were added. Also, this PR will test all unittests in pandas API on Spark after switching the default index implementation to `distributed-sequence`.
Closes#33807 from HyukjinKwon/SPARK-36559.
Authored-by: Hyukjin Kwon <gurwls223@apache.org>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
### What changes were proposed in this pull request?
Revert 397b843890 and 5a48eb8d00
### Why are the changes needed?
As discussed in https://github.com/apache/spark/pull/33800#issuecomment-904140869, there is a correctness issue in the current implementation. Let's revert the code changes from branch-3.2 and fix it on the master branch later.
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?
CI tests
Closes#33819 from gengliangwang/revert-SPARK-34415.
Authored-by: Gengliang Wang <gengliang@apache.org>
Signed-off-by: Dongjoon Hyun <dongjoon@apache.org>
### What changes were proposed in this pull request?
This PR proposes to fix the behavior of `astype` for `CategoricalDtype` to follow pandas 1.3.
**Before:**
```python
>>> pcat
0 a
1 b
2 c
dtype: category
Categories (3, object): ['a', 'b', 'c']
>>> pcat.astype(CategoricalDtype(["b", "c", "a"]))
0 a
1 b
2 c
dtype: category
Categories (3, object): ['b', 'c', 'a']
```
**After:**
```python
>>> pcat
0 a
1 b
2 c
dtype: category
Categories (3, object): ['a', 'b', 'c']
>>> pcat.astype(CategoricalDtype(["b", "c", "a"]))
0 a
1 b
2 c
dtype: category
Categories (3, object): ['a', 'b', 'c'] # CategoricalDtype is not updated if dtype is the same
```
`CategoricalDtype` is treated as the same `dtype` if the unique values are the same.
```python
>>> pcat1 = pser.astype(CategoricalDtype(["b", "c", "a"]))
>>> pcat2 = pser.astype(CategoricalDtype(["a", "b", "c"]))
>>> pcat1.dtype == pcat2.dtype
True
```
### Why are the changes needed?
We should follow the latest pandas as much as possible.
### Does this PR introduce _any_ user-facing change?
Yes, the behavior is changed as shown in the example in the PR description.
### How was this patch tested?
Unittest
Closes#33757 from itholic/SPARK-36368.
Authored-by: itholic <haejoon.lee@databricks.com>
Signed-off-by: Takuya UESHIN <ueshin@databricks.com>
### What changes were proposed in this pull request?
This PR is followup for https://github.com/apache/spark/pull/33646 to add missing tests.
### Why are the changes needed?
Some tests are missing
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?
Unittest
Closes#33776 from itholic/SPARK-36388-followup.
Authored-by: itholic <haejoon.lee@databricks.com>
Signed-off-by: Takuya UESHIN <ueshin@databricks.com>
### What changes were proposed in this pull request?
This is a follow-up of #33687.
Use `LooseVersion` instead of `pkg_resources.parse_version`.
### Why are the changes needed?
In the previous PR, `pkg_resources.parse_version` was used, but we should use `LooseVersion` instead to be consistent in the code base.
### Does this PR introduce _any_ user-facing change?
No.
### How was this patch tested?
Existing tests.
Closes#33768 from ueshin/issues/SPARK-36370/LooseVersion.
Authored-by: Takuya UESHIN <ueshin@databricks.com>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
### What changes were proposed in this pull request?
Refactor the way `_builtin_table` is defined in the `python/pyspark/pandas/groupby.py` module.
pandas has recently refactored the way `_builtin_table` is imported; it is now part of the `pandas.core.common` module instead of being an attribute of the `pandas.core.base.SelectionMixin` class.
### Why are the changes needed?
This change is not strictly needed, but the current implementation redefines this table within PySpark, so any change to this table in the pandas library would need to be mirrored in the PySpark repository as well.
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?
Ran the following command successfully:
```sh
python/run-tests --testnames 'pyspark.pandas.tests.test_groupby'
```
Tests passed in 327 seconds
Closes#33687 from Cedric-Magnan/_builtin_table_from_pandas.
Authored-by: Cedric-Magnan <cedric.magnan@artefact.com>
Signed-off-by: Takuya UESHIN <ueshin@databricks.com>
### What changes were proposed in this pull request?
This PR proposes to fix `Series.astype` when converting datetime type to StringDtype, to match the behavior of pandas 1.3.
In pandas < 1.3,
```python
>>> pd.Series(["2020-10-27 00:00:01", None], name="datetime").astype("string")
0 2020-10-27 00:00:01
1 NaT
Name: datetime, dtype: string
```
This is changed to
```python
>>> pd.Series(["2020-10-27 00:00:01", None], name="datetime").astype("string")
0 2020-10-27 00:00:01
1 <NA>
Name: datetime, dtype: string
```
in pandas >= 1.3, so we follow the behavior of latest pandas.
### Why are the changes needed?
Because pandas-on-Spark always follow the behavior of latest pandas.
### Does this PR introduce _any_ user-facing change?
Yes, the behavior is changed to follow the latest pandas when converting datetime to nullable string (StringDtype).
### How was this patch tested?
Unittest passed
Closes#33735 from itholic/SPARK-36387.
Authored-by: itholic <haejoon.lee@databricks.com>
Signed-off-by: Takuya UESHIN <ueshin@databricks.com>
### What changes were proposed in this pull request?
Implement `Index.map`.
The PR is based on https://github.com/databricks/koalas/pull/2136. Thanks awdavidson for the prototype.
`map` of CategoricalIndex and DatetimeIndex will be implemented in separate PRs.
### Why are the changes needed?
Mapping values using input correspondence (a dict, Series, or function) is supported in pandas as [Index.map](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Index.map.html).
We shall support that as well.
### Does this PR introduce _any_ user-facing change?
Yes. `Index.map` is available now.
```py
>>> psidx = ps.Index([1, 2, 3])
>>> psidx.map({1: "one", 2: "two", 3: "three"})
Index(['one', 'two', 'three'], dtype='object')
>>> psidx.map(lambda id: "{id} + 1".format(id=id))
Index(['1 + 1', '2 + 1', '3 + 1'], dtype='object')
>>> pser = pd.Series(["one", "two", "three"], index=[1, 2, 3])
>>> psidx.map(pser)
Index(['one', 'two', 'three'], dtype='object')
```
### How was this patch tested?
Unit tests.
Closes#33694 from xinrong-databricks/index_map.
Authored-by: Xinrong Meng <xinrong.meng@databricks.com>
Signed-off-by: Takuya UESHIN <ueshin@databricks.com>
### What changes were proposed in this pull request?
This patch supports dynamic gap duration in session window.
### Why are the changes needed?
The gap duration used in session windows is currently a static value. To support more complex usage, it is better to support a dynamic gap duration, determined by looking at the current data. For example, in our use case, we may have a different gap depending on a certain column in the input rows.
### Does this PR introduce _any_ user-facing change?
Yes, users can specify dynamic gap duration.
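A hedged batch-mode sketch (assuming the Python `session_window` also accepts a Column for the gap; the data and column names are illustrative):
```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

events = spark.createDataFrame(
    [("2021-09-01 00:00:10", "click"), ("2021-09-01 00:00:20", "purchase")],
    ["time", "event_type"],
).withColumn("time", F.col("time").cast("timestamp"))

# Gap duration computed from the current row instead of a static value.
gap = F.when(F.col("event_type") == "purchase", F.lit("30 seconds")).otherwise(F.lit("5 seconds"))

sessions = events.groupBy(F.session_window("time", gap), "event_type").count()
```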
### How was this patch tested?
Modified existing tests and new test.
Closes#33691 from viirya/dynamic-session-window-gap.
Authored-by: Liang-Chi Hsieh <viirya@gmail.com>
Signed-off-by: Jungtaek Lim <kabhwan.opensource@gmail.com>
### What changes were proposed in this pull request?
This PR proposes to mention pandas API on Spark at Spark overview pages.
### Why are the changes needed?
To mention the new component.
### Does this PR introduce _any_ user-facing change?
Yes, it changes the documentation.
### How was this patch tested?
Manually tested by MD editor. For `docs/index.md`, I manually checked by building the docs by `SKIP_API=1 bundle exec jekyll serve --watch`.
Closes#33699 from HyukjinKwon/SPARK-36474.
Authored-by: Hyukjin Kwon <gurwls223@apache.org>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
### What changes were proposed in this pull request?
This PR proposes to fix `RollingGroupBy` and `ExpandingGroupBy` to follow latest pandas behavior.
`RollingGroupBy` and `ExpandingGroupBy` no longer return the grouped-by column in the values as of pandas 1.3.
Before:
```python
>>> df = pd.DataFrame({"A": [1, 1, 2, 3], "B": [0, 1, 2, 3]})
>>> df.groupby("A").rolling(2).sum()
A B
A
1 0 NaN NaN
1 2.0 1.0
2 2 NaN NaN
3 3 NaN NaN
```
After:
```python
>>> df = pd.DataFrame({"A": [1, 1, 2, 3], "B": [0, 1, 2, 3]})
>>> df.groupby("A").rolling(2).sum()
B
A
1 0 NaN
1 1.0
2 2 NaN
3 3 NaN
```
### Why are the changes needed?
We should follow the behavior of pandas as much as possible.
### Does this PR introduce _any_ user-facing change?
Yes, the result of `RollingGroupBy` and `ExpandingGroupBy` is changed as described above.
### How was this patch tested?
Unit tests.
Closes#33646 from itholic/SPARK-36388.
Authored-by: itholic <haejoon.lee@databricks.com>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
### What changes were proposed in this pull request?
This PR proposes fixing the `Index.union` to follow the behavior of pandas 1.3.
Before:
```python
>>> ps_idx1 = ps.Index([1, 1, 1, 1, 1, 2, 2])
>>> ps_idx2 = ps.Index([1, 1, 2, 2, 2, 2, 2])
>>> ps_idx1.union(ps_idx2)
Int64Index([1, 1, 1, 1, 1, 2, 2], dtype='int64')
```
After:
```python
>>> ps_idx1 = ps.Index([1, 1, 1, 1, 1, 2, 2])
>>> ps_idx2 = ps.Index([1, 1, 2, 2, 2, 2, 2])
>>> ps_idx1.union(ps_idx2)
Int64Index([1, 1, 1, 1, 1, 2, 2, 2, 2, 2], dtype='int64')
```
This bug is fixed in https://github.com/pandas-dev/pandas/issues/36289.
### Why are the changes needed?
We should follow the behavior of pandas as much as possible.
### Does this PR introduce _any_ user-facing change?
Yes, the result will change for some cases that have duplicate values.
### How was this patch tested?
Unit test.
Closes#33634 from itholic/SPARK-36369.
Authored-by: itholic <haejoon.lee@databricks.com>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
Signed-off-by: Weichen Xu <weichen.xu@databricks.com>
### What changes were proposed in this pull request?
Support getting the standard deviation of metrics for each paramMap from `CrossValidatorModel`.
### Why are the changes needed?
So that in MLflow autologging, we can log the standard deviation of metrics, which is useful.
### Does this PR introduce _any_ user-facing change?
Yes.
`CrossValidatorModel` adds a public attribute `stdMetrics`, which holds the standard deviation of the metrics for each paramMap.
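A minimal end-to-end sketch of reading the new attribute (the tiny dataset and estimator choice are illustrative):
```python
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.evaluation import BinaryClassificationEvaluator
from pyspark.ml.linalg import Vectors
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
train_df = spark.createDataFrame(
    [(Vectors.dense([0.0]), 0.0), (Vectors.dense([1.0]), 1.0)] * 10,
    ["features", "label"],
)

lr = LogisticRegression()
grid = ParamGridBuilder().addGrid(lr.regParam, [0.01, 0.1]).build()
cv = CrossValidator(estimator=lr, estimatorParamMaps=grid,
                    evaluator=BinaryClassificationEvaluator(), numFolds=2)

cv_model = cv.fit(train_df)
print(cv_model.avgMetrics)  # mean metric per paramMap
print(cv_model.stdMetrics)  # standard deviation per paramMap, aligned with avgMetrics
```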
### How was this patch tested?
Unit test.
Closes#33652 from WeichenXu123/add_std_metric.
Authored-by: Weichen Xu <weichen.xu@databricks.com>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
### What changes were proposed in this pull request?
This PR adds a type hint for `TaskContext.cpus`, added in SPARK-36173 (#33385).
### Why are the changes needed?
To comply with Project Zen.
### Does this PR introduce _any_ user-facing change?
No.
### How was this patch tested?
Confirmed the type hint works with IntelliJ IDEA.
Closes#33645 from sarutak/taskcontext-pyi.
Authored-by: Kousuke Saruta <sarutak@oss.nttdata.com>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
In stage-level resource scheduling, the allocated 3rd-party resources can be obtained in TaskContext using the resources() interface; however, there is no API to get how many CPUs are allocated for the task. This adds a cpus() interface to TaskContext to complement resources(). Although the task CPU request can be obtained from the resource profile, it's more convenient to get it inside the task code without needing to pass the profile from the driver side to the executor side.
### What changes were proposed in this pull request?
Add cpus() interface in TaskContext and modify relevant code.
### Why are the changes needed?
TaskContext has resources() to get the 3rd-party resources allocated, but there is no API to get the CPUs allocated for the task.
### Does this PR introduce _any_ user-facing change?
Yes, this adds a cpus() interface to TaskContext; a usage sketch follows.
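A minimal usage sketch (assumes a build that includes this change):
```python
from pyspark import TaskContext
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
sc = spark.sparkContext

def per_partition(iterator):
    # Number of CPUs allocated to this task, complementing TaskContext.resources().
    yield TaskContext.get().cpus()

print(sc.parallelize(range(4), 2).mapPartitions(per_partition).collect())
```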
### How was this patch tested?
Unit tests
Closes#33385 from xwu99/taskcontext-cpus.
Lead-authored-by: Wu, Xiaochang <xiaochang.wu@intel.com>
Co-authored-by: Xiaochang Wu <xiaochang.wu@intel.com>
Signed-off-by: Mridul Muralidharan <mridul<at>gmail.com>
### What changes were proposed in this pull request?
This PR is a follow-up of https://github.com/apache/spark/pull/32964 to improve the warning message.
### Why are the changes needed?
To improve the warning message.
### Does this PR introduce _any_ user-facing change?
The warning is changed from "Deprecated in 3.2, Use `spark.to_spark_io` instead." to "Deprecated in 3.2, Use `DataFrame.spark.to_spark_io` instead."
### How was this patch tested?
Manually run `dev/lint-python`
Closes#33631 from itholic/SPARK-35811-followup.
Authored-by: itholic <haejoon.lee@databricks.com>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
### What changes were proposed in this pull request?
Better error messages for DataTypeOps against lists.
### Why are the changes needed?
Currently, DataTypeOps against lists throw a Py4JJavaError; we shall throw a TypeError with a proper message instead.
### Does this PR introduce _any_ user-facing change?
Yes. A TypeError message will be shown rather than a Py4JJavaError.
From:
```py
>>> import pyspark.pandas as ps
>>> ps.Series([1, 2, 3]) > [3, 2, 1]
Traceback (most recent call last):
...
py4j.protocol.Py4JJavaError: An error occurred while calling o107.gt.
: java.lang.RuntimeException: Unsupported literal type class java.util.ArrayList [3, 2, 1]
...
```
To:
```py
>>> import pyspark.pandas as ps
>>> ps.Series([1, 2, 3]) > [3, 2, 1]
Traceback (most recent call last):
...
TypeError: The operation can not be applied to list.
```
### How was this patch tested?
Unit tests.
Closes#33581 from xinrong-databricks/data_type_ops_list.
Authored-by: Xinrong Meng <xinrong.meng@databricks.com>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
### What changes were proposed in this pull request?
Disable tests that fail due to the incompatible behavior of pandas 1.3.
### Why are the changes needed?
pandas 1.3 has been released.
There are some behavior changes that we should follow, but we're not ready for them yet.
### Does this PR introduce _any_ user-facing change?
No.
### How was this patch tested?
Disabled some tests related to the behavior change.
Closes#33598 from ueshin/issues/SPARK-36367/disable_tests.
Authored-by: Takuya UESHIN <ueshin@databricks.com>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>