### What changes were proposed in this pull request?
This PR adds the PySpark API document of `session_window`.
The docstring of the function doesn't comply with the numpydoc format, so this PR also fixes it.
Further, the API document of `window` doesn't have a `Parameters` section, so one is also added in this PR.
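For context, here is a minimal usage sketch of the documented function (the sample data and column names are illustrative, not taken from this PR):
```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

df = spark.createDataFrame(
    [("2021-01-01 00:00:00", "u1"), ("2021-01-01 00:03:00", "u1")],
    ["time", "user"],
).withColumn("time", F.to_timestamp("time"))

# Rows whose timestamps fall within a 5-minute gap of each other
# end up in the same session window for the same user.
df.groupBy(F.session_window("time", "5 minutes"), "user").count().show(truncate=False)
```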
### Why are the changes needed?
To provide PySpark users with the API document of the newly added function.
### Does this PR introduce _any_ user-facing change?
No.
### How was this patch tested?
Ran `make html` in `python/docs` and got the following docs.
[window]
![time-window-python-doc-after](https://user-images.githubusercontent.com/4736016/134963797-ce25b268-20ca-48e3-ac8d-cbcbd85ebb3e.png)
[session_window]
![session-window-python-doc-after](https://user-images.githubusercontent.com/4736016/134963853-dd9d8417-139b-41ee-9924-14544b1a91af.png)
Closes #34118 from sarutak/python-session-window-doc.
Authored-by: Kousuke Saruta <sarutak@oss.nttdata.com>
Signed-off-by: Jungtaek Lim <kabhwan.opensource@gmail.com>
(cherry picked from commit 5a32e41e9c)
Signed-off-by: Jungtaek Lim <kabhwan.opensource@gmail.com>
### What changes were proposed in this pull request?
Fix `pop` of Categorical Series to be consistent with the latest pandas (1.3.2) behavior.
This is a backport of https://github.com/apache/spark/pull/34052.
### Why are the changes needed?
As reported in https://github.com/databricks/koalas/issues/2198, pandas API on Spark behaves differently from pandas for `pop` on a Categorical Series.
### Does this PR introduce _any_ user-facing change?
Yes, the result of `pop` on a Categorical Series changes.
#### From
```py
>>> psser = ps.Series(["a", "b", "c", "a"], dtype="category")
>>> psser
0 a
1 b
2 c
3 a
dtype: category
Categories (3, object): ['a', 'b', 'c']
>>> psser.pop(0)
0
>>> psser
1 b
2 c
3 a
dtype: category
Categories (3, object): ['a', 'b', 'c']
>>> psser.pop(3)
0
>>> psser
1 b
2 c
dtype: category
Categories (3, object): ['a', 'b', 'c']
```
#### To
```py
>>> psser = ps.Series(["a", "b", "c", "a"], dtype="category")
>>> psser
0 a
1 b
2 c
3 a
dtype: category
Categories (3, object): ['a', 'b', 'c']
>>> psser.pop(0)
'a'
>>> psser
1 b
2 c
3 a
dtype: category
Categories (3, object): ['a', 'b', 'c']
>>> psser.pop(3)
'a'
>>> psser
1 b
2 c
dtype: category
Categories (3, object): ['a', 'b', 'c']
```
### How was this patch tested?
Unit tests.
Closes #34063 from xinrong-databricks/backport_cat_pop.
Authored-by: Xinrong Meng <xinrong.meng@databricks.com>
Signed-off-by: Takuya UESHIN <ueshin@databricks.com>
### What changes were proposed in this pull request?
Fix `Series.update` when the other Series belongs to the same frame.
Also add a test for updating a Series from a different frame.
### Why are the changes needed?
`Series.update` fails when the other Series belongs to the same frame.
Pandas behavior:
``` python
>>> pdf = pd.DataFrame(
... {"a": [None, 2, 3, 4, 5, 6, 7, 8, None], "b": [None, 5, None, 3, 2, 1, None, 0, 0]},
... )
>>> pdf
     a    b
0  NaN  NaN
1  2.0  5.0
2  3.0  NaN
3  4.0  3.0
4  5.0  2.0
5  6.0  1.0
6  7.0  NaN
7  8.0  0.0
8  NaN  0.0
>>> pdf.a.update(pdf.b)
>>> pdf
     a    b
0  NaN  NaN
1  5.0  5.0
2  3.0  NaN
3  3.0  3.0
4  2.0  2.0
5  1.0  1.0
6  7.0  NaN
7  0.0  0.0
8  0.0  0.0
```
### Does this PR introduce _any_ user-facing change?
Before
```python
>>> psdf = ps.DataFrame(
... {"a": [None, 2, 3, 4, 5, 6, 7, 8, None], "b": [None, 5, None, 3, 2, 1, None, 0, 0]},
... )
>>> psdf.a.update(psdf.b)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/dgd/spark/python/pyspark/pandas/series.py", line 4551, in update
    combined = combine_frames(self._psdf, other._psdf, how="leftouter")
  File "/Users/dgd/spark/python/pyspark/pandas/utils.py", line 141, in combine_frames
    assert not same_anchor(
AssertionError: We don't need to combine. `this` and `that` are same.
>>>
```
After
```python
>>> psdf = ps.DataFrame(
... {"a": [None, 2, 3, 4, 5, 6, 7, 8, None], "b": [None, 5, None, 3, 2, 1, None, 0, 0]},
... )
>>> psdf.a.update(psdf.b)
>>> psdf
     a    b
0  NaN  NaN
1  5.0  5.0
2  3.0  NaN
3  3.0  3.0
4  2.0  2.0
5  1.0  1.0
6  7.0  NaN
7  0.0  0.0
8  0.0  0.0
>>>
```
### How was this patch tested?
Unit tests.
Closes #33968 from dgd-contributor/SPARK-36722_fix_update_same_anchor.
Authored-by: dgd-contributor <dgd_contributor@viettel.com.vn>
Signed-off-by: Takuya UESHIN <ueshin@databricks.com>
(cherry picked from commit c15072cc73)
Signed-off-by: Takuya UESHIN <ueshin@databricks.com>
### What changes were proposed in this pull request?
Fix dropping all columns of a DataFrame
### Why are the changes needed?
When dropping all columns of a pandas-on-Spark DataFrame, a `ValueError` is raised,
whereas in pandas an empty DataFrame preserving the index is returned.
We should follow pandas.
### Does this PR introduce _any_ user-facing change?
Yes.
From
```py
>>> psdf = ps.DataFrame({"x": [1, 2], "y": [3, 4], "z": [5, 6]})
>>> psdf
   x  y  z
0  1  3  5
1  2  4  6
>>> psdf.drop(['x', 'y', 'z'])
Traceback (most recent call last):
...
ValueError: not enough values to unpack (expected 2, got 0)
```
To
```py
>>> psdf = ps.DataFrame({"x": [1, 2], "y": [3, 4], "z": [5, 6]})
>>> psdf
   x  y  z
0  1  3  5
1  2  4  6
>>> psdf.drop(['x', 'y', 'z'])
Empty DataFrame
Columns: []
Index: [0, 1]
```
### How was this patch tested?
Unit tests.
Closes #33938 from xinrong-databricks/frame_drop_col.
Authored-by: Xinrong Meng <xinrong.meng@databricks.com>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
(cherry picked from commit 33bb7b39e9)
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
### What changes were proposed in this pull request?
This PR proposes improving test coverage for the pandas-on-Spark data types & GroupBy code base, which lives in `data_type_ops/*.py` and `groupby.py`, respectively.
This PR did the following to improve coverage:
- Add unittest for untested code
- Fix unit tests that were not testing properly
- Remove unused code
**NOTE**: This PR does not only include test-only updates; for example, it also fixes `astype` for binary ops.
Given the following pandas-on-Spark Series:
```python
>>> psser
0 [49]
1 [50]
2 [51]
dtype: object
```
before:
```python
>>> psser.astype(bool)
Traceback (most recent call last):
...
pyspark.sql.utils.AnalysisException: cannot resolve 'CAST(`0` AS BOOLEAN)' due to data type mismatch: cannot cast binary to boolean;
...
```
after:
```python
>>> psser.astype(bool)
0 True
1 True
2 True
dtype: bool
```
### Why are the changes needed?
To make the project healthier by improving coverage.
### Does this PR introduce _any_ user-facing change?
No.
### How was this patch tested?
Unittest.
Closes #33850 from itholic/SPARK-36531.
Authored-by: itholic <haejoon.lee@databricks.com>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
(cherry picked from commit 71dbd03fbe)
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
### What changes were proposed in this pull request?
Update both `DataFrame.approxQuantile` and `DataFrameStatFunctions.approxQuantile` to support overloaded definitions when multiple columns are supplied.
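For illustration, the overloads look roughly like this stub-style sketch (simplified; the actual type stubs in the PR may differ in detail):
```python
# Stub-style sketch: `...` bodies as in a .pyi file, not executable logic.
from typing import List, Tuple, Union, overload

class DataFrame:
    @overload
    def approxQuantile(
        self, col: str, probabilities: List[float], relativeError: float
    ) -> List[float]: ...
    @overload
    def approxQuantile(
        self,
        col: Union[List[str], Tuple[str, ...]],
        probabilities: List[float],
        relativeError: float,
    ) -> List[List[float]]: ...
```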
### Why are the changes needed?
The current type hints don't support the multi-column signature, a form that was added in Spark 2.2 (see [the approxQuantile docs](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.sql.DataFrame.approxQuantile.html)). This change was also introduced to pyspark-stubs (https://github.com/zero323/pyspark-stubs/pull/552). zero323 asked me to open a PR for the upstream change.
### Does this PR introduce _any_ user-facing change?
This change only affects type hints; it brings the `approxQuantile` type hints up to date with the actual code.
### How was this patch tested?
Ran `./dev/lint-python`.
Closes #33880 from carylee/master.
Authored-by: Cary Lee <cary@amperity.com>
Signed-off-by: zero323 <mszymkiewicz@gmail.com>
(cherry picked from commit 37f5ab07fa)
Signed-off-by: zero323 <mszymkiewicz@gmail.com>
### What changes were proposed in this pull request?
This PR is a follow-up of https://github.com/apache/spark/pull/33646 to add the missing tests.
### Why are the changes needed?
Some tests are missing.
### Does this PR introduce _any_ user-facing change?
No.
### How was this patch tested?
Unittest.
Closes #33776 from itholic/SPARK-36388-followup.
Authored-by: itholic <haejoon.lee@databricks.com>
Signed-off-by: Takuya UESHIN <ueshin@databricks.com>
(cherry picked from commit c91ae544fd)
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
### What changes were proposed in this pull request?
Change API doc for `UnivariateFeatureSelector`
### Why are the changes needed?
Make the doc look better.
### Does this PR introduce _any_ user-facing change?
Yes, API doc change.
### How was this patch tested?
Manually checked
Closes #33855 from huaxingao/ml_doc.
Authored-by: Huaxin Gao <huaxin_gao@apple.com>
Signed-off-by: Dongjoon Hyun <dhyun@apple.com>
(cherry picked from commit 15e42b4442)
Signed-off-by: Dongjoon Hyun <dhyun@apple.com>
### What changes were proposed in this pull request?
This PR proposes to re-enable the tests that were disabled because of the different behavior of pandas 1.3.
- The `inplace` argument for `CategoricalDtype` functions is deprecated as of pandas 1.3, and it seems to have a bug, so we manually created the expected results and test against them.
- Fixed `GroupBy.transform` since it doesn't work properly for `CategoricalDtype`.
### Why are the changes needed?
We should enable the tests as much as possible even if pandas has a bug, and we should follow the behavior of the latest pandas.
### Does this PR introduce _any_ user-facing change?
Yes, `GroupBy.transform` now follows the behavior of the latest pandas.
### How was this patch tested?
Unittests.
Closes #33817 from itholic/SPARK-36537.
Authored-by: itholic <haejoon.lee@databricks.com>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
(cherry picked from commit fe486185c4)
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
### What changes were proposed in this pull request?
This PR proposes to fix the behavior of `astype` for `CategoricalDtype` to follow pandas 1.3.
**Before:**
```python
>>> pcat
0 a
1 b
2 c
dtype: category
Categories (3, object): ['a', 'b', 'c']
>>> pcat.astype(CategoricalDtype(["b", "c", "a"]))
0 a
1 b
2 c
dtype: category
Categories (3, object): ['b', 'c', 'a']
```
**After:**
```python
>>> pcat
0 a
1 b
2 c
dtype: category
Categories (3, object): ['a', 'b', 'c']
>>> pcat.astype(CategoricalDtype(["b", "c", "a"]))
0 a
1 b
2 c
dtype: category
Categories (3, object): ['a', 'b', 'c'] # CategoricalDtype is not updated if dtype is the same
```
A `CategoricalDtype` is treated as the same `dtype` if the unique values are the same.
```python
>>> pcat1 = pser.astype(CategoricalDtype(["b", "c", "a"]))
>>> pcat2 = pser.astype(CategoricalDtype(["a", "b", "c"]))
>>> pcat1.dtype == pcat2.dtype
True
```
### Why are the changes needed?
We should follow the latest pandas as much as possible.
### Does this PR introduce _any_ user-facing change?
Yes, the behavior is changed as in the example in the PR description.
### How was this patch tested?
Unittest.
Closes #33757 from itholic/SPARK-36368.
Authored-by: itholic <haejoon.lee@databricks.com>
Signed-off-by: Takuya UESHIN <ueshin@databricks.com>
(cherry picked from commit f2e593bcf1)
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
### What changes were proposed in this pull request?
This PR proposes to fix `Series.astype` when converting a datetime type to `StringDtype`, to match the behavior of pandas 1.3.
In pandas < 1.3,
```python
>>> pd.Series(["2020-10-27 00:00:01", None], name="datetime").astype("string")
0 2020-10-27 00:00:01
1 NaT
Name: datetime, dtype: string
```
This is changed to
```python
>>> pd.Series(["2020-10-27 00:00:01", None], name="datetime").astype("string")
0 2020-10-27 00:00:01
1 <NA>
Name: datetime, dtype: string
```
in pandas >= 1.3, so we follow the behavior of the latest pandas.
### Why are the changes needed?
Because pandas-on-Spark always follows the behavior of the latest pandas.
### Does this PR introduce _any_ user-facing change?
Yes, the behavior is changed to that of the latest pandas when converting datetime to nullable string (`StringDtype`).
### How was this patch tested?
Unittest passed.
Closes #33735 from itholic/SPARK-36387.
Authored-by: itholic <haejoon.lee@databricks.com>
Signed-off-by: Takuya UESHIN <ueshin@databricks.com>
(cherry picked from commit c0441bb7e8)
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
### What changes were proposed in this pull request?
This PR proposes to fix `RollingGroupBy` and `ExpandingGroupBy` to follow the latest pandas behavior.
As of pandas 1.3, `RollingGroupBy` and `ExpandingGroupBy` no longer return the grouped-by column in the values.
Before:
```python
>>> df = pd.DataFrame({"A": [1, 1, 2, 3], "B": [0, 1, 2, 3]})
>>> df.groupby("A").rolling(2).sum()
       A    B
A
1 0  NaN  NaN
  1  2.0  1.0
2 2  NaN  NaN
3 3  NaN  NaN
```
After:
```python
>>> df = pd.DataFrame({"A": [1, 1, 2, 3], "B": [0, 1, 2, 3]})
>>> df.groupby("A").rolling(2).sum()
       B
A
1 0  NaN
  1  1.0
2 2  NaN
3 3  NaN
```
### Why are the changes needed?
We should follow the behavior of pandas as much as possible.
### Does this PR introduce _any_ user-facing change?
Yes, the results of `RollingGroupBy` and `ExpandingGroupBy` are changed as described above.
### How was this patch tested?
Unit tests.
Closes #33646 from itholic/SPARK-36388.
Authored-by: itholic <haejoon.lee@databricks.com>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
(cherry picked from commit b8508f4876)
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
### What changes were proposed in this pull request?
Disable the tests that fail because of the incompatible behavior of pandas 1.3.
### Why are the changes needed?
pandas 1.3 has been released. There are some behavior changes and we should follow them, but we are not ready yet.
### Does this PR introduce _any_ user-facing change?
No.
### How was this patch tested?
Disabled some tests related to the behavior change.
Closes #33598 from ueshin/issues/SPARK-36367/disable_tests.
Authored-by: Takuya UESHIN <ueshin@databricks.com>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
(cherry picked from commit 8cb9cf39b6)
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
### What changes were proposed in this pull request?
This PR proposes improving test coverage for pandas-on-Spark DataFrame code base, which is written in `frame.py`.
This PR did the following to improve coverage:
- Add unittest for untested code
- Remove unused code
- Add arguments to some functions for testing
### Why are the changes needed?
To make the project healthier by improving coverage.
### Does this PR introduce _any_ user-facing change?
No.
### How was this patch tested?
Unittest.
Closes #33833 from itholic/SPARK-36505.
Authored-by: itholic <haejoon.lee@databricks.com>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
(cherry picked from commit 97e7d6e667)
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
### What changes were proposed in this pull request?
This fixed Python linter failure.
### Why are the changes needed?
```
flake8 checks failed:
./python/pyspark/ml/tests/test_tuning.py:21:1: F401 'numpy as np' imported but unused
import numpy as np
F401 'numpy as np' imported but unused
Error: Process completed with exit code 1.
```
### Does this PR introduce _any_ user-facing change?
No.
### How was this patch tested?
Pass the GitHub Action Linter job.
Closes #33841 from dongjoon-hyun/unused_import.
Authored-by: Dongjoon Hyun <dongjoon@apache.org>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
### What changes were proposed in this pull request?
This PR proposes to move distributed-sequence index implementation to SQL plan to leverage optimizations such as column pruning.
```python
import pyspark.pandas as ps
ps.set_option('compute.default_index_type', 'distributed-sequence')
ps.range(10).id.value_counts().to_frame().spark.explain()
```
**Before:**
```bash
== Physical Plan ==
AdaptiveSparkPlan isFinalPlan=false
+- Sort [count#51L DESC NULLS LAST], true, 0
   +- Exchange rangepartitioning(count#51L DESC NULLS LAST, 200), ENSURE_REQUIREMENTS, [id=#70]
      +- HashAggregate(keys=[id#37L], functions=[count(1)], output=[__index_level_0__#48L, count#51L])
         +- Exchange hashpartitioning(id#37L, 200), ENSURE_REQUIREMENTS, [id=#67]
            +- HashAggregate(keys=[id#37L], functions=[partial_count(1)], output=[id#37L, count#63L])
               +- Project [id#37L]
                  +- Filter atleastnnonnulls(1, id#37L)
                     +- Scan ExistingRDD[__index_level_0__#36L,id#37L]
# ^^^ Base DataFrame created by the output RDD from zipWithIndex (and checkpointed)
```
**After:**
```bash
== Physical Plan ==
AdaptiveSparkPlan isFinalPlan=false
+- Sort [count#275L DESC NULLS LAST], true, 0
   +- Exchange rangepartitioning(count#275L DESC NULLS LAST, 200), ENSURE_REQUIREMENTS, [id=#174]
      +- HashAggregate(keys=[id#258L], functions=[count(1)])
         +- HashAggregate(keys=[id#258L], functions=[partial_count(1)])
            +- Filter atleastnnonnulls(1, id#258L)
               +- Range (0, 10, step=1, splits=16)
# ^^^ Removed the Spark job execution for `zipWithIndex`
```
### Why are the changes needed?
To leverage optimization of SQL engine and avoid unnecessary shuffle to create default index.
### Does this PR introduce _any_ user-facing change?
No.
### How was this patch tested?
Unittests were added. Also, this PR runs all the unittests in pandas API on Spark with the default index implementation switched to `distributed-sequence`.
Closes #33807 from HyukjinKwon/SPARK-36559.
Authored-by: Hyukjin Kwon <gurwls223@apache.org>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
(cherry picked from commit 93cec49212)
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
### What changes were proposed in this pull request?
Revert 397b843890 and 5a48eb8d00
### Why are the changes needed?
As discussed in https://github.com/apache/spark/pull/33800#issuecomment-904140869, there is a correctness issue in the current implementation. Let's revert the code changes from branch-3.2 and fix it on the master branch later.
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?
CI tests.
Closes #33819 from gengliangwang/revert-SPARK-34415.
Authored-by: Gengliang Wang <gengliang@apache.org>
Signed-off-by: Dongjoon Hyun <dongjoon@apache.org>
(cherry picked from commit de932f51ce)
Signed-off-by: Dongjoon Hyun <dongjoon@apache.org>
### What changes were proposed in this pull request?
This is a follow-up of #33687.
Use `LooseVersion` instead of `pkg_resources.parse_version`.
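A minimal sketch of the version-guard pattern this PR standardizes on (the guarded branches are placeholders, not code from this PR):
```python
from distutils.version import LooseVersion

import pandas as pd

if LooseVersion(pd.__version__) >= LooseVersion("1.3"):
    # pandas >= 1.3 code path (placeholder)
    pass
else:
    # pandas < 1.3 code path (placeholder)
    pass
```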
### Why are the changes needed?
In the previous PR, `pkg_resources.parse_version` was used, but we should use `LooseVersion` instead to be consistent in the code base.
### Does this PR introduce _any_ user-facing change?
No.
### How was this patch tested?
Existing tests.
Closes #33768 from ueshin/issues/SPARK-36370/LooseVersion.
Authored-by: Takuya UESHIN <ueshin@databricks.com>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
(cherry picked from commit 7fb8ea319e)
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
### What changes were proposed in this pull request?
This PR refactors the way `_builtin_table` is defined in the `python/pyspark/pandas/groupby.py` module.
pandas has recently refactored the way `_builtin_table` is imported; it is now part of the `pandas.core.common` module instead of being an attribute of the `pandas.core.base.SelectionMixin` class.
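A hedged sketch of the resulting conditional import, based on the module paths named above (the exact version guard in the PR may differ):
```python
from distutils.version import LooseVersion

import pandas as pd

if LooseVersion(pd.__version__) >= LooseVersion("1.3.0"):
    # pandas >= 1.3 exposes the table in pandas.core.common
    from pandas.core.common import _builtin_table  # type: ignore[attr-defined]
else:
    # older pandas keeps it as a class attribute
    from pandas.core.base import SelectionMixin

    _builtin_table = SelectionMixin._builtin_table  # type: ignore[attr-defined]
```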
### Why are the changes needed?
This change is not strictly needed, but the current implementation redefines this table within PySpark, so any change to this table in the pandas library would also need to be mirrored in the PySpark repository.
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?
Ran the following command successfully:
```sh
python/run-tests --testnames 'pyspark.pandas.tests.test_groupby'
```
Tests passed in 327 seconds.
Closes #33687 from Cedric-Magnan/_builtin_table_from_pandas.
Authored-by: Cedric-Magnan <cedric.magnan@artefact.com>
Signed-off-by: Takuya UESHIN <ueshin@databricks.com>
(cherry picked from commit 964dfe254f)
Signed-off-by: Takuya UESHIN <ueshin@databricks.com>
### What changes were proposed in this pull request?
Implement `Index.map`.
The PR is based on https://github.com/databricks/koalas/pull/2136. Thanks awdavidson for the prototype.
`map` of CategoricalIndex and DatetimeIndex will be implemented in separate PRs.
### Why are the changes needed?
Mapping values using input correspondence (a dict, Series, or function) is supported in pandas as [Index.map](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Index.map.html).
We shall also support that.
### Does this PR introduce _any_ user-facing change?
Yes. `Index.map` is available now.
```py
>>> psidx = ps.Index([1, 2, 3])
>>> psidx.map({1: "one", 2: "two", 3: "three"})
Index(['one', 'two', 'three'], dtype='object')
>>> psidx.map(lambda id: "{id} + 1".format(id=id))
Index(['1 + 1', '2 + 1', '3 + 1'], dtype='object')
>>> pser = pd.Series(["one", "two", "three"], index=[1, 2, 3])
>>> psidx.map(pser)
Index(['one', 'two', 'three'], dtype='object')
```
### How was this patch tested?
Unit tests.
Closes #33694 from xinrong-databricks/index_map.
Authored-by: Xinrong Meng <xinrong.meng@databricks.com>
Signed-off-by: Takuya UESHIN <ueshin@databricks.com>
(cherry picked from commit 4dcd746025)
Signed-off-by: Takuya UESHIN <ueshin@databricks.com>
### What changes were proposed in this pull request?
This patch supports dynamic gap duration in session window.
### Why are the changes needed?
The gap duration used in session windows is currently a static value. To support more complex usage, it is better to support a dynamic gap duration that is determined by looking at the current data. For example, in our use case, we may want a different gap depending on a certain column in the input rows, as sketched below.
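An illustrative sketch, assuming an input DataFrame `events` with `eventTime`, `eventType`, and `userId` columns (names and durations are assumptions, not taken from this patch):
```python
from pyspark.sql import functions as F

# The gap duration may now be a Column expression evaluated per row,
# e.g. a shorter session gap for "login" events than for the rest.
sessionized = events.groupBy(
    F.session_window(
        "eventTime",
        F.when(F.col("eventType") == "login", F.lit("5 seconds"))
        .otherwise(F.lit("20 seconds")),
    ),
    "userId",
).count()
```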
### Does this PR introduce _any_ user-facing change?
Yes, users can specify dynamic gap duration.
### How was this patch tested?
Modified existing tests and new test.
Closes #33691 from viirya/dynamic-session-window-gap.
Authored-by: Liang-Chi Hsieh <viirya@gmail.com>
Signed-off-by: Jungtaek Lim <kabhwan.opensource@gmail.com>
(cherry picked from commit 8b8d91cf64)
Signed-off-by: Jungtaek Lim <kabhwan.opensource@gmail.com>
### What changes were proposed in this pull request?
This PR is a follow-up of https://github.com/apache/spark/pull/32964 to improve the warning message.
### Why are the changes needed?
To improve the warning message.
### Does this PR introduce _any_ user-facing change?
The warning is changed from "Deprecated in 3.2, Use `spark.to_spark_io` instead." to "Deprecated in 3.2, Use `DataFrame.spark.to_spark_io` instead."
### How was this patch tested?
Manually run `dev/lint-python`
Closes #33631 from itholic/SPARK-35811-followup.
Authored-by: itholic <haejoon.lee@databricks.com>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
(cherry picked from commit 3d72c20e64)
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
### What changes were proposed in this pull request?
Better error messages for DataTypeOps against lists.
### Why are the changes needed?
Currently, DataTypeOps against lists throw a `Py4JJavaError`; we should throw a `TypeError` with a proper message instead.
### Does this PR introduce _any_ user-facing change?
Yes. A `TypeError` message will be shown rather than a `Py4JJavaError`.
From:
```py
>>> import pyspark.pandas as ps
>>> ps.Series([1, 2, 3]) > [3, 2, 1]
Traceback (most recent call last):
...
py4j.protocol.Py4JJavaError: An error occurred while calling o107.gt.
: java.lang.RuntimeException: Unsupported literal type class java.util.ArrayList [3, 2, 1]
...
```
To:
```py
>>> import pyspark.pandas as ps
>>> ps.Series([1, 2, 3]) > [3, 2, 1]
Traceback (most recent call last):
...
TypeError: The operation can not be applied to list.
```
### How was this patch tested?
Unit tests.
Closes #33581 from xinrong-databricks/data_type_ops_list.
Authored-by: Xinrong Meng <xinrong.meng@databricks.com>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
(cherry picked from commit 8ca11fe39f)
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
### What changes were proposed in this pull request?
Partially backport #33598 to avoid unexpected errors caused by pandas 1.3.
### Why are the changes needed?
If users try to use pandas 1.3 as the underlying pandas, unexpected errors will be raised because of removed APIs or behavior changes.
Note that pandas API on Spark 3.2 will still follow the pandas 1.2 behavior.
### Does this PR introduce _any_ user-facing change?
No.
### How was this patch tested?
Existing tests.
Closes #33614 from ueshin/issues/SPARK-36367/3.2/partially_backport.
Authored-by: Takuya UESHIN <ueshin@databricks.com>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>