[SPARK-35759][PYTHON] Remove the upperbound for numpy for pandas-on-Spark

### What changes were proposed in this pull request?

Removes the upper bound on the NumPy version for pandas-on-Spark.

### Why are the changes needed?

The upper bound on NumPy for pandas-on-Spark can be removed because the test suite currently passes on CI with numpy 1.20.3.

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

Existing tests.

Closes #32908 from ueshin/issues/SPARK-35759/numpy.

Authored-by: Takuya UESHIN <ueshin@databricks.com>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
Commit: ef7545b788 (parent: 03756618fc)
Author: Takuya UESHIN, 2021-06-15 09:59:05 +09:00; committed by Hyukjin Kwon
2 changed files with 2 additions and 2 deletions


@@ -161,7 +161,7 @@ Package Minimum supported version Note
 `Py4J` 0.10.9.2 Required
 `pandas` 0.23.2 Required for pandas APIs on Spark
 `pyarrow` 1.0.0 Required for pandas APIs on Spark
-`Numpy` 1.14(<1.20.0) Required for pandas APIs on Spark
+`Numpy` 1.14 Required for pandas APIs on Spark
 ============= ========================= ======================================
 Note that PySpark requires Java 8 or later with ``JAVA_HOME`` properly set.


@@ -269,7 +269,7 @@ try:
         'pandas_on_spark': [
             'pandas>=%s' % _minimum_pandas_version,
             'pyarrow>=%s' % _minimum_pyarrow_version,
-            'numpy>=1.14,<1.20.0',
+            'numpy>=1.14',
         ],
     },
     python_requires='>=3.6',
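The effect of the one-line requirement change above can be illustrated with a small self-contained sketch. The `satisfies` helper is hypothetical, written here only to mimic how a version-range check like pip's treats the old and new specifiers:

```python
def satisfies(version, minimum, upper=None):
    """Return True if a dotted version string meets the given bounds."""
    # Compare versions as tuples of integers, e.g. "1.20.3" -> (1, 20, 3).
    v = tuple(int(p) for p in version.split("."))
    if v < tuple(int(p) for p in minimum.split(".")):
        return False
    if upper is not None and v >= tuple(int(p) for p in upper.split(".")):
        return False
    return True

# Before this PR ('numpy>=1.14,<1.20.0'): numpy 1.20.3, the version
# running on CI, was rejected by the "<1.20.0" upper bound.
print(satisfies("1.20.3", "1.14", upper="1.20.0"))  # False

# After this PR ('numpy>=1.14'): only the minimum version applies.
print(satisfies("1.20.3", "1.14"))  # True
```

With the upper bound gone, users installing `pyspark[pandas_on_spark]` are no longer forced onto pre-1.20 NumPy releases.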