Commit graph

30792 commits

Takuya UESHIN c22f7a4834 [SPARK-36167][PYTHON] Revisit more InternalField managements
### What changes were proposed in this pull request?

Revisit and manage `InternalField` in more places.

### Why are the changes needed?

There are other places where we can manage `InternalField`, which lets us keep extension dtypes or `CategoricalDtype`.
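
A small illustration (not taken from this PR's tests; values are arbitrary) of the metadata being preserved: a pandas-on-Spark Series can carry a `CategoricalDtype`, and keeping the `InternalField` is what allows operations to retain such dtypes instead of degrading them to plain ones.

```python
import pyspark.pandas as ps
from pandas.api.types import CategoricalDtype

# The dtype below is carried in the InternalField of the underlying frame.
psser = ps.Series([1, 2, 3]).astype(CategoricalDtype([3, 2, 1]))
print(psser.dtype)  # CategoricalDtype(categories=[3, 2, 1], ordered=False)
```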

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

Added some tests.

Closes #33377 from ueshin/issues/SPARK-36167/internal_field.

Authored-by: Takuya UESHIN <ueshin@databricks.com>
Signed-off-by: Takuya UESHIN <ueshin@databricks.com>
2021-07-15 19:25:20 -07:00
Dongjoon Hyun 5f41a2752f [SPARK-36164][INFRA][FOLLOWUP] Add empty string check back
### What changes were proposed in this pull request?

This is a follow-up of #33371.
In the GitHub Action run on a branch commit, the environment variable is empty.
This PR adds the empty-string check logic back.
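
A minimal sketch (illustrative names, not the actual `run-tests.py` code) of the kind of guard this restores, treating an unset and an empty `APACHE_SPARK_REF` the same way instead of passing `''` to `git diff`:

```python
import os

# Only a non-empty ref is usable as a diff target; "unset" and "set but empty"
# must be handled identically.
ref = os.environ.get("APACHE_SPARK_REF", "")
diff_target = ref if ref else None  # fall back to the default behavior when empty
```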

### Why are the changes needed?

Currently, the failure happens when we use `--modules` in GitHub Action.
```
$ GITHUB_ACTIONS=1 APACHE_SPARK_REF= dev/run-tests.py --modules core
[info] Using build tool sbt with Hadoop profile hadoop3.2 and Hive profile hive2.3 under environment github_actions
fatal: ambiguous argument '': unknown revision or path not in the working tree.
Use '--' to separate paths from revisions, like this:
'git <command> [<revision>...] -- [<file>...]'
Traceback (most recent call last):
  File "/Users/dongjoon/APACHE/spark-merge/dev/run-tests.py", line 785, in <module>
    main()
  File "/Users/dongjoon/APACHE/spark-merge/dev/run-tests.py", line 663, in main
    changed_files = identify_changed_files_from_git_commits(
  File "/Users/dongjoon/APACHE/spark-merge/dev/run-tests.py", line 91, in identify_changed_files_from_git_commits
    raw_output = subprocess.check_output(['git', 'diff', '--name-only', patch_sha, diff_target],
  File "/Users/dongjoon/.pyenv/versions/3.9.5/lib/python3.9/subprocess.py", line 424, in check_output
    return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
  File "/Users/dongjoon/.pyenv/versions/3.9.5/lib/python3.9/subprocess.py", line 528, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['git', 'diff', '--name-only', 'HEAD', '']' returned non-zero exit status 128.
```

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

Manually. The following failure is expected in a local environment because it has already passed `identify_changed_files_from_git_commits`.
```
$ GITHUB_ACTIONS=1 APACHE_SPARK_REF= dev/run-tests.py --modules core
[info] Using build tool sbt with Hadoop profile hadoop3.2 and Hive profile hive2.3 under environment github_actions
Traceback (most recent call last):
  File "/Users/dongjoon/APACHE/spark-merge/dev/run-tests.py", line 785, in <module>
    main()
  File "/Users/dongjoon/APACHE/spark-merge/dev/run-tests.py", line 668, in main
    os.environ["GITHUB_SHA"], target_ref=os.environ["GITHUB_PREV_SHA"])
  File "/Users/dongjoon/.pyenv/versions/3.9.5/lib/python3.9/os.py", line 679, in __getitem__
    raise KeyError(key) from None
KeyError: 'GITHUB_SHA'
```

Closes #33374 from dongjoon-hyun/SPARK-36164.

Authored-by: Dongjoon Hyun <dongjoon@apache.org>
Signed-off-by: Dongjoon Hyun <dongjoon@apache.org>
2021-07-15 13:44:17 -07:00
Max Gekk b09b7f7cc0 [SPARK-36034][SQL] Rebase datetime in pushed down filters to parquet
### What changes were proposed in this pull request?
In the PR, I propose to propagate the SQL config `spark.sql.parquet.datetimeRebaseModeInRead` and/or the Parquet option `datetimeRebaseMode` to `ParquetFilters`. The `ParquetFilters` class uses these settings when converting dates/timestamps from datasource filters into values pushed via `FilterApi` to the `parquet-column` lib.

Before the changes, date/timestamp values expressed as days/microseconds/milliseconds are interpreted as offsets in the Proleptic Gregorian calendar and pushed to the parquet library as is. That works fine if the timestamp/date values in the parquet files were saved in the `CORRECTED` mode, but in the `LEGACY` mode the filter values might not match the actual values.

After the changes, the timestamp/date values of filters pushed down to the parquet libs, such as `FilterApi.eq(col1, -719162)`, are rebased according to the rebase settings. For example, if the rebase mode is `CORRECTED`, **-719162** is pushed down as is, but if the current rebase mode is `LEGACY`, the number of days is rebased to **-719164**. For more context, the PR description of https://github.com/apache/spark/pull/28067 shows the diffs between the two calendars.
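
For reference, a hedged sketch of how the two settings named above can be supplied on the read side (the assumption that the per-read option takes precedence over the session conf is mine, not stated in this PR):

```python
# Session-wide rebase mode for reads.
spark.conf.set("spark.sql.parquet.datetimeRebaseModeInRead", "LEGACY")

# Per-read Parquet option; assumed here to override the session conf.
df = (spark.read
      .option("datetimeRebaseMode", "CORRECTED")
      .parquet("date_written_by_spark3_legacy"))
```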

### Why are the changes needed?
The changes fix the bug portrayed by the following example from SPARK-36034:
```python
In [27]: spark.conf.set("spark.sql.legacy.parquet.datetimeRebaseModeInWrite", "LEGACY")
>>> spark.sql("SELECT DATE '0001-01-01' AS date").write.mode("overwrite").parquet("date_written_by_spark3_legacy")
>>> spark.read.parquet("date_written_by_spark3_legacy").where("date = '0001-01-01'").show()
+----+
|date|
+----+
+----+
```
The result must have the date value `0001-01-01`.

### Does this PR introduce _any_ user-facing change?
In some sense, yes. Query results can be different in some cases. For the example above:
```scala
scala> spark.conf.set("spark.sql.parquet.datetimeRebaseModeInWrite", "LEGACY")
scala> spark.sql("SELECT DATE '0001-01-01' AS date").write.mode("overwrite").parquet("date_written_by_spark3_legacy")
scala> spark.read.parquet("date_written_by_spark3_legacy").where("date = '0001-01-01'").show(false)
+----------+
|date      |
+----------+
|0001-01-01|
+----------+
```

### How was this patch tested?
By running the modified test suite `ParquetFilterSuite`:
```
$ build/sbt "test:testOnly *ParquetV1FilterSuite"
$ build/sbt "test:testOnly *ParquetV2FilterSuite"
```

Closes #33347 from MaxGekk/fix-parquet-ts-filter-pushdown.

Authored-by: Max Gekk <max.gekk@gmail.com>
Signed-off-by: Max Gekk <max.gekk@gmail.com>
2021-07-15 22:21:57 +03:00
William Hyun c8a3c22628 [SPARK-36164][INFRA] run-test.py should not fail when APACHE_SPARK_REF is not defined
### What changes were proposed in this pull request?
This PR aims to change `run-tests.py` so that it does not fail when `os.environ["APACHE_SPARK_REF"]` is not defined.

### Why are the changes needed?
Currently, `run-tests.py` ends with an error when the variable is missing.

### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?
Pass the CIs.

Closes #33371 from williamhyun/SPARK-36164.

Authored-by: William Hyun <william@apache.org>
Signed-off-by: Dongjoon Hyun <dongjoon@apache.org>
2021-07-15 11:43:30 -07:00
Dongjoon Hyun d69f981869 [SPARK-36165][INFRA] Fix SQL doc generation in GitHub Action
### What changes were proposed in this pull request?

This PR aims to fix SQL doc generation in GitHub Action by specifying the mkdocs-installed python version explicitly.

### Why are the changes needed?

Currently, the SQL doc generation uses `spark-submit` and picks up a different `Python 3` binary.
```
Generating SQL configuration table HTML file.
Traceback (most recent call last):
  File "/__w/spark/spark/sql/gen-sql-config-docs.py", line 25, in <module>
    from mkdocs.structure.pages import markdown
ModuleNotFoundError: No module named 'mkdocs'
```

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

Pass the GitHub Action linter job.

Closes #33372 from dongjoon-hyun/fix_mkdocs.

Authored-by: Dongjoon Hyun <dongjoon@apache.org>
Signed-off-by: Dongjoon Hyun <dongjoon@apache.org>
2021-07-15 11:41:48 -07:00
Gengliang Wang 96c2919988 [SPARK-36135][SQL] Support TimestampNTZ type in file partitioning
### What changes were proposed in this pull request?

Support TimestampNTZ type in file partitioning
* When there is no provided schema and the default timestamp type is TimestampNTZ, Spark should infer and parse the timestamp partition values as TimestampNTZ.
* When the provided partition schema is TimestampNTZ, Spark should be able to parse the TimestampNTZ type partition column.

### Why are the changes needed?

File partitioning is an important feature and Spark should support TimestampNTZ type in it.

### Does this PR introduce _any_ user-facing change?

Yes, Spark supports TimestampNTZ type in file partitioning

### How was this patch tested?

Unit tests

Closes #33344 from gengliangwang/partition.

Authored-by: Gengliang Wang <gengliang@apache.org>
Signed-off-by: Gengliang Wang <gengliang@apache.org>
2021-07-16 01:13:32 +08:00
Jungtaek Lim 1ceb753ef5 [SPARK-36157][SQL][SS] TimeWindow expression: apply filter before project
### What changes were proposed in this pull request?

This PR proposes to change the order in which the operators for TimeWindow are applied, from project -> filter to filter -> project.

Currently Spark applies project and then filter, even though the filter does not depend on the project. That said, if the input rows are going to be filtered out by the filter predicate, applying the projection to those rows is simply a waste of time.
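
A tiny conceptual sketch (plain Python, not the planner code) of why filtering first is cheaper when the filter does not need the projected columns:

```python
rows = range(10)
project = lambda r: (r, r * r)   # stand-in for the window projection
keep = lambda r: r % 2 == 0      # stand-in for the time-window filter

# Before: project every row, then filter.
before = [p for p in (project(r) for r in rows) if keep(p[0])]
# After: filter first, so the projection runs only on surviving rows.
after = [project(r) for r in rows if keep(r)]
assert before == after           # same result, less projection work
```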

### Why are the changes needed?

This is a simple improvement requiring changes to only a couple of lines.

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

Existing tests.

Closes #33367 from HeartSaVioR/SPARK-36157.

Authored-by: Jungtaek Lim <kabhwan.opensource@gmail.com>
Signed-off-by: Liang-Chi Hsieh <viirya@gmail.com>
2021-07-15 09:47:25 -07:00
Dominik Gehl 2db7ed7964 [SPARK-36158][PYTHON][DOCS] Improving pyspark sql/functions months_between documentation
### What changes were proposed in this pull request?
Updating the PySpark `months_between` documentation to match the precision of the Scala documentation.

### Why are the changes needed?
Documentation clarity

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
Only documentation change

Closes #33366 from dominikgehl/feature/SPARK-36158.

Authored-by: Dominik Gehl <dog@open.ch>
Signed-off-by: Max Gekk <max.gekk@gmail.com>
2021-07-15 19:20:38 +03:00
Yuming Wang 0062c03c15 [SPARK-32792][SQL][FOLLOWUP] Fix Parquet filter pushdown NOT IN predicate
### What changes were proposed in this pull request?

This PR fixes Parquet filter pushdown for the `NOT IN` predicate when the number of values exceeds `spark.sql.parquet.pushdown.inFilterThreshold`. For example, for `Not(In(a, Array(2, 3, 7)))`, we cannot push down `not(and(gteq(a, 2), lteq(a, 7)))`.

### Why are the changes needed?

Fix bug.

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

Unit test.

Closes #33365 from wangyum/SPARK-32792-3.

Authored-by: Yuming Wang <yumwang@ebay.com>
Signed-off-by: Max Gekk <max.gekk@gmail.com>
2021-07-15 18:51:53 +03:00
Hyukjin Kwon a71dd6af2f [SPARK-36146][PYTHON][INFRA][TESTS] Upgrade Python version from 3.6 to 3.9 in GitHub Actions' linter/docs
### What changes were proposed in this pull request?

This PR proposes to use Python 3.9 in the documentation and linter jobs at GitHub Actions. It also contains fixes for the mypy check failures introduced by the Python 3.9 upgrade:

```
python/pyspark/sql/pandas/_typing/protocols/frame.pyi:64: error: Name "np.ndarray" is not defined
python/pyspark/sql/pandas/_typing/protocols/frame.pyi:91: error: Name "np.recarray" is not defined
python/pyspark/sql/pandas/_typing/protocols/frame.pyi:165: error: Name "np.ndarray" is not defined
python/pyspark/pandas/categorical.py:82: error: Item "dtype[Any]" of "Union[dtype[Any], Any]" has no attribute "categories"
python/pyspark/pandas/categorical.py:109: error: Item "dtype[Any]" of "Union[dtype[Any], Any]" has no attribute "ordered"
python/pyspark/ml/linalg/__init__.pyi:184: error: Return type "ndarray[Any, Any]" of "toArray" incompatible with return type "NoReturn" in supertype "Matrix"
python/pyspark/ml/linalg/__init__.pyi:217: error: Return type "ndarray[Any, Any]" of "toArray" incompatible with return type "NoReturn" in supertype "Matrix"
python/pyspark/pandas/typedef/typehints.py:163: error: Module has no attribute "bool"; maybe "bool_" or "bool8"?
python/pyspark/pandas/typedef/typehints.py:174: error: Module has no attribute "float"; maybe "float_", "cfloat", or "float96"?
python/pyspark/pandas/typedef/typehints.py:180: error: Module has no attribute "int"; maybe "uint", "rint", or "intp"?
python/pyspark/pandas/ml.py:81: error: Value of type variable "_DTypeScalar_co" of "dtype" cannot be "object"
python/pyspark/pandas/indexing.py:1649: error: Module has no attribute "int"; maybe "uint", "rint", or "intp"?
python/pyspark/pandas/indexing.py:1656: error: Module has no attribute "int"; maybe "uint", "rint", or "intp"?
python/pyspark/pandas/frame.py:4969: error: Function "numpy.array" is not valid as a type
python/pyspark/pandas/frame.py:4969: note: Perhaps you need "Callable[...]" or a callback protocol?
python/pyspark/pandas/frame.py:4970: error: Function "numpy.array" is not valid as a type
python/pyspark/pandas/frame.py:4970: note: Perhaps you need "Callable[...]" or a callback protocol?
python/pyspark/pandas/frame.py:7402: error: "List[Any]" has no attribute "tolist"
python/pyspark/pandas/series.py:1030: error: Module has no attribute "_NoValue"
python/pyspark/pandas/series.py:1031: error: Module has no attribute "_NoValue"
python/pyspark/pandas/indexes/category.py:159: error: Item "dtype[Any]" of "Union[dtype[Any], Any]" has no attribute "categories"
python/pyspark/pandas/indexes/category.py:180: error: Item "dtype[Any]" of "Union[dtype[Any], Any]" has no attribute "ordered"
python/pyspark/pandas/namespace.py:2036: error: Argument 1 to "column_name" has incompatible type "float"; expected "str"
python/pyspark/pandas/mlflow.py:59: error: Incompatible types in assignment (expression has type "Type[floating[Any]]", variable has type "str")
python/pyspark/pandas/data_type_ops/categorical_ops.py:43: error: Item "dtype[Any]" of "Union[dtype[Any], Any]" has no attribute "categories"
python/pyspark/pandas/data_type_ops/categorical_ops.py:43: error: Item "dtype[Any]" of "Union[dtype[Any], Any]" has no attribute "ordered"
python/pyspark/pandas/data_type_ops/categorical_ops.py:56: error: Item "dtype[Any]" of "Union[dtype[Any], Any]" has no attribute "categories"
python/pyspark/pandas/tests/test_typedef.py:70: error: Name "np.float" is not defined
python/pyspark/pandas/tests/test_typedef.py:77: error: Name "np.float" is not defined
python/pyspark/pandas/tests/test_typedef.py:85: error: Name "np.float" is not defined
python/pyspark/pandas/tests/test_typedef.py:100: error: Name "np.float" is not defined
python/pyspark/pandas/tests/test_typedef.py:108: error: Name "np.float" is not defined
python/pyspark/mllib/clustering.pyi:152: error: Incompatible types in assignment (expression has type "ndarray[Any, Any]", base class "KMeansModel" defined the type as "List[ndarray[Any, Any]]")
python/pyspark/mllib/classification.pyi:93: error: Signature of "predict" incompatible with supertype "LinearClassificationModel"
Found 32 errors in 15 files (checked 315 source files)
1
```
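
One representative class of errors above, `Name "np.float" is not defined`, comes from NumPy dropping its deprecated scalar aliases; the usual fix (a general note, not a quote of this PR's diff) is to switch to the builtin or an explicit NumPy type:

```python
import numpy as np

# np.float was a deprecated alias of the builtin float and is rejected by newer
# NumPy type stubs; use the builtin or an explicit width instead.
x = np.array([1.0, 2.0, 3.0], dtype=float)       # builtin float
y = np.array([1.0, 2.0, 3.0], dtype=np.float64)  # explicit NumPy type
```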

### Why are the changes needed?

Python 3.6 was deprecated in SPARK-35938.

### Does this PR introduce _any_ user-facing change?

No. Static analysis (e.g., via type hints) may be affected, but the changes are non-breaking.

### How was this patch tested?

I manually checked by GitHub Actions build in forked repository.

Closes #33356 from HyukjinKwon/SPARK-36146.

Authored-by: Hyukjin Kwon <gurwls223@apache.org>
Signed-off-by: Dongjoon Hyun <dongjoon@apache.org>
2021-07-15 08:01:54 -07:00
Hyukjin Kwon 6bd385f1e3 [SPARK-36159][BUILD] Replace 'python' to 'python3' in dev/test-dependencies.sh
### What changes were proposed in this pull request?

This PR is a follow-up of https://github.com/apache/spark/pull/26330. This fixes the last remaining place, in `dev/test-dependencies.sh`.

### Why are the changes needed?

To stick to Python 3 instead of using Python 2 mistakenly.

### Does this PR introduce _any_ user-facing change?

No, dev-only.

### How was this patch tested?

Manually tested.

Closes #33368 from HyukjinKwon/change-python-3.

Authored-by: Hyukjin Kwon <gurwls223@apache.org>
Signed-off-by: Dongjoon Hyun <dongjoon@apache.org>
2021-07-15 07:58:18 -07:00
Dominik Gehl 802f632a28 [SPARK-36154][DOCS] Documenting week and quarter as valid formats in pyspark sql/functions trunc
### What changes were proposed in this pull request?
Added the missing documentation of `week` and `quarter` as valid formats to the PySpark sql/functions `trunc`.
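
A quick hedged example of the two formats being documented (column and data values are illustrative):

```python
from pyspark.sql import functions as F

df = spark.createDataFrame([("2021-07-15",)], ["d"]).select(F.to_date("d").alias("d"))
df.select(
    F.trunc("d", "week").alias("week_start"),       # 2021-07-12 (the Monday of that week)
    F.trunc("d", "quarter").alias("quarter_start"), # 2021-07-01
).show()
```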

### Why are the changes needed?
The PySpark and Scala documentation didn't mention the same supported formats.

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
Only documentation change

Closes #33359 from dominikgehl/feature/SPARK-36154.

Authored-by: Dominik Gehl <dog@open.ch>
Signed-off-by: Max Gekk <max.gekk@gmail.com>
2021-07-15 16:51:11 +03:00
PengLei e05441c223 [SPARK-29519][SQL][FOLLOWUP] Keep output is deterministic for show tblproperties
### What changes were proposed in this pull request?
Keep the output order deterministic for `SHOW TBLPROPERTIES`.

### Why are the changes needed?
[#33343](https://github.com/apache/spark/pull/33343#issue-689828187).
Keeping the output order deterministic is meaningful.

Since the properties are sorted before the results are compared in the test case for `SHOW TBLPROPERTIES`, the test does not fail, but ideally the output should be ordered and deterministic.

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
Existing UTs.

Closes #33353 from Peng-Lei/order-ouput-properties.

Authored-by: PengLei <peng.8lei@gmail.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
2021-07-15 21:44:10 +08:00
Kousuke Saruta f95ca31c0f [SPARK-33898][SQL][FOLLOWUP] Fix the behavior of SHOW CREATE TABLE to output deterministic results
### What changes were proposed in this pull request?

This PR fixes the behavior of `SHOW CREATE TABLE` added in SPARK-33898 (#32931) so that it outputs deterministic results.
A test `SPARK-33898: SHOW CREATE TABLE` in `DataSourceV2SQLSuite` compares two `CREATE TABLE` statements: one is generated by `SHOW CREATE TABLE` against a created table, and the other is the expected `CREATE TABLE` statement.

The created table has options `from` and `to`, and they are declared in this order.
```
CREATE TABLE $t (
  a bigint NOT NULL,
  b bigint,
  c bigint,
  `extra col` ARRAY<INT>,
  `<another>` STRUCT<x: INT, y: ARRAY<BOOLEAN>>
)
USING foo
OPTIONS (
  from = 0,
  to = 1)
COMMENT 'This is a comment'
TBLPROPERTIES ('prop1' = '1')
PARTITIONED BY (a)
LOCATION '/tmp'
```

And the expected `CREATE TABLE` in the test code is as follows.
```
"CREATE TABLE testcat.ns1.ns2.tbl (",
"`a` BIGINT NOT NULL,",
"`b` BIGINT,",
"`c` BIGINT,",
"`extra col` ARRAY<INT>,",
"`<another>` STRUCT<`x`: INT, `y`: ARRAY<BOOLEAN>>)",
"USING foo",
"OPTIONS(",
"'from' = '0',",
"'to' = '1')",
"PARTITIONED BY (a)",
"COMMENT 'This is a comment'",
"LOCATION '/tmp'",
"TBLPROPERTIES(",
"'prop1' = '1')"
```
As you can see, `from` and `to` are expected in that order, but the options are implemented as a `Map`, so the key order cannot be guaranteed.

In fact, this test fails with Scala 2.13.
```
[info] - SPARK-33898: SHOW CREATE TABLE *** FAILED *** (515 milliseconds)
[info]   Array("CREATE TABLE testcat.ns1.ns2.tbl (", "`a` BIGINT NOT NULL,", "`b` BIGINT,", "`c` BIGINT,", "`extra col` ARRAY<INT>,", "`<another>` STRUCT<`x`: INT, `y`: ARRAY<BOOLEAN>>)", "USING foo", "OPTIONS(", "'to' = '1',", "'from' = '0')", "PARTITIONED BY (a)", "COMMENT 'This is a comment'", "LOCATION '/tmp'", "TBLPROPERTIES(", "'prop1' = '1')") did not equal Array("CREATE TABLE testcat.ns1.ns2.tbl (", "`a` BIGINT NOT NULL,", "`b` BIGINT,", "`c` BIGINT,", "`extra col` ARRAY<INT>,", "`<another>` STRUCT<`x`: INT, `y`: ARRAY<BOOLEAN>>)", "USING foo", "OPTIONS(", "'from' = '0',", "'to' = '1')", "PARTITIONED BY (a)", "COMMENT 'This is a comment'", "LOCATION '/tmp'", "TBLPROPERTIES(", "'prop1' = '1')") (DataSourceV2SQLSuite.scala:1997)
```
In the current master, the test doesn't fail with Scala 2.12 but it's still non-deterministic.
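
The non-determinism comes from rendering an unordered `Map`; a hedged illustration of the usual remedy, sorting the option keys before rendering (whether this PR fixes it exactly this way is not shown here):

```python
options = {"to": "1", "from": "0"}  # insertion order is not guaranteed to survive a Map
rendered = ",\n".join(f"'{k}' = '{v}'" for k, v in sorted(options.items()))
print(rendered)
# 'from' = '0',
# 'to' = '1'
```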

### Why are the changes needed?

Bug fix.

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

I confirmed that the modified test passed with both Scala 2.12 and Scala 2.13 with this change.

Closes #33343 from sarutak/fix-show-create-table-test.

Authored-by: Kousuke Saruta <sarutak@oss.nttdata.com>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
2021-07-15 20:53:21 +09:00
Linhong Liu 4dfd266b27 [SPARK-36148][SQL] Fix input data types check for regexp_replace
### What changes were proposed in this pull request?
`RegExpReplace` overrides `checkInputDataTypes` but doesn't do the basic type check.
This PR adds the type check so that the error message is more readable.
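
A hedged illustration of the kind of call the basic type check guards against (the argument and the exact error text are illustrative, not taken from this PR):

```python
from pyspark.sql.utils import AnalysisException

try:
    # An argument that cannot be implicitly cast to string (an array here) should
    # now fail with a readable data-type-mismatch message.
    spark.sql("SELECT regexp_replace(array('a'), 'a', 'b')").show()
except AnalysisException as e:
    print(e)
```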

### Why are the changes needed?
bugfix

### Does this PR introduce _any_ user-facing change?
no

### How was this patch tested?
newly added test case

Closes #33357 from linhongliu-db/SPARK-36148-regexp-replace-check.

Authored-by: Linhong Liu <linhong.liu@databricks.com>
Signed-off-by: Max Gekk <max.gekk@gmail.com>
2021-07-15 12:23:28 +03:00
Dominik Gehl b5ee6d008d [SPARK-36149][PYTHON] Clarify documentation for dayofweek and weekofyear
### What changes were proposed in this pull request?
Clearly state which weekday corresponds to which integer
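
For reference, a quick hedged example of the mapping being spelled out (`dayofweek` ranges from 1 for Sunday to 7 for Saturday; dates are illustrative):

```python
from pyspark.sql import functions as F

df = (spark.createDataFrame([("2021-07-18",), ("2021-07-19",)], ["d"])
      .select(F.to_date("d").alias("d")))
# 2021-07-18 is a Sunday -> dayofweek = 1; 2021-07-19 is a Monday -> dayofweek = 2.
df.select("d", F.dayofweek("d"), F.weekofyear("d")).show()
```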

### Why are the changes needed?
Documentation clarity

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
only documentation change

Closes #33345 from dominikgehl/doc/pyspark-dayofweek.

Authored-by: Dominik Gehl <dog@open.ch>
Signed-off-by: Max Gekk <max.gekk@gmail.com>
2021-07-15 11:53:54 +03:00
Gengliang Wang 564d3de7c6 [SPARK-36037][TESTS][FOLLOWUP] Avoid wrong test results on daylight saving time
### What changes were proposed in this pull request?

Only use zone ids that have no daylight saving time for testing `localtimestamp`.

### Why are the changes needed?

In https://github.com/apache/spark/pull/33346#discussion_r670135296, MaxGekk suggests that we should avoid wrong results if possible.

### Does this PR introduce _any_ user-facing change?

No

### How was this patch tested?

Unit test

Closes #33354 from gengliangwang/FIxDST.

Authored-by: Gengliang Wang <gengliang@apache.org>
Signed-off-by: Max Gekk <max.gekk@gmail.com>
2021-07-15 11:40:51 +03:00
Dongjoon Hyun 5acfecbf97 [SPARK-36150][INFRA][TESTS] Disable MiMa for Scala 2.13 artifacts
### What changes were proposed in this pull request?

This PR aims to disable MiMa check for Scala 2.13 artifacts.

### Why are the changes needed?

Apache Spark doesn't have Scala 2.13 Maven artifacts yet.
SPARK-36151 will enable this after Apache Spark 3.2.0 release.

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

Manual. The following should succeed without real testing.
```
$ dev/mima -Pscala-2.13
```

Closes #33355 from dongjoon-hyun/SPARK-36150.

Authored-by: Dongjoon Hyun <dongjoon@apache.org>
Signed-off-by: Dongjoon Hyun <dongjoon@apache.org>
2021-07-15 00:34:22 -07:00
Dongjoon Hyun 416a7fd490 [SPARK-36139][INFRA][TESTS] Remove Python 3.6 from pyspark GitHub Action job
### What changes were proposed in this pull request?

This PR aims to remove Python 3.6 installation from `pyspark` job in `build and test` GitHub Action Workflow for Apache Spark 3.3.

### Why are the changes needed?

Python 3.6 is deprecated via SPARK-35938. This will save GitHub Action resources by removing Python 3.6 testing.

**BEFORE**
```
Will test against the following Python executables: ['python3.6', 'python3.9', 'pypy3']
```

**AFTER**
```
 Will test against the following Python executables: ['python3.9', 'pypy3']
```

Note that Python 3.6 is still used in the following cases.
- In other jobs like `Linter`
- In the `dev/run-pip-tests` script, for pip packaging testing via `conda`.
  - This is handled via https://github.com/apache/spark/pull/33351

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

Pass the GitHub Action.

Closes #33349 from dongjoon-hyun/SPARK-36139.

Authored-by: Dongjoon Hyun <dongjoon@apache.org>
Signed-off-by: Dongjoon Hyun <dongjoon@apache.org>
2021-07-14 21:01:25 -07:00
Gengliang Wang 0973397721 [SPARK-36037][SQL][FOLLOWUP] Fix flaky test for datetime function localtimestamp
### What changes were proposed in this pull request?

The threshold of the test case "datetime function localtimestamp" is small, which leads to flaky test results
https://github.com/gengliangwang/spark/runs/3067396143?check_suite_focus=true

This PR increases the threshold for checking the two different current local datetimes from 5 ms to 1 second. (The test case of current_timestamp uses 5 seconds.)

### Why are the changes needed?

Fix a flaky test.

### Does this PR introduce _any_ user-facing change?

No

### How was this patch tested?

Unit test

Closes #33346 from gengliangwang/fixFlaky.

Authored-by: Gengliang Wang <gengliang@apache.org>
Signed-off-by: Gengliang Wang <gengliang@apache.org>
2021-07-15 11:32:18 +08:00
Dongjoon Hyun 5202944b50 [SPARK-36144][INFRA][TESTS] Use Python 3.9 in run-pip-tests conda environment
### What changes were proposed in this pull request?

This PR aims to use Python 3.9 instead of 3.6 in the `run-pip-tests` conda environment for Apache Spark 3.3.

### Why are the changes needed?

Python 3.6 is deprecated via SPARK-35938 at Apache Spark 3.2. We had better have Python 3.9 test coverage in Apache Spark 3.3.

### Does this PR introduce _any_ user-facing change?

N/A

### How was this patch tested?

Pass the CIs.

Closes #33351 from dongjoon-hyun/SPARK-36144.

Authored-by: Dongjoon Hyun <dongjoon@apache.org>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
2021-07-15 11:45:00 +09:00
Karen Feng e92b8ea6f8 [SPARK-36106][SQL][CORE] Label error classes for subset of QueryCompilationErrors
### What changes were proposed in this pull request?

Adds error classes to some of the exceptions in QueryCompilationErrors.

### Why are the changes needed?

Improves auditing for developers and adds useful fields for users (error class and SQLSTATE).

### Does this PR introduce _any_ user-facing change?

Yes, fills in missing error class and SQLSTATE fields.

### How was this patch tested?

Existing tests and new unit tests.

Closes #33309 from karenfeng/group-compilation-errors-1.

Authored-by: Karen Feng <karen.feng@databricks.com>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
2021-07-15 11:43:18 +09:00
Xinrong Meng 0cb120f390 [SPARK-36125][PYTHON] Implement non-equality comparison operators between two Categoricals
### What changes were proposed in this pull request?
Implement non-equality comparison operators between two Categoricals.
Non-goal: supporting Scalar input will be a follow-up task.

### Why are the changes needed?
pandas supports non-equality comparisons between two Categoricals. We should follow that.

### Does this PR introduce _any_ user-facing change?
Yes. No `NotImplementedError` for `<`, `<=`, `>`, `>=` operators between two Categoricals. An example is shown as below:

From:
```py
>>> import pyspark.pandas as ps
>>> from pandas.api.types import CategoricalDtype
>>> psser = ps.Series([1, 2, 3]).astype(CategoricalDtype([3, 2, 1], ordered=True))
>>> other_psser = ps.Series([2, 1, 3]).astype(CategoricalDtype([3, 2, 1], ordered=True))
>>> with ps.option_context("compute.ops_on_diff_frames", True):
...     psser <= other_psser
...
Traceback (most recent call last):
...
NotImplementedError: <= can not be applied to categoricals.
```

To:
```py
>>> import pyspark.pandas as ps
>>> from pandas.api.types import CategoricalDtype
>>> psser = ps.Series([1, 2, 3]).astype(CategoricalDtype([3, 2, 1], ordered=True))
>>> other_psser = ps.Series([2, 1, 3]).astype(CategoricalDtype([3, 2, 1], ordered=True))
>>> with ps.option_context("compute.ops_on_diff_frames", True):
...     psser <= other_psser
...
0    False
1     True
2     True
dtype: bool
```
### How was this patch tested?
Unit tests.

Closes #33331 from xinrong-databricks/categorical_compare.

Authored-by: Xinrong Meng <xinrong.meng@databricks.com>
Signed-off-by: Takuya UESHIN <ueshin@databricks.com>
2021-07-14 14:01:10 -07:00
Geek 1e86345ae3 [SPARK-36069][SQL] Add field info to from_json's exception in the FAILFAST mode
### What changes were proposed in this pull request?

The Spark function `from_json` now outputs the field name, field type, and field value when the FAILFAST mode throws an exception.
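
A hedged sketch of the scenario (the exact exception text added by this change is not reproduced here; schema and data are illustrative):

```python
from pyspark.sql import functions as F

df = spark.createDataFrame([('{"a": "not_an_int"}',)], ["js"])
# In FAILFAST mode a value that does not match the schema raises an exception;
# this change is about including the field name, type and value in that exception.
parsed = df.select(F.from_json("js", "a INT", {"mode": "FAILFAST"}).alias("r"))
parsed.show()  # triggers the parse and, with this data, the exception
```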

### Why are the changes needed?

This information is very important for developers to find where the erroneous input data is located.

### Does this PR introduce _any_ user-facing change?

No

### How was this patch tested?

org/apache/spark/sql/JsonFunctionsSuite.scala:598
test("[SPARK-36069] from_json invalid json schema - check field name and field value")

Closes #33297 from geekyouth/feature/FAILFAST_output_fidelaName_fieldValue_dataType.

Lead-authored-by: Geek <forsupergeeker@gmail.com>
Co-authored-by: 极客青年 <forsupergeeker@gmail.com>
Signed-off-by: Max Gekk <max.gekk@gmail.com>
2021-07-14 21:28:15 +03:00
ulysses-you 3819641201 [SPARK-35639][SQL][FOLLOWUP] Make hasCoalescedPartition return true if something was actually coalesced
### What changes were proposed in this pull request?

Add a `CoalescedPartitionSpec(0, 0, _)` check when deciding whether a `CoalescedPartitionSpec` is actually coalesced.

### Why are the changes needed?

Fix corner case.

### Does this PR introduce _any_ user-facing change?

Yes, the UI may change.

### How was this patch tested?

Add test

Closes #33342 from ulysses-you/SPARK-35639-FOLLOW.

Authored-by: ulysses-you <ulyssesyou18@gmail.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
2021-07-14 22:04:50 +08:00
Chao Sun e980c7a840 [SPARK-36123][SQL] Parquet vectorized reader doesn't skip null values correctly
### What changes were proposed in this pull request?

Fix the skipping values logic in Parquet vectorized reader when column index is effective, by considering nulls and only call `ParquetVectorUpdater.skipValues` when the values are non-null.

### Why are the changes needed?

Currently, the Parquet vectorized reader may not work correctly if column index filtering is effective, and the data page contains null values. For instance, let's say we have two columns `c1: BIGINT` and `c2: STRING`, and the following pages:
```
   * c1        500       500       500       500
   *  |---------|---------|---------|---------|
   *  |-------|-----|-----|---|---|---|---|---|
   * c2     400   300   300 200 200 200 200 200
```

and suppose we have a query like the following:
```sql
SELECT * FROM t WHERE c1 = 500
```

this will create a Parquet row range `[500, 1000)` which, when applied to `c2`, will require us to skip all the rows in `[400,500)`. However the current logic for skipping rows is via `updater.skipValues(n, valueReader)` which is incorrect since this skips the next `n` non-null values. In the case when nulls are present, this will not work correctly.

### Does this PR introduce _any_ user-facing change?

No

### How was this patch tested?

Added a new test in `ParquetColumnIndexSuite`.

Closes #33330 from sunchao/SPARK-36123-skip-nulls.

Authored-by: Chao Sun <sunchao@apple.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
2021-07-14 18:14:17 +08:00
Linhong Liu b86645776b [SPARK-35780][SQL] Support DATE/TIMESTAMP literals across the full range
### What changes were proposed in this pull request?
DATE/TIMESTAMP literals support years 0000 to 9999. However, internally we support a range that is much larger.
We can add or subtract large intervals from a date/timestamp and the system will happily process and display large negative and positive dates.

Since we obviously cannot put this genie back into the bottle the only thing we can do is allow matching DATE/TIMESTAMP literals.

### Why are the changes needed?
Make Spark more usable; this is also a bug fix.

### Does this PR introduce _any_ user-facing change?
Yes, after this PR, below SQL will have different results
```sql
select cast('-10000-1-2' as date) as date_col
-- before PR: NULL
-- after PR: -10000-1-2
```

```sql
select cast('2021-4294967297-11' as date) as date_col
-- before PR: 2021-01-11
-- after PR: NULL
```

### How was this patch tested?
newly added test cases

Closes #32959 from linhongliu-db/SPARK-35780-full-range-datetime.

Lead-authored-by: Linhong Liu <linhong.liu@databricks.com>
Co-authored-by: Linhong Liu <67896261+linhongliu-db@users.noreply.github.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
2021-07-14 18:11:39 +08:00
Jungtaek Lim 12a576f175 [SPARK-34892][SS] Introduce MergingSortWithSessionWindowStateIterator sorting input rows and rows in state efficiently
Introduction: this PR is a part of SPARK-10816 (EventTime based sessionization (session window)). Please refer #31937 to see the overall view of the code change. (Note that code diff could be diverged a bit.)

### What changes were proposed in this pull request?

This PR introduces MergingSortWithSessionWindowStateIterator, which does "merge sort" between input rows and sessions in state based on group key and session's start time.

Note that the iterator does a merge sort among the input rows and the sessions grouped by the grouping key. The iterator doesn't provide sessions in state whose keys don't exist in the input rows. For input rows, the iterator provides all rows regardless of whether matching sessions exist in state.

MergingSortWithSessionWindowStateIterator works on the precondition that the given iterator is sorted by "group keys + start time of session window", and its output retains that sort order.
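
A small conceptual sketch (plain Python with `heapq`, not the actual iterator) of the merge described above, assuming both inputs are already sorted by (group key, session start time):

```python
import heapq

input_rows     = [("k1", 10), ("k1", 40), ("k2", 5)]  # (group key, start time)
state_sessions = [("k1", 20), ("k2", 30)]

# Streaming merge of two sorted sequences, preserving the (key, start time) order.
merged = heapq.merge(input_rows, state_sessions, key=lambda r: (r[0], r[1]))
print(list(merged))
# [('k1', 10), ('k1', 20), ('k1', 40), ('k2', 5), ('k2', 30)]
```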

### Why are the changes needed?

This part is one of the requirements for implementing SPARK-10816.

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

New UT added.

Closes #33077 from HeartSaVioR/SPARK-34892-SPARK-10816-PR-31570-part-4.

Authored-by: Jungtaek Lim <kabhwan.opensource@gmail.com>
Signed-off-by: Jungtaek Lim <kabhwan.opensource@gmail.com>
2021-07-14 18:47:44 +09:00
Fu Chen 103d16e868 [SPARK-36130][SQL] UnwrapCastInBinaryComparison should skip In expression when in.list contains an expression that is not literal
### What changes were proposed in this pull request?

This PR fixes a bug in the rule `UnwrapCastInBinaryComparison` (see this [comment](https://github.com/apache/spark/pull/32488#issuecomment-879315179)).
The rule should skip the `In` expression when `in.list` contains an expression that is not a literal.

- In

Before this PR, the following example throws an exception.
```scala
  withTable("tbl") {
    sql("CREATE TABLE tbl (d decimal(33, 27)) USING PARQUET")
    sql("SELECT d FROM tbl WHERE d NOT IN (d + 1)")
  }
```
- InSet

As the analyzer guarantees that all the elements in `inSet.hset` are literals, this is not an issue for `InSet`.

fbf53dee37/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/expressions.scala (L264-L279)

### Does this PR introduce _any_ user-facing change?

No, only bug fix.

### How was this patch tested?

New test.

Closes #33335 from cfmcgrady/SPARK-36130.

Authored-by: Fu Chen <cfmcgrady@gmail.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
2021-07-14 15:57:10 +08:00
Eugene Koifman 4033b2a3f4 [SPARK-35639][SQL] Make hasCoalescedPartition return true if something was actually coalesced
### What changes were proposed in this pull request?
Fix `CustomShuffleReaderExec.hasCoalescedPartition` so that it returns true only if some original partitions got combined

### Why are the changes needed?
Without this change, the `CustomShuffleReaderExec` description can report `coalesced` even though the partitions are unchanged.

### Does this PR introduce _any_ user-facing change?
Yes, the `Arguments` in the node description is now accurate:
```
(16) CustomShuffleReader
Input [3]: [registration#4, sum#85, count#86L]
Arguments: coalesced
```

### How was this patch tested?
Existing tests

Closes #32872 from ekoifman/PRISM-77023-fix-hasCoalescedPartition.

Authored-by: Eugene Koifman <eugene.koifman@workday.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
2021-07-14 15:48:02 +08:00
gengjiaan b4f7758944 [SPARK-36037][SQL] Support ANSI SQL LOCALTIMESTAMP datetime value function
### What changes were proposed in this pull request?
`LOCALTIMESTAMP()` is a datetime value function from ANSI SQL.
The syntax show below:
```
<datetime value function> ::=
    <current date value function>
  | <current time value function>
  | <current timestamp value function>
  | <current local time value function>
  | <current local timestamp value function>
<current date value function> ::=
CURRENT_DATE
<current time value function> ::=
CURRENT_TIME [ <left paren> <time precision> <right paren> ]
<current local time value function> ::=
LOCALTIME [ <left paren> <time precision> <right paren> ]
<current timestamp value function> ::=
CURRENT_TIMESTAMP [ <left paren> <timestamp precision> <right paren> ]
<current local timestamp value function> ::=
LOCALTIMESTAMP [ <left paren> <timestamp precision> <right paren> ]
```

`LOCALTIMESTAMP()` returns the current timestamp at the start of query evaluation as TIMESTAMP WITHOUT TIME ZONE. This is similar to `CURRENT_TIMESTAMP()`.
Note we need to update the optimization rule `ComputeCurrentTime` so that Spark returns the same result in a single query if the function is called multiple times.
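
A quick hedged check of that single-query consistency (syntax only; the optimizer detail is the `ComputeCurrentTime` rule mentioned above):

```python
# Both references are replaced with the same value at the start of query
# evaluation, so the comparison is expected to be true.
spark.sql("SELECT localtimestamp() = localtimestamp() AS same_value").show()
```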

### Why are the changes needed?
`CURRENT_TIMESTAMP()` returns the current timestamp at the start of query evaluation.
`LOCALTIMESTAMP()` returns the current timestamp without time zone at the start of query evaluation.
The `LOCALTIMESTAMP` function is part of ANSI SQL and is very useful.

### Does this PR introduce _any_ user-facing change?
Yes. It supports the new function `LOCALTIMESTAMP()`.

### How was this patch tested?
New tests.

Closes #33258 from beliefer/SPARK-36037.

Lead-authored-by: gengjiaan <gengjiaan@360.cn>
Co-authored-by: Jiaan Geng <beliefer@163.com>
Co-authored-by: Wenchen Fan <cloud0fan@gmail.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
2021-07-14 15:38:46 +08:00
Kousuke Saruta fd06cc211d [SPARK-36129][BUILD] Upgrade commons-compress to 1.21 to deal with CVEs
### What changes were proposed in this pull request?

This PR upgrades `commons-compress` from `1.20` to `1.21` to deal with CVEs.

### Why are the changes needed?

Some CVEs which affect `commons-compress 1.20` are reported and fixed in `1.21`.
https://commons.apache.org/proper/commons-compress/security-reports.html

* CVE-2021-35515
* CVE-2021-35516
* CVE-2021-35517
* CVE-2021-36090

The severities are reported as low for all the CVEs but it would be better to deal with them just in case.

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

CI.

Closes #33333 from sarutak/upgrade-commons-compress-1.21.

Authored-by: Kousuke Saruta <sarutak@oss.nttdata.com>
Signed-off-by: Dongjoon Hyun <dongjoon@apache.org>
2021-07-13 22:53:14 -07:00
Chao Sun 7a7b086534 [SPARK-36131][SQL][TEST] Refactor ParquetColumnIndexSuite
### What changes were proposed in this pull request?

Refactor `ParquetColumnIndexSuite` and allow better code reuse.

### Why are the changes needed?

A few methods in the test suite can share the same utility method `checkUnalignedPages` so it's better to do that and remove code duplication.

Additionally, `parquet.enable.dictionary` is tested for both `true` and `false` combination.

### Does this PR introduce _any_ user-facing change?

No

### How was this patch tested?

Existing tests.

Closes #33334 from sunchao/SPARK-35743-test-refactoring.

Authored-by: Chao Sun <sunchao@apple.com>
Signed-off-by: Dongjoon Hyun <dongjoon@apache.org>
2021-07-13 22:49:55 -07:00
Venkata krishnan Sowrirajan fbf53dee37 [SPARK-32920][CORE][SHUFFLE][FOLLOW-UP] Fix to run push-based shuffle tests in DAGSchedulerSuite in ad-hoc manner
### What changes were proposed in this pull request?
Currently, when the push-based shuffle tests are run in an ad-hoc manner through an IDE, `spark.testing` is not set to true; therefore `Utils#isPushBasedShuffleEnabled` returns false, disabling push-based shuffle and eventually causing the tests to fail. This doesn't happen when the tests are run on the command line using Maven, as `spark.testing` is set to true.
Changes made - set `spark.testing` to true in `initPushBasedShuffleConfs`

### Why are the changes needed?
Fix to run DAGSchedulerSuite tests in ad-hoc manner

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
In my local IDE

Closes #33303 from venkata91/SPARK-32920-follow-up.

Authored-by: Venkata krishnan Sowrirajan <vsowrirajan@linkedin.com>
Signed-off-by: Mridul Muralidharan <mridul<at>gmail.com>
2021-07-13 12:16:28 -05:00
Jungtaek Lim 0fe2d809d6 [SPARK-34891][SS] Introduce state store manager for session window in streaming query
Introduction: this PR is a part of SPARK-10816 (`EventTime based sessionization (session window)`). Please refer #31937 to see the overall view of the code change. (Note that code diff could be diverged a bit.)

### What changes were proposed in this pull request?

This PR introduces state store manager for session window in streaming query. Session window in batch query wouldn't need to leverage state store manager.

This PR ensures versioning of the state format for the state store manager, so that we can apply further optimizations after a Spark release. StreamingSessionWindowStateManager is a trait defining the available methods of the session window state store manager. Its subclasses are classes implementing the trait with versioning.

The format of version 1 leverages the new feature of "prefix match scan" to represent the session windows:

* full key: [ group keys, start time in session window ]
* prefix key: [ group keys ]

### Why are the changes needed?

This part is one of the requirements for implementing SPARK-10816.

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

New test suite added

Closes #31989 from HeartSaVioR/SPARK-34891-SPARK-10816-PR-31570-part-3.

Authored-by: Jungtaek Lim <kabhwan.opensource@gmail.com>
Signed-off-by: Liang-Chi Hsieh <viirya@gmail.com>
2021-07-13 08:58:31 -07:00
Gengliang Wang 067432705f [SPARK-36120][SQL] Support TimestampNTZ type in cache table
### What changes were proposed in this pull request?

Support TimestampNTZ type columns in the SQL command `CACHE TABLE`.

### Why are the changes needed?

Cache table should support the new timestamp type.

### Does this PR introduce _any_ user-facing change?

Yes, the TimestampNTZ type column can be used in `CACHE TABLE`.

### How was this patch tested?

Unit test

Closes #33322 from gengliangwang/cacheTable.

Authored-by: Gengliang Wang <gengliang@apache.org>
Signed-off-by: Max Gekk <max.gekk@gmail.com>
2021-07-13 17:23:48 +03:00
Wenchen Fan 583173b7cc [SPARK-36033][SQL][TEST] Validate partitioning requirements in TPCDS tests
### What changes were proposed in this pull request?

Make sure all physical plans of TPCDS queries are valid (satisfy the partitioning requirement).

### Why are the changes needed?

improve test coverage

### Does this PR introduce _any_ user-facing change?

no

### How was this patch tested?

N/A

Closes #33248 from cloud-fan/aqe2.

Lead-authored-by: Wenchen Fan <cloud0fan@gmail.com>
Co-authored-by: Wenchen Fan <wenchen@databricks.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
2021-07-13 21:17:13 +08:00
Wenchen Fan 4a62e1e9c1 [SPARK-36074][SQL] Add error class for StructType.findNestedField
### What changes were proposed in this pull request?

This PR adds an INVALID_FIELD_NAME error class for the errors in `StructType.findNestedField`. It also cleans up the code there and adds UT for this method.

### Why are the changes needed?

follow the new error message framework

### Does this PR introduce _any_ user-facing change?

no

### How was this patch tested?

existing tests

Closes #33282 from cloud-fan/error.

Authored-by: Wenchen Fan <wenchen@databricks.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
2021-07-13 21:13:58 +08:00
Kousuke Saruta 57a4f310df [SPARK-36081][SPARK-36066][SQL] Update the document about the behavior change of trimming characters for cast
### What changes were proposed in this pull request?

This PR modifies the comment for `UTF8String.trimAll` and `sql-migration-guide.md`.
The comment for `UTF8String.trimAll` says the following:
```
Trims whitespaces ({literal <=} ASCII 32) from both ends of this string.
```
Similarly, `sql-migration-guide.md` describes the behavior of `cast` as follows.
```
In Spark 3.0, when casting string value to integral types(tinyint, smallint, int and bigint),
datetime types(date, timestamp and interval) and boolean type,
the leading and trailing whitespaces (<= ASCII 32) will be trimmed before converted to these type values,
for example, `cast(' 1\t' as int)` results `1`, `cast(' 1\t' as boolean)` results `true`,
`cast('2019-10-10\t as date)` results the date value `2019-10-10`.
In Spark version 2.4 and below, when casting string to integrals and booleans,
it does not trim the whitespaces from both ends; the foregoing results is `null`,
while to datetimes, only the trailing spaces (= ASCII 32) are removed.
```

But SPARK-32559 (#29375) changed the behavior and only whitespace ASCII characters will be trimmed since Spark 3.0.1.

### Why are the changes needed?

To follow the previous change.

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

Confirmed the document built by the following command.
```
SKIP_API=1 bundle exec jekyll build
```

Closes #33287 from sarutak/fix-utf8string-trim-issue.

Authored-by: Kousuke Saruta <sarutak@oss.nttdata.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
2021-07-13 20:28:47 +08:00
Max Gekk 1ba3982d16 [SPARK-35735][SQL][FOLLOWUP] Remove unused method IntervalUtils.checkIntervalStringDataType()
### What changes were proposed in this pull request?
Remove the private method `checkIntervalStringDataType()` from `IntervalUtils` since it is no longer used after https://github.com/apache/spark/pull/33242.

### Why are the changes needed?
To improve code maintenance.

### Does this PR introduce _any_ user-facing change?
No. The method is private and has existed in the code base only for a short time.

### How was this patch tested?
By existing GAs/tests.

Closes #33321 from MaxGekk/SPARK-35735-remove-unused-method.

Authored-by: Max Gekk <max.gekk@gmail.com>
Signed-off-by: Max Gekk <max.gekk@gmail.com>
2021-07-13 15:11:21 +03:00
attilapiros 03e48c87f5 [SPARK-35334][K8S] Make Spark more resilient to intermittent K8s flakiness
### What changes were proposed in this pull request?

Setting `kubernetes.request.retry.backoffLimit` to 3 by default when the user hasn't specified any value for it.

This way, when the k8s API server gives back an HTTP status code >= 500, an exponential backoff will be triggered (where `kubernetes.request.retry.backoffInterval` is 1000 ms by default).

For details please check https://github.com/fabric8io/kubernetes-client/issues/3087.

### Why are the changes needed?

We experienced some internal K8s errors; for example, when the `etcdserver` leader election was ongoing, the error was propagated to the API client and caused an issue in Spark:

```
Caused by: io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: GET at:
https://kubernetes.default.svc/api/v1/namespaces/dex-app-bl24w4z9/pods/sparkpi-10-fcd3f6781a874212-driver. Message: etcdserver:
leader changed. Received status: Status(apiVersion=v1, code=500, details=null, kind=Status, message=etcdserver: leader changed,
metadata=ListMeta(_continue=null, remainingItemCount=null, resourceVersion=null, selfLink=null, additionalProperties={}), reason=null,
status=Failure, additionalProperties={}).
```

### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?

Running the integration tests with `log4j.logger.org.apache.spark.deploy.k8s.SparkKubernetesClientFactory=DEBUG` in the log4j config produced the following log:

```
21/07/08 11:01:14.873 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils: 21/07/08 11:01:14 DEBUG org.apache.spark.deploy.k8s.SparkKubernetesClientFactory: Kubernetes client config: {
21/07/08 11:01:14.873 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:   "requestConfig" : {
21/07/08 11:01:14.873 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:     "username" : null,
21/07/08 11:01:14.873 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:     "password" : null,
21/07/08 11:01:14.873 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:     "oauthToken" : null,
21/07/08 11:01:14.873 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:     "oauthTokenProvider" : null,
21/07/08 11:01:14.873 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:     "impersonateUsername" : null,
21/07/08 11:01:14.873 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:     "impersonateGroups" : [ null ],
21/07/08 11:01:14.873 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:     "impersonateExtras" : { },
21/07/08 11:01:14.873 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:     "watchReconnectInterval" : 1000,
21/07/08 11:01:14.873 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:     "watchReconnectLimit" : -1,
21/07/08 11:01:14.873 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:     "connectionTimeout" : 10000,
21/07/08 11:01:14.873 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:     "uploadConnectionTimeout" : 10000,
21/07/08 11:01:14.873 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:     "uploadRequestTimeout" : 120000,
21/07/08 11:01:14.873 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:     "requestRetryBackoffLimit" : 3,
21/07/08 11:01:14.873 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:     "requestRetryBackoffInterval" : 1000,
21/07/08 11:01:14.873 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:     "requestTimeout" : 10000,
21/07/08 11:01:14.873 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:     "rollingTimeout" : 900000,
21/07/08 11:01:14.873 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:     "scaleTimeout" : 600000,
21/07/08 11:01:14.873 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:     "loggingInterval" : 20000,
21/07/08 11:01:14.874 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:     "websocketTimeout" : 5000,
21/07/08 11:01:14.874 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:     "websocketPingInterval" : 0,
21/07/08 11:01:14.874 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:     "maxConcurrentRequests" : 64,
21/07/08 11:01:14.874 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:     "maxConcurrentRequestsPerHost" : 5,
21/07/08 11:01:14.874 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:     "impersonateGroup" : null
21/07/08 11:01:14.874 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:   },
21/07/08 11:01:14.874 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:   "contexts" : [ {
21/07/08 11:01:14.874 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:     "context" : {
21/07/08 11:01:14.874 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:       "cluster" : "talos-default",
21/07/08 11:01:14.874 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:       "namespace" : "default",
21/07/08 11:01:14.874 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:       "user" : "admintalos-default"
21/07/08 11:01:14.874 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:     },
21/07/08 11:01:14.874 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:     "name" : "admintalos-default"
21/07/08 11:01:14.874 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:   }, {
21/07/08 11:01:14.874 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:     "context" : {
21/07/08 11:01:14.874 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:       "cluster" : "arn:aws:eks:us-west-2:392479084068:cluster/mow",
21/07/08 11:01:14.874 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:       "user" : "arn:aws:eks:us-west-2:392479084068:cluster/mow"
21/07/08 11:01:14.874 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:     },
21/07/08 11:01:14.874 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:     "name" : "arn:aws:eks:us-west-2:392479084068:cluster/mow"
21/07/08 11:01:14.874 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:   }, {
21/07/08 11:01:14.874 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:     "context" : {
21/07/08 11:01:14.874 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:       "cluster" : "minikube",
21/07/08 11:01:14.874 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:       "extensions" : [ {
21/07/08 11:01:14.874 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:         "name" : "context_info"
21/07/08 11:01:14.874 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:       } ],
21/07/08 11:01:14.874 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:       "namespace" : "default",
21/07/08 11:01:14.874 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:       "user" : "minikube"
21/07/08 11:01:14.874 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:     },
21/07/08 11:01:14.874 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:     "name" : "minikube"
21/07/08 11:01:14.874 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:   }, {
21/07/08 11:01:14.874 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:     "context" : {
21/07/08 11:01:14.874 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:       "cluster" : "",
21/07/08 11:01:14.874 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:       "user" : ""
21/07/08 11:01:14.874 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:     },
21/07/08 11:01:14.874 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:     "name" : "mow"
21/07/08 11:01:14.874 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:   } ],
21/07/08 11:01:14.874 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:   "currentContext" : {
21/07/08 11:01:14.874 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:     "context" : {
21/07/08 11:01:14.874 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:       "cluster" : "minikube",
21/07/08 11:01:14.874 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:       "extensions" : [ {
21/07/08 11:01:14.874 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:         "name" : "context_info"
21/07/08 11:01:14.874 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:       } ],
21/07/08 11:01:14.874 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:       "namespace" : "default",
21/07/08 11:01:14.874 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:       "user" : "minikube"
21/07/08 11:01:14.874 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:     },
21/07/08 11:01:14.874 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:     "name" : "minikube"
21/07/08 11:01:14.874 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:   },
21/07/08 11:01:14.874 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:   "maxConcurrentRequests" : 64,
21/07/08 11:01:14.874 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:   "maxConcurrentRequestsPerHost" : 5,
21/07/08 11:01:14.874 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:   "autoConfigure" : false,
21/07/08 11:01:14.874 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:   "trustCerts" : false,
21/07/08 11:01:14.874 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:   "disableHostnameVerification" : false,
21/07/08 11:01:14.874 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:   "masterUrl" : "https://192.168.64.127:8443/",
21/07/08 11:01:14.875 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:   "apiVersion" : "v1",
21/07/08 11:01:14.875 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:   "namespace" : "a0993113b8084cd3868b3052e698b17f",
21/07/08 11:01:14.875 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:   "caCertFile" : "/Users/attilazsoltpiros/.minikube/ca.crt",
21/07/08 11:01:14.875 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:   "clientCertFile" : "/Users/attilazsoltpiros/.minikube/profiles/minikube/client.crt",
21/07/08 11:01:14.875 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:   "clientKeyFile" : "/Users/attilazsoltpiros/.minikube/profiles/minikube/client.key",
21/07/08 11:01:14.875 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:   "clientKeyAlgo" : "RSA",
21/07/08 11:01:14.875 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:   "clientKeyPassphrase" : "changeit",
21/07/08 11:01:14.875 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:   "watchReconnectInterval" : 1000,
21/07/08 11:01:14.875 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:   "watchReconnectLimit" : -1,
21/07/08 11:01:14.875 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:   "connectionTimeout" : 10000,
21/07/08 11:01:14.875 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:   "uploadConnectionTimeout" : 10000,
21/07/08 11:01:14.875 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:   "uploadRequestTimeout" : 120000,
21/07/08 11:01:14.875 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:   "requestRetryBackoffLimit" : 3,
21/07/08 11:01:14.875 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:   "requestRetryBackoffInterval" : 1000,
21/07/08 11:01:14.875 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:   "requestTimeout" : 10000,
21/07/08 11:01:14.875 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:   "rollingTimeout" : 900000,
21/07/08 11:01:14.875 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:   "scaleTimeout" : 600000,
21/07/08 11:01:14.875 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:   "loggingInterval" : 20000,
21/07/08 11:01:14.875 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:   "websocketTimeout" : 5000,
21/07/08 11:01:14.875 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:   "websocketPingInterval" : 0,
21/07/08 11:01:14.875 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:   "impersonateGroups" : [ null ],
21/07/08 11:01:14.875 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:   "impersonateExtras" : { },
21/07/08 11:01:14.875 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:   "http2Disable" : false,
21/07/08 11:01:14.875 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:   "noProxy" : [ ],
21/07/08 11:01:14.875 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:   "tlsVersions" : [ "TLS_1_2" ],
21/07/08 11:01:14.875 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:   "errorMessages" : {
21/07/08 11:01:14.875 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:     "401" : "Unauthorized! Token may have expired! Please log-in again.",
21/07/08 11:01:14.875 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:     "403" : "Forbidden! User minikube doesn't have permission."
21/07/08 11:01:14.875 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:   }
21/07/08 11:01:14.875 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils: }
```

The output above contains the expected values:
```
21/07/08 11:01:14.875 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:   "requestRetryBackoffLimit" : 3,
21/07/08 11:01:14.875 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils:   "requestRetryBackoffInterval" : 1000,
```

Closes #33261 from attilapiros/SPARK-35334.

Authored-by: attilapiros <piros.attila.zsolt@gmail.com>
Signed-off-by: attilapiros <piros.attila.zsolt@gmail.com>
2021-07-13 13:46:18 +02:00
Kousuke Saruta 8e92ef825a [SPARK-35749][SPARK-35773][SQL] Parse unit list interval literals as tightest year-month/day-time interval types
### What changes were proposed in this pull request?

This PR allows the parser to parse unit list interval literals like `'3' day '10' hours '3' seconds` or `'8' years '3' months` as `YearMonthIntervalType` or `DayTimeIntervalType`.
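
As an illustration only, a minimal PySpark sketch of the intended behavior, assuming an active `SparkSession` named `spark` (the literals are the ones quoted above; the comments describe expectations, not output copied from this PR):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# After this change, unit list literals should be parsed as the tightest
# year-month/day-time interval type instead of CalendarIntervalType.
spark.sql("SELECT INTERVAL '3' day '10' hours '3' seconds AS dt").printSchema()
# dt is expected to be a day-time interval type

spark.sql("SELECT INTERVAL '8' years '3' months AS ym").printSchema()
# ym is expected to be a year-month interval type
```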

### Why are the changes needed?

For ANSI compliance.

### Does this PR introduce _any_ user-facing change?

Yes. I noted the following things in the `sql-migration-guide.md`.

* Unit list interval literals are parsed as `YearMonthIntervalType` or `DayTimeIntervalType` instead of `CalendarIntervalType`.
* `WEEK`, `MILLISECOND`, `MICROSECOND` and `NANOSECOND` are not valid units for unit list interval literals.
* Units of year-month and day-time cannot be mixed like `1 YEAR 2 MINUTES`.

### How was this patch tested?

New tests and modified tests.

Closes #32949 from sarutak/day-time-multi-units.

Authored-by: Kousuke Saruta <sarutak@oss.nttdata.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
2021-07-13 18:55:04 +08:00
Gengliang Wang 01ddaf3918 [SPARK-36119][SQL] Add new SQL function to_timestamp_ltz
### What changes were proposed in this pull request?

Add new SQL function `to_timestamp_ltz`
syntax:
```
to_timestamp_ltz(timestamp_str_column[, fmt])
to_timestamp_ltz(timestamp_column)
to_timestamp_ltz(date_column)
```
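
For illustration, a hedged usage sketch through PySpark, assuming an active `SparkSession` named `spark`; the input values are made up for this example:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# String input, with and without an explicit format pattern.
spark.sql("SELECT to_timestamp_ltz('2021-07-13 17:37:44')").show()
spark.sql("SELECT to_timestamp_ltz('13-07-2021', 'dd-MM-yyyy')").show()

# Date input: the result is expected to be a timestamp with local time zone.
spark.sql("SELECT to_timestamp_ltz(DATE'2021-07-13')").show()
```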

### Why are the changes needed?

Since the result of `to_timestamp` now follows the SQL configuration `spark.sql.timestampType` and there is already a SQL function `to_timestamp_ntz`, we need a new function `to_timestamp_ltz` to construct timestamp with local time zone values.

### Does this PR introduce _any_ user-facing change?

Yes, a new function for constructing timestamp with local time zone values

### How was this patch tested?

Unit test

Closes #33318 from gengliangwang/to_timestamp_ltz.

Authored-by: Gengliang Wang <gengliang@apache.org>
Signed-off-by: Gengliang Wang <gengliang@apache.org>
2021-07-13 17:37:44 +08:00
allisonwang-db 4f760f2b1f [SPARK-35551][SQL] Handle the COUNT bug for lateral subqueries
### What changes were proposed in this pull request?
This PR modifies `DecorrelateInnerQuery` to handle the COUNT bug for lateral subqueries. Similar to SPARK-15370, rewriting lateral subqueries as joins can change the semantics of the subquery and lead to incorrect answers.

However, we can't reuse the existing code that handles the COUNT bug for correlated scalar subqueries, because it assumes the subquery has a specific shape (either Filter + Aggregate or Aggregate as the root node). Instead, this PR proposes a more generic way to handle the COUNT bug: if an Aggregate is subject to the COUNT bug, we insert a left outer domain join between the outer query and the aggregate with an `alwaysTrue` marker and rewrite the final result conditioned on the marker. For example:

```sql
-- t1: [(0, 1), (1, 2)]
-- t2: [(0, 2), (0, 3)]
select * from t1 left outer join lateral (select count(*) from t2 where t2.c1 = t1.c1)
```

Without count bug handling, the query plan is
```
Project [c1#44, c2#45, count(1)#53L]
+- Join LeftOuter, (c1#48 = c1#44)
   :- LocalRelation [c1#44, c2#45]
   +- Aggregate [c1#48], [count(1) AS count(1)#53L, c1#48]
      +- LocalRelation [c1#48]
```
and the answer is wrong:
```
+---+---+--------+
|c1 |c2 |count(1)|
+---+---+--------+
|0  |1  |2       |
|1  |2  |null    |
+---+---+--------+
```

With the count bug handling:
```
Project [c1#1, c2#2, count(1)#10L]
+- Join LeftOuter, (c1#34 <=> c1#1)
   :- LocalRelation [c1#1, c2#2]
   +- Project [if (isnull(alwaysTrue#32)) 0 else count(1)#33L AS count(1)#10L, c1#34]
      +- Join LeftOuter, (c1#5 = c1#34)
         :- Aggregate [c1#1], [c1#1 AS c1#34]
         :  +- LocalRelation [c1#1]
         +- Aggregate [c1#5], [count(1) AS count(1)#33L, c1#5, true AS alwaysTrue#32]
            +- LocalRelation [c1#5]
```
and we have the correct answer:
```
+---+---+--------+
|c1 |c2 |count(1)|
+---+---+--------+
|0  |1  |2       |
|1  |2  |0       |
+---+---+--------+
```

### Why are the changes needed?
Fix a correctness bug with lateral join rewrite.

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
Added SQL query tests. The results are consistent with Postgres' results.

Closes #33070 from allisonwang-db/spark-35551-lateral-count-bug.

Authored-by: allisonwang-db <allison.wang@databricks.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
2021-07-13 17:35:03 +08:00
yi.wu f8a80c42ce [SPARK-36048][TEST][CORE] Fix HealthTrackerSuite.allExecutorAndHostIds
### What changes were proposed in this pull request?

Fix the executor ids declared in `allExecutorAndHostIds`.

### Why are the changes needed?

Currently, `HealthTrackerSuite.allExecutorAndHostIds` is declared incorrectly, so executor exclusion isn't tested correctly.

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

Pass existing tests in `HealthTrackerSuite`.

Closes #33262 from Ngone51/fix-healthtrackersuite.

Authored-by: yi.wu <yi.wu@databricks.com>
Signed-off-by: yi.wu <yi.wu@databricks.com>
2021-07-13 16:41:30 +08:00
Liang-Chi Hsieh 201566cdd5 [SPARK-36109][SS][TEST] Check data after adding data to topic in KafkaSourceStressSuite
### What changes were proposed in this pull request?

This patch proposes to check the data right after adding it to a topic in `KafkaSourceStressSuite`.

### Why are the changes needed?

The test logic in `KafkaSourceStressSuite` is not stable. For example, https://github.com/apache/spark/runs/3049244904.

If we add data to a topic and then delete the topic before checking the data, the expected answer differs from the data retrieved from the sink.

### Does this PR introduce _any_ user-facing change?

No

### How was this patch tested?

Existing tests.

Closes #33311 from viirya/stream-assert.

Authored-by: Liang-Chi Hsieh <viirya@gmail.com>
Signed-off-by: Dongjoon Hyun <dongjoon@apache.org>
2021-07-13 01:21:32 -07:00
Kousuke Saruta c46342e3d0 [SPARK-36110][BUILD] Upgrade SBT to 1.5.5
### What changes were proposed in this pull request?

This PR upgrades SBT to `1.5.5`.

### Why are the changes needed?

SBT `1.5.5` was released, which includes 16 improvements/bug fixes.
https://github.com/sbt/sbt/releases/tag/v1.5.5

* Fixes remote caching not managing resource files
* Fixes launcher causing NoClassDefFoundError when launching sbt 1.4.0 - 1.4.2
* Fixes cross-Scala suffix conflict warning involving _3
* Fixes binaryScalaVersion of 3.0.1-SNAPSHOT
* Fixes carriage return in supershell progress state
* Fixes IntegrationTest configuration not tagged as test in BSP
* Fixes BSP task error handling
* Fixes handling of invalid range positions returned by Javac
* Fixes local class analysis
* Adds buildTarget/resources support for BSP
* Adds build.sbt support for BSP import
* Tracks source dependencies using OriginalTreeAttachments in Scala 2.13
* Reduces overhead in Analysis protobuf deserialization
* Minimizes unnecessary information in signature analysis
* Enables compile-to-jar for local Javac
* Enables Zinc cycle reporting when Scalac is not invoked

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

CI.

Closes #33312 from sarutak/upgrade-sbt-1.5.5.

Authored-by: Kousuke Saruta <sarutak@oss.nttdata.com>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
2021-07-13 13:14:10 +09:00
Kousuke Saruta 47fd3173a5 [SPARK-36104][PYTHON][FOLLOWUP] Remove unused import "typing.cast"
### What changes were proposed in this pull request?

This is a follow-up PR for SPARK-36104 (#33307) that removes the unused import `typing.cast`.
After the change in #33307, the Python linter fails:
```
   ./dev/lint-python
  shell: sh -e {0}
  env:
    LC_ALL: C.UTF-8
    LANG: C.UTF-8
    pythonLocation: /__t/Python/3.6.13/x64
    LD_LIBRARY_PATH: /__t/Python/3.6.13/x64/lib
starting python compilation test...
python compilation succeeded.

starting black test...
black checks passed.

starting pycodestyle test...
pycodestyle checks passed.

starting flake8 test...
flake8 checks failed:
./python/pyspark/pandas/data_type_ops/num_ops.py:19:1: F401 'typing.cast' imported but unused
from typing import cast, Any, Union
^
1     F401 'typing.cast' imported but unused
```
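
The fix itself is a one-line import change; a minimal sketch based on the flake8 output above (the rest of the file's contents are assumed):

```python
# Before, as reported by flake8 in python/pyspark/pandas/data_type_ops/num_ops.py:
# from typing import cast, Any, Union   # `cast` imported but unused

# After: drop the unused name so the linter passes.
from typing import Any, Union
```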

### Why are the changes needed?

To recover CI.

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

CI.

Closes #33315 from sarutak/followup-SPARK-36104.

Authored-by: Kousuke Saruta <sarutak@oss.nttdata.com>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
2021-07-13 13:13:35 +09:00
Wenchen Fan ae6199af44 Revert "[SPARK-35253][SPARK-35398][SQL][BUILD] Bump up the janino version to v3.1.4"
### What changes were proposed in this pull request?

This PR reverts https://github.com/apache/spark/pull/32455 and its followup https://github.com/apache/spark/pull/32536 , because the new janino version has a bug that is not fixed yet: https://github.com/janino-compiler/janino/pull/148

### Why are the changes needed?

avoid regressions

### Does this PR introduce _any_ user-facing change?

no

### How was this patch tested?

existing tests

Closes #33302 from cloud-fan/revert.

Authored-by: Wenchen Fan <wenchen@databricks.com>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
2021-07-13 12:14:08 +09:00
Xinrong Meng 5afc27f899 [SPARK-36104][PYTHON] Manage InternalField in DataTypeOps.neg/abs
### What changes were proposed in this pull request?
Manage InternalField for DataTypeOps.neg/abs.

### Why are the changes needed?
The Spark data type and nullability must stay the same as the original after `DataTypeOps.neg`/`abs`.
We should manage `InternalField` for this case.
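
A hedged illustration of the expected behavior, not taken from the PR's tests, assuming pandas-on-Spark is available:

```python
import pyspark.pandas as ps

psser = ps.Series([1, -2, 3])

# With InternalField managed, negation and abs should keep the original
# Spark data type and nullability rather than re-inferring them.
print((-psser).dtype)        # expected: int64, same as psser.dtype
print(psser.abs().dtype)     # expected: int64, same as psser.dtype
```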

### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?
Unit tests.

Closes #33307 from xinrong-databricks/internalField.

Authored-by: Xinrong Meng <xinrong.meng@databricks.com>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
2021-07-13 12:07:05 +09:00